Error: Unable to load celery application. The module main was not found (Supervisor + celery)

I can't start celery under supervisor because celery does not see my module.
/etc/supervisor/conf.d/celery.conf:
[program:celery]
command=/home/ubuntu/django/.env/bin/celery -A main worker --app=main --loglevel=info
user=root
stdout_logfile=/home/ubuntu/django/deployment/logs/celery.log
stderr_logfile=/home/ubuntu/django/deployment/logs/celery_main.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600
django/
  .env/
  main/
    settings.py
    celery.py
    ...
  orders/
  shop/
If I run this command inside the virtual environment from my project directory, everything works fine. But when supervisor runs it, it fails. Why? In my logs celery says: Error: Unable to load celery application. The module main was not found.

What I don't see in your configuration file is the working directory; that would explain why the celery command cannot find the module under supervisor even though it works when you run it manually.
Try adding:
directory=/home/ubuntu/django
to your configuration file and see if that fixes the error.
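With that change (and dropping the duplicated --app=main flag, since -A main already names the app), the program section would look like this:
[program:celery]
command=/home/ubuntu/django/.env/bin/celery -A main worker --loglevel=info
directory=/home/ubuntu/django
user=root
stdout_logfile=/home/ubuntu/django/deployment/logs/celery.log
stderr_logfile=/home/ubuntu/django/deployment/logs/celery_main.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600
Here directory points at the folder that contains the main package, so the worker's import of main resolves the same way it does when you launch it by hand.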

Related

Check celery config with command line

I have celery running in a Docker container and I want to check that the option CELERY_TASK_RESULT_EXPIRES = '3600' has been applied.
I tried using celery inspect conf and celery inspect stats, but the commands never return. Other than that, celery is running fine and doing its work.
You can get that from celery inspect. Try this:
celery -A app inspect report --timeout 10
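The same --timeout flag should also work with the inspect subcommands from the question that were hanging, for example (assuming the app is importable as app, as above):
celery -A app inspect conf --timeout 10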
I found Flower. It is installed with
pip install flower
and started with
flower -A celery-app-name --port=5555
Celery can then be accessed via Flower's REST API. The following will give the workers' config:
curl -w "\n" http://localhost:5555/api/workers
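If the goal is just to confirm that CELERY_TASK_RESULT_EXPIRES was picked up, another option is to load the app and read its configuration directly, without going through a worker at all. A minimal sketch, assuming the Celery instance is importable as app from a tasks module (adjust the import to your project):
# check_conf.py - minimal sketch; the import path is an assumption
from tasks import app

# CELERY_TASK_RESULT_EXPIRES maps to result_expires in Celery's unified settings
print(app.conf.result_expires)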

Add new service to existing supervisord process

Say I already have a supervisord process running on my machine. How can I add a new service/process for supervisord to monitor? For example, assume I have this simple .conf file:
run-suman-daemon.conf
[program:suman-daemon]
command=/Users/alexamil/WebstormProjects/suman/cli/suman-daemon.sh
I tried:
supervisord add run-suman-daemon.conf
but I get this error:
Error: positional arguments are not supported: ['add', 'sup.conf']
For help, use /usr/local/bin/supervisord -h
The supervisord daemon is running and I can connect to it with supervisorctl.
You can use the following commands to re-read the configuration and start any new processes:
supervisorctl reread
supervisorctl update
If you want to add a process dynamically, add this section to your supervisord.conf:
[include]
files = dir-with-your-conf-files/*.conf
All conf files placed in dir-with-your-conf-files will be loaded by the main config file. You can then add, remove, or change files in that directory (for example, create symbolic links) and run:
# reread configuration
supervisorctl reread
# start/stop new/old processes
supervisorctl update
I think you should call supervisorctl update first.
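For completeness, a slightly fuller version of the program section might look like this before dropping it into the include directory (the log paths here are assumptions):
[program:suman-daemon]
command=/Users/alexamil/WebstormProjects/suman/cli/suman-daemon.sh
autostart=true
autorestart=true
stdout_logfile=/usr/local/var/log/suman-daemon.log
stderr_logfile=/usr/local/var/log/suman-daemon.err.log
After placing the file in the include directory, supervisorctl reread followed by supervisorctl update will pick it up.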

celery ImportError: No module named 'tasks'

I am trying to learn how to use celery so I can later integrate it into my Flask app. I am just trying to run the basic example found in the Celery docs. I have created a file called task.py, and from the folder where task.py exists I am running celery -A tasks worker --loglevel=info, but it gives an error. I can't figure out what is wrong.
from celery import Celery

app = Celery('tasks', broker='amqp://localhost')

@app.task
def add(x, y):
    return x + y
The error I am seeing:
celery -A tasks worker --loglevel=info
ImportError: No module named 'tasks'
Try executing the command from the application folder level. If your tasks.py is inside flask_app/configs/tasks.py, then run the following command from inside the flask_app folder:
celery worker --app=configs.tasks:app --loglevel=info
If you want to daemonize celery, use the following command:
celery multi start worker --app=configs.tasks:app --loglevel=info
(multi start will daemonize celery.)
Be sure to activate the virtualenv before running the command if the application runs inside one.
I am successfully running celery in Django with django-celery and had faced the same issue.
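One more thing worth double-checking in the question itself: the file is described as task.py, but the worker is started with -A tasks. The module name passed to -A must match the file name, so with a file named task.py the command would be:
celery -A task worker --loglevel=info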

Using supervisor to run a flask app

I am deploying my Flask application on WebFaction. I am using Flask-SocketIO, which has led me to deploy it as a custom websocket app (listening on a port). Flask-SocketIO's documentation instructs me to deploy my app by starting the server with the call socketio.run(app, port=<port_listening_on>) in my main Python script. I have installed eventlet on the server, so socketio.run should run the app on the eventlet web server.
I can run python <app>.py and all works great: the server runs, I can view it at the domain, sockets work, etc. My problems start when I attempt to turn this into a long-running process. I've been advised to use supervisor, which I have installed and configured on my webapp following these instructions: https://community.webfaction.com/questions/18483/how-do-i-install-and-use-supervisord-to-control-long-running-processes
The problem is that once I actually add the command for supervisor to run my app, it errors with:
Exited too quickly
My log states the above error as well as:
(exit status 1; not expected)
In my supervisor config file I currently have the following program config:
[program:<prog_name>]
command=/usr/bin/python2.7 /home/<user>/webapps/<app_name>/<app>.py
autostart=true
autorestart=true
I have tried removing and adding settings, but it all leads to the same FATAL error.
This is what part of my supervisor config looks like; I'm using gunicorn to run my Flask app.
I'm also logging errors to a file from the supervisor config; if you do that, it might help you see why your app isn't starting correctly.
[program:gunicorn]
command=/juzten/venv/bin/gunicorn run:app --preload -p rocket.pid -b 0.0.0.0:5000 --access-logfile "-"
directory=/juzten/app-folder-name
user=juzten
autostart=true
autorestart=unexpected
stdout_logfile=/juzten/gunicorn.log
stderr_logfile=/juzten/gunicorn.log
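One caveat for this particular app: since it uses Flask-SocketIO with eventlet, Flask-SocketIO's deployment docs call for running gunicorn with the eventlet worker class and a single worker. A sketch of how that program section might look (the gunicorn path and placeholder names are assumptions in the style of the question):
[program:<prog_name>]
command=/home/<user>/webapps/<app_name>/env/bin/gunicorn --worker-class eventlet -w 1 <app>:app --bind 0.0.0.0:<port>
directory=/home/<user>/webapps/<app_name>
autostart=true
autorestart=true
stdout_logfile=/home/<user>/webapps/<app_name>/gunicorn.log
stderr_logfile=/home/<user>/webapps/<app_name>/gunicorn.log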

Supervisord can't stop celery, how to do the same using Monit

I can't stop my celery worker using supervisord. In the config file it looks like this:
command=/usr/local/myapp/src/manage.py celery worker --concurrency=1 --loglevel=INFO
and when I try to stop it using the following command:
sudo service supervisord stop
It reports that the worker has stopped, while it actually has not.
One more problem: when you restart a program outside supervisord's scope, supervisord totally loses control over that program, because of the parent-child relationship between supervisord and its child processes.
My question is: how do I run celery workers using Monit?
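A minimal sketch of a Monit check for this worker, assuming the worker daemonizes itself and writes a pidfile via celery's --detach and --pidfile options (the pidfile path and the check name are assumptions, not taken from the question):
check process celery_worker with pidfile /var/run/celery/worker.pid
  start program = "/usr/local/myapp/src/manage.py celery worker --concurrency=1 --loglevel=INFO --detach --pidfile=/var/run/celery/worker.pid"
  stop program = "/bin/sh -c 'kill $(cat /var/run/celery/worker.pid)'"
Because Monit tracks the process through the pidfile rather than a parent-child relationship, restarting the worker outside Monit does not make it lose control of the process the way supervisord does.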