Celery with AWS SQS queue suddenly cannot connect - celery

For about a year this has worked problem-free, but it seemingly stopped working in the past week or two, and I'm not sure where to look.
To start celery I just run this command:
PYTHONPATH=[path to project]:. celery -A update.tasks worker -Q update_local --concurrency 2 -E
On AWS I have an SQS queue named update_local.
In the celery config.py file I have BROKER_URL = sqs://[AWS:KEYS]@ for the account that has the update_local SQS queue.
When I run it now, I get:
[2020-03-10 22:26:23,789: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 61] Connection refused.
Trying again in 2.00 seconds...
Over and over again.
I was able to get it to run error-free, but I have to specify the config file with the --config flag. I'm not sure why it no longer finds the file by itself.
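For reference, this is roughly how the app can be pointed at the settings module explicitly so the worker doesn't fall back to the default amqp://guest@127.0.0.1:5672// transport; the module names come from the question, everything else is an assumption about how the app is set up:

# update/tasks.py -- sketch only, assuming the app object lives where -A update.tasks points
from celery import Celery

app = Celery('update')

# Explicitly load BROKER_URL (the sqs:// URL) from config.py; if the config is
# never picked up, celery falls back to the default local amqp broker, which is
# exactly the connection error shown above.
app.config_from_object('config')

Keeping the --config flag on the command line, as you already found, achieves the same thing at startup time.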

Related

Error: Unable to load celery application. The module main was not found. Supervisor + celery

I can't get supervisor and celery to start together, because celery does not see my main module.
/etc/supervisor/conf.d/celery.conf
[program:celery]
command=/home/ubuntu/django/.env/bin/celery -A main worker --app=main --loglevel=info
user=root
stdout_logfile=/home/ubuntu/django/deployment/logs/celery.log
stderr_logfile=/home/ubuntu/django/deployment/logs/celery_main.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600
django/
  .env/
  main/
    settings.py
    celery.py
    ...
  orders/
  shop/
If I run this command in the virtual environment from my project directory, everything works fine. But when it is started through supervisor, it does not. Why? In my logs celery says: Error: Unable to load celery application. The module main was not found.
What I don't see in your configuration file is the working directory; that could explain why the celery command cannot find the module even though it works when you run it manually.
Try adding:
directory=/home/ubuntu/django
to your configuration file and see if this will fix the error.
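To make the effect of that line concrete, here is a minimal Python sketch of what celery effectively needs in order to resolve the main module; the path is the one from the answer, and the sys.path detail is an assumption about celery's import-from-working-directory behaviour:

import os
import sys

project_root = '/home/ubuntu/django'   # the value given to directory= above
os.chdir(project_root)                 # what supervisor does when directory= is set
sys.path.insert(0, project_root)       # the -A module is resolved relative to the working directory

import main                            # raises ModuleNotFoundError when started from elsewhere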

Cannot run Kafka Producer due to socket.error: [Errno 48] Address already in use

I have a local docker-machine and I am trying to run a Kafka producer written in Python. However, it gives socket.error: [Errno 48] Address already in use and stops. Any help is appreciated!
(Screenshots were attached in the original post: the error message, the docker-machine, the images on the docker-machine, and the running containers.)
Command to run the producer
$ python producer.py
P.S. I don't think anything is wrong in producer.py, because I ran it successfully a couple of days ago and haven't changed anything since.
It turns out that I already had a process running which was using port 9092.
sudo lsof -i:9092
So after killing it, I could run my producer successfully again:
kill 28987
But I remember shutting down the producer the last time I used it; I wonder how it was still open...
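If it happens again, a quick way to check from Python whether the port is already taken before starting the producer; a minimal sketch (9092 is the port from the error above, the rest is generic socket code):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # bind() raises OSError "[Errno 48] Address already in use" (macOS; Errno 98 on Linux)
    # when another process is already listening on the port.
    sock.bind(('0.0.0.0', 9092))
    print('port 9092 is free')
except OSError as exc:
    print('port 9092 is in use:', exc)
finally:
    sock.close()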

Limit number of processes in Celery with supervisor

I'm running Celery in a small instance in AWS Elastic Beanstalk.
However, when I do top, I see there are 3 celery processes running. I want to have only one.
I'm running this using supervisor and in my config file I have (only showing relevant lines):
[program:celeryd]
directory=/opt/python/current/app/src
command=/opt/python/run/venv/bin/celery worker -A ...
user=celery
numprocs=1
killasgroup=true
I've also followed the suggestion in this answer and created a file /etc/default/celeryd with this content:
# Extra arguments to celeryd
CELERYD_OPTS="--concurrency=1"
After restarting Celery (with supervisorctl -c config-file-path.conf restart celeryd), I see the 3 processes again. Any ideas? Thanks!
You are starting the worker with the celery command, so changing /etc/default/celeryd won't have any effect on it. Moreover, celeryd is deprecated.
When a worker is started, celery launches a main process and n (the concurrency value) subprocesses.
You can start the worker with
[program:celery]
command=/opt/python/run/venv/bin/celery worker -c 1 -A foo
This will start a worker with a concurrency of 1, so there will be 2 processes.
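If you'd rather not rely on the command-line flag, the same limit can be set in the Celery configuration; a minimal sketch using the Celery 4+ setting name ('foo' is the placeholder app name from the answer above):

from celery import Celery

app = Celery('foo')                 # placeholder app name, as in the answer
app.conf.worker_concurrency = 1     # same effect as -c 1: one pool child plus the main process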

Celery multi doesn't start workers

I'm trying to start multiple workers on my server with the command from the celery docs: celery multi start Leslie -E.
But it only shows:
celery multi v3.1.17 (Cipater)
> Starting nodes...
> Leslie@test: OK
and exits.
And there are no workers in the output of ps aux | grep celery.
I also tried to start it on my local machine and it works fine; I see 5 workers as expected.
So, what is the reason?
I had unsatisfactory results with the celery multi command. I think that supervisord works a lot better. You can find an example supervisord config file here.

celery stdout/stderr logging while running under supervisor

I'm running a celery worker with some concurrency level (e.g. 4) under supervisord:
[program:wgusf-wotwgs1.celery]
command=/home/httpd/wgusf-wotwgs1/app/bin/celery -A roles.frontend worker -c 4 -l info
directory=/home/httpd/wgusf-wotwgs1/app/src
numprocs=1
stdout_logfile=/home/httpd/wgusf-wotwgs1/logs/supervisor_celery.log
stderr_logfile=/home/httpd/wgusf-wotwgs1/logs/supervisor_celery.log
autostart=true
autorestart=true
startsecs=3
killasgroup=true
stopsignal=QUIT
user=wgusf-wotwgs1
The problem is this: some of the stdout messages from the worker (about receiving tasks and executing them successfully) are missing from the logfile. But when running the celery worker with the same concurrency level from a shell, everything seems fine: messages steadily appear for all the tasks.
Any ideas how to fix this behavior?
I think it's because, by default, celery reports things to stderr instead of stdout.
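One way to make per-task messages show up in the worker log regardless of how stdout is captured is to log through Celery's task logger instead of printing; a minimal sketch (the app name is taken from the -A flag above, the task itself is made up):

from celery import Celery
from celery.utils.log import get_task_logger

app = Celery('roles.frontend')      # broker/result settings omitted for brevity
logger = get_task_logger(__name__)

@app.task
def do_work():
    # Goes through the worker's logging setup rather than bare stdout, so it ends up
    # wherever the worker log (stderr by default) is being written.
    logger.info('task executed')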