Celery works, but with Flower it doesn't - celery

I have installed Celery, RabbitMQ and Flower. I am able to browse to the Flower port. I have the following simple worker that I can attach to Celery and call from a Python program:
# -*- coding: utf-8 -*-
"""
Created on Sat Dec 12 16:37:33 2015

@author: idf
"""
from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def add(x, y):
    return x + y
This program calls it:
# -*- coding: utf-8 -*-
"""
Created on Sat Dec 12 16:40:16 2015

@author: idf
"""
from tasks import add

add.delay(36, 5)
I start celery like this:
idf@DellInsp:~/Documents/Projects/python3$ celery -A tasks worker --loglevel=info
[2015-12-12 19:22:46,223: WARNING/MainProcess] /home/idf/anaconda3/lib/python3.5/site-packages/celery/apps/worker.py:161: CDeprecationWarning:
Starting from version 3.2 Celery will refuse to accept pickle by default.
The pickle serializer is a security concern as it may give attackers
the ability to execute any command. It's important to secure
your broker from unauthorized access when using pickle, so we think
that enabling pickle should require a deliberate action and not be
the default choice.
If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
You must only enable the serializers that you will actually use.
warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
-------------- celery@DellInsp v3.1.19 (Cipater)
---- **** -----
--- * *** * -- Linux-3.19.0-39-lowlatency-x86_64-with-debian-jessie-sid
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: tasks:0x7f61485e61d0
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results: disabled
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. tasks.add
[2015-12-12 19:22:46,250: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2015-12-12 19:22:46,267: INFO/MainProcess] mingle: searching for neighbors
[2015-12-12 19:22:47,275: INFO/MainProcess] mingle: all alone
[2015-12-12 19:22:47,286: WARNING/MainProcess] celery@DellInsp ready.
[2015-12-12 19:22:47,288: INFO/MainProcess] Received task: tasks.add[3c0e5317-ac53-465e-a8fd-3e2861e31db6]
[2015-12-12 19:22:47,289: INFO/MainProcess] Task tasks.add[3c0e5317-ac53-465e-a8fd-3e2861e31db6] succeeded in 0.00045899399992777035s: 41
^C
worker: Hitting Ctrl+C again will terminate all running tasks!
worker: Warm shutdown (MainProcess)
Notice the correct output of 41.
However, if I pass in the flower parameter, nothing happens when I execute the call, and I don't see any tasks on the Flower website either.
idf@DellInsp:~/Documents/Projects/python3$ celery flower -A tasks worker --loglevel=info
[I 151212 19:23:59 command:113] Visit me at http://localhost:5555
[I 151212 19:23:59 command:115] Broker: amqp://guest:**@localhost:5672//
[I 151212 19:23:59 command:118] Registered tasks:
['celery.backend_cleanup',
'celery.chain',
'celery.chord',
'celery.chord_unlock',
'celery.chunks',
'celery.group',
'celery.map',
'celery.starmap',
'tasks.add']
[I 151212 19:23:59 mixins:231] Connected to amqp://guest:**@127.0.0.1:5672//
[W 151212 19:24:01 control:44] 'stats' inspect method failed
[W 151212 19:24:01 control:44] 'active_queues' inspect method failed
[W 151212 19:24:01 control:44] 'registered' inspect method failed
[W 151212 19:24:01 control:44] 'scheduled' inspect method failed
[W 151212 19:24:01 control:44] 'active' inspect method failed
[W 151212 19:24:01 control:44] 'reserved' inspect method failed
[W 151212 19:24:01 control:44] 'revoked' inspect method failed
[W 151212 19:24:01 control:44] 'conf' inspect method failed
^Cidf@DellInsp:~/Documents/Projects/python3$
Finally, I am not sure whether this is an error, but my Flower website does not have a Workers tab.

I am not sure I understood correctly, but are you trying to run Flower and the worker together in a single command? Flower does not process tasks. You must run the worker and Flower separately; Flower is only a monitoring tool.
Run celery:
celery -A tasks worker --loglevel=info
Open another shell and run flower:
celery -A tasks flower --loglevel=info
Then go to http://localhost:5555 and you will see your worker. Of course, you must run some tasks if you want to see any activity.
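A side note on the amqp:// URLs used above: the credentials come before an @ separator (amqp://user:password@host//), so if a RabbitMQ password happens to contain reserved characters such as @ or #, they must be percent-encoded or the URL will be misparsed. A minimal stdlib sketch; the credentials here are made up purely for illustration:

```python
from urllib.parse import quote

# Hypothetical credentials: a raw '@' or '#' in the password would be
# misread as the host separator or the URL fragment marker, respectively.
user = "guest"
password = "p@ss#word"

broker_url = "amqp://%s:%s@localhost:5672//" % (
    quote(user, safe=""),
    quote(password, safe=""),
)
print(broker_url)  # amqp://guest:p%40ss%23word@localhost:5672//
```

Celery's URL parser should decode the percent-escapes back to the original credentials when connecting.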

Faced the same issue. Here is how I got it working, as a docker-compose snippet:
rabbitmq:
  image: rabbitmq:3-management
flower:
  image: mher/flower
  ports:
    - 5555:5555
  command:
    - "celery"
    - "--broker=amqp://guest@rabbitmq:5672//"
    - "flower"
    - "--broker_api=http://guest:guest@rabbitmq:15672/api//"
  depends_on:
    - rabbitmq

Related

Celery worker exited prematurely on restart using systemd

I'm using Celery with systemd. I noticed that most times on restart, I lose the workers mid-task. From the celery multi documentation, it seems like celery multi stopwait should wait for the tasks to finish.
I got the following error on restart:
Process "ForkPoolWorker-10" pid:16902 exited with "signal 15 (SIGTERM)"
celery.conf
[Unit]
Description=Celery background worker
After=network.target
[Service]
Type=forking
User=celery
Group=celery
WorkingDirectory=/src
ExecStart=celery multi start worker -A main.celery -Q celery --logfile=/data/celery.log --loglevel=info --concurrency=10 --pidfile=/var/run/celery/%%n.pid
ExecStop=celery multi stopwait worker --pidfile=/var/run/celery/%%n.pid
[Install]
WantedBy=multi-user.target
I also read the systemd documentation: systemd should wait at least 90 seconds for the tasks to complete before forcibly killing the process. Yet I see this error within less than 10 seconds of running the restart command.
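For what it's worth, the 90-second figure is systemd's default TimeoutStopSec, and the shutdown behaviour can be set explicitly per unit. A sketch of the relevant [Service] keys (the values are illustrative, not taken from the unit above):

```
[Service]
; How long systemd waits after ExecStop before escalating to SIGKILL
TimeoutStopSec=600
; mixed = SIGTERM goes only to the main process at stop time, so celery can
; warm-shut its pool itself; SIGKILL at timeout still hits the whole cgroup.
; The default, control-group, signals every child (including pool workers).
KillMode=mixed
```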
What am I doing wrong?
Using celery version: 5.2.2 (dawn-chorus)

Daphne keeps exiting without any error under supervisor

I am running daphne under supervisord, following the official docs. It keeps failing for some reason, without any clear error.
Below is the supervisor log:
2021-07-18 15:35:01,110 INFO Creating socket tcp://localhost:8000
2021-07-18 15:35:01,112 INFO spawned: 'asgi0' with pid 15075
2021-07-18 15:35:01,116 INFO exited: asgi0 (exit status 0; not expected)
2021-07-18 15:35:01,116 INFO Closing socket tcp://localhost:8000
2021-07-18 15:35:04,128 INFO Creating socket tcp://localhost:8000
2021-07-18 15:35:04,130 INFO spawned: 'asgi0' with pid 15079
2021-07-18 15:35:04,134 INFO exited: asgi0 (exit status 0; not expected)
2021-07-18 15:35:04,134 INFO Closing socket tcp://localhost:8000
2021-07-18 15:35:05,136 INFO gave up: asgi0 entered FATAL state, too many start retries too quickly
Here's the supervisor config file
[fcgi-program:asgi]
# TCP socket used by Nginx backend upstream
socket=tcp://localhost:8000
# environment=DJANGO_SETTINGS_MODULE=mysite.dev_settings
# Directory where your site's project files are located
directory=/home/jaga/C42
# Each process needs to have a separate socket file, so we use process_num
# Make sure to update "mysite.asgi" to match your project name
command=bash -c 'source /etc/environment' && daphne -u /run/daphne/daphne%(process_num)d.sock --fd 10 --access-log - --proxy-headers mysite.asgi:application
# Number of processes to startup, roughly the number of CPUs you have
numprocs=1
# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d
# Automatically start and recover processes
autostart=true
autorestart=true
# Choose where you want your log to go
stdout_logfile=/var/log/asgi.log
redirect_stderr=true
Same happened when there were 4 processes.
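One possible culprit in the config above, offered as a guess rather than a confirmed fix: supervisord does not launch commands through a shell, so in the command= line the && and the daphne invocation fall outside the quoted bash -c string and become extra arguments to bash. bash then exits right after sourcing the file, which would match the "exit status 0; not expected" in the log. If the intent was to source the environment and then start daphne, both belong inside the quotes (keeping the original arguments otherwise unchanged):

```
command=bash -c 'source /etc/environment && daphne -u /run/daphne/daphne%(process_num)d.sock --fd 10 --access-log - --proxy-headers mysite.asgi:application'
```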

Supervisord (exit status 2; not expected) ubuntu

I'm trying to run Celery with Supervisord on Ubuntu, but am getting:
INFO exited: celery (exit status 2; not expected)
INFO spawned: 'celery' with pid 15517
INFO gave up: celery entered FATAL state, too many start retries too
quickly
This is the Supervisord script:
# cd into the directory and activate the virtual environment
celery -A [APP_NAME].celery worker -E -l info --concurrency=2
If I run this script manually, Celery starts up without any issues, but running sudo supervisorctl start celery fails with the error messages above.
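A common reason for a script like this to work interactively but fail under supervisord is that supervisord does not run commands through your login shell, so the cd and virtualenv-activation steps in a wrapper script may not behave as expected. One approach is to point command= directly at the virtualenv's celery binary; a hedged sketch of such a [program] section (all paths are placeholders, not taken from the question):

```
[program:celery]
; placeholder paths -- adjust to the real project and virtualenv locations
directory=/path/to/project
command=/path/to/venv/bin/celery -A [APP_NAME].celery worker -E -l info --concurrency=2
autostart=true
autorestart=true
stdout_logfile=/var/log/celery.log
redirect_stderr=true
```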

Windows issue when using pm2 deploy

I have read several issues and ideas on how to work with pm2 on a Windows machine and, believe it or not, on my previous machine I had it working very well ... then I had to reformat it and completely forgot what I did before :(
I've installed pm2 after npm, with the command: npm install pm2@latest -g
For deploying, under the Windows command line (cmd) I run:
pm2 deploy production
but I always get:
--> Deploying to production environment
--> on host 10.200.73.136
Deploy failed
Deploy failed
If I use Git Bash to run it, I get weird errors:
balex@DESKTOP-3LKNA7U /d/Gavekortet/gogift-mainsite (master)
$ pm2 deploy production
--> Deploying to production environment
--> on host 10.200.73.136
0 [main] sh 16020 C:\Program Files\Git\usr\bin\sh.exe: *** fatal error in forked process - fork: can't reserve memory for parent stack 0x600000 - 0x800000, (child has 0x400000 - 0x600000), Win32 error 487
660 [main] sh 16020 cygwin_exception::open_stackdumpfile: Dumping stack trace to sh.exe.stackdump
0 [main] sh 13588 fork: child -1 - forked process 16020 died unexpectedly, retry 0, exit code 0x100, errno 11
sh: fork: retry: No child processes
1007561 [main] sh 8808 C:\Program Files\Git\usr\bin\sh.exe: *** fatal error in forked process - fork: can't reserve memory for parent stack 0x600000 - 0x800000, (child has 0x400000 - 0x600000), Win32 error 487
1008780 [main] sh 8808 cygwin_exception::open_stackdumpfile: Dumping stack trace to sh.exe.stackdump
2266018 [main] sh 13588 fork: child -1 - forked process 8808 died unexpectedly, retry 0, exit code 0x100, errno 11
sh: fork: retry: No child processes
4274490 [main] sh 14924 C:\Program Files\Git\usr\bin\sh.exe: *** fatal error in forked process - fork: can't reserve memory for parent stack 0x600000 - 0x800000, (child has 0x400000 - 0x600000), Win32 error 487
4275199 [main] sh 14924 cygwin_exception::open_stackdumpfile: Dumping stack trace to sh.exe.stackdump
5799995 [main] sh 13588 fork: child -1 - forked process 14924 died unexpectedly, retry 0, exit code 0x100, errno 11
sh: fork: retry: No child processes
9804559 [main] sh 6320 C:\Program Files\Git\usr\bin\sh.exe: *** fatal error in forked process - fork: can't reserve memory for parent stack 0x600000 - 0x800000, (child has 0x400000 - 0x600000), Win32 error 487
9804986 [main] sh 6320 cygwin_exception::open_stackdumpfile: Dumping stack trace to sh.exe.stackdump
11142795 [main] sh 13588 fork: child -1 - forked process 6320 died unexpectedly, retry 0, exit code 0x100, errno 11
sh: fork: retry: No child processes
Any idea what I am missing (something I did before and can't remember exactly)? :(
P.S. The same steps on my Mac machine work flawlessly.
I had this same problem today. Make sure that you have the latest versions of pm2 (2.4.0 at the time of writing) and Git Bash.

Eclipse using MinGW make fails without msys

As background, I'm trying to build ChibiOS for STM32 on a Windows 8.1 host. This works perfectly well if I simply run make in the demo directory from the msys.bat command prompt, so the toolchain and paths should be fine.
Now, if I simply set up an Eclipse project, it tries to run make.exe directly and fails. The output is similar to running make (either make.exe or mingw32-make.exe) from a plain cmd prompt:
make all
0 [main] sh 5524 sync_with_child: child 2444(0x188) died before initialization with status code 0xC0000142
22 [main] sh 5524 sync_with_child: *** child state waiting for longjmp
/usr/bin/sh: fork: Resource temporarily unavailable
0 [main] sh 188 sync_with_child: child 1152(0x188) died before initialization with status code 0xC0000142
26 [main] sh 188 sync_with_child: *** child state waiting for longjmp
/usr/bin/sh: fork: Resource temporarily unavailable
0 [main] sh 5096 sync_with_child: child 3200(0x18C) died before initialization with status code 0xC0000142
25 [main] sh 5096 sync_with_child: *** child state waiting for longjmp
/usr/bin/sh: fork: Resource temporarily unavailable
0 [main] sh 5232 sync_with_child: child 3820(0x184) died before initialization with status code 0xC0000142
25 [main] sh 5232 sync_with_child: *** child state waiting for longjmp
/usr/bin/sh: fork: Resource temporarily unavailable
make: Nothing to be done for 'all'.
20:39:33 Build Finished (took 4s.171ms)
I've seen some information suggesting this is an aspect of Windows 8.1. Can I convince Eclipse to use msys somehow, or is there another known clean way to get a make (any make) working without it?
Possibly related:
http://forum.chibios.org/phpbb/viewtopic.php?p=16023
http://sourceforge.net/p/mingw/bugs/1013/?page=0
Bizarrely, the issue was solved for me by replacing msys-1.0.dll in the WinAVR directory with the msys one. I'm guessing there is an ancient version there that somehow gets loaded, even though it's not in the system path as far as I can tell.
The links in the question refer to updating the dll or replacing it with a patched one.