I'm trying to use stream_framework in my application (NOT Django) but I'm having a problem calling the stream_framework shared tasks. Celery seems to find the tasks:
-------------- celery@M3800 v3.1.25 (Cipater)
---- **** -----
--- * *** * -- Linux-4.15.0-34-generic-x86_64-with-Ubuntu-18.04-bionic
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: task:0x7f8d22176dd8
- ** ---------- .> transport: redis://localhost:6379/0
- ** ---------- .> results: redis://localhost:6379/0
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. formshare.processes.feeds.tasks.test_shared_task
. stream_framework.tasks.fanout_operation
. stream_framework.tasks.fanout_operation_hi_priority
. stream_framework.tasks.fanout_operation_low_priority
. stream_framework.tasks.follow_many
. stream_framework.tasks.unfollow_many
[2018-09-17 10:06:28,240: INFO/MainProcess] Connected to redis://localhost:6379/0
[2018-09-17 10:06:28,246: INFO/MainProcess] mingle: searching for neighbors
[2018-09-17 10:06:29,251: INFO/MainProcess] mingle: all alone
I run celery with:
celery -A formshare.processes.feeds.celery_app worker --loglevel=info
My celery_app has:
from celery import Celery
celeryApp = Celery('task', broker='redis://localhost:6379/0', backend='redis://localhost:6379/0', include='formshare.processes.feeds.tasks')
The problem is that delay() does not run the shared tasks. I also created a shared task within my own application, but when I call delay() on it, that task is not executed either. I guess I need to register them as callable from my application? I can't find any information about this online.
I also tried to auto-discover the tasks, but I got the same problem:
celeryApp.autodiscover_tasks(['stream_framework', 'formshare.processes.feeds'], force=True)
Any idea is highly appreciated.
Shared tasks are a specific mechanism for sharing tasks between different applications (mainly Django apps, I think, but I have used them in Flask, for example).
We had the same issue, and to get it to work we called
celery_app.set_default()
on the Celery instance right after instantiating it.
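In the context of the question's celery_app module, that would look something like this (a minimal sketch reusing the broker/backend settings from the question):

from celery import Celery

celeryApp = Celery('task',
                   broker='redis://localhost:6379/0',
                   backend='redis://localhost:6379/0',
                   include='formshare.processes.feeds.tasks')

# Make this instance the implicit "current app", so shared tasks
# (which are unbound at import time) bind to it when called.
celeryApp.set_default()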
Otherwise, another way to get things working is to call the task via the app itself, so something along these lines:
from celery import current_app
...
current_app.tasks['my.tasks.to.exec'].delay(something)
This always works because, given that it is a shared task and therefore not bound to any app when you import it, the task belongs to whichever app is configured as the current_app.
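For example, with the tasks listed in the worker output above, the call could look like this (the argument names are hypothetical placeholders, not the actual stream_framework signature):

from celery import current_app

# 'stream_framework.tasks.follow_many' appears in the worker's [tasks] list;
# the arguments below are illustrative only.
user_id, target_ids = 1, [2, 3]
current_app.tasks['stream_framework.tasks.follow_many'].delay(user_id, target_ids)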
Summary
I'm using Apache Airflow for the first time. I've gotten the webserver, SequentialExecutor and LocalExecutor to work, but I'm running into issues when using the CeleryExecutor with rabbitmq-server. I currently have two AWS EC2 instances.
Error
To summarize: My worker cannot connect to the rabbitmq-server on my scheduler node. Whenever I run airflow worker on the worker instance, it gives:
- ** ---------- [config]
- ** ---------- .> app: airflow.executors.celery_executor:0x7f53a8dce400
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 16 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> default exchange=default(direct) key=default
[2019-02-15 02:26:23,742: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
Configuration
I followed all of the directions I could find online. Both instances have the same airflow.cfg file, with
[core]
executor = CeleryExecutor
[celery]
broker_url = pyamqp://username:password@hostname:port/virtual_host
and result_backend pointing at the same MySQL database on RDS that airflow is working off of.
From what I could tell, no matter what, the worker node always tried connecting to a local rabbitmq-server and completely ignored the broker_url in my airflow.cfg file.
What I've Tried
I went spelunking in the source code, and noticed in celery/app/base.py that if I log out the configuration _get_config() returns when it goes to create a connection, there are actually TWO broker values in the returned dictionary:
BROKER_URL = None
broker_url = pyamqp://username:password@hostname:port/virtual_host
and all of the connection logic seems to point at the BROKER_URL key.
I tried setting BROKER_URL and CELERY_BROKER_URL in airflow.cfg, but it seems to be case insensitive, and ignores the latter. Just to see if it would work, I modified the _get_config() method and hacked in:
s['BROKER_URL'] = s['broker_url']
return s
And, like I expected, everything started working.
Am I doing something wrong? I'd really rather not use this hack, but I can't understand why it's ignoring the configuration values.
Thanks!
From the error message, it seems like the hostname being passed in the URI is wrong:
If rabbitmq-server and the worker are on different machines: instead of localhost/127.0.0.1, the hostname should be the IP address of the RabbitMQ machine (see the sketch below)
If rabbitmq-server and the worker are on the same machine as part of a Docker Compose application (e.g. if you took inspiration from here): the hostname should be the service name associated with the RabbitMQ image in docker-compose.yml, e.g. amqp://guest:guest@rabbitmq:5672/
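In the first case, the fix in airflow.cfg would look something like this, where 10.0.0.5 stands in for the RabbitMQ instance's private IP (a hypothetical value; substitute your own host and credentials):

[celery]
broker_url = pyamqp://username:password@10.0.0.5:5672/virtual_host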
I am developing a web application in Python/Django, and I have several tasks which run in Celery.
I have to run task A one at a time, so I created a worker with --concurrency=1 and routed task A to that worker using the following command.
celery -A proj worker -Q A -c 1 -l INFO
Everything is working fine, as this worker handles task A while other tasks are routed to the default queue (the routing itself is sketched below).
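For reference, the routing that sends task A to its own queue can be expressed in the Celery configuration roughly like this (a minimal sketch; the task name 'A' matches the [tasks] list below, while the module path is hypothetical):

# proj/celeryconfig.py (hypothetical; loaded via app.config_from_object)
task_routes = {
    'A': {'queue': 'A'},  # only task A is sent to queue A
}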
But the above worker returns all tasks when I use the inspect command to get its registered tasks. That matches what I see at startup: the worker displays every task in the project as registered, even though it handles only task A.
Following is the output of the worker when I start it.
$ celery -A proj worker -Q A -c 1 -l INFO
-------------- celery@pet_sms v4.0.2 (latentcall)
---- **** -----
--- * *** * -- Linux-4.8.10-040810-generic-x86_64-with-Ubuntu-16.04-xenial 2018-04-26 14:11:49
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: proj:0x7f298a10d208
- ** ---------- .> transport: redis://localhost:6379/0
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 1 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> A exchange=A(direct) key=A
[tasks]
. task_one
. task_two
. task_three
. A
. task_four
. task_five
Is there any way to register only specific tasks with a worker in Celery?
Notice the following two parts in your worker log.
[queues]
.> A exchange=A(direct) key=A
[tasks]
. task_one
. task_two
. task_three
. A
. task_four
. task_five
The first part [queues] shows the queues your worker consumes.
It shows A, exchange=A(direct), key=A, indicating that this worker only consumes tasks from the queue A, which is exactly what you want. You achieved this because you specified -Q A when you started the worker with the command $ celery -A proj worker -Q A -c 1 -l INFO.
The second part [tasks] shows all the registered tasks of this app.
Though other tasks such as task_one and task_five are all registered, they do not go into queue A, so this worker never consumes them.
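The flip side is that callers must actually put task A onto that queue; a minimal sketch of dispatching it there (the import path is a hypothetical placeholder for wherever your Celery instance lives):

from proj.celery import app  # hypothetical import path

# Route the call explicitly to queue A, which only this worker consumes.
app.send_task('A', queue='A')
# or, with the task object imported:
# task_A.apply_async(queue='A')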
I have a spare disk on my T5440 Solaris 10 box that I want to use for extra ZFS filesystems.
The problem is that this disk was mounted in my original OS installation, but I then carried out a live upgrade and the mount point was carried over into the new boot environment (BE).
So when I try to create a zpool on this disk, Solaris complains that it is in use ....
How can I get c0t0d0 into a state where I can newfs it or create a zpool on it?
root@solaris> zpool create -f spare_pool c0t0d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c0t0d0s7 is in use for live upgrade /export/home.
Please see ludelete(1M).
root@solaris>
root@solaris> lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
new_zfs_BE yes yes yes no -
root@solaris> lufslist new_zfs_BE
boot environment name: new_zfs_BE
This boot environment is currently active.
This boot environment will be active on next system boot.
Filesystem fstype device size Mounted on Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/zvol/dsk/rpool2/swap swap 34359738368 - -
rpool2/ROOT/new_zfs_BE zfs 5213962240 / -
/dev/dsk/c0t0d0s7 ufs 121010061312 /export/home -
rpool2 zfs 42872619520 /rpool2 -
In your case /export/home is already mounted from rpool2, and the BE then tries to mount it again from /dev/dsk/c0t0d0s7; because of this you will not be able to delete or patch the BE.
To recover from this issue, hand-edit the /etc/lu/ICF.* files and delete the line below.
/dev/dsk/c0t0d0s7 ufs 121010061312 /export/home -
And then try to create your pool.
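For example, something along these lines (the ICF file number depends on your boot environment, so ICF.1 below is illustrative; the grep simply locates the stale entry first):

root@solaris> grep c0t0d0s7 /etc/lu/ICF.*
root@solaris> vi /etc/lu/ICF.1
root@solaris> zpool create spare_pool c0t0d0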
I'm upgrading celery and django-celery from:
celery==2.4.5
django-celery==2.3.3
To:
celery==3.0.24
django-celery==3.0.23
After the pip upgrade I ran the migrations and all was well.
I then restarted celery worker and celery beat with the commands below:
django-admin.py celery worker --loglevel=DEBUG --config=portal.settings.development -E
django-admin.py celery beat --loglevel=DEBUG --config=portal.settings.development
The celery beat initial output shows it knows about the tasks:
__ - ... __ - _
Configuration ->
. broker -> amqp://zonza:**@localhost:5672/zonza
. loader -> djcelery.loaders.DjangoLoader
. scheduler -> djcelery.schedulers.DatabaseScheduler
. logfile -> [stderr]@%DEBUG
. maxinterval -> now (0s)
[INFO] Wed, 18 Jun 2014 13:31:18 +0000 celery.beat 2184 140177823078144 beat: Starting...
[2014-06-18 13:31:18,332: DEBUG/MainProcess] DatabaseScheduler: intial read
[2014-06-18 13:31:18,332: INFO/MainProcess] Writing entries...
[2014-06-18 13:31:18,333: DEBUG/MainProcess] DatabaseScheduler: Fetching database schedule
[2014-06-18 13:31:18,366: DEBUG/MainProcess] Current schedule:
<ModelEntry: SOON_EXPIRY_ALERT SOON_EXPIRY_ALERT(*[], **{}) {4}>
<ModelEntry: celery.backend_cleanup celery.backend_cleanup(*[], **{}) {4}>
<ModelEntry: REFRESH_DB_CACHE REFRESH_DB_CACHE(*[], **{}) {4}>
Now none of my Periodic Tasks run :/ Any ideas?
Edit: if I change the scheduler setting to the default 'celery.beat.PersistentScheduler' one, the tasks will work. But I think we need to use the djcelery one in this project for a number of reasons.
Edit 2: after about 40 minutes of nothing, the tasks now start running properly. This obviously is not ideal, and I have no idea why.
It should be in the changelogs somewhere, but Celery changed from storing dates in local time to storing them in UTC.
The database scheduler is not able to automatically convert to the new format, so you need to reset the last_run_at fields for every periodic task.
Something like:
UPDATE djcelery_periodic_task SET last_run_at=NULL
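If you prefer to do this from a Django shell rather than raw SQL, something like the following should be equivalent (assuming django-celery's models, as used in the question):

from djcelery.models import PeriodicTask

# Clear last_run_at on every periodic task so the scheduler
# recomputes next run times under the new UTC convention.
PeriodicTask.objects.update(last_run_at=None)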
I am unable to run resque-web on my server due to some issues I still have to work on, but I still need to check and retry the failed jobs in my Resque queues.
Has anyone any experience of how to peek at the failed-job queue to see what the error was, and then how to retry the jobs, using the redis-cli command line?
thanks,
Found a solution on the following link:
http://ariejan.net/2010/08/23/resque-how-to-requeue-failed-jobs
In the rails console we can use these commands to check and retry failed jobs:
1 - Get the number of failed jobs:
Resque::Failure.count
2 - Check the errors exception class and backtrace
Resque::Failure.all(0,20).each { |job|
puts "#{job["exception"]} #{job["backtrace"]}"
}
The job object is a hash with information about the failed job; you can inspect it for more detail. Also note that this lists only the first 20 failed jobs. I'm not sure how to list them all at once, so you will have to vary the (0, 20) values to page through the whole list.
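One way to page through everything, reusing the Resque::Failure.all(offset, count) call from above (a minimal sketch):

# Walk the failed queue in pages of 20 and print each error.
offset = 0
while offset < Resque::Failure.count
  Resque::Failure.all(offset, 20).each do |job|
    puts "#{job["exception"]} #{job["backtrace"]}"
  end
  offset += 20
end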
3 - Retry all failed jobs:
(Resque::Failure.count-1).downto(0).each { |i| Resque::Failure.requeue(i) }
4 - Reset the failed jobs count:
Resque::Failure.clear
Retrying all the jobs does not reset the counter; we must clear it explicitly so it goes back to zero.