How to configure delay timing in Celery?

input:
>>> from registration import tasks
>>> tasks.mail.delay()
<AsyncResult: 36a2e61c-493b-472c-99f1-cb2391110aa9>
>>> r = tasks.mail.delay()
>>> r.ready()
output:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.6/dist-packages/celery/result.py", line 174, in ready
return self.state in self.backend.READY_STATES
File "/usr/local/lib/python2.6/dist-packages/celery/result.py", line 277, in state
return self.backend.get_status(self.id)
File "/usr/local/lib/python2.6/dist-packages/celery/backends/base.py", line 487, in _is_disabled
raise NotImplementedError('No result backend configured. '
NotImplementedError: No result backend configured. Please see the documentation for more information.
How can I configure the backend so that I can set a specific delay period?

A result backend can be configured via the CELERY_RESULT_BACKEND setting if you use the celeryconfig.py configuration module:
CELERY_RESULT_BACKEND = "amqp"
or via the Celery instance:
celery = Celery('tasks', backend='amqp', broker='amqp://')
Please check out the docs for more info.
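Note that the result backend only affects retrieving task state; the "specific period of delay" part of the question is controlled when the task is sent, via the countdown and eta options of apply_async. A minimal sketch, reusing the mail task from the question:
from datetime import datetime, timedelta
from registration import tasks

# Run the task 60 seconds from now.
result = tasks.mail.apply_async(countdown=60)

# Or schedule it for an absolute time instead.
result = tasks.mail.apply_async(eta=datetime.utcnow() + timedelta(minutes=5))

# With a result backend configured, this no longer raises NotImplementedError.
result.ready()
With no arguments, task.delay() is just a shortcut for task.apply_async() with no countdown, i.e. "run as soon as a worker picks it up".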

Related

AWS SAM Local RuntimeError in Eclipse

I am trying to run a basic Lambda function (using the article blueprint from the AWS serverless project creator in Eclipse). When I try to run the project as AWS SAM Local, I get an error:
Traceback (most recent call last):
File "/Users/bliu4/.local/bin/sam", line 11, in <module>
sys.exit(cli())
File "/Users/bliu4/anaconda3/lib/python3.6/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/Users/bliu4/anaconda3/lib/python3.6/site-packages/click/core.py", line 676, in main
_verify_python3_env()
File "/Users/bliu4/anaconda3/lib/python3.6/site-packages/click/_unicodefun.py", line 118, in _verify_python3_env
'for mitigation steps.' + extra)
RuntimeError: Click will abort further execution because Python 3 was configured to use ASCII as encoding for the environment. Consult http://click.pocoo.org/python3/ for mitigation steps.
Has anyone else run into this issue? Any idea how to change the python version sam is calling? Thanks.
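The error message itself points at the fix: Click refuses to run when the Python 3 locale is ASCII. The mitigation described on the linked Click page is to export a UTF-8 locale in the environment that sam runs under (in Eclipse, the run configuration's environment settings), for example:
export LC_ALL=C.UTF-8
export LANG=C.UTF-8
This does not change which Python version sam is calling; it only fixes the locale seen by the Anaconda Python 3.6 under ~/anaconda3 when Eclipse launches it.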

Airflow scheduler failure

I have followed this tutorial in an attempt to build an Airflow cluster on localhost with my own DAGs. When I ran airflow scheduler after setting executor = CeleryExecutor in the config file, I received the following traceback:
Traceback (most recent call last):
File "/home/yurii/Tools/anaconda3/bin/airflow", line 28, in
args.func(args)
File"/home/yurii/Tools/anaconda3/lib/python3.6/site-packages/airflow/bin/cli.py", line 839, in scheduler job.run()
File "/home/yurii/Tools/anaconda3/lib/python3.6/site-packages/airflow/jobs.py", line 200, in run
self._execute()
File "/home/yurii/Tools/anaconda3/lib/python3.6/site-packages/airflow/jobs.py", line 1309, in _execute
self._execute_helper(processor_manager)
File "/home/yurii/Tools/anaconda3/lib/python3.6/site-packages/airflow/jobs.py", line 1441, in _execute_helper
self.executor.heartbeat()
File "/home/yurii/Tools/anaconda3/lib/python3.6/site-packages/airflow/executors/base_executor.py", line 124, in heartbeat
self.execute_async(key, command=command, queue=queue)
File "/home/yurii/Tools/anaconda3/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 80, in execute_async
args=[command], queue=queue)
File "/home/yurii/Tools/anaconda3/lib/python3.6/site-packages/celery/app/task.py", line 573, in apply_async
**dict(self._get_exec_options(), **options)
File "/home/yurii/Tools/anaconda3/lib/python3.6/site-packages/celery/app/base.py", line 354, in send_task
reply_to=reply_to or self.oid, **options
File "/home/yurii/Tools/anaconda3/lib/python3.6/site-packages/celery/app/amqp.py", line 310, in publish_task
**kwargs
File "/home/yurii/Tools/anaconda3/lib/python3.6/site-packages/kombu/messaging.py", line 172, in publish
routing_key, mandatory, immediate, exchange, declare)
File "/home/yurii/Tools/anaconda3/lib/python3.6/site-packages/kombu/connection.py", line 449, in _ensured
return fun(*args, **kwargs)
File "/home/yurii/Tools/anaconda3/lib/python3.6/site-packages/kombu/messaging.py", line 188, in _publish
mandatory=mandatory, immediate=immediate,
File "/home/yurii/Tools/anaconda3/lib/python3.6/site-packages/librabbitmq/init.py", line 122, in basic_publish
mandatory or False, immediate or False,
TypeError: an integer is required (got type NoneType)
Some additional information:
I am using Airflow 1.8.0 along with Celery 3.1.25 and RabbitMQ 3.5.7 as a broker and backend, but also tried Airflow 1.9.0 with Celery 4.2.
Airflow with sequential executor works without any problems.
airflow test "dag_name" "task_name" "exec_date" runs successfully.
I am new to Airflow/Celery/RabbitMQ/SQL, so any help would be appreciated!
It seems you are using librabbitmq as the AMQP client library, which is not recommended by the Celery core team. Use py-amqp to talk to RabbitMQ instead and you should get rid of this error.
To add to the previous answer: switching to py-amqp means either changing broker_url = amqp://XXXXX to broker_url = pyamqp://XXXXX, or running pip uninstall librabbitmq so that celery falls back to py-amqp on its own.
Additionally, you may need to rename the celery_result_backend variable to result_backend in your airflow.cfg. The celery_ prefix has been removed from the variables in the [celery] section of airflow.cfg in recent versions.
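Put together, a minimal sketch of the relevant airflow.cfg section (XXXXX stands in for the connection details, as in the answer above):
[celery]
# pyamqp:// forces the pure-Python py-amqp client instead of librabbitmq
broker_url = pyamqp://XXXXX
# renamed from celery_result_backend in recent Airflow versions
result_backend = XXXXX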

Tensorflow Serving Client: Socket Closed Error

I'm new to TensorFlow Serving. I'm following the tutorial described here:
https://www.tensorflow.org/serving/serving_inception#part_0_create_a_docker_image
I have already set up the image, the cluster is running, and everything looks fine. However, when trying to run the client locally, I receive the following error:
Traceback (most recent call last):
File "inception_client.py", line 56, in <module>
tf.app.run()
File "/home/dangz90/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "inception_client.py", line 51, in main
result = stub.Predict(request, 30.0) # 10 secs timeout
File "/home/dangz90/.local/lib/python2.7/site-packages/grpc/beta/_client_adaptations.py", line 309, in __call__
self._request_serializer, self._response_deserializer)
File "/home/dangz90/.local/lib/python2.7/site-packages/grpc/beta/_client_adaptations.py", line 195, in _blocking_unary_unary
raise _abortion_error(rpc_error_call)
grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.UNAVAILABLE, details="Socket closed")
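StatusCode.UNAVAILABLE with "Socket closed" is a transport-level failure: nothing usable was listening at the address the stub was built from, which usually means the host/port given to the client does not match what the cluster actually exposes. For reference, the connection setup in that era of TF Serving client looks roughly like this (the host and port here are hypothetical placeholders, not values from the question):
from grpc.beta import implementations
from tensorflow_serving.apis import prediction_service_pb2

# Must match the externally exposed address of the serving service,
# e.g. the Kubernetes LoadBalancer IP and port.
channel = implementations.insecure_channel('127.0.0.1', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
Checking that the same host:port responds at all (for example with telnet, or kubectl get svc for the external IP) is a reasonable first step.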

redis-py raises AttributeError

In what circumstances would redis-py raise the following AttributeError exception?
Isn't redis-py designed to raise only redis.exceptions.RedisError-based exceptions?
What would be a reasonable handling logic?
Traceback (most recent call last):
File "c:\Python27\Lib\threading.py", line 551, in __bootstrap_inner
self.run()
File "c:\Python27\Lib\threading.py", line 504, in run
self.__target(*self.__args, **self.__kwargs)
File "C:\Users\Administrator\Documents\my_proj\my_module.py", line 33, in inner
ret = protected_func(*args, **kwargs)
File "C:\Users\Administrator\Documents\my_proj\my_module.py", line 104, in _listen
for message in _pubsub.listen():
File "C:\Users\Administrator\virtual_environments\my_env\lib\site-packages\redis\client.py", line 1555, in listen
r = self.parse_response()
File "C:\Users\Administrator\virtual_environments\my_env\lib\site-packages\redis\client.py", line 1499, in parse_response
response = self.connection.read_response()
File "C:\Users\Administrator\virtual_environments\my_env\lib\site-packages\redis\connection.py", line 306, in read_response
response = self._parser.read_response()
File "C:\Users\Administrator\virtual_environments\my_env\lib\site-packages\redis\connection.py", line 104, in read_response
response = self.read()
File "C:\Users\Administrator\virtual_environments\my_env\lib\site-packages\redis\connection.py", line 89, in read
return self._fp.readline()[:-2]
AttributeError: 'NoneType' object has no attribute 'readline'
Seems like an old question, but I faced the same problem recently.
My setup used celery with redis as a broker. A ThreadPoolExecutor used the shared celery object to batch tasks out to workers, and the batcher function waited for the submitted tasks to finish using celery.result.ResultSet.
After a quick investigation, I found that celery uses a pub/sub mechanism somewhere to wait for tasks to finish. And that is it: pub/sub does not play well with thread safety, per the official README: https://github.com/andymccurdy/redis-py#thread-safety
Honestly, I didn't try to prove my theory; I fixed my problem by switching to a ProcessPoolExecutor instead.
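A minimal sketch of that workaround (batch_tasks is a hypothetical stand-in for the batcher function described above):
from concurrent.futures import ProcessPoolExecutor

def batch_tasks(chunk):
    # Hypothetical batcher: submit celery tasks for one chunk and block
    # on a celery.result.ResultSet until they all finish.
    return chunk

if __name__ == '__main__':
    chunks = [[1, 2], [3, 4]]
    # Processes instead of threads: each worker process builds its own
    # redis connections, so the non-thread-safe PubSub object celery
    # uses while waiting on results is never shared between threads.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(batch_tasks, chunks))
The other direction would be to keep the threads but give each one its own connection (redis-py documents PubSub objects as the non-thread-safe part); the process pool is simply the blunter, safer fix.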

ipython 0.13 zmq errors

I am encountering weird behavior in an IPython cluster. The calculations finish, but many results never reach the client (and the engines just idle after finishing their first calculation).
I suspect something is wrong with zmq, because 1) from time to time I see the following error:
File "/data/misc/nano/python/env_stable/lib/python2.7/site-packages/IPython/parallel/client/asyncresult.py", line 118, in get
if not self.ready():
File "/data/misc/nano/python/env_stable/lib/python2.7/site-packages/IPython/parallel/client/asyncresult.py", line 132, in ready
self.wait(0)
File "/data/misc/nano/python/env_stable/lib/python2.7/site-packages/IPython/parallel/client/asyncresult.py", line 142, in wait
self._ready = self._client.wait(self.msg_ids, timeout)
File "/data/misc/nano/python/env_stable/lib/python2.7/site-packages/IPython/parallel/client/client.py", line 1058, in wait
self.spin()
File "/data/misc/nano/python/env_stable/lib/python2.7/site-packages/IPython/parallel/client/client.py", line 1015, in spin
self._flush_results(self._task_socket)
File "/data/misc/nano/python/env_stable/lib/python2.7/site-packages/IPython/parallel/client/client.py", line 814, in _flush_results
idents,msg = self.session.recv(sock, mode=zmq.NOBLOCK)
File "/data/misc/nano/python/env_stable/lib/python2.7/site-packages/IPython/zmq/session.py", line 642, in recv
idents, msg_list = self.feed_identities(msg_list, copy)
File "/data/misc/nano/python/env_stable/lib/python2.7/site-packages/IPython/zmq/session.py", line 673, in feed_identities
idx = msg_list.index(DELIM)
ValueError: '<IDS|MSG>' is not in list
and 2) additionally, IPython.zmq has two test failures:
======================================================================
ERROR: test_send (IPython.zmq.tests.test_session.TestSession)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/clusterdata/python/env_stable/lib/python2.7/site-packages/IPython/zmq/tests/test_session.py", line 76, in test_send
socket = MockSocket(zmq.Context.instance(),zmq.PAIR)
File "/clusterdata/python/env_stable/lib/python2.7/site-packages/IPython/zmq/tests/test_session.py", line 34, in __init__
self.data = []
File "/clusterdata/python/env_stable/lib/python2.7/site-packages/zmq/sugar/attrsettr.py", line 38, in __setattr__
self.__class__.__name__, upper_key)
AttributeError: MockSocket has no such option: DATA
======================================================================
ERROR: test_send (IPython.zmq.tests.test_session.TestSession)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/clusterdata/python/env_stable/lib/python2.7/site-packages/zmq/tests/__init__.py", line 108, in tearDown
raise RuntimeError("context could not terminate, open sockets likely remain in test")
RuntimeError: context could not terminate, open sockets likely remain in test
----------------------------------------------------------------------
I use pyzmq 13.0.0 (as installed by pip) and zeromq 3.2.2, compiled by pyzmq's setup. I use IPython 0.13.1 and Python 2.7.3.
Any suggestions as to what this could be? Failing that, how could I dig up more information on why these errors occur?
Update: It turns out the slowdown was due to a long task queue in ipcontroller, which was taking 100% CPU and lagging horribly. That is a separate issue, but I would still appreciate feedback on the above.
Answered by #minrk in the comments. The ZMQ errors were unimportant; the performance problem came from task scheduling, and was solved by setting TaskScheduler.hwm=0.
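For reference, a minimal sketch of where that setting goes, assuming the default profile (the config path is the standard IPython.parallel location, not something stated in the thread):
# ~/.ipython/profile_default/ipcontroller_config.py
c = get_config()

# hwm is the per-engine cap on outstanding tasks; 0 means unlimited,
# so the scheduler hands tasks out immediately instead of tracking
# and throttling a per-engine queue.
c.TaskScheduler.hwm = 0
The same option can also be passed on the command line as ipcontroller --TaskScheduler.hwm=0.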