Received unregistered task of type - celery

I am trying to run tasks which are in memory.
Registered tasks on the worker:
[2012-09-13 11:10:18,928: WARNING/PoolWorker-1] [u'B.run', u'M1.run', u'M11.run', u'M22.run', u'M23.run', u'M24.run', u'M25.run', u'M26.run', u'M4.run', u'celery.backend_cleanup', u'celery.chain', u'celery.chord', u'celery.chord_unlock', u'celery.chunks', u'celery.group', u'celery.map', u'celery.starmap', u'impmod.run', u'initializerNew.run']
But it still gives this error:
[2012-09-13 11:19:59,848: ERROR/MainProcess] Received unregistered task of type 'M24.run'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.
The full contents of the message body was:
{'retries': 0, 'task': 'M24.run', 'eta': None, 'args': [{'cnt': '3', 'ids': '0001-0004,0002-0004', 'NagID': 2, 'wgt': '3', 'ModID': 'M24', 'ProfileModuleID': 64, 'mhs': '1'}, 0], 'expires': None, 'callbacks': None, 'errbacks': None, 'kwargs': {}, 'id': 'ddf5f520-803b-4dc9-ad3b-a931d90950a6', 'utc': True} (394b)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery-3.0.4-py2.7.egg/celery/worker/consumer.py", line 410, in on_task_received
strategies[name](message, body, message.ack_log_error)
KeyError: 'M24.run'

Can you attach the command which starts Celery? It looks like this application has a different sys.path, which is why the Celery app couldn't import the 'M24.run' task.
Also remember that Celery requires you to set the names of the modules where your tasks are located.
Something similar to:
CELERY_INCLUDE = [
    'M24',
]
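For illustration, a minimal sketch of how that include list maps onto the app definition (the file name celeryapp.py, the broker URL, and the task signature are assumptions for the example, not taken from the original post):

# celeryapp.py -- a sketch; broker URL and task signature are assumed
from celery import Celery

app = Celery('myapp',
             broker='amqp://guest@localhost//',
             include=['M24'])   # same effect as CELERY_INCLUDE = ['M24']

# M24.py must then be importable from the worker's sys.path, e.g.:
#
#     from celeryapp import app
#
#     @app.task(name='M24.run')
#     def run(params, index):
#         ...

The key point is that the worker process itself must be able to import M24; if the worker starts with a different sys.path, the name 'M24.run' never enters its strategy table and you get exactly this KeyError.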

Related

Bitbake server does not start

I am having some trouble building the Yocto project and hope I can find some help. Is there any way to fix the following issue? Let me know if you need any more information. Thanks.
My goal
I am building the default image from this guide: https://docs.yoctoproject.org/brief-yoctoprojectqs/index.html
This build takes up a lot of space, so I want to build it on a network drive.
Current situation
I am able to finish the build normally if I use a regular local folder.
If I use the shared drive that is mounted in the system, the build never starts. The error looks like this:
$ bitbake core-image-sato
NOTE: Bitbake server didn't start within 5 seconds, waiting for 90
ERROR: Error parsing configuration files
Traceback (most recent call last):
File "/mnt/NetworkShare/yocto/poky/bitbake/lib/bb/persist_data.py", line 45, in SQLTable.wrap_func(*args=('CREATE TABLE IF NOT EXISTS BB_URI_HEADREVS(key TEXT PRIMARY KEY NOT NULL, value TEXT);',), **kwargs={}):
if self.connection is None and reconnect:
> self.reconnect()
File "/mnt/NetworkShare/yocto/poky/bitbake/lib/bb/persist_data.py", line 105, in SQLTable.reconnect():
self.connection.text_factory = str
> self._setup_database()
File "/mnt/NetworkShare/yocto/poky/bitbake/lib/bb/persist_data.py", line 50, in SQLTable.wrap_func(*args=(), **kwargs={}):
try:
> return f(self, *args, **kwargs)
except sqlite3.OperationalError as exc:
File "/mnt/NetworkShare/yocto/poky/bitbake/lib/bb/persist_data.py", line 79, in SQLTable.wrap_func(*args=(), **kwargs={}):
with contextlib.closing(self.connection.cursor()) as cursor:
> return f(self, cursor, *args, **kwargs)
return wrap_func
File "/mnt/NetworkShare/yocto/poky/bitbake/lib/bb/persist_data.py", line 93, in SQLTable._setup_database(cursor=<sqlite3.Cursor object at 0x7f3d59c5dab0>):
def _setup_database(self, cursor):
> cursor.execute("pragma synchronous = off;")
# Enable WAL and keep the autocheckpoint length small (the default is
sqlite3.OperationalError: disk I/O error
Details
The /etc/fstab line to mount the drive is:
NetworkShare /mnt/NetworkShare 9p trans=virtio,version=9p2000.L,_netdev,rw 0 0
The host is Ubuntu Server 20.04 running in a VM inside UnRAID. I don't think the VM is the issue (it's possible that I am very wrong), because I get the same error if I mount an external share on my own computer (openSUSE Tumbleweed) and try to build in it.
The bitbake-cookerdaemon.log:
1221 13:38:18.293775 --- Starting bitbake server pid 1221 at 2022-01-19 13:38:18.293689 ---
1221 13:38:18.333537 Started bitbake server pid 1221
1221 13:38:18.339125 Entering server connection loop
1221 13:38:18.340399 Accepting [<socket.socket fd=6, family=AddressFamily.AF_UNIX, type=SocketKind.SOCK_STREAM, proto=0, laddr=bitbake.sock>] ([])
1221 13:38:18.341382 Processing Client
1221 13:38:18.342099 Connecting Client
1221 13:38:18.343689 Running command ['setFeatures', [2]]
1221 13:38:18.344805 Command Completed
1221 13:38:18.346085 Running command ['updateConfig', {'abort': True, 'force': False, 'invalidate_stamp': None, 'dry_run': False, 'dump_signatures': [], 'extra_assume_provided': [], 'profile': False, 'prefile': [], 'postfile': [], 'server_timeout': None, 'nosetscene': False, 'setsceneonly': False, 'skipsetscene': False, 'runall': None, 'runonly': None, 'writeeventlog': None, 'build_verbose_shell': False, 'build_verbose_stdout': False, 'default_loglevel': 20, 'debug_domains': {}}, {'SHELL': '/bin/bash', 'PWD': '/mnt/NetworkShare/yocto/poky/build', 'LOGNAME': 'metics', 'HOME': '/home/metics', 'BBPATH': '/mnt/NetworkShare/yocto/poky/build', 'BB_ENV_EXTRAWHITE': 'ALL_PROXY BBPATH_EXTRA BB_LOGCONFIG BB_NO_NETWORK BB_NUMBER_THREADS BB_SETSCENE_ENFORCE BB_SRCREV_POLICY DISTRO FTPS_PROXY FTP_PROXY GIT_PROXY_COMMAND HTTPS_PROXY HTTP_PROXY MACHINE NO_PROXY PARALLEL_MAKE SCREENDIR SDKMACHINE SOCKS5_PASSWD SOCKS5_USER SSH_AGENT_PID SSH_AUTH_SOCK STAMPS_DIR TCLIBC TCMODE all_proxy ftp_proxy ftps_proxy http_proxy https_proxy no_proxy ', 'USER': 'metics', 'PATH': '/mnt/NetworkShare/yocto/poky/scripts:/mnt/NetworkShare/yocto/poky/bitbake/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'LC_ALL': 'en_US.UTF-8', 'TERMCAP': 'SC|screen.xterm-256color|VT 100/ANSI X3.64 virtual terminal:DO=\\E[%dB:LE=\\E[%dD:RI=\\E[%dC:UP=\\E[%dA:bs:bt=\\E[Z:cd=\\E[J:ce=\\E[K:cl=\\E[H\\E[J:cm=\\E[%i%d;%dH:ct=\\E[3g:do=^J:nd=\\E[C:pt:rc=\\E8:rs=\\Ec:sc=\\E7:st=\\EH:up=\\EM:le=^H:bl=^G:cr=^M:it#8:ho=\\E[H:nw=\\EE:ta=^I:is=\\E)0:li#51:co#110:am:xn:xv:LP:sr=\\EM:al=\\E[L:AL=\\E[%dL:cs=\\E[%i%d;%dr:dl=\\E[M:DL=\\E[%dM:dc=\\E[P:DC=\\E[%dP:im=\\E[4h:ei=\\E[4l:mi:IC=\\E[%d#:ks=\\E[?1h\\E=:ke=\\E[?1l\\E>:vi=\\E[?25l:ve=\\E[34h\\E[?25h:vs=\\E[34l:ti=\\E[?1049h:te=\\E[?1049l:us=\\E[4m:ue=\\E[24m:so=\\E[3m:se=\\E[23m:mb=\\E[5m:md=\\E[1m:mh=\\E[2m:mr=\\E[7m:me=\\E[m:ms:Co#8:pa#64:AF=\\E[3%dm:AB=\\E[4%dm:op=\\E[39;49m:AX:vb=\\Eg:G0:as=\\E(0:ae=\\E(B:ac=\\140\\140aaffggjjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~..--++,,hhII00:po=\\E[5i:pf=\\E[4i:Km=\\E[M:k0=\\E[10~:k1=\\EOP:k2=\\EOQ:k3=\\EOR:k4=\\EOS:k5=\\E[15~:k6=\\E[17~:k7=\\E[18~:k8=\\E[19~:k9=\\E[20~:k;=\\E[21~:F1=\\E[23~:F2=\\E[24~:kB=\\E[Z:kh=\\E[1~:#1=\\E[1~:kH=\\E[4~:#7=\\E[4~:kN=\\E[6~:kP=\\E[5~:kI=\\E[2~:kD=\\E[3~:ku=\\EOA:kd=\\EOB:kr=\\EOC:kl=\\EOD:km:', 'WINDOW': '0', 'XDG_SESSION_TYPE': 'tty', 'MOTD_SHOWN': 'pam', 'LANG': 'en_US.UTF-8', 'LS_COLORS': 
'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:', 'SSH_CONNECTION': '10.0.0.12 60522 10.0.0.19 22', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'XDG_SESSION_CLASS': 'user', 'PYTHONPATH': '/mnt/NetworkShare/yocto/poky/bitbake/lib:', 'TERM': 'screen.xterm-256color', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'SHLVL': '2', 'XDG_SESSION_ID': '1', 'XDG_RUNTIME_DIR': '/run/user/1000', 'SSH_CLIENT': '10.0.0.12 60522 22', 'XDG_DATA_DIRS': '/usr/local/share:/usr/share:/var/lib/snapd/desktop', 'STY': '1116.pts-0.ubuntuserver', 'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/1000/bus', 'BUILDDIR': '/mnt/NetworkShare/yocto/poky/build', 'SSH_TTY': '/dev/pts/0', 'OLDPWD': '/mnt/NetworkShare/yocto/poky', '_': '/mnt/NetworkShare/yocto/poky/bitbake/bin/bitbake'}, ['/mnt/NetworkShare/yocto/poky/bitbake/bin/bitbake', 'core-image-sato']]
1221 13:38:33.830099 Command Completed
1221 13:38:33.831731 Processing Client
1221 13:38:33.832344 Disconnecting Client
1221 13:38:33.833129 No timeout, exiting.
1221 13:38:33.933875 Exiting
1221 13:38:33.942717 Original lockfile contents: ['1221\n']
1221 13:38:33.954461 Exiting as we could obtain the lock
sys:1: ResourceWarning: unclosed file <_io.TextIOWrapper name='/mnt/NetworkShare/yocto/poky/build/bitbake-cookerdaemon.log' mode='a+' encoding='UTF-8'>
It means your hard disk is full. You should delete some files before re-running the job to create a new image.
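If you want to confirm that, here is a quick check of the free space on the share (a sketch; the mount point is taken from the fstab line above):

# check_space.py -- prints total/free space on the network share
import shutil

usage = shutil.disk_usage('/mnt/NetworkShare')
print('total: %.1f GiB' % (usage.total / 2**30))
print('free:  %.1f GiB' % (usage.free / 2**30))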

Unable to run airflow scheduler

I have recently installed Airflow on an AWS server using this guide for Ubuntu 16.04. After a painful but successful install, I started the webserver. I tried a sample DAG as follows:
from airflow.operators.python_operator import PythonOperator
from airflow.operators.dummy_operator import DummyOperator
from datetime import timedelta
from airflow import DAG
import airflow

# DEFAULT ARGS
default_args = {
    'owner': 'airflow',
    'start_date': airflow.utils.dates.days_ago(2),
    'depends_on_past': False}

dag = DAG('init_run', default_args=default_args, description='DAG SAMPLE',
          schedule_interval='@daily')

def print_something():
    print("HELLO AIRFLOW!")

with dag:
    task_1 = PythonOperator(task_id='do_it', python_callable=print_something)
    task_2 = DummyOperator(task_id='dummy')
    task_1 << task_2
But when I open the UI, the tasks in the DAG are still in "No Status" no matter how many times I trigger them manually or refresh the page.
Later I found out that the airflow scheduler is not running and shows the following error:
{celery_executor.py:228} ERROR - Error sending Celery task:No module named 'MySQLdb'
Celery Task ID: ('init_run', 'dummy', datetime.datetime(2019, 5, 30, 18, 0, 24, 902499, tzinfo=<TimezoneInfo [UTC, GMT, +00:00:00, STD]>), 1)
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/executors/celery_executor.py", line 118, in send_task_to_executor
result = task.apply_async(args=[command], queue=queue)
File "/usr/local/lib/python3.7/site-packages/celery/app/task.py", line 535, in apply_async
**options
File "/usr/local/lib/python3.7/site-packages/celery/app/base.py", line 728, in send_task
amqp.send_task_message(P, name, message, **options)
File "/usr/local/lib/python3.7/site-packages/celery/app/amqp.py", line 552, in send_task_message
**properties
File "/usr/local/lib/python3.7/site-packages/kombu/messaging.py", line 181, in publish
exchange_name, declare,
File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 510, in _ensured
return fun(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/kombu/messaging.py", line 194, in _publish
[maybe_declare(entity) for entity in declare]
File "/usr/local/lib/python3.7/site-packages/kombu/messaging.py", line 194, in <listcomp>
[maybe_declare(entity) for entity in declare]
File "/usr/local/lib/python3.7/site-packages/kombu/messaging.py", line 102, in maybe_declare
return maybe_declare(entity, self.channel, retry, **retry_policy)
File "/usr/local/lib/python3.7/site-packages/kombu/common.py", line 121, in maybe_declare
return _maybe_declare(entity, channel)
File "/usr/local/lib/python3.7/site-packages/kombu/common.py", line 145, in _maybe_declare
entity.declare(channel=channel)
File "/usr/local/lib/python3.7/site-packages/kombu/entity.py", line 608, in declare
self._create_queue(nowait=nowait, channel=channel)
File "/usr/local/lib/python3.7/site-packages/kombu/entity.py", line 617, in _create_queue
self.queue_declare(nowait=nowait, passive=False, channel=channel)
File "/usr/local/lib/python3.7/site-packages/kombu/entity.py", line 652, in queue_declare
nowait=nowait,
File "/usr/local/lib/python3.7/site-packages/kombu/transport/virtual/base.py", line 531, in queue_declare
self._new_queue(queue, **kwargs)
File "/usr/local/lib/python3.7/site-packages/kombu/transport/sqlalchemy/__init__.py", line 82, in _new_queue
self._get_or_create(queue)
File "/usr/local/lib/python3.7/site-packages/kombu/transport/sqlalchemy/__init__.py", line 70, in _get_or_create
obj = self.session.query(self.queue_cls) \
File "/usr/local/lib/python3.7/site-packages/kombu/transport/sqlalchemy/__init__.py", line 65, in session
_, Session = self._open()
File "/usr/local/lib/python3.7/site-packages/kombu/transport/sqlalchemy/__init__.py", line 56, in _open
engine = self._engine_from_config()
File "/usr/local/lib/python3.7/site-packages/kombu/transport/sqlalchemy/__init__.py", line 51, in _engine_from_config
return create_engine(conninfo.hostname, **transport_options)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/__init__.py", line 443, in create_engine
return strategy.create(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/strategies.py", line 87, in create
dbapi = dialect_cls.dbapi(**dbapi_args)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/dialects/mysql/mysqldb.py", line 104, in dbapi
return __import__("MySQLdb")
ModuleNotFoundError: No module named 'MySQLdb'
Here is the setting in the config file (airflow.cfg):
sql_alchemy_conn = postgresql+psycopg2://airflow@localhost:5432/airflow
broker_url = sqla+mysql://airflow:airflow@localhost:3306/airflow
result_backend = db+postgresql://airflow:airflow@localhost/airflow
I've been struggling with this issue for two days now. Please help.
In your airflow.cfg there should also be a config option for celery_result_backend. Are you able to let us know what this value is set to? If it is not present in your config, set it to the same value as result_backend, i.e.:
celery_result_backend = db+postgresql://airflow:airflow@localhost/airflow
Then restart the Airflow stack to ensure the configuration changes apply.
(I wanted to leave this as a comment but don't have enough rep to do so)
I think the guide you are following didn't tell you to install MySQL, and it seems you are using it in your broker URL.
You can install MySQL and then configure it (for Python 3.5+):
pip install mysqlclient
Alternatively, for a quick fix, you can use RabbitMQ (a message broker, which you will need in order to run Airflow DAGs with Celery) with the guest user login.
Your broker_url will then be:
broker_url = amqp://guest:guest@localhost:5672//
If not already installed, RabbitMQ can be installed with the following command:
sudo apt install rabbitmq-server
Change the setting NODE_IP_ADDRESS=0.0.0.0 in the configuration file located at
/etc/rabbitmq/rabbitmq-env.conf
Then start the RabbitMQ service:
sudo service rabbitmq-server start
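Once RabbitMQ is running, you can verify the broker is reachable from Python with kombu, which Airflow's Celery executor already pulls in (a sketch; the URL is the guest login from above):

# check_broker.py -- raises an error if RabbitMQ is not reachable
from kombu import Connection

with Connection('amqp://guest:guest@localhost:5672//') as conn:
    conn.connect()
    print('broker reachable:', conn.connected)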

Neo4j Doc manager not building relationships from array of _id field

I am trying to use the Neo4j Doc Manager to connect data from MongoDB to Neo4j.
As per the Neo4j Doc Manager documentation, I am using an _id field to link two nodes. But what do I do when I have one node connected to two different nodes?
I have tried giving this field an array of ObjectIds, but it fails with the following exception:
[ERROR] mongo_connector.oplog_manager:288 - Unable to process oplog document {'ns': 'metadata.titles', 'h': 8825348528118145634, 'ts': Timestamp(1499669635, 1), 'o': {'title': 'software Engineer', 'skills_id': [{'skills_id': '595ce56c813b1e12cecd61e6'}]}, 'v': 2, 'op': 'i'}
Traceback (most recent call last):
File "/opt/deployment/elastic5/python3/mongo-connector/mongo_connector/util.py", line 33, in wrapped
return f(*args, **kwargs)
File "/opt/deployment/elastic5/python3/local/lib/python3.4/site-packages/mongo_connector/doc_managers/neo4j_doc_manager.py", line 64, in upsert
builder = NodesAndRelationshipsBuilder(doc, doc_type, doc_id, metadata)
File "/opt/deployment/elastic5/python3/local/lib/python3.4/site-packages/mongo_connector/doc_managers/nodes_and_relationships_builder.py", line 18, in __init__
self.build_nodes_query(doc_type, doc, doc_id)
File "/opt/deployment/elastic5/python3/local/lib/python3.4/site-packages/mongo_connector/doc_managers/nodes_and_relationships_builder.py", line 27, in build_nodes_query
self.build_node_with_reference(doc_type, key, id, document[key])
File "/opt/deployment/elastic5/python3/local/lib/python3.4/site-packages/mongo_connector/doc_managers/nodes_and_relationships_builder.py", line 64, in build_node_with_reference
self.explicit_ids.update({document_key: doc_type})
TypeError: unhashable type: 'list'
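For what it's worth, the traceback shows where the shape of the field matters: build_node_with_reference ends up using the field's value as a dictionary key, which works for a single ObjectId string but not for a list. A sketch of the two shapes (field names taken from the oplog entry above):

# works: a single reference -- the value is a hashable string
doc_ok = {'title': 'software Engineer',
          'skills_id': '595ce56c813b1e12cecd61e6'}

# fails: a list of references -- the list itself is used as a dict key,
# raising TypeError: unhashable type: 'list'
doc_bad = {'title': 'software Engineer',
           'skills_id': [{'skills_id': '595ce56c813b1e12cecd61e6'}]}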

Showing KeyError: 'schedules.tasks.run' while running django-celery periodic tasks

I've created a class-based periodic task using djcelery to send emails to the client. The task performs the action and sends the email when it is called from the shell, but while using the crontab, I get a KeyError for "schedules.tasks.run". I have added the following settings and created the task:
settings.py
import os
import djcelery
from datetime import timedelta

djcelery.setup_loader()

BROKER_URL = 'django://'
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
CELERY_RESULT_BACKEND = 'djcelery.backends.database:DatabaseBackend'
CELERYBEAT_SCHEDULE = {
    "runs-every-30-seconds": {
        "task": "schedules.tasks.EndingDrawslotScheduler.run",
        "schedule": timedelta(seconds=30),
        "args": (16, 16)
    },
}
app.conf.timezone = 'UTC'
INSTALLED_APPS = ('djcelery',
                  'kombu.transport.django',)
Error-Info:
The full contents of the message body was:
{'utc': True, 'callbacks': None, 'id': '6ad19ff8-9825-4d54-a8b2-0a8322fc9fb1',
'args': [], 'taskset': None, 'retries': 0, 'timelimit': (None, None),
'kwargs': {}, 'expires': None, 'errbacks': None, 'chord': None, 'task':
'schedules.tasks.run', 'eta': None} (262b)
Traceback (most recent call last):
File "/home/s/proj/env/lib/python3.5/site-packages/celery/worker/consumer.py", line 465, in on_task_received strategies[type_](message, body,
KeyError: 'schedules.tasks.run'
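For reference, with djcelery's class-based tasks the name the worker registers comes from the Task class itself, and the "task" value in CELERYBEAT_SCHEDULE has to match that registered name exactly. A minimal sketch (the module path and class name are taken from the settings above; pinning an explicit name attribute is an assumed fix, not the original code):

# schedules/tasks.py
from celery.task import Task

class EndingDrawslotScheduler(Task):
    # pin the registered name so the beat schedule can reference it exactly
    name = 'schedules.tasks.EndingDrawslotScheduler'

    def run(self, x, y):
        ...

# the matching beat entry would then be:
#     "task": "schedules.tasks.EndingDrawslotScheduler"

The error above ('schedules.tasks.run') suggests the name the scheduler sends and the name the worker registers don't line up.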

Rake::Tasks sources missed

I just started using Rake instead of Make for building my projects, and I would like to use some kind of "task template" to automate the building.
Consider the following snippets:
task :test1 => ['1', '2']
task :test2 => ['3', '4']

Rake::Task.tasks.each do |task|
  p task
  p task.sources
end
The output is:
$ rake
<Rake::Task test1 => [1, 2]>
[]
<Rake::Task test2 => [3, 4]>
[]
My question is: why is task.sources [], i.e. why are the prerequisites missing? Thanks in advance.
The prerequisites of a task are accessed with task.prerequisites.
task.sources and task.source are only used for tasks that are built from a rule, as described in the rdocs: http://ruby-doc.org/stdlib-2.1.2/libdoc/rake/rdoc/Rake/Task.html#method-i-source