KeyError: 'schedules.tasks.run' when running django-celery periodic tasks

I've created a class-based periodic task using djcelery to send emails to the client. The task performs the action and sends the email when it is called from the shell, but when it runs from the crontab I get a KeyError: 'schedules.tasks.run'. I have added the following settings and created the tasks:
settings.py
import os
from datetime import timedelta  # needed for the schedule below

import djcelery

djcelery.setup_loader()

BROKER_URL = 'django://'
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
CELERY_RESULT_BACKEND = 'djcelery.backends.database:DatabaseBackend'
CELERYBEAT_SCHEDULE = {
    "runs-every-30-seconds": {
        "task": "schedules.tasks.EndingDrawslotScheduler.run",
        "schedule": timedelta(seconds=30),
        "args": (16, 16)
    },
}
CELERY_TIMEZONE = 'UTC'  # settings.py has no Celery app object, so app.conf.timezone cannot be set here
INSTALLED_APPS = (
    'djcelery',
    'kombu.transport.django',
)
Error-Info:
The full contents of the message body was:
{'utc': True, 'callbacks': None, 'id': '6ad19ff8-9825-4d54-a8b2-0a8322fc9fb1',
'args': [], 'taskset': None, 'retries': 0, 'timelimit': (None, None),
'kwargs': {}, 'expires': None, 'errbacks': None, 'chord': None, 'task':
'schedules.tasks.run', 'eta': None} (262b)
Traceback (most recent call last):
  File "/home/s/proj/env/lib/python3.5/site-packages/celery/worker/consumer.py", line 465, in on_task_received
    strategies[type_](message, body,
KeyError: 'schedules.tasks.run'

Related

Airflow - KubernetesPodOperator - Broken DAG: unexpected keyword argument 'request_cpu'

I'm using the following Airflow version inside my Docker container, and I'm currently running into a broken DAG:
FROM apache/airflow:2.3.4-python3.9
I have other DAGs running with the same 'request_cpu' argument that are perfectly functional, so I'm not sure what the issue could be:
Broken DAG: [/home/airflow/airflow/dags/my_project.py] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 858, in __init__
self.resources = coerce_resources(resources)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 133, in coerce_resources
return Resources(**resources)
TypeError: Resources.__init__() got an unexpected keyword argument 'request_cpu'
This is my current DAG configuration
# DAG configuration
DAG_ID = "my_project_id"
DAG_DESCRIPTION = "description"
DAG_IMAGE = image

default_args = {
    "owner": "airflow",
    "depends_on_past": False,
    "max_active_tasks": 1,
    "max_active_runs": 1,
    "email_on_failure": True,
    "email": ["my@mail.com"],
    "retries": 0,
    "email_on_retry": False,
    "image_pull_policy": "Always",
}

# Define desired resources.
compute_resources = {
    # CPU: 500m (milliCPU) is about half a CPU; other values (1, 2, 4, ...) allocate full CPUs
    "request_cpu": "500m",
    # Memory: Mi for mebibytes, Gi for gibibytes
    "request_memory": "512Mi",
    "limit_cpu": "500m",
    "limit_memory": "1Gi",
}

with DAG(
    DAG_ID,
    default_args=default_args,
    start_date=datetime(2022, 5, 9),
    schedule_interval="0 21 */16 * *",  # Every 16 days, i.e. twice per month
    max_active_runs=1,
    max_active_tasks=1,
    catchup=False,
    description=DAG_DESCRIPTION,
    tags=["my tags"],
) as dag:
    # AWS credentials
    creds = tools.get_config_params(key="AWS-keys")

    my_task = KubernetesPodOperator(
        namespace="airflow",
        image=DAG_IMAGE,
        image_pull_secrets=[k8s.V1LocalObjectReference("docker-registry")],
        container_resources=compute_resources,
        env_vars={
            "AWS_ACCESS_KEY_ID": creds["access_key"],
            "AWS_SECRET_ACCESS_KEY": creds["secret_access_key"],
            "EXECUTION_DATE": "{{ execution_date }}",
        },
        cmds=["python3", "my_project.py"],
        is_delete_operator_pod=True,
        in_cluster=False,
        name="my-project-name",
        task_id="my-task",
        config_file=os.path.expanduser("~") + "/.kube/config",
        get_logs=True,
        resources=compute_resources,
    )
First, resources is deprecated, so you should use only container_resources.
Second, container_resources expects a V1ResourceRequirements, not a dict. You should do:
from kubernetes.client import models as k8s

compute_resources = k8s.V1ResourceRequirements(
    requests={
        'memory': '512Mi',
        'cpu': '500m',
    },
    limits={
        'memory': '1Gi',
        'cpu': '500m',  # note the quotes; a bare 500m is not valid Python
    },
)
Then
my_task = KubernetesPodOperator(..., container_resources=compute_resources)
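Putting it together, a minimal sketch under the question's names (the import path below assumes the cncf.kubernetes provider bundled with that image; note the deprecated resources= argument is dropped entirely, since passing the dict there is what triggers coerce_resources):
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
from kubernetes.client import models as k8s

compute_resources = k8s.V1ResourceRequirements(
    requests={'memory': '512Mi', 'cpu': '500m'},
    limits={'memory': '1Gi', 'cpu': '500m'},
)

my_task = KubernetesPodOperator(
    namespace="airflow",
    image=DAG_IMAGE,
    name="my-project-name",
    task_id="my-task",
    container_resources=compute_resources,  # a V1ResourceRequirements, not a dict
    # ...the remaining arguments from the question, with no resources= keyword
)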

zeep.exceptions.Fault: Server was unable to process request. ---> Object reference not set to an instance of an object

I'm trying to send a request to, and receive a response from, a SOAP service using the Python package zeep.
But I can't: I get this error message:
Traceback (most recent call last):
File "/home/oussama/PycharmProjects/pythonProject/main.py", line 44, in <module>
res = client.service.addShip(**data)
File "/usr/local/lib/python3.6/dist-packages/zeep/proxy.py", line 51, in __call__
kwargs,
File "/usr/local/lib/python3.6/dist-packages/zeep/wsdl/bindings/soap.py", line 135, in send
return self.process_reply(client, operation_obj, response)
File "/usr/local/lib/python3.6/dist-packages/zeep/wsdl/bindings/soap.py", line 229, in process_reply
return self.process_error(doc, operation)
File "/usr/local/lib/python3.6/dist-packages/zeep/wsdl/bindings/soap.py", line 333, in process_error
detail=fault_node.find("detail"),
zeep.exceptions.Fault: Server was unable to process request. ---> Object reference not set to an instance of an object.
Here is my code:
import zeep

client = zeep.Client(wsdl='http://track.smsaexpress.com/SECOM/SMSAwebService.asmx?WSDL')
data = {
    'passKey': 'xxxxxxx',
    'refNo': None,
    'sentDate': None,
    'idNo': None,
    'cName': None,
    'cntry': None,
    'cCity': None,
    'cZip': None,
    'cPOBox': None,
    'cMobile': None,
    'cTel1': None,
    'cTel2': None,
    'cAddr1': None,
    'cAddr2': None,
    'shipType': None,
    'PCs': 1,
    'cEmail': None,
    'carrValue': None,
    'carrCurr': None,
    'codAmt': None,
    'weight': None,
    'custVal': None,
    'custCurr': None,
    'insrAmt': None,
    'insrCurr': None,
    'itemDesc': None,
    'sName': None,
    'sContact': None,
    'sAddr1': None,
    'sAddr2': None,
    'sCity': None,
    'sPhone': None,
    'sCntry': None,
    'prefDelvDate': None,
    'gpsPoints': None,
}
res = client.service.addShip(**data)
print(res)
Here (Link) you can find some info about the service
The zeep Client object is looking for a string and does not like the None keyword. Change each None to "" or '' (i.e. an empty string) and you should be good to go.
import zeep

client = zeep.Client(wsdl='http://track.smsaexpress.com/SECOM/SMSAwebService.asmx?WSDL')
data = {
    'passKey': 'xxxxxxx',
    'refNo': "",
    'sentDate': "",
    'idNo': "",
    'cName': "",
    'cntry': "",
    'cCity': "",
    'cZip': "",
    'cPOBox': "",
    'cMobile': "",
    'cTel1': "",
    'cTel2': "",
    'cAddr1': "",
    'cAddr2': "",
    'shipType': "",
    'PCs': 1,
    'cEmail': "",
    'carrValue': "",
    'carrCurr': "",
    'codAmt': "",
    'weight': "",
    'custVal': "",
    'custCurr': "",
    'insrAmt': "",
    'insrCurr': "",
    'itemDesc': "",
    'sName': "",
    'sContact': "",
    'sAddr1': "",
    'sAddr2': "",
    'sCity': "",
    'sPhone': "",
    'sCntry': "",
    'prefDelvDate': "",
    'gpsPoints': "",
}
res = client.service.addShip(**data)
print(res)
I think the definition in the wsdl differs from the implementation on the server side. If you change the request so that all optional fields contain a valid value, it returns a result stating that the passKey is incorrect.
If you use a mock tool like SoapUI that mocks the server side, it is perfectly fine to send a request with the dictionary looking like this:
data = {'PCs': 1}
As a side note, the wsdl has both SOAP 1.1 and SOAP 1.2 implemented; if you mock it, make sure you use the correct endpoint URL, otherwise you keep sending data to the original server.
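If you would rather leave the optional elements out of the envelope than send empty strings, zeep can also skip them explicitly via zeep.xsd.SkipValue. A minimal sketch (untested against the live service, which will still validate the passKey):
import zeep
from zeep import xsd

client = zeep.Client(wsdl='http://track.smsaexpress.com/SECOM/SMSAwebService.asmx?WSDL')

# xsd.SkipValue tells zeep to omit an element from the request entirely,
# in the spirit of the minimal {'PCs': 1} request that works against a mock.
data = {'passKey': 'xxxxxxx', 'PCs': 1, 'sentDate': xsd.SkipValue}
res = client.service.addShip(**data)
print(res)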

I can't create a service account via Python

I'm trying to create a service account via the Kubernetes Python client, but the POST only returns a dict with the manifest and does not actually create the service account.
My code is:
import kubernetes.client
from kubernetes import client, config
from kubernetes.client.rest import ApiException
from pprint import pprint

config.load_kube_config()
client.configuration.debug = True

v1 = client.CoreV1Api()

# create an instance of the API class
namespace = 'users'  # str | object name and auth scope, such as for teams and projects
body = {'metadata': {'name': 'test.david'}}
pretty = 'true'
# str | When present, indicates that modifications should not be persisted. An invalid
# or unrecognized dryRun directive will result in an error response and no further
# processing of the request. Valid values are: - All: all dry run stages will be
# processed (optional)
dry_run = 'All'

try:
    api_response = v1.create_namespaced_service_account(namespace, body, dry_run=dry_run,
                                                        pretty=pretty)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling CoreV1Api->create_namespaced_service_account: %s\n" % e)
The response:
{'api_version': 'v1',
'automount_service_account_token': None,
'image_pull_secrets': None,
'kind': 'ServiceAccount',
'metadata': {'annotations': None,
'cluster_name': None,
'creation_timestamp': datetime.datetime(2020, 5, 25, 23, 30, 26, tzinfo=tzutc()),
'deletion_grace_period_seconds': None,
'deletion_timestamp': None,
'finalizers': None,
'generate_name': None,
'generation': None,
'initializers': None,
'labels': None,
'managed_fields': None,
'name': 'test.david',
'namespace': 'users',
'owner_references': None,
'resource_version': None,
'self_link': '/api/v1/namespaces/users/serviceaccounts/test.david',
'uid': 'b64cff7c-9edf-11ea-8b22-0a714f906f03'},
'secrets': None}
What am I doing wrong?
You need to set
dry_run = ''
because when dry_run is present, it indicates that modifications should not be persisted.
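With the dry-run directive removed, a minimal sketch of the working call (same names as in the question):
from pprint import pprint

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

body = {'metadata': {'name': 'test.david'}}
# No dry_run argument this time, so the ServiceAccount is actually persisted.
api_response = v1.create_namespaced_service_account('users', body, pretty='true')
pprint(api_response)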

Correct way to invoke the copy module with module param 'content'

I have a custom action plugin and I need to write out returned variable data on the controller to a file. I'm trying this locally right now.
copy_module_args = dict()
copy_module_args["content"] = 'test'
copy_module_args["dest"] = dest
copy_module_args["owner"] = owner
copy_module_args["group"] = group
copy_module_args["mode"] = mode
try:
    result = merge_hash(result, self._execute_module(
        module_name="copy",
        module_args=copy_module_args,
        task_vars=task_vars))
except (AnsibleError, TypeError) as err:
    err_msg = "Failed to do stuff"
    raise AnsibleActionFail(to_text(err_msg), to_text(err))
The result of ._execute_module is
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Source None not found"}
The value of result is
{'msg': 'Source None not found', 'failed': True, 'invocation': {'module_args': {'content': 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', 'dest': '/home/me/testfile', 'owner': 'me', 'group': 'me', 'mode': None, 'backup': False, 'force': True, 'follow': False, 'src': None, '_original_basename': None, 'validate': None, 'directory_mode': None, 'remote_src': None, 'local_follow': None, 'checksum': None, 'seuser': None, 'serole': None, 'selevel': None, 'setype': None, 'attributes': None, 'regexp': None, 'delimiter': None, 'unsafe_writes': None}}, '_ansible_parsed': True}
This invocation is trying to use the "src" param even though I'm only passing the "content" param. I know this because when I add "src" the failure message changes. I expected, from the docs and from reading the copy module and template module source, that at a bare minimum my implementation would be equivalent to:
- name: Copy using inline content
  copy:
    content: 'test'
    dest: /home/me/testfile
Does anyone know what I'm missing or why "src" is being preferred over "content" even though it's not being specified?
The content: argument is just syntactic sugar for writing it to a tempfile, so I would guess you will need to take charge of that, or find a way to invoke the copy action, which apparently runs before the copy module.
I was able to see that "content" was being handled in the action plugin, not the module. I've adapted what I found to fit my needs: I call the action plugin instead of the module directly.
copy_module_args = dict()
copy_module_args["content"] = 'test'
copy_module_args["dest"] = dest
copy_module_args["owner"] = owner
copy_module_args["group"] = group
copy_module_args["mode"] = mode
copy_module_args["follow"] = True
copy_module_args["force"] = False
copy_action = self._task.copy()
copy_action.args.update(copy_module_args)

# Removing args passed in via the playbook that aren't meant for
# the copy module
for remove in ("arg1", "arg2", "arg3", "arg4"):
    copy_action.args.pop(remove, None)

try:
    copy_action = self._shared_loader_obj.action_loader.get(
        'copy',
        task=copy_action,
        connection=self._connection,
        play_context=self._play_context,
        loader=self._loader,
        templar=self._templar,
        shared_loader_obj=self._shared_loader_obj)
    result = merge_hash(result, copy_action.run(task_vars=task_vars))
except (AnsibleError, TypeError) as err:
    # handle failures the same way as in the first attempt above
    raise AnsibleActionFail(to_text("Failed to do stuff"), to_text(err))
This allows me to leverage copy how I originally intended, by utilising its idempotency and checksumming without having to write my own.
changed: [localhost] => {"changed": true, "checksum": "00830d74b4975d59049f6e0e7ce551477a3d9425", "dest": "/home/me/testfile", "gid": 1617705057, "group": "me", "md5sum": "6f007f4188a0d35835f4bb84a2548b66", "mode": "0644", "owner": "me", "size": 9, "src": "/home/me/.ansible/tmp/ansible-tmp-1560715301.737494-249856394953357/source", "state": "file", "uid": 1300225668}
And running it again,
ok: [localhost] => {"changed": false, "dest": "/home/me/testfile", "src": "/home/me/testfile/.ansible/tmp/ansible-local-9531902t7jt3/tmp_nq34zm5"}

Received unregistered task of type

I am trying to run tasks which are in memory.
Registered tasks on the worker:
[2012-09-13 11:10:18,928: WARNING/PoolWorker-1] [u'B.run', u'M1.run', u'M11.run', u'M22.run', u'M23.run', u'M24.run', u'M25.run', u'M26.run', u'M4.run', u'celery.backend_cleanup', u'celery.chain', u'celery.chord', u'celery.chord_unlock', u'celery.chunks', u'celery.group', u'celery.map', u'celery.starmap', u'impmod.run', u'initializerNew.run']
but it still gives errors:
[2012-09-13 11:19:59,848: ERROR/MainProcess] Received unregistered task of type 'M24.run'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.
The full contents of the message body was:
{'retries': 0, 'task': 'M24.run', 'eta': None, 'args': [{'cnt': '3', 'ids': '0001-0004,0002-0004', 'NagID': 2, 'wgt': '3', 'ModID': 'M24', 'ProfileModuleID': 64, 'mhs': '1'}, 0], 'expires': None, 'callbacks': None, 'errbacks': None, 'kwargs': {}, 'id': 'ddf5f520-803b-4dc9-ad3b-a931d90950a6', 'utc': True} (394b)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery-3.0.4-py2.7.egg/celery/worker/consumer.py", line 410, in on_task_received
strategies[name](message, body, message.ack_log_error)
KeyError: 'M24.run'
Can you attach the command which starts Celery? It looks like this application has a different sys.path; that's why the celery app couldn't import the 'M24.run' task.
Also, you should remember that Celery requires you to set the names of the modules where your tasks are located.
Something similar to
CELERY_INCLUDE = [
    'M24',
]
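A sketch of what that configuration might look like for the modules in the registered-task list above (module names assumed from that list; in Celery 3.x the older setting CELERY_IMPORTS works the same way):
# celeryconfig.py
# Every module that defines a task must be importable (and on sys.path)
# when the worker starts, otherwise its tasks stay unregistered.
CELERY_INCLUDE = ['M24', 'M25', 'M26']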