Celery beat stuck on start

After celery launch, I have the following output:
[2022-12-24 17:42:25,851: INFO/MainProcess] Connected to redis://localhost:6379//
[2022-12-24 17:42:25,854: INFO/MainProcess] mingle: searching for neighbors
[2022-12-24 17:42:26,506: INFO/Beat] beat: Starting...
[2022-12-24 17:42:26,862: INFO/MainProcess] mingle: all alone
[2022-12-24 17:42:26,881: INFO/MainProcess] celery#Bulrathi-Mac-mini.local ready.
implying that beat is stuck on start (indeed, the periodic tasks are not executed).
I'm starting Celery like this:
celery -A app.celery worker -B -l info
My code is
from datetime import timedelta

from celery import Celery
from celery.utils.log import get_task_logger

celery = Celery(
    __name__,
    broker='redis://localhost:6379',
    include=['tasks']
)
celery.conf.timezone = 'UTC'

logger = get_task_logger(__name__)

@celery.task
def periodic_task():
    logger.debug('yay')

CELERYBEAT_SCHEDULE = {
    'every-second': {
        'task': 'periodic_task',
        'schedule': timedelta(seconds=1),
    }
}
I googled for similar issues but didn't find a suitable solution. Your help is highly appreciated; thank you in advance.
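For reference, a minimal sketch of how a beat schedule is usually attached to the app (this is not the asker's original code; it assumes the task is defined in tasks.py, as the include above suggests). The key point is that the schedule dict has to be placed on the app's configuration, as beat_schedule (or the older CELERYBEAT_SCHEDULE setting), and the task name has to match the registered name:

from datetime import timedelta

from celery import Celery

celery = Celery(__name__, broker='redis://localhost:6379', include=['tasks'])
celery.conf.timezone = 'UTC'

# In this layout, beat only reads schedules that live on the app's
# configuration; a bare module-level dict is never picked up.
# 'tasks.periodic_task' is assumed to be the registered name of a task
# defined in tasks.py.
celery.conf.beat_schedule = {
    'every-second': {
        'task': 'tasks.periodic_task',
        'schedule': timedelta(seconds=1),
    },
}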

Related

airflow dag - task is immediately put into 'up_for_retry' state ('start_date' is 1 day ago)

I do not know whether I lack Airflow scheduler knowledge or whether this is a potential bug in Airflow.
The situation is like this:
My DAG's start date is set to "start_date": airflow.utils.dates.days_ago(1).
I uploaded the DAG to the folder where Airflow scans for DAGs,
then I turned the DAG on (it was 'off' by default).
The tasks in the pipeline immediately go into 'up_for_retry', and you do not really see what had been tried before.
Airflow version info: 1.10.14, running on Kubernetes in Azure,
using the Celery executor with Redis.
The task instance details are listed below:
Task Instance Details
Dependencies Blocking Task From Getting Scheduled
Dependency: Reason
Task Instance State: Task is in the 'up_for_retry' state which is not a valid state for execution. The task must be cleared in order to be run.
Not In Retry Period: Task is not ready for retry yet but will be retried automatically. Current date is 2021-05-17T09:06:57.239015+00:00 and task will be retried at 2021-05-17T09:09:50.662150+00:00.
Am I missing something, or how can I judge whether this is a bug or expected behavior?
In addition, below is the DAG definition, as requested.
import airflow
from airflow import DAG
from airflow.contrib.operators.databricks_operator import DatabricksSubmitRunOperator
from airflow.models import Variable

dag_args = {
    "owner": "our_project_team_name",
    "retries": 1,
    "email": ["ouremail_address_replaced_by_this_string"],
    "email_on_failure": True,
    "email_on_retry": True,
    "depends_on_past": False,
    "start_date": airflow.utils.dates.days_ago(1),
}

# Implement cluster reuse on Databricks, pick from light, medium, heavy cluster type based on workloads
clusters = Variable.get("our_project_team_namejob_cluster_config", deserialize_json=True)

databricks_connection = "our_company_databricks"
adl_connection = "our_company_wasb"

pipeline_name = "process_our_data_from_boomi"
dag = DAG(dag_id=pipeline_name, default_args=dag_args, schedule_interval="0 3 * * *")

notebook_dir = "/Shared/our_data_name"
lib_path_sub = ""
lib_name_dev_plus_branch = ""
atlas_library = {
    "whl": f"dbfs:/python-wheels/atlas{lib_path_sub}/atlas_library-0{lib_name_dev_plus_branch}-py3-none-any.whl"
}

create_our_data_name_source_data_from_boomi_notebook_params = {
    "existing_cluster_id": clusters["our_cluster_name"],
    "notebook_task": {
        "notebook_path": f"{notebook_dir}/create_our_data_name_source_data_from_boomi",
        "base_parameters": {"Extraction_date": "{{ ds_nodash }}"},
    },
}

create_our_data_name_standardized_table_from_source_xml_notebook_params = {
    "existing_cluster_id": clusters["our_cluster_name"],
    "notebook_task": {
        "notebook_path": f"{notebook_dir}/create_our_data_name_standardized_table_from_source_xml",
        "base_parameters": {"Extraction_date": "{{ ds_nodash }}"},
    },
}

create_our_data_name_enriched_table_from_standardized_notebook_params = {
    "existing_cluster_id": clusters["our_cluster_name"],
    "notebook_task": {
        "notebook_path": f"{notebook_dir}/create_our_data_name_enriched",
        "base_parameters": {"Extraction_date": "{{ ds_nodash }}"},
    },
}

layer_1_task = DatabricksSubmitRunOperator(
    task_id="Load_our_data_name_to_source",
    databricks_conn_id=databricks_connection,
    dag=dag,
    json=create_our_data_name_source_data_from_boomi_notebook_params,
    libraries=[atlas_library],
)

layer_2_task = DatabricksSubmitRunOperator(
    task_id="Load_our_data_name_to_standardized",
    databricks_conn_id=databricks_connection,
    dag=dag,
    json=create_our_data_name_standardized_table_from_source_xml_notebook_params,
    libraries=[
        {"maven": {"coordinates": "com.databricks:spark-xml_2.11:0.5.0"}},
        {"pypi": {"package": "inflection"}},
        atlas_library,
    ],
)

layer_3_task = DatabricksSubmitRunOperator(
    task_id="Load_our_data_name_to_enriched",
    databricks_conn_id=databricks_connection,
    dag=dag,
    json=create_our_data_name_enriched_table_from_standardized_notebook_params,
    libraries=[atlas_library],
)

layer_1_task >> layer_2_task >> layer_3_task
After getting some help from @AnandVidvat about trying a retries=0 experiment, and a friend's help to change the operator to either DummyOperator or PythonOperator, I can confirm that the issue has nothing to do with the DatabricksOperator or Airflow version 1.10.x, i.e. it is not an Airflow bug.
So in summary: when a DAG has a meaningful operator, my setup fails on the first execution without any task log, and works OK during the retry (the task log hides the fact that it had been retried, because the failure produced no logs).
In order to reduce the total run time, the workaround/patch, before finding the real cause, is to set retry_delay to 10 seconds (the default is 5 minutes, which makes the DAG run unnecessarily long); a minimal sketch of that workaround is shown below.
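This is how the workaround might look when applied through the DAG's default_args (a sketch; retry_delay is the standard Airflow setting, and the 10-second value comes from the paragraph above):

from datetime import timedelta

dag_args = {
    # ... the other default_args stay as in the DAG above ...
    "retries": 1,
    # Workaround: shrink the retry delay so the "fails once, then succeeds on
    # retry" behaviour does not add the default 5-minute wait to every run.
    "retry_delay": timedelta(seconds=10),
}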
The next step is to figure out what is causing this first failure, by checking logs on the scheduler or worker pods in our current setup (Azure K8s, PostgreSQL, Redis, Celery executor).
P.S. I used the DAG below for testing and reached this conclusion.
import airflow
from airflow import DAG
from airflow.operators.python_operator import PythonOperator

import time
from pprint import pprint

dag_args = {
    "owner": "min_test",
    "retries": 1,
    "email": ["c243d70b.domain.onmicrosoft.com#emea.teams.ms"],
    "email_on_failure": True,
    "email_on_retry": True,
    "depends_on_past": False,
    "start_date": airflow.utils.dates.days_ago(1),
}

pipeline_name = "min_test_debug_airflow_baseline_PythonOperator_1_retry"
dag = DAG(
    dag_id=pipeline_name,
    default_args=dag_args,
    schedule_interval="0 3 * * *",
    tags=["min_test_airflow"],
)

def my_sleeping_function(random_base):
    """This is a function that will run within the DAG execution"""
    time.sleep(random_base)

def print_context(ds, **kwargs):
    pprint(kwargs)
    print(ds)
    return "Whatever you return gets printed in the logs"

run_this = PythonOperator(
    task_id="print_the_context",
    provide_context=True,
    python_callable=print_context,
    dag=dag,
)

# Generate 3 sleeping tasks, sleeping from 0 to 2 seconds respectively
for i in range(3):
    task = PythonOperator(
        task_id="sleep_for_" + str(i),
        python_callable=my_sleeping_function,
        op_kwargs={"random_base": float(i) / 10},
        dag=dag,
    )
    task.set_upstream(run_this)

Celery beat running tasks every minute even though it's set for every two hours

I'm trying to use celery beat to run tasks daily at a specific time.
However, for testing purposes, I'm setting up the two tasks to run every two hours. This is what my config looks like:
CELERYBEAT_SCHEDULE = {
    'daily-google-connect': {
        'task': 'app.engine.schedule_fetcher',
        'schedule': crontab(hour='*/2'),
        'args': (['G'])
    },
    'daily-facebook-connect': {
        'task': 'app.engine.schedule_fetcher',
        'schedule': crontab(hour='*/2'),
        'args': (['F'])
    }
}
This is how I run celery:
celery beat -A app.engine.celery --schedule=/tmp/celerybeat-schedule --pidfile=/tmp/celerybeat.pid -l info
Everything runs in Docker containers using docker-compose so I make sure that I re-build the app's image and restart the containers.
I even entered the running container and I can see the crontab setup in the code... however, in my logs I see the task running every minute.
What else can I do to debug this?
I appreciate any help,
Thanks
Your crontab is configured to run "at every minute past every 2nd hour".
from celery.schedules import crontab
str(crontab(hour='*/2'))
'<crontab: * */2 * * * (m/h/d/dM/MY)>'
Ref: https://crontab.guru/#*_*/2_*_*_*
The correct crontab for "every two hours" is: 0 */2 * * *.
Ref: https://crontab.guru/every-2-hours
This should fix your issue:
CELERYBEAT_SCHEDULE = {
    'daily-google-connect': {
        'task': 'app.engine.schedule_fetcher',
        'schedule': crontab(minute='0', hour='*/2'),
        'args': (['G'])
    },
    'daily-facebook-connect': {
        'task': 'app.engine.schedule_fetcher',
        'schedule': crontab(minute='0', hour='*/2'),
        'args': (['F'])
    }
}
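As a quick sanity check, the corrected expression can be inspected the same way as above and should now show a fixed minute field (a small verification snippet, not part of the original answer):

from celery.schedules import crontab

str(crontab(minute='0', hour='*/2'))
# expected: '<crontab: 0 */2 * * * (m/h/d/dM/MY)>'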

How to restart a dag when it fails on airflow 1.8?

With:
default_args = {
    ...
    'retries': 1,
    'retry_delay': timedelta(seconds=1),
    ...
}
I can get the failing task to retry several times, but how can I make the whole DAG start again when a task fails?
Of course, automatically...
You can run a second "Fail Check" DAG that queries for any task instances where the task_id matches what you want and the state is failed using the provide_session util. Then, you'll want to optionally clear downstream tasks as well and set the state of the relevant DagRun to running.
from datetime import datetime, timedelta

from sqlalchemy import and_
import json

from airflow import DAG
from airflow.models import TaskInstance, DagRun
from airflow.utils.db import provide_session
from airflow.operators.python_operator import PythonOperator

default_args = {'start_date': datetime(2018, 6, 11),
                'retries': 2,
                'retry_delay': timedelta(minutes=2),
                'email': [],
                'email_on_failure': True}

dag = DAG('__RESET__FAILED_TASKS',
          default_args=default_args,
          schedule_interval='@daily',
          catchup=False
          )

@provide_session
def check_py(session=None, **kwargs):
    relevant_task_id = 'relevant_task_id'
    obj = (session
           .query(TaskInstance)
           .filter(and_(TaskInstance.task_id == relevant_task_id,
                        TaskInstance.state == 'failed'))
           .all())
    if not obj:
        raise KeyError('No failed Task Instances of {} exist.'.format(relevant_task_id))
    else:
        # Clear the relevant tasks.
        (session
         .query(TaskInstance)
         .filter(and_(TaskInstance.task_id == relevant_task_id,
                      TaskInstance.state == 'failed'))
         .delete())
        # Clear downstream tasks and set relevant DAG state to RUNNING
        for _ in obj:
            _ = json.loads(_.val)
            # OPTIONAL: Clear downstream tasks in the specified Dag Run.
            for task in _['downstream_tasks']:
                (session
                 .query(TaskInstance)
                 .filter(and_(TaskInstance.task_id == task,
                              TaskInstance.dag_id == _['dag_id'],
                              TaskInstance.execution_date == datetime.strptime(_['ts'],
                                                                               "%Y-%m-%dT%H:%M:%S")))
                 .delete())
            # Set the Dag Run state to "running"
            dag_run = (session
                       .query(DagRun)
                       .filter(and_(DagRun.dag_id == _['dag_id'],
                                    DagRun.execution_date == datetime.strptime(_['ts'],
                                                                               "%Y-%m-%dT%H:%M:%S")))
                       .first())
            dag_run.set_state('running')

with dag:
    run_check = PythonOperator(task_id='run_check',
                               python_callable=check_py,
                               provide_context=True)
    run_check
The canonical solution to this in Airflow is to create a SubDagOperator that wraps all the other tasks in the DAG, and apply the retry to that.
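A sketch of that idea, assuming Airflow 1.x import paths; the factory function, DAG names, and the DummyOperator placeholder are illustrative, not from the original question:

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.subdag_operator import SubDagOperator

def build_subdag(parent_dag_id, child_id, args):
    """Factory returning the sub-DAG that holds the real pipeline tasks."""
    subdag = DAG(dag_id='{}.{}'.format(parent_dag_id, child_id),
                 default_args=args,
                 schedule_interval='@daily')
    # Placeholder for the actual pipeline tasks.
    DummyOperator(task_id='whole_pipeline', dag=subdag)
    return subdag

default_args = {'start_date': datetime(2018, 6, 11)}
dag = DAG('wrapped_pipeline', default_args=default_args, schedule_interval='@daily')

wrapper = SubDagOperator(
    task_id='pipeline',
    subdag=build_subdag(dag.dag_id, 'pipeline', default_args),
    # The retry is applied to the wrapping operator, so the intent (per the
    # answer above) is that a failure inside the sub-DAG gets retried as a whole.
    retries=1,
    retry_delay=timedelta(minutes=5),
    dag=dag,
)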
You could potentially use the on_failure_callback feature to call a python / bash script that would restart the DAG. There is not currently a feature provided by Airflow to automatically restart the DAG upon task failure.
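A hedged sketch of the on_failure_callback approach; the callback and the restart script path are placeholders, since Airflow itself does not ship such a restart mechanism:

from datetime import datetime
import subprocess

from airflow import DAG

def restart_dag_callback(context):
    """Invoked by Airflow when a task using these default_args fails."""
    dag_id = context['dag'].dag_id
    execution_date = context['ts']
    # Placeholder: call whatever script clears and re-triggers the run,
    # e.g. a wrapper around the `airflow clear` CLI for this dag_id/date.
    subprocess.call(['/path/to/restart_dag.sh', dag_id, execution_date])

default_args = {
    'start_date': datetime(2018, 6, 11),
    # Attaching the callback via default_args applies it to every task.
    'on_failure_callback': restart_dag_callback,
}

dag = DAG('self_restarting_dag', default_args=default_args, schedule_interval='@daily')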

celery giving Rate limit attempt for unknown task task name

Basically I am running two Celery workers, for different modules and different queues, but on the same RabbitMQ:
celery worker -l info -A module_name.main.tasks -Q queue_one
celery worker -l info -A module_name.sub.sub_task -Q queue_two
When I try to rate limit the 1st task, present in one module, I get this error from the 2nd worker running the other module:
app.control.rate_limit('module_name.main.tasks.method', '30/m')
Rate limit attempt for unknown task
I would prefer the rate limit call to go to the worker that is working on that module, and not to the other workers that are not working on that module.
Any idea how to resolve this?
Update: adding code:
celery_worker_base.py:
from __future__ import absolute_import
from celery import Celery

app = Celery('poc',
             backend='mongodb://user:pass@ip:27017/collection',
             broker='amqp://user:pass@ip/vhost',
             include=['poc.main.proj.tasks'])

# Optional configuration, see the application user guide.
app.conf.update(
    CELERY_TASK_RESULT_EXPIRES=3600,
    CELERY_ROUTES={'poc.main.proj.tasks': {'queue': 'proj_tasks'}}
)

app.control.rate_limit('poc.main.proj.tasks.get', '30/m')
app.control.rate_limit('poc.main.proj.tasks.compute', '30/m')

if __name__ == '__main__':
    app.start()
celery worker code: tasks.py
from __future__ import absolute_import
from poc.celery.celery_worker_base import app

@app.task
def get(url):
    print "calling get"

@app.task
def compute(info):
    print "calling compute"
Another module: celery_master.py
from __future__ import absolute_import
from celery import Celery
from datetime import timedelta

from poc.config.config import *
from boto import ec2

master_app = Celery('poc',
                    backend='mongodb://user:pass@ip:27017/collection',
                    broker='amqp://user:pass@ip/vhost',
                    include=['poc.main.proj.tasks'])

# Optional configuration, see the application user guide.
master_app.conf.update(
    CELERY_TASK_RESULT_EXPIRES=3600,
    CELERYBEAT_SCHEDULE={
        'instance-check-every-fifteen-minute': {
            'task': 'poc.main.instance.check.check_count',
            'schedule': timedelta(seconds=900),
            'options': {'queue': 'instance_check'}
        }
    },
    CELERY_ROUTES={'poc.main.instance.check': {'queue': 'instance_check'}},
    CELERY_TIMEZONE='UTC'
)

region = ec2.connect_to_region(
    REGION,
    aws_access_key_id=AWS_ACCESS_KEY,
    aws_secret_access_key=AWS_SECRET_KEY
)

if __name__ == '__main__':
    master_app.start()
master worker: check.py
from __future__ import absolute_import
from celery import Celery

from poc.config.config import *
from poc.celery.celery_master import master_app, region

@master_app.task
def check_count():
    print "calling check"
PS: thanks for not down-voting the question.
Regarding Celery not being able to find the task, I would ensure you are passing task names exactly as they are listed by app.control.app.tasks.
This provides a dict of known tasks, where the keys are the names that are eligible for passing to control.rate_limit().
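A short sketch of that check, reusing the app from the question's celery_worker_base.py (inspect().registered() is standard Celery and lists the task names each running worker has registered):

from poc.celery.celery_worker_base import app

# The keys of app.tasks are the fully qualified names this app knows about.
print(sorted(app.tasks.keys()))

# Each running worker reports the task names it has registered; a rate limit
# only takes effect on workers that actually know the task.
print(app.control.inspect().registered())

# Use one of those names verbatim, e.g. 'poc.main.proj.tasks.get'.
app.control.rate_limit('poc.main.proj.tasks.get', '30/m')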

want to find when the job will be started in celery

I am new to Celery. I have some configuration in celeryconfig.py as follows:
from datetime import timedelta

BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = "redis"
CELERY_REDIS_HOST = "localhost"
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 0
CELERY_IMPORT = ("mail")

CELERYBEAT_SCHEDULE = {
    'runs-every-30-seconds': {
        'task': 'mail.mail',
        'schedule': timedelta(seconds=30),
    },
}
I have scheduled the job to run periodically every 30 seconds. Now I want the job to start on 29 Aug at 4:00 PM; how should I configure this?
You should use a crontab schedule instead of a timedelta. The Celery documentation discusses this specifically and provides some useful examples; see Crontab schedules.
Here is an example from Celery:
from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    # Executes every Monday morning at 7:30 A.M.
    'every-monday-morning': {
        'task': 'tasks.add',
        'schedule': crontab(hour=7, minute=30, day_of_week=1),
        'args': (16, 16),
    },
}
To make this work for your case, you will also need to specify the crontab day_of_month and month_of_year parameters.
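For example, a schedule entry that fires at 4:00 PM on 29 August could look like this (the task name is taken from the question's config; the timezone is whatever the app is configured with):

from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    'start-mail-on-29-aug': {
        'task': 'mail.mail',
        # 16:00 on the 29th day of August.
        'schedule': crontab(minute=0, hour=16, day_of_month=29, month_of_year=8),
    },
}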