How can celery beat queue a task it missed while it was down?

Suppose we have the following task schedule (taken from docs):
app.conf.beat_schedule = {
    # Executes every Monday morning at 7:30 a.m.
    'add-every-monday-morning': {
        'task': 'tasks.add',
        'schedule': crontab(hour=7, minute=30, day_of_week=1),
        'args': (16, 16),
    },
}
During deployment, our Django application will be down for a few minutes. What happens if the server is down at the time the task was supposed to be queued, in this case Monday morning at 7:30 a.m.?
I imagine the task will not be executed until the next Monday, but it should be possible to detect that the task was not queued and then queue the delayed task. How can that be done?
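Celery beat itself does not replay runs it missed while it was down, so the detection has to live in your own code. Below is a minimal sketch of the idea from the question, assuming a Django cache backend; the names LAST_RUN_KEY, record_run and catch_up_missed_add are hypothetical helpers, not Celery APIs.
from datetime import datetime, timedelta

from django.core.cache import cache

from tasks import add  # the task referenced in the beat_schedule above

LAST_RUN_KEY = "add-every-monday-morning:last-run"

def record_run():
    # Call this at the start of tasks.add so the last run time is persisted.
    cache.set(LAST_RUN_KEY, datetime.utcnow(), timeout=None)

def previous_monday_0730(now):
    # Most recent Monday 07:30 at or before `now` (naive UTC, for simplicity).
    candidate = now.replace(hour=7, minute=30, second=0, microsecond=0)
    candidate -= timedelta(days=now.weekday())  # back to Monday of this week
    if candidate > now:
        candidate -= timedelta(weeks=1)
    return candidate

def catch_up_missed_add():
    # Run once after each deployment, e.g. from a release hook or AppConfig.ready().
    now = datetime.utcnow()
    last_run = cache.get(LAST_RUN_KEY)
    if last_run is None or last_run < previous_monday_0730(now):
        add.delay(16, 16)  # queue the run that was missed while beat was down
A persistent scheduler such as django-celery-beat keeps its schedule in the database, but as far as I know it also does not back-fill runs that were missed while beat was down.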

Related

airflow dag - task is immediately put into 'up_for_retry' state ('start_date' is 1 day ago)

I do not know whether I lack Airflow scheduler knowledge or whether this is a potential bug in Airflow.
The situation is like this:
my DAG's start date is set to "start_date": airflow.utils.dates.days_ago(1),
I uploaded the DAG to the folder where Airflow scans for DAGs,
I then turned the DAG on (it was 'off' by default),
and the tasks in the pipeline immediately go into 'up_for_retry', without you really seeing what had been tried before.
Airflow version info: 1.10.14, running on Kubernetes in Azure, with the Celery executor and Redis.
The task instance details are listed below:
Task Instance Details
Dependencies Blocking Task From Getting Scheduled
Dependency: Task Instance State
Reason: Task is in the 'up_for_retry' state which is not a valid state for execution. The task must be cleared in order to be run.
Dependency: Not In Retry Period
Reason: Task is not ready for retry yet but will be retried automatically. Current date is 2021-05-17T09:06:57.239015+00:00 and task will be retried at 2021-05-17T09:09:50.662150+00:00.
Am I missing something needed to judge whether this is a bug or expected behaviour?
In addition, below is the DAG definition, as requested.
import airflow
from airflow import DAG
from airflow.contrib.operators.databricks_operator import DatabricksSubmitRunOperator
from airflow.models import Variable

dag_args = {
    "owner": "our_project_team_name",
    "retries": 1,
    "email": ["ouremail_address_replaced_by_this_string"],
    "email_on_failure": True,
    "email_on_retry": True,
    "depends_on_past": False,
    "start_date": airflow.utils.dates.days_ago(1),
}

# Implement cluster reuse on Databricks, pick from light, medium, heavy cluster type based on workloads
clusters = Variable.get("our_project_team_namejob_cluster_config", deserialize_json=True)

databricks_connection = "our_company_databricks"
adl_connection = "our_company_wasb"

pipeline_name = "process_our_data_from_boomi"
dag = DAG(dag_id=pipeline_name, default_args=dag_args, schedule_interval="0 3 * * *")

notebook_dir = "/Shared/our_data_name"
lib_path_sub = ""
lib_name_dev_plus_branch = ""
atlas_library = {
    "whl": f"dbfs:/python-wheels/atlas{lib_path_sub}/atlas_library-0{lib_name_dev_plus_branch}-py3-none-any.whl"
}

create_our_data_name_source_data_from_boomi_notebook_params = {
    "existing_cluster_id": clusters["our_cluster_name"],
    "notebook_task": {
        "notebook_path": f"{notebook_dir}/create_our_data_name_source_data_from_boomi",
        "base_parameters": {"Extraction_date": "{{ ds_nodash }}"},
    },
}

create_our_data_name_standardized_table_from_source_xml_notebook_params = {
    "existing_cluster_id": clusters["our_cluster_name"],
    "notebook_task": {
        "notebook_path": f"{notebook_dir}/create_our_data_name_standardized_table_from_source_xml",
        "base_parameters": {"Extraction_date": "{{ ds_nodash }}"},
    },
}

create_our_data_name_enriched_table_from_standardized_notebook_params = {
    "existing_cluster_id": clusters["our_cluster_name"],
    "notebook_task": {
        "notebook_path": f"{notebook_dir}/create_our_data_name_enriched",
        "base_parameters": {"Extraction_date": "{{ ds_nodash }}"},
    },
}

layer_1_task = DatabricksSubmitRunOperator(
    task_id="Load_our_data_name_to_source",
    databricks_conn_id=databricks_connection,
    dag=dag,
    json=create_our_data_name_source_data_from_boomi_notebook_params,
    libraries=[atlas_library],
)

layer_2_task = DatabricksSubmitRunOperator(
    task_id="Load_our_data_name_to_standardized",
    databricks_conn_id=databricks_connection,
    dag=dag,
    json=create_our_data_name_standardized_table_from_source_xml_notebook_params,
    libraries=[
        {"maven": {"coordinates": "com.databricks:spark-xml_2.11:0.5.0"}},
        {"pypi": {"package": "inflection"}},
        atlas_library,
    ],
)

layer_3_task = DatabricksSubmitRunOperator(
    task_id="Load_our_data_name_to_enriched",
    databricks_conn_id=databricks_connection,
    dag=dag,
    json=create_our_data_name_enriched_table_from_standardized_notebook_params,
    libraries=[atlas_library],
)

layer_1_task >> layer_2_task >> layer_3_task
After getting some help from #AnandVidvat about trying a retries=0 experiment, and some help from a friend to swap the operator for either a DummyOperator or a PythonOperator, I can confirm that the issue has nothing to do with the Databricks operator or with Airflow version 1.10.x, i.e. it is not an Airflow bug.
So, in summary: when a DAG has a meaningful operator, my setup fails on the first execution without producing any task log, and the retry then works OK (the task log hides the fact that it had been retried, because the failure left no logs).
To reduce the total run time, the workaround/patch until the real cause is found is to set retry_delay to 10 seconds (the default is 5 minutes, which makes the DAG run unnecessarily long).
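For reference, a minimal sketch of that workaround (an assumption about where it goes; retry_delay normally lives in the DAG's default_args):
from datetime import timedelta

dag_args = {
    # ... the other default_args from the DAG above ...
    "retries": 1,
    "retry_delay": timedelta(seconds=10),  # shortens the default 5-minute wait between retries
}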
The next step is to figure out what is causing this first failure, by checking the logs on the scheduler or worker pods in our current setup (Azure K8s, PostgreSQL, Redis, Celery executor).
P.S. I used the DAG below for testing and reached this conclusion.
import airflow
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
import time
from pprint import pprint

dag_args = {
    "owner": "min_test",
    "retries": 1,
    "email": ["c243d70b.domain.onmicrosoft.com#emea.teams.ms"],
    "email_on_failure": True,
    "email_on_retry": True,
    "depends_on_past": False,
    "start_date": airflow.utils.dates.days_ago(1),
}

pipeline_name = "min_test_debug_airflow_baseline_PythonOperator_1_retry"
dag = DAG(
    dag_id=pipeline_name,
    default_args=dag_args,
    schedule_interval="0 3 * * *",
    tags=["min_test_airflow"],
)

def my_sleeping_function(random_base):
    """This is a function that will run within the DAG execution"""
    time.sleep(random_base)

def print_context(ds, **kwargs):
    pprint(kwargs)
    print(ds)
    return "Whatever you return gets printed in the logs"

run_this = PythonOperator(
    task_id="print_the_context",
    provide_context=True,
    python_callable=print_context,
    dag=dag,
)

# Generate 3 sleeping tasks, sleeping 0, 0.1 and 0.2 seconds respectively
for i in range(3):
    task = PythonOperator(
        task_id="sleep_for_" + str(i),
        python_callable=my_sleeping_function,
        op_kwargs={"random_base": float(i) / 10},
        dag=dag,
    )
    task.set_upstream(run_this)

Check programatically the status of an action in oozie workflow from another oozie workflow

I am running some code in an Oozie workflow named WF1, in an action named AC1. This workflow is not scheduled but runs continuously; usually action AC1 gets its turn about 4 times a day, and the time at which it runs is not known in advance.
Now there is another Oozie workflow, WF2, scheduled via an Oozie coordinator to run at 4:00 AM. WF2 runs for only 3-4 minutes, as it is a small piece of code that needs to run in off-peak hours.
In WF2, we want to check the status of the workflow action AC1 (running as part of WF1; every time an AC1 instance runs, a new id gets assigned to it). Is it possible to get the status of AC1 using its name only, without knowing the id?
I know I have a workaround where I can store the status of AC1 in a Hive table and keep querying it to learn the status, but if something is offered out of the box, that would be helpful.
There are several ways to do it (as you mention).
The built-in way is to use the Oozie REST API's job information endpoint.
You can do a simple GET request and receive a response with the job status of all actions; in the example below you would go to actions, look for your action name, and read its status:
HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
.
{
    id: "0-200905191240-oozie-W",
    appName: "indexer-workflow",
    appPath: "hdfs://user/bansalm/indexer.wf",
    externalId: "0-200905191230-oozie-pepe",
    user: "bansalm",
    status: "RUNNING",
    conf: "<configuration> ... </configuration>",
    createdTime: "Thu, 01 Jan 2009 00:00:00 GMT",
    startTime: "Fri, 02 Jan 2009 00:00:00 GMT",
    endTime: null,
    run: 0,
    actions: [
        {
            id: "0-200905191240-oozie-W#indexer",
            name: "AC1",
            type: "map-reduce",
            conf: "<configuration> ...</configuration>",
            startTime: "Thu, 01 Jan 2009 00:00:00 GMT",
            endTime: "Fri, 02 Jan 2009 00:00:00 GMT",
            status: "OK",
            externalId: "job-123-200903101010",
            externalStatus: "SUCCEEDED",
            trackerUri: "foo:8021",
            consoleUrl: "http://foo:50040/jobdetailshistory.jsp?jobId=...",
            transition: "reporter",
            data: null,
            errorCode: null,
            errorMessage: null,
            retries: 0
        },
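If WF2 can run a small script (for example in a shell or Python action), the lookup could be sketched like this; the host/port and the running-job filter are assumptions, while the endpoints themselves (GET /v1/jobs and GET /v1/job/<id>?show=info) are the standard Oozie REST API:
import requests

OOZIE_URL = "http://oozie-host:11000/oozie"  # assumption: replace with your Oozie server

# Find the running WF1 job by workflow name, since its id changes between runs
jobs = requests.get(
    f"{OOZIE_URL}/v1/jobs",
    params={"filter": "name=WF1;status=RUNNING", "jobtype": "wf"},
).json()
wf1_id = jobs["workflows"][0]["id"]

# Fetch the job info and look the action up by name instead of by id
info = requests.get(f"{OOZIE_URL}/v1/job/{wf1_id}", params={"show": "info"}).json()
ac1 = next((a for a in info.get("actions", []) if a["name"] == "AC1"), None)
print(ac1["status"] if ac1 else "AC1 has not run in this workflow instance")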

Celery beat running tasks every minute even though it's set for every two hours

I'm trying to use celery beat to run tasks daily at a specific time.
However, for testing purposes, I'm setting up two tasks to run every two hours. This is what my config looks like:
CELERYBEAT_SCHEDULE = {
    'daily-google-connect': {
        'task': 'app.engine.schedule_fetcher',
        'schedule': crontab(hour='*/2'),
        'args': (['G'])
    },
    'daily-facebook-connect': {
        'task': 'app.engine.schedule_fetcher',
        'schedule': crontab(hour='*/2'),
        'args': (['F'])
    }
}
This is how I run celery:
celery beat -A app.engine.celery --schedule=/tmp/celerybeat-schedule --pidfile=/tmp/celerybeat.pid -l info
Everything runs in Docker containers using docker-compose, so I make sure that I rebuild the app's image and restart the containers.
I even entered the running container and saw the crontab setup in the code... however, in my logs I see the task running every minute.
What else can I do to debug this?
I appreciate any help,
Thanks
Your crontab is configured to run "At every minute past every 2nd hour".
from celery.schedules import crontab
str(crontab(hour='*/2'))
'<crontab: * */2 * * * (m/h/d/dM/MY)>'
Ref: https://crontab.guru/#*_*/2_*_*_*
The correct crontab for "every two hours" is: 0 */2 * * *.
Ref: https://crontab.guru/every-2-hours
This should fix your issue:
CELERYBEAT_SCHEDULE = {
    'daily-google-connect': {
        'task': 'app.engine.schedule_fetcher',
        'schedule': crontab(minute='0', hour='*/2'),
        'args': (['G'])
    },
    'daily-facebook-connect': {
        'task': 'app.engine.schedule_fetcher',
        'schedule': crontab(minute='0', hour='*/2'),
        'args': (['F'])
    }
}
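To double-check the corrected entries, the same repr trick from above can be reused; with minute='0' I would expect the schedule to render with a leading 0 instead of *:
from celery.schedules import crontab

str(crontab(minute='0', hour='*/2'))
# expected: '<crontab: 0 */2 * * * (m/h/d/dM/MY)>'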

Can celery's beat tasks execute at timed intervals?

This is the beat tasks setting:
celery_app.conf.update(
    CELERYBEAT_SCHEDULE={
        'taskA': {
            'task': 'crawlerapp.tasks.manual_crawler_update',
            'schedule': timedelta(seconds=3600),
        },
        'taskB': {
            'task': 'crawlerapp.tasks.auto_crawler_update_day',
            'schedule': timedelta(seconds=3600),
        },
        'taskC': {
            'task': 'crawlerapp.tasks.auto_crawler_update_hour',
            'schedule': timedelta(seconds=3600),
        },
    })
Normally taskA, taskB and taskC execute at the same time after my command celery -A myproj beat starts the beat scheduler. But now I want taskA to execute first, taskB to execute some time later, and taskC last, and after 3600 seconds they should all execute again in the same order, and so on. Is that possible?
Yes, it's possible. Chain the three tasks, wrap the chain in a task of its own, and schedule that task.
In your tasks.py file (assuming the three update tasks and celery_app are defined or imported there):
from celery import chain

@celery_app.task
def chained_task():
    # Run the three updates strictly one after another
    chain(
        manual_crawler_update.si(),
        auto_crawler_update_day.si(),
        auto_crawler_update_hour.si(),
    ).apply_async()
Then schedule chained_task instead of the three individual tasks:
celery_app.conf.update(
    CELERYBEAT_SCHEDULE={
        'chained-task': {
            'task': 'crawlerapp.tasks.chained_task',
            'schedule': timedelta(seconds=3600),
        },
    })
This way the three tasks execute in order once every 3600 seconds.

want to find when the job will be started in celery

I am new to Celery. I have some configuration in celeryconfig.py as follows:
from datetime import timedelta

BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = "redis"
CELERY_REDIS_HOST = "localhost"
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 0
CELERY_IMPORTS = ("mail",)
CELERYBEAT_SCHEDULE = {
    'runs-every-30-seconds': {
        'task': 'mail.mail',
        'schedule': timedelta(seconds=30),
    },
}
I have scheduled the job to run periodically every 30 seconds. Now I want the job to start on 29 Aug at 4:00 PM; how should I configure this?
You should use a crontab schedule instead of a timedelta. The Celery documentation discusses this specifically and provides some useful examples; see Crontab schedules.
Here is an example from Celery:
from celery.schedules import crontab
CELERYBEAT_SCHEDULE = {
    # Executes every Monday morning at 7:30 A.M.
    'every-monday-morning': {
        'task': 'tasks.add',
        'schedule': crontab(hour=7, minute=30, day_of_week=1),
        'args': (16, 16),
    },
}
To make this work for your case, you will also need to specify the crontab day_of_month and month_of_year parameters, as sketched below.
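For instance, a sketch of such an entry (reusing the mail.mail task from the question; note that a crontab like this fires every year on that date, there is no built-in one-off schedule):
from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    'mail-on-29-aug-4pm': {
        'task': 'mail.mail',
        # 4:00 PM on 29 August
        'schedule': crontab(minute=0, hour=16, day_of_month=29, month_of_year=8),
    },
}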