Airflow 2.0.2: DAG doesn't render the template correctly - postgresql

I have two simple tasks: one gets a list of ids, and the other has to echo that list of ids with an echo command. The XCom push result looks correct; the return value (pushed to XCom) is a list of tuples, as below:
[(19343160,), (19350561,), (19351381,), (19351978,), (19356674,), (19356676,), (19356678,), (19356681,), (19356682,), (19359607,)]
Here is my code:
def read_sql(file_name):
    with open(SQL_PATH + file_name) as f:
        sql = f.read()
    return sql

def query_and_push(sql):
    pg_hook = PostgresHook(postgres_conn_id='redshift')
    records = pg_hook.get_records(sql=sql)
    return records

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'email': ['airflow#example.com'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=5),
}

with DAG(
    'xcom_using_jinja_template',
    default_args=default_args,
    description='',
    schedule_interval=timedelta(days=1),
    start_date=days_ago(2),
    tags=['test'],
) as dag:
    t1 = PythonOperator(
        task_id='get_query_id',
        python_callable=query_and_push,
        provide_context=True,
        op_kwargs={
            'sql': read_sql('warmupqueryid.sql')
        }
    )

    templated_command = dedent(
        """
        {% for item in params.query_ids %}
            echo {{ item[0] }};
        {% endfor %}
        """
    )

    t2 = BashOperator(
        task_id='templated',
        depends_on_past=False,
        bash_command=templated_command,
        params={'query_ids': " {{ ti.xcom_pull(task_ids='get_query_id'), key='return_value' }}"},
    )

    t1 >> t2
My last task fails with the error below, and I don't understand why it isn't getting the pushed XCom value. I am not sure if this is a bug or if I've just missed something.
*** Reading remote log from s3://ob-airflow-pre/logs/xcom_using_jinja_template/templated/2021-05-26T17:22:44.023533+00:00/1.log.
[2021-05-26 17:22:45,633] {taskinstance.py:877} INFO - Dependencies all met for <TaskInstance: xcom_using_jinja_template.templated 2021-05-26T17:22:44.023533+00:00 [queued]>
[2021-05-26 17:22:45,663] {taskinstance.py:877} INFO - Dependencies all met for <TaskInstance: xcom_using_jinja_template.templated 2021-05-26T17:22:44.023533+00:00 [queued]>
[2021-05-26 17:22:45,663] {taskinstance.py:1068} INFO -
--------------------------------------------------------------------------------
[2021-05-26 17:22:45,663] {taskinstance.py:1069} INFO - Starting attempt 1 of 2
[2021-05-26 17:22:45,664] {taskinstance.py:1070} INFO -
--------------------------------------------------------------------------------
[2021-05-26 17:22:45,675] {taskinstance.py:1089} INFO - Executing <Task(BashOperator): templated> on 2021-05-26T17:22:44.023533+00:00
[2021-05-26 17:22:45,679] {standard_task_runner.py:52} INFO - Started process 413 to run task
[2021-05-26 17:22:45,683] {standard_task_runner.py:76} INFO - Running: ['airflow', 'tasks', 'run', 'xcom_using_jinja_template', 'templated', '2021-05-26T17:22:44.023533+00:00', '--job-id', '1811', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/xcom_test.py', '--cfg-path', '/tmp/tmpkk2x0gyd', '--error-file', '/tmp/tmpc2ka7x4x']
[2021-05-26 17:22:45,683] {standard_task_runner.py:77} INFO - Job 1811: Subtask templated
[2021-05-26 17:22:45,859] {logging_mixin.py:104} INFO - Running <TaskInstance: xcom_using_jinja_template.templated 2021-05-26T17:22:44.023533+00:00 [running]> on host airflow-worker-1.airflow-worker.airflow.svc.cluster.local
[2021-05-26 17:22:45,945] {taskinstance.py:1281} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_EMAIL=airflow#example.com
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=xcom_using_jinja_template
AIRFLOW_CTX_TASK_ID=templated
AIRFLOW_CTX_EXECUTION_DATE=2021-05-26T17:22:44.023533+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-05-26T17:22:44.023533+00:00
[2021-05-26 17:22:45,946] {bash.py:135} INFO - Tmp dir root location:
/tmp
[2021-05-26 17:22:45,947] {bash.py:158} INFO - Running command:
echo ;
echo {;
echo {;
echo ;
echo t;
echo i;
echo .;
echo x;
echo c;
echo o;
echo m;
echo _;
echo p;
echo u;
echo l;
echo l;
echo (;
echo t;
echo a;
echo s;
echo k;
echo _;
echo i;
echo d;
echo s;
echo =;
echo ';
echo g;
echo e;
echo t;
echo _;
echo q;
echo u;
echo e;
echo r;
echo y;
echo _;
echo i;
echo d;
echo ';
echo );
echo ,;
echo ;
echo k;
echo e;
echo y;
echo =;
echo ';
echo r;
echo e;
echo t;
echo u;
echo r;
echo n;
echo _;
echo v;
echo a;
echo l;
echo u;
echo e;
echo ';
echo ;
echo };
echo };
[2021-05-26 17:22:45,954] {bash.py:169} INFO - Output:
[2021-05-26 17:22:45,955] {bash.py:173} INFO -
[2021-05-26 17:22:45,955] {bash.py:173} INFO - {
[2021-05-26 17:22:45,955] {bash.py:173} INFO - {
[2021-05-26 17:22:45,955] {bash.py:173} INFO -
[2021-05-26 17:22:45,955] {bash.py:173} INFO - t
[2021-05-26 17:22:45,955] {bash.py:173} INFO - i
[2021-05-26 17:22:45,955] {bash.py:173} INFO - .
[2021-05-26 17:22:45,955] {bash.py:173} INFO - x
[2021-05-26 17:22:45,955] {bash.py:173} INFO - c
[2021-05-26 17:22:45,955] {bash.py:173} INFO - o
[2021-05-26 17:22:45,955] {bash.py:173} INFO - m
[2021-05-26 17:22:45,955] {bash.py:173} INFO - _
[2021-05-26 17:22:45,955] {bash.py:173} INFO - p
[2021-05-26 17:22:45,956] {bash.py:173} INFO - u
[2021-05-26 17:22:45,956] {bash.py:173} INFO - l
[2021-05-26 17:22:45,956] {bash.py:173} INFO - l
[2021-05-26 17:22:45,956] {bash.py:173} INFO - bash: -c: line 34: syntax error near unexpected token `;'
[2021-05-26 17:22:45,956] {bash.py:173} INFO - bash: -c: line 34: ` echo (;'
[2021-05-26 17:22:45,956] {bash.py:177} INFO - Command exited with return code 1
[2021-05-26 17:22:45,976] {taskinstance.py:1482} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1138, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1311, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1341, in _execute_task
result = task_copy.execute(context=context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/bash.py", line 180, in execute
raise AirflowException('Bash command failed. The command returned a non-zero exit code.')
airflow.exceptions.AirflowException: Bash command failed. The command returned a non-zero exit code.
[2021-05-26 17:22:45,978] {taskinstance.py:1525} INFO - Marking task as UP_FOR_RETRY. dag_id=xcom_using_jinja_template, task_id=templated, execution_date=20210526T172244, start_date=20210526T172245, end_date=20210526T172245
[2021-05-26 17:22:46,014] {local_task_job.py:146} INFO - Task exited with return code 1
When I replace params.query_ids with the hardcoded list above, I get what I expected:
templated_command = dedent(
    """
    {% for item in [(19343160,), (19350561,), (19351381,), (19351978,), (19356674,), (19356676,), (19356678,), (19356681,), (19356682,), (19359607,)] %}
        echo {{ item[0] }};
    {% endfor %}
    """
)
Expected result:
[2021-05-27 10:59:05,887] {bash.py:158} INFO - Running command:
echo 19343160;
echo 19350561;
echo 19351381;
echo 19351978;
echo 19356674;
echo 19356676;
echo 19356678;
echo 19356681;
echo 19356682;
echo 19359607;

I answered this on the Astronomer forum but am providing the answer here as well in case it helps others.
You won't be able to use params in a Jinja-templated way directly with bash_command as written, because params is not a templated field of the BashOperator: the string you pass in params is never rendered, so the Jinja for loop iterates over it character by character, which is exactly why the log shows one echo per character. However, you can reference the return_value XCom from the get_query_id task as a variable in Jinja like so:
templated_command = dedent(
    """
    {% set query_ids = ti.xcom_pull(task_ids='get_query_id', key='return_value') %}
    {% for item in query_ids %}
        echo {{ item[0] }};
    {% endfor %}
    """
)

t2 = BashOperator(
    task_id='templated',
    depends_on_past=False,
    bash_command=templated_command,
)
Now templated_command pulls the XCom it needs directly, assigns it to a variable inside the Jinja string, and you get the output you expect.
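As a quick sanity check, you can preview what Jinja renders for the task without executing it; assuming the DAG and task IDs from this question, the Airflow 2.x CLI can render a task instance's templated fields:

airflow tasks render xcom_using_jinja_template templated 2021-05-26T17:22:44.023533+00:00

The XCom from get_query_id has to exist for the pull to return anything, so make sure t1 has already run for that execution date.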

Related

How does vimscript execute functions through multiple threads?

I have a function for generating tags via ctags:
:function! UpdateCtags()
    if !has('linux')
        echohl ErrorMsg | echo 'This function only supports running under the linux operating system.' | echohl None
        return
    endif
    echo 'Generating labels...'
    let output = system('ctags -f ' . g:tags_file . ' -R /usr/include')
    if v:shell_error == 0
        echo 'Generated labels successfully.'
    else
        echohl ErrorMsg | echo output | echohl None
    endif
:endfunction
I simply want to execute the UpdateCtags function through multiple threads. How can I do this?
My neovim version:
$ nvim -v
NVIM v0.7.2
Build type: Release
LuaJIT 2.1.0-beta3
Compiled by builduser
Features: +acl +iconv +tui
See ":help feature-compile"
system vimrc file: "$VIM/sysinit.vim"
fall-back for $VIM: "/usr/share/nvim"
Run :checkhealth for more info

boot.img too large when I compile AOSP11 source code for OTA

I am trying to enable A/B OTA in the AOSP 11 source code.
I followed these steps:
Step 1: BOARD_USES_AB_IMAGE := true
Step 2: CONFIG_ANDROID_AB=y in the U-Boot source code.
I am getting the compilation error below:
libcameradevice curr board is yy356x
[ 91% 48807/53497] Target boot image from recovery: out/target/product/xx/boot.img
FAILED: out/target/product/xx/boot.img
/bin/bash -c "(out/host/linux-x86/bin/mkbootimg --kernel out/target/product/xx/kernel --ramdis
k out/target/product/xx/ramdisk-recovery.img --cmdline "console=ttyFIQ0 androidboot.baseband=N/
A androidboot.wificountrycode=CN androidboot.veritymode=enforcing androidboot.hardware=yy30board andro
idboot.console=ttyFIQ0 androidboot.verifiedbootstate=orange firmware_class.path=/vendor/etc/firmware i
nit=/init rootwait ro init=/init androidboot.selinux=permissive buildvariant=userdebug" --recovery_dt
bo out/target/product/xx/rebuild-dtbo.img --dtb out/target/product/xx/dtb.img --os_version
11 --os_patch_level 2021-08-05 --second kernel/resource.img --header_version 2 --output out/target/
product/xx/boot.img ) && (size=$(for i in out/target/product/xx/boot.img; do stat -c "%s" "$i" | tr -d '\n'; echo +; done; echo 0); total=$(( $( echo "$size" ) )); printname=$(ec
ho -n " out/target/product/xx/boot.img" | tr " " +); maxsize=$(( 100663296-0)); if [ "
$total" -gt "$maxsize" ]; then echo "error: $printname too large ($total > $maxsize)"; false;
elif [ "$total" -gt $((maxsize - 32768)) ]; then echo "WARNING: $printname approaching size li
mit ($total now; limit $maxsize)"; fi )"
error: +out/target/product/xx/boot.img too large (103485440 > 100663296)
[ 91% 48808/53497] //libcore/mmodules/intracoreapi:art-module-intra-core-api-stubs-source metalava mer
metalava detected access to files that are not explicitly specified. See /home/test/aosp/aa_android/
out/soong/.intermediates/libcore/mmodules/intracoreapi/art-module-intra-core-api-stubs-source/android_
common/art-module-intra-core-api-stubs-source-violations.txt for details.
01:25:09 ninja failed with: exit status 1
failed to build some targets (02:42:20 (hh:mm:ss))

ERROR: YoctoProject - core-image-sato: do_populate_sdk

I am a beginner with the Yocto Project. I am trying to build the image for the BeagleBone Black board with the command line bitbake core-image-sato -c populate_sdk, and I get an error (details below) in the last task.
Build environment: Ubuntu 16.04 LTS, using Bash instead of Dash.
I have tried rebuilding many times but still hit the same error. Can anybody help me fix it?
Log file:
NOTE: Executing create_sdk_files ...
DEBUG: Executing shell function create_sdk_files
DEBUG: Shell function create_sdk_files finished
NOTE: Executing check_sdk_sysroots ...
DEBUG: Executing python function check_sdk_sysroots
DEBUG: Python function check_sdk_sysroots finished
NOTE: Executing archive_sdk ...
DEBUG: Executing shell function archive_sdk
/home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/temp/run.archive_sdk.4392: line 106: 11617 Broken pipe tar --owner=root --group=root -cf - .
11618 Killed | xz -T 0 -9 > /home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/x86_64-deploy-core-image-sato-populate-sdk/poky-glibc-x86_64-core-image-sato-armv7at2hf-neon-beaglebone-toolchain-3.0.tar.xz
WARNING: /home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/temp/run.archive_sdk.4392:1 exit 137 from 'xz -T 0 -9 > /home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/x86_64-deploy-core-image-sato-populate-sdk/poky-glibc-x86_64-core-image-sato-armv7at2hf-neon-beaglebone-toolchain-3.0.tar.xz'
ERROR: Error executing a python function in exec_python_func() autogenerated:
The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:do_populate_sdk(d)
0003:
File: '/home/huongnguyen/Desktop/poky/openembedded-core/meta/classes/populate_sdk_base.bbclass', lineno: 169, function: do_populate_sdk
0165:
0166: populate_sdk(d)
0167:
0168:fakeroot python do_populate_sdk() {
*** 0169: populate_sdk_common(d)
0170:}
0171:SSTATETASKS += "do_populate_sdk"
0172:SSTATE_SKIP_CREATION_task-populate-sdk = '1'
0173:do_populate_sdk[cleandirs] = "${SDKDEPLOYDIR}"
File: '/home/huongnguyen/Desktop/poky/openembedded-core/meta/classes/populate_sdk_base.bbclass', lineno: 166, function: populate_sdk_common
0162: manifest_type=Manifest.MANIFEST_TYPE_SDK_HOST)
0163: create_manifest(d, manifest_dir=d.getVar('SDK_DIR'),
0164: manifest_type=Manifest.MANIFEST_TYPE_SDK_TARGET)
0165:
*** 0166: populate_sdk(d)
0167:
0168:fakeroot python do_populate_sdk() {
0169: populate_sdk_common(d)
0170:}
File: '/home/huongnguyen/Desktop/poky/openembedded-core/meta/lib/oe/sdk.py', lineno: 413, function: populate_sdk
0409: env_bkp = os.environ.copy()
0410:
0411: img_type = d.getVar('IMAGE_PKGTYPE')
0412: if img_type == "rpm":
*** 0413: RpmSdk(d, manifest_dir).populate()
0414: elif img_type == "ipk":
0415: OpkgSdk(d, manifest_dir).populate()
0416: elif img_type == "deb":
0417: DpkgSdk(d, manifest_dir).populate()
File: '/home/huongnguyen/Desktop/poky/openembedded-core/meta/lib/oe/sdk.py', lineno: 60, function: populate
0056: self.sysconfdir, "ld.so.cache")
0057: self.mkdirhier(os.path.dirname(link_name))
0058: os.symlink("/etc/ld.so.cache", link_name)
0059:
*** 0060: execute_pre_post_process(self.d, self.d.getVar('SDK_POSTPROCESS_COMMAND'))
0061:
0062: def movefile(self, sourcefile, destdir):
0063: try:
0064: # FIXME: this check of movefile's return code to None should be
File: '/home/huongnguyen/Desktop/poky/openembedded-core/meta/lib/oe/utils.py', lineno: 260, function: execute_pre_post_process
0256: for cmd in cmds.strip().split(';'):
0257: cmd = cmd.strip()
0258: if cmd != '':
0259: bb.note("Executing %s ..." % cmd)
*** 0260: bb.build.exec_func(cmd, d)
0261:
0262:# For each item in items, call the function 'target' with item as the first
0263:# argument, extraargs as the other arguments and handle any exceptions in the
0264:# parent thread
File: '/home/huongnguyen/Desktop/poky/bitbake/lib/bb/build.py', lineno: 249, function: exec_func
0245: with bb.utils.fileslocked(lockfiles):
0246: if ispython:
0247: exec_func_python(func, d, runfile, cwd=adir)
0248: else:
*** 0249: exec_func_shell(func, d, runfile, cwd=adir)
0250:
0251: try:
0252: curcwd = os.getcwd()
0253: except:
File: '/usr/lib/python3.5/contextlib.py', lineno: 77, function: __exit__
0073: # Need to force instantiation so we can reliably
0074: # tell if we get the same exception back
0075: value = type()
0076: try:
*** 0077: self.gen.throw(type, value, traceback)
0078: raise RuntimeError("generator didn't stop after throw()")
0079: except StopIteration as exc:
0080: # Suppress StopIteration *unless* it's the same exception that
0081: # was passed to throw(). This prevents a StopIteration
File: '/home/huongnguyen/Desktop/poky/bitbake/lib/bb/utils.py', lineno: 431, function: fileslocked
0427: if files:
0428: for lockfile in files:
0429: locks.append(bb.utils.lockfile(lockfile))
0430:
*** 0431: yield
0432:
0433: for lock in locks:
0434: bb.utils.unlockfile(lock)
0435:
File: '/home/huongnguyen/Desktop/poky/bitbake/lib/bb/build.py', lineno: 249, function: exec_func
0245: with bb.utils.fileslocked(lockfiles):
0246: if ispython:
0247: exec_func_python(func, d, runfile, cwd=adir)
0248: else:
*** 0249: exec_func_shell(func, d, runfile, cwd=adir)
0250:
0251: try:
0252: curcwd = os.getcwd()
0253: except:
File: '/home/huongnguyen/Desktop/poky/bitbake/lib/bb/build.py', lineno: 450, function: exec_func_shell
0446: with open(fifopath, 'r+b', buffering=0) as fifo:
0447: try:
0448: bb.debug(2, "Executing shell function %s" % func)
0449: with open(os.devnull, 'r+') as stdin, logfile:
*** 0450: bb.process.run(cmd, shell=False, stdin=stdin, log=logfile, extrafiles=[(fifo,readfifo)])
0451: finally:
0452: os.unlink(fifopath)
0453:
0454: bb.debug(2, "Shell function %s finished" % func)
File: '/home/huongnguyen/Desktop/poky/bitbake/lib/bb/process.py', lineno: 182, function: run
0178: if not stderr is None:
0179: stderr = stderr.decode("utf-8")
0180:
0181: if pipe.returncode != 0:
*** 0182: raise ExecutionError(cmd, pipe.returncode, stdout, stderr)
0183: return stdout, stderr
Exception: bb.process.ExecutionError: Execution of '/home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/temp/run.archive_sdk.4392' failed with exit code 137:
/home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/temp/run.archive_sdk.4392: line 106: 11617 Broken pipe tar --owner=root --group=root -cf - .
11618 Killed | xz -T 0 -9 > /home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/x86_64-deploy-core-image-sato-populate-sdk/poky-glibc-x86_64-core-image-sato-armv7at2hf-neon-beaglebone-toolchain-3.0.tar.xz
WARNING: /home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/temp/run.archive_sdk.4392:1 exit 137 from 'xz -T 0 -9 > /home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/x86_64-deploy-core-image-sato-populate-sdk/poky-glibc-x86_64-core-image-sato-armv7at2hf-neon-beaglebone-toolchain-3.0.tar.xz'
ERROR: Logfile of failure stored in: /home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/temp/log.do_populate_sdk.4392
ERROR: Task (/home/huongnguyen/Desktop/poky/openembedded-core/meta/recipes-sato/images/core-image-sato.bb:do_populate_sdk) failed with exit code '1'
Exit code 137 means something killed xz during the build. You may be running out of memory: check dmesg after this happens; there may be a log line from the out-of-memory killer.
I had the same problem and could make it go away with XZ_MEMLIMIT="75%" bitbake image-name -c do_populate_sdk. The bitbake.conf in my version of Yocto defaults XZ_MEMLIMIT to 50%.
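If you would rather make the limit stick for every build instead of passing it per invocation, the same variable can be set in the build configuration; a minimal sketch, assuming XZ_MEMLIMIT is honoured by your Poky release as described above:

# conf/local.conf -- cap the memory xz may use when compressing the SDK tarball
XZ_MEMLIMIT = "75%"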
I had the same problem, and none of the usual methods, like deleting the hidden repo file, worked.
I then cleaned the build using bitbake -c clean mybuildname, ran the build again, and it worked flawlessly. I hope this helps someone.

Celery: Routing tasks issue - only one worker consume all tasks from all queues

I have some tasks with manually configured routes and 3 workers, each configured to consume tasks from a specific queue. But only one worker is consuming all of the tasks, and I have no idea how to fix this issue.
My celeryconfig.py
class CeleryConfig:
    enable_utc = True
    timezone = 'UTC'
    imports = ('events.tasks')
    broker_url = Config.BROKER_URL
    broker_transport_options = {'visibility_timeout': 10800}  # 3H
    worker_hijack_root_logger = False
    task_protocol = 2
    task_ignore_result = True
    task_publish_retry_policy = {'max_retries': 3, 'interval_start': 0, 'interval_step': 0.2, 'interval_max': 0.2}
    task_time_limit = 30  # sec
    task_soft_time_limit = 15  # sec
    task_default_queue = 'low'
    task_default_exchange = 'low'
    task_default_routing_key = 'low'
    task_queues = (
        Queue('daily', Exchange('daily'), routing_key='daily'),
        Queue('high', Exchange('high'), routing_key='high'),
        Queue('normal', Exchange('normal'), routing_key='normal'),
        Queue('low', Exchange('low'), routing_key='low'),
        Queue('service', Exchange('service'), routing_key='service'),
        Queue('award', Exchange('award'), routing_key='award'),
    )
    task_route = {
        # -- SCHEDULE QUEUE --
        base_path.format(task='refresh_rank'): {'queue': 'daily'},
        # -- HIGH QUEUE --
        base_path.format(task='execute_order'): {'queue': 'high'},
        # -- NORMAL QUEUE --
        base_path.format(task='calculate_cost'): {'queue': 'normal'},
        # -- SERVICE QUEUE --
        base_path.format(task='send_pin'): {'queue': 'service'},
        # -- LOW QUEUE --
        base_path.format(task='invite_to_tournament'): {'queue': 'low'},
        # -- AWARD QUEUE --
        base_path.format(task='get_lesson_award'): {'queue': 'award'},
        # -- TEST TASK
    }
    worker_concurrency = multiprocessing.cpu_count() * 2 + 1
    worker_prefetch_multiplier = 1
    worker_max_tasks_per_child = 1
    worker_max_memory_per_child = 90000  # 90MB
    beat_max_loop_interval = 60 * 5  # 5 min
I run the workers in Docker; here is part of my stack.yml:
version: "3.7"
services:
worker_high:
command: celery worker -l debug -A runcelery.celery -Q high -n worker.high#%h
worker_normal:
command: celery worker -l debug -A runcelery.celery -Q normal,award,service,low -n worker.normal#%h
worker_schedule:
command: celery worker -l debug -A runcelery.celery -Q daily -n worker.schedule#%h
beat:
command: celery beat -l debug -A runcelery.celery
flower:
command: flower -l debug -A runcelery.celery --port=5555
broker:
image: redis:5.0-alpine
I thought my config was right and the run commands were correct too, but the docker logs and Flower show that only worker.normal consumes all the tasks.
Update
Here is part of task.py:
def refresh_rank_in_tournaments():
    logger.debug(f'Start task refresh_rank_in_tournaments')
    return AnalyticBackgroundManager.refresh_tournaments_rank()
base_path is a shortcut for the full task path:
base_path = 'events.tasks.{task}'
execute_order task code:
@celery.task(bind=True, default_retry_delay=5)
def execute_order(self, private_id, **kwargs):
    try:
        return OrderBackgroundManager.execute_order(private_id, **kwargs)
    except IEXException as exc:
        raise self.retry(exc=exc)
This task is called in a view as tasks.execute_order.delay(id).
Your worker.normal is subscribed to the normal, award, service and low queues. Furthermore, the low queue is the default one, so every task that does not have an explicitly set queue will be executed on worker.normal.
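One way to confirm where a given task ends up (a sketch, assuming the module layout implied by base_path = 'events.tasks.{task}') is to pin the queue explicitly at call time and watch which worker picks the task up:

# Route one call explicitly, bypassing task_route and the default queue;
# if it lands on worker.high, the workers and queues are wired correctly
# and the problem is in the routing configuration itself.
from events.tasks import execute_order

execute_order.apply_async(args=[private_id], queue='high', routing_key='high')  # private_id is a placeholder

You can also list what each worker is actually subscribed to with celery -A runcelery.celery inspect active_queues.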

How to record FATAL events to separate file with log4perl

I'm using log4perl and I want to record all FATAL events in a separate file.
Here is my script:
#!/usr/bin/perl
use strict;
use warnings FATAL => 'all';
use Log::Log4perl qw(get_logger);
Log::Log4perl::init('log4perl.conf');
my $l_aa = get_logger('AA');
$l_aa->fatal('fatal');
my $l_bb = get_logger('BB');
$l_bb->info('info');
And here is my config file:
## What to log
log4perl.logger = FATAL, FatalLog
log4perl.logger.BB = INFO, MainLog
## Logger MainLog
log4perl.appender.MainLog = Log::Log4perl::Appender::File
log4perl.appender.MainLog.filename = log4perl_main.log
log4perl.appender.MainLog.layout = PatternLayout
log4perl.appender.MainLog.layout.ConversionPattern = \
[%d{yyyy-MM-dd HH:mm:ss}] %p - %c - %m%n
## Logger FatalLog
log4perl.appender.FatalLog = Log::Log4perl::Appender::File
log4perl.appender.FatalLog.filename = log4perl_fatal.log
log4perl.appender.FatalLog.layout = PatternLayout
log4perl.appender.FatalLog.layout.ConversionPattern = \
[%d{yyyy-MM-dd HH:mm:ss}] %p - %c - %m%n
I'm expecting that with this setup the file log4perl_fatal.log will get only FATAL-level events. But here is what I get after running the script:
$ tail -f *log
==> log4perl_fatal.log <==
[2014-04-13 08:41:22] FATAL - AA - fatal
[2014-04-13 08:41:22] INFO - BB - info
==> log4perl_main.log <==
[2014-04-13 08:41:22] INFO - BB - info
Why am I getting an INFO-level event in log4perl_fatal.log?
How can I record only FATAL-level events in a separate file?
PS Here is a GitHub repo with this script & config.
Your conf file has the following line:
log4perl.logger = FATAL, FatalLog
What you need is the following:
log4perl.logger.AA = FATAL, FatalLog
Otherwise, FatalLog becomes a catch-all for both loggers instead of being isolated to this logger instance:
my $l_aa = get_logger('AA');
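With that change, the "What to log" block of the config would read (a sketch combining the line above with the existing BB logger):

## What to log
log4perl.logger.AA = FATAL, FatalLog
log4perl.logger.BB = INFO, MainLog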
This question is covered in the log4perl FAQ: https://metacpan.org/pod/Log::Log4perl::FAQ#How-can-I-collect-all-FATAL-messages-in-an-extra-log-file
In the example, log4perl_fatal.log gets INFO-level events because of appender additivity.
To fix it, add this line to the config file:
log4perl.appender.FatalLog.Threshold = FATAL
Then the output files get the expected output:
$ tail log4perl*log
==> log4perl_fatal.log <==
[2014-05-04 20:00:39] FATAL - AA - fatal
==> log4perl_main.log <==
[2014-05-04 20:00:39] INFO - BB - info
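A third option, not shown in the repo, is to switch off appender additivity for the BB logger so that its messages stop propagating up to the root logger's FatalLog appender; a minimal sketch of that config line (untested against this exact setup):

## Keep BB's messages out of the root logger's appender
log4perl.additivity.BB = 0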