How to use "-b" with celery - celery

Hi everyone.
I have a problem with Celery's "-b" parameter. I found it in the Celery documentation:
-b, --broker
celery command line option
but it doesn't seem to take effect when I use it like this, for example:
celery -A tasks worker -b redis://yuhui:mypassword@192.168.1.100/0 --loglevel=INFO
tasks.py
from celery import Celery

app = Celery('tasks')

@app.task
def add(x, y):
    return x + y
The command-line log looks like this:
-------------- celery@yuhui v4.4.2 (cliffs)
--- ***** -----
-- ******* ---- Linux-5.3.0-46-generic-x86_64-with-debian-buster-sid 2020-04-19 11:45:00
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: tasks:0x7f903f18cdd0
- ** ---------- .> transport: redis://yuhui:**@192.168.1.100:6379/0
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
But it just hangs with no output when I execute celery -A tasks inspect active.
If I change this line to
app = Celery('tasks', broker='redis://yuhui:mypassword@192.168.1.100/0')
it works fine.
BTW, I don't have Redis on my current machine.
So, how do I use this parameter?

You need to pass the broker parameter to the inspect command as well:
celery -A tasks inspect -b redis://yuhui:mypassword@192.168.1.100/0 active
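Alternatively, if you don't want to repeat -b on every command, you can set the broker on the app itself (as you already noticed); a minimal sketch using the same Redis URL:
from celery import Celery

# With the broker configured on the app, worker, inspect and the other
# celery sub-commands all pick it up without needing -b on the command line.
app = Celery('tasks', broker='redis://yuhui:mypassword@192.168.1.100/0')

@app.task
def add(x, y):
    return x + y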

Related

Celery with eventlet or gevent doesn't work properly

I'm running Celery from code, like this:
import logging
# 'celery' is assumed to be the Celery application instance defined elsewhere.

if __name__ == '__main__':
    worker = celery.Worker()
    worker.setup_defaults(
        loglevel=logging.INFO,
        pool='eventlet',
        concurrency=500
    )
    worker.start()
When running Celery like this, I get the following output:
-------------- celery@some.server.com v5.2.7 (dawn-chorus)
--- ***** -----
-- ******* ---- Linux-5.10.0-19-cloud-amd64-x86_64-with-glibc2.31 2022-12-14 15:23:55
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: __main__:0x7fdda296baf0
- ** ---------- .> transport: redis://localhost:6379/6
- ** ---------- .> results: redis://localhost:6379/6
- *** --- * --- .> concurrency: 500 (eventlet)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. task1
Received task: task1[b248771c-6dd5-469d-bc53-eaf63c4f6b60]
Received task: task1[b248771c-6dd5-469d-bc53-eaf63c4f6b62]
Received task: task1[b248771c-6dd5-469d-bc53-eaf63c4f6b64]
Received task: task1[b248771c-6dd5-469d-bc53-eaf63c4f6b67]
Received task: task1[b248771c-6dd5-469d-bc53-eaf63c4f6b68]
Received task: task1[b248771c-6dd5-469d-bc53-eaf63c4f6b70]
But no tasks are executed.
If I press CTRL+C, a warm shutdown is performed, and NOW the tasks are executed:
> CTRL+C
[INFO/MainProcess] Task task1[b248771c-6dd5-469d-bc53-eaf63c4f6b60] succeeded in 1.0062870910001s: None
[INFO/MainProcess] Task task1[b248771c-6dd5-469d-bc53-eaf63c4f6b62] succeeded in 1.0062870910001s: None
[INFO/MainProcess] Task task1[b248771c-6dd5-469d-bc53-eaf63c4f6b64] succeeded in 1.0062870910001s: None
[INFO/MainProcess] Task task1[b248771c-6dd5-469d-bc53-eaf63c4f6b66] succeeded in 1.0062870910001s: None
[INFO/MainProcess] Task task1[b248771c-6dd5-469d-bc53-eaf63c4f6b67] succeeded in 1.0062870910001s: None
[INFO/MainProcess] Task task1[b248771c-6dd5-469d-bc53-eaf63c4f6b68] succeeded in 1.0062870910001s: None
One odd thing I noticed is that the number of tasks loaded is related to the concurrency parameter. If I set it to 2, three tasks are loaded, and on warm shutdown two are executed and the last one is put back in the queue.
(uh?)
Now, if I change the pool to gevent, it loads identically, but NO tasks are executed when I stop the script; instead, they are all put back in the queue.
Finally - AND THIS IS IMPORTANT - if I set the pool to prefork, it works ... perfectly fine ...
So the issue is not related to my code. Do you have any idea what is going on?
I tried disabling mingle, heartbeat and gossip, with no luck.
The versions of eventlet and gevent are the latest as of today:
eventlet : 0.33.2
gevent : 22.10.2
What is going on? Is Celery compatible with eventlet/gevent or is it just a myth?
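One thing worth checking, and this is an assumption on my part rather than something established above: when a worker is started from the celery command line with -P eventlet or -P gevent, Celery monkey-patches the standard library very early, before the app and the broker transport are imported. A worker embedded in your own script does not get that early patching, so blocking socket calls can keep the green-thread hub from ever running the tasks. A rough sketch of applying the patch first, reusing the same launcher as above (the import path of the app is made up for illustration):
# Hypothetical sketch: monkey-patch before anything that performs network I/O
# (including the Celery app and its transport) is imported.
import eventlet
eventlet.monkey_patch()

import logging
from myproject.app import celery  # hypothetical location of the Celery app instance

if __name__ == '__main__':
    worker = celery.Worker()
    worker.setup_defaults(
        loglevel=logging.INFO,
        pool='eventlet',
        concurrency=500
    )
    worker.start()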

/opt/gitlab/embedded/bin/ruby: error while loading shared libraries: libstdc++.so.6: cannot open shared object file: No such file or directory

I'm trying to install gitlab-ce on my Raspberry Pi 4B, 4 GB model. My operating system is Raspberry Pi OS Lite 64-bit.
The installer was taken from here.
There was an error running gitlab-ctl reconfigure:
Multiple failures occurred:
* Mixlib::ShellOut::ShellCommandFailed occurred in Chef Infra Client run: runit_service[gitlab-kas] (gitlab-kas::enable line 121) had an error: Mixlib::ShellOut::ShellCommandFailed: ruby_block[restart_log_service] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/runit/libraries/provider_runit_service.rb line 65) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of /opt/gitlab/embedded/bin/sv restart /opt/gitlab/service/gitlab-kas/log ----
STDOUT: timeout: run: /opt/gitlab/service/gitlab-kas/log: (pid 21560) 34s, got TERM
STDERR:
---- End output of /opt/gitlab/embedded/bin/sv restart /opt/gitlab/service/gitlab-kas/log ----
Ran /opt/gitlab/embedded/bin/sv restart /opt/gitlab/service/gitlab-kas/log returned 1
* Mixlib::ShellOut::ShellCommandFailed occurred in delayed notification: execute[clear the gitlab-rails cache] (gitlab::gitlab-rails line 477) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '127'
---- Begin output of /opt/gitlab/bin/gitlab-rake cache:clear ----
STDOUT:
STDERR: /opt/gitlab/embedded/bin/ruby: error while loading shared libraries: libstdc++.so.6: cannot open shared object file: No such file or directory
---- End output of /opt/gitlab/bin/gitlab-rake cache:clear ----
Ran /opt/gitlab/bin/gitlab-rake cache:clear returned 127
* Mixlib::ShellOut::ShellCommandFailed occurred in delayed notification: runit_service[gitlab-kas] (gitlab-kas::enable line 121) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of /opt/gitlab/embedded/bin/sv restart /opt/gitlab/service/gitlab-kas ----
STDOUT: timeout: run: /opt/gitlab/service/gitlab-kas: (pid 21561) 65s, got TERM
STDERR:
---- End output of /opt/gitlab/embedded/bin/sv restart /opt/gitlab/service/gitlab-kas ----
Ran /opt/gitlab/embedded/bin/sv restart /opt/gitlab/service/gitlab-kas returned 1
Update: Distro info:
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

The installation scripts for the Raspberry Pi only work on Debian Buster. Notice that the distro/version specified for the package is raspbian/buster.
However, you have installed the newer Bullseye version of Raspbian:
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
In order to use this install script, you'll need to use the legacy Debian Buster release of Raspberry Pi OS.

How to push the process running status in a CentOS machine using Telegraf?

I am trying to monitor the running processes on my machine.
To achieve this I leveraged Telegraf, InfluxDB and Grafana.
In Telegraf I used the procstat plugin and queried the procstat_lookup measurement.
My telegraf conf file:
[[inputs.procstat]]
  pid_tag = true
  exe = ""
  systemd_unit = "sshd"

[[inputs.procstat]]
  pid_tag = true
  exe = ""
  systemd_unit = "influxd"
When I run the query SELECT * FROM procstat_lookup WHERE time >= now() - 120s on the Ubuntu machine, I get this output:
time                 exe  pattern  pid_count  pid_finder  result   result_code  running  systemd_unit
----                 ---  -------  ---------  ----------  ------   -----------  -------  ------------
1569906900000000000                1          pgrep       success  0            1        apache2
1569906900000000000                1          pgrep       success  0            1        sshd
But when I run the same query on the CentOS machine, I get this output:
time                 pid_count  systemd_unit
----                 ---------  ------------
1569909600000000000  1          apache2
1569909600000000000  1          sshd
I wonder why the output differs for the same configuration on the two different operating systems.

Using celery in airflow

I am new to Airflow; so far I have found out that Airflow uses Celery to schedule its tasks. To run Airflow, I need to run the command 'airflow worker', which starts Celery. However, it always fails here. From searching the Internet, most reported problems involve a celery.py written by the users themselves; I use Celery only by starting Airflow, so my case is a bit different.
Could anyone help me? Below is the output showing the bug.
airflow@linux-test:~$ airflow worker
[2018-06-22 07:29:04,068] {__init__.py:57} INFO - Using executor CeleryExecutor
[2018-06-22 07:29:04,125] {driver.py:124} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/Grammar.txt
[2018-06-22 07:29:04,146] {driver.py:124} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/PatternGrammar.txt
-------------- celery@linux-test v4.2.0 (windowlicker)
---- **** -----
--- * *** * -- Linux-4.15.0-22-generic-x86_64-with-Ubuntu-18.04-bionic 2018-06-22 07:29:04
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: airflow.executors.celery_executor:0x7f2267122310
- ** ---------- .> transport: amqp://airflow:**@localhost:5672/airflow
- ** ---------- .> results: postgresql://airflow:**@localhost:5432/airflow
- *** --- * --- .> concurrency: 16 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> default exchange=default(direct) key=default
[2018-06-22 07:29:04,630] {__init__.py:57} INFO - Using executor CeleryExecutor
[2018-06-22 07:29:04,689] {driver.py:124} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/Grammar.txt
[2018-06-22 07:29:04,715] {driver.py:124} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/PatternGrammar.txt
Starting flask
[2018-06-22 07:29:04,858] {_internal.py:88} INFO - * Running on http://0.0.0.0:8793/ (Press CTRL+C to quit)
[2018-06-22 07:29:06,122: ERROR/ForkPoolWorker-1] Pool process <celery.concurrency.asynpool.Worker object at 0x7f22648c8e10> error: TypeError("Required argument 'object' (pos 1) not found",)
Traceback (most recent call last):
  File "/home/airflow/.local/lib/python2.7/site-packages/billiard/pool.py", line 289, in __call__
    sys.exit(self.workloop(pid=pid))
  File "/home/airflow/.local/lib/python2.7/site-packages/billiard/pool.py", line 347, in workloop
    req = wait_for_job()
  File "/home/airflow/.local/lib/python2.7/site-packages/billiard/pool.py", line 447, in receive
    ready, req = _receive(1.0)
  File "/home/airflow/.local/lib/python2.7/site-packages/billiard/pool.py", line 419, in _recv
    return True, loads(get_payload())
  File "/home/airflow/.local/lib/python2.7/site-packages/billiard/common.py", line 107, in pickle_loads
    return load(BytesIO(s))
TypeError: Required argument 'object' (pos 1) not found
[2018-06-22 07:29:06,127: ERROR/MainProcess] Process 'ForkPoolWorker-1' pid:18839 exited with 'exitcode 1'

Uninstalling librabbitmq worked for me: pip uninstall librabbitmq. I don't really understand why, but apparently there is some optimization in that library that makes this fail. This is the answer I found on some website (I had to translate the page, hence my limited understanding of the solution).
Hope it helps.
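For context, and this is my own assumption rather than anything confirmed in the thread: with an amqp:// broker URL, Kombu prefers the compiled librabbitmq client whenever it is installed, which is why uninstalling it changes the behaviour. An alternative sometimes suggested is to force the pure-Python transport by using a pyamqp:// URL instead; a minimal generic Celery sketch (Airflow itself reads the broker URL from its own configuration rather than from a Celery() call, and the credentials below are placeholders):
from celery import Celery

# 'pyamqp://' explicitly selects the pure-Python py-amqp transport,
# bypassing the librabbitmq C extension even when it is installed.
app = Celery('example', broker='pyamqp://airflow:password@localhost:5672/airflow')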

Devel::Cover merging coverage data for Perl scripts and modules

I'm having issues merging coverage data for Perl scripts and modules. Running Devel::Cover individually works just fine, but when I try to combine the data I lose the statistics for the Perl script, though not for the module.
Let me explain.
I have a directory tree that looks like so..
Code_Coverage_Test
|
|---->lib
|
|---->t
|
Inside the root Code_Coverage_Test directory I have the Build.pl file that builds the tests for the module, and a script that kicks off two other scripts that automate some commands for me.
./Build.pl
#!/usr/bin/perl -w
use strict;
use Module::Build;

my $buildTests = Module::Build->new(
    module_name    => 'testPMCoverage',
    license        => 'perl',
    dist_abstract  => 'Perl .pm Test Code Coverage',
    dist_author    => 'me@myEmail.com',
    build_requires => {
        'Test::More' => '0.10',
    },
);
$buildTests->create_build_script();
./startTests.sh
#!/bin/sh
cd t
./doPMtest.sh
./doPLtest.sh
cd ../
perl Build testcover
Inside the lib dir I have the files I'm trying to run code coverage on.
lib/testPLCoverage.pl
#!/usr/bin/perl -w
use strict;
print "Ok!";
lib/testPMCoverage.pm
use strict;
use warnings;

package testPMCoverage;

sub hello {
    return "Hello";
}

sub bye {
    return "Bye";
}

1;
In the t dir I have my .t test file for the module and two scripts that kick off the tests for me, both of which are called by startTests.sh in the root directory.
t/testPMCoverage.t
#!/usr/bin/perl -w
use strict;
use Test::More;
require_ok( 'testPMCoverage' );
my $test = testPMCoverage::hello();
is($test, "Hello", "hello() test");
done_testing();
t/doPLtest.sh
#!/bin/sh
#Test 1
cd ../
cd lib
perl -MDevel::Cover=-db,../cover_db testPLCoverage.pl
t/doPMtest.sh
#!/bin/bash
cd ../
perl Build.pl
perl Build test
The issue I'm running into is that when the doPLtest.sh script runs, I get coverage data, no problem:
---------------------------- ------ ------ ------ ------ ------ ------ ------
File STMT Bran Cond Sub pod Time total
---------------------------- ------ ------ ------ ------ ------ ------ ------
testPLCoverage.pl 100.0 n/a n/a 100.0 n/a 100.0 100.0
Total 100.0 n/a n/a 100.0 n/a 100.0 100.0
---------------------------- ------ ------ ------ ------ ------ ------ ------
However, when the doPMtest.sh script finishes and the startTests.sh script initiates the Build testcover command, I lose that data along the way and get these messages:
Reading database path/Code_Coverage_Tests/cover_db
Devel::Cover: Warning: can't open testPLCoverage.pl for MD5 digest: No such file or directory
Devel::Cover: Warning: can't locate structure for statement in testPLCoverage.pl
Devel::Cover: Warning: can't locate structure for subroutine in testPLCoverage.pl
Devel::Cover: Warning: can't locate structure for time in testPLCoverage.pl
...and somehow I lose the data:
---------------------------- ------ ------ ------ ------ ------ ------ ------
File STMT Bran Cond Sub pod Time total
---------------------------- ------ ------ ------ ------ ------ ------ ------
blib/lib/testPMCoverage.pm 87.5 n/a n/a 75.0 0.0 100.0 71.4
testPLCoverage.pl n/a n/a n/a n/a n/a n/a n/a
Total 87.5 n/a n/a 75.0 0.0 100.0 71.4
---------------------------- ------ ------ ------ ------ ------ ------ ------
How can I combine the Perl module and Perl script tests to get valid code coverage in ONE file?
Perl doesn't store the full path to the files it uses. If it finds the file via a relative path then only the relative path is stored. You can see this in the paths perl shows in the warning and error messages from those files.
When Devel::Cover deals with files it uses the path given by perl. You can see this in the reports from Devel::Cover where you have testPLCoverage.pl and blib/lib/testPMCoverage.pm.
What this means for you in practice is that whenever you put coverage into a coverage DB you should ensure that you are doing it from the same directory, so that Devel::Cover can match and locate the files in the coverage DB.
I think this is the problem you are hitting.
My suggestion is that in t/doPLtest.sh you don't cd into lib. You can run something like:
perl -Mblib -MDevel::Cover=-db,../cover_db lib/testPLCoverage.pl
(As an aside, why is that file in lib?)
I think that would mean that Devel::Cover would be running from the project root in each case and so should allow it to match and find the files.