Celery Beat - Pyramid Mailer - celery

So, I have some plain Python code which works perfectly in a normal Python shell:
from pyramid_mailer.mailer import Mailer
from pyramid_mailer.message import Message
from pyramid_mailer.message import Attachment

mailer = Mailer(
    host="172.10.10.240",
    port=25)

message = Message(
    subject="Orders with invalid status",
    sender='r@example.com',
    recipients=['luke@example.com'],
    html="<p>Test</p>")

mailer.send_immediately(message)
But if I create a celery beat task such as this:
from pyramid_celery import celery_app as app
from pyramid_mailer.mailer import Mailer
from pyramid_mailer.message import Message
from pyramid_mailer.message import Attachment

mailer = Mailer(
    host="172.10.10.240",
    port=25)

@app.task
def wronglines_celery():
    message = Message(
        subject="Orders with invalid status",
        sender='r@example.com',
        recipients=['luke@example.com'],
        html="<p>Test</p>")
    mailer.send_immediately(message)
This second example does not generate an email. It runs fine and throws no error at all, even with the log level set to DEBUG.
Running celery beat with:
celery beat -A pyramid_celery.celery_app --ini development.ini
I am using the pyramid_celery plug-in as referenced in the official documentation on the Celery website. The relevant parts of my development.ini file are shown below:
[celery]
BROKER_URL = amqp://app_rmq:password@localhost:5672/myvhost
CELERY_IMPORTS = intranet.celery_tasks
# Check once a day for orders with wrong line status
[celerybeat:task1]
task = intranet.celery_tasks.wronglines_celery
type = crontab
schedule = {"hour": 16, "minute": 30}
[logger_celery]
level = DEBUG
handlers =
qualname = celery
# Begin logging configuration
[loggers]
keys = root, intranet, sqlalchemy, celery
EDIT:
If I launch celery (without beat) it works perfectly, e.g. if I launch with:
celery worker -A pyramid_celery.celery_app --ini development.ini
All tasks execute (over and over), all emails send, and nothing throws an error. It seems to be the introduction of beat that is causing issues.

Are you sure it's not working? The way your crontab is configured, it says "only run once a day, at 4:30". So if you let it run until it hits 4:30, I would expect it to execute properly.
Can you change your schedule to {} instead, to have it run every minute as a basic test?
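For instance, a minimal sketch of that test against the [celerybeat:task1] section shown above (an empty dict for a crontab schedule means "run every minute"):

[celerybeat:task1]
task = intranet.celery_tasks.wronglines_celery
type = crontab
schedule = {}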
I've added a crontab example to the examples here:
https://github.com/sontek/pyramid_celery/blob/master/examples/scheduler_example/development.ini#L33-L36
If you can provide more code (maybe a sample repo or modification of the examples already in the repo) that shows it not working I can take a look and hopefully fix the bug.

So, after much googling and frustrating debugging, I found an old GitHub issue which claimed Celery tasks were only working when launched with a worker, and not with beat. The user states:
Beat does not execute tasks, it just sends the messages. You need both a beat instance and a worker instance!
So the worker and the beat instance can be launched together with a single command, shown here:
celery worker --beat -A pyramid_celery.celery_app --ini development.ini
I will be sending a pull request today to fix the documentation regarding the correct way to launch a worker and a beat instance.
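For completeness, the same effect can be had with two separate processes, mirroring the commands shown above: one beat instance to send the scheduled messages, and one worker to execute them.

celery beat -A pyramid_celery.celery_app --ini development.ini
celery worker -A pyramid_celery.celery_app --ini development.ini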

By default, Celery tasks fail silently on error output. Most likely it throws an exception which you never saw.
To be sure what fails, put a pdb (ipdb) breakpoint in the task code, start the celery worker in the foreground, and step through the code line by line.
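A minimal sketch of that approach, reusing the task from the question (the breakpoint placement is mine):

import pdb

from pyramid_celery import celery_app as app

@app.task
def wronglines_celery():
    # Drops into the debugger in the worker's terminal when the task runs;
    # this only works with a worker running in the foreground, not daemonized.
    pdb.set_trace()
    ...  # build and send the message, stepping through line by line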

Related

How can I get result in celery when celery worker is running on a different server using AsyncResult

I am sending a task to a Celery worker running on a remote server, using the following code from, let's say, server A:
import os
from celery import Celery
from celery.result import AsyncResult

redis_url = os.getenv("REDIS_ENDPOINT")
app = Celery("tasks", backend=f"redis://{redis_url}", broker=f"redis://{redis_url}")

def send(...):
    result = app.send_task("mytask_name", ...)
    return result.id

def receive(...):
    result = AsyncResult(id=task_id, app=app)
    return result.get() if result.ready() else ""
The send method will send the task to a worker running on a remote server, let's say server B. My issue is with the receive method: AsyncResult doesn't seem to work for me. The reason I am not storing the result object I get back after sending the task is that server A may be distributed as well, so send and receive might not get called on the same server. Is there a way to get the results using Celery in this type of setup?
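For what it's worth, a cross-server lookup by task id is normally expected to work as long as every process constructs its app against the same result backend; below is a minimal sketch under that assumption (the task-name and REDIS_ENDPOINT variable are taken from the question):

import os
from celery import Celery
from celery.result import AsyncResult

# REDIS_ENDPOINT must resolve to the SAME Redis instance on every server;
# AsyncResult only reads the state the worker stored in that backend.
redis_url = os.getenv("REDIS_ENDPOINT")
app = Celery("tasks", backend=f"redis://{redis_url}", broker=f"redis://{redis_url}")

def receive(task_id):
    result = AsyncResult(id=task_id, app=app)
    # ready() is a cheap state check; get() blocks (or raises if the task
    # failed), so only call it once the state is final.
    return result.get() if result.ready() else ""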

Laravel 8 "Queue::push" is working, but "dispatch" is not

I'm facing an issue with Laravel queued jobs.
I'm using Laravel v8.40.0 with Redis v6.2.5 and Horizon v5.7.14 for managing jobs.
I have a job class called MyJob which should write a message to the log file.
If I use Queue::push(new MyJob()) everything seems to work fine: I see the job in Horizon and the new row in the log file.
But if I use dispatch(new MyJob()) or MyJob::dispatch(), it doesn't seem to push my job onto the queue: I can't see the job in Horizon and I see no results in the log file.
I was following the docs (https://laravel.com/docs/8.x/queues#dispatching-jobs) to use queues correctly, and I don't understand what I'm doing wrong.
Thank you

Locust is not running

OS: Windows 7
Locust version: 0.11.0
I am exploring the Locust tool to see if I can use it in my project.
I have created the file below to get some hands-on experience, but apparently the script is not running.
I am not sure of the reason, though.
Can someone help me please?
Locust.py
from locust import HttpLocust, TaskSet

def login(l):
    l.client.post("/login", {"username": "ellen_key", "password": "education"})

def logout(l):
    l.client.post("/logout", {"username": "ellen_key", "password": "education"})

def index(l):
    l.client.get("/")

def profile(l):
    l.client.get("/profile")

class UserBehavior(TaskSet):
    tasks = {index: 2, profile: 1}

    def on_start(self):
        login(self)

    def on_stop(self):
        logout(self)

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 5000
    max_wait = 9000
Output:
The tool just kept running without executing any tests.
With default arguments, you need to access the web monitor at localhost:8089 in order to start the test and see the application.
If you want to run without the web frontend, you need to specify arguments (clients, run time, hatch rate, etc.) that replicate what you would otherwise enter as the web UI parameters.
Run Locust in headless mode (without the UI) with the args below to start the test automatically:
locust -f locustio.py --headless -u 200 -r 10 --run-time 1h
-u specifies the number of Users to spawn.
-r specifies the spawn rate (number of users to start per second).
If you want to specify the run time for a test, you can do that with --run-time or -t
You can also run Locust with the UI:
locust -f locustio.py
Then go to Locust's web interface: once you've started Locust, open a browser and point it to http://127.0.0.1:8089.
Ref: https://docs.locust.io/en/stable/quickstart.html?#start-locust
On Windows, by default the web host listens on IPv6, so accessing the site via http://0.0.0.0:8089 might yield the error "This site can't be reached / 127.0.0.1 refused to connect."
Run locust specifying the web host as an argument; the web UI will then be accessible:
locust --web-host 0.0.0.0

Celery configuration gets updated when calling a different task

I have multiple tasks in different Django apps using a RabbitMQ broker. This was set up with the standard Django configuration and was working perfectly. I was using groups and chains, and calling them from different modules.
As standard practice, I had:
celery.py:
from celery import Celery
from django.conf import settings

app = Celery('<proj>')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
And in project/__init__.py:
from __future__ import absolute_import
from .celery import app as celery_app
All tasks inherited from celery.Task, with run() overridden.
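A minimal sketch of that layout, with hypothetical names of my own for illustration:

from celery import Task

class SyncOrdersTask(Task):
    # Hypothetical class-based task: subclass celery.Task and override run();
    # it is picked up via the app's autodiscovery configured above.
    name = 'app1.tasks.sync_orders'

    def run(self, *args, **kwargs):
        ...  # the actual work goes here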
Now I have a requirement to call a different task on a different RabbitMQ broker.
So here's what I did at the point where I had to call the different task:
diff_app = Celery('diff')
diff_app.config_from_object({'BROKER_URL':'<DIFF_BROKER_URL>'})
Now to call:
diff_app.send_task('<task_name>', (args1, arg2,))
After I do this, when I call my previous tasks, they get routed to this new broker. The moment I comment out this code, everything is fine again.
When I check the conf of celery_app (described above), the broker URL is correct. But when I check any previous task's app->conf->broker URL, it has been updated with the new broker. How do I fix this?
I removed autodiscover_tasks and associated _app with each Task class. This got me past the issue.
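Another way this is sometimes handled (my sketch, not the answerer's code; applying it here is an assumption) is to keep the second app from registering itself as the thread's "current app", so lazily bound tasks continue to resolve to the original app:

from celery import Celery

# Second app, pointed at the other broker. set_as_current=False keeps it
# from becoming the "current app", so tasks bound to the default app are
# not rerouted to this broker.
diff_app = Celery('diff', set_as_current=False)
diff_app.config_from_object({'BROKER_URL': '<DIFF_BROKER_URL>'})

diff_app.send_task('<task_name>', (args1, arg2,))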

GitLab CI - Project Build In Neverending Pending-State

I'm in some trouble with GitLab CI.
I followed the official guide at:
https://github.com/gitlabhq/gitlab-ci/blob/master/doc/installation.md
Everything was OK, no errors anywhere. I followed the runner setup, too.
Everything seemed fine.
But...
when I add a runner to a project and then try to build, nothing happens.
It could be that I have not fully understood something, or some of my configs are wrong.
I'm absolutely new to GitLab CI, but I like it and I want to learn new stuff.
I would be very glad if someone could help me in some way.
Thanks!
BIG UPDATE:
I just figured out that starting a runner process manually solves the problem:
~/gitlab-runners/gitlab-ci-runner$ bin/runner
But if I look at gitlab-ci-runner in /etc/init.d, it appears to be running!?
~/gitlab-runners/gitlab-ci-runner$ sudo /etc/init.d/gitlab-ci-runner start
Number of registered runners in PID file=1
Number of running runners=0
Error! GitLab CI runner(s) (gitlab-ci-runner) appear to be running already! Try stopping them first. Exiting.
~/gitlab-runners/gitlab-ci-runner$ sudo /etc/init.d/gitlab-ci-runner stop
Number of registered runners in PID file=1
Number of running runners=0
WARNING: Numbers of registered runners don't match number of running runners. Will try to stop them all
Registered runners=1
Running runners=0
Trying to stop registered runners...kill: No such process
OK
Trying to kill ghost runners...OK
What's wrong here? Am I out of my depth, or am I just not seeing the problem?!
Problem solved!
You need to edit some values in the /etc/init.d/gitlab-ci-runner script:
APP_ROOT="**PATH_TO**/gitlab-runners/gitlab-ci-runner"
APP_USER="**USER_WITH_DIRRIGHTS!**"
PID_PATH="$APP_ROOT/tmp/pids"
PROCESS_NAME="ruby ./bin/runner"
RUNNERS_PID="$PID_PATH/runners.pid"
RUNNERS_NUM=1 # number of runners to spawn
START_RUNNER="nohup bundle exec ./bin/runner"
Now it works!
In my case, the tags on the runner were different from the tags in .gitlab-ci.yml. Once I changed them so that the runner's tags included all of the tags used by the jobs in the config file, tasks began to run.
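As an illustration, a hypothetical .gitlab-ci.yml job (the names are mine): a runner only picks up a job whose tags are all among the runner's own tags, otherwise the build sits in Pending forever.

test:
  script:
    - ./run_tests.sh
  tags:
    - my-runner-tag   # must match a tag assigned to the runner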