Celery: How to schedule worker processes/children restart

I'm trying to figure out how to set up my Celery workers to restart after living for a day.
Indeed, I configured my worker children/processes to restart after executing a task.
But in some cases there are no tasks to execute for 3-4 days, so I still need to restart the long-living children.
Do you know how to do this?
This is my current Celery app setup:
app = Celery(
    "celery",
    broker=f"amqp://bla@blabla/blablabla",
    backend="rpc://",
)
app.conf.task_serializer = "pickle"
app.conf.result_serializer = "pickle"
app.conf.accept_content = ["pickle", "application/json"]
app.conf.broker_connection_max_retries = 5
app.conf.broker_pool_limit = 1
app.conf.worker_max_tasks_per_child = 1  # Ensure 1 task is executed before restarting the child
app.conf.worker_max_living_time_before_restart = 60 * 60 * 24  # The setting I want
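One possible direction is Celery's pool_restart remote-control command, which recycles a worker's child processes; it has to be enabled via worker_pool_restarts and could then be broadcast once a day, for example from a beat-scheduled task. A sketch, reusing the app above (the task path in the beat entry is only illustrative):

app.conf.worker_pool_restarts = True  # allow the 'pool_restart' remote-control command

@app.task
def restart_worker_pools():
    # ask every worker to recycle its pool child processes
    app.control.broadcast("pool_restart", arguments={"reload": False})

app.conf.beat_schedule = {
    "daily-pool-restart": {
        "task": "mytasks.restart_worker_pools",  # illustrative task path
        "schedule": 60 * 60 * 24,                # once a day
    },
}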
Thank you :)

Related

Add a constraint of renewable resource

I am new to these optimisation problems; I just found the or-tools library and saw that cp_model can solve problems that are close to mine.
I have printers and some tasks that I want to schedule so that production finishes as early as possible. The tasks use machine time and raw material, which I must refill at the end of each coil. For the moment, I don't consider changing a plastic coil before all of its material is used.
Here is some information about my situation:
1- The printers are all the same; they can do every task with the same efficiency.
2- A printer can only print one task at a time.
3- A printer cannot start without a human around, so tasks can start only at certain hours (in the code below, from 0 AM to 10 AM).
4- A task can finish at any time.
5- If a printer has no more material, it needs to be changed; this can happen only during opening hours.
6- If a printer has no more material, the task is paused until new material is put in.
7- I consider the quantity of material coils to be unlimited.
Thanks to the examples and some searching in the documentation, I have been able to fix all the issues that are not related to material. I have been able to set a maximum quantity per machine, but that is not my issue.
I don't understand how I can pause/resume my intervals (for the moment I set the duration to a fixed value).
from ortools.sat.python import cp_model
from ortools.util.python import sorted_interval_list
import random

Domain = sorted_interval_list.Domain


def main():
    random.seed(0)
    nb_jobs = 10
    nb_machine = 2
    horizon = 30000000
    job_list = []  # Job: (id, time, quantity)
    listOfEnds = []
    quantityPerMachine = []
    maxQuantity = 5
    # create the jobs
    for i in range(nb_jobs):
        time = random.randrange(1, 24)
        quantity = random.randrange(1, 4)
        job_list.append([i, time, quantity])
    print([job[1] for job in job_list])
    print("total time to print = ", horizon)
    print("quantity")
    print([job[2] for job in job_list])
    print("total quantity = ", sum([job[2] for job in job_list]))
    model = cp_model.CpModel()
    makespan = model.NewIntVar(0, horizon, 'makespan')
    machineForJob = {}
    # boolean variable for each combination of machine and job; if True then the machine works on this job
    for machine in range(nb_machine):
        for job in job_list:
            j = job[0]
            machineForJob[(machine, j)] = model.NewBoolVar(f'x[{machine},{j}]')
    # for each job, enforce sum() == 1 so that exactly one machine works on it
    for j in range(nb_jobs):
        model.Add(sum([machineForJob[(machine, j)] for machine in range(nb_machine)]) == 1)
    # set the assignment of the jobs
    time_intervals = {}
    starts = {}
    ends = {}
    # The time domain represents the working hours when someone can start the tasks
    timeDomain = []  # [[0],[1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[24],[25],[26]]
    for i in range(20):
        for j in range(10):
            t = [j + i * 24]
            timeDomain.append(t)
    for machine in range(nb_machine):
        time_intervals[machine] = []
        for job in job_list:
            j = job[0]
            duration = job[1]
            starts[(machine, j)] = model.NewIntVarFromDomain(Domain.FromIntervals(timeDomain), f'start {machine},{j}')
            ends[(machine, j)] = model.NewIntVar(0, horizon, f'end {machine},{j}')
            time_intervals[machine].append(model.NewOptionalIntervalVar(starts[(machine, j)], duration, ends[(machine, j)],
                                                                        machineForJob[(machine, j)], f'interval {machine},{j} '))
    # times should not overlap; the quantity of raw material is limited
    for machine in range(nb_machine):
        # model.Add(quantityPerMachine[machine] <= maxQuantity)  # Not working as expected since the raw material cannot be refilled
        model.AddNoOverlap(time_intervals[machine])
    # calculate time per machine
    time_per_machine = []
    for machine in range(nb_machine):
        q = 0
        s = 0
        for job in job_list:
            s += job[1] * machineForJob[(machine, job[0])]
            listOfEnds.append(ends[(machine, job[0])])
            q += job[2] * machineForJob[(machine, job[0])]
        time_per_machine.append(s)
        quantityPerMachine.append(q)
    # Goal is to finish all tasks as early as possible
    model.AddMaxEquality(makespan, listOfEnds)
    model.Minimize(makespan)
    solver = cp_model.CpSolver()
    # Sets a time limit of 600 seconds.
    solver.parameters.max_time_in_seconds = 600.0
    # Solve and print
    status = solver.Solve(model)
    if status == cp_model.OPTIMAL or status == cp_model.FEASIBLE:
        print("optimal =", status == cp_model.OPTIMAL)
        print(f'Total cost = {solver.ObjectiveValue()}')
        for i in range(nb_machine):
            for j in range(nb_jobs):
                if solver.BooleanValue(machineForJob[(i, j)]):
                    print(
                        f'Machine {i} assigned to Job {j} Time = {job_list[j][1]},Quantity = {job_list[j][2]}')
                    print(f"[{solver.Value(starts[(i,j)])} ,{solver.Value(ends[(i,j)])}]")
    else:
        print('No solution found.')
    # Statistics.
    print('\nStatistics')
    print(' - conflicts: %i' % solver.NumConflicts())
    print(' - branches : %i' % solver.NumBranches())
    print(' - wall time: %f s' % solver.WallTime())


if __name__ == '__main__':
    main()
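Regarding the pause/resume part, one possible direction is to split each job into chunks and let the solver place the chunks separately. Below is a minimal standalone sketch, independent of the model above and with illustrative numbers: one job of total duration 10 is printed in two chunks on the same machine, the chunk durations are decision variables that sum to the job duration, and chunk 2 may start any time after chunk 1 ends (leaving room for a coil change in between).

from ortools.sat.python import cp_model

# Sketch: one job split into two chunks with an optional gap between them.
model = cp_model.CpModel()
horizon = 100
total_duration = 10

dur1 = model.NewIntVar(1, total_duration - 1, 'dur_chunk1')
dur2 = model.NewIntVar(1, total_duration - 1, 'dur_chunk2')
model.Add(dur1 + dur2 == total_duration)  # the two chunks cover the whole job

start1 = model.NewIntVar(0, horizon, 'start_chunk1')
end1 = model.NewIntVar(0, horizon, 'end_chunk1')
start2 = model.NewIntVar(0, horizon, 'start_chunk2')
end2 = model.NewIntVar(0, horizon, 'end_chunk2')

chunk1 = model.NewIntervalVar(start1, dur1, end1, 'chunk1')
chunk2 = model.NewIntervalVar(start2, dur2, end2, 'chunk2')
model.Add(start2 >= end1)  # chunk 2 resumes after chunk 1; a gap is allowed

# both chunks would go into the machine's AddNoOverlap list
model.AddNoOverlap([chunk1, chunk2])
model.Minimize(end2)

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print(solver.Value(start1), solver.Value(end1), solver.Value(start2), solver.Value(end2))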

PBS jobs queue then immediately exit sometimes

I've been using a PBS-managed computing cluster for a few years now at my school. A few months ago I ran into this problem and the admins were never able to figure it out. When I submit jobs, they queue and then some run immediately. I believe that the jobs that should stay queued because of a lack of resources die pretty much immediately. This happens intermittently depending on how many nodes I can use at one time. Sometimes I submit, say, 10 jobs; the first two will run, the next three will fail, then the next five will run.
I do not get either stdout or stderr files created for these failed jobs. The ones that do run do create these files. I get an email when these jobs die, which I've attached here with some identifying information removed. Exit status -9 means "Could not create/open stdout stderr files", but I don't know how to fix that problem since it's so intermittent.
PBS Job Id: 11335.pearl.hpcc.XXX.edu
Job Name: mc1055
Exec host: m09/5
Aborted by PBS Server
Job cannot be executed
See Administrator for help
Exit_status=-9
resources_used.cput=00:00:00
resources_used.vmem=0kb
resources_used.walltime=00:00:02
resources_used.mem=0kb
resources_used.energy_used=0
req_information.task_count.0=1
req_information.lprocs.0=1
req_information.thread_usage_policy.0=allowthreads
req_information.hostlist.0=m09:ppn=1
req_information.task_usage.0.task.0={"task":{"cpu_list":"9","mem_list":"0","cores":0,"threads":1,"host":"m09"}}
Error_Path: CLUSTERNAME.hpcc.XXX.edu:/PATH/TOSCRIPT/run/mc1055.e11335
Output_Path: CLUSTERNAME.hpcc.XXX.edu:/PATH/TOSCRIPT/run/mc1055.o11335
I also looked at qstat -f right when a job failed; the output is below. If I don't catch it right away, the job disappears from qstat.
Job Id: 11339.pearl.hpcc.XXX.edu
Job_Name = mc1059
Job_Owner = USERNAME@CLUSTERNAME.hpcc.XXX.edu
resources_used.cput = 00:00:00
resources_used.vmem = 0kb
resources_used.walltime = 00:00:00
resources_used.mem = 0kb
resources_used.energy_used = 0
job_state = C
queue = default
server = CLUSTERNAME.hpcc.XXX.edu
Account_Name = ADVISOR
Checkpoint = u
ctime = Mon Jan 4 20:02:25 2021
Error_Path = CLUSTERNAME.hpcc.XXX.edu/PATH/TOSCRIPT/mc1059.e11339
exec_host = m09/9
Hold_Types = n
Join_Path = n
Keep_Files = n
Mail_Points = a
mtime = Mon Jan 4 20:03:14 2021
Output_Path = CLUSTERNAME.hpcc.XXX.edu/PATH/TOSCRIPT/mc1059.o11339
Priority = 0
qtime = Mon Jan 4 20:02:25 2021
Rerunable = True
Resource_List.nodes = 1:ppn=1
Resource_List.walltime = 50:00:00
Resource_List.var = mkuuid:1e94a3e50dd44803bab2d3a7c2286ee2
Resource_List.nodect = 1
session_id = 0
Variable_List = PBS_O_QUEUE=largeq,
PBS_O_HOME=/PATH,PBS_O_LOGNAME=USERNAME,
PBS_O_PATH=lots of things
PBS_O_MAIL=/var/spool/mail/USERNAME,PBS_O_SHELL=/bin/bash,
PBS_O_LANG=en_US,KRB5CCNAME=FILE:/tmp/krb5cc_404112_hd4Yty,
PBS_O_WORKDIR=/PATH/TOSCRIPT/run,
PBS_O_HOST=CLUSTERNAME.hpcc.XXX.edu,
PBS_O_SERVER=CLUSTERNAME.hpcc.XXX.edu
euser = USERNAME
egroup = physics
queue_type = E
etime = Mon Jan 4 20:02:25 2021
exit_status = -9
submit_args = -l var=mkuuid:1e94a3e50dd44803bab2d3a7c2286ee2 -v KRB5CCNAME
/PATH/TOSCRIPT/run/tmp/montec_1059
start_time = Mon Jan 4 20:03:14 2021
start_count = 1
fault_tolerant = False
comp_time = Mon Jan 4 20:03:14 2021
job_radix = 0
total_runtime = 7.218811
submit_host = CLUSTERNAME.hpcc.XXX.edu
init_work_dir = /PATH/TOSCRIPT/run
request_version = 1
req_information.task_count.0 = 1
req_information.lprocs.0 = 1
req_information.thread_usage_policy.0 = allowthreads
req_information.hostlist.0 = m09:ppn=1
req_information.task_usage.0.task.0.cpu_list = 5
req_information.task_usage.0.task.0.mem_list = 1
req_information.task_usage.0.task.0.cores = 0
req_information.task_usage.0.task.0.threads = 1
req_information.task_usage.0.task.0.host = m09

Odoo 12: Not enough time limit to finish the backup?

I use auto_backup to back up the production database every day.
It was working well until now.
Now the backup can't run to completion: I only get half the size of the .zip file and it is impossible to restore it.
Normally, the backup takes about 15 minutes.
I think it's related to the Odoo configuration.
Here it is:
workers = 3
longpolling_port = 8072
limit_memory_soft = 2013265920
limit_memory_hard = 2415919104
limit_request = 8192
limit_time_cpu = 600
limit_time_real = 3600
limit_time_real_cron = 3600
proxy_mode = True
Can you help me?
I have another question: what does limit_time_real_cron = -1 mean if limit_time_real_cron = 0 is unlimited?
Try to increase limit_time_cpu.
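For illustration only, raising the CPU time limit (and possibly the matching real-time limits) could look like the lines below; the numbers are placeholders to be tuned to how long the backup actually takes:
limit_time_cpu = 1800
limit_time_real = 7200
limit_time_real_cron = 7200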

Celery: Routing tasks issue - only one worker consumes all tasks from all queues

I have some tasks with manually configured routes and 3 workers that were configured to consume tasks from specific queues. But only one worker is consuming all of the tasks and I have no idea how to fix this issue.
My celeryconfig.py:
import multiprocessing

from kombu import Exchange, Queue


class CeleryConfig:
    enable_utc = True
    timezone = 'UTC'
    imports = ('events.tasks',)
    broker_url = Config.BROKER_URL
    broker_transport_options = {'visibility_timeout': 10800}  # 3H
    worker_hijack_root_logger = False
    task_protocol = 2
    task_ignore_result = True
    task_publish_retry_policy = {'max_retries': 3, 'interval_start': 0, 'interval_step': 0.2, 'interval_max': 0.2}
    task_time_limit = 30  # sec
    task_soft_time_limit = 15  # sec
    task_default_queue = 'low'
    task_default_exchange = 'low'
    task_default_routing_key = 'low'
    task_queues = (
        Queue('daily', Exchange('daily'), routing_key='daily'),
        Queue('high', Exchange('high'), routing_key='high'),
        Queue('normal', Exchange('normal'), routing_key='normal'),
        Queue('low', Exchange('low'), routing_key='low'),
        Queue('service', Exchange('service'), routing_key='service'),
        Queue('award', Exchange('award'), routing_key='award'),
    )
    task_route = {
        # -- SCHEDULE QUEUE --
        base_path.format(task='refresh_rank'): {'queue': 'daily'},
        # -- HIGH QUEUE --
        base_path.format(task='execute_order'): {'queue': 'high'},
        # -- NORMAL QUEUE --
        base_path.format(task='calculate_cost'): {'queue': 'normal'},
        # -- SERVICE QUEUE --
        base_path.format(task='send_pin'): {'queue': 'service'},
        # -- LOW QUEUE --
        base_path.format(task='invite_to_tournament'): {'queue': 'low'},
        # -- AWARD QUEUE --
        base_path.format(task='get_lesson_award'): {'queue': 'award'},
        # -- TEST TASK --
    }
    worker_concurrency = multiprocessing.cpu_count() * 2 + 1
    worker_prefetch_multiplier = 1
    worker_max_tasks_per_child = 1
    worker_max_memory_per_child = 90000  # 90MB
    beat_max_loop_interval = 60 * 5  # 5 min
I run the workers in Docker; here is part of my stack.yml:
version: "3.7"
services:
  worker_high:
    command: celery worker -l debug -A runcelery.celery -Q high -n worker.high@%h
  worker_normal:
    command: celery worker -l debug -A runcelery.celery -Q normal,award,service,low -n worker.normal@%h
  worker_schedule:
    command: celery worker -l debug -A runcelery.celery -Q daily -n worker.schedule@%h
  beat:
    command: celery beat -l debug -A runcelery.celery
  flower:
    command: flower -l debug -A runcelery.celery --port=5555
  broker:
    image: redis:5.0-alpine
I thought that my config was right and the run commands were correct too, but the docker logs and Flower show that only worker.normal consumes all the tasks.
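One possible check (a sketch, assuming the runcelery.celery app used in the commands above can be imported) is to ask each running worker which queues it is actually subscribed to:

from runcelery import celery

# prints a mapping of worker name -> list of queues it consumes from
print(celery.control.inspect().active_queues())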
Update
Here is part of task.py:
def refresh_rank_in_tournaments():
    logger.debug(f'Start task refresh_rank_in_tournaments')
    return AnalyticBackgroundManager.refresh_tournaments_rank()
base_path is a shortcut for the full task path:
base_path = 'events.tasks.{task}'
execute_order task code:
@celery.task(bind=True, default_retry_delay=5)
def execute_order(self, private_id, **kwargs):
    try:
        return OrderBackgroundManager.execute_order(private_id, **kwargs)
    except IEXException as exc:
        raise self.retry(exc=exc)
This task is called in a view as tasks.execute_order.delay(id).
Your worker.normal is subscribed to the normal,award,service,low queues. Furthermore, the low queue is the default one, so every task that does not have an explicitly set queue will be executed by worker.normal.
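As a complement, routing can also be forced per call with apply_async, which is a quick way to verify which queue (and therefore which worker) picks a task up. A sketch reusing the execute_order task from the question:

# sketch: publish to an explicit queue at call time instead of relying on the route map
tasks.execute_order.apply_async(args=[private_id], queue='high')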

Getting description of a PBS job queue

Is there any command that would allow me to query the description of a running/queued PBS job for its attributes such as RAM, number of processors, GPUs, etc.?
Use the qstat command:
qstat -f job_id
Expanding on the answer posted by dimm:
If a job is registered in a queue, you can query its attributes with the qstat command. If the job has already finished, you can only grep the relevant information from the log files. There is a handy tracejob command to do the grepping for you.
In PBS Pro and Torque, each job registered with a queue has two sets of attributes:
Resource_List has the resources requested for a running or queued job;
resources_used holds the actual resource usage for a running job.
For example, in PBS Pro you could get the following attributes for Resource_List:
Resource_List.mem = 2000mb
Resource_List.mpiprocs = 8
Resource_List.ncpus = 8
Resource_List.nodect = 1
Resource_List.place = free
Resource_List.qlist = queue1
Resource_List.select = 1:ncpus=8:mpiprocs=8
Resource_List.walltime = 02:00:00
 
And the following values for resources_used
resources_used.cpupercent = 800
resources_used.cput = 00:03:31
resources_used.mem = 529992kb
resources_used.ncpus = 8
resources_used.vmem = 3075580kb
resources_used.walltime = 00:00:28
For finished jobs, tracejob can fetch only some of the requested resources:
ncpus=8:mem=2048000kb
and the final values for resources_used
resources_used.cpupercent=799
resources_used.cput=00:54:29
resources_used.mem=725520kb
resources_used.ncpus=8
resources_used.vmem=3211660kb
resources_used.walltime=00:06:53
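Its basic invocation mirrors qstat and just takes the job id; tracejob then searches the PBS log files for that job:
tracejob job_id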