Is there any Pythonic way to get all running/pending Celery tasks for the currently logged-in Django user? Pseudocode for what I am trying:
@celery.task
def process_task(user, task_to_do):
    # get all running or pending (queued) tasks for the current user
    user_tasks = user.get_tasks(status=PENDING or status=STARTED)
    if not user_tasks:
        # allow the user to schedule an additional task
        process....
    else:
        return "Your previous task is already running"
That's, in general, a tricky task.
First, you need to inspect the workers:
inspector = app.control.inspect()
scheduled = inspector.scheduled()
reserved = inspector.reserved()
active = inspector.active()
Celery will get these from your broker. The point is, the broker does not store information about the user, so you need to add the user to the task kwargs yourself:
user_task.delay(user=user)
Then you'll be able to filter the results of these functions by the user kwarg:
[{'worker1.example.com':
    [{'eta': '2010-06-07 09:07:52', 'priority': 0,
      'request': {
          'name': 'tasks.usertask',
          'id': '1a7980ea-8b19-413e-91d2-0b74f3844c4d',
          'args': '[]',
          'kwargs': "{'user': '7'}"}},
     ...
The problem here is that it will be slow: inspection does a broadcast round trip to every worker.
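To make that filtering concrete, here is a minimal sketch (my own, not from the original answer) that collects the ids of one user's tasks across the scheduled, reserved and active sets. It assumes tasks are enqueued with a user kwarg as above; note that the kwargs field in the inspect output may arrive as a repr string, hence the parsing:
import ast

def tasks_for_user(app, user_id):
    """Return the ids of this user's scheduled/reserved/active tasks."""
    inspector = app.control.inspect()
    found = []
    for source in (inspector.scheduled, inspector.reserved, inspector.active):
        for worker, tasks in (source() or {}).items():
            for task in tasks:
                # scheduled entries nest the task under 'request'
                request = task.get('request', task)
                kwargs = request.get('kwargs') or {}
                if isinstance(kwargs, str):  # may be a repr string
                    kwargs = ast.literal_eval(kwargs)
                if str(kwargs.get('user')) == str(user_id):
                    found.append(request['id'])
    return found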
I've been struggling to find a solution to my problem; I hope I've come to the right place.
I have a Django REST Framework API which connects to a PostgreSQL DB, and I run bots against my own API in order to do stuff. Here is my code:
def get_or_create_eventloop():
"""Get the eventLoop only one time (create it if does not exist)"""
try:
return asyncio.get_event_loop()
except RuntimeError:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
return asyncio.get_event_loop()
My DB class, which uses asyncpg to connect / create a pool:
class DB():
    def __init__(self, loop):
        self.pool = loop.run_until_complete(self.connect_to_db())

    async def connect_to_db(self):
        return await asyncpg.create_pool(host="host",
                                         database="database",
                                         user="username",
                                         password="pwd",
                                         port=5432)
My API class:
class Api(APIView):
    # create an event loop since this is not the main thread
    loop = get_or_create_eventloop()
    nest_asyncio.apply()  # to avoid the "loop already running" problem
    # init my DB pool directly so I won't have to connect each time
    db_object = DB(loop)

    def post(self, request):
        ...  # I want to be able to call "do_something()"

    async def do_something(self):
        ...
I have my bots running and sending POST/GET requests to my Django API via aiohttp.
The problem I'm facing is this: how do I implement the post function in my API so it can handle multiple requests? Each request runs on a new thread, so a new event loop is created, but the asyncpg pool is LINKED to the event loop it was created on. I can't just create a new event loop per request; I need to keep working with the one created at the beginning so I can access my DB later (via pool.acquire, etc.).
This is what I tried so far, without success:
def post(self, request):
    self.loop.run_until_complete(self.do_something())
This raises:
RuntimeError: Non-thread-safe operation invoked on an event loop other than the current one
which I understand: we are presumably trying to drive the event loop from another thread.
I also tried to use async_to_sync from Django:
@async_to_sync
async def post(self, request):
    resp = await self.do_something()
The problem here is that async_to_sync CREATES a new event loop for the thread, so I won't be able to access my DB pool.
Edit: cf. https://github.com/MagicStack/asyncpg/issues/293 for that (I would love to implement something like that, but I can't find a way).
Here is a quick example of one of my bots (basic stuff):
import asyncio
from aiohttp import ClientSession

async def send_req(url, session):
    async with session.post(url=url) as resp:
        return await resp.text()

async def run(r):
    url = "http://localhost:8080/"
    tasks = []
    async with ClientSession() as session:
        for i in range(r):
            task = asyncio.create_task(send_req(url, session))
            tasks.append(task)
        responses = await asyncio.gather(*tasks)
        print(responses)

if __name__ == '__main__':
    asyncio.run(run(100))
Thank you in advance
After days of looking for an answer, I found the solution to my problem: I just used the package psycopg (version 3) instead of asyncpg. Now I can put @async_to_sync on my post function and it works.
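For anyone who lands here, a minimal sketch of what that can look like with psycopg 3 (connection parameters are placeholders; opening the connection inside the request coroutine sidesteps the loop-affinity problem at the cost of pooling):
import psycopg
from asgiref.sync import async_to_sync
from rest_framework.response import Response
from rest_framework.views import APIView

class Api(APIView):
    @async_to_sync
    async def post(self, request):
        # async_to_sync runs this coroutine on this thread's event loop,
        # and the connection is created on that same loop
        async with await psycopg.AsyncConnection.connect(
                host="host", dbname="database", user="username",
                password="pwd", port=5432) as conn:
            async with conn.cursor() as cur:
                await cur.execute("SELECT 1")
                row = await cur.fetchone()
        return Response({"result": row[0]})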
For example, I have the following class. How can I prevent execution of the get_entity task if the create_entity task was not executed?
class MyTaskSequence(TaskSequence):
    @seq_task(1)
    def create_entity(self):
        self.round += 1
        with self.client.post('/entities', json={}, catch_response=True) as resp:
            if resp.status_code != HTTPStatus.CREATED:
                resp.failure("entity was not created")
                # how to stop other tasks for that run?
            self.entity_id = resp.json()['data']['entity_id']

    @seq_task(2)
    def get_entity(self):
        # It is always executed,
        # but it should not run if the create_entity task failed
        resp = self.client.get(f'/entities/{self.entity_id}')
        ...
I found the TaskSet.interrupt method in the documentation, but it does not allow cancelling the root TaskSet. I tried to make a parent TaskSet for my task sequence so that TaskSet.interrupt works:
class MyTaskSet(TaskSet):
    tasks = {MyTaskSequence: 10}
But now I see that all results in the UI are cleared after I call interrupt!
I just need to skip the dependent tasks in this sequence; I need the results.
The easiest way to solve this is to use a single @task with multiple requests inside it. Then, if a request fails, just return after resp.failure().
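A minimal sketch of that idea (written against the current Locust API with HttpUser; endpoints follow the question):
from http import HTTPStatus
from locust import HttpUser, task

class MyUser(HttpUser):
    @task
    def create_then_get_entity(self):
        with self.client.post('/entities', json={}, catch_response=True) as resp:
            if resp.status_code != HTTPStatus.CREATED:
                resp.failure("entity was not created")
                return  # skip the dependent request for this iteration
            entity_id = resp.json()['data']['entity_id']
        self.client.get(f'/entities/{entity_id}')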
Might self.interrupt() be what you are looking for?
See https://docs.locust.io/en/latest/writing-a-locustfile.html#interrupting-a-taskset for reference.
Why not use on_start(self), which runs once whenever a locust is spawned? It can set a flag that the tasks check before executing:
class MyTaskSequence(TaskSequence):
    entity_created = False

    def on_start(self):
        self.round += 1
        with self.client.post('/entities', json={}, catch_response=True) as resp:
            if resp.status_code != HTTPStatus.CREATED:
                resp.failure("entity was not created")
                return
            self.entity_created = True
            self.entity_id = resp.json()['data']['entity_id']

    @seq_task(2)
    def get_entity(self):
        if self.entity_created:
            resp = self.client.get(f'/entities/{self.entity_id}')
            ...
I am developing a new extension which uses tasks. I need to create a task which calls a function rather than starting a new process or shell.
I can create a new task which executes a shell command:
let task = new vscode.Task(kind, taskName, taskSource, new vscode.ShellExecution(`echo Hello World`));
I would like to make a task which will call another method. Is there a way to do this?
There happens to be a "proposed API" for this exact purpose. See the "custom execution" section in the March 2019 release notes, which has a code example:
let execution = new vscode.CustomExecution((terminalRenderer, cancellationToken, args): Thenable<number> => {
    return new Promise<number>(resolve => {
        // This is the custom task callback!
        resolve(0);
    });
});

const taskName = "First custom task";
let task = new vscode.Task2(kind, vscode.TaskScope.Workspace, taskName, taskType,
    execution);
original issue: Allow extension to provide callback functions as tasks (#66818)
initial implementation pull request
relevant section in vscode.proposed.d.ts
So, I can get the build details, but they do not contain any info on the build jobs. E.g. each build job ran on a build agent; how can I get this piece of information using the REST API?
We are talking about a vNext build, not XAML.
You can find all tasks and jobs in the timeline records: Timeline - Get. You can paste this template into a browser to check the results for a specific build:
https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}/timeline
I use the Microsoft.TeamFoundationServer.Client package; here is an example with it:
static void PrintTimeLine(string TeamProjectName, int BuildId)
{
    var timeline = BuildClient.GetBuildTimelineAsync(TeamProjectName, BuildId).Result;
    if (timeline.Records.Count > 0)
    {
        Console.WriteLine("Task Name-----------------------------Start Time---Finish Time---Result");
        foreach (var record in timeline.Records)
            if (record.RecordType == "Task")
                Console.WriteLine("{0, -35} | {1, -10} | {2, -10} | {3}",
                    (record.Name.Length < 35) ? record.Name : record.Name.Substring(0, 35),
                    (record.StartTime.HasValue) ? record.StartTime.Value.ToLongTimeString() : "",
                    (record.FinishTime.HasValue) ? record.FinishTime.Value.ToLongTimeString() : "",
                    (record.Result.HasValue) ? record.Result.Value.ToString() : "");
    }
}
https://github.com/ashamrai/TFRestApi/blob/master/19.TFRestApiAppQueueBuild/TFRestApiApp/Program.cs
https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId} will let you know the agent used, under the queue object, where it shows the agent queue id (91) and the pool id (8):
"queue": {
    "id": 91,
    "name": "MotBuild-Default",
    "pool": {
        "id": 8,
        "name": "MotBuild-Default"
    }
}
Use https://dev.azure.com/{org}/_apis/distributedtask/pools/{pool_id}?api-version=5.0-preview.1 or https://dev.azure.com/{org}/{project}/_apis/distributedtask/queues/{queue_id}; either will return the pool.
Then https://dev.azure.com/{org}/_apis/distributedtask/pools/{pool_id}/agents will return a LIST of agents under that agent pool.
Now that I've explained all that, let's tie everything together (a rough end-to-end sketch follows this list).
1) Use https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId} and find the queue and pool IDs.
2) Use https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}/timeline and find the records of type Job; their workerName property holds the NAME of the agent used.
3) Query the agents with https://dev.azure.com/{org}/_apis/distributedtask/pools/{pool_id}/agents and find the agent id by filtering on the name found in step 2.
4) Query https://dev.azure.com/{org}/_apis/distributedtask/pools/{pool_id}/agents/{agent_id}, which returns high-level info about the agent, but not much detail.
This next API is undocumented:
5) Finally, to get the detailed capabilities, query https://dev.azure.com/{org}/_apis/distributedtask/pools/{pool_id}/agents/{agent_id}?includeCapabilities=true, which returns a huge result set. I think this is what you want.
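As a rough illustration (not part of the original answer), here are the five steps chained together in Python with the requests package; the org, project, build id and PAT are placeholders:
import requests

ORG, PROJECT, BUILD_ID = "org", "project", 1234
AUTH = ("", "personal-access-token")  # basic auth with a PAT
BASE = f"https://dev.azure.com/{ORG}"

# 1) build details -> pool id
build = requests.get(f"{BASE}/{PROJECT}/_apis/build/builds/{BUILD_ID}", auth=AUTH).json()
pool_id = build["queue"]["pool"]["id"]

# 2) timeline -> agent names from the records of type Job
timeline = requests.get(f"{BASE}/{PROJECT}/_apis/build/builds/{BUILD_ID}/timeline", auth=AUTH).json()
worker_names = {r.get("workerName") for r in timeline["records"] if r["type"] == "Job"}

# 3) list the pool's agents and match by name
agents = requests.get(f"{BASE}/_apis/distributedtask/pools/{pool_id}/agents", auth=AUTH).json()["value"]
agent_ids = [a["id"] for a in agents if a["name"] in worker_names]

# 4) + 5) detailed agent info, including the (undocumented) capabilities
for agent_id in agent_ids:
    detail = requests.get(
        f"{BASE}/_apis/distributedtask/pools/{pool_id}/agents/{agent_id}",
        params={"includeCapabilities": "true"}, auth=AUTH).json()
    print(detail["name"], detail.get("systemCapabilities", {}))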
Read more about the APIs at:
Pools
Queues
Agents
My application A calls a celery task longtask in application B. However, longtask is registered in B but not in A, so A calls it by using send_task. I want a mechanism in A to check periodically if longtask is complete. How do I do it?
send_task returns an AsyncResult that contains the task id. You can use this id to periodically check on the result of longtask.
result = my_app.send_task('longtask', kwargs={})
task_id = result.id

# anywhere else in your code you can reuse the
# task_id to check the status
from celery.result import AsyncResult
import time

done = False
while not done:
    result = AsyncResult(task_id)
    current_status = result.status
    if current_status == 'SUCCESS':
        print('yay! we are done')
        done = True
    time.sleep(10)
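If you'd rather not hand-roll the polling loop, the same AsyncResult (bound to the sending app) also exposes ready() and get(), e.g.:
from celery.result import AsyncResult

result = AsyncResult(task_id, app=my_app)
if result.ready():                 # non-blocking completion check
    print(result.get(timeout=10))  # fetch the task's return value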