In Celery, how do I keep a task's original priority after retrying?
After retrying, my task no longer has the priority of the original task.
To do this, you can pass the original priority into the retry call:
from celery import Celery

app = Celery('tasks', broker='amqp://localhost')  # illustrative broker URL

@app.task(bind=True)
def task_to_retry(self):
    try:
        raise RuntimeError()
    except RuntimeError:
        # Re-publish the retry with the priority of the original delivery.
        raise self.retry(priority=self.request.delivery_info.get('priority'))
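As a quick check, you can publish the task with an explicit priority and verify that the retried delivery keeps it. A minimal sketch, assuming a broker that supports message priorities (e.g. RabbitMQ with an x-max-priority queue):

# Publish with priority 9; after the forced retry, the redelivered message
# should carry the same priority value.
task_to_retry.apply_async(priority=9)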
I am trying to use priority-based queuing in my Queue block. My process flow is as follows:
source, wait, queue, packing_machine
On exit of the wait block, the agent gets assigned a priority in agent.atrPriority. In queue I have selected Queuing: Priority-based, and for Agent priority I use agent.atrPriority.
By printing to the console I am checking whether the sequence in which the agents enter the packing_machine block is correct (according to their priority), but it isn't. It keeps sending the agents from queue to packing_machine on a FIFO basis.
I have tried assigning agent.atrPriority at different places in the model, but I do not think that is the problem. I have also tried using agent comparison with agent1.atrPriority.before(agent2.atrPriority);, but it gives the error 'Cannot invoke before(int) on the primitive type int'.
Does anyone know why it is not working as expected?
The Queue block itself works, so this is not a bug.
Try a quick test: put another delay between the Wait and the Queue, and set the delay duration to something tiny, such as 0.0001 sec.
If this fixes it, the culprit is that you change the atrPriority field "on exit" of the Wait block, which is effectively too late: the value changes only after the downstream Queue has already read it.
Another option: change the atrPriority value before you call wait.free(...). This way, you can be sure the priority is set to the right value before the agent enters the queue.
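A minimal sketch of that second option, assuming Java action code somewhere in your model (computePriority is a placeholder for however you derive the value):

// Set the priority first, then release the agent, so the Queue block
// reads the final atrPriority value when the agent enters it.
agent.atrPriority = computePriority(agent); // placeholder for your own logic
wait.free(agent);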
In a non-preemptive system, after an ISR finishes execution, will the interrupted task continue execution even if a higher-priority task was activated?
This answer is specific to FreeRTOS and may not apply to other RTOSes.
FreeRTOS is preemptive by default. However, it can also be configured to be non-preemptive via a config option in FreeRTOSConfig.h:
#define configUSE_PREEMPTION 0
Normally, a return from an ISR does not trigger a context switch. But in preemptive systems this is often desirable, so in most FreeRTOS examples you see portYIELD_FROM_ISR(xHigherPriorityTaskWoken); at the end of the ISR, which triggers a context switch if xHigherPriorityTaskWoken is pdTRUE.
xHigherPriorityTaskWoken is initialized to pdFALSE at the start of the ISR (manually, by the user), and operations that can cause a context switch, such as vTaskNotifyGiveFromISR(), xQueueSendToBackFromISR(), etc., take it as an argument and set it to pdTRUE if a context switch is required after the call.
In a non-preemptive configuration, you simply pass NULL to such calls instead of xHigherPriorityTaskWoken and do not call portYIELD_FROM_ISR() at the end of the ISR. In this case, even if a higher-priority task is woken by the ISR, execution returns to the currently running task and stays there until that task yields or makes a blocking system call.
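For illustration, a minimal sketch of the preemptive pattern; the interrupt handler name, queue handle, and payload are placeholders:

#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"

static QueueHandle_t xQueue;  /* created elsewhere with xQueueCreate() */

void vExampleISR(void)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    uint32_t ulItem = 42;

    /* Sets xHigherPriorityTaskWoken to pdTRUE if the send unblocks a
       task with a higher priority than the one that was interrupted. */
    xQueueSendToBackFromISR(xQueue, &ulItem, &xHigherPriorityTaskWoken);

    /* Request a context switch on exit from the ISR if one is needed.
       In a non-preemptive setup, pass NULL above and omit this call. */
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}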
You can mix the ISR yield mechanism with either preemption setting. For example, you can force a context switch (preemption) from an ISR even when configUSE_PREEMPTION is 0, but this may cause problems if the interrupted/preempted task does not expect it to happen, so I don't recommend it.
I have read the tutorial about GCD and dispatch queues in Swift 3, but I'm really confused about the order of synchronous and asynchronous execution, and about the main queue versus background queues.
I know that if we use sync then we execute tasks one after the previous one, and if we use async then we can use QoS to set their priority, but what about this case?
func queuesWithQoS() {
    let queue1 = DispatchQueue(label: "com.appcoda.myqueue1")
    let queue2 = DispatchQueue(label: "com.appcoda.myqueue2")

    for i in 1000..<1005 {
        print(i)
    }

    queue1.async {
        for i in 0..<5 {
            print(i)
        }
    }

    queue2.sync {
        for i in 100..<105 {
            print(i)
        }
    }
}
The outcome shows that we ignore the asynchronous execution. I know queue2 should be completed before queue1 since it's synchronous execution, but why do we ignore the asynchronous execution, and what is the actual difference between async, sync, and the so-called main queue?
You say:
The outcome shows that we ignore the asynchronous execution. ...
No, it just means that you didn't give the asynchronously dispatched code enough time to get started.
I know queue2 should be completed before queue1 since it's synchronous execution ...
First, queue2 might not complete before queue1. It just happens to. Make queue2 do something much slower (e.g. loop through a few thousand iterations rather than just five) and you'll see that queue1 can actually run concurrently with respect to what's on queue2. It just takes a few milliseconds to get going and the stuff on your very simple queue2 is finishing before the stuff on queue1 gets a chance to start.
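For example, a quick way to see the interleaving, following the suggestion above (queue labels copied from the question; the extra workload on queue2 is purely illustrative):

import Dispatch

let queue1 = DispatchQueue(label: "com.appcoda.myqueue1")
let queue2 = DispatchQueue(label: "com.appcoda.myqueue2")

queue1.async {
    for i in 0..<5 { print("queue1:", i) }
}
queue2.sync {
    // A few thousand iterations, so queue1's block has time to start;
    // the two outputs should now interleave.
    for i in 0..<5_000 { print("queue2:", i) }
}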
Second, this behavior is not technically because it's synchronous execution. It's just that async takes a few milliseconds to get its work running on some worker thread, whereas the synchronous call, because of optimizations I won't bore you with, gets started more quickly.
but why do we ignore the asynchronous execution ...
We don't "ignore" it. It just takes a few milliseconds to get started.
and what is the actual difference between async, sync, and the so-called main queue?
"Async" merely means that the current thread may carry on and not wait for the dispatched code to run on some other thread. "Sync" means that the current thread should wait for the dispatched code to finish.
The "main thread" is a different topic and simply refers to the primary thread that is created for driving your UI. In practice, the main thread is where most of your code runs, basically running everything except that which you manually dispatch to some background queue (or code that is dispatched there for you, e.g. URLSession completion handlers).
sync and async describe the relationship between the calling thread and the queue you dispatch to. To see the difference, please run this code:
func queuesWithQoS() {
    let queue1 = DispatchQueue(label: "com.appcoda.myqueue1")

    queue1.async {
        for i in 0..<5 {
            print(i)
        }
    }
    print("finished")   // prints immediately, without waiting for the async block

    queue1.sync {
        for i in 0..<5 {
            print(i)
        }
    }
    print("finished")   // prints only after the sync block has completed
}
The main queue is the thread the entire user interface (UI) runs on.
First of all, I prefer the term "delayed" to "ignored" here, because all of the code in your question is executed.
QoS is an enum whose cases run from the highest priority (first) to the lowest (last); when you don't specify one, the queue gets the default QoS, which sits in the middle:
userInteractive
userInitiated
default
utility
background
unspecified
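For example, you can give each queue an explicit QoS at creation time. A minimal sketch reusing the question's labels:

import Dispatch

let queue1 = DispatchQueue(label: "com.appcoda.myqueue1", qos: .userInitiated)
let queue2 = DispatchQueue(label: "com.appcoda.myqueue2", qos: .utility)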
That said, you have two synchronous for-in loops and one asynchronous one, and the order of execution follows from the position of the loops and from the kind of dispatch (sync/async), because three different queues are involved (following the instructions in your link, queuesWithQoS() could be launched in viewDidAppear, so we can assume it is called on the main queue).
The code shows the creation of two queues with default priority, so the sequence of execution will be:
the for-in loop with 1000..<1005 in the main queue
the synchronous queue2 with default priority
the asynchronous queue1 (not ignored, simply delayed) with default priority
The main queue, where all the UI instructions are executed, always has the highest priority.
I need a Scalaz Task (or some wrapper) that is already running, and that can return a value immediately if it has completed, or after some waiting if it has not. In terms of Future I could do it like this:
val f = myTask.get.started
This way I have a Future running asynchronously; f.run returns the result immediately when called after the computation is complete, or blocks for some time and waits for completion if it is not. However, this way I lose error handling.
How can I have a Task, rather than a Future, that is already running asynchronously before run or runAsync is called on it?
The intention of scalaz.Task is clear control over the execution, which makes referential transparency possible. If you want to fork off the Task, use:
val result = Task.fork(myTask)
and the task will run on its own thread pool as soon as you execute it with one of the unsafe* methods. For example:
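A minimal sketch, assuming scalaz 7.2 (where the run methods are named unsafePerform*); myTask here is just a stand-in computation:

import scalaz.{-\/, \/-}
import scalaz.concurrent.Task

val myTask: Task[Int] = Task.delay(40 + 2)   // stand-in computation
val forked: Task[Int] = Task.fork(myTask)    // will run on the default pool

// Start it now; the callback receives either the error or the result,
// so error handling is preserved (unlike dropping down to Future).
forked.unsafePerformAsync {
  case -\/(err)   => println(s"failed: $err")
  case \/-(value) => println(s"result: $value")
}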
Consider the following code:
/*----------------------------------------------------------------------------
  First Thread
 *---------------------------------------------------------------------------*/
void Thread1 (void const *argument)
{
    for (;;)
    {
        osMutexWait(mutex, osWaitForever);
        Thread1_Functions;
        osMutexRelease(mutex);
    }
}

/*----------------------------------------------------------------------------
  Second Thread
 *---------------------------------------------------------------------------*/
void Thread2 (void const *argument)
{
    for (;;)
    {
        osMutexWait(mutex, osWaitForever);
        Thread2_Functions;
        osMutexRelease(mutex);
    }
}
As far as I've understood RTOS scheduling, the RTOS assigns a specific time to each task, and after this time is over, it switches to the other task.
Within that specific time, the task's infinite loop may repeat several times until the task's time is used up.
Assume the loop body finishes in less than half of the task's time, so the task has time to fully run it once more.
On the last line, after releasing the mutex, it will then acquire the mutex again before Thread2 can. Am I right?
Now assume a timer tick occurs while the MCU is running Thread1_Functions for the second time. Then Thread2 can't run because the mutex is owned by Thread1, so the RTOS runs Thread1 again; and if a timer tick happens to occur during Thread1_Functions every time, Thread2 never gets a chance to run. Am I right?
First, let me clear up the scheduling method that you described. You said, "the RTOS assigns a specific time to each task, and after this time is over, it switches to the other task." This scheduling method is commonly called "time slicing". Not all RTOSes use this method all the time. Time slicing may be used for tasks that have the same priority (or if the RTOS does not support task priorities). But if the tasks have different priorities, then the scheduler will not use time slicing and will instead schedule according to task priority.
But let's assume that the two tasks in your example have the same priority and the scheduler is time-slicing:
1. Thread1 runs and gets the mutex.
2. Thread1's time slice expires and the scheduler switches to Thread2.
3. Thread2 attempts to get the mutex but blocks, since Thread1 already owns the mutex.
4. The scheduler switches back to Thread1, since Thread2 is blocked.
5. Thread1 releases the mutex. At this point the scheduler should switch to any higher-priority task that is waiting for the mutex; but since Thread2 has the same priority, let's assume the scheduler does not switch and Thread1 continues to run within its time slice.
6. Thread1 attempts to get the mutex again.
In your scenario, Thread1 successfully gets the mutex again, and this could result in Thread2 never being able to run. In order to prevent this from happening, the mutex service should prioritize requests for the mutex: requests from higher-priority tasks receive higher priority, and requests from equal-priority tasks should be served first come, first served. In other words, the mutex service should put requests from equal-priority tasks into a queue. Remember that Thread2 already has a pending request for the mutex (step 3 above). So when Thread1 attempts to get the mutex again (step 6), Thread1's request should be queued behind the earlier request from Thread2. And when Thread1's second request gets queued behind the request from Thread2, the scheduler should block Thread1 and switch to Thread2, giving the mutex to Thread2.
Update: The above is just an idea for how an unspecified RTOS might handle the situation in order to avoid starving Thread2. You didn't mention a specific RTOS until your comment below. I don't know whether Keil RTX works like I described above. And now I'm wondering what your question really is.
Are you asking what will Keil RTX do in this situation? I'm not sure. You'll have to look at the code for osMutexRelease() to see whether it switches to a task with the same priority. Also look at osMutexWait() to see how it prioritizes tasks of the same priority.
Or are you stating that Keil RTX allows Thread2 to starve in this situation, and asking how to fix it? To fix it, you could call osThreadYield() after releasing the mutex, like this:
void Thread1 (void const *argument)
{
    for (;;)
    {
        osMutexWait(mutex, osWaitForever);
        Thread1_Functions;
        osMutexRelease(mutex);
        osThreadYield();   /* give an equal-priority task a chance to run */
    }
}