Differentiating between binary semaphore and mutex using the same code - mutex

void forward(void *pvparam)
{
    while(1)
    {
        if(xSemaphoreTake(xSemaphore, 1000) == pdTRUE)
        {
            UART0_SendStr("Frwd took it\n");
        }
        else
        {
            UART0_SendStr("Frwd couldn't take it\n");
        }
        vTaskDelay(1000);
    }
}

void back(void *pvparam)
{
    vTaskDelay(100);
    while(1)
    {
        if(xSemaphoreGive(xSemaphore) == pdTRUE)
        {
            UART0_SendStr("Back Gave it:MF\n");
        }
        else
        {
            UART0_SendStr("Back couldn't give it:MS\n");
        }
        vTaskDelay(1000);
    }
}
The above code is what I am using for both the binary semaphore and the mutex.
The only difference is that for the binary semaphore I write "xSemaphoreCreateBinary(xsemaphore);" in main, and for the mutex I write "xSemaphoreCreateMutex(xsemaphore)" in main.
According to the definition:
"A semaphore (mutex) taken by a task can only be given by that task, while a semaphore (binary) created by a task can be given by any of the tasks."
But both versions of the code (binary semaphore and mutex) give the same output.

Mutexes are used for controlling exclusive access to resources / data. If you take a mutex protecting the resource / data, you must give it back when you have finished, otherwise you will permanently block the resource. Mutexes also allow for priority inheritance.
Binary semaphores, on the other hand, are used for task synchronization purposes. You do not give the semaphore back once you have taken it; another task (or an interrupt) gives it to signal you.
Looking at your code above I think that the Binary Semaphore is the way to go.
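For reference, here is a minimal sketch of main() for the synchronization case. Note that in current FreeRTOS the creation functions take no arguments and return the handle; the stack sizes, priorities, and bare-metal main() below are placeholders, not the poster's actual project:

#include "FreeRTOS.h"
#include "semphr.h"
#include "task.h"

SemaphoreHandle_t xSemaphore;

int main(void)
{
    // Synchronization: a binary semaphore starts empty, so forward() blocks
    // (or times out) until back() gives it.
    xSemaphore = xSemaphoreCreateBinary();

    // Mutual exclusion: a mutex starts available and is meant to be taken and
    // given back by the same task.
    // xSemaphore = xSemaphoreCreateMutex();

    xTaskCreate(forward, "forward", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    xTaskCreate(back, "back", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    vTaskStartScheduler();

    for (;;);
}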

Related

FreeRTOS mutex/binary semaphore and deadlock

I am new to FreeRTOS, so I started with what I think is a great tutorial, the one presented by Shawn Hymel. I'm implementing the code that I'm writing on an ESP32 DevKitC V4.
However, I think that I don't understand the difference between binary semaphores and mutexes. When I run this code that tries to avoid deadlock between two tasks that use two mutexes to protect a critical section (as shown in the tutorial):
// Use only core 1 for demo purposes
#if CONFIG_FREERTOS_UNICORE
static const BaseType_t app_cpu = 0;
#else
static const BaseType_t app_cpu = 1;
#endif

// Settings
TickType_t mutex_timeout = 1000 / portTICK_PERIOD_MS; // Timeout for any task that tries to take a mutex!

// Globals
static SemaphoreHandle_t mutex_1;
static SemaphoreHandle_t mutex_2;

//**********************************************************
// Tasks

// Task A (high priority)
void doTaskA(void *parameters) {
    while (1) {
        // Take mutex 1
        if (xSemaphoreTake(mutex_1, mutex_timeout) == pdTRUE) {
            Serial.println("Task A took mutex 1");
            vTaskDelay(1 / portTICK_PERIOD_MS);
            // Take mutex 2
            if (xSemaphoreTake(mutex_2, mutex_timeout) == pdTRUE) {
                Serial.println("Task A took mutex 2");
                // Critical section protected by 2 mutexes
                Serial.println("Task A doing work");
                vTaskDelay(500 / portTICK_PERIOD_MS); // Simulate that the critical section takes 500 ms
            } else {
                Serial.println("Task A timed out waiting for mutex 2. Trying again...");
            }
        } else {
            Serial.println("Task A timed out waiting for mutex 1. Trying again...");
        }
        // Return mutexes
        xSemaphoreGive(mutex_2);
        xSemaphoreGive(mutex_1);
        Serial.println("Task A going to sleep");
        vTaskDelay(500 / portTICK_PERIOD_MS); // Wait to let the other task execute
    }
}

// Task B (low priority)
void doTaskB(void *parameters) {
    while (1) {
        // Take mutex 2 and wait to force deadlock
        if (xSemaphoreTake(mutex_2, mutex_timeout) == pdTRUE) {
            Serial.println("Task B took mutex 2");
            vTaskDelay(1 / portTICK_PERIOD_MS);
            if (xSemaphoreTake(mutex_1, mutex_timeout) == pdTRUE) {
                Serial.println("Task B took mutex 1");
                // Critical section protected by 2 mutexes
                Serial.println("Task B doing work");
                vTaskDelay(500 / portTICK_PERIOD_MS); // Simulate that the critical section takes 500 ms
            } else {
                Serial.println("Task B timed out waiting for mutex 1");
            }
        } else {
            Serial.println("Task B timed out waiting for mutex 2");
        }
        // Return mutexes
        xSemaphoreGive(mutex_1);
        xSemaphoreGive(mutex_2);
        Serial.println("Task B going to sleep");
        vTaskDelay(500 / portTICK_PERIOD_MS); // Wait to let the other task execute
    }
}

void setup() {
    Serial.begin(115200);
    vTaskDelay(1000 / portTICK_PERIOD_MS);
    Serial.println();
    Serial.println("---FreeRTOS Deadlock Demo---");

    // Create mutexes
    mutex_1 = xSemaphoreCreateMutex();
    mutex_2 = xSemaphoreCreateMutex();

    // Start task A (high priority)
    xTaskCreatePinnedToCore(doTaskA, "Task A", 1500, NULL, 2, NULL, app_cpu);
    // Start task B (low priority)
    xTaskCreatePinnedToCore(doTaskB, "Task B", 1500, NULL, 1, NULL, app_cpu);

    vTaskDelete(NULL);
}

void loop() {
}
My ESP32 starts rebooting automatically after both tasks have taken their first mutex, displaying this message:
---FreeRTOS Deadlock Demo---
Task A took mutex 1
Task B took mutex 2
Task A timed out waiting for mutex 2. Trying again...
assert failed: xQueueGenericSend queue.c:832 (pxQueue->pcHead != ((void *)0) || pxQueue->u.xSemaphore.xMutexHolder == ((void *)0) || pxQueue->u.xSemaphore.xMutexHolder == xTaskGetCurrentTaskHandle())
I am unable to interpret the error. However, when I change the definition of the mutexes to binary semaphores in setup():
//create mutexes
mutex_1 = xSemaphoreCreateBinary();
mutex_2 = xSemaphoreCreateBinary();
The code runs fine on the ESP32. Would anyone please explain to me why this happens? Many thanks, and sorry if the question wasn't asked well, as this is my first one.
One of the key differences between semaphores and mutexes is the concept of ownership. Semaphores don't have a task that owns them: any task can give a semaphore, even one that never took it. Mutexes, on the other hand, are owned by the task that takes them and can only be given back by that task.
In your code above, mutex_1 is taken by Task A and mutex_2 is taken by Task B. At this point, Task A is trying to take mutex_2. Since it is an actual mutex, Task A cannot take it while it is owned by Task B. If these were binary semaphores there would be no ownership involved, and the two tasks would simply recover after their timeouts.
The error plays into that. After Task A times out waiting for mutex_2, it starts to give the mutexes back. It can give mutex_1 without a problem, because it owns it. When it tries to give mutex_2, it cannot, because it is not the owner. The OS therefore raises an assert, since a task shouldn't try to release a mutex it doesn't own.
If you want to read a little more about the differences between mutexes and semaphores, you can check out this article.
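As a stripped-down sketch of that ownership rule (hypothetical task names, not the demo above), a give issued by a task that never took the handle is exactly what trips the assert when the handle is a mutex, while a binary semaphore accepts it:

SemaphoreHandle_t lock; // created with xSemaphoreCreateMutex() or xSemaphoreCreateBinary()

void holderTask(void *parameters) {
    xSemaphoreTake(lock, portMAX_DELAY); // becomes the owner if 'lock' is a mutex
    vTaskDelay(portMAX_DELAY);           // never gives it back
}

void giverTask(void *parameters) {
    vTaskDelay(100 / portTICK_PERIOD_MS); // let holderTask take 'lock' first
    // Mutex: this give comes from a task that is not the owner, so the
    // configASSERT in xQueueGenericSend fires (the error shown in the question).
    // Binary semaphore: there is no owner, so the give simply succeeds.
    xSemaphoreGive(lock);
    vTaskDelete(NULL);
}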

Will an item submitted to the main `DispatchQueue` ever interrupt currently executing code on the main thread?

The code below is used to execute a long-running calculation on a background thread:
enum CalculationInterface {
    private static var latestKey: AnyObject? // Used to cancel previous calculations when a new one is initiated.

    static func output(from input: Input, return: @escaping (Output?) -> ()) {
        self.latestKey = EmptyObject()
        let key = self.latestKey! // Made to enable capturing `self.latestKey`'s value.
        DispatchQueue.global().async {
            do {
                let output = try calculateOutput(from: input, shouldContinue: { key === self.latestKey }) // Function cancels by throwing an error.
                DispatchQueue.main.async { if key === self.latestKey { `return`(output) } }
            } catch {}
        }
    }
}
This function is called from the main thread like so:
/// Initiates calculation of the output and sets it to the result when finished.
private func recalculateOutput() {
    self.output = .calculating // Triggers calculation in-progress animation for user.
    CalculationInterface.output(from: input) { self.output = $0 } // Ends animation once set and displays calculated output to user.
}
I'm wondering if it's possible for the closure that's pushed to DispatchQueue.main to execute while the main thread is running my code. Or, in other words, to execute after self.output = .calculating but before self.latestKey is reset to the new object. If it could, then a stale calculation output could be displayed to the user.
I'm wondering if it's possible for the closure that's pushed to DispatchQueue.main to execute while the main thread is running my code
No, it isn't possible. The main queue is a serial queue. If code is running on the main queue, no "other" main queue code can run. Your DispatchQueue.main.async effectively means: "Wait until all code running on the main queue comes naturally to an end, and then run this on the main queue."
On the other hand, DispatchQueue.global() is not a serial queue. Thus it is theoretically possible for two calls to calculateOutput to overlap. That isn't something you want to have happen; you want to be sure that any executing instance of calculateOutput finishes (and we proceed to grapple with the latestKey) before another one can start. In other words, you want to ensure that the sequence
set latestKey on the main thread
perform calculateOutput in the background
look at latestKey on the main thread
happens coherently. The way to ensure that is to set aside a DispatchQueue that you create with DispatchQueue(label:), that you will always use for running calculateOutput. That queue will be a serial queue by default.
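A minimal sketch of that suggestion applied to the code above (the queue label is made up; everything else follows the original CalculationInterface):

enum CalculationInterface {
    private static var latestKey: AnyObject?

    // A dedicated serial queue so that calls to calculateOutput never overlap.
    private static let calculationQueue = DispatchQueue(label: "com.example.calculation") // serial by default

    static func output(from input: Input, return: @escaping (Output?) -> ()) {
        self.latestKey = EmptyObject()
        let key = self.latestKey!
        calculationQueue.async { // was DispatchQueue.global().async
            do {
                let output = try calculateOutput(from: input, shouldContinue: { key === self.latestKey })
                DispatchQueue.main.async { if key === self.latestKey { `return`(output) } }
            } catch {}
        }
    }
}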

process Swift DispatchQueue without affecting resource

I have a Swift DispatchQueue that receives data at 60 fps.
However, depending on the phone or the amount of data received, the computation becomes too expensive to run at 60 fps. In practice, it is fine to process only half of the incoming data, or as much as the available computing resources allow.
let queue = DispatchQueue(label: "com.test.dataprocessing")

func processData(data: SomeData) {
    queue.async {
        // data processing
    }
}
Does DispatchQueue somehow allow me to drop some data if a resource is limited? Currently, it is affecting the main UI of SceneKit. Or, is there something better than DispatchQueue for this type of task?
There are a couple of possible approaches:
The simple solution is to keep track of your own Bool indicating whether your task is in progress, and when more data arrives, only process it if one isn't already running:
private var inProgress = false
private var syncQueue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".sync.progress") // for reasons beyond the scope of this question, reader-writer with concurrent queue is not appropriate here

func processData(data: SomeData) {
    let isAlreadyRunning = syncQueue.sync { () -> Bool in
        if self.inProgress { return true }
        self.inProgress = true
        return false
    }
    if isAlreadyRunning { return }

    processQueue.async {
        defer {
            self.syncQueue.async { self.inProgress = false }
        }
        // process `data`
    }
}
All of that syncQueue stuff is to make sure that I have thread-safe access to the inProgress property. But don't get lost in those details; use whatever synchronization mechanism you want (e.g. a lock or whatever). All we want to make sure is that we have thread-safe access to the Bool status flag.
Focus on the basic idea, that we'll keep track of a Bool flag to know whether the processing queue is still tied up processing the prior set of SomeData. If it is busy, return immediately and don't process this new data. Otherwise, go ahead and process it.
While the above approach is conceptually simple, it won't offer great performance. For example, if your processing of data always takes 0.02 seconds (50 times per second) and your input data is coming in at a rate of 60 times per second, you'll end up getting only about 30 of them processed per second: every frame that arrives while the previous one is still being processed is dropped, so roughly every other frame is handled.
A more sophisticated approach is to use a GCD user data source, something that says "run the following closure when the destination queue is free". And the beauty of these dispatch user data sources is that they coalesce events together. These data sources are useful for decoupling the speed of the inputs from the processing of them.
So, you first create a data source that simply indicates what should be done when data comes in:
private var dataToProcess: SomeData?
private lazy var source = DispatchSource.makeUserDataAddSource(queue: processQueue)

func configure() {
    source.setEventHandler { [unowned self] in
        guard let data = self.syncQueue.sync(execute: { self.dataToProcess }) else { return }
        // process `data`
    }
    source.resume()
}
So, when there's data to process, we update our synchronized dataToProcess property and then tell the data source that there is something to process:
func processData(data: SomeData) {
    syncQueue.async { self.dataToProcess = data }
    source.add(data: 1)
}
Again, just like the previous example, we're using syncQueue to synchronize our access to a property across multiple threads. But this time we're synchronizing dataToProcess rather than the inProgress state variable we used in the first example. The idea is the same, though: we must be careful to synchronize our interaction with any property that is accessed from multiple threads.
Anyway, using this pattern with the above scenario (input coming in at 60 fps, whereas processing can only handle 50 per second), the resulting performance is much closer to the theoretical max of 50 fps (I got between 42 and 48 fps depending upon the queue priority), rather than 30 fps.
The latter approach can conceivably lead to more frames (or whatever you're processing) being processed per second and results in less idle time on the processing queue. The following image attempts to illustrate graphically how the two alternatives compare. In the former approach, you'll lose every other frame of data, whereas the latter approach will only lose a frame of data when two separate sets of input data arrive before the processing queue becomes free and they are coalesced into a single call to the dispatch source.

Why don't event flag functions work correctly outside of tasks in Keil RTX?

As you know, event flags are very useful (e.g. for letting a task run), but unfortunately their control functions (os_evt_clr/set/wait) do not work correctly outside of task bodies (e.g. in interrupt handlers). As an alternative I used a variable: I set it in the interrupt handler when needed, then checked it in another task, which calls os_evt_set() so that the MCU enters the primary task.
bool Instance_Variable;

Interrupt_Handler()
{
    if(xxxx)
        Instance_Variable = 1;
}
//--------------------------
Secondary_Task()
{
    // This is a frequently run task
    if(Instance_Variable == 1)
    {
        os_evt_set(0x0001, Primary_Task_ID);
        Instance_Variable = 0;
    }
}
//--------------------------
Primary_Task()
{
    Result = os_evt_wait_or(0x0001, 0xFFFF);
    // Task's body
    os_evt_clr(0x0001, Primary_Task_ID);
}
Is there any better approach? WBR.
You can't use functions prefixed with os_ inside an ISR. Usage hints from the RTX documentation:
Functions that begin with os_ can be called from a task, but not from an interrupt service routine.
Functions that begin with isr_ can be called from an IRQ interrupt service routine but not from a task.
This code will work:
Interrupt_Handler() {
    if(xxxx) {
        isr_evt_set(0x0001, Primary_Task_ID);
    }
}
//--------------------------
Primary_Task() {
    Result = os_evt_wait_or(0x0001, 0xFFFF);
    // Task's body
    os_evt_clr(0x0001, Primary_Task_ID);
}

Siesta handling multiple requests

I have a loop where I POST requests to the server:
for (traineeId, points) in traineePointsDict {
    // create a new point
    let parameters: NSDictionary = [
        "traineeId": "\(traineeId)",
        "numPoints": points,
        "exerciseId": "\(exerciseId)"
    ]

    DataManager.sharedInstance.api.points.request(.POST, json: parameters).success { data in
        if data.json["success"].int == 1 {
            self.pointCreated()
        } else {
            self.pointFailToCreate()
        }
    }.failure { error in
        self.pointFailToCreate()
    }
}
The problem is that for some reason the last request fails and I am guessing that this is due to posting too many requests to the server at the same time.
Is there a way to chain these requests so they wait for the one before to complete before executing the next?
I have been looking at PromiseKit, but I don't really know how to implement this and I am looking for a quick solution.
Siesta does not control how requests are queued or how many requests run concurrently. You have two choices:
control it on the app side, or
control it in the network layer.
I’d investigate option 2 first. It gives you less fine-grained control, but it gives you more robust options on the cheap and is less prone to mistakes. If you are using URLSession as your networking layer (which is Siesta’s default), then investigate whether the HTTPMaximumConnectionsPerHost property of URLSessionConfiguration does what you need. (Here are some examples of passing custom configuration to Siesta.)
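For example, here is a sketch of the network-layer approach, assuming a Siesta version whose Service initializer accepts a URLSessionConfiguration through its networking: parameter (the base URL is a placeholder):

// Limit URLSession to one connection per host so queued requests go out one at a time.
// In Swift 3+ the property is spelled httpMaximumConnectionsPerHost.
let configuration = URLSessionConfiguration.default
configuration.httpMaximumConnectionsPerHost = 1

let api = Service(baseURL: "https://api.example.com", networking: configuration)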
If that doesn’t work for you, a simple version of #1 is to use a completion handler to chain the requests:
func chainRequests(_ queue: [ThingsToRequest]) {
    guard let thing = queue.first else { return }

    params = makeParamsFor(thing)
    resource.request(.POST, json: params)
        .onSuccess {
            ...
        }
        .onFailure {
            ...
        }
        .onCompletion { _ in
            chainRequests(Array(queue[1 ..< queue.count]))
        }
}
Note that you can attach multiple overlapping handlers to the same request, and they’re called in the order you attached them. Note also that Siesta guarantees that the completion block is always called, no matter the outcome. This means that each request will result in calls to either closures 1 & 3 or closures 2 & 3. That’s why this approach works.
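To adapt this to the original loop, the dictionary could be flattened into an array of request descriptions up front and handed to chainRequests. This is only a sketch: PointRequest is a made-up stand-in for ThingsToRequest, and makeParamsFor(_:) would build the NSDictionary from it.

// Describe every request first, then let chainRequests() issue them one at a time.
struct PointRequest {
    let traineeId: String
    let points: Int
    let exerciseId: String
}

let requests = traineePointsDict.map {
    PointRequest(traineeId: "\($0.key)", points: $0.value, exerciseId: "\(exerciseId)")
}
chainRequests(requests)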