Clarification about a Scala Future callback that never completes and its effect on other callbacks

While re-reading scala-lang.org's page detailing Futures here, I stumbled upon the following sentence:
In the event that some of the callbacks never complete (e.g. the callback contains an infinite loop), the other callbacks may not be executed at all. In these cases, a potentially blocking callback must use the blocking construct (see below).
Why may the other callbacks not be executed at all? I may install a number of callbacks for a given Future. The thread that completes the Future may or may not execute the callbacks. But just because one callback is misbehaving, the rest should not be penalized, I think.
One possibility I can think of is the way the ExecutionContext is configured. If it is configured with a single thread, then this may happen, but that is a specific configuration, not generally expected behaviour.
Am I missing something obvious here?

Callbacks are called within an ExecutionContext that has an ultimately limited number of threads - if not limited by the specific context implementation, then by the underlying operating system and/or hardware itself.
Let's say your system's limit is OS_LIMIT threads. You create OS_LIMIT + 1 callbacks. From those, OS_LIMIT callbacks immediately get a thread each - and none ever terminate.
How can you guarantee that the remaining 1 callback ever gets a thread?
Sure, there could be some detection mechanisms built into the Scala library, but it's not possible in the general case to make an optimal implementation: maybe you want the callback to run for a month.
Instead (and this seems to be the approach in the Scala library), you could provide facilities for handling situations that you, the developer, know are risky. This removes the element of surprise from the system.
Perhaps most importantly - it enables the developer to "bake in" the necessary information about handler/task characteristics directly into his/her program, rather than relying on some obscure piece of language functionality (which may change from version to version).
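To illustrate the starvation argument in a language-agnostic way, here is a small POSIX-threads sketch (plain C, not Scala's actual ExecutionContext; the pool size and task names are made up): a pool of two workers is handed three callbacks, two of which never return, so the third is never executed and the program intentionally never terminates.

/* Hypothetical illustration of callback starvation: a fixed pool of two
 * workers, three queued "callbacks", two of which never complete. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE 2
#define NUM_TASKS 3

static void (*task_queue[NUM_TASKS])(void);
static int next_task = 0;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

static void endless_task(void) { for (;;) pause(); }      /* "never completes" */
static void short_task(void)   { puts("third callback ran"); }

static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        void (*task)(void) = NULL;
        pthread_mutex_lock(&queue_lock);
        if (next_task < NUM_TASKS)
            task = task_queue[next_task++];
        pthread_mutex_unlock(&queue_lock);
        if (task == NULL)
            break;
        task();   /* an endless task pins this worker forever */
    }
    return NULL;
}

int main(void) {
    task_queue[0] = endless_task;
    task_queue[1] = endless_task;
    task_queue[2] = short_task;   /* starved: both workers are stuck above */

    pthread_t pool[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&pool[i], NULL, worker, NULL);
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(pool[i], NULL);  /* never returns - the whole point */
    return 0;
}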

Related

What condition variables can do that unlock+yield cannot?

In POSIX, there is the requirement that when a wait is called on a condition variable and a mutex, the two operations - unlocking the mutex and blocking the thread - be performed atomically, in such a way that any broadcast/signal takes effect as if it happened after the thread blocked. I suppose there are equivalent requirements for C11 and C++ condition variables as well, and I won't go on to do a verbose enumeration.
However, on some systems (such as many people's nostalgia, Windows XP), there was no condition variable mechanism. Instead, one had to perform an unlock+yield to achieve a similar (the same?) effect. And this works, because even if the broadcast/signal occurred between the unlock and the yield, when the thread is re-scheduled its observable behaviour is the same as if the wakeup had occurred after the block. Windows XP supported mutexes, and it had a SleepEx function that can work like a yield.
So this raises the question: what can condition variables do that unlock+yield cannot?
In response to the comment: I use Windows XP as an example because it happens to be a system that supported mutexes but not condition variables, and because it is within one generation's memory. Of course, we assume correctness and reasonable performance, and the question is not specifically about Windows; it asks about any implementation in general.
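To make the two strategies concrete, here is a minimal POSIX sketch of both waits (the names and the single boolean predicate are just for illustration): with pthread_cond_wait the release of the mutex and the blocking are atomic with respect to signalers, while the unlock+yield emulation has a window in which a wakeup reaches nobody, so the waiter must keep re-checking the predicate and burns scheduler passes while doing so.

#include <pthread.h>
#include <sched.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool ready = false;

/* Condition-variable wait: unlock + block happen atomically inside the call,
 * so a broadcast sent right after the unlock is still delivered to us. */
static void wait_with_condvar(void) {
    pthread_mutex_lock(&lock);
    while (!ready)                     /* loop guards against spurious wakeups */
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}

/* Emulation: a broadcast may land between the unlock and the yield, where no
 * one is blocked on anything, so correctness relies on polling the predicate. */
static void wait_with_unlock_yield(void) {
    for (;;) {
        pthread_mutex_lock(&lock);
        bool done = ready;
        pthread_mutex_unlock(&lock);
        if (done)
            return;
        sched_yield();
    }
}

static void *signaler(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    ready = true;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, signaler, NULL);
    wait_with_condvar();               /* or: wait_with_unlock_yield(); */
    pthread_join(t, NULL);
    puts("woken");
    return 0;
}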

Can we edit callback function HAL_UART_TxCpltCallback for our convenience?

I am a newbie to both FreeRTOS and STM32. I want to know how exactly the callback function HAL_UART_TxCpltCallback for HAL_UART_Transmit_IT works.
Can we edit that callback function for our convenience?
Thanks in Advance
You call HAL_UART_Transmit_IT to transmit your data in the "interrupt" (non-blocking) mode. This call returns immediately, likely well before your data gets fully transmitted.
The sequence of events is as follows:
HAL_UART_Transmit_IT stores a pointer to, and the length of, the data buffer you provide. It doesn't perform a copy, so the buffer you passed needs to remain valid until the callback gets called. For example, it cannot be a buffer you'll call delete [] / free on before the callbacks happen, or a buffer that's local to a function you're going to return from before a callback call.
It then enables the TXE interrupt for this UART, which fires every time the DR (or TDR, depending on the STM32 in use) register is empty and can have new data written to it.
At this point the interrupt fires immediately. In the IRQ handler (HAL_UART_IRQHandler) a new byte is put into the DR (TDR) register, which then gets transmitted - this happens in UART_Transmit_IT.
Once this byte gets transmitted, the TXE interrupt is triggered again and this process repeats until the end of the buffer you've provided is reached.
If any error happens, HAL_UART_ErrorCallback will get called from the IRQ handler.
If no errors happened and the end of the buffer has been reached, HAL_UART_TxCpltCallback is called (from HAL_UART_IRQHandler -> UART_EndTransmit_IT).
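To make that last step concrete: HAL_UART_TxCpltCallback is typically declared as a __weak symbol in the HAL sources, so the usual way to customize it is to define your own version in application code, which overrides the weak default without touching the library files. A minimal sketch - the device header, the huart2 handle and the flag are illustrative assumptions:

#include "stm32f4xx_hal.h"    /* adjust to your device family header */
#include <stdint.h>

extern UART_HandleTypeDef huart2;    /* assumed to be initialized elsewhere */
volatile uint8_t tx_done = 0;

/* Overrides the __weak default; runs in interrupt context, so keep it short. */
void HAL_UART_TxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart2) {
        tx_done = 1;                 /* signal the main loop / a task */
    }
}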
On to your second question, whether you can edit this callback "for convenience" - I'd say you can do whatever you want, but you'll have to live with the consequences of modifying what is essentially library code:
Upgrading the HAL to newer versions is going to be a nightmare. You'll have to manually re-apply all the changes you've made to that code and test them again. To some extent this can be automated with some form of version control (git / svn) or even patch files, but if the code you've modified gets changed by ST, those patches will likely not apply anymore and you'll have to do it all by hand again. This may require re-discovering how the implementation changed and redoing all your work from scratch.
Nobody is going to be able to help you as your library code no longer matches code that everyone else has. If you introduced new bugs by modifying library code, no one will be able to reproduce them. Even if you provided your modifications, I honestly doubt many here will bother to apply your changes and test them in practice.
If I were to express my personal opinion it'd be this: if you think there are bugs in the HAL code - fix them locally and report them to ST. Once they're fixed in a future update, fully overwrite your HAL modifications with the updated official release. If you think the HAL code lacks functionality or flexibility for your needs, you have two options here:
Suggest your changes to ST. You have to keep in mind that HAL aims to serve "general purpose" needs.
Just don't use the HAL for this specific peripheral. This "mixed" approach is exactly what I do personally. In some cases the functionality provided by the HAL for a given peripheral is "good enough" to serve my needs (in my case one example is SPI, where I fully rely on the HAL), while in other cases - such as UART - I use the HAL only for initialization and handle the transmission myself. Even when you decide not to use the HAL functions, it can still provide some value - you can, for example, copy its IRQ handler into your code and call your own functions instead. That way you at least skip some parts of the development.

Why does MCNearbyServiceAdvertiser use a dispatch queue internally?

While I was browsing through the iOS 7 runtime headers, something caught my eye. In the MCNearbyServiceAdvertiser class, part of the Multipeer Connectivity framework, a property called syncQueue and multiple methods prefixed with sync are defined. Some of the methods exist in both a prefixed and a non-prefixed version, such as startAdvertisingPeer and syncStartAdvertisingPeer.
My question is, what would be the purpose of both this property and these prefixed methods, and how are they combined?
(edit: removed the remark that the queue is serial as pointed out by CouchDeveloper, since we cannot know this)
As you know, the implementation is private.
Having a dispatch queue whose name is syncQueue may not mean that this queue is a serial queue. It might be a concurrent queue as well.
We can only guess at what startAdvertisingPeer and the "prefixed" version syncStartAdvertisingPeer might mean.
For example, in order to fulfill internal prerequisites, startAdvertisingPeer might assume that it is always invoked from an execution context other than the syncQueue. That way, it can synchronously dispatch to the syncQueue, invoking syncStartAdvertisingPeer, without ending up in a deadlock. syncStartAdvertisingPeer, on the other hand, will always assume that it executes on the syncQueue, which is what guarantees correct synchronization.
But, as stated, we don't know the actual details - it's just a rough guess. Usually, you should read the documentation - and not some private header details - to build a picture in your mind of how this class likely works.
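To make that guess concrete, here is a rough sketch of the described pattern using libdispatch's plain-C API - this is not Apple's private implementation, and both the names and the choice of a serial queue are assumptions of the sketch:

#include <dispatch/dispatch.h>
#include <stdio.h>

static dispatch_queue_t syncQueue;

/* "syncStartAdvertisingPeer": assumes it is already running on syncQueue,
 * so it may touch shared state without any further locking. */
static void syncStartAdvertisingPeer(void *context) {
    (void)context;
    puts("advertising started (on syncQueue)");
}

/* "startAdvertisingPeer": assumes it is NOT running on syncQueue, so a
 * synchronous dispatch onto that queue cannot deadlock. */
static void startAdvertisingPeer(void) {
    dispatch_sync_f(syncQueue, NULL, syncStartAdvertisingPeer);
}

int main(void) {
    syncQueue = dispatch_queue_create("com.example.syncQueue", DISPATCH_QUEUE_SERIAL);
    startAdvertisingPeer();
    return 0;
}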

What is the difference between hook and callback?

From reading various texts, especially the iOS documentation about delegates, the protocol methods that a custom delegate object needs to implement are called hooks. But some other books call these hooks callbacks. What is the difference between them? Are they just different names for the same mechanism? Besides Objective-C, some other programming languages, such as C, also have hooks - is it the same situation as in Objective-C?
The terminology here is a bit fuzzy. In general the two attempt to achieve similar results.
In general, a callback is a function (or delegate) that you register with an API, to be called at the appropriate time in the flow of processing (e.g. to notify you that processing has reached a certain stage).
A hook traditionally means something a bit more general that serves the purpose of modifying calls to an API (e.g. modifying the passed parameters, or monitoring the called functions). In this sense it is usually much lower level than what can be achieved in higher-level languages like Java.
In the context of iOS, the word hook means exactly the same thing as callback above.
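A classic C illustration of that registration-style callback is the comparison function handed to qsort(), which the library calls back at moments of its own choosing:

#include <stdio.h>
#include <stdlib.h>

/* The callback: qsort() decides when and how often to invoke it. */
static int compare_ints(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void) {
    int values[] = { 3, 1, 2 };
    qsort(values, 3, sizeof values[0], compare_ints);   /* register + call back */
    for (int i = 0; i < 3; i++)
        printf("%d ", values[i]);
    putchar('\n');
    return 0;
}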
Let me chime in with a Javascript answer. In Javascript, callbacks, hooks and events are all used. In this order, each is a higher-level concept than the previous one.
Unfortunately, they are often used improperly which leads to confusion.
Callbacks
From a control flow perspective, a callback is a function, usually given as an argument, that you execute before returning from your function.
This is usually used in asynchronous situations when you need to wait for I/O (e.g. an HTTP request, a file read, a database query, etc.). You don't want to wait with a synchronous while loop, so other functions can be executed in the meantime.
When you get your data, you (permanently) relinquish control and call the callback with the result.
function myFunc(someArg, callback) {
// ...
callback(error, result);
}
Because the callback function may be some code that hasn't been executed yet, and you don't know what's above your function in the call stack, generally instead of throwing errors you pass on the error to the callback as an argument. There are error-first and result-first callback conventions.
Callbacks have mostly been replaced by Promises in the Javascript world, and since ES2017 you can natively use async/await to get rid of callback-rich spaghetti code and make asynchronous control flow look as if it were synchronous.
Sometimes, in special cascading control flows you run callbacks in the middle of the function. E.g. in Koa (web server) middleware or Redux middleware you run next() which returns after all the other middlewares in the stack have been run.
Hooks
Hook is not really a well-defined term, but in Javascript practice you provide hooks when you want a client (an API/library user, child classes, etc.) to take optional actions at well-defined points in your control flow.
So a hook may be some function (given as e.g. an argument or a class method) that you call at a certain point e.g. during a database update:
data = beforeUpdate(data);
// ...update
afterUpdate(result);
Usually the point is that:
Hooks can be optional
Hooks are usually waited for, i.e. they are there to modify some data
There is at most one function called per hook (contrary to events)
React makes use of hooks in its Hooks API, and they - quoting their definition - "are functions that let you “hook into” React state and lifecycle features", i.e. they let you change React state and also run custom functions each time certain parts of the state change.
Events
In Javascript, events are emitted at certain points in time, and clients can subscribe to them. The functions that are called when an event happens are called listeners - or for added confusion, callbacks. I prefer to shun the term "callback" for this, and use the term "listener" instead.
This is also a generic OOP pattern.
In front-end there's a DOM interface for events, in node.js you have the EventEmitter interface. A sophisticated asynchronous version is implemented in ReactiveX.
Properties of events:
There may be multiple listeners/callbacks subscribed (to be executed) for the same event.
Listeners usually don't receive a callback, only some event information, and they are run synchronously
Generally, and unlike hooks, they are not for modifying data inside the event emitter's control flow. The emitter doesn't care 'if there is anybody listening'. It just calls the listeners with the event data and then continues right away.
Examples: events happen when a data stream starts or ends, a user clicks on a button or modifies an input field.
The two terms are very similar and are sometimes used interchangeably. A hook is an option in a library where the user code can link a function to change the behavior of the library. The library function need not run concurrently with the user code; as in a destructor.
A callback is a specific type of hook where the user code initiates the library call, usually an I/O call or a GUI call, which gives control over to the kernel or the GUI subsystem. The controlling process then 'calls back' the user code on an interrupt or signal so the user code can supply the handler.
Historically, I've seen hook used for interrupt handlers and callback used for GUI event handlers. I have also seen hook used when the routine is to be statically linked and callback used in dynamic code.
Two great answers already, but I wanted to throw in one more piece of evidence that the terms "hook" and "callback" are the same and can be used interchangeably: FreeRTOS favors the term "hook" but recognizes "callback" as an equivalent term when it says:
The idle task can optionally call an application defined hook (or callback) function - the idle hook.
The tick interrupt can optionally call an application defined hook (or callback) function - the tick hook.
The memory allocation schemes implemented by heap_1.c, heap_2.c, heap_3.c, heap_4.c and heap_5.c can optionally include a malloc() failure hook (or callback) function that can be configured to get called if pvPortMalloc() ever returns NULL.
Source: https://www.freertos.org/a00016.html
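For instance, the idle hook in that quote is simply a function with a well-known name that the kernel calls back from the idle task; a minimal sketch, assuming configUSE_IDLE_HOOK is set to 1 in FreeRTOSConfig.h:

#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"

static volatile uint32_t idle_passes = 0;

/* Called by the scheduler from the idle task on every pass through its loop.
 * It must never call a blocking FreeRTOS API function. */
void vApplicationIdleHook(void)
{
    idle_passes++;   /* cheap background bookkeeping */
}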

Tutorial on OpenCL event handling

In my last question, OpenCl cleanup causes segfault, somebody hinted that missing event handling, i.e. not waiting for code to finish, could cause the segfaults. Since then I have looked again at the tutorials I used, but they either don't pay attention to events (Matrix Multiplication 1 (OpenCL) and NVIDIA_OpenCL_GettingStartedLinux.pdf) or don't discuss them in enough detail to be (for me) understandable.
Do you know a tutorial on where and how to wait in OpenCL?
Thanks!
I don't have a tutorial on events in OpenCL, and I'm by no means an expert, but since no one else is responding...
As a rule of thumb, you'll need to wait for any function named clEnqueue*. Those functions return immediately, before the job is done. The easiest way to make sure your queue is finished is to call clFinish(). It won't return until the entire queue has completed.
If you want to get a little fancier, most of the clEnqueue* functions have an optional cl_event parameter that you can pass in. You can check on a particular event with clGetEventInfo(), and you can wait for a particular set of events to finish with clWaitForEvents().
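Putting those pieces together, a minimal sketch of the event pattern with the plain OpenCL C API might look like this (the surrounding setup of the context, queue and kernel, plus error checking, is omitted, and the function name is just illustrative):

#include <CL/cl.h>

void run_and_wait(cl_command_queue queue, cl_kernel kernel, size_t global_size)
{
    cl_event done;

    /* clEnqueue* returns as soon as the command is queued, not when it runs. */
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size, NULL,
                           0, NULL, &done);

    /* Option 1: wait for this specific command to complete. */
    clWaitForEvents(1, &done);
    clReleaseEvent(done);

    /* Option 2 (coarser): wait for everything queued so far to complete. */
    clFinish(queue);
}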