Critical timing with GTK - gtk3

I would like to set up time-critical timing in GTK. It is very important that the functions are called ON TIME, more so even than drawing the GUI. However, neither g_timeout_add nor g_idle_add is precise enough. The GLib documentation warns: "Note that timeout functions may be delayed, due to the processing of other event sources. Thus they should not be relied on for precise timing. After each call to the timeout function, the time of the next timeout is recalculated based on the current time and the given interval (it does not try to 'catch up' time lost in delays)." And idle functions are only called when nothing else is happening.
I have been considering a separate thread just for timing; are there any other options?
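For what it's worth, here is a minimal sketch of the dedicated-timing-thread idea, assuming GTK 3/GLib on Linux; the 1 ms period and the empty update_gui() callback are placeholders. The thread sleeps to an absolute deadline with clock_nanosleep() so scheduling jitter does not accumulate, does its time-critical work outside the main loop, and hands any GUI update back to GTK with g_main_context_invoke():

#include <gtk/gtk.h>
#include <time.h>

static gboolean update_gui(gpointer data)      /* runs on the GTK main thread */
{
    (void)data;
    /* ... touch widgets here ... */
    return G_SOURCE_REMOVE;
}

static gpointer timing_thread(gpointer data)
{
    struct timespec next;

    (void)data;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        /* advance the absolute deadline by 1 ms; lateness does not accumulate */
        next.tv_nsec += 1000000;
        if (next.tv_nsec >= 1000000000) {
            next.tv_nsec -= 1000000000;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

        /* do the time-critical work here, but make no GTK calls */

        /* defer any GUI update to the main loop */
        g_main_context_invoke(NULL, update_gui, NULL);
    }
    return NULL;
}

int main(int argc, char **argv)
{
    gtk_init(&argc, &argv);
    g_thread_new("timing", timing_thread, NULL);
    /* ... build the UI ... */
    gtk_main();
    return 0;
}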

Related

Does the sleep() function cause a timer interrupt upon completion?

Do the sleep-family functions (sleep(), nanosleep()) cause timer interrupts once they complete (i.e., are done sleeping)? If not, how does the OS know exactly when they are done? If so, I understand timers have a high interrupt priority. Does this mean a program using sleep() once awoken will likely cause another program running on one of the CPUs (in a multi-processor) to be removed in favor of the recently awoken program?
Does the sleep() function cause a timer interrupt upon completion?
Maybe.
For keeping track of time delays, there are two common ways it could be implemented:
a) A timer IRQ occurs at a fixed frequency (e.g. maybe every 1 millisecond). When the IRQ occurs the OS checks if any time delays expired and deals with them. In this case there's a compromise between precision and overhead (to get better precision you need to increase the "IRQs per second" which increases the overhead of dealing with all the IRQs).
b) The OS re-configures the timer to generate an IRQ when the soonest delay should expire whenever necessary (when the soonest delay is cancelled, a sooner delay is created, or the soonest delay expires). This has no "precision vs. overhead" compromise, but has more overhead for re-configuring the timer hardware. This is typically called "tickless" (as there's no regular/fixed frequency "tick").
Note that modern 80x86 systems have a local APIC timer per CPU that supports "IRQ on TSC deadline". For "tickless", this means you can normally get better than 1 nanosecond precision without much need for locks (using "per CPU" structures to keep track of time delays); and the cost of re-configuring the timer is very small (as the timer hardware is built directly into the CPU itself).
For "tickless" (which is likely much better for modern systems) you would end up with a timer IRQ when "sleep()" expires most of the time (unless some other delay expires at the same/similar time).
Does this mean a program using sleep() once awoken will likely cause another program running on one of the CPUs (in a multi-processor) to be removed in favor of the recently awoken program?
Whether a recently unblocked task preempts immediately depends on:
a) The scheduler design. For some schedulers (e.g. naive "round robin") it may never happen immediately.
b) The priorities of the unblocked task and the currently running task/s.
c) Optimizations. Task switches cost overhead, so it is practical to try to minimize the number of task switches (e.g. postponing/skipping a task switch if some other task switch is likely to happen soon anyway). There's also complexity involving load balancing, power management, cache efficiency, memory (NUMA, etc.) and other things that may be considered.
The Linux man page notes:
Portability notes
On some systems, sleep() may be implemented using alarm(2) and SIGALRM (POSIX.1 permits this); mixing calls to alarm(2) and sleep() is a bad idea.

What is the correct operation of a CANopen inhibit timer?

I understand that the operation of a CANopen inhibit timer is to ensure a minimum time between successive transmissions of the same message, but the specification does not make it clear what to do if the data changes during the inhibit time (and the transmission is on change-of-state). Should I buffer the data and transmit it when the inhibit timer expires, or discard it and wait for a change after the timer has expired?
My assumption would be, since it is not clearly defined, I can choose whichever approach I want, but I'd appreciate the input of any experienced architects / developers on this.
Thanks.
You're correct that the inhibit time is simply the minimum time between consecutive CAN frames with the same CAN-ID. The standard does not specify the behavior for multiple events during the inhibit time window, because it depends on the situation.
For services like NMT, EMCY and perhaps LSS, you'd want to buffer the messages and send them later. In this case the inhibit time is simply a means to help slow (or badly programmed) devices to handle short bursts of messages. I've seen devices that could only handle 3 CAN frames at once, so it's often necessary, but you would not want them to miss messages.
For event-driven Transmit-PDOs, it depends on what the PDO represents. If you use it to track state, it might make sense to drop events during the inhibit window. They're invalidated by subsequent events anyway. To ensure you always emit the latest state, you can store the most recent event and transmit it once the inhibit time has elapsed, or use the event-timer to ensure you're never too far behind. I've used this strategy in the past for analog inputs where line noise would sometimes cause event bursts.
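As an illustration of that last-value strategy, here is a small sketch; send_pdo(), now_ms(), the 50 ms inhibit time and the periodic tick handler are hypothetical stand-ins for your CAN driver, clock and main loop. While the inhibit timer runs, newer samples simply overwrite the pending one, and the newest value goes out once the timer expires:

#include <stdbool.h>
#include <stdint.h>

extern void     send_pdo(uint16_t value);   /* assumed CAN driver call */
extern uint32_t now_ms(void);               /* assumed monotonic millisecond clock */

#define INHIBIT_MS 50u

static uint16_t pending_value;
static bool     pending;
static uint32_t last_tx_ms;

void on_sample(uint16_t value)              /* called on every change of state */
{
    if (now_ms() - last_tx_ms >= INHIBIT_MS) {
        send_pdo(value);                    /* inhibit window has elapsed: send now */
        last_tx_ms = now_ms();
        pending = false;
    } else {
        pending_value = value;              /* overwrite any older pending sample */
        pending = true;
    }
}

void on_tick(void)                          /* called periodically from the main loop */
{
    if (pending && now_ms() - last_tx_ms >= INHIBIT_MS) {
        send_pdo(pending_value);            /* flush the newest sample after the inhibit time */
        last_tx_ms = now_ms();
        pending = false;
    }
}
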
If you use PDOs to track events (or state changes), you'd be better off buffering them so no events get lost. However, this can introduce potentially unbounded delays if the event period is shorter than the inhibit time.
For the products we're working on at Lely (dairy farm robots), we actually prefer to use SYNC-driven PDOs instead. It results in a much more predictable CAN bus load. And we don't have to track state at the receiver side because we receive a full update on every SYNC. However, the receiver is always one SYNC period behind the transmitter, so this may not be appropriate for your use case.

Figure out when a context switch is happening in Swift

To be honest, I don't know whether there is a solution to my question, but I'd like to detect, in Swift, when a context switch is happening.
Imagine a function that takes a long time to complete, such as a write operation on a remote server. I was wondering whether there is a way to find out when (at which line, at least) the thread executing that task performs a context switch because another task that has been waiting for a long time has to be executed.
Sorry if this seems like a silly question or if I made mistakes while trying to explain it.
EDIT:
I'm talking about context switches that happen automatically, requested by the scheduler. So imagine again that we are in the middle of this long function, which does tons of operations, and the scheduler gave the task a certain amount of time, for example 10 seconds, to complete. If the task runs out of time and doesn't finish, it is suspended and the thread executes, for example, a task from another process. When that ends, the scheduler might give the suspended job another try, and execution will resume where it was suspended (so it will read the value from the PC register and keep going).
You can absolutely get what you've asked for, but I have a feeling it's going to be much less useful to you than you may believe. The vast (vast, vast!) majority of context switches are going to occur at points that you probably would think of as "uninteresting." lstat64(), mach_vm_map_trap(), mach_msg_trap(), mach_port_insert_member_trap(), kevent_id(), the list goes on. Most threads spend most of their time deep in the OS stack. "A write operation on a remote server" isn't going to block after it takes some long period of time. It's going to proactively block itself because it knows it's going to take a (mind-bogglingly) long period of time.
Even so, you can certainly explore this with Instruments. Just choose the System Trace profile and it'll show you all the threads and the system cores and how your threads and all the other threads on the device are getting scheduled, every system call, etc. It's a huge amount of information, so you usually only profile for a few seconds at a time.
This is useful information if you're at the point where context switches are a major bottleneck. This might happen if you're dealing with excessive lock contention, or if you're thrashing your L1 cache because you keep getting interrupted by some other thread. So if you have some thread that you expect to stay running pretty continuously, and it's getting blocked, this is really valuable information. Or if you have two threads that you think should work back and forth smoothly, but they seem to be fighting (switching rapidly), then that's something you could work on. (But this is rarely one of the first places you'd look for performance tuning unless you're working on quite low-level code.)
From your description, I think you may have the wrong idea about the scheduler. Nothing in the scheduler is going to be on the order of 10 seconds. In the scheduler world, milliseconds are a lot of time. You should be thinking about things that take microseconds and even nanoseconds. If you're working on code that assumes fetching data from RAM is free, then you're on the wrong time scale. A network call is so ludicrously slow that you can basically estimate it as "forever." At the context-switch level you're looking at events like:
00:00.770.247 Virtual memory Zero Fill took 10.50 µs small_free_list_add_ptr ← (31 other frames)
I think it's kind of a cool question, but it's not entirely clear, so this answer is based on my understanding of it. Normally, when you implement context switching in a C or C++ program, you write code with mutexes or semaphores: processes or threads work in a critical section, and the switch happens either manually or through an interrupt. iOS has the same kind of concurrency primitives, such as DispatchSemaphore (a mutex is essentially a semaphore used as a lock). You can read the documentation for DispatchSemaphore.
To start, this is the DispatchSemaphore class declaration in Swift:
class DispatchSemaphore : DispatchObject
You can initialize it with an integer value, such as:
let semaphore = DispatchSemaphore(value: intValue)
If you want the mutex variant, you can use a lock object instead, such as:
let lock = NSLock()
When you lock it correctly, you are inside the critical section, and you leave it again by unlocking.
It obviously looks similar to POSIX's pthread_mutex_t.
You can manage the hand-off between threads around the lock or semaphore critical section like this:
lock.lock()
// critical section
// do the work; if it runs too long, unlock early, otherwise let it finish its job
lock.unlock()

_ = semaphore.wait(timeout: DispatchTime.distantFuture)
// critical section
// do the work; if it takes too long, signal here to leave early, otherwise let it finish
// for a normal job, signal() is simply called at the end
semaphore.signal()
The answer above covers context switching between threads.
In addition: when you ask about context switching, what it basically means is switching between threads or processes. So when you fetch data on a background thread and then apply it to a UITableView, you will probably call reloadData inside DispatchQueue.main.async. That is a common everyday example of this kind of switch.

Why is response time important in CPU scheduling?

I'm looking for an example of a job for which response time is important.
One definition of response time is:
The time taken in an interactive program from the issuance of a command to the commence of a response to that command.
I've read that response time is important for interactivity, but I can't understand why. If the job isn't fully completed, what output could be produced that would be of interest to a user?
Wouldn't the user only care about how soon a job finishes, as that's the first time any output is produced?
For example, consider these two possible schedulings of two jobs:
Case 1: |---B---|---A---|
Case 2: |-A-|---B---|-A-|
Suppose that jobs A and B are issued at the same time, A being a command typed in by the user and B being some background process.
The response time for job A, as I understand it, would be shorter in case 2. As job A finishes (and produces output) at the same time in both cases, I don't understand how the user benefits from (or even notices) the better response time in case 2.
When writing an operating system, one has to take into consideration what the intended audience will be. In some cases it matters most to finish jobs as quickly as possible (supercomputer systems), in some cases it matters most to be as responsive as possible (regular desktop systems), and in some cases it matters most to be as predictable as possible (real-time systems).
For finishing jobs as fast as possible, tasks should be interrupted as rarely as possible (so long intervals between task switches are the best option). Here response time doesn't really matter much. It should be noted that task switches usually take some time (thousands of CPU cycles, usually) due to having to save the state (including registers and paging structures) of the old task to memory and restore the state (including registers and paging structures) of the new task from memory. This also causes cache and TLB misses, since the cached information doesn't usually belong to the current process.
For being as responsive as possible, tasks should be interrupted as often as possible so the user doesn't experience lag. This is where response time is important. Note, however, that on interrupt-driven architectures (like x86) an interrupt from the keyboard or the mouse would automatically pause execution of the current task and call the interrupt handler, which processes the input and sends it to the appropriate program.
For being as predictable as possible, input should be processed neither too fast nor too slow. This means that response time is constrained in both directions, thus being much more important than in "as responsive as possible" designs. A misprediction can even be a fatal failure in mission-critical systems.
In a nutshell, importance of response time varies from design to design and can range from nearly unimportant to critical.
I think I have an answer to my own question. The problem was, I was just thinking about simple programs like ls that, once issued, run for some amount of time and then, when they're finished, deliver their first and only output.
However, suppose job A in the example from the question is a program with multiple print statements. Output will in that case be produced before the process is complete (and some of the printouts may well occur during the first scheduled burst). It would thus make sense for interactivity to want to begin running such a process as soon as possible.
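As a toy illustration of this, consider a program that produces visible output long before it finishes; if the scheduler gives it an early burst (case 2 above), the user sees the first lines sooner even though the total completion time is unchanged:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (int i = 1; i <= 5; i++) {
        printf("partial result %d\n", i);
        fflush(stdout);         /* make the output visible immediately */
        sleep(1);               /* stand-in for a burst of real work */
    }
    printf("done\n");
    return 0;
}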

What is the difference among sleep(), usleep() & [NSThread sleepForTimeInterval:]?

Can anybody explain the difference among sleep(), usleep() & [NSThread sleepForTimeInterval:]?
What is the best condition to use these methods?
sleep(3) is a POSIX standard library function that attempts to suspend the calling thread for the amount of time specified in seconds. usleep(3) does the same, except it takes a time in microseconds instead. Both are actually implemented with the nanosleep(2) system call.
The last method does the same thing except that it is part of the Foundation framework rather than being a C library call. It takes an NSTimeInterval that represents the amount of time to be slept as a double indicating seconds and fractions of a second.
For all intents and purposes, they all do functionally the same thing, i.e., attempt to suspend the calling thread for some specified amount of time.
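To make the relationship concrete, here is a small sketch of the same idea at the C level (the 250 ms value is arbitrary); the Foundation call [NSThread sleepForTimeInterval:0.25] would be the equivalent of the 250 ms lines below, and all of these may sleep slightly longer than requested:

#include <time.h>
#include <unistd.h>

int main(void)
{
    sleep(1);                               /* whole seconds only */
    usleep(250000);                         /* 250,000 microseconds = 250 ms */

    struct timespec ts = { 0, 250000000 };  /* 250,000,000 nanoseconds = 250 ms */
    nanosleep(&ts, NULL);                   /* the call both of the above wrap */
    return 0;
}
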
What is the best condition to use these methods?
Never
Or, really, pretty much almost assuredly never ever outside of the most unique of circumstances.
What are you trying to do?
On most OSs, sleep(0) and its variants can be used to improve efficiency in a polling situation to give other threads a chance to work until the thread scheduler decides to wake up the polling thread. It beats a full-on while loop. I haven't found much use for a non-zero timeout though, and Apple in particular has done a pretty good job of building an event-driven architecture that should eliminate the need for polling in most situations anyway.
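For completeness, a sketch of that polling pattern, with sched_yield() standing in for the "sleep(0)" idiom; the producer thread and its one-second delay are made up for the example:

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool work_ready;

static void *producer(void *arg)
{
    (void)arg;
    sleep(1);                               /* pretend to prepare some work */
    atomic_store(&work_ready, true);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);

    while (!atomic_load(&work_ready))
        sched_yield();                      /* yield instead of spinning flat out */

    puts("work is ready");
    pthread_join(t, NULL);
    return 0;
}
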
An example usage of sleep is the following scenario:
In a network simulation, we usually have events that are executed one by one by a scheduler, which executes them in an orderly fashion.
When an event has finished executing and the scheduler moves on to the next one, it compares the next event's execution time with the machine clock. If the next event is scheduled for a future time, the simulator sleeps until that real time is reached and then executes the next event.
From the Linux man pages:
The usleep() function suspends execution of the calling thread for (at least) usec microseconds. The sleep may be lengthened slightly by any system activity or by the time spent processing the call or by the granularity of system timers.
while sleep() delays the execution of a task (which could be a thread or anything else) for some time. Refer to 1 and 2 for more details about the functions.