Are I/O device polling intervals often consistent?

This is a quick question about common existing operating systems.
Is a polled I/O device (say at 120 Hz or 250 Hz) generally polled at a fixed rate, or are there usually considerable fluctuations in the polling interval? And if there are fluctuations, are they on the order of milliseconds or of micro/nanoseconds?

This depends upon the processor architecture and the system and application design. Your basic reference is the Wikipedia article on polling.
In an embedded system where the result and latency of polling a particular device may be the most important and central purpose of the system, you are likely to see a tight loop busy-waiting at processor instruction speeds (micro/nanoseconds) with low jitter. Even these intervals may not be completely deterministic, due to modern processor features such as branch prediction and speculative execution, depending on the surrounding code; see this relevant StackOverflow answer.
In a multitasking system doing lots of things and only occasionally polling for, say, keystrokes from a HID, there will of course be considerably higher latency, in units more like milliseconds. Tasks may switch, processes may be swapped in and out, etc.
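For intuition, here is a minimal sketch (plain Scala on the JVM; the 250 Hz rate and the sleep-based loop are assumptions for illustration) that measures the jitter of a polling loop. On a general-purpose multitasking OS, the worst-case deviation it prints is typically in the millisecond range, as described above.

object PollJitter {
  def main(args: Array[String]): Unit = {
    val periodNs = 4000000L              // 4 ms period, i.e. a 250 Hz polling rate
    var last = System.nanoTime()
    val deviations = (1 to 1000).map { _ =>
      Thread.sleep(4)                    // stand-in for "poll the device, then wait"
      val now = System.nanoTime()
      val dev = now - last - periodNs    // how far this interval missed the ideal period
      last = now
      dev
    }
    println(f"worst-case deviation: ${deviations.map(math.abs).max / 1e6}%.3f ms")
  }
}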
This is a quick answer to your quick question - trying to put you in the ballpark while making clear that there could be a lot at play here depending on your environment.


Akka IO app consumes 100% CPU

I am trying to profile an Akka app that is constantly at or near 100% CPU usage. I took a CPU sample using VisualVM. The sample indicates that two threads make up 98.9% of CPU usage, and that 79% of the CPU time was spent in a method on sun.misc.Unsafe. Other answers on SO say that this just means a thread is waiting, but in the native implementation layer (outside of the JVM).
In questions similar to mine, people have been told to look elsewhere without being given specifics. Where should I look to figure out what's causing the CPU spike?
The application is a server that primarily uses Akka IO to listen for TCP socket connections.
Without seeing any of the source code, or even knowing what IO channel you are talking about (sockets, files, etc), there is very little insight that anyone here can give you.
I do have some rather general suggestions though.
First, you should be using reactive techniques and reactive IO in your application. This issue could be occurring because you are polling the status of some resource in a tight loop, or using a blocking call when you should be using a reactive one. This tends to be an anti-pattern and a performance drain precisely because you can spend CPU cycles doing nothing but "actively waiting". I recommend double-checking for the following (the last item is sketched just after the list):
resource polling
blocking calls
system calls
disk flushes
waiting on a Future when it would be appropriate to map it instead
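To illustrate that last item, here is a minimal sketch (the lookup function is hypothetical) contrasting a blocking wait with the reactive alternative:

import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object FutureStyles {
  def lookup(id: Int): Future[String] = Future { s"record-$id" }   // hypothetical IO call

  // Anti-pattern: parks the calling thread until the result arrives.
  val blocking: String = Await.result(lookup(1), 5.seconds)

  // Reactive alternative: register a transformation and release the thread.
  val reactive: Future[String] = lookup(1).map(_.toUpperCase)
}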
Second, you should NOT be using mutexes or other thread-synchronization primitives in your application. If you are, then you might be suffering from a livelock. Unlike deadlocks, livelocks manifest with symptoms like 100% CPU usage, as threads constantly lock and unlock concurrency primitives in an attempt to "catch them all". Wikipedia has a nice technical description of what a livelock looks like. With Akka in place you shouldn't need mutexes or any other thread-synchronization primitives; if you are using them, you probably need to re-design your application.
Third, you should be throttling IO (as well as error handling like reconnection attempts). This issue could be occurring because your system lacks effective throttling. We often leave the bandwidth of data channels unconstrained, but this becomes an issue when a channel reaches 100% saturation and begins to steal resources from other parts of the system. This can happen, for example, if you are moving large files around without a reasonable limit.
Relatedly, you also need to throttle connection retries when you encounter errors, rather than retrying immediately. Lots of systems will attempt to reconnect to a server if they lose their connection. While normally desirable, this can lead to problematic behavior with a naive reconnection strategy. For example, imagine a network client written this way:
class MyClient extends Client {
  // ... other code ...
  def onDisconnect() = {
    reconnect()
  }
}
Every time the Client disconnects, for ANY reason, it will attempt to reconnect. You can see how this would cause a tight loop between the error-handling code and the Client if the Wi-Fi cut out or a network cable was unplugged.
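Here is a minimal sketch of the same client with exponential backoff instead (Client and reconnect come from the hypothetical example above; scheduleOnce is an assumed helper standing in for a real timer, e.g. Akka's system scheduler or a BackoffSupervisor):

import scala.concurrent.duration._

class MyBackoffClient extends Client {
  private var delay: FiniteDuration = 1.second

  def onDisconnect() = {
    scheduleOnce(delay) { reconnect() }   // wait before retrying instead of looping
    delay = (delay * 2).min(1.minute)     // double the delay, capped at one minute
  }

  def onConnect() = {
    delay = 1.second                      // reset the backoff once a connection succeeds
  }
}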
Fourth, your application should have well-defined data sources and sinks. Your issue could be caused by a "data loop": some set of Akka actors that just send messages to the next actor in the chain, with the last actor sending the message back to the first actor in the chain. Make sure you have a clear and definite way for messages to enter and exit your system.
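To make the failure mode concrete, here is a minimal, self-contained sketch of such a loop using classic Akka actors (all names are illustrative): two actors forward every message to each other, so a single injected message circulates forever and keeps the dispatcher spinning.

import akka.actor.{Actor, ActorRef, ActorSystem, Props}

class Hop extends Actor {
  private var next: ActorRef = context.system.deadLetters
  def receive = {
    case ref: ActorRef => next = ref   // wiring step
    case msg           => next ! msg   // forward everything; nothing ever exits
  }
}

object LoopDemo extends App {
  val system = ActorSystem("loop-demo")
  val a = system.actorOf(Props[Hop](), "a")
  val b = system.actorOf(Props[Hop](), "b")
  a ! b                                // close the ring: a forwards to b...
  b ! a                                // ...and b forwards back to a
  a ! "hello"                          // this message now circulates forever
}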
Fifth, use appropriate profiling and instrumentation for your application. Instrument your application using Kamon or Coda Hale's Metrics library.
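For instance, a minimal instrumentation sketch using Coda Hale's Metrics library (the metric names and the handler are illustrative, not from the original application):

import com.codahale.metrics.{ConsoleReporter, MetricRegistry}
import java.util.concurrent.TimeUnit

object Instrumented {
  val registry   = new MetricRegistry()
  val accepted   = registry.meter("tcp.connections.accepted")
  val handleTime = registry.timer("tcp.request.handling")

  def handle(request: Array[Byte]): Unit = {
    accepted.mark()
    val ctx = handleTime.time()
    try {
      // ... actual request handling ...
    } finally ctx.stop()
  }

  // Dump the collected metrics to stdout every 10 seconds.
  ConsoleReporter.forRegistry(registry)
    .convertDurationsTo(TimeUnit.MILLISECONDS)
    .build()
    .start(10, TimeUnit.SECONDS)
}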
Finding an appropriate profiler will be more difficult, since we as a community have a long way to go in developing mature tools for reactive applications. Personally I have found VisualVM useful, but not always overwhelmingly helpful for detecting CPU-bound code paths. The issue is that sampling profilers can only collect data when the JVM reaches a safepoint, which can bias the samples toward certain code paths. The fix is to use a profiler that supports AsyncGetCallTrace.
Best of luck! And please add more context if you can.

Basics of Real Time OS

I am trying to learn RTOS concepts from scratch, and I am using freeRTOS.org as a reference. I have found that site to be one of the best resources for learning about RTOSes. However, I have some doubts, and I have not been able to find exact answers to them.
1) How can I find out whether a device has real-time capability, e.g. why does one controller have it (TI Hercules) while another doesn't (MSP430)?
2) Does that depend upon the architecture of the core (the ARM Cortex-R CPU in the TI Hercules TMS570)?
I know these questions may be a nuisance, but I don't know how else to get answers to them.
Thanks in advance
EDIT:
One more query: what is meant by the "OS" in RTOS? Is it an OS in the same sense as other operating systems, or is it just a collection of source code files for the APIs?
Figuring out whether a device has "real-time" capability is somewhat arbitrary and depends on your project's timing requirements. If your timing requirements are very strict, you'll want to use a faster microcontroller/processor.
Using an RTOS (e.g. FreeRTOS, eCos, or uC/OS-III) can help ensure that a given task will execute at a predictable time. The FreeRTOS website provides a good discussion of what operating systems are and what it means for an operating system to claim real-time capabilities: http://www.freertos.org/about-RTOS.html
You can also see from the ports pages of uC/OS-III and FreeRTOS that they run on a wide variety of target microcontrollers/microprocessors.
Real-time capability is a matter of degree. A 32-bit DSP running at 1 GHz has more real-time capability than an 8-bit microcontroller running at 16 MHz. The more powerful microcontroller could be paired with faster memories and ports and could manage applications requiring large amounts of data and computations (such as real-time video image processing). The less powerful microcontroller would be limited to less demanding applications requiring a relatively small amount of data and computations (perhaps real-time motor control).
The MSP430 has real-time capabilities, and it's used in a variety of real-time applications. Many RTOSes have been ported to the MSP430, including FreeRTOS.
When selecting a microcontroller for a real-time application you need to consider the data bandwidth and computational requirements of the application. How much data needs to be processed in what amount of time? Also consider the range and precision of the data (integer or floating point). Then figure out which microcontroller can support those requirements.
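As a back-of-the-envelope illustration (every number below is an assumption, not from the question), the feasibility check can be as simple as comparing the work per second against the processor's budget:

object LoadEstimate extends App {
  val sampleRateHz = 48000.0   // e.g. one channel of audio input
  val opsPerSample = 200.0     // assumed cost of the per-sample processing
  val cpuMips      = 16.0      // e.g. a small 16 MHz micro at roughly 1 op per cycle

  val requiredMips = sampleRateHz * opsPerSample / 1e6   // 9.6 MIPS
  val cpuLoad      = requiredMips / cpuMips              // 0.6, i.e. 60% of the CPU

  println(f"estimated CPU load: ${cpuLoad * 100}%.0f%%")
  // 60% load leaves little headroom for the RTOS tick, other tasks and
  // interrupt handling, so a faster part would be the safer choice here.
}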
While the Cortex-R is optimised for hard real-time, that does not imply that other processors are unsuited to real-time applications, or even that one of them isn't better suited to a specific application. What you need to consider is whether a particular combination of RTOS and processor will meet the real-time constraints of your application; and even then the most critical factor is your software design rather than the platform.
The main thing you want from an RTOS is determinism; most other features are already available in general-purpose operating systems.
The "OS" part of RTOS means Operating System and, simply put, as with all other operating systems, an RTOS provides the infrastructure for managing processor resources so that you work at a higher level when designing your application. The OS exposes those facilities through an API, which gives you semaphores, message queues, mutexes, and so on.
An RTOS has one requirement to be an RTOS: it must be pre-emptive. This means it must support task priorities, so that when a higher-priority task becomes ready to run (one of the possible task states), the scheduler switches the current context to that task.
This has two implications. One is the requirement for a precise, dedicated timer - the tick timer. The other is that context switching carries a considerable memory-operation overhead: the current CPU state (or states, in the case of multi-core SoCs) must be saved into the pre-empted task's context, and the context of the newly ready task must be restored into the CPU.
ARM processors already provide a system timer (SysTick, on Cortex-M cores) intended for dedicated use as the OS tick timer. Not so long ago, the tick timer had to be implemented with a regular, non-dedicated timer.
One optimization in cores designed for real-time use is the ability to save/restore the CPU context with minimal code, so a context switch takes much less execution time than on regular processors.
It is possible to implement an RTOS on nearly any processor, and there are implementations targeted at resource-constrained cores; you mainly need a timer with interrupt capability and some RAM. If the CPU is fast you can run the OS tick at a high rate (sub-millisecond in some real-time applications with DSPs), or at a lower rate, such as 10-100 ticks per second, for applications with loose timing requirements on low-end CPUs.
Some theoretical knowledge would be quite useful too, e.g. figuring out whether a given task set is schedulable under a given scheduling approach (sometimes it is not), the differences between static-priority and dynamic-priority scheduling, the priority-inversion problem, and so on.
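For example, here is a minimal sketch of the classic Liu and Layland schedulability test for fixed-priority rate-monotonic scheduling (the task set is invented for illustration): n periodic tasks are schedulable if total utilisation does not exceed n * (2^(1/n) - 1).

object RmsCheck extends App {
  // (worst-case execution time, period) in milliseconds
  val tasks = Seq(
    (1.0, 10.0),   // 1 ms of work every 10 ms
    (2.0, 40.0),
    (5.0, 50.0)
  )

  val utilisation = tasks.map { case (c, t) => c / t }.sum   // 0.25
  val n     = tasks.size
  val bound = n * (math.pow(2.0, 1.0 / n) - 1.0)             // about 0.78 for n = 3

  println(f"U = $utilisation%.3f, bound = $bound%.3f, schedulable = ${utilisation <= bound}")
}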

Partial FPGA reconfiguration and performance

These questions may sound very esoteric to most, but I'd really like to know more about this stuff.
1st
I'm wondering how long it takes for an FPGA to reconfigure itself, from the time its modelled circuit is powered down to the time the new one is in place and operational.
I am aware that Place-&-Route is a costly process, but that is because the P&R tools must decide where to put the components and how to route them.
Consider that P&R analysis is done, and all that's left is actually reconfiguring the FPGA: is that a slow process by itself? Can it be done hundreds or thousands of times per second?
There are several implications of such a possibility that I'm curious about. To name two: it could allow us to serve an FPGA to multiple concurrent "clients" (the same way a GPU is capable of rendering for multiple different programs), or provide extremely fine-tuned circuits for long number-crunching jobs made up of numerous well-defined but highly asynchronous processing stages (think: complex Haskell programs).
2nd
Another thing I'd like to ask is whether an FPGA can be partially reconfigured in real time, while the modelled circuit is powered and operational, as long as the parts being reconfigured are powered off, of course.
Several interesting implications would arise from such a possibility as well, for example allowing for real-time reconfigurable buses, hardware emulation of neural networks, etc.
Are such things being extensively researched right now? And how likely are they to be researched in the future?
The reconfiguration time depends on a lot of things. The big ones are:
how much of the FPGA you are reconfiguring (how many bits need to go in)
how fast you can get the data in (quad-SPI seems to be the favoured way of bringing FPGAs up fast nowadays)
Big FPGAs can take many tens to hundreds of milliseconds to reconfigure completely.
A small configuration can be achieved within the PCI Express startup time (100 ms IIRC), so that a pure-FPGA card can be enumerated in time; the rest of the configuration can then be loaded later.
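As a rough worked example (all figures are assumed for illustration), full-reconfiguration time is essentially bitstream size divided by configuration bandwidth:

object ConfigTime extends App {
  val bitstreamBits = 30e6   // a mid-size FPGA with a ~30 Mbit bitstream
  val spiClockHz    = 50e6   // quad-SPI at 50 MHz...
  val bitsPerClock  = 4.0    // ...shifting 4 bits per clock, so 200 Mbit/s

  val seconds = bitstreamBits / (spiClockHz * bitsPerClock)
  println(f"~${seconds * 1000}%.0f ms per full reconfiguration")   // ~150 ms
  // At ~150 ms per full load, thousands of reconfigurations per second are
  // out of reach; partial reconfiguration moves far fewer bits.
}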
In terms of very dynamic reconfiguration, it's more likely that the bottleneck is swapping in and out the various data sets that go with each bitstream - I imagine anything that needs a lot of FPGA to accelerate it comes with a pretty large dataset... but you might have other applications in mind?

Programming considerations for virtualized applications

There are lots of questions on SO asking about the pros and cons of virtualization for both development and testing.
My question is subtly different - in a world in which virtualization is commonplace, what are the things a programmer should consider when it comes to writing software that may be deployed into a virtualized environment? Some of my initial thoughts are:
Detecting if another instance of your application is running
Communicating with hardware (physical/virtual)
Resource throttling (app written for multi-core CPU running on single-CPU VM)
Anything else?
You have most of the basics covered with the three broad points. Watch out for:
Hardware communication issues. Disk access speeds vary enormously (and can have unusually high extremes - imagine a VM that is suspended for 3 days in the middle of a disk write...). Network access may be interrupted or return unusual responses
Fancy pointer arithmetic. Try to avoid it
Heavy reliance on unusual or uncommon low-level/assembly instructions
Reliance on machine clocks. Remember that calls to the clock, and measured time intervals, may regularly return unusual values when running on a VM (see the sketch after this list)
Single-CPU apps may find themselves running on multi-CPU machines that do funky things like work stealing
Corner cases and unusual failure modes are much more common. On a real machine you might not have to worry much that the network card will disappear in the middle of a communication; on a virtual one you should
Manual management of resources (memory, disk, etc.). The more automated the work, the better the virtual environment is likely to handle it. For example, you might be better off writing the application in a memory-managed language/environment instead of in C
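On the clock point in particular, here is a minimal JVM-flavoured sketch of the usual defence: never derive durations from the wall clock, which the host (or NTP) can step at any moment, but from a monotonic source instead.

object ElapsedTime extends App {
  def doSomeWork(): Unit = Thread.sleep(100)   // hypothetical stand-in for real work

  val wallStart = System.currentTimeMillis()   // wall clock: can be stepped under a VM
  val monoStart = System.nanoTime()            // monotonic: only differences are meaningful

  doSomeWork()

  // May be wildly wrong (even negative) if the clock was adjusted meanwhile:
  val wallElapsedMs = System.currentTimeMillis() - wallStart
  // Safe for measuring intervals:
  val monoElapsedMs = (System.nanoTime() - monoStart) / 1e6

  println(s"wall: $wallElapsedMs ms, monotonic: $monoElapsedMs ms")
}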
In my experience there are really only a couple of things you have to care about:
Your application should not fail because of a CPU-time shortage (i.e. don't use overly tight timeouts)
Don't use low-priority always-running processes to perform tasks in the background
The clock may run unevenly
Don't trust what the OS says about system load
Almost any other issue should not be handled by the application but by the virtualizer, the host OS or your preferred sys-admin :-)

I'm writing an application that implements a questionnaire. Does it qualify as a real-time application?

Keeping it simple, I have a server and a client. The server sends questions one by one, and the client sends back the answers as soon as they are given.
So, would you say this application is real time?
Based on this quote from Wikipedia, which summarizes my understanding of what a real-time application is:
"A system is said to be real-time if the total correctness of an operation depends not only upon its logical correctness, but also upon the time in which it is performed. The classical conception is that in a hard real-time or immediate real-time system, the completion of an operation after its deadline is considered useless - ultimately, this may cause a critical failure of the complete system. A soft real-time system on the other hand will tolerate such lateness, and may respond with decreased service quality (e.g., omitting frames while displaying a video)."
I would say no, it is not real-time.
No. Real-time systems are ones where the OS/application has to respond to the environment within a known time period, for example an embedded flight-control system on a fighter jet.
Wikipedia has a fairly good article on Real-time computing.
If you are using a protocol like TCP/IP for the communication, it isn't a real-time system, because such communication links are not deterministic in their response time by nature. The only sure thing is that the message will arrive; when? Who knows...