This is a quick question about common existing operating systems.
Is a polled I/O device (say at 120 Hz or 250 Hz) generally polled at a fixed rate, or are there usually considerable fluctuations in the polling interval? And if there are fluctuations, are they on the order of milliseconds or of micro/nanoseconds?
This depends upon the processor architecture, system and application design. Your basic reference is this Wikipedia article.
In an embedded system where the result and latency of polling a particular device may be the most important and central purpose of the system, you are likely to see a tight loop busy-waiting at processor instruction speeds (micro/nanoseconds) with low jitter. Even these intervals may not be completely deterministic, due to modern processor features such as branch prediction and speculative execution, depending on the surrounding code; see this relevant StackOverflow answer.
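For concreteness, here is a minimal bare-metal sketch of such a busy-wait poll in C. The register addresses and the ready bit are hypothetical, invented for illustration; real values would come from a device's datasheet.

```c
#include <stdint.h>

/* Hypothetical memory-mapped device registers -- addresses and
 * bit layout are invented for illustration. */
#define DEV_STATUS  (*(volatile uint32_t *)0x40001000u)
#define DEV_DATA    (*(volatile uint32_t *)0x40001004u)
#define DEV_READY   (1u << 0)

/* Busy-wait until the device flags new data, then read it.
 * Each iteration costs only a load, a mask, and a branch, so the
 * polling interval is on the order of nanoseconds -- plus whatever
 * jitter branch prediction and the memory system introduce. */
uint32_t poll_device(void)
{
    while ((DEV_STATUS & DEV_READY) == 0)
        ;   /* tight loop: nothing else runs here */
    return DEV_DATA;
}
```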
In a multitasking system that is doing lots of things and only occasionally polling for, say, keystrokes from an HID, there will of course be considerably higher latency, in units more like milliseconds: tasks may switch, processes may be swapped in and out, etc.
This is a quick answer to your quick question - trying to put you in the ballpark while making clear that there could be a lot at play here depending on your environment.
I am a new student taking an OS course. I already know that the OS provides better communication between applications and hardware in a modern computer. But sometimes it seems it would be more time-efficient if applications could control the hardware directly. May I ask whether that is possible?
Yes, it is possible, but that would give you a single-application computer: the computer could only run that one particular application.
Applications handling hardware directly can be faster, since the overhead of the OS's management is removed.
You can take the example of DMA (Direct Memory Access). This feature is useful whenever the CPU cannot keep up with the rate of data transfer, or when the CPU needs to perform other work while waiting for a relatively slow I/O data transfer.
But you should keep in mind the importance of the operating system in handling the other hardware: not everything can be managed that trivially, and much of it needs processing and decision-making.
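To make "controlling hardware directly" concrete, here is a minimal bare-metal sketch in C: a program toggling an output pin by writing a memory-mapped register itself, with no driver or OS in between. The register addresses are hypothetical, invented for illustration.

```c
#include <stdint.h>

/* Hypothetical memory-mapped GPIO registers -- addresses invented
 * for illustration; real ones come from the chip's datasheet. */
#define GPIO_DIR (*(volatile uint8_t *)0x25u)   /* direction register */
#define GPIO_OUT (*(volatile uint8_t *)0x28u)   /* output register    */

int main(void)
{
    GPIO_DIR |= (1u << 3);          /* configure pin 3 as an output */
    for (;;) {
        GPIO_OUT ^= (1u << 3);      /* toggle the pin directly: no
                                       syscall, no driver, no OS
                                       overhead at all */
    }
}
```

This is exactly the single-application situation described above: the program owns the pin outright, and nothing arbitrates access if a second program wants it.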
These questions may sound very esoteric to most, but I'd really like to know more about this stuff.
1st
I'm wondering how long it takes for an FPGA to reconfigure itself, from the time its modelled circuit is powered down to the time a new one is in place and operational.
I am aware that Place-&-Route is a costly process, but that is because the P&R tools must decide where to put the components and how to route them.
Consider that P&R analysis is done, and all that's left is actually reconfiguring the FPGA: is that a slow process by itself? Can it be done hundreds or thousands of times per second?
There are several implications of such a possibility that I'm curious about. To name two: it could allow us to serve an FPGA to multiple concurrent "clients" (the same way a GPU is capable of rendering stuff for multiple different programs), or provide extremely fine-tuned circuits for long number-crunching jobs with well-defined but numerous, highly asynchronous processing stages (think: complex Haskell programs).
2nd
Another thing I'd like to ask is whether an FPGA can be partially reconfigured in realtime, while the modelled circuit is powered and operational - as long as the parts being reconfigured are powered off, of course.
Several interesting implications would arise from such a possibility as well, for example allowing for realtime reconfigurable buses, hardware emulation of neural networks, etc.
Are such things being extensively researched right now? And how likely are they to be researched in the future?
The reconfiguration time depends on a lot of things. The big ones are:
how much of the FPGA you are reconfiguring (how many bits need to go in)
how fast you can get the data in (quad-SPI seems to be the favoured way of bringing FPGAs up fast nowadays)
Big FPGAs can take many tens to hundreds of milliseconds to completely reconfigure.
A small configuration can be achieved within the PCI Express startup time (100 ms IIRC), so that a pure-FPGA card can be enumerated in time, and then the rest of the config can be loaded later.
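As a back-of-envelope check - all numbers here are illustrative assumptions, not from any particular datasheet - the load time is just bitstream size divided by configuration bandwidth:

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative numbers only -- check your FPGA's datasheet. */
    double bitstream_bits = 20e6;   /* ~20 Mbit full bitstream    */
    double qspi_hz        = 50e6;   /* 50 MHz quad-SPI clock      */
    double bits_per_clock = 4.0;    /* "quad" = 4 data lines      */

    double seconds = bitstream_bits / (qspi_hz * bits_per_clock);
    printf("full reconfig: ~%.0f ms\n", seconds * 1e3);   /* ~100 ms */

    /* A partial bitstream covering a small region is proportionally
     * faster, which is why hundreds or thousands of swaps per second
     * are only plausible for small partial regions. */
    printf("1%% partial region: ~%.1f ms\n", seconds * 0.01 * 1e3);
    return 0;
}
```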
In terms of very dynamic reconfiguration, it's more likely that the bottleneck is swapping in and out the various data sets that go with each bitstream - I imagine anything which needs a lot of FPGA to accelerate it has a pretty large dataset... but you might have other applications in mind?
I have an AT90USB162 AVR chip on which I want to run a multitasking RTOS, so I am evaluating possible RTOSes to use with my AVR chip. Which multitasking RTOSes support the AVR? Maybe QNX? (Is it possible to run a QNX kernel on an AVR microchip?)
Thanks in advance.
The Atmel AT90USB162 is an 8-bit AVR RISC-based microcontroller -- QNX would be a stretch, and AVR is not in their BSP directory
Micrium supports AVR with uC/OS-II
FreeRTOS also supports AVR
When you say "RTOS", I presume you mean pre-emptive multi-tasking? I'm guessing (since this is an 8-bit AVR) you don't need a filesystem, network stack, etc.?
If you're looking for a tiny, pre-emptive multi-tasking kernel, you might want to check out the Quantum Platform - I've used it on very resource-constrained platforms like AVR & MSP430. Co-workers have used it on 8-bit 8051 and HC11 variants as well.
The QP's preemptive kernel (QK) is a run-to-completion kernel, which reduces its stack (RAM) requirements and makes context switching less resource-intensive (no TCBs, less context to save & restore).
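To illustrate what run-to-completion buys you - and to be clear, this is a generic sketch of the idea, not the actual QP/QK API - here is roughly what such a kernel's core looks like. Every task is a function that returns, so all tasks share one stack and there is no per-task context to save:

```c
#include <stdint.h>

#define NUM_PRIO 8

/* One handler per priority level; a set bit in `ready` means that
 * level has work pending. No TCBs, no per-task stacks.
 * (Interrupt locking around `ready` is omitted for brevity.) */
static void (*task[NUM_PRIO])(void);
static volatile uint8_t ready;        /* bit i = priority i is ready */

/* ISRs or tasks post work by setting a ready bit. */
void post(uint8_t prio) { ready |= (uint8_t)(1u << prio); }

/* The scheduler: always run the highest-priority ready task to
 * completion on the single shared stack, then pick again. */
void scheduler(void)
{
    for (;;) {
        uint8_t pending = ready;
        if (pending == 0)
            continue;                 /* idle: could sleep the CPU here */
        uint8_t prio = NUM_PRIO - 1;  /* find the highest set bit */
        while (!(pending & (uint8_t)(1u << prio)))
            prio--;
        ready &= (uint8_t)~(1u << prio);
        task[prio]();                 /* runs to completion and returns */
    }
}
```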
There is a QP/C variant, which is "small", and a QP-nano variant, which is "tiny". Since those terms are absolutely meaningless without numbers, the QP-nano page has a comparison of kernel types & their typical sizes. For example (minimum figures provided): typical RTOS, 10K ROM, 10K RAM; QP/C - 8K ROM, 1K RAM; QP-nano - 2K ROM, 100 bytes of RAM.
The good thing is that all the code is available so you can download & try it & see for yourself.
QNX - not a chance! QNX is a relatively large and sophisticated OS for 32-bit devices with an MMU, providing not only kernel-level scheduling but also file systems, fault-tolerant networking, a POSIX API, a GUI, etc. Its most significant feature is its support for memory protection - each process runs in its own virtual address space - so it only runs on devices with the appropriate hardware support.
What features do you want from your OS? On an 8-bit device it is only reasonable to expect basic priority-based pre-emptive scheduling and IPC. Other services such as networking, filesystem, USB etc. are usually add-ons from the RTOS vendor, or you must integrate them yourself from third-party code.
The obvious choice if you want to spend no money is FreeRTOS. It is competent, though in some ways unconventional architecturally, even if fairly conventional at the API level. In my tests on ARM it had slower context-switch times than the other kernels I compared it with, but that may not be the case on AVR, and it would only be an issue if you require real-time response times on the order of a few microseconds. AVR has a rather large register set, so context switches are typically expensive in any case.
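For flavour, here is a minimal FreeRTOS sketch: two tasks plus the scheduler. The stack depths, priorities and delays are illustrative, it assumes a working AVR port and FreeRTOSConfig.h, and the LED/peripheral details are left out.

```c
#include "FreeRTOS.h"
#include "task.h"

/* Two trivial tasks; each gets its own stack and priority. */
static void vBlinkTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        /* toggle an LED here (port/pin details omitted) */
        vTaskDelay(pdMS_TO_TICKS(500));
    }
}

static void vPollTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        /* poll a peripheral here */
        vTaskDelay(pdMS_TO_TICKS(10));
    }
}

int main(void)
{
    xTaskCreate(vBlinkTask, "blink", configMINIMAL_STACK_SIZE,
                NULL, tskIDLE_PRIORITY + 1, NULL);
    xTaskCreate(vPollTask,  "poll",  configMINIMAL_STACK_SIZE,
                NULL, tskIDLE_PRIORITY + 2, NULL);

    vTaskStartScheduler();   /* never returns if all went well */
    for (;;) {}
}
```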
Atmel have a list of third-party support including RTOS at http://www.atmel.com/products/AVR/thirdparty.asp#. They list the following:
CMX Systems, Inc: CMX-RTX, CMX-Tiny+ (Add-ons: CMX-MicroNet, CMX-FFS)
FreeRTOS.org: FreeRTOS
Micriµm, Inc: µC/OS-II
Nut/OS: RTOS and TCP/IP stack with a Posix-like API.
SEGGER: embOS
I have personal experience of CMX-Tiny+ (on dsPIC), embOS (on ARM), FreeRTOS (on ARM), and µC/OS-II. They are all competent; µC/OS-II has the minor restriction of allowing only a single task at each priority level (no round-robin scheduling), but consequently probably has faster context switches. In the case of embOS I have successfully integrated third-party file-system and USB code, though the vendor has their own add-ons for these as well.
Though not a direct answer to your question: this being an 8-bit controller with limited resources, think about the advantages before committing to an OS layer. An OS layer is beneficial mainly when the project has to handle major subsystems that are tedious to code and maintain, e.g. file system, graphics, audio, networking, etc.
Since most of the suppliers provide an integrated development environment and standard libraries, and moreover you can write code in high-level languages like C and C++, for simple control tasks sticking to your own framework will be much more manageable.
Atomthreads is a lightweight RTOS with AVR support. It supports:
Preemptive scheduler with 255 priority levels
Round-robin at same priority level
Semaphore
Mutex
Message Queue
Timers
It is open source and has about 1k lines of code. By comparison, the demo project for AVR built with Eclipse produces a .bin file of 96 to 127 KB. Of course FreeRTOS has more features (like memory management, including dynamic memory) and better security; but if you only need multi-threading, Atomthreads is nice.
Here is a comprehensive comparison between multiple RTOSs.
There are lots of questions on SO asking about the pros and cons of virtualization for both development and testing.
My question is subtly different - in a world in which virtualization is commonplace, what are the things a programmer should consider when it comes to writing software that may be deployed into a virtualized environment? Some of my initial thoughts are:
Detecting if another instance of your application is running
Communicating with hardware (physical/virtual)
Resource throttling (app written for multi-core CPU running on single-CPU VM)
Anything else?
You have most of the basics covered with the three broad points. Watch out for:
Hardware-communication-related issues. Disk access speeds vary vastly (and may have unusually high extremes - imagine a VM that is shut down for 3 days in the middle of a disk write...). Network access may be interrupted or return unusual responses
Fancy pointer arithmetic. Try to avoid it
Heavy reliance on uncommon low-level/assembly instructions
Reliance on machine clocks. Remember that any calls you make to the clock, and any time intervals, may regularly return unusual values when running on a VM (see the sketch after this list)
Single-CPU apps may find themselves running on multi-CPU machines that do funky things like work stealing
Corner cases and unusual failure modes are much more common. You might not have to worry as much that the network card will disappear in the middle of your communication on a real machine, as you would on a virtual one
Manual management of resources (memory, disk, etc...). The more automated the work, the better the virtual environment is likely to be at handling it. For example, you might be better off using a memory-managed type of language/environment, instead of writing an application in C.
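On the clock point, here is a hedged POSIX sketch of one defensive habit: measure elapsed time with the monotonic clock rather than wall time, and keep timeouts generous, since a VM can be paused or migrated at any moment. (The 5-second threshold is an arbitrary placeholder.)

```c
#include <stdio.h>
#include <time.h>

/* Elapsed seconds via the monotonic clock, which host clock
 * adjustments cannot send backwards (unlike wall-clock time). */
static double elapsed(const struct timespec *start)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (double)(now.tv_sec - start->tv_sec)
         + (double)(now.tv_nsec - start->tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec start;
    clock_gettime(CLOCK_MONOTONIC, &start);

    /* ... do work ... */

    /* Keep timeouts generous: a VM may be paused arbitrarily long,
     * so "this never takes more than 50 ms" is a bad bet. */
    if (elapsed(&start) > 5.0)
        fprintf(stderr, "warning: step took unusually long\n");
    return 0;
}
```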
In my experience there are really only a couple of things you have to care about:
Your application should not fail because of a CPU-time shortage (e.g. timeouts that are too tight)
Don't use low-priority always-running processes to perform tasks in the background
The clock may run unevenly
Don't trust what the OS says about system load
Almost any other issue should not be handled by the application but by the virtualizer, the host OS or your preferred sys-admin :-)