Why is response time important in CPU scheduling?

I'm looking for an example of a job for which response time is important.
One definition of response time is:
The time taken in an interactive program from the issuance of a command to the commencement of a response to that command.
I've read that response time is important for interactivity, but I can't understand why. If the job isn't fully completed, what output could be produced that would be of interest to a user?
Wouldn't the user only care about how soon a job finishes, as that's the first time any output is produced?
For example, consider these two possible schedulings of two jobs:
Case 1: |---B---|---A---|
Case 2: |-A-|---B---|-A-|
Suppose that job A and B are issued at the same time, A being a command typed in by the user and B being some background process.
The response time for job A as I understand it would be shorter in case 2. As job A finishes (and produces output) at the same time in the two cases, I don't understand how the user benefits from (or even notices) the better response time in case 2.
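To put entirely made-up numbers on the diagrams: suppose both jobs arrive at t = 0, A needs 6 units of CPU and B needs 8. In Case 1, A first runs at t = 8, so its response time is 8 and it finishes at t = 14. In Case 2, A runs immediately (response time 0), B runs from t = 3 to t = 11, and A finishes its remaining 3 units at t = 14. A's completion time is 14 either way; only the response time differs.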

When writing an operating system, one has to take into consideration what the intended audience will be. In some cases it matters most to finish jobs as quickly as possible (supercomputer systems), in some cases it matters most to be as responsive as possible (regular desktop systems), and in some cases it matters most to be as predictable as possible (real-time systems).
For finishing jobs as fast as possible, tasks should be interrupted as rarely as possible (so long intervals between task switches are the best option). Here response time doesn't really matter much. It should be noted that task switches usually take some time (typically thousands of CPU cycles) due to having to save the state (including registers and paging structures) of the old task to memory and restore the state (including registers and paging structures) of the new task from memory. This also causes cache and TLB misses, since the cached information doesn't usually belong to the current process.
For being as responsive as possible, tasks should be interrupted as often as possible so the user doesn't experience lag. This is where response time is important. Note however that on interrupt-driven architectures (like x86) an interrupt from the keyboard or the mouse automatically pauses execution of the current task and calls the interrupt handler, which processes the input and sends it to the appropriate program.
For being as predictable as possible, input should be processed neither too fast nor too slow. This means that response time is constrained from both sides, thus being much more important than in "most responsive possible" designs. A missed timing constraint can even be a fatal failure in mission-critical systems.
In a nutshell, the importance of response time varies from design to design and can range from nearly unimportant to critical.

I think I have an answer to my own question. The problem was, I was just thinking about simple processes like ls that, once issued, run for some amount of time and then, when finished, deliver their first and only output.
However, suppose job A in the example from the question is a program with multiple print statements. Output will in that case be produced before the process is complete (and some of the printouts may well occur during the first scheduled burst). It would thus make sense for interactivity to want to begin running such a process as soon as possible.
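To make that concrete, here is a small, hedged illustration (not from the original question) of a process that produces visible output during its very first scheduled burst, long before it terminates:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (int step = 1; step <= 5; step++) {
        printf("finished step %d of 5\n", step);   /* output the user can see right away */
        fflush(stdout);
        sleep(1);                                  /* stand-in for a longer burst of work */
    }
    return 0;
}
Starting such a process as early as possible means the user sees "finished step 1 of 5" almost immediately, even if the total completion time is unchanged.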

What happens with nested branches and speculative execution?

Alright, so I know that if a particular conditional branch has a condition that takes time to compute (memory access, for instance), the CPU assumes a condition result and speculatively executes along that path. However, what would happen if, along that path, yet another slow conditional branch pops up (assuming, of course, that the first condition hasn't been resolved yet and the CPU can't just commit the changes)? Does the CPU just speculate inside the speculation? What happens if the last condition is mispredicted but the first wasn't? Does it just roll back all the way?
I'm talking about something like this:
if (value_in_memory == y) {
    // computations
    if (another_val_memory == x) {
        // computations
    }
}
Speculative execution is the regular state of execution, not a special mode that an out of order CPU enters when it sees a branch and then leaves when the branch is no longer in flight.
This is easier to see if you consider that it's not just branches that can fault, but many instructions, including those that access memory, have restrictions on their input values, etc. So any substantial out of order execution implies constant speculation, and CPUs are built around that idea.
So "nested branches" doesn't end up being special in that sense.
Now, modern CPUs have a variety of methods for quick branch misprediction recovery, faster than recovery from other types of faults¹. For example they may snapshot the state of the register mapping at some branches, to allow recovery to start before the branch is at the head of the reorder buffer. Since it is not always feasible to snapshot at all branches, there might be complicated heuristics involved to decide where to take snapshots.
I mention this last part because it is one way in which nested branches might matter: when there are lots of branches in flight, you might hit some microarchitectural limits related to the tracking of these branches for recovery purposes. For more details, you can look through patents for "branch order buffer" (for Intel techniques, but there are no doubt others).
¹ The basic recovery method is to keep executing until the faulting instruction is the next to retire, and then throw away all younger instructions. In the context of branch mispredictions, this means you could actually suffer two or more mispredictions, only the oldest of which actually takes effect: e.g., a younger branch mispredicts, and while executing up to that branch (at which point recovery can occur), another mispredict occurs, so the younger one ends up getting discarded.
(Maybe not a complete answer, but I had some of this written when @BeeOnRope posted an answer. Posting this anyway for some more links and technical details in case anyone's curious.)
Everything is always speculative until it reaches retirement and becomes non-speculative, definitely happened, part of the architectural state.
e.g. any load might fault with a bad address, any div might trap on divide by zero. See also Out-of-order execution vs. speculative execution. That and What exactly happens when a skylake CPU mispredicts a branch? mention that branch mispredicts are handled specially, because they're expected to be frequent. Fast recovery can start before a mis-predicted branch reaches retirement, unlike the behaviour for a faulting load, for example. (That's part of why Meltdown is exploitable.)
So even "regular" instructions are executed speculatively before being committed, and the only distinction between them is a human-made distinction, not computer-made? I presume, then, that the CPU stores multiple possible rollback points? For instance, if I have load instructions inside a conditional branch that may lead to page faults or simply use stale values, does the CPU identify such instructions and scenarios and save a state for each of them? I feel like I misunderstood, because this may lead to a lot of stored register states and complicated dependencies.
The retirement state is always consistent so you can always roll back to there and discard all in-flight work, e.g. if an external interrupt arrives you want to handle it without waiting for a chain of a dozen cache-miss loads to all execute. See also: When an interrupt occurs, what happens to instructions in the pipeline?
This tracking basically happens for free or is something you need to do anyway to be able to detect which instruction faulted, not just that there was a problem somewhere. (This is called "precise exceptions")
The real distinction humans can usefully make is speculation that has a real chance of being wrong during execution of non-error cases. If your code gets a bad pointer, it doesn't really matter how it performs; it's going to page-fault and that's going to be very slow compared to local OoO exec details.
You're talking about a modern out-of-order (OoO) execution (not just fetch) CPU, like modern Intel or AMD x86, high-end ARM, MIPS r10000, etc.
The front-end is in-order (with speculation down predicted paths), and so is commit (aka retirement) from the out-of-order back-end into non-speculative retirement state. (aka known-good architectural state).
The CPU uses two major structures to track instructions (or on x86, uops = parts of instructions) in the back-end. The last stage of the front-end (after fetch / decode) allocates/renames instructions and adds them into both of these structures at once.
RS = Reservation Station = scheduler: not-yet-executed instructions, waiting for an execution unit. The RS tracks dependencies and sends the oldest-ready uops to execution units that are ready.
ROB = ReOrder Buffer: not-yet-retired instructions. Instructions enter and leave in-order so it can just be a circular buffer.
Includes a flag to mark each entry as executed or not, set once the RS has sent it to an execution unit which reports success. The oldest instructions in the ROB that all have their done-executing bit set can "retire".
Also includes a flag which indicates "fault if this reaches retirement". This avoids spending time handling page faults from a load instruction on the wrong path of execution (that might well have pointers into an unmapped page), for example. Either in the shadow of a branch mispredict, or just after another instruction (in program order) that should have faulted first but OoO exec got to it later.
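As a loose, purely illustrative sketch (the field names and the retirement loop below are invented for this answer and don't match any particular CPU), a ROB entry conceptually carries an executed flag and a fault-on-retire flag, and retirement walks the buffer in order:
/* Conceptual sketch only: names are invented for illustration. */
#include <stdbool.h>
#include <stdio.h>

struct rob_entry {
    unsigned uop_id;     /* which instruction/uop this entry tracks */
    bool     executed;   /* set once an execution unit reports success */
    bool     fault;      /* "fault if this reaches retirement" */
};

int main(void)
{
    /* A tiny in-order window: entry 0 is the oldest. */
    struct rob_entry rob[3] = {
        { 100, true,  false },   /* oldest: done, can retire */
        { 101, false, false },   /* still executing: blocks younger entries */
        { 102, true,  true  },   /* done but flagged; fault handled only if it retires */
    };

    /* Retirement proceeds in order and stops at the first not-yet-executed entry. */
    for (int i = 0; i < 3; i++) {
        if (!rob[i].executed)
            break;
        if (rob[i].fault) {
            printf("uop %u reached retirement with a fault: handle it now\n", rob[i].uop_id);
            break;
        }
        printf("uop %u retires\n", rob[i].uop_id);
    }
    return 0;
}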
(I'm also leaving out register-renaming onto a large physical register file.
That's the "rename" part. Allocate includes choosing which execution port an instruction will use, and reserving a load or store buffer entry for memory instructions.)
(There's also a store-buffer; stores don't write directly to L1d cache, they write to the store buffer. This makes it possible to speculatively execute stores and still roll back without them becoming visible to other cores. It also decouples cache-miss stores from execution. Once a store instruction retires, the store-buffer entry "graduates" and is eligible to commit to L1d cache, once MESI gets exclusive access to the cache line, and once memory-ordering rules are satisfied.)
Execution units detect whether an instruction should fault, or was mis-speculated and should roll back, but don't necessarily act on that until the instruction reaches retirement.
In-order retirement is the step that recovers program-order after OoO exec, including the case of exceptions of mis-speculation.
Terminology: Intel calls it "issue" when instructions are sent from the front-end into the ROB + RS. Other computer architecture people often call that "dispatch".
Sending uops from the RS to execution units is called "dispatch" by Intel, "issue" by other people.

Figure out when a context switch is happening in Swift

To be honest, I don't know if there might be a solution to my question but I'd like to catch, in Swift, when a context switch is happening.
I was imagining a func which takes a long time to complete, such as a write operation on a remote server, and I was wondering if there might be a way to understand when (at which line, at least) the thread which is executing that task performs a context switch because another task that has been waiting for a long time has to be executed.
Sorry if this might seem a stupid question to you, or if I made mistakes whilst trying to explain the above.
EDIT:
I'm talking about context switches that happen automatically, requested by the scheduler. So imagine again we are in the middle of this long function which does tons of operations, and the scheduler gave this task an amount of time, for example 10 seconds, in which to complete. If the process runs out of time and doesn't finish the task, it will be suspended and, for example, the thread will execute another task of another process. When that ends, the scheduler might give another try to the suspended job and execution will be resumed where it was suspended (so it will read the value from the PC register and keep going).
You can absolutely get what you've asked for, but I have a feeling it's going to be much less useful to you than you may believe. The vast (vast, vast!) majority of context switches are going to occur at points that you probably would think of as "uninteresting." lstat64(), mach_vm_map_trap(), mach_msg_trap(), mach_port_insert_member_trap(), kevent_id(), the list goes on. Most threads spend most of their time deep in the OS stack. "A write operation on a remote server" isn't going to block after it takes some long period of time. It's going to proactively block itself because it knows it's going to take a (mind-bogglingly) long period of time.
Even so, you can certainly explore this with Instruments. Just choose the System Trace profile and it'll show you all the threads and the system cores and how your threads and all the other threads on the device are getting scheduled, every system call, and so on. It's a huge amount of information, so you usually only profile for a few seconds at a time.
This is useful information if you're at the point where context switches are a major bottleneck. This might happen if you're dealing with excessive lock contention, or if you're thrashing your L1 cache because you keep getting interrupted by some other thread. So if you have some thread that you expect to stay running pretty continuously, and it's getting blocked, this is really valuable information. Or if you have two threads that you think should work back and forth smoothly, but they seem to be fighting (switching rapidly), then that's something you could work on. (But this is rarely one of the first places you'd look for performance tuning unless you're working on quite low-level code.)
From your description, I think you may have the wrong idea about the scheduler. Nothing in the scheduler is going to be on the order of 10 seconds. In the scheduler world, milliseconds are a lot of time. You should be thinking about things that take microseconds and even nanoseconds. If you're working on code that assumes fetching data from RAM is free, then you're on the wrong time scale. A network call is so ludicrously slow that you can basically estimate it as "forever." At the context-switch level you're looking at events like:
00:00.770.247 Virtual memory Zero Fill took 10.50 µs small_free_list_add_ptr ← (31 other frames)
I think it's a kind of cool question, but it's not entirely clear, so this answer is based on my understanding of it. Normally, if you implement a C or C++ program around context switching, you write code with mutexes or semaphores. In those sections, processes or threads work in a critical section, and the context switch is sometimes triggered manually or by an interrupt. In iOS there are equivalent concurrency constructs, such as DispatchSemaphore. (A mutex is essentially a semaphore that works as a lock.) You can read the documentation here.
To start with, this is the semaphore class declaration in Swift:
class DispatchSemaphore : DispatchObject
You can initialize it with an integer value, such as:
let semaphore = DispatchSemaphore(value: intValue)
And if you want the mutex variant, you can use a lock variable (for example, Foundation's NSLock):
let lock = NSLock()
When you lock it with a correct implementation, you are in the critical section, and you leave it again with unlock.
It obviously looks similar to POSIX's pthread_mutex_t.
You can handle the context switching within the lock or semaphore critical section like this:
lock.lock()
// critical section
// handle the calculation; if it takes longer, unlock, or let it do its job
lock.unlock()
semaphore.wait(timeout: DispatchTime.distantFuture)
// critical section
// handle the calculation; if it takes too long, signal here to exit, or let it do its job
// if it is a normal job, it will already be signalled for exit.
semaphore.signal()
The answer above covers context switching between threads.
In addition, when you ask about context switching, what it naturally means is changing between threads or processes. So when you get data on a background thread and then apply it to a UITableView, you will probably call the reloadData method inside DispatchQueue.main.async. That's a common example of context switching.

Can a process ask for x amount of time but take y amount instead?

Suppose I am running a set of processes and they ask for these burst times: 3, 5, and 2 respectively, with the total expected execution time being 10 time units.
Is it possible for one of the processes to take up more than what it asked for? For example, even though it asked for 3 it took 11 instead, because it was waiting on the user to enter some input. So the total execution time turns out to be 18.
This is all under a non-preemptive CPU scheduler.
The reality is that software has no idea how long anything will take - my CPU runs at a different "nominal speed" to your CPU, both our CPUs keep changing their speed for power management reasons, and the speed of software executed by both our CPUs is affected by things like what other CPUs are doing (especially for SMT/hyper-threading) and what other devices happen to be doing at the time (their effect on caches, shared RAM bandwidth, etc); and software can't predict the future (e.g. guess when an IRQ will occur and take some time and upset the cache contents, guess when a read from memory will take 10 times longer because there was a single-bit error that ECC needed to correct, guess when the CPU will get hot and reduce its speed to avoid melting, etc). It is possible to record things like "start time, burst time and end time" as they happen (to generate historical data from the past that can be analysed), but typically these things are only seen in fabricated academic exercises that have nothing to do with reality.
Note: I'm not saying fabricated academic exercises are bad - it's a useful tool to help learn basic theory before moving on to more advanced (and more realistic) theory.
Instead, for a non-preemptive scheduler, tasks don't try to tell the scheduler how much time they think they might take - the task can't know this information and the scheduler can't do anything with it (e.g. a non-preemptive scheduler can't preempt the task when it takes longer than it guessed it might take). For a non-preemptive scheduler, a task simply runs until it calls a kernel function that waits for something (e.g. read() that waits for data from disk or network, sleep() that waits for time to pass, etc). When that happens, the kernel function that was called ends up telling the scheduler that the task is waiting and doesn't need the CPU, and the scheduler finds a different task to run that can use the CPU; and if the task never calls a kernel function that waits for something, then the task runs "forever".
Of course "the task runs forever" can be bad (not just for malicious code that deliberately hogs all CPU time as a denial-of-service attack, but also for normal tasks that have bugs), which is why (almost?) nobody uses non-preemptive schedulers. For example, if one (lower priority) task is doing a lot of heavy processing (e.g. spending hours generating a photo-realistic picture using ray-tracing techniques) and another (higher priority) task stops waiting (e.g. because it was waiting for the user to press a key and the user did press a key), then you want the higher priority task to preempt the lower priority task "immediately" (e.g. because most users don't like it when it takes hours for software to respond to their actions).
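To make the mechanism concrete, here is a minimal, purely illustrative sketch (invented names, not a real kernel API) of the one place a non-preemptive scheduler ever switches tasks: inside a blocking call made by the running task itself.
#include <stdio.h>

/* Purely illustrative model: three tasks with invented names. */
enum state { READY, WAITING };

struct task {
    const char *name;
    enum state state;
};

#define NTASKS 3
static struct task tasks[NTASKS] = {
    { "A", READY }, { "B", READY }, { "C", READY }
};

static int current = 0;   /* index of the task that owns the CPU */

/* In a real kernel this would be reached from inside read(), sleep(), etc.:
 * the ONLY point where a non-preemptive scheduler switches tasks. */
static void block_current_and_schedule(void)
{
    tasks[current].state = WAITING;
    for (int i = 1; i <= NTASKS; i++) {
        int next = (current + i) % NTASKS;
        if (tasks[next].state == READY) {
            printf("switch: %s -> %s\n", tasks[current].name, tasks[next].name);
            current = next;
            return;
        }
    }
    printf("no runnable task: the CPU idles until something unblocks\n");
}

int main(void)
{
    block_current_and_schedule();   /* A "calls read()" and blocks -> B runs */
    block_current_and_schedule();   /* B blocks -> C runs */
    block_current_and_schedule();   /* C blocks -> nothing left to run */
    return 0;
}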

Micro scheduler for real-time kernel in embedded C applications?

I am working with time-critical applications where microseconds count. I am interested in a more convenient way to develop my applications using a non-bare-metal approach (some kind of framework or base foundation common to all my projects).
Operating systems considered real-time, such as RTX, Xenomai, Micrium or VxWorks, are not really real-time under my terms (or under the terms of electronics engineers). So I prefer to talk about soft-real-time and hard-real-time applications. A hard-real-time application has an acceptable jitter of less than 100 ns and a heartbeat of 100..500 microseconds (tick timer).
After lots of reading about operating systems I realized that the typical tick time is 1 to 10 milliseconds and only one task can be executed each tick. Therefore tasks usually take much more than one tick to complete, and this is the case for most available operating systems or micro-kernels.
For my applications a typical task has a duration of 10..100 microseconds, with a few exceptions that can last for more than one tick. So no such real-time operating system can fulfill my requirements. That is the reason why other engineers still do not consider operating systems, micro- or nano-kernels: the way they work is too far from their needs. I still want to struggle a bit, and in my case I now realize I have to consider a new category of operating system that I have never heard about (and that may not exist yet). Let's call this category nano-kernel or subtick-scheduler.
In such dreamed kernels I would find:
2 types of tasks:
Preemptive tasks (that run in their own memory space)
Non-preemptive tasks (that run in the kernel space and must complete in less than one tick)
Deterministic kernel scheduler (fixed duration after the ISR to reach the theoretical zero second jitter)
Ability to run multiple tasks per tick
For a better understanding of what I am looking for I made the figure below that represents the two types of kernels. The first representation is the traditional kernel. A task executes at each tick and it may interrupt the kernel with a system call that invokes a full context switch.
The second diagram shows a sub-tick kernel scheduler where multiple tasks may share the same tick interrupt. Task 1 was summoned with a maximum execution time value so it needs 2 ticks to complete. Task 2 is set with low priority, so it consumes the remaining time of each tick until completion. Task 3 is non-preemptive so it operates in the kernel space, which saves some precious context-switch time.
Available operating systems such as RTOS, RTAI, VxWorks or µC/OS are not fully real-time and are not suitable for embedded hard real-time applications such as motion-control where a typical cycle would last no more than 50 to 500 microseconds. By analyzing my needs I land on a different topology for my scheduler where multiple tasks can be executed under the same tick interrupt. Obviously I am not the only one with this kind of need, and my problem might simply be a kind of X-Y problem. Said differently, I may not really be asking for what I am actually looking for.
After this (pretty) long introduction I can formulate my question:
What could be a good existing architecture or framework that can fulfill my requirements other than a naive bare-metal approach where everything is written sequentially around one master interrupt? If this kind of framework/design pattern exists what would it be called?
Sorry, but first of all, let me say that your entire post is completely wrong and shows a complete lack of understanding of how a preemptive RTOS works.
After lots of reading about operating systems I realized that the typical tick time is 1 to 10 milliseconds and only one task can be executed each tick.
This is completely wrong.
In reality, the tick frequency in an RTOS determines only two things:
resolution of timeouts, sleeps and so on,
context switches due to round-robin scheduling (where two or more threads with the same priority are "runnable" at the same time for a long period of time).
During a single tick - which typically lasts 1-10 ms, but you can usually configure that to be whatever you like - the scheduler can do hundreds or thousands of context switches. Or none. When an event arrives and wakes up a thread with sufficiently high priority, the context switch will happen immediately, not with the next tick. An event can be originated by a thread (posting a semaphore, sending a message to another thread, ...), an interrupt (posting a semaphore, sending a message to a queue, ...) or by the scheduler (an expired timeout or things like that).
There are also RTOSes with no system ticks - these are called "tickless". There you can have resolution of timeouts in the range of nanoseconds.
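As a hedged illustration using FreeRTOS (one concrete RTOS; others have equivalent calls, and the ISR and task names below are made up), an interrupt can make a blocked high-priority thread runnable immediately, with no involvement from the tick:
#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

static SemaphoreHandle_t data_ready;

/* Hypothetical device ISR: it makes the handler thread runnable right away;
 * the system tick plays no part in this. */
void device_isr(void)
{
    BaseType_t woken = pdFALSE;
    xSemaphoreGiveFromISR(data_ready, &woken);
    portYIELD_FROM_ISR(woken);   /* switch now if the handler has higher priority */
}

/* High-priority handler thread: blocked (using no CPU) until the ISR fires. */
static void handler_task(void *arg)
{
    (void)arg;
    for (;;) {
        xSemaphoreTake(data_ready, portMAX_DELAY);
        /* process the data here */
    }
}

void start_demo(void)
{
    data_ready = xSemaphoreCreateBinary();
    xTaskCreate(handler_task, "handler", configMINIMAL_STACK_SIZE, NULL,
                configMAX_PRIORITIES - 1, NULL);
    vTaskStartScheduler();
}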
That is the reason why other engineers still do not consider operating systems, micro- or nano-kernels: the way they work is too far from their needs.
Actually this is a reason why these "engineers" should read something instead of pretending to know everything and seeking "innovative" solutions to non-existing problems. This is completely wrong.
The first representation is the traditional kernel. A task executes at each tick and it may interrupt the kernel with a system call that invokes a full context switch.
This is not a feature of an RTOS, but of the way you wrote your application - if a high priority task is constantly doing something, then lower priority tasks will NOT get any chance to run. But this is just because you assigned the wrong priorities.
Unless you use cooperative RTOS, but if you have such high requirements, why would you do that?
The second diagram shows a sub-tick kernel scheduler where multiple tasks may share the same tick interrupt.
This is exactly how EVERY preemptive RTOS works.
Available operating systems such as RTOS, RTAI, VxWorks or µC/OS are not fully real-time and are not suitable for embedded hard real-time applications such as motion-control where a typical cycle would last no more than 50 to 500 microseconds.
Completely wrong. In every known RTOS it is not a problem to get a response time down to single microseconds (1-3us) with a chip that has clock in the range of 100MHz. So you actually can run "jobs" which are as short as 10us without too much overhead. You can even have "jobs" as short as 10ns, but then the overhead will be pretty high...
What could be a good existing architecture or framework that can fulfill my requirements other than a naive bare-metal approach where everything is written sequentially around one master interrupt? If this kind of framework/design pattern exists what would it be called?
This pattern is called a preemptive RTOS. Do note that threads in an RTOS are NOT executed in the "tick interrupt". They are executed in a standard "thread" context, and the tick interrupt is only used to switch the context from one thread to another.
What you described in your post is a "cooperative" RTOS, which does NOT preempt threads. You use that in systems with extremely limited resources and with low timing requirements. In every other case you use preemptive RTOS, which is capable of handling the events immediately.

How do Real Time Operating Systems work?

I mean how and why are realtime OSes able to meet deadlines without ever missing them? Or is this just a myth (that they do not miss deadlines)? How are they different from any regular OS and what prevents a regular OS from being an RTOS?
Meeting deadlines is a function of the application you write. The RTOS simply provides facilities that help you with meeting deadlines. You could also program on "bare metal" (without an RTOS) in a big main loop and meet your deadlines.
Also keep in mind that unlike a more general purpose OS, an RTOS has a very limited set of tasks and processes running.
Some of the facilities an RTOS provide:
Priority-based Scheduler
System Clock interrupt routine
Deterministic behavior
Priority-based Scheduler
Most RTOS have between 32 and 256 possible priorities for individual tasks/processes. The scheduler will run the task with the highest priority. When a running task gives up the CPU, the next highest priority task runs, and so on...
The highest priority task in the system will have the CPU until:
it runs to completion (i.e. it voluntarily gives up the CPU)
a higher priority task is made ready, in which case the original task is pre-empted by the new (higher priority) task.
As a developer, it is your job to assign the task priorities such that your deadlines will be met.
System Clock Interrupt routines
The RTOS will typically provide some sort of system clock (anywhere from 500 µs to 100 ms) that allows you to perform time-sensitive operations.
If you have a 1ms system clock, and you need to do a task every 50ms, there is usually an API that allows you to say "In 50ms, wake me up". At that point, the task would be sleeping until the RTOS wakes it up.
Note that just being woken up does not ensure you will run exactly at that time. It depends on the priority. If a task with a higher priority is currently running, you could be delayed.
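As a hedged sketch of that "in 50 ms, wake me up" API, using FreeRTOS as one concrete example (other RTOSes have equivalent calls; do_time_sensitive_work is a placeholder):
#include "FreeRTOS.h"
#include "task.h"

/* Placeholder for the application's real work. */
static void do_time_sensitive_work(void) { }

void periodic_task(void *params)
{
    (void)params;
    TickType_t last_wake = xTaskGetTickCount();
    const TickType_t period = pdMS_TO_TICKS(50);   /* "wake me every 50 ms" */

    for (;;) {
        do_time_sensitive_work();
        /* Sleep until 50 ms after the previous wake-up. The scheduler makes
         * the task ready again at that tick, but a higher-priority task can
         * still delay when it actually gets the CPU. */
        vTaskDelayUntil(&last_wake, period);
    }
}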
Deterministic Behavior
The RTOS goes to great lengths to ensure that whether you have 10 tasks or 100 tasks, it does not take any longer to switch context, determine what the next highest priority task is, etc.
In general, the RTOS operation tries to be O(1).
One of the prime areas for deterministic behavior in an RTOS is the interrupt handling. When an interrupt line is signaled, the RTOS immediately switches to the correct Interrupt Service Routine and handles the interrupt without delay (regardless of the priority of any task currently running).
Note that most hardware-specific ISRs would be written by the developers on the project. The RTOS might already provide ISRs for serial ports, system clock, maybe networking hardware but anything specialized (pacemaker signals, actuators, etc...) would not be part of the RTOS.
This is a gross generalization and as with everything else, there is a large variety of RTOS implementations. Some RTOS do things differently, but the description above should be applicable to a large portion of existing RTOSes.
In an RTOS the most critical parameters to take care of are low latencies and time determinism, which it achieves by following certain policies and tricks.
In a GPOS, on the other hand, the critical parameter is high throughput along with acceptable latencies. You cannot count on a GPOS for time determinism.
RTOSes have tasks which are much lighter than processes/threads in a GPOS.
It is not that they are able to meet deadlines, it is rather that they have deadlines fixed whereas in a regular OS there is no such deadline.
In a regular OS the task scheduler is not really strict. That is, the processor will execute so many instructions per second, but it may occasionally not do so. For example, a task might be pre-empted to allow a higher priority one to execute (and maybe for a longer time). In an RTOS the processor will always execute the same number of tasks.
Additionally, there is usually a time limit for tasks to complete, after which a failure is reported. This does not happen in a regular OS.
Obviously there is lot more detail to explain, but the above are two of the important design aspects that are used in RTOS.
Your RTOS is designed in such a way that it can guarantee timings for important events, like hardware interrupt handling and waking up sleeping processes exactly when they need to be.
This exact timing allows the programmer to be sure that his (say) pacemaker is going to output a pulse exactly when it needs to, not a few tens of milliseconds later because the OS was busy with another inefficient task.
It's usually a much simpler OS than a fully-fledged Linux or Windows, simply because it's easier to analyse and predict the behaviour of simple code. There is nothing stopping a fully-fledged OS like Linux from being used in an RTOS environment, and it has RTOS extensions. Because of the complexity of the code base it will not be able to guarantee its timings down to as small a scale as a smaller OS can.
The RTOS scheduler is also stricter than a general purpose scheduler. It's important to know the scheduler isn't going to change your task priority because you've been running a long time and don't have any interactive users. Most OSes would internally reduce the priority of this type of process to favour short-term interactive programs where the interface should not be seen to lag.
You might find it helpful to read the source of a typical RTOS. There are several open-source examples out there, and the following yielded links in a little bit of quick searching:
FreeRTOS
eCos
A commercial RTOS that is well documented, available in source code form, and easy to work with is µC/OS-II. It has a very permissive license for educational use, and (a mildly out of date version of) its source can be had bound into a book describing its theory of operation using the actual implementation as example code. The book is MicroC OS II: The Real Time Kernel by Jean Labrosse.
I have used µC/OS-II in several projects over the years, and can recommend it.
"Basically, you have to code each "task" in the RTOS such that they will terminate in a finite time."
This is actually correct. The RTOS will have a system tick defined by the architecture, say 10 millisec., with all tasks (threads) both designed and measured to complete within specific times. For example in processing real time audio data, where the audio sample rate is 48kHz, there is a known amount of time (in milliseconds) at which the prebuffer will become empty for any downstream task which is processing the data. Therefore using the RTOS requires correct sizing of the buffers, estimating and measuring how long this takes, and measuring the latencies between all software layers in the system. Then the deadlines can be met. Otherwise the applications will miss the deadlines. This requires analysis of the worst-case data processing throughout the entire stack, and once the worst-case is known, the system can be designed for, say, 95% processing time with 5% idle time (this processing may not ever occur in any real usage, because worst-case data processing may not be an allowed state within all layers at any single moment in time).
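For instance, assuming (purely for illustration) a 480-sample prebuffer at a 48 kHz sample rate, the downstream task has 480 / 48000 s = 10 ms to produce the next block before the buffer underruns, and the worst-case latency of every layer in the stack has to fit inside that 10 ms budget.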
Example timing diagrams for the design of a real time operating system network app are in this article at EE Times,
PRODUCT HOW-TO: Improving real-time voice quality in a VoIP-based telephony design
http://www.eetimes.com/design/embedded/4007619/PRODUCT-HOW-TO-Improving-real-time-voice-quality-in-a-VoIP-based-telephony-design
I haven't used an RTOS, but I think this is how they work.
There's a difference between "hard real time" and "soft real time". You can write real-time applications on a non-RTOS like Windows, but they're 'soft' real-time:
As an application, I might have a thread or timer which I ask the O/S to run 10 times per second ... and maybe the O/S will do that, most of the time, but there's no guarantee that it will always be able to ... this lack of guarantee is why it's called 'soft'. The reason why the O/S might not be able to is that a different thread might be keeping the system busy doing something else. As an application, I can boost my thread priority to for example HIGH_PRIORITY_CLASS, but even if I do this the O/S still has no API which I can use to request a guarantee that I'll be run at certain times.
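A small hedged sketch of that point on Windows (these are real Win32 calls, but the priorities they set are only requests, not guarantees):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Ask the OS to favour this process and thread... */
    SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);

    /* ...but Sleep(100) still only means "at least 100 ms": the actual
     * wake-up time depends on the scheduler and whatever else is running,
     * which is exactly the missing guarantee that makes this "soft". */
    Sleep(100);
    printf("woke up some time at or after 100 ms\n");
    return 0;
}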
A 'hard' real-time O/S does (I imagine) have APIs which let me request guaranteed execution slices. The reason why the RTOS can make such guarantees is that it's willing to abend threads which take more time than expected / than they're allowed.
What is important is realtime applications, not realtime OS. Usually realtime applications are predictable: many tests, inspections, WCET analysis, proofs, ... have been performed which show that deadlines are met in any specified situations.
It happens that RTOSes help doing this work (building the application and verifying its RT constraints). But I've seen realtime applications running on standard Linux, relying more on hardware horsepower than on OS design.
... well ...
A real-time operating system tries to be deterministic and meet deadlines, but it all depends on the way you write your application. You can make a RTOS very non real-time if you don't know how to write "proper" code.
Even if you know how to write proper code:
It's more about trying to be deterministic than being fast.
When we talk about determinism it's
1) event determinism
For each set of inputs the next states and outputs of a system are known
2) temporal determinism
… also the response time for each set of outputs is known
This means that if you have asynchronous events like interrupts, your system is, strictly speaking, no longer temporally deterministic. (And most systems use interrupts.)
If you really want to be deterministic, poll everything.
... but maybe it's not necessary to be 100% deterministic
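A minimal sketch of the "poll everything" idea, assuming hypothetical device-reading helpers: a fixed-order super-loop with no interrupts, so the worst-case time of one iteration is easy to bound.
#include <stdbool.h>

/* Hypothetical stand-ins for reading device status and acting on it. */
static bool sensor_ready(void)   { return false; }  /* would read a status register */
static void read_sensor(void)    { }
static bool button_pressed(void) { return false; }  /* would read a GPIO pin */
static void handle_button(void)  { }
static void update_outputs(void) { }

int main(void)
{
    for (;;) {
        if (sensor_ready())   read_sensor();
        if (button_pressed()) handle_button();
        update_outputs();     /* every iteration takes a bounded, known time */
    }
}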
The textbook/interview answer is "deterministic pre-emption". The system is guaranteed to transfer control within a bounded period of time if a higher priority process is ready to run (in the ready queue) or an interrupt is asserted (typically input external to the CPU/MCU).
They actually don't guarantee meeting deadlines; what they do that makes them truly RTOS is to provide the means to recognize and deal with deadline overruns. 'Hard' RT systems generally are those where missing a deadline is disastrous and some kind of shutdown is required, whereas a 'soft' RT system is one where continuing with degraded functionality makes sense. Either way an RTOS permits you to define responses to such overruns. Non RT OS's don't even detect overruns.
Basically, you have to code each "task" in the RTOS such that they will terminate in a finite time.
Additionally your kernel would allocate specific amounts of time to each task, in an attempt to guarantee that certain things happened at certain times.
Note that this is not an easy task to do, however. Imagine things like virtual function calls; in OO it's very difficult to determine these things. Also an RTOS must be carefully coded with regard to priority; it may require that a high priority task is given the CPU within x milliseconds, which may be difficult to do depending on how your scheduler works.