How to design a clock-driven multi-agent simulation - Scala

I want to create a multi-agent simulation model for a real-world manufacturing process to evaluate some dispatching rules. The simulation needs to produce event logs so that the timing effect of the dispatching rules can be compared against the real manufacturing event logs.
How can I incorporate the 'current simulation time' into this kind of multi-agent, message-passing-intensive simulation?
Background:
Classical discrete event simulation (which handles time advancement nicely) cannot be applied here, as the agents in the system exhibit relatively complex behavior and routing requirements, and the dispatching rules require them to communicate frequently. This and other process complexities rule out a centralized scheduling approach as well.
In manufacturing science there are thousands of papers that use a multi-agent simulation to solve some manufacturing-related problem. However, I haven't yet found a paper that describes the inner workings or implementation details of these simulations in the required detail.
Unfortunately, using the shortest process time as the discrete time step might be infeasible, as process times range from 0.1 s to 24 hours. There is a possibility my simulation will later be used for what-if evaluations in a project, so the simulation needs to run as fast as possible - no option for overnight simulation runs.
The problem size is about 500 resources and 1,000 - 10,000 product agents, most of which are finished and no longer participate in any communication or resource occupation.
Consequently, as a result of the communication, new events can trigger an agent to do something before its original 'next time' event would arrive. For example, an agent is currently blocking a resource for an hour. However, another, higher-priority agent needs that resource right away and asks the first agent to release it.
In some sense, I need a way to create a hybrid of the classical message-passing agent simulation and discrete event simulation.
I considered a mediator agent that is involved in every message - a message router and time enforcer which sends around the messages and the timer tick events. The mediator agent would also keep a list of next event times for the various agents. However, I feel there should be a better way to solve my problem, as this concept puts enormous pressure on the mediator agent.
Update
It took a while, but it seems I managed to create a mini-framework and combined the DES and agent concepts into one. I'm sure it's nothing new, but at least it's unique: http://code.google.com/p/tidra-framework/ if you are interested.

This problem sounds as if it should be tackled by using parallel discrete-event simulation - the mediator agent you are planning to implement ('is involved in every message', 'sends around messages and timer tick events') seems to be doing the job of a discrete-event simulator right now. You can make this scale to the desired problem size by using more of such simulators in parallel and then using a synchronization algorithm to maintain causality etc. (see, e.g., this book for details). Of course, this requires considerable effort, and you might be better off really trying out the sequential algorithms first.
A nice way of augmenting the classical DES-view of logical processes (= agents) that communicate with each other via events could be to blend in some ideas from other formalisms used to describe discrete-event systems, such as DEVS. In DEVS, each entity can specify the duration it will be in a certain state (e.g., the agent blocking a resource), and will only be interrupted by incoming messages (and then change its state accordingly, e.g. the agent freeing the resource).
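For illustration, here is a minimal sketch of that blend - written in C for brevity, even though the question targets Scala - in which each agent keeps a planned 'next internal event' on a shared future-event queue, and an incoming message may cancel and replace that plan. The scenario and all names are made up for illustration, not taken from the question or the tidra framework.

    /* Hybrid DES/agent sketch: agents schedule their own internal events,
     * but a message event may pre-empt and cancel an earlier plan.
     * Naive sorted array, no overflow check - illustration only. */
    #include <stdio.h>

    enum kind { FINISH_PROCESSING, RELEASE_REQUEST };

    struct event { double time; int agent; enum kind kind; int cancelled; };

    static struct event queue[256];
    static int queue_len = 0;

    static void schedule(double time, int agent, enum kind kind)
    {
        int i = queue_len++;
        while (i > 0 && queue[i - 1].time > time) {   /* insertion sort by time */
            queue[i] = queue[i - 1];
            i--;
        }
        queue[i] = (struct event){ time, agent, kind, 0 };
    }

    static void cancel(int agent, enum kind kind)     /* lazy cancellation */
    {
        for (int i = 0; i < queue_len; i++)
            if (queue[i].agent == agent && queue[i].kind == kind)
                queue[i].cancelled = 1;
    }

    int main(void)
    {
        double now = 0.0;

        /* agent 1 plans to block a resource for one hour (3600 s) ... */
        schedule(now + 3600.0, 1, FINISH_PROCESSING);
        /* ... but a higher-priority agent asks for the resource after 10 minutes */
        schedule(now + 600.0, 1, RELEASE_REQUEST);

        for (int i = 0; i < queue_len; i++) {         /* the simulation loop */
            struct event ev = queue[i];
            if (ev.cancelled) continue;
            now = ev.time;                            /* advance simulated time */
            if (ev.kind == RELEASE_REQUEST) {
                printf("t=%.0fs agent %d pre-empted, releases resource\n", now, ev.agent);
                cancel(ev.agent, FINISH_PROCESSING);  /* drop the old plan */
            } else {
                printf("t=%.0fs agent %d finished processing\n", now, ev.agent);
            }
        }
        return 0;
    }

The point is that the 'current simulation time' only ever advances when the next event is popped, so agents never need a global tick; an interrupting message is just another event that happens to arrive earlier than the agent's own plan.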
BTW In which sense do you think that the agents are too complex to be handled with discrete-event simulation? If you regard each agent as a logical process, it doesn't really matter how complex it is from a simulation point of view - or am I getting something wrong here?

Related

Turnaround timing: how much below the lead time should it be?

I have been developing a Matlab Simulink model for a client, and the model is required to be compatible with a time rate of 1 ms. My client will integrate this model with his parent model, which runs at this time rate on a multi-core machine.
Even though this will run on a multi-core machine, my model will share a core with other tasks, which means I have all the more reason to make sure my model won't consume a lot of processing time and cause task overruns.
I believe there is a good-practice rule or conceptual definition that stipulates a desired turnaround time depending on the lead time. Is there?
I wanted to know how far below the lead time (1 ms) I should aim to keep the turnaround time.
I would appreciate it if you could point me to any reference.
If your application needs precise timing, you should consider a trigger signal sent from hardware, like an Arduino, or the internal trigger/timer of the instrument itself. This will not be influenced by the load on your computer.
If you have to use software control, you can have the software send a trigger signal from a serial port. That works more like an on-demand mode, but a precise time interval is difficult to achieve.

ZeroMQ Choosing Correct Client-Worker Model for a Call Center

I have a project that needs to be written in Perl so I've chosen ZeroMQ.
There is a single client program generating work for a variable number of workers. The workers are real human operators who will complete a task and then request a new task. The job of the client program is to keep all available workers busy all day. It's a call center.
So each worker can only process one task at a time, and there may be some delay before they request a new task. And the number of workers may vary during the day.
The client needs to keep a queue of tasks ready to give to workers as and when they request them. Whenever the client queue gets low the client can generate more tasks to top-up the queue.
What design pattern (i.e. what ZeroMQ Socket combination) should I use for this? I've skimmed through all the patterns in the 0MQ Guide and can't find anything that matches this.
Thanks
Sure. ... there is not a single, solo archetype to match the requirement list; use several ZeroMQ Scalable Formal Communication Patterns together.
A typical software project uses many ZeroMQ sockets (with various archetypes) as a form of node-to-node signalling and message-passing platform.
It is fair to note that automated load-balancers may work fine for automated processes, but not always for processes executed by humans or interacting with humans.
Humans (both the call-centre agents and their line supervisors) introduce another layer of requirements: sometimes a need for workload-distribution logic beyond plain round-robin, sometimes a need to switch a call from agent A to another agent B, which a trivial archetype will simply not be capable of and might get into trouble with if its hardwired logic runs into a collision (a mutually blocked REQ-REP stalemate being one such example).
So don't wait for one super-powered archetype; rather, create a smart network of behaviours that covers the event handling your distributed-computing problem requires.
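As one possible starting point, here is a minimal sketch of the load-balancing idea in C against the libzmq API (the Perl binding exposes the same socket types). The endpoint, the buffer sizes and the next_task() helper are made-up placeholders, not a drop-in solution: human workers ask for work on REQ sockets, and the client program answers each request from its own task queue through a ROUTER socket.

    #include <zmq.h>
    #include <string.h>
    #include <stdio.h>

    static const char *next_task(void)          /* stand-in for "top up the queue" */
    {
        static char buf[32];
        static int n = 0;
        snprintf(buf, sizeof buf, "call-%d", ++n);
        return buf;
    }

    int main(void)
    {
        void *ctx = zmq_ctx_new();
        void *router = zmq_socket(ctx, ZMQ_ROUTER);
        zmq_bind(router, "tcp://*:5555");       /* workers connect with REQ sockets */

        while (1) {
            char identity[256], empty[1], request[256];
            int id_len = zmq_recv(router, identity, sizeof identity, 0); /* worker id  */
            zmq_recv(router, empty, sizeof empty, 0);                    /* delimiter  */
            zmq_recv(router, request, sizeof request, 0);                /* "ready"    */

            const char *task = next_task();
            zmq_send(router, identity, id_len, ZMQ_SNDMORE);  /* route the reply back */
            zmq_send(router, "", 0, ZMQ_SNDMORE);             /* to the same worker   */
            zmq_send(router, task, strlen(task), 0);
        }
        zmq_ctx_destroy(ctx);
        return 0;
    }

On top of this skeleton you would layer the human-specific behaviours mentioned above (supervisor overrides, re-routing a call, heartbeats for workers who walk away) as additional sockets or message types.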
There are many other aspects one ought to learn before taking the first ZeroMQ socket into service:
failure resilience
performance scaling
latency-profiling ( high-priority voice-traffic, vs. low-priority logging )
watchdog acknowledgements and timeout situations handling
cross-compatibility issues ( version 2.1x vs 3.x vs 4.+ API )
processing robustness against a malfunctioning agent / malicious attack / deadly spurious traffic storms ... to name just a few of the problems,
all of which have some built-in support in the ZeroMQ toolbox, and some of which may need some advanced thinking so as to handle known constraints.
The Best Next Step?
I would advocate for Pieter Hintjens' fabulous book "Code Connected, Volume 1". For everyone who is serious about distributed processing, this is a must-read. Do not hesitate to check my other posts to find a direct URL to a PDF version of this ZeroMQ bible.
Worth the time and one's tears and sweat.

Micro scheduler for real-time kernel in embedded C applications?

I am working with time-critical applications where microseconds count. I am interested in a more convenient way to develop my applications using a non-bare-metal approach (some kind of framework or base foundation common to all my projects).
Considered real-time operating systems such as RTX, Xenomai, Micrium or VxWorks are not really real-time under my terms (or under the terms of electronics engineers). So I prefer to talk about soft real-time and hard real-time applications. A hard real-time application has an acceptable jitter of less than 100 ns and a heartbeat of 100..500 microseconds (tick timer).
After a lot of reading about operating systems, I realized that the typical tick time is 1 to 10 milliseconds and only one task can be executed per tick. Therefore tasks usually take much more than one tick to complete, and this is the case with most available operating systems or microkernels.
For my applications a typical task has a duration of 10..100 microseconds, with a few exceptions that can last for more than one tick. So no real-time operating system can fulfill my requirements. That is the reason why other engineers still do not consider operating systems or micro/nano kernels: the way they work is too far from their needs. I still want to push a bit further, and in my case I now realize I have to consider a new category of operating system that I have never heard about (and that may not exist yet). Let's call this category nano-kernel or sub-tick scheduler.
In such a dreamed-of kernel I would find:
2 types of tasks:
Preemptive tasks (that run in their own memory space)
Non-preemptive tasks (that run in the kernel space and must complete in less than one tick)
Deterministic kernel scheduler (fixed duration after the ISR, to reach the theoretical zero-second jitter)
Ability to run multiple tasks per tick
For a better understanding of what I am looking for, I made the figure below, which represents the two types of kernels. The first representation is the traditional kernel. A task executes at each tick and it may interrupt the kernel with a system call that invokes a full context switch.
The second diagram shows a sub-tick kernel scheduler where multiple tasks may share the same tick interrupt. Task 1 was summoned with a maximum execution time value, so it needs 2 ticks to complete. Task 2 is set with low priority, so it consumes the remaining time of each tick until completion. Task 3 is non-preemptive, so it operates in the kernel space, which saves some precious context-switch time.
Available operating systems such as RTOS, RTAI, VxWorks or µC/OS are not fully real-time and are not suitable for embedded hard real-time applications such as motion control, where a typical cycle would last no more than 50 to 500 microseconds. By analyzing my needs I land on a different topology for my scheduler, where multiple tasks can be executed under the same tick interrupt. Obviously I am not the only one with this kind of need, and my problem might simply be a kind of X-Y problem - said differently, what I am asking for may not be what I actually need.
After this (pretty) long introduction I can formulate my question:
What could be a good existing architecture or framework that can fulfill my requirements other than a naive bare-metal approach where everything is written sequentially around one master interrupt? If this kind of framework/design pattern exists what would it be called?
Sorry, but first of all, let me say that your entire post is completely wrong and shows a complete lack of understanding of how a preemptive RTOS works.
After a lot of reading about operating systems, I realized that the typical tick time is 1 to 10 milliseconds and only one task can be executed per tick.
This is completely wrong.
In reality, the tick frequency in an RTOS determines only two things:
resolution of timeouts, sleeps and so on,
context switches due to round-robin scheduling (where two or more threads with the same priority are "runnable" at the same time for a long period of time).
During a single tick - which typically lasts 1-10 ms, but you can usually configure that to be whatever you like - the scheduler can do hundreds or thousands of context switches. Or none. When an event arrives and wakes up a thread with sufficiently high priority, the context switch happens immediately, not at the next tick. An event can originate from a thread (posting a semaphore, sending a message to another thread, ...), from an interrupt (posting a semaphore, sending a message to a queue, ...) or from the scheduler (an expired timeout or things like that).
There are also RTOSes with no system ticks - these are called "tickless". There you can have resolution of timeouts in the range of nanoseconds.
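To make the 'immediately, not at the next tick' point concrete, here is a FreeRTOS-flavoured sketch: the FreeRTOS calls are real, while the interrupt handler name and the handle_sensor() helper are hypothetical. An ISR gives a semaphore and the blocked high-priority thread is switched in right away, regardless of where the tick period currently is.

    #include "FreeRTOS.h"
    #include "task.h"
    #include "semphr.h"

    extern void handle_sensor(void);             /* hypothetical application code */

    static SemaphoreHandle_t sensor_ready;

    void sensor_task(void *arg)                  /* high-priority thread */
    {
        (void)arg;
        for (;;) {
            /* Blocks without consuming CPU; woken the moment the ISR gives
             * the semaphore, not at the next tick boundary. */
            xSemaphoreTake(sensor_ready, portMAX_DELAY);
            handle_sensor();
        }
    }

    void SENSOR_IRQHandler(void)                 /* hardware interrupt */
    {
        BaseType_t woken = pdFALSE;
        xSemaphoreGiveFromISR(sensor_ready, &woken);
        portYIELD_FROM_ISR(woken);               /* context switch happens now */
    }

    void start(void)
    {
        sensor_ready = xSemaphoreCreateBinary();
        xTaskCreate(sensor_task, "sensor", 256, NULL, configMAX_PRIORITIES - 1, NULL);
        vTaskStartScheduler();
    }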
That is the reason why other engineers still do not consider operating systems or micro/nano kernels: the way they work is too far from their needs.
Actually this is a reason why these "engineers" should read something instead of pretending to know everything and seeking "innovative" solutions to non-existent problems. This is completely wrong.
The first representation is the traditional kernel. A task executes at each tick and it may interrupt the kernel with a system call that invokes a full context switch.
This is not a feature of an RTOS, but of the way you wrote your application - if a high-priority task is constantly doing something, then lower-priority tasks will NOT get any chance to run. But this is just because you assigned the wrong priorities.
Unless you use cooperative RTOS, but if you have such high requirements, why would you do that?
The second diagram shows a sub-tick kernel scheduler where multiple tasks may share the same tick interrupt.
This is exactly how EVERY preemptive RTOS works.
Available operating systems such as RTOS, RTAI, VxWorks or µC/OS are not fully real-time and are not suitable for embedded hard real-time applications such as motion control, where a typical cycle would last no more than 50 to 500 microseconds.
Completely wrong. In every known RTOS it is not a problem to get a response time down to single microseconds (1-3 us) with a chip that has a clock in the range of 100 MHz. So you actually can run "jobs" which are as short as 10 us without too much overhead. You can even have "jobs" as short as 10 ns, but then the overhead will be pretty high...
What could be a good existing architecture or framework that can fulfill my requirements other than a naive bare-metal approach where everything is written sequentially around one master interrupt? If this kind of framework/design pattern exists what would it be called?
This pattern is called preemptive RTOS. Do note that threads in RTOS are NOT executed in "tick interrupt". They are executed in standard "thread" context, and tick interrupt is only used to switch context of one thread to another.
What you described in your post is a "cooperative" RTOS, which does NOT preempt threads. You use that in systems with extremely limited resources and with low timing requirements. In every other case you use preemptive RTOS, which is capable of handling the events immediately.

Event or polled based embedded MCU system architecture?

I have prior experience in writing both event-based and poll-based embedded systems (for tiny MCUs with no preemptive OS).
In an event-based system, tasks usually receive events (messages) on a queue and handle them in turn.
In a poll-based system, tasks poll status at a certain interval and respond to changes.
Which architecture do you prefer? Can both co-exist?
UPDATE: POINTS MADE
POLL BASED
- Tight coupling related to timing aspects (#Lundin)
* Can co-exist alongside event system using queues (#embedded.kyle)
* Fine for smaller programs (#Lundin)
EVENT BASED
+ More flexible system in the long run (#embedded.kyle)
- RTOS edition adds complexity (#Lundin)
* Small programs = state-machine controlled (#Lundin)
* Can be implemented using queues and a "super-loop" (inside controller/main) (#embedded.kyle)
* Only true "events" are hardware interrupt ones (#Lundin)
RELATED QUESTIONS
* Looking for a comparison of different scheduling algorithms for a Finite State Machine (#embedded.kyle)
RELATED INFO
* "Prefer Using Active Objects Instead of Naked Threads" (#Miro)
http://www.drdobbs.com/parallel/prefer-using-active-objects-instead-of-n/225700095
* "Use Threads Correctly = Isolation + Asynchronous Messages" (#Miro)
http://www.drdobbs.com/parallel/use-threads-correctly-isolation-asynch/215900465
There is really no such thing as "event-driven" on a bare bone MCU platform, despite what the buzzword-spitters are trying to tell you. The only kind of true events you can receive are hardware interrupts.
Depending on the nature of the application and its real time requirements, interrupts may or may not be suitable. Generally, it is far easier to achieve deterministic real time with a polling system. However, systems relying solely on polling are very hard to maintain, because you get tight coupling between all timing aspects.
Suppose you try to start up an LCD, which is slow. Instead of polling some timer repeatedly while burning CPU cycles in an empty loop, you would perhaps decide to receive some data over a bus in the meantime. And then you want to print the received data on the LCD. Such a design has created a tight coupling between the LCD startup time and the serial bus, and another tight coupling between the serial bus and the printing of data. From an object-oriented point of view these things are not related to each other at all. If you were to speed up the serial bus at some point in the future, you could suddenly encounter LCD printing bugs, because it has not finished starting up when you try to print on it.
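A minimal sketch of that hidden coupling, with every device function hypothetical - the code only keeps working as long as the serial reads happen to take longer than the LCD settle time:

    #define PACKET_LEN 64
    static unsigned char buffer[PACKET_LEN];

    extern void lcd_begin_init(void);            /* takes ~50 ms to settle   */
    extern unsigned char uart_read_byte(void);   /* blocking read            */
    extern void lcd_print(const unsigned char *data);

    void startup(void)
    {
        lcd_begin_init();

        for (int i = 0; i < PACKET_LEN; i++)
            buffer[i] = uart_read_byte();        /* blocking reads used as the "wait" */

        /* Works today because PACKET_LEN blocking reads happen to take longer
         * than the LCD settle time. Double the baud rate and lcd_print()
         * starts failing, even though nothing "related to the LCD" changed.  */
        lcd_print(buffer);
    }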
In a small program, it is perfectly fine to use polling like in the above example. But if the program has potential of growing, polling will make it very complex and the tight coupling will ultimately lead to many strange and fatal bugs.
On the other hand, multi-threading and RTOS adds quite a lot of extra complexity which in turn can lead to bugs as well. Where to draw the line isn't simple to determine.
Out of personal experience I'd say that any program smaller than 20-30k LOC will not benefit from scheduling and multitasking, beyond simple state machines. If the program gets larger than that, I'd consider a multitasking RTOS.
Also, low-end MCUs (8- and 16-bitters) are far from suitable for running an OS. If you find that you need an OS to handle complexity on an 8- or 16-bit platform, you probably picked the wrong MCU to begin with. I'd be sceptical of any attempt to introduce an OS on anything smaller than a 32-bitter.
Actually, event-driven programming and threads can be combined and the resulting pattern is widely known as "active objects" or "actors".
Active objects (actors) are encapsulated, event-driven state machines, which communicate with one another asynchronously by posting events to each other. Active objects process all events in their own thread of execution (at least conceptually, if a cooperative scheduler is used), so they avoid by design most concurrency hazards.
Actors and active objects are all the rage (again) in general-purpose computing (you can search for Erlang, Scala, Akka). Herb Sutter has written a couple of good articles that explain the "active object" pattern: "Prefer Using Active Objects Instead of Naked Threads" (http://www.drdobbs.com/parallel/prefer-using-active-objects-instead-of-n/225700095) and "Use Threads Correctly = Isolation + Asynchronous Messages" (http://www.drdobbs.com/parallel/use-threads-correctly-isolation-asynch/215900465)
Here is what Herb says in the first of these articles:
"Using raw threads directly is trouble for a number of reasons ...
Active objects dramatically improve our ability to reason about our thread's code and operation by giving us higher-level abstractions and idioms that raise the semantic level of our program and let us express our intent more directly. As with all good patterns, we also get better vocabulary to talk about our design. Note that active objects aren't a novelty: UML and various libraries have provided support for active classes"
So, all this is really not new. But what's perhaps less known, especially in the embedded systems community, is that active objects are not only fully applicable to the embedded systems, but they are actually a perfect match for embedded and they are lighter than a traditional RTOS.
I've been using the event-driven active objects for over a decade now and have created the QP family of active object frameworks for embedded systems (see http://www.state-machine.com/). I would never go back to the polling "superloop" or the raw RTOS.
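For readers who have not seen the pattern before, here is a minimal, framework-free sketch of one active object in C. The names and queue size are arbitrary, the queue has no overflow check, and a real framework would make posting ISR-safe; the point is only the shape: a private event queue plus a run-to-completion dispatch function, drained from a single loop so the state machine itself needs no locking.

    #include <stdint.h>

    enum signal { BUTTON_PRESSED, TIMEOUT };

    struct active_object {
        enum signal queue[8];               /* private event queue            */
        uint8_t head, tail;
        void (*dispatch)(struct active_object *self, enum signal sig);
    };

    void post(struct active_object *ao, enum signal sig)
    {
        ao->queue[ao->tail++ % 8u] = sig;   /* called from ISRs or other AOs  */
    }

    static void blinky_dispatch(struct active_object *self, enum signal sig)
    {
        static int led_on = 0;              /* the object's private state     */
        if (sig == TIMEOUT)
            led_on = !led_on;               /* one run-to-completion step     */
        (void)self;
    }

    static struct active_object blinky = { {0}, 0, 0, blinky_dispatch };

    void event_loop(void)                   /* cooperative "vanilla" scheduler */
    {
        for (;;)
            if (blinky.head != blinky.tail) {
                enum signal sig = blinky.queue[blinky.head++ % 8u];
                blinky.dispatch(&blinky, sig);
            }
    }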
I prefer whichever architecture is best suited to the application at hand.
Both can co-exist in a multilevel queue architecture. One queue works on a poll basis, running in the main loop, while another, most likely tasked with higher-priority events, works by using interrupt-based preemption.
See my answer to this SO question for a more detailed explanation and comparison of the different scheduling algorithms.

How do Real Time Operating Systems work?

I mean how and why are realtime OSes able to meet deadlines without ever missing them? Or is this just a myth (that they do not miss deadlines)? How are they different from any regular OS and what prevents a regular OS from being an RTOS?
Meeting deadlines is a function of the application you write. The RTOS simply provides facilities that help you with meeting deadlines. You could also program on "bare metal" (without an RTOS) in a big main loop and meet your deadlines.
Also keep in mind that unlike a more general purpose OS, an RTOS has a very limited set of tasks and processes running.
Some of the facilities an RTOS provide:
Priority-based Scheduler
System Clock interrupt routine
Deterministic behavior
Priority-based Scheduler
Most RTOSes have between 32 and 256 possible priorities for individual tasks/processes. The scheduler will run the task with the highest priority. When a running task gives up the CPU, the next-highest-priority task runs, and so on...
The highest priority task in the system will have the CPU until:
it runs to completion (i.e. it voluntarily gives up the CPU)
a higher priority task is made ready, in which case the original task is pre-empted by the new (higher priority) task.
As a developer, it is your job to assign the task priorities such that your deadlines will be met.
System Clock Interrupt routines
The RTOS will typically provide some sort of system clock (anywhere from 500 µs to 100 ms) that allows you to perform time-sensitive operations.
If you have a 1ms system clock, and you need to do a task every 50ms, there is usually an API that allows you to say "In 50ms, wake me up". At that point, the task would be sleeping until the RTOS wakes it up.
Note that just being woken up does not ensure you will run exactly at that time. It depends on the priority. If a task with a higher priority is currently running, you could be delayed.
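A FreeRTOS-flavoured sketch of that "in 50 ms, wake me up" API; vTaskDelayUntil() is the real call, while do_sampling() is a hypothetical placeholder for the periodic work:

    #include "FreeRTOS.h"
    #include "task.h"

    extern void do_sampling(void);           /* hypothetical 50 ms job */

    void periodic_task(void *arg)
    {
        (void)arg;
        TickType_t last_wake = xTaskGetTickCount();
        for (;;) {
            do_sampling();
            /* Sleep until exactly 50 ms after the previous wake-up, so the
             * period does not drift with the job's execution time. */
            vTaskDelayUntil(&last_wake, pdMS_TO_TICKS(50));
        }
    }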
Deterministic Behavior
The RTOS goes to great lengths to ensure that whether you have 10 tasks or 100 tasks, it does not take any longer to switch context, determine what the next highest-priority task is, etc...
In general, the RTOS operation tries to be O(1).
One of the prime areas for deterministic behavior in an RTOS is the interrupt handling. When an interrupt line is signaled, the RTOS immediately switches to the correct Interrupt Service Routine and handles the interrupt without delay (regardless of the priority of any task currently running).
Note that most hardware-specific ISRs would be written by the developers on the project. The RTOS might already provide ISRs for serial ports, system clock, maybe networking hardware but anything specialized (pacemaker signals, actuators, etc...) would not be part of the RTOS.
This is a gross generalization and as with everything else, there is a large variety of RTOS implementations. Some RTOS do things differently, but the description above should be applicable to a large portion of existing RTOSes.
In RTOSes, the most critical parameters to take care of are low latency and time determinism, which they achieve by following certain policies and tricks.
In GPOSes, by contrast, the critical parameter is high throughput along with acceptable latency; you cannot count on a GPOS for time determinism.
RTOSes have tasks which are much lighter than processes/threads in a GPOS.
It is not that they are able to meet deadlines, it is rather that they have deadlines fixed whereas in a regular OS there is no such deadline.
In a regular OS the task scheduler is not really strict. That is, the processor will execute so many instructions per second, but it may occasionally not do so. For example, a task might be pre-empted to allow a higher-priority one to execute (and maybe for a longer time). In an RTOS the processor will always execute the same number of tasks.
Additionally there is usually a time limit for tasks to complete, after which a failure is reported. This does not happen in a regular OS.
Obviously there is lot more detail to explain, but the above are two of the important design aspects that are used in RTOS.
Your RTOS is designed in such a way that it can guarantee timings for important events, like hardware interrupt handling and waking up sleeping processes exactly when they need to be.
This exact timing allows the programmer to be sure that his (say) pacemaker is going to output a pulse exactly when it needs to, not a few tens of milliseconds later because the OS was busy with another inefficient task.
It's usually a much simpler OS than a fully-fledged Linux or Windows, simply because it's easier to analyse and predict the behaviour of simple code. There is nothing stopping a fully-fledged OS like Linux from being used in an RTOS environment, and it has RTOS extensions. Because of the complexity of the code base, though, it will not be able to guarantee its timings down to as small a scale as a smaller OS.
The RTOS scheduler is also stricter than a general-purpose scheduler. It's important to know that the scheduler isn't going to change your task's priority because you've been running a long time and don't have any interactive users. Most OSes would internally reduce the priority of this type of process to favour short-term interactive programs whose interface should not be seen to lag.
You might find it helpful to read the source of a typical RTOS. There are several open-source examples out there, and the following yielded links in a little bit of quick searching:
FreeRTOS
eCos
A commercial RTOS that is well documented, available in source code form, and easy to work with is µC/OS-II. It has a very permissive license for educational use, and (a mildly out of date version of) its source can be had bound into a book describing its theory of operation using the actual implementation as example code. The book is MicroC OS II: The Real Time Kernel by Jean Labrosse.
I have used µC/OS-II in several projects over the years, and can recommend it.
"Basically, you have to code each "task" in the RTOS such that they will terminate in a finite time."
This is actually correct. The RTOS will have a system tick defined by the architecture, say 10 millisec., with all tasks (threads) both designed and measured to complete within specific times. For example in processing real time audio data, where the audio sample rate is 48kHz, there is a known amount of time (in milliseconds) at which the prebuffer will become empty for any downstream task which is processing the data. Therefore using the RTOS requires correct sizing of the buffers, estimating and measuring how long this takes, and measuring the latencies between all software layers in the system. Then the deadlines can be met. Otherwise the applications will miss the deadlines. This requires analysis of the worst-case data processing throughout the entire stack, and once the worst-case is known, the system can be designed for, say, 95% processing time with 5% idle time (this processing may not ever occur in any real usage, because worst-case data processing may not be an allowed state within all layers at any single moment in time).
Example timing diagrams for the design of a real time operating system network app are in this article at EE Times,
PRODUCT HOW-TO: Improving real-time voice quality in a VoIP-based telephony design
http://www.eetimes.com/design/embedded/4007619/PRODUCT-HOW-TO-Improving-real-time-voice-quality-in-a-VoIP-based-telephony-design
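As a back-of-the-envelope illustration of the buffer-sizing argument above (the 256-sample figure is an assumption, not from the article): at a 48 kHz sample rate, a 256-sample prebuffer drains in 256 / 48000 s, roughly 5.3 ms, so every upstream producer must complete its worst-case processing and refill the buffer within that window or the deadline is missed.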
I haven't used an RTOS, but I think this is how they work.
There's a difference between "hard real time" and "soft real time". You can write real-time applications on a non-RTOS like Windows, but they're 'soft' real-time:
As an application, I might have a thread or timer which I ask the O/S to run 10 times per second ... and maybe the O/S will do that, most of the time, but there's no guarantee that it will always be able to ... this lack of guarantee is why it's called 'soft'. The reason why the O/S might not be able to is that a different thread might be keeping the system busy doing something else. As an application, I can boost my thread priority to for example HIGH_PRIORITY_CLASS, but even if I do this the O/S still has no API which I can use to request a guarantee that I'll be run at certain times.
A 'hard' real-time O/S does (I imagine) have APIs which let me request guaranteed execution slices. The reason why the RTOS can make such guarantees is that it's willing to abend threads which take more time than expected / than they're allowed.
What is important is realtime applications, not a realtime OS. Usually realtime applications are predictable: many tests, inspections, WCET analyses, proofs, ... have been performed which show that deadlines are met in all specified situations.
It happens that RTOSes help doing this work (building the application and verifying its RT constraints). But I've seen realtime applications running on standard Linux, relying more on hardware horsepower than on OS design.
... well ...
A real-time operating system tries to be deterministic and meet deadlines, but it all depends on the way you write your application. You can make an RTOS very non-real-time if you don't know how to write "proper" code.
Even if you know how to write proper code:
It's more about trying to be deterministic than being fast.
When we talk about determinism, there are two kinds:
1) event determinism: for each set of inputs, the next states and outputs of a system are known
2) temporal determinism: additionally, the response time for each set of outputs is known
This means that if you have asynchronous events like interrupts, your system is, strictly speaking, no longer temporally deterministic (and most systems use interrupts).
If you really want to be deterministic poll everything.
... but maybe it's not necessary to be 100% deterministic
The textbook/interview answer is "deterministic pre-emption". The system is guaranteed to transfer control within a bounded period of time if a higher priority process is ready to run (in the ready queue) or an interrupt is asserted (typically input external to the CPU/MCU).
They actually don't guarantee meeting deadlines; what they do that makes them truly RTOS is to provide the means to recognize and deal with deadline overruns. 'Hard' RT systems generally are those where missing a deadline is disastrous and some kind of shutdown is required, whereas a 'soft' RT system is one where continuing with degraded functionality makes sense. Either way an RTOS permits you to define responses to such overruns. Non RT OS's don't even detect overruns.
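A sketch of what "recognize and deal with deadline overruns" can look like in application code; every name here is hypothetical, and a real RTOS typically offers richer mechanisms (watchdogs, timeout returns from blocking calls) than this hand-rolled check:

    #include <stdint.h>

    #define DEADLINE_TICKS 10u               /* e.g. 10 ms at a 1 ms tick       */

    extern uint32_t rtos_tick_count(void);   /* hypothetical monotonic tick     */
    extern void do_control_step(void);       /* the actual real-time job        */
    extern void overrun_handler(uint32_t lateness); /* degrade, log, or shut down */

    void control_task_body(void)
    {
        uint32_t start = rtos_tick_count();

        do_control_step();

        uint32_t elapsed = rtos_tick_count() - start;  /* wraps safely (unsigned) */
        if (elapsed > DEADLINE_TICKS)
            overrun_handler(elapsed - DEADLINE_TICKS); /* a non-RT OS never notices */
    }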
Basically, you have to code each "task" in the RTOS such that it will terminate in a finite time.
Additionally your kernel would allocate specific amounts of time to each task, in an attempt to guarantee that certain things happen at certain times.
Note that this is not an easy thing to do, however. Imagine things like virtual function calls; in OO it's very difficult to determine these things. Also, an RTOS must be carefully coded with regard to priority; it may require that a high-priority task is given the CPU within x milliseconds, which may be difficult to do depending on how your scheduler works.