I have yet to find a definition of an RTOS that is specific enough to have meaning. The best one I can find is on wiki:
https://en.wikipedia.org/wiki/Real-time_operating_system
However, I have some critical comments/questions:
"Real time" seems to be undefined in all the definitions for RTOS I've found. Nothing can be as fast as actual real time (infinitesimally small!). Therefore, I believe "real time" only makes sense in the context of the observer. Real time for a human iPhone user might be <20ms, because human eyesight cannot detect changes faster than that. For an air bag deployment it might be <1ms. All definitions on the internet seem to gloss over the definition of "real time"!
If an RTOS is defined by the requirement to execute something within a specific time frame (a "deadline"), why does jitter come into the definition? If the iPhone response jitters between 12-14ms, is it no longer responding in real time? It meets the 20ms requirement, right? If one time the response went to 100ms, the user might notice, at which point the system is not an RTOS.
How can there possibly be a "soft" RTOS?! The definition of an RTOS is meeting a particular deadline requirement. If it doesn't meet it, then it's not an RTOS! The very definition of RTOS prohibits a "soft" RTOS.
To me it seems there is no formal and precise definition of RTOS. It's a general term describing the characteristic of an OS whose main priority is the appearance of "real time" (per some requirement number) to a particular type of observer. It also seems like the name has taken on implementation meaning, such as how things are processed, multi-tasking, message passing, semaphores, etc... all of which may NOT be part of an RTOS at all if the system fails to respond within the "deadline" requirement, right?
Sorry about such a broad question, but I can't get a clear picture in my brain. All the definitions I've found are simply not precise enough, or they cloud the definition with implementation details.
You're right that no definition gives exact time bounds. That's not the goal of a definition. Real time isn't dependent on the observer, though, but on the application. As applications differ, time bounds differ, and therefore a definition cannot give that bound as a number.
Jitter is irrelevant as long as the application's time bound is met. You're absolutely right about the example. If the deadline is 20 ms, taking 100 ms is a failure. If the OS is to blame for the delay, it's not an RTOS.
"Soft realtime" has a very specific meaning, and this is probably the only thing you really got wrong. The concept at work here is, what do you do when a task exceeds its deadline? (Note: this could be either the fault of the task itself or the RTOS.) In a hard realtime system, the task simply has no value anymore. A late outcome is as good as no outcome, and you cancel the task. No point in risking other tasks.
Soft realtime is actually more complex. Finishing the task still has value, although a diminished one. So the RTOS cannot hard-kill the task, but the OS still has to ensure other tasks meet their deadlines. That requires extra care, which wouldn't have been necessary if you could just kill the task.
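For illustration only, here is a minimal sketch in plain C (not any particular RTOS API; the deadline value, the task body, and the "value" handling are all made up) of the difference the two cases make to the code that consumes a result:

```c
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

#define DEADLINE_MS 20.0                            /* hypothetical application deadline */

static double elapsed_ms(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) * 1000.0 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

static int do_work(void) {                          /* stand-in for the real-time task */
    struct timespec t = { 0, 15 * 1000 * 1000 };    /* pretend it takes ~15 ms */
    nanosleep(&t, NULL);
    return 42;
}

int main(void) {
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    int result = do_work();
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ms = elapsed_ms(start, end);
    if (ms <= DEADLINE_MS) {
        printf("on time (%.1f ms): result %d has full value\n", ms, result);
    } else {
        /* Hard real-time: a late result is worthless (or harmful), so drop it.
         * Soft real-time: the result still has diminished value, but the OS
         * must make sure other tasks still meet their own deadlines. */
        printf("late (%.1f ms): hard -> discard, soft -> use with penalty\n", ms);
    }
    return 0;
}
```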
There is an Embedded Systems Dictionary. Here are some excerpts:
real-time adj. Having timeliness requirements, typically in the form of deadlines that can’t be missed.
real-time operating system n. An operating system designed specifically for use in real-time systems. Abbreviated RTOS.
real-time system n. Any computer system, embedded or otherwise, that has timeliness requirements. The following question can be used to distinguish real-time systems from the rest: “Is a late answer as bad, or even worse, than a wrong answer?” In other words, what happens if the computation doesn’t finish in time? If nothing bad happens, it’s not a real-time system. If someone dies or the mission fails, it’s generally considered “hard” real-time, which is meant to imply that the system has hard deadlines. Everything in between is “soft” real-time.
I need to measure the time a forklift spends in collision (waiting behind another vehicle), but movement_log is not available for an agent type that is a forklift managed by a transporter fleet. I also cannot use statecharts, because they cost too much performance.
Situation: I am simulating a warehouse with one-way aisles, and the capacity of these one-way aisles is 2 vehicles. There are situations where a forklift (the yellow one) needs to wait behind another one in a one-way aisle. I currently have that modeled properly; I just don't know how to detect this situation and log it.
Thank you
I would do it as follows:
Create a new 2-dimensional variable called collisionLog.
Check the speed [getSpeed() function] and state [TransporterState getState() function] every 1 second.
Write these into the collisionLog.
Once the simulation is completed, remove the rows with idle status.
Then do the calculations based on the fact that when the speed is zero and the transporter is busy, you have a waiting vehicle.
There is no accessible trigger point (typically an action of a block) to trap when transporters have collisions. Yes, that situation obviously has to be captured internally to enable the transporters to avoid collisions, but in this situation that is not exposed as a block action, or action anywhere else. (AnyLogic space markup elements never have actions, except for some of the newer Material Handling library ones like Station, because these effectively represent a process step.)
The Transporter Control block has all the settings for collision detection and avoidance, but no related actions.
So your alternatives are really
'Scan' for this situation occurring: Yashar's answer, which infers that zero speed when non-idle means 'waiting due to collision' (which may or may not be 100% robust), is one way.
Explicitly break down the movement (from the process perspective) to define the potential 'conflicts' and decision-making within the process flow (e.g., if you're trying to move to an aisle, move to an entrance node, reserve a space in the aisle using resource pools or similar, and only enter when free). Clearly that doesn't cover every possible case, but may be useful in some situations.
The actions that do exist in the Transporter Control block could help a bit here (for both alternatives) since at least you have action points on entering paths and nodes. (You could also raise an enhancement request with AnyLogic to add collision-related actions here....)
I have a huge model with a large number of forklifts; checking an attribute every second would result in a huge performance loss.
I also cannot use statecharts, because they cost too much performance.
Have you actually tried it though? Some things do not result in as much of a performance hit as you might think, and performance should not be an a priori 'that will be too slow' thing; ideally you have requirements for acceptable performance and you work round that. (There are always trade-offs between performance, functionality and maintainability.)
[You also don't say how you think using statecharts could have helped. Did you mean doing the 'scanning' approach via a statechart, say with cyclic entry/exit from a Scan state?]
I studied the Principles of Chaos and looked at some open-source projects, such as ChaosBlade, which is open-sourced by Alibaba, and Mangle, by VMware.
These tools are both fault-injection tools, and they do no analysis of the system under test.
According to the Principles of Chaos, we should:
1. Start by defining ‘steady state’ as some measurable output of a system that indicates normal behavior.
2. Hypothesize that this steady state will continue in both the control group and the experimental group.
3. Introduce variables that reflect real world events like servers that crash, hard drives that malfunction, network connections that are severed, etc.
4. Try to disprove the hypothesis by looking for a difference in steady state between the control group and the experimental group.
So how do we do step 4? Should we use a monitoring system to watch some major metrics, to check the status of the system after fault injection?
Are there any good suggestions or best practices?
So how do we do step 4? Should we use a monitoring system to watch some major metrics, to check the status of the system after fault injection?
As always, the answer is: it depends... It depends on how you want to measure your hypothesis, it depends on the hypothesis itself, and it depends on the system. But normally it makes total sense to introduce metrics to improve the observability.
If your hypothesis is something like "Our service can process 120 requests per second, even if one node fails", then you could measure that via metrics, yes, but you could also measure it via the requests you send and the responses you receive back. It is up to you.
But if your hypothesis is "I get a response for a request that was sent before a node goes down", then it makes more sense to verify this directly with the requests and responses.
On our project we use, for example, chaostoolkit, which lets you specify the hypothesis in JSON or YAML together with related actions to prove it.
So you can say "I have a steady state X, and if I do Y, then the steady state X should still be valid". The toolkit is also able to verify metrics if you want it to.
The Principles of Chaos sit a bit above the actual testing: they reflect the philosophy of the designed vs. the actual system, and of the system under injection vs. a baseline, but they are a bit too abstract to apply in everyday testing. They are a way of reasoning, not a work-process methodology.
I think the "control group vs. experimental group" wording is one especially doubtful part: you stage a test (injection) in a controlled environment and try to catch whether there is a user-facing incident, an SLA breach of any kind, or a degradation. I do not see where the control group is if you test on a test stand or a dedicated environment.
We use a very linear variety of chaos methodology which is:
find failure points in the system (based on architecture, critical user scenarios and history of incidents)
design chaos test scenarios (may be a single attack or a more elaborate sequence)
run tests, register results, and reuse green tests for new releases
start tasks to fix red tests, and verify the solutions when they become available
One may say we are actually using the Principles of Chaos in steps 1 and 2, but we tend to think of chaos testing as a quite linear and simple process.
Mangle 3.0 was released with an option for analysis using a resiliency score. Detailed documentation is available at https://github.com/vmware/mangle/blob/master/docs/sre-developers-and-users/resiliency-score.md
I want to ask about supporting lock-step (lockstep, lock-step) processors at the SW level.
As far as I know, in AUTOSAR ASIL-D, a lock-step processor is used for a fault-tolerant system in the scenario below.
The input signals for a processor are copied to another processor (its lock-step pair).
The output signals from the two processors are compared.
If the two output signals differ, a trap is generated.
I think that if a trap is generated, then it should be processed somewhere at the SW level.
However, I could not find any standard for this processing.
I have read some topics on error handling in SW specified by AUTOSAR, but I could not find any satisfying answers.
So my questions are summarized below.
In AUTOSAR or another standard, where is the right place to process the lock-step trap (SW-C, RTE, or BSW)?
In AUTOSAR or another standard, what is the right action for processing the lock-step trap (RESET or ABORT)?
Thank you.
There are multiple concepts involved here, from different sources.
The ASIL levels are defined by ISO 26262. ASIL-D is the highest level and using a lockstep CPU is one of the methods typically used to achieve ASIL-D compliance for the whole system. Autosar doesn't define how you achieve ASIL-D, or any ASIL level at all. From an Autosar perspective, lockstep would be an implementation detail of the MCU driver, and Autosar doesn't require MCUs to support lockstep. How a particular lockstep implementation works (whether the outputs are compared after each instruction or not, etc.) depends on the hardware, so you can find those answers in the corresponding hardware manual.
Correspondingly, some decisions have to be made by people working on the system, including an expert on functional safety. The decision on what to do on lockstep failure is one such decision - how you react to a lockstep trap should be defined at the system level. This is also not defined by Autosar, although the most reasonable option is to reset your microcontroller after saving some information about the error.
As for where in the Autosar stack the trap should be handled, this is also an implementation decision, although the reasonable choice is for this to happen at the MCAL level - to the extent that talking about levels even makes sense here, as the trap will run in interrupt/trap context and not the normal OS task context. Typically, a trap would come with a higher priority than any interrupt, and also typically it's not possible to disable the traps in software. A trap will be handled by some routine that is registered by the OS in the same way it registers ISRs, so you'd want to configure the trap handler in whatever tool you're using for OS configuration. The lockstep trap may (again, depending on the hardware) be considered a non-recoverable trap, meaning that the trap handler should trigger a reset eventually. Calling the standard ShutdownOS() function may be reasonable.
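To make that concrete, here is a rough sketch of what such a handler might look like. Everything here is illustrative: the handler name, how it is registered as the trap vector, and the error-record helper are hypothetical and depend on the MCU and OS configuration; ShutdownOS() is the standard AUTOSAR OS shutdown service mentioned above.

```c
/* Rough sketch of a lockstep-trap handler along the lines described above. */

typedef unsigned char StatusType;                    /* normally from Std_Types.h */
extern void ShutdownOS(StatusType Error);            /* normally from Os.h */

#define E_LOCKSTEP_MISMATCH ((StatusType)0x42u)      /* project-defined error code */

/* Hypothetical helper: persist a minimal error record so it survives the reset,
 * e.g. in no-init RAM or a dedicated NVM block. */
extern void ErrorRecord_StoreLockstepFault(void);

/* Hypothetical trap entry point, registered via the OS/MCU configuration tool.
 * It runs in trap context (higher priority than any interrupt), so keep it
 * short: record the fault, then hand control to the OS shutdown sequence. */
void Lockstep_TrapHandler(void)
{
    ErrorRecord_StoreLockstepFault();

    /* Treat the trap as non-recoverable. The ShutdownHook() configured for the
     * OS can then trigger the actual MCU reset once last-chance actions are done. */
    ShutdownOS(E_LOCKSTEP_MISMATCH);

    for (;;) {
        /* ShutdownOS() does not return */
    }
}
```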
I read Deprecating the Observer Pattern with Scala.React and found reactive programming very interesting.
But there is a point I can't figure out: the author described the signals as the nodes in a DAG (directed acyclic graph). Then what if you have two signals (or event sources, or models, whatever) depending on each other? I.e., "two-way binding", like a model and a view in web front-end programming.
Sometimes it's just inevitable, because the user can change the view, and the back-end (an asynchronous request, for example) can change the model, and you want the other side to reflect the change immediately.
Loop dependencies in a reactive programming language can be handled with a variety of semantics. The one that appears to have been chosen in scala.React is that of synchronous reactive languages, specifically that of Esterel. You can find a good explanation of these semantics and their alternatives in the paper "The synchronous languages 12 years later" by Benveniste, Caspi, Edwards, Halbwachs, Le Guernic, and de Simone, available at http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=1173191&tag=1 or http://virtualhost.cs.columbia.edu/~sedwards/papers/benveniste2003synchronous.pdf.
Replying to @Matt Carkci here, because a comment wouldn't suffice.
In the paper, section 7.1 Change Propagation, you have:
Our change propagation implementation uses a push-based approach based on a topologically ordered dependency graph. When a propagation turn starts, the propagator puts all nodes that have been invalidated since the last turn into a priority queue which is sorted according to the topological order, briefly level, of the nodes. The propagator dequeues the node on the lowest level and validates it, potentially changing its state and putting its dependent nodes, which are on greater levels, on the queue. The propagator repeats this step until the queue is empty, always keeping track of the current level, which becomes important for level mismatches below. For correctly ordered graphs, this process monotonically proceeds to greater levels, thus ensuring data consistency, i.e., the absence of glitches.
and later, in section 7.6 Level Mismatch:
We therefore need to prepare for an opaque node n to access another node that is on a higher topological level. Every node that is read from during n’s evaluation, first checks whether the current propagation level which is maintained by the propagator is greater than the node’s level. If it is, it proceed as usual, otherwise it throws a level mismatch exception containing a reference to itself, which is caught only in the main propagation loop. The propagator then hoists n by first changing its level to a level above the node which threw the exception, reinserting n into the propagation queue (since it’s level has changed) for later evaluation in the same turn and then transitively hoisting all of n’s dependents.
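For concreteness, here is a tiny sketch in plain C of the level-ordered push propagation that section 7.1 describes; the node layout and names are mine, not Scala.React's, and the hoisting mechanism of 7.6 is not shown.

```c
/* Invalidated nodes are processed in order of their topological "level", so a
 * node is only recomputed after everything it depends on has been validated. */
#include <stdio.h>
#include <stdbool.h>

#define MAX_NODES 8

typedef struct Node {
    int  level;                       /* topological level: deps have lower levels */
    int  value;
    bool dirty;                       /* invalidated since the last turn? */
    int  ndeps;
    int  deps[MAX_NODES];             /* indices of nodes this one reads from */
    int  (*compute)(struct Node *nodes, struct Node *self);
} Node;

/* Propagation turn: repeatedly pick the dirty node with the lowest level,
 * recompute it, and mark its dependents dirty. With an acyclic, correctly
 * levelled graph this terminates and never reads a stale ("glitchy") value. */
static void propagate(Node *nodes, int n) {
    for (;;) {
        int next = -1;
        for (int i = 0; i < n; i++)                 /* poor man's priority queue */
            if (nodes[i].dirty && (next < 0 || nodes[i].level < nodes[next].level))
                next = i;
        if (next < 0) return;                       /* queue empty: turn is done */

        nodes[next].dirty = false;
        nodes[next].value = nodes[next].compute(nodes, &nodes[next]);
        for (int i = 0; i < n; i++)                 /* push to dependents */
            for (int d = 0; d < nodes[i].ndeps; d++)
                if (nodes[i].deps[d] == next) nodes[i].dirty = true;
    }
}

static int sum_deps(Node *nodes, Node *self) {      /* example node function */
    int s = 0;
    for (int d = 0; d < self->ndeps; d++) s += nodes[self->deps[d]].value;
    return s;
}

int main(void) {
    /* source (level 0) -> a (level 1) -> b (level 2), b also reads the source */
    Node nodes[3] = {
        { .level = 0, .value = 10, .ndeps = 0, .compute = sum_deps },
        { .level = 1, .ndeps = 1, .deps = {0}, .dirty = true, .compute = sum_deps },
        { .level = 2, .ndeps = 2, .deps = {0, 1}, .dirty = true, .compute = sum_deps },
    };
    propagate(nodes, 3);
    printf("a = %d, b = %d\n", nodes[1].value, nodes[2].value);  /* a = 10, b = 20 */
    return 0;
}
```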
While there's no mention of any topological constraint (cyclic vs. acyclic), something is not clear (at least to me).
First, there is the question of how the topological order is defined.
And then the implementation suggests that mutually dependent nodes would loop forever in the evaluation, through the exception mechanism explained above.
What do you think?
After scanning the paper, I can't find where they mention that it must be acyclic. There's nothing stopping you from creating cyclic graphs in dataflow/reactive programming. Acyclic graphs only allow you to create Pipeline Dataflow (e.g. Unix command line pipes).
Feedback and cycles are a very powerful mechanism in dataflow. Without them you are restricted in the types of programs you can create. Take a look at Flow-Based Programming - Loop-Type Networks.
Edit after second post by pagoda_5b
One statement in the paper made me take notice...
For correctly ordered graphs, this process monotonically proceeds to greater levels, thus ensuring data consistency, i.e., the absence of glitches.
To me that says that loops are not allowed within the Scala.React framework. A cycle between two nodes would seem to cause the system to continually try to raise the level of both nodes forever.
But that doesn't mean that you have to encode the loops within their framework. It could be possible to have one path from the item you want to observe and then another, separate, path back to the GUI.
To me, it always seems that too much emphasis is placed on a programming system completing and giving one answer. Loops make it difficult to determine when to terminate. Libraries that use the term "reactive" tend to subscribe to this thought process. But that is just a result of the von Neumann architecture of computers... a focus on solving an equation and returning the answer. Libraries that shy away from loops seem to be worried about program termination.
Dataflow doesn't require a program to have one right answer or ever terminate. The answer is the answer at this moment of time due to the inputs at this moment. Feedback and loops are expected if not required. A dataflow system is basically just a big loop that constantly passes data between nodes. To terminate it, you just stop it.
Dataflow doesn't have to be so complicated. It is just a very different way to think about programming. I suggest you look at J. Paul Morrison's book "Flow-Based Programming" for a field-tested version of dataflow, or my book (once it's done).
Check your MVC knowledge. The view doesn't update the model, so it won't send signals to it. The controller updates the model. For a C/F converter, you would have two controllers (one for the F control, one for the C control). Both controllers would send signals to a single model (which stores the only real temperature, Kelvin, in a lossless format). The model sends signals to two separate views (one for the C view, one for the F view). No cycles.
Based on the answer from @pagoda_5b, I'd say that you are likely allowed to have cycles (7.6 should handle it, at the cost of performance), but you must guarantee that there is no infinite regress. For example, you could have the controllers also receive signals from the model, as long as you guaranteed that receipt of said signal never caused a signal to be sent back to the model.
I think the above is a good description, but it uses the word "signal" in a non-FRP style. "Signals" in the above are really messages. If the description in 7.1 is correct and complete, loops in the signal graph would always cause infinite regress as processing the dependents of a node would cause the node to be processed and vice-versa, ad inf.
As @Matt Carkci said, there are FRP frameworks that allow loops, at least to a limited extent. They will either not be push-based, use non-strictness in interesting ways, enforce monotonicity, or introduce "artificial" delays so that when the signal graph is expanded along the temporal dimension (turning it into a value graph) the cycles disappear.
Is it actually important for a programmer to know if the standard library function he/she is using is actually executing a system call? If so, why?
Intuitively, I'm guessing the only importance is in knowing whether the general standard function is a library function or a system call itself. In other cases, I'm guessing there isn't much of a need to know whether a library function internally uses a system call?
It is not always possible to know (for sure) whether a library function wraps a system call. But in one way or another, this knowledge can help improve the portability and/or efficiency of your program. At least in the following two cases, knowing the syscall-level behaviour of your program is helpful.
When your program is time critical. Some system calls are expensive, and the library functions that wrap them are even more expensive. Thus time-critical tasks may need to switch to equivalent functions that do not enter kernel space at all.
It is also worth noting the vsyscall (or vDSO) mechanism of Linux, which accelerates some system calls (e.g. gettimeofday) by mapping their implementations into user-space memory. See this for more details.
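As a rough illustration (timings vary by machine and kernel; this is only a sketch), you can compare the libc clock_gettime(), which on Linux typically goes through the vDSO and avoids a kernel entry, with the same call forced through the raw syscall() interface:

```c
/* Compare clock_gettime() through libc (usually vDSO-backed on Linux, no kernel
 * entry) with the same call forced through syscall() (a real kernel entry). */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

#define ITERS 1000000

static double bench(int use_raw_syscall) {
    struct timespec t0, t1, tmp;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++) {
        if (use_raw_syscall)
            syscall(SYS_clock_gettime, CLOCK_MONOTONIC, &tmp);  /* real kernel entry */
        else
            clock_gettime(CLOCK_MONOTONIC, &tmp);               /* typically vDSO */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    printf("libc/vDSO : %.3f s for %d calls\n", bench(0), ITERS);
    printf("syscall() : %.3f s for %d calls\n", bench(1), ITERS);
    return 0;
}
```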
When your program needs to be deployed to restricted environments with system-call auditing. In order for your program to survive such environments, it could be necessary to profile it for any potential policy violations; or this is perhaps less tough if you were aware of the restrictions when you wrote the program.
Sometimes it might be important, and sometimes it isn't. I don't think there's any universal answer to this question. Reasons I can think of that might be important in some contexts are: if the system call requires user permissions that the user might not have; if in performance-critical code a system call might be too heavyweight; if you're writing a signal handler, where only async-signal-safe functions (largely thin syscall wrappers) may be called; if it might use some system resource (e.g. reading from /dev/random for every random number could use up the whole entropy pool - you'd want to know if that's going to happen every time you call rand()).
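To illustrate the signal-handler point with a small sketch (plain POSIX C; the handler and message are made up):

```c
/* Inside a signal handler you may only call async-signal-safe functions.
 * write() is a thin syscall wrapper and is on the safe list; printf() is a
 * buffered library function and is not, even though it eventually issues the
 * same write() syscall. */
#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <string.h>
#include <unistd.h>

static void handler(int sig) {
    (void)sig;
    const char msg[] = "caught SIGINT\n";
    (void)write(STDOUT_FILENO, msg, sizeof msg - 1);  /* write(): async-signal-safe */
    /* printf("caught SIGINT\n");  <- NOT safe: printf buffers and may take locks */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handler;
    sigaction(SIGINT, &sa, NULL);
    for (;;)
        pause();                                      /* wait for signals */
}
```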