Real-time application with graphical interface

I need to develop a real-time application that handles user input (from an external control panel) as fast as possible and provides output to an LCD monitor (also very fast).
To be more exact: I need to handle fixed-period interrupts (with a period of 1 ms) to recalculate an internal model, using the current state fetched from the external control panel.
When the internal model changes, I need to update the picture on the LCD monitor (right now I think the most appropriate way is to update on each interrupt). I don't want any delays here either.
What is the most suitable platform to implement this on? And which one is the most cost-effective?
I've heard about QNX, IntervalZero RTX, and RTLinux, but I don't know the details and capabilities of each one.
Thanks!

As far as the different OSes go, I know QNX has very good "hard" real-time support and has been built and optimized for it from the ground up. It also now has Qt running on it (QNX 6.5) for a full-featured GUI.
I have heard (second-hand) anecdotal information that RTLinux is very close to hard real time (guaranteed real time), but it can sometimes be late if a driver (usually third-party) is not coded well. [This was from an RTOS vendor, so take it for what it is worth.]
As a design issue, I'd decouple the three separate operations into three threads with different priorities: one thread to fetch the data and set a semaphore that new data is ready, one thread to update the model and set a semaphore that the model is ready, and one thread to update the GUI. I would run the GUI thread at a much slower update rate. Most monitors update in the 60-120 Hz range; why update faster than the data can be shown on the screen?
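A minimal sketch of that decoupling, assuming a POSIX-style system (QNX and RTLinux both provide these APIs); the calls commented out inside the loops and the priority values are placeholders:

    #include <pthread.h>
    #include <semaphore.h>
    #include <sched.h>

    static sem_t data_ready;   // posted when fresh panel data has been fetched
    static sem_t model_ready;  // posted when the model has been recalculated

    static void* acquire(void*) {
        for (;;) {
            // read_panel();   // placeholder: in a real system this blocks on the
            //                 // panel device or a 1 ms timer
            sem_post(&data_ready);
        }
    }

    static void* model(void*) {
        for (;;) {
            sem_wait(&data_ready);
            // recalc_model();             // placeholder: the 1 ms model update
            sem_post(&model_ready);
        }
    }

    static void* gui(void*) {
        for (;;) {
            sem_wait(&model_ready);
            while (sem_trywait(&model_ready) == 0) {}  // coalesce bursts: draw latest state only
            // redraw();                   // placeholder: repaint at display rate
        }
    }

    // Spawn a thread under fixed-priority (SCHED_FIFO) scheduling.
    static pthread_t spawn(void* (*fn)(void*), int prio) {
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        sched_param sp{};
        sp.sched_priority = prio;
        pthread_attr_setschedparam(&attr, &sp);
        pthread_t t;
        pthread_create(&t, &attr, fn, nullptr);
        return t;
    }

    int main() {
        sem_init(&data_ready, 0, 0);
        sem_init(&model_ready, 0, 0);
        spawn(acquire, 80);              // highest: never miss an input sample
        spawn(model,   70);
        pthread_t g = spawn(gui, 30);    // lowest: the screen can't show more than 60-120 Hz anyway
        pthread_join(g, nullptr);
    }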

Related

What should the memory protection strategy be for an ARM Cortex CPU?

I need to implement a multitasking system with an MPU for ARM Cortex-M3/M4 processors.
In this system, there will be a kernel that manages resources in privileged mode, and user applications running in unprivileged mode. I want to separate the user applications from the rest of the system and its resources.
Therefore, when I switch to a new task, I release (open up) the stack and global memory area of that user application.
This can be done easily using the ARM Cortex MPU registers.
The problem is that when a context switch occurs, I also need to use some of the kernel's global variables.
For example, I call a function in the PendSV handler during the context switch to get the next TCB, but the task pool lies outside the user application's area and is protected from user applications.
So it seems there has to be a balance, right? What are secure and efficient strategies for memory protection?
Privileged mode could be entered before the context switch when the yield function is called, but that does not seem like a good solution.
What are the general strategies for this issue?
Perhaps you might take a look at an existing open-source implementation and see what design decisions were made there. FreeRTOS, for example, has Cortex-M MPU support; it may not answer your exact question directly, and you may have to inspect the source code to get the complete details.
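To give a taste of the FreeRTOS approach: an unprivileged task is created with xTaskCreateRestricted() and carries its own list of MPU regions, which the kernel reprograms at every switch-in. A hedged sketch (the stack/buffer names and sizes are mine; check the port you use for its alignment rules):

    #include "FreeRTOS.h"
    #include "task.h"

    /* On the classic ARMv7-M ports, an MPU region base must be aligned to the
       region size, hence the aligned attributes below. */
    static StackType_t xUserStack[128] __attribute__((aligned(128 * sizeof(StackType_t))));
    static uint8_t ucShared[32] __attribute__((aligned(32)));

    static void vUserTask(void *pvParameters) {
        for (;;) {
            /* May touch only its own stack, its statics, and ucShared. */
        }
    }

    static const TaskParameters_t xUserTaskDef = {
        vUserTask,
        "user",
        128,            /* stack depth, in words */
        NULL,           /* pvParameters */
        1,              /* priority; no portPRIVILEGE_BIT, so the task is unprivileged */
        xUserStack,
        {   /* per-task MPU regions, applied by the kernel at each switch-in */
            { ucShared, sizeof(ucShared), portMPU_REGION_READ_WRITE },
            { 0,        0,                0                         },
            { 0,        0,                0                         }
        }
    };

    void vStartUserTask(void) {
        xTaskCreateRestricted(&xUserTaskDef, NULL);
    }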
One possibility is to divide the data memory into three regions: user, kernel, and shared.
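Sketching what that split could look like against the raw ARMv7-M MPU registers (the register addresses come from the ARM architecture manual; the region numbers, bases, and sizes are placeholders, and the TEX/S/C/B memory-attribute bits are omitted for brevity):

    #include <stdint.h>

    #define MPU_CTRL  (*(volatile uint32_t *)0xE000ED94)
    #define MPU_RNR   (*(volatile uint32_t *)0xE000ED98)
    #define MPU_RBAR  (*(volatile uint32_t *)0xE000ED9C)
    #define MPU_RASR  (*(volatile uint32_t *)0xE000EDA0)

    /* RASR fields: ENABLE = bit 0, SIZE = bits [5:1] (region spans 2^(SIZE+1) bytes),
       AP = bits [26:24]. */
    #define RASR(ap, size_log2) (((uint32_t)(ap) << 24) | (((size_log2) - 1u) << 1) | 1u)
    #define AP_PRIV_RW_USER_NONE 0x1u   /* kernel only                */
    #define AP_FULL_ACCESS       0x3u   /* kernel + user applications */

    static void mpu_region(uint32_t n, uint32_t base, uint32_t rasr) {
        MPU_RNR  = n;
        MPU_RBAR = base;   /* base must be aligned to the region size */
        MPU_RASR = rasr;
    }

    void mpu_setup(uint32_t kernel_base, uint32_t shared_base, uint32_t user_base) {
        mpu_region(0, kernel_base, RASR(AP_PRIV_RW_USER_NONE, 17)); /* 128 KiB kernel data */
        mpu_region(1, shared_base, RASR(AP_FULL_ACCESS,       10)); /*   1 KiB shared area */
        mpu_region(2, user_base,   RASR(AP_FULL_ACCESS,       15)); /*  32 KiB current task,
                                                                       rewritten on each context switch */
        MPU_CTRL = 0x5u;  /* ENABLE | PRIVDEFENA: privileged code keeps the default memory map */
    }

With this layout, the kernel's globals (including the TCB pool) live in the kernel-only region, so the PendSV handler, which always runs privileged, can reach them, while the shared region carries only what user code legitimately needs.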

Unity hard real-time synchronization

I need to synchronize a Unity app to a third-party app where time synchronization is crucial (1-2 ms variance max).
The way this is done today (without Unity) is by getting priority from the OS scheduler for a designated app, which assures a constant delay.
A constant delay is good enough, as it can be accounted for in the data analysis, which is not done in real time. Today the constant delay is measured once, at the beginning.
Thanks in advance.
This kind of delay should be easy to achieve in a background thread.
Threads work well in Unity, despite common belief. The only thing you need to look out for is not to access Unity objects from the thread.
The easiest way to do this is to start a thread in MonoBehaviour.Start with the IsBackground property set to true (so you don't have to worry about it blocking your application's exit) and communicate to and from it with a message queue (for example, a List&lt;Action&gt; with locked access).
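In Unity itself this would be C#, but the shape of the pattern - a detached background worker draining a locked job queue - looks like this (a sketch in C++ for concreteness; the class and method names are mine):

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    class Worker {
    public:
        Worker() : th_([this] { run(); }) { th_.detach(); }  // roughly: IsBackground = true

        void post(std::function<void()> job) {               // called from the main thread
            {
                std::lock_guard<std::mutex> lk(m_);          // "locked access" to the queue
                q_.push(std::move(job));
            }
            cv_.notify_one();
        }

    private:
        void run() {
            for (;;) {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return !q_.empty(); });
                auto job = std::move(q_.front());
                q_.pop();
                lk.unlock();
                job();   // do the work here; never touch engine/scene objects from this thread
            }
        }
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<std::function<void()>> q_;
        std::thread th_;
    };

Results go back the same way: the worker pushes completion callbacks onto a second queue that the main loop drains once per frame.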

How can I force an application to run on one core, with no other applications running on that core, on Windows?

I think my questions are unusual. I want to work on real-time targeting in MATLAB Simulink, but I don't want to use xPC Target. I just want the program (Simulink) to never be interrupted while it is running, in order to have a real-time, interruption-free control system; that way I can use my control module without a target system.
First of all, please excuse my weak English. I have some questions:
1. Can we force a core to be used only by Simulink and nothing else?
2. How much time does an interrupt usually take (and at most)?
3. Is there any other approach we can use in Simulink?
Thank you.
a. In case you have a multicore platform: stay away from core 0, since Windows assigns certain tasks specifically to core 0. See the SetThreadAffinityMask function for information on how to run a thread on specific cores.
b. Possibly raise the thread/process priority. See the SetThreadPriority function and the SetPriorityClass function for details about setting priorities, and Scheduling Priorities for details about the priority ranges.
Priority class REALTIME_PRIORITY_CLASS with thread priority THREAD_PRIORITY_TIME_CRITICAL will run your thread at the utmost priority whenever it is ready to run. Be aware that such a priority setting will prevent any other process or thread from getting CPU time on that core while your thread is running.
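Both points together, as a minimal Win32 sketch (note that without the "Increase scheduling priority" privilege, e.g. when not running elevated, the REALTIME_PRIORITY_CLASS request is silently downgraded to HIGH_PRIORITY_CLASS):

    #include <windows.h>

    int main() {
        // a. Pin the current thread to core 1 (the mask has one bit per core),
        //    staying off core 0.
        SetThreadAffinityMask(GetCurrentThread(), 1 << 1);

        // b. Raise the priority: realtime class plus time-critical thread priority.
        SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

        // ... time-critical work here: while this thread is runnable, no normal
        // process or thread will get CPU time on that core.
        return 0;
    }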
Well, Simulink is essentially a single-threaded application. There are some ways in which you can use a second core when running in Rapid Accelerator mode (see documentation), but by and large, everything runs on one core. I'm guessing it may change in the future, as a lot of people would like to split the execution of a single large model across multiple cores, but right now it's not possible as far as I know.
Simulink, however, is not a real-time application, given that it runs on Windows or other non-real-time OSes. Why do you not want to use xPC Target? As you are working with a real-time target, that would be the best option. Other options would be Real-Time Windows Target, SIL, or even PIL if you have access to your real-time target hardware. Have a look at the example Software and Processor-in-the-Loop (SIL and PIL) Simulation. I think you can configure the code generation process to execute on one core only, but it is better to ask MathWorks to be sure.
Using ImageCFG you can preset the affinity of a program. It modifies the exe file to run on the desired core.
http://www2.robpol86.com/guides/ImageCFG/

Suggestions on program layout and storage for a Cocoa program that analyzes data

Short background: I am currently writing a program in Xcode for the Mac, which I plan to take parts of (conceptually, if not whole chunks of the code) over to the iPhone. It involves constantly receiving data through Bluetooth from an external sensor (regardless of user interaction, the data must be received). I've built a simple program on the Mac using IOBluetooth that pairs and starts receiving the data just fine, and I plan on using BTstack and a jailbroken iPhone in order to access the Bluetooth chip on the iPhone.
Before I get too far I want to conceptually lay this program out correctly, because I am used to procedural programming and Obj-C is a new beast for me. As I stated, I would like to be able to save as much of this code as possible when I move to the iPhone (I understand there are different classes for views etc, but I see -lots- of similarities).
1) With my program I will be constantly receiving data in the background (regardless of user actions - i.e., once the user starts the program and picks the BT device, the data will flow), and I need to store and analyze that data before it can be presented to the user. So (the question): how would one lay this out? I was thinking of putting all of my BT code in the app delegate, then having a view controller (on the Mac just one that handles the window, but on the iPhone a tab controller with multiple sub-view-controllers), and a model that analyzes and stores the data (also as log files, for future reference) and is accessed by the "controller", in this case the app delegate. Does this layout make sense? Is it kosher MVC/Cocoa to put all of the BT code and analysis in the app delegate, or should it (they) be in its own class(es) (given that the BT code on both the Mac and iPhone must constantly receive bursts of data)? How could it be improved?
2) A related question on the analysis side. I haven't found a single Cocoa example on the net that does analysis (I've found programs, but no explanation of the model they use). The basic data that is saved is very small, ~50 kB per hour. However, the results (including spectrum and waterfall plots) could be >2 MB per hour (and this is a program that one might run for many hours a day). Analyzing "on the go" and just throwing the results into a scrolling buffer would be very fast, I know, but I want my program to allow the user to look back at specific time segments in the past. The question I have is: should the model object analyze the data and store the results alongside the basic data, or should the model only store the basic data and return it to the controller, which would then analyze it to present it to the view (this would be very CPU-heavy if re-graphing even minutes of data, let alone hours)?
Any thoughts or suggestions would be greatly appreciated, as I feel laying the proper groundwork could save me untold hours of coding (and fixing/debugging) later.
As for your question 1:
I suggest you write a class/object that manages the Bluetooth data, separate from the app delegate. The app delegate is where the view objects meet the controller, and as such there will be lots of calls to AppKit (on OS X) and to UIKit (on iOS). The differences will be so great that #ifdef-ing between the OSes inside the same file won't make much sense for the app delegate.
Rather, keep an ivar holding the Bluetooth controller inside the app delegate. That way your code will be better structured and easier to reuse.
As for your question 2:
On an OS X machine, which usually comes with plenty of RAM these days, holding and caching all the resulting data in RAM would be just fine if it's 2 MB per hour.
On an iOS machine, RAM is a seriously endangered resource. If your program caches the calculated data in memory and consumes a lot of RAM, and the user sends it to the background, the OS might outright kill your program instead of suspending it, for example. Then you'll need to recalculate the data anyway, because your app gets relaunched.
The filesystem capacity is quite big even on an iOS machine. So one way out is to write your calculated data out to disk, and let the view controller reload the previously calculated data from there. That way, your program can access the pre-calculated data even after it's relaunched.
That caching code can even be shared between OS X and iOS, if you don't hard-code the cache directory into the program.
If your software on the iPhone is supposed to run continuously in the background processing data from BTstack, I recommend creating a LaunchDaemon for the data processing and providing a regular app for the configuration. (Although BTstack Mouse/Keyboard/GPS don't follow this advice, they will when I get around to updating them; Celeste, e.g., uses a daemon for the actual file transfers.)

Programming on a real-time system

My problem is understanding programming on a real-time system. I'm confused about this topic: what can I do and what can't I do in my source code? I know there are things to pay attention to while writing the code, but I don't know exactly what. Some examples: Is it possible to use dynamic memory allocation (new)? Is disk access possible under real-time constraints? What kinds of IPC (interprocess communication) can I use? Can I use standard interprocess locking? And what about file locking? I have searched the internet but didn't find what I wanted. Where can I get a better understanding of these problems? I hope someone can help me. Sorry for my English!
You can do whatever your language/compiler of choice supports.
What you should do really depends on what the target system is and what your program is (you could be writing an OS for all I know), etc.
A real-time system is all about determinism - fixed timing for each operation. Check these out for some guidelines:
http://cs.brown.edu/~ugur/8rulesSigRec.pdf
What defines a real-time/near-real time system?
On the software side (your focus):
a. Avoid buffering or caching in your code. Caches are meant to speed up subsequent processing after the first pass, but that results in indeterminate timing.
b. Minimize conditional branching, as it creates different paths with different timings; this is especially important for the time-sensitive components.
c. Avoid asynchronous or interrupt-based designs. Use polling whenever possible - that will increase the predictability of the timing (see the sketch after this list).
d. Use a real-time OS (like the LynxOS RTOS) whenever possible. It has high responsiveness and predictability in its processing. But if you look at its internals, you will see that it skips a lot of error processing, has a low threshold for the maximum number of processes it can spawn, etc. I.e., there is always a lot of spare CPU computing power left over, to ensure that the responsiveness is there. Of course, the moment you push those numbers to their limits (e.g., by spawning lots of processes), LynxOS no longer exhibits its real-time behavior.
It's mostly a lot of common sense applied when you do your coding.
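For point c., a common shape for such a polling loop is to schedule against absolute deadlines, so the period does not drift with the amount of work done in each cycle. A minimal sketch, assuming POSIX clock_nanosleep; the functions named inside the loop are placeholders:

    #include <time.h>

    static void add_ms(struct timespec *t, long ms) {
        t->tv_nsec += ms * 1000000L;
        while (t->tv_nsec >= 1000000000L) {
            t->tv_nsec -= 1000000000L;
            t->tv_sec++;
        }
    }

    void control_loop(void) {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            // poll_inputs(); compute(); write_outputs();   // placeholders
            add_ms(&next, 1);                               // fixed 1 ms period
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }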