My problem is understanding programming on a real-time system. I'm confused about this topic: what can I do and what can I not do in my source code? I know there are precautions to take while writing the code, but I don't know exactly what they are. Some examples: Is it possible to use dynamic memory allocation (new)? Is it possible to access the disk during real-time operation? What kinds of IPC (inter-process communication) can I use? Can I use standard inter-process locking? And what about file locking? I have searched on the internet but didn't find what I want. Where can I better understand these problems? I hope someone can help me. Sorry for my English!
You can do whatever your language/compiler of choice supports.
What you should do, though, really depends on the target system and on what your program is (you could be writing an OS for all I know), etc.
A real-time system is all about determinism: fixed, predictable timing for each operation. Check this out for some guidelines:
http://cs.brown.edu/~ugur/8rulesSigRec.pdf
What defines a real-time/near-real-time system?
On the software side (your focus):
a. Avoid buffering or caching in your code. Caching is meant to speed up subsequent accesses after the first one, but that makes the timing indeterminate.
b. Minimize conditional branching, since different paths result in different timings; this is especially important in the time-sensitive components.
c. Avoid asynchronous or interrupt-based designs. Use polling whenever possible, as it increases the predictability of the timing (a minimal sketch follows this list).
d. Use a real-time OS (like the LynxOS RTOS) whenever possible. It has high responsiveness and predictability in its processing. But if you look at its internals, you will see that it skips a lot of error processing, enforces a low limit on the maximum number of processes it can spawn, and so on. That is, it always keeps a lot of spare CPU power in reserve to ensure the responsiveness is there. Of course, the moment you push it to its limits (e.g., by spawning lots of processes), LynxOS no longer exhibits real-time behavior.
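Not from the answer above, just a minimal C++ sketch of rules (a)-(c) under assumed conditions: all memory is pre-allocated, the loop polls a hypothetical device_ready()/device_read() interface instead of relying on interrupts, and each cycle runs on a fixed period. The function names and the 10 ms period are made up for illustration.

```cpp
#include <array>
#include <chrono>
#include <cstddef>
#include <thread>

// Pre-allocated buffer: no new/malloc in the hot path (rule a, and the "new" question above).
static std::array<unsigned char, 4096> rx_buffer;

// Stand-ins for a real hardware interface; replace with your driver calls.
bool device_ready() { return true; }                                    // poll a status flag (rule c)
std::size_t device_read(unsigned char*, std::size_t max) { return max; }
void process(const unsigned char*, std::size_t) { /* deterministic, branch-light work (rule b) */ }

int main() {
    using clock = std::chrono::steady_clock;
    constexpr auto period = std::chrono::milliseconds(10);              // fixed cycle time
    auto next = clock::now();
    for (int cycle = 0; cycle < 1000; ++cycle) {                        // bounded loop for the sketch
        if (device_ready()) {                                           // polling, not interrupts
            std::size_t n = device_read(rx_buffer.data(), rx_buffer.size());
            process(rx_buffer.data(), n);                               // same work every cycle
        }
        next += period;
        std::this_thread::sleep_until(next);                            // keep the period constant
    }
    return 0;
}
```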
Mostly it is just a lot of common sense applied while you code.
I've been researching about memory segmentation, and it's been giving me lots of ideas for potential ways in which user code could benefit from swapping out segment registers. Specifically, I am interested in the x86-64 architecture because that's what I have.
Is there any way in which a user-mode program can create a new segment, for internal use?
To what extent can a program configure its own address space?
I imagine the GDT is way outside of a process' reach, but can a process modify the LDT?
Sorry if I sound naive, this is new stuff for me.
I also imagine that, if there even is a way to do this, it will almost certainly pass through OS specific functions. How would one do this in, say, Win32? I have found GetThreadSelectorEntry (https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-getthreadselectorentry), but I haven't been able to find the equivalent SetThreadSelectorEntry.
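For the read side you mention, here is a hedged sketch of calling GetThreadSelectorEntry; the selector value is purely illustrative, and on 64-bit Windows the call is really only meaningful for WOW64 (32-bit) threads, so in a native x64 process expect it to simply fail.

```cpp
// Hedged sketch: query the descriptor behind a selector with GetThreadSelectorEntry.
// The selector value below is illustrative; on a native x64 process the call typically fails.
#include <windows.h>
#include <cstdio>

int main() {
    LDT_ENTRY entry = {};
    DWORD selector = 0x53;   // hypothetical selector, not meaningful on its own
    if (GetThreadSelectorEntry(GetCurrentThread(), selector, &entry)) {
        DWORD base = entry.BaseLow
                   | (static_cast<DWORD>(entry.HighWord.Bytes.BaseMid) << 16)
                   | (static_cast<DWORD>(entry.HighWord.Bytes.BaseHi)  << 24);
        std::printf("base = 0x%08lx\n", static_cast<unsigned long>(base));
    } else {
        std::printf("GetThreadSelectorEntry failed: %lu\n",
                    static_cast<unsigned long>(GetLastError()));
    }
    return 0;
}
```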
I have a custom UIView for a GameOfLife exercise. My board size is large enough that processing it limits the frame rate (currently it starts dropping below the screen's 60 fps at about 100x100 cells in my simulator).
I am already keeping a set of only the cells that I need to calculate in each step.
But now I'd like to divide that set up into parts and thread each part. How can I do this?
Things that I need to know:
A starter is: How can I divide the Set into n parts?
How do I determine how many threads I benefit from?
Can I use DispatchQueue.global(qos: .userInitiated) or is there some pool feature I should be using?
You can increase the frame rate and buffer performance by utilizing the GPU via SceneKit and Metal on iOS, rather than relying solely on the CPU.
It's difficult to answer this rather broad question with the limited information provided, but I can push you in the right direction. It seems to me that you are using single-threaded rendering, forcing execution to be serialized on the CPU. The CPU and GPU can work together to run threads in parallel automatically while remaining thread-safe and synchronized, but the learning curve to reach that point is quite high. Don't be discouraged, though, because simple changes can greatly improve your code's performance.
"I'd like to divide that set up into parts and thread each part. How can I do this ?"
It would be advantageous to learn about Metal and the Metal Command Queue or MTLCommandQueue, utilizing the gpu and cpu together performance can be dramatically improved. There are many great tutorials on the internet that can explain it better and in more detail than I can here. In short, you can use command buffers to encode an entire scene or divide work out among threads automatically.
"how do I determine how many threads I benefit from ?"
Metal can 'automatically' control the number of threads needed to run a command, so allocating and deallocating threads 'manually' is not needed. Depending on the load and execution metal and the CPU work together to divide work out to as many threads as needed to complete the work on time.
"can I use DispatchQueue.global(qos: .userInitiated) ?"
Grand Central Dispatch is not typically used for graphics, in general GCD is used off of the main CPU thread to run concurrent code on multicore hardware unrelated to graphics. GCD doesn't utilize the GPU and Metal to run code unless it explicitly calls that code. Again There are plenty of tutorials available for GCD, but these tutorials may not improve the performance of your application as dramatically as metal command queues would. Without the code in question it would be difficult to analyze performance issues or recommend an effective solution.
There are plenty of Game of Life examples on GitHub written in Swift. I'd recommend you review these and see how others are doing it; learn by example, as they say.
Watch the WWDC "Metal for Game Developers" video from Apple; it will give you a general insight into how Metal performs, but without a stable underlying understanding of it you may not be able to implement its features right away.
There is a good Ray Wenderlich Metal tutorial that can get you started using Metal and SceneKit effectively.
I know deadlocks were a hot research topic in the past. But even though I have studied a lot about modern operating systems, I don't see any major problem with deadlocks now. I know that some (most) of the resources on which deadlocks can occur are strictly managed by the operating system itself, which seems to prevent deadlocks somehow, and I have really never seen a case involving a deadlock. I know that popular systems with different design principles handle some resources differently from others, but they all seem able to keep the system deadlock-free.
Try using two mutexes in your program and, in the first thread, lock in sequence: mutex1, sleep(500 ms), mutex2; in the second thread: mutex2, sleep(1000 ms), mutex1. (A minimal sketch follows.)
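A minimal C++ sketch of that recipe, using std::mutex and std::thread; it deadlocks by design, which is the point:

```cpp
// Two-mutex deadlock: thread 1 locks m1, sleeps, then tries m2;
// thread 2 locks m2, sleeps, then tries m1. Both wait on each other forever.
#include <chrono>
#include <mutex>
#include <thread>

int main() {
    std::mutex m1, m2;

    std::thread t1([&] {
        std::lock_guard<std::mutex> a(m1);
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
        std::lock_guard<std::mutex> b(m2);   // blocks forever once t2 holds m2
    });

    std::thread t2([&] {
        std::lock_guard<std::mutex> a(m2);
        std::this_thread::sleep_for(std::chrono::milliseconds(1000));
        std::lock_guard<std::mutex> b(m1);   // blocks forever once t1 holds m1
    });

    t1.join();   // never returns: the program is deadlocked
    t2.join();
    return 0;
}
```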
At the system level: in Windows (including 8.1), if your application uses SendMessage with HWND_BROADCAST and one application is hung, your application will be hung as well. The same applies to some cases of DDE communication (including ShellExecute for some programs): if one application is not responsive, your application can end up hung too.
But you can use SendMessageTimeout...
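If broadcasting is unavoidable, here is a hedged sketch of the non-blocking variant; WM_SETTINGCHANGE and the 2000 ms timeout are just illustrative values:

```cpp
#include <windows.h>

// Broadcast without blocking forever on hung windows: SMTO_ABORTIFHUNG makes the
// call return instead of waiting on applications that are not responding.
void broadcast_without_hanging() {
    DWORD_PTR result = 0;
    SendMessageTimeoutW(HWND_BROADCAST,
                        WM_SETTINGCHANGE,            // illustrative message
                        0, 0,
                        SMTO_ABORTIFHUNG | SMTO_NORMAL,
                        2000,                        // per-window timeout, in milliseconds
                        &result);
}
```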
Deadlock will always be possible wherever processes or threads are synchronized, and synchronization of processes and threads is a "must-have" element of applications.
AND... SYSTEM-WIDE deadlock (Windows):
Save all your documents before this action.
Create HWND h1 with parent=0 or parent=GetDesktopWindow and styles 0x96cf0000
Create HWND h2 with parent=h1 and styles 0x96cf0000
Create HWND h3 with parent=h2 and styles 0x56cf0000 (this one must be a child window).
Use ::SetParent(h1, h3);
Then click any of these windows.
The system will try to reorder the windows in a cyclic (triangular) order. The application hangs, and if any other application then tries to use SetWindowPos, it will never return from that function. Task Manager won't help, Ctrl+Alt+Del also stops working, and CPU usage sits at 100%. Only a hard reset will help you.
There is a possibility to prevent this, but the situation must be detected as soon as possible.
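Purely as an illustration of the steps above (do not run this on a machine with unsaved work): a hedged Win32 sketch, where the use of the STATIC window class, the positions, and the sizes are arbitrary choices of mine.

```cpp
// WARNING: this reproduces the system-wide hang described above. Do not run it casually.
// "STATIC" and the window geometries are arbitrary; only the styles and SetParent matter.
#include <windows.h>

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int) {
    HWND h1 = CreateWindowExA(0, "STATIC", "h1", 0x96cf0000,
                              100, 100, 300, 200, nullptr, nullptr, hInst, nullptr);
    HWND h2 = CreateWindowExA(0, "STATIC", "h2", 0x96cf0000,
                              150, 150, 300, 200, h1, nullptr, hInst, nullptr);
    HWND h3 = CreateWindowExA(0, "STATIC", "h3", 0x56cf0000,
                              10, 10, 100, 100, h2, nullptr, hInst, nullptr);
    ::SetParent(h1, h3);   // creates the cyclic (triangular) parent chain described above

    MSG msg;
    while (GetMessageA(&msg, nullptr, 0, 0) > 0) {   // clicking any of the windows triggers it
        TranslateMessage(&msg);
        DispatchMessageA(&msg);
    }
    return 0;
}
```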
Operating system deadlocks still happen. When a system has limited, contended resources that it can't reclaim, a deadlock is still possible.
In Linux, look at kernel stalls; these happen when I/O doesn't release in a timely manner. Kernel stalls are particularly interesting between VMware and guest operating systems.
As for external instigators, deadlocks happen when SAN systems and networks have issues.
New-release deadlocks happen fairly often while a kernel is maturing; not for each user, but across the community as a whole.
Ever get a blue screen or an instant reboot? Some of those are caused by lost resources.
Kernels are fairly mature and have become good at reclaiming resources, but they aren't perfect.
Most modern resource handlers tend to present themselves as services now instead of lockable objects. Most resource sharing within the operating system relies on separate channels, alleviating much of the overlap. There is a higher reliance on queues and toggles instead of direct locking contention on shared buffers. These are general trends in the parts and pieces of an OS that leave less opportunity for deadlocks, but there is no way to guarantee a deadlock-free system.
I am studying operating systems (Silberschatz, Galvin et al.). My programming experience is limited to occasional coding of exercise problems given in a programming or algorithms text. In other words, I do not have proper application-programming or systems-programming experience. I think my question below is a result of that lack of experience, and hence of context.
I am specifically studying IPC mechanisms. While reading about shared memory (SM), I couldn't imagine a real-life scenario where processes communicate using SM. An inspection of the processes attached to the same SM segments on my Linux (Ubuntu) machine (using 'ipcs' in a small shell script) is uploaded here
Most of the sharing by applications seems to be with the X daemon. From what I know, X is the process responsible for giving me my GUI. I inferred that these applications (mostly applets which stay on my taskbar) share data with X about what needs to change in their appearance and displayed values. Is this a reasonable inference?
If so,
my question is: what is the difference between my applications communicating with X via shared memory segments versus my applications invoking certain APIs provided by X to tell X that their appearance needs to be refreshed? By difference I mean: why isn't the latter approach used?
Isn't that how user processes and the kernel communicate? An application invokes a system call when it wants to, say, read a file, communicating the name of the file and other related info via the arguments of the system call.
Also, could you provide me with examples of routinely used applications which make use of shared memory and of message passing for communication?
EDIT
I have made the question clearer. The edited part is formatted in bold.
First, since the X server is just another user-space process, communication with it cannot use the operating system's system-call mechanism. Even when the communication is done through an API, if it is between user-space processes there will be some inter-process communication (IPC) mechanism behind that API. That might be shared memory, sockets, or something else.
Typically, shared memory is used when a lot of data is involved. Maybe there is a lot of data that multiple processes need to access, and it would be a waste of memory for each process to have its own copy. Or a lot of data needs to be communicated between processes, which would be slower if it were streamed a byte at a time through another IPC mechanism.
For graphics, it is not uncommon for a program to keep a buffer containing a pixel map of an image, a window, or even the whole screen, which then needs to be copied to the screen regularly, sometimes at a very high rate: 30 times a second or more. I suspect this is why X uses shared memory when possible.
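This is not how X itself does it, but as a generic illustration, here is a minimal POSIX shared-memory sketch in C++; the object name and size are made up. Two processes that shm_open and mmap the same name see the same bytes.

```cpp
// Hedged sketch: create (or open) a named POSIX shared-memory object and map it.
// A second process doing the same shm_open/mmap sees the same memory.
// Compile with -lrt on older glibc. The name and size are illustrative.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    const char* name = "/demo_framebuffer";       // hypothetical object name
    const size_t size = 1024 * 1024;              // e.g. a small pixel buffer

    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, size) != 0) { perror("ftruncate"); return 1; }

    void* mem = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    std::memset(mem, 0, size);                    // writes here are visible to the other process

    munmap(mem, size);
    close(fd);
    // shm_unlink(name) would remove the object once no process needs it any more.
    return 0;
}
```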
The difference is that with an API you, as a developer, might not have access to what is happening inside those functions, so memory would not necessarily be shared.
Shared memory is basically a specific region of memory to which both apps can write and from which both can read. This of course requires that access to that memory is synchronized so things don't get corrupted.
Using somebody's API does not mean you are sharing memory with them; that process will just do what you asked and perhaps return the result of the operation to you, but that doesn't necessarily go via shared memory. Although it could; it depends, as always.
The preference for one over the other depends on the specifications of the particular application, what it is doing, and what it needs to share. I can imagine a big dataset of some kind being shared via shared memory, while passing a file name to another app might only need an API call. It is largely dependent on the requirements, I'd say.
How do you decide whether a device needs an operating system (an embedded OS) or not?
This is a general interview question.
Any thoughts?
Thank you all.
In my opinion, if more than one application needs to run on the device, it should have an operating system. Otherwise it would be a waste.
In my experience, an operating system is essentially used
to manage resources on the device, like scheduling tasks, allocating resources, etc., and
to abstract away some of the low-level hardware interfaces, like thread handling, interrupt handling, etc.
If either of these functions is needed, it might be a good idea to use an operating system. Of course, in all cases some form of the above two functions will be needed. But on a simple device it might be easier to just code up the specific functions instead of trying to port an OS to the device. In other cases, where the device is a lot more complex, it might be a better investment of time to use an OS rather than having to code it all up.