Is it necessary to bind a real-time thread to an LWP? - operating-system

We know that an operating system maps user-level threads to the kernel
using the many-to-many model and that the mapping is done through
the use of LWPs. Furthermore, the system allows program developers to
create real-time threads. So, is it necessary to bind a real-time thread to an
LWP?

This—
We know that an operating system maps user-level threads to the kernel using the many-to-many model and that the mapping is done through the use of LWPs.
—is COMPLETELY AND TOTALLY wrong. The entire concept of mapping user threads to kernel threads is the invention of bad operating-systems textbooks and does not exist in the real world (at least in the mainstream).
There is no such thing as a user thread. What horrible textbooks call a "user thread" is a thread simulated by a library. So there is no real point in covering user threads in operating systems at all because they do not exist in the operating system.
Thus, there are threads and there are simulated threads. Outside of horrible textbooks, there is no one-to-one model, many-to-many model, or anything like that.

Related

How do the operating system code and user application code run on the same processor

We all know that the operating system is responsible for handling the resources needed by user applications. The OS is also a piece of code that runs, so how does it manage other user programs?
Does the OS run on a dedicated processor and monitor the user programs on some other processor?
How does the OS actually handle user applications?
It depends upon the structure of the operating system. In any modern operating system the kernel is invoked through exceptions or interrupts. The operating system "monitors" processes during interrupts. An operating system schedules timer interrupts. When the timer goes off, the interrupt handler determines whether it needs to switch to a different process.
Another OS management path is through exceptions. An application invokes the operating system through exceptions. An exception handler can also cause the operating system to switch to another process. If a process invokes a read and wait system service, that exception handler will certainly switch to a new process.
In ye olde days, it was common for multi-processors to have one processor that was the dedicated master and was the only processor to handle certain tasks. Now, all normal operating systems use symmetric multi-processing where any processor can handle any task.
An entire book is needed to answer your too broad question.
Read Operating System: Three Easy Pieces (a freely downloadable book).
Does the OS run on a dedicated processor and monitor the user program on some other processor?
In general, no. The same processor (or core) is either in user mode (for user programs; read about user space, process isolation, and protection rings) or in supervisor mode (for the operating system kernel).
How does the OS actually handle user applications?
Often by providing system calls which are done, in some controlled way, from applications.
Some academic OSes, e.g. Singularity, have been designed with other principles in mind (formal proof techniques for isolation).
Read also about micro-kernels, unikernels, etc.

Definition of "Application" from Microsoft and uC/OS

uC/OS-III User's Manual says:
The design process of a real-time application involves splitting the work into tasks (also called threads), and each task responsible for a portion of the job.
From this quote, we can infer that an application consists of tasks (threads).
Also, in Processes and Threads from Microsoft:
An application consists of one or more processes
Why the different definitions?
Is this because uC/OS-III is for embedded environments and Microsoft is for the PC environment?
In a PC environment, a process is basically the same thing as a program. A process has an address space - a chunk of virtual memory that can only be accessed by that process. It consists of one or several threads, executing in the same address space, sharing the same memory. Different threads can run on different CPU cores, executing simultaneously.
On embedded RTOS systems, we don't really have all the dead weight of a hosted-system process. Traditionally, an RTOS therefore speaks of tasks, which are essentially the same thing as threads. Except most microcontrollers are still single-core, so multi-tasking is simulated through task switches, everything running on one core. Older PCs worked in the same manner.
Traditional microcontrollers don't have virtual memory, but address physical memory directly. Therefore anything running on the microcontroller can, by default, access anything.
Nowadays, upper-end embedded systems and hosted systems are smeared together, as are the concepts. High-end microcontrollers have memory management units (MMUs) capable of setting up virtual address spaces. PC programmers trickle down into embedded systems and start looking for threads. And so on. The various concepts are blurring.
One (of several) dictionary definitions of "application" is:
a program or piece of software designed to fulfil a particular purpose
In that sense both the Microsoft and uC/OS definitions are valid; it is simply that the structure and execution environment of an application differ between the specific environments. What they describe is what an application is composed of in the context of the specific platforms and execution environments.
I would suggest that "application" has no particular technical meaning; it is simply "the purpose to which a system or software is put" - it is just English, not a specific technical concept.
The boundary of an "application" is context dependent, and a desktop software application is a very different context than an embedded microcontroller application. Equally, you could draw your application boundary to encompass entire systems comprising many computers or processors running a variety of software and other equipment.
It means whatever the writer/speaker intends and can normally be inferred by the context. Don't waste your time looking for the one true definition or be confused by different usage.

Is Virtual Memory in some way related to Virtualization Technology?

I think it is a bit of a vague question. But I was trying to get a clear understanding of how a hypervisor interacts with operating systems under the hood, and what makes the two so different. Let me just walk you through my thought process.
Why do we need a virtualization manager a.k.a. a hypervisor, if we already have an operating system to manage resources which are shared?
One answer that I got was: suppose the system crashes; if we have no virtualization manager, then it's a total loss. So virtualization keeps other systems unaffected by providing isolation.
Okay, then why do we need an operating system? Well, operating systems and hypervisors have different tasks to handle: the hypervisor handles how to allocate the resources (compute, networking, etc.), while the OS handles process management, the file system, and memory (hmm... we also have virtual memory, right?).
I don't think I have asked the question very precisely, but I am confused, so maybe I could get a little help to clear up my understanding.
"Virtual" roughly means "something that is not what it seems". It is a common task in computing to substitute one thing with another.
A "virtual resource" is a common approach for that. It also means that there is an entity in a system that transparently substitutes one portion of resource with another. Memory is one of the most important resources in computing systems, therefore "Virtual Memory" is one of the first terms that historically was introduced.
However, there are other resources that are worth virtualizing. One can virtualize registers, or, more specifically, their values. Input/output devices, time, number of processors, network connections — all these resources can be and are virtualized these days (see: Intel VT-d, Virtual Time papers, Multicore simulators, Virtual switches and network adapters as respective examples). A combination of such things is what roughly constitutes a "Virtualization Technology". It is not a well-defined term, unless you talk about Intel® Virtualization Technology, which is one-vendor trade name.
In this sense, a hypervisor is such an entity that substitutes/manages chosen resources transparently to other controlled entities, which are then said to reside inside "containers", "jails", "virtual machines" — different names exist.
Both operating systems and hypervisors have different tasks to handle
In fact, they don't.
An operating system is just a hypervisor for regular user applications, as it manages resources behind their back and transparently for them. The resources are: virtual memory, because an OS makes it seem that every application has a huge flat memory space for its own needs; virtual time, because each application does not manage context switching points; virtual I/O, because each application uses system calls to access devices instead of directly writing into their registers.
A hypervisor is a fancy way to say a "second-level operating system", as it virtualizes resources visible to operating systems. The resources are essentially the same: memory, time, I/O; a new addition is system registers.
It can go on and on, i.e. you can have hypervisors of higher levels that virtualize certain resources for entities of lower level. For Intel systems, it roughly corresponds to the stack SMM -> VMM -> OS -> user application, where SMM (System Management Mode) is the outermost hypervisor and the user application is the innermost entity (the one that actually does the useful job of running the web browser and web server you are using right now).
Why do we need a virtualization manager aka hypervisor, if we already have an operating system to manage how the resources are shared?
We don't need it if the chosen computer architecture supports more than one level of indirection for resource management (e.g. nested virtualization). Thus, it depends on the chosen architecture. On certain IBM systems (System/360, 1960s-1970s), hypervisors were invented and used much earlier than operating systems in the modern sense had been introduced. The more common IBM Personal Computer architecture based on Intel x86 CPUs (around 1975) had deficiencies that did not allow the required level of isolation between multiple OSes to be achieved without introducing a second layer of abstraction (hypervisors) into the architecture (which happened around 2005).

Why is user mode thread handling not acceptable on a multi-core computer?

It is my understanding that threads in user mode are often cheaper to execute and don't require system calls. Why would they not be acceptable on a multi-core machine?
Threads don't require system calls, but that doesn't mean that they can't make system calls.
There are two fundamental problems with implementing threads in a user mode library:
what happens when a system call blocks
If a user-mode thread issues a system call that blocks (e.g. open or read), the process is blocked until that operation completes. This means that when a thread blocks, all threads (within that process) stop executing. Since the threads were implemented in user mode, the operating system has no knowledge of them, and cannot know that other threads (in that process) might still be runnable.
exploiting multi-processors
If the CPU has multiple execution cores, the operating system can schedule processes on each to run in parallel. But if the operating system is not aware that a process is comprised of multiple threads, those threads cannot execute in parallel on the available cores.
Both of these problems are solved if threads are implemented by the operating system rather than by a user-mode thread library.
Taken from here.

Basics of Real Time OS

I am trying to learn RTOS concepts from scratch and, for this, I use freeRTOS.org as a reference. I found this site to be the best resource for learning an RTOS. However, I have some doubts and was trying to find answers but have not been able to get exact ones.
1) How can I find out whether a device has real-time capability, e.g. why does one controller have it (TI Hercules) while another doesn't (MSP430)?
2) Does that depend upon the architecture of the core (the ARM Cortex-R CPU in the TI Hercules TMS570)?
I know that these questions may seem naive, but I don't know how else to get them answered.
Thanks in advance
EDIT:
One more query: what is meant by the "OS" in RTOS? Does it mean the same kind of OS as the others, or does it just contain the source code files for the APIs?
Figuring out whether a device has "real-time" capability is somewhat arbitrary and depends on your project's timing requirements. If your timing requirements are very tight, you'll want to use a faster microcontroller/processor.
Using an RTOS (e.g. FreeRTOS, eCOS, or uCOS-X) can help ensure that a given task will execute at a predictable time. The FreeRTOS website provides a good discussion of what operating systems are and what it means for an operating system to claim Real-Time capabilities. http://www.freertos.org/about-RTOS.html
You can also see from the ports pages of uC/OS-X and FreeRTOS that they can run on a variety of target microcontrollers/microprocessors.
Real-time capability is a matter of degree. A 32-bit DSP running at 1 GHz has more real-time capability than an 8-bit microcontroller running at 16 MHz. The more powerful microcontroller could be paired with faster memories and ports and could manage applications requiring large amounts of data and computations (such as real-time video image processing). The less powerful microcontroller would be limited to less demanding applications requiring a relatively small amount of data and computations (perhaps real-time motor control).
The MSP430 has real-time capabilities, and it's used in a variety of real-time applications. Many RTOSes have been ported to the MSP430, including FreeRTOS.
When selecting a microcontroller for a real-time application you need to consider the data bandwidth and computational requirements of the application. How much data needs to be processed in what amount of time? Also consider the range and precision of the data (integer or floating point). Then figure out which microcontroller can support those requirements.
While the Cortex-R is optimised for hard real-time, that does not imply that other processors are not suited to real-time applications, or even better suited to a specific application. What you need to consider is whether a particular combination of RTOS and processor will meet the real-time constraints of your application; and even then the most critical factor is your software design rather than the platform.
The main goal you want to obtain from an RTOS is determinism; most other features are already available in most non-RTOS operating systems.
The -OS part in RTOS means Operating System, simply put, and like all other operating systems, an RTOS provides the required infrastructure for managing processor resources, so you work at a higher level when designing your application. For accessing those functionalities the OS provides an API. Using that API you can use semaphores, message queues, mutexes, etc.
An RTOS has one defining requirement: it must be pre-emptive. This means that it must support task priorities, so that when a higher-priority task becomes ready to run (one of the possible task states), the scheduler switches the current context to that task.
This operation has two implications. One is the requirement for a precise, dedicated timer, the tick timer. The other is that context switching carries considerable memory-operation overhead: the current CPU state (or CPUs' state, in the case of multi-core SoCs) must be copied into the pre-empted task's context information, and the new ready-to-run task's context must be restored to the CPU.
ARM processors already provide support for the System Timer, which is intended for a dedicated use as an OS tick timer. Not so long ago, the tick timer was required to be implemented with a regular, non-dedicated timer.
One optimization in cores designed for real-time use is the ability to save and restore the CPU context with minimal code, resulting in much less execution time than on regular processors.
It is possible to implement an RTOS on nearly any processor, and there are some implementations targeted at resource-constrained cores. You mainly need a timer with interrupt capability and some RAM. If the CPU is very fast you can run the OS tick at high rates (sub-millisecond in some real-time applications with DSPs), or at a lower rate, like just 10~100 ticks per second, for applications with low timing requirements on low-end CPUs.
Some theoretical knowledge would be quite useful too, e.g. figuring out whether a given task set is schedulable under a given scheduling approach (sometimes it is not), the differences between static-priority and dynamic-priority scheduling, the priority-inversion problem, etc.