I am a new student taking an OS course. I already know that the OS serves as an intermediary for communication between applications and hardware in a modern computer. But sometimes it seems it would be more time-efficient if applications could control hardware directly. May I ask whether that is possible?
Yes, it is possible, but the result would be a single-application computer: a machine that can only run that one particular application.
Applications handling hardware directly can be faster, since the overhead of the OS's management is removed.
You can take the example of DMA - Direct Memory Access. This feature is useful at any time that the CPU cannot keep up with the rate of data transfer, or when the CPU needs to perform work while waiting for a relatively slow I/O data transfer.
But you should keep in mind the importance of the operating system in handling other hardware: not everything can be managed that trivially, and many devices need processing and decision-making that the OS provides.
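To make the idea of "controlling hardware directly" concrete, here is a minimal bare-metal style sketch in C. The register address and bit layout are entirely hypothetical; on real hardware you would take them from the device's datasheet, and a general-purpose OS would normally forbid this kind of access from an ordinary application.

```c
#include <stdint.h>

/* Hypothetical memory-mapped UART registers; the address 0x4000C000 and the
 * bit meanings are made up for illustration and come from no real chip. */
#define UART_BASE     0x4000C000u
#define UART_DATA     (*(volatile uint32_t *)(UART_BASE + 0x00))
#define UART_STATUS   (*(volatile uint32_t *)(UART_BASE + 0x04))
#define UART_TX_READY (1u << 0)

/* Busy-wait until the transmitter is free, then write one byte straight to
 * the device register: no OS call, no driver, no scheduling in between. */
static void uart_putc(char c)
{
    while ((UART_STATUS & UART_TX_READY) == 0) {
        /* spin: exactly the kind of CPU-wasting wait that DMA or an
         * interrupt-driven OS driver would avoid */
    }
    UART_DATA = (uint32_t)c;
}
```

The spin loop also illustrates the trade-off mentioned above: without an OS (or DMA), nothing else can use the CPU while the application waits on the device.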
Related
Since we know that the operating system works in protected mode and the BIOS works in real mode (16-bit): when an interrupt is called from either the operating system or an application program, does the CPU switch modes back and forth every time?
In general, hardware is capable of doing lots of things at once (playing sounds while generating 3D graphics while sending data to the network while transferring data to multiple disks while waiting for user input, all while the CPUs are busy doing actual processing); and BIOS functions are not capable of allowing more than one thing to happen at a time (e.g. they will waste 100% of CPU time waiting for a hard disk controller to transfer data while the CPU does nothing and while nothing else can use any other BIOS service for anything).
For this reason alone, BIOS services are not usable and not used by any modern OS (except briefly during boot).
Of course it's not the only reason - no IO prioritisation, no support for hot-plug of any kind, no support for power management, no support for system management (e.g. https://en.wikipedia.org/wiki/S.M.A.R.T. ), no support for GPU, no support for any sound card (other than "PC speaker beep"), no support for networking (excluding PXE), no support for IO APICs, etc. Ironically, out of all of the problems, "BIOS services are designed for real mode" is the least important problem because it's the only problem that an OS can work around.
Instead, each OS has native drivers that don't have any of these limitations.
Note: this is partly why it's relatively easy for modern operating systems to support UEFI (where none of the BIOS services exist at all) - for most, they replace the boot loader/boot code and don't need to change much else.
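As a rough illustration of the difference, here is a sketch of the kind of port I/O a native driver might use to poll the legacy primary ATA status register directly, instead of dropping to real mode and calling BIOS INT 13h. The port number and status bits are the standard legacy ATA ones; error handling, timeouts, and everything else a real driver needs are omitted.

```c
#include <stdint.h>

/* Read one byte from an x86 I/O port (GCC/Clang inline assembly). */
static inline uint8_t inb(uint16_t port)
{
    uint8_t value;
    __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}

#define ATA_PRIMARY_STATUS 0x1F7  /* legacy primary ATA bus status port */
#define ATA_STATUS_BSY     0x80   /* controller busy */
#define ATA_STATUS_DRQ     0x08   /* data ready to transfer */

/* Wait until the controller is no longer busy and has data ready.
 * A real driver would sleep on an interrupt instead of spinning, which is
 * precisely what lets the OS do other work in the meantime. */
static void ata_wait_ready(void)
{
    uint8_t status;
    do {
        status = inb(ATA_PRIMARY_STATUS);
    } while ((status & ATA_STATUS_BSY) || !(status & ATA_STATUS_DRQ));
}
```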
Peripheral devices require drivers to work in a computer system (i.e. with an operating system).
Does a CPU need a driver to work?
Same question for main memory?
The answer is no.
The reason is that the motherboard comes with an (upgradable) BIOS, which takes care of making sure the CPU's features function correctly (obviously, an AMD processor won't work on an Intel motherboard). You can upgrade the BIOS, but that should be avoided unless there is a good reason to do so.
Same goes for memory, it does not require a driver either.
Just so you know, if you have ever tried overclocking, you may have noticed that you can alter the way the RAM functions (ganged/unganged modes and so on). My point is that there is already an interface, established in code, that allows you to make changes in real time. Isn't that the very purpose of drivers: to be able to use a peripheral with the expected outcome?
On the other hand, peripheral devices are just extensions that the motherboard does not know how to handle, hence the need for a set of instructions, i.e. drivers.
In a modern system both memory and the CPU require kernel mode code — as do devices — to function.
Memory requires management of virtual memory tables. The CPU requires maintenance of process control structures.
In the business, such code is not called a "driver".
Generally, one thinks of a device driver as being kernel mode code that responds to devices through the interrupt vector.
That said, on some systems there are "printer drivers" that do not fit that definition of driver.
In short, do memory and CPU have something called a "driver"? No.
Do they have something analogous to a driver? Yes.
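For a feel of what that "analogous to a driver" code manages, here is a simplified sketch of the kinds of per-process structures a kernel keeps for the CPU and for memory. The field names and layout are invented for illustration; real kernels (Linux's task_struct, Windows' EPROCESS) are far larger.

```c
#include <stdint.h>

/* Simplified page-table entry flags (modelled loosely on x86, illustrative only). */
#define PTE_PRESENT  (1u << 0)
#define PTE_WRITABLE (1u << 1)
#define PTE_USER     (1u << 2)

typedef uint64_t pte_t;       /* one virtual-memory mapping entry */

/* Saved CPU register state for a pre-empted task (fields are illustrative). */
struct cpu_context {
    uint64_t pc;              /* program counter to resume at */
    uint64_t sp;              /* stack pointer */
    uint64_t gpr[16];         /* general-purpose registers */
};

/* A toy process control block: what the kernel's "CPU and memory management"
 * code maintains for every process. */
struct process {
    int                pid;
    int                state;       /* runnable, blocked, ... */
    struct cpu_context ctx;         /* restored on the next context switch */
    pte_t             *page_table;  /* root of this process's address space */
};
```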
I am trying to learn about RTOSes from scratch, and for this I use freeRTOS.org as a reference. I find this site to be the best resource for learning about RTOSes. However, I have some doubts that I have not been able to resolve.
1) How can I find out whether a device has real-time capability, e.g. why does one controller have it (TI Hercules) while another doesn't (MSP430)?
2) Does that depend upon the architecture of the core (the ARM Cortex-R CPU in the TI Hercules TMS570)?
I know these questions may seem naive, but I don't know how else to get answers to them.
Thanks in advance
EDIT:
One more query: what is meant by the "OS" in RTOS? Does it mean an OS in the same sense as other operating systems, or does it just contain the source code files for the APIs?
Figuring out whether a device has "real-time" capability is somewhat arbitrary and depends on your project's timing requirements. If your timing requirements are very tight, you'll want to use a faster microcontroller/processor.
Using an RTOS (e.g. FreeRTOS, eCOS, or uCOS-X) can help ensure that a given task will execute at a predictable time. The FreeRTOS website provides a good discussion of what operating systems are and what it means for an operating system to claim Real-Time capabilities. http://www.freertos.org/about-RTOS.html
You can also see from the ports pages of uC/OS-X and FreeRTOS that they can run on a variety target microcontrollers / microprocessors.
Real-time capability is a matter of degree. A 32-bit DSP running at 1 GHz has more real-time capability than an 8-bit microcontroller running at 16 MHz. The more powerful microcontroller could be paired with faster memories and ports and could manage applications requiring large amounts of data and computations (such as real-time video image processing). The less powerful microcontroller would be limited to less demanding applications requiring a relatively small amount of data and computations (perhaps real-time motor control).
The MSP430 has real-time capabilities, and it's used in a variety of real-time applications. There are many RTOSes that have been ported to the MSP430, including FreeRTOS.
When selecting a microcontroller for a real-time application you need to consider the data bandwidth and computational requirements of the application. How much data needs to be processed in what amount of time? Also consider the range and precision of the data (integer or floating point). Then figure out which microcontroller can support those requirements.
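A quick back-of-the-envelope example of that sizing exercise (all numbers below are hypothetical): filtering a 16 kHz sensor stream with a 64-tap FIR filter needs 16,000 x 64 = roughly one million multiply-accumulates per second, plus headroom for the rest of the application.

```c
#include <stdio.h>

/* Hypothetical sizing example: does a given MCU have enough headroom for a
 * simple real-time DSP task? All figures are made-up placeholders. */
int main(void)
{
    double sample_rate_hz = 16000.0;  /* input data rate */
    double taps           = 64.0;     /* FIR filter length */
    double cycles_per_mac = 2.0;      /* assumed cost per multiply-accumulate */
    double cpu_clock_hz   = 16e6;     /* e.g. a small 16 MHz microcontroller */

    double cycles_needed = sample_rate_hz * taps * cycles_per_mac;
    double cpu_load      = cycles_needed / cpu_clock_hz;

    printf("Filter needs %.1f%% of the CPU, leaving %.1f%% for everything else\n",
           cpu_load * 100.0, (1.0 - cpu_load) * 100.0);
    return 0;
}
```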
While the Cortex-R is optimised for hard real-time, that does not imply that other processors are not suited to real-time applications, or even better suited to a specific application. What you need to consider is whether a particular combination of RTOS and processor will meet the real-time constraints of your application; and even then the most critical factor is your software design rather than the platform.
The main goal you want to obtain from an RTOS is determinism, most other features are already available in most other non-RTOS operating systems.
The -OS part in RTOS means Operating System, simply put, and as with all other operating systems, an RTOS provides the infrastructure for managing processor resources so that you can work at a higher level when designing your application. To access those facilities the OS provides an API, through which you can use semaphores, message queues, mutexes, etc.
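As a concrete taste of that API, here is a minimal FreeRTOS-style sketch: one task produces readings into a queue and another consumes them. The port layer, FreeRTOSConfig.h, and stack sizes are assumed to be set up elsewhere, and the numbers are placeholders.

```c
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

static QueueHandle_t xReadings;

static void vProducerTask(void *pvParameters)
{
    (void)pvParameters;
    int reading = 0;
    for (;;) {
        reading++;                                  /* pretend sensor sample */
        xQueueSend(xReadings, &reading, portMAX_DELAY);
        vTaskDelay(pdMS_TO_TICKS(10));              /* run every 10 ms */
    }
}

static void vConsumerTask(void *pvParameters)
{
    (void)pvParameters;
    int reading;
    for (;;) {
        if (xQueueReceive(xReadings, &reading, portMAX_DELAY) == pdTRUE) {
            /* process the reading */
        }
    }
}

int main(void)
{
    xReadings = xQueueCreate(8, sizeof(int));
    xTaskCreate(vProducerTask, "prod", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
    xTaskCreate(vConsumerTask, "cons", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    vTaskStartScheduler();                          /* never returns if all is well */
    for (;;) { }
}
```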
An RTOS has one defining requirement: it must be pre-emptive. This means it must support task priorities, so that when a higher-priority task becomes ready to run (one of the possible task states), the scheduler switches the current context to that task.
This has two implications. One is the need for a precise, dedicated timer (the tick timer); the other is that context switching carries a considerable memory-operation overhead: the current CPU state (or CPUs' state, in the case of multi-core SoCs) must be copied into the pre-empted task's context information, and the context of the newly ready task must be restored into the CPU.
ARM processors already provide a system timer intended for dedicated use as an OS tick timer. Not so long ago, the tick timer had to be implemented with a regular, non-dedicated timer.
One optimisation in cores designed for real-time use is the ability to save/restore the CPU context with minimal code, so a context switch takes much less execution time than on regular processors.
It is possible to implement an RTOS on nearly any processor, and there are implementations targeted at resource-constrained cores. You mainly need a timer with interrupt capability and some RAM. If the CPU is fast you can run the OS tick at high rates (sub-millisecond in some real-time applications with DSPs), or at a lower rate, like just 10~100 ticks per second, for applications with low timing requirements on low-end CPUs.
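A bare-bones sketch of that "timer plus RAM" minimum, with everything hardware-specific and scheduler-specific left out; the names here are invented for illustration and come from no particular RTOS:

```c
#include <stdint.h>

/* Invented helpers: assumed to be provided by the scheduler elsewhere. */
extern int  wake_expired_delays(uint32_t now);  /* returns nonzero if a task woke */
extern void request_context_switch(void);       /* pend the switch, e.g. via PendSV on ARM */

static volatile uint32_t g_tick_count;

/* Called from the periodic timer interrupt, the heartbeat of the RTOS.
 * At, say, 1000 Hz this gives 1 ms scheduling granularity. */
void rtos_tick_isr(void)
{
    g_tick_count++;
    if (wake_expired_delays(g_tick_count)) {
        /* a higher-priority task may now be ready, so ask for a switch */
        request_context_switch();
    }
}
```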
Some theoretical knowledge would be quite useful too, e.g. figuring out whether a given task set is schedulable under a given scheduling approach (sometimes it is not), the differences between static-priority and dynamic-priority scheduling, the priority-inversion problem, etc.
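For example, the classic sufficient test for fixed-priority rate-monotonic scheduling is the Liu & Layland utilisation bound: n independent periodic tasks are schedulable if the total utilisation U = sum(Ci/Ti) does not exceed n * (2^(1/n) - 1). A small check with made-up task parameters:

```c
#include <math.h>
#include <stdio.h>

/* Rate-monotonic schedulability check against the Liu & Layland bound.
 * The execution times and periods below are placeholder values. */
int main(void)
{
    double C[] = {1.0, 2.0, 3.0};    /* worst-case execution times (ms) */
    double T[] = {10.0, 20.0, 40.0}; /* periods (ms) */
    int n = 3;

    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / T[i];

    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("U = %.3f, bound = %.3f -> %s\n", U, bound,
           U <= bound ? "schedulable under RM" : "bound inconclusive");
    return 0;
}
```

Note that the test is only sufficient: a task set exceeding the bound may still be schedulable, but it needs a more exact analysis (e.g. response-time analysis).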
I see that kernel-mode drivers are risky as they run in privileged mode, but are there any monolithic kernels that do any form of driver/loadable-module sandboxing, or is this really the domain of microkernels?
Yes, there are platforms with "monolithic" (for some definition of monolithic) kernels that do driver sandboxing for some drivers. Windows does this in more recent versions with the User-Mode Driver Framework. There are two reasons for doing this:
It allows isolation. A failure in a user mode driver need not bring down the whole system. This is important for drivers for hardware which is not considered system critical. An example of this might be your printer, or your soundcard. In those cases if the driver fails, it can often simply be restarted and the user often won't even notice this happened.
It makes writing and debugging drivers much easier. Driver writers can use regular user mode libraries and regular user mode debuggers, without having to worry about things like IRQL and DPCs.
The other poster said there is no sense in this. Hopefully the above explains why you might want to do it. Additionally, the other poster said there is a performance concern. Again, this depends on the type of the driver. In Windows this is typically used for USB drivers. In the case of USB drivers, the driver is not talking directly to the hardware anyway, regardless of the mode it operates in: it is talking to another driver which talks to the USB host controller, so there is much less overhead from user-mode communication than there would be if you were writing a driver that had to bit-bang I/O ports from user mode. Also, you would avoid writing user-mode drivers for hardware that is performance-critical; in the case of printers and audio hardware, the user-mode transitions are so much faster than the hardware itself that the performance cost of one or two additional mode/context switches is probably irrelevant.
So sometimes it is worth doing simply because the additional robustness and ease of development make the small and often unnoticeable performance reduction worthwhile.
There is no sense in this sandboxing; the OS fully trusts driver code. Basically these drivers become part of the kernel. You can't fail over after a crash of the filesystem or any other major kernel subsystem. Trying to carry on after such a crash is basically a bad idea (imagine what you could even do after the storage driver for the boot disk crashes), because it can lead to data loss, for example.
And second: sandboxing leads to a performance hit for all kernel code.
There are lots of questions on SO asking about the pros and cons of virtualization for both development and testing.
My question is subtly different - in a world in which virtualization is commonplace, what are the things a programmer should consider when it comes to writing software that may be deployed into a virtualized environment? Some of my initial thoughts are:
Detecting if another instance of your application is running
Communicating with hardware (physical/virtual)
Resource throttling (app written for multi-core CPU running on single-CPU VM)
Anything else?
You have most of the basics covered with the three broad points. Watch out for:
Hardware communication related issues. Disk access speeds can be vastly different (and may have unusually high extremes; imagine a VM that is shut down for 3 days in the middle of a disk write). Network access may be interrupted or return unusual responses.
Fancy pointer arithmetic. Try to avoid it
Heavy reliance on uncommon low-level/assembly instructions
Reliance on machine clocks. Remember that any calls you make to the clock, and any time intervals, may regularly return unusual values when running on a VM (see the sketch after this list)
Single CPU apps may find themselves running on multiple CPU machines, that do funky things like Work Stealing
Corner cases and unusual failure modes are much more common. You might not have to worry as much that the network card will disappear in the middle of your communication on a real machine, as you would on a virtual one
Manual management of resources (memory, disk, etc...). The more automated the work, the better the virtual environment is likely to be at handling it. For example, you might be better off using a memory-managed type of language/environment, instead of writing an application in C.
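Relating to the clock point above, a small defensive sketch using POSIX's monotonic clock: measure intervals with CLOCK_MONOTONIC rather than wall-clock time, and treat an implausibly large gap (the VM may have been suspended or migrated) as something to re-check rather than an immediate failure. The 5-second threshold is an arbitrary placeholder.

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);

    /* ... do some work, or wait for a peer ... */

    clock_gettime(CLOCK_MONOTONIC, &now);
    double elapsed = (now.tv_sec - start.tv_sec) +
                     (now.tv_nsec - start.tv_nsec) / 1e9;

    if (elapsed > 5.0) {
        /* Don't immediately treat this as a timeout or peer failure; the host
         * may have paused or migrated the VM. Re-check real state first. */
        printf("suspiciously long gap: %.1f s\n", elapsed);
    }
    return 0;
}
```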
In my experience there are really only a couple of things you have to care about:
Your application should not fail because of a CPU time shortage (i.e. don't set timeouts too tight)
Don't use low-priority always-running processes to perform tasks in the background
The clock may run unevenly
Don't trust what the OS says about system load
Almost any other issue should not be handled by the application but by the virtualizer, the host OS or your preferred sys-admin :-)