Why is deadlock bad for an operating system? - operating-system

I have started learning about operating systems and I am getting the concepts, but can somebody explain why it is bad for an operating system to have a deadlock?

From what I have gathered, it is not bad in itself for a system to have deadlock prevention, but deadlock prevention can slow the system down: whenever a process or thread requests a resource, the system must first check whether granting it could lead to a deadlock, and such checking can be expensive. That is why most systems simply ignore the deadlock problem.
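One cheap form of deadlock prevention that does not require the OS to check anything at request time is to break the circular-wait condition by always taking locks in one agreed global order. The sketch below (POSIX threads; the lock names and the ordering rule are my own illustration, not something from the question) shows two workers that could deadlock if they locked in opposite orders, and avoids that simply by agreeing on the order:

```c
/* Sketch: avoiding deadlock by imposing a global lock order.
 * The locks and the ordering rule here are illustrative assumptions. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Both threads take lock_a before lock_b.  If one thread took them in
 * the opposite order, each could end up holding one lock while waiting
 * forever for the other: a classic deadlock. */
static void *worker(void *arg)
{
    const char *name = arg;
    pthread_mutex_lock(&lock_a);   /* always first  */
    pthread_mutex_lock(&lock_b);   /* always second */
    printf("%s holds both locks\n", name);
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread 1");
    pthread_create(&t2, NULL, worker, "thread 2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

Runtime deadlock avoidance (checking a resource-allocation graph or running something like the banker's algorithm on every request) is where the cost mentioned above comes from; static rules like the one sketched here shift that cost to design time.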

Related

Can a process perform IO while residing in secondary memory?

I am confused about the terms suspend, wait, and block in operating systems.
I read that a process in the wait or blocked state has to be moved out to secondary memory and resumes in main memory after its I/O completes.
Can a process perform IO while residing in secondary memory?
It depends entirely upon the operating system and hardware. Some systems do allow this; others do not. It is a lot of fun porting software from a system that does to a system that does not.

Is it necessary to bind a real-time thread to an LWP?

We know that an operating system maps user-level threads to the kernel using the many-to-many model and that the mapping is done through the use of LWPs. Furthermore, the system allows program developers to create real-time threads. So, is it necessary to bind a real-time thread to an LWP?
This—
We know that an operating system maps user-level threads to the kernel using the many-to-many model and that the mapping is done through the use of LWPs.
—is COMPLETELY AND TOTALLY wrong. The entire concept of mapping user threads to kernel threads is entirely the creation of horrible books on operating systems and does not exist in the real world (at least in the mainstream).
There is no such thing as a user thread. What horrible textbooks call a "user thread" is a thread simulated by a library. So there is no real point in covering user threads in operating systems at all because they do not exist in the operating system.
Thus, there are threads and there are simulated threads. Outside of horrible textbooks, there is no one-to-one model, many-to-many model, or anything like that.
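For what the answer calls a "simulated thread", here is a rough sketch of how a library can multiplex several execution contexts onto a single kernel-visible thread using the POSIX ucontext API (deprecated, but still available on glibc/Linux); the function and variable names are my own and purely illustrative:

```c
/* Sketch: a "thread" simulated entirely in user space with ucontext.
 * The kernel schedules only one real thread here; all switching is
 * done by the program itself.  Illustrative only. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];

static void task(void)
{
    printf("simulated thread: running\n");
    swapcontext(&task_ctx, &main_ctx);   /* yield back to main */
    printf("simulated thread: resumed\n");
}                                        /* returning follows uc_link */

int main(void)
{
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link = &main_ctx;        /* where to go when task ends */
    makecontext(&task_ctx, task, 0);

    printf("main: switching to simulated thread\n");
    swapcontext(&main_ctx, &task_ctx);
    printf("main: it yielded, switching back\n");
    swapcontext(&main_ctx, &task_ctx);
    printf("main: simulated thread finished\n");
    return 0;
}
```

The scheduler here is the program itself, which is exactly why the answer argues these are not operating-system threads.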

Can applications and hardware interact directly?

I am a new student taking an OS course. I already know that the OS mediates communication between applications and hardware in a modern computer. But sometimes it seems it would be more time efficient if applications could control the hardware directly. May I ask whether that is possible?
Yes, it is possible, but that would give you a single-application computer: a machine that can run only one particular application.
An application handling the hardware directly is faster because there is less of the overhead the OS adds through its management.
You can take the example of DMA - Direct Memory Access. This feature is useful at any time that the CPU cannot keep up with the rate of data transfer, or when the CPU needs to perform work while waiting for a relatively slow I/O data transfer.
But you should keep in mind the importance of the operating system in handling other hardware, since not everything can be managed that trivially; many devices need the OS's processing and decision-making.
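As a concrete (and heavily caveated) illustration of an application touching hardware directly: on Linux, a privileged process can map a physical address range into its own address space through /dev/mem. The register address below is a made-up placeholder, not a real device, and many kernels restrict or disable /dev/mem entirely; treat this as a sketch of the technique rather than portable advice.

```c
/* Sketch: mapping a hypothetical device register into user space via
 * /dev/mem on Linux.  PHYS_ADDR is a placeholder, not a real device;
 * requires root and a kernel that permits /dev/mem access. */
#include <fcntl.h>
#include <inttypes.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define PHYS_ADDR 0x10000000UL   /* hypothetical register base address */
#define MAP_SIZE  4096UL

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, PHYS_ADDR);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* The program now reads and writes the device registers directly,
     * bypassing any OS driver (and any protection the driver provides). */
    printf("register 0 = 0x%08" PRIx32 "\n", regs[0]);

    munmap((void *)regs, MAP_SIZE);
    close(fd);
    return 0;
}
```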

Does the operating system run on a CPU without being context switched? [duplicate]

This question already has answers here:
How does the OS scheduler regain control of CPU?
I know that a single-CPU system can run only one process at any instant. My doubt is: how does the OS, being itself a separate process, run on the CPU while also managing to schedule some other process at the same time (which should not be possible, since only one process can run on a single-CPU system)?
In other words, if another process is consuming the CPU at some moment, has the OS been context switched out? Or where does the OS run (since it has to be active at all times to monitor the system)?
I don't even know whether this is an appropriate question, but kindly let me know if you have an answer, or correct me if I am wrong.
Thanks in advance!
In a modern operating system the kernel, the core of the OS, is in complete control of how much time it allocates to the various user processes it's managing. It can interrupt the execution of a user process through various mechanisms provided by the CPU itself. This is called preempting the process and can be done on a schedule, like executing a user process for a particular number of nanoseconds before automatically interrupting it.
Older operating systems, like DOS, Windows 1.0 through 3.11, macOS 9 and earlier, plus many others, employ a different model where the user process is responsible for yielding control. If the process doesn't yield, there may be little recourse for reasserting control of the system. This can lead to crashes or lock-ups, a frequent problem with non-preemptive operating systems of all stripes.
Even then there is often hardware support for things like hardware timers that can trigger a particular chunk of code on a regular basis which can be used to rescue the system from a run-away process. Just because a bit of code is running is no guarantee that it will continue to run indefinitely, without interruption.
A modern CPU is a fantastically complicated piece of equipment. Those with support for things like CPU virtualization can make the single physical CPU behave as if it's a number of virtual CPUs all sharing the same hardware. Each of these virtual CPUs is free to do whatever it wants, including dividing up its time using either a pre-emptive or cooperative model, as well as splitting itself into even more virtual CPUs.
The long and the short of it here is to not assume that the kernel must be actively executing to be in control. It has a number of tools at its disposal to wrest control of the CPU back from any process that might be running.
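To make the timer idea above concrete, here is a user-space analogy (my own sketch, not part of the original answer): a POSIX interval timer delivers a signal that interrupts a loop which never yields voluntarily, much as a hardware timer interrupt lets the kernel regain the CPU from a runaway process.

```c
/* Sketch: a periodic timer "preempting" a busy loop, as a user-space
 * analogy for a kernel using a hardware timer interrupt to regain
 * control.  Illustrative only. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks;

static void on_tick(int sig)
{
    (void)sig;
    ticks++;                      /* the "scheduler" gets to run here */
    if (ticks >= 5) {
        static const char msg[] = "timer: stopping runaway loop\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);
        _exit(0);                 /* forcibly end the runaway "process" */
    }
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_tick;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval it = {
        .it_interval = { .tv_sec = 0, .tv_usec = 100000 },  /* every 100 ms */
        .it_value    = { .tv_sec = 0, .tv_usec = 100000 },
    };
    setitimer(ITIMER_REAL, &it, NULL);

    for (;;)                      /* code that never yields on its own */
        ;
}
```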

Programming considerations for virtualized applications

There are lots of questions on SO asking about the pros and cons of virtualization for both development and testing.
My question is subtly different - in a world in which virtualization is commonplace, what are the things a programmer should consider when it comes to writing software that may be deployed into a virtualized environment? Some of my initial thoughts are:
Detecting if another instance of your application is running
Communicating with hardware (physical/virtual)
Resource throttling (app written for multi-core CPU running on single-CPU VM)
Anything else?
You have most of the basics covered with the three broad points. Watch out for:
Hardware-communication-related issues. Disk access speeds are vastly different (and may have unusually high extremes - imagine a VM that is shut down for 3 days in the middle of a disk write....). Network access may be interrupted or return unusual responses
Fancy pointer arithmetic. Try to avoid it
Heavy reliance on uncommon low-level/assembly instructions
Reliance on machine clocks. Remember that any calls you make to the clock, and any time intervals you compute, may regularly return unusual values when running on a VM (see the clock sketch after this list)
Single-CPU apps may find themselves running on multi-CPU machines that do funky things like work stealing
Corner cases and unusual failure modes are much more common. You might not have to worry much that the network card will disappear in the middle of your communication on a real machine, but you do on a virtual one
Manual management of resources (memory, disk, etc...). The more automated this work is, the better the virtual environment is likely to be at handling it. For example, you might be better off using a memory-managed language/environment instead of writing the application in C.
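Picking up the clock point from the list above, a defensive habit (my own hedged sketch, not something from the answer) is to measure durations with a monotonic clock rather than wall-clock time, since a hypervisor may step or skew the wall clock:

```c
/* Sketch: timing an interval with CLOCK_MONOTONIC so that wall-clock
 * jumps (more likely under virtualization) don't corrupt the result.
 * The 1-second sleep stands in for whatever work is being timed. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static double elapsed_sec(struct timespec a, struct timespec b)
{
    return (double)(b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    sleep(1);   /* stand-in for the work being measured */

    clock_gettime(CLOCK_MONOTONIC, &end);
    printf("elapsed: %.3f s\n", elapsed_sec(start, end));
    return 0;
}
```

Even a monotonic clock can advance unevenly in a VM, but at least it will not jump backwards or leap forward when the host adjusts the time.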
In my experience there are really only a couple of things you have to care about:
Your application should not fail because of a CPU time shortage (i.e. don't use timeouts that are too tight)
Don't use low-priority, always-running processes to perform tasks in the background
The clock may run unevenly
Don't trust what the OS says about system load
Almost any other issue should not be handled by the application but by the virtualizer, the host OS or your preferred sys-admin :-)
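Coming back to the first point in the question (detecting whether another instance of your application is running): a common technique on Unix-like systems is an advisory file lock, sketched below with an illustrative lock-file path. Note that in a virtualized or containerized deployment this only detects instances inside the same VM or container; detecting instances across machines needs some shared coordination mechanism instead.

```c
/* Sketch: single-instance detection with an advisory file lock (flock).
 * The lock file path is an illustrative choice; under virtualization
 * this only covers instances running inside the same VM/container. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/myapp.lock", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open lock file"); return 1; }

    if (flock(fd, LOCK_EX | LOCK_NB) != 0) {
        fprintf(stderr, "another instance appears to be running\n");
        return 1;
    }

    /* ... the application runs while holding the lock ... */
    printf("this is the only instance on this (virtual) machine\n");
    sleep(5);

    flock(fd, LOCK_UN);
    close(fd);
    return 0;
}
```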