How often do deadlocks occur (or come close to occurring) in an operating system?

We study a lot about deadlocks in operating systems courses. How often do they really occur?
Or rather, how often is there a chance of a deadlock (one that the OS prevents from actually occurring)?

I was asking the same question. I came across this in my textbook:
Expense is one important consideration. Ignoring the possibility of deadlocks is
cheaper than the other approaches. Since in many systems, deadlocks occur
infrequently (say, once per year), the extra expense of the other methods may not
seem worthwhile.
It also mentions that if a deadlock occurs, the system would likely just slow down a bit and eventually need a restart. Something we are all familiar with.
The textbook quote is from page 323 of Brian, W.'s Operating System Basics.
But as Joao mentioned, it's up to the developer to produce quality code. So I guess in theory, if you were running a lot of junk programs you could face deadlocks regularly.

Deadlocks occur as often as you write multithreaded programs with synchronization primitives without knowing what you're doing.
For example, you always have to release a mutex before putting a thread to sleep whenever that same mutex is needed to wake the thread up, and that's just one small example.
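To make that concrete, here is a minimal pthreads sketch of the pattern (the function and variable names are mine, for illustration): pthread_cond_wait atomically releases the mutex while the waiter sleeps, so the waking thread can acquire it; going to sleep while still holding the mutex would deadlock both threads.

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool ready = false;

/* Waiter: must hold the mutex to test the flag, but must not keep it
 * while sleeping. pthread_cond_wait releases the mutex atomically and
 * re-acquires it before returning. */
void *waiter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!ready)                        /* loop: wakeups can be spurious */
        pthread_cond_wait(&cond, &lock);  /* releases `lock` while blocked */
    /* ... consume the event ... */
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Waker: needs the same mutex to set the flag. Had the waiter slept
 * while holding `lock`, this thread could never reach the signal. */
void *waker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    ready = true;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
    return NULL;
}
```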

A deadlock in a multithreaded user program makes that application nonfunctional, but it has nothing to do with deadlock in the operating system itself. Deadlock in an operating system occurs when the kernel allocates resources improperly, and this happens very, very rarely. The frequency is on the order of once in years, and thus popular operating systems like Windows and Unix take the ostrich approach of ignoring deadlock.
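For illustration, the classic way a user program deadlocks itself, without the OS being involved at all, is two threads taking the same two locks in opposite order. A minimal sketch (hypothetical code, not from the answer above):

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg)        /* takes a, then b */
{
    (void)arg;
    pthread_mutex_lock(&a);
    sleep(1);                     /* widen the race window for the demo */
    pthread_mutex_lock(&b);       /* blocks forever once t2 holds b */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
    return NULL;
}

static void *t2(void *arg)        /* takes b, then a: the opposite order */
{
    (void)arg;
    pthread_mutex_lock(&b);
    sleep(1);
    pthread_mutex_lock(&a);       /* blocks forever once t1 holds a */
    pthread_mutex_unlock(&a);
    pthread_mutex_unlock(&b);
    return NULL;
}

int main(void)
{
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);        /* never returns: both threads are stuck */
    puts("unreachable");
    return 0;
}
```

Both threads hang forever, yet the kernel and every other process keep running normally, which is exactly the distinction this answer draws.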

Related

How often should deadlock detection be done?

If deadlocks are unlikely to occur in the system and processes frequently request resources, what are the main logical reasons for running the deadlock-detection algorithm only at certain points, rather than continuously testing for the deadlock condition in a loop?
This is highly system dependent. Quality operating systems incorporate general-purpose locking mechanisms as system services that detect deadlocks. Deadlock checks are normally triggered when a process requests a lock, not through a continuous loop.
Many quick-and-dirty systems have no deadlock detection at all.
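As a rough sketch of what request-time detection can look like for single-owner locks (the structures and field names here are hypothetical, not any particular kernel's API): each blocked process records the lock it is waiting for, and on a new request the system walks the chain of owners; reaching the requester again means the wait would close a cycle.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical structures, for illustration only: each lock has a
 * single owner, and a blocked process records the lock it waits for. */
struct lock;
struct proc {
    struct lock *waiting_for;   /* NULL if the process is runnable */
};
struct lock {
    struct proc *owner;         /* NULL if the lock is free */
};

/* Called when process `p` requests lock `l` and would have to block.
 * Walk the owner -> waiting_for chain; if it leads back to `p`,
 * granting the wait would close a cycle, i.e. a deadlock. Assumes
 * every earlier wait was checked the same way, so the existing
 * chain contains no cycle that excludes `p`. */
bool would_deadlock(struct proc *p, struct lock *l)
{
    struct proc *q = l->owner;
    while (q != NULL) {
        if (q == p)
            return true;        /* p would wait on someone waiting on p */
        if (q->waiting_for == NULL)
            return false;       /* chain ends at a runnable process */
        q = q->waiting_for->owner;
    }
    return false;               /* lock is free: no wait, no deadlock */
}
```

Checking only at request time suffices because a cycle can only appear when a new wait-for edge is added, which is the logic behind the answer above.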

What is the effect of a deadlock on processes that are not involved in it?

I cannot find an exact answer to whether, when a deadlock occurs in the system, the whole system stops working, or whether processes not involved in the deadlock can keep executing.
When a deadlock occurs, does the whole system go into deadlock, or only the processes that are deadlocked?
Only the processes that are in the deadlock will.
And that is one of the reasons most modern personal computers ignore it: deadlock prevention, avoidance, detection, and recovery are all expensive.
I guess only the processes that enter the deadlock are affected, not the others. Most operating systems, such as Windows and Linux, use deadlock ignorance (the ostrich algorithm), because the other approaches, prevention, detection, and avoidance, are expensive to implement and rest on the somewhat unrealistic assumption that you can know in advance how many resources a process will need to execute to completion. Deadlock avoidance and detection are generally used in database software, where many operations involve locking several records. So no other processes are affected except those that have entered indefinite blocking/starvation (deadlock).

How is resource integrity maintained using semaphores?

I am new to computer science, and this may sound stupid to some of you. I have searched for related questions, but this scenario is stuck in my mind.
I understand that mutexes provide a locking facility for a section or resource. Consider the example of a buffer (say, an array of size 10) into which a thread puts some value. We lock the mutex before using the buffer and release it afterwards; this whole process is done by the same thread.
Now, if I have to do this same process with semaphores, I can limit the number of threads that can enter the critical section. But how is the integrity of the buffer maintained
(with read and write operations on the buffer handled by different threads)?
Semaphores are a higher-level abstraction. A semaphore controls how many threads may enter a section at once; any thread beyond that count is blocked until another leaves. In a nutshell.
The usual solution with semaphores is to allow multiple simultaneous readers or one writer. See the Wikipedia article Readers-writers problem for examples.
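To make the buffer question concrete, the textbook bounded-buffer pattern uses two counting semaphores to track empty and full slots, plus a mutex (or binary semaphore) around the array itself; the mutex is what preserves the buffer's integrity even when reads and writes come from different threads. A minimal POSIX sketch, assuming the 10-slot buffer from the question (names are mine):

```c
#include <pthread.h>
#include <semaphore.h>

#define N 10                            /* buffer size from the question */

static int buf[N];
static int in = 0, out = 0;             /* next write / next read slot */

static sem_t empty_slots;               /* counts free slots, starts at N */
static sem_t full_slots;                /* counts filled slots, starts at 0 */
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

void buffer_init(void)
{
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
}

void put(int v)                         /* called by producer threads */
{
    sem_wait(&empty_slots);             /* block while the buffer is full */
    pthread_mutex_lock(&mtx);           /* guard the array and indices */
    buf[in] = v;
    in = (in + 1) % N;
    pthread_mutex_unlock(&mtx);
    sem_post(&full_slots);              /* announce one more item */
}

int get(void)                           /* called by consumer threads */
{
    sem_wait(&full_slots);              /* block while the buffer is empty */
    pthread_mutex_lock(&mtx);
    int v = buf[out];
    out = (out + 1) % N;
    pthread_mutex_unlock(&mtx);
    sem_post(&empty_slots);             /* announce one more free slot */
    return v;
}
```

Note the division of labour: the semaphores handle counting (how many threads may proceed), while the mutex handles integrity (only one thread touches the array at a time).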

Can two processes simultaneously run on one CPU core?

Can two processes simultaneously run on one CPU core that has hyper-threading? I have been learning from the Internet, but I do not see a clear, straight answer.
Edit:
Thanks for the discussion and sharing! My purpose in posting my question here is not to discuss parallel computing; that would be too big to discuss here. I just want to know whether a multithreaded application can benefit more from hyper-threading than a multi-process application. After further reading, I have the following learning notes.
1) A Hyper-Threading-enabled CPU core has two sets of CPU state and interrupt logic, but only one set of execution units and cache. (I have not studied what a pipeline is yet.)
2) Multithreading benefits from Hyper-Threading only if there is latency in some executing thread. I think this maps exactly to the common reason programmers use multiple threads in the first place. If the multithreaded application has already been optimized, it may not gain any benefit from Hyper-Threading.
3) If the CPU state maps to process state, I believe Marc is correct that a multi-process application can benefit even more from Hyper-Threading technology.
4) When a CPU vendor says "thread", it looks like their "thread" is different from the threads I know as a Java programmer?
No, a hyperthreaded CPU core still has only a single execution pipeline. Even though it appears as two CPUs to the OS above it, there is still only ever one instruction being executed at any given time.
Hyperthreading was intended to allow the CPU to continue executing one thread while another thread was stalled waiting for a resource or other operation to complete, without leaving too many stages of the pipeline empty and useless. This goes back to the Pentium 4 days, with its absurdly long pipeline: a stall was essentially catastrophic for efficiency and throughput, and hyperthreading allowed Intel to keep the CPU busy doing other things while it cleaned up after the stall.
While Marc B's answer is pretty much the definitive summary of how HT works, I just want to make a small contribution by linking this article, which should clear up a lot of things about HT: http://software.intel.com/en-us/articles/performance-insights-to-intel-hyper-threading-technology/
Short answer: yes.
A single CPU core (a processor) with hyperthreading can run two or more threads simultaneously. These threads may belong to one program, or they may belong to different programs and thus different processes. This type of multithreading is called Simultaneous MultiThreading (SMT).
The claim that a CPU core can execute only one instruction at any given time is also not true. Modern CPUs exploit Instruction-Level Parallelism (ILP) by duplicating pipeline resources (e.g. 2 ALUs instead of 1). This type of pipeline is called a "superscalar" pipeline.
See the Wikipedia page on Simultaneous multithreading.

Can a shared ready queue limit the scalability of a multiprocessor system?

Can a shared ready queue limit the scalability of a multiprocessor system?
Simply put: most definitely. Read on for some discussion.
Tuning a service is an art form, or at least requires benchmarking (and the space of concepts you would need to benchmark is huge). I believe it depends on factors such as the following (this is not exhaustive):
how much time an item picked up from the ready queue takes to process,
how many worker threads there are,
how many producers there are, and how often they produce,
what type of wait primitives you are using: spin-locks or kernel waits (the latter being slower).
So, if items are produced often, the number of threads is large, and the processing time is low, the data structure could be locked for large windows, thus causing thrashing.
Other factors include the data structure used and how long it stays locked. For example, if you use a linked list to manage such a queue, the add and remove operations take constant time; a priority queue (heap) takes a few more operations on average when items are added.
If your system is for business processing, you could take this question out of the picture entirely by using:
a process-based architecture, spawning multiple producer/consumer processes and using the file system for communication, or
a non-preemptive, collaborative threading language such as Stackless Python, Lua, or Erlang.
Also note: synchronization primitives cause floods of inter-processor cache-coherence traffic, which is not good, so they should be used sparingly.
The discussion could go on to fill a Ph.D dissertation :D
A per-CPU ready queue is a natural choice for the data structure. This is because most operating systems try to keep a process on the same CPU, for many reasons you can google for. What does that imply? If a thread is ready and another CPU is idling, the OS will not quickly migrate the thread to that CPU; load balancing only kicks in over the long run.
Had the situation been different, that is, if keeping thread-CPU affinity were not a design goal and thread migration were frequent, then keeping separate per-CPU run queues would be costly.
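As a toy illustration of the trade-off (hypothetical code, not any real scheduler's): with per-CPU queues, each CPU normally contends only on its own lock, and touches a neighbour's queue only on the rare work-stealing path.

```c
#include <pthread.h>
#include <stddef.h>

#define NCPU 8

struct task { struct task *next; };

/* One queue and one lock per CPU: in the common case each CPU
 * contends only on its own lock rather than a single global one. */
struct runqueue {
    pthread_mutex_t lock;
    struct task    *head;
} rq[NCPU];

void rq_init(void)
{
    for (int i = 0; i < NCPU; i++) {
        pthread_mutex_init(&rq[i].lock, NULL);
        rq[i].head = NULL;
    }
}

void enqueue(int cpu, struct task *t)
{
    pthread_mutex_lock(&rq[cpu].lock);
    t->next = rq[cpu].head;
    rq[cpu].head = t;
    pthread_mutex_unlock(&rq[cpu].lock);
}

/* Pick the next task for this CPU; steal from a neighbour only
 * when the local queue is empty (the rare, slow path). */
struct task *pick_next(int cpu)
{
    for (int i = 0; i < NCPU; i++) {
        int c = (cpu + i) % NCPU;
        pthread_mutex_lock(&rq[c].lock);
        struct task *t = rq[c].head;
        if (t != NULL)
            rq[c].head = t->next;
        pthread_mutex_unlock(&rq[c].lock);
        if (t != NULL)
            return t;
    }
    return NULL;    /* every queue is empty */
}
```

With a single shared queue, every enqueue and dequeue on every CPU would serialize on one lock, which is exactly the scalability limit the question asks about.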