Do (POSIX) Operating Systems recover resources after a process crashes?

Let's assume we have a process that allocates a socket listening on a specific port, does something with it, and then terminates abnormally. Now a second process starts and wants to allocate a socket listening on the same port that was previously held by the crashed process. Is this socket available for re-allocation?
How does the Operating System recover resources that weren't released properly? Does the OS track the process id along with each allocated resource?
Is this cleanup something I can expect every POSIX compliant system to do?

This is up to the operating system but generally an OS maintains a process control structure to, among other things, manage its resources. When a process allocates a resource from the system (such as opening a file or allocating memory), details of the allocation are placed in that structure. When the process terminates, anything left in it gets cleaned up - but it's best to explicitly clean up as you go.
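As a rough illustration (the structure and field names here are hypothetical, not any real kernel's layout), the bookkeeping might look something like this in C:

```c
#include <unistd.h>   /* close() */

/* Hypothetical, simplified process control structure: real kernels differ,
 * but the idea is the same - the OS records every resource handed to the
 * process so it can reclaim them when the process exits, cleanly or not. */
#define MAX_FDS 1024

struct process_control_block {
    int   pid;                /* process identifier                      */
    int   open_fds[MAX_FDS];  /* descriptors (files, sockets, pipes ...) */
    int   fd_count;
    void *memory_regions;     /* list of allocated pages, omitted here   */
};

/* Conceptually what the OS does on process termination. */
void reclaim_resources(struct process_control_block *pcb)
{
    for (int i = 0; i < pcb->fd_count; i++)
        close(pcb->open_fds[i]);  /* releases sockets, so the port frees up */
    pcb->fd_count = 0;
    /* memory regions, locks, etc. would be released here as well */
}
```

The key point is that this table is owned by the OS, not by the process, so it survives a crash of the process.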

Specific details will depend upon the operating system, but generally speaking user-code is run in a virtual address space/sandbox where it does not have any direct access to hardware resources. Anything that the user process wants to access/allocate must be provided by calling the OS and asking it for the desired resource.
Thus the OS has a simple way of knowing who has been allocated which resources, and as long as it keeps track of this information, cleaning up resources in the event of a crashed process is as simple as fetching the list of resources allocated to that process, and marking them all as available again.
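For the socket case specifically, the kernel closes the crashed process's descriptors on its behalf, but a TCP port can still linger in the TIME_WAIT state for a short while. A common sketch for the second process is to set SO_REUSEADDR before binding; the port number here is only an example:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* The kernel already closed the crashed process's socket, but the port
     * may sit in TIME_WAIT for a while; SO_REUSEADDR lets the new process
     * bind it immediately. */
    int yes = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);   /* example port, pick your own */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    listen(fd, 16);
    /* ... accept connections ... */
    close(fd);
    return 0;
}
```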

Related

Practical ways of implementing preemptive scheduling without hardware support?

I understand that hardware support makes preemptive scheduling efficient.
What I want to know is: what are practical ways to implement preemptive scheduling without relying on hardware support? I think one way is software timers.
Another way, on a multiprocessor system, is to have one processor act as a master that keeps watching the slave processors.
I'm fine with inefficient approaches.
Please elaborate on all the ways you think or know can work, preferably (but not necessarily) ones that also work on a single-processor system.
In order to preempt a process, the operating system has to somehow get control of the CPU without the process's cooperation. Or viewed from the other perspective: The CPU has to somehow decide to stop running the process's code and start running the operating system's code.
Just like processes can't run at the same time as other processes, they can't run at the same time as the OS. The CPU executes instructions in order; that's all it knows. It doesn't run two things at once.
So, here are some reasons for the CPU to switch to executing operating system code instead of process code:
A hardware device sends an interrupt to this CPU - such as a timer, a keypress, a network packet, or a hard drive finishing its operation.
The software running on a different CPU sends an inter-processor interrupt to this CPU.
The running process decides to call a function in the operating system. Depending on the CPU architecture, it could work like a normal call, or it could work like a fake interrupt.
The running process executes an instruction which causes an exception, like accessing unmapped memory, or dividing by zero.
Some kind of hardware debugging interface is used to overwrite the instruction pointer, causing the CPU to suddenly execute different code.
The CPU is actually a simulation and the OS is interpreting the process code, in which case the OS can decide to stop interpreting whenever it wants (a sketch of this follows below).
If none of the above things happen, OS code doesn't run. Most OSes will re-evaluate which process should be running when a hardware event occurs that wakes a process up, and will also use a timer interrupt as a last resort to prevent one program from hogging all the CPU time.
Generally, when OS code runs, it has no obligation to return to the same place it was called from. "Preemption" is simply when the OS decides to jump somewhere other than the place it was called from.
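Here is that sketch: a toy round-robin scheduler that interprets each "process" one step at a time and forcibly switches after a fixed instruction budget. The process structure, the budget, and the fake workload are invented for illustration and are not taken from any real system:

```c
#include <stdio.h>

/* Toy "processes": each is stepped through a little at a time. A real
 * interpreter would decode bytecode; here each step is one unit of fake work. */
#define NPROC  3
#define BUDGET 5   /* "instructions" allowed before preemption */

struct proc {
    int id;
    int pc;       /* how far this process has gotten */
    int done;
};

static void step(struct proc *p)           /* execute one "instruction" */
{
    printf("process %d executes step %d\n", p->id, p->pc);
    p->pc++;
    if (p->pc >= 10)                       /* each toy process runs 10 steps */
        p->done = 1;
}

int main(void)
{
    struct proc procs[NPROC] = { {0, 0, 0}, {1, 0, 0}, {2, 0, 0} };
    int remaining = NPROC;

    /* Because the "OS" interprets every instruction, it can preempt after
     * BUDGET steps without any timer or interrupt hardware at all. */
    for (int cur = 0; remaining > 0; cur = (cur + 1) % NPROC) {
        if (procs[cur].done) continue;
        for (int i = 0; i < BUDGET && !procs[cur].done; i++)
            step(&procs[cur]);
        if (procs[cur].done) remaining--;
    }
    return 0;
}
```

The interpreter sits between every instruction and the CPU, so it can preempt at will; the price is that everything runs much more slowly than native execution.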

Sharing memory between processes on different computers

Can someone help me with sharing memory between three or more machines, each machine having its own copy of the memory to speed up read operations?
For example, I could first create a socket to communicate between these processes, but how can I make memory visible between the machines? I know how to make it visible within one machine.
EDIT: Maybe we should use a server machine to manage the shared memory read and write operations?
You cannot share memory across machine boundaries. You have to serialize the data being shared, for example with an IPC mechanism like a named pipe or a socket. Transmit the shared data to each machine, which then copies it into its own local memory. Any changes to the local memory have to be transmitted to the other machines so they have an updated local copy.
If you are having problems implementing that, then you need to show what you have actually attempted.
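As a minimal sketch of that approach (assuming a TCP connection between two machines has already been established; the struct and field names are invented for illustration):

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Hypothetical piece of "shared" state; every machine keeps a local copy. */
struct shared_state {
    uint32_t counter;
    uint32_t flags;
};

/* Serialize to a fixed wire format (network byte order) and send the update
 * to one peer over an already-connected socket. */
int send_update(int sock, const struct shared_state *s)
{
    uint32_t wire[2] = { htonl(s->counter), htonl(s->flags) };
    return send(sock, wire, sizeof(wire), 0) == (ssize_t)sizeof(wire) ? 0 : -1;
}

/* Receive an update and overwrite the local copy with it. */
int recv_update(int sock, struct shared_state *s)
{
    uint32_t wire[2];
    if (recv(sock, wire, sizeof(wire), MSG_WAITALL) != (ssize_t)sizeof(wire))
        return -1;
    s->counter = ntohl(wire[0]);
    s->flags   = ntohl(wire[1]);
    return 0;
}
```

Each machine keeps its own struct shared_state in local memory for fast reads and applies updates as they arrive; if several machines can write concurrently, some coordination scheme (for example the server suggested in the question's edit) is needed to decide whose update wins.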

How do computers prevent programs from interfering with each other?

For example, I heard in class that global variables are just put in a specific location in memory. What is to prevent two programs from accidentally using the same memory location for different variables?
Also, do both programs use the same stack for their arguments and local variables? If so, what's to prevent the variables from interleaving with each other and messing up the indexing?
Just curious.
Most modern processors have a memory management unit (MMU) that provides the OS with the ability to create protected, separate memory sections for each process, including a separate stack for each process. With the help of the MMU, the processor can restrict each process to accessing and modifying only memory that has been allocated to it. This prevents one process from writing into another process's memory space.
Most modern operating systems will use the features of the MMU to provide protection for each process.
Here are some useful links:
Memory Management Unit
Virtual Memory
This is something that modern operating systems do by loading each process into a separate virtual address space. Multiple processes may reference the same virtual address, but the operating system, helped by modern hardware, will map each one to a separate physical address and make sure that one process cannot access physical memory allocated to another process [1].
[1] Debuggers are a notable exception: operating systems often provide special mechanisms for debuggers to attach to other processes and examine their memory space.
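A small experiment illustrates this. Parent and child typically print the same virtual address for the global variable, yet each sees its own value, because the OS maps that address to different physical memory for each process (the exact address printed will vary by system):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int global = 1;   /* lives at some fixed virtual address in each process */

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {           /* child: same virtual address, private copy */
        global = 42;
        printf("child : &global = %p, value = %d\n", (void *)&global, global);
        exit(0);
    }
    wait(NULL);               /* parent's copy is untouched by the child */
    printf("parent: &global = %p, value = %d\n", (void *)&global, global);
    return 0;
}
```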
The short answer to your question is that the operating system deals with these issues. They are very serious issues, and a significant part of an operating system's job is keeping every process in its own separate space. The operating system tracks all running programs and makes sure each one uses only the memory assigned to it. This keeps the stacks separate too: each program runs on its own stack assigned by the OS. How the OS does this assigning is actually a complex task.

Memcached and virtual memory

According to this thread (not very reliable, I know) memcached does not use the disk, not even virtual memory.
My questions are:
Is this true?
If so, how does memcached ensure that the memory it gets assigned never overflows to disk?
memcached avoids going to swap through two mechanisms:
Informing the system administrators that the machines should never go to swap. This allows the admins to maybe not configure swap space for the machine (seems like a bad idea to me) or configure the memory limits of the running applications to ensure that nothing ever goes into swap. (Not just memcached, but all applications.)
The mlockall(2) system call can be used (memcached's -k option) to ensure that all of the process's memory stays locked in RAM. This is mediated via the setrlimit(2) RLIMIT_MEMLOCK control, so admins would need to modify e.g. /etc/security/limits.conf to allow the memcached user account to lock much more memory than is normal (a minimal sketch of this call appears below). Locked memory is limited to prevent untrusted user accounts from starving the rest of the system of free memory.
Both these steps are fair assuming the point of the machine is to run memcached and perhaps very little else. This is often a fair assumption, as larger deployments will dedicate several (or many) machines to memcached.
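For reference, here is a minimal sketch of what that locking step amounts to; error handling is simplified and, as noted above, RLIMIT_MEMLOCK has to be raised for this to succeed as an unprivileged user:

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Lock all current and future pages of this process into RAM so the
     * kernel never swaps them out. Requires a sufficient RLIMIT_MEMLOCK
     * (or the appropriate privilege). */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return 1;
    }
    /* ... allocate and use cache memory; none of it can be paged to disk ... */
    return 0;
}
```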
You configure memcached to use a fixed amount of memory. When that memory is full, memcached just deletes old data to stay under the limit. It is that simple.

Why does syscall need to switch into kernel mode?

I'm studying for my operating systems final and was wondering if someone could tell me why the OS needs to switch into kernel mode for syscalls?
A syscall is used specifically to run an operation in kernel mode, since normal user code is not allowed to do this for security reasons.
For example, if you want to allocate memory, the operating system is privileged to do it (since it manages the page tables and is allowed to access the memory of other processes), but you as a user program should not be allowed to peek at or corrupt the memory of other processes.
It's a way of sandboxing you. So you send a syscall requesting the operating system to allocate memory, and that happens at the kernel level.
Edit: I see now that the Wikipedia article is surprisingly useful on this
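To make the memory example concrete: a user program never touches the page tables itself; it traps into the kernel through a system call such as mmap(2) and gets back a mapping the kernel has set up on its behalf. A minimal POSIX/Linux-style sketch:

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* The process asks the kernel for a page of memory. The library call
     * traps into kernel mode, where the kernel updates this process's page
     * tables - something user-mode code is never allowed to do directly. */
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    ((char *)p)[0] = 'x';   /* safe to use now: it is mapped into our space */
    munmap(p, 4096);
    return 0;
}
```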
Since this is tagged "homework", I won't just give the answer away but will provide a hint:
The kernel is responsible for accessing the hardware of the computer and ensuring that applications don't step on one another. What would happen if any application could access a hardware device (say, the hard drive) without the cooperation of the kernel?