SystemVerilog spawned processes execute after parent executes blocking statement

The paper titled SystemVerilog Event Regions, Race Avoidance & Guidelines presents an example that appears to contradict the SystemVerilog LRM (IEEE 1800-2012):
...when forking background processes, it is often very useful to allow
newly created subprocesses a chance to start executing before
continuing the execution of the parent process. This is easily
accomplished with the following code:
program test;
  initial begin
    fork
      process1;
      process2;
      process3;
    join_none
    #0; // blocking statement: the parent yields here
    // parent process continues
  end
endprogram
However, the SystemVerilog LRM (IEEE 1800-2012) states:
"join_none. The parent process continues to execute concurrently with
all the processes spawned by the fork. The spawned processes do not
start executing until the parent thread executes a blocking statement
or terminates."
Which is it?

There is no contradiction here. Look at it this way:
join_none. The parent process continues to execute concurrently with all the processes spawned by the fork. The spawned processes do not start executing until the parent thread executes a blocking statement or terminates.
We are being told that the forked processes do not start immediately. They wait for the parent process, which spawned them, to yield (by either terminating or by encountering a blocking statement). The fork statement basically schedules the processes. The scheduler gets a chance to start executing them only when the already running thread (parent process) yields.
The example you quoted suggests giving the spawned processes a chance to start executing. To do so, it introduces a #0 statement. When the parent process encounters #0, a blocking statement, it yields, and the spawned processes therefore get a chance to start executing.

Related

How to daemonize a process that inherits all the parent's threads?

I have a process that creates multiple threads and opens a socket.
Now I want to create a daemon process by calling fork() and exiting the parent process.
But the threads that are created by the parent process get exited when the parent is killed.
Is there a way the child process can inherit those threads and the socket?
(The code is written in C++)
But the threads that are created by the parent process get exited when the parent is killed.
Not exactly. The parent's threads are unaffected, and the child only ever gets the thread that called fork(). This is not the same thing as the child getting the other threads, and them thereafter terminating. In particular, no cancellation handlers or exit handlers that may have been registered by them are called in the child, and this may leave mutexes and other synchronization objects in an unusable and unrecoverable state. Cleaning up such a mess is the intended purpose of fork handlers, but those are tricky to use correctly, and they must be used consistently throughout the process to be effective.
Is there a way the child process can inherit those threads and the socket?
A child process inherits its parent's open file descriptors automatically, so nothing special needs to be done about the socket. But the other threads? No.
The POSIX documentation for fork is explicit about all this:
The new process (child process) shall be an exact copy of the calling
process (parent process) except as detailed below:
[...]
The child process shall have its own copy of the parent's file descriptors. Each of the child's file descriptors shall refer to the
same open file description with the corresponding file descriptor of
the parent.
[...]
A process shall be created with a single thread. If a multi-threaded process calls fork(), the new process shall contain a
replica of the calling thread and its entire address space, possibly
including the states of mutexes and other resources. Consequently, to
avoid errors, the child process may only execute async-signal-safe
operations until such time as one of the exec functions is called.
When the application calls fork() from a signal handler and any of the fork handlers registered by pthread_atfork() calls a function that
is not async-signal-safe, the behavior is undefined.
If the objective of forking is to disassociate from the original session and parent in order to run in the background as a daemon, then your best bet is for the initial thread to do that immediately upon startup, before creating any additional threads or opening any sockets.
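Here is a minimal sketch of that order of operations in C (the worker function and its body are illustrative, not taken from your code): the initial thread daemonizes first, while the process is still single-threaded, and only then creates threads and opens sockets, so there is nothing that ever needs to be "inherited" across a later fork().
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Illustrative worker; the event loop and socket code would go here. */
static void *worker(void *arg) {
    (void)arg;
    return NULL;
}

int main(void) {
    /* Daemonize first, while the process is still single-threaded. */
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(EXIT_FAILURE); }
    if (pid > 0) exit(EXIT_SUCCESS);          /* parent exits... */

    if (setsid() < 0) { perror("setsid"); exit(EXIT_FAILURE); }

    /* ...and only the daemon creates threads (and would open sockets). */
    pthread_t t;
    if (pthread_create(&t, NULL, worker, NULL) != 0) {
        perror("pthread_create");
        exit(EXIT_FAILURE);
    }
    pthread_join(t, NULL);
    return 0;
}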

What happens if a process preempts while executing wait and signal operations?

One of the main reasons for using semaphores is to solve synchronization problems such as the producer-consumer problem.
But I wonder what would happen if a process gets preempted while executing the wait operation and another process also executes the wait operation.
Let's take the value of S as 1.
Suppose that while executing Wait(), the value of S (1) is loaded into a register reg, the register is decremented to 0, and the process is preempted before the result is stored back to S.
Now if another process executes Wait() to enter the critical section, it still sees the value of S as 1, loads reg as 1, and again decrements it to 0.
Both processes then enter the critical section.
The code for the wait function is
Down(Semaphore S){
    S.value = S.value - 1;
    if (S.value < 0) {
        put PCB in suspended list;
        sleep;
    } else {
        return;
    }
}
The code for the signal function is
Signal(Semaphore S){
    S.value = S.value + 1;
    if (S.value <= 0) {
        select a process from the suspended list;
        wakeup();
    }
}
Isn't the semaphore variable itself also a shared variable, a critical section of its own, since it is common to two or more processes? How can we prevent such race conditions?
You are correct that if the code for the semaphore operations is as given above, there is indeed a risk that something bad could happen if a thread gets preempted in the middle of performing an operation. The reason this isn't a problem in practice is that the actual implementations of semaphore operations are a bit more involved than what you gave.
Some implementations of semaphores, for example, will begin by physically disabling the interrupt mechanism on the machine to ensure that the current thread cannot possibly be preempted during execution of the operation. Others are layered on top of other synchronization primitives that use similar techniques to prevent preemption. Others might use other mechanisms besides disabling interrupts that have the same effect of ensuring that the process can’t be halted midway in the middle of performing the needed synchronization, or at least, ensuring that any places where preemption can occur are well-marked and properly thought through.
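As an illustration of the layered approach, here is a minimal user-space sketch in C11 (the type and function names are hypothetical, and a real implementation would put waiters to sleep rather than spin): an atomic test-and-set lock makes the read-modify-write of the counter indivisible, so a preempted thread can never leave a half-finished update visible to another thread.
#include <stdatomic.h>

/* A sketch, not a production semaphore. Initialize with, e.g.:
   sem_sketch_t s = { ATOMIC_FLAG_INIT, 1 };                     */
typedef struct {
    atomic_flag lock;          /* test-and-set spinlock guarding `value` */
    int value;
} sem_sketch_t;

static void sem_wait_sketch(sem_sketch_t *s) {
    for (;;) {
        while (atomic_flag_test_and_set(&s->lock))
            ;                             /* spin until we own the lock */
        if (s->value > 0) {
            s->value--;                   /* decrement is now race-free */
            atomic_flag_clear(&s->lock);
            return;
        }
        atomic_flag_clear(&s->lock);      /* release and retry; a real
                                             implementation would sleep here */
    }
}

static void sem_signal_sketch(sem_sketch_t *s) {
    while (atomic_flag_test_and_set(&s->lock))
        ;
    s->value++;                           /* increment is race-free too */
    atomic_flag_clear(&s->lock);          /* a real implementation would
                                             wake a sleeping waiter here */
}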
Hope this helps!

QUERY REGARDING PARENT AND CHILD PROCESS

Can a for loop be preempted partway through?
Suppose we have a parent and a child process, and both execute a for loop.
Can one process's for loop be preempted partway through so that the other process's for loop starts running? And if so, will it resume afterwards?
This obviously depends on the operating system used and on many parameters, e.g. the priority of the processes. But generally a process can be preempted after any machine instruction, which means it can be interrupted even within a single line of code in a language like C.
If a process has been interrupted, it will normally be resumed with the next machine instruction.
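A minimal sketch in C that makes this visible (the loop bound and the messages are arbitrary): after fork(), parent and child each run the same for loop, and the scheduler is free to preempt either one mid-loop, so the output lines interleave unpredictably from run to run, with each process resuming where it left off.
#include <stdio.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();   /* error handling omitted in this sketch */
    for (int i = 0; i < 5; i++) {
        /* Either process may be preempted between any two iterations
           (or even mid-iteration), then resumed afterwards. */
        printf("%s: iteration %d\n", pid == 0 ? "child" : "parent", i);
    }
    return 0;
}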

Kill and free the memory of never ending threads from main in perl

There is a main thread running and two other threads. The two threads are infinite event loops waiting for some event.
The issue is that I want to wait for the event for some finite amount of time and then exit from the program.
But when I exit, the threads are still running.
I want to free all the space occupied by these threads and then exit from the script.
How do I fix this? How do I clean up before I exit? How do I kill threads that never end?
The process running your script ends when your script ends. The operating system is responsible for freeing all resources allocated by that process, including all threads. Do you observe the opposite? If so, file a bug report against your operating system.

Networking using run loop

I have an application which uses an external library for analytics. The problem is that I suspect it does some things synchronously, which blocks my thread and makes the watchdog kill my app after 10 seconds (exception code 0x8badf00d). It is really hard to reproduce (I cannot), but there are quite a few cases "in the wild".
I've read some documentation which suggested that instead of creating another thread I should use run loops. Unfortunately, the more I read about them, the more confused I get. And the last thing I want to do is release a fix which breaks even more things :/
What I am trying to achieve is:
From the main thread, add a task to the run loop which calls just one function: initMyAnalytics(). My thread continues running, even if initMyAnalytics() gets stuck waiting for network data. After initMyAnalytics() finishes, it quietly quits and never gets called again (so it doesn't loop or anything).
Any ideas how to achieve it? Code examples are welcome ;)
Regards!
You don't need a run loop in that case. The purpose of run loops is to process events from various sources sequentially in a particular thread and to stay idle when there is nothing to do. Of course, you could detach a thread, create a run loop, add a source for your function, and run the run loop until the function ends. In the same way, you could use a semi-trailer truck to carry your groceries home.
What you need here are dispatch queues. Dispatch queues are first-in, first-out data structures that run tasks asynchronously. In contrast to run loops, a dispatch queue isn't tied to a particular thread: the worker threads are automatically created and terminated as required.
As you only have one task to execute, you don't need to create a dispatch queue. Instead, you will use an existing global concurrent queue. A concurrent queue executes one or more tasks concurrently, which is perfectly fine in our case. But if we had many tasks to execute and wanted each task to wait for its predecessor to end, we would need to create a serial queue (see the sketch after the code below).
So all you have to do is:
create a task for your function by enclosing it in a block,
get a global queue using dispatch_get_global_queue,
add the task to the queue using dispatch_async:
/* Run initMyAnalytics() asynchronously on a global concurrent queue:
   dispatch_async returns immediately, so the calling thread is never
   blocked, even if initMyAnalytics() waits on the network. */
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    initMyAnalytics();
});
DISPATCH_QUEUE_PRIORITY_DEFAULT is a macro that evaluates to 0. You can get different global queues with different priorities. The second parameter is reserved for future use and should always be 0.
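For completeness, here is a minimal sketch of the serial-queue variant mentioned above (the queue label "com.example.analytics" and the second task are illustrative): tasks submitted to a serial queue run one at a time, in FIFO order.
/* Hypothetical serial-queue variant: each task waits for its predecessor. */
dispatch_queue_t queue = dispatch_queue_create("com.example.analytics", DISPATCH_QUEUE_SERIAL);
dispatch_async(queue, ^{
    initMyAnalytics();   /* runs first */
});
dispatch_async(queue, ^{
    /* runs only after initMyAnalytics() has returned */
});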