Signal is always caught by parent process first

Consider the following piece of code running under Solaris 11.3 (a simplified version of system(3C)):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char **argv) {
    pid_t pid = fork();
    pid_t w;
    int status;

    if (pid == 0) {
        execvp(argv[1], argv + 1);
        perror("Failed to exec");
        exit(127);
    }
    if (pid > 0) {
        w = waitpid(pid, &status, 0);
        if (w == -1) {
            perror("Wait: ");
            exit(1);
        }
        else if (WIFEXITED(status) > 0) {
            printf("\nFinish code: %d\n", WEXITSTATUS(status));
        }
        else {
            printf("\nUnexpected termination of child process.\n");
        }
    }
    if (pid == -1) {
        perror("Failed to fork");
    }
}
The problem I get is that whenever the child is terminated by a signal (for instance, SIGINT), the "Unexpected termination" message is never printed.
The way I see it, the whole process group receives signals from the terminal, and in this case the parent process simply terminates before waitpid(2) returns (which, apparently, happens every time).
If that is the case, I have a follow-up question: how can the parent retrieve information about the signal that terminated the child process without using a signal handler? For example, I could have added another if-else block with a WIFSIGNALED check and a WTERMSIG call on the status variable (in fact, I did, but upon termination with Ctrl+C the program produced no output whatsoever).
So what exactly is happening there, and in what order?

You say, “… whenever the process is finished via a signal
(for instance, SIGINT) …”, but you aren’t specific enough
to enable anybody to answer your question definitively. 
If you are sending a signal to the child process with a kill command,
you have an odd problem. 
But if, as I suspect (and as you suggest when you say
“the whole process group receives signals from the terminal”),
you are just typing Ctrl+C, it’s simple:
When you type an INTR, QUIT, or SUSP character,
the corresponding signal (SIGINT, SIGQUIT, or SIGTSTP) is sent
simultaneously to all processes in the terminal process group.
OK, strictly speaking, it’s not simultaneous. 
It happens in a loop in the terminal driver
(specifically, I believe, the “line discipline” handler), in the kernel. 
No user process execution can occur before this loop completes.
You say “… the parent process simply terminates
before waitpid(2) returns (… every time, apparently).” 
Technically this is true. 
As described above, all processes in the process group
(including your parent and child processes) receive the signal
(essentially) simultaneously. 
Since the parent is not handling the signal, itself,
it terminates before it can possibly do any processing
triggered by the child’s receipt of the signal.
You say “Signal is always caught by parent process first”. 
No; see above. 
And the processes terminate in an unspecified order —
this may be the order in which they appear in the process table
(which is indeterminate),
or determined by some subtle (and, perhaps, undocumented) aspect
of the scheduler’s algorithm.
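As for the follow-up question (how to report the child's terminating signal without a handler), one standard approach, and the one system(3C) itself takes while waiting, is to have the parent ignore SIGINT and SIGQUIT around the waitpid(2) call; the keyboard signal then kills only the child, and the parent survives to inspect WIFSIGNALED/WTERMSIG. A minimal sketch of that idea (my own addition, not part of the answer above):
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }

    /* Ignore the keyboard signals in the parent before forking, the way
     * system(3C) does, so Ctrl+C cannot kill the parent while it waits. */
    struct sigaction ign, old_int, old_quit;
    memset(&ign, 0, sizeof ign);
    ign.sa_handler = SIG_IGN;
    sigemptyset(&ign.sa_mask);
    sigaction(SIGINT, &ign, &old_int);
    sigaction(SIGQUIT, &ign, &old_quit);

    pid_t pid = fork();
    if (pid == -1) {
        perror("Failed to fork");
        return 1;
    }
    if (pid == 0) {
        /* Ignored dispositions survive exec, so restore the previous
         * (default) dispositions in the child before exec'ing. */
        sigaction(SIGINT, &old_int, NULL);
        sigaction(SIGQUIT, &old_quit, NULL);
        execvp(argv[1], argv + 1);
        perror("Failed to exec");
        _exit(127);
    }

    int status;
    if (waitpid(pid, &status, 0) == -1) {
        perror("waitpid");
        return 1;
    }

    /* Restore the parent's previous dispositions. */
    sigaction(SIGINT, &old_int, NULL);
    sigaction(SIGQUIT, &old_quit, NULL);

    if (WIFEXITED(status))
        printf("\nFinish code: %d\n", WEXITSTATUS(status));
    else if (WIFSIGNALED(status))
        printf("\nChild terminated by signal %d\n", WTERMSIG(status));
    return 0;
}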
Related U&L questions:
What is the purpose of abstractions, session, session leader
and process groups?
What are the responsibilities of each Pseudo-Terminal (PTY) component
(software, master side, slave side)?

Does it work OK if you send signals via kill from another tty? I tried this on Linux; it seems to be the same behavior.
I think you're right that shell control signals are passed to the process group... and you have a race. You need the parent to catch and delay them.
What I've done is run "./prog cat". Doing a kill -SIGINT works fine. Doing a Ctrl+C prints nothing. Doing a setsid() in front has the parent terminate, but the child keeps running.


is there an abnormal way to terminate a child process to get certain outputs in this code?

I'm new to operating systems and I found this code, and I don't understand why we can't get certain outputs like abc.
Suppose we have this code in C:
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    if (fork() == 0)
        printf("a");
    else
    {
        printf("b");
        waitpid(-1, NULL, 0);
    }
    printf("c");
    return 0;
}
waitpid() waits for a child process to terminate.
Can the child process be terminated in an abnormal way, so that we can get these outputs: abc, bc?
According to at least the Linux manpage for fork:
RETURN VALUE
On success, the PID of the child process is returned in the parent, and
0 is returned in the child. On failure, -1 is returned in the parent,
no child process is created, and errno is set appropriately.
So if your child process is never created, the entire output will be c from the parent process and nothing from the child process, because it never came to be.
It is also possible that the child process is killed before it can output a or c; then you'll only get the parent's output, bc. Or maybe the parent is killed before it can even fork! There are lots of possibilities, and with good timing (and some calls to the sleep function in between) you could probably reproduce them.
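For example, here is a minimal sketch (my own addition, not from the original post) of that second scenario: the parent kills the child before it can print anything, so only the parent's output, bc, appears. The sleep() and kill() calls are only there to force the timing.
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        sleep(5);           /* give the parent time to send SIGKILL */
        printf("a");        /* never reached: the child dies during sleep() */
    } else {
        kill(pid, SIGKILL); /* abnormal termination of the child */
        printf("b");
        waitpid(-1, NULL, 0);
    }
    printf("c\n");
    return 0;
}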

AnyLogic: Queue TimeOut blocks flow

I have a pretty simple AnyLogic DE model where POs are launched regularly, and a certain amount of material gets to the incoming Queue in one shot. Then the Manufacturing process starts using that material at a regular rate, but I want to check whether the material in the queue gets outdated, so I'm using the TimeOut option of that queue in order to scrap the outdated material (older than 40 weeks).
The problem is that every time some material gets scrapped through this TimeOut exit, the downstream Manufacturing process "stops" pulling more material instead of continuing, and it does not get restarted until a new batch of material is received into the Queue.
What am I doing wrong here? Thanks a lot in advance!
Your situation is interesting because there doesn't seem to be anything wrong with what you're doing. Even so, I will provide you with a workaround: instead of the Queue block, use a Wait block. You can assign a timeout and link the timeout port just like you did for the Queue.
In the On Enter field of the Wait block (which I will assume is named Fridge), write the following code:
if( MFG.size() < MFG.capacity ) {
    self.free(agent);
}
In the On Enter of the MFG block, write the following:
if( self.size() < self.capacity && Fridge.size() > 0 ) {
    Fridge.free(Fridge.get(0));
}
And finally, in the On Exit of your MFG block, write the following:
if( Fridge.size() > 0 ) {
    Fridge.free(Fridge.get(0));
}
What we are doing above is manually pushing the agents: each time an agent is processed, the model checks whether there is capacity to send more, and if so, a new agent is sent.
I know this is an unpleasant workaround, but it provides you with a solution until AnyLogic support can figure it out.

perl gracefully shut down child processes

I'm new to Perl. I have a process that uses POE::Wheel::Run to kick off multiple child processes. I'm trying to find a way to gracefully stop the wheel processes when a SIGINT signal is received.
I've been able to gracefully stop the child processes when they detect that the parent isn't running.
I have a sig_int_handler that kills all processes (parent and children).
I cannot find a way for the child processes to detect and act on a sig_int_flag set to true. Is this possible?
I'd like it to:
receive SIGINT;
set a variable such as sig_int_flag = 1;
have the handler report that the signal was received, then sleep for 30 seconds;
after 30 seconds, kill all processes.
Meanwhile, the wheel is in a loop that:
processes a file;
checks whether the parent PID is gone or sig_int_flag == 1 (the flag check is not working), and breaks out if so;
otherwise processes the next file.
The idea is to give the wheel 30 seconds to finish what it's doing; if the child processes haven't exited on their own by then, we kill them.
Is this possible?
Thanks
I cannot find a way for the child processes to detect and act on a sig_int_flag set to true. Is this possible?
For a child to see the same flag that the parent sets, the flag has to reside in shared memory.
Another way is to send a signal to the child, where a handler in the child sets the flag.
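As an illustration of that second approach, here is a minimal C sketch (not Perl/POE code; process_next_file() is a hypothetical placeholder): the child installs a SIGINT handler that only sets a flag, and the worker loop checks that flag between files. The same structure carries over to a POE::Wheel::Run child, with the parent relaying SIGINT to each child and only escalating to a hard kill after the 30-second grace period.
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t sig_int_flag = 0;

static void on_sigint(int signo)
{
    (void)signo;
    sig_int_flag = 1;   /* only set the flag; the loop does the real work */
}

static void process_next_file(void)
{
    sleep(1);           /* hypothetical stand-in for processing one file */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);

    /* Worker loop: finish the current file, then check the flag. */
    while (!sig_int_flag)
        process_next_file();
    fprintf(stderr, "SIGINT noticed, shutting down cleanly\n");
    return 0;
}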

Using `chan pending output` instead of writable fileevent

Yo, I've written a server with a simple protocol: the client sends a line, the server sends a line back in response, repeat. To prevent a client from filling Tcl's output buffer by sending lots of lines but not accepting data back, can I just check chan pending output instead of using the writable fileevent?
proc respond {stream msg} {
    if {[chan pending output $stream] <= 1024} {
        puts $stream $msg
    } else {
        #close $stream
    }
}
For output, chan pending output will correctly describe the number of bytes waiting in the output queue. Normally, that value will be bounded by the -buffersize value that you chan configure (or fconfigure) it to have.
That value will only be exceeded when the channel is non-blocking; with a blocking channel, when the value would go over it, instead there's a blocking write to the underlying device (socket, pipe, file, serial line, whatever) so by the time you could see that it went over, it's back under the limit again.
But if you're using non-blocking channels, you really should use chan event (or fileevent). Luckily, for the actual writes Tcl already does this for you automatically; the single most useful thing you could want from a writable event is built in. In practice, the most common use of a writable event is detecting when an async socket connection becomes ready for service.
So what you are doing will work, but you'll have to think carefully about what to do if the output buffer is “getting full”; the idea that a message can need to be delayed is a place where a simple abstraction tends to become leaky. With 8.6's coroutines, you could (probably) do a transparent suspend or something like that, but getting that sort of thing right can take a little thought. (For example, a GUI client might need to show a busy indicator and put things into a state where the user can't enter more requests.)

Code with a potential deadlock

// down = acquire the resource
// up = release the resource
typedef int semaphore;
semaphore resource_1;
semaphore resource_2;

void process_A(void) {
    down(&resource_1);
    down(&resource_2);
    use_both_resources();
    up(&resource_2);
    up(&resource_1);
}

void process_B(void) {
    down(&resource_2);
    down(&resource_1);
    use_both_resources();
    up(&resource_1);
    up(&resource_2);
}
Why does this code cause a deadlock?
If we change the code of process_B so that both processes ask for the resources in the same order, as follows:
void process_B(void) {
    down(&resource_1);
    down(&resource_2);
    use_both_resources();
    up(&resource_2);
    up(&resource_1);
}
Then there is no deadlock.
Why so?
Imagine that process A is running, tries to get resource_1, and gets it.
Now process B takes control and tries to get resource_2, and gets it. Then process B tries to get resource_1 and does not get it, because it belongs to process A. So process B goes to sleep.
Process A gets control again and tries to get resource_2, but it belongs to process B. Now it goes to sleep too.
At this point, process A is waiting for resource_2 and process B is waiting for resource_1.
If you change the order, process B will never lock resource_2 unless it gets resource_1 first, and the same holds for process A.
They will never be deadlocked.
A necessary condition for a deadlock is a cycle of resource acquisitions. The first example can construct such a cycle (A holds resource_1 and waits for resource_2, while B holds resource_2 and waits for resource_1). The second example acquires the resources in a fixed order, which makes a cycle, and hence a deadlock, impossible.
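For completeness, here is a minimal runnable sketch of the deadlock-free version, with the textbook down/up mapped onto POSIX sem_wait/sem_post and the two "processes" run as threads (that mapping is my assumption, not part of the original question). Compile with -pthread; swapping the two sem_wait calls in process_B back to the original order reintroduces the potential deadlock.
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t resource_1;
static sem_t resource_2;

static void *process_A(void *arg)
{
    (void)arg;
    sem_wait(&resource_1);   /* down(&resource_1) */
    sem_wait(&resource_2);   /* down(&resource_2) */
    puts("A uses both resources");
    sem_post(&resource_2);   /* up(&resource_2) */
    sem_post(&resource_1);   /* up(&resource_1) */
    return NULL;
}

static void *process_B(void *arg)
{
    (void)arg;
    sem_wait(&resource_1);   /* same order as A, so no cycle can form */
    sem_wait(&resource_2);
    puts("B uses both resources");
    sem_post(&resource_2);
    sem_post(&resource_1);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    sem_init(&resource_1, 0, 1);   /* both resources start available */
    sem_init(&resource_2, 0, 1);
    pthread_create(&a, NULL, process_A, NULL);
    pthread_create(&b, NULL, process_B, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}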