I'm new to Perl. I have a process that uses POE::Wheel::Run to kick off multiple child processes. I'm trying to find a way to gracefully stop the wheel processes when a SIGINT signal is received.
I've been able to gracefully stop the child processes when they detect that the parent isn't running.
I have a sig_int_handler that kills all processes (parent and children).
I cannot find a way for the child processes to detect and act on a sig_int_flag set to true. Is this possible?
I'd like it to ...
receive SIGINT
set a variable, e.g. sig_int_flag = 1
have the handler send a message that the signal was received, then sleep for 30 seconds
after 30 seconds, kill all processes.
Meanwhile, the wheel is in a loop that
processes a file
checks for, and breaks out of the loop, if the parent PID is not detected or if sig_int_flag == 1 (not working)
otherwise processes the next file.
The idea is to give the wheel 30 seconds to finish what it's doing. If the child processes have not exited on their own by then, we kill them.
Is this possible?
Thanks
I cannot find a way for the child processes to detect and act on a sig_int_flag set to true. Is this possible?
For a child to see the same flag that the parent sets, the flag has to reside in shared memory.
Another way is to send a signal to the child, where a handler sets the flag.
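A minimal sketch of that second approach in C, assuming a plain forked worker (the same handler-sets-a-flag pattern maps to Perl with a %SIG entry in the wheel's child and kill in the parent); the per-file work is a hypothetical placeholder:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Flag set by the signal handler; sig_atomic_t keeps the write async-signal-safe. */
static volatile sig_atomic_t stop_flag = 0;

static void on_term(int sig)
{
    (void)sig;
    stop_flag = 1;              /* only set the flag; real cleanup happens in the loop */
}

int main(void)
{
    signal(SIGTERM, on_term);   /* parent relays its SIGINT to us, e.g. as SIGTERM */

    while (!stop_flag) {
        /* process_next_file();   hypothetical per-file work */
        sleep(1);
        if (getppid() == 1)     /* parent gone (reparented to init): bail out */
            break;
    }
    puts("child shutting down cleanly");
    return 0;
}

The parent's SIGINT handler would then forward the signal to each child PID, sleep 30 seconds, and send SIGKILL to whatever is still alive.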
I'm new to operating systems. I found this code, and I don't understand why we can't get certain outputs like abc.
Suppose we have this code in C:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    if (fork() == 0)
        printf("a");
    else
    {
        printf("b");
        waitpid(-1, NULL, 0);   /* wait for any child to terminate */
    }
    printf("c");
    return 0;
}
waitpid() waits for a child process to terminate.
Can the child process be terminated in an abnormal way, so that we can get outputs like abc or bc?
According to at least the Linux manpage for fork:
RETURN VALUE
On success, the PID of the child process is returned in the parent, and
0 is returned in the child. On failure, -1 is returned in the parent,
no child process is created, and errno is set appropriately.
So if the child process is never created (fork() fails and returns -1), the else branch runs in the lone parent process: it prints b, waitpid fails immediately because there is no child, and then it prints c, giving bc with no child output at all.
It is also possible for the child to be killed after printing a but before it reaches its final printf; the parent's wait then returns and it prints its b and c, so, depending on scheduling (and provided the child's output was actually flushed), you can see abc. There are lots of possibilities, and with good timing (and some calls to the sleep function in between) you could probably reproduce them.
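For instance, here is a small self-contained sketch (not the original code; the sleep() calls, the fflush(), and the SIGKILL exist only to force the timing) that reliably produces abc by killing the child before its final printf:

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {             /* child */
        printf("a");
        fflush(stdout);         /* make sure "a" is visible even if we get killed */
        sleep(5);               /* killed during this sleep ... */
        printf("c");            /* ... so this line is never reached */
        return 0;
    }
    sleep(1);                   /* let the child print its "a" first */
    printf("b");
    fflush(stdout);
    kill(pid, SIGKILL);         /* terminate the child abnormally */
    waitpid(pid, NULL, 0);
    printf("c\n");              /* overall output: abc */
    return 0;
}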
I have a pretty simple AnyLogic DE model where POs are launched regularly and a certain amount of material arrives at the incoming Queue in one shot (see sample picture below). Then the Manufacturing process starts using that material at a regular rate, but I want to check whether the material in the queue gets outdated, so I'm using the TimeOut option of that queue in order to scrap material older than 40 weeks.
The problem is that every time some material gets scrapped through this Timeout exit, the downstream Manufacturing process "stops" pulling more material instead of continuing, and it does not get restarted until a new batch of material arrives in the Queue.
What am I doing wrong here? Thanks a lot in advance!!
Kindest regards
Your situation is interesting because there doesn't seem to be anything wrong with what you're doing. So even though what you are doing seems to be correct, I will provide you with a workaround: instead of the Queue block, use a Wait block. You can assign a timeout and link the timeout port just like you did for the queue (see image at the end of the answer).
In the On Enter field of the wait block (which I will assume is named Fridge), write the following code:
if( MFG.size() < MFG.capacity ) {
self.free(agent);
}
In the On Enter of MFG block write the following:
if( self.size() < self.capacity && Fridge.size() > 0 ) {
Fridge.free(Fridge.get(0));
}
And finally, in the On Exit of your MFG block write the following:
if( Fridge.size() > 0 ) {
Fridge.free(Fridge.get(0));
}
What we are doing above is manually pushing the agents: each time an agent is processed, the model checks whether there is capacity to send more; if so, a new agent is sent.
I know this is an unpleasant workaround, but it provides you with a solution until AnyLogic support can figure it out.
Yesterday I had an interview and was asked this question about a code snippet using fork():
void main()
{
    ............
    for (int k = 1; k <= 10; k++)
    {
        pid[k] = fork();
        if (!pid[k])
            execvp(.....);
    }
}
According to my understanding, I said there would be 1024 processes in total, including the parent: 2^n - 1 = 1023 children plus 1 parent, where n = total forks.
But, the interviewer replied that my answer was wrong.
What is wrong with my understanding?
Given this code
pid[k]=fork();
if(!pid[k])
execvp(.....);
and reading the man page of fork which states that
On success, the PID of the child process is returned in the parent,
and 0 is returned in the child.
we know that the child process will perform the exec call (and go on executing a different program), whereas the parent will loop and create another child.
This means that a child will be created for each iteration of the loop, in this case 10 times. So, the answer is 10 children + 1 parent = 11.
Now, if the program that gets started by exec is the same program, the fun will stop only when the computer's memory is exhausted: on every iteration 10 programs will each create 10 children, which will each create 10 children, and so on. A peculiarity of fork() is that parent and child get an image of the same variables (which would lead to a predictable number of children, i.e. some figure related to a power of 2); obviously this isn't true when a program gets exec'd, which means that available memory will be the only limit.
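To make the count concrete, here is a small self-contained sketch (using echo as a stand-in for whatever gets exec'd) showing why each child drops out of the loop after exec, leaving exactly 10 children plus the original parent:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid[11];
    for (int k = 1; k <= 10; k++) {
        pid[k] = fork();
        if (!pid[k]) {                      /* child: replace image, never loop again */
            char *args[] = { "echo", "child", NULL };
            execvp(args[0], args);
            _exit(127);                     /* only reached if exec fails */
        }
    }
    /* Only the original parent reaches this point; reap the 10 children. */
    while (wait(NULL) > 0)
        ;
    printf("parent %d reaped all children\n", (int)getpid());
    return 0;
}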
Consider the following piece of code running under Solaris 11.3 (a simplified version of system(3C)):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char **argv) {
pid_t pid = fork();
pid_t w;
int status;
if (pid == 0) {
execvp(argv[1], argv + 1);
perror("Failed to exec");
exit(127);
}
if (pid > 0) {
w = waitpid(pid, &status, 0);
if (w == -1) {
perror("Wait: ");
exit(1);
}
else if (WIFEXITED(status) > 0) {
printf("\nFinish code: %d\n", WEXITSTATUS(status));
}
else {
printf("\nUnexpected termination of child process.\n");
}
}
if (pid == -1) {
perror("Failed to fork");
}
}
The problem I get is that whenever the process is finished via a signal (for instance, SIGINT), the "Unexpected termination" message is never printed.
The way I see it, the whole process group receives signals from the terminal, and in this case the parent process simply terminates before waitpid(2) returns (which apparently happens every time).
If that is the case, I have a follow-up question: how can I retrieve information about the signal that terminated the child process from the parent, without using a signal handler? For example, I could have added another if-else block with a WIFSIGNALED check and a WTERMSIG call passing the variable status. (In fact, I did, but upon termination with Ctrl+C the program delivered no output whatsoever.)
So what exactly is happening there, and in what order?
You say, “… whenever the process is finished via a signal (for instance, SIGINT) …”, but you aren’t specific enough to enable anybody to answer your question definitively. If you are sending a signal to the child process with a kill command, you have an odd problem. But if, as I suspect (and as you suggest when you say “the whole process group receives signals from the terminal”), you are just typing Ctrl+C, it’s simple: when you type an INTR, QUIT, or SUSP character, the corresponding signal (SIGINT, SIGQUIT, or SIGTSTP) is sent simultaneously to all processes in the terminal process group. OK, strictly speaking, it’s not simultaneous; it happens in a loop in the terminal driver (specifically, I believe, the “line discipline” handler), in the kernel, and no user process execution can occur before this loop completes.

You say “… the parent process simply terminates before waitpid(2) returns (… every time, apparently).” Technically this is true. As described above, all processes in the process group (including your parent and child processes) receive the signal (essentially) simultaneously. Since the parent is not handling the signal itself, it terminates before it can possibly do any processing triggered by the child’s receipt of the signal.

You say “Signal is always caught by parent process first”. No; see above. And the processes terminate in an unspecified order: this may be the order in which they appear in the process table (which is indeterminate), or it may be determined by some subtle (and, perhaps, undocumented) aspect of the scheduler’s algorithm.
Related U&L questions:
What is the purpose of abstractions, session, session leader and process groups?
What are the responsibilities of each Pseudo-Terminal (PTY) component (software, master side, slave side)?
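One way to answer the follow-up question (getting WTERMSIG out of the parent without installing a handler) is to do what system(3C) itself does: ignore SIGINT and SIGQUIT in the parent around the fork/waitpid, and restore the default disposition in the child before exec. A sketch under that assumption, so the parent survives Ctrl+C and can report how the child died:

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }

    /* As system(3C) does: the parent ignores the keyboard signals so the
       terminal-generated SIGINT kills only the child. */
    signal(SIGINT, SIG_IGN);
    signal(SIGQUIT, SIG_IGN);

    pid_t pid = fork();
    if (pid == -1) {
        perror("Failed to fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: restore default dispositions before exec'ing the command. */
        signal(SIGINT, SIG_DFL);
        signal(SIGQUIT, SIG_DFL);
        execvp(argv[1], argv + 1);
        perror("Failed to exec");
        _exit(127);
    }

    int status;
    if (waitpid(pid, &status, 0) == -1) {
        perror("Wait");
        return 1;
    }
    if (WIFEXITED(status))
        printf("\nFinish code: %d\n", WEXITSTATUS(status));
    else if (WIFSIGNALED(status))
        printf("\nChild terminated by signal %d\n", WTERMSIG(status));
    return 0;
}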
Does it work OK if you send signals via a kill from another tty? I tried this on Linux; it seems to be the same behavior.
I think you're right that shell control signals are passed to the process group... and you have a race. You need the parent to catch and delay them.
What I've done is run "./prog cat".
Doing a kill -SIGINT works fine.
Doing a Ctrl+C prints nothing.
Doing a setsid() in front has the parent terminate, but the child keeps running.
I have the following process with a message boundary event that has cancelActivity set to false, so that after Should cancel the Sub Process can continue where it was before the event was received, in case No was selected.
How could I simulate cancelActivity being set to true in case the user selects Yes (i.e. cancel/stop the Sub Process when the No end is reached)?
Please ask if I wasn't clear about this.
With the boundary event attached to the sub process, there is no way of going to another task after should cancel.
You can neither use link events (not allowed from parent to sub process), nor a simple sequence flow (not allowed between two processes).
So I guess you need to attach the message event to each relevant task within the sub process, or you need to use two separate boundary events (one interrupting and one non-interrupting).
Further to the answer above:
The process flow doesn't really capture your requirement properly.
You should have the email event running in parallel to your sub process.
On receipt of the email, flow directly into the Should cancel human/user task.
If yes, send a signal event (signals are easier to implement than messages) that is captured by a signal boundary event on the sub process, and simply flow to the end (i.e. terminate).
If no, you simply exit (you may need to start another email receiver, depending on your requirements).
This way, the sub process boundary event is not triggered until after you have made your decision to terminate or not.
Hope this helps.
Maybe you need the BPMN definition below.