Can only the parent process terminate (kill) its children, or can another process with no relation to a given process kill it as well?
Which processes can kill a specific process?
Any process that knows the process ID of another process running under the same user ID can terminate it by sending that process a SIGQUIT signal using
kill(pid, SIGQUIT).
You need to include <sys/types.h> and <signal.h> to use this system call.
The kill(2) man page states:
For a process to have permission to send a signal it must either be privileged (under Linux: have the CAP_KILL capability), or the real or effective user ID of the sending process must equal the real or saved set-user-ID of the target process. In the case of SIGCONT it suffices when the sending and receiving processes belong to the same session.
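As a minimal sketch (the target PID here is hypothetical), sending SIGQUIT from C looks like this; kill() fails with EPERM when the permission rules above are not met:

#include <sys/types.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    pid_t target = 12345;                /* hypothetical PID of the process to signal */

    if (kill(target, SIGQUIT) == -1) {
        perror("kill");                  /* EPERM: not permitted, ESRCH: no such process */
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}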
I am writing a wallboard for the Asterisk queue system. The documentation says that when a call is transferred away by an agent, an ATTENDEDTRANSFER (or BLINDTRANSFER) event should be added to the queue_log file automatically. Unfortunately there is no line for any transferred call in the log file (queue_log in my case). Is there a setting that needs to be changed so the system logs them properly?
When I check the CEL files, I do see the transfer events, so the system is logging correctly for CEL but not for queue_log.
I tried transferring the call to another agent, to an IVR, and to another user who is not an agent of any queue. The result is the same: no log entry for the transfer.
Any suggestions?
I use the following:
Asterisk Version: 13.22.0
FreePBX 14.0.5.25
All trunks and clients are connected via SIP
If your phone performs the transfer via its internal features, there will be no log entries; you have to parse AMI events to get the information you need.
Writing your own queue wallboard is a VERY hard task; the queue module really has a lot of issues.
I can recommend looking at existing products such as https://www.asternic.net or QueueMetrics.
I have a queue with a few agents, the strategy set to leastrecent, and the timeout set to 10 seconds. When the phone rings and the agent pushes the 'busy' button, I want Asterisk to ring the next free agent without waiting for the configured timeout.
Is it possible to set this up?
Try this in queue configuration:
timeoutrestart=yes
ringinuse=no
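For context, here is a minimal sketch of where these options sit in queues.conf alongside the settings from the question (the queue name and member are made up; on FreePBX you would set the equivalents in the queue's GUI options rather than editing the file by hand):

[support]                    ; hypothetical queue name
strategy=leastrecent
timeout=10
timeoutrestart=yes           ; give the next ring attempt a fresh timeout after a BUSY/CONGESTION
ringinuse=no                 ; do not ring members whose devices are already in use
member => SIP/101            ; example static member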
What I want to know is this: if the controlling process of a socket is set with gen_tcp:controlling_process(Socket, Pid), does the process Pid get terminated when the socket closes? Under what conditions does the socket send the message {tcp_closed, Socket}? Is there a way to prevent the socket on the server side from closing, or is closing always normal? Also, is there a specific way to terminate a process when you know its Pid?
gen_tcp:controlling_process(Socket, Pid) is used to give control of a socket to the process Pid. It implies nothing about the behavior of that process when the socket is closed.
There are two cases to consider:
You open the server socket in {active,false} mode. In this case the server learns that the socket is closed when it calls gen_tcp:recv(Sock, Len) and gets {error, closed} instead of the expected {ok, Binary}. I recommend this mode if you intend to use gen_tcp:controlling_process(Socket, Pid), because it allows you to avoid a race condition in your code.
You open the server socket in {active,true} or {active,once} mode. In this case the server receives a {tcp_closed, Socket} message, and there is a race condition; see the topic Erlang: Avoiding race condition with gen_tcp:controlling_process
I don't think it is the server's role to prevent a socket from closing; rather, it should always be ready to accept a new connection.
Lastly, it is always possible to terminate a process: use your own protocol for a graceful stop, for example Pid ! {stop, Reason}, or the more brutal exit(Pid, Reason).
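To make the passive-mode case concrete, here is a minimal sketch (the module name, echo behavior, and socket options are made up) of a server that hands each accepted socket to a fresh process with gen_tcp:controlling_process/2; that process simply returns, and therefore terminates, when gen_tcp:recv/2 reports {error, closed}:

-module(echo_sketch).
-export([start/1]).

start(Port) ->
    {ok, LSock} = gen_tcp:listen(Port, [binary, {active, false}, {reuseaddr, true}]),
    accept_loop(LSock).

accept_loop(LSock) ->
    {ok, Sock} = gen_tcp:accept(LSock),
    %% hand the socket over to a dedicated process, then go back to accepting
    Pid = spawn(fun() -> recv_loop(Sock) end),
    ok = gen_tcp:controlling_process(Sock, Pid),
    accept_loop(LSock).

recv_loop(Sock) ->
    case gen_tcp:recv(Sock, 0) of
        {ok, Data} ->
            ok = gen_tcp:send(Sock, Data),   %% echo the data back, then keep reading
            recv_loop(Sock);
        {error, closed} ->
            ok                               %% peer closed: let the process end normally
    end.

In {active,true} or {active,once} mode, the same end-of-connection notification would instead arrive as a {tcp_closed, Socket} message in the handler's mailbox.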
I'm reading the perlipc perldoc and was confused by the section entitled "Interactive Client with IO::Socket". It shows a client program that connects with some server and sends a message, receives a response, sends another message, receives a response, ad infinitum. The author, Tom Christiansen, states that writing the client as a single-process program would be "much harder", and proceeds to show an implementation that forks a child process dedicated to reading STDIN and sending to the server, while the parent process reads from the server and writes to STDOUT.
I understand how this works, but I don't understand why it wouldn't be much simpler (rather than harder) to write it as a single-process program:
while (1) {
    read from STDIN
    write to server
    read from server
    write to STDOUT
}
Maybe I'm missing the point, but it seems to me this is a bad example. Would you ever really design a client/server application protocol where the server might suddenly think of something else to say, interjecting characters onto the terminal while the client is in the middle of typing its next query?
UPDATE 1: I understand that the example permits asynchronicity; what I'm puzzled about is why concurrent I/O between a CLI client and a server would ever be desirable (due to the jumbling of input and output of text on the terminal). I can't think of any CLI app - client/server or not - that does that.
UPDATE 2: Oh!! Duh... my solution only works if there's exactly one line sent from the server for every line sent by the client. If the server can send an unknown number of lines in response, I'd have to sit in a "read from server" loop - which would never end, unless my protocol defined some special "end of response" token. By handling the sending and receiving in separate processes, you leave it up to the user at the terminal to detect "end of response".
(I wonder whether it's the client, or the server, that typically generates a command prompt? I'd always assumed it was the client, but now I'm thinking it makes more sense for it to be the server.)
Because the <STDIN> read request can block, doing the same thing in a single process requires more complicated, asynchronous handling of the input/output functions:
while (1) {
    if there is data in STDIN
        read from STDIN
        write to server
    if there is data from server
        read from server
        write to STDOUT
}
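For what it's worth, here is a minimal single-process sketch of that loop using IO::Select (the host, port, and buffer size are made up); it uses sysread rather than buffered reads so that select's readiness reports stay accurate:

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;

my $sock = IO::Socket::INET->new(PeerAddr => 'localhost', PeerPort => 12345)
    or die "connect: $!";
my $sel = IO::Select->new(\*STDIN, $sock);
$| = 1;                                              # flush STDOUT as soon as we print

while (1) {
    # can_read blocks until STDIN or the socket has something for us
    for my $fh ($sel->can_read) {
        if ($fh == $sock) {
            sysread($sock, my $buf, 4096) or exit;   # 0/undef: server went away
            print STDOUT $buf;
        }
        else {
            sysread(STDIN, my $buf, 4096) or exit;   # EOF on STDIN: user is done
            syswrite($sock, $buf);
        }
    }
}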
I'm using $r->pool->cleanup_register(\&cleanup); to run a subroutine after a page has been processed and printed to the client. My hope was that the client would see the complete page, and Apache could continue doing some processing in the background that takes a few seconds.
But the client browser hangs until the cleanup sub has returned. Is there a way to get Apache to finalise the connection with the client before all my code has returned?
I'm convinced I've done this before, but I can't find it again.
Use a job queue system and do the long operation completely asynchronously -- just schedule the operation during the web request. A job queue also handles peak load situations better than doing something expensive within the web server processes themselves.
You want to flush the buffer. It doesn't finalize the connection, but your client will see the output before the task completes.
use Apache2::RequestRec ();                 # $r->content_type
use Apache2::RequestIO ();                  # $r->print, $r->rflush
use Apache2::Const -compile => qw(OK);

sub handler {
    my $r = shift;
    $r->content_type('text/html');
    $r->rflush;                             # send the headers out to the client now
    $r->print(long_operation());            # the body follows once long_operation() returns
    return Apache2::Const::OK;
}