I'm playing around with Mojolicious and websockets. I want to send the output of multiple external commands on the server to the webpage. I have no problems with connecting and receiving messages, but I also want to send a message back to the server to stop an external command while letting the others keep sending messages back to the client. I also want to stop checking the external command once it exits.
The external command is simply a one-liner that spits out an integer every few seconds. I have two websockets that display the numbers in separate divs. Clicking either stop button sends the message, but that's where I'm stuck: how do I shut down that websocket (and only that websocket) and stop its external command?
When I connect the websocket, I run the external command and set up a Mojo::IOLoop->recurring to check if there's output.
When I want to stop, I figure that I should call Mojo::IOLoop->remove($id), but that doesn't seem to completely remove it and I get error messages like Mojo::Reactor::Poll: Timer failed: Can't call method "is_websocket" on an undefined value.
If I call finish on the controller object to shut down the websocket, it seems to stop everything.
I have the entire Mojolicious::Lite app as a gist, but here are the relevant parts:
use feature qw(signatures);
no warnings qw(experimental::signatures);

## other boilerplate redacted

websocket '/find' => sub ( $c ) {
    state $loop = Mojo::IOLoop->singleton;
    app->log->debug( "websocket for find" );
    $c->inactivity_timeout( 50 );

    my $id;
    $c->on( message => sub ( $ws, $message ) {
        my $json    = decode_json( $message );
        my $command = $json->{c};
        my $name    = $json->{n};
        app->log->debug( "Got $command command for $name" );

        if( $command eq "start" ) {
            $id = run_command( $ws );
            app->log->debug( "run_command for $name returned [$id]" );
        }
        elsif( $command eq "stop" ) {
            app->log->debug( "stopping loop for $name [$id]" );
            # XXX What should I do here?
            # $ws->finish;
            # $loop->remove( $id );
        }
        elsif( $command eq "open" ) {
            app->log->debug( "opening websocket for $name" );
        }
    } );

    $c->on( finish => sub ( $c, $code ) {
        app->log->debug("WebSocket closed with status $code");
    } );
};

app->start;

sub run_command ( $ws ) {
    app->log->debug( "In run_command: $ws" );
    open my $fh, "$^X -le '\$|++; while(1) { print int rand(100); sleep 3 }' |";
    $fh->autoflush;

    my $id;
    $id = Mojo::IOLoop->recurring( 1 => sub ($loop) {
        my $m = <$fh>;
        unless( defined $m ) {
            app->log->debug( "Closing down recurring loop from the inside [$id]" );
            # XXX: what should I do here?
            close $fh;
            return;
        }
        chomp $m;
        app->log->debug( "Input [$m] for [$id] from $fh" );
        $ws->send( encode_json( { 'm' => $m } ) );
    });

    return $id;
}
Other questions that may benefit from this answer:
Output command to socket without buffering using Mojo::IOLoop
I played around with this a bit. Logioniz's answer made me think that I shouldn't be polling or handling the filehandle details myself. I still don't know where it was hanging.
Instead, I used Mojo::Reactor's io to set a filehandle to monitor:
sub run_command ( $ws ) {
    my $pid = open my $fh, "$^X -le '\$|++; print \$\$; while(1) { print int rand(100); sleep 3 }' |";
    $fh->autoflush;

    my $reactor = Mojo::IOLoop->singleton->reactor->io(
        $fh => sub ($reactor, $writable) {
            my $m = <$fh>;
            chomp $m;
            $ws->send( encode_json( { 'm' => $m } ) );
        }
    );

    return ( $fh, $pid );
}
When I'm done with that command, I unwatch the filehandle, kill the process, and finish the websocket:
elsif( $command eq "stop" ) {
    $loop->reactor->watch( $fh, 0, 0 );
    kill 'KILL', $pid or app->log->debug( "Could not kill $pid: $!" );
    $ws->finish;
}
I still don't know why remove($fh) doesn't work. I figure I'm leaking some IOLoop things doing it this way.
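For reference, a fuller teardown might look like the following sketch. The remove and waitpid calls are additions beyond what's in the gist (remove is Mojo::Reactor's documented method for dropping a handle's watcher); the idea is to drop the watcher, close our end of the pipe, and reap the child so nothing leaks:

elsif( $command eq "stop" ) {
    $loop->reactor->watch( $fh, 0, 0 );   # stop watching for readable and writable
    $loop->reactor->remove( $fh );        # drop the I/O watcher entirely
    close $fh;                            # close our end of the pipe
    kill 'TERM', $pid or app->log->debug( "Could not signal $pid: $!" );
    waitpid $pid, 0;                      # reap the child so it is not a zombie
    $ws->finish;
}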
I think you are blocking the event loop: your recurring callback runs every second, but my $m = <$fh>; waits about 2-3 seconds for a result. So you block the event loop.
I think so because when I run your app, the finish event is not called on inactivity timeout, but the recurring event is. The finish event must always be called on inactivity timeout.
I think your code must run in a separate process to avoid blocking the event loop.
Try a module that executes the command in a separate process.
I wrote a small example.
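The linked example did not survive the copy. As a stand-in, here is a minimal sketch of the idea using Mojo::IOLoop::ReadWriteFork, one CPAN module that runs a command in a child process under the event loop and streams its output (event names follow my reading of its docs; treat this as a sketch, not the answerer's original code):

use Mojo::IOLoop::ReadWriteFork;

sub run_command ( $ws ) {
    my $fork = Mojo::IOLoop::ReadWriteFork->new;

    # Forward each chunk of child output to the websocket.
    # Note: a chunk may hold several lines, or a partial one.
    $fork->on( read => sub ( $fork, $bytes ) {
        $ws->send( encode_json( { 'm' => $bytes } ) );
    } );

    # Emitted when the child exits; nothing is left to poll.
    $fork->on( finish => sub ( $fork, $exit_value, $signal ) {
        app->log->debug( "Child exited with status $exit_value" );
    } );

    $fork->run( $^X, '-le', '$|++; while(1) { print int rand(100); sleep 3 }' );
    return $fork;    # keep this around; call $fork->kill(15) later to stop it
}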
Related
For some reason I am implementing some specific network protocol similar to STOMP in plain pure Perl.
The connection can be either a direct network socket, or an SSL tunnel provided by openssl s_client created by a call to open3 (no IO::Socket::SSL available on the host).
Depending on the dialog, a request to the server may or may not have a response, or may have multiple responses. How can I test the file descriptors for the existence of data? Currently, when no data is available, the code waits until the defined timeout expires.
EDIT: I probably have a vocabulary problem (file handle vs. file descriptor) that is hampering my research. I just found that eof() may help, but I cannot use it correctly yet.
While it is a bit complicated to provide an SSCCE, here are the interesting parts of the code:
# creation of a direct socket connection
sub connect_direct_socket {
    my ($host, $port) = @_;
    my $sock = IO::Socket::INET->new(PeerAddr => $host,
                                     PeerPort => $port,
                                     Proto    => 'tcp') or die "Can't connect to $host:$port\n";
    $sock->autoflush(1);
    say STDERR "* connected to $host port $port" if $args{verbose} || $args{debug};

    return $sock, $sock, undef;
}
# for HTTPS, we are "cheating" by creating a tunnel with OpenSSL in s_client mode
my $tunnel_pid;
sub connect_ssl_tunnel {
    my ($dest) = @_;
    my ($host, $port);
    $host = $dest->{host};
    $port = $dest->{port};

    my $cmd = "openssl s_client -connect ${host}:${port} -servername ${host} -quiet"; # -quiet -verify_quiet -partial_chain
    $tunnel_pid = open3(*CMD_IN, *CMD_OUT, *CMD_ERR, $cmd);
    say STDERR "* connected via OpenSSL to $host:$port" if $args{verbose} || $args{debug};
    say STDERR "* command = $cmd" if $args{debug};

    $SIG{CHLD} = sub {
        print STDERR "* REAPER: status $? on ${tunnel_pid}\n"
            if waitpid($tunnel_pid, 0) > 0 && $args{debug};
    };
    return *CMD_IN, *CMD_OUT, *CMD_ERR;
}
# later
($OUT, $IN, $ERR) = connect_direct_socket($url->{host}, $url->{port});
# or
($OUT, $IN, $ERR) = connect_ssl_tunnel($url);
# then I am sending with a
print $OUT $request;
# and read the response with
my $selector = IO::Select->new();
$selector->add($IN);

FRAME:
while (my @ready = $selector->can_read($args{'max-wait'} || $def_max_wait)) {
    last unless @ready;
    foreach my $fh (@ready) {
        if (fileno($fh) == fileno($IN)) {
            my $buf_size = 1024 * 1024;
            my $block = $fh->sysread(my $buf, $buf_size);
            if ($block) {
                if ($buf =~ s/^\n*([^\n].*?)\n\n//s) {
                    # process data here
                }
                if ($buf =~ s/^(.*?)\000\n*//s) {
                    goto EOR;
                    # next FRAME;
                }
            }
            $selector->remove($fh) if eof($fh);
        }
    }
}
EOR:
EDIT 2 and epilogue
To summarize, depending on the protocol dialog:
a request can have an expected response (for instance a CONNECT must return a CONNECTED)
a request to fetch pending messages can return a single response, multiple responses at once (without any intermediate request), or no response at all (in which case the can_read() with no timeout argument from Ikegami's answer blocks, which I want to avoid)
Thanks to Ikegami I have changed my code as follows:
the timeout argument to can_read() is passed as an argument to the sub that processes the responses
for initial connections I pass a timeout of several seconds
when I expect instant responses I pass a timeout of 1 second
in the process loop, after any correct response, I replace the initial timeout with 0.1 so the loop does not block when no more data is waiting in the filehandle
Here is my updated code:
sub process_stomp_response {
    my $IN      = shift;
    my $timeout = shift;

    my $resp = [];
    my $buf;                    # allocate the buffer once and not in loop - thanks Ikegami!
    my $buf_size = 1024 * 1024;

    my $selector = IO::Select->new();
    $selector->add($IN);

    FRAME:
    while (1) {
        my @ready = $selector->can_read($timeout);
        last FRAME unless @ready;    # empty array = timed out
        foreach my $fh (@ready) {
            if (fileno($fh) == fileno($IN)) {
                my $bytes = $fh->sysread($buf, $buf_size);
                # if bytes undef -> error, if 0 -> eof, else number of read bytes
                my %frame;
                if (defined $bytes) {
                    if ($bytes) {
                        if ($buf =~ s/^\n*([^\n].*?)\n\n//s) {
                            # process frame headers here
                            # [...]
                        }
                        if ($buf =~ s/^(.*?)\000\n*//s) {
                            # process frame body here
                            # [...]
                            push @$resp, \%frame;
                            $timeout = 0.1;    # short timeout for the next read
                            next FRAME;
                        }
                    } else {
                        # EOF
                        $selector->remove($fh);
                        last FRAME;
                    }
                } else {
                    # something is wrong
                    say STDERR "Error reading STOMP response: $!";
                }
            } else {
                # what? not the given fh
            }
        }
    }
    return $resp;
}
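Called roughly like this (illustrative timeout values, following the scheme described above):

# initial CONNECT: allow several seconds for the first CONNECTED frame
my $frames = process_stomp_response($IN, 5);

# later, when an instant response is expected
$frames = process_stomp_response($IN, 1);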
Do not use eof in conjunction with select (which can_read wraps). It performs a buffered read, which breaks select.
select will mark a handle as ready for reading when it reaches EOF, and sysread returns zero on EOF. So all you need to do to detect EOF is to check for sysread returning zero.
Note that using a new buffer for every pass was a mistake; sysread can easily return only part of a message. The following fixes this, and shows how to handle errors and EOF from sysread.
Globals:
my %clients_by_fd;
When you get a new connection:
$selector->add( $fh );
$clients_by_fd{ fileno( $fh ) } = {
    buf => "",
    # Any other info you want here.
};
Event loop:
while ( 1 ) {
    my @ready = $selector->can_read();
    for my $fh ( @ready ) {
        my $client  = $clients_by_fd{ fileno( $fh ) };
        my $buf_ref = \$client->{ buf };
        my $rv = sysread( $fh, $$buf_ref, 1024*1024, length( $$buf_ref ) );
        if ( !$rv ) {
            if ( defined( $rv ) ) {
                # EOF
                if ( length( $$buf_ref ) ) {
                    warn( "Error reading: Incomplete message\n" );
                }
            } else {
                # Error
                warn( "Error reading: $!\n" );
            }

            delete $clients_by_fd{ fileno( $fh ) };
            $selector->remove( $fh );
        }

        while ( $$buf_ref =~ s/^.*?\n\n//s ) {
            process_message( $client, $& );
        }
    }
}
I am trying to implement a routine which takes a "command" and an associated "timeout".
If the command completes within the specified time, it should return the output.
Otherwise it should kill the process.
sub runWithTimeout {
    my ($pCommand, $pTimeOut) = @_;
    my (@aResult);

    print "Executing command [$pCommand] with timeout [$pTimeOut] sec/s \n";
    eval {
        local $SIG{ALRM} = sub { die "alarm\n" };
        alarm $pTimeOut;
        @aResult = `$pCommand`;
        alarm 0;
    };
    if ($@) {
        print("Command [$pCommand] timed out\n");
        # Need to kill the process. However I don't have the PID here.
        # kill -9 pid
    } else {
        print "Command completed\n";
        #print Dumper(\@aResult);
    }
}
Sample invocations:
&runWithTimeout('ls -lrt',5);
Executing command [ls -lrt] with timeout [5] sec/s
Command completed
&runWithTimeout('sleep 10;ls -lrt',5);
Executing command [sleep 10;ls -lrt] with timeout [5] sec/s
Command [sleep 10;ls -lrt] timed out
I guess if I had the PID, I could use kill on it in the if block.
Any pointer on how I can get the PID (or any other better approach) would be a great help.
Don't run the command with backticks; use open instead. For bonus points, use IO::Select and can_read to see if you've got any output:
use IO::Select;

my $pid = open( my $output_fh, '-|', 'ls -lrt' );

my $select = IO::Select->new($output_fh);

while ( $select->can_read(5) ) {
    my $line = <$output_fh>;
    print "GOT: $line";
}

## timed out after 5s waiting.
kill 15, $pid;
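A small addition worth making here (mine, not part of the answer above): after the kill, reap the child so it does not linger as a zombie:

kill 15, $pid;      # ask the child to terminate (as above)
waitpid $pid, 0;    # reap it so no zombie is left behind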
I am trying to build a program which creates some forks and writes the results of the forks back to the main program. For that I am trying to use IO::Pipe:
sub ForkRequests {
    my $pipe = IO::Pipe->new();
    my $pid;

    foreach my $feature ( @features ) {
        if ( $pid = fork() ) {
            $pipe->reader();
            while ( <$pipe> ) {
                print $_ . "\n";
            }
        }
        elsif ( defined $pid ) {
            #child
            $pipe->writer();
            #somecalculations [...]
            print $pipe $calcresults;
        }
    }
}
I got my code for doing a pipe from the module's documentation.
If I now try to execute it, I get these error messages:
Can't locate object method "reader" via package "IO::Pipe::End" at lmtest3.pl line 56.
Can't locate object method "writer" via package "IO::Pipe::End" at lmtest3.pl line 63.
Can't locate object method "reader" via package "IO::Pipe::End" at lmtest3.pl line 56, <GEN0> line 1.
Can't locate object method "writer" via package "IO::Pipe::End" at lmtest3.pl line 63, <GEN0> line 1.
So my code does not seem to end up with an IO::Pipe object, but with an IO::Pipe::End.
So my question is: can anybody see the mistake in there? Why does it return the wrong object, and how would this be done correctly?
EDIT
I have some requests to make to some servers (most of the time, one request to each of seven servers).
Those request names are saved in @features and are executed at the point of #somecalculations.
Because the server responses are pretty slow, I want those requests to start in parallel. They all have to get back to the main program and print their replies to the console.
I tried this code
sub ForkRequests {
    my $i = 0;
    my @pipes;
    my $pid;

    foreach my $feature ( @features ) {
        $pipes[$i] = IO::Pipe->new();
        if ( $pid = fork() ) {
            $pipes[$i]->reader();
        }
        elsif ( defined $pid ) {
            #child
            $pipes[$i]->writer();
            # calculations
            my $w = $pipes[$i];
            print $w $calc;
            print $w "end\n";
        }
        $i++;
    }

    if ( $pid == 1 ) {
        while ( 1 ) {
            foreach my $pipe ( @pipes ) {
                while ( <$pipe> ) {
                    unless ( $_ == "end" ) {
                        print $_ . "\n";
                    }
                    else { last; }
                }
            }
        }
    }
    else {
        exit;
    }
}
as said, to save those pipes. But I still have a problem reading them, as the program exits before it gets the answers.
The problem is that you are forking multiple child processes but trying to use the same pipe for all of them.
The reader method converts $pipe into an IO::Pipe::End object that you can read data from, so the first child is connected correctly. But you then call reader again on the same $pipe, and the error is thrown because it is no longer an object of the right class.
You simply need to create a new pipe for each child process:
sub fork_requests {
    for my $feature ( @features ) {
        my $pipe = IO::Pipe->new;
        my $pid;

        if ( $pid = fork ) {
            $pipe->reader;
            print while <$pipe>;
        }
        elsif ( defined $pid ) {
            $pipe->writer;
            # some calculations . . .
            print $pipe $calcresults;
            exit;
        }
    }
}
Update
Okay I think I understand what it is you need. This complete program should show you.
I have written fork_requests so that it expects a list of features as parameters, and I have written the child code so that it sleeps for two seconds to emulate the processing time and then simply prints the name of the feature.
The parent code stores all the pipes in an array, as I suggested, and prints the output from each of them in the order they were queued. All five child processes complete after two seconds, so the parent is suspended for that time and then prints the features originally passed in.
use strict;
use warnings;

use IO::Pipe;

STDOUT->autoflush;

fork_requests('A' .. 'E');

sub fork_requests {
    my @pipes;

    for my $feature ( @_ ) {
        my $pipe = IO::Pipe->new;
        my $pid;

        if ( $pid = fork ) {
            $pipe->reader;
            push @pipes, $pipe;
        }
        elsif ( defined $pid ) {
            $pipe->writer;
            select $pipe;
            # some calculations . . .
            sleep 2;
            my $calcresults = $feature;
            print $calcresults, "\n";
            exit;
        }
    }

    for my $pipe ( @pipes ) {
        print while <$pipe>;
    }
}
output
A
B
C
D
E
I want my server to stop for a few seconds and then start again.
I am trying to use sleep(5).
Will this help?
I've tried with a Perl script containing:
if($mech=~ m/(Connection refused)/) {print "SLEEPING\n";sleep(5);redo;}
A server hosts web pages. A client connects to a server. A Perl script using WWW::Mechanize is a client.
I suspect you're trying to do the following:
my $retries = 3;
my $resp;    # response goes here

while ( $retries-- > 0 ) {
    $resp = $ua->get( "http://www.google.com/" );
    if ( ! $resp->is_success ) {
        warn( "Failed to get webpage: " . $resp->status_line );
        sleep( 5 );
        next;    # continue the while loop
    }
    last;        # success!
}

if ( $retries < 0 ) {    # post-decrement leaves -1 when every attempt failed
    die( "Too many retries!" );
}

# old versions of LWP::UserAgent didn't have decoded_content()
my $content = $resp->can('decoded_content') ?
    $resp->decoded_content : $resp->content;

print( $content );
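One caveat, since the question uses WWW::Mechanize rather than plain LWP::UserAgent: by default $mech->get() dies on HTTP errors instead of returning a failed response, so to use the is_success pattern above, disable that behavior (or wrap the call in eval):

my $mech = WWW::Mechanize->new( autocheck => 0 );    # don't die on HTTP errors
my $resp = $mech->get($url);                         # now returns a response either way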
UPDATE
Here is the code you supplied in a comment:
use WWW::Mechanize;
my $mech = new WWW::Mechanize;
eval {
    $mech->get($url);
};
if ($@) {
    print STDERR "Burst\n";
    print STDERR Dumper($mech);
    # my $var = $mech;
    # print STDERR "var $var\n";
    # if($mech =~ m/(Connection refused)/)
    # {
    #     print "SLEEPING\n";
    #     sleep(5);
    #     redo;
    # }
From perldoc -f redo:
The "redo" command restarts the loop block without evaluating the conditional again. The "continue" block, if any, is not executed. If the LABEL is omitted, the command refers to the innermost enclosing loop.
Seeing as you haven't put your code in a loop the call to redo doesn't have any effect.
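For redo to have something to restart, the code needs an enclosing block; a bare block counts as a loop that runs once. A minimal sketch, reusing the $mech and $url names from the question:

ATTEMPT: {
    eval { $mech->get($url) };
    if ( $@ && $@ =~ /Connection refused/ ) {
        print "SLEEPING\n";
        sleep 5;
        redo ATTEMPT;    # restarts the bare block, retrying the get
    }
}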
This is probably not Perl-specific, but my demo is in Perl.
My master program opens a listening socket, and then forks a child process. The child's first job is to connect to the master and say HELLO. Then it continues its initialization, and when it is ready, it sends READY to the master.
The master, after having forked the child, waits for HELLO and then goes about other initialization (forking other children, mainly). Once it has forked all the children and heard HELLO back from each, it proceeds to wait for all of them to say READY.
It does this using IO::Select->can_read, and then $socket->getline to retrieve the message.
In short, the parent is failing to receive the READY, even though it's being sent by the child.
Here is a hastily stripped-down version of my program that demos the bug (I tried to remove irrelevancies, but a few may remain). I'm still confused by issues like whether message boundaries are preserved, whether "\n" is needed, and which method to use for reading from a socket. I really don't want to have to think about assembling message fragments, and I'm hoping IO::Select will spare me that.
The demo only spawns one child, for simplicity.
#!/usr/bin/env perl
use warnings;
use strict;
use Carp;
use File::Basename;
use IO::Socket;
use IO::Select;
use IO::File; # for CONSTANTS
use Net::hostent; # for OO version of gethostbyaddr
use File::Spec qw{rel2abs}; # for getting path to this script
use POSIX qw{WNOHANG setsid}; # for daemonizing
use 5.010;
my $program = basename $0;
my $progpath = File::Spec->rel2abs(__FILE__);
my $progdir = dirname $progpath;
$| = 1; # flush STDOUT buffer regularly
# Set up a child-reaping subroutine for SIGCHLD. Prevent zombies.
#
say "setting up sigchld";
$SIG{CHLD} = sub {
    local ( $!, $^E, $@ );
    while ( ( my $kid = waitpid( -1, WNOHANG ) ) > 0 ) {
        say "Reaping child process $kid";
    }
};
# Open a port for incoming connections
#
my $listen_socket = IO::Socket::INET->new(
    Proto     => 'tcp',
    LocalPort => 2000,
    Listen    => SOMAXCONN,
    Reuse     => 1
);
croak "Can't set up listening socket: $!\n" unless $listen_socket;
my $readers = IO::Select->new($listen_socket)
or croak "Can't create the IO::Select read object";
say "Forking";
my $manager_pid;
if ( !defined( $manager_pid = fork ) ) {
    exit;
}
elsif ( 0 == $manager_pid ) {
    #
    # ------------------ BEGIN CHILD CODE HERE -------------------
    say "Child starting";

    my ($master_addr, $master_port) = split /:/, 'localhost:2000';
    my $master_socket = IO::Socket::INET->new(
        Proto    => "tcp",
        PeerAddr => $master_addr,
        PeerPort => $master_port,
    ) or die "Cannot connect to $master_addr:$master_port";

    say "Child sending HELLO.";
    $master_socket->printflush("HELLO\n");

    # Simulate elapsed time spent initializing...
    #
    say "Child sleeping for 1 second, pretending to be initializing ";
    sleep 1;
    #
    # Finished initializing.
    say "Child sending READY.";
    $master_socket->printflush("READY\n");

    say "Child sleeping indefinitely now.";
    sleep;
    exit;
    # ------------------- END CHILD CODE HERE --------------------
}
# Resume parent code
# The following blocks until we get a connect() from the manager
say "Parent blocking on ready readers";
my @ready = $readers->can_read;

my $handle;
for $handle (@ready) {
    if ( $handle eq $listen_socket ) {    # connect request?
        my $manager_socket = $listen_socket->accept();
        say "Parent accepting connection.";

        # The first message from the manager must be his greeting
        #
        my $greeting = $manager_socket->getline;
        chomp $greeting;
        say "Parent received $greeting";
    }
    else {
        say( $$, "This has to be a bug" );
    }
}
say "Parent will now wait until child sends a READY message.";
say "NOTE: if the bug works, Ill never receive the message!!";
################################################################################
#
# Wait until all managers have sent a 'READY' message to indicate they've
# finished initializing.
#
################################################################################
$readers->add($handle);    # add the newly-established socket to the child

do {
    @ready = $readers->can_read;
    say "Parent is ignoring a signal." if !@ready;
} until @ready;
# a lot of overkill for demo
for my $socket (@ready) {
    if ( $socket ne $listen_socket ) {
        my $user_input;
        $user_input = $socket->getline;
        my $bytes = length $user_input;

        if ( $bytes > 0 ) {
            chomp $user_input;
            if ( $user_input eq 'READY' ) {
                say "Parent got $user_input!";
                $readers->remove($socket);
            }
            else {
                say( $$, "$program RECVS $user_input??" );
            }
        }
        else {
            say( $$, "$program RECVs zero length message? EOF?" );
            $readers->remove($socket);
        }
    }
    else {
        say( $$, "$program RECVS a connect on the listen socket??" );
    }
}    # end for @ready
say "Parent is ready to sleep now.";
I don't know if that's your (only) problem, but always use sysread with select. Never use buffered I/O like getline. getline doubly makes no sense here, since it can block waiting for data that hasn't been received yet.
Your select loop should look like:

Forever,
    Wait for sockets to become ready for reading.
    For each socket ready to be read,
        sysread($that_socket, $buffer_for_that_socket, 64*1024,
            length($buffer_for_that_socket));
        If sysread returned undef,
            Handle error.
        If sysread returned zero,
            Handle closed socket. Don't forget about data left in the buffer.
        Otherwise, handle read data:
            while ($buffer_for_that_socket =~ s/^(.*)\n//) { my $msg = $1; ... }
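Spelled out as real code, that loop might look like the following sketch (my rendering, with hypothetical handle_message and %buffer_for names; $readers is the IO::Select object from the question):

my %buffer_for;    # per-socket input buffers, keyed by file descriptor

while (1) {
    for my $socket ( $readers->can_read ) {
        $buffer_for{ fileno $socket } //= '';
        my $buf_ref = \$buffer_for{ fileno $socket };

        my $rv = sysread( $socket, $$buf_ref, 64 * 1024, length $$buf_ref );
        if ( !defined $rv ) {
            warn "Error reading: $!\n";              # read error
            $readers->remove($socket);
            next;
        }
        if ( !$rv ) {                                # zero bytes: EOF
            warn "Incomplete message\n" if length $$buf_ref;
            delete $buffer_for{ fileno $socket };
            $readers->remove($socket);
            next;
        }

        # Extract every complete newline-terminated message.
        while ( $$buf_ref =~ s/^(.*)\n// ) {
            handle_message( $socket, $1 );           # e.g. look for HELLO or READY
        }
    }
}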