Running AnyEvent under Dancer application - perl

I would like to do some non-blocking SSH to a couple of thousand machines that I'm tracking (my own machines). I have a Dancer application up and running, and I'm willing to use AnyEvent timers to execute SSH commands asynchronously (each machine has its own polling interval, and I don't want one machine to wait for another to finish its SSH work).
I'm wondering, what is the best way to act asynchronously in a synchronous environment?

It is not a very good idea to run any external commands from within your web scripts.
For one, should your external call block or crash for any reason, it will create a bad experience for the user (even if that user is just you).
Then, running external commands as the web user may have a lot of security implications - I would think your web user most likely has passwordless ssh set up, doesn't it? What if someone figures out some security hole in your script and manages to use it to ssh into your servers?
Instead, you should create a separate service or process which regularly polls your servers' status using ssh (or whatever else) and saves the results of that scan into a database - Postgres or MySQL.
Then, change your Dancer app to display the collected results from the database, rather than doing live SSH requests. This way it will be very fast and secure.
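To make that concrete, here is a minimal sketch of such a poller as a standalone AnyEvent program (my own illustration, not part of the answer): it keeps one timer per machine, runs the stock ssh client in the background with AnyEvent::Util::run_cmd, and writes each result into a database table. The host names, intervals, DSN and the host_status table are all placeholders.
#!/usr/bin/env perl
# Minimal sketch of the suggested standalone poller: one AnyEvent timer per
# machine, ssh runs in the background via run_cmd, and the result goes into a
# hypothetical host_status table that the Dancer app then reads.
use strict;
use warnings;
use AnyEvent;
use AnyEvent::Util qw(run_cmd);
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=monitoring', 'user', 'pass',
                       { RaiseError => 1, AutoCommit => 1 });

my %interval = ( hostA => 60, hostB => 300 );   # per-machine polling interval in seconds (placeholders)
my (%running, @timers);

for my $host (keys %interval) {
    push @timers, AnyEvent->timer(
        after    => 0,
        interval => $interval{$host},
        cb       => sub {
            return if $running{$host};          # skip this round if the previous run is still going
            my $out = '';
            $running{$host} = run_cmd ['ssh', '-o', 'BatchMode=yes', $host, 'uptime'],
                                      '>' => \$out;
            $running{$host}->cb(sub {
                my $exit = shift->recv >> 8;    # exit code of the ssh client
                delete $running{$host};
                $dbh->do('INSERT INTO host_status (host, checked_at, output, exit_code)
                          VALUES (?, now(), ?, ?)', undef, $host, $out, $exit);
            });
        },
    );
}

AnyEvent->condvar->recv;   # run the event loop forever
The Dancer routes then only ever read from host_status, so a slow or unreachable machine can never hold up a web request.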

It may not be a good idea, but it is possible. I have a big Dancer application that executes scripts remotely, and I'm doing it with fork and Net::SSH2. I tried threads, but some modules are not thread-safe, so I recommend using fork.
I have some comments on my blog http://perlondancer.blogspot.mx/2014/04/executing-remote-commands-from-dancer.html and the code example below is in this gist: https://gist.github.com/johandry/11197516
#!/usr/bin/env perl

use strict;
use warnings;
use Dancer;
use Net::SSH2;

sub execCommand ($$) {
    my ( $ssh2, $cmd ) = @_;
    my %args = (
        timeout => 1_000,   # polling timeout
        bufsize => 10_240,  # read buffer size when polling
    );

    $ssh2->blocking(1);           # needed for $ssh2->channel
    my $chan = $ssh2->channel();  # create SSH2 channel
    if ($ssh2->error()) {
        return (undef, undef, 100);
    }

    # exec $cmd (caveat: only one command line can be executed over this channel.
    # No "ls -l;whoami" combo. Use $ssh2->shell instead.)
    unless ($chan->exec($cmd)) {
        return (undef, undef, 500);
    }

    # define the polling context: will poll stdout (in) and stderr (ext)
    my @poll = ( { handle => $chan, events => ['in','ext'] } );
    my %std  = ();          # hash of strings to store the stdout/stderr results
    $ssh2->blocking(0);     # needed for $chan->poll
    while (!$chan->eof) {   # there is still something to read from the channel
        $ssh2->poll( $args{'timeout'}, [ @poll ] );  # any events are stored into @poll
        my ( $n, $buf );    # number of bytes read ($n) into the buffer ($buf)
        foreach my $poll ( @poll ) {          # for each event
            foreach my $ev ( qw( in ext ) ) { # for each of stdout/stderr
                next unless $poll->{revents}{$ev};
                # there is something to read here, into the $std{$ev} string
                if ( $n = $chan->read( $buf, $args{'bufsize'}, $ev eq 'ext' ) ) {
                    # got $n bytes into $buf for stdout ($ev eq 'in') or stderr ($ev eq 'ext')
                    $std{$ev} .= $buf;
                }
            }
        }
    }
    $chan->wait_closed();                 # not really needed but cleaner
    my $exit_code = $chan->exit_status();
    $chan->close();                       # not really needed but cleaner
    $ssh2->blocking(1);                   # set it back for sanity (future calls)
    return ( $std{'in'}, $std{'ext'}, $exit_code );
}
sub execute ($$$$) {
    my ($ip, $username, $password, $cmd) = @_;

    my $pid = fork();
    if ($pid) {
        # This is the parent (DANCER)
        debug "Process started with PID $pid\n";
    } elsif ( $pid == 0 ) {
        # This is the child
        my $ssh2 = Net::SSH2->new();
        $ssh2->connect( $ip ) or debug("Cannot connect to $ip");
        # e.g. path(setting('appdir'), 'db', 'id_rsa.pub'); I prefer to copy the
        # keys into the app dir due to permissions issues
        my $publicKeyFile  = './id_rsa.pub';
        my $privateKeyFile = './id_rsa';
        if ( $ssh2->auth_publickey( $username, $publicKeyFile, $privateKeyFile, $password ) ) {
            my ($stdout, $stderr, $exitcode) = execCommand($ssh2, $cmd);
        } else {
            debug "Could not authenticate to $ip with $username";
        }
        $ssh2->disconnect();
        exit 0;   # make sure the child never falls through into the Dancer app
    } else {
        debug "Could not fork: $!\n";
    }
}
set logger      => "console";
set log         => "core";
set show_errors => 1;

get '/uptime/:ip' => sub {
    my $username = "the username";
    my $password = "the password";
    execute(param('ip'), $username, $password, "uptime > /tmp/dancer_example.txt");
    return 'uptime is running';
};

dance;

true;

Net::SSH2 can be used asynchronously, but it is quite buggy and crashes often. Forget about using it for running thousands (or even just hundreds) of connections in parallel in the same process. It may be OK if you use it wrapped in new processes as recommended by @Johandry, but then you can just run the ssh command using AnyEvent::Util::run_cmd.
Net::OpenSSH is another Perl module that can be used asynchronously. It shouldn't be too difficult to integrate it inside AnyEvent.
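A rough sketch of such an integration (my own illustration, not from this answer): each remote command is started through Net::OpenSSH's pipe_out, and an AnyEvent child watcher collects its exit status, so the event loop is never blocked while the commands run. Host names and the command are placeholders, and error handling is minimal.
use strict;
use warnings;
use AnyEvent;
use Net::OpenSSH;

my @hosts = ('host1', 'host2');   # placeholders
my $cv    = AnyEvent->condvar;
my %job;                          # per-pid state; also keeps $ssh and the watcher alive

for my $host (@hosts) {
    my $ssh = Net::OpenSSH->new($host);
    next if $ssh->error;          # real code should log this

    # run the command with its stdout connected to a pipe we own
    my ($out_fh, $pid) = $ssh->pipe_out('uptime')
        or next;

    $cv->begin;
    $job{$pid} = {
        ssh     => $ssh,          # keep the master connection open until we are done
        watcher => AnyEvent->child(pid => $pid, cb => sub {
            my ($pid, $status) = @_;
            # fine for short output; long output should be read incrementally instead
            my $output = do { local $/; <$out_fh> };
            print "$host exited with ", $status >> 8, ": $output";
            delete $job{$pid};
            $cv->end;
        }),
    };
}

$cv->recv;   # wait until every command has finished
Note that Net::OpenSSH->new itself still blocks while the master connection is set up; for thousands of hosts you would also want to parallelize or stagger that part (the module's async option, or Net::OpenSSH::Parallel, can help there).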

Related

Parallel execution of command using Parallel::ForkManager

I want to know whether my understanding is right or not for the below script/logic.
I have a list of nodes and I need to run a certain command on each node by utilizing a number of servers that I have: I SSH to those servers, and the processes should run in parallel.
I have node_list.txt file which contains list of nodes:
node1
node2
.
.
node49
node50
I have defined the servers in an array @hosts. I split node_list.txt into an equal number of parts (called node_list_X.txt), one for each available server, and then SSH to each server to execute the command against its share of the nodes.
Once I have these files (node_list_1.txt, node_list_2.txt, node_list_3.txt, node_list_4.txt), I log in to each of the predefined servers and execute the command on each of them, passing its node_list_X.txt file, in parallel.
To run this in parallel I am using the Parallel::ForkManager Perl module.
So, let's say on each host -
192.168.0.1 -> node_list_1.txt (13 nodes)
192.168.0.2 -> node_list_2.txt (13 nodes)
192.168.0.3 -> node_list_3.txt (12 nodes)
192.168.0.4 -> node_list_4.txt (12 nodes)
will run in parallel.
Script is below:
...
my @hosts = ("192.168.0.1", "192.168.0.2", "192.168.0.3", "192.168.0.4");

open(my $node_fh, '<', $node_file)
    or die "can't open $node_file: $!";
my @lines = <$node_fh>;

my %Files;
my $num_buckets = scalar @hosts;
my $per_bucket  = int( @lines / $num_buckets );
my $num_extras  = @lines % $num_buckets;
my $path = "/home/user/vinod/test/";

for my $bucket_num (0..$num_buckets-1) {
    my $num_lines = $per_bucket;
    if ($num_extras) {
        ++$num_lines;
        --$num_extras;
    }
    last if ($num_lines == 0);
    my $qfn = $path."node_list_${bucket_num}.txt";
    open(my $fh, '>', $qfn)
        or die("Can't create \"$qfn\": $!\n");
    $fh->print(splice(@lines, 0, $num_lines));
    $Files{$bucket_num} = $qfn;
}
print Dumper(\%Files);

my $command = "...";   # defining my command here

my $pm = Parallel::ForkManager->new(5);
my $ssh;

DATA_LOOP:
foreach my $n (0..$num_buckets-1) {
    if ( exists $Files{$n} ) {
        my $pid = $pm->start and next DATA_LOOP;
        $command_to_execute = $command." ".$Files{$n};
        $ssh = SSH_Connection( $hosts[$n-1], "user", "password" );
        $result = $ssh->capture($command_to_execute);
        $pm->finish;
    }
}
$pm->wait_all_children;
undef $ssh;

# SSH connect
sub SSH_Connection {
    my ( $host, $user, $passwd ) = @_;
    my $ssh = Net::OpenSSH->new($host,
        user        => $user,
        password    => $passwd,
        master_opts => [-o => "StrictHostKeyChecking=no"]
    );
    $ssh->error and die "Couldn't establish SSH connection: " . $ssh->error;
    return $ssh;
}
Here everything works fine.
When I define the $pm object, the number of parallel processes is set to 5.
my $pm = Parallel::ForkManager->new(5);
Does this mean that, at a time on a particular server (e.g. 192.168.0.1), it should run 5 parallel processes? That is, should it take 5 nodes from the node_list_1.txt file (out of 13) and execute the command on them?
Is my understanding correct? If not, what could be a possible solution to run the command on each server in parallel with multi-threading?
Does this mean that, at a time on a particular server (e.g. 192.168.0.1), it should run 5 parallel processes?
No. P::FM doesn't know anything about servers. It manages processes, and ->new(5) means ->start will wait for one of the processes it created to finish before creating a new one if 5 of them are still executing.
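To make that concrete, here is a toy example (sleep stands in for the real ssh work): with ->new(5) the sixth start() call blocks until one of the first five children has exited, no matter which server the work item belongs to.
use strict;
use warnings;
use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(5);   # never more than 5 children alive at once

for my $n (1 .. 20) {
    # parent: start() returns the child's PID, or blocks first if 5 are still running
    $pm->start and next;

    # child: do one unit of work, then exit via finish()
    sleep 2;
    $pm->finish;
}
$pm->wait_all_children;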
what could be a possible solution to run the command on each server in parallel with multi-threading?
Assuming you meant multi-tasking generally rather than multi-threading specifically (since you aren't using threads), creating a process for each host could be done as follows:
my %children;
my $error = 0;
for my $host (@hosts) {
    my $pid = fork();
    if (!defined($pid)) {
        warn("Can't execute on $host: Can't fork: $!\n");
        next;
    }

    if ($pid) {
        # parent: remember the child and move on to the next host
        ++$children{$pid};
        next;
    }

    # child
    if (!eval {
        do_it($host);
        return 1;  # No exception
    }) {
        warn("Error executing commands on $host: $@");
    }
    exit;  # the child must not fall through into the parent's loop
}

while (%children) {
    ( my $pid = wait() ) >= 0
        or die("Can't wait: $!\n");
    delete($children{$pid});
}
If you want to run jobs on a bunch of different servers, consider a proper job queue. Perl's Minion is very nice. Various servers can connect to it, ask for jobs in various ways, and send back the results.
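A very small sketch of what that could look like with Minion (my own illustration; the PostgreSQL connection string, the run_on_host task name and the host list are placeholders, and a real deployment would run persistent workers rather than perform_jobs):
use strict;
use warnings;
use Minion;

# queue backed by PostgreSQL (Minion also supports other backends)
my $minion = Minion->new(Pg => 'postgresql://user:pass@localhost/minion_db');

# one task definition: run a single command on a single host over ssh
$minion->add_task(run_on_host => sub {
    my ($job, $host, $cmd) = @_;
    my $out = qx{ssh $host $cmd};   # or Net::OpenSSH, as used elsewhere in this thread
    $job->finish({ host => $host, output => $out, exit => $? >> 8 });
});

# enqueue one job per host; any number of workers on any machine can pick them up
my @hosts = ('192.168.0.1', '192.168.0.2');
$minion->enqueue(run_on_host => [$_, 'uptime']) for @hosts;

# process everything with a temporary in-process worker (handy for testing)
$minion->perform_jobs;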
Have you considered using Net::OpenSSH::Parallel?
It seems to me that it directly supports what you want to do and is able to handle lots of connections in parallel, schedule them, handle errors and retry failed commands, etc.
Update: But will it allow me to run jobs in parallel on each host?
But what do you really want to do? Distribute jobs over a set of workers? In that case, brian d foy's solution is probably a better option.
Anyway, Net::OpenSSH::Parallel was never intended for that, but yet it can do it:
my @hosts = ...;
my @tasks = ...;
my $n_workers = 5;

my $ossh = Net::OpenSSH::Parallel->new;
for my $host (@hosts) {
    for my $ix (0..$n_workers) {
        $ossh->add_host("$host-$ix", host => $host);
    }
}

my $fetch_task = sub {
    my ($pssh, $label) = @_;
    if (defined (my $task = shift @tasks)) {
        $ossh->push($label, cmd => $task);
        $ossh->push($label, sub => $fetch_task);
    }
};

$ossh->push('*', sub => $fetch_task);
$ossh->run;

Managing parallel processes

I am starting multiple bash scripts from a Perl script and I want to monitor them and log their behavior.
I know that I can tell whether a process is still running with kill 0, $pid, and I can get the exit code from $?, but when launching multiple scripts in the background I can't relate the values of $? to the processes that produced them.
How can I launch those scripts in parallel, but get the exit code from each of them? I need something like proc_get_status from PHP.
Sorry for not providing the code from the beginning.
I stripped down the code, so the important things are to see.
use warnings;
use strict;
use IPC::Open3;
use IO::Handle;

my $timeLimit = 60*60;   # some time limit not to be crossed
my $startTime = time();
my @commands  = ();      # fill up with commands to be executed
my @processes = ();

foreach my $cmd (@commands) {
    my $stdout = IO::Handle->new;
    my $stderr = IO::Handle->new;
    my $pid = open3(undef, $stdout, $stderr, $cmd);
    push @processes, { "pid" => $pid, "out" => $stdout, "err" => $stderr, "cmd" => $cmd };
}

do {
    if (time() - $startTime > $timeLimit) {
        kill 2, $_->{pid} foreach (@processes);
        @processes = ();
        last;
    } else {
        for (my $i = 0; $i < @processes; $i++) {
            unless (kill 0, $processes[$i]{pid}) {
                # if it's not running, I would like to check the exit code and log it from here on.
                # also remove it from the array; that's why I used for and not foreach, so I can use splice later.
            }
        }
    }
} while (@processes > 0);
You have already hit upon the insight of storing background job data in mini-objects. Take the next step and try a full-featured parallelization package like Forks::Super. You can create background process objects that you can then query for their status and exit code. Forks::Super supports process timeouts and an open3-like interface.
use Forks::Super;

$Forks::Super::MAX_PROC = 10;   # optional, block while 10 jobs already running
...
foreach my $cmd (@commands) {
    my $job = fork {
        cmd      => $cmd,        # run $cmd in a background process
        child_fh => 'out,err',   # child STDOUT,STDERR available to the parent
        timeout  => $timeLimit   # kill the job after $timeLimit seconds
    };
    push @processes, $job;
}

while (@processes) {
    sleep 5;
    foreach my $job (@processes) {
        if ($job->is_complete) {
            $job->wait;
            my $exit_code = $job->status;
            my $output    = $job->read_stdout;
            my $error     = $job->read_stderr;
            # ... log status, output, error, $job->{cmd}, etc. ...
            $job->dispose;   # close filehandles and other clean up
        }
    }
    @processes = grep { !$_->is_reaped } @processes;
}
You can use wait and waitpid to get the status of individual children. The perlipc documentation gives a few examples in the section on "Signals".
Since you're using IPC::Open3, the Synopsis also has an example of using waitpid():
my($wtr, $rdr, $err);
use Symbol 'gensym'; $err = gensym;
$pid = open3($wtr, $rdr, $err,
'some cmd and args', 'optarg', ...);
waitpid( $pid, 0 );
my $child_exit_status = $? >> 8;
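Applied to the %processes-style bookkeeping from the question (an array of hashes with pid, out, err and cmd entries), a non-blocking variant using waitpid with WNOHANG lets you match each exit code to the job that produced it. A sketch; reading the out/err handles is left out:
use POSIX ':sys_wait_h';

# index the running jobs by PID so a reaped child can be identified
my %by_pid = map { $_->{pid} => $_ } @processes;

while (%by_pid) {
    # reap every child that has already exited, without blocking
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        my $job       = delete $by_pid{$pid};
        my $exit_code = $? >> 8;
        print "$job->{cmd} (pid $pid) exited with code $exit_code\n";
    }
    sleep 1;   # or read from the jobs' out/err handles here
}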
First, take a look at Perl's fork() function. This would be the typical way I do things like this. There's a good explanation with examples here.
An easy-to-use forking module is provided by Parallel::ForkManager.
There are also Perl's interpreter-based threads, which are a bit lower-level, harder to use, and spawn threads rather than forking processes.
Another possible way is with GNU Parallel. parallel is a very powerful tool to run commands in parallel. You can easily run and manage multiple commands and scripts with it. It has a --joblog option which might be helpful for you.
All of these approaches provide ways to get the exit code of the sub-processes. In the end, the best choice depends on your current implementation which you did not provide.
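For instance, Parallel::ForkManager hands each child's exit code (plus an identifier of your choosing) to a run_on_finish callback in the parent, which maps directly onto the "which exit code belongs to which script" problem. A sketch, with @commands assumed to hold the script invocations:
use strict;
use warnings;
use Parallel::ForkManager;

my @commands = ('script_a.sh', 'script_b.sh');   # placeholders
my $pm = Parallel::ForkManager->new(10);

# called in the parent every time a child is reaped
$pm->run_on_finish(sub {
    my ($pid, $exit_code, $ident) = @_;
    print "'$ident' (pid $pid) finished with exit code $exit_code\n";
});

for my $cmd (@commands) {
    $pm->start($cmd) and next;                 # parent; $cmd becomes $ident in the callback
    exec $cmd or die "can't exec $cmd: $!";    # child is replaced by the script
}
$pm->wait_all_children;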

Resolving issue with Net::OpenSSH and passing multiple commands to a router

I'm working on migrating a Perl script that pushes commands to routers. We have turned off telnet, so I'm working on getting SSH to work. After looking at a number of SSH libraries in Perl, I've opted to use Net::OpenSSH. I have no problem logging in and passing commands to the routers, but the problem I'm having is with entering config mode and subsequently passing a command.
The problem is that with each command entered, the underlying system appears to log out and then log back in for the next command. For example, with a Juniper router I'm trying to do the following:
edit private
set interfaces xe-1/3/2 description "AVAIL: SOMETHING GOES HERE"
commit
exit
quit
Tailing the syslog from the router I'm seeing something like this...
(...)
UI_LOGIN_EVENT: User 'tools' login, class 'j-remote-user' [65151], ssh-connection 'xxx.xxx.xxx.xxx 42247 xxx.xxx.xxx.xxx 22', client-mode 'cli'
UI_CMDLINE_READ_LINE: User 'tools', command 'edit private '
UI_DBASE_LOGIN_EVENT: User 'tools' entering configuration mode
UI_DBASE_LOGOUT_EVENT: User 'tools' exiting configuration mode
UI_LOGOUT_EVENT: User 'tools' logout
UI_AUTH_EVENT: Authenticated user 'remote' at permission level 'j-remote-user'
UI_LOGIN_EVENT: User 'tools' login, class 'j-remote-user' [65153], ssh-connection 'xxx.xxx.xxx.xxx 42247 xxx.xxx.xxx.xxx 22', client-mode 'cli'
UI_CMDLINE_READ_LINE: User 'tools', command 'set interfaces '
UI_LOGOUT_EVENT: User 'tools' logout
(...)
As you notice I'm getting a LOGOUT_EVENT after each command entered. Of course exiting config mode immediately after entering it causes the set interfaces command to fail as it's no longer in config mode.
The Perl code I'm using is as follows...
#!/usr/bin/perl -w
use strict;
use lib qw(
    /usr/local/admin/protect/perl
    /usr/local/admin/protect/perl/share/perl/5.10.1
);
use Net::OpenSSH;

my $hostname = "XXXXX";
my $username = "tools";
my $password = "XXXXX";
my $timeout  = 60;

my $cmd1 = "edit private";
my $cmd2 = 'set interfaces xe-1/3/2 description "AVAIL: SOMETHING GOES HERE"';
my $cmd3 = "commit";
my $cmd4 = "exit";

my $ssh = Net::OpenSSH->new($hostname, user => $username, password => $password, timeout => $timeout,
    master_opts => [-o => "StrictHostKeyChecking=no"]);
$ssh->error and die "Unable to connect to remote host: " . $ssh->error;

my @lines = eval { $ssh->capture($cmd1) };
foreach (@lines) {
    print $_;
}
@lines = eval { $ssh->capture($cmd2) };
foreach (@lines) {
    print $_;
}
@lines = eval { $ssh->capture($cmd3) };
foreach (@lines) {
    print $_;
}
@lines = eval { $ssh->capture($cmd4) };
foreach (@lines) {
    print $_;
}
$ssh->system("quit");
The sequence of events is the same as when telnet was used. The only real change was in using SSH objects versus Telnet objects. I'm stumped. Any ideas you could provide would be quite helpful.
[SOLVED, sort of]
The suggestion to let Net::Telnet do the driving was the correct one. The following code works...
#!/usr/bin/perl -w
use strict;
use Net::OpenSSH;
use Net::Telnet;
use Data::Dumper;

my $promptEnd = '/\w+[\$\%\#\>]\s{0,1}$/o';
my $cmd1 = "show system uptime | no-more";
my $cmd2 = "show version brief | no-more";

my $hostname = "xxx.xxx";
my $username = "xxxxxxx";
my $password = "xxxxxxx";
my $timeout  = 60;

my $ssh = Net::OpenSSH->new(
    $hostname,
    user        => $username,
    password    => $password,
    timeout     => $timeout,
    master_opts => [ -o => "StrictHostKeyChecking=no" ]
);
$ssh->error and die "Unable to connect to remote host: " . $ssh->error;

my ( $fh, $pid ) = $ssh->open2pty( { stderr_to_stdout => 1 } );

my %params = (
    fhopen  => $fh,
    timeout => $timeout,
    errmode => 'return',
);
my $conn = Net::Telnet->new(%params);

$conn->waitfor($promptEnd);

my @lines = $conn->cmd($cmd1);
foreach (@lines) {
    print $_;
}
@lines = $conn->cmd($cmd2);
foreach (@lines) {
    print $_;
}
$conn->cmd("quit");
The problem I'm having is that I can't seem to separate the code into subroutines. Once the $conn object is returned from a subroutine, the underlying ssh connection drops. I need to separate this logic in order to not have to rewrite many, many programs and lines of code that rely on this pusher routine. However, that problem I'll direct to another question.
[Edit, fully solved]
Just an update in case anyone needs to do something similar.
While the above worked very well when run under a single subroutine, I found that any time I passed the handle to another subroutine, the telnet handle remained open, but the ssh connection dropped.
To solve this I found that if I passed the ssh handle to another subroutine, then later attached the open2pty, and attached Net::Telnet, then I could pass the Net::Telnet handle between subroutines without the underlying ssh connection dropping. This also worked for Net::Telnet::Cisco as well. I have this code working well with Cisco, Juniper, and Brocade routers.
You should also consider adding a few more parameters to the Net::Telnet->new() because it is interacting with ssh rather than a TELNET server.
-telnetmode => 0
-output_record_separator => "\r",
-cmd_remove_mode => 1,
Because there is no TELNET server on the remote side, -telnetmode => 0 turns off TELNET negotiation.
The end-of-line is most likely just a carriage return (i.e. -output_record_separator => "\r") rather than the TCP or TELNET combination of carriage-return line-feed ("\r\n").
Always strip the echoed-back input: -cmd_remove_mode => 1.
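Folded into the %params hash from the working example above, that would look roughly like this (I'm writing the option names in the same lowercase style as the existing ones; the values are the ones suggested in this answer):
my %params = (
    fhopen                  => $fh,
    timeout                 => $timeout,
    errmode                 => 'return',
    telnetmode              => 0,      # no TELNET negotiation: the peer is ssh, not a TELNET server
    output_record_separator => "\r",   # routers usually want a bare carriage return
    cmd_remove_mode         => 1,      # always strip the echoed-back command from the output
);
my $conn = Net::Telnet->new(%params);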
There are several possibilities:
Some routers accept having the sequence of commands sent up front via stdin:
my $out = $ssh->capture({stdin_data => join("\r\n", @cmds, '')});
In other cases you will have to use something like Expect to send a command, wait for the prompt to appear again, send another command, etc.
If you were using Net::Telnet before, the Net::OpenSSH docs explain how to integrate both (though I have to admit that combination is not very tested).
Also, some routers provide some way to escape to a full Unix-like shell, i.e. by prepending the command with a bang:
$ssh->capture("!ls");

Perl (tk): how to run asynchronously a system command, being able to react to it's output?

I'm writing a wrapper to an external command ("sox", if this can help) with Perl "Tk".
I need to run it asynchronously, of course, to avoid blocking Tk's MainLoop().
But I need to read its output to notify the user about the command's progress.
I am testing a solution like this one, using IPC::Open3:
{
    $| = 1;
    $pid = open3(gensym, ">&STDERR", \*FH, $cmd) or error("Error running command \"$cmd\"");
}
while (defined($ch = FH->getc)) {
    notifyUser($ch) if ($ch =~ /$re/);
}
waitpid $pid, 0;
$retval = $? >> 8;
POSIX::close($_) for 3 .. 1024;   # close all open handles (arbitrary upper bound)
But of course the while loop blocks MainLoop until $cmd terminates.
Is there some way to read output handle asynchronously?
Or should I go with standard fork stuff?
The solution should work under win32, too.
For non-blocking read of a filehandle, take a look at Tk::fileevent.
Here's an example script how one can use a pipe, a forked process, and fileevent together:
use strict;
use IO::Pipe;
use Tk;

my $pipe = IO::Pipe->new;
if (!fork) {   # Child XXX check for failed forks missing
    $pipe->writer;
    $pipe->autoflush(1);
    for (1..10) {
        print $pipe "something $_\n";
        select undef, undef, undef, 0.2;
    }
    exit;
}
$pipe->reader;

my $mw = tkinit;
my $text;
$mw->Label(-textvariable => \$text)->pack;
$mw->Button(-text => "Button", -command => sub { warn "Still working!" })->pack;
$mw->fileevent($pipe, 'readable', sub {
    if ($pipe->eof) {
        warn "EOF reached, closing pipe...";
        $mw->fileevent($pipe, 'readable', '');
        return;
    }
    warn "pipe is readable...\n";
    chomp(my $line = <$pipe>);
    $text = $line;
});
MainLoop;
Forking may or may not work under Windows. Also one needs to be cautious when forking within Tk; you must make sure that only one of the two processes is doing X11/GUI stuff, otherwise bad things will happen (X11 errors, crashes...). A good approach is to fork before creating the Tk MainWindow.

How do I fork correctly in a perl module for znc?

I'm currently writing an IRC bot. The scripts are loaded as perl modules in ZNC but the bot gets disconnected with an Input/Output error if I create a forked process. This is a working example script without fork, but this causes the bot to freeze until the script finishes doing its task.
package imdb;

use warnings;
use strict;

sub new
{
    my ($class) = @_;
    my $self = {};
    bless( $self, $class );
    return( $self );
}

sub OnChanMsg
{
    my ($self, $nick, $channel, $text) = @_;
    #unless (my $pid = fork()) {
    my $result = a_slow_process($text);
    ZNC::PutIRC( "PRIVMSG $channel :$result" );
    #    exit;
    #}
    return( ZNC::CONTINUE );
}

sub OnShutdown
{
    my ( $me ) = @_;
}

sub a_slow_process {
    my $input = shift;
    sleep 10;
    return "You said $input.";
}

1;
The fork code that is causing the error is commented out. How do I fix this?
Edited to add: I was told that ZNC::PutIRC should not be put in the child process.
A fork() call has effects on open file and socket handles, including:
File descriptors (and sometimes locks on those descriptors) are shared, while everything else is copied.
...
Beginning with v5.6.0, Perl will attempt to flush all files opened for output before forking the child process, but this may not be supported on some platforms (see perlport). To be safe, you may need to set $| ($AUTOFLUSH in English) or call the "autoflush()" method of "IO::Handle" on any open handles in order to avoid duplicate output.
and in general it is not a good idea to set up a socket connection in one process and try to read/write on that connection in a child process.
A workaround might be to make a new ZNC connection in the child process (after a_slow_process() is done), write your private message, and then close the new connection.
If you're not averse to rewriting your module in C++, ZNC has a CExecSock which wraps popen2() and should do what you need. You can look at the shell.cpp module for example usage.