What is the best way to handle parallel telnet connections to Cisco devices via Perl?
I need to open several telnet connections, keep them in the background, and feed them commands
interactively or in timed batches. Is it possible to do this with the AnyEvent or POE libraries?
Thanks.
Threads are a headache. An event loop, such as AnyEvent, is much simpler and more performant, especially if you want to submit commands in a timed fashion and need to handle thousands of connections.
See AnyEvent::Socket on how to open a connection and read & write data: http://metacpan.org/pod/AnyEvent::Socket
You could also use Net::Telnet on top of it, as it supports using an already opened filehandle: http://metacpan.org/pod/Net::Telnet#fhopen
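A rough sketch of that approach with AnyEvent::Handle (the device addresses, port, and commands below are placeholders; a real Cisco session will also need login handling and possibly telnet option negotiation, which Net::Telnet can take over via fhopen):
use strict;
use warnings;
use AnyEvent;
use AnyEvent::Socket;
use AnyEvent::Handle;

my %handles;
my @devices = qw(10.0.0.1 10.0.0.2);    # placeholder device addresses

for my $host (@devices) {
    tcp_connect $host, 23, sub {
        my ($fh) = @_;
        unless ($fh) {
            warn "$host: connect failed: $!\n";
            return;
        }
        my $hdl = AnyEvent::Handle->new(
            fh       => $fh,
            on_read  => sub {
                my $h = shift;
                print "$host: ", delete $h->{rbuf};   # dump whatever arrived
            },
            on_error => sub { $_[0]->destroy; delete $handles{$host} },
            on_eof   => sub { $_[0]->destroy; delete $handles{$host} },
        );
        $hdl->push_write("terminal length 0\r\n");    # example command
        $handles{$host} = $hdl;
    };
}

# Timed batch: send a command to every open connection every 30 seconds.
my $timer = AE::timer 30, 30, sub {
    $_->push_write("show clock\r\n") for values %handles;
};

AnyEvent->condvar->recv;    # run the event loop forever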
If you run into trouble with AnyEvent, just ask a new question.
The simplest way is to use threads. You could use queues to send commands and receive output back and forth.
You could simply create x number of threads, then queue a lot of commands and send them.
If you need to process the output, it is a bit trickier (see the sketch after the example below).
http://metacpan.org/pod/Thread::Queue
It could be solved with event-based modules as well, but those need a very different approach. With threads you can write a non-threaded function first and then convert it into a threaded one easily.
# Without processing the output
use strict;
use warnings;
use threads;
use Thread::Queue;

my $q = Thread::Queue->new();    # A new empty queue
my $maxThreads = 20;

# Create worker threads
my @threads;
for (1 .. $maxThreads) {
    push @threads, threads->create(
        sub {
            # Thread will loop until no more work
            while (defined(my $cmd = $q->dequeue())) {
                do_something($cmd);
            }
        }
    );
}

# Send work to the threads
$q->enqueue($cmd1, ...);    # your commands go here

# Signal that there is no more work to be sent
$q->end();

# Join up with the threads when they finish
$_->join() for @threads;
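If you also need the output back in the main thread, a second queue can carry results from the workers. A minimal sketch, where do_something is a placeholder for whatever runs the telnet command:
use strict;
use warnings;
use threads;
use Thread::Queue;

my $cmd_q    = Thread::Queue->new();
my $result_q = Thread::Queue->new();

my @workers = map {
    threads->create(sub {
        while (defined(my $cmd = $cmd_q->dequeue())) {
            my $output = do_something($cmd);            # placeholder for the telnet work
            $result_q->enqueue("[$cmd] $output");
        }
    });
} 1 .. 4;

$cmd_q->enqueue('show version', 'show ip int brief');   # example commands
$cmd_q->end();                                          # no more work

$_->join() for @workers;                                # wait for the workers
$result_q->end();

while (defined(my $line = $result_q->dequeue_nb())) {
    print "$line\n";
}

sub do_something { my ($cmd) = @_; return "output of $cmd" }   # stub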
Related
I'm very new to Tcl/Tk and have been dealing with an issue for the last couple of days. Basically I have a server written in C and a client GUI written in Tcl/Tk. So far it doesn't do a ton. To test it, I start up the server so that it's listening for connections, then run my GUI. When I click one of the buttons, the GUI should open up a separate toplevel window with a text widget embedded in it. (This part works.) Then, my client connects to the server and gives it a couple of settings, and through this the server decides what info to send back. The server's response is what gets printed to that second window's text widget.
What I'm trying to add in now is a Stop button. Right now, my server is set up to wait a couple of seconds, then write the same message to the client. This is set up inside a loop that is waiting to hear a "Stop" command from my client. I have a Stop button in the GUI with a command set up to write that command to the server when clicked. However, all of my buttons get frozen as soon as I hit the begin button and messages are written to the client.
Basically, how can I keep allowing my server to write to my client while still keeping the rest of my GUI usable? I want my client to write a new line to the text widget on my separate window whenever it receives a new message from the server, but I still want the main GUI window that has all my command buttons to behave independently.
In general, it depends on whether what you are doing is CPU-intensive (where reading from a plain file counts as CPU-intensive) or I/O-intensive (where running things in another process counts as I/O-intensive; database calls often count as CPU-intensive here despite not really needing to). I'm only going to mention summaries of what's going on as you aren't quite providing enough information.
For I/O-intensive code, you want to structure your code to be event-driven. Tcl has good tools for this, in that fileevent works nicely on sockets, terminals and pipelines on all supported platforms. The coroutine system of Tcl 8.6 can help a lot with preventing the callbacks required from turning your code into a tangled mess!
For CPU-intensive code, the main option is to run in another thread. That thread won't be able to touch the GUI directly (which in turn will be free to be responsive), but will be able to do all the work and send messages back to the main thread with whatever UI updates it wants done. (Technically you can do this with I/O-intensive code too, but it's more irritating than using a coroutine.) Farming things out to a subprocess is just another variation on this where the communications are more expensive (but much isolation is enforced by the OS).
If you're dealing with sockets, you're probably I/O-intensive. Assume that until you show otherwise. Here's a simple example:
proc gets_async {sock} {
    fileevent $sock readable [info coroutine]
    while {[gets $sock data] < 0 && [fblocked $sock]} {
        yield
    }
    fileevent $sock readable {}
    return $data
}

proc handler {sock} {
    set n 0
    while {![eof $sock]} {
        # Write to the server
        puts $sock "this is message [incr n] to the server"
        # Read from the server
        puts [gets_async $sock]
    }
    close $sock
}

proc launchCommunications {host port} {
    set sock [socket $host $port]
    fconfigure $sock -blocking 0 -buffering line -encoding utf-8
    coroutine comms($host:$port) handler $sock
}
Note that gets_async is much like coroutine::util gets in Tcllib.
I've added advisory locking to Sqitch, using Postgres advisory locks and MySQL GET_LOCK(). This feature prevents more than one instance of Sqitch from deploying to a database at one time. This works great, but I wanted to add a lock timeout, too, so that one never finds a CI/CD process hung for hours or days because something went amiss.
MySQL's GET_LOCK() supports a timeout argument, but Postgres advisory locks do not. Since I thought it likely that other database engines would also not have timeouts, I thought it best to implement the timeout in Perl. Following the DBI manual, I used Sys::SigAction to set and handle the timeout:
# Try waiting for the lock.
require App::Sqitch::SigAction;
return $self->_locked(1) unless App::Sqitch::SigAction::timeout_call($wait, sub {
    $self->wait_lock
});
I also added tests to confirm it works with both MySQL and Postgres. So far so good.
Alas, Sys::SigAction does not work on Windows. I took a stab at testing it on Windows, but since Windows Perl is not compiled with d_sigaction, which Sys::SigAction requires, I didn't get far. I tried implementing the Perl-standard alarm/$SIG{ALRM} pattern, but it failed to send the signal while waiting on the Postgres lock.
Which has led me here and to my question: what is the best cross-platform pattern for timing out some execution in Perl? Ideally it has a straightforward interface, works on *nix and Windows, and effectively handles breaking out of a database query.
I ended up ditching Sys::SigAction following discussion here and elsewhere, and instead switched to:
1. Letting the database handle the timeout, as MySQL's get_lock() does
2. Adding a simple interface for polling with exponential backoff and a timeout that engines can use to poll for a lock instead of waiting (similar to Retry::Backoff); a sketch of such a helper follows the Postgres example below
3. Switching the Postgres implementation to use the async query support in DBD::Pg to send off the lock request, then using the backoff/timeout interface to check whether it has returned, and cancelling the query if it times out
I was especially pleased to realize I could do #3, as I originally used the timeout/backoff interface to poll with pg_try_advisory_lock(key), which just feels heavy. Better to asynchronously call pg_advisory_lock(key) and poll for its response. It looks like this:
sub wait_lock {
    my $self = shift;

    # Asynchronously request a lock with an indefinite wait.
    my $dbh = $self->dbh;
    $dbh->do(
        'SELECT pg_advisory_lock(75474063)',
        { pg_async => DBD::Pg::PG_ASYNC() },
    );

    # Use _timeout to periodically check for the result.
    return 1 if $self->_timeout(sub { $dbh->pg_ready && $dbh->pg_result });

    # Timed out; cancel the query and return false.
    $dbh->pg_cancel;
    return 0;
}
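The _timeout helper isn't shown here; a hypothetical sketch of such a poll-with-exponential-backoff method, assuming a lock_timeout accessor in seconds (not the actual Sqitch code), might look like this:
# Hypothetical sketch: call $code until it returns true or lock_timeout
# seconds have passed, backing off exponentially between checks.
use Time::HiRes qw(sleep);

sub _timeout {
    my ($self, $code) = @_;
    my $delay   = 0.1;                    # initial delay in seconds (assumed)
    my $elapsed = 0;
    while ($elapsed < $self->lock_timeout) {
        return 1 if $code->();
        sleep $delay;
        $elapsed += $delay;
        $delay   *= 2;                    # exponential backoff
        $delay    = 5 if $delay > 5;      # cap the delay (assumed)
    }
    return $code->() ? 1 : 0;             # one last check before giving up
}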
Of course the MySQL implementation is simpler, since get_lock() does all the work:
sub wait_lock {
    my $self = shift;
    $self->dbh->selectcol_arrayref(
        q{SELECT get_lock('sqitch working', ?)},
        undef, $self->lock_timeout,
    )->[0];
}
Imagine inter-process communication:
+--------------+                               +------------+
| main_process | ==produces data somewhat to=> | monitoring |
+--------------+                               +------------+
where:
the main_process is running non-stop and produces some data for monitoring
the monitoring is running only sometimes, and when it is running it should read the data produced by main_process
and when the monitoring is not running, the data produced by main_process should not be saved.
The question is: how do I write this kind of "on demand" IPC?
The code for main_process is basically the following (the real one is more complicated):
use 5.014;
use warnings;
my $box = BlackBox->new( callback => sub {
    my ($self, $jref) = @_;
    #
    # processing of $jref
    #
});
$box->run();
The callback is called every 2-5 seconds and, as I said above, this process should run non-stop. I can't change the BlackBox.
I need:
to write the "processing of $jref" part, which should send $jref somewhere,
and the monitoring process itself, which should read the data when it is running...
I don't need any code, only some pointers in the right direction, or an idea of how to do this without filling up my memory or HDD. So the simplest way,
writing $jref to a file, is not suitable because it will fill my HDD when the monitoring is not running.
If anyone cares, $jref is a reference to a JSON string, so I can do:
use JSON::XS qw(decode_json);
my $perlref = decode_json($$jref);
My first thought was "UDP to localhost?"
Variations on that idea include AF_UNIX or a named pipe. With a stream socket you'd do a non-blocking connect, and with the pipe you'd do O_WRONLY|O_NONBLOCK, and if you get EAGAIN just return without writing.
You can save your file handle and reuse it across multiple calls, just close it and reopen if you get EPIPE. You'll want $SIG{PIPE}='IGNORE'; hopefully the black box doesn't object to that.
The reading side is as simple as cat $path_to_fifo or nc -l -u -p $udpport, slightly harder if you do an AF_UNIX socket.
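For the named-pipe variant, the writing side in main_process might look roughly like this (the FIFO path is hypothetical; the monitor would create it with mkfifo and read from it only while running):
use strict;
use warnings;
use Fcntl qw(O_WRONLY O_NONBLOCK);
use Errno qw(EPIPE);

$SIG{PIPE} = 'IGNORE';
my $fifo = '/tmp/monitor.fifo';    # hypothetical path, created by the monitor
my $fh;

# Call this from the BlackBox callback with the $jref you receive.
sub send_to_monitor {
    my ($jref) = @_;

    # Opening write-only + non-blocking fails with ENXIO when no reader
    # has the FIFO open, so the data is simply dropped.
    unless ($fh) {
        sysopen($fh, $fifo, O_WRONLY | O_NONBLOCK) or return;
    }

    unless (defined syswrite($fh, $$jref . "\n")) {
        # Reader went away (EPIPE) or the pipe is full (EAGAIN): drop the
        # data; on EPIPE close so the next call tries to reopen.
        if ($! == EPIPE) { close $fh; undef $fh }
        return;
    }
    return 1;
}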
It sounds like using a socket will be enough. Choose a port number P and try to connect to the monitor from the main application (localhost:P). If the port is open, send the actual data, not the reference. If the port is closed then just ignore the data. Your monitor will listen on P and process any data received through this port.
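A minimal sketch of that idea from the main_process side, using a hypothetical port 7777 (the monitor would listen on the same port and read lines):
use strict;
use warnings;
use IO::Socket::INET;

# Try to hand the JSON to the monitor; if nothing is listening, just drop it.
sub push_to_monitor {
    my ($json) = @_;
    my $sock = IO::Socket::INET->new(
        PeerAddr => '127.0.0.1',
        PeerPort => 7777,          # hypothetical port the monitor listens on
        Proto    => 'tcp',
        Timeout  => 1,             # don't stall the main process
    ) or return;                   # monitor not running: ignore the data
    print {$sock} $json, "\n";
    close $sock;
    return 1;
}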
Problem:
scriptA.cgi is sitting in an infinite loop and handling an open socket to a Flash client.
scriptB.cgi is called from the web, does what it needs to do and then needs to inform scriptA to send a message to the client.
Is this possible? I'm stuck on how to have scriptB identify the instance of scriptA that is sitting there with the socket connection, rather than launching one of its own.
All thoughts appreciated.
If the communication needs are simple, this is a good application for signals.
Edited to store the process id from scriptA in a file and read it in scriptB -- scripts A and B have to agree on the file name.
# script B
do_scriptB_job();
if (open(my $PID_FILE, "<", "scriptA.pid.file")) {
    chomp(my $process_id_for_scriptA = <$PID_FILE>);
    close $PID_FILE;
    kill 'USR1', $process_id_for_scriptA;  # makes scriptA run the SIGUSR1 handler
}

# script A
open(my $PID_FILE, ">", "scriptA.pid.file");
print $PID_FILE $$;
close $PID_FILE;

my $signaled = 0;
$SIG{"USR1"} = sub { $signaled = 1 };  # simple SIGUSR1 handler: set a flag

while ( in_infinite_loop ) {
    if ($signaled) {
        # this block runs only if SIGUSR1 was received
        # since the last time this block was run
        send_a_message_to_the_client();
        $signaled = 0;
    } else {
        do_something_else();
    }
}
unlink "scriptA.pid.file";  # cleanup
When script A receives a SIGUSR1 signal, the script will be interrupted to run the USR1 signal handler, setting $signaled. The thread of execution will then resume and the script can use the information.
Have scriptA store its pid somewhere (in a db with some kind of id), then scriptB can look up the pid in the db and send a signal to scriptA.
Edit:
Answering the question asked in a comment:
The pid of the process can be found using Perl's builtin variable $$ (also available as $PID or $PROCESS_ID via the English module).
See perlvar for details.
I hope this is the ID you were looking for. If not, you'll have to find a way to separate the different scriptA instances. (Perhaps by session id, or socket. Here I can't help you further.)
Other people have mentioned how to get the PID (if you didn't fork() it yourself, just have the other process write it... somewhere... that both processes know about, or walk the process table, but that's a horrible solution and completely unscalable beyond a singleton).
Since you note that any thoughts are welcome, note that perldoc perlipc explains a variety of mechanisms you might use for the actual communication:
NAME
perlipc - Perl interprocess communication (signals, fifos, pipes, safe
subprocesses, sockets, and semaphores)
DESCRIPTION
The basic IPC facilities of Perl are built out of the good old Unix
signals, named pipes, pipe opens, the Berkeley socket routines, and SysV
IPC calls. Each is used in slightly different situations.
Domain sockets: http://www.perl.com/doc/FMTEYEWTK/IPC/unix.html
I was tempted to answer 'send signals' or 'use some kind of IPC to talk between apps', but a far easier and more scalable approach is to use an SQLite (or other) database that all scripts can talk to.
ScriptA.cgi would poll the database by doing something like 'SELECT event FROM events WHERE clientID=?'.
ScriptB.cgi would simply insert a row into the events table with the right clientID.
That avoids all of the 'find the pid' mess and also means that you don't get the blocking IO problems you would get with named pipes or if one script crashed.
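A rough sketch of that idea with DBI and DBD::SQLite (the file name, table layout, and client id are hypothetical, and send_a_message_to_the_client stands in for your own code):
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=events.db', '', '',
                       { RaiseError => 1, AutoCommit => 1 });
$dbh->do('CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, clientID TEXT, event TEXT)');

my $client_id = 'client42';    # hypothetical client identifier

# scriptB.cgi: record that scriptA should send a message to this client
$dbh->do('INSERT INTO events (clientID, event) VALUES (?, ?)',
         undef, $client_id, 'send_message');

# scriptA.cgi: inside its loop, pick up and clear any pending events
my $events = $dbh->selectcol_arrayref(
    'SELECT event FROM events WHERE clientID = ?', undef, $client_id);
if (@$events) {
    $dbh->do('DELETE FROM events WHERE clientID = ?', undef, $client_id);
    send_a_message_to_the_client($_) for @$events;
}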
I'm working on a mathematical model that uses data generated from XFOIL, a popular aerospace tool used to find the lift and drag coefficients on airfoils.
I have a Perl script that calls XFOIL repeatedly with different input parameters to generate the data I need. I need XFOIL to run 5,600 times, at around 100 seconds per run, so it will take about 6.5 days to complete.
I have a quad-core machine, but my experience as a programmer is limited, and I really only know how to use basic Perl.
I would like to run four instances of XFOIL at a time, all on their own core. Something like this:
while ( 1 ) {
    for ( i = 1..4 ) {
        if ( ! exists XFOIL_instance(i) ) {
            start_new_XFOIL_instance(i, input_parameter_list);
        }
    }
}
So the program is checking (or preferably sleeping) until an XFOIL instance is free, when we can start a new instance with the new input parameter list.
Try Parallel::ForkManager. It's a module that provides a simple interface for forking off processes like this.
Here's some example code:
#!/usr/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;

my @input_parameter_list =
    map { join '_', ('param', $_) }
    ( 1 .. 15 );

my $n_processes = 4;
my $pm = Parallel::ForkManager->new( $n_processes );
for my $i ( 1 .. $n_processes ) {
    $pm->start and next;

    my $count = 0;
    foreach my $param_set (@input_parameter_list) {
        $count++;
        # Each child handles every $n_processes-th parameter set
        if ( ( $count % $n_processes ) == ( $i - 1 ) ) {
            if ( !output_exists($param_set) ) {
                start_new_XFOIL_instance($param_set);
            }
        }
    }

    $pm->finish;
}
$pm->wait_all_children;

sub output_exists {
    my $param_set = shift;
    return ( -f "$param_set.out" );
}

sub start_new_XFOIL_instance {
    my $param_set = shift;
    print "starting XFOIL instance with parameters $param_set!\n";
    sleep( 5 );
    touch( "$param_set.out" );
    print "finished run with parameters $param_set!\n";
}

sub touch {
    my $fn = shift;
    open my $fh, '>', $fn or die $!;
    close $fh or die $!;
}
You'll need to supply your own implementations for the start_new_XFOIL_instance and the output_exists functions, and you'll also want to define your own sets of parameters to pass to XFOIL.
It looks like you could use Gearman for this project.
www.gearman.org
Gearman is a job queue. You can split your workflow into a lot of small parts.
I would recommend using Amazon (AWS), or even their auction-based spot servers, to complete this project.
Spending 10 cents per computing hour or less can significantly speed up your project.
I would use Gearman locally and make sure you have a "perfect" run for 5-10 of your subjobs before handing it off to an Amazon compute farm.
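Very roughly, the split might look like this; the method calls are from the Gearman::Worker / Gearman::Client CPAN modules as I remember them, so double-check the docs, and run_xfoil plus the server address are placeholders:
# worker.pl: runs on each machine/core and executes one XFOIL case per job
use strict;
use warnings;
use Gearman::Worker;

my $worker = Gearman::Worker->new;
$worker->job_servers('127.0.0.1:4730');      # gearmand address (placeholder)
$worker->register_function(run_xfoil => sub {
    my $job    = shift;
    my $params = $job->arg;
    # ... invoke XFOIL with $params here ...
    return "done: $params";
});
$worker->work while 1;

# client.pl: queues all the parameter sets and waits for the results
use strict;
use warnings;
use Gearman::Client;

my $client = Gearman::Client->new;
$client->job_servers('127.0.0.1:4730');
my $taskset = $client->new_task_set;
for my $params (map { "param_$_" } 1 .. 5600) {
    $taskset->add_task(run_xfoil => $params, {
        on_complete => sub { print ${ $_[0] }, "\n" },
    });
}
$taskset->wait;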
Perl threads will take advantage of multiple cores and processors. The main pro of threads is that it's fairly easy to share data between the threads and coordinate their activities. Forked processes cannot easily return data to the parent nor coordinate amongst themselves.
The main cons of Perl threads are that they are relatively expensive to create compared to a fork (they must copy the entire program and all its data); you must have them compiled into your Perl; and they can be buggy (the older the Perl, the buggier the threads). If your work is expensive, the creation time should not matter.
Here's an example of how you might do it with threads. There are many ways to do it; this one uses Thread::Queue to create a big list of work your worker threads can share. When the queue is empty, the threads exit. The main advantages are that it's easier to control how many threads are active, and you don't have to create a new, expensive thread for each bit of work.
This example shoves all the work into the queue at once, but there's no reason you can't add to the queue as you go. If you were to do that, you'd use dequeue instead of dequeue_nb, which will wait around for more input.
use strict;
use warnings;
use threads;
use Thread::Queue;

# Dummy work routine
sub start_XFOIL_instance {
    my $arg = shift;
    print "$arg\n";
    sleep 1;
}

# Read in dummy data
my @xfoil_args = <DATA>;
chomp @xfoil_args;

# Create a queue to push work onto and the threads to pull work from.
# Populate it with all the data up front so threads can finish when
# the queue is exhausted. Makes things simpler.
# See https://rt.cpan.org/Ticket/Display.html?id=79733
my $queue = Thread::Queue->new(@xfoil_args);

# Create a bunch of threads to do the work
my @threads;
for (1..4) {
    push @threads, threads->create( sub {
        # Pull work from the queue, don't wait if it's empty
        while( my $xfoil_args = $queue->dequeue_nb ) {
            # Do the work
            start_XFOIL_instance($xfoil_args);
        }

        # Yell when the thread is done
        print "Queue empty\n";
    });
}

# Wait for threads to finish
$_->join for @threads;
__DATA__
blah
foo
bar
baz
biff
whatever
up
down
left
right
Did you consider GNU parallel?
It will allow you to run several instances of your program with different inputs and
fill your CPU cores as they become available. It's often a very simple and efficient way to parallelize simple tasks.
This question is quite old, but if someone is still looking for suitable answers, you might want to consider the Perl Many-Core Engine (MCE).
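A rough sketch with MCE::Map, with the usage written from memory, so verify it against the MCE documentation; the parameter strings are placeholders and the XFOIL call is stubbed out:
use strict;
use warnings;
use MCE::Map;

# Configure the worker pool (options assumed; see the MCE::Map docs)
MCE::Map::init { max_workers => 4, chunk_size => 1 };

my @param_sets = map { "param_$_" } 1 .. 5600;    # placeholder inputs

my @results = mce_map {
    # run_xfoil($_) would invoke XFOIL here; a string stands in for the result
    "finished $_";
} @param_sets;

MCE::Map::finish;
print scalar(@results), " runs completed\n";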