Inserting Expect into a Perl loop

I have the following script that runs a command and puts the data in a DB. I need to account for the possibility that the command will sometimes prompt for a password ("password:"). How do I wrap an Expect call into this?
#!/usr/software/bin/perl
use strict;
use DBI;
use DatabaseLib;
use Data::Dumper;
use Expect;

# Connect to database
my $dbh = DBI->connect($DB_CONNECT_STRING, $DB_USER, $DB_PASSWORD, { RaiseError => 1, AutoCommit => 1 })
    or die "failed to connect to database: $DB_CONNECT_STRING";

my $expect = Expect->new;
my %burtHash;
my @cols = qw/date_create sub_by impact date-lastmod lastmod-by bug_rel case_score state s p type subtype subteam found_by target_release/;
my @burtInfo = `mycommand`;
my $timeout  = 20;
my $password = "password";

while (my $ele = shift(@burtInfo)) {
    my ($index, @data) = split(/\s+/, $ele);
    for my $i (0 .. $#cols) {
        $burtHash{$index}->{$cols[$i]} = shift(@data);
    }
    for my $id (keys %burtHash) {
        my %burt_details;
        for my $col (keys %{ $burtHash{$id} }) {
            $burt_details{$col} = $burtHash{$id}->{$col};
        }
        if ($id =~ /\d+/) {
            burt_update(
                $dbh,
                $id,
                \%burt_details,
            );
        }
    }
}
I think I just need to put in something like this and call it, but I'm not sure where or how:
$expect->expect($timeout,
    [ qr/password:/i,
      sub {
          my $self = shift;
          $self->send("$password\n");
          exp_continue;
      }
    ]);

You're not using $expect anywhere there. You have to run your command via $expect->spawn so that your Expect object can handle things, and then you'll need some way of gathering its output (for instance, using $expect->log_file(...) to point the log at a string or a filehandle).
Once you're using $expect->spawn, then you can insert your password check. But there's no way you can do this with qx (the backticks).
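A minimal sketch of that approach, assuming mycommand stands in for the real command and that everything the process prints should end up in @burtInfo:

use strict;
use warnings;
use Expect;

my $timeout  = 20;
my $password = "password";

# Spawn the command under Expect instead of running it with backticks.
my $exp = Expect->spawn("mycommand")
    or die "Cannot spawn mycommand: $!";

$exp->log_stdout(0);                       # don't echo the child's output to our STDOUT
my $output = '';
$exp->log_file(sub { $output .= shift });  # accumulate the child's output in a string

# Answer the password prompt if it appears, then keep reading until EOF.
$exp->expect($timeout,
    [ qr/password:/i => sub {
          my $self = shift;
          $self->send("$password\n");
          exp_continue;
      }
    ],
);
$exp->soft_close;

# $output now holds what the backticks used to return.
my @burtInfo = split /^/m, $output;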

Related

Append result data to a scalar variable with Parallel::ForkManager in Perl

I have code that works as expected, but I'm having difficulty storing the output of each executed command from $result = $ssh->capture($command_to_execute);. The script uses the Parallel::ForkManager module to run the commands on different hosts, using different files as input parameters.
Once the command execution is done, I want the output stored in the $result variable. It should append each host's results to the same variable, and at the end I want to process the values in $result. I am using .= to append the result data to $result, but it doesn't seem to be working.
Pasting my code here for reference:
...
my $result;
my $pm = Parallel::ForkManager->new(5);
DATA_LOOP:
foreach my $n (1..$num_buckets) {
    my $pid = $pm->start and next DATA_LOOP;
    $command_to_execute = $my_command." ".$Files{$n};
    my $ssh = SSH_Connection( $list_of_ips[$n-1], 'username', 'pass' );
    $result = $ssh->capture($command_to_execute);
    $result .= "Result from File:$Files{$n} and Host:$list_of_ips[$n-1] is $result\n";
    print "Result: INSIDE: $result";
    $pm->finish;
}
$pm->wait_all_children;
print "Result: OUTSIDE: $result";
print "Done\n";

sub SSH_Connection {
    my ( $host, $user, $passwd ) = @_;
    my $ssh = Net::OpenSSH->new($host,
        user        => $user,
        password    => $passwd,
        master_opts => [-o => "StrictHostKeyChecking=no"]
    );
    $ssh->error and die "Couldn't establish SSH connection: " . $ssh->error;
    return $ssh;
}
print "Result: INSIDE: $result"; Could able to print result one by one. But print "Result: OUTSIDE: $result";
is empty, which should actually have the combined results of $results which has been taken from inside the for loop.
As shown in the documentation of Parallel::ForkManager, to get a result from a child, you need to supply a reference to the result as another parameter to finish.
$pm->finish(0, [$Files{$n}, $list_of_ips[$n-1], $result]);
Use run_on_finish to gather the results:
my $result;
$pm->run_on_finish( sub {
    my ($pid, $exit_code, $ident, $exit_signal, $core_dump, $single_result) = @_;
    $result .= "Result from File: $single_result->[0] and Host: $single_result->[1]"
             . " is $single_result->[2]\n";
});
Each time you run $pm->start, you are forking a new process to run the code until $pm->finish. This forked process cannot affect the parent process in any way except by the mechanism Parallel::ForkManager provides to send data back to the parent. This mechanism is described at https://metacpan.org/pod/Parallel::ForkManager#RETRIEVING-DATASTRUCTURES-from-child-processes.
$pm->run_on_finish(sub {
    my ($pid, $exit_code, $ident, $exit_signal, $core_dump, $data) = @_;
    my $result = $$data;
    ...
});
DATA_LOOP:
foreach my $n (1..$num_buckets) {
    my $pid = $pm->start and next DATA_LOOP;
    ...
    $pm->finish(0, \$result);
}
Forking is in fact not needed for these operations if you are willing to restructure a bit. Net::OpenSSH can provide commands that can be managed simultaneously by an event loop such as IO::Async::Loop, thus all Perl operations will occur in the same process (but not necessarily in the order they appear). Since IO::Async::Loop->run_process returns a Future, Future::Utils provides a way to manage concurrency of these commands.
use strict;
use warnings;
use Net::OpenSSH;
use IO::Async::Loop;
use Future::Utils 'fmap_concat';

my $loop = IO::Async::Loop->new;
my $future = fmap_concat {
    my $n = shift;
    ...
    my $remote_command = $ssh->make_remote_command($command_to_execute);
    return $loop->run_process(command => $remote_command, capture => ['stdout'])
        ->transform(done => sub { "Result from File:$Files{$n} and Host:$list_of_ips[$n-1] is $_[0]\n" });
} foreach => [1..$num_buckets], concurrent => 5;
my @results = $future->get;
There is a lot of flexibility in how the individual and overall Futures (the latter returned by fmap_concat) are managed, but by default any failure to execute a process will cause the whole Future to fail immediately (making get throw an exception), while any nonzero exit status is ignored.
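If one failed command shouldn't abort the others, one option (a sketch, assuming the same variables as above) is to catch each per-command failure with Future's else method and convert it into a successful result:

use Future;

return $loop->run_process(command => $remote_command, capture => ['stdout'])
    ->transform(done => sub { "Result from File:$Files{$n} and Host:$list_of_ips[$n-1] is $_[0]\n" })
    ->else(sub {
        my ($failure) = @_;
        # Turn the failed Future into a done one so fmap_concat carries on.
        return Future->done("Host:$list_of_ips[$n-1] failed: $failure\n");
    });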

Can Net::LDAP and Parallel::ForkManager work together?

I need to query different LDAP servers in Perl via Net::LDAP. I have something that works well. However, in an attempt to speed things up, I tried to query the different servers in parallel using Parallel::ForkManager - and things do not work when I do that.
I get the following types of errors:
decode error 02<=>30 0 8 at /Users/myname/perl5/perlbrew/perls/perl-5.18.2/lib/site_perl/5.18.2/Convert/ASN1/_decode.pm line 113, <> line 18.
decode error 43<=>30 0 8 at /Users/myname/perl5/perlbrew/perls/perl-5.18.2/lib/site_perl/5.18.2/Convert/ASN1/_decode.pm line 113, <> line 25.
at the line that gets the search response:
$mesg = $ldap->search( base => $dn, filter => '(CN=*)');
I am puzzled.
In other words, why does this fail:
use Net::LDAP;
use Parallel::ForkManager;
use Try::Tiny;

...;    # create the ForkManager and bind the LDAP servers into @servers

while (<>) {
    chop;
    my $dn = $_;
    foreach my $ldap (@servers) {
        my $pid = $pm->start and next;    # do the fork
        print $dn, $pid;
        my $mesg;
        try {
            $mesg = $ldap->search( base => $dn, filter => '(CN=*)' );
        } catch {
            ...;
        };
        $pm->finish;
    }
}
while this:
use Net::LDAP;
use Try::Tiny;

...;    # bind the LDAP servers into @servers

while (<>) {
    chop;
    my $dn = $_;
    foreach my $ldap (@servers) {
        print $dn;
        my $mesg;
        try {
            $mesg = $ldap->search( base => $dn, filter => '(CN=*)' );
        } catch {
            ...;
        };
    }
}
works perfectly?
Whilst forking doesn't have quite the same thread-safety problems as threading, there are still a few places with gotchas, and I think this is what's biting you: your Net::LDAP objects are created in the parent process, but then (effectively) cloned into each child when you fork.
Which means that in your code there's a very real possibility that, if the list of names comes in fast enough, a new fork will try to reuse an existing Net::LDAP connection before a previous one is finished with it.
The easy way of preventing this is to call wait_all_children, to ensure all your parallel LDAP queries are finished before the next batch starts.
If you put your LDAP bind within the ForkManager loop, do you still have the same problem? I appreciate that's a potential overhead, as you'll be binding on each iteration, but if that addresses it, I'd suggest it's because Net::LDAP is sharing the same file descriptors between forks.
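A minimal sketch of that rearrangement, with hypothetical @server_hosts, $bind_dn and $bind_pw standing in for the real configuration:

foreach my $host (@server_hosts) {
    my $pid = $pm->start and next;

    # Each child creates its own connection, so no file descriptors
    # are shared with the parent or with sibling children.
    my $ldap = Net::LDAP->new($host) or die "connect to $host failed: $@";
    my $mesg = $ldap->bind($bind_dn, password => $bind_pw);
    die "bind failed: " . $mesg->error if $mesg->code;

    $mesg = $ldap->search( base => $dn, filter => '(CN=*)' );
    # ... process $mesg ...

    $ldap->unbind;
    $pm->finish;
}
$pm->wait_all_children;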
The next best solution would be to adopt a 'worker' model, where you've got a bunch of workers, each with their own LDAP connection, doing the querying. That's easier with threading than forking - it goes a bit like this:
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;

sub server_worker {
    my ( $hostname, $q_ref ) = @_;
    ## connect LDAP to $hostname;
    while ( my $dn = $$q_ref->dequeue ) {
        # query $dn
    }
}

my @queues;
foreach my $server (@list_of_servers) {
    my $server_q = Thread::Queue->new();
    push( @queues, $server_q );
    threads->create( \&server_worker, $server, \$server_q );
}

while ( my $dn = <STDIN> ) {
    chomp($dn);
    foreach my $q (@queues) {
        $q->enqueue($dn);
    }
}

foreach my $q (@queues) {
    $q->end;
}
foreach my $thr ( threads->list ) {
    $thr->join();
}
Doing something similar with forking should work:
#!/usr/bin/perl
use strict;
use warnings;
use IO::Pipe;
use Parallel::ForkManager;
use Net::LDAP;

my @list_of_servers = qw ( servername servenama anotherserver );
my $pm = Parallel::ForkManager->new( scalar @list_of_servers );

my %pipe_for;
foreach my $server (@list_of_servers) {
    my $pipe = IO::Pipe->new();
    my $pid  = $pm->start;
    if ($pid) {    # parent: keep the write end of the pipe
        print "$$: parent\n";
        $pipe->writer->autoflush;
        $pipe_for{$server} = $pipe;
    }
    else {         # child: read work items from the pipe
        print "$$: child connecting to $server\n";
        $pipe->reader->autoflush;
        close(STDIN);    # because this is a child.
        # Net::LDAP setup
        while ( my $item = <$pipe> ) {
            chomp($item);
            # ldap_search $item;
        }
        $pm->finish;     # only the child should call finish
    }
}
And then send stuff:
for my $number ( 1..10 ) {
    foreach my $pipe ( values %pipe_for ) {
        print {$pipe} "test$number\n";
    }
}
$pm->wait_all_children();
Edit: Note that autoflush is important, otherwise the IO buffers and it doesn't look like it's working. I'm pretty sure closing STDIN is a good idea in the child, although perhaps not vitally necessary if the children don't use it.

Perl: Using IPC::Shareable for pooling Net::Server connections

I am trying to have a pool of shared connections that can be accessed by Net::Server instances. Unfortunately IPC::Shareable does not allow me to store the connections, as they are code references. This is a stripped-down version of the code:
use IPC::Shareable (':lock');
use parent 'Net::Server::Fork';
use Carp;
use MyConnectClass;

sub login {
    return MyConnectClass->new();
};

my %connection;
tie %connection, 'IPC::Shareable', 'CONN', {
    'create'    => 1,
    'exclusive' => 0,
    'mode'      => 0666,
    'destroy'   => 'yes',
} or croak 'Can not tie connection variable';

sub add_connection {
    my $id  = shift(@_);
    my $con = shift(@_);
    $connection{$id} = $con;
};

sub get_connection {
    my $id = # .. find unused connection
    return $connection{$id};
}

sub process_request {
    my $self = shift(@_);
    eval {
        my $connection = get_connection();
        my $line = <STDIN>;
        # .. use $connection to fetch data for user
    };
};

for (my $i=0; $i<10; $i++) {
    add_connection($i, &login);
};

main->run(
    'host'       => '*',
    'port'       => 7000,
    'ipv'        => '*',
    'max_server' => 3,
);
Unfortunately the program dies after the first login: 'Can't store CODE items at ../../lib/Storable.pm'. This happens even when hiding $connection in an anonymous array. I am looking for an alternative way to implement the pool.
I appreciate your support.
I am unable to propose an alternative module, but I can make a suggestion which may or may not be of use. While you cannot store CODE, you can store strings which can be evaluated to run. Would it be possible to pass a reference to the string q!&login!, which you can dereference and call after it has been assigned to $connection?
#!/usr/bin/perl
use warnings;
use strict;
use Storable;

my $codestring = q'sub { q^japh^ };' ;
# my $codestring = q'sub { return MyConnectClass->new(); }';
#
# for (0..9){ add_connection($i, $codestring) }

open my $file, '>', '.\filestore.dat' or die $!;
store \ $codestring, $file;
close $file;

open $file, '<', '.\filestore.dat' or die " 2 $!";
my $stringref = retrieve $file;    # my $con = get_connection()
close $file;

print &{ eval $$stringref };       # &{eval $$con} ;
exit 0;                            # my $line = <STDIN>; ...
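Applied back to the original pool, a sketch of the idea (the per-process %conn_for cache is my illustrative addition, not part of the original code): the tied hash stores only a constructor string, which IPC::Shareable can serialize, and each server process evaluates it to build its own connection. Note that each process then ends up with its own private connection object rather than a truly shared handle, which is the unavoidable consequence of not being able to store live connections.

# Store a string instead of an object; strings survive Storable.
sub add_connection {
    my ($id) = @_;
    $connection{$id} = q{ MyConnectClass->new() };
}

my %conn_for;    # per-process cache so each id is only connected once
sub get_connection {
    my ($id) = @_;
    unless ( defined $conn_for{$id} ) {
        $conn_for{$id} = eval $connection{$id}
            or die "login failed: $@";
    }
    return $conn_for{$id};
}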

How to use Net::Twitter::Stream to read stream from API?

I'm trying to use the Net::Twitter::Stream Perl module from CPAN to read the stream from sample.json. I believe this is the correct module, though the way it was crafted only lets one process the filter stream. I've modified it as shown below, but I must be missing something, as I don't get any data in return: I establish a connection but nothing comes back. I'm guessing this should be an easy fix, but I'm a touch new to this part of Perl.
package Net::Twitter::Stream;
use strict;
use warnings;
use IO::Socket;
use MIME::Base64;
use JSON;
use IO::Socket::SSL;
use LibNewsStand qw(%cf);
use utf8;
our $VERSION = '0.27';
1;
=head1 NAME
Using Twitter streaming api.
=head1 SYNOPSIS
use Net::Twitter::Stream;
Net::Twitter::Stream->new( user => $username, pass => $password,
    callback => \&got_tweet,
    track    => 'perl,tinychat,emacs',
    follow   => '27712481,14252288,972651' );
sub got_tweet {
    my ( $tweet, $json ) = @_;    # a hash containing the tweet
                                  # and the original json
    print "By: $tweet->{user}{screen_name}\n";
    print "Message: $tweet->{text}\n";
}
=head1 DESCRIPTION
The streaming version of the Twitter API allows near-realtime access to
various subsets of Twitter public statuses.
The /1/status/filter.json api call can be used to track up to 200 keywords
and to follow 200 users.
HTTP Basic authentication is supported (no OAuth yet) so you will need
a twitter account to connect.
Only JSON format is supported. Twitter may deprecate XML.
More details at: http://dev.twitter.com/pages/streaming_api
Options
user, pass: required, twitter account user/password
callback: required, a subroutine called on each received tweet
perl@redmond5.com
@martinredmond
=head1 UPDATES
https fix: iwan standley <iwan@slebog.net>
=cut
sub new {
    my $class = shift;
    my %args  = @_;
    die "Usage: Net::Twitter::Stream->new ( user => 'user', pass => 'pass', callback => \&got_tweet_cb )"
        unless $args{user} && $args{pass} && $args{callback};

    my $self = bless {}, $class;
    $self->{user}      = $args{user};
    $self->{pass}      = $args{pass};
    $self->{got_tweet} = $args{callback};
    $self->{connection_closed} = $args{connection_closed_cb}
        if $args{connection_closed_cb};

    my $content = '';
    $content = "follow=$args{follow}" if $args{follow};
    $content = "track=$args{track}"   if $args{track};
    $content = "follow=$args{follow}&track=$args{track}\r\n" if $args{track} && $args{follow};

    my $auth = encode_base64("$args{user}:$args{pass}");
    chomp $auth;

    my $cl  = length $content;
    my $req = <<EOF;
GET /1/statuses/sample.json HTTP/1.1\r
Authorization: Basic $auth\r
Host: stream.twitter.com\r
User-Agent: net-twitter-stream/0.1\r
Content-Type: application/x-www-form-urlencoded\r
Content-Length: $cl\r
\r
EOF
    my $sock = IO::Socket::INET->new( PeerAddr => 'stream.twitter.com:https' );
    #$sock->print ( "$req$content" );
    while ( my $l = $sock->getline ) {
        last if $l =~ /^\s*$/;
    }
    while ( my $l = $sock->getline ) {
        next if $l =~ /^\s*$/;      # skip empty lines
        $l =~ s/[^a-fA-F0-9]//g;    # stop hex from complaining about \r
        my $jsonlen = hex($l);
        last if $jsonlen == 0;
        eval {
            my $json;
            my $len = $sock->read( $json, $jsonlen );
            my $o   = from_json($json);
            $self->{got_tweet}( $o, $json );
        };
    }
    $self->{connection_closed}($sock) if $self->{connection_closed};
}
You don't need to post the source; we can pretty much figure it out. You should try one of the examples, but my advice is to use AnyEvent::Twitter::Stream, which comes with a good example that you only have to modify a bit to get it running.
sub parse_from_twitter_stream {
    my $user     = 'XXX';
    my $password = 'YYYY';
    my $stream   = Net::Twitter::Stream->new( user => $user, pass => $password,
        callback             => \&got_tweet,
        connection_closed_cb => \&connection_closed,
        track                => SEARCH_TERM );

    sub connection_closed {
        sleep 1;
        warn "Connection to Twitter closed";
        parse_from_twitter_stream();    # This isn't working for me -- can't get connection to reopen after disconnect
    }

    sub got_tweet {
        my ( $tweet, $json ) = @_;    # a hash containing the tweet
        # Do stuff here
    }
}
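For comparison, a minimal sketch of the AnyEvent::Twitter::Stream approach suggested above, based on that module's documented interface (the XXX/YYYY credentials are placeholders, and the basic-auth sample stream it targets has long since been retired on Twitter's side):

use strict;
use warnings;
use AnyEvent;
use AnyEvent::Twitter::Stream;

my $done = AnyEvent->condvar;

# method => 'sample' requests the random-sample stream instead of 'filter'.
my $listener = AnyEvent::Twitter::Stream->new(
    username => 'XXX',
    password => 'YYYY',
    method   => 'sample',
    on_tweet => sub {
        my $tweet = shift;
        print "By: $tweet->{user}{screen_name}\n";
        print "Message: $tweet->{text}\n";
    },
    on_error => sub {
        warn "error: $_[0]";
        $done->send;    # leave the event loop on error
    },
    on_eof => sub { $done->send },
);

$done->recv;    # run the event loop until the stream ends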

Perl Parallel::ForkManager with DBI database handlers

I'm interested in performing multiple database actions in parallel. I have played with Perl's Parallel::ForkManager but haven't used it with any databases yet. I have read that database connectivity is not supported very well with it. Does anyone have experience with this?
As an example, I would probably be spawning a system call (which does the DBI work), NOT raw code, i.e.
#!/opt/local/bin/perl -w
use strict;
use Parallel::ForkManager;

$| = 1;
my $max_procs = 10;
my $pm = Parallel::ForkManager->new($max_procs);
for (my $var = 0; $var <= 10; $var++) {
    my $pid = $pm->start('proc'.$var) and next;
    sleep(2);
    system("./DBworker.pl $var");
    $pm->finish(0);
}
print "Waiting for child procs\n";
$pm->wait_all_children;
print "complete!\n";
If the work is being done by other programs, there is no danger in forking. The danger comes when you open a connection to the database and then fork: the child can't reuse the parent's connection. However, take a look at DBIx::Connector; it handles the things you need to do after forking for you, and running multiple programs is generally not the right answer.
#!/usr/bin/perl
use strict;
use warnings;
use DBIx::Connector;
use Parallel::ForkManager;

my $dsn  = "dbi:SQLite:dbname=foo.db";
my $user = "";
my $pass = "";
my $conn = DBIx::Connector->new($dsn, $user, $pass,
    {
        AutoCommit       => 0,
        PrintError       => 0,
        RaiseError       => 1,
        ChopBlanks       => 1,
        FetchHashKeyName => 'NAME_lc',
    }
);

END { unlink "foo.db" }

# set up the table
$conn->run(fixup => sub {
    my $dbh = $_;
    $dbh->do("create table foo ( id integer, name char(35) )");
    my $sth = $dbh->prepare("insert into foo (id, name) values (?, ?)");
    while (<DATA>) {
        chomp;
        $sth->execute(split /,/);
    }
});

my $pm  = Parallel::ForkManager->new(3);
my $sth = $conn->dbh->prepare("select * from foo where id = ?");
for my $id (1 .. 3) {
    next if $pm->start;
    $sth->execute($id);
    while (my $row = $sth->fetchrow_hashref) {
        print "$id saw $row->{id} => $row->{name}\n";
    }
    $pm->finish;
}
$pm->wait_all_children;
print "done\n";

__DATA__
1,foo
2,bar
3,baz