Return object handle from a subroutine - perl

I want to know if it is possible to return an object handle from a subroutine in a Perl program.
I will use a specific example from a program that uses Mail::IMAPClient.
Create client object handle
my $client = Mail::IMAPClient->new(
    Socket   => $socket,
    User     => $user,
    Password => $pass,
) or die "new(): $@";
I would like to create this object handle from a subroutine instead:
my $client = &create_client_object;
sub create_client_object {
    my $client = Mail::IMAPClient->new(
        Socket   => $socket,
        User     => $user,
        Password => $pass,
    ) or die "new(): $@";
    return $client;
}
If possible, what is the proper way to do this?

Yes, that works perfectly. Besides @Miller's comment, I'd also recommend passing $socket, $user and $pass as parameters to your function instead of picking them up from the surrounding scope:
my $client = create_client_object($socket, $user, $pass);
sub create_client_object {
    my ($socket, $user, $pass) = @_;
    my $client = Mail::IMAPClient->new(
        Socket   => $socket,
        User     => $user,
        Password => $pass,
    ) or die "new(): $@";
    return $client;
}
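The same pattern can be shown self-contained: in Perl a sub can return any scalar, including a blessed reference, so the caller receives a fully usable object handle. The Local::Client class below is a hypothetical stand-in for Mail::IMAPClient, used only for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Minimal stand-in class (hypothetical, for illustration only)
package Local::Client;
sub new {
    my ($class, %args) = @_;
    return bless {%args}, $class;
}
sub user { return $_[0]->{User} }

package main;

# The factory sub returns the object handle, just like create_client_object
sub make_client {
    my ($user, $pass) = @_;
    my $client = Local::Client->new(User => $user, Password => $pass)
        or die "new(): $@";
    return $client;
}

my $client = make_client('alice', 'secret');
print ref($client), "\n";    # Local::Client
print $client->user, "\n";   # alice
```

The returned value is an ordinary reference, so it can be stored, passed around, and have methods called on it exactly as if it had been constructed inline.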

Related

Using IO::Socket::IP with Mojo::IOLoop

I am trying to achieve what these lines of code do synchronously, but asynchronously via Mojo::IOLoop and Mojo::Promise:
my $address = '192.168.1.240';
my $sock = IO::Socket::IP->new(
    PeerAddr => $address,
    PeerPort => '9999',
    Proto    => 'tcp',
);
$sock->send($on);
my $data;
$sock->recv($data, 2048);
print $data;
This is used to communicate with a smartplug, and a promise-based interface would allow me to abstract out things, so that different plugs (some communicating via HTTP, some via TCP) could be handled in the same way.
Right now I've achieved the result by doing this
sub talk_to_plug_p {
    my ($addr, $command) = @_;
    my $promise = Mojo::Promise->new;
    my $port = 9999;
    my $sock = IO::Socket::IP->new(
        PeerAddr => $addr,
        PeerPort => $port,
        Proto    => 'tcp',
    );
    return $promise->reject("Could not open socket on $addr at port $port") unless $sock;
    my $id = Mojo::IOLoop->client({ handle => $sock } => sub {
        my ($loop, $err, $stream) = @_;
        $stream->on(read => sub {
            my ($stream, $bytes) = @_;
            # -------------------
            # THIS LOOKS BAD
            # -------------------
            remove_id($loop);
            $promise->resolve($bytes);
        });
        $stream->on(error => sub {
            my $err = shift;
            $promise->reject($err);
        });
        $stream->write($command);
    });
    sub remove_id { shift->remove($id) };
    return $promise;
}
talk_to_plug_p('192.168.1.240', $on)->then(sub { print @_ })->catch(sub { print shift });
Mojo::IOLoop->start unless Mojo::IOLoop->is_running;
Now this works, but it looks wrong - especially the remove_id part - and I imagine there must be a better way than this; I just could not find it.
What would be a cleaner way? I.e. one that closes the client when the bytes have been read, or where I can explicitly close the connection, or something similar.
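One possible cleanup (a sketch only, untested here, assuming Mojo::IOLoop::Stream's close method and the same talk_to_plug_p signature as above) is to close the stream directly inside the read handler. Closing the stream tears the connection down and lets the loop forget it, so the named remove_id sub and its captured $id are no longer needed:

```perl
use Mojo::IOLoop;
use Mojo::Promise;
use IO::Socket::IP;

sub talk_to_plug_p {
    my ($addr, $command) = @_;
    my $promise = Mojo::Promise->new;
    my $port = 9999;
    my $sock = IO::Socket::IP->new(
        PeerAddr => $addr,
        PeerPort => $port,
        Proto    => 'tcp',
    );
    return $promise->reject("Could not open socket on $addr at port $port") unless $sock;
    Mojo::IOLoop->client({ handle => $sock } => sub {
        my ($loop, $err, $stream) = @_;
        return $promise->reject($err) if $err;
        $stream->on(read => sub {
            my ($stream, $bytes) = @_;
            # Close the connection explicitly once the reply arrives;
            # no separate remove_id bookkeeping is needed
            $stream->close;
            $promise->resolve($bytes);
        });
        $stream->on(error => sub { $promise->reject(pop) });
        $stream->write($command);
    });
    return $promise;
}
```

This also avoids a subtle trap in the original: a named sub like remove_id does not reliably close over the lexical $id ("variable will not stay shared"), whereas here nothing needs to be captured at all.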

Perl Issue with concurrent requests with IO::Async and Future::Utils

I'm trying to use an IO loop to send concurrent requests (5) to a pool of hosts (3), but the code stops after 3 requests. I've had help to kickstart this code, but I certainly understand most of it now. What I don't get is why the number of processed requests is linked to the number of hosts in my pool of hosts. The objective of the code is to determine routing information from a given IP.
use strict;
use warnings;
use Net::OpenSSH;
use IO::Async::Loop;
use Future::Utils 'fmap_concat';

my @hosts = qw(host1 host2 host3);
my @ssh;
my $user = 'myuser';
my $pass = 'mypassword';
foreach my $host (@hosts) {
    my $ssh = Net::OpenSSH->new(host => $host, user => $user, password => $pass,
        master_opts => [-o => "StrictHostKeyChecking=no"]);
    die "Failed to connect to $host: " . $ssh->error if $ssh->error;
    push @ssh, $ssh;
}
my @ipv4 = ('ip1', 'ip2', 'ip3', 'ip4', 'ip5');
my $loop = IO::Async::Loop->new;
my $future = fmap_concat {
    my $ip  = shift;
    my $ssh = shift @ssh;
    my $cmd = 'show ip route ' . $ip . ' | i \*';
    my @remote_cmd = $ssh->make_remote_command($cmd);
    return $loop->run_process(command => \@remote_cmd)
        ->transform(done => sub { [@_] })
        ->on_ready(sub { push @ssh, $ssh });
} generate => sub { return () unless @ssh and @ipv4; shift @ipv4 }, concurrent => scalar @ssh;
my @results = $future->get;
foreach my $result (@results) {
    my ($exit, $stdout) = @$result;
    print $stdout, "\n";
}
Here are the results
Connection to host1 closed by remote host.
Connection to host2 closed by remote host.
Connection to host3 closed by remote host.
* ip1, from host1, 3w0d ago, via GigabitEthernet0/0/0
* ip2, from host2, 7w0d ago, via GigabitEthernet0/0/0
* ip3, from host3, 3w0d ago, via GigabitEthernet0/0/1
After researching the problem, I found out that network devices such as Cisco's might have trouble handling several requests over the same connection. So the code changed a bit: a new connection is opened every time the future body runs, instead of using a pool of pre-opened connections.
use strict;
use warnings;
use Net::OpenSSH;
use IO::Async::Loop;
use Future::Utils 'fmap_concat';

my @hosts = qw(host1 host2 host3);
my $user = 'myuser';
my $pass = 'mypassword';
my @ipv4 = ('ip1', 'ip2', 'ip3', 'ip4', 'ip5');
my $loop = IO::Async::Loop->new;
my $future = fmap_concat {
    my $ip   = shift;
    my $host = shift @hosts;
    my $ssh  = Net::OpenSSH->new(host => $host, user => $user, password => $pass,
        master_opts => [-o => "StrictHostKeyChecking=no"]);
    die "Failed to connect to $host: " . $ssh->error if $ssh->error;
    my $cmd = 'show ip route ' . $ip . ' | i \*|^Routing entry';
    my @remote_cmd = $ssh->make_remote_command($cmd);
    return $loop->run_process(command => \@remote_cmd)
        ->transform(done => sub { [@_] })
        ->on_ready(sub { push @hosts, $host });
} generate => sub { return () unless @hosts and @ipv4; shift @ipv4 }, concurrent => scalar @hosts;
my @results = $future->get;
foreach my $result (@results) {
    my ($exit, $stdout) = @$result;
    print $stdout, "\n";
}
But this has led to other problems with the underlying OpenSSH library.
It looks like there was a race condition: the ssh connection was not being released properly when the future body ran on the same $host again.
Adding undef $ssh fixed it:
->on_ready(sub { undef $ssh; push @hosts, $host });

Perl Net::SMTP force auth method

I'm trying to send mail using Perl Net::SMTP with an auth method other than the default-picked GSSAPI (e.g. to force PLAIN).
I have tried:
my $smtp;
$smtp = Net::SMTP::SSL->new($host, Port => $port);
$smtp->auth($user, $passwd);
and replacing the last line with:
$smtp->auth('PLAIN', $user, $passwd);
or passing an Authen::SASL object with the selected mechanism to $smtp->auth(). None of the above works - the debug output (and the mail server logs) shows it still tries AUTH GSSAPI.
Does anyone know how to correctly force auth method in Net::SMTP?
My Perl version is 5.20.2-3+deb8u8 from Debian 8, packages version:
Net::SMTP - 2.33
Net::SMTP::SSL - 1.01
Authen::SASL - 2.16
Net::SMTP version 3.00 and above
Net::SMTP above version 3:
* does not overwrite the mechanism in the Authen::SASL argument to the auth method
* supports STARTTLS and smtps
use Net::SMTP;
use Authen::SASL;

my ($host, $user, $pass) = ('...', '...', '...'); # fill in correct data
my $smtp = Net::SMTP->new($host, SSL => 1, Debug => 1); # SSL => 1 - use smtps
$smtp->auth(
    Authen::SASL->new(
        mechanism => 'PLAIN LOGIN',
        callback  => { user => $user, pass => $pass },
    )
);
Net::SMTP version below 3.00
Net::SMTP version 2.31_1 (the newest pre-3.00 release in libnet) overwrites the mechanism in Authen::SASL with the mechanisms listed in the EHLO reply. Below please find my UGLY fix.
use Net::SMTP::SSL;
use Authen::SASL;

my ($host, $port, $user, $pass) = ('...', '...', '...', '...'); # fill in correct data
my $smtp = Net::SMTP::SSL->new($host, Port => $port, Debug => 1);
my $auth = Authen::SASL->new(
    mechanism => 'PLAIN LOGIN',
    callback  => { user => $user, pass => $pass },
);
{
    no warnings 'redefine';
    my $count;
    local *Authen::SASL::mechanism = sub {
        my $self = shift;
        # Fix Begin
        # ignore first setting of mechanism
        if (!$count++ && @_ && $Net::SMTP::VERSION =~ /^2\./) {
            return;
        }
        # Fix End
        @_
            ? $self->{mechanism} = shift
            : $self->{mechanism};
    };
    $smtp->auth($auth);
}
Code requiring fixing - Net::SMTP 2.31_1 from libnet 1.22_01
sub auth {
    my ($self, $username, $password) = @_;
    ...
    my $mechanisms = $self->supports('AUTH', 500, ["Command unknown: 'AUTH'"]);
    return unless defined $mechanisms;
    my $sasl;
    if (ref($username) and UNIVERSAL::isa($username, 'Authen::SASL')) {
        $sasl = $username;
        $sasl->mechanism($mechanisms); # <= HERE Authen::SASL mechanism overwrite
    }

Perl: Using IPC::Shareable for pooling Net::Server connections

I am trying to have a pool of shared connections that can be accessed by Net::Server instances. Unfortunately IPC::Shareable does not allow me to store the connections, as they are code references. This is a stripped-down version of the code:
use IPC::Shareable (':lock');
use parent 'Net::Server::Fork';
use MyConnectClass;

sub login {
    return MyConnectClass->new();
};

my %connection;
tie %connection, 'IPC::Shareable', 'CONN', {
    'create'    => 1,
    'exclusive' => 0,
    'mode'      => 0666,
    'destroy'   => 'yes',
} or croak 'Can not tie connection variable';

sub add_connection {
    my $id  = shift(@_);
    my $con = shift(@_);
    $connection{$id} = $con;
};

sub get_connection {
    my $id = # .. find unused connection
    return $connection{$id};
}

sub process_request {
    my $self = shift(@_);
    eval {
        my $connection = get_connection();
        my $line = <STDIN>;
        # .. use $connection to fetch data for user
    };
};

for (my $i = 0; $i < 10; $i++) {
    add_connection($i, &login);
};

main->run(
    'host'       => '*',
    'port'       => 7000,
    'ipv'        => '*',
    'max_server' => 3,
);
Unfortunately the program dies after the first login: 'Can't store CODE items at ../../lib/Storable.pm'. This happens even when hiding $connection in an anonymous array. I am looking for an alternative to utilize the pool.
I appreciate your support
I am unable to propose an alternative module, but I can make a suggestion which may or may not be of use. While you cannot store CODE, you can store strings which can be evaluated to run. Would it be possible to pass a reference to the string q!&login!, which you can dereference and call after it is assigned to $connection?
#!/usr/bin/perl
use warnings;
use strict;
use Storable;

my $codestring = q'sub { q^japh^ };';
# my $codestring = q'sub { return MyConnectClass->new(); }';
#
# for (0..9) { add_connection($i, $codestring) }

open my $file, '>', '.\filestore.dat' or die $!;
store \$codestring, $file;
close $file;

open $file, '<', '.\filestore.dat' or die " 2 $!";
my $stringref = retrieve $file;      # my $con = get_connection()
close $file;

print &{ eval $$stringref };         # &{ eval $$con };
exit 0;                              # my $line = <STDIN>; ...
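The same idea can be demonstrated without a temporary file: Storable (by default) refuses a CODE ref outright, but happily round-trips its source as a string, which can then be eval'd back into a callable sub on the receiving side. A minimal in-memory sketch using only core Storable:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Storable qw(freeze thaw);

# A CODE ref cannot be serialized by default...
my $code = sub { 'japh' };
my $ok = eval { freeze(\$code); 1 };
print "CODE refused by Storable\n" unless $ok;

# ...but its source, kept as a plain string, round-trips fine
my $codestring = q'sub { q^japh^ }';
my $stringref  = thaw(freeze(\$codestring));

# Rebuild the sub from the string and call it
my $sub = eval $$stringref;
print $sub->(), "\n";    # japh
```

This is essentially what the file-based example above does; the trade-off is that eval'ing retrieved strings executes arbitrary code, so the stored data must be trusted.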

Error handling on DBI->connect

Besides handling errors using the standard code die "Unable to connect: $DBI::errstr\n", is it possible to write custom code like the one below?
Standard:
$dbstore = DBI->connect($dsn, $user, $pw,
    {ora_session_mode => $mode, PrintError => 0, RaiseError => 0, AutoCommit => 0})
    or die "Unable to connect: $DBI::errstr\n";
Custom:
$dbstore = DBI->connect($dsn, $user, $pw,
    {ora_session_mode => $mode, PrintError => 0, RaiseError => 0, AutoCommit => 0});
if (!$dbstore) {
    CUSTOM_LOG_HANDLER("Could not connect to database: $DBI::errstr");
    return;
}
Sample Standard Code:
#!/usr/bin/perl
# PERL MODULES WE WILL BE USING
use DBI;
use DBD::mysql;
# HTTP HEADER
print "Content-type: text/html \n\n";
# CONFIG VARIABLES
$platform = "mysql";
$database = "store";
$host = "localhost";
$port = "3306";
$tablename = "inventory";
$user = "username";
$pw = "password";
#DATA SOURCE NAME
$dsn = "dbi:mysql:$database:localhost:3306";
# PERL DBI CONNECT (RENAMED HANDLE)
$dbstore = DBI->connect($dsn, $user, $pw) or die "Unable to connect: $DBI::errstr\n";
Thanks for your time.
You can always use a custom error handler with the DBI:
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

sub handle_error {
    my $message = shift;
    # write error message wherever you want
    print "the message is '$message'\n";
    exit; # stop the program
}

my $dbh = DBI->connect(
    "dbi:SQLite:foo",
    "user",
    "pass",
    {
        PrintError  => 0,
        HandleError => \&handle_error,
    }
) or handle_error(DBI->errstr);

my $sth = $dbh->prepare("select * from doesntexist");
That said, you should be logging errors, and for a web application the web server's logs make sense. If you are worried about the amount of noise in your web logs, you should concentrate on fixing the errors, not on making the logs less noisy by removing sources of information.