Perl Net::FTP object expected re-use behavior?

I can't seem to re-use a Net::FTP object after using quit.
Is this expected? I was unable to tell from the documentation (cpan).
As a workaround, I'm creating a new Net::FTP object each time I need to perform a batch of ftp operations. This seems wasteful.
The following example shows a successful initial login, a listing of the root directory, quit (socket close), and then a login attempt that fails with the FTP message "Connection closed".
#!/usr/bin/env perl
use strict;
use warnings;
use Net::FTP;
my $hostname = 'foo';
my $username = 'bar';
my $password = 'baz';
# successful first pass
my $ftp = Net::FTP->new( $hostname ) or die "cannot connect to $hostname: $@";
$ftp->login( $username, $password ) or die "cannot login: ", $ftp->message;
print "ls_output: $_\n" for $ftp->ls; # success
$ftp->quit or die "cannot close connection: ", $ftp->message;
# re-use attempt
$ftp->login( $username, $password ) or die "cannot login: ", $ftp->message;
# never gets here since re-use attempt fails
print "done!\n";

quit causes the remote end to close the connection and there's no way to logout without closing the connection. If you're trying to avoid reconnecting, you can't.
On the other hand, maybe you're expecting login to connect you to the server. The connection is created in new, not in login, and Net::FTP does not provide a means of reconnecting.
Net::FTP subclasses IO::Socket::INET, so you could reconnect by using IO::Socket::INET's connect, but you'd also have to reinitialise a field or two the constructor normally initialises. Nothing complicated though.
But is there even a problem that needs fixing? You talk of inefficiency, but the time it takes to create and initialise an object pales in comparison with the time it takes to make an FTP connection.

It's not a Perl issue. It's the FTP protocol. After the quit is issued... that's it, the FTP session is over. There's nothing to issue the login command to -- no-one's listening anymore.
Try it yourself on the command line with the FTP client.

From this very documentation you linked:
quit ()
Send the QUIT command to the remote FTP server and close the socket connection.
Closing the socket connection terminates the connection to the server, not just the session on this server.
Creating a new object for each connection has two drawbacks:
Perl may or may not keep the data from the old object around in memory, but garbage collection should not bother you.
Creating a new connection involves some overhead, but unless you are creating dozens of new connections per second, this should not bother you either.
So just create new objects every time; you can reuse the same variable if you like. Thinking too low-level in Perl and optimizing too early will only hurt.
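The advice above can be sketched with a small helper that builds a fresh Net::FTP object per batch while reusing one entry point; the hostname and credentials below are placeholders, and a real run would of course need a reachable FTP server:

```perl
use strict;
use warnings;
use Net::FTP;

# Placeholder connection details (assumed, not from the original post)
my ($hostname, $username, $password) = ('foo', 'bar', 'baz');

# One connect/login/quit cycle per batch of FTP operations.
sub ftp_batch {
    my ($work) = @_;
    my $ftp = Net::FTP->new($hostname)
        or die "cannot connect to $hostname: $@";
    $ftp->login($username, $password)
        or die "cannot login: ", $ftp->message;
    $work->($ftp);    # run this batch's operations
    $ftp->quit;       # the connection is gone after this
}

# Each call pays for one connection, which dwarfs the object-creation cost.
ftp_batch(sub { print "ls_output: $_\n" for $_[0]->ls });
ftp_batch(sub { print "ls_output: $_\n" for $_[0]->ls('/tmp') });
```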

Related

Perl SSH to remote server and check file exists

I have developed a script which should log in to a remote server and perform certain operations.
On the remote server it should check whether the file sample_file.txt exists; if YES, it should create a create_user_file.txt file in the home path with a certain set of data.
The thing is, I am able to ssh to the remote machine through the script below, but I am unable to check for the files, since I have no idea how to open the remote session and do all the operations there.
Below is my code:
#!/usr/bin/perl
use Net::OpenSSH;
.
.
.
foreach my $ip (@list_of_ips) {
    print "Connecting to $ip...\n";
    my $ssh = OpenSSH_Connection( $ip, $user, $passwd );
    print "Connected!\n";
    $dir = "/remote/server/home/path/";
    $file = $dir."sample_file.txt";
    if (-e $file){ #if sample_file.txt exists then create user file
        open(my $fh, '>', $dir."create_user_file.txt") || die "Couldn't open file $!";
        print $fh "ID=123&PW=Passw0rd";
        close $fh;
    }
    last if($ssh); #if connection is success no need to login into another server
    $ssh->close();
}
sub OpenSSH_Connection {
    my ( $host, $user, $passwd ) = @_;
    my $ssh = Net::OpenSSH->new("$user:$passwd\@$host");
    $ssh->error and die "Couldn't establish SSH connection: ". $ssh->error;
    return $ssh;
}
Here in the code, the part below should be executed on the remote server, but it checks for a file on the machine from which I am executing the script.
if (-e $file){ #if sample_file.txt exists then create user file
open(my $fh, '>', $dir."create_user_file.txt") || die "Couldn't open file $!";
print $fh "ID=123&PW=Passw0rd";
close $fh;
}
How can I keep open the remote session of the server and execute above code, or do we have any Perl module which could do this kind of operation from local server. TIA.
Your assumption that Net::SSH will make remote resources available locally is incorrect.
SSH creates an encrypted channel to the remote system and either drops you into a shell, runs a remote command (for example a perl/shell script), or lets you copy/retrieve a file.
You can open an SSH channel into a remote shell and run commands as on the local computer, but if you want to check whether a file exists on the remote system, you have to do it through the remote shell or a remote perl script.
Probably what you are after is SSHFS, which can make remote files available locally.
I must note that users on the local and remote systems can have different IDs at the system level, which might not be what you want (manipulating another user's files). This approach works well if your local and remote IDs are the same and belong to you.
I guess the easiest way would be to write a perl script on the remote system and pass some arguments to it. The script on the remote system will do the rest of the job.
In that situation you could use Net::SSH2 to establish a connection to a remote shell and then run the remote perl script with parameters.
To get a better idea of how this works without creating a long post, I refer you to the PerlMonks Net::SSH2 demo.
Another, more difficult way would be writing enough perl code to establish a connection with the remote system over the SSH2 protocol and 'translate' your local commands
$chan->command($argument)
into remote perl code blocks that perform particular tasks.
Net::SSH2
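Since the question already uses Net::OpenSSH, the file test can also be pushed to the remote side with that module's test() and system() methods (the stdin_data option feeds data to the remote command). A sketch using the question's placeholder paths, with host and credentials assumed:

```perl
use strict;
use warnings;
use Net::OpenSSH;

# Assumed placeholders; substitute real connection details
my ($host, $user, $passwd) = ('10.0.0.1', 'user', 'secret');
my $dir = "/remote/server/home/path/";

my $ssh = Net::OpenSSH->new($host, user => $user, password => $passwd);
$ssh->error and die "Couldn't establish SSH connection: " . $ssh->error;

# Run the existence test on the remote shell instead of locally
if ($ssh->test("test -e ${dir}sample_file.txt")) {
    # Create the file remotely by piping data into a remote 'cat'
    $ssh->system({stdin_data => "ID=123&PW=Passw0rd"},
                 "cat > ${dir}create_user_file.txt")
        or die "remote write failed: " . $ssh->error;
}
```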

Perl : implementing socket programming ( system() never returns)

My aim: implement socket programming such that the client tries connecting to the server; if the server is not installed on the remote machine, the client (host) transfers a tar file and a perl script to the server (target) machine. The perl script untars the folder and runs a script (the server perl script). Now the problem: this server script has to run forever (serving multiple clients) until the machine restarts or something untoward happens.
So the script runs properly, but since it is continuously running, control doesn't come back to the client, which will again try to connect to the server (on some predefined socket). Basically I want to somehow run the server but bring control back to my host, which is the client in this case.
Here is the code:
my $sourcedir = "$homedir/host_client/test.tar";
my $sourcedir2 = "$homedir/host_client/sabkuch.pl";
my $remote_path = "/local/home/hanmaghu";
# Main subroutines
my $ssh = Net::OpenSSH->new ( $hostmachine, user =>$username, password => $password);
$ssh->scp_put($sourcedir,$sourcedir2,$remote_path)
or die "scp failed \n" . $ssh->error;
# test() is similar to system() in perl openssh package
my $rc = $ssh->test('perl sabkuch.pl');
# check if test function returned or not -> this is never executed
if ($rc == 1) {
print "test was ok , server established \n";
}
else {
print "return from test = $rc \n";
}
exit;
The other script which invokes our server script is :
#!/usr/bin/perl
use strict;
use warnings;
system('tar -xvf test.tar');
exec('cd utpsm_run_automation && perl utpsm_lts_server.pl');
#system('perl utpsm_lts_server.pl');
# Tried with system but in both cases it doesn't return,
# this xxx_server.pl is my server script
exit;
The server script is :
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;
#flush after every write
$| =1;
my $socket = new IO::Socket::INET (
LocalHost => '0.0.0.0',
LocalPort => '7783',
Proto => 'tcp',
Listen => 5,
Reuse => 1
);
die "cannot create socket $! \n" unless $socket;
print "server waiting for client on port ", $socket->sockport, "\n";
while (1)
{
# waiting for new client connection
my $client_socket = $socket->accept();
# get info about new connected client
my $client_address = $client_socket->peerhost();
my $client_port = $client_socket->peerport();
print "connection from $client_address:$client_port \n";
# read upto 1024 characters from connected client
my $data = "";
$client_socket->recv($data,1024);
print "received data = $data";
# write response data to the connected client
$data = "ok";
$client_socket->send($data);
# notify client response is sent
shutdown($client_socket,1);
}
$socket->close();
Please help me with how to execute this. In terms of design this is what I want, but I am having this issue with the implementation; can I do it some other way as a workaround?
In short, your 'driver' sabkuch.pl starts the server using exec -- which never returns. From exec
The "exec" function executes a system command and never returns; ...
(Emphasis from the quoted documentation.) Once exec is used, the program running in the process is replaced by the other program; see the exec wiki. If that server keeps running, the exit you have there is never reached, unless there are errors. See Perl's exec linked above.
So your $ssh->test() will block forever (well, until the server does exit somehow). You need a non-blocking way to start the server. Here are some options
Run the driver in the background
my $rc = $ssh->test('perl sabkuch.pl &');
This starts a separate subshell and spawns sabkuch.pl in it, then returns control and test can complete. The sabkuch.pl runs exec and thus turns into the other program (the server), to run indefinitely. See Background processes in perlipc. Also see it in perlfaq8, and the many good links there. Note that there is no need for perl ... if sabkuch.pl can be made executable.
See whether Net::OpenSSH has a method to execute commands so that it doesn't block.
One way to 'fire-and-forget' is to fork and then exec in the child, while the parent can then do what it wants (exit in this case). Then there is more to consider. Plenty of (compulsory) information is found in perlipc, while examples abound elsewhere as well (search for fork and exec). This should not be taken lightly as errors can lead to bizarre behavior and untold consequences. Here is a trivial example.
#!/usr/bin/perl
use strict;
use warnings;
system('tar -xvf test.tar') == 0 or die "Error with system(...): $!";
my $pid = fork;
die "Can't fork: $!" if not defined $pid;
# Two processes running now. For child $pid is 0, for parent large integer
if ($pid == 0) { # child, parent won't get into this block
exec('cd utpsm_run_automation && perl utpsm_lts_server.pl');
die "exec should've not returned: $!";
}
# Can only be parent here since child exec-ed and can't get here. Otherwise,
# put parent-only code in else { } and wait for child or handle $SIG{CHLD}
# Now parent can do what it needs to...
exit; # in your case
Normally a concern when forking is to wait for children. If we'd rather not, this can be solved by double-forking or by handling SIGCHLD (see waitpid as well), for example. Please study perlfaq8 linked above, Signals in perlipc, docs for all calls used, and everything else you can lay your hands on. In this case the parent should by all means exit first and the child process is then re-parented by init and all is well. The exec-ed process gets the same $pid but since cd will trigger the shell (sh -c cd) the server will eventually run with a different PID.
With system('command &') we need not worry about waiting for a child.
This is related only to your direct question, not the rest of the shown code.
Well, I figured the best way would be to fork a child process and have the parent exit, so the child can go on forever running server.pl.
But it still is not working; please let me know where in this code I am going wrong:
#!/usr/bin/perl
use strict;
use warnings;
system('tar -xvf test.tar');
my $child_pid = fork;
if (!defined $child_pid) {
    print "couldn't fork \n";
}
else {
    print "in child , now executing \n";
    exec('cd utpsm_run_automation && perl utpsm_lts_server.pl')
        or die "can't run server.pl in sabkuch child \n";
}
The output: my script still hangs, and the print statement "in child, now executing" runs twice; I don't understand why.
I work mostly in assembly language, so all this is new to me.
Help will be appreciated.
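Going by the fork/exec example in the answer above, the likely problem with this snippet is that it never checks $child_pid == 0: fork returns in both processes, both take the else branch (hence the doubled print), and both try to exec. A sketch of the corrected shape, using the question's placeholder paths:

```perl
#!/usr/bin/perl
use strict;
use warnings;

system('tar -xvf test.tar') == 0 or die "tar failed: $!";

my $child_pid = fork;
die "couldn't fork: $!" if !defined $child_pid;

if ($child_pid == 0) {    # true only in the child
    print "in child, now executing\n";
    exec('cd utpsm_run_automation && perl utpsm_lts_server.pl')
        or die "can't run server.pl in sabkuch child: $!";
}

# Only the parent reaches this point; the child was replaced by exec
print "parent exiting, server left running\n";
exit;
```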

[Perl][net::ssh2] How to keep the ssh connection while executing remote command

I'm working on a perl script using net::ssh2 to make a SSH connection to a remote server.
(I'm working on windows)
I chose Net::SSH2 because I had to make some SFTP connections in the same script.
For now, my SFTP connections work perfectly. The problem is when I try to execute a "long-duration" command, i.e. a command whose execution can take more than 30 seconds.
$ssh2 = Net::SSH2->new();
$ssh2->connect('HOST') or die;
if($ssh2->auth(username=>'USER', password=>'PSWD'))
{
$sftp = Net::SFTP::Foreign->new(ssh2=>$ssh2, backend=>'Net_SSH2');
$sftp->put('local_path', 'remote_path');
$channel=$ssh2->channel();
##
$channel->shell('BCP_COMMAND_OR_OTHER_PERL_SCRIPT');
# OR (I tried both, both failed :( )
$channel->exec('BCP_COMMAND_OR_OTHER_PERL_SCRIPT');
##
$channel->wait_closed();
$channel->close();
print "End of command";
$sftp->disconnect();
}
$ssh2->disconnect();
When I execute this script, the connection is successful and the file is correctly sent, but the execution is not (completely) performed. I mean, I think the command is sent for execution but terminated immediately, or not sent at all; I'm not sure.
What I want is for the script to wait until the command has completely finished before disconnecting everything (just because sometimes I need the result of the command execution).
Does anyone know how to solve this? The cpan documentation is not very explicit about this.
Thanks!
PS: I'm open to any remarks or suggestion :)
Edit: After some tests, I can say that the command is sent but interrupted. My test was to start another perl script on the remote server. This script writes to a flat file. In this test, the script is started and the file is only half-filled; the writing stops abruptly in the middle.
On the other hand, when I put a sleep(10) just after the $channel->exec(), the script ran to the end successfully.
The problem is that I can't just write a sleep(10); I don't know whether it will take 9 or 11 seconds (or more, you see my point).
You can try using Net::SSH::Any instead.
It provides a higher level and easier to use API and can use Net::SSH2 or Net::OpenSSH to handle the SSH connection.
For instance:
use Net::SSH::Any;
my $ssh = Net::SSH::Any->new($host, user => $user, password => $password);
$ssh->error and die $ssh->error;
my $sftp = $ssh->sftp;
$sftp->put('local_path', 'remote_path');
my $output = $ssh->capture($cmd);
print "command $cmd output:\n$output\n\n";
$sftp->put('local_path1', 'remote_path1');
# no need to explicitly disconnect, connections will be closed when
# both $sftp and $ssh go out of scope.
Note that SFTP support (via Net::SFTP::Foreign) was added in version 0.03, which I have just uploaded to CPAN.
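If switching modules is not an option, the usual Net::SSH2-only fix for this symptom is to read the channel until EOF before closing: the readline blocks until the remote command finishes, which replaces the guess-the-duration sleep(10). A sketch, with host, credentials, and command kept as the question's placeholders:

```perl
use strict;
use warnings;
use Net::SSH2;

my $ssh2 = Net::SSH2->new();
$ssh2->connect('HOST') or die "connect failed\n";
$ssh2->auth(username => 'USER', password => 'PSWD')
    or die "auth failed\n";

my $channel = $ssh2->channel();
$channel->exec('BCP_COMMAND_OR_OTHER_PERL_SCRIPT');
$channel->send_eof;    # we have no input for the remote command

# Blocks until the remote side closes the stream, i.e. the command is done
print while <$channel>;

$channel->wait_closed();
my $status = $channel->exit_status();    # valid once the channel is closed
$channel->close();
print "End of command (exit status $status)\n";
$ssh2->disconnect();
```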

perl script working in vmware server but fails in vmware ESXi

This problem is really puzzling to me: I have the following script working on vmware server 2.0:
#!/usr/local/bin/perl
# server (transmitter)
use strict;
use IO::Socket::Multicast6;
use IO::Interface;
use constant GROUP => "235.1.1.2";
use constant PORT => "3000";
my $sock = IO::Socket::Multicast6->new(
Proto=>"udp",
Domain=>AF_INET,
PeerAddr=>GROUP,
PeerPort=>PORT);
$sock->mcast_if("eth1");
$sock->mcast_ttl(10);
while (1) {
my $message = localtime();
$sock->send($message) || die "Could not send: $!";
} continue {
sleep 4;
}
It works great on VMware Server. I have cloned this VM to an ESXi server, but running the exact same copy of the virtual machine and the script, I get the following error:
Can't call method "mcast_if" on an undefined value
I'm really puzzled by this, as I am not sure what the problem could be.
There is really nothing different except the CPU on the two machines, and I don't see how something so low-level could be causing an issue, but I could be wrong. perl -d wasn't very helpful.
Thanks.
It's failing to create the socket; use some error checking to find out why, e.g.:
my $sock = IO::Socket::Multicast6->new(
Proto=>"udp",
Domain=>AF_INET,
PeerAddr=>GROUP,
PeerPort=>PORT)
or die "Socket failed: $!";
The new() constructor is failing, but not raising an exception. I don’t know its API: is there some way to get it to tell you why?
Otherwise you might try errno (that is, $!).
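One concrete thing to check once the constructor is made to die properly: whether an interface named "eth1" exists in the ESXi guest at all, since interface naming can differ between hosts and mcast_if("eth1") needs it to exist. IO::Interface, which the script already loads, can list the names; a small sketch:

```perl
use strict;
use warnings;
use IO::Socket::INET;
use IO::Interface qw(:flags);   # adds if_* methods to IO::Socket objects

my $s = IO::Socket::INET->new(Proto => 'udp')
    or die "socket failed: $!";

# Print every interface name the OS reports; if "eth1" is not among
# them, mcast_if("eth1") cannot work on this guest.
print "$_\n" for $s->if_list;
```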

Whilst using Perl::FTP, What shall I put in the host field for connecting to a Local computer connected on the network

I am using Net::FTP in perl to do some file transfers. When I run the following code:
Are there any issues with using the IP address? Am I correct in providing it in the host field?
use strict;
use Net::FTP;
my $host = "10.77.69.124";
my $user = "administrator";
my $password = "Password";
my $f = Net::FTP->new($host, Debug =>0) or die "Can't open $host\n";
$f->login($user, $password) or die "Can't log $user in\n";
The code is not able to connect to the remote host. Why is this happening? Shouldn't this work with the IP address provided in $host?
The constructor of Net::FTP allows you to pass a single scalar value or an array of hosts to try. The value of this field should be the same as the PeerAddr from IO::Socket::INET (either a hostname or an ip address).
Have a closer look at what is happening by specifying Debug. If you are behind a firewall or a NAT setup, you should probably also set Passive to a non-zero value, and make sure to check whether the constructor failed by printing out $@.
my $ftp = Net::FTP->new(Host=>$host, Debug=>1, Passive=>1) || die $@;
If the constructor succeeded, you might want to check if any of the other methods fail:
$ftp->login($user, $pass) || die $ftp->message;
$ftp->cwd($path) || die $ftp->message;
By the way: If you are unsure if you've used the correct host parameter, you can ask Net::FTP which host it tried to connect to:
print $ftp->host, "\n";
If this still doesn't work, please provide a detailed output of your application.
Hope this helps.
First be sure that you can reach the remote side.
From the command line use telnet (available on Linux and Windows too, though a bit different in syntax):
telnet host 21
If you are not able to connect from the command line, check firewall rules, or maybe your FTP server is running on a different port.
If you are able to connect try out login with plain FTP commands:
USER user#remote.host
PASS yourpassword
This will use an ACTIVE ftp connection to the remote host. This is the old way.
Nowadays most ftp servers use PASSIVE ftp. To test, try this command (from the Linux command line):
ftp -v -p host
In perl you could use passive mode this way:
my $f = Net::FTP->new($host, Debug =>1, Passive => 1) or die "Can't open $host\n";
I hope this will help you.