parallel SSH in Perl

I am trying to create a Perl script which can ssh to multiple hosts (500+), execute a desired command, and show the output on the screen. I have done this with the Net::OpenSSH module, since ssh keys are not configured and I am not allowed to configure them, so I need something that can supply the password when connecting.
Because of the many connections, the script takes considerable time to run. I searched for "parallel ssh in perl" and discovered that there is a module for opening parallel ssh connections (Net::OpenSSH::Parallel), but I read on some forums that I cannot capture output with it the way I can with Net::OpenSSH ($ssh->capture('ls')).
So, how can I accomplish parallel ssh more efficiently? I also welcome any other suggestions for saving time. Would using Net::OpenSSH in threads save time, or would it work much like the parallel module?

You can fork your program and manage the forks with something like Parallel::ForkManager, then do the SSH work and output capture with Net::OpenSSH and display the results on the screen. You'll need to be careful with your IO, though, since all those processes writing to STDOUT/STDERR at the same time will produce garbled results. You'll need to do something like the answer to this question (pipes between parent and child processes): fork() and STDOUT/STDERR to the console from child processes
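For illustration, here is a minimal sketch of that approach; the host list, credentials, and command are placeholders, and the run_on_finish callback prints each host's output from the parent so nothing gets interleaved:

#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;
use Parallel::ForkManager;

my @hosts = ('host1.example.com', 'host2.example.com');   # placeholder list
my ($user, $pass, $cmd) = ('admin', 'secret', 'uptime');  # placeholders

my $pm = Parallel::ForkManager->new(20);   # at most 20 concurrent children

# Children hand their output back; the parent prints it in one piece.
$pm->run_on_finish(sub {
    my ($pid, $exit, $host, $signal, $core, $output) = @_;
    print "== $host ==\n", $$output if ref $output;
});

for my $host (@hosts) {
    $pm->start($host) and next;   # parent: move on to the next host
    my $ssh = Net::OpenSSH->new($host, user => $user, password => $pass);
    my $out = $ssh->error ? 'ssh failed: ' . $ssh->error . "\n"
                          : $ssh->capture($cmd);
    $pm->finish(0, \$out);        # child exits, handing $out to the parent
}
$pm->wait_all_children;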
Parallel programming is harder than serial, so be prepared for some fun :)

One way is to use a shell script to execute your Perl script:
#!/bin/bash
for host in $(cat myhosts)
do
    perl myperl.pl "$host" "$1" "$2" &
done
where myhosts is a file containing 500+ host names

Related

Should I turn a perl script that parses a /var/log/.* file into a daemon?

I am writing a Perl script to parse, for example, /var/log/syslog.
The perl script triggers further subsequent tasks when particular events in the log appear. The log is parsed following the advice of this post:
Command line: monitor log file and add data to database
which, I believe, uses a pipe.
Now I'd like this script to run in the background forever.
This sounds like a daemon to me, and the daemon program referenced in the following question seems ideal:
How can I run a Perl script as a system daemon in linux?
But from this post, it seems clear that daemons have no open file handles. So how can I have a daemon, or a Perl script that becomes a daemon, that monitors a logfile?
It sounds like what you want is a daemon. In that case the advice given in the second post you reference is the best practice. However, you do have other options, like daemontools, which removes the fork complexity.
Daemons are allowed to have filehandles, but you should close STDIN, STDOUT, and STDERR because you shouldn't really use them anymore. A lot of this has to do with the way fork works in *nix systems. Just open the pipe filehandle after your second fork, and you shouldn't have any issues.
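A minimal sketch of that sequence; the tail -F pipe on /var/log/syslog is a placeholder for whatever filehandle you actually follow:

#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(setsid);

sub fork_or_exit {
    defined(my $pid = fork) or die "fork: $!";
    exit 0 if $pid;   # parent leaves, child carries on
}

chdir '/' or die "chdir: $!";
fork_or_exit();                   # first fork
setsid() or die "setsid: $!";     # detach from the controlling terminal
fork_or_exit();                   # second fork: can never re-acquire a tty
open STDIN,  '<',  '/dev/null' or die "STDIN: $!";
open STDOUT, '>',  '/dev/null' or die "STDOUT: $!";
open STDERR, '>&', \*STDOUT    or die "STDERR: $!";

# Only now open the long-lived pipe filehandle.
open my $log, '-|', 'tail', '-F', '/var/log/syslog' or die "tail: $!";
while (my $line = <$log>) {
    # react to interesting events here
}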
This doesn't answer your question, but it is another route to consider, which may or may not be appropriate for you:
rsyslog can execute a program when a certain message is logged
see Filter Conditions for setting up the trigger, Templates for formatting the output that's passed to the script, and Actions > Shell Execute for specifying the executable.
Be sure to read the security implications, and note that rsyslog blocks while the external program runs. But if your script runs reliably quickly, it may be an option.

In Perl CGI, how can I use UNIX commands?

I'm trying to run ssh and mkdir from a Perl CGI script. It's not working, but in a normal Perl script it works fine. Can anyone tell me how to run commands in a Perl CGI script?
If you're running this script via a webserver, chances are the active user (e.g. "nobody", "www", etc.) may not have the necessary privileges to execute commands like mkdir and ssh. If so, that is not something that Perl can fix. You can test who the active user is with something like:
print "The active user is: ", `whoami`;
It is also a security concern to give your web user the privileges to create files and run commands.
system() or a piped open() are probably what you're looking for; if you're feeling dirty, I think you can use backticks too.
Do you need to run unix commands? Perl has a built-in mkdir, and there are modules to handle SSH. Normally a CGI process is going to have limited capabilities or access to the system. The more you can do in Perl the better.
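For illustration, a minimal sketch of keeping the work inside Perl; the directory path is a placeholder:

#!/usr/bin/perl
use strict;
use warnings;
use CGI;

my $q = CGI->new;
print $q->header('text/plain');

print 'Running as: ', scalar getpwuid($<), "\n";   # the webserver's user

my $dir = '/tmp/example';         # placeholder path
if (mkdir $dir) {                 # Perl's built-in mkdir, no shell involved
    print "Created $dir\n";
} else {
    print "mkdir failed: $!\n";   # e.g. permission denied for 'nobody'
}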

How to obtain stateful ssh shell session programmatically?

I am working on an application that needs to send commands to remote servers. Sending commands is easy enough with the plethora of SSH client libraries.
However, I would like shell state (i.e. current working directory, environment variables, etc) preserved between each command. All client libraries that I have seen do not do this. For example, the following is an example of code that does not do what I want:
use Net::SSH::Perl;
my $server = Net::SSH::Perl->new($host);
$server->login($user, $pass);
$server->cmd('cd /var');
$server->cmd('pwd'); # I _would like_ this to output /var
There will be other tasks performed between sending commands, so combining the commands like $server->cmd('cd /var; pwd') is not acceptable.
Net::SSH::Expect does what you want, though the "Expect" way is not completely reliable as it will be parsing the output of your commands and trying to detect when the shell prompt appears again.
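For illustration, a rough sketch with Net::SSH::Expect (host and credentials are placeholders), keeping the prompt-parsing caveat above in mind:

use strict;
use warnings;
use Net::SSH::Expect;

my $ssh = Net::SSH::Expect->new(
    host     => 'server.example.com',   # placeholder
    user     => 'someuser',             # placeholder
    password => 'secret',               # placeholder
    raw_pty  => 1,
);
$ssh->login();

$ssh->exec('cd /var');     # state lives in the one remote shell
print $ssh->exec('pwd');   # should print /var (possibly with prompt noise)
$ssh->close();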
I'm not sure what you are doing exactly, but you could just start one SSH session. If you really can't do this, maybe you can just use absolute paths for everything.

How to open ssh session and execute commands from a Perl script?

I have a Perl script running on a Windows machine. I need this script to open an ssh session to a remote Unix machine, to execute certain commands on that Unix machine, and to get the output returned from these commands.
These commands are generated during the run-time of the script, and there are many of them executed at different times.
How can I do it?
Approach 1: Use Cygwin: http://perlwin32ssh.blogspot.com/2007/07/test_4418.html
Approach 2: Use Net::SSH::W32Perl module.
This is one thread discussing how to install it: http://code.activestate.com/lists/perl-win32-users/29180/ (it seems to require downloading a custom version of the module).
This thread should help with the problems arising from dependencies on math libraries needed for ssh calculations: http://www.issociate.de/board/post/494356/I%27m_trying_to_install_%27Net::SSH::Perl%27_on_a_Windows_Box..html
Caveat emptor: I never installed this; the above is just the result of some analysis of Google results.
#!/usr/bin/perl
system("ssh foo 'ls -l'");
Or go through the hassle of using ptmx(4) on the local side and ssh -t for the remote side.

How can I pause Perl processing without hard-coding the duration?

I have a Perl script that contains this code snippet, which calls the system shell to get some files by SFTP and unzip them with WinZip:
# Run script to get files from remote server
system "exec_SFTP.vbs";
# Unzip any files that were retrieved
foreach $zipFile (<*.zip>) {
    system "wzunzip $zipFile";
}
Even if some files are retrieved, they are never unzipped, because by the time the files are retrieved and the SFTP connection is closed, the Perl script has already completed the unzip step, with the result that it doesn't find anything to unzip.
My short-term fix is to insert
sleep(60);
before the unzip step, but that assumes that the SFTP connection will finish within 60 seconds, which may sometimes be a gross over-estimate, and other times an under-estimate.
Is there a more sound way to cause Perl to pause until the SFTP connection is closed before proceeding with the unzip step?
Edit: Responders have questioned (and reasonably so) the use of a VB script rather than having Perl do the file transfer. It has to do with security -- the VB script is maintained by others and is authorized to do the SFTP.
Check the code in your *.vbs file. The system function waits for the child process to finish before execution continues. It appears that your *.vbs file is forking a background task to do the SFTP and returning immediately.
In a perfect world your script would be rewritten to use Net::SFTP::Foreign and Archive::Extract.
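A sketch of what that rewrite might look like; the host, credentials, and remote glob are placeholders, and on Windows, password authentication with Net::SFTP::Foreign needs a suitable ssh backend:

use strict;
use warnings;
use Net::SFTP::Foreign;
use Archive::Extract;

my $sftp = Net::SFTP::Foreign->new('sftp.example.com',    # placeholder
                                   user     => 'someuser',
                                   password => 'secret');
$sftp->die_on_error('SFTP connection failed');

# get/mget block until each transfer finishes, so there is nothing to wait for
my @got = $sftp->mget('incoming/*.zip', '.');
die 'mget failed: ' . $sftp->error if $sftp->error;

foreach my $zipFile (<*.zip>) {
    my $ae = Archive::Extract->new(archive => $zipFile);
    $ae->extract or die $ae->error;
}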
An ugly, quick-hackish kind of way might be to create a touch-file before your first system call, alter your sftp-fetching script to delete the file once it is done, and wait in a loop like so:
while (-e 'touch.file') {
    sleep 5;
}
# foreach [...]
Of course, you would need to take care of the case where your .vbs fails and leaves the touch-file undeleted, among many other bad side effects. This would be a quick solution (if none of the other suggestions work) until you get the time to rewrite without system() calls.
You need a way for Perl to wait until the SFTP transfer is done, but as your script is currently written, Perl has no way of knowing this. (It looks like you're combining at least two scripting languages and a (GUI?) SFTP client; this can work, but it's not exactly reliable or robust. Why use VBScript to start the SFTP transfer?)
I can think of four options:
1. Your Perl script could do the SFTP transfer itself, using something like CPAN's Net::SFTP module, rather than spawning an external job whose status it cannot track.
2. Your Perl script could spawn a command-line SFTP utility (like PSFTP) that doesn't return until the transfer is done (see the sketch after this list).
3. You could change the exec_SFTP.vbs script to not return until the transfer is done.
4. If you're currently using a graphical SFTP client and can't switch for whatever reason, I'd recommend using a scripting language like AutoIt instead of Perl. AutoIt has features to wait for windows to change state and so on, so it could more easily monitor an activity's completion.
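For option 2, a minimal sketch; the host, password, and batch file name are placeholders, and psftp's -b flag reads sftp commands from that file:

use strict;
use warnings;

# psftp (from PuTTY) runs the batch file and only exits when the
# transfer is done, so system() blocks for the whole duration.
system('psftp', 'user@host.example.com', '-pw', 'secret',
       '-b', 'get_files.txt') == 0
    or die "psftp failed: $?";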
Options 1 or 2 would be the most robust and reliable.
The best I can suggest is modifying exec_SFTP.vbs to exit only after the file transfer is complete. system waits for the program it called to complete, so that should solve your problem:
system LIST
system PROGRAM LIST
Does exactly the same thing as "exec LIST", except that a fork is done first, and the parent process waits for the child process to complete.
If you can't modify the vbs script to stay alive until the transfer completes, you may be able to track subprocess creation. If you can get subprocess ids, you can monitor them and thereby know when the vbs script's various offspring terminate.
Win32::Process::Info lets you get the subprocess ids of a running process.
Maybe this is a dumb question, but why not just use the Net::SFTP and Archive::Extract Perl modules to download and unzip the files?
system will not return until the shell it runs the command in has returned; this may not hold when launching graphical programs or file associations.
See if either of the following helps:
system('cscript exec_SFTP.vbs');
use Win32::Process;
use Win32;
Win32::Process::Create(my $proc, 'wscript.exe',
    'wscript exec_SFTP.vbs', 0, NORMAL_PRIORITY_CLASS, '.');
$proc->Wait(INFINITE);
Have a look at IPC::Open3
IPC::Open3 - open a process for reading, writing, and error handling using open3()
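For example, a minimal sketch that runs the VBS under cscript and blocks until it exits (assuming the script itself only returns when the transfer is done):

use strict;
use warnings;
use IPC::Open3;
use Symbol qw(gensym);

my $err = gensym;   # separate handle for the child's STDERR
my $pid = open3(my $in, my $out, $err,
                'cscript', '//nologo', 'exec_SFTP.vbs');
print while <$out>;   # relay the child's output (select() over both
                      # handles if the child is chatty on STDERR)
waitpid $pid, 0;      # blocks until the child exits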