Perl SSH connection output issue

I'm executing multiple commands over ssh using the Net::SSH2 module.
My problem is the following:
Sometimes when I send a command I get no output back. The command is still executed, I just don't get the response. Sometimes I get the response in the output of the next command I send.
I suspected that it tries to grab the output before it has appeared in the shell, but then it should at least return the shell "welcome lines" (printed on shell startup) if it's the first command, yet sometimes I get no output at all. At this point I just don't know what to try anymore.
I've tried:
Getting the output in the multiple ways described in several forums, for example from this demo.
Adding a sleep between sending the command and grabbing the output. This helped somewhat most of the time, but the problem still occurs, and having to determine the sleep time for each command individually is just not feasible, especially since 1s helps on some occurrences and not on others.
Running the command multiple times, which results in the next command sometimes grabbing output from the previous command.
Pushing output lines until the last line is a prompt, with the same result: either OK or 0 lines.
My implementation
use Net::SSH2;

sub connectSSH {
    my $user     = "...";
    my $password = "...";
    my $ssh2 = Net::SSH2->new();
    my $chan;
    if ($ssh2->connect($ip, $port, Timeout => 10)) {
        if (!$ssh2->auth_password($user, $password)) {
            print "Error: Password wrong\n";
            exit;
        } else {
            $chan = $ssh2->channel();   # SSH
            $chan->blocking(0);
            $chan->shell();
        }
    } else {
        print "Connection to $ip not possible\n";
        exit;
    }
    return $chan;
}

sub sendCommand {
    my ($chan, $command) = @_;
    my @output = ();
    print $chan "$command\n";
    # usleep(500);
    push(@output, "$_") while <$chan>;
    # process output...
}
I have to note that I've tried the use cases that didn't work in Perl with Python + paramiko, and those seemed to work fine (I didn't test as thoroughly).
Update 26.08 (Partly solved)
I have retried (thanks @jcaron for making me retry) pushing until the prompt appears, and that seems to work for most (the important) cases. It turns out that when I tried that method the first time there was another problem on top of the existing one, where I get no output from that specific host at all (not even after a delay or after repeating the commands). Again, with Python/paramiko it works, but not with my implementation in Perl. However, since the output from that specific group of hosts is not as important, it's OK.
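A minimal sketch of that read-until-prompt loop, for reference; the prompt regex and the per-command timeout are assumptions here, so adjust them to whatever the remote shell actually prints:
sub sendCommandWaitPrompt {
    my ($chan, $command, $timeout) = @_;
    $timeout ||= 10;

    print $chan "$command\n";

    my @output;
    my $deadline = time() + $timeout;
    while (time() < $deadline) {
        # non-blocking channel: <$chan> returns undef once the buffer is drained
        while (defined(my $line = <$chan>)) {
            push @output, $line;
        }
        # stop as soon as the last line looks like a shell prompt
        last if @output and $output[-1] =~ /[\$#>]\s*$/;
        select(undef, undef, undef, 0.2);   # short pause before polling again
    }
    return @output;
}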

Related

Get the output of a command executed via $self->send() on a remote host in Perl Expect module

I am using the Expect module in Perl to perform an interactive task and execute a command on a remote machine. I'd appreciate your help with this.
Here are the steps I used:
Switch user to another account.
Send the password to log in.
Once I get the shell for the new user, execute an ssh command to connect to a remote machine.
Then I want to execute a command on that remote machine and get its response.
I am able to execute the command on the remote machine. I am seeing the output on my terminal too.
But I am not able to capture it in a variable so that I can compare it against a value.
use Expect;

my $exp = Expect->new;
$exp->raw_pty(1);
$exp->spawn('su - crazy_user') or die "Cannot spawn switch user cmd: $!\n";
$exp->expect($timeout,
    [ qr/Password:/i,
      sub { my $self = shift;
            $self->send("$passwd\n");
            exp_continue;
      }],
    [ qr/\-bash\-4.1\$/i,
      sub { my $self = shift;
            $self->send("ssh $REMOTE_MACHINE\n");
            $self->send("$COMMAND1\n");
            exp_continue;
      }]
);
$exp->soft_close();
How can I get the result of the $COMMAND1 that I executed on the remote machine via $self->send("$COMMAND1\n")?
I am by no means an expert on this, but as no one else has answered so far, let me attempt it.
Your expect command is the su, and as such, normal expecting will only work on whatever that command answers back to your original shell. That, however, is only the password prompt and maybe some error messages. You can still send commands, but their responses show up in the shell of the new user, not in the shell the expect command was executed in. That is why they show up on screen but not in the stream available to your expect object. Note that you would likely encounter the very same problem if you were to ssh directly (I am not sure why you su and then ssh anyway; could you not directly ssh crazy_user@remote_machine?).
The solution is probably to get rid of the su and ssh directly into the user you need on the remote machine, employing Net::SSH::Expect instead of plain Expect, as it gives you everything written to the remote console in its output stream. But be careful: if I remember correctly, the syntax for inspecting the stream is slightly different.
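A rough sketch of what that might look like; the constructor options are from memory, so double-check them against the Net::SSH::Expect documentation:
use Net::SSH::Expect;

my $ssh = Net::SSH::Expect->new(
    host     => $REMOTE_MACHINE,
    user     => 'crazy_user',
    password => $passwd,
    raw_pty  => 1,
);
$ssh->login();

# exec() sends the command and returns what the remote side printed,
# so the result can finally be captured in a variable and compared.
my $output = $ssh->exec($COMMAND1);
print "Remote command returned: $output\n";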

Odd behavior with Perl system() command

Note that I'm aware that this is probably not the best or most optimal way to do this but I've run into this somewhere before and I'm curious as to the answer.
I have a Perl script that is called from an init script; it runs and occasionally dies. To quickly debug this, I put together a quick wrapper Perl script that basically consists of:
# $path set from a library call.
while (1) {
    system("$path/command.pl " . join(" ", @ARGV) . " >>/var/log/outlog 2>&1");
    sleep 30;   # Added this one later. See below...
}
Fire this up from the command line and it runs fine and as expected: command.pl is called, and the script basically halts there until the child process dies, then goes around again.
However, when called from a start script (actually via start-stop-daemon), the system command returns immediately, leaving command.pl running. Then it goes around for another go. And again and again. (This was not fun without the sleep command.) ps reveals the parent of (the many) command.pl processes to be 1 rather than the PID of the wrapper script (which it is when I run from the command line).
Anyone know what's occurring?
Maybe the command.pl is not being run successfully. Maybe the file doesn't have execute permission (do you need to say perl command.pl?). Maybe you are running the command from a different directory than you thought, and the command.pl file isn't found.
There are at least three things you can check:
the standard error output of your command. For now you are redirecting it into the log with 2>&1. Remove that part and observe what errors the system command produces.
the return value of system. The command may run and still fail, returning a non-zero status; only if system returns 0 do you know the command was successful.
Perl's error variable $!. If there was a problem, Perl will set $!, which may or may not be helpful.
To summarize, try:
my $ec = system("command.pl >> /var/log/outlog");
if ($ec != 0) {
    warn "exit code was $ec, \$! is $!";
}
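If you also need to tell a failed launch apart from a non-zero exit or a death by signal, the raw status in $? can be unpacked the way perldoc -f system describes; a sketch, reusing the wrapper's command line:
system("$path/command.pl " . join(" ", @ARGV) . " >>/var/log/outlog 2>&1");
if ($? == -1) {
    warn "failed to execute: $!\n";                      # command.pl never started
} elsif ($? & 127) {
    warn "child died with signal " . ($? & 127) . "\n";  # killed by a signal
} elsif ($? >> 8) {
    warn "child exited with value " . ($? >> 8) . "\n";  # normal but non-zero exit
}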
Update: if multiple instance of the command keep showing up in your ps output, then it sounds like the program is forking and running itself in the background. If that is indeed what the command is supposed to do, then what you do NOT want to do is run this command in an endless loop.
Perhaps when run from a daemon the system command is using a different shell than the one used when you run it as yourself. Maybe the shell used by the daemon does not recognize the >& construct.
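One way to rule that out (a sketch, not something the original answer spelled out) is to name the shell explicitly, so the redirections are always interpreted by bash regardless of which /bin/sh the daemon environment provides:
system('/bin/bash', '-c',
       "$path/command.pl " . join(" ", @ARGV) . " >>/var/log/outlog 2>&1");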
Instead of system("..."), try the exec("...") function and see if that works for you.

How do I exit the vi session which runs during Perl script execution?

I have to run a few jobs. After running each job, it runs vi on the contents. After writing and quitting (usually I do :wq!), the data get updated in the database. As the number of these kinds of jobs is more than a hundred, I thought of automating the process using Perl.
But when I ran the script, I got stuck in vi, unable to make it exit on its own. This requires manual intervention and defeats the purpose of my script. I need help on how to handle this situation, as it will save me time and effort.
Code is as given below:
print "Enter job name - \n";
$job_rc = <>;
print "Job entered by you is $job_rc \n";
my #job_name = ("job1", "job2", "job3", "job4");
my $total_job = #job_name;
print "Total job present = $total_job + 1 \n";
for ($i = 0; $i < $total_job; $i++) {
print "Curent job name: $job_name[$i] \n";
system "cr_job $job_name[$i] $job_rc";
sleep(10);
}
I think you are approaching the problem from the wrong side. Instead of exiting vi, think about not running it at all.
I can only guess why vi runs; it seems related to your “jobs”. One of the possible reasons is that they run the default text editor to grab some user input (a well-known example of such behaviour is that when you call hg commit, svn commit, cvs ci, etc. without providing a message, they automatically run a text editor to get the commit message).
If this is the case, first check your “jobs”, as they may have options to disable this prompt. If not, they may be using the $EDITOR environment variable to decide which editor to run; setting this variable to something you prepare (for example, a script which writes a default message to the file given as a parameter) may do the trick.
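If the jobs do honour $EDITOR, a sketch along these lines might be enough; /bin/true simply accepts the file name argument and exits 0 without editing, and you would swap in your own wrapper script if a default message has to be written:
# Make the "editor" a no-op so vi never starts.
$ENV{EDITOR} = '/bin/true';
$ENV{VISUAL} = '/bin/true';   # some tools consult VISUAL before EDITOR

system "cr_job $job_name[$i] $job_rc";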

Export PATH using Net::SSH::Perl

I am writing a Perl script to automate some of the tedious tasks that we perform. I need to ssh to a server and change the PATH variable remotely, and have that variable persist for the next commands to be executed. Code below:
sub ab_tier {
    my $ssh = Net::SSH::Perl->new($host);
    $ssh->login($user2, $user2);
    my $PATH;
    my ($stdout, $stderr, $exit) =
        $ssh->cmd("export PATH=/usr/bin/sudo:/local/perl-5.6.1/bin:$PATH");
    my ($PATH, $stderr, $exit) = $ssh->cmd("echo $PATH");
    print $PATH;   # Check the path for correctness: it does not change
}
However, the PATH does not change. Is there another way to implement this, or am I doing something wrong? I need to automate tasks, so I don't think $ssh->shell would help here. Please suggest.
I made changes as per the suggestions and everything works fine. However, I am noticing another issue, which occurs when trying to display environment variables.
my $cmd_ref_pri = {
    cmd0 => "echo $ENV{'HOME'}",
    cmd1 => "chmod 777 $ENV{'COMMON_TOP'}/temp"
};
Now I am connecting to a remote server using Net::SSH::Perl, and the value returned by $ENV{'HOME'} is the value of my home directory and not that of the remote server. However, if I add a command, as in:
my $cmd_ref_pri = {
    cmd0 => "whoami ; echo $ENV{'HOME'}",
    cmd1 => "chmod 777 $ENV{'COMMON_TOP'}/temp"
};
then the user id displayed is that of the user I use to ssh to the remote server. I do not have other modules installed and the only one available is Net::SSH::Perl, hence I am forced to use this.
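For what it is worth, the likely cause of the $ENV{'HOME'} observation is plain Perl string interpolation: inside double quotes, %ENV is expanded on the local machine before the command is ever sent. A sketch with single quotes, so the remote shell does the expansion instead (this assumes COMMON_TOP is set in the remote environment):
my $cmd_ref_pri = {
    cmd0 => 'echo $HOME',                  # $HOME expanded by the remote shell
    cmd1 => 'chmod 777 $COMMON_TOP/temp',  # only works if COMMON_TOP exists remotely
};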
Routine for executing commands:
sub ssh_cmd {
    # $cmd_sub contains the command, $ssh contains the object ref
    my ($cmd_sub, $ssh) = @_;
    my ($stdout, $stderr, $exit) = $ssh->cmd("bash", $cmd_sub);
    if ($exit != 0) {
        print $stdout;
        print "ERROR-> $stderr";
        exit 1;
    }
    return 0;
}
Any suggestions as to why this could happen ?
cmd() is not passing your commands into one shell. It executes them in separate shells (or without any shell; the manual is not clear about it). As soon as you finish your export PATH, that shell exits and the new PATH is lost.
It looks like it is possible to pass all the relevant commands to a single shell process as separate lines of $stdin:
my $stdin = 'export A=B
echo $A
';
$ssh->cmd("bash", $stdin);
This would work just like an interactive login (but without terminal control, so commands that talk directly to the terminal would likely fail).
Anyway, Net::SSH::Perl does not look like the best tool for the job. I would rather use Expect for automation.
Set PATH on every command call:
$ssh->cmd('PATH=/usr/bin/sudo:/local/perl-5.6.1/bin:$PATH echo $PATH');
And BTW, Net::SSH::Perl is not being maintained anymore; nowadays Net::SSH2 and Net::OpenSSH are better alternatives.
Write commands to a remote temp file, then execute that one. Or, skip the $PATH thing and use the full path for subsequent commands (assuming you know it).
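A sketch of the temp-file variant, using cmd()'s optional stdin argument to create the script on the remote host (the path /tmp/run_cmds.sh is arbitrary):
my $script = <<'EOF';
export PATH=/usr/bin/sudo:/local/perl-5.6.1/bin:$PATH
echo $PATH
EOF

# Write the script remotely, then run it in a single shell and clean up.
$ssh->cmd('cat > /tmp/run_cmds.sh', $script);
my ($stdout, $stderr, $exit) = $ssh->cmd('sh /tmp/run_cmds.sh; rm -f /tmp/run_cmds.sh');
print $stdout;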

Perl's backticks/system giving "tcsetattr: Input/output error"

I'm writing a perl script that kicks off a script on several different servers using ssh. The remote script needs to run as long as this script is running:
#!/usr/bin/perl
require 'config.cfg';

# @servers is defined in config.cfg
# Contains server info as [username, hostname]
#
# @servers = ([username, server1.test.com], [username, server2.test.com])
#
foreach my $server (@servers) {
    my $pid = fork();
    if ($pid == 0) {
        my $u = ${$server}[0];
        my $h = ${$server}[1];
        print "Running script on $h \n";
        print `ssh -tl $u $h perl /opt/scripts/somescript.pl`;
        exit 0;
    } elsif (!defined $pid) {
        die "Couldn't start the process: $!\n";
    }
}
[...]
[...]
When I run this script, I get the following output:
./brokenscript.pl
Running script on server01
$ tcsetattr: Input/output error
Connection to server01 closed.
The same result occurs when running with system (and backticks are preferred anyway, since I want the output of the script). When I run the exact command between the backticks on the command line, it works exactly as expected. What is causing this?
The tcsetattr: Input/output error message comes from ssh when it tries to put the local terminal into “raw” mode (which involves a call to tcsetattr; see enter_raw_mode in sshtty.c, called from client_loop in clientloop.c).
From IEEE Std 1003.1, 2004 (POSIX), Section 11.1.4: Terminal Access Control, tcsetattr may return -1 with errno == EIO (i.e. “Input/output error”) if the calling process is in an orphaned (or background?) process group.
Effectively, ssh is trying to change the settings of the local terminal even though it is not in the foreground process group (due to your fork and the local script exiting, as evidenced by the apparent shell prompt that comes immediately before the error message in your quoted output).
If you just want to avoid the error message, you can use ssh -ntt (redirect stdin from /dev/null, but ask the remote side to allocate a tty anyway) instead of ssh -t (add your -l and any other options you need back in too, of course).
More likely, you are interested in keeping the local script running as long as some of the remote processes are still running. For that, you need to use the wait function (or one of its “relatives”) to wait for each forked process to exit before you exit the program that forked them (this will keep them in the foreground process group as long as the program that started them is in it). You may still want to use -n though, since it would be confusing if the multiple instances of ssh that you forked all tried to use (read from, or change the settings of) the local terminal at the same time.
As a simple demonstration, you could have the local script do sleep 30 after forking off all the children, so that the ssh command(s) have time to start up while they are part of the foreground process group. This should suppress the error message, but it will not address your stated goal. You need wait for that (if I am interpreting your goal correctly).
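A sketch of the fork-then-wait structure described above, keeping the question's backtick capture and substituting -ntt as suggested:
my @pids;
foreach my $server (@servers) {
    my $pid = fork();
    die "Couldn't start the process: $!\n" unless defined $pid;
    if ($pid == 0) {                                   # child: run the remote script
        my ($u, $h) = @{$server};
        print "Running script on $h \n";
        print `ssh -ntt -l $u $h perl /opt/scripts/somescript.pl`;
        exit 0;
    }
    push @pids, $pid;                                  # parent: remember each child
}
# Reap every child before exiting, so the parent (and its foreground
# process group) stays alive as long as any remote script is running.
waitpid($_, 0) for @pids;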
That probably happens because you are forcing SSH to allocate a tty when stdin/stdout are not really ttys. SSH tries to call some specific tty function on those handles (probably forwarded from the remote side) and the call fails, returning an error.
Is there any reason why you should be allocating a tty?
Is there also any reason to use the obsolete version 1 of the SSH protocol?