I am writing a Perl script to automate some of the tedious tasks that we perform. I need to ssh to a server, change the PATH variable remotely, and have that variable persist for the next commands to be executed. Code below:
sub ab_tier {
    my $ssh = Net::SSH::Perl->new($host);
    $ssh->login($user2, $user2);
    my ($stdout, $stderr, $exit) =
        $ssh->cmd("export PATH=/usr/bin/sudo:/local/perl-5.6.1/bin:\$PATH");
    my ($PATH) = $ssh->cmd("echo \$PATH");
    print $PATH;    # check the path for correctness: it does not change
}
However, the PATH does not change. Is there another way to implement this, or am I doing something wrong? I need to automate tasks, so I don't think $ssh->shell would help here. Please suggest.
I made changes as per the suggestions and everything works fine. However, I am noticing another issue, which occurs when trying to display environment variables.
my $cmd_ref_pri = {
    cmd0 => "echo $ENV{'HOME'}",
    cmd1 => "chmod 777 $ENV{'COMMON_TOP'}/temp",
};
Now I am connecting to a remote server using Net::SSH::Perl, and the value returned by $ENV{'HOME'} is my local home directory, not the remote server's. However, if I add a command, as in:
my $cmd_ref_pri = {
    cmd0 => "whoami ; echo $ENV{'HOME'}",
    cmd1 => "chmod 777 $ENV{'COMMON_TOP'}/temp",
};
then the user id displayed is that of the user I ssh to the remote server as. I do not have other modules installed; the only one available is Net::SSH::Perl, hence I am forced to use it.
Routine for executing commands:
sub ssh_cmd {
    # $cmd_sub contains the command, $ssh contains the object ref
    my ($cmd_sub, $ssh) = @_;
    my ($stdout, $stderr, $exit) = $ssh->cmd("bash", $cmd_sub);
    if ($exit != 0) {
        print $stdout;
        print "ERROR-> $stderr";
        exit 1;
    }
    return 0;
}
Any suggestions as to why this could happen?
cmd() is not passing your commands into one shell. It executes each of them in a separate shell (or without any shell; the manual is not clear about it). As soon as your export PATH finishes, that shell exits and the new PATH is lost.
It looks like it is possible to pass all the relevant commands to a single shell process as separate lines of stdin:
my $stdin = 'export A=B
echo $A
';
$ssh->cmd("bash", $stdin);
This would work just like an interactive login (but without terminal control, so commands that talk directly to the terminal would likely fail).
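For instance, to capture what that shell printed (a sketch; cmd returns stdout, stderr, and the exit status):

my ($stdout, $stderr, $exit) = $ssh->cmd("bash", $stdin);
print $stdout;    # prints "B": both lines ran in the same shell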
Anyway, Net::SSH::Perl does not look like the best tool for this job. I would rather use Expect for automation.
Set PATH on every command call:
$ssh->cmd('PATH=/usr/bin/sudo:/local/perl-5.6.1/bin:$PATH echo $PATH');
And BTW, Net::SSH::Perl is not being maintained anymore; nowadays Net::SSH2 and Net::OpenSSH are better alternatives.
Write commands to a remote temp file, then execute that one. Or, skip the $PATH thing and use the full path for subsequent commands (assuming you know it).
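For example, a sketch of the temp-file route (Net::SSH::Perl's cmd takes an optional second argument that is fed to the remote command's stdin, so it can be used to write the file; /tmp/job.sh is just an example name):

my $script = 'export PATH=/usr/bin/sudo:/local/perl-5.6.1/bin:$PATH
echo $PATH
';
$ssh->cmd('cat > /tmp/job.sh', $script);                 # write the commands to a remote file
my ($out, $err, $exit) = $ssh->cmd('bash /tmp/job.sh');  # run them all in one shell
print $out;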
use Net::SSH2;

my $ssh2 = Net::SSH2->new();
$ssh2->connect($hostname);
$ssh2->auth_password($user, $pass);
my $chan = $ssh2->channel();
$chan->exec("cd dir1");
$chan->exec("command file1.txt");
The above doesn't work and command cannot find dir1/file1.txt. How do you change the working directory using Net::SSH2?
According to the documentation, each invocation of $chan->exec() runs in its own process on the remote. The cd dir1 in the first exec affects only that execution. The next exec is a completely separate process.
The simplest way to solve the problem would be to pass the full path in the command, i.e.
$chan->exec("command dir1/file1.txt");
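Alternatively (assuming the remote account uses a POSIX-style shell), chain both steps in a single exec call, since everything passed to one exec runs in the same remote process:

# one exec, one shell: the cd is still in effect for the command
$chan->exec("cd dir1 && command file1.txt");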
You could also try setting the PATH variable using $chan->setenv(), but that will probably be prohibited by the remote side.
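For reference, a setenv sketch (hedged: most sshd configurations only accept variables whitelisted in AcceptEnv, so this is likely to be rejected silently):

my $chan = $ssh2->channel();
$chan->setenv('PATH', '/usr/local/bin:/usr/bin:/bin');   # may be refused by the server
$chan->exec('command file1.txt');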
Note also (from the process section):
... it is also possible to launch a remote shell (using shell) and simulate the user interaction printing commands to its stdin stream and reading data back from its stdout and stderr. But this approach should be avoided if possible; talking to a shell is difficult and, in general, unreliable.
Note that I'm aware this is probably not the best or most optimal way to do this, but I've run into it somewhere before and I'm curious as to the answer.
I have a Perl script that is called from an init script; it runs and occasionally dies. To quickly debug this, I put together a quick Perl wrapper script that basically consists of:
# $path set from library call.
while (1) {
    system("$path/command.pl " . join(" ", @ARGV) . " >>/var/log/outlog 2>&1");
    sleep 30;    # Added this one later. See below...
}
Fire this up from the command line and it runs fine, as expected: command.pl is called, and the script basically halts there until the child process dies, then goes around again.
However, when called from a start script (actually via start-stop-daemon), the system command returns immediately, leaving command.pl running. Then it goes around for another go. And again and again. (This was not fun without the sleep call.) ps reveals the parent of the (many) command.pl processes to be 1, rather than the pid of the wrapper script (which it is when I run from the command line).
Anyone know what's occurring?
Maybe the command.pl is not being run successfully. Maybe the file doesn't have execute permission (do you need to say perl command.pl?). Maybe you are running the command from a different directory than you thought, and the command.pl file isn't found.
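Those guesses are cheap to check from inside the wrapper with Perl's file test operators (a sketch; $path is the variable from the original script):

warn "$path/command.pl does not exist\n"    unless -e "$path/command.pl";
warn "$path/command.pl is not executable\n" unless -x "$path/command.pl";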
There are at least three things you can check:
the standard error output of your command. For now you are swallowing it by saying 2>&1. Remove that part and observe what errors the command produces.
the return value of system. The command may fail and system will still return normally; a non-zero return value indicates a problem, while 0 means the command ran successfully.
Perl's error variable $!. If there was a problem, Perl will set $!, which may or may not be helpful.
To summarize, try:
my $ec = system("command.pl >> /var/log/outlog");
if ($ec != 0) {
    warn "exit code was $ec, \$! is $!";
}
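Note that system actually returns the raw wait status, not the exit code itself; perldoc -f system shows how to take it apart:

my $rc = system("command.pl >> /var/log/outlog");
if ($rc == -1) {
    warn "failed to execute: $!\n";
}
elsif ($rc & 127) {
    warn sprintf "child died with signal %d\n", $rc & 127;
}
else {
    warn sprintf "child exited with value %d\n", $rc >> 8;
}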
Update: if multiple instances of the command keep showing up in your ps output, then it sounds like the program is forking and running itself in the background. If that is indeed what the command is supposed to do, then what you do NOT want to do is run this command in an endless loop.
Perhaps, when run from a daemon, the system command is using a different shell than the one used when you are running as yourself. Maybe the shell used by the daemon does not recognize the >& construct.
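One way to rule that out is to name the shell explicitly rather than letting system pick /bin/sh (a sketch based on the loop from the question):

# run the command line under bash explicitly, so the redirection syntax is known
system('/bin/bash', '-c', "$path/command.pl @ARGV >>/var/log/outlog 2>&1");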
Instead of system("..."), try the exec("...") function, if that works for you.
I have a Bluehost server set up and am trying to set the path in my Perl program:
print "Content-type: text/html\n\n";
my $output=`export PATH=\${PATH}:/usr/local/jdk/bin`;
my output1=`echo \$PATH`;
print $output1;
However, it still prints only the original $PATH. The /usr/local/jdk does not get added. Can anyone tell me what I am doing wrong?
You are creating a shell, executing a shell command that sets an environment variable in the shell, then exiting the shell without doing anything with the environment variable. You never changed Perl's environment. That would be done using:
local $ENV{PATH} = "$ENV{PATH}:/usr/local/jdk/bin";
Kinda weird to add to the end of the path, though.
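To see the difference, a minimal sketch: once Perl's own %ENV is changed, every child process Perl spawns afterwards inherits the new value:

local $ENV{PATH} = "$ENV{PATH}:/usr/local/jdk/bin";
print `echo \$PATH`;    # the child shell now sees /usr/local/jdk/bin at the end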
Please note that ikegami's answer will only set the path within your Perl script, and will NOT change it for the shell that called your Perl script.
If you wish to change the path in the shell environment, so that the next programs you run also benefit from the change, you will have to use 'source' or the "dot-space" sequence, or better yet, make the change to the path in your '.bashrc' or '.login' file.
I have this command (example.sh) that works well at the Unix command line.
However, if I execute it in Perl using system or the backtick (`) syntax, it doesn't work.
I am guessing that certain settings, like environment variables and other external sh files, weren't loaded.
Is there example code to ensure it will work?
More updates on the execution failure (I have been trying different code):
push(@JOBSTORUN, "cd $a/$b/$c/$d; loadproject cats; sleep 60;");
...
my $pm = Parallel::ForkManager->new(3);
foreach my $job (@JOBSTORUN) {
    $pm->start and next;
    print(`$job`);
    $pm->finish;
}
print "\n\n[DONE] FINISHED EXECUTING JOBS\n";
Output Messages:
sh: loadproject: command not found
Can you show us what you have tried so far? How are you running this program?
My first suspicion wouldn't be the environment, if you are running it from a login shell. Any Perl script you start (well, any program, really) inherits the same environment. However, if you are running the program through cron, then that's a different story.
The other mistake I usually make in these situations is specifying relative paths incorrectly. The paths are fine from the command line, but my Perl script has some other current working directory.
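If the working directory is the culprit, one common fix is to anchor everything to the script's own directory with the core FindBin module (a sketch; example.sh stands in for whatever the script runs):

use FindBin qw($Bin);                # directory that contains the running script
chdir $Bin or die "chdir $Bin: $!";
system("$Bin/example.sh");           # absolute path, independent of the caller's cwd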
For general advice, see Interact with the system when Perl isn't enough. There's also a chapter in Learning Perl about this.
That's about the best advice you can hope for given the very limited information you've shared with us.
I have a Perl script (part of the XMLTV family of "grabbers", specifically tv_grab_oztivo).
I can successfully run it like this:
/sw/bin/perl /path/to/tv_grab_oztivo --output /path/to/tv.xml
I use the full paths to everything to eliminate issues with the Working Directory. Permissions shouldn't be a problem.
So, if I run it from the Terminal (Mac OS X) it works just fine.
But when I set it to run via a cron job, nothing appears to happen at all. No output is created etc.
There isn't anything wrong with the crontab as far as I can see, because if I substitute a helloworld.pl for the actual script, it runs just fine at the right time.
So, what can I do to debug? I can see from looking at %ENV in the two cases that the environment is very different, but what other approaches can I take to debugging? How can I see the output of the cron job, which might be some kind of perl "die" message or "not found" message from the shell or whatever?
Or should I be trying to somehow give the cron version of the command the same environment as when it's running as me?
It's often because you don't get the full environment when running under cron. Your best bet is to capture the output by using the command:
( /sw/bin/perl /path/to/tv_grab_oztivo ... ) >/tmp/qq 2>&1
and then have a look at /tmp/qq.
If it does turn out to be a missing environment, then you may need to put:
. ~/.profile
or something similar, into the execution chain of your cron job, such as:
( . ~/.profile ; /sw/bin/perl /path/to/tv_grab_oztivo ... ) >/tmp/qq 2>&1
If you're looking at %ENV in the two cases, I'd suggest that, as a first step in your Perl script, you set %ENV to what it is in a cron job, and then try running it from the command line. You may need to exec yourself once for this to take full effect:
BEGIN {
    if (exists $ENV{something_in_your_env_not_in_cron}) {
        %ENV = (...);
        exec $^X, $0, @ARGV;
    }
}
Now try running it and see if there's anything you can do to debug it (including running under perl -d if required). Most likely, you'll find that you end up adding items back into %ENV one at a time until it magically starts working (LD_LIBRARY_PATH is a good one for this, but ORACLE_HOME or DB2HOME for Oracle or DB2 apps might be good choices, too). Then you can either set the variable in your script or in the crontab.
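Once the offending variable is identified, pinning it in the script itself is a one-liner (the value here is purely hypothetical; use whatever your %ENV comparison turned up):

$ENV{LD_LIBRARY_PATH} = '/usr/lib/oracle/lib';    # children spawned from here on inherit it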
I'd run a simple shell script by absolute path from the cron command.
Inside that script, I'd ensure that I trapped stdout and stderr to a known (or knowable) file. I'd also ensure that enough of your environment is set. On Unix, you get almost no environment set at all when you run a command via cron - I'm not sure about Mac OS X. The standard culprit for problems is PATH. I have a separate .cronfile that sets my working environment enough that I usually don't have problems - that's an analogue of .profile.
On occasion if you can't figure out what's going wrong with your command line, the simplest way to fix it is to turn the whole thing into a shell script. Ideally you shouldn't have to do this, but it can be the fastest way to solve the problem.
File: /files/cron1.sh
#!/bin/sh
/sw/bin/perl /path/to/tv_grab_oztivo --output /path/to/tv.xml
And then in cron:
/files/cron1.sh
This allows you to test the script independent of cron. Remember though that your login shell runs with different environment variables than cron does.
cron usually captures stdout and stderr and e-mails any output to the crontab owner.
Did you double check your crontab entry to make sure it's valid and will execute at the right time?
Make sure that the script does not need any environment variables set. Otherwise, wrap it in another (bash) script where you can set the environment variables the original script expects.
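If you'd rather not add a bash layer, the same wrapper can be written in Perl (a sketch using the paths from the question; the PATH value is an assumption):

#!/usr/bin/perl
use strict;
use warnings;

# set whatever the grabber expects before handing over control
$ENV{PATH} = '/usr/local/bin:/usr/bin:/bin';

exec '/sw/bin/perl', '/path/to/tv_grab_oztivo', '--output', '/path/to/tv.xml'
    or die "exec failed: $!";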