use Net::SSH2;
my $ssh2 = Net::SSH2->new();
$ssh2->connect($hostname);
$ssh2->auth_password($user,$pass);
my $chan = $ssh2->channel();
$chan->exec("cd dir1");
$chan->exec("command file1.txt");
The above doesn't work: the command cannot find dir1/file1.txt. How do you change the working directory using Net::SSH2?
According to the documentation, each invocation of $chan->exec() runs its command in its own process on the remote host. The cd dir1 in the first exec affects only that process; the next exec runs in a completely separate one.
The simplest way to solve the problem would be to pass the full path in the command, i.e.
$chan->exec("command dir1/file1.txt");
You could also try setting the PATH variable using $chan->setenv(), but that will probably be prohibited by the remote side.
Note also (from the process section):
... it is also possible to launch a remote shell (using shell) and simulate the user interaction printing commands to its stdin stream and reading data back from its stdout and stderr. But this approach should be avoided if possible; talking to a shell is difficult and, in general, unreliable.
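For reference, a complete one-shot sketch along those lines (connection details are placeholders; the whole job is done with a single exec and the output read back from the channel):

use strict;
use warnings;
use Net::SSH2;

my ($hostname, $user, $pass) = @ARGV;    # placeholders

my $ssh2 = Net::SSH2->new();
$ssh2->connect($hostname) or die "connect failed";
$ssh2->auth_password($user, $pass) or die "authentication failed";

# One exec, one remote process: hand the command the full path it needs.
my $chan = $ssh2->channel();
$chan->exec("command dir1/file1.txt");
while (my $line = <$chan>) {
    print $line;                          # the remote command's output
}
$chan->close;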
I'm trying to run a powershell script from within Cygwin (ultimately will be run over Cygwin SSH), and am finding that user input seems to be ignored. The script displays a menu and then uses Read-Host "Your selection:" to receive the input.
There is a blinking cursor, but the "Your selection" text doesn't appear, and anything I enter seems to just write to the console itself and is ignored by the script.
Does anyone know what could be wrong?
Thanks very much.
I'm guessing the Cygwin console does not implement the APIs that the PowerShell console's host (System.Management.Automation.Internal.Host.InternalHostUserInterface) depends on, or doesn't implement them as expected. This will surely be the case if you attempt to run over SSH. MS has documentation on how to write a custom Host UI. So if you want to run PS over SSH, it seems there are four possibilities:
Write your own PSHost implementation
Find someone else's PSHost implementation
Use your SSH client's stdin and stdout as a two-way pipe and write a REPL that takes input from the pipe (SSH stdin) and sends output to the pipe (SSH stdout). Unless you implement it yourself, this option means you lose line editing, history, tab completion, etc.
Not sure if this one will work, but it might be the least amount of code to implement if it does. Create a child PS process and redirect the child's stdin and stdout to the parent process. Everything you read from SSH stdin you write to the child PS's stdin, and everything you read from the child's stdout you write to SSH stdout. You'll probably want asynchronous I/O for the reads on SSH stdin and the child's stdout to prevent hangs (if the script is waiting on a read from the child's stdout but the child PS has no more output, the script hangs). In effect the SSH user is controlling the child PS process, and the parent process is just the glue joining the two together; a rough sketch follows below.
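A very rough Perl sketch of that fourth option under Cygwin (untested; the powershell.exe -Command - invocation and the plain byte-pumping loop are assumptions, and real code would need error handling):

use strict;
use warnings;
use IPC::Open2;
use IO::Select;

# Spawn a child PowerShell that reads commands from its stdin.
my $pid = open2(my $from_ps, my $to_ps,
                'powershell.exe', '-NoLogo', '-Command', '-');

my $sel = IO::Select->new(\*STDIN, $from_ps);
while ($sel->count) {
    for my $fh ($sel->can_read) {
        my $n = sysread($fh, my $buf, 4096);
        if (!$n) {                                            # EOF (or error): stop watching
            $sel->remove($fh);
            close $to_ps if fileno($fh) == fileno(STDIN);     # no more input for the child
            next;
        }
        if (fileno($fh) == fileno(STDIN)) {
            syswrite($to_ps, $buf);                           # SSH stdin -> PowerShell stdin
        } else {
            syswrite(STDOUT, $buf);                           # PowerShell stdout -> SSH stdout
        }
    }
}
waitpid $pid, 0;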
I need to source the configuration file 'eg.conf' into the terminal through a Perl script. I am using the system command but it's not working.
system('. /etc/eg.conf')
Basically, I am writing a script that at a later point will use the environment variables (from the conf file) to execute other processes.
It is not clear what you are trying to achieve, but if you want to make the config available from within Perl AND your config file is valid Perl code, you can use do or require (see perldoc for more information).
What your code actually does is spawn a shell with system, source the config inside that shell (which must therefore be in shell syntax), and then exit the shell again, which of course throws all the config away. I guess this is not what you intend to do, but your real intention is not clear.
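If the config file is plain shell syntax rather than Perl, one common workaround (a sketch, not from the answer above) is to source it in a throwaway shell, dump that shell's environment, and copy the result into Perl's own %ENV, so that every process the script starts afterwards inherits those variables:

use strict;
use warnings;

# Source the file in a throwaway shell, dump that shell's environment,
# and copy it into this Perl process's %ENV (naive parse: values that
# contain newlines would confuse it).
my $env_dump = `sh -c '. /etc/eg.conf >/dev/null 2>&1; env'`;
for my $line (split /\n/, $env_dump) {
    my ($key, $value) = split /=/, $line, 2;
    $ENV{$key} = $value if defined $value;
}

# Any process started from here on inherits the variables from eg.conf;
# SOME_VAR is just a hypothetical name for illustration.
system('echo "SOME_VAR is: $SOME_VAR"');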
What is your goal? Do you need to source eg.conf to set up further calculations from within a perl controlled shell, or are you trying to affect the parent shell that is running the perl script?
Your example call to system('. /etc/eg.conf') creates a new shell subprocess. /etc/eg.conf is sourced into that shell at which point the shell exits. Nothing is changed within the perl script nor in the parent process that spawned the perl script.
One cannot modify the environment of a parent process from a child process without the assistance of the parent process[1]. One generally returns code for the parent shell to source or to eval.
1: ok, one could theoretically affect the parent process by directly poking into its memory space. Don't do that.
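So if the real goal is to change the environment of the shell that runs the Perl script, the usual pattern looks something like this (variable names are purely illustrative):

#!/usr/bin/perl
# setenv.pl - print shell commands for the calling shell to eval
use strict;
use warnings;

print "export APP_HOME=/opt/app\n";
print "export PATH=/opt/app/bin:\$PATH\n";

and then, from the parent shell: eval "$(perl setenv.pl)".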
I am writing a shell script to automate some of the tedious tasks that we perform. I need to ssh to a server, change the PATH variable remotely, and have that variable persist for the next commands to be executed. Code below:
sub ab_tier {
    my $ssh = Net::SSH::Perl->new($host);
    $ssh->login($user2, $user2);
    my $PATH;
    my ($stdout, $stderr, $exit) =
        $ssh->cmd("export PATH=/usr/bin/sudo:/local/perl-5.6.1/bin:$PATH");
    ($PATH, $stderr, $exit) = $ssh->cmd("echo $PATH");
    print $PATH;    # Check the path for correctness: does not change
}
However, the PATH does not change. Is there another way to implement this, or am I doing something wrong? I need to automate tasks, so I don't think $ssh->shell would help here. Please suggest.
I made changes as per the suggestions and everything works fine. However, I am noticing another issue when trying to display environment variables.
my $cmd_ref_pri = {
    cmd0 => "echo $ENV{'HOME'}",
    cmd1 => "chmod 777 $ENV{'COMMON_TOP'}/temp",
};
Now I am connecting to a remote server using Net::SSH::Perl, and the value returned by $ENV{'HOME'} is my local home directory, not the remote server's. However, if I add a command, as in:
my $cmd_ref_pri = {
    cmd0 => "whoami ; echo $ENV{'HOME'}",
    cmd1 => "chmod 777 $ENV{'COMMON_TOP'}/temp",
};
then the user id displayed is that of the user I ssh to the remote server as. I do not have other modules installed and the only one available is Net::SSH::Perl, hence I am forced to use this.
Routine for executing commands:
sub ssh_cmd {
    # $cmd_sub contains the command, $ssh contains the object ref
    my ($cmd_sub, $ssh) = @_;
    my ($stdout, $stderr, $exit) = $ssh->cmd("bash", $cmd_sub);
    if ($exit != 0) {
        print $stdout;
        print "ERROR-> $stderr";
        exit 1;
    }
    return 0;
}
Any suggestions as to why this could happen?
cmd() is not passing your commands to one shell. It executes them in separate shells (or without any shell at all; the manual is not clear about this). As soon as your export PATH finishes, that shell exits and the new PATH is lost.
It looks like it is possible to pass all the relevant commands to a single shell process as separate lines of $stdin:
my $stdin='export A=B
echo $A
';
$ssh->cmd("bash",$stdin);
This would work just like an interactive login (but without terminal control, so commands that talk directly to the terminal would likely fail).
Anyway, Net::SSH::Perl does not look like the best tool for the job; I would rather use Expect for automation.
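As for the $ENV{'HOME'} observation in the update: it sits inside a double-quoted Perl string, so it is expanded by the local Perl process before the command is ever sent; the remote shell never sees a variable at all (whoami, by contrast, really runs on the remote side). To use the remote values, leave the expansion to the remote shell, for example (assuming the remote environment defines these variables):

my $cmd_ref_pri = {
    cmd0 => 'whoami ; echo $HOME',            # single quotes: $HOME expands remotely
    cmd1 => "chmod 777 \$COMMON_TOP/temp",    # escaped, so Perl leaves it alone
};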
Set PATH on every command call:
$ssh->cmd('PATH=/usr/bin/sudo:/local/perl-5.6.1/bin:$PATH echo $PATH');
And BTW, Net::SSH::Perl is not being maintained anymore; nowadays Net::SSH2 and Net::OpenSSH are better alternatives.
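For example, with Net::OpenSSH the export and the command that needs it can run inside one remote shell invocation (a sketch; host, user and password are placeholders):

use strict;
use warnings;
use Net::OpenSSH;

my ($host, $user, $password) = @ARGV;    # placeholders for real connection details

my $ssh = Net::OpenSSH->new($host, user => $user, password => $password);
$ssh->error and die "SSH connection failed: " . $ssh->error;

# Single-quoted on the Perl side, so $PATH is expanded by the remote shell.
my $output = $ssh->capture(
    'export PATH=/usr/bin/sudo:/local/perl-5.6.1/bin:$PATH; echo $PATH'
);
$ssh->error and die "remote command failed: " . $ssh->error;
print $output;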
Write commands to a remote temp file, then execute that one. Or, skip the $PATH thing and use the full path for subsequent commands (assuming you know it).
I am working on an application that needs to send commands to remote servers. Sending commands is easy enough with the plethora of SSH client libraries.
However, I would like shell state (i.e. current working directory, environment variables, etc.) preserved between commands. None of the client libraries I have seen do this. For example, here is code that does not do what I want:
use Net::SSH::Perl;
my $server = Net::SSH::Perl->new($host);
$server->login($user, $pass);
$server->cmd('cd /var');
$server->cmd('pwd'); # I _would like_ this to output /var
There will be other tasks performed between sending commands, so combining the commands like $server->cmd('cd /var; pwd') is not acceptable.
Net::SSH::Expect does what you want, though the "Expect" way is not completely reliable, as it parses the output of your commands and tries to detect when the shell prompt appears again.
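A minimal sketch of that approach (untested; connection details are placeholders):

use strict;
use warnings;
use Net::SSH::Expect;

my ($host, $user, $password) = @ARGV;    # placeholders

# One persistent login shell, so state (cwd, environment) carries over
# from one exec() call to the next.
my $ssh = Net::SSH::Expect->new(
    host     => $host,
    user     => $user,
    password => $password,
    raw_pty  => 1,
);
$ssh->login();

$ssh->exec("cd /var");
my $pwd = $ssh->exec("pwd");   # should now report /var
print $pwd;

$ssh->close();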
I'm not sure what you are doing exactly, but you could just start one SSH session. If you really can't do this, maybe you can just use absolute paths for everything.
I have this command (example.sh) that I load; it works well on the Unix command line.
However, if I execute it in Perl using system or backticks, it doesn't work.
I am guessing certain settings, like environment variables and other external sh files, weren't loaded.
Is there example code to ensure it will work?
More updates on the execution failure (I have been trying different code):
push(@JOBSTORUN, "cd $a/$b/$c/$d; loadproject cats; sleep 60;");
...
my $pm = Parallel::ForkManager->new(3);
foreach my $job (@JOBSTORUN) {
    $pm->start and next;
    print(`$job`);
    $pm->finish;
}
print "\n\n[DONE] FINISHED EXECUTING JOBS\n";
Output Messages:
sh: loadproject: command not found
Can you show us what you have tried so far? How are you running this program?
My first suspicion wouldn't be the environment if you are running it from a login shell; any Perl script you start (well, any program, really) inherits the same environment. However, if you are running the program through cron, then that's a different story.
The other mistake I usually make in these situations is specifying relative paths incorrectly. The paths are fine from the command line, but my Perl script has some other current working directory.
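A quick way to check both of those suspicions is to have the script print what it actually sees and compare that with your interactive shell (a sketch; loadproject is the command from the error message above):

use strict;
use warnings;
use Cwd;

print "cwd:  ", getcwd(), "\n";        # is this the directory you expected?
print "PATH: $ENV{PATH}\n";            # does it include loadproject's directory?
system('command -v loadproject');      # prints nothing if sh cannot find it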
For general advice, see Interact with the system when Perl isn't enough. There's also a chapter in Learning Perl about this.
That's about the best advice you can hope for given the very limited information you've shared with us.