I am working on an application that needs to send commands to remote servers. Sending commands is easy enough with the plethora of SSH client libraries.
However, I would like shell state (i.e. current working directory, environment variables, etc.) preserved between commands. None of the client libraries I have seen do this. For example, the following code does not do what I want:
use Net::SSH::Perl;
my $server = Net::SSH::Perl->new($host);
$server->login($user, $pass);
$server->cmd('cd /var');
$server->cmd('pwd'); # I _would like_ this to output /var
There will be other tasks performed between sending commands, so combining the commands like $server->cmd('cd /var; pwd') is not acceptable.
Net::SSH::Expect does what you want, though the "Expect" way is not completely reliable as it will be parsing the output of your commands and trying to detect when the shell prompt appears again.
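A minimal sketch of that approach (untested; the option names follow the module's documented constructor, and $host/$user/$pass are the same placeholders as in the question):
use Net::SSH::Expect;
my $ssh = Net::SSH::Expect->new(
    host     => $host,
    user     => $user,
    password => $pass,
    raw_pty  => 1,              # one real pty, one persistent shell
);
$ssh->login();
$ssh->exec("cd /var");          # state persists: it is all one shell session
print $ssh->exec("pwd");        # should print /var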
I'm not sure exactly what you are doing, but you could keep a single interactive SSH session open and feed it commands rather than reconnecting for each one. If you really can't do that, you may be able to get by with absolute paths for everything.
use Net::SSH2;
my $ssh2 = Net::SSH2->new();
$ssh2->connect($hostname);
$ssh2->auth_password($user,$pass);
my $chan = $ssh2->channel();
$chan->exec("cd dir1");
$chan->exec("command file1.txt");
The above doesn't work: command cannot find dir1/file1.txt. How do you change the working directory using Net::SSH2?
According to the documentation, each invocation of $chan->exec() runs in its own process on the remote. The cd dir1 in the first exec affects only that execution. The next exec is a completely separate process.
The simplest way to solve the problem would be to pass the full path in the command, i.e.
$chan->exec("command dir1/file1.txt");
You could also try setting the PATH variable using $chan->setenv(), but that will probably be prohibited by the remote side.
Note also (from the process section):
... it is also possible to launch a remote shell (using shell) and simulate the user interaction printing commands to its stdin stream and reading data back from its stdout and stderr. But this approach should be avoided if possible; talking to a shell is difficult and, in general, unreliable.
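Putting the recommended approach together, a hedged sketch (untested; the hostname, credentials, and command are from the question, and the one-channel-per-command pattern follows from the note that each exec runs as its own remote process):
use Net::SSH2;
my $ssh2 = Net::SSH2->new();
$ssh2->connect($hostname) or die "connection failed";
$ssh2->auth_password($user, $pass) or die "authentication failed";
# each command gets its own channel; pass the full path instead of cd-ing
my $chan = $ssh2->channel();
$chan->exec("command dir1/file1.txt");
my $buf;
print $buf while $chan->read($buf, 1024);   # drain the command's output
$chan->close;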
I am trying to create a Perl script which can ssh to multiple hosts (500+), execute a desired command, and show the output on the screen. I have done this with the Net::OpenSSH module, since SSH keys are not configured and I am not allowed to configure them, so I need something that can supply the password when connecting.
Because of the many connections, this takes considerable time. I searched for "parallel ssh in perl" and discovered that there is a module for opening parallel SSH connections (Net::OpenSSH::Parallel), but I read on some forums that I cannot capture output with it the way I can with Net::OpenSSH ($ssh->capture('ls')).
So, how can I accomplish parallel ssh more quickly? I also welcome any other suggestions to save time. Would using Net::OpenSSH in threads save time, or would it work just like the parallel module?
You can fork your program and manage the forks with something like Parallel::ForkManager, do the SSH work and output capture with Net::OpenSSH, and display the results to the screen. Be careful with your IO, though: all those processes writing to STDOUT/STDERR at the same time will produce garbled results. You'll need to hand each child's output back to the parent, e.g. via pipes between parent and child processes as in the answer to this question: fork() and STDOUT/STDERR to the console from child processes
Parallel programming is harder than serial, so be prepared for some fun :)
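A hedged sketch of that approach (untested; the host list, credentials, concurrency limit, and the uptime command are all placeholders), using Parallel::ForkManager's run_on_finish callback so that only the parent writes to the screen:
use strict;
use warnings;
use Net::OpenSSH;               # password auth also needs IO::Pty installed
use Parallel::ForkManager;

my @hosts = qw(host1 host2 host3);            # placeholder host list
my ($user, $pass) = ('admin', 'secret');      # placeholder credentials

my $pm = Parallel::ForkManager->new(20);      # at most 20 children at once

# only the parent prints, so output from the 500+ hosts never interleaves
$pm->run_on_finish(sub {
    my ($pid, $exit, $host, $signal, $core, $out_ref) = @_;
    print "== $host ==\n$$out_ref" if ref $out_ref;
});

for my $host (@hosts) {
    $pm->start($host) and next;               # parent continues the loop
    my $ssh = Net::OpenSSH->new($host, user => $user, password => $pass);
    my $out = $ssh->error ? "connection failed: " . $ssh->error . "\n"
                          : $ssh->capture('uptime');
    $pm->finish(0, \$out);                    # ship the output to the parent
}
$pm->wait_all_children;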
One way is to use a shell script to execute your perl script:
#!/bin/bash
# launch one background ssh job per host
for host in $(cat myhosts)
do
    perl myperl.pl "$host" "$1" "$2" &
done
wait   # don't exit until every background job has finished
where myhosts is a file containing the 500+ host names. The final wait keeps the wrapper alive until every background job has finished.
I'm trying to run ssh and mkdir from a Perl CGI script. It's not working, although the same code works fine from a normal Perl script. Can anyone tell me how to run commands from a Perl CGI script?
If you're running this script via a webserver, chances are the active user (e.g. "nobody", "www", etc.) does not have the privileges needed to execute commands like mkdir and ssh. If so, that is not something Perl can fix. You can test who the active user is with something like:
print "The active user is: ", `whoami`;
Giving the web user the privileges to create files and run commands is also a security concern.
system() or a piped open() are probably what you're looking for; if you're feeling dirty, backticks work too.
Do you need to run unix commands at all? Perl has a built-in mkdir, and there are modules to handle SSH. A CGI process normally has limited capabilities and system access, so the more you can do in pure Perl, the better.
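A small sketch of both points (the path and the diagnostic output are just examples): report the active user, and use Perl's built-in mkdir, which sets $! with the reason on failure, instead of shelling out:
#!/usr/bin/perl
use strict;
use warnings;

print "Content-type: text/plain\n\n";

# which account is the web server actually running under?
print "Active user: ", scalar getpwuid($<), "\n";

# built-in mkdir instead of system("mkdir ..."); $! explains any failure
my $dir = '/tmp/cgi-test';                    # example path
if (mkdir $dir) {
    print "created $dir\n";
} else {
    print "mkdir failed: $!\n";
}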
I am using a Windows command prompt to call a Perl script. At some point the script calls svn+ssh to update a repository, and that repository asks for user input, specifically a password.
I am trying to automate the execution of the Perl script, but it keeps hanging on the call to svn. I have tried many forms of input redirection (specifically < with an external file, | with cat, and Windows PowerShell's @ here-string syntax for multi-line strings). Is there a way to feed a password to this Perl script?
For the purposes of this problem, I do not have access to the Perl script itself, so I will need to implement a workaround.
You don't mention the svn+ssh implementation the script uses, but my guess is that the problem is this:
SSH clients tend to read passwords directly from the terminal, not from stdin, which is why input redirection has no effect; the OpenSSH client, for example, works that way. It is designed like this to discourage insecure habits such as storing passwords in files, environment variables or shell variables.
The common recommendation in this situation is to use public key authentication.
Without knowing your script it is hard to be specific, but one workaround that doesn't touch the script at all is sketched below.
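This sketch assumes the Subversion client honors the SVN_SSH environment variable and that you can set up key-based authentication; it points SVN_SSH at an ssh client armed with a key and then runs the original script unchanged (both paths and the script name are placeholders):
use strict;
use warnings;

# tell svn+ssh to use a key instead of prompting for a password
# (plink is PuTTY's ssh client; adjust the paths to your setup)
$ENV{SVN_SSH} = 'C:\\PuTTY\\plink.exe -i C:\\keys\\mykey.ppk';

# run the original script; the child process inherits the environment
system('perl', 'the_script.pl') == 0
    or die "the_script.pl failed: $?";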
Need a way for a Perl script running on one machine to run a Perl script on another.
The remote machine's config:
CentOS 5.5,
on the same network as the requesting machine,
has a DNS entry,
open ports: SSH and HTTP.
Questions, feedback, comments -- just comment, thanks!!
Is this what you want?
system("ssh user#remotemachine perl <remote script's full path>");
will run the script on the remote machine.
You may want to set up passwordless ssh; check: http://linuxproblem.org/art_9.html
As #Nylon Smile mentioned, you can use system to invoke the system's ssh client. If you want to do this without relying on external binaries (in particular because you want to handle password authentication differently), try Net::SSH::Perl, available from CPAN.
use Net::SSH::Perl;
my $ssh = Net::SSH::Perl->new('host');
$ssh->login('username', 'password');
my ($stdout, $stderr, $exit_code) = $ssh->cmd(
'perl some_script.pl --with=some_args',
'optionally, some stdin for that script'
);
Net::SSH::Perl can be a bit of a pain to install, but there are several other CPAN modules (most of which rely on an installed OpenSSH) that are easier to deal with while providing a similar API. See also Net::SSH and Net::OpenSSH.
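For comparison, a hedged sketch of the same call with Net::OpenSSH, which drives the locally installed OpenSSH binary (untested; host, credentials, and the remote command are the same placeholders as above):
use Net::OpenSSH;
my $ssh = Net::OpenSSH->new('host', user => 'username', password => 'password');
die 'connection failed: ' . $ssh->error if $ssh->error;
# capture2 returns the remote command's stdout and stderr separately
my ($stdout, $stderr) = $ssh->capture2('perl some_script.pl --with=some_args');
my $exit_code = $? >> 8;        # exit status of the remote command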
The above answers are fine for a one-off, but if you're doing this sort of thing a lot, you may want to look into a messaging system like AMQP (e.g. RabbitMQ) and set up queue listeners.