How to handle interactive commands in Perl?

I have a Perl script which calls a command that asks for confirmation before executing. How do I handle this in Perl?
For example, let's say my Perl script does the following:
`ssh myServer`;
Before connecting I get a prompt asking whether to proceed or not, and I have to provide "yes" as my next input. How can I achieve this? Any code snippet would be useful.

If you are looking at interactive use, you can use Expect from CPAN...
"Expect is a generic tool for talking to processes that normally require
user interaction. This might be running an ftp client to grab a file,
telnetting to a router to grab statistics or reset an interface. Or, as in the
case of a place I recently administered, to start up a secure webserver without
having to be physically at the machine to enter the super secret password."
However, there are other (better) methods to automate SSH logins, e.g. setting up key-based authentication with ssh-keygen.
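
For the host-key confirmation prompt from the question, a minimal Expect sketch might look like this (the host name, prompt patterns, and password are placeholders to adapt, not values from the question):

use strict;
use warnings;
use Expect;

my $exp = Expect->spawn('ssh', 'myServer')
    or die "Cannot spawn ssh: $!";

$exp->expect(30,
    # Answer the host-key confirmation prompt automatically.
    [ qr/continue connecting \(yes\/no/ => sub {
          my $self = shift;
          $self->send("yes\n");
          exp_continue;
      } ],
    # Supply the password when asked (placeholder value).
    [ qr/password:/i => sub {
          my $self = shift;
          $self->send("secret\n");
      } ],
);

# Hand the session over to the user once logged in.
$exp->interact();
$exp->soft_close();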

From the Net::SSH::Perl documentation:
interactive
Set to a true value if you're using Net::SSH::Perl interactively. This is used in determining whether or not to display password prompts, for example. It's basically the inverse of the BatchMode parameter in ssh configuration.
Defaults to false.
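
For instance, a minimal sketch passing that option (host and user names are placeholders; with interactive set, the module can prompt on the terminal for a password):

use strict;
use warnings;
use Net::SSH::Perl;

# interactive => 1 lets Net::SSH::Perl display password prompts.
my $ssh = Net::SSH::Perl->new('myServer', interactive => 1);
$ssh->login('myuser');

my ($stdout, $stderr, $exit) = $ssh->cmd('hostname');
print $stdout if defined $stdout;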

Related

How to store user credentials for script

I am required to utilize an old version of ClearQuest 7, and the only APIs that are enabled in our installation are for VBA (Excel) and RatlPERL. (The REST API isn't an option for us - although it suffers the same cleartext credential problem.)
I've written a ratlperl script that executes queries into the defect database, and produces csv output. Note that ratlperl requires cleartext user credentials for authentication.
ratlperl query.cqpl -u %userid% -p %password% -q "%query%" -c %outfile%
That script is called from a Windows Batch file. When run from the Windows command line with no parameters, the batch file requests user credentials, but they can also be provided as parameters.
query.bat %userid% %password%
I trigger daily queries, with the user credentials passed as parameters for the batch file.
This all works well, but I'd rather not store the cleartext password in this way. The registry would be one possibility, but anyone with access to the machine would have access to those credentials.
How can I store these credentials in a somewhat secure way?
There are two things to watch out for. One is having your process list "show up" the auth credentials.
Particularly on Unix - if you run ps it'll show you the arguments, which might include a username and password. The way of handling this is mostly 'read from a file, not the arg list'. On Unix, you can also amend $0 to change how you show in ps (but that doesn't help command history, and it's also not perfect as there'll be a short period before it's applied).
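
As an illustrative sketch of the $0 trick (the new process name is arbitrary, and as noted this is Unix-only and best-effort):

use strict;
use warnings;

# Copy the credentials out of @ARGV, then rewrite the name that
# ps displays. The real arguments are still visible for a short
# moment before this assignment runs.
my ($user, $password) = @ARGV;
$0 = 'query-runner';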
The other is - storing the data at rest.
This is a bit more difficult. Pretty fundamentally, there aren't many solutions that let your script access the credentials without also letting a malicious user do so.
After all, by the simple expedient of inserting a print $password into your script... they bypass pretty much any control you could put on it. Especially if they have admin access on your box, at which point... there's really nothing you can do.
Solutions I'd offer though:
Create a file with (plaintext) username and password. Set minimum permissions on it. Run the script as a user that has privileges, but don't let anyone else access that user account.
That way other people can 'see' your script (and may need to to run it) but can't copy it/hack it/run it themselves.
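
A minimal, Unix-flavoured sketch of that approach (the file location, query, and output file are placeholders; on Windows you would rely on NTFS ACLs rather than the mode check):

use strict;
use warnings;

# Credentials file, created beforehand with e.g. chmod 600.
my $cred_file = "$ENV{HOME}/.cq_credentials";

# Refuse to run if the file is group- or world-accessible.
my $mode = (stat $cred_file)[2] & 07777;
die "Permissions on $cred_file are too open\n" if $mode & 0077;

open my $fh, '<', $cred_file or die "Cannot open $cred_file: $!";
chomp(my ($user, $password) = <$fh>);
close $fh;

# Hand the credentials to ratlperl instead of typing them
# on the command line.
system('ratlperl', 'query.cqpl', '-u', $user, '-p', $password,
       '-q', 'MyQuery', '-c', 'out.csv');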
I would suggest sudo for this on Unix. For Windows, I'm not sure how much granularity you have over RunAs - that's worth a look, or alternatively have a scheduled task that runs as your service account, and picks up 'request files' for processing that can be generated by anyone.
As the level of security doesn't need to be so high, perhaps consider creating a simple exe? The password could possibly be read out of memory somehow, but I guess this creates a big enough barrier.
Or something like this could be helpful?
http://www.battoexeconverter.com/
HTH

replacing telnet with ssh

I have some programs that use the Net::Telnet module to connect to several servers. Now the administrators have decided to replace the Telnet service with SSH, keeping everything else as before (for example, the user accounts).
I've taken a look at Net::SSH2 and I see that I would have to change most of the programs. Do you know of other SSH modules better suited for this replacement?
The client is a Windows box (ActiveState Perl or Cygwin Perl)
Net::OpenSSH!
And check the chapter in its documentation about how to integrate it with Net::Telnet.
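
A minimal Net::OpenSSH sketch (the host is a placeholder; note that the module drives the OpenSSH client binary, so it needs a working ssh in the PATH, which can be a limitation on native Windows):

use strict;
use warnings;
use Net::OpenSSH;

my $ssh = Net::OpenSSH->new('user@myServer');
$ssh->error and die "Connection failed: " . $ssh->error;

# capture() runs a remote command and returns its output.
my $output = $ssh->capture('uname -a');
print $output;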
Thanks for your suggestions, but I finally used Net::SSH::Perl on ActivePerl for Windows
Pros:
quite similar to Net::Telnet. There is no close method, but instead of $host->close you can do $host->cmd("exit")
native Perl implementation
Cons:
each cmd() call starts from a fresh state; for example, it doesn't keep the current directory between calls, as Net::Telnet did
needs a modification in the module code to work on Windows, see: https://rt.cpan.org/Public/Bug/Display.html?id=18154
cmd("su - user") doesn't work, but cmd("su - user -c 'commands'") does

Simultaneous Perl SSH Sessions

I am wondering if anyone has a Perl script (or can write one) to execute commands on multiple hosts at once via ssh, without any modules. I used to have something like this but cannot find it now and can't remember how it was done.
Are you looking for ClusterSSH? It's Perl, and it's used to run the same commands on several hosts at once, so this might be what you're looking for...
You might want to try using Expect.pm, which is similar to cnicutar's suggestion of calling an Expect script from Perl, except that you write it all in Perl. (This of course does not fit the requirement of "without any modules", but that requirement leads to bad Perl.)
Learn how to install and use modules even when you don't have admin privileges on the host
Use Net::OpenSSH::Parallel
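
A minimal sketch of what that looks like (host names are placeholders), following the module's documented add_host/push/run pattern:

use strict;
use warnings;
use Net::OpenSSH::Parallel;

my $pssh = Net::OpenSSH::Parallel->new;
$pssh->add_host($_) for qw(host1 host2 host3);

# Queue the same command for every registered host ('*'),
# then run the sessions concurrently.
$pssh->push('*', command => 'uptime');
$pssh->run;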
If you cannot use any additional modules from CPAN or any other source, all I can recommend is:
1) Use an Expect script and call it from your Perl script [only if you are not willing to use the Expect.pm module]
2) Set up SSH keys (ssh-keygen) on all the servers you will connect to, so that a password won't be necessary in the script, as mentioned by cnicutar
3) Use "remsh" if SSH usage is not strictly necessary.

Perl expect - how to control timeout on target machine

I am a newbie to Perl. I am using the Perl Expect module to spawn a session on a remote system and execute a set of commands there, one after another, using the send method (like $exp->send("my command as string goes here\n")). The problem is that the commands I execute take some time to process, and before they all finish, the remote machine times out and I come back to my host machine's prompt. Can you please help me handle this?
I have one more question. I have a command which returns two values after execution (say I am printing two values on the remote machine). I want to capture these two values and pass them as arguments to the next command via send. How do I do this?
Thanks.
I just found out something about the Expect module. There is an undef option that can be used with expect, as in $exp->expect(undef). This will wait indefinitely and let all commands finish their processing. The problem is that it does not return control to the host machine. There is one more option: using expect with eof, which will wait until it encounters an EOF and then return to the host machine, although I have no precise idea how to use it. An elegant solution I found is to use ssh to run the commands on the remote machine rather than Expect, in which case we do not have to deal with timeouts at all. :)
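
For the eof variant, a sketch along these lines should work (host and commands are placeholders; Expect documents 'eof' and 'timeout' as special patterns):

use strict;
use warnings;
use Expect;

my $exp = Expect->spawn('ssh', 'myServer')
    or die "Cannot spawn ssh: $!";

# ... authenticate, then run the slow commands ...
$exp->send("long_running_job\n");
$exp->send("exit\n");

# Instead of matching a prompt, wait (up to 10 minutes) for the
# remote side to close the connection; a timeout of undef would
# wait forever.
$exp->expect(600, [ 'eof' => sub { } ]);
$exp->soft_close();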

Creating a simple command line interface (CLI) using a python server (TCP sock) and few scripts

I have a Linux box and I want to be able to telnet into it (port 77557) and run a few required commands without giving access to the whole Linux box. So I have a server listening on that port which, for now, echoes the entered command back to the screen:
telnet 192.168.1.100 77557
Trying 192.168.1.100...
Connected to 192.168.1.100.
Escape character is '^]'.
hello
You typed: "hello"
NOW:
I want to create a lot of commands, each taking some args and having error codes. Has anyone done this before?
It would be great if the server, upon initialization, could go through each directory and execute its __init__.py file; in turn, the __init__.py file of each command would call into a main template lib API (e.g. RegisterMe()) and register itself with the server as a callback.
At least this is how I would do it in C/C++, but I want the best Pythonic way of doing this.
/cmd/
/cmd/myreboot/
/cmd/myreboot/__init__.py
/cmd/mylist/
/cmd/mylist/__init__.py
... etc
In /cmd/myreboot/__init__.py:
from myMainCommand import RegisterMe
RegisterMe(name="reboot", args=Arglist, usage="Use this to reboot the box", desc="blabla")
So, repeating this creates a list of commands, and when you enter a command in the telnet session, the server goes through the list, matches the command, and passes the args to it; the command does the job and prints success or failure to stdout.
Thx
I would build this app using a combination of the cmd2 and RPyC modules.
Twisted's web server does something kinda-sorta like what you're looking to do. The general approach used is to have a loadable python file define an object of a specific name in the loaded module's global namespace. Upon loading the module, the server checks for this object, makes sure that it derives from the proper type (and hence has the needed interface) then uses it to handle the requested URL. In your case, the same approach would probably work pretty well.
Upon seeing a command name, import the module on the fly (check the built-in import function's documentation for how to do this), look for an instance of "command", and then use it to parse your argument list, do the processing, and return the result code.
There likely wouldn't be much need to pre-process the directory on startup though you certainly could do this if you prefer it to on-the-fly loading.
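
As a sketch of that on-the-fly approach in Python (the package layout follows the question; the module-level "command" object and its run() method are illustrative assumptions, not a real library's API):

import importlib

def dispatch(name, args):
    # Import cmd/<name>/__init__.py on demand.
    try:
        module = importlib.import_module("cmd." + name)
    except ImportError:
        return 1, "unknown command: %s" % name

    # Look for the agreed-upon object in the module's namespace.
    handler = getattr(module, "command", None)
    if handler is None:
        return 1, "cmd.%s defines no 'command' object" % name

    # The handler parses its own args and returns (exit_code, output).
    return handler.run(args)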