Expect/Tcl Script: How to prompt user without writing to stdout? - perl

In Expect script, how do you prompt user without writing to stdout? Is it possible to prompt the user through stderr?
I have a script written in Perl to automate some testing over ssh. To automate the ssh login, I wrapped the ssh command in Expect, but sometimes the password for ssh expires midway through executing the code (e.g. an RSA token refreshes every 30 seconds).
I found this: How can I make an expect script prompt for a password?
which works great, except it prompts the user through stdout. My Perl script reads and parses the stdout.
I would like to abstract this to my Expect script without modifying the Perl code, is it possible?
Edit: I just realized my use case is flawed, since I'm not sure how Perl would interact with the Expect script's prompt given that I'm calling the Expect script from Perl. :/ But it would still be good to know whether it's possible for Expect to write to stderr. :D
Thanks!

You can try accessing the “file” /dev/tty (it's really a virtual device). That will bypass the Perl script and contact the user's terminal directly. Once you've got that open, you use normal Expect commands to ask the user to give their password (remembering to turn off echo at the right time with stty).
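The idea can be sketched in plain shell; the same redirections work from inside an Expect script (which also has `send_tty` for writing straight to the terminal). The `prompt_password` name and the `TTY` override are mine, added only so the sketch can be exercised without a real terminal; normally it is just `/dev/tty`:

```shell
# Minimal sketch: prompt on the controlling terminal so stdout stays parseable.
# TTY defaults to /dev/tty; it is overridable here only for testing the sketch.
TTY="${TTY:-/dev/tty}"

prompt_password() {
    printf 'Password: ' >> "$TTY"            # prompt goes to the terminal, not stdout
    stty -echo < "$TTY" 2>/dev/null || true  # hide typing; ignore failure on non-ttys
    IFS= read -r password < "$TTY"
    stty echo < "$TTY" 2>/dev/null || true
    printf '\n' >> "$TTY"
    printf '%s' "$password"                  # only the password reaches stdout
}
```

Anything the calling Perl script captures from stdout stays clean, because both the prompt and the echo-suppression happen on the terminal device itself.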
Otherwise, your best bet might actually be to pop up a GUI; using a little Tcl/Tk script to do this is pretty trivial:
# This goes in its own script:
pack [label .l -text "Password for [lindex $argv 0]"]
pack [entry .e -textvariable pass -show "*"]
bind .e <Return> {
    puts $pass
    exit
}
focus .e
# In your expect script:
set password [exec wish getPass.tcl "foo@bar.example.com"]
That's all. It's also a reasonably secure way of getting a password, as the password is always communicated by (non-interceptable) pipes. The only downside is that it requires a GUI to be present.
The GUI could probably be made prettier; that's just the bare bones of what you need.


How can I have one perl script call another perl script and get the return results?
I have Perl Script B, which does a lot of database work, prints nothing, and simply exits with a 0 or a 3.
I would like Perl Script A to call Script B and get its results. But when I call:
my $result = system("perl importOrig.pl filename=$filename");
or
my $result = system("/usr/bin/perl /var/www/cgi-bin/importOrig.pl filename=$filename");
I get back a -1, and Script B is never called.
I have debugged Script B, and when called manually there are no glitches.
So obviously I am making an error in my call above, and not sure what it is.
There are many things to consider.
Zeroth, there are the perlipc docs for interprocess communication. What's the value of the error variable $!?
First, use $^X, which is the path to the perl you are executing. Since subprocesses inherit your environment, you want to use the same perl so it doesn't confuse itself with PERL5LIB and so on.
system("$^X /var/www/cgi-bin/importOrig.pl filename=$filename")
Second, CGI programs tend to expect particular environment variables to be set, such as REQUEST_METHOD. Calling them as normal command-line programs often leaves out those things. Try running the program from the command line to see how it complains. Check that it gets the environment it wants. You might also check the permissions of the program to see if you (or whatever user runs the calling program) are allowed to read it (or its directory, etc). You say there are no glitches, so maybe that's not your particular problem. But, do the two environments match in all the ways they should?
Third, consider making the second program a modulino. You could run it normally as a script from the command line, but you could also load it as a Perl library and use its features directly. This obviates all the IPC stuff. You could even fork so that stuff runs concurrently.

How to pass command line arguments to putty without making them visible in powershell / cmd?

When a process runs in Windows, any command line arguments you pass to it are visible while the process is running, making this method of passing plain-text sensitive data a really bad idea.
How does one prevent this from happening, short of implementing a public / private key infrastructure?
Do you just run everything via plink, since it exits right away after running the command (provided, of course, you make sure it actually does exit)?
See related questions below to refer to what I'm talking about:
https://stackoverflow.com/questions/7494073/commandline-arguments-of-running-process-in-dos
https://stackoverflow.com/questions/53808314/why-would-one-do-this-when-storing-a-secure-string/53808379?noredirect=1#comment94523315_53808379

Prevent printing to command line in MATLAB

Before you answer, I'm not looking for the functionality of ; to suppress command line printing.
I have a set of scripts which are not mine and I do not have the ability to change. However, in my scripts I make a call to these other scripts through evalin('base', 'scriptName'). Unfortunately, these other scripts do a lot of unnecessary and ugly printing to the command window that I don't want to see. Without being able to edit these other scripts, I would like a way to suppress output to the command line for the time that these other scripts are executing.
One potential answer was to use evalc, but when I try evalc(evalin('base', 'scriptName')) MATLAB throws an error complaining that it cannot execute a script as a function. I'm hoping there's something like the ability to disable command window printing or else redirecting all output to some null file much like /dev/null in unix.
I think you just need to turn the argument in your evalc example into a string:
evalc('evalin(''base'', ''scriptName'')');
Have you tried this solution here?
echo off;
I don't know if it will fit your needs, but another solution could be to open a second MATLAB session, minimized and with -nodesktop (just the command window). You can run the annoying scripts there and work in the main session as usual.
The problem is that the sessions can't be synchronized, so if you need to work with the results of those scripts all the time, it'll be a little complicated. Maybe you can save the results to disk, then load them from the main session...
But it mainly depends on your workflow with those scripts.

how to read texts on the terminal inside perl script

Is there any way to capture the text on the terminal screen inside a Perl script? I know there are functions like system, exec, and backticks, but the problem is that they execute commands FROM the script. For example: in the terminal I type cd / (or ls), and after that I run my Perl script, which should read what was written on the terminal screen (in this case, the script would capture cd / or ls, whichever was given to the terminal). One solution I came up with is passing the commands typed at the terminal as command-line arguments to the script, but is there any other way?
Like this maybe:
history | perl -ne 'print $_'
As I understand it, in a situation where you've typed some stuff into a terminal like this:
[tai@littlerobot ~] echo "Hello"
Hello
[tai@littlerobot ~] perl myscript.pl
You want myscript.pl to be able to access the echo "Hello" part, and possibly also the Hello that was that command's output.
Perl does not provide such a feature. No programming language does or can provide such a feature because the process in which your script/program runs has no intrinsic knowledge about what happened in the same terminal before it was run. The only way it could access this text would be if it could ask the currently running terminal, which will have some record of this information (i.e. the scrollback buffer), even if it cannot distinguish between which characters in the text were typed by you, and which are output. However, I know of no terminal that exposes that information via any kind of public API.
So if you want myscript.pl to be able to access that echo "Hello", you'll need to pass it to your script. Piping history to your script (as shown by Mark Setchell in his answer) is one technique. history is a shell built-in, so it has as much knowledge as your shell has (which is not quite the same knowledge as your terminal has). In particular it can give you a list of what commands have been typed in this shell session. However, it cannot tell you about the output generated by those commands. And it cannot tell you about other shell sessions, so doing this in Perl is fairly useless:
my @history = `tcsh -c history`;
The last thing you could try (though it would be incredibly complicated to do) would be to ask the X server (or Windows if running on that operating system) for a screen shot and then attempt to locate which rectangle the current terminal is running in and perform OCR on it. This would be fraught with problems though, such as dealing with overlapping windows.
So, in summary, you cannot do this. It's nothing to do with Perl. You cannot do this in any programming language.

Is there an interpreter for sending commands to Selenium?

I'm a Perl programmer doing some web application testing using Selenium. I was wondering if there's some kind of interactive interpreter which would allow me to type Selenium commands at a prompt and have them sent to selenium.
The way I'm currently developing code is to type all of the commands into a Perl script, and then execute the script. This make the development process painfully slow, because it makes it necessary to rerun the entire script whenever I make a change.
(If I were a Ruby programmer, I bet the Interactive Ruby Shell could help with this, but I'm hoping to find a solution for Perl.) Thanks!
The Java-based SeleniumServer has an --interactive mode.
I don't know about a Selenium shell, but if you are looking for a Perl REPL, there are modules such as Devel::REPL and Carp::REPL. I've made shells for various simple things using my Polyglot module, although I haven't looked at that in awhile.
May I ask why it's "necessary to re-run the entire script"? Since your desired solution is an interactive shell, I'm assuming it's NOT because you need previous commands to set something up. If that's a correct assumption, simply structure your set of Selenium commands in the Perl script so that you can skip the first N commands.
You can do it with explicit "if" wrappers.
Or you can have a generic "call a Selenium command based on a config hash" driver executed in a loop, adding the config hash for every single Selenium command to an array of hashes.
Then you can either have an input parameter to the test script that refers to an index in that array, or even include a unique label for each test as part of its config hash and pass "start with test named X" or "only do test named X" on the command line.