How to interact with a CLI program using PowerShell? - powershell

Imagine I have a program written in whatever language and compiled to run interactively using just a command-line interface. Let's imagine this one just for the sake of simplifying the question:
The program first asks the user for their name.
Then, based on some business logic, it may ask for the user's age OR the user's email. Only one of those.
After that it finishes with success or an error.
Now imagine that I want to write a script in PowerShell that fills in all that data automatically.
How can I achieve this? How can I run this program, read its questions (output), and then provide the correct answers (input)?

If you don't know the questions it will ask ahead of time, this would be tough.
PowerShell scripts are normally linear. Once you start the program from PowerShell, the script waits for the program to finish before continuing. There are ways to do things in parallel, but that doesn't give you the kind of back-and-forth interaction you describe.
However, if you're dealing with something like a web service, making the first call gives you a response (the command completes), and you could match on that response to select the proper value for the next call.
Or, if the program is local and accepts command-line parameters, you could pass the answers that way instead.
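That said, if the program reads its prompts and answers over standard input/output (rather than talking to the console directly), one workaround is to start it through the .NET Process class with redirected streams and script the dialogue yourself. Below is a minimal sketch, assuming a hypothetical MyProgram.exe that writes each question on its own line and placeholder answers; if the program prints prompts without a trailing newline, ReadLine() will block and you would have to read character by character instead.

$psi = New-Object System.Diagnostics.ProcessStartInfo
$psi.FileName               = 'C:\path\to\MyProgram.exe'   # hypothetical path
$psi.UseShellExecute        = $false
$psi.RedirectStandardInput  = $true
$psi.RedirectStandardOutput = $true

$proc = [System.Diagnostics.Process]::Start($psi)
$proc.StandardInput.AutoFlush = $true   # make sure each answer is sent immediately

# First question: the user's name (prompt discarded, placeholder answer sent)
$null = $proc.StandardOutput.ReadLine()
$proc.StandardInput.WriteLine('Alice')

# The second question depends on the program's logic, so branch on the prompt text
$prompt = $proc.StandardOutput.ReadLine()
if ($prompt -match 'age') {
    $proc.StandardInput.WriteLine('42')
}
elseif ($prompt -match 'email') {
    $proc.StandardInput.WriteLine('alice@example.com')
}

$proc.StandardInput.Close()   # signal end of input
$proc.WaitForExit()
Write-Host "Exit code: $($proc.ExitCode)"

Note that Start-Process's -RedirectStandardInput and -RedirectStandardOutput parameters redirect to files rather than back to your script, which is why this sketch uses the .NET class directly.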

Related

How to pass command line arguments to putty without making them visible in powershell / cmd?

When a process runs on Windows, any command-line arguments you pass to it are visible (for example, in the process list) while the process is running, which makes passing plain-text sensitive data this way a really bad idea.
How does one prevent this from happening, short of implementing a public/private key infrastructure?
Do you just run everything via plink, since it exits right away after running the command (of course, you have to make sure that it actually does exit)?
See the related questions below for what I'm talking about:
https://stackoverflow.com/questions/7494073/commandline-arguments-of-running-process-in-dos
https://stackoverflow.com/questions/53808314/why-would-one-do-this-when-storing-a-secure-string/53808379?noredirect=1#comment94523315_53808379
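As a quick illustration of the exposure described above, any user or tool that can query the process list can read those arguments for as long as the process is alive. A sketch in PowerShell, using 'plink.exe' purely as an example process name:

Get-CimInstance Win32_Process -Filter "Name = 'plink.exe'" |
    Select-Object ProcessId, CommandLine

Anything passed on the command line, passwords included, shows up verbatim in the CommandLine column while the process runs.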

Run exe in Matlab in for loop most efficiently

I wonder what is the most efficient way to run a program, given as an executable, from Matlab many times in a for loop. At the moment I use the following code:
for i = 1:100
    system('MyProgram.exe');
    % Do something with the output from the .exe
end
So, from the profiler I know that 99.9% of the time is spent in the execution of the program itself. My question is basically whether there is a more efficient way to run executables in general from within Matlab.
I have read that every time I run an exe as described above, a process is created which has to initialize the Matlab runtime environment... Is there possibly a way to avoid this by doing the initialization only once and from there on running the program multiple times?
I am guessing you can't directly modify the .exe files you are given, so perhaps, instead of calling the .exe directly, you could call a bash shell script.
I would imagine that within that shell script you could check whether a workspace is already open and associate the execution of the .exe with a specific process ID, although I would guess that when the executable finishes it closes the session anyway.
Just throwing some stuff out there :P I have had lots of trouble with how Matlab handles this kind of thing (also things like Excel).
Hope you figure this out.
EDIT: I found some possible examples here: Example Descriptions
-Kyle

How to use 'system' command in MATLAB?

I have checked the documentation on MathWorks about the command system.
I still do not fully grasp the idea of this command. It seems that this command is designed for calling external programs, such as Excel, Word, R, etc.
Are there any other purposes for using this command, in case I have not grasped its essential idea yet?
system is used for executing OS commands.
To call Excel, Word, etc. you may be better off using, for example, actxserver().
In general you seem to have grasped the command in its entirety: it provides the facility to call external commands of all sorts, including operating system commands and other applications on the same (or indeed, a different) computer. I suggest that you learn more about it by using it and waste no more time reading answers like this one on SO.
When you have more specific and more detailed questions, ask them.
EDIT in response to comment
Yes, you certainly can run an R program using the system command. For example, if you have a program called myRprogram.exe and your path is set properly, the Matlab command
system('myRprogram.exe')
should run your R program.
If what you mean is 'can I run an R program which I write in Matlab and send to the R run-time system at run-time' then the answer is (probably, I'm not an R expert) yes too. You should be able to write something like:
system('R set.seed(1); num=50; w = rnorm(num+1,0,1)')
So, if you can type and execute an R program from the command line, you can build and execute it inside a Matlab program.
NOTE: I am not an R programmer, and I make no claim that the string inside the call to system is a valid way to run R at the command line. If anyone reading this knows better, please feel free to edit or to write a better answer.

how to perl for bi-directional communication with dsmadmc.exe?

I have a simple web form with a little JS script that sends the form values to a text box. This combined value becomes a database query.
This will be sent to dsmadmc (the TSM administrative command line).
How can I use perl to keep the dsmadmc process open for consecutive input/output without the dsmadmc process closing between each input command sent?
And how can I capture the output - this is to be sent back to the same web page, in a separate div.
Any thoughts, anyone?
Probably IPC::Open2 could help. It lets you write to the input and read from the output of an external process.
Beware of deadlocks though (i.e. situations where both your code and the app wait for their counterpart). You might want to use IO::Select to handle that.
P.S. I don't know how these modules behave on Windows (.exe?..), but from a quick Google search it looks like they are compatible.

How can I control an interactive Unix application programmatically through Perl?

I have inherited a 20-year-old interactive command-line unix application that is no longer supported by its vendor. We need to automate some tasks in this application.
The most troublesome of these is creating thousands of new records with slightly different parameters (e.g. different identifiers, different names). The records have to be created in sequence, one at a time, which would take many months (and therefore dollars) to do manually. In most cases, creating a record has a very predictable pattern of keying in commands, reading responses, keying in further commands, etc. However, some record creation operations will result in error conditions ('record with this identifier already exists') that require a different set of commands to exit gracefully.
I can see a few different ways to do this:
Named pipes. Write a Perl script that runs the target application with STDIN and STDOUT set to named pipes then sends the target application the sequence of commands to create a record with the required parameters, and then instructs the target application to exit and shut down. We then run the script as many times as required with different parameters.
Application. Find another Unix tool that can be used to script interactive programs. The only ones I have been able to find, though, are expect, which does not seem to be maintained; and chat, which I recall from ages ago and which seems to do more or less what I want, but appears to be only for controlling modems.
One more potential complication: I think the target application was written for a VT100 terminal and it uses some sort of escape sequences to do things like provide highlighting.
My question is what approach should I take? One of these, or something completely different? I quite like the idea of using named pipes and then having a Perl script that opens the FIFOs and reads and writes as required, as it provides a lot of flexibility, but from what I have read it seems like there's a lot of potential problems if I go down this path.
Thanks in advance.
I'd definitely stick with Perl for the extra flexibility, as chaos suggested. Are you aware of the Expect Perl module? It's a lot nicer than the named pipe approach.
Note also with named pipes, you can't force the output coming back from your legacy application to be unbuffered, which could be annoying. I think Expect.pm uses pseudo-ttys to get around this problem, but I'm not sure. See the discussion in perlipc in the section "Bidirectional Communication with Another Process" for more details.
expect is a lot more solid than you're probably giving it credit for, but if I were you I'd still go with the Perl option, wanting to have a full and familiar programming language for managing the process and having confidence that whatever weird issues arise, there will be ways of addressing them.
Expect, either with the Tcl or Perl implementations, would be my first attempt. If you are seeing odd sequences in the output because it's doing odd terminal things, just filter those from the output before you do your matching.
With named pipes, you're going to end up reinventing Expect anyway.