Returning data from a .NET EXE to PowerShell

I have a PowerShell script that calls an executable to do some data crunching, and the script needs to retrieve the results produced by the executable. I'm wondering what options I have for this inter-process communication:
Can I have the executable directly return a string array or an object? (I don't think this is possible.)
A volatile variable that the exe sets and the PowerShell script reads from?
Spawn a temporary .NET remoting server within the executable and have the PowerShell script query that server for the results?

You could just spit out the results from the EXE to stdout in XML or CSV format and have PowerShell slurp it up with either a cast to [xml] or ConvertFrom-Csv.
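A minimal sketch of that approach, assuming a hypothetical MyExe.exe that writes its results to stdout:
$csvRecords = ./MyExe.exe | ConvertFrom-Csv    # if the exe emits CSV with a header row
[xml]$xmlDoc = (./MyExe.exe | Out-String)      # if the exe emits an XML document instead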

That the executable is written in .NET makes no difference: it will be a separate process and therefore only the mechanisms for passing data from one process to another (without specific support in both) are available:
The return value (exit code) from the exe: an integer.
Standard output from the exe: a string (usually divided into separate lines by splitting on newlines and thus treated as an array).
(Theoretically standard error could also be used, but that would be abusing it for no additional functionality.)
The standard output approach is easiest: in the exe use Console.WriteLine (which is a shortcut to Console.Out.WriteLine) and then parse the strings in PowerShell:
MyExe | ForEach-Object {
    # Do something with $_, which will be one line of output as a string
}
Obviously any data format that can be encoded into strings can be used. Also the calling script could accumulate the whole output into a single value and process it all at once.
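For instance, the caller could capture the whole output at once and still read the exit code; MyExe stands in for the real program here:
$lines = @(MyExe)                               # array of output lines
$exit  = $LASTEXITCODE                          # the exe's integer return value
$text  = $lines -join [Environment]::NewLine    # or work with one big string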

Related

Is it possible to retrieve an #argument in a PowerShell shell from within a program?

I am writing a program prog.exe that retrieves all arguments that are passed to it (in the form of a "sentence", not standalone arguments).
I just realized that in some cases only part of the line is retrieved, and this is when there are #parameters:
PS > ./prog.exe this is a #nice sentence
Only this, is, and a are retrieved. When I do not use # I get all of them. I presume this is because everything after the # is interpreted by PowerShell as a comment.
Is there a way to retrieve everything that is on the command line?
If this makes a difference, I code in Go and get the arguments via os.Args[1:].
You can prevent PowerShell from interpreting # as a comment token by explicitly quoting the input arguments:
./prog.exe one two three '#four' five
A better way exists, though, especially if you don't control the input: split the arguments into individual strings, then use the splatting operator @ on the array containing them:
$paramArgs = -split 'one two three #four five'
./prog.exe @paramArgs
Finally, using the --% stop-parsing token in a command context will cause the subsequent arguments on the same line to be passed as-is, with no parsing of PowerShell syntax:
./prog.exe --% one two three #four five
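To check what actually reaches a program, a hypothetical stand-in script (echo-args.ps1, a name invented here) can print each argument it receives:
# echo-args.ps1 - prints every argument passed to the script
$n = 0
foreach ($a in $args) {
    $n++
    "arg $n = $a"
}
Running ./echo-args.ps1 one two three '#four' five should then list all five arguments, including #four.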

Is there any limit to the length of text content that a PowerShell variable can hold?

I am storing the content of a text file in a variable like this -
$fileContent=$(Get-Content file1.txt)
Right now file1.txt contains 200 lines only. But if one day the file contains 10 million lines, then will this approach work? Is there any limit to the length of content that a variable can hold in PowerShell?
Get-Content reads the entire file into memory before the assignment completes.
With that said, you'd want to change your approach. PowerShell, being built on top of the .NET Framework, has access to all of its capabilities, so you can use classes such as StreamReader to read the file from disk one line at a time, as in the example below.
$file = [System.IO.StreamReader]::new('.\Desktop\adobe_export.reg') # instantiate an instance of StreamReader
while (-not $file.EndOfStream) # keep going until the end of the file
{
    # save this to a variable if needed
    $file.ReadLine() # read/display the current line
    # more code
}
$file.Close()
$file.Dispose()
First of all, you need to understand that a PowerShell variable is a wrapper around a .NET type, so whatever that type can hold is the answer.
Regarding your actual case: you can search the Microsoft docs for whatever GetType() returns to see whether there is a limit for that type, but there is always a memory limit. So if you read a lot of data into memory, and then return some of it after filtering/transforming/completing it, you are filling memory.
Instead, you may choose NOT to assign anything to a variable and use the pipeline's one-at-a-time processing; that way only the items currently in the pipeline occupy memory. Of course, you might need to do more than one complex thing with the same input, each needing its own pipeline; in that case you can either re-read the data or, if it can change between reads and you need a snapshot, copy it to a temporary place first.
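As a minimal sketch of that pipeline approach (file names assumed for illustration), this filters a large file without ever holding all of it in a variable:
# Each line streams through the pipeline one at a time and can be written out immediately
Get-Content file1.txt | Where-Object { $_ -match 'ERROR' } | Set-Content filtered.txt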

How to pass integers and strings from MATLAB to a PowerShell script?

I need to automate a test. The test itself is written (by me) in MATLAB and has 5 stages; each stage ends by setting a value in an integer (uint16_t and uint8_t) and emitting a message. I have to pass these 5 integers and 5 strings to a PowerShell script, because Jenkins can only run a PowerShell or Python script, but I'm not entirely sure how to achieve that. I have never used PowerShell or done any scripting, and there isn't much on the Internet about how to even run a MATLAB script from PowerShell. (Maybe I should look into batch files that run MATLAB scripts.)
The only option I've found so far is writing to a (temporary) file with MATLAB, then reading from it (and deleting it). It could be a .txt file, or preferably a .csv file (although using csvwrite is not recommended by MathWorks), but this isn't very reliable. Can anyone suggest a more direct way to pass the values? The MATLAB file is not a function, but it can be turned into one that returns these variables as outputs. Also, it's fine if the integers are cast to another integer type.
Like @TessellatingHeckler said, the way to do it is $results = matlab.exe yourscript.
Here is an example if you want more control when launching the tests, such as suppressing display windows, running in batch mode, or waiting for the MATLAB execution to finish:
runTestMatlab () {
    result=$(matlab.exe -wait -nosplash -noFigureWindows -batch TestScript.m)
    if [ $? -ne 0 ]; then
        # Error with the Matlab run
        echo "$result"
        return 1
    fi
    echo "$result"
    return 0
}
Then you can parse the result with awk or any other tool that you want.
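Since the question is about PowerShell, a rough equivalent of the same function might look like this (assuming matlab.exe is on the PATH and that these flags apply to your MATLAB version):
$results = & matlab.exe -wait -nosplash -noFigureWindows -batch "TestScript"
if ($LASTEXITCODE -ne 0) {
    Write-Error "MATLAB run failed: $results"   # non-zero exit code from matlab.exe
}
$results    # each line of MATLAB's stdout is one element of this array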

Manage inputs from an external command in a PowerShell script

First, I would like to apologize in case the title is not descriptive enough; I'm having a hard time describing this problem. I'm trying to automate an svn merge using a PowerShell script that will be executed by another process. The function that I'm using looks like this:
function Invoke-SvnMerge($target) {
    svn merge $target
}
Now, my problem occurs when there are conflicts in the merge. The default behavior of the command is to request input from the user and proceed accordingly. I would like to automate this process using predefined values (show the differences and then postpone the merge), but I haven't found a way to do it. In summary, the workflow I'm looking to accomplish is the following:
Detect whether the command execution requires any input to proceed
Provide default inputs (in my particular case "df" and then "p")
Is there any way to do this in PowerShell? Thank you so much in advance for any help/clue you can provide.
Edit:
To clarify my question: I would like to automatically provide a value when a command executed within a PowerShell script requires it, as in the following example:
[Screenshot: the command prompting for user input]
Edit 2:
Here is a test using the snippet provided by @mklement0. Unfortunately, it didn't work as expected, but I thought it was worth adding this edit to clarify the question in full.
[Screenshot: expected behavior]
[Screenshot: actual result]
Note:
This answer does not solve the OP's problem, because the specific target utility, svn, apparently suppresses prompts when the process' stdin input isn't coming from a terminal (console).
For utilities that do still prompt, however, the solution below should work, within the constraints stated.
Generally, before attempting to simulate user input, it's worth investigating whether the target utility offers programmatic control over the behavior, via its command-line options, which is both simpler and more robust.
While it would be far from trivial to detect whether a given external command is prompting for user input, you can blindly send the presumptive responses. This assumes that no situational variation is needed (and if a particular call happens not to prompt at all, the input is simply ignored).
Let's assume the following batch file, foo.cmd, which puts up 2 prompts and echoes the input:
@echo off
echo begin
set /p "input1=prompt 1: "
echo [%input1%]
set /p "input2=prompt 2: "
echo [%input2%]
echo end
Now let's send responses one and two to that batch file:
PS C:\> Set-Content tmp.txt -Value 'one', 'two'; ./foo.cmd '<' tmp.txt; Remove-Item tmp.txt
begin
prompt 1: one
[one]
prompt 2: two
[two]
end
Note:
For reasons unknown to me, the use of an intermediate file is necessary for this approach to work on Windows; 'one', 'two' | ./foo.cmd does not work.
Note how the < must be represented as '<' to ensure that it is passed through to cmd.exe and not interpreted by PowerShell up front (where < isn't supported).
By contrast, 'one', 'two' | ./foo does work on Unix platforms (PowerShell Core).
You can store the svn command-line output in a variable, parse through it, and branch as you desire. Each line of output becomes a separate element (CLI output stored in PowerShell variables is an array):
$var = & svn merge $target
$var
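For example, the captured lines can be scanned for conflict markers before deciding what to do next; the pattern below assumes svn flags conflicted paths with a leading C, so adjust it to the actual output of your svn version:
$var = & svn merge $target
if ($var -match '^C\s') {
    # at least one output line was flagged as conflicted (assumed marker)
    Write-Warning "Merge of $target produced conflicts; postponing for review."
}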

Perl memory usage when processing a file in-place

I have a CGI script that's used by our employees to fetch logs from servers that they don't have direct access to. For reasons I won't go into, after a recent update to our app some of these logs now have characters like linefeeds, tabs, backslashes, etc. translated into their text equivalents. As such, I've modified the CGI script to invoke the following to convert these back to their original values:
perl -i -pe 's/\\r/\r/g && s/\\n/\n/g && s/\\t/\t/g && s/\\\//\//g' $filename
I was just informed that some people are now getting out of memory errors when they try to fetch logs that are fairly large (a few hundred MB).
My question: how does Perl manage memory when an in-place one-liner like this is invoked? Does it read the whole file in, process it, then write it out? Or does it create a temporary file, process the input file one line at a time, and replace the original file once complete?
This is perl 5.10.1 on a 64-bit Amazon Linux instance.
The -p switch creates a while(<>){...; print} loop to iterate on each “line” in your input file.
If all of your newlines have been converted into "\\n", then your file would just be a single very long line. Therefore, your command would be loading the entire file into memory to perform your fix.
To avoid that, you'll have to intentionally buffer the file using either sysread or $/.
It would probably be easiest to create an actual script instead of a one-liner to do the work. However, if you know that all of your newlines have been converted, then one simple fix is to set $/ = "\\n", which makes the literal two-character sequence \n the input record separator, so Perl once again reads the file in manageable chunks.
As a secondary note, your regex chain is flawed. You're currently joining your s/// substitutions with the short-circuiting && operator, so if one of the earlier substitutions fails to match on a given line, none of the later ones are even attempted. You should instead use simple semicolons to separate them:
's/\\r/\r/g; s/\\n/\n/g; s/\\t/\t/g; s|\\/|/|g'