Standard output in powershell debug console - powershell

I have a long script in powershell which calls an even longer function located in a separate .ps1 file. The function runs some svn update commands and some compiled executables which produce standard output. When I run these directly from the script the output gets redirected to the debug console in Powershell ISE. When I run them through the function I can tell they are running but I get no standard output in the console.
How do I redirect standard output from my function back to the powershell debug console where I can see it?
Thanks.
EDIT
I am importing the function as follows:
. "D:\common.ps1"
and calling it as follows:
$MedianValue = Run-Comparison $LastRev $FirstRev $ScriptPath $SolutionPath $DevenvPath $TestPath $refFile $SVNPAth
Within the function, one of the calls is as follows
svn update $FirstRev
Start-Process ExecutableName Argument
It is for the above two statements that I cannot see the standard output when I call their containing function.

If you're capturing a script's / function's output and that script / function contains a mix of PowerShell-native output statements and external-program calls producing stdout output, both types of output are sent to PowerShell's regular success output stream.
Therefore, unless you redirect at the source, you cannot selectively pass stdout from external programs through to the host (e.g., a regular console window or the console pane in the ISE), because you won't be able to tell which output objects (lines) come from where.
To redirect at the source - if you have control over the callee's source code - you have several options, the simplest being Write-Host, as the following example demonstrates:
function Run-Comparison {
'PS success output'
cmd /c 'echo external stdout output' | Write-Host
}
# Captures 'PS success output', but passes the cmd.exe output through to the console.
$MedianValue = Run-Comparison
The above selectively sends the cmd.exe command's output to the host.
In PSv5+, where Write-Host writes to the newly introduced information stream (number 6), you can optionally suppress the to-host output with 6>$null on invocation.
To reverse the logic, use Write-Information instead of Write-Host (PSv5+ only), which is silent by default and allows you to turn on output with $InformationPreference = 'Continue'.
If you want silent-by-default behavior in PSv4-, use Write-Verbose or Write-Debug, but note that such output will be a different color, with each line having a prefix (VERBOSE: and DEBUG:, respectively).
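For instance, here is a minimal sketch of the Write-Information variant (PSv5+); it is a hypothetical adaptation of the example above, passing each external output line explicitly to Write-Information via ForEach-Object:
function Run-Comparison {
'PS success output' # captured by the caller
cmd /c 'echo external stdout output' | ForEach-Object { Write-Information $_ } # routed to the information stream (6)
}
$InformationPreference = 'Continue' # turn on display of information-stream output
$MedianValue = Run-Comparison # captures only 'PS success output'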

Related

How to best indicate failure of a PowerShell module to the calling shell/process?

If I have a PowerShell module that acts as a wrapper over an executable program, and I would like to communicate failure to the parent process so that it can be checked programmatically, how would I best do this?
Possible ways that the module could be invoked from (non-exhaustive):
From within the PowerShell shell (powershell or pwsh),
From the command prompt (e.g., as powershell -Command <module fn>),
From an external program creating a PowerShell process (e.g., by calling powershell -Command <module fn>).
If I throw an exception from the module when the executable fails, say, with
if ($LastExitCode -gt 0) { throw $LastExitCode; }
it appears to cover all of the requirements. If an exception is thrown and the module was called
from within the PowerShell shell, the $? variable is set to False.
from the command prompt, the %errorlevel% variable is set to 1.
from an external process, the exit code is set to 1.
Thus the parent process can check for failure depending on how the module was called.
A small drawback of this approach is that the full range of exit codes cannot be communicated to the parent process (it either returns True/False in $? or 0/1 as the exit code); more annoyingly, the exception message displayed in the output is too verbose for some tastes (can it be suppressed?):
+ if ($LastExitCode -gt 0) { throw $LastExitCode; }
+ ~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OperationStopped: (9:Int32) [], RuntimeException
+ FullyQualifiedErrorId : 9
Are there any better ways to communicate failure of an executable invoked from a PowerShell module to the parent process?
Thanks
Your best bet is to provide a wrapper .ps1 script file for your module's function, which you can then call via the PowerShell CLI's -File parameter:
# yourWrapperScript.ps1
# Default to 'yourFunction' as the function to invoke,
# but allow passing a different command name, optionally with
# arguments, too.
# From inside PowerShell, you could even pass a script block.
param(
$funcName = 'yourFunction'
)
# Make all commands executed in this script *abort* on emitting *any
# error*.
# Note that this aborts on the *first* PowerShell error reported.
# (A function could be emitting multiple, non-terminating errors.)
# The assumption is that your function itself checks $LASTEXITCODE
# after calling the wrapped external program and issues a PowerShell error in response.
$ErrorActionPreference = 'Stop'
$global:LASTEXITCODE = 0 # Reset the $LASTEXITCODE value.
try {
# Call the function, passing any additional arguments through.
& $funcName @args
# If no PowerShell error was reported, assume that the
# function succeeded.
exit 0
}
catch {
# Emit the message associated with the PowerShell error that occurred.
# Note:
# * In *PowerShell (Core) 7+*, emitting the error message
# via Write-Error *is* a one-liner, but (a)
# invariably prefixed with the function name and (b)
# printed in *red. If that's acceptable, you can use
# $_ | Write-Error
# * In *Windows PowerShell*, the error is "noisy", and the only
# way to avoid that is to write directly to stderr, as shown
# below.
# Print the error message directly to stderr.
[Console]::Error.WriteLine($_)
if ($LASTEXITCODE) {
# The error is assumed to have been reported in response
# to the external-program call reporting a nonzero exit code.
# Use that exit code.
# Note: if that assumption is too broad, you'll need to examine
# the details of the [System.Management.Automation.ErrorRecord] instance reflected in $_.
exit $LASTEXITCODE
} else {
# An error unrelated to the external-program call.
# Report a nonzero exit code of your choice.
exit 1
}
}
Then pass your wrapper script, say yourWrapperScript.ps1, to the CLI's -File parameter:
powershell -File yourWrapperScript.ps1
Unlike the -Command CLI parameter, -File does pass a .ps1 script file's specific exit code through, if set via exit.
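For instance, a hypothetical check from cmd.exe or a batch file; %errorlevel% receives whatever exit code the wrapper script set via exit:
powershell -File yourWrapperScript.ps1
if errorlevel 1 echo The wrapped function failed with exit code %errorlevel%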
The downsides of this approach are:
yourWrapperScript.ps1 must either be in the current directory, in a directory listed in the $env:PATH environment variable, or you must refer to it by its full path.
If you bundle the script with your module (simply by placing it inside the module directory), you can only anticipate its full path if you know that it is in one of the standard module-installation directories (as listed in $env:PSModulePath, though on Unix that environment variable only exists inside a PowerShell session).
An alternative would be to distribute your script as a separate, installable script that can be installed with Install-Script.
Unless you're prepared to pass the function name as an argument (e.g., powershell -File yourWrapperScript.ps1 yourFunction), you'll need a separate .ps1 wrapper for each of your functions.
The - cumbersome - alternative is to use -Command and pass the code above as a one-liner, wrapped in "..." (from outside PowerShell):
# Code truncated for brevity.
powershell -Command "param(..."
For a comprehensive overview of PowerShell's CLI, in both editions, see this post.
If, in order to make do with just a function call via -Command, you're willing to live with:
the loss of the specific exit code reported by your external program (any nonzero code is mapped to 1)
a "noisy", multi-line Windows PowerShell error message (less problematic in PowerShell (Core) 7+, where the message prints as a single line, albeit invariably in red, and prefixed with the function name)
you have two options:
Stick with your original approach and use throw in your function in response to $LASTEXITCODE being nonzero after the external-program call. This causes a script-terminating (fatal) error.
This means that the PowerShell CLI process is instantly aborted, with exit code 1. Similarly, if your function is also called from PowerShell scripts, the entire script (and its callers) are instantly aborted - which may or may not be desired. See the next point if you'd rather avoid such errors.
Also note that cmdlets implemented via binary modules (as opposed to cmdlet-like advanced functions implemented in PowerShell code) do not and, in fact, cannot emit such script-terminating errors, only statement-terminating errors.
Make your function set $? to $false - without aborting execution overall - which the -Command CLI parameter also translates to exit code 1, assuming your function call is the only or last statement.
This can only be done implicitly, by emitting either one or more non-terminating errors or a statement-terminating error from your function.
In order for these to set $? properly from a function written in PowerShell, your function (a) must be an advanced function and (b) must use either $PSCmdlet.WriteError() (non-terminating error) or $PSCmdlet.ThrowTerminatingError() (statement-terminating error); notably, using Write-Error does not work.
Calling these methods is nontrivial, unfortunately; zett42 showed the technique in an (unfortunately) since-deleted answer; you can also find an example in this comment from GitHub issue # (the issue also contains additional background information about PowerShell's error types).
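A minimal sketch of that technique, using a hypothetical advanced function wrapping a hypothetical external program (the error message and error ID are arbitrary):
function Invoke-Tool {
[CmdletBinding()] # makes this an *advanced* function, so $PSCmdlet is available
param()
someTool.exe # hypothetical external-program call
if ($LASTEXITCODE -ne 0) {
# Construct an ErrorRecord and emit it via $PSCmdlet.WriteError(), which -
# unlike Write-Error - reliably sets $? to $false for the caller.
$errorRecord = [System.Management.Automation.ErrorRecord]::new(
[System.Exception]::new("someTool.exe failed with exit code $LASTEXITCODE"),
'NativeCommandFailed', # arbitrary error ID
[System.Management.Automation.ErrorCategory]::OperationStopped,
$null # target object (none)
)
$PSCmdlet.WriteError($errorRecord)
}
}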
For an overview of PowerShell's bewilderingly complex error handling, see GitHub docs issue #1583.
The PowerShell exit keyword has an optional parameter. If that parameter is an integer, it's used as the process exit code. This way you can propagate the error code of the wrapped executable.
An example of Python capturing PowerShell's exit code:
:~> py
Python 3.7.9 [...]
>>> from subprocess import run
>>> res = run('powershell -command "exit 123"')
>>> res.returncode
123

PowerShell redirect output to Write-Host and return exit code

I need to do the following in a PowerShell script:
call this command: npm run build
return the exit code of the npm run build command from my script to the parent (caller) script
print text output of the command npm run build but avoid returning the text from my script
So far I tried:
$null = @(npm run build)
return $?
this hides the output completely
Write-Host @(npm run build)
return $?
this prints the output without new lines. Additionally, this prints the output all at once, after npm run build, and not gradually
I have PowerShell Version 7
Pipe to Write-Host or, because it works in a wider range of scenarios, to Out-Host, in order to ensure streaming processing (reporting output as it is being received).
With external programs, whose output PowerShell only ever interprets as text, Write-Host and Out-Host behave identically with piped input; however, with output from PowerShell-native commands only Out-Host applies the usual, rich output formatting that you would see in the console by default, whereas Write-Host performs simple - and often unhelpful - .ToString() formatting.
Use the automatic $LASTEXITCODE variable to obtain the exit code of the most recently executed external program.
By contrast, the automatic $? variable is an abstract success indicator, returning either $true or $false, which in the case of external-program calls maps exit code 0 to $true and any nonzero exit code to $false. However, in Windows PowerShell spurious $false values can occur if a 2> redirection is involved - see this answer.
npm run build | Out-Host
return $LASTEXITCODE
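Putting it together as a hypothetical wrapper script and caller (file names are illustrative):
# build.ps1
npm run build | Out-Host # stream the build output to the display only
return $LASTEXITCODE # the exit code becomes the script's sole output
# caller.ps1
$exitCode = ./build.ps1
if ($exitCode -ne 0) { Write-Error "npm run build failed with exit code $exitCode" }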

Why does PowerShell interpret kind/kubectl STDOUT as STDERR and How to Prevent it?

We are moving our DevOps pipelines to a new cluster and while at it, we bumped into a weird behavior when calling kind with PowerShell. This applies to kubectl also.
The below should be taken only as a repro, not a real world application. In other words, I'm not looking to fix the below code but I am searching for an explanation why the error happens:
curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.10.0/kind-windows-amd64
Move-Item .\kind-windows-amd64.exe c:\temp\kind.exe -Force
$job = Start-Job -ScriptBlock { iex "$args" } -ArgumentList c:\temp\kind.exe, get, clusters
$job | Receive-Job -Wait -AutoRemoveJob
Now, if I directly execute the c:\temp\kind.exe get clusters command in the PowerShell window, the error won't happen:
In other words, why does PowerShell (any version) consider the STDOUT of kind/kubectl as STDERR? And how can I prevent this from happening?
There must be an environmental factor to it as the same exact code runs fine in one system while on another it throws an error...
tl;dr
kind outputs its status messages to stderr, which in the context of PowerShell jobs surface via PowerShell's error output stream, which makes them print in red (and susceptible to $ErrorActionPreference = 'Stop' and -ErrorAction Stop).
Either:
Silence stderr: Use 2>$null as a general mechanism or, as David Kruk suggests, use a program-specific option to achieve the same effect, which in the case of kind is -q (--quiet)
Re-route stderr output through PowerShell's success output stream, merged with stdout output, using *>&1.
Caveat: The original output sequencing between stdout and stderr lines is not necessarily maintained on output.
Also, if you want to know whether the external program reported failure or success, you need to include the value of the automatic $LASTEXITCODE variable, which contains the most recently executed external program's process exit code, in the job's output (the exit code is the only reliable success/failure indicator - not the presence or absence of stderr output).
A simplified example with *>&1 (for Windows; on Unix-like platforms, replace cmd and /c with sh and -c, and separate the echo commands with ; instead of &):
$job = Start-Job -ScriptBlock {
param($exe)
& $exe $args *>&1
$LASTEXITCODE # Also output the process exit code.
} -ArgumentList cmd, /c, 'echo data1 & echo status >&2 & echo data2'
$job | Receive-Job -Wait -AutoRemoveJob
As many utilities do, kind apparently reports status messages via stderr.
Given that stdout is for data, it makes sense to use the only other available output stream, stderr, for anything that isn't data, so as to prevent pollution of the data output. The upshot is that stderr output doesn't necessarily indicate actual errors (success vs. failure should solely be inferred from an external program's process exit code).
PowerShell (for its own commands only) commendably has a more diverse system of output streams, documented in the conceptual about_Redirection help topic, allowing you to report status messages via Write-Verbose, for instance.
PowerShell maps an external program's output streams to its own streams as follows:
Stdout output:
Stdout output is mapped to PowerShell's success output stream (the stream with number 1, analogous to how stdout can be referred to in cmd.exe and POSIX-compatible shells), allowing it to be captured in a variable ($output = ...) or redirected to a file (> output.txt) or sent through the pipeline to another command.
Stderr output:
In local, foreground processing in a console (terminal), stderr is by default not mapped at all, and is passed through to the display (not colored in red) - unless a 2> redirection is used, which allows you to suppress stderr output (2>$null) or to send it to a file (2>errs.txt)
This is appropriate, because PowerShell cannot and should not assume that stderr output represents actual errors, whereas PowerShell's error stream is meant to be used for errors exclusively.
Unfortunately, as of PowerShell 7.2, in the context of PowerShell jobs (created with Start-Job or Start-ThreadJob) and remoting (e.g., in Invoke-Command -ComputerName ... calls), stderr output is mapped to PowerShell's error stream (the stream with number 2, analogous to how stderr can be referred to in cmd.exe and POSIX-compatible shells).
Caveat: This means that if $ErrorActionPreference = 'Stop' is in effect or -ErrorAction Stop is passed to Receive-Job or Invoke-Command, for instance, any stderr output from external programs will trigger a script-terminating error - even with stderr output comprising status messages only. Due to a bug in PowerShell 7.1 and below this can also happen in local, foreground invocation if a 2> redirection is used.
The upshot:
To silence stderr output, apply 2>$null - either at the source (inside the job or remote command), or on the receiving end.
To route stderr output (all streams) via the success output stream / stdout, i.e. to merge all streams, use *>&1
To prevent the stderr lines from printing in red (when originating from jobs or remote commands), apply this redirection at the source - which also guards against side effects from $ErrorActionPreference = 'Stop' / -ErrorAction Stop on the caller side.
Note: If you merge all streams with *>&1, the order in which stdout and stderr lines are output is not guaranteed to reflect the original output order, as of PowerShell 7.2.
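For example, applying the first option - 2>$null at the source - to a hypothetical variant of the question's job, while still surfacing the exit code:
$job = Start-Job -ScriptBlock {
& c:\temp\kind.exe get clusters 2>$null # suppress kind's stderr status messages at the source
$LASTEXITCODE # report success/failure via the exit code instead
}
$job | Receive-Job -Wait -AutoRemoveJob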
If needed, PowerShell still allows you to later separate the output lines based on whether they originated from stdout or stderr - see this answer.
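For a local, foreground call, the separation can be done by object type, as in this sketch:
# Merge the streams, then split the captured lines again by origin.
$merged = cmd /c 'echo data & echo status >&2' 2>&1
$fromStdout = $merged | Where-Object { $_ -isnot [System.Management.Automation.ErrorRecord] }
$fromStderr = $merged | Where-Object { $_ -is [System.Management.Automation.ErrorRecord] }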

Invoke a Perl script from Powershell and stores the script output in a variable [duplicate]

I'd like to run an external process and capture its command output to a variable in PowerShell. I'm currently using this:
$params = "/verify $pc /domain:hosp.uhhg.org"
start-process "netdom.exe" $params -WindowStyle Hidden -Wait
I've confirmed the command is executing but I need to capture the output into a variable. This means I can't use the -RedirectOutput because this only redirects to a file.
Note: The command in the question uses Start-Process, which prevents direct capturing of the target program's output. Generally, do not use Start-Process to execute console applications synchronously - just invoke them directly, as in any shell. Doing so keeps the application's output streams connected to PowerShell's streams, allowing their output to be captured by simple assignment $output = netdom ... (and with 2> for stderr output), as detailed below.
Fundamentally, capturing output from external programs works the same as with PowerShell-native commands (you may want a refresher on how to execute external programs; <command> is a placeholder for any valid command below):
# IMPORTANT:
# <command> is a *placeholder* for any valid command; e.g.:
# $cmdOutput = Get-Date
# $cmdOutput = attrib.exe +R readonly.txt
$cmdOutput = <command> # captures the command's success stream / stdout output
Note that $cmdOutput receives an array of objects if <command> produces more than 1 output object, which in the case of an external program means a string[1] array containing the program's output lines.
If you want to make sure that the result is always an array - even if only one object is output - type-constrain the variable as an array ([object[]]), or enclose the command in @(...), the array-subexpression operator:[2]
[array] $cmdOutput = <command>
$cmdOutput = @(<command>) # alternative
By contrast, if you want $cmdOutput to always receive a single - potentially multi-line - string, use Out-String, though note that a trailing newline is invariably added (GitHub issue #14444 discusses this problematic behavior):
# Note: Adds a trailing newline.
$cmdOutput = <command> | Out-String
With calls to external programs - which by definition only ever return strings in PowerShell[1] - you can avoid that by using the -join operator instead:
# NO trailing newline.
$cmdOutput = (<command>) -join "`n"
Note: For simplicity, the above uses "`n" to create Unix-style LF-only newlines, which PowerShell happily accepts on all platforms; if you need platform-appropriate newlines (CRLF on Windows, LF on Unix), use [Environment]::NewLine instead.
To capture output in a variable and print to the screen:
<command> | Tee-Object -Variable cmdOutput # Note how the var name is NOT $-prefixed
Or, if <command> is a cmdlet or advanced function, you can use the common parameter
-OutVariable / -ov:
<command> -OutVariable cmdOutput # cmdlets and advanced functions only
Note that with -OutVariable, unlike in the other scenarios, $cmdOutput is always a collection, even if only one object is output. Specifically, an instance of the array-like [System.Collections.ArrayList] type is returned.
See this GitHub issue for a discussion of this discrepancy.
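A quick way to observe this (sketch):
Get-Date -OutVariable cmdOutput > $null # discard the regular output, keep the variable
$cmdOutput.GetType().FullName # System.Collections.ArrayList
$cmdOutput.Count # 1 - a collection even though only one object was output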
To capture the output from multiple commands, use either a subexpression ($(...)) or call a script block ({ ... }) with & or .:
$cmdOutput = $(<command>; ...) # subexpression
$cmdOutput = & {<command>; ...} # script block with & - creates child scope for vars.
$cmdOutput = . {<command>; ...} # script block with . - no child scope
Note that the general need to prefix with & (the call operator) an individual command whose name/path is quoted - e.g., $cmdOutput = & 'netdom.exe' ... - is not related to external programs per se (it equally applies to PowerShell scripts), but is a syntax requirement: PowerShell parses a statement that starts with a quoted string in expression mode by default, whereas argument mode is needed to invoke commands (cmdlets, external programs, functions, aliases), which is what & ensures.
The key difference between $(...) and & { ... } / . { ... } is that the former collects all output in memory before returning it as a whole, whereas the latter streams the output, making it suitable for one-by-one pipeline processing.
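A small demo that makes the difference observable (the Start-Sleep delays are purely illustrative):
# $(...): all three numbers arrive downstream at once, after about 3 seconds.
$( 1..3 | ForEach-Object { Start-Sleep 1; $_ } ) | ForEach-Object { "got $_ at $(Get-Date -Format T)" }
# & { ... }: each number is passed on as it is produced, roughly 1 second apart.
& { 1..3 | ForEach-Object { Start-Sleep 1; $_ } } | ForEach-Object { "got $_ at $(Get-Date -Format T)" }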
Redirections also work the same, fundamentally (but see caveats below):
$cmdOutput = <command> 2>&1 # redirect error stream (2) to success stream (1)
However, for external commands the following is more likely to work as expected:
$cmdOutput = cmd /c <command> '2>&1' # Let cmd.exe handle redirection - see below.
Considerations specific to external programs:
External programs, because they operate outside PowerShell's type system, only ever return strings via their success stream (stdout); similarly, PowerShell only ever sends strings to external programs via the pipeline.[1]
Character-encoding issues can therefore come into play:
On sending data via the pipeline to external programs, PowerShell uses the encoding stored in the $OutputEncoding preference variable, which in Windows PowerShell defaults to ASCII(!) and in PowerShell [Core] to UTF-8.
On receiving data from an external program, PowerShell uses the encoding stored in [Console]::OutputEncoding to decode the data, which in both PowerShell editions defaults to the system's active OEM code page.
See this answer for more information; this answer discusses the still-in-beta (as of this writing) Windows 10 feature that allows you to set UTF-8 as both the ANSI and the OEM code page system-wide.
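For instance, if you know that a given program emits UTF-8, you can temporarily adjust the decoding side (hypothetical program name):
$prev = [Console]::OutputEncoding
[Console]::OutputEncoding = [System.Text.Encoding]::UTF8 # decode the program's stdout as UTF-8
try {
$cmdOutput = someUtf8Tool.exe --list # hypothetical external program
}
finally {
[Console]::OutputEncoding = $prev # restore the previous decoding behavior
}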
If the output contains more than 1 line, PowerShell by default splits it into an array of strings. More accurately, the output lines are streamed one by one, and, when captured, stored in an array of type [System.Object[]] whose elements are strings ([System.String]).
If you want the output to be a single, potentially multi-line string, use the -join operator (you can alternatively pipe to Out-String, but that invariably adds a trailing newline):
$cmdOutput = (<command>) -join [Environment]::NewLine
Merging stderr into stdout with 2>&1, so as to also capture it as part of the success stream, comes with caveats:
To do this at the source, let cmd.exe handle the redirection, using the following idioms (works analogously with sh on Unix-like platforms):
$cmdOutput = cmd /c <command> '2>&1' # *array* of strings (typically)
$cmdOutput = (cmd /c <command> '2>&1') -join "`r`n" # single string
cmd /c invokes cmd.exe with command <command> and exits after <command> has finished.
Note the single quotes around 2>&1, which ensures that the redirection is passed to cmd.exe rather than being interpreted by PowerShell.
Note that involving cmd.exe means that its rules for escaping characters and expanding environment variables come into play, by default in addition to PowerShell's own requirements; in PS v3+ you can use special parameter --% (the so-called stop-parsing symbol) to turn off interpretation of the remaining parameters by PowerShell, except for cmd.exe-style environment-variable references such as %PATH%.
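A small sketch: everything after --% is passed verbatim, while the cmd.exe-style %OS% reference is still expanded (to Windows_NT on Windows):
$cmdOutput = cmd /c --% echo %OS% 2>&1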
Note that since you're merging stdout and stderr at the source with this approach, you won't be able to distinguish between stdout-originated and stderr-originated lines in PowerShell; if you do need this distinction, use PowerShell's own 2>&1 redirection - see below.
Use PowerShell's 2>&1 redirection to know which lines came from what stream:
Stderr output is captured as error records ([System.Management.Automation.ErrorRecord]), not strings, so the output array may contain a mix of strings (each string representing a stdout line) and error records (each record representing a stderr line). Note that, as requested by 2>&1, both the strings and the error records are received through PowerShell's success output stream.
Note: The following only applies to Windows PowerShell - these problems have been corrected in PowerShell [Core] v6+, though the filtering technique by object type shown below ($_ -is [System.Management.Automation.ErrorRecord]) can also be useful there.
In the console, the error records print in red, and the 1st one by default produces multi-line display, in the same format that a cmdlet's non-terminating error would display; subsequent error records print in red as well, but only print their error message, on a single line.
When outputting to the console, the strings typically come first in the output array, followed by the error records (at least among a batch of stdout/stderr lines output "at the same time"), but, fortunately, when you capture the output, it is properly interleaved, using the same output order you would get without 2>&1; in other words: when outputting to the console, the displayed output does NOT reflect the order in which stdout and stderr lines were generated by the external command.
If you capture the entire output in a single string with Out-String, PowerShell will add extra lines, because the string representation of an error record contains extra information such as location (At line:...) and category (+ CategoryInfo ...); curiously, this only applies to the first error record.
To work around this problem, apply the .ToString() method to each output object instead of piping to Out-String:
$cmdOutput = <command> 2>&1 | % { $_.ToString() };
in PS v3+ you can simplify to:
$cmdOutput = <command> 2>&1 | % ToString
(As a bonus, if the output isn't captured, this produces properly interleaved output even when printing to the console.)
Alternatively, filter the error records out and send them to PowerShell's error stream with Write-Error (as a bonus, if the output isn't captured, this produces properly interleaved output even when printing to the console):
$cmdOutput = <command> 2>&1 | ForEach-Object {
if ($_ -is [System.Management.Automation.ErrorRecord]) {
Write-Error $_
} else {
$_
}
}
An aside re argument-passing, as of PowerShell 7.2.x:
Passing arguments to external programs is broken with respect to empty-string arguments and arguments that contain embedded " characters.
Additionally, the (nonstandard) quoting needs of executables such as msiexec.exe and batch files aren't accommodated.
For the former problem only, a fix may be coming (though the fix would only be complete on Unix-like platforms), as discussed in this answer, which also details all the current problems and workarounds.
If installing a third-party module is an option, the ie function from the Native module (Install-Module Native) offers a comprehensive solution.
[1] As of PowerShell 7.1, PowerShell knows only strings when communicating with external programs. There is generally no concept of raw byte data in a PowerShell pipeline. If you want raw byte data returned from an external program, you must shell out to cmd.exe /c (Windows) or sh -c (Unix), save to a file there, then read that file in PowerShell. See this answer for more information.
[2] There are subtle differences between the two approaches (which you may combine), though they usually won't matter: If the command has no output, the [array] type-constraint approach results in $null getting stored in the target variable, whereas it is an empty [object[]] array in the case of @(...). Additionally, the [array] type constraint means that future (non-empty) assignments to the same variable are coerced to an array too.
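A quick illustration of footnote [2] (sketch):
[array] $a = & { } # a command with no output: $a ends up containing $null
$b = @( & { } ) # whereas @(...) yields an empty [object[]] array
$null -eq $a # True
$b.Count # 0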
Have you tried:
$OutputVariable = (Shell command) | Out-String
If you want to redirect the error output as well, you have to do:
$cmdOutput = command 2>&1
Or, if the program name has spaces in it:
$cmdOutput = & "command with spaces" 2>&1
Or try this. It will capture output into variable $scriptOutput:
& "netdom.exe" $params | Tee-Object -Variable scriptOutput | Out-Null
$scriptOutput
Another real-life example:
$result = & "$env:cust_tls_store\Tools\WDK\x64\devcon.exe" enable $strHwid 2>&1 | Out-String
Notice that this example includes a path (which begins with an environment variable). Notice that the quotes must surround the path and the EXE file, but not the parameters!
Note: Don't forget the & character in front of the command, but outside of the quotes.
The error output is also collected.
It took me a while to get this combination working, so I thought that I would share it.
I tried the answers, but in my case I did not get the raw output. Instead it was converted to a PowerShell exception.
The raw result I got with:
$rawOutput = (cmd /c <command> 2`>`&1)
I got the following to work:
$Command1="C:\\ProgramData\Amazon\Tools\ebsnvme-id.exe"
$result = & invoke-Expression $Command1 | Out-String
$result gives you the needful
I use the following:
Function GetProgramOutput([string]$exe, [string]$arguments)
{
$process = New-Object -TypeName System.Diagnostics.Process
$process.StartInfo.FileName = $exe
$process.StartInfo.Arguments = $arguments
$process.StartInfo.UseShellExecute = $false
$process.StartInfo.RedirectStandardOutput = $true
$process.StartInfo.RedirectStandardError = $true
$process.Start()
$output = $process.StandardOutput.ReadToEnd()
$err = $process.StandardError.ReadToEnd()
$process.WaitForExit()
$output
$err
}
$exe = "C:\Program Files\7-Zip\7z.exe"
$arguments = "i"
$runResult = (GetProgramOutput $exe $arguments)
$stdout = $runResult[-2]
$stderr = $runResult[-1]
[System.Console]::WriteLine("Standard out: " + $stdout)
[System.Console]::WriteLine("Standard error: " + $stderr)
This thing worked for me:
$scriptOutput = (cmd /s /c $FilePath $ArgumentList)
If all you are trying to do is capture the output from a command, then this will work well.
I use it for changing system time, as [timezoneinfo]::local always produces the same information, even after you have made changes to the system. This is the only way I can validate and log the change in time zone:
$NewTime = (powershell.exe -command [timezoneinfo]::local)
$NewTime | Tee-Object -FilePath $strLFpath\$strLFName -Append
Meaning that I have to open a new PowerShell session to reload the system variables.
What did the trick for me - and works with external commands when the result can be a mix of the standard output and standard error streams - was the following:
$output = (command 2>&1)

Is Out-Host buffering?

I have a function where I call an application with the & operator. The application produces several lines of command-line output, downloads some files, and returns a string:
& "app.exe" | Out-Host
$var = ...
return $var
It seems that the output produced by app.exe only appears on the console after app.exe terminates. The user does not get any real-time information about which file is currently downloading. Is there a way to continuously update the console while app.exe is running?
Many console applications buffer their output stream if they detect that it is redirected; this is standard behavior of the C runtime library. So the buffering is done by app.exe because of the redirection, not by Out-Host.
The solution is not to redirect the output of app.exe, even when the outer command is redirected. For that you need to know the exact conditions under which PowerShell does not redirect a console application's output stream but links it directly to its own output stream, which is the console in an interactive PowerShell.exe session. The conditions are:
The command is the last item in the pipeline.
The command is piped to Out-Default.
So the solution is to wrap the command in a script block and pipe that script block to Out-Default:
& { & "app.exe" } | Out-Default
The other solution is to use the Start-Process cmdlet with the -NoNewWindow and -Wait parameters:
Start-Process "app.exe" -NoNewWindow -Wait