Error handling of command prompt commands in PowerShell

My goal is to check, disable, and remove Scheduled Tasks on numerous Windows servers using PowerShell.
Some of the servers are Windows 2008 R2, so Get-ScheduledTask is out of the question; I have to use schtasks.
Here is what I have thus far:
$servers = (Get-ADComputer -Server DomainController -Filter 'OperatingSystem -like "*Server*"').DNSHostname
$servers |
    ForEach-Object {
        if (Test-Connection -Count 1 -Quiet -ComputerName $_) {
            Write-Output "$($_) exists, checking for Scheduled Task"
            Invoke-Command -ComputerName $_ {
                If ((schtasks /query /TN 'SOMETASK')) {
                    Write-Output "Processing removal of scheduled task`n"
                    schtasks /change /TN 'SOMETASK' /DISABLE
                    schtasks /delete /TN 'SOMETASK' /F
                }
                else {
                    Write-Output "Scheduled Task does not exist`n"
                }
            }
        }
    }
This works fine when SOMETASK exists, but when it doesn't, PowerShell spits out an error, like this:
ERROR: The system cannot find the file specified.
+ CategoryInfo : NotSpecified: (ERROR: The syst...file specified.:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
+ PSComputerName : SERVER1
NotSpecified: (:) [], RemoteException
Scheduled Task does not exist
I can circumvent this behavior by setting $ErrorActionPreference to "SilentlyContinue", but this suppresses other errors I may be interested in. I also tried try/catch, but that still generates the error. I don't think I can add an -ErrorAction argument to an if statement. Can anyone please lend a helping hand?
Thank you,

tl;dr:
Use 2>$null to suppress the stderr output from a call to an external program (such as schtasks.exe).
To work around a bug present up to at least PowerShell [Core] 7.0 (see below), make sure that $ErrorActionPreference is not set to 'Stop'.
# Execute with stderr silenced.
# Rely on the presence of stdout output in the success case only
# to make the conditional true.
if (schtasks /query /TN 'SOMETASK' 2>$null) { # success, task exists
    "Processing removal of scheduled task`n"
    # ...
}
For background information and more general use cases, read on.
Given how the line from the external program's stderr stream manifests as shown in your question,
it sounds like you're running your code in the PowerShell ISE, which I suggest moving away from: The PowerShell ISE is obsolescent and should be avoided going forward (bottom section of the linked answer).
That the ISE surfaces stderr lines via PowerShell's error stream by default is especially problematic - see this GitHub issue.
The regular console doesn't do that, fortunately - it passes stderr lines through to the host (console), and prints them normally (not in red), which is the right thing to do, given that you cannot generally assume that all stderr output represents errors (the stream's name notwithstanding).
With well-behaved external programs, you should only ever derive success vs. failure from their process exit code (as reflected in the automatic $LASTEXITCODE variable[1]), not from the presence of stderr output: exit code 0 indicates success, any nonzero exit code (typically) indicates failure.
As for your specific case:
In the regular console, the value of the $ErrorActionPreference preference variable does not apply to external programs such as schtasks.exe, except in the form of a bug [fixed in PowerShell 7.2+] when you also use a 2> redirection - see GitHub issue #4002. As of PowerShell 7.1.0-preview.6, the corrected behavior is available as the experimental feature PSNotApplyErrorActionToStderr.
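If you're on such a preview version, you can presumably opt in as follows (a sketch; assumes the feature name quoted above, and a session restart is required for experimental features to take effect):
# Opt in to the corrected stderr behavior (PowerShell 7.1.0-preview.6+).
Enable-ExperimentalFeature -Name PSNotApplyErrorActionToStderr
# Restart PowerShell afterwards for the change to take effect.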
Since your schtasks /query /TN 'SOMETASK' command functions as a test, you can do the following:
# Execute with all streams silenced (both stdout and stderr, in this case).
# schtasks.exe will indicate the non-existence of the specified task
# with exit code 1.
schtasks /query /TN 'SOMETASK' *>$null
if ($LASTEXITCODE -eq 0) { # success, task exists
    "Processing removal of scheduled task`n"
    # ...
}
# You can also squeeze it into a single conditional, using
# $(...), the subexpression operator.
if (0 -eq $(schtasks /query /TN 'SOMETASK' *>$null; $LASTEXITCODE)) { # success, task exists
    "Processing removal of scheduled task`n"
    # ...
}
In your specific case, a more concise solution is possible, which relies on your schtasks command (a) producing stdout output in the case of success (if the task exists) and (b) only doing so in the success case:
# Execute with stderr silenced.
# Rely on the presence of stdout output in the success case only
# to make the conditional true.
if (schtasks /query /TN 'SOMETASK' 2>$null) { # success, task exists
    "Processing removal of scheduled task`n"
    # ...
}
If schtasks.exe produces stdout output (which maps to PowerShell's success output stream, 1), PowerShell's implicit to-Boolean conversion will consider the conditional $true (see the bottom section of this answer for an overview of PowerShell's to-Boolean conversion rules).
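For instance, a few illustrative conversions (a sketch of the standard rules; the sample strings are made up):
[bool] ''                    # $false - empty string
[bool] 'TaskName: SOMETASK'  # $true  - any non-empty string
[bool] @()                   # $false - no output at all
[bool] ('line1', 'line2')    # $true  - a collection of 2+ output objects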
Note that a conditional only ever acts on the success output stream's output (1); other streams are passed through, as the stderr output (2) would be in this case (as you've experienced).
2>$null silences stderr output, by redirecting it to the null device.
1 and 2 are the numbers of PowerShell's success output / error streams, respectively; in the case of external programs, they refer to their stdout (standard output) and stderr (standard error) streams, respectively - see about_Redirection.
You can also capture stderr output with a 2> redirection, if you want to report it later (or need to examine it specifically for an ill-behaved program that doesn't use exit codes properly).
2> stderr.txt sends the stderr lines to file stderr.txt; unfortunately, there is currently no way to capture stderr directly in a variable - see GitHub issue #4332, which proposes syntax 2>&variableName for that.
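Until such syntax exists, a workaround is to merge stderr into the success stream with 2>&1, capture the combined output, and split it by object type - a sketch ($stderrLines / $stdoutLines are illustrative names, and the $ErrorActionPreference caveat below applies here too):
$allOutput = schtasks /query /TN 'SOMETASK' 2>&1
$stderrLines, $stdoutLines = @($allOutput).Where(
    { $_ -is [System.Management.Automation.ErrorRecord] }, 'Split')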
As implied by the aforementioned bug, you must ensure that $ErrorActionPreference isn't set to 'Stop', because 2> would then mistakenly trigger a script-terminating error.
Aside from the aforementioned bug, using 2> currently has another unexpected side effect [fixed in PowerShell 7.2+]: The stderr lines are unexpectedly also added to the automatic $Error collection, as if they were errors (which they cannot be assumed to be).
The root cause of both issues is that stderr lines are unexpectedly routed via PowerShell's error stream, even though there is no good reason to do so - see GitHub issue #11133.
[1] Note that the automatic $? variable, which indicates success vs. failure as a Boolean ($true / $false), is also set, but not reliably so: since stderr output is currently (v7.0) unexpectedly routed via PowerShell's error stream when redirected with 2>, the presence of any stderr output invariably sets $? to $false, even if the external program reports overall success via $LASTEXITCODE being 0. Therefore, the only reliable way to test for success is $LASTEXITCODE -eq 0, not $?.

Personally I prefer to use the Scheduler ComObject to manage scheduled tasks. You can connect to other servers with it, and search them simply enough to manage their tasks.
$Scheduler = New-Object -ComObject Schedule.Service
$servers = (Get-ADComputer -Server DomainController -Filter 'OperatingSystem -like "*Server*"').DNSHostname
$servers |
    ForEach-Object {
        if (Test-Connection -Count 1 -Quiet -ComputerName $_) {
            Write-Output "$($_) exists, checking for Scheduled Task"
            $Scheduler.Connect($_)
            $RootFolder = $Scheduler.GetFolder("\")
            # GetTask() throws if the task doesn't exist, so trap that and fall back to $null.
            $TargetTask = try { $RootFolder.GetTask('SOMETASK') } catch { $null }
            # If the task wasn't found, continue to the next server.
            # Note: inside ForEach-Object, 'continue' would abort the entire pipeline,
            # so 'return' is used to move on to the next input object.
            If (!$TargetTask) {
                Write-Output "Scheduled Task does not exist`n"
                return
            }
            Write-Output "Processing removal of scheduled task`n"
            $TargetTask.Enabled = $false
            $RootFolder.DeleteTask('SOMETASK', 0) # the second (flags) argument is required and must be 0
        }
    }

It appears you've way over-complicated the execution of this effort.
Why disable and remove vs. just remove? That seems a bit redundant.
All scheduled tasks are nothing but XML files and registry entries, which you can just delete if you don't want the task any longer. Thus, you can use Get-ChildItem.
# File system:
(Get-ChildItem -Path "$env:windir\System32\Tasks").FullName
# Results
<#
...
C:\Windows\System32\Tasks\Microsoft
...
C:\Windows\System32\Tasks\MicrosoftEdgeUpdateTaskMachineCore
...
#>
# Registry:
Get-ChildItem -Path 'HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Schedule\Taskcache\Tasks'
# Results
<#
Name Property
---- --------
{01C5B377-A7EB-4FF3-9C6C-86852 Path : \Microsoft\Windows\Management\Provisioning\Logon
...
#>
Get-ChildItem -Path 'HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Schedule\Taskcache\Tree'
# Results
<#
Name Property
---- --------
Adobe Acrobat Update Task SD : {1...
#>
Just select your task by name and delete the file and the regkeys using the normal filesystem cmdlets.
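For instance, a minimal sketch of that deletion (the task name and the Id-based lookup are assumptions based on the layout shown above; the supported route remains schtasks /delete, so treat this as exploratory and back up the registry first):
$taskName = 'SOMETASK'
$tree  = 'HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Schedule\Taskcache\Tree'
$tasks = 'HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Schedule\Taskcache\Tasks'
$key = Get-Item "$tree\$taskName" -ErrorAction SilentlyContinue
if ($key) {
    $id = $key.GetValue('Id') # GUID that links the Tree entry to its Tasks entry
    Remove-Item "$tree\$taskName" -Recurse
    Remove-Item "$tasks\$id" -Recurse -ErrorAction SilentlyContinue
    Remove-Item "$env:windir\System32\Tasks\$taskName" -ErrorAction SilentlyContinue
}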

So you just want to hide the error message from schtasks? One way is to redirect standard error, or "2", to $null. This is an example anyone can run as admin. The if statement only works because there's no output to standard out when there's an error. It looks like Invoke-Command generates a remote exception when something comes over standard error, but it doesn't stop the commands that follow. I don't see a way to try/catch it.
Invoke-Command localhost { if (schtasks /query /tn 'foo' 2>$null) { 'yes' }; 'hi' }
hi

Related

Why does PowerShell interpret kind/kubectl STDOUT as STDERR and How to Prevent it?

We are moving our DevOps pipelines to a new cluster and while at it, we bumped into a weird behavior when calling kind with PowerShell. This applies to kubectl also.
The below should be taken only as a repro, not a real world application. In other words, I'm not looking to fix the below code but I am searching for an explanation why the error happens:
curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.10.0/kind-windows-amd64
Move-Item .\kind-windows-amd64.exe c:\temp\kind.exe -Force
$job = Start-Job -ScriptBlock { iex "$args" } -ArgumentList c:\temp\kind.exe, get, clusters
$job | Receive-Job -Wait -AutoRemoveJob
Now, if I directly execute the c:\temp\kind.exe get clusters command in the PowerShell window, the error won't happen.
In other words, why does PowerShell (any version) consider the STDOUT of kind/kubectl as STDERR? And how can I prevent this from happening?
There must be an environmental factor to it as the same exact code runs fine in one system while on another it throws an error...
tl;dr
kind outputs its status messages to stderr, which in the context of PowerShell jobs surface via PowerShell's error output stream, which makes them print in red (and susceptible to $ErrorActionPreference = 'Stop' and -ErrorAction Stop).
Either:
Silence stderr: Use 2>$null as a general mechanism or, as David Kruk suggests, use a program-specific option to achieve the same effect, which in the case of kind is -q (--quiet).
Re-route stderr output through PowerShell's success output stream, merged with stdout output, using *>&1.
Caveat: The original output sequencing between stdout and stderr lines is not necessarily maintained on output.
Also, if you want to know whether the external program reported failure or success, you need to include the value of the automatic $LASTEXITCODE variable, which contains the most recently executed external program's process exit code, in the job's output (the exit code is the only reliably success/failure indicator - not the presence or absence of stderr output).
A simplified example with *>&1 (for Windows; on Unix-like platforms, replace cmd and /c with sh and -c, and the & command separators with ;):
$job = Start-Job -ScriptBlock {
    param($exe)
    & $exe $args *>&1
    $LASTEXITCODE # Also output the process exit code.
} -ArgumentList cmd, /c, 'echo data1 & echo status >&2 & echo data2'
$job | Receive-Job -Wait -AutoRemoveJob
As many utilities do, kind apparently reports status messages via stderr.
Given that stdout is for data, it makes sense to use the only other available output stream, stderr, for anything that isn't data, so as to prevent pollution of the data output. The upshot is that stderr output doesn't necessarily indicate actual errors (success vs. failure should solely be inferred from an external program's process exit code).
PowerShell (for its own commands only) commendably has a more diverse system of output streams, documented in the conceptual about_Redirection help topic, allowing you to report status messages via Write-Verbose, for instance.
PowerShell maps an external program's output streams to its own streams as follows:
Stdout output:
Stdout output is mapped to PowerShell's success output stream (the stream with number 1, analogous to how stdout can be referred to in cmd.exe and POSIX-compatible shells), allowing it to be captured in a variable ($output = ...) or redirected to a file (> output.txt) or sent through the pipeline to another command.
Stderr output:
In local, foreground processing in a console (terminal), stderr is by default not mapped at all and is passed through to the display (not colored in red) - unless a 2> redirection is used, which allows you to suppress stderr output (2>$null) or to send it to a file (2>errs.txt).
This is appropriate, because PowerShell cannot and should not assume that stderr output represents actual errors, whereas PowerShell's error stream is meant to be used for errors exclusively.
Unfortunately, as of PowerShell 7.2, in the context of PowerShell jobs (created with Start-Job or Start-ThreadJob) and remoting (e.g., in Invoke-Command -ComputerName ... calls), stderr output is mapped to PowerShell's error stream (the stream with number 2, analogous to how stderr can be referred to in cmd.exe and POSIX-compatible shells).
Caveat: This means that if $ErrorActionPreference = 'Stop' is in effect or -ErrorAction Stop is passed to Receive-Job or Invoke-Command, for instance, any stderr output from external programs will trigger a script-terminating error - even with stderr output comprising status messages only. Due to a bug in PowerShell 7.1 and below this can also happen in local, foreground invocation if a 2> redirection is used.
The upshot:
To silence stderr output, apply 2>$null - either at the source (inside the job or remote command), or on the receiving end.
To route stderr output via the success output stream / stdout (i.e., to merge all streams), use *>&1.
To prevent the stderr lines from printing in red (when originating from jobs or remote commands), apply this redirection at the source - which also guards against side effects from $ErrorActionPreference = 'Stop' / -ErrorAction Stop on the caller side.
Note: If you merge all streams with *>&1, the order in which stdout and stderr lines are output is not guaranteed to reflect the original output order, as of PowerShell 7.2.
If needed, PowerShell still allows you to later separate the output lines based on whether they originated from stdout or stderr - see this answer.
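A sketch of that separation, applied to merged job output such as that produced by the example above (variable names are illustrative; stderr-originated lines surface as ErrorRecord objects after merging):
$merged = $job | Receive-Job -Wait -AutoRemoveJob
$fromStderr, $fromStdout = @($merged).Where(
    { $_ -is [System.Management.Automation.ErrorRecord] }, 'Split')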

Powershell: Redirecting stderr causes an exception [duplicate]

Why does PowerShell show the surprising behaviour in the second example below?
First, an example of sane behaviour:
PS C:\> & cmd /c "echo Hello from standard error 1>&2"; echo "`$LastExitCode=$LastExitCode and `$?=$?"
Hello from standard error
$LastExitCode=0 and $?=True
No surprises. I print a message to standard error (using cmd's echo). I inspect the variables $? and $LastExitCode. They equal to True and 0 respectively, as expected.
However, if I ask PowerShell to redirect standard error to standard output over the first command, I get a NativeCommandError:
PS C:\> & cmd /c "echo Hello from standard error 1>&2" 2>&1; echo "`$LastExitCode=$LastExitCode and `$?=$?"
cmd.exe : Hello from standard error
At line:1 char:4
+ cmd <<<< /c "echo Hello from standard error 1>&2" 2>&1; echo "`$LastExitCode=$LastExitCode and `$?=$?"
+ CategoryInfo : NotSpecified: (Hello from standard error :String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
$LastExitCode=0 and $?=False
My first question, why the NativeCommandError?
Secondly, why is $? False when cmd ran successfully and $LastExitCode is 0? PowerShell's documentation about automatic variables doesn't explicitly define $?. I always supposed it is True if and only if $LastExitCode is 0, but my example contradicts that.
Here's how I came across this behaviour in the real-world (simplified). It really is FUBAR. I was calling one PowerShell script from another. The inner script:
cmd /c "echo Hello from standard error 1>&2"
if (! $?)
{
echo "Job failed. Sending email.."
exit 1
}
# Do something else
Running this simply as .\job.ps1, it works fine, and no email is sent. However, I was calling it from another PowerShell script, logging to a file .\job.ps1 2>&1 > log.txt. In this case, an email is sent! What you do outside the script with the error stream affects the internal behaviour of the script. Observing a phenomenon changes the outcome. This feels like quantum physics rather than scripting!
[Interestingly: .\job.ps1 2>&1 may or not blow up depending on where you run it]
(I am using PowerShell v2.)
The '$?' variable is documented in about_Automatic_Variables:
$?
Contains the execution status of the last operation
This is referring to the most recent PowerShell operation, as opposed to the last external command, which is what you get in $LastExitCode.
In your example, $LastExitCode is 0, because the last external command was cmd, which was successful in echoing some text. But the 2>&1 causes messages to stderr to be converted to error records in the output stream, which tells PowerShell that there was an error during the last operation, causing $? to be False.
To illustrate this a bit more, consider this:
> java -jar foo; $?; $LastExitCode
Unable to access jarfile foo
False
1
$LastExitCode is 1, because that was the exit code of java.exe. $? is False, because the very last thing the shell did failed.
But if all I do is switch them around:
> java -jar foo; $LastExitCode; $?
Unable to access jarfile foo
1
True
... then $? is True, because the last thing the shell did was print $LastExitCode to the host, which was successful.
Finally:
> &{ java -jar foo }; $?; $LastExitCode
Unable to access jarfile foo
True
1
...which seems a bit counter-intuitive, but $? is True now, because the execution of the script block was successful, even if the command run inside of it was not.
Returning to the 2>&1 redirect.... that causes an error record to go in the output stream, which is what gives that long-winded blob about the NativeCommandError. The shell is dumping the whole error record.
This can be especially annoying when all you want to do is pipe stderr and stdout together so they can be combined in a log file or something. Who wants PowerShell butting in to their log file??? If I do ant build 2>&1 >build.log, then any errors that go to stderr have PowerShell's nosey $0.02 tacked on, instead of getting clean error messages in my log file.
But, the output stream is not a text stream! Redirects are just another syntax for the object pipeline. The error records are objects, so all you have to do is convert the objects on that stream to strings before redirecting:
From:
> cmd /c "echo Hello from standard error 1>&2" 2>&1
cmd.exe : Hello from standard error
At line:1 char:4
+ cmd <<<< /c "echo Hello from standard error 1>&2" 2>&1
+ CategoryInfo : NotSpecified: (Hello from standard error :String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
To:
> cmd /c "echo Hello from standard error 1>&2" 2>&1 | %{ "$_" }
Hello from standard error
...and with a redirect to a file:
> cmd /c "echo Hello from standard error 1>&2" 2>&1 | %{ "$_" } | tee out.txt
Hello from standard error
...or just:
> cmd /c "echo Hello from standard error 1>&2" 2>&1 | %{ "$_" } >out.txt
This bug is an unforeseen consequence of PowerShell's prescriptive design for error handling, so most likely it will never be fixed. If your script plays only with other PowerShell scripts, you're safe. However if your script interacts with applications from the big wide world, this bug may bite.
PS> nslookup microsoft.com 2>&1 ; echo $?
False
Gotcha! Still, after some painful scratching, you'll never forget the lesson.
Use ($LastExitCode -eq 0) instead of $?
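A sketch of that pattern, using nslookup from the snippet above as the example utility:
nslookup microsoft.com 2>&1 | Out-Null
if ($LastExitCode -eq 0) { 'lookup succeeded' } else { 'lookup failed' }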
(Note: This is mostly speculation; I rarely use many native commands in PowerShell and others probably know more about PowerShell internals than me)
I guess you found a discrepancy in the PowerShell console host.
If PowerShell picks up stuff on the standard error stream it will assume an error and throw a NativeCommandError.
PowerShell can only pick this up if it monitors the standard error stream.
PowerShell ISE has to monitor it, because it is not a console application, and thus a native console application has no console to write to. This is why in the PowerShell ISE this fails regardless of the 2>&1 redirection operator.
The console host will monitor the standard error stream if you use the 2>&1 redirection operator because output on the standard error stream has to be redirected and thus read.
My guess here is that the console PowerShell host is lazy and just hands native console commands the console if it doesn't need to do any processing on their output.
I would really believe this to be a bug, because PowerShell behaves differently depending on the host application.
Update: The problems have been fixed in v7.2 - see this answer.
A summary of the problems as of v7.1:
The PowerShell engine still has bugs with respect to 2> redirections applied to external-program calls:
The root cause is that using 2> causes the stderr (standard error) output to be routed via PowerShell's error stream (see about_Redirection), which has the following undesired consequences:
If $ErrorActionPreference = 'Stop' happens to be in effect, using 2> unexpectedly triggers a script-terminating error, i.e. aborts the script (even in the form 2>$null, where the intent is clearly to ignore stderr lines). See GitHub issue #4002.
Workaround: (Temporarily) set $ErrorActionPreference = 'Continue'
Since 2> currently touches the error stream, $?, the automatic success-status variable, is invariably set to $false if at least one stderr line was emitted, and then no longer reflects the true success status of the command. See this GitHub issue.
Workaround, as recommended in your answer: only ever use $LASTEXITCODE -eq 0 to test for success after calls to external programs.
With 2>, stderr lines are unexpectedly recorded in the automatic $Error variable (the variable that keeps a log of all errors that occurred in the session) - even if you use 2>$null. See this GitHub issue.
Workaround: Short of keeping track of how many error records were added and removing them with $Error.RemoveAt() one by one (see the sketch below), there is none.
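A sketch of that cleanup (assumes no other errors occur in between):
$countBefore = $Error.Count
schtasks /query /TN 'SOMETASK' 2>$null   # may add stderr-derived records to $Error
while ($Error.Count -gt $countBefore) {
    $Error.RemoveAt(0)   # the newest records sit at index 0
}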
Generally, unfortunately, some PowerShell hosts by default route stderr output from external programs via PowerShell's error stream, i.e. treat it as error output, which is inappropriate, because many external programs use stderr also for status information, or more generally, for anything that is not data (git being a prime example): Not every stderr line can be assumed to represent an error, and the presence of stderr output does not imply failure.
Affected hosts:
The obsolescent Windows PowerShell ISE and possibly other, older GUI-based IDEs other than Visual Studio Code.
When executing external programs via PowerShell remoting or in a background job (these two invocation mechanisms share the same infrastructure and use the ServerRemoteHost host that ships with PowerShell).
Hosts that DO behave as expected in non-remoting, non-background invocations (they pass stderr lines through to the display and print them normally):
Terminals (consoles), including Windows Terminal.
Visual Studio Code with the PowerShell extension; this cross-platform editor (IDE) is meant to supersede the Windows PowerShell ISE.
This inconsistency across hosts is discussed in this GitHub issue.
For me it was an issue with ErrorActionPreference.
When running from the ISE I had set $ErrorActionPreference = "Stop" in the first lines, and that was intercepting everything even with *>&1 added as parameters to the call.
So first I had this line:
& $exe $parameters *>&1
Which, as I've said, didn't work because I had $ErrorActionPreference = "Stop" earlier in the file (or it can be set globally in the profile for the user launching the script).
So I've tried to wrap it in Invoke-Expression to force ErrorAction:
Invoke-Expression -Command "& `"$exe`" $parameters *>&1" -ErrorAction Continue
And this doesn't work either.
So I had to fall back to a hack, temporarily overriding ErrorActionPreference:
$old_error_action_preference = $ErrorActionPreference
try
{
    $ErrorActionPreference = "Continue"
    & $exe $parameters *>&1
}
finally
{
    $ErrorActionPreference = $old_error_action_preference
}
Which is working for me.
And I've wrapped that into a function:
<#
.SYNOPSIS
Executes native executable in specified directory (if specified)
and optionally overriding global $ErrorActionPreference.
#>
function Start-NativeExecutable
{
    [CmdletBinding(SupportsShouldProcess = $true)]
    Param
    (
        [Parameter (Mandatory = $true, Position = 0, ValueFromPipelinebyPropertyName = $True)]
        [ValidateNotNullOrEmpty()]
        [string] $Path,

        [Parameter (Mandatory = $false, Position = 1, ValueFromPipelinebyPropertyName = $True)]
        [string] $Parameters,

        [Parameter (Mandatory = $false, Position = 2, ValueFromPipelinebyPropertyName = $True)]
        [string] $WorkingDirectory,

        [Parameter (Mandatory = $false, Position = 3, ValueFromPipelinebyPropertyName = $True)]
        [string] $GlobalErrorActionPreference,

        [Parameter (Mandatory = $false, Position = 4, ValueFromPipelinebyPropertyName = $True)]
        [switch] $RedirectAllOutput
    )

    if ($WorkingDirectory)
    {
        $old_work_dir = Resolve-Path .
        cd $WorkingDirectory
    }
    if ($GlobalErrorActionPreference)
    {
        $old_error_action_preference = $ErrorActionPreference
        $ErrorActionPreference = $GlobalErrorActionPreference
    }
    try
    {
        Write-Verbose "& $Path $Parameters"
        if ($RedirectAllOutput)
        { & $Path $Parameters *>&1 }
        else
        { & $Path $Parameters }
    }
    finally
    {
        if ($WorkingDirectory)
        { cd $old_work_dir }
        if ($GlobalErrorActionPreference)
        { $ErrorActionPreference = $old_error_action_preference }
    }
}
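A hypothetical invocation of the function above (parameter values are examples only):
Start-NativeExecutable -Path git -Parameters 'status' -RedirectAllOutput `
    -GlobalErrorActionPreference Continue -Verbose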

How to get background task status from parent window in PowerShell

I have the following PowerShell script to start two background tasks. I'm only able to fetch the status of a background task if I use the -Wait parameter.
$TestResult1 = start .\TestFile1.bat -NoNewWindow -PassThru -ErrorAction Stop
$TestResult2 = start .\TestFile2.bat -NoNewWindow -PassThru -Wait -ErrorAction Stop
if ($TestResult1.ExitCode -gt 0) {
    throw 'Exceptions in TestFile1.bat'
}
if ($TestResult2.ExitCode -gt 0) {
    throw 'Exceptions in TestFile2.bat'
}
Is there any way to fetch the status of a background task without using the -Wait parameter? In the above example, I can fetch the status only from TestFile2.bat.
If you don't use -Wait, you can use Wait-Process with your $TestResult1 and $TestResult2 variables, which, thanks to -PassThru, contain System.Diagnostics.Process instances representing the processes launched:
# Waits synchronously for both processes to terminate.
$TestResult1, $TestResult2 | Wait-Process
# Now you can inspect the exit codes.
# NOTE: The .ExitCode property is only available after a process
# has *terminated*. Before that, it effectively returns `$null`
# (the underlying .NET exception that occurs is swallowed by PowerShell).
$TestResult1.ExitCode, $TestResult2.ExitCode
If you want to perform other operations while waiting for the processes to terminate, you can use the .HasExited property in a loop to periodically test whether the processes have terminated:
$leftToMonitor = $TestResult1, $TestResult2
do {
    # Perform foreground operations...
    Write-Host . -NoNewLine; Start-Sleep 1
    # Check for processes that have already terminated.
    $exited, $leftToMonitor = $leftToMonitor.Where({ $_.HasExited }, 'Split')
    foreach ($ps in $exited) {
        # Output the command line and the exit code as properties of a custom object.
        [pscustomobject] @{
            CommandLine = $ps.CommandLine
            ExitCode    = $ps.ExitCode
        }
    }
} while ($leftToMonitor)
Note that Wait-Process also has a -Timeout parameter, and you can use -TimeOut 0 to momentarily test if processes have exited, but note that for (each) process that hasn't exited, a non-terminating error is reported, which makes checking the .HasExited property more convenient (and doing so is also faster).
That said, for invisible background tasks I recommend using PowerShell jobs, either via Start-Job or, preferably, via the faster and lighter-weight Start-ThreadJob (which comes with PowerShell (Core) 7+ and is installable with Install-Module ThreadJob in Windows PowerShell), rather than Start-Process -NoNewWindow, because they:
avoid the problem of output from the Start-Process -NoNewWindow-launched process printing to the console, where it cannot be captured and, without -Wait, arrives with unpredictable timing.
instead allow you to collect output in a controlled manner, on demand, via the Receive-Job cmdlet.
Waiting for jobs to finish, optionally with a timeout, is done via the Wait-Job cmdlet.
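For instance, a minimal sketch (assuming a $jobs collection like the one created in the examples below):
$jobs | Wait-Job -Timeout 60 | Out-Null   # wait up to 60 seconds for all jobs
$jobs | Receive-Job                       # then collect the accumulated output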
Note:
Start-Job creates a hidden PowerShell child process in which to run given commands, which is what makes it slow, whereas Start-ThreadJob uses a thread in the current process.
As of PowerShell 7.1, background jobs do not automatically capture the exit code of an / the most recent external program executed by them, unlike in foreground execution, where the automatic $LASTEXITCODE variable reflects this information. Therefore, unfortunately, $LASTEXITCODE must be reported as part of each job's output, which is cumbersome - see below.
GitHub proposal #5422 suggests adding a .LastExitCode property to job objects to address this limitation.
Examples:
Note:
Instead of calling a batch file, the examples below call a cmd.exe command directly, with /c, but the principle is the same.
As stated, the exit code of the cmd.exe call must be reported as part of the job's output, hence the extra ; $LASTEXITCODE statement after the call.
Simplistic example: Wait synchronously for all jobs to terminate, and report the output, which comprises all stdout and stderr output from cmd.exe followed by the process exit code reported via $LASTEXITCODE:
# Start two thread jobs that call cmd.exe, with different outputs
# and different exit code.
# Note: If you don't have Start-ThreadJob, you can use Start-Job
$jobs =
(Start-ThreadJob { cmd /c 'echo ONE'; $LASTEXITCODE }),
(Start-ThreadJob { cmd /c 'echo TWO & exit /b 1'; $LASTEXITCODE })
$jobs | Receive-Job -Wait -AutoRemoveJob
The above yields (note that the output order isn't guaranteed):
ONE
0
TWO
1
Example with continued foreground operation while waiting:
# Start two thread jobs that call cmd.exe, with different outputs
# and different exit code.
# Note: If you don't have Start-ThreadJob, you can use Start-Job
$jobs =
(Start-ThreadJob { cmd /c echo ONE; $LASTEXITCODE }),
(Start-ThreadJob { cmd /c 'echo TWO & exit /b 1'; $LASTEXITCODE })
do {
    # Perform foreground operations...
    Write-Host . -NoNewLine; Start-Sleep 1
    # Note: You can also capture *ongoing* job output via repeated Receive-Job calls.
    # Find all jobs that have finished.
    $finished, $jobs = $jobs.Where({ $_.State -in 'Completed', 'Failed', 'Stopped' }, 'Split')
    # Process all finished jobs.
    foreach ($job in $finished) {
        # Get the job's output and separate it into the actual output
        # and the exit code, which is the *last* object.
        $output = $job | Receive-Job
        $i = 0
        $lastExitCode, $actualOutput = $output.Where({ ++$i -eq $output.Count }, 'Split')
        # Output a custom object that reflects the original command, the output, and the exit code.
        [pscustomobject] @{
            Command  = $job.Command
            Output   = $($actualOutput) # unwrap a single-object output collection
            ExitCode = $lastExitCode
        }
        # Remove the job.
        Remove-Job $job
    }
} while ($jobs)
Note:
The above uses the fairly cumbersome $_.State -in 'Completed', 'Failed', 'Stopped' to momentarily test for finished jobs, without waiting.
Ideally, Wait-Job -Timeout 0 could more simply be used, but as of PowerShell 7.1 that doesn't work as expected (the minimum wait period is therefore -Timeout 1, i.e. 1 second) - see GitHub issue #14675.

Powershell: capturing remote output streams in Invoke-Command + Invoke-Expression combination

As I didn't find a solution by searching the forum and spent some time for finding out how to do it properly, I'm placing here the issue along with the working solution.
Scenario: in Powershell, need to remotely execute a script block stored in a variable and capture its output for further processing. No output should appear on the screen unless the script generates it on purpose. The script block can contain Write-Warning commands.
Note that the behaviors of interest apply generally to PowerShell commands, not just in the context of Invoke-Command and the - generally to be avoided - Invoke-Expression; in your case, it is only needed to work around a bug.[1]
Your own answer shows how to redirect a single, specific output stream to the success output stream; e.g., 3>&1 redirects (>&) the warning stream (3) to the success (output) stream (1).
The & indicates that the redirection target is a stream, as opposed to a file; for more information about PowerShell's output streams, see about_Redirection.
If you want to redirect all output streams to the success output stream, use redirection *>&1.
By redirecting all streams to the output stream, their combined output can be captured in a variable, redirected to a file, or sent through the pipeline, whereas by default only the success output stream (1) is captured.
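A quick sketch of that capture (demo is a made-up function):
function demo { Write-Output 'data'; Write-Warning 'careful' }
$all = demo *>&1   # $all now contains the success output *and* the warning record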
Separately, for some streams you can use the common -*Variable parameters to capture individual stream output in variables, namely:
Stream 1 (success): -OutVariable
Stream 2 (error): -ErrorVariable
Stream 3 (warning): -WarningVariable
Stream 6 (information): -InformationVariable
Be sure to specify the target variable by name only, without the $ prefix; e.g., to capture warnings in variable $warnings, use
-WarningVariable warnings, such as in the following example:
Write-Warning hi -WarningVariable warnings; "warnings: $warnings"
Note that with -*Variable, the stream output is collected in the variable whether or not you silence or even ignore that stream otherwise, with the notable exception of -ErrorAction Ignore, in which case an -ErrorVariable variable is not populated (and the error is also not recorded in the automatic $Error variable that otherwise records all errors that occur in the session).
Generally, -{StreamName}Action SilentlyContinue seems to be equivalent to {StreamNumber}>$null.
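That is, the following two calls should be equivalent in silencing the warning:
Write-Warning hi -WarningAction SilentlyContinue
Write-Warning hi 3>$null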
Note the absence of the verbose (4) and the debug (5) streams above; you can only capture them indirectly, via 4>&1 and 5>&1 (or *>&1), which then requires you to extract the output of interest from the combined stream, via filtering by output-object type:
Important:
The verbose (4) and debug (5) streams are the only two streams that are silent at the source by default; that is, unless these streams are explicitly turned on via -Verbose / -Debug or their preference-variable equivalents, $VerbosePreference = 'Continue' / $DebugPreference = 'Continue', nothing is emitted and nothing can be captured.
The information stream (6) is silent only on output by default; that is, writing to the information stream (with Write-Information) always writes objects to the stream, but they're not displayed by default (they're only displayed with -InformationAction Continue / $InformationPreference = 'Continue').
Since v5, Write-Host too writes to the information stream; its output does print by default, but can be suppressed with 6>$null or -InformationAction Ignore (but not -InformationAction SilentlyContinue).
# Sample function that produces success and verbose output.
# Note that -Verbose is required for the message to actually be emitted.
function foo { Write-Output 1; Write-Verbose -Verbose 4 }
# Get combined output, via 4>&1
$combinedOut = foo 4>&1
# Extract the verbose-stream output records (objects).
# For the debug output stream (5), the object type is
# [System.Management.Automation.DebugRecord]
$verboseOut = $combinedOut.Where({ $_ -is [System.Management.Automation.VerboseRecord] })
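As an aside, a minimal sketch of the Write-Host behavior mentioned above (v5+):
Write-Host 'shows by default'       # written to stream 6, but still displayed
Write-Host 'silenced' 6>$null       # suppressed via the information stream
$rec = Write-Host 'captured' 6>&1   # captured as an InformationRecord object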
[1] Stream-capturing bug, as of PowerShell v7.0:
In a nutshell: In the context of remoting (such as Invoke-Command -Session here), background jobs, and so-called minishells (passing a script block to the PowerShell CLI to execute commands in a child process), only the success (1) and error (2) streams can be captured as expected; all other are unexpectedly passed through to the host (display) - see this GitHub issue.
Your command should - but currently doesn't - work as follows, which would obviate the need for Invoke-Expression:
# !! 3>&1 redirection is BROKEN as of PowerShell 7.0, if *remoting* is involved
# !! (parameters -Session or -ComputerName).
$RemoteOutput =
Invoke-Command -Session $Session $Commands 3>&1 -ErrorVariable RemoteError 2>$null
That is, in principle you should be able to pass a $Commands variable that contains a script block directly as the (implied) -ScriptBlock argument to Invoke-Command.
Script block is contained in $Commands variable. $Session is an already established Powershell remoting session.
The task is resolved by the below command:
$RemoteOutput =
Invoke-Command -Session $Session {
Invoke-Expression $Using:Commands 3>&1
} -ErrorVariable RemoteError 2>$null
After the command is executed all output of the script block is contained in $RemoteOutput. Errors generated during remote code execution are placed in $RemoteError.
Additional clarifications: Write-Warning in the Invoke-Expression code block generates its own output stream that is not captured by Invoke-Command. The only way to capture it in a variable is to redirect that stream to the standard stream of Invoke-Expression by using 3>&1. Commands in the code block that write to other output streams (verbose, debug) seem not to be captured even by adding 4>&1 and 5>&1 redirections to Invoke-Expression. However, stream #2 (errors) is properly captured by Invoke-Command in the way shown above.

PowerShell Streaming Output

I'd like to capture some streaming output in PowerShell. For example
cmd /c "echo hi && foo"
This command should print hi and then bomb. I know that I can use -ErrorVariable:
Invoke-Command { cmd /c "echo hi && foo" } -ErrorVariable ev
however there is an issue: in the case of long running commands, I want to stream the output, not capture it and only get the stderr/stdout output at the end of the command
Ideally, I'd like to be able to split stderr and stdout and pipe to two different streams - and pipe the stdout back to the caller, but be prepared to throw stderr in the event of an error. Something like
$stdErr
Invoke-Command "cmd" "/c `"echo hi && foo`"" `
-OutStream (Get-Command Write-Output) `
-ErrorAction {
$stdErr += "`n$_"
Write-Error $_
}
if ($lastexitcode -ne 0) { throw $stdErr}
the closest I can get is using piping, but that doesn't let me discriminate between stdout and stderr so I end up throwing the entire output stream
function Invoke-Cmd {
    <#
    .SYNOPSIS
    Executes a command using cmd /c, throws on errors.
    #>
    param([string]$Cmd)
    $out = New-Object System.Text.StringBuilder
    # I need the 2>&1 to capture stderr at all
    cmd /c $Cmd '2>&1' | % {
        $out.AppendLine($_) | Out-Null
        $_
    }
    if ($lastexitcode -ne 0) {
        # I really just want to include the error stream here
        throw "An error occurred running the command:`n$($out.ToString())"
    }
}
Common usage:
Invoke-Cmd "GitVersion.exe" | ConvertFrom-Json
Note that an analogous version that just uses a ScriptBlock (and checking the output stream for [ErrorRecord]s isn't acceptable because there are many programs that "don't like" being executed directly from the PowerShell process
The .NET System.Diagnostics.Process API lets me do this...but I can't stream output from inside the stream handlers (because of the threading and blocking - though I guess I could use a while loop and stream/clear the collected output as it comes in)
- The behavior described applies to running PowerShell in regular console/terminal windows with no remoting involved. With remoting and in the ISE, the behavior is different as of PSv5.1 - see bottom.
- The 2>$null behavior that Burt's answer relies on - 2>$null secretly still writing to PowerShell's error stream and therefore, with $ErrorActionPreference = 'Stop' in effect, aborting the script as soon as an external utility writes anything to stderr - has been classified as a bug and is likely to go away.
When PowerShell invokes an external utility such as cmd, its stderr output is passed straight through by default. That is, stderr output prints straight to the console, without being included in captured output (whether by assigning to a variable or redirecting to a file).
While you can use 2>&1 as part of the cmd command line, you won't be able to distinguish between stdout and stderr output in PowerShell.
By contrast, if you use 2>&1 as a PowerShell redirection, you can filter the success stream based on the input objects' type:
A [string] instance is a stdout line
A [System.Management.Automation.ErrorRecord] instance is a stderr line.
The following function, Invoke-CommandLine, takes advantage of this:
Note that the cmd /c part isn't built in, so you would invoke it as follows, for instance:
Invoke-CommandLine 'cmd /c "echo hi && foo"'
There is no fundamental difference between invoking a cmd command line and directly invoking an external utility such as git.exe, but do note that only invocation via cmd allows use of multiple commands via operators &, &&, and ||, and that only cmd interprets %...%-style environment-variable references, unless you use --%, the stop-parsing symbol.
Invoke-CommandLine outputs both stdout and stderr line as they're being received, so you can use the function in a pipeline.
As written, stderr lines are written to PowerShell's error stream using Write-Error as they're being received, with a single, generic exception being thrown after the external command terminates, should it report a nonzero $LASTEXITCODE.
It's easy to adapt the function:
to take action once the first stderr line is received.
to collect all stderr lines in a single variable
and/or, after termination, to take action if any stderr input was received, even with $LASTEXITCODE reporting 0.
Invoke-CommandLine uses Invoke-Expression, so the usual caveat applies: be sure you know what command line you're passing, because it will be executed as-is, no matter what it contains.
function Invoke-CommandLine {
    <#
    .SYNOPSIS
    Executes an external utility with stderr output sent to PowerShell's error
    stream, and an exception thrown if the utility reports a nonzero exit code.
    #>
    param([parameter(Mandatory)][string] $CommandLine)
    # Note that using . { ... } is required around the Invoke-Expression
    # call to ensure that the 2>&1 redirection works as intended.
    . { Invoke-Expression $CommandLine } 2>&1 | ForEach-Object {
        if ($_ -is [System.Management.Automation.ErrorRecord]) { # stderr line
            Write-Error $_ # send stderr line to PowerShell's error stream
        } else { # stdout line
            $_ # pass stdout line through
        }
    }
    # If the command line signaled failure, throw an exception.
    if ($LASTEXITCODE) {
        Throw "Command failed with exit code ${LASTEXITCODE}: $CommandLine"
    }
}
Optional reading: how calls to external utilities fit into PowerShell's error handling
Current as of: Windows PowerShell v5.1, PowerShell Core v6-beta.2
The value of preference variable $ErrorActionPreference only controls the reaction to errors and .NET exceptions that occur in PowerShell cmdlet/function calls or expressions.
Try / Catch is for catching PowerShell's terminating errors and .NET exceptions.
In a regular console window with no remoting involved, external utilities such as cmd currently never generate either error - all they do is report an exit code, which PowerShell reflects in automatic variable $LASTEXITCODE, and automatic variable $? reflects $False if the exit code is nonzero.
Note: The fact that the behavior differs fundamentally in hosts other than the console host - which includes the Windows ISE and when remoting is involved - is problematic: There, calls to external utilities result in stderr output treated as if non-terminating errors had been reported; specifically:
Every stderr output line is output as an error record and also recorded in the automatic $Error collection.
In addition to $? being set to $false with a nonzero exit code, the presence of any stderr output also sets it to $False.
This behavior is problematic, as stderr output by itself does not necessarily indicate an error - only a nonzero exit code does.
Burt has created an issue in the PowerShell GitHub repository to discuss this inconsistency.
By default, stderr output generated by an external utility is passed straight through to the console - they are not captured by PowerShell variable assignments or (success-stream) output redirections.
As discussed above, this can be changed:
2>&1 as part of a command line passed to cmd sends stdout and stderr combined to PowerShell's success stream, as strings, with no way to distinguish between whether a given line was a stdout or stderr line.
2>&1 as a PowerShell redirection sends stderr lines to PowerShell's success stream too, but you can distinguish between stdout- and stderr-originated lines by their DATA TYPE: a [string]-typed line is a stdout-originated line, whereas a [System.Management.Automation.ErrorRecord]-typed line is a stderr-originated one.
Note: The updated sample below should now work across PowerShell hosts. GitHub issue "Inconsistent handling of native command stderr" has been opened to track the discrepancy shown in the previous example. Note, however, that since this depends on undocumented behavior, the behavior may change in the future. Take this into consideration before using it in a solution that must be durable.
You are on the right track with using pipes, you probably don't need Invoke-Command, almost ever. Powershell DOES distinguish between stdout and stderr. Try this for example:
cmd /c "echo hi && foo" | set-variable output
The stdout is piped on to set-variable, while std error still appears on your screen. If you want to hide and capture the stderr output, try this:
cmd /c "echo hi && foo" 2>$null | set-variable output
The 2>$null part is an undocumented trick that results in the error output getting appended to the PowerShell $Error variable as an ErrorRecord.
Here's an example that displays stdout, while trapping stderr with an catch block:
function test-cmd {
    [CmdletBinding()]
    param()
    $ErrorActionPreference = "stop"
    try {
        cmd /c foo 2>$null
    } catch {
        $errorMessage = $_.TargetObject
        Write-Warning "`"cmd /c foo`" stderr: $errorMessage"
        Format-List -InputObject $_ -Force | Out-String | Write-Debug
    }
}
test-cmd
test-cmd
Generates the message:
WARNING: "cmd /c foo" stderr: 'foo' is not recognized as an internal or external command
If you invoke with debug output enabled, you'll also see the details of the thrown ErrorRecord:
DEBUG:
Exception : System.Management.Automation.RemoteException: 'foo' is not recognized as an internal or external command,
TargetObject : 'foo' is not recognized as an internal or external command,
CategoryInfo : NotSpecified: ('foo' is not re...ternal command,:String) [], RemoteException
FullyQualifiedErrorId : NativeCommandError
ErrorDetails :
InvocationInfo : System.Management.Automation.InvocationInfo
ScriptStackTrace : at test-cmd, <No file>: line 7
at <ScriptBlock>, <No file>: line 1
PipelineIterationInfo : {}
PSMessageDetails :
Setting $ErrorActionPreference="stop" causes PowerShell to throw an exception when the child process writes to stderr, which sounds like it's the core of what you want. This 2>$null trick makes the cmdlet and external command behavior very similar.