I want to write a wrapper around New-AzResourceGroupDeployment in PowerShell. So let's assume the following script:
New-AzResourceGroupDeployment `
-Name 'test' `
-ResourceGroupName 'rg-test' `
-TemplateFile .\main.bicep `
-TemplateParameterFile .\parameters\parameters.json `
-Verbose `
-WhatIf
This will output something like this:
VERBOSE: Using Bicep v0.4.1008
...
What if: Performing the operation "Creating Deployment" on target "rg-test".
So the problem here is that I won't get any results from the WhatIf. I guess it's because WhatIf runs a different process in the background.
So is there a way to capture the output of the WhatIf?
The output produced by using the common -WhatIf parameter can unfortunately not be captured in-session - it prints directly to the console (host).
The only workaround is to use PowerShell's CLI (powershell.exe for Windows PowerShell, pwsh for PowerShell (Core) 7+):
$whatIfAndVerboseOutput = powershell.exe -c @'
New-AzResourceGroupDeployment `
-Name 'test' `
-ResourceGroupName 'rg-test' `
-TemplateFile .\main.bicep `
-TemplateParameterFile .\parameters\parameters.json `
-Verbose `
-WhatIf
'@
Note the use of a verbatim here-string (@'<newline>...<newline>'@) to avoid the need for escaping the embedded quote characters.
Caveats:
Such a call is expensive.
Since a new, independent session that runs in a child process is created, it knows nothing about the state of the caller (except that it inherits environment variables).
Assuming the argument data types are simple enough, you can use an expandable string ("...") to embed values from the caller in the command text.
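For example, assuming the caller holds a simple string value in a hypothetical variable $rgName, an expandable here-string (@"<newline>...<newline>"@) could embed it in the command text - a sketch only:
$whatIfAndVerboseOutput = powershell.exe -c @"
New-AzResourceGroupDeployment -Name 'test' -ResourceGroupName '$rgName' -TemplateFile .\main.bicep -TemplateParameterFile .\parameters\parameters.json -Verbose -WhatIf
"@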
Note: As of PowerShell 7.2, when a command string is passed to -c (-Command) - the only option when calling from outside PowerShell - the CLI sends not only direct-to-console output to stdout, but all of its output streams as well, including the error stream, which is problematic: see the bottom section of this answer for details; here, it means that the -Verbose output is captured as well.
Alternatively, for better - but not complete - type fidelity, pass a script block ({ ... }) to -c (-Command) instead of a string, to which you can pass values from the caller with -args. Note that using a script block is only an option when also calling from PowerShell, and that doing so will insert an empty line after each what-if output message.
Note: The script-block approach limits the output that is mapped to the caller's success stream (via stdout) to direct-to-console output (such as from -WhatIf) and the command's success-stream output. Output to any of the other streams is mapped to the corresponding streams of the caller, though you can use redirections inside the script block to merge streams into the success output stream.
In the case at hand, the -Verbose output would be passed through to the caller's verbose stream by default and would therefore not be captured in a variable capturing the command output. However, placing 4>&1 inside the script block would include it.
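A sketch of that script-block approach, reusing the parameter values from the question; the parameter names ($rg, $template, $params) are illustrative only:
$whatIfAndVerboseOutput = powershell.exe -c {
  param($rg, $template, $params)
  # 4>&1 merges the verbose stream into the success stream, so it is captured too.
  New-AzResourceGroupDeployment -Name 'test' -ResourceGroupName $rg -TemplateFile $template -TemplateParameterFile $params -Verbose -WhatIf 4>&1
} -args 'rg-test', '.\main.bicep', '.\parameters\parameters.json'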
Using the program Got Your Back (GYB), I run the following command:
Start-Process -FilePath 'C:\Gyb\gyb.exe' -ArgumentList @("--email <Email Address>", "--action backup", "--local-folder $GYBfolder", "--service-account", "--batch-size 4") -Wait
The issue is that when the process is done my script does not complete.
$GYBfolder = $GYBfolder.Replace('"', "")
$output = [PSCustomObject]@{
Name = $SourceGYB
Folder = $GYBfolder
}
$filename = "C:\reports\" + $SourceGYB.Split("#")[0] + "_Backup.csv"
$output | Export-Csv $filename -NoTypeInformation | Format-Table text-align=left -AutoSize
Return $filename
For some reason the script stops right before the return.
I am curious to know if I should be using a different command to run GYB?
Any thoughts on why the script does not process the return?
There's great information in the comments, but let me attempt a systematic overview:
To synchronously execute external console applications and capture their output, call them directly (C:\Gyb\gyb.exe ... or & 'C:\Gyb\gyb.exe' ...), do not use Start-Process - see this answer.
Only if gyb.exe were a GUI application would you need Start-Process -Wait in order to execute it synchronously.
A simple, but non-obvious shortcut is to pipe the invocation to another command, such as Out-Null, which also forces PowerShell to wait (e.g. gyb.exe | Out-Null) - see below.
When Start-Process is appropriate, the most robust way to pass all arguments is as a single string encoding all arguments, with appropriate embedded "..." quoting, as needed; this is unfortunate, but required as a workaround for a long-standing bug: see this answer.
Invoke-Command's primary purpose is to invoke commands remotely; while it can be used locally, there's rarely a good reason to do so, as &, the call operator, is both more concise and more efficient - see this answer.
When you use an array to pass arguments to an external application, each element must contain just one argument, where parameter names and their values are considered distinct arguments; e.g., you must use @('--action', 'backup', ...) rather than @('--action backup', ...)
Therefore, use the following to run your command synchronously:
If gyb.exe is a console application:
# Note: Enclosing @(...) is optional
$argList = '--email', $emailAddress, '--action', 'backup', '--local-folder', $GYBfolder, '--service-account', '--batch-size', 4
# Note: Stdout and stderr output will print to the current console, unless captured.
& 'C:\Gyb\gyb.exe' $argList
If gyb.exe is a GUI application, which necessitates use of Start-Process -Wait (a here-string is used, because it makes embedded quoting easier):
# Note: A GUI application typically has no stdout or stderr output, and
# Start-Process never returns the application's *output*, though
# you can ask to have a *process object* returned with -PassThru.
Start-Process -Wait 'C:\Gyb\gyb.exe' #"
--email $emailAddress --action backup --local-folder "$GYBfolder" --service-account --batch-size 4
#"
The shortcut mentioned above - piping to another command in order to force waiting for a GUI application to exit - despite being obscure, has two advantages:
Normal argument-passing syntax can be used.
The automatic $LASTEXITCODE variable is set to the external program's process exit code, which does not happen with Start-Process. While GUI applications rarely report meaningful exit codes, some do, notably msiexec.
# Pipe to | Out-Null to force waiting (argument list shortened).
# $LASTEXITCODE will reflect gyb.exe's exit code.
# Note: In the rare event that the target GUI application explicitly
# attaches to the caller's console and produces output there,
# pipe to `Write-Output` instead, and possibly apply 2>&1 to
# the application call so as to also capture std*err* output.
& 'C:\Gyb\gyb.exe' --email $emailAddress --action backup | Out-Null
Note: If the above unexpectedly does not run synchronously, the implication is that gyb.exe itself launches another, asynchronous operation. There is no generic solution for that, and an application-specific one would require you to know the internals of the application and would be nontrivial.
A note re argument passing with direct / &-based invocation:
Passing an array as-is to an external program essentially performs splatting implicitly, without the need to use @argList.[1] That is, it passes each array element as its own argument.
By contrast, if you were to pass $argList to a PowerShell command, it would be passed as a single, array-valued argument, so #argList would indeed be necessary in order to pass the elements as separate, positional arguments. However, the more typical form of splatting used with PowerShell commands is to use a hashtable, which allows named arguments to be passed (parameter name-value pairs; e.g., to pass a value to a PowerShell command's
-LiteralPath parameter:
$argHash = @{ LiteralPath = $somePath; ... }; Set-Content @argHash
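Spelled out a little more (a hypothetical illustration):
$argHash = @{
  LiteralPath = 'C:\temp\out.txt'
  Value       = 'some text'
}
# Equivalent to: Set-Content -LiteralPath 'C:\temp\out.txt' -Value 'some text'
Set-Content @argHash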
[1] $argList and @argList are largely identical in this context, but, strangely, @argList honors use of --%, the stop-parsing symbol, even though it only makes sense in a literally specified argument list.
Here are my scripts
Parent.ps1
[CmdletBinding(SupportsShouldProcess=$true)]
Param()
Write-Verbose 'Triggering Child Process...'
Start-Process PowerShell.exe '.\Child.ps1'
Child.ps1
[CmdletBinding(SupportsShouldProcess=$true)]
Param()
Write-Verbose 'Child Process Triggered.' # I want output from this line to be displayed
Write-Output 'Child Process Triggered.'
Start-Sleep 10
I'm calling the parent script as below
powershell Parent.ps1 -Verbose
Actual output:
VERBOSE: Triggering Child Process...
Child Process Triggered.
Desired Output:
VERBOSE: Triggering Child Process...
VERBOSE: Child Process Triggered.
Child Process Triggered.
If you really want to run .\Child.ps1 via another PowerShell instance, in a new window, asynchronously:
Start-Process PowerShell.exe "-c .\Child.ps1 -Verbose:`$$($VerbosePreference -eq 'Continue')"
Note the use of -c (-Command) to signal which PowerShell CLI parameter the command string is passed to, distinguishing it from -f (-File). While not strictly necessary, because -c is the default in Windows PowerShell (powershell.exe), it helps to clarify, especially given that PowerShell (Core) 7+ (pwsh) now defaults to -f.
When you invoke your Parent.ps1 script with -Verbose (-vb), PowerShell translates this switch to a script-scoped $VerbosePreference variable with value Continue.
To propagate a switch value - on or off - programmatically, you can follow the switch name with : and a Boolean, e.g. -Verbose:$true.
Caveat: While something like -Verbose:$false is typically the same as not passing the switch at all, there are exceptions, and this is one of them: -Verbose:$false explicitly overrides a caller's $VerbosePreference preference variable to deactivate verbose output - see this answer.
That said, this isn't a concern in your case, given that you're launching a new PowerShell instance, and there's no session-internal caller.
The above uses an expandable string to translate the value of $VerbosePreference into the appropriate Boolean; note that the subexpression ($(...)) is prefixed with `$, i.e. an escaped $ character retained verbatim: stringifying a Boolean yields just True or False, so the $ prefix is needed to turn the result back into a Boolean literal ($True / $False) as it needs to be represented in source code.
Note that if you were to invoke .\Child.ps1 directly from your parent script, it would automatically "inherit" the parent's $VerbosePreference value (it would see the same value by default, due to PowerShell's dynamic scoping).
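That is, a direct, in-process call such as the following would show the child's verbose output whenever the parent itself runs with -Verbose, with no explicit pass-through needed:
# Runs in-process; Child.ps1 sees the parent's $VerbosePreference via dynamic scoping.
.\Child.ps1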
A note on -c (-Command) vs. -f (-File) and PowerShell (Core) 7+:
For invoking a PowerShell script file (.ps1) via the CLI, it is generally sufficient and preferable for robust passing of verbatim arguments to use the -f (-File) parameter; -c (-Command) is only needed if you need the command string to be evaluated as PowerShell code.
In Windows PowerShell (powershell.exe), the -f parameter doesn't recognize Boolean argument values, unfortunately, which is why -c is used in the solution above. This limitation has been fixed in PowerShell (Core) 7+ (pwsh.exe).
See also:
Guidance on when to use -c (-Command) vs. -f (-File)
An overview of PowerShell's CLI; covers both editions.
I think my problem has a simple solution. But now I'm a bit confused.
I have Java code that starts one PowerShell script. This PowerShell script must start other scripts.
Java -> Powershell.ps1 ->
Script1.ps1
Script2.ps1
Script3.ps1
Script....
Script1, Script2, etc. perform multiple tasks and return string values.
I've tried
Start-Process, Invoke-Command and Invoke-Expression
Assuming script1.ps1 is:
$a = 1+2
$a
Start-Process would work best for me, but I'm not getting the output:
$arguments = "C:\..\script1.ps1" + " -ClientName" + $DeviceName
$output = Start-Process powershell -ArgumentList $arguments -Credential $credentials
$output
$output is $null.
Thank you very much!
Start-Process produces no output by default.
(The only way to make it produce output directly is to use -PassThru, which then doesn't return the script's output, but a System.Diagnostics.Process instance representing the newly created process - see below.)
The only way to capture output from your script via Start-Process is to use the -RedirectStandardOutput and
-RedirectStandardError parameters to capture the script's output as text, in files.[1][2]
You can then read those files in PowerShell after the new process has completed, which you can ensure in one of two ways:
Also pass the -Wait switch to Start-Process, to make the invocation synchronous, which means that when Start-Process returns, the output has already been captured in the specified file(s).
Use -PassThru to obtain a System.Diagnostics.Process instance and pass it to Wait-Process later (or use its .WaitForExit() method directly; property .HasExited can be used to check whether the process is still running).
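A sketch of the second (-PassThru) approach, assuming $arguments is constructed as shown below:
$ps = Start-Process -PassThru -RedirectStandardOutput ./out.txt powershell -ArgumentList $arguments -Credential $credentials
# ... other work can happen here while the script runs ...
$ps | Wait-Process        # block until the launched process exits
Get-Content ./out.txt     # the captured output is now complete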
Here's what may work in your situation:
$arguments = "-File C:\...\script1.ps1" + " -ClientName" + $DeviceName
# Launch the script in a new window running as the given user,
# capture its standard output in file ./out.txt,
# and wait for it to finish.
Start-Process -Wait -RedirectStandardOutput ./out.txt powershell -ArgumentList $arguments -Credential $credentials
"Running script1.ps1 produced the following output:"
Get-Content ./out.txt
The PowerShell CLI, regrettably, reports all of PowerShell's 6 output streams, via standard output (see this answer), so the above captures all output from your script, including error output.
However, you can use, e.g., -RedirectStandardError ./err.txt to capture the error stream separately.
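For example (a sketch building on the call above):
Start-Process -Wait powershell -ArgumentList $arguments -Credential $credentials -RedirectStandardOutput ./out.txt -RedirectStandardError ./err.txt
Get-Content ./err.txt   # error output only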
[1] Calling another PowerShell instance via its CLI offers a structured alternative to capturing unstructured text (the for-display output as it would print to the console, which is what happens by default):
-OutputFormat xml (or -of xml / -o xml) makes PowerShell format its output in CLIXML format, which is the same XML-based serialization format used in PowerShell remoting and background jobs for serializing rich objects, which you can "rehydrate" with a later Import-Clixml call.
Note: For most complex objects there is a loss of type fidelity: that is, they are serialized as emulations of the original objects; in short, as "property bags" without methods, which, however, may be sufficient - see this answer.
Here's a quick demonstration, using a [datetime] instance, which does deserialize with type fidelity:
# Call Get-Date via the PowerShell CLI and save the output
# in CLIXML format in file ./out.xml
Start-Process -Wait -RedirectStandardOutput ./out.xml powershell '-of xml -c Get-Date'
# Import the CLIXML file and convert its content back to a [datetime] instance.
"Type of the CLIXML-serialized and deserialized `Get-Date` output:"
(Import-CliXml ./out.xml).GetType().FullName # -> System.DateTime
[2] The character encoding of the output files is determined by the encoding stored in [Console]::OutputEncoding, which reflects the current console output code page, which defaults to the system's active legacy OEM code page.
As I didn't find a solution by searching the forum and spent some time for finding out how to do it properly, I'm placing here the issue along with the working solution.
Scenario: in PowerShell, I need to remotely execute a script block stored in a variable and capture its output for further processing. No output should appear on the screen unless the script generates it on purpose. The script block can contain Write-Warning commands.
Note that the behaviors of interest apply generally to PowerShell commands, not just in the context of Invoke-Command and the - generally to be avoided - Invoke-Expression; in your case, it is only needed to work around a bug.[1]
Your own answer shows how to redirect a single, specific output stream to the success output stream; e.g., 3>&1 redirects (>&) the warning stream (3) to the success (output) stream (1).
The & indicates that the redirection target is a stream, as opposed to a file; for more information about PowerShell's output streams, see about_Redirection.
If you want to redirect all output streams to the success output stream, use redirection *>&1.
By redirecting all streams to the output stream, their combined output can be captured in a variable, redirected to a file, or sent through the pipeline, whereas by default only the success output stream (1) is captured.
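A minimal example (the script block here simply stands in for any command that produces multi-stream output):
$allOutput = & { Write-Output 'success'; Write-Warning 'careful'; Write-Error 'oops' } *>&1
# $allOutput now contains the success output plus the warning and error records.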
Separately, for some streams you can use the common -*Variable parameters to capture an individual stream's output in a variable, namely:
Stream 1 (success): -OutVariable
Stream 2 (error): -ErrorVariable
Stream 3 (warning): -WarningVariable
Stream 6 (information): -InformationVariable
Be sure to specify the target variable by name only, without the $ prefix; e.g., to capture warnings in variable $warnings, use
-WarningVariable warnings, such as in the following example:
Write-Warning hi -WarningVariable warnings; "warnings: $warnings"
Note that with -*Variable, the stream output is collected in the variable whether or not you silence or even ignore that stream otherwise, with the notable exception of -ErrorAction Ignore, in which case an -ErrorVariable variable is not populated (and the error is also not recorded in the automatic $Error variable that otherwise records all errors that occur in the session).
Generally, -{StreamName}Action SilentlyContinue seems to be equivalent to {StreamNumber}>$null.
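To illustrate the -ErrorAction Ignore exception (using a deliberately failing Get-Item call):
Get-Item NoSuchFile -ErrorAction SilentlyContinue -ErrorVariable errs
$errs.Count   # -> 1; the variable is populated despite the silenced display
Get-Item NoSuchFile -ErrorAction Ignore -ErrorVariable errs
$errs.Count   # -> 0; with Ignore, -ErrorVariable is NOT populated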
Note the absence of the verbose (4) and the debug (5) streams above; you can only capture them indirectly, via 4>&1 and 5>&1 (or *>&1), which then requires you to extract the output of interest from the combined stream, via filtering by output-object type:
Important:
The verbose (4) and debug (5) streams are the only two streams that are silent at the source by default; that is, unless these streams are explicitly turned on via -Verbose / -Debug or their preference-variable equivalents, $VerbosePreference = 'Continue' / $DebugPreference = 'Continue', nothing is emitted and nothing can be captured.
The information stream (6) is silent only on output by default; that is, writing to the information stream (with Write-Information) always writes objects to the stream, but they're not displayed by default (they're only displayed with -InformationAction Continue / $InformationPreference = 'Continue').
Since v5, Write-Host too writes to the information stream; its output does print by default, but can be suppressed with 6>$null or -InformationAction Ignore (but not -InformationAction SilentlyContinue).
# Sample function that produces success and verbose output.
# Note that -Verbose is required for the message to actually be emitted.
function foo { Write-Output 1; Write-Verbose -Verbose 4 }
# Get combined output, via 4>&1
$combinedOut = foo 4>&1
# Extract the verbose-stream output records (objects).
# For the debug output stream (5), the object type is
# [System.Management.Automation.DebugRecord]
$verboseOut = $combinedOut.Where({ $_ -is [System.Management.Automation.VerboseRecord] })
[1] Stream-capturing bug, as of PowerShell v7.0:
In a nutshell: In the context of remoting (such as Invoke-Command -Session here), background jobs, and so-called minishells (passing a script block to the PowerShell CLI to execute commands in a child process), only the success (1) and error (2) streams can be captured as expected; all others are unexpectedly passed through to the host (display) - see this GitHub issue.
Your command should - but currently doesn't - work as follows, which would obviate the need for Invoke-Expression:
# !! 3>&1 redirection is BROKEN as of PowerShell 7.0, if *remoting* is involved
# !! (parameters -Session or -ComputerName).
$RemoteOutput =
Invoke-Command -Session $Session $Commands 3>&1 -ErrorVariable RemoteError 2>$null
That is, in principle you should be able to pass a $Commands variable that contains a script block directly as the (implied) -ScriptBlock argument to Invoke-Command.
The script block is contained in the $Commands variable. $Session is an already established PowerShell remoting session.
The task is resolved by the below command:
$RemoteOutput =
Invoke-Command -Session $Session {
Invoke-Expression $Using:Commands 3>&1
} -ErrorVariable RemoteError 2>$null
After the command is executed all output of the script block is contained in $RemoteOutput. Errors generated during remote code execution are placed in $RemoteError.
Additional clarifications: Write-Warning in the Invoke-Expression code block generates its own output stream that is not captured by Invoke-Command. The only way to capture it in a variable is to redirect that stream to the success stream of Invoke-Expression by using 3>&1. Commands in the code block that write to other output streams (verbose, debug) seem not to be captured even by adding 4>&1 and 5>&1 redirections to Invoke-Expression. However, stream #2 (errors) is properly captured by Invoke-Command in the way shown above.
I would like to redirect the output of a command in PowerShell, following these rules:
The command is stored to a variable
Output must be written to the console in real time (e.g., ping results), including errors
Output must be stored to a variable, including errors (real-time is not mandatory here)
Here are my tests, assuming:
$command = "echo:"
to test errors redirection, and:
$command = "ping 127.0.0.1"
to test real-time output.
Output is written in real-time, errors are not redirected at all
Invoke-Expression $command 2>&1 | Tee-Object -Variable out_content
Output is written in real-time, errors are only redirected to the console
Invoke-Expression ($command 2>&1) | Tee-Object -Variable out_content
Invoke-Expression $command | Tee-Object -Variable out_content 2>&1
Output is not written in real-time, errors are correctly redirected to both
(Invoke-Expression $command) 2>&1 | Tee-Object -Variable out_content
Is it possible to get those rules working together?
Some general recommendations up front:
Invoke-Expression should generally be avoided, because it can be a security risk and introduces quoting headaches; there are usually better and safer solutions available; best to form a habit of avoiding Invoke-Expression, unless there is no other solution.
There is never a reason to use Invoke-Expression to simply execute an external program with arguments, such as ping 127.0.0.1; just invoke it directly - support for such direct invocations is a core feature of any shell, and PowerShell is no exception.
If you do need to store a command in a variable or pass it as an argument for later invocation, use script blocks ({ ... }); e.g., instead of $command = 'ping 127.0.0.1', use $command = { ping 127.0.0.1 }, and invoke that script block on demand with either &, the call operator, or ., the dot-sourcing operator.
When calling external programs, the two operators exhibit the same behavior; when calling PowerShell-native commands, & executes the code in a child scope, whereas . (typically) executes in the caller's current scope.
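Applied to the ping example (a sketch):
$command = { ping 127.0.0.1 }                         # script block instead of a string
& $command 2>&1 | Tee-Object -Variable out_content   # real-time display plus capture, including errors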
That Invoke-Expression $command 2>&1 doesn't work as expected looks like a bug (as of PowerShell Core 7.0.0-preview.3) and has been reported in this GitHub issue.
As for a workaround for your problem:
PetSerAl, as countless times before, has provided a solution in a comment on the question:
& { Invoke-Expression $command } 2>&1 | Tee-Object -Variable out_content
{ ... } is a script-block literal that contains the Invoke-Expression call, and it is invoked with &, the call operator, which enables applying stream-redirection expression 2>&1 to the & call, which bypasses the bug.
If $command contained a PowerShell-native command that you wanted to execute directly in the current scope, such as a function definition, you'd use . instead of &.
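For instance (a hypothetical sketch, assuming $command holds a function definition):
$command = 'function Get-Greeting { "hi" }'
. { Invoke-Expression $command }   # dot-sourcing defines Get-Greeting in the current scope
Get-Greeting                       # -> hi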