Correctly provide separated streaming feedback from PowerShell 5.1 remote Invoke-Command - powershell

PowerShell is designed to support and favor a streaming output style (where output and user feedback become available as they're generated, instead of being dumped when the script finishes). PowerShell provides the assorted Write cmdlets, their streams, and their redirect functionality to enable this style.
However, when using Invoke-Command, stream redirection behavior when operating in a remote PSSession varies depending on the stream used.
For example:
# Information redirect works as expected when invoking locally
>Invoke-Command { 'output'; Write-Host 'info' } 6>$Null
output
# Information redirect gets ignored when invoking remotely
# (assuming a remote session has already been created as $session)
>Invoke-Command -Session $session { 'output'; Write-Host 'info' } 6>$Null
output
info
(Write-Host is used for simplicity; the same behavior occurs using Write-Information -InformationAction Continue)
Experimentation shows stream behavior varies by stream:
Success (1) and Error (2) output redirect as normal
Warning (3) and Verbose (4) output unconditionally (ignoring the redirection), similar to Information (6), but their formatting is preserved
Debug (5) redirects as normal (although the continuation prompt doesn't redirect)
I can see a few custom ways to jury-rig streaming output that could subsequently be filtered easily (e.g. outputting to Success with an identifying prefix).
Is there a good way to stream separate output and feedback from a remote command that would adhere to some form of PowerShell common practices, or a way to get the redirection to behave as expected?
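For illustration, the prefix-based jury-rig mentioned above might look something like this (a minimal sketch; the PREFIX: marker and the filtering are purely illustrative):
# Remote side: tag feedback so it can be separated from real output later
$results = Invoke-Command -Session $session {
    'output'                        # real output
    'PREFIX: some user feedback'    # feedback smuggled onto the Success stream
}
# Local side: split the two apart again
$feedback = $results | Where-Object { $_ -like 'PREFIX:*' }
$output   = $results | Where-Object { $_ -notlike 'PREFIX:*' }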

Related

powershell: suppress command output during assignment [duplicate]

When my PowerShell script tries, for example, to create a SQL Server object for a server that doesn't exist ("bla" in my case), PowerShell displays lots of PowerShell errors in red.
Since my script checks the value of $? after such calls, and displays and logs errors, I'd rather not have the several lines of PowerShell errors displayed as well.
How can I deactivate those being displayed for my script?
You have a couple of options. The easiest involve using the ErrorAction settings.
-ErrorAction is a common parameter available on all cmdlets. If there are specific commands whose errors you want to ignore, you can use -ErrorAction SilentlyContinue, which will basically ignore all error messages generated by that command. You can also use the Ignore value (in PowerShell 3+):
Unlike SilentlyContinue, Ignore does not add the error message to the $Error automatic variable.
If you want to ignore all errors in a script, you can use the preference variable $ErrorActionPreference to do the same thing: $ErrorActionPreference = 'SilentlyContinue'
See about_CommonParameters for more info about -ErrorAction.
See about_preference_variables for more info about $ErrorActionPreference.
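For example (a minimal sketch; Get-Item on a non-existent path simply stands in for any error-producing command):
# Suppress errors for a single command only
Get-Item 'C:\does\not\exist' -ErrorAction SilentlyContinue
# Suppress and keep the error out of $Error as well (PowerShell 3+)
Get-Item 'C:\does\not\exist' -ErrorAction Ignore
# Suppress non-terminating errors for the rest of the script
$ErrorActionPreference = 'SilentlyContinue'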
Windows PowerShell provides two mechanisms for reporting errors: one mechanism for terminating errors and another mechanism for non-terminating errors.
Internal cmdlet code can call the ThrowTerminatingError method when an error occurs that does not or should not allow the cmdlet to continue to process its input objects. The script writer can then use try/catch to handle these errors.
Example:
try
{
    # Your database code
}
catch
{
    # Error reporting/logging
}
Internal cmdlet code can call the WriteError method to report non-terminating errors when the cmdlet can continue processing the input objects. The script writer can then use the -ErrorAction option to hide the messages, or use $ErrorActionPreference to set up the behavior for the entire script.
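A small sketch showing the two mechanisms side by side (Get-ChildItem on a missing folder stands in for any non-terminating error):
# Non-terminating error: hidden by -ErrorAction / $ErrorActionPreference
Get-ChildItem 'C:\no\such\folder' -ErrorAction SilentlyContinue
# Promote it to a terminating error so try/catch can handle it
try {
    Get-ChildItem 'C:\no\such\folder' -ErrorAction Stop
}
catch {
    Write-Warning "Lookup failed: $($_.Exception.Message)"
}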
You can also append 2>$null to your command.
Example:
$rec = Resolve-DnsName $fqdn -Server $dns 2>$null
You're way off track here. Silencing errors is almost never a good idea, and manually checking $? explicitly after every single command is enormously cumbersome and easy to forget to do (error prone). Don't set yourself up to easily make a mistake. If you're getting lots and lots of red, that means your script kept going when it should have stopped instead. It can no longer do useful work if most of its commands are failing. Continuing a program when it and the system are in an unknown state will have unknown consequences; you could easily leave the system in a corrupt state.
The correct solution is to stop the algorithm on the first error. This principle is called "fail fast," and PowerShell has a built-in mechanism to enable that behavior. It is a setting called the error preference, and setting it to the highest level will make your script (and the child scopes, if they don't override it) behave this way:
$ErrorActionPreference = 'Stop'
This will produce a nice, big error message for your consumption and prevent the following commands from executing the first time something goes wrong, without having to check $? every single time you run a command. This makes the code vastly simpler and more reliable. I put it at the top of every single script I ever write, and you almost certainly should as well.
In the rare cases where you can be absolutely certain that allowing the script to continue makes sense, you can use one of two mechanisms:
try/catch: This is the better and more flexible mechanism. You can wrap a try/catch block around multiple commands, allowing the first error to stop the sequence and jump into the handler, where you can log it and then otherwise recover from it or rethrow it to bubble the error up even further. You can also limit the catch to specific errors, meaning that it will only be invoked in specific situations you anticipated rather than on any error. (For example, failing to create a file because it already exists warrants a different response than a security failure.)
The common -ErrorAction parameter: This parameter changes the error handling for one single function call, but you cannot limit it to specific types of errors. You should only use this if you can be certain that the script can continue on any error, not just the ones you can anticipate.
In your case, you probably want one big try/catch block around your entire program. Then your process will stop on the first error and the catch block can log it before exiting. This will remove a lot of duplicate code from your program in addition to cleaning up your log file and terminal output and making your program less likely to cause problems.
Do note that this doesn't handle the case when external executables fail (exit code nonzero, conventionally), so you do still need to check $LASTEXITCODE if you invoke any. Despite this limitation, the setting still saves a lot of code and effort.
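Put together, a script skeleton following this advice might look like the following (a sketch only; the log path and the git call are illustrative assumptions):
$ErrorActionPreference = 'Stop'
try {
    # ... your actual work goes here ...
    git fetch origin        # external executable: its exit code must still be checked
    if ($LASTEXITCODE -ne 0) { throw "git fetch failed with exit code $LASTEXITCODE" }
}
catch {
    $_ | Out-File -Append 'C:\logs\myscript.log'   # log the first failure...
    throw                                          # ...then rethrow so the script actually stops
}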
Additional reliability
You might also want to consider using strict mode:
Set-StrictMode -Version Latest
This prevents PowerShell from silently proceeding when you use a non-existent variable and in other weird situations. (See the -Version parameter for details about what it restricts.)
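For instance, a typo in a variable name that PowerShell would normally swallow becomes a hard error (a small illustrative sketch):
Set-StrictMode -Version Latest
$userName = 'alice'
$greeting = 'Hello, ' + $usrName   # misspelled variable: now raises an error instead of silently using $null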
Combining these two settings makes PowerShell much more of a fail-fast language, which makes programming in it vastly easier.
I had a similar problem when trying to resolve host names using [system.net.dns]. If the IP wasn't resolved .Net threw a terminating error.
To prevent the terminating error and still retain control of the output, I created a function using TRAP.
E.G.
Function Get-IP
{
    PARAM ([string]$HostName = "")
    PROCESS
    {
        TRAP { "" ; continue }
        [system.net.dns]::GetHostAddresses($HostName)
    }
}
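Used like this, a failed lookup yields an empty string instead of a terminating error (the host names are illustrative):
Get-IP 'localhost'               # returns the resolved address(es)
Get-IP 'no-such-host.invalid'    # the TRAP swallows the exception and returns ""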
Add -ErrorAction SilentlyContinue to your script and you'll be good to go.
In some cases you can pipe the command to Out-Null:
command | Out-Null
To extend on Mikkel's answer.
If you still want to capture the error, you can use "-ErrorAction stop" combined with a try - catch.
"-ErrorAction silentlycontinue" will ignore the error.
For instance:
try
{
    New-Item -Path "/somepath" -Name "somename" -ErrorAction Stop | Out-Null
}
catch
{
    echo "You must run this command in an elevated mode."
}
NOTE: There is no "silentlyStop" action, and I believe Mikkel's answer refers to the "stop" action. It is likely a typo.
The idea of using a try-catch combined with the "stop" action is to be able to not just dismiss eventual errors but to show something in case of errors.
If you want the PowerShell error message for a cmdlet suppressed, but still want to catch the error, use "-erroraction 'silentlyStop'"

Powershell Pipeline data to external console application

I have a console application which can take standard input. It buffers up the data until the execute command, at which point it executes it all, and sends the output to standard output.
At the moment, I am running this application from Powershell, piping commands into it, and then parsing the output. The data piped in is relatively small; however this application is being called about 1000 times. Each time it is executed, it has to load, and create network connections. I am wondering whether it might be more efficient to pipeline all the commands into a single instantiation of the console application.
I have tried this by adding all the PowerShell script that manufactures the standard input for the console into a function, then piping that function to the console application. This seems to work at first, but eventually you realise it is buffering up all the data in PowerShell until the function has finished, then sending it to the console's StdIn. You can see this because I have a whole load of Write-Host statements that flash by, and only then do you see the output.
e.g.
Function Run-Command1
{
    Write-Host "Run-Command1"
    "GET nethost xxxx COLS id,name"
    "EXEC"
}
Function Run-Command2
{
    Write-Host "Run-Command2"
    "GET nethost yyyy COLS id,name"
    "GET users yyyy COLS id,name"
    "EXEC"
}
...
Function Run-CommandX
{
    ...
}
Previously, I would use this as:
Run-Command1 | netapp.exe -connect QQQQ -U user -P password
Run-Command2 | netapp.exe -connect QQQQ -U user -P password
...
Run-CommandX | netapp.exe -connect QQQQ -U user -P password
But now I would like to do:
Function Run-Commands
{
    Run-Command1
    Run-Command2
    ...
    Run-CommandX
}
Run-Commands |
netapp.exe -connect QQQQ -U user -P password
Ideally, I would like the Powershell pipeline behaviour to be extended to an external application. Is this possible?
I would like the Powershell pipeline behaviour to be extended to an external application.
I have a whole load of Write-Host statements that flash by, and only then do you see the output.
Tip of the hat to marsze.
PowerShell [Core] v6+ performs no buffering at all, and sends (stringified) output as it is being produced by a command to an external program, in the same manner that output is streamed between PowerShell commands.[1]
PowerShell's legacy edition (versions up to 5.1), Windows PowerShell, buffers, in that it collects all output from a command first before sending its stringification to an external program.
marsze's helpful answer shows a workaround based on direct use of .NET APIs.
However, I think even Windows PowerShell's behavior isn't the problem here: Your Run-Commands function executes very quickly - given that the functions it calls merely output string literals - and the resulting array of lines is then sent all at once to netapp.exe - and further processing, including when to produce output, is then up to netapp.exe. In PowerShell [Core] v6+, with PowerShell-side buffering out of the picture, the individual Run-Command<n> functions' output would be sent to netapp.exe ever so slightly earlier, but I wouldn't expect that to make a difference.
The upshot is that unless netapp.exe offers a way to adjust its input and output buffering, you won't be able to control the timing of its input processing and output production.
How PowerShell sends objects to an external program (native utility) via the pipeline:
It sends a stringified representation of each object:
in PowerShell [Core] v6+: as the object becomes available.
in Windows PowerShell: after having collected all output objects in memory first.
In other words: on the PowerShell side, from v6 onward, there is no buffering.[1]
However, receiving external programs typically do buffer the stdin (standard input) data they receive via the pipeline[2].
Similarly, external programs typically do buffer their stdout (standard output) streams (but PowerShell performs no additional buffering before passing the output on, such as to the terminal (console)).
PowerShell has no control over this behavior; either the external program itself offers an option to adjust buffering or, in limited cases on Linux, you can call the external program via the stdbuf utility.
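On Linux, for example, you might wrap the program in stdbuf to request line-buffered output (a sketch; ./my-tool is a hypothetical program):
# Force line-buffered stdout for the external program (Linux only)
stdbuf -oL ./my-tool | ForEach-Object { "received: $_" }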
Optional reading: How PowerShell stringifies objects when piping to external programs:
PowerShell, as of v7.1, knows only text when communicating with external programs; that is, data sent to such programs is converted to text, and output from such programs is interpreted as text - even though the underlying system IPC features are simply byte conduits.
The UTF-16-based .NET strings PowerShell uses are converted to byte streams for external programs based on the character encoding specified in the $OutputEncoding preference variable, which, regrettably, defaults to ASCII(!) in Windows PowerShell, and now sensibly to (BOM-less) UTF-8 in PowerShell [Core] v6+.
In other words: The encoding specified via $OutputEncoding must match the character encoding that the external program expects.
Conversely, it is the encoding specified in [Console]::OutputEncoding that determines how PowerShell interprets text received from an external program, i.e. how it converts the bytes received to .NET strings, line by line, with newlines stripped (which, when captured in a variable, amounts to either a single string, if only one line was output, or an array of strings).
The for-display representations you see in the PowerShell console (terminal) are also what is sent to external programs via the pipeline, as lines of text, specifically:
If an object (already) is a string (or [char] instance), PowerShell sends it as-is to the pipe, but with a platform-appropriate newline invariably appended.
That is, a CRLF newline is appended on Windows, and a LF-only newline on Unix-like platforms.
This behavior can be problematic, as there are situations where you do not want that, and there's no way to prevent it - see GitHub issue #5974, GitHub issue #13579, and this answer for a workaround.
If an object is, loosely speaking, of a primitive type - something that is conceptually a single value, notably the various number types - it is stringified in a culture-sensitive manner, where available[3], and a platform-appropriate newline is again invariably appended.
E.g., with a French culture in effect (as reflected in Get-Culture), the decimal fraction 1.2 - which PowerShell parses as a [double] value - is sent as 1,2<newline>.
Note that [bool] instances are not culture-sensitive and are always converted to strings True or False.
All other (complex) types are subject to PowerShell's rich for-display output formatting, and whatever you would see in the terminal (console) is also what is sent to external programs - which not only again potentially contains culture-sensitive representations, but is generally problematic in that these representations are designed for the human observer, not for programmatic processing.
The upshot:
Beware encoding problems - make sure $OutputEncoding and [Console]::OutputEncoding are set correctly.
To avoid unexpected culture-sensitivity and unexpected for-display formatting, it is best to deliberately construct the string representation you want to send.
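A minimal sketch of both recommendations (UTF-8 is assumed to be what the external program expects; the netapp.exe invocation is borrowed from the question):
# Make both directions of the text conversion agree on UTF-8
$OutputEncoding = [System.Text.UTF8Encoding]::new()
[Console]::OutputEncoding = [System.Text.UTF8Encoding]::new()
# Construct the exact string you want to send rather than relying on default stringification
$value = 1.2
$line = $value.ToString([cultureinfo]::InvariantCulture)   # always "1.2", regardless of culture
$line | netapp.exe -connect QQQQ -U user -P password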
[1] By default; however, you can explicitly request buffering - expressed as an object count - via the common -OutBuffer parameter.
[2] On recent macOS and Linux platforms, the stdin buffer size is 64KB. On Unix-like platforms, utilities typically switch to line-buffering in interactive invocations, i.e. when the stream in question is connected to a terminal.
[3] The behavior is delegated to the .ToString() method of a type at hand, i.e. whether or not that method outputs a culture-sensitive representation.
EDIT: As @mklement0 pointed out, this is different in PowerShell Core.
In PowerShell 5.1 (and lower), I think you would have to manually write each pipeline item to the external application's input stream.
Here's an attempt to build a function for that:
function Invoke-Pipeline {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory, Position = 0)]
        [string]$FileName,
        [Parameter(Position = 1)]
        [string[]]$ArgumentList,
        [int]$TimeoutMilliseconds = -1,
        [Parameter(ValueFromPipeline)]
        $InputObject
    )
    begin {
        $process = [System.Diagnostics.Process]::Start((New-Object System.Diagnostics.ProcessStartInfo -Property @{
            FileName               = $FileName
            Arguments              = $ArgumentList
            UseShellExecute        = $false
            RedirectStandardInput  = $true
            RedirectStandardOutput = $true
        }))
        # Collect stdout lines asynchronously as the external program produces them
        $output = [System.Collections.Concurrent.ConcurrentQueue[string]]::new()
        $event = Register-ObjectEvent -InputObject $process -EventName 'OutputDataReceived' -Action {
            $Event.MessageData.TryAdd($EventArgs.Data)
        } -MessageData $output
        $process.BeginOutputReadLine()
    }
    process {
        # Send the current pipeline item to the program's stdin, then wait for at least one output line
        $process.StandardInput.WriteLine($InputObject)
        [string]$line = ""
        while (-not ($output.TryDequeue([ref]$line))) {
            Start-Sleep -Milliseconds 1
        }
        do {
            $line
        } while ($output.TryDequeue([ref]$line))
    }
    end {
        if ($TimeoutMilliseconds -lt 0) {
            # Parameterless WaitForExit() returns void, so set the flag explicitly
            $process.WaitForExit()
            $exited = $true
        }
        else {
            $exited = $process.WaitForExit($TimeoutMilliseconds)
        }
        if ($exited) {
            $process.Close()
        }
        else {
            try { $process.Kill() } catch {}
        }
    }
}
Run-Commands | Invoke-Pipeline netapp.exe "-connect QQQQ -U user -P password"
The problem is, that there is no perfect solution, because by definition, you cannot know when the external program will write something to its output stream, or how much.
Note: This function doesn't redirect the error stream. The approach would be the same though.

Can I Write-Warning in a powershell script without a newline at the end?

I want to print a Warning in PowerShell as a prompt and then read the answer on the same line. The problem is that Write-Warning prints a newline at the end of the message, and one alternative, Read-Host -Prompt, doesn't print the prompt to the warning stream (or print in yellow). I've seen Write-Warning -WarningAction Inquire, but I think that's a little verbose and offers options I don't want.
The best I've done is:
$warningMsg = "Something is wrong. Do you want to continue anyway Y/N? [Y]:"
Write-Host -ForegroundColor yellow -NoNewline $warningMsg
$cont = Read-Host
This works great--prints the yellow prompt and then reads the input on the same line--but I'm wondering about the warnings I've seen against using Write-Host, if it's more appropriate to figure out some way to print to the warning stream without a newline. Is there a way to do that? I've noticed that Write-Host seems to be a wrapper to write to the Info stream, but I don't see any way to write to warnings without a new line ([Console]::Warning.WriteLine() doesn't exist for example).
You can't do what you want with Write-Warning.
Therefore, I will answer regarding your other concern.
Write-Host is perfectly fine to use in a PowerShell 5+ script.
If you look at the articles recommending against its use, you will notice that the vast majority (if not all) were written before the introduction of PowerShell 5.
Nowadays, Write-Host is a wrapper around Write-Information.
The official documentation confirms this:
Starting in Windows PowerShell 5.0, Write-Host is a wrapper for Write-Information. This allows you to use Write-Host to emit output to the information stream. This enables the capture or suppression of data written using Write-Host while preserving backwards compatibility.
The $InformationPreference preference variable and -InformationAction common parameter do not affect Write-Host messages. The exception to this rule is -InformationAction Ignore, which effectively suppresses Write-Host output.
Writing to the information stream using Write-Host and/or Write-Information won't create problems with your output string (see the example after the stream table below).
Stream #   Description          Introduced in
1          Success Stream       PowerShell 2.0
2          Error Stream         PowerShell 2.0
3          Warning Stream       PowerShell 3.0
4          Verbose Stream       PowerShell 3.0
5          Debug Stream         PowerShell 3.0
6          Information Stream   PowerShell 5.0
*          All Streams          PowerShell 3.0
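For example, Write-Host output lands on stream 6, so it can be merged, captured, or suppressed like any other stream (a small sketch):
$all = & { Write-Host 'status message'; 'real output' } 6>&1   # captures both items
& { Write-Host 'status message'; 'real output' } 6>$null       # suppresses only the Write-Host line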
Bonus
You can also control the visibility of the information stream if you use an advanced function through the -InformationAction parameter, provided you also bind the given parameter value to the Write-Host statements in the function.
For instance, if you wanted to disable the Information stream by default unless requested otherwise:
function Get-Stuff {
    [CmdletBinding()]
    param ()
    if (!$PSBoundParameters.ContainsKey('InformationAction')) {
        $InformationPreference = 'Ignore'
    }
    Write-Host 'This is the stuff' -InformationAction $InformationPreference -ForegroundColor Green
}
# Hidden by default
Get-Stuff
# Force it to show
Get-Stuff -InformationAction Continue
Note
While technically it is not possible to use Write-Warning -NoNewLine, you could look into manipulating the cursor position and resetting it to the end of the previous line, therefore doing the same.
However, I have limited experience with that and my observations regarding the subject were that you might end up having to create exceptions to comply with limitations of some console environments. In my opinion, this is a bit overkill...
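For what it's worth, a rough, untested sketch of that cursor-manipulation idea might look like this (it assumes the warning fits on a single console line):
$msg = 'Something is wrong. Do you want to continue anyway Y/N? [Y]: '
Write-Warning $msg
# Jump back to the end of the warning line that was just printed
$pos = $Host.UI.RawUI.CursorPosition
$pos.Y -= 1
$pos.X = ('WARNING: ' + $msg).Length
$Host.UI.RawUI.CursorPosition = $pos
$cont = Read-Host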
Additional references
About_redirections

Can I tee unbuffered program output in Powershell?

I'm trying to use Putty's plink.exe as part of a Powershell script, and am having trouble teeing the output.
Some of the commands invoke an interactive response (eg: entering password). Specifically, I'm testing against an Isilon.
Example code:
$command = '&"C:\Program Files\Putty\plink.exe" root@10.0.0.141 -pw "password" -t -batch "isi auth users create testuser --set-password"'
iex $command
Expected result:
I get a prompt password:
I enter the password
I get a prompt confirm:
I enter the password again
Command ends
If I try to tee the output, using iex $command | tee-object -variable result or even just redirect with iex $command *>test.log, the prompt text doesn't show up until after I've responded to it. While still technically functional, if you don't know exactly what prompt to expect, it's useless.
I've tried using Start-Transcript, but that doesn't capture the output at all. I've also tried using plink's -sshlog argument, but that logs way too much, in a less than readable format.
Is there any way to have stdout be unbuffered in the console, and also have it stored in a variable?
To answer some potential questions:
-This is to be run in an environment that doesn't allow modules, so can't use Posh-SSH.
-The Powershell version available isn't new enough to use the built-in openssh functionality.
This is all about redirecting streams.
When you use redirection, all outputs are redirected from the streams, and passed to be written to file. When you execute:
Write-Host "Some Text" *>out.txt
You don't see any output and it is all redirected to the file.
Key Note: Redirection works (to simplify) on a line-by-line basis, as the redirection writes to the file one line at a time.
Similarly, when you use Tee-Object, all outputs are redirected from the stream and down the pipeline. This is passed to the cmdlet Tee-Object. Tee-Object takes the input, and then writes that input to both the variable/file you want and to the screen. This happens after the input has been gathered and processed.
This means that both redirection and the Tee-Object commands work on a line-by-line basis. It makes sense that both work this way, because it is hard to deal with things like deleting characters, moving around and editing text dynamically while trying to edit and maintain an open file at the same time. They are only designed for one-way output, produced once the statement is complete.
In this case, when running it interactively, the password: prompt is written to the screen and you can respond.
When redirecting/Teeing the output, the password: text prompt is redirected, and buffered, awaiting your response. This makes sense because the statement has not completed yet. You don't want to send half a statement that could change, or half an object down the pipeline. It is only after you complete the statement (e.g. entering in the password + enter) that the whole statement is passed down the stream/pipeline. Once the whole statement is sent, then it is redirected/output Tee'd and can be displayed.
@Bill_Stewart is correct, in the sense that you should pick either an interactive prompt, or a fully automated solution.
Edit: To add some more information from comments.
If we use Tee-Object it relies on the Pipeline. Pipelines can only pass complete objects down the pipeline (e.g. complete strings inc. New Line). Pipelines have to interact with other commands like ForEach-Object or Select-Object, and they can't handle passing incomplete data to them. That's how the PowerShell console works, and you can't change it.
Similarly, redirection works line by line. I will explain the underlying reason why in a moment.
So, if you want to interact with it character by character, then you are dealing with streams. And if you want to deal with streams directly, it's 100 times more complicated, because you can't use the convenience of the PowerShell console; you have to run the process manually and handle all the input and output yourself.
To start, you have to manually launch the process. To do this we use the System.Diagnostics.Process class. The Pseudocode looks something like this:
$p = [System.Diagnostics.Process]::New()
$p.StartInfo.RedirectStandardOutput = $true
$p.StartInfo.RedirectStandardError = $true
$p.StartInfo.RedirectStandardInput = $true
$p.StartInfo.UseShellExecute = $false
#$p.StartInfo.CreateNoWindow = $true
$p.StartInfo.FileName = "plink.exe"
$p.StartInfo.Arguments = 'root@10.0.0.141 -pw "password" -t -batch "isi auth users create testuser --set-password"'
$p.EnableRaisingEvents = $true
....
We essentially create the process, specify that we are going to redirect the stdout (StartInfo.RedirectStandardOutput = $true), as well as the stdin to something else for us to handle. How do we know when to read the data? Well, the class has the Process.OutputDataReceived Event. You bind to this event to read in the additional data. But:
The OutputDataReceived event indicates that the associated Process has written a line, terminating with a newline character, to its redirected StandardOutput stream.
So even the process class revolves around newlines for streaming data. This is why even redirects *> work on a line by line basis. PowerShell, and cmd, etc. all use the Process class as a basis to run processes. They all bind to this same event and methods to do their processing. Hence, why everything revolves around newlines and statement completions.
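Continuing the pseudocode above, binding to that event and starting the asynchronous read might look roughly like this (a sketch only; the WriteLine content is illustrative):
$null = Register-ObjectEvent -InputObject $p -EventName OutputDataReceived -Action {
    # $EventArgs.Data is one complete, newline-terminated line ($null once the stream closes)
    Write-Host "plink says: $($EventArgs.Data)"
}
$null = $p.Start()
$p.BeginOutputReadLine()
$p.StandardInput.WriteLine('somepassword')   # respond to a prompt; what to send is up to you
$p.WaitForExit()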
(Big breath) So, you still want to interactively work with things one character at a time? Well, then you can't use the convenience of events. You will have to fall back to using a StreamReader and directly binding to the Process.StandardOutput property. Unfortunately, this is where I stop and say that accomplishing this is beyond the scope of SO, and will require much more research.

Is there a way to set a variable up to place output to stdout or null?

I would like to set up a variable in my code that would ultimately define if I'll see some output or not.
"hello" writes to stdout
"hello" > $null supresses output
My idea is something like this:
$debugOutputSwitch = $true
$outputVar = $null
if ($debugOutputSwitch){ $outputVar = **STDOUT** }
...
Write-Host "Something I want out anyway"
"Something I might not want on STDOUT" > $outputVar
If this general idea is a way to go, then STDOUT is what I'm looking for
If this idea is completely wrong...well...then I'm lost
What you want to read up on are output streams and redirection in Powershell. This includes information on all of the different output streams and how to control their relevance using built-in constructs. Just like there are the Write-Host and Write-Output cmdlets, there are also several others that control which stream to write to.
About the Output Streams
There are 6 streams in total. Make note of their numbers, because these stream identifiers are used to control which streams to redirect:
1 - Success Stream - This stream is used when passing information along the Powershell Pipeline. This is the "default" stream, but can also be written to with Write-Output.
2 - Error Stream - Errors should be written to this stream. Can be written to with Write-Error, accompanied by further error information.
3 - Warning Stream - Used to write warning information. Can be written to with Write-Warning.
4 - Verbose Stream - Used to write verbose output. Does not display by default but can be made to display by either setting $VerbosePreference = "Continue", or by using the [CmdletBinding()] attribute on a function or script and passing in the -Verbose flag. Write to the verbose stream with Write-Verbose.
5 - Debug Stream - Used to write to the debug stream, and optionally trigger a breakpoint. Does not display or trigger a breakpoint by default, but can be controlled with the $DebugPreference variable, or by using the [CmdletBinding()] attribute on a script or function and using the -Debug flag. You can write to the debug stream by using the Write-Debug cmdlet.
6 - Information Stream - Can be written to by Write-Host. This is the console host output and is not part of the pipeline.
Redirecting Streams
You can use redirection operators to redirect other streams to the success stream as well. Each stream above has a number associated with it. This is the numeric representation of each stream.
The redirection operators are as follows:
> - Redirect success stream to file (overwrite)
#> - Redirect the # stream to file (e.g. 2> somefile.txt)
>> - Redirect success stream to file (appends, you can also use a numbered stream as with the overwrite file operator)
>&1 - Redirect any stream to success stream (note that unlike the other redirection operators you can only redirect to the success stream. Using other stream identifiers will result in an error).
Also note that in place of a stream number, you can use * which will redirect all streams at the same time.
Here are some examples of redirecting output from one stream to another (if you're familiar with it, it's somewhat UNIX-y):
# Write success stream to file
Write-Output "Here is some text for a file" > .\somefile.txt
# Write error stream to file (you have to first
Write-Error "Some error occurred" 2> .\somefile.txt
# Redirect all error output to the success stream
$myErrorOutput = Write-Error "My error output" 2>&1
# Append all script output streams to a single file
Get-OutputFromAllStreams.ps1 *>> somefile.txt
Output to a File and the Pipeline Simultaneously
You can redirect the output stream to a file and the pipeline at the same time as well, using the Tee-Object cmdlet. This also works with variables, too:
$myString = "My Output" | Tee-Object -FilePath .\somefile.txt
$myString2 = "My Output 2" | Tee-Object -Variable varName
Sample function to show how to use the different Write- cmdlets
Note how the following function is decorated with the [CmdletBinding()] attribute. This is key in making the -Verbose and -Debug switches work without you having to define them yourself.
function Write-DifferentOutputs {
    [CmdletBinding()]
    param ()

    # These are all visible by default, but only the success stream is passed down the pipeline
    Write-Output "Output stream"
    Write-Warning "Warning stream"
    Write-Error "Error stream"
    Write-Host "Information stream"

    # These are not visible by default, but are written when the -Verbose or -Debug flags are passed
    # You can also manually set the $VerbosePreference or $DebugPreference variables to control this without parameters
    Write-Verbose "Verbose stream"
    Write-Debug "Debug stream"
}
Call the above function with the -Verbose or -Debug switches to see how the behavior differs, and also call it with neither flag.
Redirecting output to $null if you really need to
If there is output that you never want to see or for some other reason using the Write- cmdlets to write to the Verbose or Debug streams isn't an option, you can still redirect output to $null or make use of the Out-Null cmdlet. Recall the numbered streams at the top of this answer, they will be referenced here:
Using redirection
# Don't forget that *> redirects ALL streams, and may be what you want
Write-Output 'Success Stream' > $null
Write-Error 'Error Stream' 2> $null
Write-Warning 'Warning Stream' 3> $null
Write-Verbose 'Verbose Stream' 4> $null
Write-Debug 'Debug Stream' 5> $null
Write-Host 'Information Stream (yes you can suppress/redirect me)' 6> $null
You can also redirect target streams per command: The following example (using the earlier Write-DifferentOutputs function) redirects all streams except for the Error and Success streams:
Note: You are not limited to redirecting targeted streams only to $null.
Write-DifferentOutputs 6>$null 5>$null 4>$null 3>$null
Using Out-Null
Remember, you can redirect other streams to the success stream by redirecting the output to &1.
# Remember, to pass information on the pipeline
# it MUST be on the success stream first
# Don't forget that *> redirects ALL streams, and may be what you want
Write-Output 'Success Stream' | Out-Null
Write-Error 'Error Stream' 2>&1 | Out-Null
Write-Warning 'Warning Stream' 3>&1 | Out-Null
Write-Verbose 'Verbose Stream' 4>&1 | Out-Null
Write-Debug 'Debug Stream' 5>&1 | Out-Null
Write-Host 'Information Stream (yes you can suppress/redirect me)' 6>&1 | Out-Null
When using Out-Host is appropriate ("Don't Cross the Streams")
Warning: Unlike Write-Host, Out-Host does not output to the information stream. Instead, it outputs directly to the host console. This makes redirection of anything written directly to Out-Host impossible short of using Start-Transcript or using a custom PowerShell host. Note that information written to the console host is still visible to external applications which may be watching the output of PowerShell, as ultimately even Out-Host output makes it to STDOUT.
Calling Out-Host yourself is usually redundant. By default, PowerShell sends all unassigned output on the success stream here by way of the Out-Default cmdlet (which you should never call directly). That said, one useful invocation of Out-Host is to synchronously output formatted object data to the console:
Note: You can redirect information from other output streams and output to Out-Host as well, but there is no reason to do so. Object data will only remain intact on the success stream, the other streams will first convert an object to its ToString() representation prior to redirection. This is also why piping the object to Out-Host in this case is preferable to Write-Host.
Get-Process msedge | Out-Host
One of the caveats of the different output streams is that there is no synchronicity between streams. Normally this is not an issue, as PowerShell executes instructions in order, line by line, and with the exception of the Write-Output success stream this is not a problem for the other streams. However, many types have a for-display attribute which is computed asynchronously from the script execution before the information is sent to Out-Default.
This can result in the displayed object data being intermingled with other output streams which are written to the console host. In some cases, this can even result in loss of information written to the console. "Crossing the streams", if you will, as it pertains to how the rendered output may look.
Consider the following example and output. This does not showcase the streams intermingling, but consider the trouble you would have parsing the output externally if Write-Host "end `n" were written in the middle of the table:
Write-Host "start `n"
Get-LocalUser
Write-Host "end `n"
And the output:
start
end
Name Enabled Description
---- ------- -----------
Administrator True
DefaultAccount False A user account managed by the system.
Disabled False Built-in account for guest access to the computer/domain
In particular this is problematic for types which define a table format that must calculate the column width before sending the formatted data to Out-Host for display. Types which pre-define the table width or do not format output as a table at all do not have this problem. Out-Default can take up to 300ms to calculate column width.
When Out-Host is called explicitly as part of a pipeline, however, the table width calculation is skipped for these objects as the object data never makes it to Out-Default. This is useful primarily to ensure that object data intended to be written to the console is done so in the correct order. The downside is that table columns may not be wide enough to accommodate all of your data on each row.
This all said, if you must process the console output of a script, it is recommended to format the data you wish to process into a string and use Write-Host instead, or use another method to get the data somewhere suitable for external processing. for-display formatting is not intended for use with external processing.
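For instance, one way to do that is to render the for-display table to a string yourself and then emit it synchronously (a minimal sketch):
# Render the table to a single string first, then write it synchronously to the information stream
Get-Process msedge | Format-Table | Out-String | Write-Host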
@mklement0's answer here dives further into the details if you are curious to learn more about this problem.
Redirecting whole command outputs to Write- cmdlets
You can easily pipe all output of a command or cmdlet to one of the Write- cmdlets. I'll use the Write-DifferentOutputs provided earlier in my example below, but this will work with any cmdlet, script, or command you run:
Write-DifferentOutputs *>&1 | Write-Verbose
What the above will do is only show the command output if $VerbosePreference = 'Continue', or if you passed -Verbose as an argument to your script or function.
In Summarium
In your original question, you are attempting to reinvent a wheel that PowerShell already supports fairly well. I would suggest that you learn how to make use of the different Write- cmdlets for each stream, and especially learn how to make use of the Write-Warning, Write-Verbose, Write-Error, and Write-Debug cmdlets.
All right.
Thanks to all the brainiacs here for the motivation.
This answer may not be the best way to go about it, but it works!
Two things you need to understand to achieve this:
If you are used to using Write-Host, it won't work; you'll have to go with Write-Output.
You may have to learn to use a script block as a function parameter.
One is self-explanatory, so here's how to attain #2:
Function Test-ScriptBlockParam {
    Param(
        $scriptblock
    )
    if ($debugOutput) {
        Invoke-Command $scriptblock
    } else {
        (Invoke-Command $scriptblock) > $null
    }
}
Test-ScriptBlockParam -scriptblock { Write-Output "I want to see on the STDOUT sometimes" }
Finally, here is an example of my output and code