PowerShell is designed to support and favor a streaming output style (where output and user feedback become available as they're generated, instead of being dumped when the script finishes). PowerShell provides the assorted Write cmdlets, their streams, and their redirect functionality to enable this style.
However, when Invoke-Command runs in a remote PSSession, stream redirection behavior varies depending on the stream used.
For example:
# Information redirect works as expected when invoking locally
>Invoke-Command { 'output'; Write-Host 'info' } 6>$Null
output
# Information redirect gets ignored when invoking remotely
# (assuming a remote session has already been created as $session)
>Invoke-Command -Session $session { 'output'; Write-Host 'info' } 6>$Null
output
info
(Write-Host is used for simplicity; the same behavior occurs with Write-Information -InformationAction Continue.)
Experimentation shows the redirection behavior varies by stream:
Success (1) and Error (2) output redirects as normal
Warning (3) and Verbose (4) output unconditionally, similar to Information (6), but their formatting is preserved
Debug (5) redirects as normal (although the continuation prompt doesn't redirect)
I can see a few custom ways to jury-rig streaming output that could subsequently be filtered easily (e.g. outputting to Success with an identifying prefix).
Is there a good way to stream separate output and feedback from a remote command that would adhere to some form of PowerShell common practices, or a way to get the redirection to behave as expected?
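For example, the prefix-based jury-rig I have in mind would look something like this (just a sketch; the FEEDBACK: prefix is an arbitrary marker I made up, not any kind of convention):
# Hypothetical workaround: emit feedback as ordinary Success output with a marker prefix,
# then separate it from the real output on the local side.
$all = Invoke-Command -Session $session {
    'output'
    'FEEDBACK: info'
}
$feedback = $all | Where-Object { $_ -like 'FEEDBACK:*' }
$output   = $all | Where-Object { $_ -notlike 'FEEDBACK:*' }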
I have a console application which can take standard input. It buffers up the data until the execute command, at which point it executes it all, and sends the output to standard output.
At the moment, I am running this application from PowerShell, piping commands into it, and then parsing the output. The data piped in is relatively small; however, this application is being called about 1000 times. Each time it is executed, it has to load and create network connections. I am wondering whether it might be more efficient to pipeline all the commands into a single instance of the console application.
I have tried this by putting all the PowerShell code that manufactures the standard input for the console application into a function, then piping that function to the console application. This seems to work at first, but eventually you realise it is buffering up all the data in PowerShell until the function has finished, and only then sending it to the console application's stdin. You can see this because there is a whole load of Write-Host statements that flash by, and only then do you see the output.
e.g.
Function Run-Command1
{
    Write-Host "Run-Command1"
    "GET nethost xxxx COLS id,name"
    "EXEC"
}

Function Run-Command2
{
    Write-Host "Run-Command2"
    "GET nethost yyyy COLS id,name"
    "GET users yyyy COLS id,name"
    "EXEC"
}

...

Function Run-CommandX
{
    ...
}
Previously, I would use this as:
Run-Command1 | netapp.exe -connect QQQQ -U user -P password
Run-Command2 | netapp.exe -connect QQQQ -U user -P password
...
Run-CommandX | netapp.exe -connect QQQQ -U user -P password
But now I would like to do:
Function Run-Commands
{
    Run-Command1
    Run-Command2
    ...
    Run-CommandX
}
Run-Commands |
netapp.exe -connect QQQQ -U user -P password
Ideally, I would like the Powershell pipeline behaviour to be extended to an external application. Is this possible?
I would like the Powershell pipeline behaviour to be extended to an external application.
I have a whole load of Write-Host statements that flash by, and only then do you see the output.
Tip of the hat to marsze.
PowerShell [Core] v6+ performs no buffering at all, and sends (stringified) output as it is being produced by a command to an external program, in the same manner that output is streamed between PowerShell commands.[1]
PowerShell's legacy edition, Windows PowerShell (versions up to 5.1), buffers in that it collects all output from a command first before sending it (in stringified form) to an external program.
marsze's helpful answer shows a workaround based on direct use of .NET APIs.
However, I think even Windows PowerShell's behavior isn't the problem here: your Run-Commands function executes very quickly - given that the functions it calls merely output string literals - and the resulting array of lines is then sent all at once to netapp.exe - and further processing, including when to produce output, is then up to netapp.exe. In PowerShell [Core] v6+, with PowerShell-side buffering out of the picture, the individual Run-Command<n> functions' output would be sent to netapp.exe ever so slightly earlier, but I wouldn't expect that to make a difference.
The upshot is that unless netapp.exe offers a way to adjust its input and output buffering, you won't be able to control the timing of its input processing and output production.
How PowerShell sends objects to an external program (native utility) via the pipeline:
It sends a stringified representation of each object:
in PowerShell [Core] v6+: as the object becomes available.
in Windows PowerShell: after having collected all output objects in memory first.
In other words: on the PowerShell side, from v6 onward, there is no buffering.[1]
However, receiving external programs typically do buffer the stdin (standard input) data they receive via the pipeline[2].
Similarly, external programs typically do buffer their stdout (standard output) streams (but PowerShell performs no additional buffering before passing the output on, such as to the terminal (console)).
PowerShell has no control over this behavior; either the external program itself offers an option to adjust buffering or, in limited cases on Linux, you can call the external program via the stdbuf utility.
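For instance, on Linux with PowerShell v6+, you might see the effect like this (a sketch; stdbuf only helps if the downstream program honors it):
# Without 'stdbuf -oL', grep block-buffers its stdout when it isn't a terminal, so all three
# lines tend to appear together at the end; with it, each line shows up about a second apart.
1..3 | ForEach-Object { "line $_"; Start-Sleep -Seconds 1 } |
    stdbuf -oL grep 'line' |
    ForEach-Object { '{0:HH:mm:ss} {1}' -f (Get-Date), $_ }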
Optional reading: How PowerShell stringifies objects when piping to external programs:
PowerShell, as of v7.1, knows only text when communicating with external programs; that is, data sent to such programs is converted to text, and output from such programs is interpreted as text - even though the underlying system IPC features are simply byte conduits.
The UTF-16-based .NET strings PowerShell uses are converted to byte streams for external programs based on the character encoding specified in the $OutputEncoding preference variable, which, regrettably, defaults to ASCII(!) in Windows PowerShell, and now sensibly to (BOM-less) UTF-8 in PowerShell [Core] v6+.
In other words: The encoding specified via $OutputEncoding must match the character encoding that the external program expects.
Conversely, it is the encoding specified in [Console]::OutputEncoding that determines how PowerShell interprets text received from an external program, i.e. how it converts the bytes received to .NET strings, line by line, with newlines stripped (which, when captured in a variable, amounts to either a single string, if only one line was output, or an array of strings).
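For instance, if the external program both expects and produces UTF-8, you could align both settings like this (a minimal sketch; substitute whatever encoding your program actually uses):
# Send UTF-8 to external programs and decode their output as UTF-8 (BOM-less).
$OutputEncoding = [Console]::OutputEncoding = [System.Text.UTF8Encoding]::new()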
The for-display representations you see in the PowerShell console (terminal) are also what is sent to external programs via the pipeline, as lines of text, specifically:
If an object (already) is a string (or [char] instance), PowerShell sends it as-is to the pipe, but with a platform-appropriate newline invariably appended.
That is, a CRLF newline is appended on Windows, and a LF-only newline on Unix-like platforms.
This behavior can be problematic, as there are situations where you do not want that, and there's no way to prevent it - see GitHub issue #5974, GitHub issue #13579, and this answer for a workaround.
If an object is, loosely speaking, a primitive type - something that is conceptually a single value, notably the various number types - it is stringified in a culture-sensitive manner, where available[3], and a platform-appropriate newline is again invariably appended.
E.g., with a French culture in effect (as reflected in Get-Culture), the decimal fraction 1.2 - which PowerShell parses as a [double] value - is sent as 1,2<newline>.
Note that [bool] instances are not culture-sensitive and are always converted to strings True or False.
All other (complex) types are subject to PowerShell's rich for-display output formatting, and whatever you would see in the terminal (console) is also what is sent to external programs - which not only again potentially contains culture-sensitive representations, but is generally problematic in that these representations are designed for the human observer, not for programmatic processing.
The upshot:
Beware encoding problems - make sure $OutputEncoding and [Console]::OutputEncoding are set correctly.
To avoid unexpected culture-sensitivity and unexpected for-display formatting, it is best to deliberately construct the string representation you want to send.
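A small sketch of that last point (the 'value=' line format is made up for illustration):
# Deliberately construct the exact line to send, instead of relying on PowerShell's
# culture-sensitive / for-display stringification.
$value = 1.2
$line = 'value={0}' -f $value.ToString([cultureinfo]::InvariantCulture)  # always 'value=1.2'
$line  # pipe this string to your external program instead of the raw [double]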
[1] By default; however, you can explicitly request buffering - expressed as an object count - via the common -OutBuffer parameter
[2] On recent macOS and Linux platforms, the stdin buffer size is 64KB. On Unix-like platforms, utilities typically switch to line-buffering in interactive invocations, i.e. when the stream in question is connected to a terminal.
[3] The behavior is delegated to the .ToString() method of a type at hand, i.e. whether or not that method outputs a culture-sensitive representation.
EDIT: As @mklement0 pointed out, this is different in PowerShell Core.
In PowerShell 5.1 (and lower), I think you would have to manually write each pipeline item to the external application's input stream.
Here's an attempt to build a function for that:
function Invoke-Pipeline {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory, Position = 0)]
        [string]$FileName,
        [Parameter(Position = 1)]
        [string[]]$ArgumentList,
        [int]$TimeoutMilliseconds = -1,
        [Parameter(ValueFromPipeline)]
        $InputObject
    )
    begin {
        # Start the external program with redirected stdin/stdout so we can feed it
        # pipeline input and collect its output asynchronously.
        $process = [System.Diagnostics.Process]::Start((New-Object System.Diagnostics.ProcessStartInfo -Property @{
            FileName               = $FileName
            Arguments              = $ArgumentList
            UseShellExecute        = $false
            RedirectStandardInput  = $true
            RedirectStandardOutput = $true
        }))
        # Thread-safe queue that the OutputDataReceived handler fills with output lines.
        $output = [System.Collections.Concurrent.ConcurrentQueue[string]]::new()
        $event = Register-ObjectEvent -InputObject $process -EventName 'OutputDataReceived' -Action {
            $Event.MessageData.Enqueue($EventArgs.Data)
        } -MessageData $output
        $process.BeginOutputReadLine()
    }
    process {
        # Forward each pipeline item to the program's stdin...
        $process.StandardInput.WriteLine($InputObject)
        # ...then wait for at least one line of output before draining the queue
        # (this assumes each input line eventually produces some output).
        [string]$line = ""
        while (-not ($output.TryDequeue([ref]$line))) {
            Start-Sleep -Milliseconds 1
        }
        do {
            $line
        } while ($output.TryDequeue([ref]$line))
    }
    end {
        # Signal end of input so the program can finish, then wait for it to exit.
        $process.StandardInput.Close()
        if ($TimeoutMilliseconds -lt 0) {
            $process.WaitForExit()
            $exited = $true
        }
        else {
            $exited = $process.WaitForExit($TimeoutMilliseconds)
        }
        Unregister-Event -SourceIdentifier $event.Name
        if ($exited) {
            $process.Close()
        }
        else {
            try { $process.Kill() } catch {}
        }
    }
}
Run-Commands | Invoke-Pipeline netapp.exe "-connect QQQQ -U user -P password"
The problem is that there is no perfect solution: by definition, you cannot know when the external program will write something to its output stream, or how much.
Note: This function doesn't redirect the error stream. The approach would be the same though.
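As a rough standalone sketch (not tested against netapp.exe; cmd.exe is just a stand-in that writes to both streams), the same event-based pattern extended to the error stream could look like this:
$psi = New-Object System.Diagnostics.ProcessStartInfo -Property @{
    FileName               = 'cmd.exe'
    Arguments              = '/c echo out & echo err 1>&2'
    UseShellExecute        = $false
    RedirectStandardOutput = $true
    RedirectStandardError  = $true
}
$process = [System.Diagnostics.Process]::Start($psi)
$queue = [System.Collections.Concurrent.ConcurrentQueue[string]]::new()
# One handler per stream; each tags its lines so they can be told apart later.
$null = Register-ObjectEvent -InputObject $process -EventName 'OutputDataReceived' -Action {
    $Event.MessageData.Enqueue("OUT: $($EventArgs.Data)")
} -MessageData $queue
$null = Register-ObjectEvent -InputObject $process -EventName 'ErrorDataReceived' -Action {
    $Event.MessageData.Enqueue("ERR: $($EventArgs.Data)")
} -MessageData $queue
$process.BeginOutputReadLine()
$process.BeginErrorReadLine()
$process.WaitForExit()
Start-Sleep -Milliseconds 200  # give the event actions a moment to drain
$line = ''
while ($queue.TryDequeue([ref]$line)) { $line }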
I have the function below that produces multiple outputs. Is there a way I can put all of the function's output into a text file? I tried using Out-File as shown below, but it did not work. Any suggestions?
cls
function functionAD {Write-Output ""...}
functionAD | Out-File -FilePath C:\test\task5.txt -Append
The script above still did not work.
UPDATE: This is, in fact, possible if you overwrite the Write-Host function. It can be done like this:
function Write-Host($toWrite) {
    Write-Output $toWrite
}
Copy and paste this code into your PowerShell console, then run the program.
Don't worry about permanently overwriting the Write-Host command; the override only lasts for the current session.
OLD COMMENT:
Unfortunately, Write-Host cannot be rerouted to another file stream. It is the only 'write' command that acts in that way. That is why PowerShell programmers generally try to avoid using it unless there is a specific reason to. It is intended for messages sent directly to the user and is thus sent to the host application (the PowerShell console) itself rather than to an output stream.
I would suggest using some other command if the function is your own. Write-Output is always a safe bet because it can be redirected to any other stream.
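To illustrate the difference (a small sketch; Test-Output is a made-up function and the path is the one from your question):
function Test-Output {
    Write-Output 'captured: goes to the success stream'
    Write-Host   'not captured: goes straight to the host'
}
Test-Output | Out-File -FilePath C:\test\task5.txt -Append  # the file receives only the Write-Output line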
Here is a link if you have more questions: https://devblogs.microsoft.com/scripting/understanding-streams-redirection-and-write-host-in-powershell/
In my PowerShell script, I am calling a cmdlet (say, Connect-Database). If the cmdlet needs additional parameters, it prompts the user to supply those values (say, Connect-Database cannot get credentials from the registry, so it prompts for "Username" and "Password" on the PowerShell console and expects the user to supply them).
I looked into the code of the PowerShell cmdlet. I found that it is using CommandInvocationIntrinsics to prompt the user (via the NewScriptBlock and then Invoke methods of CommandInvocationIntrinsics).
Since I am calling these cmdlets from a PowerShell script, I want any such prompting to be suppressed and an exception thrown instead.
The code is something like this -
try
{
    $db = Connect-Database <databasename>
    # If username/password is prompted, it should be converted into an error,
    # but it should not be shown in the PowerShell console.
    if ($Error.Count -gt 0)
    {
        throw $Error[0]
    }
}
catch
{
    # do something
}
My way of doing that is to first enumerate the mandatory parameters and then initialize them with $null values. Then there is no more interaction and an error is thrown instead.
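A sketch of that approach, using the hypothetical Connect-Database cmdlet from the question (untested; parameter-set subtleties are ignored):
# Discover the cmdlet's mandatory parameters and bind $null to each one explicitly,
# so the engine raises a binding error instead of prompting interactively.
$command   = Get-Command Connect-Database
$mandatory = $command.Parameters.Values | Where-Object {
    $_.Attributes | Where-Object { $_ -is [System.Management.Automation.ParameterAttribute] -and $_.Mandatory }
}
$nullArgs = @{}
foreach ($p in $mandatory) { $nullArgs[$p.Name] = $null }
try {
    $db = Connect-Database @nullArgs -ErrorAction Stop
}
catch {
    # The null bindings fail fast here instead of prompting.
    Write-Error $_
}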
Easy solution: don't prompt. I feel that you should either always prompt (by using mandatory parameters) or never prompt during the execution of your command. If you don't have enough information, throw an error. If your command is being used interactively, the user can rerun it with the correct information.
This is a part of the design of PowerShell and likely cannot be overcome.
If you are trying to be non-interactive, you should be ensuring that you have collected all the necessary information before trying to call the cmdlet (aka, you should not have missing parameters for your non-interactive calls). You are likely following an anti-pattern and should re-think your approach. Prompting at the beginning of your non-interactive script (aka, short startup interaction), having parameters for your non-interactive script (those will, in turn, prompt if missing when set to mandatory), or reading from a config file (and verifying the information first) are a few approaches. Something common you will notice is that they all involve failing fast/failing early, a very good design.
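One way to fail fast along those lines (a sketch; the config path, property names and Connect-Database parameters are all made up):
# Verify required information up front and fail early, instead of letting a cmdlet prompt mid-run.
$config = Get-Content -Raw -Path 'C:\myscript\config.json' | ConvertFrom-Json
if (-not $config.Username -or -not $config.Password) {
    throw 'Config is missing Username and/or Password; refusing to run non-interactively.'
}
$db = Connect-Database -Username $config.Username -Password $config.Password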
I need to be able to launch a process and read the output into a variable. Then based on the return of the command I can choose to show the full output or just a selected subset.
So to be clear, I want to launch a text based process (psexec actually) and read the output from that command (stdout, stderr, etc) into a variable rather than have it directly outputted to the console.
You left off some details regarding what kind of process, but I think this article from the PowerShell Team Blog covers whatever you'd like to do, whether that's piping the executable's output somewhere or using System.Diagnostics.Process.
Since the second option sounds like what you want to do, you can use the ProcessStartInfo class and set its RedirectStandardOutput property to true, then read from the StandardOutput property of the Process object to do whatever you want with the output. StandardError works identically.
As far as reading stuff into variables is concerned, you should just be able to do something like
$output = ps
This will only capture stdout, though, not the verbose, warning or error streams. You can check whether the previous command succeeded via the special variable $?, and get a native command's exit code from the $LASTEXITCODE variable.
I think a little more information would be of use to provide a more complete answer, but hopefully this is some way towards what you're looking for.
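For example, to capture everything from psexec and then decide what to show (the psexec arguments are only illustrative):
# 2>&1 merges stderr into stdout so both end up in the variable;
# $LASTEXITCODE holds the external command's exit code.
$output = & psexec \\server cmd /c ver 2>&1
if ($LASTEXITCODE -ne 0) {
    $output                            # failure: show the full output
}
else {
    $output | Select-Object -First 1   # success: show just a subset
}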
The PowerShell Community Extensions includes Start-Process. This actually returns a System.Diagnostics.Process.
> $proc = Start-Process pslist -NoShellExecute
However, while this returns the Process object, it does not allow you to redirect the output prior to executing. To do that, you can create your own Process object and configure its ProcessStartInfo members before starting it:
> $proc = New-Object System.Diagnostics.Process
> $proc.StartInfo = New-Object System.Diagnostics.ProcessStartInfo("pslist.exe")
> $proc.StartInfo.CreateNoWindow = $true
> $proc.StartInfo.UseShellExecute = $false
> $proc.StartInfo.RedirectStandardOutput = $true
> $proc.Start()
> $proc.StandardOutput.ReadToEnd()