I'm using the program Got Your Back (GYB). I run the following command:
Start-Process -FilePath 'C:\Gyb\gyb.exe' -ArgumentList @("--email <Email Address>", "--action backup", "--local-folder $GYBfolder", "--service-account", "--batch-size 4") -Wait
The issue is that when the process is done my script does not complete.
$GYBfolder = $GYBfolder.Replace('"', "")
$output = [PSCustomObject]@{
    Name   = $SourceGYB
    Folder = $GYBfolder
}
$filename = "C:\reports\" + $SourceGYB.Split("#")[0] + "_Backup.csv"
$output | Export-Csv $filename -NoTypeInformation | Format-Table text-align=left -AutoSize
Return $filename
For some reason the script stops right before the return.
I am curious to know if I should be using a different command to run GYB?
Any thoughts on why the script does not process the return?
There's great information in the comments, but let me attempt a systematic overview:
To synchronously execute external console applications and capture their output, call them directly (C:\Gyb\gyb.exe ... or & 'C:\Gyb\gyb.exe' ...); do not use Start-Process - see this answer.
Only if gyb.exe were a GUI application would you need **Start-Process -Wait in order to execute it synchronously**.
A simple, but non-obvious shortcut is to pipe the invocation to another command, such as Out-Null, which also forces PowerShell to wait (e.g. gyb.exe | Out-Null) - see below.
When Start-Process is appropriate, the most robust way to pass all arguments is as a single string encoding all arguments, with appropriate embedded "..." quoting, as needed; this is unfortunate, but required as a workaround for a long-standing bug: see this answer.
Invoke-Command's primary purpose is to invoke commands remotely; while it can be used locally, there's rarely a good reason to do so, as &, the call operator is both more concise and more efficient - see this answer.
When you use an array to pass arguments to an external application, each element must contain just one argument, where parameter names and their values are considered distinct arguments; e.g., you must use @('--action', 'backup', ...) rather than
@('--action backup', ...)
Therefore, use the following to run your command synchronously:
If gyb.exe is a console application:
# Note: Enclosing @(...) is optional
$argList = '--email', $emailAddress, '--action', 'backup', '--local-folder', $GYBfolder, '--service-account', '--batch-size', 4
# Note: Stdout and stderr output will print to the current console, unless captured.
& 'C:\Gyb\gyb.exe' $argList
If gyb.exe is a GUI application, which necessitates use of Start-Process -Wait (a here-string is used, because it makes embedded quoting easier):
# Note: A GUI application typically has no stdout or stderr output, and
# Start-Process never returns the application's *output*, though
# you can ask to have a *process object* returned with -PassThru.
Start-Process -Wait 'C:\Gyb\gyb.exe' #"
--email $emailAddress --action backup --local-folder "$GYBfolder" --service-account --batch-size 4
#"
The shortcut mentioned above - piping to another command in order to force waiting for a GUI application to exit - despite being obscure, has two advantages:
Normal argument-passing syntax can be used.
The automatic $LASTEXITCODE variable is set to the external program's process exit code, which does not happen with Start-Process. While GUI applications rarely report meaningful exit codes, some do, notably msiexec.
# Pipe to | Out-Null to force waiting (argument list shortened).
# $LASTEXITCODE will reflect gyb.exe's exit code.
# Note: In the rare event that the target GUI application explicitly
# attaches to the caller's console and produces output there,
# pipe to `Write-Output` instead, and possibly apply 2>&1 to
# the application call so as to also capture std*err* output.
& 'C:\Gyb\gyb.exe' --email $emailAddress --action backup | Out-Null
Note: If the above unexpectedly does not run synchronously, the implication is that gyb.exe itself launches another, asynchronous operation. There is no generic solution for that, and an application-specific one would require you to know the internals of the application and would be nontrivial.
A note re argument passing with direct / &-based invocation:
Passing an array as-is to an external program essentially performs splatting implicitly, without the need to use @argList[1]. That is, it passes each array element as its own argument.
By contrast, if you were to pass $argList to a PowerShell command, it would be passed as a single, array-valued argument, so @argList would indeed be necessary in order to pass the elements as separate, positional arguments. However, the more typical form of splatting used with PowerShell commands is to use a hashtable, which allows named arguments to be passed (parameter name-value pairs); e.g., to pass a value to a PowerShell command's
-LiteralPath parameter:
$argHash = @{ LiteralPath = $somePath; ... }; Set-Content @argHash
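For example, a minimal, self-contained sketch of the hashtable form (the path and content used here are hypothetical):
$argHash = @{
    LiteralPath = 'C:\temp\out.txt'   # binds to -LiteralPath
    Value       = 'hello'             # binds to -Value
}
Set-Content @argHash   # same as: Set-Content -LiteralPath 'C:\temp\out.txt' -Value 'hello'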
[1] $argList and @argList are largely identical in this context, but, strangely, @argList honors use of --%, the stop-parsing symbol, even though it only makes sense in a literally specified argument list.
$npp = "C:\Program Files\Notepad++\notepad++.exe";
$myfiles = @(
"C:\bad boys\file1.txt",
"C:\bad boys\file2.txt",
"C:\bad boys\file3.txt"
)
foreach ($file in $myfiles) {
Start-Process -FilePath $npp -ArgumentList "$file" -PassThru -NoNewWindow | out-null
}
This almost works... except it doesn't open in Notepad++, because it sees the space in the file name and thinks that's where the file path ends; thus, I am unable to open my file list. Any ideas how to fix this? What I get instead is Notepad++ asking many times if I want to create the file "C:\bad".
tl;dr
While Joel Coehoorn's helpful answer provides an effective solution to your Start-Process problem (which stems from the bug detailed below), you can simplify your code to:
foreach ($file in $myfiles) {
# Note: | Out-Null is a trick that makes calling *GUI* applications
# *synchronous* (makes PowerShell wait for them to exit).
& $npp $file | Out-Null
}
You're seeing a long-standing bug in Start-Process that causes it to blindly space-concatenate its -ArgumentList (-Args) arguments without using required embedded double-quoting for arguments with spaces when forming the single string encoding all arguments that is passed to the target executable behind the scenes.
See GitHub issue #5576, which also discusses that a fix will require a new parameter so as not to break backward compatibility.
For that reason, the required embedded double-quoting must be performed manually as shown in Joel's answer.
When passing multiple arguments, it is ultimately easier to pass a single string to -ArgumentList, with embedded double-quoting as necessary - essentially by formulating a string similar to how you would pass multiple arguments from cmd.exe:
E.g., if you were to pass two file paths with spaces to Notepad++ at once, you would do:
Start-Process -Wait -FilePath $npp -ArgumentList "`"C:\bad boys\file1.txt`" `"C:\bad boys\file2.txt`""
Alternatively, since your argument string doesn't require string interpolation, you could use a verbatim (single-quoted) string instead, which avoids the need for escaping the embedded " as `":
Start-Process -Wait -FilePath $npp -ArgumentList '"C:\bad boys\file1.txt`" `"C:\bad boys\file2.txt"'
Using a here-string is yet another option that avoids the need to escape, and can additionally make the call more readable (also works with single quotes: @'<newline>...<newline>'@):
Start-Process -Wait -FilePath $npp -ArgumentList #"
"C:\bad boys\file1.txt" "C:\bad boys\file2.txt"
"#
Also note the overall simplification of the Start-Process call:
Use of -Wait to ensure synchronous execution (waiting for Notepad++ to exit before continuing).
It looks like this is what you tried to do by combining -PassThru with piping to Out-Null, but that doesn't actually work, because that only waits for Start-Process itself to exit (which itself - unlike the launched process - executes synchronously anyway).
The omission of the unnecessary -NoNewWindow parameter, which only applies to starting console applications (in order to prevent opening a new console window); Notepad++ is a GUI application.
Note that the only good reason to use Start-Process here - rather than direct invocation - is the need for synchronous execution: Start-Process -Wait makes launching GUI applications synchronous (too), whereas with direct invocation only console applications execute synchronously.
If you didn't need to wait for Notepad++ to exit, direct invocation would make your quoting headaches go away, as the required embedded quoting is then automatically performed behind the scenes:[1]
foreach ($file in $myfiles) {
& $npp $file # OK, even with values with spaces
}
However, the | Out-Null trick can be used effectively in direct invocation to make calling GUI applications synchronous[2], which leads us to the solution at the top:
foreach ($file in $myfiles) {
& $npp $file | Out-Null # Wait for Notepad++ to exit.
}
[1] However, up to at least PowerShell 7.2.x, other quoting headaches can still arise, namely with empty-string arguments and arguments whose values contain " chars. - see this answer.
[2] Out-Null automatically makes PowerShell wait for the process in the previous pipeline segment to exit, so as to ensure that all input can be processed - and it does so irrespective of whether the process is a console-subsystem or GUI-subsystem application. Since GUI applications are normally detached from the calling console and therefore produce no output there, Out-Null has no ill effects. In the rare event that a GUI application does explicitly attach to the calling console and produce output there, you can use | Write-Output instead (which also works if there's no output, but is perhaps more confusing).
Try quotes around the file paths within the string data:
$myfiles = @(
"`"C:\bad boys\file.txt`"",
"`"C:\bad boys\file2.txt`"",
"`"C:\bad boys\file3.txt`""
)
I have a long script. I have a function for logging:
function Log ([string]$Content){
    $Date = Get-Date
    Add-Content -Path $LogPath -Value ("$Date : $Content")
}
At some point in the script I need to run jobs in parallel.
I have a list of computer names, and I need to use psexec on each one of them. This should be done as jobs so they run in parallel:
Start-Job -ScriptBlock {
    Log "$line Has called"
    $Program_List = gc $USERS_DB_PATH\$line.txt | select -Skip 1
    if (Test-Connection $line -Quiet) {
        ForEach ($program in $Program_List){
            Log "$line $program"
            #"line $line bot is $bot pwd is $pwd"
            psexec \\"$line" -u bla.local\"$bot" -p $pwd cmd bla
        }
    }
    else{
        Log "Cannot Connect to $line"
    }
}
#Remove-Item "$USERS_DB_PATH\$line.txt"
}
I understand this is something to do with scope, but how can I make this script block see the function Log and all the necessary variables? They all come up empty.
tl;dr
Reference variables from the caller's scope via the $using: scope.
Recreate your Log function in the context of the background job, using $function:Log = $using:function:Log
Start-Job -ScriptBlock {
# Required in Windows PowerShell only (if needed).
# Change to the same working directory as the caller.
Set-Location -LiteralPath ($using:PWD).ProviderPath
# Recreate the Log function.
$function:Log = $using:function:Log
# All variable values from the *caller*'s scope must be $using: prefixed.
Log "$using:line Has called"
# ...
}
Read on for an explanation.
See the bottom section for better alternatives to Start-Job: Start-ThreadJob and ForEach-Object -Parallel (PowerShell (Core) 7+ only).
A background job runs in an invisible PowerShell child process, i.e. a separate powershell.exe (Windows PowerShell) or pwsh (PowerShell (Core) 7+) process.
Such a child process:
does not load $PROFILE files.
knows nothing about the caller's state; that is, it doesn't have access to the caller's variables, functions, aliases, ... defined in the session; only environment variables are inherited from the caller.
Conversely, this means that only the following commands are available by default in background jobs:
external programs and *.ps1 scripts, via the directories listed in the $env:PATH environment variable.
commands in modules available via the module-autoloading feature, from directories listed in the $env:PSModulePath environment variable (which has a default value).
Passing caller-state information to background jobs:
Variables:
While you cannot pass variables as such to background jobs, you can pass their values, using the $using: scope; in other words: you can get the value of a variable in the caller's scope, but you cannot update it - see the conceptual about_Remote_Variables help topic.
Alternatively, pass the value as an argument via Start-Job's -ArgumentList (-Args) parameter, which the -ScriptBlock argument must then access in the usual manner: either via the automatic $args variable or via explicitly declared parameters, using a param() block.
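For example, a minimal sketch of the -ArgumentList approach (the computer name is hypothetical):
$line = 'SERVER01'   # hypothetical value, for illustration
Start-Job -ArgumentList $line -ScriptBlock {
    param($computerName)   # receives the -ArgumentList value positionally
    "Testing connection to $computerName"
    Test-Connection -ComputerName $computerName -Quiet
} | Receive-Job -Wait -AutoRemoveJob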
Functions:
Analogously, you cannot pass a function as such, but only a function's body, and the simplest way to do that is via namespace variable notation; e.g. to get the body of function foo, use $function:foo; to pass it to a background job (or remote call), use $using:function:foo.
Since namespace variable notation can also be used to assign values, assigning to $function:foo creates or updates a function named foo, so that $function:foo = $using:function:foo effectively recreates a foo function in the background session.
Note that while $function:foo returns the function body as a [scriptblock] instance, $using:function:foo turns into a string during serialization (see GitHub issue #11698); fortunately, however, you can also create function bodies from strings.
Working directory:
In Windows PowerShell, background jobs use a fixed working directory: the user's Documents folder. To ensure that the background job uses the same directory as the caller, call
Set-Location -LiteralPath ($using:PWD).ProviderPath as the first statement from inside the script block passed to -ScriptBlock.
In PowerShell (Core) 7+, background jobs now - fortunately - use the same working directory as the caller.
Caveat re type fidelity:
Since values must be marshaled across process boundaries, serialization and deserialization of values is of necessity involved. Background jobs use the same serialization infrastructure as PowerShell's remoting, which - with the exception of a handful of well-known types, including .NET primitive types - results in loss of type fidelity, both on passing values to background jobs and on receiving output from them - see this answer.
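A quick sketch that shows the loss of type fidelity:
# A live Process object becomes an inert "property bag" when it
# crosses the process boundary.
$obj = Start-Job { Get-Process -Id $PID } | Receive-Job -Wait -AutoRemoveJob
$obj.pstypenames[0]   # -> Deserialized.System.Diagnostics.Process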
Preferable alternative to background jobs: thread jobs, via Start-ThreadJob:
PowerShell (Core) 7+ comes with the ThreadJob module, which offers the Start-ThreadJob cmdlet; in Windows PowerShell you can install it on demand.
Additionally, PowerShell (Core) 7+ offers essentially the same functionality as an extension to the ForEach-Object cmdlet, via the -Parallel parameter, which executes a script block passed to it in a separate thread for each input object.
Start-ThreadJob fully integrates with PowerShell's other job-management cmdlets, but uses threads (i.e. in-process concurrency) rather than child processes, which implies:
much faster execution
use of fewer resources
no loss of type fidelity (though you can run into thread-safety issues and explicit synchronization may be required)
Also, the caller's working directory is inherited.
The need for $using: / -ArgumentList equally applies.
For ForEach-Object -Parallel an improvement is being considered to allow copying the caller's state to the thread script blocks on an opt-in basis - see GitHub issue #12240.
This answer provides an overview of ForEach-Object -Parallel and compares and contrasts Start-Job and Start-ThreadJob.
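As a quick illustration of both (a sketch; assumes the ThreadJob module is available, and -Parallel requires PowerShell (Core) 7+):
# Thread job: same job-management cmdlets, but in-process ($using: still applies).
$name = 'PC1'   # hypothetical value, for illustration
Start-ThreadJob { "processing $using:name" } | Receive-Job -Wait -AutoRemoveJob

# PowerShell (Core) 7+ only: one thread per input object.
'PC1', 'PC2', 'PC3' | ForEach-Object -Parallel { "processing $_" }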
I would like to redirect the output of a command in PowerShell, following these rules:
The command is stored to a variable
Output must be written to the console in real-time (i.e. "ping" results), including errors
Output must be stored to a variable, including errors (real-time is not mandatory here)
Here are my tests, assuming:
$command = "echo:"
to test errors redirection, and:
$command = "ping 127.0.0.1"
to test real-time output.
Output is written in real-time, errors are not redirected at all
Invoke-Expression $command 2>&1 | Tee-Object -Variable out_content
Output is written in real-time, errors are only redirected to the console
Invoke-Expression ($command 2>&1) | Tee-Object -Variable out_content
Invoke-Expression $command | Tee-Object -Variable out_content 2>&1
Output is not written in real-time, errors are correctly redirected to both
(Invoke-Expression $command) 2>&1 | Tee-Object -Variable out_content
Is it possible to get those rules working together?
Some general recommendations up front:
Invoke-Expression should generally be avoided, because it can be a security risk and introduces quoting headaches; there are usually better and safer solutions available; best to form a habit of avoiding Invoke-Expression, unless there is no other solution.
There is never a reason to use Invoke-Expression to simply execute an external program with arguments, such as ping 127.0.0.1; just invoke it directly - support for such direct invocations is a core feature of any shell, and PowerShell is no exception.
If you do need to store a command in a variable or pass it as an argument for later invocation, use script blocks ({ ... }); e.g., instead of $command = 'ping 127.0.0.1', use $command = { ping 127.0.0.1 }, and invoke that script block on demand with either &, the call operator, or ., the dot-sourcing operator.
When calling external programs, the two operators exhibit the same behavior; when calling PowerShell-native commands, & executes the code in a child scope, whereas . (typically) executes in the caller's current scope.
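A minimal sketch of that scope difference:
$sb = { $msg = 'hello' }   # a script block that sets a variable
& $sb; $msg                # -> (nothing): $msg was set in a discarded child scope
. $sb; $msg                # -> hello: dot-sourcing ran the block in the current scope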
That Invoke-Expression $command 2>&1 doesn't work as expected looks like a bug (as of PowerShell Core 7.0.0-preview.3) and has been reported in this GitHub issue.
As for a workaround for your problem:
PetSerAl, as countless times before, has provided a solution in a comment on the question:
& { Invoke-Expression $command } 2>&1 | Tee-Object -Variable out_content
{ ... } is a script-block literal that contains the Invoke-Expression call, and it is invoked with &, the call operator; this makes it possible to apply the 2>&1 stream redirection to the & call as a whole, which bypasses the bug.
If $command contained a PowerShell-native command that you wanted to execute directly in the current scope, such as a function definition, you'd use . instead of &.
I'd like to run an external process and capture its command output to a variable in PowerShell. I'm currently using this:
$params = "/verify $pc /domain:hosp.uhhg.org"
start-process "netdom.exe" $params -WindowStyle Hidden -Wait
I've confirmed the command is executing but I need to capture the output into a variable. This means I can't use the -RedirectOutput because this only redirects to a file.
Note: The command in the question uses Start-Process, which prevents direct capturing of the target program's output. Generally, do not use Start-Process to execute console applications synchronously - just invoke them directly, as in any shell. Doing so keeps the application's output streams connected to PowerShell's streams, allowing their output to be captured by simple assignment $output = netdom ... (and with 2> for stderr output), as detailed below.
Fundamentally, capturing output from external programs works the same as with PowerShell-native commands (you may want a refresher on how to execute external programs; <command> is a placeholder for any valid command below):
# IMPORTANT:
# <command> is a *placeholder* for any valid command; e.g.:
# $cmdOutput = Get-Date
# $cmdOutput = attrib.exe +R readonly.txt
$cmdOutput = <command> # captures the command's success stream / stdout output
Note that $cmdOutput receives an array of objects if <command> produces more than 1 output object, which in the case of an external program means a string[1] array containing the program's output lines.
If you want to make sure that the result is always an array - even if only one object is output - type-constrain the variable as an array ([object[]]), or enclose the command in @(...), the array-subexpression operator:[2]
[array] $cmdOutput = <command>
$cmdOutput = @(<command>) # alternative
By contrast, if you want $cmdOutput to always receive a single - potentially multi-line - string, use Out-String, though note that a trailing newline is invariably added (GitHub issue #14444 discusses this problematic behavior):
# Note: Adds a trailing newline.
$cmdOutput = <command> | Out-String
With calls to external programs - which by definition only ever return strings in PowerShell[1] - you can avoid that by using the -join operator instead:
# NO trailing newline.
$cmdOutput = (<command>) -join "`n"
Note: For simplicity, the above uses "`n" to create Unix-style LF-only newlines, which PowerShell happily accepts on all platforms; if you need platform-appropriate newlines (CRLF on Windows, LF on Unix), use [Environment]::NewLine instead.
To capture output in a variable and print to the screen:
<command> | Tee-Object -Variable cmdOutput # Note how the var name is NOT $-prefixed
Or, if <command> is a cmdlet or advanced function, you can use common parameter
-OutVariable / -ov:
<command> -OutVariable cmdOutput # cmdlets and advanced functions only
Note that with -OutVariable, unlike in the other scenarios, $cmdOutput is always a collection, even if only one object is output. Specifically, an instance of the array-like [System.Collections.ArrayList] type is returned.
See this GitHub issue for a discussion of this discrepancy.
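A small sketch that makes the discrepancy visible:
$direct = Get-Date                        # a single [datetime] object
Get-Date -OutVariable viaOutVar > $null   # discard the pipeline output for this demo
$direct.GetType().Name                    # -> DateTime
$viaOutVar.GetType().Name                 # -> ArrayList (even though one object was output)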
To capture the output from multiple commands, use either a subexpression ($(...)) or call a script block ({ ... }) with & or .:
$cmdOutput = $(<command>; ...) # subexpression
$cmdOutput = & {<command>; ...} # script block with & - creates child scope for vars.
$cmdOutput = . {<command>; ...} # script block with . - no child scope
Note that the general need to prefix with & (the call operator) an individual command whose name/path is quoted - e.g., $cmdOutput = & 'netdom.exe' ... - is not related to external programs per se (it equally applies to PowerShell scripts), but is a syntax requirement: PowerShell parses a statement that starts with a quoted string in expression mode by default, whereas argument mode is needed to invoke commands (cmdlets, external programs, functions, aliases), which is what & ensures.
The key difference between $(...) and & { ... } / . { ... } is that the former collects all output in memory before returning it as a whole, whereas the latter streams the output, making it suitable for one-by-one pipeline processing.
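You can observe the difference by piping to a command that stops early; a minimal sketch:
# Streams: "item 1" appears instantly and the pipeline stops early.
& { 1..3 | ForEach-Object { "item $_"; Start-Sleep 1 } } | Select-Object -First 1

# Collects: all items (and all the sleeping) complete before any output is seen.
$(1..3 | ForEach-Object { "item $_"; Start-Sleep 1 }) | Select-Object -First 1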
Redirections also work the same, fundamentally (but see caveats below):
$cmdOutput = <command> 2>&1 # redirect error stream (2) to success stream (1)
However, for external commands the following is more likely to work as expected:
$cmdOutput = cmd /c <command> '2>&1' # Let cmd.exe handle redirection - see below.
Considerations specific to external programs:
External programs, because they operate outside PowerShell's type system, only ever return strings via their success stream (stdout); similarly, PowerShell only ever sends strings to external programs via the pipeline.[1]
Character-encoding issues can therefore come into play:
On sending data via the pipeline to external programs, PowerShell uses the encoding stored in the $OutputEncoding preference variable, which in Windows PowerShell defaults to ASCII(!) and in PowerShell [Core] to UTF-8.
On receiving data from an external program, PowerShell uses the encoding stored in [Console]::OutputEncoding to decode the data, which in both PowerShell editions defaults to the system's active OEM code page.
See this answer for more information; this answer discusses the still-in-beta (as of this writing) Windows 10 feature that allows you to set UTF-8 as both the ANSI and the OEM code page system-wide.
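For example, if a program is known to emit UTF-8, you can temporarily adjust the decoding; a sketch (git is just a stand-in for any UTF-8-emitting program):
$prev = [Console]::OutputEncoding
[Console]::OutputEncoding = [System.Text.Encoding]::UTF8   # decode stdout as UTF-8
try {
  $cmdOutput = git log -1 --format=%s
} finally {
  [Console]::OutputEncoding = $prev   # restore the original decoding
}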
If the output contains more than 1 line, PowerShell by default splits it into an array of strings. More accurately, the output lines are streamed one by one, and, when captured, stored in an array of type [System.Object[]] whose elements are strings ([System.String]).
If you want the output to be a single, potentially multi-line string, use the -join operator (you can alternatively pipe to Out-String, but that invariably adds a trailing newline):
$cmdOutput = (<command>) -join [Environment]::NewLine
Merging stderr into stdout with 2>&1, so as to also capture it as part of the success stream, comes with caveats:
To do this at the source, let cmd.exe handle the redirection, using the following idioms (works analogously with sh on Unix-like platforms):
$cmdOutput = cmd /c <command> '2>&1' # *array* of strings (typically)
$cmdOutput = (cmd /c <command> '2>&1') -join "`r`n" # single string
cmd /c invokes cmd.exe with command <command> and exits after <command> has finished.
Note the single quotes around 2>&1, which ensures that the redirection is passed to cmd.exe rather than being interpreted by PowerShell.
Note that involving cmd.exe means that its rules for escaping characters and expanding environment variables come into play, by default in addition to PowerShell's own requirements; in PS v3+ you can use special parameter --% (the so-called stop-parsing symbol) to turn off interpretation of the remaining parameters by PowerShell, except for cmd.exe-style environment-variable references such as %PATH%.
Note that since you're merging stdout and stderr at the source with this approach, you won't be able to distinguish between stdout-originated and stderr-originated lines in PowerShell; if you do need this distinction, use PowerShell's own 2>&1 redirection - see below.
Use PowerShell's 2>&1 redirection to know which lines came from what stream:
Stderr output is captured as error records ([System.Management.Automation.ErrorRecord]), not strings, so the output array may contain a mix of strings (each string representing a stdout line) and error records (each record representing a stderr line). Note that, as requested by 2>&1, both the strings and the error records are received through PowerShell's success output stream.
Note: The following only applies to Windows PowerShell - these problems have been corrected in PowerShell [Core] v6+, though the filtering technique by object type shown below ($_ -is [System.Management.Automation.ErrorRecord]) can also be useful there.
In the console, the error records print in red, and the 1st one by default produces multi-line display, in the same format that a cmdlet's non-terminating error would display; subsequent error records print in red as well, but only print their error message, on a single line.
When outputting to the console, the strings typically come first in the output array, followed by the error records (at least among a batch of stdout/stderr lines output "at the same time"), but, fortunately, when you capture the output, it is properly interleaved, using the same output order you would get without 2>&1; in other words: when outputting to the console, the output does NOT reflect the order in which stdout and stderr lines were generated by the external command.
If you capture the entire output in a single string with Out-String, PowerShell will add extra lines, because the string representation of an error record contains extra information such as location (At line:...) and category (+ CategoryInfo ...); curiously, this only applies to the first error record.
To work around this problem, apply the .ToString() method to each output object instead of piping to Out-String:
$cmdOutput = <command> 2>&1 | % { $_.ToString() };
in PS v3+ you can simplify to:
$cmdOutput = <command> 2>&1 | % ToString
(As a bonus, if the output isn't captured, this produces properly interleaved output even when printing to the console.)
Alternatively, filter the error records out and send them to PowerShell's error stream with Write-Error (as a bonus, if the output isn't captured, this produces properly interleaved output even when printing to the console):
$cmdOutput = <command> 2>&1 | ForEach-Object {
if ($_ -is [System.Management.Automation.ErrorRecord]) {
Write-Error $_
} else {
$_
}
}
An aside re argument-passing, as of PowerShell 7.2.x:
Passing arguments to external programs is broken with respect to empty-string arguments and arguments that contain embedded " characters.
Additionally, the (nonstandard) quoting needs of executables such as msiexec.exe and batch files aren't accommodated.
For the former problem only, a fix may be coming (though the fix would only be complete on Unix-like platforms), as discussed in this answer, which also details all the current problems and workarounds.
If installing a third-party module is an option, the ie function from the Native module (Install-Module Native) offers a comprehensive solution.
[1] As of PowerShell 7.1, PowerShell knows only strings when communicating with external programs. There is generally no concept of raw byte data in a PowerShell pipeline. If you want raw byte data returned from an external program, you must shell out to cmd.exe /c (Windows) or sh -c (Unix), save to a file there, then read that file in PowerShell. See this answer for more information.
[2] There are subtle differences between the two approaches (which you may combine), though they usually won't matter: If the command has no output, the [array] type-constraint approach results in $null getting stored in the target variable, whereas it is an empty [object[]] array in the case of @(...). Additionally, the [array] type constraint means that future (non-empty) assignments to the same variable are coerced to an array too.
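A minimal sketch of the no-output difference just described:
[array] $a = & { }   # a command with no output
$null -eq $a         # -> True: $null was stored
$b = @(& { })        # same command, enclosed in @(...)
$b.Count             # -> 0: an empty [object[]] array was stored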
Have you tried:
$OutputVariable = (Shell command) | Out-String
If you want to redirect the error output as well, you have to do:
$cmdOutput = command 2>&1
Or, if the program name has spaces in it:
$cmdOutput = & "command with spaces" 2>&1
Or try this. It will capture output into variable $scriptOutput:
& "netdom.exe" $params | Tee-Object -Variable scriptOutput | Out-Null
$scriptOutput
Another real-life example:
$result = & "$env:cust_tls_store\Tools\WDK\x64\devcon.exe" enable $strHwid 2>&1 | Out-String
Notice that this example includes a path (which begins with an environment variable). Notice that the quotes must surround the path and the EXE file, but not the parameters!
Note: Don't forget the & character in front of the command, but outside of the quotes.
The error output is also collected.
It took me a while to get this combination working, so I thought that I would share it.
I tried the answers, but in my case I did not get the raw output. Instead it was converted to a PowerShell exception.
The raw result I got with:
$rawOutput = (cmd /c <command> 2`>`&1)
I got the following to work:
$Command1 = "C:\ProgramData\Amazon\Tools\ebsnvme-id.exe"
$result = & Invoke-Expression $Command1 | Out-String
$result gives you the needful
I use the following:
Function GetProgramOutput([string]$exe, [string]$arguments)
{
    $process = New-Object -TypeName System.Diagnostics.Process
    $process.StartInfo.FileName = $exe
    $process.StartInfo.Arguments = $arguments
    $process.StartInfo.UseShellExecute = $false
    $process.StartInfo.RedirectStandardOutput = $true
    $process.StartInfo.RedirectStandardError = $true
    $process.Start()    # Note: .Start() returns a Boolean, which becomes part of the
                        # function's output - hence the [-2] / [-1] indexing below.
    $output = $process.StandardOutput.ReadToEnd()
    $err = $process.StandardError.ReadToEnd()
    $process.WaitForExit()
    $output
    $err
}
$exe = "C:\Program Files\7-Zip\7z.exe"
$arguments = "i"
$runResult = (GetProgramOutput $exe $arguments)
$stdout = $runResult[-2]
$stderr = $runResult[-1]
[System.Console]::WriteLine("Standard out: " + $stdout)
[System.Console]::WriteLine("Standard error: " + $stderr)
This thing worked for me:
$scriptOutput = (cmd /s /c $FilePath $ArgumentList)
If all you are trying to do is capture the output from a command, then this will work well.
I use it for changing system time, as [timezoneinfo]::local always produces the same information, even after you have made changes to the system. This is the only way I can validate and log the change in time zone:
$NewTime = (powershell.exe -command [timezoneinfo]::local)
$NewTime | Tee-Object -FilePath $strLFpath\$strLFName -Append
Meaning that I have to open a new PowerShell session to reload the system variables.
What did the trick for me, and would work when using external commands and also when both standard error and standard output streams could be the result of running the command (or a mix of them), was the following:
$output = (command 2>&1)
I want to run multiple PowerShell commands sequentially in their own PowerShell windows and do not want those windows to be closed after running.
Example:
Start-Process powershell {Write-Host "hello"}; Start-Process powershell {Write-Host "hello"}; Start-Process powershell {Write-Host "hello"}
The PowerShell windows get closed right after running. I want them to remain open.
Edit: Multiple commands are not always same and they may vary in number.
# Asynchronously starts 3 new PowerShell windows that
# print "hello #<n>" to the console and stay open.
1..3 | ForEach-Object {
Start-Process powershell -Args '-noexit', '-command', "Write-Host 'hello #$_'"
}
-noexit is required to keep a PowerShell session open after executing a command with -command (run powershell.exe -? to see all CLI parameters)
Note how the arguments are specified individually, as ,-separated elements of an array that is passed to
-Args (short for -ArgumentList, though the parameter name can be omitted altogether in this case).
Note how the Write-Host command is passed as a string - script blocks aren't supported as such in this scenario; you can pass one, as you tried, but it will be quietly converted to a string, which simply means that its literal content is used (everything between { and }).
In other words: passing {Write-Host "hello"} is the same as 'Write-Host "hello"', but to avoid confusion you should pass a string.
You can only pass a script block as such if you invoke powershell.exe directly, not via Start-Process; you need Start-Process, however, to run the new session in a new window and to start it asynchronously.
Also, the string was changed to a double-quoted string ("...") with embedded single-quoting ('...') to ensure that the reference to $_ - the automatic variable representing the pipeline object at hand (1, 2, or 3) - is expanded (interpolated).
Using the pipeline (|) with an array of inputs (1..3, which evaluates to array 1, 2, 3) with the ForEach-Object cmdlet is just an example - you can still invoke the individual commands individually, one after the other, on individual lines, or separated with ; - thanks to Start-Process they'll still launch asynchronously.
However, if the individual commands share logic, the pipeline approach can simplify matters; you can put the shared logic in the body of the ForEach-Object call and pass the variable parts as input via the pipeline.
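For example, a minimal sketch of that pipeline approach (the messages are arbitrary):
# Each input string becomes the text the new window prints.
'step 1', 'step 2', 'step 3' | ForEach-Object {
  Start-Process powershell -Args '-noexit', '-command', "Write-Host '$_'"
}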
Put a Read-Host at the end of the command sequence - it will wait for you to input something before continuing execution (and presumably exiting?). To copy/paste the example in this link, you could do something like this, which will pause execution until you enter something: $Age = Read-Host "Please enter your age" -> Ref: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/read-host?view=powershell-6