command execution ordering inside a PowerShell scriptblock - powershell

I got excited about the PowerShell ScriptBlock at first, but I was recently confused by its execution ordering inside blocks. For example:
$test_block = {
write-host "show 1"
ps
write-host "show 2"
Get-Date
}
The output of calling $test_block.Invoke():
show 1
show 2
<result of command 'ps'>
<result of command 'get-date'>
Do commands that output something run first?

This behaviour occurs because Write-Host doesn't put its output on the pipeline. The output of the other commands is placed on the pipeline and is therefore not written to the screen until the function (.Invoke()) returns.
To get the behaviour I believe you were expecting, use Write-Output instead; the results of all the commands will then be returned in the pipeline.
$test_block = {
write-output "show 1"
ps
write-output "show 2"
Get-Date
}
$test_block.Invoke()

To complement David Martin's helpful answer:
Avoiding Write-Host (which is often the wrong tool to use) in favor of Write-Output, i.e. outputting to the success output stream - rather than printing to the display with Write-Host[1] - solves your immediate problem.
However, you could have avoided the problem by using &, the call operator instead of the .Invoke() method:
# Invokes the script block and *streams* its output.
& $test_block
Using & is generally preferable, not just to avoid .Invoke()'s collect-all-success-output-stream-first behavior - see next section.
As an aside:
This answer describes a common problem with similar symptoms (success output (pipeline output) appearing out of order relative to other streams), although it is technically unrelated and also occurs with &:
# !! The Write-Host output prints FIRST, due to implicit, asynchronous
# !! table formatting of the [pscustomobject] instance.
& { [pscustomobject] @{ Foo = 'Bar' }; Write-Host 'Why do I print first?' }
Why &, not .Invoke(), should be used to invoke script blocks ({ ... }):
Script blocks ({ ... }) are normally invoked with &, the call operator, in argument (parsing) mode (like cmdlets and external programs), not via their .Invoke() method, which allows for more familiar syntax; e.g.:
& $test_block rather than $test_block.Invoke()
with arguments: & $test_block arg1 ... rather than $test_block.Invoke(arg1, ...)
Perhaps more importantly, using this operator-based invocation syntax (& { ... } ... or . { ... } ...) has the following advantages:
It preserves normal streaming semantics, meaning that success output (too) is emitted from the script block as it is being produced, whereas .Invoke() collects all success output first, in a [System.Collections.ObjectModel.Collection[psobject]] instance, which it then returns - by contrast, the Write-Host output goes straight to the display in both cases.
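A minimal way to see this difference at an interactive prompt (the strings here are just illustrative):
# .Invoke(): the Write-Host text prints immediately, while the success output
# is collected first and only appears once the resulting collection is returned.
{ 'from the pipeline'; Write-Host 'from Write-Host' }.Invoke()
# &: both outputs appear in source-code order, because success output streams.
& { 'from the pipeline'; Write-Host 'from Write-Host' }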
As a beneficial side effect, your specific output-ordering problem goes away, but note that in PSv5+ there can generally still be an output-ordering problem, although its cause is unrelated:
Implicit tabular output (implied Format-Table) for output types without predefined format data is asynchronous in an effort to determine suitable column widths (your specific code happens to only use cmdlets with predefined format data).
See this answer for more information; a simple repro:
[pscustomobject] @{ foo = 1 }; Write-Host 'Should print after, but prints first.'
It allows you to pass named arguments (e.g. -Foo Bar), whereas .Invoke() supports only positional (unnamed) ones (e.g. Bar).
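For instance, with a hypothetical script block that declares two parameters:
$sb = { param($Foo, $Bar) "Foo=$Foo, Bar=$Bar" }
& $sb -Bar 2 -Foo 1   # named arguments -> Foo=1, Bar=2
$sb.Invoke(1, 2)      # positional arguments only -> Foo=1, Bar=2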
It preserves normal semantics for script-terminating errors:
# OK: & throw aborts the entire script; 'after' never prints.
& { throw 'fatal' }; 'after'
# !! .Invoke() "eats" the script-terminating error and
# !! effectively converts it to a *statement*-terminating one.
# !! Therefore, execution continues, and 'after' prints.
{ throw 'fatal' }.Invoke(); 'after'
Additionally, using operator-based invocation gives you the option to use ., the dot-sourcing operator, in lieu of &, so as to run a script block directly in the caller's scope, whereas an .Invoke() method call only runs in a child scope (as & does):
# Dot-sourcing the script block runs it in the caller's scope.
. { $foo='bar' }; $foo # -> 'bar'
Use of .Invoke() is best limited to PowerShell SDK projects (which are typically C#-based, where use of PowerShell operators isn't an option).
[1] Technically, since PowerShell v5 Write-Host outputs to the information stream (stream number 6), which, however, prints to the display by default, while getting ignored in the pipeline and in redirections. See this answer for a juxtaposition of Write-Host and Write-Output, and why the latter is typically not even needed.
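A quick sketch of that distinction (PSv5+; variable names are illustrative):
$fromSuccess = Write-Host 'to the display'   # prints; $fromSuccess is $null
$fromInfo = Write-Host 'captured' 6>&1       # doesn't print; the information record is captured instead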

Run ps1 file in foreground

How to run a ps1 file in foreground?
I noticed that when I execute my ps1 file, instead of viewing the log of the ps1 file execution, a background job is started.
Is there anyway to run a ps1 file and get the same behavior we have when executing a sh or batch file?
Updates:
My ps1 file:
$scratchOrgName=$args[0]
Write-Host "Hello " & $scratchOrgName
ps1 file execution:
The & starts a new process. (It's called the background operator)
Change the code into something like
Write-Host "Hello" $scratchOrgName
or
Write-Host "Hello $scratchOrgName"
tl;dr
Unless you explicitly request that commands be run in the background (as you accidentally did, see next section), PowerShell commands do run in the foreground.
To achieve what you were (presumably) trying to do:
$scratchOrgName=$args[0]
"Hello $scratchOrgName"
Michaël Hompus' helpful answer provides the crucial pointers, but let me attempt a systematic overview:
Write-Host "Hello " & $scratchOrgName is composed of two statements:
Write-Host "Hello " & submits command Write-Host "Hello " as a background job, in PowerShell (Core) v6+ (in Windows PowerShell (v5.1-), you'd get an error, saying that & is reserved for future use). An object representing the newly created job is output and prints to the screen, as shown in your screenshot.
The post-positional use of & - i.e. placed after a command - is indeed the background operator, and is therefore equivalent to Start-Job { Write-Host "Hello " }
This contrasts with the pre-positional use of &, which then acts as the call operator, for invoking command names or paths that are quoted and/or contain variable references (e.g. & 'C:\Program Files\Node.js\node.exe')
$scratchOrgName - by virtue of PowerShell's implicit output behavior - outputs the value of that variable, which prints to the screen by default.
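To see the background-job behavior of the first statement in isolation (a sketch; PowerShell 7+ only):
Write-Host "Hello " &         # trailing &: runs as a background job; a job object prints instead
Get-Job | Receive-Job -Wait   # retrieving the job's output relays the Write-Host text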
As for what you intended:
& is VBScript's string-concatenation operator; its PowerShell equivalent is +
A string-concatenation operation is an expression, and as such it must be enclosed in (...) in order to be passed as an argument to a command such as Write-Host.
Therefore, the direct PowerShell expression of your intent would be:
Write-Host ("Hello " + $scratchOrgName)
But, as also shown in Michaël's answer, this is more easily expressed via an expandable (double-quoted) string ("..."):
Write-Host "Hello $scratchOrgName"
Taking a step back: Write-Host is typically the wrong tool to use, unless the intent is to write to the display only, bypassing the success output stream and with it the ability to send output to other commands, capture it in a variable, or redirect it to a file.
To output a value, use it by itself, taking advantage of the aforementioned implicit output behavior (or use Write-Output, though that is rarely needed):
"Hello $scratchOrgName"
See this answer for more information.

Powershell function call changing passed string into int

So I am using the kind of buggy Sapien powershell studio to make a powershell driven GUI application, and I am attempting to perform an ADSI query.
$nameOfDeviceInput is a System.Windows.Forms.TextBox
On one form, I have the following function:
$buttonPerformAction_Click={
if (FindInAD($nameOfDeviceInput.Text).Count -gt 0)
{
$buttonPerformAction.BackColor = 'Red'
$buttonPerformAction.Text = "System already exists in AD with that name. Try another name"
return
}
.....
}
On the "main" form, I have the function FindInAD
function FindInAd($nameOfSystem)
{
Write-Host "seeking system" $nameOfSystem
([adsisearcher]"(CN=$nameOfSystem)").FindAll()
}
FindInAd() is failing because for whatever reason, $nameOfSystem is set to 1, and if I don't explicitly cast it as a string, it gets implicitly cast to Int32 (obviously)
I have tried the following:
Fully qualifying the textbox input by notating the form it belongs to ( $adObjectModifier )
$buttonPerformAction_Click={
if (FindInAD($adObjectModifier.$nameOfDeviceInput.Text).Count -gt 0)
{
$buttonPerformAction.BackColor = 'Red'
$buttonPerformAction.Text = "System already exists in AD with that name. Try another name"
return
}
.....
}
Explicitly casting the $nameOfSystem parameter as a type of [string]
function FindInAd([string]$nameOfSystem)
{
Write-Host "seeking system" $nameOfSystem
([adsisearcher]"(CN=$nameOfSystem)").FindAll()
}
Passing a raw string into FindInAD from the AdObjectModifier form.
....
if (FindInAD("Test").Count -gt 0)
....
There is nothing else on the output pipeline at the time (at least not from me) between the method invocations. It is EventHandler > Function Call with String parameter
Why are the strings I'm passing getting changed to a digit???
EDIT: I think my passed parameter is being automatically replaced with the resulting boolean somehow, but this doesn't make any sense to me....
You have a syntax problem:
FindInAD($nameOfDeviceInput.Text).Count # WRONG
Note: Wrong in this context means: the syntax is formally valid, but doesn't do what you expect - see the bottom section.
It should be:
(FindInAD $nameOfDeviceInput.Text).Count
PowerShell commands - functions, cmdlets, scripts and external programs - are invoked like shell commands - foo arg1 arg2 - and not like C# methods - foo('arg1', 'arg2').
That is:
Do not put (...) around the list of arguments.
However, you do need (...) around the call as a whole if you want a command call to participate in an expression, as shown above with the access to property .Count - see this answer for more information.
Separate arguments with spaces, both from each other and from the command name - do not use , to separate them.
A , placed between arguments functions differently: it constructs an array that is passed as a single argument - see below.
You may pass simple strings (ones that contain neither spaces nor PowerShell metacharacters such as ; or &) as barewords; that is, quoting them is optional; e.g., instead of foo 'bar', you can call foo bar - see this answer for how PowerShell parses unquoted command arguments.
Also, if a target function or script has explicitly declared parameters (which binary cmdlets invariably do), such as -bar and -baz, you can pass your values as named arguments, i.e. by prepending them with the target parameter name; doing so is good practice in scripts: foo -bar arg1 -baz arg2
By contrast, calling methods of objects uses the syntax familiar from regular programming languages such as C# ($obj.foo('arg1', 'arg2'))
This difference relates to PowerShell's two fundamental parsing modes, explained in detail in this answer:
Commands are parsed in argument mode - as in shells.
Method calls and operator-based expressions are parsed in expression mode - as in regular programming languages.
These modes are required in order to allow PowerShell to serve double duty: as a shell on the one hand, and as a scripting (programming) language on the other.
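A quick illustration of the two modes (Write-Output is used here only to make the command call explicit):
Write-Output (1 + 2)   # argument mode: a command call; (...) switches to expression mode for 1 + 2
(1 + 2) * 3            # expression mode throughout, as in a regular programming language -> 9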
PowerShell can help you avoid this syntax problem:
Note that the problem isn't that using method syntax to call a command is invalid syntax, but that it doesn't work as intended, which can be difficult to diagnose.
In short: When you call command foo as foo('foo', 'bar'), ('foo', 'bar') is a 2-element array, which is then passed to foo as a single argument.
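To see this with a hypothetical function foo that simply reports its two parameters:
function foo { param($first, $second) "first: [$first]  second: [$second]" }
foo('bar', 'baz')   # -> first: [bar baz]  second: []   (ONE argument: a 2-element array)
foo 'bar' 'baz'     # -> first: [bar]  second: [baz]    (two separate arguments)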
To prevent the problem to begin with, you can use Set-StrictMode with -Version 2 or higher, which makes PowerShell report an error if you accidentally use method syntax when calling a command:
# Turn on the check for accidental method syntax.
# Note: This also turns on ADDITIONAL checks - see below.
Set-StrictMode -Version 2
# This call now produces an ERROR, because the proper syntax would be:
# foo 'a' 'b'
foo('a', 'b')
Caveats:
Set-StrictMode -Version 2 comprises additional strictness checks that you must then also conform to, notably:
You must not reference non-existent variables.
You must not reference non-existent properties; see GitHub issue #2798 for an associated pitfall in connection with PowerShell's unified handling of scalars and collections.
An error is reported only for pseudo method calls with multiple arguments (e.g., foo('bar', 'baz')), not with only one; e.g., foo('bar') is accepted, because the single-argument case generally still (accidentally) works.
The errors reported for strictness violations are statement-terminating errors: that is, they only terminate the statement at hand, but by default the script continues; to ensure that overall execution aborts - on any type of error - you'd have to set $ErrorActionPreference = 'Stop' at the start of your code. See this answer for more information.
As for what you tried:
FindInAD($nameOfDeviceInput.Text).Count
is the same as:
FindInAD ($nameOfDeviceInput.Text).Count
That is, the result of expression ($nameOfDeviceInput.Text).Count is passed as an argument to function FindInAD.
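In other words, it is the .Count value - not the textbox text - that gets bound to $nameOfSystem; a quick sketch with an illustrative value:
$text = 'PC-042'   # hypothetical textbox value
($text).Count      # -> 1, because scalars report a .Count of 1 in PSv3+
# So FindInAD($nameOfDeviceInput.Text).Count effectively calls FindInAD with argument 1.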

Use a variable in PowerShell to pass multiple arguments to an external program

I downloaded the npm package for merge junit reports - https://www.npmjs.com/package/junit-merge.
The problem is that I have multiple files to merge and I am trying to use string variable to hold file names to merge.
When I write the script myslef like:
junit-merge a.xml b.xml c.xml
This works, the merged file is being created, but when I do it like
$command = "a.xml b.xml c.xml"
junit-merge $command
This does not work. The error is
Error: File not found
Has anyone faced similar issues?
# WRONG
$command = "a.xml b.xml c.xml"; junit-merge $command
results in the command line junit-merge "a.xml b.xml c.xml"[1], i.e. it passes a string with verbatim value a.xml b.xml c.xml as a single argument to junit-merge, which is not the intent.
PowerShell does not act as POSIX-like shells such as bash do in this regard: In bash, the value of variable $command - due to being referenced unquoted - would be subject to word splitting (one of the so-called shell expansions) and would indeed result in 3 distinct arguments (though even there an array-based invocation would be preferable).
PowerShell supports no bash-like shell expansions[2]; it has different, generally more flexible constructs, such as the splatting technique discussed below.
Instead, define your arguments as individual elements of an array, as justnotme advises:
# Define the *array* of *individual* arguments.
$command = "a.xml", "b.xml", "c.xml"
# Pass the array to junit-merge, which causes PowerShell
# to pass its elements as *individual arguments*; it is the equivalent of:
# junit-merge a.xml b.xml c.xml
junit-merge $command
This is an application of a PowerShell technique called splatting, where you specify arguments to pass to a command via a variable:
Either (typically only used for external programs, as in your case):
As an array of arguments to pass individually as positional arguments, as shown above.
Or (more typically when calling PowerShell commands):
As a hashtable to pass named parameter values, in which case you must replace the $ sigil in the variable reference with @; e.g., in your case @command. For instance, the following is the equivalent of calling Get-ChildItem C:\ -Directory:
$paramVals = @{ LiteralPath = 'C:\'; Directory = $true }; Get-ChildItem @paramVals
Caveat re array-based splatting:
Due to a bug detailed in GitHub issue #6280, PowerShell doesn't pass empty arguments through to external programs (applies to all Windows PowerShell versions / and as of PowerShell (Core) 7.2.x; a fix may be coming in 7.3, via the $PSNativeCommandArgumentPassing preference variable, which in 7.2.x relies on an explicitly activated experimental feature).
E.g., foo.exe "" unexpectedly results in just foo.exe being called.
This problem equally affects array-based splatting, so that
$cmdArgs = "", "other"; foo.exe $cmdArgs results in foo.exe other rather than the expected foo.exe "" other.
Optional use of @ with array-based splatting:
You can use the @ sigil with arrays as well, so this would work too:
junit-merge @command
There is a subtle distinction, however. While it will rarely matter in practice, the safer choice is to use $, because it guards against (the admittedly hypothetical) accidental misinterpretation of a --% array element you intend to be a literal:
Only the @ syntax recognizes an array element --% as the special stop-parsing symbol.
Said symbol tells PowerShell not to parse the remaining arguments as it normally would and instead pass them through as-is - unexpanded, except for expanding cmd.exe-style variable references such as %USERNAME%.
This is normally only useful when not using splatting, typically in the context of being able to use command lines that were written for cmd.exe from PowerShell as-is, without having to account for PowerShell's syntactical differences.
In the context of splatting, however, the behavior resulting from --% is non-obvious and best avoided:
As in direct argument passing, the --% is removed from the resulting command line.
Argument boundaries are lost, so that a single array element foo bar, which normally gets placed as "foo bar" on the command line, is placed as foo bar, i.e. effectively as 2 arguments.
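A sketch of that pitfall, using a hypothetical external program foo.exe:
$cmdArgs = 'a.xml', '--%', 'foo bar'
foo.exe @cmdArgs   # --% is removed and element 'foo bar' ends up as TWO arguments: foo bar
foo.exe $cmdArgs   # --% is passed literally and 'foo bar' remains a single argument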
[1] Your call implies the intent to pass the value of variable $command as a single argument, so when PowerShell builds the command line behind the scenes, it double-quotes the verbatim a.xml b.xml c.xml string contained in $command to ensure that. Note that these double quotes are unrelated to how you originally assigned a value to $command.
Unfortunately, this automatic quoting is broken for values with embedded " chars. - see this answer, for instance.
[2] As a nod to POSIX-like shells, PowerShell does perform one kind of shell expansion, but (a) only on Unix-like platforms (macOS, Linux) and (b) only when calling external programs: Unquoted wildcard patterns such as *.txt are indeed expanded to their matching filenames when you call an external program (e.g., /bin/echo *.txt), which is a feature that PowerShell calls native globbing.
I had a similar problem. This technique from powershell worked for me:
Invoke-Expression "junit-merge $command"
I also tried the following (from a powershell script) and it works:
cmd /c "junit-merge $command"

How can I pass unbound arguments from one script as parameters to another?

I have little experience with PowerShell in particular.
I'm trying to refactor some very commonly re-used code into a single script that can be sourced where it's needed, instead of copying and pasting this same code into n different scripts.
The scenario I'm trying to get looks (I think) like this:
#common.ps1:
param(
# Sure'd be great if clients didn't need to know about these
$some_params_here
...
)
function Common-Func-Uses-Params {
...
}
⋮
# foo/bar/bat.ps1:
# sure would love not to have to redefine all the common params() here...
. common.ps1 <pass-the-arguments>
Common-Func-Uses-Params $specific_Foo/Bar/Bat_Data
As the pseudo-comments above indicate, I've only been able to do this so far by capturing the params in the calling script as well.
I want to be in a situation where I can update the common code (say with a -Debug or -DryRun or -Url or whatever parameter) and not have to worry about updating all of the client code to match.
Is this possible?
You're missing two key things:
$args - which captures all of (and only) the unbound arguments to the script
splatting (@) - which is used to pass arrays or hashtables to a command rather than flattening them like you'd get with $
When you combine these, you can easily pass all arguments onto another script, like so:
# foo.ps1
. common.ps1 @args
With a sourced file like this:
#common.ps1
param ([string]$foo = "foo")
echo "`$foo is $foo"
You get this output:
> foo.ps1 returns $foo is foo
> foo.ps1 -Foo bar returns $foo is bar
Note that, if you're trying to use the PowerShell ISE, it might take you a while to figure this out or debug any of it. When you're in the debugger, both $args and $MyInvocation.UnboundArguments will do their best to hide that information from you. They'll appear to be completely empty.
You can print the args with >> echo "$(@args)", but that also has the very weird side effect of telling the debugger to continue. I think the splatting is adding an extra newline and that's ending up in the Command Window.
The best workaround I have for that is to add $theargs = $args at the top of your script and remember to use $theargs in the debugger.

Build up a string to be passed to call operator

I need to build a string that is actually a command-line, and then execute the contents of that command-line. I'd hoped the call operator (&) would help, but it appears not. Here is a simple contrived example. The following works as expected, it pings a website:
$command = "ping"
$website = "www.bbc.co.uk"
& $command $website
however if I change it to this:
$command = "ping"
$website = "www.bbc.co.uk"
$cmd = "$command $website"
& $cmd
I get an error:
The term 'ping www.bbc.co.uk' is not recognized as the name of a
cmdlet, function, script file, or operable program.
Is there a way to dynamically build up a command-line as a string, and then execute it?
Yes, but you need to use Invoke-Expression (which is just like eval), instead of the call operator. Note that you also need to ensure that all your quoting is correct in that case. E.g.
$cmd = "'$command' '$website'"
would work in your trivial example, unless $command or $website contained single quotes. The problem here is essentially that everything you put into the string is subject to the usual parsing rules of PowerShell.
Generally, if you can, stay as far away from Invoke-Expression as you can. There are a few problems that need it, but invoking external programs ... not so much.
A much better alternative, especially if you have an arbitrary number of arguments, is to just collect the arguments in an array and use the splat operator (note the @ in the code example below):
$command = 'ping'
$arguments = '-t','www.bbc.co.uk'
& $command @arguments
This ensures that arguments are properly quoted when necessary and generally avoids a lot of headaches you're going to get with Invoke-Expression.
(Side note: Whenever you have a problem in PowerShell and think »Oh, I'm just going to use a string«, it's often time to rethink that. This includes handling file names, or command lines. PowerShell has objects, reducing it to the capabilities of earlier shells just yields the same pain you have elsewhere too, e.g. multiple levels of escaping, sometimes with different syntaxes, etc. And most of the time there are better ways of solving the problem.)