What does hyphen/dash parameter mean to PowerShell?

I've found out that, if I pass only a dash to an argument of PowerShell 5.1 script on Windows 10, like this:
powershell.exe -File Test.ps1 -
I get a strange error message saying:
C:\path\Test.ps1 : Cannot process argument because the value of argument "name" is not valid. Change the value of the "name" argument and run the operation again.
CategoryInfo : InvalidArgument: (:) [Test.ps1], PSArgumentException
FullyQualifiedErrorId : Argument,Test.ps1
The Test.ps1 is only:
echo "foo"
The actual problem I face though is that, when the script declares any mandatory parameter:
param (
    [Parameter(Mandatory)]
    $value
)
echo "foo"
Then executing the script the same way (with the - argument) does nothing at all: no output, no error message. It just hangs for a few seconds, and then control returns to the command prompt.
C:\path>powershell.exe -File Test.ps1 -
C:\path>_
What does the - mean to PowerShell (5.1)?
By contrast, with PowerShell 2.0 on Windows 7, I get the script's usage syntax in this case:
C:\path>powershell.exe -File Test.ps1 -
Test.ps1 [-value] <Object> [-Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-WarningAction <ActionPreference>] [-ErrorVariable <String>] [-WarningVariable <String>] [-OutVariable <String>] [-OutBuffer <Int32>]
C:\path>_
Which makes sense (a missing mandatory parameter).
And without the mandatory parameter declaration, the script works (prints its output):
C:\path>powershell.exe -File Test.ps1 -
foo
C:\path>_

The behavior should be considered a bug: something that starts with - but isn't a valid parameter name should be passed through as a positional argument rather than causing an error.
The bug affects:
Windows PowerShell (as of v5.1.18362.145) - it is unclear if a fix will ever be made.
As you state, whether you (a) get the error message (... the value of argument "name" is not valid ...) or (b) the - is quietly ignored depends on whether your parameter has a parameter attribute such as [Parameter(Mandatory)][1] and/or your param() block has a [CmdletBinding()] attribute (if either is present, (b) applies) - see the sketch after this list.
PowerShell Core 6.x - the problem will be fixed in v7 (current as of this writing: v7.0.0-preview.3); I don't know whether 6.2.2, the stable version current as of this writing, will get a fix - we'll see what happens to the bug report you've filed on GitHub.
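To make the two cases concrete, here is a minimal, self-contained repro sketch (my own illustration, not from the original post; it assumes Windows PowerShell 5.1 and a writable current directory - the file name Test.ps1 is arbitrary):
# Case (a): a simple script - passing just "-" via -File triggers the "name" argument error.
Set-Content -Path .\Test.ps1 -Value '"foo"'
powershell.exe -File .\Test.ps1 -
# Case (b): an advanced script (the [Parameter()] attribute makes it so) -
# per the question, the "-" then produces no output at all.
Set-Content -Path .\Test.ps1 -Value 'param([Parameter(Mandatory)] $value)', '"foo"'
powershell.exe -File .\Test.ps1 -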
As for a workaround (works analogously in PowerShell Core):
Use -Command instead of -File.
While that changes the semantics of how the command line is parsed[2], in simple cases such as this one the difference won't matter:
C:\> powershell -Command ./Test.ps1 - # note the "./"
Note the ./: because using -Command (-c) makes PowerShell parse the arguments as if they were PowerShell code, the usual restrictions regarding executing scripts by filename alone apply (to prevent accidental execution of a file in the current directory, you need a path component to explicitly signal that intent, hence the ./ or .\ prefix).
If your script file path needed quoting, you'd have to use quoting and prepend &, the call operator; e.g.:
C:\> powershell -Command "& \"./Test.ps1\" -"
[1] Adding a [Parameter()] attribute to a declared parameter implicitly makes the enclosing script/function an advanced one, in which case different parsing rules apply. The [CmdletBinding()] attribute, which is applied to a param(...) block as a whole, explicitly marks a script/function as an advanced one.
[2] See this answer for the differences between how -File and -Command arguments are parsed.

This isn't an answer, but I'm curious as well. If you've already figured it out, I'd be interested in what you found. Otherwise, in case it helps, powershell -h says everything after -File is the script path plus any arguments passed to it.
-File
Runs the specified script in the local scope ("dot-sourced"), so that the
functions and variables that the script creates are available in the
current session. Enter the script file path and any parameters.
File must be the last parameter in the command, because all characters
typed after the File parameter name are interpreted
as the script file path followed by the script parameters.
The tokenizer reads it in as a CommandArgument:
Powershell >> $Errors = $Null
Powershell >> [System.Management.Automation.PSParser]::Tokenize("powershell -file test.ps1 -", [ref]$Errors)[3]
Content : -
Type : CommandArgument
Start : 26
Length : 1
StartLine : 1
StartColumn : 27
EndLine : 1
EndColumn : 28
So it seems like the issue is further up the chain, but I couldn't find a simple way to call the parser functions to test.
I do see that there's a case that shouldn't occur where the error is swallowed and null is returned, which might cause it to just stop the way it does in your example.

Related

Run powershell script as administrator via batch file with parameter passing

When I run the script via a batch file without administrator rights, it passes the parameter, but when I run it as administrator, it does not pass the parameter.
I'm trying the command from the link below, but with no success:
run-script-within-batch-file-with-parameters
Command that executes the script, as an administrator, via batch file:
PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File "D:\z_Batchs e Scripts\Batchs\Normaliza_LUFS\ArqsNorms_LUFS_pass.ps1' '%_vLUF%' -Verb RunAs}"
The %_vLUF% is the parameter to be passed.
Error message:
At line:1 char:4
+ & {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolic ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Start-Process], ParameterBindingException
+ FullyQualifiedErrorId : PositionalParameterNotFound,Microsoft.PowerShell.Commands.StartProcessCommand
Command in powershell script to receive the parameter:
Param(
    [decimal]$env:_vLUF
)
What could be wrong, the command in the batch file or in the powershell script?
Test:
When the script is executed, without being an administrator, via batch file and the Parameter in the powershell script is defined as:
Parameter in powershell:
Param(
    [decimal]$env:_vLUF
)
Command in the batch file running the script without being an administrator:
powershell.exe -executionpolicy remotesigned -File "D:\z_Batchs e Scripts\Batchs\Normaliza_LUFS\ArqsNorms_LUFS_pass.ps1" %_vLUF%
Note:
No need to use a named argument with the target parameter name.
Result:
Conclusion:
When the script is run via a batch file without administrator rights, it works correctly even if the parameter used in the script is declared as an environment variable, e.g. [decimal]$env:_vLUF, and regardless of the parameter value being negative, e.g. -11.0.
Why PowerShell correctly interprets the minus sign in the argument when the script runs without administrator rights, but not when it runs as administrator, is a question I leave to the experts!
However, my question was very well answered by Mr. @mklement0.
Your .ps1 script's parameter declaration is flawed:
Param(
    [decimal]$env:_vLUF # !! WRONG - don't use $env:
)
See the bottom section for more information.
It should be:
Param(
    [decimal] $_vLUF # OK - regular PowerShell variable
)
Parameters in PowerShell are declared as regular variables, not as environment variables ($env:).
(While environment variables can be passed as an argument (parameter value), an alternative is to simply reference them by name directly in the body of your script.)
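For illustration, a small sketch (the parameter name is carried over from the question; the output strings are just examples) contrasting the two approaches:
Param(
    [decimal] $_vLUF    # regular parameter - receives the value passed as an argument
)
"Received via parameter: $_vLUF"
# Alternative (no parameter declared): read the environment variable directly in the
# script body, provided the caller (e.g. the batch file) has set it:
#   "Read from environment: $env:_vLUF"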
Your PowerShell CLI call has problems too, namely with quoting.
Try the following instead:
powershell -NoProfile -ExecutionPolicy Bypass -Command "Start-Process -Verb RunAs powershell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File \"D:\z_Batchs e Scripts\Batchs\Normaliza_LUFS\ArqsNorms_LUFS_pass.ps1\" -_vLUF %_vLUF%'"
Specifically:
Embedded " chars. must be escaped as \" (sic) when using the Windows PowerShell CLI (powershell.exe); however, given that %_vLUF% represents a [decimal], you needn't quote it at all.
However, you appear to have hit a bug that affects PowerShell versions up to at least 7.2.4 (current as of this writing): if the argument starts with -, such as in negative number -11.0, the -File CLI parameter invariably interprets it as a parameter name - even quoting doesn't help. See GitHub issue #17519.
The workaround, as used above is to use a named argument, i.e. to precede the value with the target parameter name: -_vLUF %_vLUF%
As an aside: There's no reason to use & { ... } in order to invoke code passed to PowerShell's CLI via the -Command (-c) parameter - just use ... directly, as shown above. Older versions of the CLI documentation erroneously suggested that & { ... } is required, but this has since been corrected.
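For example, the following two calls are equivalent, the second simply being the less noisy form:
powershell -NoProfile -Command "& { Get-Date }"
powershell -NoProfile -Command "Get-Date"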
As for the broken attempt to use [decimal]$env:_vLUF as a parameter declaration:
Param(
    [decimal]$env:_vLUF # !! EFFECTIVELY IGNORED
)
is effectively ignored.
However, if an environment variable _vLUF happens to be defined, it is accessible in the body of a script, independently of which parameters, if any, have been passed.
In direct invocation of your .ps1 script from your batch file, _vLUF indeed exists as an environment variable, because in cmd.exe (the interpreter of batch files), variables are invariably also environment variables - unlike in PowerShell.
That is, if %_vLUF% has a value in your batch file, a powershell child process you launch from it automatically sees it as $env:_vLUF.
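The same inheritance can be demonstrated from PowerShell itself (an analogous sketch; the variable name is just the one from the question):
$env:_vLUF = '-11.0'                          # set in the calling process
powershell -NoProfile -Command '$env:_vLUF'   # the child process prints -11.0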
By contrast, if you launch an elevated process via Start-Process from such a PowerShell child process, that new, elevated process does not see the caller's environment variables - by security-minded design.
Note:
That PowerShell even syntactically accepts [decimal]$env:_vLUF as a parameter declaration should be considered a bug.
What happens is that a regular variable named env:_vLUF is indeed created and bound if an argument is passed to it, but when the value of that variable is later retrieved in the body of your script, it is preempted by the environment variable.
As such, an invocation can break, namely if the parameter is type-constrained and you pass a value that cannot be converted to that type ([decimal] in the case at hand).
If the invocation doesn't break, the type constraint is ignored: $env:_vLUF is invariably of type [string], as all environment variables are.
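A quick way to see that last point (the value is hypothetical):
$env:_vLUF = '-11.0'                      # environment variables are always strings
$env:_vLUF.GetType().Name                 # -> String, despite any [decimal] cast in param()
([decimal] $env:_vLUF).GetType().Name     # -> Decimal, via an explicit conversion in the body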

Alternative to $MyInvocation

When deploying PowerShell scripts from my RMM (NinjaOne), the scripts are called from a .bat (batch file).
Example:
#powershell -ExecutionPolicy Bypass -File "test.ps1" -stringParam "testing" -switchParam > "output.txt" 2>&1
The script I am calling requires PowerShell 7+, so I need to restart the script by calling pwsh with the current parameters. I planned to accomplish this via the following:
Invoke-Command { & pwsh -Command $MyInvocation.Line } -NoNewScope
Unfortunately, $MyInvocation.Line does not return the correct result when a PowerShell script is called from a batch file. What alternatives exist that would work in this scenario?
Notes:
I am unable to make changes to the .bat file.
$PSBoundParameters also does not return the expected result.
Testing Script (called from batch):
Param(
    [string]$string,
    [switch]$switch
)
if (!($PSVersionTable.PSVersion.Major -ge 7)) {
    Write-Output "`n"
    Write-Output 'Attempting to restart in PowerShell 7'
    if (!$MyInvocation.Line) {
        Write-Output 'Parameters not carried over - cannot restart'
        Exit
    } else { Write-Output $MyInvocation.Line }
    Invoke-Command { & pwsh -Command $MyInvocation.Line } -NoNewScope # PowerShell 7
    Exit
}
Write-Output 'Parameters carried over:'
Write-Output $PSBoundParameters
Write-Output "`nSuccessfully restarted"
Edit: I've discovered the reason for $MyInvocation / $PSBoundParameters not being set properly is due to the use of -File instead of -Command when my RMM provider calls the PowerShell script from the .bat file. I've suggested they implement this change to resolve the issue. Until they do, I am still looking for alternatives.
Leaving RMM providers out of the picture (whose involvement may or may not matter), the following test.ps1 content should work:
# Note: Place [CmdletBinding()] above param(...) to make
#       the script an *advanced* one, which then prevents passing
#       extra arguments that don't bind to declared parameters.
param(
    [string] $stringParam,
    [switch] $switchParam
)

# If invoked via powershell.exe, re-invoke via pwsh.exe
if ((Get-Process -Id $PID).Name -eq 'powershell') {
    # $PSCommandPath is the current script's full file path,
    # and @PSBoundParameters uses splatting to pass all
    # arguments that were bound to declared parameters through.
    # Any extra arguments, if present, are passed through with @args
    pwsh -ExecutionPolicy Bypass -File $PSCommandPath @PSBoundParameters @args
    exit $LASTEXITCODE
}

# Getting here means that the file is being executed by pwsh.exe
# Print the arguments received:
if ($PSBoundParameters.Count) {
    "-- Bound parameters and their values:`n"
    # !! Because $PSBoundParameters causes table-formatted
    # !! output, synchronous output must be forced to work around a bug.
    # !! See notes below.
    $PSBoundParameters | Out-Host
}
if ($args) {
    "`n-- Unbound (positional) arguments:`n"
    $args
}

exit 0
As suggested in the code comments, place [CmdletBinding()] above the param(...) block in order to make the script an advanced one, in which case passing extra arguments (ones that don't bind to formally declared parameters) is actively prevented (and in which case $args isn't defined).
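A sketch of what the top of the script looks like in that advanced variant (the rest of the script is unchanged):
[CmdletBinding()]   # makes the script advanced: extra, unbound arguments now cause an error
param(
    [string] $stringParam,
    [switch] $switchParam
)
# Note: with [CmdletBinding()] in place, the automatic $args variable is no longer defined.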
Caveat:
Note the need for piping $PSBoundParameters to Out-Host in order to force synchronous output of the (default) table-formatted representation of its value. The same would apply to outputting any other values that would result in implicit Format-Table formatting not based on predefined formatting data ($args, as an array whose elements are strings, isn't affected). Note that while Out-Host output normally isn't suitable for data output from inside a PowerShell session, it does write to an outside caller's stdout.
This need stems from a very unfortunate output-timing bug, present in both Windows PowerShell and the current PowerShell (Core) version, 7.2.1; it is a variant manifestation of the behavior detailed in this answer and reported in GitHub issue #13985, and it applies here because exit is called within 300 msec. of initiating the implicitly table-formatted output; if you were to omit the final exit 0 statement, the problem wouldn't arise.
See also:
The automatic $PSCommandPath variable, reflecting the running script's full (absolute) path.
The automatic $PSBoundParameters variable, a dictionary containing all bound parameters and their values (arguments), and the automatic $args variable, containing all (remaining) positional arguments not bound to formally declared parameters.
Parameter splatting, in which referencing a hashtable (dictionary) or array variable with the sigil @ instead of $ passes the hashtable's entries / array's elements as individual arguments; see the sketch after this list.
powershell.exe, the Windows PowerShell CLI; pwsh, the PowerShell (Core) 7+ CLI.
Other automatic variables used above, namely $PID (the current process' ID) and $LASTEXITCODE (the most recently executed external program's exit code).
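As referenced above, a minimal splatting sketch (the function and values are hypothetical):
function Invoke-Demo { param([string] $Name, [int] $Count) "$Name x $Count" }
$namedArgs = @{ Name = 'widget'; Count = 3 }   # hashtable -> named arguments
Invoke-Demo @namedArgs                         # same as: Invoke-Demo -Name widget -Count 3
$positionalArgs = 'widget', 3                  # array -> positional arguments
Invoke-Demo @positionalArgs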
As for what you tried:
$MyInvocation.Line isn't defined when a script is called via PowerShell's CLI; however, the automatic $PSCommandPath variable, reflecting the running script's full file path, is defined.
Even if $MyInvocation.Line were defined, it wouldn't enable robust pass-through of the original arguments, due to potential quoting issues and - when called from inside PowerShell - due to reflecting unexpanded arguments. (Also, the value would start with the executable / script name / path, which would have to be removed.)
While [Environment]::CommandLine does reflect the process command line of the CLI call (also starting with the executable name / path), the quoting issues apply there too.
Also, it is virtually pointless to use Invoke-Command for local invocations - see this answer.
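For purely local invocation, for instance, the following two calls produce the same result, with the call operator being the lighter-weight choice:
Invoke-Command { Get-Date }   # works, but uses a cmdlet designed for remoting
& { Get-Date }                # same result locally, without the overhead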
I'd try putting a file called powershell.bat early on your path (at least, in a directory earlier on the path than the C:\Windows\System32\WindowsPowerShell\v1.0\ entry) and assemble the appropriate parameters there (I've no idea of the structure you need for $MyInvocation.Line - no doubt it could be derived from the parameters delivered to powershell.bat).
My thinking is that this should override powershell, re-assemble the bits and deliver them to pwsh.
I put this line in test.ps1:
$MyInvocation | Format-List -Property *
Found this content in output.txt:
MyCommand : test.ps1
BoundParameters : {}
UnboundArguments : {-stringParam, testing, -switchParam}
ScriptLineNumber : 0
OffsetInLine : 0
HistoryId : 1
ScriptName :
Line :
PositionMessage :
PSScriptRoot :
PSCommandPath :
InvocationName : D:\Temp\StackOverflow\71087897\test.ps1
PipelineLength : 2
PipelinePosition : 1
ExpectingInput : False
CommandOrigin : Runspace
DisplayScriptPosition :
Then tried this in test.ps1:
[string]$MySelf = $MyInvocation.InvocationName
Write-Host "###$MySelf###"
[string[]]$Params = $MyInvocation.UnboundArguments
foreach ($Param in $Params) {
Write-Host "Param: '$Param'"
}
And found this in output.txt:
###D:\Temp\StackOverflow\71087897\test.ps1###
Param: '-stringParam'
Param: 'testing'
Param: '-switchParam'
Then tried this in test.ps1:
[string]$Line = "$($MyInvocation.InvocationName) $($MyInvocation.UnboundArguments)"
Write-Host "$Line"
And found this in output.txt:
D:\Temp\StackOverflow\71087897\test.ps1 -stringParam testing -switchParam
Does this get you to where you need to be?

Get arguments passed to powershell.exe

Is there a way to determine, in a Profile script, what arguments were passed to the powershell executable?
Use-case
I'd like to check whether the WorkingDirectory parameter was set, before overriding it with my own cd in my user profile.
Attempts
I've made a few attempts to get variable values from within the profile script, with no luck. None of them give me any information about whether pwsh.exe was invoked with a -wd parameter or not:
echo $PSBoundParameters
echo $ArgumentList
echo (Get-Variable MyInvocation -Scope 0).Value;
To inspect PowerShell's own invocation command line, you can use:
[Environment]::CommandLine (single string)
or [Environment]::GetCommandLineArgs() (array of arguments, including the executable as the first argument).
These techniques also work on Unix-like platforms.
Caveat: As of PowerShell Core 7 (.NET Core 3.1), it is pwsh.dll, not pwsh[.exe] that is reported as the executable.
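For example (the output shown in comments is illustrative only and depends on how the session was started):
[Environment]::CommandLine
# -> "C:\Program Files\PowerShell\7\pwsh.dll" -WorkingDirectory C:\src
[Environment]::GetCommandLineArgs()
# -> C:\Program Files\PowerShell\7\pwsh.dll
#    -WorkingDirectory
#    C:\src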
A check in your $PROFILE file for whether a working directory was specified on startup could look like this, though do note that the solution is not foolproof:
$workingDirSpecified =
($PSVersionTable.PSEdition -ne 'Desktop' -and
[Environment]::GetCommandLineArgs() -match '^-(WorkingDirectory|wd|wo|wor|work|worki|workin|working|workingd|workingdi|workingdir|workingdire|workingdirec|workingdirect|workingdirecto|workingdirector)') -or
[Environment]::CommandLine -match
'\b(Set-Location|sl|cd|chdir|Push-Location|pushd|pul)\b'
In PowerShell Core, a working directory may have been specified with the -WorkingDirectory / -wd parameter (which isn't supported in Windows PowerShell); e.g.,
pwsh -WorkingDirectory /
Note: Given that it is sufficient to specify only a prefix of a parameter's name, as long as that prefix uniquely identifies the parameter, it is necessary to also test for wo, wor, work, ...
In both PowerShell Core and Windows PowerShell, the working directory may have been set with a cmdlet call (possibly via a built-in alias) as part of a -c / -Command argument (e.g.,
pwsh -NoExit -c "Set-Location /")
Note: In this scenario, unlike with -WorkingDirectory, the working directory has not yet been changed at the time the $PROFILE file is loaded.
It is possible, though unlikely, for the above to yield false positives; to use a contrived example:
pwsh -NoExit -c "'Set-Location inside a string literal'"
How about (powershell.exe or pwsh.exe?):
get-ciminstance win32_process | where name -match 'powershell.exe|pwsh.exe' |
select name,commandline

Max size of ScriptBlock / InitializationScript for Start-Job in PowerShell

When you start a new job with Start-Job, you can pass it a ScriptBlock and a InitializationScript for example:
Function FOO {
    Write-Host "HEY"
}
Start-Job -ScriptBlock {FOO} -InitializationScript {
    Function Foo { $function:FOO }
} | Wait-Job | Receive-Job
There seems to be a limit to the size of the initialization script you can pass, if it is too big then you get an error such as
[localhost] An error occurred while starting the background process. Error
reported: The filename or extension is too long.
+ CategoryInfo : OpenError: (localhost:String) [], PSRemotingTransportException
+ FullyQualifiedErrorId : -2147467259,PSSessionStateBroken
Behind the scenes, PowerShell is creating a new process and passing InitializationScript as a Base64 encoded command line parameter.
According to the Win32 CreateProcess() function, the max size of the command line is 32,768 characters. So obviously if your Base64-encoded InitializationScript is getting near this size then you will probably get an error.
I haven't yet found a limit for the size of the ScriptBlock parameter. Can someone confirm that there is no limit?
I assume that there is no limit because it looks like the ScriptBlock is transmitted to the child process via standard input?
Your guess was correct.
PowerShell translates a Start-Job call into a PowerShell CLI call behind the scenes (to powershell.exe for Windows PowerShell, and to pwsh for PowerShell (Core) 7+), that is, it achieves parallelism via a child process that the calling PowerShell session communicates with, using the standard in- and output streams:
Because the -InitializationScript script block is translated to a Base64-encoded string representing the bytes of the UTF-16LE encoding of the block's string representation, which is passed to the CLI's -EncodedCommand parameter, its max. length is limited by the overall length limit of a process command line.
That limit is 32,766 characters (Unicode characters, not just bytes), because a terminating NUL character is required in the underlying WinAPI call (to quote from the CreateProcess() WinAPI function documentation you link to: "The maximum length of this string is 32,767 characters, including the Unicode terminating null character").
Note that the full path of the PowerShell executable is included in this limit, in double-quoted form (see below), and it really is the entire resulting command line that matters; therefore, given that PowerShell (Core) 7+ can be installed in any directory, its installation location has an effect on the effective limit, as does the length of the path of the current directory (see next point).
In Windows PowerShell, whose location is fixed, and whose CLI parameter values are of fixed length in the invocation (see below), this leaves 32,655 characters for the Base64-encoded string (32766 - 111 characters for the executable path and fixed parameters and the -EncodedCommand parameter name); while similar, no fixed number can be given for PowerShell (Core) 7+, due to differing install locations and the length of the -wd (working-directory) argument depending on the current location.
Base64-encoding the bytes of a UTF-16LE-encoded string results in a ca. 2.67-fold increase in length, which makes the max. length of a script block passed to -InitializationScript 12,244 characters[1] for Windows PowerShell; for PowerShell (Core) 7+, it'll be slightly lower, depending on the installation location and the length of the current directory's path.
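A quick sketch (my own, not from the original answer) for estimating how long the encoded form of a given initialization block would be, based on the encoding described above:
$init = { Function Get-Stuff { 'work' } }   # hypothetical -InitializationScript block
$encoded = [Convert]::ToBase64String([Text.Encoding]::Unicode.GetBytes($init.ToString()))
"Encoded length: $($encoded.Length) characters (the whole command line must stay under 32,766)"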
By contrast, the -ScriptBlock argument, i.e. the operation to perform in the background, is sent via stdin (the standard input stream) to the newly launched PowerShell process, and therefore has no length limit.
For instance, the following Start-Job call:
Start-Job -ScriptBlock { [Environment]::CommandLine } -InitializationScript { 'hi' > $null } |
Receive-Job -Wait -AutoRemoveJob
reveals that the background-job child process was launched as follows, when run from Windows PowerShell:
"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -Version 5.1 -s -NoLogo -NoProfile -EncodedCommand IAAnAGgAaQAnACAAPgAgACQAbgB1AGwAbAAgAA==
As you can see, the -ScriptBlock argument's text is not present in the resulting command line (it was sent via stdin), whereas the -InitializationScript argument's is, as the Base64-encoded string passed to -EncodedCommand, which you can verify as follows:
# -> " 'hi' > $null ", i.e. the -InitializationScript argument, sans { and }
[Text.Encoding]::Unicode.GetString([Convert]::FromBase64String('IAAnAGgAaQAnACAAPgAgACQAbgB1AGwAbAAgAA=='))
As for the other parameters:
-s is short for -servermode, and it is an undocumented parameter whose sole purpose is to facilitate background jobs (communication with the calling process via its standard streams); see this answer for more information.
-Version 5.1 applies only to Windows PowerShell, and isn't strictly necessary.
-NoLogo is also not strictly necessary, because it is implied by the use of -EncodedCommand (as it would be with -Command and -File).
In PowerShell (Core) 7+, you'd also see a -wd (short for: -WorkingDirectory) parameter, because background jobs there now sensibly use the same working directory as the caller.
[1]
[Convert]::ToBase64String([Text.Encoding]::Unicode.GetBytes('x' * 12244)).Length yields 32652, which is the closest you can come to the 32655 limit; an input length of 12245 yields 32656.
The script block actually does have a limit, and you can run a command with a script block in three ways:
$scriptblock = '
get-help
get-command
dir
get-help *command*
get-command *help*
'
iex $scriptblock
Or use this:
$scriptblock = {
get-help
get-command
dir
get-help *command*
get-command *help*
}
Start-Process powershell.exe -ArgumentList "-Command $scriptblock"
or use this :
Start-Process powershell {iex '
get-help
get-command
dir
get-help *command*
get-command *help*
'
}
The limit of a script that can be passed to powershell.exe is 12,190 bytes. But for a script file, I have never hit a limit from PowerShell, even with more than 4,000 lines of code.

PowerShell mkdir alias + Set-StrictMode -Version 2. Strange bug. Why?

It's something unbelievable. This is a PowerShell code snippet in test.ps1 file:
Set-StrictMode -Version 2
mkdir c:\tmp\1 # same with 'md c:\tmp\1'
Start cmd.exe, navigate to the folder with the test.ps1 script, and run it:
c:\tmp>powershell ".\test.ps1"
This produces the following error:
The variable '$_' cannot be retrieved because it has not been set.
At line:50 char:38
+ $steppablePipeline.Process($_ <<<< )
+ CategoryInfo : InvalidOperation: (_:Token) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : VariableIsUndefined
Why?
It works when started from PowerShell console but not cmd.exe. I discovered this bug in much larger script. It was a WTF moment.
What is wrong with this simple script?
Even though a workaround has already been found, I thought people might be interested in an explanation.
As to why it behaves differently in the shell versus cmd.exe, see Powershell 2.0 - Running scripts for the command line call vs. from the ISE
As mentioned in the reference, there is a difference between the following two commands:
powershell ".\test.ps1"
powershell -File ".\test.ps1"
When using the first syntax, it seems to mess with scope, causing the Set-StrictMode command to modify the strict mode for functions defined at the global scope.
This triggers a bug (or an incorrect assumption, perhaps) in the definition of the mkdir function.
The function makes use of the GetSteppablePipeline method to proxy the pipeline for the New-Item cmdlet. However, the author neglected to account for the fact that the PROCESS section is still executed even when there is nothing in the pipeline. Thus, when the PROCESS section is reached, the $_ automatic variable is not defined. If strict mode is enabled, an exception will occur.
One way for Microsoft to account for this would be to replace the following line:
$steppablePipeline.Process($_)
with the following:
if (test-path Variable:Local:_) {
    $steppablePipeline.Process($_)
}
I admit that this may not be the best way to fix it, but the overhead would be negligible. Another option would be to somehow test if the pipeline is empty in the BEGIN section, and then set $_ to $null.
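A rough sketch of that second idea (an illustrative stand-in, not the actual mkdir proxy function):
function Invoke-ProxyDemo {
    begin {
        # If nothing was piped in, $_ is undefined; predefine it so that
        # Set-StrictMode doesn't complain when it is referenced later.
        if (-not (Test-Path Variable:Local:_)) { $_ = $null }
    }
    process {
        "Current pipeline object: '$_'"
    }
}
Set-StrictMode -Version 2
Invoke-ProxyDemo              # no pipeline input: $_ is $null, no strict-mode error
'a', 'b' | Invoke-ProxyDemo   # pipeline input: one line per item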
Either way, if you run your scripts with the "powershell.exe -File filename" syntax, then you won't need to worry about it.
It looks like a bug (in PowerShell).