Is there a PowerShell equivalent for Python's doctest module?

I've just come across the doctest module in Python, which helps you perform automated testing against example code that's embedded within Python doc-strings. This ultimately helps ensure consistency between the documentation for a Python module, and the module's actual behavior.
Is there an equivalent capability in PowerShell, so I can test examples in the .EXAMPLE sections of PowerShell's built-in help?
This is an example of what I would be trying to do:
function MyFunction ($x, $y) {
    <#
    .EXAMPLE
    > MyFunction -x 2 -y 2
    4
    #>
    return $x + $y
}
MyFunction -x 2 -y 2

You could do this, although I'm not aware of any all-in-one built-in way to do it.
Method 1 - Create a scriptblock and execute it
Help documentation is an object, so it can be leveraged to index into each example and its code. Below is the simplest example I could think of which executes your example code.
I'm not sure if this is exactly what doctest does - it seems a bit dangerous to me, but it might be what you're after! It's the simplest solution and I think it will give you the most accurate results.
Function Test-Example {
    param (
        $Module
    )
    # Get the examples
    $examples = Get-Help $Module -Examples
    # Loop over the code of each example
    foreach ($exampleCode in $examples.examples.example.code) {
        # Create a scriptblock from the example code
        $scriptBlock = [scriptblock]::Create($exampleCode)
        # Execute the scriptblock
        $scriptBlock.Invoke()
    }
}
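To get closer to what doctest actually does, you could also compare each example's actual output with the expected output recorded in the help. Here is a minimal sketch, assuming the expected output lives in the example's remarks text (as the 4 does in the question's help block), and, like method 1, assuming the example's code line is directly executable; Test-ExampleOutput is a hypothetical name:
Function Test-ExampleOutput {
    param (
        $Command
    )
    foreach ($example in (Get-Help $Command -Examples).examples.example) {
        # Expected output: the free text that follows the example code (assumption)
        $expected = (@($example.remarks.Text) -ne '') -join "`n"
        # Actual output: run the code and render the result as text
        $actual = ([scriptblock]::Create($example.code)).Invoke() | Out-String
        [pscustomobject]@{
            Code   = $example.code
            Passed = $actual.Trim() -eq $expected.Trim()
        }
    }
}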
Method 2 - Parse the example/function and make manual assertions
I think a potentially better way to do this would be to parse both the example and the function to make sure the example is valid. The downside is this can get quite complex, especially if you're writing complex functions.
Here's some code that checks the example has the correct function name, parameters and valid values. It could probably be refactored (this was my first time dealing with [System.Management.Automation.Language.Parser]) and it doesn't deal with advanced functions at all.
If you care about things like Mandatory, ParameterSetName, ValidatePattern etc this probably isn't a good solution as it will require a lot of extension.
Function Check-Example {
    param (
        $Function
    )
    # We'll use this to get the example command later
    # source: https://vexx32.github.io/2018/12/20/Searching-PowerShell-Abstract-Syntax-Tree/
    $commandAstPredicate = {
        param([System.Management.Automation.Language.Ast]$AstObject)
        return ($AstObject -is [System.Management.Automation.Language.CommandAst])
    }
    # Get the examples
    $examples = Get-Help $Function -Examples
    # Parse the function
    $parsedFunction = [System.Management.Automation.Language.Parser]::ParseInput((Get-Content Function:$Function), [ref]$null, [ref]$null)
    # Loop over the code of each example
    foreach ($exampleCode in $examples.examples.example.code) {
        # Parse the example code
        $parsedExample = [System.Management.Automation.Language.Parser]::ParseInput($exampleCode, [ref]$null, [ref]$null)
        # Get the command, which gives us useful properties we can use
        $parsedExampleCommand = $parsedExample.Find($commandAstPredicate, $true).CommandElements
        # Check the command name is correct
        "Function is correctly named: $($parsedExampleCommand[0].Value -eq $Function)"
        # Loop over the command elements, skipping the first one, which we assume is the function name
        foreach ($element in ($parsedExampleCommand | Select-Object -Skip 1)) {
            "" # new line
            # Check the parameter in the example exists in the function definition
            if ($element.ParameterName) {
                "Checking parameter $($element.ParameterName)"
                $parameterDefinition = $parsedFunction.ParamBlock.Parameters | Where-Object { $_.Name.VariablePath.UserPath -eq $element.ParameterName }
                if ($parameterDefinition) {
                    "Parameter $($element.ParameterName) exists"
                    # Store the parameter name so we can use it to check the value, which we should find in the next loop.
                    # This falls apart for switches, which have no value, so they'll need some additional logic.
                    $previousParameterName = $element.ParameterName
                }
            }
            # Check the value has the same type as defined in the function, or can at least be cast to it
            elseif ($element.Value) {
                "Checking value $($element.Value) of parameter $previousParameterName"
                $parameterDefinition = $parsedFunction.ParamBlock.Parameters | Where-Object { $_.Name.VariablePath.UserPath -eq $previousParameterName }
                "Parameter $previousParameterName has the same type: $($element.StaticType.Name -eq $parameterDefinition.StaticType.Name)"
                "Parameter $previousParameterName can be cast to correct type: $(-not [string]::IsNullOrEmpty($element.Value -as $parameterDefinition.StaticType))"
            }
            else {
                "Unexpected command element:"
                $element
            }
        }
    }
}
Method 3 - Use Pester (maybe out of scope)
I think this one is a bit off topic, but worth mentioning. Pester is the test framework for PowerShell and has features that could be helpful here. You could have a generic test that takes a script/function as argument and runs tests against the parsed examples/functions.
This could involve executing the script like in method 1, or checking the parameters like in method 2. Pester has a HaveParameter assertion that allows you to check certain things about your function.
HaveParameter documentation, copied from the link above:
Get-Command "Invoke-WebRequest" | Should -HaveParameter Uri -Mandatory
function f ([String] $Value = 8) { }
Get-Command f | Should -HaveParameter Value -Type String
Get-Command f | Should -Not -HaveParameter Name
Get-Command f | Should -HaveParameter Value -DefaultValue 8
Get-Command f | Should -HaveParameter Value -Not -Mandatory
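For example, a generic Pester test (Pester v5 syntax assumed, and hedged the same way as method 1, using the question's MyFunction) could simply assert that every example runs without throwing:
Describe 'Help examples' {
    It 'runs example <_> without errors' -ForEach @(
        (Get-Help MyFunction -Examples).examples.example.code
    ) {
        { [scriptblock]::Create($_).Invoke() } | Should -Not -Throw
    }
}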


trying to write powershell cmdlet accepting pipelined input

I'm trying to get my head around PowerShell and write a function as a cmdlet. I found the following code sample in one of the articles, but it doesn't seem to want to work as a cmdlet, even though it has the [cmdletbinding()] declaration at the top of the file.
When I try to do something like
1,2,3,4,5 | .\measure-data
it returns an empty response (the function itself works just fine if I invoke it at the bottom of the file and run the file itself).
Here's the code that I am working with, any help will be appreciated :)
Function Measure-Data {
    <#
    .Synopsis
    Calculate the median and range from a collection of numbers
    .Description
    This command takes a collection of numeric values and calculates the
    median and range. The result is written as an object to the pipeline.
    .Example
    PS C:\> 1,4,7,2 | measure-data

    Median Range
    ------ -----
    3      6
    .Example
    PS C:\> dir c:\scripts\*.ps1 | select -expand Length | measure-data

    Median Range
    ------ -----
    1843   178435
    #>
    [cmdletbinding()]
    Param (
        [Parameter(Mandatory=$True,ValueFromPipeline=$True)]
        [ValidateRange([int64]::MinValue,[int64]::MaxValue)]
        [psobject]$InputObject
    )
    Begin {
        # define an array to hold incoming data
        Write-Verbose "Defining data array"
        $Data = @()
    } #close Begin
    Process {
        # add each incoming value to the $data array
        Write-Verbose "Adding $inputobject"
        $Data += $InputObject
    } #close process
    End {
        # take incoming data and sort it
        Write-Verbose "Sorting data"
        $sorted = $data | Sort-Object
        # count how many elements are in the array
        $count = $data.Count
        Write-Verbose "Counted $count elements"
        #region calculate median
        if ($sorted.count % 2) {
            <#
            If the number of elements is odd, add one to the count
            and divide by 2 to get the middle number. But arrays start
            counting at 0, so subtract one.
            #>
            Write-Verbose "processing odd number"
            [int]$i = (($sorted.count + 1) / 2 - 1)
            # get the corresponding element from the sorted array
            $median = $sorted[$i]
        }
        else {
            <#
            If the number of elements is even, find the average
            of the two middle numbers.
            #>
            Write-Verbose "processing even number"
            $i = $sorted.count / 2
            # get the lower number
            $x = $sorted[$i - 1]
            # get the upper number
            $y = $sorted[-$i]
            # average the two numbers to calculate the median
            $median = ($x + $y) / 2
        } #else even
        #endregion
        #region calculate range
        Write-Verbose "Calculating the range"
        $range = $sorted[-1] - $sorted[0]
        #endregion
        #region write result
        Write-Verbose "Median = $median"
        Write-Verbose "Range = $range"
        # define a hash table for the custom object
        $hash = @{Median = $median; Range = $Range}
        # write result object to pipeline
        Write-Verbose "Writing result to the pipeline"
        New-Object -TypeName PSobject -Property $hash
        #endregion
    } #close end
} #close measure-data
this the article where I took the code from:
https://mcpmag.com/articles/2013/10/15/blacksmith-part-4.aspx
edit: maybe I should add that versions of this code from previous parts of the article worked just fine, but after adding all the things that make it a proper cmdlet, like the help section and verbose lines, it just doesn't want to work. I believe something is missing. I have a feeling this could be because it was written for PowerShell 3 and I am testing it on Windows 10 with PowerShell 5-point-something, but honestly I don't even know in which direction to look, which is why I'm asking for help.
There is nothing wrong with the code (apart from possible optimizations), but the way you call it can't work:
1,2,3,4,5 | .\measure-data
When you call a script file that contains a named function, it is expected that "nothing happens". Actually, the script runs, but PowerShell does not know which function it should call (there could be multiple). So it just runs any code outside of functions.
You have two options to fix the problem:
Option 1
Remove the function keyword and the curly braces that belong to it. Keep the [cmdletbinding()] and Param sections.
[cmdletbinding()]
Param (
    [Parameter(Mandatory=$True,ValueFromPipeline=$True)]
    [ValidateRange([int64]::MinValue,[int64]::MaxValue)]
    [psobject]$InputObject
)
Begin {
    # ... your code ...
} #close Begin
Process {
    # ... your code ...
} #close process
End {
    # ... your code ...
}
Now the script itself is the "function" and can be called as such:
1,2,3,4,5 | .\measure-data
Option 2
Turn the script into a module. Basically you just need to save it with a .psm1 extension (there is more to it, but for getting started it will suffice).
In the script where you want to use the function you have to import the module before you can use its functions. If the module is not installed, you can import it by specifying its full path.
# Import module from directory where current script is located
Import-Module $PSScriptRoot\measure-data.psm1
# Call a function of the module
1,2,3,4,5 | Measure-Data
A module is the way to go when there are multiple functions in a single script file. It is also more efficient when a function will be called multiple times, because PowerShell needs to parse it only once (it remembers Import-Module calls).
It works as-is, you just need to call it properly. Since the code is now a function, you cannot call it like before, when the code was directly in the file:
# method when code is directly in file with no Function Measure-Data {}
1,2,3,4,5 | .\measure-data
Now that you've defined the function, you instead need to dot-source the file so that it loads your function(s) into memory. Then you can call your function by its name (which happens to be the same as the filename, but doesn't have to be):
# Load the functions by dot-sourcing
. .\measure-data.ps1
# Use the function
1,2,3,4,5 | Measure-Data
You're not passing it an Object but an array of integers. If you change the parameter to:
Param (
    [Parameter(Mandatory=$True,ValueFromPipeline=$True)]
    [ValidateRange([int64]::MinValue,[int64]::MaxValue)]
    [Int[]]$InputObject
)
Now things work:
PS> 1,2,3,4,5 | Measure-Data
Median Range
------ -----
3 4

Why does the scope of variables change depending on if it's a .ps1 or .psm1 file, and how can this be mitigated?

I have a function that executes a script block. For convenience, the script block does not need to have explicitly defined parameters, but instead can use $_ and $A to refer to the inputs.
In the code, this is done as such:
$_ = $Value
$A = $Value2
& $ScriptBlock
This whole thing is wrapped in a function. Minimal example:
function F {
    param(
        [ScriptBlock]$ScriptBlock,
        [Object]$Value,
        [Object]$Value2
    )
    $_ = $Value
    $A = $Value2
    & $ScriptBlock
}
If this function is written in a PowerShell script file (.ps1), but imported using Import-Module, the behaviour of F is as expected:
PS> F -Value 7 -Value2 1 -ScriptBlock {$_ * 2 + $A}
15
PS>
However, when the function is written in a PowerShell module file (.psm1) and imported using Import-Module, the behaviour is unexpected:
PS> F -Value 7 -Value2 1 -ScriptBlock {$_ * 2 + $A}
PS>
Using {$_ + 1} instead gives 1. It seems that $_ has a value of $null instead. Presumably, some security measure restricts the scope of the $_ variable or otherwise protects it. Or, possibly, the $_ variable is assigned by some automatic process. Regardless, if only the $_ variable was affected, the first unsuccessful example would return 1.
Ideally, the solution would involve the ability to explicitly specify the environment in which a script block is run. Something like:
Invoke-ScriptBlock -Variables @{"_" = $Value; "A" = $Value2} -InputObject $ScriptBlock
In conclusion, the questions are:
Why can't script blocks in module files access variables defined in functions from which they were called?
Is there a method for explicitly specifying the variables accessible by a script block when invoking it?
Is there some other way of solving this that does not involve including an explicit parameter declaration in the script block?
Out of order:
Is there some other way of solving this that does not involve including an explicit parameter declaration in the script block?
Yes, if you just want to populate $_, use ForEach-Object!
ForEach-Object executes in the caller's local scope, which helps you work around the issue - except you won't have to, because it also automatically binds input to $_/$PSItem:
# this will work both in module-exported commands and standalone functions
function F {
    param(
        [ScriptBlock]$ScriptBlock,
        [Object]$Value
    )
    ForEach-Object -InputObject $Value -Process $ScriptBlock
}
Now F will work as expected:
PS C:\> F -Value 7 -ScriptBlock {$_ * 2}
Ideally, the solution would involve the ability to explicitly specify the environment in which a script block is run. Something like:
Invoke-ScriptBlock -Variables @{"_" = $Value; "A" = $Value2} -InputObject $ScriptBlock
Execute the scripblock using ScriptBlock.InvokeWithContext():
$functionsToDefine = @{
    'Do-Stuff' = {
        param($a, $b)
        Write-Host "$a - $b"
    }
}
$variablesToDefine = @(
    [PSVariable]::new("var1", "one")
    [PSVariable]::new("var2", "two")
)
$argumentList = @()
{Do-Stuff -a $var1 -b two}.InvokeWithContext($functionsToDefine, $variablesToDefine, $argumentList)
Or, wrapped in a function like your original example:
function F
{
    param(
        [scriptblock]$ScriptBlock,
        [object]$Value
    )
    $ScriptBlock.InvokeWithContext(@{}, @([PSVariable]::new('_', $Value)), @())
}
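Calling it then looks like the original example; InvokeWithContext returns the script block's output:
PS C:\> F -Value 7 -ScriptBlock {$_ * 2}
14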
Now you know how to solve your problem, let's get back to the question(s) about module scoping.
At first, it's worth noting that you could actually achieve the above using modules, but sort of in reverse.
(In the following, I use in-memory modules defined with New-Module, but the module scope resolution behavior described is the same as when you import a script module from disk.)
While module scoping "bypasses" normal scope resolution rules (see below for explanation), PowerShell actually supports the inverse - explicit execution in a specific module's scope.
Simply pass a module reference as the first argument to the & call operator, and PowerShell will treat the subsequent arguments as a command to be invoked in said module:
# Our non-module test function
$twoPlusTwo = { return $two + $two }
$two = 2
& $twoPlusTwo # yields 4
# let's try it with explicit module-scoped execution
$myEnv = New-Module {
    $two = 2.5
}
& $myEnv $twoPlusTwo # Hell froze over, 2+2=5 (returns 5)
Why can't script blocks in module files access variables defined in functions from which they were called?
If they can, why can't the $_ automatic variable?
Because loaded modules maintain state, and the implementers of PowerShell wanted to isolate module state from the caller's environment.
Why might that be useful, and why might one preclude the other, you ask?
Consider the following example, a non-module function to test for odd numbers:
$two = 2
function Test-IsOdd
{
    param([int]$n)
    return $n % $two -ne 0
}
If we run the above statements in a script or an interactive prompt, subsequently invoking Test-IsOdd should yield the expected result:
PS C:\> Test-IsOdd 123
True
So far, so great, but relying on the non-local $two variable bears a flaw in this scenario - if, somewhere in our script or in the shell, we accidentally reassign the variable $two, we might break Test-IsOdd completely:
PS C:\> $two = 1 # oops!
PS C:\> Test-IsOdd 123
False
This is expected since, by default, variable scope resolution just wanders up the call stack until it reaches the global scope.
But sometimes you might require state to be kept across executions of one or more functions, like in our example above.
Modules solve this by following slightly different scope resolution rules - module-exported functions defer to something we call module scope (before reaching the global scope).
To illustrate how this solves our problem from before, consider this module-exported version of the same function:
$oddModule = New-Module {
    function Test-IsOdd
    {
        param([int]$n)
        return $n % $two -ne 0
    }
    $two = 2
}
Now, if we invoke our new module-exported Test-IsOdd, we predictably get the expected result, regardless of "contamination" in the caller's scope:
PS C:\> Test-IsOdd 123
True
PS C:\> $two = 1
PS C:\> Test-IsOdd 123 # still works
True
This behavior, while maybe surprising, basically serves to solidify the implicit contract between the module author and the user - the module author doesn't need to worry too much about what's "out there" (the caller's session state), and the user can expect whatever is going on "in there" (the loaded module's state) to work correctly without worrying about what they assign to variables in the local scope.
Module scoping behavior is poorly documented in the help files, but it is explained in some depth in chapter 8 of Bruce Payette's "PowerShell in Action" (ISBN 9781633430297).

How to use dash argument in Powershell?

I am porting a script from bash to PowerShell, and I would like to keep the same support for argument parsing in both. In the bash version, one of the possible arguments is --, and I want to also detect that argument in PowerShell. However, nothing I've tried so far has worked. I cannot define it as an argument like param($-), as that causes a compile error. Also, if I decide to completely forego PowerShell argument processing and just use $args, everything appears good, but when I run the function, the -- argument is missing.
Function Test-Function {
    Write-Host $args
}
Test-Function -- -args go -here # Prints "-args go -here"
I know about $PSBoundParameters as well, but the value isn't there, because I can't bind a parameter named $-. Are there any other mechanisms here that I can try, or any solution?
For a bit more context, note that me using PowerShell is a side effect. This isn't expected to be used as a normal PowerShell command, I have also written a batch wrapper around this, but the logic of the wrapper is more complex than I wanted to write in batch, so the batch wrapper just calls the PowerShell function, which then does the more complex processing.
I found a way to do so, but instead of a double-hyphen you have to pass three of them.
This is a simple function, you can change the code as you want:
function Test-Hyphen {
    param(
        ${-}
    )
    if (${-}) {
        write-host "You used triple-hyphen"
    } else {
        write-host "You didn't use triple-hyphen"
    }
}
Sample 1
Test-Hyphen
Output
You didn't use triple-hyphen
Sample 2
Test-Hyphen ---
Output
You used triple-hyphen
As an aside: PowerShell allows a surprising range of variable names, but you have to enclose them in {...} in order for them to be recognized; that is, ${-} technically works, but it doesn't solve your problem.
The challenge is that PowerShell quietly strips -- from the list of arguments - and the only way to preserve that token is to precede it with the PSv3+ stop-parsing symbol, --%, which, however, fundamentally changes how the arguments are passed and is obviously an extra requirement, which is what you're trying to avoid.
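For illustration, roughly what the stop-parsing token does (foo.ps1 is a hypothetical script; everything after --% should arrive as literal string arguments, including the -- token):
./foo.ps1 --% -- -args go -here
# Inside foo.ps1, $args should now contain: --  -args  go  -here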
Your best bet is to try - suboptimal - workarounds:
Option A: In your batch-file wrapper, translate -- to a special argument that PowerShell does preserve and pass it instead; the PowerShell script will then have to re-translate that special argument to --.
Option B: Perform custom argument parsing in PowerShell:
You can analyze $MyInvocation.Line, which contains the raw command line that invoked your script, and look for the presence of -- there.
Getting this right and making it robust is nontrivial, however.
Here's a reasonably robust approach:
# Don't use `param()` or `$args` - instead, do your own argument parsing:

# Extract the argument list from the invocation command line.
$argList = ($MyInvocation.Line -replace ('^.*' + [regex]::Escape($MyInvocation.InvocationName)) -split '[;|]')[0].Trim()

# Use Invoke-Expression with a Write-Output call to parse the raw argument list,
# performing evaluation and splitting it into an array:
$customArgs = if ($argList) { @(Invoke-Expression "Write-Output -- $argList") } else { @() }

# Print the resulting arguments array for verification:
$i = 0
$customArgs | % { "Arg #$((++$i)): [$_]" }
Note:
There are undoubtedly edge cases where the argument list may not be correctly extracted or where the re-evaluation of the raw arguments causes side effects, but for the majority of cases - especially when called from outside PowerShell - this should do.
While useful here, Invoke-Expression should generally be avoided.
If your script is named foo.ps1 and you invoked it as ./foo.ps1 -- -args go -here, you'd see the following output:
Arg #1: [--]
Arg #2: [-args]
Arg #3: [go]
Arg #4: [-here]
I came up with the following solution, which also works well inside pipelines and multi-line expressions. I am using the PowerShell Parser to parse the invocation expression string (while ignoring any incomplete tokens, which might be present at the end of the $MyInvocation.Line value) and then Invoke-Expression with Write-Output to get the actual argument values:
# Parse the whole invocation line
$code = [System.Management.Automation.Language.Parser]::ParseInput($MyInvocation.Line.Substring($MyInvocation.OffsetInLine - 1), [ref]$null, [ref]$null)
# Find our invocation expression without redirections
$myline = $code.Find({$args[0].CommandElements}, $true).CommandElements | % { $_.ToString() } | Join-String -Separator ' '
# Get the argument values
$command, $arguments = Invoke-Expression ('Write-Output -- ' + $myline)
# Fine-tune arguments to be always an array
if ( $arguments -is [string] ) { $arguments = @($arguments) }
if ( $null -eq $arguments ) { $arguments = @() }
Please be aware that the original values in the function call are reevaluated in Invoke-Expression, so any local variables might shadow values of the actual arguments. Because of that, you can also use this (almost) one-liner at the top of your function, which prevents the pollution of local variables:
# Parse arguments
$command, $arguments = Invoke-Expression ('Write-Output -- ' + ([System.Management.Automation.Language.Parser]::ParseInput($MyInvocation.Line.Substring($MyInvocation.OffsetInLine - 1), [ref]$null, [ref]$null).Find({$args[0].CommandElements}, $true).CommandElements | % { $_.ToString() } | Join-String -Separator ' '))
# Fine-tune arguments to be always an array
if ( $arguments -is [string] ) { $arguments = @($arguments) }
if ( $null -eq $arguments ) { $arguments = @() }

When my powershell cmdlet parameter accepts ValueFromPipelineByPropertyName and I have an alias, how can I get the original property name?

How can a function tell if a parameter was passed in as an alias, or an object in the pipeline's property was matched as an alias? How can it get the original name?
Suppose my Powershell cmdlet accepts pipeline input and I want to use ValueFromPipelineByPropertyName. I have an alias set up because I might be getting a few different types of objects, and I want to be able to do something slightly different depending on what I receive.
This does not work
function Test-DogOrCitizenOrComputer
{
    [CmdletBinding()]
    Param
    (
        # Way Overloaded Example
        [Parameter(Mandatory=$true,
            ValueFromPipeline=$true,
            ValueFromPipelineByPropertyName=$true,
            Position=0)]
        [Alias("Country", "Manufacturer")]
        [string]$DogBreed,

        [Parameter(Mandatory=$true,
            ValueFromPipelineByPropertyName=$true,
            Position=1)]
        [string]$Name
    )

    # For debugging purposes, since the debugger clobbers stuff
    $foo = $MyInvocation
    $bar = $PSBoundParameters

    # This always matches.
    if ($MyInvocation.BoundParameters.ContainsKey('DogBreed')) {
        "Greetings, $Name, you are a good dog, you cute little $DogBreed"
    }
    # These never do.
    if ($MyInvocation.BoundParameters.ContainsKey('Country')) {
        "Greetings, $Name, proud citizen of $Country"
    }
    if ($MyInvocation.BoundParameters.ContainsKey('Manufacturer')) {
        "Greetings, $Name, future ruler of earth, created by $Manufacturer"
    }
}
Executing it, we see problems
At first, it seems to work:
PS> Test-DogOrCitizenOrComputer -Name Keith -DogBreed Basset
Greetings, Keith, you are a good dog, you cute little Basset
The problem is apparent when we try an Alias:
PS> Test-DogOrCitizenOrComputer -Name Calculon -Manufacturer HP
Greetings, Calculon, you are a good dog, you cute little HP
Bonus fail, doesn't work via pipeline:
PS> New-Object PSObject -Property @{'Name'='Fred'; 'Country'='USA'} | Test-DogOrCitizenOrComputer
Greetings, Fred, you are a good dog, you cute little USA
PS> New-Object PSObject -Property @{'Name'='HAL'; 'Manufacturer'='IBM'} | Test-DogOrCitizenOrComputer
Greetings, HAL, you are a good dog, you cute little IBM
Both $MyInvocation.BoundParameters and $PSBoundParameters contain the defined parameter names, not any aliases that were matched. I don't see a way to get the real names of arguments matched via alias.
It seems PowerShell is not only being 'helpful' to the user by silently massaging arguments to the right parameters via aliases, but it's also being 'helpful' to the programmer by folding all aliased inputs into the main parameter name. That's fine, but I can't figure out how to determine the actual original parameter passed to the Cmdlet (or the object property passed in via pipeline)
How can a function tell if a parameter was passed in as an alias, or an object in the pipeline's property was matched as an alias? How can it get the original name?
I don't think there is any way for a function to know if an alias has been used, but the point is it shouldn't matter. Inside the function you should always refer to the parameter as if it's used by its primary name.
If you need the parameter to act differently depending on whether an alias was used, that is not what an alias is for; you should instead use different parameters, or a second parameter that acts as a switch.
By the way, if you're doing this because you want to use multiple parameters as ValueFromPipelineByPropertyName, you already can with individual parameters and you don't need to use Aliases to achieve this.
Accepting value from the pipeline by Value does need to be unique for each input type (e.g. only one string parameter can bind by value, one int by value, etc.). But accepting pipeline input by property Name can be enabled for every parameter, because each parameter name is unique, as sketched below.
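As a hedged sketch of that last point (hypothetical function name; the property names mirror the question), every parameter can bind from the pipeline by its own property name:
function Test-Greeting {
    [CmdletBinding()]
    param (
        [Parameter(ValueFromPipelineByPropertyName=$true)]
        [string]$Name,
        [Parameter(ValueFromPipelineByPropertyName=$true)]
        [string]$Country,
        [Parameter(ValueFromPipelineByPropertyName=$true)]
        [string]$Manufacturer
    )
    process {
        if ($Country) { "Greetings, $Name, proud citizen of $Country" }
        elseif ($Manufacturer) { "Greetings, $Name, created by $Manufacturer" }
    }
}
[pscustomobject]@{ Name = 'HAL'; Manufacturer = 'IBM' } | Test-Greeting
# Greetings, HAL, created by IBM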
I banged my head quite hard on this, so I'd like to write down the state of my understanding. The solution is at the bottom (such as it is).
First, quickly: if you alias the command, you can get the alias easily with $MyInvocation.InvocationName. But that doesn't help with parameter aliases.
Works in some cases
You can get some joy by pulling the commandline that invoked you:
function Do-Stuff {
    [CmdletBinding()]
    param(
        [Alias('AliasedParam')]$Param
    )
    $InvocationLine = $MyInvocation.Line.Substring($MyInvocation.OffsetInLine - 1)
    return $InvocationLine
}
$a = 42; Do-Stuff -AliasedParam $a; $b = 23
# Do-Stuff -AliasedParam $a; $b = 23
This will show the alias names. You could parse them with regex, but I'd suggest using the language parser:
$InvocationAst = [Management.Automation.Language.Parser]::ParseInput($InvocationLine, [ref]$null, [ref]$null)
$InvocationAst.EndBlock.Statements[0].PipelineElements[0].CommandElements.ParameterName
That will get you a list of parameters as they were called. However, it's flimsy:
Doesn't work for splats
Doesn't work for ValueFromPipelineByPropertyName
Abbreviated param names will cause extra headache
Only works in the function body; in a dynamicparam block, the $MyInvocation properties are not yet populated
Doesn't work
I did a deep dive into ParameterBinderController - thanks to Rohn Edwards for some reflection snippets.
This is not going to get you anywhere. Why not? Because the relevant method has no side effects - it just moves seamlessly from canonical param names to aliases. Reflection ain't enough; you would need to attach a debugger, which I do not consider to be a code solution.
This is why Trace-Command never shows the alias resolution. If it did, you might be able to hook the trace provider.
Doesn't work
Register-ArgumentCompleter takes a scriptblock which accepts a CommandAst. This AST holds the aliased param names as tokens. But you won't get far in a script, because argument completers are only invoked when you interactively tab-complete an argument.
There are several completer classes that you could hook into; this limitation applies to them all.
Doesn't work
I messed about with custom parameter attributes, e.g. class HookAttribute : System.Management.Automation.ArgumentTransformationAttribute. These receive an EngineIntrinsics argument. Unfortunately, you get no new context; parameter binding has already been done when attributes are invoked, and the bindings you'll find with reflection are all referring to the canonical parameter name.
The Alias attribute itself is a sealed class.
Works
Where you can get joy is with the PreCommandLookupAction hook. This lets you intercept command resolution. At that point, you have the args as they were written.
This sample returns the string AliasedParam whenever you use the param alias. It works with abbreviated param names, colon syntax, and splatting.
$ExecutionContext.InvokeCommand.PreCommandLookupAction = {
    param ($CommandName, $EventArgs)
    if ($CommandName -eq 'Do-Stuff' -and $EventArgs.CommandOrigin -eq 'Runspace')
    {
        $EventArgs.CommandScriptBlock = {
            # not sure why, but Global seems to be required
            $Global:_args = $args
            & $CommandName @args
            Remove-Variable _args -Scope Global
        }.GetNewClosure()
        $EventArgs.StopSearch = $true
    }
}
function Do-Stuff
{
    [CmdletBinding()]
    param
    (
        [Parameter()]
        [Alias('AliasedParam')]
        $Param
    )

    $CalledParamNames = @($_args) -match '^-' -replace '^-' -replace ':$'
    $CanonParamNames = $MyInvocation.BoundParameters.Keys
    $AliasParamNames = $CanonParamNames | ForEach-Object {$MyInvocation.MyCommand.Parameters[$_].Aliases}

    # Filter out abbreviations that could match canonical param names (they take precedence over aliases)
    $CalledParamNames = $CalledParamNames | Where-Object {
        $CalledParamName = $_
        -not ($CanonParamNames | Where-Object {$_.StartsWith($CalledParamName)} | Select-Object -First 1)
    }

    # Param aliases that would bind, so we infer that they were used
    $BoundAliases = $AliasParamNames | Where-Object {
        $AliasParamName = $_
        $CalledParamNames | Where-Object {$AliasParamName.StartsWith($_)} | Select-Object -First 1
    }
    $BoundAliases
}
# Do-Stuff -AliasP 42
# AliasedParam
If the Global variable offends you, you could use a helper parameter instead:
$EventArgs.CommandScriptBlock = {
    & $CommandName @args -_args $args
}.GetNewClosure()
[Parameter(DontShow)]
$_args
The drawback is that some fool might actually use the helper parameter, even though it's hidden with DontShow.
You could develop this approach further by doing a dry-run call of the parameter binding mechanism in the function body or the CommandScriptBlock.

Powershell pitfalls

What PowerShell pitfalls have you fallen into? :-)
Mine are:
# -----------------------------------
function foo()
{
    @("text")
}
# Expected 1, actually 4.
(foo).length
# -----------------------------------
if (@($null, $null))
{
    Write-Host "Expected to be here, and I am here."
}
if (@($null))
{
    Write-Host "Expected to be here, BUT NEVER EVER."
}
# -----------------------------------
function foo($a)
{
    # I thought this is right.
    # if ($a -eq $null)
    # {
    #     throw "You can't pass $null as argument."
    # }
    # But actually it should be:
    if ($null -eq $a)
    {
        throw "You can't pass $null as argument."
    }
}
foo @($null, $null)
# -----------------------------------
# There is try/catch, but no call stack reported.
function foo()
{
    bar
}
function bar()
{
    throw "test"
}
# Expected:
#   At bar() line:XX
#   At foo() line:XX
#
# Actually something like this:
#   At bar() line:XX
foo
Would like to know yours to walk them around :-)
My personal favorite is
function foo() {
    param ( $param1, $param2 = $(throw "Need a second parameter"))
    ...
}
foo (1,2)
For those unfamiliar with PowerShell, that line throws because instead of passing two parameters it actually creates an array and passes one parameter. You have to call it as follows:
foo 1 2
Another fun one. Not handling an expression by default writes it to the pipeline. Really annoying when you don't realize a particular function returns a value.
function example() {
    param ( $p1 )
    if ( $p1 ) {
        42
    }
    "done"
}
PS> example $true
42
done
$files = Get-ChildItem . -inc *.extdoesntexist
foreach ($file in $files) {
    "$($file.Fullname.substring(2))"
}
Fails with:
You cannot call a method on a null-valued expression.
At line:3 char:25
+ $file.Fullname.substring <<<< (2)
Fix it like so:
$files = @(Get-ChildItem . -inc *.extdoesntexist)
foreach ($file in $files) {
    "$($file.Fullname.substring(2))"
}
Bottom line is that the foreach statement will loop on a scalar value even if that scalar value is $null. When Get-ChildItem in the first example returns nothing, $files gets assigned $null. If you are expecting an array of items to be returned by a command but there is a chance it will only return 1 item or zero items, put @() around the command. Then you will always get an array - be it of 0, 1 or N items. Note: If the item is already an array, putting @() around it has no effect - it will still be the very same array (i.e. there is no extra array wrapper).
# The pipeline doesn't enumerate hashtables.
$ht = @{"foo" = 1; "bar" = 2}
$ht | measure
# Workaround: call GetEnumerator
$ht.GetEnumerator() | measure
Here are my top 5 PowerShell gotchas
Here is something I've stumbled upon lately (PowerShell 2.0 CTP):
$items = "item0", "item1", "item2"
$part = ($items | select-string "item0")
$items = ($items | where {$part -notcontains $_})
what do you think $items will be at the end of the script?
I was expecting "item1", "item2" but instead the value of $items is: "item0", "item1", "item2".
Say you've got the following XML file:
<Root>
<Child />
<Child />
</Root>
Run this:
PS > $myDoc = [xml](Get-Content $pathToMyDoc)
PS > @($myDoc.SelectNodes("/Root/Child")).Count
2
PS > @($myDoc.Root.Child).Count
2
Now edit the XML file so it has no Child nodes, just the Root node, and run those statements again:
PS > $myDoc = [xml](Get-Content $pathToMyDoc)
PS > @($myDoc.SelectNodes("/Root/Child")).Count
0
PS > @($myDoc.Root.Child).Count
1
That 1 is annoying when you want to iterate over a collection of nodes using foreach if and only if there actually are any. This is how I learned that you cannot use the XML handler's property (dot) notation as a simple shortcut. I believe what's happening is that SelectNodes returns an empty collection. When wrapped in @(), it is transformed from an XPathNodeList to an Object[] (check GetType()), but the length is preserved. The dynamically generated $myDoc.Root.Child property (which essentially does not exist) returns $null. When $null is wrapped in @(), it becomes an array of length 1.
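The scalar-versus-$null behavior of @() is easy to verify on its own - wrapping $null yields a one-element array, while a genuinely empty result stays empty:
PS > @($null).Count
1
PS > @().Count
0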
On Functions...
The subtleties of processing pipeline input in a function with respect to using $_ or $input and with respect to the begin, process, and end blocks.
How to handle the six principal equivalence classes of input delivered to a function (no input, null, empty string, scalar, list, list with null and/or empty) -- for both direct input and pipeline input -- and get what you expect.
The correct calling syntax for sending multiple arguments to a function.
I discuss these points and more at length in my Simple-Talk.com article Down the Rabbit Hole: A Study in PowerShell Pipelines, Functions, and Parameters, and also provide an accompanying wallchart covering, among other things, the various calling syntax pitfalls for a function taking 3 arguments.
On Modules...
These points are expounded upon in my Simple-Talk.com article Further Down the Rabbit Hole: PowerShell Modules and Encapsulation.
Dot-sourcing a file inside a script using a relative path is relative to your current directory -- not the directory where the script resides!
To be relative to the script use this function to locate your script directory: [Update for PowerShell V3+: Just use the builtin $PSScriptRoot variable!]
function Get-ScriptDirectory
{ Split-Path $script:MyInvocation.MyCommand.Path }
Modules must be stored as ...Modules\name\name.psm1 or ...\Modules\any_subpath\name\name.psm1. That is, you cannot just use ...Modules\name.psm1 -- the name of the immediate parent of the module must match the base name of the module. A chart in the article shows the various failure modes when this rule is violated.
2015.06.25 A Pitfall Reference Chart
Simple-Talk.com just published the last of my triumvirate of in-depth articles on PowerShell pitfalls. The first two parts are in the form of a quiz that helps you appreciate a select group of pitfalls; the last part is a wallchart (albeit it would need a rather high-ceilinged room) containing 36 of the most common pitfalls (some adapted from answers on this page), giving concrete examples and workarounds for most. Read more here.
There are some tricks to building command lines for utilities that were not built with Powershell in mind:
To run an executable whose name starts with a number, preface it with an ampersand (&).
& 7zip.exe
To run an executable with a space anywhere in the path, preface it with an Ampersand (&) and wrap it in quotes, as you would any string. This means that strings in a variable can be executed as well.
# Executing a string with a space.
& 'c:\path with spaces\command with spaces.exe'
# Executing a string with a space, after first saving it in a variable.
$a = 'c:\path with spaces\command with spaces.exe'
& $a
Parameters and arguments are passed to legacy utilities positionally. So it is important to quote them the way the utility expects to see them. In general, one would quote when it contains spaces or does not start with a letter, number or dash (-).
C:\Path\utility.exe '/parameter1' 'Value #1' 1234567890
Variables can be used to pass string values containing spaces or special characters.
$b = 'string with spaces and special characters (-/&)'
utility.exe $b
Alternatively array expansion can be used to pass values as well.
$c = @('Value #1', $Value2)
utility.exe $c
If you want PowerShell to wait for an application to complete, you have to consume the output, either by piping the output to something or using Start-Process.
# Saving output as a string to a variable.
$output = ping.exe example.com | Out-String
# Piping the output.
ping stackoverflow.com | where { $_ -match '^reply' }
# Using Start-Process affords the most control.
Start-Process -Wait SomeExecutable.com
Because of the way they display their output, some command line utilities will appear to hang when run inside of PowerShell_ISE.exe, particularly when awaiting input from the user. These utilities will usually work fine when run within the PowerShell.exe console.
PowerShell Gotchas
There are a few pitfalls that repeatedly reappear on Stack Overflow. It is recommended to do some research if you are not familiar with these PowerShell gotchas before asking a new question. It might even be a good idea to investigate these gotchas before answering a PowerShell question, to make sure that you teach the questioner the right thing.
TLDR: In PowerShell:
the comparison equality operator is: -eq
(Stackoverflow example: Powershell simple syntax if condition not working)
parentheses and commas are not used with arguments
(Stackoverflow example: How do I pass multiple parameters into a function in PowerShell?)
output properties are based on the first object in the pipeline
(Stackoverflow example: Not all properties displayed)
the pipeline unrolls
(Stackoverflow example: Pipe complete array-objects instead of array items one at a time?)
a. single item collections
(Stackoverflow example: Powershell ArrayList turns a single array item back into a string)
b. embedded arrays
(Stackoverflow example: Return Multidimensional Array From Function)
c. output collections
(Stackoverflow example: Why does PowerShell flatten arrays automatically?)
$Null should be on the left side of the equality comparison operator
(Stackoverflow example: Should $null be on the left side of the equality comparison)
parentheses and assignments choke the pipeline
(Stackoverflow example: Importing 16MB CSV Into Variable Creates >600MB's Memory Usage)
the increase assignment operator (+=) might become expensive
(Stackoverflow example: Improve the efficiency of my PowerShell script)
The Get-Content cmdlet returns separate lines
(Stackoverflow example: Multiline regex to match config block)
Examples and explanations
Some of the gotchas might really feel counter-intuitive but often can be explained by some very nice PowerShell features along with the pipeline, expression/argument mode and type casting.
1. The comparison equality operator is: -eq
Unlike the Microsoft scripting language VBScript and some other programming languages, the comparison equality operator differs from the assignment operator (=) and is: -eq.
Note: assigning a value to a variable might pass through the value if needed:
$a = $b = 3 # The value 3 is assigned to both variables $a and $b.
This implies that the following statement might be unexpectedly truthy or falsy:
If ($a = $b) {
    # (assigns $b to $a and) returns a truthy value if $b is e.g. 3
} else {
    # (assigns $b to $a and) returns a falsy value if $b is e.g. 0
}
2. Parentheses and commas are not used with arguments
Unlike a lot of other programming languages (and unlike the way a PowerShell function is defined), calling a function doesn't require parentheses or commas around its arguments. Use spaces to separate the parameter arguments:
function MyFunction($Param1, $Param2, $Param3) {
    # ...
}
MyFunction 'one' 'two' 'three' # assigns 'one' to $Param1, 'two' to $Param2, 'three' to $Param3
Parentheses and commas are used for calling (.Net) methods.
Commas are used to define arrays. MyFunction 'one', 'two', 'three' (or MyFunction('one', 'two', 'three')) will load the array @('one', 'two', 'three') into the first parameter ($Param1).
Parentheses cause the enclosed contents to be evaluated first and collected in memory as a single value (which can choke the PowerShell pipeline), and should only be used deliberately, e.g. to call an embedded function:
MyFunction (MyOtherFunction) # passes the results MyOtherFunction to the first positional parameter of MyFunction ($Param1)
MyFunction One $Two (getThree) # assigns 'One' to $Param1, $Two to $Param2, the results of getThree to $Param3
Note that quoting text arguments (like the word One in the latter example) is only required when they contain spaces or special characters.
3. Output properties are based on the first object in the pipeline
In a PowerShell pipeline, each object is processed and passed on by a cmdlet (that is implemented for the middle of a pipeline) similar to how objects are processed and passed on by workstations in an assembly line. Meaning each cmdlet processes one item at a time while the prior cmdlet (workstation) simultaneously processes the upcoming one. This way, the objects aren't all loaded into memory at once (less memory usage) and could already be processed before the next one is supplied (or even exists). The disadvantage of this feature is that there is no oversight of what (or how many) objects are expected to follow.
Therefore most PowerShell cmdlets assume that all the objects in the pipeline correspond to the first one and have the same properties which is usually the case, but not always...
$List =
    [pscustomobject]@{ one = 'a1'; two = 'a2' },
    [pscustomobject]@{ one = 'b1'; two = 'b2'; three = 'b3' }
$List |Select-Object *
one two
--- ---
a1 a2
b1 b2
As you can see, the third column three is missing from the results, as it didn't exist in the first object and PowerShell was already outputting the results before it was aware of the existence of the second object.
One way to work around this behavior is to explicitly define the properties (of all the following objects) beforehand:
$List |Select-Object one, two, three
one two three
--- --- -----
a1 a2
b1 b2 b3
See also proposal: #13906 Add -UnifyProperties parameter to Select-Object
4. The pipeline unrolls
This feature might come in handy if it complies with the straightforward expectation:
$Array = 'one', 'two', 'three'
$Array.Length
3
a. single item collections
But it might get confusing:
$Selection = $Array |Select-Object -First 2
$Selection.Length
2
$Selection[0]
one
when the collection is down to a single item:
$Selection = $Array |Select-Object -First 1
$Selection.Length
3
$Selection[0]
o
Explanation
When the pipeline outputs a single item which is assigned to a variable, it is not assigned as a collection (with 1 item, like: @('one')) but as a scalar item (the item itself, like: 'one').
Which means that the property .Length (for an array, .Count is in fact an alias for .Length) is no longer applied to the array but to the string: 'one'.Length, which equals 3. And in the case of the index $Selection[0], the first character of the string, 'one'[0] (which equals the character o), is returned.
Workaround
To work around this behavior, you might force the scalar item to an array using the array subexpression operator @( ):
$Selection = $Array |Select-Object -First 1
@($Selection).Length
1
@($Selection)[0]
one
Note that if $Selection is already an array, wrapping it will not increase the depth (@(@('one', 'two')) stays a single-level array; see the next section 4b: embedded arrays are flattened).
b. embedded arrays
When an array (or a collection) includes embedded arrays, like:
$Array = @(@('a', 'b'), @('c', 'd'))
$Array.Count
2
All the embedded items will be processed in the pipeline and consequently returns a flat array when displayed or assigned to a new variable:
$Processed = $Array |ForEach-Object { $_ }
$Processed.Count
4
$Processed
a
b
c
d
To iterate the embedded arrays, you might use the foreach statement:
foreach ($Item in $Array) { $Item.Count }
2
2
Or simply a for loop:
for ($i = 0; $i -lt $Array.Count; $i++) { $Array[$i].Count }
2
2
c. output collections
Collections are usually unrolled when they are placed on the pipeline:
function GetList {
    [Collections.Generic.List[String]]@('a', 'b')
}
(GetList).GetType().Name
Object[]
To output the collection as a single item, use the comma operator ,:
function GetList {
    ,[Collections.Generic.List[String]]@('a', 'b')
}
(GetList).GetType().Name
List`1
5. $Null should be on the left side of the equality comparison operator
This gotcha is related to this comparison operators feature:
When the input of an operator is a scalar value, the operator returns a Boolean value. When the input is a collection, the operator returns the elements of the collection that match the right-hand value of the expression. If there are no matches in the collection, comparison operators return an empty array.
This means for scalars:
'a' -eq 'a' # returns $True
'a' -eq 'b' # returns $False
'a' -eq $Null # returns $False
$Null -eq $Null # returns $True
and for collections, the matching elements are returned which evaluates to either a truthy or falsy condition:
'a', 'b', 'c' -eq 'a' # returns 'a' (truthy)
'a', 'b', 'c' -eq 'd' # returns an empty array (falsy)
'a', 'b', 'c' -eq $Null # returns an empty array (falsy)
'a', $Null, 'c' -eq $Null # returns $Null (falsy)
'a', $Null, $Null -eq $Null # returns @($Null, $Null) (truthy!!!)
$Null, $Null, $Null -eq $Null # returns @($Null, $Null, $Null) (truthy!!!)
In other words, to check whether a variable is $Null (and exclude a collection containing multiple $Nulls), put $Null at the LHS (left hand side) of the equality comparison operator:
if ($Null -eq $MyVariable) { ...
6. Parentheses and assignments choke the pipeline
The PowerShell Pipeline is not just a series of commands connected by pipeline operators (|) (ASCII 124). It is a concept to simultaneously stream individual objects through a sequence of cmdlets. If a cmdlet (or function) is written according to the Strongly Encouraged Development Guidelines and implemented for the middle of a pipeline, it takes each single object from the pipeline, processes it and passes the results to the next cmdlet just before it takes and processes the next object in the pipeline. Meaning that for a simple pipeline as:
Import-Csv .\Input.csv |Select-Object -Property Column1, Column2 |Export-Csv .\Output.csv
As the last cmdlet writes an object to the .\Output.csv file, the Select-Object cmdlet selects the properties of the next object and Import-Csv reads the next object from the .\Input.csv file (see also: Pipeline in PowerShell). This will keep the memory usage low (especially when there are lots of objects/records to process) and therefore might result in faster throughput. To facilitate the pipeline, PowerShell objects are quite fat, as each individual object carries all its property information (along with e.g. the property name).
Therefore it is not a good practice to choke the pipeline for no reason. There are two scenarios that choke the pipeline:
Parentheses, e.g.:
(Import-Csv .\Input.csv) |Select-Object -Property Column1, Column2 |Export-Csv .\Output.csv
Where all the .\Input.csv records are loaded as an array of PowerShell objects into memory before passing it on to the Select-Object cmdlet.
Assignments, e.g.:
$Objects = Import-Csv .\Input.csv
$Objects |Select-Object -Property Column1, Column2 |Export-Csv .\Output.csv
Where all the .\Input.csv records are loaded as an array of PowerShell objects into $Objects (memory as well) before passing it on to the Select-Object cmdlet.
7. The increase assignment operator (+=) might become expensive
The increase assignment operator (+=) is syntactic sugar to increase and assign primitives, e.g. $a += $b, where $a is assigned $a + $b. The increase assignment operator can also be used for adding new items to a collection (or to String types and hash tables), but might get pretty expensive, as the cost increases with each iteration (with the size of the collection). The reason for this is that array collections are immutable: the new item is not simply appended, the whole collection is recreated and reassigned to the left variable. For details see also: avoid using the increase assignment operator (+=) to create a collection.
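A rough sketch of the cheaper alternatives (illustrative only; exact timings vary):
# Expensive: the array is recreated on every iteration.
$result = @()
foreach ($i in 1..100000) { $result += $i }

# Cheap: let PowerShell collect the statement's output in one go.
$result = foreach ($i in 1..100000) { $i }

# Also cheap: use a mutable collection type.
$result = [System.Collections.Generic.List[int]]::new()
foreach ($i in 1..100000) { $result.Add($i) }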
8. The Get-Content cmdlet returns separate lines
There are probably quite some more cmdlet gotchas, knowing that there exist a lot of (internal and external) cmdlets. In contrast to engine-related gotchas, these gotchas are often easier to highlight (with e.g. a warning), as happened with ConvertTo-Json (see: Unexpected ConvertTo-Json results? Answer: it has a default -Depth of 2), or to "fix". But there is a very classic gotcha in Get-Content which ties into the general PowerShell concept of streaming objects (in this case lines) rather than passing everything (the whole contents of the file) at once:
(Get-Content .\Input.txt) -Match '\r?\n.*Test.*\r?\n'
will never work because, by default, Get-Content returns a stream of objects where each object contains a single string (a line without any line breaks).
(Get-Content .\Input.txt).GetType().Name
Object[]
(Get-Content .\Input.txt)[0].GetType().Name
String
In fact:
(Get-Content .\Input.txt) -Match 'Test'
returns all the lines with the word Test in them, as Get-Content produces a collection of single lines, and when the input of an operator is a collection, the operator returns the elements of the collection that match the right-hand value of the expression.
Note: since PowerShell version 3, Get-Content has a -Raw parameter that reads the whole content of the concerned file at once, meaning that this: (Get-Content -Raw .\Input.txt) -Match '\r?\n.*Test.*\r?\n' will work, as it loads the whole file into memory as a single string.
alex2k8, I think this example of yours is good to talk about:
# -----------------------------------
function foo($a){
    # I thought this is right.
    # if ($a -eq $null)
    # {
    #     throw "You can't pass $null as argument."
    # }
    # But actually it should be:
    if ($null -eq $a)
    {
        throw "You can't pass $null as argument."
    }
}
foo @($null, $null)
PowerShell can use some of the comparators against arrays like this:
$array -eq $value
## Returns all values in $array that equal $value
With that in mind, the original example returns two items (the two $null values in the array), which evaluates to $true because you end up with a collection of more than one item. Reversing the order of the arguments stops the array comparison.
This functionality is very handy in certain situations, but it is something you need to be aware of (just like array handling in PowerShell).
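For instance, the same behavior doubles as a terse filter:
PS> 1, 2, 3, 2 -eq 2 # keep the matching elements
2
2
PS> 'a', 'b', 'c' -ne 'b' # keep everything else
a
c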
Functions 'foo' and 'bar' look equivalent.
function foo() { $null }
function bar() { }
E.g.
(foo) -eq $null
# True
(bar) -eq $null
# True
But:
foo | %{ "foo" }
# Prints: foo
bar | %{ "bar" }
# PRINTS NOTHING
Returning $null and returning nothing are not equivalent when dealing with pipes.
This one is inspired by Keith Hill's example...
function bar() {}
$list = @(bar)
$list.length
# Prints: 0

# Now let's try the same but with a temporary variable.
$tmp = bar
$list = @($tmp)
$list.length
# Prints: 1
Another one:
$x = 2
$y = 3
$a,$b = $x,$y*5
because of operator precedence, $b does not contain 15; the command is the same as $a,$b = ($x,$y)*5
the correct version is
$a,$b = $x,($y*5)
The logical and bitwise operators don't follow standard precedence rules. The operator -and should have a higher priority than -or yet they're evaluated strictly left-to-right.
For example, compare logical operators between PowerShell and Python (or virtually any other modern language):
# PowerShell
PS> $true -or $false -and $false
False
# Python
>>> True or False and False
True
...and bitwise operators:
# PowerShell
PS> 1 -bor 0 -band 0
0
# Python
>>> 1 | 0 & 0
1
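Explicit parentheses restore the conventional grouping in both cases:
# PowerShell, with explicit grouping
PS> $true -or ($false -and $false)
True
PS> 1 -bor (0 -band 0)
1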
This works. But almost certainly not in the way you think it's working.
PS> $a = 42;
PS> [scriptblock]$b = { $a }
PS> & $b
42
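The script block looks up $a when it is invoked, not when it is created; if you want to capture the current value, GetNewClosure() does that:
PS> $a = 42
PS> $b = { $a } # resolves $a at invocation time
PS> $c = { $a }.GetNewClosure() # captures $a = 42 right now
PS> $a = 0
PS> & $b
0
PS> & $c
42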
This one has tripped me up before: using "$o.SomeProperty" inside an expandable string where it should be "$($o.SomeProperty)".
# $x is not defined
[70]: $x -lt 0
True
[71]: [int]$x -eq 0
True
So, what's $x..?
Another one I ran into recently: [string] parameters that accept pipeline input are not strongly typed in practice. You can pipe anything at all and PS will coerce it via ToString().
function Foo
{
    [CmdletBinding()]
    param (
        [parameter(Mandatory=$True, ValueFromPipeline=$True)]
        [string] $param
    )
    process { $param }
}
get-process svchost | Foo
Unfortunately there is no way to turn this off. Best workaround I could think of:
function Bar
{
    [CmdletBinding()]
    param (
        [parameter(Mandatory=$True, ValueFromPipeline=$True)]
        [object] $param
    )
    process
    {
        if ($param -isnot [string]) {
            throw "Pass a string you fool!"
        }
        # rest of function goes here
    }
}
edit - a better workaround I've started using...
Add this to your custom type XML -
<?xml version="1.0" encoding="utf-8" ?>
<Types>
  <Type>
    <Name>System.String</Name>
    <Members>
      <ScriptProperty>
        <Name>StringValue</Name>
        <GetScriptBlock>
          $this
        </GetScriptBlock>
      </ScriptProperty>
    </Members>
  </Type>
</Types>
Then write functions like this:
function Bar
{
    [CmdletBinding()]
    param (
        [parameter(Mandatory=$True, ValueFromPipelineByPropertyName=$True)]
        [Alias("StringValue")]
        [string] $param
    )
    process
    {
        # rest of function goes here
    }
}
Forgetting that $_ gets overwritten in blocks made me scratch my head in confusion a couple times, and similarly for multiple reg-ex matches and the $matches array. >.<
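A small defensive sketch: copy $_ and $Matches into named variables before any nested block or later -match can rebind them:
'alpha', 'beta' | ForEach-Object {
    $item = $_ # save $_ before entering a nested block
    if ($item -match '^(\w)') {
        $firstChar = $Matches[1] # save $Matches before the next -match
    }
    1..1 | ForEach-Object { "inner `$_ is now: $_" }
    "outer item: $item, first char: $firstChar"
}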
Remembering to explicitly type PSCustomObject properties imported from data tables as numeric so they can be sorted correctly:
$CVAP_WA = foreach ($i in $C) {
    [PSCustomObject]@{
        County      = $i.county
        TotalVote   = [INT]$i.TotalBallots
        RegVoters   = [INT]$i.regvoters
        Turnout_PCT = ($i.TotalBallots/$i.regvoters)*100
        CVAP        = [INT]($B | ? {$_.GeoName -match $i.county}).CVAP_EST
    }
}
PS C:\Politics> $CVAP_WA | sort -desc TotalVote |ft -auto -wrap
County TotalVote RegVoters Turnout_PCT CVAP CVAP_TV_PCT CVAP_RV_PCT
------ --------- --------- ----------- ---- ----------- -----------
King 973088 1170638 83.189 1299290 74.893 90.099
Pierce 349377 442985 78.86 554975 62.959 79.837
Snohomish 334354 415504 80.461 478440 69.832 86.81
Spokane 227007 282442 80.346 342060 66.398 82.555
Clark 193102 243155 79.453 284190 67.911 85.52
Mine are both related to file copying...
Square Brackets in File Names
I once had to move a very large/complicated folder structure using Move-Item -Path C:\Source -Destination C:\Dest. At the end of the process there were still a number of files in the source directory. I noticed that every remaining file had square brackets in the name.
The problem was that the -Path parameter treats square brackets as wildcards.
E.g. square brackets define character-range wildcards, so
Move-Item -Path C:\Source\Log[0-9][0-9][0-9].log
would match Log001 through Log999 rather than a literal bracketed name.
In my case, to avoid square brackets being interpreted as wildcards, I should have used the -LiteralPath parameter.
ErrorActionPreference
The $ErrorActionPreference variable is ignored when using Move-Item and Copy-Item with the -Verbose parameter.
Treating the ExitCode of a Process as a Boolean.
eg, with this code:
$p = Start-Process foo.exe -NoNewWindow -Wait -PassThru
if ($p.ExitCode) {
    # handle error
}
things are good, unless say foo.exe doesn't exist or otherwise fails to launch.
in that case $p will be $null, and [bool]($null.ExitCode) is False.
a simple fix is to replace the logic with if ($p.ExitCode -ne 0) {},
however for clarity of code imo the following is better: if (($p -eq $null) -or ($p.ExitCode -ne 0)) {}