Comparing Variables within Powershell

Ok. So I thought this would have been easy, but I am hitting a snag.
$var = (Get-ItemProperty "HKCU:\SOFTWARE\SAP\General" -Name "BrowserControl")."BrowserControl"
$var2 = "HKCU:\SOFTWARE\SAP\General"
$var3 = @('1','0')
#if (($var -eq ($var3 -join'')))
#if (Compare-Object -IncludeEqual $var $var3 -SyncWindow 0)
if ($var -eq $var3)
{
Write-Output "Registry hive exists"
exit 1
}
else
{
Write-Output "Registry hive doesn't exists"
#New-ItemProperty -Path $var2 -name "BrowserControl" -Value "1" -PropertyType "DWORD" -Force | Out-Null
}
If 1 or 0 is returned from BrowserControl, I want it to be a match. If anything else is returned, no match.
If BrowserControl is set to 1, it works. If it is set to 0 or any number other than 1 it doesn't match.
I know I can use else-if and add a couple more lines of code, but I was really wanting to get this to work.
As you can see, I have tried different comparison methods. I also tried (0,1), ('0','1'), 0,1 for var3. None of those worked either.
So... what am I missing?

You cannot meaningfully use an array as the RHS (right-hand side) of the -eq operator.[1]
However, PowerShell has dedicated operators for testing whether a given single value is contained in a collection (more accurately: equal to one of the elements of a collection), namely -in and its operands-reversed counterpart, -contains.
In this case, -in makes for more readable code:
if ($var -in $var3) # ...
[1] PowerShell quietly accepts an array (collection) as the RHS, but - uselessly - stringifies it, by concatenating the elements with a single space by default. E.g., '1 2' -eq 1, 2 yields $true.
By contrast, using an array as the LHS of -eq is meaningfully supported: the RHS scalar then acts as a filter, returning the sub-array of equal LHS elements; e.g. 1, 2, 3, 2 -eq 2 returns 2, 2
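For completeness, a minimal sketch of how the check from the question could look with -in (reusing the question's variables; the New-ItemProperty line is uncommented here purely for illustration):
$var  = (Get-ItemProperty "HKCU:\SOFTWARE\SAP\General" -Name "BrowserControl")."BrowserControl"
$var2 = "HKCU:\SOFTWARE\SAP\General"
$var3 = @('1','0')
if ($var -in $var3)
{
    Write-Output "Registry hive exists"
    exit 1
}
else
{
    Write-Output "Registry hive doesn't exist"
    New-ItemProperty -Path $var2 -Name "BrowserControl" -Value "1" -PropertyType "DWORD" -Force | Out-Null
}
Note that -in still matches when BrowserControl comes back as a DWORD ([int]): the RHS element is converted to the LHS's type, so 1 -eq '1' is $true.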


Get newly added value in Array Powershell

for (;;) {
#Get All Files from the Folder
$FolderItems = @(Get-PnPFolderItem -FolderSiteRelativeUrl $FolderURL -ItemType File)
Write-Host "Total Number of Files in the Folder:" $FolderItems.Count
if ($FolderItems.Count -gt $oldCount) {
foreach ($item in $FolderItems) {
if ($oldFolderItems -contains $item) {
}
else {
Write-Host $item.Name
}
}
}
$oldCount = $FolderItems.Count
$oldFolderItems = $FolderItems
timeout 180
}
It prints all the names instead of the one new item
tl;dr
Replace your foreach loop with the following call to Compare-Object:
# Compare the new and the old collection items by their .Name property
# and output the name of those that are unique to the new collection.
Compare-Object -Property Name $FolderItems $oldFolderItems |
Where-Object SideIndicator -eq '<=' |
ForEach-Object Name
You should also initialize $oldFolderItems to $null and $oldCount to 0, to be safe, and - unless you want all names to be output in the first iteration - change the enclosing if statement to:
if ($oldFolderItems -and $FolderItems.Count -gt $oldCount) { # ...
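Putting those pieces together, a sketch of what the whole polling loop might then look like (Start-Sleep stands in for the external timeout call; everything else reuses the question's variables):
$oldFolderItems = $null
$oldCount = 0
for (;;) {
    # Get all files from the folder
    $FolderItems = @(Get-PnPFolderItem -FolderSiteRelativeUrl $FolderURL -ItemType File)
    Write-Host "Total Number of Files in the Folder:" $FolderItems.Count
    if ($oldFolderItems -and $FolderItems.Count -gt $oldCount) {
        # Output the names that exist only in the new collection
        Compare-Object -Property Name $FolderItems $oldFolderItems |
            Where-Object SideIndicator -eq '<=' |
            ForEach-Object Name
    }
    $oldCount = $FolderItems.Count
    $oldFolderItems = $FolderItems
    Start-Sleep -Seconds 180
}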
Note: The immediate - but inefficient - fix to your attempt would have been the following, for the reasons explained in the next section:
if ($oldFolderItems.Name -contains $item.Name) { # Compare by .Name values
Note: $oldFolderItems.Name actually returns the array of .Name property values of the elements in collection $oldFolderItems, which is a convenient feature named member-access enumeration.
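As a purely illustrative aside (any collection of objects would do; a home-directory listing is used here as stand-in data), member-access enumeration looks like this:
$items = Get-ChildItem $HOME          # any collection of objects
$items.Name                           # array of each element's .Name value (PSv3+)
$items.Name -contains 'Desktop'       # now a plain string comparison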
As for what you tried:
It's unclear what .NET type Get-PnPFolderItem returns instances of, but it's fair to assume that the type is a .NET reference type (as opposed to a value type).
Unless a reference type is explicitly designed to compare its instances based on identifying properties,[1] reference equality is tested for in equality test-based operations such as -contains (but also in other equality-comparison operations, such as with -in and -eq), i.e. only two references to the very same instance are considered equal.
Therefore, using -contains in your case won't work, because the elements of the collections - even if they conceptually represent the same objects - are distinct instances that compare as unequal.
A simplified example, using System.IO.DirectoryInfo instances, as output by Get-Item:
# !! Returns $false, because the two [System.IO.DirectoryInfo]
# !! instances are distinct objects.
@(Get-Item /) -contains (Get-Item /)
Therefore, instances of .NET reference types must be compared by the value of an identifying property (if available, such as .Name in this case) rather than as a whole.
To discover whether a given instance is one of a .NET reference type, access the type's .IsValueType property: a return value of $false indicates a reference type; e.g.:
(Get-Item /).GetType().IsValueType # -> $false -> reference type
# Equivalent, with a type literal
[System.IO.DirectoryInfo].IsValueType # -> $false
[1] A notable example is the [string] type, which, as an exception, generally behaves like a value type, so that the following is still $true, despite technically distinct instances being involved: $s1 = 'foo'; $s2 = 'f' + 'oo'; $s1 -eq $s2

Unexpected Behaviour with Where-Object

I just came across an unexpected behaviour of Where-Object which I couldn't find any explanation for:
$foo = $null | Where-Object {$false}
$foo -eq $null
> True
($null, 1 | Measure-Object).Count
> 1
($foo, 1 | Measure-Object).Count
> 1
($null, $null, 1 | Measure-Object).Count
> 1
($foo, $foo, 1 | Measure-Object).Count
> 0
If the condition of Where-Object is false, $foo should be $null (which appears to be correct).
However, piping $foo at least twice before any value into the pipeline seems to break it.
What is causing this?
Other inconsistencies:
($foo, $null, 1 | Measure-Object).Count
> 1
($foo, $null, $foo, 1 | Measure-Object).Count
> 0
($null, $foo, $null, 1 | Measure-Object).Count
> 1
($foo, 1, $foo, $foo | Measure-Object).Count
> 1
($null, $foo, $null, $foo, 1 | Measure-Object).Count
> 0
tl;dr:
Not all apparent $null values are the same, as Jeroen Mostert's comments indicate: PowerShell has two types of null that situationally behave differently - see the next section.
Additionally, you're seeing perhaps surprising Measure-Object behavior and a pipeline bug - see the bottom section.
It's best to eliminate Measure-Object from your test commands and simply invoke .Count directly on your arrays; e.g. (the simplest way to create the type of null as in your question is: $foo = & {}):
($foo, $null, 1).Count yields 3
($null, $foo, $null, $foo, 1).Count yields 5
As you can see, both types of null (discussed below) properly become elements of an array.
There are two distinct kinds of null values in PowerShell:
There's bona fide scalar null (corresponding to null in C#, for instance).
This null is contained in the automatic $null variable.
.NET methods may return it. (While PowerShell code may output it too, doing so is best avoided).
There's also the enumerable "collection null" (also called "AutomationNull", based on its class name), which is technically the System.Management.Automation.Internal.AutomationNull.Value singleton, which is itself a [psobject] instance.
This value is technically output by the pipeline when PowerShell commands (both binary cmdlets and PowerShell scripts/functions) produce no output.
The simplest way to get this value is with & {}, i.e. by executing an empty script block; of course, you can also use [System.Management.Automation.Internal.AutomationNull]::Value explicitly.
Unfortunately, the collection null value is nontrivial to distinguish from the scalar null, as of PowerShell 7.2:
GitHub issue #13465 proposes allowing detection of collection null via $var -is [AutomationNull] in a future PowerShell version.
For now, there are several workarounds for testing whether a given value $var contains collection null; perhaps the simplest (but non-obvious) is:
$null -eq $var -and $var -is [psobject] is $true only if $var contains the collection null value, because only collection null is technically an object.
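A small demonstration of that test, using & {} to produce collection null as described above:
$foo = & {}      # collection null (AutomationNull)
$bar = $null     # scalar null
$null -eq $foo -and $foo -is [psobject]   # $true  -> collection null
$null -eq $bar -and $bar -is [psobject]   # $false -> scalar null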
Behavioral differences:
In expression contexts and in parameter binding, there is no difference in that collection null is implicitly converted to $null.
Note that this means that you cannot pass collection null as an argument - see the discussion in GitHub issue #9150.
The exception in the context of expressions is the LHS of operators that support collections as their LHS: they treat collection null as an empty collection and therefore evaluate to an empty array (@()) rather than $null:
E.g., $var -replace 'foo' | ForEach-Object { 'hi' } prints 'hi' only if $var is scalar $null, not with collection null, because the -replace operation then outputs an empty array, which sends nothing through the pipeline.
See GitHub issue #3866.
In the pipeline:
Scalar $null is sent through the pipeline - it behaves like a single object: $null | ForEach-Object { '$_ is $null? ' + ($null -eq $_) } prints '$_ is $null? True';
Collection null is not sent through the pipeline - it behaves like a collection without elements; that is, just like @() | ForEach-Object { 'hi' } (sending an empty array), & {} | ForEach-Object { 'hi' } sends nothing through the pipeline, because there is nothing to enumerate, and therefore never outputs 'hi'.
Curiously, by contrast, in a foreach loop statement (as opposed to the ForEach-Object cmdlet) scalar $null too is not enumerated and the loop body is never entered in the following (ditto for collection null):
foreach ($i in $null) { 'hi' }
Measure-Object and pipeline problems:
Measure-Object generally ignores $null values, presumably by design.
This is discussed in GitHub issue #10905, which proposes introducing an -IncludeNull switch to support considering $null values on an opt-in basis. (The default behavior will not change so as not to break backward compatibility.)
However, you've discovered an outright bug in PowerShell's pipeline with respect to multi-object input involving collection nulls (as of PowerShell 7.1.2) , which Measure-Object only surfaces, as you've noted yourself:
On encountering a second collection null in multi-object input, sending objects through the pipeline unexpectedly stops:
E.g., (1, (& {}), 2, (& {}), 3, 4, 5 | Measure-Object).Count yields just 2: only 1 and 2 are counted (the collection nulls themselves are not sent through the pipeline), because the second collection null unexpectedly stops enumeration, so that the remaining objects - 3, 4, and 5 - aren't even sent to Measure-Object.
See GitHub issue #14920.
To add to mklement0's very detailed and much appreciated answer, I want to share the workaround I used:
$numbers = 3, 42, 7, 69, 13
$no1 = $numbers | Where-Object {$_ -eq 1}
$no2 = $numbers | Where-Object {$_ -eq 2}
$no3 = $numbers | Where-Object {$_ -eq 3}
Instead of piping the variables directly to ForEach-Object, which produces no output ... :
$no1, $no2, $no3 | ForEach-Object {$_}
>
... pipe the variable names to ForEach-Object and make use of Get-Variable to get the desired result:
'no1', 'no2', 'no3' | ForEach-Object {(Get-Variable $_).Value}
> 3

(AD) object equality in PowerShell

The following seems quite weird to me:
$user1 = Get-ADUser sameuser
$user2 = Get-ADUser sameuser
$user1 -eq $user2 # -> false
# the same for groups:
$group1 = Get-ADGroup samegroup
$group2 = Get-ADGroup samegroup
$group1 -eq $group2 # -> false
Actually it seems that Powershell users can be happy that 1 -eq 1 is true. Also:
"1" -eq 1 # -> true
@("1") -contains 1 # -> true
But:
$h1 = @{bla = 1}
$h2 = @{bla = 1}
$h1 -eq $h2 # -> false
$h1.GetHashCode(), $h2.GetHashCode() # -> 60847006, 5156994
# the above return values of course vary
$a1 = @(1;2;3)
$a2 = @(1;2;3)
$a1.GetHashCode(), $a2.GetHashCode() # -> 52954848, 34157931
# surprise, surprise:
$a1 -eq $a2 # no return value at all? (tested with versions 4.0 and 5.1)
($a1 -eq $a2).GetType() # or an Array?
($a1 -eq $a2).count # -> 0
Aside from these funny behaviors what really feels frustrating is that I cannot simply do it this way:
$ones = Get-ADPrincipalGroupMembership one
$seconds = Get-ADPrincipalGroupMembership second
$excl_ones = $ones | ? { $_ -notin $seconds }
But have to do something like this:
$second_nms = $seconds | % name
$excl_ones = $ones | ? { $_.name -notin $second_nms }
Am I missing something?
To understand some of the oddities you're seeing, we have to take a step back and consider the bigger picture, namely the framework PowerShell is built on top of: .NET!
Object Equality in .NET
$user1 -eq $user2 fails because $user1 and $user2 are two different objects - although they may both represent the same object in Active Directory.
When it comes to object equality in .NET, you'll need to distinguish between value equality and reference equality.
Two variables of a value type, like [int] for example, are considered equal if their underlying value is the same:
$a = 1
$b = 1
$a.Equals($b) # $true
Two variables of a reference type - anything that's not a value type - are usually only considered equal if they have the same identity - that is, they refer to the same object in memory:
$a = New-Object object
$b = New-Object object
$a.Equals($b) # $false
For all we know, $a and $b are exactly the same, but they refer to two distinct [object] instances in memory.
A type definition can override GetHashCode() (the function used to determine an object's identity) and Equals() (the function used to determine equality between two objects), so you may find that some reference types seem to act as value types when comparing them - [string] being a prime example:
$a = "test"
$b = "test"
$a.Equals($b)
The ADEntity class (the base type for all output objects in the ActiveDirectory module) doesn't attempt something like this, which is why you see the results you do.
Collection filtering in PowerShell
The above doesn't quite explain another weird thing you raised, namely this:
$a1 = @(1;2;3)
$a2 = @(1;2;3)
$a1 -eq $a2 # NOTHING! WHAT'S GOING ON HERE?
To understand what's going on here, you need to study comparison operator behavior in PowerShell itself!
All the comparison operators (-eq, -ne, -gt, -like, -match etc.) support two different modes depending on the left-hand side argument: scalar and filtering.
Scalar comparison
In scalar mode, a comparison operator takes a single object as its left-hand operand, a value expression as its right-hand operand and returns a boolean result: $true or $false.
Filtering using comparison operators
In filtering mode, a comparison operator takes a collection (an array or a list) as its left-hand operand, a value expression as its right-hand operand (just like before) and returns all the individual members of the left-hand collection that satisfy the comparison.
To see this in action, try the following:
$names = "James","Jane","John"
$prefix = "Ja"
$names -like "$prefix*"
You'll see that the -like operation returns two strings - James and Jane.
If we apply this newfound knowledge to your example
@(1;2;3) -eq @(1;2;3)
it becomes obvious why nothing is returned - the left-hand operand is clearly an array, and none of the ensuing comparisons (1 -eq @(1;2;3), 2 -eq @(1;2;3) etc.) will return $true
Now on to the practical part of your problem. Active Directory is designed in a way so that each object in the directory has a unique identifier you can use to figure out its identity - the objectGUID value. A GUID in .NET happens to be a value type, so you can safely use it as a basis for your comparison:
$ones = Get-ADPrincipalGroupMembership one
$seconds = Get-ADPrincipalGroupMembership second
$excl_ones = $ones | ? { $_.objectGUID -notin $seconds.objectGUID }
For security principals (groups, users, computers etc.), another unique identifier that's safe to use is the objectSID - security identifiers are always unique.
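For instance, a sketch of the same filtering done on SIDs rather than names, assuming (as is typical for the AD module's principal objects) that each result exposes a SID property:
$ones    = Get-ADPrincipalGroupMembership one
$seconds = Get-ADPrincipalGroupMembership second
# SecurityIdentifier compares by value, and SIDs are unique per principal
$excl_ones = $ones | Where-Object { $_.SID -notin $seconds.SID }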

Printing useful variable values in powershell (in particular, involving $null/empty string)

If I have:
$a=$null
$b=''
$c=@($null,$null)
$d='foo'
write-host $a
write-host $b
write-host $c
write-host $d
the output is
foo
I'd really like to be able to easily get output that shows the variable values, e.g.,
$Null
''
@($Null,$Null)
'foo'
I can write a function to do this, but I'm guessing/hoping there's something built-in that I'm missing. Is there, or does everyone just roll their own function for something like this?
At the moment the quickest thing I've come up with is running a value through ConvertTo-Json before printing it. It doesn't handle a plain $null, but it shows me the other values nicely.
What you're looking for is similar to Ruby's .inspect method. It's something I always loved in Ruby and do miss in PowerShell/.Net.
Unfortunately there is no such thing to my knowledge, so you will somewhat have to roll your own.
The closest you get in .Net is the .ToString() method that, at a minimum, just displays the object type (it's inherited from [System.Object]).
So you're going to have to do some checking on your own. Let's talk about the edge case checks.
Arrays
You should check if you're dealing with an array first, because PowerShell often unrolls arrays and coalesces objects for you so if you start doing other checks you may not handle them correctly.
To check that you have an array:
$obj -is [array]
1 -is [array] # false
1,2,3 -is [array] # true
,1 -is [array] #true
In the case of an array, you'll have to iterate it if you want to properly serialize its elements as well. This is basically the part where your function will end up being recursive.
function Format-MyObject {
    param(
        $obj
    )
    if ($obj -is [array]) {
        # initial array display, like "@(" or "["
        foreach ($o in $obj) {
            # recurse on each element, not on the whole array
            Format-MyObject $o
        }
        # closing array display, like ")" or "]"
    }
}
Nulls
Simply check if it's equal to $null:
$obj -eq $null
Strings
You can first test that you're dealing with a string by using -is [string].
For empty, you can compare the string to an empty string, or better, to [string]::Empty. You can also use the .IsNullOrEmpty() method, but only if you've already ruled out a null value (or checked that it is indeed a string):
if ($obj -is [string]) {
# pick one
if ([string]::IsNullOrEmpty($obj)) {
# display empty string
}
if ($obj -eq [string]::Empty) {
# display empty string
}
if ($obj -eq "") { # this has no advantage over the previous test
# display empty string
}
}
Alternative
You could use the built-in XML serialization, then parse the XML to get the values out of it.
It's work (enough that I'm not going to do it in an SO answer), but it removes a lot of potential human error, and sort of future-proofs the approach.
The basic idea:
$serialized = [System.Management.Automation.PSSerializer]::Serialize($obj) -as [xml]
Now, use the built in XML methods to parse it and pull out what you need. You still need to convert some stuff to other stuff to display the way you want (like interpreting <nil /> and the list of types to properly display arrays and such), but I like leaving the actual serialization to an official component.
Quick example:
[System.Management.Automation.PSSerializer]::Serialize(@(
$null,
1,
'string',
@(
'start of nested array',
$null,
'2 empty strings next',
'',
([string]::Empty)
)
)
)
And the output:
<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04">
<Obj RefId="0">
<TN RefId="0">
<T>System.Object[]</T>
<T>System.Array</T>
<T>System.Object</T>
</TN>
<LST>
<Nil />
<I32>1</I32>
<S>string</S>
<Obj RefId="1">
<TNRef RefId="0" />
<LST>
<S>start of nested array</S>
<Nil />
<S>2 empty strings next</S>
<S></S>
<S></S>
</LST>
</Obj>
</LST>
</Obj>
</Objs>
I shared two functions that reveal PowerShell values (including the empty $Nulls, empty arrays etc.) further than they usually are displayed:
One serializes the PowerShell objects to a PowerShell Object Notation (PSON), whose ultimate goal is to be able to reverse everything with the standard command Invoke-Expression and parse it back into a PowerShell object.
The other is the ConvertTo-Text (alias CText) function that I used in my Log-Entry framework. Note the specific line I wrote in the example: Log "Several examples that usually aren't displayed by Write-Host:" $NotSet @() @(@()) @(@(), @()) @($Null)
Function Global:ConvertTo-Text([Alias("Value")]$O, [Int]$Depth = 9, [Switch]$Type, [Switch]$Expand, [Int]$Strip = -1, [String]$Prefix, [Int]$i) {
Function Iterate($Value, [String]$Prefix, [Int]$i = $i + 1) {ConvertTo-Text $Value -Depth:$Depth -Strip:$Strip -Type:$Type -Expand:$Expand -Prefix:$Prefix -i:$i}
$NewLine, $Space = If ($Expand) {"`r`n", ("`t" * $i)} Else{"", ""}
If ($O -eq $Null) {$V = '$Null'} Else {
$V = If ($O -is "Boolean") {"`$$O"}
ElseIf ($O -is "String") {If ($Strip -ge 0) {'"' + (($O -Replace "[\s]+", " ") -Replace "(?<=[\s\S]{$Strip})[\s\S]+", "...") + '"'} Else {"""$O"""}}
ElseIf ($O -is "DateTime") {$O.ToString("yyyy-MM-dd HH:mm:ss")}
ElseIf ($O -is "ValueType" -or ($O.Value.GetTypeCode -and $O.ToString.OverloadDefinitions)) {$O.ToString()}
ElseIf ($O -is "Xml") {(@(Select-XML -XML $O *) -Join "$NewLine$Space") + $NewLine}
ElseIf ($i -gt $Depth) {$Type = $True; "..."}
ElseIf ($O -is "Array") {"@(", @(&{For ($_ = 0; $_ -lt $O.Count; $_++) {Iterate $O[$_]}}), ")"}
ElseIf ($O.GetEnumerator.OverloadDefinitions) {"@{", (@(ForEach($_ in $O.Keys) {Iterate $O.$_ "$_ = "}) -Join "; "), "}"}
ElseIf ($O.PSObject.Properties -and !$O.value.GetTypeCode) {"{", (@(ForEach($_ in $O.PSObject.Properties | Select -Exp Name) {Iterate $O.$_ "$_`: "}) -Join "; "), "}"}
Else {$Type = $True; "?"}}
If ($Type) {$Prefix += "[" + $(Try {$O.GetType()} Catch {$Error.Remove($Error[0]); "$Var.PSTypeNames[0]"}).ToString().Split(".")[-1] + "]"}
"$Space$Prefix" + $(If ($V -is "Array") {$V[0] + $(If ($V[1]) {$NewLine + ($V[1] -Join ", $NewLine") + "$NewLine$Space"} Else {""}) + $V[2]} Else {$V})
}; Set-Alias CText ConvertTo-Text -Scope:Global -Description "Convert value to readable text"
ConvertTo-Text
The ConvertTo-Text function (alias CText) recursively converts a PowerShell object to readable text; this includes hash tables, custom objects and revealing type details (like $Null vs an empty string).
Syntax
ConvertTo-Text [<Object>] [[-Depth] <int>] [[-Strip] <int>] [[-Prefix] <string>] [-Expand] [-Type]
Parameters
<Object>
The object (position 0) that should be converted to a readable value.
-Depth <int>
The maximal number of recursive iterations to reveal embedded objects.
The default depth for ConvertTo-Text is 9.
-Strip <int>
Truncates strings at the given length and removes redundant white space characters if the value supplied is equal to or larger than 0. Set -Strip -1 to prevent truncating and the removal of white space characters.
The default value for ConvertTo-Text is -1.
-Expand
Expands embedded objects over multiple lines for better readability.
-Type
Explicitly reveals the type of the object by adding [<Type>] in front of the objects.
Note: the parameter $Prefix is for internal use.
Examples
The following command returns a string that describes the object contained by the $var variable:
ConvertTo-Text $Var
The following command returns a string containing the hash table as shown in the example (rather than System.Collections.DictionaryEntry...):
ConvertTo-Text @{one = 1; two = 2; three = 3}
The following command reveals values (as e.g. $Null) that are usually not displayed by PowerShell:
ConvertTo-Text @{Null = $Null; EmptyString = ""; EmptyArray = @(); ArrayWithNull = @($Null); DoubleEmptyArray = @(@(), @())} -Expand
The following command returns a string revealing the WinNT User object up to a level of 5 deep and expands the embedded object over multiple lines:
ConvertTo-Text ([ADSI]"WinNT://./$Env:Username") -Depth 5 -Expand
A quick self-rolled option good for some datatypes.
function Format-MyObject {
param(
$obj
)
#equality comparison order is important due to array -eq overloading
if ($null -eq $obj)
{
return 'null'
}
#Specify depth because the default is 2, because powershell
return ConvertTo-Json -Depth 100 $obj
}

Powershell pitfalls

What Powershell pitfalls have you fallen into? :-)
Mine are:
# -----------------------------------
function foo()
{
@("text")
}
# Expected 1, actually 4.
(foo).length
# -----------------------------------
if(@($null, $null))
{
Write-Host "Expected to be here, and I am here."
}
if(@($null))
{
Write-Host "Expected to be here, BUT NEVER EVER."
}
# -----------------------------------
function foo($a)
{
# I thought this is right.
#if($a -eq $null)
#{
# throw "You can't pass $null as argument."
#}
# But actually it should be:
if($null -eq $a)
{
throw "You can't pass $null as argument."
}
}
foo @($null, $null)
# -----------------------------------
# There is try/catch, but no callstack reported.
function foo()
{
bar
}
function bar()
{
throw "test"
}
# Expected:
# At bar() line:XX
# At foo() line:XX
#
# Actually some like this:
# At bar() line:XX
foo
Would like to know yours to walk them around :-)
My personal favorite is
function foo() {
param ( $param1, $param2 = $(throw "Need a second parameter"))
...
}
foo (1,2)
For those unfamiliar with powershell, that line throws because instead of passing 2 parameters it actually creates an array and passes one parameter. You have to call it as follows:
foo 1 2
Another fun one. Not handling an expression by default writes it to the pipeline. Really annoying when you don't realize a particular function returns a value.
function example() {
    param ( $p1 )
    if ( $p1 ) {
        42
    }
    "done"
}
PS> example $true
42
done
$files = Get-ChildItem . -inc *.extdoesntexist
foreach ($file in $files) {
"$($file.Fullname.substring(2))"
}
Fails with:
You cannot call a method on a null-valued expression.
At line:3 char:25
+ $file.Fullname.substring <<<< (2)
Fix it like so:
$files = @(Get-ChildItem . -inc *.extdoesntexist)
foreach ($file in $files) {
"$($file.Fullname.substring(2))"
}
Bottom line is that the foreach statement will loop on a scalar value even if that scalar value is $null. When Get-ChildItem in the first example returns nothing, $files gets assigned $null. If you are expecting an array of items to be returned by a command but there is a chance it will only return 1 item or zero items, put @() around the command. Then you will always get an array - be it of 0, 1 or N items. Note: If the item is already an array, putting @() around it has no effect - it will still be the very same array (i.e. there is no extra array wrapper).
# The pipeline doesn't enumerate hashtables.
$ht = @{"foo" = 1; "bar" = 2}
$ht | measure
# Workaround: call GetEnumerator
$ht.GetEnumerator() | measure
Here are my top 5 PowerShell gotchas
Here is something I've stumbled upon lately (PowerShell 2.0 CTP):
$items = "item0", "item1", "item2"
$part = ($items | select-string "item0")
$items = ($items | where {$part -notcontains $_})
What do you think $items will be at the end of the script?
I was expecting "item1", "item2" but instead the value of $items is: "item0", "item1", "item2".
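The likely culprit is that Select-String emits [MatchInfo] objects rather than strings, so $part -notcontains $_ never finds a match against the original strings and everything is kept. A sketch of a fix that compares strings to strings via the .Line property:
$items = "item0", "item1", "item2"
# .Line holds the matched input string ("item0")
$part  = ($items | Select-String "item0").Line
$items = ($items | Where-Object { $part -notcontains $_ })
$items   # -> item1, item2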
Say you've got the following XML file:
<Root>
<Child />
<Child />
</Root>
Run this:
PS > $myDoc = [xml](Get-Content $pathToMyDoc)
PS > @($myDoc.SelectNodes("/Root/Child")).Count
2
PS > @($myDoc.Root.Child).Count
2
Now edit the XML file so it has no Child nodes, just the Root node, and run those statements again:
PS > $myDoc = [xml](Get-Content $pathToMyDoc)
PS > @($myDoc.SelectNodes("/Root/Child")).Count
0
PS > @($myDoc.Root.Child).Count
1
That 1 is annoying when you want to iterate over a collection of nodes using foreach if and only if there actually are any. This is how I learned that you cannot use the XML handler's property (dot) notation as a simple shortcut. I believe what's happening is that SelectNodes returns a collection of 0 nodes. When wrapped in @(), it is transformed from an XPathNodeList to an Object[] (check GetType()), but the length is preserved. The dynamically generated $myDoc.Root.Child property (which essentially does not exist) returns $null. When $null is wrapped in @(), it becomes an array of length 1.
On Functions...
The subtleties of processing pipeline input in a function with respect to using $_ or $input and with respect to the begin, process, and end blocks.
How to handle the six principal equivalence classes of input delivered to a function (no input, null, empty string, scalar, list, list with null and/or empty) -- for both direct input and pipeline input -- and get what you expect.
The correct calling syntax for sending multiple arguments to a function.
I discuss these points and more at length in my Simple-Talk.com article Down the Rabbit Hole- A Study in PowerShell Pipelines, Functions, and Parameters and also provide an accompanying wallchart--here is a glimpse showing the various calling syntax pitfalls for a function taking 3 arguments:
On Modules...
These points are expounded upon in my Simple-Talk.com article Further Down the Rabbit Hole: PowerShell Modules and Encapsulation.
Dot-sourcing a file inside a script using a relative path is relative to your current directory -- not the directory where the script resides!
To be relative to the script use this function to locate your script directory: [Update for PowerShell V3+: Just use the builtin $PSScriptRoot variable!]
function Get-ScriptDirectory
{ Split-Path $script:MyInvocation.MyCommand.Path }
Modules must be stored as ...Modules\name\name.psm1 or ...\Modules\any_subpath\name\name.psm1. That is, you cannot just use ...Modules\name.psm1 -- the name of the immediate parent of the module must match the base name of the module. This chart shows the various failure modes when this rule is violated:
2015.06.25 A Pitfall Reference Chart
Simple-Talk.com just published the last of my triumvirate of in-depth articles on PowerShell pitfalls. The first two parts are in the form of a quiz that helps you appreciate a select group of pitfalls; the last part is a wallchart (albeit it would need a rather high-ceilinged room) containing 36 of the most common pitfalls (some adapted from answers on this page), giving concrete examples and workarounds for most. Read more here.
There are some tricks to building command lines for utilities that were not built with Powershell in mind:
To run an executable whose name starts with a number, preface it with an Ampersand (&).
& 7zip.exe
To run an executable with a space anywhere in the path, preface it with an Ampersand (&) and wrap it in quotes, as you would any string. This means that strings in a variable can be executed as well.
# Executing a string with a space.
& 'c:\path with spaces\command with spaces.exe'
# Executing a string with a space, after first saving it in a variable.
$a = 'c:\path with spaces\command with spaces.exe'
& $a
Parameters and arguments are passed to legacy utilities positionally. So it is important to quote them the way the utility expects to see them. In general, one would quote when it contains spaces or does not start with a letter, number or dash (-).
C:\Path\utility.exe '/parameter1' 'Value #1' 1234567890
Variables can be used to pass string values containing spaces or special characters.
$b = 'string with spaces and special characters (-/&)'
utility.exe $b
Alternatively array expansion can be used to pass values as well.
$c = @('Value #1', $Value2)
utility.exe $c
If you want Powershell to wait for an application to complete, you have to consume the output, either by piping the output to something or using Start-Process.
# Saving output as a string to a variable.
$output = ping.exe example.com | Out-String
# Piping the output.
ping stackoverflow.com | where { $_ -match '^reply' }
# Using Start-Process affords the most control.
Start-Process -Wait SomeExecutable.com
Because of the way they display their output, some command line utilities will appear to hang when run inside of Powershell_ISE.exe, particularly when awaiting input from the user. These utilities will usually work fine when run within the Powershell.exe console.
PowerShell Gotchas
There are a few pitfalls that repeatedly reappear on StackOverflow. It is recommended to do some research if you are not familiar with these PowerShell gotchas before asking a new question. It might even be a good idea to investigate these PowerShell gotchas before answering a PowerShell question to make sure that you teach the questioner the right thing.
TLDR: In PowerShell:
the comparison equality operator is: -eq
(Stackoverflow example: Powershell simple syntax if condition not working)
parentheses and commas are not used with arguments
(Stackoverflow example: How do I pass multiple parameters into a function in PowerShell?)
output properties are based on the first object in the pipeline
(Stackoverflow example: Not all properties displayed)
the pipeline unrolls
(Stackoverflow example: Pipe complete array-objects instead of array items one at a time?)
a. single item collections
(Stackoverflow example: Powershell ArrayList turns a single array item back into a string)
b. embedded arrays
(Stackoverflow example: Return Multidimensional Array From Function)
c. output collections
(Stackoverflow example: Why does PowerShell flatten arrays automatically?)
$Null should be on the left side of the equality comparison operator
(Stackoverflow example: Should $null be on the left side of the equality comparison)
parentheses and assignments choke the pipeline
(Stackoverflow example: Importing 16MB CSV Into Variable Creates >600MB's Memory Usage)
the increase assignment operator (+=) might become expensive
Stackoverflow example: Improve the efficiency of my PowerShell script
The Get-Content cmdlet returns separate lines
Stackoverflow example: Multiline regex to match config block
Examples and explanations
Some of the gotchas might really feel counter-intuitive but often can be explained by some very nice PowerShell features along with the pipeline, expression/argument mode and type casting.
1. The comparison equality operator is: -eq
Unlike the Microsoft scripting language VBScript and some other programming languages, the comparison equality operator differs from the assignment operator (=) and is: -eq.
Note: assigning a value to a variable might pass through the value if needed:
$a = $b = 3 # The value 3 is assigned to both variables $a and $b.
This implies that the following statement might be unexpectedly truthy or falsy:
If ($a = $b) {
# (assigns $b to $a and) returns a truthy if $b is e.g. 3
} else {
# (assigns $b to $a and) returns a falsy if $b is e.g. 0
}
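For contrast, the intended comparison looks like this (note -eq rather than =):
$a = 3
$b = 3
If ($a -eq $b) {
    'equal'      # $a is compared with $b; nothing is assigned
} else {
    'not equal'
}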
2. Parentheses and commas are not used with arguments
Unlike a lot of other programming languages and the way a primitive PowerShell function is defined, calling a function doesn't require parentheses or commas for their related arguments. Use spaces to separate the parameter arguments:
function MyFunction($Param1, $Param2, $Param3) {
# ...
}
MyFunction 'one' 'two' 'three' # assigns 'one' to $Param1, 'two' to $Param2, 'three' to $Param3
Parentheses and commas are used for calling (.Net) methods.
Commas are used to define arrays. MyFunction 'one', 'two', 'three' (or MyFunction('one', 'two', 'three')) will load the array @('one', 'two', 'three') into the first parameter ($Param1).
Parentheses will interpret the contents they contain as a single collection loaded into memory (and choke the PowerShell pipeline) and should only be used as such, e.g. to call an embedded function:
MyFunction (MyOtherFunction) # passes the results MyOtherFunction to the first positional parameter of MyFunction ($Param1)
MyFunction One $Two (getThree) # assigns 'One' to $Param1, $Two to $Param2, the results of getThree to $Param3
Note: quoting text arguments (such as the word One in the latter example) is only required when the argument contains spaces or special characters.
3. Output properties are based on the first object in the pipeline
In a PowerShell pipeline each object is processed and passed on by a cmdlet (that is implemented for the middle of a pipeline) similar to how objects are processed and passed on by workstations in an assembly line. Meaning each cmdlet processes one item at a time while the prior cmdlet (workstation) simultaneously processes the upcoming one. This way, the objects aren't loaded into memory at once (less memory usage) and could already be processed before the next one is supplied (or even exists). The disadvantage of this feature is that there is no oversight of what (or how many) objects are expected to follow.
Therefore most PowerShell cmdlets assume that all the objects in the pipeline correspond to the first one and have the same properties which is usually the case, but not always...
$List =
[pscustomobject]@{ one = 'a1'; two = 'a2' },
[pscustomobject]@{ one = 'b1'; two = 'b2'; three = 'b3' }
$List |Select-Object *
one two
--- ---
a1 a2
b1 b2
As you can see, the third column three is missing from the results, as it didn't exist in the first object and PowerShell was already outputting the results before it was aware of the existence of the second object.
One way to work around this behavior is to explicitly define the properties (of all the following objects) beforehand:
$List |Select-Object one, two, three
one two three
--- --- -----
a1 a2
b1 b2 b3
See also proposal: #13906 Add -UnifyProperties parameter to Select-Object
4. The pipeline unrolls
This feature might come in handy if it complies with the straightforward expectation:
$Array = 'one', 'two', 'three'
$Array.Length
3
a. single item collections
But it might get confusing:
$Selection = $Array |Select-Object -First 2
$Selection.Length
2
$Selection[0]
one
when the collection is down to a single item:
$Selection = $Array |Select-Object -First 1
$Selection.Length
3
$Selection[0]
o
Explanation
When the pipeline outputs a single item which is assigned to a variable, it is not assigned as a collection (with 1 item, like: #('one')) but as a scalar item (the item itself, like: 'one').
Which means that the property .Length (which is in fact an alias for the property .Count for an array) is no longer applied on the array but on the string: 'one'.Length, which equals 3. And in the case of the index $Selection[0], the first character of the string, 'one'[0] (which equals the character o), is returned.
Workaround
To work around this behavior, you might force the scalar item to an array using the array subexpression operator @( ):
$Selection = $Array |Select-Object -First 1
@($Selection).Length
1
@($Selection)[0]
one
Knowing that in case $Selection is already an array, it will not be further increased in depth (@(@('one', 'two')); see the next section, 4b. embedded arrays, on how embedded collections are flattened).
b. embedded arrays
When an array (or a collection) includes embedded arrays, like:
$Array = @(@('a', 'b'), @('c', 'd'))
$Array.Count
2
All the embedded items will be processed in the pipeline and consequently returns a flat array when displayed or assigned to a new variable:
$Processed = $Array |ForEach-Object { $_ }
$Processed.Count
4
$Processed
a
b
c
d
To iterate the embedded arrays, you might use the foreach statement:
foreach ($Item in $Array) { $Item.Count }
2
2
Or simply a for loop:
for ($i = 0; $i -lt $Array.Count; $i++) { $Array[$i].Count }
2
2
c. output collections
Collections are usually unrolled when they are placed on the pipeline:
function GetList {
[Collections.Generic.List[String]]@('a', 'b')
}
(GetList).GetType().Name
Object[]
To output the collection as a single item, use the comma operator ,:
function GetList {
,[Collections.Generic.List[String]]@('a', 'b')
}
(GetList).GetType().Name
List`1
5. $Null should be on the left side of the equality comparison operator
This gotcha is related to this comparison operators feature:
When the input of an operator is a scalar value, the operator returns a Boolean value. When the input is a collection, the operator returns the elements of the collection that match the right-hand value of the expression. If there are no matches in the collection, comparison operators return an empty array.
This means for scalars:
'a' -eq 'a' # returns $True
'a' -eq 'b' # returns $False
'a' -eq $Null # returns $False
$Null -eq $Null # returns $True
and for collections, the matching elements are returned which evaluates to either a truthy or falsy condition:
'a', 'b', 'c' -eq 'a' # returns 'a' (truthy)
'a', 'b', 'c' -eq 'd' # returns an empty array (falsy)
'a', 'b', 'c' -eq $Null # returns an empty array (falsy)
'a', $Null, 'c' -eq $Null # returns $Null (falsy)
'a', $Null, $Null -eq $Null # returns @($Null, $Null) (truthy!!!)
$Null, $Null, $Null -eq $Null # returns @($Null, $Null, $Null) (truthy!!!)
In other words, to check whether a variable is $Null (and exclude a collection containing multiple $Nulls), put $Null at the LHS (left hand side) of the equality comparison operator:
if ($Null -eq $MyVariable) { ...
6. Parentheses and assignments choke the pipeline
The PowerShell Pipeline is not just a series of commands connected by pipeline operators (|) (ASCII 124). It is a concept to simultaneously stream individual objects through a sequence of cmdlets. If a cmdlet (or function) is written according to the Strongly Encouraged Development Guidelines and implemented for the middle of a pipeline, it takes each single object from the pipeline, processes it and passes the results to the next cmdlet just before it takes and processes the next object in the pipeline. Meaning that for a simple pipeline as:
Import-Csv .\Input.csv |Select-Object -Property Column1, Column2 |Export-Csv .\Output.csv
As the last cmdlet writes an object to the .\Output.csv file, the Select-Object cmdlet selects the properties of the next object and Import-Csv reads the next object from the .\Input.csv file (see also: Pipeline in Powershell). This will keep the memory usage low (especially when there are lots of objects/records to process) and therefore might result in a faster throughput. To facilitate the pipeline, the PowerShell objects are quite fat as each individual object contains all the property information (along with e.g. the property name).
Therefore it is not a good practice to choke the pipeline for no reason. There are two scenarios that choke the pipeline:
Parentheses, e.g.:
(Import-Csv .\Input.csv) |Select-Object -Property Column1, Column2 |Export-Csv .\Output.csv
Where all the .\Input.csv records are loaded as an array of PowerShell objects into memory before passing it on to the Select-Object cmdlet.
Assignments, e.g.:
$Objects = Import-Csv .\Input.csv
$Objects |Select-Object -Property Column1, Column2 |Export-Csv .\Output.csv
Where all the .\Input.csv records are loaded as an array of PowerShell objects into $Objects (memory as well) before passing it on to the Select-Object cmdlet.
7. The increase assignment operator (+=) might become expensive
The increase assignment operator (+=) is syntactic sugar to increase and assign primitives, e.g. $a += $b where $a is assigned $a + $b. The increase assignment operator can also be used for adding new items to a collection (or to String types and hash tables) but might get pretty expensive as the cost increases with each iteration (the size of the collection). The reason for this is that objects such as array collections are immutable and the right variable is not just appended but appended and reassigned to the left variable. For details see also: avoid using the increase assignment operator (+=) to create a collection
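A sketch of the usual alternatives: let PowerShell itself collect the loop output, or grow a mutable list in place (the loop bounds are arbitrary; ::new() requires PowerShell 5+):
# Expensive: the array is recreated and reassigned on every iteration
$result = @()
foreach ($i in 1..10000) { $result += $i }
# Cheaper: let the statement itself collect the output into an array
$result = foreach ($i in 1..10000) { $i }
# Also cheap: a mutable list that grows in place
$list = [System.Collections.Generic.List[int]]::new()
foreach ($i in 1..10000) { $list.Add($i) }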
8. The Get-Content cmdlet returns separate lines
There are probably quite a few more cmdlet gotchas, knowing that there exist a lot of (internal and external) cmdlets. In contrast to engine-related gotchas, these gotchas are often easier to highlight (with e.g. a warning), as happened with ConvertTo-Json (see: Unexpected ConvertTo-Json results? Answer: it has a default -Depth of 2), or to "fix". But there is a very classic gotcha in Get-Content which ties into the general PowerShell concept of streaming objects (in this case lines) rather than passing everything (the whole contents of the file) at once:
(Get-Content .\Input.txt) -Match '\r?\n.*Test.*\r?\n'
Will never work because, by default, Get-Content returns a stream of objects where each object contains a single string (a line without any line breaks).
(Get-Content .\Input.txt).GetType().Name
Object[]
(Get-Content .\Input.txt)[0].GetType().Name
String
In fact:
(Get-Content .\Input.txt) -Match 'Test'
Returns all the lines with the word Test in them, as Get-Content puts every single line on the pipeline, and when the input is a collection, the operator returns the elements of the collection that match the right-hand value of the expression.
Note: since PowerShell version 3, Get-Content has a -Raw parameter that reads all the content of the concerned file at once, meaning that this: (Get-Content -Raw .\Input.txt) -Match '\r?\n.*Test.*\r?\n' will work, as it loads the whole file into memory as a single string.
alex2k8, I think this example of yours is good to talk about:
# -----------------------------------
function foo($a){
# I thought this is right.
#if($a -eq $null)
#{
# throw "You can't pass $null as argument."
#}
# But actually it should be:
if($null -eq $a)
{
throw "You can't pass $null as argument."
}
}
foo @($null, $null)
PowerShell can use some of the comparators against arrays like this:
$array -eq $value
## Returns all values in $array that equal $value
With that in mind, the original example returns two items (the two $null values in the array), which evaluates to $true because you end up with a collection of more than one item. Reversing the order of the arguments stops the array comparison.
This functionality is very handy in certain situations, but it is something you need to be aware of (just like array handling in PowerShell).
Functions 'foo' and 'bar' look equivalent.
function foo() { $null }
function bar() { }
E.g.
(foo) -eq $null
# True
(bar) -eq $null
# True
But:
foo | %{ "foo" }
# Prints: foo
bar | %{ "bar" }
# PRINTS NOTHING
Returning $null and returning nothing is not equivalent dealing with pipes.
This one is inspired by Keith Hill example...
function bar() {}
$list = @(bar)
$list.length
# Prints: 0
# Now let's try the same but with a temporal variable.
$tmp = bar
$list = @($tmp)
$list.length
# Prints: 1
Another one:
$x = 2
$y = 3
$a,$b = $x,$y*5
because of operator precedence, $b does not end up as 15; the command is the same as ($x,$y)*5
the correct version is
$a,$b = $x,($y*5)
The logical and bitwise operators don't follow standard precedence rules. The operator -and should have a higher priority than -or yet they're evaluated strictly left-to-right.
For example, compare logical operators between PowerShell and Python (or virtually any other modern language):
# PowerShell
PS> $true -or $false -and $false
False
# Python
>>> True or False and False
True
...and bitwise operators:
# PowerShell
PS> 1 -bor 0 -band 0
0
# Python
>>> 1 | 0 & 0
1
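When you want the conventional precedence, parentheses make the grouping explicit:
# PowerShell, grouped explicitly
PS> $true -or ($false -and $false)
True
PS> 1 -bor (0 -band 0)
1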
This works. But almost certainly not in the way you think it's working.
PS> $a = 42;
PS> [scriptblock]$b = { $a }
PS> & $b
42
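What is likely going on: the script block looks up $a when it is invoked, not when it is defined, so it sees whatever the variable holds at call time. GetNewClosure() is one way to capture the current value instead - a sketch:
PS> $a = 42
PS> [scriptblock]$b = { $a }
PS> $a = 'changed'
PS> & $b              # resolves $a at invocation time
changed
PS> $a = 42
PS> $c = { $a }.GetNewClosure()   # snapshot of $a as it is now
PS> $a = 'changed'
PS> & $c
42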
This one has tripped me up before: using $o.SomeProperty inside a double-quoted string where it should be $($o.SomeProperty).
# $x is not defined
[70]: $x -lt 0
True
[71]: [int]$x -eq 0
True
So, what's $x..?
Another one I ran into recently: [string] parameters that accept pipeline input are not strongly typed in practice. You can pipe anything at all and PS will coerce it via ToString().
function Foo
{
[CmdletBinding()]
param (
[parameter(Mandatory=$True, ValueFromPipeline=$True)]
[string] $param
)
process { $param }
}
get-process svchost | Foo
Unfortunately there is no way to turn this off. Best workaround I could think of:
function Bar
{
[CmdletBinding()]
param (
[parameter(Mandatory=$True, ValueFromPipeline=$True)]
[object] $param
)
process
{
if ($param -isnot [string]) {
throw "Pass a string you fool!"
}
# rest of function goes here
}
}
edit - a better workaround I've started using...
Add this to your custom type XML -
<?xml version="1.0" encoding="utf-8" ?>
<Types>
<Type>
<Name>System.String</Name>
<Members>
<ScriptProperty>
<Name>StringValue</Name>
<GetScriptBlock>
$this
</GetScriptBlock>
</ScriptProperty>
</Members>
</Type>
</Types>
Then write functions like this:
function Bar
{
[CmdletBinding()]
param (
[parameter(Mandatory=$True, ValueFromPipelineByPropertyName=$True)]
[Alias("StringValue")]
[string] $param
)
process
{
# rest of function goes here
}
}
Forgetting that $_ gets overwritten in blocks made me scratch my head in confusion a couple times, and similarly for multiple reg-ex matches and the $matches array. >.<
Remembering to explicitly type pscustom objects from imported data tables as numeric so they can be sorted correctly:
$CVAP_WA=foreach ($i in $C){[PSCustomObject]@{ `
County=$i.county; `
TotalVote=[INT]$i.TotalBallots; `
RegVoters=[INT]$i.regvoters; `
Turnout_PCT=($i.TotalBallots/$i.regvoters)*100; `
CVAP=[INT]($B | ? {$_.GeoName -match $i.county}).CVAP_EST }}
PS C:\Politics> $CVAP_WA | sort -desc TotalVote |ft -auto -wrap
County TotalVote RegVoters Turnout_PCT CVAP CVAP_TV_PCT CVAP_RV_PCT
------ --------- --------- ----------- ---- ----------- -----------
King 973088 1170638 83.189 1299290 74.893 90.099
Pierce 349377 442985 78.86 554975 62.959 79.837
Snohomish 334354 415504 80.461 478440 69.832 86.81
Spokane 227007 282442 80.346 342060 66.398 82.555
Clark 193102 243155 79.453 284190 67.911 85.52
Mine are both related to file copying...
Square Brackets in File Names
I once had to move a very large/complicated folder structure using Move-Item -Path C:\Source -Destination C:\Dest. At the end of the process there were still a number of files in source directory. I noticed that every remaining file had square brackets in the name.
The problem was that the -Path parameter treats square brackets as wildcards.
EG. If you wanted to copy Log001 to Log200, you could use square brackets as follows:
Move-Item -Path C:\Source\Log[001-200].log.
In my case, to avoid square brackets being interpreted as wildcards, I should have used the -LiteralPath parameter.
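For example (paths are illustrative), the same move expressed both ways:
# -Path treats [ and ] as wildcard characters:
Move-Item -Path 'C:\Source\Log[1].log' -Destination 'C:\Dest'        # may match nothing
# -LiteralPath takes the name exactly as written:
Move-Item -LiteralPath 'C:\Source\Log[1].log' -Destination 'C:\Dest'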
ErrorActionPreference
The $ErrorActionPreference variable is ignored when using Move-Item and Copy-Item with the -Verbose parameter.
Treating the ExitCode of a Process as a Boolean.
eg, with this code:
$p = Start-Process foo.exe -NoNewWindow -Wait -PassThru
if ($p.ExitCode) {
# handle error
}
things are good, unless say foo.exe doesn't exist or otherwise fails to launch.
in that case $p will be $null, and [bool]($null.ExitCode) is False.
a simple fix is to replace the logic with if ($p.ExitCode -ne 0) {},
however for clarity of code imo the following is better: if (($p -eq $null) -or ($p.ExitCode -ne 0)) {}
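A sketch of that combined check (foo.exe is a placeholder; -ErrorAction SilentlyContinue just keeps a failed launch from also printing an error, and $null is put on the left per the usual PowerShell convention):
$p = Start-Process foo.exe -NoNewWindow -Wait -PassThru -ErrorAction SilentlyContinue
if (($null -eq $p) -or ($p.ExitCode -ne 0)) {
    # handle "failed to launch" as well as "ran but returned a non-zero exit code"
}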