How do I change foreach to for in PowerShell? - powershell

I want to check whether a word from one text file exists in another and print "match" or "not match". My 1st text file contains xxaavv6J, my 2nd file contains 6J6SCa.yB.
If it matches, it returns like this:
Match found:
Match found:
Match found:
Match found:
Match found:
Match found: 6J
Match found:
Match found:
Match found:
My expectation is to just print match or not match.
$X = Get-Content "C:\Users\2.txt"
$Data = Get-Content "C:\Users\d.txt"
$Split = $Data -split '(..)'
$Y = $X.Substring(0, 6)
$Z = $Y -split '(..)'
foreach ($i in $Z) {
    foreach ($j in $Split) {
        if ($i -like $j) {
            Write-Host ("Match found: {0}" -f $i, $j)
        }
    }
}

The operation -split '(..)' does not produce the result you think it does. If you take a look at the output of the following command you'll see that you're getting a lot of empty results:
PS C:\> 'xxaavv6J' -split '(..)' | % { "-$_-" }
--
-xx-
--
-aa-
--
-vv-
--
-6J-
--
Those empty values are the additional matches you're getting from $i -like $j.
I'm not quite sure why -split '(..)' gives you any non-empty values in the first place, because I would have expected it to produce 5 empty strings for an input string "xxaavv6J". Apparently it has to do with the grouping parentheses, since -split '..' (without the grouping parentheses) actually does behave as expected. Looks like with the capturing group the captured matches are returned on top of the results of the split operation.
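For comparison, the form without the capturing group produces only the empty pieces (illustrative output, following the behavior described above):
PS C:\> 'xxaavv6J' -split '..' | % { "-$_-" }
--
--
--
--
--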
Anyway, to get the behavior you want replace
... -split '(..)'
with
... |
Select-String '..' -AllMatches |
Select-Object -Expand Matches |
Select-Object -Expand Value
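Applied to the script from the question, that might look like this (a sketch, assuming the same files and variables):
$Split = $Data |
    Select-String '..' -AllMatches |
    Select-Object -Expand Matches |
    Select-Object -Expand Value
$Z = $Y |
    Select-String '..' -AllMatches |
    Select-Object -Expand Matches |
    Select-Object -Expand Value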
You can also replace the nested loop with something like this:
foreach ($i in $Z) {
    if ($Split -contains $i) {
        Write-Host "Match found: ${i}"
    }
}

A slightly different approach using regex '.Match()' should also do it.
I have added a lot of explaining comments for you:
$Test = Get-Content "C:\Users\2.txt" -Raw # Read as single string. Contains "xxaavv6J"
$Data = (Get-Content "C:\Users\d.txt") -join '' # Read as array and join the lines with an empty string.
# This will remove Newlines. Contains "6J6SCa.yB"
# Split the data and make sure every substring has two characters
# In each substring, the regex special characters need to be Escaped.
# When this is done, we join the substrings together using the pipe symbol.
$Data = ($Data -split '(.{2})' | # split on every two characters
Where-Object { $_.Length -eq 2 } | # don't care about any left over character
ForEach-Object { [Regex]::Escape($_) } ) -join '|' # join with the '|' which is an OR in regular expression
# $Data is now a string to use with regular expression: "6J|6S|Ca|\.y"
# Using '.Match()' works Case-Sensitive. To have it compare Case-Insensitive, we do this:
$Data = '(?i)' + $Data
# See if we can find one or more matches
$regex = [regex]$Data
$match = $regex.Match($Test)
# If we have found at least one match:
if ($match.Success) {
    while ($match.Success) {
        # matched text:  $match.Value
        # match start:   $match.Index
        # match length:  $match.Length
        Write-Host ("Match found: {0}" -f $match.Value)
        $match = $match.NextMatch()
    }
}
else {
    Write-Host "Not Found"
}
Result:
Match found: 6J

Further to the excellent Ansgar Wiechers' answer: if you are running Windows PowerShell 4.0 or above, then you could apply the .Where() method described in Kirk Munro's exhaustive article ForEach and Where magic methods:
With the release of Windows PowerShell 4.0, two new “magic” methods
were introduced for collection types that provide a new syntax for
accessing ForEach and Where capabilities in Windows PowerShell.
These methods are aptly named ForEach and Where. I call
these methods “magic” because they are quite magical in how they work
in PowerShell. They don’t show up in Get-Member output, even if you
apply -Force and request -MemberType All. If you roll up your
sleeves and dig in with reflection, you can find them; however, it
requires a broad search because they are private extension methods
implemented on a private class. Yet even though they are not
discoverable without peeking under the covers, they are there when you
need them, they are faster than their older counterparts, and they
include functionality that was not available in their older
counterparts, hence the “magic” feeling they leave you with when you
use them in PowerShell. Unfortunately, these methods remain
undocumented even today, almost a year since they were publicly
released, so many people don’t realize the power that is available in
these methods.
…
The Where method
Where is a method that allows you to filter a collection of objects.
This is very much like the Where-Object cmdlet, but the Where
method is also like Select-Object and Group-Object as well,
includes several additional features that the Where-Object cmdlet
does not natively support by itself. This method provides faster
performance than Where-Object in a simple, elegant command. Like
the ForEach method, any objects that are output by this method are
returned in a generic collection of type
System.Collections.ObjectModel.Collection`1[psobject].
There is only one version of this method, which can be described as
follows:
Where(scriptblock expression[, WhereOperatorSelectionMode mode[, int numberToReturn]])
As indicated by the square brackets, the expression script block is
required and the mode enumeration and the numberToReturn integer
argument are optional, so you can invoke this method using 1, 2, or 3
arguments. If you want to use a particular argument, you must provide
all arguments to the left of that argument (i.e. if you want to
provide a value for numberToReturn, you must provide values for
mode and expression as well).
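For instance, the optional arguments can be exercised like this (illustrative values of my own, consistent with the signature above):
(1..10).Where({ $_ -gt 3 })              # 4 5 6 7 8 9 10 - expression only
(1..10).Where({ $_ -gt 3 }, 'First', 2)  # 4 5            - first two matches
(1..10).Where({ $_ -gt 3 }, 'Last', 1)   # 10             - last match
(1..10).Where({ $_ -gt 5 }, 'Split')     # two collections: the matches (6..10) and the rest (1..5)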
Applied to your case (using the simplest variant Where(scriptblock expression) of the .Where() method):
$X = '6J6SCa.yB' # Get-Content "C:\Users\2.txt"
$Data = 'xxaavv6J' # Get-Content "C:\Users\d.txt"
$Split = ($Data -split '(..)').Where({$_ -ne ''})
$Y = $X.Substring(0, 6)
$Z = ($Y -split '(..)').Where{$_ -ne ''} # without parentheses
For instance, Ansgar's example changes as follows:
PS > ('xxaavv6J' -split '(..)').Where{$_ -ne ''} | % { "-$_-" }
-xx-
-aa-
-vv-
-6J-

Related

Question regarding incrementing a string value in a text file using Powershell

Just beginning with Powershell. I have a text file that contains the string "CloseYear/2019" and I am looking for a way to increment the "2019" to "2020". Any advice would be appreciated. Thank you.
If the question is how to update text within a file, you can do the following, which will replace the specified text with new text. The file (t.txt) is read with Get-Content, the targeted text is updated with the String class Replace method, and the file is rewritten using Set-Content.
(Get-Content t.txt).Replace('CloseYear/2019','CloseYear/2020') | Set-Content t.txt
Additional Considerations:
General incrementing would require an object type that supports incrementing. You can isolate the numeric data using -split, increment it, and create a new, joined string. This solution assumes working with 32-bit integers but can be updated to other numeric types.
$str = 'CloseYear/2019'
-join ($str -split "(\d+)" | Foreach-Object {
    if ($_ -as [int]) {
        [int]$_ + 1
    }
    else {
        $_
    }
})
Putting it all together, the following would result in incrementing all complete numbers (123 as opposed to 1 and 2 and 3 individually) in a text file. Again, this can be tailored to target more specific numbers.
$contents = Get-Content t.txt -Raw # Raw to prevent an array output
-join ($contents -split "(\d+)" | Foreach-Object {
    if ($_ -as [int]) {
        [int]$_ + 1
    }
    else {
        $_
    }
}) | Set-Content t.txt
Explanation:
-split uses regex matching to split on the matched text, resulting in an array. By default, -split removes the matched text. Creating a capture group using () ensures the matched text is kept in the output rather than removed. \d+ is a regex mechanism matching a digit (\d) one or more (+) successive times.
Using the -as operator, we can test that each item in the split array can be cast to [int]. If successful, the if statement will evaluate to true, the text will be cast to [int], and the integer will be incremented by 1. If the -as operator is not successful, the pipeline object will remain as a string and just be output.
The -join operator just joins the resulting array (from the Foreach-Object) into a single string.
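As a quick illustration of what the split step produces (sketch):
PS> 'CloseYear/2019' -split "(\d+)" | ForEach-Object { "[$_]" }
[CloseYear/]
[2019]
[]
# The empty trailing piece is harmless: it fails the -as [int] test and is re-joined as-is.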
AdminOfThings' answer is very detailed and the correct answer.
I wanted to provide another answer for options.
Depending on what your end goal is, you might need to convert the date to a datetime object for future use.
Example:
$yearString = 'CloseYear/2019'
#convert to datetime
[datetime]$dateConvert = [datetime]::new((($yearString -split "/")[-1]),1,1)
#add year
$yearAdded = $dateConvert.AddYears(1)
#if you want to display "CloseYear" with the new date and write-host
$out = "CloseYear/{0}" -f $yearAdded.Year
Write-Host $out
This approach would allow you to use $dateConvert and $yearAdded as a datetime allowing you to accurately manipulate dates and cultures, for example.

Exclude from file if line contains value from variable A OR B

I am writing to a file using StreamWriter and I want to exclude any rows that match the values contained in two parameters. I have tried the below code, but it does not output any values when I include the second condition ($file_stream -notmatch $exclude_permission_type).
$exclude_user_accounts = 'account1', 'account2', 'account3'
$exclude_permission_type = 'WRITE'
while ($file_stream = $report_input.ReadLine()) {
    if ($file_stream -notmatch $exclude_user_accounts -and $file_stream -notmatch $exclude_permission_type) {
        $_report_output.WriteLine($file_stream)
    }
}
It's clearly not possible that your code has ever worked the way you intended, even with just the first condition, because a string can never match an array of strings. <string> -notmatch <array> will always evaluate to true even if the array contains an exact match. You cannot do partial matches like that at all.
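A quick way to see this (sketch, reusing the variable from the question):
'account1' -notmatch $exclude_user_accounts   # True, even though 'account1' is in the array
# The array RHS is coerced to the single string 'account1 account2 account3'
# and used as one regex pattern, so even an exact element never matches.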
Build one regular expression from all your filter strings:
$excludes = 'account1', 'account2', 'account3', 'WRITE'
$re = ($excludes | ForEach-Object {[regex]::Escape($_)}) -join '|'
then filter your strings using that regular expression:
if ($file_stream -notmatch $re) {
    $_report_output.WriteLine($file_stream)
}

Pass a single space-delimited string as multiple arguments

I have a Powershell function in which I am trying to allow the user to add or remove items from a list by typing the word "add" or "remove" followed by a space-delimited list of items. I have an example below (slightly edited, so you can just drop the code into a powershell prompt to test it "live").
$Script:ServerList = @("Server01","Server02","Server03")
Function EditServerList (){
$Script:ServerList = $Script:ServerList |Sort -Unique
Write-host -ForegroundColor Green $Script:ServerList
$Inputs = $args
If ($Inputs[0] -eq "start"){
$Edits = Read-Host "Enter `"add`" or `"remove`" followed by a space-delimited list of server names"
#"# EditServerList $Edits
# EditServerList $Edits.split(' ')
EditServerList ($Edits.split(' ') |Where {$_ -NotLike "add","remove"})
EditServerList start
} Elseif ($Inputs[0] -eq "add"){
$Script:ServerList += $Inputs |where {$_ -NotLike $Inputs[0]}
EditServerList start
} Elseif ($Inputs[0] -eq "remove"){
$Script:ServerList = $Script:ServerList |Where {$_ -NotLike ($Inputs |Where {$_ -Notlike $Inputs[0]})}
EditServerList start
} Else {
Write-Host -ForegroundColor Red "ERROR!"
EditServerList start
}
}
EditServerList start
As you can see, the function takes in a list of arguments. The first argument is evaluated in the If/Then statements and then the rest of the arguments are treated as items to add or remove from the list.
I have tried a few different approaches to this, which you can see commented out in the first IF evaluation.
I have two problems.
When I put in something like "add Server05 Server06" (without quotes) it works, but it also drops in the word "add".
When I put in "remove Server02 Server03" (without quotes) it does not edit the array at all.
Can anybody point out where I'm going wrong, or suggest a better approach to this?
To address the title's generic question up front:
When you pass an array to a function (and nothing else), $Args receives a single argument containing the whole array, so you must use $Args[0] to access it.
There is a way to pass an array as individual arguments using splatting, but it requires an intermediate variable - see bottom.
To avoid confusion around such issues, formally declare your parameters.
Try the following:
$Script:ServerList = @("Server01", "Server02", "Server03")
Function EditServerList () {
# Split the arguments, which are all contained in $Args[0],
# into the command (1st token) and the remaining
# elements (as an array).
$Cmd, $Servers = $Args[0]
If ($Cmd -eq "start"){
While ($true) {
Write-host -ForegroundColor Green $Script:ServerList
$Edits = Read-Host "Enter `"add`" or `"remove`" followed by a space-delimited list of server names"
#"# Pass the array of whitespace-separated tokens to the recursive
# invocation to perform the requested edit operation.
EditServerList (-split $Edits)
}
} ElseIf ($Cmd -eq "add") {
# Append the $Servers array to the list, weeding out duplicates and
# keeping the list sorted.
$Script:ServerList = $Script:ServerList + $Servers | Sort-Object -Unique
} ElseIf ($Cmd -eq "remove") {
# Remove all specified $Servers from the list.
# Note that servers that don't exist in the list are quietly ignored.
$Script:ServerList = $Script:ServerList | Where-Object { $_ -notin $Servers }
} Else {
Write-Host -ForegroundColor Red "ERROR!"
}
}
EditServerList start
Note how a loop is used inside the "start" branch to avoid running out of stack space, which could happen if you keep recursing.
$Cmd, $Servers = $Args[0] destructures the array of arguments (contained in the one and only argument that was passed - see below) into the 1st token - (command string add or remove) and the array of the remaining arguments (server names).
Separating the arguments into command and server-name array up front simplifies the remaining code.
The $var1, $var2 = <array> technique to split the RHS into its first element - assigned as a scalar to $var1 - and the remaining elements - assigned as an array to $var2, is commonly called destructuring or unpacking; it is documented in Get-Help about_Assignment Operators, albeit without giving it such a name.
-split $Edits uses the convenient unary form of the -split operator to break the user input into an array of whitespace-separated tokens and passes that array to the recursive invocation.
Note that EditServerList (-split $Edits) passes a single argument that is an array - which is why $Args[0] must be used to access it.
Using PowerShell's -split operator (as opposed to .Split(' ')) has the added advantage of ignoring leading and trailing whitespace and ignoring multiple spaces between entries.
In general, operator -split is preferable to the [string] type's .Split() method - see this answer of mine.
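Putting the destructuring and the unary -split together, a minimal sketch:
$Cmd, $Servers = -split '  add   Server05  Server06 '
$Cmd       # 'add'
$Servers   # 'Server05', 'Server06' - the remaining tokens, as an array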
Note how containment operator -notin, which accepts an array as the RHS, is used in Where-Object { $_ -notin $Servers } in order to filter out values from the server list contained in $Servers.
As for what you tried:
EditServerList ($Edits.split(' ') |Where {$_ -NotLike "add","remove"}) (a) mistakenly attempts to remove the command name from the argument array, even though the recursive invocations require it, but (b) actually fails to do so, because the RHS of -like doesn't support arrays. (As an aside: since you're looking for exact strings, -eq would have been the better choice.)
Since you're passing the arguments as an array as the first and only argument, $Inputs[0] actually refers to the entire array (command name + server names), not just to its first element (the command name).
You got away with ($Inputs[0] -eq "add") - even though the entire array was compared - because the -eq operator performs array filtering if its LHS is an array, returning a sub-array of matching elements. Since add was among the elements, a 1-element sub-array was returned, which, in a Boolean context, is "truthy".
However, your attempt to weed out the command name with where {$_ -NotLike $Inputs[0]} then failed, and add was not removed - you'd actually have to compare to $Inputs[0][0] (sic).
Where {$_ -NotLike ($Inputs |Where {$_ -Notlike $Inputs[0]})} doesn't filter anything out for the following reasons:
($Inputs |Where {$_ -Notlike $Inputs[0]}) always returns an empty array, because, the RHS of -Notlike is an array, which, as stated, doesn't work.
Therefore, the command is the equivalent of Where {$_ -NotLike @() } which returns $True for any scalar on the LHS.
Passing an array as individual arguments using splatting
Argument splatting (see Get-Help about_Splatting) works with arrays, too:
> function foo { $Args.Count } # function that outputs the argument count.
> foo @(1, 2) # pass array
1 # single parameter, containing array
> $arr = @(1, 2); foo @arr # splatting: array elements are passed as indiv. args.
2
Note how an intermediate variable is required, and how it must be prefixed with @ rather than $ to perform the splatting.
I'd use parameters to modify the ServerList, this way you can use a single line to both add and remove:
Function EditServerList {
param(
[Parameter(Mandatory=$true)]
[string]$ServerList,
[array]$add,
[array]$remove
)
Write-Host -ForegroundColor Green "ServerList Contains: $ServerList"
$Servers = $ServerList.split(' ')
if ($add) {
$Servers += $add.split(' ')
}
if ($remove) {
$Servers = $Servers | Where-Object { $remove.split(' ') -notcontains $_ }
}
return $Servers
}
Then you can call the function like this:
EditServerList -ServerList "Server01 Server02 Server03" -remove "Server02 Server03" -add "Server09 Server10"
Which will return:
Server01
Server09
Server10

Reading strings from text files using switch -regex returns null element

Question:
The intention of my script is to filter out the name and phone number from both text files and add them into a hash table with the name being the key and the phone number being the value.
The problem I am facing is
$name = $_.Current is returning $null, as a result of which my hash is not getting populated.
Can someone tell me what the issue is?
Contents of File1.txt:
Lori
234 east 2nd street
Raleigh nc 12345
9199617621
lori@hotmail.com
=================
Contents of File2.txt:
Robert
2531 10th Avenue
Seattle WA 93413
2068869421
robert@hotmail.com
Sample Code:
$hash = @{}
Switch -regex (Get-content -Path C:\Users\svats\Desktop\Fil*.txt)
{
    '^[a-z]+$' { $name = $_.current }
    '^\d{10}' {
        $phone = $_.current
        $hash.Add($name,$phone)
        $name = $phone = $null
    }
    default
    {
        write-host "Nothing matched"
    }
}
$hash
Remove the current property reference from $_:
$hash = @{}
Switch -regex (Get-content -Path C:\Users\svats\Desktop\Fil*.txt)
{
'^[a-z]+$' {
$name = $_
}
'^\d{10}' {
$phone = $_
$hash.Add($name, $phone)
$name = $phone = $null
}
default {
Write-Host "Nothing matched"
}
}
$hash
Mathias R. Jessen's helpful answer explains your problem and offers an effective solution:
it is the automatic variable $_ / $PSItem itself that contains the current input object (whatever its type is - what properties $_ / $PSItem has therefore depends on the input object's specific type).
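To illustrate with data shaped like the question's (a small sketch):
switch -regex ('Lori', '9199617621') {
    '^[a-z]+$' { "name line:  $_" }   # $_ is the current input string itself
    '^\d{10}'  { "phone line: $_" }
}
# name line:  Lori
# phone line: 9199617621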
Aside from that, there's potential for making the code both less verbose and more efficient:
# Initialize the output hashtable.
$hash = @{}
# Create the regex that will be used on each input file's content.
# (?...) sets options: i ... case-insensitive; m ... ^ and $ match
# the beginning and end of every *line*.
$re = [regex] '(?im)^([a-z]+|\d{10})$'
# Loop over each input file's content (as a whole, thanks to -Raw).
Get-Content -Raw File*.txt | foreach {
# Look for name and phone number.
$matchColl = $re.Matches($_)
if ($matchColl.Count -eq 2) { # Both found, add hashtable entry.
$hash.Add($matchColl.Value[0], $matchColl.Value[1])
} else {
Write-Host "Nothing matched."
}
}
# Output the resulting hashtable.
$hash
A note on the construction of the .NET [System.Text.RegularExpressions.Regex] object (or [regex] for short), [regex] '(?im)^([a-z]+|\d{10})$':
Embedding matching options IgnoreCase and Multiline as inline options i and m directly in the regex string ((?im)) is convenient, in that it allows using simple cast syntax ([regex] ...) to construct the regular-expression .NET object.
However, this syntax may be obscure and, furthermore, not all matching options are available in inline form, so here's the more verbose, but easier-to-read equivalent:
$re = New-Object regex -ArgumentList '^([a-z]+|\d{10})$', 'IgnoreCase, Multiline'
Note that the two options must be specified comma-separated, as a single string, which PowerShell translates into the bit-OR-ed values of the corresponding enumeration values.
Another solution: use ConvertFrom-String
$template=@'
{name*:Lori}
{street:234 east 2nd street}
{city:Raleigh nc 12345}
{phone:9199617621}
{mail:lori@hotmail.com}
{name*:Robert}
{street:2531 10th Avenue}
{city:Seattle WA 93413}
{phone:2068869421}
{mail:robert@hotmail.com}
{name*:Robert}
{street:2531 Avenue}
{city:Seattle WA 93413}
{phone:2068869421}
{mail:robert@hotmail.com}
'@
Get-Content -Path "c:\temp\file*.txt" | ConvertFrom-String -TemplateContent $template | select name, phone

Powershell pitfalls

What PowerShell pitfalls have you fallen into? :-)
Mine are:
# -----------------------------------
function foo()
{
#("text")
}
# Expected 1, actually 4.
(foo).length
# -----------------------------------
if(@($null, $null))
{
Write-Host "Expected to be here, and I am here."
}
if(@($null))
{
Write-Host "Expected to be here, BUT NEVER EVER."
}
# -----------------------------------
function foo($a)
{
# I thought this is right.
#if($a -eq $null)
#{
# throw "You can't pass $null as argument."
#}
# But actually it should be:
if($null -eq $a)
{
throw "You can't pass $null as argument."
}
}
foo @($null, $null)
# -----------------------------------
# There is try/catch, but no callstack reported.
function foo()
{
bar
}
function bar()
{
throw "test"
}
# Expected:
# At bar() line:XX
# At foo() line:XX
#
# Actually some like this:
# At bar() line:XX
foo
Would like to know yours so I can work around them :-)
My personal favorite is
function foo() {
param ( $param1, $param2 = $(throw "Need a second parameter"))
...
}
foo (1,2)
For those unfamiliar with powershell that line throws because instead of passing 2 parameters it actually creates an array and passes one parameter. You have to call it as follows
foo 1 2
Another fun one. Not handling an expression by default writes it to the pipeline. Really annoying when you don't realize a particular function returns a value.
function example() {
    param ( $p1 )
    if ( $p1 ) {
        42
    }
    "done"
}
PS> example $true
42
"done"
$files = Get-ChildItem . -inc *.extdoesntexist
foreach ($file in $files) {
"$($file.Fullname.substring(2))"
}
Fails with:
You cannot call a method on a null-valued expression.
At line:3 char:25
+ $file.Fullname.substring <<<< (2)
Fix it like so:
$files = @(Get-ChildItem . -inc *.extdoesntexist)
foreach ($file in $files) {
"$($file.Fullname.substring(2))"
}
Bottom line is that the foreach statement will loop on a scalar value even if that scalar value is $null. When Get-ChildItem in the first example returns nothing, $files gets assigned $null. If you are expecting an array of items to be returned by a command but there is a chance it will only return 1 item or zero items, put @() around the command. Then you will always get an array - be it of 0, 1 or N items. Note: If the item is already an array, putting @() around it has no effect - it will still be the very same array (i.e. there is no extra array wrapper).
# The pipeline doesn't enumerate hashtables.
$ht = @{"foo" = 1; "bar" = 2}
$ht | measure
# Workaround: call GetEnumerator
$ht.GetEnumerator() | measure
Here are my top 5 PowerShell gotchas
Here is something I've stumbled upon lately (PowerShell 2.0 CTP):
$items = "item0", "item1", "item2"
$part = ($items | select-string "item0")
$items = ($items | where {$part -notcontains $_})
What do you think $items will be at the end of the script?
I was expecting "item1", "item2" but instead the value of $items is: "item0", "item1", "item2".
Say you've got the following XML file:
<Root>
<Child />
<Child />
</Root>
Run this:
PS > $myDoc = [xml](Get-Content $pathToMyDoc)
PS > @($myDoc.SelectNodes("/Root/Child")).Count
2
PS > @($myDoc.Root.Child).Count
2
Now edit the XML file so it has no Child nodes, just the Root node, and run those statements again:
PS > $myDoc = [xml](Get-Content $pathToMyDoc)
PS > @($myDoc.SelectNodes("/Root/Child")).Count
0
PS > @($myDoc.Root.Child).Count
1
That 1 is annoying when you want to iterate over a collection of nodes using foreach if and only if there actually are any. This is how I learned that you cannot use the XML handler's property (dot) notation as a simple shortcut. I believe what's happening is that SelectNodes returns a collection of 0 nodes. When wrapped in @(), it is transformed from an XPathNodeList to an Object[] (check GetType()), but the length is preserved. The dynamically generated $myDoc.Root.Child property (which essentially does not exist) returns $null. When $null is wrapped in @(), it becomes an array of length 1.
On Functions...
The subtleties of processing pipeline input in a function with respect to using $_ or $input and with respect to the begin, process, and end blocks (see the sketch after this list).
How to handle the six principal equivalence classes of input delivered to a function (no input, null, empty string, scalar, list, list with null and/or empty) -- for both direct input and pipeline input -- and get what you expect.
The correct calling syntax for sending multiple arguments to a function.
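For the first point, here is a minimal sketch of the begin/process/end pattern (my own illustration, not taken from the article):
function Measure-TotalLength {
    begin   { $total = 0 }              # runs once, before any pipeline input
    process { $total += "$_".Length }   # runs once per item; $_ is the current pipeline object
    end     { $total }                  # runs once, after the last item
}
'one', 'three' | Measure-TotalLength    # 8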
I discuss these points and more at length in my Simple-Talk.com article Down the Rabbit Hole- A Study in PowerShell Pipelines, Functions, and Parameters and also provide an accompanying wallchart--here is a glimpse showing the various calling syntax pitfalls for a function taking 3 arguments:
On Modules...
These points are expounded upon in my Simple-Talk.com article Further Down the Rabbit Hole: PowerShell Modules and Encapsulation.
Dot-sourcing a file inside a script using a relative path is relative to your current directory -- not the directory where the script resides!
To be relative to the script use this function to locate your script directory: [Update for PowerShell V3+: Just use the builtin $PSScriptRoot variable!]
function Get-ScriptDirectory
{ Split-Path $script:MyInvocation.MyCommand.Path }
Modules must be stored as ...Modules\name\name.psm1 or ...\Modules\any_subpath\name\name.psm1. That is, you cannot just use ...Modules\name.psm1 -- the name of the immediate parent of the module must match the base name of the module. This chart shows the various failure modes when this rule is violated:
2015.06.25 A Pitfall Reference Chart
Simple-Talk.com just published the last of my triumvirate of in-depth articles on PowerShell pitfalls. The first two parts are in the form of a quiz that helps you appreciate a select group of pitfalls; the last part is a wallchart (albeit it would need a rather high-ceilinged room) containing 36 of the most common pitfalls (some adapted from answers on this page), giving concrete examples and workarounds for most. Read more here.
There are some tricks to building command lines for utilities that were not built with Powershell in mind:
To run an executable whose name starts with a number, preface it with an ampersand (&).
& 7zip.exe
To run an executable with a space anywhere in the path, preface it with an Ampersand (&) and wrap it in quotes, as you would any string. This means that strings in a variable can be executed as well.
# Executing a string with a space.
& 'c:\path with spaces\command with spaces.exe'
# Executing a string with a space, after first saving it in a variable.
$a = 'c:\path with spaces\command with spaces.exe'
& $a
Parameters and arguments are passed to legacy utilities positionally. So it is important to quote them the way the utility expects to see them. In general, one would quote when it contains spaces or does not start with a letter, number or dash (-).
C:\Path\utility.exe '/parameter1' 'Value #1' 1234567890
Variables can be used to pass string values containing spaces or special characters.
$b = 'string with spaces and special characters (-/&)'
utility.exe $b
Alternatively array expansion can be used to pass values as well.
$c = @('Value #1', $Value2)
utility.exe $c
If you want Powershell to wait for an application to complete, you have to consume the output, either by piping the output to something or using Start-Process.
# Saving output as a string to a variable.
$output = ping.exe example.com | Out-String
# Piping the output.
ping stackoverflow.com | where { $_ -match '^reply' }
# Using Start-Process affords the most control.
Start-Process -Wait SomeExecutable.com
Because of the way they display their output, some command line utilities will appear to hang when run inside of PowerShell_ISE.exe, particularly when awaiting input from the user. These utilities will usually work fine when run within the PowerShell.exe console.
PowerShell Gotchas
There are a few pitfalls that repeatedly reappear on StackOverflow. It is recommended to do some research if you are not familiar with these PowerShell gotchas before asking a new question. It might even be a good idea to investigate these PowerShell gotchas before answering a PowerShell question, to make sure that you teach the questioner the right thing.
TLDR: In PowerShell:
the comparison equality operator is: -eq
(Stackoverflow example: Powershell simple syntax if condition not working)
parentheses and commas are not used with arguments
(Stackoverflow example: How do I pass multiple parameters into a function in PowerShell?)
output properties are based on the first object in the pipeline
(Stackoverflow example: Not all properties displayed)
the pipeline unrolls
(Stackoverflow example: Pipe complete array-objects instead of array items one at a time?)
a. single item collections
(Stackoverflow example: Powershell ArrayList turns a single array item back into a string)
b. embedded arrays
(Stackoverflow example: Return Multidimensional Array From Function)
c. output collections
(Stackoverflow example: Why does PowerShell flatten arrays automatically?)
$Null should be on the left side of the equality comparison operator
(Stackoverflow example: Should $null be on the left side of the equality comparison)
parentheses and assignments choke the pipeline
(Stackoverflow example: Importing 16MB CSV Into Variable Creates >600MB's Memory Usage)
the increase assignment operator (+=) might become expensive
Stackoverflow example: Improve the efficiency of my PowerShell scrip
The Get-Content cmdlet returns separate lines
Stackoverflow example: Multiline regex to match config block
Examples and explanations
Some of the gotchas might really feel counter-intuitive but often can be explained by some very nice PowerShell features along with the pipeline, expression/argument mode and type casting.
1. The comparison equality operator is: -eq
Unlike the Microsoft scripting language VBScript and some other programming languages, the comparison equality operator differs from the assignment operator (=) and is: -eq.
Note: assigning a value to a variable might pass through the value if needed:
$a = $b = 3 # The value 3 is assigned to both variables $a and $b.
This implies that the following statement might be unexpectedly truthy or falsy:
If ($a = $b) {
# (assigns $b to $a and) returns a truthy if $b is e.g. 3
} else {
# (assigns $b to $a and) returns a falsy if $b is e.g. 0
}
2. Parentheses and commas are not used with arguments
Unlike a lot of other programming languages and the way a primitive PowerShell function is defined, calling a function doesn't require parentheses or commas for their related arguments. Use spaces to separate the parameter arguments:
function MyFunction($Param1, $Param2, $Param3) {
# ...
}
MyFunction 'one' 'two' 'three' # assigns 'one' to $Param1, 'two' to $Param2, 'three' to $Param3
Parentheses and commas are used for calling (.Net) methods.
Commas are used to define arrays. MyFunction 'one', 'two', 'three' (or MyFunction('one', 'two', 'three')) will load the array @('one', 'two', 'three') into the first parameter ($Param1).
Parentheses will interpret their contents as a single collection loaded into memory (and choke the PowerShell pipeline) and should only be used as such, e.g. to call an embedded function:
MyFunction (MyOtherFunction) # passes the results MyOtherFunction to the first positional parameter of MyFunction ($Param1)
MyFunction One $Two (getThree) # assigns 'One' to $Param1, $Two to $Param2, the results of getThree to $Param3
Note that quoting text arguments (like the word one in the latter example) is only required when they contain spaces or special characters.
3. Output properties are based on the first object in the pipeline
In a PowerShell pipeline each object is processed and passed on by a cmdlet (that is implemented for the middle of a pipeline) similar to how objects are processed and passed on by workstations in an assembly line. Meaning each cmdlet processes one item at a time while the prior cmdlet (workstation) simultaneously processes the upcoming one. This way, the objects aren't loaded into memory at once (less memory usage) and could already be processed before the next one is supplied (or even exists). The disadvantage of this feature is that there is no oversight of what (or how many) objects are expected to follow.
Therefore most PowerShell cmdlets assume that all the objects in the pipeline correspond to the first one and have the same properties which is usually the case, but not always...
$List =
    [pscustomobject]@{ one = 'a1'; two = 'a2' },
    [pscustomobject]@{ one = 'b1'; two = 'b2'; three = 'b3' }
$List |Select-Object *
one two
--- ---
a1 a2
b1 b2
As you see, the third column three is missing from the results, as it didn't exist in the first object and PowerShell was already outputting the results before it was aware of the existence of the second object.
One way to work around this behavior is to explicitly define the properties (of all the following objects) beforehand:
$List |Select-Object one, two, three
one two three
--- --- -----
a1 a2
b1 b2 b3
See also proposal: #13906 Add -UnifyProperties parameter to Select-Object
4. The pipeline unrolls
This feature might come in handy if it complies with the straightforward expectation:
$Array = 'one', 'two', 'three'
$Array.Length
3
a. single item collections
But it might get confusing:
$Selection = $Array |Select-Object -First 2
$Selection.Length
2
$Selection[0]
one
when the collection is down to a single item:
$Selection = $Array |Select-Object -First 1
$Selection.Length
3
$Selection[0]
o
Explanation
When the pipeline outputs a single item which is assigned to a variable, it is not assigned as a collection (with 1 item, like: @('one')) but as a scalar item (the item itself, like: 'one').
Which means that the property .Length (which is in fact an alias for the property .Count for an array) is no longer applied to the array but to the string: 'one'.Length, which equals 3. And in the case of the index $Selection[0], the first character of the string, 'one'[0] (which equals the character o), is returned.
Workaround
To work around this behavior, you might force the scalar item into an array using the array subexpression operator @( ):
$Selection = $Array |Select-Object -First 1
@($Selection).Length
1
@($Selection)[0]
one
Knowing that in the case the $Selection is already an array, it will not be further increased in depth (@(@('one', 'two'))), see the next section, 4b. embedded arrays.
b. embedded arrays
When an array (or a collection) includes embedded arrays, like:
$Array = @(@('a', 'b'), @('c', 'd'))
$Array.Count
2
All the embedded items will be processed in the pipeline and consequently returns a flat array when displayed or assigned to a new variable:
$Processed = $Array |ForEach-Object { $_ }
$Processed.Count
4
$Processed
a
b
c
d
To iterate the embedded arrays, you might use the foreach statement:
foreach ($Item in $Array) { $Item.Count }
2
2
Or simply a for loop:
for ($i = 0; $i -lt $Array.Count; $i++) { $Array[$i].Count }
2
2
c. output collections
Collections are usually unrolled when they are placed on the pipeline:
function GetList {
[Collections.Generic.List[String]]@('a', 'b')
}
(GetList).GetType().Name
Object[]
To output the collection as a single item, use the comma operator ,:
function GetList {
,[Collections.Generic.List[String]]@('a', 'b')
}
(GetList).GetType().Name
List`1
5. $Null should be on the left side of the equality comparison operator
This gotcha is related to this comparison operators feature:
When the input of an operator is a scalar value, the operator returns a Boolean value. When the input is a collection, the operator returns the elements of the collection that match the right-hand value of the expression. If there are no matches in the collection, comparison operators return an empty array.
This means for scalars:
'a' -eq 'a' # returns $True
'a' -eq 'b' # returns $False
'a' -eq $Null # returns $False
$Null -eq $Null # returns $True
and for collections, the matching elements are returned which evaluates to either a truthy or falsy condition:
'a', 'b', 'c' -eq 'a' # returns 'a' (truthy)
'a', 'b', 'c' -eq 'd' # returns an empty array (falsy)
'a', 'b', 'c' -eq $Null # returns an empty array (falsy)
'a', $Null, 'c' -eq $Null # returns $Null (falsy)
'a', $Null, $Null -eq $Null # returns @($Null, $Null) (truthy!!!)
$Null, $Null, $Null -eq $Null # returns @($Null, $Null, $Null) (truthy!!!)
In other words, to check whether a variable is $Null (and exclude a collection containing multiple $Nulls), put $Null at the LHS (left hand side) of the equality comparison operator:
if ($Null -eq $MyVariable) { ...
6. Parentheses and assignments choke the pipeline
The PowerShell Pipeline is not just a series of commands connected by pipeline operators (|) (ASCII 124). It is a concept to simultaneously stream individual objects through a sequence of cmdlets. If a cmdlet (or function) is written according to the Strongly Encouraged Development Guidelines and implemented for the middle of a pipeline, it takes each single object from the pipeline, processes it and passes the results to the next cmdlet just before it takes and processes the next object in the pipeline. Meaning that for a simple pipeline as:
Import-Csv .\Input.csv |Select-Object -Property Column1, Column2 |Export-Csv .\Output.csv
As the last cmdlet writes an object to the .\Output.csv file, the Select-Object cmdlet selects the properties of the next object and the Import-Csv reads the next object from the .\Input.csv file (see also: Pipeline in Powershell). This will keep the memory usage low (especially when there are lots of objects/records to process) and therefore might result in a faster throughput. To facilitate the pipeline, the PowerShell objects are quite fat as each individual object contains all the property information (along with e.g. the property name).
Therefore it is not a good practice to choke the pipeline for no reason. There are two scenarios that choke the pipeline:
Parentheses, e.g.:
(Import-Csv .\Input.csv) |Select-Object -Property Column1, Column2 |Export-Csv .\Output.csv
Where all the .\Input.csv records are loaded as an array of PowerShell objects into memory before passing it on to the Select-Object cmdlet.
Assignments, e.g.:
$Objects = Import-Csv .\Input.csv
$Objects |Select-Object -Property Column1, Column2 |Export-Csv .\Output.csv
Where all the .\Input.csv records are loaded as an array of PowerShell objects into $Objects (memory as well) before passing it on to the Select-Object cmdlet.
7. the increase assignment operator (+=) might become expensive
The increase assignment operator (+=) is syntactic sugar to increase and assign primitives, e.g. $a += $b, where $a is assigned $a + $b. The increase assignment operator can also be used for adding new items to a collection (or to String types and hash tables), but this might get pretty expensive as the cost increases with each iteration (the size of the collection). The reason for this is that collections such as arrays are immutable in size; the right-hand item is not simply appended, instead the whole collection is recreated and reassigned to the left variable. For details see also: avoid using the increase assignment operator (+=) to create a collection
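A sketch of the costly pattern next to two common alternatives (the alternatives are generic suggestions, not taken from the linked answer):
# Costly: the array is recreated and reassigned on every iteration
$result = @()
foreach ($i in 1..10000) { $result += $i }

# Cheaper: let PowerShell collect the output of the loop ...
$result = foreach ($i in 1..10000) { $i }

# ... or use a mutable collection (PowerShell 5+ syntax shown)
$list = [System.Collections.Generic.List[int]]::new()
foreach ($i in 1..10000) { $list.Add($i) }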
8. The Get-Content cmdlet returns separate lines
There are probably quite some more cmdlet gotchas, knowing that there exist a lot of (internal and external) cmdlets. In contrast to engine related gotchas, these gotchas are often easier to highlight (with e.g. a warning), as happened with ConvertTo-Json (see: Unexpected ConvertTo-Json results? Answer: it has a default -Depth of 2), or to "fix". But there is a very classic gotcha in Get-Content which ties into the general PowerShell concept of streaming objects (in this case lines) rather than passing everything (the whole contents of the file) at once:
(Get-Content .\Input.txt) -Match '\r?\n.*Test.*\r?\n'
Will never work because, by default, Get-Content returns a stream of objects where each object contains a single string (a line without any line breaks).
(Get-Content .\Input.txt).GetType().Name
Object[]
(Get-Content .\Input.txt)[0].GetType().Name
String
In fact:
(Get-Content .\Input.txt) -Match 'Test'
Returns all the lines with the word Test in them, as Get-Content puts every single line on the pipeline and, when the input is a collection, the operator returns the elements of the collection that match the right-hand value of the expression.
Note: since PowerShell version 3, Get-Content has a -Raw parameter that reads all the content of the concerned file at once, meaning that this: (Get-Content -Raw .\Input.txt) -Match '\r?\n.*Test.*\r?\n' will work as it loads the whole file into memory.
alex2k8, I think this example of yours is good to talk about:
# -----------------------------------
function foo($a){
# I thought this is right.
#if($a -eq $null)
#{
# throw "You can't pass $null as argument."
#}
# But actually it should be:
if($null -eq $a)
{
throw "You can't pass $null as argument."
}
}
foo @($null, $null)
PowerShell can use some of the comparators against arrays like this:
$array -eq $value
## Returns all values in $array that equal $value
With that in mind, the original example returns two items (the two $null values in the array), which evaluates to $true because you end up with a collection of more than one item. Reversing the order of the arguments stops the array comparison.
This functionality is very handy in certain situations, but it is something you need to be aware of (just like array handling in PowerShell).
Functions 'foo' and 'bar' look equivalent.
function foo() { $null }
function bar() { }
E.g.
(foo) -eq $null
# True
(bar) -eq $null
# True
But:
foo | %{ "foo" }
# Prints: foo
bar | %{ "bar" }
# PRINTS NOTHING
Returning $null and returning nothing is not equivalent dealing with pipes.
This one is inspired by Keith Hill's example...
function bar() {}
$list = @(bar)
$list.length
# Prints: 0
# Now let's try the same but with a temporary variable.
$tmp = bar
$list = @($tmp)
$list.length
# Prints: 1
Another one:
$x = 2
$y = 3
$a,$b = $x,$y*5
because of operator precedence, $b is not 25; the command is the same as ($x,$y)*5
the correct version is
$a,$b = $x,($y*5)
The logical and bitwise operators don't follow standard precedence rules. The operator -and should have a higher priority than -or yet they're evaluated strictly left-to-right.
For example, compare logical operators between PowerShell and Python (or virtually any other modern language):
# PowerShell
PS> $true -or $false -and $false
False
# Python
>>> True or False and False
True
...and bitwise operators:
# PowerShell
PS> 1 -bor 0 -band 0
0
# Python
>>> 1 | 0 & 0
1
This works. But almost certainly not in the way you think it's working.
PS> $a = 42;
PS> [scriptblock]$b = { $a }
PS> & $b
42
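Presumably the point is that the script block does not capture $a; it looks the variable up dynamically at invocation time. A sketch of my reading of this gotcha:
$a = 42
$b = { $a }
& $b                     # 42 - but only because $a still exists when the block runs
$a = 'changed'
& $b                     # 'changed' - the block never captured 42
$c = $b.GetNewClosure()  # captures the *current* value of $a
$a = 'changed again'
& $c                     # 'changed'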
This one has tripped me up before, using $o.SomeProperty where it should be $($o.SomeProperty).
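That is, inside a double-quoted string only the bare variable is expanded; a property access needs the subexpression. A sketch:
$o = [pscustomobject]@{ SomeProperty = 'value' }
"Without: $o.SomeProperty"    # expands $o, then appends the literal text '.SomeProperty'
"With:    $($o.SomeProperty)" # expands to 'value'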
# $x is not defined
[70]: $x -lt 0
True
[71]: [int]$x -eq 0
True
So, what's $x..?
Another one I ran into recently: [string] parameters that accept pipeline input are not strongly typed in practice. You can pipe anything at all and PS will coerce it via ToString().
function Foo
{
[CmdletBinding()]
param (
[parameter(Mandatory=$True, ValueFromPipeline=$True)]
[string] $param
)
process { $param }
}
get-process svchost | Foo
Unfortunately there is no way to turn this off. Best workaround I could think of:
function Bar
{
[CmdletBinding()]
param (
[parameter(Mandatory=$True, ValueFromPipeline=$True)]
[object] $param
)
process
{
if ($param -isnot [string]) {
throw "Pass a string you fool!"
}
# rest of function goes here
}
}
edit - a better workaround I've started using...
Add this to your custom type XML -
<?xml version="1.0" encoding="utf-8" ?>
<Types>
<Type>
<Name>System.String</Name>
<Members>
<ScriptProperty>
<Name>StringValue</Name>
<GetScriptBlock>
$this
</GetScriptBlock>
</ScriptProperty>
</Members>
</Type>
</Types>
Then write functions like this:
function Bar
{
[CmdletBinding()]
param (
[parameter(Mandatory=$True, ValueFromPipelineByPropertyName=$True)]
[Alias("StringValue")]
[string] $param
)
process
{
# rest of function goes here
}
}
Forgetting that $_ gets overwritten in blocks made me scratch my head in confusion a couple times, and similarly for multiple reg-ex matches and the $matches array. >.<
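A small illustration of the $_ part (sketch):
'a', 'b' | ForEach-Object {
    $outer = $_                           # save the outer item before entering the inner block
    1, 2 | ForEach-Object { "$outer$_" }  # in here $_ is the inner item
}
# outputs a1, a2, b1, b2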
Remembering to explicitly type pscustom objects from imported data tables as numeric so they can be sorted correctly:
$CVAP_WA=foreach ($i in $C){[PSCustomObject]@{ `
County=$i.county; `
TotalVote=[INT]$i.TotalBallots; `
RegVoters=[INT]$i.regvoters; `
Turnout_PCT=($i.TotalBallots/$i.regvoters)*100; `
CVAP=[INT]($B | ? {$_.GeoName -match $i.county}).CVAP_EST }}
PS C:\Politics> $CVAP_WA | sort -desc TotalVote |ft -auto -wrap
County TotalVote RegVoters Turnout_PCT CVAP CVAP_TV_PCT CVAP_RV_PCT
------ --------- --------- ----------- ---- ----------- -----------
King 973088 1170638 83.189 1299290 74.893 90.099
Pierce 349377 442985 78.86 554975 62.959 79.837
Snohomish 334354 415504 80.461 478440 69.832 86.81
Spokane 227007 282442 80.346 342060 66.398 82.555
Clark 193102 243155 79.453 284190 67.911 85.52
Mine are both related to file copying...
Square Brackets in File Names
I once had to move a very large/complicated folder structure using Move-Item -Path C:\Source -Destination C:\Dest. At the end of the process there were still a number of files in the source directory. I noticed that every remaining file had square brackets in the name.
The problem was that the -Path parameter treats square brackets as wildcards.
EG. If you wanted to copy Log001 to Log200, you could use square brackets as follows:
Move-Item -Path C:\Source\Log[001-200].log.
In my case, to avoid square brackets being interpreted as wildcards, I should have used the -LiteralPath parameter.
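For example (the file name here is hypothetical):
# -LiteralPath takes the brackets literally instead of treating them as a wildcard set
Move-Item -LiteralPath 'C:\Source\Log[1].log' -Destination C:\Dest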
ErrorActionPreference
The $ErrorActionPreference variable is ignored when using Move-Item and Copy-Item with the -Verbose parameter.
Treating the ExitCode of a Process as a Boolean.
eg, with this code:
$p = Start-Process foo.exe -NoNewWindow -Wait -PassThru
if ($p.ExitCode) {
# handle error
}
Things are good, unless, say, foo.exe doesn't exist or otherwise fails to launch.
In that case $p will be $null, and [bool]($null.ExitCode) is False.
A simple fix is to replace the logic with if ($p.ExitCode -ne 0) {},
however for clarity of code IMO the following is better: if (($p -eq $null) -or ($p.ExitCode -ne 0)) {}