PowerShell script: More efficient way to perform nested foreach loops? [duplicate]

This question already has answers here:
In PowerShell, what's the best way to join two tables into one?
(5 answers)
Closed 5 months ago.
Good day.
I wrote a script that imports Excel files and then compares the rows. Each file contains about 13K rows. It is taking about 3 hours to process, which seems too long. This is happening because, for each row in filea, I am looping through all 13K rows of fileb.
Is there a more efficient way to do this?
Here is sample code:
#Import rows as customObject
$rowsa = Import-Excel $filea
$rowsb = Import-Excel $fileb
#Loop through each filea rows
foreach ($rowa in $rowsa)
{
    #Loop through each fileb row. If the upc code matches rowa, check if other fields match
    foreach ($rowb in $rowsb)
    {
        $rowb | Where-Object -Property "UPC Code" -Like $rowa.upc |
            ForEach-Object {
                if (( $rowa.uom2 -eq 'INP') -and ( $rowb.'Split Quantity' -ne $rowa.qty1in2 ))
                {
                    #Do Something
                }
            }
    }
}

Seems like you can leverage Group-Object -AsHashTable for this. See the about_Hash_Tables help topic for more info on why this should be faster.
$mapB = Import-Excel $fileb | Group-Object 'UPC Code' -AsHashTable -AsString

foreach ($row in Import-Excel $filea) {
    if ($mapB.ContainsKey($row.upc)) {
        $value = $mapB[$row.upc]
        if ($row.uom2 -eq 'INP' -and $row.qty1in2 -ne $value.'Split Quantity') {
            $value # => the row(s) matching on UPC (FileA) / UPC Code (FileB)
            $row   # => current row in FileA
        }
    }
}

A few tricks:
The Object Pipeline may be easy, but it's not as fast as a statement
Try changing your code to use foreach statements, not Where-Object and ForEach-Object.
Use Hashtables to group.
While you can use Group-Object to do this, Group-Object suffers from the same performance problems as anything else in the pipeline.
Try to limit looping within looping.
As a general rule, looping within looping will be O(n²). If you can avoid loops within loops, great. So switching the code around to loop through A once, then separately through B, will be more efficient. So will exiting your loops as early as possible.
Consider using a benchmarking tool
There's a little module I make called Benchpress that can help you test multiple approaches to see which is faster. The Benchpress docs have a number of general PowerShell performance benchmarks to help you determine the fastest way to script a given thing.
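If you'd rather not install anything while experimenting, the built-in Measure-Command cmdlet is enough for a rough comparison; a minimal sketch contrasting the pipeline with a plain statement:
$pipeline = Measure-Command {
    1..100000 | Where-Object { $_ % 2 -eq 0 } | ForEach-Object { $_ * 2 }
}
$statement = Measure-Command {
    foreach ($i in 1..100000) { if ($i % 2 -eq 0) { $i * 2 } }
}
'Pipeline:  {0:n0} ms' -f $pipeline.TotalMilliseconds
'Statement: {0:n0} ms' -f $statement.TotalMilliseconds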
Updated Script Below:
#Import rows as customObject
$rowsa = Import-Excel $filea
$rowsb = Import-Excel $fileb

$rowAByUPC = @{}
foreach ($rowA in $rowsa) {
    # This assumes there will only be one row per UPC.
    # If there is more than one, you may need to make a list here instead
    $rowAByUPC[$rowA.UPC] = $rowA
}

foreach ($rowB in $rowsb) {
    # Skip any rows in B that don't have a UPC code.
    $rowBUPC = $rowB."UPC Code"
    if (-not $rowBUPC) { continue }

    $rowA = $rowAByUPC[$rowBUPC]
    # It seems only rows that have 'INP' in uom2 are important,
    # so continue if missing
    if ($rowA.uom2 -ne 'INP') { continue }

    if ($rowA.qty1in2 -ne $rowB.'Split Quantity') {
        # Do what you came here to do.
    }
}
Please note that as you have not shared the code within the matching condition, you may need to take the advice contained in this answer and apply it to the inner code.
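If a UPC can occur on more than one row in file A (the situation the comment in the script warns about), a minimal sketch of the list-per-key variant:
$rowAByUPC = @{}
foreach ($rowA in $rowsa) {
    if (-not $rowAByUPC.ContainsKey($rowA.UPC)) {
        $rowAByUPC[$rowA.UPC] = [System.Collections.Generic.List[object]]::new()
    }
    $rowAByUPC[$rowA.UPC].Add($rowA)
}
# later, compare $rowB against every candidate: foreach ($rowA in $rowAByUPC[$rowBUPC]) { ... }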

Related

Check if a condition is met by a line within a TXT but "in an advanced way"

I have a TXT file with 1300 megabytes (huge thing). I want to build code that does two things:
1. Every line contains a unique ID at the beginning. I want to check, for all lines with the same unique ID, whether the conditions are met for that "group" of IDs. (This tells me: for how many lines with unique ID X have all conditions been met.)
2. When the script is finished, I want to remove from the TXT all lines where the condition was met (see 1), so I can rerun the script with another condition set to "narrow down" the whole document.
After a few cycles I will finally have a set of conditions that applies to all lines in the document.
It seems that my current approach is very slow (one cycle takes hours).
If you find an easier way to do that, feel free to recommend.
Help is welcome :)
Code so far (does not fulfill everything from 1 & 2):
foreach ($item in $liste)
{
    # Check conditions
    if ( ($item -like "*XXX*") -and ($item -like "*YYY*") -and ($item -notlike "*ZZZ*") ) {
        # Add a line to a document to see which lines match the condition
        Add-Content "C:\Desktop\it_seems_to_match.txt" "$item"
        # Retrieve the unique ID from the line and feed the array.
        $array += $item.Split("/")[1]
        # Remove the line from the final document
        $liste = $liste -replace $item, ""
    }
}
# Pipe the "new cleaned" list somewhere
$liste | Set-Content -Path "C:\NewListToWorkWith.txt"
# Show me the counts
$array | group | % { $h = @{} } { $h[$_.Name] = $_.Count } { $h } | Out-File "C:\Desktop\count.txt"
Demo Lines:
images/STRINGA/2XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGA/3XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGB/4XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGB/5XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGC/5XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
Performance considerations:
Add-Content "C:\Desktop\it_seems_to_match.txt" "$item"
Try to avoid repeatedly invoking cmdlet pipelines (like Add-Content) from inside a loop.
See also: Mastering the (steppable) pipeline
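One way to apply that advice here is to let the loop emit matches and write the file once at the end, instead of calling Add-Content per line; a minimal sketch using the question's own condition:
$matched = foreach ($item in $liste) {
    if (($item -like "*XXX*") -and ($item -like "*YYY*") -and ($item -notlike "*ZZZ*")) {
        $item # collected in $matched rather than written to disk one line at a time
    }
}
$matched | Set-Content "C:\Desktop\it_seems_to_match.txt"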
$array += $item.Split("/")[1]
Try to avoid using the increase assignment operator (+=) to create a collection
See also: Why should I avoid using the increase assignment operator (+=) to create a collection
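For example, the usual replacement is a generic list with an in-place Add method; a minimal sketch:
$array = [System.Collections.Generic.List[string]]::new()
foreach ($item in $liste) {
    $array.Add($item.Split("/")[1]) # appends in place; no full array copy per iteration
}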
$liste = $liste -replace $item, ""
This is a very expensive operation, considering that you are reassigning (copying) the complete list ($liste) with each iteration.
Besides, it is bad practice to modify a collection you are currently iterating over.
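Instead of rewriting $liste on every match, you could collect the lines to keep in a single pass and write them out once; a minimal sketch:
$keep = foreach ($item in $liste) {
    if (-not (($item -like "*XXX*") -and ($item -like "*YYY*") -and ($item -notlike "*ZZZ*"))) {
        $item # only lines that did not match survive into the next cycle
    }
}
$keep | Set-Content -Path "C:\NewListToWorkWith.txt"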
$array | group | ...
Group-Object is a rather slow cmdlet; you'd do better to collect (or count) the items on the fly (where you currently do $array += $item.Split("/")[1]) using a hashtable, something like:
$HashTable = @{}  # initialized once, before the loop
$Name = $item.Split("/")[1]
if (!$HashTable.Contains($Name)) { $HashTable[$Name] = [Collections.Generic.List[String]]::new() }
$HashTable[$Name].Add($Item)
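If only the counts per ID matter, a plain tally avoids keeping the item lists entirely; a minimal sketch:
$Counts = @{}
foreach ($item in $liste) {
    $Name = $item.Split("/")[1]
    $Counts[$Name] = [int]$Counts[$Name] + 1 # a missing key coerces to 0
}
$Counts.GetEnumerator() | Sort-Object Name | Out-File "C:\Desktop\count.txt"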
To minimize memory usage it may be better to read one line at a time and check if it already exists. In the code below I used a StringReader; you can replace it with a StreamReader to read from a file. I'm checking if the entire string exists, but you may want to split the line. Notice I have duplicates in the input but not in the dictionary. See the code below:
$rows = @"
images/STRINGA/2XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGA/3XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGB/4XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGB/5XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGC/5XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGA/2XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGA/3XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGB/4XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGB/5XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGC/5XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
"#
$dict = [System.Collections.Generic.Dictionary[int, System.Collections.Generic.List[string]]]::new();
$reader = [System.IO.StringReader]::new($rows)
while (($row = $reader.ReadLine()) -ne $null)
{
    $hash = $row.GetHashCode()
    if ($dict.ContainsKey($hash))
    {
        # check if the list already contains the string
        if ($dict[$hash].Contains($row))
        {
            # string is a duplicate
        }
        else
        {
            # add string to the existing list (hash collision, different string)
            $dict[$hash].Add($row)
        }
    }
    else
    {
        # add new hash value to dictionary
        $list = [System.Collections.Generic.List[string]]::new()
        $list.Add($row)
        $dict.Add($hash, $list)
    }
}
$dict
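If the goal is only to detect duplicate lines rather than group them, a HashSet[string] achieves the same effect with less ceremony, since its Add method returns $false when the value is already present. A minimal sketch, assuming a hypothetical input file path:
$seen = [System.Collections.Generic.HashSet[string]]::new()
foreach ($line in [System.IO.File]::ReadLines('C:\input.txt')) {
    if (-not $seen.Add($line)) {
        # $line is a duplicate; Add returned $false because it was already in the set
    }
}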

Fast compare two large CSVs (both rows and columns) in PowerShell

I have two large CSVs to compare. Both CSVs are basically data from the same system, one day apart. The number of rows is around 12k, with 30 columns.
The aim is to identify what column data has changed for each primary key (ID).
My idea was to loop through the CSVs to identify which rows have changed and dump these into separate CSVs. Once done, I loop through the changed rows again and identify the exact change per column.
$NewCSV = Import-Csv -Path ".\Data_A.csv"
$OldCSV = Import-Csv -Path ".\Data_B.csv"

foreach ($LineNew in $NewCSV)
{
    foreach ($LineOld in $OldCSV)
    {
        if ($LineNew -eq $LineOld)
        {
            Write-Host $LineNew, " Match"
        }
        else
        {
            Write-Host $LineNew, " Not Match"
        }
    }
}
But as soon as I run the loop, it takes forever for 12k rows. I was hoping there is a more efficient way to compare large files in PowerShell. Something quicker.
Well, you can give this a try. I'm not claiming it will be fast, for the reasons vonPryz has already pointed out, but it should give you a good side-by-side perspective on what has changed from the old CSV to the new one.
Note: Those cells that have the same value on both CSVs will be ignored.
$NewCSV = Import-Csv -Path ".\Data_A.csv"
$OldCSV = Import-Csv -Path ".\Data_B.csv" | Group-Object ID -AsHashTable -AsString
$properties = $NewCSV[0].PSObject.Properties.Name

$result = foreach ($line in $NewCSV)
{
    if ($ref = $OldCSV[$line.ID])
    {
        foreach ($prop in $properties)
        {
            if ($line.$prop -ne $ref.$prop)
            {
                [pscustomobject]@{
                    ID       = $line.ID
                    Property = $prop
                    OldValue = $ref.$prop
                    NewValue = $line.$prop
                }
            }
        }
        continue
    }
    Write-Warning "ID $($line.ID) could not be found on Old Csv!!"
}
As vonPryz hints in the comments, you've written an algorithm with quadratic time complexity (O(n²) in Big-O notation): every time the input size doubles, the number of computations performed increases 4-fold.
To avoid this, I'd suggest using a hashtable or other dictionary type to hold each data set, and use the primary key from the input as the dictionary key. This way you get constant-time lookup of corresponding records, and the time complexity of your algorithm becomes near-linear (O(2n + k)):
$NewCSV = @{}
Import-Csv -Path ".\Data_A.csv" | ForEach-Object {
    $NewCSV[$_.ID] = $_
}

$OldCSV = @{}
Import-Csv -Path ".\Data_B.csv" | ForEach-Object {
    $OldCSV[$_.ID] = $_
}
Now that we can efficiently resolve each row by its ID, we can inspect the whole of the data sets with an independent loop over each:
foreach ($entry in $NewCSV.GetEnumerator()) {
    if (-not $OldCSV.ContainsKey($entry.Key)) {
        # $entry.Value is a new row, not seen in the old data set
        continue # nothing to compare against in the old data
    }
    $newRow = $entry.Value
    $oldRow = $OldCSV[$entry.Key]
    # do the individual comparison of the rows here
}
Do another loop like the above, but with the roles of $NewCSV and $OldCSV swapped, to detect deletions.
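That deletion pass might look like this (a sketch following the same pattern as above):
foreach ($entry in $OldCSV.GetEnumerator()) {
    if (-not $NewCSV.ContainsKey($entry.Key)) {
        # $entry.Value is a deleted row; it no longer exists in the new data set
    }
}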

PowerShell script to group records by overlapping start and end date

I am working on a CSV file which has start and end dates, and the requirement is to group records whose dates overlap.
For example, in the table below, Bill_Number 177835's Start_Date and End_Date overlap with those of 178682, 179504, and 178990, so all of them should be grouped together, and so on for every record.
Bill_Number,Start_Date,End_Date
177835,4/14/20 3:00 AM,4/14/20 7:00 AM
178682,4/14/20 3:00 AM,4/14/20 7:00 AM
179504,4/14/20 3:29 AM,4/14/20 6:29 AM
178662,4/14/20 4:30 AM,4/14/20 5:30 AM
178990,4/14/20 6:00 AM,4/14/20 10:00 AM
178995,4/15/20 6:00 AM,4/15/20 10:00 AM
178998,4/15/20 6:00 AM,4/15/20 10:00 AM
I have tried different combinations like Group-Object and for loops but have not been able to produce the result.
With the above example of CSV, the expected result is;
Group1: 177835,178682,179504, 178990
Group2: 177835,178682,179504, 178662
Group3: 178995, 178998
Currently I have the below code in hand.
Any help on this will be appreciated; thanks in advance.
$array = @('ab','bc','cd','df')
for ($y = 0; $y -lt $array.count) {
    for ($x = 0; $x -lt $array.count) {
        if ($array[$y] -ne $array[$x]) {
            Write-Host $array[$y], $array[$x]
        }
        $x++
    }
    $y++
}
You can do something like the following. There is likely a cleaner solution, but that could take a lot of time.
$csv = Import-Csv file.csv

# Creates all inclusive groups where times overlap
$csvGroups = foreach ($row in $csv) {
    $start = [datetime]$row.Start_Date
    $end = [datetime]$row.End_Date
    ,($csv | where { ($start -ge [datetime]$_.Start_Date -and $start -le [datetime]$_.End_Date) -or ($end -ge [datetime]$_.Start_Date -and $end -le [datetime]$_.End_Date) })
}

# Removes duplicates from $csvGroups
$groups = $csvGroups | Group {$_.Bill_Number -join ','} |
    Foreach-Object { ,$_.Group[0] }

# Compares current group against all groups except itself
$output = for ($i = 0; $i -lt $groups.count; $i++) {
    $unique = $true # indicates if the group's bill_numbers are in another group
    $group = $groups[$i]
    $list = $groups -as [system.collections.arraylist]
    $list.RemoveAt($i) # Removes self
    foreach ($innerGroup in $list) {
        # If current group's bill_numbers are in another group, skip to next group
        if ((compare $group.Bill_Number $innerGroup.Bill_Number).SideIndicator -notcontains '<=') {
            $unique = $false
            break
        }
    }
    if ($unique) {
        ,$group
    }
}

$groupCounter = 1
# Output formatting
$output | Foreach-Object { "Group{0}:{1}" -f $groupCounter++, ($_.Bill_Number -join ",") }
Explanation:
I added comments to give an idea as to what is going on.
The ,$variable syntax uses the unary comma operator. It wraps the output in an array. Typically, PowerShell unrolls an array into individual items. The unrolling becomes a problem here because we want the groups to stay as groups (arrays). Otherwise, there would be a lot of duplicate bill numbers, and we'd lose track of the boundaries between groups.
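A quick illustration of the difference:
$group = 1, 2, 3
($group | Measure-Object).Count  # 3 -> the array unrolls into three pipeline items
(,$group | Measure-Object).Count # 1 -> the wrapped array travels as a single item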
An arraylist is used for $list. This is so we can access the RemoveAt() method. A typical array is of fixed size and can't be manipulated in that fashion. This can effectively be done with an array, but the code is different. You either have to select the index ranges around the item you want to skip or create a new array using some other conditional statement that will exclude the target item. An arraylist is just easier for me (personal preference).
So, a very dirty approach. I think there are a couple of ways to determine if there's overlap for a specific comparison, one record to another. However, you may need a list of the bill numbers each bill's date range collides with. Using a function call in a Select-Object expression, I added a Collisions property to your objects.
The function is wordy and could probably be improved, but the gist is that each record is compared to all other records, and the other record's bill number is reported in the Collisions property if either its start or end date falls within the other record's range.
This is of course just demo code, I'm sure it can be made better for your purposes, but may be a starting point for you.
Obviously change the path to the CSV file.
Function Get-Collisions
{
    Param(
        [Parameter( Mandatory = $true )]
        [Object]$ReferenceObject,

        [Parameter( Mandatory = $true )]
        [Object[]]$CompareObjects
    ) # End Parameter Block

    ForEach ($Object in $CompareObjects)
    {
        If ( !($ReferenceObject.Bill_Number -eq $Object.Bill_Number) )
        {
            If (
                ( $ReferenceObject.Start_Date -ge $Object.Start_Date -and $ReferenceObject.Start_Date -le $Object.End_Date ) -or
                ( $ReferenceObject.End_Date -ge $Object.Start_Date -and $ReferenceObject.End_Date -le $Object.End_Date ) -or
                ( $ReferenceObject.Start_Date -le $Object.Start_Date -and $ReferenceObject.End_Date -ge $Object.Start_Date )
            )
            {
                $Object.Bill_Number
            }
        }
    }
} # End Get-Collisions
$Objects = Import-Csv 'C:\temp\DateOverlap.CSV'
$Objects |
    ForEach-Object{
        $_.Start_Date = [DateTime]$_.Start_Date
        $_.End_Date = [DateTime]$_.End_Date
    }

$Objects = $Objects |
    Select-Object *, @{ Name = 'Collisions'; Expression = { Get-Collisions -ReferenceObject $_ -CompareObjects $Objects } }

$Objects | Format-Table -AutoSize
Let me know how it goes. Thanks.
@Shan, I saw your comments, so I wanted to respond with some additional code and discussion. I may have gone overboard, but you expressed a desire to learn, such that you can maintain these code pieces in the future. So, I put a lot of time into this.
I may mention some of @AdminOfThings' work too. That is not criticism, but collaboration. His example is clever and dynamic in terms of getting the job done and pulling in the right tools as he worked his way to the desired output.
I originally side-stepped the grouping question because I didn't feel like naming/numbering the groups had any meaning. For example: "Group 1" indicates all its members have overlap in their billing periods, but no indication of what or when the overlap is. Maybe I rushed through it… I may have been reading too much into it or perhaps even letting my own biases get in the way. At any rate, I elected to create a relationship from the perspective of each bill number, and that resulted in my first answer.
Since then, and because of your comment, I put effort into extending and documenting the first example I gave. The revised code will be Example 1 below. I've heavily commented it and most of the comments will apply to the original example as well. There are some differences that were forced by the extended grouping functionality, but the comments should reflect those situations.
Note: You'll also see I stopped calling them "collisions" and termed them "overlaps" instead.
Example 1:
Function Get-Overlaps
{
    <#
    .SYNOPSIS
    Given an object (reference object), compare it to a collection of other objects of the
    same type. Return an array of billing numbers for which the billing period overlaps
    that of the reference object.

    .DESCRIPTION
    Given an object (reference object), compare it to a collection of other objects of the
    same type. Return an array of billing numbers for which the billing period overlaps
    that of the reference object.

    .PARAMETER ReferenceObject
    This is the current object you wish to compare to all other objects.

    .PARAMETER CompareObjects
    The collection of objects you want to compare with the reference object.

    .NOTES
    > The date time casting could probably have been done by further preparing
      the objects in the calling code. However, given this is for a
      StackOverflow question I can polish that later.
    #>
    Param(
        [Parameter( Mandatory = $true )]
        [Object]$ReferenceObject,

        [Parameter( Mandatory = $true )]
        [Object[]]$CompareObjects
    ) # End Parameter Block

    [Collections.ArrayList]$Return = @()

    $R_StartDate = [DateTime]$ReferenceObject.Start_Date
    $R_EndDate   = [DateTime]$ReferenceObject.End_Date

    ForEach ($Object in $CompareObjects)
    {
        $O_StartDate = [DateTime]$Object.Start_Date
        $O_EndDate   = [DateTime]$Object.End_Date

        # The first if statement skips the reference object's own bill_number.
        If ( !($ReferenceObject.Bill_Number -eq $Object.Bill_Number) )
        {
            # This logic can use some explaining. So far as I could tell there were 2 cases to look for:
            # 1) Either or both of the start and end dates fall inside the timespan of the comparison
            #    object. This case is handled by the first 2 conditions.
            # 2) The reference object's timespan covers the entire timespan of the comparison object,
            #    meaning the start date is before and the end date is after, so the entire comparison
            #    timespan lies within the bounds of the reference timespan. I elected to use the 3rd
            #    condition below to detect that case because once the start date is earlier I only
            #    have to care whether the end date is greater than the other start date. It's a little
            #    more inclusive and partially covered by the previous conditions, but whatever, you
            #    gotta pick something...
            #
            # Note: This was a deceptively difficult thing to comprehend; I missed that last condition
            #       in my first example (later corrected) and I think @AdminOfThings also overlooked it.
            If (
                ( $R_StartDate -ge $O_StartDate -and $R_StartDate -le $O_EndDate ) -or
                ( $R_EndDate -ge $O_StartDate -and $R_EndDate -le $O_EndDate ) -or
                ( $R_StartDate -le $O_StartDate -and $R_EndDate -ge $O_StartDate )
            )
            {
                [Void]$Return.Add( $Object.Bill_Number )
            }
        }
    }
    Return $Return
} # End Get-Overlaps
$Objects = Import-Csv 'C:\temp\DateOverlap.CSV' # Import first, so $Objects exists when referenced below.
$Objects =
    $Objects |
    ForEach-Object{
        # Consider overlap as a relationship from the perspective of a given object.
        $Overlaps = [Collections.ArrayList]@( Get-Overlaps -ReferenceObject $_ -CompareObjects $Objects )

        # Knowing the overlaps I can infer the group, by adding the object's own bill_number to its group property.
        If ( $Overlaps )
        { # Don't calculate a group unless you actually have overlaps:
            $Group = $Overlaps.Clone()
            [Void]$Group.Add( $_.Bill_Number ) # Could be done in the line above, but for readability I separated it.
        }
        Else { $Group = $null } # Ensures we're not reusing the group from a previous iteration of the loop.

        # Create a new PSCustomObject with the data so far.
        [PSCustomObject][Ordered]@{
            Bill_Number = $_.Bill_Number
            Start_Date  = [DateTime]$_.Start_Date
            End_Date    = [DateTime]$_.End_Date
            Overlaps    = $Overlaps
            Group       = $Group | Sort-Object # Sorting will make it a lot easier to get unique lists later.
        }
    }
# The reason I recreated the objects from the CSV file instead of using Select-Object as I had
# previously is that I simply couldn't get Select-Object to maintain the ArrayList type that was
# being returned from the function. I know that's a documented problem or circumstance somewhere.

# Now I'll add one more property called Group_ID, a comma-delimited string that we can later use
# to echo the groups according to your original request.
$Objects =
    $Objects |
    Select-Object *, @{ Name = 'Group_ID'; Expression = { $_.Group -join ', ' } }

# This output is just for the sake of showing the new objects:
$Objects | Format-Table -AutoSize -Wrap

# Now create an array of unique Group_ID strings; this is possible because of the sorts and joins done earlier.
$UniqueGroups = $Objects.Group_ID | Select-Object -Unique
$Num = 1
ForEach ($UniqueGroup in $UniqueGroups)
{
    "Group $Num : $UniqueGroup"
    ++$Num # Increment $Num, using the convenient unary operator, so the next group is echoed properly.
}

# Below is a traditional for loop that does the same thing. I did that first before deciding the ForEach
# was cleaner. Leaving it commented below, because you're on a learning quest, so just more demo code...
# For ($i = 0; $i -lt $UniqueGroups.Count; ++$i)
# {
#     $Num = $i + 1
#     $UniqueGroup = $UniqueGroups[$i]
#     "Group $Num : $UniqueGroup"
# }
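As an aside, the three date conditions inside Get-Overlaps can be collapsed into the classic interval-overlap test: two closed ranges overlap exactly when each starts on or before the other's end. A sketch of the equivalent If, using the same variables as the function above:
If ( $R_StartDate -le $O_EndDate -and $O_StartDate -le $R_EndDate )
{
    [Void]$Return.Add( $Object.Bill_Number )
}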
Example 2:
$Objects =
    Import-Csv 'C:\temp\DateOverlap.CSV' |
    Select-Object Bill_Number,
        @{ Name = 'Start_Date'; Expression = { [DateTime]$_.Start_Date } },
        @{ Name = 'End_Date'; Expression = { [DateTime]$_.End_Date } }

# The above select statement converts the Start_Date & End_Date properties to [DateTime] objects.
# While you had asked to pack everything into the nested loops, that would have resulted in
# unnecessary recasting of object types to ensure proper comparison. Often this is a matter of
# preference, but in this case I think it's better. I did have it working well without the
# above select, but the code is more readable / concise with it. So even if you treat the
# Select-Object command as a black box the rest of the code should be easier to understand.
#
# Of course, and if you couldn't tell from my samples, Select-Object is incredibly useful. I
# recommend taking the time to learn it thoroughly. The MS documentation can be found here:
# https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/select-object?view=powershell-5.1
:Outer ForEach ($ReferenceObject in $Objects)
{
    # In other revisions I had assigned the dates to some shorter variable names.
    # I took that out. Again, since you're learning, I wanted all the dot referencing
    # to be on full display.

    [Collections.ArrayList]$TempArrList = @() # Reset this on each iteration of the outer loop.

    :Inner ForEach ($ComparisonObject in $Objects)
    {
        If ($ComparisonObject.Bill_Number -eq $ReferenceObject.Bill_Number)
        { # Skip the current reference object in the $Objects collection! This prevents the duplication
          # of the current bill's number within its group, helping to ensure unique-ification.
          #
          # By now you should have seen, across all revisions including @AdminOfThings' demo, that there
          # was some need to skip the current item when searching for overlaps, and that there are a
          # number of ways to accomplish that. In this case I simply go back to the top of the loop when
          # the current record is encountered, effectively skipping it.
            Continue Inner
        }

        # The below logic needs some explaining. So far as I could tell there were 2 cases to look for:
        # 1) Either or both of the start and end dates fall inside the timespan of the comparison
        #    object. This case is handled by the first 2 conditions.
        # 2) The reference object's timespan covers the entire timespan of the comparison object,
        #    meaning the start date is before and the end date is after, so the entire comparison
        #    timespan lies within the bounds of the reference timespan. I elected to use the 3rd
        #    condition below to detect that case because once the start date is earlier I only
        #    have to care whether the end date is greater than the other start date. It's a little
        #    more inclusive and partially covered by the previous conditions, but whatever, you
        #    gotta pick something...
        #
        # Note: This was a deceptively difficult thing to comprehend; I missed that last condition
        #       in my first example (later corrected) and I think @AdminOfThings also overlooked it.
        If (
            ( $ReferenceObject.Start_Date -ge $ComparisonObject.Start_Date -and $ReferenceObject.Start_Date -le $ComparisonObject.End_Date ) -or
            ( $ReferenceObject.End_Date -ge $ComparisonObject.Start_Date -and $ReferenceObject.End_Date -le $ComparisonObject.End_Date ) -or
            ( $ReferenceObject.Start_Date -le $ComparisonObject.Start_Date -and $ReferenceObject.End_Date -ge $ComparisonObject.Start_Date )
        )
        {
            [Void]$TempArrList.Add( $ComparisonObject.Bill_Number )
        }
    }
    # Now add the properties!
    $ReferenceObject | Add-Member -Name Overlaps -MemberType NoteProperty -Value $TempArrList

    If ($ReferenceObject.Overlaps)
    {
        [Void]$TempArrList.Add($ReferenceObject.Bill_Number)
        $ReferenceObject | Add-Member -Name Group -MemberType NoteProperty -Value ( $TempArrList | Sort-Object )
        $ReferenceObject | Add-Member -Name Group_ID -MemberType NoteProperty -Value ( $ReferenceObject.Group -join ', ' )

        # Below, a script property also works, but I think the above is easier to follow:
        # $ReferenceObject | Add-Member -Name Group_ID -MemberType ScriptProperty -Value { $this.Group -join ', ' }
    }
    Else
    {
        $ReferenceObject | Add-Member -Name Group -MemberType NoteProperty -Value $null
        $ReferenceObject | Add-Member -Name Group_ID -MemberType NoteProperty -Value $null
    }
}
# This output is just for the sake of showing the new objects:
$Objects | Format-Table -AutoSize -Wrap

# Now create an array of unique Group_ID strings; this is possible because of the sorts and joins done earlier.
#
# It's important to point out I chose to sort because I saw the clever solution that @AdminOfThings
# used. There's a need to display only groups that have unique memberships, not necessarily unique
# ordering of the members. He identified these by doing some additional loops and using the
# Compare-Object cmdlet. Again, I must say that was very clever, and Compare-Object is another tool
# very much worth getting to know. However, the code didn't seem to care which of the various
# orderings it ultimately output. Therefore I could conclude the order wasn't really important, and
# it's fine if the groups are sorted. With the objects sorted it's much easier to derive the truly
# unique lists with the simple Select-Object command below.
$UniqueGroups = $Objects.Group_ID | Select-Object -Unique
# Finally, loop through the unique groups.
$Num = 1
ForEach ($UniqueGroup in $UniqueGroups)
{
    "Group $Num : $UniqueGroup"
    ++$Num # Increment $Num, using the convenient unary operator, so the next group is echoed properly.
}
Additional Discussion:
Hopefully the examples are helpful. I wanted to mentioned a few more points:
Using ArrayLists ( [System.Collections.ArrayList] ) instead of native arrays. The typical reason to do this is the ability to add and remove elements quickly. If you search the internet you'll find hundreds of articles explaining why it's faster. It's so common you'll often find experienced PowerShell users implementing it instinctively. But the main reason is speed and the flexibility to easily add and remove elements.
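A minimal illustration of why that matters:
$list = [System.Collections.ArrayList]@()
[Void]$list.Add('first')  # appends in place; [Void] suppresses the returned index
[Void]$list.Add('second')
$list.RemoveAt(0)         # supported here; a plain array is fixed-size and would throw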
You'll notice I relied heavily on the ability to append new properties to objects. There are several ways to do this: Select-Object, creating your own objects, and in Example 2 above I used Add-Member. The main reason I used Add-Member was that I couldn't get the ArrayList type to stick when using Select-Object.
Regarding loops. This is specific to your desire for nested loops. My first answer still had loops, except some were implied by the pipe, and others were stored in a helper function. The latter is really also a preference; for readability it's sometimes helpful to park some code out of view from the main code body. That said, all the same concepts were there from the beginning. You should get comfortable with the implied loop that comes with pipe-lining capability.
I don't think there's much more I can say without getting redundant. I really hope this was helpful, it was certainly fun for me to work on it. If you have questions or feedback let me know. Thanks.

PowerShell compare arrays and get unique values

I am currently trying to write a PowerShell script that can be run weekly on two CSV files, to check that they both contain the same information. I want the script to output anything that appears in one file but not the other to a new file.
The script I have written so far compares the two but only adds <= and => to the values.
It also doesn't work all the time, because I manually checked the file and found results that existed in both.
Code below:
$NotPresents = Compare-Object -ReferenceObject $whatsup -DifferenceObject $vmservers -Property device

foreach ($NotPresent in $NotPresents)
{
    Write-Host $NotPresent.device
}

$NotPresents | Out-File Filepath.txt
$NotPresents.count
Any ideas what I have done wrong?
In order to avoid having to iterate over one of the arrays more than once*, you may want to throw them each into a hashtable:
$whatsupTable = @{}
foreach ($entry in $whatsup) {
    $whatsupTable[$entry.device] = $true
}

$vmserversTable = @{}
foreach ($entry in $vmservers) {
    $vmserversTable[$entry.device] = $true
}
Now you can easily find the disjunction with a single loop and a lookup against the other table:
$NotInWhatsUp = $vmservers | Where { -not $whatsupTable[$_.device] }
$NotInVMServers = $whatsup | Where { -not $vmserversTable[$_.device] }
*) ok, technically we're looping through each twice, but still much better than nested looping
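To get the results into files as the question intends, the two result sets can be written out directly (hypothetical output paths):
$NotInWhatsUp | Select-Object -ExpandProperty device | Set-Content '.\missing-from-whatsup.txt'
$NotInVMServers | Select-Object -ExpandProperty device | Set-Content '.\missing-from-vmservers.txt'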

Unable to remove item from hash table

In Powershell, I have a hash table that contains data similar to this -
Name Value
---- -----
1-update.bat 1
2-update.bat 2
3-update.bat 3
3.1-update.bat 3.1
4-update.bat 4
I also have a variable that contains a number, for example 3.
What I would like to do is loop through the hash table and remove any entry where the value is less than or equal to 3.
I'm thinking that this will be easy, especially as the docs say that hash tables contain a .Remove method. However, the code I have below fails, yielding this error:
Exception calling "Remove" with "1" argument(s): "Collection was of a
fixed size."
Here is the code that I used -
$versions = @{}
$updateFiles | ForEach-Object {
    $versions.Add($_.name, [decimal]($_.name -split '-')[0])
}

[decimal]$lastUpdate = Get-Content $directory\$updatesFile

$versions | ForEach-Object {
    if ( $_.Value -le $lastUpdate ) {
        $versions.Remove($version.Name)
    }
}
I first tried to loop $versions in a different manner, trying both the foreach and for approaches, but both failed in the same manner.
I also tried to create a temporary array to hold the name of the versions to remove, and then looping that array to remove them, but that also failed.
Next I hit Google, and while I can find several similar questions, none answer my specific question. Mostly they suggest using a list (New-Object System.Collections.Generic.List[System.Object]), which from what I can tell is of no help to me here.
Is anyone able to suggest a fix?
Here you go, you can use .Remove(), you just need a clone of the hashtable so that it will let you remove items as you enumerate.
[hashtable]$ht = @{ '1-update.bat' = 1; '2-update.bat' = 2; '3-update.bat' = 3; '3.1-update.bat' = 3.1; '4-update.bat' = 4 }

# Clone so that we can remove items as we're enumerating
$ht2 = $ht.Clone()

foreach ($k in $ht.GetEnumerator()) {
    if ([decimal]$k.Value -le 3) {
        # notice, deleting from the clone, then return the clone at the end
        $ht2.Remove($k.Key)
    }
}
$ht2
Notice I've cast the original variable too, so that it's explicitly a hashtable. That may not be required, but I like to do it to at least keep things clear.
It looks like you just confused ForEach-Object with foreach but only halfway (maybe it was foreach before and you converted it).
You can't send a [hashtable] directly to ForEach-Object; the $_ in that case will just refer to the single [hashtable] you sent in. You can do:
foreach ($version in $versions.GetEnumerator()) {
    $version.Name
    $version.Value
}
or you can do something like this:
$versions.Keys | ForEach-Object {
    $_             # the name/key
    $versions[$_]  # the value
    $versions.$_   # also the value
}
$ht.Keys          # list all keys
$ht[$_]           # get an element of the hashtable
$ht.Remove($_)    # remove an element of the hashtable by its key
what you want:
$ht.Keys | ? { $ht[$_] -le 3 } | %{$ht.Remove($_) }
You need to create a temporary array to hold the names/keys of the versions to remove, and then loop over that array to remove them from the hash table:
$versionKeysToRemove = $versions.Keys | Where-Object { $versions[$_] -le $lastUpdate }
$versionKeysToRemove | ForEach-Object { $versions.Remove($_) }
Or shorter:
($versions.Keys | ? { $versions[$_] -le $lastUpdate }) | % { $versions.Remove($_) }
Please note the parentheses.
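A quick demonstration with the sample data from the question (illustrative values):
$versions = @{ '1-update.bat' = 1; '2-update.bat' = 2; '3-update.bat' = 3; '3.1-update.bat' = 3.1; '4-update.bat' = 4 }
[decimal]$lastUpdate = 3
($versions.Keys | ? { $versions[$_] -le $lastUpdate }) | % { $versions.Remove($_) }
$versions # only 3.1-update.bat and 4-update.bat remain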