Foreach with Where-Object not yielding correct results - powershell

I have the following code:
$ErrCodes = Get-AlarmIDs_XML -fileNamePath $Paper_Dialog_FullBasePath
$excelDevice = Get_ErrorCodes -errorCodeListFilePath $outFilePath -ws "DEVICE COMPONENT MAP"
foreach ($errCode in $ErrCodes | Where-Object{$excelDevice.Common_Alarm_Number -eq $errCode.Value })
{
#$dataList = [System.Collections.Generic.List[psobject]]::new()
#$resultHelp = [System.Collections.Generic.List[psobject]]::new()
Write-Host "another:"
$err = $errCode.Value #get each error code to lookup in mdb file
$key = $errCode.Key
Write-Host $err
...
}
But it's definitely getting into the foreach loop when it shouldn't.
My intention is to use the foreach so that the code that follows only runs for entries of $ErrCodes whose value has a match.
Let me know if you need to see the Functions that do the file reads, but the data structures look like this:
$excelDevice:
[Object[57]]
[0]:@{Common_Alarm_Number=12-2000}
[1]:@{Common_Alarm_Number=12-5707}
[2]:@{Common_Alarm_Number=12-9}
[3]:@{Common_Alarm_Number=12-5703}
...
$ErrCodes:
[Object[7]]
[0]:@{Key=A;Value=12-5702}
[1]:@{Key=B;Value=12-5704}
[2]:@{Key=C;Value=12-5706}
[3]:@{Key=D;Value=12-5707}
...
So we only care about the ones in $ErrCodes that are also in $excelDevice.
When I step through the code, it's getting into the foreach body for 12-5702 for some reason, when it shouldn't be there (it prints 12-5702 to the screen). I know I wouldn't want 12-5702 to be used because it isn't in the $excelDevice list.
How would I get that Where-Object to filter out the $ErrCodes entries that aren't in the $excelDevice list? I don't want to process error codes that don't have data for this device.

Right now you're testing whether any of the values in $excelDevice.Common_Alarm_Number (which evaluates to an array) is exactly equal to $errCode.Value - but inside the Where-Object script block $errCode is not the current pipeline item (the foreach loop variable hasn't been assigned yet when the pipeline runs), so the same stale value gets compared for every element, which doesn't make much sense.
It looks like you'll want to test each error code for whether it is contained in the $excelDevice.Common_Alarm_Number list instead. Use $_ to refer to the individual input item received via the pipeline:
foreach ($errCode in $ErrCodes | Where-Object{ $excelDevice.Common_Alarm_Number -contains $_.Value }) { ... }
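For illustration, a minimal self-contained sketch of that filter, using a subset of the sample data shown above as inline stand-ins for the real file reads (member enumeration of $excelDevice.Common_Alarm_Number needs PS v3+); only the 12-5707 entry makes it into the loop:
$excelDevice = @(
    [pscustomobject]@{ Common_Alarm_Number = '12-2000' },
    [pscustomobject]@{ Common_Alarm_Number = '12-5707' },
    [pscustomobject]@{ Common_Alarm_Number = '12-9' }
)
$ErrCodes = @(
    [pscustomobject]@{ Key = 'A'; Value = '12-5702' },
    [pscustomobject]@{ Key = 'D'; Value = '12-5707' }
)
foreach ($errCode in $ErrCodes | Where-Object { $excelDevice.Common_Alarm_Number -contains $_.Value })
{
    Write-Host "$($errCode.Key) => $($errCode.Value)"   # prints only: D => 12-5707
}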

Related

Check if a condition is met by a line within a TXT but "in an advanced way"

I have a 1300 MB TXT file (a huge thing). I want to build code that does two things:
1. Every line contains a unique ID at the beginning. For all lines with the same unique ID, I want to check whether the conditions are met for that "group" of IDs. (This tells me: for how many lines with the unique ID X all conditions have been met.)
2. When the script is finished, I want to remove from the TXT all lines where the condition was met, so I can rerun the script with another condition set to "narrow down" the whole document.
After a few cycles I finally have a set of conditions that applies to all lines in the document.
My current approach seems very slow (one cycle takes hours). My final result is a set of conditions that apply to all lines of the document.
If you find an easier way to do that, feel free to recommend.
Help is welcome :)
Code so far (does not fulfill everything from 1 & 2):
foreach ($item in $liste)
{
# Check Conditions
if ( ($item -like "*XXX*") -and ($item -like "*YYY*") -and ($item -notlike "*ZZZ*")) {
# Add a line to a document to see which lines match condition
Add-Content "C:\Desktop\it_seems_to_match.txt" "$item"
# Retrieve the unique ID from the line and feed array.
$array += $item.Split("/")[1]
# Remove the line from final document
$liste = $liste -replace $item, ""
}
}
# Pipe the "new cleaned" list somewhere
$liste | Set-Content -Path "C:\NewListToWorkWith.txt"
# Show me the counts
$array | group | % { $h = @{} } { $h[$_.Name] = $_.Count } { $h } | Out-File "C:\Desktop\count.txt"
Demo Lines:
images/STRINGA/2XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGA/3XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGB/4XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGB/5XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGC/5XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
Performance considerations:
Add-Content "C:\Desktop\it_seems_to_match.txt" "$item"
try to avoid wrapping cmdlet pipelines
See also: Mastering the (steppable) pipeline
$array += $item.Split("/")[1]
Try to avoid using the increase assignment operator (+=) to create a collection
See also: Why should I avoid using the increase assignment operator (+=) to create a collection
$liste = $liste -replace $item, ""
This is a very expensive operation, considering that you are reassigning (copying) a long list ($liste) with each iteration.
Besides, it is bad practice to change an array that you are currently iterating over.
$array | group | ...
Group-Object is a rather slow cmdlet; you'd be better off collecting (or counting) the items on the fly (where you currently do $array += $item.Split("/")[1]) using a hashtable, something like:
$Name = $item.Split("/")[1]
if (!$HashTable.Contains($Name)) { $HashTable[$Name] = [Collections.Generic.List[String]]::new() }
$HashTable[$Name].Add($Item)
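Putting those pieces together, a rough end-to-end sketch could look like this (the patterns and output paths are taken from the question; 'C:\HugeInput.txt' is just a placeholder for the big source file):
$HashTable = @{}
$matched = [System.Collections.Generic.List[string]]::new()
$remaining = [System.Collections.Generic.List[string]]::new()
foreach ($item in [System.IO.File]::ReadLines('C:\HugeInput.txt'))
{
    if (($item -like "*XXX*") -and ($item -like "*YYY*") -and ($item -notlike "*ZZZ*")) {
        $matched.Add($item)
        $Name = $item.Split("/")[1]
        if (!$HashTable.Contains($Name)) { $HashTable[$Name] = [Collections.Generic.List[String]]::new() }
        $HashTable[$Name].Add($item)
    }
    else {
        $remaining.Add($item)   # keep only the lines that did not match
    }
}
$matched | Set-Content "C:\Desktop\it_seems_to_match.txt"   # one write instead of Add-Content per line
$remaining | Set-Content "C:\NewListToWorkWith.txt"         # the cleaned list, written once
$HashTable.GetEnumerator() | % { "$($_.Key): $($_.Value.Count)" } | Out-File "C:\Desktop\count.txt"   # counts per unique ID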
To minimize memory usage it may be better to read one line at a time and check if it already exists. In the code below I used a StringReader; you can replace it with a StreamReader to read from a file. I'm checking whether the entire string already exists, but you may want to split the line instead. Notice there are duplicates in the input but not in the dictionary. See the code below:
$rows= #"
images/STRINGA/2XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGA/3XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGB/4XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGB/5XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGC/5XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGA/2XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGA/3XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGB/4XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGB/5XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
images/STRINGC/5XXXXXXXX_rTTTTw_GGGG1_Top_MMM1_YY02_ZZZ30_AAAA5.jpg
"#
$dict = [System.Collections.Generic.Dictionary[int, System.Collections.Generic.List[string]]]::new();
$reader = [System.IO.StringReader]::new($rows)
while(($row = $reader.ReadLine()) -ne $null)
{
$hash = $row.GetHashCode()
if($dict.ContainsKey($hash))
{
#check if list contains the string
if($dict[$hash].Contains($row))
{
#string is a duplicate
}
else
{
#add string to dictionary value if it is not in list
$list = $dict[$hash]
$list.Add($row)
}
}
else
{
#add new hash value to dictionary
$list = [System.Collections.Generic.List[string]]::new();
$list.Add($row)
$dict.Add($hash, $list)
}
}
$dict
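To run the same loop against a file on disk instead of the in-memory here-string, only the reader line changes; for example (the path is a placeholder):
$reader = [System.IO.StreamReader]::new('C:\NewListToWorkWith.txt')   # placeholder path
try
{
    while(($row = $reader.ReadLine()) -ne $null)
    {
        # ... same body as above ...
    }
}
finally
{
    $reader.Dispose()
}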

Powershell Key/Value Pair Matching Problem

I am looking to check a key/value pair in PowerShell. I have tried various methods but none seem to be working as I would expect.
I keep getting a True response on a lookup of $checked_groups, even though I can see the key/value pair in the write-out.
if (!($checked_groups[$varDomain] -eq "$varName"))
Even if the key/value pair is in the $checked_groups dictionary, the above statement is True. Have you ever come across this before?
I can't recreate the problem using the snippet below, even though it's pretty much the same logic as my live code. The sources of the values are different, as I am dynamically collecting these in the live code and iterating over them in a loop; my $checked_groups = @{ } is outside this loop to maintain the dictionary's key/value pairs across all iterations, and I can confirm they are preserved across every iteration.
I can't work out why this statement would always resolve to True :(
$checked_groups = @{ }
$varDomain = "example.com"
$varName = "Administrator"
if (!($checked_groups[$varDomain] -eq "$varName"))
{
write-host 'Not In There'
}
foreach ($thing in $checked_groups)
{
Write-Host ($thing | Out-String)
}
$checked_groups += @{$varDomain = $varName}
if (($checked_groups[$varDomain] -eq "$varName"))
{
write-host 'In There'
foreach ($thing in $checked_groups)
{
Write-Host ($thing | Out-String)
}
}
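One thing worth double-checking in a repro like this: foreach over a hashtable treats the table as a single object, so the Write-Host loops above print the whole table at once rather than one pair per iteration; .GetEnumerator() yields the individual entries, and ContainsKey makes the key lookup explicit. A small sketch, independent of the live data sources:
$checked_groups = @{ }
$checked_groups['example.com'] = 'Administrator'
# enumerate individual key/value pairs rather than the table as a whole
foreach ($entry in $checked_groups.GetEnumerator())
{
    Write-Host "$($entry.Key) = $($entry.Value)"
}
# explicit key-plus-value check
if ($checked_groups.ContainsKey('example.com') -and $checked_groups['example.com'] -eq 'Administrator')
{
    write-host 'In There'
}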

Powershell assistance

I am currently using the below PS script to check if the current month's MS patches are installed on the system. The script is set to check the $env:COMPUTERNAME.mbsa and Patches_NA.txt files and send the result to the $env:COMPUTERNAME.csv file.
I now need to modify this script to also pull information from other POS devices in the same location (C:\Users\Cambridge\SecurityScans) and send the results to the $env:COMPUTERNAME.csv file.
The POS devices are listed like this:
172.26.210.1.mbsa
172.26.210.2.mbsa
172.26.210.3.mbsa
and so forth.
The IP range at all our locations (last octet) is 1 - 60. Any ideas on how I can set this up?
Script:
$logname = "C:\temp\PatchVerify\$env:COMPUTERNAME.csv"
[xml]$x=type "C:\Users\Cambridge\SecurityScans\$env:COMPUTERNAME.mbsa"
#This list is created based on a text file that is provided.
$monthlyPatches = type "C:\Temp\PatchVerify\Patches_NA.txt"|
foreach{if ($_ -match "-KB(?<KB>\d+)"){$matches.KB}}
$patchesNotInstalled=$x.SecScan.check | where {$_.id -eq 500} |foreach{`
$_.detail.updatedata|where {$_.isinstalled -eq "false"}}|Select -expandProperty KBID
$patchesInstalled =$x.SecScan.check | where {$_.id -eq 500} |foreach{`
$_.detail.updatedata|where {$_.isinstalled -eq "true"}}|Select -expandProperty KBID
"Store,Patch,Present"> $logname
$store = "$env:COMPUTERNAME"
foreach ($patch in $monthlyPatches)
{
$result = "Unknown"
if ( $patchesInstalled -contains $patch)
{
$result = "YES"
}
if ( $patchesNotInstalled -contains $patch)
{
$result = "NO"
}
"$store,KB$($patch),$result" >>$logname
}
You can find lots of information on creating functions on the web, but a simple example would be:
Function Check-Patches{
Param($FileName)
$logname = "C:\temp\PatchVerify\$FileName.csv"
[xml]$x=type "C:\Users\Cambridge\SecurityScans\$FileName.mbsa"
# The rest of your existing code goes here...
}
Check-Patches "$env:ComputerName"
For($i=1;$i -le 60;$i++){
Check-Patches "172.26.210.$i"
}
If you need me to break down anything in that let me know and I'll go into further explanation, but from what you already have it looks like you have a decent grasp on PowerShell theory and just needed to know what resources are available.
Edit: I updated my example to better fit your script, having it accept a file name, and then applying that file name to the $logname and $x variables within the function.
The break down...
First we declare that we are creating a Function using the Function keyword. Following that is the name of the function that you will use later to call it, and an opening curly brace to start the scriptblock that makes up the actual function.
Next is the Param line, which in this case is very simple only declaring one variable as input. This could alternatively be done as Function Check-Patches ($FileName){ but when you start getting into more advanced functions that only gets confusing, so my recommendation is to stick with putting the parameters inside the function's scriptblock. This is the first thing you want inside of your function in most cases, excluding any Help that you would write up for the function.
Then we have updated lines for $logname and [xml]$x that use the $FileName that the function gets as input.
After that comes all of your code that parses the patch logs, and outputs to your CSV, and the closing curly brace that ends the scriptblock, and the function.
Then we call it for the ComputerName, and run a For loop. The For loop runs everything between 1 and 60, and for each loop it uses that number as the last octet of the file name to feed into the function and check those files.
A few comments on the rest of your code. The $monthlyPatches assignment could pipe type straight into ?{$_ -match "-KB(?<KB>\d+)"} | %{$matches.KB}, so that the results are filtered before the ForEach loop, which could cut down on some time (see the concrete line below).
On the $patchesInstalled and $patchesNotInstalled lines you don't need the backtick at the end of the line. You can naturally have a line break after the beginning of the scriptblock for a ForEach loop. Having the backtick there can be hard to spot later if the script breaks, and if there is anything after it (including a space) the script can break and throw errors that are hard to track down.
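Applied to the lines from the question, those two suggestions would look roughly like this (same logic, just filtered up front and with no trailing backtick):
$monthlyPatches = type "C:\Temp\PatchVerify\Patches_NA.txt" | ?{$_ -match "-KB(?<KB>\d+)"} | %{$matches.KB}
$patchesNotInstalled = $x.SecScan.check | where {$_.id -eq 500} | foreach {
    $_.detail.updatedata | where {$_.isinstalled -eq "false"}} | Select -ExpandProperty KBID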
Lastly, you loop through $x twice, then through $monthlyPatches once, and do a lot of individual writes to the log file. I would suggest creating an array, filling it with custom objects that have three properties (Store, Patch, and Present), and then outputting that at the end of the function. That changes things a little bit, but then your function outputs objects, which you could pipe to Export-CSV, or later use for something else; at least then you'd have them. To do that I'd run $x through a switch to see what is installed, then fill out the array by adding every monthly patch that isn't already in it as Unknown. That would go something like:
Function Check-Patches{
Param($FileName)
$logname = "C:\temp\PatchVerify\$FileName.csv"
[xml]$x=type "C:\Users\Cambridge\SecurityScans\$FileName.mbsa"
$PatchStatus = @()
#This list is created based on a text file that is provided.
$monthlyPatches = GC "C:\Temp\PatchVerify\Patches_NA.txt"|?{$_ -match "-KB(?<KB>\d+)"} | %{$matches.KB}
#Create objects for all the patches in the updatelog that were in the monthly list.
Switch($x.SecScan.Check|?{$_.KBID -in $monthlyPatches -and $_.id -eq 500}){
{$_.detail.updatedata.isinstalled -eq "true"}{$PatchStatus+=[PSCustomObject][Ordered]#{Store=$FileName;Patch=$_.KBID;Present="YES"};Continue}
{$_.detail.updatedata.isinstalled -eq "false"}{$PatchStatus+=[PSCustomObject][Ordered]#{Store=$FileName;Patch=$_.KBID;Present="NO"};Continue}
}
#Populate all of the monthly patches that weren't found on the machine as installed or failed
$monthlyPatches | ?{$_ -notin $PatchStatus.Patch} | %{$PatchStatus += [PSCustomObject][Ordered]@{Store=$FileName;Patch=$_;Present="Unknown"}}
#Output results
$PatchStatus
}
#Check patches on current computer
Check-Patches "$env:ComputerName"|Export-Csv "C:\temp\PatchVerify\$env:ComputerName.csv" -NoTypeInformation
#Check patches on POS Devices
For($i=1;$i -le 60;$i++){
Check-Patches "172.26.210.$i"|Export-Csv "C:\temp\PatchVerify\172.26.210.$i.csv" -NoTypeInformation
}

Why Is It Possible to Loop Through a Null Array

Given the following PowerShell code:
$FolderItems = Get-ChildItem -Path "C:\Test"
Write-Host "FolderItems Is Null: $($FolderItems -eq $null)"
foreach ($FolderItem in $FolderItems)
{
Write-Host "Inside the loop: $($FolderItem.Name)"
}
Write-Host "Done."
When I test it with one file in the C:\Test folder, it outputs this:
FolderItems Is Null: False
Inside the loop: MyFile.txt
Done.
However, when I test it with ZERO files in the folder, it outputs this:
FolderItems Is Null: True
Inside the loop:
Done."
If $FolderItems is null, then why does it enter the foreach loop?
This was an intentional design choice made in V1 and revisited in V3.
In most languages, the foreach statement can only loop over collections of things. PowerShell has always been a little different, and in V1, you could loop over a single value in addition to collections of values.
For example:
foreach ($i in 42) { $i } # prints 42
In V1, if a value was a collection, foreach would iterate over each element in the collection, otherwise it would enter the loop for just that value.
Note in the above sentence, $null isn't special. It's just another value. From a language design point of view, this is fairly clean and concisely explained.
Unfortunately many people did not expect this behavior and it caused many bugs. I think some confusion arises because people expect the foreach statement to behave almost like the foreach-object cmdlet. In other words, I think people expect the following to work the same:
$null | foreach { $_ }
foreach ($i in $null) { $i }
In V3, we decided that it was important enough to change behavior because we could help scripters avoid introducing bugs in their scripts.
Note that changing the behavior could in theory break existing scripts in unexpected ways. We ultimately decided that most scripts that potentially see $null in the foreach statement already guard the foreach statement with an if, e.g.:
if ($null -ne $c)
{
foreach ($i in $c) { ... }
}
So in reality, most real world scripts would not see a change in behavior.
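A quick way to see the behavior described above for yourself (the first line prints nothing on V3+ but runs once on V1/V2):
foreach ($i in $null) { 'ran' }      # V3+: no output; V1/V2: runs once
foreach ($i in 42) { $i }            # prints 42 in every version
foreach ($i in @($null)) { 'ran' }   # prints 'ran': a collection containing $null still iterates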
This was something of an idiosyncrasy/bug in ForEach in V1 and V2. It was corrected in the V3 release.
Seems to me like you need to wrap your foreach within a conditional that checks that $FolderItems is not null. That way, it never enters the loop when $FolderItems is NULL:
If (-NOT ($FolderItems -eq $null)) {
foreach ($FolderItem in $FolderItems)
{
Write-Host "Inside the loop: $($FolderItem.Name)"
}
}
This may be of help as well http://bit.ly/1brKRRk

Is it possible to terminate or stop a PowerShell pipeline from within a filter

I have written a simple PowerShell filter that pushes the current object down the pipeline if its date is between the specified begin and end date. The objects coming down the pipeline are always in ascending date order, so as soon as the date exceeds the specified end date I know my work is done and I would like to tell the pipeline that the upstream commands can abandon their work so that the pipeline can finish. I am reading some very large log files and I will frequently want to examine just a portion of the log. I am pretty sure this is not possible but I wanted to ask to be sure.
It is possible to break a pipeline with anything that would otherwise break an outside loop or halt script execution altogether (like throwing an exception). The solution then is to wrap the pipeline in a loop that you can break if you need to stop the pipeline. For example, the below code will return the first item from the pipeline and then break the pipeline by breaking the outside do-while loop:
do {
Get-ChildItem|% { $_;break }
} while ($false)
This functionality can be wrapped into a function like this, where the last line accomplishes the same thing as above:
function Breakable-Pipeline([ScriptBlock]$ScriptBlock) {
do {
. $ScriptBlock
} while ($false)
}
Breakable-Pipeline { Get-ChildItem|% { $_;break } }
It is not possible to stop an upstream command from a downstream command. It will continue to filter out objects that do not match your criteria, but the first command will process everything it was set to process.
The workaround will be to do more filtering on the upstream cmdlet or function/filter. Working with log files makes it a bit more complicated, but perhaps using Select-String and a regular expression to filter out the undesired dates might work for you.
Unless you know how many lines you want to take and from where, the whole file will be read to check for the pattern.
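For example, something along these lines (the log path and date pattern are placeholders) passes only the matching lines on to the rest of the pipeline, although the whole file is still scanned:
# keep only lines whose leading timestamp matches the desired date(s)
Select-String -Path "C:\logs\huge.log" -Pattern '^2008-11-0[1-9]' | ForEach-Object { $_.Line }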
You can throw an exception when ending the pipeline.
gc demo.txt -ReadCount 1 | %{$num=0}{$num++; if($num -eq 5){throw "terminated pipeline!"}else{write-host $_}}
or
Look at this post about how to terminate a pipeline: https://web.archive.org/web/20160829015320/http://powershell.com/cs/blogs/tobias/archive/2010/01/01/cancelling-a-pipeline.aspx
Not sure about your exact needs, but it may be worth your time to look at Log Parser to see if you can't use a query to filter the data before it even hits the pipe.
If you're willing to use non-public members here is a way to stop the pipeline. It mimics what select-object does. invoke-method (alias im) is a function to invoke non-public methods. select-property (alias selp) is a function to select (similar to select-object) non-public properties - however it automatically acts like -ExpandProperty if only one matching property is found. (I wrote select-property and invoke-method at work, so can't share the source code of those).
# Get the system.management.automation assembly
$script:smaa=[appdomain]::currentdomain.getassemblies()|
? location -like "*system.management.automation*"
# Get the StopUpstreamCommandsException class
$script:upcet=$smaa.gettypes()| ? name -like "*StopUpstreamCommandsException*"
function stop-pipeline {
# Create a StopUpstreamCommandsException
$upce = [activator]::CreateInstance($upcet,@($pscmdlet))
$PipelineProcessor=$pscmdlet.CommandRuntime|select-property PipelineProcessor
$commands = $PipelineProcessor|select-property commands
$commandProcessor= $commands[0]
$ci = $commandProcessor|select-property commandinfo
$upce.RequestingCommandProcessor | im set_commandinfo @($ci)
$cr = $commandProcessor|select-property commandruntime
$upce.RequestingCommandProcessor| im set_commandruntime @($cr)
$null = $PipelineProcessor|
invoke-method recordfailure @($upce, $commandProcessor.command)
if ($commands.count -gt 1) {
$doCompletes = @()
1..($commands.count-1) | % {
write-debug "Stop-pipeline: added DoComplete for $($commands[$_])"
$doCompletes += $commands[$_] | invoke-method DoComplete -returnClosure
}
foreach ($DoComplete in $doCompletes) {
$null = & $DoComplete
}
}
throw $upce
}
EDIT: per mklement0's comment:
Here is a link to the Nivot Ink blog post about the "poke" module, which similarly gives access to non-public members.
As far as additional comments, I don't have meaningful ones at this point. This code just mimics what a decompilation of select-object reveals. The original MS comments (if any) are of course not in the decompilation. Frankly I don't know the purpose of the various types the function uses. Getting that level of understanding would likely require a considerable amount of effort.
My suggestion: get Oisin's poke module. Tweak the code to use that module. And then try it out. If you like the way it works, then use it and don't worry how it works (that's what I did).
Note: I haven't studied "poke" in any depth, but my guess is that it doesn't have anything like -returnClosure. However, adding that should be as easy as this:
if (-not $returnClosure) {
$methodInfo.Invoke($arguments)
} else {
{$methodInfo.Invoke($arguments)}.GetNewClosure()
}
Here's an - imperfect - implementation of a Stop-Pipeline cmdlet (requires PS v3+), gratefully adapted from this answer:
#requires -version 3
Filter Stop-Pipeline {
$sp = { Select-Object -First 1 }.GetSteppablePipeline($MyInvocation.CommandOrigin)
$sp.Begin($true)
$sp.Process(0)
}
# Example
1..5 | % { if ($_ -gt 2) { Stop-Pipeline }; $_ } # -> 1, 2
Caveat: I don't fully understand how it works, though fundamentally it takes advantage of Select -First's ability to stop the pipeline prematurely (PS v3+). However, in this case there is one crucial difference from how Select -First terminates the pipeline: downstream cmdlets (commands later in the pipeline) do not get a chance to run their end blocks.
Therefore, aggregating cmdlets (those that must receive all input before producing output, such as Sort-Object, Group-Object, and Measure-Object) will not produce output if placed later in the same pipeline; e.g.:
# !! NO output, because Sort-Object never finishes.
1..5 | % { if ($_ -gt 2) { Stop-Pipeline }; $_ } | Sort-Object
Background info that may lead to a better solution:
Thanks to PetSerAl, my answer here shows how to produce the same exception that Select-Object -First uses internally to stop upstream cmdlets.
However, there the exception is thrown from inside the cmdlet that is itself connected to the pipeline to stop, which is not the case here:
Stop-Pipeline, as used in the examples above, is not connected to the pipeline that should be stopped (only the enclosing ForEach-Object (%) block is), so the question is: How can the exception be thrown in the context of the target pipeline?
Try these filters; they'll force the pipeline to stop after the first object - or the first n elements - and store it (or them) in a variable. You need to pass the name of the variable; if you don't, the object(s) are pushed out but cannot be assigned to a variable.
filter FirstObject ([string]$vName = '') {
if ($vName) {sv $vName $_ -s 1} else {$_}
break
}
filter FirstElements ([int]$max = 2, [string]$vName = '') {
if ($max -le 0) {break} else {$_arr += ,$_}
if (!--$max) {
if ($vName) {sv $vName $_arr -s 1} else {$_arr}
break
}
}
# can't assign to a variable directly
$myLog = get-eventLog security | ... | firstObject
# pass the varName
get-eventLog security | ... | firstObject myLog
$myLog
# can't assign to a variable directly
$myLogs = get-eventLog security | ... | firstElements 3
# pass the number of elements and the varName
get-eventLog security | ... | firstElements 3 myLogs
$myLogs
####################################
get-eventLog security | % {
if ($_.timegenerated -lt (date 11.09.08) -and`
$_.timegenerated -gt (date 11.01.08)) {$log1 = $_; break}
}
#
$log1
Another option would be to use the -file parameter on a switch statement. Using -file will read the file one line at a time, and you can use break to exit immediately without reading the rest of the file.
switch -file $someFile {
# Parse current line for later matches.
{ $script:line = [DateTime]$_ } { }
# If less than min date, keep looking.
{ $line -lt $minDate } { Write-Host "skipping: $line"; continue }
# If greater than max date, stop checking.
{ $line -gt $maxDate } { Write-Host "stopping: $line"; break }
# Otherwise, date is between min and max.
default { Write-Host "match: $line" }
}
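A small way to try this out, assuming a file with one date per line (the path and dates are placeholders):
'2008-10-30', '2008-11-05', '2008-11-20' | Set-Content C:\temp\dates.txt
$minDate = [DateTime]'2008-11-01'
$maxDate = [DateTime]'2008-11-09'
switch -file C:\temp\dates.txt {
    # Parse current line for later matches.
    { $script:line = [DateTime]$_ } { }
    { $line -lt $minDate } { Write-Host "skipping: $line"; continue }
    { $line -gt $maxDate } { Write-Host "stopping: $line"; break }
    default { Write-Host "match: $line" }
}
# prints one "skipping" and one "match", then "stopping" ends the switch without reading further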