I have a csv file in the form:
Address,L0,L1,L2,L3,L4
01,Species,,,,
01.01,,Mammals,,,
01.01.01,,,Threatened,,
...
I want to use it to create a matching directory structure. I'm new to scripting and PowerShell, and I'm not sure whether I'm on totally the wrong track here. Should I use a separate array to store each level's Name/Address pairs and then use those arrays as a lookup table to build the path? If so, I'm stuck on how to set up the if/then logic based on a row's Address. This is as far as I've got, so suggestions on general strategy, or links to similar kinds of problem, would be really welcome:
$array = @()
$folders = Import-Csv "E:\path\to\file.csv"
$folders | foreach {
$row = New-Object PSObject -Property @{
Address = $_.Address;
Level = ([regex]::Matches($_.Address, "\." )).count;
L0 = $_.L0
L1 = $_.L1
L2 = $_.L2
L3 = $_.L3
}
$array += $row
}
#top level directories
$0 = $array | ?{$_.Level -eq 0} |
Select-Object @{n="Address";e={$_.Address}},@{n="Name";e={$_.L0}}
#2nd level directories
$1 = $array | ?{$_.Level -eq 1} |
Select-Object @{n="Number";e={$_.Address.split(".")[-1]}},@{n="Name";e={$_.L1}}
Not tested, but I think this might do what you want:
$root = 'E:\SomeDirName'
Switch -Regex (Get-Content "E:\path\to\file.csv")
{
'^\d+,(\w+),,,,$' { $L1,$L2,$L3,$L4 = $null; $L0 = $matches[1]; mkdir "$root\$L0" }
'^\d+\.\d+,,(\w+),,,$' { $L1 = $matches[1]; mkdir "$root\$L0\$L1" }
'^\d+(?:\.\d+){2},,,(\w+),,$' { $L2 = $matches[1]; mkdir "$root\$L0\$L1\$L2" }
'^\d+(?:\.\d+){3},,,,(\w+),$' { $L3 = $matches[1]; mkdir "$root\$L0\$L1\$L2\$L3" }
'^\d+(?:\.\d+){4},,,,,(\w+)$' { $L4 = $matches[1]; mkdir "$root\$L0\$L1\$L2\$L3\$L4" }
}
To solve that kind of problem, a programming concept called recursion is often used.
In short, a recursive function is a function that calls itself.
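For example, a tiny sketch of the idea (the function name here is just for illustration):
# A function that calls itself until it reaches zero:
function Show-Countdown($n) {
    if ($n -gt 0) {
        $n
        Show-Countdown ($n - 1)   # the recursive call
    }
}
Show-Countdown 3   # outputs 3, 2, 1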
I successfully tested the following code with your CSV input:
$csvPath = 'C:\Temp\test.csv'
$folderRoot = 'C:\Temp'
$csv = Import-Csv $csvPath -Delimiter ',' | Sort-Object -Property Address
# Recursive function
function Recurse-Folder( $folderAddress, $basePath )
{
# Getting the list of current folder subfolders
$childFolders = $null
if( $folderAddress -like '' )
{
$childFolders = $csv | Where-Object { $_.Address -like '??' }
}
else
{
$childFolders = $csv | Where-Object { $_.Address -like $( $folderAddress + '.??' ) }
}
# For each child folder
foreach( $childFolder in $childFolders )
{
# Get the folder name
$dotCount = $childFolder.Address.Split('.').Count - 1
$childFolderName = $childFolder.$('L'+$dotCount)
if( $childFolderName -ne '')
{
$childFolderPath = $basePath + '\' + $childFolderName
# Creating child folder and calling recursive function for it
New-Item -Path $childFolderPath -ItemType Directory
Recurse-Folder $childFolder.Address $childFolderPath
}
}
}
Recurse-Folder '' $folderRoot
How do I alter the script so that it checks the file hash only when the file name matches between the two folders (Pre, Post)?
$result = [System.Collections.Generic.List[object]]::new()
$sb = {
process {
if($_.Name -eq 'Thumbs.db') { return }
[PSCustomObject]@{
h = (Get-FileHash $_.FullName -Algorithm SHA1).Hash
n = $_.Name
s = $_.Length
fn = $_.fullname
}
}
}
$refFiles = Get-ChildItem 'C:\Users\HP\hello\pre' -Recurse -File | & $sb
$diffFiles = Get-ChildItem 'C:\Users\HP\hello\post' -Recurse -File | & $sb
foreach($file in $diffFiles) {
# this file exists on both folders, skip it
if($file.h -in $refFiles.h) { continue }
# this file exists on reference folder but has changed
if($file.n -in $refFiles.n) {
$file.PSObject.Properties.Add(
[psnoteproperty]::new('Status', 'Changed in Ref')
)
$result.Add($file)
continue
}
# this file does not exist on reference folder
# based on previous conditions
$file.PSObject.Properties.Add(
[psnoteproperty]::new('Status', 'Unique in Diff')
)
$result.Add($file)
}
foreach($file in $refFiles) {
# this file is unique in reference folder, rest of the files
# not meeting this condition can be ignored since we're
# interested only in files on reference folder that are unique
if($file.h -notin $diffFiles.h) {
$file.PSObject.Properties.Add(
[psnoteproperty]::new('Status', 'Unique in Ref')
)
$result.Add($file)
}
}
$result | Format-Table
This code produces output for every hash that differs from the reference folder, regardless of file name. Thank you.
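As an aside, the $sb scriptblock above works as a pipeline filter because of its process block, which runs once per piped-in item; a minimal standalone sketch of the same pattern:
$double = { process { $_ * 2 } }
1..3 | & $double   # outputs 2, 4, 6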
I've been working on a little side project of listing files compressed in nested zip files.
I've cooked up a script that does just that, but only if the depth of zip files is known.
In the example below, the zip file has additional zips in it, and then another inside one of those.
Add-Type -AssemblyName System.IO.Compression.Filesystem
$path = "PATH"
$CSV_Path = "CSV_PATH"
$zipFile = Get-ChildItem $path -recurse -Filter "*.zip"
$rootArchive = [System.IO.Compression.zipfile]::OpenRead($zipFile.fullname)
$rootArchive.Entries | Select @{l = 'Source Zip'; e = {} }, @{l = "FullName"; e = { $_.FullName.Substring(0, $rootArchive.Fullname.Lastindexof('\')) } }, Name | Export-csv $CSV_Path -notypeinformation
$archivesLevel2 = $rootArchive.Entries | Where { $_.Name -like "*.zip" }
foreach ($archive in $archivesLevel2)
{
(New-object System.IO.Compression.ZipArchive ($archive.Open())).Entries | Select @{l = 'Source Zip'; e = { $archive.name } }, @{l = "FullName"; e = { $archive.FullName.Substring(0, $_.Fullname.Lastindexof('\')) } }, Name | Export-Csv $CSV_Path -NoTypeInformation -append;
New-object System.IO.Compression.ZipArchive($archive.Open()) -OutVariable +lastArchiveLevel2
}
$archivesLevel3 = $lastArchiveLevel2.entries | Where { $_.Name -like "*.zip" }
foreach ($archive in $archivesLevel3)
{
(New-Object System.IO.Compression.ZipArchive ($archive.Open())).Entries | Select @{l = 'Source Zip'; e = { $archive.name } }, @{l = "FullName"; e = { $archive.FullName.Substring(0, $_.Fullname.Lastindexof('\')) } }, Name | Export-Csv $CSV_Path -NoTypeInformation -append
}
What I ask of you is to help me modify this to accommodate an unknown depth of inner zip files. Is that even possible?
Here's an example of how to do it using a Queue object, which allows you to go through all depths of your zip file recursively in one pass.
As requested, here are some comments to explain what is going on.
Add-Type -AssemblyName System.IO.Compression.Filesystem
$path = "PATH"
$CSV_Path = "CSV_PATH"
$Queue = [System.Collections.Queue]::New()
$zipFiles = Get-ChildItem $path -recurse -Filter "*.zip"
# All records will be stored here
$Output = [System.Collections.Generic.List[PSObject]]::new()
# Main logic. Used when looking at the root zip and any zip entries.
# ScriptBlock is used to prevent code duplication.
$ProcessEntries = {
Param($Entries, $ParentName)
$Entries | % {
# Put any zip entries in the queue for future processing
if ([System.IO.Path]::GetExtension($_.FullName) -eq '.zip') { $Queue.Enqueue($_) }
# Add a Source Zip property holding the parent zip's name, since we want this information in the csv export and it is not available otherwise.
$_ | Add-Member -MemberType NoteProperty -Name 'Source Zip' -Value $ParentName
# Every entry, zip or not, needs to be part of the output
$output.Add($_)
}
}
# Your initial Get-ChildItem to find zip files implies there could be multiple root zip files, so a loop is required.
Foreach ($zip in $zipFiles) {
$archive = [System.IO.Compression.zipfile]::OpenRead($zip.fullname)
# The $ProcessEntries scriptblock is invoked to fill the Queue and the output.
. $ProcessEntries $archive.Entries $zip.Name
# Should the Zip file have no zip entries, this loop will never be entered.
# Otherwise, the loop will continue as long as zip entries are detected while processing any child zip.
while ($Queue.Count -gt 0) {
# Removing item from the queue to avoid reprocessing it again.
$Item = $Queue.Dequeue()
$archive = New-object System.IO.Compression.ZipArchive ($Item.open())
# We call the main scriptblock again to fill the queue and the output.
. $ProcessEntries $archive.Entries $Item.Name
}
}
$Output | Select 'Source Zip', FullName, Name | Export-Csv $CSV_Path -NoTypeInformation
References
Queue
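For reference, here is the generic queue-driven traversal pattern reduced to its bones (a sketch, independent of the zip specifics above):
$queue = [System.Collections.Queue]::new()
$queue.Enqueue('root')              # seed the queue with the starting item
while ($queue.Count -gt 0) {
    $current = $queue.Dequeue()     # take the next item off the queue
    # ...process $current here and Enqueue() any children it exposes...
}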
Here you have a little example of what recursion would look like: basically, you loop over the .Entries property of the ZipFile and check the extension of each item; if it is .zip, you pass that entry back to your function.
EDIT: Un-deleting this answer mainly to show how this could be approached using a recursive function; my previous answer was inaccurate. I was using [ZipFile]::OpenRead(..) to read the nested .zip files, which seemed to work correctly on Linux (.NET Core); however, it clearly does not work when using Windows PowerShell. The correct approach would be to use [ZipArchive]::new($nestedZip.Open()), as Sage Pourpre's helpful answer shows.
using namespace System.IO
using namespace System.IO.Compression
function Get-ZipFile {
[cmdletbinding()]
param(
[parameter(ValueFromPipeline)]
[object]$Path,
[parameter(DontShow)]
[int]$Nesting = -1
)
begin { $Nesting++ }
process {
try
{
$zip = if(-not $Nesting) {
[ZipFile]::OpenRead($Path)
}
else {
[ZipArchive]::new($Path.Open())
}
foreach($entry in $zip.Entries) {
[pscustomobject]@{
Nesting = $Nesting
Parent = $Path.Name
Contents = $entry.FullName
}
if([Path]::GetExtension($entry.FullName) -eq '.zip') {
Get-ZipFile -Path $entry -Nesting $Nesting
}
}
}
catch
{
$PSCmdlet.WriteError($_)
}
finally
{
if($null -ne $zip) {
$zip.Dispose()
}
}
}
}
Get-ChildItem *.zip | Get-ZipFile
In my CSV file I have a "SharePoint Site" column and a few other columns. I'm trying to split the ID out of the "SharePoint Site" column and put it into a new column called "SharePoint ID", but I'm not sure how to do it, so I'd really appreciate any help or suggestions.
$downloadFile = Import-Csv "C:\AuditLogSearch\New folder\Modified-Audit-Log-Records.csv"
(($downloadFile -split "/") -split "_") | Select-Object -Index 5
CSV file
SharePoint Site
Include:[https://companyname-my.sharepoint.com/personal/elksn7_nam_corp_kl_com]
Include:[https://companyname-my.sharepoint.com/personal/tzksn_nam_corp_kl_com]
Include:[https://companyname.sharepoint.com/sites/msteams_c578f2/Shared%20Documents/Forms/AllItems.aspx?id=%2Fsites%2Fmsteams%5Fc578f2%2FShared%20Documents%2FBittner%2DWilfong%20%2D%20Litigation%20Hold%2FWork%20History&viewid=b3e993a1%2De0dc%2D4d33%2D8220%2D5dd778853184]
Include:[https://companyname.sharepoint.com/sites/msteams_c578f2/Shared%20Documents/Forms/AllItems.aspx?id=%2Fsites%2Fmsteams%5Fc578f2%2FShared%20Documents%2FBittner%2DWilfong%20%2D%20Litigation%20Hold%2FWork%20History&viewid=b3e993a1%2De0dc%2D4d33%2D8220%2D5dd778853184]
Include:[All]
After splitting, the IDs should appear under a new column called "SharePoint ID":
SharePoint ID
elksn
tzksn
msteams_c578f2
msteams_c578f2
All
Try this:
# Import csv into an array
$Sites = (Import-Csv C:\temp\Modified-Audit-Log-Records.csv).'SharePoint Site'
# Create Export variable
$Export = @()
# ForEach loop that goes through the SharePoint sites one at a time
ForEach($Site in $Sites){
# Clean up the input to leave only the hyperlink
$Site = $Site.replace('Include:[','')
$Site = $Site.replace(']','')
# Split the hyperlink on '/' and take the fifth element (indexing is zero-based, so [4] is the fifth element)
$SiteID = $Site.split('/')[4]
# The 'SharePoint Site' Include:[All] entry will be empty after the split, because it contains no slashes.
# This If statement detects whether $Site is 'All' and sets $SiteID to that.
if($Site -eq 'All'){
$SiteID = $Site
}
# Create variable to export Site ID
$SiteExport = @()
$SiteExport = [pscustomobject]@{
'SharePoint ID' = $SiteID
}
# Add each SiteExport to the Export array
$Export += $SiteExport
}
# Write out the export
$Export
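To see what the split produces, you can test it on one of the cleaned-up URLs from the question:
'https://companyname-my.sharepoint.com/personal/elksn7_nam_corp_kl_com'.Split('/')[4]
# -> elksn7_nam_corp_kl_com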
A concise solution that appends a Sharepoint ID column to the existing columns by way of a calculated property:
Import-Csv 'C:\AuditLogSearch\New folder\Modified-Audit-Log-Records.csv' |
Select-Object *, @{
Name = 'SharePoint ID'
Expression = {
$tokens = $_.'SharePoint Site' -split '[][/]'
if ($tokens.Count -eq 3) { $tokens[1] } # matches 'Include:[All]'
else { $tokens[5] -replace '_nam_corp_kl_com$' }
}
}
Note:
To see all resulting column values, pipe the above to Format-List.
To re-export the results to a CSV file, pipe to Export-Csv
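For example, capturing the result once and then viewing and exporting it (the output path is an assumption):
$withIds = Import-Csv 'C:\AuditLogSearch\New folder\Modified-Audit-Log-Records.csv' |
    Select-Object *, @{
        Name       = 'SharePoint ID'
        Expression = {
            $tokens = $_.'SharePoint Site' -split '[][/]'
            if ($tokens.Count -eq 3) { $tokens[1] }
            else { $tokens[5] -replace '_nam_corp_kl_com$' }
        }
    }
$withIds | Format-List
$withIds | Export-Csv 'C:\AuditLogSearch\New folder\Audit-With-IDs.csv' -NoTypeInformation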
You have 3 distinct patterns you are trying to extract data from. I believe regex would be an appropriate tool.
If you want the new CSV to have just the single ID column:
$file = "C:\AuditLogSearch\New folder\Modified-Audit-Log-Records.csv"
$newfile = "C:\AuditLogSearch\New folder\Sharepoint-IDs.csv"   # output path for the new csv (example)
$IdList = switch -Regex -File ($file){
'Include:.+(?=/(\w+?)_)(?<=personal)' {$matches.1}
'Include:(?=\[(\w+)\])' {$matches.1}
'Include:.+(?=/(\w+?)/)(?<=sites)' {$matches.1}
}
$IdList |
ConvertFrom-Csv -Header "Sharepoint ID" |
Export-Csv -Path $newfile -NoTypeInformation
If you want to add a column to your existing CSV:
$file = "C:\AuditLogSearch\New folder\Modified-Audit-Log-Records.csv"
$newfile = "C:\AuditLogSearch\New folder\Modified-Audit-Log-Records-With-ID.csv"   # output path (example)
$properties = '*', @{
Name = 'Sharepoint ID'
Expression = {
switch -Regex ($_.'sharepoint Site'){
'Include:.+(?=/(\w+?)_)(?<=personal)' {$matches.1}
'Include:(?=\[(\w+)\])' {$matches.1}
'Include:.+(?=/(\w+?)/)(?<=sites)' {$matches.1}
}
}
}
Import-Csv -Path $file |
Select-Object $properties |
Export-Csv -Path $newfile -NoTypeInformation
Regex details
.+ Match any amount of any character
(?=...) Positive look ahead
(...) Capture group
\w+ Match one or more word characters
? Lazy quantifier
(?<=...) Positive look behind
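A quick way to test one of those patterns interactively, using a sample row from the question:
'Include:[https://companyname-my.sharepoint.com/personal/elksn7_nam_corp_kl_com]' -match 'Include:.+(?=/(\w+?)_)(?<=personal)'
$matches.1   # elksn7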
This would require more testing to see if it works well, but with the input we have, it works. The main concept is to use System.Uri to parse the strings. From what I'm seeing, the segment you are looking for is always the third one ([2]); depending on the previous segment, you either split on _, trim the trailing /, or leave the string as-is if IsAbsoluteUri is $false.
$csv = Import-Csv path/to/test.csv
$result = foreach($line in $csv)
{
$uri = [uri]($line.'SharePoint Site' -replace '^Include:\[|]$')
$id = switch($uri)
{
{-not $_.IsAbsoluteUri} {
$_
break
}
{ $_.Segments[1] -eq 'personal/' } {
$_.Segments[2].Split('_')[0]
break
}
{ $_.Segments[1] -eq 'sites/' } {
$_.Segments[2].TrimEnd('/')
}
}
[pscustomobject]@{
'SharePoint Site' = $line.'SharePoint Site'
'SharePoint ID' = $id
}
}
$result | Format-List
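To see why segment [2] is the one of interest, you can inspect Segments on one of the sample URLs:
$u = [uri]'https://companyname-my.sharepoint.com/personal/elksn7_nam_corp_kl_com'
$u.Segments   # '/', 'personal/', 'elksn7_nam_corp_kl_com'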
The idea is to import a CSV and then, if the "infohostname" value is null or whitespace, delete the entire line.
Function Last_NAS_Parse {
$Import_IP = Import-Csv -Path "$destination_RAW_NAS\audit_nas_8_$((Get-Date).ToString('yyyy-MM-dd')).txt" -Header @("date","infohostname","Version","SMTP","Value_1","Value_2","Value_3","Value_4","Value_5","Value_6","Value_7","Value_8","Value_9")
$Import_IP | ForEach-Object {
if ( [string]::IsNullOrWhiteSpace($_.infohostname)
}
But I don't know how I can delete the line after it is selected. Thanks.
IMO you don't need a function, just a Where-Object:
$Header = ("date","infohostname","Version","SMTP","Value_1","Value_2","Value_3","Value_4","Value_5","Value_6","Value_7","Value_8","Value_9")
$Import_IP = Import-Csv -Path "$destination_RAW_NAS\audit_nas_8_$((Get-Date).ToString('yyyy-MM-dd')).txt" -Header $Header |
Where-Object {![string]::IsNullOrWhiteSpace($_.infohostname)}
But of course you could wrap that in a function
(but a function without passed parameters and returned values isn't a real function)
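Something like this, if you do want a reusable wrapper (the function and parameter names are just a sketch):
function Remove-EmptyInfoHostname {
    param([string]$Path, [string[]]$Header)
    # Keep only the rows whose infohostname column has content
    Import-Csv -Path $Path -Header $Header |
        Where-Object { ![string]::IsNullOrWhiteSpace($_.infohostname) }
}
# e.g.: Remove-EmptyInfoHostname -Path "$destination_RAW_NAS\audit_nas_8_$((Get-Date).ToString('yyyy-MM-dd')).txt" -Header $Header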
Loop over the results and store only the objects where that boolean evaluation is true, then create a new file. To delete rows from the existing file in place, I believe you'd have to convert it to XLS/X and access it as a COM object.
$results = @()
Function Last_NAS_Parse {
$Import_IP = Import-Csv -Path C:\path\to\file.csv
$Import_IP | ForEach-Object {
if ( [string]::IsNullOrWhiteSpace($_.infohostname))
{
Out-Null
}
else
{
$results += $_
}
}
}
Last_NAS_Parse
$results | Export-CSV "C:\export\path\of\newfile.csv"
I have two csv files. They both have SamAccountName in common. User records may or may not have a match found for every record between both files (THIS IS VERY IMPORTANT TO NOTE).
I am trying to basically just merge all columns (and their values) into one file, based on the SamAccountNames found in the first file...
If the SamAccountName is not found in the 2nd file, it should add all null values for that user record in the merged file (since the record was found in the first file).
If the SamAccountName is found in the 2nd file, but not in the first, it should ignore merging that record.
Number of columns in each file may vary (5, 10, 2, so forth...).
Function MergeTwoCsvFiles
{
Param ([String]$baseFile, [String]$fileToBeMerged, [String]$columnTitleLineInFileToBeMerged)
$baseFileCsvContents = Import-Csv $baseFile
$fileToBeMergedCsvContents = Import-Csv $fileToBeMerged
$baseFileContents = Get-Content $baseFile
$baseFileContents[0] += "," + $columnTitleLineInFileToBeMerged
$baseFileCsvContents | ForEach-Object {
$matchFound = $False
$baseSameAccountName = $_.SamAccountName
[String]$mergedLineInFile = $_
[String]$lineMatchFound = $fileToBeMergedCsvContents | Where-Object {$_.SamAccountName -eq $baseSameAccountName}
Write-Host '$mergedLineInFile =' $mergedLineInFile
Write-Host '$lineMatchFound =' $lineMatchFound
Exit
}
}
The problem is that the record is being written as a hash table instead of a string-like line (if you were to view it as .txt). So I'm not really sure how to do this...
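That is just how a PSObject stringifies (as @{...}). If you ever need a single object back as a CSV-style line, ConvertTo-Csv can produce one; a minimal sketch:
$obj = [pscustomobject]@{ SamAccountName = 'JDoe'; sn = 'John' }
($obj | ConvertTo-Csv -NoTypeInformation)[1]   # "JDoe","John" (index 0 is the header line)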
Adding results csv example files...
First CSV File
"SamAccountName","sn","GivenName"
"PBrain","Pinky","Brain"
"JSteward","John","Steward"
"JDoe","John","Doe"
"SDoo","Scooby","Doo"
Second CSV File
"SamAccountName","employeeNumber","userAccountControl","mail"
"KYasunori","678213","546","KYasunori#mystuff.com"
"JSteward","43518790","512","JSteward#mystuff.com"
"JKibogabi","24356","546","JKibogabi#mystuff.com"
"JDoe","902187u4","1114624","JDoe#mystuff.com"
"CStrife","54627","512","CStrife#mystuff.com"
Expected Merged CSV File
"SamAccountName","sn","GivenName","employeeNumber","userAccountControl","mail"
"PBrain","Pinky","Brain","","",""
"JSteward","John","Steward","43518790","512","JSteward#mystuff.com"
"JDoe","John","Doe","902187u4","1114624","JDoe#mystuff.com"
"SDoo","Scooby","Doo","","",""
Note: This will be part of a loop process in merging multiple files, so I would like to avoid hardcoding the title names (with $_.SamAccountName as an exception)
Trying suggestion from "restless 1987" (Not Working)
$baseFileCsvContents = Import-Csv 'D:\Scripts\Powershell\Tests\base.csv'
$fileToBeMergedCsvContents = Import-Csv 'D:\Scripts\Powershell\Tests\lookup.csv'
$resultsFile = 'D:\Scripts\Powershell\Tests\MergedResults.csv'
$resultsFileContents = @()
$baseFileContents = Get-Content 'D:\Scripts\Powershell\Tests\base.csv'
$recordsMatched = compare-object $baseFileCsvContents $fileToBeMergedCsvContents -Property SamAccountName
switch ($recordsMatched)
{
'<=' {}
'=>' {}
'==' {$resultsFileContents += $_}
}
$resultsFileCsv = $resultsFileContents | ConvertTo-Csv
$resultsFileCsv | Export-Csv $resultsFile -NoTypeInformation -Force
Output gives a blank file :(
The code below outputs the desired results based on the inputs you provided.
function CombineSkip1($s1, $s2){
$s3 = $s1 -split ','
$s2 -split ',' | select -Skip 1 | % {$s3 += $_}
$s4 = $s3 -join ', '
$s4
}
Write-Output "------Combine files------"
# content
$c1 = Get-Content D:\junk\test1.csv
$c2 = Get-Content D:\junk\test2.csv
# users in both files, could be a better way to do this
$t1 = $c1 | ConvertFrom-Csv
$t2 = $c2 | ConvertFrom-Csv
$users = $t1 | Select SamAccountName
# generate final, combined output
$combined = @()
$combined += CombineSkip1 $c1[0] $c2[0]
$c2PropCount = ($c2[0] -split ',').Count - 1
$filler = (', ""' * $c2PropCount)
for ($i = 1; $i -lt $c1.Count; $i++){
$user = $c1[$i].Split(',')[0]
$u2 = $c2 | where {([string]$_).StartsWith($user)}
if ($u2)
{
$combined += CombineSkip1 $c1[$i] $u2
}
else
{
$combined += ($c1[$i] + $filler)
}
}
# write to output and file
Write-Output $combined
$combined | Set-Content -Path D:\junk\test3.csv -Force
You can use compare-object for that purpose. Use -property samaccountname with it. For example:
$a = 1,2,3,4,5
$b = 4,5,6,7
$side = Compare-Object $a $b -IncludeEqual
switch ($side.SideIndicator)
{
'<=' { <# only in $a #> }
'=>' { <# only in $b #> }
'==' { <# in both (requires -IncludeEqual) #> }
}
When you have all the data in your output variable, throw it at ConvertTo-Csv and write it to a file.
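For example (the variable and output path here are placeholders):
$output | ConvertTo-Csv -NoTypeInformation | Set-Content 'C:\temp\merged.csv'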
After an entire day, I finally came up with something that works...
...
Edit
Reason: breaking the inner loop and removing the found element from the array will be much faster when merging files with thousands of records...
Function GetTitlesFromFileToBeMerged
{
Param ($csvFile)
[String]$fileToBeMergedTitles = Get-Content $csvFile -TotalCount 1
[String[]]$fileToBeMergedTitles = ($fileToBeMergedTitles -replace "`",`"", "|").Trim()
[String[]]$fileToBeMergedTitles = ($fileToBeMergedTitles -replace "`"", "").Trim()
[String[]]$fileToBeMergedTitles = ($fileToBeMergedTitles -replace "SamAccountName", "").Trim()
[String[]]$listOfColumnTitles = $fileToBeMergedTitles.Split('|',[System.StringSplitOptions]::RemoveEmptyEntries)
Write-Output $listOfColumnTitles
}
$baseFile = 'D:\Scripts\Powershell\Tests\base.csv'
$fileToBeMerged = 'D:\Scripts\Powershell\Tests\lookup.csv'
$baseFileCsvContents = Import-Csv $baseFile
$baseFileContents = Get-Content $baseFile
$fileToBeMergedCsvContents = Import-Csv $fileToBeMerged
[System.Collections.Generic.List[System.Object]]$fileToBeMergedContents = Get-Content $fileToBeMerged
$resultsFile = 'D:\Scripts\Powershell\Tests\MergedResults.csv'
$resultsFileContents = @()
[String]$baseFileTitles = $baseFileContents[0]
[String]$fileToBeMergedTitles = (Get-Content $fileToBeMerged -TotalCount 1) -replace "`"SamAccountName`",", ""
$resultsFileContents += $baseFileTitles + "," + $fileToBeMergedTitles
[String]$lineMatchNotFound = ""
$arrayFileToBeMergedTitles = GetTitlesFromFileToBeMerged $fileToBeMerged
For ($valueNum = 0; $valueNum -lt $arrayFileToBeMergedTitles.Length; $valueNum++)
{
$lineMatchNotFound += ",`"`""
}
$baseLineCounter = 1
$baseFileCsvContents | ForEach-Object {
$baseSameAccountName = $_.SamAccountName
[String]$baseLineInFile = $baseFileContents[$baseLineCounter]
$lineMatchCounter = 0   # zero-based, so it lines up with RemoveAt()
$lineMatchFound = ""
:inner ForEach ($line in $fileToBeMergedContents) {
If ($line -like "*$baseSameAccountName*") {
[String]$lineMatchFound = "," + ($line -replace '^"[^"]*",', "")
$fileToBeMergedContents.RemoveAt($lineMatchCounter)
break inner
}; $lineMatchCounter++
}
If (!($lineMatchFound))
{
[String]$lineMatchFound = $lineMatchNotFound
}
$mergedLine = $baseLineInFile + $lineMatchFound
$resultsFileContents += $mergedLine
$baseLineCounter++
}
ForEach ($line in $resultsFileContents)
{
Write-Host $line
}
$resultsFileContents | Set-Content $resultsFile -Force
I'm very sure this is not the best approach and there is something better that would handle this much faster. If anyone has any ideas, I'm open to them. Thanks.