Import-Csv in PowerShell with duplicate column headers

I need to work with CSV files in PowerShell that have a duplicate column header. The reasons why they have a duplicate column are beyond me. Such is life.
I want to use Import-Csv so that I can easily deal with the data, but since the duplicate column exists I get this error:
Import-Csv : The member "PROC STAT" is already present.
At C:\Users\MyName\Documents\SomeFolder\testScript1.ps1:10 char:9
+ $csv2 = Import-Csv $files[0].FullName
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Import-Csv], ExtendedTypeSystemException
+ FullyQualifiedErrorId : AlreadyPresentPSMemberInfoInternalCollectionAdd,Microsoft.PowerShell.Commands.ImportCsvCommand
I could manually fix the problem by going into every CSV file and deleting the duplicate column, but this is not an option. There are hundreds of them, and the script needs to be run periodically. Ideally I am looking for a way to programmatically remove that column (Import-Csv won't work) or programmatically change the name of the column (so that I can then Import-Csv and delete it). Any suggestions?
My code to loop through all the files:
$files = Get-ChildItem "C:\Users\MyName\Documents\SomeFolder\Data" -Filter *.csv
foreach ($file in $files) {
    $csv = Import-Csv $file.FullName
}

You can specify custom header names with the Header parameter:
Import-Csv .\file.csv -Header header1,header2,header3
This will treat the original header line as a normal row, so skip the first output object with Select-Object:
Import-Csv .\file.csv -Header header1,header2,header3 | Select-Object -Skip 1
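If the end goal is simply to drop the duplicate column after importing, a minimal sketch (assuming, for illustration, that the duplicate is the third of three columns and reusing the header1..header3 names from above) could look like this:
# Import with unique, made-up header names, skip the original header row,
# then keep only the columns you want (dropping the duplicate).
Import-Csv .\file.csv -Header header1, header2, header3 |
    Select-Object -Skip 1 |
    Select-Object header1, header2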

I ran into this a few times as well and wrote this as a workaround. It works with any CSV, even if multiple or all of the column names are the same.
function Import-DuplicateHeaderCSV {
<#
.SYNOPSIS
Workaround function for the PowerShell error: "Import-Csv : The member "column_name" is already present."
This error is returned when attempting to use the Import-Csv cmdlet on a CSV which has duplicate column names.
.DESCRIPTION
The headers are looped through, read in, and parsed into an array.
Duplicate headers are stored in a hash table, e.g. @{columnName = numOccurrences}.
Multiple occurrences of a header are supported by incrementing the value in the hash table for each occurrence.
The duplicate header is then inserted into the array as columnName_COPY<numOccurrences>.
Import-Csv is then used normally with the new column header array as the -Header parameter.
.PARAMETER Path
The full file path
e.g. "C:\users\johndoe\desktop\myfile.csv"
#>
    param(
        [Parameter(Mandatory=$true)] [string] $Path
    )
    $headerRow  = Get-Content $Path | ConvertFrom-String -Delimiter "," | Select-Object -First 1
    $objectSize = ($headerRow | Get-Member -MemberType NoteProperty | Measure-Object).Count
    $headers    = @()
    $duplicates = @{}
    for ($i = 1; $i -le $objectSize; $i++) {
        if ($headers -notcontains $headerRow."P$i") {
            $headers += $headerRow."P$i"
        } else {
            if ($duplicates.$($headerRow."P$i") -gt 0) {
                $duplicates.$($headerRow."P$i")++
            } else {
                $duplicates.$($headerRow."P$i") = 1
            }
            $header = $($headerRow."P$i")
            $header = $header + "_COPY"
            $header = $header + ($duplicates.$($headerRow."P$i"))
            $headers += $header
        }
    }
    $headerString = ""
    foreach ($item in $headers) { $headerString += "'$item'," }
    $headerString = $headerString.Substring(0, $headerString.Length - 1)
    $data = Invoke-Expression ("Import-Csv '$Path' " + "-Header " + $headerString)
    return $data
}
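A hypothetical call (the path is the one from the help example above; duplicate columns come back renamed):
# Duplicates are renamed to columnName_COPY1, columnName_COPY2, and so on.
$data = Import-DuplicateHeaderCSV -Path "C:\users\johndoe\desktop\myfile.csv"
$data | Get-Member -MemberType NoteProperty   # inspect the renamed columns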

You can load the data with Get-Content and convert it like this:
Get-Content "C:\temp\test.csv" | ConvertFrom-String -Delimiter "," | select -Skip 1
short version:
gc "C:\temp\test.csv" | cfs -D "," | select -Skip 1
If you don't want the columns renamed automatically, you can name them manually like this:
gc "C:\temp\test.csv" | cfs -D "," -PropertyNames head1, head2, head3 | select -Skip 1

Here's an example of how to do it without needing to hard-code the column header names in the code (i.e., dynamically generate a generic header based on the number of columns in the CSV file):
$csvFile = "test.csv"
# Count columns in CSV file
$columnCount = (Get-Content $csvFile |
Select-Object -Index 1,2 |
ConvertFrom-Csv |
Get-Member -MemberType NoteProperty |
Measure-Object).Count
# Create list of generic property names (no duplicates)
$propertyNames = 1..$columnCount |
ForEach-Object { "Property{0}" -f $_ }
# Get CSV file content, skip header line, and convert from CSV using generic header
Get-Content $csvFile |
Select-Object -Skip 1 |
ConvertFrom-Csv -Header $propertyNames
One caveat with this solution is that the CSV file must have at least two rows of data (not counting the header line).
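If files with a single data row must be supported as well, one workaround (a sketch, assuming a comma delimiter and no commas embedded inside quoted header fields) is to count the fields of the raw header line instead of sampling data rows:
# Count columns by splitting the header line itself.
$columnCount = ((Get-Content $csvFile -TotalCount 1) -split ',').Count
$propertyNames = 1..$columnCount | ForEach-Object { "Property{0}" -f $_ }

Get-Content $csvFile |
    Select-Object -Skip 1 |
    ConvertFrom-Csv -Header $propertyNames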

Insert Content into specific place in Powershell

I am trying to compare strings from two CSV files. If a string from the 2nd CSV file occurs in the 1st CSV file, the corresponding line in the 1st CSV file should be marked with a label (e.g. "TestLabel") after the semicolon. The strings contain a lot of special characters. By and large, the comparison already works, and I can already add the label.
Since PowerShell is still new to me and this is my first script, one question remains: how can I put my text "TestLabel" into a specific place in an uncomplicated way? Here, for example, into the next empty field between the semicolons?
CSV1 contains:
Testdefinition;Stichwörter;Stichwörter;Stichwörter;Stichwörter;Stichwörter
It is just a normal text (with round brackets).Test: success;ExistingLabel;;;;
This is a second text;;;
Another text;ExistingLabel;;;;
One more text for the testing - success;ExistingLabel;;;;
CSV2 contains:
Testdefinition;Stichwörter;Stichwörter;Stichwörter;Stichwörter;Stichwörter
It is just a normal text (with round brackets).Test: success
One more text for the testing - success
My script so far:
$header='Testdefinition', 'Stichwörter1', 'Stichwörter2', 'Stichwörter3', 'Stichwörter4', 'Stichwörter5'
$exportheader="Testdefinition;Stichwörter;Stichwörter;Stichwörter;Stichwörter;Stichwörter"
$path1='D:\data\.....test.csv'
$path2='D:\data\.....test_failed.csv'
$wfile='temp1.csv'
$wfile2='temp2.csv'
Get-Content $path1 | Select-Object -Skip 1 | Set-Content $wfile -Encoding UTF8
Get-Content $path2 | Select-Object -Skip 1 | Set-Content $wfile2 -Encoding UTF8
$file1=Import-CSV -Path $wfile -Delimiter ";" -Header $header
$file2=Import-CSV -Path $wfile2 -Delimiter ";" -Header $header
$exportfile='test.csv'
#$exportfile=$file1
$file1 | Get-Member
$file2 | Get-Member
$file1 | Format-Table
$file2 | Format-Table
Write-Output ""
Write-Output "Searching for failed results"
Set-Content $exportfile -Value $exportheader
$file1.Testdefinition | ForEach-Object {
    Write-Output "The Testdefinition is: $_ "
    $testSearch = $_
    $testlinecontent = $file2.Testdefinition | Select-String $testSearch
    $testlinenumber = $testlinecontent.LineNumber
    if ("$_" -eq "$testlinecontent")
    {
        Write-Output "Testline found: $testlinecontent in Line $testlinenumber"
        Write-Output "$_ = $testlinecontent"
        $testlineexport = "$_;$testlinenumber;TestLabel"
        Write-Output $testlineexport
        $testlineexport | Add-Content -Path $exportfile
    }
    else
    {
        Write-Output "Testline not found"
        $testlineexport = "$_;$testlinenumber;NULL"
        Write-Output $testlineexport
        $testlineexport | Add-Content -Path $exportfile
    }
    Write-Output ""
}
$exportCsv = Import-Csv $exportfile -Delimiter ";" -Header $header
$exportCsv | Format-Table
Remove-Item -Path $wfile
Remove-Item -Path $wfile2
I hope you can give me a hint. Thanks in advance!
Assuming the files aren't too big, you can use the following approach based on Compare-Object, which is conceptually clear and relatively simple:
# Read the CSV files into their header row and the array of data rows, as strings.
$header, $rows1 = Get-Content $path1
$null, $rows2 = Get-Content $path2
# Initialize the export file by writing its header
Set-Content -Encoding utf8 $exportfile -Value $exportheader
# Compare the data rows by their first ";"-separated field.
# If the fields match, append ";TestLabel" to the LHS data row before
# passing it through, otherwise pass it as-is, and append to the
# export file.
Compare-Object -PassThru $rows1 $rows2 -IncludeEqual -Property { $_.Split(';')[0] } |
    ForEach-Object { if ($_.SideIndicator -eq '==') { $_ + ';TestLabel' } else { $_ } } |
    Add-Content $exportfile
Note:
For brevity I've omitted the code to also add a line number.
As you are already aware, PowerShell doesn't support CSV files whose headers contain duplicate column names, given that the column names become property names on import, and must therefore be unique.
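Regarding the omitted line number: one way to add it (a sketch, not part of the original answer, reusing $rows1, $rows2 and $exportfile from above) is to index the second file's rows by their first field before comparing:
# Map each data row's first ";"-separated field to its (1-based) line number in the second file.
$lineNumbers = @{}
$i = 0
foreach ($row in $rows2) { $i++; $lineNumbers[$row.Split(';')[0]] = $i }

Compare-Object -PassThru $rows1 $rows2 -IncludeEqual -Property { $_.Split(';')[0] } |
    ForEach-Object {
        if ($_.SideIndicator -eq '==') { '{0};{1};TestLabel' -f $_, $lineNumbers[$_.Split(';')[0]] }
        else                           { $_ }
    } |
    Add-Content $exportfile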

Converting at least two text files with different rows into one CSV

I am trying to convert two TXT files into one CSV file using a PowerShell script. When the files have the same structure and the same number of rows, the case looks easy. But in my case the TXT files have different structures.
The pipe sign in both TXT files is not a delimiter; it should be treated as a normal character, as part of the string.
File URL.txt
L5020|http://linktosite.de|URL
L100|http://sitelink.de|URL
L50|http://abcde.de|URL
L511|http://bbcccddeee.de|URL
L300|http://link123456.de|URL
L5450|http://randomlink.de|URL_DE
L5460|http://randomwebsitelink.de|URL_DE
File URL1.txt
L5020|http://linktosite.de|URL|P555
L100|http://sitelink.de|URL|P523
L50|http://abcde.de|URL|P53
L511|http://bbcccddeee.de|URL|P540
CSV which I expect should look like as below and delimiter is ";"
HEADER1;HEADER2
L5020|http://linktosite.de|URL;L5020|http://linktosite.de|URL|P555
L100|http://sitelink.de|URL;L100|http://sitelink.de|URL|P523
L50|http://abcde.de|URL;L50|http://abcde.de|URL|P53
L511|http://bbcccddeee.de|URL;L511|http://bbcccddeee.de|URL|P540
L300|http://link123456.de|URL;
L5450|http://randomlink.de|URL_DE;
L5460|http://randomwebsitelink.de|URL_DE;
I tried something like this:
$URL = "C:\Users\XXX\Desktop\URL.txt"
$URLcontent = Get-Content $URL
$URL1 = "C:\Users\XXX\Desktop\URL1.txt"
$URLcontent1 = Get-Content $URL1
$results = #() # Empty array to store new created rows in
$csv = Import-CSV "C:\Users\XXX\Desktop\map.csv" -Delimiter ';'
foreach ($row in $csv) {
$properties = [ordered]#{
HEADER1 = $URLcontent
HEADER2 = $URLcontent1
}
# insert the new row as an object into the results-array
$results += New-Object psobject -Property $properties
}
# foreach-loop filled the results-array - export it as a CSV-file
$results | Export-Csv "C:\Users\XXXX\Desktop\map_final.csv" -NoTypeInformation
And something like this:
import-csv URL.txt -Header 'HEADER1' | Export-CSV "C:\Users\xxx\Desktop\URL.csv" -Delimiter ';' -NoTypeInformation
import-csv URL1.txt -Header 'HEADER2' | Export-CSV "C:\Users\xxx\Desktop\URL1.csv" -Delimiter ';' -NoTypeInformation
Get-ChildItem "C:\Users\xx\Desktop" -Filter "URL*.csv" | Select-Object -ExpandProperty FullName | Import-Csv | Export-Csv .\combinedcsvs.csv -NoTypeInformation -Append
Without any success...
Based on the updates in your question, if you want to build something yourself, you probably want to do something like this:
$Url1 = @(Get-Content .\URL1.txt)
$i = 0
Get-Content .\URL.txt | Foreach-Object {
    [pscustomobject]@{
        HEADER1 = $_
        HEADER2 = If ($i -lt $URL1.Count) { $URL1[$i++] }
    }
} | Export-Csv .\combinedcsvs.csv -Delimiter ';' -NoTypeInformation -Append
In case you do not want to go through the hassle of reinventing the wheel (with all its pitfalls, including performance tuning), you can use the Join-Object command I mentioned in the comment:
Import-Csv .\URL.txt -Header HEADER1 |
LeftJoin (Import-Csv .\URL1.txt -Header HEADER2) |
Export-Csv .\combinedcsvs.csv -Delimiter ';' -NoTypeInformation -Append
Note 1: I am not sure why you are trying to import anything like map.csv; I don't think that is required.
Note 2: If you still want to go your own way, try to avoid using the increase assignment operator (+=) to build a collection; it is a very expensive operator (a sketch contrasting the two patterns follows the output tables below).
Note 3: It is generally not a good idea to join lines by their line index, as the lists might not be sorted or might contain duplicates; it is better to join them on a specific property, like the Url:
 
Import-Csv .\URL.txt -Delimiter '|' -Header Lid,Url,Type |
LeftJoin (Import-Csv .\URL1.txt -Delimiter '|' -Header Lid2,Url,Type2,Pid) -On Url |
Format-Table # or: Export-Csv .\combinedcsvs.csv -Delimiter ';' -NoTypeInformation
Lid   Url                         Type   Lid2  Type2 Pid
---   ---                         ----   ----  ----- ---
L5020 http://linktosite.de        URL    L5020 URL   P555
L100  http://sitelink.de          URL    L100  URL   P523
L50   http://abcde.de             URL    L50   URL   P53
L511  http://bbcccddeee.de        URL    L511  URL   P540
L300  http://link123456.de        URL
L5450 http://randomlink.de        URL_DE
L5460 http://randomwebsitelink.de URL_DE
Or on all three (Lid, Url and Type) properties:
Import-Csv .\URL.txt -Delimiter '|' -Header Lid,Url,Type |
LeftJoin (Import-Csv .\URL1.txt -Delimiter '|' -Header Lid,Url,Type,Pid) -On Lid,Url,Type |
Format-Table # or: Export-Csv .\combinedcsvs.csv -Delimiter ';' -NoTypeInformation
Lid   Url                         Type   Pid
---   ---                         ----   ---
L5020 http://linktosite.de        URL    P555
L100  http://sitelink.de          URL    P523
L50   http://abcde.de             URL    P53
L511  http://bbcccddeee.de        URL    P540
L300  http://link123456.de        URL
L5450 http://randomlink.de        URL_DE
L5460 http://randomwebsitelink.de URL_DE
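Following up on Note 2 above, a minimal sketch (not taken from the question) that contrasts the expensive += pattern with simply capturing the loop output:
# Slow: += copies the entire array on every iteration.
$results = @()
foreach ($line in Get-Content .\URL.txt) {
    $results += $line
}

# Faster: let PowerShell collect the loop's output into the variable in one pass.
$results = foreach ($line in Get-Content .\URL.txt) {
    $line
}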
If you only want to combine lines where both files contain data, you can do the following:
$f1 = Get-Content file1.txt
$f2 = Get-Content file2.txt
$output = for ($i = 0; $i -lt [math]::Min($f1.count,$f2.count); $i++) {
    $f2[$i],$f1[$i] -join '|'
}
$output | Set-Content newfile.txt
If you want to combine all coinciding lines plus add extra lines from one of the files, you can do the following:
$output = for ($i = 0; $i -lt [math]::Max($f1.count,$f2.count); $i++) {
    if ($f1[$i] -and $f2[$i]) {
        $f2[$i],$f1[$i] -join '|'
    }
    else {
        $f2[$i],$f1[$i] | Where {$_}
    }
}
$output | Set-Content newfile.txt

How to parse csv file, look for trigger and split into new files with powershell

I have a CSV file which is structured like this:
"SA1";"21020180123155514000000000000000002"
"SA2";"21020180123155514000000000000000002";"210"
"SA4";"21020180123155514000000000000000002";"210";"200000001"
"SA5";"21020180123155514000000000000000002";"210";"200000001";"140000001";"ZZ"
"SA1";"21020180123155522000000000000000002"
"SA2";"21020180123155522000000000000000002";"210"
"SA4";"21020180123155522000000000000000002";"210";"200000001"
"SA5";"21020180123155522000000000000000002";"210";"200000001";"140000671";"ZZ"
"SA1";"21020180123155567000000000000000002"
"SA2";"21020180123155567000000000000000002";"210"
"SA4";"21020180123155567000000000000000002";"210";"200000001"
"SA5";"21020180123155567000000000000000002";"210";"200000001";"140000001";"ZZ"
So the value in the second field (separator ';') marks the data which belongs together, and the value 140000001 or 140000671 is the trigger.
So the result should be:
1st file: 140000001.txt
"SA1";"21020180123155514000000000000000002"
"SA2";"21020180123155514000000000000000002";"210"
"SA4";"21020180123155514000000000000000002";"210";"200000001"
"SA5";"21020180123155514000000000000000002";"210";"200000001";"140000001";"ZZ"
"SA1";"21020180123155567000000000000000002"
"SA2";"21020180123155567000000000000000002";"210"
"SA4";"21020180123155567000000000000000002";"210";"200000001"
"SA5";"21020180123155567000000000000000002";"210";"200000001";"140000001";"ZZ"
2nd file: 140000671.txt
"SA1";"21020180123155522000000000000000002"
"SA2";"21020180123155522000000000000000002";"210"
"SA4";"21020180123155522000000000000000002";"210";"200000001"
"SA5";"21020180123155522000000000000000002";"210";"200000001";"140000671";"ZZ"
For now I found a snippet which splits the big file by the second field:
$src = "C:\temp\ORD001.txt"
$dstDir = "C:\temp\files\"
Remove-Item -Path "$dstDir\\*"
$header = Get-Content -Path $src | select -First 1
Get-Content -Path $src | select -Skip 1 | foreach {
$file = "$(($_ -split ";")[1]).txt"
Write-Verbose "Wrting to $file"
$file = $file.Replace('"',"")
if (-not (Test-Path -Path $dstDir\$file))
{
Out-File -FilePath $dstDir\$file -InputObject $header -Encoding ascii
}
$file -replace '"', ""
Out-File -FilePath $dstDir\$file -InputObject $_ -Encoding ascii -Append
}
For the rest I'm in the dark.
Please help.
The Import-Csv cmdlet will work here, in case you don't already know about it. I would use that, as it returns all the rows as separate objects in an array, with the properties being the column values, and you don't have to manually remove the quotes and such. Assuming the second column is a date-time value that should be unique for each group of 4 consecutive rows, this will work:
$src = "C:\temp\ORD001.txt"
$dstDir = "C:\temp\files\"
Remove-Item -Path "$dstDir\*"
$csv = Import-CSV $src -Delimiter ';'
$DateTimeGroups = $csv | Group-Object -Property 'ColumnTwoHeader'
foreach ($group in $DateTimeGroups) {
$filename = $group.Group.'ColumnFiveHeader' | select -Unique
$group.Group | Export-CSV "$dstDir\$filename.txt" -Append -NoTypeInformation
}
However, this will break if two of those "groups of 4 consecutive rows" have the same value for the second column and the fifth column. There isn't a way to fix this unless you are certain that there will always be 4 consecutive rows in each time group. In which case:
$src = "C:\temp\ORD001.txt"
$dstDir = "C:\temp\files\"
Remove-Item -Path "$dstDir\*"
$csv = Import-CSV $src -Delimiter ';'
if ($csv.count % 4 -ne 0) {
Write-Error "CSV does not have a proper number of rows. Attempting to continue will be bad :)"
return
}
for ($i = 0 ; $i -lt $csv.Count ; $i=$i+4) {
$group = $csv[$i..($i+4)]
$group | Export-Csv "$dstDir\$($group[3].'ColumnFiveHeader').txt" -Append -NoTypeInformation
}
Just be sure to replace ColumnTwoHeader and ColumnFiveHeader with the appropriate values.
If performance is not a concern, combining Import-Csv / Export-Csv with Group-Object allows the most concise, direct expression of your intent, using PowerShell's ability to convert CSV to objects and back:
$src = "C:\temp\ORD001.txt" # Input CSV file
$dstDir = "C:\temp\files" # Output directory
# Delete previous output files, if necessary.
Remove-Item -Path "$dstDir\*" -WhatIf
# Import the source CSV into custom objects with properties named for the columns.
# Note: The assumption is that your CSV header line defines columns "Col1", "Col2", ...
Import-Csv $src -Delimiter ';' |
# Group the resulting objects by column 2
Group-Object -Property Col2 |
ForEach-Object { # Process each resulting group.
# Determine the output filename via the group's last row's column 5 value.
$outFile = '{0}\{1}.txt' -f $dstDir, $_.Group[-1].Col5
# Append the group at hand to the target file.
$_.Group | Export-Csv -Append -Encoding Ascii $outFile -Delimiter ';' -NoTypeInformation
}
Note:
The assumption - in line with your sample data - is that it is always the last row in a group of lines sharing the same column-2 value whose column 5 contains the root of the output filename (e.g., 140000001)
Sorry, but I don't have a header column. It's a semicolon-separated TXT file for an interface.
You can simply read the file with Get-Content, and then search for the trigger in the line.
I hope this small example can help:
$file = Get-Content CSV_File.txt
$140000001 = @()
$140000671 = @()
$bTrig = @()
foreach ($line in $file) {
    $bTrig += $line
    if ($line -match ';"140000001";') {
        $140000001 += $bTrig
        $bTrig = @()
    }
    elseif ($line -match ';"140000671";') {
        $140000671 += $bTrig
        $bTrig = @()
    }
}
if ($bTrig.Count -ne 0) { Write-Warning "No trigger for $bTrig" }
$140000001 | Out-File 140000001.txt -Encoding ascii
$140000671 | Out-File 140000671.txt -Encoding ascii
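Since the file has no header line, the Import-Csv / Group-Object approach from the earlier answers can still be used by supplying synthetic column names on import; a sketch (the Col1..Col6 names are invented, the paths come from the question):
# Six synthetic column names cover the widest rows (the SA5 lines).
$csv = Import-Csv "C:\temp\ORD001.txt" -Delimiter ';' -Header Col1, Col2, Col3, Col4, Col5, Col6

$csv | Group-Object -Property Col2 | ForEach-Object {
    # Col5 (the trigger, e.g. 140000001) is only populated on the SA5 row of each group.
    $name = $_.Group.Col5 | Where-Object { $_ } | Select-Object -First 1
    $_.Group | Export-Csv "C:\temp\files\$name.txt" -Delimiter ';' -NoTypeInformation -Append
}
Note that Export-Csv writes its own Col1..Col6 header line and quotes every field; if the output must stay exactly in the original headerless format, group the raw lines with Get-Content and append them with Add-Content instead.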

Match on two columns in two separate csv's then merge one column

I have 2 CSV's.
SOURCE CSV
"Employee ID","username","givenname","surname","emailaddress","title","Division","Location"
"204264","ABDUL.JALIL#domain.com","Abdul Jalil","Bin Hajar","Abdul.jalil#domain.com","Warehouse Associate I","Singapore","Singapore, "
"30053","ABEL.BARRAGAN#domain.com","Abel","Barragan","Abel.Barragan#domain.com","Manager, Customer Programs - CMS","Germany","Norderstedt, "
CHANGE CSV
givenname,surname,samaccountname,emailaddress,mail,country,city,state
Abigai,Teoyotl Rugerio,Abigai.Teoyotl,Abigai.TeoyotlRugerio@domain.com,Abigai.TeoyotlRugerio@domain.com,MX,,
Adekunle,Adesiyan,Adekunle.Adesiyan,Adekunle.Adesiyan@domain.com,Adekunle.Adesiyan@domain.com,US,VALENCIA,CALIFORNIA
I would like to match the surname and givenname from SOURCE to CHANGE, and if there is a match grab the "emailaddress" from the CHANGE CSV and place it into a new column in the SOURCE CSV.
So far I'm stuck on matching the first and last name columns.
$source = import-csv .\ur.csv
$change = import-csv .\all.csv
$Matchgivenname = Compare-Object $source.givenname $change.givenname -IncludeEqual -ExcludeDifferent -PassThru
$matchsurname = Compare-Object $source.surname $change.surname -IncludeEqual -ExcludeDifferent -PassThru
Not sure if you can do it with Compare-Object all at once, but you can loop over all the original employees and for each one, search for any changes. e.g.
$results = foreach ($employee in $source)
{
    $update = $change | Where-Object { $employee.surname -eq $_.surname -and $employee.givenname -eq $_.givenname } | Select -First 1
    if ($update)
    {
        $employee | Add-Member -MemberType NoteProperty -Name NewEmailAddress -Value $update.emailaddress
    }
    $employee
}
(untested)
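To write the merged rows back out (a usage sketch; the output file name is illustrative), keep in mind that Export-Csv takes its column set from the first object, so every row should carry the new property:
# Give every row a NewEmailAddress column (empty when there was no match),
# otherwise the column is dropped if the first employee happens to have no match.
$results | ForEach-Object {
    if (-not $_.PSObject.Properties['NewEmailAddress']) {
        $_ | Add-Member -MemberType NoteProperty -Name NewEmailAddress -Value $null
    }
    $_
} | Export-Csv .\ur_updated.csv -NoTypeInformation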
In case the change CSV is big and you only need the changed email address, build a hashtable with givenname+surname as a key and email as a value. Then use it while importing the source CSV.
Hashtable lookup is much faster than enumerating all rows of changed data, so overall number of iterations will be around #changed + #source + log2(#changed) where # stands for number of rows.
Also, since adding the new column via Add-Member is slower than direct assignment, add this new column just once using a custom header consisting of source fields extracted from the first row plus the changed email.
$changedEmail = @{}
Import-Csv .\all.csv |
    ForEach { $changedEmail[$_.givenname + '|' + $_.surname] = $_.emailaddress }

$newHeader = ((Import-Csv .\ur.csv | Select -First 1).PSObject.Properties |
    Select -Expand Name) + 'changedemailaddress'

$combined = Import-Csv .\ur.csv -Header $newHeader | Select -Skip 1 |
    ForEach {
        $_.changedemailaddress = $changedEmail[$_.givenname + '|' + $_.surname]
        $_
    }

$combined
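To persist the combined rows (a usage sketch; the output path is illustrative):
$combined | Export-Csv .\ur_with_changes.csv -NoTypeInformation -Encoding UTF8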

Powershell search hits within column & return column name

I'm looking for a way (if possible) to find any hits within any column that contain a ";" semicolon character and return the column/field name.
I'm loading in a DAT delimited text file (or CSV). The headers will be different each time, but I'm basically trying to figure out whether to expect any of the columns to contain multi-delimited values, such as email CC or BCC.
I'm using a form with a text box to input the DAT/CSV.
$form.Topmost = $True
$form.Add_Shown({ $textBox.Select() })
$result = $form.ShowDialog()
if ($result -eq [System.Windows.Forms.DialogResult]::OK)
{
    $x = $textBox.Text
    $x
}
Here is my code for output file:
Get-Content $x |
    foreach { $_ -replace "þ", '"' } |
    ConvertFrom-Csv -Delimiter "" |
    Out-GridView
I have been able to search a hit on the entire CSV by using:
$FileContent = Get-Content $x
$Matches = Select-String -InputObject $FileContent -Pattern ';' -AllMatches
$Matches.Matches.Count
The above part does give me the total number of ";" hits, but I'd rather see which columns were hit; I don't really need a total count, just the header name or column number.
I'm using PowerShell ISE v5.
I would use Select-String to find the initial hits, which would be quicker than looping through each column and row. You can then loop through the results by converting each found row into CSV and then into an object. All you need to do then is loop through each property and output the results. Something like this:
$file = 'your_file.csv'
$head = Get-Content $file -TotalCount 1
$re = ';'

Select-String $file -Pattern $re | % {
    $line = $_
    $item = $head + "`n" + $_.Line | ConvertFrom-Csv
    $item | gm -MemberType NoteProperty | select -ExpandProperty Name | % {
        if ($item."$_" -match $re) {
            New-Object psobject -Property @{
                Column = $_
                Row    = $line.LineNumber
                Value  = $item."$_"
            }
        }
    }
}
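If only the distinct column names are needed rather than every individual hit, the output can be reduced further; a small follow-up, assuming the pipeline's output has been captured in a variable such as $hits:
# $hits is assumed to hold the Column/Row/Value objects produced by the pipeline above.
$hits | Select-Object -ExpandProperty Column | Sort-Object -Unique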