The situation is the following: I've got a CSV file with a number of columns (let's say 3) and a custom object with the same columns plus one new one. I want to compare the column names and, if there is a new one, add that column and then append the values to the CSV file without replacing its existing content. Here is the code I've written, but I'm getting this error:
The appended object does not have a property that corresponds to the following column: column1. To continue with mismatched properties, add the -Force parameter, and then retry the command.
File sample:
The output should be the same file, but with a new column 'column4' and a new row appended, like this:
I can't figure out what I'm doing wrong.
$file_path = '..path_to_the_file\test_csv.csv'
$csv = import-csv -path $file_path
$columns = @([pscustomobject]@{
column1 = 'something_new_1'
column2 = 'something_new_2'
column3 = 'something_new_3'
column4 = 'something_new_4'
}
)
$csv_columns = ($csv | get-member).where({$_.Membertype -eq 'NoteProperty'}).Name
$columns = ($columns | get-member).where({$_.Membertype -eq 'NoteProperty'}).Name
$compare = $columns | Where-Object {$csv_columns -notContains $_}
foreach ($column in $csv) {
$column | add-member -MemberType NoteProperty -Name $compare -value ''
}
$csv | export-csv -Path $file_path -NoTypeInformation
$columns | export-csv -path $file_path -Append -NoTypeInformation -force
You can use this function to streamline the process. It assumes the objects you want to append already have the same properties as the imported CSV objects, and it will only add the new properties to the existing rows.
function Add-ObjectNormalized {
[CmdletBinding()]
param(
[Parameter(Mandatory, ValueFromPipeline)]
[object] $InputObject,
[Parameter(Mandatory, Position = 0)]
[object[]] $AddObject
)
begin {
$isFirstObject = $false
$propertiesToAdd = [Collections.Generic.HashSet[string]]::new(
[string[]] $AddObject[0].PSObject.Properties.Name,
[System.StringComparer]::OrdinalIgnoreCase
)
}
process {
if(-not $isFirstObject) {
$isFirstObject = $true
$propertiesToAdd.ExceptWith([string[]] $InputObject.PSObject.Properties.Name)
}
$psobject = $InputObject.PSObject.Properties
foreach($prop in $propertiesToAdd) {
$psobject.Add([psnoteproperty]::new($prop, $null))
}
$InputObject
}
end {
$AddObject
}
}
Pipe it after Import-Csv including the array of objects to be added as argument. For example:
$objects = @(
[pscustomobject]@{
column1 = 'something_new_1'
column2 = 'something_new_2'
column3 = 'something_new_3'
column4 = 'something_new_4'
}
[pscustomobject]@{
column1 = 'something_new_5'
column2 = 'something_new_6'
column3 = 'something_new_7'
column4 = 'something_new_8'
}
)
Import-Csv path\to\csv.csv | Add-ObjectNormalized $objects |
Export-Csv path\to\newCsv.csv -NoTypeInformation
The output using a simple Csv like the one in question would be:
column1         column2         column3         column4
-------         -------         -------         -------
val1            val2            val3
something_new_1 something_new_2 something_new_3 something_new_4
something_new_5 something_new_6 something_new_7 something_new_8
In my existing CSV file I have a column called "SharePoint ID" and it looks like this:
1.ylkbq
2.KlMNO
3.
4.MSTeam
6.
7.MSTEAM
8.LMNO83
and I'm just wondering how I can create a new column in my CSV called "SharePoint Email" and then add "@gmail.com" only to the actual IDs like "ylkbq", "KLMNO" and "LMNO83", instead of applying it to every row, including the blank ones. And maybe not add/transfer "MSTEAM" to the new column since it's not an ID.
$file = "C:\AuditLogSearch\New folder\OriginalFile.csv"
$file2 = "C:\AuditLogSearch\New folder\newFile23.csv"
$add = "#GMAIL.COM"
$properties = @{
Name = 'Sharepoint Email'
Expression = {
switch -Regex ($_.'SharePoint ID') {
#Not sure what to do here
}
}
}, '*'
Import-Csv -Path $file |
Select-Object $properties |
Export-Csv $file2 -NoTypeInformation
Using calculated properties with Select-Object this is how it could look:
$add = "#GMAIL.COM"
$expression = {
switch($_.'SharePoint ID')
{
{[string]::IsNullOrWhiteSpace($_) -or $_ -match 'MSTeam'}
{
# Null value or matches MSTeam, leave this Null
break
}
Default # We can assume these are IDs, append $add
{
$_.Trim() + $add
}
}
}
Import-Csv $file | Select-Object *, @{
Name = 'SharePoint Email'
Expression = $expression
} | Export-Csv $file2 -NoTypeInformation
Sample Output
Index SharePoint ID SharePoint Email
----- ------------- ----------------
1     ylkbq         ylkbq@GMAIL.COM
2     KlMNO         KlMNO@GMAIL.COM
3
4     MSTeam
5
6     MSTEAM
7     LMNO83        LMNO83@GMAIL.COM
A more concise expression (I initially misread the point): it can be reduced to just one if statement:
$expression = {
if(-not [string]::IsNullOrWhiteSpace($_.'SharePoint ID') -and $_.'SharePoint ID' -notmatch 'MSTeam')
{
$_.'SharePoint ID'.Trim() + $add
}
}
I am trying to import a CSV file and create an XLSX file from the data afterwards. My goal is to show the value of Column1 only once and not in every row. The CSV file is already sorted, so a check whether the previous/next row has the same value would be possible.
CSV
"Column1";"Column2";"Column3"
"Value1A";"Value1B";"Value1C"
"Value1A";"Value2B";"Value2C"
"Value1A";"Value3B";"Value3C"
"Value2A";"Value4B";"Value4C"
Expected Outcome
"Column1";"Column2";"Column3"
"Value1A";"Value1B";"Value1C"
"";"Value2B";"Value2C"
"";"Value2B";"Value1C"
"Value2A";"Value4B";"Value4C"
Outcome
"Column1";"Column2";"Column3"
"Value1A";"Value1B";"Value1C"
"Value1A";"Value2B";"Value2C"
"Value1A";"Value2B";"Value1C"
"Value2A";"Value4B";"Value4C"
Only column1 duplicate cells should be empty.
My Code to import and add to Excel
$csv = "C:\path\to\file.csv"
$i = 1
Import-Csv $csv | Select-Object -Property Column1,Column2,Column3 | ForEach-Object {
$j = 1
foreach ($prop in $_.PSObject.Properties) {
if ($i -eq 1) {
$serverInfoSheet.Cells.Item($i, $j++).Value = $prop.Name
} else {
$serverInfoSheet.Cells.Item($i, $j++).Value = $prop.Value
}
}
$i++
}
To provide further context, imagine Column1 is a date and Columns 2 and 3 are employees.
Example of expected outcome
"12/01/2020";"Mark";"Tony"
"";"Mark";"Andrew"
"";"Tony;Vanessa"
"12/02/2020";"Tony";"Michael"
I don't want the date to repeat, because it makes the Excel sheet harder to read.
$Csv = @'
"Column1";"Column2";"Column3"
"Value1A";"Value1B";"Value1C"
"Value1A";"Value2B";"Value2C"
"Value1A";"Value3B";"Value3C"
"Value2A";"Value4B";"Value4C"
'@
$Csv | ConvertFrom-Csv -Delimiter ';' |
Foreach-Object -Begin { $Last1 = $Null } {
if ( $_.Column1 -eq $Last1 ) { $_.Column1 = '' }
else { $Last1 = $_.Column1 }
$_
} | ConvertTo-Csv -Delimiter ';'
"Column1";"Column2";"Column3"
"Value1A";"Value1B";"Value1C"
"";"Value2B";"Value2C"
"";"Value3B";"Value3C"
"Value2A";"Value4B";"Value4C"
I am very new to PowerShell.
I am trying to validate my CSV file by finding out whether there is any text value in my numeric fields. I can define which columns are numeric.
This is my source data like this
ColA ColB ColC ColD
23 23 ff 100
2.30E+01 34 2.40E+01 23
df 33 ss df
34 35 36 37
I need output something like this (only the text values, if found in any column):
ColA     ColC     ColD
2.30E+01 ff       df
df       2.40E+01
         ss
I have tried some code but I am not getting the expected results; I only get output like this:
System.Object[]
---------------
xxx fff' ddd 3.54E+03
...
This is what I was trying
#
cls
function Is-Numeric ($Value) {
return $Value -match "^[\d\.]+$"
}
$arrResult = @()
$arraycol = @()
$FileCol = @("ColA","ColB","ColC","ColD")
$dif_file_path = "C:\Users\$env:username\desktop\f2.csv"
#Importing CSVs
$dif_file = Import-Csv -Path $dif_file_path -Delimiter ","
############## Test Datatype (Is-Numeric)##########
foreach($col in $FileCol)
{
foreach ($line in $dif_file) {
$val = $line.$col
$isnum = Is-Numeric($val)
if ($isnum -eq $false) {
$arrResult += $line.$col
$arraycol += $col
}
}
}
[pscustomobject]@{$arraycol = "$arrResult"} | out-file "C:\Users\$env:username\Desktop\Errors1.csv"
####################
Can someone guide me in the right direction?
Thanks
You can try something like this,
function Is-Numeric ($Value) {
return $Value -match "^[\d\.]+$"
}
$dif_file_path = "C:\Users\$env:username\desktop\f2.csv"
#Importing CSVs
$dif_file = Import-Csv -Path $dif_file_path -Delimiter ","
#$columns = $dif_file | Get-member -MemberType 'NoteProperty' | Select-Object -ExpandProperty 'Name'
# Use this to specify certain columns
$columns = "ColB", "ColC", "ColD"
foreach($row in $dif_file) {
foreach ($col in $columns) {
if ($col -in $columns) {
if (!(Is-Numeric $row.$col)) {
$row.$col = ""
}
}
}
}
$dif_file | Export-Csv C:\temp\formatted.txt
Look up the names of the columns as you go.
Look up the value of each column in each row and, if it is not numeric, change it to "".
Export the updated file.
I think not displaying columns that have no data creates the challenge here. You can do the following:
$csv = Import-Csv "C:\Users\$env:username\desktop\f2.csv"
$finalprops = [collections.generic.list[string]]@()
$out = foreach ($line in $csv) {
$props = $line.psobject.properties | Where {$_.Value -notmatch '^[\d\.]+$'} |
Select-Object -Expand Name
$props | Where {$_ -notin $finalprops} | Foreach-Object { $finalprops.add($_) }
if ($props) {
$line | Select $props
}
}
$out | Select-Object ($finalprops | Sort)
Given the nature of Format-Table or tabular output, you only see the properties of the first object in the collection. So if object1 has ColA only, but object2 has ColA and ColB, you only see ColA.
The output order you want is quite different than the input CSV; you're tracking bad text data not by first occurrence, but by column order, which requires some extra steps.
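A quick way to see that behaviour (a minimal sketch with made-up objects):
$sample = @(
    [pscustomobject]@{ ColA = 'x' }                   # the first object defines the visible columns
    [pscustomobject]@{ ColA = 'y'; ColB = 'z' }
)
$sample | Format-Table                                # ColB is not shown
$sample | Select-Object ColA, ColB | Format-Table     # selecting both properties makes ColB visible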
test.csv file contents:
ColA,ColB,ColC,ColD
23,23,ff,100
2.30E+01,34,2.40E+01,23
df,33,ss,df
34,35,36,37
Sample code tested to meet your description:
$csvIn = Import-Csv "$PSScriptRoot\test.csv";
# create working data set with headers in same order as input file
$data = [ordered]@{};
$csvIn[0].PSObject.Properties | foreach {
$data.Add($_.Name, (New-Object System.Collections.ArrayList));
};
# add fields with text data
$csvIn | foreach {
$_.PSObject.Properties | foreach {
if ($_.Value -notmatch '^-?[\d\.]+$') {
$null = $data[$_.Name].Add($_.Value);
}
}
}
$removes = @(); # remove `good` columns with numeric data
$rowCount = 0; # column with most bad values
$data.GetEnumerator() | foreach {
$badCount = $_.Value.Count;
if ($badCount -eq 0) { $removes += $_.Key; }
if ($badCount -gt $rowCount) { $rowCount = $badCount; }
}
$removes | foreach { $data.Remove($_); }
0..($rowCount - 1) | foreach {
$h = [ordered]@{};
foreach ($key in $data.Keys) {
$h.Add($key, $data[$key][$_]);
}
[PSCustomObject]$h;
} |
Export-Csv -NoTypeInformation -Path "$PSScriptRoot\text-data.csv";
output file contents:
"ColA","ColC","ColD"
"2.30E+01","ff","df"
"df","2.40E+01",
,"ss",
@Jawad, finally I have tried:
function Is-Numeric ($Value) {
return $Value -match "^[\d\.]+$"
}
$arrResult = @()
$columns = "ColA","ColB","ColC","ColD"
$dif_file_path = "C:\Users\$env:username\desktop\f1.csv"
$dif_file = Import-Csv -Path $dif_file_path -Delimiter "," |select $columns
$columns = $dif_file | Get-member -MemberType 'NoteProperty' | Select-Object -ExpandProperty 'Name'
foreach($row in $dif_file) {
foreach ($col in $columns) {
$val = $row.$col
$isnum = Is-Numeric($val)
if ($isnum -eq $false) {
$arrResult += $col+ " " +$row.$col
}}}
$arrResult | out-file "C:\Users\$env:username\desktop\Errordata.csv"
I get the correct result in my output file, but the order is very ambiguous, like:
ColA ss
ColB 5.74E+03
ColA ss
ColC rrr
ColB 3.54E+03
ColD ss
ColB 8.31E+03
ColD cc
Any idea how to get a proper format? Thanks.
Note: with your suggested code, I get the complete source file with all the data, not just the specific error data.
I have 2 csv files
First file:
firstName,secondName
1234,Value1
2345,Value1
3456,Value1
4567,Value3
7645,Value3
Second file:
firstName,fileSplitter,Csv2ColumnOne,Csv2ColumnTwo,Csv2ColumnThree
1234,,1234,abc,Value1
1234,,1234,asd,Value1
3456,,3456,qwe,Value1
4567,,4567,mnb,Value1
I want to insert column secondName in the second file in between columns firstName and fileSplitter.
The result should look like this:
firstName,secondName,fileSplitter,Csv2ColumnOne,Csv2ColumnTwo,Csv2ColumnThree
1234,Value1,,1234,abc,Value1
1234,Value1,,1234,asd,Value1
3456,Value1,,3456,qwe,Value1
4567,Value3,,4567,mnb,Value1
I'm trying the following code:
Function InsertColumnInBetweenColumns
{
Param ($FirstFileFirstColumnTitle, $firstFile, [string]$1stColumnName, [string]$2ndColumnName, [string]$columnMergedFileBeforeInput)
Write-Host "Creating hash table with columns values `"$1stColumnName`" `"$2ndColumnName`" From $OimFileWithMatches"
$hashFirstFileTwoColumns = @{}
Import-Csv $firstFile | ForEach-Object {$hashFirstFileTwoColumns[$_.$1stColumnName] = $_.$2ndColumnName}
Write-Host "Complete."
Write-Host "Appending Merge file with column `"$2ndColumnName`" from file $secondCsvFileWithLocalPath"
Import-Csv $outputCsvFileWithLocalPath | Select-Object $columnMergedFileBeforeInput, @{n=$2ndColumnName; e={
if ($hashFirstFileTwoColumns.ContainsKey($_.$FirstFileFirstColumnTitle)) {
$hashFirstFileTwoColumns[$_.$FirstFileFirstColumnTitle]
} Else {
'Not Found'
}}}, * | Export-Csv "$outputCsvFileWithLocalPath-temp" -NoType -Force
Move-Item "$outputCsvFileWithLocalPath-temp" $outputCsvFileWithLocalPath -Force
Write-Host "Complete."
Write-Host ""
}
This function will be called in a for loop for each column found in the first file (can contain an indefinite number). For testing, I am only using 2 columns from the first file.
I'm getting error output like the following:
Select : Property cannot be processed because property "firstName" already exists.
At C:\Scripts\Tests\Compare2CsvFilesOutput1WithMatchesOnly.ps1:490 char:43
+ Import-Csv $outputCsvFileWithLocalPath | Select $columnMergedFileBeforeInput, @ ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (@{firstName=L...ntName=asdfas}:PSObject) [Select-Object], PSArgumentException
+ FullyQualifiedErrorId : AlreadyExistingUserSpecifiedPropertyNoExpand,Microsoft.PowerShell.Commands.SelectObjectC
ommand
I know the issue is where it says Select-Object $columnMergedFileBeforeInput,.
How can I get the loop statement to insert the column in between the before column (name is specified), and append the rest using *?
Update
Just an FYI, changing this line Select-Object $columnMergedFileBeforeInput, @{n=$2ndColumnName..... to this line Select-Object @{n=$2ndColumnName..... works, it just attaches the columns out of order. That is why I'm trying to insert the column in between. Maybe if I do it this way but insert the columns backwards using the for loop, it would work...
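The error is raised because $columnMergedFileBeforeInput is named explicitly and then again via *. One way to keep the intended column order without the duplicate is to expand the property list up front and splice the new column in; a rough sketch reusing the variables from the function above (not a tested drop-in replacement):
$rows      = Import-Csv $outputCsvFileWithLocalPath
$afterCols = $rows[0].PSObject.Properties.Name | Where-Object { $_ -ne $columnMergedFileBeforeInput }
$propList  = @($columnMergedFileBeforeInput,
               @{ n = $2ndColumnName; e = { $hashFirstFileTwoColumns[$_.$FirstFileFirstColumnTitle] } }) + $afterCols
$rows | Select-Object -Property $propList |
    Export-Csv "$outputCsvFileWithLocalPath-temp" -NoTypeInformation -Force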
Not sure if this is the most efficient way to do it, but it should do the trick. It just adds the property to the record from file2, then reorders the output so secondName is the second column. You can output results to csv where required too (ConvertTo-Csv).
$file1 = Import-Csv -Path file1.csv
$file2 = Import-Csv -Path file2.csv
$results = @()
ForEach ($record In $file2) {
Add-Member -InputObject $record -MemberType NoteProperty -Name secondName -Value $($file1 | ? { $_.firstName -eq $record.firstName } | Select -ExpandProperty secondName)
$results += $record
}
$results | Select-Object -Property firstName,secondName,fileSplitter,Csv2ColumnOne,Csv2ColumnTwo,Csv2ColumnThree
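If the first file is large, the per-record $file1 | ? { ... } scan can be replaced with a hash table lookup built once. A sketch of the same approach (file names as above):
$file1 = Import-Csv -Path file1.csv
$file2 = Import-Csv -Path file2.csv
$secondNameLookup = @{}
foreach ($row in $file1) { $secondNameLookup[$row.firstName] = $row.secondName }
$results = ForEach ($record In $file2) {
    # attach the looked-up value instead of filtering $file1 for every record
    Add-Member -InputObject $record -MemberType NoteProperty -Name secondName -Value $secondNameLookup[$record.firstName]
    $record
}
$results | Select-Object -Property firstName,secondName,fileSplitter,Csv2ColumnOne,Csv2ColumnTwo,Csv2ColumnThree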
I've created the following function. What it does is find the match (in this case "firstName") and add the matching column name to the new array right after the column name on which the match is made.
function Add-ColumnAfterMatchingColumn{
[CmdletBinding()]
param(
[string]$MainFile,
[string]$MatchingFile,
[string]$MatchColumnName,
[string]$MatchingColumnName
)
# Import data from two files
$file1 = Import-Csv -Path $MainFile
$file2 = Import-Csv -Path $MatchingFile
# Find column names and order them
$columnnames = $file2 | gm | where {$_.MemberType -like "NoteProperty"} | Select Name | %{$_.Name}
[array]::Reverse($columnnames)
# Find $MatchColumnName index and put the $MatchingColumnName after it
$MatchColumnNameIndex = [array]::IndexOf($columnnames, $MatchColumnName)
if($MatchColumnNameIndex -eq -1){
$MatchColumnNameIndex = 0
}
$columnnames = $columnnames[0..$MatchColumnNameIndex] + $MatchingColumnName + $columnnames[($MatchColumnNameIndex+1)..($columnnames.Length -1)]
$returnObject = @()
foreach ($item in $file2){
# Find corresponding value MatchingColumnName in $file1 and add it to the current item
$item | Add-Member -Name "$MatchingColumnName" -Value ($file1 | ?{$_."$($MatchColumnName)" -eq $item."$($MatchColumnName)"})."$MatchingColumnName" -MemberType NoteProperty
# Add current item to the returnObject array, in the correct order
$newItem = New-Object psobject
foreach ($columnname in [string[]]$columnnames){
$newItem | Add-Member -Name $columnname -Value $item."$columnname" -MemberType NoteProperty
}
$returnObject += $newItem
}
return $returnObject
}
When you run this function you will have the following output:
Add-ColumnAfterMatchingColumn -MainFile C:\Temp\file1.csv -MatchingFile C:\Temp\file2.csv -MatchColumnName "firstname" -MatchingColumnName "secondname" | ft
firstName secondname fileSplitter Csv2ColumnTwo Csv2ColumnThree Csv2ColumnOne
--------- ---------- ------------ ------------- --------------- -------------
1234      Value1                  abc           Value1          1234
1234      Value1                  asd           Value1          1234
3456      Value1                  qwe           Value1          3456
4567      Value3                  mnb           Value1          4567
I have a CSV file with a string column where the value spans multiple lines. I want to aggregate those multiple lines into one line.
For example
1, "asdsdsdsds", "John"
2, "dfdhifdkinf
dfjdfgkdnjgknkdjgndkng
dkfdkjfnjdnf", "Roy"
3, "dfjfdkgjfgn", "Rahul"
I want my output to be
1, "asdsdsdsds", "John"
2, "dfdhifdkinf dfjdfgkdnjgknkdjgndkng dkfdkjfnjdnf", "Roy"
3, "dfjfdkgjfgn", "Rahul"
I want to achieve this output using PowerShell
Thanks.
Building on Ansgar's answer, here's how to do it when:
You don't know the column names
Your CSV file may contain CR or LF independently
(Import-Csv $csvInput) | % {
$line = $_
foreach ($prop in $line.PSObject.Properties) {
$line.($prop.Name) = ($prop.Value -replace '[\r\n]',' ')
}
$line
} | Export-Csv $csvOutput -NoTypeInformation
Try this:
$csv = 'C:\path\to\your.csv'
(Import-Csv $csv -Header 'ID','Value','Name') | % {
$_.Value = $_.Value -replace "`r`n",' '
$_
} | Export-Csv $csv -NoTypeInformation
If your CSV contains headers, remove -Header 'ID','Value','Name' from the import and replace Value with the actual column name.
If you don't want double quotes around the fields, you can remove them by replacing Export-Csv with something like this:
... | ConvertTo-Csv -NoTypeInformation | % { $_ -replace '"' } | Out-File $csv
To remove the header from the output you add another filter before Out-File to skip the first line:
... | select -Skip 1 | Out-File $csv
You can import the csv, do a specialized select, and write the result into a new CSV.
import-csv Before.csv -Header "ID","Change" | Select ID,@{Name="NoNewLines"; Expression={$_.Change -replace "`n"," "}} | export-csv After.csv
The key part is in the select statement, which allows you to pass a specialized hash table (Name is the name of the property, Expression is a scriptblock that computes it).
You may need to fiddle with headers a bit to get the exact output you want.
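For example, if Before.csv already has its own header row, -Header would turn that row into data; dropping -Header and using the real column names is usually enough (a sketch; ID and Change are assumed to be the actual header names):
import-csv Before.csv | Select ID,@{Name="NoNewLines"; Expression={$_.Change -replace "`n"," "}} | export-csv After.csv -NoTypeInformation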
The problems with Export-CSV are twofold:
Early versions (powershell1 & 2) do not allow you to append data to the CSV
If the data being piped to it contains newline characters, the data is useless in Excel
The solution to both of the above is to use Convertto-CSV instead. Here is a sample:
{bunch of stuff} | ConvertTo-CSV | %{$_ -replace "`n","<NL>"} | %{$_ -replace "`r","<CR>"} >>$AppendFile
Note that this allows you to do whatever editing you need on the data (in this case, replacing newline characters), and to use redirectors to append.
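When the original line breaks are needed again, the placeholders can simply be replaced in reverse, for example (a sketch using the markers above):
Get-Content $AppendFile | %{$_ -replace "<NL>","`n"} | %{$_ -replace "<CR>","`r"}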
FYI: I've created a CSV Cleaner: https://stackoverflow.com/a/32016543/361842
This can be used to replace any unwanted characters / should be straight-forward to adapt to your needs.
Code copied below; though I recommend referring to the above thread to see any feedback from others.
clear-host
[Reflection.Assembly]::LoadWithPartialName("System.IO") | out-null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.VisualBasic") | out-null
function Clean-CsvStream {
[CmdletBinding()]
param (
[Parameter(Mandatory = $true, ValueFromPipeline=$true)]
[string]$CsvRow
,
[Parameter(Mandatory = $false)]
[char]$Delimiter = ','
,
[Parameter(Mandatory = $false)]
[regex]$InvalidCharRegex
,
[Parameter(Mandatory = $false)]
[string]$ReplacementString
)
begin {
[bool]$IsSimple = [string]::IsNullOrEmpty($InvalidCharRegex)
if(-not $IsSimple) {
[System.IO.MemoryStream]$memStream = New-Object System.IO.MemoryStream
[System.IO.StreamWriter]$writeStream = New-Object System.IO.StreamWriter($memStream)
[Microsoft.VisualBasic.FileIO.TextFieldParser]$Parser = new-object Microsoft.VisualBasic.FileIO.TextFieldParser($memStream)
$Parser.SetDelimiters($Delimiter)
$Parser.HasFieldsEnclosedInQuotes = $true
[long]$seekStart = 0
}
}
process {
if ($IsSimple) {
$CsvRow
} else { #if we're not replacing anything, keep it simple
$seekStart = $memStream.Seek($seekStart, [System.IO.SeekOrigin]::Current)
$writeStream.WriteLine($CsvRow)
$writeStream.Flush()
$seekStart = $memStream.Seek($seekStart, [System.IO.SeekOrigin]::Begin)
write-output (($Parser.ReadFields() | %{$_ -replace $InvalidCharRegex,$ReplacementString }) -join $Delimiter)
}
}
end {
if(-not $IsSimple) {
try {$Parser.Close(); $Parser.Dispose()} catch{}
try {$writeStream.Close(); $writeStream.Dispose()} catch{}
try {$memStream.Close(); $memStream.Dispose()} catch{}
}
}
}
$csv = @(
(new-object -TypeName PSCustomObject -Property @{A="this is regular text";B="nothing to see here";C="all should be good"})
,(new-object -TypeName PSCustomObject -Property @{A="this is regular text2";B="what the`nLine break!";C="all should be good2"})
,(new-object -TypeName PSCustomObject -Property @{A="this is regular text3";B="ooh`r`nwindows line break!";C="all should be good3"})
,(new-object -TypeName PSCustomObject -Property @{A="this is regular text4";B="I've got;a semi";C="all should be good4"})
,(new-object -TypeName PSCustomObject -Property @{A="this is regular text5";B="""You're Joking!"" said the Developer`r`n""No honestly; it's all about the secret VB library"" responded the Google search result";C="all should be good5"})
) | convertto-csv -Delimiter ';' -NoTypeInformation
$csv | Clean-CsvStream -Delimiter ';' -InvalidCharRegex "[`r`n;]" -ReplacementString ':'
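To run the cleaner over an existing file rather than in-memory objects, one option (a sketch; paths are placeholders) is to let Import-Csv/ConvertTo-Csv keep each record on a single pipeline item, which is what Clean-CsvStream expects, and then clean and save the result:
Import-Csv 'C:\path\to\dirty.csv' |
    ConvertTo-Csv -NoTypeInformation |
    Clean-CsvStream -InvalidCharRegex "[`r`n]" -ReplacementString ' ' |
    Set-Content 'C:\path\to\clean.csv'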