Find replace using PowerShell get-content - powershell

I am attempting to mask SSNs with random SSNs in a large text file. The file is 400 MB (0.4 GB).
There are 17,000 instances of SSNs that I want to find and replace.
Here is an example of the PowerShell command I am using:
(get-content C:\TrainingFile\TrainingFile.txt) | foreach-object {$_ -replace "123-45-6789", "666-66-6666"} | set-content C:\TrainingFile\TrainingFile.txt
My problem is that I have 17,000 lines of this code in a .ps1 file. The .ps1 file looks similar to:
(get-content C:\TrainingFile\TrainingFile.txt) | foreach-object {$_ -replace "123-45-6789", "666-66-6666"} | set-content C:\TrainingFile\TrainingFile.txt
(get-content C:\TrainingFile\TrainingFile.txt) | foreach-object {$_ -replace "122-45-6789", "666-66-6668"} | set-content C:\TrainingFile\TrainingFile.txt
(get-content C:\TrainingFile\TrainingFile.txt) | foreach-object {$_ -replace "223-45-6789", "666-66-6667"} | set-content C:\TrainingFile\TrainingFile.txt
(get-content C:\TrainingFile\TrainingFile.txt) | foreach-object {$_ -replace "123-44-6789", "666-66-6669"} | set-content C:\TrainingFile\TrainingFile.txt
And so on, for 17,000 PowerShell commands in the .ps1 file, one command per line.
I did a test on just one command and it took about 15 seconds to execute. Doing the math, 17,000 × 15 seconds comes out to about 3 days to run my .ps1 script of 17,000 commands.
Is there a faster way to do this?

The reason for poor performance is that a lot of extra work is being done. Let's look at the process as pseudocode:
select SSN (X) and masked SSN (X') from a list
read all rows from file
look each file row for string X
if found, replace with X'
save all rows to file
loop until all SSNs are processed
So what's the problem? It is that for each SSN replacement, you process all the rows, not only those that need masking but also those that don't. That's a lot of extra work. If you have, say, 100 rows and 10 replacements, you are going to use 1,000 steps when only 100 are needed. In addition, reading and saving the file creates disk I/O. Whilst that's not often an issue for a single operation, multiply the I/O cost by the loop count and you'll find quite a large amount of time wasted on disk waits.
For great performance, tune the algorithm like so:
read all rows from file
loop through rows
for current row, change X -> X'
save the result
Why should this be faster? 1) You read and save the file only once; disk I/O is slow. 2) You process each row only once, so no extra work is being done. As for how to actually perform the X -> X' transform, you need to define more carefully what the masking rule is.
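A minimal sketch of that single-pass shape (the masking rule here, overwriting any SSN-like string with a fixed mask, is purely illustrative, and the in-memory array stands in for the file read/write):

```powershell
# Stand-in for: read all rows from file
$rows = 'john 123-45-6789', 'no ssn here', 'jane 223-45-6789'

# Loop through rows once; each row is visited exactly one time
$masked = $rows | ForEach-Object { $_ -replace '\d{3}-\d{2}-\d{4}', 'XXX-XX-XXXX' }

$masked   # 'john XXX-XX-XXXX', 'no ssn here', 'jane XXX-XX-XXXX'
```

With real files, the array would come from one Get-Content and go to one Set-Content, so the disk is touched exactly twice regardless of how many SSNs there are.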
Edit
Here's a more practical resolution:
Since you already know the f(X) -> X' results, you should have a pre-calculated list saved to disk like so:
ssn, mask
"123-45-6789", "666-66-6666"
...
"223-45-6789", "666-66-6667"
Import the file into a hash table and work forward by stealing all the juicy bits from Ansgar's answer, like so:
$ssnMask = @{}
$ssn = Import-Csv "c:\temp\SSNMasks.csv" -Delimiter ","
# Add X -> X' to hashtable
$ssn | % {
    if (-not $ssnMask.ContainsKey($_.ssn)) {
        # It's an error to add an existing key, so check first
        $ssnMask.Add($_.ssn, $_.mask)
    }
}
$dataToMask = Get-Content "c:\temp\training.txt"
$dataToMask | % {
    if ( $_ -match '(\d{3}-\d{2}-\d{4})' ) {
        # Replace SSN look-a-like with value from hashtable
        # NB: This simply removes SSNs that don't have a match in the hashtable
        $_ -replace $matches[1], $ssnMask[$matches[1]]
    } else {
        # Pass through lines without an SSN (without this else, they'd be dropped)
        $_
    }
} | Set-Content "c:\temp\training2.txt"

Avoid reading and writing the file multiple times. I/O is expensive and is what slows your script down. Try something like this:
$filename = 'C:\TrainingFile\TrainingFile.txt'
$ssnMap = @{}
(Get-Content $filename) | % {
    if ( $_ -match '(\d{3}-\d{2}-\d{4})' ) {
        # If an SSN is found, check if a mapping of that SSN to a random SSN exists.
        # Otherwise create a new mapping.
        if ( -not $ssnMap.ContainsKey($matches[1]) ) {
            do {
                $rnd = Get-Random -Min 100000 -Max 999999
                $newSSN = "666-$($rnd -replace '(..)(....)','$1-$2')"
            } while ( $ssnMap.ContainsValue($newSSN) ) # loop to avoid collisions
            $ssnMap[$matches[1]] = $newSSN
        }
        # Replace the SSN with the corresponding randomly generated SSN.
        $_ -replace $matches[1], $ssnMap[$matches[1]]
    } else {
        # If no SSN is found, simply print the line.
        $_
    }
} | Set-Content $filename
If you already have a list of random SSNs and also have them mapped to specific "real" SSNs, you could read those mappings from a CSV (example column titles: realSSN, randomSSN) into the $ssnMap hashtable:
$ssnMap = @{}
Import-Csv 'C:\mappings.csv' | % { $ssnMap[$_.realSSN] = $_.randomSSN }

If you've already generated a list of random SSNs for replacement, and each SSN in the file just needs to be replaced with one of them (not necessarily mapped to a specific replacement string), then I think this will be much faster:
$inputfile = 'C:\TrainingFile\TrainingFile.txt'
$outputfile = 'C:\TrainingFile\NewTrainingFile.txt'
$replacements = Get-Content 'C:\TrainingFile\SSN_Replacements.txt'
$i=0
Filter Replace-SSN { $_ -replace '\d{3}-\d{2}-\d{4}',$replacements[$i++] }
Get-Content $inputfile |
Replace-SSN |
Set-Content $outputfile
This will walk through your list of replacement SSNs, selecting the next one in the list for each new replacement.
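A quick sketch of how the filter consumes the pool (values here are made up). One subtlety: `$replacements[$i++]` is evaluated once per incoming line, whether or not that line contained an SSN, so the replacement list needs to be at least as long as the input:

```powershell
# Hypothetical pool of pre-generated replacement SSNs.
$replacements = '666-11-1111', '666-22-2222', '666-33-3333'
$i = 0
Filter Replace-SSN { $_ -replace '\d{3}-\d{2}-\d{4}', $replacements[$i++] }

$masked = 'a 123-45-6789', 'b 987-65-4321' | Replace-SSN
$masked   # 'a 666-11-1111', 'b 666-22-2222'
```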
Edit:
Here's a solution for mapping specific SSNs to specific replacement strings. It assumes you have a CSV file of the original SSNs and their intended replacement strings, as columns 'OldSSN' and 'NewSSN':
$inputfile = 'C:\TrainingFile\TrainingFile.txt'
$outputfile = 'C:\TrainingFile\NewTrainingFile.txt'
$replacementfile = 'C:\TrainingFile\SSN_Replacements.csv'
$SSNmatch = [regex]'\d{3}-\d{2}-\d{4}'
$replacements = @{}
Import-Csv $replacementfile |
    ForEach-Object { $replacements[$_.OldSSN] = $_.NewSSN }
Get-Content $inputfile -ReadCount 1000 |
    ForEach-Object {
        foreach ($Line in $_) {
            if ( $Line -match $SSNmatch ) {   # Found SSN in line
                if ( $replacements.ContainsKey($matches[0]) ) {   # Found replacement string for this SSN
                    $Line -replace $SSNmatch, $replacements[$matches[0]]   # Replace SSN and output line
                } else {
                    Write-Warning "No replacement string found for $($matches[0])"
                }
            }
            else { $Line }   # No SSN in this line - output line as-is
        }
    } | Set-Content $outputfile

# Fairly fast PowerShell code for masking up to 1,000 SSNs per line in a large text file (with an unlimited number of lines) where the SSN matches the pattern " ###-##-#### ", " ##-####### ", or " ######### ".
# This code can handle a 14 MB text file that has SSNs in nearly every row in about 4 minutes.
# $inputFilename = 'C:/InputFile.txt'
$inputFileName = "
1
0550 125665 338066
- 02 CR05635 07/06/16
0 SAMPLE CUSTOMER NAME
PO BOX 12345
ROSEVILLE CA 12345-9109
EMPLOYEE DEFERRALS
FREDDIE MAC RO 16 9385456 164-44-9120 XXX
SALLY MAE RO 95 9385356 07-4719130 XXX
FRED FLINTSTONE RO 95 1185456 061741130 XXX
WILMA FLINTSTONE RO 91 9235456 364-74-9130 123456789 123456389 987354321 XXX
PEBBLES RUBBLE RO 10 9235456 06-3749130 064-74-9150 034-74-9130 XXX
BARNEY RUBBLE RO 11 9235456 06-3449130 06-3749140 063-74-9130 XXX
BETTY RUBBLE RO 16 9235456 9-74-9140 123456789 123456789 987654321 XXX
PLEASE ENTER BELOW ANY ADDITIONAL PARTICIPANTS FOR WHOM YOU ARE
REMITTING. FOR GENERAL INFORMATION AND SERVICE CALL
"
$outputFilename = 'D:/OutFile.txt'
#(Get-Content $inputFilename) | % {
($inputFilename) | % {
    $NewLine = $_
    $ChangeFound = 'Y'
    $WhileCounter = 0
    While (($ChangeFound -eq 'Y') -and ($WhileCounter -lt 1000))
    {
        $WhileCounter = $WhileCounter + 1
        $ChangeFound = 'N'
        # Pattern 1: ###-##-####
        $matches = $NewLine | Select-String -Pattern "[ ][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9][0-9][0-9][ |\t|\r|\n]" -AllMatches
        If ($matches.length -gt 0)
        {
            $ChangeFound = 'Y'
            $NewLine = ''
            for ($i = 0; $i -lt 1; $i++) {
                for ($k = 0; $k -lt 1; $k++) {
                    $t = $matches[$i] -replace $matches[$i].matches[$k].value, (" ###-##-" + $matches[$i].matches[$k].value.substring(8))
                    $NewLine = $NewLine + $t
                }
            }
        }
        # Pattern 2: ##-#######
        $matches = $NewLine | Select-String -Pattern "[ ][0-9][0-9]-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][ |\t|\r|\n]" -AllMatches
        If ($matches.length -gt 0)
        {
            $ChangeFound = 'Y'
            $NewLine = ''
            for ($i = 0; $i -lt 1; $i++) {
                for ($k = 0; $k -lt 1; $k++) {
                    $t = $matches[$i] -replace $matches[$i].matches[$k].value, (" ##-###" + $matches[$i].matches[$k].value.substring(7))
                    $NewLine = $NewLine + $t
                }
            }
        }
        # Pattern 3: #########
        $matches = $NewLine | Select-String -Pattern "[ ][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][ |\t|\r|\n]" -AllMatches
        If ($matches.length -gt 0)
        {
            $ChangeFound = 'Y'
            $NewLine = ''
            for ($i = 0; $i -lt 1; $i++) {
                for ($k = 0; $k -lt 1; $k++) {
                    $t = $matches[$i] -replace $matches[$i].matches[$k].value, (" #####" + $matches[$i].matches[$k].value.substring(6))
                    $NewLine = $NewLine + $t
                }
            }
        }
    } # end of While
    $NewLine
} | Set-Content $outputFilename

Related

How can I transpose and parse a large vertical text file into a CSV file with headers?

I have a large text file (*.txt) in the following format:
; KEY 123456
; Any Company LLC
; 123 Main St, Anytown, USA
SEC1 = xxxxxxxxxxxxxxxxxxxxx
SEC2 = xxxxxxxxxxxxxxxxxxxxx
SEC3 = xxxxxxxxxxxxxxxxxxxxx
SEC4 = xxxxxxxxxxxxxxxxxxxxx
SEC5 = xxxxxxxxxxxxxxxxxxxxx
SEC6 = xxxxxxxxxxxxxxxxxxxxx
This is repeated for about 350 - 400 keys. These are HASP keys and the SEC codes associated with them. I am trying to parse this file into a CSV file with KEY and SEC1 - SEC6 as the headers, with the rows being filled in. This is the format I am trying to get to:
KEY,SEC1,SEC2,SEC3,SEC4,SEC5,SEC6
123456,xxxxxxxxxx,xxxxxxxxxxx,xxxxxxxxxx,xxxxxxxxxx,xxxxxxxxxx,xxxxxxxxxx
456789,xxxxxxxxxx,xxxxxxxxxx,xxxxxxxxxx,xxxxxxxxxx,xxxxxxxxxx,xxxxxxxxxx
I have been able to get a script to export to a CSV with only one key in the text file (my test file), but when I try to run it on the full list, it only exports the last key and sec codes.
$keysheet = '.\AllKeys.txt'
$holdarr = @{}
Get-Content $keysheet | ForEach-Object {
    if ($_ -match "KEY") {
        $key, $value = $_.TrimStart("; ") -split " "
        $holdarr[$key] = $value
    }
    elseif ($_ -match "SEC") {
        $key, $value = $_ -split " = "
        $holdarr[$key] = $value
    }
}
$hash = New-Object PSObject -Property $holdarr
$hash | Export-Csv -Path '.\allsec.csv' -NoTypeInformation
When I run it on the full list, it also adds a couple of extra columns with what looks like properties instead of values.
Any help to get this to work would be appreciated.
Thanks.
Here's the approach I suggest:
$output = switch -Regex -File './AllKeys.txt' {
    '^; KEY (?<key>\d+)' {
        if ($o) {
            [pscustomobject]$o
        }
        $o = @{
            KEY = $Matches['key']
        }
    }
    '^(?<sec>SEC.*?)\s' {
        $o[$Matches['sec']] = ($_ | ConvertFrom-StringData)[$Matches['sec']]
    }
    default {
        Write-Warning -Message "No match found: $_"
    }
}
# catch the last object
$output += [pscustomobject]$o
$output | Export-Csv -Path './some.csv' -NoTypeInformation
This would be one approach.
& {
    $entry = $null
    switch -Regex -File '.\AllKeys.txt' {
        "KEY" {
            if ($entry) {
                [PSCustomObject]$entry
            }
            $entry = @{}
            $key, $value = $_.TrimStart("; ") -split " "
            $entry[$key] = [int]$value
        }
        "SEC" {
            $key, $value = $_ -split " = "
            $entry[$key] = $value
        }
    }
    [PSCustomObject]$entry
} | sort KEY | select KEY,SEC1,SEC2,SEC3,SEC4,SEC5,SEC6 |
    Export-Csv -Path '.\allsec.csv' -NoTypeInformation
Let's leverage the strength of ConvertFrom-StringData, which
Converts a string containing one or more key and value pairs to a hash table.
So what we will do is:
Split into blocks of text
Edit the "; KEY" line
Remove any blank lines or semicolon lines
Pass to ConvertFrom-StringData to create a hashtable
Convert that to a PowerShell object
$path = "c:\temp\keys.txt"
# Split the file into its key/sec collections. Drop any blank entries created in the split.
(Get-Content -Raw $path) -split ";\s+KEY\s+" | Where-Object{-not [string]::IsNullOrWhiteSpace($_)} | ForEach-Object{
    # Split the block into lines again
    $lines = $_ -split "`r`n" | Where-Object{$_ -notmatch "^;" -and -not [string]::IsNullOrWhiteSpace($_)}
    # Edit the first line so we have a full block of key=value pairs.
    $lines[0] = "key=$($lines[0])"
    # Use ConvertFrom-StringData to do the leg work after we join the lines back into a single string.
    [pscustomobject](($lines -join "`r`n") | ConvertFrom-StringData)
} |
    # Cannot guarantee column order so we force it with this select statement.
    Select-Object KEY,SEC1,SEC2,SEC3,SEC4,SEC5,SEC6
Use Export-Csv to your heart's content now.

Replace first duplicate without regex and increment

I have a text file and I have 3 of the same numbers somewhere in the file. I need to add incrementally to each using PowerShell.
Below is my current code.
$duped = Get-Content $file | sort | Get-Unique
while ($duped -ne $null) {
    $duped = Get-Content $file | sort | Get-Unique | Select -Index $dupecount
    $dupefix = $duped + $dupecount
    echo $duped
    echo $dupefix
    (Get-Content $file) | ForEach-Object {
        $_ -replace "$duped", "$dupefix"
    } | Set-Content $file
    echo $dupecount
    $dupecount = [int]$dupecount + [int]"1"
}
Original:
12345678
12345678
12345678
Intended Result:
123456781
123456782
123456783
$filecontent = (Get-Content C:\temp\pos\bart.txt)
$output = $null
[int]$increment = 1
foreach ($line in $filecontent) {
    if ($line -match '12345679') {
        $line = [int]$line + $increment
        $line
        $output += "$line`n"
        $increment++
    } else {
        $output += "$line`n"
    }
}
$output | Set-Content -Path C:\temp\pos\bart.txt -Force
This works in my test file of 5 lines:
a word
12345679
a second word
12345679
a third word
the output would be :
a word
12345680
a second word
12345681
a third word
Let's see if I understand the question correctly:
You have a file with X-amount of lines:
a word
12345678
a second word
12345678
a third word
You want to catch each instance of 12345678 and add 1 increment to it so that it would become:
a word
12345679
a second word
12345679
a third word
Is that what you are trying to do?

Powershell to count columns in a file

I need to test the integrity of file before importing to SQL.
Each row of the file should have the exact same amount of columns.
These are "|" delimited files.
I also need to ignore the first line as it is garbage.
If every row does not have the same number of columns, then I need to write an error message.
I have tried using something like the following with no luck:
$colCnt = "c:\datafeeds\filetoimport.txt"
$file = (Get-Content $colCnt -Delimiter "|")
$file = $file[1..($file.count - 1)]
Foreach($row in $file){
$row.Count
}
Counting rows is easy. Columns is not.
Any suggestions?
Yep: read the file, skipping the first line. For each line, split it on the pipe and count the results. If the count isn't the same as the previous line's, throw an error and stop.
$colCnt = "c:\datafeeds\filetoimport.txt"
[int]$LastSplitCount = $Null
Get-Content $colCnt | ?{$_} | Select -Skip 1 | %{if($LastSplitCount -and !($_.split("|").Count -eq $LastSplitCount)){"Process stopped at line number $($_.psobject.Properties.value[5]) for column count mis-match.";break}elseif(!$LastSplitCount){$LastSplitCount = $_.split("|").Count}}
That should do it, and if it finds a bad column count it will stop and output something like:
Process stopped at line number 5 for column count mis-match.
Edit: Added a Where catch to skip blank lines ( ?{$_} )
Edit2: Ok, if you know what the column count should be then this is even easier.
Get-Content $colCnt | ?{$_} | Select -Skip 1 | %{if(!($_.split("|").Count -eq 210)){"Process stopped at line number $($_.psobject.Properties.value[5]), incorrect column count of: $($_.split("|").Count).";break}}
If you want it to return all lines that don't have 210 columns just remove the ;break and let it run.
A more generic approach, including a RegEx filter:
$path = "path\to\folder"
$regex = "regex"
$expValue = 450
$files = Get-ChildItem $path | Where-Object {$_.Name -match $regex}
Foreach ($f in $files) {
    $filename = $f.Name
    echo $filename
    $a = Get-Content $f.FullName
    $i = 1
    $e = 0
    echo "Starting..."
    foreach ($line in $a)
    {
        if ($line.length -ne $expValue) {
            echo $filename
            $a | Measure-Object -Line
            echo "Length:"
            echo $line.Length
            echo "Line Nº: "
            echo $i
            $e = $e + 1
        }
        $i = $i + 1
    }
    echo "Finished"
    if ($e -ne 0) {
        echo $e "errors found"
    } else {
        echo "No errors"
        echo ""
    }
}
echo "All files examined"
Another possibility:
$colCnt = "c:\datafeeds\filetoimport.txt"
$DataLine = (Get-Content $colCnt -TotalCount 2)[1]
$DelimCount = ([char[]]$DataLine -eq '|').Count
$MatchString = '.*' + ('\|.*' * $DelimCount)
$test = Select-String -Path $colCnt -Pattern $MatchString -NotMatch |
    where { $_.LineNumber -ne 1 }
That will find the number of delimiter characters in the second line, and build a regex pattern that can be used with Select-String.
The -NotMatch switch will make it return any lines that don't match that pattern as MatchInfo objects that will have the filename, line number and content of the problem lines.
Edit: Since the first line is "garbage" you probably don't care if it didn't match so I added a filter to the result to drop that out.
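To illustrate how the pattern is built (the sample line is made up; note the pipe has to be escaped as `\|`, since a bare `|` means alternation in a regex and the resulting pattern would then match every line):

```powershell
$DataLine = 'a|b|c|d'                             # illustrative second (data) line
$DelimCount = ([char[]]$DataLine -eq '|').Count   # 3 delimiters
$MatchString = '.*' + ('\|.*' * $DelimCount)      # '.*\|.*\|.*\|.*'

'a|b|c|d' -match $MatchString   # True  - correct column count
'a|b|c'   -match $MatchString   # False - would be reported by -NotMatch
```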

Replacing a text at specified line number of a file using powershell

There is a file, for example test.config, that contains the word "WARN" between lines 140 and 170; "WARN" also appears on other lines. I want to replace "WARN" with "DEBUG" only between lines 140 and 170, so that when the file is saved, "WARN" is replaced by "DEBUG" only in that range and all other text is unaffected.
Look at $_.ReadCount, which will help. Just as an example, I replace only rows 10-15.
$content = Get-Content c:\test.txt
$content |
    ForEach-Object {
        if ($_.ReadCount -ge 10 -and $_.ReadCount -le 15) {
            $_ -replace '\w+','replaced'
        } else {
            $_
        }
    } |
    Set-Content c:\test.txt
After that, the file will contain:
1
2
3
4
5
6
7
8
9
replaced
replaced
replaced
replaced
replaced
replaced
16
17
18
19
20
2 Lines:
$FileContent = Get-Content "C:\Some\Path\textfile.txt"
$FileContent | % { If ($_.ReadCount -ge 140 -and $_.ReadCount -le 170) {$_ -Replace "WARN","DEBUG"} Else {$_} } | Set-Content -Path "C:\Some\Path\textfile.txt"
Description:
Write content of text file to array "$FileContent"
Pipe $FileContent array to the ForEach-Object cmdlet "%"
For each item in array, check Line number ($_.ReadCount)
If Line number 140-170, Replace WARN with DEBUG; otherwise write line unmodified.
NOTE: You MUST add the "Else {$_}". Otherwise the text file will only contain the modified lines.
Set-Content to write the content to text file
Using array slicing:
$content = Get-Content c:\test.txt
$out = @()
$out += $content[0..139]
$out += $content[140..168] -replace "warn","DEBUG"
$out += $content[169..($content.count - 1)]
$out | Out-File out.txt
This is the test file
text
text
DEBUG
DEBUG
TEXT
--
PS:\ gc .\stuff1.txt |% { [system.text.regularexpressions.regex]::replace($_,"WARN","DEBUG") } > out.txt
Out.txt look like this
text
text
DEBUG
DEBUG
TEXT
Might be trivial but it does the job:
$content = gc "D:\posh\stack\test.txt"
$start = 139
$end = 169
$content | % {$i = 0; $lines = @()}{
    if ($i -ge $start -and $i -le $end) {
        $lines += $_ -replace 'WARN', 'DEBUG'
    }
    else {
        $lines += $_
    }
    $i += 1
}{Set-Content test_output.txt $lines}
So my script is pretty similar, so I am going to post what I ended up doing.
I had a bunch of servers, all with the same script in the same location, and I needed to update a path in all of the scripts.
I just replaced the entire line (line 3 in this script) and rewrote the script back out.
My server names and the paths that replace the old path were stored in arrays (you could pull them from a DB if you wanted to automate it more):
$servers = @("Server1","Server2")
$Paths = @("\\NASSHARE\SERVER1\Databackups","\\NASSHARE\SERVER2\Databackups")
$a = 0
foreach ($x in $servers)
{
    $dest = "\\" + $x + "\e$\Powershell\Backup.ps1"
    $newline = '$backupNASPath = "' + $Paths[$a] + '"'
    $lines = @(Get-Content $dest)
    $lines[3] = $newline
    $lines > $dest
    $a++
}
it works, and saved me a ton of time logging into each server and updating each path. ugh
Cheers

Extracting columns from text file using PowerShell

I have to extract columns from a text file explained in this post:
Extracting columns from text file using Perl one-liner: similar to Unix cut
but I have to do this also in a Windows Server 2008 which does not have Perl installed. How could I do this using PowerShell? Any ideas or resources? I'm PowerShell noob...
Try this:
Get-Content test.txt | Foreach {($_ -split '\s+',4)[0..2]}
And if you want the data in those columns printed on the same line:
Get-Content test.txt | Foreach {"$(($_ -split '\s+',4)[0..2])"}
Note that this requires PowerShell 2.0 for the -split operator. Also, the ,4 tells the split operator the maximum number of substrings you want; keep in mind the last substring will always contain the entire remainder of the line.
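For example (made-up input), the final piece keeps everything past the third split point:

```powershell
$line = 'col1  col2   col3 col4 col5'
$parts = $line -split '\s+', 4

$parts[0..2]   # col1, col2, col3
$parts[3]      # 'col4 col5' - the remainder, unsplit
```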
For fixed width columns, here's one approach for column width equal to 7 ($w=7):
$res = Get-Content test.txt | Foreach {
    $i=0; $w=7; $c=0
    while ($i + $w -lt $_.length -and $c++ -lt 2) {
        $_.Substring($i,$w)
        $i = $i + $w - 1
    }
}
$res will contain each column for all rows. To set the max columns change $c++ -lt 2 from 2 to something else. There is probably a more elegant solution but don't have time right now to ponder it. :-)
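A simpler sketch of the same fixed-width idea, assuming clean, non-overlapping 7-character columns (the sample line and width are made up):

```powershell
$line = 'AAAAAAABBBBBBBCCCCCCC'   # three 7-character columns
$w = 7
$cols = for ($i = 0; $i -lt $line.Length; $i += $w) {
    # Min() guards against a short final column
    $line.Substring($i, [Math]::Min($w, $line.Length - $i))
}
$cols   # AAAAAAA, BBBBBBB, CCCCCCC
```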
Assuming it's white space delimited this code should do.
$fileName = "someFilePath.txt"
$columnToGet = 2
$columns = gc $fileName |
%{ $_.Split(" ",[StringSplitOptions]"RemoveEmptyEntries")[$columnToGet] }
Or simply:
type foo.bar | % { $_.Split(" ") | select -first 3 }
Try this. This will help to skip initial rows if you want, extract/iterate through columns, edit the column data and rebuild the record:
$header3 = @("Field_1","Field_2","Field_3","Field_4","Field_5")
Import-Csv $fileName -Header $header3 -Delimiter "`t" | select -Skip 3 | Foreach-Object {
    $record = $indexName
    foreach ($property in $_.PSObject.Properties) {
        #doSomething $property.Name, $property.Value
        if ($property.Name -like '*CUSIP*') {
            $record = $record + "," + '"' + $property.Value + '"'
        }
        else {
            $record = $record + "," + $property.Value
        }
    }
    $array.Add($record) | Out-Null
    #write-host $record
}