I'm trying to see if there is a way to read the column values in a CSV file based on the column location. The reason for this is that the file I'm being handed always has its titles changed...
For example, let's say the CSV file's column A (viewed in Excel) looks like the following:
ColumnOne
ValueOne
ValueTwo
ValueThree
Now the user changes the title:
Column 1
ValueOne
ValueTwo
ValueThree
Now I want to create an array of the first column. Normally what I do is the following:
$arrayFirstColumn = Import-Csv 'C:\test\test1.csv' | where-object {$_.ColumnOne} | select-object -expand 'ColumnOne'
However, as we can see, if ColumnOne is changed to Column 1, this code breaks. How can I create this array so that it tolerates an interchangeable column title, given that the column location will always be the same?
You can specify headers of your own on import:
Import-Csv 'C:\path\to\your.csv' -Header 'MyHeaderA','MyHeaderB',...
As long as you don't export the data back to a CSV (or don't require the original headers to be in the output CSV as well) you can use whatever names you like. You can also specify as many header names as you like: if there are fewer names than columns in the CSV, the additional columns are omitted; if there are more, the columns for the additional headers are empty.
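For instance, to build the first-column array from the example file no matter what its title row says, something like this should work (a minimal sketch; 'ColA' is an arbitrary name, and Select-Object -Skip 1 drops the original title row, which is now read as a data row):
$arrayFirstColumn = Import-Csv 'C:\test\test1.csv' -Header 'ColA' |
    Select-Object -Skip 1 |
    Where-Object { $_.ColA } |
    Select-Object -ExpandProperty 'ColA'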
If you need to preserve the original headers you could get the header name(s) you need to work with in variable(s) like this:
$csv = Import-Csv 'C:\test\test1.csv'
$firstCol = $csv | Select-Object -First 1 | ForEach-Object {
    $_.PSObject.Properties | Select-Object -First 1 -Expand Name
}
$arrayFirstColumn = $csv | Where-Object {$_.$firstCol} |
    Select-Object -Expand $firstCol
Or you could simply read the first line from the CSV and split it to get an array with the headers:
$headers = (Get-Content 'C:\test\test1.csv' -TotalCount 1) -split ','
$firstCol = $headers[0]
One option:
$ImportFile = 'C:\test\test1.csv'
$FirstColumn = ((Get-Content $ImportFile -TotalCount 2 | ConvertFrom-Csv).psobject.properties.name)[0]
$FirstColumn
$arrayFirstColumn = Import-Csv $ImportFile | where-object {$_.$FirstColumn} | select-object -expand $FirstColumn
If you are using PowerShell v2.0 then the expression for $FirstColumn in mjolinor's answer would be:
$FirstColumn = ((Get-Content $ImportFile -TotalCount 2 | ConvertFrom-Csv).psobject.properties | ForEach-Object {$_.name})[0]
(Apologies for starting a new answer; I do not yet have enough reputation to add a comment to mjolinor's post)
I've created the following small script to remove rows matching either of two (or more) strings from a CSV.
Each row is a log entry for a given person and an answer they gave.
The CSV has X columns.
The column named FIRST identifies the person.
What I need to do is: when I delete a row matching one of the answers, I also need to delete that person from the whole CSV if any of their rows had one of the two strings.
What I've made so far removes the rows of people having those answers, but the person is still left in the overall CSV with their other answers. I want to remove the person entirely if any of their answers matched.
Can somebody help me out with making the addition or changes to make this happen?
INPUT File
FIRST,LAST,ADDR,ADDR2,GENDER,HOME,WORK
1,N/A,N/A,N/A,N/A,BAF,N/A
10005,JAS,AA,N/A,,ZAV,N/A
10007,JADE,BB,N/A,OMA,N/A,N/A
10007,JADE,N/A,RAV,N/A,N/A,N/A
10011,KIAH,N/A,N/A,BALI,BB,N/A
SCRIPT
$CSVfile = "C:\Temp\Test\Test.csv"
$CSVfile_filtered = "C:\Temp\Test\Test.csv"
$regex001 = "AA"
$regex002 = "BB"
$filterArray = @($regex001,$regex002)
Get-Content $CSVfile | Select-String -pattern $filterArray -notmatch | Set-Content $CSVfile_filtered
The output should then have 10005, 10011, and both lines of 10007 removed. But my version only removes one of the 10007 lines, since only that line matches one of the two patterns.
Using more of PowerShell's built-in cmdlets can make this a little easier to manage.
# Assuming searching only properties ADDR and ADDR2
$filter = 'AA','BB'
# Grouping by First and Last values to easily remove duplicates
# -match uses regex so | is needed for an OR of multiple items
Import-Csv Test.csv | Group-Object First,Last |
    Where {!($_.Group.ADDR,$_.Group.ADDR2 -match ($filter -join '|'))} |
    Foreach-Object Group |
    Export-Csv output.csv -NoType
You would think strictly using text manipulation would be simpler, but it adds other scenarios to consider:
You will need to track users that have duplicate entries and potentially backtrack to remove them (if not grouping). This could require reading the file contents twice.
Your header row could match the string you want to filter so you will need to add it to the output if filtering removes it.
Keeping the scenarios above in mind, you can still use a grouping concept:
$filter = 'AA','BB'
$file = Get-Content Test.csv
# $file[0] is the header row
# -split string uses regex and splits at the second comma
# -split results' [0] element is First,Last values
$file[0],($file |
    Select-Object -Skip 1 |
    Group-Object {($_ -split '(?<=^[^,]*,[^,]*),')[0]} |
    Where {!($_.Group -match ($filter -join '|'))} |
    Foreach-Object Group) | Set-Content output.csv
If I got it right you could do something like this:
$SearchPattern = 'AA', 'BB'
$INPUTCSV = @'
FIRST,LAST,ADDR,ADDR2,GENDER,HOME,WORK
1,N/A,N/A,N/A,N/A,BAF,N/A
10005,JAS,AA,N/A,,ZAV,N/A
10007,JADE,BB,N/A,OMA,N/A,N/A
10007,JADE,N/A,RAV,N/A,N/A,N/A
10011,KIAH,N/A,N/A,BALI,BB,N/A
'@ | ConvertFrom-Csv
$ActualSearchPattern = $INPUTCSV |
    Where-Object {
        $_.LAST -in $SearchPattern -or
        $_.ADDR -in $SearchPattern -or
        $_.ADDR2 -in $SearchPattern -or
        $_.GENDER -in $SearchPattern -or
        $_.HOME -in $SearchPattern -or
        $_.WORK -in $SearchPattern
    } |
    Select-Object -ExpandProperty FIRST

$INPUTCSV |
    Where-Object -Property FIRST -NotIn -Value $ActualSearchPattern |
    Format-Table -AutoSize
There might be more sophisticated or more elegant ways, but I cannot think of one at the moment. ;-)
There is a nice PowerShell module you can use to manipulate the content of a CSV or XLSX file: ImportExcel
This gives you a lot of options for manipulating sheets, columns, etc.
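A minimal sketch of reading a worksheet with that module and pulling the first column by position (the file and worksheet names here are just placeholders, and the module has to be installed first):
# Install-Module ImportExcel -Scope CurrentUser   # one-time setup
$rows = Import-Excel -Path 'C:\Temp\Example.xlsx' -WorksheetName 'Sheet1'
# Grab the first column by position, whatever its header happens to be
$firstHeader = @($rows[0].PSObject.Properties.Name)[0]
$rows | Select-Object -ExpandProperty $firstHeader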
I have a CSV file containing two columns: server name with domain, and date
servername.domain.domain.com,10/15/2018 6:28
servername1.domain.domain.com,10/13/2018 7:28
I need to remove the fully qualified name so it only has the short name, and I need to keep the second column so it looks as-is, like below, either by sending the result to a new CSV or by removing the domain in place. Basically I want the second column untouched, but I need it to be included when creating a new CSV with the altered column 1.
servername,10/15/2018 6:28
servername1,10/13/2018 7:28
I have this:
Import-Csv "filename.csv" -Header b1,b2 |
% {$_.b1.Split('.')[0]} |
Set-Content "filename1.csv"
This works great, but the problem is the new CSV is missing the 2nd column. I need to send the second column to the new CSV file as well.
Use a calculated property to replace the property you want changed, but leave everything else untouched:
Import-Csv 'input.csv' -Header 'b1', 'b2' |
    Select-Object -Property @{n='b1';e={$_.b1.Split('.')[0]}}, * -Exclude b1 |
    Export-Csv 'output.csv' -NoType
Note that you only need to use the parameter -Header if your CSV data doesn't already have a header line. Otherwise you should remove the parameter.
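For example, if the input file already had a header line (say the columns were called Server and Date, purely as an illustration), the same calculated-property approach would simply reuse the existing names:
# Hypothetical input with an existing header line "Server,Date"
Import-Csv 'input.csv' |
    Select-Object -Property @{n='Server';e={$_.Server.Split('.')[0]}}, * -ExcludeProperty Server |
    Export-Csv 'output.csv' -NoTypeInformation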
If your input file doesn't have headers and you want to create the output file also without headers you can't use Export-Csv, though. Use ConvertTo-Csv to create the CSV text output, then skip over the first line (to remove the headers) and write the rest to the output file with Set-Content.
Import-Csv 'input.csv' -Header 'b1', 'b2' |
    Select-Object -Property @{n='b1';e={$_.b1.Split('.')[0]}}, * -Exclude b1 |
    ConvertTo-Csv -NoType |
    Select-Object -Skip 1 |
    Set-Content 'output.csv'
I have a csv file that may have unknown headers, one of the columns will contain email addresses for example.
Is there a way to select only the column that contains the email addresses and save it as a list to a variable?
One CSV's header could say email, another could say emailaddresses, another could say email addresses, and another file might not even have the word email in the header. As you can see, the headers are different. So I want to be able to detect the correct column first and use that data further in the script. Once the column is identified based on the data it contains, select that column only.
I've tried the Where-Object and Select-String cmdlets. With both, the output is the entire array and not just the data in the column I want.
$CSV = import-csv file.csv
$CSV | Where {$_ -like "*@domain.com"}
This outputs the entire array as all rows will contain this data.
Sample Data for visualization
id,first_name,bagel,last_name
1,Base,bcruikshank0@homestead.com,Cruikshank
2,Regan,rbriamo1@ebay.co.uk,Briamo
3,Ryley,rsacase2@mysql.com,Sacase
4,Siobhan,sdonnett3@is.gd,Donnett
5,Patty,pesmonde4@diigo.com,Esmonde
Bagel is obviously what we are trying to find. And we will pretend that we have no knowledge of the column's name or position ahead of time.
Find column dynamically
# Import the CSV
$data = Import-CSV $path
# Take the first row and get its columns
$columns = $data[0].psobject.properties.name
# Cycle the columns to find the one that has an email address for a row value
# Use a VERY crude regex to validate an email address.
$emailColumn = $columns | Where-Object{$data[0].$_ -match ".*@.*\..*"}
# Example of using the found column(s) to display data.
$data | Select-Object $emailColumn
Basically, read in the CSV like normal and use the first row's data to try and figure out which column holds the email address. There is a caveat that if more than one column matches, all of them will get returned.
To enforce only one result, a simple pipe to Select-Object -First 1 will handle that. Then you just have to hope the first one is the "right" one. For example:
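A quick sketch using the same variables as above (the crude regex is just for illustration):
# Take only the first column whose first-row value looks like an email address
$emailColumn = $columns | Where-Object { $data[0].$_ -match ".*@.*\..*" } | Select-Object -First 1
# Then pull just that column's values
$data | Select-Object -ExpandProperty $emailColumn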
If you're using Import-Csv, the result is a PSCustomObject.
$CsvObject = Import-Csv -Path 'C:\Temp\Example.csv'
$Header = ($CsvObject | Get-Member | Where-Object { $_.Name -like '*email*' }).Name
$CsvObject.$Header
This filters for the header containing email, then selects that column from the object.
Edit for requirement:
$Str = @((Get-Content -Path 'C:\Temp\Example.csv') -like '*@domain.com*')
$Headers = @((Get-Content -Path 'C:\Temp\Example.csv' -TotalCount 1) -split ',')
$Str | ConvertFrom-Csv -Delimiter ',' -Header $Headers
Other method:
$PathFile = "c:\temp\test.csv"
$columnName = $null
$content = Get-Content $PathFile
foreach ($item in $content)
{
    $SplitRow = $item -split ','
    # Index of the first field in this row that looks like an email address
    $Cpt = 0..($SplitRow.Count - 1) | where {$SplitRow[$_] -match ".*@.*\..*"} | select -First 1
    # Compare against $null so a match in column 0 is not treated as "no match"
    if ($null -ne $Cpt)
    {
        $columnName = ($content[0] -split ',')[$Cpt]
        break
    }
}
if ($columnName)
{
    Import-Csv "c:\temp\test.csv" | select $columnName
}
else
{
    "No email column found"
}
I'm having trouble making some changes to a series of CSV files, all with the same data structure. I'm trying to combine all of the files into one CSV file or one tab-delimited text file (I don't really mind which); however, each file needs to have 2 empty rows removed and two of the columns removed. Below is an example:
col1,col2,col3,col4,col5,col6 <-remove
col1,col2,col3,col4,col5,col6 <-remove
col1,col2,col3,col4,col5,col6
col1,col2,col3,col4,col5,col6
(the col3 and col5 columns are to be removed)
End Result:
col1,col2,col4,col6
col1,col2,col4,col6
This is my attempt at doing this (I'm very new to PowerShell):
$ListofFiles = "example.csv" # this is a list of all the CSV files
ForEach ($file in $ListofFiles)
{
$content = Get-Content ($file)
$content = $content[2..($content.Count)]
$contentArray = @()
[string[]]$contentArray = $content -split ","
$content = $content[0..2 + 4 + 6]
Add-Content '...\output.txt' $content
}
Where am I going wrong here...
Your example file should be read before the foreach, to fetch the file list:
$ListofFiles = get-content "example.csv"
Inside the foreach you are getting the content of the main file:
$content = Get-Content ($ListofFiles)
instead of
$content = Get-Content $file
And for removing rows I recommend this (select -Index keeps only the rows at the listed positions):
$obj = get-content C:\t.csv | select -Index 0,1,3
For removing columns, keep only the ones you want (here column numbers 0,1,3,5):
$obj | %{(($_.split(","))[0,1,3,5]) -join "," } | out-file test.csv -Append
Assuming the initial files look like this:
col1,col2,col3,col4,col5,col6
col1,col2,col3,col4,col5,col6
,,,,,
,,,,,
You can also try this one-liner:
Import-Csv D:\temp\*.csv -Header 'C1','C2','C3','C4','C5','C6' | where {$_.C1 -ne ''} | select -Property 'C1','C2','C4','C6' | Export-Csv 'd:\temp\final.csv' -NoTypeInformation
Given that your CSVs all have the same structure, you can open them directly while providing the header, then remove the objects with missing data, and then export all the objects to a single CSV file.
It is sufficient to specify fictitious column names (the number of names may even exceed the number of columns in the file), filter where you want, and exclude the columns that you do not want to keep.
gci "c:\yourdirwithcsv" -file -filter *.csv |
%{ Import-Csv $_.FullName -Header C1,C2,C3,C4,C5,C6 |
where C1 -ne '' |
select -ExcludeProperty C3, C4 |
export-csv "c:\temp\merged.csv" -NoTypeInformation
}
I am working on a CSV File which I recently created. The CSV file contains columns with headers and corresponding rows.
I need to remove entire columns (including their data) that have specific text common to their headers. For example, column 1 has a header named intID, column 2 has a header named boolID, column 3 has a header named charID, and so on ('ID' being the common text). There are some columns that don't have 'ID' in their headers, so we need to retain those.
The CSV file is generated dynamically, so there may be more or fewer columns based on what data we select for the CSV. But we need these columns, whose headers share some common text, to be removed.
How can we achieve this?
Would something like this do the trick?
$yourfile = "<path to your csv>"
# Import the CSV
$csv = Import-Csv -Path $yourfile
# Find all columns that do not end with "ID"
$colsToKeep = $csv | Get-Member -MemberType NoteProperty |?{$_.name -notmatch "^.+ID$"} | Select-Object -ExpandProperty name
# Filter out all unwanted columns
$newCsv = $csv | Select-Object -Property $colsToKeep
# Export CSV to new file
$newCsv | Export-Csv -Path "<path to new csv>" -NoTypeInformation
Assuming the following:
the ID part is not the plain text "ID" but dynamic, arbitrary text
the headers of interest start with int, char, or bool
Let's count the occurrences of each ID part, keep only the headers whose ID part occurs just once, then export the CSV.
$csv = Import-Csv 1.csv
$prefix = '^(int|char|bool)' # or '^([a-z])' for any lowercase text
$headers = $csv[0].PSObject.Properties.Name
$uniqueIDs = $headers -creplace $prefix, '' | group | ? Count -eq 1 | select -expand Name
$uniqueHeaders = $headers | ?{ $_ -creplace $prefix, '' -in $uniqueIDs }
$csv | select $uniqueHeaders | Export-Csv 2.csv -NoTypeInformation
Note: in the old PowerShell 2.0 instead of ? Count -eq 1 use ?{ $_.Count -eq 1 }