How to export array to csv in powershell?

$x1 = (1,22,333,4444)
$x1 | export-csv 'd:\123.csv' -Force
Then I get this:
How do I get a table like this instead?

CSVs can't hold arbitrary data like this properly. You can use | Out-File x.csv to dump the values out on individual lines and then read them back in with Import-Csv, specifying headers, but a proper CSV needs headers when it is saved.
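For example, a rough sketch of that dump-and-reimport round trip (the file name is just illustrative):
1,22,333,4444 | Out-File .\numbers.csv          # one value per line, not a real CSV yet
Import-Csv .\numbers.csv -Header Number         # read the lines back, naming the single column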
If you want to save it out properly, you need to convert it into an object where the numbers are actually "named", so PowerShell can create a valid CSV.
1,22,333,4444 | ForEach-Object {
    [PSCustomObject]@{Number = $_}
} | Export-Csv C:\++\123.csv -NoTypeInformation
-NoTypeInformation removes the #TYPE header.
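Without that switch, the first line of the output file would be a type annotation like this (in Windows PowerShell; PowerShell 6+ omits it by default):
#TYPE System.Management.Automation.PSCustomObject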
That being said, Out-File is the only way to match your 'expected' output table; you don't seem to be looking for a CSV here.
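For example, reusing the path from the question:
1,22,333,4444 | Out-File 'd:\123.csv'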

This will create a proper csv file with a header:
ConvertFrom-Csv (1,22,333,4444) -Header Number | Export-Csv .\123.csv -NoTypeInformation
Loaded in Excel, cell A1 will contain Number.
This will create a fake CSV that Excel accepts, matching your sample table:
(1,22,333,4444)|Set-Content .\234.csv
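To make the difference concrete (assuming Windows PowerShell 5.1, which quotes every field on export), 123.csv would contain:
"Number"
"1"
"22"
"333"
"4444"
while 234.csv would contain just the four raw values, one per line.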

Related

Powershell remove a column from csv only if a word is present

I have a csv with columns A, B, C. I would like to import that to powershell and only on column C, remove any rows that have the word "Unknown" listed. If Column A or B has "Unknown", they stay, however, if Column C has it, the entire row gets deleted. Per the picture below, Row 4 would be deleted.
Can someone please provide a sample script to do this?
Thanks!
So, you have 3 problems you need to solve:
Import the data from the CSV file
Filter it based on the value of column C
Export the filtered data to a file again
To import, use the aptly named Import-Csv cmdlet:
$data = Import-Csv .\path\to\file.csv
Import-Csv will parse the CSV file and for each row it reads, it will output 1 object with properties corresponding to the column names in the header row.
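As a quick sanity check, inspecting the first parsed row should show one object with A, B and C properties holding that row's values:
$data[0] | Format-List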
To filter these objects based on the value of their C property, use the Where-Object cmdlet:
$filteredData = $data |Where-Object C -ne 'Unknown'
Where-Object will test whether the C property on each object does not have the value 'Unknown' (-ne = not equals), and discard any object for which that's not the case.
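The same filter written with the classic script-block syntax (needed on PowerShell 2.0, where the simplified comparison syntax above isn't available) would be:
$filteredData = $data | Where-Object { $_.C -ne 'Unknown' }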
To re-export the filtered data, use the Export-Csv cmdlet:
$filteredData |Export-Csv .\path\to\output.csv -NoTypeInformation
You can also combine all three statements into a single pipeline expression:
Import-Csv .\path\to\file.csv |Where-Object C -ne 'Unknown' |Export-Csv .\path\to\output.csv -NoTypeInformation
This "one-liner" approach might be preferable if you're working on large CSV files (> hundreds of thousands of records), as it doesn't require reading the entire CSV file into memory at once.
$Data = Get-Content "C:\file.csv" | ConvertFrom-Csv
$Data | Where-Object {$_.C -ne 'Unknown'} | Export-Csv "C:\file_New.csv"

Using Powershell to write out two header rows without deleting existing data

I have a need to generate two header rows to an existing csv file because the system where the csv will be uploaded needs the two header rows. The csv file will contain data that I want to keep.
I have been testing a powershell script to do this, and I can write a single row of headers, but am struggling to write two rows.
Below is the powershell script I am currently trying to build out.
$file = "C:\Users\_svcamerarcgis\Desktop\Test.csv"
$filedata = import-csv $file -Header WorkorderETL 'n ICFAORNonICFA, WONUmber, Origin
$filedata | export-csv $file -NoTypeInformation
The end result I'm looking for should be as follows:
WorkorderETL
ICFAORNonICFA, WONUmber, Origin
xxx,yyy,zzz
The sole purpose of Import-Csv's -Header parameter is to provide an array of column names to serve as the property names of the custom objects that the CSV rows are parsed into - you cannot repurpose that for special output formatting for later exporting.
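To illustrate (file and column names assumed): with -Header, Import-Csv treats the file's first line as data, so an existing header row simply shows up as another record:
Import-Csv .\Test.csv -Header A,B,C | Select-Object -First 2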
You can use the following approach instead, bypassing the need for Import-Csv and Export-Csv altogether (PSv5+):
$file = 'C:\Users\User\OneDrive\Scripts\StackTesting\Test.csv'
# Prepend the 2-line header to the existing file content
# and save it back to the same file
# Adjust the encoding as needed.
@'
WorkorderETL
ICFAORNonICFA,WONUmber,Origin

'@ + (Get-Content -Raw $file) | Set-Content $file -NoNewline -Encoding utf8
To be safe, be sure to create a backup of the original file first.
Since the file is being read (in full) and rewritten in the same pipeline, there's a hypothetical chance of data loss if writing back to the input file gets interrupted.
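For example:
Copy-Item -LiteralPath $file -Destination "$file.bak"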
You may be better off handling this as a text file, considering you are just trying to add a single line at the top of the CSV:
$file = "C:\Users\User\OneDrive\Scripts\StackTesting\Test.csv"
$CSV = "c1r1, c2r1, c3r1 `nc1r2, c2r2, c3r2"
$filedata = Get-Content $file
$filedata = "WorkorderETL`n" + $CSV
$filedata | Out-File $file
This will result in the CSV file holding:
WorkorderETL
c1r1, c2r1, c3r1
c1r2, c2r2, c3r2
Which looks to be what you want.
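If you want to prepend the header to the content that is actually in the file (rather than the sample $CSV above), a minimal sketch would be:
$filedata = Get-Content $file
@('WorkorderETL') + $filedata | Set-Content $file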

Filtering data from CSV file with PowerShell

I have a huge csv file where the first line contains the headers for the data. Because of the file size I can't open it with Excel or similar tools. I need to keep only the rows I need: I want to create a new csv file which contains only the data where Header3 = "TextHere". Everything else is filtered away.
I have tried Get-Content | Select-String | Out-File 'newfile.csv' in PowerShell, but it lost the header row and also messed up the data, putting values into the wrong fields. There are empty fields in the data and I believe that is what's causing it. When I tried Get-Content with -First or -Last, the data seemed to be in order.
I have no experience handling big data files or PowerShell. Options other than PowerShell are also acceptable, as long as they are free for non-commercial use.
Try something like this (modify the delimiter if necessary):
import-csv "c:\temp\yourfile.csv" -delimiter ";" | where Header3 -eq "TextHere" | export-csv "c:\temp\result.csv" -delimiter ";" -notype
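If you first want to check the actual header names (and the delimiter) without loading the whole file into memory, peeking at the first line is cheap:
Get-Content "c:\temp\yourfile.csv" -TotalCount 1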

How can I alternate column headers in a tab delimited file?

I have a tab delimited txt file and I need to switch the first and second column names (without switching the columns' data). In other words, I need to rename A(Id) to B(ExternalId) and B(ExternalId) to A(Id). The other columns in the file (other data) should stay unchanged. I'm very new to PowerShell, please advise. As I understand it, I need to use the Import-Csv/Export-Csv cmdlets.
I tried this, but it's not working the right way...
Import-Csv 'C:\original_users.txt' |
Select-Object Id, @{Name="ExternalId";Expression={$_."Id"}}; Select-Object ExternalId, @{Name="Id";Expression={$_."ExternalId"}} |
Export-Csv 'C:\changed_users.txt'
The Import-CSV and Export-CSV cmdlets have their strengths but this might not be one of them. The latter cmdlet would introduce quoting that might not be in your original file and that might not be desired.
Either way, why not just do some text manipulation on the first line? Let's read in the file and output the first line, edited, followed by the remainder of the file. This sample writes to a new location but you could easily write it back to the same file.
# Get the full file into a variable
$fullFile = Get-Content "c:\temp\mockdata.csv"
# Parse the first line into a column array
$columns = $fullFile[0].Split("`t")
# Rebuild the header by switching the columns order as desired.
$newHeader = ($columns[1],$columns[0] + ($columns | Select-Object -Skip 2)) -join "`t"
# Write the header back to file then the rest of the data.
$outputPath = "C:\somepath.txt"
$newHeader | Set-Content $outputPath
$fullFile | Select-Object -Skip 1 | Add-Content $outputPath
This also preserves the presence of other columns and their data.

Using duplicate headers in Powershell .csv file

I have a .csv file and I want to import it into powershell then iterate through the file changing certain values. I then want the output to append to the original .csv file, so that the values have been updated.
My issue is that the .csv file has headers which aren't unique, and can't be changed as then it won't work in another program. Originally I defined my own headers in the powershell to get around this but then the output file has these new headers when it needs to have the old ones.
I have also tried ConvertFrom-Csv which means I can no longer access the columns I need to, so lots of runtime errors.
What would be ideal is to be able to use the defined column headers and then convert back to the original column headers. My current code is below:
$csvfile = Import-Csv C:\test.csv | Where-Object {$_.'3' -eq $classID} | ConvertFrom-Csv
foreach($record in $csvfile){
*do something*}
$csvfile | Export-Csv -path C:\test.csv -NoTypeInformation -Append
I've searched the web now for some hours and tried everything I've come across, to no avail.
Thanks in advance.
This is a somewhat hackish implementation but should work.
Remove all the headers as a single line and save it somewhere
Parse the new result-set (with the headers removed)
Add the line at the top when you are finished
A CSV is just a comma-delimited text file; you don't have to treat it as structured data. Feel free to slice and dice it as you want.
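A minimal sketch of that approach, reusing the path from the question (the output path and the actual row edits are placeholders):
$lines  = Get-Content C:\test.csv
$header = $lines[0]                        # the duplicate-header line, kept verbatim
$body   = $lines | Select-Object -Skip 1   # the data rows as plain text
# ...modify $body here (string replacements, etc.)...
@($header) + $body | Set-Content C:\test_updated.csv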
Since you know beforehand how many columns are in the input CSV file, you can import without the header and process internally. Example:
$columns = 78
Import-Csv "inputfile.csv" -Header (0..$($columns - 1)) | Select-Object -Skip 1 | ForEach-Object {
    $row = $_
    $outputObject = New-Object PSObject
    0..$($columns - 1) | ForEach-Object {
        # Copy each positional value into a generically named property (Col0, Col1, ...)
        $outputObject | Add-Member NoteProperty "Col$_" $row.$_
    }
    $outputObject
} | Export-Csv "outputfile.csv" -NoTypeInformation
This example generates new PSObjects and then outputs a new CSV file with generic column names (Col0, Col1, etc.).
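If the output ultimately needs the original (duplicate) header line back instead of Col0, Col1, etc., one option, under the same assumed file names, is to splice it in afterwards:
$originalHeader = Get-Content "inputfile.csv" -TotalCount 1
$newBody = Get-Content "outputfile.csv" | Select-Object -Skip 1
@($originalHeader) + $newBody | Set-Content "outputfile.csv"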