PowerShell: Format output of Import-Csv records in a specific way

I'd like to format the output of CSV records in a specific way.
Let's say I have a CSV file with these records:
Field1,Field2
blah,blah
bluh,bluh
Now, what I want to achieve is output (to the console or to a file) in this format:
('blah','blah')
('bluh','bluh')
This [1] does not work! PowerShell simply writes nothing into out.txt.
[1]
Import-Csv .\test.csv | Select-Object Field1,Field2 | ForEach-Object {Write-Host "('"$_.Field1"', '"$_.Field2"')"} > .\out.txt

Use the format operator:
import-csv .\test.csv | %{ "({0},{1})" -f $_.Field1, $_.Field2; }
To output to file, simply pipe the output there; i.e.
import-csv .\test.csv | %{ "({0},{1})" -f $_.Field1, $_.Field2; } | out-file .\out.txt
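Note that the format string above prints (blah,blah); if you want the single quotes exactly as shown in the question, put them inside the format string, e.g.:
import-csv .\test.csv | %{ "('{0}','{1}')" -f $_.Field1, $_.Field2 } | out-file .\out.txt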
For more info on the format operator: http://ss64.com/ps/syntax-f-operator.html

Try this; it should work:
...{Write-output "('$($_.Field1)', '$($_.Field2)')"} > .\out.txt
Don't use Write-Host.
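The reason is that Write-Host writes to the host rather than to the success stream (the stream that > redirects), so the redirection captures nothing. A quick way to see the difference in a console:
Write-Host "hello" > .\out.txt      # out.txt stays empty; "hello" appears on screen
Write-Output "hello" > .\out.txt    # out.txt now contains "hello"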

Related

PowerShell: Write specific rows from files to formatted CSV

The following code gives me the correct output in the console, but I need it in a CSV file:
$array = @{}
$files = Get-ChildItem "C:\Temp\Logs\*"
foreach ($file in $files) {
    foreach ($row in (Get-Content $file | select -Last 2)) {
        if ($row -like "Total peak job memory used:*") {
            $sp_memory = $row.Split(" ")[5]
            $array.Add(($file.BaseName), ([double]$sp_memory))
            break
        }
    }
}
$array.GetEnumerator() | sort Value -Descending | Format-Table -AutoSize
current output (console):
required output (csv):
In order to increase performance I would like to avoid the array and write output directly to csv (no append).
Thanks in advance!
Change your last line to this -
$array.GetEnumerator() | sort Value -Descending | select @{l='FileName'; e={$_.Name}}, @{l='Memory (MB)'; e={$_.Value}} | Export-Csv -Path $env:USERPROFILE\Desktop\Output.csv -NoTypeInformation
This will give you a csv file named Output.csv on your desktop.
I am using Calculated properties to change the column headers to FileName and Memory (MB) and piping the output of $array to Export-Csv cmdlet.
Just to let you know, your variable $array is of type Hashtable which won't store duplicate keys. If you need to store duplicate key/value pairs, you can use arrays. Just suggesting! :)
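If duplicate file names were possible, a rough sketch along these lines (reusing the paths and parsing from your own code) would emit one object per file and skip the hashtable entirely:
$results = foreach ($file in Get-ChildItem "C:\Temp\Logs\*") {
    foreach ($row in (Get-Content $file | Select-Object -Last 2)) {
        if ($row -like "Total peak job memory used:*") {
            # emit one object per file; duplicate names are no problem here
            [pscustomobject]@{
                FileName      = $file.BaseName
                'Memory (MB)' = [double]$row.Split(" ")[5]
            }
            break
        }
    }
}
$results | Sort-Object 'Memory (MB)' -Descending |
    Export-Csv "$env:USERPROFILE\Desktop\Output.csv" -NoTypeInformation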

I use -NoTypeInformation, so why do I get the header back when using Out-File?

I filtered this file, data1.csv, by date:
2017.11.1,09:55,1.1,1.2,1.3,1.4,1
2017.11.2,09:55,1.5,1.6,1.7,1.8,2
I don't get a header with -NoTypeInformation:
$CutOff = (Get-Date).AddDays(-2)
$filePath = "data1.csv"
$Data = Import-Csv $filePath -Header Date,Time,A,B,C,D,E
$Data2 = $Data | Where-Object {$_.Date -as [datetime] -gt $Cutoff} | convertto-csv -NoTypeInformation -Delimiter "," | % {$_ -replace '"',''}
But when I write the result back out with Out-File
$Data2 | Out-File "data2.csv" -Encoding utf8 -Force
I get the header back, as data2.csv contains:
Date,Time,A,B,C,D,E
2017.11.2,09:55,1.5,1.6,1.7,1.8,2
Why do I get Date,Time,A,B,C,D,E?
-NoTypeInformation is not about the column header but about the type-information header describing the object type. Remove it to see what shows up. From Microsoft:
Omits the type information header from the output. By default, the string in the output contains #TYPE followed by the fully-qualified name of the object type.
Emphasis mine.
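You can see the difference on a single throw-away object (a PSCustomObject standing in for one CSV row; the output shown is from Windows PowerShell):
[pscustomobject]@{Date='2017.11.2'; Time='09:55'} | ConvertTo-Csv
# #TYPE System.Management.Automation.PSCustomObject
# "Date","Time"
# "2017.11.2","09:55"
[pscustomobject]@{Date='2017.11.2'; Time='09:55'} | ConvertTo-Csv -NoTypeInformation
# "Date","Time"
# "2017.11.2","09:55"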
CSVs need headers; that is why it is making one. If you don't want to see the header in the output, use Select-Object -Skip 1 to remove it:
$Data |
Where-Object {$_.Date -as [datetime] -gt $Cutoff} |
ConvertTo-CSV -NoTypeInformation -Delimiter "," |
Select-Object -Skip 1 |
% {$_ -replace '"'}
I would not bother with the intermediate variable and Out-File; you could pipe to Set-Content here just as well.
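In other words, something like this (file names taken from the question; encoding is left as an assumption):
$Data |
    Where-Object {$_.Date -as [datetime] -gt $Cutoff} |
    ConvertTo-Csv -NoTypeInformation -Delimiter "," |
    Select-Object -Skip 1 |
    ForEach-Object {$_ -replace '"'} |
    Set-Content "data2.csv" -Encoding UTF8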
I am guessing this whole process is meant to keep the source file in the same state, just with some lines filtered out based on date. You could skip most of this by parsing the date out of each line:
$threshold = (Get-Date).AddDays(-2)
$filePath = "c:\temp\bagel.txt"
(Get-Content $filePath) | Where-Object {
    $date, $null = $_.Split(",", 2)
    [datetime]$date -gt $threshold
} | Set-Content $filePath
Now you don't have to worry about PowerShell CSV object structure or output since we act on the raw data of the file itself.
That will take each line of the input file and filter it out if the parsed date does not pass the threshold. Change the encoding on the input/output cmdlets as you see necessary. What $date,$null = $_.Split(",",2) does is split the line on the first comma into two parts: the first becomes $date, and since this is just a filtering condition we dump the rest of the line into $null.
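You can test the idea on one of the sample rows straight in the console:
$date, $null = '2017.11.2,09:55,1.5,1.6,1.7,1.8,2'.Split(",", 2)
$date    # -> 2017.11.2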
Properly-formed CSV files must have column headers. Your use of -NoTypeInformation in generating the CSV does not affect column headers; instead, it affects whether the PowerShell object type information is included. If you Export-CSV without -NoTypeInformation, the first line of your CSV file will have a line that looks like #TYPE System.PSCustomObject, which you don't want if you're going to open the CSV in a spreadsheet program.
If you subsequently Import-Csv, the headers (Date, Time, A, B, C, D, E) are used to create the fields of a PSObject, so that you can refer to them using standard dot notation (e.g., $CSV[$line].Date).
The ability to specify -Header on Import-CSV is essentially a "hack" to allow the cmdlet to handle files that are comma-separated, but which did not include column headers.
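For example, with the header-less data1.csv from the question:
$CSV = Import-Csv "data1.csv" -Header Date,Time,A,B,C,D,E
$CSV[0].Date    # -> 2017.11.1
$CSV[1].E       # -> 2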

CSV file header changes in PowerShell

I have a CSV file in which I want to change the header names.
The current header is name,id and I want to change it to company,transit.
Here is what I wrote in my script:
$a = import-csv .\finalexam\employees.csv -header name,id
foreach ($a in $as[1-$as.count-1]){
# I used 1 here because I want it to ignore the exiting headers.
$_.name -eq company, $_.id -eq transit
}
I don't think this is the correct way to do this.
You're overthinking this... All you want to do is replace the header row: write the new header as the first line of a new file, then read in the old file skipping its first line and append it to the new file.
"Company,Transit"|Set-Content C:\Path\To\NewFile.csv
Get-Content C:\Path\To\Old.csv | Select -skip 1 | Add-Content C:\Path\To\NewFile.csv
Something very simple like this:
$file = Get-Content C:\temp\data.csv
"new,column,name" | Set-Content C:\temp\data.csv
$file | Select-Object -Skip 1 | Add-Content C:\temp\data.csv
Collect the complete file contents first, then write out the new header, and finally restore the rest of the file content while skipping the original header with -Skip 1.

How to change column position in PowerShell?

Is there an easy way to change column position? I'm looking for a way to move column 1 from the beginning to the end of each row, and I would also like to add a zero column as the second-to-last column. Please see the example file below.
Thank you for any suggestions.
File sample
TEXT1,02/10/2015,55.930,57.005,55.600,56.890,1890
TEXT2,02/10/2015,51.060,52.620,50.850,52.510,4935
TEXT3,02/10/2015,50.014,50.74,55.55,52.55,5551
Output:
02/10/2015,55.930,57.005,55.600,56.890,1890,0,TEXT1
02/10/2015,51.060,52.620,50.850,52.510,4935,0,TEXT2
02/10/2015,50.014,50.74,55.55,52.55,5551,0,TEXT3
Another option:
#Prepare test file
(@'
TEXT1,02/10/2015,55.930,57.005,55.600,56.890,1890
TEXT2,02/10/2015,51.060,52.620,50.850,52.510,4935
TEXT3,02/10/2015,50.014,50.74,55.55,52.55,5551
'@).split("`n") |
    foreach {$_.trim()} |
    sc testfile.txt
#Script starts here
$file = 'testfile.txt'
(get-content $file -ReadCount 0) |
    foreach {
        '{1},{2},{3},{4},{5},{6},0,{0}' -f $_.split(',')
    } | Set-Content $file
#End of script
#show results
get-content $file
02/10/2015,55.930,57.005,55.600,56.890,1890,0,TEXT1
02/10/2015,51.060,52.620,50.850,52.510,4935,0,TEXT2
02/10/2015,50.014,50.74,55.55,52.55,5551,0,TEXT3
Sure: split on commas, join everything except the first element back together with commas, append a 0, then tack the first element onto the end and join the whole thing with commas. Something like:
$Input = @"
TEXT1,02/10/2015,55.930,57.005,55.600,56.890,1890
TEXT2,02/10/2015,51.060,52.620,50.850,52.510,4935
TEXT3,02/10/2015,50.014,50.74,55.55,52.55,5551
"@ -split "`n" | ForEach{$_.trim()}
$Input | ForEach{
    $split = $_.split(',')
    ($split[1..($split.count-1)] -join ','), 0, $split[0] -join ','
}
I created a file test.txt containing your sample data. I assigned each field a name ("one","two","three", etc.) so that I could select them by name, then selected them in the order you wanted and exported back to CSV.
First, add the zero to the end of each row; it will end up second-to-last.
gc .\test.txt | %{ "$_,0" } | Out-File test1.txt
Then, rearrange order.
Import-Csv .\test.txt -Header "one","two","three","four","five","six","seven","eight" | Select-Object -Property two,three,four,five,six,seven,eight,one | Export-Csv test2.txt -NoTypeInformation
This will take the output file and get rid of quotes and header line if you would rather not have them.
gc .\test2.txt | %{ $_.replace('"','')} | Select-Object -Skip 1 | out-file test3.txt

Convert date format in CSV using PowerShell

I have two CSV files with 50 columns and more than 10K rows. There is a column with date-time values. In some records it is 01/18/2013 18:16:32 and in others it is 16/01/2014 17:32.
I want to convert the column data to look like this: 01/18/2013 18:16, i.e. remove the seconds. I want to do it with a PowerShell script.
Sample data:
10/1/2014 13:18
10/1/2014 13:21
15/01/2014 12:03:19
15/01/2014 17:39:27
15/01/2014 18:29:44
17/01/2014 13:33:59
Since you're not going to convert to a sane date format anyway, you can just do a regex replace of that column:
Import-Csv foo.csv |
    ForEach-Object {
        $_.Date = $_.Date -replace '(\d+:\d+):\d+', '$1'
        $_    # pass the modified row along to Export-Csv
    } |
    Export-Csv -NoTypeInformation foo-new.csv
You could also go the route of using date parsing, but that's probably a bit slower:
Import-Csv foo.csv |
    ForEach-Object {
        $_.Date = [datetime]::Parse($_.Date).ToString('MM/dd/yyyy HH:mm')
        $_    # pass the modified row along to Export-Csv
    } |
    Export-Csv -NoTypeInformation foo-new.csv
If you're sure that there are no other timestamps (or anything that could look like one) elsewhere in the file, you can also just replace everything that matches without even parsing the file as CSV:
$csv = Get-Content foo.csv -ReadCount 0
$csv = $csv -replace '(\d{2}/\d{2}/\d{4} \d{2}:\d{2}):\d{2}', '$1'
$csv | Out-File foo-new.csv
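A quick check of that pattern against the sample lines (rows that already lack seconds are left alone because the pattern simply doesn't match):
'15/01/2014 12:03:19' -replace '(\d{2}/\d{2}/\d{4} \d{2}:\d{2}):\d{2}', '$1'   # -> 15/01/2014 12:03
'10/1/2014 13:18' -replace '(\d{2}/\d{2}/\d{4} \d{2}:\d{2}):\d{2}', '$1'       # -> 10/1/2014 13:18 (unchanged)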