PowerShell script to remove | from CSV file

I have a CSV file that is outputting data in this format:
sitename | groupname | grouprole
--------------------+----------------------------+--------------------------
Administration | Group1 | NewRole
Finance | Group1 | NewRole
Default | Group1 | NewRole
I am trying to remove the | marks and replace them with a ; delimiter.
I am already using: ConvertTo-Csv -Delimiter ';' -NoType | % {$_ -replace ' ',''} to remove the padded spaces.
I tried using % {$_ -replace '|',';'} to have it replace the | with ; so that it would format as a proper CSV file. Instead, the results were:
;s;i;t;e;n;a;m;e;|;g;r;o;u;p;n;a;m;e;|;g;r;o;u;p;r;o;l;e;
How do I go about removing the | in a CSV file and replacing it with a proper delimiter?
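The | is a regex metacharacter (alternation), and -replace uses regular expressions, so the pattern '|' matches the empty string at every position; that is why a ; ends up between every character while the actual pipes survive. A likely fix (a sketch with placeholder file names, since the full pipeline isn't shown) is to escape the pipe, or to use the non-regex .Replace() string method:
# Sketch: escape | in the regex pattern, or avoid regex entirely with .Replace().
# report.txt / report.csv are placeholder names.
Get-Content .\report.txt |
    ForEach-Object { ($_ -replace ' ', '') -replace '\|', ';' } |
    Set-Content .\report.csv

# Equivalent without regex:
Get-Content .\report.txt |
    ForEach-Object { $_.Replace(' ', '').Replace('|', ';') } |
    Set-Content .\report.csv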

Related

Powershell Replace Regex Import CSV File

I have a CSV file named test.csv (C:\testing\test.csv) in this format:
File Name,Location,Added (GMT),Created (GMT),Last Modified (GMT),File Size (Bytes),File Size,Extension,Incident Type
10-MB-Test (1).docx,\\blah\Test 3,10/8/2020 21:13,10/8/2020 19:33,10/8/2020 16:26,10723331,10.23 (MB),docx,low_data_discover
10-MB-Test (1).xlsx,\\blah2\Test 3\,10/8/2020 21:14,10/8/2020 19:33,10/8/2020 16:25,9566567,9.12 (MB),xlsx,high_data_discover
1-MB-Test.docx,\\blah3\Test 3\,10/8/2020 21:13,10/8/2020 19:33,10/8/2020 16:37,1045970,1021.46 (KB),docx,medium_data_discover
I'm trying to remove trailing "\" characters (if they exist) from values in the Location column using this PowerShell code:
$file1 = import-csv -path "C:\testing\test.csv" | % {$_."Location" -replace "\\$",""} | Select-Object * | export-csv -NoTypeInformation "C:\testing\blah.csv"
However, when I run the code, the only output I get is a column named "Length" with a numerical value. Can you assist?
You're only sending the new string (updated location) down the pipeline. You can update each location and then export it at the end.
$file1 = import-csv -path "C:\testing\test.csv"
$file1 | ForEach-Object {$_.location = $_.location -replace '\\$'}
$file1 | export-csv -NoTypeInformation "C:\testing\blah.csv"
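If you prefer to keep it in one pipeline, a sketch of the same idea (modify Location, then emit $_ so the whole object continues down the pipeline) would be:
Import-Csv -Path "C:\testing\test.csv" |
    ForEach-Object { $_.Location = $_.Location -replace '\\$'; $_ } |
    Export-Csv -NoTypeInformation "C:\testing\blah.csv"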

Copy table from .txt file to a new .txt file by skipping certain lines

I have one table (.txt file) in this form:
Table: HHBB
Displayed Fields: 1 of 5 Fixed Columns: 4
-----------------------------------------------------------------------------
| |ID |NAME |Zähler |Obj |ID-RON |MANI |Felder |Nim
|----------------------------------------------------------------------------
| |007 |Kano |000001 |Lad |19283712 | |/HA |
| |007 |Bani |000002 |Bad |917391823 | |/LA |
I want to save this table into another .txt file, but I want to skip the lines that match, for example, Table and Displayed Fields. This is what I tried:
If ([string]::IsNullOrWhitespace($tempInputRecord2) -or $_ -match "=|Table:|Displayed|----") {
continue
}
How can I do that?
And another question:
What is the best way to write the lines one by one into a new text file?
So you just want to remove the lines which start with Table: or Displayed Fields: and output results to a new file? Use Where-Object to filter lines, and Out-File to write them to the file:
Get-Content test.txt |
Where-Object { $_ -notlike "Table:*" -and $_ -notlike "Displayed Fields:*" } |
Out-File test2.txt
There are many ways to do simple tasks like this:
If the header to skip occurs only once:
Get-Content test.txt|Select-Object -Skip 2|Set-Content test2.txt
A similar approach to yours, with -notmatch and regex alternation:
Get-Content test.txt|Where-Object {$_ -notmatch '^Table:|^Displayed Fields:'}|Set-Content test2.txt
When forcing a complete read into memory by enclosing Get-Content in parentheses, you can write back to the same file:
(Get-Content test.txt)|Select-Object -Skip 2|Set-Content test.txt
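As for the second question (writing the lines one by one): continue only works as a loop keyword inside a real foreach/for/while loop, not inside a ForEach-Object script block, so a sketch with an explicit loop and Add-Content (same file names as above) could look like:
foreach ($line in Get-Content test.txt) {
    # skip blank lines and the Table:/Displayed Fields: metadata lines
    if ([string]::IsNullOrWhiteSpace($line) -or $line -match '^(Table:|Displayed Fields:)') {
        continue
    }
    Add-Content -Path test2.txt -Value $line
}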

How to export to "non-standard" CSV with Powershell

I need to convert a file with this format:
2015.03.27,09:00,1.08764,1.08827,1.08535,1.08747,8941
2015.03.27,10:00,1.08745,1.08893,1.08604,1.08762,7558
to this format
2015.03.27,1.08764,1.08827,1.08535,1.08747,1
2015.03.27,1.08745,1.08893,1.08604,1.08762,1
I started with this code but can't see how to achieve the full transformation:
Import-Csv in.csv -Header Date,Time,O,H,L,C,V | Select-Object Date,O,H,L,C,V | Export-Csv -path out.csv -NoTypeInformation
(Get-Content out.csv) | % {$_ -replace '"', ""} | out-file -FilePath out.csv -Force -Encoding ascii
which outputs
Date,O,H,L,C,V
2015.03.27,1.08745,1.08893,1.08604,1.08762,8941
2015.03.27,1.08763,1.08911,1.08542,1.08901,7558
After that I need to:
- remove the header (I tried -NoHeader, which is not recognized)
- replace the last column with 1.
How can I do that as simply as possible (if possible, without looping through each row)?
Update: I have finally simplified the requirement. I just need to replace the last column with a constant.
Ok, this could be one massive one-liner... I'm going to do line breaks at the pipes for sanity reasons though.
Import-Csv in.csv -header Date,Time,O,H,L,C,V|select * -ExcludeProperty time|
%{$_.date = [datetime]::ParseExact($_.date,"yyyy.MM.dd",$null).tostring("yyMMdd");$_}|
ConvertTo-Csv -NoTypeInformation|
select -skip 1|
%{$_ -replace '"'}|
Set-Content out.csv -encoding ascii
Basically, I import the CSV, exclude the time column, convert the date column to an actual [datetime] object, and then convert it back to the desired format. Then I pass the modified object (with the newly formatted date) down the pipe to ConvertTo-Csv, skip the first line (the headers you don't want), remove the quotes, and lastly output to file with Set-Content (faster than Out-File).
Edit: Just saw your update... to do that we'll just change the last column to 1 at the same time we modify the date column by adding $_.v=1;...
%{$_.date = [datetime]::ParseExact($_.date,"yyyy.MM.dd",$null).tostring("yyMMdd");$_.v=1;$_}|
Whole script modified:
Import-Csv in.csv -header Date,Time,O,H,L,C,V|select * -ExcludeProperty time|
%{$_.date = [datetime]::ParseExact($_.date,"yyyy.MM.dd",$null).tostring("yyMMdd");$_.v=1;$_}|
ConvertTo-Csv -NoTypeInformation|
select -skip 1|
%{$_ -replace '"'}|
Set-Content out.csv -encoding ascii
Oh, and this has the added benefit of not having to read the file in, write the file to the drive, read that file in, and then write the file to the drive again.

How to change column position in powershell?

Is there an easy way to change column positions? I'm looking for a way to move column 1 from the beginning to the end of each row, and I would also like to add a zero column as the second-to-last column. Please see the txt file example below.
Thank you for any suggestions.
File sample
TEXT1,02/10/2015,55.930,57.005,55.600,56.890,1890
TEXT2,02/10/2015,51.060,52.620,50.850,52.510,4935
TEXT3,02/10/2015,50.014,50.74,55.55,52.55,5551
Output:
02/10/2015,55.930,57.005,55.600,56.890,1890,0,TEXT1
02/10/2015,51.060,52.620,50.850,52.510,4935,0,TEXT2
02/10/2015,50.014,50.74,55.55,52.55,5551,0,TEXT3
Another option:
#Prepare test file
(@'
TEXT1,02/10/2015,55.930,57.005,55.600,56.890,1890
TEXT2,02/10/2015,51.060,52.620,50.850,52.510,4935
TEXT3,02/10/2015,50.014,50.74,55.55,52.55,5551
'@).split("`n") |
foreach {$_.trim()} |
sc testfile.txt
#Script starts here
$file = 'testfile.txt'
(get-content $file -ReadCount 0) |
foreach {
'{1},{2},{3},{4},{5},{6},0,{0}' -f $_.split(',')
} | Set-Content $file
#End of script
#show results
get-content $file
02/10/2015,55.930,57.005,55.600,56.890,1890,0,TEXT1
02/10/2015,51.060,52.620,50.850,52.510,4935,0,TEXT2
02/10/2015,50.014,50.74,55.55,52.55,5551,0,TEXT3
Sure, split on commas, spit the results back minus the first result joined by commas, add a 0, and then add the first result to the end and join the whole thing with commas. Something like:
$Input = #"
TEXT1,02/10/2015,55.930,57.005,55.600,56.890,1890
TEXT2,02/10/2015,51.060,52.620,50.850,52.510,4935
TEXT3,02/10/2015,50.014,50.74,55.55,52.55,5551
"# -split "`n"|ForEach{$_.trim()}
$Input|ForEach{
$split = $_.split(',')
($Split[1..($split.count-1)]-join ','),0,$split[0] -join ','
}
I created a file test.txt containing your sample data. I assigned each field a name ("one","two","three", etc.) so that I could select them by name, then just selected and exported back to CSV in the order you wanted.
First, add the zero to the end of each line; it will end up as the second-to-last column.
gc .\test.txt | %{ "$_,0" } | Out-File test1.txt
Then, rearrange order.
Import-Csv .\test1.txt -Header "one","two","three","four","five","six","seven","eight" | Select-Object -Property two,three,four,five,six,seven,eight,one | Export-Csv test2.txt -NoTypeInformation
This will take the output file and get rid of quotes and header line if you would rather not have them.
gc .\test2.txt | %{ $_.replace('"','')} | Select-Object -Skip 1 | out-file test3.txt
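If the intermediate files aren't needed, the three steps could arguably be collapsed into a single pipeline (a sketch reusing the same header names and files):
Import-Csv .\test.txt -Header one,two,three,four,five,six,seven |
    ForEach-Object {
        # columns 2-7, then the constant 0, then the original first column
        '{0},{1},{2},{3},{4},{5},0,{6}' -f $_.two, $_.three, $_.four, $_.five, $_.six, $_.seven, $_.one
    } |
    Out-File test3.txt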

Convert all values in a csv column to integer (or remove leading zeroes) in PowerShell

I have a csv file with an ID_code column and the IDs have leading zeroes that I want to remove. I found out that if I convert the value to an integer, the leading zeroes should disappear but I don't know how to apply that to all values throughout the csv. This is what I tried but the resulting csv comes out blank:
Import-Csv C:\folder\myFile.csv |
ForEach-Object {
$_.ID_code = [convert]::ToInt32($_.ID_code, 10) } |
convertto-csv -NoTypeInformation | %{$_-replace '"', ""} |
out-file C:\folder\myFile2.csv
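As an aside, the pipeline above presumably comes out blank because the ForEach-Object block assigns to $_.ID_code but never outputs $_, so nothing reaches ConvertTo-Csv. A minimal sketch of a fix that keeps the original shape is to emit the object at the end of the block:
Import-Csv C:\folder\myFile.csv |
    ForEach-Object {
        $_.ID_code = [convert]::ToInt32($_.ID_code, 10)  # drops the leading zeroes
        $_                                               # emit the modified object
    } |
    ConvertTo-Csv -NoTypeInformation | % {$_ -replace '"', ""} |
    Out-File C:\folder\myFile2.csv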
You could use a calculated property for that. You don't need to parse the value, though. Simply casting it to int should suffice:
Import-Csv 'C:\path\to\input.csv' |
select -Property @{n='ID_code';e={[int]$_.ID_code}},* -Exclude 'ID_code' |
Export-Csv 'C:\path\to\output.csv'
Another option (since you're exporting the data back to a text file anyway) would be to just remove leading zeroes from the string value:
Import-Csv 'C:\path\to\input.csv' |
select -Property @{n='ID_code';e={$_.ID_code -replace '^0+'}},* -Exclude 'ID_code' |
Export-Csv 'C:\path\to\output.csv'
If you know the position of the ID_code column you don't even need to import the CSV. If for instance the column is the first column in the CSV you could do the replacement like this:
(Get-Content 'C:\path\to\input.csv') -replace '^0+' |
Set-Content 'C:\path\to\output.csv'