I used the ">" operator to put the content of an array into a CSV file, here is what i got, the first line is empty
Computer IP
-------- --
IMPPRD1 172.22.30.33
IMPPRD2 172.22.30.31
IMPPRD3 172.22.30.32
IMPSR1 172.22.30.12
IMPPRD5 172.22.30.17
I would like it to be a normal CSV, so something like this:
Computer,IP
IMPPRD1,172.22.30.33
IMPPRD2,172.22.30.31
IMPPRD3,172.22.30.32
IMPSR1,172.22.30.12
How could I manage to do this using PowerShell?
Thanks!
Using > writes the objects' default console formatting to the file - including the padding spaces between columns - essentially the same as piping to Out-File. You are looking for Export-Csv, which writes proper CSV format.
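For example, assuming the data sits in a variable such as $computers whose elements have Computer and IP properties (names taken from the table output above), a minimal sketch would be:
# $computers is assumed to hold objects with Computer and IP properties
$computers | Export-Csv -Path .\computers.csv -NoTypeInformation
-NoTypeInformation suppresses the #TYPE header line that Windows PowerShell 5.1 would otherwise write as the first line of the file.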
I have a source file which is in .txt format. It looks like a semi-colon separated file:
100;200;ThisisastringcolumnA;4;
101;400;Thisisastringc;lumnA;5;
102;600;ThisisastringcolumnB;6;
104;600;Thisisa;;ringcolumnB;6;
However, the columns are actually determined by length, so it is a length-delimited (fixed-width) file.
The first column, for example, runs from the first character to the third (100), and then a semi-colon follows.
The second column starts at the 5th position and ends at the 7th position (both inclusive). A string column can contain a semi-colon.
Now I want to import this length-delimited txt file with PowerShell and export it as a csv file that really is semi-colon separated. The result should look like this:
100;200;ThisisastringcolumnA;4;
101;400;"Thisisastringc;lumnA";5;
102;600;ThisisastringcolumnB;6;
104;600;"Thisisa;;ringcolumnB";6;
But I simply have no idea how to do it. I googled it, but I did not find many useful code examples for importing length-delimited txt files with PowerShell.
Unfortunately, I cannot use Python. I am not sure whether this task is even possible in PowerShell, because when exporting, PowerShell also needs to recognize that there are string values containing the separator, so it has to pay attention to the quoting: "Thisisa;;ringcolumnB". It would also be fine for me if the whole column were quoted, so that every entry in a string column gets quotes added.
You can use regex to describe a string in which the 3rd "column" contains a ; and then inject the quotation marks with the -replace operator:
$lines = Get-Content path\to\file.txt
@($lines) -replace '(.{3});(.{3});(.{20}(?<=;.{0,19}));(.);', '$1;$2;"$3";$4;'
The expression (.{20}(?<=;.{0,19})) is going to match the 20-char 3rd column value only if it contains at least one semi-colon - so lines with no semicolon in that column will be left alone:
# let's try it out with your test data
$lines = @'
100;200;ThisisastringcolumnA;4;
101;400;Thisisastringc;lumnA;5;
102;600;ThisisastringcolumnB;6;
104;600;Thisisa;;ringcolumnB;6;
'@ -split '\r?\n'
@($lines) -replace '(.{3});(.{3});(.{20}(?<=;.{0,19}));(.);', '$1;$2;"$3";$4;'
Which yields the following four strings:
100;200;ThisisastringcolumnA;4;
101;400;"Thisisastringc;lumnA";5;
102;600;ThisisastringcolumnB;6;
104;600;"Thisisa;;ringcolumnB";6;
To write the output back to file, use Set-Content:
@($lines) -replace '(.{3});(.{3});(.{20}(?<=;.{0,19}));(.);', '$1;$2;"$3";$4;' | Set-Content path\to\fixed_output.scsv
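To double-check that the result now parses as a proper semicolon-separated file, you could read it back with Import-Csv. The -Header names below are made up, since the file has no header row, and the trailing ; on each line produces an empty fifth column:
# Hypothetical column names, for inspection only
Import-Csv path\to\fixed_output.scsv -Delimiter ';' -Header Col1,Col2,Col3,Col4,Col5 | Format-Table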
I have a table that contains message_content. It looks like this:
message_content | WFUS54 ABNT 080344\r\r
| TORLCH\r\r
| TXC245-361-080415-\r
How would I extract only the 2nd line of that output (TORLCH)? I've tried shortening the output to a certain number of characters, but that ultimately doesn't provide what I want. I've also tried removing carriage returns and new lines. I am outputting my results to a CSV I could manipulate with Python, but was wondering if there's a way to do it in the query first.
Based on other examples, it seems like I could use a regular expression to maybe do this? Not sure where to start with learning that though.
You can split the value into an array, then take the second element:
(string_to_array(message_content, e'\r\r'))[2]
Online example: https://rextester.com/MDYLXB40812
Operating System - Windows 10
Powershell version - 5.1.15063.1088
OK, I'm really trying hard to think through what can be wrong with this PowerShell script, but I apparently can't figure it out and am asking for some help. So here is what I'm trying to do, simple as 1+1.
If I understood the tutorial correctly, creating an array in PowerShell is like this:
$someVariable = "PowerShell", "MowerShell", "HowerShell", "ZowerShell"
Then I'm simply trying to write this thing to a csv file with a comma as the delimiter, but first I give it a try in the console output:
$someVariable | ConvertTo-Csv -NoTypeInformation
According to PowerShell 5.1 official documentation
...Specifies a delimiter to separate the property values. The default
is a comma (,).
So no additional switch saying that I would like to use a comma as the delimiter should be required. Once the command above is executed, I see this weird output:
"Length" "10" "10" "10" "10"
What is this? Am I supposed to see the values of my variable separated by a simple comma? From the numbers I can guess that the script calculates the number of letters in each word -
P o w e r S h e l l
contains 10 letters.
Is this the suggested way to calculate the number of letters in a string (in case I get a PowerShell task at my next job interview), using the ConvertTo-Csv command?
Writing this funky data to the csv file itself leads to more unexpected results:
Now I'm completely lost as to what those numbers are...
Is it possible to write my strings as STRINGS to the csv file in one line, rather than silly numbers?
The desired output is this entry as headers in the csv file:
"PowerShell","MowerShell","HowerShell","ZowerShell"
The output reads "Length", and has a series of 10's. Each of your strings are 10 characters long (the double quotes aren't factored in).
Length can be calculated many ways. I wouldn't say there is one suggested way, only the ways that fit what you're trying to do.
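For instance, the most direct way is the Length property itself, no CSV cmdlets involved:
"PowerShell".Length    # returns 10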
To get the literal strings (no Length header, etc.) into a file, one per line, try:
$someVariable | Out-File foo.csv
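If you want exactly the single quoted, comma-separated line shown as the desired output, you can build it yourself - a minimal sketch, with foo.csv as a placeholder file name:
# Quote each string, join them with commas, and write the single resulting line
($someVariable | ForEach-Object { '"{0}"' -f $_ }) -join ',' | Set-Content foo.csv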
I am new to using PowerShell and I am in need of some assistance.
I have a csv file that looks like this:
DisplayName,AllJSSUSers,ALLMobileDevices,LimitToUsers,Exclusions,DepartmentEx,IconURL,ID
Aurasma,TRUE,TRUE,"G_Year 4,G_Year 7,G_Year 11,G_Year 6,G_Year 10,G_Year 5,G_Year 9,G_Teaching Staff,G_Year 8,G_Supply Teachers,G_Year 3,G_Year 12",,,,5
What I would like to do is split the column "LimitToUsers" at the commas into multiple columns and then output that to a new csv file.
I have no idea where to start with this. Can anyone help?
Thank you
Gavin
You can read CSV data with Import-Csv.
You can access that column from each data object by accessing the LimitToUsers property.
You can split a string with the -split operator.
You can add new properties to object with Add-Member.
You can write CSV with Export-Csv.
Since you somehow have to split a single column into multiple ones, exactly how you map the split values onto new columns is up to you; a rough sketch of the plumbing is shown below.
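Putting those pieces together, a minimal sketch might look like this (the file names and the choice to call the new columns LimitToUsers1, LimitToUsers2, ... are assumptions):
$rows = Import-Csv .\apps.csv
foreach ($row in $rows) {
    # Split the comma-separated group list into individual names
    $groups = $row.LimitToUsers -split ','
    for ($i = 0; $i -lt $groups.Count; $i++) {
        # Add one new property per group: LimitToUsers1, LimitToUsers2, ...
        $row | Add-Member -NotePropertyName ("LimitToUsers{0}" -f ($i + 1)) -NotePropertyValue $groups[$i]
    }
}
$rows | Export-Csv .\apps_split.csv -NoTypeInformation
Note that Export-Csv takes its column list from the first object it sees, so rows with fewer groups than the first row simply get empty cells, while extra groups on later rows would be dropped.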
I have a CSV file containing some user data; it looks like this:
"10333","","an.10","Kenyata","","Aaron","","","","","","","","","",""
"12222","","an.4","Wendy","","Aaron","","","","","","","","","",""
"14343","","aaron.5","Nanci","","Aaron","","","","","","","","","",""
I also have a file which has an item on each line like this:
an.10
arron.5
What I want is to keep only the lines in the CSV file whose identifier appears in the list file.
So desired output would be:
"10333","","an.10","Kenyata","","Aaron","","","","","","","","","",""
"14343","","aaron.5","Nanci","","Aaron","","","","","","","","","",""
(Note how an.4 is not contained in this new list.)
I have just about any environment available to me and am willing to try almost anything short of doing it manually, as this csv contains millions of records and there are about 100k entries in the list itself.
How unique are the identifiers an.10 and the like?
Maybe a very small *nix shell script would be enough:
for i in $(uniq list.txt); do grep "\"$i\"" data.csv; done
That would, for every unique entry in the list, return all matching lines in the csv file. It does not match exclusively on the identifier column, however (that could be done with awk, for example).
If the csv file is data.csv and the list file is list.txt, I would do this:
for i in `cat list.txt`; do grep $i data.csv; done
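If PowerShell is an option, a rough sketch that avoids re-scanning the csv once per list entry is to load the identifiers into a hash set and stream the file past it. The file names follow the ones above, and the assumption that the identifier sits in the third field (and that the leading fields never contain commas) comes from the sample rows:
# Load the list into a hash set for fast lookups
$wanted = [System.Collections.Generic.HashSet[string]]::new([string[]](Get-Content .\list.txt))
Get-Content .\data.csv | Where-Object {
    # Third field holds the identifier; strip the surrounding double quotes
    $wanted.Contains((($_ -split ',')[2]).Trim('"'))
} | Set-Content .\filtered.csv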