I'm really liking what I have seen of PowerShell, but I'm really confused by some things, as I have so much to learn. I've been reading everything on the site here, but I've not been able to figure this out. Hopefully this is simple. I have a csv like this:
Title,Name,Office,Phone
Boss,Bob,101,323.555-1212
Office-Manager-Level-2,Helen,202,5-1213
Time-Waster-Level-5,Nemo,105,5-1214
Widget-Maker,Zack,10,5-1215
Temp,Larry,102,5-1000
I have been trying to figure out an easy way to prepend and append data to the first column, "Title", which will eventually become a static webpage with the user's information. I'm trying this so far:
$file = ("\\web\users.csv")
$urlbase="<a href`=`"file:///web/users/info/"
$urlend="_info.html`">"
$data = import-csv ($file) -header ("Title","Name","Office","Phone")
$data | select -Skip 1 | % { $_.Title -replace '$_.Title', "'$urlbase'$_.Title'$urlend'`">'$_.Title'</a>"} | Export-CSV -Path "links_output.csv" -NoTypeInformation
However, it appears that all I'm matching or replacing is the length of the string (??) in the first column of data. My output file is this:
"Length"
"4"
"23"
"19"
"12"
"4"
What I would desire as my output would be:
<a href="file:///web/users/info/Boss_info.html"Boss</a>"
Office-Manager-Level-2"
Time-Waster-Level-5"
Widget-Maker"
Temp"
Also, besides my basic issue, I'd be happy if I could use Set-Content, because I'd really like this to act like a sed -i on the original file; but a new file with the same contents as the old, with the updated first column, will do if I cannot use Set-Content on the original.
This section of my script will become an html file later and because of issues with regex find and replacing with tags, I'm trying to add the html tags before I use ConvertTo-Html, because that is all working already. Thanks in advance!!
Here's one solution:
$file = ("\\web\users.csv")
$urlbase='<a href="file:///web/users/info/'
$urlend='_info.html">'
get-content $file |
select -Skip 1 |
foreach {
"$Urlbase{0}$urlend" -f $_.split(',')[0]
}
split(',') is a method of [string] that splits the string at the commas, producing an array. The trailing [0] takes the first element of that array, which will be the Title. That gets inserted at {0} in the format string, between the other two variables, by the format (-f) operator.
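For example, on one sample row (the variables are the same as in the snippet above):

```powershell
$urlbase = '<a href="file:///web/users/info/'
$urlend  = '_info.html">'
$line    = 'Boss,Bob,101,323.555-1212'

# Split at the commas, take the first field (the Title),
# and insert it at {0} between the two URL fragments
"$urlbase{0}$urlend" -f $line.Split(',')[0]
# -> <a href="file:///web/users/info/Boss_info.html">
```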
You can use the -replace operator, but you can't use PS variables in the replacement string. You can include the literal text:
(get-content $file | select -Skip 1) -replace '^([^,]+)(.+)','<a href="file:///web/users/info/$1_info.html">$2'
Related
Suppose I have two csv files. One is
id_number,location_code,category,animal,quantity
12212,3,4,cat,2
29889,7,6,dog,2
98900,
33221,1,8,squirrel,1
the second one is:
98900,2,1,gerbil,1
The second file may have a newline or something at the end (maybe or maybe not, I haven't checked), but only the one line of content. There may be three or four or more different varieties of the "second" file, but each one will have a first element (98900 in this example) that corresponds to an incomplete line in the first file similar to what is in this example.
Is there a way using PowerShell to automatically merge the line in the second (plus any additional similar) csv file into the matching line(s) of the first file, so that the resulting file is:
12212,3,4,cat,2
29889,7,6,dog,2
98900,2,1,gerbil,1
33221,1,8,squirrel,1
main.csv
id_number,location_code,category,animal,quantity
12212,3,4,cat,2
29889,7,6,dog,2
98900,
33221,1,8,squirrel,1
correction_001.csv
98900,2,1,gerbil,1
merge code used at the commandline, or in the .ps1 file of your choice
$myHeader = @('id_number','location_code','category','animal','quantity')
#Stage all the correction files: last correction in the most recent file wins
$ToFix = @{}
filter Plumbing_Import-Csv($Header){import-csv -LiteralPath $_ -Header $Header}
ls correction*.csv | sort -Property LastWriteTime | Plumbing_Import-Csv $myHeader | %{$ToFix[$_.id_number]=$_}
function myObjPipe($Header){
begin{
function TextTo-CsvField([String]$text){
#text fields which contain comma, double quotes, or new-line are a special case for CSV fields and need to be accounted for
if($text -match '"|,|\n'){return '"'+($text -replace '"','""')+'"'}
return $text
}
function myObjTo-CsvRecord($obj){
return ''+
$obj.id_number +','+
$obj.location_code +','+
$obj.category +','+
(TextTo-CsvField $obj.animal)+','+
$obj.quantity
}
$Header -join ','
}
process{
if($ToFix.Contains($_.id_number)){
$out = $ToFix[$_.id_number]
$ToFix.Remove($_.id_number)
}else{$out = $_}
myObjTo-CsvRecord $out
}
end{
#I assume you'd append any leftover fixes that weren't used
foreach($out in $ToFix.Values){
myObjTo-CsvRecord $out
}
}
}
import-csv main.csv | myObjPipe $myHeader | sc combined.csv -encoding ascii
You could also use ConvertTo-Csv, but my preference is to not have all the extra " cruft.
Edit 1: reduced code redundancy, accounted for \n, fixed appends, and used @OwlsSleeping's suggestion about the -Header cmdlet parameter
also works with these files:
correction_002.csv
98900,2,1,I Win,1
correction_new.csv
98901,2,1,godzilla,1
correction_too.csv
98902,2,1,gamera,1
98903,2,1,mothra,1
Edit 2: convert gc | ConvertTo-Csv over to Import-Csv to fix the front-end \n issues. Now also works with:
correction_003.csv
29889,7,6,"""bad""
monkey",2
This is a simple solution assuming there's always exactly one match, and you don't care about output order. Change the output path to csv1 to overwrite.
I added headers manually in both input files, but you can specify them in Import-Csv instead if you'd rather avoid changing your files.
[array]$MissingLine = Import-Csv -Path "C:\Users\me\Documents\csv2.csv"
[string]$MissingId = $MissingLine[0].id_number
[array]$BigCsv = Import-Csv -Path "C:\Users\me\Documents\csv1.csv" |
Where-Object {$_.id_number -ne $MissingId}
($BigCsv + $MissingLine) |
Export-Csv -Path "C:\Users\me\Documents\Combined.csv"
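If there can be several correction files, the same Import-Csv approach can be extended. A minimal sketch, assuming headerless files (as in the samples) and a correction*.csv filename pattern; the paths and header names are assumptions:

```powershell
$header = 'id_number','location_code','category','animal','quantity'

# Collect every correction row, keyed by id_number; later files win
$fixes = @{}
Get-ChildItem "C:\Users\me\Documents\correction*.csv" | Sort-Object LastWriteTime |
    ForEach-Object { Import-Csv $_.FullName -Header $header } |
    ForEach-Object { $fixes[$_.id_number] = $_ }

# Swap each main row for its correction, if one exists
Import-Csv "C:\Users\me\Documents\csv1.csv" -Header $header |
    ForEach-Object { if ($fixes.ContainsKey($_.id_number)) { $fixes[$_.id_number] } else { $_ } } |
    Export-Csv "C:\Users\me\Documents\Combined.csv" -NoTypeInformation
```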
I have a PowerShell script that I am trying to work part of out. The text input to this lists the group each user is part of, and the script is supposed to replace that group with the groups I am assigning them in Active Directory (I am limited to only changing groups in Active Directory). My issue is that when it reaches HR and replaces it, it also replaces the HR inside CHRL, so my groups look nuts right now. Looking it over, it doesn't do this with every line, but for gilchrist it matches the HR in the name. Is there anything I can do to stop that, or am I going to have to change my HR to Human Resources? Thanks for the help.
$lookupTable = @{
'Admin' = 'W_CHRL_ADMIN_GS,M_CHRL_ADMIN_UD,M_CHRL_SITE_GS'
'Security' = 'W_CHRL_SECURITY_GS,M_CHRL_SITE_GS'
'HR' = 'M_CHRL_HR_UD,W_CHRL_HR_GS,M_CHRL_SITE_GS'
}
$original_file = 'c:\tmp\test.txt'
$destination_file = 'c:\tmp\test2.txt'
Get-Content -Path $original_file | ForEach-Object {
$line = $_
$lookupTable.GetEnumerator() | ForEach-Object {
if ($line -match $_.Key)
{
$line = $line -replace $_.Key, $_.Value
}
}
$line
} | Set-Content -Path $destination_file
Get-Content $destination_file
test.txt:
user,group
john.smith,Admin
joanha.smith,HR
john.gilchrist,security
aaron.r.smith,admin
abby.doe,secuity
abigail.doe,admin
Your input appears to be in CSV format (though note that your sample rows have trailing spaces, which you'd have to deal with, if they're part of your actual data).
Therefore, use Import-Csv and Export-Csv to read / rewrite your data, which allows a more concise and convenient solution:
Import-Csv test.txt |
Select-Object user, @{ Name='group'; Expression = { $lookupTable[$_.group] } } |
Export-Csv -NoTypeInformation -Encoding Utf8 test2.txt
Import-Csv reads the CSV file as a collection of custom objects whose properties correspond to the CSV column values; that is, each object has a .user and a .group property in your case.
$_.group therefore robustly reports the abstract group name only, which you can directly pass to your lookup hashtable; Select-Object is used to pass the original .user value through, and to replace the original .group value with the lookup result, using a calculated property.
Export-Csv re-converts the custom objects to a CSV file:
-NoTypeInformation suppresses the (usually useless) data-type-information line at the top of the output file
-Encoding Utf8 was added to prevent potential data loss, because ASCII encoding is used by default.
Note that Export-Csv blindly double-quotes all field values, whether they need it or not; that said, CSV readers should be able to deal with that (and Import-Csv certainly does).
As for what you tried:
The -replace operator replaces all occurrences of a given regex (regular expression) in the input.
Your regexes amount to looking for (case-insensitive) substrings, which explains why HR matches both the HR group name and the hr substring in username gilchrist.
A simple workaround is to add assertions to your regex so that the substrings only match where you want them; e.g., ,HR$ only matches HR when it is preceded by a , and sits at the end of the line ($).
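To see the difference on one of the problem rows:

```powershell
$line = 'john.gilchrist,HR'

# Unanchored: -replace is case-insensitive, so the 'hr' inside
# 'gilchrist' is rewritten as well
$line -replace 'HR', 'X'     # -> john.gilcXist,X

# Anchored: only matches HR preceded by a comma at the end of the line
$line -replace ',HR$', ',M_CHRL_HR_UD,W_CHRL_HR_GS,M_CHRL_SITE_GS'
# -> john.gilchrist,M_CHRL_HR_UD,W_CHRL_HR_GS,M_CHRL_SITE_GS
```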
However, your approach of enumerating the hashtable keys for each input CSV row is inefficient, and you're better off splitting off the group name and doing a straight lookup based on it:
# Split the row into fields.
$fields = $line -split ','
# Update the group value (last field)
$fields[-1] = $lookupTable[$fields[-1]]
# Rebuild the line
$line = $fields -join ','
Note that you'd have to make an exception for the header row (e.g., test if the lookup result is empty and refrain from updating, if so).
Why don't you load your text file as a CSV file, using Import-CSV and use "," as a delimiter?
This will give you a PowerShell object you can work on, and you can then export it as text or CSV. Using your file and lookup table, this code may help you:
$file = Import-Csv -Delimiter "," -Path "c:\ps\test.txt"
$lookupTable = @{
'Admin' = 'W_CHRL_ADMIN_GS,M_CHRL_ADMIN_UD,M_CHRL_SITE_GS'
'Security' = 'W_CHRL_SECURITY_GS,M_CHRL_SITE_GS'
'HR' = 'M_CHRL_HR_UD,W_CHRL_HR_GS,M_CHRL_SITE_GS'}
foreach ($i in $file) {
#Compare and replace
...
}
$file | Export-Csv "c:\ps\test2.csv" -Delimiter "," -NoTypeInformation
You can then iterate over $file to compare and replace, and Export-Csv after you're done.
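One way the compare-and-replace loop above might be filled in (a sketch; the .group property name assumes the user,group header from test.txt):

```powershell
foreach ($i in $file) {
    # Replace the group only when it has an entry in the lookup table;
    # hashtable key lookups are case-insensitive by default
    if ($lookupTable.ContainsKey($i.group)) {
        $i.group = $lookupTable[$i.group]
    }
}
$file | Export-Csv "c:\ps\test2.csv" -NoTypeInformation
```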
I have a group of .txt files that contain one or two of the following strings.
"red", "blue", "green", "orange", "purple", .... many more (50+) possibilities in the list.
If it helps, I can tell if the .txt file contains one or two items, but don't know which one/ones they are. The string patterns are always on their own line.
I'd like the script to tell me specifically which one or two string matches (from the master list) it found, and the order in which it found them. (Which one was first)
Since I have a lot of text files to search, I'd like to write the output results to a CSV file as I search.
FILENAME1,first_match,second_match
file1.txt,blue,red
file2.txt,red, blue
file3.txt,orange,
file4.txt,purple,red
file5.txt,purple,
...
I've tried using many individual Select-String calls returning Boolean results to set variables with any matches found, but with the number of possible strings it gets ugly real fast. My searching on this issue has provided me with no new ideas to try. (I'm sure I'm not asking in the correct way.)
Do I need to loop through each line of text in each file?
Am I stuck with the process of elimination method by checking for the existence of each search string?
I'm looking for a more elegant approach to this problem. (if one exists)
Not very intuitive, but elegant...
The following switch statement
$regex = "(purple|blue|red)"
Get-ChildItem $env:TEMP\test\*.txt | Foreach-Object{
$result = $_.FullName
switch -Regex -File $_
{
$regex {$result = "$($result),$($matches[1])"}
}
$result
}
returns
C:\Users\Lieven Keersmaekers\AppData\Local\Temp\test\file1.txt,blue,red
C:\Users\Lieven Keersmaekers\AppData\Local\Temp\test\file2.txt,red,blue
where
file1 contains first blue, then red
file2 contains first red, then blue
You can use regex to get the index (start position in the line), combined with Select-String, which returns the line number, and you're good to go.
Select-String supports an array as value for -Pattern, but unfortunately it stops on a line after first match even when you use -AllMatches (bug?). Because of this we have to search one time per word/pattern. Try:
#List of words. Had to escape them because Select-String doesn't return Matches-objects (with Index/location) for SimpleMatch
$words = "purple","blue","red" | ForEach-Object { [regex]::Escape($_) }
#Can also use a list with word/sentence per line using $words = Get-Content patterns.txt | % { [regex]::Escape($_.Trim()) }
#Get all files to search
Get-ChildItem -Filter "test.txt" -Recurse | Foreach-Object {
#Has to loop words because Select-String -Pattern "blue","red" won't return match for both pattern. It stops on a line after first match
foreach ($word in $words) {
$_ | Select-String -Pattern $word |
#Select the properties we care about
Select-Object Path, Line, Pattern, LineNumber, @{n="Index";e={$_.Matches[0].Index}}
}
} |
#Sort by File (to keep file-matches together), then LineNumber and Index to get the order of matches
Sort-Object Path, LineNumber, Index |
Export-Csv -NoTypeInformation -Path Results.csv -Encoding UTF8
Results.csv
"Path","Line","Pattern","LineNumber","Index"
"C:\Users\frode\Downloads\test.txt","file1.txt,blue,red","blue","3","10"
"C:\Users\frode\Downloads\test.txt","file1.txt,blue,red","red","3","15"
"C:\Users\frode\Downloads\test.txt","file2.txt,red, blue","red","4","10"
"C:\Users\frode\Downloads\test.txt","file2.txt,red, blue","blue","4","15"
"C:\Users\frode\Downloads\test.txt","file4.txt,purple,red","purple","6","10"
"C:\Users\frode\Downloads\test.txt","file4.txt,purple,red","red","6","17"
"C:\Users\frode\Downloads\test.txt","file5.txt,purple,","purple","7","10"
I'm wondering if anyone has any suggestions on how to handle what I want to do in PowerShell.
I have this data in a text file:
"0003233","9/1/2017","0241902","$12,145.05"
"FGENERAL","MY VENDOR","VENDOR COMPANY INC.",""
"1","Check(s)","Checks Total:","$12,145.05"
I want to run PowerShell to make it look like this:
"0003233","9/1/2017","0241902","MY VENDOR","VENDOR COMPANY INC.","$12,145.05"
I have experience with simpler data manipulation, but I'm stumped on how to handle this one. Can anyone suggest something?
Thanks
Get contents from file,
Use select-string with regex to split the string at the quotes.
Use the string array to build your final output.
$string = Get-Content "C:\Test\Test.txt"
$StringArray = Select-String "([`"'])(?:(?=(\\?))\2.)*?\1" -input $string -AllMatches | Foreach {$_.matches.Value}
write-output "$($StringArray[0]),$($StringArray[1]),$($StringArray[2]),$($StringArray[5]),$($StringArray[6]),$($StringArray[11])"
You could use Get-Content to read the text file in. At that point you have an array of lines. If you know for sure the order of the lines are the same each time then you can create a new array or string, depending on your needs, from the text file array.
$textFile = Get-Content -Path "C:\..." #reads in the text file
$lineOne = $textFile[0].Split(",") #splits the first line based on commma, repeat for each line
$formattedLine = $lineOne[0] + "," + $lineOne[5] #creates new string
This would allow you to restructure the data into the format you want.
$Data = Import-Csv .\Data.txt -Header 0,1,2,3
$Data[0]."0", $Data[0]."1", $Data[0]."2", $Data[0]."3", $Data[1]."1", $Data[1]."2", $Data[2]."3" -Join ","
I am at a loss here. I have 20,000 lines in a tab-delimited text file; one of the lines is below. I need to extract the IPs and the username, which is located near the end of the line. I have figured out how to strip the IPs and put them in the text file, but how can I get the username into the same text file and keep that username associated with the IPs in its line? I have placed my code at the bottom. I think I have the proper regular expression to pull the $Name, but I am not sure... The names are all lastname, firstname
Mike Joung 8/21/2012 2:36 gdnwgx9495j;10.2.135.56;359;2013/11/13 08:21:13gdnm8xyydv1;10.2.135.20;1;2013/08/09 09:20:51gdnm592;10.2.132.205;1;2012/08/30 13:26:42gdnw0225;10.2.132.229;1;2012/08/30 13:17:28gdnmh0lydv1;10.7.101.54;14;2012/07/27 01:15:37 6/12/2012 8:00 11/23/2009 5:26 Joung, Mike Never
$input_path = 'c:\ps\EMEA_wNotes_only.txt'
$output_file = 'c:\ps\extracted_ip_addresses.txt'
$regex = '\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b'
$Name = '\b[A-Za-z0-20._%-]+\b'
select-string -Path $input_path -Pattern $regex -AllMatches | % { $_.Matches } | % { $_.Value } > $output_file
@KeithHill is right on the money about Import-Csv, but after looking at your example line, I don't think it will be that simple. Is every line the same format? That is, does every line have the same number of fields? It looks like, from your example, that you have a few fields that each consist of semicolon-separated data, with the username being the second-to-last tab-separated field.
If I haven't completely confused myself here, you can take advantage of some of PowerShell's nifty array indexing features.
$input_path = 'c:\ps\EMEA_wNotes_only.txt'
$output_file = 'c:\ps\extracted_ip_addresses.txt'
$regex = '\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b'
Get-Content $input_path | % { $_ -split "`t" } | Select-Object @{Name="uname";Expression={$_[-2]}},@{Name="ips";Expression={($_ | Select-String -Pattern $regex -AllMatches) -Join ","}} | Export-Csv $output_file -NoTypeInformation
Basically, we treat each line individually and manually split it on the tabs into an array, from which we pull the second-to-last item (or whatever position it is from the end of the array). Then we transform that array by looking at each item, using Select-String to pull out the IPs, joining the IPs with commas, and repeating, then export it all to a CSV file.
The CSV file should be something like
User name,ip,ip,ip
But the ips might be surrounded by quotes, like
User name,"ip,ip,ip"
I don't remember, and I can't test it on the iPad here ;)
Hopefully this helps some.