I have a CSV text file separated with ; in the following format:
USER_EMPLOYEE_ID;SYSTEM1;USERNAME1
The first column is an identity, and each following pair of columns is the user's account on a different Active Directory. I have put in garbage data, but the idea is there.
ay7suve0001;ADDPWN;ay7suve0001
AAXMR3E0001;ADDPWN;AAXMR3E0001
ABABIL;ADDPWN;ABABIL
ABDF17;ADDPWN;ABDF17;
ABKMPPE0001;ADDPWN;ABKMPPE0001
ABL1FL;ADDPWN;ABL1FL
AB6JG8E0004;ADDPWN;AB6JG8E0004;
ACB4YB;ADDPWN;ACB4YB
ACK7J9;ADDPWN;ACK7J9
ACLZFS;ADDPWN;ACLZFS;
ACQXZ3;ADDPWN;ACQXZ3
Now there is a requirement to append a fixed string like #ADDPWN.com to all the USERNAME1 values. Some records have a trailing ; and some don't.
Is there a quick way, from PowerShell, to append #ADDPWN.com to each line, taking care of:
- any trailing ;
- any value that already ends in #ADDPWN.com
Import-Csv is your friend. The following should get you on the right track.
Import-Csv "import.csv" -Delimiter ';' |
foreach {
if ($_.username1 -notlike '*#ADDPWN.com') { $_.username1 += '#ADDPWN.com' }
$_
} |
Export-Csv "export.csv" -Delimiter ';'
This assumes the first line of your csv file is your header line. If it's not, you can pass -Header 'USER_EMPLOYEE_ID','SYSTEM1','USERNAME1' as another parameter to Import-Csv.
Export-Csv adds some extra stuff like quotes around parameters, so you may need to play with the output format if you don't want that.
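If you don't want the quotes at all, one option (a sketch, not part of the original answer) is to produce the CSV text yourself and strip them; this is only safe if no field value itself contains a " or ;:
# Same pipeline, but emitting plain text and removing the added quotes
Import-Csv "import.csv" -Delimiter ';' |
    foreach {
        if ($_.username1 -notlike '*#ADDPWN.com') { $_.username1 += '#ADDPWN.com' }
        $_
    } |
    ConvertTo-Csv -Delimiter ';' -NoTypeInformation |
    foreach { $_ -replace '"', '' } |
    Set-Content "export.csv"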
For another explanation of how this works, check out Changes last name, first name to first name, last name in last column CSV powershell.
This was a solution that worked for me:
#Opens list of file names
$file2 = "F:\OneDrive_Biz\PowerApps\SecurityCameraVideoApp\file_list_names.csv"
$x = Get-Content $file2

#Appends URL to beginning of each file name in the list
for ($i = 0; $i -lt $x.Count; $i++) {
    $x[$i] = "https://analytics-my.sharepoint.com/personal/gpowell_analytics_onmicrosoft_com/Documents/PowerApps/SecurityCameraVideoApp/Video_Files/" + $x[$i]
}
$x

#Remove all files in target directory prior to saving new list
Get-ChildItem -Path C:\_TEMP\file_list_names.csv | Remove-Item
Add-Content -Path C:\_TEMP\file_list_names_url.csv -Value $x
Does anybody know how to split a string variable in PowerShell and rename a file by adding a number and the file extension?
My scenario, with an example of the variables I'm using:
$return_path = "C:\FolderA\result.txt"
How do I get the "C:\FolderA\result" portion? I have tried Split-Path, but I keep getting the error below; even though there is indeed a value in $return_path, it still says that it is null.
Error splitting File Path
I am also trying to loop through a dataset, and I need to append "i" to the retrieved file name after I split it, for example "C:\FolderA\result[i].txt". Currently my code below produces the error attached at Screenshot 3 below. Is the error because I cannot check whether an integer is less than the number of tables returned in the dataset? Any answers would be appreciated; I have been trying and researching but I am still stuck. Thank you.
Looping through retrieved tables from dataset (Count how many tables returned)
Error comparing in loop
For what you are asking, try this:
$return_path = "C:\FolderA\result.txt"
#get file name extension
$FileExtension=[System.IO.Path]::GetExtension($return_path)
#get file name without extension
$FileNameWithoutExtension=[System.IO.Path]::GetFileNameWithoutExtension($return_path)
#get directory path
$Directory=[System.IO.Path]::GetDirectoryName($return_path)
for ($i = 1; $i -lt $ds.Tables.Count; $i++)
{
    #build file name
    $PathFile = [System.IO.Path]::Combine($Directory, $FileNameWithoutExtension + $i + $FileExtension)
    #export current table
    $ds.Tables[$i] | Export-Csv $PathFile -Delimiter '|' -NoTypeInformation
}
But ideally you should do it like this:
$initdir = "C:\FolderA\" # define your inital directory
$FileName="result" # define your final file name
$Extension=".CSV" #use csv and not txt extension for csv file
for ($i = 1; $i -lt $ds.Tables.Count; $i++)
{
    #build file name
    $PathFile = [System.IO.Path]::Combine($initdir, $FileName + $i + $Extension)
    #export current table
    $ds.Tables[$i] | Export-Csv $PathFile -Delimiter '|' -NoTypeInformation
}
I'm creating a CSV file in PowerShell.
Right now my code is:
Add-content -Path $filePath -Value "$($variable.Property)"
This works fine for the most part, EXCEPT if the property contains a comma, e.g. "test, organization".
When I open the CSV, the comma comes along with it (which is what I want) but causes an extra separation. How do I keep "test, organization" in one column?
Referring to the documentation for Export-CSV, you will need to use a different delimiter, like a semi-colon.
When you read the CSV you should specify the delimiter as well: Import-CSV.
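For example, a minimal sketch reusing the $filePath and $variable names from the question:
# Write with a semicolon delimiter so the embedded comma no longer splits the column
$variable | Select-Object Property | Export-Csv -Path $filePath -Delimiter ';' -NoTypeInformation

# Read it back with the same delimiter
Import-Csv -Path $filePath -Delimiter ';'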
Try quoting your properties (CSV fields are quoted with double quotes):
Add-Content -Path $filePath -Value "`"$($variable.Property)`""
Or use one of the built-in CSV commands, which automatically quote all values:
$foo.Bar | Export-Csv -Path $filePath
$foo.Bar | ConvertTo-Csv | Out-File -FilePath $filePath
If you just want to avoid issues with commas, you can change the delimiter between fields:
$foo | Export-Csv -Path $filePath -Delimiter '|'
Here is an article on how to use out-file or add-member with some cells of the row having commas in the variable values and some not.
https://imjustanengineer.blogspot.com/2022/01/so-youre-trying-to-use-powershell-out.html
Here is a code snippet; a more detailed explanation with a full working function is in the link. $outputArr is an array of the values you want to write to one line of the CSV. The loop checks each entry to see if it contains commas and puts quotes around the entry if it does; if it does not, no adjustment is necessary and the entry is appended to the output line as-is.
# Build one CSV line from the entries in $outputArr
$output = "";
$index = 0;
foreach ($outputTemp in $outputArr)
{
    if ($outputTemp.ToString().Contains(","))
    {
        # Entry contains a comma, so wrap it in double quotes
        $output += "`"$outputTemp`",";
    }
    else
    {
        $output += $outputTemp + ",";
    }
    $index++;
    if ($index -eq $outputArr.Count)
    {
        # Last entry: remove the trailing comma
        if ($output.EndsWith(","))
        {
            $output = $output.Remove($output.Length - 1);
        }
    }
}
I found a diabolically simple answer after I opened a csv file in Excel and added text and commas to one column. When I saved, closed and reopened, the column still had all the words and commas properly formatted. So then I opened the file in Notepad++ and this is what I found:
column1text, column2text,"column3,text,with,commas"
In case it's not clear (it took me a fair bit to spot the little detail that makes all the difference): the opening double quote cannot have a space after the preceding comma.
column1text, column2text, "column3,text,with,commas"
splits all the words into separate columns because there is a space between
column2text, "column3,etc"
Take that space away
column2text,"column3,etc"
and everything within the double quotes stays in one column.
Example using an Active Directory distinguishedName such as CN=somename,OU=Domain Controllers,DC=foo,DC=bar:
$computers = get-adcomputer -filter *
foreach ($computer in $computers) {
$deviceName = $computer.Name
$dn = '"' + $computer.DistinguishedName + '"'
$guid = $computer.objectGUID
$lastLogon = $computer.LastLogonDate
$serialNumber = $computer.serialNumber
$whenCreated = $computer.whenCreated
"$guid, $lastLogon, $deviceName, $serialNumber, $whenCreated,$dn" | add-content "c:\temp\filename.csv"
}
It does not work if a space is added between $whenCreated, and $dn like so:
"$guid, $lastLogon, $deviceName, $serialNumber, $whenCreated, $dn" | add-content "c:\temp\filename.csv"
This took up an afternoon, so I hope this saves somebody some time and frustration.
Suppose I have two csv files. One is
id_number,location_code,category,animal,quantity
12212,3,4,cat,2
29889,7,6,dog,2
98900,
33221,1,8,squirrel,1
the second one is:
98900,2,1,gerbil,1
The second file may have a newline or something at the end (maybe or maybe not, I haven't checked), but only the one line of content. There may be three or four or more different varieties of the "second" file, but each one will have a first element (98900 in this example) that corresponds to an incomplete line in the first file similar to what is in this example.
Is there a way using powershell to automatically merge the line in the second (plus any additional similar) csv file into the matching line(s) of the first file, so that the resulting file is:
12212,3,4,cat,2
29889,7,6,dog,2
98900,2,1,gerbil,1
33221,1,8,squirrel,1
main.csv
id_number,location_code,category,animal,quantity
12212,3,4,cat,2
29889,7,6,dog,2
98900,
33221,1,8,squirrel,1
correction_001.csv
98900,2,1,gerbil,1
Merge code, used at the command line or in the .ps1 file of your choice:
$myHeader = @('id_number','location_code','category','animal','quantity')
#Stage all the correction files: last correction in the most recent file wins
$ToFix = @{}
filter Plumbing_Import-Csv($Header){import-csv -LiteralPath $_ -Header $Header}
ls correction*.csv | sort -Property LastWriteTime | Plumbing_Import-Csv $myHeader | %{$ToFix[$_.id_number]=$_}
function myObjPipe($Header){
    begin{
        function TextTo-CsvField([String]$text){
            #text fields which contain comma, double quotes, or new-line are a special case for CSV fields and need to be accounted for
            if($text -match '"|,|\n'){return '"'+($text -replace '"','""')+'"'}
            return $text
        }
        function myObjTo-CsvRecord($obj){
            return ''+
                $obj.id_number +','+
                $obj.location_code +','+
                $obj.category +','+
                (TextTo-CsvField $obj.animal)+','+
                $obj.quantity
        }
        $Header -join ','
    }
    process{
        if($ToFix.Contains($_.id_number)){
            $out = $ToFix[$_.id_number]
            $ToFix.Remove($_.id_number)
        }else{$out = $_}
        myObjTo-CsvRecord $out
    }
    end{
        #I assume you'd append any leftover fixes that weren't used
        foreach($out in $ToFix.Values){
            myObjTo-CsvRecord $out
        }
    }
}
import-csv main.csv | myObjPipe $myHeader | sc combined.csv -encoding ascii
You could also use ConvertTo-Csv, but my preference is to not have all the extra " cruft.
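If you did want to go that way, here is a rough sketch (the output file name is just an example, and it skips the leftover-fix append handled in the end block above):
# Same merge via ConvertTo-Csv; every value ends up double-quoted
import-csv main.csv |
    %{ if($ToFix.Contains($_.id_number)){ $ToFix[$_.id_number] } else { $_ } } |
    ConvertTo-Csv -NoTypeInformation |
    sc combined2.csv -encoding ascii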
Edit 1: reduced code redundancy, accounted for \n, fixed appends, and used @OwlsSleeping's suggestion about the -Header cmdlet parameter.
It also works with these files:
correction_002.csv
98900,2,1,I Win,1
correction_new.csv
98901,2,1,godzilla,1
correction_too.csv
98902,2,1,gamera,1
98903,2,1,mothra,1
Edit 2: converted gc | ConvertTo-Csv over to Import-Csv to fix the front-end \n issues. It now also works with:
correction_003.csv
29889,7,6,"""bad""
monkey",2
This is a simple solution assuming there's always exactly one match, and you don't care about output order. Change the output path to csv1 to overwrite.
I added headers manually in both input files, but you can specify them in Import-Csv instead if you'd rather avoid changing your files.
[array]$MissingLine = Import-Csv -Path "C:\Users\me\Documents\csv2.csv"
[string]$MissingId = $MissingLine[0].id_number
[array]$BigCsv = Import-Csv -Path "C:\Users\me\Documents\csv1.csv" |
    Where-Object { $_.id_number -ne $MissingId }

($BigCsv + $MissingLine) |
    Export-Csv -Path "C:\Users\me\Documents\Combined.csv"
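If you'd rather leave csv2.csv without a header row, you could supply the column names to Import-Csv instead (a sketch; the header names are taken from the question):
# Provide the header for the headerless correction file instead of editing it
$header = 'id_number','location_code','category','animal','quantity'
[array]$MissingLine = Import-Csv -Path "C:\Users\me\Documents\csv2.csv" -Header $header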
I have a PowerShell script that I am trying to work out part of. The text input lists the group each user is part of, and the script is supposed to replace that group with the groups I am assigning them in Active Directory (I am limited to only changing groups in Active Directory). My issue is that when it reaches HR and replaces it, it also replaces the HR inside CHRL, so my groups look nuts right now. Looking it over, it doesn't do it on every line, but for gilchrist it will put something in there for the "hr" in the name. Is there anything I can do to keep it from changing those, or am I going to have to change my HR to Human Resources? Thanks for the help.
$lookupTable = @{
    'Admin' = 'W_CHRL_ADMIN_GS,M_CHRL_ADMIN_UD,M_CHRL_SITE_GS'
    'Security' = 'W_CHRL_SECURITY_GS,M_CHRL_SITE_GS'
    'HR' = 'M_CHRL_HR_UD,W_CHRL_HR_GS,M_CHRL_SITE_GS'
}

$original_file = 'c:\tmp\test.txt'
$destination_file = 'c:\tmp\test2.txt'

Get-Content -Path $original_file | ForEach-Object {
    $line = $_
    $lookupTable.GetEnumerator() | ForEach-Object {
        if ($line -match $_.Key)
        {
            $line = $line -replace $_.Key, $_.Value
        }
    }
    $line
} | Set-Content -Path $destination_file

Get-Content $destination_file
test.txt:
user,group
john.smith,Admin
joanha.smith,HR
john.gilchrist,security
aaron.r.smith,admin
abby.doe,secuity
abigail.doe,admin
Your input appears to be in CSV format (though note that your sample rows have trailing spaces, which you'd have to deal with, if they're part of your actual data).
Therefore, use Import-Csv and Export-Csv to read / rewrite your data, which allows a more concise and convenient solution:
Import-Csv test.txt |
    Select-Object user, @{ Name='group'; Expression = { $lookupTable[$_.group] } } |
    Export-Csv -NoTypeInformation -Encoding Utf8 test2.txt
Import-Csv reads the CSV file as a collection of custom objects whose properties correspond to the CSV column values; that is, each object has a .user and .group property in your case.
$_.group therefore robustly reports the abstract group name only, which you can directly pass to your lookup hashtable; Select-Object is used to pass the original .user value through, and to replace the original .group value with the lookup result, using a calculated property.
Export-Csv re-converts the custom objects to a CSV file:
-NoTypeInformation suppresses the (usually useless) data-type-information line at the top of the output file
-Encoding Utf8 was added to prevent potential data loss, because it is ASCII encoding that is used by default.
Note that Export-Csv blindly double-quotes all field values, whether they need it or not; that said, CSV readers should be able to deal with that (and Import-Csv certainly does).
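As an aside (not part of the original answer), PowerShell 7+ lets you relax that blanket quoting with the -UseQuotes parameter:
# PowerShell 7+ only: quote fields only when they actually need it
Import-Csv test.txt |
    Select-Object user, @{ Name='group'; Expression = { $lookupTable[$_.group] } } |
    Export-Csv -NoTypeInformation -Encoding Utf8 -UseQuotes AsNeeded test2.txt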
As for what you tried:
The -replace operator replaces all occurrences of a given regex (regular expression) in the input.
Your regexes amount to looking for (case-insensitive) substrings, which explains why HR matches both the HR group name and the substring hr in the username gilchrist.
A simple workaround would be to add assertions to your regex so that the substrings only match where you want them; e.g.: ,HR$ would only match after a , at the end of a line ($).
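For instance, a minimal sketch of that anchored replace inside your existing loop:
# Only replace the key when it is the entire last field on the line
$pattern = ',' + [regex]::Escape($_.Key) + '$'
if ($line -match $pattern) {
    $line = $line -replace $pattern, (',' + $_.Value)
}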
However, your approach of enumerating the hashtable keys for each input CSV row is inefficient, and you're better off splitting off the group name and doing a straight lookup based on it:
# Split the row into fields.
$fields = $line -split ','
# Update the group value (last field)
$fields[-1] = $lookupTable[$fields[-1]]
# Rebuild the line
$line = $fields -join ','
Note that you'd have to make an exception for the header row (e.g., test if the lookup result is empty and refrain from updating, if so).
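Putting that together, a rough sketch of the line-based version (using the variables from your script; it assumes the group is always the last field and trims any trailing spaces):
Get-Content -Path $original_file | ForEach-Object {
    $fields = $_ -split ','
    # Look up the last field; the header row ("group") and unknown groups return nothing and are left as-is
    $newGroup = $lookupTable[$fields[-1].Trim()]
    if ($newGroup) { $fields[-1] = $newGroup }
    $fields -join ','
} | Set-Content -Path $destination_file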
Why don't you load your text file as a CSV file, using Import-Csv with "," as the delimiter?
This will give you PowerShell objects you can work on, and you can then export them as text or CSV. Using your file and lookup table, this code may help you:
$file = Import-Csv -Delimiter "," -Path "c:\ps\test.txt"
$lookupTable = @{
    'Admin' = 'W_CHRL_ADMIN_GS,M_CHRL_ADMIN_UD,M_CHRL_SITE_GS'
    'Security' = 'W_CHRL_SECURITY_GS,M_CHRL_SITE_GS'
    'HR' = 'M_CHRL_HR_UD,W_CHRL_HR_GS,M_CHRL_SITE_GS'}
foreach ($i in $file) {
    #Compare and replace
    ...
}
#Export the updated objects when done (adjust the output path as needed)
$file | Export-Csv -Path "c:\ps\test2.txt" -Delimiter "," -NoTypeInformation
You can then iterate over $file to compare and replace, and Export-Csv after you're done.
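For example, the compare-and-replace inside that loop could look something like this (a sketch; it assumes the group column is named group, as in the sample file):
foreach ($i in $file) {
    # Swap the group value when the lookup table has a mapping for it
    if ($lookupTable.ContainsKey($i.group)) {
        $i.group = $lookupTable[$i.group]
    }
}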
I have limited experience with PowerShell, doing very basic tasks (such as simple renaming or moving of files), but I've never written a script that needs to extract information from inside a file and apply that data directly to a file name.
I'd like to create a script that references a simple .csv or text file containing a list of unique identifiers and assigns those names to a batch of duplicated files (they all have the same contents) whose names differ only by a 3-digit number prefixed to a generic name.
For example, let's say my list of files are something like this:
001_test.txt
002_test.txt
003_test.txt
004_test.txt
005_test.txt
etc.
Then my .csv contains an alphabetical list of what I would like those to become:
Alpha.txt
Beta.txt
Charlie.txt
Delta.txt
Echo.txt
etc.
I tried looking at similar examples, but I'm failing miserably trying to tailor them to get it to do the above.
EDIT: I didn't save what I already modified, but here is the baseline script I was messing with:
$file_server = Read-Host "Enter the file server IP address"
$rootFolder = 'C:\TEMP\GPO\source\5'
Get-ChildItem -LiteralPath $rootFolder -Directory |
    Where-Object { $_.Name -as [System.Guid] } |
    ForEach-Object {
        $directory = $_.FullName
        (Get-Content "$directory\gpreport.xml") |
            ForEach-Object { $_ -replace "99.999.999.999", $file_server } |
            Set-Content "$directory\gpreport.xml"
        # ... etc
    }
I think this is to replace a string inside a file though. I need to replace the file name itself using a list from another file (that is not getting renamed), while not changing the contents of the files that are being renamed.
So you want to rename similar files with those listed in a text file. OK, here's what you are going to need for my solution (aliases listed in parentheses): Get-Content (GC), Get-ChildItem (GCI), Where-Object (?), Rename-Item, ForEach-Object (%).
$NewNames = GC c:\temp\Namelist.txt #Path, including file name, to list of new names
$Name = "dog.txt" #File name without the 001_ prefix
$Path = "C:\Temp" #Path to search
$i=0
GCI $path | ?{$_.Name -match "\d{3}_$Name"}|%{Rename-Item $_.FullName $NewNames[$i];$i++}
Tested as working. That gets your list of new names and saves it as an array. Then it defines your file name, path, and sets $i to 0 as a counter. Then for each file that matches your pattern it renames it based off of item number $i in the array of new names, and then increments $i up one number and moves to the next file.
I haven't tested this, but it should be pretty close. It assumes you have a CSV with a column named FileNames and that you have at least as many names in that list as there are on disk.
$newNames = Import-Csv newfilenames.csv | Select -ExpandProperty FileNames
$existingFiles = Get-ChildItem c:\someplace
for ($i = 0; $i -lt $existingFiles.count; $i++)
{
    Rename-Item -Path $existingFiles[$i].FullName -NewName $newNames[$i]
}
Basically, you create two arrays and use a basic for loop, stepping through the list of files on disk and pulling the name from the corresponding index in the newNames array.
Does your CSV file map the identifiers to the file names?
Identifier,NewName
001,Alpha
002,Beta
If so, you'll need to look up the identifier before renaming the file:
# Define the naming convention
$Suffix = '_test'
$Extension = 'txt'
# Get the files and what to rename them to
$Files = Get-ChildItem "*$Suffix.$Extension"
$Csv = Import-Csv 'Names.csv'
# Rename the files
foreach ($File in $Files) {
    $NewName = ($Csv | Where-Object { $File.Name -match '^' + $_.Identifier } | Select-Object -ExpandProperty NewName)
    Rename-Item $File "$NewName.$Extension"
}
If your CSV file is just a sequential list of filenames, logicaldiagram's answer is probably more along the lines of what you're looking for.