Powershell Substring Setting Specific Cell Length

I've been working on trimming the data in a specific field (Results) of each row in a CSV file down to a specific length, but I keep getting an "Overload for Substring" error. Any ideas?
$Csv = Import-Csv $FileIn
$CsvNew = ForEach($Row in $Csv){
$Row.Results.Substring((0,[System.Math]::Min(254,$Row.Results.Length)))}

Looks like a simple mistake in your Substring usage.
Remove the extra pair of parentheses: wrapping the two arguments in an additional set of parentheses turns them into a single array argument, which is why no matching Substring overload is found.
From :
$Row.Results.Substring((0,[System.Math]::Min(254,$Row.Results.Length)))}
To :
$Row.Results.Substring(0,[System.Math]::Min(254,$Row.Results.Length))}
For the sake of readability, you could have put your substring length into a variable; the error would then have been even more obvious.
$Csv = Import-Csv $FileIn
$CsvNew = ForEach($Row in $Csv)
{
$MaxLength = [System.Math]::Min(254,$Row.Results.Length)
$Row.Results.Substring(0,$MaxLength )
}
Edit:
Finally, please note that $CsvNew (which I took from your example) only collects the trimmed strings; it does not modify the rows themselves.
If you want to edit the CSV row content, use this instead:
$Csv = Import-Csv $FileIn
$Csv | ForEach-Object {
$MaxLength = [System.Math]::Min(254,$_.Results.Length)
$_.Results = $_.Results.Substring(0,$MaxLength)
}
This last snippet actually edits the $Csv variable content, trimming your Results column to a maximum of 254 characters. (It won't be written back to the file, though; for that, export the updated $Csv content using the Export-Csv cmdlet.)
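For completeness, a minimal sketch of that export step (assuming a $FileOut path variable of your choosing):
# Write the trimmed rows back to disk; $FileOut is a placeholder path
$Csv | Export-Csv -Path $FileOut -NoTypeInformation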

Related

File data not displaying in table form using Powershell Script

I have been grabbing the file data from student_data.dat and trying to display it in tabular form.
The first 3 lines of the file are written like this:
Jamie Zawinski,78.8,81.0,77.3,80.0,80.0,77.0
Adam Douglas,86.2,69.0,77.8,81.0,87.5,88.0
Wallace Steven,66.2,68.0,91.3,78.6,80.3,86.4
I wish to set it up as a table with headers of Student Name, Assignment-1, Assignment-2, etc. Later I will manipulate the data to calculate the class average for each assignment and each student's overall average in the course so far. Every method of setting up the table that I have tried results in the file being displayed as:
@{Name=Jamie Zawinski; Assignment-1=78.8; Assignment-2=81.0; Assignment-3=77.3; Assignment-4=80.0;
Midterm_Exam=80.0; Final_Exam=77.0} @{Name=Adam Douglas; Assignment-1=86.2; Assignment-2=69.0;
Assignment-3=77.8; Assignment-4=81.0; Midterm_Exam=87.5; Final_Exam=88.0} @{Name=Wallace Steven;
Assignment-1=66.2; Assignment-2=68.0; Assignment-3=91.3; Assignment-4=78.6; Midterm_Exam=80.3;
Final_Exam=86.4}
My coding has looked like this:
$file = Import-Csv C:\**real path**\student_data.dat -Delimiter ',' -Header 'Name', 'Assignment-1','Assignment-2','Assignment-3','Assignment-4','Midterm_Exam','Final_Exam'
Write-Host $file
I tried adding:
foreach ($line in $file){
$data += [pscustomobject]@{
Name = $line.name
Assignment-1 = $line.Assignment-1
Assignment-2 = $line.Assignment-2
}
}
and writing instead:
$filedata = Get-Content ./student_data.dat
$newline = $filedata.Split("`n")
$newline.count
foreach ($l in $newline){
$Names = $l.Split(",")[0].Trim()
$Assignment-1 = $l.Split(",").Trim()
[pscustomobject]@{
Names = $Names;
Assignment-1 = $Assignment-1
}
}
but errors occurred.
First of all, note that you get more readable output if you use Write-Output instead of Write-Host, which tries to push the entire input into a single string. (Or just remove it altogether; $file on its own is equivalent to Write-Output $file.)
As Matthias R. Jessen commented, you are probably looking for the Format-Table command:
$path = "C:\**real path**\student_data.dat"
$header = 'Name', 'Assignment-1','Assignment-2','Assignment-3','Assignment-4','Midterm_Exam','Final_Exam'
$data = Import-Csv $path -Delimiter ',' -Header $header
$data | Format-Table
Note that this just "pretty-prints" the data for display in the console. The data itself is not changed, and you should not use Format-Table for any kind of file output; it is also not really necessary for working with the data. Once you're done manipulating your data, you can export it back as CSV:
$outpath = "C:\**real path**\student_data_result.csv"
$data | Export-Csv $outpath -Delimiter "," -NoTypeInformation
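Not part of the original answer, but as a rough sketch of the kind of manipulation mentioned above (reusing the $data and $header variables from the snippet just shown), the class average for one assignment and each student's overall average could be computed like this:
# Class average for a single assignment (Import-Csv yields strings, so cast to [double])
$avg1 = ($data | ForEach-Object { [double]$_.'Assignment-1' } | Measure-Object -Average).Average
"Class average for Assignment-1: $avg1"

# Each student's overall average across all score columns
$report = foreach ($row in $data) {
    $scores = $header | Where-Object { $_ -ne 'Name' } | ForEach-Object { [double]($row.$_) }
    [pscustomobject]@{
        Name    = $row.Name
        Average = ($scores | Measure-Object -Average).Average
    }
}
$report | Format-Table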

Remove Columns in multiple CSVs POWERSHELL [duplicate]

I need to remove several columns from a CSV file without importing the CSV file in Powershell. Below is an example of my input CSV and what I hope the output CSV can look like.
Input.csv
A,1,2,3,4,5
B,6,7,8,9,10
C,11,12,13,14,15
D,15,16,17,18,19,20
Idealoutput.csv
A,3,5
B,8,10
C,13,15
D,17,20
I have tried doing this with the following code, but it is giving me plenty of errors and saying that I cannot use the "Delete" method this way (which I have done in the past)... Any ideas?
$Workbook1 = $Excel.Workbooks.open($file.FullName)
$header = $Workbook1.ActiveSheet.Range("A1:A68").EntireRow
$unneededcolumns1 = $Workbook1.ActiveSheet.Range("A1:O1").EntireColumn
$unneededcolumns2 = $Workbook1.ActiveSheet.Range("B1:K1").EntireColumn
$unneededcolumns3 = $Workbook1.ActiveSheet.Range("F1:I1").EntireColumn
$unneededcolumns4 = $Workbook1.ActiveSheet.Range("G1:I1").EntireColumn
$unneededcolumns5 = $Workbook1.ActiveSheet.Range("H1:O1").EntireColumn
$unneededcolumns6 = $Workbook1.ActiveSheet.Range("J1:AL1").EntireColumn
$unneededcolumns7 = $Workbook1.ActiveSheet.Range("K1").EntireColumn
$unneededcolumns8 = $Workbook1.ActiveSheet.Range("L1:AK1").EntireColumn
$unneededcolumns9 = $Workbook1.ActiveSheet.Range("F1:I1").EntireColumn
$unneededcolumns10 = $Workbook1.ActiveSheet.Range("M1:AB1").EntireColumn
$unneededcolumns11 = $Workbook1.ActiveSheet.Range("N1:X1").EntireColumn
$unneededcolumns12 = $Workbook1.ActiveSheet.Range("O1:BA1").EntireColumn
$unneededcolumns13 = $Workbook1.ActiveSheet.Range("P1:U1").EntireColumn
$header.Delete()
$unneededcolumns1.Delete()
$unneededcolumns2.Delete()
$unneededcolumns3.Delete()
$unneededcolumns4.Delete()
$unneededcolumns5.Delete()
$unneededcolumns6.Delete()
$unneededcolumns7.Delete()
$unneededcolumns8.Delete()
$unneededcolumns9.Delete()
$unneededcolumns10.Delete()
$unneededcolumns11.Delete()
$unneededcolumns12.Delete()
$unneededcolumns13.Delete()
$Workbook1.SaveAs("\\output.csv")
I am just going to add this anyway, since I hope to convince you how easy it is to avoid having to use Excel.
$source = "c:\temp\file.csv"
$destination = "C:\temp\newfile.csv"
(Import-CSV $source -Header 1,2,3,4,5,6 |
Select "1","4","6" |
ConvertTo-Csv -NoTypeInformation |
Select-Object -Skip 1) -replace '"' | Set-Content $destination
We assign arbitrary headers to the object so that we can refer to the 1st, 4th and 6th columns by position. Once exported, the file will have the following contents, which match what I think you want rather than what you had in the question. Your last input line had an extra value (20) on it; I don't know whether that was on purpose or not.
A,3,5
B,8,10
C,13,15
D,17,19
If this is not viable I am really interested as to why.
Excel Approach
Alright, so the file is enormous and Import-Csv is not a viable option. Keeping with your Excel idea, I came up with this. It takes column indexes and deletes any column whose index is not among them.
Wait, you say, that won't work, since the column indexes change as you remove columns? Using the indices we want to keep, we compute the inverse set to delete, based on the used range of the sheet. We then reduce each of those to-delete indices by its array position, because by the time a column is actually deleted, the remaining columns have already shifted left to account for the earlier deletions.
$file = "c:\temp\file.csv"
$ColumnsToKeep = 1,4,6
# Create the com object
$excel = New-Object -comobject Excel.Application
$excel.DisplayAlerts = $False
$excel.visible = $False
# Open the CSV File
$workbook = $excel.Workbooks.Open($file)
$sheet = $workbook.Sheets.Item(1)
# Determine the number of columns in use
$maxColumns = $sheet.UsedRange.Columns.Count
$ColumnsToRemove = Compare-Object $ColumnsToKeep (1..$maxColumns) | Where-Object{$_.SideIndicator -eq "=>"} | Select-Object -ExpandProperty InputObject
0..($ColumnsToRemove.Count - 1) | %{$ColumnsToRemove[$_] = $ColumnsToRemove[$_] - $_}
$ColumnsToRemove | ForEach-Object{
[void]$sheet.Cells.Item(1,$_).EntireColumn.Delete()
}
# Save the edited file; 6 = xlCSV file format
$workbook.SaveAs("C:\temp\newfile.csv", 6)
# Close excel and release the com object.
$workbook.Close($true)
$excel.Quit()
[void][System.Runtime.Interopservices.Marshal]::ReleaseComObject($excel)
Remove-Variable excel
I was having issues with Excel remaining open even after reading up on the "correct" way to close it. The inner logic is what is important. Don't forget to change your paths as needed.
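To make the index-adjustment logic concrete, here is a tiny standalone illustration (not part of the original answer), using the 6-column layout from the question's sample data:
$ColumnsToKeep = 1,4,6
$maxColumns    = 6

# Columns 2, 3 and 5 are the ones to delete
$ColumnsToRemove = Compare-Object $ColumnsToKeep (1..$maxColumns) |
    Where-Object { $_.SideIndicator -eq "=>" } |
    Select-Object -ExpandProperty InputObject

# Reduce each index by its array position, because every earlier deletion
# shifts the remaining columns one step to the left
0..($ColumnsToRemove.Count - 1) | ForEach-Object { $ColumnsToRemove[$_] = $ColumnsToRemove[$_] - $_ }

$ColumnsToRemove   # 2, 2, 3 -- the positions to delete, in order, after each shift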
Here's a better approach that I use, though it's not the most performant on large files. Both snippets below have been tested on 1 GB files.
Powershell:
Import-Csv '.\inputfile.csv' |
select ColumnName1,ColumnName2,ColumnName3 |
Export-Csv -Path .\outputfile.csv -NoTypeInformation
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/export-csv?view=powershell-5.1
If you want to get rid of those pesky quotes that the tool adds, upgrade to Powershell 7.
Powershell 7+:
Import-Csv '.\inputfile.csv'
| select ColumnName1,ColumnName2,ColumnName3
| Export-Csv -Path .\outputfile.csv -NoTypeInformation -UseQuotes Never
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/export-csv?view=powershell-7

Powershell replace text once per line

I have a PowerShell script that I am trying to work out part of. The text input to it lists the group each user belongs to, and the script is supposed to replace that group with the groups I am assigning them in Active Directory (I am limited to only changing groups in Active Directory). My issue is that when it reaches HR and replaces it, it goes on to replace the HR inside the newly inserted values as well, because it also matches the HR in CHRL, so my groups look nuts right now. Looking it over, it doesn't do this on every line, but for gilchrist it substitutes something for the hr in the name. Is there anything I can do to stop it from changing those, or am I going to have to rename HR to Human Resources? Thanks for the help.
$lookupTable = @{
'Admin' = 'W_CHRL_ADMIN_GS,M_CHRL_ADMIN_UD,M_CHRL_SITE_GS'
'Security' = 'W_CHRL_SECURITY_GS,M_CHRL_SITE_GS'
'HR' = 'M_CHRL_HR_UD,W_CHRL_HR_GS,M_CHRL_SITE_GS'
}
$original_file = 'c:\tmp\test.txt'
$destination_file = 'c:\tmp\test2.txt'
Get-Content -Path $original_file | ForEach-Object {
$line = $_
$lookupTable.GetEnumerator() | ForEach-Object {
if ($line -match $_.Key)
{
$line = $line -replace $_.Key, $_.Value
}
}
$line
} | Set-Content -Path $destination_file
Get-Content $destination_file
test.txt:
user,group
john.smith,Admin
joanha.smith,HR
john.gilchrist,security
aaron.r.smith,admin
abby.doe,secuity
abigail.doe,admin
Your input appears to be in CSV format (though note that your sample rows have trailing spaces, which you'd have to deal with, if they're part of your actual data).
Therefore, use Import-Csv and Export-Csv to read / rewrite your data, which allows a more concise and convenient solution:
Import-Csv test.txt |
Select-Object user, @{ Name='group'; Expression = { $lookupTable[$_.group] } } |
Export-Csv -NoTypeInformation -Encoding Utf8 test2.txt
Import-Csv reads the CSV file as a collection of custom objects whose properties correspond to the CSV column values; that is, each object has a .user and a .group property in your case.
$_.group therefore robustly reports the abstract group name only, which you can directly pass to your lookup hashtable; Select-Object is used to pass the original .user value through, and to replace the original .group value with the lookup result, using a calculated property.
Export-Csv re-converts the custom objects to a CSV file:
-NoTypeInformation suppresses the (usually useless) data-type-information line at the top of the output file
-Encoding Utf8 was added to prevent potential data loss, because it is ASCII encoding that is used by default.
Note that Export-Csv blindly double-quotes all field values, whether they need it or not; that said, CSV readers should be able to deal with that (and Import-Csv certainly does).
As for what you tried:
The -replace operator replaces all occurrences of a given regex (regular expression) in the input.
Your regexes amount to case-insensitive substring searches, which explains why HR matches both the HR group name and the hr substring in the username gilchrist.
A simple workaround would be to add assertions to your regex so that the substrings only match where you want them to; e.g., ,HR$ would only match HR when it is preceded by a , and sits at the end of the line ($).
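As an illustration of that anchoring idea (a hedged sketch, not code from the question):
$lookupTable = @{ 'HR' = 'M_CHRL_HR_UD,W_CHRL_HR_GS,M_CHRL_SITE_GS' }

# ',HR$' only matches HR as the whole group field at the end of the line,
# so the 'hr' inside 'gilchrist' is never touched
'joanha.smith,HR'         -replace ',HR$', (',' + $lookupTable['HR'])   # replaced
'john.gilchrist,security' -replace ',HR$', (',' + $lookupTable['HR'])   # unchanged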
However, your approach of enumerating the hashtable keys for each input CSV row is inefficient, and you're better off splitting off the group name and doing a straight lookup based on it:
# Split the row into fields.
$fields = $line -split ','
# Update the group value (last field)
$fields[-1] = $lookupTable[$fields[-1]]
# Rebuild the line
$line = $fields -join ','
Note that you'd have to make an exception for the header row (e.g., test if the lookup result is empty and refrain from updating, if so).
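Putting those pieces together, a minimal sketch of the split-based approach (assuming the same file paths and lookup table as in the question) might look like this:
$lookupTable = @{
    'Admin'    = 'W_CHRL_ADMIN_GS,M_CHRL_ADMIN_UD,M_CHRL_SITE_GS'
    'Security' = 'W_CHRL_SECURITY_GS,M_CHRL_SITE_GS'
    'HR'       = 'M_CHRL_HR_UD,W_CHRL_HR_GS,M_CHRL_SITE_GS'
}

Get-Content -Path 'c:\tmp\test.txt' | ForEach-Object {
    $fields = $_ -split ','
    # Hashtable lookups are case-insensitive, so 'admin' finds 'Admin'
    $newGroup = $lookupTable[$fields[-1].Trim()]
    if ($newGroup) {   # the header row and unknown groups fall through unchanged
        $fields[-1] = $newGroup
    }
    $fields -join ','
} | Set-Content -Path 'c:\tmp\test2.txt'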
Why don't you load your text file as a CSV file, using Import-Csv with "," as the delimiter?
This gives you a PowerShell object you can work on, which you can then export as text or CSV. Using your file and lookup table, this code may help you:
$file = Import-Csv -Delimiter "," -Path "c:\ps\test.txt"
$lookupTable = @{
'Admin' = 'W_CHRL_ADMIN_GS,M_CHRL_ADMIN_UD,M_CHRL_SITE_GS'
'Security' = 'W_CHRL_SECURITY_GS,M_CHRL_SITE_GS'
'HR' = 'M_CHRL_HR_UD,W_CHRL_HR_GS,M_CHRL_SITE_GS'}
foreach ($i in $file) {
#Compare and replace
...
}
$file | Export-Csv -Path "c:\ps\test2.txt" -Delimiter ","
You can then iterate over $file to compare and replace, and Export-Csv the result when you're done.
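For illustration, the compare-and-replace loop body hinted at above could look something like this (one possible sketch):
foreach ($i in $file) {
    # Compare and replace: swap the group for its lookup value when one exists
    if ($lookupTable.ContainsKey($i.group)) {
        $i.group = $lookupTable[$i.group]
    }
}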

Data manipulation in PowerShell

I'm wondering if anyone has any suggestions on how to handle what I want to do in PowerShell.
I have this data in a text file:
"0003233","9/1/2017","0241902","$12,145.05"
"FGENERAL","MY VENDOR","VENDOR COMPANY INC.",""
"1","Check(s)","Checks Total:","$12,145.05"
I want to run PowerShell to make it look like this:
"0003233","9/1/2017","0241902","MY VENDOR","VENDOR COMPANY INC.","$12,145.05"
I have experience with simpler data manipulation, but I'm stumped on how to handle this one. Can anyone suggest something?
Thanks
Get the contents from the file.
Use Select-String with a regex to split the string at the quotes.
Use the resulting string array to build your final output.
$string = Get-Content "C:\Test\Test.txt"
$StringArray = Select-String "([`"'])(?:(?=(\\?))\2.)*?\1" -input $string -AllMatches | Foreach {$_.matches.Value}
write-output "$($StringArray[0]),$($StringArray[1]),$($StringArray[2]),$($StringArray[5]),$($StringArray[6]),$($StringArray[11])"
You could use Get-Content to read the text file in. At that point you have an array of lines. If you know for sure the order of the lines are the same each time then you can create a new array or string, depending on your needs, from the text file array.
$textFile = Get-Content -Path "C:\..." #reads in the text file
$lineOne = $textFile[0].Split(",") #splits the first line based on commma, repeat for each line
$formattedLine = $lineOne[0] + "," + $lineOne[5] #creates new string
This would allow you to restructure the data into the format you want.
$Data = Import-Csv .\Data.txt -Header 0,1,2,3
$Data[0]."0", $Data[0]."1", $Data[0]."2", $Data[0]."3", $Data[1]."1", $Data[1]."2", $Data[2]."3" -Join ","

Append text to certain values in text file with PowerShell

I have a CSV text file separated with ; in the following format:
USER_EMPLOYEE_ID;SYSTEM1;USERNAME1
The first column is an identity and the following pairs of columns are the user's accounts on different Active Directories. I have put in garbage data, but the idea is there.
ay7suve0001;ADDPWN;ay7suve0001
AAXMR3E0001;ADDPWN;AAXMR3E0001
ABABIL;ADDPWN;ABABIL
ABDF17;ADDPWN;ABDF17;
ABKMPPE0001;ADDPWN;ABKMPPE0001
ABL1FL;ADDPWN;ABL1FL
AB6JG8E0004;ADDPWN;AB6JG8E0004;
ACB4YB;ADDPWN;ACB4YB
ACK7J9;ADDPWN;ACK7J9
ACLZFS;ADDPWN;ACLZFS;
ACQXZ3;ADDPWN;ACQXZ3
Now there is a requirement that I have to append a fixed string like @ADDPWN.com to all the USERNAME1 values. Some records have a trailing ; and some don't.
Is there a quick way to append @ADDPWN.com to each line, taking care of:
any trailing ;
any value that already ends in @ADDPWN.com
from PowerShell?
Import-Csv is your friend. The following should get you on the right track.
Import-Csv "import.csv" -Delimiter ';' |
foreach {
if ($_.username1 -notlike '*@ADDPWN.com') { $_.username1 += '@ADDPWN.com' }
$_
} |
Export-Csv "export.csv" -Delimiter ';'
This assumes the first line of your csv file is your header line. If it's not, you can pass -Header 'USER_EMPLOYEE_ID','SYSTEM1','USERNAME1' as another parameter to Import-Csv.
Export-Csv adds some extra things, like quotes around every field value, so you may need to play with the output format if you don't want that.
For another explanation how this works, check out Changes last name, first name to first name, last name in last column CSV powershell
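If you do need to drop the quotes on Windows PowerShell (which has no -UseQuotes parameter on Export-Csv), one workaround is to convert to CSV text and strip the quotes before writing. This is only a sketch and assumes none of the field values contain ';' or embedded quotes:
Import-Csv "import.csv" -Delimiter ';' |
    foreach {
        if ($_.username1 -notlike '*@ADDPWN.com') { $_.username1 += '@ADDPWN.com' }
        $_
    } |
    ConvertTo-Csv -Delimiter ';' -NoTypeInformation |
    ForEach-Object { $_ -replace '"' } |
    Set-Content "export.csv"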
This was a solution that worked for me:
#opens list of file names
$file2 ="F:\OneDrive_Biz\PowerApps\SecurityCameraVideoApp\file_list_names.csv"
$x = Get-Content $file2
#appends URL to beginning of each file name in the list
for($i=0; $i -lt $x.Count; $i++){
$x[$i] = "https://analytics-my.sharepoint.com/personal/gpowell_analytics_onmicrosoft_com/Documents/PowerApps/SecurityCameraVideoApp/Video_Files/" + $x[$i]
}
$x
#remove the existing file name list from the target directory prior to saving the new list
get-childitem -path C:\_TEMP\file_list_names.csv | remove-item
Add-Content -Path C:\_TEMP\file_list_names_url.csv -Value $x