PowerShell: count matching variables in a single column

I have a CSV file that I do a lot of work on, and my most recent task is to add a summary sheet.
With that said, I have a CSV file I pull from a website and send through a lot of checks. Code below:
$Dups = import-csv 'C:\Working\cylrpt.csv' | Group-Object -Property 'Device Name'| Where-Object {$_.count -ge 2} | ForEach-Object {$_.Group} | Select @{Name="Device Name"; Expression={$_."Device Name"}},@{Name="MAC"; Expression={$_."Mac Addresses"}},Zones,@{Name="Agent"; Expression={$_."Agent Version"}},@{Name="Status"; Expression={$_."Is online"}}
$Dups | Export-CSV $working\temp\01-Duplicates.csv -NoTypeInformation
$csvtmp = Import-CSV $working\cylrpt.csv | Select @{N='Device';E={$_."Device Name"}},@{N='OS';E={$_."OS Version"}},Zones,@{N='Agent';E={$_."Agent Version"}},@{N='Active';E={$_."Is Online"}},@{N='Checkin';E={[DateTime]$_."Online Date"}},@{N='Checked';E={[DateTime]$_."Offline Date"}},Policy
$csvtmp | %{
if ($_.Zones -eq ""){$_.Zones = "Unzoned"}
}
$csvtmp | Export-Csv $working\cy.csv -NoTypeInformation
import-csv $working\cy.csv | Select Device,policy,OS,Zones,Agent,Active,Checkin,Checked | % {
$_ | Export-CSV -path $working\temp\$($_.Zones).csv -notypeinformation -Append
}
The first check is for duplicates. I used separate lines of code for this because I wanted to create a CSV of the duplicates.
The second check backfills all blank cells under the Zones column with "Unzoned".
The third step goes through the entire CSV file and creates a CSV file for each Zone.
So this is my base. I need to add another CSV file for a Summary of the Zone information. The Zones are in the format of XXX-WS or XXX-SRV, where XXX can be between 3 and 17 letters.
I would like the Summary sheet to look like this
ABC ###
ABC-WS ##
ABC-SRV ##
DEF ###
DEF-WS ##
DEF-SRV ##
My thought is to either do the count from the original CSV file, or to count the number of lines in each per-zone CSV file and subtract 1 for the header row (a rough sketch of that is below, after my counting attempt).
Now the Zones are dynamic, so I can't just ask for zone XYZ, because that zone may not exist.
So what I need is to count the rows that share the same zone name in the original file and output that to an array or a file; the file would be my preferred way of getting the number of items with the same zone name. I just don't know how to write the part that looks for and counts matching values. Here is the code I'm trying to use to get the count:
import-csv C:\Working\cylrpt.csv | Group-Object -Property 'Zones'| ForEach-Object {$_.Group} | Select @{N='Device';E={$_."Device Name"}},Zones | % {
$Znum = ($_.Zones).Count
If ($Znum -eq $null) {
$Znum = 1
} else {
$Znum++
}
}
$Count = ($_.Zones),$Znum | Out-file C:\Working\Temp\test2.csv -Append
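For reference, the simpler line-count-minus-header idea mentioned above would look roughly like this (an untested sketch against the per-zone files my script already writes to $working\temp; Import-Csv can also count the data rows directly):
Get-ChildItem "$working\temp" -Filter *.csv | ForEach-Object {
    [PSCustomObject]@{
        Zone  = $_.BaseName
        Count = (Get-Content $_.FullName).Count - 1    # total lines minus the header row
        # Count = (Import-Csv $_.FullName).Count       # equivalent, header-aware
    }
}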
Here is the full code minus the report key:
$cylURL = "https://protect.cylance.com/Reports/ThreatDataReportV1/devices/"
$working = "C:\Working"
Remove-item -literalpath "\\?\C:\Working\Cylance Report.xlsx"
Invoke-WebRequest -Uri $cylURL -outfile $working\cylrpt.csv
$Dups = import-csv 'C:\Working\cylrpt.csv' | Group-Object -Property 'Device Name'| Where-Object {$_.count -ge 2} | ForEach-Object {$_.Group} | Select @{Name="Device Name"; Expression={$_."Device Name"}},@{Name="MAC"; Expression={$_."Mac Addresses"}},Zones,@{Name="Agent"; Expression={$_."Agent Version"}},@{Name="Status"; Expression={$_."Is online"}}
$Dups | Export-CSV $working\temp\01-Duplicates.csv -NoTypeInformation
$csvtmp = Import-CSV $working\cylrpt.csv | Select @{N='Device';E={$_."Device Name"}},@{N='OS';E={$_."OS Version"}},Zones,@{N='Agent';E={$_."Agent Version"}},@{N='Active';E={$_."Is Online"}},@{N='Checkin';E={[DateTime]$_."Online Date"}},@{N='Checked';E={[DateTime]$_."Offline Date"}},Policy
$csvtmp | %{
if ($_.Zones -eq ""){$_.Zones = "Unzoned"}
}
$csvtmp | Export-Csv $working\cy.csv -NoTypeInformation
import-csv $working\cy.csv | Select Device,policy,OS,Zones,Agent,Active,Checkin,Checked | % {
$_ | Export-CSV -path $working\temp\$($_.Zones).csv -notypeinformation -Append
}
cd $working\temp;
Rename-Item "Unzoned.csv" -NewName "02-Unzoned.csv"
Rename-Item "Systems-Removal.csv" -NewName "03-Systems-Removal.csv"
$CSVFiles = Get-ChildItem -path $working\temp -filter *.csv
$Excel = "$working\Cylance Report.xlsx"
$Num = $CSVFiles.Count
Write-Host "Found the following Files: ($Num)"
ForEach ($csv in $CSVFiles) {
Write-host "Merging $CSVFiles.Name"
}
$EXc1 = New-Object -ComObject Excel.Application
$Exc1.SheetsInNewWorkBook = $CSVFiles.Count
$XLS = $EXc1.Workbooks.Add()
$Sht = 1
ForEach ($csv in $CSVFiles) {
$Row = 1
$Column = 1
$WorkSHT = $XLS.WorkSheets.Item($Sht)
$WorkSHT.Name = $csv.Name -Replace ".csv",""
$File = (Get-Content $csv)
ForEach ($line in $File) {
$LineContents = $line -split ',(?!\s*\w+")'
ForEach ($Cell in $LineContents) {
$WorkSHT.Cells.Item($Row,$Column) = $Cell -Replace '"',''
$Column++
}
$Column = 1
$Row++
}
$Sht++
}
$Output = $Excel
$XLS.SaveAs($Output)
$EXc1.Quit()
Remove-Item *.csv
cd ..\

Found the solution:
$Zcount = import-csv C:\Working\cylrpt.csv | where Zones -ne "$null" | select @{N='Device';E={$_."Device Name"}},Zones | group Zones | Select Name,Count
$Zcount | Export-Csv -path C:\Working\Temp\01-Summary.csv -NoTypeInformation
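If the per-prefix totals (the "ABC ###" lines from the example layout) are also needed, the same Group-Object approach can be run on the part of the zone name before the first hyphen. A rough sketch, assuming every zone follows the XXX-WS / XXX-SRV pattern:
$rows     = import-csv C:\Working\cylrpt.csv | where Zones -ne "$null"
$byZone   = $rows | group Zones
$byPrefix = $rows | group { ($_.Zones -split '-')[0] }
$summary  = foreach ($p in ($byPrefix | sort Name)) {
    [PSCustomObject]@{ Name = $p.Name; Count = $p.Count }               # e.g. ABC ###
    $byZone | where { $_.Name -like "$($p.Name)-*" } |
        % { [PSCustomObject]@{ Name = $_.Name; Count = $_.Count } }     # e.g. ABC-WS ##, ABC-SRV ##
}
$summary | Export-Csv -Path C:\Working\Temp\01-Summary.csv -NoTypeInformation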

Related

How do I find text between two words and export it to txt.file

I have a CSV file which contains many lines, and I want to extract the text between <STR_0.005_Long>, and µm,5.000µm.
Example line from the CSV:
Straightness(Up/Down) <STR_0.005_Long>,4.444µm,5.000µm,,Pass,2.476µm,1.968µm,25,0.566µm,0.720µm
This is the script that I am trying to write:
$arr = @()
$path = "C:\Users\georgi\Desktop\5\test.csv"
$pattern = "(?<=.*<STR_0.005_Long>,)\w+?(?=µm,5.000µm*)"
$Text = Get-Content $path
$Text.GetType() | Format-Table -AutoSize
$Text[14] | Foreach {
if ([Regex]::IsMatch($_, $pattern)) {
$arr += [Regex]::Match($_, $pattern)
Out-File C:\Users\georgi\Desktop\5\test.txt -Append
}
}
$arr | Foreach {$_.Value} | Out-File C:\Users\georgi\Desktop\5\test.txt -Append
Use a Where-Object filter with your regular expression and simply output the match to the output file:
Get-Content $path |
Where-Object { $_ -match $pattern } |
ForEach-Object { $matches[0] } |
Out-File 'C:\Users\georgi\Desktop\5\test.txt'
Of course, since you have a CSV, you could simply use Import-Csv and export the value of that particular column:
Import-Csv $path | Select-Object -Expand 'column_name' |
Out-File 'C:\Users\georgi\Desktop\5\test.txt'
Replace column_name with the actual name of the column. If the CSV doesn't have a column header you can specify one via the -Header parameter:
Import-Csv $path -Header 'col1','col2','col3',... |
Select-Object -Expand 'col2' |
Out-File 'C:\Users\georgi\Desktop\5\test.txt'

Combining CSV files in Powershell - different headings

I need to take a slew of csv files from a directory and get them into an array in Powershell (to eventually manipulate and write back to a CSV).
The problem is there are 5 file types. I need around 8 columns from each. The columns are essentially the same, but have different headings.
Is there an easy way to do this? I started creating a custom object with my 8 fields, looping through the files importing each one, looking at the filename (which tells me the column names I need) and then a bunch of ifs to add it to my custom object array.
I was wondering if there is a simpler way...like with a template saying which columns from each file.
I wound up doing this. It may not have been the most efficient, but it works. I ended up writing out each file separately and combining them at the end, as PowerShell really got bogged down (over a million rows combined).
$Newcsv = @()
$path = "c:\scrap\BWFILES\"
$files = gci -path $path -recurse -filter *.csv | Where-Object { ! ($_.psiscontainer) }
$counter=1
foreach($file in $files)
{
$csv = Import-Csv $file.FullName
if ($file.Name -like '*SAV*')
{
$Newcsv = $csv | Select-Object @{Name="PRODUCT";Expression={"SV"}},DMBRCH,DMACCT,DMSHRT
}
if ($file.Name -like '*TIME*')
{
$Newcsv = $csv | Select-Object @{Name="PRODUCT";Expression={"TM"}},TMBRCH,TMACCT,TMSHRT
}
if ($file.Name -like '*TRAN*')
{
$Newcsv = $csv | Select-Object @{Name="PRODUCT";Expression={"TR"}},DMBRCH,DMACCT,DMSHRT
}
if ($file.Name -like '*LN*')
{
$Newcsv = $csv | Select-Object @{Name="PRODUCT";Expression={"LN"}},LNBRCH,LNNOTE,LNSHRT
}
$Newcsv | Export-Csv "C:\scrap\$file.name$counter.csv" -force -notypeinformation
$counter++
}
get-childItem "c:\scrap\*.csv" | foreach {
$filePath = $_
$lines = $lines = Get-Content $filePath
$linesToWrite = switch($getFirstLine) {
$true {$lines}
$false {$lines | Select -Skip 1}
}
$getFirstLine = $false
Add-Content "c:\scrap\combined.csv" $linesToWrite
}
With a hashtable for reference, a little RegEx matching, and using the automatic variable $Matches in a ForEach-Object loop (alias % used) that could all be shortened to:
$path = "c:\scrap\BWFILES\"
$Reference = @{
'SAV' = 'SV'
'TIME' = 'TM'
'TRAN' = 'TR'
'LN'='LN'
}
Set-Content -Value "PRODUCT,BRCH,ACCT,SHRT" -Path 'c:\scrap\combined.csv'
gci -path $path -recurse -filter *.csv | Where-Object { !($_.psiscontainer) -and $_.Name -match ".*(SAV|TIME|TRAN|LN).*"}|%{
$Product = $Reference[($Matches[1])]
Import-CSV $_.FullName | Select-Object @{Name="PRODUCT";Expression={$Product}},*BRCH,@{l='Acct';e={$_.LNNOTE, $_.DMACCT, $_.TMACCT|?{$_}}},*SHRT | ConvertTo-Csv -NoTypeInformation | Select -Skip 1 | Add-Content 'c:\scrap\combined.csv'
}
That should produce the exact same file. The only tricky part was the LNNOTE/TMACCT/DMACCT field, since you obviously can't wildcard it the same way as *SHRT.
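Pulled out on its own, the coalescing idiom behind that Acct column looks roughly like this (a sketch with made-up values; in the real files only one of the three columns exists per row, so the filter leaves exactly one value):
$row  = [PSCustomObject]@{ LNNOTE = $null; DMACCT = '12345'; TMACCT = $null }
$acct = $row.LNNOTE, $row.DMACCT, $row.TMACCT | ?{ $_ }   # keeps only the non-empty value: 12345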

Comparing csv files with -like in Powershell

I have two csv files, each that contain a PATH column. For example:
CSV1.csv
PATH,Data,NF
\\server1\folderA,1,1
\\server1\folderB,1,1
\\server2\folderA,1,1
\\server2\folderB,1,1
CSV2.csv
PATH,User,Access,Size
\\server1\folderA\file1,don,1
\\server1\folderA\file2,don,1
\\server1\folderA\file3,sue,1
\\server2\folderB\file1,don,1
What I'm attempting to do is create a script that will result in separate csv exports based on the paths in CSV1 such that the new files contain file values from CSV2 that match. For example, from the above, I'd end up with 2 results:
result1.csv
\\server1\folderA\file1,don,1
\\server1\folderA\file2,don,1
\\server1\folderA\file3,sue,1
result2.csv
\\server2\folderB\file1,don,1
Previously I've used a script like this when the two values match exactly:
$reportfile = import-csv $apireportoutputfile -delimiter ';' -encoding unicode
$masterlist = import-csv $pathlistfile
foreach ($record in $masterlist)
{
$path=$record.Path
$filename = $path -replace '\\','_'
$filename = '.\Working\sharefiles\' + $filename + '.csv'
$reportfile | where-object {$_.path -eq $path} | select FilePath,UserName,LastAccessDate,LogicalSize | export-csv -path $filename
write-host " Creating files list for $path" -foregroundcolor red -backgroundcolor white
}
However, since the two path values are not the same, it returns nothing. I found the -like operator but am not sure how to use it in this code to get the results I want: Where-Object is a filter, while -like just returns true/false. Am I on the right track? Any ideas for a solution?
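For reference, -like is just a comparison operator, so it can be used inside the Where-Object script block itself. A sketch only, reusing the variable names from the script above (the answers below take a different route):
$reportfile | Where-Object { $_.path -like "$path\*" } |
    Select-Object FilePath,UserName,LastAccessDate,LogicalSize |
    Export-Csv -Path $filename -NoTypeInformation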
Something like this, maybe?
$ht = @{}
Import-Csv csv1.csv |
foreach { $ht[$_.path] = New-Object collections.arraylist }
Import-Csv csv2.csv |
foreach {
$path = $_.path | Split-Path -Parent
$ht[$path].Add($_) > $null
}
$i=1
$ht.Values |
foreach { if ($_.count)
{
$_ | Export-Csv "result$i.csv" -NoTypeInformation
$i++
}
}
My suggestion:
$1=ipcsv .\csv1.CSV
$2=ipcsv .\csv2.CSV
$equal = diff ($2|select @{n='PATH';e={Split-Path $_.PATH}}) $1 -Property PATH -IncludeEqual -ExcludeDifferent -PassThru
0..(-1 + $equal.Count) | % {
$i = $_
$2 | ?{ (Split-Path $_.PATH) -eq $equal[$i].PATH } | epcsv ".\Result$i.CSV"
}

Powershell csv remove lines

I have a CSV file (file1) that looks like: (User dirs and the size)
Initials,Size
User1,10
User2,100
User3,131
User4,140
I have another CSV file (file2) that looks like: (VIP users)
User2
User4
Now what I'm trying to do is update file1 so it looks like:
User1,10
User3,131
User2 and User4 are removed because they are in file2.
I can get them removed, but at the same time I remove the size for all users, so my output contains only the users:
User1
User3
My code:
$SourcePath = "\\server1\info\SYSINFO\UsrSize"
$DestinationFile = "\\server1\info\SYSINFO\UsrSize\OverLimit\UsersOverLimit1.log"
$VIP_Exclusion_List = "\\server1\info\SYSINFO\UsrSize\OverLimit\_VIP_EXCLUSION_LIST.txt"
$Database = "\\server1\info\SYSINFO\UsrSize\OverLimit\_UsersOverLimitDATABASE.log"
$INT_SizeToLookFor = 100
dir $SourcePath -Filter usr*.txt | import-csv -delimiter "`t" |
Where-Object {[INT] $_."Size excl. Backup/Pst" -ge $INT_SizeToLookFor} |
Select-Object Initials,"Size excl. Backup/Pst" | convertto-csv -NoTypeInformation | % { $_ -replace '"', ""} | out-file $DestinationFile ;
$Userlist = import-csv $DestinationFile | Select-Object Initials |
convertto-csv -NoTypeInformation | % { $_ -replace '"', ""};
compare-object ($Userlist) (get-content $VIP_Exclusion_List) |
select-object inputObject | convertto-csv -NoTypeInformation |
% { $_ -replace '"', ""} | out-file "\\server1\info\SYSINFO\UsrSize\OverLimit\UsersOverLimitThisTime.log";
If the files are small-ish and you don't care too much about performance, then the following would be a trivial way:
$data = Import-Csv file1
$vips = Get-Content file2   # file2 is just a list of names with no header row
$data = $data | ?{ $vips -notcontains $_.Initials }
$data | Export-Csv file1_new -NoTypeInformation
A faster way would be to add the names to remove to a set, but given the things you're talking about here I doubt you'll get into the range of a few thousand or million users.
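A rough sketch of that set-based variant, using the same hypothetical file names as above, in case the lists ever do get large:
# HashSet membership tests avoid re-scanning the whole VIP list for every row (::new is PowerShell 5+ syntax).
$vips = [System.Collections.Generic.HashSet[string]]::new(
            [string[]](Get-Content file2),
            [System.StringComparer]::OrdinalIgnoreCase)
$data = Import-Csv file1 | Where-Object { -not $vips.Contains($_.Initials) }
$data | Export-Csv file1_new -NoTypeInformation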
I solved it using this code:
$ArrayVIP = get-content $VIP_Exclusion_List
select-string $DestinationFile -pattern $ArrayVIP -notmatch |
select -expand line |
out-file $DestinationFile
Taken from here: Removing lines from a CSV

Format results in table

My script below searches for a specific part number (459279) recursively through a number of txt files.
set-location C:\Users\admin\Desktop\PartNumbers\
$exclude = @('PartNumbers.txt','*.ps1')
$Line = [Environment]::NewLine
Get-ChildItem -Recurse -Exclude $exclude | select-string 459279 | group count | Format-Table -Wrap -AutoSize -Property Count,
@{Expression={$_.Group | foreach { $_.filename} | Foreach {"$_$Line"} }; Name="Filename"},
@{Expression={$_.Group | foreach {$_.LineNumber} | Foreach {"$_$Line"} }; Name="LineNumbers"} | Out-File 459279Results.txt
My Results are:
Count Filename LineNumbers
----- -------- -----------
2 {Customer1.txt {2
, Customer2.txt , 3
} }
My ideal results would be if this is possible:
Part Number: 459279
Count: 2
Filename LineNumbers
-------- -----------
Customer1.txt 2
Customer2.txt 3
I have manually retrieved the part number '459279' from "PartNumbers.txt" and searched for it using the script.
I cannot seem to remove/replace the braces and commas to present a clean list.
What I hope to eventually do is to recursively search through "PartNumbers.txt" and produce a report with each part number appended to the next in the style mentioned above.
PartNumbers.txt is formatted:
895725
939058
163485
459279
498573
Customer*.txt are formatted:
163485
459279
498573
Something like this should work:
$exclude = 'PartNumbers.txt', '*.ps1'
$root = 'C:\Users\admin\Desktop\PartNumbers'
$outfile = Join-Path $root 'loopResults.txt'
Get-Content (Join-Path $root 'PartNumbers.txt') | % {
$partno = $_
$found = Get-ChildItem $root -Recurse -Exclude $exclude `
| ? { -not $_.PSIsContainer } `
| Select-String $partno `
| % {
$a = $_ -split ':'
New-Object -Type PSCustomObject -Property @{
'Filename' = Split-Path -Leaf $a[1];
'LineNumbers' = $a[2]
}
}
"Part Number: $partno"
'Count: ' + @($found).Count
$found | Format-Table Filename, LineNumbers
} | Out-File $outfile -Append