Powershell - Import-CSV Group-Object SUM a number from grouped objects and then combine all grouped objects to single rows

I have a question similar to this one but with a twist:
Powershell Group Object in CSV and exporting it
My file has 42 existing headers. The delimiter is a standard comma, and there are no quotation marks in this file.
master_account_number,sub,txn,cur,last,first,address,address2,city,state,zip,ssn,credit,email,phone,cell,workphn,dob,chrgnum,cred,max,allow,neg,plan,downpayment,pmt2,min,clid,cliname,owner,merch,legal,is_active,apply,ag,offer,settle_perc,min_pay,plan2,lstpmt,orig,placedate
The file's data (the first 6 columns) looks like this:
master_account_number,sub,txn,cur,last,first
001,12,35,50.25,BIRD, BIG
001,34,47,100.10,BIRD, BIG
002,56,9,10.50,BUNNY, BUGS
002,78,3,20,BUNNY, BUGS
003,54,7,250,DUCK, DAFFY
004,44,88,25,MOUSE, JERRY
I am only working with the first column master_account_number and the 4th column cur.
I want to check for duplicates in the "master_account_number" column. If duplicates are found, add up the totals from the 4th column "cur" for just those duplicate rows, and then combine those rows into one. The summed value from the duplicates should replace the cur value in the combined row.
With that said, the output should look like this:
master_account_number,sub,txn,cur,last,first
001,12,35,150.35,BIRD, BIG
002,56,9,30.50,BUNNY, BUGS
003,54,7,250,DUCK, DAFFY
004,44,88,25,MOUSE, JERRY
Now that we have that out of the way, here is how this question differs: I want to keep all 42 columns intact in the output file. In the other question I referenced above, the input was 5 columns and the output was 4 columns, which is not what I'm trying to achieve. With so many more headers, I'd hate to have to specify all 42 columns individually; that seems inefficient anyway.
As for what I have so far for code... not much.
$revNB = "\\server\path\example.csv"
$global:revCSV = import-csv -Path $revNB | ? {$_.is_active -eq "Y"}
$dupesGrouped = $revCSV | Group-Object master_account_number | Select-Object @{Expression={ ($_.Group|Measure-Object cur -Sum).Sum }}
Ultimately I want the output to look identical to the input, only the output should merge duplicate account numbers rows, and add all the "cur" values, where the merged row contains the sum of the grouped cur values, in the cur field.
Last Update: Tried Rich's solution and got an error. Modified what he had to this:
$dupesGrouped = $revCSV | Group-Object master_account_number | Select-Object Name, @{Name='curSum'; Expression={ ($_.Group | Measure-Object cur -Sum).Sum}}
And this gets me exactly what my own code got me, so I am still looking for a solution. I need to output this CSV with all 42 headers, even for items with no duplicates.
Other things I've tried:
This doesn't give me the data I need in the columns; the columns are there, but they are blank.
$dupesGrouped = $revCSV | Group-Object master_account_number | Select-Object @{ expression={$_.Name}; label='master_account_number' },
sub_account_number,
charge_txn,
@{Name='current_balance'; Expression={ ($_.Group | Measure-Object current_balance -Sum).Sum }},
last

You're pretty close, but you used current_balance where you probably meant cur.
Here's a start:
$dupesGrouped = $revCSV | Group-Object master_account_number |
Select-Object Name, @{N='curSum'; E={ ($_.Group | Measure-Object cur -Sum).Sum }},
@{N='last'; E={ ($_.Group | Select-Object last -First 1).last }}
You can add the other fields by adding Name/Expression hashtables for each of the fields you want to carry through, as in the sketch below. I assumed you would want to select the first occurrence of the last name for a given master_account_number. The output will be incorrect if the last name differs for the same master_account_number.
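Since the question asks to keep all 42 columns without typing them out individually, one way along the same lines is to build the calculated-property list from the file's own headers. This is a minimal sketch (untested against the real file) that sums cur and carries every other column through from the first row of each group; the string-built scriptblock is there so each property captures its own header name:
$headers = $revCSV[0].psobject.Properties.Name
$props = foreach ($h in $headers) {
    if ($h -eq 'cur') {
        @{ N = 'cur'; E = { ($_.Group | Measure-Object cur -Sum).Sum } }
    }
    else {
        # take the value from the first row of each group
        @{ N = $h; E = [scriptblock]::Create("(`$_.Group | Select-Object -First 1).'$h'") }
    }
}
$dupesGrouped = $revCSV | Group-Object master_account_number | Select-Object $props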

If you only need to change part of the data, there is also the following way:
$dupesGrouped = $revCSV | Group-Object master_account_number | ForEach-Object {
# copy the first row of the group so the original data is not modified
$new = $_.Group[0].psobject.Copy()
# update the value of cur property
$new.cur = ($_.Group | Measure-Object cur -Sum).Sum
# output
$new
}
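Note that .psobject.Copy() makes a shallow copy, which is fine here because Import-Csv produces plain string properties. To write the result back out with all 42 headers intact (the output path below is only an example):
$dupesGrouped | Export-Csv -Path "\\server\path\example_summed.csv" -NoTypeInformation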

Related

How to summarize value rows of one column with reference to another column in PowerShell object

I'm learning to work with the import-excel module and have successfully imported the data from a sample.xlsx file. I need to extract the total amount based on the values of another column. Basically, I want to create a grouped data view where I can store the sum of values next to each type. Here's the sample data view.
Type Amount
level 1 $1.00
level 1 $2.00
level 2 $3.00
level 3 $4.00
level 3 $5.00
Now to import, I'm just using this simple code:
$fileName = "C:\SampleData.xlsx"
$data = Import-Excel -Path $fileName
#extracting distinct type values
$distinctTypes = $importedExcelRows | Select-Object -ExpandProperty "Type" -Unique
#looping through distinct types and storing it in the output
$output = foreach ($type in $distinctTypes)
{
$data | Group-Object $type | %{
New-Object psobject -Property @{
Type = $_.Name
Amt = ($_.Group | Measure-Object 'Amount' -Sum).Sum
}
}
}
$output
The output I'm looking for looks somewhat like:
Type Amount
level 1 $3.00
level 2 $3.00
level 3 $9.00
However, I'm getting nothing in the output. It's $null, I think. Any help is appreciated; I think I'm missing something in the looping.
You're halfway there by using Group-Object for this scenario, kudos on that part. Luckily, you can group by the type at your import and then measure the sum:
$fileName = "C:\SampleData.xlsx"
Import-Excel -Path $fileName | Group-Object -Property Type | % {
$group = $_.Group | % {
$_.Amount = $_.Amount -replace '[^0-9.]'
$_
} | Measure-Object -Property Amount -Sum
[pscustomobject]@{
Type = $_.Name
Amount = "{0:C2}" -f $group.Sum
}
}
Since you can't measure the amount in currency format, you can remove the dollar sign with the regex [^0-9.], which removes everything that is not a number or a . (you could also use ^\$ instead). This allows the amount to be measured, and you can then format it back to currency using the string format operator '{0:C2}' -f ....
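For illustration, here are the two pieces in isolation (output shown assumes an en-US culture):
'$2.00' -replace '[^0-9.]'   # -> 2.00  (dollar sign stripped, still a string)
'{0:C2}' -f 9.5              # -> $9.50 (formatted back to currency)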
I don't know what your issue is, but when the dollar signs are not part of the data you pull from the Excel sheet, it should work as expected ...
$InputCsvData = @'
Type,Amount
level 1,1.00
level 1,2.00
level 2,3.00
level 3,4.00
level 3,5.00
'@ |
ConvertFrom-Csv
$InputCsvData |
Group-Object -Property Type |
ForEach-Object {
[PSCustomObject]@{
Type = $_.Name
Amt = '${0:n2}' -f ($_.Group | Measure-Object -Property Amount -Sum).Sum
}
}
The output looks like this:
Type Amt
---- ---
level 1 $3,00
level 2 $3,00
level 3 $9,00
Otherwise you may remove the dollar signs before you try to summarize the numbers.
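A minimal sketch of that pre-cleaning variant, assuming the same Type/Amount headers and the Import-Excel import from the question:
$clean = Import-Excel -Path $fileName | ForEach-Object {
    # strip the currency symbol so Measure-Object sums numbers, not strings
    $_.Amount = [decimal]($_.Amount -replace '[^0-9.]')
    $_
}
$clean | Group-Object Type | ForEach-Object {
    [pscustomobject]@{
        Type = $_.Name
        Amt  = '{0:C2}' -f ($_.Group | Measure-Object Amount -Sum).Sum
    }
}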

Powershell Help: How can I remove duplicates (using multiple columns simultaneously, not sequentially)?

I have tried several different variations based on some other stack overflow articles, but I will share a sample of what I have and a sample output and then some cobbled-together code hoping for some direction from the community:
C:\Scripts\contacts.csv:
id,first_name,last_name,email
1,john,smith,jsmith@notreal.com
1,jane,smith,jsmith@notreal.com
2,jane,smith,jsmith@notreal.com
2,john,smith,jsmith@notreal.com
3,sam,jones,sjones@notreal.com
3,sandy,jones,sandy@notreal.com
I need to turn this into a file where column "email" is unique per column "id". In other words, there can be duplicate addresses, but only if they belong to a different id.
desired output C:\Scripts\contacts-trimmed.csv:
id,first_name,last_name,email
1,john,smith,jsmith@notreal.com
2,john,smith,jsmith@notreal.com
3,sam,jones,sjones@notreal.com
3,sandy,jones,sandy@notreal.com
I have tried this with a few different variations:
Import-Csv C:\Scripts\contacts.csv | sort first_name | Sort-Object -Property id,email -Unique | Export-Csv C:\Scripts\contacts-trim.csv -NoTypeInformation
Any help or direction would be most appreciated.
You'll want to use the Group-Object cmdlet, to, well, group together records with similar values:
$records = @'
id,first_name,last_name,email
1,john,smith,jsmith@notreal.com
1,jane,smith,jsmith@notreal.com
2,jane,smith,jsmith@notreal.com
2,john,smith,jsmith@notreal.com
3,sam,jones,sjones@notreal.com
3,sandy,jones,sandy@notreal.com
'@ |ConvertFrom-Csv
# group records based on id and email column
$records |Group-Object id,email |ForEach-Object {
# grab only the first record from each group
$_.Group |Select-Object -First 1
} |Export-Csv .\no_duplicates.csv -NoTypeInformation
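Applied to the file paths from the question, the same idea reads and writes the CSV directly:
Import-Csv C:\Scripts\contacts.csv |
    Group-Object id, email |
    ForEach-Object { $_.Group | Select-Object -First 1 } |
    Export-Csv C:\Scripts\contacts-trimmed.csv -NoTypeInformation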

Sum various columns to get subtotal depending on a criteria from a row using Powershell

I have a CSV file that contains the following data:
Pages,Pages BN,Pages Color,Customer
145,117,28,Report_Alexis
46,31,15,Report_Alexis
75,27,48,Report_Alexis
145,117,28,Report_Jack
46,31,15,Report_Jack
75,27,48,Report_Jack
145,117,28,Report_Amy
46,31,15,Report_Amy
75,27,48,Report_Amy
So what I need to do is sum each column based on the report name and then export to another CSV file like this:
Pages,Pages BN,Pages Color,Customer
266,175,91,Report_Alexis
266,175,91,Report_Jack
266,175,91,Report_Amy
How can I do this?
I tried with this:
$coutnpages = Import-Csv "C:\temp\testcount\final file2.csv" |where {$_.Filename -eq 'Report_Jack'} | Measure-Object -Property Pages -Sum
then
$Countpages.Sum | Set-Content -Path "C:\temp\testcount\final file3.csv"
But this is just one value for one report, and I don't know how to continue from there.
Can you please help me?
Working code
$IdentityColumns = @('Customer')
$ColumnsToSum = @('Pages', 'Pages BN', 'Pages Color')
$CSVFileInput = 'S:\SCRIPTS\1.csv'
Import-Csv -Path $CSVFileInput |
Group-Object -Property $IdentityColumns |
ForEach-Object {
$resultHT = @{ Customer = $_.Name } # This is the result HashTable (key-value collection). We add the sums to it next.
@($_.Group | Measure-Object -Property $ColumnsToSum -Sum) | # Calculate the sums for all $ColumnsToSum in one pass
ForEach-Object { $resultHT[$_.Property] = $_.Sum } # For each calculated property, set the matching key in the result HashTable
return [PSCustomObject]$resultHT # Convert the HashTable to a PSCustomObject. This is better for output.
} | # End of ForEach-Object over the groups
Select-Object @($ColumnsToSum + $IdentityColumns) | # This sets the order of columns. It may be important.
Out-GridView # Or replace with Export-Csv
#Export-Csv ...
Explanation:
Use Group-Object to build a collection of groups. Each group has 4 properties:
Name - Name of the group, equal to the stringified values of the property(-ies) you're grouping by
Values - Collection of the values of the properties you're grouping by (not stringified)
Count - Count of elements grouped into this group
Group - The elements grouped into this group
For grouping by a single string property (which is fine in this case), you can simply use the group's Name; otherwise, always use Values.
So after Group-Object, you iterate not over the collection of CSV rows, but over a collection of collections of rows grouped by some condition.
Measure-Object can process more than one property in a single pass (without mixing values from different properties), and we make active use of this. The result is an array of objects, each with a Property attribute matching one of the properties passed to Measure-Object and the corresponding value (Sum in our case). We move those Property=Sum pairs into the result hashtable.
[PSCustomObject] converts the hashtable to an object. Objects are better for output.
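For illustration, here is what the multi-property Measure-Object call returns, using two of the rows from the question; each result object carries the Property name it belongs to:
$rows = @(
    [pscustomobject]@{ Pages = 145; 'Pages BN' = 117; 'Pages Color' = 28 }
    [pscustomobject]@{ Pages = 46;  'Pages BN' = 31;  'Pages Color' = 15 }
)
$rows | Measure-Object -Property Pages, 'Pages BN', 'Pages Color' -Sum |
    ForEach-Object { '{0} = {1}' -f $_.Property, $_.Sum }
# Pages = 191
# Pages BN = 148
# Pages Color = 43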

Count unique numbers in CSV (PowerShell or Notepad++)

How to find the count of unique numbers in a CSV file? When I use the following command in PowerShell ISE
1,2,3,4,2 | Sort-Object | Get-Unique
I can get the unique numbers but I'm not able to get this to work with CSV files. If for example I use
$A = Import-Csv C:\test.csv | Sort-Object | Get-Unique
$A.Count
it returns 0. I would like to count unique numbers for all the files in a given folder.
My data looks similar to this:
Col1,Col2,Col3,Col4
5,,7,4
0,,9,
3,,5,4
And the result should be 6 unique values (preferably written inside the same CSV file).
Or would it be easier to do it with Notepad++? So far I have found examples only on how to count the unique rows.
You can try the following (PSv3+):
PS> (Import-CSV C:\test.csv |
ForEach-Object { $_.psobject.properties.value -ne '' } |
Sort-Object -Unique).Count
6
The key is to extract all property (column) values from each input object (CSV row), which is what $_.psobject.properties.value does;
-ne '' filters out empty values.
Note that, given that Sort-Object has a -Unique switch, you don't need Get-Unique (you would only need Get-Unique if your input is already sorted).
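To illustrate the difference with a quick example:
3, 1, 2, 1 | Sort-Object -Unique        # 1 2 3 - sorts and removes duplicates in one step
3, 1, 2, 1 | Sort-Object | Get-Unique   # 1 2 3 - same result, but needs the extra cmdlet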
That said, if your CSV file is structured as simply as yours, you can speed up processing by reading it as a text file (PSv2+):
PS> (Get-Content C:\test.csv | Select-Object -Skip 1 |
ForEach-Object { $_ -split ',' -ne '' } |
Sort-Object -Unique).Count
6
Get-Content reads the CSV file as an array of lines (strings).
Select-Object -Skip 1 skips the header line.
$_ -split ',' -ne '' splits each line into values by commas and weeds out empty values.
As for what you tried:
Import-CSV C:\test.csv | Sort-Object | Get-Unique:
Fundamentally, Sort-Object emits the input objects as a whole (just in sorted order); it doesn't extract property values, yet that is what you need.
Because no -Property argument is passed to Sort-Object to base the sorting on, it compares the custom objects that Import-Csv emits as a whole, by their .ToString() values, which happen to be empty[1], so they all compare the same and in effect no sorting happens.
Similarly, Get-Unique also determines uniqueness by .ToString() here, so that, again, all objects are considered the same and only the very first one is output.
[1] This may be surprising, given that using a custom object in an expandable string does yield a value: compare $obj = [pscustomobject] @{ foo = 'bar' }; $obj.ToString(); '---'; "$obj". This inconsistency is discussed in this GitHub issue.
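The question also asks about counting unique values across all the files in a folder; here is a sketch along the lines of the first snippet above (the folder path is just an example):
Get-ChildItem C:\test\*.csv | ForEach-Object {
    $uniqueCount = (Import-Csv $_.FullName |
        ForEach-Object { $_.psobject.properties.value -ne '' } |
        Sort-Object -Unique).Count
    [pscustomobject]@{ File = $_.Name; UniqueValues = $uniqueCount }
}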

Powershell counting same values from csv

Using PowerShell, I can import the CSV file and count how many objects are equal to "a". For example,
@(Import-Csv location | Where-Object{$_.id -eq "a"}).Count
Is there a way to go through every column and row looking for the same string "a" and adding to the count? Or do I have to run the same command over and over for every column, just with a different keyword?
So I made a dummy file that contains 5 columns of people's names. To show you how the process works, I will show how often the text "Ann" appears in any field.
$file = "C:\temp\MOCK_DATA (3).csv"
gc $file | %{$_ -split ","} | Group-Object | Where-Object{$_.Name -like "Ann*"}
Don't focus on the code so much as on the output below.
Count Name Group
----- ---- -----
5 Ann {Ann, Ann, Ann, Ann...}
9 Anne {Anne, Anne, Anne, Anne...}
12 Annie {Annie, Annie, Annie, Annie...}
19 Anna {Anna, Anna, Anna, Anna...}
"Ann" appears 5 times on its own. However, it is part of other names as well. Let's use a simple regex to find all the values that are exactly "Ann".
(select-string -Path 'C:\temp\MOCK_DATA (3).csv' -Pattern "\bAnn\b" -AllMatches | Select-Object -ExpandProperty Matches).Count
That will return 5, since \b is a word boundary. In essence, it only looks at what is between commas or at the beginning or end of each line. This omits results like "Anna" and "Annie" that you might have. Select-Object -ExpandProperty Matches is important to have if there is more than one match on a single line.
Small Caveat
It should not matter, but in keeping the code simple it is possible that your header could match the value you are looking for. That is not likely, which is why I don't account for it. If it is a possibility, then we could use Get-Content with a Select-Object -Skip 1 instead, as sketched below.
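A sketch of that header-skipping variant, reusing the file path from above and counting exact matches of "Ann":
(Get-Content 'C:\temp\MOCK_DATA (3).csv' |
    Select-Object -Skip 1 |
    ForEach-Object { $_ -split ',' } |
    Where-Object { $_ -eq 'Ann' }).Count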
Try cycling through properties like this:
(Import-Csv location | %{$record = $_; $record | Get-Member -MemberType Properties |
?{$record.$($_.Name) -eq 'a';}}).Count