How to merge 2 x CSVs with the same column but overwrite not append? - powershell

I've got this one that has been baffling me all day, and I can't seem to find any search results that match exactly what I am trying to do.
I have 2 CSV files, both of which have the same columns and headers. They look like this (shortened for the purpose of this post):
"plate","labid","well"
"1013740016604537004556","none46","F006"
"1013740016604537004556","none47","G006"
"1013740016604537004556","none48","H006"
"1013740016604537004556","3835265","A007"
"1013740016604537004556","3835269","B007"
"1013740016604537004556","3835271","C007"
Each of the 2 CSVs only has some actual Lab IDs, and the 'nonexx' entries are just fillers for the importing software. There is no duplication, i.e. each 'well' is only referenced once across the 2 files.
What I need to do is merge the 2 CSVs, for example the second CSV might have a Lab ID for well H006 but the first will not. I need the lab ID from the second CSV imported into the first, overwriting the 'nonexx' currently in that column.
Here is my current code:
$CSVB = Import-CSV "$RootDir\SymphonyOutputPending\$plateID`A_Header.csv"
Import-CSV "$RootDir\SymphonyOutputPending\$plateID`_Header.csv" | ForEach-Object {
    $CSVData = [PSCustomObject]@{
        labid = $_.labid
        well  = $_.well
    }
    If ($CSVB.well -match $CSVData.wellID) {
        write-host "I MATCH"
        ($CSVB | Where-Object {$_.well -eq $CSVData.well}).labid = $CSVData.labid
    }
    $CSVB | Export-CSV "$RootDir\SymphonyOutputPending\$plateID`_final.csv" -NoTypeInformation
}
The code runs but doesn't 'merge' the data; the final CSV output is just a replication of the first input file. I am definitely getting a match, as the string "I MATCH" appears several times when debugging, as expected.

Based on the responses in the comments of your question, I believe this is what you are looking for. This assumes that both CSVs contain the exact same data, with labid being the only difference.
There is no need to modify csv2 if we are just grabbing the labid to overwrite the row in csv1.
$csv1 = Import-Csv C:\temp\LabCSV1.csv
$csv2 = Import-Csv C:\temp\LabCSV2.csv

# Loop through csv1 rows
Foreach($line in $csv1) {
    # If labid contains "none"
    If($line.labid -like "none*") {
        # Set the row's labid to the labid from the csv2 row that matches plate/well
        # May be able to remove the plate section if well is a unique value
        $line.labid = ($csv2 | Where {$_.well -eq $line.well -and $_.plate -eq $line.plate}).labid
    }
}

# Export to a new CSV - not overwriting the original - to confirm results
$csv1 | Export-Csv C:\Temp\LabCSV1Adjusted.csv -NoTypeInformation
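Once the adjusted output has been verified, the same data could be written back over the original file (path taken from the question; overwriting the source file is an assumption about the desired workflow):

# Overwrite the original once LabCSV1Adjusted.csv has been checked
$csv1 | Export-Csv C:\temp\LabCSV1.csv -NoTypeInformation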

Since you need to do a bi-directional comparison of the 2 Csvs, you could combine both into a single array and then group the objects by their well property using Group-Object. Then, for each group whose Count is 2, keep only the object whose labid does not start with none; otherwise return the object as-is.
Using the following Csvs for demonstration purposes:
Csv1
"plate","labid","well"
"1013740016604537004556","none46","F006"
"1013740016604537004556","none47","G006"
"1013740016604537004556","3835265","A007"
"newrowuniquecsv1","none123","X001"
Csv2
"plate","labid","well"
"1013740016604537004556","none48","A007"
"1013740016604537004556","3835269","F006"
"1013740016604537004556","3835271","G006"
"newrowuniquecsv2","none123","X002"
Code
Note that this code assumes there will be a maximum of 2 objects with the same well property and, if there are 2 objects with the same well, one of them must have a value not starting with none.
$mergedCsv = @(
    Import-Csv pathtocsv1.csv
    Import-Csv pathtocsv2.csv
)

$mergedCsv | Group-Object well | ForEach-Object {
    if($_.Count -eq 2) {
        return $_.Group.Where{ -not $_.labid.StartsWith('none') }
    }
    $_.Group
} | Export-Csv pathtomerged.csv -NoTypeInformation
Output
plate                  labid   well
-----                  -----   ----
1013740016604537004556 3835265 A007
1013740016604537004556 3835269 F006
1013740016604537004556 3835271 G006
newrowuniquecsv1       none123 X001
newrowuniquecsv2       none123 X002
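Note that if both rows for a given well had a labid starting with none, the filter above would return nothing for that group and the well would be dropped from the output. A hedged variant that keeps the first row as a fallback in that case might look like this:

$mergedCsv | Group-Object well | ForEach-Object {
    if($_.Count -eq 2) {
        # Prefer the row with a real labid; fall back to the first row if both are fillers
        $real = $_.Group.Where{ -not $_.labid.StartsWith('none') }
        if($real) { return $real }
        return $_.Group[0]
    }
    $_.Group
} | Export-Csv pathtomerged.csv -NoTypeInformation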

If the lists are large, performance might be an issue, as Where-Object (or any other where method) and Group-Object do not perform well in nested loops.
By indexing the second csv file (i.e. creating a hashtable), you get much quicker access to the required objects. Indexing on two (or more) items (plate and well) is raised in the question "Does there exist a designated (sub)index delimiter?" and resolved by @mklement0 and zett42 with a nice CaseInsensitiveArrayEqualityComparer class.
To apply this class to Drew's helpful answer:
$csv1 = Import-Csv C:\temp\LabCSV1.csv
$csv2 = Import-Csv C:\temp\LabCSV2.csv

$dict = [hashtable]::new([CaseInsensitiveArrayEqualityComparer]::new())
$csv2.ForEach{ $dict.($_.plate, $_.well) = $_ }

Foreach($line in $csv1) {
    If($line.labid -like "none*") {
        $line.labid = $dict.($line.plate, $line.well).labid
    }
}
$csv1 | Export-Csv C:\Temp\LabCSV1Adjusted.csv -NoTypeInformation
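If pulling in the CaseInsensitiveArrayEqualityComparer class is not convenient, a similar speed-up can be had with a plain hashtable keyed on a composite string (a sketch; the pipe delimiter assumes neither plate nor well can contain that character):

$csv1 = Import-Csv C:\temp\LabCSV1.csv
$csv2 = Import-Csv C:\temp\LabCSV2.csv

# Index csv2 on a "plate|well" string key for constant-time lookups
$dict = @{}
foreach ($row in $csv2) { $dict["$($row.plate)|$($row.well)"] = $row }

foreach ($line in $csv1) {
    if ($line.labid -like "none*") {
        $match = $dict["$($line.plate)|$($line.well)"]
        if ($match) { $line.labid = $match.labid }
    }
}
$csv1 | Export-Csv C:\Temp\LabCSV1Adjusted.csv -NoTypeInformation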

Related

Using Powershell, how can I export and delete csv rows, where a particular value is *not found* in a *different* csv?

I have two files. One is called allper.csv
institutiongroup,studentid,iscomplete
institutionId=22343,123,FALSE
institutionId=22343,456,FALSE
institutionId=22343,789,FALSE
The other one is called actswithpersons.csv
abc,123;456
def,456
ghi,123
jkl,123;456
Note: actswithpersons.csv does not have headers - they are going to be added later via an Excel Power Query, so I don't want them in there now. The actswithpersons csv columns are delimited with commas - there are only two columns, and the second one contains multiple personids - again, Excel will deal with this later.
I want to remove all rows from allper.csv where the personid doesn't appear in actswithpersons.csv, and export them to another csv. So in the desired outcome, allper.csv would look like this
institutiongroup,studentid,iscomplete
institutionId=22343,123,FALSE
institutionId=22343,456,FALSE
and the export.csv would look like this
institutiongroup,studentid,iscomplete
institutionId=22343,789,FALSE
I've got as far as the code below, which will print to the shell whether the personid is found in the actswithpersons.csv file.
$donestuff = (Get-Content .\ActsWithpersons.csv | ConvertFrom-Csv)
$ids = (Import-Csv .\allper.csv)
foreach($id in $ids.personid) {
    echo $id
    if($donestuff -like "*$id*")
    {
        echo 'Contains String'
    }
    else
    {
        echo 'Does not contain String'
    }
}
However, I'm not sure how to go the last step, and export & remove the unwanted rows from allper.csv
I've tried (among many things)
$donestuff = (Get-Content .\ActsWithpersons.csv | ConvertFrom-Csv);
Import-Csv .\allper.csv |
Where-Object {$donestuff -notlike $_.personid} |
Export-Csv -Path export.csv -NoTypeInformation
This took a really long time and left me with an empty csv. So, if you can give any guidance, please help.
Since your actswithpersons.csv doesn't have headers, in order to import it as csv you can specify the -Header parameter on either Import-Csv or ConvertFrom-Csv, with the former cmdlet being the better solution.
With that said, you can use any header names for those 2 columns, then filter by the given column name (ID in this case) after your import of allper.csv using Where-Object:
$awp = (Import-Csv -Path '.\actswithpersons.csv' -Header 'blah','ID').ID.Split(';')
Import-Csv -Path '.\allper.csv' | Where-Object -Property 'Studentid' -notin $awp
This should give you:
institutiongroup    studentid iscomplete
----------------    --------- ----------
institutionId=22343 789       FALSE
If you're looking to do it with Get-Content, you can split on the delimiters , and ;. This gives you a single flat array of values, which you can then compare against in its entirety ($awp) using the same filter as above, producing the same results:
$awp = (Get-Content -Path '.\actswithpersons.csv') -split ",|;"
Import-Csv -Path '.\allper.csv' | Where-Object -Property 'Studentid' -notin $awp
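To produce both of the files the question describes (export.csv holding the removed rows and allper.csv keeping only the referenced ones), the import can simply be split into two filtered sets (file names taken from the question; writing back over allper.csv is an assumption):

$awp = (Import-Csv -Path '.\actswithpersons.csv' -Header 'blah','ID').ID.Split(';')
$allper = Import-Csv -Path '.\allper.csv'

# Rows whose studentid is not referenced anywhere in actswithpersons.csv
$removed = $allper | Where-Object -Property 'Studentid' -NotIn $awp
# Rows that are referenced and should remain in allper.csv
$kept = $allper | Where-Object -Property 'Studentid' -In $awp

$removed | Export-Csv -Path '.\export.csv' -NoTypeInformation
$kept | Export-Csv -Path '.\allper.csv' -NoTypeInformation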

Export results of (2) cmdlets to separate columns in the same CSV

I'm new to PS, so your patience is appreciated.
I'm trying to grab data from (2) separate CSV files and then dump them into a new CSV with (2) columns. Doing this for (1) is easy, but I don't know how to do it for more.
This works perfectly:
Import-CSV C:\File1.csv | Select "Employee" | Export-CSV -Path D:\Result.csv -NoTypeInformation
If I add another Import-CSV, then it simply overwrites the existing data:
Import-CSV C:\File2.csv | Select "Department" | Export-CSV -Path D:\Result.csv -NoTypeInformation
How can I get columns A and B populated with the info result from these two commands? Thanks for your help.
I would have chosen this option:
$1 = Import-Csv -Path "C:\Users\user\Desktop\1.csv" | Select "Employee"
$2 = Import-Csv -Path "C:\Users\user\Desktop\2.csv" | Select "Department"

$merged = @()
for ($i = 0; $i -lt $1.Count; $i++) {
    $merged += [pscustomobject]@{
        Employees  = $1[$i].Employee
        Department = $2[$i].Department
    }
}
$merged | Export-Csv -Path "C:\Users\user\Desktop\3.csv" -NoTypeInformation -Force
I'll explain how I would do this, but I do it this way because I'm more comfortable working with objects than with hashtables. Someone else may offer an answer using hashtables, which would probably work better.
First, I would define an array to hold your data, which can later be exported to CSV:
$report = @()
Then, I would import your CSV to an object that can be iterated through:
$firstSet = Import-CSV .\File1.csv
Then I would iterate through this, importing each row into an object that has the two properties I want. In your case these are Employee and Department (potentially more which you can add easily).
foreach($row in $firstSet)
{
    $employeeName = $row.Employee
    $employee = [PSCustomObject]@{
        Employee   = $employeeName
        Department = ""
    }
    $report += $employee
}
And, as you can see in the example above, add this object to your report.
Then, import the second CSV file into a second object to iterate through (for good form I would actually do this at the beginning of the script, when you import your first one):
$secondSet = Import-CSV .\File2.csv
Now here is where it gets interesting. Based on just the information you have provided, I am assuming that all employees in the one file are in the same order as the departments in the other files. So for example, if I work for the "Cake Tasting Department", and my name is on row 12 of File 1, row 12 of File 2 says "Cake Tasting Department".
In this case it's fairly easy. You would just roll through both lists and update the report:
$i = 0
foreach($row in $secondSet)
{
    $dept = $row.Department
    $report[$i].Department = $dept
    $i++
}
After this, your $report object will contain all of your employees in one column and their departments in another. Then you can export it to CSV:
$report | Export-CSV .\Result.csv -NoTypeInformation
This works if, as I said, your data aligns across both files. If not, then you need to get a little fancier:
foreach($row in $secondSet)
{
    $emp = $row.Employee
    $dept = $row.Department
    $report | Where {$_.Employee -eq $emp} | ForEach {$_.Department = $dept}
}
Technically you could just do it this way anyway, but it depends on a lot of things: first, whether you have data to match on in that column across both files (which in my example you don't, otherwise you wouldn't need to do this in the first place, but you could match on other fields you may have, like EmployeeID or DoB); second, the uniqueness of individual records (e.g., if you have multiple matching records in your first file, you will have a problem; you would expect duplicates in the second, as there is more than one person in each department).
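As a concrete illustration of that last point, if both files did share a key such as EmployeeID (a hypothetical column, not present in the question's files), a hashtable lookup keyed on it would make the match both reliable and fast:

$firstSet = Import-CSV .\File1.csv    # assumed to contain EmployeeID and Employee
$secondSet = Import-CSV .\File2.csv   # assumed to contain EmployeeID and Department

# Index the departments by EmployeeID for quick lookups
$deptById = @{}
foreach($row in $secondSet) { $deptById[$row.EmployeeID] = $row.Department }

$report = foreach($row in $firstSet) {
    [PSCustomObject]@{
        Employee   = $row.Employee
        Department = $deptById[$row.EmployeeID]
    }
}
$report | Export-CSV .\Result.csv -NoTypeInformation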
Anyway, I hope this helps. As I said there is probably a 'better' way to do this, but this is how I would do it.

Count unique numbers in CSV (PowerShell or Notepad++)

How to find the count of unique numbers in a CSV file? When I use the following command in PowerShell ISE
1,2,3,4,2 | Sort-Object | Get-Unique
I can get the unique numbers but I'm not able to get this to work with CSV files. If for example I use
$A = Import-Csv C:\test.csv | Sort-Object | Get-Unique
$A.Count
it returns 0. I would like to count unique numbers for all the files in a given folder.
My data looks similar to this:
Col1,Col2,Col3,Col4
5,,7,4
0,,9,
3,,5,4
And the result should be 6 unique values (preferably written inside the same CSV file).
Or would it be easier to do it with Notepad++? So far I have found examples only on how to count the unique rows.
You can try the following (PSv3+):
PS> (Import-CSV C:\test.csv |
ForEach-Object { $_.psobject.properties.value -ne '' } |
Sort-Object -Unique).Count
6
The key is to extract all property (column) values from each input object (CSV row), which is what $_.psobject.properties.value does;
-ne '' filters out empty values.
Note that, given that Sort-Object has a -Unique switch, you don't need Get-Unique (you need Get-Unique only if your input already is sorted).
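A quick illustration of the difference (arbitrary values): Get-Unique only removes adjacent duplicates, so unsorted input slips through, while Sort-Object -Unique handles everything in one step.

PS> 3,1,3,2 | Get-Unique                # 3, 1, 3, 2 (no adjacent duplicates to remove)
PS> 3,1,3,2 | Sort-Object -Unique       # 1, 2, 3
PS> 3,1,3,2 | Sort-Object | Get-Unique  # 1, 2, 3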
That said, if your CSV file is structured as simply as yours, you can speed up processing by reading it as a text file (PSv2+):
PS> (Get-Content C:\test.csv | Select-Object -Skip 1 |
ForEach-Object { $_ -split ',' -ne '' } |
Sort-Object -Unique).Count
6
Get-Content reads the CSV file as an array of lines (strings).
Select-Object -Skip 1 skips the header line.
$_ -split ',' -ne '' splits each line into values by commas and weeds out empty values.
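If you also need this count for every CSV file in a folder, as the question mentions, the same snippet can be wrapped in a loop over Get-ChildItem (a sketch; the folder path and the shape of the per-file output are assumptions):

Get-ChildItem C:\csvfolder -Filter *.csv | ForEach-Object {
    $file = $_
    $count = (Import-Csv $file.FullName |
        ForEach-Object { $_.psobject.properties.value -ne '' } |
        Sort-Object -Unique).Count
    # One result object per file: the file name plus its unique-value count
    [pscustomobject]@{ File = $file.Name; UniqueValues = $count }
}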
As for what you tried:
Import-CSV C:\test.csv | Sort-Object | Get-Unique:
Fundamentally, Sort-Object emits the input objects as a whole (just in sorted order), it doesn't extract property values, yet that is what you need.
Because no -Property argument is passed to Sort-Object to base the sorting on, it compares the custom objects that Import-Csv emits as a whole, by their .ToString() values, which happen to be empty [1], so they all compare the same, and in effect no sorting happens.
Similarly, Get-Unique also determines uniqueness by .ToString() here, so that, again, all objects are considered the same and only the very first one is output.
[1] This may be surprising, given that using a custom object in an expandable string does yield a value: compare $obj = [pscustomobject] @{ foo ='bar' }; $obj.ToString(); '---'; "$obj". This inconsistency is discussed in this GitHub issue.

How can I concatenate csv colums and rename their header with Powershell?

I am attempting to merge two csv files together and select only two of their columns for use in a new csv. I don't understand why I cannot use the code I have already:
$Temp1 = (Import-csv "C:\path\APPcsv.csv" -header "APP") |
    select-object APP
$Temp2 = (Import-csv "C:\path\ALLdb42APPs.csv" -header "NA1", "NA2", "Applications", "NA3", "Project") |
    select-object Project
$CSV = @($temp1, $temp2) |
    export-csv -path "C:\path\Why isn't this working.csv" -noTypeInformation
Here is an example line from each CSV:
CSV1 (ALLdb42APPs.csv)
"Current Application","Calculation","AdobeReaderDC-18.011.20036 V1 - Add Instalation Status: SUCCESSFUL","2018-05-16 08:54:17","DK ATM error main"
CSV2 (APPcsv.csv)
"DameWareService-10.0.0.0-x64 V2 - Add"
So your issue is because @($temp1,$temp2) doesn't combine the first element of $temp1 with the first element of $temp2, but instead makes a new collection which is all of $temp1's objects followed by all of $temp2's.
Since $temp1 is objects with an APP property and $temp2 is objects with a Project, combining these into a collection doesn't make sense to export to a csv.
If $temp1 is a bag of apples and $temp2 is a bag of oranges, @($temp1,$temp2) isn't holding the bags together, it's dumping both into one bag on top of each other.
One option is to join the two objects into one. Warren Frame has a well-respected Join-Object module that could be used, as James C pointed out, but your two csvs would need to share a column.
The other alternative is to use a for loop, then in each iteration take the value from each collection and create a new object with both values.
$Temp1 = (Import-csv "C:\path\APPcsv.csv" -header "APP") |
    Select-Object -ExpandProperty APP
$Temp2 = (Import-csv "C:\path\ALLdb42APPs.csv" -header "NA1", "NA2", "Applications", "NA3", "Project") |
    Select-Object -ExpandProperty Project

$LargestCount = [math]::Max($temp1.count, $temp2.count)
$CombinedArray = For ($i = 0; $i -lt $LargestCount; $i++) {
    [pscustomobject]@{
        APP     = $temp1[$i]
        Project = $temp2[$i]
    }
}
$CombinedArray |
    Export-Csv -Path "C:\path\Example.csv" -NoTypeInformation
Note: requires PowerShell 3+ for the pscustomobject way of creating objects.

How to read a CSV file but exclude certain columns containing blanks using Get-Content

I want to read a CSV file and exclude rows where dynamically selected columns contain blanks but not all rows of those dynamically selected columns contain blanks.
Trying to use the where clause in the statement below (but not working):
Get-Content $Source -ReadCount 1000 |
    Where {
        ForEach($NotEqualBlankCol in $BlankColumns)
        {
            $NotEqualBlankCol -ne $null -and $NotEqualBlankCol -ne ''
        }
    } |
    ConvertFrom-Csv |
    Sort-Object -Property $SortByColNames.Replace('"', '') -Unique |
    .
    .
    .
    | Out-File $Destination
$BlankColumns is my dynamic string array containing the column names of the CSV that may be blank; I would like to loop through it. It can be 1 column or more. When there is more than one, all of the selected columns need to be blank for a row to be excluded from the final CSV file output.
How do I do it using Get-Content? Any help would be appreciated.
Using Get-Content
Ok. So what this will do is read in the contents of a file X lines at a time. It will parse each line into its individual columns. Then it will check the specified columns for blanks. If any of the flagged columns contains a blank, the row will be filtered out. Consider the test data I used for this:
id,first_name,last_name,email,gender,ip_address
1,Christina,Tucker,ctucker0#bbc.co.uk,Female,91.33.192.187
2,Jacqueline,Torres,jtorres1#shop-pro.jp,Female,205.70.183.107
3,Kathy,Perez,kperez2#hugedomains.com,Female,35.175.154.127
4,"",Holmes,eholmes3#canalblog.com,,
5,Ernest,Walker,ewalker4#marketwatch.com,Male,140.110.129.21
6,,Garza,cgarza5#jugem.jp,,
7,,Cunningham,jcunningham6#ox.ac.uk,Female,
8,,Clark,lclark7#posterous.com,,
9,,Ortiz,lortiz8#shareasale.com,,
Notice that the first_name and gender are blank for some of these folks. Ids 1, 2, 3 and 5 have complete data. The rest should be filtered.
$BlankColumns = "first_name","gender"
$headers = (Get-Content $path -TotalCount 1).Split(",")
$potentialBlankHeaderIndecies = 0..($headers.Count - 1) | Where-Object{$BlankColumns -contains $headers[$_]}
$potentialBlankHeaderIndecies
Get-Content $path -ReadCount 3 | Foreach-Object{
# Check to see if any of the indexes from a split are empty
$_ | Where-Object{
[bool[]](($_.Split(","))[$potentialBlankHeaderIndecies] | ForEach-Object{
![string]::IsNullOrEmpty($_.Trim('"'))
}) -notcontains $false
}
}
The output of this code is the file content, as strings, with the unwanted entries removed. You can pipe this into a variable, a file, or whatever you need.
To go into a little more detail: we take the header names we want to check and read in the first line of the csv file, which should contain the column names. Using that, we determine the column indexes that we want to scrutinize. Then we read in the whole file and parse it line by line. For each line we split on the comma and check the elements matching the identified headers, testing each of those elements for blank or null. We trim quotes in case the value is a quoted empty string "", which I will assume you would count as blank. For each of those elements we evaluate as a Boolean whether or not it is empty. If at least one is empty, the line fails the Where-Object clause and gets omitted.
Using Import-CSV
$BlankColumns = "first_name","gender"
Import-CSV $path | Where-Object{
$line = $_
($BlankColumns | ForEach-Object{
![string]::IsNullOrEmpty(($line.$_.Trim('"')))
}) -notcontains $false
}
This is a very similar approach, just with a lot less overhead since we are dealing with objects now instead of strings.
Now you could use Export-Csv or ConvertTo-Csv, depending on your needs in the rest of the project.
Changing the filter criteria.
Both examples above filter out rows where any of the selected columns contains a blank. If you want to omit a row only where all of them are blank, change the line }) -notcontains $false to }) -contains $true.
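For example, the Import-CSV version with that single change (keeping a row as long as at least one of the flagged columns has a value) would look like this:

$BlankColumns = "first_name","gender"
Import-CSV $path | Where-Object{
    $line = $_
    # Keep the row if at least one flagged column is non-blank
    ($BlankColumns | ForEach-Object{
        ![string]::IsNullOrEmpty(($line.$_.Trim('"')))
    }) -contains $true
}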