val map1 = Map(1 -> 9 , 2 -> 20)
val map2 = Map(1 -> 100, 3 -> 300)
I want to merge them, and sum the values of same keys. So the result will be:
Map(2->20, 1->109, 3->300)
Now I have 2 solutions:
val list = map1.toList ++ map2.toList
val merged = list.groupBy(_._1).map { case (k, v) => k -> v.map(_._2).sum }
and
val merged = (map1 /: map2) { case (map, (k,v)) =>
map + ( k -> (v + map.getOrElse(k, 0)) )
}
But I want to know if there are any better solutions.
The shortest answer I know of that uses only the standard library is
map1 ++ map2.map{ case (k,v) => k -> (v + map1.getOrElse(k,0)) }
Scalaz has the concept of a Semigroup which captures what you want to do here, and leads to arguably the shortest/cleanest solution:
scala> import scalaz._
import scalaz._
scala> import Scalaz._
import Scalaz._
scala> val map1 = Map(1 -> 9 , 2 -> 20)
map1: scala.collection.immutable.Map[Int,Int] = Map(1 -> 9, 2 -> 20)
scala> val map2 = Map(1 -> 100, 3 -> 300)
map2: scala.collection.immutable.Map[Int,Int] = Map(1 -> 100, 3 -> 300)
scala> map1 |+| map2
res2: scala.collection.immutable.Map[Int,Int] = Map(1 -> 109, 3 -> 300, 2 -> 20)
Specifically, the binary operator for Map[K, V] combines the keys of the maps, folding V's semigroup operator over any duplicate values. The standard semigroup for Int uses the addition operator, so you get the sum of values for each duplicate key.
Edit: A little more detail, as per user482745's request.
Mathematically a semigroup is just a set of values, together with an operator that takes two values from that set, and produces another value from that set. So integers under addition are a semigroup, for example - the + operator combines two ints to make another int.
You can also define a semigroup over the set of "all maps with a given key type and value type", so long as you can come up with some operation that combines two maps to produce a new one which is somehow the combination of the two inputs.
If there are no keys that appear in both maps, this is trivial. If the same key exists in both maps, then we need to combine the two values that the key maps to. Hmm, haven't we just described an operator which combines two entities of the same type? This is why in Scalaz a semigroup for Map[K, V] exists if and only if a Semigroup for V exists - V's semigroup is used to combine the values from two maps which are assigned to the same key.
So because Int is the value type here, the "collision" on the 1 key is resolved by integer addition of the two mapped values (as that's what Int's semigroup operator does), hence 100 + 9. If the values had been Strings, a collision would have resulted in string concatenation of the two mapped values (again, because that's what the semigroup operator for String does).
(And interestingly, because string concatenation is not commutative - that is, "a" + "b" != "b" + "a" - the resulting semigroup operation isn't either. So map1 |+| map2 is different from map2 |+| map1 in the String case, but not in the Int case.)
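For example (a quick sketch of my own, assuming the same Scalaz imports), with String values the collision on key 1 concatenates rather than adds:
val s1 = Map(1 -> "a", 2 -> "b")
val s2 = Map(1 -> "c", 3 -> "d")
s1 |+| s2 // value at key 1 is "ac"
s2 |+| s1 // value at key 1 is "ca" - the order of the operands matters for Strings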
Quick solution:
(map1.keySet ++ map2.keySet).map { i => (i, map1.getOrElse(i, 0) + map2.getOrElse(i, 0)) }.toMap
Well, the Scala standard library (at least in 2.10) now has what you wanted - a merged method. BUT it's available only on HashMap, not on Map, which is somewhat confusing. The signature is also cumbersome - it's hard to imagine why you'd need the key twice, or when you'd need to produce a pair with a different key. Nevertheless, it works and is much cleaner than the previous "native" solutions.
val map1 = collection.immutable.HashMap(1 -> 11 , 2 -> 12)
val map2 = collection.immutable.HashMap(1 -> 11 , 2 -> 12)
map1.merged(map2)({ case ((k,v1),(_,v2)) => (k,v1+v2) })
The scaladoc also mentions that
The merged method is on average more performant than doing a
traversal and reconstructing a new immutable hash map from
scratch, or ++.
This can be implemented as a Monoid in plain Scala. Here is a sample implementation. With this approach, we can merge not just 2 maps, but a whole list of maps.
// Monoid trait
trait Monoid[M] {
  def zero: M
  def op(a: M, b: M): M
}
The Map-based implementation of the Monoid trait, which merges two maps:
val mapMonoid = new Monoid[Map[Int, Int]] {
  override def zero: Map[Int, Int] = Map()
  override def op(a: Map[Int, Int], b: Map[Int, Int]): Map[Int, Int] =
    (a.keySet ++ b.keySet).map { k =>
      (k, a.getOrElse(k, 0) + b.getOrElse(k, 0))
    }.toMap
}
Now, if you have a list of maps that needs to be merged (in this case, only 2), it can be done like below.
val map1 = Map(1 -> 9 , 2 -> 20)
val map2 = Map(1 -> 100, 3 -> 300)
val maps = List(map1, map2) // The list can have more maps.
val merged = maps.foldLeft(mapMonoid.zero)(mapMonoid.op)
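As a side note (a sketch of my own, not part of the original answer), the same Monoid can be generalized to any key type and any Numeric value type:
def numericMapMonoid[K, V](implicit num: Numeric[V]): Monoid[Map[K, V]] = new Monoid[Map[K, V]] {
  override def zero: Map[K, V] = Map()
  override def op(a: Map[K, V], b: Map[K, V]): Map[K, V] =
    (a.keySet ++ b.keySet).map { k =>
      k -> num.plus(a.getOrElse(k, num.zero), b.getOrElse(k, num.zero))
    }.toMap
}

val intMapMonoid = numericMapMonoid[Int, Int]
val merged2 = maps.foldLeft(intMapMonoid.zero)(intMapMonoid.op)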
map1 ++ ( for ( (k,v) <- map2 ) yield ( k -> ( v + map1.getOrElse(k,0) ) ) )
I wrote a blog post about this; check it out:
http://www.nimrodstech.com/scala-map-merge/
Basically, using a Scalaz Semigroup you can achieve this pretty easily.
It would look something like:
import scalaz.Scalaz._
map1 |+| map2
You can also do that with Cats.
import cats.implicits._
val map1 = Map(1 -> 9 , 2 -> 20)
val map2 = Map(1 -> 100, 3 -> 300)
map1 combine map2 // Map(2 -> 20, 1 -> 109, 3 -> 300)
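Since Cats derives a Monoid for maps whose values have a Monoid, a whole list of maps can be merged in one go as well (a small sketch using the same import):
List(map1, map2).combineAll // sums the values of duplicate keys, same as combine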
Starting in Scala 2.13, another solution based only on the standard library consists of replacing the groupBy part of your solution with groupMapReduce, which (as its name suggests) is the equivalent of a groupBy followed by mapValues and a reduce step:
// val map1 = Map(1 -> 9, 2 -> 20)
// val map2 = Map(1 -> 100, 3 -> 300)
(map1.toSeq ++ map2).groupMapReduce(_._1)(_._2)(_+_)
// Map[Int,Int] = Map(2 -> 20, 1 -> 109, 3 -> 300)
This:
Concatenates the two maps as a sequence of tuples (List((1,9), (2,20), (1,100), (3,300))). For conciseness, map2 is implicitly converted to Seq to adapt to the type of map1.toSeq - but you could choose to make it explicit by using map2.toSeq,
groups elements based on their first tuple part (group part of groupMapReduce),
maps grouped values to their second tuple part (map part of groupMapReduce),
reduces mapped values (_+_) by summing them (reduce part of groupMapReduce).
Andrzej Doyle's answer contains a great explanation of semigroups, which let you use the |+| operator to join two maps and sum the values for matching keys.
There are many ways a type can be made an instance of a typeclass, and unlike the OP you might not want to sum the values specifically. Or, you might want to operate on the intersection of the keys rather than the union. Scalaz also adds extra functions to Map for this purpose:
https://oss.sonatype.org/service/local/repositories/snapshots/archive/org/scalaz/scalaz_2.11/7.3.0-SNAPSHOT/scalaz_2.11-7.3.0-SNAPSHOT-javadoc.jar/!/index.html#scalaz.std.MapFunctions
You can do
import scalaz.Scalaz._
map1 |+| map2 // As per other answers
map1.intersectWith(map2)(_ + _) // Do things other than sum the values
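If I remember the Scalaz API correctly (treat this as an assumption, not a verified snippet), the same set of map functions also includes a union-based variant that takes an arbitrary combining function:
map1.unionWith(map2)(_ max _) // union of the keys, keeping the larger value on collisions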
The fastest and simplest way:
val m1 = Map(1 -> 1.0, 3 -> 3.0, 5 -> 5.2)
val m2 = Map(0 -> 10.0, 3 -> 3.0)
val merged = (m2 foldLeft m1) (
(acc, v) => acc + (v._1 -> (v._2 + acc.getOrElse(v._1, 0.0)))
)
This way, each element is immediately added to the accumulated map.
The second way, using ++, is:
map1 ++ map2.map { case (k,v) => k -> (v + map1.getOrElse(k,0)) }
Unlike the first way, in the second way an intermediate collection is built from the second map and then concatenated onto the first map. The case expression also deconstructs each entry via pattern matching.
Here's what I ended up using:
(a.toSeq ++ b.toSeq).groupBy(_._1).mapValues(_.map(_._2).sum)
This is what I came up with...
def mergeMap(m1: Map[Char, Int], m2: Map[Char, Int]): Map[Char, Int] = {
  var map: Map[Char, Int] = Map[Char, Int]() ++ m1
  for (p <- m2) {
    map = map + (p._1 -> (p._2 + map.getOrElse(p._1, 0)))
  }
  map
}
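For comparison, here is a minimal sketch (my rewording, not the original answer) of the same logic without the var, using foldLeft:
def mergeMapFold(m1: Map[Char, Int], m2: Map[Char, Int]): Map[Char, Int] =
  m2.foldLeft(m1) { case (acc, (k, v)) => acc + (k -> (v + acc.getOrElse(k, 0))) }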
Using the typeclass pattern, we can merge maps of any Numeric value type:
object MapSyntax {
  implicit class MapOps[A, B](a: Map[A, B]) {
    def plus(b: Map[A, B])(implicit num: Numeric[B]): Map[A, B] = {
      b ++ a.map { case (key, value) => key -> num.plus(value, b.getOrElse(key, num.zero)) }
    }
  }
}
Usage:
import MapSyntax.MapOps
map1 plus map2
Merging a sequence of maps:
maps.reduce(_ plus _)
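One caveat (my addition): reduce throws on an empty sequence, so if the list of maps might be empty, folding over an empty map is safer (sketch assuming Int keys and values as in the question):
maps.foldLeft(Map.empty[Int, Int])(_ plus _)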
I've got a small function to do the job; it's in my small library of frequently used functionality that isn't in the standard library.
It should work for all types of maps, mutable and immutable, not only HashMaps.
Here is the usage:
scala> import com.daodecode.scalax.collection.extensions._
scala> val merged = Map("1" -> 1, "2" -> 2).mergedWith(Map("1" -> 1, "2" -> 2))(_ + _)
merged: scala.collection.immutable.Map[String,Int] = Map(1 -> 2, 2 -> 4)
https://github.com/jozic/scalax-collection/blob/master/README.md#mergedwith
And here's the body (mapLike is the receiver map being extended and Repr its concrete map type):
def mergedWith(another: Map[K, V])(f: (V, V) => V): Repr =
  if (another.isEmpty) mapLike.asInstanceOf[Repr]
  else {
    val mapBuilder = new mutable.MapBuilder[K, V, Repr](mapLike.asInstanceOf[Repr])
    another.foreach { case (k, v) =>
      mapLike.get(k) match {
        case Some(ev) => mapBuilder += k -> f(ev, v)
        case _ => mapBuilder += k -> v
      }
    }
    mapBuilder.result()
  }
https://github.com/jozic/scalax-collection/blob/master/src%2Fmain%2Fscala%2Fcom%2Fdaodecode%2Fscalax%2Fcollection%2Fextensions%2Fpackage.scala#L190
For anyone coming across an AnyVal error, convert the values as follows.
Error:
"could not find implicit value for parameter num: Numeric[AnyVal]"
(m1.toSeq ++ m2.toSeq).groupBy(_._1).mapValues(_.map(_._2.asInstanceOf[Number].intValue()).sum)
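That error usually means the value type was inferred as AnyVal (for example when the maps mix Int and Double values). An arguably cleaner fix, if you control the declarations, is to give the maps an explicit numeric value type up front (sketch):
val m1: Map[String, Int] = Map("a" -> 1, "b" -> 2)
val m2: Map[String, Int] = Map("a" -> 3, "c" -> 4)
(m1.toSeq ++ m2.toSeq).groupBy(_._1).mapValues(_.map(_._2).sum) // Numeric[Int] is now found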
I want to merge multiple maps that are in a list. Each map has two key-value pairs.
What I have...
val input: List[Map[String, String]] = List(
  Map("a" -> "b", "c" -> "d"),
  Map("a" -> "b", "c" -> "e"),
  Map("a" -> "f", "c" -> "h")
)
What I want...
val output: Map[String, List[String]] =
  Map("b" -> List("d", "e"), "f" -> List("h"))
I've researched but the closest I could find was this (Scala: Merge maps by key), which is not the scenario I am looking for. Ideally, I would appreciate an explanation rather than just a line of code. I know this can be done with for-loops, but I am trying to learn the Scala way of merging maps.
EDIT: After some discussion in the comments, I decided to simplify the question a bit. The keys 'a' and 'c' are static/not relevant/can be hard coded.
The goal is to make new maps, where the value associated with key 'a' would be the key, and the value associated with key 'c' would be the value. Once all the new maps are made, the ones with a similar key can be grouped together, and all their values can be placed in a list.
The idea is to first extract all the (key, value) pairs before using groupBy and finally mapping the values:
val input: List[Map[String, String]] = ...
val res: Map[String, List[String]] =
  input
    .flatten                               // List[(String, String)]
    .groupBy { case (k, _) => k }          // Map[String, List[(String, String)]]
    .mapValues(_.map { case (_, v) => v }) // Map[String, List[String]]
OK, try this.
val input: List[Map[String, String]] = List( Map("a" -> "b", "c" -> "d")
, Map("a" -> "b", "c" -> "e")
, Map("a" -> "f", "c" -> "h")
)
input.map(m => (m("a"), m("c"))) //List((b,d), (b,e), (f,h))
.groupBy(_._1) //Map(b -> List((b,d), (b,e)), f -> List((f,h)))
.mapValues(_.map(_._2)) //Map(b -> List(d, e), f -> List(h))
retrieve the values and put them in a tuple
make the 1st element a key to the tuple(s)
un-tuple by extracting the 2nd element
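On Scala 2.13+, the groupBy and mapValues steps can also be fused into a single groupMap call (an alternative sketch, not part of the answer above):
input.map(m => (m("a"), m("c"))).groupMap(_._1)(_._2) // Map(b -> List(d, e), f -> List(h))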
I'm trying to find a cleaner way to update nested immutable structures in Scala. I think I'm looking for something similar to assoc-in in Clojure. I'm not sure how much types factor into this.
For example, in Clojure, to update the "city" attribute of a nested map I'd do:
> (def person {:name "john", :dob "1990-01-01", :home-address {:city "norfolk", :state "VA"}})
#'user/person
> (assoc-in person [:home-address :city] "richmond")
{:name "john", :dob "1990-01-01", :home-address {:state "VA", :city "richmond"}}
What are my options in Scala?
val person = Map("name" -> "john", "dob" -> "1990-01-01",
"home-address" -> Map("city" -> "norfolk", "state" -> "VA"))
As indicated in the other answer, you can leverage case classes to get cleaner, typed data objects. But in case what you need is simply to update a map:
val m = Map("A" -> 1, "B" -> 2)
val m2 = m + ("A" -> 3)
The result (in a worksheet):
m: scala.collection.immutable.Map[String,Int] = Map(A -> 1, B -> 2)
m2: scala.collection.immutable.Map[String,Int] = Map(A -> 3, B -> 2)
The + operator on a Map adds the new key-value pair, overwriting the value if the key already exists. Note, though, that because the original map is an immutable val, you have to assign the result to a new val; you cannot change the original.
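As a side note, updated does the same thing and reads a little more explicitly:
m.updated("A", 3) // same result: Map(A -> 3, B -> 2)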
Because, in your example, you're rewriting a nested value, doing this manually becomes somewhat more onerous:
val m = Map("A" -> 1, "B" -> Map("X" -> 2, "Y" -> 4))
val m2 = m + ("B" -> Map("X" -> 3))
This causes some loss of data (the nested Y value disappears):
m: scala.collection.immutable.Map[String,Any] = Map(A -> 1, B -> Map(X -> 2, Y -> 4))
m2: scala.collection.immutable.Map[String,Any] = Map(A -> 1, B -> Map(X -> 3)) // Note that 'Y' has gone away.
Thus, you're forced to copy the original nested value, update it, and then add it back:
val m = Map("A" -> 1, "B" -> Map("X" -> 2, "Y" -> 4))
val b = m.get("B") match {
case Some(b: Map[String, Any]) => b + ("X" -> 3) // Will update `X` while keeping other key-value pairs
case None => Map("X" -> 3)
}
val m2 = m + ("B" -> b)
This yields the 'expected' result, but is obviously a lot of code:
m: scala.collection.immutable.Map[String,Any] = Map(A -> 1, B -> Map(X -> 2, Y -> 4))
b: scala.collection.immutable.Map[String,Any] = Map(X -> 3, Y -> 4)
m2: scala.collection.immutable.Map[String,Any] = Map(A -> 1, B -> Map(X -> 3, Y -> 4))
In short, with any immutable data structure when you 'update' it you're really copying all the pieces you want and then including updated values where appropriate. If the structure is complicated this can get onerous. Hence the recommendation that #0___ gave with, say, Monocle.
Scala is a statically typed language, so you may first want to increase the safety of your code by moving away from any-string-to-any-string.
case class Address(city: String, state: String)
case class Person(name: String, dob: java.util.Date, homeAddress: Address)
(Yes, there are better alternatives for java.util.Date).
Then you create an update like this:
val person = Person(name = "john", dob = new java.util.Date(90, 0, 1),
homeAddress = Address(city = "norfolk", state = "VA"))
person.copy(homeAddress = person.homeAddress.copy(city = "richmond"))
To avoid this nested copy, you would use a lens library, like Monocle or Quicklens (there are many others).
import com.softwaremill.quicklens._
person.modify(_.homeAddress.city).setTo("richmond")
The other two answers nicely sum up the importance of correctly modelling your problem, so that we don't end up having to deal with a Map[String, Object]-style collection.
Just adding my two cents here for a brute-force solution utilizing the quite powerful function pipelining and higher-order function features in Scala. The ugly asInstanceOf cast is needed because the Map values are of different types, and hence Scala treats the Map's signature as Map[String,Any].
val person: Map[String,Any] = Map("name" -> "john", "dob" -> "1990-01-01", "home-address" -> Map("city" -> "norfolk", "state" -> "VA"))
val newperson = person.map { case (k, v) =>
  if (k == "home-address") k -> v.asInstanceOf[Map[String, String]].updated("city", "Virginia")
  else k -> v
}
Is there a function in Scala to compose two maps or is flatMap a sensible approach?
scala> val caps: Map[String, Int] = Map(("A", 1), ("B", 2))
caps: Map[String,Int] = Map(A -> 1, B -> 2)
scala> val lower: Map[Int, String] = Map((1, "a"), (2, "b"))
lower: Map[Int,String] = Map(1 -> a, 2 -> b)
scala> caps.flatMap {
| case (cap, idx) => Map((cap, lower(idx)))
| }
res1: scala.collection.immutable.Map[String,String] = Map(A -> a, B -> b)
Some syntactic sugar would be great!
If you know lower will contain keys for all the values in caps, you can use mapValues:
scala> caps mapValues lower
res0: scala.collection.immutable.Map[String,String] = Map(A -> a, B -> b)
If you don't want or need a new collection, just a mapping, it's a little more idiomatic to use andThen:
scala> val composed = caps andThen lower
composed: PartialFunction[String,String] = <function1>
scala> composed("A")
res1: String = a
This also assumes there aren't values in caps that aren't mapped in lower.
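If lower might be missing some of the values in caps, a flatMap over lower.get (a small sketch of my own) simply drops the unmatched entries instead of throwing:
caps.flatMap { case (cap, idx) => lower.get(idx).map(cap -> _) }
// Map(A -> a, B -> b); keys whose value has no mapping in lower are dropped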