Create Map from elements of a List of case classes - Scala

case class Student(id: String, name: String, teacher: String)
val myList = List(Student("1", "Ramesh", "Isabela"), Student("2", "Elena", "Mark"), Student("3", "invalidKey", "Someteacher"))
val a = myList.foreach(i => (i.name -> i.teacher)).toMap.filter(i.name != "invalidKey")
I have a list of Student case class instances. I want to build a map of student name to teacher, where the names (the keys of the map) will always be unique. Preferably the map should be able to filter out a certain name.

You're using foreach, which returns Unit as the result.
I would suggest either of the two options below. The first one is what Luis Miguel mentioned:
val myMap = myList.collect {
  case student if student.name != "invalidKey" => student.name -> student.teacher
}.toMap
Or:
val myMap2 = myList.foldLeft[Map[String, String]](Map.empty) {
  case (elementsMap, newElement) if newElement.name != "invalidKey" =>
    elementsMap + (newElement.name -> newElement.teacher)
  case (elementsMap, _) => elementsMap
}
Differences:
The first approach is easier to read and shorter (though being shorter is not an advantage in itself :D). The second one does fewer iterations: the first needs an extra pass to convert the intermediate collection to a Map.
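If you want the readability of the first approach in a single pass, one option (a sketch, not from the original answer) is to go through an iterator, so that collect and toMap happen in one traversal:

// single-pass variant of the collect solution (sketch)
val myMap3 = myList.iterator
  .collect { case s if s.name != "invalidKey" => s.name -> s.teacher }
  .toMap
// myMap3: Map(Ramesh -> Isabela, Elena -> Mark)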

Related

How to create two sequences out of one by comparing one custom object with another in that sequence?

case class Submission(name: String, plannedDate: Option[LocalDate], revisedDate: Option[LocalDate])
val submission_1 = Submission("Åwesh Care", Some(LocalDate.parse("2020-05-11")), Some(LocalDate.parse("2020-06-11")))
val submission_2 = Submission("robin Dore", Some(LocalDate.parse("2020-05-11")), Some(LocalDate.parse("2020-05-30")))
val submission_3 = Submission("AIMS Hospital", Some(LocalDate.parse("2020-01-24")), Some(LocalDate.parse("2020-07-30")))
val submissions = Seq(submission_1, submission_2, submission_3)
Split the submissions so that submissions sharing a plannedDate and/or revisedDate go to sameDateGroup and the others go to remainder.
val (sameDateGroup, remainder) = someFunction(submissions)
Example result as below:
sameDateGroup should have
Seq(Submission("Åwesh Care", Some(2020-05-11), Some(2020-06-11)),
Submission("robin Dore", Some(2020-05-11), Some(2020-05-30)))
and remainder should have:
Seq(Submission("AIMS Hospital", Some(2020-01-24), Some(2020-07-30)))
So, if I understand the logic here, submission A shares a date with submission B (and both would go in the sameDateGroup) IFF:
subA.plannedDate == subB.plannedDate
OR subA.plannedDate == subB.revisedDate
OR subA.revisedDate == subB.plannedDate
OR subA.revisedDate == subB.revisedDate
Likewise, and conversely, submission C belongs in the remainder category IFF:
subC.plannedDate // is unique among all planned dates
AND subC.plannedDate // does not exist among all revised dates
AND subC.revisedDate // is unique among all revised dates
AND subC.revisedDate // does not exist among all planned dates
Given all that, I think this does what you're describing.
import java.time.LocalDate

case class Submission(name: String,
                      plannedDate: Option[LocalDate],
                      revisedDate: Option[LocalDate])

val submission_1 = Submission("Åwesh Care",
                              Some(LocalDate.parse("2020-05-11")),
                              Some(LocalDate.parse("2020-06-11")))
val submission_2 = Submission("robin Dore",
                              Some(LocalDate.parse("2020-05-11")),
                              Some(LocalDate.parse("2020-05-30")))
val submission_3 = Submission("AIMS Hospital",
                              Some(LocalDate.parse("2020-01-24")),
                              Some(LocalDate.parse("2020-07-30")))
val submissions = Seq(submission_1, submission_2, submission_3)

val pDates = submissions.groupBy(_.plannedDate)
val rDates = submissions.groupBy(_.revisedDate)

val (sameDateGroup, remainder) = submissions.partition(sub =>
  pDates(sub.plannedDate).lengthIs > 1 ||
  rDates(sub.revisedDate).lengthIs > 1 ||
  pDates.keySet(sub.revisedDate) ||
  rDates.keySet(sub.plannedDate))
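With the sample data, submissions 1 and 2 share plannedDate 2020-05-11 and submission 3 shares no date with anyone, so this yields:

// sameDateGroup: List(submission_1, submission_2)
// remainder:     List(submission_3)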
A simple way to do this is to count the number of matching submissions for each submission in the list, and use that to partition the list:
def matching(s1: Submission, s2: Submission) =
  s1.plannedDate == s2.plannedDate || s1.revisedDate == s2.revisedDate

val (sameDateGroup, remainder) =
  submissions.partition { s1 =>
    submissions.count(s2 => matching(s1, s2)) > 1
  }
The matching function can contain whatever specific test is required.
This is O(n^2) so a more sophisticated algorithm would be needed for very long lists.
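For the sample data above, every submission matches itself, so a count greater than 1 signals a genuine duplicate; a quick check of the counts (sketch):

// submissions.count(s2 => matching(submission_1, s2)) == 2  (itself plus submission_2)
// submissions.count(s2 => matching(submission_3, s2)) == 1  (only itself)
// so sameDateGroup == List(submission_1, submission_2) and remainder == List(submission_3)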
I think this will do the trick.
I'm sorry, some variable names are not very meaningful, because I used a different case class when trying this. For some reason I only thought about using .groupBy later, so I don't really recommend this version: it is a bit incomprehensible and can be solved more easily with groupBy.
case class Submission(name: String, plannedDate: Option[String], revisedDate: Option[String])

val l = List(
  Submission("Åwesh Care", Some("2020-05-11"), Some("2020-06-11")),
  Submission("robin Dore", Some("2020-05-11"), Some("2020-05-30")),
  Submission("AIMS Hospital", Some("2020-01-24"), Some("2020-07-30")))

val t = l
  .map((_, 1))
  .foldLeft(Map.empty[Option[String], (List[Submission], Int)]) { (acc, idnTuple) =>
    idnTuple match {
      case (idn, count) =>
        acc
          .get(idn.plannedDate)
          .map { case (mapIdn, mapCount) =>
            acc + (idn.plannedDate -> (idn :: mapIdn, mapCount + count))
          }
          .getOrElse(acc + (idn.plannedDate -> (List(idn), count)))
    }
  }
  .values
  .partition(_._2 > 1)

val r = (t._1.map(_._1).flatten, t._2.map(_._1).flatten)
println(r)
It basically follows the map-reduce wordcount schema.
If someone sees this, and knows how to do the tuple deconstruction easier, please let me know in the comments.
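For reference, the groupBy formulation hinted at above is much shorter; a sketch using only the plannedDate criterion this answer groups on:

val (dup, uniq) = l.groupBy(_.plannedDate).values.partition(_.size > 1)
val r2 = (dup.flatten.toList, uniq.flatten.toList)
// same shape of result as r: duplicates by plannedDate on the left, the rest on the right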

Scala create immutable nested map

I have a situation here. I have two strings:
val keyMap = "androidApp,key1;iosApp,key2;xyz,key3"
val tentMap = "androidApp,tenant1; iosApp,tenant1; xyz,tenant2"
From these I want to create a nested immutable map like this:
tenant1 -> (androidApp -> key1, iosApp -> key2),
tenant2 -> (xyz -> key3)
So basically I want to group by tenant and create a map of the keyMap entries.
Here is what I tried, but it uses a mutable map, which I don't want. Is there a way to create this using an immutable map?
case class TenantSetting() {
  val requesterKeyMapping = new mutable.HashMap[String, String]()
}

val requesterKeyMapping = keyMap.split(";")
  .map { keyValueList => keyValueList.split(',')
    .filter(_.size == 2)
    .map(keyValuePair => (keyValuePair[0], keyValuePair[1]))
    .toMap
  }.flatten.toMap

val config = new mutable.HashMap[String, TenantSetting]
tentMap.split(";")
  .map { keyValueList => keyValueList.split(',')
    .filter(_.size == 2)
    .map { keyValuePair =>
      val requester = keyValuePair[0]
      val tenant = keyValuePair[1]
      if (!config.contains(tenant)) config.put(tenant, new TenantSetting)
      config.get(tenant).get.requesterKeyMapping.put(requester, requesterKeyMapping.get(requester).get)
    }
  }
The logic to break the strings into a map can be the same for both as it's the same syntax.
What you had for the first string was not quite right: the filter was being applied to each string produced by the split, not to the array result itself. That also showed in your use of [] on keyValuePair, which was of type String and not Array[String] as I think you were expecting (Scala indexes with (), not []). You also needed a trim to cope with the spaces in the second string, and you might want to trim the key and value as well to avoid other whitespace issues.
Additionally in this case the combination of map and filter can be more succinctly done with collect as shown here:
How to convert an Array to a Tuple?
The use of the pattern with 2 elements ensures you filter out anything with length other than 2 as you wanted.
The iterator makes the combination of map and collect more efficient by requiring only one iteration of the collection returned from the first split.
With both strings turned into a map, it just needs the right use of groupBy, grouping the first map by the value the second map holds for the same key, to get what you wanted. Obviously this only works if the same key is always present in the second map.
def toMap(str: String): Map[String, String] =
  str
    .split(";")
    .iterator
    .map(_.trim.split(','))
    .collect { case Array(key, value) => (key.trim, value.trim) }
    .toMap
val keyMap = toMap("androidApp,key1;iosApp,key2;xyz,key3")
val tentMap = toMap("androidApp,tenant1; iosApp,tenant1; xyz,tenant2")
val finalMap = keyMap.groupBy { case (k, _) => tentMap(k) }
Printing out finalMap gives:
Map(tenant2 -> Map(xyz -> key3), tenant1 -> Map(androidApp -> key1, iosApp -> key2))
Which is what you wanted.
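If a key could ever be missing from the second map, tentMap(k) would throw a NoSuchElementException; a defensive variant (a sketch, with a hypothetical placeholder tenant name):

val safeFinalMap = keyMap.groupBy { case (k, _) => tentMap.getOrElse(k, "unknownTenant") }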

reduce a list in scala by value

How can I reduce a list like below concisely
Seq[Temp] = List(Temp(a,1), Temp(a,2), Temp(b,1))
to
List(Temp(a,2), Temp(b,1))
Only keep Temp objects with a unique first param and the max of the second param.
My solution uses a lot of groupBys and reduces, which makes for a lengthy answer.
You have to:
groupBy the key
sortBy value in ascending order
take the last one, which is the largest
Example,
scala> final case class Temp (a: String, value: Int)
defined class Temp
scala> val data : Seq[Temp] = List(Temp("a",1), Temp("a",2), Temp("b",1))
data: Seq[Temp] = List(Temp(a,1), Temp(a,2), Temp(b,1))
scala> data.groupBy(_.a).map { case (k, group) => group.sortBy(_.value).last }
res0: scala.collection.immutable.Iterable[Temp] = List(Temp(b,1), Temp(a,2))
or instead of sortBy(fn).last you can maxBy(fn)
scala> data.groupBy(_.a).map { case (k, group) => group.maxBy(_.value) }
res1: scala.collection.immutable.Iterable[Temp] = List(Temp(b,1), Temp(a,2))
You can generate a Map with groupBy, compute the max in mapValues and convert it back to the Temp classes as in the following example:
case class Temp(id: String, value: Int)
List(Temp("a", 1), Temp("a", 2), Temp("b", 1)).
groupBy(_.id).mapValues( _.map(_.value).max ).
map{ case (k, v) => Temp(k, v) }
// res1: scala.collection.immutable.Iterable[Temp] = List(Temp(b,1), Temp(a,2))
Worth noting that the solution using maxBy in the other answer is more efficient as it minimizes necessary transformations.
You can do this using foldLeft (max here is scala.math.max):
data.foldLeft(Map[String, Int]().withDefaultValue(0)) { (map, tmp) =>
  map.updated(tmp.id, math.max(map(tmp.id), tmp.value))
}.map { case (i, v) => Temp(i, v) }
This essentially combines the logic of groupBy with the max operation in a single pass.
Note that this may be less efficient than groupBy, which uses a mutable.Map internally and so avoids constantly re-creating a new map. If you care about performance and are prepared to use mutable data, this is another option:
import scala.collection.mutable

val tmpMap = mutable.Map[String, Int]().withDefaultValue(0)
data.foreach(tmp => tmpMap(tmp.id) = math.max(tmp.value, tmpMap(tmp.id)))
tmpMap.map { case (i, v) => Temp(i, v) }.toList
Use a ListMap if you need to retain the data order, or sort at the end if you need a particular ordering.
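On Scala 2.13+, groupMapReduce combines the grouping and the max in one pass; a sketch of the same computation (assuming the Temp(id, value) definition used above):

val result = data
  .groupMapReduce(_.id)(_.value)(math.max)
  .map { case (k, v) => Temp(k, v) }
  .toList
// result: List(Temp(b,1), Temp(a,2)) (element order not guaranteed)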

Scala broadcast join with "one to many" relationship

I am fairly new to Scala and RDDs.
I have a very simple scenario yet it seems very hard to implement with RDDs.
Scenario:
I have two tables. One large and one small. I broadcast the smaller table.
I then want to join the tables and finally aggregate the values after the join into a final total.
Here is an example of the code:
val bigRDD = sc.parallelize(List(("A",1,"1Jan2000"), ("B",2,"1Jan2000"), ("C",3,"1Jan2000"), ("D",3,"1Jan2000"), ("E",3,"1Jan2000")))
val smallRDD = sc.parallelize(List(("A","Fruit","Apples"), ("A","ZipCode","1234"), ("B","Fruit","Apples"), ("B","ZipCode","456")))
val broadcastVar = sc.broadcast(smallRDD.keyBy { a => (a._1, a._2) } // turn to pair RDD
                                        .collectAsMap()             // collect as Map
)
//first join
val joinedRDD = bigRDD.map { accs =>
  //get list of groups
  val groups = List("Fruit", "ZipCode")
  val i = "Fruit"
  //for each group
  //for (i <- groups) {
  if (broadcastVar.value.get(accs._1, i) != None) {
    (broadcastVar.value.get(accs._1, i).get._1,
     broadcastVar.value.get(accs._1, i).get._2,
     accs._2, accs._3)
  } else {
    None
  }
  //}
}
//expected after this
//("A","Fruit","Apples",1, "1Jan2000"),("B","Fruit","Apples",2, "1Jan2000"),
//("A","ZipCode","1234", 1,"1Jan2000"),("B","ZipCode","456", 2,"1Jan2000")
//then group and sum
//cannot do anything with the joinedRDD!!!
//error == value copy is not a member of Product with Serializable
// Final Expected Result
//("Fruit","Apples",3, "1Jan2000"),("ZipCode","1234", 1,"1Jan2000"),("ZipCode","456", 2,"1Jan2000")
My questions:
Is this the best approach first of all with RDDs?
Disclaimer - I have done this whole task using dataframes successfully. The idea is to create another version using only RDDs to compare performance.
Why is the type of my joinedRDD not recognised after it was created, so that I can continue to use functions like copy on it?
How can I get away with not doing a .collectAsMap() when broadcasting the variable? I currently have to include the first two items to enforce uniqueness and avoid dropping any values.
Thanks for the help in advance!
Final solution for anyone interested
case class dt(group: String, group_key: String, count: Long, date: String)

val bigRDD = sc.parallelize(List(("A",1,"1Jan2000"), ("B",2,"1Jan2000"), ("C",3,"1Jan2000"), ("D",3,"1Jan2000"), ("E",3,"1Jan2000")))
val smallRDD = sc.parallelize(List(("A","Fruit","Apples"), ("A","ZipCode","1234"), ("B","Fruit","Apples"), ("B","ZipCode","456")))
val broadcastVar = sc.broadcast(smallRDD.keyBy { a => a._1 } // turn to pair RDD
                                        .groupByKey()        // so as not to lose any data
                                        .collectAsMap()      // collect as Map
)
//first join
val joinedRDD = bigRDD.flatMap { accs =>
  if (broadcastVar.value.get(accs._1) != None) {
    val bc = broadcastVar.value.get(accs._1).get
    bc.map(p => dt(p._2, p._3, accs._2, accs._3))
  } else {
    None
  }
}
//expected after this
//("Fruit","Apples",1, "1Jan2000"),("Fruit","Apples",2, "1Jan2000"),
//("ZipCode","1234", 1,"1Jan2000"),("ZipCode","456", 2,"1Jan2000")
//then group and sum
val finalRDD = joinedRDD
  .map(s => (s.copy(count = 0), s.count)) //trick to keep code to a minimum (count = 0)
  .reduceByKey(_ + _)
  .map(pair => pair._1.copy(count = pair._2))
In your map statement you return either a tuple or None based on the if condition. These types do not match, so you fall back to a common supertype, and joinedRDD becomes an RDD[Product with Serializable], which is not what you want at all (it's basically RDD[Any]). You need to make sure all paths return the same type. In this case, you probably want an Option[(String, String, Int, String)]. All you need to do is wrap the tuple result in a Some:
if (broadcastVar.value.get(accs._1, i) != None) {
  Some((broadcastVar.value.get(accs._1, i).get._2,
        broadcastVar.value.get(accs._1, i).get._3,
        accs._2, accs._3))
} else {
  None
}
And now your types will match up. This makes joinedRDD an RDD[Option[(String, String, Int, String)]]. Now that the type is correct the data is usable; however, it means you will need to map over the Option to work with the tuples. If you don't need the None values in the final result, you can use flatMap instead of map to create joinedRDD, which will unwrap the Options for you, filtering out all the Nones.
collectAsMap is the correct way to turn an RDD into a HashMap, but you need multiple values for a single key. Before calling collectAsMap, and after mapping the smallRDD into a key/value pair, use groupByKey to group all of the values for a single key together. Then when you look up a key in your HashMap, you can map over the values, creating a new record for each one.
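A minimal sketch of that grouped-lookup pattern with plain Scala collections (the names here are illustrative, not from the original code):

// group the small side by key, then flatMap the big side over the grouped values
val small = List(("A", "Fruit", "Apples"), ("A", "ZipCode", "1234"), ("B", "Fruit", "Apples"))
val grouped = small.groupBy(_._1)
val big = List(("A", 1), ("B", 2), ("C", 3))
val joined = big.flatMap { case (k, n) =>
  grouped.getOrElse(k, Nil).map { case (_, group, value) => (group, value, n) }
}
// joined: List((Fruit,Apples,1), (ZipCode,1234,1), (Fruit,Apples,2))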

Decompose Scala sequence into member values

I'm looking for an elegant way of accessing two items in a Seq at the same time. I've checked earlier in my code that the Seq will have exactly two items. Now I would like to be able to give them names, so they have meaning.
records
  .sliding(2) // makes sure we get `Seq` with two items
  .map { recs =>
    // Something like this...
    val (former, latter) = recs
  }
Is there an elegant and/or idiomatic way to achieve this in Scala?
I'm not sure if it is any more elegant, but you can also unpick the sequence like this:
val former +: latter +: _ = recs
You can access the elements by their index:
map { recs =>
  val (former, latter) = (recs(0), recs(1))
}
You can use pattern matching to decompose the structure of your list:
val records = List("first", "second")
records match {
case first +: second +: Nil => println(s"1: $first, 2: $second")
case _ => // Won't happen (you can omit this)
}
will output
1: first, 2: second
Each group produced by sliding here is a List. Using a pattern match, you can give names to its elements like this:
map { case List(former, latter) =>
  ...
}
Note that since it's a pattern match, you need to use {} instead of ().
For records of a known element type (for example, Int):
records.sliding(2).map {
  case List(former: Int, latter: Int) => former + latter
}
Note that this will combine elements (0, 1), then (1, 2), then (2, 3), and so on. To combine the elements pairwise, use sliding(2, 2):
val pairs = records.sliding(2, 2).map {
  case List(former: Int, latter: Int) => former + latter
  case List(one: Int) => one
}.toList
Note that you now need an extra case for just one element, in case the size of records is odd.
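One caveat (an addition, not part of the answers above): if records is not a List (say, a Vector), the List(...) pattern will not match the groups sliding produces; matching on Seq covers any sequence type:

val pairs2 = records.sliding(2, 2).map {
  case Seq(former: Int, latter: Int) => former + latter
  case Seq(one: Int) => one
}.toList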