How to implement a dynamic group by in Scala?

Suppose I have a List[Map[String, String]] that represents a table in a database, and a List[String] that represents a list of column names. I'd like to implement the equivalent of a GROUP BY clause in an SQL query:
def fun(table: List[Map[String, String]], keys: List[String]): List[List[Map[String, String]]]
For example:
val table = List(
  Map("name" -> "jade", "job" -> "driver", "sex" -> "male"),
  Map("name" -> "mike", "job" -> "police", "sex" -> "female"),
  Map("name" -> "jane", "job" -> "clerk", "sex" -> "female"),
  Map("name" -> "smith", "job" -> "driver", "sex" -> "male")
)
val keys = List("job", "sex")
And then fun(table, keys) should be:
List(
  List(
    Map("name" -> "jade", "job" -> "driver", "sex" -> "male"),
    Map("name" -> "smith", "job" -> "driver", "sex" -> "male")
  ),
  List(Map("name" -> "mike", "job" -> "police", "sex" -> "female")),
  List(Map("name" -> "jane", "job" -> "clerk", "sex" -> "female"))
)

You're looking for groupBy:
table.groupBy(row => keys.map(key => row(key))).map {
  case (group, values) => values
}.toList
Or more concisely:
table.groupBy(row => keys.map(row)).values.toList
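Putting the pieces together, a minimal sketch of fun itself (the grouping key is the list of the row's values for the requested columns; group order is unspecified because it comes from a Map):

def fun(table: List[Map[String, String]],
        keys: List[String]): List[List[Map[String, String]]] =
  table
    .groupBy(row => keys.map(row)) // a Map[String, String] is also a String => String function
    .values
    .toList

With the table and keys above, this puts the two drivers in one group and leaves the police and clerk rows in singleton groups, matching the expected output up to group order.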

Related

Trouble with Scala pattern matching with map - required String

I'm trying to turn my RDD into a pair RDD, but I'm having trouble with the pattern matching and I have no idea what I'm doing wrong.
val test = sc.textFile("neighborhood_test.csv");
val nhead0 = test.first;
val test_split = test.map(line => line.split("\t"));
val nhead = test_split.first;
val test_neigh0 = test.filter(line => line != nhead0);
//test_neigh0.first = 3335 Dunlap Seattle
val test_neigh1 = test_neigh0.map(line => line.split("\t"));
//test_neigh1.first = Array[String] = Array(3335, Dunlap, Seattle)
val test_neigh = test_neigh1.map({case (id, neigh, city) => (id, (neigh, city))});
This gives the error:
found : (T1, T2, T3)
required: String
val test_neigh = test_neigh0.map({case (id, neigh, city) => (id, (neigh, city))});
EDIT:
The input file is tab-separated and looks like this:
id neighbourhood city
3335 Dunlap Seattle
4291 Roosevelt Seattle
5682 South Delridge Seattle
As output I want a pair RDD with id as key and (neigh, city) as value.
Neither test_neigh0.first nor test_neigh1.first is a triple, so you cannot pattern match it as such.
The elements in test_neigh1 are Array[String]. Under the assumption that these arrays are all of length 3, you can pattern match against them as { case Array(id, neigh, city) => ...}.
To make sure that you won't get a match error if one of the lines has more or fewer than 3 elements, you can collect on this pattern match instead of mapping over it.
val test_neigh: RDD[(String, (String, String))] = test_neigh1.collect {
  case Array(id, neigh, city) => (id, (neigh, city))
}
EDIT
The issues you experienced, as described in your comment, are related to RDD[_] not being a usual collection (such as List, Array or Set). To avoid them, you might need to fetch the elements of the array without pattern matching:
val test_neigh: RDD[(String, (String, String))] = test_neigh0.map(line => {
  val arr = line.split("\t")
  (arr(0), (arr(1), arr(2)))
})
val baseRDD = sc.textFile("neighborhood_test.csv").filter { x => !x.contains("city") }
baseRDD.map { x =>
  val split = x.split("\t")
  (split(0), (split(1), split(2)))
}.groupByKey().foreach(println(_))
Result:
(3335,CompactBuffer((Dunlap,Seattle)))
(4291,CompactBuffer((Roosevelt,Seattle)))
(5682,CompactBuffer((South Delridge,Seattle)))

Array[Byte] Spark RDD to String Spark RDD

I'm using Cloudera's SparkOnHBase module in order to get data from HBase.
I get an RDD in this way:
var getRdd = hbaseContext.hbaseRDD("kbdp:detalle_feedback", scan)
Based on that, what I get is an object of type
RDD[(Array[Byte], List[(Array[Byte], Array[Byte], Array[Byte])])]
which corresponds to the row key and a list of values, all of them represented as byte arrays.
If I save getRdd to a file, what I see is:
([B#f7e2590,[([B#22d418e2,[B#12adaf4b,[B#48cf6e81), ([B#2a5ffc7f,[B#3ba0b95,[B#2b4e651c), ([B#27d0277a,[B#52cfcf01,[B#491f7520), ([B#3042ad61,[B#6984d407,[B#f7c4db0), ([B#29d065c1,[B#30c87759,[B#39138d14), ([B#32933952,[B#5f98506e,[B#8c896ca), ([B#2923ac47,[B#65037e6a,[B#486094f5), ([B#3cd385f2,[B#62fef210,[B#4fc62b36), ([B#5b3f0f24,[B#8fb3349,[B#23e4023a), ([B#4e4e403e,[B#735bce9b,[B#10595d48), ([B#5afb2a5a,[B#1f99a960,[B#213eedd5), ([B#2a704c00,[B#328da9c4,[B#72849cc9), ([B#60518adb,[B#9736144,[B#75f6bc34)])
for each record (rowKey and the columns)
But what I need is the String representation of each of the keys and values, or at least the values, in order to save it to a file and see something like
key1,(value1,value2...)
or something like
key1,value1,value2...
I'm completely new to Spark and Scala and it's proving quite hard to get anywhere.
Could you please help me with that?
First let's create some sample data:
scala> val d = List( ("ab" -> List(("qw", "er", "ty")) ), ("cd" -> List(("ac", "bn", "afad")) ) )
d: List[(String, List[(String, String, String)])] = List((ab,List((qw,er,ty))), (cd,List((ac,bn,afad))))
This is what the data looks like:
scala> d foreach println
(ab,List((qw,er,ty)))
(cd,List((ac,bn,afad)))
Convert it to Array[Byte] format
scala> val arrData = d.map { case (k,v) => k.getBytes() -> v.map { case (a,b,c) => (a.getBytes(), b.getBytes(), c.getBytes()) } }
arrData: List[(Array[Byte], List[(Array[Byte], Array[Byte], Array[Byte])])] = List((Array(97, 98),List((Array(113, 119),Array(101, 114),Array(116, 121)))), (Array(99, 100),List((Array(97, 99),Array(98, 110),Array(97, 102, 97, 100)))))
Create an RDD out of this data
scala> val rdd1 = sc.parallelize(arrData)
rdd1: org.apache.spark.rdd.RDD[(Array[Byte], List[(Array[Byte], Array[Byte], Array[Byte])])] = ParallelCollectionRDD[0] at parallelize at <console>:25
Create a conversion function from Array[Byte] to String:
scala> def b2s(a: Array[Byte]): String = new String(a)
b2s: (a: Array[Byte])String
Perform our final conversion:
scala> val rdd2 = rdd1.map { case (k,v) => b2s(k) -> v.map{ case (a,b,c) => (b2s(a), b2s(b), b2s(c)) } }
rdd2: org.apache.spark.rdd.RDD[(String, List[(String, String, String)])] = MapPartitionsRDD[1] at map at <console>:29
scala> rdd2.collect()
res2: Array[(String, List[(String, String, String)])] = Array((ab,List((qw,er,ty))), (cd,List((ac,bn,afad))))
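If the goal is the flat key1,value1,value2... text asked for in the question, a hedged sketch building on rdd2 above (the output path is just a placeholder):

// Flatten each row of rdd2 into one comma-separated line per key.
val lines = rdd2.map { case (key, values) =>
  val flat = values.flatMap { case (a, b, c) => List(a, b, c) }
  (key :: flat).mkString(",")
}
lines.saveAsTextFile("feedback_as_text")  // placeholder output path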
I don't know about HBase but if those Array[Byte]s are Unicode strings, something like this should work:
val rdd: RDD[(Array[Byte], List[(Array[Byte], Array[Byte], Array[Byte])])] = ??? // whatever you got from HBase
rdd.map { case (k, l) =>
  (new String(k),
   l.map { case (a, b, c) => (new String(a), new String(b), new String(c)) })
}
Sorry for bad styling and whatnot, I am not even sure it will work.

How to join two RDDs by key to get RDD of (String, String)?

I have two pair RDDs of the form RDD[(String, mutable.HashSet[String])]:
For example:
rdd1: 332101231222, "320758, 320762, 320760, 320759, 320757, 320761"
rdd2: 332101231222, "220758, 220762, 220760, 220759, 220757, 220761"
I want to combine rdd1 and rdd2 based on common keys, so the output should look like:
332101231222 320758, 320762, 320760, 320759, 320757, 320761 220758, 220762, 220760, 220759, 220757, 220761
Here is my code:
def cogroupTest(rdd1: RDD[(String, mutable.HashSet[String])], rdd2: RDD[(String, mutable.HashSet[String])]): Unit = {
  val prods_per_user_co_grouped = rdd1.cogroup(rdd2)
  prods_per_user_co_grouped.map { case (key: String, (value1: mutable.HashSet[String], value2: mutable.HashSet[String])) =>
    val combinedhs = value1 ++ value2
    val sstr = combinedhs.mkString("\t")
    val keypadded = key + "\t"
    s"$keypadded$sstr"
  }.saveAsTextFile("/scratch/rdds_joined/")
}
Here is the error that I get when I run my program:
scala.MatchError: (32101231222,(CompactBuffer(Set(320758, 320762, 320760, 320759, 320757, 320761)),CompactBuffer(Set(220758, 220762, 220760, 220759, 220757, 220761)))) (of class scala.Tuple2)
Any help with this will be great!
As you might guess from the name, cogroup groups observations by key. It means that in your case you get:
(String, (Iterable[mutable.HashSet[String]], Iterable[mutable.HashSet[String]]))
not
(String, (mutable.HashSet[String], mutable.HashSet[String]))
It is pretty clear when you take a look at the error you get. If you want to combine pairs you should use the join method. If not, you should adjust the pattern to match the structure you get and then use something like this:
val combinedhs = value1.reduce(_ ++ _) ++ value2.reduce(_ ++ _)
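For completeness, a hedged sketch of the adjusted cogroup version (the output format mirrors the original code; the Iterables are flattened instead of reduced so keys present on only one side don't fail):

import scala.collection.mutable
import org.apache.spark.rdd.RDD

def cogroupTest(rdd1: RDD[(String, mutable.HashSet[String])],
                rdd2: RDD[(String, mutable.HashSet[String])]): Unit = {
  rdd1.cogroup(rdd2).map { case (key, (values1, values2)) =>
    // Each side is an Iterable of HashSets; flatten both and merge them.
    val combinedhs = values1.flatten.toSet ++ values2.flatten.toSet
    key + "\t" + combinedhs.mkString("\t")
  }.saveAsTextFile("/scratch/rdds_joined/")
}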

spark join operation based on two columns

I'm trying to join two datasets based on two columns. It works when I join on a single column but fails with the error below:
:29: error: value join is not a member of org.apache.spark.rdd.RDD[(String, String, (String, String, String, String, Double))]
val finalFact = fact.join(dimensionWithSK).map { case(nk1,nk2, ((parts1,parts2,parts3,parts4,amount), (sk, prop1,prop2,prop3,prop4))) => (sk,amount) }
Code :
import org.apache.spark.rdd.RDD

def zipWithIndex[T](rdd: RDD[T]) = {
  val partitionSizes = rdd.mapPartitions(p => Iterator(p.length)).collect
  val ranges = partitionSizes.foldLeft(List((0, 0))) { case (accList, count) =>
    val start = accList.head._2
    val end = start + count
    (start, end) :: accList
  }.reverse.tail.toArray
  rdd.mapPartitionsWithIndex((index, partition) => {
    val start = ranges(index)._1
    val end = ranges(index)._2
    val indexes = Iterator.range(start, end)
    partition.zip(indexes)
  })
}
val dimension = sc.
  textFile("dimension.txt").
  map { line =>
    val parts = line.split("\t")
    (parts(0), parts(1), parts(2), parts(3), parts(4), parts(5))
  }

val dimensionWithSK =
  zipWithIndex(dimension).map { case ((nk1, nk2, prop3, prop4, prop5, prop6), idx) =>
    (nk1, nk2, (prop3, prop4, prop5, prop6, idx + nextSurrogateKey))
  }

val fact = sc.
  textFile("fact.txt").
  map { line =>
    val parts = line.split("\t")
    // we need to output (Naturalkey, (FactId, Amount)) in
    // order to be able to join with the dimension data.
    (parts(0), parts(1), (parts(2), parts(3), parts(4), parts(5), parts(6).toDouble))
  }

val finalFact = fact.join(dimensionWithSK).map { case (nk1, nk2, ((parts1, parts2, parts3, parts4, amount), (sk, prop1, prop2, prop3, prop4))) => (sk, amount) }
Could someone please help here?
Thanks
Sridhar
If you look at the signature of join, it works on an RDD of pairs:
def join[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (V, W))]
You have a triple. I guess you're trying to join on the first 2 elements of the tuple, so you need to map your triple to a pair, where the first element of the pair is a pair containing the first two elements of the triple, e.g. for any types V1 and V2:
val left: RDD[(String, String, V1)] = ???  // some rdd
val right: RDD[(String, String, V2)] = ??? // some rdd

left.map {
  case (key1, key2, value) => ((key1, key2), value)
}.join(
  right.map {
    case (key1, key2, value) => ((key1, key2), value)
  })
This will give you an RDD of the form RDD[((String, String), (V1, V2))].
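Applied to the code in the question, a hedged sketch (note that in dimensionWithSK above the surrogate key is the last element of the value tuple, so the pattern below reflects that):

// Re-key both RDDs on the pair of natural keys, then join on that pair.
val factByKey = fact.map {
  case (nk1, nk2, measures) => ((nk1, nk2), measures)
}
val dimByKey = dimensionWithSK.map {
  case (nk1, nk2, dims) => ((nk1, nk2), dims)
}

// After the join each value is (fact measures, dimension attributes).
val finalFact = factByKey.join(dimByKey).map {
  case (_, ((part1, part2, part3, part4, amount), (prop3, prop4, prop5, prop6, sk))) =>
    (sk, amount)
}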
An alternative is the DataFrame API, whose join takes a Seq of column names and a join type, so you can join on several columns directly (the RDDs need to be converted to DataFrames first).
df1 schema:
field1, field2, field3, fieldX, .....
df2 schema:
field1, field2, field3, fieldY, .....
val joinResult = df1.join(df2,
  Seq("field1", "field2", "field3"), "outer")
joinResult schema:
field1, field2, field3, fieldX, fieldY, ......
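A minimal, self-contained sketch of that approach, assuming a Spark version with SparkSession (the field names and sample rows here are placeholders, not from the question):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("two-column-join").getOrCreate()
import spark.implicits._

// Placeholder data with the schemas sketched above.
val df1 = Seq(("k1", "k2", "k3", 10.0)).toDF("field1", "field2", "field3", "fieldX")
val df2 = Seq(("k1", "k2", "k3", "dim")).toDF("field1", "field2", "field3", "fieldY")

// Join on three columns at once; "outer" keeps rows unmatched on either side.
val joinResult = df1.join(df2, Seq("field1", "field2", "field3"), "outer")
// joinResult columns: field1, field2, field3, fieldX, fieldY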
val emp = sc.
  textFile("emp.txt").
  map { line =>
    val parts = line.split("\t")
    // key on the two natural-key columns so we can join on both at once
    ((parts(0), parts(2)), parts(1))
  }

val emp_new = sc.
  textFile("emp_new.txt").
  map { line =>
    val parts = line.split("\t")
    ((parts(0), parts(2)), parts(1))
  }

val finalemp =
  emp_new.join(emp).
    map { case ((nk1, nk2), (parts1, val1)) => (nk1, parts1, val1) }

Scala/Slick plain SQL: retrieve result as a map

I have a simple method to retrieve a user from a DB with Slick's plain SQL API:
object Data {
  implicit val getListStringResult = GetResult[List[String]](
    prs => (1 to prs.numColumns).map(_ => prs.nextString).toList
  )

  def getUser(id: Int): Option[List[String]] = DB.withSession {
    sql"""SELECT * FROM "user" WHERE "id" = $id""".as[List[String]].firstOption
  }
}
The result is a List[String], but I would like it to be something like Map[String, String]: a map of column name/value pairs. Is this possible? If so, how?
My stack is Play Framework 2.2.1, Slick 1.0.1, Scala 2.10.3, Java 8 64bit
import scala.slick.jdbc.meta._

val columns = MTable.getTables(None, None, None, None)
  .list.filter(_.name.name == "USER") // <- upper case table name
  .head.getColumns.list.map(_.column)

val user = sql"""SELECT * FROM "user" WHERE "id" = $id""".as[List[String]].firstOption
  .map(columns.zip(_).toMap)
It's possible to do this without querying the table metadata as follows:
implicit val resultAsStringMap = GetResult[Map[String, String]](prs =>
  (1 to prs.numColumns).map(_ =>
    prs.rs.getMetaData.getColumnName(prs.currentPos + 1) -> prs.nextString
  ).toMap
)
It's also possible to build a Map[String,Any] in the same way but that's obviously more complicated.
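With that implicit in scope, the original getUser can return the map directly; a minimal sketch, assuming the same Play/Slick setup as in the question (getUserAsMap is just an illustrative name):

// Reuses the question's DB.withSession and "user" table; resultAsStringMap
// above must be in scope for .as[Map[String, String]] to resolve.
def getUserAsMap(id: Int): Option[Map[String, String]] = DB.withSession {
  sql"""SELECT * FROM "user" WHERE "id" = $id""".as[Map[String, String]].firstOption
}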