I am new to Spark-Scala development and trying to get my hands dirty, so please bear with me if you find the question stupid.
Sample dataset
[29430500,1104296400000,1938,F,11,2131,
MutableList([123291654450,1440129600000,100121,0,1440734400000],[234564535,2345129600000,345121,1,14567734400000])
]
As you can see, the last field is an array, and I want the output to look like this:
Row 1:
[29430500,1104296400000,1938,F,11,2131,
123291654450,1440129600000,100121,0,1440734400000]
Row 2:
[29430500,1104296400000,1938,F,11,2131,
234564535,2345129600000,345121,1,14567734400000]
I think I have to use flatMap, but for some reason the following code gives an error:
def getMasterRdd(sc: SparkContext, hiveContext: HiveContext, outputDatabase:String, jobId:String,MasterTableName:String, dataSourceType: DataSourceType, startDate:Long, endDate:Long):RDD[Row]={}
val Rdd1= ClassName.getMasterRdd(sc, hiveContext, "xyz", "test123", "xyz.abc", DataSourceType.SS, 1435723200000L, 1451538000000L)
Rdd1 holds the sample dataset.
val mapRdd1 = Rdd1.map(row => row.get(6))
val flatmapRdd1 = mapRdd1.flatMap(_.split(","))
When I hover over _.split(",") the IDE shows the following message:
Type mismatch, expected:(Any) => TraversableOnce[NotInferedU], actual: (Any) =>Any
I think there is a better way to structure this (maybe using tuples instead of Lists) but anyway this works for me:
scala> val myRDD = sc.parallelize(Seq(Seq(29430500L,1104296400000L,1938L,"F",11L,2131L,Seq(Seq(123291654450L,1440129600000L,100121L,0L,1440734400000L),Seq(234564535L,2345129600000L,345121L,1L,14567734400000L)))))
myRDD: org.apache.spark.rdd.RDD[Seq[Any]] = ParallelCollectionRDD[11] at parallelize at <console>:27
scala> :pa
// Entering paste mode (ctrl-D to finish)
val myRDD2 = myRDD.flatMap(row => {
  val (beginning, end) = (row.dropRight(1), row.last)
  end.asInstanceOf[List[List[Any]]].map(beginning ++ _)
})
// Exiting paste mode, now interpreting.
myRDD2: org.apache.spark.rdd.RDD[Seq[Any]] = MapPartitionsRDD[10] at flatMap at <console>:29
scala> myRDD2.foreach{println}
List(29430500, 1104296400000, 1938, F, 11, 2131, 123291654450, 1440129600000, 100121, 0, 1440734400000)
List(29430500, 1104296400000, 1938, F, 11, 2131, 234564535, 2345129600000, 345121, 1, 14567734400000)
Use:
rdd.flatMap(row => row.getSeq[String](6).map(_.split(",")))
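For the original Row-based RDD, a fuller sketch could look like the one below. It assumes the 7th field (index 6) holds a Seq[Row] of nested records; the exact types depend on your schema, so treat this as an outline rather than a drop-in fix:
import org.apache.spark.sql.Row

// Assumption: row.get(6) is a Seq[Row] of nested records.
// For each nested record, prepend the first six fields and emit one output Row.
val flattenedRdd = Rdd1.flatMap { row =>
  val prefix = row.toSeq.take(6)            // the six leading fields
  row.getSeq[Row](6).map { nested =>        // one output row per nested record
    Row.fromSeq(prefix ++ nested.toSeq)
  }
}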
Related
Why does pattern matching in Spark not work the same as in Scala? See the example below... function f() tries to pattern match on the class, which works in the Scala REPL but fails in Spark, returning "???" for everything. f2() is a workaround that gets the desired result in Spark using isInstanceOf, but I understand that to be bad form in Scala.
Any help on pattern matching the correct way in this scenario in Spark would be greatly appreciated.
abstract class a extends Serializable {val a: Int}
case class b(a: Int) extends a
case class bNull(a: Int=0) extends a
val x: List[a] = List(b(0), b(1), bNull())
val xRdd = sc.parallelize(x)
Attempt at pattern matching, which works in the Scala REPL but fails in Spark:
def f(x: a) = x match {
  case b(n) => "b"
  case bNull(n) => "bnull"
  case _ => "???"
}
Workaround that works in Spark, but is bad form (I think):
def f2(x: a) = {
  if (x.isInstanceOf[b]) {
    "b"
  } else if (x.isInstanceOf[bNull]) {
    "bnull"
  } else {
    "???"
  }
}
View the results:
xRdd.map(f).collect // does not work in Spark
// result: Array("???", "???", "???")
xRdd.map(f2).collect // works in Spark
// result: Array("b", "b", "bnull")
x.map(f(_)) // works in Scala REPL
// result: List("b", "b", "bnull")
Versions used...
Spark results run in spark-shell (Spark 1.6 on AWS EMR-4.3)
Scala REPL in SBT 0.13.9 (Scala 2.10.5)
This is a known issue with the Spark REPL. You can find more details in SPARK-2620. It affects multiple operations in the Spark REPL, including most transformations on PairwiseRDDs. For example:
case class Foo(x: Int)
val foos = Seq(Foo(1), Foo(1), Foo(2), Foo(2))
foos.distinct.size
// Int = 2
val foosRdd = sc.parallelize(foos, 4)
foosRdd.distinct.count
// Long = 4
foosRdd.map((_, 1)).reduceByKey(_ + _).collect
// Array[(Foo, Int)] = Array((Foo(1),1), (Foo(1),1), (Foo(2),1), (Foo(2),1))
foosRdd.first == foos.head
// Boolean = false
Foo.unapply(foosRdd.first) == Foo.unapply(foos.head)
// Boolean = true
What makes it even worse is that the results depend on the data distribution:
sc.parallelize(foos, 1).distinct.count
// Long = 2
sc.parallelize(foos, 1).map((_, 1)).reduceByKey(_ + _).collect
// Array[(Foo, Int)] = Array((Foo(2),2), (Foo(1),2))
The simplest thing you can do is to define and package the required case classes outside the REPL. Any code submitted directly using spark-submit should work as well.
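A minimal sketch of that approach, assuming the case class lives in its own compiled jar (the file, package, and jar names here are made up for illustration):
// Foo.scala, compiled into foo.jar and added with: spark-shell --jars foo.jar
// (an application submitted with spark-submit works the same way)
package myapp

case class Foo(x: Int)
Once the shell is started with --jars, import myapp.Foo behaves like any other compiled class, so distinct, reduceByKey, and pattern matching work as expected.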
In Scala 2.11+ you can create a package directly in the REPL with :paste -raw:
scala> :paste -raw
// Entering paste mode (ctrl-D to finish)
package bar
case class Bar(x: Int)
// Exiting paste mode, now interpreting.
scala> import bar.Bar
import bar.Bar
scala> sc.parallelize(Seq(Bar(1), Bar(1), Bar(2), Bar(2))).distinct.collect
res1: Array[bar.Bar] = Array(Bar(1), Bar(2))
I have a variable "myrdd" that is an Avro file with 10 records, loaded through hadoopFile.
When I do
myrdd.first._1.datum.getName()
I can get the name. Problem is, I have 10 records in "myrdd". When I do:
myrdd.map(x => {println(x._1.datum.getName())})
it does not work and prints out a weird object a single time. How can I iterate over all records?
Here is a log from a session using spark-shell with a similar scenario.
Given
scala> persons
res8: org.apache.spark.sql.DataFrame = [name: string, age: int]
scala> persons.first
res7: org.apache.spark.sql.Row = [Justin,19]
Your issue looks like this:
scala> persons.map(t => println(t))
res4: org.apache.spark.rdd.RDD[Unit] = MapPartitionsRDD[10]
so map just returns another RDD (the function is not applied immediately; it is applied "lazily" when you actually iterate over the result).
So when you materialize (using collect()) you get a "normal" collection:
scala> persons.collect()
res11: Array[org.apache.spark.sql.Row] = Array([Justin,19])
over which you can map. Note that in this case you have a side effect in the closure passed to map (the println); the result of println is Unit:
scala> persons.collect().map(t => println(t))
[Justin,19]
res5: Array[Unit] = Array(())
Same result if collect is applied at the end:
scala> persons.map(t => println(t)).collect()
[Justin,19]
res19: Array[Unit] = Array(())
But if you just want to print the rows, you can simplify it to using foreach:
scala> persons.foreach(t => println(t))
[Justin,19]
As @RohanAletty has pointed out in a comment, this works for a local Spark job. If the job runs in a cluster, collect is required as well:
persons.collect().foreach(t => println(t))
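If the dataset is large, collecting everything just to print it can overwhelm the driver. A common alternative (not part of the original answer, just a standard idiom) is to pull only a sample:
// Ship only the first 20 rows to the driver and print them there.
persons.take(20).foreach(println)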
Notes
The same behaviour can be observed in the Iterator class (see the sketch after these notes).
The output of the session above has been reordered
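For example, a quick illustration of the same laziness with a plain Scala Iterator (nothing Spark-specific here):
val it = Iterator("Justin", "Michael").map(println)
// Nothing is printed yet: map on an Iterator is lazy.
it.toList
// Justin
// Michael
// The println only runs once the iterator is actually consumed.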
Update
As for filtering: the placement of collect is bad if you apply filters after collect that could have been applied before it.
For example these expressions give the same result:
scala> persons.filter("age > 20").collect().foreach(println)
[Michael,29]
[Andy,30]
scala> persons.collect().filter(r => r.getInt(1) >= 20).foreach(println)
[Michael,29]
[Andy,30]
but the second case is worse, because the filter could have been applied before collect.
The same applies to any type of aggregation as well.
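The same point for an aggregation, as a small sketch using the same persons DataFrame:
// Good: the counting happens in the cluster; only a single number
// travels back to the driver.
persons.filter("age > 20").count()

// Worse: every row is shipped to the driver first and counted locally.
persons.collect().count(r => r.getInt(1) > 20)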
I have the following class in the Scala shell in Spark.
class StringSplit(val query: String) {
  def getStrSplit(rdd: RDD[String]): RDD[String] = {
    rdd.map(x => x.split(query))
  }
}
I am trying to call the method in this class like this:
val inputRDD=sc.parallelize(List("one","two","three"))
val strSplit=new StringSplit(",")
strSplit.getStrSplit(inputRDD)
This step fails with the error: getStrSplit is not a member of StringSplit.
Can you please let me know what is wrong with this?
It seems like a reasonable thing to do, but...
the result type of getStrSplit is wrong, because .split returns Array[String], not String
parallelizing List("one","two","three") results in "one", "two" and "three" being stored, and there are no strings that need a comma split.
Another way:
val input = sc.parallelize(List("1,2,3,4","5,6,7,8"))
input: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[16] at parallelize at <console>
The test input here is a list of two strings that each require some comma splitting to get to the data.
Parsing the input by splitting can be as easy as:
val parsedInput = input.map(_.split(","))
parsedInput: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[19] at map at <console>:25
Here _.split(",") is an anonymous function with a single parameter (the underscore), where Scala infers the types from the surrounding calls rather than requiring them to be declared explicitly.
Notice the type is RDD[Array[String]] not RDD[String]
We could extract the 3rd element of each line with
parsedInput.map(_(2)).collect()
res27: Array[String] = Array(3, 7)
So how about the original question, doing the same operation in a class? I tried:
class StringSplit(query: String) {
  def get(rdd: RDD[String]) = rdd.map(_.split(query))
}
val ss = new StringSplit(",")
ss.get(input)
---> org.apache.spark.SparkException: Task not serializable
I'm guessing that occurs because the class is not serialized to each worker; Spark tries to send the split function, but it has a parameter (the query field) that cannot be sent along with it.
scala> class commaSplitter {
  def get(rdd: RDD[String]) = rdd.map(_.split(","))
}
defined class commaSplitter
scala> val cs = new commaSplitter;
cs: commaSplitter = $iwC$$iwC$commaSplitter@262f1580
scala> cs.get(input);
res29: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[23] at map at <console>:10
scala> cs.get(input).collect()
res30: Array[Array[String]] = Array(Array(1, 2, 3, 4), Array(5, 6, 7, 8))
This parameter-free class works.
EDIT
You can tell Scala you want your class to be serializable by extending Serializable, like so:
scala> class stringSplitter(s: String) extends Serializable {
  def get(rdd: RDD[String]) = rdd.map(_.split(s))
}
defined class stringSplitter
scala> val ss = new stringSplitter(",");
ss: stringSplitter = $iwC$$iwC$stringSplitter@2a33abcd
scala> ss.get(input)
res33: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[25] at map at <console>:10
scala> ss.get(input).collect()
res34: Array[Array[String]] = Array(Array(1, 2, 3, 4), Array(5, 6, 7, 8))
and this works.
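Another common way to avoid serializing the whole instance (not from the original answer, just a standard Spark idiom) is to copy the constructor parameter into a local val, so the closure captures only that value:
import org.apache.spark.rdd.RDD

// Hypothetical variant: only the local String is captured by the closure,
// not the enclosing (possibly non-serializable) instance.
class StringSplitLocal(s: String) {
  def get(rdd: RDD[String]): RDD[Array[String]] = {
    val sep = s
    rdd.map(_.split(sep))
  }
}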
I am getting an IndexOutOfBoundsException while doing the following operations in spark-shell:
val input = sc.textFile("demo.txt")
input.collect
Both of the above work fine.
val out = input.map(_.split(",")).map(r => r(1))
The above line throws the exception.
demo.txt looks like this (header: Name,Gender,Age):
Danial,,14
,Male,18
Hema,,
With Pig, the same file works without any issue!
You can try this out yourself: just start the Scala console and enter your sample lines.
scala> "Danial,,14".split(",")
res0: Array[String] = Array(Danial, "", 14)
scala> ",Male,18".split(",")
res1: Array[String] = Array("", Male, 18)
scala> "Hema,,".split(",")
res2: Array[String] = Array(Hema)
So oops, the last line doesn't work. Pass the number of expected columns to split:
scala> "Hema,,".split(",", 3)
res3: Array[String] = Array(Hema, "", "")
or even better, write a real parser. String.split isn't suitable for production code.
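If you want to stay with split for a quick fix, here is a sketch of the same pipeline using a split limit of -1, which tells Java's split to keep trailing empty fields (column index 1 is the Gender field from the sample):
// Keep trailing empty strings so every row has the same number of columns.
val out = input
  .map(_.split(",", -1))
  .map(r => r(1))   // Gender column; empty string when the field is missing

out.collect()
// e.g. Array("", "Male", "") for the three sample lines above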