Read certain folders based on a variable - scala

I have below folder structure:
/mnt/tmp/fldr1
/mnt/tmp/fldr2
/mnt/tmp/fldr3
/mnt/tmp/fldr4
/mnt/tmp/fldr5
/mnt/tmp/fldr6
Based on a variable, I would read the parquet files from certain folders, e.g.:
if trail = 2, I would read only fldr1 and fldr2
if trail = 4, I would read only fldr1, fldr2, fldr3 and fldr4
if trail = 6, I would read all the folders
I have the below code.
scala> val cnt = 2
val cnt: Int = 2
scala> val xs = List(1, cnt)
val xs: List[Int] = List(1, 2)
scala> val ys = List("/mnt/tmp1", "/mnt/tmp2")
val ys: List[String] = List(/mnt/tmp1, /mnt/tmp2)
scala> val cross = xs.flatMap(x => ys.map(y => (y + "/trial=" + x.toString)))
val cross: List[String] = List(/mnt/tmp1/trial=1, /mnt/tmp2/trial=1, /mnt/tmp1/trial=2, /mnt/tmp2/trial=2)
I would then loop through the paths and read the parquet files.
Is there an easier way to do this?

A basic skeleton can be:
val dirsstr = "/mnt/tmp/fldr1,/mnt/tmp/fldr2,/mnt/tmp/fldr3,/mnt/tmp/fldr4,/mnt/tmp/fldr5,/mnt/tmp/fldr6"
val dirs = dirsstr.split(",")
// stand-in for your input: a random value in 1..6
// (nextInt(6) alone yields 0..5, so case 6 would otherwise never match)
val trail = scala.util.Random.nextInt(6) + 1
trail match {
  case 2 =>
    readFiles(dirs(0))
    readFiles(dirs(1))
  case 4 => for (i <- 0 to 3) readFiles(dirs(i))
  case 6 => dirs.foreach(readFiles)
  case _ => println("Out of scope of your question :)")
}
where readFiles depends on your Spark/parquet context. Sure, there can be many other (maybe better) ways to do it.
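Since DataFrameReader.parquet accepts a varargs list of paths, an arguably simpler route is to build the paths from the variable and read them in one call; a minimal sketch, assuming a SparkSession named spark and the folder layout above:
// build the first `trail` folder paths, then read them all at once
val trail = 4
val paths = (1 to trail).map(i => s"/mnt/tmp/fldr$i")
val df = spark.read.parquet(paths: _*)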

Related

Concatenate String to each element of a List in a Spark dataframe with Scala

I have two columns in a Spark dataframe: one is a String, and the other is a List of Strings. How do I create a new column that is the concatenation of the String in column 1 with each element of the list in column 2, resulting in another list in column 3?
For example, if column 1 is "a" and column 2 is ["A","B"], I'd like the output in column 3 of the dataframe to be ["aA","aB"].
So far, I have:
val multiplier = (x1: String, x2: Seq[String]) => {x1+x2}
val multiplierUDF = udf(multiplier)
val df2 = df1
.withColumn("col3", multiplierUDF(df1("col1"),df1("col2")))
which gives aWrappedArray(A,B) instead of the desired list.
I suggest you try your udf functions outside of Spark, and get them working on local variables first. If you do:
val multiplier = (x1: String, x2: Seq[String]) => {x1+x2}
multiplier("a", Seq("A", "B"))
// output
res1: String = aList(A, B)
You'll see multiplier doesn't do what you want.
I think you're looking for:
val multiplier = (x1: String, x2: Seq[String]) => x2.map(x1+_)
multiplier("a", Seq("A", "B"))
//output
res2: Seq[String] = List(aA, aB)
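Wired back into the dataframe, the fixed function becomes the udf directly; a minimal sketch, assuming the df1/col1/col2 setup from the question:
import org.apache.spark.sql.functions.udf
// maps each element of col2, instead of concatenating col2's toString
val multiplierUDF = udf((x1: String, x2: Seq[String]) => x2.map(x1 + _))
val df2 = df1.withColumn("col3", multiplierUDF(df1("col1"), df1("col2")))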
I think you should redefine your UDF to something similar to my function append:
val a = Seq("A", "B")
val p = "a"
def append(init: String, tails: Seq[String]) = tails.map(x => init + x)
append(p, a)
//res1: Seq[String] = List(aA, aB)

SubtractByKey and keep rejected values

I was playing around with Spark and I am stuck on something that seems foolish.
Let's say we have two RDD:
rdd1 = {(1, 2), (3, 4), (3, 6)}
rdd2 = {(3, 9)}
if I do rdd1.subtractByKey(rdd2), I will get {(1, 2)}, which is perfectly fine. But I also want to save the rejected values {(3,4),(3,6)} to another RDD. Is there a prebuilt function in Spark or an elegant way to do this?
Please keep in mind that I am new to Spark; any help will be appreciated, thanks.
As Rohan suggests, there is (to the best of my knowledge) no standard API call to do this. What you want to do can be expressed as union minus intersection.
Here is how you can do this in Spark:
val r1 = sc.parallelize(Seq((1,2), (3,4), (3,6)))
val r2 = sc.parallelize(Seq((3,9)))
val intersection = r1.map(_._1).intersection(r2.map(_._1))
val union = r1.map(_._1).union(r2.map(_._1))
val diff = union.subtract(intersection)
diff.collect()
> Array[Int] = Array(1)
To get the actual pairs:
val d = diff.collect()
r1.union(r2).filter(x => d.contains(x._1)).collect
I think this is slightly more elegant:
val r1 = sc.parallelize(Seq((1,2), (3,4), (3,6)))
val r2 = sc.parallelize(Seq((3,9)))
val r3 = r1.leftOuterJoin(r2)
val subtracted = r3.filter(_._2._2.isEmpty).map(x=>(x._1, x._2._1))
val discarded = r3.filter(_._2._2.nonEmpty).map(x=>(x._1, x._2._1))
//subtracted: (1,2)
//discarded: (3,4)(3,6)
The insight is noticing that leftOuterJoin produces both the discarded records (those with a matching key in r2) and the remaining ones (no matching key) in one go.
It's a pity Spark doesn't have RDD.partition (in the Scala-collections sense of splitting a collection into two depending on a predicate), or we could calculate subtracted and discarded in one pass.
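Short of a built-in partition, you can at least cache the joined RDD so that the two filters reuse a single join computation; a small tweak to the snippet above:
// both filters then reuse the cached join result instead of recomputing it
val r3 = r1.leftOuterJoin(r2).cache()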
You can try
val rdd3 = rdd1.subtractByKey(rdd2)
val rdd4 = rdd1.subtractByKey(rdd3)
But you won't be keeping the values, just running another subtraction.
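On the question's example data, a quick check of what the two RDDs contain (ordering of collect results may vary):
rdd3.collect()  // Array((1,2))
rdd4.collect()  // Array((3,4), (3,6))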
Unfortunately, I don't think there's an easy way to keep the rejected values using subtractByKey(). I think one way you get your desired result is through cogrouping and filtering. Something like:
val cogrouped = rdd1.cogroup(rdd2)  // optionally: rdd1.cogroup(rdd2, numPartitions)
def flatFunc[A, B](key: A, values: Iterable[B]) : Iterable[(A, B)] = for {value <- values} yield (key, value)
val res1 = cogrouped.filter(_._2._2.isEmpty).flatMap { case (key, values) => flatFunc(key, values._1) }
val res2 = cogrouped.filter(_._2._2.nonEmpty).flatMap { case (key, values) => flatFunc(key, values._1) }
You might be able to borrow the work done here to make the last two lines look more elegant.
When I run this on your example, I see:
scala> val rdd1 = sc.parallelize(Array((1, 2), (3, 4), (3, 6)))
scala> val rdd2 = sc.parallelize(Array((3, 9)))
scala> val cogrouped = rdd1.cogroup(rdd2)
scala> def flatFunc[A, B](key: A, values: Iterable[B]) : Iterable[(A, B)] = for {value <- values} yield (key, value)
scala> val res1 = cogrouped.filter(_._2._2.isEmpty).flatMap { case (key, values) => flatFunc(key, values._1) }
scala> val res2 = cogrouped.filter(_._2._2.nonEmpty).flatMap { case (key, values) => flatFunc(key, values._1) }
scala> res1.collect()
...
res7: Array[(Int, Int)] = Array((1,2))
scala> res2.collect()
...
res8: Array[(Int, Int)] = Array((3,4), (3,6))
First use subtractByKey() and then subtract:
val rdd1 = spark.sparkContext.parallelize(Seq((1,2), (3,4), (3,5)))
val rdd2 = spark.sparkContext.parallelize(Seq((3,10)))
val result = rdd1.subtractByKey(rdd2)
result.foreach(print) // (1,2)
val rejected = rdd1.subtract(result)
rejected.foreach(print) // (3,5)(3,4)

Does Scala have a statement equivalent to ML's "as" construct?

In ML, one can assign a name to a matched pattern (or to part of one):
fun findPair n nil = NONE
  | findPair n ((head as (n1, _)) :: rest) =
      if n = n1 then (SOME head) else (findPair n rest)
In this code, I defined an alias for the first pair of the list and matched the contents of the pair. Is there an equivalent construct in Scala?
You can do variable binding with the @ symbol, e.g.:
scala> val wholeList @ List(x, _*) = List(1,2,3)
wholeList: List[Int] = List(1, 2, 3)
x: Int = 1
I'm sure you'll get a more complete answer later as I'm not sure how to write it recursively like your example, but maybe this variation would work for you:
scala> val pairs = List((1, "a"), (2, "b"), (3, "c"))
pairs: List[(Int, String)] = List((1,a), (2,b), (3,c))
scala> val n = 2
n: Int = 2
scala> pairs find {e => e._1 == n}
res0: Option[(Int, String)] = Some((2,b))
OK, next attempt at a direct translation. How about this?
scala> def findPair[A, B](n: A, p: List[Tuple2[A, B]]): Option[Tuple2[A, B]] = p match {
| case Nil => None
| case head::rest if head._1 == n => Some(head)
| case _::rest => findPair(n, rest)
| }
findPair: [A, B](n: A, p: List[(A, B)])Option[(A, B)]
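To mirror ML's as binding more directly, the @ binder from the first answer drops into the same recursive function, naming the whole pair while still matching inside it; a sketch:
def findPair[A, B](n: A, p: List[(A, B)]): Option[(A, B)] = p match {
  case Nil => None
  case (head @ (n1, _)) :: rest =>
    if (n == n1) Some(head) else findPair(n, rest)  // head is the whole matched pair
}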

Make a recursive Stream from just 1 element?

I've made a Stream for Gray Codes using recursion as follows:
val gray: Stream[List[String]] =
  List("") #:: List("0", "1") #:: gray.tail.map(gnext)
where
val gnext = (i:List[String]) => i.map {"0" + _} ::: i.reverse.map {"1" + _}
so that, for example
scala> gray(2)
res17: List[String] = List(00, 01, 11, 10)
I don't really need the List("0", "1") in the definition, because it can be produced from element 0:
scala> gnext(List(""))
res18: List[java.lang.String] = List(0, 1)
So is there a way / pattern that can be used to produce a Stream from just the first element?
val gray: Stream[List[String]] = List("") #:: gray.map {gnext}
Or, alternatively,
val gray = Stream.iterate(List(""))(gnext)
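Both definitions start from List("") and repeatedly apply gnext, so they unfold identically; a quick sanity check (note that Stream is deprecated in favour of LazyList on Scala 2.13+):
gray.take(3).toList
// element 0: List(""), element 1: List(0, 1), element 2: List(00, 01, 11, 10)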

Simple question about tuples in Scala

I'm new to Scala, and I'm currently learning about tuples.
I can define a tuple as follows, and get the items:
val tuple = ("Mike", 40, "New York")
println("Name: " + tuple._1)
println("Age: " + tuple._2)
println("City: " + tuple._3)
My questions are:
1. How do I get the length of a tuple?
2. Is a tuple mutable? Can I modify its items?
3. Is there any other useful operation we can do on a tuple?
Thanks in advance!
1] tuple.productArity
2] No.
3] Some interesting operations you can perform on tuples: (a short REPL session)
scala> val x = (3, "hello")
x: (Int, java.lang.String) = (3,hello)
scala> x.swap
res0: (java.lang.String, Int) = (hello,3)
scala> x.toString
res1: java.lang.String = (3,hello)
scala> val y = (3, "hello")
y: (Int, java.lang.String) = (3,hello)
scala> x == y
res2: Boolean = true
scala> x.productPrefix
res3: java.lang.String = Tuple2
scala> val xi = x.productIterator
xi: Iterator[Any] = non-empty iterator
scala> while(xi.hasNext) println(xi.next)
3
hello
See scaladocs of Tuple2, Tuple3 etc for more.
One thing that you can also do with a tuple is to extract the content using the match expression:
def tupleview(tup: Any): Unit = {
  tup match {
    case (a: String, b: String) =>
      println("A pair of strings: " + a + " " + b)
    case (a: Int, b: Int, c: Int) =>
      println("A triplet of ints: " + a + " " + b + " " + c)
    case _ => println("Unknown")
  }
}
tupleview(("Hello", "Freewind"))
tupleview((1, 2, 3))
Gives:
A pair of strings: Hello Freewind
A triplet of ints: 1 2 3
Tuples are immutable, but, like all case classes, they have a copy method that can be used to create a new tuple with a few changed elements:
scala> (1, false, "two")
res0: (Int, Boolean, java.lang.String) = (1,false,two)
scala> res0.copy(_2 = true)
res1: (Int, Boolean, java.lang.String) = (1,true,two)
scala> res1.copy(_1 = 1f)
res2: (Float, Boolean, java.lang.String) = (1.0,true,two)
Concerning question 3:
A useful thing you can do with Tuples is to store parameter lists for functions:
def f(i:Int, s:String, c:Char) = s * i + c
List((3, "cha", '!'), (2, "bora", '.')).foreach(t => println((f _).tupled(t)))
//--> chachacha!
//--> borabora.
[Edit] As Randall remarks, you'd better use something like this in "real life":
def f(i:Int, s:String, c:Char) = s * i + c
val g = (f _).tupled
List((3, "cha", '!'), (2, "bora", '.')).foreach(t => println(g(t)))
In order to extract the values from tuples in the middle of a "collection transformation chain" you can write:
val words = List((3, "cha"),(2, "bora")).map{ case(i,s) => s * i }
Note the curly braces around the case, parentheses won't work.
Another nice trick for question 3) (as 1 and 2 are already answered by others):
val tuple = ("Mike", 40, "New York")
tuple match {
  case (name, age, city) =>
    println("Name: " + name)
    println("Age: " + age)
    println("City: " + city)
}
Edit: in fact it's rather a feature of pattern matching and case classes; a tuple is just a simple example of a case class...
You know the size of a tuple; it's part of its type. For example, if you define a function def f(tup: (Int, Int)), you know the length of tup is 2, because values of type (Int, Int) (aka Tuple2[Int, Int]) always have a length of 2.
No.
Not really. Tuples are useful for storing a fixed number of items of possibly different types and passing them around, putting them into data structures, etc. There's really not much you can do with them other than creating tuples and getting stuff out of tuples.
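For instance, a minimal illustration of the pass-around use case (hypothetical names):
// a (name, age) record passed around without defining a class
val person: (String, Int) = ("Mike", 40)
def describe(p: (String, Int)): String = p._1 + " is " + p._2
describe(person)  // "Mike is 40"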
1 and 2 have already been answered.
A very useful thing that you can use tuples for is to return more than one value from a method or function. Simple example:
// Get the min and max of two integers
def minmax(a: Int, b: Int): (Int, Int) = if (a < b) (a, b) else (b, a)
// Call it and assign the result to two variables like this:
val (x, y) = minmax(10, 3) // x = 3, y = 10
Using shapeless, you easily get a lot of useful methods that are usually available only on collections:
import shapeless.syntax.std.tuple._
val t = ("a", 2, true, 0.0)
val first = t(0)
val second = t(1)
// etc
val head = t.head
val tail = t.tail
val init = t.init
val last = t.last
val v = (2.0, 3L)
val concat = t ++ v
val append = t :+ 2L
val prepend = 1.0 +: t
val take2 = t take 2
val drop3 = t drop 3
val reverse = t.reverse
val zip = t zip (2.0, 2, "a", false)
val (unzip, other) = zip.unzip
val list = t.toList
val array = t.toArray
val set = t.to[Set]
Everything is typed as one would expect (that is, first has type String, concat has type (String, Int, Boolean, Double, Double, Long), etc.).
The last method above (.to[Collection]) should be available in the next release (as of 2014/07/19).
You can also "update" a tuple
val a = t.updatedAt(1, 3) // gives ("a", 3, true, 0.0)
but that will return a new tuple instead of mutating the original one.