How to optimize this Scala code?

I'm learning Scala and am curious how to optimize this code. What I have is an RDD loaded in Spark from a tab-delimited dataset. I want to combine the first column with the second column, separated by a "-", and append the result as a new column at the end of each row.
For example:
column1\tcolumn2\tcolumn3
becomes
column1\tcolumn2\tcolumn3\tcolumn1-column2
val f = sc.textFile("path/to/dataset")
f.map(line =>
    if (line.split("\t").length > 1)
      line.split("\t") :+ line.split("\t")(0) + "-" + line.split("\t")(1)
    else
      Array[String]())
  .map(a => a.mkString("\t"))
  .saveAsTextFile("output/path")

Try:
f.map { line =>
  val cols = line.split("\t")
  if (cols.length > 1) line + "\t" + cols(0) + "-" + cols(1)
  else line
}
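This splits each line only once and reuses the original line instead of rebuilding it. For completeness, a minimal sketch of how it could be wired back into the original pipeline (the paths are the placeholders from the question):
val f = sc.textFile("path/to/dataset")
f.map { line =>
  val cols = line.split("\t")
  if (cols.length > 1) line + "\t" + cols(0) + "-" + cols(1)
  else line
}.saveAsTextFile("output/path")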

Related

SPARK: sum of elements with the same indexes from RDD[Array[Int]]

I have three files like:
file1: 1,2,3,4,5
6,7,8,9,10
file2: 11,12,13,14,15
16,17,18,19,20
file3: 21,22,23,24,25
26,27,28,29,30
I have to find the sum of the corresponding rows across the files:
1+2+3+4+5 + 11+12+13+14+15 + 21+22+23+24+25
6+7+8+9+10 + 16+17+18+19+20 + 26+27+28+29+30
I have written the following code in Spark/Scala to get an Array of the row sums:
val filesRDD = sc.wholeTextFiles("path to folder\\numbers\\*")
// creating RDD[Array[String]]
val linesRDD = filesRDD.map(elem => elem._2.split("\\n"))
// creating RDD[Array[Array[Int]]]
val rdd1 = linesRDD.map(line => line.map(str => str.split(",").map(_.trim.toInt)))
// creating RDD[Array[Int]]
val rdd2 = rdd1.map(elem => elem.map(e => e.sum))
rdd2.collect.foreach(elem => println(elem.mkString(",")))
the output I am getting is:
15,40
65,90
115,140
What I want is to sum 15+65+115 and 40+90+140
Any help is appreciated!
PS:
The files can have different numbers of lines, e.g. some with 3 lines and others with 4, and there can be any number of files.
I want to do this using RDDs only, not DataFrames.
You can use reduce to sum up the arrays:
val result = rdd2.reduce((x,y) => (x,y).zipped.map(_ + _))
// result: Array[Int] = Array(195, 270)
and if the files have different numbers of lines (e.g. file3 has only one line, 21,22,23,24,25):
val result = rdd2.reduce((x,y) => x.zipAll(y,0,0).map{case (a, b) => a + b})
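To see why zipAll is needed when the per-file arrays have different lengths, here is a quick standalone check (plain Scala, no Spark needed): zipAll pads the shorter side with the supplied defaults (0 here) before the element-wise sum.
Array(15, 40).zipAll(Array(115), 0, 0).map { case (a, b) => a + b }
// Array(130, 40)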

Drop any row with NULL

I have a small problem. I would like to delete any row that contains 'NULL'.
This is my input file:
matricule,dateins,cycle,specialite,bourse,sport
0000000001,1999-11-22,Master,IC,Non,Non
0000000002,2014-02-01,Null,IC,Null,Oui
0000000003,2006-09-07,Null,Null,Oui,Oui
0000000004,2008-12-11,Master,IC,Oui,Oui
0000000005,2006-06-07,Master,SI,Non,Oui
I did a lot of research and found a function called drop(any), which basically drops any row that contains a NULL value. I tried using it in the code below, but it won't work.
val x = sc.textFile("/home/amel/one")
val re = x.map(row => {
  val cols = row.split(",")
  val cycle = cols(2)
  val years = cycle match {
    case "License" => "3 years"
    case "Master" => "3 years"
    case "Ingeniorat" => "5 years"
    case "Doctorate" => "3 years"
    case _ => "other"
  }
  (cols(1).split("-")(0) + "," + years + "," + cycle + "," + cols(3), 1)
}).reduceByKey(_ + _)
re.collect.foreach(println)
This is the current result of my code:
(1999,3 years,Master,IC,57)
(2013,NULL,Doctorat,SI,44)
(2013,NULL,Licence,IC,73)
(2009,5 years,Ingeniorat,Null,58)
(2011,3 years,Master,Null,61)
(2003,5 years,Ingeniorat,Null,65)
(2019,NULL,Doctorat,SI,80)
However, I want the result to be like this:
(1999, 3 years, Master, IC)
I.e., any row that contains 'NULL' should be removed.
This is similar to, but not a duplicate of, the following question on SO: Filter spark DataFrame on string contains.
You can filter this RDD when you read it in.
val x = sc.textFile("/home/amel/one").filter(!_.toLowerCase.contains("null"))
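As a quick illustration of what the filter does, here is a standalone sketch using two rows from the sample input (the lowercase check catches "NULL" and "Null" alike):
val sample = sc.parallelize(Seq(
  "0000000002,2014-02-01,Null,IC,Null,Oui",
  "0000000004,2008-12-11,Master,IC,Oui,Oui"
))
sample.filter(!_.toLowerCase.contains("null")).collect()
// Array(0000000004,2008-12-11,Master,IC,Oui,Oui)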

How to combine the results of spark computations in the following case?

The question is to calculate the average of each of the columns for each class. The class number is given in the first column.
I am giving a part of the test file for better clarity.
2 0.819039 -0.408442 0.120827
3 -0.063763 0.060122 0.250393
4 -0.304877 0.379067 0.092391
5 -0.168923 0.044400 0.074417
1 0.053700 -0.088746 0.228501
2 0.196758 0.035607 0.008134
3 0.006971 -0.096478 0.123718
4 0.084281 0.278343 -0.350414
So the task is to calculate
1: avg(), avg(), avg()
... (and likewise for every other class label)
I am very new to Scala. After juggling with the code a lot, I came up with the following:
val inputfile = sc.textFile("testfile.txt")
val myArray = inputfile.map { line =>
  line.split(" ").toList
}
var Avgmap: Map[String, List[Double]] = Map()
var countmap: Map[String, Int] = Map()
for (a <- myArray) {
  // println("Value of a: " + a + " " + a.size)
  if (!countmap.contains(a(0))) {
    countmap += (a(0) -> 0)
    Avgmap += (a(0) -> List.fill(a.size - 1)(1.0))
  }
  var c = countmap(a(0)) + 1
  val countmap2 = countmap + (a(0) -> c)
  countmap = countmap2
  var p = List[Double]()
  for (i <- 1 to a.size - 1) {
    var temp = (Avgmap(a(0))(i - 1) * (countmap(a(0)) - 1) + a(i).toDouble) / countmap(a(0))
    // println("i: " + i + " temp: " + temp)
    var q = p :+ temp
    p = q
  }
  val Avgmap2 = Avgmap + (a(0) -> p)
  Avgmap = Avgmap2
  println("--------------------------------------------------")
  println(countmap)
  println(Avgmap)
}
When I execute this code, I seem to be getting the results in two halves of the dataset. Please help me combine them.
Edit: about the variables I am using: countmap keeps a record of classnumber -> number of vectors encountered so far. Similarly, Avgmap keeps a record of the running average of each column corresponding to the key.
First, use the DataFrame API. Second, what you want is just one row:
import org.apache.spark.sql.functions.{col, mean}
df.select(df.columns.map(c => mean(col(c))): _*).show
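If instead the averages are needed per class (the first column), as the question asks, a groupBy along the same lines could work. This is only a sketch and assumes the data has already been loaded into a DataFrame df whose first column is the class label:
import org.apache.spark.sql.functions.{avg, col}
val classCol = df.columns.head     // the class label column
val valueCols = df.columns.tail    // the remaining numeric columns
val aggs = valueCols.map(c => avg(col(c)))
df.groupBy(col(classCol)).agg(aggs.head, aggs.tail: _*).show()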

flatMap in scala, the compiler says it's wrong

I have a file whose lines contain items separated by ","
for example:
2 1,3
3 2,5,7
5 4
Now I want to flatMap this file into an RDD like this:
2 1
2 3
3 2
3 5
3 7
5 4
I wonder how to write this function in Scala:
val pairs = lines.flatMap { line =>
  val a = line.split(" ")(0)
  val partb = line.split(" ")(1)
  for (b <- partb.split(",")) {
    yield a + " " + b
  }
}
Is this correct?
Thank you for clarifying your code example. In your case, the only problem is the location of your yield keyword. Move it to before the curly braces, like so:
for (b <- partb.split(",")) yield {
  a + " " + b
}
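As a quick sanity check outside Spark, here is a local sketch that uses a plain List in place of the RDD, with the sample lines from the question:
val lines = List("2 1,3", "3 2,5,7", "5 4")
val pairs = lines.flatMap { line =>
  val a = line.split(" ")(0)
  val partb = line.split(" ")(1)
  for (b <- partb.split(",")) yield {
    a + " " + b
  }
}
// List(2 1, 2 3, 3 2, 3 5, 3 7, 5 4)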
You need to write yield and THEN the return logic:
yield { a }
The way you are doing it now is a for loop, not a for comprehension, so the compiler will complain about the yield keyword, and even if it compiled, it would return Unit.
val pairs = lines.flatMap { line =>
  for (a <- line.split(",")) yield {
    a
  }
}
In addition to relocating yield to deliver a collection, as already shown, consider this possible refactoring where we extract the first two entries from the split:
val pairs = lines.flatMap { line =>
  val Array(a, partb, _*) = line.split(" ")
  for (b <- partb.split(","))
    yield a + " " + b
}
and yet more concise is
val pairs = lines.flatMap { line =>
  val Array(a, tail @ _*) = line.split(" |,")
  for (t <- tail) yield s"$a $t"
}
where we split by either " " or "," and extract the head and the tail, then we apply string interpolation to produce the desired result.

Functional Way of handling corner case in Folds

I have a list of nodes (String) that I want to convert into something like the following.
create X ({name:"A"}),({name:"B"}),({name:"B"}),({name:"C"}),({name:"D"}),({name:"F"})
Using a fold I get everything with an extra "," at the end. I can remove that with a substring on the final String. I was wondering if there is a better/more functional way of doing this in Scala?
val nodes = List("A", "B", "B", "C", "D", "F")
val str = nodes.map( x => "({name:\"" + x + "\"}),").foldLeft("create X ")( (acc, curr) => acc + curr )
println(str)
//create X ({name:"A"}),({name:"B"}),({name:"B"}),({name:"C"}),({name:"D"}),({name:"F"}),
Solution 1
You could use the mkString function, which won't append the separator at the end.
In this case you first map each element to the corresponding String and then use mkString to put the ',' in between.
Since the "create X " at the beginning is static, you can just prepend it to the result.
val str = "create X " + nodes.map("({name:\"" + _ + "\"})").mkString(",")
Solution 2
Another way to see this: since you append exactly one ',' too many, you can just remove it.
val str = nodes.foldLeft("create X ")((acc, x) => acc + "({name:\"" + x + "\"}),").init
init just takes all elements from a collection, except the last.
(A string is seen as a collection of chars here)
So when there are elements in nodes, you remove a ','. When there are none, you only get "create X " and therefore remove the trailing white-space, which might not be needed anyway.
Solutions 1 and 2 are not equivalent when nodes is empty: Solution 1 would keep the trailing white-space.
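A quick check of the empty case (a standalone sketch):
val empty = List.empty[String]
val s1 = "create X " + empty.map(x => "({name:\"" + x + "\"})").mkString(",")
// s1: "create X "  (trailing space kept)
val s2 = empty.foldLeft("create X ")((acc, x) => acc + "({name:\"" + x + "\"}),").init
// s2: "create X"   (init drops the trailing space)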
Joining a bunch of things, splicing something "in between" each of the things, isn't a map-shaped problem. So adding the comma in the map call doesn't really "fit".
I generally do this sort of thing by inserting the comma before each item during the fold; the fold can test whether the accumulator is "empty" and not insert a comma.
For this particular case (string joining) it's so common that there's already a library function for it: mkString.
Move "," from map(which applies to all) to fold/reduce
val str = "create X " + nodes.map( x => "({name:\"" + x + "\"})").reduceLeftOption( _ +","+ _ ).getOrElse("")