Scala: Adding elements to Set inside 'foreach' doesn't persist

I create a mutable set and iterate over a list using a 'foreach' to populate the set. When I print the set inside the foreach, it prints the contents of the set correctly. However, the set is empty after the end of 'foreach'. I am not able to figure out what I am missing.
import org.apache.spark._
import org.apache.spark.graphx._
import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
object SparkTest {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Spark Test")
    val sc = new SparkContext(conf)
    val graph = GraphLoader.edgeListFile(sc, "followers.txt")
    val edgeList = graph.edges
    var mapperResults = iterateMapper(edgeList)
    sc.stop()
  }
  def iterateMapper(edges: EdgeRDD[Int, Int]): scala.collection.mutable.Set[(VertexId, VertexId)] = {
    var mapperResults = scala.collection.mutable.Set[(VertexId, VertexId)]()
    val mappedValues = edges.mapValues(edge => (edge.srcId, edge.dstId)) ++ edges.mapValues(edge => (edge.dstId, edge.srcId))
    mappedValues.foreach { edge =>
      var src = edge.attr._1
      var dst = edge.attr._2
      mapperResults += ((src, dst))
    }
    println(mapperResults)
    return mapperResults
  }
}
This is the code I'm working with. It is a modified example from Spark.
The
println(mapperResults)
prints out an empty set.

Actually it works, but on the workers.
foreach is a function that exists only for its side effects, and it runs on the workers, so you won't see the updated Set on the driver.
The other issue is that this is designed to be immutable, so do not use a mutable collection there. There is also no need for one. The following code should do what you meant to do:
val mapperResults = mappedValues.map(_.attr).distinct.collect
It is shorter, cleaner, and does the map work on the workers.
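For context, a minimal sketch of how iterateMapper from the question could be rewritten along these lines (an illustration only; the return type becomes a plain Array collected on the driver):
def iterateMapper(edges: EdgeRDD[Int, Int]): Array[(VertexId, VertexId)] = {
  val mappedValues = edges.mapValues(edge => (edge.srcId, edge.dstId)) ++ edges.mapValues(edge => (edge.dstId, edge.srcId))
  // map and distinct run on the workers; collect brings the result back to the driver
  mappedValues.map(_.attr).distinct.collect()
}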

Related

How to use Array in JCommander in Scala

I want to use JCommander to parse args.
I wrote some code:
import com.beust.jcommander.{JCommander, Parameter}
import scala.collection.mutable.ArrayBuffer
object Config {
  @Parameter(names = Array("--categories"), required = true)
  var categories = new ArrayBuffer[String]
}
object Main {
  def main(args: Array[String]): Unit = {
    val cfg = Config
    JCommander
      .newBuilder()
      .addObject(cfg)
      .build()
      .parse(args.toArray: _*)
    println(cfg.categories)
  }
}
However, it fails with
com.beust.jcommander.ParameterException: Could not invoke null
Reason: Can not set static scala.collection.mutable.ArrayBuffer field InterestRulesConfig$.categories to java.lang.String
What am I doing wrong?
JCommander uses knowledge of Java types to map values to parameters, but Java doesn't have a type scala.collection.mutable.ArrayBuffer; it has java.util.List. If you want to use JCommander, you have to stick to Java's built-in types.
If you want to use Scala's types, use one of the Scala libraries that handle this in a more idiomatic manner: scopt or decline.
Working example
import java.util
import com.beust.jcommander.{JCommander, Parameter}
import scala.jdk.CollectionConverters._
object Config {
  @Parameter(names = Array("--categories"), required = true)
  var categories: java.util.List[Integer] = new util.ArrayList[Integer]()
}
object Hello {
  def main(args: Array[String]): Unit = {
    val cfg = Config
    JCommander
      .newBuilder()
      .addObject(cfg)
      .build()
      .parse(args.toArray: _*)
    println(cfg.categories)
    println(cfg.categories.getClass())
    val a = cfg.categories.asScala
    for (x <- a) {
      println(x.toInt)
      println(x.toInt.getClass())
    }
  }
}
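For completeness, here is a rough sketch of the scopt alternative mentioned above (an illustration only, assuming the scopt 3.x API; the Args case class and the program name "my-app" are made up for the example):
import scopt.OptionParser
case class Args(categories: Seq[String] = Seq.empty)
object ScoptMain {
  def main(args: Array[String]): Unit = {
    // scopt parses "--categories a,b,c" into a Seq[String] without any Java interop
    val parser = new OptionParser[Args]("my-app") {
      opt[Seq[String]]("categories")
        .required()
        .action((xs, c) => c.copy(categories = xs))
    }
    parser.parse(args, Args()) match {
      case Some(parsed) => println(parsed.categories)
      case None         => sys.exit(1) // scopt has already printed a usage error
    }
  }
}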

What is the Spark execution order with function calls in Scala?

I have a Spark program as follows:
object A {
  var id_set: Set[String] = _
  def init(argv: Array[String]) = {
    val args = new AArgs(argv)
    id_set = args.ids.split(",").toSet
  }
  def main(argv: Array[String]) {
    init(argv)
    val conf = new SparkConf().setAppName("some.name")
    val rdd1 = getRDD(paras)
    val rdd2 = getRDD(paras)
    //......
  }
  def getRDD(paras) = {
    //function details
    getRDDDtails(paras)
  }
  def getRDDDtails(paras) = {
    //val id_given = id_set
    id_set.foreach(println) //works normally here, not empty
    someRDD.filter { x =>
      val someSet = x.getOrElse(...)
      //id_set.foreach(println) ------wrong, id_set is just an empty set here
      (someSet & id_set).size > 0
    }
  }
}
class AArgs(args: Array[String]) extends Serializable {
  //parse args
}
I have a global variable id_set. At first it is just an empty set. In main, I call init, which sets id_set to a non-empty set built from args. After that, I call getRDD, which calls getRDDDtails. In getRDDDtails, I filter an RDD based on the contents of id_set. However, the result seems to be empty. I tried to print id_set in the executor, and it is just an empty line. So the problem seems to be that id_set is not properly initialized (in the init function). However, when I print id_set on the driver (in the first lines of getRDDDtails), it works normally and is not empty.
So I tried adding val id_given = id_set in getRDDDtails and using id_given later. This seems to fix the problem. But I'm totally confused about why this happens. What is the execution order of Spark programs? Why does my solution work?
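For illustration, a minimal sketch of the local-capture pattern described above; the explanation in the comments (the A singleton being re-created on each executor without init running) is my reading of standard Spark closure-serialization behaviour, and the RDD[Set[String]] element type is assumed for the example:
def getRDDDtails(someRDD: org.apache.spark.rdd.RDD[Set[String]]) = {
  // Capture the field into a local val on the driver. The local value is
  // serialized with the closure, whereas a direct reference to id_set goes
  // through the A singleton, which is re-created on each executor without
  // init() ever running, so the field is still unset there.
  val id_given = id_set
  someRDD.filter { someSet =>
    (someSet & id_given).nonEmpty
  }
}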

Best way to convert an online CSV to a DataFrame in Scala

I am trying to figure out the most efficient way to put this online CSV file into a DataFrame in Scala.
To save you a download, the CSV file in the code looks like this:
"Symbol","Name","LastSale","MarketCap","ADR
TSO","IPOyear","Sector","Industry","Summary Quote"
"DDD","3D Systems Corporation","18.09","2058834640.41","n/a","n/a","Technology","Computer Software: Prepackaged Software","http://www.nasdaq.com/symbol/ddd"
"MMM","3M Company","211.68","126423673447.68","n/a","n/a","Health Care","Medical/Dental Instruments","http://www.nasdaq.com/symbol/mmm"
....
From my research, I start by downloading the csv, and placing it into a list buffer (since you can't do this with a list because it's immutable):
import scala.collection.mutable.ListBuffer
val sc = new SparkContext(conf)
var stockInfoNYSE_ListBuffer = new ListBuffer[java.lang.String]()
import scala.io.Source
val bufferedSource = Source.fromURL("http://www.nasdaq.com/screening/companies-by-industry.aspx?exchange=NYSE&render=download")
for (line <- bufferedSource.getLines) {
  val cols = line.split(",").map(_.trim)
  stockInfoNYSE_ListBuffer += s"${cols(0)},${cols(1)},${cols(2)},${cols(3)},${cols(4)},${cols(5)},${cols(6)},${cols(7)},${cols(8)}"
}
bufferedSource.close
val stockInfoNYSE_List = stockInfoNYSE_ListBuffer.toList
So we have a list. You can basically get each value like this:
// SYMBOL : stockInfoNYSE_List(1).split(",")(0)
// COMPANY NAME : stockInfoNYSE_List(1).split(",")(1)
// IPOYear : stockInfoNYSE_List(1).split(",")(5)
// Sector : stockInfoNYSE_List(1).split(",")(6)
// Industry : stockInfoNYSE_List(1).split(",")(7)
Here is where I get stuck: how do I get this into a DataFrame? Below are the wrong approaches I have taken. I didn't put all the values in just yet; this was just a simple test.
case class StockMap(Symbol: String, Name: String)
val caseClassDS = Seq(StockMap(stockInfoNYSE_List(1).split(",")(0),
  stockInfoNYSE_List(1).split(",")(1))).toDS()
caseClassDS.show()
The problem with the approach above: I can only figure out how to add one sequence (row) by hard coding it. I want every Row in the list.
My second failed attempt:
val sqlContext= new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
val test = stockInfoNYSE_List.toDF
This will just give you the array, and I want to divide up the values.
Array(["Symbol","Name","LastSale","MarketCap","ADR TSO","IPOyear","Sector","Industry","Summary Quote"], ["DDD","3D Systems Corporation","18.09","2058834640.41","n/a","n/a","Technology","Computer Software: Prepackaged Software","http://www.nasdaq.com/symbol/ddd"], ["MMM","3M Company","211.68","126423673447.68","n/a","n/a","Health Care","Medical/Dental Instruments","http://www.nasdaq.com/symbol/mmm"],.......
case class TestClass(Symbol: String, Name: String, LastSale: String, MarketCap: String, ADR_TSO: String, IPOyear: String, Sector: String, Industry: String, Summary_Quote: String)
var stockDF = stockInfoNYSE_ListBuffer.drop(1)
val demoDS = stockDF.map(line => {
  val fields = line.replace("\"", "").split(",")
  TestClass(fields(0), fields(1), fields(2), fields(3), fields(4), fields(5), fields(6), fields(7), fields(8))
})
scala> demoDS.toDS.show
+------+--------------------+--------+---------------+-------------+-------+-----------------+--------------------+--------------------+
|Symbol| Name|LastSale| MarketCap| ADR_TSO|IPOyear| Sector| Industry| Summary_Quote|
+------+--------------------+--------+---------------+-------------+-------+-----------------+--------------------+--------------------+
| DDD|3D Systems Corpor...| 18.09| 2058834640.41| n/a| n/a| Technology|Computer Software...|http://www.nasdaq...|
| MMM| 3M Company| 211.68|126423673447.68| n/a| n/a| Health Care|Medical/Dental In...|http://www.nasdaq...|
In case anyone is trying to get this example working, here is the code using the above solution:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import scala.collection.mutable.ListBuffer
import sqlContext.implicits._
var stockInfoNYSE_ListBuffer = new ListBuffer[java.lang.String]()
import scala.io.Source
val bufferedSource = Source.fromURL("http://www.nasdaq.com/screening/companies-by-industry.aspx?exchange=NYSE&render=download")
for (line <- bufferedSource.getLines) {
  val cols = line.split(",").map(_.trim)
  stockInfoNYSE_ListBuffer += s"${cols(0)},${cols(1)},${cols(2)},${cols(3)},${cols(4)},${cols(5)},${cols(6)},${cols(7)},${cols(8)}"
}
bufferedSource.close
case class TestClass(Symbol: String, Name: String, LastSale: String, MarketCap: String, ADR_TSO: String, IPOyear: String, Sector: String, Industry: String, Summary_Quote: String)
var stockDF = stockInfoNYSE_ListBuffer.drop(1)
val demoDS = stockDF.map(line => {
  val fields = line.replace("\"", "").split(",")
  TestClass(fields(0), fields(1), fields(2), fields(3), fields(4), fields(5), fields(6), fields(7), fields(8))
})
demoDS.toDF().show
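As an aside, on Spark 2.x+ the built-in CSV reader can do the header handling for you; this is a sketch only, assuming the file has first been downloaded to a local or HDFS path (spark.read does not fetch http:// URLs), and the path below is made up:
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder().appName("nyse-csv").getOrCreate()
val stocksDF = spark.read
  .option("header", "true")      // first line becomes the column names
  .option("inferSchema", "true") // best-effort typing for LastSale, MarketCap, etc.
  .csv("/tmp/companies-by-industry-NYSE.csv")
stocksDF.show()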

The parse tree generation of StanfordCoreNLP gets stuck

When I use StanfordCoreNLP to generate parses over big data on Spark, one of the tasks gets stuck for a long time. I looked for the error, and it shows the following:
at edu.stanford.nlp.ling.CoreLabel.<init>(CoreLabel.java:68)
  at edu.stanford.nlp.ling.CoreLabel$CoreLabelFactory.newLabel(CoreLabel.java:248)
  at edu.stanford.nlp.trees.LabeledScoredTreeFactory.newLeaf(LabeledScoredTreeFactory.java:51)
  at edu.stanford.nlp.parser.lexparser.Debinarizer.transformTreeHelper(Debinarizer.java:27)
  at edu.stanford.nlp.parser.lexparser.Debinarizer.transformTreeHelper(Debinarizer.java:34)
  at edu.stanford.nlp.parser.lexparser.Debinarizer.transformTreeHelper(Debinarizer.java:34)
  at edu.stanford.nlp.parser.lexparser.Debinarizer.transformTreeHelper(Debinarizer.java:34)
  at edu.stanford.nlp.parser.lexparser.Debinarizer.transformTreeHelper(Debinarizer.java:34)
The relevant code, I think, is as follows:
import edu.stanford.nlp.pipeline.Annotation
import edu.stanford.nlp.pipeline.StanfordCoreNLP
import java.util.Properties
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation
import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation
import edu.stanford.nlp.util.CoreMap
import scala.collection.JavaConversions._
object CoreNLP {
  def transform(Content: String): String = {
    val v = new CoreNLP
    v.runEnglishAnnotators(Content)
    v.runChineseAnnotators(Content)
  }
}
class CoreNLP {
  def runEnglishAnnotators(inputContent: String): String = {
    var document = new Annotation(inputContent)
    val props = new Properties
    props.setProperty("annotators", "tokenize, ssplit, parse")
    val coreNLP = new StanfordCoreNLP(props)
    coreNLP.annotate(document)
    parserOutput(document)
  }
  def runChineseAnnotators(inputContent: String): String = {
    var document = new Annotation(inputContent)
    val props = new Properties
    val corenlp = new StanfordCoreNLP("StanfordCoreNLP-chinese.properties")
    corenlp.annotate(document)
    parserOutput(document)
  }
  def parserOutput(document: Annotation): String = {
    val sentences = document.get(classOf[SentencesAnnotation])
    var result = ""
    for (sentence: CoreMap <- sentences) {
      val tree = sentence.get(classOf[TreeAnnotation])
      //output the tree to file
      result = result + "\n" + tree.toString
    }
    result
  }
}
My classmate said the test data is recursive and thus the NLP runs endlessly. I don't know whether that's true.
If you add props.setProperty("parse.maxlen", "100"); to your code that will set the parser to not parse sentences longer than 100 tokens. That can help prevent crash issues. You should experiment with the best max sentence length for your application.
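To make that concrete, the property would sit next to the existing annotators setting in the runEnglishAnnotators method from the question (a sketch only; 100 is just an example limit to tune):
def runEnglishAnnotators(inputContent: String): String = {
  val document = new Annotation(inputContent)
  val props = new Properties
  props.setProperty("annotators", "tokenize, ssplit, parse")
  // Skip full parsing of sentences longer than 100 tokens; tune for your data.
  props.setProperty("parse.maxlen", "100")
  val coreNLP = new StanfordCoreNLP(props)
  coreNLP.annotate(document)
  parserOutput(document)
}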

String filter using Spark UDF

input.csv:
200,300,889,767,9908,7768,9090
300,400,223,4456,3214,6675,333
234,567,890
123,445,667,887
What I want:
Read the input file and compare each line with the set "123,200,300"; if a match is found, output the matching data:
200,300 (from input line 1)
300 (from input line 2)
123 (from input line 4)
What I wrote:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD
object sparkApp {
  val conf = new SparkConf()
    .setMaster("local")
    .setAppName("CountingSheep")
  val sc = new SparkContext(conf)
  def parseLine(invCol: String): RDD[String] = {
    println(s"INPUT, $invCol")
    val inv_rdd = sc.parallelize(Seq(invCol.toString))
    val bs_meta_rdd = sc.parallelize(Seq("123,200,300"))
    return inv_rdd.intersection(bs_meta_rdd)
  }
  def main(args: Array[String]) {
    val filePathName = "hdfs://xxx/tmp/input.csv"
    val rawData = sc.textFile(filePathName)
    val datad = rawData.map { r => parseLine(r) }
  }
}
I get the following exception:
java.lang.NullPointerException
Please suggest where I went wrong.
The problem is solved. It is very simple.
val pfile = sc.textFile("/FileStore/tables/6mjxi2uz1492576337920/input.csv")
case class pSchema(id: Int, pName: String)
val pDF = pfile.map(_.split("\t")).map(p => pSchema(p(0).toInt,p(1).trim())).toDF()
pDF.select("id","pName").show()
Define the UDF:
import org.apache.spark.sql.functions.udf
val findP = udf((id: Int, pName: String) => {
  val ids = Array("123", "200", "300")
  var idsFound: String = ""
  for (id <- ids) {
    if (pName.contains(id)) {
      idsFound = idsFound + id + ","
    }
  }
  if (idsFound.length() > 0) {
    idsFound = idsFound.substring(0, idsFound.length - 1)
  }
  idsFound
})
Use the UDF in withColumn():
pDF.select("id","pName").withColumn("Found",findP($"id",$"pName")).show()
For a simpler answer, why are we making it so complex? In this case we don't need a UDF.
This is your input data:
200,300,889,767,9908,7768,9090|AAA
300,400,223,4456,3214,6675,333|BBB
234,567,890|CCC
123,445,667,887|DDD
and you have to match it with 123,200,300
val matchSet = "123,200,300".split(",").toSet
val rawrdd = sc.textFile("D:\\input.txt")
rawrdd.map(_.split("|"))
.map(arr => arr(0).split(",").toSet.intersect(matchSet).mkString(",") + "|" + arr(1))
.foreach(println)
Your output:
300,200|AAA
300|BBB
|CCC
123|DDD
What you are trying to do can't be done the way you are doing it.
Spark does not support nested RDDs (see SPARK-5063).
Spark does not support nested RDDs or performing Spark actions inside of transformations; this usually leads to NullPointerExceptions (see SPARK-718 as one example). The confusing NPE is one of the most common sources of Spark questions on StackOverflow:
call of distinct and map together throws NPE in spark library
NullPointerException in Scala Spark, appears to be caused be collection type?
Graphx: I've got NullPointerException inside mapVertices
(those are just a sample of the ones that I've answered personally; there are many others).
I think we can detect these errors by adding logic to RDD to check whether sc is null (e.g. turn sc into a getter function); we can use this to add a better error message.
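Applied to the question's code, a minimal sketch of the same filtering done without creating RDDs inside a transformation (it mirrors the plain Set intersection used in the answer above; the path and match set are taken from the question):
val matchSet = Set("123", "200", "300")
val rawData = sc.textFile("hdfs://xxx/tmp/input.csv")
// A plain Scala Set intersection inside the transformation: no nested RDDs,
// so the executors never touch the driver-only SparkContext.
val matched = rawData.map(line => line.split(",").toSet.intersect(matchSet).mkString(","))
matched.collect().foreach(println)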