Spark: show and collect-println giving different outputs - scala

I am using Spark 2.2.
I feel like something odd is going on here. The basic premise is:
I have a set of KIE/Drools rules running over a Dataset of profile objects.
I then try to show/collect-print the resulting output.
I then cast the output as a tuple so I can flatMap it later.
Code below:
implicit val mapEncoder = Encoders.kryo[java.util.HashMap[String, Any]]
implicit val recommendationEncoder = Encoders.kryo[Recommendation]
val mapper = new ObjectMapper()

val kieOuts = uberDs.map(uberProfile => {
  val map = mapper.convertValue(uberProfile, classOf[java.util.HashMap[String, Any]])
  val profile = Profile(map)

  // set up the KIE session
  val ks = KieServices.Factory.get
  val kContainer = ks.getKieClasspathContainer
  val kSession = kContainer.newKieSession() //TODO: stateful session, how to do stateless?

  // insert profile object into the KIE session
  val kCmds = ks.getCommands
  val cmds = new java.util.ArrayList[Command[_]]()
  cmds.add(kCmds.newInsert(profile))
  cmds.add(kCmds.newFireAllRules("outFired"))

  // fire KIE rules
  val results = kSession.execute(kCmds.newBatchExecution(cmds))
  val fired = results.getValue("outFired").toString.toInt

  // collect the inserted recommendation objects and create the uid string
  import scala.collection.JavaConversions._
  var gresults = kSession.getObjects
  gresults = gresults.drop(1) // drop the inserted profile object, which also gets collected

  val recommendations = scala.collection.mutable.ListBuffer[Recommendation]()
  gresults.toList.foreach(reco => {
    val recommendation = reco.asInstanceOf[Recommendation]
    recommendations += recommendation
  })
  kSession.dispose

  val uIds = StringBuilder.newBuilder
  if (recommendations.size > 0) {
    recommendations.foreach(recommendation => {
      uIds.append(recommendation.getOfferId + "_" + recommendation.getScore)
      uIds.append(";")
    })
    uIds.deleteCharAt(uIds.size - 1)
  }

  new ORecommendation(profile.getAttributes().get("cId").toString.toLong, fired, uIds.toString)
})
println("======================Output#1======================")
kieOuts.show(1000, false)
println("======================Output#2======================")
kieOuts.collect.foreach(println)
// separating cId and each uId into individual rows
val kieOutsDs = kieOuts.as[(Long, Int, String)]
println("======================Output#3======================")
kieOutsDs.show(1000, false)
(I have sanitized/shortened the ids below; they are much longer but have a similar format.)
What I am seeing as outputs:
Output#1 shows a set of uIds (as a String):
+----+-----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|cId |rulesFired | eligibleUIds |
|842 | 17|123-25_2.0;12345678-48_9.0;28a-ad_5.0;123-56_10.0;123-27_2.0;123-32_3.0;c6d-e5_5.0;123-26_2.0;123-51_10.0;8e8-c1_5.0;123-24_2.0;df8-ad_5.0;123-36_5.0;123-16_2.0;123-34_3.0|
+----+-----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Output#2 shows a mostly similar set of uIds (usually off by one element):
ORecommendation(842,17,123-36_5.0;123-24_2.0;8e8-c1_5.0;df8-ad_5.0;28a-ad_5.0;660-73_5.0;123-34_3.0;123-48_9.0;123-16_2.0;123-51_10.0;123-26_2.0;c6d-e5_5.0;123-25_2.0;123-56_10.0;123-32_3.0)
Output#3 is the same as Output#1:
+----+-----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|842 | 17 |123-32_3.0;8e8-c1_5.0;123-51_10.0;123-48_9.0;28a-ad_5.0;c6d-e5_5.0;123-27_2.0;123-16_2.0;123-24_2.0;123-56_10.0;123-34_3.0;123-36_5.0;123-6_2.0;123-25_2.0;660-73_5.0|
Every time I run it, the difference between Output#1 and Output#2 is one element, but never the same element (in the above example, Output#1 has 123-27_2.0 but Output#2 has 660-73_5.0).
Should they not be the same? I am still new to Scala/Spark and feel like I am missing something very fundamental.

I think I figured this out: adding cache to kieOuts at least got me identical outputs between show and collect.
I will be looking into why KIE gives me different output for every run of the same input, but that is a different issue.
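A minimal sketch of that fix, using the kieOuts Dataset from the code above: without caching, show and collect each re-run the map, so any non-determinism in the rule session shows up as different rows.
// Persist the result of the KIE map so it is evaluated only once; show and
// collect then read the same materialized rows instead of re-running the rules.
kieOuts.cache()

kieOuts.show(1000, false)            // first action materializes the cache
kieOuts.collect.foreach(println)     // reads the cached result, so it matches show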

Related

Print out all the data within a TableQuery[Restaurants]

def displayTable(table: TableQuery[Restaurants]): Unit = {
  val tablequery = table.map(_.id)
  val action = tablequery.result
  val result = db.run(action)
  result.foreach(id => id.foreach(new_id => println(new_id)))
  total_points = total_points + 10
}
I have tried to print out all the data to the screen but I have gotten nowhere. My question is: why does nothing print out? I am using Scala and a JDBC connection via Slick. If you remove new_id => println(new_id), you get:
def displayTable(table: TableQuery[Restaurants]): Unit = {
  val tablequery = table.map(_.id)
  val action = tablequery.result
  val result = db.run(action)
  result.foreach(id => println(id))
  total_points = total_points + 10
}
This code produces output like the following: "Vector()". Can someone please help me print all the data out? I loaded it in using the following code:
def fillTable(): TableQuery[Restaurants] = {
  println("Table filled.")
  val restaurants = TableQuery[Restaurants]
  val setup = DBIO.seq(
    restaurants.schema.create
  )
  val setupFuture = db.run(setup)
  val bufferedSource = io.Source.fromFile("src/main/scala/Restaurants.csv")
  for (line <- bufferedSource.getLines) {
    val cols = line.split(",").map(_.trim)
    var restaurant = new Restaurant(s"${cols(0)}", s"${cols(1)}", s"${cols(2)}",
      s"${cols(3)}", s"${cols(4)}", s"${cols(5)}", s"${cols(6)}",
      s"${cols(7)}", s"${cols(8)}", s"${cols(9)}")
    restaurants.forceInsert(s"${cols(0)}", s"${cols(1)}", s"${cols(2)}",
      s"${cols(3)}", s"${cols(4)}", s"${cols(5)}", s"${cols(6)}",
      s"${cols(7)}", s"${cols(8)}", s"${cols(9)}")
    total_rows = total_rows + 1
  }
  restaurants
}
This is my first question so I apologize for the format.
The fact that Vector() is your output in the second version of displayTable is a strong hint that your query is returning an empty result, and therefore has no ids to print out. I haven't run your code myself, but I suspect this is because restaurants.forceInsert returns an action, and you need to db.run() it to actually execute the insert.
I'm also curious why you create var restaurant = ... but then ignore it and call forceInsert, recreating the tuple from the CSV values again. Why not restaurants.forceInsert(restaurant)?
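That suggestion might look roughly like the sketch below inside fillTable, assuming the table's row type really is the 10-string tuple being passed to forceInsert (the timeout and names are illustrative, not from the original code):
import scala.concurrent.Await
import scala.concurrent.duration._

// Make sure the schema exists before inserting.
Await.result(db.run(restaurants.schema.create), 30.seconds)

// Build one insert action per CSV line, then run them in a single batch and
// wait for it to finish, so the rows exist before displayTable queries them.
val insertActions = bufferedSource.getLines.map { line =>
  val cols = line.split(",").map(_.trim)
  restaurants.forceInsert((cols(0), cols(1), cols(2), cols(3), cols(4),
    cols(5), cols(6), cols(7), cols(8), cols(9)))
}.toList

Await.result(db.run(DBIO.sequence(insertActions)), 30.seconds)
The same applies on the read side: db.run returns a Future, so the program may exit before the foreach callback in displayTable ever runs; awaiting the result there too makes the printed output deterministic.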

How can I construct a String with the contents of a given DataFrame in Scala

Suppose I have a DataFrame. How can I retrieve the contents of that DataFrame and represent them as a String?
Consider the example code below, where I try to do that.
val tvalues: Array[Double] = Array(1.866393526974307, 2.864048126935307, 4.032486069215076, 7.876169953355888, 4.875333799256043, 14.316322626848278)
val pvalues: Array[Double] = Array(0.064020056478447, 0.004808399479386827, 8.914865448939047E-5, 7.489564524121306E-13, 2.8363794106756046E-6, 0.0)
val conf = new SparkConf().setAppName("Simple Application").setMaster("local[2]")
val sc = new SparkContext(conf)
val df = sc.parallelize(tvalues zip pvalues)
val sb = StringBuilder.newBuilder
df.foreach(x => {
  println("x = ", x)
  sb.append(x)
})
println("sb = ", sb)
The output of the code shows the example dataframe has contents:
(x = ,(1.866393526974307,0.064020056478447))
(x = ,(7.876169953355888,7.489564524121306E-13))
(x = ,(2.864048126935307,0.004808399479386827))
(x = ,(4.032486069215076,8.914865448939047E-5))
(x = ,(4.875333799256043,2.8363794106756046E-6))
However, the final StringBuilder contains an empty string.
Any thoughts on how to retrieve a String for a given DataFrame in Scala?
Many thanks
UPD: as mentioned by @user8371915, the solution below will only work in a single JVM, in development (local) mode. In fact, we can't modify broadcast variables as if they were globals. You can use accumulators, but that will be quite inefficient. You can also read an answer about read/write global vars here. Hope it will help you.
I think you should read the topic about shared variables in Spark. Link here
Normally, when a function passed to a Spark operation (such as map or reduce) is executed on a remote cluster node, it works on separate copies of all the variables used in the function. These variables are copied to each machine, and no updates to the variables on the remote machine are propagated back to the driver program. Supporting general, read-write shared variables across tasks would be inefficient. However, Spark does provide two limited types of shared variables for two common usage patterns: broadcast variables and accumulators.
Let's have a look at broadcast variables. I edited your code:
val tvalues: Array[Double] = Array(1.866393526974307, 2.864048126935307, 4.032486069215076, 7.876169953355888, 4.875333799256043, 14.316322626848278)
val pvalues: Array[Double] = Array(0.064020056478447, 0.004808399479386827, 8.914865448939047E-5, 7.489564524121306E-13, 2.8363794106756046E-6, 0.0)
val conf = new SparkConf().setAppName("Simple Application").setMaster("local[2]")
val sc = new SparkContext(conf)
val df = sc.parallelize(tvalues zip pvalues)
val sb = StringBuilder.newBuilder
val broadcastVar = sc.broadcast(sb)
df.foreach(x => {
  println("x = ", x)
  broadcastVar.value.append(x)
})
println("sb = ", broadcastVar.value)
Here I used broadcastVar as a container for the StringBuilder variable sb.
Here is the output:
(x = ,(1.866393526974307,0.064020056478447))
(x = ,(2.864048126935307,0.004808399479386827))
(x = ,(4.032486069215076,8.914865448939047E-5))
(x = ,(7.876169953355888,7.489564524121306E-13))
(x = ,(4.875333799256043,2.8363794106756046E-6))
(x = ,(14.316322626848278,0.0))
(sb = ,(7.876169953355888,7.489564524121306E-13)(1.866393526974307,0.064020056478447)(4.875333799256043,2.8363794106756046E-6)(2.864048126935307,0.004808399479386827)(14.316322626848278,0.0)(4.032486069215076,8.914865448939047E-5))
Hope this helps.
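For completeness, the accumulator alternative mentioned in the update above could look like the sketch below; it gathers the pairs on the driver via a CollectionAccumulator instead of mutating a shared StringBuilder (a sketch, not tested against this exact setup).
// Each task adds its element to the accumulator; the driver reads the merged
// value after the action completes.
val pairsAcc = sc.collectionAccumulator[(Double, Double)]("pairs")

df.foreach(x => pairsAcc.add(x))

import scala.collection.JavaConverters._
val sbFromAcc = pairsAcc.value.asScala.mkString
println("sb = " + sbFromAcc)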
Does the output of df.show(false) help? If yes, then this SO answer helps: Is there any way to get the output of Spark's Dataset.show() method as a string?
Thanks everybody for the feedback, and for helping me understand this slightly better.
The combination of responses resulted in the code below. The requirements have changed slightly in that I now represent my df as a list of JSONs. The code below does this without using a broadcast.
class HandleDf(df: DataFrame, limit: Int) extends java.io.Serializable {
  val jsons = df.limit(limit).collect.map(rowToJson(_))

  def rowToJson(r: org.apache.spark.sql.Row): JSONObject = {
    try { JSONObject(r.getValuesMap(r.schema.fieldNames)) }
    catch { case t: Throwable =>
      JSONObject.apply(Map("Row with error" -> t.toString))
    }
  }
}
I use the class like this:
val jsons = new HandleDf(df, 100).jsons
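If a single String is needed rather than a list, the collected JSON objects can be joined on the driver; a small sketch, assuming jsons holds scala.util.parsing.json.JSONObject values as above:
// Join the per-row JSON objects into one JSON-array-like string on the driver.
val dfAsString = new HandleDf(df, 100).jsons.mkString("[", ",", "]")
println(dfAsString)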

Convert RDF4J stream filter (lambda?) from Java to Scala

A follow-up to Are typed literals "tricky" in RDF4J?
I have some triples about the weight of dump trucks, using literal objects with different data types. I'm only interested in the integer values, so I want to filter based on the data type. Jeen Broekstra sent a Java solution about a week ago, and I'm having trouble converting it into Scala, my team's preferred language.
This is what I have so far. Eclipse is complaining:
not found: value l
val rdf4jServer = "http://host.domain:7200"
val repositoryID = "trucks"
val MyRepo = new HTTPRepository(rdf4jServer, repositoryID)
MyRepo.initialize()
var con = MyRepo.getConnection()
val f = MyRepo.getValueFactory()
val DumpTruck = f.createIRI("http://example.com/dumpTruck")
val Weight = f.createIRI("http://example.com/weight")
val m = QueryResults.asModel(con.getStatements(DumpTruck, Weight, null))
val intValuesStream = Models.objectLiterals(m).stream()
// OK up to here
// errors start below
val intValuesFiltered =
  intValuesStream.filter(l -> l.getDatatype().equals(XMLSchema.INTEGER))
val intValues = intValuesFiltered.collect(Collectors.toList())
Replace the -> with =>:
val intValuesFiltered = intValuesStream.filter(l => l.getDatatype().equals(XMLSchema.INTEGER))
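One caveat: passing a Scala function literal where java.util.stream.Stream.filter expects a java.util.function.Predicate relies on SAM conversion, which Scala 2.12+ does out of the box but Scala 2.11 does not by default. If the => version still does not compile on 2.11, an explicit Predicate should work (a sketch, assuming RDF4J's Literal type for the stream elements):
import java.util.function.Predicate
import org.eclipse.rdf4j.model.Literal
import org.eclipse.rdf4j.model.vocabulary.XMLSchema

// Explicit Predicate for Scala versions without SAM conversion for lambdas.
val isInteger = new Predicate[Literal] {
  override def test(l: Literal): Boolean = l.getDatatype.equals(XMLSchema.INTEGER)
}
val intValuesFiltered = intValuesStream.filter(isInteger)
val intValues = intValuesFiltered.collect(Collectors.toList())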

Optimize Scala JSON Parsing

I am working on a Spark Streaming application that takes in a JSON message and needs to parse it. It has two parts, but the JSON parsing seems to be the larger overhead when testing. Is there any way to optimize this?
import scala.util.parsing.json.JSON
val parsed = JSON.parseFull(formatted)
val subject = parsed.flatMap(_.asInstanceOf[Map[String, String]].get("subject")).toString.drop(5).dropRight(1)
val predicate = parsed.flatMap(_.asInstanceOf[Map[String, String]].get("predicate")).toString.drop(5).dropRight(1)
val obj = parsed.flatMap(_.asInstanceOf[Map[String, String]].get("object")).toString.drop(5).dropRight(1)
val label = parsed.flatMap(_.asInstanceOf[Map[String, String]].get("label")).toString.drop(5).dropRight(1)
val url = "http://" + elasticAddress.value + "/data/quad/"
val urlEncoded = java.net.URLEncoder.encode(label + subject + predicate + obj, "utf-8")
Are you also using the Play framework in your project? If so, the Play JSON library can definitely cut down on your code to make things more readable (like easy casting to a case class with matching structure), though I don't know offhand how well it will optimize things for you from an efficiency standpoint.
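A rough sketch of what that could look like with Play JSON, assuming the message is a flat object with exactly the four string fields used above (the Quad case class and its field names are assumptions, not from the original code):
import play.api.libs.json.{Json, Reads}

// Hypothetical case class mirroring the assumed message structure.
case class Quad(subject: String, predicate: String, `object`: String, label: String)
implicit val quadReads: Reads[Quad] = Json.reads[Quad]

val quad = Json.parse(formatted).as[Quad]
val urlEncoded = java.net.URLEncoder.encode(
  quad.label + quad.subject + quad.predicate + quad.`object`, "utf-8")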
I have changed it to this:
import org.json4s.JsonAST.{JField, JObject, JString, JArray, JValue}
import org.json4s.jackson.JsonMethods._
val parsed = parse(data)
val output: List[(String, String, String, String)] = for {
  JArray(sys) <- parsed
  JObject(child) <- sys
  JField("subject", JString(subject)) <- child
  JField("predicate", JString(predicate)) <- child
  JField("object", JString(obj)) <- child
  JField("label", JString(label)) <- child
} yield (subject, predicate, obj, label)

val subject = output(0)._1
val predicate = output(0)._2
val obj = output(0)._3
val label = output(0)._4
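If the message shape is stable, another json4s option is extracting straight into case classes instead of the for-comprehension; whether it is actually faster would need measuring, and the Quad case class below is an assumption about the structure (an array of flat objects):
import org.json4s._
import org.json4s.jackson.JsonMethods._

// Hypothetical case class matching the four fields pulled out above.
case class Quad(subject: String, predicate: String, `object`: String, label: String)

// Create the Formats once (outside the streaming loop) rather than per message.
implicit val formats: Formats = DefaultFormats

val quads = parse(data).extract[List[Quad]]
val first = quads.head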

Iterating through files in scala to create values based on the file names

I think there may be a simple solution to this. I was wondering if anybody knew how to iterate over a set of files and output a value based on each file's name.
My problem is: I want to read in a set of graph edges for each month and then create separate monthly graphs.
Currently I've done this the long way, which is fine for doing one year's worth, but I'd like a way to automate it.
You can see my code below, which hopefully shows clearly what I am doing.
// Load vertex data
val vertices = sc.textFile("D:~vertices.csv")
  .map(line => line.split(","))
  .map(parts => (parts.head.toLong, parts.tail))

// Define a function for creating edges from a csv file
def EdgeMaker(file: RDD[String]): RDD[Edge[String]] = {
  file.flatMap { line =>
    if (!line.isEmpty && line(0) != '#') {
      val lineArray = line.split(",")
      if (lineArray.length < 3) { // skip malformed lines
        None
      } else {
        val srcId = lineArray(0).toInt
        val dstId = lineArray(1).toInt
        val ID = lineArray(2).toString
        Array(Edge(srcId, dstId, ID))
      }
    } else {
      None
    }
  }
}
//make graphs -This is where I want automation, so I can iterate through a
//folder of edge files and output corresponding monthly graphs.
val edgesJan = EdgeMaker(sc.textFile("D:~edges2011Jan.txt"))
val graphJan = Graph(vertices, edgesJan)
val edgesFeb = EdgeMaker(sc.textFile("D:~edges2011Feb.txt"))
val graphFeb = Graph(vertices, edgesFeb)
val edgesMar = EdgeMaker(sc.textFile("D:~edges2011Mar.txt"))
val graphMar = Graph(vertices, edgesMar)
val edgesApr = EdgeMaker(sc.textFile("D:~edges2011Apr.txt"))
val graphApr = Graph(vertices, edgesApr)
val edgesMay = EdgeMaker(sc.textFile("D:~edges2011May.txt"))
val graphMay = Graph(vertices, edgesMay)
val edgesJun = EdgeMaker(sc.textFile("D:~edges2011Jun.txt"))
val graphJun = Graph(vertices, edgesJun)
val edgesJul = EdgeMaker(sc.textFile("D:~edges2011Jul.txt"))
val graphJul = Graph(vertices, edgesJul)
val edgesAug = EdgeMaker(sc.textFile("D:~edges2011Aug.txt"))
val graphAug = Graph(vertices, edgesAug)
val edgesSep = EdgeMaker(sc.textFile("D:~edges2011Sep.txt"))
val graphSep = Graph(vertices, edgesSep)
val edgesOct = EdgeMaker(sc.textFile("D:~edges2011Oct.txt"))
val graphOct = Graph(vertices, edgesOct)
val edgesNov = EdgeMaker(sc.textFile("D:~edges2011Nov.txt"))
val graphNov = Graph(vertices, edgesNov)
val edgesDec = EdgeMaker(sc.textFile("D:~edges2011Dec.txt"))
val graphDec = Graph(vertices, edgesDec)
Any help or pointers on this would be much appreciated.
You can use SparkContext's wholeTextFiles to get the file name, and use that String for naming/calling/filtering/etc. your values/output/etc.:
val fileLoad = sc.wholeTextFiles("hdfs:///..Path").map { case (filename, content) => ... }
SparkContext's textFile only reads the data; it does not keep the file name.
----EDIT----
Sorry, I seem to have misunderstood the question; you can load multiple files using:
sc.wholeTextFiles("~/path/file[0-5]*,/anotherPath/*.txt").map { case (filename, content) => ... }
The asterisk * should load all files in the path, assuming they are all supported input file types.
This read will concatenate all your files into one single large RDD and avoids repeated calls (because on each call you have to specify the path and file name, which I think is what you want to avoid).
Reading with the file name allows you to group by the file name and apply your graph function to each group, as in the sketch below.
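A sketch of that idea applied to the code in the question, assuming the monthly edge files sit under one path matching the glob below and are small enough to handle one at a time on the driver:
// Build one graph per edge file, keyed by its file name. wholeTextFiles yields
// (fileName, fileContent) pairs, so the month can be recovered from the name.
val monthlyGraphs = sc.wholeTextFiles("D:~edges2011*.txt")
  .collect                                        // small number of files, so collecting is fine
  .map { case (fileName, content) =>
    val edges = EdgeMaker(sc.parallelize(content.split("\n")))
    fileName -> Graph(vertices, edges)
  }
  .toMap

// e.g. pick out January's graph by matching on the file name
val graphJan = monthlyGraphs.collectFirst { case (name, g) if name.contains("Jan") => g }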