Spark multi-threaded write does not work properly in cluster mode - scala

I have several Hive tables to convert to Parquet format and write to HDFS. Basically, I load each of these tables into a Spark DataFrame and write it back in Parquet format. To parallelize the write phase even further (a single DataFrame write should already be parallelized, since I am using 12 executors with 5 cores per executor), I have tried spawning several threads, each of which writes a subset of the tables.
val outputPath = "/user/xyz/testdb/tables"
val db = "testdb"
val partitionField = "nominal_time"
val partition = "20180729"
val numQueues = 6

for (i <- 0 until numQueues) {
  val thread = new Thread {
    override def run(): Unit = {
      val tablesToProcess = queues(i)
      tablesToProcess.foreach(t => {
        val table = t.trim.toUpperCase
        val tempTable = "tmp_" + table
        val destTable = table

        var dataFrame = sqc.read.table(s"$db.$tempTable")

        // write
        dataFrame.write.
          mode("overwrite").
          format("parquet").
          save(s"$outputPath/$destTable/$partitionField=$partition")

        println(s"\n\nWrite completed for table $table\n")
      })
    }
  }
  thread.start()
}
This code works fine in YARN-CLIENT mode, and I can observe a significant reduction in the time required for the process to complete.
What I don't understand is that when I launch the same code in YARN-CLUSTER mode, the job completes very quickly (too quickly, I have to say) but it does not write anything.
Am I missing something fundamental here that causes a multi-thread spark program to not work properly in cluster mode?
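One hedged guess, not confirmed by the post: in YARN-CLUSTER mode the driver's main method runs inside the ApplicationMaster, and the application can be torn down as soon as main returns, even though the writer threads are still running; in YARN-CLIENT mode the client JVM stays alive because the spawned threads are non-daemon. A minimal sketch that rules this out by joining the threads before main exits (queues and sqc are the question's own, otherwise undefined, values):
// Sketch only: keep main() alive until every writer thread has finished.
// `queues` and `sqc` are assumed to exist exactly as in the code above.
val threads = (0 until numQueues).map { i =>
  new Thread {
    override def run(): Unit = {
      queues(i).foreach { t =>
        val table = t.trim.toUpperCase
        sqc.read.table(s"$db.tmp_$table")
          .write
          .mode("overwrite")
          .format("parquet")
          .save(s"$outputPath/$table/$partitionField=$partition")
      }
    }
  }
}
threads.foreach(_.start())
threads.foreach(_.join()) // block so the YARN ApplicationMaster does not finish the job early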

Related

How to collect all records at spark executor and process it as batch

In my Spark Kinesis streaming application, I am using foreachBatch to get the streaming data, and I need to send it to the Drools rule engine for further processing.
My requirement is to accumulate all the JSON data in a list/rule session and send it to the rule engine for processing as a batch on the executor side.
//Scala Code Example:
val dataFrame = sparkSession.readStream
  .format("kinesis")
  .option("streamName", streamName)
  .option("region", region)
  .option("endpointUrl", endpointUrl)
  .option("initialPosition", "TRIM_HORIZON")
  .load()

val query = dataFrame
  .selectExpr("CAST(data as STRING) as krecord")
  .writeStream
  .foreachBatch(function)
  .start()

query.awaitTermination()

val function = (batchDF: DataFrame, batchId: Long) => {
  val ruleSession = kBase.newKieSession() // Drools rule session; this is created on the driver side
  batchDF.foreach(row => { // this piece of code runs on the executors
    val jsonData: JSONData = jsonHandler.convertStringToJSONType(row.mkString)
    ruleSession.insert(jsonData) // NullPointerException here because ruleSession is not available on the executor
  })
  ruleHandler.processRule(ruleSession) // again, this is in driver scope
}
In the above code, the problem I am facing is that the function passed to foreachBatch is executed on the driver side, while the code inside batchDF.foreach is executed on the worker/executor side, and so it fails to get the ruleSession.
Is there any way to run the whole function on the executor side?
OR
Is there a better way to accumulate all the data of a batch DataFrame after transformation and send it to the next process from within the executor/worker?
I think this might work: rather than running foreach, you could use foreachBatch or foreachPartition (or a map variant like mapPartitions if you want to return info). In that block, open a connection to the Drools system. From that point, iterate over the dataset within each partition (or batch), sending each record to the Drools system (or you might send the whole chunk to Drools). At the end of the foreachPartition / foreachBatch block, close the connection (if applicable).
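A minimal sketch of that pattern, assuming the question's kBase, jsonHandler and ruleHandler objects are serializable (or re-creatable) on the executors; everything inside handlePartition runs on the executor that owns the partition:
import org.apache.spark.sql.{DataFrame, Row}
// Sketch only: open the rule session per partition, feed it, fire the rules, close it.
val function = (batchDF: DataFrame, batchId: Long) => {
  val handlePartition: Iterator[Row] => Unit = rows => {
    val ruleSession = kBase.newKieSession() // "open the connection" on the executor
    rows.foreach(row => ruleSession.insert(jsonHandler.convertStringToJSONType(row.mkString)))
    ruleHandler.processRule(ruleSession) // process this partition's chunk as a batch
    ruleSession.dispose() // "close the connection"
  }
  batchDF.foreachPartition(handlePartition)
}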
#codeaperature, this is how I achieved batching, inspired by your answer; posting it as an answer because it exceeds the word limit of a comment.
Using foreach on the DataFrame and passing in a ForeachWriter.
Initializing the rule session in the open method of the ForeachWriter.
Adding each input JSON to the rule session in the process method.
Executing the rules in the close method, with the rule session loaded with the batch of data.
//Scala code:
val dataFrame = sparkSession.readStream
  .format("kinesis")
  .option("streamName", streamName)
  .option("region", region)
  .option("endpointUrl", endpointUrl)
  .option("initialPosition", "TRIM_HORIZON")
  .load()

val query = dataFrame
  .selectExpr("CAST(data as STRING) as krecord")
  .writeStream
  .foreach(dataConsumer)
  .start()

val dataConsumer = new ForeachWriter[Row] {
  var ruleSession: KieSession = null

  def open(partitionId: Long, version: Long): Boolean = { // open is called first, once for every batch
    ruleSession = kBase.newKieSession()
    true
  }

  def process(row: Row) = { // process is called for each record of the batch
    val jsonData: JSONData = jsonHandler.convertStringToJSONType(row.mkString)
    ruleSession.insert(jsonData) // add every input JSON to the rule session
  }

  def close(errorOrNull: Throwable): Unit = { // close is called after process has run for all records in the batch
    val factCount = ruleSession.getFactCount
    if (factCount > 0) {
      ruleHandler.processRule(ruleSession) // batch processing of the rules
    }
  }
}

Spark, with tons of memory, failing to join dataframes due to OOM

We're using Spark at work for some batch jobs, but now that we're loading a larger set of data, Spark is throwing java.lang.OutOfMemoryError. We're running with YARN as the resource manager, but in client mode.
Driver memory = 64gb
Driver cores = 8
Executors = 8
Executor memory = 20gb
Executor cores = 5
Deploy mode = client
The driver first makes many network queries to other services to get data, which it writes in chunks of compressed parquet (using Spark) to many temporary sub-directories of the main batch directory. In aggregate we're talking about 100,000,000 case classes with ~40 properties written to compressed parquet using Spark -> dataframe -> parquet. This part works fine.
Once that work is done, we have two sets of data: people and animals. We need to join the animals on the id of the people they belong to. But first, we clean up a bunch of animal date columns to make sure they look right (birthdays, vaccination dates, etc.).
The amount of data housed in each animal or people sub-directory is not massive. It's actually quite small - maybe 100kb per directory and ~800 directories. Again, it's snappy-compressed parquet.
Previously, the final size of the joined directory's parquet was about 130 megabytes. I think with our new data we'll have about double that, but I just don't understand how we hit memory issues during this process.
I would greatly appreciate any wisdom anyone has to offer.
Here are some methods used in this joining process:
def fromHdfs(path: String): DataFrame = {
  // Construct Seq of sub-directories with data...
  val dataframes = dirs.map(dir =>
    spark.read.format("parquet").option("header", "true").load(s"$root/$dir"))
  // Concatenate each dataframe into one resulting df
  dataframes.reduce(_ union _)
}
private def convertDates(df: DataFrame, dateCols: Seq[String]): DataFrame = {
  // Clean up dates from a pre-determined list of 'dateCol' strings
  df.columns.intersect(dateCols).foldLeft(df)((newDf, col) =>
    newDf.withColumn(col, unix_timestamp(df(col).cast("string"), "yyyyMMdd")
      .cast("timestamp")
      .cast("date")
    ))
}
// Join the people and animals dataframes on 'id'
def peopleWithAnimals(people: DataFrame, animals: DataFrame)(implicit spark: SparkSession): DataFrame = {
  // The only collect in here, just to get the column names to foldLeft over
  val cols = animals.select("animal").distinct.select("animal").rdd.map(r => r(0)).collect().map(_.toString)
  val animalsReshaped = cols.foldLeft(animals) { (newDf, colName) =>
    newDf.withColumn(colName, when($"animal" === colName, animals("value")).otherwise(null))
  }
  val peopleNoDups = people.dropDuplicates()
  val animalsNoDups = animalsReshaped.dropDuplicates()
  convertDates(peopleNoDups.join(animalsNoDups, "id"), dateCols)
}
The above methods are used in the final method, the entirety of which is as follows:
def writeJoined(....) = {
  // Read all of the animal and people data subdirectories into dataframes
  val animals = Read.animalsFromHdfs(sourcePath)
  val people = Read.ownersFromHdfs(sourcePath)

  // Join the dataframes on id
  val joined = Join.peopleWithAnimals(animals, people)

  // Write the result to HDFS
  joined.write.option("header", "true").parquet(destinationPath)
}
Our application now gets as far as creating the temporary directory that the joined data will be written to, but runs out of memory somewhere in the process after that.
18/06/14 20:37:39 INFO scheduler.TaskSetManager: Finished task 1196.0 in stage 1320.0 (TID 61383) in 1388 ms on machine-name (executor 8) (1242/24967)
JVMDUMP039I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" at 2018/06/14 20:38:39 - please wait.
JVMDUMP032I JVM requested System dump using '/appfolder/core.20180614.203839.27070.0001.dmp' in response to an event
and
java.lang.OutOfMemoryErrorException in thread "Spark Context Cleaner" Exception in thread "DataStreamer for file /spark2-history/application_1527888960577_0084.inprogress block BP-864704807-49.70.7.82-1525204078667:blk_1073896985_156181" Exception in thread "SparkListenerBus" Exception in thread "dag-scheduler-event-loop" java.lang.OutOfMemoryError: Exception in thread "RequestSenderProxy-ts" Java heap space
java.lang.OutOfMemoryError: Exception in thread "task-result-getter-0"
Exception in thread "task-result-getter-3" Exception in thread "heartbeat-receiver-event-loop-thread"
Exception in thread "LeaseRenewer:valrs_dev_usr#companyhost:companyport" java/lang/OutOfMemoryError: Java heap space

Spark Multiple Output Paths result in Multiple Input Reads

First, apologies for the title, I wasn't sure how to eloquently describe this succinctly.
I have a Spark job that parses logs into JSON and then, using Spark SQL, converts specific columns to ORC and writes them to various paths. For example:
val logs = sc.textFile("s3://raw/logs")

val jsonRows = logs.mapPartitions(partition => {
  partition.map(log => {
    logToJson.parse(log)
  })
})

jsonRows.foreach(r => {
  val contentPath = "s3://content/events/"
  val userPath = "s3://users/events/"

  val contentDf = sqlSession.read.schema(contentSchema).json(r)
  val userDf = sqlSession.read.schema(userSchema).json(r)
  val userDfFiltered = userDf.select("*").where(userDf("type").isin("users"))

  // Save Data
  val contentWriter = contentDf.write.mode("append").format("orc")
  contentWriter.save(contentPath)
  val userWriter = userDf.write.mode("append").format("orc")
  userWriter.save(userPath)
})
When I wrote this I expected that the parsing would occur one time, and then it would write to the respective locations afterward. However, it seems that it is executing all of the code in the file twice - once for content and once for users. Is this expected? I would prefer that I don't end up transferring the data from S3 and parsing twice, as that is the largest bottleneck. I am attaching an image from the Spark UI to show the duplication of tasks for a single Streaming Window. Thanks for any help you can provide!
Okay, this kind of nested DataFrame is a no-go. DataFrames are meant to be a data structure for big datasets that won't fit into normal data structures (like Seq or List) and that need to be processed in a distributed way. A DataFrame is not just another kind of array. What you are attempting to do here is to create a DataFrame per log line, which makes little sense.
As far as I can tell from the (incomplete) code you have posted here, you want to create two new DataFrames from your original input (the logs) which you then want to store in two different locations. Something like this:
val logs = sc.textFile("s3://raw/logs")
val contentPath = "s3://content/events/"
val userPath = "s3://users/events/"

val jsonRows = logs
  .mapPartitions(partition => {
    partition.map(log => logToJson.parse(log))
  })
  .toDF()
  .cache() // Or use persist() if the dataset is larger than will fit in memory

jsonRows
  .write
  .format("orc")
  .save(contentPath)

jsonRows
  .filter(col("type").isin("users"))
  .write
  .format("orc")
  .save(userPath)
Hope this helps.

Issue in inserting data to Hive Table using Spark and Scala

I am new to Spark. Here is something I wanna do.
I have created two data streams; the first one reads data from a text file and registers it as a temp table using HiveContext. The other one continuously gets RDDs from Kafka, and for each RDD it creates a DataFrame and registers the contents as a temp table. Finally, I join these two temp tables on a key to get the final result set. I want to insert that result set into a Hive table, but I am out of ideas. I tried to follow some examples, but they only create a table with one column in Hive, and that one is not readable. Could you please show me how to insert results into a particular database and table in Hive? Note that I can see the results of the join using the show function, so the real challenge lies in the insertion into the Hive table.
Below is the code I am using.
imports.....

object MSCCDRFilter {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setAppName("Flume, Kafka and Spark MSC CDRs Manipulation")
    val sc = new SparkContext(sparkConf)
    val sqlContext = new HiveContext(sc)
    import sqlContext.implicits._

    val cgiDF = sc.textFile("file:///tmp/omer-learning/spark/dim_cells.txt")
      .map(_.split(","))
      .map(p => CGIList(p(0).trim, p(1).trim, p(2).trim, p(3).trim))
      .toDF()
    cgiDF.registerTempTable("my_cgi_list")

    val CGITable = sqlContext.sql("select * from my_cgi_list")
    CGITable.show() // this CGITable is a structure I defined in the project

    val streamingContext = new StreamingContext(sc, Seconds(10))
    val zkQuorum = "hadoopserver:2181"
    val topics = Map[String, Int]("FlumeToKafka" -> 1)
    val messages: ReceiverInputDStream[(String, String)] =
      KafkaUtils.createStream(streamingContext, zkQuorum, "myGroup", topics)

    val logLinesDStream = messages.map(_._2) // get the data
    logLinesDStream.print()

    val MSCCDRDStream = logLinesDStream.map(MSC_KPI.parseLogLine) // change MSC_KPI to MCSCDR_GO if you want to change the class
    // MSCCDR_GO and MSC_KPI are structures defined in the project

    MSCCDRDStream.foreachRDD(MSCCDR => {
      println("+++++++++++++++++++++NEW RDD =" + MSCCDR.count())
      if (MSCCDR.count() == 0) {
        println("==================No logs received in this time interval=================")
      } else {
        val dataf = sqlContext.createDataFrame(MSCCDR)
        dataf.registerTempTable("hive_msc")
        cgiDF.registerTempTable("my_cgi_list")
        val sqlquery = sqlContext.sql("select a.cdr_type, a.CGI, a.cdr_time, a.mins_int, b.Lat, b.Long, b.SiteID " +
          "from hive_msc a left join my_cgi_list b on a.CGI = b.CGI")
        sqlquery.show()

        sqlContext.sql("SET hive.exec.dynamic.partition = true;")
        sqlContext.sql("SET hive.exec.dynamic.partition.mode = nonstrict;")
        sqlquery.write.mode("append").partitionBy("CGI").saveAsTable("omeralvi.msc_data")

        val FilteredCDR = sqlContext.sql("select p.*, q.* " +
          "from MSCCDRFiltered p left join my_cgi_list q " +
          "on p.CGI = q.CGI")
        println("======================print result =================")
        FilteredCDR.show()
      }
    })

    streamingContext.start()
    streamingContext.awaitTermination()
  }
}
I have had some success writing to Hive, using the following:
dataFrame
  .coalesce(n)
  .write
  .format("orc")
  .options(Map("path" -> savePath))
  .mode(SaveMode.Append)
  .saveAsTable(fullTableName)
We never followed through on our attempts to use partitions, because I think there was some issue with our desired partitioning column.
The only limitation is with concurrent writes: if the table does not exist yet, then any task that tries to create the table (because it did not exist when it first attempted to write) will throw an exception.
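A hedged workaround for that race, not part of the original answer: create the table once from the driver before the concurrent writers start, for example by writing an empty slice of the DataFrame with SaveMode.Ignore, so every later append finds the table already in place. savePath and fullTableName are the same placeholders as above.
import org.apache.spark.sql.{DataFrame, SaveMode}
// Sketch only: pre-create the table so concurrent appends never race on CREATE TABLE.
def ensureTableExists(df: DataFrame, savePath: String, fullTableName: String): Unit = {
  df.limit(0) // schema only, no rows
    .write
    .format("orc")
    .options(Map("path" -> savePath))
    .mode(SaveMode.Ignore) // no-op if the table already exists
    .saveAsTable(fullTableName)
}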
Be aware that writing to Hive from streaming applications is usually bad design, as you will often write many small files, which are very inefficient to read and store. So if you write to Hive more often than every hour or so, you should make sure you include logic for compaction, or add an intermediate storage layer better suited to transactional data.

Spark is duplicating work

I am facing a strange behaviour from Spark. Here's my code:
object MyJob {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext()
    val sqlContext = new hive.HiveContext(sc)

    val query = "<Some Hive Query>"
    val rawData = sqlContext.sql(query).cache()

    val aggregatedData = rawData.groupBy("group_key")
      .agg(
        max("col1").as("max"),
        min("col2").as("min")
      )

    val redisConfig = new RedisConfig(new RedisEndpoint(sc.getConf))

    aggregatedData.foreachPartition {
      rows =>
        writePartitionToRedis(rows, redisConfig)
    }

    aggregatedData.write.parquet(s"/data/output.parquet")
  }
}
Contrary to my intuition, the Spark scheduler yields two jobs, one for each data sink (Redis, HDFS/Parquet). The problem is that the second job also performs the Hive query, doubling the work. I assumed both write operations would share the data from the aggregatedData stage. Is something wrong, or is this the expected behaviour?
You've missed a fundamental concept of Spark: laziness.
An RDD does not contain any data; all it is is a set of instructions that will be executed when you call an action (like writing data to disk/HDFS). If you reuse an RDD (or DataFrame), there is no stored data, just stored instructions that will need to be evaluated every time you call an action.
If you want to reuse data without needing to re-evaluate an RDD, use .cache() or, preferably, persist. Persisting an RDD allows you to store the result of a transformation so that the RDD does not need to be re-evaluated in future iterations.
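A minimal sketch of how that could look in the job above, reusing the question's own names (rawData, redisConfig, writePartitionToRedis) and picking MEMORY_AND_DISK purely as an illustrative storage level:
import org.apache.spark.sql.functions.{max, min}
import org.apache.spark.storage.StorageLevel
// Sketch only: persist the aggregated result so both sinks reuse it; in the
// original code only rawData is cached, so the groupBy/agg still runs once per action.
val aggregatedData = rawData.groupBy("group_key")
  .agg(max("col1").as("max"), min("col2").as("min"))
  .persist(StorageLevel.MEMORY_AND_DISK)

aggregatedData.foreachPartition { rows =>
  writePartitionToRedis(rows, redisConfig) // first action materializes the cache
}
aggregatedData.write.parquet("/data/output.parquet") // second action reads from the cache

aggregatedData.unpersist()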