Writing data overwrites existing partitions - scala

I have a Spark job where I am writing data as Parquet to S3.
val partitionCols = Seq("date", "src")
df
.coalesce(10)
.write
.mode(SaveMode.Overwrite)
.partitionBy(partitionCols: _*)
.parquet(params.outputPathParquet)
When I run the job on EMR it overwrites all the partitions and writes the data to S3.
e.g. the data looks like this:
s3://foo/date=2021-01-01/src=X
s3://foo/date=2021-11-01/src=X
s3://foo/date=2021-10-01/src=X
where
params.outputPathParquet = s3://foo
When I run the job for another day,
e.g. 2021-01-02, it replaces all existing partitions and the data looks like the following:
s3://foo/date=2021-01-02/src=X
Any ideas what might be happening?

If you just need to append data, you can change the SaveMode:
.mode(SaveMode.Append)
If you need to overwrite specific partitions, take a look at this question: Overwrite specific partitions in spark dataframe write method.
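For reference, a rough sketch of the dynamic partition overwrite approach (available since Spark 2.3), which replaces only the partitions present in the DataFrame being written and leaves the others untouched; df, partitionCols and params.outputPathParquet are the names from the question:
// The default mode is "static", which deletes all existing partitions under the output path before writing.
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
df
  .coalesce(10)
  .write
  .mode(SaveMode.Overwrite)
  .partitionBy(partitionCols: _*)
  .parquet(params.outputPathParquet)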

Related

Spark Structured Streaming checkpoint usage in production

I have trouble understanding how checkpoints work with Spark Structured Streaming.
I have a Spark process that generates some events, which I log in a Hive table.
For those events, I receive a confirmation event in a Kafka stream.
I created a new Spark process that
reads the events from the Hive log table into a DataFrame,
joins those events with the stream of confirmation events using Spark Structured Streaming, and
writes the joined DataFrame to an HBase table.
I tested the code in spark-shell and it works fine; below is the pseudocode (I'm using Scala).
val tableA = spark.table("tableA")
val startingOffsets = "earliest"
val streamOfData = spark.readStream
.format("kafka")
.option("startingOffsets", startingOffsets)
.option("otherOptions", otherOptions)
.load()
val joinTableAWithStreamOfData = streamOfData.join(tableA, Seq("a"), "inner")
joinTableAWithStreamOfData
.writeStream
.foreach(
writeDataToHBaseTable()
)
.start()
.awaitTermination()
Now I would like to schedule this code to run periodically, e.g. every 15 minutes, and I'm struggling to understand how to use checkpoints here.
At every run of this code, I would like to read from the stream only the events I haven't read yet in the previous run, and inner join those new events with my log table, so as to write only new data to the final HBase table.
I created a directory in HDFS where to store the checkpoint file.
I provided that location to the spark-submit command I use to call the spark code.
spark-submit --conf spark.sql.streaming.checkpointLocation=path_to_hdfs_checkpoint_directory
--all_the_other_settings_and_libraries
At the moment the code runs every 15 minutes without any errors, but it basically doesn't do anything, since it is not writing the new events to the HBase table.
Also, the checkpoint directory is empty, while I assume some files should be written there?
And does the readStream call need to be adapted so that it starts reading from the latest checkpoint?
val streamOfData = spark.readStream
.format("kafka")
.option("startingOffsets", startingOffsets) ??
.option("otherOptions", otherOptions)
I'm really struggling to understand the Spark documentation regarding this.
Thank you in advance!
Trigger
"Now I would like to schedule this code to run periodically, e.g. every 15 minutes, and I'm struggling understanding how to use checkpoints here.
In case you want your job to be triggered every 15 minutes, you can make use of Triggers.
You do not need to "use" checkpointing specifically, but just provide a reliable (e.g. HDFS) checkpoint location, see below.
Checkpointing
"At every run of this code, I would like to read from the stream only the events I haven't read yet in the previous run [...]"
When reading data from Kafka in a Spark Structured Streaming application it is best to have the checkpoint location set directly in your StreamingQuery. Spark uses this location to create checkpoint files that keep track of your application's state and also record the offsets already read from Kafka.
When restarting the application, it will check these checkpoint files to understand where to continue reading from Kafka, so it does not skip or miss any messages. You do not need to set the startingOffsets manually.
It is important to keep in mind that only specific changes in your application's code are allowed for the checkpoint files to remain usable across restarts. A good overview can be found in the Structured Streaming Programming Guide under Recovery Semantics after Changes in a Streaming Query.
Overall, for production Spark Structured Streaming applications reading from Kafka I recommend the following structure:
val spark = SparkSession.builder().[...].getOrCreate()
val streamOfData = spark.readStream
.format("kafka")
// option startingOffsets is only relevant for the very first time this application is running. After that, checkpoint files are being used.
.option("startingOffsets", startingOffsets)
.option("otherOptions", otherOptions)
.load()
// perform any kind of transformations on streaming DataFrames
val processedStreamOfData = streamOfData.[...]
val streamingQuery = processedStreamOfData
.writeStream
.foreach(
writeDataToHBaseTable()
)
.option("checkpointLocation", "/path/to/checkpoint/dir/in/hdfs/"
.trigger(Trigger.ProcessingTime("15 minutes"))
.start()
streamingQuery.awaitTermination()

How to extract the field values from a dataset in spark using scala?

I have a DataFrame which reads a stream from Kafka as a source, and it is then converted to a Dataset after applying a schema. Now how do I get a particular field value from the Dataset to work with it?
case class Fruitdata(id:Int, name:String, color:String, price:Int)
//say this function reads streams from kafka and gives me the dataframe
val df = readFromKafka(sparkSession,inputTopic)
//say this converts dataframe to a dataset with schema defined accordingly
val ds: Dataset[Fruitdata] = getDataSet[Fruitdata](df,schema)
//and say the incoming stream data is -
//"{"id":1,"name":"Grapes","color":"Green","price":15}"
//Now how to get a particular field like name, price and so on
//this doesn't work, it says "Queries with streaming sources must be executed with writeStream.start()"
ds.first()
//same here
ds.show
//also, can I get the complete string as input? this gives me Dataset[String]
val ds2 = ds.flatMap((f: Fruitdata)=>List(s"${f.id},${f.name}"))
I think it's because you're trying to read from Kafka.
When you run with Spark Streaming, you cannot run some of these commands, as they don't apply to streaming sources. For example, if you are reading from Kafka, there is nothing like first, because the data arrives in micro-batches and first would only refer to a single micro-batch. Please try something like the "console" sink to output your records to the console. Also make sure to read a few sample records and not the real Kafka feed.
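A minimal sketch of that suggestion, reusing ds and the Fruitdata fields from the question (assumes the session's implicits, e.g. sparkSession.implicits._, are imported for the encoder):
// Derive the fields you need as part of the streaming query itself...
val namesAndPrices = ds.map(f => s"${f.name},${f.price}")
// ...and let a sink materialize the result; first()/show() only work on batch Datasets.
namesAndPrices.writeStream
  .format("console")
  .option("truncate", "false")
  .outputMode("append")
  .start()
  .awaitTermination()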

structured streaming read based on kafka partitions

I am using Spark Structured Streaming to read incoming messages from a Kafka topic and write them to multiple Parquet tables based on the incoming message.
So I created a single readStream, as the Kafka source is common, and for each Parquet table created a separate writeStream in a loop. This works fine, but the readStream is creating a bottleneck: for each writeStream it creates a new readStream, and there is no way to cache the DataFrame which has already been read.
val kafkaDf=spark.readStream
.format("kafka")
.option("kafka.bootstrap.servers", conf.servers)
.option("subscribe", conf.topics)
// .option("earliestOffset","true")
.option("failOnDataLoss",false)
.load()
foreach table {
//filter the data from source based on table name
//write to parquet
parquetDf.writeStream.format("parquet")
.option("path", outputFolder + File.separator+ tableName)
.option("checkpointLocation", "checkpoint_"+tableName)
.outputMode("append")
.trigger(Trigger.Once())
.start()
}
Now every writeStream is creating a new consumer group and reading the entire data from Kafka, then filtering it and writing to Parquet. This creates a huge overhead. To avoid this overhead, I could partition the Kafka topic to have as many partitions as the number of tables, and then each readStream would only read from a given partition. But I don't see a way to specify partition details as part of the Kafka readStream.
If the data volume is not that high, write your own sink and collect the data of each micro-batch; then you should be able to cache that DataFrame and write it to different locations. It needs some tweaks, but it will work.
You can use the foreachBatch sink and cache the DataFrame. Hopefully that works.
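A rough sketch of that foreachBatch idea (available since Spark 2.4; syntax as in the Spark 3 docs), reusing kafkaDf, outputFolder and tableName from the question; tableNames, the checkpoint directory and the filter column are hypothetical placeholders for your own routing logic:
import java.io.File
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.streaming.Trigger

kafkaDf.writeStream
  .trigger(Trigger.Once())
  .option("checkpointLocation", "checkpoint_all_tables") // hypothetical single checkpoint dir for the one query
  .foreachBatch { (batch: DataFrame, batchId: Long) =>
    batch.persist() // each micro-batch is read from Kafka only once
    tableNames.foreach { tableName =>
      batch
        .filter(col("tableName") === tableName) // hypothetical filter; replace with your per-table routing
        .write
        .mode("append")
        .parquet(outputFolder + File.separator + tableName)
    }
    batch.unpersist()
  }
  .start()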

How to execute an action before start()?

I am developing a Spark streaming job (using Structured Streaming, not DStreams). I get a message from Kafka containing many comma-separated fields, of which the first column is a filename. Based on that filename I have to read the file from HDFS, create a DataFrame, and operate further on it. This seems simple, but Spark does not allow me to run any actions before start() is called. The Spark documentation says the same:
In addition, there are some Dataset methods that will not work on
streaming Datasets. They are actions that will immediately run queries
and return results, which does not make sense on a streaming Dataset.
Below is what I have tried.
object StructuredStreamingExample {
case class filenameonly(value:String)
def main(args:Array[String])
{
val spark = SparkSession.builder.appName("StructuredNetworkWordCount").master("local[*]").getOrCreate()
spark.sqlContext.setConf("spark.sql.shuffle.partitions", "5")
import spark.implicits._
val lines = spark
.readStream
.format("kafka")
.option("kafka.bootstrap.servers", "localhost:9092")
.option("subscribe", "strtest")
.load()
val values=lines.selectExpr("CAST(value AS STRING)").as[String]
val filename = values.map(x => x.split(",")(0)).toDF().select($"value")
//Here, how do I convert the filename, which is a DataFrame, to a String and apply it to spark.read.textFile(filename)?
datareadfromhdfs
.writeStream
.trigger(ProcessingTime("10 seconds"))
.outputMode("append")
.format("console")
.start()
.awaitTermination()
}
}
Now, in the above code, after I get the filename, which is a DataFrame, how do I convert it to a String so that I can do spark.read.textFile(filename) to read the file from HDFS?
I'm not sure it's the best use of Spark streaming, but in a case like this I would call filename.foreachRDD, read the HDFS files in there, and do whatever you need afterwards.
(Keep in mind that when running inside a foreachRDD, you cannot use the global Spark session but need to getOrCreate it from the builder, like this: val sparkSession = SparkSession.builder.config(myCurrentForeachRDD.sparkContext.getConf).getOrCreate())
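Note that foreachRDD belongs to the DStream API; in Structured Streaming the closest equivalent is foreachBatch (Spark 2.4+). A rough sketch under that substitution, reusing values and spark from the question and assuming each micro-batch is small enough to collect the filenames to the driver (the checkpoint path is hypothetical):
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.streaming.Trigger

values.writeStream
  .trigger(Trigger.ProcessingTime("10 seconds"))
  .option("checkpointLocation", "/tmp/filename-stream-checkpoint") // hypothetical location
  .foreachBatch { (batch: Dataset[String], batchId: Long) =>
    // Inside foreachBatch the data is a plain batch Dataset, so actions like collect() are allowed.
    val filenames = batch.map(_.split(",")(0)).distinct().collect()
    filenames.foreach { filename =>
      val dataReadFromHdfs = spark.read.textFile(filename)
      // ...transform dataReadFromHdfs and write it wherever needed
    }
  }
  .start()
  .awaitTermination()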
You seem to rely on a stream to tell you where to look for and load files. Have you tried simply using a file stream on that folder and letting Spark monitor and read new files automatically for you?
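If the files always land in one known directory, that could look roughly like this (the path is a hypothetical example); Spark then picks up new files in that folder as they appear:
val fileStream = spark.readStream
  .format("text")
  .load("hdfs:///path/to/incoming/files") // hypothetical directory to monitor

fileStream.writeStream
  .format("console")
  .start()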
It is surely not the best use case for Spark Structured Streaming. If you understand Spark Structured Streaming correctly, all data transformations/aggregations should happen in the query that generates the result table. However, you can still implement a workaround where you write the code that reads data from HDFS inside (flat)mapGroupsWithState. But, again, it is not advisable to do so.

Split an RDD into multiple RDDs

I have a pair RDD[String, String] where the key is a string and the value is HTML. I want to split this RDD into n RDDs based on n keys and store them in HDFS.
htmlRDD = [key1,html
key2,html
key3,html
key4,html
........]
Split this RDD based on keys and store the HTML from each RDD individually on HDFS. Why do I want to do that? When I try to store the HTML from the main RDD to HDFS, it takes a lot of time as some tasks are denied committing by the output coordinator.
I'm doing this in Scala.
htmlRDD.saveAsHadoopFile("hdfs:///Path/",classOf[String],classOf[String], classOf[Formatter])
You can also try this instead of splitting the RDD:
htmlRDD.saveAsTextFile("hdfs://HOST:PORT/path/");
I tried this and it worked for me. I had an RDD[JSONObject] and it wrote the toString() of the JSON objects very well.
Spark saves each RDD partition into one HDFS file. So to achieve good parallelism your source RDD should have many partitions (how many actually depends on the size of the whole data). So I think you want to split your RDD not into several RDDs, but rather to have one RDD with many partitions.
You can do that with repartition() or coalesce().
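A minimal sketch of that suggestion, reusing htmlRDD and the saveAsHadoopFile call from the question; the partition count is just an illustrative number:
// Give the RDD more partitions so the write is spread over more parallel tasks
// (100 is an arbitrary example; size it to your data volume and cluster).
val repartitionedHtmlRDD = htmlRDD.repartition(100)
repartitionedHtmlRDD.saveAsHadoopFile("hdfs:///Path/", classOf[String], classOf[String], classOf[Formatter])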