I need to batch my Kafka stream into time windows of 10 minutes each and then run some batch processing on it.
Note: records below have a timestamp field
val records = spark.readStream
.format("kafka")
.option("kafka.bootstrap.servers", brokerPool)
.option("subscribe", topic)
.option("startingOffsets", kafkaOffset)
.load()
I add a time window to each record using:
.withColumn("window", window($"timing", windowDuration))
I created some helper classes like
case class TimingWindow(
start: java.sql.Timestamp,
end: java.sql.Timestamp
)
case class RecordWithWindow(
record: MyRecord,
groupingWindow: TimingWindow
)
Now I have a Dataset of type [RecordWithWindow]
All this works very well.
Next,
metricsWithWindow
  .groupByKey(_.groupingWindow)
  // By grouping, I get several records per time window,
  // resulting in an object of the type below, which I write out to HDFS.
case class WindowWithRecords(
records: Seq[MyRecord],
window: TimingWindow
)
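The collapse from the grouped records into WindowWithRecords looks roughly like this (a sketch: mapGroups is one way to do it, and it assumes metricsWithWindow is a Dataset[RecordWithWindow]):
import org.apache.spark.sql.Dataset
import spark.implicits._ // encoders for the case classes

val windowed: Dataset[WindowWithRecords] =
  metricsWithWindow
    .groupByKey(_.groupingWindow)
    .mapGroups { (window, recordsInWindow) =>
      // recordsInWindow is an Iterator[RecordWithWindow]
      WindowWithRecords(recordsInWindow.map(_.record).toSeq, window)
    }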
When I examine HDFS:
Example:
Expected:
Each WindowWithRecords object having a unique TimingWindow
WindowWithRecordsA(TimingWindowA, Seq(MyRecordA, MyRecordB, MyRecordC))
Actual:
More than one WindowWithRecords object with the same TimingWindow
WindowWithRecordsA(TimingWindowA, Seq(MyRecordA, MyRecordB))
WindowWithRecordsB(TimingWindowA, Seq(MyRecordC))
Looks like the groupByKey logic is not working well.
I hope my question is clear. Any pointers would be helpful.
Found the problem:
I was not using an explicit trigger when processing the window. As a result, Spark was creating micro batches as soon as it could, as opposed to doing it at the end of the window.
import org.apache.spark.sql.streaming.Trigger

streamingQuery
  .writeStream
  .trigger(Trigger.ProcessingTime(windowDuration))
  ...
  .start()
This was a result of my misunderstanding the Spark documentation.
Note: groupByKey uses the key object's hashCode, so it is important to make sure that the hashCode of the key object is consistent.
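As a quick sketch of that point: a case class key derives hashCode and equals from its fields, so logically identical windows compare equal and group together, whereas a key class with default reference-based identity would split them.
// Value-based key: safe to group on, since equal fields mean equal keys.
case class StableKey(start: java.sql.Timestamp, end: java.sql.Timestamp)

// Identity-based key: two instances with identical fields get different hashCodes,
// so logically identical windows would land in separate groups.
class UnstableKey(val start: java.sql.Timestamp, val end: java.sql.Timestamp)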
Related
I have several streaming-table jobs in PySpark (running on Databricks).
For the most part, each one looks something like this:
stream = (
spark
.readStream
.option("maxFilesPerTrigger", 20)
.table("my_table")
)
transformed_df = stream.transform(some_function)
_ = (
transformed_df
.writeStream
.trigger(availableNow=True)
.outputMode("append")
.option("checkpointLocation", "/path/to/_checkpoints/output_table/input_table/")
.foreachBatch(lambda df, epochId: batch_writer(df, epochId, "output_table"))
.start()
.awaitTermination()
)
This works as expected, and I can see each batch commit and offset in the checkpoints path. The problem is that, on occasion, the stream writer just carries on (with no new data coming in) and I can see the commits and offsets go into the thousands. If I stop the job it does not happen again for a while, but then it starts again later.
I am running PySpark on Databricks (Apache Spark 3.2.1, Scala 2.12).
Should I add a check like if len(transformed_df.take(1)) > 0: # write stream? I suspect this will almost always be true, since the stream is only evaluated at write time.
I am working on a Spark Structured Streaming job that consumes Kafka messages, does an aggregation, and saves the data into an Apache Hudi table every 10 seconds. The code below works, but it overwrites the Hudi table data on every batch. I have not yet figured out why this happens. Is it Spark Structured Streaming or Hudi behavior? I am using MERGE_ON_READ, so the table files should not be deleted on every update, yet I don't know why it happens. Because of this issue, my other job, which reads this table, fails.
spark.readStream
    .format('kafka')
    .option("kafka.bootstrap.servers", "localhost:9092")
    ...
    ...
df1 = df.groupby('a', 'b', 'c').agg(sum('d').alias('d'))
(
    df1.writeStream
    .format('org.apache.hudi')
    .option('hoodie.table.name', 'table1')
    .option("hoodie.datasource.write.table.type", "MERGE_ON_READ")
    .option('hoodie.datasource.write.keygenerator.class', 'org.apache.hudi.keygen.ComplexKeyGenerator')
    .option('hoodie.datasource.write.recordkey.field', "a,b,c")
    .option('hoodie.datasource.write.partitionpath.field', 'a')
    .option('hoodie.datasource.write.table.name', 'table1')
    .option('hoodie.datasource.write.operation', 'upsert')
    .option('hoodie.datasource.write.precombine.field', 'c')
    .outputMode('complete')
    .option('path', '/Users/lucy/hudi/table1')
    .option("checkpointLocation", "/Users/lucy/checkpoint/table1")
    .trigger(processingTime="10 second")
    .start()
    .awaitTermination()
)
Based on your configuration, the explanation for this problem may be that you read the same keys in each batch (the same a, b, c with different values of d), and since you use an upsert operation, Hudi replaces the old values with the new ones. Try using insert instead of upsert, or modify the Hudi record key, depending on what you want to do.
I have a Spark streaming processor.
The Dataframe dfNewExceptions has duplicates (duplicate by "ExceptionId").
Since this is a streaming dataset, the below query fails:
val dfNewUniqueExceptions = dfNewExceptions.sort(desc("LastUpdateTime"))
.coalesce(1)
.dropDuplicates("ExceptionId")
val dfNewExceptionCore = dfNewUniqueExceptions.select("ExceptionId", "LastUpdateTime")
dfNewExceptionCore.writeStream
.format("console")
// .outputMode("complete")
.option("truncate", "false")
.option("numRows",5000)
.start()
.awaitTermination(1000)
Exception in thread "main" org.apache.spark.sql.AnalysisException: Sorting is not supported on streaming DataFrames/Datasets, unless it is on aggregated DataFrame/Dataset in Complete output mode;;
This is also documented here: https://home.apache.org/~pwendell/spark-nightly/spark-branch-2.0-docs/latest/structured-streaming-programming-guide.html
Any suggestions on how the duplicates can be removed from dfNewExceptions?
I recommend following the approach explained in the Structured Streaming Guide on Streaming Deduplication. There it says:
You can deduplicate records in data streams using a unique identifier in the events. This is exactly same as de-duplication on static using a unique identifier column. The query will store the necessary amount of data from previous records such that it can filter duplicate records. Similar to aggregations, you can use de-duplication with or without watermarking.
With watermark - If there is an upper bound on how late a duplicate record may arrive, then you can define a watermark on an event time column and deduplicate using both the guid and the event time columns. The query will use the watermark to remove old state data from past records that are not expected to get any duplicates any more. This bounds the amount of the state the query has to maintain.
An example in Scala is also given:
val dfExceptions = spark.readStream. ... // columns: ExceptionId, LastUpdateTime, ...
dfExceptions
.withWatermark("LastUpdateTime", "10 seconds")
.dropDuplicates("ExceptionId", "LastUpdateTime")
You can use watermarking to drop duplicates in a specific timeframe.
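Applied to the names in the question, a minimal sketch (assuming dfNewExceptions exposes ExceptionId and LastUpdateTime columns; with a watermark plus dropDuplicates, the default append output mode works and no sort is needed):
val dfNewExceptionCore = dfNewExceptions
  .withWatermark("LastUpdateTime", "10 seconds")   // bounds the state kept for deduplication
  .dropDuplicates("ExceptionId", "LastUpdateTime") // supported on streaming Datasets
  .select("ExceptionId", "LastUpdateTime")

dfNewExceptionCore.writeStream
  .format("console")
  .outputMode("append")
  .option("truncate", "false")
  .start()
  .awaitTermination()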
I am using Spark Structured Streaming (2.3) to write Parquet data to buckets in the cloud (Google Cloud Storage).
I am using the following function:
def writeStreaming(data: DataFrame, format: String, options: Map[String, String], partitions: List[String]): DataStreamWriter[Row] = {
  var dataStreamWrite = data.writeStream
    .format(format)
    .options(options)
    .trigger(Trigger.ProcessingTime("120 seconds"))
  if (!partitions.isEmpty)
    dataStreamWrite = dataStreamWrite.partitionBy(partitions: _*)
  dataStreamWrite
}
Unfortunately, with this approach, I am getting many small files.
I tried to use the trigger approach in order to avoid this, but that didn't work either. Do you have any idea how to handle this, please?
Thanks a lot
The reason you have many small files despite using a trigger is likely that your dataframe has many partitions. To reduce the output to one Parquet file every 2 minutes, you can coalesce to one partition before writing the Parquet files.
var dataStreamWrite = data
.coalesce(1)
.writeStream
.format(format)
.options(options)
.trigger(Trigger.ProcessingTime("120 seconds"))
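Applied to the writeStreaming helper from the question, that would look roughly like this (a sketch under the same assumptions). Keep in mind that coalesce(1) funnels each micro-batch through a single task, so it trades write parallelism for fewer, larger files:
import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.streaming.{DataStreamWriter, Trigger}

def writeStreaming(data: DataFrame,
                   format: String,
                   options: Map[String, String],
                   partitions: List[String]): DataStreamWriter[Row] = {
  // One partition per micro-batch => one file per trigger (per partition directory).
  var dataStreamWrite = data
    .coalesce(1)
    .writeStream
    .format(format)
    .options(options)
    .trigger(Trigger.ProcessingTime("120 seconds"))

  if (!partitions.isEmpty)
    dataStreamWrite = dataStreamWrite.partitionBy(partitions: _*)

  dataStreamWrite
}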
I am using Spark 2.1.0 and Kafka 0.9.0.
I am trying to push the output of a batch Spark job to Kafka. The job is supposed to run every hour, as a batch job rather than as a streaming job.
While looking for an answer online, I could only find Kafka integration with Spark Streaming, and nothing about integration with a batch job.
Does anyone know if such a thing is feasible?
Thanks
UPDATE:
As mentioned by user8371915, I tried to follow what was done in Writing the output of Batch Queries to Kafka.
I used the Spark shell:
spark-shell --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.1.0
Here is the simple code that I tried:
val df = Seq(("Rey", "23"), ("John", "44")).toDF("key", "value")
val newdf = df.select(to_json(struct(df.columns.map(column):_*)).alias("value"))
newdf.write.format("kafka").option("kafka.bootstrap.servers", "localhost:9092").option("topic", "alerts").save()
But I get the error:
java.lang.RuntimeException: org.apache.spark.sql.kafka010.KafkaSourceProvider does not allow create table as select.
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:497)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
... 50 elided
Any idea what this is related to?
Thanks
tl;dr You are using an outdated Spark version. Writes are enabled in Spark 2.2 and later.
Out of the box you can use the Kafka SQL connector (the same one used with Structured Streaming). Include spark-sql-kafka in your dependencies.
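For example, in sbt (a sketch; this assumes Scala 2.11 and Spark 2.2.0, so match the versions to your cluster):
libraryDependencies += "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.2.0"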
Convert the data to a DataFrame containing at least a value column of type StringType or BinaryType.
Write the data to Kafka:
df
.write
.format("kafka")
.option("kafka.bootstrap.servers", server)
.save()
Follow Structured Streaming docs for details (starting with Writing the output of Batch Queries to Kafka).
If you have a dataframe and you want to write it to a Kafka topic, you first need to convert the columns into a "value" column that contains the data in JSON format. In Scala it is:
import org.apache.spark.sql.functions._
val kafkaServer: String = "localhost:9092"
val topicSampleName: String = "kafkatopic"
df.select(to_json(struct("*")).as("value"))
.selectExpr("CAST(value AS STRING)")
.write
.format("kafka")
.option("kafka.bootstrap.servers", kafkaServer)
.option("topic", topicSampleName)
.save()
For this error
java.lang.RuntimeException: org.apache.spark.sql.kafka010.KafkaSourceProvider does not allow create table as select.
at scala.sys.package$.error(package.scala:27)
I think you need to convert the message into a key-value pair; your dataframe should have a value column.
Say you have a dataframe with student_id and score columns:
df.show()
>> student_id | score
>> 1          | 99.00
>> 2          | 98.00
then you should modify your dataframe to
value
{"student_id":1,"score":99.00}
{"student_id":2,"score":98.00}
To convert, you can use code similar to this:
df.select(to_json(struct($"student_id",$"score")).alias("value"))
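Once the dataframe has that single value column, the write itself follows the same pattern as above (a sketch; the broker address and topic name are placeholders):
df.select(to_json(struct($"student_id", $"score")).alias("value"))
  .write
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
  .option("topic", "student-scores")                   // placeholder topic
  .save()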