:)
I've ended up in a (strange) situation where, briefly, I don't want to consume any new records from Kafka, so I want to pause the Spark Streaming consumption (InputDStream[ConsumerRecord]) for all partitions in the topic, do some operations, and finally resume consuming records.
First of all... is this possible?
I've been trying something like this:
val consumer = new KafkaConsumer[String, String](properties)
consumer.subscribe(java.util.Arrays.asList(topicName))
consumer.pause(consumer.assignment())
...
consumer.resume(consumer.assignment())
but I got this:
println(s"Assigned partitions: $consumer.assignment()") --> []
println(s"Paused partitions: ${consumer.paused()}") --> []
println(s"Partitions for: ${consumer.partitionsFor(topicNAme)}") --> [Partition(topic=topicAAA, partition=0, leader=1, replicas=[1,2,3], partition=1, ... ]
Any help to understand what I'm missing, and why I'm getting empty results when it's clear the consumer has partitions assigned, would be welcome!
Versions:
Kafka: 0.10
Spark: 2.3.0
Scala: 2.11.8
Yes, it is possible.
Add checkpointing to your code and pass a persistent storage path (local disk, S3, HDFS),
and whenever you start/resume the job it will pick up the Kafka consumer group info with the consumer offsets from the checkpoint and start processing from where it stopped.
val context = StreamingContext.getOrCreate(checkpointDirectory, functionToCreateContext _)
Spark checkpointing is a mechanism that saves not only the offsets but also the serialized state of the DAG of your stages and jobs. So whenever you restart your job with new code it will:
Read and process the serialized data
Clean the cached DAG stages if there are any code changes in your Spark app
Resume processing the new data with the latest code.
Reading from disk is just a one-time operation, required by Spark to load the Kafka offsets, the DAG, and the old incompletely processed data.
Once that is done, it will keep saving checkpoint data to disk at the default or the specified checkpoint interval.
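A minimal sketch of how the pieces fit together, assuming the spark-streaming-kafka-0-10 integration; the broker address, topic, group id, and checkpoint path below are placeholders, not values from the question:
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

val checkpointDirectory = "hdfs:///checkpoints/my-app" // placeholder persistent path

def functionToCreateContext(): StreamingContext = {
  val conf = new SparkConf().setAppName("my-app")
  val ssc = new StreamingContext(conf, Seconds(10))
  ssc.checkpoint(checkpointDirectory) // enables checkpointing of offsets and DAG state

  val kafkaParams = Map[String, Object](
    "bootstrap.servers" -> "localhost:9092",
    "key.deserializer" -> classOf[StringDeserializer],
    "value.deserializer" -> classOf[StringDeserializer],
    "group.id" -> "my-consumer-group" // the group id is honored by the DStream-based integration
  )

  val stream = KafkaUtils.createDirectStream[String, String](
    ssc, PreferConsistent, Subscribe[String, String](Seq("topicAAA"), kafkaParams))

  stream.map(record => record.value()).foreachRDD { rdd =>
    // process each batch here; the lineage is checkpointed together with the offsets
    println(s"Batch size: ${rdd.count()}")
  }
  ssc
}

// On a fresh start this builds a new context; on restart it restores offsets and DAG state.
val context = StreamingContext.getOrCreate(checkpointDirectory, functionToCreateContext _)
context.start()
context.awaitTermination()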
Spark Streaming provides an option for specifying the Kafka group id, but Spark Structured Streaming does not.
I am trying to achieve an Exactly-Once semantics in Flink-Kafka integration. I have my producer module as below:
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(1)
env.enableCheckpointing(1000)
env.getCheckpointConfig.setMinPauseBetweenCheckpoints(1000) //Gap after which next checkpoint can be written.
env.getCheckpointConfig.setCheckpointTimeout(4000) //Checkpoints have to complete within 4secs
env.getCheckpointConfig.setMaxConcurrentCheckpoints(1) //Only 1 checkpoint can be executed at a time
env.getCheckpointConfig.enableExternalizedCheckpoints(
ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION) //Checkpoints are retained if the job is cancelled explicitly
//env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, 10)) //Number of restart attempts, Delay in each restart
val myProducer = new FlinkKafkaProducer[String](
"topic_name", // target topic
new KeyedSerializationSchemaWrapper[String](new SimpleStringSchema()), // serialization schema
getProperties(), // producer config
FlinkKafkaProducer.Semantic.EXACTLY_ONCE) // producer semantic (exactly-once)
Consumer Module:
val properties = new Properties()
properties.setProperty("bootstrap.servers", "localhost:9092")
properties.setProperty("zookeeper.connect", "localhost:2181")
val consumer = new FlinkKafkaConsumer[String]("topic_name", new SimpleStringSchema(), properties)
I am generating a few records and pushing them to this producer. The records look like this:
1
2
3
4
5
6
..
..
and so on. Suppose that while pushing this data the producer was able to push the data up to the 4th record and then went down due to some failure; when it is up and running again, will it push the records from the 5th onwards? Are my properties enough for that?
I will be adding one property on the consumer side as per the link mentioned by the first user. Should I add the idempotence property on the producer side as well?
My Flink version is 1.13.5, Scala 2.11.12 and I am using Flink Kafka connector 2.11.
I think I am not able to commit the transactions using EXACTLY_ONCE because checkpoints are not written to the mentioned path. Attaching screenshots of the Web UI:
Do I need to set any property for that?
For the consumer side, the Flink Kafka consumer bookkeeps the current offset in the distributed checkpoint, and if the consumer task fails, it is restarted from the latest checkpoint and re-emits from the offset recorded in that checkpoint. For example, suppose the latest checkpoint records offset 3, and after that Flink continues to emit 4 and 5 and then fails over; Flink would then continue to emit records from 4. Note that this does not cause duplication, since the state of all the operators is also rolled back to the state after processing record 3.
For the producer side, Flink uses a two-phase commit [1] to achieve exactly-once. Roughly, the Flink producer relies on Kafka's transactions to write data, and only commits the data formally after the transaction is committed. Users can use Semantic.EXACTLY_ONCE to enable this functionality [2].
[1] https://flink.apache.org/features/2018/03/01/end-to-end-exactly-once-apache-flink.html
[2] https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/connectors/datastream/kafka/#fault-tolerance
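As a rough sketch of the Kafka properties that usually accompany this setup (the broker address and the values here are illustrative, not taken from the question):
import java.util.Properties

// Producer side (sketch): the transaction timeout should not exceed the broker's
// transaction.max.timeout.ms (15 minutes by default), otherwise an EXACTLY_ONCE
// FlinkKafkaProducer cannot initialize its Kafka transactions.
val producerProps = new Properties()
producerProps.setProperty("bootstrap.servers", "localhost:9092")
producerProps.setProperty("transaction.timeout.ms", "900000") // 15 minutes, illustrative

// Downstream consumer side (sketch): read only committed records so that data from
// open or aborted Flink transactions is never exposed before the checkpoint completes.
val consumerProps = new Properties()
consumerProps.setProperty("bootstrap.servers", "localhost:9092")
consumerProps.setProperty("isolation.level", "read_committed")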
I am fairly new to Flink and Kafka and have some data aggregation jobs written in Scala which run in Apache Flink; the jobs consume data from Kafka, perform aggregation, and produce results back to Kafka.
I need the jobs to consume data from any new Kafka topic created while the job is running which matches a pattern. I got this working by setting the following properties for my consumer:
val properties = new Properties()
properties.setProperty("bootstrap.servers", "my-kafka-server")
properties.setProperty("group.id", "my-group-id")
properties.setProperty("zookeeper.connect", "my-zookeeper-server")
properties.setProperty("security.protocol", "PLAINTEXT")
properties.setProperty("flink.partition-discovery.interval-millis", "500")
properties.setProperty("enable.auto.commit", "true")
properties.setProperty("auto.offset.reset", "earliest")
val consumer = new FlinkKafkaConsumer011[String](Pattern.compile("my-topic-start-.*"), new SimpleStringSchema(), properties)
The consumer works fine and consumes data from existing topics which start with "my-topic-start-".
When I publish data to a new topic, say for example "my-topic-start-test1", for the first time, my consumer does not recognise the topic until 500 milliseconds after the topic was created, which is based on the properties above.
When the consumer identifies the topic it does not read the first data record published and starts reading subsequent records, so effectively I lose that first data record every time data is published to a new topic.
Is there a setting I am missing or is it how Kafka works? Any help would be appreciated.
Thanks
Shravan
I think part of the issue is that my producer was creating the topic and publishing the message in one go, so by the time the consumer discovers the new partition, that message has already been produced.
As a temporary solution I updated my producer to create the topic if it does not exist and then publish the message (making it a two-step process), and this works.
Would be nice to have a more robust consumer side solution though :)
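A minimal sketch of that two-step producer workaround, assuming the plain Kafka AdminClient is available on the producer side; the topic name, partition count, and replication factor are illustrative:
import java.util.{Collections, Properties}
import org.apache.kafka.clients.admin.{AdminClient, NewTopic}

val adminProps = new Properties()
adminProps.setProperty("bootstrap.servers", "my-kafka-server")
val admin = AdminClient.create(adminProps)

val topic = "my-topic-start-test1"
// Step 1: create the topic if it does not exist yet and wait for the creation to complete.
if (!admin.listTopics().names().get().contains(topic)) {
  val newTopic = new NewTopic(topic, 1, 1.toShort) // partition count and replication factor are illustrative
  admin.createTopics(Collections.singletonList(newTopic)).all().get()
}
admin.close()

// Step 2: only now produce the first record to `topic` with the regular KafkaProducer,
// giving the Flink consumer's partition discovery a chance to pick up the topic
// before any data lands in it.
Depending on the connector version, a short delay between the two steps (at least the partition-discovery interval) may still be needed.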
I'm running PySpark using a Spark cluster in local mode and I'm trying to write a streaming DataFrame to a Kafka topic.
When I run the query, I get the following message:
java.lang.IllegalStateException: Set(topicname-0) are gone. Some data may have been missed..
Some data may have been lost because they are not available in Kafka any more; either the
data was aged out by Kafka or the topic may have been deleted before all the data in the
topic was processed. If you don't want your streaming query to fail on such cases, set the
source option "failOnDataLoss" to "false".
This is my code:
query = (
output_stream
.writeStream.format("kafka")
.option("kafka.bootstrap.servers", "localhost:9092")
.option("topic", "ratings-cleaned")
.option("checkpointLocation", "checkpoints-folder")
.start()
)
sleep(2)
print(query.status)
This error message typically shows up when some messages/offsets were removed from the source topic since the last run of the query. The removal happens due to the topic's cleanup policy, such as the retention time.
Imagine your topic has messages with offsets 0, 1, 2, which have all been processed by the application. The checkpoint files store that last offset 2 so the query remembers to continue with offset 3 the next time it starts.
After some time, messages with offsets 3, 4, 5 were produced to the topic, but messages with offsets 0, 1, 2, 3 were removed from the topic due to its retention.
Now, when restarting your Spark Structured Streaming job, it tries to fetch offset 3 based on its checkpoint files but realises that the earliest available offset is now 4. In exactly that case it will throw this exception.
You can solve this by
setting .option("failOnDataLoss", "false") in your readStream operation, or
deleting the existing checkpoint files.
According to the Structured Streaming + Kafka Integration Guide the option failOnDataLoss is described as:
"Whether to fail the query when it's possible that data is lost (e.g., topics are deleted, or offsets are out of range). This may be a false alarm. You can disable it when it doesn't work as you expected. Batch queries will always fail if it fails to read any data from the provided offsets due to lost data."
On top of the answers above, Bartosz Konieczny posted a more detailed reason. The first part of the error message lists the set of topic partitions that are gone (hence the -0 at the end of topicname-0). That means the partition to which the Spark query subscribed has been deleted. My guess is the Kafka setup was restarted, while the Spark query was using a checkpoint folder that assumes the Kafka setup was not restarted.
This error message hints at issues with the checkpoints. During development, this can be caused by using an old checkpoints folder with an updated query.
If this is in a development environment and you don't need to save the state of the previous query, you can just remove the checkpoints folder (checkpoints-folder in the code example) and rerun your query.
So I have a problem with Kafka sinks in Spark Streaming while sending JSONs to multiple topics with unreliable Kafka brokers. Here are some parts of the code:
val kS = KafkaUtils.createDirectStream[String, TMapRecord](
  ssc,
  PreferConsistent,
  Subscribe[String, TMapRecord](topicsSetT, kafkaParamsInT))
Then I iterate over the RDDs:
kSMapped.foreachRDD { rdd: RDD[TMsg] =>
  rdd.foreachPartition { part =>
    part.foreach { ...........
And inside foreach I do
kafkaSink.value.send(kafkaTopic, strJSON)
kafkaSinkMirror.value.send(kafkaTopicMirrorBroker, strJSON)
When the mirror broker is down, the entire streaming application waits for it and we are not sending anything to the main broker.
How would you handle it?
Regarding the easiest solution you propose: imagine that we just skip messages that were meant to be sent to the broker that went down (say that's CASE 1);
for CASE 2 we'd do some buffering.
P.S. Later on I will use Kafka Mirror, but currently I don't have that option, so I need to build some solution in my code.
I've found several solutions to this problem:
You may throw a timeout exception on the worker and rely on checkpoints. Spark tries to restart the failed task several times, as described by the spark.task.maxFailures property, and it is possible to increase the number of retries. If the streaming job fails after the max retries, it will simply restart from the checkpoint once the broker is available again. Or you could manually stop the job when it fails.
You could configure backpressure with spark.streaming.backpressure.enabled=true, which allows Spark to receive data only as fast as it can process it (see the configuration sketch after this list).
You could send your results to a technical Kafka topic and handle them later with another streaming job.
You could make a Hive or HBase buffer for these cases and send the unhandled data later in batch mode.
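As a rough configuration sketch for the retry and backpressure settings mentioned above (the values are illustrative, not recommendations):
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("kafka-sink-job")
  // Allow more retries of a failed task before the whole job fails and
  // has to be restarted from the checkpoint.
  .set("spark.task.maxFailures", "8")
  // Throttle ingestion so the job only receives data as fast as it can process it.
  .set("spark.streaming.backpressure.enabled", "true")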
I am new to Apache Spark and have a need to run several long-running processes (jobs) on my Spark cluster at the same time. Often, these individual processes (each of which is its own job) will need to communicate with each other. Tentatively, I'm looking at using Kafka to be the broker in between these processes. So the high-level job-to-job communication would look like:
Job #1 does some work and publishes message to a Kafka topic
Job #2 is set up as a streaming receiver (using a StreamingContext) to that same Kafka topic, and as soon as the message is published to the topic, Job #2 consumes it
Job #2 can now do some work, based on the message it consumed
From what I can tell, streaming contexts are blocking listeners that run on the Spark Driver node. This means that once I start the streaming consumer like so:
def createKafkaStream(ssc: StreamingContext,
                      kafkaTopics: String, brokers: String): DStream[(String, String)] = {
  // some configs here
  KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, props, topicsSet)
}
def consumerHandler(): StreamingContext = {
val ssc = new StreamingContext(sc, Seconds(10))
createKafkaStream(ssc, "someTopic", "my-kafka-ip:9092").foreachRDD(rdd => {
rdd.collect().foreach { msg =>
// Now do some work as soon as we receive a message from the topic
}
})
ssc
}
StreamingContext.getActive.foreach {
_.stop(stopSparkContext = false)
}
val ssc = StreamingContext.getActiveOrCreate(consumerHandler)
ssc.start()
ssc.awaitTermination()
...that there are now 2 implications:
The Driver is now blocking and listening for work to consume from Kafka; and
When work (messages) are received, they are sent to any available Worker Nodes to actually be executed upon
So first, if anything that I've said above is incorrect or misleading, please begin by correcting me! Assuming I'm more or less correct, I'm simply wondering if there is a more scalable or performant way to accomplish this, given my criteria. Again, I have two long-running jobs (Job #1 and Job #2) that are running on my Spark nodes, and one of them needs to be able to 'send work to' the other one. Any ideas?
From what I can tell, streaming contexts are blocking listeners that
run on the Spark Driver node.
A StreamingContext (singular) isn't a blocking listener. Its job is to create the graph of execution for your streaming job.
When you start reading from Kafka, you specify that you want to fetch new records every 10 seconds. What happens from then on depends on which abstraction you're using for Kafka: either the receiver approach via KafkaUtils.createStream, or the receiver-less approach via KafkaUtils.createDirectStream.
In general, in both approaches data is consumed from Kafka and then dispatched to each Spark worker to be processed in parallel.
then I'm simply wondering if there is a more scalable or performant
way to accomplish this
This approach is highly scalable. When using the receiver-less approach, each Kafka partition maps to a Spark partition in a given RDD. You can increase parallelism by either increasing the number of partitions in Kafka, or by repartitioning the data inside Spark (using DStream.repartition). I suggest testing this setup to determine if it suits your performance requirements.
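As a small sketch of the repartitioning idea, assuming stream is the DStream[(String, String)] returned by createKafkaStream above (the partition count is a hypothetical parameter):
import org.apache.spark.streaming.dstream.DStream

def processWithMoreParallelism(stream: DStream[(String, String)], numPartitions: Int): Unit = {
  stream
    .repartition(numPartitions) // spread records across more Spark partitions than Kafka provides
    .foreachRDD { rdd =>
      rdd.foreachPartition { records =>
        // this closure runs on the executors, so partitions are processed in parallel
        records.foreach { case (_, msg) =>
          println(msg) // placeholder for the real message handling, instead of collecting to the driver
        }
      }
    }
}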