I have a problem with Kafka sinks in Spark Streaming when sending JSONs to multiple topics over unreliable Kafka brokers. Here are some parts of the code:
val kS = KafkaUtils.createDirectStream[String, TMapRecord](
  ssc,
  PreferConsistent,
  Subscribe[String, TMapRecord](topicsSetT, kafkaParamsInT))
Then I iterate over the RDDs:
kSMapped.foreachRDD { rdd: RDD[TMsg] =>
  rdd.foreachPartition { part =>
    part.foreach { ...........
Inside the foreach I do:
kafkaSink.value.send(kafkaTopic, strJSON)
kafkaSinkMirror.value.send(kafkaTopicMirrorBroker, strJSON)
When the mirror broker is down, the entire streaming application waits for it, and nothing is sent to the main broker.
How would you handle it?
For the easiest solution you propose, imagine that we just skip messages that were meant to be sent to the broker that went down (say, that's CASE 1);
for CASE 2 we'd do some buffering.
P.S. Later on I will use Kafka Mirror, but currently I don't have that option, so I need to build some solution into my code.
I've found several solutions to this problem:
You may throw a timeout exception on the worker and use checkpoints. Spark retries a failed task the number of times described by the spark.task.maxFailures property, and it is possible to increase the number of retries. If the streaming job fails after the maximum number of retries, it will simply restart from the checkpoint once the broker is available again. Or you could manually stop the job when it fails.
You could enable backpressure with spark.streaming.backpressure.enabled=true, which allows the job to receive data only as fast as it can process it.
You could send your two results back to a technical Kafka topic and handle them later with another streaming job.
You could use Hive or HBase as a buffer for these cases and send the unhandled data later in batch mode.
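For CASE 1 specifically (just dropping messages meant for the broker that is down), here is a minimal sketch of how the per-message send could look. It assumes your KafkaSink wrapper returns the producer's java.util.concurrent.Future[RecordMetadata] from send, and that max.block.ms is set low on the mirror producer so send() itself cannot block forever; toJson is a hypothetical serializer for TMsg, and kafkaSink, kafkaSinkMirror and the topic names are the ones from the question.

import java.util.concurrent.TimeUnit

part.foreach { msg =>
  val strJSON = toJson(msg)                     // hypothetical: however you build the JSON

  // Main broker: let failures propagate so Spark's normal retry / checkpoint logic applies.
  kafkaSink.value.send(kafkaTopic, strJSON)

  // Mirror broker: bound the wait and skip the record if it cannot be delivered in time.
  try {
    kafkaSinkMirror.value
      .send(kafkaTopicMirrorBroker, strJSON)
      .get(5, TimeUnit.SECONDS)
  } catch {
    case e: Exception =>
      // CASE 1: the mirror is unreachable, so log and drop instead of stalling the batch.
      println(s"Skipping mirror send: ${e.getMessage}")
  }
}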
Related
I'm participating in a challenge, and it says: Your first step is to consume a data sample from Apache Kafka.
So they give me a topic name, an API_KEY and an API_SECRET. Oh, and a bootstrap server.
Then they say that if you are unfamiliar with Kafka, there is comprehensive documentation provided by Confluent. So OK, I sign in to Confluent, create a cluster and... what is the next step to consume the data?
Here's a basic pattern for putting messages from Kafka into a list in Python.
from json import loads
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    'someTopicName',
    bootstrap_servers=['192.168.1.160:9092'],
    auto_offset_reset='earliest',
    enable_auto_commit=True,
    group_id='my-group',
    value_deserializer=lambda x: loads(x.decode('utf-8')))

print("We have a consumer instantiated")
print(consumer)

messageCache = []

for message in consumer:
    messageCache.append(message.value)
In this case, my Kafka broker is on my private LAN, using the default port, so my bootstrap servers list is just ["192.168.1.160:9092"].
You can use standard counters and if statements to save the list to a file or whatever, since the Kafka stream is assumed to run forever. For example, I have a process that consumes Kafka messages and saves them to HDFS as a Parquet dataframe for every 1,000,000 messages. In this case I wanted to save historical messages to develop an ML model. The great thing about Kafka is that I could write another process that evaluates and potentially responds to every message in real time.
:)
I've ended up in a (strange) situation where, briefly, I don't want to consume any new records from Kafka, so I want to pause the Spark Streaming consumption (InputDStream[ConsumerRecord]) for all partitions in the topic, do some operations, and finally resume consuming records.
First of all... is this possible?
I've been trying something like this:
var consumer: KafkaConsumer[String, String] = _
consumer = new KafkaConsumer[String, String](properties)
consumer.subscribe(java.util.Arrays.asList(topicName))
consumer.pause(consumer.assignment())
...
consumer.resume(consumer.assignment())
but I got this:
println(s"Assigned partitions: $consumer.assignment()") --> []
println(s"Paused partitions: ${consumer.paused()}") --> []
println(s"Partitions for: ${consumer.partitionsFor(topicNAme)}") --> [Partition(topic=topicAAA, partition=0, leader=1, replicas=[1,2,3], partition=1, ... ]
Any help to understand what I'm missing, and why I'm getting empty results when it's clear the consumer has partitions assigned, would be welcome!
Versions:
Kafka: 0.10
Spark: 2.3.0
Scala: 2.11.8
Yes, it is possible.
Add checkpointing to your code and pass a persistent storage path (local disk, S3, HDFS),
and whenever you start/resume your job it will pick up the Kafka consumer group info, along with the consumer offsets, from the checkpoint and start processing from where it stopped.
val context = StreamingContext.getOrCreate(checkpointDirectory, functionToCreateContext _)
Spark checkpointing is a mechanism that saves not only the offsets but also the serialized state of the DAG of your stages and jobs. So whenever you restart your job with new code it will:
Read and process the serialized data
Clean the cached DAG stages if there are any code changes in your Spark App
Resume processing of the new data with the latest code.
Reading from disk here is just a one-time operation, required by Spark to load the Kafka offsets, the DAG, and the old, incompletely processed data.
Once that is done, it will keep saving data to disk at the default or the specified checkpoint interval.
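A minimal sketch of what functionToCreateContext might look like, assuming the kafka-0-10 direct stream; the checkpoint path, broker address, topic, and group id are placeholders:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val checkpointDirectory = "hdfs:///checkpoints/my-streaming-app"   // any persistent path works

def functionToCreateContext(): StreamingContext = {
  val conf = new SparkConf().setAppName("checkpointed-kafka-app")
  val ssc  = new StreamingContext(conf, Seconds(10))
  ssc.checkpoint(checkpointDirectory)

  // The stream AND its processing must be defined inside this function,
  // so they can be rebuilt from the checkpoint on restart.
  val kafkaParams = Map[String, Object](
    "bootstrap.servers"  -> "broker:9092",
    "key.deserializer"   -> classOf[StringDeserializer],
    "value.deserializer" -> classOf[StringDeserializer],
    "group.id"           -> "my-group",
    "auto.offset.reset"  -> "latest")

  val stream = KafkaUtils.createDirectStream[String, String](
    ssc, PreferConsistent, Subscribe[String, String](Seq("someTopic"), kafkaParams))

  stream.foreachRDD(rdd => rdd.foreach(record => println(record.value())))
  ssc
}

// Restores the context (offsets, DAG, pending batches) from the checkpoint if one
// exists, otherwise builds a fresh context with the function above.
val context = StreamingContext.getOrCreate(checkpointDirectory, functionToCreateContext _)
context.start()
context.awaitTermination()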
Spark Streaming provides an option to specify the Kafka group id, but Spark Structured Streaming does not.
I have a Spark Streaming job which consumes data from Kafka and sends it back to Kafka after doing some processing on each record.
For this I am doing some map operations on the data:
val lines = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topicNameMap, StorageLevel.MEMORY_AND_DISK)

var ad = ""
val abc = lines.map(_._2).map { x =>
  val jsonObj = new JSONObject(x)
  val data = someMethod(schema, jsonObj)
  data
}
Then I am doing a foreach operation on it. I am not collecting all the data to the driver here, since I want to send those records from inside the executors themselves:
abc.foreachRDD(rdd => {
  rdd.foreach { toSend =>
    val producer = KafkaProducerUtils.getKafkaProducer(kafkaBrokers)
    println("toSend---->" + toSend)
    producer.send(new ProducerRecord[String, String](topicToSend, toSend))
  }
})
I tried this code with 1405 records over a 10-second batch period, but it took approximately 2.5 minutes to complete the job. I know creating a KafkaProducer is costly; is there any other way to reduce the processing time? For my testing I am using 2 executors with 2 cores and 1 GB each.
After searching a lot I found this article about KafkaSink. It will give you an idea of how to produce data to Kafka inside Spark Streaming in an effective way.
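The core of that pattern is a serializable sink that builds its KafkaProducer lazily on each executor and is broadcast once from the driver, so a single producer is reused for all records instead of one per record. A minimal sketch of such a sink (not taken verbatim from the article; the class name and producer properties are assumptions):

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

class KafkaSink(createProducer: () => KafkaProducer[String, String]) extends Serializable {
  // Created lazily on first use on each executor, never serialized from the driver.
  lazy val producer = createProducer()

  def send(topic: String, value: String): Unit =
    producer.send(new ProducerRecord[String, String](topic, value))
}

object KafkaSink {
  def apply(brokers: String): KafkaSink = {
    val createProducerFunc = () => {
      val props = new Properties()
      props.put("bootstrap.servers", brokers)
      props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      val producer = new KafkaProducer[String, String](props)
      sys.addShutdownHook { producer.close() }   // flush buffered records when the executor JVM exits
      producer
    }
    new KafkaSink(createProducerFunc)
  }
}

You would then broadcast it once, e.g. val kafkaSink = ssc.sparkContext.broadcast(KafkaSink(kafkaBrokers)), and inside foreachRDD call kafkaSink.value.send(topicToSend, toSend) instead of creating a producer per record.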
There could be several reasons for this enormous delay in processing this amount of messages:
The problem could reside in your consume phase. If you use "createStream", at least in earlier minor versions of Spark, it uses the high-level consumer implementation, which needs ZooKeeper to store the offsets of the consumers belonging to a specific group. So I'd check this communication, because it could take too much time in the commit phase. If for any reason it commits the offsets one by one, your consume rate could be impaired. So first of all, check this.
Another reason is the write-ahead log to the file system. Although your configuration indicates memory and disk, as you can see in the Spark documentation:
Efficiency: Achieving zero-data loss in the first approach required the data to be stored in a Write Ahead Log, which further replicated the data. This is actually inefficient as the data effectively gets replicated twice - once by Kafka, and a second time by the Write Ahead Log. This second approach eliminates the problem as there is no receiver, and hence no need for Write Ahead Logs. As long as you have sufficient Kafka retention, messages can be recovered from Kafka
For better consume rates I would use createDirectStream instead.
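For reference, here is a sketch of what the switch could look like with the same StringDecoder-based API used in the question; kafkaBrokers and topicNameMap are the names from the question, and only metadata.broker.list is shown, with other parameters omitted:

import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils

// Direct stream: no receiver and no write-ahead log; offsets are tracked by Spark itself.
val directKafkaParams = Map("metadata.broker.list" -> kafkaBrokers)
val topics = topicNameMap.keySet

val lines = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, directKafkaParams, topics)

// Same downstream logic as before: work on the message values.
val abc = lines.map(_._2)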
I am new to Apache Spark and have a need to run several long-running processes (jobs) on my Spark cluster at the same time. Often, these individual processes (each of which is its own job) will need to communicate with each other. Tentatively, I'm looking at using Kafka to be the broker in between these processes. So the high-level job-to-job communication would look like:
Job #1 does some work and publishes message to a Kafka topic
Job #2 is set up as a streaming receiver (using a StreamingContext) to that same Kafka topic, and as soon as the message is published to the topic, Job #2 consumes it
Job #2 can now do some work, based on the message it consumed
From what I can tell, streaming contexts are blocking listeners that run on the Spark Driver node. This means that once I start the streaming consumer like so:
def createKafkaStream(ssc: StreamingContext,
                      kafkaTopics: String, brokers: String): DStream[(String, String)] = {
  // some configs here
  KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, props, topicsSet)
}

def consumerHandler(): StreamingContext = {
  val ssc = new StreamingContext(sc, Seconds(10))

  createKafkaStream(ssc, "someTopic", "my-kafka-ip:9092").foreachRDD(rdd => {
    rdd.collect().foreach { msg =>
      // Now do some work as soon as we receive a message from the topic
    }
  })

  ssc
}

StreamingContext.getActive.foreach {
  _.stop(stopSparkContext = false)
}

val ssc = StreamingContext.getActiveOrCreate(consumerHandler)
ssc.start()
ssc.awaitTermination()
...that there are now 2 implications:
The Driver is now blocking and listening for work to consume from Kafka; and
When work (messages) is received, it is sent to any available Worker Nodes to actually be executed upon
So first, if anything that I've said above is incorrect or misleading, please begin by correcting me! Assuming I'm more or less correct, I'm simply wondering if there is a more scalable or performant way to accomplish this, given my criteria. Again, I have two long-running jobs (Job #1 and Job #2) that are running on my Spark nodes, and one of them needs to be able to 'send work to' the other one. Any ideas?
From what I can tell, streaming contexts are blocking listeners that
run on the Spark Driver node.
A StreamingContext (singular) isn't a blocking listener. Its job is to create the graph of execution for your streaming job.
When you start reading from Kafka, you specify that you want to fetch new records every 10 seconds. What happens from then on depends on which abstraction you're using for Kafka: either the receiver approach via KafkaUtils.createStream, or the receiver-less approach via KafkaUtils.createDirectStream.
Generally, in both approaches, data is consumed from Kafka and then dispatched to each Spark worker to be processed in parallel.
then I'm simply wondering if there is a more scalable or performant
way to accomplish this
This approach is highly scalable. When using the receiver-less approach, each Kafka partition maps to a Spark partition in a given RDD. You can increase parallelism either by increasing the number of partitions in Kafka, or by repartitioning the data inside Spark (using DStream.repartition), as in the sketch below. I suggest testing this setup to determine whether it suits your performance requirements.
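As a rough illustration, reusing the createKafkaStream helper and the sc variable from the question (the partition count 8 is an arbitrary placeholder):

def parallelConsumerHandler(): StreamingContext = {
  val ssc = new StreamingContext(sc, Seconds(10))

  createKafkaStream(ssc, "someTopic", "my-kafka-ip:9092")
    .repartition(8)                       // spread the records across 8 Spark partitions
    .foreachRDD { rdd =>
      rdd.foreach { case (_, msg) =>
        // Work happens here on the executors, in parallel,
        // instead of collect()-ing everything back to the driver first.
        println(msg)
      }
    }

  ssc
}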
I have one Kafka partition and one Spark Streaming application, on one server with 10 cores. When Spark Streaming gets one message from Kafka, the subsequent processing takes 5 seconds (that's my code). So I find Spark Streaming reads Kafka messages very slowly; I'm guessing that when Spark reads out one message, it waits until the message has been processed, so the reading and processing are synchronized.
I was wondering: can I make the Spark reading asynchronous, so that reading from Kafka isn't dragged down by the subsequent processing? Then Spark would consume data from Kafka very quickly, and I could focus on the slow data processing inside Spark. By the way, I'm using the foreachRDD function.
You can increase the number of partitions in Kafka; it should improve the parallelism. You can also try the "direct Kafka receiver" approach, which really improves performance when your app is reading from Kafka.
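If you want to grow the topic itself rather than only repartition inside Spark, partitions can be added with the Kafka AdminClient. This is a hedged sketch: it requires a broker and client new enough to support the AdminClient API, and "myTopic" and "broker:9092" are placeholders.

import java.util.{Collections, Properties}
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, NewPartitions}

val props = new Properties()
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092")  // placeholder broker

val admin = AdminClient.create(props)
// Raise the topic from 1 partition to 10 so all 10 cores can consume and process in parallel.
admin.createPartitions(Collections.singletonMap("myTopic", NewPartitions.increaseTo(10))).all().get()
admin.close()

Note that existing keyed data is not rebalanced when partitions are added; only new records land on the new partitions.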