Version Info:
"org.apache.storm" % "storm-core" % "1.2.1"
"org.apache.storm" % "storm-kafka-client" % "1.2.1"
I have a storm topology which looks like following:
boltA -> boltB -> boltC -> boltD
boltA just does some formatting of requests and emits another tuple. boltB does some processing and emits around 100 tuples for each tuple it accepts. boltC and boltD process these tuples. All the bolts implement BaseBasicBolt.
What I am noticing is that whenever boltD fails a tuple and marks it for retry by throwing FailedException, I get the following error after a few minutes (less than my topology timeout):
2018-11-30T20:01:05.261+05:30 util [ERROR] Async loop died!
java.lang.IllegalStateException: Attempting to emit a message that has already been committed. This should never occur when using the at-least-once processing guarantee.
at org.apache.storm.kafka.spout.KafkaSpout.emitOrRetryTuple(KafkaSpout.java:471) ~[stormjar.jar:?]
at org.apache.storm.kafka.spout.KafkaSpout.emitIfWaitingNotEmitted(KafkaSpout.java:440) ~[stormjar.jar:?]
at org.apache.storm.kafka.spout.KafkaSpout.nextTuple(KafkaSpout.java:308) ~[stormjar.jar:?]
at org.apache.storm.daemon.executor$fn__4975$fn__4990$fn__5021.invoke(executor.clj:654) ~[storm-core-1.2.1.jar:1.2.1]
at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484) [storm-core-1.2.1.jar:1.2.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
2018-11-30T20:01:05.262+05:30 executor [ERROR]
java.lang.IllegalStateException: Attempting to emit a message that has already been committed. This should never occur when using the at-least-once processing guarantee.
at org.apache.storm.kafka.spout.KafkaSpout.emitOrRetryTuple(KafkaSpout.java:471) ~[stormjar.jar:?]
at org.apache.storm.kafka.spout.KafkaSpout.emitIfWaitingNotEmitted(KafkaSpout.java:440) ~[stormjar.jar:?]
at org.apache.storm.kafka.spout.KafkaSpout.nextTuple(KafkaSpout.java:308) ~[stormjar.jar:?]
at org.apache.storm.daemon.executor$fn__4975$fn__4990$fn__5021.invoke(executor.clj:654) ~[storm-core-1.2.1.jar:1.2.1]
at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484) [storm-core-1.2.1.jar:1.2.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
What seems to be happening is this: when boltB emits 100 tuples from a single input tuple and boltD fails one of those 100 tuples, I get this error. I am not able to understand how to fix this; ideally the original tuple should be acked only once all 100 tuples are acked, but the original tuple is probably acked before all those 100 tuples are acked, which causes this error.
Edit:
I am able to reproduce this with the following topology of two bolts. It gets reproduced after around 5 minutes of running in cluster mode:
BoltA
case class Abc(index: Int, rand: Boolean)
class BoltA extends BaseBasicBolt {
override def execute(input: Tuple, collector: BasicOutputCollector): Unit = {
val inp = input.getBinaryByField("value").getObj[someObj]
val randomGenerator = new Random()
var i = 0
val rand = randomGenerator.nextBoolean()
(1 to 100).foreach { _ =>
collector.emit(new Values(Abc(i, rand).getJsonBytes))
i += 1
}
}
override def declareOutputFields(declarer: OutputFieldsDeclarer): Unit = {
declarer.declare(new Fields("boltAout"))
}
}
BoltB
class BoltB extends BaseBasicBolt {
override def execute(input: Tuple, collector: BasicOutputCollector): Unit = {
val abc = input.getBinaryByField("boltAout").getObj[Abc]
println(s"Received ${abc.index}th tuple in BoltB")
if(abc.index >= 97 && abc.rand){
println(s"throwing FailedException for ${abc.index}th tuple for")
throw new FailedException()
}
}
override def declareOutputFields(declarer: OutputFieldsDeclarer): Unit = {
}
}
KafkaSpout:
private def getKafkaSpoutConfig(source: Config) = KafkaSpoutConfig.builder("connections.kafka.producerConnProps.metadata.broker.list", "queueName")
.setProp(ConsumerConfig.GROUP_ID_CONFIG, "grp")
.setProp(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArrayDeserializer")
.setOffsetCommitPeriodMs(100)
.setRetry(new KafkaSpoutRetryExponentialBackoff(
KafkaSpoutRetryExponentialBackoff.TimeInterval.milliSeconds(100),
KafkaSpoutRetryExponentialBackoff.TimeInterval.milliSeconds(100),
10,
KafkaSpoutRetryExponentialBackoff.TimeInterval.milliSeconds(3000)
))
.setFirstPollOffsetStrategy(offsetStrategyMapping(ConnektConfig.getOrElse("connections.kafka.consumerConnProps.offset.strategy", "UNCOMMITTED_EARLIEST")))
.setMaxUncommittedOffsets(ConnektConfig.getOrElse("connections.kafka.consumerConnProps.max.uncommited.offset", 10000))
.build()
Other config:
messageTimeoutInSecons: 300
The fix for this was provided by @Stig Rohde Døssing here. The exact cause of the issue has been described here as follows:
In the fix for STORM-2666 and followups, we added logic to handle cases where the spout received the ack for an offset after the following offsets were already acked. The issue was that the spout might commit all the acked offsets, but not adjust the consumer position forward, or clear out waitingToEmit properly. If the acked offset was sufficiently far behind the log end offset, the spout might end up polling for offsets it had already committed.
That fix is slightly wrong. When the consumer position drops behind the committed offset, we make sure to adjust the position forward and clear out any waitingToEmit messages that are behind the committed offset. However, we don't clear out waitingToEmit unless we adjust the consumer position, which turns out to be a problem.
For example, say offset 1 has failed, offsets 2-10 have been acked and maxPollRecords is 10. Say there are 11 records (1-11) in Kafka. If the spout seeks back to offset 1 to replay it, it will get offsets 1-10 back from the consumer in the poll. The consumer position is now 11. The spout emits offset 1. Say it gets acked immediately. On the next poll, the spout will commit offset 1-10 and check if it should adjust the consumer position and waitingToEmit. Since the position (11) is ahead of the committed offset (10), it doesn't clear out waitingToEmit. Since waitingToEmit still contains offsets 2-10 from the previous poll, the spout will end up emitting these tuples again.
One can see the fix here.
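To make the difference concrete, here is a simplified, hypothetical sketch in Scala (not the actual KafkaSpout code; waitingToEmit and the offset parameters are stand-ins for the spout's internal state) of the corrected bookkeeping: queued records at or below the committed offset are always dropped, independently of whether the consumer position has to be moved.
// Hypothetical sketch only: illustrates the corrected logic described above.
def afterCommit(committedOffset: Long,
                consumerPosition: Long,
                waitingToEmit: scala.collection.mutable.Queue[Long]): Long = {
  // Always discard queued offsets already covered by the commit,
  // even when the consumer position is ahead of the committed offset.
  waitingToEmit.dequeueAll(_ <= committedOffset)
  // Only the position itself needs the "is it behind the commit?" check.
  math.max(consumerPosition, committedOffset + 1)
}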
Related
I'm trying a variation of connecting a producer to a consumer, with the special case that sometimes I need to produce 1 extra message per input message (e.g. 1 to the output topic and 1 to a different topic) while keeping the delivery guarantees on that.
I was thinking of doing mapConcat and outputting multiple ProducerRecord objects. I'm concerned about losing guarantees in the edge case where the first message is enough for the commit to happen on that offset, causing a potential loss of the second. It also seems you can't just do .flatMap, as you'd be going into the graph API, which gets even muddier: once you merge back into a commit flow it becomes harder to make sure you don't just ignore the duplicated offset.
Consumer.committableSource(consumerSettings, Subscriptions.topics(inputTopic))
.map(msg => (msg, addLineage(msg.record.value())))
.mapConcat(input =>
if (math.random > 0.25)
List(ProducerMessage.Message(
new ProducerRecord[Array[Byte], Array[Byte]](outputTopic, input._1.record.key(), input._2),
input._1.committableOffset
))
else List(ProducerMessage.Message(
new ProducerRecord[Array[Byte], Array[Byte]](outputTopic, input._1.record.key(), input._2),
input._1.committableOffset
),ProducerMessage.Message(
new ProducerRecord[Array[Byte], Array[Byte]](outputTopic2, input._1.record.key(), input._2),
input._1.committableOffset
))
)
.via(Producer.flow(producerSettings))
.map(_.message.passThrough)
.batch(max = 20, first => CommittableOffsetBatch.empty.updated(first)) {
(batch, elem) => batch.updated(elem)
}
.mapAsync(parallelism = 3)(_.commitScaladsl())
.runWith(Sink.ignore)
The original 1 to 1 documentation is here: https://doc.akka.io/docs/akka-stream-kafka/current/consumer.html#connecting-producer-and-consumer
Has anyone thought of / solved this problem?
The Alpakka Kafka connector has recently introduced flexiFlow, which supports your use case: let one stream element produce multiple messages to Kafka.
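A minimal sketch of how your pipeline could look with it (assuming Alpakka Kafka 1.0+ API names such as Producer.flexiFlow, ProducerMessage.single/multi and Committer.sink; consumerSettings, producerSettings, system, addLineage and the topic names are taken from your snippet or assumed to be in scope):
import akka.kafka.{CommitterSettings, ProducerMessage, Subscriptions}
import akka.kafka.scaladsl.{Committer, Consumer, Producer}
import org.apache.kafka.clients.producer.ProducerRecord
import scala.collection.immutable

Consumer.committableSource(consumerSettings, Subscriptions.topics(inputTopic))
  .map { msg =>
    val value = addLineage(msg.record.value())
    if (math.random > 0.25)
      // one record; the committable offset rides along as pass-through
      ProducerMessage.single(
        new ProducerRecord[Array[Byte], Array[Byte]](outputTopic, msg.record.key(), value),
        msg.committableOffset)
    else
      // several records sharing one committable offset; the offset is only
      // passed downstream after all of them have been produced
      ProducerMessage.multi(
        immutable.Seq(
          new ProducerRecord[Array[Byte], Array[Byte]](outputTopic, msg.record.key(), value),
          new ProducerRecord[Array[Byte], Array[Byte]](outputTopic2, msg.record.key(), value)),
        msg.committableOffset)
  }
  .via(Producer.flexiFlow(producerSettings))
  .map(_.passThrough)
  .runWith(Committer.sink(CommitterSettings(system)))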
I have a streaming job running on Spark 2.1.1, polling Kafka 0.10. I am using the Spark KafkaUtils class to create a DStream, and everything is working fine until data ages out of the topic because of the retention policy. My problem comes when I stop my job to make some changes: if any data has aged out of the topic, I get an error saying that my offsets are out of range. I have done a lot of research, including looking at the Spark source code, and I see lots of comments like the ones in this issue: SPARK-19680 - basically saying that data should not be lost silently, so auto.offset.reset is ignored by Spark. My big question, though, is what can I do now? My topic will not poll in Spark - it dies on startup with the offsets exception. I don't know how to reset the offsets so my job will just get started again. I have not enabled checkpoints since I read that those are unreliable for this use. I used to have a lot of code to manage offsets, but it appears that Spark ignores requested offsets if there are any committed, so I am currently managing offsets like this:
val stream = KafkaUtils.createDirectStream[String, T](
ssc,
PreferConsistent,
Subscribe[String, T](topics, kafkaParams))
stream.foreachRDD { (rdd, batchTime) =>
val offsets = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
Log.debug("processing new batch...")
val values = rdd.map(x => x.value())
val incomingFrame: Dataset[T] = SparkUtils.sparkSession.createDataset(values)(consumer.encoder()).persist
consumer.processDataset(incomingFrame, batchTime)
stream.asInstanceOf[CanCommitOffsets].commitAsync(offsets)
}
ssc.start()
ssc.awaitTermination()
As a workaround I have been changing my group ids but that is really lame. I know this is expected behavior and should not happen, I just need to know how to get the stream running again. Any help would be appreciated.
Here is a block of code I wrote to get past this until a real solution is introduced to spark-streaming-kafka. It basically resets the offsets for the partitions that have aged out, based on the OffsetResetStrategy you set. Just give it the same Map params, _params, you provide to KafkaUtils. Call this before calling KafkaUtils.create****Stream() from your driver.
final OffsetResetStrategy offsetResetStrategy = OffsetResetStrategy.valueOf(_params.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG).toString().toUpperCase(Locale.ROOT));
if(OffsetResetStrategy.EARLIEST.equals(offsetResetStrategy) || OffsetResetStrategy.LATEST.equals(offsetResetStrategy)) {
LOG.info("Going to reset consumer offsets");
final KafkaConsumer<K,V> consumer = new KafkaConsumer<>(_params);
LOG.debug("Fetching current state");
final List<TopicPartition> parts = new LinkedList<>();
final Map<TopicPartition, OffsetAndMetadata> currentCommited = new HashMap<>();
for(String topic: this.topics()) {
List<PartitionInfo> info = consumer.partitionsFor(topic);
for(PartitionInfo i: info) {
final TopicPartition p = new TopicPartition(topic, i.partition());
final OffsetAndMetadata m = consumer.committed(p);
parts.add(p);
currentCommited.put(p, m);
}
}
final Map<TopicPartition, Long> begining = consumer.beginningOffsets(parts);
final Map<TopicPartition, Long> ending = consumer.endOffsets(parts);
LOG.debug("Finding what offsets need to be adjusted");
final Map<TopicPartition, OffsetAndMetadata> newCommit = new HashMap<>();
for(TopicPartition part: parts) {
final OffsetAndMetadata m = currentCommited.get(part);
final Long begin = begining.get(part);
final Long end = ending.get(part);
if(m == null || m.offset() < begin) {
LOG.info("Adjusting partition {}-{}; OffsetAndMeta={} Begining={} End={}", part.topic(), part.partition(), m, begin, end);
final OffsetAndMetadata newMeta;
if(OffsetResetStrategy.EARLIEST.equals(offsetResetStrategy)) {
newMeta = new OffsetAndMetadata(begin);
} else if(OffsetResetStrategy.LATEST.equals(offsetResetStrategy)) {
newMeta = new OffsetAndMetadata(end);
} else {
newMeta = null;
}
LOG.info("New offset to be {}", newMeta);
if(newMeta != null) {
newCommit.put(part, newMeta);
}
}
}
consumer.commitSync(newCommit);
consumer.close();
}
auto.offset.reset=latest/earliest will be applied only when the consumer starts for the first time.
There is a Spark JIRA to resolve this issue; until then we have to live with workarounds.
https://issues.apache.org/jira/browse/SPARK-19680
Try
auto.offset.reset=latest
Or
auto.offset.reset=earliest
earliest: automatically reset the offset to the earliest offset
latest: automatically reset the offset to the latest offset
none: throw exception to the consumer if no previous offset is found for the consumer's group
anything else: throw exception to the consumer.
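For reference, with the Spark direct stream this setting goes into the kafkaParams map passed to Subscribe; a minimal sketch (broker address, deserializers and group id are placeholders):
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> classOf[org.apache.kafka.common.serialization.StringDeserializer],
  "value.deserializer" -> classOf[org.apache.kafka.common.serialization.StringDeserializer],
  "group.id" -> "my-consumer-group",
  // as noted above, applied only when the consumer group starts without committed offsets
  "auto.offset.reset" -> "earliest"
)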
One more thing that affects which offset values the smallest and largest configs will correspond to is the log retention policy. Imagine you have a topic with retention configured to 1 hour. You produce 10 messages, and then an hour later you post 10 more messages. The largest offset will still remain the same, but the smallest one won't be able to be 0, because Kafka will have already removed those messages and thus the smallest available offset will be 10.
This problem was solved in Structured Streaming by setting "failOnDataLoss" = "false". It is unclear why there is no such option in the Spark DStream framework.
This is a BIG question for Spark developers!
In our projects, we tried to solve this problem by resetting the offsets to earliest + 5 minutes ... it helps in most cases.
I am writing a producer in Scala and I want to do batching. The way batching should work is that it should hold the messages in a queue until it is full and then post all of them together on the topic. But somehow it's not working: the moment I start sending messages, it posts them one by one. Does anyone know how to use batching in the Kafka producer?
val kafkaStringSerializer = "org.apache.kafka.common.serialization.StringSerializer"
val batchSize: java.lang.Integer = 163840
val props = new Properties()
props.put("key.serializer", kafkaStringSerializer)
props.put("value.serializer", kafkaStringSerializer)
props.put("batch.size", batchSize);
props.put("bootstrap.servers", "localhost:9092")
val producer = new KafkaProducer[String,String](props)
val TOPIC="topic"
val inlineMessage = "adsdasdddddssssssssssss"
for(i<- 1 to 10){
val record: ProducerRecord[String, String] = new ProducerRecord(TOPIC, inlineMessage )
val futureResponse: Future[RecordMetadata] = producer.send(record)
futureResponse.isDone
println("Future Response ==========>" + futureResponse.get().serializedValueSize())
}
You have to set linger.ms in your props.
By default, it is zero, meaning that a message is sent immediately if possible.
You can increase it (for example to 100) so that batching occurs - this means higher latency, but higher throughput.
batch.size is a maximum: if you reach it before linger.ms has passed, the data will be sent without waiting any longer.
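Concretely, in the producer properties from the question this could look like the following (the values are only illustrative):
props.put("linger.ms", "100")     // wait up to 100 ms so records can accumulate into a batch
props.put("batch.size", "163840") // upper bound in bytes for a single batch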
To view the batches actually sent, you will need to configure your logging (batching is done on a background thread, and you will not be able to see which batches are formed via the producer API - you can't send or receive batches, only send a record and receive its response; communication with the broker via batches is done internally).
First, if not already done, bind a log4j properties file (-Dlog4j.configuration=file:path/to/log4j.properties):
log4j.rootLogger=WARN, stderr
log4j.logger.org.apache.kafka.clients.producer.internals.Sender=TRACE, stderr
log4j.appender.stderr=org.apache.log4j.ConsoleAppender
log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.stderr.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.stderr.Target=System.err
For example, I will receive
TRACE Sent produce request to 2: (type=ProduceRequest, magic=1, acks=1, timeout=30000, partitionRecords=({test-1=[(record=LegacyRecordBatch(offset=0, Record(magic=1, attributes=0, compression=NONE, crc=2237306008, CreateTime=1502444105996, key=0 bytes, value=2 bytes))), (record=LegacyRecordBatch(offset=1, Record(magic=1, attributes=0, compression=NONE, crc=3259548815, CreateTime=1502444106029, key=0 bytes, value=2 bytes)))]}), transactionalId='' (org.apache.kafka.clients.producer.internals.Sender)
which is a batch of 2 records. A batch will contain records sent to the same broker.
Then, play with batch.size and linger.ms to see the difference. Note that a record contains some overhead, so a batch.size of 1000 will not hold 10 messages of size 100.
Note that I did not find documentation listing all the loggers and what they do (like log4j.logger.org.apache.kafka.clients.producer.internals.Sender). You can enable DEBUG/TRACE on the rootLogger and find the data you want, or explore the code.
You are producing the data to the Kafka server synchronously. This means that the moment you call producer.send followed by futureResponse.get, it returns only after the data gets stored on the Kafka server.
Store the response in a separate list, and call futureResponse.get outside the for loop.
With default configuration, Kafka supports batching, see linger.ms and batch.size
List<Future<RecordMetadata>> responses = new ArrayList<>();
for (int i=1; i<=10; i++) {
ProducerRecord<String, String> record = new ProducerRecord<>(TOPIC, inlineMessage);
Future<RecordMetadata> response = producer.send(record);
responses.add(response);
}
for (Future<RecordMetadata> response : responses) {
response.get(); // verify whether the message is sent to the broker.
}
We occasionally suffer from high latency between the replica leader and the rest of the ISR nodes, which leads to the consumer getting the following error:
org.apache.kafka.clients.consumer.RetriableCommitFailedException: Commit offsets failed with retriable exception. You should retry committing offsets.
Caused by: org.apache.kafka.common.errors.TimeoutException: The request timed out.
I can increase the offsets.commit.timeout.ms but I don't want to as it may lead to additional side effects.
But taking a broader view, I don't want the broker to wait for the committed offset to be synced to all the other replicas; rather, it should commit locally and update the rest asynchronously.
Going over the broker configuration I found offsets.commit.required.acks, which looks like it configures exactly that, BUT the doc also cryptically states: the default (-1) should not be overridden.
Why? I even tried going over the broker source code but found little additional information.
Any idea why this isn't recommended? is there a different way of achieving the same result?
I recommend to actually retry committing the offsets.
Let your consumer commit the offsets asynchronously and implement a retry mechanism. However, retrying an asynchronous commit could lead to a case where you commit a smaller offset after committing a larger one, which should be avoided by all means.
In the book "Kafka - The Definitive Guide", there is a hint on how to mitigate this problem:
Retrying Async Commits: A simple pattern to get commit order right for asynchronous retries is to use a monotonically increasing sequence number. Increase the sequence number every time you commit and add the sequence number at the time of the commit to the commitAsync callback. When you’re getting ready to send a retry, check if the commit sequence number the callback got is equal to the instance variable; if it is, there was no newer commit and it is safe to retry. If the instance sequence number is higher, don’t retry because a newer commit was already sent.
As an example, you can see an implementation of this idea in Scala below:
import java.util._
import java.util.concurrent.atomic.AtomicLong // needed for the global commit counter below
import java.time.Duration
import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord, KafkaConsumer, OffsetAndMetadata, OffsetCommitCallback}
import org.apache.kafka.common.{KafkaException, TopicPartition}
import collection.JavaConverters._
object AsyncCommitWithCallback extends App {
// define topic
val topic = "myOutputTopic"
// set properties
val props = new Properties()
props.put(ConsumerConfig.GROUP_ID_CONFIG, "AsyncCommitter5")
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
// [set more properties...]
// create KafkaConsumer and subscribe
val consumer = new KafkaConsumer[String, String](props)
consumer.subscribe(List(topic).asJavaCollection)
// initialize global counter
val atomicLong = new AtomicLong(0)
// consume message
try {
while(true) {
val records = consumer.poll(Duration.ofMillis(1)).asScala
if(records.nonEmpty) {
for (data <- records) {
// do something with the records
}
consumer.commitAsync(new KeepOrderAsyncCommit)
}
}
} catch {
case ex: KafkaException => ex.printStackTrace()
} finally {
consumer.commitSync()
consumer.close()
}
class KeepOrderAsyncCommit extends OffsetCommitCallback {
// keeping position of this callback instance
val position = atomicLong.incrementAndGet()
override def onComplete(offsets: util.Map[TopicPartition, OffsetAndMetadata], exception: Exception): Unit = {
// retrying only if no other commit incremented the global counter
if(exception != null){
if(position == atomicLong.get) {
consumer.commitAsync(this)
}
}
}
}
}
I'm facing an issue with high level kafka consumer (0.8.2.0) - after consuming some amount of data one of our consumers stops. After restart it consumes some messages and stops again with no error/exception or warning.
After some investigation I found that the problem with consumer was this exception:
ERROR c.u.u.e.impl.kafka.KafkaConsumer - Error consuming message stream:
kafka.message.InvalidMessageException: Message is corrupt (stored crc = 3801080313, computed crc = 2728178222)
Any ideas how I can simply skip such messages?
So, answering my own question. After some debugging of Kafka Consumer, I found one possible solution:
Create a subclass of kafka.consumer.ConsumerIterator
Override the makeNext method. In this method, catch InvalidMessageException and return some dummy placeholder.
In your while loop you have to convert the kafka.consumer.ConsumerIterator to your implementation. Unfortunately all fields of kafka.consumer.ConsumerIterator are private, so you have to use reflection.
So this is the code example:
val skipIt = createKafkaSkippingIterator(ks.iterator())
while(skipIt.hasNext()) {
val messageAndTopic = skipIt.next()
if (messageNotCorrupt(messageAndTopic)) {
consumeFn(messageAndTopic)
}
}
The messageNotCorrupt method simply checks whether the argument is equal to the dummy message.
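For completeness, a possible shape of that check (a hedged sketch; dummyPlaceholder stands in for whatever sentinel instance the overridden makeNext returns):
// The sentinel that the makeNext override returns for corrupt messages;
// in the real code this would be the same instance created in the subclass.
val dummyPlaceholder: AnyRef = new Object
def messageNotCorrupt(messageAndTopic: AnyRef): Boolean =
  messageAndTopic ne dummyPlaceholder // reference inequality against the sentinel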
Another solution, possibly easier, using the Kafka 0.8.2 client:
try {
val m = it.next()
//...
} catch {
case e: kafka.message.InvalidMessageException ⇒
log.warn("Corrupted message. Skipping.", e)
resetIteratorState(it)
}
//...
def resetIteratorState(it: ConsumerIterator[Array[Byte], Array[Byte]]): Unit = {
val method = classOf[IteratorTemplate[_]].getDeclaredMethod("resetState")
method.setAccessible(true)
method.invoke(it)
}