I have a single Kafka broker with a single partition. The requirement is to do the following:
Read from this partition
Transform message by invoking a REST API
Publish the transformed message to another REST API
Push the response message to another topic
I am using Kafka Streams for achieving this using the following code
StreamsBuilder builder = new StreamsBuilder();
KStream<Object, Object> consumerStream = builder.stream(kafkaConfiguration.getConsumerTopic());
consumerStream = consumerStream.map(getKeyValueMapper(keyValueMapperClassName));
consumerStream.to(kafkaConfiguration.getProducerTopic(), Produced.with(lStringKeySerde, lAvroValueSerde));
return builder.build();
Following is my configuration:
streamsConfig.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, String.join(",", bootstrapServers));
if (schemaRegistry != null && schemaRegistry.length > 0) {
streamsConfig.put(KafkaAvroSerializerConfig.SCHEMA_REGISTRY_URL_CONFIG, String.join(",", schemaRegistry));
}
streamsConfig.put(this.keySerializerKeyName, keyStringSerializerClassName);
streamsConfig.put(this.valueSerializerKeyName, valueAVROSerializerClassName);
streamsConfig.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
streamsConfig.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
streamsConfig.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1000);
streamsConfig.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG, FailOnInvalidTimestamp.class);
streamsConfig.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, "exactly_once");
streamsConfig.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 30000);
streamsConfig.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 1);
streamsConfig.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 1);
streamsConfig.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG, DeserializationExceptionHandler.class);
streamsConfig.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG, ProductionExceptionHandler.class);
streamsConfig.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);
streamsConfig.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, compressionMode);
I was looking for a mechanism to do the following in my KeyValueMapper:
If any of the REST APIs is down, I catch the exception.
I would like the same offset to keep looping until the system is back up, OR to pause consumption until the system is back up.
I've checked the following links but they do not seem to help.
How to run kafka streams effectively with single app instance and single topic partitions?
The following link talks about KafkaTransactionManager, but I guess that would not work with the way the KStream is initialized above:
Kafka transaction failed but commits offset anyway
Any help / pointers in this direction would be much appreciated.
What you want to do is not really supported. Pausing the consumer is not possible in Kafka Streams.
You can "halt" processing only if you loop within your KeyValueMapper; however, in that case the consumer may drop out of the consumer group. In your case, with a single input topic partition, you can only have a single thread in a single KafkaStreams instance anyway, hence it would not affect any other member of the group (as there are none). However, the problem is that committing the offset would fail after the thread has dropped out of the group. Hence, after the thread rejoins the group, it would fetch an older offset and reprocess some data (i.e., you get duplicate data processing). To avoid dropping out of the consumer group, you could set the max.poll.interval.ms config to a high value (maybe even Integer.MAX_VALUE); given that you have a single member in the consumer group, setting a high value should be ok.
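For example, a one-line addition to the configuration shown above (the value is illustrative):

// Illustrative only: raise max.poll.interval.ms so a retry loop inside the
// KeyValueMapper does not cause the single consumer to drop out of the group.
streamsConfig.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, Integer.MAX_VALUE);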
Another alternative might be to use a transform() with a state store. If you cannot make the REST calls, you put the data into the store and retry later. This way the consumer would not drop out of the group. However, reading new data would never stop, and you would need to buffer all data in the store until the REST API can be called again. You should be able to slow down reading new data (to reduce the amount of data you need to buffer) by "sleeping" in your Transformer; you just need to ensure that you don't violate the max.poll.interval.ms config (the default is 5 minutes).
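A minimal sketch of that idea (the String key/value types, the "rest-buffer" store name, and the callRest() helper are assumptions for illustration; the retry logic, e.g. via punctuate(), is omitted):

// Sketch only: buffer records in a persistent state store while the REST API is down.
builder.addStateStore(Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("rest-buffer"), Serdes.String(), Serdes.String()));

stream.transform(() -> new Transformer<String, String, KeyValue<String, String>>() {
    private KeyValueStore<String, String> buffer;

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        buffer = (KeyValueStore<String, String>) context.getStateStore("rest-buffer");
    }

    @Override
    public KeyValue<String, String> transform(String key, String value) {
        try {
            return KeyValue.pair(key, callRest(value)); // hypothetical REST call
        } catch (Exception e) {
            buffer.put(key, value); // REST API is down: keep the record and retry later
            return null;            // emit nothing for now
        }
    }

    @Override
    public void close() { }
}, "rest-buffer");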
Related
I am new to Kafka and am developing a personal project with a few services; the communication between them is done through Kafka, and I am using Confluent to host Kafka remotely.
All works fine, but when I start up a server it tries to process all the old messages in the topics that were generated while I was testing the system.
I would like to avoid this because it is time consuming, and those messages were already processed the last time the server was up. Is there any way to prevent this in the development environment?
Am I even using Kafka correctly? Are there good practises that I missed?
By "server", I assume you mean consumer. The broker server doesn't process data, only stores it.
If you have auto.offset.reset=earliest + enable.auto.commit=false + are not committing the records in your code (or are overall using a new group.id each time), this is the expected behavior since your group.id is not tracking already consumed data.
Since you're now in a situation where you have processed data but no stored offsets, first set a static group.id; then your options include:
re-process all the data again, accepting the duplicates, perhaps adding some conditional filter in your consumer code to skip records
skip all processed and un-processed data and only start consuming brand-new records after the consumer starts, by either setting a new group.id + auto.offset.reset=latest, or using consumer.seekToEnd() / the kafka-consumer-groups CLI tool (a sketch of the seekToEnd option follows this list); the downside of setting auto.offset.reset=latest is that you might run into a situation where the consumer group has been idle too long and the group expires, causing you to go back to the end of the topic even though there may still be un-processed data
manually find the offsets for all the partitions for the last processed data and consumer.seek() to those offsets
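As a rough sketch of the seekToEnd option (skip everything existing, then consume only new records); the group id and topic name are placeholders:

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-static-group");   // static group.id
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("my-topic"));
consumer.poll(Duration.ofSeconds(1));      // first poll completes the partition assignment
consumer.seekToEnd(consumer.assignment()); // jump past everything already in the topic
while (true) {
    for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
        // only brand-new records arrive here
    }
    consumer.commitSync();                 // store the new position for this group.id
}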
I'm aware that you can define a stream-processing Kafka application in the form of a topology that implicitly understands which record has gone through successfully, and therefore can correctly commit the consumer offset, so that when the microservice has to be restarted it will continue reading the input topic without missing messages.
But what happens when I introduce my own processing classes into the stream? For instance, perhaps I need to submit information from the input records to a web service with a long startup time. So I write my own processor class that accumulates, say, 1000 messages and then submits a batch request to the external service, like this:
KStream<String, Prediction> stream = new StreamsBuilder()
.stream(inputTopic, Consumed.with(Serdes.String(), new MessageSerde()))
// talk to web service
.map((k, v) -> new KeyValue<>("", wrapper.consume(v.getPayload())))
.flatMapValues((ValueMapper<List<Prediction>, Iterable<Prediction>>) value -> value);
// send downstream
stream.peek((k, v) -> metrics.countOutgoingMessage())
.to(outputTopic, Produced.with(Serdes.String(), new PredictionSerde()));
Assume that the external service can issue zero, one, or more predictions of some kind for every input, and that my wrapper submits inputs in batches to increase throughput. It seems to me that KStream cannot possibly keep track of which input record corresponds to which output record, and therefore, no matter how it is implemented, it cannot guarantee that the correct consumer offset for the input topic is committed.
So in this paradigm, how can I give the library hints about which messages have been successfully processed? Or failing that, how can I get access to the consumer offset for the topic and perform commits explicitly so that no data loss can occur?
I think you might have a problem if you are using map. Combining remote calls with a DSL operator is not recommended. You might want to look into using the Processor API (see the docs). With ProcessorContext you can forward records or request a commit, which could give you the flexibility you need.
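A rough sketch of that approach using transform(), applied to the source stream instead of the map()/flatMapValues() chain from the question. wrapper.consumeBatch() and the batch size are assumptions; note that the in-memory batch is not fault tolerant, so a state store would be needed to survive a crash without losing buffered records:

// Sketch only: accumulate records, call the web service per batch, forward the
// results downstream, and request a commit once the batch has been handled.
KStream<String, Message> source = builder.stream(inputTopic,
        Consumed.with(Serdes.String(), new MessageSerde()));

source.transform(() -> new Transformer<String, Message, KeyValue<String, Prediction>>() {
    private ProcessorContext context;
    private final List<Message> batch = new ArrayList<>();

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public KeyValue<String, Prediction> transform(String key, Message value) {
        batch.add(value);
        if (batch.size() >= 1000) {                            // batch size is an assumption
            for (Prediction p : wrapper.consumeBatch(batch)) { // hypothetical batch call
                context.forward("", p);                        // emit each prediction downstream
            }
            batch.clear();
            context.commit(); // request a commit only after the whole batch was processed
        }
        return null;          // nothing is emitted per individual input record
    }

    @Override
    public void close() { }
}).to(outputTopic, Produced.with(Serdes.String(), new PredictionSerde()));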
We have a Kafka producer that produces keyed messages at a very high frequency to topics whose retention time is 10 hours. These messages are real-time updates, and the key used is the ID of the element whose value has changed. So the topic is acting as a changelog and will have many duplicate keys.
Now, what we're trying to achieve is that when a Kafka consumer launches, regardless of the last known state (new consumer, crashed, restart, etc.), it will somehow construct a table with the latest values of all the keys in a topic, and then keep listening for new updates as normal, keeping the minimum load on the Kafka server and letting the consumer do most of the job. We tried many ways and none of them seems ideal.
What we tried:
1 changelog topic + 1 compact topic:
The producer sends the same message to both topics wrapped in a transaction to assure successful send.
Consumer launches and requests the latest offset of the changelog topic.
Consumes the compacted topic from beginning to construct the table.
Continues consuming the changelog since the requested offset.
Cons:
Having duplicates in the compacted topic is a very high possibility, even when setting the log compaction frequency as high as possible.
x2 the number of topics on the Kafka server.
KSQL:
With KSQL we either have to rewrite a KTable as a topic so that the consumer can see it (extra topics), or we would need consumers to execute KSQL SELECT queries against the KSQL REST Server and query the table (not as fast and performant as the Kafka APIs).
Kafka Consumer API:
The consumer starts and consumes the topic from the beginning. This worked perfectly, but the consumer has to consume the 10-hour changelog to construct the last-values table.
Kafka Streams:
By using KTables as following:
KTable<Integer, MarketData> tableFromTopic = streamsBuilder.table("topic_name", Consumed.with(Serdes.Integer(), customSerde));
KTable<Integer, MarketData> filteredTable = tableFromTopic.filter((key, value) -> keys.contains(value.getRiskFactorId()));
Kafka Streams will create 1 topic on the Kafka server per KTable (named {consumer_app_id}-{topic_name}-STATE-STORE-0000000000-changelog), which will result in a huge number of topics since we have a big number of consumers.
From what we have tried, it looks like we need to either increase the server load, or the consumer launch time. Isn't there a "perfect" way to achieve what we're trying to do?
Thanks in advance.
By using KTables, Kafka Streams will create 1 topic on the Kafka server per KTable, which will result in a huge number of topics since we have a big number of consumers.
If you are just reading an existing topic into a KTable (via StreamsBuilder#table()), then no extra topics are being created by Kafka Streams. Same for KSQL.
It would help if you could clarify what exactly you want to do with the KTable(s). Apparently you are doing something that does result in additional topics being created?
1 changelog topic + 1 compact topic:
Why were you thinking about having two separate topics? Normally, changelog topics should always be compacted. And given your use case description, I don't see a reason why it should not be:
Now, what we're trying to achieve is that when a Kafka consumer launches, regardless of the last known state (new consumer, crashed, restart, etc..), it will somehow construct a table with the latest values of all the keys in a topic, and then keeps listening for new updates as normal [...]
Hence compaction would be very useful for your use case. It would also prevent this problem you described:
The consumer starts and consumes the topic from the beginning. This worked perfectly, but the consumer has to consume the 10-hour changelog to construct the last-values table.
Note that, to reconstruct the latest table values, all three of Kafka Streams, KSQL, and the Kafka Consumer must read the table's underlying topic completely (from beginning to end). If that topic is NOT compacted, this might indeed take a long time depending on the data volume, topic retention settings, etc.
From what we have tried, it looks like we need to either increase the server load, or the consumer launch time. Isn't there a "perfect" way to achieve what we're trying to do?
Without knowing more about your use case, particularly what you want to do with the KTable(s) once they are populated, my answer would be:
Make sure the "changelog topic" is also compacted.
Try KSQL first. If this doesn't satisfy your needs, try Kafka Streams. If this doesn't satisfy your needs, try the Kafka Consumer.
For example, I wouldn't use the Kafka Consumer if it is supposed to do any stateful processing with the "table" data, because the Kafka Consumer lacks built-in functionality for fault-tolerant stateful processing.
The consumer starts and consumes the topic from the beginning. This worked perfectly, but the consumer has to consume the 10-hour changelog to construct the last-values table.
During the first time your application starts up, what you said is correct.
To avoid this during every restart, store the key-value data in a file.
For example, you might want to use a persistent map (like MapDB).
Since you give the consumer a group.id and you commit the offsets either periodically or after each record is stored in the map, the next time your application restarts it will read from the last committed offset for that group.id.
So the problem of taking a lot of time occurs only initially (during the first run). As long as you have the file, you don't need to consume from the beginning.
If the file is not there or has been deleted, just seekToBeginning in the KafkaConsumer and build it again.
You need to store these key-values somewhere for retrieval anyway, so why can't it be a persistent store?
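A rough sketch of that consumer loop, assuming MapDB as the persistent map (the file name, map name, and consumerProps are illustrative; consumerProps is assumed to hold the group.id and Integer/ByteArray deserializers):

// Sketch only: keep the latest value per key in a persistent MapDB map and
// commit the Kafka offsets only after the records have been stored.
DB db = DBMaker.fileDB("latest-values.db").transactionEnable().make();
Map<Integer, byte[]> latestValues =
        db.hashMap("latest", Serializer.INTEGER, Serializer.BYTE_ARRAY).createOrOpen();

KafkaConsumer<Integer, byte[]> consumer = new KafkaConsumer<>(consumerProps);
consumer.subscribe(Collections.singletonList("topic_name"));
while (true) {
    for (ConsumerRecord<Integer, byte[]> record : consumer.poll(Duration.ofSeconds(1))) {
        latestValues.put(record.key(), record.value()); // upsert the latest value per key
    }
    db.commit();           // persist the map to the file
    consumer.commitSync(); // then commit the offsets for this group.id
}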
If you want to use Kafka Streams for whatever reason, then an alternative (not as simple as the above) is to use a persistent backing store.
For example, a persistent global store:
streamsBuilder.addGlobalStore(
    Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore(topic), keySerde, valueSerde),
    topic,
    Consumed.with(keySerde, valueSerde),
    this::updateValue);
P.S.: There will be a file called .checkpoint in the state directory which stores the offsets. If the topic is deleted in the middle, you get an OffsetOutOfRangeException. You may want to avoid this, perhaps by using an UncaughtExceptionHandler.
Refer to https://stackoverflow.com/a/57301986/2534090 for more.
Finally,
it is better to use a Consumer with a persistent file rather than Streams for this, because of the simplicity it offers.
We are using Kafka to store messages that are produced by a node in our cluster and distributed to all nodes in the cluster, and I have it mostly working with akka-streams, but there are a couple of questions I have to tie this up. There are some constraints on this.
First of all, the message has to be consumed by every node in the cluster but produced by only one node. I understand I can assign each node a group id that is probably its node ID, which means each node will get the message. That's sorted. But here are the questions:
The data is extremely transient and fairly large (just under a meg) and cannot be compressed further or broken up. If there is a new message on the topic, the old one is pretty much trash. How can I limit the topic to basically just one current message at most?
Given that the data is necessary for the node to start, I need to consume the latest message on the topic no matter whether I have consumed it before, and hopefully without creating a unique group id every time I start the server. Is this possible, and if so, how can it be done?
Finally, the data is usually on the topic, but on occasion it is not there, and ideally I need to be able to check if there is a message there and, if not, ask the producer to create the message. Is this possible?
This is the code I am currently using to start the consumer:
private Control startMatrixConsumer() {
final ConsumerSettings<Long, byte[]> matrixConsumerSettings = ConsumerSettings
.create(services.actorSystem(), new LongDeserializer(), new ByteArrayDeserializer())
.withBootstrapServers(services.config().getString("kafka.bootstrapServers"))
.withGroupId("group1") // todo put in the conf ??
.withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
final String topicName = Matrix.class.getSimpleName() + '-' + eventId;
final AutoSubscription subscription = Subscriptions.topics(topicName);
return Consumer.plainSource(matrixConsumerSettings, subscription)
.named(Matrix.class.getSimpleName() + "-Kafka-Consumer-" + eventId)
.map(data -> {
final Matrix matrix = services.kryoDeserialize(data.value(), Matrix.class);
log.debug(format("Received %s for event %d from Kafka", Matrix.class.getSimpleName(), matrix.getEventId()));
return matrix;
})
.filter(Objects::nonNull)
.to(Sink.actorRef(getSelf(), NotUsed.getInstance()))
.run(ActorMaterializer.create(getContext()));
}
Thanks a bunch.
The message has to be consumed by every node in the cluster but produced by only one node.
You are correct; you can achieve this by having a unique group id per node.
How can I limit the topic to basically just one current message at most?
Kafka provides compacted topics.
A compacted topic maintains only the most recent message for a given key. For instance, Kafka consumers store their offsets in a compacted topic.
In your case, produce every message with the same key, and the Kafka Log Cleaner will delete old messages. Please be aware that compaction is performed periodically, so you can end up with two (or more) messages with the same key for a short period of time (depending on your Log Cleaner configuration).
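For reference, a compacted topic can be created roughly like this with the AdminClient (the topic name, partition count, replication factor, and adminProps are placeholders; error handling is omitted in this sketch):

// Sketch: create a topic with cleanup.policy=compact so only the latest message per key is retained.
NewTopic compacted = new NewTopic("matrix-topic", 1, (short) 1)
        .configs(Collections.singletonMap(
                TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT));
try (AdminClient admin = AdminClient.create(adminProps)) { // adminProps holds bootstrap.servers
    admin.createTopics(Collections.singleton(compacted)).all().get();
}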
I need to consume the latest message on the topic no matter whether I have consumed it before.
You can achieve this by not committing the consumer offset (enable.auto.commit set to false) and setting auto.offset.reset to earliest. With one message in your compacted topic and a consumer that starts from the beginning of the topic, that message is always consumed after the node starts.
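In the consumer settings from your question, that would look roughly like this (a sketch, not tested):

// Sketch: never commit offsets and always start from the beginning of the compacted topic.
final ConsumerSettings<Long, byte[]> matrixConsumerSettings = ConsumerSettings
    .create(services.actorSystem(), new LongDeserializer(), new ByteArrayDeserializer())
    .withBootstrapServers(services.config().getString("kafka.bootstrapServers"))
    .withGroupId("group1")
    .withProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false")    // do not commit
    .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // read from the beginning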
I need to be able to check if there is a message there and, if not, ask the producer to create the message.
Unfortunately, I am not aware of any Kafka functionality that could help you with that. Most of the time Kafka is used to decouple producers and consumers.
I have a requirement to match the Kafka producer offset value to the consumer offset using the Java API.
I am new to Kafka. Could anyone suggest how to proceed with this?
Depending on your exact use case there are a couple of ways that you could go about this, but all of them will require an external system.
First off, Confluent offers the Confluent Control Center as part of their commercial offering; this would probably be the easiest way to go about this, if you are willing to spend the money.
If that is not for you, then you'd need to implement some sort of system to keep track of what you are producing and what you are consuming. For example, you could simply use a database, take topic, partition and offset as the primary key, and have columns for produced_at and consumed_at.
Every time your producer writes a message to the cluster, you have it update the produced_at column (look at ProducerInterceptor). Same on the consumer side: you could implement an interceptor that confirms having read the message, or confirm from the consumer itself once the message has successfully been processed.
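A sketch of the producer side of that idea (the class name is illustrative; the actual database write is left as a comment):

// Sketch only: record topic/partition/offset and a produced_at timestamp once the broker acknowledges the write.
public class ProducedAtInterceptor implements ProducerInterceptor<String, byte[]> {

    @Override
    public ProducerRecord<String, byte[]> onSend(ProducerRecord<String, byte[]> record) {
        return record; // nothing to change on the way out
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        if (exception == null && metadata != null) {
            // Update the produced_at column for (metadata.topic(), metadata.partition(),
            // metadata.offset()) in your tracking database here.
            System.out.printf("produced %s-%d@%d at %d%n", metadata.topic(),
                    metadata.partition(), metadata.offset(), System.currentTimeMillis());
        }
    }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}

The interceptor is registered on the producer via the interceptor.classes config (ProducerConfig.INTERCEPTOR_CLASSES_CONFIG).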
Or, if you don't need every message confirmed, you could just implement regular checkpointing every 10k messages or something similar and trust that the consumer read everything up to the last offset it confirmed.
There's also the possibility of injecting checkpoint messages into the stream at regular intervals, and when the consumer sees one of these it triggers an action; again, you have to trust the consumer that it got everything in between the checkpoints.
As I said initially, it all depends on your exact use case, if you give us more detail I'm sure we can come up with something that works for you.
Update:
If you want to retrieve the offset after sending a message to Kafka, you need to check the Future that the producer returns from send(); this will contain the offset.
// Send message and store the future
Future<RecordMetadata> messageFuture = producer.send(new ProducerRecord<String, byte[]>(topic, serialize(currentMessage)));
producer.flush();
// as flush blocks until all operations have been completed (regardless of success or failure) we can be sure
// that our future is available at this point
try {
RecordMetadata metaData = messageFuture.get();
System.out.println("Sent message with offset: " + metaData.offset());
} catch (Exception e) {
// do some error handling
}
You can expose the offsets of the producer and the consumer via Java Management Beans (JMX). Thereby you can do the comparison in real time using the JConsole provided with the JDK.
Read about Gauge on how to expose the offset position of the producer and the consumer.
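A minimal sketch of exposing an offset as a standard MBean so it is visible in JConsole (the class names and the object name are illustrative):

// Sketch: expose the last seen offset as an MBean attribute.
public interface OffsetGaugeMBean {
    long getLastOffset();
}

public class OffsetGauge implements OffsetGaugeMBean {
    private volatile long lastOffset = -1L;

    public void update(long offset) { lastOffset = offset; } // call from the producer callback or consumer loop

    @Override
    public long getLastOffset() { return lastOffset; }

    public static void register(OffsetGauge gauge, String name) throws Exception {
        ManagementFactory.getPlatformMBeanServer()
                .registerMBean(gauge, new ObjectName("myapp:type=OffsetGauge,name=" + name));
    }
}

Registering one gauge updated from the producer and one from the consumer lets you compare the two offsets side by side in JConsole.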