Whenever a user favorites some content on our site we collect the event, and the plan is to commit the aggregated favorites for a piece of content every hour and update the total favorite count in the DB.
We were evaluating Kafka Streams and followed the word count example. Our topology is simple: produce to a topic A, read from it and commit the aggregated data to another topic B, then consume the events from topic B every hour and commit them to the DB.
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
public StreamsConfig kStreamsConfigs() {
Map<String, Object> props = new HashMap<>();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "favorite-streams");
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG, WallclockTimestampExtractor.class.getName());
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, brokerAddress);
return new StreamsConfig(props);
}
@Bean
public KStream<String, String> kStream(StreamsBuilder kStreamBuilder) {
StreamsBuilder builder = streamBuilder();
KStream<String, String> source = builder.stream(topic);
source.flatMapValues(value -> Arrays.asList(value.toLowerCase(Locale.getDefault()).split("\\W+")))
.groupBy((key, value) -> value)
.count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>> as("counts-store")).toStream()
.to(topic + "-grouped", Produced.with(Serdes.String(), Serdes.Long()));
Topology topology = builder.build();
KafkaStreams streams = new KafkaStreams(topology, kStreamsConfigs());
streams.start();
return source;
}
@Bean
public StreamsBuilder streamBuilder() {
return new StreamsBuilder();
}
However, when I consume topic B it gives me the aggregated data from the beginning. My question is: is there a provision whereby I can consume the previous hour's grouped data, commit it to the DB, and then have Kafka forget about the previous hour's data and give me fresh data each hour rather than a cumulative sum? Is the design of the topology correct, or can we do something better?
If you want to get one aggregation result per hour, you can use a windowed aggregation with a window size of 1 hour.
stream.groupBy(...)
.windowedBy(TimeWindows.of(1 * 3600 * 1000))
.count(...)
Check the docs for more details: https://docs.confluent.io/current/streams/developer-guide/dsl-api.html#windowing
The output key type is Windowed<String> (not String). You need to provide a custom serde for Windowed<String>, or convert the key type. See the SessionWindowsExample for reference.
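For example, a minimal sketch of how the hourly count could be wired end to end, assuming a StreamsBuilder named builder as in the question's code; the topic names "favorites" and "favorites-hourly" are placeholders, and the Windowed<String> key is flattened to a plain String so the default String serde can still be used on the output topic:
KStream<String, String> source = builder.stream("favorites");
source.groupBy((key, value) -> value)
      .windowedBy(TimeWindows.of(3600 * 1000L))   // newer versions take TimeWindows.of(Duration.ofHours(1))
      .count()
      .toStream()
      // flatten the Windowed<String> key into "<key>@<windowStart>" so a plain String serde works
      .map((windowedKey, count) -> KeyValue.pair(
              windowedKey.key() + "@" + windowedKey.window().start(), count))
      .to("favorites-hourly", Produced.with(Serdes.String(), Serdes.Long()));
Each one-hour window then produces its own counts, so the hourly consumer sees that hour's totals rather than a cumulative sum.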
Related
Is it possible to merge records in Kafka and publish the output to a different stream?
For example, there is a stream of events produced to a Kafka topic like the one below:
{txnId:1,startTime:0900},{txnId:1,endTime:0905},{txnId:2,endTime:0912},{txnId:3,endTime:0930},{txnId:2,startTime:0912},{txnId:3,startTime:0925}......
I want to merge these events by txnId and create merged output like the following:
{txnId:1,startTime:0900,endTime:0905},{txnId:2,startTime:0910,endTime:0912},{txnId:3,startTime:0925,endTime:0930}
Please note that ordering is not maintained in the incoming events. So if the endTime event for a txnId is received before its startTime event, we need to wait until the startTime event arrives for that txnId before initiating the merge.
I went through the word count example that comes with Kafka Streams, but it's not clear how to wait for events and then merge them during the transformation.
Any thoughts are highly appreciated.
You could try solving this by splitting the start and end events into two separate streams keyed by txnId, and then joining the two streams.
KStream<String, String> eventSource = new StreamsBuilder().stream("INPUT-TOPIC");
KStream<String, JsonNode>[] splitEvents =
    eventSource.map((key, eventString) -> {
        try {
            JsonNode event = new ObjectMapper().readTree(eventString);
            String txnId = event.path("txnId").asText();
            // re-key each event by its txnId
            return KeyValue.pair(txnId, event);
        } catch (IOException e) {
            throw new RuntimeException("Unable to parse event: " + eventString, e);
        }
    })
    .branch((key, event) -> event.findValue("startTime") != null,
            (key, event) -> event.findValue("endTime") != null);
KStream<String, JsonNode> startEvents = splitEvents[0];
KStream<String, JsonNode> endEvents = splitEvents[1];
A join between the two streams as shown will produce a join result once matching events have arrived on both sides of the join within the window, so the order in which the start and end events arrive does not matter (you will have to ensure that you set an appropriate window period for the join).
Serde<JsonNode> jsonSerde = Serdes.serdeFrom(new JsonSerializer(), new JsonDeserializer());
KStream<String, String> completeEvents = startEvents.join(endEvents,
(startEvent, endEvent) -> {
// Add logic to merge startEvent and endEvent as seen fit
ObjectNode completeEvent = JsonNodeFactory.instance.objectNode();
completeEvent.put("startTime", startEvent.path("startTime).asText());
completeEvent.put("endTime", endEvent.path("endTime").asText());
return completeEvent.toString();
},
JoinWindows.of(Duration.ofMinutes(15)),
Joined.with(
Serdes.String(), // key
jsonSerde, // left object
jsonSerde // right object
)
);
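If the merged events need to be published to their own topic (as asked), the joined stream can then be written out, for instance like below (the topic name is a placeholder):
completeEvents.to("MERGED-OUTPUT-TOPIC", Produced.with(Serdes.String(), Serdes.String()));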
I'm very new to the Kafka Streams API.
I have a KStream like this:
KStream<Long, String> joinStream = builder.stream("output");
The record values in the KStream look like this:
The stream will be updated every 1s.
I need to build a REST API whose response is calculated from the profit and spotPrice values.
But I've been struggling to get the value of the last record.
I am assuming that by "the last value" you mean the max value of the stream, since values are continuously arriving. In that case you can use the reduce transformation to keep the output stream updated with the max value.
final StreamsBuilder builder = new StreamsBuilder();
KStream<Long, String> stream = builder.stream("INPUT_TOPIC", Consumed.with(Serdes.Long(), Serdes.String()));
stream
.mapValues(value -> Long.valueOf(value))
.groupByKey()
.reduce(new Reducer<Long>() {
@Override
public Long apply(Long currentMax, Long v) {
return (currentMax > v) ? currentMax : v;
}
})
.toStream().to("OUTPUT_TOPIC");
return builder.build();
And in case you want to retrieve it in a REST API, I suggest taking a look at Spring Cloud Stream + Kafka Streams (https://cloud.spring.io/spring-cloud-stream-binder-kafka/spring-cloud-stream-binder-kafka.html), which lets you bridge the stream output into Spring Web.
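Alternatively, a rough sketch of serving the value straight from the Streams application via an interactive query, assuming you materialize the reduce result in a named state store and have the KafkaStreams instance (here called streams) at hand; the store name "max-store" is an assumption:
KTable<Long, Long> maxTable = builder.stream("INPUT_TOPIC", Consumed.with(Serdes.Long(), Serdes.String()))
        .mapValues(Long::valueOf)
        .groupByKey(Grouped.with(Serdes.Long(), Serdes.Long()))
        .reduce((currentMax, v) -> currentMax > v ? currentMax : v,
                Materialized.<Long, Long, KeyValueStore<Bytes, byte[]>>as("max-store"));

// later, e.g. inside a REST handler, once the KafkaStreams instance is running
// (newer versions use streams.store(StoreQueryParameters.fromNameAndType(...)) instead)
ReadOnlyKeyValueStore<Long, Long> store =
        streams.store("max-store", QueryableStoreTypes.keyValueStore());
Long latestMax = store.get(someKey); // someKey = whatever key you aggregated by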
I am new to Kafka and I'm building a starter project using the Twitter API as a data source. I have created a Producer which queries the Twitter API and sends the data to my Kafka topic with a string serializer for both key and value. My Kafka Streams application reads this data and does a word count, but also groups by the date of the tweet. This part is done through a KTable called wordCounts to make use of its upsert functionality. The structure of this KTable is:
Key: {word: exampleWord, date: exampleDate}, Value: numberOfOccurences
I then attempt to restructure the data in the KTable stream to a flat structure so I can later send it to a database. You can see this in the wordCountsStructured KStream object. This restructures the data to look like the structure below. The value is initially a JsonObject, but I convert it to a string to match the Serdes which I set.
Key: null, Value: {word: exampleWord, date: exampleDate, Counts: numberOfOccurences}
However, when I try to send this to my second Kafka topic, I get the error below.
A serializer (key:
org.apache.kafka.common.serialization.StringSerializer / value:
org.apache.kafka.common.serialization.StringSerializer) is not
compatible to the actual key or value type (key type:
com.google.gson.JsonObject / value type: com.google.gson.JsonObject).
Change the default Serdes in StreamConfig or provide correct Serdes
via method parameters.
I'm confused by this since the KStream I am sending to the topic is of type <String, String>. Does anyone know how I might fix this?
public class TwitterWordCounter {
private final JsonParser jsonParser = new JsonParser();
public Topology createTopology(){
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> textLines = builder.stream("test-topic2");
KTable<JsonObject, Long> wordCounts = textLines
//parse each tweet as a tweet object
.mapValues(tweetString -> new Gson().fromJson(jsonParser.parse(tweetString).getAsJsonObject(), Tweet.class))
//map each tweet object to a list of json objects, each of which containing a word from the tweet and the date of the tweet
.flatMapValues(TwitterWordCounter::tweetWordDateMapper)
//update the key so it matches the word-date combination so we can do a groupBy and count instances
.selectKey((key, wordDate) -> wordDate)
.groupByKey()
.count(Materialized.as("Counts"));
/*
In order to structure the data so that it can be ingested into SQL, the value of each item in the stream must be straightforward: property, value
so we have to:
1. take the columns which include the dimensional data and put this into the value of the stream.
2. label the count with 'count' as the column name
*/
KStream<String, String> wordCountsStructured = wordCounts.toStream()
.map((key, value) -> new KeyValue<>(null, MapValuesToIncludeColumnData(key, value).toString()));
KStream<String, String> wordCountsPeek = wordCountsStructured.peek(
(key, value) -> System.out.println("key: " + key + "value:" + value)
);
wordCountsStructured.to("test-output2", Produced.with(Serdes.String(), Serdes.String()));
return builder.build();
}
public static void main(String[] args) {
Properties config = new Properties();
config.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-application1111");
config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "myIPAddress");
config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
config.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
config.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
TwitterWordCounter wordCountApp = new TwitterWordCounter();
KafkaStreams streams = new KafkaStreams(wordCountApp.createTopology(), config);
streams.start();
// shutdown hook to correctly close the streams application
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
}
//this method is used for taking a tweet and transforming it to a representation of the words in it plus the date
public static List<JsonObject> tweetWordDateMapper(Tweet tweet) {
try{
List<String> words = Arrays.asList(tweet.tweetText.split("\\W+"));
List<JsonObject> tweetsJson = new ArrayList<JsonObject>();
for(String word: words) {
JsonObject tweetJson = new JsonObject();
tweetJson.add("date", new JsonPrimitive(tweet.formattedDate().toString()));
tweetJson.add("word", new JsonPrimitive(word));
tweetsJson.add(tweetJson);
}
return tweetsJson;
}
catch (Exception e) {
System.out.println(e);
System.out.println(tweet.serialize().toString());
return new ArrayList<JsonObject>();
}
}
public JsonObject MapValuesToIncludeColumnData(JsonObject key, Long countOfWord) {
key.addProperty("count", countOfWord); //new JsonPrimitive(count));
return key;
}
Because you are performing a key-changing operation (selectKey) before the grouping, Kafka Streams creates a repartition topic, and for that topic it falls back to the default key and value serdes, which you have set to the String serde.
You can pass the serdes explicitly on the grouping call, e.g. groupByKey(Grouped.with(jsonObjectSerde, jsonObjectSerde)) — at that point both the key and the value are JsonObject — and this should fix the error.
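There is no built-in serde for Gson's JsonObject, so you would have to supply one. A minimal hand-rolled sketch (the class name is made up, and it assumes a Kafka clients version where Serializer/Deserializer have default configure/close so lambdas can be used):
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;

public class JsonObjectSerde {
    public static Serde<JsonObject> serde() {
        return Serdes.serdeFrom(
                // serializer: JsonObject -> UTF-8 bytes
                (topic, json) -> json == null ? null : json.toString().getBytes(StandardCharsets.UTF_8),
                // deserializer: UTF-8 bytes -> JsonObject
                (topic, bytes) -> bytes == null ? null
                        : new JsonParser().parse(new String(bytes, StandardCharsets.UTF_8)).getAsJsonObject());
    }
}
With that in place, the grouping becomes .groupByKey(Grouped.with(JsonObjectSerde.serde(), JsonObjectSerde.serde())) and the repartition topic no longer falls back to the String serdes.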
I am building an ecommerce application, where I am currently dealing with two data feeds: order executions and broken sales. A broken sale is an invalid execution, for a variety of reasons. A broken sale has the same order reference number as the order, so the join is on order ref # and line item #.
Currently, I have two topics - orders, and broken. Both have been defined using Avro Schemas, and built using SpecificRecord. The key is OrderReferenceNumber.
Fields for orders: OrderReferenceNumber, Timestamp, OrderLine, ItemNumber, Quantity
Fields for broken: OrderReferenceNumber, OrderLine, Timestamp
Corresponding Java classes were generated by running
mvn clean package
I need to left-join orders with broken and include the following fields in the output: OrderReferenceNumber, Timestamp, BrokenSaleTimestamp, OrderLine, ItemNumber, Quantity
Here is my code:
public static void main(String[] args) {
// Declare variables
final Map<String, String> avroSerdeConfig = Collections.singletonMap(KafkaAvroSerializerConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081");
// Add Kafka Streams Properties
Properties streamsProperties = new Properties();
streamsProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, "orderProcessor");
streamsProperties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
streamsProperties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
streamsProperties.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
streamsProperties.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, SpecificAvroSerde.class);
streamsProperties.put(KafkaAvroSerializerConfig.SCHEMA_REGISTRY_URL_CONFIG, "localhost:8081");
// Specify Kafka Topic Names
String orderTopic = "com.ecomapp.input.OrderExecuted";
String brokenTopic = "com.ecomapp.input.BrokenSale";
// Specify Serializer-Deserializer or Serdes for each Message Type
Serdes.StringSerde stringSerde = new Serdes.StringSerde();
Serdes.LongSerde longSerde = new Serdes.LongSerde();
// For the Order Executed Message
SpecificAvroSerde<OrderExecuted> ordersSpecificAvroSerde = new SpecificAvroSerde<OrderExecuted>();
ordersSpecificAvroSerde.configure(avroSerdeConfig, false);
// For the Broken Sale Message
SpecificAvroSerde<BrokenSale> brokenSpecificAvroSerde = new SpecificAvroSerde<BrokenSale>();
brokenSpecificAvroSerde.configure(avroSerdeConfig, false);
StreamsBuilder streamBuilder = new StreamsBuilder();
KStream<String, OrderExecuted> orders = streamBuilder
.stream(orderTopic, Consumed.with(stringSerde, ordersSpecificAvroSerde))
.selectKey((key, orderExec) -> orderExec.getMatchNumber().toString());
KStream<String, BrokenSale> broken = streamBuilder
.stream(brokenTopic, Consumed.with(stringSerde, brokenSpecificAvroSerde))
.selectKey((key, brokenS) -> brokenS.getMatchNumber().toString());
KStream<String, JoinOrdersExecutedNonBroken> joinOrdersNonBroken = orders
.leftJoin(broken,
(orderExec, brokenS) -> JoinOrdersExecutedNonBroken.newBuilder()
.setOrderReferenceNumber((Long) orderExec.get("OrderReferenceNumber"))
.setTimestamp((Long) orderExec.get("Timestamp"))
.setBrokenSaleTimestamp((Long) brokenS.get("Timestamp"))
.setOrderLine((Long) orderExec.get("OrderLine"))
.setItemNumber((String) orderExec.get("ItemNumber"))
.setQuantity((Long) orderExec.get("Quantity"))
.build(),
JoinWindows.of(TimeUnit.MILLISECONDS.toMillis(1)),
Joined.with(stringSerde, ordersSpecificAvroSerde, brokenSpecificAvroSerde))
.peek((key, value) -> System.out.println("key = " + key + ", value = " + value));
KafkaStreams orderStreams = new KafkaStreams(streamBuilder.build(), streamsProperties);
orderStreams.start();
// print the topology
System.out.println(orderStreams.localThreadsMetadata());
// shutdown hook to correctly close the streams application
Runtime.getRuntime().addShutdownHook(new Thread(orderStreams::close));
}
When I run this, I get the following maven compile error:
[ERROR] /Tech/Projects/jCom/src/main/java/com/ecomapp/kafka/orderProcessor.java:[96,26] incompatible types: cannot infer type-variable(s) VO,VR,K,V,VO
(argument mismatch; org.apache.kafka.streams.kstream.Joined<K,V,com.ecomapp.input.BrokenSale> cannot be converted to org.apache.kafka.streams.kstream.Joined<java.lang.String,com.ecomapp.OrderExecuted,com.ecomapp.input.BrokenSale>)
The issue really is in defining my ValueJoiner. The Confluent documentation is not very clear on how to do this when Avro schemas are involved (I can't find examples either). What is the right way to define this?
I'm not sure why Java cannot resolve the types here, but try
Joined.<String, OrderExecuted, BrokenSale>with(stringSerde, ordersSpecificAvroSerde, brokenSpecificAvroSerde)
to specify the types explicitly.
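For reference, with that change the join call from the question would look roughly like the sketch below (same field-accessor style as the question; note also that with a left join the right-hand value can be null when no broken sale matched, so it is guarded here, which assumes BrokenSaleTimestamp is nullable in the output schema):
KStream<String, JoinOrdersExecutedNonBroken> joinOrdersNonBroken = orders
        .leftJoin(broken,
                (orderExec, brokenS) -> JoinOrdersExecutedNonBroken.newBuilder()
                        .setOrderReferenceNumber((Long) orderExec.get("OrderReferenceNumber"))
                        .setTimestamp((Long) orderExec.get("Timestamp"))
                        // brokenS is null when the order had no matching broken sale in the window
                        .setBrokenSaleTimestamp(brokenS == null ? null : (Long) brokenS.get("Timestamp"))
                        .setOrderLine((Long) orderExec.get("OrderLine"))
                        .setItemNumber((String) orderExec.get("ItemNumber"))
                        .setQuantity((Long) orderExec.get("Quantity"))
                        .build(),
                JoinWindows.of(TimeUnit.MILLISECONDS.toMillis(1)),
                Joined.<String, OrderExecuted, BrokenSale>with(stringSerde, ordersSpecificAvroSerde, brokenSpecificAvroSerde));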
I'm playing around with Kafka Streams, trying to do basic aggregations (for the purposes of this question, just incrementing by 1 on each message). On the output topic that receives the changes made to the KTable, I get really weird output:
#B�
#C
#C�
#D
#D�
#E
#E�
#F
#F�
I recognize that the "�" means that it's printing out some kind of character that doesn't exist in the character set, but I'm not sure why. Here's my code for reference:
public class KafkaMetricsAggregator {
public static void main(final String[] args) throws Exception {
final String bootstrapServers = args.length > 0 ? args[0] : "my-kafka-ip:9092";
final Properties streamsConfig = new Properties();
streamsConfig.put(StreamsConfig.APPLICATION_ID_CONFIG, "metrics-aggregator");
// Where to find Kafka broker(s).
streamsConfig.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
// Specify default (de)serializers for record keys and for record values.
streamsConfig.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
streamsConfig.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
// Records should be flushed every 10 seconds. This is less than the default
// in order to keep this example interactive.
streamsConfig.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 10 * 1000);
// For illustrative purposes we disable record caches
streamsConfig.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
// Class to extract the timestamp from the event object
streamsConfig.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG, "my.package.EventTimestampExtractor");
// Set up serializers and deserializers, which we will use for overriding the default serdes
// specified above.
final Serde<JsonNode> jsonSerde = Serdes.serdeFrom(new JsonSerializer(), new JsonDeserializer());
final Serde<String> stringSerde = Serdes.String();
final Serde<Double> doubleSerde = Serdes.Double();
final KStreamBuilder builder = new KStreamBuilder();
final KTable<String, Double> aggregatedMetrics = builder.stream(jsonSerde, jsonSerde, "test2")
.groupBy(KafkaMetricsAggregator::generateKey, stringSerde, jsonSerde)
.aggregate(
() -> 0d,
(key, value, agg) -> agg + 1,
doubleSerde,
"metrics-table2");
aggregatedMetrics.to(stringSerde, doubleSerde, "metrics");
final KafkaStreams streams = new KafkaStreams(builder, streamsConfig);
// Only clean up in development
streams.cleanUp();
streams.start();
// Add shutdown hook to respond to SIGTERM and gracefully close Kafka Streams
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
}
}
EDIT: Using aggregatedMetrics.print(); does print out the correct output to the console:
[KSTREAM-AGGREGATE-0000000002]: my-generated-key , (43.0<-null)
Any ideas about what's going on?
You're using Serdes.Double() for your values, which uses an efficient binary encoding [1] for the serialized values, and that's what you're seeing on your topic. To get human-readable numbers on the console, you need to instruct the consumer to use the DoubleDeserializer too.
[1] https://github.com/apache/kafka/blob/e31c0c9bdbad432bc21b583bd3c084f05323f642/clients/src/main/java/org/apache/kafka/common/serialization/DoubleSerializer.java#L29-L44
Specify DoubleDeserializer as the value deserializer on the consumer's command line, as shown below:
--property value.deserializer=org.apache.kafka.common.serialization.DoubleDeserializer
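For example, with the console consumer that ships with Kafka (using the broker address and output topic from the code above):
bin/kafka-console-consumer.sh --bootstrap-server my-kafka-ip:9092 --topic metrics \
    --from-beginning \
    --property print.key=true \
    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
    --property value.deserializer=org.apache.kafka.common.serialization.DoubleDeserializer
You should then see the plain keys and double values (e.g. my-generated-key and 43.0) instead of the binary-encoded bytes.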