How to print TimeWindowedKStream and KTable in Kafka streams? - apache-kafka

We have a Kafka process that takes a topic as input and writes timed windows to an output topic. The following code is being used. I would like to print the TimeWindowedKStream (groupedStream) and the KTable (aggregatedTable) and see their contents for debugging purposes.
String intopic = input_topic;
Long window = 60L;
String outtopic = output_topic;
final Serde<String> stringSerde = Serdes.String();
Properties property = new Properties();
property.put("bootstrap.servers", "127.0.0.1:9092");
property.put("group.id", "test-consumer-group");
property.put("application.id", "sliding-window-min-bar");
property.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, stringSerde.getClass().getName());
property.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, stringSerde.getClass().getName());
Duration windowSizeMs = Duration.ofMinutes(window);
StreamsBuilder builder = new StreamsBuilder();
System.out.println(intopic);
KStream<String, String> equitybar = builder.stream(intopic, Consumed.with(stringSerde, stringSerde));
System.out.println(equitybar);
equitybar.print(Printed.toSysOut());
// convert string of csv to a double on the mean value
KStream<String, String> transformedbar = equitybar
.map((key, value) -> KeyValue.pair(key, value.substring(1,value.length()-2).split(",")[2]));
System.out.println(transformedbar);
transformedbar.print(Printed.toSysOut());
// group by equity and sliding window
System.out.println(windowSizeMs);
System.out.println(TimeWindows.of(windowSizeMs).advanceBy(advanceMs));
TimeWindowedKStream<String, String> groupedStream = transformedbar.groupByKey().windowedBy(TimeWindows.of(windowSizeMs).advanceBy(advanceMs));
System.out.println(groupedStream);
KTable<Windowed<String>, String> aggregatedTable = groupedStream.aggregate(
() -> "|",
(aggKey, newValue, aggValue) -> aggValue + newValue.trim() + "|") ;
I tried to print it using the print command that is used for Kafka Streams - groupedStream.print(Printed.toSysOut()); - but it doesn't seem to work.
Thanks.

KGroupedStream and TimeWindowedKStream are "just" helper classes that allow the DSL to present a fluent API for chaining operators without putting too many overloads on a single class.
In the DSL, there are only two main abstractions, KStream and KTable, that are actual first-class data containers. Thus, what you want to do is not possible directly.
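What you can do instead is convert the result table back into a KStream and print that. A minimal sketch building on the code above (the label and the key formatting are my additions, not part of the original answer):
aggregatedTable
    .toStream()
    .map((windowedKey, value) -> KeyValue.pair(windowedKey.key() + "@" + windowedKey.window().start(), value))
    .print(Printed.<String, String>toSysOut().withLabel("aggregatedTable"));
This prints one line per update to the windowed aggregate, with the window start time appended to the key so the individual windows stay distinguishable.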

Related

Kafka streams join duplicates

Please don't mark this question as a duplicate of "kafka-streams join produces duplicates". I think my scenario is different. I'm also already using Kafka EOS via processing.guarantee=exactly_once.
I have an input topic transactions_topic with json data that looks like
{
"timestamp": "2022-10-08T13:04:30Z",
"transactionId": "842d38ea-1d3d-41a4-b724-bcc7e81aec9a",
"accountId": "account123",
"amount": 1.0
}
It's represented as a simple class using Lombok @Data:
@Data
class Transaction {
String transactionId;
String timestamp;
String accountId;
Double amount;
}
I want to compute the total amount spent by accountId for the past 1 hour, past 1 day and past 30 days. These computations are the features, represented by the following class:
@Data
public class Features {
double totalAmount1Hour;
double totalAmount1Day;
double totalAmount30Day;
}
I'm using kafka-streams and Spring Boot to achieve this.
First I subscribe to the input topic and select the accountId as the key:
KStream<String, Transaction> kStream = builder.stream(inputTopic,
Consumed.with(Serdes.String(), new JsonSerde<>(Transaction.class)).
withTimestampExtractor(new TransactionTimestampExtractor())).
selectKey((k,v)-> v.getAccountId());
TransactionTimestampExtractor is implemented as follows
public class TransactionTimestampExtractor implements TimestampExtractor {
@Override
public long extract(ConsumerRecord<Object, Object> consumerRecord, long l) {
Transaction value = (Transaction) consumerRecord.value();
long epoch = Instant.parse(value.getTimestamp()).toEpochMilli();
return epoch;
}
}
Now in order to compute the total amount for the past 1 hour, past 1 day and past 30 days, I created a function that will aggregate the amount based on a sliding window
private <T> KStream<String, T> windowAggregate(KStream<String, Transaction> kStream,
SlidingWindows window,
Initializer<T> initializer,
Aggregator<String, Transaction, T> aggregator,
Class<T> t) {
return kStream.
groupByKey(Grouped.with(Serdes.String(), new JsonSerde<>(Transaction.class))).
windowedBy(window).
aggregate(initializer,
aggregator,
Materialized.with(Serdes.String(), Serdes.serdeFrom(t))).
suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded())).
toStream().
map((k, v) -> KeyValue.pair(k.key(), v));
}
Now we can use it like
Aggregator<String, Transaction, Double> amountAggregator = (k, v, aggregate) -> aggregate + v.getAmount();
KStream<String, Double> totalAmount1Hour = windowAggregate(kStream, SlidingWindows.ofTimeDifferenceWithNoGrace(Duration.ofHours(1)), () -> 0.0, amountAggregator, Double.class);
KStream<String, Double> totalAmount1Day = windowAggregate(kStream, SlidingWindows.ofTimeDifferenceWithNoGrace(Duration.ofDays(1)), () -> 0.0, amountAggregator, Double.class);
KStream<String, Double> totalAmount30Day = windowAggregate(kStream, SlidingWindows.ofTimeDifferenceWithNoGrace(Duration.ofDays(30)), () -> 0.0, amountAggregator, Double.class);
Now all I need to do is to join these streams and return a new stream with Features as values
private KStream<String, Features> joinAmounts(KStream<String, Double> totalAmount1Hour, KStream<String, Double> totalAmount1Day, KStream<String, Double> totalAmount30Day) {
JoinWindows joinWindows = JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofSeconds(0));
KStream<String, Features> totalAmount1HourAnd1Day = totalAmount1Hour.join(totalAmount1Day,
(amount1Hour, amount1Day) -> {
Features features = new Features();
features.setTotalAmount1Hour(amount1Hour);
features.setTotalAmount1Day(amount1Day);
return features;
},
joinWindows,
StreamJoined.with(Serdes.String(), Serdes.Double(), Serdes.Double()));
KStream<String, Features> featuresKStream = totalAmount1HourAnd1Day.join(totalAmount30Day,
(features, amount30Day) -> {
features.setTotalAmount30Day(amount30Day);
return features;
},
joinWindows,
StreamJoined.with(Serdes.String(), new JsonSerde<>(Features.class), Serdes.Double()));
return featuresKStream;
}
I print the features stream for debugging purposes
KStream<String, Features> features = joinAmounts(totalAmount1Hour, totalAmount1Day, totalAmount30Day);
features.print(Printed.<String, Features>toSysOut().withLabel("features"));
This works and prints the correct values for the features. However, when I process the same payload more than once, the features stream produces duplicates. For example, processing the following payload twice produces the output shown below.
{
"timestamp":"2022-10-08T01:09:32Z",
"accountId":"account1",
"transactionId":"33694a6e-8c15-4cc2-964a-b8b0ecce2682",
"amount":1.0
}
Output
[features]: account1, Features(totalAmount1Hour=2.0, totalAmount1Day=1.0, totalAmount30Day=1.0)
[features]: account1, Features(totalAmount1Hour=1.0, totalAmount1Day=2.0, totalAmount30Day=1.0)
[features]: account1, Features(totalAmount1Hour=2.0, totalAmount1Day=2.0, totalAmount30Day=1.0)
[features]: account1, Features(totalAmount1Hour=1.0, totalAmount1Day=1.0, totalAmount30Day=2.0)
[features]: account1, Features(totalAmount1Hour=2.0, totalAmount1Day=1.0, totalAmount30Day=2.0)
[features]: account1, Features(totalAmount1Hour=1.0, totalAmount1Day=2.0, totalAmount30Day=2.0)
[features]: account1, Features(totalAmount1Hour=2.0, totalAmount1Day=2.0, totalAmount30Day=2.0)
My expected output would be just the last one
[features]: account1, Features(totalAmount1Hour=2.0, totalAmount1Day=2.0, totalAmount30Day=2.0)
How can I achieve this and get rid of the duplicates in the features stream? Is the kafka-streams join() doing a Cartesian product because I have the same timestamp and key?
Yes. toStream() converts each KTable back into a KStream, giving you the full changelog of the tables. Then, for every single change to each of the 3 tables, you also get a join result.
A better way to achieve what you want may be to chain your aggregations: generate the KTable for the 1-hour totals, derive the 1-day totals from that table, and from the resulting table finally generate the 30-day totals. See this wiki page for an example: https://cwiki.apache.org/confluence/display/KAFKA/Windowed+aggregations+over+successively+increasing+timed+windows
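A rough sketch of that chaining idea, reusing kStream and amountAggregator from the question. It assumes tumbling windows are acceptable for the rolling totals; TimeWindows.ofSizeWithNoGrace and the "accountId@day" key format are my own choices, not from the original answer:
KTable<Windowed<String>, Double> hourlyTotals = kStream
    .groupByKey(Grouped.with(Serdes.String(), new JsonSerde<>(Transaction.class)))
    .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofHours(1)))
    .aggregate(() -> 0.0, amountAggregator, Materialized.with(Serdes.String(), Serdes.Double()));

// re-key each hourly bucket to the day it falls into and re-aggregate with an
// adder and a subtractor, so that updates to an hour are corrected downstream
KTable<String, Double> dailyTotals = hourlyTotals
    .groupBy((windowedKey, hourTotal) -> KeyValue.pair(
                windowedKey.key() + "@" + Instant.ofEpochMilli(windowedKey.window().start()).truncatedTo(ChronoUnit.DAYS),
                hourTotal),
            Grouped.with(Serdes.String(), Serdes.Double()))
    .reduce(Double::sum,
            (dayTotal, oldHourTotal) -> dayTotal - oldHourTotal,
            Materialized.with(Serdes.String(), Serdes.Double()));
The 30-day totals can be derived from the daily table in the same way, so each transaction is aggregated once per level instead of joining three independent changelogs.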

kafka streams groupBy aggregate produces unexpected values

My question is about Kafka Streams KTable.groupBy.aggregate and the resulting aggregated values.
situation
I am trying to aggregate minute events per day.
I have a minute event generator (not shown here) that generates events for a few houses. Sometimes the event value is wrong and the minute event must be republished.
Minute events are published in the topic "minutes".
I am doing an aggregation of these events per day and house using kafka Streams groupBy and aggregate.
problem
Normally, as there are 1440 minutes in a day, there should never be an aggregation with more than 1440 values.
There should also never be an aggregation with a negative number of events.
... But it happens anyway, and we do not understand what is wrong in our code.
sample code
Here is a simplified code sample to illustrate the problem. The IllegalStateException is sometimes thrown.
StreamsBuilder builder = new StreamsBuilder();
KTable<String, MinuteEvent> minuteEvents = builder.table(
"minutes",
Consumed.with(Serdes.String(), minuteEventSerdes),
Materialized.<String, MinuteEvent, KeyValueStore<Bytes, byte[]>>with(Serdes.String(), minuteEventSerdes)
.withCachingDisabled());
// perform daily aggregation
KStream<String, MinuteAggregate> dayEvents = minuteEvents
// group by house and day
.filter((key, minuteEvent) -> minuteEvent != null && StringUtils.isNotBlank(minuteEvent.house))
.groupBy((key, minuteEvent) -> KeyValue.pair(
minuteEvent.house + "##" + minuteEvent.instant.atZone(ZoneId.of("Europe/Paris")).truncatedTo(ChronoUnit.DAYS), minuteEvent),
Grouped.<String, MinuteEvent>as("minuteEventsPerHouse")
.withKeySerde(Serdes.String())
.withValueSerde(minuteEventSerdes))
.aggregate(
MinuteAggregate::new,
(String key, MinuteEvent value, MinuteAggregate aggregate) -> aggregate.addLine(key, value),
(String key, MinuteEvent value, MinuteAggregate aggregate) -> aggregate.removeLine(key, value),
Materialized
.<String, MinuteAggregate, KeyValueStore<Bytes, byte[]>>as(BILLLINEMINUTEAGG_STORE)
.withKeySerde(Serdes.String())
.withValueSerde(minuteAggSerdes)
.withLoggingEnabled(new HashMap<>())) // keep this aggregate state forever
.toStream();
// check daily aggregation
dayEvents.filter((key, value) -> {
if (value.nbValues < 0) {
throw new IllegalStateException("got an aggregate with a negative number of values " + value.nbValues);
}
if (value.nbValues > 1440) {
throw new IllegalStateException("got an aggregate with too many values " + value.nbValues);
}
return true;
}).to("days", minuteAggSerdes);
Here are the sample classes used in this code snippet:
public class MinuteEvent {
public final String house;
public final double sensorValue;
public final Instant instant;
public MinuteEvent(String house,double sensorValue, Instant instant) {
this.house = house;
this.sensorValue = sensorValue;
this.instant = instant;
}
}
public class MinuteAggregate {
public int nbValues = 0;
public double totalSensorValue = 0.;
public String house = "";
public MinuteAggregate addLine(String key, MinuteEvent value) {
this.nbValues = this.nbValues + 1;
this.totalSensorValue = this.totalSensorValue + value.sensorValue;
this.house = value.house;
return this;
}
public MinuteAggregate removeLine(String key, MinuteEvent value) {
this.nbValues = this.nbValues -1;
this.totalSensorValue = this.totalSensorValue - value.sensorValue;
return this;
}
public MinuteAggregate() {
}
}
If someone could tell us what we are doing wrong here and why we have these unexpected values that would be great.
additional notes
we configure our stream job to run with 4 threads: properties.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);
we are forced to use KTable.groupBy().aggregate() because minute values can be republished with a different sensorValue for an already published Instant, and the daily aggregation must be modified accordingly; KStream.groupBy().aggregate() does not have an adder AND a subtractor.
I think it is actually possible that the count becomes negative temporarily.
The reason is that each update to your first KTable sends two messages downstream -- the old value to be subtracted from the downstream aggregation and the new value to be added to it. Both messages are processed independently in the downstream aggregation.
If the current count is zero and a subtraction is processed before an addition, the count becomes negative temporarily.
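As an illustration only (reusing the MinuteAggregate and MinuteEvent classes from the question; the day key string is made up): the subtractor and adder for a single upstream update arrive as two independent records, so the intermediate state can dip out of range before the matching add is applied.
MinuteAggregate dayAggregate = new MinuteAggregate();  // current count: 0
MinuteEvent oldValue = new MinuteEvent("house1", 10.0, Instant.parse("2022-01-01T00:01:00Z"));
MinuteEvent newValue = new MinuteEvent("house1", 12.0, Instant.parse("2022-01-01T00:01:00Z"));

dayAggregate.removeLine("house1##2022-01-01", oldValue);
System.out.println(dayAggregate.nbValues);  // -1: the transient state the downstream filter can observe
dayAggregate.addLine("house1##2022-01-01", newValue);
System.out.println(dayAggregate.nbValues);  // back to 0 once the matching add has been processed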

applying keyed state on top of stream from co group stream

I have two Kafka sources.
I am trying to perform a word count and merge the counts from the two streams.
I have created a 1-minute window for both data streams and applied CoGroupByKey; from the DoFn I emit (word, count) key-value pairs.
On top of this CoGroupByKey, I am applying a stateful ParDo.
Say I get (Test,2) from stream 1 and (Test,3) from stream 2 in the same window; then in the CoGroupByKey function I merge them as (Test,5). But if they do not fall in the same window, I emit (Test,2) and (Test,3) separately.
Now I apply state to merge these elements.
So the final result should be (Test,5), but I am not getting the expected result. All elements from stream 1 go to one partition and elements from stream 2 to another partition, which is why I am getting this result:
(Test,2)
(Test,3)
// word count stream from kafka topic 1
PCollection<KV<String,Long>> stream1 = ...
// word count stream from kafka topic 2
PCollection<KV<String,Long>> stream2 = ...
PCollection<KV<String,Long>> windowed1 =
stream1.apply(
Window
.<KV<String,Long>>into(FixedWindows.of(Duration.millis(60000)))
.triggering(Repeatedly.forever(AfterPane.elementCountAtLeast(1)))
.withAllowedLateness(Duration.millis(1000))
.discardingFiredPanes());
PCollection<KV<String,Long>> windowed2 =
stream2.apply(
Window
.<KV<String,Long>>into(FixedWindows.of(Duration.millis(60000)))
.triggering(Repeatedly.forever(AfterPane.elementCountAtLeast(1)))
.withAllowedLateness(Duration.millis(1000))
.discardingFiredPanes());
final TupleTag<Long> count1 = new TupleTag<Long>();
final TupleTag<Long> count2 = new TupleTag<Long>();
// Merge collection values into a CoGbkResult collection.
PCollection<KV<String, CoGbkResult>> joinedStream =
KeyedPCollectionTuple.of(count1, windowed1).and(count2, windowed2)
.apply(CoGroupByKey.<String>create());
// applying state operation after coGroupKey fun
PCollection<KV<String,Long>> finalCountStream =
joinedStream.apply(ParDo.of(
new DoFn<KV<String, CoGbkResult>, KV<String,Long>>() {
@StateId(stateId)
private final StateSpec<MapState<String, Long>> mapState =
StateSpecs.map();
@ProcessElement
public void processElement(
ProcessContext processContext,
@StateId(stateId) MapState<String, Long> state) {
KV<String, CoGbkResult> element = processContext.element();
Iterable<Long> counts1 = element.getValue().getAll(count1);
Iterable<Long> counts2 = element.getValue().getAll(count2);
Long sumAmount =
StreamSupport
.stream(
Iterables.concat(counts1, counts2).spliterator(), false)
.collect(Collectors.summingLong(n -> n));
System.out.println(element.getKey()+"::"+sumAmount);
// processContext.output(element.getKey()+"::"+sumAmount);
Long currCount =
state.get(element.getKey()).read() == null
? 0L
: state.get(element.getKey()).read();
Long newCount = currCount+sumAmount;
state.put(element.getKey(),newCount);
processContext.output(KV.of(element.getKey(),newCount));
}
}));
finalCountStream
.apply("finalState", ParDo.of(new DoFn<KV<String,Long>, String>() {
@StateId(myState)
private final StateSpec<MapState<String, Long>> mapState =
StateSpecs.map();
@ProcessElement
public void processElement(
ProcessContext c,
@StateId(myState) MapState<String, Long> state) {
KV<String,Long> e = c.element();
Long currCount = state.get(e.getKey()).read()==null
? 0L
: state.get(e.getKey()).read();
Long newCount = currCount+e.getValue();
state.put(e.getKey(),newCount);
c.output(e.getKey()+":"+newCount);
}
}))
.apply(KafkaIO.<Void, String>write()
.withBootstrapServers("localhost:9092")
.withTopic("test")
.withValueSerializer(StringSerializer.class)
.values());
Alternatively, you can use a Flatten + Combine approach, which should give you simpler code:
PCollection<KV<String, Long>> pc1 = ...;
PCollection<KV<String, Long>> pc2 = ...;
PCollectionList<KV<String, Long>> pcs = PCollectionList.of(pc1).and(pc2);
PCollection<KV<String, Long>> merged = pcs.apply(Flatten.<KV<String, Long>>pCollections());
merged.apply(window...).apply(Combine.perKey(Sum.ofLongs()))
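A possible expansion of that sketch (the 1-minute fixed window mirrors the one used earlier in the question and is my assumption, not part of the original answer):
PCollection<KV<String, Long>> windowedMerged = merged
    .apply(Window.<KV<String, Long>>into(FixedWindows.of(Duration.standardMinutes(1))));
PCollection<KV<String, Long>> summed = windowedMerged
    .apply(Combine.perKey(Sum.ofLongs()));
Combine.perKey sums all counts for a key within each window, regardless of which input stream they came from, so no stateful ParDo is needed for the merge.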
You have set up both streams with the trigger Repeatedly.forever(AfterPane.elementCountAtLeast(1)) and discardingFiredPanes(). This will cause the CoGroupByKey to output as soon as possible after each input element and then reset its state each time. So it is normal behavior that it basically passes each input straight through.
Let me explain more: CoGroupByKey is executed like this:
All elements from stream1 and stream2 are tagged as you specified. So every (key, value1) from stream1 effectively becomes (key, (count1, value1)), and every (key, value2) from stream2 becomes (key, (count2, value2)).
These tagged collections are flattened together. So now there is one collection with elements like (key, (count1, value1)) and (key, (count2, value2)).
The combined collection goes through a normal GroupByKey. This is where triggers happen. So with the default trigger, you get (key, [(count1, value1), (count2, value2), ...]) with all the values for a key getting grouped. But with your trigger, you will often get separate (key, [(count1, value1)]) and (key, [(count2, value2)]) because each grouping fires right away.
The output of the GroupByKey is just wrapped in the CoGbkResult API. In many runners this is just a filtered view of the grouped iterable.
Of course, triggers are nondeterministic and runners are also allowed to have different implementations of CoGroupByKey. But the behavior you are seeing is expected. You probably don't want to use a trigger like that or discarding mode, or else you need to do more grouping downstream.
Generally, doing a join with CoGBK is going to require some work downstream, until Beam supports retractions.
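One way to follow that advice, as a sketch (my adjustment, not from the original answer): keep the 1-minute fixed windows but drop the per-element trigger, allowed lateness and discarding mode, so each CoGroupByKey grouping fires once per key and window with the default trigger.
PCollection<KV<String, Long>> windowed1 =
    stream1.apply(Window.<KV<String, Long>>into(FixedWindows.of(Duration.standardMinutes(1))));
PCollection<KV<String, Long>> windowed2 =
    stream2.apply(Window.<KV<String, Long>>into(FixedWindows.of(Duration.standardMinutes(1))));
With both inputs windowed this way, (Test,2) and (Test,3) arriving in the same window are grouped into a single CoGbkResult and summed once.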
PipelineOptions options = PipelineOptionsFactory.create();
options.as(FlinkPipelineOptions.class)
.setRunner(FlinkRunner.class);
Pipeline p = Pipeline.create(options);
PCollection<KV<String,Long>> stream1 = new KafkaWordCount("localhost:9092","test1")
.build(p);
PCollection<KV<String,Long>> stream2 = new KafkaWordCount("localhost:9092","test2")
.build(p);
PCollectionList<KV<String, Long>> pcs = PCollectionList.of(stream1).and(stream2);
PCollection<KV<String, Long>> merged = pcs.apply(Flatten.<KV<String, Long>>pCollections());
merged.apply("finalState", ParDo.of(new DoFn<KV<String,Long>, String>() {
@StateId(myState)
private final StateSpec<MapState<String, Long>> mapState = StateSpecs.map();
@ProcessElement
public void processElement(ProcessContext c, @StateId(myState) MapState<String, Long> state) {
KV<String,Long> e = c.element();
System.out.println("Thread ID :"+ Thread.currentThread().getId());
Long currCount = state.get(e.getKey()).read()==null? 0L:state.get(e.getKey()).read();
Long newCount = currCount+e.getValue();
state.put(e.getKey(),newCount);
c.output(e.getKey()+":"+newCount);
}
})).apply(KafkaIO.<Void, String>write()
.withBootstrapServers("localhost:9092")
.withTopic("test")
.withValueSerializer(StringSerializer.class)
.values()
);
p.run().waitUntilFinish();

kafka stream windowed count output unreadable

I am trying a windowed count with the word count example. It works fine except that the output is partially unreadable.
Code:
StringSerializer stringSerializer = new StringSerializer();
StringDeserializer stringDeserializer = new StringDeserializer();
WindowedSerializer<String> windowedSerializer = new WindowedSerializer<>(stringSerializer);
WindowedDeserializer<String> windowedDeserializer = new WindowedDeserializer<>(stringDeserializer);
Serde<Windowed<String>> windowedSerde = Serdes.serdeFrom(windowedSerializer, windowedDeserializer);
TimeWindows window = TimeWindows.of(TimeUnit.MINUTES.toMillis(1)).advanceBy(TimeUnit.MINUTES.toMillis(1));
KStream<String, String> textLines = builder.stream("streams-plaintext-input");
KTable<Windowed<String>, Long> wordCounts = textLines
.flatMapValues(textLine -> Arrays.asList(textLine.toLowerCase().split("\\W+")))
.groupBy((key, word) -> word)
.windowedBy(window)
.count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("counts-store"));
wordCounts.toStream().to("streams-plaintext-output", Produced.with(windowedSerde, Serdes.Long()));
KafkaStreams streams = new KafkaStreams(builder.build(), config);
streams.start();
Output:
kafka c[?? 1
yaya c[?? 1
kafka c[?? 2
I guess the unreadable part might be the window duration.
What can I do to make it readable?
EDIT:
I tried to use windowedSerde to print the output:
KStream<Windowed<String>, Long> output = builder.stream("streams-plaintext-output");
output.print(windowedSerde, Serdes.Long());
It still doesn't work.
When reading from the topic you need to use a Deserializer appropriate for the Serializer that was used to produce to the topic. In this case, you need to use the windowedDeserializer, which you are already constructing like so:
WindowedDeserializer<String> windowedDeserializer = new WindowedDeserializer<>(stringDeserializer);
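A minimal consumer sketch (my addition, not part of the original answer) that pairs that windowedDeserializer with a LongDeserializer for the counts; the bootstrap server and group id are placeholders:
Properties consumerProps = new Properties();
consumerProps.put("bootstrap.servers", "localhost:9092");
consumerProps.put("group.id", "windowed-count-debug");
KafkaConsumer<Windowed<String>, Long> consumer =
    new KafkaConsumer<>(consumerProps, windowedDeserializer, new LongDeserializer());
consumer.subscribe(Collections.singletonList("streams-plaintext-output"));
while (true) {
    for (ConsumerRecord<Windowed<String>, Long> record : consumer.poll(100)) {
        // key().key() is the word, key().window().start() the window start in epoch millis
        System.out.println(record.key().key() + " @ " + record.key().window().start() + " : " + record.value());
    }
}
This prints a readable word, window start and count instead of the raw serialized window bytes.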

how to write emitted tuple into kafka topic

The application reads messages from one Kafka topic and, after storing them in MongoDB and doing some validations, writes them to another topic. The issue I am facing is that the application goes into an infinite loop.
The code I have is below:
Hosts zkHosts = new ZkHosts("localhost:2181");
String zkRoot = "/brokers/topics" ;
String clientRequestID = "reqtest";
String clientPendingID = "pendtest";
SpoutConfig kafkaRequestConfig = new SpoutConfig(zkHosts,"reqtest",zkRoot,clientRequestID);
SpoutConfig kafkaPendingConfig = new SpoutConfig(zkHosts,"pendtest",zkRoot,clientPendingID);
kafkaRequestConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
kafkaPendingConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
KafkaSpout kafkaRequestSpout = new KafkaSpout(kafkaRequestConfig);
KafkaSpout kafkaPendingSpout = new KafkaSpout(kafkaPendingConfig);
MongoBolt mongoBolt = new MongoBolt() ;
DeviceFilterBolt deviceFilterBolt = new DeviceFilterBolt() ;
KafkaRequestBolt kafkaReqBolt = new KafkaRequestBolt() ;
abc1DeviceBolt abc1DevBolt = new abc1DeviceBolt() ;
DefaultTopicSelector defTopicSelector = new DefaultTopicSelector(xyzKafkaTopic.RESPONSE.name()) ;
KafkaBolt kafkaRespBolt = new KafkaBolt()
.withTopicSelector(defTopicSelector)
.withTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper()) ;
TopologyBuilder topoBuilder = new TopologyBuilder();
topoBuilder.setSpout(xyzComponent.KAFKA_REQUEST_SPOUT.name(), kafkaRequestSpout);
topoBuilder.setSpout(xyzComponent.KAFKA_PENDING_SPOUT.name(), kafkaPendingSpout);
topoBuilder.setBolt(xyzComponent.KAFKA_PENDING_BOLT.name(),
deviceFilterBolt, 1)
.shuffleGrouping(xyzComponent.KAFKA_PENDING_SPOUT.name()) ;
topoBuilder.setBolt(xyzComponent.abc1_DEVICE_BOLT.name(),
abc1DevBolt, 1)
.shuffleGrouping(xyzComponent.KAFKA_PENDING_BOLT.name(),
xyzDevice.abc1.name()) ;
topoBuilder.setBolt(xyzComponent.MONGODB_BOLT.name(),
mongoBolt, 1)
.shuffleGrouping(xyzComponent.abc1_DEVICE_BOLT.name(),
xyzStreamID.KAFKARESP.name());
topoBuilder.setBolt(xyzComponent.KAFKA_RESPONSE_BOLT.name(),
kafkaRespBolt, 1)
.shuffleGrouping(xyzComponent.abc1_DEVICE_BOLT.name(),
xyzStreamID.KAFKARESP.name());
Config config = new Config() ;
config.setDebug(true);
config.setNumWorkers(1);
Properties props = new Properties();
props.put("metadata.broker.list", "localhost:9092");
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("request.required.acks", "1");
config.put(KafkaBolt.KAFKA_BROKER_PROPERTIES, props);
LocalCluster cluster = new LocalCluster();
try {
cluster.submitTopology("demo", config, topoBuilder.createTopology());
} catch (Exception e) {
e.printStackTrace();
}
In the above code, KAFKA_RESPONSE_BOLT writes the data to the topic.
abc1_DEVICE_BOLT feeds KAFKA_RESPONSE_BOLT by emitting data like this:
@Override
public void declareOutputFields(OutputFieldsDeclarer ofd) {
Fields respFields = IoTFields.getKafkaResponseFieldsRTEXY();
ofd.declareStream(IoTStreamID.KAFKARESP.name(), respFields);
}
@Override
public void execute(Tuple tuple, BasicOutputCollector collector) {
List<Object> newTuple = new ArrayList<Object>() ;
String params = tuple.getStringByField("params") ;
newTuple.add(3, params);
----
collector.emit(IoTStreamID.KAFKARESP.name(), newTuple);
}
I have been bothered by the same question for a long time; the answer is very simple... you will not believe it.
As far as I understand, the KafkaBolt implementation has to receive tuples with a field named "message", no matter whether the upstream component is a bolt or a spout. So you have to make some changes to your code, which I have not looked at carefully. (But I believe this would help!)
The specific reason is explained at https://mail-archives.apache.org/mod_mbox/incubator-storm-user/201409.mbox/%3C6AF1CAC6-60EA-49D9-8333-0343777B48A7#andrashatvani.com%3E
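A hedged sketch of what that implies for the emitting bolt (the "some-key" value is a placeholder; only the field names matter): declare and emit fields named "key" and "message" on the stream consumed by KAFKA_RESPONSE_BOLT, because the default FieldNameBasedTupleToKafkaMapper looks up exactly those field names on the incoming tuple.
@Override
public void declareOutputFields(OutputFieldsDeclarer ofd) {
    ofd.declareStream(IoTStreamID.KAFKARESP.name(), new Fields("key", "message"));
}
@Override
public void execute(Tuple tuple, BasicOutputCollector collector) {
    String params = tuple.getStringByField("params");
    // the emitted values must line up positionally with the declared fields
    collector.emit(IoTStreamID.KAFKARESP.name(), new Values("some-key", params));
}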