Difference between Flux.compose and Flux.transform? - reactive-programming

I am learning Reactive Streams and working with Publishers (Flux), specifically on transforming a Flux. For these transformations I found the compose and transform methods.
Here is my code:
private static void composeStream() {
    System.out.println("*********Calling composeStream************");
    Function<Flux<String>, Flux<String>> alterMap = f -> {
        return f.filter(color -> !color.equals("ram"))
                .map(String::toUpperCase);
    };
    Flux<String> compose = Flux.fromIterable(Arrays.asList("ram", "sam", "kam", "dam"))
            .doOnNext(System.out::println)
            .compose(alterMap);
    compose.subscribe(d -> System.out.println("Subscriber to Composed AlterMap :" + d));
    System.out.println("-------------------------------------");
}
private static void transformStream() {
    System.out.println("*********Calling transformStream************");
    Function<Flux<String>, Flux<String>> alterMap = f -> f.filter(color -> !color.equals("ram"))
            .map(String::toUpperCase);
    Flux.fromIterable(Arrays.asList("ram", "sam", "kam", "dam"))
            .doOnNext(System.out::println)
            .transform(alterMap)
            .subscribe(d -> System.out.println("Subscriber to Transformed AlterMap: " + d));
    System.out.println("-------------------------------------");
}
and here is the output, which is the same in both cases:
*********Calling transformStream************
ram
sam
Subscriber to Transformed AlterMap: SAM
kam
Subscriber to Transformed AlterMap: KAM
dam
Subscriber to Transformed AlterMap: DAM
-------------------------------------
*********Calling composeStream************
ram
sam
Subscriber to Composed AlterMap :SAM
kam
Subscriber to Composed AlterMap :KAM
dam
Subscriber to Composed AlterMap :DAM
-------------------------------------
What is the difference between the two?
Please suggest.

According to the documentation:
Transform this Flux in order to generate a target Flux. Unlike Flux#compose(Function), the provided function is executed as part of assembly.
What does it mean?
If we write a small test like the following:
int[] counter = new int[1];
Function<Flux<String>, Flux<String>> transformer = f -> {
    counter[0]++;
    return f;
};
Flux<String> flux = Flux.just("")
        .transform(transformer);

System.out.println(counter[0]);
flux.subscribe();
flux.subscribe();
flux.subscribe();
System.out.println(counter[0]);
In the output we will observe the following result:
1
1
This means that the transform function is executed once, during assembly of the pipeline; in other words, the transformation function is applied eagerly.
In turn, with .compose we get the following behavior for the same code:
int[] counter = new int[1];
Function<Flux<String>, Flux<String>> transformer = f -> {
    counter[0]++;
    return f;
};
Flux<String> flux = Flux.just("")
        .compose(transformer);

System.out.println(counter[0]);
flux.subscribe();
flux.subscribe();
flux.subscribe();
System.out.println(counter[0]);
And the output:
0
3
This means that the transformation function is executed separately for each subscriber, so we may consider that kind of execution lazy.
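To see why that matters, here is a minimal sketch (not from the original question; it assumes Reactor 3.x on the classpath) where the transformation captures per-subscription state. With compose the function runs once per subscriber, so each subscription gets its own id; with transform it would run once at assembly and every subscriber would share id 1. Note that later Reactor releases renamed compose to transformDeferred.
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;
import reactor.core.publisher.Flux;

public class ComposeVsTransformSketch {
    public static void main(String[] args) {
        AtomicInteger subscriptions = new AtomicInteger();

        Function<Flux<String>, Flux<String>> tagger = f -> {
            // runs once per subscriber with compose, once in total with transform
            int id = subscriptions.incrementAndGet();
            return f.map(v -> "sub-" + id + ": " + v);
        };

        Flux<String> composed = Flux.just("sam", "kam").compose(tagger);

        composed.subscribe(System.out::println); // sub-1: sam, sub-1: kam
        composed.subscribe(System.out::println); // sub-2: sam, sub-2: kam
    }
}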

Related

Group by and then ungroup causes loss of all events on beam TestPipeline

We found this line (mainly the reshuffle) to be INTERMITTENTLY dropping events (causing flaky tests) on Apache Beam's org.apache.beam.sdk.testing.TestPipeline ->
PCollection<OrderlyBeamDto<RosterRecord>> records = PCollectionList.of(csv.get(TupleTags.OUTPUT_RECORD)).and(excel.get(TupleTags.OUTPUT_RECORD))
    .apply(Flatten.pCollections())
    .apply("Reshuffle Records", Reshuffle.viaRandomKey());
So I modified it to this, and now we lose ALL events. In fact, the UngroupFn is never called:
PCollection<OrderlyBeamDto<RosterRecord>> records = PCollectionList.of(csv.get(TupleTags.OUTPUT_RECORD)).and(excel.get(TupleTags.OUTPUT_RECORD))
    .apply(Flatten.pCollections())
    .apply(ParDo.of(new CreateKeyFn()))
    .apply(GroupByKey.<String, OrderlyBeamDto<RosterRecord>>create())
    .apply(ParDo.of(new UngroupFn()));
The code for UngroupFn is simply this:
@ProcessElement
public void processElement(final ProcessContext c) {
    log.info("running ungroup");
    KV<String, Iterable<OrderlyBeamDto<RosterRecord>>> keyValue = c.element();
    Iterable<OrderlyBeamDto<RosterRecord>> list = keyValue.getValue();
    for (OrderlyBeamDto<RosterRecord> record : list) {
        log.info("ungroup=" + record.getValue().getRowNumber());
        c.output(record);
    }
}
We never see the 'running ungroup' log :(

kafka streams groupBy aggregate produces unexpected values

My question is about Kafka Streams KTable.groupBy.aggregate and the resulting aggregated values.
situation
I am trying to aggregate minute events per day.
I have a minute event generator (not shown here) that generates events for a few houses. Sometimes the event value is wrong and the minute event must be republished.
Minute events are published in the topic "minutes".
I am doing an aggregation of these events per day and house using kafka Streams groupBy and aggregate.
problem
Normally, as there are 1440 minutes in a day, there should never be an aggregation with more than 1440 values.
Also, there should never be an aggregation with a negative number of events.
... But it happens anyway, and we do not understand what is wrong in our code.
sample code
Here is simplified sample code to illustrate the problem. The IllegalStateException is sometimes thrown.
StreamsBuilder builder = new StreamsBuilder();
KTable<String, MinuteEvent> minuteEvents = builder.table(
        "minutes",
        Consumed.with(Serdes.String(), minuteEventSerdes),
        Materialized.<String, MinuteEvent, KeyValueStore<Bytes, byte[]>>with(Serdes.String(), minuteEventSerdes)
                .withCachingDisabled());

// perform daily aggregation
KStream<String, MinuteAggregate> dayEvents = minuteEvents
        // group by house and day
        .filter((key, minuteEvent) -> minuteEvent != null && StringUtils.isNotBlank(minuteEvent.house))
        .groupBy((key, minuteEvent) -> KeyValue.pair(
                minuteEvent.house + "##" + minuteEvent.instant.atZone(ZoneId.of("Europe/Paris")).truncatedTo(ChronoUnit.DAYS), minuteEvent),
                Grouped.<String, MinuteEvent>as("minuteEventsPerHouse")
                        .withKeySerde(Serdes.String())
                        .withValueSerde(minuteEventSerdes))
        .aggregate(
                MinuteAggregate::new,
                (String key, MinuteEvent value, MinuteAggregate aggregate) -> aggregate.addLine(key, value),
                (String key, MinuteEvent value, MinuteAggregate aggregate) -> aggregate.removeLine(key, value),
                Materialized
                        .<String, MinuteAggregate, KeyValueStore<Bytes, byte[]>>as(BILLLINEMINUTEAGG_STORE)
                        .withKeySerde(Serdes.String())
                        .withValueSerde(minuteAggSerdes)
                        .withLoggingEnabled(new HashMap<>())) // keep this aggregate state forever
        .toStream();

// check daily aggregation
dayEvents.filter((key, value) -> {
    if (value.nbValues < 0) {
        throw new IllegalStateException("got an aggregate with a negative number of values " + value.nbValues);
    }
    if (value.nbValues > 1440) {
        throw new IllegalStateException("got an aggregate with too many values " + value.nbValues);
    }
    return true;
}).to("days", minuteAggSerdes);
and here are the sample classes used in this code snippet:
public class MinuteEvent {
    public final String house;
    public final double sensorValue;
    public final Instant instant;

    public MinuteEvent(String house, double sensorValue, Instant instant) {
        this.house = house;
        this.sensorValue = sensorValue;
        this.instant = instant;
    }
}

public class MinuteAggregate {
    public int nbValues = 0;
    public double totalSensorValue = 0.;
    public String house = "";

    public MinuteAggregate addLine(String key, MinuteEvent value) {
        this.nbValues = this.nbValues + 1;
        this.totalSensorValue = this.totalSensorValue + value.sensorValue;
        this.house = value.house;
        return this;
    }

    public MinuteAggregate removeLine(String key, MinuteEvent value) {
        this.nbValues = this.nbValues - 1;
        this.totalSensorValue = this.totalSensorValue - value.sensorValue;
        return this;
    }

    public MinuteAggregate() {
    }
}
If someone could tell us what we are doing wrong here and why we have these unexpected values that would be great.
additional notes
we configure our streams job to run with 4 threads: properties.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);
we are forced to use KTable.groupBy().aggregate() because minute values can be
republished with a different sensorValue for an already published Instant, and the daily aggregation must be modified accordingly.
KStream.groupBy().aggregate() does not have both an adder AND a subtractor.
I think it is actually possible for the count to become negative temporarily.
The reason is that each update to your first KTable sends two messages downstream -- the old value to be subtracted from the downstream aggregation and the new value to be added to it. Both messages will be processed independently in the downstream aggregation.
If the current count is zero and a subtraction is processed before an addition, the count becomes negative temporarily.
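A hypothetical adjustment (a sketch, not part of the original answer) would be to treat such transient out-of-range values as warnings in the downstream check instead of throwing, since the subtract and add messages are processed separately:
// sketch: log transient out-of-range aggregates instead of failing the stream;
// assumes an SLF4J-style logger named log is available
dayEvents.filter((key, value) -> {
    if (value.nbValues < 0 || value.nbValues > 1440) {
        log.warn("transient aggregate for key {}: nbValues={}", key, value.nbValues);
    }
    return true;
}).to("days", minuteAggSerdes);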

applying keyed state on top of stream from co group stream

I have two Kafka sources.
I am trying to perform a word count and merge the counts from the two streams.
I have created a 1-minute window for both data streams and am applying CoGroupByKey; from the DoFn I emit <Key, Value> pairs of (word, count).
On top of this CoGroupByKey, I am applying a stateful ParDo.
Say I get (Test,2) from stream 1 and (Test,3) from stream 2 in the same window; then in the CoGroupByKey function I merge them as (Test,5). But if they do not fall in the same window, I emit (Test,2) and (Test,3) separately.
Now I apply state to merge these elements.
So as the final result I should get (Test,5), but I am not getting the expected result. All elements from stream 1 are going to one partition and elements from stream 2 to another partition, which is why I am getting:
(Test,2)
(Test,3)
// word count stream from kafka topic 1
PCollection<KV<String,Long>> stream1 = ...
// word count stream from kafka topic 2
PCollection<KV<String,Long>> stream2 = ...

PCollection<KV<String,Long>> windowed1 =
    stream1.apply(
        Window
            .<KV<String,Long>>into(FixedWindows.of(Duration.millis(60000)))
            .triggering(Repeatedly.forever(AfterPane.elementCountAtLeast(1)))
            .withAllowedLateness(Duration.millis(1000))
            .discardingFiredPanes());

PCollection<KV<String,Long>> windowed2 =
    stream2.apply(
        Window
            .<KV<String,Long>>into(FixedWindows.of(Duration.millis(60000)))
            .triggering(Repeatedly.forever(AfterPane.elementCountAtLeast(1)))
            .withAllowedLateness(Duration.millis(1000))
            .discardingFiredPanes());

final TupleTag<Long> count1 = new TupleTag<Long>();
final TupleTag<Long> count2 = new TupleTag<Long>();

// Merge collection values into a CoGbkResult collection.
PCollection<KV<String, CoGbkResult>> joinedStream =
    KeyedPCollectionTuple.of(count1, windowed1).and(count2, windowed2)
        .apply(CoGroupByKey.<String>create());

// applying state operation after the CoGroupByKey function
PCollection<KV<String,Long>> finalCountStream =
    joinedStream.apply(ParDo.of(
        new DoFn<KV<String, CoGbkResult>, KV<String,Long>>() {
            @StateId(stateId)
            private final StateSpec<MapState<String, Long>> mapState =
                StateSpecs.map();

            @ProcessElement
            public void processElement(
                    ProcessContext processContext,
                    @StateId(stateId) MapState<String, Long> state) {
                KV<String, CoGbkResult> element = processContext.element();
                Iterable<Long> values1 = element.getValue().getAll(count1);
                Iterable<Long> values2 = element.getValue().getAll(count2);
                Long sumAmount =
                    StreamSupport
                        .stream(
                            Iterables.concat(values1, values2).spliterator(), false)
                        .collect(Collectors.summingLong(n -> n));
                System.out.println(element.getKey() + "::" + sumAmount);
                // processContext.output(element.getKey()+"::"+sumAmount);
                Long currCount =
                    state.get(element.getKey()).read() == null
                        ? 0L
                        : state.get(element.getKey()).read();
                Long newCount = currCount + sumAmount;
                state.put(element.getKey(), newCount);
                processContext.output(KV.of(element.getKey(), newCount));
            }
        }));
finalCountStream
    .apply("finalState", ParDo.of(new DoFn<KV<String,Long>, String>() {
        @StateId(myState)
        private final StateSpec<MapState<String, Long>> mapState =
            StateSpecs.map();

        @ProcessElement
        public void processElement(
                ProcessContext c,
                @StateId(myState) MapState<String, Long> state) {
            KV<String,Long> e = c.element();
            Long currCount = state.get(e.getKey()).read() == null
                ? 0L
                : state.get(e.getKey()).read();
            Long newCount = currCount + e.getValue();
            state.put(e.getKey(), newCount);
            c.output(e.getKey() + ":" + newCount);
        }
    }))
    .apply(KafkaIO.<Void, String>write()
        .withBootstrapServers("localhost:9092")
        .withTopic("test")
        .withValueSerializer(StringSerializer.class)
        .values());
Alternatively, you can use a Flatten + Combine approach, which should give you simpler code:
PCollection<KV<String, Long>> pc1 = ...;
PCollection<KV<String, Long>> pc2 = ...;
PCollectionList<KV<String, Long>> pcs = PCollectionList.of(pc1).and(pc2);
PCollection<KV<String, Long>> merged = pcs.apply(Flatten.<KV<String, Long>>pCollections());
merged.apply(window...).apply(Combine.perKey(Sum.ofLongs()))
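The window placeholder in the last line is left as in the original; as a rough sketch (assuming the same 1-minute fixed window used in the question's code), the combined pipeline could look like:
// assumed 1-minute fixed window, mirroring the question's FixedWindows.of(Duration.millis(60000))
PCollection<KV<String, Long>> summed = merged
    .apply(Window.<KV<String, Long>>into(FixedWindows.of(Duration.millis(60000))))
    .apply(Combine.perKey(Sum.ofLongs()));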
You have set up both streams with the trigger Repeatedly.forever(AfterPane.elementCountAtLeast(1)) and discardingFiredPanes(). This will cause the CoGroupByKey to output as soon as possible after each input element and then reset its state each time. So it is normal behavior that it basically passes each input straight through.
Let me explain more: CoGroupByKey is executed like this:
All elements from stream1 and stream2 are tagged as you specified. So every (key, value1) from stream1 effectively becomes (key, (count1, value1)), and every (key, value2) from stream2 becomes (key, (count2, value2)).
These tagged collections are flattened together. So now there is one collection with elements like (key, (count1, value1)) and (key, (count2, value2)).
The combined collection goes through a normal GroupByKey. This is where triggers happen. So with the default trigger, you get (key, [(count1, value1), (count2, value2), ...]) with all the values for a key getting grouped. But with your trigger, you will often get separate (key, [(count1, value1)]) and (key, [(count2, value2)]) because each grouping fires right away.
The output of the GroupByKey is just wrapped in an API, CoGbkResult. In many runners this is just a filtered view of the grouped iterable.
Of course, triggers are nondeterministic and runners are also allowed to have different implementations of CoGroupByKey. But the behavior you are seeing is expected. You probably don't want to use a trigger like that or discarding mode, or else you need to do more grouping downstream.
Generally, doing a join with CoGBK is going to require some work downstream, until Beam supports retractions.
PipelineOptions options = PipelineOptionsFactory.create();
options.as(FlinkPipelineOptions.class)
    .setRunner(FlinkRunner.class);
Pipeline p = Pipeline.create(options);

PCollection<KV<String,Long>> stream1 = new KafkaWordCount("localhost:9092", "test1")
    .build(p);
PCollection<KV<String,Long>> stream2 = new KafkaWordCount("localhost:9092", "test2")
    .build(p);

PCollectionList<KV<String, Long>> pcs = PCollectionList.of(stream1).and(stream2);
PCollection<KV<String, Long>> merged = pcs.apply(Flatten.<KV<String, Long>>pCollections());

merged.apply("finalState", ParDo.of(new DoFn<KV<String,Long>, String>() {
    @StateId(myState)
    private final StateSpec<MapState<String, Long>> mapState = StateSpecs.map();

    @ProcessElement
    public void processElement(ProcessContext c, @StateId(myState) MapState<String, Long> state) {
        KV<String,Long> e = c.element();
        System.out.println("Thread ID :" + Thread.currentThread().getId());
        Long currCount = state.get(e.getKey()).read() == null ? 0L : state.get(e.getKey()).read();
        Long newCount = currCount + e.getValue();
        state.put(e.getKey(), newCount);
        c.output(e.getKey() + ":" + newCount);
    }
})).apply(KafkaIO.<Void, String>write()
    .withBootstrapServers("localhost:9092")
    .withTopic("test")
    .withValueSerializer(StringSerializer.class)
    .values()
);

p.run().waitUntilFinish();

Rx Operator: Take from Observable whenever latest value from other Observable is true

I'm looking for an Rx operator with the following semantics:
Observable<int> inputValues = …;
Observable<bool> gate = …;
inputValues
    .combineLatest(gate)
    .filter((pair<int, bool> pair) -> { return pair.second; })
    .map((pair<int, bool> pair) -> { return pair.first; });
It emits values from the first Observable while the latest value from the second Observable is true. Using combineLatest, we also get a value when the second Observable becomes true. If we don't want that, we could use withLatestFrom in place of combineLatest.
Does this operator exist (in any Rx implementation)?
My apologies for giving you C# code, but here's how this can be written. Hopefully someone can translate for me.
Use the .Switch operator. It turns an IObservable<IObservable<T>> into an IObservable<T> by only returning the values produced by the latest observable returned by the outer observable.
var _gate = new Subject<bool>();
var _inputValues = new Subject<int>();
IObservable<int> inputValues = _inputValues;
IObservable<bool> gate = _gate;
IObservable<int> query =
    gate
        .StartWith(true)
        .Select(g => g ? inputValues : Observable.Never<int>())
        .Switch();

query
    .Subscribe(v => Console.WriteLine(v));
_inputValues.OnNext(1);
_inputValues.OnNext(2);
_inputValues.OnNext(3);
_gate.OnNext(false);
_inputValues.OnNext(4);
_inputValues.OnNext(5);
_gate.OnNext(true);
_inputValues.OnNext(6);
This produces:
1
2
3
6
If inputValues is a ReplaySubject then you can do this to make the query work correctly:
IObservable<int> query =
    inputValues
        .Publish(i =>
            gate
                .StartWith(true)
                .Select(g => g ? i : Observable.Never<int>())
                .Switch());
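Since the answer invites a translation, here is a rough RxJava 2 equivalent (a sketch, not part of the original answer): the gate selects between the live input and Observable.never(), and switchMap drops the previous selection whenever the gate changes. As in the C# version, values emitted while the gate is false are simply lost.
import io.reactivex.Observable;
import io.reactivex.subjects.PublishSubject;

public class GateWithSwitchSketch {
    public static void main(String[] args) {
        PublishSubject<Boolean> gate = PublishSubject.create();
        PublishSubject<Integer> inputValues = PublishSubject.create();

        Observable<Integer> query = gate
                .startWith(true)
                .switchMap(open -> open ? inputValues : Observable.<Integer>never());

        query.subscribe(System.out::println);

        inputValues.onNext(1);
        inputValues.onNext(2);
        inputValues.onNext(3);
        gate.onNext(false);   // 4 and 5 below are dropped while the gate is closed
        inputValues.onNext(4);
        inputValues.onNext(5);
        gate.onNext(true);
        inputValues.onNext(6);
        // prints: 1, 2, 3, 6
    }
}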

rxjava2 - how to zip Maybe that can be empty?

I am attempting to make 3 web service calls (e.g. getPhoneNumber, getFirstName, getLastName) and collect the answers into a common Person object. Any of the web service calls can return Maybe.empty().
When attempting to zip the responses together, RxJava 2 skips over the zip operation and terminates normally (without having aggregated my answer).
For a simplified example, see below:
@Test
public void maybeZipEmptyTest() throws Exception {
    Maybe<Integer> a = Maybe.just(1);
    Maybe<Integer> b = Maybe.just(2);
    Maybe<Integer> empty = Maybe.empty();

    TestObserver<String> observer = Maybe.zip(a, b, empty, (x, y, e) -> {
        String output = "test: a " + x + " b " + y + " empty " + e;
        return output;
    })
    .doOnSuccess(output -> {
        System.out.println(output);
    })
    .test();

    observer.assertNoErrors();
}
How can we collect empty values within a zip operation instead of having the zip operation skipped/ignored? If this is the wrong pattern to solve this problem, how would you recommend solving this?
For most use cases, leveraging the defaultIfEmpty method is the right way to go.
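As a quick illustration of that first approach, here is a sketch using the question's Maybes with an arbitrary sentinel of -1 (it assumes RxJava 2, where defaultIfEmpty keeps the Maybe type; in RxJava 3 it returns a Single, in which case Single.zip would be used instead):
Maybe<Integer> a = Maybe.just(1);
Maybe<Integer> b = Maybe.just(2);
Maybe<Integer> empty = Maybe.empty();

// -1 is an arbitrary sentinel: defaultIfEmpty guarantees every source emits a value,
// so the zip function always fires
Maybe.zip(
        a.defaultIfEmpty(-1),
        b.defaultIfEmpty(-1),
        empty.defaultIfEmpty(-1),
        (x, y, e) -> "test: a " + x + " b " + y + " empty " + e)
    .subscribe(System.out::println); // test: a 1 b 2 empty -1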
For representing something that is truly optional (where no default value makes sense), I used the Java 8 Optional type instead. For example:
@Test
public void maybeZipEmptyTest() throws Exception {
    Maybe<Optional<Integer>> a = Maybe.just(Optional.of(1));
    Maybe<Optional<Integer>> b = Maybe.just(Optional.of(2));
    Maybe<Optional<Integer>> empty = Maybe.just(Optional.empty());

    TestObserver<String> observer = Maybe.zip(a, b, empty, (x, y, e) -> {
        String output = "test: a " + toStringOrEmpty(x) + " b " + toStringOrEmpty(y) + " empty " + toStringOrEmpty(e);
        return output;
    })
    .doOnSuccess(output -> {
        System.out.println(output);
    })
    .test();

    observer.assertNoErrors();
}

private String toStringOrEmpty(Optional<Integer> value) {
    if (value.isPresent()) {
        return value.get().toString();
    } else {
        return "";
    }
}