How to unregister two Vert.x consumers and return an RxJava Completable? - rx-java2

I need some help with RxJava. Currently I have two hash maps; each hash map contains Vert.x message consumers keyed by a subscription ID. I want to return a Completable that completes only if I am able to unregister both Vert.x message consumers. How can I achieve this?
Here is the code I am working on:
@Override
public Completable deregisterKeyEvents(String subscriptionId) {
    MessageConsumer<JsonObject> messageConsumer = consumerMap.get(subscriptionId);
    MessageConsumer<JsonObject> subscriptionConsumer = subscriptionConsumerMap.get(subscriptionId);
    if (subscriptionConsumer != null) {
        subscriptionConsumerMap.remove(subscriptionId);
        subscriptionConsumer.unregister(res -> {
            if (res.succeeded()) {
                LOGGER.debug("Subscription channel consumer deregistered successfully!");
            } else {
                LOGGER.error("Unable to de-register Subscription channel consumer");
            }
        });
    }
    if (messageConsumer != null) {
        consumerMap.remove(subscriptionId);
        return Completable.create(emitter -> {
            messageConsumer.unregister(res -> {
                if (res.succeeded()) {
                    emitter.onComplete();
                } else {
                    emitter.onError(res.cause());
                }
            });
        });
    } else {
        LOGGER.warn("There was no consumer registered!");
        return Completable.create(emitter -> emitter.onError(new KvNoSuchElementException("Subscription '" + subscriptionId + "' not found")));
    }
}
I want to rewrite the above code so that a Completable is returned only when both subscriptionConsumer.unregister() and messageConsumer.unregister() have succeeded.
The MessageConsumer class is from the Vert.x library (io.vertx.core.eventbus.MessageConsumer).
I'd appreciate any help, thank you.

If you're willing to add Vert.x RxJava2 to your dependencies, you could do this with toCompletable:
@Override
public Completable deregisterKeyEvents(String subscriptionId) {
    MessageConsumer<JsonObject> messageConsumer = consumerMap.get(subscriptionId);
    MessageConsumer<JsonObject> subscriptionConsumer = subscriptionConsumerMap.get(subscriptionId);
    Completable c1;
    if (subscriptionConsumer != null) {
        subscriptionConsumerMap.remove(subscriptionId);
        c1 = CompletableHelper.toCompletable(handler -> subscriptionConsumer.unregister(handler))
                .doOnComplete(() -> LOGGER.debug("Subscription channel consumer deregistered successfully!"))
                .doOnError(t -> LOGGER.error("Unable to de-register Subscription channel consumer", t));
    } else {
        c1 = Completable.complete();
    }
    Completable c2;
    if (messageConsumer != null) {
        consumerMap.remove(subscriptionId);
        c2 = CompletableHelper.toCompletable(handler -> messageConsumer.unregister(handler));
    } else {
        LOGGER.warn("There was no consumer registered!");
        c2 = Completable.error(new KvNoSuchElementException("Subscription '" + subscriptionId + "' not found"));
    }
    return c1.concatWith(c2);
}
Note that this is a bit different from your original code because:
- the messageConsumer unregistration happens only after the unregistration of subscriptionConsumer, and
- the messageConsumer unregistration happens only if the unregistration of subscriptionConsumer was successful.
You can use a different Completable operator if that's not the behavior you want.
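For instance, if both unregistrations should always be attempted and any failure reported only after both have finished, a merge-based variant of the last line could look like this (a sketch, reusing the c1 and c2 built above):
// Subscribe to both unregistrations together and delay any error until both
// have terminated, so a failure in one does not prevent the other from running.
return Completable.mergeArrayDelayError(c1, c2);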

Related

How to send an Avro message to a new topic using a functional KStream (processor)

I am getting "could not be established. Broker may not be available" when the message is sent to the new topic "failureTopic" or when any exception occurs. I am using Kafka version 3.0.0.
@Bean
@SuppressWarnings("unchecked")
public Function<KStream<String, AvroClass>, KStream<String, AvroClass>> process() {
    final AvroClass[] finalMessage = {null};
    return input -> input.branch((k, avroMessage) -> {
        try {
            finalMessage[0] = subprocess();
            if (finalMessage[0] != null)
                return true;
            else {
                Message<NewAvroClass> mess = MessageBuilder.withPayload(avroMessage).build();
                streamBridge.send("failureTopic", mess);
                return false;
            }
        } catch (Exception e) {
            handleProcessingException(k, avroMessage);
            return false;
        }
    },
    (k, v) -> true)[0].map((k, v) -> new KeyValue<>(k, finalMessage[0]));
}

Flink handling Kafka messages with parsing error

I have some Kafka messages of type InputIoTMessage coming in from Kafka and consumed through FlinkKafkaConsumer as shown below. I want to add an error field to the InputIoTMessage class if there is a NoSuchFieldException. Also, is this the best practice for handling this type of scenario, or is there something more elegant in Java 8, e.g. using Optional or Future?
String inputTopic = "sensors";
String outputTopic = "sensors_out";
String consumerGroup = "baeldung";
String address = "kafka:9092";
StreamExecutionEnvironment environment = StreamExecutionEnvironment.getExecutionEnvironment();
FlinkKafkaConsumer011<InputIoTMessage> flinkKafkaConsumer = createIoTConsumerForTopic(inputTopic, address, consumerGroup);
flinkKafkaConsumer.setStartFromEarliest();
DataStream<InputIoTMessage> stringInputStream = environment.addSource(flinkKafkaConsumer);
System.out.println("IoT Message received :: ");
stringInputStream
    .filter((event) -> {
        if (event.has("jsonParseError")) {
            LOG.warn("JsonParseException was handled: " + event.get("jsonParseError").asText());
            return false;
        }
        return true;
    })
    .print();
InputIoTMessage.java (has method to check if field exists)
public boolean has(String fieldName) {
    boolean isExists;
    try {
        isExists = fieldName.equalsIgnoreCase(this.getClass().getField(fieldName).getName());
    } catch (NoSuchFieldException | SecurityException e) {
        Field[] fieldArr = this.getClass().getDeclaredFields();
        // Question: how to add the "jsonParseError" field to the object here?
    }
    return true;
}
The filter function does not modify the input records. You could implement a flatMap function instead: after modifying the record, emit it through out.collect (see the sketch after the code below).
stringInputStream.flatMap(new FlatMapFunction<InputIoTMessage, InputIoTMessage>() {
    @Override
    public void flatMap(InputIoTMessage input, Collector<InputIoTMessage> out) {
        if (!input.has("jsonParseError")) {
            InputIoTMessage output = xxxxx; // build the modified record here
            out.collect(output);
        }
    }
});
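As a concrete illustration, here is a minimal sketch of that idea; it assumes a hypothetical withJsonParseError(...) method on InputIoTMessage that returns a copy of the record with the error field populated (not part of the original class):
// Sketch only: withJsonParseError(...) is assumed, not part of the original InputIoTMessage.
DataStream<InputIoTMessage> tagged = stringInputStream
        .flatMap(new FlatMapFunction<InputIoTMessage, InputIoTMessage>() {
            @Override
            public void flatMap(InputIoTMessage input, Collector<InputIoTMessage> out) {
                if (input.has("jsonParseError")) {
                    // Emit a tagged copy instead of silently dropping the record.
                    out.collect(input.withJsonParseError("JSON parse error"));
                } else {
                    out.collect(input);
                }
            }
        });
tagged.print();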

Kafka reactor - How to disable the Kafka consumer from being auto-started?

Below is my Kafka consumer:
#Bean("kafkaConfluentInboundReceiver")
#ConditionalOnProperty(value = "com.demo.kafka.core.inbound.confluent.topic-name",
matchIfMissing = false)
public KafkaReceiver<String, Object> kafkaInboundReceiver() {
ReceiverOptions<String, Object> receiverOptions = ReceiverOptions.create(inboundConsumerConfigs());
receiverOptions.schedulerSupplier(() -> Schedulers
.fromExecutorService(applicationContext.getBean("inboundKafkaExecutorService", ExecutorService.class)));
receiverOptions.maxCommitAttempts(kafkaProperties.getKafka().getCore().getMaxCommitAttempts());
return KafkaReceiver.create(receiverOptions.addAssignListener(Collection::iterator)
.subscription(Collections.singleton(
kafkaProperties.getKafka()
.getCore().getInbound().getConfluent()
.getTopicName()))
.commitInterval(Duration.ZERO).commitBatchSize(0));
}
My Kafka consumer is getting started automatically. However, I want to disable the consumer from being auto-started.
I got to know that in Spring Kafka we can do something like this:
factory.setAutoStartup(start);
However, I am not sure how to achieve this (i.e. control the auto start/stop behavior) in Kafka reactor. I want to have something like below.
Introducing a property to handle the auto start/stop behavior:
@Value("${consumer.autostart:true}")
private boolean start;
Using the above property, I should be able to set the Kafka auto-start flag in Kafka reactor, something like this:
return KafkaReceiver.create(receiverOptions.addAssignListener(Collection::iterator)
        .subscription(Collections.singleton(
                kafkaProperties.getKafka()
                        .getCore().getInbound().getConfluent()
                        .getTopicName()))
        .commitInterval(Duration.ZERO).commitBatchSize(0)).setAutoStart(start);
Note: .setAutoStart(start);
Is this doable in Kafka reactor? If so, how do I do it?
Update:
protected void inboundEventHubListener(String topicName, List<String> allowedValues) {
    Scheduler scheduler = Schedulers.fromExecutorService(kafkaExecutorService);
    kafkaEventHubInboundReceiver
            .receive()
            .publishOn(scheduler)
            .groupBy(receiverRecord -> {
                try {
                    return receiverRecord.receiverOffset().topicPartition();
                } catch (Throwable throwable) {
                    log.error("exception in groupby", throwable);
                    return Flux.empty();
                }
            }).flatMap(partitionFlux -> partitionFlux.publishOn(scheduler)
                    .map(record -> {
                        processMessage(record, topicName, allowedValues).block(
                                Duration.ofSeconds(60L)); // This subscribe is to trigger processing of a message
                        return record;
                    }).concatMap(message -> {
                        log.info("Received message after processing offset: {} partition: {} ",
                                message.offset(), message.partition());
                        return message.receiverOffset()
                                .commit()
                                .onErrorContinue((t, o) -> log.error(
                                        String.format("exception raised while commit offset %s", o), t)
                                );
                    })).onErrorContinue((t, o) -> {
                try {
                    if (null != o) {
                        ReceiverRecord<String, Object> record = (ReceiverRecord<String, Object>) o;
                        ReceiverOffset offset = record.receiverOffset();
                        log.debug("failed to process message: {} partition: {} and message: {} ",
                                offset.offset(), record.partition(), record.value());
                    }
                    log.error(String.format("exception raised while processing message %s", o), t);
                } catch (Throwable inner) {
                    log.error("encountered error in onErrorContinue", inner);
                }
            }).subscribeOn(scheduler).subscribe();
}
Can I do something like this?
kafkaEventHubInboundReceiverObj = kafkaEventHubInboundReceiver.....subscribeOn(scheduler);
if(consumer.autostart) {
kafkaEventHubInboundReceiverObj.subscribe();
}
With reactor-kafka there is no concept of "auto start"; you are in complete control.
The consumer is not "started" until you subscribe to the Flux returned from receiver.receive().
Simply delay the flux.subscribe() until you are ready to consume data.
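A minimal sketch of that idea, reusing start (the @Value("${consumer.autostart:true}") flag), kafkaEventHubInboundReceiver, and scheduler from the question; the method and field names here are only illustrative:
// Sketch only: the pipeline is assembled eagerly, but nothing is polled from
// Kafka until subscribe() is called on it.
private Disposable inboundSubscription;

protected void startInboundListenerIfEnabled() {
    Flux<ReceiverRecord<String, Object>> pipeline = kafkaEventHubInboundReceiver
            .receive()
            .publishOn(scheduler);
            // ... the same groupBy/flatMap/commit operators as in the question ...

    if (start) {
        // Consumption starts only here.
        inboundSubscription = pipeline.subscribe();
    }
}

protected void stopInboundListener() {
    if (inboundSubscription != null) {
        inboundSubscription.dispose(); // stops polling and closes the flow
    }
}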

Implementing resource queue in rx

I have a hot observable Observable<Resource> resources that represents consumable resources and I want to queue up consumers Action1<Resource> for these resources. A Resource can be used by at most 1 consumer. It should not be used at all once a new value is pushed from resources. If my consumers were also wrapped in a hot observable then the marble-diagram of what I'm after would be
--A--B--C--D--E--
----1----2--34---
----A----C--D-E--
----1----2--3-4--
I've managed a naive implementation using a PublishSubject and zip but this only works if each resource is consumed before a new resource is published (i.e. instead of the required sequence [A1, C2, D3, E4] this implementation will actually produce [A1, B2, C3, D4]).
This is my first attempt at using rx and I've had a play around with both delay and join but can't quite seem to get what I'm after. I've also read that ideally Subjects should be avoided, but I can't see how else I would implement this.
public class ResourceQueue<Resource> {
    private final PublishSubject<Action1<Resource>> consumers = PublishSubject.create();

    public ResourceQueue(Observable<Resource> resources) {
        resources.zipWith(this.consumers, new Func2<Resource, Action1<Resource>, Object>() {
            @Override
            public Object call(Resource resource, Action1<Resource> consumer) {
                consumer.call(resource);
                return null;
            }
        }).publish().connect();
    }

    public void queue(final Action1<Resource> consumer) {
        consumers.onNext(consumer);
    }
}
Is there a way to achieve what I'm after? Is there a more 'rx-y' approach to the solution?
EDIT: replaced the withLatestFrom suggestion with combineLatest.
The only solution I can think of is to use combineLatest to get all the possible combinations, and manually exclude the ones that you do not need:
final ExecutorService executorService = Executors.newCachedThreadPool();

final Observable<String> resources = Observable.create(s -> {
    Runnable r = new Runnable() {
        @Override
        public void run() {
            final List<Integer> sleepTimes = Arrays.asList(200, 200, 200, 200, 200);
            for (int i = 0; i < sleepTimes.size(); i++) {
                try {
                    Thread.sleep(sleepTimes.get(i));
                } catch (Exception e) {
                    e.printStackTrace();
                }
                String valueOf = String.valueOf((char) (i + 97));
                System.out.println("new resource " + valueOf);
                s.onNext(valueOf);
            }
            s.onCompleted();
        }
    };
    executorService.submit(r);
});

final Observable<Integer> consumers = Observable.create(s -> {
    Runnable r = new Runnable() {
        @Override
        public void run() {
            final List<Integer> sleepTimes = Arrays.asList(300, 400, 200, 0);
            for (int i = 0; i < sleepTimes.size(); i++) {
                try {
                    Thread.sleep(sleepTimes.get(i));
                } catch (Exception e) {
                    e.printStackTrace();
                }
                System.out.println("new consumer " + (i + 1));
                s.onNext(i + 1);
            }
            s.onCompleted();
        }
    };
    executorService.submit(r);
});

final LatestValues latestValues = new LatestValues();

final Observable<String> combineLatest = Observable.combineLatest(consumers, resources, (c, r) -> {
    if (latestValues.alreadyProcessedAnyOf(c, r)) {
        return "";
    }
    System.out.println("consumer " + c + " will consume resource " + r);
    latestValues.updateWithValues(c, r);
    return c + "_" + r;
});

combineLatest.subscribe();

executorService.shutdown();
executorService.awaitTermination(10, TimeUnit.SECONDS);
The class holding the latest consumers and resources.
static class LatestValues {
    Integer latestConsumer = Integer.MAX_VALUE;
    String latestResource = "";

    public boolean alreadyProcessedAnyOf(Integer c, String r) {
        return latestConsumer.equals(c) || latestResource.equals(r);
    }

    public void updateWithValues(Integer c, String r) {
        latestConsumer = c;
        latestResource = r;
    }
}
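As a small follow-up to the sketch above: if the empty "" markers emitted for already-processed combinations should not reach subscribers, a filter could be appended before subscribing (an addition of mine, not part of the original answer):
combineLatest
        .filter(value -> !value.isEmpty()) // drop the "" markers for skipped combinations
        .subscribe(System.out::println);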

Does commitOffsets on high-level consumer block?

In the Java Client (http://kafka.apache.org/documentation.html#highlevelconsumerapi), does commitOffsets on the high-level consumer block until offsets are successfully committed, or is it fire-and-forget?
Does commitOffsets on the high-level consumer block until offsets are successfully committed?
It looks like commitOffsets() loops through each topic and partition, calling updatePersistentPath if its offset has changed, in which case the data is written via zkClient.writeData(path, getBytes(data)). It appears as though commitOffsets() does block until all the offsets are committed.
Here is the source code for commitOffsets() (ref):
public void commitOffsets() {
    if (zkClient == null) {
        logger.error("zk client is null. Cannot commit offsets");
        return;
    }
    for (Entry<String, Pool<Partition, PartitionTopicInfo>> e : topicRegistry.entrySet()) {
        ZkGroupTopicDirs topicDirs = new ZkGroupTopicDirs(config.getGroupId(), e.getKey());
        for (PartitionTopicInfo info : e.getValue().values()) {
            final long lastChanged = info.getConsumedOffsetChanged().get();
            if (lastChanged == 0) {
                logger.trace("consume offset not changed");
                continue;
            }
            final long newOffset = info.getConsumedOffset();
            // path: /consumers/<group>/offsets/<topic>/<brokerid-partition>
            final String path = topicDirs.consumerOffsetDir + "/" + info.partition.getName();
            try {
                ZkUtils.updatePersistentPath(zkClient, path, "" + newOffset);
            } catch (Throwable t) {
                logger.warn("exception during commitOffsets, path=" + path + ",offset=" + newOffset, t);
            } finally {
                info.resetComsumedOffsetChanged(lastChanged);
                if (logger.isDebugEnabled()) {
                    logger.debug("Committed [" + path + "] for topic " + info);
                }
            }
        }
    }
}
and for updatePersistentPath(...) (ref):
public static void updatePersistentPath(ZkClient zkClient, String path, String data) {
    try {
        zkClient.writeData(path, getBytes(data));
    } catch (ZkNoNodeException e) {
        createParentPath(zkClient, path);
        try {
            zkClient.createPersistent(path, getBytes(data));
        } catch (ZkNodeExistsException e2) {
            zkClient.writeData(path, getBytes(data));
        }
    }
}