Configure Kafka consumer with EJB? Is it possible to configure an MDB for Kafka with EJB?

I want to replace an existing JMS message consumer in EJB with an Apache Kafka message consumer. I cannot figure out how to configure an Apache Kafka consumer through EJB configuration.

Kafka is not a straight messaging solution comparable to, say, RabbitMQ, but it can be used as one.
You will need to translate your JMS concepts (topics, queues) into Kafka topics (which are closer to JMS topics).
Also, given that consumers have a configurable, stored start offset, you will need to define offset policies for your consumers.
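Note that standard Java EE ships no Kafka resource adapter, so there is no drop-in MDB equivalent; a common workaround is to run the plain Kafka consumer from a startup singleton EJB. Below is a minimal sketch of that pattern, assuming a container-managed executor is available; the broker address, group id, and topic name are placeholders:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.enterprise.concurrent.ManagedExecutorService;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

@Singleton
@Startup
public class KafkaConsumerBean {

    @Resource
    private ManagedExecutorService executor; // container-managed threads

    private volatile boolean running = true;

    @PostConstruct
    void start() {
        executor.submit(this::pollLoop);
    }

    private void pollLoop() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("group.id", "ejb-consumer-group");        // placeholder group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (running) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(r -> onMessage(r.value())); // stands in for the MDB's onMessage()
            }
        }
    }

    private void onMessage(String payload) {
        // business logic that previously lived in the JMS MDB
    }

    @PreDestroy
    void stop() {
        running = false; // the poll loop exits after the current poll returns
    }
}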

Related

Kafka connector to read from a topic and write to a topic

I want to build a Kafka connector which reads from a Kafka topic, makes a call to a gRPC service to get some data, and writes the whole result into another Kafka topic.
I have written a Kafka sink connector which reads from a topic and calls a gRPC service, but I am not sure how to redirect this data into a Kafka topic.
Kafka Streams can read from topics, call external services as necessary, and then forward the result to a new topic in the same cluster.
MirrorMaker 2 can be used between different clusters, but using Connect transforms to call external services is generally not recommended.
Alternatively, you could make your gRPC service itself a Kafka producer.
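If you go the Kafka Streams route, a minimal sketch might look as follows; it assumes string serdes, placeholder broker and topic names, and a hypothetical blocking gRPC client wrapper MyGrpcClient.enrich:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class GrpcEnricher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "grpc-enricher");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("input-topic");   // placeholder topic
        source.mapValues(value -> MyGrpcClient.enrich(value))             // hypothetical gRPC call
              .to("output-topic");                                        // placeholder topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}

Note that mapValues blocks the stream thread per record, which is fine for moderate throughput; for high throughput you would batch or parallelize the gRPC calls.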

Regarding WSO2 ESB Kafka consumer project

I am creating two WSO2 ESB integration projects: one for a Kafka message producer and a second for a Kafka message consumer. I created the Kafka message producer successfully; on invocation of a REST API the message is dropped on the topic. When I create the Kafka message consumer, there is no transport such as Kafka for a proxy to listen on, and as per the documentation we need to use an inbound endpoint for this. My requirement is that the Kafka message should be received by the consumer automatically from a topic (similar to a JMS consumer) when a message is available on the topic. How can I achieve that using an inbound endpoint with the Kafka details?
Any inputs?
If you want to consume messages from a Kafka topic using an EI server, you need to use the Kafka inbound endpoint. This inbound endpoint periodically polls messages from the topic, and the polling interval is configurable. Refer to the documentation [1] for more information.
[1] https://docs.wso2.com/display/EI660/Kafka+Inbound+Protocol
I resolved the issue by using a separate WSO2 consumer integration project; the problems occurred while creating the inbound endpoint with type Kafka.

Apache Kafka: how to configure message buffering properly

I run a system comprising an InfluxDB instance, a Kafka broker, and data sources (sensors) producing time-series data. The purpose of the broker is to protect the database from inbound event overload and to serve as a format-agnostic platform for ingesting data. The data is transferred from Kafka to InfluxDB via Apache Camel routes.
I would like to use Kafka as an intermediate message buffer in case a Camel route crashes or becomes unavailable, which is the most frequent failure in the system. So far, I have not managed to configure Kafka in a manner that keeps inbound messages available for later consumption.
How do I configure it properly?
Messages are retained in Kafka topics according to the topic's retention policies (you can choose between time and byte-size limits), as described in the topic configuration documentation. With
cleanup.policy=delete
retention.ms=-1
the messages in a Kafka topic will never be deleted.
Your Camel consumer will then be able to re-read all messages (offsets) if you select a new consumer group or reset the offsets of the existing consumer group. Otherwise, your Camel consumer may auto-commit the offsets (check the corresponding consumer configuration), and it will not be possible to re-read them again with the same consumer group.
To limit the consumption rate of the Camel consumer, you can adjust configurations like maxPollRecords or fetchMaxBytes, which are described in the docs.
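For illustration, the two retention settings above can also be applied programmatically via the Kafka AdminClient. A minimal sketch, assuming Kafka 2.3+ brokers; the broker address and topic name are placeholders:

import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class RetentionSetup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "sensor-data"); // placeholder topic
            Collection<AlterConfigOp> ops = Arrays.asList(
                new AlterConfigOp(new ConfigEntry("cleanup.policy", "delete"), AlterConfigOp.OpType.SET),
                new AlterConfigOp(new ConfigEntry("retention.ms", "-1"), AlterConfigOp.OpType.SET));
            // incrementalAlterConfigs requires brokers on Kafka 2.3 or later
            admin.incrementalAlterConfigs(Collections.singletonMap(topic, ops)).all().get();
        }
    }
}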

Kafka consumer API (no ZooKeeper configuration)

I am using the Kafka client library that comes with Kafka 0.11.0.1. I noticed that the KafkaConsumer no longer needs a ZooKeeper configuration. Does that mean the ZooKeeper server will automatically be located via the Kafka bootstrap servers?
Since Kafka 0.9, the KafkaConsumer implementation stores offset commits and consumer group information in the Kafka brokers themselves. This eliminates the consumer's ZooKeeper dependency and increases the scalability of the consumers.
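To illustrate, a minimal 0.11-era consumer configuration only names the brokers; there is no ZooKeeper property at all (the broker addresses, group id, and topic below are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NoZooKeeperConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder brokers; no zookeeper.connect needed
        props.put("group.id", "demo-group");                         // placeholder group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic")); // placeholder topic
            while (true) {
                // poll(long) matches the 0.11 client; newer clients use poll(Duration)
                ConsumerRecords<String, String> records = consumer.poll(1000);
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }
}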

Two Spring Kafka consumers with different configurations in one application

I am a newbie to Kafka, working on a Spring Kafka POC. Our Kafka servers are Kerberized, and with all the required configuration we are able to access them. Now we have another requirement: we have to consume topics from non-Kerberized (plain) Kafka servers as well. Can we do this in a single application by creating another KafkaConsumer with its own listener?
Yes; just define a different consumer factory bean for the second consumer.
If you are using Spring Boot's auto-configuration, you will have to declare both factories manually, because the auto-configuration backs off when a user-defined bean is found.
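A minimal sketch of the two-factory setup, assuming string payloads; the broker addresses and security settings are placeholders for your environment:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class DualClusterConfig {

    private Map<String, Object> baseProps(String bootstrap) {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return props;
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kerberizedFactory() {
        Map<String, Object> props = baseProps("kerberized-broker:9092"); // placeholder address
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.kerberos.service.name", "kafka");
        // plus the sasl.jaas.config / keytab settings your environment requires
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(props));
        return factory;
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> plainFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(baseProps("plain-broker:9092"))); // placeholder address
        return factory;
    }
}

Each listener then picks its factory explicitly, e.g. @KafkaListener(topics = "secure-topic", containerFactory = "kerberizedFactory").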