I have a Kafka consumer group running on Node.js, powered by kafka-node. When this consumer group is active or inactive, I expect to see it reported by the kafka-consumer-groups CLI.
The kafka-consumer-groups CLI shows only the console consumers, not the node consumer.
I can see the node consumer group in Kafka Tool, but it doesn't show up in the kafka-consumer-groups CLI output:
kafka-consumer-groups --bootstrap-server localhost:9092 --list
kafka-consumer-groups --bootstrap-server localhost:9092 --group node-kafka-consumer --describe
The kafka-consumer-groups CLI should show all consumers - console and programmatic (in my case, the kafka-node consumer).
Here is the solution. It uses the kafka-node ConsumerGroup object, which commits offsets to Kafka instead of ZooKeeper:
const kafka = require('kafka-node');
const { ConsumerGroup } = kafka;

const consumerOptions = {
  kafkaHost: 'localhost:9092',
  groupId: 'kafka-node-consumer-group',
  protocol: ['roundrobin'],
  fromOffset: 'earliest'
};

const topics = ['zoo_animals'];
const consumerGroup = new ConsumerGroup(
  { id: 'node-app-1', ...consumerOptions },
  topics
);

consumerGroup.on('message', onMessage);
consumerGroup.on('error', onError);

function onMessage(message) {
  console.log('message', message);
}

function onError(error) {
  console.log('error', error);
}

// close(true, ...) commits the current offsets before shutting down
process.once('SIGINT', function() {
  consumerGroup.close(true, err => {
    if (err) {
      console.log('error closing consumer', err);
    } else {
      console.log('closed consumer');
    }
  });
});
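Once the group commits its offsets to Kafka, the same CLI calls from the question should report it, e.g.:
kafka-consumer-groups --bootstrap-server localhost:9092 --list
kafka-consumer-groups --bootstrap-server localhost:9092 --group kafka-node-consumer-group --describe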
Related
I'm trying to send a message string to a Kafka topic (example) but I'm getting this error:
error: Failed to send data to Kafka server: Expiring 1 record(s) for example-0:120010 ms has passed since batch creation {}
Producer code
kafka:ProducerConfiguration producerConfiguration = {
    clientId: "basic-producer",
    acks: "all",
    retryCount: 3
};
kafka:Producer kafkaProducer = check new (kafka:DEFAULT_URL, producerConfiguration);

public function main() returns error? {
    string message = "Hello World, Ballerina";
    check kafkaProducer->send({
        topic: "example",
        value: message.toBytes()
    });
    check kafkaProducer->'flush();
}
How I created the topic:
bin/kafka-topics.sh --create --topic example --replication-factor 1 --partitions 2 --bootstrap-server localhost:9092
I am using the default server.properties/zookeeper.properties files provided by the Kafka distribution.
I am trying to create a simple NodeJS app which would produce messages to Kafka and consume them.
Below is the NodeJS code.
config.js
module.exports = {
  kafka_topic: 'catalog',
  kafka_server: 'localhost:9092',
};
nodejs-producer.js
const kafka = require('kafka-node');
const config = require('./config');

try {
  // set the desired timeout in options
  const options = {
    timeout: 5000,
  };

  const Producer = kafka.Producer;
  const client = new kafka.KafkaClient({ kafkaHost: config.kafka_server, requestTimeout: 5000 });
  const producer = new Producer(client);
  const kafka_topic = config.kafka_topic;

  let payloads = [
    {
      topic: kafka_topic,
      messages: 'This is test message'
    }
  ];

  producer.on('ready', async function() {
    let push_status = producer.send(payloads, (err, data) => {
      if (err) {
        console.log(err.toString());
        console.log('[kafka-producer -> ' + kafka_topic + ']: broker update failed');
      } else {
        console.log(data.toString());
        console.log('[kafka-producer -> ' + kafka_topic + ']: broker update success');
      }
    });
  });

  producer.on('error', function(err) {
    console.log(err);
    console.log('[kafka-producer -> ' + kafka_topic + ']: connection errored');
    throw err;
  });
}
catch(e) {
  console.log(e);
}
kafka version = 2.8.0
kafka-node version = 5.0.0
I am getting the error - Error: LeaderNotAvailable
How do I fix this? I tried playing with different values in the server.properties file, like advertised.listeners, but didn't find a solution.
I have already answered this problem here.
In short: this problem happens when trying to produce messages to a topic that doesn't exist.
You may configure your Kafka installation to automatically create topics in such a case. What will then happen, in order: you will still receive the error message, and the framework will create the topic. In my case I then had to produce the same message a second time, but this was on an old version of Kafka.
EDIT:
Here is a link to a post which explains how to set up your Kafka configuration to automatically create Kafka topics.
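On the broker side this comes down to a single server.properties flag; a minimal sketch (the two defaults below only affect topics the broker auto-creates):
# server.properties: create a topic automatically the first time it is used
auto.create.topics.enable=true
# defaults applied to auto-created topics
num.partitions=1
default.replication.factor=1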
I have also faced the same issue while sending a message. I solved it by adding a partition in the payload and using the same partition in the consumer as well.
Code I have used:
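The snippet itself isn't shown above, so here is a minimal kafka-node sketch of the idea, with illustrative topic and partition values (the catalog topic is borrowed from the question):
const kafka = require('kafka-node');

// producer: pin the message to an explicit partition
const producerClient = new kafka.KafkaClient({ kafkaHost: 'localhost:9092' });
const producer = new kafka.Producer(producerClient);
producer.on('ready', () => {
  producer.send([{ topic: 'catalog', messages: 'This is test message', partition: 0 }], (err, data) => {
    if (err) console.log(err);
    else console.log(data);
  });
});

// consumer: fetch from the same partition the producer wrote to
const consumerClient = new kafka.KafkaClient({ kafkaHost: 'localhost:9092' });
const consumer = new kafka.Consumer(
  consumerClient,
  [{ topic: 'catalog', partition: 0 }],
  { autoCommit: true }
);
consumer.on('message', message => console.log(message));
consumer.on('error', err => console.log(err));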
Since I got this error in the development environment, I solved the problem by deleting the ZooKeeper snapshot and the Kafka logs (which hold the consumer offsets).
NOTE: Don't do this in production.
rm -rf /tmp/zookeeper
rm -rf /tmp/kafka-logs
Since @EnableBinding and @StreamListener(Sink.INPUT) were deprecated in favor of functions, I need to create a consumer that reads messages from a Kafka topic.
My consumer function:
@Bean
public Consumer<Person> log() {
    return person -> {
        System.out.println("Received: " + person);
    };
}
and my application.yml config:
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
      bindings:
        consumer:
          destination: messages
          contentType: application/json
Instead of connecting to the messages topic, it keeps connecting to a log-in-0 topic.
How can I fix this?
In the functional model the binding name is derived from the function bean name, so the input binding for log is log-in-0. Set the destination on that binding:
spring.cloud.stream.bindings.log-in-0.destination=messages
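The same setting in application.yml form (the function bean here is named log, matching the code above):
spring:
  cloud:
    stream:
      bindings:
        log-in-0:
          destination: messages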
I'm trying to produce multiple events to the same Kafka topic in a batch; only the last event ends up on Kafka and the earlier ones are not sent.
// pseudocode of what I'm doing
// producer
await kafka.produce(
  event1 { message: "vito", topic: "corleone" },
  event2 { message: "sonny", topic: "corleone" },
  event3 { message: "fredo", topic: "corleone" }
)
// consumer listening to topic "corleone"
kafka.handler(payload) {
  log(payload) // prints "fredo" but doesn't print "vito" or "sonny"
}
What works though is if I have these events go to different topics:
// producer
await kafka.produce(
  event1 { message: "vito", topic: "corleone" },
  event2 { message: "sonny", topic: "deadinpart1" },
  event3 { message: "fredo", topic: "deadinpart2" }
)
If I do that, I receive all three events (by listening to the three topics), which makes me think that Kafka might not support multiple messages to the same topic in a batch.
My producer settings looks like this:
const kafkaConfig: KafkaConfigSchema = {
  brokers: config().kafka.brokers, // array of brokers
  useSasl: config().kafka.useSasl, // true
  useSsl: config().kafka.useSsl,   // true
  username: config().kafka.username,
  password: config().kafka.password,
  groupId: config().kafka.groupId, // a unique string
};
Are there any settings I am missing, or am I doing something wrong architecturally by sending messages that share a topic in the same batch?
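For what it's worth, Kafka itself does accept several messages for one topic in a single produce request. As a point of comparison, a minimal sketch with kafkajs (an assumption here, since the kafka.produce wrapper above is custom and its underlying client isn't shown):
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'batch-test', brokers: ['localhost:9092'] });
const producer = kafka.producer();

async function produceBatch() {
  await producer.connect();
  // one topic, several messages in the same request: all three should arrive
  await producer.send({
    topic: 'corleone',
    messages: [
      { value: 'vito' },
      { value: 'sonny' },
      { value: 'fredo' },
    ],
  });
  await producer.disconnect();
}

produceBatch().catch(console.error);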
I'm using ELK 5.0.1 and Kafka 0.10.1.0. I'm not sure why my logs aren't forwarding. I installed kafkacat and was successfully able to produce and consume logs from all three servers where the Kafka cluster is installed.
shipper.conf
input {
  file {
    start_position => "beginning"
    path => "/var/log/logstash/logstash-plain.log"
  }
}
output {
  kafka {
    topic_id => "stash"
    bootstrap_servers => "<i.p1>:9092,<i.p2>:9092,<i.p3>:9092"
  }
}
receiver.conf
input {
  kafka {
    topics => ["stash"]
    group_id => "stashlogs"
    bootstrap_servers => "<i.p1>:2181,<i.p2>:2181,<i.p3>:2181"
  }
}
output {
  elasticsearch {
    hosts => ["<eip>:9200","<eip>:9200","<eip>:9200"]
    manage_template => false
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
Logs: I'm getting the warnings below in logstash-plain.log:
[2017-04-17T16:34:28,238][WARN ][org.apache.kafka.common.protocol.Errors] Unexpected error code: 38.
[2017-04-17T16:34:28,238][WARN ][org.apache.kafka.clients.NetworkClient] Error while fetching metadata with correlation id 44 : {stash=UNKNOWN}
It looks like your bootstrap servers are using ZooKeeper ports. Try using the Kafka ports instead (default 9092).
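That is, the kafka input in receiver.conf should point at the brokers' Kafka listeners, just as the shipper output already does:
input {
  kafka {
    topics => ["stash"]
    group_id => "stashlogs"
    bootstrap_servers => "<i.p1>:9092,<i.p2>:9092,<i.p3>:9092"
  }
}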