I am using the default server.properties/zookeeper.properties files provided with the Kafka distribution.
I am trying to create a simple NodeJS app that sends messages to Kafka through a producer and consumes them.
Below is the NodeJS code.
config.js
module.exports = {
  kafka_topic: 'catalog',
  kafka_server: 'localhost:9092',
};
nodejs-producer.js
const kafka = require('kafka-node');
const config = require('./config');

try {
  // the request timeout is set directly on the client below
  const Producer = kafka.Producer;
  const client = new kafka.KafkaClient({ kafkaHost: config.kafka_server, requestTimeout: 5000 });
  const producer = new Producer(client);
  const kafka_topic = config.kafka_topic;

  const payloads = [
    {
      topic: kafka_topic,
      messages: 'This is test message'
    }
  ];

  producer.on('ready', function() {
    producer.send(payloads, (err, data) => {
      if (err) {
        console.log(err.toString());
        console.log('[kafka-producer -> ' + kafka_topic + ']: broker update failed');
      } else {
        console.log(JSON.stringify(data));
        console.log('[kafka-producer -> ' + kafka_topic + ']: broker update success');
      }
    });
  });

  producer.on('error', function(err) {
    console.log(err);
    console.log('[kafka-producer -> ' + kafka_topic + ']: connection errored');
    throw err;
  });
}
catch (e) {
  console.log(e);
}
kafka version = 2.8.0
kafka-node version = 5.0.0
I am getting the error: Error: LeaderNotAvailable
How do I fix this? I tried different values in the server.properties file, such as advertised.listeners, but didn't find a solution.
I have already answered this problem here.
In short: this problem happens when trying to produce messages to a topic that doesn't exist.
You may configure your Kafka installation to automatically create the topic in such a case. What happens then, in order, is: you still receive the error message, and the framework creates the topic. In my case I then had to produce the same message a second time, but that was on an old version of Kafka.
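Alternatively, the topic can be created up front. A sketch with the CLI that ships with Kafka 2.8, using the topic name from the question's config (adjust partitions and replication for your setup):

# create the topic the producer writes to, before running the app
kafka-topics --create \
  --bootstrap-server localhost:9092 \
  --replication-factor 1 \
  --partitions 1 \
  --topic catalog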
EDIT:
Here is a link to a post that explains how to set up your Kafka configuration to automatically create Kafka topics.
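For reference, the broker setting involved is auto.create.topics.enable in server.properties; it defaults to true in a stock installation, so verify it has not been turned off:

# server.properties
auto.create.topics.enable=true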
I have also faced the same issue while sending a message. I solved it by adding a partition in the payload and using the same partition in the consumer as well. The code I used followed the pattern below.
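A minimal sketch of that approach with kafka-node, assuming the client and producer setup from the question (the topic name and partition 0 are illustrative):

// producer: pin the message to an explicit partition
const payloads = [
  { topic: 'catalog', messages: 'This is test message', partition: 0 }
];
producer.send(payloads, (err, data) => {
  if (err) console.error(err);
  else console.log(data);
});

// consumer: read from the same partition
const consumer = new kafka.Consumer(
  client,
  [{ topic: 'catalog', partition: 0 }],
  { autoCommit: true }
);
consumer.on('message', message => console.log(message));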
Since I got this error in a development environment, I solved the problem by deleting the ZooKeeper snapshot and the Kafka logs/consumer offsets; with the default config files both live under /tmp.
NOTE: Don't do this in production.
rm -rf /tmp/zookeeper
rm -rf /tmp/kafka-logs
I have a Kafka consumer group running on Node.js, powered by node-kafka. Whether this consumer group is active or inactive, I expect to see it reported by the kafka-consumer-groups CLI.
The kafka-consumer-groups CLI only shows the console consumers, not the node consumer.
I can see the node consumer group in Kafka Tool, but it doesn't show up in the kafka-consumer-groups CLI output:
kafka-consumer-groups --bootstrap-server localhost:9092 --list
kafka-consumer-groups --bootstrap-server localhost:9092 --group node-kafka-consumer --describe
The kafka-consumer-groups CLI should show all consumers, both console and programmatic (in my case, the node-kafka consumer).
Here is a solution that uses the kafka-node ConsumerGroup object to store offsets in Kafka instead of ZooKeeper:
const kafka = require('kafka-node');
const { ConsumerGroup } = kafka;

const consumerOptions = {
  kafkaHost: 'localhost:9092',
  groupId: 'kafka-node-consumer-group',
  protocol: ['roundrobin'],
  fromOffset: 'earliest'
};

const topics = ['zoo_animals'];

const consumerGroup = new ConsumerGroup(
  { id: 'node-app-1', ...consumerOptions },
  topics
);

consumerGroup.on('message', onMessage);
consumerGroup.on('error', onError);

function onMessage(message) {
  console.log('message', message);
}

function onError(error) {
  console.log('error', error);
}

// close the consumer cleanly on Ctrl-C; force=true commits the current offset before closing
process.once('SIGINT', function() {
  consumerGroup.close(true, err => {
    if (err) {
      console.log('error closing consumer', err);
    } else {
      console.log('closed consumer');
    }
  });
});
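With offsets committed to Kafka rather than ZooKeeper, the group should now show up in the same CLI commands as above:

kafka-consumer-groups --bootstrap-server localhost:9092 --list
# expected to include: kafka-node-consumer-group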
I'm trying to produce multiple events to the same Kafka topic in a batch; only the last event ends up on Kafka and the earlier ones are not sent.

// pseudo-code of what I'm doing
// producer
await kafka.produce(
  event1 { message: "vito", topic: "corleone" },
  event2 { message: "sonny", topic: "corleone" },
  event3 { message: "fredo", topic: "corleone" }
)

// consumer listening to topic "corleone"
kafka.handler(payload) {
  log(payload) // prints "fredo" but doesn't print "vito" or "sonny"
}
What works, though, is if I have these events go to different topics:

// producer
await kafka.produce(
  event1 { message: "vito", topic: "corleone" },
  event2 { message: "sonny", topic: "deadinpart1" },
  event3 { message: "fredo", topic: "deadinpart2" }
)

If I do that, I receive all three events (by listening to all three topics), which makes me think that Kafka might not support multiple messages to the same topic in a batch.
My producer settings look like this:
const kafkaConfig: KafkaConfigSchema = {
  brokers: config().kafka.brokers, // array of brokers
  useSasl: config().kafka.useSasl, // true
  useSsl: config().kafka.useSsl, // true
  username: config().kafka.username,
  password: config().kafka.password,
  groupId: config().kafka.groupId, // a unique string
};
Are there any settings I am missing, or am I doing something wrong architecturally by sending messages that share a topic in the same batch?
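Kafka itself has no problem with several records for the same topic in one batch, so behaviour like this usually points at the producing wrapper (for example, building a map keyed by topic, so later entries overwrite earlier ones). As a point of comparison, a minimal sketch with the plain kafka-node producer used elsewhere in this thread, where a single payload entry carries an array of messages for one topic (broker address and topic name are illustrative):

const kafka = require('kafka-node');

const client = new kafka.KafkaClient({ kafkaHost: 'localhost:9092' });
const producer = new kafka.Producer(client);

producer.on('ready', () => {
  // one payload entry, several messages for the same topic
  const payloads = [
    { topic: 'corleone', messages: ['vito', 'sonny', 'fredo'] }
  ];
  producer.send(payloads, (err, data) => {
    if (err) console.error(err);
    else console.log(data); // all three messages are delivered
  });
});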
I'm using ELK 5.0.1 and Kafka 0.10.1.0, and I'm not sure why my logs aren't being forwarded. I installed kafkacat and was able to successfully produce and consume messages on all 3 servers where the Kafka cluster is installed.
shipper.conf
input {
  file {
    start_position => "beginning"
    path => "/var/log/logstash/logstash-plain.log"
  }
}
output {
  kafka {
    topic_id => "stash"
    bootstrap_servers => "<i.p1>:9092,<i.p2>:9092,<i.p3>:9092"
  }
}
receiver.conf
input {
  kafka {
    topics => ["stash"]
    group_id => "stashlogs"
    bootstrap_servers => "<i.p1>:2181,<i.p2>:2181,<i.p3>:2181"
  }
}
output {
  elasticsearch {
    hosts => ["<eip>:9200","<eip>:9200","<eip>:9200"]
    manage_template => false
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
Logs: Getting the warnings below in logstash-plain.log:

[2017-04-17T16:34:28,238][WARN ][org.apache.kafka.common.protocol.Errors] Unexpected error code: 38.
[2017-04-17T16:34:28,238][WARN ][org.apache.kafka.clients.NetworkClient] Error while fetching metadata with correlation id 44 : {stash=UNKNOWN}
It looks like your bootstrap servers are using ZooKeeper ports. Try using the Kafka broker ports instead (default 9092).
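Concretely, the kafka input in receiver.conf would point at the same hosts on the broker port; a sketch:

input {
  kafka {
    topics => ["stash"]
    group_id => "stashlogs"
    bootstrap_servers => "<i.p1>:9092,<i.p2>:9092,<i.p3>:9092"
  }
}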
I'm trying to consume a Kafka topic using Logstash, for indexing by Elasticsearch. The Kafka events are JSON documents.
We recently upgraded our Elastic Stack to 5.1.2.
I believe I was able to consume the topic OK in 5.0 using the same settings, but that was a while ago, so perhaps I'm doing something wrong now and just can't see it. This is my config (slightly sanitized):
input {
  kafka {
    bootstrap_servers => "host1:9092,host2:9092,host3:9092"
    client_id => "logstash-elastic-5-c5"
    group_id => "logstash-elastic-5-g5"
    topics => ["trp_v1"]
    auto_offset_reset => "earliest"
  }
}
filter {
  json {
    source => "message"
  }
  mutate {
    rename => { "@timestamp" => "indexedDatetime" }
    remove_field => [
      "@timestamp",
      "@version",
      "message"
    ]
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["host10:9200", "host11:9200", "host12:9200", "host13:9200"]
    action => "index"
    index => "trp-i"
    document_type => "event"
  }
}
When I run this, no messages are consumed, and no sign of activity appears in the log after "[org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] Setting newly assigned partitions". In Kafka Manager the consumer appears immediately with "total lag = 0" for the topic.
This version of the Kafka plugin stores consumer offsets in Kafka itself, so each time I try to run Logstash against the same topic I increment the group_id; in theory it should then start from the earliest offset for the topic.
Any advice?
EDIT: It appears that despite setting auto_offset_reset to "earliest", it isn't taking effect; it behaves as if it were set to "latest". I left Logstash running, then had more entries loaded into the Kafka queue, and those were processed by Logstash.
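One way to narrow this down is to check what the group has actually committed, reusing the CLI shown earlier in this thread (the group name is taken from the config above):

kafka-consumer-groups --bootstrap-server host1:9092 --group logstash-elastic-5-g5 --describe

Note that auto_offset_reset only applies when the group has no committed offsets for the topic; if offsets already exist, the consumer resumes from them regardless of this setting.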
I'm trying to read from multiple kafka topics (say 'newtest-1' and 'newtest-2') using 'white_list' configuration in the logstash input plugin. My logstash conf looks like:
input { kafka { white_list => "newtest-1|newtest-2" } } output { stdout {codec => rubydebug } }
With this configuration I can successfully read from two different topics. But I want to use regex for input topics as I'm expecting the topics to be of the form 'newtest-*'. According to the suggestion in this link, the following configuration should work:
input { kafka { white_list => "newtest-*" } } output { stdout {codec => rubydebug } }
But with this I'm not able to read from Kafka. Any help is appreciated.
The white_list should be newtest-.* because the old plugin treats this setting as a regular expression: newtest-* means "newtest" followed by zero or more hyphens, so it won't match newtest-1.
This is relevant only to older versions of the plugin. Newer versions use topics instead, with topics_pattern for regex matching.
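With the newer plugin, the equivalent is the topics_pattern option of the kafka input; a sketch, assuming a local broker:

input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics_pattern => "newtest-.*"
  }
}
output { stdout { codec => rubydebug } }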