Apache ZooKeeper

I am trying to start Kafka, but my ZooKeeper id is not displayed; it only shows [myid:].
I tried changing the port numbers, yet I still have the same problem. Please find the attached copy of the error message.
I tried changing the file name and the port number and removing extra spaces in server.properties, yet I could not run my Kafka cluster.
Find the image of the config file and the error message attached.

Related

I have been trying to run my Apache Kafka node; however, I keep getting the error message "Invalid config, exiting abnormally".

I cannot start my node server in Apache Kafka; my id is not displayed, it only shows [myid: ].
I tried changing the port numbers, yet I still have the same problem. Find the attached copy of the error message.

Kafka consumer error: ERROR Unknown error when running consumer: (kafka.tools.ConsoleConsumer)

In my case, I have the Kafka binary kafka_2.11-1.0.0 installed on both the server and the client side, but after creating the topic my consumer does not work when I use --bootstrap-server instead of --zookeeper.
I changed the command as the deprecation warning suggested. Could you please explain why the consumer does not work the new, expected way but works with the old way of calling the consumer?
As mentioned in the comments as well:
2181 is the common ZooKeeper port number.
It seems you updated the command but not the URL. --bootstrap-server expects the broker's URL and port (typically 9092), not ZooKeeper's; make sure the new command talks to the actual broker rather than to the ZooKeeper host and port.
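For example, a minimal sketch assuming a local broker on the default port and a placeholder topic name:
# old way: point the consumer at ZooKeeper (deprecated)
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic my-topic
# new way: point the consumer at the broker itself
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic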

Kafka can't receive messages from external producer

I have Docker on a Linux machine, where I downloaded Kafka version 1.1.0. I also ran a ZooKeeper container exposing port 2181, and I changed zookeeper.connect in my server.properties to point at that ZooKeeper container. Then I created an image that includes Kafka; this is my Dockerfile:
# Kafka needs a JVM; use the Java 8 base image
FROM openjdk:8
WORKDIR /app
# copy the unpacked Kafka distribution into the image
ADD kafka_2.11-1.1.0 /app
# start the broker with the bundled configuration
ENTRYPOINT bin/kafka-server-start.sh config/server.properties
After the image is created I run my Kafka container and everything works great. My problem is when I try to send messages from an external producer: the messages never arrive. I searched for this error and found a link saying that I need to configure advertised.host.name and advertised.port, but after changing those properties and trying to run my Kafka container I get the following error:
kafka.common.KafkaException: Socket server failed to bind to ip:9092: Cannot assign requested address.
I also tried with this property:
listeners=PLAINTEXT://ip:9092
But I get the same error. If I don't make any change in Kafka and try to send a message from my Spring Boot application, I get this error:
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for prueba-0: 30041 ms has passed since batch creation plus linger time
Can someone help me with this?
Thanks in advance.
After searching a lot I found this link; I only needed to specify the IP address of the host, not the IP of the Docker container. That IP needs to be in the advertised.listeners property:
advertised.listeners=PLAINTEXT://myip:9092
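For illustration, a hedged sketch of the relevant server.properties lines, with 192.0.2.10 standing in for the host's IP address:
# bind on all interfaces inside the container
listeners=PLAINTEXT://0.0.0.0:9092
# advertise the Docker host's address, not the container's
advertised.listeners=PLAINTEXT://192.0.2.10:9092
The container also needs its port published when it is started, e.g. with docker run -p 9092:9092.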
Hope this helps someone else.

getting "org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600)"

I have installed ZooKeeper and Kafka.
First step: run ZooKeeper with the following commands:
bin/zkServer.sh start
bin/zkCli.sh
Second step: run the Kafka server:
bin/kafka-server-start.sh config/server.properties
Kafka should run at localhost:9092,
but I am getting the following error:
WARN Unexpected error from /0:0:0:0:0:0:0:1; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600)
I am new to Kafka; please help me set it up.
1195725856 is "GET " (GET followed by a space) encoded as a big-endian four-byte integer (0x47455420, the ASCII codes of G, E, T, and space). This indicates that HTTP traffic is being sent to Kafka port 9092, but Kafka doesn't accept HTTP traffic; it only accepts its own protocol, which takes the first four bytes of a connection as the receive size, hence the error.
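You can reproduce the decoding yourself; a quick sketch assuming GNU od is available:
# interpret the ASCII bytes "GET " as one big-endian unsigned 32-bit integer
printf 'GET ' | od -An -tu4 --endian=big
# prints 1195725856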
Since the error is received on startup, it is likely benign; it may indicate a scanning service or similar on your network probing ports with protocols that Kafka doesn't understand.
In order to find the cause, you can find where the HTTP traffic is coming from using tcpdump:
tcpdump -i any -w trap.pcap dst port 9092
# ...wait for logs to appear again, then ^C...
tcpdump -qX -r trap.pcap | less +/HEAD
Overall though, this is probably annoying but harmless. At least Kafka isn't actually allocating/dirtying the memory. :-)
Try raising the socket.request.max.bytes value in the $KAFKA_HOME/config/server.properties file to more than your packet size and restart the Kafka server.
My initial guess would be that you are trying to receive a request that is too large. The maximum is the default value of socket.request.max.bytes, which is 100 MB (104857600 bytes). So if you have a message bigger than 100 MB, try to increase the value of this variable in server.properties, and make sure to restart the cluster before trying again.
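As a sketch, the relevant line in $KAFKA_HOME/config/server.properties (200 MB here is an arbitrary example value):
# default is 104857600 (100 MB); raise only if you genuinely send larger requests
socket.request.max.bytes=209715200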
If the above doesn't work, then most probably you are trying to connect to a non-SSL listener.
If you are using the default broker port, verify that :9092 is the SSL listener port on that broker.
For example,
listeners=SSL://:9092
advertised.listeners=SSL://:9092
inter.broker.listener.name=SSL
should do the trick for you (Make sure you restart Kafka after re-configuring these properties).
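To check whether the port actually speaks TLS, one option (assuming the openssl CLI is installed; broker-host is a placeholder) is:
# a successful handshake prints the broker's certificate chain,
# while a plaintext listener fails with a handshake error
openssl s_client -connect broker-host:9092 </dev/null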
This is how I resolved the issue after installing a Kafka, ELK and Kafdrop set up:
First, stop every application that interfaces with Kafka, one by one, to track down the offending service, then resolve the issue with that application. In my set up it was Metricbeat.
It was resolved by editing the Metricbeat kafka.yml settings file located in the modules.d sub folder:
ensuring that the Kafka advertised.listeners value in server.properties is referenced in the hosts property, and
uncommenting the metricsets and client_id properties.
The resulting kafka.yml looks like:
# Module: kafka
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/7.6/metricbeat-module-kafka.html
# Kafka metrics collected using the Kafka protocol
- module: kafka
  metricsets:
    - partition
    - consumergroup
  period: 10s
  hosts: ["[your advertised.listener]:9092"]
  client_id: metricbeat
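After editing kafka.yml, Metricbeat must be restarted to pick up the change; a sketch assuming a systemd-managed install:
sudo systemctl restart metricbeat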
The answer is most likely in one of these two areas:
a. socket.request.max.bytes
b. you are using a non-SSL endpoint to connect the producer and the consumer to.
Note: the port you run it on really does not matter. Make sure that if you have an ELB, the ELB is returning all the health checks as successful.
In my case I had an AWS ELB fronting Kafka. I had specified the listener protocol as TCP instead of Secure TCP. This caused the issue.
# the old single PLAINTEXT listener, now disabled:
#listeners=PLAINTEXT://:9092
# separate internal (broker-to-broker) and external (client-facing) listeners:
inter.broker.listener.name=INTERNAL
listeners=INTERNAL://:9093,EXTERNAL://:9092
advertised.listeners=EXTERNAL://<AWS-ELB>:9092,INTERNAL://<EC2-PRIVATE-DNS>:9093
listener.security.protocol.map=INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
Here is a snippet of my producer.properties and consumer.properties for testing externally:
bootstrap.servers=<AWS-ELB>:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
In my case, some other application was already sending data to port 9092, hence the server failed to start. Closing that application resolved the issue.
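To find out which process is already bound to the port, a quick check (assuming a Linux host with ss available):
# -l listening sockets, -t TCP, -n numeric, -p owning process (needs root)
sudo ss -ltnp | grep 9092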
Please make sure that you use security.protocol=PLAINTEXT; otherwise you have a mismatch between the server's security setup and the clients trying to connect.
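In client terms that means matching the broker's listener; a minimal sketch of the client properties, assuming a PLAINTEXT listener and a placeholder broker address:
bootstrap.servers=broker-host:9092
security.protocol=PLAINTEXT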

Why is the size of the ZooKeeper log block not 64M?

I started a ZooKeeper cluster on my computer; it includes three instances. By default the size of the log file should be 64M, but I found a strange thing.
Can anyone explain what happened with ZooKeeper?
Here is the content of the log file:
The FileTxnLog is truncated, which is implemented by FileTxnSnapLog.truncateLog.
This scenario happens when there is a new election and a follower has some transactions that are not committed on the leader.
This can be verified if a log line like:
Truncating log to get in sync with ...
exists in zookeeper.out or in the log file you specified.
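A quick way to check for that message (assuming the default zookeeper.out in the ZooKeeper working directory):
grep -n "Truncating log" zookeeper.out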