I can connect successfully to a local Kafka broker/cluster running in Docker using Conduktor, but when trying to connect to a Kafka cluster running on a Unix VM, I get the error below.
Error:
"The broker [...] is reachable but Kafka can't connect. Ensure you have access to the advertised listeners of the the brokers and the proper authorization"
Appreciate any assistance.
running locally (dockerized)
When running in Docker, you need to ensure that the ports are accessible from outside of your container. To verify this, try telnet <ip> <port> and check whether you are able to connect.
Since the error message says the broker is reachable, I assume you are able to telnet to it successfully.
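For example (the address below is a placeholder; substitute your VM's IP and port):
# basic TCP reachability check from the machine running Conduktor
telnet 10.1.2.3 9092
# or, if telnet is not available:
nc -vz 10.1.2.3 9092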
Next, check the broker config called advertised.listeners. Here you need to specify the IP:port combination that you will use in your client program, i.e. Conduktor.
An example would be:
advertised.listeners=PLAINTEXT://1.2.3.4:9092
and then restart your broker and reconnect. If you are using SSL, you need to provide some extra configuration. See Configuring Kafka brokers for more.
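For a tarball install, the restart might look like this (paths are illustrative; adjust to your installation):
# stop the broker, then start it again so the new advertised.listeners takes effect
bin/kafka-server-stop.sh
bin/kafka-server-start.sh -daemon config/server.properties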
Also try adding the Kafka server to /etc/hosts (Unix-like) or C:\Windows\System32\drivers\etc\hosts (Windows) in the format kafka_server_ip kafka_server_name_in_dns (e.g. 10.10.0.1 kafka).
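To confirm the entry resolves (using the example values above):
# verify hosts-file resolution and broker reachability from the client
ping -c 1 kafka
telnet kafka 9092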
I am deploying a Confluent single-node setup, so I am only manipulating connect-standalone.properties and server.properties.
I am trying to connect a remote producer to my local set-up, so I have the following overrides in server.properties:
listeners=PLAINTEXT://10.20.23.105:9092,EXTERNAL://10.20.23.105:29092
advertised.listeners=PLAINTEXT://10.20.23.105:9092,EXTERNAL://localhost:29092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
After checking using Offset Explorer, I can see that Kafka is still working and I am successfully getting the remote stream. However, Connect fails upon trying to start the service.
[2023-02-13 10:26:02,992] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed:85)
org.apache.kafka.connect.errors.ConnectException: Failed to connect to and describe Kafka cluster. Check worker's broker connection and security properties.
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:79)
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:60)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:96)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:79)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: listNodes
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:73)
What are the possible fixes for this problem?
I have checked out this question: Kafka-connect, Bootstrap broker disconnected. But since I am still using PLAINTEXT for my external listener, there shouldn't be any changes needed to the workers, right?
Kafka Connect isn't the problem. Start debugging with kafka-console-producer, for example.
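For example, from the machine where the worker runs (the topic name is illustrative; the address is from your question; on Kafka versions before 2.5, use --broker-list instead of --bootstrap-server):
# if this fails too, the problem is the listener/network setup, not Connect
bin/kafka-console-producer.sh --bootstrap-server 10.20.23.105:9092 --topic test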
listeners should not be hard-coded to any one IP.
Use the 0.0.0.0 bind address to allow connections from all interfaces:
listeners=PLAINTEXT://0.0.0.0:9092,EXTERNAL://0.0.0.0:29092
For advertised listeners, "external" addresses should use a LAN IP. It's not clear what your "localhost" listener is needed for here, since any connection to that IP from the same machine would route back to itself by default. More importantly, you don't need two ports open for the same protocol.
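Putting that together, a minimal server.properties sketch (using the LAN IP from your question) could be:
# bind on all interfaces; advertise the LAN IP that remote clients actually use
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.20.23.105:9092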
You've not shown your connect worker properties, but if it is running on an external machine, make sure there is no firewall interfering with the connection, and that you are using the correct IP/hostname and ports.
I've started learning some big data tools for a new project, and right now I'm on Kafka and Zookeeper.
I have them both installed on my local machine, and I can start them up and start producing and consuming messages just fine. Now, I want to try it with two machines: one with a Kafka broker, Zookeeper, and a producer, and the other with a consumer. Let's call them Machine A and Machine B.
Machine A runs the Zookeeper server, the broker, and a producer. Machine B runs a consumer. From what I understand, I should be able to set up the consumer to listen to a topic from the producer on Machine A, using Zookeeper. Since both machines are on the same network (i.e. my local home network), I thought I could change the Kafka broker's server.properties to use my static IP address for Machine A, and then have the consumer on Machine B connect to it.
My problem is that Zookeeper keeps spinning up on localhost, and connecting to 0.0.0.0/0.0.0.0:2181, so when my broker tries to connect to it using my static IP address (i.e. 192.168.x.x), it times out. I have looked all over for a solution, but I cannot find anything that tells me how to configure the Zookeeper server to start on a different IP address.
Maybe my understanding of these technologies is simply wrong, but I thought this would be a fairly simple thing to do. Does anyone know any way to resolve this? Or, if I'm doing it completely wrong, what is the correct approach?
zookeeper keeps spinning up on localhost, and connecting to 0.0.0.0/0.0.0.0:2181
Well, that is the bind address.
You need to also (preferably) have a static IP for Zookeeper, then set zookeeper.connect within Kafka's server.properties file to point to that other machine's external address.
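For example (the address is illustrative; use the Zookeeper machine's actual static IP):
# in Kafka's server.properties
zookeeper.connect=192.168.1.10:2181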
In the Zookeeper configuration, make sure you have the myid file and a line in the property file that looks like this (without the double brackets):
server.{{ myid }}={{ ip_address }}:2888:3888
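For instance, with a myid of 1 and an illustrative LAN address:
server.1=192.168.1.10:2888:3888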
You wouldn't find this in the Kafka documentation, but it is in the Zookeeper documentation
However, if Kafka and Zookeeper are on the same machine, this isn't necessary.
Your external consumer should set the bootstrap.servers property to the Kafka IP address(es) with port 9092.
Your problem might be related instead to the advertised.listeners setting within Kafka.
For example, start with listeners=PLAINTEXT://:9092
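and then advertise Machine A's static IP so Machine B's consumer can reach the broker (a sketch, reusing the 192.168.x.x placeholder from the question):
advertised.listeners=PLAINTEXT://192.168.x.x:9092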
As of Zookeeper 3.3.0 (see Advanced Configuration):
clientPortAddress: New in 3.3.0: the address (ipv4, ipv6 or hostname) to listen for client connections; that is, the address that clients attempt to connect to. This is optional; by default we bind in such a way that any connection to the clientPort for any address/interface/nic on the server will be accepted.
So you could point it at the address clients should connect to, e.g. Machine A's static IP:
clientPortAddress=192.168.x.x
I am facing the below error message when trying to connect and see the topic/consumer details of one of our Kafka clusters.
We have 3 brokers in the cluster, which I am able to see, but not the topics and their partitions.
Note: I have Kafka 1.0 and Kafka Tool version 2.0.1.
I had the same issue on my MacBook Pro. The tool was using "tshepo-mbp" as the hostname, which it could not resolve. To get it to work, I added 127.0.0.1 tshepo-mbp to the /etc/hosts file.
Kafka Tool is most likely using the hostname to connect to the broker and cannot reach it. You may be connecting to the Zookeeper host by IP address, but make sure you can connect to/ping the hostname of the broker from the machine running Kafka Tool.
If you cannot ping the broker, either fix the network issues or, as a workaround, edit the hosts file on your client to tell it how to reach the broker by its name.
This issue occurs if you have not set the listeners and advertised.listeners properties in the server.properties file.
For example:
config/server.properties
...
listeners=PLAINTEXT://:9092
...
advertised.listeners=PLAINTEXT://<public-ip/host-name>:9092
...
To fix this issue, we need to change the server.properties file.
$ vim /usr/local/etc/kafka/server.properties
Here update the listeners value from
listeners=PLAINTEXT://:9092
to
listeners=PLAINTEXT://localhost:9092
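Then restart Kafka so the change takes effect (assuming the Homebrew install from the linked guide):
brew services restart kafka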
Source: https://medium.com/@Ankitthakur/apache-kafka-installation-on-mac-using-homebrew-a367cdefd273
For better visibility (I already commented the same earlier in this thread):
In my case, when I used Kafka Tool from my local machine, the tool tried to reach the Kafka broker port, which had been blocked by my cluster admins for my local machine; that is why I was not able to connect.
Resolution:
Either ask the admins to open the port for the intranet if they can; if they cannot, you can use tunnelling to that port for your testing purposes.
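A simple local port-forward might look like this (host names are illustrative):
# forward local port 9092 through a reachable jump host to the blocked broker
ssh -L 9092:broker-host:9092 user@jumphost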
Hope this helps a few people.
We are trying to consume from a Kafka cluster using the Java client. The cluster is behind a jump host, and hence the only way to access it is through an SSH tunnel. But we are not able to read, because once the consumer fetches the metadata it uses the original hosts to connect to the brokers. Can this behaviour be overridden? Can we ask the Kafka client not to use the metadata?
Not as far as I know.
The trick I used when I needed to do something similar was:
setup a virtual interface for each Kafka broker
open a tunnel to each broker so that broker n is bound to virtual interface n
configure your /etc/hosts file so that the advertised hostname of broker n resolves to the IP of virtual interface n.
E.g.
Kafka brokers:
broker1 (advertised as broker1.mykafkacluster)
broker2 (advertised as broker2.mykafkacluster)
Virtual interfaces:
veth1 (192.168.1.1)
veth2 (192.168.1.2)
Tunnels:
broker1: ssh -L 192.168.1.1:9092:broker1.mykafkacluster:9092 jumphost
broker2: ssh -L 192.168.1.2:9092:broker2.mykafkacluster:9092 jumphost
/etc/hosts:
192.168.1.1 broker1.mykafkacluster
192.168.1.2 broker2.mykafkacluster
If you configure your system like this, you should be able to reach all the brokers in your Kafka cluster.
Note: if you configured your Kafka brokers to advertise an IP address instead of a hostname, the procedure can still work, but you need to configure the virtual interfaces with the same IP address that the broker advertises.
You don't actually have to add virtual interfaces to access the brokers via an SSH tunnel if they advertise a hostname. It's enough to add a hosts entry in /etc/hosts on your client and bind the tunnel to the added name.
Assuming broker.kafkacluster is the advertised.hostname of your broker:
/etc/hosts:
127.0.2.1 broker.kafkacluster
Tunnel:
ssh -L broker.kafkacluster:9092:broker.kafkacluster:9092 <brokerhostip/name>
Try sshuttle like this:
sshuttle -r user@host broker-1-ip:port broker-2-ip:port broker-3-ip:port
Of course, the list of brokers depends on the brokers' advertised.listeners setting.
Absolutely best solution for me was to use kafkatunnel (https://github.com/simple-machines/kafka-tunnel). Worked like a charm.
Changing the /etc/hosts file is NOT the right way.
Quoting Confluent blog post:
I saw a Stack Overflow answer suggesting to just update my hosts file…isn’t that easier?
This is nothing more than a hack to work around a misconfiguration instead of actually fixing it.
You need to set advertised.listeners (or KAFKA_ADVERTISED_LISTENERS if you’re using Docker images) to the external address (host/IP) so that clients can correctly connect to it. Otherwise, they’ll try to connect to the internal host address—and if that’s not reachable, then problems ensue.
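For a Dockerized broker, that might look like this (the hostname is illustrative; the variable name follows the Confluent/wurstmeister images):
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka.example.com:9092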
Additionally you can have a look at this Pull Request on GitHub where I wrote an integration test to connect to Kafka via SSH. It should be easy to understand even if you don't know Golang.
There you have a full client and server example (see TestSSH). The test brings up actual Docker containers and runs assertions against them.
TL;DR I had to configure the KAFKA_ADVERTISED_LISTENERS when connecting over SSH so that the host advertised by each broker would be one reachable from the SSH host. This is because the client connects to the SSH host first and then from there it connects to a Kafka broker. So the host in the advertised.listeners must be reachable from the SSH server.
In our OpenShift ecosystem, we have a Kafka instance sourced from wurstmeister/kafka. As of now, I am able to have Kafka accessible within the OpenShift system using the below parameters:
KAFKA_LISTENERS=PLAINTEXT://:9092
KAFKA_ADVERTISED_HOST_NAME=kafka_service_name
And of course, the params for the port and Zookeeper are there.
I am able to access Kafka from the pods within the OpenShift system, but I am unable to access the Kafka service from the host machine, even though I am able to access the Kafka pod using its IP and can telnet to the pod using telnet Pod_IP 9092.
When I try to connect using the Kafka producer from the host machine, I get the below error:
[2017-08-07 07:45:13,925] WARN Error while fetching metadata with correlation id 2 : {tls21=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
And when I try to connect from a Kafka consumer on the host machine using the IP, the output is blank.
Note: As of now, it's a single OpenShift server, and the use case is dev testing.
Maybe you want to take a look at this POC for having Kafka on OpenShift ?
https://github.com/EnMasseProject/barnabas