I'm trying to run Kafka from the Ambari UI on the HDP sandbox, but it does not start. I checked the log, and the error is:
java.lang.IllegalArgumentException: Error creating broker listeners from 'sandbox-hdp.hortonworks.com:9092': Unable to parse sandbox-hdp.hortonworks.com:9092 to a broker $.
It seems this was resolved on the Cloudera Community. The key conclusion:
It worked for me after changing my listener IP from localhost to the exact IP of the VirtualBox VM.
Related
I am able to connect successfully to a local Kafka broker/cluster running locally (dockerized) using Conduktor, but when trying to connect to a Kafka cluster running on a Unix VM, I get the error below.
Error:
"The broker [...] is reachable but Kafka can't connect. Ensure you have access to the advertised listeners of the brokers and the proper authorization"
Appreciate any assistance.
running locally (dockerized)
When running in Docker, you need to ensure that the ports are accessible from outside your container. To verify this, try doing a telnet <ip> <port> and check whether you are able to connect.
Since the error message says the broker is reachable, I suppose you are able to telnet to the broker successfully.
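If telnet isn't available on your machine, the same reachability check can be sketched in a few lines of Python using only the standard library (the host and port in the usage comment are placeholders for your broker's address):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (substitute your broker's advertised host and port):
#   port_reachable("1.2.3.4", 9092)
```

Note that this only tells you the port accepts TCP connections; as the error message shows, the Kafka protocol handshake can still fail afterwards if advertised.listeners is wrong.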
Next, check your broker config called advertised.listeners. Here you need to specify the IP:port combination, where the IP is what you will be giving in your client program, i.e. Conduktor.
An example for that would be
advertised.listeners=PLAINTEXT://1.2.3.4:9092
and then restart your broker and reconnect. If you are using SSL, you need to provide some extra configuration. See Configuring Kafka brokers for more.
Try adding the Kafka server to /etc/hosts (Unix-like) or C:\Windows\System32\drivers\etc\hosts (Windows) in the form kafka_server_ip kafka_server_name_in_dns (e.g. 10.10.0.1 kafka).
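After editing the hosts file, you can confirm that the name now resolves from your client machine; a minimal Python sketch (the name "kafka" in the comment is just the example entry from above):

```python
import socket

def resolves(hostname: str) -> bool:
    """Return True if the OS resolver (which consults the hosts file) can resolve the name."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# After adding "10.10.0.1 kafka" to the hosts file,
# resolves("kafka") should start returning True.
print(resolves("localhost"))
```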
I have a single-node cluster setup of Apache kafka_2.12-2.5.1 (with embedded ZooKeeper) on the same host. I enabled SSL on ZooKeeper and it starts just fine. Kafka, however, throws the fatal error "NoAuth for /brokers/ids" at startup.
Please help if you have any pointers.
I am currently working on a simple project to query messages from an Apache Kafka topic using Apache Drill. I am encountering an error when running the Apache Drill cluster with this command:
sqlline.bat -u "jdbc:drill:zk=localhost:2181"
And the error that I encountered is:
No active Drillbit endpoint found from ZooKeeper. Check connection parameters
I am using the single-instance ZooKeeper that came with Apache Kafka.
Can anyone help me with this problem? Is it OK to use the ZooKeeper from the Apache Kafka installation with Drill?
The sqlline.bat -u "jdbc:drill:zk=localhost:2181" command only connects to an already-running Drillbit. If you have Drill running in distributed mode, replace localhost with the correct IP address of the node where ZooKeeper is running, and update the port if needed.
If you want to start Drill in embedded mode, you may try running drill-embedded.bat or sqlline.bat -u "jdbc:drill:zk=local" command.
For more details please refer to https://drill.apache.org/docs/starting-drill-on-windows/.
I am facing the below error message when trying to connect and see the topic/consumer details of one of our Kafka clusters.
We have 3 brokers in the cluster, which I am able to see, but not the topics and their partitions.
Note: I have Kafka 1.0, and the Kafka Tool version is 2.0.1.
I had the same issue on my MacBook Pro. The tool was using "tshepo-mbp" as the hostname, which it could not resolve. To get it to work, I added 127.0.0.1 tshepo-mbp to the /etc/hosts file.
Kafka Tool is most likely using the hostname to connect to the broker and cannot reach it. You may be connecting to the ZooKeeper host by IP address, but make sure you can connect to/ping the broker's hostname from the machine running Kafka Tool.
If you cannot ping the broker, either fix the network issues or, as a workaround, edit the hosts file on your client to tell it how to reach the broker by name.
This issue occurs if you have not set the listeners and advertised.listeners properties in the server.properties file.
For example:
config/server.properties
...
listeners=PLAINTEXT://:9092
...
advertised.listeners=PLAINTEXT://<public-ip/host-name>:9092
...
To fix this issue, we need to change the server.properties file.
$ vim /usr/local/etc/kafka/server.properties
Here, update the listeners value from
listeners=PLAINTEXT://:9092
to
listeners=PLAINTEXT://localhost:9092
source: https://medium.com/@Ankitthakur/apache-kafka-installation-on-mac-using-homebrew-a367cdefd273
For better visibility (this was already mentioned in a comment early in the thread):
In my case, when I used Kafka Tool from my local machine, the tool tried to reach the Kafka broker port, which had been blocked by my cluster admins for my local machine; that is why I was not able to connect.
Resolution:
Either ask the admins to open the port for the intranet if they can; if they cannot, you can use tunnelling to reach the port for testing purposes.
Hope this helps a few people.
I'm using Confluent Platform 3.3 with a Kafka connector. While starting the connector using the below command,
./bin/connect-standalone ./etc/schema-registry/connect-avro-standalone.properties ./etc/kafka-connect-jdbc/connect-jdbc-source.properties
I get the below error:
ERROR Server died unexpectedly: (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:52)
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
[2017-10-30 13:49:56,178] ERROR Failed to send HTTP request to endpoint: http://localhost:8081/subjects/jdbc-source-accounts-value/versions
(io.confluent.kafka.schemaregistry.client.rest.RestService:156)
ZooKeeper is running on port 2181, and I tried to start the Schema Registry with the below command:
./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties &
It didn't show any error messages, but port 8081 didn't come up. Please help me sort this out.
If you're using Confluent Platform 3.3, I would recommend using the Confluent CLI, since it's part of the download you've already got and makes life much simpler. Then you can easily check the status of the components:
confluent start
confluent status kafka
etc
Check out this video: https://vimeo.com/228505612
In terms of the issue you've got, I would check the log for the Schema Registry. You can do that with the Confluent CLI:
confluent log schema-registry
I also saw the same issue while running a producer with Spring Cloud Stream. Replacing localhost with the actual IP address helped.
This might help someone; I faced a similar issue. The solution that worked for me was to replace localhost in my connector settings (in my schema.registry.url) with the IP address of the container running the Schema Registry. For example,
I had set it up like below:
"value.converter.schema.registry.url": "http://localhost:8081"
"key.converter.schema.registry.url": "http://localhost:8081"
and I changed it to the following:
"value.converter.schema.registry.url": "http://172.19.0.4:8081",
"key.converter.schema.registry.url": "http://172.19.0.4:8081"
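If you have many connector configs to fix, the same substitution can be scripted; a hedged sketch (the container IP 172.19.0.4 is just the example from above, and point_to_registry is a name made up for illustration):

```python
def point_to_registry(config: dict, registry_url: str) -> dict:
    """Return a copy of a connector config with every *.schema.registry.url rewritten."""
    return {
        key: (registry_url if key.endswith("schema.registry.url") else value)
        for key, value in config.items()
    }

# Example usage:
#   point_to_registry(
#       {"value.converter.schema.registry.url": "http://localhost:8081"},
#       "http://172.19.0.4:8081",
#   )
```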