ClassNotFoundException exception occurred: io.confluent.kafka.security.config.provider.SecurePassConfigProvider (kafka.server.KafkaConfig) - apache-kafka

The broker fails on start-up and I can see the following errors:
INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
ERROR ClassNotFoundException exception occurred: io.confluent.kafka.security.config.provider.SecurePassConfigProvider (kafka.server.KafkaConfig)
INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
ERROR ClassNotFoundException exception occurred: io.confluent.kafka.security.config.provider.SecurePassConfigProvider (kafka.server.KafkaConfig)
INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
ERROR ClassNotFoundException exception occurred: io.confluent.kafka.security.config.provider.SecurePassConfigProvider (kafka.server.KafkaConfig)
INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
ERROR ClassNotFoundException exception occurred: io.confluent.kafka.security.config.provider.SecurePassConfigProvider (kafka.server.KafkaConfig)
INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
INFO KafkaConfig values:
----------------------------
I set up a secret for one of the properties (ssl.truststore.password) in the server.properties file, tried restarting the server, and observed the above error.
Any help would be appreciated. Thanks!!!
---server.properties---
##
ssl.truststore.password = ${securepass:/home/secret/secrets.txt:server.properties/ssl.truststore.password}
config.providers = securepass
config.providers.securepass.class = io.confluent.kafka.security.config.provider.SecurePassConfigProvider
Confluent Community version used - 5.5.2

The community edition of Confluent Platform 5.5.2 does not ship with this class; searching the distribution for the plugin JAR that should contain it turns up nothing:
$ find ./confluent-5.5.2 -name 'kafka-client-plugins*.jar'
Download the JAR from the link below and make sure it is on the Kafka broker classpath, e.g. /usr/share/java/kafka if installed directly to the OS, or the share/java/kafka folder of the Confluent tarball.
https://packages.confluent.io/maven/io/confluent/kafka-client-plugins/5.5.2-ce/kafka-client-plugins-5.5.2-ce.jar
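As a rough sketch, assuming a tarball install under ./confluent-5.5.2 as in the find command above (adjust the target directory to match your layout):
$ wget https://packages.confluent.io/maven/io/confluent/kafka-client-plugins/5.5.2-ce/kafka-client-plugins-5.5.2-ce.jar
$ cp kafka-client-plugins-5.5.2-ce.jar ./confluent-5.5.2/share/java/kafka/
# restart the broker so the new JAR is picked up on the classpath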
Verified with
$ jar -tf kafka-client-plugins-5.5.2-ce.jar| grep SecurePassConfigProvider
io/confluent/kafka/security/config/provider/SecurePassConfigProvider.class
Overall, if someone has file-system access to your brokers, you have bigger problems, and obscuring the file with a direct reference to another is not "secure".

Related

KAFKA - ERROR Disk error while locking directory

I got the error "ERROR Disk error while locking directory" while trying to start Kafka with kafka-server-start.sh config/server.properties:
[2020-05-21 23:44:11,323] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-05-21 23:44:11,323] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-05-21 23:44:11,324] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-05-21 23:44:11,340] ERROR Disk error while locking directory /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data (kafka.server.LogDirFailureChannel)
java.nio.file.AccessDeniedException: /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data/.lock
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
at java.nio.channels.FileChannel.open(FileChannel.java:287)
at java.nio.channels.FileChannel.open(FileChannel.java:335)
at kafka.utils.FileLock.<init>(FileLock.scala:31)
at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:235)
at scala.collection.StrictOptimizedIterableOps.flatMap(StrictOptimizedIterableOps.scala:118)
at scala.collection.StrictOptimizedIterableOps.flatMap$(StrictOptimizedIterableOps.scala:105)
at scala.collection.mutable.ArraySeq.flatMap(ArraySeq.scala:38)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:233)
at kafka.log.LogManager.<init>(LogManager.scala:104)
at kafka.log.LogManager$.apply(LogManager.scala:1084)
at kafka.server.KafkaServer.startup(KafkaServer.scala:253)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
at kafka.Kafka$.main(Kafka.scala:82)
at kafka.Kafka.main(Kafka.scala)
[2020-05-21 23:44:11,344] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.nio.file.AccessDeniedException: /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data/recovery-point-offset-checkpoint
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.createFile(Files.java:632)
at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
at kafka.server.checkpoints.OffsetCheckpointFile.<init>(OffsetCheckpointFile.scala:57)
at kafka.log.LogManager.$anonfun$recoveryPointCheckpoints$1(LogManager.scala:106)
at scala.collection.StrictOptimizedIterableOps.map(StrictOptimizedIterableOps.scala:100)
at scala.collection.StrictOptimizedIterableOps.map$(StrictOptimizedIterableOps.scala:87)
at scala.collection.mutable.ArraySeq.map(ArraySeq.scala:38)
at kafka.log.LogManager.<init>(LogManager.scala:105)
at kafka.log.LogManager$.apply(LogManager.scala:1084)
at kafka.server.KafkaServer.startup(KafkaServer.scala:253)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
at kafka.Kafka$.main(Kafka.scala:82)
at kafka.Kafka.main(Kafka.scala)
This is a known issue with the Kafka distribution for Windows. Refer to: https://issues.apache.org/jira/browse/KAFKA-13391
Either use Kafka 2.8 (kafka_2.12-2.8.1.tgz) or wait for Kafka 3.0.1 or Kafka 3.1.0.
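For reference, a quick way to pull that build (URL assumed to follow the usual Apache archive layout; verify it before relying on it):
$ wget https://archive.apache.org/dist/kafka/2.8.1/kafka_2.12-2.8.1.tgz
$ tar -xzf kafka_2.12-2.8.1.tgz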
For people using Kafka on Windows and hitting a related error of
java.nio.file.AccessDeniedException:
this is a common error when log retention runs. Kafka does not have good support for the Windows filesystem; you can use WSL2 or Docker to work around these limitations.
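For example, a minimal sketch of the Docker route, assuming the official apache/kafka image (published for newer Kafka releases; the tag below is an assumption, pick whatever current version you need):
$ docker run -d --name kafka -p 9092:9092 apache/kafka:3.7.0
# single-node broker in KRaft mode, reachable at localhost:9092 for local testing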
Try to use kafka_2.12-2.8.1.tgz. This resolved the issue for me.
As the error states, the user that starts the Kafka Server process does not have access to your log.dirs:
[2020-05-21 23:44:11,344] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.nio.file.AccessDeniedException: /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data/recovery-point-offset-checkpoint
You can either:
Change log.dirs (make sure NOT to use /tmp/)
Or grant read/write access to /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data/ (see the sketch below)
If neither of the above options works for you, it might be worth checking whether the directory actually exists. If not, simply create it by running
mkdir -p /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data
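A minimal sketch of both options, assuming the broker runs as a user named kafka (a placeholder; substitute whatever account actually starts the process):
# Option 1: point log.dirs at a directory the broker user owns (server.properties)
log.dirs=/var/lib/kafka/data
# Option 2: give the broker user ownership of the existing directory
$ sudo chown -R kafka:kafka /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data
$ sudo chmod -R u+rwX /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data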
As a side note, I wouldn't say that /opt/ is the best place to store data.

kafka is failing to start because cluster/id is deleted from Zookeeper

Apache Kafka is failing to start. It shows in its logs "Failed to get cluster id from Zookeeper. This can happen if /cluster/id is deleted from Zookeeper."
How can I check the "/cluster/id"?
Previously, when Kafka failed to start, it was because I had updated Java on my server, so I needed to point $JAVA_HOME at the new path and restart Kafka, and it would work fine again. But this case is different: I checked $JAVA_HOME and it is correct. So I wanted to know more about this issue. I read one of the log files in /opt/kafka/logs and got this:
[2019-08-21 10:18:40,472] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to get cluster id from Zookeeper. This can happen if /cluster/id is deleted from Zookeeper.
at kafka.zk.KafkaZkClient.$anonfun$createOrGetClusterId$1(KafkaZkClient.scala:1498)
at scala.Option.getOrElse(Option.scala:138)
at kafka.zk.KafkaZkClient.createOrGetClusterId(KafkaZkClient.scala:1498)
at kafka.server.KafkaServer.$anonfun$getOrGenerateClusterId$1(KafkaServer.scala:390)
at scala.Option.getOrElse(Option.scala:138)
at kafka.server.KafkaServer.getOrGenerateClusterId(KafkaServer.scala:390)
at kafka.server.KafkaServer.startup(KafkaServer.scala:208)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
[2019-08-21 10:18:40,476] INFO shutting down (kafka.server.KafkaServer)
[2019-08-21 10:18:40,488] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2019-08-21 10:18:40,500] INFO Session: 0x100494a8714000a closed (org.apache.zookeeper.ZooKeeper)
[2019-08-21 10:18:40,505] INFO EventThread shut down for session: 0x100494a8714000a (org.apache.zookeeper.ClientCnxn)
[2019-08-21 10:18:40,507] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2019-08-21 10:18:40,517] INFO shut down completed (kafka.server.KafkaServer)
[2019-08-21 10:18:40,517] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-08-21 10:18:40,554] INFO shutting down (kafka.server.KafkaServer)
When I read the previous error,
Failed to get cluster id from Zookeeper. This can happen if /cluster/id is deleted from Zookeeper.
I thought that the ZooKeeper id at /tmp/zookeeper/myid had been deleted by mistake, but the file is still there with the corresponding server number written in it, and ZooKeeper is working fine. My Kafka version is 2.2.0.
I searched online for this error, "Failed to get cluster id from Zookeeper. This can happen if /cluster/id is deleted from Zookeeper.", but honestly I did not find anything helpful.
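(As an aside on "How can I check the /cluster/id?": one way to inspect that znode, assuming ZooKeeper is reachable on localhost:2181, is the zookeeper-shell tool that ships with Kafka:)
$ bin/zookeeper-shell.sh localhost:2181 get /cluster/id
# prints something like {"version":"1","id":"..."} if the znode still exists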

java.lang.ClassCastException: class com.airbnb.kafka.kafka08.StatsdMetricsReporter

Working on setting up a monitoring system for a Kafka cluster using statsd. I'm using the statsd library here. Currently my broker server won't start up.
I am positive my issue lies in this configuration line inside my server.properties file: metric.reporters=com.airbnb.kafka.kafka08.StatsdMetricsReporter.
When I comment that line out, the server starts up. Hell, I even get the statsd confirmation like this:
[2017-06-06 15:19:35,669] INFO Reporter is enabled and starting... (com.airbnb.metrics.StatsDReporter)
[2017-06-06 15:19:35,679] INFO Started Reporter with host=localhost, port=8125, polling_period_secs=10, prefix= (com.airbnb.metrics.StatsDReporter)
However, stats aren't reported (I believe) because this is also true: metric.reporters = []. Therefore, the line that's causing the issue must stay in the properties file, right?
When I try to start the server, it fails with this message:
[2017-06-06 15:21:34,712] FATAL [Kafka Server 0], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.ClassCastException: class com.airbnb.kafka.kafka08.StatsdMetricsReporter
at java.lang.Class.asSubclass(Class.java:3404)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:356)
at org.apache.kafka.common.config.AbstractConfig.getConfiguredInstances(AbstractConfig.java:243)
at kafka.server.KafkaServer.startup(KafkaServer.scala:198)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2017-06-06 15:21:34,713] INFO [Kafka Server 0], shutting down (kafka.server.KafkaServer)
[2017-06-06 15:21:34,714] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2017-06-06 15:21:34,718] INFO Session: 0x15c7ed717e30004 closed (org.apache.zookeeper.ZooKeeper)
[2017-06-06 15:21:34,720] INFO EventThread shut down for session: 0x15c7ed717e30004 (org.apache.zookeeper.ClientCnxn)
[2017-06-06 15:21:34,720] INFO [Kafka Server 0], shut down completed (kafka.server.KafkaServer)
[2017-06-06 15:21:34,720] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.ClassCastException: class com.airbnb.kafka.kafka08.StatsdMetricsReporter
at java.lang.Class.asSubclass(Class.java:3404)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:356)
at org.apache.kafka.common.config.AbstractConfig.getConfiguredInstances(AbstractConfig.java:243)
at kafka.server.KafkaServer.startup(KafkaServer.scala:198)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2017-06-06 15:21:34,721] INFO [Kafka Server 0], shutting down (kafka.server.KafkaServer)
For the record, here is my config:
kafka:type=com.airbnb.kafka.kafka08.StatsdMetricsReporter
metric.reporters=com.airbnb.kafka.kafka08.StatsdMetricsReporter
kafka.metric.reporters=com.airbnb.kafka.kafka08.StatsdMetricsReporter
external.kafka.statsd.reporter.enabled=true
external.kafka.statsd.host=localhost
external.kafka.statsd.port=8125
external.kafka.statsd.metrics.prefix=
external.kafka.statsd.tag.enabled=true
I found the answer. I had to use kafka09 instead of kafka08.
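For anyone hitting the same thing, a sketch of the corrected reporter line (only the package changes from kafka08 to kafka09; the remaining settings are the ones from the question):
metric.reporters=com.airbnb.kafka.kafka09.StatsdMetricsReporter
external.kafka.statsd.reporter.enabled=true
external.kafka.statsd.host=localhost
external.kafka.statsd.port=8125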

Getting SmscManagement is not registered error in smsc gateway management UI

I am using Telestax Restcomm smsc gateway 7.2.109.
When I load the SMS gateway management UI, I am getting
15:31:12:520 [ERROR] javax.management.InstanceNotFoundException : org.mobicents.smsc:layer=SmscPropertiesManagement,name=SmscManagement is not registered.. (Full Stack Trace)
I am also getting the following errors while starting the SMSC server (JBoss).
08:56:25,851 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean SCTPManagement
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="SCTPManagement",type=Component already registered.
08:56:25,858 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean SCTPShellExecutor
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="SCTPShellExecutor",type=Component already registered.
08:56:25,865 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean RoutingLabelFormat
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="RoutingLabelFormat",type=Component already registered.
08:56:25,874 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean Mtp3UserPart
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="Mtp3UserPart",type=Component already registered.
08:56:25,882 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean M3UAShellExecutor
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="M3UAShellExecutor",type=Component already registered.
08:56:25,889 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean SS7Clock
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="SS7Clock",type=Component already registered.
08:56:25,899 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean SS7Scheduler
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="SS7Scheduler",type=Component already registered.
08:56:25,907 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean SccpStack
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="SccpStack",type=Component already registered.
08:56:25,914 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean SccpExecutor
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="SccpExecutor",type=Component already registered.
08:56:25,921 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean TcapStack
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="TcapStack",type=Component already registered.
08:56:25,927 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean TcapExecutor
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="TcapExecutor",type=Component already registered.
08:56:25,934 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean ShellExecutor
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="ShellExecutor",type=Component already registered.
08:56:25,940 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean MapStack
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="MapStack",type=Component already registered.
08:56:25,950 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean MAPSS7Service
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="MAPSS7Service",type=Component already registered.
08:56:25,984 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean Ss7Management
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="Ss7Management",type=Component already registered.
08:56:26,041 ERROR [AbstractKernelController] (main) Error installing to Real: name=vfsfile:/home/telestax/Downloads/restcomm-smsc-7.2.109/jboss-5.1.0.GA/server/default/deploy/restcomm-smsc-server/META-INF/jboss-beans.xml state=PreReal mode=Manual requiredState=Real
org.jboss.deployers.spi.DeploymentException: Error deploying: SCTPManagement
DEPLOYMENTS MISSING DEPENDENCIES:
Deployment "vfszip:/home/telestax/Downloads/restcomm-smsc-7.2.109/jboss-5.1.0.GA/server/default/deploy/smpp-server-ra-du-7.0.5.jar/" is missing the following dependencies:
Dependency "SmppManagement" (should be in state "Real", but is actually in state "** NOT FOUND Depends on 'SmppManagement' ")
Deployment "vfszip:/home/telestax/Downloads/restcomm-smsc-7.2.109/jboss-5.1.0.GA/server/default/deploy/smsc-resource-adaptors-du-7.2.109.jar/" is missing the following dependencies:
Dependency "SmscManagement" (should be in state "Real", but is actually in state " NOT FOUND Depends on 'SmscManagement' ")
Deployment "vfszip:/home/telestax/Downloads/restcomm-smsc-7.2.109/jboss-5.1.0.GA/server/default/deploy/smsc-services-du-7.2.109.jar/" is missing the following dependencies:
Dependency "SmscManagement" (should be in state "Real", but is actually in state " NOT FOUND Depends on 'SmscManagement' **")
DEPLOYMENTS IN ERROR:
Deployment "vfsfile:/home/telestax/Downloads/restcomm-smsc-7.2.109/jboss-5.1.0.GA/server/default/deploy/restcomm-smsc-server/META-INF/jboss-beans.xml" is in error due to the following reason(s): java.lang.IllegalStateException: SCTPManagement is already installed.
Deployment "SmscManagement" is in error due to the following reason(s): ** NOT FOUND Depends on 'SmscManagement' **
Deployment "SmppManagement" is in error due to the following reason(s): ** NOT FOUND Depends on 'SmppManagement' **
Kindly help.
Thanks.
Update:
Server is working fine now.
Getting the below error when calling from an SMPP simulator client.
14:26:41,913 INFO [SmppServerConnector] (SmppManagement) New channel from [172.17.0.1:57210]
14:26:41,916 INFO [UnboundSmppSession] (SmppManagement.UnboundSession.172.17.0.1:57210) received PDU: (bind_transmitter: 0x00000023 0x00000002 0x00000000 0x00000001) (body: systemId [test] password [test] systemType [] interfaceVersion [0x34] addressRange (0x01 0x01 [6666])) (opts: )
14:26:41,917 ERROR [DefaultSmppServerHandler] (SmppManagement.UnboundSession.172.17.0.1:57210) Received BIND request but no ESME configured for SystemId=test Host=172.17.0.1 Port=57210 SmppBindType=TRANSMITTER
14:26:41,918 WARN [UnboundSmppSession] (SmppManagement.UnboundSession.172.17.0.1:57210) Bind request rejected or failed for connection [172.17.0.1:57210] with error [SMPP processing error [0x0000000F]]
14:26:41,918 INFO [UnboundSmppSession] (SmppManagement.UnboundSession.172.17.0.1:57210) send PDU: (bind_transmitter_resp: 0x0000001A 0x80000002 0x0000000F 0x00000001 result: "System ID invalid") (body: systemId [test]) (opts: (sc_interface_version: 0x0210 0x0001 [34]))
14:26:41,919 INFO [UnboundSmppSession] (SmppManagement.UnboundSession.172.17.0.1:57210) Connection closed with [172.17.0.1:57210]
Can you retry from the latest snapshot release from https://mobicents.ci.cloudbees.com/job/RestComm-SMSC/ ?

kafka cant connect to zookeeper- FATAL Fatal error during KafkaServerStable startup

Well... every service in the world can connect to my ZooKeeper except Kafka. Below is my connection string in the server.properties file:
zk.connect=1.dzk.syd.druid.neo.com:2181, 2.dzk.syd.druid.neo.com:2181
I have opened all ports on the two ZooKeeper servers... total promiscuous mode. I can even telnet into the ZooKeeper server from the Kafka server:
telnet 2.dzk.syd.druid.neo.com 2181
Trying 54.252.183.218...
Connected to 2.dzk.syd.druid.neo.com.
Escape character is '^]'.
So... I'm rather confused about why Kafka will not connect to ZooKeeper.
I am using Ubuntu 12.04 and Kafka 0.7.2.
[2013-07-16 04:36:49,915] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2013-07-16 04:36:49,915] INFO Client environment:user.dir=/etc/sv/kafka (org.apache.zookeeper.ZooKeeper)
[2013-07-16 04:36:49,916] INFO Initiating client connection, connectString=1.dzk.syd.druid.neo.com:2181, 2.dzk.syd.druid.neo.com:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient#39cc65b1 (org.apache.zookeeper.ZooKeeper)
[2013-07-16 04:36:49,935] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2013-07-16 04:36:49,938] FATAL Fatal error during KafkaServerStable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
org.I0Itec.zkclient.exception.ZkException: Unable to connect to 1.dzk.syd.druid.neo.com:2181, 2.dzk.syd.druid.neo.com:2181
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:66)
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:872)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:98)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:84)
at kafka.server.KafkaZooKeeper.startup(KafkaZooKeeper.scala:44)
at kafka.log.LogManager.<init>(LogManager.scala:93)
at kafka.server.KafkaServer.startup(KafkaServer.scala:58)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
at kafka.Kafka$.main(Kafka.scala:47)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.UnknownHostException: 2.dzk.syd.druid.neo.com: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:894)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1286)
at java.net.InetAddress.getAllByName0(InetAddress.java:1239)
at java.net.InetAddress.getAllByName(InetAddress.java:1155)
at java.net.InetAddress.getAllByName(InetAddress.java:1091)
at org.apache.zookeeper.ClientCnxn.<init>(ClientCnxn.java:387)
at org.apache.zookeeper.ClientCnxn.<init>(ClientCnxn.java:332)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:383)
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:64)
... 9 more
[2013-07-16 04:36:49,942] INFO Shutting down Kafka server (kafka.server.KafkaServer)
[2013-07-16 04:36:49,943] INFO shutdown scheduler kafka-logcleaner- (kafka.utils.KafkaScheduler)
[2013-07-16 04:36:49,944] INFO Kafka server shut down completed (kafka.server.KafkaServer)
In your kafka/config/server.properties, there should be a property
#host.name=localhost
If you have uncommented this, or set it to another name, then that name should be in the /etc/hosts file.
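For illustration only, a placeholder /etc/hosts entry (the IP and hostname below are made up; use your broker's actual address and whatever you set host.name to):
10.0.0.12   my-kafka-host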
It's been a while since this was answered, but in case it helps someone, here is how I fixed it:
I am using an Ansible playbook to install the Kafka cluster, and the parameters generated in the zookeeper.properties file were not correctly ordered:
server.1=0.0.0.0:2888:3888
server.2=kafka-4:2888:3888
server.3=kafka-5:2888:3888
server.4=kafka-3:2888:3888
server.5=kafka-2:2888:3888
Putting them in the right order:
server.1=0.0.0.0:2888:3888
server.2=kafka-2:2888:3888
server.3=kafka-3:2888:3888
server.4=kafka-4:2888:3888
server.5=kafka-5:2888:3888
Then restarting the Kafka service fixed it.
Change this in zookeeper.properties
maxClientCnxns=0 to maxClientCnxns=1