filebeat to kafka: Failed to connect to broker

I'm new to the Apache ecosystem and am currently trying to send log data from a Filebeat producer to a Kafka broker.
Environment:
kafka 2.11 (installed via Ambari)
filebeat 7.4.2 (Windows)
I tried to send logs from Filebeat into Kafka. I started the Kafka servers and created a topic named "test", which showed up under --list. I'm pretty confused about the Kafka broker's port: some tutorials use 9092 instead of 2181. So which port should I use to send logs from Filebeat?
Here is my filebeat.conf:
filebeat.inputs:
- type: log
  paths:
    - C:/Users/A/Desktop/DATA/mailbox3.csv

output.kafka:
  hosts: ["x.x.x.x:9092"]
  topic: "test"
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
Result:
2020-06-10T09:00:32.214+0700 INFO kafka/log.go:53 Failed to connect to broker x.x.x.x:9092: dial tcp x.x.x.x:9092: connectex: No connection could be made because the target machine actively refused it.
2020-06-10T09:00:32.214+0700 INFO kafka/log.go:53 client/metadata got error from broker -1 while fetching metadata: dial tcp x.x.x.x:9092: connectex: No connection could be made because the target machine actively refused it.
2020-06-10T09:00:32.215+0700 INFO kafka/log.go:53 kafka message: client/metadata no available broker to send metadata request to
2020-06-10T09:00:32.215+0700 INFO kafka/log.go:53 client/brokers resurrecting 1 dead seed brokers
2020-06-10T09:00:32.215+0700 INFO kafka/log.go:53 client/metadata retrying after 250ms... (3 attempts remaining)
2020-06-10T09:00:32.466+0700 INFO kafka/log.go:53 client/metadata fetching metadata for [test] from broker x.x.x.x:9092
2020-06-10T09:00:34.475+0700 INFO kafka/log.go:53 Failed to connect to broker x.x.x.x:9092: dial tcp x.x.x.x:9092: connectex: No connection could be made because the target machine actively refused it.
2020-06-10T09:00:34.475+0700 INFO kafka/log.go:53 client/metadata got error from broker -1 while fetching metadata: dial tcp x.x.x.x:9092: connectex: No connection could be made because the target machine actively refused it.
2020-06-10T09:00:34.477+0700 INFO kafka/log.go:53 kafka message: client/metadata no available broker to send metadata request to
2020-06-10T09:00:34.477+0700 INFO kafka/log.go:53 client/brokers resurrecting 1 dead seed brokers
2020-06-10T09:00:34.478+0700 INFO kafka/log.go:53 client/metadata retrying after 250ms... (2 attempts remaining)
2020-06-10T09:00:34.729+0700 INFO kafka/log.go:53 client/metadata fetching metadata for [test] from broker x.x.x.x:9092
2020-06-10T09:00:36.737+0700 INFO kafka/log.go:53 Failed to connect to broker x.x.x.x:9092: dial tcp x.x.x.x:9092: connectex: No connection could be made because the target machine actively refused it.
2020-06-10T09:00:36.737+0700 INFO kafka/log.go:53 client/metadata got error from broker -1 while fetching metadata: dial tcp x.x.x.x:9092: connectex: No connection could be made because the target machine actively refused it.
2020-06-10T09:00:36.738+0700 INFO kafka/log.go:53 kafka message: client/metadata no available broker to send metadata request to
2020-06-10T09:00:36.738+0700 INFO kafka/log.go:53 client/brokers resurrecting 1 dead seed brokers
2020-06-10T09:00:36.738+0700 INFO kafka/log.go:53 client/metadata retrying after 250ms... (1 attempts remaining)
2020-06-10T09:00:36.989+0700 INFO kafka/log.go:53 client/metadata fetching metadata for [test] from broker x.x.x.x:9092
2020-06-10T09:00:39.002+0700 INFO kafka/log.go:53 Failed to connect to broker x.x.x.x:9092: dial tcp x.x.x.x:9092: connectex: No connection could be made because the target machine actively refused it.
2020-06-10T09:00:39.002+0700 INFO kafka/log.go:53 client/metadata got error from broker -1 while fetching metadata: dial tcp x.x.x.x:9092: connectex: No connection could be made because the target machine actively refused it.
2020-06-10T09:00:39.004+0700 INFO kafka/log.go:53 kafka message: client/metadata no available broker to send metadata request to
2020-06-10T09:00:39.004+0700 INFO kafka/log.go:53 client/brokers resurrecting 1 dead seed brokers
2020-06-10T09:00:39.004+0700 INFO kafka/log.go:53 client/metadata fetching metadata for [test] from broker x.x.x.x:9092
This made me wonder whether I really had port 9092, so I checked server.properties. The lines that concerned me most were:
port=6667
listeners=PLAINTEXT://x.x.x.x:6667
So I edited filebeat.conf again, changed the port from 9092 to 6667, and here is the result:
2020-06-10T09:18:01.448+0700 INFO kafka/log.go:53 client/metadata fetching metadata for [test] from broker x.x.x.x:6667
2020-06-10T09:18:01.450+0700 INFO kafka/log.go:53 producer/broker/1001 starting up
2020-06-10T09:18:01.451+0700 INFO kafka/log.go:53 producer/broker/1001 state change to [open] on test/0
2020-06-10T09:18:01.451+0700 INFO kafka/log.go:53 producer/leader/test/0 selected broker 1001
2020-06-10T09:18:01.451+0700 INFO kafka/log.go:53 Failed to connect to broker x.x.x.x:6667: dial tcp: lookup x.x.x.x: no such host
2020-06-10T09:18:01.452+0700 INFO kafka/log.go:53 producer/broker/1001 state change to [closing] because dial tcp: lookup x.x.x.x: no such host
2020-06-10T09:18:01.453+0700 DEBUG [kafka] kafka/client.go:264 finished kafka batch
2020-06-10T09:18:01.453+0700 DEBUG [kafka] kafka/client.go:278 Kafka publish failed with: dial tcp: lookup x.x.x.x: no such host
2020-06-10T09:18:01.454+0700 INFO kafka/log.go:53 producer/leader/test/0 state change to [flushing-3]
2020-06-10T09:18:01.456+0700 INFO kafka/log.go:53 producer/leader/test/0 state change to [normal]
2020-06-10T09:18:01.456+0700 INFO kafka/log.go:53 producer/leader/test/0 state change to [retrying-3]
2020-06-10T09:18:01.456+0700 INFO kafka/log.go:53 producer/leader/test/0 abandoning broker 1001
2020-06-10T09:18:01.456+0700 INFO kafka/log.go:53 producer/broker/1001 shut down
Questions
What happened? Which port should be used? What is each port used for?
Any response will be much appreciated. Thank you.
UPDATE
According to this source, the right port is 6667, since Kafka was installed via Ambari.

There is no restriction on which port can be used; it only depends on availability.
In the first case, as you said, the broker had been started on 6667, so no process was listening on 9092 and the connection was refused.
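To confirm which port the broker has actually bound, you can check on the broker host; a quick sketch, assuming a Linux broker host with ss available:
ss -tlnp | grep -E ':(9092|6667)'
A port that does not show up there will refuse connections, which matches the first log above.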
2020-06-10T09:18:01.451+0700 INFO kafka/log.go:53 Failed to connect to broker x.x.x.x:6667: dial tcp: lookup x.x.x.x: no such host
Next, regarding the advertised.listeners property: you must ensure that the IP you put in advertised.listeners is an IP actually assigned to that machine. You cannot advertise 1.1.1.1:9092 (just to mention an example) if the machine does not own that address.
Execute ifconfig (Linux) or ipconfig (Windows) and note the IP of your machine on the network interface that is reachable from your application machine.
On Linux, this will usually be eth0.
This IP must be accessible from the machine where you are running your application.
The application machine must also be able to resolve that address. You may also want to check the network connection between your Kafka broker and the machine you are running your application on.
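For example, a minimal server.properties sketch (192.168.0.10 is a placeholder; use the broker machine's real, routable address):
listeners=PLAINTEXT://0.0.0.0:6667
advertised.listeners=PLAINTEXT://192.168.0.10:6667
Filebeat's hosts entry must then use that same routable address, e.g. hosts: ["192.168.0.10:6667"].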

Related

Kafka stretched cluster stopped when second DC become down

My Kafka version:
/opt/kafka/bin/kafka-topics.sh --version
2.4.1 (Commit:c57222ae8cd7866b)
My Kafka cluster configuration looks like:
6 nodes Kafka cluster
6 x Zookeeper i.e. is installed on each node/broker
2 DC's, there are 3 nodes in each DC
rack-awareness feature is enabled on each node:
node1 DC1:
broker.id=1
broker.rack=dc1
node2 DC1:
broker.id=2
broker.rack=dc1
node3 DC1:
broker.id=3
broker.rack=dc1
node1 DC2:
broker.id=4
broker.rack=dc2
node2 DC2:
broker.id=5
broker.rack=dc2
node3 DC2:
broker.id=6
broker.rack=dc2
When the whole of DC2 went down, the Kafka cluster stopped and node1 in DC1 showed errors like this:
[2022-03-16 07:38:45,422] INFO Unable to read additional data from server sessionid 0x40000004f930002, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:45,549] INFO Unable to read additional data from server sessionid 0x200ab15af610000, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:45,787] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-03-16 07:38:45,787] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-03-16 07:38:45,787] INFO Opening socket connection to server dc2kafkabr2/A.B.C.72:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:45,788] INFO Socket error occurred: dc2kafkabr2/A.B.C.72:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,503] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-03-16 07:38:46,503] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-03-16 07:38:46,503] INFO Opening socket connection to server dc1kafkabr1/A.B.C.68:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,504] INFO Socket connection established, initiating session, client: /A.B.C.68:35796, server: dc1kafkabr1/A.B.C.68:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,505] INFO Unable to read additional data from server sessionid 0x40000004f930002, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,616] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-03-16 07:38:46,617] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-03-16 07:38:46,617] INFO Opening socket connection to server dc1kafkabr2/A.B.C.69:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,617] INFO Socket connection established, initiating session, client: /A.B.C.68:38936, server: dc1kafkabr2/A.B.C.69:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,619] INFO Unable to read additional data from server sessionid 0x200ab15af610000, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,896] INFO Client successfully logged in. (org.apache.zookeeper.Login)
However, when the Kafka nodes in DC2 are stopped gracefully via the systemctl command, the Kafka cluster keeps working properly on the DC1 nodes.
The question is: why does the Kafka cluster stop working when DC2 is turned off? How can this be prevented? Any ideas?
Best Regards,
Dan
Dears,
After further tests I know the problem is on the ZooKeeper side, because when I turn off two brokers in DC2 the Kafka cluster still works. After turning off kafka.service on the last broker in DC2, the Kafka cluster still works. But when I turn off zookeeper.service on the last broker in DC2, the cluster becomes unresponsive.
This is my zookeeper's configuration:
cat zookeeper.properties
tickTime=2000
dataDir=/opt/zookeeper/data
#dataLogDir=/var/log/zookeeper
clientPort=2181
initLimit=5
syncLimit=3
############## HARDENING #################
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
###########################################
server.1=A.B.C.68:2888:3888
server.2=A.B.C.69:2888:3888
server.3=A.B.C.70:2888:3888
server.4=A.B.C.71:2888:3888
server.5=A.B.C.72:2888:3888
server.6=A.B.C.73:2888:3888
Any idea what is wrong in this configuration?
Best Regards,
Dan
The ZooKeeper quorum is no longer ensured, and this is the reason: with six ZooKeeper servers, a quorum needs a majority of four, so when DC2 goes down only three servers remain, the ensemble loses quorum, and Kafka stops with it.
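One common way to prevent this, sketched below under the assumption that a small third site or VM is available for a tiebreaker (the zk-tiebreaker-dc3 hostname is a placeholder), is to keep the ensemble size odd so a single DC cannot take out the majority:
server.1=A.B.C.68:2888:3888
server.2=A.B.C.69:2888:3888
server.3=A.B.C.70:2888:3888
server.4=A.B.C.71:2888:3888
server.5=A.B.C.72:2888:3888
server.6=A.B.C.73:2888:3888
server.7=zk-tiebreaker-dc3:2888:3888
With seven servers the quorum is still four, so losing all of DC2 (three servers) leaves four and the ensemble keeps running.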

Kafka gives Invalid receive size with Hyperledger Fabric Orderer connection

I was setting up a new cluster for Hyperledger Fabric on EKS. The cluster has 4 kafka nodes, 3 zookeeper nodes, 4 peers, 3 orderers, 1 CA. All the containers come up individually, and the kafka/zookeeper backend is also stable. I can SSH into any kafka/zookeeper and check for connections to any other nodes, create topics, post messages etc. The kafka is accessible via Telnet from all orderers.
When I try to create a channel I get the following error from the orderer:
2019-04-25 13:34:17.660 UTC [orderer.common.broadcast] ProcessMessage -> WARN 025 [channel: channel1] Rejecting broadcast of message from 192.168.94.15:53598 with SERVICE_UNAVAILABLE: rejected by Consenter: backing Kafka cluster has not completed booting; try again later
2019-04-25 13:34:17.660 UTC [comm.grpc.server] 1 -> INFO 026 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=192.168.94.15:53598 grpc.code=OK grpc.call_duration=14.805833ms
2019-04-25 13:34:17.661 UTC [common.deliver] Handle -> WARN 027 Error reading from 192.168.94.15:53596: rpc error: code = Canceled desc = context canceled
2019-04-25 13:34:17.661 UTC [comm.grpc.server] 1 -> INFO 028 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.94.15:53596 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=24.987468ms
And the Kafka leader reports the following error:
[2019-04-25 14:07:09,453] WARN [SocketServer brokerId=2] Unexpected error from /192.168.89.200; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 369295617 larger than 104857600)
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:132)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:231)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:192)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:528)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:469)
at org.apache.kafka.common.network.Selector.poll(Selector.java:398)
at kafka.network.Processor.poll(SocketServer.scala:535)
at kafka.network.Processor.run(SocketServer.scala:452)
at java.lang.Thread.run(Thread.java:748)
[2019-04-25 14:13:53,917] INFO [GroupMetadataManager brokerId=2] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
The error indicates that the broker is receiving messages larger than the permitted maximum size, which defaults to ~100MB. Try increasing the following property in the server.properties file so that it can fit a larger receive (in this case at least 369295617 bytes):
# Set to 500MB
socket.request.max.bytes=500000000
and then restart your Kafka Cluster.
If this doesn't work for you, then you are probably connecting a TLS client to a non-SSL listener: the reported size 369295617 is 0x16030101 in hex, the first bytes of a TLS handshake record, so the broker is most likely reading a TLS ClientHello on a plaintext port. Therefore, verify that the broker's SSL listener port is 9092 (or the corresponding port in case you are not using the default one). The following should do the trick:
listeners=SSL://:9092
advertised.listeners=SSL://:9092
inter.broker.listener.name=SSL
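To verify what the port is actually speaking, a quick probe from an orderer host (a sketch, assuming openssl is installed; substitute your broker address):
openssl s_client -connect kafka-broker:9092 </dev/null
Against a TLS listener this completes a handshake and prints the certificate chain; against a plaintext listener the handshake fails, and the broker logs exactly the Invalid receive error shown above.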

Zookeeper does not start normally (Established session 0x10000025c8a0001 with negotiated timeout 6000) and kafka fails

I had previously run zookeeper and kafka successfully many times, and I believe my installation and configurations are correct.
The only change I made was to the zookeeper config file:
dataDir=/Users/garynackenson/Downloads/kafka_2.12-2.0.0/data/zookeeper
which I have created the directory for.
Now when I run ZooKeeper, instead of getting the INFO message about binding to port 0.0.0.0/0.0.0.0:2181,
I get the error below, and Kafka fails with a port 9092 in use error (I have restarted my machine and checked every way I know that port 9092 is not in use).
The last message from ZooKeeper is below, and it does not look right:
INFO Established session 0x10000025c8a0001 with negotiated timeout 6000 for client /127.0.0.1:49977 (org.apache.zookeeper.server.ZooKeeperServer)
When ZooKeeper starts that way, Kafka fails with a 9092 in use error (see below). I restarted and checked that I am not using port 9092.
org.apache.kafka.common.KafkaException: Socket server failed to bind to 0.0.0.0:9092: Address already in use.
A little while later, I saw that ZooKeeper had a different issue:
INFO Closed socket connection for client /0:0:0:0:0:0:0:1:49986 which had sessionid 0x100000b679d0000 (org.apache.zookeeper.server.NIOServerCnxn)
I ran ZooKeeper again and saw the more 'normal' binding to 2181, four messages up:
[2018-10-03 18:25:08,064] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2018-10-03 18:25:09,055] INFO Accepted socket connection from /127.0.0.1:50014 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2018-10-03 18:25:09,062] INFO Client attempting to renew session 0x10000025c8a0001 at /127.0.0.1:50014 (org.apache.zookeeper.server.ZooKeeperServer)
[2018-10-03 18:25:09,066] INFO Established session 0x10000025c8a0001 with negotiated timeout 6000 for client /127.0.0.1:50014 (org.apache.zookeeper.server.ZooKeeperServer)
but Kafka is still failing every time.
Also, sometimes I get the following message when I start ZooKeeper:
[2018-10-03 18:10:36,097] INFO Got user-level KeeperException when processing sessionid:0x10000025c8a0001 type:delete cxid:0x47 zxid:0x179 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election (org.apache.zookeeper.server.PrepRequestProcessor)
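One way to see which process is actually holding 9092 on macOS is lsof (run it while Kafka is reporting the bind error):
lsof -nP -iTCP:9092 -sTCP:LISTEN
This prints the PID and command bound to the port, often a previous Kafka process that never fully exited.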

Kafka Zookeeper Connection drop continuously

I have set up a 3-node Kafka cluster and a 3-node ZooKeeper cluster on separate nodes. Using Kafka I can produce and consume messages successfully and run commands like kafka-topics.sh to get topic lists and their details from ZooKeeper, but there are some errors in the Kafka server.log file. The following warning appears continuously:
[2018-02-18 21:50:01,241] WARN Client session timed out, have not heard from server in 320190154ms for sessionid 0x161a94b101f0001 (org.apache.zookeeper.ClientCnxn)
[2018-02-18 21:50:01,242] INFO Client session timed out, have not heard from server in 320190154ms for sessionid 0x161a94b101f0001, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2018-02-18 21:50:01,343] INFO zookeeper state changed (Disconnected) (org.I0Itec.zkclient.ZkClient)
[2018-02-18 21:50:01,989] INFO Opening socket connection to server zookeeper3/192.168.1.206:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-02-18 21:50:02,008] INFO Socket connection established to zookeeper3/192.168.1.206:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2018-02-18 21:50:02,042] INFO Session establishment complete on server zookeeper3/192.168.1.206:2181, sessionid = 0x161a94b101f0001, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2018-02-18 21:50:02,042] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2018-02-18 21:59:31,570] INFO [Group Metadata Manager on Broker 102]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
It seems the Kafka sessions in ZooKeeper expire periodically!
There are the following warnings in the ZooKeeper logs, too:
2018-02-18 18:20:06,149 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#368] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x161a94b101f0001, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:239)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
at java.lang.Thread.run(Thread.java:748)
2018-02-18 18:20:06,151 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1044] - Closed socket connection for client /192.168.1.203:43162 which had sessionid 0x161a94b101f0001
2018-02-18 18:20:06,781 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#368] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x161a94b101f0002, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:239)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
at java.lang.Thread.run(Thread.java:748)
2018-02-18 18:20:06,782 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1044] - Closed socket connection for client /192.168.1.201:45330 which had sessionid 0x161a94b101f0002
2018-02-18 18:37:29,127 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#192] - Accepted socket connection from /192.168.1.202:52480
2018-02-18 18:37:29,139 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#942] - Client attempting to establish new session at /192.168.1.202:52480
2018-02-18 18:37:29,143 [myid:1] - INFO [CommitProcessor:1:ZooKeeperServer#687] - Established session 0x161a94b101f0003 with negotiated timeout 30000 for client /192.168.1.202:52480
2018-02-18 18:37:29,432 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1044] - Closed socket connection for client /192.168.1.202:52480 which had sessionid 0x161a94b101f0003
I think it's because ZooKeeper can't get heartbeats from the Kafka nodes. The following is the ZooKeeper zoo.cfg:
tickTime=2000
dataDir=/var/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888
and the customized settings in Kafka server.properties:
broker.id=1
listeners = PLAINTEXT://kafka1:9092
num.partitions=24
delete.topic.enable=true
default.replication.factor=2
log.dirs=/data/kafka/data
zookeeper.connect=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
log.retention.hours=168
I use the same ZooKeeper cluster for Hadoop HA without any problem. I think there is something wrong with the Kafka properties listeners and advertised.listeners; I read the Kafka documentation but couldn't understand their meaning.
In the hosts file of every OS, hostnames zookeeper1 through zookeeper3 and kafka1 through kafka3 are defined and reachable via the ping command. I removed the following lines from the hosts files:
127.0.0.1 localhost
127.0.1.1 hostname
but I don't think that could cause the problem.
Kafka version: 0.11
Zookeeper version: 3.4.10
Can anyone help?
We were facing a similar issue with Kafka. As #Soheil pointed out, it was due to a major GC running.
When a major GC runs, Kafka can fail to send its heartbeat to ZooKeeper in time. For us, the major GC was running almost once every 15 seconds. On taking a heap dump, we realized it was due to a metric memory leak in Kafka.
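A way to spot this kind of GC pressure on a broker, sketched here assuming a JDK with jstat on the PATH and that the broker's main class is kafka.Kafka:
jstat -gcutil $(pgrep -f kafka.Kafka) 1000
This samples the GC counters every second; a rapidly climbing FGC column means frequent major GCs, which can starve the thread that sends ZooKeeper heartbeats.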

Why can't I connect to Kafka/Zookeeper? (In a Docker)

I'm running Kafka (0.10.0.0) in a Docker container on a Mac (with docker-machine). I derived my Dockerfile from Spotify's, which means Kafka and Zookeeper run in the same image.
My instance starts cleanly, and poking around inside it, everything appears normal.
Docker maps ports 2181 and 9092 to high-ports 32822 and 32820 in this case. From outside my running Kafka Docker I am able to successfully telnet 192.168.99.100 32822 (where 192.168.99.100 is the IP of my docker-machine). From there I can issue a zookeeper command and get expected output.
It all seems so encouraging, but... I then try this code:
import java.util.Properties

import kafka.admin.AdminUtils
import kafka.utils.ZkUtils

val numPartitions = 4
val replicationFactor = 1
val topicConfig = new Properties

val zookeeper = "192.168.99.100:32822" // docker-machine IP, mapped ZK port
val topic = "test-topic"               // hypothetical topic name

// ZkUtils(zkUrl, sessionTimeout, connectionTimeout, isZkSecurityEnabled)
val zkClient = ZkUtils(zookeeper, 10000, 10000, false)
try {
  AdminUtils.createTopic(zkClient, topic, numPartitions, replicationFactor, topicConfig)
} catch {
  case _: kafka.common.TopicExistsException => // do nothing...topic exists
}
zkClient.close()
This produces this error output:
DEBUG ZkConnection - Creating new ZookKeeper instance to connect to 192.168.99.100:32822.
INFO ZkEventThread - Starting ZkClient event thread.
INFO ZooKeeper - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
INFO ZooKeeper - Client environment:host.name=172.25.42.82
INFO ZooKeeper - Client environment:java.version=1.8.0_60
INFO ZooKeeper - Client environment:java.vendor=Oracle Corporation
INFO ZooKeeper - Client environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home/jre
INFO ZooKeeper - Client environment:java.class.path=/usr/local/Cellar/sbt/0.13.11/libexec/sbt-launch.jar
INFO ZooKeeper - Client environment:java.library.path=/Users/wmy965/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.
INFO ZooKeeper - Client environment:java.io.tmpdir=/var/folders/ph/ccz4n1qs62n0bn8mqdg94gswt1jlwk/T/
INFO ZooKeeper - Client environment:java.compiler=<NA>
INFO ZooKeeper - Client environment:os.name=Mac OS X
INFO ZooKeeper - Client environment:os.arch=x86_64
INFO ZooKeeper - Client environment:os.version=10.11.5
INFO ZooKeeper - Client environment:user.name=wmy965
INFO ZooKeeper - Client environment:user.home=/Users/wmy965
INFO ZooKeeper - Client environment:user.dir=/Users/wmy965/git/LateKafka
INFO ZooKeeper - Initiating client connection, connectString=192.168.99.100:32822 sessionTimeout=10000 watcher=org.I0Itec.zkclient.ZkClient#55397e3
DEBUG ClientCnxn - zookeeper.disableAutoWatchReset is false
DEBUG ZkClient - Awaiting connection to Zookeeper server
INFO ZkClient - Waiting for keeper state SyncConnected
INFO ClientCnxn - Opening socket connection to server 192.168.99.100/192.168.99.100:32822. Will not attempt to authenticate using SASL (unknown error)
WARN ClientCnxn - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
DEBUG ClientCnxnSocketNIO - Ignoring exception during shutdown input
java.nio.channels.ClosedChannelException
at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:780)
at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:399)
at org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:200)
at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1185)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1110)
DEBUG ClientCnxnSocketNIO - Ignoring exception during shutdown output
java.nio.channels.ClosedChannelException
at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:797)
at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:407)
at org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:207)
at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1185)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1110)
INFO ClientCnxn - Opening socket connection to server 192.168.99.100/192.168.99.100:32822. Will not attempt to authenticate using SASL (unknown error)
INFO ClientCnxn - Socket connection established to 192.168.99.100/192.168.99.100:32822, initiating session
DEBUG ClientCnxn - Session establishment request sent on 192.168.99.100/192.168.99.100:32822
INFO ClientCnxn - Session establishment complete on server 192.168.99.100/192.168.99.100:32822, sessionid = 0x155225c51720000, negotiated timeout = 10000
DEBUG ZkClient - Received event: WatchedEvent state:SyncConnected type:None path:null
INFO ZkClient - zookeeper state changed (SyncConnected)
DEBUG ZkClient - Leaving process event
DEBUG ZkClient - State is SyncConnected
DEBUG ClientCnxn - Reading reply sessionid:0x155225c51720000, packet:: clientPath:null serverPath:null finished:false header:: 1,8 replyHeader:: 1,1,-101 request:: '/brokers/ids,F response:: v{}
It looks like I can't connect (presumably to zookeeper). Why not?
With newer Kafka clients, the broker address the producer connects to must be resolvable by the producer. The Kafka container advertises its own hostname (the container ID; you can see it in /etc/hosts inside the Kafka Docker container) and expects clients to connect back using it.
Summary:
Map the Kafka container's hostname to the docker-machine IP in /etc/hosts on macOS.
To help you, here is how to edit the hosts file on a Mac:
https://www.tekrevue.com/tip/edit-hosts-file-mac-os-x/
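For example, assuming the hostname inside the container turns out to be 3a1b2c4d5e6f (check /etc/hosts inside the container; yours will differ), the entry on the Mac would be:
192.168.99.100 3a1b2c4d5e6f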
Cleaner would be to set advertised.listeners=PLAINTEXT://host-ip:port in the broker's server.properties, since advertised.host.name and advertised.port are deprecated.
If you set the host IP to 0.0.0.0, the broker will listen for requests from anywhere, but that is insecure.
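A minimal sketch of those settings for this setup, reusing the docker-machine IP and the mapped Kafka port from the question:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.99.100:32820
The broker binds on 9092 inside the container but tells clients to connect back through the address and port that are reachable from the Mac.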