I have installed confluent-oss-5.0.0 on an Azure VM and exposed all the necessary ports so it can be accessed via the public IP address.
I tried the following changes to etc/kafka/server.properties to achieve this, but with no luck:
Approach - 1
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://<publicIP>:9092
--------------------------------------
Approach - 2
advertised.listeners=PLAINTEXT://<publicIP>:9092
--------------------------------------
Approach - 3
listeners=PLAINTEXT://<publicIP>:9092
I experienced the errors below:
pj@pj-HP-EliteBook-840-G1:~/confluent-kafka/confluent-oss-5.0.0/bin$ kafka-console-producer --broker-list <publicIp>:9092 --topic pj_test123
>dfsds
[2019-03-25 19:13:38,784] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
pj@pj-HP-EliteBook-840-G1:~/confluent-kafka/confluent-oss-5.0.0/bin$ kafka-console-producer --broker-list <publicIp>:9092 --topic pj_test123
>message1
>message2
>[2019-03-25 19:20:13,216] ERROR Error when sending message to topic pj_test123 with key: null, value: 3 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for pj_test123-0: 1503 ms has passed since batch creation plus linger time
[2019-03-25 19:20:13,218] ERROR Error when sending message to topic pj_test123 with key: null, value: 3 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
pj@pj-HP-EliteBook-840-G1:~/confluent-kafka/confluent-oss-5.0.0/bin$ kafka-console-consumer --bootstrap-server <publicIp>:9092 --topic pj_test123 --from-beginning
[2019-03-25 19:29:27,742] WARN [Consumer clientId=consumer-1, groupId=console-consumer-42352] Error while fetching metadata with correlation id 2 : {pj_test123=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
pj@pj-HP-EliteBook-840-G1:~/confluent-kafka/confluent-oss-5.0.0/bin$ kafka-console-consumer --bootstrap-server <publicIp>:9092 --topic pj_test123 --from-beginning
[2019-03-25 19:27:06,589] WARN [Consumer clientId=consumer-1, groupId=console-consumer-33252] Connection to node 0 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
All other services like ZooKeeper, Kafka Connect, and the REST API work fine using <PublicIP>:<port>.
kafka-topics --zookeeper 13.71.115.20:2181 --list (this works)
Ref:
Not able to access messages from confluent kafka on EC2
https://kafka.apache.org/documentation/#brokerconfigs
Why I cannot connect to Kafka from outside?
Solutions
Thanks, @Robin Moffatt, it works for me. I made the changes below, along with allowing all Kafka-related ports in Azure networking:
kafka@kafka:~/confluent-oss-5.0.0$ sudo vi etc/kafka/server.properties
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:19092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=INTERNAL://<privateIp>:9092,EXTERNAL://<publicIp>:19092
inter.broker.listener.name=INTERNAL
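With this split, clients inside the Azure virtual network bootstrap against the internal listener and external clients against the external one, e.g. (note the external port is now 19092, not 9092):
kafka-console-producer --broker-list <publicIp>:19092 --topic pj_test123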
You need to configure both internal and external listeners for your broker. This article details how: https://rmoff.net/2018/08/02/kafka-listeners-explained/.
You will also have to give public access to port 9092 (your broker). To do that:
Go to your Virtual machine in Azure portal
Select Networking under settings in the left menu
Add inbound port rule
Add port 9092 to be accessible from anywhere
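Equivalently, this can be scripted with the Azure CLI (the resource group and VM names below are placeholders):
az vm open-port --resource-group <myResourceGroup> --name <myVM> --port 9092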
I'm facing an issue finding the depth of a topic on Kafka's SSL-enabled port using kafka.tools.GetOffsetShell, although I am able to consume messages from that same port.
However, I am able to execute the same command to get the depth of a topic on the PLAINTEXT port.
Below is the error stack:
sshuser@wn0:~$ sudo bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list wn0.internal.cloudapp.net:9093 --topic TP.TOPIC --time -1 --offsets 1 --security-protocol SSL
{metadata.broker.list=wn0.internal.cloudapp.net:9093, request.timeout.ms=1000, client.id=GetOffsetShell, security.protocol=SSL}
[2017-09-26 19:21:59,026] WARN Fetching topic metadata with correlation id 0 for topics [Set(TP.TOPIC)] from broker [BrokerEndPoint(0,wn0.internal.cloudapp.net:9093)] failed (kafka.client.ClientUtils$)
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:99)
at kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:140)
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:131)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:84)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:81)
at kafka.producer.SyncProducer.send(SyncProducer.scala:126)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:96)
at kafka.tools.GetOffsetShell$.main(GetOffsetShell.scala:98)
at kafka.tools.GetOffsetShell.main(GetOffsetShell.scala)
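No fix was posted for this one, but the stack trace itself is telling: the tool goes through the legacy Scala client path (kafka.producer.SyncProducer / kafka.network.BlockingChannel), which has no SSL support, so the broker's SSL listener closes the connection and the client sees an EOFException. As a hedged workaround, the end offsets ("depth") can be read with the modern Java consumer, which does speak SSL. A sketch, assuming a 0.10.1+ kafka-clients on the classpath; the truststore path and password are placeholders:
import java.util.Properties
import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer

object TopicDepth {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "wn0.internal.cloudapp.net:9093")
    props.put("security.protocol", "SSL")
    props.put("ssl.truststore.location", "/path/to/client.truststore.jks") // placeholder
    props.put("ssl.truststore.password", "<password>")                     // placeholder
    props.put("key.deserializer", classOf[StringDeserializer].getName)
    props.put("value.deserializer", classOf[StringDeserializer].getName)

    val consumer = new KafkaConsumer[String, String](props)
    try {
      // resolve all partitions of the topic, then ask for their log-end offsets
      val partitions = consumer.partitionsFor("TP.TOPIC").asScala
        .map(pi => new TopicPartition(pi.topic, pi.partition)).asJava
      consumer.endOffsets(partitions).asScala.foreach {
        case (tp, end) => println(s"$tp -> $end")
      }
    } finally consumer.close()
  }
}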
While producing messages in Kafka, I am getting the following error:
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic nil_PF1_P1
hi
hello
[2016-07-19 17:06:34,542] ERROR Error when sending message to topic nil_PF1_P1 with key: null, value: 2 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2016-07-19 17:07:34,544] ERROR Error when sending message to topic nil_PF1_P1 with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic nil_PF1_P1
Topic:nil_PF1_P1 PartitionCount:1 ReplicationFactor:1 Configs:
Topic: nil_PF1_P1 Partition: 0 Leader: 2 Replicas: 2 Isr: 2
Any idea on this??
Instead of changing server.properties, include the address 0.0.0.0 in the command itself.
Instead of
/usr/bin/kafka-console-producer --broker-list Hostname:9092 --topic MyFirstTopic1
use
/usr/bin/kafka-console-producer --broker-list 0.0.0.0:9092 --topic MyFirstTopic1
It may be because of some parameters from Kafka's server.properties file. You can find more information here
Stop the Kafka server with
cd $KAFKA_HOME/bin
./kafka-server-stop.sh
Change
listeners=PLAINTEXT://hostname:9092
to
listeners=PLAINTEXT://0.0.0.0:9092
in $KAFKA_HOME/config/server.properties
Restart the Kafka server with
$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties
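To confirm the broker is now bound on all interfaces (a quick check, assuming netstat is available), the local address column should show 0.0.0.0:9092 rather than a specific host:
netstat -tln | grep 9092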
I know this is old, but this may work for someone else who's dealing with it.
I changed 2 things:
1. changed the "bootstrap.servers" property or the --broker-list option to 0.0.0.0:9092
2. changed (uncommented and edited, in my case) 2 properties in server.properties:
listeners=PLAINTEXT://your.host.name:9092 to listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://your.host.name:9092 to advertised.listeners=PLAINTEXT://localhost:9092
I faced a similar problem, where I was able to produce and consume on localhost but not from other machines on the network. Based on a few answers, I got the clue that we essentially need to expose the advertised.listeners address to producers and consumers; however, 0.0.0.0 was not working there either. So I set the exact IP in advertised.listeners:
advertised.listeners=PLAINTEXT://HOST.IP:9092
And I left listeners=PLAINTEXT://:9092 as it is.
With this, the broker exposes the advertised IP and port to producers and consumers.
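To verify what the broker actually advertises to clients, you can dump the cluster metadata from another machine, e.g. with kafkacat if you have it installed; the broker list in the output should show HOST.IP, not localhost or 0.0.0.0:
kafkacat -L -b HOST.IP:9092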
If you are running a Hortonworks cluster, check the listening port in Ambari.
In my case 9092 was not my port. I went to Ambari and found the listening port was set to 6667
it worked for me. :)
I got the same error today with confluent_kafka 0.9.2 (0x90200) and librdkafka 0.9.2 (0x90401). In my case, I specified the wrong broker port in the tutorialspoint example:
$ kafka-console-producer.sh --broker-list localhost:9092 --topic tutorialpoint-basic-ops-01
although my broker was started on port 9094:
$ cat server-02.properties
broker.id=2
port=9094
log.dirs=/tmp/kafka-example-logs-02
zookeeper.connect=localhost:2181
Although the 9092 port was not open (checked with netstat -tunap), it took 60s for kafka-console-producer.sh to raise an error. It looks like this tool needs a fix to:
fail faster
with a more explicit error message.
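In the meantime, a quick port check before producing surfaces the problem immediately (the same nc technique appears later in this thread):
nc -z localhost 9092; echo $?   # prints 1 here, since this broker listens on 9094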
I faced the above exception stack trace. I investigated and found the root cause: I hit it when I set up a Kafka cluster with two nodes, with the following settings in server.properties. Here I denote the server.properties of Kafka nodes 1 and 2 as broker1.properties and broker2.properties.
broker1.properties settings
listeners=PLAINTEXT://A.B.C.D:9092
zookeeper.connect=A.B.C.D:2181,E.F.G.H:2181
broker2.properties settings
listeners=PLAINTEXT://E.F.G.H:9092
zookeeper.connect=A.B.C.D:2181,E.F.G.H:2181
I was trying to start a producer from node1 or from node2 using the following command:
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic OUR_TOPIC
and I was getting the above timeout exception stack trace, although Kafka was running on both machines.
Whether the producer was started from the leader node or from a follower, I was always getting the same.
When using the command below from any broker, I was able to get the producer to send messages:
./bin/kafka-console-producer.sh --broker-list A.B.C.D:9092 --topic OUR_TOPIC
or
./bin/kafka-console-producer.sh --broker-list E.F.G.H:9092 --topic OUR_TOPIC
or
./bin/kafka-console-producer.sh --broker-list A.B.C.D:9092,E.F.G.H:9092 --topic OUR_TOPIC
So the root cause is that the Kafka broker internally uses the listeners=PLAINTEXT://E.F.G.H:9092 property when a producer connects: the address the producer uses must match a broker's listeners address, whichever node the producer is started from. Changing this property to listeners=PLAINTEXT://localhost:9092 makes our very first command work.
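For reference, a common arrangement (my sketch, not from the original setup) is to bind each broker on all interfaces and advertise its routable address, so the same --broker-list works from any node:
# broker1.properties
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://A.B.C.D:9092
# broker2.properties
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://E.F.G.H:9092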
Had this issue:
Using Hortonworks HDP 2.5.
Kerberisation enabled
Fixed by providing the correct security protocol and ports.
Example commands:
./kafka-console-producer.sh --broker-list sand01.intranet:6667,san02.intranet:6667,san03.intranet:6667 --topic test --security-protocol PLAINTEXTSASL
./kafka-console-consumer.sh --zookeeper sand01:2181 --topic test --from-beginning --security-protocol PLAINTEXTSASL
In my case, I am running Kafka in Docker on OpenShift. I was getting the same problem. It got fixed when I passed the environment variable KAFKA_LISTENERS with a value of PLAINTEXT://:9092. This eventually creates an entry listeners=PLAINTEXT://:9092 in server.properties.
The listener doesn't have to have a hostname.
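For example (the image name is a placeholder, and how KAFKA_LISTENERS is mapped into server.properties depends on the image's entrypoint script):
docker run -d --name kafka -p 9092:9092 -e KAFKA_LISTENERS=PLAINTEXT://:9092 <your-kafka-image>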
Another scenario here. I had no clue what was happening until I found a Kafka log with the following message:
Caused by: java.lang.IllegalArgumentException: Invalid version for API key 3: 2
Apparently the producer was using a newer kafka-client (Java) than the Kafka server, and the API version it used was not supported (the client on 1.1, the server presumably on 0.10.0). On the client/producer side I got:
Error producing to topic Failed to update metadata after 60000 ms.
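A quick way to compare the two sides (paths are assumptions; adjust to your layout) is to look at the kafka-clients jar on the producer's classpath and the broker's libs directory:
# on the producer host
ls <your-app>/lib | grep kafka-clients
# on the broker host
ls $KAFKA_HOME/libs | grep kafka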
For Apache Kafka v2.11-1.1.0
Start zookeeper server:
$ bin/zookeeper-server-start.sh config/zookeeper.properties
Start kafka server:
$ bin/kafka-server-start.sh config/server.properties
Create a topic named "my_topic":
$ bin/kafka-topics.sh --create --topic my_topic --zookeeper localhost:2181 --replication-factor 1 --partitions 1
Start the producer:
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my_topic
Start the consumer:
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my_topic --from-beginning
I use Apache Kafka on a Hortonworks (HDP 2.x release) installation. The error message encountered means that the Kafka producer was not able to push the data to the segment log file. From a command-line console, that would mean one of two things:
You are using incorrect port for the brokers
Your listener config in server.properties are not working
If you encounter the error message while writing via scala api, additionally check connection to kafka cluster using telnet <cluster-host> <broker-port>
NOTE: If you are using the Scala API to create a topic, it takes some time for the brokers to learn about the newly created topic. So, immediately after topic creation, producers might fail with the error Failed to update metadata after 60000 ms.
I did the following checks in order to resolve this issue:
The first difference, once I checked via Ambari, is that Kafka brokers listen on port 6667 on HDP 2.x (Apache Kafka uses 9092).
listeners=PLAINTEXT://localhost:6667
Next, use the ip instead of localhost.
I executed netstat -na | grep 6667
tcp 0 0 192.30.1.5:6667 0.0.0.0:* LISTEN
tcp 1 0 192.30.1.5:52242 192.30.1.5:6667 CLOSE_WAIT
tcp 0 0 192.30.1.5:54454 192.30.1.5:6667 TIME_WAIT
So I modified the producer call to use the IP and not localhost:
./kafka-console-producer.sh --broker-list 192.30.1.5:6667 --topic rdl_test_2
To monitor whether new records are being written, watch the /kafka-logs folder.
cd /kafka-logs/<topic name>/
ls -lart
-rw-r--r--. 1 kafka hadoop 0 Feb 10 07:24 00000000000000000000.log
-rw-r--r--. 1 kafka hadoop 10485756 Feb 10 07:24 00000000000000000000.timeindex
-rw-r--r--. 1 kafka hadoop 10485760 Feb 10 07:24 00000000000000000000.index
Once the producer successfully writes, the segment log file 00000000000000000000.log will grow in size.
See the size below:
-rw-r--r--. 1 kafka hadoop 10485760 Feb 10 07:24 00000000000000000000.index
-rw-r--r--. 1 kafka hadoop **45** Feb 10 09:16 00000000000000000000.log
-rw-r--r--. 1 kafka hadoop 10485756 Feb 10 07:24 00000000000000000000.timeindex
At this point, you can run the consumer-console.sh:
./kafka-console-consumer.sh --bootstrap-server 192.30.1.5:6667 --topic rdl_test_2 --from-beginning
response is hello world
After this step, if you want to produce messages via the Scala API, then change the listeners value (from localhost to a public IP) and restart the Kafka brokers via Ambari:
listeners=PLAINTEXT://192.30.1.5:6667
A sample producer is as follows:
package com.scalakafka.sample

import java.util.Properties
import java.util.concurrent.TimeUnit

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

class SampleKafkaProducer {

  case class KafkaProducerConfigs(brokerList: String = "192.30.1.5:6667") {
    val batchsize: java.lang.Integer = 1
    val properties = new Properties()
    properties.put("bootstrap.servers", brokerList)
    properties.put("key.serializer", classOf[StringSerializer])
    properties.put("value.serializer", classOf[StringSerializer])
    properties.put("batch.size", batchsize) // batch of 1 record, so each send is flushed promptly
    // properties.put("linger.ms", 1)
    // properties.put("buffer.memory", 33554432)
  }

  val producer = new KafkaProducer[String, String](KafkaProducerConfigs().properties)

  def produce(topic: String, messages: Iterable[String]): Unit = {
    messages.foreach { m =>
      println(s"Sending $topic and message is $m")
      // send() is asynchronous; .get() blocks until the broker acknowledges the write
      val result = producer.send(new ProducerRecord(topic, m)).get()
      println(s"the write status is ${result}")
    }
    producer.flush()
    producer.close(10L, TimeUnit.MILLISECONDS)
  }
}
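A hypothetical usage of the class above (note that produce() closes the producer, so create a new instance per batch):
val sample = new SampleKafkaProducer()
sample.produce("rdl_test_2", Seq("hello world"))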
Hope this helps someone.
Adding properties like these after the topic helped with the same issue:
... --topic XXX --property "parse.key=true" --property "key.separator=:"
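With those properties set, each input line is split on the first occurrence of the separator into a key and a value, so typing, for example:
>user1:hello with key
sends a record with key user1 and value hello with key.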
Hope this helps someone.
When I try to use the Kafka producer and consumer (0.9.0) script to push/pull messages from a topic, I get the errors below.
Producer Error
[2016-01-13 02:49:40,078] ERROR Error when sending message to topic test with key: null, value: 11 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
Consumer Error
[2016-01-13 02:47:18,620] WARN [console-consumer-90116_f89a0b380f19-1452653212738-9f857257-leader-finder-thread], Failed to find leader for Set([test,0]) (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
kafka.common.KafkaException: fetching topic metadata for topics [Set(test)] from broker [ArrayBuffer(BrokerEndPoint(0,192.168.99.100,9092))] failed
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:73)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94)
    at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
Caused by: java.io.EOFException
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
    at kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129)
    at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120)
    at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:77)
    at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:74)
    at kafka.producer.SyncProducer.send(SyncProducer.scala:119)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
    ... 3 more
Why am I getting the error, and how do I resolve it?
Configuration
Running all components in Docker containers on Mac. ZooKeeper and Kafka running in separate Docker containers.
Docker Machine (boot2docker) IP Address: 192.168.99.100
ZooKeeper Port: 2181
Kafka Port: 9092
Kafka configuration file server.properties sets the following:
host.name=localhost
broker.id=0
port=9092
advertised.host.name=192.168.99.100
advertised.port=9092
Commands
I run the following commands from within the kafka server Docker container. I've already created a topic with one partition and a replication factor of 1.
Notice the leader designation is 0 which might be part of the problem.
root#f89a0b380f19:/opt/kafka/dist# ./bin/kafka-topics.sh --zookeeper 192.168.99.100:2181 --topic test --describe
Topic:test PartitionCount:1 ReplicationFactor:1 Configs:
Topic: test Partition: 0 Leader: 0 Replicas: 0 Isr: 0
I then do the following to send some messages:
root#f89a0b380f19:/opt/kafka/dist# ./bin/kafka-console-producer.sh --broker-list 192.168.99.100:9092 --topic test
one message
two message
three message
four message
[2016-01-13 02:49:40,078] ERROR Error when sending message to topic test with key: null, value: 11 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
[2016-01-13 02:50:40,080] ERROR Error when sending message to topic test with key: null, value: 11 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
[2016-01-13 02:51:40,081] ERROR Error when sending message to topic test with key: null, value: 13 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
[2016-01-13 02:52:40,083] ERROR Error when sending message to topic test with key: null, value: 12 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
This is the command I'm using to attempt to consume messages which yields the consumer error I posted above.
root#f89a0b380f19:/opt/kafka/dist# ./bin/kafka-console-consumer.sh --zookeeper 192.168.99.100:2181 --topic test --from-beginning
I've confirmed ports 2181 and 9092 are open and accessible from within the Kafka Docker container:
root#f89a0b380f19:/# nc -z 192.168.99.100 2181; echo $?;
0
root#f89a0b380f19:/# nc -z 192.168.99.100 9092; echo $?;
0
The solution wasn't what I expected at all. The error message did not line up with what was really happening.
The primary problem was mounting the Kafka log directory in Docker onto my local file system. My docker run command used a volume mount to map the Kafka log.dir folder in the container to a local directory on the host VM, which was in turn mounted from my Mac. That latter point was the problem.
For instance,
docker run --name kafka -v /Users/<me>/kafka/logs:/var/opt/kafka:rw -p 9092:9092 -d kafka
Since I'm on a Mac and use docker-machine (i.e. boot2docker), I have to mount through my /Users/ path, which boot2docker auto-mounts in the host VM. Because the underlying VM itself uses a bind mount for that path, Kafka's I/O engine wasn't able to communicate with it correctly. If the volume mount points to a directory directly on the host Linux VM (i.e. the boot2docker machine), it works.
I can't explain the exact details since I don't know the ins and outs of Kafka I/O, but when I remove the volume mounted from my Mac file system, it works.
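For example, pointing the volume at a path that lives on the boot2docker VM itself rather than under /Users/ (the exact path here is an assumption):
docker run --name kafka -v /mnt/sda1/kafka/logs:/var/opt/kafka:rw -p 9092:9092 -d kafka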
Kafka 0.8 works great. I am able to use the CLI as well as write my own producers/consumers!
Checking ZooKeeper... I see all the topics and partitions created successfully for 0.8.
Kafka 0.7 does not work!
Why Kafka 0.7? I am using the Kafka spout from Storm, which is made for Kafka 0.7.
First I just want to run a CLI-based producer/consumer for Kafka 0.7, which I am unable to do. I carry out the following steps:
I delete all the topics/partitions etc. in Zookeeper that were created from my Kafka 0.8
I change the dataDir in zoo.cfg to point to different location.
Now I start the Kafka 0.7 server. It starts successfully. However, I don't know why it re-registers the broker topics I deleted.
Now I start the Kafka Producer :
bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic topicime
& it starts successfully:
[2013-06-28 14:06:05,521] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2013-06-28 14:06:05,606] INFO Creating async producer for broker id = 0 at 0:0 (kafka.producer.ProducerPool)
Time to send some messages & oops I get this error:
[2013-06-28 14:07:19,650] INFO Disconnecting from 0:0 (kafka.producer.SyncProducer)
[2013-06-28 14:07:19,653] ERROR Connection attempt to 0:0 failed, next attempt in 1 ms (kafka.producer.SyncProducer)
java.net.ConnectException: Connection refused
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:364)
at sun.nio.ch.Net.connect(Net.java:356)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:623)
at kafka.producer.SyncProducer.connect(SyncProducer.scala:173)
at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:196)
at kafka.producer.SyncProducer.send(SyncProducer.scala:92)
at kafka.producer.SyncProducer.multiSend(SyncProducer.scala:135)
at kafka.producer.async.DefaultEventHandler.send(DefaultEventHandler.scala:58)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:44)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:116)
at scala.collection.immutable.Stream.foreach(Stream.scala:254)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:70)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:41)
Note that Zookeeper is already running.
Any help would really be appreciated.
EDIT:
I don't even see the topic being created in zookeeper. I am running the following command:
bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic topicime
After the command everything is fine & I get the following message:
[2013-06-28 14:30:17,614] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x13f805c6673004b, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2013-06-28 14:30:17,615] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2013-06-28 14:30:17,700] INFO Creating async producer for broker id = 0 at 0:0 (kafka.producer.ProducerPool)
However, now when I type a string to send, I get the above error (Connection refused!)
INFO Disconnecting from 0:0 (kafka.producer.SyncProducer)
The above line has the error hidden in it: 0:0 is not a valid host and port. The solution is to explicitly set the host IP to be registered in ZooKeeper, by setting the "hostname" property in server.properties.
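A minimal sketch of the fix in server.properties (the IP is a placeholder; "hostname" is the property named above, as used by Kafka 0.7):
# server.properties (Kafka 0.7)
hostname=192.168.1.10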
Consider checking out the storm-kafka fork, available at https://github.com/wurstmeister/storm-kafka-0.8-plus
I'm installing it right now for our servers =).