Apache ActiveMQ Artemis unsubscribes from current session when it disconnects a previous session

Sometimes when we have a network glitch, clients disconnect and reconnect. What I think we're seeing from time to time is Artemis cleaning up the old sessions, and when it disconnects an old session, it also unsubscribes the subscriptions of the new session.
These are the logs we captured using a logging plugin:
2022-05-24 08:32:50,448 INFO UserA Connect Broker, Success. Mqtt ClientId=clientA SessionId=18c42139-db3c-11ec-8290-00505683049d SourceIP=/192.168.2.8:40302
2022-05-24 08:32:50,448 INFO UserA Connect Broker, Success. Mqtt ClientId=clientA SessionId=18ef01cf-db3c-11ec-8290-00505683049d SourceIP=/192.168.2.8:40302
2022-05-24 08:32:51,198 INFO UserA Unsubscribe Broker, Success. Mqtt ClientId=clientA SessionId=04fb8e9e-d51f-11ec-8290-00505683049d SourceIP=/192.168.2.8:37032 Topic=Sample.Test.Topic.A.#
2022-05-24 08:32:51,198 INFO UserA Unsubscribe Broker, Success. Mqtt ClientId=clientA SessionId=04fb8e9e-d51f-11ec-8290-00505683049d SourceIP=/192.168.2.8:37032 Topic=Sample.Test.Topic.B.#
2022-05-24 08:32:51,729 INFO UserA Subscribe Broker, Success. Mqtt ClientId=clientA SessionId=18c42139-db3c-11ec-8290-00505683049d SourceIP=/192.168.2.8:40302 Topic=Sample.Test.Topic.A.#
2022-05-24 08:32:51,901 INFO UserA Subscribe Broker, Success. Mqtt ClientId=clientA SessionId=18c42139-db3c-11ec-8290-00505683049d SourceIP=/192.168.2.8:40302 Topic=Sample.Test.Topic.B.#
2022-05-24 08:33:03,495 INFO UserA DisconnectedFrom Broker, Success. Mqtt ClientId=clientA SessionId=04fb8e9e-d51f-11ec-8290-00505683049d SourceIP=/192.168.2.8:37032
2022-05-24 08:33:03,495 INFO UserA DisconnectedFrom Broker, Success. Mqtt ClientId=clientA SessionId=05135c7e-d51f-11ec-8290-00505683049d SourceIP=/192.168.2.8:37032
2022-05-24 08:33:03,495 INFO UserA Unsubscribe Broker, Success. Mqtt ClientId=clientA SessionId=18c42139-db3c-11ec-8290-00505683049d SourceIP=/192.168.2.8:40302 Topic=Sample.Test.Topic.A.#
2022-05-24 08:33:03,495 INFO UserA Unsubscribe Broker, Success. Mqtt ClientId=clientA SessionId=18c42139-db3c-11ec-8290-00505683049d SourceIP=/192.168.2.8:40302 Topic=Sample.Test.Topic.B.#
We're currently on Artemis 2.16.
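For reference, the reconnect scenario looks roughly like the Eclipse Paho sketch below (this is illustrative, not our production client; the broker URI is a placeholder, and the '/'-separated MQTT filters map to the '.'-separated addresses shown in the broker log). The client comes back with the same client ID, clientA, and a persistent (non-clean) session, then re-subscribes while the broker may still be tearing down the previous connection:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;

public class ReconnectingSubscriber {
    public static void main(String[] args) throws MqttException {
        // Same client ID as the session the broker may still be cleaning up.
        MqttClient client = new MqttClient("tcp://artemis-host:1883", "clientA"); // placeholder URI
        MqttConnectOptions opts = new MqttConnectOptions();
        opts.setCleanSession(false); // keep subscription state across reconnects

        client.connect(opts);
        client.subscribe("Sample/Test/Topic/A/#", 1);
        client.subscribe("Sample/Test/Topic/B/#", 1);
        // The question is whether the broker's cleanup of the old session can also
        // remove the two subscriptions created just above on the new session.
    }
}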

Related

Kafka stretched cluster stops when the second DC goes down

My Kafka version:
/opt/kafka/bin/kafka-topics.sh --version
2.4.1 (Commit:c57222ae8cd7866b)
My Kafka cluster configuration looks like:
a 6-node Kafka cluster
6 x ZooKeeper, i.e. one installed on each node/broker
2 DCs, with 3 nodes in each DC
the rack-awareness feature is enabled on each node:
node1 DC1:
broker.id=1
broker.rack=dc1
node2 DC1:
broker.id=2
broker.rack=dc1
node3 DC1:
broker.id=3
broker.rack=dc1
node1 DC2:
broker.id=4
broker.rack=dc2
node2 DC2:
broker.id=5
broker.rack=dc2
node3 DC2:
broker.id=6
broker.rack=dc2
When the whole of DC2 went down, the Kafka cluster stopped and node1 in DC1 showed errors like this:
[2022-03-16 07:38:45,422] INFO Unable to read additional data from server sessionid 0x40000004f930002, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:45,549] INFO Unable to read additional data from server sessionid 0x200ab15af610000, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:45,787] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-03-16 07:38:45,787] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-03-16 07:38:45,787] INFO Opening socket connection to server dc2kafkabr2/A.B.C.72:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:45,788] INFO Socket error occurred: dc2kafkabr2/A.B.C.72:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,503] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-03-16 07:38:46,503] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-03-16 07:38:46,503] INFO Opening socket connection to server dc1kafkabr1/A.B.C.68:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,504] INFO Socket connection established, initiating session, client: /A.B.C.68:35796, server: dc1kafkabr1/A.B.C.68:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,505] INFO Unable to read additional data from server sessionid 0x40000004f930002, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,616] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-03-16 07:38:46,617] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-03-16 07:38:46,617] INFO Opening socket connection to server dc1kafkabr2/A.B.C.69:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,617] INFO Socket connection established, initiating session, client: /A.B.C.68:38936, server: dc1kafkabr2/A.B.C.69:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,619] INFO Unable to read additional data from server sessionid 0x200ab15af610000, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,896] INFO Client successfully logged in. (org.apache.zookeeper.Login)
However, when the Kafka nodes in DC2 are stopped normally/gracefully with the systemctl command, the Kafka cluster keeps working properly on the nodes in DC1.
The question is: why does the Kafka cluster stop working when DC2 is turned off? How can this be prevented? Any ideas?
Best Regards,
Dan
Dears,
After further tests I know that the problem is on the ZooKeeper side, because when I turn off two brokers in DC2 the Kafka cluster still works. After turning off kafka.service on the last broker in DC2, the Kafka cluster still works. But when I turn off zookeeper.service on the last broker in DC2, the cluster becomes unresponsive.
This is my zookeeper's configuration:
cat zookeeper.properties
tickTime=2000
dataDir=/opt/zookeeper/data
#dataLogDir=/var/log/zookeeper
clientPort=2181
initLimit=5
syncLimit=3
############## HARDENING #################
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
###########################################
server.1=A.B.C.68:2888:3888
server.2=A.B.C.69:2888:3888
server.3=A.B.C.70:2888:3888
server.4=A.B.C.71:2888:3888
server.5=A.B.C.72:2888:3888
server.6=A.B.C.73:2888:3888
Any idea what is wrong in this configuration?
Best Regards,
Dan
ZooKeeper quorum is not ensured, and that is the reason. With a 6-node ensemble split 3/3 across two DCs, losing an entire DC leaves only 3 of the 6 voters alive; ZooKeeper needs a strict majority (at least 4 of 6) to keep quorum, so the ensemble stalls and the Kafka cluster stops with it.
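To make the majority arithmetic concrete, here is a minimal sketch in plain Java (not tied to any ZooKeeper API) of the quorum check for the flat six-voter ensemble above. A common remedy is to keep an odd number of voters, e.g. a tie-breaking ZooKeeper node in a third location, so losing either DC still leaves a majority:

public class QuorumCheck {
    public static void main(String[] args) {
        int ensembleSize = 6; // server.1 .. server.6 in zookeeper.properties
        int surviving = 3;    // DC2 (server.4, server.5, server.6) is down
        // A flat ensemble stays available only with a strict majority of voters.
        boolean hasQuorum = surviving > ensembleSize / 2; // 3 > 3 -> false
        System.out.println("quorum kept = " + hasQuorum);
    }
}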

Filebeat to Kafka: Failed to connect to broker

I'm new to the Apache environment and currently I'm trying to send log data from a Filebeat producer to a Kafka broker.
environment :
kafka 2.11 (installed via ambari)
filebeat 7.4.2 (windows)
I tried to send logs from Filebeat into the Ambari-managed Kafka. I've started the Kafka servers and created a topic named "test", and it was listed by --list. I'm pretty confused about the Kafka broker's port. In some tutorials I saw they were using 9092 instead of 2181. So now, which port should I use to send logs from Filebeat?
Here is my filebeat.conf:
filebeat.inputs:
- type: log
  paths:
    - C:/Users/A/Desktop/DATA/mailbox3.csv
output.kafka:
  hosts: ["x.x.x.x:9092"]
  topic: "test"
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
result
2020-06-10T09:00:32.214+0700 INFO kafka/log.go:53 Failed to connect to broker x.x.x.x:9092: dial tcp x.x.x.x:9092: connectex: No connection could be made because the target machine actively refused it.
2020-06-10T09:00:32.214+0700 INFO kafka/log.go:53 client/metadata got error from broker -1 while fetching metadata: dial tcp x.x.x.x:9092: connectex: No connection could be made because the target machine actively refused it.
2020-06-10T09:00:32.215+0700 INFO kafka/log.go:53 kafka message: client/metadata no available broker to send metadata request to
2020-06-10T09:00:32.215+0700 INFO kafka/log.go:53 client/brokers resurrecting 1 dead seed brokers
2020-06-10T09:00:32.215+0700 INFO kafka/log.go:53 client/metadata retrying after 250ms... (3 attempts remaining)
2020-06-10T09:00:32.466+0700 INFO kafka/log.go:53 client/metadata fetching metadata for [test] from broker x.x.x.x:9092
2020-06-10T09:00:34.475+0700 INFO kafka/log.go:53 Failed to connect to broker x.x.x.x:9092: dial tcp x.x.x.x:9092: connectex: No connection could be made because the target machine actively refused it.
2020-06-10T09:00:34.475+0700 INFO kafka/log.go:53 client/metadata got error from broker -1 while fetching metadata: dial tcp x.x.x.x:9092: connectex: No connection could be made because the target machine actively refused it.
2020-06-10T09:00:34.477+0700 INFO kafka/log.go:53 kafka message: client/metadata no available broker to send metadata request to
2020-06-10T09:00:34.477+0700 INFO kafka/log.go:53 client/brokers resurrecting 1 dead seed brokers
2020-06-10T09:00:34.478+0700 INFO kafka/log.go:53 client/metadata retrying after 250ms... (2 attempts remaining)
2020-06-10T09:00:34.729+0700 INFO kafka/log.go:53 client/metadata fetching metadata for [test] from broker x.x.x.x:9092
2020-06-10T09:00:36.737+0700 INFO kafka/log.go:53 Failed to connect to broker x.x.x.x:9092: dial tcp x.x.x.x:9092: connectex: No connection could be made because the target machine actively refused it.
2020-06-10T09:00:36.737+0700 INFO kafka/log.go:53 client/metadata got error from broker -1 while fetching metadata: dial tcp x.x.x.x:9092: connectex: No connection could be made because the target machine actively refused it.
2020-06-10T09:00:36.738+0700 INFO kafka/log.go:53 kafka message: client/metadata no available broker to send metadata request to
2020-06-10T09:00:36.738+0700 INFO kafka/log.go:53 client/brokers resurrecting 1 dead seed brokers
2020-06-10T09:00:36.738+0700 INFO kafka/log.go:53 client/metadata retrying after 250ms... (1 attempts remaining)
2020-06-10T09:00:36.989+0700 INFO kafka/log.go:53 client/metadata fetching metadata for [test] from broker x.x.x.x:9092
2020-06-10T09:00:39.002+0700 INFO kafka/log.go:53 Failed to connect to broker x.x.x.x:9092: dial tcp x.x.x.x:9092: connectex: No connection could be made because the target machine actively refused it.
2020-06-10T09:00:39.002+0700 INFO kafka/log.go:53 client/metadata got error from broker -1 while fetching metadata: dial tcp x.x.x.x:9092: connectex: No connection could be made because the target machine actively refused it.
2020-06-10T09:00:39.004+0700 INFO kafka/log.go:53 kafka message: client/metadata no available broker to send metadata request to
2020-06-10T09:00:39.004+0700 INFO kafka/log.go:53 client/brokers resurrecting 1 dead seed brokers
2020-06-10T09:00:39.004+0700 INFO kafka/log.go:53 client/metadata fetching metadata for [test] from broker x.x.x.x:9092
It makes me wonder whether I really have port 9092, so I checked server.properties. What concerns me most is:
port=6667
listeners=PLAINTEXT://x.x.x.x:6667
So then I tried filebeat.conf again, changing port 9092 to 6667, and here is the result:
2020-06-10T09:18:01.448+0700 INFO kafka/log.go:53 client/metadata fetching metadata for [test] from broker x.x.x.x:6667
2020-06-10T09:18:01.450+0700 INFO kafka/log.go:53 producer/broker/1001 starting up
2020-06-10T09:18:01.451+0700 INFO kafka/log.go:53 producer/broker/1001 state change to [open] on test/0
2020-06-10T09:18:01.451+0700 INFO kafka/log.go:53 producer/leader/test/0 selected broker 1001
2020-06-10T09:18:01.451+0700 INFO kafka/log.go:53 Failed to connect to broker x.x.x.x:6667: dial tcp: lookup x.x.x.x: no such host
2020-06-10T09:18:01.452+0700 INFO kafka/log.go:53 producer/broker/1001 state change to [closing] because dial tcp: lookup x.x.x.x: no such host
2020-06-10T09:18:01.453+0700 DEBUG [kafka] kafka/client.go:264 finished kafka batch
2020-06-10T09:18:01.453+0700 DEBUG [kafka] kafka/client.go:278 Kafka publish failed with: dial tcp: lookup x.x.x.x: no such host
2020-06-10T09:18:01.454+0700 INFO kafka/log.go:53 producer/leader/test/0 state change to [flushing-3]
2020-06-10T09:18:01.456+0700 INFO kafka/log.go:53 producer/leader/test/0 state change to [normal]
2020-06-10T09:18:01.456+0700 INFO kafka/log.go:53 producer/leader/test/0 state change to [retrying-3]
2020-06-10T09:18:01.456+0700 INFO kafka/log.go:53 producer/leader/test/0 abandoning broker 1001
2020-06-10T09:18:01.456+0700 INFO kafka/log.go:53 producer/broker/1001 shut down
Questions:
What happened? Which port should be used? What is the use of each port?
Any response will be appreciated so much. Thank you.
UPDATE
According to this source, the right port is 6667, since Kafka was installed via Ambari.
There is no restriction on which port can be used; it only depends on availability.
In the first case, as you said, the broker could have been started on 6667, and hence no process was listening on 9092.
2020-06-10T09:18:01.451+0700 INFO kafka/log.go:53 Failed to connect to broker x.x.x.x:6667: dial tcp: lookup x.x.x.x: no such host
Next, for the advertised.listeners property, you should ensure that the IP you specify is an IP actually assigned to that machine. You cannot advertise 1.1.1.1:9092 (just to mention some example).
Execute ifconfig (Linux) or ipconfig (Windows) and find the IP of your machine on the network interface that is accessible from your application machine.
On Linux, it will mostly be eth0.
This IP must be accessible from the machine where you are running your application.
So the machine your application is running on should be able to resolve that IP. You may also want to check the network connection between your Kafka broker and the machine you are running your application on.
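If it helps, here is a small connectivity check, sketched in Java with the standard Kafka clients library (Filebeat itself is configured only through its YAML file, so this is just a side check). It connects to the bootstrap port and prints the host:port each broker advertises; every entry printed must be resolvable and reachable from the Filebeat machine:

import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class BrokerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Use the port the broker actually listens on (6667 here, per server.properties).
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "x.x.x.x:6667");
        try (AdminClient admin = AdminClient.create(props)) {
            // The nodes returned here are taken from advertised.listeners.
            admin.describeCluster().nodes().get()
                 .forEach(node -> System.out.println(node.host() + ":" + node.port()));
        }
    }
}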

Kafka consumer not connecting to remote host

In the past few days I've been learning about Kafka and doing small tests.
I could already consume messages successfully on localhost, even from another PC within the same network. But now that I'm trying to connect to a remote server (it's actually the same PC, same broker and topic; I just have two internet service providers, so I switched in order to try with the public IP), I don't receive any messages. Also, I get this in the console:
[main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=consumer-1, groupId=0a396775-94e2-46a0-a6bf-08f0d848ffc9] Connection with /xxx.xx.xxx.xxx disconnected
java.net.ConnectException: Connection timed out: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:216)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:531)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:249)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:326)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1251)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1220)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1159)
at com.okta.javakafka.kafkajava.SimpleConsumer.main(SimpleConsumer.java:39)
I tried a couple of ping tests and the connection was successful. So maybe I'm missing something that's different in the case of a remote Kafka connection?
If someone could help, I'd appreciate it a lot.
************** EDIT ****************************
listeners on my server.properties:
listeners=PLAINTEXT://192.168.1.101:9092 (local ip)
advertised.listeners=PLAINTEXT://200.x.xxx.xxx:9092 (public ip)
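Since the stack trace shows a plain TCP connection timeout, it may be worth verifying raw reachability of the public address before touching the Kafka configuration. A minimal sketch (the masked IP is a placeholder, and port 9092 has to be forwarded/open on the router and firewall for this to succeed):

import java.net.InetSocketAddress;
import java.net.Socket;

public class ReachabilityCheck {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            // Replace with the public IP from advertised.listeners.
            socket.connect(new InetSocketAddress("200.x.xxx.xxx", 9092), 5000);
            System.out.println("TCP connect to 9092 OK");
        }
    }
}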

Zookeeper refuses Kafka connection from an old client

I have a cluster configuration using Kubernetes on GCE, with one pod for ZooKeeper and another for Kafka; it was working normally until ZooKeeper crashed and restarted, and it started refusing connections from the Kafka pod:
Refusing session request for client /10.4.4.58:52260 as it has seen zxid 0x1962630
The complete refusal log is here:
2017-08-21 20:05:32,013 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#192] - Accepted socket connection from /10.4.4.58:52260
2017-08-21 20:05:32,013 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#882] - Connection request from old client /10.4.4.58:52260; will be dropped if server is in r-o mode
2017-08-21 20:05:32,013 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#901] - Refusing session request for client /10.4.4.58:52260 as it has seen zxid 0x1962630 our last zxid is 0xab client must try another server
2017-08-21 20:05:32,013 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1008] - Closed socket connection for client /10.4.4.58:52260 (no session established for client)
This happens because Kafka maintains a ZooKeeper session which remembers the last zxid it has seen. When the ZooKeeper service goes down and comes back up, its zxid starts again from a smaller value, so the ZooKeeper server thinks the Kafka client has seen a bigger zxid than its own and refuses the session.
Try restarting Kafka.
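To see both sides of the mismatch, you can also ask the ZooKeeper server for its current zxid with the four-letter stat command over a raw socket (a minimal sketch; host and port are whatever your ZooKeeper pod exposes) and compare it with the zxid in the refusal message:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class ZkStat {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("zookeeper-host", 2181)) { // placeholder host
            OutputStream out = socket.getOutputStream();
            out.write("stat".getBytes());
            out.flush();
            // The response includes a "Zxid:" line with the server's current zxid.
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.print(new String(buf, 0, n));
            }
        }
    }
}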
For the record, I had this problem and all my Kafka brokers were off.
But my kafka-manager was still up and listening on the ZooKeepers. Turning it off resolved the issue.
Related to the answer from #GuangshengZuo, the steps are:
Stop any residual ZooKeeper instances: zookeeper-server-stop.bat
Start a fresh ZooKeeper: zookeeper-server-start.bat .\config\zookeeper.properties
This will do it.

Apache Kafka - Consumer not receiving messages from producer

I would appreciate your help on this.
I am building an Apache Kafka consumer to subscribe to another, already running Kafka. My problem is that when my producer pushes messages to the server, my consumer does not receive them, and I get the info below printed in my logs:
13/08/30 18:00:58 INFO producer.SyncProducer: Connected to xx.xx.xx.xx:6667:false for producing
13/08/30 18:00:58 INFO producer.SyncProducer: Disconnecting from xx.xx.xx.xx:6667:false
13/08/30 18:00:58 INFO consumer.ConsumerFetcherManager: [ConsumerFetcherManager-1377910855898] Stopping leader finder thread
13/08/30 18:00:58 INFO consumer.ConsumerFetcherManager: [ConsumerFetcherManager-1377910855898] Stopping all fetchers
13/08/30 18:00:58 INFO consumer.ConsumerFetcherManager: [ConsumerFetcherManager-1377910855898] All connections stopped
I am not sure if I am missing any important configuration here. However, I can see some messages coming from my server using Wireshark, but they are not being consumed by my consumer.
My code is the exact replica of the sample consumer example:
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
UPDATE:
[2013-09-03 00:57:30,146] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2013-09-03 00:57:30,146] INFO Opening socket connection to server /xx.xx.xx.xx:2181 (org.apache.zookeeper.ClientCnxn)
[2013-09-03 00:57:30,235] INFO Connected to xx.xx.xx:6667 for producing (kafka.producer.SyncProducer)
[2013-09-03 00:57:30,299] INFO Socket connection established to 10.224.62.212/10.224.62.212:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2013-09-03 00:57:30,399] INFO Disconnecting from xx.xx.xx.net:6667 (kafka.producer.SyncProducer)
[2013-09-03 00:57:30,400] INFO [ConsumerFetcherManager-1378195030845] Stopping leader finder thread (kafka.consumer.ConsumerFetcherManager)
[2013-09-03 00:57:30,400] INFO [ConsumerFetcherManager-1378195030845] Stopping all fetchers (kafka.consumer.ConsumerFetcherManager)
[2013-09-03 00:57:30,400] INFO [ConsumerFetcherManager-1378195030845] All connections stopped (kafka.consumer.ConsumerFetcherManager)
[2013-09-03 00:57:30,400] INFO [console-consumer-49997_xx.xx.xx-1378195030443-cce6fc51], Cleared all relevant queues for this fetcher (kafka.consumer.ZookeeperConsumerConnector)
[2013-09-03 00:57:30,400] INFO [console-consumer-49997_xx.xx.xx.-1378195030443-cce6fc51], Cleared the data chunks in all the consumer message iterators (kafka.consumer.ZookeeperConsumerConnector)
[2013-09-03 00:57:30,400] INFO [console-consumer-49997_xx.xx.xx.xx-1378195030443-cce6fc51], Committing all offsets after clearing the fetcher queues (kafka.consumer.ZookeeperConsumerConnector)
[2013-09-03 00:57:30,401] ERROR [console-consumer-49997_xx.xx.xx.xx-1378195030443-cce6fc51], zk client is null. Cannot commit offsets (kafka.consumer.ZookeeperConsumerConnector)
[2013-09-03 00:57:30,401] INFO [console-consumer-49997_xx.xx.xx.xx-1378195030443-cce6fc51], Releasing partition ownership (kafka.consumer.ZookeeperConsumerConnector)
[2013-09-03 00:57:30,401] INFO [console-consumer-49997_xx.xx.xx.xx-1378195030443-cce6fc51], exception during rebalance (kafka.consumer.ZookeeperConsumerConnector)
java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:185)
at scala.None$.get(Option.scala:183)
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$rebalance$2.apply(ZookeeperConsumerConnector.scala:434)
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$rebalance$2.apply(ZookeeperConsumerConnector.scala:429)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:80)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:80)
at scala.collection.Iterator$class.foreach(Iterator.scala:631)
at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:161)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:194)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:80)
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$rebalance(ZookeeperConsumerConnector.scala:429)
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply$mcVI$sp(ZookeeperConsumerConnector.scala:374)
at scala.collection.immutable.Range$ByOne$class.foreach$mVc$sp(Range.scala:282)
at scala.collection.immutable.Range$$anon$2.foreach$mVc$sp(Range.scala:265)
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:369)
at kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$reinitializeConsumer(ZookeeperConsumerConnector.scala:681)
at kafka.consumer.ZookeeperConsumerConnector$WildcardStreamsHandler.<init>(ZookeeperConsumerConnector.scala:715)
at kafka.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:140)
at kafka.consumer.ConsoleConsumer$.main(ConsoleConsumer.scala:196)
at kafka.consumer.ConsoleConsumer.main(ConsoleConsumer.scala)
Can you please provide your producer code sample?
Do you have the latest 0.8 version checked out? It appears there has been a known issue with a consumer fetcher deadlock, which has been patched and fixed in the current version.
You can try using the console consumer script to consume messages, to make sure your producer is working fine.
If possible, post some more logs and a code snippet; that should help with further debugging.
(It seems I need more reputation to make a comment, so I had to answer instead.)
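If it helps while you gather those, here is a minimal sketch of the 0.8-era producer API (broker address and topic are placeholders) that can be used to sanity-check that messages actually reach the broker the consumer is pointed at:

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class TestProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "xx.xx.xx.xx:6667"); // broker host:port from your logs
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1");

        Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("test-topic", "hello from the test producer"));
        producer.close();
    }
}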