Kafka Manager not able to connect to brokers

I have 3 brokers (broker1, broker2 and broker3). My Kafka Manager is able to connect to broker2 but not to the remaining two brokers (broker1 and broker3). I restarted kafka-manager, as well as broker1 and broker3, but it still does not connect to these 2 brokers.
Note: Kafka is running on all 3 brokers.
This is the log I got from kafka-manager:
[error] k.m.a.c.BrokerViewCacheActor - Failed to get broker metrics for BrokerIdentity(3,broker3,9092,9999,false)
java.rmi.ConnectException: Connection refused to host: IP; nested exception is:
java.net.ConnectException: Connection refused (Connection refused)
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:619) ~[na:1.8.0_121]
at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:216) ~[na:1.8.0_121]
at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:202) ~[na:1.8.0_121]
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:130) ~[na:1.8.0_121]
at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:227) ~[na:1.8.0_121]
at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:179) ~[na:1.8.0_121]
at com.sun.proxy.$Proxy6.newClient(Unknown Source) ~[na:na]
at javax.management.remote.rmi.RMIConnector.getConnection(RMIConnector.java:2430) ~[na:1.8.0_121]
at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:308) ~[na:1.8.0_121]
at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:270) ~[na:1.8.0_121]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_121]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[na:1.8.0_121]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[na:1.8.0_121]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[na:1.8.0_121]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_121]
at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_121]
at java.net.Socket.connect(Socket.java:538) ~[na:1.8.0_121]
at java.net.Socket.<init>(Socket.java:434) ~[na:1.8.0_121]
at java.net.Socket.<init>(Socket.java:211) ~[na:1.8.0_121]
at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:40) ~[na:1.8.0_121]
[info] k.m.a.KafkaManagerActor - Updating internal state...
I am using kafka-manager-1.3.2.1.
All 3 brokers (broker1, broker2 and broker3) are running.
This is the log I am getting from broker1 (kafka-manager is not able to connect to this broker):
[2018-05-08 07:10:11,217] INFO [GroupCoordinator 1]: Loading group metadata for topic-1 with generation 27 (kafka.coordinator.GroupCoordinator)
[2018-05-08 07:10:11,217] INFO [GroupCoordinator 1]: Loading group metadata for topic-2 with generation 3 (kafka.coordinator.GroupCoordinator)
[2018-05-08 07:10:11,217] INFO [Group Metadata Manager on Broker 1]: Finished loading offsets from [__consumer_offsets,2] in 28 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2018-05-08 07:10:11,218] INFO [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from [__consumer_offsets,16] (kafka.coordinator.GroupMetadataManager)
[2018-05-08 07:10:11,224] INFO [Group Metadata Manager on Broker 1]: Finished loading offsets from [__consumer_offsets,16] in 6 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2018-05-08 07:17:53,774] INFO [Group Metadata Manager on Broker 1]: Removed 0 expired offsets in 4 milliseconds. (kafka.coordinator.GroupMetadataManager)

The error "Connection refused" could indicate there is no process listening on that port, or there is a firewall on that machine blocking that port.
From the machine where kafka-manager runs, can you do nc broker3 9092 and nc broker3 9999? If these two commands "hang" it means it's not just a Kafka Manager problem.
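For reference, a quick scripted version of that check, assuming a netcat variant that supports -z (zero-I/O port test) and the two ports shown in the BrokerIdentity above:

# run from the machine where kafka-manager is installed
nc -zv broker3 9092   # Kafka listener port
nc -zv broker3 9999   # JMX port from BrokerIdentity(3,broker3,9092,9999,false)
# "Connection refused" means nothing is listening on that port;
# a hang usually means a firewall is silently dropping the packets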
I realize this is an old question; I just came across it by chance while looking for something else.

It seems you need to publish the JMX port and host so that Kafka Manager can discover the brokers. These are controlled by the following parameters in the broker start script:
-Djava.rmi.server.hostname=${ip}
JMX_PORT=9997
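As a minimal sketch, assuming the stock kafka-server-start.sh/kafka-run-class.sh scripts (which read JMX_PORT and KAFKA_JMX_OPTS from the environment), each broker could be started along these lines; the hostname and port values here are illustrative and must match what kafka-manager can actually reach:

# export JMX settings before starting the broker
export JMX_PORT=9999   # the port kafka-manager polls, per the BrokerIdentity in the question
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=broker3"
bin/kafka-server-start.sh -daemon config/server.properties

The java.rmi.server.hostname setting matters because RMI (which JMX uses) hands the client this address for the follow-up connection; if it resolves to an unreachable interface, the client sees exactly the "Connection refused" error above.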
For more details see: https://github.com/yahoo/kafka-manager/issues/214

Related

TaskManager could not connect to the newly elected JobManager

I have a Flink cluster running on minikube: 1 JobManager and 3 TaskManagers.
I am using the Kubernetes HA services to handle JobManager leader election.
When I kill the JobManager to simulate a crash, the TaskManagers cannot connect to the new JobManager; they always try to connect to the previous IP address of the JobManager that was terminated.
Here is the exception:
2021-05-05 12:14:28.126 [flink-akka.actor.default-dispatcher-3] WARN akka.remote.ReliableDeliverySupervisor flink-akka.remote.default-remote-dispatcher-7 - Association with remote system [akka.tcp://flink@172.17.0.7:6123] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@172.17.0.7:6123]] Caused by: [java.net.NoRouteToHostException: No route to host]
2021-05-05 12:14:28.131 [flink-akka.actor.default-dispatcher-3] ERROR o.a.f.runtime.rest.handler.cluster.ClusterOverviewHandler - Unhandled exception.
org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Number of retries has been exhausted.
at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$9(FutureUtils.java:386)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
at scala.concurrent.java8.FuturesConvertersImpl$CF$$anon$1.accept(FutureConvertersImpl.scala:61)
at scala.concurrent.java8.FuturesConvertersImpl$CF$$anon$1.accept(FutureConvertersImpl.scala:53)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.rpc.exceptions.RpcConnectionException: Could not connect to rpc endpoint under address akka.tcp://flink@172.17.0.7:6123/user/rpc/resourcemanager_0.
at org.apache.flink.runtime.rpc.akka.AkkaRpcService.lambda$resolveActorAddress$10(AkkaRpcService.java:570)
at scala.concurrent.java8.FuturesConvertersImpl$CF$$anon$1.accept(FutureConvertersImpl.scala:59)
... 5 common frames omitted
Caused by: org.apache.flink.runtime.rpc.exceptions.RpcConnectionException: Could not connect to rpc endpoint under address akka.tcp://flink@172.17.0.7:6123/user/rpc/resourcemanager_0.
... 7 common frames omitted
Caused by: akka.actor.ActorNotFound: Actor not found for: ActorSelection[Anchor(akka.tcp://flink@172.17.0.7:6123/), Path(/user/rpc/resourcemanager_0)]
at akka.actor.ActorSelection.$anonfun$resolveOne$1(ActorSelection.scala:71)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:73)
at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:81)
at akka.dispatch.BatchingExecutor.execute(BatchingExecutor.scala:120)
at akka.dispatch.BatchingExecutor.execute$(BatchingExecutor.scala:114)
at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:80)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284)
at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:573)
at akka.actor.EmptyLocalActorRef.specialHandle(ActorRef.scala:556)
at akka.actor.DeadLetterActorRef.specialHandle(ActorRef.scala:593)
at akka.actor.DeadLetterActorRef.$bang(ActorRef.scala:582)
at akka.remote.RemoteActorRefProvider$RemoteDeadLetterActorRef.$bang(RemoteActorRefProvider.scala:104)
at akka.remote.EndpointWriter.postStop(Endpoint.scala:606)
at akka.actor.Actor.aroundPostStop(Actor.scala:536)
at akka.actor.Actor.aroundPostStop$(Actor.scala:536)
at akka.remote.EndpointActor.aroundPostStop(Endpoint.scala:458)
at akka.actor.dungeon.FaultHandling.finishTerminate(FaultHandling.scala:210)
at akka.actor.dungeon.FaultHandling.terminate(FaultHandling.scala:172)
at akka.actor.dungeon.FaultHandling.terminate$(FaultHandling.scala:142)
at akka.actor.ActorCell.terminate(ActorCell.scala:429)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:533)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:549)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:283)
at akka.dispatch.Mailbox.run(Mailbox.scala:224)
at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Apache Storm Starter 2.2.0 in Eclipse on Windows - NimbusLeaderNotFoundException while trying to get leader nimbus info from localhost

I have downloaded the zip file for Apache Storm 2.2.0.
I imported the storm-starter Maven project into Eclipse on Windows 10 and did a Maven -> Update Project.
My project did not have any errors, and I did not make any updates to pom.xml.
When I execute WordCountTopology.java, I get a NimbusLeaderNotFoundException.
I also tried after stopping the firewall.
00:54:00.584 [main] INFO o.a.s.StormSubmitter - Generated ZooKeeper secret payload for MD5-digest: -xxxxxxxxxxxxxx
00:54:00.668 [main] WARN o.a.s.v.ConfigValidation - task.heartbeat.frequency.secs is a deprecated config please see class org.apache.storm.Config.TASK_HEARTBEAT_FREQUENCY_SECS for more information.
00:54:16.923 [main] WARN o.a.s.u.NimbusClient - Ignoring exception while trying to get leader nimbus info from localhost. will retry with a different seed host.
java.lang.RuntimeException: java.lang.RuntimeException: org.apache.storm.thrift.transport.TTransportException: java.net.ConnectException: Connection refused: connect
at org.apache.storm.security.auth.ThriftClient.reconnect(ThriftClient.java:108) ~[storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.security.auth.ThriftClient.<init>(ThriftClient.java:69) ~[storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.utils.NimbusClient.<init>(NimbusClient.java:80) ~[storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.utils.NimbusClient.getConfiguredClientAs(NimbusClient.java:221) [storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.utils.NimbusClient.getConfiguredClientAs(NimbusClient.java:179) [storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.utils.NimbusClient.getConfiguredClient(NimbusClient.java:138) [storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.blobstore.NimbusBlobStore.prepare(NimbusBlobStore.java:47) [storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.utils.Utils.validateTopologyBlobStoreMap(Utils.java:1178) [storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.StormSubmitter.validateConfs(StormSubmitter.java:530) [storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:236) [storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:210) [storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:173) [storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.topology.ConfigurableTopology.submit(ConfigurableTopology.java:119) [storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.starter.WordCountTopology.run(WordCountTopology.java:58) [classes/:?]
at org.apache.storm.topology.ConfigurableTopology.start(ConfigurableTopology.java:68) [storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.starter.WordCountTopology.main(WordCountTopology.java:36) [classes/:?]
Caused by: java.lang.RuntimeException: org.apache.storm.thrift.transport.TTransportException: java.net.ConnectException: Connection refused: connect
at org.apache.storm.security.auth.TBackoffConnect.retryNext(TBackoffConnect.java:59) ~[storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.security.auth.TBackoffConnect.doConnectWithRetry(TBackoffConnect.java:51) ~[storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.security.auth.ThriftClient.reconnect(ThriftClient.java:98) ~[storm-client-2.2.0.jar:2.2.0]
... 15 more
Caused by: org.apache.storm.thrift.transport.TTransportException: java.net.ConnectException: Connection refused: connect
at org.apache.storm.thrift.transport.TSocket.open(TSocket.java:226) ~[storm-shaded-deps-2.2.0.jar:2.2.0]
at org.apache.storm.thrift.transport.TFramedTransport.open(TFramedTransport.java:91) ~[storm-shaded-deps-2.2.0.jar:2.2.0]
at org.apache.storm.security.auth.SimpleTransportPlugin.connect(SimpleTransportPlugin.java:101) ~[storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.security.auth.TBackoffConnect.doConnectWithRetry(TBackoffConnect.java:48) ~[storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.security.auth.ThriftClient.reconnect(ThriftClient.java:98) ~[storm-client-2.2.0.jar:2.2.0]
... 15 more
Caused by: java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method) ~[?:1.8.0_241]
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85) ~[?:1.8.0_241]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[?:1.8.0_241]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[?:1.8.0_241]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[?:1.8.0_241]
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172) ~[?:1.8.0_241]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.8.0_241]
at java.net.Socket.connect(Socket.java:606) ~[?:1.8.0_241]
at org.apache.storm.thrift.transport.TSocket.open(TSocket.java:221) ~[storm-shaded-deps-2.2.0.jar:2.2.0]
at org.apache.storm.thrift.transport.TFramedTransport.open(TFramedTransport.java:91) ~[storm-shaded-deps-2.2.0.jar:2.2.0]
at org.apache.storm.security.auth.SimpleTransportPlugin.connect(SimpleTransportPlugin.java:101) ~[storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.security.auth.TBackoffConnect.doConnectWithRetry(TBackoffConnect.java:48) ~[storm-client-2.2.0.jar:2.2.0]
at org.apache.storm.security.auth.ThriftClient.reconnect(ThriftClient.java:98) ~[storm-client-2.2.0.jar:2.2.0]
... 15 more
org.apache.storm.utils.NimbusLeaderNotFoundException: Could not find leader nimbus from seed hosts [localhost]. Did you specify a valid list of nimbus hosts for config nimbus.seeds?
at org.apache.storm.utils.NimbusClient.getConfiguredClientAs(NimbusClient.java:250)
at org.apache.storm.utils.NimbusClient.getConfiguredClientAs(NimbusClient.java:179)
at org.apache.storm.utils.NimbusClient.getConfiguredClient(NimbusClient.java:138)
at org.apache.storm.blobstore.NimbusBlobStore.prepare(NimbusBlobStore.java:47)
at org.apache.storm.utils.Utils.validateTopologyBlobStoreMap(Utils.java:1178)
at org.apache.storm.StormSubmitter.validateConfs(StormSubmitter.java:530)
at org.apache.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:236)
at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:210)
at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:173)
at org.apache.storm.topology.ConfigurableTopology.submit(ConfigurableTopology.java:119)
at org.apache.storm.starter.WordCountTopology.run(WordCountTopology.java:58)
at org.apache.storm.topology.ConfigurableTopology.start(ConfigurableTopology.java:68)
at org.apache.storm.starter.WordCountTopology.main(WordCountTopology.java:36)
In the same Eclipse on Windows 10, under a different Eclipse project, the code for a previous Storm version (0.0.1) works: ZooKeeper starts, Nimbus starts fine, and I am able to get the output.
I did not do any cluster configuration locally in Eclipse; the topology succeeds with version 0.0.1.

Connection refused to Schema Registry

I have installed the new version of Confluent, i.e. 5.4, and since then I have been unable to connect to Confluent; my Schema Registry also gets terminated unexpectedly.
Today when I started Confluent and tried to produce data, I received the following error:
[2020-03-05 12:25:00,453] ERROR Failed to send HTTP request to endpoint: http://localhost:8081/subjects/avro-key/versions (io.confluent.kafka.schemaregistry.client.rest.RestService:245)
java.net.ConnectException: Connection refused (Connection refused)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.Socket.connect(Socket.java:609)
at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:177)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:474)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:569)
at java.base/sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:341)
at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:362)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1248)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1187)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081)
at java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1015)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1362)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1337)
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:241)
at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:322)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:422)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:414)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:400)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:140)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:196)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:172)
at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:71)
at io.confluent.kafka.formatter.AvroMessageReader.readMessage(AvroMessageReader.java:199)
at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:55)
at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
Update: here are the Schema Registry logs:
INFO Logging initialized @865ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log:169)
[2020-03-09 12:35:51,851] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.ApplicationServer:316)
[2020-03-09 12:35:52,366] INFO Created schema registry namespace localhost:2181 /schema_registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:709)
[2020-03-09 12:35:53,329] INFO Initializing KafkaStore with broker endpoints: PLAINTEXT://LAP-LIN-897:9092 (io.confluent.kafka.schemaregistry.storage.KafkaStore:108)
[2020-03-09 12:38:03,215] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:77)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:248)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:75)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:90)
at io.confluent.rest.Application.configureHandler(Application.java:217)
at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:185)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: Timed out trying to create or validate schema topic configuration
at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:177)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:119)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:246)
... 6 more
Caused by: java.util.concurrent.TimeoutException
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:170)
... 8 more

Why am I getting SolrException and TimeoutException: Could not connect to ZooKeeper localhost:6511 within 10000 ms

I am trying to connect from the application layer to the Solr search layer and I get this error in the Liferay log file. Could you please help me with this?
00:00:06,363 ERROR [liferay/search_writer/SYSTEM_ENGINE-5][SolrIndexWriter:134] Max delete retries reached (uid 15_PORTLET_46713_FIELD_59339731)
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper localhost:6511 within 10000 ms
at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:148)
at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:99)
at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:89)
at org.apache.solr.common.cloud.ZkStateReader.<init>(ZkStateReader.java:195)
at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:240)
at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:501)
at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:118)
at org.apache.solr.client.solrj.SolrServer.deleteById(SolrServer.java:239)
at org.apache.solr.client.solrj.SolrServer.deleteById(SolrServer.java:225)
at com.liferay.portal.search.solr.SolrIndexWriter.deleteDocument(SolrIndexWriter.java:122)
at com.liferay.portal.search.solr.SolrIndexWriter.updateDocument(SolrIndexWriter.java:241)
at sun.reflect.GeneratedMethodAccessor362.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.liferay.portal.kernel.util.MethodHandler.invoke(MethodHandler.java:83)
at com.liferay.portal.kernel.messaging.proxy.ProxyRequest.execute(ProxyRequest.java:57)
at com.liferay.portal.kernel.messaging.proxy.ProxyMessageListener.receive(ProxyMessageListener.java:51)
at com.liferay.portal.kernel.messaging.InvokerMessageListener.receive(InvokerMessageListener.java:72)
at com.liferay.portal.kernel.messaging.ParallelDestination$1.run(ParallelDestination.java:69)
at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask._runTask(ThreadPoolExecutor.java:678)
at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask.run(ThreadPoolExecutor.java:589)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper localhost:6511 within 10000 ms
at org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:223)
at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:140)
... 21 more

kafka.common.KafkaStorageException: I/O exception in append to log

I have a big problem with Kafka: when I shut down my consumer application, change its groupId, and restart it, my Kafka brokers stop working. This is the stack trace I get:
[2016-07-11 17:02:47,314] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,0] (kafka.coordinator.GroupMetadataManager)
[2016-07-11 17:02:47,955] FATAL [Replica Manager on Broker 0]: Halting due to unrecoverable I/O error while handling produce request: (kafka.server.ReplicaManager)
kafka.common.KafkaStorageException: I/O exception in append to log '__consumer_offsets-38'
at kafka.log.Log.append(Log.scala:318)
at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:442)
at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:428)
at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:401)
at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:386)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:386)
at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:322)
at kafka.coordinator.GroupMetadataManager.store(GroupMetadataManager.scala:228)
at kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:429)
at kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:429)
at scala.Option.foreach(Option.scala:236)
at kafka.coordinator.GroupCoordinator.handleCommitOffsets(GroupCoordinator.scala:429)
at kafka.server.KafkaApis.handleOffsetCommitRequest(KafkaApis.scala:280)
at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: /tmp/kafka-logs/__consumer_offsets-38/00000000000000000000.index (No such file or directory)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:277)
at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)
at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:265)
at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:264)
Probably your /tmp is automatically cleaned up, e.g. by systemd-tmpfiles:
https://www.freedesktop.org/software/systemd/man/systemd-tmpfiles.html
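A minimal sketch of the two usual fixes, with illustrative paths (adjust to your install):

# Option 1: move the data directory out of /tmp, then update server.properties
sudo mkdir -p /var/lib/kafka-logs
# in server.properties: change log.dirs=/tmp/kafka-logs to log.dirs=/var/lib/kafka-logs

# Option 2: tell systemd-tmpfiles to skip the directory during cleanup
echo 'x /tmp/kafka-logs' | sudo tee /etc/tmpfiles.d/kafka.conf

Either way, the broker has to be restarted afterwards; the FileNotFoundException above is fatal once the index files under /tmp/kafka-logs are gone.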