Storm is crashing due to zookeeper status changing - apache-zookeeper

I have a very basic Storm topology which contains 1 unreliable spout + 1 bolt. The spout is fed by a Kafka broker via ZooKeeper. The spout sends tuples to a bolt which in turn does nothing (it has an empty execute method).
The topology runs in local cluster mode and crashes a while after it starts.
I have tried increasing the application memory with the -Xmx parameter several times, but the problem still appears.
Please help!
Here is the code snippet that builds the topology:
builder.setSpout("kafka_forwarding_spout", new KafkaForwardingSpout(), 1);
builder.setBolt("fake_bolt", new FakeBolt(), 1).localOrShuffleGrouping("kafka_forwarding_spout");
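For completeness, FakeBolt is essentially a no-op; a minimal sketch of it (assuming it extends BaseRichBolt, which is my simplification rather than the exact original code) would look like this:
import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

// Minimal sketch of the do-nothing bolt described above (assumes BaseRichBolt; not the original source).
public class FakeBolt extends BaseRichBolt {

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        // no state or collector needed
    }

    @Override
    public void execute(Tuple input) {
        // intentionally empty: every incoming tuple is simply dropped
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output streams
    }
}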
Here is a snippet from the Storm log:
1318482 [Thread-37-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Expired)
1319574 [CuratorFramework-9] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: LOST
1312307 [Thread-39-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected)
1307441 [Thread-25-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected)
1227216 [Thread-29-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Expired)
1191153 [Thread-31-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Disconnected)
1771836 [Thread-39-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Disconnected)
1688630 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
1650422 [Thread-35-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Disconnected)
1635500 [Thread-19-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Expired)
1616369 [CuratorFramework-11] ERROR com.netflix.curator.framework.imps.CuratorFrameworkImpl - Background exception was not retry-able or retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at com.netflix.curator.ConnectionState.getZooKeeper(ConnectionState.java:72) ~[curator-client-1.0.1.jar:na]
at com.netflix.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:74) ~[curator-client-1.0.1.jar:na]
at com.netflix.curator.framework.imps.CuratorFrameworkImpl.getZooKeeper(CuratorFrameworkImpl.java:353) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.BackgroundSyncImpl.performBackgroundOperation(BackgroundSyncImpl.java:39) ~[curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.OperationAndData.callPerformBackgroundOperation(OperationAndData.java:40) ~[curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:547) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.CuratorFrameworkImpl.access$200(CuratorFrameworkImpl.java:50) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.CuratorFrameworkImpl$2.call(CuratorFrameworkImpl.java:177) [curator-framework-1.0.1.jar:na]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_05]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_05]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_05]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_05]
1617025 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2001] WARN org.apache.zookeeper.server.NIOServerCnxn - EndOfStreamException: Unable to read additional data from client sessionid 0x149dd7eb6a2000b, likely client has closed socket
1600212 [Thread-23-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Disconnected)
1584086 [Thread-21-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected)
1570224 [Thread-9] INFO backtype.storm.daemon.worker - Shutting down receive thread
1569786 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
2158302 [Thread-5] ERROR backtype.storm.event - Error when processing event
java.lang.RuntimeException: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at backtype.storm.util$wrap_in_runtime.invoke(util.clj:44) ~[storm-core-0.9.1-incubating.jar:0.9.1-incubating]
at backtype.storm.zookeeper$get_children.invoke(zookeeper.clj:133) ~[storm-core-0.9.1-incubating.jar:0.9.1-incubating]
at backtype.storm.cluster$mk_distributed_cluster_state$reify__2385.get_children(cluster.clj:101) ~[na:na]
at backtype.storm.cluster$mk_storm_cluster_state$reify__2814.assignments(cluster.clj:243) ~[na:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_05]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_05]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_05]
at java.lang.reflect.Method.invoke(Method.java:483) ~[na:1.8.0_05]
at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93) ~[clojure-1.4.0.jar:na]
at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28) ~[clojure-1.4.0.jar:na]
at backtype.storm.daemon.supervisor$assignments_snapshot.invoke(supervisor.clj:39) ~[storm-core-0.9.1-incubating.jar:0.9.1-incubating]
at backtype.storm.daemon.supervisor$mk_synchronize_supervisor$this__6251.invoke(supervisor.clj:298) ~[na:na]
at backtype.storm.event$event_manager$fn__2921.invoke(event.clj:39) ~[na:na]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_05]
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at com.netflix.curator.ConnectionState.getZooKeeper(ConnectionState.java:72) ~[curator-client-1.0.1.jar:na]
at com.netflix.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:74) ~[curator-client-1.0.1.jar:na]
at com.netflix.curator.framework.imps.CuratorFrameworkImpl.getZooKeeper(CuratorFrameworkImpl.java:353) ~[curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:180) ~[curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:173) ~[curator-framework-1.0.1.jar:na]
at com.netflix.curator.RetryLoop.callWithRetry(RetryLoop.java:85) ~[curator-client-1.0.1.jar:na]
at com.netflix.curator.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:169) ~[curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:161) ~[curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:36) ~[curator-framework-1.0.1.jar:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_05]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_05]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_05]
at java.lang.reflect.Method.invoke(Method.java:483) ~[na:1.8.0_05]
at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93) ~[clojure-1.4.0.jar:na]
at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28) ~[clojure-1.4.0.jar:na]
at backtype.storm.zookeeper$get_children.invoke(zookeeper.clj:131) ~[storm-core-0.9.1-incubating.jar:0.9.1-incubating]
... 13 common frames omitted
2117747 [Thread-4] ERROR com.netflix.curator.ConnectionState - Connection timed out
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at com.netflix.curator.ConnectionState.getZooKeeper(ConnectionState.java:72) ~[curator-client-1.0.1.jar:na]
at com.netflix.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:74) [curator-client-1.0.1.jar:na]
at com.netflix.curator.framework.imps.CuratorFrameworkImpl.getZooKeeper(CuratorFrameworkImpl.java:353) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.ExistsBuilderImpl$2.call(ExistsBuilderImpl.java:149) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.ExistsBuilderImpl$2.call(ExistsBuilderImpl.java:138) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.RetryLoop.callWithRetry(RetryLoop.java:85) [curator-client-1.0.1.jar:na]
at com.netflix.curator.framework.imps.ExistsBuilderImpl.pathInForeground(ExistsBuilderImpl.java:134) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:125) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:34) [curator-framework-1.0.1.jar:na]
at backtype.storm.zookeeper$exists_node_QMARK_$fn__2170.invoke(zookeeper.clj:96) [na:0.9.1-incubating]
at backtype.storm.zookeeper$exists_node_QMARK_.invoke(zookeeper.clj:93) [na:0.9.1-incubating]
at backtype.storm.zookeeper$mkdirs.invoke(zookeeper.clj:107) [na:0.9.1-incubating]
at backtype.storm.cluster$mk_distributed_cluster_state$reify__2385.set_ephemeral_node(cluster.clj:69) [na:0.9.1-incubating]
at backtype.storm.cluster$mk_storm_cluster_state$reify__2814.supervisor_heartbeat_BANG_(cluster.clj:315) [na:0.9.1-incubating]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_05]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_05]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_05]
at java.lang.reflect.Method.invoke(Method.java:483) ~[na:1.8.0_05]
at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93) [clojure-1.4.0.jar:na]
at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28) [clojure-1.4.0.jar:na]
at backtype.storm.daemon.supervisor$eval6298$exec_fn__1459__auto____6299$heartbeat_fn__6301.invoke(supervisor.clj:378) [na:0.9.1-incubating]
at backtype.storm.timer$schedule_recurring$this__2141.invoke(timer.clj:92) [na:0.9.1-incubating]
at backtype.storm.timer$mk_timer$fn__2124$fn__2125.invoke(timer.clj:48) [na:0.9.1-incubating]
at backtype.storm.timer$mk_timer$fn__2124.invoke(timer.clj:41) [na:0.9.1-incubating]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_05]
2078727 [Thread-16] ERROR com.netflix.curator.ConnectionState - Connection timed out
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at com.netflix.curator.ConnectionState.getZooKeeper(ConnectionState.java:72) ~[curator-client-1.0.1.jar:na]
at com.netflix.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:74) [curator-client-1.0.1.jar:na]
at com.netflix.curator.framework.imps.CuratorFrameworkImpl.getZooKeeper(CuratorFrameworkImpl.java:353) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.ExistsBuilderImpl$2.call(ExistsBuilderImpl.java:149) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.ExistsBuilderImpl$2.call(ExistsBuilderImpl.java:138) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.RetryLoop.callWithRetry(RetryLoop.java:85) [curator-client-1.0.1.jar:na]
at com.netflix.curator.framework.imps.ExistsBuilderImpl.pathInForeground(ExistsBuilderImpl.java:134) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:125) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:34) [curator-framework-1.0.1.jar:na]
at backtype.storm.zookeeper$exists_node_QMARK_$fn__2170.invoke(zookeeper.clj:96) [na:0.9.1-incubating]
at backtype.storm.zookeeper$exists_node_QMARK_.invoke(zookeeper.clj:93) [na:0.9.1-incubating]
at backtype.storm.zookeeper$exists.invoke(zookeeper.clj:141) [na:0.9.1-incubating]
at backtype.storm.cluster$mk_distributed_cluster_state$reify__2385.set_data(cluster.clj:85) [na:0.9.1-incubating]
at backtype.storm.cluster$mk_storm_cluster_state$reify__2814.worker_heartbeat_BANG_(cluster.clj:291) [na:0.9.1-incubating]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_05]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_05]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_05]
at java.lang.reflect.Method.invoke(Method.java:483) ~[na:1.8.0_05]
at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93) [clojure-1.4.0.jar:na]
at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28) [clojure-1.4.0.jar:na]
at backtype.storm.daemon.worker$do_executor_heartbeats.doInvoke(worker.clj:53) [na:0.9.1-incubating]
at clojure.lang.RestFn.invoke(RestFn.java:439) [clojure-1.4.0.jar:na]
at backtype.storm.daemon.worker$eval5665$exec_fn__1459__auto____5666$fn__5669.invoke(worker.clj:367) [na:0.9.1-incubating]
at backtype.storm.timer$schedule_recurring$this__2141.invoke(timer.clj:92) [na:0.9.1-incubating]
at backtype.storm.timer$mk_timer$fn__2124$fn__2125.invoke(timer.clj:48) [na:0.9.1-incubating]
at backtype.storm.timer$mk_timer$fn__2124.invoke(timer.clj:41) [na:0.9.1-incubating]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_05]
2079838 [CuratorFramework-11] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: LOST
2052508 [Thread-37-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected)
1948998 [yassin-long-d_localhost.localdomain-1416760311632-aa3d627e_watcher_executor] INFO kafka.consumer.ZookeeperConsumerConnector - [yassin-long-d_localhost.localdomain-1416760311632-aa3d627e], exception during rebalance
java.lang.NullPointerException: null
at org.I0Itec.zkclient.ZkConnection.readData(ZkConnection.java:103) ~[zkclient-0.3.jar:0.3]
at org.I0Itec.zkclient.ZkClient$9.call(ZkClient.java:770) ~[zkclient-0.3.jar:0.3]
at org.I0Itec.zkclient.ZkClient$9.call(ZkClient.java:766) ~[zkclient-0.3.jar:0.3]
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675) ~[zkclient-0.3.jar:0.3]
at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:766) ~[zkclient-0.3.jar:0.3]
at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:761) ~[zkclient-0.3.jar:0.3]
at kafka.utils.ZkUtils$.readData(ZkUtils.scala:461) ~[kafka_2.10-0.8.1.jar:na]
at kafka.utils.ZkUtils$$anonfun$getCluster$1.apply(ZkUtils.scala:505) ~[kafka_2.10-0.8.1.jar:na]
at kafka.utils.ZkUtils$$anonfun$getCluster$1.apply(ZkUtils.scala:504) ~[kafka_2.10-0.8.1.jar:na]
at scala.collection.Iterator$class.foreach(Iterator.scala:727) ~[scala-library-2.10.1.jar:na]
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) ~[scala-library-2.10.1.jar:na]
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) ~[scala-library-2.10.1.jar:na]
at scala.collection.AbstractIterable.foreach(Iterable.scala:54) ~[scala-library-2.10.1.jar:na]
at kafka.utils.ZkUtils$.getCluster(ZkUtils.scala:504) ~[kafka_2.10-0.8.1.jar:na]
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply$mcVI$sp(ZookeeperConsumerConnector.scala:407) ~[kafka_2.10-0.8.1.jar:na]
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141) [scala-library-2.10.1.jar:na]
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:402) [kafka_2.10-0.8.1.jar:na]
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anon$1.run(ZookeeperConsumerConnector.scala:355) [kafka_2.10-0.8.1.jar:na]
1872056 [Thread-39-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Expired)
1853312 [Thread-35-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Expired)
1840380 [Thread-31-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Expired)
1808599 [Thread-41-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected)
1799622 [Thread-25-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Disconnected)
2287292 [Thread-41-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Disconnected)
2287075 [Thread-37-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Disconnected)
2286862 [yassin-long-d_localhost.localdomain-1416760311632-aa3d627e_watcher_executor] INFO kafka.consumer.ZookeeperConsumerConnector - [yassin-long-d_localhost.localdomain-1416760311632-aa3d627e], end rebalancing consumer yassin-long-d_localhost.localdomain-1416760311632-aa3d627e try #0
2280400 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2001] WARN org.apache.zookeeper.server.NIOServerCnxn - EndOfStreamException: Unable to read additional data from client sessionid 0x149dd7eb6a20005, likely client has closed socket
2280180 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
2267345 [Thread-9] INFO backtype.storm.messaging.loader - Shutting down receiving-thread: [events_topology-1-1416760309, 1027]
2242225 [Thread-19-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected)
2224569 [Thread-29-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected)
2226559 [Thread-21-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Disconnected)
2209276 [Thread-23-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Expired)
2296607 [Thread-41-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Expired)
2293718 [Thread-37-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Expired)
2292830 [Thread-25-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected)
2498572 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2001] WARN org.apache.zookeeper.server.NIOServerCnxn - EndOfStreamException: Unable to read additional data from client sessionid 0x149dd7eb6a20003, likely client has closed socket
2540517 [Thread-41-kafka_forwarding_spout-EventThread] ERROR org.apache.zookeeper.ClientCnxn - Error while calling watcher
java.lang.OutOfMemoryError: Java heap space
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main-SendThread(localhost:2001)"
2637439 [Thread-35-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Disconnected)
2647739 [Thread-39-kafka_forwarding_spout-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (Disconnected)
2672605 [main-SendThread(localhost:2001)] ERROR org.apache.zookeeper.ClientCnxn - from main-SendThread(localhost:2001)
java.lang.OutOfMemoryError: Java heap space
2764243 [Thread-25-kafka_forwarding_spout-SendThread(zoo2:2182)] ERROR org.apache.zookeeper.ClientCnxn - from Thread-25-kafka_forwarding_spout-SendThread(zoo2:2182)
java.lang.OutOfMemoryError: Java heap space
2720032 [main-SendThread(localhost:2001)] ERROR org.apache.zookeeper.ClientCnxn - from main-SendThread(localhost:2001)
java.lang.OutOfMemoryError: Java heap space
2731157 [Thread-8] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[Thread-8,5,main] died
java.lang.OutOfMemoryError: Java heap space
2800693 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2001] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2001,5,main] died

Seems like a Java heap size problem. Note that there are many other configurable memory-related parameters to check as well.

This issue is caused by the Java Virtual Machine running out of available heap memory. Depending on the underlying operating system you are using, you can pass the -Xmx parameter followed by the amount of memory you want to allocate. For example, if you want to allocate 2 GB of memory, set the initial heap size with -Xms2048m and the maximum heap size with -Xmx2048m. However, remember that increasing the heap size after getting an OutOfMemoryError is generally a short-term solution, and the error is likely to recur. It is usually an indicator that you should optimise your code to be more memory efficient.
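As a rough sketch only (assuming Storm 0.9.x's backtype.storm API; the driver class name, heap values and timeout values below are placeholders, not recommendations), the heap and ZooKeeper timeouts could be tuned along these lines when running in local mode:
import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.topology.TopologyBuilder;

// Hypothetical driver class (names are placeholders). In local cluster mode the whole
// topology runs inside this one JVM, so the heap is set by the flags used to start it,
// e.g. java -Xms2048m -Xmx2048m -cp my-topology.jar TopologyMain
public class TopologyMain {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka_forwarding_spout", new KafkaForwardingSpout(), 1);
        builder.setBolt("fake_bolt", new FakeBolt(), 1)
               .localOrShuffleGrouping("kafka_forwarding_spout");

        Config conf = new Config();
        // Longer ZooKeeper timeouts can help ride out GC pauses that would otherwise expire
        // the session; the values are examples, and in local mode these topology-level
        // settings may not reach the embedded daemons.
        conf.put(Config.STORM_ZOOKEEPER_SESSION_TIMEOUT, 30000);
        conf.put(Config.STORM_ZOOKEEPER_CONNECTION_TIMEOUT, 30000);

        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("events_topology", conf, builder.createTopology());
    }
}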

Related

Connection ERROR while writing Dataframe (Pyspark 3.x on EMR 6.x) to RDS (MySQL)

I get "connection refused error" when I try to write the results of a Dataframe to an RDS (MySQL). I am using PySpark 3 on EMR cluster v6.x (1 master node, 1 slave node). The table does not exist yet. But the data base exist.
spark-submit --jars s3://{some s3 folder}/mysql-connector-java-8.0.25.jar s3://{some s3 folder}/pyspark_script.py
The part of the script that writes to MySQL is shown below (after testing, it is the only part of the script that produces the error). Note: I have changed my database name, user, and password below:
df.write\
.mode("overwrite")\
.format("jdbc")\
.option("url","jdbc:mysql://localhost:3306/{my database name}?useSSL=false")\
.option("driver","com.mysql.cj.jdbc.Driver")\
.option("dbtable","mydb_table")\
.option("user","myuser")\
.option("password","mypassword")\
.save()
This is the error I get; it is about a connection being refused.
I have already given the EMR role access to RDS and its data!
Traceback (most recent call last):
File "/mnt/tmp/spark-93919f38-ea4d-44d6-be7d-0416be972753/pyspark_script.py", line 57, in <module>
.option("password","assignment")\
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 1107, in save
File "/usr/lib/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
File "/usr/lib/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o163.save.
: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:833)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:453)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198)
at org.apache.spark.sql.execution.datasources.jdbc.connection.BasicConnectionProvider.getConnection(BasicConnectionProvider.scala:49)
at org.apache.spark.sql.execution.datasources.jdbc.connection.ConnectionProvider$.create(ConnectionProvider.scala:68)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$createConnectionFactory$1(JdbcUtils.scala:62)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:48)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:90)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:194)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:232)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:229)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:190)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:134)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:133)
at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989)
at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:110)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:135)
at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:135)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:253)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:134)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:301)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151)
at com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:89)
at com.mysql.cj.NativeSession.connect(NativeSession.java:144)
at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:953)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:823)
... 45 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:155)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:63)
... 48 more
21/12/19 11:40:04 INFO SparkContext: Invoking stop() from shutdown hook
21/12/19 11:40:04 INFO AbstractConnector: Stopped Spark#74d96709{HTTP/1.1, (http/1.1)}{0.0.0.0:4040}
21/12/19 11:40:04 INFO SparkUI: Stopped Spark web UI at http://{ip}.eu-central-1.compute.internal:4040
21/12/19 11:40:04 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
21/12/19 11:40:04 INFO MemoryStore: MemoryStore cleared
21/12/19 11:40:04 INFO BlockManager: BlockManager stopped
21/12/19 11:40:04 INFO BlockManagerMaster: BlockManagerMaster stopped
21/12/19 11:40:04 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
21/12/19 11:40:04 INFO SparkContext: Successfully stopped SparkContext
21/12/19 11:40:04 INFO ShutdownHookManager: Shutdown hook called
21/12/19 11:40:04 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-fd1b8e7c-7b4c-424d-a451-743a6e075fbd
21/12/19 11:40:04 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-93919f38-ea4d-44d6-be7d-0416be972753
21/12/19 11:40:04 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-fd1b8e7c-7b4c-424d-a451-743a6e075fbd/pyspark-40fbaaf5-2e34-44ba-875f-88308084546d
Try this thread: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
EMR with a localhost RDS? Seems odd. Did you miss setting the IP correctly?

WSO2 EI 6.5 - Name or service not known | DefaultAddressPicker [LOCAL] [wso2.ei.domain] [3.5.4] integrator Name or service not known

WSO2 EI 6.5 is set up in a k8s environment. It worked fine without any issue, but for some reason it is now crashing with the exception below. [EI-Core] ERROR - DefaultAddressPicker [LOCAL] [wso2.ei.domain] [3.5.4] integrator: Name or service not known
Full Log:
[2019-09-06 05:07:38,292] [EI-Core] INFO - JMSListener JMS listener
started [2019-09-06 05:07:38,293] [EI-Core] INFO -
PassThroughHttpListener Starting Pass-through HTTP Listener...
[2019-09-06 05:07:38,305] [EI-Core] INFO -
PassThroughListeningIOReactorManager Pass-through HTTP Listener
started on 0.0.0.0:8280 [2019-09-06 05:07:38,305] [EI-Core] INFO -
PassThroughHttpSSLListener Starting Pass-through HTTPS Listener...
[2019-09-06 05:07:38,311] [EI-Core] INFO -
PassThroughListeningIOReactorManager Pass-through HTTPS Listener
started on 0.0.0.0:8243 [2019-09-06 05:07:38,311] [EI-Core] INFO -
RabbitMQListener RABBITMQ listener started [2019-09-06 05:07:38,320]
[EI-Core] INFO - HazelcastClusteringAgent Cluster domain:
wso2.ei.domain [2019-09-06 05:07:38,320] [EI-Core] INFO -
HazelcastClusteringAgent Loading hazelcast configuration from axis2
clustering configuration [2019-09-06 05:07:38,336] [EI-Core] INFO -
HazelcastClusteringAgent Using kubernetes based membership management
scheme [2019-09-06 05:07:38,352] [EI-Core] INFO -
KubernetesMembershipScheme Initializing kubernetes membership
scheme... [2019-09-06 05:07:38,353] [EI-Core] INFO -
ApiBasedPodIpResolver Parameter KUBERNETES_API_SERVER not found,
checking KUBERNETES_SERVICE_HOST & KUBERNETES_SERVICE_PORT_HTTPS
[2019-09-06 05:07:38,354] [EI-Core] INFO - ApiBasedPodIpResolver
Kubernetes clustering configuration: [api-server]
https://10.233.0.1:443 [namespace] wso2 [services]
wso2ei-integrator-service [skip-master-ssl-verification] true
[2019-09-06 05:07:38,565] [EI-Core] INFO - ApiBasedPodIpResolver
Reading IP addresses from endpoints [2019-09-06 05:07:38,565]
[EI-Core] INFO - KubernetesMembershipScheme Member added to cluster
configuration: [container-ip] 10.233.64.189 [2019-09-06 05:07:38,565]
[EI-Core] INFO - KubernetesMembershipScheme Kubernetes membership
scheme initialized successfully [2019-09-06 05:07:38,567] [EI-Core]
INFO - HazelcastClusteringAgent Hazelcast cluster is initializing...
[2019-09-06 05:07:38,625] [EI-Core] ERROR - DefaultAddressPicker
[LOCAL] [wso2.ei.domain] [3.5.4] integrator: Name or service not known
java.net.UnknownHostException: integrator: Name or service not known
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) at
java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929) at
java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
at java.net.InetAddress.getAllByName0(InetAddress.java:1277) at
java.net.InetAddress.getAllByName(InetAddress.java:1193) at
java.net.InetAddress.getAllByName(InetAddress.java:1127) at
java.net.InetAddress.getByName(InetAddress.java:1077) at
com.hazelcast.instance.DefaultAddressPicker.getPublicAddress(DefaultAddressPicker.java:289)
at
com.hazelcast.instance.DefaultAddressPicker.pickAddress(DefaultAddressPicker.java:124)
at com.hazelcast.instance.Node.(Node.java:143) at
com.hazelcast.instance.HazelcastInstanceImpl.(HazelcastInstanceImpl.java:120)
at
com.hazelcast.instance.HazelcastInstanceFactory.constructHazelcastInstance(HazelcastInstanceFactory.java:152)
at
com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:135)
at
com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:111)
at
com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:58)
at
org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent.init(HazelcastClusteringAgent.java:196)
at
org.wso2.carbon.core.internal.StartupFinalizerServiceComponent.enableClustering(StartupFinalizerServiceComponent.java:293)
at
org.wso2.carbon.core.internal.StartupFinalizerServiceComponent.completeInitialization(StartupFinalizerServiceComponent.java:187)
at
org.wso2.carbon.core.internal.StartupFinalizerServiceComponent.serviceChanged(StartupFinalizerServiceComponent.java:317)
at
org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
at
org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at
org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:214)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:433)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:451)
at
org.wso2.carbon.throttling.agent.internal.ThrottlingAgentServiceComponent.registerThrottlingAgent(ThrottlingAgentServiceComponent.java:123)
at
org.wso2.carbon.throttling.agent.internal.ThrottlingAgentServiceComponent.activate(ThrottlingAgentServiceComponent.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498) at
org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
at
org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
at
org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
at
org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
at
org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
at
org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
at
org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
at
org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
at
org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at
org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:214)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:433)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:451)
at
org.wso2.carbon.core.init.CarbonServerManager.initializeCarbon(CarbonServerManager.java:515)
at
org.wso2.carbon.core.init.CarbonServerManager.start(CarbonServerManager.java:220)
at
org.wso2.carbon.core.internal.CarbonCoreServiceComponent.activate(CarbonCoreServiceComponent.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498) at
org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
at
org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
at
org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
at
org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
at
org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
at
org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
at
org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
at
org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
at
org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at
org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:214)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:433)
at
org.eclipse.equinox.http.servlet.internal.Activator.registerHttpService(Activator.java:81)
at
org.eclipse.equinox.http.servlet.internal.Activator.addProxyServlet(Activator.java:60)
at
org.eclipse.equinox.http.servlet.internal.ProxyServlet.init(ProxyServlet.java:40)
at
org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.init(DelegationServlet.java:38)
at
org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1230)
at
org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1174)
at
org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1066)
at
org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5433)
at
org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5731)
at
org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:145)
at
org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1707)
at
org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1697)
at java.util.concurrent.FutureTask.run(FutureTask.java:266) at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748) [2019-09-06 05:07:38,627]
[EI-Core] ERROR - StartupFinalizerServiceComponent Cannot initialize
cluster com.hazelcast.core.HazelcastException:
java.net.UnknownHostException: integrator: Name or service not known
at com.hazelcast.util.ExceptionUtil.rethrow(ExceptionUtil.java:67)
at com.hazelcast.instance.Node.(Node.java:145) at
com.hazelcast.instance.HazelcastInstanceImpl.(HazelcastInstanceImpl.java:120)
at
com.hazelcast.instance.HazelcastInstanceFactory.constructHazelcastInstance(HazelcastInstanceFactory.java:152)
at
com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:135)
at
com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:111)
at
com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:58)
at
org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent.init(HazelcastClusteringAgent.java:196)
at
org.wso2.carbon.core.internal.StartupFinalizerServiceComponent.enableClustering(StartupFinalizerServiceComponent.java:293)
at
org.wso2.carbon.core.internal.StartupFinalizerServiceComponent.completeInitialization(StartupFinalizerServiceComponent.java:187)
at
org.wso2.carbon.core.internal.StartupFinalizerServiceComponent.serviceChanged(StartupFinalizerServiceComponent.java:317)
at
org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
at
org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at
org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:214)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:433)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:451)
at
org.wso2.carbon.throttling.agent.internal.ThrottlingAgentServiceComponent.registerThrottlingAgent(ThrottlingAgentServiceComponent.java:123)
at
org.wso2.carbon.throttling.agent.internal.ThrottlingAgentServiceComponent.activate(ThrottlingAgentServiceComponent.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498) at
org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
at
org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
at
org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
at
org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
at
org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
at
org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
at
org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
at
org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
at
org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at
org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:214)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:433)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:451)
at
org.wso2.carbon.core.init.CarbonServerManager.initializeCarbon(CarbonServerManager.java:515)
at
org.wso2.carbon.core.init.CarbonServerManager.start(CarbonServerManager.java:220)
at
org.wso2.carbon.core.internal.CarbonCoreServiceComponent.activate(CarbonCoreServiceComponent.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498) at
org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
at
org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
at
org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
at
org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
at
org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
at
org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
at
org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
at
org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
at
org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at
org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)
at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:214)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:433)
at
org.eclipse.equinox.http.servlet.internal.Activator.registerHttpService(Activator.java:81)
at
org.eclipse.equinox.http.servlet.internal.Activator.addProxyServlet(Activator.java:60)
at
org.eclipse.equinox.http.servlet.internal.ProxyServlet.init(ProxyServlet.java:40)
at
org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.init(DelegationServlet.java:38)
at
org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1230)
at
org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1174)
at
org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1066)
at
org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5433)
at
org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5731)
at
org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:145)
at
org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1707)
at
org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1697)
at java.util.concurrent.FutureTask.run(FutureTask.java:266) at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748) Caused by:
java.net.UnknownHostException: integrator: Name or service not known
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) at
java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929) at
java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
at java.net.InetAddress.getAllByName0(InetAddress.java:1277)
You can get this resolved by adding an /etc/hosts entry to the container. In K8s you can update the pod template in the Deployment as follows:
spec:
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "integrator"

I built a Spring Boot jar file into an image, but when I run it, I can't connect to MySQL

The jar file can be run successfully with java, but the image does not run correctly.
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
backend latest bad78d214712 16 minutes ago 735MB
Command: docker run -p 8000:8000 backend
A Spring Boot jar, MySQL 5.7.
The Dockerfile and the jar package are in the same directory.
Dockerfile:
FROM aurorasystem/openjdk-zeromq:latest
MAINTAINER Aurora System <it#aurora-system.com>
VOLUME /tmp
COPY backend-2.0.0.jar app.jar
RUN bash -c "touch /app.jar"
EXPOSE 8888
ENTRYPOINT ["java","-jar","/app.jar"]
Error:
19-06-06 08:19:37.173 [main] WARN : c.z.hikari.util.DriverDataSource.<init> - Registered driver with driverClassName=com.mysql.jdbc.Driver was not found, trying direct instantiation.
19-06-06 08:19:38.380 [main] ERROR : com.zaxxer.hikari.pool.HikariPool.throwPoolInitializationException - HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:835)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:455)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:240)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:207)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:136)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:369)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:198)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:467)
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:541)
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112)
at liquibase.integration.spring.SpringLiquibase.afterPropertiesSet(SpringLiquibase.java:302)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1804)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1741)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:576)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:498)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:307)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1083)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:853)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:546)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:142)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:775)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:316)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1260)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1248)
at com.aurora.backend.Application.main(Application.java:19)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51)
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151)
at com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:91)
at com.mysql.cj.NativeSession.connect(NativeSession.java:152)
at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:955)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:825)
... 39 common frames omitted
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:155)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:65)
... 42 common frames omitted
19-06-06 08:19:38.382 [main] WARN : o.s.b.w.s.c.AnnotationConfigServletWebServerApplicationContext.refresh - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [org/springframework/boot/autoconfigure/liquibase/LiquibaseAutoConfiguration$LiquibaseConfiguration.class]: Invocation of init method failed; nested exception is liquibase.exception.DatabaseException: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
19-06-06 08:19:38.407 [main] INFO : o.a.catalina.core.StandardService.log - Stopping service [Tomcat]
19-06-06 08:19:38.434 [main] INFO : o.s.b.a.l.ConditionEvaluationReportLoggingListener.logMessage -
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
19-06-06 08:19:38.439 [main] ERROR : o.s.boot.SpringApplication.reportFailure - Application run failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [org/springframework/boot/autoconfigure/liquibase/LiquibaseAutoConfiguration$LiquibaseConfiguration.class]: Invocation of init method failed; nested exception is liquibase.exception.DatabaseException: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1745)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:576)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:498)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:307)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1083)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:853)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:546)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:142)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:775)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:316)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1260)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1248)
at com.aurora.backend.Application.main(Application.java:19)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51)
Caused by: liquibase.exception.DatabaseException: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at liquibase.integration.spring.SpringLiquibase.afterPropertiesSet(SpringLiquibase.java:307)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1804)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1741)
... 26 common frames omitted
Caused by: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:835)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:455)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:240)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:207)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:136)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:369)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:198)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:467)
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:541)
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112)
at liquibase.integration.spring.SpringLiquibase.afterPropertiesSet(SpringLiquibase.java:302)
... 28 common frames omitted
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151)
at com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:91)
at com.mysql.cj.NativeSession.connect(NativeSession.java:152)
at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:955)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:825)
... 39 common frames omitted
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:155)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:65)
... 42 common frames omitted
It seems you are connecting to a MySQL database on your local machine and have set the host name to localhost, which from inside the container points at the container itself rather than at the host. Please try docker run --net=host -p 8000:8000 backend to use the host network, or use the host's IP address in the connection URL directly.
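For the second option, the datasource URL in the Spring Boot configuration would point at an address of the host that is reachable from inside the container. A minimal sketch (the host IP, port and schema name are placeholders, not taken from the original setup):
# application.properties
# Replace localhost with the host machine's address as seen from the container.
spring.datasource.url=jdbc:mysql://192.168.1.10:3306/backend?useSSL=false
# On Docker Desktop (Windows/Mac), host.docker.internal resolves to the host:
# spring.datasource.url=jdbc:mysql://host.docker.internal:3306/backend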

Spark Streaming Twitter "receiver-supervisor-future-0" java.lang.AbstractMethodError

I am using the Spark Streaming Twitter package to fetch the popular hashtags. To be specific, I am trying to replicate this code: https://github.com/andrewrgoss/udemy-spark-scala/tree/master/twitter_streaming
I ran the following commands on the Windows command line:
sbt package
spark-submit --packages org.apache.bahir:spark-streaming-twitter_2.11:2.1.0 --class com.andrewrgoss.spark.PopularHashtags \target\scala-2.11\twitter_streaming_2.11-0.1.jar
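(For reference, the sbt package step builds against a build.sbt roughly along the following lines; this is only a sketch with assumed names and versions, and the actual file in the linked repository may differ.)
name := "twitter_streaming"
version := "0.1"
scalaVersion := "2.11.12"
// Spark itself is supplied by spark-submit at run time, and the Twitter
// connector is pulled in through --packages on the command line above,
// so none of these need to be bundled into the packaged jar.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.1.0" % "provided",
  "org.apache.spark" %% "spark-streaming" % "2.1.0" % "provided",
  "org.apache.bahir" %% "spark-streaming-twitter" % "2.1.0" % "provided"
)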
This gives me the error below.
2018-04-20 03:16:30 INFO BlockManagerMaster:54 - Registered BlockManager BlockManagerId(driver, 172.24.20.2, 56082, None)
2018-04-20 03:16:30 INFO BlockManager:54 - Initialized BlockManager: BlockManagerId(driver, 172.24.20.2, 56082, None)
2018-04-20 03:16:30 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler#6b00ad9{/metrics/json,null,AVAILABLE,#Spark}
Exception in thread "receiver-supervisor-future-0" java.lang.AbstractMethodError
at org.apache.spark.internal.Logging$class.initializeLogIfNecessary(Logging.scala:99)
at org.apache.spark.streaming.twitter.TwitterReceiver.initializeLogIfNecessary(TwitterInputDStream.scala:60)
at org.apache.spark.internal.Logging$class.log(Logging.scala:46)
at org.apache.spark.streaming.twitter.TwitterReceiver.log(TwitterInputDStream.scala:60)
at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
at org.apache.spark.streaming.twitter.TwitterReceiver.logInfo(TwitterInputDStream.scala:60)
at org.apache.spark.streaming.twitter.TwitterReceiver.onStop(TwitterInputDStream.scala:106)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.stopReceiver(ReceiverSupervisor.scala:170)
at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply$mcV$sp(ReceiverSupervisor.scala:194)
at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply(ReceiverSupervisor.scala:189)
at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply(ReceiverSupervisor.scala:189)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
-------------------------------------------
Time: 1524208600000 ms
-------------------------------------------
-------------------------------------------
Time: 1524208610000 ms
-------------------------------------------
-------------------------------------------
I found the related thread "Spark Program finding Popular HashTags from twiiter" but no solution there. Can someone from the community explain this error and how to resolve it?

Flink streaming job switched to failed status

I have an 8-node Flink cluster and a 5-node Kafka cluster running a WordCount job. In the first case, a lot of data is generated and pushed to Kafka first, and then the Flink job is launched. Everything works fine in this case.
In the second case, the Flink streaming job is launched first and data is produced into the Kafka topic afterwards. Here the Flink job usually switches to FAILED status. Sometimes it fails immediately after the job is launched; sometimes it fails several minutes later.
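For context, the job is essentially a Kafka-sourced WordCount. A minimal sketch of that kind of job against the legacy Flink Kafka consumer that appears in the stack traces below (class names, topic and connection properties are assumptions, not the actual code):
import java.util.Properties;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;
import org.apache.flink.util.Collector;

public class KafkaWordCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // The legacy (0.8-style) consumer discovers partitions and offsets via
        // ZooKeeper, hence both bootstrap.servers and zookeeper.connect are set.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka1:9092");
        props.setProperty("zookeeper.connect", "zk1:2181");
        props.setProperty("group.id", "wordcount");

        DataStream<String> lines = env.addSource(
                new FlinkKafkaConsumer082<String>("WordCount", new SimpleStringSchema(), props));

        lines.flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                    @Override
                    public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                        // Split each Kafka record into words and emit (word, 1) pairs.
                        for (String word : line.toLowerCase().split("\\W+")) {
                            if (!word.isEmpty()) {
                                out.collect(new Tuple2<>(word, 1));
                            }
                        }
                    }
                })
                .keyBy(0)   // key by the word
                .sum(1)     // keyed reduce, matching the "Keyed Reduce -> Sink" task name
                .print();

        env.execute("Kafka WordCount");
    }
}
The exception reported on the client side when the job fails is: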
org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: Error at remote task manager 'worker1/192.168.1.38:35240'.
at org.apache.flink.runtime.io.network.netty.PartitionRequestClientHandler.decodeMsg(PartitionRequestClientHandler.java:241)
at org.apache.flink.runtime.io.network.netty.PartitionRequestClientHandler.channelRead(PartitionRequestClientHandler.java:164)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.flink.runtime.io.network.partition.ProducerFailedException
at org.apache.flink.runtime.io.network.netty.PartitionRequestQueue.writeAndFlushNextMessageIfPossible(PartitionRequestQueue.java:164)
at org.apache.flink.runtime.io.network.netty.PartitionRequestQueue.userEventTriggered(PartitionRequestQueue.java:96)
at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:279)
at io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:265)
at io.netty.channel.ChannelInboundHandlerAdapter.userEventTriggered(ChannelInboundHandlerAdapter.java:108)
at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:279)
at io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:265)
at io.netty.channel.ChannelInboundHandlerAdapter.userEventTriggered(ChannelInboundHandlerAdapter.java:108)
at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:279)
at io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:265)
at io.netty.channel.ChannelInboundHandlerAdapter.userEventTriggered(ChannelInboundHandlerAdapter.java:108)
at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:279)
at io.netty.channel.AbstractChannelHandlerContext.access$500(AbstractChannelHandlerContext.java:32)
at io.netty.channel.AbstractChannelHandlerContext$6.run(AbstractChannelHandlerContext.java:270)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
... 2 more
01/24/2016 22:21:32 Keyed Reduce -> Sink: Unnamed(29/32) switched to FAILED
org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: Error at remote task manager 'worker1/192.168.1.38:35240'.
at org.apache.flink.runtime.io.network.netty.PartitionRequestClientHandler.decodeMsg(PartitionRequestClientHandler.java:241)
at org.apache.flink.runtime.io.network.netty.PartitionRequestClientHandler.channelRead(PartitionRequestClientHandler.java:164)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
In the log file of worker4, the error is:
23:03:43,786 INFO org.apache.flink.runtime.taskmanager.Task - Source: Custom Source -> Map -> Flat Map -> Map (20/32) switched to FAILED with exception.
java.lang.Exception: Error while fetching from broker:
Exception for partition 19: kafka.common.UnknownException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:383)
at kafka.common.ErrorMapping$.exceptionFor(ErrorMapping.scala:86)
at kafka.common.ErrorMapping.exceptionFor(ErrorMapping.scala)
at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.run(LegacyFetcher.java:406)
at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher.run(LegacyFetcher.java:242)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.run(FlinkKafkaConsumer.java:397)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:58)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:55)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:218)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Error while fetching from broker:
Exception for partition 19: kafka.common.UnknownException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:383)
at kafka.common.ErrorMapping$.exceptionFor(ErrorMapping.scala:86)
at kafka.common.ErrorMapping.exceptionFor(ErrorMapping.scala)
at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.run(LegacyFetcher.java:406)
at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.run(LegacyFetcher.java:422)
Before this UnknownException, there are some ZooKeeper-related log entries:
08:58:47,720 INFO org.I0Itec.zkclient.ZkEventThread - Terminate ZkClient event thread.
08:58:47,737 INFO org.apache.zookeeper.ZooKeeper - Session: 0x15277fbb7c70020 closed
08:58:47,737 INFO org.apache.zookeeper.ClientCnxn - EventThread shut down
08:58:47,737 INFO org.apache.flink.runtime.taskmanager.Task - Source: Custom Source -> Map -> Flat Map -> Map (6/32) switched to FAILED with exception.
The root cause of the error is:
Exception for partition 19: kafka.common.UnknownException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:383)
at kafka.common.ErrorMapping$.exceptionFor(ErrorMapping.scala:86)
at kafka.common.ErrorMapping.exceptionFor(ErrorMapping.scala)
at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.run(LegacyFetcher.java:406)
at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher.run(LegacyFetcher.java:242)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.run(FlinkKafkaConsumer.java:397)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:58)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:55)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:218)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
at java.lang.Thread.run(Thread.java:745)
The UnknownException is most likely triggered by this error on the Kafka side:
[2016-01-25 12:45:30,195] ERROR [Replica Manager on Broker 2]: Error when processing fetch request for partition [WordCount,4] offset 335517 from consumer with correlation id 0. Possible cause: Attempt to read with a maximum offset (335515) less than the start offset (335517). (kafka.server.ReplicaManager)
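If you want to confirm such an offset mismatch from the Kafka side, the stock GetOffsetShell tool prints the earliest and latest offset of each partition; the broker address and topic name below are assumptions based on the job name:
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list kafka1:9092 --topic WordCount --time -1   # latest offset per partition
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list kafka1:9092 --topic WordCount --time -2   # earliest offset per partition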
I've filed a JIRA in Flink for the problem: https://issues.apache.org/jira/browse/FLINK-3288