Issue while setting up nifi cluster - apache-zookeeper

I followed the tutorial below to set up a NiFi cluster and performed everything as described, but I am receiving the error below.
https://pierrevillard.com/2016/08/13/apache-nifi-1-0-0-cluster-setup/
ZooKeeper was unable to elect a leader, and I get the following error message:
2018-04-20 09:27:40,942 INFO [main] org.wali.MinimalLockingWriteAheadLog Successfully recovered 0 records in 3 milliseconds
2018-04-20 09:27:41,007 INFO [main] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog#27caa186 checkpointed with 0 Records and 0 Swap Files in 65 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 7 millis), max Transaction ID -1
2018-04-20 09:27:41,300 INFO [main] o.a.z.server.DatadirCleanupManager autopurge.snapRetainCount set to 30
2018-04-20 09:27:41,300 INFO [main] o.a.z.server.DatadirCleanupManager autopurge.purgeInterval set to 24
2018-04-20 09:27:41,311 INFO [main] o.a.n.c.s.server.ZooKeeperStateServer Starting Embedded ZooKeeper Peer
2018-04-20 09:27:41,319 INFO [PurgeTask] o.a.z.server.DatadirCleanupManager Purge task started.
2018-04-20 09:27:41,343 INFO [PurgeTask] o.a.z.server.DatadirCleanupManager Purge task completed.
2018-04-20 09:27:41,621 INFO [main] o.apache.nifi.controller.FlowController Checking if there is already a Cluster Coordinator Elected...
2018-04-20 09:27:41,907 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2018-04-20 09:27:49,882 WARN [main] o.a.n.c.l.e.CuratorLeaderElectionManager Unable to determine the Elected Leader for role 'Cluster Coordinator' due to org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /nifi/leaders/Cluster Coordinator; assuming no leader has been elected
2018-04-20 09:27:49,885 INFO [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
2018-04-20 09:27:49,986 INFO [main] o.apache.nifi.controller.FlowController It appears that no Cluster Coordinator has been Elected yet. Registering for Cluster Coordinator Role.
2018-04-20 09:27:49,988 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=true] Registered new Leader Selector for role Cluster Coordinator; this node is an active participant in the election.
2018-04-20 09:27:49,988 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2018-04-20 09:27:49,997 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Registered new Leader Selector for role Cluster Coordinator; this node is an active participant in the election.
2018-04-20 09:27:49,997 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] started
2018-04-20 09:27:49,997 INFO [main] o.a.n.c.c.h.AbstractHeartbeatMonitor Heartbeat Monitor started
2018-04-20 09:27:58,179 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#76d9fe26{/nifi-api,file:///root/nifi-1.6.0/work/jetty/nifi-web-api-1.6.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.6.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-api-1.6.0.war}
2018-04-20 09:27:59,657 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=861ms
2018-04-20 09:27:59,834 INFO [main] o.e.j.C./nifi-content-viewer No Spring WebApplicationInitializer types detected on classpath
2018-04-20 09:27:59,840 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#43b41cb9{/nifi-content-viewer,file:///root/nifi-1.6.0/work/jetty/nifi-web-content-viewer-1.6.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.6.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-content-viewer-1.6.0.war}
2018-04-20 09:27:59,863 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.s.h.ContextHandler#49825659{/nifi-docs,null,AVAILABLE}
2018-04-20 09:28:00,079 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=69ms
2018-04-20 09:28:00,083 INFO [main] o.e.jetty.ContextHandler./nifi-docs No Spring WebApplicationInitializer types detected on classpath
2018-04-20 09:28:00,171 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#2ab0ca04{/nifi-docs,file:///root/nifi-1.6.0/work/jetty/nifi-web-docs-1.6.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.6.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-docs-1.6.0.war}
2018-04-20 09:28:00,343 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=119ms
2018-04-20 09:28:00,344 INFO [main] org.eclipse.jetty.ContextHandler./ No Spring WebApplicationInitializer types detected on classpath
2018-04-20 09:28:00,454 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#67e255cf{/,file:///root/nifi-1.6.0/work/jetty/nifi-web-error-1.6.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.6.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-error-1.6.0.war}
2018-04-20 09:28:00,541 INFO [main] o.eclipse.jetty.server.AbstractConnector Started ServerConnector#6d9f624{HTTP/1.1,[http/1.1]}{node-1:8080}
2018-04-20 09:28:00,559 INFO [main] org.eclipse.jetty.server.Server Started #98628ms
2018-04-20 09:28:00,599 INFO [main] org.apache.nifi.web.server.JettyServer Loading Flow...
2018-04-20 09:28:00,688 INFO [main] org.apache.nifi.io.socket.SocketListener Now listening for connections from nodes on port 9999
2018-04-20 09:28:00,916 INFO [main] o.apache.nifi.controller.FlowController Successfully synchronized controller with proposed flow
2018-04-20 09:28:01,337 INFO [main] o.a.nifi.controller.StandardFlowService Connecting Node: node-1:8080
2018-04-20 09:28:07,922 INFO [Curator-Framework-0] o.a.c.f.state.ConnectionStateManager State change: SUSPENDED
2018-04-20 09:28:07,927 INFO [Curator-ConnectionStateManager-0] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener#188207a7 Connection State changed to SUSPENDED
2018-04-20 09:28:07,930 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:728)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:857)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:809)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:64)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:267)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-04-19 15:56:55,209 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:728)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:857)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:809)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:64)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:267)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I did the following setup on all 3 nodes:
mkdir ./state
mkdir ./state/zookeeper
echo 1 > ./state/zookeeper/myid (2 and 3 on the other nodes)
In zookeeper.properties:
server.1=node-1:2888:3888
server.2=node-2:2888:3888
server.3=node-3:2888:3888
In nifi.properties:
nifi.state.management.embedded.zookeeper.start=true
nifi.zookeeper.connect.string=node-1:2181,node-2:2181,node-3:2181
nifi.cluster.protocol.is.secure=false
nifi.cluster.is.node=true
nifi.cluster.node.address=node-1(node-2 & node-3)
nifi.cluster.node.protocol.port=9999
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.firewall.file=
nifi.remote.input.host=node-1(node-2 & node-3)
nifi.remote.input.secure=false
nifi.remote.input.socket.port=9998
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.web.http.host=node-1(node-2 & node-3)
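One way to verify that each embedded ZooKeeper instance is reachable from the other nodes (a minimal sketch, assuming the node-1/node-2/node-3 hostnames resolve on every machine and the default client port 2181 is in use):

# 'ruok' answers 'imok' if the ZooKeeper instance is up and reachable;
# 'stat' additionally reports whether the peer is a leader or a follower.
echo ruok | nc node-1 2181
echo stat | nc node-2 2181
echo stat | nc node-3 2181
# The quorum and election ports can be probed as well:
nc -zv node-2 2888
nc -zv node-3 3888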

After Bryan's comment, I remembered that I had forgotten to disable iptables (I could also just open the required ports, but since this is only for practice I am disabling iptables), and it is working now.
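Rather than disabling iptables outright, the same effect can be had by opening only the ports used above. A sketch, assuming a RHEL/CentOS-style iptables service and the ports from the configuration shown earlier:

# ZooKeeper client, quorum, and election ports
iptables -I INPUT -p tcp -m multiport --dports 2181,2888,3888 -j ACCEPT
# NiFi web UI, site-to-site, and cluster protocol ports
iptables -I INPUT -p tcp -m multiport --dports 8080,9998,9999 -j ACCEPT
service iptables save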

Related

Confluent Control Center failure: Unable to fetch consumer offsets for cluster id

I am running Confluent Platform (version 6.1.1) with the following components deployed: 3 brokers, 3 ZooKeeper nodes, Schema Registry, 3 Kafka Connect workers, KSQL, and Confluent Control Center (CCC).
The CCC has entered a failed state, and I am having difficulty bringing it back.
To make things cleaner, I created another EC2 instance (m4.2xlarge) on which I configured a new CCC, with the aim of connecting it to the current cluster. The new CCC has exactly the same configuration as the failed one, but with a different confluent.controlcenter.id.
I start the CCC and it runs. I can access the CCC UI, but it is not working properly: pages take too long to load, and the reported state of the Connect cluster and of the brokers keeps flapping between healthy and unhealthy.
After running for a while, it restarts automatically, and it then keeps restarting every 5-7 minutes.
When it starts, I see a bunch of new topics created in the Kafka cluster.
After that, in control-center.log I see:
INFO [main] Setting offsets for topic=_confluent-monitoring (io.confluent.controlcenter.KafkaHelper)
INFO [main] found 12 topicPartitions for topic=_confluent-monitoring (io.confluent.controlcenter.KafkaHelper)
INFO [main] Setting offsets for topic=_confluent-metrics (io.confluent.controlcenter.KafkaHelper)
INFO [main] found 12 topicPartitions for topic=_confluent-metrics (io.confluent.controlcenter.KafkaHelper)
INFO [main] action=starting topology=command (io.confluent.controlcenter.ControlCenter)
INFO [main] waiting for streams to be in running state REBALANCING (io.confluent.command.CommandStore)
INFO [main] Streams state RUNNING (io.confluent.command.CommandStore)
INFO [main] action=started topology=command (io.confluent.controlcenter.ControlCenter)
INFO [main] action=starting operation=command-migration (io.confluent.controlcenter.ControlCenter)
INFO [main] action=completed operation=command-migration (io.confluent.controlcenter.ControlCenter)
INFO [main] action=starting topology=monitoring (io.confluent.controlcenter.ControlCenter)
INFO [main] action=started topology=monitoring (io.confluent.controlcenter.ControlCenter)
INFO [main] Starting Health Check (io.confluent.controlcenter.ControlCenter)
INFO [main] Starting Alert Manager (io.confluent.controlcenter.ControlCenter)
INFO [main] Starting Consumer Offsets Fetch (io.confluent.controlcenter.ControlCenter)
INFO [control-center-heartbeat-0] current clusterId=lCRehAk0RqmLR04nhXKHtA (io.confluent.controlcenter.healthcheck.HealthCheck)
INFO [control-center-heartbeat-0] broker id set has changed new={1001=[10.251.xx.xx:9093 (id: 1001 rack: null)], 1002=[10.251.xx.xx:9093 (id: 1002 rack: null)], 1003=[10.251.xx.xx:9093 (id: 1003 rack: null)]} removed={} (io.confluent.controlcenter.healthcheck.HealthCheck)
INFO [control-center-heartbeat-0] new controller=10.251.xx.xx:9093 (id: 1002 rack: null) (io.confluent.controlcenter.healthcheck.HealthCheck)
INFO [main] Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer)
INFO [main] Adding listener: http://0.0.0.0:9021 (io.confluent.rest.ApplicationServer)
INFO [main] x509=X509#3a8ead9(ip-44-135-xx-xx.eu-central-1.compute.internal,h=[ip-44-135-xx-xx.eu-central-1.compute.internal],w=[]) for Server#7c8b37a8[provider=null,keyStore=file:///var/kafka-ssl/server.keystore.jks,trustStore=file:///var/kafka-ssl/client.truststore.jks] (org.eclipse.jetty.util.ssl.SslContextFactory)
INFO [main] x509=X509#3831f4c2(caroot,h=[eu-central-1.compute.internal],w=[]) for Server#7c8b37a8[provider=null,keyStore=file:///var/kafka-ssl/server.keystore.jks,trustStore=file:///var/kafka-ssl/client.truststore.jks] (org.eclipse.jetty.util.ssl.SslContextFactory)
INFO [main] jetty-9.4.38.v20210224; built: 2021-02-24T20:25:07.675Z; git: 288f3cc74549e8a913bf363250b0744f2695b8e6; jvm 11.0.13+8-LTS (org.eclipse.jetty.server.Server)
INFO [main] DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
INFO [main] No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
INFO [main] node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
INFO [main] Started o.e.j.s.ServletContextHandler#1ef5cde4{/,[jar:file:/usr/share/java/acl/acl-6.1.1.jar!/io/confluent/controlcenter/rest/static],AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
INFO [main] Started o.e.j.s.ServletContextHandler#5401c6a8{/ws,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
INFO [main] Started NetworkTrafficServerConnector#5d6b5d3d{HTTP/1.1, (http/1.1)}{0.0.0.0:9021} (org.eclipse.jetty.server.AbstractConnector)
INFO [main] Started #36578ms (org.eclipse.jetty.server.Server)
INFO [_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-1] name=monitoring-input-topic-progress-.count type=monitoring cluster= value=0.0 (io.confluent.controlcenter.util.StreamProgressReporter)
INFO [_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-1] name=monitoring-input-topic-progress-.rate type=monitoring cluster= value=0.0 (io.confluent.controlcenter.util.StreamProgressReporter)
INFO [_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-1] name=monitoring-input-topic-progress-.timestamp type=monitoring cluster= value=NaN (io.confluent.controlcenter.util.StreamProgressReporter)
INFO [_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-1] name=monitoring-input-topic-progress-.min type=monitoring cluster= value=1.7976931348623157E308 (io.confluent.controlcenter.util.StreamProgressReporter)
INFO [_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-1] name=metrics-input-topic-progress-lCRehAk0RqmLR04nhXKHtA.count type=metrics cluster=lCRehAk0RqmLR04nhXKHtA value=0.0 (io.confluent.controlcenter.util.StreamProgressReporter)
INFO [_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-1] name=metrics-input-topic-progress-lCRehAk0RqmLR04nhXKHtA.rate type=metrics cluster=lCRehAk0RqmLR04nhXKHtA value=0.0 (io.confluent.controlcenter.util.StreamProgressReporter)
INFO [_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-1] name=metrics-input-topic-progress-lCRehAk0RqmLR04nhXKHtA.timestamp type=metrics cluster=lCRehAk0RqmLR04nhXKHtA value=NaN (io.confluent.controlcenter.util.StreamProgressReporter)
INFO [_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-1] name=metrics-input-topic-progress-lCRehAk0RqmLR04nhXKHtA.min type=metrics cluster=lCRehAk0RqmLR04nhXKHtA value=1.7976931348623157E308 (io.confluent.controlcenter.util.StreamProgressReporter)
WARN [control-center-heartbeat-0] misconfigured topic=_confluent-command config=segment.bytes value=1073741824 expected=134217728 (io.confluent.controlcenter.healthcheck.HealthCheck)
WARN [control-center-heartbeat-0] misconfigured topic=_confluent-command config=delete.retention.ms value=86400000 expected=259200000 (io.confluent.controlcenter.healthcheck.HealthCheck)
INFO [control-center-heartbeat-0] misconfigured topic=_confluent-metrics config=min.insync.replicas value=1 expected=2 (io.confluent.controlcenter.healthcheck.HealthCheck)
WARN [control-center-heartbeat-1] Unable to fetch consumer offsets for cluster id lCRehAk0RqmLR04nhXKHtA (io.confluent.controlcenter.data.ConsumerOffsetsFetcher)
java.util.concurrent.TimeoutException
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
at io.confluent.controlcenter.data.ConsumerOffsetsDao.getAllConsumerGroupDescriptions(ConsumerOffsetsDao.java:220)
at io.confluent.controlcenter.data.ConsumerOffsetsDao.getAllConsumerGroupOffsets(ConsumerOffsetsDao.java:58)
at io.confluent.controlcenter.data.ConsumerOffsetsFetcher.run(ConsumerOffsetsFetcher.java:73)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
WARN [kafka-admin-client-thread | adminclient-3] failed fetching description for consumerGroup=_confluent-ksql-eim_ksql_non_prodquery_CSAS_SDL_STMTS_GG_347 (io.confluent.controlcenter.data.ConsumerOffsetsDao)
org.apache.kafka.common.errors.TimeoutException: Call(callName=describeConsumerGroups, deadlineMs=1654853629184, tries=1, nextAllowedTryMs=1654853629324) timed out at 1654853629224 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.DisconnectException: Cancelled describeConsumerGroups request with correlation id 168 due to node 1001 being disconnected
WARN [kafka-admin-client-thread | adminclient-3] failed fetching description for consumerGroup=connect-mongo-dci-grid-partner-test11 (io.confluent.controlcenter.data.ConsumerOffsetsDao)
org.apache.kafka.common.errors.TimeoutException: Call(callName=describeConsumerGroups, deadlineMs=1654853629184, tries=1, nextAllowedTryMs=1654853629324) timed out at 1654853629224 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: describeConsumerGroups
WARN [kafka-admin-client-thread | adminclient-3] failed fetching description for consumerGroup=_confluent-ksql-eim_ksql_non_prodquery_CSAS_SDL_STMTS_UPWARD_GG_355 (io.confluent.controlcenter.data.ConsumerOffsetsDao)
org.apache.kafka.common.errors.TimeoutException: Call(callName=describeConsumerGroups, deadlineMs=1654853629184, tries=1, nextAllowedTryMs=1654853629324) timed out at 1654853629224 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call. Call: describeConsumerGroups
WARN [kafka-admin-client-thread | adminclient-3] failed fetching description for consumerGroup=_eim_c3_non_prod-4 (io.confluent.controlcenter.data.ConsumerOffsetsDao)
org.apache.kafka.common.errors.TimeoutException: Call(callName=describeConsumerGroups, deadlineMs=1654853629184, tries=1, nextAllowedTryMs=1654853629324) timed out at 1654853629224 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call. Call: describeConsumerGroups
... and so on ...
WARN [control-center-heartbeat-1] Unable to fetch consumer offsets for cluster id lCRehAk0RqmLR04nhXKHtA (io.confluent.controlcenter.data.ConsumerOffsetsFetcher)
java.util.concurrent.TimeoutException
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
at io.confluent.controlcenter.data.ConsumerOffsetsDao.getAllConsumerGroupDescriptions(ConsumerOffsetsDao.java:220)
at io.confluent.controlcenter.data.ConsumerOffsetsDao.getAllConsumerGroupOffsets(ConsumerOffsetsDao.java:58)
at io.confluent.controlcenter.data.ConsumerOffsetsFetcher.run(ConsumerOffsetsFetcher.java:73)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
and so on...
In the control-center-kafka.log I see:
INFO [control-center-heartbeat-1] Kafka version: 6.1.1-ce (org.apache.kafka.common.utils.AppInfoParser)
INFO [control-center-heartbeat-1] Kafka commitId: 73deb3aeb1f8647c (org.apache.kafka.common.utils.AppInfoParser)
INFO [control-center-heartbeat-1] Kafka startTimeMs: 1654853610852 (org.apache.kafka.common.utils.AppInfoParser)
INFO [kafka-coordinator-heartbeat-thread | _eim_c3_non_prod-4] [Consumer clientId=_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-5-consumer, groupId=_eim_c3_non_prod-4] Resetting offset for partition _eim_c3_non_prod-4-monitoring-message-rekey-store-7 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.251.6.2:9093 (id: 1002 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState)
INFO [kafka-coordinator-heartbeat-thread | _eim_c3_non_prod-4] [Consumer clientId=_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-5-consumer, groupId=_eim_c3_non_prod-4] Resetting offset for partition _eim_c3_non_prod-4-monitoring-trigger-event-rekey-7 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.251.6.2:9093 (id: 1002 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState)
INFO [kafka-coordinator-heartbeat-thread | _eim_c3_non_prod-4] [Consumer clientId=_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-5-consumer, groupId=_eim_c3_non_prod-4] Resetting offset for partition _eim_c3_non_prod-4-MonitoringStream-ONE_MINUTE-repartition-7 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.251.6.2:9093 (id: 1002 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState)
INFO [kafka-coordinator-heartbeat-thread | _eim_c3_non_prod-4] [Consumer clientId=_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-5-consumer, groupId=_eim_c3_non_prod-4] Resetting offset for partition _eim_c3_non_prod-4-aggregatedTopicPartitionTableWindows-ONE_MINUTE-repartition-7 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.251.6.1:9093 (id: 1001 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState)
and so on ...
INFO [kafka-coordinator-heartbeat-thread | _eim_c3_non_prod-4] [Consumer clientId=_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-10-consumer, groupId=_eim_c3_non_prod-4] Error sending fetch request (sessionId=INVALID, epoch=INITIAL) to node 1003: (org.apache.kafka.clients.FetchSessionHandler)
org.apache.kafka.common.errors.DisconnectException
INFO [_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-3] [Consumer clientId=_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-3-consumer, groupId=_eim_c3_non_prod-4] Error sending fetch request (sessionId=INVALID, epoch=INITIAL) to node 1002: (org.apache.kafka.clients.FetchSessionHandler)
org.apache.kafka.common.errors.DisconnectException
INFO [kafka-coordinator-heartbeat-thread | _eim_c3_non_prod-4] [Consumer clientId=_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-3-consumer, groupId=_eim_c3_non_prod-4] Error sending fetch request (sessionId=INVALID, epoch=INITIAL) to node 1001: (org.apache.kafka.clients.FetchSessionHandler)
org.apache.kafka.common.errors.DisconnectException
INFO [_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-10] [Consumer clientId=_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-10-consumer, groupId=_eim_c3_non_prod-4] Error sending fetch request (sessionId=INVALID, epoch=INITIAL) to node 1002: (org.apache.kafka.clients.FetchSessionHandler)
org.apache.kafka.common.errors.DisconnectException
INFO [kafka-coordinator-heartbeat-thread | _eim_c3_non_prod-4] [Consumer clientId=_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-5-consumer, groupId=_eim_c3_non_prod-4] Error sending fetch request (sessionId=1478925475, epoch=1) to node 1003: (org.apache.kafka.clients.FetchSessionHandler)
org.apache.kafka.common.errors.DisconnectException
INFO [kafka-coordinator-heartbeat-thread | _eim_c3_non_prod-4] [Consumer clientId=_eim_c3_non_prod-4-b6c9d6bd-717d-4559-bcfe-a4c9be647b7f-StreamThread-6-consumer, groupId=_eim_c3_non_prod-4] Error sending fetch request (sessionId=1947312909, epoch=1) to node 1002: (org.apache.kafka.clients.FetchSessionHandler)
org.apache.kafka.common.errors.DisconnectException
and so on ...
Any ideas what could be wrong here?
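One diagnostic worth trying (a sketch, not a confirmed root cause): all of the warnings above are timeouts and disconnects against the brokers on port 9093, so it would help to confirm that the new CCC host can complete an SSL handshake and an admin API call against each broker, for example with the stock Kafka CLI:

# client-ssl.properties is a placeholder for a client config pointing at the
# same keystore/truststore shown in the Jetty SslContextFactory log lines.
kafka-broker-api-versions --bootstrap-server 10.251.xx.xx:9093 --command-config client-ssl.properties

If this also hangs or times out, the problem is more likely network reachability (security groups) or SSL configuration between the new instance and the brokers than Control Center itself.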

Error occurred while using zkcopy to copy data from ZooKeeper

1. Using zkcopy, I run the following command:
java -jar target/zkcopy.jar --source xxxx:2181/clickhouse --target xxxx:2181/clickhouse
2. The command output is as follows:
2021-09-07 21:47:33,542 [main] INFO com.github.ksprojects.ZkCopy - using 10 concurrent workers to copy data
2021-09-07 21:47:33,542 [main] INFO com.github.ksprojects.ZkCopy - delete nodes = true
2021-09-07 21:47:33,542 [main] INFO com.github.ksprojects.ZkCopy - ignore ephemeral nodes = true
2021-09-07 21:47:33,543 [main] INFO com.github.ksprojects.zkcopy.reader.Reader - Reading /clickhouse from 10.201.226.32:2181
2021-09-07 21:47:34,590 [main] INFO com.github.ksprojects.zkcopy.reader.Reader - Processing, total=12417, processed=3142
2021-09-07 21:47:35,655 [main] INFO com.github.ksprojects.zkcopy.reader.Reader - Processing, total=12417, processed=6059
2021-09-07 21:47:36,655 [main] INFO com.github.ksprojects.zkcopy.reader.Reader - Processing, total=12417, processed=9172
2021-09-07 21:47:37,655 [main] INFO com.github.ksprojects.zkcopy.reader.Reader - Processing, total=12497, processed=12497
2021-09-07 21:47:37,657 [main] INFO com.github.ksprojects.zkcopy.reader.Reader - Completed.
2021-09-07 21:47:37,687 [main] INFO com.github.ksprojects.zkcopy.writer.Writer - Writing data...
2021-09-07 21:47:38,338 [main] INFO com.github.ksprojects.zkcopy.writer.Writer - Committing transaction
2021-09-07 21:47:38,954 [main] INFO com.github.ksprojects.zkcopy.writer.Writer - Committing transaction
Exception in thread "main" picocli.CommandLine$ExecutionException: Error while calling command (com.github.ksprojects.ZkCopy#7960847b): java.lang.RuntimeException: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at picocli.CommandLine.execute(CommandLine.java:458)
at picocli.CommandLine.access$300(CommandLine.java:134)
at picocli.CommandLine$RunLast.handleParseResult(CommandLine.java:538)
at picocli.CommandLine.parseWithHandlers(CommandLine.java:656)
at picocli.CommandLine.call(CommandLine.java:883)
at picocli.CommandLine.call(CommandLine.java:834)
at com.github.ksprojects.ZkCopy.main(ZkCopy.java:69)
Caused by: java.lang.RuntimeException: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at com.github.ksprojects.zkcopy.writer.AutoCommitTransactionWrapper.maybeCommitTransaction(AutoCommitTransactionWrapper.java:74)
at com.github.ksprojects.zkcopy.writer.AutoCommitTransactionWrapper.create(AutoCommitTransactionWrapper.java:39)
at com.github.ksprojects.zkcopy.writer.Writer.upsertNode(Writer.java:147)
at com.github.ksprojects.zkcopy.writer.Writer.update(Writer.java:96)
at com.github.ksprojects.zkcopy.writer.Writer.update(Writer.java:104)
at com.github.ksprojects.zkcopy.writer.Writer.update(Writer.java:104)
at com.github.ksprojects.zkcopy.writer.Writer.update(Writer.java:104)
at com.github.ksprojects.zkcopy.writer.Writer.update(Writer.java:104)
at com.github.ksprojects.zkcopy.writer.Writer.update(Writer.java:104)
at com.github.ksprojects.zkcopy.writer.Writer.write(Writer.java:65)
at com.github.ksprojects.ZkCopy.call(ZkCopy.java:86)
at com.github.ksprojects.ZkCopy.call(ZkCopy.java:14)
at picocli.CommandLine.execute(CommandLine.java:456)
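One common cause of ConnectionLoss at exactly this point (while the Writer is committing a transaction) is that a single batched transaction has exceeded ZooKeeper's jute.maxbuffer limit (about 1 MB by default), which makes the server drop the connection. A sketch of a retry with a larger client-side buffer; the 10 MB value is an arbitrary assumption, not a verified figure for this dataset:

# Raise the ZooKeeper client buffer limit before retrying the copy.
java -Djute.maxbuffer=10485760 -jar target/zkcopy.jar --source xxxx:2181/clickhouse --target xxxx:2181/clickhouse

jute.maxbuffer may also need to be raised on the target ZooKeeper servers themselves, since the server enforces the same limit on incoming requests.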

Solve Zookeeper NoSuchMethodError from Kafka

I am facing this error when I run /bin/kafka-server-start.sh /config/server.properties:
2020-02-27T13:23:34,540 ERROR [main] kafka.server.KafkaServer - [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown
java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.multi(Ljava/lang/Iterable;Lorg/apache/zookeeper/AsyncCallback$MultiCallback;Ljava/lang/Object;)V
at kafka.zookeeper.ZooKeeperClient.send(ZooKeeperClient.scala:239) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$2(ZooKeeperClient.scala:161) ~[kafka_2.12-2.4.0.jar:?]
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) ~[scala-library-2.12.10.jar:?]
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253) ~[kafka_2.12-2.4.0.jar:?]
at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:259) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$1(ZooKeeperClient.scala:161) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$1$adapted(ZooKeeperClient.scala:157) ~[kafka_2.12-2.4.0.jar:?]
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) ~[scala-library-2.12.10.jar:?]
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) ~[scala-library-2.12.10.jar:?]
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) ~[scala-library-2.12.10.jar:?]
at kafka.zookeeper.ZooKeeperClient.handleRequests(ZooKeeperClient.scala:157) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1691) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1678) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zk.KafkaZkClient.retryRequestUntilConnected(KafkaZkClient.scala:1673) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zk.KafkaZkClient$CheckedEphemeral.create(KafkaZkClient.scala:1743) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zk.KafkaZkClient.checkedEphemeralCreate(KafkaZkClient.scala:1720) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zk.KafkaZkClient.registerBroker(KafkaZkClient.scala:93) ~[kafka_2.12-2.4.0.jar:?]
at kafka.server.KafkaServer.startup(KafkaServer.scala:270) [kafka_2.12-2.4.0.jar:?]
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44) [kafka_2.12-2.4.0.jar:?]
at kafka.Kafka$.main(Kafka.scala:84) [kafka_2.12-2.4.0.jar:?]
at kafka.Kafka.main(Kafka.scala) [kafka_2.12-2.4.0.jar:?]
2020-02-27T13:23:34,549 INFO [main] kafka.server.KafkaServer - [KafkaServer id=0] shutting down
2020-02-27T13:23:34,556 INFO [main] kafka.network.SocketServer - [SocketServer brokerId=0] Stopping socket server request processors
2020-02-27T13:23:34,590 INFO [main] kafka.network.SocketServer - [SocketServer brokerId=0] Stopped socket server request processors
2020-02-27T13:23:34,606 INFO [main] kafka.server.ReplicaManager - [ReplicaManager broker=0] Shutting down
2020-02-27T13:23:34,610 INFO [main] kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutting down
2020-02-27T13:23:34,620 INFO [main] kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutdown completed
2020-02-27T13:23:34,623 INFO [main] kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 0] shutting down
2020-02-27T13:23:34,631 INFO [LogDirFailureHandler] kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Stopped
2020-02-27T13:23:34,631 INFO [main] kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 0] shutdown completed
2020-02-27T13:23:34,634 INFO [main] kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 0] shutting down
2020-02-27T13:23:34,646 INFO [main] kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 0] shutdown completed
2020-02-27T13:23:34,646 INFO [main] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-Fetch]: Shutting down
2020-02-27T13:23:34,774 INFO [ExpirationReaper-0-Fetch] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-Fetch]: Stopped
2020-02-27T13:23:34,774 INFO [main] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-Fetch]: Shutdown completed
2020-02-27T13:23:34,776 INFO [main] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-Produce]: Shutting down
2020-02-27T13:23:34,959 INFO [ExpirationReaper-0-Produce] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-Produce]: Stopped
2020-02-27T13:23:34,959 INFO [main] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-Produce]: Shutdown completed
2020-02-27T13:23:34,961 INFO [main] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-DeleteRecords]: Shutting down
2020-02-27T13:23:35,003 INFO [ExpirationReaper-0-DeleteRecords] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-DeleteRecords]: Stopped
2020-02-27T13:23:35,003 INFO [main] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-DeleteRecords]: Shutdown completed
2020-02-27T13:23:35,004 INFO [main] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-ElectLeader]: Shutting down
2020-02-27T13:23:35,012 INFO [ExpirationReaper-0-ElectLeader] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-ElectLeader]: Stopped
2020-02-27T13:23:35,013 INFO [main] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-ElectLeader]: Shutdown completed
2020-02-27T13:23:35,039 INFO [main] kafka.server.ReplicaManager - [ReplicaManager broker=0] Shut down completely
2020-02-27T13:23:35,042 INFO [main] kafka.log.LogManager - Shutting down.
2020-02-27T13:23:35,047 INFO [main] kafka.log.LogCleaner - Shutting down the log cleaner.
2020-02-27T13:23:35,048 INFO [main] kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutting down
2020-02-27T13:23:35,050 INFO [kafka-log-cleaner-thread-0] kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Stopped
2020-02-27T13:23:35,052 INFO [main] kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutdown completed
2020-02-27T13:23:35,118 INFO [main] kafka.log.LogManager - Shutdown complete.
2020-02-27T13:23:35,120 INFO [main] kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closing.
2020-02-27T13:23:35,145 INFO [main] org.apache.zookeeper.ZooKeeper - Session: 0x17085a21f120000 closed
2020-02-27T13:23:35,150 INFO [main] kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closed.
2020-02-27T13:23:35,151 INFO [main] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Fetch]: Shutting down
2020-02-27T13:23:35,152 INFO [main-EventThread] org.apache.zookeeper.ClientCnxn - EventThread shut down
2020-02-27T13:23:35,156 INFO [ThrottledChannelReaper-Fetch] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Fetch]: Stopped
2020-02-27T13:23:35,159 INFO [main] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Fetch]: Shutdown completed
2020-02-27T13:23:35,159 INFO [main] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Produce]: Shutting down
2020-02-27T13:23:36,160 INFO [ThrottledChannelReaper-Produce] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Produce]: Stopped
2020-02-27T13:23:36,161 INFO [main] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Produce]: Shutdown completed
2020-02-27T13:23:36,161 INFO [main] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Request]: Shutting down
2020-02-27T13:23:36,163 INFO [ThrottledChannelReaper-Request] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Request]: Stopped
2020-02-27T13:23:36,163 INFO [main] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Request]: Shutdown completed
2020-02-27T13:23:36,167 INFO [main] kafka.network.SocketServer - [SocketServer brokerId=0] Shutting down socket server
2020-02-27T13:23:36,311 INFO [main] kafka.network.SocketServer - [SocketServer brokerId=0] Shutdown completed
2020-02-27T13:23:36,326 INFO [main] kafka.server.KafkaServer - [KafkaServer id=0] shut down completed
2020-02-27T13:23:36,329 ERROR [main] kafka.server.KafkaServerStartable - Exiting Kafka.
2020-02-27T13:23:36,337 INFO [kafka-shutdown-hook] kafka.server.KafkaServer - [KafkaServer id=0] shutting down
ERROR [main] kafka.server.KafkaServer - [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown
I am facing this problem whenever I start the Kafka server.
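A NoSuchMethodError on org.apache.zookeeper.ZooKeeper.multi at broker registration usually indicates that an older zookeeper jar is being loaded ahead of the 3.5.x jar bundled with Kafka 2.4.0. A quick check, assuming Kafka is installed under $KAFKA_HOME:

# The bundled jar should be zookeeper-3.5.x:
ls $KAFKA_HOME/libs | grep -i zookeeper
# Make sure no external ZooKeeper installation leaks onto the classpath:
echo $CLASSPATH

If a stale zookeeper jar from an older installation shows up in either place, removing it (or unsetting CLASSPATH) before running kafka-server-start.sh should let the broker start.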

Sqoop installation export and import from postgresql

I've just installed Sqoop and was testing it. I tried to export some data from HDFS to PostgreSQL using Sqoop. When I run it, it throws the following exception: java.io.IOException: Can't export data, please check task tracker logs. I think there may also have been a problem with the installation.
The file content is:
ustNU 45
MB1bA 0
gNbCO 76
iZP10 39
B2aoo 45
SI7eG 93
5sC4k 60
2IhFV 2
u2A48 16
yvy6R 51
LNhsV 26
mZ2yn 65
80Gp3 43
Wk5Ag 85
VUfyp 93
P077j 94
f1Oj5 11
LxJkg 72
0H7NP 99
Dk406 25
g4KRp 76
Fw3U0 80
6LD59 1
07KHx 91
F1S88 72
Bnb0v 85
A2qM7 79
Z6cAt 81
0M3DO 23
m0s09 44
KIvwd 13
GNUD0 78
um93a 20
19bHv 75
4Of3s 75
5hFen 16
This is the Postgres table:
Table "public.mysort"
Column | Type | Modifiers
--------+---------+-----------
name | text |
marks | integer |
The sqoop command is:
sqoop export --connect jdbc:postgresql://localhost/testdb --username akshay --password akshay --table mysort -m 1 --export-dir MySort/input
Followed by the error:
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
14/06/11 18:28:06 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/06/11 18:28:06 INFO manager.SqlManager: Using default fetchSize of 1000
14/06/11 18:28:06 INFO tool.CodeGenTool: Beginning code generation
14/06/11 18:28:06 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM "mysort" AS t LIMIT 1
14/06/11 18:28:06 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/hadoop
Note: /tmp/sqoop-hduser/compile/0402ad4b5cf7980040264af35de406cb/mysort.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/06/11 18:28:07 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hduser/compile/0402ad4b5cf7980040264af35de406cb/mysort.jar
14/06/11 18:28:07 INFO mapreduce.ExportJobBase: Beginning export of mysort
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hbase/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/06/11 18:28:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/06/11 18:28:22 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/06/11 18:28:23 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/06/11 18:28:23 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
14/06/11 18:28:23 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/06/11 18:28:23 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/06/11 18:28:24 INFO input.FileInputFormat: Total input paths to process : 1
14/06/11 18:28:24 INFO input.FileInputFormat: Total input paths to process : 1
14/06/11 18:28:25 INFO mapreduce.JobSubmitter: number of splits:1
14/06/11 18:28:25 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1402488523460_0003
14/06/11 18:28:25 INFO impl.YarnClientImpl: Submitted application application_1402488523460_0003
14/06/11 18:28:25 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1402488523460_0003/
14/06/11 18:28:25 INFO mapreduce.Job: Running job: job_1402488523460_0003
14/06/11 18:28:46 INFO mapreduce.Job: Job job_1402488523460_0003 running in uber mode : false
14/06/11 18:28:46 INFO mapreduce.Job: map 0% reduce 0%
14/06/11 18:29:04 INFO mapreduce.Job: Task Id : attempt_1402488523460_0003_m_000000_0, Status : FAILED
Error: java.io.IOException: Can't export data, please check task tracker logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.util.NoSuchElementException
at java.util.ArrayList$Itr.next(ArrayList.java:839)
at mysort.__loadFromFields(mysort.java:198)
at mysort.parse(mysort.java:147)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
... 10 more
14/06/11 18:29:23 INFO mapreduce.Job: Task Id : attempt_1402488523460_0003_m_000000_1, Status : FAILED
Error: java.io.IOException: Can't export data, please check task tracker logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.util.NoSuchElementException
at java.util.ArrayList$Itr.next(ArrayList.java:839)
at mysort.__loadFromFields(mysort.java:198)
at mysort.parse(mysort.java:147)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
... 10 more
14/06/11 18:29:42 INFO mapreduce.Job: Task Id : attempt_1402488523460_0003_m_000000_2, Status : FAILED
Error: java.io.IOException: Can't export data, please check task tracker logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.util.NoSuchElementException
at java.util.ArrayList$Itr.next(ArrayList.java:839)
at mysort.__loadFromFields(mysort.java:198)
at mysort.parse(mysort.java:147)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
... 10 more
14/06/11 18:30:03 INFO mapreduce.Job: map 100% reduce 0%
14/06/11 18:30:03 INFO mapreduce.Job: Job job_1402488523460_0003 failed with state FAILED due to: Task failed task_1402488523460_0003_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
14/06/11 18:30:03 INFO mapreduce.Job: Counters: 9
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=3
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=69336
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=69336
Total vcore-seconds taken by all map tasks=69336
Total megabyte-seconds taken by all map tasks=71000064
14/06/11 18:30:03 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
14/06/11 18:30:03 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 100.1476 seconds (0 bytes/sec)
14/06/11 18:30:03 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/06/11 18:30:03 INFO mapreduce.ExportJobBase: Exported 0 records.
14/06/11 18:30:03 ERROR tool.ExportTool: Error during export: Export job failed!
This is the log file:
2014-06-11 17:54:37,601 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-06-11 17:54:37,602 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-06-11 17:54:52,678 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-06-11 17:54:52,777 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-06-11 17:54:52,846 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-06-11 17:54:52,847 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
2014-06-11 17:54:52,855 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens:
2014-06-11 17:54:52,855 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1402488523460_0002, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier#971d0d8)
2014-06-11 17:54:52,901 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2014-06-11 17:54:53,165 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1402488523460_0002
2014-06-11 17:54:53,249 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-06-11 17:54:53,249 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-06-11 17:54:53,393 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2014-06-11 17:54:53,689 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2014-06-11 17:54:53,899 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: Paths:/user/hduser/MySort/input/data.txt:0+891082
2014-06-11 17:54:53,904 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.file is deprecated. Instead, use mapreduce.map.input.file
2014-06-11 17:54:53,904 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.start is deprecated. Instead, use mapreduce.map.input.start
2014-06-11 17:54:53,904 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.length is deprecated. Instead, use mapreduce.map.input.length
2014-06-11 17:54:54,028 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2014-06-11 17:54:54,028 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: Exception raised during data export
2014-06-11 17:54:54,028 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2014-06-11 17:54:54,028 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: Exception:
java.util.NoSuchElementException
at java.util.ArrayList$Itr.next(ArrayList.java:839)
at mysort.__loadFromFields(mysort.java:198)
at mysort.parse(mysort.java:147)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
2014-06-11 17:54:54,030 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: On input: ustNU 45
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: On input file: hdfs://localhost:9000/user/hduser/MySort/input/data.txt
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: At position 0
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: Currently processing split:
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: Paths:/user/hduser/MySort/input/data.txt:0+891082
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: This issue might not necessarily be caused by current input
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: due to the batching nature of export.
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2014-06-11 17:54:54,032 INFO [Thread-12] org.apache.sqoop.mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
2014-06-11 17:54:54,033 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser (auth:SIMPLE) cause:java.io.IOException: Can't export data, please check task tracker logs
2014-06-11 17:54:54,033 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: Can't export data, please check task tracker logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.util.NoSuchElementException
at java.util.ArrayList$Itr.next(ArrayList.java:839)
at mysort.__loadFromFields(mysort.java:198)
at mysort.parse(mysort.java:147)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
... 10 more
2014-06-11 17:54:54,037 INFO [main] org.apache.hadoop.mapred.Task: Runnning cleanup for the task
Any help in resolving the issue is appreciated.
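Before reinstalling anything, note that the Caused by: java.util.NoSuchElementException coming out of mysort.__loadFromFields on input ustNU 45 typically means the generated parser ran out of fields: sqoop export splits each line on commas by default, while the file above appears to be whitespace-separated. A sketch of the same export with the delimiter made explicit (assuming the separator in data.txt is a tab; use ' ' instead if it is a single space):

sqoop export --connect jdbc:postgresql://localhost/testdb --username akshay --password akshay --table mysort -m 1 --export-dir MySort/input --input-fields-terminated-by '\t'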
Here is the complete procedure for installing Sqoop, together with import and export commands. Hopefully it will be helpful to someone. This has been tried and tested by me and actually works.
Download: apache.mirrors.tds.net/sqoop/1.4.4/sqoop-1.4.4.bin__hadoop-2.0.4-alpha.tar.gz
tar -xzf sqoop-1.4.4.bin__hadoop-2.0.4-alpha.tar.gz
sudo mv sqoop-1.4.4.bin__hadoop-2.0.4-alpha /usr/lib/sqoop
Copy and paste the following two lines into .bashrc:
export SQOOP_HOME=/usr/lib/sqoop
export PATH=$PATH:$SQOOP_HOME/bin
Go to the /usr/lib/sqoop/conf folder, copy sqoop-env-template.sh to a new file sqoop-env.sh, and modify export HADOOP_HOME, HBASE_HOME, etc. to point to the respective installation directories.
Download the PostgreSQL connector jar file from jdbc.postgresql.org/download/postgresql-9.3-1101.jdbc41.jar
Create a directory manager.d in sqoop/conf/.
Create a file named postgresql in that directory and add the following line to it:
org.postgresql.Driver=/usr/lib/sqoop/lib/postgresql-9.3-1101.jdbc41.jar
Adjust the connector jar file name in that line if your downloaded version differs.
For Export
Create a user in postgres:
createuser -P -s -e ace
Enter password for new role: ace
Enter it again: ace
CREATE DATABASE testdb OWNER ace TABLESPACE ace;
create table stud1(id int,name text);
Create a file student.txt
Add lines such as:
1,Ace
2,iloveapis
hadoop fs -put student.txt
sqoop export --connect jdbc:postgresql://localhost:5432/testdb --username ace --password ace --table stud1 -m 1 --export-dir student.txt
Check in Postgres: SELECT * FROM stud1;
For Import:
sqoop import --connect jdbc:postgresql://localhost:5432/testdb --username akshay --password akshay --table stud1 --m 1
hadoop fs -ls -R stud1
Expected Output:
-rw-r--r-- 1 hduser supergroup 0 2014-06-13 18:10 stud1/_SUCCESS
-rw-r--r-- 1 hduser supergroup 21 2014-06-13 18:10 stud1/part-m-00000
hadoop fs -cat stud1/part-m-00000
Expected Output:
1,Ace
2,iloveapis
hadoop fs -copyToLocal stud1/part-m-00000 $HOME/imported_data.txt

com.thinkaurelius.titan.core.TitanException: Could not acquire new ID block from storage

I am running a simple program:
TitanGraph bg = TitanFactory.open("/home/titan-all-0.4.2/conf/titan-cassandra-es.properties");
IdGraph g = new IdGraph(bg, true, false);
Vertex v = g.addVertex("xyz132456");
g.commit();
No vertex xyz132456 exists beforehand, but I am getting the following exception, and I get it very frequently. My Titan server (0.4.2) is configured with all default settings, as I am just testing simple operations.
253 [main] INFO org.elasticsearch.node - [Chtylok] version[0.90.5], pid[4015], build[c8714e8/2013-09-17T12:50:20Z]
253 [main] INFO org.elasticsearch.node - [Chtylok] initializing ...
260 [main] INFO org.elasticsearch.plugins - [Chtylok] loaded [], sites []
2303 [main] INFO org.elasticsearch.node - [Chtylok] initialized
2303 [main] INFO org.elasticsearch.node - [Chtylok] starting ...
2309 [main] INFO org.elasticsearch.transport - [Chtylok] bound_address {local[1]}, publish_address {local[1]}
2316 [elasticsearch[Chtylok][clusterService#updateTask][T#1]] INFO org.elasticsearch.cluster.service - [Chtylok] new_master [Chtylok][1][local[1]]{local=true}, reason: local-disco-initial_connect(master)
2325 [main] INFO org.elasticsearch.discovery - [Chtylok] elasticsearch/1
2420 [main] INFO org.elasticsearch.http - [Chtylok] bound_address {inet[/0:0:0:0:0:0:0:0:9201]}, publish_address {inet[/10.0.0.5:9201]}
2420 [main] INFO org.elasticsearch.node - [Chtylok] started
2987 [elasticsearch[Chtylok][clusterService#updateTask][T#1]] INFO org.elasticsearch.gateway - [Chtylok] recovered [1] indices into cluster_state
3234 [main] INFO com.thinkaurelius.titan.diskstorage.Backend - Initiated backend operations thread pool of size 2
3542 [main] INFO com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration - Configuring edge store cache size: 201476830
5105 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [1000 ms]
6105 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [2000 ms]
7105 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [3000 ms]
8106 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [4001 ms]
9106 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [5001 ms]
10106 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [6001 ms]
11107 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [7002 ms]
12107 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [8002 ms]
13107 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [9002 ms]
14108 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [10003 ms]
Exception in thread "Thread-4" com.thinkaurelius.titan.core.TitanException: Could not acquire new ID block from storage
at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.renewBuffer(StandardIDPool.java:117)
at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.access$100(StandardIDPool.java:14)
at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool$IDBlockThread.run(StandardIDPool.java:172)
Caused by: com.thinkaurelius.titan.diskstorage.PermanentStorageException: Permanent failure in storage backend
at com.thinkaurelius.titan.diskstorage.cassandra.thrift.CassandraThriftKeyColumnValueStore.convertException(CassandraThriftKeyColumnValueStore.java:311)
at com.thinkaurelius.titan.diskstorage.cassandra.thrift.CassandraThriftKeyColumnValueStore.getNamesSlice(CassandraThriftKeyColumnValueStore.java:196)
at com.thinkaurelius.titan.diskstorage.cassandra.thrift.CassandraThriftKeyColumnValueStore.getSlice(CassandraThriftKeyColumnValueStore.java:120)
at com.thinkaurelius.titan.diskstorage.idmanagement.ConsistentKeyIDManager$1.call(ConsistentKeyIDManager.java:106)
at com.thinkaurelius.titan.diskstorage.idmanagement.ConsistentKeyIDManager$1.call(ConsistentKeyIDManager.java:103)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:90)
at com.thinkaurelius.titan.diskstorage.idmanagement.ConsistentKeyIDManager.getCurrentID(ConsistentKeyIDManager.java:103)
at com.thinkaurelius.titan.diskstorage.idmanagement.ConsistentKeyIDManager.getIDBlock(ConsistentKeyIDManager.java:159)
at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.renewBuffer(StandardIDPool.java:111)
... 2 more
Caused by: TimedOutException()
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:11623)
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:11560)
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result.read(Cassandra.java:11486)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_multiget_slice(Cassandra.java:701)
at org.apache.cassandra.thrift.Cassandra$Client.multiget_slice(Cassandra.java:685)
at com.thinkaurelius.titan.diskstorage.cassandra.thrift.CassandraThriftKeyColumnValueStore.getNamesSlice(CassandraThriftKeyColumnValueStore.java:176)
... 9 more
Exception in thread "main" java.lang.IllegalArgumentException: -1
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.nextBlock(StandardIDPool.java:88)
at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.nextID(StandardIDPool.java:134)
at com.thinkaurelius.titan.graphdb.database.idassigner.VertexIDAssigner.assignID(VertexIDAssigner.java:269)
at com.thinkaurelius.titan.graphdb.database.idassigner.VertexIDAssigner.assignID(VertexIDAssigner.java:155)
at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.assignID(StandardTitanGraph.java:226)
at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.addPropertyInternal(StandardTitanTx.java:521)
at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.setProperty(StandardTitanTx.java:552)
at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.addProperty(StandardTitanTx.java:495)
at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.addVertex(StandardTitanTx.java:345)
at com.thinkaurelius.titan.graphdb.blueprints.TitanBlueprintsTransaction.addVertex(TitanBlueprintsTransaction.java:72)
at com.thinkaurelius.titan.graphdb.blueprints.TitanBlueprintsGraph.addVertex(TitanBlueprintsGraph.java:157)
at com.thinkaurelius.titan.graphdb.blueprints.TitanBlueprintsGraph.addVertex(TitanBlueprintsGraph.java:24)
at com.tinkerpop.blueprints.util.wrappers.id.IdGraph.addVertex(IdGraph.java:131)
at newpackage.CityBizzStarting.main(CityBizzStarting.java:24)
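As a closing note on diagnosis: both failures above stem from ID-block allocation timing out against Cassandra (TimedOutException from multiget_slice), so the first thing to check is whether the local Cassandra node is healthy and keeping up. A sketch, assuming a default single-node install with nodetool on the PATH:

nodetool status    # the node should show Up/Normal (UN)
nodetool tpstats   # look for dropped messages or backed-up read stages

If Cassandra itself is healthy, giving Titan's ID allocation more headroom can help, e.g. increasing ids.block-size in the graph properties file so that ID blocks are requested less often; the right value is workload-dependent.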