Error while using zkcopy to copy data from ZooKeeper - apache-zookeeper

1. I am using zkcopy and running the following command:
java -jar target/zkcopy.jar --source xxxx:2181/clickhouse --target xxxx:2181/clickhouse
2. The command produces the following output:
2021-09-07 21:47:33,542 [main] INFO com.github.ksprojects.ZkCopy - using 10 concurrent workers to copy data
2021-09-07 21:47:33,542 [main] INFO com.github.ksprojects.ZkCopy - delete nodes = true
2021-09-07 21:47:33,542 [main] INFO com.github.ksprojects.ZkCopy - ignore ephemeral nodes = true
2021-09-07 21:47:33,543 [main] INFO com.github.ksprojects.zkcopy.reader.Reader - Reading /clickhouse from 10.201.226.32:2181
2021-09-07 21:47:34,590 [main] INFO com.github.ksprojects.zkcopy.reader.Reader - Processing, total=12417, processed=3142
2021-09-07 21:47:35,655 [main] INFO com.github.ksprojects.zkcopy.reader.Reader - Processing, total=12417, processed=6059
2021-09-07 21:47:36,655 [main] INFO com.github.ksprojects.zkcopy.reader.Reader - Processing, total=12417, processed=9172
2021-09-07 21:47:37,655 [main] INFO com.github.ksprojects.zkcopy.reader.Reader - Processing, total=12497, processed=12497
2021-09-07 21:47:37,657 [main] INFO com.github.ksprojects.zkcopy.reader.Reader - Completed.
2021-09-07 21:47:37,687 [main] INFO com.github.ksprojects.zkcopy.writer.Writer - Writing data...
2021-09-07 21:47:38,338 [main] INFO com.github.ksprojects.zkcopy.writer.Writer - Committing transaction
2021-09-07 21:47:38,954 [main] INFO com.github.ksprojects.zkcopy.writer.Writer - Committing transaction
Exception in thread "main" picocli.CommandLine$ExecutionException: Error while calling command (com.github.ksprojects.ZkCopy#7960847b): java.lang.RuntimeException: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at picocli.CommandLine.execute(CommandLine.java:458)
at picocli.CommandLine.access$300(CommandLine.java:134)
at picocli.CommandLine$RunLast.handleParseResult(CommandLine.java:538)
at picocli.CommandLine.parseWithHandlers(CommandLine.java:656)
at picocli.CommandLine.call(CommandLine.java:883)
at picocli.CommandLine.call(CommandLine.java:834)
at com.github.ksprojects.ZkCopy.main(ZkCopy.java:69)
Caused by: java.lang.RuntimeException: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at com.github.ksprojects.zkcopy.writer.AutoCommitTransactionWrapper.maybeCommitTransaction(AutoCommitTransactionWrapper.java:74)
at com.github.ksprojects.zkcopy.writer.AutoCommitTransactionWrapper.create(AutoCommitTransactionWrapper.java:39)
at com.github.ksprojects.zkcopy.writer.Writer.upsertNode(Writer.java:147)
at com.github.ksprojects.zkcopy.writer.Writer.update(Writer.java:96)
at com.github.ksprojects.zkcopy.writer.Writer.update(Writer.java:104)
at com.github.ksprojects.zkcopy.writer.Writer.update(Writer.java:104)
at com.github.ksprojects.zkcopy.writer.Writer.update(Writer.java:104)
at com.github.ksprojects.zkcopy.writer.Writer.update(Writer.java:104)
at com.github.ksprojects.zkcopy.writer.Writer.update(Writer.java:104)
at com.github.ksprojects.zkcopy.writer.Writer.write(Writer.java:65)
at com.github.ksprojects.ZkCopy.call(ZkCopy.java:86)
at com.github.ksprojects.ZkCopy.call(ZkCopy.java:14)
at picocli.CommandLine.execute(CommandLine.java:456)
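Note on the failure point: a ConnectionLoss that appears only while the Writer is "Committing transaction" is often (though not always) a sign that the committed multi() packet exceeds the target ZooKeeper server's jute.maxbuffer limit (roughly 1 MB by default), for example when large ClickHouse metadata znodes are batched together. Below is a minimal sketch of raising that limit; this is an assumption about the cause, not something the log confirms, the 4 MB value is only illustrative, and it assumes a stock ZooKeeper install where zkServer.sh honors SERVER_JVMFLAGS.

# On the target ZooKeeper server (hypothetical value; size it to your largest batch):
export SERVER_JVMFLAGS="-Djute.maxbuffer=4194304"
bin/zkServer.sh restart
# The same property can also be passed to the zkcopy JVM if large znodes must be read back:
java -Djute.maxbuffer=4194304 -jar target/zkcopy.jar --source xxxx:2181/clickhouse --target xxxx:2181/clickhouse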

Related

Solve Zookeeper NoSuchMethodError from Kafka

I am facing this error when I run /bin/kafka-server-start.sh /config/server.properties:
2020-02-27T13:23:34,540 ERROR [main] kafka.server.KafkaServer - [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown
java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.multi(Ljava/lang/Iterable;Lorg/apache/zookeeper/AsyncCallback$MultiCallback;Ljava/lang/Object;)V
at kafka.zookeeper.ZooKeeperClient.send(ZooKeeperClient.scala:239) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$2(ZooKeeperClient.scala:161) ~[kafka_2.12-2.4.0.jar:?]
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) ~[scala-library-2.12.10.jar:?]
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253) ~[kafka_2.12-2.4.0.jar:?]
at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:259) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$1(ZooKeeperClient.scala:161) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$1$adapted(ZooKeeperClient.scala:157) ~[kafka_2.12-2.4.0.jar:?]
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) ~[scala-library-2.12.10.jar:?]
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) ~[scala-library-2.12.10.jar:?]
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) ~[scala-library-2.12.10.jar:?]
at kafka.zookeeper.ZooKeeperClient.handleRequests(ZooKeeperClient.scala:157) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1691) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1678) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zk.KafkaZkClient.retryRequestUntilConnected(KafkaZkClient.scala:1673) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zk.KafkaZkClient$CheckedEphemeral.create(KafkaZkClient.scala:1743) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zk.KafkaZkClient.checkedEphemeralCreate(KafkaZkClient.scala:1720) ~[kafka_2.12-2.4.0.jar:?]
at kafka.zk.KafkaZkClient.registerBroker(KafkaZkClient.scala:93) ~[kafka_2.12-2.4.0.jar:?]
at kafka.server.KafkaServer.startup(KafkaServer.scala:270) [kafka_2.12-2.4.0.jar:?]
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44) [kafka_2.12-2.4.0.jar:?]
at kafka.Kafka$.main(Kafka.scala:84) [kafka_2.12-2.4.0.jar:?]
at kafka.Kafka.main(Kafka.scala) [kafka_2.12-2.4.0.jar:?]
2020-02-27T13:23:34,549 INFO [main] kafka.server.KafkaServer - [KafkaServer id=0] shutting down
2020-02-27T13:23:34,556 INFO [main] kafka.network.SocketServer - [SocketServer brokerId=0] Stopping socket server request processors
2020-02-27T13:23:34,590 INFO [main] kafka.network.SocketServer - [SocketServer brokerId=0] Stopped socket server request processors
2020-02-27T13:23:34,606 INFO [main] kafka.server.ReplicaManager - [ReplicaManager broker=0] Shutting down
2020-02-27T13:23:34,610 INFO [main] kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutting down
2020-02-27T13:23:34,620 INFO [main] kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutdown completed
2020-02-27T13:23:34,623 INFO [main] kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 0] shutting down
2020-02-27T13:23:34,631 INFO [LogDirFailureHandler] kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Stopped
2020-02-27T13:23:34,631 INFO [main] kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 0] shutdown completed
2020-02-27T13:23:34,634 INFO [main] kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 0] shutting down
2020-02-27T13:23:34,646 INFO [main] kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 0] shutdown completed
2020-02-27T13:23:34,646 INFO [main] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-Fetch]: Shutting down
2020-02-27T13:23:34,774 INFO [ExpirationReaper-0-Fetch] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-Fetch]: Stopped
2020-02-27T13:23:34,774 INFO [main] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-Fetch]: Shutdown completed
2020-02-27T13:23:34,776 INFO [main] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-Produce]: Shutting down
2020-02-27T13:23:34,959 INFO [ExpirationReaper-0-Produce] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-Produce]: Stopped
2020-02-27T13:23:34,959 INFO [main] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-Produce]: Shutdown completed
2020-02-27T13:23:34,961 INFO [main] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-DeleteRecords]: Shutting down
2020-02-27T13:23:35,003 INFO [ExpirationReaper-0-DeleteRecords] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-DeleteRecords]: Stopped
2020-02-27T13:23:35,003 INFO [main] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-DeleteRecords]: Shutdown completed
2020-02-27T13:23:35,004 INFO [main] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-ElectLeader]: Shutting down
2020-02-27T13:23:35,012 INFO [ExpirationReaper-0-ElectLeader] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-ElectLeader]: Stopped
2020-02-27T13:23:35,013 INFO [main] kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-ElectLeader]: Shutdown completed
2020-02-27T13:23:35,039 INFO [main] kafka.server.ReplicaManager - [ReplicaManager broker=0] Shut down completely
2020-02-27T13:23:35,042 INFO [main] kafka.log.LogManager - Shutting down.
2020-02-27T13:23:35,047 INFO [main] kafka.log.LogCleaner - Shutting down the log cleaner.
2020-02-27T13:23:35,048 INFO [main] kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutting down
2020-02-27T13:23:35,050 INFO [kafka-log-cleaner-thread-0] kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Stopped
2020-02-27T13:23:35,052 INFO [main] kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutdown completed
2020-02-27T13:23:35,118 INFO [main] kafka.log.LogManager - Shutdown complete.
2020-02-27T13:23:35,120 INFO [main] kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closing.
2020-02-27T13:23:35,145 INFO [main] org.apache.zookeeper.ZooKeeper - Session: 0x17085a21f120000 closed
2020-02-27T13:23:35,150 INFO [main] kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closed.
2020-02-27T13:23:35,151 INFO [main] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Fetch]: Shutting down
2020-02-27T13:23:35,152 INFO [main-EventThread] org.apache.zookeeper.ClientCnxn - EventThread shut down
2020-02-27T13:23:35,156 INFO [ThrottledChannelReaper-Fetch] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Fetch]: Stopped
2020-02-27T13:23:35,159 INFO [main] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Fetch]: Shutdown completed
2020-02-27T13:23:35,159 INFO [main] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Produce]: Shutting down
2020-02-27T13:23:36,160 INFO [ThrottledChannelReaper-Produce] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Produce]: Stopped
2020-02-27T13:23:36,161 INFO [main] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Produce]: Shutdown completed
2020-02-27T13:23:36,161 INFO [main] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Request]: Shutting down
2020-02-27T13:23:36,163 INFO [ThrottledChannelReaper-Request] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Request]: Stopped
2020-02-27T13:23:36,163 INFO [main] kafka.server.ClientQuotaManager$ThrottledChannelReaper - [ThrottledChannelReaper-Request]: Shutdown completed
2020-02-27T13:23:36,167 INFO [main] kafka.network.SocketServer - [SocketServer brokerId=0] Shutting down socket server
2020-02-27T13:23:36,311 INFO [main] kafka.network.SocketServer - [SocketServer brokerId=0] Shutdown completed
2020-02-27T13:23:36,326 INFO [main] kafka.server.KafkaServer - [KafkaServer id=0] shut down completed
2020-02-27T13:23:36,329 ERROR [main] kafka.server.KafkaServerStartable - Exiting Kafka.
2020-02-27T13:23:36,337 INFO [kafka-shutdown-hook] kafka.server.KafkaServer - [KafkaServer id=0] shutting down
ERROR [main] kafka.server.KafkaServer - [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown
I am facing this problem when I start the Kafka server.
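A NoSuchMethodError on org.apache.zookeeper.ZooKeeper.multi typically means an older ZooKeeper client jar on the classpath is shadowing the one bundled with Kafka 2.4.0 (a zookeeper-3.5.x jar). A quick, hypothetical check from the Kafka installation directory, assuming the standard layout:

# The ZooKeeper client shipped with this Kafka distribution (expect zookeeper-3.5.x):
ls libs/ | grep -i zookeeper
# Verify no older zookeeper jar is being injected through the environment:
echo "$CLASSPATH"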

Issue while setting up a NiFi cluster

I followed the tutorial below to set up a NiFi cluster. I performed everything as mentioned, but I am receiving the error below.
https://pierrevillard.com/2016/08/13/apache-nifi-1-0-0-cluster-setup/
ZooKeeper was unable to elect a leader, and I am getting the following error message.
2018-04-20 09:27:40,942 INFO [main] org.wali.MinimalLockingWriteAheadLog Successfully recovered 0 records in 3 milliseconds
2018-04-20 09:27:41,007 INFO [main] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog#27caa186 checkpointed with 0 Records and 0 Swap Files in 65 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 7 millis), max Transaction ID -1
2018-04-20 09:27:41,300 INFO [main] o.a.z.server.DatadirCleanupManager autopurge.snapRetainCount set to 30
2018-04-20 09:27:41,300 INFO [main] o.a.z.server.DatadirCleanupManager autopurge.purgeInterval set to 24
2018-04-20 09:27:41,311 INFO [main] o.a.n.c.s.server.ZooKeeperStateServer Starting Embedded ZooKeeper Peer
2018-04-20 09:27:41,319 INFO [PurgeTask] o.a.z.server.DatadirCleanupManager Purge task started.
2018-04-20 09:27:41,343 INFO [PurgeTask] o.a.z.server.DatadirCleanupManager Purge task completed.
2018-04-20 09:27:41,621 INFO [main] o.apache.nifi.controller.FlowController Checking if there is already a Cluster Coordinator Elected...
2018-04-20 09:27:41,907 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2018-04-20 09:27:49,882 WARN [main] o.a.n.c.l.e.CuratorLeaderElectionManager Unable to determine the Elected Leader for role 'Cluster Coordinator' due to org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /nifi/leaders/Cluster Coordinator; assuming no leader has been elected
2018-04-20 09:27:49,885 INFO [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
2018-04-20 09:27:49,986 INFO [main] o.apache.nifi.controller.FlowController It appears that no Cluster Coordinator has been Elected yet. Registering for Cluster Coordinator Role.
2018-04-20 09:27:49,988 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=true] Registered new Leader Selector for role Cluster Coordinator; this node is an active participant in the election.
2018-04-20 09:27:49,988 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2018-04-20 09:27:49,997 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Registered new Leader Selector for role Cluster Coordinator; this node is an active participant in the election.
2018-04-20 09:27:49,997 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] started
2018-04-20 09:27:49,997 INFO [main] o.a.n.c.c.h.AbstractHeartbeatMonitor Heartbeat Monitor started
2018-04-20 09:27:58,179 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#76d9fe26{/nifi-api,file:///root/nifi-1.6.0/work/jetty/nifi-web-api-1.6.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.6.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-api-1.6.0.war}
2018-04-20 09:27:59,657 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=861ms
2018-04-20 09:27:59,834 INFO [main] o.e.j.C./nifi-content-viewer No Spring WebApplicationInitializer types detected on classpath
2018-04-20 09:27:59,840 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#43b41cb9{/nifi-content-viewer,file:///root/nifi-1.6.0/work/jetty/nifi-web-content-viewer-1.6.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.6.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-content-viewer-1.6.0.war}
2018-04-20 09:27:59,863 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.s.h.ContextHandler#49825659{/nifi-docs,null,AVAILABLE}
2018-04-20 09:28:00,079 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=69ms
2018-04-20 09:28:00,083 INFO [main] o.e.jetty.ContextHandler./nifi-docs No Spring WebApplicationInitializer types detected on classpath
2018-04-20 09:28:00,171 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#2ab0ca04{/nifi-docs,file:///root/nifi-1.6.0/work/jetty/nifi-web-docs-1.6.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.6.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-docs-1.6.0.war}
2018-04-20 09:28:00,343 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=119ms
2018-04-20 09:28:00,344 INFO [main] org.eclipse.jetty.ContextHandler./ No Spring WebApplicationInitializer types detected on classpath
2018-04-20 09:28:00,454 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#67e255cf{/,file:///root/nifi-1.6.0/work/jetty/nifi-web-error-1.6.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.6.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-error-1.6.0.war}
2018-04-20 09:28:00,541 INFO [main] o.eclipse.jetty.server.AbstractConnector Started ServerConnector#6d9f624{HTTP/1.1,[http/1.1]}{node-1:8080}
2018-04-20 09:28:00,559 INFO [main] org.eclipse.jetty.server.Server Started #98628ms
2018-04-20 09:28:00,599 INFO [main] org.apache.nifi.web.server.JettyServer Loading Flow...
2018-04-20 09:28:00,688 INFO [main] org.apache.nifi.io.socket.SocketListener Now listening for connections from nodes on port 9999
2018-04-20 09:28:00,916 INFO [main] o.apache.nifi.controller.FlowController Successfully synchronized controller with proposed flow
2018-04-20 09:28:01,337 INFO [main] o.a.nifi.controller.StandardFlowService Connecting Node: node-1:8080
2018-04-20 09:28:07,922 INFO [Curator-Framework-0] o.a.c.f.state.ConnectionStateManager State change: SUSPENDED
2018-04-20 09:28:07,927 INFO [Curator-ConnectionStateManager-0] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener#188207a7 Connection State changed to SUSPENDED
2018-04-20 09:28:07,930 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:728)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:857)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:809)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:64)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:267)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-04-19 15:56:55,209 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:728)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:857)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:809)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:64)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:267)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I set the following properties on all 3 nodes:
mkdir ./state
mkdir ./state/zookeeper
echo 1 > ./state/zookeeper/myid (2 & 3 for other nodes)
in zookeeper.properties
server.1=node-1:2888:3888
server.2=node-2:2888:3888
server.3=node-3:2888:3888
in nifi.properties
nifi.state.management.embedded.zookeeper.start=true
nifi.zookeeper.connect.string=node-1:2181,node-2:2181,node-3:2181
nifi.cluster.protocol.is.secure=false
nifi.cluster.is.node=true
nifi.cluster.node.address=node-1(node-2 & node-3)
nifi.cluster.node.protocol.port=9999
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.firewall.file=
nifi.remote.input.host=node-1(node-2 & node-3)
nifi.remote.input.secure=false
nifi.remote.input.socket.port=9998
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.web.http.host=node-1(node-2 & node-3)
After Bryan's comment, I was reminded that I had forgotten to disable iptables (I could probably just open the required ports instead, but since this is only for practice I am disabling iptables), and it is working now.
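For reference, the open-the-ports alternative would look roughly like the sketch below on each node. This assumes firewalld is managing the rules (with raw iptables the commands differ), and the port list simply mirrors the zookeeper.properties and nifi.properties values above.

# Embedded ZooKeeper client and quorum ports
firewall-cmd --permanent --add-port=2181/tcp --add-port=2888/tcp --add-port=3888/tcp
# NiFi web UI, cluster protocol, and site-to-site ports
firewall-cmd --permanent --add-port=8080/tcp --add-port=9998-9999/tcp
firewall-cmd --reload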

UnresolvedAddressException in Milo

I'm trying to connect from a basic Milo client (ReadSample), but I am getting an UnresolvedAddressException. Both client and server are on a remote network, and I only have access to the client. I'm pretty sure it's not a firewall issue, since I can connect with other clients (Prosys OPC UA Client), and I can see in the logs that the IP is resolved to a host name:
The server is opc.tcp://192.168.115.40:49580, aka opc.tcp://Extern-Mess-Rec:49580 (I tried both in UaTcpStackClient.getEndpoints(url).get();).
13:24:51.530 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
13:24:51.546 [main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 8
13:24:51.561 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
13:24:51.561 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
13:24:51.561 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
13:24:51.561 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: true
13:24:51.561 [main] DEBUG io.netty.util.internal.PlatformDependent - Platform: Windows
13:24:51.561 [main] DEBUG io.netty.util.internal.PlatformDependent - Java version: 8
13:24:51.561 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noUnsafe: false
13:24:51.561 [main] DEBUG io.netty.util.internal.PlatformDependent - sun.misc.Unsafe: available
13:24:51.561 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noJavassist: false
13:24:51.686 [main] DEBUG io.netty.util.internal.PlatformDependent - Javassist: available
13:24:51.686 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.tmpdir: C:\Users\SOFTWA~1\AppData\Local\Temp\3 (java.io.tmpdir)
13:24:51.686 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
13:24:51.686 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
13:24:51.718 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
13:24:51.718 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
13:24:51.858 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
13:24:51.858 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.maxRecords: 4
13:24:52.264 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 8
13:24:52.264 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 8
13:24:52.264 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
13:24:52.264 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
13:24:52.264 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
13:24:52.264 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.tinyCacheSize: 512
13:24:52.264 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
13:24:52.264 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
13:24:52.264 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
13:24:52.264 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
13:24:52.296 [main] DEBUG io.netty.util.internal.ThreadLocalRandom - -Dio.netty.initialSeedUniquifier: 0x35f32988e43eab85 (took 10 ms)
13:24:52.327 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: unpooled
13:24:52.327 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 65536
13:24:52.327 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
13:24:52.358 [ua-netty-event-loop-0] DEBUG io.netty.util.internal.JavassistTypeParameterMatcherGenerator - Generated: io.netty.util.internal.matchers.org.eclipse.milo.opcua.stack.client.handlers.UaRequestFutureMatcher
13:24:52.389 [ua-netty-event-loop-0] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.bytebuf.checkAccessible: true
13:24:52.858 [ua-netty-event-loop-0] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacity.default: 262144
13:24:52.890 [ua-netty-event-loop-0] DEBUG org.eclipse.milo.opcua.stack.client.handlers.UaTcpClientAcknowledgeHandler - Sent Hello message on channel=[id: 0xa0ec7fec, L:/130.83.225.169:58872 - R:/192.168.115.40:49580].
13:24:52.905 [ua-netty-event-loop-0] DEBUG org.eclipse.milo.opcua.stack.client.handlers.UaTcpClientAcknowledgeHandler - Received Acknowledge message on channel=[id: 0xa0ec7fec, L:/130.83.225.169:58872 - R:/192.168.115.40:49580].
13:24:52.921 [ua-netty-event-loop-0] DEBUG org.eclipse.milo.opcua.stack.client.handlers.UaTcpClientMessageHandler - OpenSecureChannel timeout scheduled for +5s
13:24:52.967 [ua-netty-event-loop-0] DEBUG org.eclipse.milo.opcua.stack.client.handlers.UaTcpClientMessageHandler - OpenSecureChannel timeout canceled
13:24:52.967 [ua-shared-pool-0] DEBUG org.eclipse.milo.opcua.stack.client.handlers.UaTcpClientMessageHandler - Sent OpenSecureChannelRequest (Issue, id=0, currentToken=-1, previousToken=-1).
13:24:52.999 [ua-shared-pool-1] DEBUG org.eclipse.milo.opcua.stack.client.handlers.UaTcpClientMessageHandler - Received OpenSecureChannelResponse.
13:24:52.999 [ua-shared-pool-1] DEBUG org.eclipse.milo.opcua.stack.client.handlers.UaTcpClientMessageHandler - SecureChannel id=1140, currentTokenId=1, previousTokenId=-1, lifetime=3600000ms, createdAt=DateTime{utcTime=131384570808248472, javaDate=Fri May 05 13:24:40 CEST 2017}
13:24:52.999 [ua-netty-event-loop-0] DEBUG org.eclipse.milo.opcua.stack.client.handlers.UaTcpClientMessageHandler - 0 message(s) queued before handshake completed; sending now.
13:24:52.999 [ForkJoinPool.commonPool-worker-1] DEBUG org.eclipse.milo.opcua.stack.client.ClientChannelManager - Channel bootstrap succeeded: localAddress=/130.83.225.169:58872, remoteAddress=/192.168.115.40:49580
13:24:53.061 [ForkJoinPool.commonPool-worker-1] DEBUG org.eclipse.milo.opcua.stack.client.ClientChannelManager - Sending CloseSecureChannelRequest...
13:24:53.061 [main] INFO org.eclipse.milo.examples.client.ClientExampleRunner - Using endpoint: opc.tcp://Extern-Mess-Rec:49580 [None]
13:24:53.077 [ua-netty-event-loop-0] DEBUG org.eclipse.milo.opcua.stack.client.ClientChannelManager - channelInactive(), disconnect complete
13:24:53.077 [ua-netty-event-loop-0] DEBUG org.eclipse.milo.opcua.stack.client.ClientChannelManager - disconnect complete, state set to Idle
13:24:53.124 [main] DEBUG org.eclipse.milo.opcua.sdk.client.OpcUaClient - Added ServiceFaultListener: org.eclipse.milo.opcua.sdk.client.ClientSessionManager$$Lambda$1049/664457955#58134517
13:24:53.171 [main] DEBUG org.eclipse.milo.opcua.sdk.client.OpcUaClient - Added SessionActivityListener: org.eclipse.milo.opcua.sdk.client.subscriptions.OpcUaSubscriptionManager$1#2d2e5f00
13:24:55.592 [ForkJoinPool.commonPool-worker-1] DEBUG org.eclipse.milo.opcua.stack.client.ClientChannelManager - Channel bootstrap failed: null
java.nio.channels.UnresolvedAddressException: null
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:209)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:207)
at io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1279)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:453)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:439)
at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:453)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:439)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:421)
at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:1024)
at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:203)
at io.netty.bootstrap.Bootstrap$2.run(Bootstrap.java:167)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:374)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
13:24:55.608 [main] ERROR org.eclipse.milo.examples.client.ClientExampleRunner - Error running client example: java.nio.channels.UnresolvedAddressException
java.util.concurrent.ExecutionException: java.nio.channels.UnresolvedAddressException
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at org.eclipse.milo.examples.client.ReadExample.run(ReadExample.java:43)
at org.eclipse.milo.examples.client.ClientExampleRunner.run(ClientExampleRunner.java:106)
at org.eclipse.milo.examples.client.ReadExample.main(ReadExample.java:35)
Caused by: java.nio.channels.UnresolvedAddressException: null
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:209)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:207)
at io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1279)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:453)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:439)
at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:453)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:439)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:421)
at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:1024)
at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:203)
at io.netty.bootstrap.Bootstrap$2.run(Bootstrap.java:167)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:374)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
The server you're getting endpoints from is probably returning "Extern-Mess-Rec" as its hostname, which you can't resolve.
See this answer for how to deal with that scenario.
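One common workaround (an assumption here, since the linked answer is not reproduced on this page) is to make the advertised hostname resolvable on the client machine, for example with a hosts-file entry, or to rewrite the endpoint URL to use the IP address before connecting. The hosts-file route, using the addresses from the question:

# On the Windows client, add this line to C:\Windows\System32\drivers\etc\hosts
# (on Linux/macOS the file is /etc/hosts):
192.168.115.40    Extern-Mess-Rec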

Storing HDFS data into MongoDB using Pig

I am new to Hadoop and have a requirement to store Hadoop data in MongoDB. I am using Pig to store the data from Hadoop into MongoDB.
I downloaded and registered the following drivers in the Pig Grunt shell with the commands below:
REGISTER /home/miracle/Downloads/mongo-hadoop-pig-2.0.2.jar
REGISTER /home/miracle/Downloads/mongo-java-driver-3.4.2.jar
REGISTER /home/miracle/Downloads/mongo-hadoop-core-2.0.2.jar
After this I successfully got the data from MongoDB using the following command.
raw = LOAD 'mongodb://localhost:27017/pavan.pavan.in' USING com.mongodb.hadoop.pig.MongoLoader;
Then I tried the following command to insert the data from a Pig bag into MongoDB, and it succeeded:
STORE student INTO 'mongodb://localhost:27017/pavan.persons_info' USING com.mongodb.hadoop.pig.MongoInsertStorage('','');
Then I tried a Mongo update using the command below:
STORE student INTO 'mongodb://localhost:27017/pavan.persons_info1' USING com.mongodb.hadoop.pig.MongoUpdateStorage(' ','{first:"\$firstname", last:"\$lastname", phone:"\$phone", city:"\$city"}','firstname: chararray,lastname: chararray,phone: chararray,city: chararray');
But I am getting the error below while running the above command:
2017-03-22 11:16:42,516 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2017-03-22 11:16:43,064 [main] INFO com.mongodb.hadoop.pig.MongoUpdateStorage - Store location config: Configuration: ; for namespace: pavan.persons_info1; hosts: [localhost:27017]
2017-03-22 11:16:43,180 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2017-03-22 11:16:43,306 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2017-03-22 11:16:43,308 [main] WARN org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
2017-03-22 11:16:43,309 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
2017-03-22 11:16:43,310 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2017-03-22 11:16:43,314 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2017-03-22 11:16:43,314 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2017-03-22 11:16:43,415 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2017-03-22 11:16:43,419 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2017-03-22 11:16:43,423 [main] INFO org.apache.pig.tools.pigstats.mapreduce.MRScriptState - Pig script settings are added to the job
2017-03-22 11:16:43,425 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2017-03-22 11:16:43,438 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - This job cannot be converted run in-process
2017-03-22 11:16:43,603 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/home/miracle/Downloads/mongo-java-driver-3.0.4.jar to DistributedCache through /tmp/temp159471787/tmp643027494/mongo-java-driver-3.0.4.jar
2017-03-22 11:16:43,687 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/home/miracle/Downloads/mongo-hadoop-core-2.0.2.jar to DistributedCache through /tmp/temp159471787/tmp-1745369112/mongo-hadoop-core-2.0.2.jar
2017-03-22 11:16:43,822 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/home/miracle/Downloads/mongo-hadoop-pig-2.0.2.jar to DistributedCache through /tmp/temp159471787/tmp116725398/mongo-hadoop-pig-2.0.2.jar
2017-03-22 11:16:44,693 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/local/pig/pig-0.16.0/pig-0.16.0-core-h2.jar to DistributedCache through /tmp/temp159471787/tmp499355324/pig-0.16.0-core-h2.jar
2017-03-22 11:16:44,762 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/local/pig/pig-0.16.0/lib/automaton-1.11-8.jar to DistributedCache through /tmp/temp159471787/tmp413788756/automaton-1.11-8.jar
2017-03-22 11:16:44,830 [DataStreamer for file /tmp/temp159471787/tmp-380031198/antlr-runtime-3.4.jar block BP-1303579226-127.0.1.1-1489750707340:blk_1073742392_1568] WARN org.apache.hadoop.hdfs.DFSClient - Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1249)
at java.lang.Thread.join(Thread.java:1323)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:609)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:577)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:573)
2017-03-22 11:16:44,856 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/local/pig/pig-0.16.0/lib/antlr-runtime-3.4.jar to DistributedCache through /tmp/temp159471787/tmp-380031198/antlr-runtime-3.4.jar
2017-03-22 11:16:44,960 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/local/pig/pig-0.16.0/lib/joda-time-2.9.3.jar to DistributedCache through /tmp/temp159471787/tmp1163422388/joda-time-2.9.3.jar
2017-03-22 11:16:44,996 [main] INFO com.mongodb.hadoop.pig.MongoUpdateStorage - Store location config: Configuration: ; for namespace: pavan.persons_info1; hosts: [localhost:27017]
2017-03-22 11:16:45,004 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2017-03-22 11:16:45,147 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2017-03-22 11:16:45,166 [JobControl] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2017-03-22 11:16:45,253 [JobControl] INFO com.mongodb.hadoop.pig.MongoUpdateStorage - Store location config: Configuration: ; for namespace: pavan.persons_info1; hosts: [localhost:27017]
2017-03-22 11:16:45,318 [JobControl] WARN org.apache.hadoop.mapreduce.JobResourceUploader - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2017-03-22 11:16:45,572 [JobControl] INFO org.apache.pig.builtin.PigStorage - Using PigTextInputFormat
2017-03-22 11:16:45,579 [JobControl] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2017-03-22 11:16:45,581 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2017-03-22 11:16:45,593 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2017-03-22 11:16:45,690 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
2017-03-22 11:16:45,884 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_local1070093788_0006
2017-03-22 11:16:47,476 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - Creating symlink: /tmp/hadoop-miracle/mapred/local/1490206606120/mongo-java-driver-3.0.4.jar <- /home/miracle/mongo-java-driver-3.0.4.jar
2017-03-22 11:16:47,534 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - Localized hdfs://localhost:9000/tmp/temp159471787/tmp643027494/mongo-java-driver-3.0.4.jar as file:/tmp/hadoop-miracle/mapred/local/1490206606120/mongo-java-driver-3.0.4.jar
2017-03-22 11:16:47,534 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - Creating symlink: /tmp/hadoop-miracle/mapred/local/1490206606121/mongo-hadoop-core-2.0.2.jar <- /home/miracle/mongo-hadoop-core-2.0.2.jar
2017-03-22 11:16:47,674 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - Localized hdfs://localhost:9000/tmp/temp159471787/tmp-1745369112/mongo-hadoop-core-2.0.2.jar as file:/tmp/hadoop-miracle/mapred/local/1490206606121/mongo-hadoop-core-2.0.2.jar
2017-03-22 11:16:48,194 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - Creating symlink: /tmp/hadoop-miracle/mapred/local/1490206606122/mongo-hadoop-pig-2.0.2.jar <- /home/miracle/mongo-hadoop-pig-2.0.2.jar
2017-03-22 11:16:48,201 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - Localized hdfs://localhost:9000/tmp/temp159471787/tmp116725398/mongo-hadoop-pig-2.0.2.jar as file:/tmp/hadoop-miracle/mapred/local/1490206606122/mongo-hadoop-pig-2.0.2.jar
2017-03-22 11:16:48,329 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - Creating symlink: /tmp/hadoop-miracle/mapred/local/1490206606123/pig-0.16.0-core-h2.jar <- /home/miracle/pig-0.16.0-core-h2.jar
2017-03-22 11:16:48,337 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - Localized hdfs://localhost:9000/tmp/temp159471787/tmp499355324/pig-0.16.0-core-h2.jar as file:/tmp/hadoop-miracle/mapred/local/1490206606123/pig-0.16.0-core-h2.jar
2017-03-22 11:16:48,338 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - Creating symlink: /tmp/hadoop-miracle/mapred/local/1490206606124/automaton-1.11-8.jar <- /home/miracle/automaton-1.11-8.jar
2017-03-22 11:16:48,370 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - Localized hdfs://localhost:9000/tmp/temp159471787/tmp413788756/automaton-1.11-8.jar as file:/tmp/hadoop-miracle/mapred/local/1490206606124/automaton-1.11-8.jar
2017-03-22 11:16:48,371 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - Creating symlink: /tmp/hadoop-miracle/mapred/local/1490206606125/antlr-runtime-3.4.jar <- /home/miracle/antlr-runtime-3.4.jar
2017-03-22 11:16:48,384 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - Localized hdfs://localhost:9000/tmp/temp159471787/tmp-380031198/antlr-runtime-3.4.jar as file:/tmp/hadoop-miracle/mapred/local/1490206606125/antlr-runtime-3.4.jar
2017-03-22 11:16:48,389 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - Creating symlink: /tmp/hadoop-miracle/mapred/local/1490206606126/joda-time-2.9.3.jar <- /home/miracle/joda-time-2.9.3.jar
2017-03-22 11:16:48,409 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - Localized hdfs://localhost:9000/tmp/temp159471787/tmp1163422388/joda-time-2.9.3.jar as file:/tmp/hadoop-miracle/mapred/local/1490206606126/joda-time-2.9.3.jar
2017-03-22 11:16:48,798 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - file:/tmp/hadoop-miracle/mapred/local/1490206606120/mongo-java-driver-3.0.4.jar
2017-03-22 11:16:48,803 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - file:/tmp/hadoop-miracle/mapred/local/1490206606121/mongo-hadoop-core-2.0.2.jar
2017-03-22 11:16:48,803 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - file:/tmp/hadoop-miracle/mapred/local/1490206606122/mongo-hadoop-pig-2.0.2.jar
2017-03-22 11:16:48,804 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - file:/tmp/hadoop-miracle/mapred/local/1490206606123/pig-0.16.0-core-h2.jar
2017-03-22 11:16:48,806 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - file:/tmp/hadoop-miracle/mapred/local/1490206606124/automaton-1.11-8.jar
2017-03-22 11:16:48,807 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - file:/tmp/hadoop-miracle/mapred/local/1490206606125/antlr-runtime-3.4.jar
2017-03-22 11:16:48,807 [JobControl] INFO org.apache.hadoop.mapred.LocalDistributedCacheManager - file:/tmp/hadoop-miracle/mapred/local/1490206606126/joda-time-2.9.3.jar
2017-03-22 11:16:48,807 [JobControl] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://localhost:8080/
2017-03-22 11:16:48,809 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_local1070093788_0006
2017-03-22 11:16:48,812 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases student1
2017-03-22 11:16:48,812 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: student1[7,11] C: R:
2017-03-22 11:16:48,889 [Thread-455] INFO org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter set in config null
2017-03-22 11:16:48,915 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2017-03-22 11:16:48,915 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Running jobs are [job_local1070093788_0006]
2017-03-22 11:16:48,999 [Thread-455] INFO com.mongodb.hadoop.pig.MongoUpdateStorage - Store location config: Configuration: ; for namespace: pavan.persons_info1; hosts: [localhost:27017]
2017-03-22 11:16:49,011 [Thread-455] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2017-03-22 11:16:49,013 [Thread-455] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.reduce.markreset.buffer.percent is deprecated. Instead, use mapreduce.reduce.markreset.buffer.percent
2017-03-22 11:16:49,013 [Thread-455] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2017-03-22 11:16:49,054 [Thread-455] INFO org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter is org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputCommitter
2017-03-22 11:16:49,094 [Thread-455] INFO com.mongodb.hadoop.pig.MongoUpdateStorage - Store location config: Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml, file:/tmp/hadoop-miracle/mapred/local/localRunner/miracle/job_local1070093788_0006/job_local1070093788_0006.xml; for namespace: pavan.persons_info1; hosts: [localhost:27017]
2017-03-22 11:16:49,104 [Thread-455] INFO com.mongodb.hadoop.output.MongoOutputCommitter - Setting up job.
2017-03-22 11:16:49,126 [Thread-455] INFO org.apache.hadoop.mapred.LocalJobRunner - Waiting for map tasks
2017-03-22 11:16:49,127 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner - Starting task: attempt_local1070093788_0006_m_000000_0
2017-03-22 11:16:49,253 [LocalJobRunner Map Task Executor #0] INFO com.mongodb.hadoop.pig.MongoUpdateStorage - Store location config: Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml, file:/tmp/hadoop-miracle/mapred/local/localRunner/miracle/job_local1070093788_0006/job_local1070093788_0006.xml; for namespace: pavan.persons_info1; hosts: [localhost:27017]
2017-03-22 11:16:49,279 [LocalJobRunner Map Task Executor #0] INFO com.mongodb.hadoop.pig.MongoUpdateStorage - Store location config: Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml, file:/tmp/hadoop-miracle/mapred/local/localRunner/miracle/job_local1070093788_0006/job_local1070093788_0006.xml; for namespace: pavan.persons_info1; hosts: [localhost:27017]
2017-03-22 11:16:49,290 [LocalJobRunner Map Task Executor #0] INFO com.mongodb.hadoop.output.MongoOutputCommitter - Setting up task.
2017-03-22 11:16:49,296 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorProcessTree : [ ]
2017-03-22 11:16:49,340 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Processing split: Number of splits :1
Total Length = 212
Input split[0]:
Length = 212
ClassName: org.apache.hadoop.mapreduce.lib.input.FileSplit
Locations:
-----------------------
2017-03-22 11:16:49,415 [LocalJobRunner Map Task Executor #0] INFO org.apache.pig.builtin.PigStorage - Using PigTextInputFormat
2017-03-22 11:16:49,417 [LocalJobRunner Map Task Executor #0] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader - Current split being processed hdfs://localhost:9000/input/student_dir/student_Info.txt:0+212
2017-03-22 11:16:49,459 [LocalJobRunner Map Task Executor #0] INFO com.mongodb.hadoop.pig.MongoUpdateStorage - Store location config: Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml, file:/tmp/hadoop-miracle/mapred/local/localRunner/miracle/job_local1070093788_0006/job_local1070093788_0006.xml; for namespace: pavan.persons_info1; hosts: [localhost:27017]
2017-03-22 11:16:49,684 [LocalJobRunner Map Task Executor #0] INFO org.mongodb.driver.cluster - Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
2017-03-22 11:16:50,484 [LocalJobRunner Map Task Executor #0] INFO com.mongodb.hadoop.output.MongoRecordWriter - Writing to temporary file: /tmp/hadoop-miracle/attempt_local1070093788_0006_m_000000_0/_MONGO_OUT_TEMP/_out
2017-03-22 11:16:50,516 [LocalJobRunner Map Task Executor #0] INFO com.mongodb.hadoop.pig.MongoUpdateStorage - Preparing to write to com.mongodb.hadoop.output.MongoRecordWriter#1fd6ae6
2017-03-22 11:16:50,736 [LocalJobRunner Map Task Executor #0] INFO org.apache.pig.impl.util.SpillableMemoryManager - Selected heap (Tenured Gen) of size 699072512 to monitor. collectionUsageThreshold = 489350752, usageThreshold = 489350752
2017-03-22 11:16:50,739 [LocalJobRunner Map Task Executor #0] WARN org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
2017-03-22 11:16:50,746 [LocalJobRunner Map Task Executor #0] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map - Aliases being processed per job phase (AliasName[line,offset]): M: student1[7,11] C: R:
2017-03-22 11:16:50,880 [Thread-455] INFO org.apache.hadoop.mapred.LocalJobRunner - map task executor complete.
2017-03-22 11:16:50,919 [Thread-455] INFO com.mongodb.hadoop.pig.MongoUpdateStorage - Store location config: Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml, file:/tmp/hadoop-miracle/mapred/local/localRunner/miracle/job_local1070093788_0006/job_local1070093788_0006.xml; for namespace: pavan.persons_info1; hosts: [localhost:27017]
2017-03-22 11:16:50,963 [Thread-455] WARN org.apache.hadoop.mapred.LocalJobRunner - job_local1070093788_0006
java.lang.Exception: java.io.IOException: java.io.IOException: Couldn't convert tuple to bson:
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.io.IOException: java.io.IOException: Couldn't convert tuple to bson:
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.StoreFuncDecorator.putNext(StoreFuncDecorator.java:83)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:144)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:97)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:261)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:65)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Couldn't convert tuple to bson:
at com.mongodb.hadoop.pig.MongoUpdateStorage.putNext(MongoUpdateStorage.java:165)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.StoreFuncDecorator.putNext(StoreFuncDecorator.java:75)
... 17 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at org.apache.pig.data.DefaultTuple.get(DefaultTuple.java:117)
at com.mongodb.hadoop.pig.JSONPigReplace.substitute(JSONPigReplace.java:120)
at com.mongodb.hadoop.pig.MongoUpdateStorage.putNext(MongoUpdateStorage.java:142)
... 18 more
2017-03-22 11:16:53,944 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2017-03-22 11:16:53,944 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_local1070093788_0006 has failed! Stop running all dependent jobs
2017-03-22 11:16:53,945 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2017-03-22 11:16:53,949 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2017-03-22 11:16:53,954 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2017-03-22 11:16:53,962 [main] ERROR org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil - 1 map reduce job(s) failed!
2017-03-22 11:16:53,981 [main] INFO org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.7.3 0.16.0 miracle 2017-03-22 11:16:43 2017-03-22 11:16:53 UNKNOWN
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
job_local1070093788_0006 student1 MAP_ONLY Message: Job failed! mongodb://localhost:27017/pavan.persons_info1,
Input(s):
Failed to read data from "hdfs://localhost:9000/input/student_dir/student_Info.txt"
Output(s):
Failed to produce result in "mongodb://localhost:27017/pavan.persons_info1"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_local1070093788_0006
2017-03-22 11:16:53,983 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2017-03-22 11:16:54,004 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1002: Unable to store alias student1
Details at logfile: /home/miracle/pig_1490205716326.log
Here is the input that I have dumped:
Input(s):
Successfully read 6 records (5378419 bytes) from: "hdfs://localhost:9000/input/student_dir/student_Info.txt"
Output(s):
Successfully stored 6 records (5378449 bytes) in: "hdfs://localhost:9000/tmp/temp-1419179625/tmp882976412"
Counters:
Total records written : 6
Total bytes written : 5378449
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_local1866034015_0001
2017-03-23 02:43:37,677 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2017-03-23 02:43:37,681 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2017-03-23 02:43:37,689 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2017-03-23 02:43:37,736 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
2017-03-23 02:43:37,748 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2017-03-23 02:43:37,751 [main] WARN org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
2017-03-23 02:43:37,793 [main] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2017-03-23 02:43:37,793 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
(Rajiv,Reddy,9848022337,Hyderabad)
(siddarth,Battacharya,9848022338,Kolkata)
(Rajesh,Khanna,9848022339,Delhi)
(Preethi,Agarwal,9848022330,Pune)
(Trupthi,Mohanthy,9848022336,Bhuwaneshwar)
(Archana,Mishra,9848022335,Chennai.)
I don't know what to do next. Please give me any suggestions on this.
I do not use Mongo, but from HBase experience and from the error it looks like a few fields are not FLATTENed enough to fit the target structure (the column family, in HBase terms), and a few of the columns you are trying to STORE don't exist.
Try a DUMP of the data instead of the STORE and check whether what comes out matches the structure your STORE statement expects.
Caused by: java.io.IOException: Couldn't convert tuple to bson:
at com.mongodb.hadoop.pig.MongoUpdateStorage.putNext(MongoUpdateStorage.java:165)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.StoreFuncDecorator.putNext(StoreFuncDecorator.java:75)
... 17 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at org.apache.pig.data.DefaultTuple.get(DefaultTuple.java:117)
at com.mongodb.hadoop.pig.JSONPigReplace.substitute(JSONPigReplace.java:120)
at com.mongodb.hadoop.pig.MongoUpdateStorage.putNext(MongoUpdateStorage.java:142)
... 18 more
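The IndexOutOfBoundsException ("Index: 1, Size: 1") is thrown from JSONPigReplace.substitute, i.e. the JSON template passed to MongoUpdateStorage asks for a second field in a tuple that only contains one. A minimal sketch of the check suggested above, assuming the input really is the four comma-separated fields seen in the DUMP output (the field names and delimiter here are illustrative, not taken from the original script; the path is the one from the log):

-- Load with an explicit schema; delimiter and field names below are assumed.
student1 = LOAD 'hdfs://localhost:9000/input/student_dir/student_Info.txt'
           USING PigStorage(',')
           AS (firstname:chararray, lastname:chararray, phone:chararray, city:chararray);

-- Inspect what Pig actually has before attempting any STORE:
DESCRIBE student1;
DUMP student1;

Every field referenced in the template(s) given to MongoUpdateStorage must show up in this output; if it does not, FLATTEN or project the relation until it does, and only then switch back to the STORE.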

com.thinkaurelius.titan.core.TitanException: Could not acquire new ID block from storage

I am running a simple program:
TitanGraph bg = TitanFactory.open("/home/titan-all-0.4.2/conf/titan-cassandra-es.properties");
IdGraph g = new IdGraph(bg, true, false);
Vertex v = g.addVertex("xyz132456");
g.commit();
No vertex "xyz132456" exists beforehand, yet I am getting the following exception, and I am getting it very frequently. My Titan server (0.4.2) is configured with all default settings, as I am just testing simple operations with it.
253 [main] INFO org.elasticsearch.node - [Chtylok] version[0.90.5], pid[4015], build[c8714e8/2013-09-17T12:50:20Z]
253 [main] INFO org.elasticsearch.node - [Chtylok] initializing ...
260 [main] INFO org.elasticsearch.plugins - [Chtylok] loaded [], sites []
2303 [main] INFO org.elasticsearch.node - [Chtylok] initialized
2303 [main] INFO org.elasticsearch.node - [Chtylok] starting ...
2309 [main] INFO org.elasticsearch.transport - [Chtylok] bound_address {local[1]}, publish_address {local[1]}
2316 [elasticsearch[Chtylok][clusterService#updateTask][T#1]] INFO org.elasticsearch.cluster.service - [Chtylok] new_master [Chtylok][1][local[1]]{local=true}, reason: local-disco-initial_connect(master)
2325 [main] INFO org.elasticsearch.discovery - [Chtylok] elasticsearch/1
2420 [main] INFO org.elasticsearch.http - [Chtylok] bound_address {inet[/0:0:0:0:0:0:0:0:9201]}, publish_address {inet[/10.0.0.5:9201]}
2420 [main] INFO org.elasticsearch.node - [Chtylok] started
2987 [elasticsearch[Chtylok][clusterService#updateTask][T#1]] INFO org.elasticsearch.gateway - [Chtylok] recovered [1] indices into cluster_state
3234 [main] INFO com.thinkaurelius.titan.diskstorage.Backend - Initiated backend operations thread pool of size 2
3542 [main] INFO com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration - Configuring edge store cache size: 201476830
5105 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [1000 ms]
6105 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [2000 ms]
7105 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [3000 ms]
8106 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [4001 ms]
9106 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [5001 ms]
10106 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [6001 ms]
11107 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [7002 ms]
12107 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [8002 ms]
13107 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [9002 ms]
14108 [main] WARN com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool - Waiting for id renewal thread on partition 2 [10003 ms]
Exception in thread "Thread-4" com.thinkaurelius.titan.core.TitanException: Could not acquire new ID block from storage
at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.renewBuffer(StandardIDPool.java:117)
at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.access$100(StandardIDPool.java:14)
at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool$IDBlockThread.run(StandardIDPool.java:172)
Caused by: com.thinkaurelius.titan.diskstorage.PermanentStorageException: Permanent failure in storage backend
at com.thinkaurelius.titan.diskstorage.cassandra.thrift.CassandraThriftKeyColumnValueStore.convertException(CassandraThriftKeyColumnValueStore.java:311)
at com.thinkaurelius.titan.diskstorage.cassandra.thrift.CassandraThriftKeyColumnValueStore.getNamesSlice(CassandraThriftKeyColumnValueStore.java:196)
at com.thinkaurelius.titan.diskstorage.cassandra.thrift.CassandraThriftKeyColumnValueStore.getSlice(CassandraThriftKeyColumnValueStore.java:120)
at com.thinkaurelius.titan.diskstorage.idmanagement.ConsistentKeyIDManager$1.call(ConsistentKeyIDManager.java:106)
at com.thinkaurelius.titan.diskstorage.idmanagement.ConsistentKeyIDManager$1.call(ConsistentKeyIDManager.java:103)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:90)
at com.thinkaurelius.titan.diskstorage.idmanagement.ConsistentKeyIDManager.getCurrentID(ConsistentKeyIDManager.java:103)
at com.thinkaurelius.titan.diskstorage.idmanagement.ConsistentKeyIDManager.getIDBlock(ConsistentKeyIDManager.java:159)
at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.renewBuffer(StandardIDPool.java:111)
... 2 more
Caused by: TimedOutException()
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:11623)
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:11560)
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result.read(Cassandra.java:11486)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_multiget_slice(Cassandra.java:701)
at org.apache.cassandra.thrift.Cassandra$Client.multiget_slice(Cassandra.java:685)
at com.thinkaurelius.titan.diskstorage.cassandra.thrift.CassandraThriftKeyColumnValueStore.getNamesSlice(CassandraThriftKeyColumnValueStore.java:176)
... 9 more
Exception in thread "main" java.lang.IllegalArgumentException: -1
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.nextBlock(StandardIDPool.java:88)
at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.nextID(StandardIDPool.java:134)
at com.thinkaurelius.titan.graphdb.database.idassigner.VertexIDAssigner.assignID(VertexIDAssigner.java:269)
at com.thinkaurelius.titan.graphdb.database.idassigner.VertexIDAssigner.assignID(VertexIDAssigner.java:155)
at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.assignID(StandardTitanGraph.java:226)
at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.addPropertyInternal(StandardTitanTx.java:521)
at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.setProperty(StandardTitanTx.java:552)
at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.addProperty(StandardTitanTx.java:495)
at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.addVertex(StandardTitanTx.java:345)
at com.thinkaurelius.titan.graphdb.blueprints.TitanBlueprintsTransaction.addVertex(TitanBlueprintsTransaction.java:72)
at com.thinkaurelius.titan.graphdb.blueprints.TitanBlueprintsGraph.addVertex(TitanBlueprintsGraph.java:157)
at com.thinkaurelius.titan.graphdb.blueprints.TitanBlueprintsGraph.addVertex(TitanBlueprintsGraph.java:24)
at com.tinkerpop.blueprints.util.wrappers.id.IdGraph.addVertex(IdGraph.java:131)
at newpackage.CityBizzStarting.main(CityBizzStarting.java:24)