Hadoop Eclipse Plugin Error: Call to localhost/127.0.0.1:54311 failed on local exception: java.io.EOFException

I saw a similar question raised a year ago (see here).
I have similar configurations, but I am facing the same EOFException error.
Is this a problem with the Hadoop plugin for Eclipse, or with my Hadoop configuration? (Note: I have followed the standard configuration, so a mistake there is unlikely; also, when I run bin/start-all.sh, the single-node Hadoop cluster runs fine.)
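To clarify, by "standard configuration" I mean the usual single-node values, sketched below (the ports are taken from my logs further down, where the NameNode answers on 54310 and the JobTracker on 54311; the Eclipse plugin's DFS Master and Map/Reduce Master host/port are set to match these):
<!-- core-site.xml -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
<!-- mapred-site.xml -->
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
</property>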
Below is the stack trace from Eclipse while connecting to HDFS:
java.io.IOException: Call to localhost/127.0.0.1:54311 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
at org.apache.hadoop.ipc.Client.call(Client.java:743)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at org.apache.hadoop.mapred.$Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:470)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:455)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:442)
at org.apache.hadoop.eclipse.server.HadoopServer.getJobClient(HadoopServer.java:473)
at org.apache.hadoop.eclipse.server.HadoopServer$LocationStatusUpdater.run(HadoopServer.java:102)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
The Hadoop NameNode log is as follows:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = somnath-laptop/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
2013-02-11 13:26:56,505 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-02-11 13:26:56,520 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-02-11 13:26:56,521 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-02-11 13:26:56,521 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-02-11 13:26:56,792 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-02-11 13:26:56,797 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-02-11 13:26:56,807 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-02-11 13:26:56,809 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-02-11 13:26:56,882 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2013-02-11 13:26:56,882 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
2013-02-11 13:26:56,883 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2013-02-11 13:26:56,883 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-02-11 13:26:56,914 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
2013-02-11 13:26:56,914 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-02-11 13:26:56,914 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-02-11 13:26:56,922 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-02-11 13:26:56,922 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-02-11 13:26:56,957 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-02-11 13:26:57,023 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-02-11 13:26:57,109 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 7
2013-02-11 13:26:57,123 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-02-11 13:26:57,124 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 654 loaded in 0 seconds.
2013-02-11 13:26:57,124 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /app/hadoop/tmp/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-02-11 13:26:57,126 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 654 saved in 0 seconds.
2013-02-11 13:26:57,548 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 654 saved in 0 seconds.
2013-02-11 13:26:57,825 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-02-11 13:26:57,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 923 msecs
2013-02-11 13:26:57,830 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
2013-02-11 13:26:57,838 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-02-11 13:26:57,845 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-02-11 13:26:57,870 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort54310 registered.
2013-02-11 13:26:57,871 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort54310 registered.
2013-02-11 13:26:57,873 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost/127.0.0.1:54310
2013-02-11 13:26:57,879 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-02-11 13:27:03,041 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-02-11 13:27:03,142 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-02-11 13:27:03,156 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-02-11 13:27:03,164 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-02-11 13:27:03,165 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2013-02-11 13:27:03,166 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2013-02-11 13:27:03,166 INFO org.mortbay.log: jetty-6.1.26
2013-02-11 13:27:03,585 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
2013-02-11 13:27:03,585 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2013-02-11 13:27:03,586 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-02-11 13:27:03,587 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 54310: starting
2013-02-11 13:27:03,587 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 54310: starting
2013-02-11 13:27:03,587 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310: starting
2013-02-11 13:27:03,588 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 54310: starting
2013-02-11 13:27:03,589 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 54310: starting
2013-02-11 13:27:07,306 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
2013-02-11 13:27:07,308 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310, call delete(/app/hadoop/tmp/mapred/system, true) from 127.0.0.1:51327: error:
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2013-02-11 13:27:14,657 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50010 storage DS-747527201-127.0.1.1-50010-1360274163059
2013-02-11 13:27:14,661 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
2013-02-11 13:27:14,687 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode extension entered. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 29 seconds.
2013-02-11 13:27:14,687 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* NameSystem.processReport: from 127.0.0.1:50010, blocks: 1, processing time: 3 msecs
2013-02-11 13:27:15,988 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user webuser
org.apache.hadoop.util.Shell$ExitCodeException: id: webuser: No such user
at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1026)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5210)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5193)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2019)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:848)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2013-02-11 13:27:15,988 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user webuser
2013-02-11 13:27:16,007 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user webuser
org.apache.hadoop.util.Shell$ExitCodeException: id: webuser: No such user
at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1026)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5210)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5193)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2019)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:848)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2013-02-11 13:27:16,008 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user webuser
2013-02-11 13:27:16,023 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user webuser
org.apache.hadoop.util.Shell$ExitCodeException: id: webuser: No such user
at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1026)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5210)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:5178)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:2338)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getListing(NameNode.java:831)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2013-02-11 13:27:16,024 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user webuser
2013-02-11 13:27:17,327 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 27 seconds.
2013-02-11 13:27:17,327 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310, call delete(/app/hadoop/tmp/mapred/system, true) from 127.0.0.1:51335: error:
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 27 seconds.
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 27 seconds.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2013-02-11 13:27:27,339 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 17 seconds.
2013-02-11 13:27:27,339 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310, call delete(/app/hadoop/tmp/mapred/system, true) from 127.0.0.1:51336: error:
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 17 seconds.
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /app/hadoop/tmp/mapred/system. Name node is in safe mode. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 17 seconds.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
Any quick help would be appreciated.

I had the same problem. I solved it by:
1. Building the project into a .jar through Eclipse.
2. Running it in Hadoop with: hadoop jar <jar_name> main_class_name
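For example (the jar name, main class, and paths below are placeholders for your own):
hadoop jar WordCount.jar com.example.WordCount /user/hduser/input /user/hduser/output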

Related

Vert.x Infinispan getting "failed sending discovery request to /228.6.7.8:46655"

I'm running the Chapter 3 Infinispan example from Julien Ponge's book "Vert.x in Action" on macOS 11.3.
https://github.com/jponge/vertx-in-action/tree/master/chapter3
package chapter3.cluster;

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.logging.Logger;
import io.vertx.core.logging.LoggerFactory;

public class FirstInstance {
  private static final Logger logger = LoggerFactory.getLogger(FirstInstance.class);

  public static void main(String[] args) {
    Vertx.clusteredVertx(new VertxOptions(), ar -> {
      if (ar.succeeded()) {
        logger.info("First instance has been started");
        Vertx vertx = ar.result();
        vertx.deployVerticle("chapter3.HeatSensor", new DeploymentOptions().setInstances(4));
        vertx.deployVerticle("chapter3.HttpServer");
      } else {
        logger.error("Could not start", ar.cause());
      }
    });
  }
}
After downloading the sample code, I run the following inside the chapter3 folder:
./gradlew run -PmainClass=chapter3.cluster.FirstInstance -Pjvmargs="-Djava.net.preferIPv4Stack=true"
This results in
failed sending discovery request to /228.6.7.8:46655
This works on Windows, so I wonder if there is an issue with macOS multicasting. I tried following these steps to enable multicasting, but I still get the same error: https://blogs.agilefaqs.com/2009/11/08/enabling-multicast-on-your-macos-unix/
Full output:
News-MacBook-Pro:chapter3f newuser$ ./gradlew run -PmainClass=chapter3.cluster.FirstInstance
> Task :run
DEBUG [main] LoggerFactory - Using io.vertx.core.logging.SLF4JLogDelegateFactory
WARN [vert.x-worker-thread-0] InfinispanClusterManager - Cannot find Infinispan config 'infinispan.xml', using default
WARN [vert.x-worker-thread-0] CONFIG - ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
INFO [vert.x-worker-thread-0] CONTAINER - ISPN000128: Infinispan version: Infinispan 'Corona Extra' 11.0.5.Final
INFO [vert.x-worker-thread-0] CLUSTER - ISPN000078: Starting JGroups channel ISPN with stack jgroups
ERROR [jgroups-6,News-MacBook-Pro-2273] MPING - News-MacBook-Pro-2273: failed sending discovery request to /228.6.7.8:46655
java.io.IOException: Can't assign requested address
at java.base/java.net.PlainDatagramSocketImpl.send(Native Method)
at java.base/java.net.DatagramSocket.send(DatagramSocket.java:695)
at org.jgroups.protocols.MPING.sendMcastDiscoveryRequest(MPING.java:286)
at org.jgroups.protocols.PING.sendDiscoveryRequest(PING.java:63)
at org.jgroups.protocols.PING.findMembers(PING.java:31)
at org.jgroups.protocols.Discovery.invokeFindMembers(Discovery.java:217)
at org.jgroups.protocols.Discovery.lambda$findMembers$0(Discovery.java:228)
at org.jgroups.protocols.Discovery$$Lambda$380.000000009D98CA20.run(Unknown Source)
at org.jgroups.util.TimeScheduler3$Task.run(TimeScheduler3.java:328)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:831)
ERROR [jgroups-6,News-MacBook-Pro-2273] MPING - News-MacBook-Pro-2273: failed sending discovery request to /228.6.7.8:46655
java.io.IOException: Can't assign requested address
at java.base/java.net.PlainDatagramSocketImpl.send(Native Method)
at java.base/java.net.DatagramSocket.send(DatagramSocket.java:695)
at org.jgroups.protocols.MPING.sendMcastDiscoveryRequest(MPING.java:286)
at org.jgroups.protocols.PING.sendDiscoveryRequest(PING.java:63)
at org.jgroups.protocols.PING.findMembers(PING.java:31)
at org.jgroups.protocols.Discovery.invokeFindMembers(Discovery.java:217)
at org.jgroups.protocols.Discovery.lambda$findMembers$0(Discovery.java:228)
at org.jgroups.protocols.Discovery$$Lambda$380.000000009D98CA20.run(Unknown Source)
at org.jgroups.util.TimeScheduler3$Task.run(TimeScheduler3.java:328)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:831)
ERROR [jgroups-6,News-MacBook-Pro-2273] MPING - News-MacBook-Pro-2273: failed sending discovery request to /228.6.7.8:46655
java.io.IOException: Can't assign requested address
at java.base/java.net.DatagramSocket.send(DatagramSocket.java:695)
at org.jgroups.protocols.MPING.sendMcastDiscoveryRequest(MPING.java:286)
at org.jgroups.protocols.PING.sendDiscoveryRequest(PING.java:63)
at org.jgroups.protocols.PING.findMembers(PING.java:31)
at org.jgroups.protocols.Discovery.invokeFindMembers(Discovery.java:217)
at org.jgroups.protocols.Discovery.lambda$findMembers$0(Discovery.java:228)
at org.jgroups.protocols.Discovery$$Lambda$380.000000009D98CA20.run(Unknown Source)
at org.jgroups.util.TimeScheduler3$Task.run(TimeScheduler3.java:328)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:831)
INFO [vert.x-worker-thread-0] GMS - News-MacBook-Pro-2273: no members discovered after 2003 ms: creating cluster as coordinator
INFO [vert.x-worker-thread-0] CLUSTER - ISPN000094: Received new cluster view for channel ISPN: [News-MacBook-Pro-2273|0] (1) [News-MacBook-Pro-2273]
INFO [vert.x-worker-thread-0] CLUSTER - ISPN000079: Channel ISPN local address is News-MacBook-Pro-2273, physical addresses are [192.168.0.101:7800]
INFO [vert.x-eventloop-thread-0] FirstInstance - First instance has been started
INFO [vert.x-eventloop-thread-5] threads - JBoss Threads version 2.3.3.Final
ERROR [jgroups-5,News-MacBook-Pro-2273] MPING - News-MacBook-Pro-2273: failed sending discovery request to /228.6.7.8:46655
java.io.IOException: Can't assign requested address
at java.base/java.net.DatagramSocket.send(DatagramSocket.java:695)
at org.jgroups.protocols.MPING.sendMcastDiscoveryRequest(MPING.java:286)
at org.jgroups.protocols.PING.sendDiscoveryRequest(PING.java:63)
at org.jgroups.protocols.PING.findMembers(PING.java:31)
at org.jgroups.protocols.Discovery.invokeFindMembers(Discovery.java:217)
at org.jgroups.protocols.Discovery.findMembers(Discovery.java:244)
at org.jgroups.protocols.Discovery.down(Discovery.java:387)
at org.jgroups.protocols.MERGE3$InfoSender.run(MERGE3.java:412)
at org.jgroups.util.TimeScheduler3$Task.run(TimeScheduler3.java:328)
at org.jgroups.util.TimeScheduler3$RecurringTask.run(TimeScheduler3.java:362)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:831)
ERROR [jgroups-6,News-MacBook-Pro-2273] MPING - News-MacBook-Pro-2273: failed sending discovery request to /228.6.7.8:46655
java.io.IOException: Can't assign requested address
at java.base/java.net.DatagramSocket.send(DatagramSocket.java:695)
at org.jgroups.protocols.MPING.sendMcastDiscoveryRequest(MPING.java:286)
at org.jgroups.protocols.PING.sendDiscoveryRequest(PING.java:63)
at org.jgroups.protocols.PING.findMembers(PING.java:31)
at org.jgroups.protocols.Discovery.invokeFindMembers(Discovery.java:217)
at org.jgroups.protocols.Discovery.findMembers(Discovery.java:244)
at org.jgroups.protocols.Discovery.down(Discovery.java:387)
at org.jgroups.protocols.MERGE3$InfoSender.run(MERGE3.java:412)
at org.jgroups.util.TimeScheduler3$Task.run(TimeScheduler3.java:328)
at org.jgroups.util.TimeScheduler3$RecurringTask.run(TimeScheduler3.java:362)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:831)
@tsegismont provided the answer, which worked:
https://gist.github.com/rafaeltuelho/208568668e4205bd9b93
Add a multicast route for 224.0.0.1-231.255.255.254:
sudo route add -net 224.0.0.0/5 127.0.0.1
Add a multicast route for 232.0.0.1-239.255.255.254:
sudo route add -net 232.0.0.0/5 192.168.1.3
So for 230.0.0.0 I bind to 192.168.1.3.
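To verify the routes are in place (a quick check; the output varies per machine):
netstat -rn | grep -E '224|232'
A route can be removed again with, e.g., sudo route delete -net 224.0.0.0/5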

neo4j - graphaware plugins

I downloaded the GraphAware NLP, OpenNLP, and framework plugins and copied the jar files to the plugins directory.
As per the steps in the Neo4j documentation, I included the following lines in the neo4j.conf file:
dbms.unmanaged_extension_classes=com.graphaware.server=/graphaware
com.graphaware.runtime.enabled=true
com.graphaware.module.NLP.2=com.graphaware.nlp.module.NLPBootstrapper
After adding these lines, localhost:7474 does not start.
When I comment these lines out, localhost starts and works properly, just without the plugins.
Version: Enterprise 3.1.3
Error in localhost after commenting those lines out:
Failed to invoke procedure `ga.nlp.annotate`: Caused by: java.lang.RuntimeException: java.lang.IllegalStateException: No GraphAware Runtime is registered with the given database
Error in log file:
2017-11-07 10:41:03.839+0000 INFO ======== Neo4j 3.1.3 ========
2017-11-07 10:41:04.120+0000 INFO Starting...
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/share/neo4j/lib/slf4j-nop-1.7.22.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/var/lib/neo4j/plugins/nlp-opennlp-3.1.3.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.helpers.NOPLoggerFactory]
2017-11-07 10:41:04.985+0000 INFO Bolt enabled on localhost:7687.
2017-11-07 10:41:05.010+0000 INFO Initiating metrics...
2017-11-07 10:41:07.374+0000 INFO [c.g.r.b.RuntimeKernelExtension] GraphAware Runtime enabled, bootstrapping...
2017-11-07 10:41:07.444+0000 INFO [c.g.r.b.RuntimeKernelExtension] Bootstrapping module with order 2, ID NLP, using com.graphaware.nlp.module.NLPBootstrapper
2017-11-07 10:41:07.523+0000 INFO Registering module NLP with GraphAware Runtime.
2017-11-07 10:41:07.523+0000 INFO [c.g.r.b.RuntimeKernelExtension] GraphAware Runtime bootstrapped, starting the Runtime...
2017-11-07 10:41:21.893+0000 INFO Starting GraphAware...
2017-11-07 10:41:21.894+0000 INFO Loading module metadata...
2017-11-07 10:41:21.894+0000 INFO Loading metadata for module NLP
2017-11-07 10:41:21.946+0000 INFO Module NLP seems to have been registered for the first time.
2017-11-07 10:41:21.947+0000 INFO Module NLP seems to have been registered for the first time, will try to initialize...
2017-11-07 10:41:21.947+0000 INFO InitializeUntil set to 9223372036854775807 and it is 1510051281947. Will initialize.
2017-11-07 10:41:24.709+0000 INFO Started.
2017-11-07 10:41:24.811+0000 INFO Mounted REST API at: /db/manage
2017-11-07 10:41:24.823+0000 INFO [c.g.s.f.b.GraphAwareServerBootstrapper] started
2017-11-07 10:41:24.825+0000 INFO Mounted unmanaged extension [com.graphaware.server] at [/graphaware]
Exception in thread "GraphAware Starter" java.lang.RuntimeException: Error while initializing model of class: class opennlp.tools.namefind.TokenNameFinderModel
at com.graphaware.nlp.processor.opennlp.OpenNLPPipeline.loadModel(OpenNLPPipeline.java:503)
at com.graphaware.nlp.processor.opennlp.OpenNLPPipeline.lambda$loadNamedEntitiesFinders$2(OpenNLPPipeline.java:161)
at java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1691)
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
at com.graphaware.nlp.processor.opennlp.OpenNLPPipeline.loadNamedEntitiesFinders(OpenNLPPipeline.java:158)
at com.graphaware.nlp.processor.opennlp.OpenNLPPipeline.init(OpenNLPPipeline.java:118)
at com.graphaware.nlp.processor.opennlp.OpenNLPPipeline.<init>(OpenNLPPipeline.java:108)
at com.graphaware.nlp.processor.opennlp.PipelineBuilder.build(PipelineBuilder.java:79)
at com.graphaware.nlp.processor.opennlp.OpenNLPTextProcessor.createPhrasePipeline(OpenNLPTextProcessor.java:106)
at com.graphaware.nlp.processor.opennlp.OpenNLPTextProcessor.init(OpenNLPTextProcessor.java:56)
at com.graphaware.nlp.processor.TextProcessorsManager.lambda$initiateTextProcessors$0(TextProcessorsManager.java:61)
at java.util.HashMap$Values.forEach(HashMap.java:980)
at com.graphaware.nlp.processor.TextProcessorsManager.initiateTextProcessors(TextProcessorsManager.java:60)
at com.graphaware.nlp.processor.TextProcessorsManager.<init>(TextProcessorsManager.java:37)
at com.graphaware.nlp.NLPManager.init(NLPManager.java:95)
at com.graphaware.nlp.module.NLPModule.initialize(NLPModule.java:52)
at com.graphaware.runtime.manager.ProductionTxDrivenModuleManager.initialize(ProductionTxDrivenModuleManager.java:57)
at com.graphaware.runtime.manager.BaseTxDrivenModuleManager.initializeIfAllowed(BaseTxDrivenModuleManager.java:128)
at com.graphaware.runtime.manager.BaseTxDrivenModuleManager.handleNoMetadata(BaseTxDrivenModuleManager.java:72)
at com.graphaware.runtime.manager.BaseTxDrivenModuleManager.handleNoMetadata(BaseTxDrivenModuleManager.java:39)
at com.graphaware.runtime.manager.BaseModuleManager.loadMetadata(BaseModuleManager.java:143)
at com.graphaware.runtime.manager.BaseModuleManager.loadMetadata(BaseModuleManager.java:125)
at com.graphaware.runtime.TxDrivenRuntime.loadMetadata(TxDrivenRuntime.java:130)
at com.graphaware.runtime.ProductionRuntime.loadMetadata(ProductionRuntime.java:80)
at com.graphaware.runtime.BaseGraphAwareRuntime.startModules(BaseGraphAwareRuntime.java:154)
at com.graphaware.runtime.TxDrivenRuntime.startModules(TxDrivenRuntime.java:146)
at com.graphaware.runtime.ProductionRuntime.startModules(ProductionRuntime.java:70)
at com.graphaware.runtime.BaseGraphAwareRuntime.start(BaseGraphAwareRuntime.java:134)
at com.graphaware.runtime.bootstrap.RuntimeKernelExtension.lambda$start$8(RuntimeKernelExtension.java:117)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedConstructorAccessor29.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.graphaware.nlp.processor.opennlp.OpenNLPPipeline.loadModel(OpenNLPPipeline.java:499)
... 29 more
Caused by: java.lang.OutOfMemoryError: Java heap space
at opennlp.tools.ml.model.AbstractModelReader.getParameters(AbstractModelReader.java:140)
at opennlp.tools.ml.maxent.io.GISModelReader.constructModel(GISModelReader.java:78)
at opennlp.tools.ml.model.GenericModelReader.constructModel(GenericModelReader.java:62)
at opennlp.tools.ml.model.AbstractModelReader.getModel(AbstractModelReader.java:85)
at opennlp.tools.util.model.GenericModelSerializer.create(GenericModelSerializer.java:32)
at opennlp.tools.util.model.GenericModelSerializer.create(GenericModelSerializer.java:29)
at opennlp.tools.util.model.BaseModel.finishLoadingArtifacts(BaseModel.java:309)
at opennlp.tools.util.model.BaseModel.loadModel(BaseModel.java:239)
at opennlp.tools.util.model.BaseModel.<init>(BaseModel.java:173)
at opennlp.tools.namefind.TokenNameFinderModel.<init>(TokenNameFinderModel.java:103)
at sun.reflect.GeneratedConstructorAccessor29.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.graphaware.nlp.processor.opennlp.OpenNLPPipeline.loadModel(OpenNLPPipeline.java:499)
at com.graphaware.nlp.processor.opennlp.OpenNLPPipeline.lambda$loadNamedEntitiesFinders$2(OpenNLPPipeline.java:161)
at com.graphaware.nlp.processor.opennlp.OpenNLPPipeline$$Lambda$239/1188677545.accept(Unknown Source)
at java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1691)
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
at com.graphaware.nlp.processor.opennlp.OpenNLPPipeline.loadNamedEntitiesFinders(OpenNLPPipeline.java:158)
at com.graphaware.nlp.processor.opennlp.OpenNLPPipeline.init(OpenNLPPipeline.java:118)
at com.graphaware.nlp.processor.opennlp.OpenNLPPipeline.<init>(OpenNLPPipeline.java:108)
at com.graphaware.nlp.processor.opennlp.PipelineBuilder.build(PipelineBuilder.java:79)
at com.graphaware.nlp.processor.opennlp.OpenNLPTextProcessor.createPhrasePipeline(OpenNLPTextProcessor.java:106)
at com.graphaware.nlp.processor.opennlp.OpenNLPTextProcessor.init(OpenNLPTextProcessor.java:56)
at com.graphaware.nlp.processor.TextProcessorsManager.lambda$initiateTextProcessors$0(TextProcessorsManager.java:61)
at com.graphaware.nlp.processor.TextProcessorsManager$$Lambda$234/2094381213.accept(Unknown Source)
at java.util.HashMap$Values.forEach(HashMap.java:980)
at com.graphaware.nlp.processor.TextProcessorsManager.initiateTextProcessors(TextProcessorsManager.java:60)
at com.graphaware.nlp.processor.TextProcessorsManager.<init>(TextProcessorsManager.java:37)
at com.graphaware.nlp.NLPManager.init(NLPManager.java:95)
at com.graphaware.nlp.module.NLPModule.initialize(NLPModule.java:52)
at com.graphaware.runtime.manager.ProductionTxDrivenModuleManager.initialize(ProductionTxDrivenModuleManager.java:57)
Please help me out.
You do not have sufficient memory for the NLP plugins to load (note the java.lang.OutOfMemoryError: Java heap space at the bottom of your trace), hence the NLP module is not registered and thus not available once the database has started.
As stated in the NLP plugin README, you need at least 4 GB of heap for the modules to run; adapt it in your neo4j.conf and restart.
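For example, in neo4j.conf (a sketch using the standard Neo4j 3.x heap settings; 4g matches the README's stated minimum):
dbms.memory.heap.initial_size=4g
dbms.memory.heap.max_size=4g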

Azure Data Factory Jobs Failing in Hadoop/Map Reduce?

Some of my ADF jobs are failing at random, with the output shown in the /PackageJobs/~job/Status/stderr file below.
This doesn't always happen: it occurs randomly on some of the jobs, while others complete normally.
What could be causing this problem?
The stderr data is as follows:
log4j:ERROR Could not instantiate class [com.microsoft.log4jappender.FilterLogAppender].
java.lang.ClassNotFoundException: com.microsoft.log4jappender.FilterLogAppender
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:190)
at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.apache.log4j.Logger.getLogger(Logger.java:104)
at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:262)
at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:108)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1025)
at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:844)
at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:541)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:292)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:269)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
at org.apache.hadoop.util.ShutdownHookManager.<clinit>(ShutdownHookManager.java:44)
at org.apache.hadoop.util.RunJar.run(RunJar.java:200)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
log4j:ERROR Could not instantiate appender named "RMSUMFilterLog".
16/03/04 10:56:02 INFO impl.TimelineClientImpl: Timeline service address: http://headnodehost:8188/ws/v1/timeline/
16/03/04 10:56:02 INFO client.RMProxy: Connecting to ResourceManager at headnodehost/100.74.24.3:9010
16/03/04 10:56:02 INFO client.AHSProxy: Connecting to Application History server at headnodehost/100.74.24.3:10200
16/03/04 10:56:03 INFO impl.TimelineClientImpl: Timeline service address: http://headnodehost:8188/ws/v1/timeline/
16/03/04 10:56:03 INFO client.RMProxy: Connecting to ResourceManager at headnodehost/100.74.24.3:9010
16/03/04 10:56:03 INFO client.AHSProxy: Connecting to Application History server at headnodehost/100.74.24.3:10200
16/03/04 10:56:06 INFO mapred.FileInputFormat: Total input paths to process : 1
16/03/04 10:56:06 INFO mapreduce.JobSubmitter: number of splits:1
16/03/04 10:56:06 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
16/03/04 10:56:06 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
16/03/04 10:56:07 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1457068773628_0022
16/03/04 10:56:07 INFO mapreduce.JobSubmitter: Kind: mapreduce.job, Service: job_1457068773628_0019, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@655019bc)
16/03/04 10:56:08 INFO impl.YarnClientImpl: Submitted application application_1457068773628_0022
16/03/04 10:56:08 INFO mapreduce.Job: The url to track the job: http://headnodehost:9014/proxy/application_1457068773628_0022/
16/03/04 10:56:08 INFO mapreduce.Job: Running job: job_1457068773628_0022
16/03/04 10:56:18 INFO mapreduce.Job: Job job_1457068773628_0022 running in uber mode : false
16/03/04 10:56:18 INFO mapreduce.Job: map 0% reduce 0%
16/03/04 10:56:31 INFO mapreduce.Job: map 100% reduce 0%
16/03/04 23:48:59 INFO mapreduce.Job: Task Id : attempt_1457068773628_0022_m_000000_0, Status : FAILED
AttemptID:attempt_1457068773628_0022_m_000000_0 Timed out after 600 secs
16/03/04 23:49:00 INFO mapreduce.Job: map 0% reduce 0%
16/03/04 23:49:16 INFO mapreduce.Job: map 100% reduce 0%
16/03/05 00:01:00 INFO mapreduce.Job: Task Id : attempt_1457068773628_0022_m_000000_1, Status : FAILED
AttemptID:attempt_1457068773628_0022_m_000000_1 Timed out after 600 secs
16/03/05 00:01:01 INFO mapreduce.Job: map 0% reduce 0%
16/03/05 00:01:21 INFO mapreduce.Job: map 100% reduce 0%
16/03/05 00:13:00 INFO mapreduce.Job: Task Id : attempt_1457068773628_0022_m_000000_2, Status : FAILED
AttemptID:attempt_1457068773628_0022_m_000000_2 Timed out after 600 secs
16/03/05 00:13:01 INFO mapreduce.Job: map 0% reduce 0%
16/03/05 00:13:18 INFO mapreduce.Job: map 100% reduce 0%
16/03/05 00:25:03 INFO mapreduce.Job: Job job_1457068773628_0022 failed with state FAILED due to: Task failed task_1457068773628_0022_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
16/03/05 00:25:03 INFO mapreduce.Job: Counters: 9
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=3
Rack-local map tasks=1
Total time spent by all maps in occupied slots (ms)=48514665
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=48514665
Total vcore-seconds taken by all map tasks=48514665
Total megabyte-seconds taken by all map tasks=74518525440
16/03/05 00:25:03 ERROR streaming.StreamJob: Job not successful!
Streaming Command Failed!
This looks like the known timeout issue with Hadoop/HDInsight: if an activity doesn't write anything to the console for 10 minutes, it gets killed. Could you modify your code to write a ping to the console every 9 minutes and see if that works?
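Something like the following would do it (a minimal sketch in plain Java, assuming your job's main class can start a daemon thread before its long-running work; all names here are illustrative):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ConsoleHeartbeat {
    public static void main(String[] args) throws Exception {
        // Daemon scheduler so the JVM can still exit once the real work finishes.
        ScheduledExecutorService heartbeat =
                Executors.newSingleThreadScheduledExecutor(r -> {
                    Thread t = new Thread(r, "console-heartbeat");
                    t.setDaemon(true);
                    return t;
                });
        // Print a ping every 9 minutes, safely under the 10-minute inactivity timeout.
        heartbeat.scheduleAtFixedRate(
                () -> System.out.println("heartbeat: job still running"),
                9, 9, TimeUnit.MINUTES);

        doLongRunningWork(); // placeholder for the job's actual processing
    }

    private static void doLongRunningWork() throws InterruptedException {
        Thread.sleep(30 * 60 * 1000L); // stand-in for long-running work
    }
}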

Unable to perform Sqoop import

I am unable to import data from MySQL to HDFS. My .bashrc and sqoop-env.sh files are fine, and I am able to run the sqoop list-databases command successfully. The problem is with the import command, which fails with a communications/connection-refused exception. Please refer to the error below and help me out:
rahul@ubuntu:~$ sqoop import --connect jdbc:mysql://localhost/rahul --username root --password 123 --table emp -m1 --target-dir /sqoopimport/emp
Warning: /usr/lib/hbase does not exist! HBase imports will fail. Please set $HBASE_HOME to the root of your HBase installation.
14/09/09 01:22:45 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/09/09 01:22:45 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
14/09/09 01:22:45 INFO tool.CodeGenTool: Beginning code generation
14/09/09 01:22:45 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM emp AS t LIMIT 1
14/09/09 01:22:45 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM emp AS t LIMIT 1
14/09/09 01:22:45 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/hadoop
Note: /tmp/sqoop-rahul/compile/a81597835880664d34a2ff0e4c7b9b33/emp.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/09/09 01:22:46 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-rahul/compile/a81597835880664d34a2ff0e4c7b9b33/emp.jar
14/09/09 01:22:46 WARN manager.MySQLManager: It looks like you are importing from mysql.
14/09/09 01:22:46 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
14/09/09 01:22:46 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
14/09/09 01:22:46 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
14/09/09 01:22:46 INFO mapreduce.ImportJobBase: Beginning import of emp
14/09/09 01:22:47 INFO mapred.JobClient: Running job: job_201409090100_0003
14/09/09 01:22:48 INFO mapred.JobClient: map 0% reduce 0%
14/09/09 01:22:54 INFO mapred.JobClient: Task Id : attempt_201409090100_0003_m_000000_0, Status : FAILED
java.lang.RuntimeException: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:167)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:722)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:193)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:162)
... 9 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1121)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:355)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2479)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2516)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2301)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:834)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:416)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:317)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:278)
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:187)
... 10 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at java.net.Socket.<init>(Socket.java:425)
at java.net.Socket.<init>(Socket.java:241)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:259)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:305)
... 26 more
14/09/09 01:22:54 WARN mapred.JobClient: Error reading task outputConnection refused
14/09/09 01:22:54 WARN mapred.JobClient: Error reading task outputConnection refused
14/09/09 01:22:59 INFO mapred.JobClient: Task Id : attempt_201409090100_0003_m_000000_1, Status : FAILED
java.lang.RuntimeException: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:167)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:722)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:193)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:162)
... 9 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1121)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:355)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2479)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2516)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2301)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:834)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:416)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:317)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:278)
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:187)
... 10 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at java.net.Socket.<init>(Socket.java:425)
at java.net.Socket.<init>(Socket.java:241)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:259)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:305)
... 26 more
14/09/09 01:22:59 WARN mapred.JobClient: Error reading task outputConnection refused
14/09/09 01:22:59 WARN mapred.JobClient: Error reading task outputConnection refused
14/09/09 01:23:03 INFO mapred.JobClient: Task Id : attempt_201409090100_0003_m_000000_2, Status : FAILED
java.lang.RuntimeException: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:167)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:722)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:193)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:162)
... 9 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1121)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:355)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2479)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2516)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2301)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:834)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:416)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:317)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:278)
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:187)
... 10 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at java.net.Socket.<init>(Socket.java:425)
at java.net.Socket.<init>(Socket.java:241)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:259)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:305)
... 26 more
14/09/09 01:23:03 WARN mapred.JobClient: Error reading task outputConnection refused
14/09/09 01:23:03 WARN mapred.JobClient: Error reading task outputConnection refused
14/09/09 01:23:09 INFO mapred.JobClient: Job complete: job_201409090100_0003
14/09/09 01:23:09 INFO mapred.JobClient: Counters: 6
14/09/09 01:23:09 INFO mapred.JobClient:   Job Counters
14/09/09 01:23:09 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=20325
14/09/09 01:23:09 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=
14/09/09 01:23:09 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/09/09 01:23:09 INFO mapred.JobClient:     Launched map tasks=4
14/09/09 01:23:09 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
14/09/09 01:23:09 INFO mapred.JobClient:     Failed map tasks=1
14/09/09 01:23:09 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 23.174 seconds (0 bytes/sec)
14/09/09 01:23:09 INFO mapreduce.ImportJobBase: Retrieved 0 records.
14/09/09 01:23:09 ERROR tool.ImportTool: Error during import: Import job failed!
I fixed it. The problem was that I was using localhost in the import statement, since MySQL was running on the same system. When I used the actual IP address instead of localhost, it worked like a charm.
Also, I was using the root username and password to connect to MySQL, and for some reason that didn't work, so I created another user and granted all privileges to that user:
GRANT ALL PRIVILEGES ON employee.* TO 'sqoopuser'@'%' IDENTIFIED BY 'passphrase';
Mistake:
sqoop import --connect jdbc:mysql://localhost/rahul --username root --password 123 --table emp -m1 --target-dir /sqoopimport/emp
Correction:
sqoop import --connect jdbc:mysql://192.168.202.139:3306/rahul --username sqoopuser --password 123 --table emp -m1 --target-dir /sqoopimport/emp
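To double-check such a fix before re-running Sqoop, you can try the same host, port, user, and database with the stock mysql command-line client (assuming it is installed); if this connection fails, the Sqoop map tasks will fail the same way:
mysql -h 192.168.202.139 -P 3306 -u sqoopuser -p rahul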
There are three checkpoints (quick commands for running them follow this list):
Make sure the MySQL service is accessible: manually connect to localhost:3306 as root.
Make sure there is no firewall restriction on localhost:3306.
Download the most recent mysql-connector JAR into Sqoop's lib directory, then run the import again.
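A quick way to run the first two checks from a shell, assuming the mysql client and netcat are installed:
mysql -h 127.0.0.1 -P 3306 -u root -p
nc -zv 127.0.0.1 3306
If 127.0.0.1 works but connecting to the machine's real IP is refused, the likely culprit is MySQL's bind-address setting (commonly found in /etc/mysql/my.cnf on Debian/Ubuntu): it defaults to 127.0.0.1, which makes MySQL reject connections arriving on any other interface.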
The solution in my case was to use the proper IP address instead of "localhost" in the sqoop command:
sqoop import --connect jdbc:mysql://192.168.69.69:3306/testdb --username root -P --table TESTABLE --target-dir /data/import

Sqoop installation export and import from postgresql

I've just installed Sqoop and was testing it. I tried to export some data from HDFS to PostgreSQL using Sqoop. When I run it, it throws the following exception: java.io.IOException: Can't export data, please check task tracker logs. I think there may also have been a problem in the installation.
The file content is:
ustNU 45
MB1bA 0
gNbCO 76
iZP10 39
B2aoo 45
SI7eG 93
5sC4k 60
2IhFV 2
u2A48 16
yvy6R 51
LNhsV 26
mZ2yn 65
80Gp3 43
Wk5Ag 85
VUfyp 93
P077j 94
f1Oj5 11
LxJkg 72
0H7NP 99
Dk406 25
g4KRp 76
Fw3U0 80
6LD59 1
07KHx 91
F1S88 72
Bnb0v 85
A2qM7 79
Z6cAt 81
0M3DO 23
m0s09 44
KIvwd 13
GNUD0 78
um93a 20
19bHv 75
4Of3s 75
5hFen 16
This is the postgres table:
Table "public.mysort"
Column | Type | Modifiers
--------+---------+-----------
name | text |
marks | integer |
The sqoop command is:
sqoop export --connect jdbc:postgresql://localhost/testdb --username akshay --password akshay --table mysort -m 1 --export-dir MySort/input
Followed by the error:
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
14/06/11 18:28:06 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/06/11 18:28:06 INFO manager.SqlManager: Using default fetchSize of 1000
14/06/11 18:28:06 INFO tool.CodeGenTool: Beginning code generation
14/06/11 18:28:06 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM "mysort" AS t LIMIT 1
14/06/11 18:28:06 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/hadoop
Note: /tmp/sqoop-hduser/compile/0402ad4b5cf7980040264af35de406cb/mysort.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/06/11 18:28:07 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hduser/compile/0402ad4b5cf7980040264af35de406cb/mysort.jar
14/06/11 18:28:07 INFO mapreduce.ExportJobBase: Beginning export of mysort
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hbase/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/06/11 18:28:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/06/11 18:28:22 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/06/11 18:28:23 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/06/11 18:28:23 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
14/06/11 18:28:23 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/06/11 18:28:23 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/06/11 18:28:24 INFO input.FileInputFormat: Total input paths to process : 1
14/06/11 18:28:24 INFO input.FileInputFormat: Total input paths to process : 1
14/06/11 18:28:25 INFO mapreduce.JobSubmitter: number of splits:1
14/06/11 18:28:25 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1402488523460_0003
14/06/11 18:28:25 INFO impl.YarnClientImpl: Submitted application application_1402488523460_0003
14/06/11 18:28:25 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1402488523460_0003/
14/06/11 18:28:25 INFO mapreduce.Job: Running job: job_1402488523460_0003
14/06/11 18:28:46 INFO mapreduce.Job: Job job_1402488523460_0003 running in uber mode : false
14/06/11 18:28:46 INFO mapreduce.Job: map 0% reduce 0%
14/06/11 18:29:04 INFO mapreduce.Job: Task Id : attempt_1402488523460_0003_m_000000_0, Status : FAILED
Error: java.io.IOException: Can't export data, please check task tracker logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.util.NoSuchElementException
at java.util.ArrayList$Itr.next(ArrayList.java:839)
at mysort.__loadFromFields(mysort.java:198)
at mysort.parse(mysort.java:147)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
... 10 more
14/06/11 18:29:23 INFO mapreduce.Job: Task Id : attempt_1402488523460_0003_m_000000_1, Status : FAILED
Error: java.io.IOException: Can't export data, please check task tracker logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.util.NoSuchElementException
at java.util.ArrayList$Itr.next(ArrayList.java:839)
at mysort.__loadFromFields(mysort.java:198)
at mysort.parse(mysort.java:147)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
... 10 more
14/06/11 18:29:42 INFO mapreduce.Job: Task Id : attempt_1402488523460_0003_m_000000_2, Status : FAILED
Error: java.io.IOException: Can't export data, please check task tracker logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.util.NoSuchElementException
at java.util.ArrayList$Itr.next(ArrayList.java:839)
at mysort.__loadFromFields(mysort.java:198)
at mysort.parse(mysort.java:147)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
... 10 more
14/06/11 18:30:03 INFO mapreduce.Job: map 100% reduce 0%
14/06/11 18:30:03 INFO mapreduce.Job: Job job_1402488523460_0003 failed with state FAILED due to: Task failed task_1402488523460_0003_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
14/06/11 18:30:03 INFO mapreduce.Job: Counters: 9
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=3
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=69336
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=69336
Total vcore-seconds taken by all map tasks=69336
Total megabyte-seconds taken by all map tasks=71000064
14/06/11 18:30:03 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
14/06/11 18:30:03 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 100.1476 seconds (0 bytes/sec)
14/06/11 18:30:03 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/06/11 18:30:03 INFO mapreduce.ExportJobBase: Exported 0 records.
14/06/11 18:30:03 ERROR tool.ExportTool: Error during export: Export job failed!
This is the log file :
2014-06-11 17:54:37,601 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-06-11 17:54:37,602 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-06-11 17:54:52,678 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-06-11 17:54:52,777 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-06-11 17:54:52,846 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-06-11 17:54:52,847 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
2014-06-11 17:54:52,855 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens:
2014-06-11 17:54:52,855 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1402488523460_0002, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier#971d0d8)
2014-06-11 17:54:52,901 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2014-06-11 17:54:53,165 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1402488523460_0002
2014-06-11 17:54:53,249 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-06-11 17:54:53,249 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-06-11 17:54:53,393 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2014-06-11 17:54:53,689 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2014-06-11 17:54:53,899 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: Paths:/user/hduser/MySort/input/data.txt:0+891082
2014-06-11 17:54:53,904 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.file is deprecated. Instead, use mapreduce.map.input.file
2014-06-11 17:54:53,904 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.start is deprecated. Instead, use mapreduce.map.input.start
2014-06-11 17:54:53,904 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.length is deprecated. Instead, use mapreduce.map.input.length
2014-06-11 17:54:54,028 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2014-06-11 17:54:54,028 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: Exception raised during data export
2014-06-11 17:54:54,028 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2014-06-11 17:54:54,028 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: Exception:
java.util.NoSuchElementException
at java.util.ArrayList$Itr.next(ArrayList.java:839)
at mysort.__loadFromFields(mysort.java:198)
at mysort.parse(mysort.java:147)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
2014-06-11 17:54:54,030 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: On input: ustNU 45
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: On input file: hdfs://localhost:9000/user/hduser/MySort/input/data.txt
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: At position 0
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: Currently processing split:
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: Paths:/user/hduser/MySort/input/data.txt:0+891082
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: This issue might not necessarily be caused by current input
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: due to the batching nature of export.
2014-06-11 17:54:54,031 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2014-06-11 17:54:54,032 INFO [Thread-12] org.apache.sqoop.mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
2014-06-11 17:54:54,033 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser (auth:SIMPLE) cause:java.io.IOException: Can't export data, please check task tracker logs
2014-06-11 17:54:54,033 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: Can't export data, please check task tracker logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.util.NoSuchElementException
at java.util.ArrayList$Itr.next(ArrayList.java:839)
at mysort.__loadFromFields(mysort.java:198)
at mysort.parse(mysort.java:147)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
... 10 more
2014-06-11 17:54:54,037 INFO [main] org.apache.hadoop.mapred.Task: Runnning cleanup for the task
Any help in resolving the issue is appreciated.
Here is the complete procedure for installing Sqoop, together with import and export commands. Hopefully it will be helpful to someone. This one is tried and tested by me and actually works.
Download: apache.mirrors.tds.net/sqoop/1.4.4/sqoop-1.4.4.bin__hadoop-2.0.4-alpha.tar.gz
tar -xvf sqoop-1.4.4.bin__hadoop-2.0.4-alpha.tar.gz
sudo mv sqoop-1.4.4.bin__hadoop-2.0.4-alpha /usr/lib/sqoop
Copy and paste the following two lines into your .bashrc, then run source ~/.bashrc so they take effect:
export SQOOP_HOME=/usr/lib/sqoop
export PATH=$PATH:$SQOOP_HOME/bin
Go to the /usr/lib/sqoop/conf folder, copy sqoop-env-template.sh to a new file sqoop-env.sh, and point the export HADOOP_HOME, HBASE_HOME, etc. lines at the corresponding installation directories (a minimal example follows).
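As a minimal sketch, assuming Hadoop is installed under /usr/local/hadoop and HBase under /usr/lib/hbase (the paths that appear in the logs above; adjust to your machine), sqoop-env.sh would contain something like:
export HADOOP_COMMON_HOME=/usr/local/hadoop
export HADOOP_MAPRED_HOME=/usr/local/hadoop
export HBASE_HOME=/usr/lib/hbase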
Download the postgresql connector JAR file from jdbc.postgresql.org/download/postgresql-9.3-1101.jdbc41.jar and place it in /usr/lib/sqoop/lib/.
Create a directory manager.d in sqoop/conf/.
Create a file named postgresql inside conf/manager.d/ and add the following line to it:
org.postgresql.Driver=/usr/lib/sqoop/lib/postgresql-9.3-1101.jdbc41.jar
(If your connector JAR has a different file name, adjust the path accordingly.)
For Export
Create a user in postgres:
createuser -P -s -e ace
Enter password for new role: ace
Enter it again: ace
CREATE DATABASE testdb OWNER ace TABLESPACE ace;
create table stud1(id int,name text);
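Before running the export, it is worth sanity-checking that PostgreSQL accepts these credentials over TCP, assuming the psql client is available (whether password authentication succeeds also depends on your pg_hba.conf):
psql -h localhost -p 5432 -U ace -d testdb -c 'SELECT 1;'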
Create a file student.txt
Add lines such as:
1,Ace
2,iloveapis
hadoop fs -put student.txt
sqoop export --connect jdbc:postgresql://localhost:5432/testdb --username ace --password ace --table stud1 -m 1 --export-dir student.txt
Check in postgres: SELECT * FROM stud1;
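One gotcha worth noting, and quite possibly the cause of the NoSuchElementException in the question above: sqoop export parses input records as comma-separated fields by default. The student.txt file here is comma-delimited, so the defaults work; for a tab- or space-delimited file like the MySort data, the delimiter must be given explicitly, e.g. (assuming tab-delimited input):
sqoop export --connect jdbc:postgresql://localhost:5432/testdb --username akshay --password akshay --table mysort -m 1 --export-dir MySort/input --input-fields-terminated-by '\t'
A delimiter mismatch makes the generated __loadFromFields parser run out of tokens, which surfaces as java.util.NoSuchElementException.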
For Import:
sqoop import --connect jdbc:postgresql://localhost:5432/testdb --username akshay --password akshay --table stud1 --m 1
hadoop fs -ls -R stud1
Expected Output:
-rw-r--r-- 1 hduser supergroup 0 2014-06-13 18:10 stud1/_SUCCESS
-rw-r--r-- 1 hduser supergroup 21 2014-06-13 18:10 stud1/part-m-00000
hadoop fs -cat stud1/part-m-00000
Expected Output:
1,Ace
2,iloveapis
hadoop fs -copyToLocal stud1/part-m-00000 $HOME/imported_data.txt