This problem has already been raised on Stack Overflow, but my case is somewhat different. The location Hadoop is looking for is created under C:/tmp/hadoop-SYSTEM/mapred/local/taskTracker/jobcache/: the job folders are created there while running the wordcount example, but even though the files and folders are available, it throws a FileNotFoundException. It seems the files are not being identified. I even tried reformatting the namenode, which is one of the solutions suggested in forums, but the problem still exists.
Note: Hadoop version 0.20.2
ERROR:
13/04/11 10:24:20 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
13/04/11 10:24:21 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/04/11 10:24:21 INFO input.FileInputFormat: Total input paths to process : 1
13/04/11 10:24:22 INFO mapred.JobClient: Running job: job_201304111023_0001
13/04/11 10:24:23 INFO mapred.JobClient: map 0% reduce 0%
13/04/11 10:24:34 INFO mapred.JobClient: Task Id : attempt_201304111023_0001_m_000002_0, Status : FAILED
java.io.FileNotFoundException: File C:/tmp/hadoop-SYSTEM/mapred/local/taskTracker/jobcache/job_201304111023_0001/attempt_201304111023_0001_m_000002_0/work/tmp does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.mapred.TaskRunner.setupWorkDir(TaskRunner.java:519)
at org.apache.hadoop.mapred.Child.main(Child.java:155)
Check whether the permissions on that folder have been set properly; this type of error can occur if write permissions have not been granted on it.
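If you want to verify that from code, here is a minimal sketch (the directory is taken from the exception above; the class name and probe file are my own, hypothetical) that checks whether the Hadoop local directory can actually be written to, mimicking what TaskRunner.setupWorkDir needs to do:

import java.io.File;
import java.io.IOException;

public class DirWriteCheck {
    public static void main(String[] args) throws IOException {
        // Directory from the FileNotFoundException above; adjust to your setup.
        File dir = new File("C:/tmp/hadoop-SYSTEM/mapred/local");
        System.out.println("exists:   " + dir.exists());
        System.out.println("readable: " + dir.canRead());
        System.out.println("writable: " + dir.canWrite());
        // Try to create and delete a scratch file under it.
        File probe = new File(dir, "probe.tmp");
        System.out.println("create:   " + probe.createNewFile());
        System.out.println("delete:   " + probe.delete());
    }
}

If the probe fails when run under the same account as the TaskTracker (apparently SYSTEM, judging from the path), permissions are the likely culprit.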
I have tried almost every solution I found on the Internet, but my problem is still not solved.
I opened gradle.properties and added the line below; it didn't help:
org.gradle.jvmargs=-Xmx1024m -XX:MaxPermSize=512m
I also deleted the .gradle directory and then tried building again, but the problem still exists. Please, I need help.
FAILURE: Build failed with an exception.
* What went wrong:
Unable to start the daemon process.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the User Manual chapter on the daemon at https://docs.gradle.org/6.7/userguide/gradle_daemon.html
Process command line: C:\Program Files\Java\jdk1.8.0_291\bin\java.exe -Xmx1024M -Dfile.encoding=windows-1252 -Duser.country=US -Duser.language=en -Duser.variant -cp C:\Users\likec\.gradle\wrapper\dists\gradle-6.7-all\cuy9mc7upwgwgeb72wkcrupxe\gradle-6.7\lib\gradle-launcher-6.7.jar org.gradle.launcher.daemon.bootstrap.GradleDaemon 6.7
Please read the following process output to find out more:
-----------------------
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1048576 bytes for AllocateHeap
# An error report file with more information is saved as:
# C:\Users\likec\.gradle\daemon\6.7\hs_err_pid14492.log
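For reference, the hs_err log above reports a failed native allocation (malloc for AllocateHeap), not Java heap exhaustion, so a machine that cannot reserve the requested memory is the likely issue. A gradle.properties worth trying as an experiment (these values are assumptions, not a confirmed fix; -XX:MaxPermSize is ignored on Java 8 anyway) is:

# request a smaller daemon heap so the JVM can actually reserve it
org.gradle.jvmargs=-Xmx512m
# optionally bypass the daemon entirely while diagnosing
org.gradle.daemon=false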
I followed the official documentation about VoltDB, but encountered an error when using
voltdb init --config=deployment.xml
to initialize the VoltDB configuration file. The error is:
ERROR: Deployment information could not be obtained from cluster node or locally
VoltDB has encountered an unrecoverable error and is exiting
The log may contain additional information.
My VoltDB version is voltdb-community-8.0.
The log file volt.log shows:
2018-05-02 08:52:25,048 INFO [main] HOST: PID of this Volt process is 15950
2018-05-02 08:52:25,062 INFO [main] HOST: Command line arguments: org.voltdb.VoltDB initialize deployment deployment.xml
2018-05-02 08:52:25,063 INFO [main] HOST: Command line JVM arguments: -Xmx2048m -Xms2048m -XX:+AlwaysPreTouch -Djava.awt.headless=true -Djavax.security.auth.useSubjectCredsOnly=false -Dsun.net.inetaddr.ttl=300 -Dsun.net.inetaddr.negative.ttl=3600 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseTLAB -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCondCardMark -Dsun.rmi.dgc.server.gcInterval=9223372036854775807 -Dsun.rmi.dgc.client.gcInterval=9223372036854775807 -XX:CMSWaitDuration=120000 -XX:CMSMaxAbortablePrecleanTime=120000 -XX:+ExplicitGCInvokesConcurrent -XX:+CMSScavengeBeforeRemark -XX:+CMSClassUnloadingEnabled -Dlog4j.configuration=file:///usr/local/voltdb-community-8.0/voltdb/log4j.xml -Djava.library.path=default
2018-05-02 08:52:25,064 INFO [main] HOST: Command line JVM classpath: /usr/local/voltdb-community-8.0/voltdb/voltdb-8.0.jar:/usr/local/voltdb-community-8.0/lib/vmetrics.jar:/usr/local/voltdb-community-8.0/lib/commons-logging-1.1.3.jar:/usr/local/voltdb-community-8.0/lib/log4j-1.2.16.jar:/usr/local/voltdb-community-8.0/lib/jetty-io-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/avro-1.7.7.jar:/usr/local/voltdb-community-8.0/lib/lz4-1.2.0.jar:/usr/local/voltdb-community-8.0/lib/jetty-server-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/jline-2.10.jar:/usr/local/voltdb-community-8.0/lib/tomcat-juli.jar:/usr/local/voltdb-community-8.0/lib/jsch-0.1.51.jar:/usr/local/voltdb-community-8.0/lib/slf4j-nop-1.6.2.jar:/usr/local/voltdb-community-8.0/lib/kafka-clients-0.8.2.2.jar:/usr/local/voltdb-community-8.0/lib/httpcore-4.3.3.jar:/usr/local/voltdb-community-8.0/lib/super-csv-2.1.0.jar:/usr/local/voltdb-community-8.0/lib/felix.jar:/usr/local/voltdb-community-8.0/lib/commons-codec-1.6.jar:/usr/local/voltdb-community-8.0/lib/scala-xml_2.11-1.0.2.jar:/usr/local/voltdb-community-8.0/lib/jetty-util-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/slf4j-api-1.6.2.jar:/usr/local/voltdb-community-8.0/lib/jetty-servlet-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/snappy-java-1.1.1.7.jar:/usr/local/voltdb-community-8.0/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/voltdb-community-8.0/lib/kafka_2.11-0.8.2.2.jar:/usr/local/voltdb-community-8.0/lib/jetty-security-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/scala-library-2.11.5.jar:/usr/local/voltdb-community-8.0/lib/owner-1.0.9.jar:/usr/local/voltdb-community-8.0/lib/owner-java8-1.0.9.jar:/usr/local/voltdb-community-8.0/lib/snmp4j-2.5.2.jar:/usr/local/voltdb-community-8.0/lib/jetty-continuation-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/httpclient-4.3.6.jar:/usr/local/voltdb-community-8.0/lib/servlet-api-3.1.jar:/usr/local/voltdb-community-8.0/lib/jna.jar:/usr/local/voltdb-community-8.0/lib/jetty-http-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/metrics-core-2.2.0.jar:/usr/local/voltdb-community-8.0/lib/tomcat-jdbc.jar:/usr/local/voltdb-community-8.0/lib/httpasyncclient-4.0.2.jar:/usr/local/voltdb-community-8.0/lib/httpcore-nio-4.3.2.jar:/usr/local/voltdb-community-8.0/lib/protobuf-java-3.4.0.jar:/usr/local/voltdb-community-8.0/lib/scala-parser-combinators_2.11-1.0.2.jar:/usr/local/voltdb-community-8.0/lib/jackson-core-asl-1.9.13.jar:/usr/local/voltdb-community-8.0/lib/commons-lang3-3.0.jar:/usr/local/voltdb-community-8.0/lib/extension/voltdb-rabbitmq.jar
2018-05-02 08:52:25,064 ERROR [main] HOST: Deployment information could not be obtained from cluster node or locally
So it fails to generate the configuration file. Please tell me what "Deployment information could not be obtained from cluster node or locally" means.
This error means that it could not find the specified deployment.xml file in the local directory. You can omit --config=deployment.xml and just run "voltdb init"; it will generate a default deployment.xml file for you. Then you can proceed to "voltdb start" if you just want a simple standalone instance with the default settings.
Or, if you want to modify the configuration settings, you can run "voltdb init" to get a default configuration, then run "voltdb get deployment" to retrieve the generated deployment.xml file from the voltdbroot directory into the local directory. Then you can delete the voltdbroot directory, modify this deployment.xml file, and start over. You can also start over using a deployment file you write manually or one copied from the examples/HOWTOs/deployment-file-examples folder.
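As a rough sketch of that workflow (assuming the default voltdbroot in the current directory):

voltdb init                          # creates voltdbroot with a default configuration
voltdb get deployment                # copies the generated deployment.xml to the local directory
rm -rf voltdbroot                    # start over
# ... edit deployment.xml as needed ...
voltdb init --config=deployment.xml  # re-initialize with the edited file
voltdb start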
(Disclosure: I work at VoltDB)
I'm getting the following exception. I ensured that the jars are in that location, and I run the job with root permissions.
ERROR [ExecutorRunner for app-20170509111035-0004/19686] 2017-05-09 11:11:19,267 SPARK-WORKER Logging.scala:95 - Error running executor
java.lang.IllegalStateException: No assemblies found in '/usr/apps/cassandra/dse/resources/spark/lib'.
at org.apache.spark.launcher.CommandBuilderUtils.checkState(CommandBuilderUtils.java:249) ~[spark-launcher_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.launcher.AbstractCommandBuilder.findAssembly(AbstractCommandBuilder.java:342) ~[spark-launcher_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.launcher.AbstractCommandBuilder.buildClassPath(AbstractCommandBuilder.java:187) ~[spark-launcher_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.launcher.AbstractCommandBuilder.buildJavaCommand(AbstractCommandBuilder.java:119) ~[spark-launcher_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.launcher.WorkerCommandBuilder.buildCommand(WorkerCommandBuilder.scala:39) ~[spark-core_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.launcher.WorkerCommandBuilder.buildCommand(WorkerCommandBuilder.scala:48) ~[spark-core_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.deploy.worker.CommandUtils$.buildCommandSeq(CommandUtils.scala:63) ~[spark-core_2.10-1.6.3.3.jar:5.0.8]
at org.apache.spark.deploy.worker.CommandUtils$.buildProcessBuilder(CommandUtils.scala:51) ~[spark-core_2.10-1.6.3.3.jar:5.0.8]
at org.apache.spark.deploy.worker.ExecutorRunner.org$apache$spark$deploy$worker$ExecutorRunner$$fetchAndRunExecutor(ExecutorRunner.scala:143) ~[spark-core_2.10-1.6.3.3.jar:5.0.8]
at org.apache.spark.deploy.worker.ExecutorRunner$$anon$1.run(ExecutorRunner.scala:71) [spark-core_2.10-1.6.3.3.jar:5.0.8]
INFO [dispatcher-event-loop-0] 2017-05-09 11:11:19,268 SPARK-WORKER Logging.scala:58 - Executor app-20170509111035-0004/19686 finished with state FAILED message java.lang.IllegalStateException: No assemblies found in '/usr/apps/cassandra/dse/resources/spark/lib'.
Any help would be appreciated.
I found the root cause. I'm running a DSE Analytics cluster, and on one of my seed nodes the JRE version had accidentally been changed.
After pointing it back to the correct JRE, this issue disappeared.
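If anyone else hits this, a quick sanity check on each node (the lib path is taken from the stack trace; everything else is a generic assumption) would be:

java -version                                          # confirm every node runs the JRE DSE expects
echo $JAVA_HOME                                        # confirm which JVM the services were started with
ls /usr/apps/cassandra/dse/resources/spark/lib/*.jar   # the assemblies the worker failed to find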
I'm trying to set up an Eclipse environment for developing and debugging Hadoop. I'm following Tom White's Hadoop: The Definitive Guide, 3rd edition. What I would like to do is get the MaxTemperature app working locally on Windows within Eclipse before moving it to my Hortonworks sandbox VM. The comment on page 158 about using the local job runner seems to be what I want. I don't want to set up a full Hadoop installation on Windows. I'm hoping that with the right config params I can convince it to run as a Java application inside Eclipse.
Windows: 7
Eclipse: Luna
Hadoop: 2.4.0
JDK: 7
When I set the Run configuration for MaxTemperatureDriver (Source code on page 157) to
inputfile outputdir foo (deliberately bogus 3rd parameter)
I get the usage message, so I know I'm running my program with those params.
If I remove the bogus third param, I get:
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at mark.MaxTemperatureDriver.run(MaxTemperatureDriver.java:52)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at mark.MaxTemperatureDriver.main(MaxTemperatureDriver.java:56)
I've tried inserting -conf but it seems to be ignored. There is no error message if I specify a nonexistent path.
I've tried inserting -fs file:/// -jt local, but it makes no difference
I've tried inserting -D mapreduce.framework.name=local
I've tried specifying the input and output with the file: format
Note: I'm not asking how to configure Eclipse to connect to a remote Hadoop installation. I want the application to run within Eclipse.
Is this possible? Any ideas?
Additional info:
I turned on debugging. I saw:
582 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
583 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Cannot pick org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider - returned null protocol
I'm wondering not why YarnClientProtocolProvider failed, but why it didn't try LocalClientProtocolProvider.
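For reference, a minimal sketch of forcing the local runner from code rather than via -D arguments (this is not the book's driver; the class is hypothetical, but Configuration.set and the two property names are standard Hadoop 2.x):

import org.apache.hadoop.conf.Configuration;

public class LocalRunnerConf {
    // Build a Configuration that should make Cluster pick LocalClientProtocolProvider.
    public static Configuration localConf() {
        Configuration conf = new Configuration();
        conf.set("mapreduce.framework.name", "local"); // local job runner
        conf.set("fs.defaultFS", "file:///");          // local filesystem instead of HDFS
        return conf;
    }
}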
New info:
It seems that this is an issue with Hadoop 2.4.0. I recreated my environment with Hadoop 1.2.1, followed the instructions in
http://gerrymcnicol.com/index.php/2014/01/02/hadoop-and-cassandra-part-4-writing-your-first-mapreduce-job/
added the Windows hack from
http://bigdatanerd.wordpress.com/2013/11/14/mapreduce-running-mapreduce-in-windows-file-system-debug-mapreduce-in-eclipse
and it all started working.
The following blog will be useful:
Running mapreduce in Windows filesystem
After struggling to get proper test suites, I'm now pretty disappointed: while following this tutorial as closely as possible (pretty straightforward, right?), Setting up Selenium server on a headless Jenkins CI build machine, Jenkins keeps looping on the current build, outputting:
So I decided to run a Selenium build by hand on the CI machine, and got this:
user#machine:/var/log$ export DISPLAY=":99" && java -jar /var/lib/selenium/selenium- server.jar -browserSessionReuse -htmlSuite *firefox http://staging.site.com /var/lib/jenkins/jobs/project/workspace/tests/selenium/testsuite.html /var/lib/jenkins/jobs/project/workspace/logs/selenium.html
24 janv. 2012 19:27:56 org.openqa.grid.selenium.GridLauncher main
INFO: Launching a standalone server
19:27:59.927 INFO - Java: Sun Microsystems Inc. 20.0-b11
19:27:59.929 INFO - OS: Linux 3.0.0-14-generic amd64
19:27:59.951 INFO - v2.17.0, with Core v2.17.0. Built from revision 15540
19:27:59.958 INFO - Will recycle browser sessions when possible.
19:28:00.143 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:4444/wd/hub
19:28:00.144 INFO - Version Jetty/5.1.x
19:28:00.145 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
19:28:00.147 INFO - Started HttpContext[/selenium-server,/selenium-server]
19:28:00.147 INFO - Started HttpContext[/,/]
19:28:00.183 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler#16ba8602
19:28:00.184 INFO - Started HttpContext[/wd,/wd]
19:28:00.199 INFO - Started SocketListener on 0.0.0.0:4444
19:28:00.199 INFO - Started org.openqa.jetty.jetty.Server#6f7a29a1
HTML suite exception seen:
java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:900)
at org.openqa.selenium.server.SeleniumServer.runHtmlSuite(SeleniumServer.java:603)
at org.openqa.selenium.server.SeleniumServer.boot(SeleniumServer.java:287)
at org.openqa.selenium.server.SeleniumServer.main(SeleniumServer.java:245)
at org.openqa.grid.selenium.GridLauncher.main(GridLauncher.java:54)
19:28:00.218 INFO - Shutting down...
19:28:00.220 INFO - Stopping Acceptor ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=4444]
While understanding the output isn't that hard, figuring out what to do to fix this issue is.
Have any of you already faced this kind of thing? Thanks
I only just got past these problems myself, but I was able to run your command when I pointed it at my .jar, test suite, and report file. I'm thinking that perhaps the location of your files under
/var/lib/selenium
could be part of the problem. Try putting them where your user has permission, perhaps under
/home/USERNAME/selenium
Other than that, the only thing I can say is to make sure your .jar, test suite, and report file are valid.
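For example (the user name and paths here are assumptions based on your command line), either hand the directory to the user running the build:

sudo chown -R jenkins:jenkins /var/lib/selenium

or relocate the files somewhere that user can already write:

mkdir -p /home/USERNAME/selenium
cp /var/lib/selenium/selenium-server.jar /home/USERNAME/selenium/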
Also, this part of your command is incorrect (I assume it's a copy-and-paste error into Stack Overflow):
/var/lib/selenium/selenium- server.jar
You are not getting the error I would expect from an incorrect jar location, so I assume something was lost when you pasted into Stack Overflow.