I'm getting the following exception. I've made sure the jars are in the expected location, and I run the job with root permissions.
ERROR [ExecutorRunner for app-20170509111035-0004/19686] 2017-05-09 11:11:19,267 SPARK-WORKER Logging.scala:95 - Error running executor
java.lang.IllegalStateException: No assemblies found in '/usr/apps/cassandra/dse/resources/spark/lib'.
at org.apache.spark.launcher.CommandBuilderUtils.checkState(CommandBuilderUtils.java:249) ~[spark-launcher_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.launcher.AbstractCommandBuilder.findAssembly(AbstractCommandBuilder.java:342) ~[spark-launcher_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.launcher.AbstractCommandBuilder.buildClassPath(AbstractCommandBuilder.java:187) ~[spark-launcher_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.launcher.AbstractCommandBuilder.buildJavaCommand(AbstractCommandBuilder.java:119) ~[spark-launcher_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.launcher.WorkerCommandBuilder.buildCommand(WorkerCommandBuilder.scala:39) ~[spark-core_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.launcher.WorkerCommandBuilder.buildCommand(WorkerCommandBuilder.scala:48) ~[spark-core_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.deploy.worker.CommandUtils$.buildCommandSeq(CommandUtils.scala:63) ~[spark-core_2.10-1.6.3.3.jar:5.0.8]
at org.apache.spark.deploy.worker.CommandUtils$.buildProcessBuilder(CommandUtils.scala:51) ~[spark-core_2.10-1.6.3.3.jar:5.0.8]
at org.apache.spark.deploy.worker.ExecutorRunner.org$apache$spark$deploy$worker$ExecutorRunner$$fetchAndRunExecutor(ExecutorRunner.scala:143) ~[spark-core_2.10-1.6.3.3.jar:5.0.8]
at org.apache.spark.deploy.worker.ExecutorRunner$$anon$1.run(ExecutorRunner.scala:71) [spark-core_2.10-1.6.3.3.jar:5.0.8]
INFO [dispatcher-event-loop-0] 2017-05-09 11:11:19,268 SPARK-WORKER Logging.scala:58 - Executor app-20170509111035-0004/19686 finished with state FAILED message java.lang.IllegalStateException: No assemblies found in '/usr/apps/cassandra/dse/resources/spark/lib'.
Any help would be appreciated.
I found the root cause. I'm running a DSE Analytics cluster, and on one of my seed nodes the JRE version had accidentally been changed. After pointing it back to the correct JRE, this issue disappeared.
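For anyone hitting the same error, the checks that would have caught this in my case look something like the following (run on each node; the lib path is the one from the stack trace above):

java -version        # confirm this is the JRE DSE expects
echo $JAVA_HOME      # confirm it points at that JRE's install directory
ls /usr/apps/cassandra/dse/resources/spark/lib/   # should list the Spark assembly jar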
I've tried almost every solution I found on the internet, but my problem is still not solved.
I opened gradle.properties and added the line below, but it didn't help:
org.gradle.jvmargs=-Xmx1024m -XX:MaxPermSize=512m
I also deleted the .gradle directory and tried building again, but the problem still exists. Can anyone help?
FAILURE: Build failed with an exception.
* What went wrong:
Unable to start the daemon process.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the User Manual chapter on the daemon at https://docs.gradle.org/6.7/userguide/gradle_daemon.html
Process command line: C:\Program Files\Java\jdk1.8.0_291\bin\java.exe -Xmx1024M -Dfile.encoding=windows-1252 -Duser.country=US -Duser.language=en -Duser.variant -cp C:\Users\likec\.gradle\wrapper\dists\gradle-6.7-all\cuy9mc7upwgwgeb72wkcrupxe\gradle-6.7\lib\gradle-launcher-6.7.jar org.gradle.launcher.daemon.bootstrap.GradleDaemon 6.7
Please read the following process output to find out more:
-----------------------
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1048576 bytes for AllocateHeap
# An error report file with more information is saved as:
# C:\Users\likec\.gradle\daemon\6.7\hs_err_pid14492.log
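Since the process output shows a native malloc failure rather than a Java-level OutOfMemoryError, the JVM may simply be unable to reserve that much memory on this machine. One sketch worth trying in gradle.properties (the value is a guess, not a confirmed fix; size it to the RAM actually free, and note that JDK 8 ignores MaxPermSize anyway):

# gradle.properties -- hypothetical smaller daemon heap
org.gradle.jvmargs=-Xmx512m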
I downloaded Drools 7.46.0.Final and extracted the contents to my local drive. When I try to run the examples from the Linux command line using the provided runExamples.sh script, I get the following exception. I've tried with Java 8 and Java 11 (the only versions I have installed). Does this really require Java 6 like the message recommends, or is there some other problem here?
I'm new to Drools, so I'm afraid I'm not sure how to troubleshoot this.
UPDATE: Interestingly, I tried version 7.44.0.Final and that runs fine. So I downloaded 7.45.0.Final, and that one is broken too. Something changed between 7.44 and 7.45 that's causing this.
10:06:44.154 [main] INFO o.k.a.i.utils.ServiceDiscoveryImpl.processKieService:129 - Cannot load service: org.kie.internal.process.CorrelationKeyFactory
10:06:44.157 [main] ERROR o.k.a.i.utils.ServiceDiscoveryImpl.processKieService:131 - Loading failed because There already exists an implementation for service org.drools.core.reteoo.KieComponentFactoryFactory with same priority 0
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.drools.dynamic.DynamicServiceRegistrySupplier.get(DynamicServiceRegistrySupplier.java:32)
at org.drools.dynamic.DynamicServiceRegistrySupplier.get(DynamicServiceRegistrySupplier.java:23)
at org.kie.api.internal.utils.ServiceRegistry$Impl.getServiceRegistry(ServiceRegistry.java:88)
at org.kie.api.internal.utils.ServiceRegistry$ServiceRegistryHolder.<clinit>(ServiceRegistry.java:47)
at org.kie.api.internal.utils.ServiceRegistry.getInstance(ServiceRegistry.java:39)
at org.kie.api.internal.utils.ServiceRegistry.getService(ServiceRegistry.java:35)
at org.kie.api.KieServices$Factory$LazyHolder.<clinit>(KieServices.java:358)
at org.kie.api.KieServices$Factory.get(KieServices.java:365)
at org.kie.api.KieServices.get(KieServices.java:349)
at org.drools.examples.DroolsExamplesApp.<init>(DroolsExamplesApp.java:59)
at org.drools.examples.DroolsExamplesApp.main(DroolsExamplesApp.java:52)
Caused by: java.lang.RuntimeException: Unable to build kie service url = jar:file:/home/davek/apps/drools-distribution-7.46.0.Final/examples/binaries/drools-examples-7.46.0.Final.jar!/META-INF/kie.conf
at org.kie.api.internal.utils.ServiceDiscoveryImpl.registerConfs(ServiceDiscoveryImpl.java:105)
at org.kie.api.internal.utils.ServiceDiscoveryImpl.lambda$getServices$1(ServiceDiscoveryImpl.java:83)
at java.util.Optional.ifPresent(Optional.java:159)
at org.kie.api.internal.utils.ServiceDiscoveryImpl.getServices(ServiceDiscoveryImpl.java:81)
at org.kie.api.internal.utils.ServiceRegistry$Impl.<init>(ServiceRegistry.java:60)
at org.drools.dynamic.DynamicServiceRegistrySupplier$LazyHolder.<clinit>(DynamicServiceRegistrySupplier.java:27)
... 11 more
Caused by: java.lang.RuntimeException: There already exists an implementation for service org.drools.core.reteoo.KieComponentFactoryFactory with same priority 0
at org.kie.api.internal.utils.ServiceDiscoveryImpl$PriorityMap.put(ServiceDiscoveryImpl.java:222)
at org.kie.api.internal.utils.ServiceDiscoveryImpl.processKieService(ServiceDiscoveryImpl.java:124)
at org.kie.api.internal.utils.ServiceDiscoveryImpl.registerConfs(ServiceDiscoveryImpl.java:101)
... 16 more
Unfortunately this is a known issue, which I fixed with this commit.
The upcoming Drools 7.47.0.Final (to be released next week) won't suffer from it.
Switching to version 8.16.0.Beta or newer resolved this for me.
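If you consume Drools through Maven rather than the distribution zip, the upgrade is just a version bump. A sketch, assuming you depend on the standard drools-core artifact (adjust for whichever Drools modules you actually use):

<dependency>
    <groupId>org.drools</groupId>
    <artifactId>drools-core</artifactId>
    <version>7.47.0.Final</version>
</dependency>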
I'm trying to set up an Eclipse environment for developing and debugging Hadoop. I'm following Tom White's Hadoop: The Definitive Guide, 3rd edition. What I'd like to do is get the MaxTemperature app working locally on Windows within Eclipse before moving it to my Hortonworks sandbox VM. The comment on page 158 about using the local job runner seems to be what I want. I don't want to set up a full Hadoop installation on Windows; I'm hoping that with the right config params I can convince it to run as a Java application inside Eclipse.
Windows: 7
Eclipse: Luna
Hadoop: 2.4.0
JDK: 7
When I set the Run configuration for MaxTemperatureDriver (Source code on page 157) to
inputfile outputdir foo (deliberate bogus 3rd parameter)
I get the usage message, so I know I'm running my program with those params.
If I remove the bogus third param, I get:
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at mark.MaxTemperatureDriver.run(MaxTemperatureDriver.java:52)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at mark.MaxTemperatureDriver.main(MaxTemperatureDriver.java:56)
I've tried inserting -conf, but it seems to be ignored: there is no error message if I specify a nonexistent path.
I've tried inserting -fs file:/// -jt local, but it makes no difference.
I've tried inserting -D mapreduce.framework.name=local.
I've tried specifying the input and output with the file: format.
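For what it's worth, the programmatic equivalent of those flags, which I'd expect to force the local runner, looks like this (a sketch using standard Hadoop 2.x property names, not my exact driver code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Force the local job runner and the local filesystem instead of YARN/HDFS
Configuration conf = new Configuration();
conf.set("mapreduce.framework.name", "local");
conf.set("fs.defaultFS", "file:///");
Job job = Job.getInstance(conf, "Max temperature");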
Note: I'm not asking how to configure Eclipse to connect to a remote Hadoop installation. I want the application to run within Eclipse.
Is this possible? Any ideas?
Additional info:
I turned on debugging. I saw:
582 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
583 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Cannot pick org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider - returned null protocol
I'm wondering not why YarnClientProtocolProvider failed, but why it didn't try LocalClientProtocolProvider.
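My best guess (unverified): LocalClientProtocolProvider is discovered via ServiceLoader from the hadoop-mapreduce-client-common jar, so if my Eclipse classpath only includes hadoop-mapreduce-client-core it would never be offered. If that's right, the missing piece would be something like:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-common</artifactId>
    <version>2.4.0</version>
</dependency>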
New info:
It seems that this is an issue with Hadoop 2.4.0. I recreated my environment with Hadoop 1.2.1, followed the instructions in
http://gerrymcnicol.com/index.php/2014/01/02/hadoop-and-cassandra-part-4-writing-your-first-mapreduce-job/
added the Windows hack from
http://bigdatanerd.wordpress.com/2013/11/14/mapreduce-running-mapreduce-in-windows-file-system-debug-mapreduce-in-eclipse
and it all started working.
The following blog will be useful:
Running mapreduce in Windows filesystem
This problem seems to have been raised on Stack Overflow before, but my case is quite different. The file/folder location Hadoop is looking for is created under C:/tmp/hadoop-SYSTEM/mapred/local/taskTracker/jobcache/, and the job folders are created there while running the wordcount example. But even though the files and folders are available, it throws a FileNotFoundException, as if the files aren't being recognized. I even tried re-formatting the namenode, which is one of the solutions suggested in the forums, but the problem still exists.
Note: Hadoop version 0.20.2
ERROR:
13/04/11 10:24:20 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
13/04/11 10:24:21 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/04/11 10:24:21 INFO input.FileInputFormat: Total input paths to process : 1
13/04/11 10:24:22 INFO mapred.JobClient: Running job: job_201304111023_0001
13/04/11 10:24:23 INFO mapred.JobClient: map 0% reduce 0%
13/04/11 10:24:34 INFO mapred.JobClient: Task Id : attempt_201304111023_0001_m_000002_0, Status : FAILED
java.io.FileNotFoundException: File C:/tmp/hadoop-SYSTEM/mapred/local/taskTracker/jobcache/job_201304111023_0001/attempt_201304111023_0001_m_000002_0/work/tmp does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.mapred.TaskRunner.setupWorkDir(TaskRunner.java:519)
at org.apache.hadoop.mapred.Child.main(Child.java:155)
Check whether the permissions on that folder have been set properly; this type of error can occur if write permissions have not been granted on the folder.
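On Windows you can inspect and, if necessary, grant full access with icacls; a sketch (substitute the account the TaskTracker actually runs under for SYSTEM):

icacls C:\tmp\hadoop-SYSTEM
icacls C:\tmp\hadoop-SYSTEM /grant SYSTEM:(OI)(CI)F /T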
After struggling to get proper test suites working, I'm now pretty disappointed: while following this tutorial as closely as possible (pretty straightforward, right?), Setting up Selenium server on a headless Jenkins CI build machine, Jenkins keeps looping on the current build, outputting:
So I decided to run a Selenium build by hand on the CI machine, and got this:
user#machine:/var/log$ export DISPLAY=":99" && java -jar /var/lib/selenium/selenium- server.jar -browserSessionReuse -htmlSuite *firefox http://staging.site.com /var/lib/jenkins/jobs/project/workspace/tests/selenium/testsuite.html /var/lib/jenkins/jobs/project/workspace/logs/selenium.html
24 janv. 2012 19:27:56 org.openqa.grid.selenium.GridLauncher main
INFO: Launching a standalone server
19:27:59.927 INFO - Java: Sun Microsystems Inc. 20.0-b11
19:27:59.929 INFO - OS: Linux 3.0.0-14-generic amd64
19:27:59.951 INFO - v2.17.0, with Core v2.17.0. Built from revision 15540
19:27:59.958 INFO - Will recycle browser sessions when possible.
19:28:00.143 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:4444/wd/hub
19:28:00.144 INFO - Version Jetty/5.1.x
19:28:00.145 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
19:28:00.147 INFO - Started HttpContext[/selenium-server,/selenium-server]
19:28:00.147 INFO - Started HttpContext[/,/]
19:28:00.183 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler#16ba8602
19:28:00.184 INFO - Started HttpContext[/wd,/wd]
19:28:00.199 INFO - Started SocketListener on 0.0.0.0:4444
19:28:00.199 INFO - Started org.openqa.jetty.jetty.Server#6f7a29a1
HTML suite exception seen:
java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:900)
at org.openqa.selenium.server.SeleniumServer.runHtmlSuite(SeleniumServer.java:603)
at org.openqa.selenium.server.SeleniumServer.boot(SeleniumServer.java:287)
at org.openqa.selenium.server.SeleniumServer.main(SeleniumServer.java:245)
at org.openqa.grid.selenium.GridLauncher.main(GridLauncher.java:54)
19:28:00.218 INFO - Shutting down...
19:28:00.220 INFO - Stopping Acceptor ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=4444]
While understanding the output isn't that hard, figuring out how to resolve the issue is.
Any chance you guys have already faced this kind of thing? Thanks
I only just got past these problems myself, but I was able to run your command when I pointed it at my .jar, test suite, and report file. I'm thinking that perhaps the location of your files under
/var/lib/selenium
could be part of the problem. Try putting them somewhere your user has permission, perhaps under
/home/USERNAME/selenium
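For example (USERNAME being your actual account, and paths taken from your command):

# copy the server jar somewhere your user can write
mkdir -p /home/USERNAME/selenium
cp /var/lib/selenium/selenium-server.jar /home/USERNAME/selenium/
# or keep it in place and take ownership instead (as root)
chown -R USERNAME:USERNAME /var/lib/selenium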
Other than that, the only thing I can say is to make sure your .jar, test suite, and report file are valid.
Also (I assume this is a copy-and-paste error into Stack Overflow), this part of your command is incorrect:
/var/lib/selenium/selenium- server.jar
You are not getting the error I would expect from an incorrect jar location, so I assume something was lost when you pasted it into Stack Overflow.