Why does this error appear when I run the OptaPlanner .jar? - drools

I have an OptaPlanner application and I want to export it as a runnable JAR, but when I run the JAR an error appears and I don't know why.
I'm using the Vehicle Routing example, and I want to use the GUI without Eclipse.
Exception in thread "main" java.lang.NullPointerException
at org.kie.internal.io.ResourceFactory.newByteArrayResource(ResourceFactory.java:66)
at org.drools.compiler.kie.builder.impl.AbstractKieModule.getResource(AbstractKieModule.java:308)
at org.drools.compiler.kie.builder.impl.AbstractKieModule.addResourceToCompiler(AbstractKieModule.java:273)
at org.drools.compiler.kie.builder.impl.AbstractKieModule.addResourceToCompiler(AbstractKieModule.java:268)
at org.drools.compiler.kie.builder.impl.AbstractKieProject.buildKnowledgePackages(AbstractKieProject.java:253)
at org.drools.compiler.kie.builder.impl.AbstractKieProject.verify(AbstractKieProject.java:74)
at org.drools.compiler.kie.builder.impl.KieBuilderImpl.buildKieProject(KieBuilderImpl.java:267)
at org.drools.compiler.kie.builder.impl.KieBuilderImpl.buildAll(KieBuilderImpl.java:235)
at org.drools.compiler.kie.builder.impl.KieBuilderImpl.buildAll(KieBuilderImpl.java:184)
at org.optaplanner.core.config.score.director.ScoreDirectorFactoryConfig.buildDroolsScoreDirectorFactory(ScoreDirectorFactoryConfig.java:544)
at org.optaplanner.core.config.score.director.ScoreDirectorFactoryConfig.buildScoreDirectorFactory(ScoreDirectorFactoryConfig.java:351)
at org.optaplanner.core.config.solver.SolverConfig.buildSolver(SolverConfig.java:255)
at org.optaplanner.core.impl.solver.AbstractSolverFactory.buildSolver(AbstractSolverFactory.java:61)
at org.optaplanner.examples.common.app.CommonApp.createSolver(CommonApp.java:136)
at org.optaplanner.examples.common.app.CommonApp.createSolutionBusiness(CommonApp.java:124)
at org.optaplanner.examples.common.app.CommonApp.init(CommonApp.java:115)
at org.optaplanner.examples.common.app.CommonApp.init(CommonApp.java:111)
at org.optaplanner.examples.pmrouting.app.PMRoutingAPP.main(PMRoutingAPP.java:39)

Because Drools is incompatible with uber jars: the NullPointerException in ResourceFactory.newByteArrayResource shows that the rule resources (the .drl files) can't be read back out of the repackaged jar.
Use one of the other score calculators (easy or incremental Java score calculation) if you really want to run with an uber jar...
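For illustration, here is a minimal sketch of that switch, assuming an OptaPlanner 6.x/7.x API as in the stack trace; the class names, the placeholder score, and the solver config resource path are made up, and in the example apps the equivalent change is replacing scoreDrl with easyScoreCalculatorClass in the solver config XML:
import org.optaplanner.core.api.score.Score;
import org.optaplanner.core.api.score.buildin.hardsoftlong.HardSoftLongScore;
import org.optaplanner.core.api.solver.Solver;
import org.optaplanner.core.api.solver.SolverFactory;
import org.optaplanner.core.config.score.director.ScoreDirectorFactoryConfig;
import org.optaplanner.core.impl.score.director.easy.EasyScoreCalculator;
import org.optaplanner.examples.vehiclerouting.domain.VehicleRoutingSolution;

public class UberJarFriendlySolverSetup {

    // Plain-Java calculator: no Drools, so nothing has to be compiled from .drl resources inside the jar.
    // The score below is a placeholder; the real VRP constraints would be computed here in Java.
    public static class VehicleRoutingEasyScoreCalculator
            implements EasyScoreCalculator<VehicleRoutingSolution> {
        @Override
        public Score calculateScore(VehicleRoutingSolution solution) {
            return HardSoftLongScore.valueOf(0L, 0L);
        }
    }

    public static Solver<VehicleRoutingSolution> buildSolver() {
        SolverFactory<VehicleRoutingSolution> solverFactory = SolverFactory.createFromXmlResource(
                "org/example/mySolverConfig.xml"); // assumed resource path, use your own config
        // Replace the Drools score director with the Java calculator before building the solver.
        ScoreDirectorFactoryConfig scoreDirectorFactoryConfig = new ScoreDirectorFactoryConfig();
        scoreDirectorFactoryConfig.setEasyScoreCalculatorClass(VehicleRoutingEasyScoreCalculator.class);
        solverFactory.getSolverConfig().setScoreDirectorFactoryConfig(scoreDirectorFactoryConfig);
        return solverFactory.buildSolver();
    }
}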

Related

NoClassDefFoundError when using instrumentation on Bluemix

I am trying to add a javaagent to my Bluemix app; this agent uses Instrumentation. The thing is that when I run the application I get the following error:
premain() - Instrumentation is already running
...
CWWKF0004E: An unknown exception occurred while installing or removing features. Exception: java.lang.NoClassDefFoundError: agent.ClassInstrumentorTransform
ERR at com.ibm.ws.kernel.feature.internal.subsystem.SubsystemFeatureDefinitionImpl.setHeader(SubsystemFeatureDefinitionImpl.java)
ERR at [internal classes]
I have tried creating another agent with the same Premain-Class and Agent-Class structure but with my own classes, and it works. I have also tried uploading my own copy of the Instrumentation classes and pointing the javaagent to them using Class-Path, but the error still appears.
Any suggestion as to what the problem could be?
I suspect the Bluemix environment may already be using Instrumentation. Any ideas how this can be checked and how I can resolve the interdependency?
Based on the error message, it looks like you have a class in a feature bundle that is trying to access a class from the javaagent, but you have not added the javaagent's package to org.osgi.framework.bootdelegation as described in the "Specifying Liberty profile bootstrap properties" topic in the knowledge center.
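For example, a minimal sketch of that bootstrap.properties entry; the agent.* pattern is only an assumption based on the agent.ClassInstrumentorTransform class in the stack trace, so list your agent's actual packages:
org.osgi.framework.bootdelegation=agent.*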

Testing a Samza application with RocksDB from SBT

I would like to run a Samza application (using the RocksDB KV store) from SBT. When I do ./sbt "run " I receive the following error:
java.lang.ExceptionInInitializerError
(snip)
Caused by: java.lang.RuntimeException: librocksdbjni-linux64.so was not found inside JAR.
(snip)
I assume that since I launch it with sbt's run task, sbt runs the classes directly, without assembling a JAR.
The dependencies are set correctly, and librocksdbjni-linux64.so is indeed inside the RocksDB JAR.
Do I have to create an assembly before running?
How can I test in this case without creating an assembly?
Well, librocksdbjni-linux64.so is a native (JNI) library, and native libraries usually need a little extra handling in order to be found and loaded, even when the JAR that contains them is on the classpath. Check this question.
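For background, System.load/System.loadLibrary can only load a native library from a real file on disk, so a .so that ships inside a jar has to be extracted first (the RocksDB JNI loader normally does this itself when it can see the resource on its classpath). A rough, generic sketch of that pattern, not RocksDB's actual loader, with a made-up class name:
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class NativeLibFromJar {

    // Copy a native library bundled on the classpath out to a temp file and load it by absolute path.
    public static void loadFromClasspath(String resourceName) throws Exception {
        try (InputStream in = NativeLibFromJar.class.getResourceAsStream("/" + resourceName)) {
            if (in == null) {
                throw new IllegalStateException(resourceName + " was not found on the classpath");
            }
            Path tmp = Files.createTempFile("native-", resourceName);
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            System.load(tmp.toAbsolutePath().toString());
        }
    }
}
Since the error says the .so was not found inside the jar at all, it is also worth checking which classpath sbt's run task actually uses; forking the run (so the application gets its own plain JVM) is a common thing to try before falling back to building an assembly.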

NoSuchMethodError in Flink when using a dataset with a custom object array

I have a problem with Flink
java.lang.NoSuchMethodError: org.apache.flink.api.java.typeutils.ObjectArrayTypeInfo.getInfoFor(Lorg/apache/flink/api/common/typeinfo/TypeInformation;)Lorg/apache/flink/api/java/typeutils/ObjectArrayTypeInfo;
at LowLevel.FlinkImplementation.FlinkImplementation$$anon$6.<init>(FlinkImplementation.scala:28)
at LowLevel.FlinkImplementation.FlinkImplementation.<init>(FlinkImplementation.scala:28)
at IRLogic.GmqlServer.<init>(GmqlServer.scala:15)
at it.polimi.App$.main(App.scala:20)
at it.polimi.App.main(App.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
...
The line with the problem is this one:
implicit val regionTypeInformation =
api.scala.createTypeInformation[FlinkDataTypes.FlinkRegionType]
In FlinkRegionType I have an Array of a custom object.
I developed the app with the Maven plugin in the IDE and everything works fine, but when I move to the version I downloaded from the website I get the error above.
I am using Flink 0.9.
I was thinking that some library may be missing, but I am using Maven to handle everything. Moreover, reading through the code of ObjectArrayTypeInfo.java, it doesn't seem to be the problem.
A NoSuchMethodError commonly indicates a version mismatch between the libraries a Flink program was compiled against and the system the program is executed on, especially if the same code works in an IDE setup where the compile and execution libraries are the same.
In that case, you should check the versions of the Flink dependencies, for example in the Maven POM file, and make sure they match the Flink version installed on the cluster.
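For illustration, the relevant POM entries might look like this; the flink-scala and flink-clients artifact names and the 0.9.1 version are assumptions based on the question, so align them with the distribution that is actually installed:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-scala</artifactId>
    <version>0.9.1</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients</artifactId>
    <version>0.9.1</version>
</dependency>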

How to fix the Sigar library when I run a Spray application?

I have an sbt project written in Scala. The project uses Akka and Spray. There is a class with a main function. When I run the Scala console application, sometimes I get:
[on-spray-can-akka.actor.default-dispatcher-4] [DEBUG] [2014-11-07 16:48:30,336] Sigar: no sigar-amd64-winnt.dll in java.library.path
org.hyperic.sigar.SigarException: no sigar-amd64-winnt.dll in java.library.path
I don't change anything, run it again, and it runs fine. So it can run successfully or fail several times in a row. How can I fix this?
UPDATED
Also, when it starts normally, there is this message:
[INFO] [11/07/2014 17:02:36.772] [on-spray-can-akka.actor.default-dispatcher-2]
[Cluster(akka://myApp)] Cluster Node [akka.tcp://myApp#127.0.0.1:2551] - Metrics will be
retreived from MBeans, and may be incorrect on some platforms. To increase metric accuracy
add the 'sigar.jar' to the classpath and the appropriate platform-specific native libary to
'java.library.path'. Reason: java.lang.IllegalArgumentException: java.lang.UnsatisfiedLinkError:
org.hyperic.sigar.Sigar.getPid()J
Sigar is a native library for gathering performance stats, used by the Typesafe Console (atmos) Scala library. If you're not interested in hooking up Typesafe Console to your application, you can simply remove all references to the atmos library from the sbt build script and the app config files without affecting your app's functionality.

Launch a MapReduce job from Eclipse

I've written a mapreduce program in Java, which I can submit to a remote cluster running in distributed mode. Currently, I submit the job using the following steps:
export the mapreduce job as a jar (e.g. myMRjob.jar)
submit the job to the remote cluster using the following shell command: hadoop jar myMRjob.jar
I would like to submit the job directly from Eclipse when I try to run the program. How can I do this?
I am currently using CDH3, and an abridged version of my conf is:
conf.set("hbase.zookeeper.quorum", getZookeeperServers());
conf.set("fs.default.name","hdfs://namenode/");
conf.set("mapred.job.tracker", "jobtracker:jtPort");
Job job = new Job(conf, "COUNT ROWS");
job.setJarByClass(CountRows.class);
// Set up Mapper
TableMapReduceUtil.initTableMapperJob(inputTable, scan,
        CountRows.MyMapper.class, ImmutableBytesWritable.class,
        ImmutableBytesWritable.class, job);
// Set up Reducer
job.setReducerClass(CountRows.MyReducer.class);
job.setNumReduceTasks(16);
// Setup Overall Output
job.setOutputFormatClass(MultiTableOutputFormat.class);
job.submit();
When I run this directly from Eclipse, the job is launched but Hadoop cannot find the mappers/reducers. I get the following errors:
12/06/27 23:23:29 INFO mapred.JobClient: map 0% reduce 0%
12/06/27 23:23:37 INFO mapred.JobClient: Task Id : attempt_201206152147_0645_m_000000_0, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.mypkg.mapreduce.CountRows$MyMapper
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:996)
at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:212)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:602)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
...
Does anyone know how to get past these errors? If I can fix this, I can integrate more MR jobs into my scripts, which would be awesome!
If you're submitting the Hadoop job from within the Eclipse project that defines the classes for the job, then you most probably have a classpath problem.
The job.setJarByClass(CountRows.class) call is finding the class file on the build classpath, and not in CountRows.jar (which may or may not have been built yet, or even be on the classpath).
You should be able to assert this is true by printing out the result of job.getJar() after you call job.setJarByClass(..); if it doesn't display a jar file path, then it has found the build class rather than the jar'd class.
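A minimal sketch of that check, combined with the explicit setJar fallback that the next answer mentions (the helper class and the jar path are hypothetical):
import org.apache.hadoop.mapreduce.Job;

public class JobJarCheck {

    public static void ensureJobJar(Job job) {
        // After setJarByClass, getJar() should return a jar file path. If it returns null,
        // Hadoop only found the class on the build classpath, nothing gets shipped to the
        // cluster, and the tasks fail with ClassNotFoundException for the Mapper/Reducer.
        System.out.println("Resolved job jar: " + job.getJar());
        if (job.getJar() == null) {
            job.setJar("/path/to/myMRjob.jar"); // hypothetical path to the exported jar
        }
    }
}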
What worked for me was exporting a runnable JAR (the difference from a plain JAR is that a runnable JAR defines the class that has the main method) and selecting the "packaging required libraries into JAR" option (choosing the "extracting..." option leads to duplicate errors, and it also has to extract the class files from the jars, which, in my case, ultimately did not resolve the class-not-found exception).
After that, you can just set the jar, as was suggested by Chris White. For Windows it would look like this: job.setJar("C:\\MyJar.jar");
If it helps anybody, I made a presentation on what I learned from creating a MapReduce project and running it on Hadoop 2.2.0 on Windows 7 (in Eclipse Luna).
I have used the method from the following website to configure a Map/Reduce project of mine to run using Eclipse (without exporting the project as a JAR):
Configuring Eclipse to run Hadoop Map/Reduce project
Note: If you decide to debug your program, your Mapper class and Reducer class won't be debuggable.
Hope it helps. :)