StackOverflowError when running Spring Batch job in Spring XD - spring-batch

I have a Spring Batch job that runs through Spring XD. When I run it through my IDE everything works fine, but through Spring XD I get the following error:
2015-07-21T12:05:51-0400 1.2.0.RELEASE ERROR task-scheduler-2 step.AbstractStep - Encountered an error executing step loadPublicationRunStep in job MyLoader
java.lang.StackOverflowError: null
at com.google.gson.internal.$Gson$Types$WildcardTypeImpl.<init>($Gson$Types.java:542) ~[gson-2.2.4.jar:na]
at com.google.gson.internal.$Gson$Types.canonicalize($Gson$Types.java:108) ~[gson-2.2.4.jar:na]
at com.google.gson.internal.$Gson$Types$WildcardTypeImpl.<init>($Gson$Types.java:549) ~[gson-2.2.4.jar:na]
at com.google.gson.internal.$Gson$Types.canonicalize($Gson$Types.java:108) ~[gson-2.2.4.jar:na]
(the logs go on and on...)
Has anyone encountered something similar? Does anyone know what the potential cause could be?
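For what it's worth, Gson (2.2.x included) has no cycle detection, so any self-referential structure, whether in the object graph or, as the WildcardTypeImpl/canonicalize loop above suggests, in a generic type signature, will recurse until the stack overflows. A minimal sketch of the object-graph variant (the Node class here is hypothetical, not from the original job):
import com.google.gson.Gson;

public class GsonCycleDemo {

    // Hypothetical self-referential class, just to illustrate the failure mode.
    static class Node {
        Node next;
    }

    public static void main(String[] args) {
        Node a = new Node();
        Node b = new Node();
        a.next = b;
        b.next = a; // cycle: a -> b -> a

        // Gson walks the object graph recursively and has no cycle detection,
        // so this call throws java.lang.StackOverflowError.
        new Gson().toJson(a);
    }
}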

Related

RuntimeException trying to run Drools examples

I downloaded Drools 7.46.0.Final and extracted the contents to my local drive. When I try to run the examples from the Linux command line using the provided runExamples.sh script, I get the following exception. I've tried with Java 8 and Java 11 (the only versions I have installed). Does this really require Java 6 like the message recommends, or is there some other problem here?
I'm new to Drools, so I'm afraid I'm not sure how to troubleshoot this.
UPDATE: interestingly, I tried version 7.44.0.Final and that runs fine. So I downloaded 7.45.0.Final, and that one is broken too. Something changed between 7.44 and 7.45 that's causing this.
10:06:44.154 [main] INFO o.k.a.i.utils.ServiceDiscoveryImpl.processKieService:129 - Cannot load service: org.kie.internal.process.CorrelationKeyFactory
10:06:44.157 [main] ERROR o.k.a.i.utils.ServiceDiscoveryImpl.processKieService:131 - Loading failed because There already exists an implementation for service org.drools.core.reteoo.KieComponentFactoryFactory with same priority 0
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.drools.dynamic.DynamicServiceRegistrySupplier.get(DynamicServiceRegistrySupplier.java:32)
at org.drools.dynamic.DynamicServiceRegistrySupplier.get(DynamicServiceRegistrySupplier.java:23)
at org.kie.api.internal.utils.ServiceRegistry$Impl.getServiceRegistry(ServiceRegistry.java:88)
at org.kie.api.internal.utils.ServiceRegistry$ServiceRegistryHolder.<clinit>(ServiceRegistry.java:47)
at org.kie.api.internal.utils.ServiceRegistry.getInstance(ServiceRegistry.java:39)
at org.kie.api.internal.utils.ServiceRegistry.getService(ServiceRegistry.java:35)
at org.kie.api.KieServices$Factory$LazyHolder.<clinit>(KieServices.java:358)
at org.kie.api.KieServices$Factory.get(KieServices.java:365)
at org.kie.api.KieServices.get(KieServices.java:349)
at org.drools.examples.DroolsExamplesApp.<init>(DroolsExamplesApp.java:59)
at org.drools.examples.DroolsExamplesApp.main(DroolsExamplesApp.java:52)
Caused by: java.lang.RuntimeException: Unable to build kie service url = jar:file:/home/davek/apps/drools-distribution-7.46.0.Final/examples/binaries/drools-examples-7.46.0.Final.jar!/META-INF/kie.conf
at org.kie.api.internal.utils.ServiceDiscoveryImpl.registerConfs(ServiceDiscoveryImpl.java:105)
at org.kie.api.internal.utils.ServiceDiscoveryImpl.lambda$getServices$1(ServiceDiscoveryImpl.java:83)
at java.util.Optional.ifPresent(Optional.java:159)
at org.kie.api.internal.utils.ServiceDiscoveryImpl.getServices(ServiceDiscoveryImpl.java:81)
at org.kie.api.internal.utils.ServiceRegistry$Impl.<init>(ServiceRegistry.java:60)
at org.drools.dynamic.DynamicServiceRegistrySupplier$LazyHolder.<clinit>(DynamicServiceRegistrySupplier.java:27)
... 11 more
Caused by: java.lang.RuntimeException: There already exists an implementation for service org.drools.core.reteoo.KieComponentFactoryFactory with same priority 0
at org.kie.api.internal.utils.ServiceDiscoveryImpl$PriorityMap.put(ServiceDiscoveryImpl.java:222)
at org.kie.api.internal.utils.ServiceDiscoveryImpl.processKieService(ServiceDiscoveryImpl.java:124)
at org.kie.api.internal.utils.ServiceDiscoveryImpl.registerConfs(ServiceDiscoveryImpl.java:101)
... 16 more
Unfortunately this is a known issue that I fixed with this commit.
The upcoming Drools 7.47.0.Final (to be released next week) won't suffer from this.
Switching to version 8.16.0.Beta or newer resolved this for me.

quartz Fire Job immediately doesn't work

I integrated Quartz 2 and Spring 4 with Maven and Java annotations (using Servlet 3), and I am deploying my project with the Tomcat 7 Maven plugin. My Quartz configuration class looks like this:
and my job class is defined simply as below:
then I use the Quartz Scheduler to fire my job's trigger immediately, as below:
But my problem is: when I call the fireNow method with the "job1", "mygroup" parameters, nothing happens; my job1 is not called immediately and nothing is printed to the console. I also tracked the DB tables and noticed that
after running the fireNow method, a new row is inserted in my qrtz_triggers table in MySQL:
If the Quartz scheduler is not set to start automatically, you need to start it explicitly:
scheduler.start();
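For reference, a minimal self-contained sketch using the plain Quartz 2.x API (outside Spring; the HelloJob class and identifiers here are illustrative):
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class FireNowDemo {

    public static class HelloJob implements Job {
        @Override
        public void execute(JobExecutionContext ctx) {
            System.out.println("job1 fired");
        }
    }

    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start(); // without start(), the scheduler stays in standby and nothing fires

        // Register a durable job so it can be triggered on demand.
        JobDetail job = JobBuilder.newJob(HelloJob.class)
                .withIdentity("job1", "mygroup")
                .storeDurably()
                .build();
        scheduler.addJob(job, true);

        scheduler.triggerJob(new JobKey("job1", "mygroup")); // fire immediately
        Thread.sleep(1000); // give the worker thread a moment to run the job
        scheduler.shutdown(true);
    }
}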
If the Quartz scheduler started successfully, you should see information similar to the following in your log or console output.
[main] INFO org.quartz.core.QuartzScheduler - Scheduler meta-data: Quartz Scheduler (v2.2.1)'org.springframework.scheduling.quartz.SchedulerFactoryBean#0' with instanceId 'MyScheduler'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
...
[main] INFO org.quartz.core.QuartzScheduler - started
Finally I found the solution to my problem. After enabling Quartz logging in log4j (adding log4j.logger.org.quartz=DEBUG to my log4j.properties), I saw the JDBC exception in the console; the exception was related to using an outdated Quartz SQL script.
I had the Quartz 2.2.1 dependency in my POM, but I had created the tables with the Quartz SQL script for version 2.1.7, and that mismatch between the Quartz jar and the SQL schema version caused errors about missing columns such as SCHED_TIME.

GCS Connector Class com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem not found

We are trying to run Hive queries on HDP 2.1 using the GCS connector. Everything was working fine until yesterday, but since this morning our jobs have randomly started failing. When we restart them manually they work fine. I suspect it has something to do with the number of parallel Hive jobs running at a given point in time.
Below is the error message:
vertexId=vertex_1407434664593_37527_2_00, diagnostics=[Vertex Input: audience_history initializer failed., java.lang.ClassNotFoundException: Class com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem not found]
DAG failed due to vertex failure. failedVertices:1 killedVertices:0
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask
Any help will be highly appreciated.
Thanks!
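Since the failures are intermittent and manual restarts fix them, one thing worth verifying is whether the gcs-connector jar is consistently on the classpath of every JVM involved. A small diagnostic sketch (illustrative only, not a fix; the class name comes from the error above):
public class GcsClasspathCheck {
    public static void main(String[] args) {
        try {
            Class<?> c = Class.forName("com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem");
            java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
            // Print which jar the class was actually loaded from.
            System.out.println("Found in: " + (src != null ? src.getLocation() : "<bootstrap classpath>"));
        } catch (ClassNotFoundException e) {
            System.out.println("gcs-connector is not on this JVM's classpath");
        }
    }
}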

Error running hadoop application in Eclipse on Windows

I'm trying to set up an Eclipse environment for developing and debugging Hadoop. I'm following Tom White's Hadoop: The Definitive Guide, 3rd ed. What I would like to do is get the MaxTemperature app working locally on my Windows machine within Eclipse before moving it to my Hortonworks sandbox VM. The comment on page 158 about using the local job runner seems to be what I want. I don't want to set up a full Hadoop installation on Windows; I'm hoping that with the right config params I can convince it to run as a Java application inside Eclipse.
Windows: 7
Eclipse: Luna
Hadoop: 2.4.0
JDK: 7
When I set the Run configuration for MaxTemperatureDriver (Source code on page 157) to
inputfile outputdir foo (deliberately bogus 3rd parameter)
I get the usage message, so I know I'm running my program with those params.
If I remove the bogus third param, I get:
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at mark.MaxTemperatureDriver.run(MaxTemperatureDriver.java:52)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at mark.MaxTemperatureDriver.main(MaxTemperatureDriver.java:56)
I've tried inserting -conf, but it seems to be ignored; there is no error message even if I specify a nonexistent path.
I've tried inserting -fs file:/// -jt local, but it makes no difference.
I've tried inserting -D mapreduce.framework.name=local.
I've tried specifying the input and output with the file: format (see the sketch after this list).
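Those flags boil down to two Configuration properties. For comparison, a sketch of setting them programmatically (assuming MaxTemperatureDriver is the book's Tool implementation; the launcher class name here is made up):
import mark.MaxTemperatureDriver; // the driver's package, per the stack trace above
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ToolRunner;

public class LocalDriverLauncher {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapreduce.framework.name", "local"); // force the LocalJobRunner
        conf.set("fs.defaultFS", "file:///");          // use the local filesystem
        // Hand the pre-configured Configuration to the Tool-based driver.
        System.exit(ToolRunner.run(conf, new MaxTemperatureDriver(), args));
    }
}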
Note: I'm not asking how to configure Eclipse to connect to a remote Hadoop installation. I want the application to run within Eclipse.
Is this possible? Any ideas?
Additional info:
I turned on debugging. I saw:
582 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
583 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Cannot pick org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider - returned null protocol
I'm wondering not why YarnClientProtocolProvider failed, but why it didn't try LocalClientProtocolProvider.
New info:
It seems that this is an issue with Hadoop 2.4.0. I recreated my environment with Hadoop 1.2.1, followed the instructions in
http://gerrymcnicol.com/index.php/2014/01/02/hadoop-and-cassandra-part-4-writing-your-first-mapreduce-job/
added the Windows hack from
http://bigdatanerd.wordpress.com/2013/11/14/mapreduce-running-mapreduce-in-windows-file-system-debug-mapreduce-in-eclipse
and it all started working.
The following blog post will be useful:
Running MapReduce in the Windows filesystem

How do I fix server Oops error when deploying Play Framework 2.0 application as a Windows service?

I was following the process in this very helpful post:
How do I run a Play Framework 2.0 application as a Windows service?
when I ran into trouble in Step 9. When I execute runConsole.bat, the service cycles between the running and restarting states. The full log is here:
wrapper.log
but what jumps out at me in the log are the following:
INFO|7268/0|play.core.server.NettyServer|13-12-28 13:07:28|Oops, cannot start the server.
INFO|7268/0|play.core.server.NettyServer|13-12-28 13:07:28|Configuration error: Configuration error[Cannot connect to database [default]]
...
INFO|7268/0|play.core.server.NettyServer|13-12-28 13:07:28|Caused by: org.h2.jdbc.JdbcSQLException: Database may be already in use: "Locked by another process". Possible solutions: close all other connection(s); use the server mode [90020-168]
...
INFO|wrapper|play.core.server.NettyServer|13-12-28 13:07:28|restart process due to default exit code rule
INFO|wrapper|play.core.server.NettyServer|13-12-28 13:07:28|restart internal RUNNING
INFO|wrapper|play.core.server.NettyServer|13-12-28 13:07:28|stopping process with pid/timeout 7268 45000
INFO|wrapper|play.core.server.NettyServer|13-12-28 13:07:30|process exit code: -1
...
INFO|7812/1|play.core.server.NettyServer|13-12-28 13:07:45|[error] c.j.b.h.AbstractConnectionHook - Failed to obtain initial connection Sleeping for 0ms and trying again. Attempts left: 0. Exception: null
INFO|7812/1|play.core.server.NettyServer|13-12-28 13:07:45|Oops, cannot start the server.
INFO|7812/1|play.core.server.NettyServer|13-12-28 13:07:45|Configuration error: Configuration error[Cannot connect to database [default]]
repeats several times...
Halfway through writing this post, I realized that before step 9 you should terminate the start.bat script that you started in step 6. That instance was still running and holding the lock on the embedded H2 database file, which is exactly what the "Locked by another process" error above is complaining about. Once I did this, runConsole.bat executed normally without any of the errors listed above.