I integrated Quartz 2 and Spring 4 with Maven and Java annotations (using Servlet 3), and I am using the Tomcat 7 Maven plugin to deploy my project. My Quartz configuration class is defined as below:
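(In outline, something like this minimal sketch; the class name and the JDBC job-store wiring are assumptions, matching the qrtz_* tables in MySQL mentioned below.)

import java.util.Properties;

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

@Configuration
public class QuartzConfig {

    @Autowired
    private DataSource dataSource; // assumed MySQL datasource

    @Bean
    public SchedulerFactoryBean schedulerFactoryBean() {
        SchedulerFactoryBean factory = new SchedulerFactoryBean();
        // Giving the factory a datasource makes Spring use a JDBC-backed
        // job store, which is what fills the qrtz_* tables in MySQL.
        factory.setDataSource(dataSource);
        Properties props = new Properties();
        props.put("org.quartz.jobStore.driverDelegateClass",
                "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        factory.setQuartzProperties(props);
        return factory;
    }
}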
My job class is defined simply, as below:
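(Something like this minimal sketch; the class name and the println are illustrative.)

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class MyJob implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // Print something so an immediate firing is easy to see in the console.
        System.out.println("MyJob fired at " + context.getFireTime());
    }
}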
Then I use the Quartz Scheduler to fire my job's trigger immediately, as below:
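(In shape, something like this sketch; the service class and the injected Scheduler are assumptions.)

import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;

public class JobService {

    // Assumed to be injected from the SchedulerFactoryBean above.
    private final Scheduler scheduler;

    public JobService(Scheduler scheduler) {
        this.scheduler = scheduler;
    }

    public void fireNow(String jobName, String groupName) throws SchedulerException {
        // Triggers a job that is already stored in the scheduler,
        // e.g. fireNow("job1", "mygroup").
        scheduler.triggerJob(new JobKey(jobName, groupName));
    }
}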
My problem is: when I call the fireNow method with the "job1", "mygroup" parameters, nothing happens. job1 is not called immediately and nothing is printed to the console. I also tracked the DB tables and noticed that
after running the fireNow method a new row is inserted into my qrtz_triggers table in MySQL.
If the Quartz scheduler is not set to start automatically, you need to start it explicitly:
scheduler.start();
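In plain Quartz that looks roughly like the following standalone sketch (with Spring's SchedulerFactoryBean the scheduler auto-starts by default unless autoStartup is set to false):

import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class StartScheduler {

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        // Until start() is called the scheduler stays in standby mode
        // and no triggers fire.
        scheduler.start();
    }
}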
If the Quartz scheduler started successfully, you should see information in your log or console output similar to the following:
[main] INFO org.quartz.core.QuartzScheduler - Scheduler meta-data: Quartz Scheduler (v2.2.1) 'org.springframework.scheduling.quartz.SchedulerFactoryBean#0' with instanceId 'MyScheduler'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
...
[main] INFO org.quartz.core.QuartzScheduler - started
Finally I found the solution to my problem. After enabling Quartz logging in log4j (adding log4j.logger.org.quartz=DEBUG to my log4j.properties), I saw the JDBC exception in the console; the exception was related to using an outdated Quartz SQL script.
I had added the Quartz 2.2.1 dependency in my POM but used the Quartz SQL script for version 2.1.7. That mismatch between the Quartz jar and the Quartz SQL script version caused some columns, such as SCHED_TIME, to be missing.
I downloaded Drools 7.46.0.Final and extracted the contents to my local drive. When I try to run the examples from the Linux command line using the provided runExamples.sh script, I get the following exception. I've tried with Java 8 and Java 11 (the only versions I have installed). Does this really require Java 6 as the message recommends, or is there some other problem here?
I'm new to Drools, so I'm afraid I'm not sure how to troubleshoot this.
UPDATE: interestingly, I tried version 7.44.0.Final and that runs fine. So I downloaded 7.45.0.Final, and that one is broken too. Something changed between 7.44 and 7.45 that's causing this.
10:06:44.154 [main] INFO o.k.a.i.utils.ServiceDiscoveryImpl.processKieService:129 - Cannot load service: org.kie.internal.process.CorrelationKeyFactory
10:06:44.157 [main] ERROR o.k.a.i.utils.ServiceDiscoveryImpl.processKieService:131 - Loading failed because There already exists an implementation for service org.drools.core.reteoo.KieComponentFactoryFactory with same priority 0
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.drools.dynamic.DynamicServiceRegistrySupplier.get(DynamicServiceRegistrySupplier.java:32)
at org.drools.dynamic.DynamicServiceRegistrySupplier.get(DynamicServiceRegistrySupplier.java:23)
at org.kie.api.internal.utils.ServiceRegistry$Impl.getServiceRegistry(ServiceRegistry.java:88)
at org.kie.api.internal.utils.ServiceRegistry$ServiceRegistryHolder.<clinit>(ServiceRegistry.java:47)
at org.kie.api.internal.utils.ServiceRegistry.getInstance(ServiceRegistry.java:39)
at org.kie.api.internal.utils.ServiceRegistry.getService(ServiceRegistry.java:35)
at org.kie.api.KieServices$Factory$LazyHolder.<clinit>(KieServices.java:358)
at org.kie.api.KieServices$Factory.get(KieServices.java:365)
at org.kie.api.KieServices.get(KieServices.java:349)
at org.drools.examples.DroolsExamplesApp.<init>(DroolsExamplesApp.java:59)
at org.drools.examples.DroolsExamplesApp.main(DroolsExamplesApp.java:52)
Caused by: java.lang.RuntimeException: Unable to build kie service url = jar:file:/home/davek/apps/drools-distribution-7.46.0.Final/examples/binaries/drools-examples-7.46.0.Final.jar!/META-INF/kie.conf
at org.kie.api.internal.utils.ServiceDiscoveryImpl.registerConfs(ServiceDiscoveryImpl.java:105)
at org.kie.api.internal.utils.ServiceDiscoveryImpl.lambda$getServices$1(ServiceDiscoveryImpl.java:83)
at java.util.Optional.ifPresent(Optional.java:159)
at org.kie.api.internal.utils.ServiceDiscoveryImpl.getServices(ServiceDiscoveryImpl.java:81)
at org.kie.api.internal.utils.ServiceRegistry$Impl.<init>(ServiceRegistry.java:60)
at org.drools.dynamic.DynamicServiceRegistrySupplier$LazyHolder.<clinit>(DynamicServiceRegistrySupplier.java:27)
... 11 more
Caused by: java.lang.RuntimeException: There already exists an implementation for service org.drools.core.reteoo.KieComponentFactoryFactory with same priority 0
at org.kie.api.internal.utils.ServiceDiscoveryImpl$PriorityMap.put(ServiceDiscoveryImpl.java:222)
at org.kie.api.internal.utils.ServiceDiscoveryImpl.processKieService(ServiceDiscoveryImpl.java:124)
at org.kie.api.internal.utils.ServiceDiscoveryImpl.registerConfs(ServiceDiscoveryImpl.java:101)
... 16 more
Unfortunately this is a known issue that I fixed with this commit.
The upcoming Drools 7.47.0.Final (to be released next week) won't suffer from this.
Switching to version 8.16.0.Beta or newer resolved this for me.
We are facing a major incident in our Camunda Orchestrator. When we hit 100 running process instances, Camunda Cockpit takes an eternity and never responds.
We have the same issue when calling /app/engine/.
A few messages are consumed from RabbitMQ, and then everything stops.
The application itself, however, is not down.
I suspect a process engine configuration issue, because I am unable to get the job executor log.
When I set JobExecutorActivate to false, Cockpit and queue consumption work fine, but processes stop at the end of the first subprocess.
We have this log looping non-stop:
2018/11/17 14:47:33.258 DEBUG ENGINE-14012 Job acquisition thread woke up
2018/11/17 14:47:33.258 DEBUG ENGINE-14022 Acquired 0 jobs for process engine 'default': []
2018/11/17 14:47:33.258 DEBUG ENGINE-14023 Execute jobs for process engine 'default': [8338]
2018/11/17 14:47:33.258 DEBUG ENGINE-14023 Execute jobs for process engine 'default': [8217]
2018/11/17 14:47:33.258 DEBUG ENGINE-14023 Execute jobs for process engine 'default': [8256]
2018/11/17 14:47:33.258 DEBUG ENGINE-14011 Job acquisition thread sleeping for 100 millis
2018/11/17 14:47:33.359 DEBUG ENGINE-14012 Job acquisition thread woke up
And this log too (for queue consumption):
2018/11/17 15:04:19.582 DEBUG Waiting for message from consumer. {"null":null}
2018/11/17 15:04:19.582 DEBUG Retrieving delivery for Consumer@5d05f453: tags=[{amq.ctag-0ivcbc2QL7g-Duyu2Rcbow=queue_response}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@127.0.0.1:5672/,4), conn: Proxy@77a5983d Shared Rabbit Connection: SimpleConnection@17a1dd78 [delegate=amqp://guest@127.0.0.1:5672/, localPort= 49812], acknowledgeMode=AUTO local queue size=0 {"null":null}
Environment:
Spring Boot 2.0.3.RELEASE, Camunda v7.9.0 with PostgreSQL, RabbitMQ
Camunda BPM listens to and pushes to 165 RabbitMQ queues.
Configuration:
# Data source (PostgreSql)
com.campDo.fr.camunda.datasource.url=jdbc:postgresql://localhost:5432/campDo
com.campDo.fr.camunda.datasource.username=campDo
com.campDo.fr.camunda.datasource.password=password
com.campDo.fr.camunda.datasource.driver-class-name=org.postgresql.Driver
com.campDo.fr.camunda.bpm.database.jdbc-batch-processing=false
oms.camunda.retry.timer=1
oms.camunda.retry.nb-max=2
SpringProcessEngineConfiguration:
@Bean
public SpringProcessEngineConfiguration processEngineConfiguration() throws IOException {
final SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
config.setDataSource(camundaDataSource);
config.setDatabaseSchemaUpdate("true");
config.setTransactionManager(transactionManager());
config.setHistory("audit");
config.setJobExecutorActivate(true);
config.setMetricsEnabled(false);
final Resource[] resources = resourceLoader.getResources(CLASSPATH_ALL_URL_PREFIX + "/processes/*.bpmn");
config.setDeploymentResources(resources);
return config;
}
POM dependencies:
<dependency>
<groupId>org.camunda.bpm.springboot</groupId>
<artifactId>camunda-bpm-spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.camunda.bpm.springboot</groupId>
<artifactId>camunda-bpm-spring-boot-starter-webapp</artifactId>
</dependency>
<dependency>
<groupId>org.camunda.bpm.springboot</groupId>
<artifactId>camunda-bpm-spring-boot-starter-rest</artifactId>
</dependency>
I am quite sure that my job executor config is wrong.
Update:
I can start Cockpit and make Camunda consume messages by setting JobExecutorActivate to false, but processes still stop at the first step that requires the job executor:
config.setJobExecutorActivate(false);
Thanks for your help.
First: if your process contains async steps (jobs), then it will pause there. Activating the job executor just means Camunda manages how those jobs are worked on. If you disable the executor, your processes will still stop, and since no one executes them, they remain stopped.
Disabling job execution is only sensible during testing, or when you have multiple nodes and only some of them should do processing.
To your main issue: the job executor works with a thread pool. From what you describe, it is very likely that all threads in the pool block forever, so they never finish and never return, meaning your system is stuck.
This happened to us a while ago when working with an SMTP server: there was an infinite timeout on the connection, so the threads kept waiting even though the machine was not available.
Since job execution in Camunda is highly reliable and well tested per se, I would suggest you double check everything you do in your delegates. If you are lucky (and I am right), you will find the spot where you just wait forever (see the sketch below for the kind of fix that helped us).
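For illustration, a hypothetical delegate of the kind that bit us (the class name, host, and timeout values are made up; the point is that without explicit timeouts a job-executor thread can block forever):

import java.util.Properties;

import javax.mail.Session;

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

public class SendMailDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com");  // assumed host
        // Without these two properties JavaMail defaults to an infinite
        // timeout, so an unresponsive SMTP server pins the thread forever.
        props.put("mail.smtp.connectiontimeout", "5000"); // connect timeout, ms
        props.put("mail.smtp.timeout", "5000");           // read timeout, ms
        Session session = Session.getInstance(props);
        // ... build and send the message with this session ...
    }
}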
I have a Spring Batch job that is running through Spring XD. When I run it through my IDE everything runs fine, but when I run it through Spring XD I get the following error:
2015-07-21T12:05:51-0400 1.2.0.RELEASE ERROR task-scheduler-2 step.AbstractStep - Encountered an error executing step loadPublicationRunStep in job MyLoader
java.lang.StackOverflowError: null
at com.google.gson.internal.$Gson$Types$WildcardTypeImpl.<init>($Gson$Types.java:542) ~[gson-2.2.4.jar:na]
at com.google.gson.internal.$Gson$Types.canonicalize($Gson$Types.java:108) ~[gson-2.2.4.jar:na]
at com.google.gson.internal.$Gson$Types$WildcardTypeImpl.<init>($Gson$Types.java:549) ~[gson-2.2.4.jar:na]
at com.google.gson.internal.$Gson$Types.canonicalize($Gson$Types.java:108) ~[gson-2.2.4.jar:na]
(the logs go on and on...)
Has anyone encountered something similar? Does anyone know what could be the potential cause?
I'm trying to set up an Eclipse environment for developing and debugging Hadoop. I'm following Tom White's Hadoop: The Definitive Guide, 3rd edition. What I would like to do is get the MaxTemperature app working locally on Windows within Eclipse before moving it to my Hortonworks sandbox VM. The comment on page 158 about using the local job runner seems to be what I want. I don't want to set up a full Hadoop installation on Windows; I'm hoping that with the right config params I can convince it to run as a Java application inside Eclipse.
Windows: 7
Eclipse: Luna
Hadoop: 2.4.0
JDK: 7
When I set the Run configuration for MaxTemperatureDriver (Source code on page 157) to
inputfile outputdir foo (deliberate bogus 3rd parameter)
I get the usage message so I know I'm running my program with those params.
If I remove the bogus third param, I get:
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at mark.MaxTemperatureDriver.run(MaxTemperatureDriver.java:52)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at mark.MaxTemperatureDriver.main(MaxTemperatureDriver.java:56)
I've tried inserting -conf, but it seems to be ignored: there is no error message if I specify a nonexistent path.
I've tried inserting -fs file:/// -jt local, but it makes no difference.
I've tried inserting -D mapreduce.framework.name=local.
I've tried specifying the input and output with the file: format (a consolidated sketch of these settings follows below).
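(For reference, the programmatic equivalent of the flags above would look roughly like this sketch; the actual job setup from page 157 is elided.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MaxTemperatureDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        conf.set("mapreduce.framework.name", "local"); // force LocalJobRunner
        conf.set("fs.defaultFS", "file:///");          // use the local filesystem
        // ... job setup as in the listing on page 157 ...
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new MaxTemperatureDriver(), args));
    }
}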
Note: I'm not asking how to configure Eclipse to connect to a remote Hadoop installation. I want the application to run within Eclipse.
Is this possible? Any ideas?
Additional info:
I turned on debugging. I saw:
582 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
583 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Cannot pick org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider - returned null protocol
I'm wondering not why YarnClientProtocolProvider failed, but why it didn't try LocalClientProtocolProvider.
New info:
It seems that this is an issue with Hadoop 2.4.0. I recreated my environment with Hadoop 1.2.1, followed the instructions in
http://gerrymcnicol.com/index.php/2014/01/02/hadoop-and-cassandra-part-4-writing-your-first-mapreduce-job/
added the Windows hack from
http://bigdatanerd.wordpress.com/2013/11/14/mapreduce-running-mapreduce-in-windows-file-system-debug-mapreduce-in-eclipse
and it all started working.
The following blog will be useful:
Running mapreduce in Windows filesystem
I am using Spring Boot 0.5.0.M6 with Spring Batch. Configuration is done using @EnableBatchProcessing, with the datasource etc. configured in application.properties.
During the first run of the application everything works fine, but after I stop and restart the application, the following error is seen:
org.springframework.dao.DuplicateKeyException: PreparedStatementCallback; SQL [INSERT into BATCH_JOB_INSTANCE(JOB_INSTANCE_ID, JOB_NAME, JOB_KEY, VERSION) values (?, ?, ?, ?)]; Duplicate entry '1' for key 'PRIMARY'; nested exception is com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '1' for key 'PRIMARY'
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:239)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:659)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:908)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:969)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:974)
Digging further, I observed the following lines in the logs:
2013-12-06 12:12:37 INFO ResourceDatabasePopulator:162 - Executing SQL script from class path resource [org/springframework/batch/core/schema-mysql.sql]
2013-12-06 12:12:37 INFO ResourceDatabasePopulator:217 - Done executing SQL script from class path resource [org/springframework/batch/core/schema-mysql.sql] in 13 ms.
The root problem here was that schema-drop-mysql.sql was not triggered but schema-mysql.sql was, thereby creating two entries in BATCH_JOB_SEQ.
To resolve this, I added:
@EnableAutoConfiguration(exclude={BatchAutoConfiguration.class})
However, because of this I need to execute schema-mysql.sql explicitly. That is OK for now, but it will become a problem when the spring-batch version is updated and the schema changes with it.
Hence I have a couple of questions:
1. How can the batch autoconfiguration be made to execute schema-drop-mysql.sql before schema-mysql.sql?
2. Is there a way to configure this BatchDatabaseInitializer to run in a kind of "update" mode?
Regards
With the current version of the Spring Batch autoconfiguration that isn't possible. With the upcoming version it will be possible to disable the automatic creation of the database tables by setting the spring.batch.initializer.enabled property to false.
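In application.properties that would be:
spring.batch.initializer.enabled=false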
IMHO you shouldn't use the automatic creation/update features to create schemas: either do it yourself or use tools like Liquibase or Flyway to do it in a more controlled way.
Also see https://stackoverflow.com/questions/8418814/db-migration-tool-liquibase-or-flyway
You can always execute schema-drop-mysql.sql yourself; as a workaround you could add a @PreDestroy method to your @Configuration class which executes this script. (Maybe you could even add this to a @Configuration class which is enabled only in a dev mode/profile.)
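A minimal sketch of that workaround (the class name is made up; the script path is the location of Spring Batch's bundled drop script):

import javax.annotation.PreDestroy;
import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.springframework.jdbc.datasource.init.DatabasePopulatorUtils;
import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;

@Configuration
public class BatchSchemaCleanupConfig {

    @Autowired
    private DataSource dataSource;

    @PreDestroy
    public void dropBatchSchema() {
        // Drop the batch tables on shutdown so the next start can
        // recreate them cleanly from schema-mysql.sql.
        ResourceDatabasePopulator populator = new ResourceDatabasePopulator();
        populator.addScript(new ClassPathResource(
                "org/springframework/batch/core/schema-drop-mysql.sql"));
        DatabasePopulatorUtils.execute(populator, dataSource);
    }
}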