Quartz Scheduler failed to initialize - quartz-scheduler

Getting these errors:
2018-01-22 18:00:59,797 [ServerService Thread Pool -- 79] ERROR org.quartz.ee.servlet.QuartzInitializerListener - Quartz Scheduler failed to initialize: org.quartz.SchedulerException: SchedulerPlugin class 'org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin' could not be instantiated. [See nested exception: java.lang.ClassNotFoundException: Unable to load class org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin by any known loaders.]
2018-01-22 18:00:59,797 [ServerService Thread Pool -- 79] ERROR stderr - org.quartz.SchedulerException: SchedulerPlugin class 'org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin' could not be instantiated. [See nested exception: java.lang.ClassNotFoundException: Unable to load class org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin by any known loaders.]
2018-01-22 18:00:59,805 [ServerService Thread Pool -- 79] ERROR stderr - Caused by: java.lang.ClassNotFoundException: Unable to load class org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin by any known loaders.
2018-01-22 18:00:59,805 [ServerService Thread Pool -- 79] ERROR stderr - Caused by: java.lang.LinkageError: Failed to link org/quartz/plugins/xml/XMLSchedulingDataProcessorPlugin (Module "deployment.Reports.ear:main" from Service Module Loader)
2018-01-22 18:00:59,806 [ServerService Thread Pool -- 79] ERROR stderr - Caused by: java.lang.NoClassDefFoundError: org/quartz/jobs/FileScanListener
2018-01-22 18:00:59,807 [ServerService Thread Pool -- 79] ERROR stderr - Caused by: java.lang.ClassNotFoundException: org.quartz.jobs.FileScanListener from [Module "deployment.Reports.ear:main" from Service Module Loader]
quartz.properties file:
org.quartz.scheduler.instanceName = Scheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 10
org.quartz.threadPool.threadPriority = 5
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.dataSource = quartzDataSource
org.quartz.jobStore.tablePrefix = QUARTZ.QRTZ_
#org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
org.quartz.dataSource.quartzDataSource.jndiURL=java:QuartzDataSource
org.quartz.dataSource.quartzDataSource.java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
org.quartz.plugin.jobInitializer.class = org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin
org.quartz.plugin.jobInitializer.fileNames = quartz-jobs-xml
org.quartz.plugin.jobInitializer.failOnFileNotFound = true
org.quartz.plugin.jobInitializer.scanInterval = 0
org.quartz.plugin.jobInitializer.wrapInUserTransaction =false
My quartz-jobs.xml file is in the same directory as my quartz.properties file. I tried playing around with that path a little, but I think the errors are just saying it can't instantiate my jobs.
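A quick way to check which of the classes named in the stack trace is actually missing from the deployment (a minimal plain-Java sketch; both class names are taken directly from the errors above, and it is assumed to run inside the same deployment classloader):

public class QuartzPluginClasspathCheck {
    public static void main(String[] args) {
        // Both names appear in the nested exceptions above. The LinkageError suggests the plugin
        // class itself is found but fails to link; if org.quartz.jobs.FileScanListener is the one
        // reported missing, the separate quartz-jobs artifact is what the deployment lacks
        // (which matches the fix below).
        String[] classes = {
                "org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin",
                "org.quartz.jobs.FileScanListener"
        };
        for (String name : classes) {
            try {
                Class.forName(name);
                System.out.println("OK:      " + name);
            } catch (ClassNotFoundException e) {
                System.out.println("MISSING: " + name);
            }
        }
    }
}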

I needed to add a new Quartz library to my build so that the XMLSchedulingDataProcessorPlugin I reference in my quartz.properties could be loaded.
In my root build.gradle I added this to my ext.libraries section:
quartz_jobs: 'org.quartz-scheduler:quartz-jobs:2.3.0'
Then in my web services build.gradle, where I use Quartz, I added the library to my providedCompile section:
providedCompile libraries.quartz_jobs
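To sanity-check the fix, here is a rough sketch that starts the scheduler from the quartz.properties shown above and lists the jobs the XML plugin registered. It assumes quartz.properties and quartz-jobs.xml are on the classpath, and since the job store points at the java:QuartzDataSource JNDI entry it really only works inside the server:

import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;
import org.quartz.impl.matchers.GroupMatcher;

public class VerifySchedulerStartup {
    public static void verify() throws Exception {
        // Builds the scheduler from the properties file shown above.
        Scheduler scheduler = new StdSchedulerFactory("quartz.properties").getScheduler();
        // With quartz-jobs on the classpath, the XML plugin can now load and
        // schedule the jobs defined in quartz-jobs.xml.
        scheduler.start();
        for (JobKey key : scheduler.getJobKeys(GroupMatcher.anyJobGroup())) {
            System.out.println("Registered job: " + key);
        }
    }
}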

Related

Livy start new session

I have a problem when I create a new session via Livy: the session dies right after it is created. I have installed Livy, Spark 3.0.0, Scala 2.12.10 and Python 3.7. I followed these steps:
step 01: start the Livy server
./livy-server start
step 02: start the Spark shell
spark-shell
step 03: run this code using Python 3:
import json, pprint, requests, textwrap
host = 'http://localhost:8998'
data = {'kind': 'spark'}
headers = {'Content-Type': 'application/json'}
r = requests.post(host + '/sessions', data=json.dumps(data), headers=headers)
r.json()
session_url = host + r.headers['location']
r = requests.get(session_url, headers=headers)
r.json()
statements_url = session_url + '/statements'
data = {'code': '1 + 1'}
r = requests.post(statements_url, data=json.dumps(data), headers=headers)
r.json()
statement_url = host + r.headers['location']
r = requests.get(statement_url, headers=headers)
pprint.pprint(r.json())
data = {
    'code': textwrap.dedent("""
      val NUM_SAMPLES = 100000;
      val count = sc.parallelize(1 to NUM_SAMPLES).map { i =>
        val x = Math.random();
        val y = Math.random();
        if (x*x + y*y < 1) 1 else 0
      }.reduce(_ + _);
      println(\"Pi is roughly \" + 4.0 * count / NUM_SAMPLES)
      """)
}
r = requests.post(statements_url, data=json.dumps(data), headers=headers)
pprint.pprint(r.json())
statement_url = host + r.headers['location']
r = requests.get(statement_url, headers=headers)
pprint.pprint(r.json())
This is the log of my session:
stderr:
20/02/17 14:20:06 WARN Utils: Your hostname, zekri-VirtualBox resolves to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface enp0s3)
20/02/17 14:20:06 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/usr/local/spark-3.0.0-preview2-bin-hadoop2.7/jars/spark-unsafe_2.12-3.0.0-preview2.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
20/02/17 14:20:07 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.NoClassDefFoundError: scala/Function0$class
at org.apache.livy.shaded.json4s.ThreadLocal.<init>(Formats.scala:311)
at org.apache.livy.shaded.json4s.DefaultFormats$class.$init$(Formats.scala:318)
at org.apache.livy.shaded.json4s.DefaultFormats$.<init>(Formats.scala:296)
at org.apache.livy.shaded.json4s.DefaultFormats$.<clinit>(Formats.scala)
at org.apache.livy.repl.Session.<init>(Session.scala:66)
at org.apache.livy.repl.ReplDriver.initializeSparkEntries(ReplDriver.scala:41)
at org.apache.livy.rsc.driver.RSCDriver.run(RSCDriver.java:337)
at org.apache.livy.rsc.driver.RSCDriverBootstrapper.main(RSCDriverBootstrapper.java:93)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: scala.Function0$class
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 20 more
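The last "Caused by" is the telling part: names like scala.Function0$class only exist in bytecode compiled for Scala 2.11 or earlier (Scala 2.12 removed the trait $class helpers), so the shaded json4s inside the Livy REPL appears to have been built against a different Scala major version than the Scala 2.12 this Spark 3.0.0 preview ships with (see the spark-unsafe_2.12 jar in the warnings above). A plain-Java diagnostic sketch, run with the same classpath as the driver, to see which scala-library is actually resolved:

public class ScalaVersionCheck {
    public static void main(String[] args) throws Exception {
        // Where scala-library is loaded from on this classpath.
        Class<?> f0 = Class.forName("scala.Function0");
        System.out.println("scala-library from: "
                + f0.getProtectionDomain().getCodeSource().getLocation());
        try {
            // This helper class exists only in Scala 2.11-or-earlier libraries.
            Class.forName("scala.Function0$class");
            System.out.println("Scala 2.11-style library (Function0$class present)");
        } catch (ClassNotFoundException e) {
            System.out.println("Scala 2.12+ library (Function0$class absent) - matches the error above");
        }
    }
}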

App created by 'jhipster --skipClient' can't run from Eclipse

java.lang.IllegalStateException: Error processing condition on org.springframework.boot.autoconfigure.context.MessageSourceAutoConfiguration
at org.springframework.boot.autoconfigure.condition.SpringBootCondition.matches(SpringBootCondition.java:64)
at org.springframework.context.annotation.ConditionEvaluator.shouldSkip(ConditionEvaluator.java:108)
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader$TrackedConditionEvaluator.shouldSkip(ConfigurationClassBeanDefinitionReader.java:441)
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsForConfigurationClass(ConfigurationClassBeanDefinitionReader.java:129)
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitions(ConfigurationClassBeanDefinitionReader.java:118)
at org.springframework.context.annotation.ConfigurationClassPostProcessor.processConfigBeanDefinitions(ConfigurationClassPostProcessor.java:328)
at org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanDefinitionRegistry(ConfigurationClassPostProcessor.java:233)
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanDefinitionRegistryPostProcessors(PostProcessorRegistrationDelegate.java:271)
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:91)
at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:692)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:530)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:386)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:307)
at com.einfochips.demo.TestApp.main(TestApp.java:63)
Caused by: java.lang.IllegalStateException: Failed to introspect Class [org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration] from ClassLoader [sun.misc.Launcher$AppClassLoader@659e0bfd]

Slick: java.util.concurrent.RejectedExecutionException in Quartz Job

I have a Play 2.5 app that uses Slick. The app has a few Quartz jobs scheduled that run within the app. The Quartz jobs use Slick to insert/update data in the database, and they fail with the exception below.
[error] o.q.c.ErrorLogger - Job (8a3a3ec7f96a2d1aceb2dc96c5dddaed.becdc8372f54d501222a2ca94c264ff0 threw an exception.
org.quartz.SchedulerException: Job threw an unhandled exception.
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: java.util.concurrent.RejectedExecutionException: Task slick.backend.DatabaseComponent$DatabaseDef$$anon$2@5fae8abe rejected from java.util.concurrent.ThreadPoolExecutor@39c55f20[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 268]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
at slick.backend.DatabaseComponent$DatabaseDef$class.runSynchronousDatabaseAction(DatabaseComponent.scala:230)
at slick.jdbc.JdbcBackend$DatabaseDef.runSynchronousDatabaseAction(JdbcBackend.scala:38)
at slick.backend.DatabaseComponent$DatabaseDef$class.runInContext(DatabaseComponent.scala:207)
at slick.jdbc.JdbcBackend$DatabaseDef.runInContext(JdbcBackend.scala:38)
at slick.backend.DatabaseComponent$DatabaseDef$class.runInternal(DatabaseComponent.scala:75)
at slick.jdbc.JdbcBackend$DatabaseDef.runInternal(JdbcBackend.scala:38)
This failure doesn't happen when Slick calls are made outside of the Quartz jobs, only when they run inside a Quartz job. Some of the initial Quartz job runs succeed, but after a few runs the error above is thrown.
What is causing this issue, and why does the stack trace say the pool size is zero, as shown below?
Caused by: java.util.concurrent.RejectedExecutionException: Task slick.backend.DatabaseComponent$DatabaseDef$$anon$2@5fae8abe rejected from java.util.concurrent.ThreadPoolExecutor@39c55f20[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 268]
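For context on that message: the [Terminated, pool size = 0, ...] part is the toString of the executor Slick submits the database action to, and "Terminated" means that executor has already been shut down by the time the job runs; any Java executor rejects new work in that state. A tiny generic Java sketch, not Slick-specific, that produces the same exception:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class RejectedAfterShutdown {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> System.out.println("runs fine while the pool is alive"));
        pool.shutdown(); // the pool drains and moves towards the Terminated state

        try {
            // Submitting after shutdown triggers the AbortPolicy, as in the trace above.
            pool.submit(() -> System.out.println("never runs"));
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}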
Below are the Slick and Quartz settings:
slick.dbs.default.driver="slick.driver.MySQLDriver$"
slick.dbs.default.db.driver="com.mysql.jdbc.Driver"
slick.dbs.default.db.url="jdbc:mysql://mysql-server:3306/database"
slick.dbs.default.db.user=username
slick.dbs.default.db.password=password
slick.dbs.default.db.queueSize = 10000
slick.dbs.default.db.connectionTimeout=5s
// Quartz Settings
scheduler = {
org.quartz.scheduler.instanceId = AUTO
org.quartz.threadPool.class = "org.quartz.simpl.SimpleThreadPool"
org.quartz.threadPool.threadCount = 100
org.quartz.threadPool.threadPriority = 5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread = true
org.quartz.dataSource.quartzDataSource.driver=com.mysql.jdbc.Driver
org.quartz.dataSource.quartzDataSource.URL="jdbc:mysql://mysql-server:3306/database"
org.quartz.dataSource.quartzDataSource.user=username
org.quartz.dataSource.quartzDataSource.password=password
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = "org.quartz.impl.jdbcjobstore.JobStoreTX"
org.quartz.jobStore.driverDelegateClass = "org.quartz.impl.jdbcjobstore.StdJDBCDelegate"
org.quartz.jobStore.dataSource = quartzDataSource
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.useProperties = true
org.quartz.dataSource.quartzDataSource.maxConnections=3
}

Getting NoClassDefFoundError when integrating Flume with Kafka

I am using Flume and trying to publish the output of "/usr/bin/vmstat 1" to a Kafka topic, but I get the exceptions shown in the stack trace below. Please help me find a solution to the problem.
==============================================================================
Flume Config
tier1.sources = source1
tier1.channels = channel1
tier1.sinks = sink1
tier1.sources.source1.type = exec
tier1.sources.source1.command = /usr/bin/vmstat 1
tier1.sources.source1.channels = channel1
tier1.channels.channel1.type = memory
tier1.channels.channel1.capacity = 10000
tier1.channels.channel1.transactionCapacity = 1000
tier1.sinks.sink1.type = org.apache.flume.sink.kafka.KafkaSink
tier1.sinks.sink1.topic = sink1
tier1.sinks.sink1.brokerList = localhost:9092
tier1.sinks.sink1.channel = channel1
tier1.sinks.sink1.batchSize = 20
==============================================================================
Exception
2015-09-23 10:39:36,936 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.source.ExecSource.start(ExecSource.java:169)] Exec source starting with command:/usr/bin/vmstat 1
2015-09-23 10:39:36,944 (lifecycleSupervisor-1-1) [ERROR - org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:253)] Unable to start SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@12c50438 counterGroup:{ name:null counters:{} } } - Exception follows.
java.lang.NoClassDefFoundError: scala/collection/IndexedSeqOptimized
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:643)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
at org.apache.flume.sink.kafka.KafkaSink.start(KafkaSink.java:163)
at org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:46)
at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)
Caused by: java.lang.ClassNotFoundException: scala.collection.IndexedSeqOptimized
at java.net.URLClassLoader$1.run(URLClassLoader.java:214)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
... 23 more
Caused by: java.util.zip.ZipException: invalid LOC header (bad signature)
at java.util.zip.ZipFile.read(Native Method)
at java.util.zip.ZipFile.access$1300(ZipFile.java:46)
at java.util.zip.ZipFile$ZipFileInputStream.read(ZipFile.java:485)
at java.util.zip.ZipFile$2.fill(ZipFile.java:268)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:158)
at sun.misc.Resource.getBytes(Resource.java:124)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:273)
at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
... 28 more
2015-09-23 10:39:36,954 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SOURCE, name: source1: Successfully registered new MBean.
2015-09-23 10:39:36,977 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SOURCE, name: source1 started
2015-09-23 10:39:36,977 (lifecycleSupervisor-1-3) [DEBUG - org.apache.flume.source.ExecSource.start(ExecSource.java:187)] Exec source started
2015-09-23 10:39:36,977 (lifecycleSupervisor-1-1) [ERROR - org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:264)] Unsuccessful attempt to shutdown component: {} due to missing dependencies. Please shutdown the agent or disable this component, or the agent will be in an undefined state.
java.lang.NullPointerException
at org.apache.flume.sink.kafka.KafkaSink.stop(KafkaSink.java:171)
at org.apache.flume.sink.DefaultSinkProcessor.stop(DefaultSinkProcessor.java:53)
at org.apache.flume.SinkRunner.stop(SinkRunner.java:115)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:259)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)
2015-09-23 10:39:39,978 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:233)] Component SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@12c50438 counterGroup:{ name:null counters:{} } } is in error state, and Flume will not attempt to change its state
^C2015-09-23 10:39:42,250 (pool-3-thread-1) [INFO - org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:376)] Command [/usr/bin/vmstat 1] exited with 130
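The deepest "Caused by" in the trace is java.util.zip.ZipException: invalid LOC header (bad signature), which means one of the jars the classloader reads is corrupt or truncated (most likely the jar that should provide scala.collection.IndexedSeqOptimized, i.e. a scala-library jar), rather than anything in the Flume configuration itself. A generic plain-Java sketch to locate such a jar (the lib directory path is an assumption; point it at whatever directory your agent actually loads jars from):

import java.io.File;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class FindCorruptJars {
    public static void main(String[] args) throws Exception {
        File libDir = new File(args[0]); // e.g. the Flume agent's lib directory
        byte[] buf = new byte[8192];
        for (File jar : libDir.listFiles((dir, name) -> name.endsWith(".jar"))) {
            try (ZipFile zf = new ZipFile(jar)) {
                Enumeration<? extends ZipEntry> entries = zf.entries();
                while (entries.hasMoreElements()) {
                    try (InputStream in = zf.getInputStream(entries.nextElement())) {
                        while (in.read(buf) != -1) { /* drain each entry to force decompression */ }
                    }
                }
                System.out.println("OK:      " + jar.getName());
            } catch (Exception e) {
                System.out.println("CORRUPT: " + jar.getName() + " -> " + e);
            }
        }
    }
}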

JBoss Seam + Quartz Scheduler 2.2.1 + clustering

I have a problem configuring Quartz version 2.2.1 with the Seam framework.
Now I get this error in the log:
[24.4.14 12:51:37:843 SELČ] 0000000b SystemOut O ERROR manage:3853 - ClusterManager: Error managing cluster: Failure updating scheduler state when checking-in: ORA-01400: cannot insert NULL into ("JDC"."QRTZ_SCHEDULER_STATE"."SCHED_NAME")
org.quartz.JobPersistenceException: Failure updating scheduler state when checking-in: ORA-01400: cannot insert NULL into ("JDC"."QRTZ_SCHEDULER_STATE"."SCHED_NAME")
[See nested exception: java.sql.SQLIntegrityConstraintViolationException: ORA-01400: cannot insert NULL into ("JDC"."QRTZ_SCHEDULER_STATE"."SCHED_NAME")]
at org.quartz.impl.jdbcjobstore.JobStoreSupport.clusterCheckIn(JobStoreSupport.java:3373)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.doCheckin(JobStoreSupport.java:3226)
at org.quartz.impl.jdbcjobstore.JobStoreSupport$ClusterManager.manage(JobStoreSupport.java:3847)
at org.quartz.impl.jdbcjobstore.JobStoreSupport$ClusterManager.initialize(JobStoreSupport.java:3832)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.schedulerStarted(JobStoreSupport.java:639)
at org.quartz.core.QuartzScheduler.start(QuartzScheduler.java:513)
at org.quartz.impl.StdScheduler.start(StdScheduler.java:143)
at org.jboss.seam.async.QuartzDispatcher.initScheduler(QuartzDispatcher.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at org.jboss.seam.util.Reflections.invoke(Reflections.java:22)
at org.jboss.seam.util.Reflections.invokeAndWrap(Reflections.java:144)
at org.jboss.seam.Component.callComponentMethod(Component.java:2275)
at org.jboss.seam.Component.callCreateMethod(Component.java:2198)
at org.jboss.seam.Component.newInstance(Component.java:2158)
at org.jboss.seam.contexts.Contexts.startup(Contexts.java:304)
at org.jboss.seam.contexts.Contexts.startup(Contexts.java:278)
at org.jboss.seam.contexts.ServletLifecycle.endInitialization(ServletLifecycle.java:143)
at org.jboss.seam.init.Initialization.init(Initialization.java:744)
at org.jboss.seam.servlet.SeamListener.contextInitialized(SeamListener.java:36)
at com.ibm.ws.webcontainer.webapp.WebApp.notifyServletContextCreated(WebApp.java:1682)
at com.ibm.ws.webcontainer.webapp.WebAppImpl.initialize(WebAppImpl.java:410)
at com.ibm.ws.webcontainer.webapp.WebGroupImpl.addWebApplication(WebGroupImpl.java:88)
at com.ibm.ws.webcontainer.VirtualHostImpl.addWebApplication(VirtualHostImpl.java:169)
at com.ibm.ws.webcontainer.WSWebContainer.addWebApp(WSWebContainer.java:749)
at com.ibm.ws.webcontainer.WSWebContainer.addWebApplication(WSWebContainer.java:634)
at com.ibm.ws.webcontainer.component.WebContainerImpl.install(WebContainerImpl.java:422)
at com.ibm.ws.webcontainer.component.WebContainerImpl.start(WebContainerImpl.java:714)
at com.ibm.ws.runtime.component.ApplicationMgrImpl.start(ApplicationMgrImpl.java:1163)
at com.ibm.ws.runtime.component.DeployedApplicationImpl.fireDeployedObjectStart(DeployedApplicationImpl.java:1369)
at com.ibm.ws.runtime.component.DeployedModuleImpl.start(DeployedModuleImpl.java:639)
at com.ibm.ws.runtime.component.DeployedApplicationImpl.start(DeployedApplicationImpl.java:967)
at com.ibm.ws.runtime.component.ApplicationMgrImpl.startApplication(ApplicationMgrImpl.java:769)
at com.ibm.ws.runtime.component.ApplicationMgrImpl$5.run(ApplicationMgrImpl.java:2160)
at com.ibm.ws.security.auth.ContextManagerImpl.runAs(ContextManagerImpl.java:5468)
at com.ibm.ws.security.auth.ContextManagerImpl.runAsSystem(ContextManagerImpl.java:5594)
at com.ibm.ws.security.core.SecurityContext.runAsSystem(SecurityContext.java:255)
at com.ibm.ws.runtime.component.ApplicationMgrImpl.start(ApplicationMgrImpl.java:2165)
at com.ibm.ws.runtime.component.CompositionUnitMgrImpl.start(CompositionUnitMgrImpl.java:446)
at com.ibm.ws.runtime.component.CompositionUnitImpl.start(CompositionUnitImpl.java:123)
at com.ibm.ws.runtime.component.CompositionUnitMgrImpl.start(CompositionUnitMgrImpl.java:389)
at com.ibm.ws.runtime.component.CompositionUnitMgrImpl.access$500(CompositionUnitMgrImpl.java:117)
at com.ibm.ws.runtime.component.CompositionUnitMgrImpl$CUInitializer.run(CompositionUnitMgrImpl.java:995)
at com.ibm.wsspi.runtime.component.WsComponentImpl$_AsynchInitializer.run(WsComponentImpl.java:496)
at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1700)
Caused by:
java.sql.SQLIntegrityConstraintViolationException: ORA-01400: cannot insert NULL into ("JDC"."QRTZ_SCHEDULER_STATE"."SCHED_NAME")
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:85)
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:206)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:194)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:953)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1222)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3387)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3468)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1350)
at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.pmiExecuteUpdate(WSJdbcPreparedStatement.java:1185)
at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.executeUpdate(WSJdbcPreparedStatement.java:802)
at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.insertSchedulerState(StdJDBCDelegate.java:3227)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.clusterCheckIn(JobStoreSupport.java:3368)
... 46 more
My Quartz configuration is:
#==============================================================
# Configure Main Scheduler Properties
#==============================================================
org.quartz.scheduler.instanceName = Sched1
org.quartz.scheduler.instanceId = AUTO
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.scheduler.userTransactionURL = jta/usertransaction
#==============================================================
# Configure ThreadPool
#==============================================================
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5
org.quartz.threadPool.threadPriority = 5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread = true
#==============================================================
# Configure JobStore
#==============================================================
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.dataSource = quartzDS
#org.quartz.jobStore.nonManagedTXDataSource = quartzDSNoTx
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 15000
#============================================================================
# Configure Datasources
#============================================================================
org.quartz.dataSource.quartzDS.jndiURL= jdc
#org.quartz.dataSource.quartzDSNoTx.jndiURL= java:/jdcNoTX
#org.quartz.plugin.jobInitializer.class = org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin
#org.quartz.plugin.jobInitializer.fileNames = jobSchedule.xml
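For reference, SCHED_NAME is a column that only exists in the Quartz 2.x table schema; a Quartz 1.x library does not know about that column, so its check-in INSERT leaves it out and Oracle then rejects the implicit NULL. That makes it worth confirming which Quartz build the container actually loads at runtime, since an application server can shadow the 2.2.1 jar you deployed with a different copy. A JDK-only diagnostic sketch (an assumption-level check; run or log it from inside the deployed application):

public class QuartzBuildCheck {
    public static void main(String[] args) throws Exception {
        Class<?> scheduler = Class.forName("org.quartz.Scheduler");
        // Which jar the Quartz API is really loaded from inside the container.
        System.out.println("Loaded from: "
                + scheduler.getProtectionDomain().getCodeSource().getLocation());
        // Manifest version of that jar (may be null if the manifest does not record it).
        System.out.println("Version:     "
                + scheduler.getPackage().getImplementationVersion());
    }
}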
Can you show your Quartz jobs config file and tell us when the error occurs? At compile time, or when the job should execute?