I have a problem configuring Quartz version 2.2.1 with the Seam framework.
I now get this error in the log:
[24.4.14 12:51:37:843 SELČ] 0000000b SystemOut O ERROR manage:3853 - ClusterManager: Error managing cluster: Failure updating scheduler state when checking-in: ORA-01400: cannot insert NULL into ("JDC"."QRTZ_SCHEDULER_STATE"."SCHED_NAME")
org.quartz.JobPersistenceException: Failure updating scheduler state when checking-in: ORA-01400: cannot insert NULL into
("JDC"."QRTZ_SCHEDULER_STATE"."SCHED_NAME")
[See nested exception: java.sql.SQLIntegrityConstraintViolationException: ORA-01400: cannot
insert NULL into ("JDC"."QRTZ_SCHEDULER_STATE"."SCHED_NAME")
]
at org.quartz.impl.jdbcjobstore.JobStoreSupport.clusterCheckIn(JobStoreSupport.java:3373)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.doCheckin(JobStoreSupport.java:3226)
at org.quartz.impl.jdbcjobstore.JobStoreSupport$ClusterManager.manage(JobStoreSupport.java:3847)
at org.quartz.impl.jdbcjobstore.JobStoreSupport$ClusterManager.initialize(JobStoreSupport.java:3832)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.schedulerStarted(JobStoreSupport.java:639)
at org.quartz.core.QuartzScheduler.start(QuartzScheduler.java:513)
at org.quartz.impl.StdScheduler.start(StdScheduler.java:143)
at org.jboss.seam.async.QuartzDispatcher.initScheduler(QuartzDispatcher.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at org.jboss.seam.util.Reflections.invoke(Reflections.java:22)
at org.jboss.seam.util.Reflections.invokeAndWrap(Reflections.java:144)
at org.jboss.seam.Component.callComponentMethod(Component.java:2275)
at org.jboss.seam.Component.callCreateMethod(Component.java:2198)
at org.jboss.seam.Component.newInstance(Component.java:2158)
at org.jboss.seam.contexts.Contexts.startup(Contexts.java:304)
at org.jboss.seam.contexts.Contexts.startup(Contexts.java:278)
at org.jboss.seam.contexts.ServletLifecycle.endInitialization(ServletLifecycle.java:143)
at org.jboss.seam.init.Initialization.init(Initialization.java:744)
at org.jboss.seam.servlet.SeamListener.contextInitialized(SeamListener.java:36)
at com.ibm.ws.webcontainer.webapp.WebApp.notifyServletContextCreated(WebApp.java:1682)
at com.ibm.ws.webcontainer.webapp.WebAppImpl.initialize(WebAppImpl.java:410)
at com.ibm.ws.webcontainer.webapp.WebGroupImpl.addWebApplication(WebGroupImpl.java:88)
at com.ibm.ws.webcontainer.VirtualHostImpl.addWebApplication(VirtualHostImpl.java:169)
at com.ibm.ws.webcontainer.WSWebContainer.addWebApp(WSWebContainer.java:749)
at com.ibm.ws.webcontainer.WSWebContainer.addWebApplication(WSWebContainer.java:634)
at com.ibm.ws.webcontainer.component.WebContainerImpl.install(WebContainerImpl.java:422)
at com.ibm.ws.webcontainer.component.WebContainerImpl.start(WebContainerImpl.java:714)
at com.ibm.ws.runtime.component.ApplicationMgrImpl.start(ApplicationMgrImpl.java:1163)
at com.ibm.ws.runtime.component.DeployedApplicationImpl.fireDeployedObjectStart(DeployedApplicationImpl.java:1369)
at com.ibm.ws.runtime.component.DeployedModuleImpl.start(DeployedModuleImpl.java:639)
at com.ibm.ws.runtime.component.DeployedApplicationImpl.start(DeployedApplicationImpl.java:967)
at com.ibm.ws.runtime.component.ApplicationMgrImpl.startApplication(ApplicationMgrImpl.java:769)
at com.ibm.ws.runtime.component.ApplicationMgrImpl$5.run(ApplicationMgrImpl.java:2160)
at com.ibm.ws.security.auth.ContextManagerImpl.runAs(ContextManagerImpl.java:5468)
at com.ibm.ws.security.auth.ContextManagerImpl.runAsSystem(ContextManagerImpl.java:5594)
at com.ibm.ws.security.core.SecurityContext.runAsSystem(SecurityContext.java:255)
at com.ibm.ws.runtime.component.ApplicationMgrImpl.start(ApplicationMgrImpl.java:2165)
at com.ibm.ws.runtime.component.CompositionUnitMgrImpl.start(CompositionUnitMgrImpl.java:446)
at com.ibm.ws.runtime.component.CompositionUnitImpl.start(CompositionUnitImpl.java:123)
at com.ibm.ws.runtime.component.CompositionUnitMgrImpl.start(CompositionUnitMgrImpl.java:389)
at com.ibm.ws.runtime.component.CompositionUnitMgrImpl.access$500(CompositionUnitMgrImpl.java:117)
at com.ibm.ws.runtime.component.CompositionUnitMgrImpl$CUInitializer.run(CompositionUnitMgrImpl.java:995)
at com.ibm.wsspi.runtime.component.WsComponentImpl$_AsynchInitializer.run(WsComponentImpl.java:496)
at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1700)
Caused by:
java.sql.SQLIntegrityConstraintViolationException: ORA-01400: cannot insert NULL into ("JDC"."QRTZ_SCHEDULER_STATE"."SCHED_NAME")
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:85)
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:206)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:194)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:953)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1222)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3387)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3468)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1350)
at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.pmiExecuteUpdate(WSJdbcPreparedStatement.java:1185)
at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.executeUpdate(WSJdbcPreparedStatement.java:802)
at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.insertSchedulerState(StdJDBCDelegate.java:3227)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.clusterCheckIn(JobStoreSupport.java:3368)
... 46 more
My Quartz configuration is:
#==============================================================
# Configure Main Scheduler Properties
#==============================================================
org.quartz.scheduler.instanceName = Sched1
org.quartz.scheduler.instanceId = AUTO
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.scheduler.userTransactionURL = jta/usertransaction
#==============================================================
# Configure ThreadPool
#==============================================================
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5
org.quartz.threadPool.threadPriority = 5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread = true
#==============================================================
# Configure JobStore
#==============================================================
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.dataSource = quartzDS
#org.quartz.jobStore.nonManagedTXDataSource = quartzDSNoTx
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 15000
#============================================================================
# Configure Datasources
#============================================================================
org.quartz.dataSource.quartzDS.jndiURL= jdc
#org.quartz.dataSource.quartzDSNoTx.jndiURL= java:/jdcNoTX
#org.quartz.plugin.jobInitializer.class = org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin
#org.quartz.plugin.jobInitializer.fileNames = jobSchedule.xml
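For reference, the QRTZ_SCHEDULER_STATE table from the Quartz 2.2.x Oracle script (tables_oracle.sql) declares SCHED_NAME as NOT NULL and as part of the primary key, which is the constraint the check-in insert is hitting. A trimmed sketch of that DDL (column sizes as in the stock script; yours may differ):
-- QRTZ_SCHEDULER_STATE as created by the Quartz 2.2.x tables_oracle.sql (trimmed)
CREATE TABLE QRTZ_SCHEDULER_STATE (
    SCHED_NAME        VARCHAR2(120) NOT NULL,
    INSTANCE_NAME     VARCHAR2(200) NOT NULL,
    LAST_CHECKIN_TIME NUMBER(13)    NOT NULL,
    CHECKIN_INTERVAL  NUMBER(13)    NOT NULL,
    CONSTRAINT QRTZ_SCHEDULER_STATE_PK PRIMARY KEY (SCHED_NAME, INSTANCE_NAME)
);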
Please show your Quartz jobs config file and tell us when the error occurs. Does it happen at compile time or when the job is supposed to execute?
Related
I am trying to get a distributed setup of the Ignite connector to run. Sadly, it does not work. I was able to grab the log from the creation of the connector via the API.
API POST payload to /connectors
{
  "name": "ignite-connector",
  "config": {
    "connector.class": "org.apache.ignite.stream.kafka.connect.IgniteSinkConnector",
    "tasks.max": "2",
    "topics": "someTopic1",
    "cacheName": "myCache",
    "cacheAllowOverwrite": true,
    "igniteCfg": "/opt/ignite/examples/config/example-cache.xml"
  }
}
I set up the ignite-connector as a plugin: I built an uber-jar from the repo, put it into a separate directory, and included that directory as a plugin in the .properties file I use to start connect-distributed.sh.
I also set the classpath in the systemd units for both the connector and the Kafka jobs I manage:
Environment=CLASSPATH=/opt/kafka/ignite-connector/*
The full error log follows:
[2022-11-17 19:49:30,268] INFO [ignite-connector|worker] SinkConnectorConfig values:
config.action.reload = restart
connector.class = org.apache.ignite.stream.kafka.connect.IgniteSinkConnector
errors.deadletterqueue.context.headers.enable = false
errors.deadletterqueue.topic.name =
errors.deadletterqueue.topic.replication.factor = 3
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = ignite-connector
predicates = []
tasks.max = 2
topics = [someTopic1]
topics.regex =
transforms = []
value.converter = null
(org.apache.kafka.connect.runtime.SinkConnectorConfig:376)
[2022-11-17 19:49:30,272] INFO [ignite-connector|worker] EnrichedConnectorConfig values:
config.action.reload = restart
connector.class = org.apache.ignite.stream.kafka.connect.IgniteSinkConnector
errors.deadletterqueue.context.headers.enable = false
errors.deadletterqueue.topic.name =
errors.deadletterqueue.topic.replication.factor = 3
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = ignite-connector
predicates = []
tasks.max = 2
topics = [someTopic1]
topics.regex =
transforms = []
value.converter = null
(org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:376)
[2022-11-17 19:49:30,276] INFO [ignite-connector|worker] Instantiated connector ignite-connector with version 3.3.1 of type class org.apache.ignite.stream.kafka.connect.IgniteSinkConnector (org.apache.kafka.connect.runtime.Worker:322)
[2022-11-17 19:49:30,276] INFO [ignite-connector|worker] Finished creating connector ignite-connector (org.apache.kafka.connect.runtime.Worker:347)
[2022-11-17 19:49:30,277] ERROR [ignite-connector|worker] WorkerConnector{id=ignite-connector} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector:201)
java.lang.NoClassDefFoundError: org/apache/ignite/internal/util/typedef/internal/A
at org.apache.ignite.stream.kafka.connect.IgniteSinkConnector.start(IgniteSinkConnector.java:55)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:193)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:218)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:363)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:346)
at org.apache.kafka.connect.runtime.WorkerConnector.doRun(WorkerConnector.java:146)
at org.apache.kafka.connect.runtime.WorkerConnector.run(WorkerConnector.java:123)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
[2022-11-17 19:49:30,277] INFO [Worker clientId=connect-1, groupId=connect-cluster] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1687)
[2022-11-17 19:49:30,280] ERROR [ignite-connector|worker] [Worker clientId=connect-1, groupId=connect-cluster] Failed to start connector 'ignite-connector' (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1811)
org.apache.kafka.connect.errors.ConnectException: Failed to start connector: ignite-connector
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.lambda$startConnector$35(DistributedHerder.java:1782)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:349)
at org.apache.kafka.connect.runtime.WorkerConnector.doRun(WorkerConnector.java:146)
at org.apache.kafka.connect.runtime.WorkerConnector.run(WorkerConnector.java:123)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to transition connector ignite-connector to state STARTED
... 8 more
Caused by: java.lang.NoClassDefFoundError: org/apache/ignite/internal/util/typedef/internal/A
at org.apache.ignite.stream.kafka.connect.IgniteSinkConnector.start(IgniteSinkConnector.java:55)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:193)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:218)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:363)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:346)
... 7 more
The mentioned class (A) is included in ignite-core-2.9.1.jar, which is bundled in the uber-jar in the plugin directory.
Any pointers are appreciated.
There seems to be a misunderstanding about what "plugins" are. They are only classes defined as implementations of Converters, Transforms, and Connectors.
Internal Ignite classes are none of these, so they won't be loaded by the plugin.path classloader.
To fix this, you'll need to ensure you export CLASSPATH=/path/to/ignite-files/* and you can use jar -tf to validate that the class exists in a specific JAR before running the Connect process.
It is not a hack; that's just how the Java classloader works.
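A minimal sketch of that check and of the classpath export, assuming the uber-jar lives under /opt/kafka/ignite-connector and Connect is started from the Kafka install directory (the jar name and both paths are assumptions; adjust to your layout):
# verify the missing class is actually inside the jar on disk
jar -tf /opt/kafka/ignite-connector/ignite-connector-uber.jar | grep 'internal/util/typedef/internal/A.class'
# put the Ignite jars on the worker JVM's classpath (plugin.path alone does not cover internal Ignite classes),
# then start the worker from the same environment
export CLASSPATH=/opt/kafka/ignite-connector/*
bin/connect-distributed.sh config/connect-distributed.properties
With systemd, the equivalent is the Environment=CLASSPATH=... line shown in the question above.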
Getting these errors:
2018-01-22 18:00:59,797 [ServerService Thread Pool -- 79] ERROR org.quartz.ee.servlet.QuartzInitializerListener - Quartz Scheduler failed to initialize: org.quartz.SchedulerException: SchedulerPlugin class 'org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin' could not be instantiated. [See nested exception: java.lang.ClassNotFoundException: Unable to load class org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin by any known loaders.]
2018-01-22 18:00:59,797 [ServerService Thread Pool -- 79] ERROR stderr - org.quartz.SchedulerException: SchedulerPlugin class 'org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin' could not be instantiated. [See nested exception: java.lang.ClassNotFoundException: Unable to load class org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin by any known loaders.]
2018-01-22 18:00:59,805 [ServerService Thread Pool -- 79] ERROR stderr - Caused by: java.lang.ClassNotFoundException: Unable to load class org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin by any known loaders.
2018-01-22 18:00:59,805 [ServerService Thread Pool -- 79] ERROR stderr - Caused by: java.lang.LinkageError: Failed to link org/quartz/plugins/xml/XMLSchedulingDataProcessorPlugin (Module "deployment.Reports.ear:main" from Service Module Loader)
2018-01-22 18:00:59,806 [ServerService Thread Pool -- 79] ERROR stderr - Caused by: java.lang.NoClassDefFoundError: org/quartz/jobs/FileScanListener
2018-01-22 18:00:59,807 [ServerService Thread Pool -- 79] ERROR stderr - Caused by: java.lang.ClassNotFoundException: org.quartz.jobs.FileScanListener from [Module "deployment.Reports.ear:main" from Service Module Loader]
quartz.properties file:
org.quartz.scheduler.instanceName = Scheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 10
org.quartz.threadPool.threadPriority = 5
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.dataSource = quartzDataSource
org.quartz.jobStore.tablePrefix = QUARTZ.QRTZ_
#org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
org.quartz.dataSource.quartzDataSource.jndiURL=java:QuartzDataSource
org.quartz.dataSource.quartzDataSource.java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
org.quartz.plugin.jobInitializer.class = org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin
org.quartz.plugin.jobInitializer.fileNames = quartz-jobs-xml
org.quartz.plugin.jobInitializer.failOnFileNotFound = true
org.quartz.plugin.jobInitializer.scanInterval = 0
org.quartz.plugin.jobInitializer.wrapInUserTransaction =false
My quartz-jobs.xml file is in the same directory as my quartz.properties file. I tried playing around with that path a little, but I think the errors are just saying it can't instantiate my jobs.
I needed to add a new Quartz library to my build to include the XMLSchedulingDataProcessorPlugin that I reference in my quartz.properties.
In my root build.gradle I added this to my ext.libraries section:
quartz_jobs: 'org.quartz-scheduler:quartz-jobs:2.3.0'
Then in my web service's build.gradle where I use Quartz, I added the library to my providedCompile section:
libraries.quartz_jobs
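Put together, the two changes look roughly like this (a sketch: the ext.libraries map pattern and the providedCompile configuration from the war plugin are assumed from the description above; adjust names and versions to your build):
// root build.gradle
ext.libraries = [
        // ... existing entries ...
        quartz_jobs: 'org.quartz-scheduler:quartz-jobs:2.3.0'
]

// web service build.gradle
dependencies {
    providedCompile libraries.quartz_jobs   // provides org.quartz.jobs.FileScanListener, which the XML plugin links against
}
The NoClassDefFoundError for org/quartz/jobs/FileScanListener in the log is the clue: that class ships in the quartz-jobs artifact rather than in the core quartz jar.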
I have a Play 2.5 app that uses Slick. The Play app has a few Quartz jobs scheduled that run within the app. The Quartz jobs use Slick to insert/update data in the database. A Quartz job fails with the exception below.
[error] o.q.c.ErrorLogger - Job (8a3a3ec7f96a2d1aceb2dc96c5dddaed.becdc8372f54d501222a2ca94c264ff0 threw an exception.
org.quartz.SchedulerException: Job threw an unhandled exception.
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: java.util.concurrent.RejectedExecutionException: Task slick.backend.DatabaseComponent$DatabaseDef$$anon$2#5fae8abe rejected from java.util.concurrent.ThreadPoolExecutor#39c55f20[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 268]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
at slick.backend.DatabaseComponent$DatabaseDef$class.runSynchronousDatabaseAction(DatabaseComponent.scala:230)
at slick.jdbc.JdbcBackend$DatabaseDef.runSynchronousDatabaseAction(JdbcBackend.scala:38)
at slick.backend.DatabaseComponent$DatabaseDef$class.runInContext(DatabaseComponent.scala:207)
at slick.jdbc.JdbcBackend$DatabaseDef.runInContext(JdbcBackend.scala:38)
at slick.backend.DatabaseComponent$DatabaseDef$class.runInternal(DatabaseComponent.scala:75)
at slick.jdbc.JdbcBackend$DatabaseDef.runInternal(JdbcBackend.scala:38)
This failure doesn't happen when Slick calls are made outside of a Quartz job, only inside one. Some of the initial Quartz jobs succeed, but after a few runs the above error is thrown.
What is causing this issue, and why does the stack trace say the pool size is zero, as shown below?
Caused by: java.util.concurrent.RejectedExecutionException: Task slick.backend.DatabaseComponent$DatabaseDef$$anon$2#5fae8abe rejected from java.util.concurrent.ThreadPoolExecutor#39c55f20[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 268]
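For what it's worth, that wording comes straight from java.util.concurrent.ThreadPoolExecutor: once an executor has been shut down and its worker threads have exited, it reports itself as Terminated with pool size = 0 and rejects every new task. A minimal, self-contained Java sketch (illustration only, not the app's code) that reproduces the same message:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.TimeUnit;

public class RejectedAfterShutdown {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> System.out.println("runs while the pool is alive"));

        pool.shutdown();                            // stop accepting new work
        pool.awaitTermination(5, TimeUnit.SECONDS); // wait for the worker threads to exit

        try {
            pool.submit(() -> System.out.println("never runs"));
        } catch (RejectedExecutionException e) {
            // prints "... rejected from ...[Terminated, pool size = 0, active threads = 0, ...]"
            System.out.println(e.getMessage());
        }
    }
}
So in the trace above, the executor that Slick's DatabaseDef schedules work on has already been shut down by the time the job runs; the open question is what shuts it down (or which Database instance the job is holding on to).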
Below are the Slick and Quartz settings:
slick.dbs.default.driver="slick.driver.MySQLDriver$"
slick.dbs.default.db.driver="com.mysql.jdbc.Driver"
slick.dbs.default.db.url="jdbc:mysql://mysql-server:3306/database"
slick.dbs.default.db.user=username
slick.dbs.default.db.password=password
slick.dbs.default.db.queueSize = 10000
slick.dbs.default.db.connectionTimeout=5s
// Quartz Settings
scheduler = {
org.quartz.scheduler.instanceId = AUTO
org.quartz.threadPool.class = "org.quartz.simpl.SimpleThreadPool"
org.quartz.threadPool.threadCount = 100
org.quartz.threadPool.threadPriority = 5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread = true
org.quartz.dataSource.quartzDataSource.driver=com.mysql.jdbc.Driver
org.quartz.dataSource.quartzDataSource.URL="jdbc:mysql://mysql-server:3306/database"
org.quartz.dataSource.quartzDataSource.user=username
org.quartz.dataSource.quartzDataSource.password=password
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = "org.quartz.impl.jdbcjobstore.JobStoreTX"
org.quartz.jobStore.driverDelegateClass = "org.quartz.impl.jdbcjobstore.StdJDBCDelegate"
org.quartz.jobStore.dataSource = quartzDataSource
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.useProperties = true
org.quartz.dataSource.quartzDataSource.maxConnections=3
}
I am using Flume and trying to publish the output of "/usr/bin/vmstat 1" to a Kafka topic.
But I get the exceptions shown in the stack trace below. Please help me find a solution to the problem.
==============================================================================
Flume Config
tier1.sources = source1
tier1.channels = channel1
tier1.sinks = sink1
tier1.sources.source1.type = exec
tier1.sources.source1.command = /usr/bin/vmstat 1
tier1.sources.source1.channels = channel1
tier1.channels.channel1.type = memory
tier1.channels.channel1.capacity = 10000
tier1.channels.channel1.transactionCapacity = 1000
tier1.sinks.sink1.type = org.apache.flume.sink.kafka.KafkaSink
tier1.sinks.sink1.topic = sink1
tier1.sinks.sink1.brokerList = localhost:9092
tier1.sinks.sink1.channel = channel1
tier1.sinks.sink1.batchSize = 20
==============================================================================
Exception
2015-09-23 10:39:36,936 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.source.ExecSource.start(ExecSource.java:169)] Exec source starting with command:/usr/bin/vmstat 1
2015-09-23 10:39:36,944 (lifecycleSupervisor-1-1) [ERROR - org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:253)] Unable to start SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor#12c50438 counterGroup:{ name:null counters:{} } } - Exception follows.
java.lang.NoClassDefFoundError: scala/collection/IndexedSeqOptimized
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:643)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
at org.apache.flume.sink.kafka.KafkaSink.start(KafkaSink.java:163)
at org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:46)
at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)
Caused by: java.lang.ClassNotFoundException: scala.collection.IndexedSeqOptimized
at java.net.URLClassLoader$1.run(URLClassLoader.java:214)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
... 23 more
Caused by: java.util.zip.ZipException: invalid LOC header (bad signature)
at java.util.zip.ZipFile.read(Native Method)
at java.util.zip.ZipFile.access$1300(ZipFile.java:46)
at java.util.zip.ZipFile$ZipFileInputStream.read(ZipFile.java:485)
at java.util.zip.ZipFile$2.fill(ZipFile.java:268)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:158)
at sun.misc.Resource.getBytes(Resource.java:124)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:273)
at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
... 28 more
2015-09-23 10:39:36,954 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SOURCE, name: source1: Successfully registered new MBean.
2015-09-23 10:39:36,977 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SOURCE, name: source1 started
2015-09-23 10:39:36,977 (lifecycleSupervisor-1-3) [DEBUG - org.apache.flume.source.ExecSource.start(ExecSource.java:187)] Exec source started
2015-09-23 10:39:36,977 (lifecycleSupervisor-1-1) [ERROR - org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:264)] Unsuccessful attempt to shutdown component: {} due to missing dependencies. Please shutdown the agentor disable this component, or the agent will bein an undefined state.
java.lang.NullPointerException
at org.apache.flume.sink.kafka.KafkaSink.stop(KafkaSink.java:171)
at org.apache.flume.sink.DefaultSinkProcessor.stop(DefaultSinkProcessor.java:53)
at org.apache.flume.SinkRunner.stop(SinkRunner.java:115)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:259)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)
2015-09-23 10:39:39,978 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:233)] Component SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor#12c50438 counterGroup:{ name:null counters:{} } } is in error state, and Flume will notattempt to change its state
^C2015-09-23 10:39:42,250 (pool-3-thread-1) [INFO - org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:376)] Command [/usr/bin/vmstat 1] exited with 130
After an Eclipse update, Eclipse can't start up again. It also shows a warning telling me to check the log, but I don't know how to fix the problem from the log.
Here is the log:
!SESSION 2012-10-15 08:21:47.367 -----------------------------------------------
eclipse.buildId=M20120914-1800
java.version=1.7.0_01
java.vendor=Oracle Corporation
BootLoader constants: OS=win32, ARCH=x86_64, WS=win32
Command-line arguments: -os win32 -ws win32 -arch x86_64
!ENTRY org.eclipse.equinox.event 2 0 2012-10-15 08:21:48.459
!MESSAGE [SCR] Found components with duplicated names! Details:
Component1 : Component[
name = org.eclipse.equinox.event
activate = activate
deactivate = deactivate
modified =
configuration-policy = optional
factory = null
autoenable = true
immediate = false
implementation = org.eclipse.equinox.internal.event.EventComponent
state = Unsatisfied
properties =
serviceFactory = false
serviceInterface = [org.osgi.service.event.EventAdmin]
references = null
located in bundle = org.eclipse.equinox.event_1.2.100.v20110502 [270]
]
Component2: Component[
name = org.eclipse.equinox.event
activate = activate
deactivate = deactivate
modified =
configuration-policy = optional
factory = null
autoenable = true
immediate = false
implementation = org.eclipse.equinox.internal.event.EventComponent
state = Unsatisfied
properties =
serviceFactory = false
serviceInterface = [org.osgi.service.event.EventAdmin]
references = null
located in bundle = org.eclipse.equinox.event_1.2.200.v20120522-2049 [89]
]
!ENTRY org.eclipse.equinox.p2.transport.ecf 2 0 2012-10-15 08:21:48.474
!MESSAGE [SCR] Found components with duplicated names! Details:
Component1 : Component[
name = org.eclipse.equinox.p2.transport.ecf
activate = activate
deactivate = deactivate
modified =
configuration-policy = optional
factory = null
autoenable = true
immediate = false
implementation = org.eclipse.equinox.internal.p2.transport.ecf.ECFTransportComponent
state = Unsatisfied
properties = {p2.agent.servicename=org.eclipse.equinox.internal.p2.repository.Transport}
serviceFactory = false
serviceInterface = [org.eclipse.equinox.p2.core.spi.IAgentServiceFactory]
references = null
located in bundle = org.eclipse.equinox.p2.transport.ecf_1.0.0.v20111128-0624 [275]
]
Component2: Component[
name = org.eclipse.equinox.p2.transport.ecf
activate = activate
deactivate = deactivate
modified =
configuration-policy = optional
factory = null
autoenable = true
immediate = false
implementation = org.eclipse.equinox.internal.p2.transport.ecf.ECFTransportComponent
state = Unsatisfied
properties = {p2.agent.servicename=org.eclipse.equinox.internal.p2.repository.Transport}
serviceFactory = false
serviceInterface = [org.eclipse.equinox.p2.core.spi.IAgentServiceFactory]
references = null
located in bundle = org.eclipse.equinox.p2.transport.ecf_1.0.100.v20120305-0333 [120]
]
!ENTRY org.eclipse.core.resources 2 10035 2012-10-15 08:21:49.535
!MESSAGE The workspace exited with unsaved changes in the previous session; refreshing workspace to recover changes.
!ENTRY org.eclipse.e4.ui.workbench 4 0 2012-10-15 08:21:52.000
!MESSAGE Unable to create class 'org.eclipse.e4.core.commands.CommandServiceAddon' from bundle '53'
!STACK 0
org.eclipse.e4.core.di.InjectionException: java.lang.ClassCastException: Cannot cast org.eclipse.core.commands.CommandManager to org.eclipse.core.commands.CommandManager
at org.eclipse.e4.core.internal.di.MethodRequestor.execute(MethodRequestor.java:63)
at org.eclipse.e4.core.internal.di.InjectorImpl.processAnnotated(InjectorImpl.java:859)
at org.eclipse.e4.core.internal.di.InjectorImpl.inject(InjectorImpl.java:111)
at org.eclipse.e4.core.internal.di.InjectorImpl.internalMake(InjectorImpl.java:319)
at org.eclipse.e4.core.internal.di.InjectorImpl.make(InjectorImpl.java:253)
at org.eclipse.e4.core.contexts.ContextInjectionFactory.make(ContextInjectionFactory.java:185)
at org.eclipse.e4.ui.internal.workbench.ReflectionContributionFactory.createFromBundle(ReflectionContributionFactory.java:105)
at org.eclipse.e4.ui.internal.workbench.ReflectionContributionFactory.doCreate(ReflectionContributionFactory.java:71)
at org.eclipse.e4.ui.internal.workbench.ReflectionContributionFactory.create(ReflectionContributionFactory.java:49)
at org.eclipse.e4.ui.internal.workbench.swt.E4Application.createE4Workbench(E4Application.java:254)
at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:557)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:543)
at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149)
at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:124)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:353)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:180)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:629)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584)
at org.eclipse.equinox.launcher.Main.run(Main.java:1438)
at org.eclipse.equinox.launcher.Main.main(Main.java:1414)
Caused by: java.lang.ClassCastException: Cannot cast org.eclipse.core.commands.CommandManager to org.eclipse.core.commands.CommandManager
at java.lang.Class.cast(Unknown Source)
at org.eclipse.e4.core.internal.contexts.EclipseContext.get(EclipseContext.java:566)
at org.eclipse.e4.core.commands.CommandServiceAddon.init(CommandServiceAddon.java:30)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.eclipse.e4.core.internal.di.MethodRequestor.execute(MethodRequestor.java:56)
... 27 more
Basically, that is just a part of the log file. Based on it, I found some posts and tried:
1) eclipse -clean
2) deleting the .snap file
3) deleting workbench.xmi
4) renaming org.eclipse.core.resources
etc.
Unfortunately, none of them worked.
Go to
your installation path\configuration\org.eclipse.equinox.simpleconfigurator\
Back up the file bundles.info, then remove the duplicate entry for "org.eclipse.core.commands" from bundles.info (see the example after these steps).
I removed the entry with the lower version.
Restart Eclipse with the -clean option and you should be fine.
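For illustration, the duplicate looks roughly like this in bundles.info (one line per bundle: symbolic name, version, location, start level, auto-start flag; the versions here are made up):
org.eclipse.core.commands,3.6.0.I20100512-1500,plugins/org.eclipse.core.commands_3.6.0.I20100512-1500.jar,4,false
org.eclipse.core.commands,3.6.100.v20120912-135020,plugins/org.eclipse.core.commands_3.6.100.v20120912-135020.jar,4,false
Delete the first (older) line and keep the newer one.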
I had the same problem after I updated my PyDev.
I re-extracted Juno to my installation dir and the problem was solved.
Try this:
Go into the damaged workspace's .metadata directory and delete the .lock file.
Go to workspace/.metadata/.plugins/org.eclipse.core.resources and delete the .snap file.
Start Eclipse with the command eclipse -clean.
Eclipse should start now. After Eclipse has been started and has initialized and validated the workspace, close Eclipse.
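A compact sketch of those steps on the command line, assuming the workspace is at ~/workspace (adjust the path to your own):
rm ~/workspace/.metadata/.lock
rm ~/workspace/.metadata/.plugins/org.eclipse.core.resources/.snap
eclipse -clean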
Check the "Name" of your ComponentDefinitions in the Overview Tab(*.xml)
If there are two ComponentDefinitions with different File names but the same "Name"-value in the Overview Tab, then this error occurs.