Jenkins Maven Build Failure - plugins

When I try to build my project using Jenkins, I get the following error:
Caused by: org.apache.maven.plugin.PluginContainerException: An API incompatibility was encountered while executing org.wso2.maven:wso2-esb-proxy-plugin:2.0.12:pom-gen: java.lang.ExceptionInInitializerError: null
realm = plugin>org.wso2.maven:wso2-esb-proxy-plugin:2.0.12
strategy = org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy
urls[0] = file:/home/jira/.m2/repository/org/wso2/maven/wso2-esb-proxy-plugin/2.0.12/wso2-esb-proxy-plugin-2.0.12.jar
urls[1] = file:/home/jira/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar
urls[2] = file:/home/jira/.m2/repository/org/apache/maven/maven-archiver/2.2/maven-archiver-2.2.jar
urls[3] = file:/home/jira/.m2/repository/org/apache/maven/shared/maven-dependency-tree/1.2/maven-dependency-tree-1.2.jar
urls[4] = file:/home/jira/.m2/repository/junit/junit/3.8.1/junit-3.8.1.jar
urls[5] = file:/home/jira/.m2/repository/org/codehaus/plexus/plexus-archiver/1.0-alpha-7/plexus-archiver-1.0-alpha-7.jar
urls[6] = file:/home/jira/.m2/repository/org/codehaus/plexus/plexus-utils/1.4.7/plexus-utils-1.4.7.jar
urls[7] = file:/home/jira/.m2/repository/org/wso2/maven/org.wso2.maven.capp/2.0.12/org.wso2.maven.capp-2.0.12.jar
urls[8] = file:/home/jira/.m2/repository/org/apache/ws/commons/axiom/wso2/axiom/1.2.9-wso2v1/axiom-1.2.9-wso2v1.jar
urls[9] = file:/home/jira/.m2/repository/javax/mail/mail/1.4/mail-1.4.jar
urls[10] = file:/home/jira/.m2/repository/javax/activation/activation/1.1/activation-1.1.jar
urls[11] = file:/home/jira/.m2/repository/org/apache/ws/commons/axiom/axiom-impl/1.2.9-wso2v1/axiom-impl-1.2.9-wso2v1.jar
urls[12] = file:/home/jira/.m2/repository/org/apache/ws/commons/axiom/axiom-api/1.2.9-wso2v1/axiom-api-1.2.9-wso2v1.jar
urls[13] = file:/home/jira/.m2/repository/jaxen/jaxen/1.1.1/jaxen-1.1.1.jar
urls[14] = file:/home/jira/.m2/repository/xerces/xercesImpl/2.6.2/xercesImpl-2.6.2.jar
urls[15] = file:/home/jira/.m2/repository/org/apache/geronimo/specs/geronimo-activation_1.1_spec/1.0.2/geronimo-activation_1.1_spec-1.0.2.jar
urls[16] = file:/home/jira/.m2/repository/org/apache/geronimo/specs/geronimo-javamail_1.4_spec/1.6/geronimo-javamail_1.4_spec-1.6.jar
urls[17] = file:/home/jira/.m2/repository/org/codehaus/woodstox/wstx-asl/3.2.9/wstx-asl-3.2.9.jar
urls[18] = file:/home/jira/.m2/repository/org/apache/geronimo/specs/geronimo-stax-api_1.0_spec/1.0.1/geronimo-stax-api_1.0_spec-1.0.1.jar
urls[19] = file:/home/jira/.m2/repository/org/apache/ws/commons/axiom/axiom-dom/1.2.9-wso2v1/axiom-dom-1.2.9-wso2v1.jar
urls[20] = file:/home/jira/.m2/repository/dom4j/dom4j/1.6.1/dom4j-1.6.1.jar
urls[21] = file:/home/jira/.m2/repository/xml-apis/xml-apis/1.0.b2/xml-apis-1.0.b2.jar
urls[22] = file:/home/jira/.m2/repository/org/wso2/maven/org.wso2.maven.utils/2.0.2/org.wso2.maven.utils-2.0.2.jar
urls[23] = file:/home/jira/.m2/repository/org/wso2/maven/org.wso2.maven.core/2.0.3/org.wso2.maven.core-2.0.3.jar
urls[24] = file:/home/jira/.m2/repository/org/wso2/maven/org.wso2.maven.esb/2.0.3/org.wso2.maven.esb-2.0.3.jar
Number of foreign imports: 1
import: Entry[import from realm ClassRealm[project>it.innovapuglia.diogene.integration.diogene-wso2:Diogene-WSO2Artifacts:1.0c-rc2.0-SNAPSHOT, parent: ClassRealm[maven.api, parent: null]]]
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:179)
... 31 more
Caused by: java.lang.ExceptionInInitializerError
at org.apache.axiom.om.OMAbstractFactory.getMetaFactory(OMAbstractFactory.java:68)
at org.apache.axiom.om.OMAbstractFactory.getOMFactory(OMAbstractFactory.java:107)
at org.wso2.maven.core.model.AbstractXMLDoc.<clinit>(AbstractXMLDoc.java:47)
at org.wso2.maven.esb.utils.ESBMavenUtils.retrieveArtifacts(ESBMavenUtils.java:42)
at org.wso2.maven.esb.utils.ESBMavenUtils.retrieveArtifacts(ESBMavenUtils.java:36)
at org.wso2.maven.proxy.ProxyServicePOMGenMojo.retrieveArtifacts(ProxyServicePOMGenMojo.java:102)
at org.wso2.maven.proxy.ProxyServicePOMGenMojo.execute(ProxyServicePOMGenMojo.java:107)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
... 31 more
Caused by: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.
at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:874)
at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:604)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:336)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:310)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685)
at org.apache.axiom.injection.FactoryInjectionComponent.<clinit>(FactoryInjectionComponent.java:34)
... 39 more
However, when I build the project locally with my installed Maven, the build completes successfully without any problem.
The Maven command I launch from Jenkins is a simple
mvn clean install -P MyProfile...
I cannot figure out why there is an exception related to commons-logging and Log4j.
What could be the problem?
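One thing worth trying, offered as a hedged suggestion rather than a confirmed fix: commons-logging can be pinned to a log implementation that is always on its own classpath via the documented org.apache.commons.logging.Log system property, which bypasses the discovery that is choosing the unusable Log4JLogger. Comparing tool versions between the two environments is also a cheap first check:
mvn -version   # compare the Maven and JDK versions on the Jenkins node against your local machine
mvn clean install -P MyProfile -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.SimpleLog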

Related

PySpark CrossValidator Fails - org.apache.spark.SparkException: Failed to execute user defined function

I have created an ML Pipeline with PySpark, and I can fit and transform the pipeline with LogisticRegression successfully. I get a trained model.
The issue is when I try to use CrossValidator. It runs part way, and then always fails with the following error:
Py4JJavaError: An error occurred while calling o1883.evaluate.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 135.0 failed 1 times, most recent failure: Lost task 0.0 in stage 135.0 (TID 110) (06496cbd55cf executor driver): org.apache.spark.SparkException: Failed to execute user defined function (HasSimpleAnnotate$$Lambda$3820/0x0000000841652840: (array<array<struct<annotatorType:string,begin:int,end:int,result:string,metadata:map<string,string>,embeddings:array<float>>>>) => array<struct<annotatorType:string,begin:int,end:int,result:string,metadata:map<string,string>,embeddings:array<float>>>)
at org.apache.spark.sql.errors.QueryExecutionErrors$.failedExecuteUserDefinedFunctionError(QueryExecutionErrors.scala:136)
at org.apache.spark.sql.errors.QueryExecutionErrors.failedExecuteUserDefinedFunctionError(QueryExecutionErrors.scala)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:197)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.Exception: feature rules is not set
at com.johnsnowlabs.nlp.serialization.Feature.$anonfun$getOrDefault$1(Feature.scala:116)
at scala.Option.getOrElse(Option.scala:189)
at com.johnsnowlabs.nlp.serialization.Feature.getOrDefault(Feature.scala:116)
at com.johnsnowlabs.nlp.HasFeatures.$$(HasFeatures.scala:73)
at com.johnsnowlabs.nlp.HasFeatures.$$$(HasFeatures.scala:73)
at com.johnsnowlabs.nlp.AnnotatorModel.$$(AnnotatorModel.scala:28)
at com.johnsnowlabs.nlp.annotators.TokenizerModel.$anonfun$tag$5(TokenizerModel.scala:340)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:513)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at scala.collection.AbstractIterator.to(Iterator.scala:1431)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1431)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1431)
at com.johnsnowlabs.nlp.annotators.TokenizerModel.$anonfun$tag$2(TokenizerModel.scala:392)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at scala.collection.TraversableLike.map(TraversableLike.scala:286)
at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at com.johnsnowlabs.nlp.annotators.TokenizerModel.tag(TokenizerModel.scala:288)
at com.johnsnowlabs.nlp.annotators.TokenizerModel.annotate(TokenizerModel.scala:400)
at com.johnsnowlabs.nlp.HasSimpleAnnotate.$anonfun$dfAnnotate$1(HasSimpleAnnotate.scala:46)
... 19 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2454)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2403)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2402)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2402)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1160)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1160)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1160)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2642)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2584)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2573)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:938)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2214)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2235)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2254)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2279)
at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1030)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
at org.apache.spark.rdd.RDD.collect(RDD.scala:1029)
at org.apache.spark.rdd.PairRDDFunctions.$anonfun$collectAsMap$1(PairRDDFunctions.scala:737)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
at org.apache.spark.rdd.PairRDDFunctions.collectAsMap(PairRDDFunctions.scala:736)
at org.apache.spark.mllib.evaluation.MulticlassMetrics.confusions$lzycompute(MulticlassMetrics.scala:61)
at org.apache.spark.mllib.evaluation.MulticlassMetrics.confusions(MulticlassMetrics.scala:52)
at org.apache.spark.mllib.evaluation.MulticlassMetrics.labelCountByClass$lzycompute(MulticlassMetrics.scala:66)
at org.apache.spark.mllib.evaluation.MulticlassMetrics.labelCountByClass(MulticlassMetrics.scala:64)
at org.apache.spark.mllib.evaluation.MulticlassMetrics.weightedFMeasure(MulticlassMetrics.scala:227)
at org.apache.spark.mllib.evaluation.MulticlassMetrics.weightedFMeasure$lzycompute(MulticlassMetrics.scala:235)
at org.apache.spark.mllib.evaluation.MulticlassMetrics.weightedFMeasure(MulticlassMetrics.scala:235)
at org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator.evaluate(MulticlassClassificationEvaluator.scala:153)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.base/java.lang.Thread.run(Thread.java:829)
I have tried to debug this, and I am stuck. If I use just my OneHotEncoder column as the 'features' vector, I have no issues. It is when I add in the TF-IDF vectors too that it starts to fail.
Here is my code to build the pipeline:
import sparknlp
spark = sparknlp.start()
# Spark NLP components used in the pipeline below
from sparknlp.base import DocumentAssembler, Finisher
from sparknlp.annotator import Tokenizer, Stemmer, NGramGenerator
from pyspark.ml.feature import HashingTF, IDF
from pyspark.ml.feature import CountVectorizer
from pyspark.ml.feature import StringIndexer
from pyspark.ml.feature import OneHotEncoder
from pyspark.ml.feature import VectorAssembler
from pyspark.ml import Pipeline
documentAssemblerTitle = DocumentAssembler().setInputCol('title').setOutputCol('titles')
documentAssemblerBody = DocumentAssembler().setInputCol('body').setOutputCol('bodies')
tokenizer_titles = Tokenizer().setInputCols(['titles']).setOutputCol('tokenized_titles')
tokenizer_bodies = Tokenizer().setInputCols(['bodies']).setOutputCol('tokenized_bodies')
stemmer_titles = Stemmer().setInputCols(['tokenized_titles']).setOutputCol('stemmed_titles')
stemmer_bodies = Stemmer().setInputCols(['tokenized_bodies']).setOutputCol('stemmed_bodies')
ngrammer_titles = NGramGenerator().setInputCols(['stemmed_titles']).setOutputCol('ngrams_titles').setN(1).setEnableCumulative(True)
ngrammer_bodies = NGramGenerator().setInputCols(['stemmed_bodies']).setOutputCol('ngrams_bodies').setN(1).setEnableCumulative(True)
finisher = Finisher().setInputCols(['ngrams_titles','ngrams_bodies'])
# tf_titles = HashingTF(inputCol='finished_stemmed_titles',outputCol='tf_features_titles')
# tf_bodies = HashingTF(inputCol='finished_stemmed_bodies',outputCol='tf_features_bodies')
tf_titles = CountVectorizer(inputCol='finished_ngrams_titles',outputCol='tf_features_titles')
tf_bodies = CountVectorizer(inputCol='finished_ngrams_bodies',outputCol='tf_features_bodies')
idf_titles = IDF(inputCol='tf_features_titles', outputCol='idf_titles',minDocFreq=5)
idf_bodies = IDF(inputCol='tf_features_bodies', outputCol='idf_bodies',minDocFreq=5)
author_indexer = StringIndexer(inputCol="author_association", outputCol="author_index")
ohe = OneHotEncoder(inputCol="author_index", outputCol="aa")
assembler = VectorAssembler(inputCols=['aa', 'idf_titles', 'idf_bodies'],outputCol='features')
pipeline_stages = [documentAssemblerTitle,
                   documentAssemblerBody,
                   tokenizer_titles,
                   tokenizer_bodies,
                   stemmer_titles,
                   stemmer_bodies,
                   ngrammer_titles,
                   ngrammer_bodies,
                   finisher,
                   tf_titles,
                   tf_bodies,
                   idf_titles,
                   idf_bodies,
                   author_indexer,
                   ohe,
                   assembler]
from pyspark.ml.classification import LogisticRegression
label_stringIdx = StringIndexer(inputCol = "Target", outputCol = "label")
pipeline_stages.append(label_stringIdx)
lr = LogisticRegression(maxIter=20) # regParam=0.3, elasticNetParam=0.8
pipeline_stages.append(lr)
pipeline = Pipeline().setStages(pipeline_stages)
pipeline_stages
Here is the code for the cross-validation:
from pyspark.ml.evaluation import MulticlassClassificationEvaluator, BinaryClassificationEvaluator
from pyspark.ml.tuning import TrainValidationSplit, CrossValidator, ParamGridBuilder
import numpy as np
# pipeline = Pipeline().setStages(pipeline_stages)
paramGrid_lr = ParamGridBuilder().addGrid(lr.maxIter, [1,2]).build()
# paramGrid_lr = ParamGridBuilder() \
# .addGrid(lr.regParam, [0.01, 0.5, 1]) \
# .addGrid(lr.elasticNetParam, [0.0, 0.5, 1]) \
# .addGrid(model.maxIter, [10, 20]) \
# .addGrid(idf.numFeatures, [10, 100, 1000]) \
# .build()
# evaluator = BinaryClassificationEvaluator()
evaluator = MulticlassClassificationEvaluator()
cv = TrainValidationSplit(estimator=pipeline,
                          estimatorParamMaps=paramGrid_lr,
                          evaluator=evaluator)
cvModel = cv.fit(train)
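For reference, since the question mentions CrossValidator while this snippet uses TrainValidationSplit, the equivalent CrossValidator call (a sketch; CrossValidator is already imported above, and numFolds=3 is simply the default) would be:
cv = CrossValidator(estimator=pipeline,
                    estimatorParamMaps=paramGrid_lr,
                    evaluator=evaluator,
                    numFolds=3)
cvModel = cv.fit(train)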
And my preprocessing code:
import re
from pyspark.sql.functions import udf
@udf("string")
def strip_text(text):
    if text is not None:
        stripped = text.lower()
        # remove all headings, bold text, and HTML comments from the Markdown text.
        # These items have all been used by the React team in their issue templates on GitHub
        headings_pattern = r'(?:(?<=\s)|^)#{1,6}(.*?)$'
        bold_pattern = r'\*\*(.+?)\*\*(?!\*)'
        comments_pattern = r'<!--((.|\n)*?)-->'
        combined_pattern = r'|'.join((headings_pattern, bold_pattern, comments_pattern))
        stripped = re.sub(combined_pattern, '', stripped)
        # find all URLs in the string, and then remove the final directory from each to leave the general URL form
        # there may be useful patterns based on what URLs issues are commonly linking to
        url_pattern = re.compile(r'(https?://[^\s]+)')
        for url in re.findall(url_pattern, stripped):
            new_url = url.rsplit("/", 1)[0]
            stripped = stripped.replace(url, new_url)
        non_alpha_pattern = r'[^A-Za-z ]+'
        stripped = re.sub(non_alpha_pattern, '', stripped)
        return ' '.join(stripped.split())
    else:
        return " "
from pyspark.sql.functions import col
train = train.withColumn("body", strip_text(col("body"))).withColumn("title", strip_text(col("title")))
validation = validation.withColumn("body", strip_text(col("body"))).withColumn("title", strip_text(col("title")))
Thanks for any help you may be able to provide. I have been working on this for a long time with no progress.
I have tried removing my pre-processing UDF, but there was no difference.
I have tried to impute null values, as I saw this could potentially be an issue, but it didn't fix it:
train = train.fillna(0)
I have tried removing the IDF text features altogether, leaving just the OneHotEncoded column, and this did work. So I believe it is an issue with the text columns, but I am not sure what to do next.
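One way to narrow this down (a sketch, not a confirmed fix, reusing the variables defined above): fit the text and feature stages once, materialize their output, and run the tuner over the classifier alone. If this succeeds, the failure is tied to refitting the Spark NLP stages inside each tuning split.
# Fit everything except the classifier once, then tune only LogisticRegression.
prep_pipeline = Pipeline().setStages(pipeline_stages[:-1])  # all stages except lr
prep_model = prep_pipeline.fit(train)
train_prepped = prep_model.transform(train).select('features', 'label').cache()

cv_lr = CrossValidator(estimator=lr,
                       estimatorParamMaps=paramGrid_lr,
                       evaluator=evaluator)
cvModel_lr = cv_lr.fit(train_prepped)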

NoClassDefFoundError on creating IgniteSinkConnector

I am trying to get a distributed setup of the ignite-connector to run. Sadly, it does not work. I was able to grab the log from the creation of the connector via the API.
API POST payload to /connectors
{
  "name": "ignite-connector",
  "config": {
    "connector.class": "org.apache.ignite.stream.kafka.connect.IgniteSinkConnector",
    "tasks.max": "2",
    "topics": "someTopic1",
    "cacheName": "myCache",
    "cacheAllowOverwrite": true,
    "igniteCfg": "/opt/ignite/examples/config/example-cache.xml"
  }
}
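A minimal way to submit that payload, assuming the worker's REST listener is on the default port 8083 and the JSON is saved as ignite-connector.json (both assumptions):
curl -X POST -H 'Content-Type: application/json' \
     --data @ignite-connector.json \
     http://localhost:8083/connectors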
I set up the ignite-connector as a plugin: I built an uber-jar from the repo, put it in a separate directory, and included that directory as a plugin in the .properties file I use to start connect-distributed.sh.
I set the classpath in the systemd units I manage for both the connector and Kafka:
Environment=CLASSPATH=/opt/kafka/ignite-connector/*
The full error log follows:
[2022-11-17 19:49:30,268] INFO [ignite-connector|worker] SinkConnectorConfig values:
config.action.reload = restart
connector.class = org.apache.ignite.stream.kafka.connect.IgniteSinkConnector
errors.deadletterqueue.context.headers.enable = false
errors.deadletterqueue.topic.name =
errors.deadletterqueue.topic.replication.factor = 3
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = ignite-connector
predicates = []
tasks.max = 2
topics = [someTopic1]
topics.regex =
transforms = []
value.converter = null
(org.apache.kafka.connect.runtime.SinkConnectorConfig:376)
[2022-11-17 19:49:30,272] INFO [ignite-connector|worker] EnrichedConnectorConfig values:
config.action.reload = restart
connector.class = org.apache.ignite.stream.kafka.connect.IgniteSinkConnector
errors.deadletterqueue.context.headers.enable = false
errors.deadletterqueue.topic.name =
errors.deadletterqueue.topic.replication.factor = 3
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = ignite-connector
predicates = []
tasks.max = 2
topics = [someTopic1]
topics.regex =
transforms = []
value.converter = null
(org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:376)
[2022-11-17 19:49:30,276] INFO [ignite-connector|worker] Instantiated connector ignite-connector with version 3.3.1 of type class org.apache.ignite.stream.kafka.connect.IgniteSinkConnector (org.apache.kafka.connect.runtime.Worker:322)
[2022-11-17 19:49:30,276] INFO [ignite-connector|worker] Finished creating connector ignite-connector (org.apache.kafka.connect.runtime.Worker:347)
[2022-11-17 19:49:30,277] ERROR [ignite-connector|worker] WorkerConnector{id=ignite-connector} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector:201)
java.lang.NoClassDefFoundError: org/apache/ignite/internal/util/typedef/internal/A
at org.apache.ignite.stream.kafka.connect.IgniteSinkConnector.start(IgniteSinkConnector.java:55)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:193)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:218)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:363)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:346)
at org.apache.kafka.connect.runtime.WorkerConnector.doRun(WorkerConnector.java:146)
at org.apache.kafka.connect.runtime.WorkerConnector.run(WorkerConnector.java:123)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
[2022-11-17 19:49:30,277] INFO [Worker clientId=connect-1, groupId=connect-cluster] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1687)
[2022-11-17 19:49:30,280] ERROR [ignite-connector|worker] [Worker clientId=connect-1, groupId=connect-cluster] Failed to start connector 'ignite-connector' (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1811)
org.apache.kafka.connect.errors.ConnectException: Failed to start connector: ignite-connector
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.lambda$startConnector$35(DistributedHerder.java:1782)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:349)
at org.apache.kafka.connect.runtime.WorkerConnector.doRun(WorkerConnector.java:146)
at org.apache.kafka.connect.runtime.WorkerConnector.run(WorkerConnector.java:123)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to transition connector ignite-connector to state STARTED
... 8 more
Caused by: java.lang.NoClassDefFoundError: org/apache/ignite/internal/util/typedef/internal/A
at org.apache.ignite.stream.kafka.connect.IgniteSinkConnector.start(IgniteSinkConnector.java:55)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:193)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:218)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:363)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:346)
... 7 more
The mentioned class (A) is included in the ignite-core-2.9.1.jar that is bundled in the uber-jar in the plugin directory.
Any pointers are appreciated.
There seems to be a misunderstanding about what "plugins" are: they are only classes defined as implementations of Converters, Transforms, and Connectors.
Internal Ignite classes are none of these, so they won't be loaded by the plugin.path classloader.
To fix this, you'll need to ensure you export CLASSPATH=/path/to/ignite-files/*.jar, and you can use jar -tf to validate that the class exists in a specific JAR before running the Connect process.
This is not a hack; it's just how the Java classloader works.
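Concretely, something like the following before starting the worker; the paths are illustrative, taken from the setup described above:
# confirm the missing class really is inside the bundled jar
jar -tf /opt/kafka/ignite-connector/ignite-core-2.9.1.jar | grep 'internal/util/typedef/internal/A.class'
# make the Ignite jars visible to the worker JVM itself, not only to plugin.path
# (quoted so the JVM, not the shell, expands the wildcard)
export CLASSPATH='/opt/kafka/ignite-connector/*'
bin/connect-distributed.sh config/connect-distributed.properties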

Livy start new session

I have a problem when I create a new session via Livy: the session goes dead right after its creation. I have installed Livy, Spark 3.0.0, Scala 2.12.10, and Python 3.7. I followed these steps:
step 01 : start Livy server
./livy-server start
step 02 : start spark shell
spark-shell
step 03 : execute this code using Python 3:
import json, pprint, requests, textwrap
host = 'http://localhost:8998'
data = {'kind': 'spark'}
headers = {'Content-Type': 'application/json'}
r = requests.post(host + '/sessions', data=json.dumps(data), headers=headers)
r.json()
session_url = host + r.headers['location']
r = requests.get(session_url, headers=headers)
r.json()
statements_url = session_url + '/statements'
data = {'code': '1 + 1'}
r = requests.post(statements_url, data=json.dumps(data), headers=headers)
r.json()
statement_url = host + r.headers['location']
r = requests.get(statement_url, headers=headers)
pprint.pprint(r.json())
data = {
    'code': textwrap.dedent("""
        val NUM_SAMPLES = 100000;
        val count = sc.parallelize(1 to NUM_SAMPLES).map { i =>
          val x = Math.random();
          val y = Math.random();
          if (x*x + y*y < 1) 1 else 0
        }.reduce(_ + _);
        println(\"Pi is roughly \" + 4.0 * count / NUM_SAMPLES)
        """)
}
r = requests.post(statements_url, data=json.dumps(data), headers=headers)
pprint.pprint(r.json())
statement_url = host + r.headers['location']
r = requests.get(statement_url, headers=headers)
pprint.pprint(r.json())
This is the log of my session:
stderr:
20/02/17 14:20:06 WARN Utils: Your hostname, zekri-VirtualBox resolves to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface enp0s3)
20/02/17 14:20:06 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/usr/local/spark-3.0.0-preview2-bin-hadoop2.7/jars/spark-unsafe_2.12-3.0.0-preview2.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
20/02/17 14:20:07 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.NoClassDefFoundError: scala/Function0$class
at org.apache.livy.shaded.json4s.ThreadLocal.<init>(Formats.scala:311)
at org.apache.livy.shaded.json4s.DefaultFormats$class.$init$(Formats.scala:318)
at org.apache.livy.shaded.json4s.DefaultFormats$.<init>(Formats.scala:296)
at org.apache.livy.shaded.json4s.DefaultFormats$.<clinit>(Formats.scala)
at org.apache.livy.repl.Session.<init>(Session.scala:66)
at org.apache.livy.repl.ReplDriver.initializeSparkEntries(ReplDriver.scala:41)
at org.apache.livy.rsc.driver.RSCDriver.run(RSCDriver.java:337)
at org.apache.livy.rsc.driver.RSCDriverBootstrapper.main(RSCDriverBootstrapper.java:93)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: scala.Function0$class
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 20 more
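A hypothesis worth checking rather than a confirmed diagnosis: classes ending in $class, such as the missing scala.Function0$class, exist only in Scala 2.11 binaries and were removed in Scala 2.12, so a Livy build targeting Scala 2.11 cannot drive Spark 3.0.0, which is built against Scala 2.12. Two quick checks (the Livy directory name assumes a standard Livy binary distribution):
spark-submit --version        # prints the Scala version Spark was built with
ls $LIVY_HOME/repl_2.11-jars  # if only 2.11 REPL jars exist, this Livy was built for Scala 2.11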

Quartz Scheduler failed to initialize

Getting these errors:
2018-01-22 18:00:59,797 [ServerService Thread Pool -- 79] ERROR org.quartz.ee.servlet.QuartzInitializerListener - Quartz Scheduler failed to initialize: org.quartz.SchedulerException: SchedulerPlugin class 'org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin' could not be instantiated. [See nested exception: java.lang.ClassNotFoundException: Unable to load class org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin by any known loaders.]
2018-01-22 18:00:59,797 [ServerService Thread Pool -- 79] ERROR stderr - org.quartz.SchedulerException: SchedulerPlugin class 'org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin' could not be instantiated. [See nested exception: java.lang.ClassNotFoundException: Unable to load class org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin by any known loaders.]
2018-01-22 18:00:59,805 [ServerService Thread Pool -- 79] ERROR stderr - Caused by: java.lang.ClassNotFoundException: Unable to load class org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin by any known loaders.
2018-01-22 18:00:59,805 [ServerService Thread Pool -- 79] ERROR stderr - Caused by: java.lang.LinkageError: Failed to link org/quartz/plugins/xml/XMLSchedulingDataProcessorPlugin (Module "deployment.Reports.ear:main" from Service Module Loader)
2018-01-22 18:00:59,806 [ServerService Thread Pool -- 79] ERROR stderr - Caused by: java.lang.NoClassDefFoundError: org/quartz/jobs/FileScanListener
2018-01-22 18:00:59,807 [ServerService Thread Pool -- 79] ERROR stderr - Caused by: java.lang.ClassNotFoundException: org.quartz.jobs.FileScanListener from [Module "deployment.Reports.ear:main" from Service Module Loader]
quartz.properties file:
org.quartz.scheduler.instanceName = Scheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 10
org.quartz.threadPool.threadPriority = 5
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.dataSource = quartzDataSource
org.quartz.jobStore.tablePrefix = QUARTZ.QRTZ_
#org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
org.quartz.dataSource.quartzDataSource.jndiURL=java:QuartzDataSource
org.quartz.dataSource.quartzDataSource.java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
org.quartz.plugin.jobInitializer.class = org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin
org.quartz.plugin.jobInitializer.fileNames = quartz-jobs-xml
org.quartz.plugin.jobInitializer.failOnFileNotFound = true
org.quartz.plugin.jobInitializer.scanInterval = 0
org.quartz.plugin.jobInitializer.wrapInUserTransaction = false
My quartz-jobs.xml file is in the same directory as my quartz.properties file. I tried playing around with that path a little, but I think the errors are just saying it can't instantiate my jobs.
I needed to add a new Quartz library to my build to include the XMLSchedulingDataProcessorPlugin that I reference in my quartz.properties.
In my root build.gradle I added this to my ext.libraries section:
quartz_jobs: 'org.quartz-scheduler:quartz-jobs:2.3.0'
Then in my web services build.gradle, where I use Quartz, I added the library to my providedCompile section:
libraries.quartz_jobs
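Put together, the wiring looks roughly like this (a sketch; the block names follow the description above, and your project layout may differ):
// root build.gradle
ext.libraries = [
        quartz_jobs: 'org.quartz-scheduler:quartz-jobs:2.3.0'
]

// web services build.gradle
dependencies {
    providedCompile libraries.quartz_jobs
}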

JBoss Seam + Quartz Scheduler 2.2.1 + clustering

I have a problem configuring Quartz version 2.2.1 with the Seam framework.
Now I get this error in the log:
[24.4.14 12:51:37:843 SELČ] 0000000b SystemOut O ERROR manage:3853 - ClusterManager: Error managing cluster: Failure updating scheduler state when checking-in: ORA-01400: cannot insert NULL into ("JDC"."QRTZ_SCHEDULER_STATE"."SCHED_NAME")
org.quartz.JobPersistenceException: Failure updating scheduler state when checking-in: ORA-01400: cannot insert NULL into ("JDC"."QRTZ_SCHEDULER_STATE"."SCHED_NAME")
[See nested exception: java.sql.SQLIntegrityConstraintViolationException: ORA-01400: cannot insert NULL into ("JDC"."QRTZ_SCHEDULER_STATE"."SCHED_NAME")]
at org.quartz.impl.jdbcjobstore.JobStoreSupport.clusterCheckIn(JobStoreSupport.java:3373)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.doCheckin(JobStoreSupport.java:3226)
at org.quartz.impl.jdbcjobstore.JobStoreSupport$ClusterManager.manage(JobStoreSupport.java:3847)
at org.quartz.impl.jdbcjobstore.JobStoreSupport$ClusterManager.initialize(JobStoreSupport.java:3832)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.schedulerStarted(JobStoreSupport.java:639)
at org.quartz.core.QuartzScheduler.start(QuartzScheduler.java:513)
at org.quartz.impl.StdScheduler.start(StdScheduler.java:143)
at org.jboss.seam.async.QuartzDispatcher.initScheduler(QuartzDispatcher.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at org.jboss.seam.util.Reflections.invoke(Reflections.java:22)
at org.jboss.seam.util.Reflections.invokeAndWrap(Reflections.java:144)
at org.jboss.seam.Component.callComponentMethod(Component.java:2275)
at org.jboss.seam.Component.callCreateMethod(Component.java:2198)
at org.jboss.seam.Component.newInstance(Component.java:2158)
at org.jboss.seam.contexts.Contexts.startup(Contexts.java:304)
at org.jboss.seam.contexts.Contexts.startup(Contexts.java:278)
at org.jboss.seam.contexts.ServletLifecycle.endInitialization(ServletLifecycle.java:143)
at org.jboss.seam.init.Initialization.init(Initialization.java:744)
at org.jboss.seam.servlet.SeamListener.contextInitialized(SeamListener.java:36)
at com.ibm.ws.webcontainer.webapp.WebApp.notifyServletContextCreated(WebApp.java:1682)
at com.ibm.ws.webcontainer.webapp.WebAppImpl.initialize(WebAppImpl.java:410)
at com.ibm.ws.webcontainer.webapp.WebGroupImpl.addWebApplication(WebGroupImpl.java:88)
at com.ibm.ws.webcontainer.VirtualHostImpl.addWebApplication(VirtualHostImpl.java:169)
at com.ibm.ws.webcontainer.WSWebContainer.addWebApp(WSWebContainer.java:749)
at com.ibm.ws.webcontainer.WSWebContainer.addWebApplication(WSWebContainer.java:634)
at com.ibm.ws.webcontainer.component.WebContainerImpl.install(WebContainerImpl.java:422)
at com.ibm.ws.webcontainer.component.WebContainerImpl.start(WebContainerImpl.java:714)
at com.ibm.ws.runtime.component.ApplicationMgrImpl.start(ApplicationMgrImpl.java:1163)
at com.ibm.ws.runtime.component.DeployedApplicationImpl.fireDeployedObjectStart(DeployedApplicationImpl.java:1369)
at com.ibm.ws.runtime.component.DeployedModuleImpl.start(DeployedModuleImpl.java:639)
at com.ibm.ws.runtime.component.DeployedApplicationImpl.start(DeployedApplicationImpl.java:967)
at com.ibm.ws.runtime.component.ApplicationMgrImpl.startApplication(ApplicationMgrImpl.java:769)
at com.ibm.ws.runtime.component.ApplicationMgrImpl$5.run(ApplicationMgrImpl.java:2160)
at com.ibm.ws.security.auth.ContextManagerImpl.runAs(ContextManagerImpl.java:5468)
at com.ibm.ws.security.auth.ContextManagerImpl.runAsSystem(ContextManagerImpl.java:5594)
at com.ibm.ws.security.core.SecurityContext.runAsSystem(SecurityContext.java:255)
at com.ibm.ws.runtime.component.ApplicationMgrImpl.start(ApplicationMgrImpl.java:2165)
at com.ibm.ws.runtime.component.CompositionUnitMgrImpl.start(CompositionUnitMgrImpl.java:446)
at com.ibm.ws.runtime.component.CompositionUnitImpl.start(CompositionUnitImpl.java:123)
at com.ibm.ws.runtime.component.CompositionUnitMgrImpl.start(CompositionUnitMgrImpl.java:389)
at com.ibm.ws.runtime.component.CompositionUnitMgrImpl.access$500(CompositionUnitMgrImpl.java:117)
at com.ibm.ws.runtime.component.CompositionUnitMgrImpl$CUInitializer.run(CompositionUnitMgrImpl.java:995)
at com.ibm.wsspi.runtime.component.WsComponentImpl$_AsynchInitializer.run(WsComponentImpl.java:496)
at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1700)
Caused by:
java.sql.SQLIntegrityConstraintViolationException: ORA-01400: cannot insert NULL into ("JDC"."QRTZ_SCHEDULER_STATE"."SCHED_NAME")
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:85)
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:206)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:194)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:953)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1222)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3387)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3468)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1350)
at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.pmiExecuteUpdate(WSJdbcPreparedStatement.java:1185)
at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.executeUpdate(WSJdbcPreparedStatement.java:802)
at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.insertSchedulerState(StdJDBCDelegate.java:3227)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.clusterCheckIn(JobStoreSupport.java:3368)
... 46 more
My Quartz configuration is:
#==============================================================
# Configure Main Scheduler Properties
#==============================================================
org.quartz.scheduler.instanceName = Sched1
org.quartz.scheduler.instanceId = AUTO
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.scheduler.userTransactionURL = jta/usertransaction
#==============================================================
# Configure ThreadPool
#==============================================================
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5
org.quartz.threadPool.threadPriority = 5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread = true
#==============================================================
# Configure JobStore
#==============================================================
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.dataSource = quartzDS
#org.quartz.jobStore.nonManagedTXDataSource = quartzDSNoTx
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 15000
#============================================================================
# Configure Datasources
#============================================================================
org.quartz.dataSource.quartzDS.jndiURL= jdc
#org.quartz.dataSource.quartzDSNoTx.jndiURL= java:/jdcNoTX
#org.quartz.plugin.jobInitializer.class = org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin
#org.quartz.plugin.jobInitializer.fileNames = jobSchedule.xml
Show your Quartz jobs config file, and tell us when the error occurs: at compile time, or when the job should execute?
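One thing worth checking, offered as a guess rather than a diagnosis: if a Quartz 1.x jar (for example one bundled with the application server or another deployment) wins the classloading race over 2.2.1, its check-in INSERT predates the SCHED_NAME column and would leave that column NULL against a 2.x schema where it is NOT NULL, which matches the ORA-01400 above. The paths below are illustrative:
# list every quartz jar the deployment can see, then inspect each manifest version
find /path/to/appserver -name 'quartz*.jar'
unzip -p /path/to/quartz.jar META-INF/MANIFEST.MF | grep -i version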