I have used Quartz for scheduling my jobs. When I use the RAM job store, the scheduler starts and triggers successfully, but when I use the JDBC store it fails to start. Can you please guide me? I've placed the artifacts below:
# Default Properties file for use by StdSchedulerFactory
# to create a Quartz Scheduler Instance, if a different
# properties file is not explicitly specified.
#
org.quartz.scheduler.instanceName: DefaultQuartzScheduler
org.quartz.scheduler.rmi.export: false
org.quartz.scheduler.rmi.proxy: false
org.quartz.scheduler.wrapJobExecutionInUserTransaction: false
org.quartz.threadPool.class: org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount: 2
org.quartz.threadPool.threadPriority: 5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread: true
org.quartz.jobStore.misfireThreshold: 60000
#org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore
org.quartz.jobStore.class: org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.dataSource = myDS
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.dataSource.myDS.driver = org.hsqldb.jdbc.JDBCDriver
org.quartz.dataSource.myDS.URL = jdbc:hsqldb:file:x\\myds
org.quartz.dataSource.myDS.user = SA
org.quartz.dataSource.myDS.password = sa
org.quartz.dataSource.myDS.maxConnections = 30
These are my logs...
2014-01-17 11:36:42 INFO MLog:80 - MLog clients using log4j logging.
2014-01-17 11:36:42 INFO C3P0Registry:204 - Initializing c3p0-0.9.1.1 [built 15-March-2007 01:32:31; debug? true; trace: 10]
2014-01-17 11:36:42 INFO StdSchedulerFactory:1184 - Using default implementation for ThreadExecutor
2014-01-17 11:36:42 INFO SimpleThreadPool:268 - Job execution threads will use class loader of thread: main
2014-01-17 11:36:42 INFO SchedulerSignalerImpl:61 - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
2014-01-17 11:36:42 INFO QuartzScheduler:240 - Quartz Scheduler v.2.2.1 created.
2014-01-17 11:36:42 INFO JobStoreTX:670 - Using thread monitor-based data access locking (synchronization).
2014-01-17 11:36:42 INFO JobStoreTX:59 - JobStoreTX initialized.
2014-01-17 11:36:42 INFO QuartzScheduler:305 - Scheduler meta-data: Quartz Scheduler (v2.2.1) 'DefaultQuartzScheduler' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 2 threads.
Using job-store 'org.quartz.impl.jdbcjobstore.JobStoreTX' - which supports persistence. and is not clustered.
2014-01-17 11:36:42 INFO StdSchedulerFactory:1339 - Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
2014-01-17 11:36:42 INFO StdSchedulerFactory:1343 - Quartz scheduler version: 2.2.1
2014-01-17 11:36:42 INFO AbstractPoolBackedDataSource:462 - Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, dataSourceName -> 2yhpp38z182altw1uxr4l9|6df6f81b, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> org.hsqldb.jdbc.JDBCDriver, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, identityToken -> 2yhpp38z182altw1uxr4l9|6df6f81b, idleConnectionTestPeriod -> 0, initialPoolSize -> 3, jdbcUrl -> jdbc:hsqldb:file:x\database\myds, lastAcquisitionFailureDefaultUser -> null, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 30, maxStatements -> 0, maxStatementsPerConnection -> 120, minPoolSize -> 1, numHelperThreads -> 3, numThreadsAwaitingCheckoutDefaultUser -> 0, preferredTestQuery -> null, properties -> {user=******, password=******}, propertyCycle -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false ]
Currently the database tables are empty.
Your suggestions are needed.
OK, I think it is not an error. I have the same log info in my project (NOT STARTED); however, when I check my tables, I'm able to see my jobs and triggers. Try creating the database tables manually and scheduling a job in your project. Afterwards you will see them in the database.
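To make that concrete, here is a minimal Java sketch (the job and trigger names, and the MyJob class, are made up for illustration) of scheduling a job and then explicitly calling start(); until start() is invoked the scheduler stays in standby, which is why the meta-data log above says NOT STARTED. With JobStoreTX this also assumes the QRTZ_ tables already exist, created from the DDL scripts that ship with the Quartz distribution.

import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class QuartzStartExample {

    // Placeholder job used only for this sketch.
    public static class MyJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            System.out.println("MyJob executed");
        }
    }

    public static void main(String[] args) throws SchedulerException {
        // Picks up quartz.properties from the classpath (the file shown above).
        Scheduler scheduler = new StdSchedulerFactory().getScheduler();

        JobDetail job = JobBuilder.newJob(MyJob.class)
                .withIdentity("myJob", "myGroup")
                .build();

        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("myTrigger", "myGroup")
                .startNow()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInSeconds(30)
                        .repeatForever())
                .build();

        scheduler.scheduleJob(job, trigger); // with JobStoreTX this writes to the QRTZ_ tables
        scheduler.start();                   // without this call the scheduler stays NOT STARTED
    }
}

If the scheduler still fails with JobStoreTX, the first thing to check is that the QRTZ_ tables actually exist in the database that the configured data source points at.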
Related
I am experiencing an unexpected automatic EAR restart from Wildfly 23.0.2.Final (also verified with 22.0.1.Final and 23.0.0.Final).
Basically, what happens is that the application is started, but when everything is up and running, I receive a message that reads WFLYSRV0212: Resuming server and the deploy process starts over again. This also seems to jeopardize the JMS queue definitions (defined from within the EJB module as @JMSResourceDestinations).
Below is a snippet of the logs when this happens (just a bit anonymized):
13:16:12,228 INFO [org.jboss.as.server] (ServerService Thread Pool -- 48) WFLYSRV0010: Deployed "XXXX.ear" (runtime-name : "XXXX.ear")
13:16:12,250 INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-5) WFLYJCA0019: Stopped Driver service with driver-name = XXXXX.ear_org.postgresql.Driver_42_2
13:16:12,251 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212: Resuming server
13:16:12,260 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 9) WFLYUT0022: Unregistered web context: '/api' from server 'default-server'
13:16:12,269 INFO [io.undertow.servlet] (ServerService Thread Pool -- 9) Closing Spring root WebApplicationContext
13:16:12,303 WARN [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 25) AMQ222061: Client connection failed, clearing up resources for session 1610e737-d282-11eb-9d4b-0242e77fb65d
13:16:12,309 WARN [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 25) AMQ222107: Cleared up resources for session 1610e737-d282-11eb-9d4b-0242e77fb65d
13:16:12,313 WARN [org.apache.activemq.artemis.ra] (default-threads - 1) AMQ152005: Failure in broker activation org.apache.activemq.artemis.ra.inflow.ActiveMQActivationSpec(ra=org.wildfly.extension.messaging.activemq.ActiveMQResourceAdapter#9f1bfbc destination=java:global/XXXX/jms/queue/queue1 destinationType=javax.jms.Queue ack=Auto-acknowledge durable=false clientID=null user=null maxSession=15): ActiveMQUnBlockedException[errorType=UNBLOCKED message=AMQ219016: Connection failure detected. Unblocking a blocking call that will never get a response]
at org.apache.activemq.artemis#2.16.0//org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:540)
at org.apache.activemq.artemis#2.16.0//org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:434)
at org.apache.activemq.artemis#2.16.0//org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQClientProtocolManager.createSessionContext(ActiveMQClientProtocolManager.java:300)
at org.apache.activemq.artemis#2.16.0//org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQClientProtocolManager.createSessionContext(ActiveMQClientProtocolManager.java:249)
at org.apache.activemq.artemis#2.16.0//org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createSessionChannel(ClientSessionFactoryImpl.java:1401)
at org.apache.activemq.artemis#2.16.0//org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createSessionInternal(ClientSessionFactoryImpl.java:705)
at org.apache.activemq.artemis#2.16.0//org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createSession(ClientSessionFactoryImpl.java:316)
at org.apache.activemq.artemis.ra#2.16.0//org.apache.activemq.artemis.ra.ActiveMQResourceAdapter.createSession(ActiveMQResourceAdapter.java:1602)
at org.apache.activemq.artemis.ra#2.16.0//org.apache.activemq.artemis.ra.inflow.ActiveMQActivation.setupSession(ActiveMQActivation.java:491)
at org.apache.activemq.artemis.ra#2.16.0//org.apache.activemq.artemis.ra.inflow.ActiveMQActivation.setup(ActiveMQActivation.java:316)
at org.apache.activemq.artemis.ra#2.16.0//org.apache.activemq.artemis.ra.inflow.ActiveMQActivation$SetupActivation.run(ActiveMQActivation.java:764)
at org.wildfly.extension.messaging-activemq//org.wildfly.extension.messaging.activemq.ActiveMQResourceAdapter$WorkWrapper.run(ActiveMQResourceAdapter.java:161)
at org.jboss.ironjacamar.impl#1.4.27.Final//org.jboss.jca.core.workmanager.WorkWrapper.runWork(WorkWrapper.java:445)
at org.jboss.as.connector#23.0.2.Final//org.jboss.as.connector.services.workmanager.WildflyWorkWrapper.runWork(WildflyWorkWrapper.java:69)
at org.jboss.ironjacamar.impl#1.4.27.Final//org.jboss.jca.core.workmanager.WorkWrapper.run(WorkWrapper.java:223)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.SimpleDirectExecutor.execute(SimpleDirectExecutor.java:29)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.QueueExecutor.runTask(QueueExecutor.java:789)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.QueueExecutor.access$100(QueueExecutor.java:44)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.QueueExecutor$Worker.run(QueueExecutor.java:809)
at java.base/java.lang.Thread.run(Thread.java:829)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.JBossThread.run(JBossThread.java:513)
13:16:12,329 INFO [org.wildfly.extension.messaging-activemq] (MSC service thread 1-5) WFLYMSGAMQ0006: Unbound messaging object to jndi name java:global/XXXX/jms/queue/queue1
The EAR was created with the specific Wildfly archetype. The configuration of the Wildfly instance is the standalone-full, as it comes out of the box.
Can anyone help me understand what's happening?
UPDATE: Wildfly logs
After increasing the root log level and going through the output, I found the following:
2021-06-22 18:44:59,357 DEBUG [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) Deployment scan of [/home/xxxx/wildfly-23.0.2.Final/standalone/deployments] found update action [{
"operation" => "composite",
"address" => [],
"steps" => [
{
"operation" => "add",
"address" => [("deployment" => "togather-engine.ear")],
"content" => [{
"archive" => false,
"path" => "deployments/XXXX.ear",
"relative-to" => "jboss.server.base.dir"
}],
"persistent" => false,
"owner" => [
("subsystem" => "deployment-scanner"),
("scanner" => "default")
]
},
{
"operation" => "deploy",
"address" => [("deployment" => "XXXX.ear")],
"owner" => [
("subsystem" => "deployment-scanner"),
("scanner" => "default")
]
}
],
"operation-headers" => {"rollback-on-runtime-failure" => false}
}]
A bit later:
2021-06-22 18:45:11,886 DEBUG [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 2) Deployment scan of [/home/xxxxx/wildfly-23.0.2.Final/standalone/deployments] found update action [{
"operation" => "redeploy",
"address" => [("deployment" => "XXXX.ear")],
"owner" => [
("subsystem" => "deployment-scanner"),
("scanner" => "default")
]
}]
It looks like it is actually going through the deployment twice.
I'm getting the below exception when using Hibernate with C3P0.
[main] ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper - [Amazon](500310) Invalid operation: subquery in FROM must have an alias
Position: 15;
Exception in thread "main" org.hibernate.exception.SQLGrammarException: Error accessing table metadata
at org.hibernate.exception.internal.SQLStateConversionDelegate.convert(SQLStateConversionDelegate.java:106)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:111)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:97)
at org.hibernate.tool.schema.extract.internal.InformationExtractorJdbcDatabaseMetaDataImpl.convertSQLException(InformationExtractorJdbcDatabaseMetaDataImpl.java:99)
at org.hibernate.tool.schema.extract.internal.InformationExtractorJdbcDatabaseMetaDataImpl.locateTableInNamespace(InformationExtractorJdbcDatabaseMetaDataImpl.java:354)
at org.hibernate.tool.schema.extract.internal.InformationExtractorJdbcDatabaseMetaDataImpl.getTable(InformationExtractorJdbcDatabaseMetaDataImpl.java:241)
at org.hibernate.tool.schema.internal.exec.ImprovedDatabaseInformationImpl.getTableInformation(ImprovedDatabaseInformationImpl.java:109)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.performMigration(SchemaMigratorImpl.java:252)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.doMigration(SchemaMigratorImpl.java:137)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.doMigration(SchemaMigratorImpl.java:110)
at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.performDatabaseAction(SchemaManagementToolCoordinator.java:176)
at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.process(SchemaManagementToolCoordinator.java:64)
at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:458)
at org.hibernate.boot.internal.SessionFactoryBuilderImpl.build(SessionFactoryBuilderImpl.java:465)
at com.xxxxx.validation.execute.Executor.main(Executor.java:62)
Caused by: java.sql.SQLException: [Amazon](500310) Invalid operation: subquery in FROM must have an alias
Position: 15;
at com.amazon.redshift.client.messages.inbound.ErrorResponse.toErrorException(Unknown Source)
at com.amazon.redshift.client.PGMessagingContext.handleErrorResponse(Unknown Source)
at com.amazon.redshift.client.PGMessagingContext.handleMessage(Unknown Source)
at com.amazon.jdbc.communications.InboundMessagesPipeline.getNextMessageOfClass(Unknown Source)
at com.amazon.redshift.client.PGMessagingContext.doMoveToNextClass(Unknown Source)
at com.amazon.redshift.client.PGMessagingContext.getBindComplete(Unknown Source)
at com.amazon.redshift.client.PGClient.handleErrorsScenario1(Unknown Source)
at com.amazon.redshift.client.PGClient.handleErrors(Unknown Source)
at com.amazon.redshift.client.PGClient.directExecute(Unknown Source)
at com.amazon.redshift.client.PGClient.directExecute(Unknown Source)
at com.amazon.redshift.dataengine.PGDataEngine.makeNewMetadataSource(Unknown Source)
at com.amazon.dsi.dataengine.impl.DSIDataEngine.makeNewMetadataResult(Unknown Source)
at com.amazon.redshift.dataengine.PGDataEngine.makeNewMetadataResult(Unknown Source)
at com.amazon.jdbc.jdbc41.S41DatabaseMetaData.createMetaDataResult(Unknown Source)
at com.amazon.jdbc.common.SDatabaseMetaData.getTables(Unknown Source)
at com.mchange.v2.c3p0.impl.NewProxyDatabaseMetaData.getTables(NewProxyDatabaseMetaData.java:2962)
at org.hibernate.tool.schema.extract.internal.InformationExtractorJdbcDatabaseMetaDataImpl.locateTableInNamespace(InformationExtractorJdbcDatabaseMetaDataImpl.java:339)
at org.hibernate.tool.schema.extract.internal.InformationExtractorJdbcDatabaseMetaDataImpl.getTable(InformationExtractorJdbcDatabaseMetaDataImpl.java:241)
at org.hibernate.tool.schema.internal.exec.ImprovedDatabaseInformationImpl.getTableInformation(ImprovedDatabaseInformationImpl.java:109)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.performMigration(SchemaMigratorImpl.java:252)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.doMigration(SchemaMigratorImpl.java:137)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.doMigration(SchemaMigratorImpl.java:110)
at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.performDatabaseAction(SchemaManagementToolCoordinator.java:176)
at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.process(SchemaManagementToolCoordinator.java:64)
at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:458)
at org.hibernate.boot.internal.SessionFactoryBuilderImpl.build(SessionFactoryBuilderImpl.java:465)
Caused by: com.amazon.support.exceptions.ErrorException: [Amazon](500310) Invalid operation: subquery in FROM must have an alias
Position: 15;
... 26 more
If I change Environment.HBM2DDL_AUTO to "none", it works.
Or,
if I remove the C3P0 properties and "?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory&tcpKeepAlive=true" from the DB_URL, it works.
I'm not able to understand why it does not work when Environment.HBM2DDL_AUTO is set to update or create.
Configuration code:
Configuration configuration = new Configuration();
configuration.setProperty("hibernate.current_session_context_class", "thread");
configuration.setProperty(Environment.DRIVER, "org.postgresql.Driver");
configuration.setProperty(Environment.URL,
        DB_URL + "?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory&tcpKeepAlive=true");
configuration.setProperty(Environment.USER, getUsername());
configuration.setProperty(Environment.PASS, getPassword());
configuration.setProperty("hibernate.connection.release_mode", "auto");
configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.PostgreSQLDialect");
configuration.setProperty("hibernate.show_sql", "true");
configuration.setProperty(Environment.HBM2DDL_AUTO, "update");
configuration.setProperty(Environment.AUTOCOMMIT, "true");
configuration.setProperty("hibernate.c3p0.min_size", "1");
configuration.setProperty("hibernate.c3p0.max_size", "1");
configuration.setProperty("hibernate.c3p0.timeout", "300");
configuration.setProperty("hibernate.c3p0.max_statements", "5");
configuration.setProperty("hibernate.c3p0.idle_test_period", "300");
configuration.addAnnotatedClass(SourceTable.class);
ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder()
.applySettings(configuration.getProperties())
.build();
SessionFactory sessionFactory = configuration.buildSessionFactory(serviceRegistry);
Session session = sessionFactory.getCurrentSession();
Transaction transaction = session.beginTransaction();
SourceTable sourceTable = new SourceTable();
sourceTable.setStringID("1");
sourceTable.setStringValue("somevalue");
session.save(sourceTable);
transaction.commit();
Source Table code:
import lombok.Getter;
import lombok.Setter;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
@Getter
@Setter
@Entity
@Table(name = "sourcetable")
public class SourceTable {

    @Id
    @Column(name = "stringid")
    private String stringID;

    @Column(name = "stringvalue")
    private String stringValue;
}
Adding logs:
[main] INFO org.hibernate.annotations.common.Version - HCANN000001: Hibernate Commons Annotations {5.0.1.Final}
[main] INFO org.hibernate.c3p0.internal.C3P0ConnectionProvider - HHH010002: C3P0 using driver: org.postgresql.Driver at URL: jdbc:postgresql://xxxxx:port/dbname?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory&tcpKeepAlive=true
[main] INFO org.hibernate.c3p0.internal.C3P0ConnectionProvider - HHH10001001: Connection properties: {user=*****, password=****, autocommit=true, release_mode=auto}
[main] INFO org.hibernate.c3p0.internal.C3P0ConnectionProvider - HHH10001003: Autocommit mode: true
[MLog-Init-Reporter] INFO com.mchange.v2.log.MLog - MLog clients using slf4j logging.
[main] INFO com.mchange.v2.c3p0.C3P0Registry - Initializing c3p0-0.9.5.1 [built 16-June-2015 00:06:36 -0700; debug? true; trace: 10]
[main] INFO org.hibernate.c3p0.internal.C3P0ConnectionProvider - HHH10001007: JDBC isolation level: <unknown>
[main] INFO com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource - Initializing c3p0 pool... com.mchange.v2.c3p0.PoolBackedDataSource#56e8076 [ connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource#894303c4 [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, debugUnreturnedConnectionStackTraces -> false, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, identityToken -> 30f3c7a7juegyz7iq99d|56aaaecd, idleConnectionTestPeriod -> 300, initialPoolSize -> 1, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 300, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 1, maxStatements -> 5, maxStatementsPerConnection -> 0, minPoolSize -> 1, nestedDataSource -> com.mchange.v2.c3p0.DriverManagerDataSource#6247eb23 [ description -> null, driverClass -> null, factoryClassLocation -> null, forceUseNamedDriverClass -> false, identityToken -> 30f3c7a7juegyz7iq99d|302a07d, jdbcUrl -> jdbc:postgresql://XXXXX:port/dbname?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory&tcpKeepAlive=true, properties -> {user=******, password=******, autocommit=true, release_mode=auto} ], preferredTestQuery -> null, privilegeSpawnedThreads -> false, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false; userOverrides: {} ], dataSourceName -> null, extensions -> {}, factoryClassLocation -> null, identityToken -> 30f3c7a7juegyz7iq99d|395b56bb, numHelperThreads -> 3 ]
[main] INFO org.hibernate.dialect.Dialect - HHH000400: Using dialect: org.hibernate.dialect.PostgreSQLDialect
[main] INFO org.hibernate.engine.jdbc.env.internal.LobCreatorBuilderImpl - HHH000424: Disabling contextual LOB creation as createClob() method threw error : java.lang.reflect.InvocationTargetException
[main] INFO org.hibernate.type.BasicTypeRegistry - HHH000270: Type registration [java.util.UUID] overrides previous : org.hibernate.type.UUIDBinaryType#21d8bcbe
[main] INFO org.hibernate.envers.boot.internal.EnversServiceImpl - Envers integration enabled? : true
[main] INFO org.hibernate.tuple.PojoInstantiator - HHH000182: No default (no-argument) constructor for class: com.validation.tables.SourceTable (class must be instantiated by Interceptor)
[main] WARN org.hibernate.engine.jdbc.spi.SqlExceptionHelper - SQL Error: 500310, SQLState: 42601
Ensure your dialect matches the appropriate version of Postgres if you are using Hibernate 5.
See here: https://docs.jboss.org/hibernate/orm/5.2/javadocs/org/hibernate/dialect/package-summary.html
Since you are using Hibernate 5.1 you should try setting the dialect to:
org.hibernate.dialect.PostgreSQL94Dialect
Additional dialects can be found here:
https://docs.jboss.org/hibernate/orm/5.1/javadocs/
In Hibernate 5 the PostgreSQLDialect is deprecated.
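For illustration, and hedged because the right dialect class depends on your exact Hibernate version, the suggested change in the configuration code above would look roughly like this (disabling automatic schema migration, which the question already found to work, is shown as well):

// Version-specific dialect, per the suggestion above.
configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.PostgreSQL94Dialect");

// The failure in the stack trace happens during Hibernate's schema-update metadata
// lookup, so turning automatic schema migration off also avoids it, as observed above.
configuration.setProperty(Environment.HBM2DDL_AUTO, "none");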
I am unable to receive messages in msgItr, whereas at the command prompt, using the Kafka commands, I am able to see the messages in the partition. Please let me know what is going on here and what I should do to get the messages.
I tried to print them, but nothing prints, maybe because it is an RDD and it is printing on the executor node.
val ssc = new StreamingContext(conf, Seconds(props.getProperty("spark.streaming.batchDuration").toInt))
val topics = Set(props.getProperty("kafkaConf.topic"))
// TODO: Externalize StorageLevel to props file
val storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK_SER_2
//"zookeeper.connect" -> "fepp-cdhmn-d2.fepoc.com"
val kafkaParams = Map[String, Object](
// the usual params, make sure to change the port in bootstrap.servers if 9092 is not TLS
"zookeeper.connect" -> props.getProperty("kafkaConf.zookeeper.connect"),
"bootstrap.servers" -> props.getProperty("kafkaConf.bootstrap.servers"),
"group.id" -> props.getProperty("kafkaConf.group.id"),
"zookeeper.connection.timeout.ms" -> props.getProperty("kafkaConf.zookeeper.connection.timeout.ms"),
"security.protocol" -> props.getProperty("kafkaConf.security.protocol"),
"ssl.protocol" -> props.getProperty("kafkaConf.ssl.protocol"),
"ssl.keymanager.algorithm" -> props.getProperty("kafkaConf.ssl.keymanager.algorithm"),
"ssl.enabled.protocols" -> props.getProperty("kafkaConf.ssl.enabled.protocols"),
"ssl.truststore.type" -> props.getProperty("kafkaConf.ssl.truststore.type"),
"ssl.keystore.type" -> props.getProperty("kafkaConf.ssl.keystore.type"),
"ssl.truststore.location" -> props.getProperty("kafkaConf.ssl.truststore.location"),
"ssl.truststore.password" -> props.getProperty("kafkaConf.ssl.truststore.password"),
"ssl.keystore.location" -> props.getProperty("kafkaConf.ssl.keystore.location"),
"ssl.keystore.password" -> props.getProperty("kafkaConf.ssl.keystore.password"),
"ssl.key.password" -> props.getProperty("kafkaConf.ssl.key.password"),
"key.deserializer" -> classOf[StringDeserializer],
"value.deserializer" -> classOf[StringDeserializer],
"auto.offset.reset" -> props.getProperty("kafkaConf.auto.offset.reset"),
"enable.auto.commit" -> (props.getProperty("kafkaConf.enable.auto.commit").toBoolean: java.lang.Boolean),
"key.serializer" -> "org.apache.kafka.common.serialization.StringSerializer",
"value.serializer" -> "org.apache.kafka.common.serialization.StringSerializer"
//"heartbeat.interval.ms" -> props.getProperty("kafkaConf.heartbeat.interval.ms"),
//"session.timeout.ms" -> props.getProperty("kafkaConf.session.timeout.ms")
)
// Must use the direct api as the old api does not support SSL
log.debug("Creating direct kafka stream")
val kafkaStream = KafkaUtils.createDirectStream[String, String](ssc, PreferConsistent,
Subscribe[String, String](topics, kafkaParams))
val res = kafkaStream.foreachRDD((kafkaRdd: RDD[ConsumerRecord[String, String]]) => {
val numPartitions = kafkaRdd.getNumPartitions
log.info(s"Processing RDD with '$numPartitions' partitions.")
// Only one partition for the kafka topic is supported at this time
if (numPartitions != 1) {
throw new RuntimeException("Kafka topic must have 1 partition")
}
val offsetRanges = kafkaRdd.asInstanceOf[HasOffsetRanges].offsetRanges
kafkaRdd.foreachPartition((msgItr: Iterator[ConsumerRecord[String, String]]) => {
val log = LogManager.getRootLogger()
msgItr.foreach((kafkaMsg: ConsumerRecord[String, String]) => {
// The HBase connection fails here because of authentication, with the error below
2018-09-19 15:28:01 INFO ZooKeeper:100 - Client environment:user.home=/home/service_account
2018-09-19 15:28:01 INFO ZooKeeper:100 - Client environment:user.dir=/data/09/yarn/nm/usercache/service_account/appcache/application_1536891989660_9297/container_e208_1536891989660_9297_01_000002
2018-09-19 15:28:01 INFO ZooKeeper:438 - Initiating client connection, connectString=depp-cdhmn-d1.domnnremvd.com:2181,depp-cdhmn-d2.domnnremvd.com:2181,depp-cdhmn-d3.domnnremvd.com:2181 sessionTimeout=90000 watcher=hconnection-0x16648f570x0, quorum=depp-cdhmn-d1.domnnremvd.com:2181,depp-cdhmn-d2.domnnremvd.com:2181,depp-cdhmn-d3.domnnremvd.com:2181, baseZNode=/hbase
2018-09-19 15:28:01 INFO ClientCnxn:975 - Opening socket connection to server depp-cdhmn-d3.domnnremvd.com/999.99.999.777:2181. Will not attempt to authenticate using SASL (unknown error)
2018-09-19 15:28:01 INFO ClientCnxn:852 - Socket connection established, initiating session, client: /999.99.999.999:33314, server: depp-cdhmn-d3.domnnremvd.com/999.99.999.777:2181
2018-09-19 15:28:01 INFO ClientCnxn:1235 - Session establishment complete on server depp-cdhmn-d3.domnnremvd.com/999.99.999.777:2181, sessionid = 0x365cb965ff33958, negotiated timeout = 60000
false
false
2018-09-19 15:28:02 WARN UserGroupInformation:1923 - PriviledgedActionException as:service_account (auth:SIMPLE) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
2018-09-19 15:28:02 WARN RpcClientImpl:675 - Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
2018-09-19 15:28:02 ERROR RpcClientImpl:685 - SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:181)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:618)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:163)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:744)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:741)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:741)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:907)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:874)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1243)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1712)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1650)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1672)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1701)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1858)
at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4313)
at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4305)
at org.apache.hadoop.hbase.client.HBaseAdmin.listTableNames(HBaseAdmin.java:533)
at org.apache.hadoop.hbase.client.HBaseAdmin.listTableNames(HBaseAdmin.java:517)
at com.company.etl.HbaseConnect.mainMethod(HbaseConnect.scala:39)
at com.company.etl.App$$anonfun$1$$anonfun$apply$2$$anonfun$apply$3.apply(App.scala:205)
at com.company.etl.App$$anonfun$1$$anonfun$apply$2$$anonfun$apply$3.apply(App.scala:178)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.foreach(KafkaRDD.scala:189)
at com.company.etl.App$$anonfun$1$$anonfun$apply$2.apply(App.scala:178)
at com.company.etl.App$$anonfun$1$$anonfun$apply$2.apply(App.scala:161)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2062)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2062)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
... 43 more
It's because of Kerberos authentication.
Set the following system properties:
System.setProperty("java.security.auth.login.config","/your/conf/directory/kafkajaas.conf");
System.setProperty("sun.security.jgss.debug","true");
System.setProperty("javax.security.auth.useSubjectCredsOnly","false");
System.setProperty("java.security.krb5.conf", "/your/krb5/conf/directory/krb5.conf");
You can read data from Cloudera Kafka (consumer side):
val df = spark
.readStream
.format("kafka")
.option("kafka.bootstrap.servers", "xx.xx.xx.xx:9092")
.option("subscribe", "test")
.option("kafka.security.protocol","SASL_PLAINTEXT")
.option("kafka.sasl.kerberos.service.name","kafka")
You can write data to a Cloudera Kafka topic (producer side):
val query = blacklistControl.select(to_json(struct("Column1","Column2")).alias("value"))
.writeStream
.format("kafka")
.option("checkpointLocation", "/your/empty/directory")
.option("kafka.bootstrap.servers", "xx.xx.xx.xx:9092")
.option("kafka.security.protocol","SASL_PLAINTEXT")
.option("kafka.sasl.kerberos.service.name","kafka")
.option("topic", "topic_xdr")
.start()
I faced exactly the same issue. What is happening is that the executor node is trying to write to HBase and doesn't have the credentials. What you need to do is pass the keytab file to the executors and explicitly perform the KDC authentication within the executor block:
UserGroupInformation.loginUserFromKeytab("hdfs-user#MYCORP.NET",
"/home/hdfs-user/hdfs-user.keytab");
From the stack trace, it looks like Kafka is authenticated with SASL.
The supported SASL mechanisms are:
GSSAPI (Kerberos)
OAUTHBEARER
SCRAM
PLAIN
From your stack trace, Kafka is configured to use GSSAPI and you need to authenticate accordingly. You are authenticating for SSL and not for SASL. Check this link for the steps to authenticate; a rough sketch of the relevant client settings follows.
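As an illustration of that point only (the broker address, topic, and group id are placeholders, and a recent kafka-clients version is assumed), the SASL/GSSAPI side of a plain Kafka consumer configuration looks roughly like this; the same keys go into the kafkaParams map in the question, and the JAAS file referenced by java.security.auth.login.config supplies the principal and keytab:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KerberizedKafkaConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-host:9093");   // placeholder broker
        props.put("group.id", "example-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // SASL over TLS with the GSSAPI (Kerberos) mechanism, rather than plain SSL.
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "GSSAPI");
        props.put("sasl.kerberos.service.name", "kafka");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test"));
            consumer.poll(Duration.ofSeconds(5))
                    .forEach(record -> System.out.println(record.value()));
        }
    }
}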
My JBoss server starts without errors and deploys my EAR, but after some seconds it undeploys it:
16:03:34,762 DEBUG [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 2) Deployment scan of [U:\JBOSS_CFG\deployments] found update action
16:28:46,919 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS018559: Deployed "myear.ear" (runtime-name : "my-ear.ear")
[{
"operation" => "composite",
"address" => [],
"steps" => [
{
"operation" => "undeploy",
"address" => [("deployment" => "my-ear.ear")]
},
{
"operation" => "remove",
"address" => [("deployment" => "my-ear.ear")]
}
]
}]
16:28:51,014 INFO [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015877: Stopped deployment null (runtime-name: my-ejb.jar) in 25ms
16:28:51,093 INFO [org.jboss.as.server.deployment] (MSC service thread 1-1) JBAS015877: Stopped deployment my-ear.ear (runtime-name: my-ear.ear) in 82ms
16:28:51,186 INFO [org.jboss.as.server] (DeploymentScanner-threads - 1) JBAS018558: Undeployed "my-ear.ear" (runtime-name: "my-ear.ear")
I have used Quartz jobs in my web application. Everything was working fine when I used Quartz 1.6.5 with Teradata database version 13.10.
I faced frequent deadlock issues in the older Quartz version, so I upgraded to Quartz 2.2.1. Everything was still working fine when I used Quartz 2.2.1 with Teradata database version 13.10.
Later we faced a weird charset issue in Teradata 13.10, so we upgraded to Teradata 14.0.
Now we face a weird problem when using Quartz 2.2.1 with Teradata database version 14.0.
We got the following exception:
INFO >2014-03-20 10:35:34,541 com.mchange.v2.log.MLog[main]: MLog clients using log4j logging.
INFO >2014-03-20 10:35:35,007 com.mchange.v2.c3p0.C3P0Registry[main]: Initializing c3p0-0.9.1 [built 16-January-2007 14:46:42; debug? true; trace: 10]
INFO >2014-03-20 10:35:35,504 com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource[main]: Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, dataSourceName -> 30b5x8901q4ns4b1b241po|1b7bf86, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> com.objectriver.jdbc.driver.L2PDriver, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, identityToken -> 30b5x8901q4ns4b1b241po|1b7bf86, idleConnectionTestPeriod -> 0, initialPoolSize -> 3, jdbcUrl -> jdbc:teradata://10.219.82.10/database=T01DGF0_Q,CHARSET=UTF8,TMODE=TERA, lastAcquisitionFailureDefaultUser -> null, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 10, maxStatements -> 0, maxStatementsPerConnection -> 120, minPoolSize -> 1, numHelperThreads -> 3, numThreadsAwaitingCheckoutDefaultUser -> 0, preferredTestQuery -> null, properties -> {user=******, password=******}, propertyCycle -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false ]
WARN >2014-03-20 10:36:04,519 com.mchange.v2.resourcepool.BasicResourcePool[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0]: com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#18837f1 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:264)
at com.mchange.v2.c3p0.DriverManagerDataSource.driver(DriverManagerDataSource.java:224)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:135)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:182)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:171)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:137)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1014)
at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:32)
at com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1810)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
INFO >2014-03-20 10:36:05,903 com.ssc.faw.common.LogManager[GenCache]: GenCache.Worker(1) created
WARN >2014-03-20 10:36:06,657 com.mchange.v2.resourcepool.BasicResourcePool[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2]: com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#150b45a -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:264)
at com.mchange.v2.c3p0.DriverManagerDataSource.driver(DriverManagerDataSource.java:224)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:135)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:182)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:171)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:137)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1014)
at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:32)
at com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1810)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
WARN >2014-03-20 10:36:06,657 com.mchange.v2.resourcepool.BasicResourcePool[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1]: com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#170a650 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:264)
at com.mchange.v2.c3p0.DriverManagerDataSource.driver(DriverManagerDataSource.java:224)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:135)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:182)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:171)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:137)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1014)
at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:32)
at com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1810)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
Please find the quartz properties and jobs XML below.
quartz.properties
#==============================================================
# Registry Scheduler Properties
#==============================================================
org.quartz.scheduler.instanceName=Service_Dgf_Quartz_Scheduler
org.quartz.scheduler.makeSchedulerThreadDaemon = true
#============================================================================
# Cluster Configuration
#============================================================================
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 60000
org.quartz.jobStore.selectWithLockSQL=LOCKING ROW FOR WRITE SELECT * FROM {0}LOCKS WHERE LOCK_NAME = ?
org.quartz.scheduler.instanceId = AUTO
#==============================================================
# Configure ThreadPool
#==============================================================
org.quartz.threadPool.class=org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount=10
org.quartz.threadPool.threadPriority=5
#==============================================================
# Configure JobStore
#==============================================================
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = com.ssc.mfw.server.quartz.TeradataDelegate
#========================================================================================
# Configure JobInitializer Plugin
#========================================================================================
org.quartz.plugin.jobInitializer.wrapInUserTransaction = false
org.quartz.plugin.jobInitializer.class = org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin
org.quartz.plugin.jobInitializer.scanInterval = 0
org.quartz.plugin.jobInitializer.fileNames=quartz/service_dgf_jobs.xml
org.quartz.plugin.jobInitializer.failOnFileNotFound = true
#============================================================================
# Configure Plugins
#============================================================================
org.quartz.plugin.triggHistory.class = org.quartz.plugins.history.LoggingJobHistoryPlugin
#============================================================================
# Configure JobStore Additional Code
#============================================================================
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.dataSource = QuartzDS
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.dataSource.QuartzDS.connectionProvider.class=com.ssc.mfw.server.util.TeradataConnectionProvider
quartz_jobs.xml
<?xml version="1.0" encoding="UTF-8"?>
<job-scheduling-data
xmlns="http://www.quartz-scheduler.org/xml/JobSchedulingData"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.quartz-scheduler.org/xml/JobSchedulingData http://www.quartz-scheduler.org/xml/job_scheduling_data_1_8.xsd"
version="1.8">
<schedule>
<job>
<name>simpleJob</name>
<group>SimpleGroup</group>
<description>Mart Creation Job</description>
<job-class>com.ssc.mfw.server.job.VirtualMartCreationJob</job-class>
</job>
<trigger>
<!-- ServiceNotification will be fired every 5 minutes -->
<cron>
<name>simpleJobTrigger</name>
<job-name>simpleJob</job-name>
<job-group>SimpleGroup</job-group>
<cron-expression>0 0/5 * * * ?</cron-expression>
</cron>
</trigger>
</schedule>
<schedule>
<job>
<name>dashboardJob</name>
<group>dashboardGroup</group>
<description>Dashboard Job</description>
<job-class>com.ssc.mfw.server.job.DashBoardJob</job-class>
</job>
<trigger>
<!-- ServiceNotification will be fired every 12 hours -->
<cron>
<name>dashboardJobTrigger</name>
<job-name>dashboardJob</job-name>
<job-group>dashboardGroup</job-group>
<cron-expression>0 0 0/12 * * ?</cron-expression>
</cron>
</trigger>
</schedule>
<schedule>
<job>
<name>updateAsAtTmsJob</name>
<group>updateAsAtTmsGroup</group>
<description>Update DB Key Job</description>
<job-class>com.ssc.mfw.server.job.UpdateAsAtTmsJob</job-class>
</job>
<trigger>
<!-- ServiceNotification will be fired every 4 hours -->
<cron>
<name>updateAsAtTmsJobTrigger</name>
<job-name>updateAsAtTmsJob</job-name>
<job-group>updateAsAtTmsGroup</job-group>
<cron-expression>0 0 0/4 * * ?</cron-expression>
</cron>
</trigger>
</schedule>
</job-scheduling-data>
We are facing the above issue only when the Quartz database tables are empty. If the Quartz tables contain the job details, the jobs run fine.
Can anyone advise what is causing the issue? Am I doing anything wrong here?
Regards,
Suresh.
Your issue is pretty simple: JDBC cannot resolve the database URL you have provided to an appropriate Driver class. You can fix this very easily in several different ways, but unfortunately it's hard to give specific advice, because all of your JDBC configuration is hidden behind...
org.quartz.dataSource.QuartzDS.connectionProvider.class=com.ssc.mfw.server.util.TeradataConnectionProvider
In all likelihood, that class overrides org.quartz.utils.PoolingConnectionProvider, and when it does so, it provides String dbDriver as the first argument to its superconstructor. (That string may be hardcoded, or externally configured somehow.) You need to update that string to the JDBC driver class appropriate to your new version of Teradata. You will also need to ensure that the JDBC URL you are using, probably the second argument to the superconstructor of TeradataConnectionProvider, is a URL to your new database that is consistent with the dbDriver class you have supplied. Check the Teradata 14 JDBC documentation for the driver name and the compatible JDBC URL format.
(If your TeradataConnectionProvider implementation supplies its superconstructor with a Properties object, make sure that the key "driver" is bound to the JDBC driver class name and that the URL string is bound to the appropriate JDBC URL.)
(If you want more specific help, include the source to TeradataConnectionProvider.)
(Alternatively and more transparently, configure your DataSource directly using the config properties defined here.)
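And, purely as a hedged sketch (I don't have your source, so the class body, driver class, credentials, and pool size below are guesses based on your logs; the super call is modeled on the six-argument constructor of the c3p0-backed PoolingConnectionProvider in Quartz 2.x), such a provider often looks like this:

import java.sql.SQLException;

import org.quartz.SchedulerException;
import org.quartz.utils.PoolingConnectionProvider;

// Hypothetical reconstruction of a custom ConnectionProvider; take the driver class
// name and URL format from the Teradata 14 JDBC documentation.
public class TeradataConnectionProvider extends PoolingConnectionProvider {

    public TeradataConnectionProvider() throws SQLException, SchedulerException {
        super("com.teradata.jdbc.TeraDriver",                                            // dbDriver
              "jdbc:teradata://10.219.82.10/database=T01DGF0_Q,CHARSET=UTF8,TMODE=TERA", // dbURL
              "quartzUser",                                                              // dbUser
              "quartzPassword",                                                          // dbPassword
              10,                                                                        // maxConnections
              null);                                                                     // validation query (optional)
    }
}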
We are using another third-party jar to create connections. That third-party jar accepts the URL in a specific format, and we were sending the URL in the wrong format. Now we have fixed the issue and got it working. @Steve: Thanks for your time and support.