I have used Quartz jobs in my web application. Everything was working fine when I used Quartz 1.6.5 with Teradata database version 13.10.
I faced frequent deadlock issues with that older Quartz version, so I upgraded to Quartz 2.2.1. Everything was still working fine with Quartz 2.2.1 against Teradata database version 13.10.
Later we faced a weird charset issue in Teradata 13.10, so we upgraded to Teradata 14.0.
Now we face a weird problem when using Quartz 2.2.1 with Teradata database version 14.0.
We got the following exception:
INFO >2014-03-20 10:35:34,541 com.mchange.v2.log.MLog[main]: MLog clients using log4j logging.
INFO >2014-03-20 10:35:35,007 com.mchange.v2.c3p0.C3P0Registry[main]: Initializing c3p0-0.9.1 [built 16-January-2007 14:46:42; debug? true; trace: 10]
INFO >2014-03-20 10:35:35,504 com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource[main]: Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, dataSourceName -> 30b5x8901q4ns4b1b241po|1b7bf86, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> com.objectriver.jdbc.driver.L2PDriver, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, identityToken -> 30b5x8901q4ns4b1b241po|1b7bf86, idleConnectionTestPeriod -> 0, initialPoolSize -> 3, jdbcUrl -> jdbc:teradata://10.219.82.10/database=T01DGF0_Q,CHARSET=UTF8,TMODE=TERA, lastAcquisitionFailureDefaultUser -> null, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 10, maxStatements -> 0, maxStatementsPerConnection -> 120, minPoolSize -> 1, numHelperThreads -> 3, numThreadsAwaitingCheckoutDefaultUser -> 0, preferredTestQuery -> null, properties -> {user=******, password=******}, propertyCycle -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false ]
WARN >2014-03-20 10:36:04,519 com.mchange.v2.resourcepool.BasicResourcePool[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0]: com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#18837f1 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:264)
at com.mchange.v2.c3p0.DriverManagerDataSource.driver(DriverManagerDataSource.java:224)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:135)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:182)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:171)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:137)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1014)
at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:32)
at com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1810)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
INFO >2014-03-20 10:36:05,903 com.ssc.faw.common.LogManager[GenCache]: GenCache.Worker(1) created
WARN >2014-03-20 10:36:06,657 com.mchange.v2.resourcepool.BasicResourcePool[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2]: com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#150b45a -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:264)
at com.mchange.v2.c3p0.DriverManagerDataSource.driver(DriverManagerDataSource.java:224)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:135)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:182)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:171)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:137)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1014)
at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:32)
at com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1810)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
WARN >2014-03-20 10:36:06,657 com.mchange.v2.resourcepool.BasicResourcePool[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1]: com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#170a650 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:264)
at com.mchange.v2.c3p0.DriverManagerDataSource.driver(DriverManagerDataSource.java:224)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:135)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:182)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:171)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:137)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1014)
at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:32)
at com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1810)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
Please find the Quartz properties and jobs XML below.
quartz.properties
#==============================================================
# Registry Scheduler Properties
#==============================================================
org.quartz.scheduler.instanceName=Service_Dgf_Quartz_Scheduler
org.quartz.scheduler.makeSchedulerThreadDaemon = true
#============================================================================
# Cluster Configuration
#============================================================================
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 60000
org.quartz.jobStore.selectWithLockSQL=LOCKING ROW FOR WRITE SELECT * FROM {0}LOCKS WHERE LOCK_NAME = ?
org.quartz.scheduler.instanceId = AUTO
#==============================================================
# Configure ThreadPool
#==============================================================
org.quartz.threadPool.class=org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount=10
org.quartz.threadPool.threadPriority=5
#==============================================================
# Configure JobStore
#==============================================================
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = com.ssc.mfw.server.quartz.TeradataDelegate
#========================================================================================
# Configure JobInitializer Plugin
#========================================================================================
org.quartz.plugin.jobInitializer.wrapInUserTransaction = false
org.quartz.plugin.jobInitializer.class = org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin
org.quartz.plugin.jobInitializer.scanInterval = 0
org.quartz.plugin.jobInitializer.fileNames=quartz/service_dgf_jobs.xml
org.quartz.plugin.jobInitializer.failOnFileNotFound = true
#============================================================================
# Configure Plugins
#============================================================================
org.quartz.plugin.triggHistory.class = org.quartz.plugins.history.LoggingJobHistoryPlugin
#============================================================================
# Configure JobStore Additional Code
#============================================================================
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.dataSource = QuartzDS
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.dataSource.QuartzDS.connectionProvider.class=com.ssc.mfw.server.util.TeradataConnectionProvider
quartz_jobs.xml
<?xml version="1.0" encoding="UTF-8"?>
<job-scheduling-data
xmlns="http://www.quartz-scheduler.org/xml/JobSchedulingData"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.quartz-scheduler.org/xml/JobSchedulingData http://www.quartz-scheduler.org/xml/job_scheduling_data_1_8.xsd"
version="1.8">
<schedule>
<job>
<name>simpleJob</name>
<group>SimpleGroup</group>
<description>Mart Creation Job</description>
<job-class>com.ssc.mfw.server.job.VirtualMartCreationJob</job-class>
</job>
<trigger>
<!-- ServiceNotification will be fired every 5 minutes -->
<cron>
<name>simpleJobTrigger</name>
<job-name>simpleJob</job-name>
<job-group>SimpleGroup</job-group>
<cron-expression>0 0/5 * * * ?</cron-expression>
</cron>
</trigger>
</schedule>
<schedule>
<job>
<name>dashboardJob</name>
<group>dashboardGroup</group>
<description>Dashboard Job</description>
<job-class>com.ssc.mfw.server.job.DashBoardJob</job-class>
</job>
<trigger>
<!-- ServiceNotification will be fired every 12 hours -->
<cron>
<name>dashboardJobTrigger</name>
<job-name>dashboardJob</job-name>
<job-group>dashboardGroup</job-group>
<cron-expression>0 0 0/12 * * ?</cron-expression>
</cron>
</trigger>
</schedule>
<schedule>
<job>
<name>updateAsAtTmsJob</name>
<group>updateAsAtTmsGroup</group>
<description>Update DB Key Job</description>
<job-class>com.ssc.mfw.server.job.UpdateAsAtTmsJob</job-class>
</job>
<trigger>
<!-- ServiceNotification will be fired every 4 hours -->
<cron>
<name>updateAsAtTmsJobTrigger</name>
<job-name>updateAsAtTmsJob</job-name>
<job-group>updateAsAtTmsGroup</job-group>
<cron-expression>0 0 0/4 * * ?</cron-expression>
</cron>
</trigger>
</schedule>
</job-scheduling-data>
We face this issue only when the Quartz database tables are empty. If the Quartz tables already contain the job details, the jobs run fine.
Can anyone advise what is causing the issue? Am I doing anything wrong here?
Regards,
Suresh.
Your issue is pretty simple: JDBC cannot resolve the database URL you have provided to an appropriate Driver class. You can fix this very easily in several different ways, but unfortunately it's hard to give specific advice, because all of your JDBC configuration is hidden behind...
org.quartz.dataSource.QuartzDS.connectionProvider.class=com.ssc.mfw.server.util.TeradataConnectionProvider
In all likelihood, that class overrides org.quartz.utils.PoolingConnectionProvider, and when it does so, it provides String dbDriver as the first argument to its superconstructor. (That string may be hardcoded, or externally configured somehow.) You need to update that String to the JDBC driver class appropriate to your new version of Teradata. You will also need to ensure that the JDBC URL you are using, probably the second argument to the superconstructor of TeradataConnectionProvider, is a URL to your new database that is consistent with the dbDriver class you have supplied. Check the Teradata 14 JDBC documentation for the driver class name and the compatible JDBC URL format.
(If your TeradataConnectionProvider implementation supplies its superconstructor with a Properties object, make sure that the key "driver" is bound to the JDBC driver class name and that the key "URL" is bound to the appropriate JDBC URL.)
(If you want more specific help, include the source to TeradataConnectionProvider.)
(Alternatively and more transparently, configure your DataSource directly using the config properties defined here.)
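To make that concrete, here is a minimal sketch of the PoolingConnectionProvider pattern just described. Treat everything in it as an assumption: the driver class shown is the stock Teradata one while your log indicates a third-party driver (driverClass -> com.objectriver.jdbc.driver.L2PDriver), and only the URL is copied from your log output.

import java.sql.SQLException;
import org.quartz.SchedulerException;
import org.quartz.utils.PoolingConnectionProvider;

// Hypothetical reconstruction of the provider described above;
// your real class almost certainly differs in detail.
public class TeradataConnectionProvider extends PoolingConnectionProvider {
    public TeradataConnectionProvider() throws SQLException, SchedulerException {
        super("com.teradata.jdbc.TeraDriver",  // dbDriver: example only; use the class your driver jar actually provides
              "jdbc:teradata://10.219.82.10/database=T01DGF0_Q,CHARSET=UTF8,TMODE=TERA", // dbURL (copied from your log)
              "dbUser",      // placeholder
              "dbPassword",  // placeholder
              10,            // maxConnections
              null);         // dbValidationQuery (optional)
    }
}

The dbDriver class and the dbURL must accept each other; a mismatch between the two is exactly what DriverManager reports as java.sql.SQLException: No suitable driver.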
We are using another third-party jar to create connections. That third-party jar accepts the URL in a specific format, and we were sending the URL in the wrong format. Now we have fixed the issue and got it working. @Steve: Thanks for your time and support.
All of a sudden I got the following error continuously for a select query.
Unable to enlist connection in transaction: enlistResource returns 'false'
This appears to be thrown from DBCP (https://commons.apache.org/proper/commons-dbcp/jacoco/org.apache.commons.dbcp2.managed/TransactionContext.java.html)
Essentially, this code calls javax.transaction.Transaction.enlistResource.
Looking at the API, it says "Enlist the resource specified with the transaction associated with the target Transaction object. true if the resource was enlisted successfully; otherwise false."
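As far as I can tell, the check reduces to something like this (my own simplified sketch of the linked TransactionContext code, not the actual DBCP source):

import java.sql.SQLException;
import javax.transaction.Transaction;
import javax.transaction.xa.XAResource;

// Sketch: the pool asks the JTA transaction to adopt the connection's
// XAResource, and a 'false' return is converted into the SQLException I see.
class EnlistSketch {
    static void enlist(Transaction tx, XAResource xaRes) throws Exception {
        if (!tx.enlistResource(xaRes)) {
            throw new SQLException(
                "Unable to enlist connection in transaction: enlistResource returns 'false'.");
        }
    }
}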
I am still not clear why this exception is thrown.
What does it mean to enlist a resource in a transaction?
When does enlistResource return false? That is, when will the transaction be unable to enlist the resource?
How can I avoid this problem?
Tech Stack:
TomEE 7.0.4
JPA
MariaDB
Data source config:
<Resource id="jdbc/myDS" type="javax.sql.DataSource">
dataSourceCreator = tomcat
jtaManaged = true
driverClassName = ${jdbc.driver}
url = ${jdbc.url}
username = ${jdbc.username}
password = ${jdbc.password}
initialSize = 5
maxActive = 100
maxIdle = 10
minIdle = 5
maxWait = 30000
validationQuery = SELECT 1
testOnBorrow = false
testOnReturn = false
testWhileIdle = true
timeBetweenEvictionRunsMillis = 1800000
numTestsPerEvictionRun = 2
minEvictableIdleTimeMillis = 1800000
accessToUnderlyingConnectionAllowed = false
</Resource>
Stack Trace:
Caused by: org.apache.openjpa.lib.jdbc.ReportingSQLException: Unable to enlist connection in transaction: enlistResource returns 'false'.
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:218) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:198) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.access$000(LoggingConnectionDecorator.java:58) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator$LoggingConnection.prepareStatement(LoggingConnectionDecorator.java:250) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.lib.jdbc.DelegatingConnection.prepareStatement(DelegatingConnection.java:133) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.lib.jdbc.ConfiguringConnectionDecorator$ConfiguringConnection.prepareStatement(ConfiguringConnectionDecorator.java:139) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.lib.jdbc.DelegatingConnection.prepareStatement(DelegatingConnection.java:133) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.jdbc.kernel.JDBCStoreManager$RefCountConnection.prepareStatement(JDBCStoreManager.java:1642) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.lib.jdbc.DelegatingConnection.prepareStatement(DelegatingConnection.java:122) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.jdbc.sql.SQLBuffer.prepareStatement(SQLBuffer.java:513) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.jdbc.sql.SQLBuffer.prepareStatement(SQLBuffer.java:493) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.jdbc.sql.SelectImpl.prepareStatement(SelectImpl.java:480) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.jdbc.sql.SelectImpl.execute(SelectImpl.java:421) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.jdbc.sql.SelectImpl.execute(SelectImpl.java:392) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.jdbc.sql.LogicalUnion$UnionSelect.execute(LogicalUnion.java:427) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.jdbc.sql.LogicalUnion.execute(LogicalUnion.java:230) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.jdbc.sql.LogicalUnion.execute(LogicalUnion.java:220) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.jdbc.kernel.SelectResultObjectProvider.open(SelectResultObjectProvider.java:93) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.kernel.QueryImpl$PackingResultObjectProvider.open(QueryImpl.java:2075) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.lib.rop.EagerResultList.<init>(EagerResultList.java:33) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.kernel.QueryImpl.toResult(QueryImpl.java:1257) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:1013) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:869) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:800) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.kernel.DelegatingQuery.execute(DelegatingQuery.java:541) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.persistence.QueryImpl.execute(QueryImpl.java:274) ~[openjpa-2.4.2.jar:2.4.2]
at org.apache.openjpa.persistence.QueryImpl.getResultList(QueryImpl.java:290) ~[openjpa-2.4.2.jar:2.4.2]
Just had the same issue and dug a little deeper into my log file. Right before the first appearance of
enlistResource returns 'false'
I found a
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
Checking further up, I found a
WARNING [https-jsse-nio-8443-exec-1] org.apache.geronimo.transaction.manager.TransactionImpl.enlistResource Unable to enlist XAResource org.apache.openejb.resource.jdbc.managed.local.LocalXAResource#2ce72e5a, errorCode: 0
javax.transaction.xa.XAException: Count not turn off auto commit for a XA transaction
All this led me to an old post from the tomee-openejb mailing list. My tomee.xml configuration at that point looked like this:
<Resource id="[Resourcename]" type="javax.sql.DataSource">
jdbcDriver = com.mysql.jdbc.Driver
jdbcUrl = jdbc:mysql://localhost:3306/[Databasename]
userName = [Username]
password = [Password]
</Resource>
Now I changed it to the following config:
<Resource id="[Name]" type="javax.sql.DataSource">
jdbcDriver = com.mysql.jdbc.Driver
jdbcUrl = jdbc:mysql://localhost:3306/[Databasename]
jtaManaged = true
username = [Username]
password = [Password]
defaultAutoCommit = false
testOnReturn = true
testWhileIdle = true
timeBetweenEvictionRunsMillis = 60
initialSize = 2
minIdle = 2
validationQuery = "select 1"
</Resource>
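Looking back, the settings that appear to matter for this error are jtaManaged = true and defaultAutoCommit = false, which lines up with the XAException above about not being able to turn off auto commit; the remaining properties are ordinary pool tuning.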
Fingers crossed that it will work without any further issues. If I don't update this answer, it just kept working.
Further information:
TomEE DataSource Configuration
I have a problem when running a unit test using specs2 with scalikejdbc 2.4.1 and scalikejdbc-config 2.4.1.
Here is my code:
object PostDAOImplSpec extends Specification{
sequential
DBs.setupAll
implicit val session = AutoSession
"resolveAll shoudn't have any syntax error" in new AutoRollback {
val postIds = DB readOnly { implicit session =>
sql"select post_id from posts".map(_.long(1)).list.apply()
}
}
DBs.closeAll()
}
Here are the logs:
09:11:16.931 [main] DEBUG scalikejdbc.ConnectionPool$ - Registered connection pool : ConnectionPool(url:jdbc:mysql://localhost/bbs, user:root) using factory : <default>
09:11:17.130 [main] DEBUG scalikejdbc.ConnectionPool$ - Registered connection pool : ConnectionPool(url:jdbc:mysql://localhost/bbs, user:root) using factory : <default>
java.lang.IllegalStateException: Connection pool is not yet initialized.(name:'default)
java.lang.IllegalStateException: Connection pool is not yet initialized.(name:'default)
As you can see from the first two lines, scalikejdbc found the database configuration, but it can't initialize the connection pool.
Do you have any idea? Thanks.
The DBs.closeAll() call closes your connection pools before your tests actually run.
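For context: specs2 evaluates the body of the specification at construction time and runs the examples afterwards, so a bare DBs.closeAll() in the object body executes before any example does. Assuming the mutable specification style shown above, wrapping the call in a step, e.g. step(DBs.closeAll()) as the last line of the spec, defers the pool shutdown until the examples have finished.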
I have used Quartz for scheduling my jobs. When I use the RAMJobStore, the scheduler starts and triggers successfully, but when I use the JDBC store it fails to start. Can you please guide me? I've placed the artefacts below.
# Default Properties file for use by StdSchedulerFactory
# to create a Quartz Scheduler Instance, if a different
# properties file is not explicitly specified.
#
org.quartz.scheduler.instanceName: DefaultQuartzScheduler
org.quartz.scheduler.rmi.export: false
org.quartz.scheduler.rmi.proxy: false
org.quartz.scheduler.wrapJobExecutionInUserTransaction: false
org.quartz.threadPool.class: org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount: 2
org.quartz.threadPool.threadPriority: 5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread: true
org.quartz.jobStore.misfireThreshold: 60000
#org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore
org.quartz.jobStore.class: org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.dataSource = myDS
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.dataSource.myDS.driver = org.hsqldb.jdbc.JDBCDriver
org.quartz.dataSource.myDS.URL = jdbc:hsqldb:file:x\\myds
org.quartz.dataSource.myDS.user = SA
org.quartz.dataSource.myDS.password = sa
org.quartz.dataSource.myDS.maxConnections = 30
These are my logs...
2014-01-17 11:36:42 INFO MLog:80 - MLog clients using log4j logging.
2014-01-17 11:36:42 INFO C3P0Registry:204 - Initializing c3p0-0.9.1.1 [built 15-March-2007 01:32:31; debug? true; trace: 10]
2014-01-17 11:36:42 INFO StdSchedulerFactory:1184 - Using default implementation for ThreadExecutor
2014-01-17 11:36:42 INFO SimpleThreadPool:268 - Job execution threads will use class loader of thread: main
2014-01-17 11:36:42 INFO SchedulerSignalerImpl:61 - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
2014-01-17 11:36:42 INFO QuartzScheduler:240 - Quartz Scheduler v.2.2.1 created.
2014-01-17 11:36:42 INFO JobStoreTX:670 - Using thread monitor-based data access locking (synchronization).
2014-01-17 11:36:42 INFO JobStoreTX:59 - JobStoreTX initialized.
2014-01-17 11:36:42 INFO QuartzScheduler:305 - Scheduler meta-data: Quartz Scheduler (v2.2.1) 'DefaultQuartzScheduler' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 2 threads.
Using job-store 'org.quartz.impl.jdbcjobstore.JobStoreTX' - which supports persistence. and is not clustered.
2014-01-17 11:36:42 INFO StdSchedulerFactory:1339 - Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
2014-01-17 11:36:42 INFO StdSchedulerFactory:1343 - Quartz scheduler version: 2.2.1
2014-01-17 11:36:42 INFO AbstractPoolBackedDataSource:462 - Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, dataSourceName -> 2yhpp38z182altw1uxr4l9|6df6f81b, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> org.hsqldb.jdbc.JDBCDriver, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, identityToken -> 2yhpp38z182altw1uxr4l9|6df6f81b, idleConnectionTestPeriod -> 0, initialPoolSize -> 3, jdbcUrl -> jdbc:hsqldb:file:x\database\myds, lastAcquisitionFailureDefaultUser -> null, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 30, maxStatements -> 0, maxStatementsPerConnection -> 120, minPoolSize -> 1, numHelperThreads -> 3, numThreadsAwaitingCheckoutDefaultUser -> 0, preferredTestQuery -> null, properties -> {user=******, password=******}, propertyCycle -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false ]
Currently the database tables are empty...
Your suggestions are needed...
OK, I think it is not an error. I have the same log info in my project (NOT STARTED), yet when I check my tables I'm able to see my jobs and triggers. Try adding the database tables manually and scheduling a job in your project; afterwards you will see them in the database.
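For what it's worth, that "NOT STARTED" text is part of the scheduler meta-data Quartz prints when the scheduler is created, before start() has been called, so on its own it is not a failure. A minimal sketch of scheduling a job against the properties above and then starting the scheduler (MyJob is a placeholder for your own job implementation):

import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class QuartzJdbcStoreDemo {

    // Placeholder job; replace with your real org.quartz.Job implementation
    public static class MyJob implements Job {
        public void execute(JobExecutionContext context) {
            System.out.println("MyJob executed");
        }
    }

    public static void main(String[] args) throws SchedulerException {
        // Reads quartz.properties from the classpath (the file shown above)
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        JobDetail job = JobBuilder.newJob(MyJob.class)
                .withIdentity("myJob", "myGroup")
                .build();

        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("myTrigger", "myGroup")
                .startNow()
                .withSchedule(SimpleScheduleBuilder.repeatSecondlyForever(30))
                .build();

        scheduler.scheduleJob(job, trigger); // persists rows into the QRTZ_ tables
        scheduler.start();                   // leaves standby; triggers begin firing
    }
}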
I'm using hadoop distcp -update to copy a directory from one HDFS cluster to a different one.
Sometimes (pretty often) I get this kind of exception:
13/07/03 00:20:03 INFO tools.DistCp: srcPaths=[hdfs://HDFS1:51175/directory_X]
13/07/03 00:20:03 INFO tools.DistCp: destPath=hdfs://HDFS2:51175/directory_X
13/07/03 00:25:27 WARN hdfs.DFSClient: src=directory_X, datanodes[0].getName()=***.***.***.***:8550
java.net.SocketTimeoutException: 69000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/***.***.***.***:35872 remote=/***.***.***.***:8550]
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:116)
at java.io.DataInputStream.readShort(DataInputStream.java:295)
at org.apache.hadoop.hdfs.DFSClient.getFileChecksum(DFSClient.java:885)
at org.apache.hadoop.hdfs.DFSClient.getFileChecksum(DFSClient.java:822)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:541)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:53)
at org.apache.hadoop.tools.DistCp.sameFile(DistCp.java:1230)
at org.apache.hadoop.tools.DistCp.setup(DistCp.java:1110)
at org.apache.hadoop.tools.DistCp.copy(DistCp.java:666)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)
13/07/03 00:26:40 INFO tools.DistCp: sourcePathsCount=8542
13/07/03 00:26:40 INFO tools.DistCp: filesToCopyCount=0
13/07/03 00:26:40 INFO tools.DistCp: bytesToCopyCount=0.0
Does anyone have any idea what it could be?
Using Hadoop 0.20.205.0
I suggest increasing both timeouts: dfs.socket.timeout for the read timeout, and dfs.datanode.socket.write.timeout for the write timeout.
Default:
// Timeouts for communicating with DataNode for streaming writes/reads
public static int READ_TIMEOUT = 60 * 1000; // here, 69000 millis > 60000
public static int WRITE_TIMEOUT = 8 * 60 * 1000;
Add the following to your hadoop-site.xml or hdfs-site.xml:
<property>
<name>dfs.datanode.socket.write.timeout</name>
<value>3000000</value>
</property>
<property>
<name>dfs.socket.timeout</name>
<value>3000000</value>
</property>
Hope that helps.
I think you also want to set dfs.client.socket-timeout.
Here is why: the old property name has been deprecated in favor of a new one.
Deprecated property name -> New property name
dfs.socket.timeout -> dfs.client.socket-timeout
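So, mirroring the snippet above (same value, just the new property name):

<property>
<name>dfs.client.socket-timeout</name>
<value>3000000</value>
</property>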
The development part of the Shark/Spark wiki is really brief, so I tried to put together some code in an effort to programmatically query a table. Here it is...
object Test extends App {
val master = "spark://localhost.localdomain:8084"
val jobName = "scratch"
val sparkHome = "/home/shengc/Downloads/software/spark-0.6.1"
val executorEnvVars = Map[String, String](
"SPARK_MEM" -> "1g",
"SPARK_CLASSPATH" -> "",
"HADOOP_HOME" -> "/home/shengc/Downloads/software/hadoop-0.20.205.0",
"JAVA_HOME" -> "/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64",
"HIVE_HOME" -> "/home/shengc/Downloads/software/hive-0.9.0-bin"
)
val sc = new shark.SharkContext(master, jobName, sparkHome, Nil, executorEnvVars)
sc.sql2console("create table src")
sc.sql2console("load data local inpath '/home/shengc/Downloads/software/hive-0.9.0-bin/examples/files/kv1.txt' into table src")
sc.sql2console("select count(1) from src")
}
I can create table src and load data into it fine, but the last query threw an NPE and failed. Here is the output...
13/01/06 17:33:20 INFO execution.SparkTask: Executing shark.execution.SparkTask
13/01/06 17:33:20 INFO shark.SharkEnv: Initializing SharkEnv
13/01/06 17:33:20 INFO execution.SparkTask: Adding jar file:///home/shengc/workspace/shark/hive/lib/hive-builtins-0.9.0.jar
java.lang.NullPointerException
at shark.execution.SparkTask$$anonfun$execute$5.apply(SparkTask.scala:58)
at shark.execution.SparkTask$$anonfun$execute$5.apply(SparkTask.scala:55)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:38)
at shark.execution.SparkTask.execute(SparkTask.scala:55)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:134)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1326)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1118)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:951)
at shark.SharkContext.sql(SharkContext.scala:58)
at shark.SharkContext.sql2console(SharkContext.scala:84)
at Test$delayedInit$body.apply(Test.scala:20)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:60)
at scala.App$$anonfun$main$1.apply(App.scala:60)
at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
at scala.collection.immutable.List.foreach(List.scala:76)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:30)
at scala.App$class.main(App.scala:60)
at Test$.main(Test.scala:4)
at Test.main(Test.scala)
FAILED: Execution Error, return code -101 from shark.execution.SparkTask13/01/06 17:33:20 ERROR ql.Driver: FAILED: Execution Error, return code -101 from shark.execution.SparkTask
13/01/06 17:33:20 INFO ql.Driver: </PERFLOG method=Driver.execute start=1357511600030 end=1357511600054 duration=24>
13/01/06 17:33:20 INFO ql.Driver: <PERFLOG method=releaseLocks>
13/01/06 17:33:20 INFO ql.Driver: </PERFLOG method=releaseLocks start=1357511600054 end=1357511600054 duration=0>
However, I can query the src table by typing select * from src within the shell invoked by bin/shark-withinfo.
You might ask how about trying that SQL in the shell triggered by "bin/shark-shell". Well, I cannot get into that shell. Here is the error I came across...
https://groups.google.com/forum/?fromgroups=#!topic/shark-users/glZzrUfabGc
[EDIT 1]: this NPE seems to result from SharkEnv.sc not having been set, so I added
shark.SharkEnv.sc = sc
right before any sql2console operations are executed. It then complained with a ClassNotFoundException for scala.tools.nsc, so I manually put scala-compiler on the classpath. After that, the code complained with another ClassNotFoundException, which I cannot figure out how to fix, since I did put the shark jar on the classpath.
13/01/06 18:09:34 INFO cluster.TaskSetManager: Lost TID 1 (task 1.0:1)
13/01/06 18:09:34 INFO cluster.TaskSetManager: Loss was due to java.lang.ClassNotFoundException: shark.execution.TableScanOperator$$anonfun$preprocessRdd$3
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
[EDIT 2]: OK, I figured out another piece of code which does what I want, by following exactly how Shark's source code initializes the interactive REPL.
System.setProperty("MASTER", "spark://localhost.localdomain:8084")
System.setProperty("SPARK_MEM", "1g")
System.setProperty("SPARK_CLASSPATH", "")
System.setProperty("HADOOP_HOME", "/home/shengc/Downloads/software/hadoop-0.20.205.0")
System.setProperty("JAVA_HOME", "/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64")
System.setProperty("HIVE_HOME", "/home/shengc/Downloads/software/hive-0.9.0-bin")
System.setProperty("SCALA_HOME", "/home/shengc/Downloads/software/scala-2.9.2")
shark.SharkEnv.initWithSharkContext("scratch")
val sc = shark.SharkEnv.sc.asInstanceOf[shark.SharkContext]
sc.sql2console("select * from src")
This is ugly, but at least it works. Any comments on how to write a more robust piece of code are welcome!
For whoever wishes to operate on Shark programmatically, please note that all Hive and Shark jars must be on your CLASSPATH, and the Scala compiler has to be on your classpath too. The other important thing is that Hadoop's conf directory should be on the classpath as well.
I believe the issue is that your SharkEnv is not initialized.
I'm using Shark 0.9.0 (but I believe you have to initialize SharkEnv in 0.6.1 too), and my SharkEnv is initialized in the following way:
// SharkContext
val sc = new SharkContext(master,
jobName,
System.getenv("SPARK_HOME"),
Nil,
executorEnvVar)
// Initialize SharkEnv
SharkEnv.sc = sc
// create and populate table
sc.runSql("CREATE TABLE src(key INT, value STRING)")
sc.runSql("LOAD DATA LOCAL INPATH '${env:HIVE_HOME}/examples/files/kv1.txt' INTO TABLE src")
// print result to stdout
println(sc.runSql("select * from src"))
println(sc.runSql("select count(*) from src"))
Also, try to query data from the src table without aggregate functions (comment out the line with "select count(*) ..."). I had a similar issue where the plain data query was OK but count(*) threw an exception; in my case it was fixed by adding mysql-connector-java.jar to yarn.application.classpath.