value too large in quartz in jboss - quartz-scheduler

I am getting the following error in the server log. I would like to know which query (insert or update) is making changes to the column of a Quartz table.
ERROR [org.quartz.impl.jdbcjobstore.JobStoreTX] MisfireHandler: Error handling misfires: Couldn't store trigger: ORA-01438: value larger than specified precision allowed for this column
org.quartz.JobPersistenceException: Couldn't store trigger: ORA-01438: value larger than specified precision allowed for this column [See nested exception: org.quartz.JobPersistenceException: Couldn't store trigger: ORA-01438: value larger than specified precision allowed for this column [See nested exception: java.sql.SQLException: ORA-01438: value larger than specified precision allowed for this column ]]
    at org.quartz.impl.jdbcjobstore.JobStoreTX.doRecoverMisfires(JobStoreTX.java:1354)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.manage(JobStoreSupport.java:2449)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.run(JobStoreSupport.java:2468)
* Nested Exception (Underlying Cause) ---------------
org.quartz.JobPersistenceException: Couldn't store trigger: ORA-01438: value larger than specified precision allowed for this column [See nested exception: java.sql.SQLException: ORA-01438: value larger than specified precision allowed for this column ]
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.storeTrigger(JobStoreSupport.java:964)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.recoverMisfiredJobs(JobStoreSupport.java:780)
    at org.quartz.impl.jdbcjobstore.JobStoreTX.doRecoverMisfires(JobStoreTX.java:1352)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.manage(JobStoreSupport.java:2449)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.run(JobStoreSupport.java:2468)
* Nested Exception (Underlying Cause) ---------------
java.sql.SQLException: ORA-01438: value larger than specified precision allowed for this column
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
    at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:743)
    at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:216)
    at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:955)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1168)
    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3285)
    at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3368)
    at org.jboss.resource.adapter.jdbc.WrappedPreparedStatement.executeUpdate(WrappedPreparedStatement.java:365)
    at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.updateSimpleTrigger(StdJDBCDelegate.java:1440)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.storeTrigger(JobStoreSupport.java:942)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.recoverMisfiredJobs(JobStoreSupport.java:780)
    at org.quartz.impl.jdbcjobstore.JobStoreTX.doRecoverMisfires(JobStoreTX.java:1352)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.manage(JobStoreSupport.java:2449)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.run(JobStoreSupport.java:2468)
Could anyone please help with this? Is there any configuration parameter to see the insert/update queries in the log?

I have seen this error when the TIMES_TRIGGERED column in the SIMPLE_TRIGGERS table grows too large to be stored in the column, which is normally defined as NUMBER(7) (i.e. a maximum value of 9999999).
Normally, you can just set this column to 0 and the error will go away (until TIMES_TRIGGERED gets big enough again). It seems that Quartz looks at the trigger's start time and the configured frequency, determines what the value should be, and overrides your update, though in my experience it has set it to something well below the max.
See "Couldn't store trigger" for more details.
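For example, a minimal sketch of that reset, assuming Quartz's default QRTZ_ table prefix (adjust the prefix, or add a WHERE clause for a specific trigger, to match your schema):
-- Hedged sketch: reset the overflowing counter so it fits NUMBER(7) again
UPDATE QRTZ_SIMPLE_TRIGGERS
SET TIMES_TRIGGERED = 0;
COMMIT;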

Related

Azure Data Factory DataFlow Error: Key partitioning does not allow computed columns

We have a generic dataflow that works for many tables; the schema is detected at runtime.
We are trying to add a Partition Column for the Ingestion or Sink portion of the delta.
We are getting this error:
Azure Data Factory DataFlow Error: Key partitioning does not allow computed columns
Job failed due to reason: at Source 'Ingestion'(Line 7/Col 0): Key partitioning does not allow computed columns
Can we pass the partition column as a parameter to a generic dataflow?
I tried your scenario and got a similar error.
A limitation of the key partition method is that we cannot apply any calculation to the partition column while declaring it. Instead, the column must be created in advance, either using a derived column or read in from the source.
To resolve this, you can try the following steps:
First, I created a pipeline parameter with datatype string and gave the column name as its value.
Then click on the Dataflow >> go to Parameters >> in the parameter's value select Pipeline expression >> and pass the parameter created above.
OUTPUT:
The dataflow takes it as the partition key column and partitions the data accordingly.
Reference: How To Use Data Flow Partitions To Optimize Spark Performance In Data Factory

IBM DataStage : ODBC_Connector_0: Schema reconciliation detected a size mismatch for column

I have a job where I load data from the source to the target database. I set "Fail on size mismatch" to "No", but I still get the error: ODBC_Connector_0: Schema reconciliation detected a size mismatch for column plafon. When reading database column DECIMAL(15,2) into column INT32, truncation, loss of precision, data corruption or padding can occur. In a previous job this trick worked, but somehow it does not work with this new job. Is this workaround the only way to fix it?
You are converting data implicitly - and in your case even between data types - and this can cause a lot of trouble, so the message is right.
Suggested solution:
Convert explicitly - either in SQL with CAST or within DataStage.
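For instance, a minimal sketch of an explicit cast in the source SQL, assuming a hypothetical source table named source_table; the loss of the fractional part then happens explicitly rather than through the connector's implicit reconciliation:
-- Hedged sketch: cast the DECIMAL(15,2) column explicitly before it reaches the INT32 target
SELECT CAST(plafon AS INTEGER) AS plafon
FROM source_table;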

Talend - tCacheIn / tCacheOut throws null pointer exception

I am working on a big data Spark requirement where I have to use tCacheOut and tCacheIn. The attached job screenshot works fine, but in one scenario, when tCacheOut has nothing to store (i.e. the filter does not let any row flow to the next component), it throws a null pointer exception.
I know there are alternatives, like writing the output to disk and reading it again in the next step, but I don't want to do that because disk reads and writes are always an overhead.
How can we handle the null pointer exception in this case?
Suppose tFileInputDelimited has 2 columns, name and age. In tFilterRow I have two conditions: name not equal to NULL, AND name equals "John". I have connected the tFilterRow component to tCacheOut with the 'filter' flow.
Now connect the tFilterRow to tDie or tWarn with the 'reject' flow.

Hidden records which exceed numeric limits in Postgresql DB

I have a PG DB containing mostly numeric data, where accuracy is important, so fields are typically NUMERIC(7,3) because the largest expected non-error value is around 1300. I am importing some data that contains records exceeding the 10^4 limit, due to sensor faults, so I have applied an error code to these values using:
UPDATE schema.table1 SET fieldname1 = 3333 WHERE fieldname1 > 3333;
(i.e. if an operator sees 3333 in the data, it is known to be an erroneous value because it is outside the normally expected range, and it is known to be human-applied because such a sequence is unlikely in nature).
The success of this query was confirmed by:
SELECT MAX(fieldname1) FROM schema.table1;
which returns 3333 for all fields that previously had values exceeding 3333.
This initial tidying of the data was done in a temporary table with the fields defined as NUMERIC(10,3), and I now need to import the data into the main table, using:
INSERT INTO schema.table2 (DateTime, Fieldname1)
SELECT DISTINCT ON (DateTime)
    DateTime,
    Fieldname1
FROM schema.table1
WHERE DateTime IS NOT NULL;
But I get an error message saying 'datatype NUMERIC(7,3) cannot contain values exceeding 10^4' (or words to that effect).
As an experiment I tried redefining the datatypes in the temporary table as NUMERIC(7,3). This worked for most of the fields, but for a few fields I got the same 'cannot contain values exceeding 10^4' message, implying that there is still data >10^4 despite MAX(fieldname1) returning 3333.
I tried VACUUM and ANALYZE to clean up the tables; no cigar.
Is there a known issue here? Am I going about this the wrong way?
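A hedged diagnostic sketch, reusing the column names above: MAX() only checks the upper end of the range, so a value such as a large negative sensor fault could still overflow NUMERIC(7,3) without showing up in that check.
-- Hedged sketch: find any remaining rows whose absolute value cannot fit NUMERIC(7,3)
SELECT DateTime, fieldname1
FROM schema.table1
WHERE ABS(fieldname1) >= 10000;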

SQL Bulk copy when doing write to server throws exception

I am trying to do a bulk copy using SqlBulkCopy.WriteToServer. It gives me the exception: Cannot create a row of size 8635 which is greater than the allowable maximum row size of 8060. The statement has been terminated.
Can someone please explain to me what this is?
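The 8060-byte figure is SQL Server's maximum in-row size for a data page. A hedged sketch for checking the declared column widths of the destination table (the table name below is a placeholder):
-- Hedged sketch: list the destination table's columns by declared maximum length
-- (max_length is -1 for (max) types such as varchar(max)/nvarchar(max))
SELECT c.name AS column_name, t.name AS type_name, c.max_length
FROM sys.columns AS c
JOIN sys.types AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.TargetTable') -- placeholder destination table
ORDER BY c.max_length DESC;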