Seeing an exception in the Quartz scheduler causing jobs not to run (persistence)

We are using Quartz 2.2.1 and are seeing the following exception at customer sites and on our own site. The Quartz tables seem to be corrupted.
Has anyone seen this, or does anyone know how to fix it?
Update:
2017-04-18 00:01:38,685 ERROR org.quartz.impl.jdbcjobstore.JobStoreTX MisfireHandler: Error handling misfires: Couldn't retrieve trigger: No record found for selection of Trigger with key: 'DEFAULT.Delete PS Audit logs' and statement: SELECT * FROM QRTZ_SIMPLE_TRIGGERS WHERE SCHED_NAME = 'MFTScheduler' AND TRIGGER_NAME = ? AND TRIGGER_GROUP = ?
org.quartz.JobPersistenceException: Couldn't retrieve trigger: No record found for selection of Trigger with key: 'DEFAULT.Delete PS Audit logs' and statement: SELECT * FROM QRTZ_SIMPLE_TRIGGERS WHERE SCHED_NAME = 'MFTScheduler' AND TRIGGER_NAME = ? AND TRIGGER_GROUP = ? [See nested exception: java.lang.IllegalStateException: No record found for selection of Trigger with key: 'DEFAULT.Delete PS Audit logs' and statement: SELECT * FROM QRTZ_SIMPLE_TRIGGERS WHERE SCHED_NAME = 'MFTScheduler' AND TRIGGER_NAME = ? AND TRIGGER_GROUP = ?]
at org.quartz.impl.jdbcjobstore.JobStoreSupport.retrieveTrigger(JobStoreSupport.java:1533)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.recoverMisfiredJobs(JobStoreSupport.java:979)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.doRecoverMisfires(JobStoreSupport.java:3187)
at org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.manage(JobStoreSupport.java:3935)
at org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.run(JobStoreSupport.java:3956)
Caused by: java.lang.IllegalStateException: No record found for selection of Trigger with key: 'DEFAULT.Delete PS Audit logs' and statement: SELECT * FROM QRTZ_SIMPLE_TRIGGERS WHERE SCHED_NAME = 'MFTScheduler' AND TRIGGER_NAME = ? AND TRIGGER_GROUP = ?
at org.quartz.impl.jdbcjobstore.SimplePropertiesTriggerPersistenceDelegateSupport.loadExtendedTriggerProperties(SimplePropertiesTriggerPersistenceDelegateSupport.java:157)
at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.selectTrigger(StdJDBCDelegate.java:1819)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.retrieveTrigger(JobStoreSupport.java:1531)
... 4 more

I had the same problem.
I fixed it by removing the corrupted records from the database.
Adapted to your case:
DELETE FROM QRTZ_SIMPLE_TRIGGERS WHERE SCHED_NAME = 'MFTScheduler' AND TRIGGER_NAME = 'Delete PS Audit logs' AND TRIGGER_GROUP = 'DEFAULT';
DELETE FROM QRTZ_TRIGGERS WHERE SCHED_NAME = 'MFTScheduler' AND TRIGGER_NAME = 'Delete PS Audit logs' AND TRIGGER_GROUP = 'DEFAULT';
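For anyone hitting this pattern on other triggers: the corruption is an orphaned row in QRTZ_TRIGGERS whose type-specific detail row has gone missing. A query along these lines (a sketch, assuming the standard Quartz 2.x schema and the default table prefix) lists the orphaned simple triggers before you delete anything:
-- Sketch: QRTZ_TRIGGERS rows of type SIMPLE with no matching detail
-- row in QRTZ_SIMPLE_TRIGGERS (standard Quartz 2.x schema assumed)
SELECT t.TRIGGER_NAME, t.TRIGGER_GROUP
FROM QRTZ_TRIGGERS t
LEFT JOIN QRTZ_SIMPLE_TRIGGERS s
    ON s.SCHED_NAME = t.SCHED_NAME
    AND s.TRIGGER_NAME = t.TRIGGER_NAME
    AND s.TRIGGER_GROUP = t.TRIGGER_GROUP
WHERE t.SCHED_NAME = 'MFTScheduler'
    AND t.TRIGGER_TYPE = 'SIMPLE'
    AND s.TRIGGER_NAME IS NULL;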

Related

Delete duplicate records in Postgres, ctid does not exist

I have a table with duplicate records.
Upon googling, I stumbled upon this question: How to delete duplicate rows without unique identifier.
So I followed the solution there, using the ctid.
However, running it returns an error for me:
SQL Error [42703]: ERROR: column "ctid" does not exist in t2
This is the code I ran:
update my_schema.my_table
set control_flag = 'deleted'
where customer_id = 'A001'
and date between '2019-10-30' and '2020-09-29'
and mrr = 69
and exists (select ctid, 1
            from my_schema.my_table t2
            where t2.gateway_id = my_schema.my_table.gateway_id
            and t2.gateway_item_id = my_schema.my_table.gateway_item_id
            and t2.customer_id = my_schema.my_table.customer_id
            and t2.mrr = my_schema.my_table.mrr
            and t2.date = my_schema.my_table.date
            and t2.ctid > my_schema.my_table.ctid
);
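For reference, the ctid-based de-duplication pattern from the linked question looks like the sketch below (column names mirror the question). Note that ctid is a system column available only on regular Postgres tables; if my_schema.my_table is actually a view, or the database is a Postgres-compatible engine that lacks ctid, both this sketch and the query above will fail with the same 42703 error:
-- keep the row with the largest ctid in each duplicate group
DELETE FROM my_schema.my_table t1
USING my_schema.my_table t2
WHERE t1.ctid < t2.ctid
AND t1.gateway_id = t2.gateway_id
AND t1.gateway_item_id = t2.gateway_item_id
AND t1.customer_id = t2.customer_id
AND t1.mrr = t2.mrr
AND t1.date = t2.date;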

Apache Flink: Handle bad Avro records in confluent-avro from Kafka

I've created a table using Flink's Table API.
CREATE TABLE recommendations (
...
) WITH (
'connector' = 'kafka',
'topic' = 'my_kafka_topic',
'properties.bootstrap.servers' = 'localhost:9092',
'properties.group.id' = 'testGroup',
'properties.security.protocol' = 'SASL_PLAINTEXT',
'properties.sasl.kerberos.service.name' = 'kafka',
'scan.startup.mode' = 'latest-offset',
'value.format' = 'avro-confluent',
'value.avro-confluent.url' = 'http://schema-registry-address',
'value.fields-include' = 'EXCEPT_KEY'
);
When running the SQL to view the records, I'm getting:
Flink SQL> select * from default_catalog.default_database.recommendations ;
[ERROR] Could not execute SQL statement. Reason:
java.lang.ArrayIndexOutOfBoundsException: -25
Flink SQL> select * from default_catalog.default_database.recommendations ;
[ERROR] Could not execute SQL statement. Reason:
java.io.IOException: Failed to deserialize Avro record.
I'm aware there are some bad Avro records being pushed into the Kafka topic. With the JSON format, there's an option to skip/filter such records by setting 'json.ignore-parse-errors' = 'true'. Is there any way to skip these records when reading with the avro-confluent format?
It's not ideal, but unfortunately I can't control what's being pushed to Kafka despite having a schema registry.
There's currently no such option for Avro. There's an open ticket for it at https://issues.apache.org/jira/browse/FLINK-20091.
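One interim workaround (a sketch, not a built-in option; the table name recommendations_raw is made up here) is to declare the Kafka value as raw bytes and do the Confluent Avro decoding yourself in a UDF or DataStream function, where a try/catch can drop the undecodable records. Remember that Confluent-framed messages carry a 5-byte magic-byte/schema-id prefix before the Avro payload, which your decoder has to handle:
CREATE TABLE recommendations_raw (
    `value` BYTES
) WITH (
    'connector' = 'kafka',
    'topic' = 'my_kafka_topic',
    'properties.bootstrap.servers' = 'localhost:9092',
    'properties.group.id' = 'testGroup',
    'properties.security.protocol' = 'SASL_PLAINTEXT',
    'properties.sasl.kerberos.service.name' = 'kafka',
    'scan.startup.mode' = 'latest-offset',
    'format' = 'raw'
);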

Cannot insert NULL value into column error

I have an issue where I want to update a column in a table, with a trigger that updates the same column in another table. It says I cannot insert NULL, but I can't see where that NULL value comes from. This is the trigger:
CREATE TRIGGER Custom_WF_Update_WF_DefinitionSteps_DefinitionId ON WF.Definition
AFTER UPDATE AS BEGIN
    IF UPDATE(DefinitionId)
        IF TRIGGER_NESTLEVEL() < 2
        BEGIN
            ALTER TABLE WF.DefinitionSteps NOCHECK CONSTRAINT ALL

            UPDATE WF.DefinitionSteps
            SET DefinitionId =
                (SELECT i.DefinitionId
                 FROM inserted i,
                      deleted d
                 WHERE WF.DefinitionSteps.DefinitionId = d.DefinitionId
                   AND i.oldPkCol = d.DefinitionId)
            WHERE WF.DefinitionSteps.DefinitionId IN
                (SELECT DefinitionId FROM deleted)

            ALTER TABLE WF.DefinitionSteps CHECK CONSTRAINT ALL
        END
END
This update statement works just fine:
UPDATE [CCHMergeIntermediate].[WF].[Definition]
SET DefinitionId = source.DefinitionId + 445
FROM [CCHMergeIntermediate].[WF].[Definition] source
But this one fails:
UPDATE [CCHMergeIntermediate].[WF].[Definition]
SET DefinitionId = target.DefinitionId
FROM [CCHMergeIntermediate].[WF].[Definition] source
INNER JOIN [centralq3].[WF].[Definition] target
ON (((source.Name = target.Name) OR (source.Name IS NULL AND target.Name IS NULL)))
I get the following error:
Msg 515, Level 16, State 2, Procedure Custom_WF_Update_WF_DefinitionSteps_DefinitionId, Line 7
Cannot insert the value NULL into column 'DefinitionId', table 'CCHMergeIntermediate.WF.DefinitionSteps'; column does not allow nulls. UPDATE fails.
If I do a select instead of the update statement, like this:
SELECT source.DefinitionId, target.DefinitionId
FROM [CCHMergeIntermediate].[WF].[Definition] source
INNER JOIN [centralq3].[WF].[Definition] target
ON (((source.Name = target.Name) OR (source.Name IS NULL AND target.Name IS NULL)))
I get this result:
http://i.stack.imgur.com/3cZsM.png (sorry for the external link, I don't have enough reputation to post an image here)
What am I doing wrong? What am I not seeing? What am I missing?
The problem was the correlation condition in the trigger: when i.oldPkCol = d.DefinitionId matched no row, the scalar subquery returned NULL, which the update then tried to assign. I changed the second condition from i.oldPkCol = d.DefinitionId to i.oldPkCol = d.oldPkCol and it worked.
UPDATE WF.DefinitionSteps
SET DefinitionId =
    (SELECT i.DefinitionId
     FROM inserted i,
          deleted d
     WHERE WF.DefinitionSteps.DefinitionId = d.DefinitionId
       AND i.oldPkCol = d.oldPkCol)
WHERE WF.DefinitionSteps.DefinitionId IN
    (SELECT DefinitionId FROM deleted)
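To see why the bad correlation produced a NULL instead of an error: in T-SQL a scalar subquery that matches no rows evaluates to NULL, so whenever the i/d pairing failed, the SET assigned NULL to a non-nullable column. A minimal illustration:
-- a scalar subquery with no matching rows yields NULL, not zero rows
SELECT (SELECT 1 WHERE 1 = 0);  -- returns NULL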

DB2 Merge using multiple columns in ON statement

I have a merge statement that does something like the following:
MERGE INTO TABLE_NAME1 tgt
USING (SELECT CONTRACTOR, TRACTOR, COL1, COL2 FROM TABLE_NAME2) src
ON src.CONTRACTOR = tgt.CONTRACTOR AND src.TRACTOR = tgt.TRACTOR
This is because a contractor can have multiple tractors. The table's key is not used because it is only an identity column (auto-numbered on insert).
The merge runs OK when the table is empty, but running it again duplicates the rows where the tractor is null. So I tried:
ON ((src.CONTRACTOR = tgt.CONTRACTOR AND src.TRACTOR = tgt.TRACTOR)
OR (src.CONTRACTOR = tgt.CONTRACTOR AND tgt.TRACTOR IS NULL))
But this causes it to hang. Does DB2 have an issue comparing NULL to NULL?
"Does DB2 have an issue comparing NULL to NULL?" No, it does not. However, the result of such a comparison is unknown, in other words, it is neither true nor false:
$ db2 "select * from sysibm.sysdummy1"
IBMREQD
-------
Y
1 record(s) selected.
$ db2 "select * from sysibm.sysdummy1 where null = null"
IBMREQD
-------
0 record(s) selected.
$ db2 "select * from sysibm.sysdummy1 where null != null"
IBMREQD
-------
0 record(s) selected.
Without seeing your complete statement and sample data it's hard to provide a definite answer, but you may want to try instead:
...ON ((src.CONTRACTOR = tgt.CONTRACTOR AND src.TRACTOR = tgt.TRACTOR
AND tgt.TRACTOR IS NOT NULL))
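If the intent is for two NULL tractors to count as a match, a common null-safe pattern is to normalize the NULLs on both sides instead of adding an OR branch (a sketch: it assumes TRACTOR is a character column and that the empty string never occurs as a real value):
ON src.CONTRACTOR = tgt.CONTRACTOR
AND COALESCE(src.TRACTOR, '') = COALESCE(tgt.TRACTOR, '')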

Need help improving T-SQL "not exists" query performance

I'm having a performance issue running a query on a table containing 750,000 entries. It takes 15 to 20 seconds to execute, blocking access to the database during that time and creating lots of error logs (and angry customers, of course).
Here is the query:
DECLARE @FROM_ID AS UNIQUEIDENTIFIER = 'XXX'
DECLARE @TO_ID AS UNIQUEIDENTIFIER = 'YYY'

update tbl_share
set user_id = @TO_ID
where user_id = @FROM_ID
and not exists (
    select *
    from tbl_share ts
    where ts.file_id = tbl_share.file_id
    and ts.user_id = @TO_ID
    and ts.corr_id = tbl_share.corr_id
    and ts.local_group_id = tbl_share.local_group_id
    and ts.global_group_id = tbl_share.global_group_id
)
I'm kind of stuck right now since my T-SQL knowledge is limited.
I'm wondering if:
I should create a temporary table
I should select something other than *
I don't have many opportunities to run tests since it's a production database with 10-20 customers connected throughout the day.
Thanks for your help!
How about restructuring your code logic?
DECLARE @FROM_ID AS UNIQUEIDENTIFIER = 'XXX'
DECLARE @TO_ID AS UNIQUEIDENTIFIER = 'YYY'

IF NOT EXISTS (select *
               from tbl_share ts
               where ts.user_id = @TO_ID)
BEGIN
    update tbl_share
    set user_id = @TO_ID
    where user_id = @FROM_ID
END
So you do your check beforehand and only update the database when it is needed.
HTH
Let's start with optimizing the select.
Check the query plans.
If that is the PK, then is it fragmented?
select *
from tbl_share
where user_id = @FROM_ID
and not exists (
    select *
    from tbl_share ts
    where ts.file_id = tbl_share.file_id
    and ts.user_id = @TO_ID
    and ts.corr_id = tbl_share.corr_id
    and ts.local_group_id = tbl_share.local_group_id
    and ts.global_group_id = tbl_share.global_group_id
)
select tUpdate.*
from tbl_share as tUpdate
left outer join tbl_share as tExists
    on tExists.user_id = @TO_ID
    and tExists.file_id = tUpdate.file_id
    and tExists.corr_id = tUpdate.corr_id
    and tExists.local_group_id = tUpdate.local_group_id
    and tExists.global_group_id = tUpdate.global_group_id
where tUpdate.user_id = @FROM_ID
and tExists.user_id is null
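Whichever shape you pick, the NOT EXISTS probe is only cheap if SQL Server can seek on the correlated columns, so an index along these lines may help more than the rewrite itself (a sketch: the index name is made up and the column order is an assumption about selectivity):
CREATE INDEX IX_tbl_share_user_file
ON tbl_share (user_id, file_id, corr_id, local_group_id, global_group_id);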