Update Quartz.NET jobs and triggers from XML

I am trying to move my Quartz.NET setup from RAMJobStore to AdoJobStore to handle situations where the server happens to be down at the moment a trigger should fire.
I use the XMLSchedulingDataProcessorPlugin for job and trigger initialization. The database connection is working and at application startup the jobs and triggers are inserted in the database tables.
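For reference, the plugin is wired into the scheduler configuration roughly like this (the file name is illustrative, and the assembly name assumes Quartz.NET 3.x with the Quartz.Plugins package):

quartz.plugin.xml.type = Quartz.Plugin.Xml.XMLSchedulingDataProcessorPlugin, Quartz.Plugins
quartz.plugin.xml.fileNames = ~/quartz_jobs.xml

The XML file contains these processing directives: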
<processing-directives>
<overwrite-existing-data>true</overwrite-existing-data>
<ignore-duplicates>false</ignore-duplicates>
</processing-directives>
If I set the overwrite-existing-data directive to true in the XML config, all saved state is overwritten at the next application startup (for example PREV_FIRE_TIME, NEXT_FIRE_TIME and START_TIME for triggers). If I set it to false, changes such as edits to trigger cron expressions are ignored.
Is there a way to just "update" the data from XML?

Spring Batch: reading from a database and being aware of the previous processed id?

I'm trying to set up Spring Batch to move DB records from Oracle to Cassandra daily.
I know I can manually define JPA repository queries based on an additional entity table (like MyBatchProgress, where I store the previously completed Id + date or something like that), so that the next batch job knows which entity to start with for further operations.
My question is: does Spring Batch provide something like this built in (perhaps by utilising Spring Data JPA)?
Or is this something that I have to write manually in the job's reader step, where I just pick up the last Id stored in my custom "progress" table?
Thanks in advance!
You can store the last ID in the job's execution context, which is persisted in the meta-data tables. With that in place, you can make the code that launches the job look up the last job execution, take the ID from its context, and pass it as a job parameter to the next job instance.
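A minimal sketch of the launcher side, assuming the step stored the id with stepExecution.getJobExecution().getExecutionContext().putLong("lastId", id); the job name and key names here are mine:

import java.util.List;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobInstance;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.explore.JobExplorer;

public class LastIdParametersFactory {

    // Finds the most recent execution of the job and reads the "lastId" it
    // stored in its execution context; 0 means "start from the beginning".
    public JobParameters nextParameters(JobExplorer explorer) {
        long lastId = 0L;
        List<JobInstance> instances = explorer.getJobInstances("oracleToCassandraJob", 0, 1);
        if (!instances.isEmpty()) {
            List<JobExecution> executions = explorer.getJobExecutions(instances.get(0));
            if (!executions.isEmpty()) {
                lastId = executions.get(0).getExecutionContext().getLong("lastId", 0L);
            }
        }
        return new JobParametersBuilder()
                .addLong("startAfterId", lastId)
                .addLong("runTime", System.currentTimeMillis()) // makes each daily run a new JobInstance
                .toJobParameters();
    }
}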

How to create a Logic App Custom Connector polling trigger?

I've been able to create a Logic App Custom Connector with a webhook trigger by following the docs; however, I can't find any documentation on creating a polling trigger. I was only able to find Jeff Hollan's trigger examples, but the polling trigger doesn't seem compatible with the custom connector.
I tried setting up a polling trigger by performing the following steps:
Create an Azure Function with a GET operation expecting a date time query parameter
Have the function return a set of entities that have changed since the last poll
Configure the custom connector to call the Azure Function with the date time query parameter
Configure the response body of the custom connector
Try different things in the 'Trigger configuration' section, which is the part that confuses me most.
Whatever I try, the trigger always fails with a 404 in the trigger outputs, similar to what I initially saw with the webhook trigger type.
There are a few things that confuse me:
1. Path of trigger query seems screwed up
It looks like the custom connector UI screws up the path to the trigger. I noticed this when I downloaded the OpenAPI file. The path to my trigger API should be /api/trigger/tasks/completed, but in the OpenAPI file it read /trigger/api/trigger/tasks/completed. It appears the custom connector adds /trigger in front of the path. I sometimes noticed it doing this multiple times, giving me something similar to /trigger/trigger/trigger/api/trigger/tasks/completed. I fixed this in the OpenAPI file and re-imported it into the custom connector.
2. Trigger Configuration section
I don't understand what to do in the Trigger Configuration section of a polling trigger.
I assume the query parameter to monitor state change is some parameter I define myself, e.g. a timestamp, to determine what entities to return.
For 'select value to pass to selected query param', I would expect to be able to pick a timestamp from the trigger response, but it looks like I can only pick values from a collection, not scalar values from the response. How does that work?
Is 'trigger hint' just some information or does it actually control something?
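For reference, this is roughly the shape I would expect the polling trigger operation to have in the downloaded OpenAPI (Swagger 2.0) file, with the x-ms-trigger extension marking it as a trigger that returns a collection; the operation, parameter and schema names below are placeholders:

"/api/trigger/tasks/completed": {
  "get": {
    "summary": "When a task is completed",
    "operationId": "OnTaskCompleted",
    "x-ms-trigger": "batch",
    "x-ms-trigger-hint": "To see it work now, complete a task",
    "parameters": [ {
      "name": "completedAfter",
      "in": "query",
      "type": "string",
      "required": true,
      "description": "Only return tasks completed after this timestamp"
    } ],
    "responses": {
      "200": {
        "description": "OK",
        "schema": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "id": { "type": "string" },
              "completedAt": { "type": "string" }
            }
          }
        }
      }
    }
  }
}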

Event Based Drools Rule

I am new to Drools Fusion and am unable to create a rule for the conditions below:
Read the server log file (date, error message, etc.).
If an event with Event Type: ERROR and Event Message: "Memory Error" is found, trigger some event (for now just a SOP / System.out.println).
Within 1 hour it should not trigger the event again for the same Event Message & Event Type (if it is found in the log file again).
After 1 hour, if the same event is found again, it has to trigger the event.
Note: the rule has to use the same date & time specified in the log file.
Any help would be appreciated.
I'm not sure exactly what you're looking for, so I'll respond conceptually, assuming you are trying to do everything within the Drools framework.
For Drools to be constantly aware of the server log, you will need to run a stateful knowledge session and constantly insert new facts into it, derived from the server log.
It looks like you want to talk about Events in your model. Make an Event class; for this example the class should probably have "type" and "message" fields, plus the timestamp from the log. Presumably you would insert new Event objects from code that is constantly pulling information from the server log (by reading a file, through REST, or whatever).
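A minimal version of such a fact class (field names are illustrative) could be:

import java.util.Date;

// One fact per relevant log line, inserted into the stateful session.
public class Event {

    private final String type;     // e.g. "ERROR"
    private final String message;  // e.g. "Memory Error"
    private final Date logDate;    // date & time taken from the log file itself

    public Event(String type, String message, Date logDate) {
        this.type = type;
        this.message = message;
        this.logDate = logDate;
    }

    public String getType() { return type; }
    public String getMessage() { return message; }
    public Date getLogDate() { return logDate; }
}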
In order to do time-based logic you can use cron expressions; in more recent versions of Drools you can also use Calendars. Here is a brief example of doing it with cron.
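(Rule, package and field names below are illustrative, and the hourly cron timer only approximates the "no repeat within 1 hour" requirement; treat it as a sketch, not a drop-in rule.)

package com.sample.rules

import com.sample.model.Event   // the fact class sketched above

declare Event
    @role( event )
    @timestamp( logDate )   // use the date & time parsed from the log file
end

rule "Memory Error alert"
    // fire on a cron schedule, at most once per hour, while matching events exist
    timer (cron: 0 0 * * * ?)
when
    $e : Event( type == "ERROR", message == "Memory Error" )
then
    System.out.println("Memory Error found in log at " + $e.getLogDate());
end

The session should run in stream mode for these event semantics. A fully event-driven alternative is to drop the timer and instead suppress repeats with a temporal operator, e.g. not Event( this != $e, type == "ERROR", message == "Memory Error", this before[0s,1h] $e ).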

Obtain ServiceDeploymentId in TrackingParticipant

In WF4, I've created a descendant of TrackingParticipant. In the Track method, record.InstanceId gives me the GUID of the workflow instance.
I'm using the SqlWorkflowInstanceStore for persistence. By default records are automatically deleted from the InstancesTable when the workflow completes. I want to keep it that way to keep the transaction database small.
This creates a problem for reporting, though. My TrackingParticipant will log the instance ID to a reporting table (along with other tracking information), but I'll want to join to the ServiceDeploymentsTable. If the workflow is complete, that GUID won't be in the InstancesTable, so I won't be able to look up the ServiceDeploymentId.
How can I obtain the ServiceDeploymentId in the TrackingParticipant? Alternately, how can I obtain it in the workflow to add it to a CustomTrackingRecord?
You can't get the ServiceDeploymentId in the TrackingParticipant. Basically the ServiceDeploymentId is an internal detail of the SqlWorkflowInstanceStore.
One option is to configure the SqlWorkflowInstanceStore not to delete the workflow instance upon completion, and delete it yourself at some later point in time, after saving the ServiceDeploymentId together with the InstanceId.
An alternative is to keep auto cleanup enabled on the SqlWorkflowInstanceStore and retrieve the ServiceDeploymentId when the first tracking record is generated. At that point the workflow is not complete yet, so the original instance record is still there.
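If you take the second approach, the lookup when that first tracking record arrives can be a plain query against the persistence schema. A sketch, assuming the table and column names created by the default SqlWorkflowInstanceStoreSchema.sql script (verify them against your database):

SELECT d.ServiceDeploymentId, d.ServiceName, d.ServiceNamespace
FROM [System.Activities.DurableInstancing].[InstancesTable] AS i
JOIN [System.Activities.DurableInstancing].[ServiceDeploymentsTable] AS d
    ON d.ServiceDeploymentId = i.ServiceDeploymentId
WHERE i.InstanceId = @InstanceId   -- record.InstanceId from the Track method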

Continue insert when exception is raised in Postgres

Hi,
I am trying to insert a batch of records at a time. When any of the records fails to insert, I need to trap that record, log it to my failed-record maintenance table, and then continue with the remaining inserts. Kindly help on how to do this.
If you are using a Spring or EJB container there is a simple trick which works very well: provide a LogService with a logWarning(String message) method, annotated/configured with the REQUIRES_NEW transaction setting.
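A sketch of that service with Spring annotations; FailedRecord and FailedRecordRepository are hypothetical stand-ins for your failed-record maintenance table:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class LogService {

    private final FailedRecordRepository failedRecords; // hypothetical repository over the failure table

    public LogService(FailedRecordRepository failedRecords) {
        this.failedRecords = failedRecords;
    }

    // REQUIRES_NEW suspends the caller's transaction and commits this insert
    // on its own, so the log entry survives a rollback of the batch insert.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void logWarning(String message) {
        failedRecords.save(new FailedRecord(message)); // hypothetical entity
    }
}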
If not, then you'll have to simulate it using API calls: open a separate connection for the logging, begin a transaction when you enter the method, and commit it before leaving.
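Simulated with plain JDBC, it could look like this (connection URL, credentials and table name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class JdbcLogService {

    // Uses its own connection so the log insert commits independently of
    // whatever transaction the failing batch insert is running in.
    public void logWarning(String message) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret")) {
            con.setAutoCommit(false); // begin a separate transaction
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO failed_record_log(message) VALUES (?)")) {
                ps.setString(1, message);
                ps.executeUpdate();
            }
            con.commit(); // commit before leaving the method
        }
    }
}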
When not using transactions for the insert there is actually nothing special you need to do, as by default most databases run in autocommit mode and commit after every statement.