Terminate unowned workflow

I'm currently using WF with multiple hosts. If one of these hosts owns a workflow, but crashes, I'd like another host to be able to terminate the workflow. Is there any way to do this?
What I've tried so far is to first remove ownership by executing a SQL query that sets ownerID and ownedUntil to NULL, unlocked to 1, and nextTimer to the current date. Then I get the workflow instance from the runtime and call Terminate on it. This only seems to work when the host that started the workflow is the one that terminates it.

I found a workaround. I call Terminate twice on the workflow instance. I still don't understand why that is needed, but it seems to work.

Related

A file prepared by one Spring Batch job is not accessible to another job for deletion

I have a requirement where one job prepares a file, and another job, which runs once a day, sends the file to an external system and then deletes or moves it from that location. When this second job tries to delete or move the file, it can't access it.
I tried setting writable to true when the file is created, running the jobs at separate times (one job at a time), and adding "delete" as a step to the same job. Nothing worked.
I am using file.delete() and have also tried Files.deleteIfExists().
I suspect the first job is not assigning the proper permissions, but I don't know how to set permissions in Spring Batch.
Are these jobs run by the same user, i.e. the same user and permissions?
Also, what is the actual error message? Does it say permission denied? If so, then it is likely an OS restriction, not a Spring Batch/Java limitation.
An easier solution would be to just add a step to the first job that sends the files as part of that job, and drop the job that only transfers the files.
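For what it's worth, a rough sketch of that suggestion: a final Tasklet step appended to the first job that moves the finished file, standing in for the transfer step. The class name and paths below are made up for illustration.

// Hypothetical final step for the first job: move the finished file so a separate
// transfer-only job is no longer needed. Paths are placeholders.
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

public class MoveFileTasklet implements Tasklet {

    private final Path source = Paths.get("/data/out/report.csv");      // placeholder path
    private final Path target = Paths.get("/data/outbound/report.csv"); // placeholder path

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        // Note: this still requires the writer in the earlier step to have closed its
        // stream; as the accepted fix below shows, a composite writer may keep the
        // handle open unless the delegate is closed explicitly.
        Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
        return RepeatStatus.FINISHED;
    }
}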
Answering my own question 😀. Hope it helps someone.
The issue was that the last ItemWriter was holding on to the resource because I was using a composite writer. When using CompositeItemWriter, the beforeStep and afterStep methods of the delegates are "hidden"; you have to call them explicitly. I chose the approach of writing a custom writer that explicitly calls writer.close().
Adding an afterStep method and calling super.close() should also work, though I have not tried that out.
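As a rough illustration of that fix, here is a minimal sketch of a delegating writer that closes its delegate when the step ends. The class name is made up, and the signatures follow the Spring Batch 4.x API (where ItemWriter.write takes a List); adjust for other versions.

// Hypothetical wrapper: delegates writing to a stream-based writer and closes it
// explicitly in afterStep, since the composite writer does not propagate these callbacks.
import java.util.List;

import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;
import org.springframework.batch.item.ItemStreamWriter;
import org.springframework.batch.item.ItemWriter;

public class ClosingDelegatingWriter<T> implements ItemWriter<T>, StepExecutionListener {

    private final ItemStreamWriter<T> delegate;   // e.g. a FlatFileItemWriter

    public ClosingDelegatingWriter(ItemStreamWriter<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public void beforeStep(StepExecution stepExecution) {
        // Open the delegate ourselves because the composite writer hides this callback.
        delegate.open(stepExecution.getExecutionContext());
    }

    @Override
    public void write(List<? extends T> items) throws Exception {
        delegate.write(items);
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        // Explicitly release the file handle so a later job can delete or move the file.
        delegate.close();
        return stepExecution.getExitStatus();
    }
}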

Processing a row externally that fired a trigger

I'm working on a PostgreSQL 9.3 database on an Ubuntu 14 server.
I'm trying to write a trigger function (AFTER, FOR EACH ROW) that launches an external process that needs to access the row that fired the trigger.
My problem:
Even though I can run queries on the table inside the trigger and see the new row, the external process does not see it (while the trigger function is still running).
Is there a way to manage that?
I thought about starting some kind of asynchronous function call to give the trigger some time to terminate first, but that's of course really ugly.
I also read about notifiers and listeners, but that would require some refactoring of my existing code plus an additional listener process, which is what I was trying to avoid with my trigger. (I'm also afraid of new problems that may come up along that road.)
Any more thoughts?
Robin
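For what it's worth, the new row only becomes visible to other sessions once the inserting transaction commits, which is why the external process can't see it while the trigger is still running. The listener route mentioned above sidesteps that, because NOTIFY messages are delivered only after commit. A minimal sketch of the consuming side in Java, assuming the trigger calls pg_notify('row_ready', NEW.id::text) and the pgjdbc driver is used (channel name and connection details are made up):

// Hypothetical external listener: receives NOTIFY events that the trigger sends via
// pg_notify(); they arrive only after the inserting transaction commits, so the new
// row is guaranteed to be visible when this code queries for it.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class RowReadyListener {

    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password")) {   // placeholders
            try (Statement st = con.createStatement()) {
                st.execute("LISTEN row_ready");   // channel name is an assumption
            }
            PGConnection pg = con.unwrap(PGConnection.class);
            while (true) {
                // A lightweight query makes the driver read any pending notifications.
                try (Statement st = con.createStatement()) {
                    st.executeQuery("SELECT 1").close();
                }
                PGNotification[] notes = pg.getNotifications();
                if (notes != null) {
                    for (PGNotification n : notes) {
                        // getParameter() carries the payload from pg_notify(),
                        // e.g. the primary key of the row that fired the trigger.
                        System.out.println("row ready: " + n.getParameter());
                    }
                }
                Thread.sleep(500);
            }
        }
    }
}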

Talend Force run order of joblets

My company has a couple of joblets that we put in new jobs to do things like initializing variables, getting system information from the database, and sending out error/warning emails. The issue we are running into is that if we start creating the components of a job and then realize that we forgot to include these three joblets, we basically have to re-create the job to ensure that the joblets are added first so they run first.
Is there any way to force these joblets to run first and possibly also in a certain order before moving on to the contents of the job being created? Please let me know if there is any information you may need that I'm missing as I have only been using Talend for a few days. The rest of the team has not been using it too much longer than I have, so they do not have the answer I'm looking for either. Thanks in advance!
In joblets you can use the Trigger_Input and Trigger_Output components as connection points for "on subjob OK" triggers. That way you can connect joblets and other components in a job with triggers, thereby enforcing the execution order.
But you cannot get an "on subjob OK" trigger from a tPreJob. I am thinking of triggering from a tPreJob to a tWarn (on component OK) and then from the tWarn to the joblet (on subjob OK).

DB2 lock timeout

We have a WebSphere cluster with four clones. Identical code runs on each of the clones. We have Quartz periodically kick off a job that runs the code.
The code tries to update a row in a table so that only one of the clones will be able to successfully update the table, and then that clone will run the rest of the job. Something like:
update <table> set status = 'RUNNING' where job_name = 'JOB1' and status = 'STOPPED'
We do not start a transaction when we execute the update statement.
What we see sometimes is that all four clones fail to update the table, and all get a lock timeout error (sql code -913).
We've also tried an alternative where we start a transaction, select to see if the row is marked as running, and, if not, perform the update and commit; otherwise we roll back.
That had the same problem.
One solution we did not try yet is to modify the select to be a "select for update", although from my googling I have doubts as to whether that will help.
Any suggestions?
This ended up not being a problem (that's what I get for listening to someone without checking it out myself).
I tested this out in our development environment with two clones. One of the clones would see the -913 lock timeout error occasionally while the other clone would successfully update the table. Other than the ugly log message, everything worked as it should.
Usually, however, we would not get the -913 error, but rather a warning indicating that there was no row to update from one of the clones. Again, this behavior is fine.
So, as we originally thought, and Clockwork-Muse also suggests, using UPDATE statements in this manner to enforce a lock works just fine in DB2.
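For reference, a minimal JDBC sketch of this update-as-lock pattern, roughly as described above; the table name and DataSource wiring are assumptions, and the column names follow the example statement.

// Hypothetical claim helper: only the clone whose UPDATE actually changes the row
// returns true and goes on to run the rest of the job.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class JobClaim {

    private final DataSource dataSource;   // assumed to point at the DB2 database

    public JobClaim(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public boolean tryClaim(String jobName) throws SQLException {
        String sql = "UPDATE job_control SET status = 'RUNNING' "   // table name is a placeholder
                   + "WHERE job_name = ? AND status = 'STOPPED'";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            con.setAutoCommit(true);            // no explicit transaction, as in the question
            ps.setString(1, jobName);
            return ps.executeUpdate() == 1;     // 0 rows updated means another clone won
        }
    }
}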

Retry period after an unhandled exception in a Workflow

Currently, if our workflow application encounters an unhandled exception, it reloads the workflow from the most recently persisted state and tries again. Is there any way to configure exactly how this works? If a service is down, for example, the workflow reloads roughly every second and tries to run again, which, when multiple workflows are all doing the same thing, can result in thousands of exceptions per minute.
I think that using the timeToPersist and timeToUnload properties on workflowIdle might have something to do with this. Currently we have this set to:
If I set timeToUnload to 1 minute will that mean the workflow will only be able to retry once every minute?
TimeToPersist and TimeToUnload won't come into play here; those values determine how long a workflow has to be idle before being persisted/unloaded.
You can probably use WorkflowApplication.OnUnhandledException to create a catch-all exception handler (assuming you're using this class to create workflows).
http://msdn.microsoft.com/en-us/library/system.activities.workflowapplication.onunhandledexception.aspx