I could not find an answer here, so I'm asking it :-)
I'm using TYPO3 8.7.24 and want to use the scheduler to remove deleted DB entries (recycler). I created the task and executed it manually.
The result: nothing happens. The Recycler still shows the deleted entries.
Does somebody have an idea why this is not working?
BR
Jürgen
Have you configured the scheduler task correctly?
Aside from configuring how often the task should be executed, you need to configure:
"Delete entries older than (in days)"
how long ago should a deletion have occurred so that the record can be removed?
"Tables"
which tables should be considered for cleanup?
Either your deletion wasn't that long ago or your records do not belong to any monitored table.
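Conceptually, per configured table the cleanup boils down to something like this (a rough sketch; tt_content and the 30-day cutoff are just examples, and the actual query is internal to TYPO3):

```sql
-- Remove soft-deleted rows whose last change is older than the cutoff
DELETE FROM tt_content
WHERE deleted = 1
  AND tstamp < UNIX_TIMESTAMP() - (30 * 86400);  -- 30 days in seconds
```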
My application uses Mongock 4.1.19, and whenever there is a changeSet with runAlways=true, duplicate entries get created in the dbchangelog collection.
The line linked below does not seem to consider the already-executed case and may be producing the duplicate changelog entries.
Any pointers on how this can be addressed?
https://github.com/cloudyrock/mongock-core/blob/91d15d65a22234f4a2e8d28c759d0641d36750e0/mongock-runner/mongock-runner-core/src/main/java/com/github/cloudyrock/mongock/runner/core/executor/MigrationExecutor.java#L139
The following is logged at startup:
RE-APPLIED - ChangeEntry{...}
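For context, the kind of changeSet involved looks roughly like this (class name and ids are made up):

```java
import com.github.cloudyrock.mongock.ChangeLog;
import com.github.cloudyrock.mongock.ChangeSet;

@ChangeLog(order = "001")
public class SampleChangeLog {

    // runAlways = true makes Mongock execute this on every startup; each
    // startup logs the RE-APPLIED entry above and writes to dbchangelog.
    @ChangeSet(id = "ensureDefaults", order = "001", author = "app", runAlways = true)
    public void ensureDefaults() {
        // idempotent setup work goes here
    }
}
```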
It's not really a duplicate: it creates one changelog entry per execution.
However, we understand this is not the most commonly desired behaviour, so we are releasing a bugfix (4.3.8) for version 4 in the next few days, probably today.
In version 5, which is under development, we'll keep this behaviour by default, plus the last_execution field we'll be adding will get updated, and we'll add the option to insert a new entry per execution if desired.
I have a requirement where one job prepares a file, and another job, which runs once a day, sends the file to an external system and then deletes or moves it from the location. When this second job tries to delete or move the file, it can't access it.
I tried setting writable to true when the file is created, running the jobs at separate times (one job at a time), and adding "delete" as a step to the same job. Nothing worked.
I am using file.delete() and have also tried Files.deleteIfExists().
I suspect the first job is not assigning proper permissions, but I don't know a way to set permissions in Spring Batch.
Are these jobs run by the same user, i.e. the same user and permissions?
Also, what is the actual error message? Does it say permission denied? If so, it is likely an OS restriction, not a Spring Batch/Java limitation.
An easier solution would be to add a step to the first job that sends the files as part of that job, and drop the job that just transfers the files.
Answering my own question 😀. Hope it helps someone.
The issue was that the last ItemWriter was holding the resources because I was using a composite writer. When using a CompositeWriter, the beforeStep and afterStep methods are "hidden"; you have to call them explicitly. I chose the approach of writing a custom writer that explicitly calls writer.close().
Adding an afterStep method and calling super.close() should also work, though I have not tried that out.
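For illustration, here is a minimal sketch of that custom writer (the class name is mine, and the delegate wiring is assumed): a composite-style wrapper that also propagates the ItemStream callbacks, so close() actually reaches the delegates and the file handles are released.

```java
import java.util.List;

import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemStream;
import org.springframework.batch.item.ItemStreamException;
import org.springframework.batch.item.ItemStreamWriter;
import org.springframework.batch.item.ItemWriter;

public class ClosingCompositeWriter<T> implements ItemWriter<T>, ItemStream {

    private final List<ItemStreamWriter<? super T>> delegates;

    public ClosingCompositeWriter(List<ItemStreamWriter<? super T>> delegates) {
        this.delegates = delegates;
    }

    @Override
    public void write(List<? extends T> items) throws Exception {
        for (ItemStreamWriter<? super T> writer : delegates) {
            writer.write(items);
        }
    }

    @Override
    public void open(ExecutionContext ctx) throws ItemStreamException {
        for (ItemStreamWriter<? super T> writer : delegates) {
            writer.open(ctx);
        }
    }

    @Override
    public void update(ExecutionContext ctx) throws ItemStreamException {
        for (ItemStreamWriter<? super T> writer : delegates) {
            writer.update(ctx);
        }
    }

    @Override
    public void close() throws ItemStreamException {
        // Explicitly closing the delegates releases the file handles that
        // kept the second job from deleting or moving the file.
        for (ItemStreamWriter<? super T> writer : delegates) {
            writer.close();
        }
    }
}
```

Use it as the step's writer in place of the plain CompositeItemWriter; once the step finishes and close() has run, the other job can delete or move the file.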
My company has a couple of joblets that we put in new jobs to do things like initializing variables, getting system information from the database, and sending out error/warning emails. The issue we are running into is that if we start creating the components of a job and then realize that we forgot to include these three joblets, we basically have to re-create the job to ensure the joblets are added first so they run first.
Is there any way to force these joblets to run first and possibly also in a certain order before moving on to the contents of the job being created? Please let me know if there is any information you may need that I'm missing as I have only been using Talend for a few days. The rest of the team has not been using it too much longer than I have, so they do not have the answer I'm looking for either. Thanks in advance!
In joblets you can use the Trigger_Input and Trigger_Output components as connection points for OnSubjobOk triggers. So you can connect joblets and other components in a job with triggers, thus enforcing execution order.
But you cannot get an OnSubjobOk trigger from a tPreJob. I am thinking of triggering from a tPreJob to a tWarn (OnComponentOk) and then from the tWarn to the joblet (OnSubjobOk).
If I click on rename for a particular sequence in the Bluemix Editor, it creates a new sequence with the new name, but the old sequence still exists.
Thanks very much for the bug report. We have replicated this issue: under certain conditions, the rename fails to delete the old sequence. We'll get this fixed as soon as possible!
In the meantime, a workaround is to manually delete the sequence with the old name. There is no harm in doing so.
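If you are working from the CLI, deleting the stale sequence should be a one-liner, since sequences are actions under the covers (the name below is just a placeholder): `wsk action delete old-sequence-name`.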
Nick Mitchell
IBM OpenWhisk Team
We have a WebSphere cluster with four clones. Identical code runs on each of the clones. We have Quartz periodically kick off a job that runs the code.
The code tries to update a row in a table so that only one of the clones will be able to successfully update the table, and then that clone will run the rest of the job. Something like:
update <table> set status = 'RUNNING' where job_name = 'JOB1' and status = 'STOPPED'
We do not start a transaction when we execute the update statement.
What we see sometimes is that all four clones fail to update the table, and all get a lock timeout error (sql code -913).
We've also tried an alternative where we start a transaction, select to see if the row is marked as running, and if not, perform the update and commit; otherwise we roll back.
That had the same problem.
One solution we have not tried yet is to modify the select to be a "select for update", although from my googling, I have doubts as to whether that will help.
Any suggestions?
This ended up not being a problem (that's what I get for listening to someone without checking it out myself).
I tested this out in our development environment with two clones. One of the clones would see the -913 lock timeout error occasionally while the other clone would successfully update the table. Other than the ugly log message, everything worked as it should.
Usually, however, we would not get the -913 error, but rather a warning indicating that there was no row to update from one of the clones. Again, this behavior is fine.
So, as we originally thought, and Clockwork-Muse also suggests, using UPDATE statements in this manner to enforce a lock works just fine in DB2.
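For anyone landing here later, a sketch of the pattern in plain JDBC (the job_control table name is a placeholder, since the question elides the real one; column names follow the question, and connection handling is assumed):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class JobClaim {

    // Returns true only on the clone whose UPDATE changed exactly one row;
    // that clone has effectively acquired the "lock" and should run the job.
    static boolean tryClaim(Connection conn, String jobName) throws SQLException {
        String sql = "update job_control set status = 'RUNNING' "
                   + "where job_name = ? and status = 'STOPPED'";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, jobName);
            return ps.executeUpdate() == 1;
        }
    }

    // Reset the row afterwards so the next scheduled run can claim it again.
    static void release(Connection conn, String jobName) throws SQLException {
        String sql = "update job_control set status = 'STOPPED' where job_name = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, jobName);
            ps.executeUpdate();
        }
    }
}
```

Each clone calls tryClaim() when Quartz fires; only the winner runs the job, and it should call release() in a finally block. The losing clones simply update zero rows, or occasionally see the -913 timeout described above, which is harmless.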