How do I continue where I left off in Spring Batch? - spring-batch

So I wrote an ItemReader. When this app is run from the command line again I want to continue reading from where I left off. How do I do that?
I've added spring-task. It seems to track certain things. Does it help with this?
Everything I have read online seems to be talking about restarting the job after a failure. I don't think I have any use for that. I've added all of my stuff into the ExecutionContext. Should I use the JobRepository and start looking around for the last successful execution?
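For context, the restart machinery usually described for failures is also the mechanism that persists reader state between runs: implement ItemStream on the reader, save the position in update(), and restore it in open(). A minimal sketch, with illustrative class, key, and field names (not from the question):

```java
import java.util.List;

import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemStream;

// A sketch, not the poster's reader: names here are illustrative.
public class ResumableReader implements ItemReader<String>, ItemStream {

    private static final String POSITION_KEY = "resumable.reader.position";

    private final List<String> items; // whatever the reader actually reads from
    private int position;

    public ResumableReader(List<String> items) {
        this.items = items;
    }

    @Override
    public void open(ExecutionContext ctx) {
        // On a restart of a FAILED/STOPPED execution, Spring Batch passes in
        // the last committed ExecutionContext; a fresh run starts from scratch.
        position = ctx.containsKey(POSITION_KEY) ? ctx.getInt(POSITION_KEY) : 0;
    }

    @Override
    public void update(ExecutionContext ctx) {
        // Called at each chunk commit; the context is persisted in the JobRepository.
        ctx.putInt(POSITION_KEY, position);
    }

    @Override
    public void close() {
    }

    @Override
    public String read() {
        // Returning null signals end of input to the step.
        return position < items.size() ? items.get(position++) : null;
    }
}
```

Note that Spring Batch only re-hydrates this context when restarting the same JobInstance. To continue after a COMPLETED run, you would indeed have to look up the previous execution's context yourself through the JobRepository/JobExplorer, as the question suspects.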

Related

How to make Libfuzzer run without stopping similar to AFL?

I have been trying to fuzz using both AFL and libFuzzer. One of the distinct differences I have come across is that when AFL is executed, it runs continuously unless it is manually stopped by the developer.
On the other hand, libFuzzer stops the fuzzing process when a bug is identified. I know that it allows parallel fuzzing through the -jobs=N flag, but those processes still stop when a bug is identified.
Is there any reason behind this behavior?
Also, is there any flag that allows libFuzzer to run continuously until the developer stops the fuzzing process?
This question is old, but I also needed to run libFuzzer without stopping.
It can be accomplished with the flag -fork=<N of jobs> combined with -ignore_crashes=1.
Be aware that Ctrl+C no longer works then: it is treated as a crash and just spawns a new job. But I think this is a bug, see here.

Threading profile for one endpoint in Mule results in endless hanging when used for two endpoints

I added a second service to a Mule project in order to avoid duplicating code. Both are REST services that access the same web service and return JSON. The first service's threading profile looked like this:
<configuration doc:name="Configuration">
    <default-receiver-threading-profile maxThreadsActive="10" poolExhaustedAction="WAIT" threadWaitTimeout="10000" maxBufferSize="100" maxThreadsIdle="2" />
</configuration>
When I added my second service, the same threading profile caused the program to hang endlessly. No log entries, nothing. Raising max active threads to 1000, or even deleting the WAIT option, doesn't do anything. It's only when I change maxThreadsIdle to 3 or higher, or delete maxBufferSize, or delete the whole profile, that both services can work in the same project. One other thing: when I edit the Mule flow and save, the program automatically launches again. Oddly enough, the results end up appearing in any browser I had left hanging from trying to submit.
What I want to know is why the minimum idle threads need to be set to 3 or higher... what is actually going on here? Ideally, I would like to keep the threading configuration set to what I have here.

Batch processing with celery?

I am using celery to process and deploy some data and I would like to be able to send this data in batches.
I found this:
http://docs.celeryproject.org/en/latest/reference/celery.contrib.batches.html
But I am having problems with it, such as:
- It does not play nicely with eventlet, putting exceptions in the log files stating that the timer is null after the queue is empty.
- It seems to leave additional hanging threads after calling celery multi stop.
- It does not appear to adhere to the standard logging of a typical Task.
- It does not appear to retry the task when raise mytask.retry() is called.
I am wondering if others have experienced these problems, and whether there is a solution.
I am fine with implementing batch processing on my own, but I do not know a good strategy to make sure that all items are deployed (i.e. even those at the end of the thread).
Also, if the batch fails, I would want to retry the entire batch. I am not sure of any elegant way to do that.
Basically, I am looking for any viable solution for doing real batch processing with celery.
I am using Celery v3.0.21
Thanks!

Part of DROOLS ruleflow not working randomly

We have a main ruleflow which calls 8 more ruleflows (Rule1.rf to Rule8.rf) through an AND splitter. One of the ruleflows - say Rule4.rf - fires sometimes and not others.
This is for an online application and we use JBoss. When the server is started, everything works fine. After many hours, for some requests Rule4.rf is not fired at all, while for others it fires properly.
We even posted the same request again and again, and the issue happens only some of the time. There is no difference in the logs between the successful and failed requests, except that the logs from Rule4.rf are missing in the failed requests.
We are using Drools 5.1 and Java 6.
Please help me. This is causing a very big issue.
It is very difficult to figure out what might be going on without being able to look at the actual code and log. Could you by any chance create a JIRA and attach the process and if possible (a part of) an audit log that shows the issue?
Kris

Quartz job fires multiple times

I have a building block which sets up a Quartz job to send out emails every morning. The job is fired three times every morning instead of once. We have a hosted instance of Blackboard, which I am told runs on three virtual servers. I am guessing this is what is causing the problem, as the building block was previously working fine on a single server installation.
Does anyone have Quartz experience, or could suggest how one might prevent the job from firing multiple times?
Thanks,
You didn't describe in detail how your Quartz instance(s) are being instantiated and started, but be aware that undefined behavior will result if you run multiple Quartz instances against the same job store database at the same time, unless you enable clustering (see http://www.quartz-scheduler.org/docs/configuration/ConfigJDBCJobStoreClustering.html).
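For reference, clustering is switched on in quartz.properties. A minimal sketch (property names are from the Quartz configuration reference; the values are illustrative):

```properties
# Each node needs a unique instance id; AUTO generates one.
org.quartz.scheduler.instanceId = AUTO
# Clustering requires the JDBC job store.
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.isClustered = true
# How often (ms) each node checks in with the cluster.
org.quartz.jobStore.clusterCheckinInterval = 20000
```

All nodes must point at the same job store database and use the same configuration (apart from the instance id) for the cluster lock to keep a job from firing on more than one server.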
I guess I'm a little late responding to this, but we have a similar sort of scenario with our application. We have 4 servers running jobs, some of which can run on multiple servers concurrently, and some should only be run once. As Will's response said, you can look into the clustering features of Quartz.
Our approach was a bit different, as we had a home-grown solution in place before we switched to Quartz. Our jobs use a database table that stores the cron triggers and other job information, and then "lock" the entry for a job so that none of the other servers can execute it. This keeps jobs from running multiple times across the servers, and has been fairly effective so far.
Hope that helps.
I had the same issue before, but I discovered that I was calling scheduler.scheduleJob(job, trigger); to update the job data while the job was running, which randomly triggered the job 5-6 times each run. I had to use scheduler.addJob(job, true); to update the job data without updating the trigger.
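The difference looks roughly like this (the job class and key names are hypothetical, and a running Quartz 2.x Scheduler is assumed):

```java
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;

public class JobDataUpdater {

    // Overwrite a job's stored JobDataMap without touching its triggers.
    public static void updateJobData(Scheduler scheduler) throws SchedulerException {
        JobDetail updated = JobBuilder.newJob(EmailJob.class)
                .withIdentity("morningEmail", "mail")    // must match the existing job's key
                .usingJobData("lastRun", System.currentTimeMillis())
                .storeDurably()                          // required when storing a job with no new trigger
                .build();

        // scheduler.scheduleJob(updated, trigger) would register ANOTHER trigger
        // for the job, which is what causes the extra firings. addJob with
        // replace=true only replaces the stored JobDetail (and its JobDataMap);
        // the existing triggers are left alone.
        scheduler.addJob(updated, true);
    }

    public static class EmailJob implements Job {
        @Override
        public void execute(JobExecutionContext ctx) {
            // send the morning email
        }
    }
}
```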