How does Spring Batch Admin stop a running job from the UI?
In Spring Batch Admin's online documentation I read the following:
"A job that is executing can be stopped by the user (whether or not it
is launchable). The stop signal is sent via the database and once
detected by Spring Batch in whatever process is running the job, the
job is stopped (status moves from STOPPING to STOPPED) and no further
processing takes place."
Does that mean the Spring Batch Admin UI directly changes the status of the job in the Spring Batch tables?
UPDATE: I tried executing the query below against the running job.
update batch_job_execution set status="STOPPED" where job_instance_id=19;
The query updates the row in the DB, but Spring Batch is not able to stop the running job.
If anybody has tried this, please share the logic here.
You're confusing BatchStatus with ExitStatus.
What you are doing with that SQL is changing the STATUS to STOPPED.
When a job is running you can stop it from code: on each step iteration, check the status, and if STOPPING is set, signal the ongoing step to stop.
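For illustration, here is a minimal sketch of such a per-item check, assuming the processor grabs the StepExecution through a @BeforeStep hook (the class name and item types here are hypothetical, not from the question):

import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.annotation.BeforeStep;
import org.springframework.batch.item.ItemProcessor;

public class StopAwareProcessor implements ItemProcessor<String, String> {

    private StepExecution stepExecution;

    @BeforeStep
    public void saveStepExecution(StepExecution stepExecution) {
        this.stepExecution = stepExecution;
    }

    @Override
    public String process(String item) {
        // If a stop signal has been set on the execution,
        // ask the step to terminate gracefully.
        if (stepExecution.getJobExecution().isStopping()) {
            stepExecution.setTerminateOnly();
        }
        return item;
    }
}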
In any case, updating the table by hand is not elegant. The correct way is explained in Common Batch Patterns -> 11.2 Stopping a Job Manually for Business Reasons:
import org.springframework.batch.item.ItemProcessor;

public class FooProcessor implements ItemProcessor<FooIn, FooOut> {
    public FooOut process(FooIn item) throws Exception {
        if (sendToStop(item)) {
            throw new MyStopException("I need to Stop: " + item);
        }
        // do my stuff
        return new FooOut(item);
    }
}
Another simple way to stop a chunk step is to return null from the reader; this signals that there are no more elements to read, and the step completes.
public T read() throws Exception {
    T item = delegate.read();
    if (ifNeedStop(item)) {
        return null; // a null from the reader ends the step here
    }
    return item;
}
I investigated the Spring Batch code.
It seems it updates both the version and the status of BATCH_JOB_EXECUTION.
This works for me:
update batch_job_execution set status="STOPPED", version=version+1 where job_instance_id=19;
If you look into the Spring Batch jars, you can see that AbstractStep.java (a Spring Batch core class) checks the status of the step and the job in the database.
Based on this status it validates the step before running it.
This works well in all cases except inside a chunk, since the next check only happens after a large amount of processing. If you want to cover that, you can implement your own listener to check the status, as sketched below (but it will increase DB hits).
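As an illustration, here is a minimal sketch of such a listener, assuming a JobExplorer is available to re-read the execution status from the database (the class name and wiring are my own, not from Spring Batch Admin):

import org.springframework.batch.core.BatchStatus;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.listener.ChunkListenerSupport;
import org.springframework.batch.core.scope.context.ChunkContext;

public class DbStatusChunkListener extends ChunkListenerSupport {

    private final JobExplorer jobExplorer;

    public DbStatusChunkListener(JobExplorer jobExplorer) {
        this.jobExplorer = jobExplorer;
    }

    @Override
    public void beforeChunk(ChunkContext context) {
        StepExecution stepExecution = context.getStepContext().getStepExecution();
        // Re-read the job execution from the database (this is the extra DB hit).
        JobExecution dbExecution =
                jobExplorer.getJobExecution(stepExecution.getJobExecutionId());
        if (dbExecution != null && dbExecution.getStatus() == BatchStatus.STOPPING) {
            // Ask the step to terminate before processing the next chunk.
            stepExecution.setTerminateOnly();
        }
    }
}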
Related
I have started my job using jobLauncher.run(processJob, jobParameters); and when I try to stop the job from another request using jobOperator.stop(jobExecution.getId()); I get this exception:
org.springframework.batch.core.launch.JobExecutionNotRunningException:
JobExecution must be running so that it can be stopped
Set<JobExecution> jobExecutionsSet = jobExplorer.findRunningJobExecutions("processJob");
for (JobExecution jobExecution : jobExecutionsSet) {
    System.err.println("job status : " + jobExecution.getStatus());
    if (jobExecution.getStatus() == BatchStatus.STARTED
            || jobExecution.getStatus() == BatchStatus.STARTING
            || jobExecution.getStatus() == BatchStatus.STOPPING) {
        jobOperator.stop(jobExecution.getId());
        System.out.println("###########Stopped#########");
    }
}
When I print the job status I always get job status : STOPPING, but the batch job keeps running.
It's a web app: the user first uploads a CSV file and starts an operation using Spring Batch; if the user needs to stop it during execution, a stop request comes in through another controller method and tries to stop the running job.
Please help me stop the running job.
If you stop a job while it is running (typically in a STARTED state), you should not get this exception. If you do get it, it means you stopped your job while it was already stopping (that is what the STOPPING status means).
jobExplorer.findRunningJobExecutions returns only running executions, so if on the very next line you see a job with STOPPING status, the status changed right after the call to jobExplorer.findRunningJobExecutions. You need to be aware that this is possible, and your controller should handle this case; see the sketch below.
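For example, a minimal sketch of a stop method that tolerates this race (the class and method names are mine, not from your code):

import java.util.Set;

import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.launch.JobExecutionNotRunningException;
import org.springframework.batch.core.launch.JobOperator;
import org.springframework.batch.core.launch.NoSuchJobExecutionException;

public class JobStopService {

    private final JobExplorer jobExplorer;
    private final JobOperator jobOperator;

    public JobStopService(JobExplorer jobExplorer, JobOperator jobOperator) {
        this.jobExplorer = jobExplorer;
        this.jobOperator = jobOperator;
    }

    public void stopRunning(String jobName) {
        Set<JobExecution> executions = jobExplorer.findRunningJobExecutions(jobName);
        for (JobExecution execution : executions) {
            try {
                jobOperator.stop(execution.getId());
            } catch (JobExecutionNotRunningException | NoSuchJobExecutionException e) {
                // The execution changed state (e.g. to STOPPING or STOPPED)
                // between the query and the stop call - nothing more to do.
            }
        }
    }
}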
When you tell Spring Batch to stop a job it goes into STOPPING mode. This means it will attempt to complete the chunk it is currently processing, then stop working. Likely you are running a long task that never finishes its current unit of work (is it hung?), so the job can't move from STOPPING to STOPPED.
Calling stop twice rightly leads to an exception, because your job is already STOPPING from the first call.
I have gone through a few posts about similar issues. I am calling a Spring Batch application through a shell script and reading the exit status. Everything works fine on successful execution: the exit status is populated as 0. However, if there is any database error (to provoke one I gave a wrong database port), the exit status comes back empty. The code is below.
I referred to the posts below and implemented the same approach:
Make a spring-batch job exit with non-zero code if an exception is thrown
Spring batch return custom process exit code
Shell Script:
java -jar $JOBDIR/lib/feed*.jar
result=$?
echo $result
Java:
public static void main(String[] args) {
    ConfigurableApplicationContext context = SpringApplication.run(App.class, args);
    int exitCode = SpringApplication.exit(context);
    System.out.print("Exit code is " + exitCode);
    System.exit(exitCode);
}
@Primary
@Bean(destroyMethod = "")
public DataSource dataSource() throws Exception {
    return BatchDataSource.create(url, user, password);
}
In case of a database error it does not even reach the end of the main method (System.exit(exitCode);). Can anyone tell me what is wrong?
if there is any database error(to create database error i gave the wrong port of database) then ExitStatus is being returned as empty.
That's because in that case your job is not executed at all. With your configuration, the dataSource bean creation error prevents the Spring application context from starting, so your job never runs.
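One way to make the shell script see a non-zero code even when the context fails to start is to catch the startup failure yourself. A minimal sketch, reusing the App class from the question (the exit code value 1 is an arbitrary choice):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;

@SpringBootApplication
public class App {
    public static void main(String[] args) {
        ConfigurableApplicationContext context;
        try {
            context = SpringApplication.run(App.class, args);
        } catch (Exception e) {
            // The context failed to start (e.g. the dataSource bean could not
            // be created) - exit with a non-zero code the script can detect.
            System.exit(1);
            return;
        }
        int exitCode = SpringApplication.exit(context);
        System.out.print("Exit code is " + exitCode);
        System.exit(exitCode);
    }
}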
I've got an ECS/Fargate task that runs every five minutes. Is there a way to tell it not to run if the prior instance is still working? At the moment I'm just passing it a cron expression, and there's nothing in the AWS cron/rate documentation about blocking subsequent runs.
Conceptually I'm looking for something similar to Spring's @Scheduled(fixedDelay=xxx), where the task runs every five minutes after the previous run finishes.
EDIT - I've created the task using CloudFormation, not the CLI.
This solution works if you are using CloudWatch Logging for your ECS application:
- Have your script emit a 'task completed' or 'script successfully completed running' message so you can track it later on.
- Using the describeLogStreams function, first retrieve the latest log stream. This will be the stream that was created for the task which ran 5 minutes ago in your case.
- Once you have the name of the stream, check the last few logged events (text printed in the stream) to see whether they contain the task-completed message your stream should have printed. Use the getLogEvents function for this.
- If they don't, don't launch the next task; wait or handle as needed.
- Schedule your script to run every 5 minutes as you would normally.
API links to the aws-sdk docs are below. This script is written in JS and uses the AWS SDK (https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS.html), but you can use boto3 for Python or a different library for other languages.
API ref for describeLogStreams
API ref for getLogEvents
const logGroupName = 'logGroupName';
this.cloudwatchlogs = new AwsSdk.CloudWatchLogs({
apiVersion: '2014-03-28', region: 'us-east-1'
});
// Get the latest log stream for your task's log group.
// Limit results to 1 to get only one stream back.
var descLogStreamsParams = {
logGroupName: logGroupName,
descending: true,
limit: 1,
orderBy: 'LastEventTime'
};
this.cloudwatchlogs.describeLogStreams(descLogStreamsParams, (err, data) => {
// Log Stream for the previous task run..
const latestLogStreamName = data.logStreams[0].logStreamName;
// Call getLogEvents to read from this log stream now..
const getEventsParams = {
logGroupName: logGroupName,
logStreamName: latestLogStreamName,
};
this.cloudwatchlogs.getLogEvents(getEventsParams, (err, data) => {
const latestParsedMessage = JSON.parse(data.events[0].message);
// Loop over this to get last n messages
// ...
});
});
If you are launching the task with the CLI, the run-task command will return the task ARN.
You can then use it to check the status of that task:
aws ecs describe-tasks --cluster MYCLUSTER --tasks TASK-ARN --query 'tasks[0].lastStatus'
It will return RUNNING if it's still running, STOPPED if stopped, etc.
Note that Fargate is very aggressive about harvesting stopped tasks. If that command returns null, you can consider it STOPPED.
I'm running Spark jobs on YARN with spark-submit. After my Spark job fails, the application status still shows SUCCEEDED instead of FAILED. How can I return a failed exit code from my code to YARN?
How can we send YARN a different application status code from the code?
I do not think you can do that. I have experienced the same behavior with spark-1.6.2, but after analyzing the failures, I don't see any obvious way of sending a "bad" exit code from my application.
I tried sending a non-zero exit code within the code using
System.exit(1);
which did not work as expected, and @gsamaras mentioned the same in his answer. The following workaround worked for me though, using try/catch:
try {
  // job logic here
} catch {
  case e: Exception =>
    log.error("Error " + e)
    throw e
}
I had the same issue and fixed it by changing the deploy mode from client to cluster. If the deploy mode is client, the state will always be FINISHED. See https://issues.apache.org/jira/browse/SPARK-3627
I would like to clarify details of the scheduler.getCurrentlyExecutingJobs() method in Quartz 1.6. I have a job that should have only one instance running at any given moment. It can be triggered to "run Now" from a UI, but if an instance of this job is already running, nothing should happen.
This is how I check whether there is a job running that interests me:
List<JobExecutionContext> currentJobs = scheduler.getCurrentlyExecutingJobs();
for (JobExecutionContext jobCtx : currentJobs) {
    jobName = jobCtx.getJobDetail().getName();
    groupName = jobCtx.getJobDetail().getGroup();
    if (jobName.equalsIgnoreCase("job_I_am_looking_for_name") &&
            groupName.equalsIgnoreCase("job_group_I_am_looking_for_name")) {
        // found it!
        logger.warn("the job is already running - do nothing");
    }
}
Then, to test this, I have a unit test that tries to schedule two instances of this job one after the other. I was expecting to see the warning when scheduling the second job; however, instead I'm getting this exception:
org.quartz.ObjectAlreadyExistsException: Unable to store Job with name:
'job_I_am_looking_for_name' and group: 'job_group_I_am_looking_for_name',
because one already exists with this identification.
When I run this unit test in debug mode, with a breakpoint on this line:
List currentJobs = scheduler.getCurrentlyExecutingJobs();
I see that the list is empty - so the scheduler does not see this job as running, but it still fails to schedule it again - which tells me the job was indeed running at the time...
Am I missing some finer points with this scheduler method?
Thanks!
Marina
For the benefit of others, I'm posting an answer to the issue I was having - I got help from Zemian Deng on the Terracotta forum: posting on Terracotta's forum
Here is the recap:
The actual checking of the running jobs was working fine - it was just timing in the unit tests, of course. I added some sleep time to the job and tweaked the unit tests to schedule the second job while the first one was still running - and verified that I could indeed find the first job still running.
The exception I was getting was because I was trying to schedule a new job with the same name, rather than triggering the job already stored in the scheduler. The following code worked exactly as I needed:
List<JobExecutionContext> currentJobs = scheduler.getCurrentlyExecutingJobs();
for (JobExecutionContext jobCtx : currentJobs) {
    jobName = jobCtx.getJobDetail().getName();
    groupName = jobCtx.getJobDetail().getGroup();
    if (jobName.equalsIgnoreCase("job_I_am_looking_for_name") &&
            groupName.equalsIgnoreCase("job_group_I_am_looking_for_name")) {
        // found it!
        logger.warn("the job is already running - do nothing");
        return;
    }
}
// check if this job is already stored in the scheduler
JobDetail emailJob;
emailJob = scheduler.getJobDetail("job_I_am_looking_for_name", "job_group_I_am_looking_for_name");
if (emailJob == null) {
    // this job is not in the scheduler yet
    // create a JobDetail object for my job
    emailJob = jobFactory.getObject();
    emailJob.setName("job_I_am_looking_for_name");
    emailJob.setGroup("job_group_I_am_looking_for_name");
    scheduler.addJob(emailJob, true);
}
// this job is in the scheduler and it is not running right now - run it now
scheduler.triggerJob("job_I_am_looking_for_name", "job_group_I_am_looking_for_name");
Thanks!
Marina