How to perform logic after ItemWriter has completed?

Hi all, I'm new to SO and to Spring Batch. I have written a batch job with a Classifier that updates a table in two different ways (i.e. two ItemWriters), depending on what's retrieved through the ItemReader, and all of that is working fine. Now, I want to perform some logic after the ItemWriters are done updating: I want to do some logging and update another table with the same set of data retrieved previously. How can I achieve this? I looked at ItemWriteListener, but it seems it cannot perform data-specific logic. I did some searching but with no luck. Any help would be appreciated. Thanks in advance!!

You can try implementing StepExecutionListener in your writer class to execute logic once the ItemWriter is done with its execution. Below is a snippet of such an ItemWriter for your reference:
import java.util.List;

import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;
import org.springframework.batch.item.ItemWriter;

public class TestWriter implements ItemWriter<Test>, StepExecutionListener {

    @Override
    public void beforeStep(StepExecution stepExecution) {
    }

    @Override
    public void write(List<? extends Test> items) throws Exception {
        // Logic of the writer
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        // Perform post-write logic here in afterStep, based on your requirements
        // Return a custom exit status based on the run
        return ExitStatus.COMPLETED;
    }
}

Now, I want to perform some logic after the ItemWriters are done updating. I want to do some logging and update another table with the same set of data retrieved previously. How can I achieve this? I looked at ItemWriteListener but it seems it cannot perform data-specific logic.
Since you want to do something with the same items retrieved previously, you need to use an ItemWriteListener#afterWrite, as this method gives you access to the items that have just been written.
EDIT: Added details about the failure case, based on the comments.
If the transaction is rolled back, the method ItemWriteListener#onWriteError will be called. Please find more details about this in the common patterns section.
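For illustration, here is a minimal sketch of such a listener, assuming the same Test item type as the writer above and the pre-5.0 List-based listener signatures:
import java.util.List;

import org.springframework.batch.core.ItemWriteListener;

public class TestWriteListener implements ItemWriteListener<Test> {

    @Override
    public void beforeWrite(List<? extends Test> items) {
        // called just before the items are handed to the writer
    }

    @Override
    public void afterWrite(List<? extends Test> items) {
        // the items have just been written successfully:
        // do your logging and update the other table with the same data here
    }

    @Override
    public void onWriteError(Exception exception, List<? extends Test> items) {
        // called when the write fails and the transaction is rolled back
    }
}
Register the listener on the step (for example via the step builder's listener(...) method) so it is called around every chunk write.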

Related

How do you read meta data before the actual job in Spring Batch

I'm currently designing a Spring Batch application that reads from a table, transforms the data and then writes it to another table.
However, before I begin reading the source table, I need to collect some meta data for the application run (e.g. read the holiday calendar table to determine whether it's a bank holiday). This meta data will not change during the run, so it needs to be read only once, at the very beginning of the application run.
How can this be achieved? Use a JobListener? Configure a separate Job for this and then pass the information to the "actual" job through an ExecutionContext? Configure a separate step that gets only executed once?
Configure a JobExecutionListener to get the information you need and store it on the Job's ExecutionContext.
You can create a listener class that either extends JobExecutionListenerSupport, overriding only the beforeJob method, or is a standalone class with a beforeJob method annotated with @BeforeJob.
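For illustration, a minimal sketch of the first variant; the holiday-calendar lookup is a placeholder for whatever meta data you need to collect:
import java.time.LocalDate;

import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.listener.JobExecutionListenerSupport;

public class MyListener extends JobExecutionListenerSupport {

    @Override
    public void beforeJob(JobExecution jobExecution) {
        // runs once, before any step: read the meta data and store it
        // on the job's ExecutionContext so steps can inject it later
        jobExecution.getExecutionContext().put("bankHoliday", isBankHoliday(LocalDate.now()));
    }

    private boolean isBankHoliday(LocalDate date) {
        // placeholder: query the holiday calendar table here
        return false;
    }
}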
When configuring the job, just add an instance of your custom Listener class to your JobBuilder configuration before adding any steps.
@Bean
public Job myJob() {
    return this.jobBuilderFactory.get("myJob")
            .listener(new MyListener())
            .start(step1())
            .next(step2())
            .next(step3())
            .build();
}
Anything you add to your Job's ExecutionContext can then be injected into any other Processor/Reader/Writer/Step beans that are configured, as long as they are annotated with either @JobScope or @StepScope:
@Bean
@JobScope
public ItemReader<MyItem> myItemReader(
        @Value("#{jobExecutionContext['myDate']}") Date myDate) {
    // ...
}
Component classes work the same way as well:
@Component
@JobScope
static class MyProcessor implements ItemProcessor<ItemA, ItemB> {

    private Date myDate;

    public MyProcessor(
            @Value("#{jobExecutionContext['myDate']}") Date myDate) {
        this.myDate = myDate;
    }

    // ...
}

How to run code before/after cucumber suite?

I'm trying to figure out how to run some code before and after all my Cucumber tests run.
I've been tracking down a bug for a few days where some of our processes create jobs on a server and don't properly clean them up. It's easy to miss, so ideally I don't want engineers to have to manually add a check to every test.
I was hoping there'd be a way to put a hook in before any tests ran to cache how many jobs exist on the server, then a hook at the end to ensure that the value hasn't changed.
I know this isn't really the best way to use cucumber, as that is more of a system test type thing to do, but doing it this way would be the best way to fit it into the existing infrastructure.
Use the @BeforeClass and @AfterClass annotations in your run file.
@RunWith(Cucumber.class)
@Cucumber.Options(
        format = {"json", "<the report file>"},
        features = {"<the feature file>"},
        strict = false,
        glue = {"<package with steps classes>"})
public class TestRunFile {

    @BeforeClass
    public static void getJobNumbersOnServerBeforeStarting() {
        // Implement logic
    }

    @AfterClass
    public static void getJobNumbersOnServerAfterCompletion() {
        // Implement logic
    }
}
How about using tagged hooks?
@Before("@jobCheck")
public void beforeScenario() {
    // actions
}

@After("@jobCheck")
public void afterScenario() {
    // actions
}
And then, for each scenario that requires this check, add @jobCheck before the Scenario definition, as below.
Feature: Some feature description

  @jobCheck
  Scenario: It should process a sentence
    // The steps
More on JVM hooks here: https://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/

Spring step does not run properly when I "fib" the reader, must I use a tasklet?

I'm aware that all Spring Batch chunk-oriented steps need to have a reader and a writer, and optionally a processor. So even though my step only needs a writer, I am also fibbing a reader that does nothing but make Spring happy.
This is based on the solution found here. Is it outdated, or am I missing something?
I have a Spring Batch job that has two chunked steps. My first step, deleteCounts, just deletes all rows from the table so that the second step has a clean slate. This means my first step doesn't need a reader, so I followed the above-linked Stack Overflow solution, created a NoOpItemReader, and added it to my StepBuilder object (code at the bottom).
My writer is mapped to a simple SQL statement that deletes all the rows from the table (code is at the bottom).
My table is not being cleared by the deleteCounts step. I am expecting deleteCounts to delete all rows from the table, yet it does not - and I suspect it's because of my "fibbed" reader, but I'm not sure what I'm doing wrong.
My delete statement:
<delete id="delete">
    DELETE FROM ${schemaname}.DERP
</delete>
My deleteCounts Step:
@Bean
@JobScope
public Step deleteCounts() {
    StepBuilder sb = stepBuilderFactory.get("deleteCounts");
    SimpleStepBuilder<ProcessedCountData, ProcessedCountData> ssb = sb.<ProcessedCountData, ProcessedCountData>chunk(10);
    ssb.reader(noOpItemReader());
    ssb.writer(writerFactory.myBatisBatchWriter(COUNT_DATA_DELETE));
    ssb.startLimit(1);
    ssb.allowStartIfComplete(true);
    return ssb.build();
}
My NoOpItemReader, based on the previously linked Stack Overflow solution:
public NoOpItemReader<? extends ProcessedCountData> noOpItemReader() {
    return new NoOpItemReader<>();
}

// for steps that do not need to read anything
public class NoOpItemReader<T> implements ItemReader<T> {

    @Override
    public T read() throws Exception {
        return null;
    }
}
I left out some MyBatis plumbing, since I know that is working (step 2 is much more involved with the MyBatis stuff, and step 2 is inserting rows just fine; deleting is so simple, it must be something with my step config...).
Your NoOpItemReader returns null. An ItemReader returning null indicates that the input has been exhausted. Since, in your case, that's all it ever returns, the framework assumes there was no input in the first place, so no chunk is ever handed to your writer and the delete statement never runs.
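If you want to keep the chunk-oriented step, one workaround is a reader that emits a single placeholder item before signalling end-of-input, so the writer is invoked exactly once. This is a minimal sketch; the class name and constructor are illustrative, not from the original post:
import org.springframework.batch.item.ItemReader;

public class SingleItemReader<T> implements ItemReader<T> {

    private final T item;
    private boolean alreadyRead = false;

    public SingleItemReader(T item) {
        this.item = item;
    }

    @Override
    public T read() {
        if (alreadyRead) {
            return null; // end of input after the single item
        }
        alreadyRead = true;
        return item; // the writer receives one chunk containing this item
    }
}
That said, as the question title itself hints, a step that only executes a single DELETE statement is arguably a better fit for a Tasklet than for a chunk-oriented step.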

RecyclerView on Multiple Activities

I have an app that depends heavily on RecyclerView. Each activity has a different model and layout, so do I need to write separate adapters for all of them? Or could I have a base adapter that implements onCreateViewHolder and onBindViewHolder, which would reduce the amount of repetitive code? P.S. I also need an onClick listener, so I wanted to include that in the base adapter.
What is the best way? And if I can write a base adapter, please give me some code samples.
Thanks in advance...
Each activity has a different model and layout, so do I need to write separate adapters for all of them?
Adapters are responsible for providing views that represent the items in a data set, used by the RecyclerView. Now, if those items in your RecyclerView are the same across all the Activities, then you can just have a single RecyclerView.Adapter.
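For illustration, a minimal sketch of a generic adapter along those lines; the bind(...) hook and the single shared layout are assumptions, not something from the original post:
import java.util.List;

import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;

import androidx.recyclerview.widget.RecyclerView;

public abstract class BaseAdapter<T> extends RecyclerView.Adapter<BaseAdapter.BaseViewHolder> {

    private final List<T> items;
    private final int layoutRes;

    protected BaseAdapter(List<T> items, int layoutRes) {
        this.items = items;
        this.layoutRes = layoutRes;
    }

    @Override
    public BaseViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        View view = LayoutInflater.from(parent.getContext()).inflate(layoutRes, parent, false);
        return new BaseViewHolder(view);
    }

    @Override
    public void onBindViewHolder(BaseViewHolder holder, int position) {
        bind(holder.itemView, items.get(position)); // each screen supplies its own binding
    }

    @Override
    public int getItemCount() {
        return items.size();
    }

    // subclasses decide how a single item is rendered into the row view
    protected abstract void bind(View itemView, T item);

    static class BaseViewHolder extends RecyclerView.ViewHolder {
        BaseViewHolder(View itemView) {
            super(itemView);
        }
    }
}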
I also need an onClick listener, so I wanted to include that in the base adapter.. What is the best way?
You can check this SO post for the detailed implementation, but I am summarizing it briefly:
RecyclerView recyclerView = findViewById(R.id.recycler);
recyclerView.addOnItemTouchListener(
        new RecyclerItemClickListener(context, new RecyclerItemClickListener.OnItemClickListener() {
            @Override
            public void onItemClick(View view, int position) {
                // do whatever
            }
        })
);
And provide a RecyclerItemClickListener class that implements the RecyclerView.OnItemTouchListener interface.
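A minimal sketch of that class, based on the widely shared SO pattern (the gesture handling is simplified):
import android.content.Context;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.View;

import androidx.recyclerview.widget.RecyclerView;

public class RecyclerItemClickListener implements RecyclerView.OnItemTouchListener {

    public interface OnItemClickListener {
        void onItemClick(View view, int position);
    }

    private final OnItemClickListener listener;
    private final GestureDetector gestureDetector;

    public RecyclerItemClickListener(Context context, OnItemClickListener listener) {
        this.listener = listener;
        // treat a single tap as a click
        this.gestureDetector = new GestureDetector(context, new GestureDetector.SimpleOnGestureListener() {
            @Override
            public boolean onSingleTapUp(MotionEvent e) {
                return true;
            }
        });
    }

    @Override
    public boolean onInterceptTouchEvent(RecyclerView rv, MotionEvent e) {
        View childView = rv.findChildViewUnder(e.getX(), e.getY());
        if (childView != null && listener != null && gestureDetector.onTouchEvent(e)) {
            listener.onItemClick(childView, rv.getChildAdapterPosition(childView));
        }
        return false;
    }

    @Override
    public void onTouchEvent(RecyclerView rv, MotionEvent e) {
    }

    @Override
    public void onRequestDisallowInterceptTouchEvent(boolean disallowIntercept) {
    }
}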
And if I can write a base adapter, please give me some code samples.
You need to clarify what exactly needs to be implemented in the BaseAdapter for anyone to help you.
Yes, it is possible:
holder.button.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        if (modelArrayList.get(position).heading.equals("maps")) {
            Intent intent = new Intent(holder.button.getContext(), MainActivity2.class);
            holder.button.getContext().startActivity(intent);
        }
        if (modelArrayList.get(position).heading.equals("calls")) {
            Intent intent = new Intent(holder.button.getContext(), MainActivity3.class);
            holder.button.getContext().startActivity(intent);
        }
    }
});

spring batch - processor chain

I need to execute seven distinct processes sequentially (one after the other). The data is stored in MySQL. I am thinking of the following options; please correct me if I am wrong, or if there is a better solution.
Requirements:
Read the data from the DB, apply the seven processes (data validation, calculation1, calculation2, etc.), and finally write the processed data to the DB.
The data needs to be processed in chunks.
My solution and issues:
Data read:
Read the data using a JdbcCursorItemReader, because this is the best-performing DB reader - but the SQL is very complex, so I may have to consider a custom ItemReader using JdbcTemplate, which gives me more flexibility in handling the data.
Process:
Define seven steps and chunks, and share the data between the steps using a databean. But this won't be a good idea, because the data is processed in chunks, and after each chunk the step 1 writer will create a new set of data in the databean. When this databean is shared across the other steps, data integrity will be an issue.
Use the StepExecutionContext to share the data between steps. But this may affect performance, as it involves the batch job repository.
Define only one step, with one ItemReader, a chain of processors (the seven processes), and one ItemWriter that writes the processed data to the DB. But I won't be able to administer or monitor each different process; all will be in one step.
The org.springframework.batch.item.support.CompositeItemProcessor is an out-of-the-box component from the Spring Batch framework that supports your requirement, akin to your last option (a single step with a chain of processors). It would allow you to do the following:
- keep separation in your design/solution for reading from the database (ItemReader)
- keep each individual processor's concerns and configuration separate
- allow any individual processor to 'shut down' the chunk by returning null, irrespective of the previous processors
The CompositeItemProcessor iterates over a list of delegates, so it's 'similar' to an action pattern. It's quite useful in the scenario you've described, and it still allows you to leverage the chunk benefits (exception handling, retry, commit policy, etc.).
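For illustration, a minimal sketch of the wiring; Data stands for your item type, and the individual processor beans are placeholders for your seven processes:
@Bean
public CompositeItemProcessor<Data, Data> compositeProcessor() {
    CompositeItemProcessor<Data, Data> composite = new CompositeItemProcessor<>();
    composite.setDelegates(Arrays.asList(
            dataValidationProcessor(), // hypothetical ItemProcessor<Data, Data> beans
            calculation1Processor(),
            calculation2Processor()
            // ... the remaining processors, in execution order
    ));
    return composite;
}
Each delegate receives the output of the previous one, and if any delegate returns null, the item is filtered out of the rest of the chain and the chunk.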
Suggestions:
1) Read the data using a JdbcCursorItemReader.
All out-of-the-box components are a good choice because they already implement the ItemStream interface, which makes your steps restartable. But as you mention, sometimes the request is just too complex, or, like me, you already have a service or DAO that you can reuse.
I would suggest you use the ItemReaderAdapter. It lets you configure a delegate service to call to get your data:
<bean id="MyReader" class="xxx.adapters.MyItemReaderAdapter">
    <property name="targetObject" ref="AnExistingDao" />
    <property name="targetMethod" value="next" />
</bean>
Note that the targetMethod must respect the read contract of ItemReaders (return null when there is no more data).
If your job does not need to be restartable, you can simply use the class org.springframework.batch.item.adapter.ItemReaderAdapter.
But if you need your job to be restartable, you can create your own ItemReaderAdapter, like this:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemStream;
import org.springframework.batch.item.ItemStreamException;
import org.springframework.batch.item.adapter.AbstractMethodInvokingDelegator;

public class MyItemReaderAdapter<T> extends AbstractMethodInvokingDelegator<T> implements ItemReader<T>, ItemStream {

    private static final Logger log = LoggerFactory.getLogger(MyItemReaderAdapter.class);

    private static final String CONTEXT_COUNT_KEY = "count";

    private long currentCount = 0;

    /**
     * @return the return value of the target method.
     */
    @Override
    public T read() throws Exception {
        super.setArguments(new Long[] { currentCount++ });
        return invokeDelegateMethod();
    }

    @Override
    public void open(ExecutionContext executionContext) throws ItemStreamException {
        currentCount = executionContext.getLong(CONTEXT_COUNT_KEY, 0);
    }

    @Override
    public void update(ExecutionContext executionContext) throws ItemStreamException {
        executionContext.putLong(CONTEXT_COUNT_KEY, currentCount);
        log.info("Update stream current count: " + currentCount);
    }

    @Override
    public void close() throws ItemStreamException {
        // nothing to close
    }
}
Because the out-of-the-box ItemReaderAdapter is not restartable, you just create your own that implements ItemStream.
2) Regarding the 7 steps vs. 1 step:
I would go with 1 step with a CompositeItemProcessor on this one. The 7-step options will only bring problems, IMO.
1) 7 steps with a databean: your writers commit into a databean until step 7, then the step 7 writer tries to commit to the real database and boom, error!!! All is lost and the batch must restart from step 1!!
2) 7 steps with the context: could be better, since you will have the state saved in the Spring Batch metadata, BUT it is not good practice to store big data in the Spring Batch metadata!!
3) is the way to go, IMO. ;-)