Spring Batch: how to let the Writer know that it received the last entity running through the flow

I have a flow of Reader -> Processor -> Writer.
Every 50 million records, the Writer writes the data into a file and zips it.
The problem is that once the Reader has finished, the Writer still "holds" many records that are not written, since the 50M-record threshold was never reached.
Any advice on how to implement this so that the data is written to many files with 50M records each, plus a single file with the remaining records?

If you use a MultiResourceItemWriter, you can use the chunk size to dictate how this should work. It can be configured to write at your specific threshold, and if there is a remainder in the final chunk, that will also be written out. You can read more about this useful delegate in the documentation here: https://docs.spring.io/spring-batch/trunk/apidocs/org/springframework/batch/item/file/MultiResourceItemWriter.html
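For illustration, a configuration sketch (the `MyRecord` type, bean names, and the delegate writer are placeholders, assuming a `@Configuration` class and a `FlatFileItemWriter<MyRecord>` bean defined elsewhere):

```java
// Sketch: a MultiResourceItemWriter that starts a new file once the current
// one has received 50M items; the smaller remainder is flushed at step end.
@Bean
public MultiResourceItemWriter<MyRecord> multiFileWriter(
        FlatFileItemWriter<MyRecord> delegateWriter) {
    MultiResourceItemWriter<MyRecord> writer = new MultiResourceItemWriter<>();
    writer.setDelegate(delegateWriter);                        // the actual file writer
    writer.setResource(new FileSystemResource("out/records")); // base name; suffixes are appended per file
    writer.setItemCountLimitPerResource(50_000_000);           // roll over after ~50M items
    writer.setSaveState(true);                                 // for restartability
    return writer;
}
```

Note that the limit is only checked at chunk boundaries, so a file can exceed it by up to one chunk; picking a commit interval that divides 50M keeps the file sizes exact. Zipping each completed file could then be done in a subsequent Tasklet step.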

Related

Need advice on storing time series data in aligned 10 minute batches per channel

I have time series data in Kafka. The schema is quite simple - the key is the channel name, and the values are Long/Double tuples of the timestamp and the value (in reality it's a custom Avro object but it boils down to this). They always come in correct chronological order.
The desired end result is data packaged in 10-minute batches, aligned at 10 minutes (i.e., 00:00 < t <= 00:10, 00:10 < t <= 00:20, ..., 23:50 < t <= 00:00). Each package is to contain only data of one channel.
My idea is to have two Spark Streaming jobs. The first one takes the data from the Kafka topics and dumps it to a table in a Cassandra database where the key is the timestamp and the channel name, and every time such an RDD hits a 10 minute boundary, this boundary is posted to another topic, alongside the channel whose boundary is hit.
The second job listens to this "boundary topic", and for every received 10 minute boundary, the data is pulled from Cassandra, some calculations like min, max, mean, stddev are done and the data and these results are packaged to a defined output directory. That way, each directory contains the data from one channel and one 10 minute window.
However, this looks a bit clunky and like a lot of extra work to me. Is this a feasible solution or are there any other more efficient tricks to it, like some custom windowing of the Kafka data?
I agree with your intuition that this solution is clunky. How about simply using the time windowing functionality built into the Streams DSL?
http://kafka.apache.org/11/documentation/streams/developer-guide/dsl-api.html#windowing
The most natural output would be a new topic containing the windowed aggregations, but if you really need it written to a directory that should be possible with Kafka Connect.
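For illustration, a topology sketch in the Streams DSL (topic names, serdes, and the `Stats` accumulator are placeholders, and `TimeWindows.of(...)` takes a long in Kafka 1.1 but a `Duration` in newer APIs):

```java
// Sketch: per-channel 10-minute tumbling windows via the Streams DSL.
StreamsBuilder builder = new StreamsBuilder();
builder.stream("measurements", Consumed.with(Serdes.String(), Serdes.Double()))
       .groupByKey()                                           // key = channel name
       .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(10))) // aligned 10-minute windows
       .aggregate(
           Stats::new,                                  // hypothetical min/max/mean/stddev accumulator
           (channel, value, stats) -> stats.add(value),
           Materialized.with(Serdes.String(), new StatsSerde())) // hypothetical serde
       .toStream()
       .to("windowed-stats"); // one record per channel + 10-minute window
```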
I work with Flink stream processing, not Spark Streaming, but I guess the programming concepts of both are alike. So, supposing the data is ordered chronologically and you want to aggregate it for every 10 minutes and do some processing on the aggregated data, I think the best approach is to use the streaming window functions. I suggest defining a function that maps every incoming record's timestamp to the last 10-minute boundary:
12:10:24 ----> 12:10:00
12:10:30 ----> 12:10:00
12:25:24 ----> 12:20:00
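The mapping above can be written as a small pure function (a sketch, assuming epoch-millisecond timestamps):

```java
// Floors an epoch-millisecond timestamp to the start of its 10-minute window,
// matching the mapping above (12:10:24 -> 12:10:00, 12:25:24 -> 12:20:00).
class WindowAlign {
    static final long TEN_MINUTES_MS = 10 * 60 * 1000L;

    static long windowStart(long timestampMillis) {
        return (timestampMillis / TEN_MINUTES_MS) * TEN_MINUTES_MS;
    }
}
```

One caveat: with a plain floor, a record at exactly 12:10:00 maps to the window starting at 12:10:00; if you need the question's half-open (12:00, 12:10] intervals, floor `timestampMillis - 1` instead.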
So you can create a keyed stream object like:
StreamObject<Long, Tuple<data>>
where the Long field is the mapped timestamp of every message. Then you can apply a window. You should research what kind of window is most appropriate for your case.
Point: Setting a key for the data stream will cause the window function to consider a logical window for every key.
In the simplest case, you would define a time window of 10 minutes and aggregate all the data arriving in that period of time.
The other approach, if you know the rate at which data is generated and how many messages will be generated in a 10-minute period, is to use a count window. For example, a window with a count of 20 will listen to the stream, aggregate all the messages with the same key in a logical window, and apply the window function only when the number of messages in the window reaches 20.
After the messages are aggregated in a window as desired, you can apply your processing logic using a reduce function or a similar operation.
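In Flink terms, the two variants might look like this (the `Measurement` type, its `channel` field, and `merge(...)` are placeholders):

```java
// Sketch: key by channel, then apply a 10-minute tumbling event-time window.
static DataStream<Measurement> tenMinuteWindows(DataStream<Measurement> input) {
    return input
        .keyBy(m -> m.channel)                                 // one logical window per channel
        .window(TumblingEventTimeWindows.of(Time.minutes(10))) // aligned 10-minute windows
        .reduce((a, b) -> a.merge(b)); // or .aggregate(...) for min/max/mean/stddev
}

// Count-window alternative, if the per-window message count is known:
static DataStream<Measurement> countWindows(DataStream<Measurement> input) {
    return input
        .keyBy(m -> m.channel)
        .countWindow(20)               // fire after 20 messages with the same key
        .reduce((a, b) -> a.merge(b));
}
```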

How to do a rollback in Spring Batch for a specific chunk?

Say I have 1000 records being read by the reader and my chunk size is 50. In my processor, after doing some business logic, I delete data from some table by calling a Java stored procedure, and if the deletion fails I increment a failed-deletion count. Finally, when control reaches the writer, if the failed-deletion count is > 0, I want to roll back the entire deletion for this particular chunk, and this should not affect the processing of the other chunks. Can someone please help with this? If you need more information, please let me know.

Spring Batch - Chunk Processing

In my chunk processing, I read one value from a file, and in my processor I pass this value to the DB, which returns 4 records for that single value. I then return those 4 records to the writer, which writes them to the DB. I fail the job on the 3rd of the records returned for the value read from the file. But after the job fails, why are the 3 records not rolled back from the DB?
How does the chunk maintain the transaction? Is it based on the read count and write count of the records?

Passing data from itemreader to processor

How is the data that is read passed from the reader to the ItemProcessor in Spring Batch? Is there a queue where it is put by the ItemReader's read method and consumed by the ItemProcessor? I have to read 10 records at a time from a database and process 5 at a time in the ItemProcessor's process method. The ItemProcessor is currently taking the records one by one, and I want to change it to 5 records at a time in the process method.
Every item that is returned from the read method of a reader will be forwarded to the processor as one item.
If you want to collect a group of items and pass them as a group to the processor, you need a reader that groups them.
You could implement something like a group-wrapper.
I explained such an approach in another answer: Spring Batch Processor
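A framework-free sketch of such a group-wrapper (in real Spring Batch you would implement `ItemReader<List<T>>` and delegate to the single-item reader; the names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Grouping wrapper sketch: returns up to `groupSize` items per read() call and
// null at end of input, mirroring Spring Batch's ItemReader contract. The
// Iterator stands in for the wrapped single-item ItemReader.
class GroupingReader<T> {
    private final Iterator<T> delegate;
    private final int groupSize;

    GroupingReader(Iterator<T> delegate, int groupSize) {
        this.delegate = delegate;
        this.groupSize = groupSize;
    }

    List<T> read() {
        if (!delegate.hasNext()) {
            return null; // end of data, like ItemReader.read()
        }
        List<T> group = new ArrayList<>(groupSize);
        while (group.size() < groupSize && delegate.hasNext()) {
            group.add(delegate.next());
        }
        return group;
    }
}
```

With 10 records from the database and a group size of 5, the processor would then receive two lists of 5 items each.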

In spring batch, how to mark a record a skipped record (without retry) during the writing phase

Spring Batch has a facility to provide a declarative skip policy (i.e., skippable-exception-classes) to state that a particular record should be skipped during batch processing.
This is quite straightforward in the case of ItemReader and ItemProcessor (as they operate on a record-by-record basis).
However, in the case of ItemWriter, when the writing of a record fails (because of a DB constraint violation), I want to skip that record and let the other records go through.
As far as I have researched, I can implement this in two ways:
1) Throw the skippable exception, and Spring Batch will start a retry operation with one item per batch; so if the original batch size is 1000, the batch will call the writer (and the processor, if it's transactional) 1000 times (once for each record) and record the skipCount for each item that fails with the skip exception (which is most probably the same item that failed in the normal operation).
2) Have the ItemWriter catch the SQLException and resume processing the next record until the end of the item list.
The 2nd approach has the problem of losing the statistics about how many records did not go through (i.e., skipped records): the batch will record that all items were successfully written and hence update the write count with an incorrect value.
The 1st approach is a little tricky in my use case, as it involves re-execution of all the items (on the DB side we have complex SPs + triggers) and therefore takes unnecessarily more time.
I am looking for some legitimate alternative to retry that just records the skipped-record count during the writing phase.
If there is none, I will go for the 1st option.
Thanks!
This specifies after how many executions of the writer the transaction is committed:
<chunk ... commit-interval="10"/>
As you want to skip all the items that fail while being persisted to the DB, you need the commit-interval to be 1 in order to actually persist the good items and not have them rolled back along with a bad one.
Assuming the reader sends only one item to the processor (and not a list of 1000), the reader, processor, and writer are executed in order for each item. In this case option 2) is not useful, as the writer always receives only one item.
You can control how the skip count is incremented by calling StepContribution#incrementWriteCount and the other increment*Count methods of that class.
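A framework-free sketch of the catch-and-count pattern (in Spring Batch the counter would instead be maintained via StepContribution's increment*Count methods or the StepExecution; all names here are hypothetical):

```java
import java.util.List;

// Catch-and-count sketch: write items one by one, swallow per-item failures,
// and keep a write-skip counter instead of silently losing the statistic.
class SkipCountingWriter<T> {
    interface ItemSink<T> {               // stands in for the delegate ItemWriter
        void write(T item) throws Exception;
    }

    private final ItemSink<T> delegate;
    private int writeSkipCount = 0;

    SkipCountingWriter(ItemSink<T> delegate) {
        this.delegate = delegate;
    }

    void write(List<? extends T> items) {
        for (T item : items) {
            try {
                delegate.write(item);
            } catch (Exception e) {       // e.g. a DB constraint violation
                writeSkipCount++;         // record the skip, keep going
            }
        }
    }

    int getWriteSkipCount() {
        return writeSkipCount;
    }
}
```

This recovers the statistics that approach 2) loses, without the per-item re-execution of approach 1).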