Spring Batch Footer Validation

I am using Spring Batch to process a file with header, detail, and footer records.
The footer contains the total number of records in the file.
If the detail record count doesn't match the count in the footer, the file should not be processed.
I am using a custom LineTokenizer that processes the header, detail, and footer records. When the footer record is encountered and its count doesn't match the detail record count, I throw an exception.
The problem I am facing is that if the chunk size is set to a small number like 10 and the file has 20 records, the first 10 detail records are persisted to the DB even though the footer count doesn't match the total number of records.
Is there a way to validate the footer count against the number of records in the file before the call to the writer?
Thanks.

What you need is a reader with a footer callback handler defined. I faced a similar problem and this link helped me a lot!
See the last post by Atefeh Zareh, which also includes the XML configuration.
And regarding the first ten records being persisted: you can add a validation step before the main processing step that only checks the header and trailer counts, and keep all persisting logic out of it. If the count check fails, fail the job so that it never reaches the processing step. A minimal sketch of such a validation step is shown below.
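A rough sketch of such a pre-validation Tasklet, assuming the footer is the last line and carries the record count in its fourth comma-separated field; the record prefixes, field position, and class names are illustrative, not taken from the original answer:

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.core.io.Resource;

public class FooterCountValidationTasklet implements Tasklet {

    private Resource resource; // the same input file the main step reads

    public void setResource(Resource resource) {
        this.resource = resource;
    }

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        int detailCount = 0;
        Integer footerCount = null;
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(resource.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.startsWith("H")) {
                    // header record (illustrative prefix): nothing to count
                } else if (line.startsWith("F")) {
                    // footer record (illustrative prefix): read the declared record count
                    footerCount = Integer.valueOf(line.split(",")[3]);
                } else {
                    detailCount++;
                }
            }
        }
        if (footerCount == null || footerCount.intValue() != detailCount) {
            throw new IllegalStateException(
                    "Footer count " + footerCount + " does not match detail count " + detailCount);
        }
        return RepeatStatus.FINISHED;
    }
}

Wire this tasklet into a step that runs before the chunk-oriented step; if it throws, the job fails and the writer of the main step is never invoked.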

Another option is to write our own ItemReader, plus item classes that distinguish header, footer, and data records, and keep track of the count of each record type.
ItemReader class:
public class AggregateItemReader<T> implements ItemStreamReader<ResultHolder> {

    // logger declaration added for completeness
    private static final Logger LOG = LoggerFactory.getLogger(AggregateItemReader.class);

    private ItemStreamReader<AggregateItem<T>> itemReader;

    @Override
    public ResultHolder read() throws Exception {
        ResultHolder holder = new ResultHolder();
        while (process(itemReader.read(), holder)) {
            continue;
        }
        if (!holder.isExhausted()) {
            return holder;
        }
        else {
            return null;
        }
    }

    private boolean process(AggregateItem<T> value, ResultHolder holder) {
        // finish processing if we hit the end of file
        if (value == null) {
            LOG.debug("Exhausted ItemReader");
            holder.setExhausted(true);
            return false;
        }
        // count the header record
        if (value.isHeader()) {
            LOG.debug("Header record detected");
            holder.addHeaderRecordCount();
            return true;
        }
        // the trailer marks the end of the file: store the count it declares
        if (value.isFooter()) {
            LOG.debug("Trailer record detected");
            holder.addTailerRecordCount();
            holder.setFiledRecordCount(value.getFieldSet().readInt(3));
            LOG.debug("Data record count so far: " + holder.getDataRecordCount());
            return false;
        }
        // a plain data record: count it
        holder.addDataRecordCount();
        return true;
    }

    // open(), update() and close() should delegate to the wrapped itemReader
}
And the item class is:
public class AggregateItem<T> {
@SuppressWarnings("unchecked")
public static <T> AggregateItem<T> getData(FieldSet fs) {
return new AggregateItem(fs, false, false, true);
}
@SuppressWarnings("unchecked")
public static <T> AggregateItem<T> getFooter(FieldSet fs) {
return new AggregateItem(fs, false, true, false);
}
@SuppressWarnings("unchecked")
public static <T> AggregateItem<T> getHeader(FieldSet fs) {
return new AggregateItem(fs, true, false, false);
}
private boolean data = false;
private FieldSet fieldSet;
private boolean footer = false;
private boolean header = false;
private T item;
public AggregateItem(FieldSet fs, boolean header, boolean footer, boolean data) {
this(null);
this.header = header;
this.footer = footer;
this.data = data;
this.fieldSet = fs;
}
public AggregateItem(T item) {
super();
this.item = item;
}
public FieldSet getFieldSet() {
return fieldSet;
}
public T getItem() {
return item;
}
public boolean isData() {
return data;
}
public boolean isFooter() {
return footer;
}
public boolean isHeader() {
return header;
}
}
And the ResultHolder class is:
public class ResultHolder {
    private Integer headerRecordCount = 0;
    private Integer dataRecordCount = 0;
    private Integer tailerRecordCount = 0;
    private Integer filedRecordCount; // record count declared in the source file's footer
    private boolean exhausted = false;

    public void addHeaderRecordCount() { headerRecordCount++; }
    public void addDataRecordCount() { dataRecordCount++; }
    public void addTailerRecordCount() { tailerRecordCount++; }
    // plain setters & getters for the fields omitted
}
If you have any doubts, feel free to mail me at sk.baji6@gmail.com.

In iText 7 (Java), how do you update Link text after it's already been added to the document?

I am using iText 7 to build a table of contents for my document. I know all the section names before I start, but I don't know what the page numbers will be. My current process is to create a table on the first page and create all the Link objects with the generic text "GO!". Then, as I add sections, I loop through the link objects and update their text with the page numbers I figure out as I create the document.
However, at the end, what gets written out for the links is "GO!", not the updated page number values I set while creating the rest of the document.
I did set the immediateFlush flag to false when I created the Document.
public class UpdateLinkTest {
PdfDocument pdfDocument = null;
List<Link>links = null;
Color hyperlinkColor = new DeviceRgb(0, 102, 204);
public static void main(String[] args) throws Exception {
List<String[]>notes = new ArrayList<>();
notes.add(new String[] {"me", "title", "this is my text" });
notes.add(new String[] {"me2", "title2", "this is my text 2" });
new UpdateLinkTest().exportPdf(notes, new File("./test2.pdf"));
}
public void exportPdf(List<String[]> notes, File selectedFile) throws Exception {
PdfWriter pdfWriter = new PdfWriter(selectedFile);
pdfDocument = new PdfDocument(pdfWriter);
Document document = new Document(pdfDocument, PageSize.A4, false);
// add the table of contents table
addSummaryTable(notes, document);
// add a page break
document.add(new AreaBreak(AreaBreakType.NEXT_PAGE));
// add the body of the document
addNotesText(notes, document);
document.close();
}
private void addSummaryTable(List<String[]> notes, Document document) {
links = new ArrayList<>();
Table table = new Table(3);
float pageWidth = PageSize.A4.getWidth();
table.setWidth(pageWidth-document.getLeftMargin()*2);
// add header
addCell("Author", table, true);
addCell("Title", table, true);
addCell("Page", table, true);
int count = 0;
for (String[] note : notes) {
addCell(note[0], table, false);
addCell(note[1], table, false);
Link link = new Link("Go!", PdfAction.createGoTo(""+ (count+1)));
links.add(link);
addCell(link, hyperlinkColor, table, false);
count++;
}
document.add(table);
}
private void addNotesText(List<String[]> notes, Document document)
throws Exception {
int count = 0;
for (String[] note : notes) {
int numberOfPages = pdfDocument.getNumberOfPages();
Link link = links.get(count);
link.setText(""+(numberOfPages+1));
Paragraph noteText = new Paragraph(note[2]);
document.add(noteText);
noteText.setDestination(++count+"");
if (note != notes.get(notes.size()-1))
document.add(new AreaBreak(AreaBreakType.NEXT_PAGE));
}
}
private static void addCell(String text, Table table, boolean b) {
Cell c1 = new Cell().add(new Paragraph(text));
table.addCell(c1);
}
private static void addCell(Link text, Color backgroundColor, Table table, boolean b) {
Cell c1 = new Cell().add(new Paragraph(text));
text.setUnderline();
text.setFontColor(backgroundColor);
table.addCell(c1);
}
}
Quite a bit more work needs to be done compared to the code you have now, because changes to the elements don't take effect once you've added them to the document. Setting immediateFlush to false allows you to relayout the elements, but that does not happen automatically. Also, the way you calculate the current page the paragraph will be placed on (int numberOfPages = pdfDocument.getNumberOfPages();) is not bulletproof, because in some cases pages might be added in advance even if the content is not going to be placed on them immediately.
There is a very low-level way to achieve your goal, but with a recent version of iText (7.1.15) there is a simpler way as well, although it still requires some work. Basically, your use case is very similar to the target-counter concept in CSS, with the page counter being the target counter in your case. To support target counters in the pdfHTML add-on we added new capabilities to the layout module, and they can be used directly as well.
To start off, we are going to tie our Link elements to the corresponding Paragraph elements that they will point to. We do this with the ID property in layout:
link.setProperty(Property.ID, String.valueOf(count));
noteText.setProperty(Property.ID, String.valueOf(count));
Next up, we are going to create custom renderers for our Link and Paragraph elements. Those custom renderers will interact with TargetCounterHandler, the new capability in the layout module mentioned above. The idea is that during the layout operation the paragraph remembers the page on which it was placed, and then the corresponding link element (remember, link elements are connected to paragraph elements) asks TargetCounterHandler, during its own layout, which page the corresponding paragraph was placed on. So in a way, TargetCounterHandler is a connector.
Code for custom renderers:
private static class CustomParagraphRenderer extends ParagraphRenderer {
public CustomParagraphRenderer(Paragraph modelElement) {
super(modelElement);
}
@Override
public IRenderer getNextRenderer() {
return new CustomParagraphRenderer((Paragraph) modelElement);
}
@Override
public LayoutResult layout(LayoutContext layoutContext) {
LayoutResult result = super.layout(layoutContext);
TargetCounterHandler.addPageByID(this);
return result;
}
}
private static class CustomLinkRenderer extends LinkRenderer {
public CustomLinkRenderer(Link link) {
super(link);
}
@Override
public LayoutResult layout(LayoutContext layoutContext) {
Integer targetPageNumber = TargetCounterHandler.getPageByID(this, getProperty(Property.ID));
if (targetPageNumber != null) {
setText(String.valueOf(targetPageNumber));
}
return super.layout(layoutContext);
}
@Override
public IRenderer getNextRenderer() {
return new CustomLinkRenderer((Link) getModelElement());
}
}
Don't forget to assign the custom renderers to their elements:
link.setNextRenderer(new CustomLinkRenderer(link));
noteText.setNextRenderer(new CustomParagraphRenderer(noteText));
Now, the other thing we need to do is relayout. You already set immediateFlush to false, and this is needed for relayout to work. Relayout is needed because during the first layout pass we do not yet know all the positions of the paragraphs, but the links will already have been placed on their pages by the time we do know them. So we need a second pass that uses the information about which pages the paragraphs ended up on and sets it on the links.
Relayout is pretty straightforward: once you've added all the content, you just need to call a single dedicated method:
// For now we have to prepare the handler for relayout manually, this is going to be improved
// in future iText versions
((DocumentRenderer)document.getRenderer()).getTargetCounterHandler().prepareHandlerToRelayout();
document.relayout();
One caveat is that, for now, you also need to subclass DocumentRenderer, since there is an additional operation that is not performed under the hood: propagating the target counter handler to the root renderer that will be used for the second layout pass:
// For now we have to create a custom renderer for the root document to propagate the
// target counter handler to the renderer that will be used on the second layout process
// This is going to be improved in future iText versions
private static class CustomDocumentRenderer extends DocumentRenderer {
public CustomDocumentRenderer(Document document, boolean immediateFlush) {
super(document, immediateFlush);
}
@Override
public IRenderer getNextRenderer() {
CustomDocumentRenderer renderer = new CustomDocumentRenderer(document, immediateFlush);
renderer.targetCounterHandler = new TargetCounterHandler(targetCounterHandler);
return renderer;
}
}
document.setRenderer(new CustomDocumentRenderer(document, false));
And now we are done.
Complete code looks as follows:
public class UpdateLinkTest {
PdfDocument pdfDocument = null;
Color hyperlinkColor = new DeviceRgb(0, 102, 204);
public static void main(String[] args) throws Exception {
List<String[]> notes = new ArrayList<>();
notes.add(new String[] {"me", "title", "this is my text" });
notes.add(new String[] {"me2", "title2", "this is my text 2" });
new UpdateLinkTest().exportPdf(notes, new File("./test2.pdf"));
}
public void exportPdf(List<String[]> notes, File selectedFile) throws Exception {
PdfWriter pdfWriter = new PdfWriter(selectedFile);
pdfDocument = new PdfDocument(pdfWriter);
Document document = new Document(pdfDocument, PageSize.A4, false);
document.setRenderer(new CustomDocumentRenderer(document, false));
// add the table of contents table
addSummaryTable(notes, document);
// add a page break
document.add(new AreaBreak(AreaBreakType.NEXT_PAGE));
// add the body of the document
addNotesText(notes, document);
// For now we have to prepare the handler for relayout manually, this is going to be improved
// in future iText versions
((DocumentRenderer)document.getRenderer()).getTargetCounterHandler().prepareHandlerToRelayout();
document.relayout();
document.close();
}
private void addSummaryTable(List<String[]> notes, Document document) {
Table table = new Table(3);
float pageWidth = PageSize.A4.getWidth();
table.setWidth(pageWidth-document.getLeftMargin()*2);
// add header
addCell("Author", table, true);
addCell("Title", table, true);
addCell("Page", table, true);
int count = 0;
for (String[] note : notes) {
addCell(note[0], table, false);
addCell(note[1], table, false);
Link link = new Link("Go!", PdfAction.createGoTo(""+ (count+1)));
link.setProperty(Property.ID, String.valueOf(count));
link.setNextRenderer(new CustomLinkRenderer(link));
addCell(link, hyperlinkColor, table, false);
count++;
}
document.add(table);
}
private void addNotesText(List<String[]> notes, Document document) {
int count = 0;
for (String[] note : notes) {
Paragraph noteText = new Paragraph(note[2]);
noteText.setProperty(Property.ID, String.valueOf(count));
noteText.setNextRenderer(new CustomParagraphRenderer(noteText));
document.add(noteText);
noteText.setDestination(++count+"");
if (note != notes.get(notes.size()-1))
document.add(new AreaBreak(AreaBreakType.NEXT_PAGE));
}
}
private static void addCell(String text, Table table, boolean b) {
Cell c1 = new Cell().add(new Paragraph(text));
table.addCell(c1);
}
private static void addCell(Link text, Color backgroundColor, Table table, boolean b) {
Cell c1 = new Cell().add(new Paragraph(text));
text.setUnderline();
text.setFontColor(backgroundColor);
table.addCell(c1);
}
private static class CustomLinkRenderer extends LinkRenderer {
public CustomLinkRenderer(Link link) {
super(link);
}
@Override
public LayoutResult layout(LayoutContext layoutContext) {
Integer targetPageNumber = TargetCounterHandler.getPageByID(this, getProperty(Property.ID));
if (targetPageNumber != null) {
setText(String.valueOf(targetPageNumber));
}
return super.layout(layoutContext);
}
@Override
public IRenderer getNextRenderer() {
return new CustomLinkRenderer((Link) getModelElement());
}
}
private static class CustomParagraphRenderer extends ParagraphRenderer {
public CustomParagraphRenderer(Paragraph modelElement) {
super(modelElement);
}
@Override
public IRenderer getNextRenderer() {
return new CustomParagraphRenderer((Paragraph) modelElement);
}
@Override
public LayoutResult layout(LayoutContext layoutContext) {
LayoutResult result = super.layout(layoutContext);
TargetCounterHandler.addPageByID(this);
return result;
}
}
// For now we have to create a custom renderer for the root document to propagate the
// target counter handler to the renderer that will be used on the second layout process
// This is going to be improved in future iText versions
private static class CustomDocumentRenderer extends DocumentRenderer {
public CustomDocumentRenderer(Document document, boolean immediateFlush) {
super(document, immediateFlush);
}
@Override
public IRenderer getNextRenderer() {
CustomDocumentRenderer renderer = new CustomDocumentRenderer(document, immediateFlush);
renderer.targetCounterHandler = new TargetCounterHandler(targetCounterHandler);
return renderer;
}
}
}

How to set the max number of records read by a FlatFileItemReader?

My application needs only a fixed number of records to be read and processed. How can I limit this if I am using a FlatFileItemReader?
In a DB-based ItemReader, I return null/an empty list when the max limit is reached.
How can I achieve the same with org.springframework.batch.item.file.FlatFileItemReader?
For the FlatFileItemReader as well as any other ItemReader that extends AbstractItemCountingItemStreamItemReader, there is a maxItemCount property. By configuring this property, the ItemReader will continue to read until either one of the following conditions has been met:
The input has been exhausted.
The number of items read equals the maxItemCount.
In either of the two above conditions, null will be returned by the reader, indicating to Spring Batch that the input is complete.
If you have any custom ItemReader implementations that need to satisfy this requirement, I'd recommend extending the AbstractItemCountingItemStreamItemReader and going from there.
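For example, a minimal configuration sketch in plain Java; the class name, file path, and the use of PassThroughLineMapper are illustrative, the key call is setMaxItemCount, inherited from AbstractItemCountingItemStreamItemReader:

import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.mapping.PassThroughLineMapper;
import org.springframework.core.io.FileSystemResource;

public class LimitedReaderFactory {

    public FlatFileItemReader<String> limitedReader() {
        FlatFileItemReader<String> reader = new FlatFileItemReader<>();
        reader.setResource(new FileSystemResource("input/records.txt")); // illustrative path
        reader.setLineMapper(new PassThroughLineMapper());               // maps each line to a String
        reader.setMaxItemCount(100);                                     // stop after 100 items even if more remain
        return reader;
    }
}

When maxItemCount is reached, read() returns null and Spring Batch treats the input as complete, so no extra handling is needed in the step.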
The best approach is to write a delegate that is responsible for tracking the number of records read and stops after a fixed count; the component should take care of the execution context to allow restartability.
class CountMaxReader<T> implements ItemReader<T>, ItemStream
{
    private int count = 0;
    private int max = 0;
    private ItemReader<T> delegate;

    public void setDelegate(ItemReader<T> delegate) {
        this.delegate = delegate;
    }

    public void setMax(int max) {
        this.max = max;
    }

    @Override
    public T read() throws Exception {
        T next = null;
        if (count < max) {
            next = delegate.read();
            ++count;
        }
        return next;
    }

    @Override
    public void open(ExecutionContext executionContext) throws ItemStreamException {
        ((ItemStream) delegate).open(executionContext);
        count = executionContext.getInt("count", 0);
    }

    @Override
    public void close() throws ItemStreamException {
        ((ItemStream) delegate).close();
    }

    @Override
    public void update(ExecutionContext executionContext) throws ItemStreamException {
        ((ItemStream) delegate).update(executionContext);
        executionContext.putInt("count", count);
    }
}
This works with any reader.
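For instance, with the setters shown above, the delegate could be wired like this (flatFileItemReader is just a placeholder for any reader that also implements ItemStream):

CountMaxReader<String> reader = new CountMaxReader<>();
reader.setDelegate(flatFileItemReader); // placeholder: any ItemReader that is also an ItemStream
reader.setMax(100);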
public class CountMaxFlatFileItemReader extends FlatFileItemReader<Object> {

    private int counter;
    private int maxCount;

    public void setMaxCount(int maxCount) {
        this.maxCount = maxCount;
    }

    @Override
    public Object read() throws Exception {
        counter++;
        if (counter > maxCount) {
            return null; // this stops the reading
        }
        return super.read();
    }
}
Something like this should work; the reader stops reading as soon as null is returned.

Entity Framework + ODATA: side-stepping the pagination

The project I'm working on has the Entity Framework on top of an OData layer. The OData layer has its server-side pagination set to a value of 75. My reading on the subject leads me to believe that this pagination value is used across the board rather than on a per-table basis. The table I'm currently looking to extract all the data from has, of course, more than 75 rows. Using the Entity Framework, my code is simply this:
public IQueryable<ProductColor> GetProductColors()
{
return db.ProductColors;
}
where db is the entity context. This returns the first 75 records. I read somewhere that I could append an inlinecount parameter set to allpages, giving me the following code:
public IQueryable<ProductColor> GetProductColors()
{
return db.ProductColors.AddQueryOption("inlinecount","allpages");
}
However, this too returns 75 rows!
Can anyone shed light on how to truly get all the records regardless of the OData server-side pagination?
Important: I cannot remove the pagination or turn it off! It's extremely valuable in other scenarios where performance is a concern.
Update:
Through some more searching I found an MSDN article that describes how to do this.
I'd love to turn it into a fully generic method, but this was as close as I could get to generic without using reflection:
public IQueryable<T> TakeAll<T>(QueryOperationResponse<T> qor)
{
var collection = new List<T>();
DataServiceQueryContinuation<T> next = null;
QueryOperationResponse<T> response = qor;
do
{
if (next != null)
{
response = db.Execute<T>(next) as QueryOperationResponse<T>;
}
foreach (var elem in response)
{
collection.Add(elem);
}
} while ((next = response.GetContinuation()) != null);
return collection.AsQueryable();
}
calling it like:
public IQueryable<ProductColor> GetProductColors()
{
QueryOperationResponse<ProductColor> response = db.ProductColors.Execute() as QueryOperationResponse<ProductColor>;
var productColors = this.TakeAll<ProductColor>(response);
return productColors.AsQueryable();
}
If you are unable to turn off paging, you will always receive 75 rows per call. You can get all the rows in the following ways:
Add another IQueryable<ProductColor> AllProductColors and modify
public static void InitializeService(DataServiceConfiguration config)
{
config.UseVerboseErrors = true;
config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
config.SetEntitySetPageSize("ProductColors", 75); // note: only the paged entity sets are listed here
config.SetServiceOperationAccessRule("*", ServiceOperationRights.AllRead);
config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
}
Or request ProductColors as many times as needed, following the continuations, for example:
var cat = new NetflixCatalog(new Uri("http://odata.netflix.com/v1/Catalog/"));
var x = from t in cat.Titles
where t.ReleaseYear == 2009
select t;
var response = (QueryOperationResponse<Title>)((DataServiceQuery<Title>)x).Execute();
while (true)
{
foreach (Title title in response)
{
Console.WriteLine(title.Name);
}
var continuation = response.GetContinuation();
if (continuation == null)
{
break;
}
response = cat.Execute(continuation);
}
Or use Rx, as I do, with the following code:
public sealed class DataSequence<TEntry> : IObservable<TEntry>
{
private readonly DataServiceContext context;
private readonly Logger logger = LogManager.GetCurrentClassLogger();
private readonly IQueryable<TEntry> query;
public DataSequence(IQueryable<TEntry> query, DataServiceContext context)
{
this.query = query;
this.context = context;
}
public IDisposable Subscribe(IObserver<TEntry> observer)
{
QueryOperationResponse<TEntry> response;
try
{
response = (QueryOperationResponse<TEntry>)((DataServiceQuery<TEntry>)query).Execute();
if (response == null)
{
return Disposable.Empty;
}
}
catch (Exception ex)
{
logger.Error(ex);
return Disposable.Empty;
}
var initialState = new State
{
CanContinue = true,
Response = response
};
IObservable<TEntry> sequence = Observable.Generate(
initialState,
state => state.CanContinue,
MoveToNextState,
GetCurrentValue,
Scheduler.ThreadPool).Merge();
return new CompositeDisposable(initialState, sequence.Subscribe(observer));
}
private static IObservable<TEntry> GetCurrentValue(State state)
{
if (state.Response == null)
{
return Observable.Empty<TEntry>();
}
return state.Response.ToObservable();
}
private State MoveToNextState(State state)
{
DataServiceQueryContinuation<TEntry> continuation = state.Response.GetContinuation();
if (continuation == null)
{
state.CanContinue = false;
return state;
}
QueryOperationResponse<TEntry> response;
try
{
response = context.Execute(continuation);
}
catch (Exception)
{
state.CanContinue = false;
return state;
}
state.Response = response;
return state;
}
private sealed class State : IDisposable
{
public bool CanContinue { get; set; }
public QueryOperationResponse<TEntry> Response { get; set; }
public void Dispose()
{
CanContinue = false;
}
}
}
So to get any data through OData, create a sequence and Rx does the rest:
var sequence = new DataSequence<Product>(context.Products, context);
sequence.OnErrorResumeNext(Observable.Empty<Product>())
.ObserveOnDispatcher().SubscribeOn(Scheduler.NewThread).Subscribe(AddProduct, logger.Error);
The page size is set by the service author and can be set per entity set (though a service may choose to apply the same page size to all entity sets). There's no way to avoid it from the client, which is by design since it's a security feature.
The inlinecount option asks the server to include the total count of the results (just the number); it doesn't disable the paging.
From the client, the only way to read all the data is to issue the request, which returns the first page and may contain a next link; you then request the next page, and so on, until the last response has no next link.
If you're using the WCF Data Services client library it has support for continuations (the next link) and a simple sample can be found in this blog post (for example): http://blogs.msdn.com/b/phaniraj/archive/2010/04/25/server-driven-paging-with-wcf-data-services.aspx

GWT multiple Activities/Places with one Token

My site has a GWT tree on the left. In the center is a GWT tab bar.
Both parts are implemented as Views/Activities/Places. I have two tokenizers: "m" for the tree and "t" for the tabs.
If I visit one place (goTo()), only that place is used to generate the history token. But I would like to see this: <page>#m:sub/sub/sub;t:map
I actually thought that was the whole idea of Activities & Places. I don't see the point of having multiple tokenizers when only one tokenizer can provide a token at a time.
You cannot display two different tokens #m: and #t: at the same time, just as you cannot be in two places at the same time.
So if both the tabs and the tree are displayed at the same time, then the state of both must be stored in one and the same place.
This is more or less what you need.
public class ExamplePlace extends Place {
public String treePosition = "/";
public int tabIndex = 0;
public ExamplePlace() {
super();
}
public ExamplePlace(String treePosition, int tabIndex) {
this.treePosition = treePosition;
this.tabIndex = tabIndex;
}
@Prefix("overview")
public static class Tokenizer implements PlaceTokenizer<ExamplePlace> {
/**
* parse token to get state
*
*/
@Override
public ExamplePlace getPlace(String token) {
String treePosition = "";
int tabIndex = 0;
String[] states = token.split(";");
for (String state : states) {
String[] mapping = state.split("=");
if (mapping.length == 2) {
if ("t".equals(mapping[0])) {
treePosition = mapping[1];
}
if ("m".equals(mapping[0])) {
try {
tabIndex = Integer.valueOf(mapping[1]);
} catch (Throwable e) {
}
}
}
}
return new ExamplePlace(treePosition, tabIndex);
}
/**
* store state in token
*
*/
@Override
public String getToken(ExamplePlace place) {
StringBuffer sb = new StringBuffer();
if (place.getTreePosition()!=null) {
sb.append("t").append("=").append(place.getTreePosition());
sb.append(";");
}
sb.append("m=").append(place.getTabIndex());
return sb.toString();
}
}
public String getTreePosition() {
return treePosition;
}
public void setTreePosition(String treePosition) {
this.treePosition = treePosition;
}
public int getTabIndex() {
return tabIndex;
}
public void setTabIndex(int tabIndex) {
this.tabIndex = tabIndex;
}
}
This will give you URLs that look like:
index.html#overview:t=/subtree/subtree/leaf;m=2
You might run into trouble with the forward slashes in the token; I'm not sure. Change them to some other character if necessary.
The activity then receives the incoming place and injects its state into the view, as in the sketch below.
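For completeness, a rough sketch of such an activity, assuming a hypothetical ExampleView with setTreePosition and selectTab methods (these names, and ExampleActivity itself, are illustrative and not part of the original answer; imports omitted):

public class ExampleActivity extends AbstractActivity {

    private final ExamplePlace place;
    private final ExampleView view; // assumed view interface exposing the two calls below

    public ExampleActivity(ExamplePlace place, ExampleView view) {
        this.place = place;
        this.view = view;
    }

    @Override
    public void start(AcceptsOneWidget panel, EventBus eventBus) {
        // push both pieces of state carried by the single place into the UI
        view.setTreePosition(place.getTreePosition());
        view.selectTab(place.getTabIndex());
        panel.setWidget(view.asWidget());
    }
}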

SmartGWT ListGridRecord programmatic editing issue

I am using SmartGWT and I have a ListGrid populated with ListGridRecords. If I set a field value on a ListGridRecord programmatically through setAttribute(String fieldName, String value) and refresh the fields through refreshFields(), the value is reflected in the UI. But if I then edit that same ListGridRecord by double-clicking it, the value is lost and removed from the UI.
class FieldRecord extends ListGridRecord {

    public void setID(Long id) {
        setAttribute(Constant.ID, id);
    }

    public void setName(String name) {
        setAttribute(Constant.NAME, name);
    }

    public Long getID() {
        return Long.valueOf(getAttribute(Constant.ID));
    }

    public String getName() {
        return getAttribute(Constant.NAME);
    }
}
class testData {

    FieldDataSource fieldDS = new FieldDataSource();
    FieldRecord fieldRec = new FieldRecord();
    FieldGrid fieldGrid = new FieldGrid();

    public testData() {
        // set some default values on the record, then register it with the data source and grid
        fieldDS.addData(fieldRec);
        fieldGrid.setDataSource(fieldDS);
    }

    public void parseValue() {
        // on some condition, set the record value programmatically;
        // this is the value that is removed when I double-click the record
        fieldRec.setAttribute(Constant.NAME, "high");
    }
}
I assume FieldGrid is a ListGrid.
You should use setFields to attach the ListGridRecord to the ListGrid:
fieldGrid.setFields(fieldRec);
Try setting the ListGrid/FieldGrid's autoSaveEdits to false:
fieldGrid.setAutoSaveEdits(false);
Setting autoSaveEdits to false creates a "mass update" / "mass delete" interaction where edits are retained for all edited cells (across rows if appropriate) until ListGrid.saveEdits is called to save a particular row, or ListGrid.saveAllEdits is called to save all changes in a batch.
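With autoSaveEdits disabled, you decide when the retained edits are persisted, for example from a save button (a small sketch; the IButton and its handler are illustrative additions, the key call is ListGrid.saveAllEdits()):

IButton saveButton = new IButton("Save");
saveButton.addClickHandler(new ClickHandler() {
    @Override
    public void onClick(ClickEvent event) {
        fieldGrid.saveAllEdits(); // persists every pending cell edit in one batch
    }
});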
Update
Use addRowEditorExitHandler on the ListGrid and explicitly apply the new values, as below:
addRowEditorExitHandler(new RowEditorExitHandler() {

    @Override
    public void onRowEditorExit(final RowEditorExitEvent event) {
        Record gridRecord;
        if (event.getRecord() != null) {
            // an existing record was edited: this will be an update operation
            gridRecord = event.getRecord();
        }
        else {
            // no existing record: this will be a new record creation
            gridRecord = new Record();
        }
        if (FieldGrid.this.validateRow(event.getRowNum())) {
            for (Object attribute : event.getNewValues().keySet()) {
                // here you can see all the newly edited values
                gridRecord.setAttribute(String.valueOf(attribute), event.getNewValues().get(attribute));
            }
            // finally you have a record with all unsaved values; send it to the server
            addData(gridRecord);
        }
    }
});