We have several Spring Boot REST APIs with hundreds of endpoints.
Are there any tools or libraries that we can use to monitor specific endpoints, logging the request, response, and timings to a custom database?
Are there any in particular that can be attached to already-running services?
I've heard of Actuator, AOP, and AspectJ, but I'm not sure whether they're what we want.
Thanks
You can create an aspect that logs entry/exit and measures execution time for each method in given packages.
To measure execution time, you can use Spring's StopWatch. However, you have to be careful about the performance impact (this class is not recommended for production environments).
import java.util.Arrays;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.AfterThrowing;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;
import org.springframework.util.StopWatch;
@Aspect
@Component
public class LoggingAspect {
private final Logger log = LoggerFactory.getLogger(this.getClass());
/**
* Pointcut that matches all services and Web REST endpoints.
*/
@Pointcut("within(@org.springframework.stereotype.Service *)" +
" || within(@org.springframework.web.bind.annotation.RestController *)")
public void springBeanPointcut() {
// Method is empty as this is just a Pointcut, the implementations are in the advices.
}
/**
* Pointcut that matches all Spring beans in the application's endpoint packages.
*/
@Pointcut("within(your.pack.num1..*)" +
" || within(your.pack.num2..*)" +
" || within(your.pack.num3..*)")
public void applicationPackagePointcut() {
// Method is empty as this is just a Pointcut, the implementations are in the advices.
}
/**
* Advice that logs methods throwing exceptions.
*
* @param joinPoint join point for advice
* @param e exception
*/
@AfterThrowing(pointcut = "applicationPackagePointcut() && springBeanPointcut()", throwing = "e")
public void logAfterThrowing(JoinPoint joinPoint, Throwable e) {
log.error("Exception in {}.{}() with cause = {}", joinPoint.getSignature().getDeclaringTypeName(),
joinPoint.getSignature().getName(), e.getCause() != null ? e.getCause() : "NULL");
}
/**
* Advice that logs when a method is entered and exited.
*
* @param joinPoint join point for advice
* @return result
* @throws Throwable throws IllegalArgumentException
*/
@Around("applicationPackagePointcut() && springBeanPointcut()")
public Object logAround(ProceedingJoinPoint joinPoint) throws Throwable {
// Always create and start the watch so stop() in the finally block can't hit a null,
// and declare 'result' outside the inner try so it stays in scope for the exit log.
StopWatch stopWatch = new StopWatch();
stopWatch.start();
if (log.isDebugEnabled()) {
log.debug("Enter: {}.{}() with argument[s] = {}", joinPoint.getSignature().getDeclaringTypeName(),
joinPoint.getSignature().getName(), Arrays.toString(joinPoint.getArgs()));
}
try {
Object result;
try {
result = joinPoint.proceed();
} finally {
stopWatch.stop();
}
if (log.isDebugEnabled()) {
log.debug("Exit: {}.{}() with result = {} and execution time {} ms", joinPoint.getSignature().getDeclaringTypeName(),
joinPoint.getSignature().getName(), result, stopWatch.getTotalTimeMillis());
}
return result;
} catch (IllegalArgumentException e) {
log.error("Illegal argument: {} in {}.{}()", Arrays.toString(joinPoint.getArgs()),
joinPoint.getSignature().getDeclaringTypeName(), joinPoint.getSignature().getName());
throw e;
}
}
}
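Note that for the aspect above to be applied, Spring AOP has to be enabled. A minimal sketch, assuming a Spring Boot app with spring-boot-starter-aop on the classpath (Boot auto-configures this, so the explicit annotation is mainly needed outside Boot):

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;

// Enables Spring AOP proxying so component beans like LoggingAspect are applied.
@Configuration
@EnableAspectJAutoProxy
public class AopConfig {
}

From there, persisting to a custom database instead of logging is a matter of replacing the log calls inside the advice with a call to your own repository.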
Currently I'm developing a REST API in Quarkus that makes use of these two dependencies:
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-security</artifactId>
</dependency>
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-smallrye-jwt</artifactId>
</dependency>
As you can see, I'm expecting to receive a JWT token which identifies the requester. The problem appears when trying to customize the UNAUTHORIZED error response: currently I'm receiving one without any body, but I'd like to introduce a JSON structure giving more details about the kind of error.
I've read this documentation about HttpAuthenticationMechanism,
but I'm not sure how to proceed here. Can someone lend me a hand with this problem? Is there any documentation that I'm missing, or a concept I should become accustomed to?
To make this post clearer: I have code that interfaces with the HttpAuthenticationMechanism Quarkus exposes, and I'm using the out-of-the-box solution from the smallrye-jwt dependency, but this dependency returns UNAUTHORIZED responses with an empty body, and I want to return a simple JSON structure giving more details about the error. Is it possible?
If anyone ends up in the same situation I was in: what I wanted to do was send back a tailored error message, and I ended up with the following approach.
@Alternative
@Priority(1)
@ApplicationScoped
@JBossLog
public class AuthMechanism implements HttpAuthenticationMechanism {
private static final String PARSING_JWT_ERROR = "Authentication failed! Error message \"{0}\"";
private static final String UNKNOWN_ERROR = "Unknown error while validating token. "
+ "Message from the exception \"{0}\"";
@Inject
JWTAuthMechanism delegate;
@Override
public Uni<SecurityIdentity> authenticate(RoutingContext context,
IdentityProviderManager identityProviderManager) {
context.data().put(QuarkusHttpUser.AUTH_FAILURE_HANDLER, customAuthErrorHandler());
return delegate.authenticate(context, identityProviderManager);
}
@Override
public Uni<ChallengeData> getChallenge(RoutingContext context) {
return delegate.getChallenge(context);
}
@Override
public Uni<Boolean> sendChallenge(RoutingContext context) {
return delegate.sendChallenge(context);
}
@Override
public Set<Class<? extends AuthenticationRequest>> getCredentialTypes() {
return delegate.getCredentialTypes();
}
/**
* <p>
* We override the default error handler implemented by our Quarkus dependencies with the
* sole purpose of tailoring an error DTO with extra information about the problem.
* </p>
* */
private BiConsumer<RoutingContext, Throwable> customAuthErrorHandler() {
return (context, throwable) -> {
throwable = extractRootCause(throwable);
if (throwable instanceof AuthenticationFailedException) {
processFailedAuthentication((AuthenticationFailedException)throwable, context);
} else {
log.errorv(UNKNOWN_ERROR, throwable.getMessage());
String bodyResponse = ErrorDtoGenerator.generateErrorDto(
HttpResponseStatus.UNAUTHORIZED.reasonPhrase(),
"Your token is not valid", "", Severity.ERROR.name());
context.response().setStatusCode(HttpResponseStatus.UNAUTHORIZED.code())
.putHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON)
.putHeader(HttpHeaders.CONTENT_LENGTH, String.valueOf(bodyResponse.length()))
.end(bodyResponse);
}
};
}
/**
* <p>
* It will handle all possible authentication errors regarding the JWT validation.
* </p>
* */
private void processFailedAuthentication(AuthenticationFailedException authenticationError,
RoutingContext context) {
context.response()
.setStatusCode(HttpResponseStatus.UNAUTHORIZED.code())
.putHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON);
if (authenticationError.getCause() != null
&& authenticationError.getCause() instanceof ParseException) {
log.errorv(PARSING_JWT_ERROR, authenticationError.getCause().getCause().getMessage());
String errorBody = ErrorDtoGenerator.generateErrorDto(
HttpResponseStatus.UNAUTHORIZED.reasonPhrase(),
authenticationError.getCause().getMessage(),
"", Severity.ERROR.name()
);
context.response()
.putHeader(HttpHeaders.CONTENT_LENGTH, String.valueOf(errorBody.length()))
.write(errorBody);
}
context.response().end();
}
private Throwable extractRootCause(Throwable throwable) {
while ((throwable instanceof CompletionException && throwable.getCause() != null)
|| (throwable instanceof CompositeException)) {
if (throwable instanceof CompositeException) {
throwable = ((CompositeException) throwable).getCauses().get(0);
} else {
throwable = throwable.getCause();
}
}
return throwable;
}
}
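ErrorDtoGenerator and Severity are helpers of my own and aren't shown here; a purely hypothetical sketch of what the generator might look like, in case it helps:

// Hypothetical sketch only - the real ErrorDtoGenerator isn't part of this post.
public final class ErrorDtoGenerator {

    private ErrorDtoGenerator() {
    }

    // Builds a small JSON body by hand so the sketch needs no extra dependencies.
    public static String generateErrorDto(String title, String detail,
                                          String field, String severity) {
        return "{\"title\":\"" + title + "\",\"detail\":\"" + detail
                + "\",\"field\":\"" + field + "\",\"severity\":\"" + severity + "\"}";
    }
}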
When two concurrent requests were made to the code below, both requests were able to acquire the lock simultaneously, and hence both executed the block of code.
Sample code (running in production) for reference:
//Starting point for the request
@Override
public void receiveTransferItems(String argument1, String referenceId, List<Item> items, long messageId)
throws Exception {
ParentDTO parent = DAO.lockByReferenceId(referenceId);
if (parent == null) {
throw new Exception(referenceId + " does not exist");
}
updateData(parent);
for (Item item : items) {
receiveItem(parent, parent.getWarehouseId(), item.getItemSKU(), item.getItemStatus(), item.getQtyReceived(), messageId);
}
}
private void updateData(ParentDTO td) throws DropShipException {
//perform some logical processing and then execute update
DAO.update(td);
}
private void receiveItem(ParentDTO td, String warehouseId, String asin, String itemStatus, int quantity, long messageId)
throws Exception {
/**
* perform some logical processing
*
**/
//call is being made to another class to do the rest of the processing
service.receive(td, asin, quantity, condition, container, messageId);
}
@Override
public void receive(
ParentDTO parentDTO,
String asin,
int quantity,
Condition condition,
Container container,
long messageId,
DataAccessor accessor) throws Exception {
List<ChildDTO> childDTOs =
DAO.lockChildDTOItems(parentDTO.getReferenceId(), asin, condition,
CostInfoSource.MANIFEST);
List<ChildDTO> filterItems = DAO
.loadChildDTOItems(parentDTO.getReferenceId(), asin, condition.name());
long totalExpectedQuantity = getTotalExpectedQuantity(filterItems);
long totalReceivedQuantity = getTotalReceivedQuantity(filterItems);
int quantityNormalReceived = 0;
for (ChildDTO tdi : childDTOs) {
int quantityReceived = 0;
if (asinDropShipMsgAction != null) {
quantity -= asinDropShipMsgAction.getInitialQuantity();
quantityNormalReceived += asinDropShipMsgAction.getInitialQuantity();
} else {
quantityReceived = new DBOperationRunner<Integer>(accessor.getSessionManager()) {
@Override
protected Integer doWorkAndReturn() throws Exception {
return normalReceive(tdi, quantityLeft, container, MessageActionType.TS_IN, messageId);
}
}.execute();
}
}
}
private int normalReceive(final ChildDTO childDTO,
int quantity,
final Container container,
final MessageActionType type,
long messageId)
throws Exception {
/**
* perform some business logic
*
* */
DAO.update(childDTO);
return someQuantity;
}
Implementation for lockByReferenceId function:
@Override
public ParentDTO lockByReferenceId(String referenceId) {
Criteria criteria = getCurrentSession().createCriteria(ParentDTO.class)
.add(Restrictions.eq("referenceId", referenceId)).setLockMode(LockMode.UPGRADE_NOWAIT);
return (ParentDTO) criteria.uniqueResult();
}
Implementation of DBOperationRunner class :
public T execute() throws Exception {
T t = null;
Session originalSession = (Session) ThreadLocalContext.get(ThreadLocalContext.CURRENT_SESSION);
try {
ThreadLocalContext.put(ThreadLocalContext.CURRENT_SESSION, sessionManager.getCurrentSession());
sessionManager.beginTransaction();
t = doWorkAndReturn();
sessionManager.commit();
} catch (Exception e) {
try {
sessionManager.rollback();
} catch (Throwable t1) {
logger.error("failed to rollback", t1);
}
throw e;
} finally {
ThreadLocalContext.put(ThreadLocalContext.CURRENT_SESSION, originalSession);
}
return t;
}
Recently I observed an issue in production in which two or more simultaneous requests were able to acquire a lock on the same data at the same time.
I am using Hibernate with the Criteria API as the DB framework, c3p0 as the connection pooling framework, and Postgres as the DB.
Note: this issue is intermittent and only observed for some random concurrent requests, which makes it hard to debug.
I am unable to understand how two concurrent requests were able to lock the same rows simultaneously. Can you please help me identify what is going wrong in this case?
Thanks in advance!
I am trying to figure out the most efficient way to test for the existence of an object in a bucket in Google Cloud Storage.
This is what I am doing now:
try
{
final GcsFileMetadata md = GCS_SERVICE.getMetadata(bm.getFilename());
if (md == null)
{
// do what I need to do here!
}
}
catch (IOException e)
{
L.error(e.getMessage());
}
Because according to the documentation it returns null if the GcsFilename does not exist.
/**
* @param filename The name of the file that you wish to read the metadata of.
* @return The metadata associated with the file, or null if the file does not exist.
* @throws IOException If for any reason the file can't be read.
*/
GcsFileMetadata getMetadata(GcsFilename filename) throws IOException;
Using .list() on a bucket and checking .contains() sounds expensive, but it is explicit in its intention.
Personally I think testing for null to check whether something exists is inelegant and not as direct as GCS_SERVICE.objectExists(fileName), but I guess I don't get to design the GCS client API. I will just create a method to do this test in my API.
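That wrapper can stay tiny. A minimal sketch building on the same GcsService API as above (objectExists is my own name, not part of the client):

// Sketch: a self-documenting existence check built on getMetadata's
// "null means the file does not exist" contract.
public static boolean objectExists(GcsService gcsService, GcsFilename filename)
        throws IOException {
    return gcsService.getMetadata(filename) != null;
}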
Is there a more efficient (as in time) or more self-documenting way to do this test?
Solution
Here is the working solution I ended up with:
@Nonnull
protected Class<T> getEntityType() { return (Class<T>) new TypeToken<T>(getClass()) {}.getRawType(); }
/**
* purge ObjectMetadata records that don't have matching Objects in the GCS anymore.
*/
public void purgeOrphans()
{
ofy().transact(new Work<VoidWork>()
{
@Override
public VoidWork run()
{
try
{
for (final T bm : ofy().load().type(ObjectMetadataEntityService.this.getEntityType()).iterable())
{
final GcsFileMetadata md = GCS_SERVICE.getMetadata(bm.getFilename());
if (md == null)
{
ofy().delete().entity(bm);
}
}
}
catch (IOException e)
{
L.error(e.getMessage());
}
return null;
}
});
}
They added the file.exists() method (this is in the Node.js client library).
const fileExists = _ => {
return file.exists().then((data) => { console.log(data[0]); });
};
fileExists();
// logs a boolean to the console:
// true if the file exists,
// false if the file doesn't exist.
I am trying to launch a job in Spring Batch 2, and I need to pass some information in the job parameters, but I do not want it to count toward the uniqueness of the job instance. For example, I'd want these two sets of parameters to be considered the same instance:
file=/my/file/path,session=1234
file=/my/file/path,session=5678
The idea is that there will be two different servers trying to start the same job, but with different sessions attached to them. I need that session number in both cases. Any ideas?
Thanks!
So, if 'file' is the only attribute that's supposed to be unique and 'session' is used by downstream code, then your problem matches almost exactly what I had. I had a JMS correlation ID that I needed to store in the execution context for later use, and I didn't want it to play into the job parameters' uniqueness. Per Dave Syer, this really wasn't possible, so I took the route of creating the job with the parameters (minus 'session', in your case), and then adding the 'session' attribute to the execution context before anything actually runs.
This gave me access to 'session' downstream but it was not in the job parameters so it didn't affect uniqueness.
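For illustration, reading the value back later could look roughly like this (a sketch with assumed names, e.g. inside a Tasklet):

// Sketch: pulling 'session' back out of the job's execution context.
public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
    String session = (String) chunkContext.getStepContext()
            .getJobExecutionContext().get("session");
    // ... use the session id ...
    return RepeatStatus.FINISHED;
}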
References
https://jira.springsource.org/browse/BATCH-1412
http://forum.springsource.org/showthread.php?104440-Non-Identity-Job-Parameters&highlight=
You'll see from this forum thread that there's no good way to do it (per Dave Syer), but I wrote my own launcher based on SimpleJobLauncher (in fact I delegate to SimpleJobLauncher if the non-overloaded method is called) that has an overloaded run method taking a callback interface, which allows contributing values to the execution context without their being 'true' job parameters. You could do something very similar.
I think the applicable LOC for you is right here:
jobExecution = jobRepository.createJobExecution(job.getName(),
jobParameters);
if (contributor != null) {
if (contributor.contributeTo(jobExecution.getExecutionContext())) {
jobRepository.updateExecutionContext(jobExecution);
}
}
which is where the execution context is contributed to, right after the job execution is created. Hopefully this helps you in your implementation.
public class ControlMJobLauncher implements JobLauncher, InitializingBean {
private static final Logger logger = LoggerFactory.getLogger(ControlMJobLauncher.class);
private JobRepository jobRepository;
private TaskExecutor taskExecutor;
private SimpleJobLauncher simpleLauncher;
public void setJobRepository(JobRepository jobRepository) {
this.jobRepository = jobRepository;
}
public void setTaskExecutor(TaskExecutor taskExecutor) {
this.taskExecutor = taskExecutor;
}
/**
* Optional filter to prevent job launching based on some specific criteria.
* Jobs that are filtered out will return success to ControlM, but will not run
*/
public void setJobFilter(JobFilter jobFilter) {
this.jobFilter = jobFilter;
}
public JobExecution run(final Job job, final JobParameters jobParameters, ExecutionContextContributor contributor)
throws JobExecutionAlreadyRunningException, JobRestartException,
JobInstanceAlreadyCompleteException, JobParametersInvalidException, JobFilteredException {
Assert.notNull(job, "The Job must not be null.");
Assert.notNull(jobParameters, "The JobParameters must not be null.");
//See if job is filtered
if(this.jobFilter != null && !jobFilter.launchJob(job, jobParameters)) {
throw new JobFilteredException(String.format("Job has been filtered by the filter: %s", jobFilter.getFilterName()));
}
final JobExecution jobExecution;
JobExecution lastExecution = jobRepository.getLastJobExecution(job.getName(), jobParameters);
if (lastExecution != null) {
if (!job.isRestartable()) {
throw new JobRestartException("JobInstance already exists and is not restartable");
}
logger.info(String.format("Restarting job %s instance %d", job.getName(), lastExecution.getId()));
}
// Check the validity of the parameters before creating anything
// in the repository...
job.getJobParametersValidator().validate(jobParameters);
/*
* There is a very small probability that a non-restartable job can be
* restarted, but only if another process or thread manages to launch
* <i>and</i> fail a job execution for this instance between the last
* assertion and the next method returning successfully.
*/
jobExecution = jobRepository.createJobExecution(job.getName(),
jobParameters);
if (contributor != null) {
if (contributor.contributeTo(jobExecution.getExecutionContext())) {
jobRepository.updateExecutionContext(jobExecution);
}
}
try {
taskExecutor.execute(new Runnable() {
public void run() {
try {
logger.info("Job: [" + job
+ "] launched with the following parameters: ["
+ jobParameters + "]");
job.execute(jobExecution);
logger.info("Job: ["
+ job
+ "] completed with the following parameters: ["
+ jobParameters
+ "] and the following status: ["
+ jobExecution.getStatus() + "]");
} catch (Throwable t) {
logger.warn(
"Job: ["
+ job
+ "] failed unexpectedly and fatally with the following parameters: ["
+ jobParameters + "]", t);
rethrow(t);
}
}
private void rethrow(Throwable t) {
if (t instanceof RuntimeException) {
throw (RuntimeException) t;
} else if (t instanceof Error) {
throw (Error) t;
}
throw new IllegalStateException(t);
}
});
} catch (TaskRejectedException e) {
jobExecution.upgradeStatus(BatchStatus.FAILED);
if (jobExecution.getExitStatus().equals(ExitStatus.UNKNOWN)) {
jobExecution.setExitStatus(ExitStatus.FAILED
.addExitDescription(e));
}
jobRepository.update(jobExecution);
}
return jobExecution;
}
static interface ExecutionContextContributor {
boolean CONTRIBUTED_SOMETHING = true;
boolean CONTRIBUTED_NOTHING = false;
/**
*
* @param executionContext
* @return true if the execution context was contributed to
*/
public boolean contributeTo(ExecutionContext executionContext);
}
@Override
public void afterPropertiesSet() throws Exception {
Assert.state(jobRepository != null, "A JobRepository has not been set.");
if (taskExecutor == null) {
logger.info("No TaskExecutor has been set, defaulting to synchronous executor.");
taskExecutor = new SyncTaskExecutor();
}
this.simpleLauncher = new SimpleJobLauncher();
this.simpleLauncher.setJobRepository(jobRepository);
this.simpleLauncher.setTaskExecutor(taskExecutor);
this.simpleLauncher.afterPropertiesSet();
}
@Override
public JobExecution run(Job job, JobParameters jobParameters)
throws JobExecutionAlreadyRunningException, JobRestartException,
JobInstanceAlreadyCompleteException, JobParametersInvalidException {
return simpleLauncher.run(job, jobParameters);
}
}
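Hypothetical usage of the launcher above (bean wiring assumed): the contributor puts 'session' into the execution context instead of the job parameters:

// Sketch: 'file' stays an identifying job parameter, while 'session' is
// contributed to the execution context after the execution is created.
JobParameters params = new JobParametersBuilder()
        .addString("file", "/my/file/path")
        .toJobParameters();
JobExecution execution = launcher.run(job, params,
        new ControlMJobLauncher.ExecutionContextContributor() {
            public boolean contributeTo(ExecutionContext executionContext) {
                executionContext.putString("session", "5678");
                return CONTRIBUTED_SOMETHING;
            }
        });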
Starting from Spring Batch 2.2.x, there is support for non-identifying parameters. If you are using CommandLineJobRunner, you can specify non-identifying parameters with a '-' prefix.
For example:
java org.springframework.batch.core.launch.support.CommandLineJobRunner file=/my/file/path -session=5678
If you are using an older version of Spring Batch, you need to migrate your database schema. See the 'Migrating to 2.x.x' section at http://docs.spring.io/spring-batch/getting-started.html.
This is the Jira page of the feature: https://jira.springsource.org/browse/BATCH-1412, and here is the changeset that implements it: https://fisheye.springsource.org/changelog/spring-batch?cs=557515df45c0f596588418d53c3f2bae3781c1c3
In more recent versions of Spring Batch (I am using spring-batch-core:4.3.3), you can use the JobParametersBuilder to specify whether a parameter is identifying or not. For example:
new JobParametersBuilder()
.addString("identifying-param-name", paramValue1)
.addString("non-identifying-param-name", paramValue2, false)
.toJobParameters();
The 'false' in the third argument makes the parameter non-identifying.
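Wired into the question's example, a launch could then look like this (a sketch; jobLauncher and job are assumed to be configured beans):

// Sketch: 'file' identifies the job instance, while 'session' travels
// along without affecting instance identity.
JobParameters params = new JobParametersBuilder()
        .addString("file", "/my/file/path")
        .addString("session", "5678", false) // false = non-identifying
        .toJobParameters();
jobLauncher.run(job, params);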
I have a problem with datasource binding in a ListGrid with smartGWT. I have a GWT-RPC DataSource and I have set it as my datasource:
grid.setDataSource(ds);
On a button click I make some changes in my datasource, generate a new datasource, and rebind it to smartGWT's grid, but it fails. I have tried the grid.redraw() function to redraw the grid.
Below is my class for GwtRpcDataSource:
public abstract class GwtRpcDataSource extends DataSource {
/**
* Creates a new data source which communicates with the server by GWT RPC. It is
* a normal server-side SmartClient data source with the data protocol set to
* <code>DSProtocol.CLIENTCUSTOM</code> ("clientCustom" - natively supported
* by SmartClient but should be added to SmartGWT) and with the data format
* <code>DSDataFormat.CUSTOM</code>.
*/
public GwtRpcDataSource() {
setDataProtocol(DSProtocol.CLIENTCUSTOM);
setDataFormat(DSDataFormat.CUSTOM);
setClientOnly(false);
}
/**
* Executes request to server.
*
* @param request
* <code>DSRequest</code> being processed.
* @return <code>Object</code> data from original request.
*/
@Override
protected Object transformRequest(DSRequest request) {
String requestId = request.getRequestId();
DSResponse response = new DSResponse();
response.setAttribute("clientContext",
request.getAttributeAsObject("clientContext"));
// Assume success
response.setStatus(0);
switch (request.getOperationType()) {
case FETCH:
executeFetch(requestId, request, response);
break;
case ADD:
executeAdd(requestId, request, response);
break;
case UPDATE:
executeUpdate(requestId, request, response);
break;
case REMOVE:
executeRemove(requestId, request, response);
break;
default:
// Operation not implemented.
break;
}
return request.getData();
}
/**
* Executed on <code>FETCH</code> operation.
* <code>processResponse (requestId, response)</code> should be called when
* operation completes (either successful or failure).
*
* @param requestId
* <code>String</code> extracted from
* <code>DSRequest.getRequestId ()</code>.
* @param request
* <code>DSRequest</code> being processed.
* @param response
* <code>DSResponse</code>. <code>setData (list)</code> should be
* called on successful execution of this method.
* <code>setStatus (<0)</code> should be called on failure.
*/
protected abstract void executeFetch(String requestId, DSRequest request,
DSResponse response);
/**
* Executed on <code>ADD</code> operation.
* <code>processResponse (requestId, response)</code> should be called when
* operation completes (either successful or failure).
*
* @param requestId
* <code>String</code> extracted from
* <code>DSRequest.getRequestId ()</code>.
* @param request
* <code>DSRequest</code> being processed.
* <code>request.getData ()</code> contains the record that should be
* added.
* @param response
* <code>DSResponse</code>. <code>setData (list)</code> should be
* called on successful execution of this method. Array should
* contain single element representing added row.
* <code>setStatus (<0)</code> should be called on failure.
*/
protected abstract void executeAdd(String requestId, DSRequest request,
DSResponse response);
/**
* Executed on <code>UPDATE</code> operation.
* <code>processResponse (requestId, response)</code> should be called when
* operation completes (either successful or failure).
*
* @param requestId
* <code>String</code> extracted from
* <code>DSRequest.getRequestId ()</code>.
* @param request
* <code>DSRequest</code> being processed.
* <code>request.getData ()</code> contains the record that should be
* updated.
* @param response
* <code>DSResponse</code>. <code>setData (list)</code> should be
* called on successful execution of this method. Array should
* contain single element representing updated row.
* <code>setStatus (<0)</code> should be called on failure.
*/
protected abstract void executeUpdate(String requestId, DSRequest request,
DSResponse response);
/**
* Executed on <code>REMOVE</code> operation.
* <code>processResponse (requestId, response)</code> should be called when
* operation completes (either successful or failure).
*
* @param requestId
* <code>String</code> extracted from
* <code>DSRequest.getRequestId ()</code>.
* @param request
* <code>DSRequest</code> being processed.
* <code>request.getData ()</code> contains the record that should be
* removed.
* @param response
* <code>DSResponse</code>. <code>setData (list)</code> should be
* called on successful execution of this method. Array should
* contain single element representing removed row.
* <code>setStatus (<0)</code> should be called on failure.
*/
protected abstract void executeRemove(String requestId, DSRequest request,
DSResponse response);
private ListGridRecord getEditedRecord(DSRequest request) {
// Retrieving values before edit
JavaScriptObject oldValues = request
.getAttributeAsJavaScriptObject("oldValues");
// Creating new record for combining old values with changes
ListGridRecord newRecord = new ListGridRecord();
// Copying properties from old record
JSOHelper.apply(oldValues, newRecord.getJsObj());
// Retrieving changed values
JavaScriptObject data = request.getData();
// Apply changes
JSOHelper.apply(data, newRecord.getJsObj());
return newRecord;
}
}
I have extended this abstract class in my own datasource class, named NTDataSource.
public class NTDataSource extends GwtRpcDataSource {
public static int total = 991;
Record[] records;
public NTDataSource() {
}
public void setData(List<NTListGridField> lstFields, Record[] records) {
// setTestData(records);
for (NTListGridField lstField : lstFields) {
if (lstField.getType() == ListGridFieldType.DATE) {
DataSourceDateField dateField = new DataSourceDateField(
lstField.getName());
dateField.setHidden(lstField.getAttributeAsBoolean("visible"));
if (lstField.getName().equals("id")) {
dateField.setHidden(true);
}
addField(dateField);
} else {
DataSourceTextField textField = new DataSourceTextField(
lstField.getName());
textField.setHidden(lstField.getAttributeAsBoolean("visible"));
if (lstField.getName().equals("id")) {
textField.setHidden(true);
textField.setPrimaryKey(true);
}
addField(textField);
}
}
total = records.length;
this.records = records;
}
@Override
protected void executeFetch(String requestId, DSRequest request,
DSResponse response) {
// assume we have 1000 items.
response.setTotalRows(total);
int end = request.getEndRow();
if (end > total) {
end = total;
}
Record returnRecords[] = new Record[end
- request.getStartRow()];
for (int i = request.getStartRow(); i < end; i++) {
ListGridRecord r = new ListGridRecord();
r = (ListGridRecord) records[i];
returnRecords[i - request.getStartRow()] = r;
}
GWT.log(" called from " + request.getStartRow() + " to "
+ request.getEndRow() + " result " + returnRecords.length, null);
response.setData(returnRecords);
processResponse(requestId, response);
}
@Override
protected void executeAdd(String requestId, DSRequest request,
DSResponse response) {
// TODO Auto-generated method stub
}
@Override
protected void executeUpdate(String requestId, DSRequest request,
DSResponse response) {
// TODO Auto-generated method stub
}
@Override
protected void executeRemove(String requestId, DSRequest request,
DSResponse response) {
// TODO Auto-generated method stub
}
}
I have solved this question myself.
The answer is that I need to use the grid.fetchData() method and bind the datasource one more time. I hope it might help someone else.
Try grid.invalidateCache(). This call will clear the current data in the grid and execute the NTDataSource.executeFetch method.
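Combining the two answers above, the rebinding sequence would look roughly like this (a sketch; grid and the rebuilt ds are assumed):

// Sketch: rebind the new datasource, drop the grid's cached rows, and
// refetch so NTDataSource.executeFetch runs against the new data.
grid.setDataSource(ds);
grid.invalidateCache();
grid.fetchData();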