I am observing really bad performance with GWT RequestFactory. For example, a request that takes my service layer 2 seconds to fulfill takes GWT 20 seconds to serialize. My service returns roughly 100 objects that will become EntityProxies, and each of those contains what will become 4 ValueProxies and 2 more EntityProxies (so 100 root-level EntityProxies, 400 ValueProxies, and 200 additional EntityProxies). However, I see the same 10x slowdown on much smaller datasets.
Example of log snippet:
D 2012-10-18 22:42:39.546 ServiceLayerDecorator invoke: Invoking service layer took 2265 ms
D 2012-10-18 22:42:58.957 RequestFactoryServlet doPost: Entire request took 22870 ms
I have added some profiling code to the ServiceLayerDecorator#invoke method and wrapped the entire servlet in a timer. I have profiled the service by itself, and it is indeed returning results in ~2s.
I am using GWT 2.4, but have tested this on GWT 2.5rc1 and GWT 2.5rc2. My backend is on GAE, but I don't think that is playing a role here.
I found this bug filed against 2.4, which seems to be very related. I have manually applied the patch from this google group without any luck.
My domain models look like:
class Trip {
protected Address origin; // becomes ValueProxy
protected Address destination; // becomes ValueProxy
protected Set<TripPassenger> tripPassengers; // Set of ValueProxies
}
class TripPassenger {
protected Passenger passenger;
}
class Passenger {
protected Account account;
}
My question is:
Have I profiled the code correctly and isolated the problem to the GWT serialization?
Could I be doing something wrong that would cause this behavior?
How can I better profile the GWT serialization code to try and figure out the cause?
Have I profiled the code correctly and isolated the problem to the GWT serialization?
RequestFactory uses reflection a whole lot (much more than GWT-RPC for instance), so I'm not really surprised that it causes some perf issues in some cases. And GAE could play a role here.
I believe RequestFactory (the AutoBean part actually) could greatly benefit from code generation at build-time.
Could I be doing something wrong that would cause this behavior?
Check your locators' find and/or isLive methods.
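For instance, if a Locator's find() goes back to the datastore on every call, RequestFactory pays that cost once per returned entity, and possibly again through the default isLive(), which simply delegates to find(). A minimal sketch of where timing could be added, assuming a hypothetical TripLocator with made-up getId()/getVersion() accessors and a loadFromDatastore() helper:

import com.google.web.bindery.requestfactory.shared.Locator;
import java.util.logging.Logger;

public class TripLocator extends Locator<Trip, Long> {
    private static final Logger log = Logger.getLogger("TripLocator");

    @Override
    public Trip find(Class<? extends Trip> clazz, Long id) {
        long start = System.currentTimeMillis();
        Trip trip = loadFromDatastore(id); // hypothetical data-access call
        log.info("find(" + id + ") took " + (System.currentTimeMillis() - start) + " ms");
        return trip;
    }

    @Override
    public boolean isLive(Trip domainObject) {
        long start = System.currentTimeMillis();
        boolean live = super.isLive(domainObject); // default implementation delegates to find()
        log.info("isLive took " + (System.currentTimeMillis() - start) + " ms");
        return live;
    }

    @Override
    public Trip create(Class<? extends Trip> clazz) { return new Trip(); }

    @Override
    public Class<Trip> getDomainType() { return Trip.class; }

    @Override
    public Long getId(Trip domainObject) { return domainObject.getId(); } // assumes an id accessor

    @Override
    public Class<Long> getIdType() { return Long.class; }

    @Override
    public Object getVersion(Trip domainObject) { return domainObject.getVersion(); } // assumes a version accessor

    private Trip loadFromDatastore(Long id) {
        return null; // placeholder for the real GAE lookup
    }
}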
How can I better profile the GWT serialization code to try and figure out the cause?
It would also be interesting to know the time spent deserializing the request, applying the changes, and then serializing the response. And don't forget to subtract from those the time spent in find and isLive.
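One way to get the find/isLive numbers without patching GWT is to register an extra ServiceLayerDecorator and time the hooks RequestFactory calls while building the response; subtracting them from the servlet total leaves the remaining serialization work. A rough sketch, with made-up class names, registered by subclassing the servlet:

import java.util.logging.Logger;
import com.google.web.bindery.requestfactory.server.DefaultExceptionHandler;
import com.google.web.bindery.requestfactory.server.RequestFactoryServlet;
import com.google.web.bindery.requestfactory.server.ServiceLayerDecorator;

class TimingDecorator extends ServiceLayerDecorator {
    private static final Logger log = Logger.getLogger("TimingDecorator");

    @Override
    public boolean isLive(Object domainObject) {
        long start = System.currentTimeMillis();
        try {
            return super.isLive(domainObject);
        } finally {
            log.info("isLive took " + (System.currentTimeMillis() - start) + " ms");
        }
    }

    @Override
    public <T> T loadDomainObject(Class<T> clazz, Object domainId) {
        long start = System.currentTimeMillis();
        try {
            return super.loadDomainObject(clazz, domainId);
        } finally {
            log.info("loadDomainObject(" + clazz.getSimpleName() + ") took "
                    + (System.currentTimeMillis() - start) + " ms");
        }
    }
}

// Map this servlet in web.xml instead of RequestFactoryServlet:
public class TimedRequestFactoryServlet extends RequestFactoryServlet {
    public TimedRequestFactoryServlet() {
        super(new DefaultExceptionHandler(), new TimingDecorator());
    }
}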
In an integration test, my @AfterReturning advice is wrongly executed: in the test I mock the target method to throw a TimeoutException, yet the advice still runs, and the argument passed to the aspect is null.
My advice:
@AfterReturning("execution(* xxxxxx" +
"OrderKafkaProducerService.sendOrderPaidMessage(..)) && " +
"args(order)")
public void orderComplete(CheckoutOrder order) { // order is null when debugging
metricService.orderPaidKafkaSent();
log.trace("Counter inc: order paid kafka"); // this line of log is shown in console
metricService.orderCompleted();
log.trace("Order complete! {}", order.getId()); // this line is not, because NPE
}
And my test:
// mocking
doThrow(new ServiceException(new TimeoutException("A timeout occurred"), FAILED_PRODUCING_ORDER_MESSAGE))
.when(orderKafkaProducerService).sendOrderPaidMessage(any()); // this is where advice is executed, which is wrong
...
// when
(API call with RestAssured, launch a real HTTP call to endpoint; service is called during this process)
// then
verify(orderKafkaProducerService).sendOrderPaidMessage(any(CheckoutOrder.class)); // it should be called
verify(metricService, never()).orderCompleted(); // but we are throwing, not returning, we should not see this advice executed
This test is failing because of NPE(order is null).
While debugging, I found that the advice already executes during the mocking call, and at that point any() has no value yet (it is null), hence the NPE. But I don't think the advice should execute while the mock is being set up. How can I avoid that while testing? This seems absurd to me.
Currently Spring test support does not explicitly handle the situation that an injected mock or spy (which is a proxy subclass via Mockito) might actually be an AOP target later on (i.e. proxied and thus subclassed again via CGLIB).
There are several bug tickets related to this topic for Spring, Spring Boot and Mockito. Nobody has done anything about it yet. I do understand why the Mockito maintainers won't include Spring-specific stuff into their code base, but I do not understand why the Spring people do not improve their testing tools.
Actually when debugging your failing test and inspecting kafkaService, you will find out the following facts:
kafkaService.getClass() is com.example.template.services.KafkaService$MockitoMock$92961867$$EnhancerBySpringCGLIB$$8fc4fe95
kafkaService.getClass().getSuperclass() is com.example.template.services.KafkaService$MockitoMock$92961867
kafkaService.getClass().getSuperclass().getSuperclass() is class com.example.template.services.KafkaService
In other words:
kafkaService is a CGLIB Spring AOP proxy.
The AOP proxy wraps a Mockito spy (probably a ByteBuddy proxy).
The Mockito spy wraps the original object.
Besides, changing the wrapping order to make the Mockito spy the outermost object would not work because CGLIB deliberately makes its overriding methods final, i.e. you cannot extend and override them again. If Mockito was just as restrictive, the hierarchical wrapping would not work at all.
Anyway, what can you do?
Either you use a sophisticated approach like described in this tutorial
or you go for the cheap solution to explicitly unwrap an AOP proxy via AopTestUtils.getTargetObject(Object). You can call this method safely because if the passed candidate object is not a Spring proxy (internally easy to identify because it implements the Advised interface which also gives access to the target object), it just returns the passed object again.
In your case the latter solution would look like this:
@Test
void shouldCompleteHappyPath() {
// fetch spy bean by unwrapping the AOP proxy, if any
KafkaService kafkaServiceSpy = AopTestUtils.getTargetObject(kafkaService);
// given mocked
doNothing().when(kafkaServiceSpy).send(ArgumentMatchers.any());
// when (real request)
testClient.create().expectStatus().isOk();
// then
verify(kafkaServiceSpy).send(ArgumentMatchers.any());
verify(metricService).incKafka();
}
This has the effect that when(kafkaServiceSpy).send(ArgumentMatchers.any()) no longer triggers the aspect advice because kafkaServiceSpy is no longer an AOP proxy. The auto-wired bean kafkaService still is, though, thus AOP gets triggered as expected, but no longer unwantedly while recording the mock interaction.
Actually, for the verification you could even use kafkaService instead of the spy and only unwrap the spy when recording the interaction you want to verify later:
@Test
void shouldCompleteHappyPath() {
// given mocked
doNothing()
.when(
// fetch spy bean by unwrapping the AOP proxy, if any
AopTestUtils.<KafkaService>getTargetObject(kafkaService)
)
.send(ArgumentMatchers.any());
// when(real request)
testClient.create().expectStatus().isOk();
// then
verify(kafkaService).send(ArgumentMatchers.any());
verify(metricService).incKafka();
}
P.S.: Without your MCVE I would never have been able to debug this and find out what the heck was going on. This proves again that asking questions including an MCVE is the best thing you can do for yourself because it helps you get answers to questions which otherwise probably would remain unanswered.
Update: After I mentioned this problem under the similar closed issue Spring Boot #6871, one of the maintainers has himself created Spring Boot #22281, which specifically addresses your problem here. You might want to watch the new issue in order to find out if/when it can be fixed.
First of all, I would like to apologize in case something that already solves my problem exists on the site and I simply failed to find it. That does not mean I did not search; I have actually spent far too much time (days) on this. I am also new here (in the sense that I have never written or replied to SO users before), and I am sorry for any English mistakes.
I should also say that I am new to Java EE.
I am working on WildFly 14, using MySQL.
I am now focusing on a JPA problem.
I have a uniqueness constraint. While running the test that violates it, at the data-source level I get a MySQLIntegrityConstraintViolationException, which is expected. The problem is that the persist() method does not let me catch that exception (I even put Throwable in the catch clause, but nothing). I strictly need to catch it in order to run a crucial procedure in my code (one that indirectly contains a call to .remove()).
By the way, when I try to write that exception in the catch clause, the IDE does not show me the usual window of suggested classes/annotations/etc.; it only suggests creating a class named "MySQLIntegrityConstraintViolationException". Shouldn't working on WildFly with MySQL be enough to get that suggestion?
Not finding a solution, I changed approach: instead of persist(), I used createNativeQuery() with a String describing the insertion as its parameter. It seems to work: it signals the uniqueness violation (good), skips the rest of the TRY block (good), and enters the CATCH block (good). But, again, the exception/error is not clear.
Also, when the code reaches the part that handles the catch and executes what is inside it (which includes a .remove()), it raises the exception:
"Transaction is required to perform this operation (either use a transaction or extended persistence context)", referring to my entityManager.remove() call.
Now I cannot understand: shouldn't JPA/JTA manage transactions automatically?
Moreover, when I later tried to add entityManager.getTransaction().begin() (and commit()), I got an error about trying to manage transactions manually when I am not allowed to; it feels like an endless loop.
[edit]: I am working in a CMT context, so I am only supposed to work with EntityManager and EntityManagerFactory. I tried entityManager.getTransaction().begin() and entityManager.getTransaction().commit(), and it did not work.
[edit']: .getTransaction() (the EntityTransaction object) cannot be used in a CMT context, which is why that did not work.
[edit'']: I have solved the transaction issue by using the transaction handling suited to the CMT context: JTA + CMT requires a TRY-CATCH-FINALLY block, with the database operation in the TRY body and the closing of the EntityManager (em.close()) in the FINALLY body. However, as explained above, I am currently using em.createNativeQuery(), which throws exceptions my application can catch when it fails; I would really like to go back to the .persist() method in my work code (the use of createNativeQuery() is only temporary), so I need to know what to do to be able to catch that MySQLIntegrityConstraintViolationException.
Thanks so much!
It seems I have solved the problem.
Going back to .persist() (and discarding createNativeQuery()), adding em.flush() immediately after em.persist(my_entity_object) helped: once the uniqueness constraint is violated (see above), the raised exception is now catchable, and with a catchable exception I can do as described at the beginning of the post.
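In code, the change amounts to something like this (just a sketch; the entity and field names are made up, and the exact exception type can vary by JPA provider):

try {
    em.persist(myEntity); // only queues the INSERT in the persistence context
    em.flush();           // forces the INSERT to run now, inside the active JTA transaction,
                          // so the constraint violation surfaces here as a catchable exception
} catch (javax.persistence.PersistenceException e) {
    // the provider typically wraps the MySQLIntegrityConstraintViolationException;
    // run the recovery procedure here (the one that eventually calls em.remove(...))
}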
WARNING: I remind you that I am new to Java EE/JPA/JTA. I was "lucky": given my lack of knowledge, I added that em.flush() call on a guess (I do not know how I thought of it). Hence I cannot explain the behaviour myself; I would appreciate any explanation of what might have happened and of how and when the flush() method is meant to be used.
Thanks!
We are using Guava 16.0.1.
With RateLimiter.create(MaxRequestsPerSecond), I cannot burst right away. We would like to allow that, because customer rate limiters are only created on a customer's first request and are cached (there are too many customers to hold all of them in memory).
Ideally, I would just set storedPermits to some number, but I can't seem to do that. Also, the warm-up behaviour only allows roughly 2x or 3x the request rate, so a customer can't even make 3 requests at the same time right out of the gate.
Is there a way to allow a burst right away on creation of the RateLimiter?
thanks,
Dean
Yes. The RateLimiter internals are package-private, so you can extend them yourself.
Create a class in the same package in your own code to access the underlying storedPermits. See an implementation below, and adjust it for the Guava version you are using, as the internal implementation has changed over time.
package com.google.common.util.concurrent;
import com.google.common.util.concurrent.RateLimiter.SleepingStopwatch;
public class RateLimiters {
public static RateLimiter createCharged(double permitsPerSecond, double maxBurstSeconds) {
SmoothRateLimiter.SmoothBursty rateLimiter = new SmoothRateLimiter.SmoothBursty(SleepingStopwatch.createFromSystemTimer(), maxBurstSeconds);
rateLimiter.setRate(permitsPerSecond);
rateLimiter.storedPermits = maxBurstSeconds * permitsPerSecond;
return rateLimiter;
}
}
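Usage would then look something like this (the numbers are only illustrative):

// 10 permits/second with up to 3 seconds' worth of stored permits,
// so a freshly created limiter can absorb a burst of ~30 requests immediately
RateLimiter limiter = RateLimiters.createCharged(10.0, 3.0);
for (int i = 0; i < 3; i++) {
    limiter.acquire(); // served from storedPermits, no waiting right after creation
}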
Our workaround was to pre-create RateLimiters, since there is no way to initialize storedPermits. We grab one (and add a new one), hoping the new one has enough time to build up permits before the next customer arrives, as long as our queue is large enough for bursts of customers coming in.
Suppose I send objects of the following type from GWT client to server through RPC. The objects get stored to a database.
public class MyData2Server implements Serializable
{
private String myDataStr;
public String getMyDataStr() { return myDataStr; }
public void setMyDataStr(String newVal) { myDataStr = newVal; }
}
On the client side, I constrain the field myDataStr to be say 20 character max.
I have been reading about web-application security. If I learned one thing, it is that client data should not be trusted; the server should therefore check the data. So I feel I ought to verify on the server that my field is indeed not longer than 20 characters and otherwise abort the request, since I would know it must be an attack attempt (assuming no bug on the client side, of course).
So my questions are:
How important is it to actually check on the server side my field is not longer than 20 characters? I mean what are the chances/risks of an attack and how bad could the consequences be? From what I have read, it looks like it could go as far as bringing the server down through overflow and denial of service, but not being a security expert, I could be misinterpreting.
Assuming I would not be wasting my time doing the field-size check on the server, how should one accomplish it? I seem to recall reading (sorry I no longer have the reference) that a naive check like
if (myData2ServerObject.getMyDataStr().length() > 20) throw new MyException();
is not the right way. Instead one would need to define (or override?) the method readObject(), something like in here. If so, again how should one do it within the context of an RPC call?
Thank you in advance.
How important is it to actually check on the server side my field is not longer than 20 characters?
It's 100% important, except maybe if you can trust the end-user 100% (e. g. some internal apps).
I mean what are the chances
Generally: Increasing. The exact probability can only be answered for your concrete scenario individually (i. e. no one here will be able to tell you, though I would also be interested in general statistics). What I can say is that tampering is trivially easy. It can be done in the JavaScript code (e. g. using Chrome's built-in dev tools debugger) or by editing the clearly visible HTTP request data.
/risks of an attack and how bad could the consequences be?
The risks can vary. The most direct risk can be evaluated by thinking: "What could you store and do, if you can set any field of any GWT-serializable object to any value?" This is not only about exceeding the size, but maybe tampering with the user ID etc.
From what I have read, it looks like it could go as far as bringing the server down through overflow and denial of service, but not being a security expert, I could be misinterpreting.
This is yet another level to deal with, and cannot be addressed with server side validation within the GWT RPC method implementation.
Instead one would need to define (or override?) the method readObject(), something like in here.
I don't think that's a good approach. It tries to accomplish two things, but can do neither of them very well. There are two kinds of checks on the server side that must be done:
On a low level, when the bytes come in (before they are converted by RemoteServiceServlet to a Java Object). This needs to be dealt with on every server, not only with GWT, and would need to be answered in a separate question (the answer could simply be a server setting for the maximum request size).
On a logical level, after you have the data in the Java object. For this, I would recommend a validation/authorization layer. One of the awesome features of GWT is that you can now use JSR 303 validation on both the server and the client side. It doesn't cover every aspect (you would still have to test for user permissions), but it can cover your "@Size(max = 20)" use case.
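A minimal sketch of what that could look like (the validator setup, the service method name, and the exception thrown on failure are illustrative, not prescriptive):

import java.io.Serializable;
import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.Size;

// shared DTO, so the same constraint is checked on the client and on the server
public class MyData2Server implements Serializable {
    @Size(max = 20)
    private String myDataStr;
    public String getMyDataStr() { return myDataStr; }
    public void setMyDataStr(String newVal) { myDataStr = newVal; }
}

// inside the RPC service implementation (hypothetical method name):
public void storeMyData(MyData2Server myData2ServerObject) {
    Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
    Set<ConstraintViolation<MyData2Server>> violations = validator.validate(myData2ServerObject);
    if (!violations.isEmpty()) {
        throw new IllegalArgumentException("Request data failed validation"); // or your own exception type
    }
    // ... persist to the database ...
}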
I've set up mvc-mini-profiler against my Entity Framework-powered MVC 3 site. Everything is duly configured; Starting profiling in Application_Start, ending it in Application_End and so on. The profiling part works just fine.
However, when I swap my data model object generation to provide profilable versions, performance slows to a crawl. Not every SQL query is affected, but some queries take about 5x the entire page load. (The very first page load after firing up IIS Express takes a bit longer, but this is sustained.)
Negligible time (~2ms tops) is spent querying, executing and "data reading" the SQL, while this line:
var person = dataContext.People.FirstOrDefault(p => p.PersonID == id);
...when wrapped in using(profiler.Step()) is recorded as taking 300-400 ms. I profiled with dotTrace, which confirmed that the time is actually spent in EF as usual (the profilable components do make very brief appearances), only it is taking much longer.
This leads me to believe that the connection or some of its constituent parts are missing sufficient data, making EF perform far worse.
This is what I'm using to make the context object (my edmx model's class is called DataContext):
var conn = ProfiledDbConnection.Get(
/* returns an SqlConnection */CreateConnection());
return CreateObjectContext<DataContext>(conn);
I originally used the mvc-mini-profiler provided ObjectContextUtils.CreateObjectContext method. I dove into it and noticed that it set a wildcard metadata workspace path string. Since I have the database layer isolated to one project and several MVC sites as other projects using the code, those paths have changed and I'd rather be more specific. Also, I thought this was the cause of the performance issue. I duplicated the CreateObjectContext functionality into my own project to provide this, as such:
public static T CreateObjectContext<T>(DbConnection connection) where T : System.Data.Objects.ObjectContext {
var workspace = new System.Data.Metadata.Edm.MetadataWorkspace(
GetMetadataPathsString().Split('|'),
// ^-- returns
// "res://*/Redacted.csdl|res://*/Redacted.ssdl|res://*/Redacted.msl"
new Assembly[] { typeof(T).Assembly });
// The remainder of the method is copied straight from the original,
// and I carried over a duplicate CtorCache too to make this work.
var factory = DbProviderServices.GetProviderFactory(connection);
var itemCollection = workspace.GetItemCollection(System.Data.Metadata.Edm.DataSpace.SSpace);
itemCollection.GetType().GetField("_providerFactory", // <==== big fat ugly hack
BindingFlags.NonPublic | BindingFlags.Instance).SetValue(itemCollection, factory);
var ec = new System.Data.EntityClient.EntityConnection(workspace, connection);
return CtorCache<T, System.Data.EntityClient.EntityConnection>.Ctor(ec);
}
...but it doesn't seem to make much of a difference. The problem still exists whether I use the above hacked version that's more specific with metadata workspace paths or the mvc-mini-profiler provided version. I just thought I'd mention that I've tried this too.
Having exhausted all this, I'm at my wits' end. Once again: when I provide my data context as usual, no performance is lost; when I provide a "profilable" data context, performance plummets for certain queries (and I don't know what determines which ones). What could mvc-mini-profiler be doing wrong? Am I still feeding it the wrong data?
I think this is the same problem as this person ran into.
I just resolved this issue today.
see: http://code.google.com/p/mvc-mini-profiler/issues/detail?id=43
It happened because some of our fancy hacks were not cached well enough. In particular:
var workspace = new System.Data.Metadata.Edm.MetadataWorkspace(
new string[] { "res://*/" },
new Assembly[] { typeof(T).Assembly });
Is a very expensive call, so we need to cache the workspace.
Profiling, by definition, will affect the performance of the application being profiled. The profiler needs to insert its own method calls throughout the application, intercept low-level system calls, and record all that data somewhere (meaning writes to disk). All of those tasks take up precious CPU cycles, memory, and disk access.