Spring AOP + AspectJ: @AfterReturning advice wrongly executed while mocking (before actual execution)

In an integration test, my @AfterReturning advice is executed while I am still stubbing the mock to throw a TimeoutException, and the argument passed to the aspect is null.
My advice:
#AfterReturning("execution(* xxxxxx" +
"OrderKafkaProducerService.sendOrderPaidMessage(..)) && " +
"args(order)")
public void orderComplete(CheckoutOrder order) { // order is null when debugging
metricService.orderPaidKafkaSent();
log.trace("Counter inc: order paid kafka"); // this line of log is shown in console
metricService.orderCompleted();
log.trace("Order complete! {}", order.getId()); // this line is not, because NPE
}
And my test:
// mocking
doThrow(new ServiceException(new TimeoutException("A timeout occurred"), FAILED_PRODUCING_ORDER_MESSAGE))
        .when(orderKafkaProducerService).sendOrderPaidMessage(any()); // this is where the advice is executed, which is wrong
...
// when
(API call with RestAssured, launching a real HTTP call to the endpoint; the service is called during this process)
// then
verify(orderKafkaProducerService).sendOrderPaidMessage(any(CheckoutOrder.class)); // it should be called
verify(metricService, never()).orderCompleted(); // but we are throwing, not returning, so we should not see this advice executed
This test fails because of the NPE (order is null).
While debugging, I found that the advice is already executed during stubbing, and at that point any() has no value yet and is null, hence the NPE. But I don't think the advice should execute while mocking. How can I avoid that while testing? This seems absurd to me.

Currently, Spring test support does not explicitly handle the situation that an injected mock or spy (which is a proxy subclass created by Mockito) might later become an AOP target (i.e. be proxied, and thus subclassed again, via CGLIB).
There are several bug tickets related to this topic for Spring, Spring Boot and Mockito, but nobody has done anything about it yet. I do understand why the Mockito maintainers won't include Spring-specific handling in their code base, but I do not understand why the Spring people do not improve their testing tools.
When debugging your failing test and inspecting kafkaService, you will find the following:
kafkaService.getClass() is com.example.template.services.KafkaService$MockitoMock$92961867$$EnhancerBySpringCGLIB$$8fc4fe95
kafkaService.getClass().getSuperclass() is com.example.template.services.KafkaService$MockitoMock$92961867
kafkaService.getClass().getSuperclass().getSuperclass() is class com.example.template.services.KafkaService
In other words:
kafkaService is a CGLIB Spring AOP proxy.
The AOP proxy wraps a Mockito spy (probably a ByteBuddy proxy).
The Mockito spy wraps the original object.
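You can see these layers for yourself; a minimal sketch (assuming the auto-wired kafkaService bean from the test):
// walk the superclass chain of the injected bean to print the proxy layering:
// Spring CGLIB AOP proxy -> Mockito mock -> original class
for (Class<?> c = kafkaService.getClass(); c != null; c = c.getSuperclass()) {
    System.out.println(c.getName());
}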
Besides, changing the wrapping order to make the Mockito spy the outermost object would not work, because CGLIB deliberately makes its overriding methods final, i.e. you cannot extend and override them again. If Mockito were just as restrictive, the hierarchical wrapping would not work at all.
Anyway, what can you do?
Either you use a sophisticated approach like the one described in this tutorial,
or you go for the cheap solution and explicitly unwrap the AOP proxy via AopTestUtils.getTargetObject(Object). You can call this method safely: if the passed candidate is not a Spring proxy (internally easy to identify, because a Spring proxy implements the Advised interface, which also gives access to the target object), it simply returns the passed object again.
In your case the latter solution would look like this:
@Test
void shouldCompleteHappyPath() {
    // fetch spy bean by unwrapping the AOP proxy, if any
    KafkaService kafkaServiceSpy = AopTestUtils.getTargetObject(kafkaService);
    // given (mocked)
    doNothing().when(kafkaServiceSpy).send(ArgumentMatchers.any());
    // when (real request)
    testClient.create().expectStatus().isOk();
    // then
    verify(kafkaServiceSpy).send(ArgumentMatchers.any());
    verify(metricService).incKafka();
}
This has the effect that when(kafkaServiceSpy).send(ArgumentMatchers.any()) no longer triggers the aspect advice, because kafkaServiceSpy is no longer an AOP proxy. The auto-wired bean kafkaService still is, so AOP gets triggered as expected, but no longer unwantedly while recording the mock interaction.
Actually, for the verification you could even use kafkaService instead of the spy and only unwrap the spy when recording the interaction you want to verify later:
@Test
void shouldCompleteHappyPath() {
    // given (mocked)
    doNothing()
            .when(
                // fetch spy bean by unwrapping the AOP proxy, if any
                AopTestUtils.<KafkaService>getTargetObject(kafkaService)
            )
            .send(ArgumentMatchers.any());
    // when (real request)
    testClient.create().expectStatus().isOk();
    // then
    verify(kafkaService).send(ArgumentMatchers.any());
    verify(metricService).incKafka();
}
P.S.: Without your MCVE I would never have been able to debug this and find out what was going on. This proves again that asking questions with an MCVE is the best thing you can do for yourself, because it helps you get answers to questions which would otherwise probably remain unanswered.
Update: After I mentioned this problem under the similar closed issue Spring Boot #6871, one of the maintainers has himself created Spring Boot #22281, which specifically addresses your problem here. You might want to watch the new issue in order to find out if/when it gets fixed.

Related

How do I access the current span and trace ids in a Spring Reactor context?

In my Spring Boot WebFlux application, I am using Reactor. I have set spring.sleuth.reactor.instrumentation-type=manual and am using @ContinueSpan on a service method. I see from the code that the span is created, started and ended appropriately in a reactive way.
Later, in the Flux I need to extract the trace information.
The code I am using injects Tracer and I use Tracer.currentSpan().context() to try to get to the trace context. I can successfully get the currentSpan() until I hit a call to an R2DBC repository method, after which the currentSpan is null. I can make the span "current" by annotating the repository method, but I do not want to do that unless it's necessary. I'd like to understand the underlying "problem".
I have also looked at CurrentTraceContext in the Reactor subscription context and see that it answers as Tracer does. Backing both appears to be a thread-locally stored trace context.
Oddly, if I look at TraceContext in the subscriber context, the trace, parent, and span ids are there. It appears that WebFluxSleuthOperators.currentTraceContext(Context...) does this, so I have to believe that this is the appropriate vehicle for obtaining the trace context.
So, a few questions:
Is WebFluxSleuthOperators.currentTraceContext(Context...) [TraceContext in the Reactor Subscription context] the proper way to get the up-to-date trace context?
Looking at ReactorSleuthMethodInvocationProcessor, references to the current span and trace context are given to the SpanSubscriber, which puts them in the subscriber context. As mentioned earlier, subscribe() and next() are invoked within the context of that span. Why would a call to an R2DBC repository method effectively "erase" tracer.currentSpan() but leave the trace context alone?
I'd love to understand this a bit more deeply and will look at the source more. But any insight right now is greatly appreciated. Thank you very much in advance.
To access the current span in a Reactor flow in Spring, use the Span or TraceContext found in the Subscriber Context.
Mono.deferContextual(contextView -> Mono.just(contextView.get(TraceContext.class)))
Or, better yet, use WebFluxSleuthOperators.currentTraceContext(Context.of(contextView))
Accessing the span through the Tracer bean may not produce the current span, as the current thread may be different from the one that originated the span and brought it into scope. This was confirmed through debugging.
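For example, a minimal sketch (assuming Spring Cloud Sleuth 3.x, where org.springframework.cloud.sleuth.TraceContext exposes traceId()):
// read the trace id from the Reactor subscriber context instead of the Tracer bean
Mono<String> traceId = Mono.deferContextual(contextView ->
        Mono.just(contextView.get(TraceContext.class).traceId()));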

JPA - JTA - two big problems (on .persist() and on .remove()) - MySQLIntegrityConstraintViolationException

Firstly, I would like to apologize if I could not find something on this site that already solves my problem; that does not mean I fully searched the site, although I have been spending a lot of time (days) on this. I am also new here (in the sense that I have never written/replied to SO users), and I am sorry for my possible English errors.
I have to say I am new to Java EE.
I am working on WildFly 14, using MySQL.
I am now focusing on a JPA problem.
I have a uniqueness constraint. I am doing tests, and while performing the uniqueness-violation test, at the data-source level I get a MySQLIntegrityConstraintViolationException, and that's fine. My problem is that the persist() method does not let me catch the exception (I even put Throwable in the catch clause, but nothing). I strictly need to catch it, in order to manage a crucial procedure in my work's code (one that, indirectly, contains the call to .remove()).
By the way, when trying to write that exception, the IDE does not show me the window of suggested classes/annotations/etc., suggesting only that I create the class "MySQLIntegrityConstraintViolationException". Doesn't working on WildFly with MySQL suffice for getting the suggestion?
Not finding the solution, I decided to change approach: instead of using persist(), I used .createNativeQuery(), passing as parameter a String describing an insertion into the db. Indeed it works: it signals the uniqueness violation (ok!), does not execute the TRY block code (ok!) and goes into the CATCH block (ok!). But, again, the exception/error is not clear.
Also, when the code enters the piece that is in charge of managing the catching and then executes what's inside (and I have a .remove() inside), it raises the exception:
"Transaction is required to perform this operation (either use a transaction or extended persistence context)" --> this refers to my entityManager.remove() execution.
Now I cannot understand: shouldn't JPA/JTA manage the transactions automatically?
Moreover, when I later tried to put entityManager.getTransaction().begin() (and commit()), it complained that I had tried to manage transactions manually when I was not allowed to; it seems an endless loop.
[edit]: I am working in a CMT context, so I am allowed to work with just EntityManager and EntityManagerFactory. I tried entityManager.getTransaction().begin() and entityManager.getTransaction().commit(), and it didn't work.
[edit']: .getTransaction() (the EntityTransaction object) cannot be used in a CMT context; that is why it didn't work.
[edit'']: I have solved the transaction issue by means of the transaction management suited to the CMT context: JTA + CMT requires us to manage the operation with a TRY-CATCH-FINALLY block, putting the operation we want to perform on the database in the TRY body and the closing of the EntityManager object (em.close()) in the FINALLY body. Though, as explained above, I used em.createNativeQuery(), which, when failing, throws exceptions that are catchable in my app; the usage of .createNativeQuery() is temporary, and in my work code I really need to roll back to the .persist() method, so I need to know what to do in order to be able to catch that MySQLIntegrityConstraintViolationException.
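A minimal sketch of that pattern, with placeholder table and variable names:
try {
    em.createNativeQuery("INSERT INTO my_table (unique_col) VALUES (?1)")
            .setParameter(1, value)
            .executeUpdate(); // the INSERT runs immediately, so the violation is raised here
} catch (PersistenceException e) {
    // the uniqueness violation is catchable at this point
} finally {
    em.close(); // the EntityManager is closed in the FINALLY body
}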
Thanks so much!
IT SEEMS I have solved the problem.
Rolling back to the use of .persist() (so, discarding createNativeQuery()) and putting em.flush() JUST AFTER em.persist(my_entity_object) has helped me: once the uniqueness constraint is violated (see above), the raised exception is now catchable, and with the catchable exception I can do as described at the beginning of the post.
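In code, the working version looks roughly like this (a sketch with placeholder names; catching PersistenceException, under which JPA providers wrap driver exceptions such as MySQLIntegrityConstraintViolationException, is my assumption):
try {
    em.persist(my_entity_object);
    em.flush(); // forces the scheduled INSERT to run now, inside the try, instead of at commit time
} catch (PersistenceException e) {
    // the uniqueness violation is catchable here; the e.getCause() chain
    // leads down to the driver's MySQLIntegrityConstraintViolationException
    // ... the crucial procedure (indirectly calling .remove()) goes here
}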
WARNING: I remind you that I am new to Java EE, JPA and JTA. I have been "lucky": given my lack of knowledge, I put that instruction (em.flush()) in by taking a guess (I don't know how I thought of it). Hence, I would not be able to explain the behaviour; I would appreciate, though, any explanation of what could have happened, of how and when the method flush() is used, and so on.
Thanks!

Scala: Is it possible to prepare a regression suite (integration tests) for RESTful APIs with ScalaTest

I am writing a regression suite for APIs using ScalaTest, and I am stuck on the following scenario.
For instance, I have two tests:
test("test-1") {
  // call API-1
  // call API-2
  // call API-3
}
test("test-2") {
  // call API-5
  // call API-6
  // call API-7
}
I have created a generalized function to call the APIs, and I have set up separate JSON files for the URI, method, body and headers.
Now my question: since all these calls are async and return Futures, one way I know to handle this within a single test is flatMap (or a for-comprehension).
But what about the second test? Do I need to block the main thread here, or is there a smarter solution? I can't afford to run multiple cases in parallel due to interdependencies on the resources they use.
It's better for your tests to be executed sequentially; for this, please refer to the ScalaTest user guide on how to deal with Futures.
Play also provides some utilities to handle a Future; the usage is described in its testing documentation.

GWT RequestFactory performance suggestions

I am observing really bad performance when using GWT RequestFactory. For example, a request that takes my service layer 2 seconds to fulfill takes GWT 20 seconds to serialize. My service returns ~100 objects that will become EntityProxies. Each of these objects has what will become 4 ValueProxies and 2 more EntityProxies (100 root-level EntityProxies, 400 ValueProxies and 200 additional EntityProxies). However, I see the same 10x performance degradation on much smaller datasets.
Example of log snippet:
D 2012-10-18 22:42:39.546 ServiceLayerDecorator invoke: Invoking service layer took 2265 ms
D 2012-10-18 22:42:58.957 RequestFactoryServlet doPost: Entire request took 22870 ms
I have added some profiling code to the ServiceLayerDecorator#invoke method and wrapped the entire servlet in a timer. I have profiled the service by itself, and it is indeed returning results in ~2 s.
I am using GWT 2.4, but have tested this on GWT 2.5rc1 and GWT 2.5rc2. My backend is on GAE, but I don't think that is playing a role here.
I found this bug filed against 2.4, which seems to be very related. I have manually applied the patch from this Google group without any luck.
My domain models look like:
class Trip {
    protected Address origin;                    // becomes ValueProxy
    protected Address destination;               // becomes ValueProxy
    protected Set<TripPassenger> tripPassengers; // Set of ValueProxies
}
class TripPassenger {
    protected Passenger passenger;
}
class Passenger {
    protected Account account;
}
My questions are:
Have I profiled the code correctly and isolated the problem to the GWT serialization?
Could I be doing something wrong that would cause this behavior?
How can I better profile the GWT serialization code to try and figure out the cause?
Have I profiled the code correctly and isolated the problem to the GWT serialization?
RequestFactory uses reflection a whole lot (much more than GWT-RPC for instance), so I'm not really surprised that it causes some perf issues in some cases. And GAE could play a role here.
I believe RequestFactory (the AutoBean part actually) could greatly benefit from code generation at build-time.
Could I be doing something wrong that would cause this behavior?
Check your locators' find and/or isLive methods.
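To illustrate what to look for, a hypothetical sketch (extending com.google.web.bindery.requestfactory.shared.Locator; loadFromDatastore and getVersion are placeholders): a Locator whose find() issues one datastore read per returned entity. With ~300 proxies per response, per-entity lookups in find()/isLive() can easily dominate the request time.
public class TripLocator extends Locator<Trip, Long> {
    @Override
    public Trip create(Class<? extends Trip> clazz) {
        return new Trip();
    }

    @Override
    public Trip find(Class<? extends Trip> clazz, Long id) {
        // called once per EntityProxy in the response;
        // a per-id datastore read here multiplies quickly
        return loadFromDatastore(id);
    }

    @Override
    public Class<Trip> getDomainType() {
        return Trip.class;
    }

    @Override
    public Class<Long> getIdType() {
        return Long.class;
    }

    @Override
    public Object getVersion(Trip domainObject) {
        return domainObject.getVersion(); // placeholder version accessor
    }
}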
How can I better profile the GWT serialization code to try and figure out the cause?
It would also be interesting to know the time spent on deserialization of the request, applying changes, and then serialization of the response. And don't forget to subtract from those the time spent in find and isLive.
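If you want to break the server-side time down further, one option (a sketch along the lines of the profiling you already added to invoke) is to time the individual service-layer hooks, e.g. isLive alongside invoke, and register the decorator via ServiceLayer.create:
public class TimingDecorator extends ServiceLayerDecorator {
    @Override
    public Object invoke(Method domainMethod, Object... args) {
        long start = System.nanoTime();
        try {
            return super.invoke(domainMethod, args);
        } finally {
            System.out.println("invoke(" + domainMethod.getName() + ") took "
                    + (System.nanoTime() - start) / 1000000 + " ms");
        }
    }

    @Override
    public boolean isLive(Object domainObject) {
        long start = System.nanoTime();
        try {
            return super.isLive(domainObject);
        } finally {
            System.out.println("isLive took "
                    + (System.nanoTime() - start) / 1000000 + " ms");
        }
    }
}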

Rhino Mocks Calling instead of Recording in NUnit

I am trying to write unit tests for a bit of code involving events. Since I need to raise an event at will, I've decided to rely on Rhino Mocks to do so for me, and then make sure that the results of the events being raised are as expected (when they click a button, values should change in a predictable manner; in this example, the height of the object should decrease).
So, I did a bit of research and realized I need an event raiser for the event in question. Then it's as simple as calling eventraiser.Raise(); and we're good.
The code I've written for obtaining an event raiser follows (written in C#, more or less copied straight off the net):
using (mocks.Record())
{
    MyControl testing = mocks.DynamicMock<MyControl>();
    testing.Controls.Find("MainLabel", false)[0].Click += null;
    LastCall.IgnoreArguments();
    LastCall.Constraints(Rhino.Mocks.Constraints.Is.NotNull());
    Raiser1 = LastCall.GetEventRaiser();
}
I then test it in playback mode:
using (mocks.Playback())
{
    MyControl thingy = new MyControl();
    int temp = thingy.Size.Height;
    Raiser1.Raise();
    Assert.Greater(temp, thingy.Size.Height);
}
The problem is, when I run these tests through NUnit, they fail. An exception is thrown at the line testing.Controls.Find("MainLabel", false)[0].Click += null; complaining about trying to add null to the event listener. Specifically: "System.NullReferenceException: Object reference not set to an instance of an object".
Now, I was under the impression that any code inside the mocks.Record block wouldn't actually be called; it would instead create expectations for calls during playback. However, this is the second instance where I've had a problem like this (the first involved classes/cases that were a lot more complicated), where it appears in NUnit that the code is actually being called normally instead of creating expectations. Can anyone point out what I am doing wrong, or suggest an alternative way to solve the core issue?
I'm not sure, but you might get that behaviour if you haven't made the event virtual in MyControl. If methods, events, or properties aren't virtual, then I don't think DynamicMock can replace their behaviour with recording and playback versions.
Personally, I like to define interfaces for the classes I'm going to mock out and then mock the interface. That way, I'm sure to avoid this kind of problem.