ScalaTest & EasyMock - not failing on empty expectations - Scala

I'm using ScalaTest and EasyMock 3.4 (I've tried some other versions as well).
I'm having issues testing with Akka using EasyMock.
Two strange behaviors:
Tests still pass when expectations are set but NOT called, and I can't get tests to fail when no expectations are set and the method is called anyway.
I'm also seeing weird errors where the failure message says the method was called exactly as it was supposed to be. Why does this fail? The mocks are used inside an actor, so there is obviously async activity involved; I can't seem to use verify :( I'm guessing this is somehow related to the async nature of Akka.
[info] java.lang.AssertionError: Expectation failure on verify:
[info] YourInterface.poll(0, 100): expected: 1, actual: 1
Exercising all of the behavior directly in a test works, but once Akka is introduced I start seeing this erratic behavior, which makes it hard to post code examples.
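For reference, here is a minimal sketch of the kind of setup the question describes (PollerActor, the "poll" message and the YourInterface trait are assumptions reconstructed from the error output, not the original code). The point it illustrates: verify only sees the call once the actor has fully processed the message, so either run the actor on Akka's CallingThreadDispatcher (which handles messages on the calling thread) or otherwise wait for the message to be processed before verifying; verifying while the call is still happening on another thread can produce confusing reports like the one above.
import akka.actor.{Actor, ActorSystem, Props}
import akka.testkit.{CallingThreadDispatcher, TestKit}
import org.easymock.EasyMock
import org.scalatest.{BeforeAndAfterAll, FlatSpecLike}

trait YourInterface {
  def poll(offset: Long, max: Int): Int
}

// actor under test: calls the injected collaborator when told to poll
class PollerActor(backend: YourInterface) extends Actor {
  def receive: Receive = {
    case "poll" => backend.poll(0L, 100)
  }
}

class PollerSpec extends TestKit(ActorSystem("poller-spec"))
    with FlatSpecLike with BeforeAndAfterAll {

  override def afterAll(): Unit = TestKit.shutdownActorSystem(system)

  "PollerActor" should "poll the backend exactly once" in {
    val backend = EasyMock.createMock(classOf[YourInterface])
    EasyMock.expect(backend.poll(0L, 100)).andReturn(1).once()
    EasyMock.replay(backend)

    // CallingThreadDispatcher makes the actor handle the message synchronously
    // on this thread, so the expectation is satisfied before verify runs
    val poller = system.actorOf(
      Props(new PollerActor(backend)).withDispatcher(CallingThreadDispatcher.Id))
    poller ! "poll"

    EasyMock.verify(backend)
  }
}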

Related

Spring AOP + AspectJ: @AfterReturning advice wrongly executed while mocking (before actual execution)

In an integration test, my @AfterReturning advice is wrongly executed while I am mocking the target method to throw a TimeoutException, and the argument passed to the aspect is null.
My advice:
#AfterReturning("execution(* xxxxxx" +
"OrderKafkaProducerService.sendOrderPaidMessage(..)) && " +
"args(order)")
public void orderComplete(CheckoutOrder order) { // order is null when debugging
metricService.orderPaidKafkaSent();
log.trace("Counter inc: order paid kafka"); // this line of log is shown in console
metricService.orderCompleted();
log.trace("Order complete! {}", order.getId()); // this line is not, because NPE
}
And my test:
// mocking
doThrow(new ServiceException(new TimeoutException("A timeout occurred"), FAILED_PRODUCING_ORDER_MESSAGE))
    .when(orderKafkaProducerService).sendOrderPaidMessage(any()); // this is where the advice is executed, which is wrong
...
// when
(API call with RestAssured, launching a real HTTP call to the endpoint; the service is called during this process)
// then
verify(orderKafkaProducerService).sendOrderPaidMessage(any(CheckoutOrder.class)); // it should be called
verify(metricService, never()).orderCompleted(); // but we are throwing, not returning, we should not see this advice executed
This test is failing because of an NPE (order is null).
While debugging I found that the advice had already been executed at mocking time, and at that point any() has no value yet (it is null), hence the NPE. But I don't think the advice should execute while the mock is being set up. How can I avoid that while testing? This seems absurd to me.
Currently Spring test support does not explicitly handle the situation that an injected mock or spy (which is a proxy subclass via Mockito) might actually be an AOP target later on (i.e. proxied and thus subclassed again via CGLIB).
There are several bug tickets related to this topic for Spring, Spring Boot and Mockito. Nobody has done anything about it yet. I do understand why the Mockito maintainers won't include Spring-specific stuff into their code base, but I do not understand why the Spring people do not improve their testing tools.
Actually when debugging your failing test and inspecting kafkaService, you will find out the following facts:
kafkaService.getClass() is com.example.template.services.KafkaService$MockitoMock$92961867$$EnhancerBySpringCGLIB$$8fc4fe95
kafkaService.getClass().getSuperclass() is com.example.template.services.KafkaService$MockitoMock$92961867
kafkaService.getClass().getSuperclass().getSuperclass() is class com.example.template.services.KafkaService
In other words:
kafkaService is a CGLIB Spring AOP proxy.
The AOP proxy wraps a Mockito spy (probably a ByteBuddy proxy).
The Mockito spy wraps the original object.
Besides, changing the wrapping order to make the Mockito spy the outermost object would not work, because CGLIB deliberately makes its overriding methods final, i.e. you cannot extend and override them again. If Mockito were just as restrictive, the hierarchical wrapping would not work at all.
Anyway, what can you do?
Either you use a sophisticated approach like the one described in this tutorial,
or you go for the cheap solution and explicitly unwrap the AOP proxy via AopTestUtils.getTargetObject(Object). You can call this method safely: if the passed candidate object is not a Spring proxy (internally easy to identify because it implements the Advised interface, which also gives access to the target object), it just returns the passed object again.
In your case the latter solution would look like this:
@Test
void shouldCompleteHappyPath() {
    // fetch spy bean by unwrapping the AOP proxy, if any
    KafkaService kafkaServiceSpy = AopTestUtils.getTargetObject(kafkaService);
    // given mocked
    doNothing().when(kafkaServiceSpy).send(ArgumentMatchers.any());
    // when (real request)
    testClient.create().expectStatus().isOk();
    // then
    verify(kafkaServiceSpy).send(ArgumentMatchers.any());
    verify(metricService).incKafka();
}
This has the effect that when(kafkaServiceSpy).send(ArgumentMatchers.any()) no longer triggers the aspect advice because kafkaServiceSpy is no longer an AOP proxy. The auto-wired bean kafkaService still is, though, thus AOP gets triggered as expected, but no longer unwantedly while recording the mock interaction.
Actually, for the verification you could even use kafkaService instead of the spy and only unwrap the spy when recording the interaction you want to verify later:
@Test
void shouldCompleteHappyPath() {
    // given mocked
    doNothing()
        .when(
            // fetch spy bean by unwrapping the AOP proxy, if any
            AopTestUtils.<KafkaService>getTargetObject(kafkaService)
        )
        .send(ArgumentMatchers.any());
    // when (real request)
    testClient.create().expectStatus().isOk();
    // then
    verify(kafkaService).send(ArgumentMatchers.any());
    verify(metricService).incKafka();
}
P.S.: Without your MCVE I would never have been able to debug this and find out what the heck was going on. This proves again that asking questions including an MCVE is the best thing you can do for yourself because it helps you get answers to questions which otherwise probably would remain unanswered.
Update: After I mentioned this problem under the similar closed issue Spring Boot #6871, one of the maintainers created Spring Boot #22281, which specifically addresses your problem here. You might want to watch the new issue in order to find out if/when it gets fixed.

Scala future null pointer match error

New to Scala futures, I try to call a web service like
wsClient.url(baseUrl + url).withHeaders("Content-Type" -> "application/json").post(dataForR).flatMap(parseOutlierResponse)
using the play-ws library.
I validate and map the response as follows: https://gist.github.com/geoHeil/943a18d43279762ad4cdfe9aa2e40770
The main thing is:
Await.result(callAMethodCallingTheFirstSnippet, 5.minutes)
Strangely, this works just fine in the REPL. However, when run via sbt run I get a NullPointerException. I have already verified the JSON response manually: it validates like a breeze, and even the mapping works great. So there must be a problem with the futures I am using, but I am not sure what is wrong. It seems as if the flatMap method is called before there is a result.
Interestingly, if I do not await the result there is no NullPointerException and the parsed result is displayed correctly (however, the program does not exit). But where I really use this code I need to await successful completion in order to work with the result.
Below you will find an illustration of the problem
I do not see any major concern with your code! I made a small test with the following snippet and it seems to work perfectly, both in the REPL and when using sbt run:
WS.clientUrl(s"http://$hostName/api/put").withHeaders(jsonHeaders: _*).post(body).map { r =>
  if (r.status >= 400)
    logger.warn(s"Invalid http response status: ${r.status} \n ${r.body}")
  else
    logger.debug(s"Successfully persisted data. Http response ${r.status}")
}
After more debugging I found that some implicits were in the wrong scope and that the dependent case classes were declared in the wrong order. After moving them into the correct scope (the method performing the request), the null-pointer exception is fixed.
I could only find the "real" error after changing from flatMap to map, which I find very strange. However, both methods now work fine.
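To illustrate the kind of fix described above, here is a hedged sketch (Inner, Outer and OutlierClient are made-up names, not the asker's actual code). One common cause of exactly this symptom is that, with the play-json macros, an implicit Reads val referenced before it has been initialised is still null when captured, which only surfaces at runtime as a NullPointerException, so both the declaration order and the scope matter:
import play.api.libs.json.{JsValue, Json, Reads}
import play.api.libs.ws.WSClient

import scala.concurrent.{ExecutionContext, Future}

case class Inner(value: Double)
case class Outer(id: String, inner: Inner)

class OutlierClient(ws: WSClient, baseUrl: String)(implicit ec: ExecutionContext) {

  def callService(url: String, payload: JsValue): Future[Outer] = {
    // implicits live in the method performing the request, and the Reads for
    // the nested type is declared before the Reads that depends on it
    implicit val innerReads: Reads[Inner] = Json.reads[Inner]
    implicit val outerReads: Reads[Outer] = Json.reads[Outer]

    ws.url(baseUrl + url)
      .withHeaders("Content-Type" -> "application/json")
      .post(payload)
      .map(response => response.json.as[Outer])
  }
}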

Stacktraces on user-instantiated exceptions in Specs2 output

In our tests we make extensive use of stubs, some of which create and throw exceptions. I'm finding that just instantiating an exception causes a stacktrace to show up when our tests run. This means there's a large amount of unnecessary and confusing noise, as we don't care about these 'expected' exceptions. I've scoured the internet but can't find anything about silencing these stacktraces (bear in mind that actual exceptions, thrown elsewhere within the code or by the framework, should still show their traces). Is this a common issue, and has anyone found a way around it?
Sorry, here's an example:
val t = new Throwable("Expected exception")
val service = new AuthenticationService()(ExceptionThrowingClient(t))
Results in a test run with a stacktrace that looks something like this:
java.lang.Throwable: Expected exception
at services.auth.AuthenticationServiceSpec$$anonfun$4.apply(ServiceSpec.scala:104) ~[test-classes/:na]
at services.auth.AuthenticationServiceSpec$$anonfun$4.apply(ServiceSpec.scala:103) ~[test-classes/:na]
at org.specs2.mutable.SideEffectingCreationPaths$$anonfun$executeBlock$1.apply$mcV$sp(FragmentsBuilder.scala:292) ~[specs2-core_2.10-2.3.12.jar:2.3.12]
at org.specs2.mutable.SideEffectingCreationPaths$class.replay(FragmentsBuilder.scala:264) ~[specs2-core_2.10-2.3.12.jar:2.3.12]
at org.specs2.mutable.Specification.replay(Specification.scala:12) ~[specs2-core_2.10-2.3.12.jar:2.3.12]
Which traces back to the point of instantiation within the spec.
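One option for cutting down this noise, sketched here under the assumption that the printed trace is the one captured when the Throwable is constructed (ExpectedFailure is a made-up name, not from the original thread), is to build the expected exceptions with scala.util.control.NoStackTrace, which overrides fillInStackTrace so no trace is recorded and there is nothing for the test output to print:
import scala.util.control.NoStackTrace

// an exception intended purely as stub output: no stack trace is recorded at
// instantiation, so the test report has nothing to show for it
final class ExpectedFailure(msg: String) extends Exception(msg) with NoStackTrace

val t: Throwable = new ExpectedFailure("Expected exception")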

ScalaTest: Auto-skip tests that exceed a timeout

As I have a collection of ScalaTest tests that connect to remote services (some of which may not be available at the time of test execution), I would like a way of marking tests that should be ignored if they exceed a desired time-out.
Indeed, I could enclose the body of a test in a future and have it auto-pass when the time-out is exceeded, but having slow tests silently pass strikes me as risky. It would be better if such a test were explicitly skipped during the test run. So what I would really like is something like the following:
ignorePast(10 seconds) should "execute a service that is sometimes unavailable" in {
  invokeServiceThatIsSometimesUnavailable()
  ....
}
Looking at the ScalaTest documentation, I don't see this feature supported directly, but I suspect there might be a way to add this capability. Indeed, I could just add a "tag" to "slow" tests and tell the runner not to execute them, but I would rather the tests be automatically skipped when the timeout is exceeded.
I believe that's not something your test framework should be responsible for.
Wrap your invokeServiceThatIsSometimesUnavailable() in an exception handling block and you'll be fine.
try {
  invokeServiceThatIsSometimesUnavailable()
} catch {
  case e: YourServiceTimeoutException => reportTheIgnoredTest()
}
I agree with Maciej that exceptions are probably the best way to go, since the timeout happens within your test itself.
There's also assume (see here), which allows you to cancel a test if some prerequisite fails. I think you could also use it within a single test.
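For illustration, a sketch of what that could look like with ScalaTest 3.x (the spec name and invokeServiceThatIsSometimesUnavailable are placeholders): cancelAfter from TimeLimits reports the test as canceled, rather than failed or silently passed, when the body exceeds the limit.
import org.scalatest.concurrent.{Signaler, ThreadSignaler, TimeLimits}
import org.scalatest.flatspec.AnyFlatSpec
import org.scalatest.time.SpanSugar._

class FlakyServiceSpec extends AnyFlatSpec with TimeLimits {

  // interrupts the test thread when the time limit is hit
  implicit val signaler: Signaler = ThreadSignaler

  private def invokeServiceThatIsSometimesUnavailable(): Unit = ()

  "a sometimes-unavailable service" should "be exercised when it answers in time" in {
    // the test is reported as canceled (not failed) if the body takes too long
    cancelAfter(10.seconds) {
      invokeServiceThatIsSometimesUnavailable()
    }
  }
}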

Zend Session Handler Db Table being destructed during PHPUnit test

I've been doing unit testing and I ran into a weird problem.
I'm doing user authentication tests with some of my services/mappers.
I run about 307 tests altogether right now, and this only really happens when I run them all in one batch.
I try to instantiate only one Zend_Application object and use that for all my tests. I only instantiate it to take care of the db connection, the session, and the autoloading of my classes.
Here is the problem.
Somewhere along the line of tests the __destruct method of Zend_Session_SaveHandler_DbTable gets called. I have NO IDEA why, but it does.
The __destruct method will render any writing to my session objects useless because they are marked as read-only.
I have NO clue why the destruct method is being called.
It gets called many tests before my authentication tests. If I run each folder of tests individually there is no problem; it's only when I try to run all 307 tests. I do have some tests that do database work, but my code is not closing the db connections or destructing the save handler.
Does anyone have any ideas on why this would be happening and why my Zend_Session_SaveHandler_DbTable is being destructed? Does this have anything to do with the default lifetime that it has?
I think that what was happening is that PHPUnit was doing garbage collection. Whenever I ran the 307 tests the garbage collector had to run and it probably destroyed the Zend_Session_SaveHandler_DbTable for some reason.
This would explain why it didn't get destroyed when fewer tests were being run.
Or maybe it was PHP doing the garbage collection; that makes more sense.
Either way, my current solution is to create a new Zend_Application object for each test class so that all the tests within that class have a fresh zend_application object to work with.
Here is some interesting information.
I put an echo statement in the __destruct method of the savehandler.
The method was being called (X + 1) times, where X was the number of tests that I ran: if I ran 50 tests I got 51 echoes, 307 tests gave 308 echoes, etc.
Here is the interesting part. If I ran only a few tests, the echoes would all come at the END of the test run. If I tried to run all 307 tests, 90 echoes would show up after what I assumed were 90 tests, and the rest would come at the end of the remaining tests. The number of echoes was X + 1 again, or in this case 308.
So I'm assuming this has something to do with either the tearDown method that PHPUnit calls or the PHP garbage collector; maybe PHPUnit invokes the garbage collector at teardown. Who knows, but I'm glad I got it working, as my tests were all passing beforehand.
If any of you have a better solution, let me know. Maybe I uncovered a previously unknown flaw in my code, PHPUnit, or Zend, and there is some way to fix it.
It's an old question, but I have just had the same problem and found the solution here. I think it's the right way to solve it.
Zend_Session::$_unitTestEnabled = true;