How can I unit test Eclipse Command Handlers?

I have an Eclipse 3.x-based application which uses commands and handlers.
I'm in the process of upping the code coverage and want to test as much as I can. The simple cases (POJOs) I've learned how to test. However, there are cases where I can't find a good starting point, especially when creating a fixture.
For example: Eclipse Command Handlers. I have a handler class MyHandler extending org.eclipse.core.commands.AbstractHandler. It has a single method public Object execute(ExecutionEvent event) throws ExecutionException. Usually, event is passed in from a user action in the GUI, e.g., clicking a menu item.
How can I unit test this? Would I need to mock the ExecutionEvent with the help of a mocking framework?

Unless it's unavoidable, I prefer to mock only types I own; see the discussion in "Should you only mock types you own?"
Since ExecutionEvents can be created without too much hassle, I wouldn't mock them. The snippet below creates an event that you can pass to your handler's execute method.
// evaluation context with no parent and a default variable object
IEvaluationContext context = new EvaluationContext( null, new Object() );
Map<String, String> parameters = new HashMap<>();
// arguments: command, parameters, trigger, application context
ExecutionEvent event = new ExecutionEvent( null, parameters, null, context );
The first argument of the ExecutionEvent constructor references the command - I have never had any use for it. If your code requires an actual command, you can use the ICommandService to obtain a reference to your command:
ICommandService commandService = ...
Command command = commandService.getCommand( "id.of.my.command" );
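In case you do need the service: inside a running workbench (for example, in a PDE test), one common way to obtain it is through the workbench's service locator. This is a sketch, assuming the Eclipse 3.x getService() signature that returns Object:
// obtain the command service from the workbench's service locator
ICommandService commandService
    = (ICommandService) PlatformUI.getWorkbench().getService( ICommandService.class );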
The second argument is a map of command parameters. The third argument is the trigger; in the case of the Eclipse workbench, this is the SWT Event if available. Leave it null if your production code does not evaluate it.
Before calling execute, you would probably want to prepare the variables of the context:
context.addVariable( ISources.ACTIVE_PART_NAME, myPart );
context.addVariable( ISources.ACTIVE_CURRENT_SELECTION_NAME, new StructuredSelection() );
Note that null is not allowed as a variable value. Either omit the call or, if the variable was already added, use removeVariable().
If you don't need a command (see above) and your production code doesn't require a workbench, you can even run the tests as plain JUnit tests (as opposed to PDE JUnit tests).
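Putting it all together, a complete plain JUnit test could look like the sketch below. MyHandler is the handler class from the question; the variables added to the context and the assertion on the return value are assumptions for illustration:
public class MyHandlerTest {

  @Test
  public void testExecute() throws ExecutionException {
    // prepare a context holding the variables the handler reads
    IEvaluationContext context = new EvaluationContext( null, new Object() );
    context.addVariable( ISources.ACTIVE_CURRENT_SELECTION_NAME, new StructuredSelection() );
    ExecutionEvent event = new ExecutionEvent( null, new HashMap<String, String>(), null, context );

    Object result = new MyHandler().execute( event );

    // execute() conventionally returns null unless the command computes a result
    assertNull( result );
  }
}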

Related

Spring AOP + AspectJ: @AfterReturning advice wrongly executed while mocking (before actual execution)

In an integration test, my @AfterReturning advice is wrongly executed while I am stubbing the mocked service to throw a TimeoutException, and the argument passed to the aspect is null.
My advice:
#AfterReturning("execution(* xxxxxx" +
"OrderKafkaProducerService.sendOrderPaidMessage(..)) && " +
"args(order)")
public void orderComplete(CheckoutOrder order) { // order is null when debugging
    metricService.orderPaidKafkaSent();
    log.trace("Counter inc: order paid kafka"); // this log line is shown in the console
    metricService.orderCompleted();
    log.trace("Order complete! {}", order.getId()); // this one is not, because of the NPE
}
And my test:
// mocking
doThrow(new ServiceException(new TimeoutException("A timeout occurred"), FAILED_PRODUCING_ORDER_MESSAGE))
        .when(orderKafkaProducerService).sendOrderPaidMessage(any()); // this is where advice is executed, which is wrong
...
// when
// API call with RestAssured, launching a real HTTP call to the endpoint; the service is called during this process
// then
verify(orderKafkaProducerService).sendOrderPaidMessage(any(CheckoutOrder.class)); // it should be called
verify(metricService, never()).orderCompleted(); // but we are throwing, not returning, we should not see this advice executed
This test fails with an NPE because order is null.
While debugging, I found that the advice already executes during stubbing, and at that point any() has no value yet (it is null), hence the NPE. But I don't think the advice should execute while stubbing the mock. How can I avoid that while testing? This seems absurd to me.
Currently Spring test support does not explicitly handle the situation that an injected mock or spy (which is a proxy subclass via Mockito) might actually be an AOP target later on (i.e. proxied and thus subclassed again via CGLIB).
There are several bug tickets related to this topic for Spring, Spring Boot and Mockito. Nobody has done anything about it yet. I do understand why the Mockito maintainers won't include Spring-specific stuff into their code base, but I do not understand why the Spring people do not improve their testing tools.
Actually, when debugging your failing test and inspecting kafkaService, you will find out the following facts (the snippet after this list shows a quick way to reproduce them):
kafkaService.getClass() is com.example.template.services.KafkaService$MockitoMock$92961867$$EnhancerBySpringCGLIB$$8fc4fe95
kafkaService.getClass().getSuperclass() is com.example.template.services.KafkaService$MockitoMock$92961867
kafkaService.getClass().getSuperclass().getSuperclass() is class com.example.template.services.KafkaService
In other words:
kafkaService is a CGLIB Spring AOP proxy.
The AOP proxy wraps a Mockito spy (probably a ByteBuddy proxy).
The Mockito spy wraps the original object.
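You can observe this wrapping yourself with a quick throwaway snippet inside the failing test, where kafkaService is the injected bean:
// walk up the class hierarchy of the injected bean to see the proxy layers
Class<?> clazz = kafkaService.getClass();
while (clazz != null) {
    System.out.println(clazz.getName());
    clazz = clazz.getSuperclass();
}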
Besides, changing the wrapping order to make the Mockito spy the outermost object would not work because CGLIB deliberately makes its overriding methods final, i.e. you cannot extend and override them again. If Mockito was just as restrictive, the hierarchical wrapping would not work at all.
Anyway, what can you do?
Either you use a sophisticated approach like the one described in this tutorial,
or you go for the cheap solution of explicitly unwrapping the AOP proxy via AopTestUtils.getTargetObject(Object). You can call this method safely because if the passed candidate object is not a Spring proxy (internally easy to identify because it implements the Advised interface, which also gives access to the target object), it just returns the passed object again.
In your case the latter solution would look like this:
@Test
void shouldCompleteHappyPath() {
    // fetch spy bean by unwrapping the AOP proxy, if any
    KafkaService kafkaServiceSpy = AopTestUtils.getTargetObject(kafkaService);
    // given mocked
    doNothing().when(kafkaServiceSpy).send(ArgumentMatchers.any());
    // when (real request)
    testClient.create().expectStatus().isOk();
    // then
    verify(kafkaServiceSpy).send(ArgumentMatchers.any());
    verify(metricService).incKafka();
}
This has the effect that when(kafkaServiceSpy).send(ArgumentMatchers.any()) no longer triggers the aspect advice because kafkaServiceSpy is no longer an AOP proxy. The auto-wired bean kafkaService still is, though, thus AOP gets triggered as expected, but no longer unwantedly while recording the mock interaction.
Actually, for the verification you could even use kafkaService instead of the spy and only unwrap the spy when recording the interaction you want to verify later:
@Test
void shouldCompleteHappyPath() {
    // given mocked
    doNothing()
        .when(
            // fetch spy bean by unwrapping the AOP proxy, if any
            AopTestUtils.<KafkaService>getTargetObject(kafkaService)
        )
        .send(ArgumentMatchers.any());
    // when (real request)
    testClient.create().expectStatus().isOk();
    // then
    verify(kafkaService).send(ArgumentMatchers.any());
    verify(metricService).incKafka();
}
P.S.: Without your MCVE I would never have been able to debug this and find out what the heck was going on. This proves again that asking questions including an MCVE is the best thing you can do for yourself because it helps you get answers to questions which otherwise probably would remain unanswered.
Update: After I mentioned this problem under the similar closed issue Spring Boot #6871, one of the maintainers himself created Spring Boot #22281, which specifically addresses your problem here. You might want to watch the new issue in order to find out if/when it can be fixed.

Why does JUnit run tests twice with different results

When running JUnit from Eclipse (using right-click | Run As; same results at project level and individual test level), my tests run twice. One time the tests run as expected (and are labelled with just the package name); the other time I get spurious null pointer exceptions (and the tests are labelled with the fully qualified package name). I don't have any suites, and the different results on the two runs mean that this doesn't seem to be the same issue others are having with tests running twice.
My test file (minus the imports) is:
public class CommandHistoryTest extends TestCase {
    private CommandHistory commandHistory;

    @BeforeEach
    public void initEach() {
        commandHistory = new CommandHistory();
    }

    @Test
    @DisplayName("On creation, canUndo and canRedo should be false")
    public void testCreate() {
        Assertions.assertFalse(commandHistory.canUndo());
        Assertions.assertFalse(commandHistory.canRedo());
    }
}
As I say, this works fine on one of the JUnit passes - it failed until I implemented the relevant bits of commandHistory and passed when I implemented them - but on the other pass it gives me a null pointer exception on Assertions.assertFalse(commandHistory.canUndo());
I can live with this, because I am getting a valid set of test results, but seeing all those red flags on the second pass makes me sad. How do I stop the spurious tests?
EDIT: I note that in the package explorer the test shows as '> CommandHistoryTest.java'. I've added another test class, which doesn't show that '>' symbol in the package explorer and which doesn't run twice. What does the '>' mean?
EDIT AGAIN: No, I now see that '>' was part of the git integration, but the answer is below.
JUnit runs your test twice: once with the Vintage engine, because the class extends TestCase from JUnit 3, and once with the Jupiter engine, because it contains a method annotated with org.junit.jupiter.api.Test. While the latter executes the @BeforeEach method, the former does not. Just remove extends TestCase and the test will only run once.
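For reference, here is the corrected class; only the extends clause is removed, the rest is unchanged from the question:
public class CommandHistoryTest {
    private CommandHistory commandHistory;

    @BeforeEach
    public void initEach() {
        commandHistory = new CommandHistory();
    }

    @Test
    @DisplayName("On creation, canUndo and canRedo should be false")
    public void testCreate() {
        Assertions.assertFalse(commandHistory.canUndo());
        Assertions.assertFalse(commandHistory.canRedo());
    }
}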

Accessing the process instance from Rule Tasks in JBPM 5

The short version: How do I get JBPM5 Rule Nodes to use a DRL file which reads and updates process variables?
The long version:
I have a process definition, being run under JBPM5. The start of this process looks something like this:
[Start] ---> [Rule Node] ---> [Gateway (Diverge)] ... etc
The gateway uses constraints on a variable named 'isValid'.
My Rule Node is pointing to the RuleFlowGroup 'validate', which contains only one rule:
rule "Example validation rule"
ruleflow-group "validate"
when
processInstance : WorkflowProcessInstance()
then
processInstance.setVariable("isValid", new Boolean(false));
end
So, by my logic, if this is getting correctly processed then the gateway should always follow the "false" path.
In my Java code, I have something like the following:
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add(ResourceFactory.newClassPathResource("myProcess.bpmn"), ResourceType.BPMN2);
kbuilder.add(ResourceFactory.newClassPathResource("myRules.drl"), ResourceType.DRL);
KnowledgeBase kbase = kbuilder.newKnowledgeBase();
StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
new Thread(new Runnable()
{
    public void run()
    {
        ksession.fireUntilHalt();
    }
}).start();
// start a new process instance
Map<String, Object> params = new HashMap<String, Object>();
params.put("isValid", true);
ksession.startProcess("test.processesdefinition.myProcess", params);
I can confirm the following:
The drl file is getting loaded into working memory, because when I put syntax errors in the file, I get errors.
If I include a value for "isValid" in the Java params map, the process only ever follows the path specified by Java, apparently ignoring the drools rule.
If I take the "isValid" parameter out of the params map, I get a runtime error.
From this I assume that the final "setVariable" line in the rule is either not executing, or is updating the wrong thing.
I think my issue is related to this statement in the official documentation:
Rule constraints do not have direct access to variables defined inside the process. It is however possible to refer to the current process instance inside a rule constraint, by adding the process instance to the Working Memory and matching for the process instance in your rule constraint. We have added special logic to make sure that a variable processInstance of type WorkflowProcessInstance will only match to the current process instance and not to other process instances in the Working Memory. Note that you are however responsible yourself to insert the process instance into the session and, possibly, to update it, for example, using Java code or an on-entry or on-exit or explicit action in your process.
However I cannot figure out how to do what is described here. How do I add the process instance into working memory in a way that would make it accessible to this first Rule Node? Rule Nodes do not seem to support on-entry behaviors, and I can't add it to the Java code because the process could very easily complete execution of the rules node before the working memory has been updated to include the process.
As you mentioned, there are several options for inserting the process instance into working memory:
- inserting it after calling startProcess()
- using an action script to insert it (using "insert(kcontext.getProcessInstance())")
If calling startProcess() might already have gone over the rule task (which is probably the case in your example), and you don't have another node in front of your rule task where you could just use an on-entry/exit script to do this (so that it's hidden), I would recommend using an explicit script task before your rule task to do this. A sketch of the first option is below.
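For illustration, the first option would look something like this, using the session and process id from the question. Note that with fireUntilHalt() running on a separate thread, the rule task may still be reached before the insert happens, which is exactly why the script-task approach is more reliable:
// start the process, then make the instance visible to the rule engine
ProcessInstance processInstance =
        ksession.startProcess("test.processesdefinition.myProcess", params);
ksession.insert(processInstance);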
Kris

What event handler to subscribe to when Assert.AreEqual fails

I want to invoke a method when my integration test fails (i.e., when Assert.AreEqual fails). Is there any event delegate that I can subscribe to in the NUnit framework? Of course, the event delegate must be fired when the tests fail.
This is because my tests contain a lot of Assert statements, and I need to log the Assert that failed, along with the assertion information, in my bug tracker. The best way to do this is to have a delegate method invoked when the test fails.
I'm curious, what problem are you trying to solve by doing this? I don't know of a built-in way to do this with NUnit, but there is a messy workaround: you can supply a Func and invoke it as the fail message. The body of the Func can provide the delegation you're looking for.
var callback = new Func<string>(() => {
    // do something
    return "Reason this test failed";
});
Assert.AreEqual("", " ", callback.Invoke());
It seems that there is no event I can subscribe to in case of an assert failure.
This sounds like you are wanting to create your own test-runner.
There is a distinction between the tests themselves (which define the actions) and the running of the tests. In this case, if you want to detect that a test failed, you have to do so when the test is run.
If all you are looking to do is send an email on failure, it may be easiest to create a script (PowerShell/batch) that runs the command-line runner and sends an email on failure (to your bug tracking system?).
If you want more complex interactivity, you may need to consider creating a custom runner. The runner can run the test and then take action based on the results. In this case you should look in the NUnit.Core namespace for the test runner interface.
For example (untested):
TestPackage package = new TestPackage();
package.Add(@"c:\some\path\to.dll");
SimpleTestRunner runner = new SimpleTestRunner();
if (runner.Load(package)) {
    var results = runner.Run(new NullListener(), TestFilter.Empty, false, LoggingThreshold.Off);
    if (results.ResultState != ResultState.Success) {
        ... do something interesting ...
    }
}
EDIT: better snippet of code https://stackoverflow.com/a/5241900/1961413

Rhino Mocks Calling instead of Recording in NUnit

I am trying to write unit tests for a bit of code involving events. Since I need to raise an event at will, I've decided to rely upon Rhino Mocks to do so for me, and then make sure that the results of the events being raised are as expected (when a button is clicked, values should change in a predictable manner; in this example, the height of the object should decrease).
So, I do a bit of research and realize I need an Event Raiser for the event in question. Then it's as simple as calling eventraiser.Raise(); and we're good.
The code I've written for obtaining an event raiser is as follows (written in C#, more or less copied straight off the net):
using (mocks.Record())
{
    MyControl testing = mocks.DynamicMock<MyControl>();
    testing.Controls.Find("MainLabel", false)[0].Click += null;
    LastCall.IgnoreArguments();
    LastCall.Constraints(Rhino.Mocks.Constraints.Is.NotNull());
    Raiser1 = LastCall.GetEventRaiser();
}
I then test it in playback mode.
using (mocks.Playback())
{
    MyControl thingy = new MyControl();
    int temp = thingy.Size.Height;
    Raiser1.Raise();
    Assert.Greater(temp, thingy.Size.Height);
}
The problem is, when I run these tests through NUnit, they fail. An exception is thrown at the line testing.Controls.Find("MainLabel",false)[0].Click += null; which complains about trying to add null to the event listener. Specifically: "System.NullReferenceException: Object reference not set to an instance of an object".
Now, I was under the impression that any code inside the mocks.Record block wouldn't actually be called; it would instead create expectations for code calls in the playback. However, this is the second instance where I've had a problem like this (the first involved classes/cases that were a lot more complicated) where it appears in NUnit that the code is actually being called normally instead of creating expectations. I am curious if anyone can point out what I am doing wrong, or an alternative way to solve the core issue.
I'm not sure, but you might get that behaviour if you haven't made the event virtual in MyControl. If methods, events, or properties aren't virtual, then I don't think DynamicMock can replace their behaviour with recording and playback versions.
Personally, I like to define interfaces for the classes I'm going to mock out and then mock the interface. That way, I'm sure to avoid this kind of problem.