Why does JUnit run tests twice with different results - Eclipse

When running JUnit from Eclipse (using right-click | Run As - same results at project level and at individual test level), my tests run twice. One time the tests run as expected (and are labelled with just the package name); the other time I get spurious NullPointerExceptions (and the tests are labelled with the fully qualified package name). I don't have any suites, and the different results between the two runs mean that it doesn't seem to be the same issue that others have had with tests running twice.
My test file (minus the imports) is:
public class CommandHistoryTest extends TestCase {

    private CommandHistory commandHistory;

    @BeforeEach
    public void initEach() {
        commandHistory = new CommandHistory();
    }

    @Test
    @DisplayName("On creation, canUndo and canRedo should be false")
    public void testCreate() {
        Assertions.assertFalse(commandHistory.canUndo());
        Assertions.assertFalse(commandHistory.canRedo());
    }
}
As I say, this works fine on one of the JUnit passes - it failed until I implemented the relevant bits of commandHistory and passed when I implemented them - but on the other pass it gives me a NullPointerException on Assertions.assertFalse(commandHistory.canUndo());
I can live with this, because I am getting a valid set of test results, but seeing all those red flags on the second pass makes me sad. How do I stop the spurious tests?
EDIT: I note that in the package explorer the test shows as '> CommandHistoryTest.java'. I've added another test class, which doesn't show that '>' symbol in the package explorer and which doesn't run twice. What does the '>' mean?
EDIT AGAIN: No, I now see that '>' was part of the git integration, but the answer is below.

JUnit runs your test class twice: once with the Vintage engine, because it extends TestCase from JUnit 3, and once with the Jupiter engine, because it contains a method annotated with org.junit.jupiter.api.Test. The Jupiter run executes the @BeforeEach method, but the Vintage run does not, which is why commandHistory is still null there. Just remove extends TestCase and the class will run only once.
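For reference, the cleaned-up, Jupiter-only version of the test class would look roughly like this (a minimal sketch; the imports assume the standard JUnit 5 Jupiter API and your existing CommandHistory class):

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

public class CommandHistoryTest {    // no longer extends TestCase

    private CommandHistory commandHistory;

    @BeforeEach
    public void initEach() {
        commandHistory = new CommandHistory();
    }

    @Test
    @DisplayName("On creation, canUndo and canRedo should be false")
    public void testCreate() {
        Assertions.assertFalse(commandHistory.canUndo());
        Assertions.assertFalse(commandHistory.canRedo());
    }
}

With extends TestCase gone, only the Jupiter engine picks the class up, so @BeforeEach runs before every test and the NullPointerException disappears.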

Related

Eclipse conditional breakpoint has compilation errors in ThreadPoolExecutor

I am trying to put a conditional breakpoint in the java.util.concurrent.ThreadPoolExecutor class (say, in the getTask() method). I have tried multiple conditions, but with all of them, this thread as well as other threads keep popping up the following error:
"Unable to compile conditional breakpoint - missing java project context".
Could this be happening because ThreadPoolExecutor could have been compiled without debug information?
In any case, is there any other way in which I can break in this class (particularly the getTask() method)? It's important for me in order to debug a multi-threaded application.

Unable to stop test with runner.StopRun method

I am trying to run NUnit tests programmatically, which I have done successfully, but I also want to stop the test run whenever I need to, and runner.StopRun is not stopping it. Can someone please help with this?
The following code snippet runs the tests:
IDictionary<String, Object> options = new Dictionary<String, Object>();
options.Add(FrameworkPackageSettings.DefaultTestNamePattern, testName);
DefaultTestAssemblyBuilder builder = new NUnit.Framework.Api.DefaultTestAssemblyBuilder();
ITest test = builder.Build(asmName, options);
runner = new NUnit.Framework.Api.NUnitTestAssemblyRunner(builder);
runner.Load(asmName, options);
runner.Run(null, TestFilter.Empty);
and with the following code I am trying to stop the test run from a different thread:
if (runner.IsTestRunning)
runner.StopRun(true);
Thanks in advance.
Not sure what you mean by a different thread. Different from the test thread? Different from the thread that called Run?
In any case, the Run method you are calling is synchronous, so the test will keep that thread busy till it returns.
Since you are using NUnit internal methods (i.e. not designed for public consumption even though visible) you should probably be using a debug build of NUnit so that you can step into the call to StopRun.
Essentially, there are two possibilities:
1. The setup you have created for running tests is wrong.
2. One of the tests is refusing to be terminated.
I'd rule out the second item first, by using a very simple test.

Redirect standard output and standard err when executing a method

I have a program that tests each method in a Test# subclass and outputs XML in JUnit's XML format.
For instance:
class ExampleTest : Test
{
  Void testOne()
  {
    ...
  }
}
I want to execute the testOne method and capture the standard output and standard error produced in it. This out and err output will be included in the XML report.
My first idea was to look at sys::Env. The environment class sys::Env has err and out, but they are read-only.
My second idea is to launch a sys::Process for each test method and redirect sys::Process#.err and sys::Process#.out in it, but I'm afraid that will be very slow.
Is there another way to do it?
You won't be able to redirect output from your current process (and really should not).
If the output absolutely has to be stdout/err - you'll need to go the Process route. You'll take the fork/jvm/stream setup hit, but that may be negligible compared to your test runtime.
A better option would be to log using the Logging API - which will give more control over what gets logged, and where things go.

How can I unit test Eclipse Command Handlers?

I have an Eclipse 3.x-based application which uses commands and handlers.
I'm in the process of upping the code coverage and want to test as much as I can. The simple cases (POJOs) I've learned how to test. However, there are cases where I can't find a good starting point, especially when creating a fixture.
For example: Eclipse Command Handlers. I have a handler class MyHandler extending org.eclipse.core.commands.AbstractHandler. It has a single method public Object execute(ExecutionEvent event) throws ExecutionException. Usually, event is passed in from a user action in the GUI, e.g., clicking a menu item.
How can I unit test this? Would I need to mock the ExecutionEvent with the help of a mocking framework?
Unless unavoidable, I prefer to mock only types I own; see the discussion of Should you only mock types you own?
Since ExecutionEvents can be created without too much hassle, I wouldn't mock them. The snippet below creates an event that you can pass to your handler's execute method.
IEvaluationContext context = new EvaluationContext( null, new Object() );
Map<String, String> parameters = new HashMap<>();
ExecutionEvent event = new ExecutionEvent( null, parameters, null, context );
The first argument of the ExecutionEvent constructor references the command - I have never had any use for it. If your code requires an actual command, you can use the ICommandService to obtain a reference to your command:
ICommandService commandService = ...
Command command = commandService.getCommand( "id.of.my.command" );
The second argument is a map of command parameters. The third argument is the trigger; in the case of the Eclipse workbench this is the SWT Event, if available. Leave it null if your production code does not evaluate it.
Before calling execute, you would probably want to prepare the variables of the context:
context.addVariable( ISources.ACTIVE_PART_NAME, myPart );
context.addVariable( ISources.ACTIVE_CURRENT_SELECTION_NAME, new StructuredSelection() );
Note that null is not allowed as a variable value. Either omit the call or, if the variable was already added, use removeVariable().
If you don't need a command (see above) - and of course your production code doesn't require a workbench - you can even run the tests as plain JUnit tests (as opposed to PDE JUnit tests).
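Putting the snippets together, a complete test could look roughly like the sketch below. It is illustrative only: the class name MyHandlerTest and the test method are made up, it targets the MyHandler class from the question as a plain JUnit 4 test, and the selection variable is only needed if your handler actually reads it from the event.

import java.util.HashMap;
import java.util.Map;

import org.eclipse.core.commands.ExecutionEvent;
import org.eclipse.core.expressions.EvaluationContext;
import org.eclipse.core.expressions.IEvaluationContext;
import org.eclipse.jface.viewers.StructuredSelection;
import org.eclipse.ui.ISources;
import org.junit.Test;

public class MyHandlerTest {

    @Test
    public void testExecute() throws Exception {
        // Evaluation context that stands in for the workbench's application context
        IEvaluationContext context = new EvaluationContext( null, new Object() );
        // Only needed if the handler reads the current selection from the event
        context.addVariable( ISources.ACTIVE_CURRENT_SELECTION_NAME, new StructuredSelection() );

        // No command, no trigger - just the parameter map and the context
        Map<String, String> parameters = new HashMap<>();
        ExecutionEvent event = new ExecutionEvent( null, parameters, null, context );

        Object result = new MyHandler().execute( event );

        // assert on the result and/or the handler's side effects as appropriate
    }
}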

WaitHandles Exception from OpenCover

I'm using OpenCover to report on my code coverage for my NUnit tests, and when I run a suite of tests which take a long time I get the following exception:
An exception occured: The number of WaitHandles must be less than or equal to 64.
stack: at System.Threading.WaitHandle.WaitAny(WaitHandle[] waitHandles, Int32 millisecondsTimeout, Boolean exitContext)
at OpenCover.Framework.Manager.ProfilerManager.ProcessMessages(List`1 handles, GCHandle pinnedComms)
at OpenCover.Framework.Manager.ProfilerManager.RunProcess(Action`1 process, Boolean isService)
at OpenCover.Console.Program.Main(String[] args)
This only happens when I replace my mock DAL with a real DAL in my tests. Basically I'm running the same set of tests against the same interfaces, just with an integrated implementation instead of a mock implementation. The mock DAL tests run fine, another DAL implementation which uses XML files runs fine (just expectedly slower). The slowest of the three, the actual SQL implementation (slow because of the teardown/setup between each test), brings about this error.
There's no shortage of information online about threading and WaitHandles for custom code, but this is happening inside of a 3rd party tool. Is there something I can do with OpenCover to fix this? Some command line argument which explicitly directs the threading to allow these long-running tests? Perhaps an argument that it needs to pass to NUnit?