How to automatically test different ways to close an application with SWTBot (with Tycho)

Probably there is a simple answer to this, but I'm finding it hard to figure it out myself: How can I test different ways to exit an application with SWTBot?
In my application based on the Eclipse RCP 3.x, you can close the application in three different ways:
Via a mouse click on the menu items (File > Exit)
Via keyboard navigation of the menu (Alt+F, X)
Via the keyboard shortcut (Ctrl+Q)
I'm currently writing unit tests for this behaviour with the help of SWTBot. Running them, I hit a simple and very real problem: once one way of closing the application has been tested, the application is closed, and hence all the other tests fail.
All tests are currently residing in one test class.
My question therefore is: how can I run all tests successfully, from Eclipse for starters? And also: how can I have them run by Tycho during the build, so that subsequent tests won't automatically fail because the application is no longer open?

In short, you cannot test closing an application with SWTBot.
As you already found out, closing the application will terminate the VM as well. And since your tests run in the same VM as the application under test, the tests will be terminated as well.
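For illustration, here is a minimal sketch of the kind of test that cannot work; the class name and menu labels are assumptions, but the SWTBot calls are the standard API:

import org.eclipse.swtbot.eclipse.finder.SWTWorkbenchBot;
import org.eclipse.swtbot.swt.finder.junit.SWTBotJunit4ClassRunner;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(SWTBotJunit4ClassRunner.class)
public class CloseApplicationTest {

    @Test
    public void closeViaFileExitMenu() {
        SWTWorkbenchBot bot = new SWTWorkbenchBot();
        // This click shuts down the workbench, and with it the JVM
        // that is executing this very test...
        bot.menu("File").menu("Exit").click();
        // ...so no assertion placed here, and no later test method
        // in this class, will ever run.
    }
}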
Aside from these implications, you shouldn't test closing an application. The three ways to close an application that you mention are all provided by the platform and hence the platform should have tests for that functionality, not your application.

Related

How to debug JavaFX application with jdk14/javafx14/Eclipse v.2020-03?

I'm trying to run a JavaFX application to test some custom controls based on jdk14 and JavaFX14. My operating system is Windows 10, the IDE is Eclipse 2020-03, and I use the m2e Maven plugin. The controls are exact copies of controls developed under jdk8 and JavaFX8; the earlier controls pass all tests, and there was no problem with debugging.
There is no problem getting a test application to run using jdk14 and JavaFX14, but breakpoints are ignored regardless of whether I run in debug mode or run mode, or whether I modify the Maven command from javafx:run to javafx:debug (that did NOT work) or to javafx:run#debug.
This issue seems to have been addressed several times in the context of the NetBeans IDE (see the Stack Overflow discussion on the topic); I copied in the text from the modified plugin as suggested, but to no effect.
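For reference, the modified plugin configuration discussed in those threads generally looks like the sketch below; the version number, port, and main class here are assumptions, and for Eclipse you would then attach a "Remote Java Application" debug configuration to the given port:

<plugin>
    <groupId>org.openjfx</groupId>
    <artifactId>javafx-maven-plugin</artifactId>
    <version>0.0.4</version>
    <executions>
        <execution>
            <!-- Run with: mvn javafx:run@debug, then attach the IDE
                 debugger to port 8000 -->
            <id>debug</id>
            <configuration>
                <options>
                    <option>-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000</option>
                </options>
                <mainClass>my.app.MainApp</mainClass>
            </configuration>
        </execution>
    </executions>
</plugin>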
I have the following questions:
What must be done in order to debug a JavaFX application under the conditions described above?
Who is responsible for dealing with this? Eclipse? OpenJFX? Somebody else?
Based on the principle that whatever solution is developed, it should be as user friendly as the debugging process under jdk8 and JavaFX8 (i.e. before JavaFX and everything else got decoupled from Oracle), is it reasonable to expect that a solution along those lines will be available in the near future? Is anybody working on it now?
Thanks for feedback.

IntelliJ: Keep junit running during Integration Testing

With Eclipse and Spring Tool Suite, when creating a Debug configuration, we can check the "Keep JUnit running after a test run when debugging" option. Because we're using the SpringJUnit4ClassRunner and loading the Spring app before running, the startup time before these tests can run is significant, so this is a huge time saver for rerunning tests and even hot-swapping basic changes.
However, I recently switched to IntelliJ and I'm unable to find an equivalent option for this feature. Can someone tell me where it is?
You can achieve something very similar to Eclipse's "Keep JUnit running after a test run when debugging" by doing the following:
1. Create a new JUnit Run/Debug configuration with Test kind set to Method and Repeat set to Until Stopped.
2. Choose your test class and the method you plan to test, and save the configuration.
3. Start the test in Debug mode using the configuration you just created. You will notice that the test runs over and over again without reloading the Spring context.
4. Hit the sort a-z button in the Debug tab so that the latest JUnit test run is always shown at the top.
5. Pause the test from the Debug or Run tab (the || button on the left).
6. Make your changes to the code and then build. Changes will be hot-swapped. For the best results, I recommend also using HotSwap Agent or JRebel.
7. Resume the test from the Debug or Run tab.
8. Rinse and repeat steps 5 to 7 until you are done with the test.
Note that pausing the test is optional; changes will be reloaded anyway between test runs.
The only downside of this strategy is that it keeps only one test method alive at a time.
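For context, here is a minimal sketch of the kind of test this technique pays off for; the config class, bean, and method names are assumptions:

import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = AppConfig.class)  // hypothetical config class
public class UserServiceIntegrationTest {

    @Autowired
    private UserService userService;  // hypothetical bean under test

    @Test
    public void findsExistingUser() {
        // Loading the Spring context above is the expensive part, not
        // this assertion; repeating the method "Until Stopped" reuses
        // the already-loaded context between runs.
        assertNotNull(userService.findByName("alice"));
    }
}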

Convenient way to run eclipse plugin

I have recently started developing an Eclipse plugin (which is basic stuff for now) and I am struggling with "default" way to run Eclipse plugin ("Run as Eclipse application").
Eclipse starts another instance of itself with my plugin already installed (this is the default behaviour).
The problem is that when I want to re-run my plugin project and press the "run" button again (or Ctrl+F11) while the other Eclipse instance is still running, I get the following message:
"Could not launch the application because the associated workspace is currently in use by another Eclipse application".
The error makes sense, and when I close "testing" Eclipse instance I am able to run my plugin again.
The question is - "is it normal routine for plugin development?". Maybe I am missing something, e.g. special arguments for Eclipse?
This all seems pretty normal. The error message appears because the run configuration specifies a workspace; when you start a second instance using the same workspace, that workspace is locked and considered in use.
What I usually do when testing a plugin is to create a run configuration (click "Run...") in which I disable all the plugins I won't need when testing. This makes sure that the test instance starts up a couple of seconds quicker. Make sure you save that run configuration as a *.launch file as well; that makes it quicker to test the next time, and the file can be used to share the configuration.
There's a lot you can configure in the run configuration, such as Eclipse arguments, VM arguments, whether you want environment variables set, and so on. So be sure to experiment a little.
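As a starting point, these launch settings are commonly useful for plug-in testing (all optional, standard Eclipse/JVM flags; the heap size is just an example):

Program arguments:  -consoleLog -clean
VM arguments:       -Xmx1024m -ea

-consoleLog mirrors the test instance's error log to the console of the host Eclipse, and -clean clears the OSGi caches, which helps when stale bundles seem to be running.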
In your run configuration, on the Main tab, add this to the Workspace Data > Location text box:
${workspace_loc}/../runtime-EclipseApplication${current_date:yyyyMMdd_HHmmss}
Note the suffix ${current_date:yyyyMMdd_HHmmss}: with it, a new workspace is created every time you launch your application, so you will not get any error message saying the workspace is locked.
But be careful: the .metadata folder will be different for each instance, since their workspaces are different. Thus preferences stored/retrieved by different instances are NOT in sync.
You are probably missing one important point: Eclipse supports Java hot code replacement. Therefore, in many cases you can modify your Java code while your Eclipse application instance is running, save the code, and continue without restarting.
If hot code replacement is not possible, Eclipse will tell you, so you always know whether the editing changes are applied to the running instance.
This works best with more recent versions of the JVM, so consider upgrading to the latest Java 7 version, even if you write code to be compliant with Java 1.5 or 6.
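To illustrate what hot code replacement can and cannot do, here is a sketch (a hypothetical command handler; the limits noted in the comments are the standard HCR rules):

import org.eclipse.core.commands.AbstractHandler;
import org.eclipse.core.commands.ExecutionEvent;
import org.eclipse.core.commands.ExecutionException;
import org.eclipse.jface.dialogs.MessageDialog;

public class SampleHandler extends AbstractHandler {

    @Override
    public Object execute(ExecutionEvent event) throws ExecutionException {
        // Changing the body of an existing method like this one can be
        // hot-swapped into the running instance while debugging.
        MessageDialog.openInformation(null, "Demo", "Hello after hot swap!");
        return null;
    }

    // Structural changes (new methods or fields, changed signatures)
    // cannot be hot-swapped; Eclipse reports this and offers to
    // restart the instance instead.
}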

Attach Current Build to Test

I'm playing around with Microsoft Test Manager 2013 (though it appears it is just MTM2012) to try and get a better understanding of test cases and test suites as I want to use this at work. So I was hoping that I could run a test suite on a build which gets included in this test suite. That is what I WANT to do, but it could very well be wrong. So maybe a better scope of what I'm doing at work might lend to a better answer.
My company makes tablet PCs. I write programs for those tablets. For the sake of argument, let's just say there are 5 tablets that run a similar array of OSs. Tablets 1, 2, 3 and 4 can run WinXP, WinXP Embedded, Win7, and Win7 Embedded, and Tablet 5 can run Win7, Win7 Embedded, and Win8 Embedded. Let's say I'm making a display test program. Naturally this display test will run differently on each tablet, but the program itself is supposed to be able to handle that, along with not having to worry about the OS. So I wrote out a very simple test: open program, try to open again, verify only 1 instance, check display, close program.
I figured it would be good to make a Test Suite called "Complete Display Program Test" and put 5 sub test suites under it, one for each tablet. Then I moved the 5 test cases to a single test suite. I configured all test cases to only have the correct tablet/OS configuration. I queued a build and waited for it to finish. I then attached that build to the main test suite. I then clicked on running a test for tablet 1, but I didn't see the build attached in the test runner. I've looked around a little to see why or how and haven't found anything. The question is: how do I do that? Or, if you are scratching your head and wondering why in the world I am doing it this way, then by all means suggest another way. This is the second time I have ever looked into MTM, so I might not be doing it right.
Thank you for your time.
When running manual tests from MTM, you will not see the build you are using in Test Runner.
But if you complete the test and set the test outcome, you will be able to check which build you ran the test against.
Just double-click on the test or select "View Results" to display the test results.
The build column is not visible by default; you will have to right-click on the column row and select the "Build number" column to be displayed.
You will also be able to see the build number in the "Analyse Test Runs" area.
Things are slightly different if you are running automated tests.
Consider the following approach:
Automate your Test Cases
See How to: Associate an Automated Test with a Test Case for details.
Create a Build Definition building your application under test AND the assemblies containing your tests.
I strongly recommend building the application you want to test and the test assemblies in the same Build Definition. (You will see why a little bit later.)
Run this build definition and deploy the latest version of the application to the environment where you want to run the tests.
This is very important to understand: when you run automated tests, only the test assemblies are deployed automatically to the environment.
It's your job to deploy the right version of the application you are going to test.
Now you can run tests from MTM.
You can do it the way described by @AndrewClear in the comment to this answer: choose "Run with Options" when you're beginning a test run, and select the latest build.
Now the test assemblies containing the tests that are used to automate the Test Cases will be deployed automatically to the test environment, and the tests will be executed.
That is the point where you should recognize why it is so important to build the application and the tests with a single Build Definition: since the build number you've just selected when starting the tests is stored along with the test results on TFS, you will later know which version of your application you were testing (assuming you deployed the right version, of course).
You could go a little bit further if you want even more automation. (This is the way I'm currently running automated tests.)
Use the Build-Deploy-Test template (a good place to start reading is Setting Up Automated Build-Deploy-Test Workflows).
Using this approach you will be able to automate deployment of the application you want to test.

Getting WatiN.Core.Exceptions.TimeoutException while running from CruiseControl

I am getting a WatiN.Core.Exceptions.TimeoutException ("Timeout while Internet Explorer busy") error while executing my tests via CruiseControl.NET.
Does anyone have an idea how to resolve this?
While we were using TeamCity, we had to disable IE protected mode.
Also, check that the user under which the WatiN tests are run can interact with the desktop.
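For reference, protected mode can be disabled per zone in the registry; this sketch targets the Internet zone (zone 3) for the current user, which is the usual case for a build agent, but verify the zone numbering and value against your IE version:

Windows Registry Editor Version 5.00

; "2500" = 3 disables Protected Mode for the zone; 0 would enable it.
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3]
"2500"=dword:00000003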
I know this question is old and answered, but below are some of my observations.
It is possible to run WatiN tests under a service account, but the following restrictions/prerequisites apply:
The service must run in desktop-interactive mode. This is only available when running as SYSTEM.
Tests must not create new windows, not even alert/confirm dialogs: IE cannot create a new window, so WatiN fails when looking for/expecting it to appear.
IE may show its own warnings, e.g. "Insecure content in a secure page"; this can cause tests to fail*.
If the tests fail/time out and the IE instance is forcefully closed, the next instance may try to restore the previous state, and the tests then appear to fail*. This can be turned off in the advanced settings.
*From what I've experienced, usually because the prompt halts the document from being reported as finished loading.
Feel free to add other restrictions/comments.