Issue running Debug mode in Rules Studio - jrules

I wrote a technical rule and would like to test it in debug mode. I am using a JUnit test case for unit testing and have deployed the ruleset in the JUnit test project. I have set breakpoints in the technical rule, but when running in debug mode, execution does not stop at the breakpoints.
I checked the "enable debug" option in archive.xml of the RuleApp project.
Please let me know what I need to take care of when testing a ruleset in debug mode.
Thanks in advance.
Hari

Yes, you can run the rules in debug mode, but you can only set breakpoints in actions, not in conditions.
Before using JUnit, make sure you can run the ruleset in debug mode from a simple Java project.
A workaround for testing the conditions is to create print (log/sysout) functions that return true and print each condition, one by one, as sketched below.
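A minimal sketch of such a helper, assuming your rules can call a static Java method exposed to them (for example through the BOM); the class and method names here are illustrative, not part of the JRules API:

    // Hypothetical tracing helper for rule conditions.
    public class RuleTrace {

        // Always returns true, so AND-ing it into a condition never
        // changes the rule's outcome; it only prints the message.
        public static boolean trace(String message) {
            System.out.println("[rule-trace] " + message);
            return true;
        }
    }

In a condition you would then write something like RuleTrace.trace("amount check") && order.getAmount() > 1000, moving the call from one condition to the next to see how far evaluation gets.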
Hope it helps.

Related

IntelliJ: Keep JUnit running during Integration Testing

With Eclipse and Spring Tool Suite, when creating a Debug configuration we can check the Keep JUnit running after a test run when debugging option. Because we're using the SpringJUnit4ClassRunner and loading the Spring app before running, startup time before these tests can run is significant, so this is a huge time saver for rerunning tests and even hot swapping basic changes.
However, I recently switched to IntelliJ and I'm unable to find an equivalent option for this feature. Can someone tell me where it is?
You can achieve something very similar to Eclipse's "Keep JUnit running after a test run when debugging" by doing the following:
1. Create a new JUnit Run/Debug configuration with Test kind set to Method and Repeat set to Until Stopped
2. Choose your test class and the method you plan to test and save the configuration
3. Now, start the test in Debug mode using the configuration you just created. You will notice that the test will run over and over again without reloading the Spring context.
4. Hit the sort a-z button in the Debug tab so that the latest JUnit test run is always shown at the top
5. Pause the test from the Debug or Run tab (the || button on the left)
6. Make your changes to the code and then build. Changes will be hot swapped. For the best results, I recommend also using HotSwap Agent or JRebel
7. Resume the test from the Debug or Run tab
8. Rinse and repeat steps 5 to 7 until you are done with the test
Note that pausing the test is optional; changes will be reloaded between test runs anyway.
The only downside of this strategy is that you can keep alive and test only one method at a time.
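For context, here is a minimal sketch of the kind of Spring-backed JUnit 4 test this workflow pays off for; the context file name is a placeholder. The expensive part is loading the application context, which the Repeat: Until Stopped configuration avoids repeating:

    import static org.junit.Assert.assertNotNull;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.context.ApplicationContext;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration("classpath:app-context.xml") // placeholder context file
    public class SlowStartupIntegrationTest {

        @Autowired
        private ApplicationContext context;

        @Test
        public void contextLoads() {
            // Loading the Spring context above is the slow part; keeping
            // the JVM and context alive between repeats makes each rerun
            // of this method nearly instant.
            assertNotNull(context);
        }
    }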

How to (automatically) test different ways to close an application with SWTBot (with Tycho)

Probably there is a simple answer to this, but I'm finding it hard to figure it out myself: How can I test different ways to exit an application with SWTBot?
In my application, which is based on Eclipse RCP 3.x, you can close the application in three different ways:
Via mouse click on the menu item (File > Exit)
Via keyboard shortcuts in the menu (Alt+F, X)
Via shortcut (Ctrl+Q)
I'm currently writing unit tests for this behaviour with the help of SWTBot. Running them, I hit a simple and very real problem: once one way of closing the application has been tested, the application is closed, and hence all the other tests fail.
All tests are currently residing in one test class.
My question therefore is: how can I run all tests successfully, from Eclipse for starters? And also: how can I have them run by Tycho during the build, so that subsequent tests won't automatically fail because the application is no longer open?
In short, you cannot test closing an application with SWTBot.
As you already found out, closing the application terminates the VM. And since your tests run in the same VM as the application under test, the tests are terminated along with it.
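To illustrate, here is a minimal sketch of such a test (menu labels taken from the question); it is exactly the kind of test that takes the JVM, and every test after it, down with the application:

    import org.eclipse.swtbot.eclipse.finder.SWTWorkbenchBot;
    import org.junit.Test;

    public class CloseApplicationTest {

        private final SWTWorkbenchBot bot = new SWTWorkbenchBot();

        @Test
        public void closeViaFileExit() {
            // File > Exit shuts down the workbench. Because SWTBot runs
            // in the same VM as the application under test, the test
            // process terminates here and no later test can run.
            bot.menu("File").menu("Exit").click();
        }
    }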
Aside from these implications, you shouldn't test closing an application. The three ways to close an application that you mention are all provided by the platform and hence the platform should have tests for that functionality, not your application.

Eclipse - "Keep JUnit running after a test when debugging"

In Eclipse there is an option under Run/Debug configuration Keep JUnit running after a test when debugging.
Googling for that phrase returns only one hit, a bug report at Eclipse (61174); there is no manual, instructions, or anything similar. Hence I have two questions:
What does this option affect?
The reason I found this option was that I was looking for a way to make running a test faster. Currently it takes 35 seconds for JUnit to start, while running the actual tests usually takes just a few seconds. This is very annoying when I debug test cases and need to start/stop them frequently. Is there a way to make JUnit launch faster?
Yes, I've run into this myself:
A JUnit launch configuration has a "keep alive" option. If your Java virtual machine supports "hot code replacement" you can fix the code and rerun the test without restarting the full test run. To enable this option select the Keep JUnit running after a test run when debugging checkbox in the JUnit launch configuration.
From the site: http://help.eclipse.org/luna/index.jsp?topic=%2Forg.eclipse.jdt.doc.user%2FgettingStarted%2Fqs-junit.htm

Attach Current Build to Test

I'm playing around with Microsoft Test Manager 2013 (though it appears it is just MTM 2012) to try to get a better understanding of test cases and test suites, as I want to use this at work. I was hoping that I could run a test suite on a build which gets included in this test suite. That is what I WANT to do, but it could very well be wrong. So maybe a better scope of what I'm doing at work might lend itself to a better answer.
My company makes tablet PCs. I write programs for those tablets. For the sake of argument, let's just say there are 5 tablets that run a similar array of OSs. Tablets 1, 2, 3 and 4 can run WinXP, WinXP Embedded, Win7, and Win7 Embedded, and Tablet 5 can run Win7, Win7 Embedded, and Win8 Embedded. Let's say I'm making a display test program. Naturally this display test will run differently on each tablet, but the program itself is supposed to be able to handle that, along with not having to worry about the OS. So I wrote out a very simple test: open the program, try to open it again, verify only one instance is running, check the display, close the program.
I figured it would be good to make a test suite called "Complete Display Program Test" and put 5 sub test suites under it, one for each tablet. Then I moved the 5 test cases into a single test suite. I configured all test cases to only have the correct tablet/OS configuration. I queued a build and waited for it to finish. I then attached that build to the main test suite. I then clicked on run a test for Tablet 1, but I didn't see the build attached in the test runner. I've looked around a little bit to see why or how and haven't found anything. The question is: how do I do that? Or, if you are scratching your head and wondering why in the world I am doing it this way, then by all means suggest another way. This is the second time I have ever looked into MTM, so I might not be doing it right.
Thank you for your time.
When running manual tests from MTM you will not see the build you are using in Test Runner.
But if you complete the test and set the test outcome, you will be able to check which build you ran the test against.
Just double-click on the test or select "View Results" to display the test results.
The build column is not visible by default; you will have to right-click on the column header row and select the "Build number" column to be displayed.
You will also be able to see the build number in the "Analyse Test Runs" area.
Things are slightly different if you are running automated tests.
Consider the following approach:
Automate your Test Cases
See How to: Associate an Automated Test with a Test Case for details.
Create a Build Definition that builds your application under test AND the assemblies containing your tests.
I strongly recommend building the application you want to test and the test assemblies in the same Build Definition. (You will see why a little bit later.)
Run this build definition and deploy the latest version of the application to the environment where you want to run the tests.
This is very important to understand: if you run automated tests, only the test assemblies are deployed automatically to the environment.
It's your job to deploy the right version of the application you are going to test.
Now you can run tests from MTM.
You can do it the way described by @AndrewClear in the comment to this answer: "choose "Run with Options" when you're beginning a test run" and select the latest build.
Now the test assemblies containing the tests which are used to automate Test Cases will be deployed automatically to the test environment, and the tests will be executed.
This is the point where you can see why it is so important to build the application and the tests with a single Build Definition: since the build number you've just selected when starting the tests is stored along with the test results on TFS, you will later know which version of your application you were testing (assuming you deployed the right version, of course).
You could go a little bit further if you want even more automation. (This is the way I'm currently running automated tests.)
Use the Build-Deploy-Test template (this is a good place to start reading about Setting Up Automated Build-Deploy-Test Workflows).
Using this approach you will be able to automate deployment of the application you want to test.

Anyone successful in debugging unit tests for iPhone?

I found examples of how to debug unit tests in Cocoa, such as the ADC page here.
But I can't get debugging to work for an iPhone app target. I can get the tests up and running, and they are run during the build, but what I need is to debug the tests for some of the more complex failures.
You might consider moving your tests to GHUnit, where they run in a normal application target, so debugging is straightforward.
This can be done by setting up a separate Executable for the project that uses the otest tool to run the unit tests, after setting a bunch of relevant environment variables for the executable. I have used this method to successfully debug SenTestingKit logic unit tests.
I found the following links helpful:
http://www.grokkingcocoa.com/how_to_debug_iphone_unit_te.html (also contains help to fix common errors encountered setting up the project).
http://cocoawithlove.com/2009/12/sample-iphone-application-with-complete.html (covers both logic tests and application tests)
http://developer.apple.com/mac/library/documentation/Darwin/Reference/ManPages/man1/otest.1.html (man page for the otest Xcode tool)
The NSLog messages show up in Console.app.
Should give you a starting point.
In Xcode 4, you can set breakpoints in your unit tests.
Create a new project with "include unit tests" checked.
Put a breakpoint in the failing unit test.
Press Command-U to test.
If you do Build & Go instead of just Build, then you can set breakpoints in your unit tests and debug them traditionally. This is if you are using the Google Toolbox for iPhone unit testing; I don't know how you are doing it and whether the process is different.