How to make JUnit annotations work in SOAP UI? - junit4

I am trying to port my JUnit test scripts into SoapUI. Since SoapUI supports Java, I thought it would support JUnit as well. I have placed the JUnit jar in the 'ext' folder. When I run the test, I can see that the @Test annotation is not being recognized by SoapUI.
I get the error below:
Script1.groovy: 9: Invalid constructor format. Remove 'void' as the return type if you want a constructor, or use a different name if you want a method. at line: 9 column: 4. File: Script1.groovy @ line 9, column 4.
@Test
Am I doing it entirely wrong?

Without seeing your code (it hasn't been posted as of writing this), you probably don't need a return type of void, especially on the constructor.
I'm still a little confused by your question, though: are you trying to run JUnit from inside SoapUI (it appears that way from the error message you are getting), or do you want to run it from a Java class, using JUnit and calling SoapUI? If you are doing the latter, the following example shows the format that should work:
import org.junit.Test;

// The SoapUI runner classes below come from the SoapUI libraries (com.eviware.soapui.tools).
import com.eviware.soapui.tools.SoapUIMockServiceRunner;
import com.eviware.soapui.tools.SoapUITestCaseRunner;

public class CalculatorServiceTestCase {

    @Test
    public void testCalculatorService() throws Exception {
        SoapUITestCaseRunner testCaseRunner = new SoapUITestCaseRunner();
        SoapUIMockServiceRunner mockServiceRunner = new SoapUIMockServiceRunner();

        testCaseRunner.setProjectFile("src/test/resources/calculator-soapui-project.xml");
        mockServiceRunner.setProjectFile("src/test/resources/calculator-soapui-project.xml");

        mockServiceRunner.run();
        testCaseRunner.run();
    }
}

Related

Migration problems when migrating from NUnit 2.X to NUnit 3.X

I'm using the NUnit 2.X library but want to use NUnit 3.X now. I have some problems with the migration from 2.X to 3.X. First, I have a setup fixture class. Here is the 2.X version:
using System;
using System.IO;
using System.Reflection;
using HalisEnerji.QuantSignal.Logging;
using NUnit.Framework;

namespace HalisEnerji.QuantSignal.Tests
{
    [SetUpFixture]
    public class Initialize
    {
        [SetUp]
        public void SetLogHandler()
        {
            Log.LogHandler = new ConsoleLogHandler();
        }
    }
}
The first problem was fixed by replacing the [SetUp] attribute with [OneTimeSetUp]. The second problem was fixed by adding some code to set the test directory, because I'm using the ReSharper test runner. Here is the final shape of the setup fixture:
using System;
using System.IO;
using System.Reflection;
using HalisEnerji.QuantSignal.Logging;
using NUnit.Framework;

namespace HalisEnerji.QuantSignal.Tests
{
    [SetUpFixture]
    public class Initialize
    {
        [OneTimeSetUp]
        public void SetLogHandler()
        {
            Log.LogHandler = new ConsoleLogHandler();

            var assembly = Assembly.GetExecutingAssembly();
            var localPath = new Uri(assembly.CodeBase).LocalPath;
            var directoryName = Path.GetDirectoryName(localPath);
            if (directoryName != null)
            {
                Environment.CurrentDirectory = directoryName;
            }
        }
    }
}
Well, after solving the setup fixture problem, my real problems begin with the use of TestCaseSource/TestCaseData. Here is a sample 2.X version:
[Theory]
[TestCaseSource("CreateSymbolTestCaseData")]
public void CreateSymbol(string ticker, Symbol expected)
{
    Assert.AreEqual(Symbol.Create(ticker), expected);
}

private TestCaseData[] CreateSymbolTestCaseData()
{
    return new[]
    {
        new TestCaseData("SPY", new Symbol(Security.GenerateEquity("SPY"), "SPY")),
        new TestCaseData("EURUSD", new Symbol(Security.GenerateForex("EURUSD"), "EURUSD"))
    };
}
The 2.X version throws an exception and my tests fail. In short, the exception says that the TestCaseData provider method must be static. Well, after marking the method with the static modifier the test works correctly, but this time another test fails (it did not fail before the static modifier). Why does the other test fail? Because it reads a file from the test directory, and somehow it now runs before the setup fixture code runs and changes the test directory!
Before using the static modifier, the SetUpFixture code ran first and then the test code ran. After using the static modifier the order changes: my test that reads a file from the test directory (which is ReSharper's temporary directory and does not contain the necessary file) runs first, and the SetUpFixture code runs after it. Any idea how to get all my tests to pass?
UPDATE:
Let me explain the pieces involved:
I have Initialize.cs (part of my test assembly), which is responsible for setting up CurrentDirectory.
I have Config.cs (part of my project's infrastructure assembly), which is my project configuration file; it has a public static readonly Settings property that reads the configuration file from CurrentDirectory.
I have ConfigTests.cs (part of my test assembly), which contains some test methods that read/write the Settings property.
When I debug the tests:
Before using any static TestCaseSource, they run in the order below:
A. Initialize.cs => Setup method
B. Config.cs => static Settings property getter method
C. ConfigTests.cs => first test method
So Initialize runs first and the others run later, and all tests pass.
After using a static TestCaseSource inside another test file, say OrdersTests.cs (excluded from the project for the first scenario and included again afterwards), the order somehow changes as below:
A. Config.cs => static Settings property getter method
B. OrdersTests.cs => static TestCaseSource method (not test method)
C. Initialize.cs => Setup method
D. ConfigTests.cs => first test method
E. OrdersTests.cs => first test method
So my ConfigTests.cs tests fail because Initialize.cs runs after Config.cs. I hope this update makes my problem clearer.
Is this problem related to NUnit, ReSharper, or Visual Studio? I don't know; all I know is that my previously passing tests are failing now!
UPDATE 2:
Chris,
Yes, you are right. I explored the project in detail and saw that the problem is that some of my project's classes access the static Config class and its static Settings property (before the test setup fixture method runs, and even before the static test case source method!). You talked about the order in which test methods are processed; NUnit runs the tests as you said, not as I said. But when I tried your solution (setting the current directory before the test case source), it did not work, so I solved my problem another way. I'm not happy about it, but at least my test methods work now. Could you please tell me the technical reason that static test case source methods run before the initialize/setup method? Is this because of NUnit or because of the .NET Framework infrastructure? I'm not a fanatic about NUnit and/or TDD and I don't have deep knowledge of these concepts, but it does not make sense to me to run any method before the setup method.
Thanks for your interest.
Because it reads a file from the test directory, and somehow it now runs before the setup fixture code runs and changes the test directory!
How are you reading this file? You should use TestContext.CurrentContext.TestDirectory to get the test directory in NUnit 3, rather than relying on the location of the current directory. See the Breaking Changes page for details.
Edit: I also see you've tagged this ReSharper 7.1. You should be aware that this version of ReSharper does not support NUnit 3 - the first version that does is ReSharper 10. Your tests will appear to run correctly, however you may experience weird side effects, and this may break in any future version of NUnit.
Response to update:
Take a look at NUnit 3's Breaking Changes page. There are two relevant breaking changes between NUnit 2 and 3.
TestCaseSource methods must now be static.
The current directory is no longer set to the directory containing the test assembly by default.
The first you've solved easily enough. The second is what's now causing you issues.
NUnit 3 runs its methods in this order:
Evalute TestCaseSource methods (OrdersTests.cs)
Run SetUpFixture (Initialize.cs)
Run Test (ConfigTests/OrdersTests)
I'm not sure why Config.cs is being called before your TestCaseSource method - are you sure of that order? Does anything in CreateSymbolTestCaseData() call anything in Config.cs? You could try rewriting your TestCaseSource like so:
private static TestCaseData[] CreateSymbolTestCaseData()
{
    Environment.CurrentDirectory = @"c:\RequiredDirectory";
    return new[]
    {
        new TestCaseData("SPY", new Symbol(Security.GenerateEquity("SPY"), "SPY")),
        new TestCaseData("EURUSD", new Symbol(Security.GenerateForex("EURUSD"), "EURUSD"))
    };
}

Run As TestNG is not shown for class extending AbstractTestNGCucumberTests

Please note that I've searched for this particular question and found a couple of them, but none of them had a scenario related to Cucumber integration.
I have a test runner class extending AbstractTestNGCucumberTests.
I've also installed the Eclipse TestNG plugin, version 6.12.
Adding an entry under TestNG in Run Configurations didn't help solve the issue either.
Mac + Eclipse 4.7.0
@CucumberOptions(features = {"src/test/resources/WunderlistAndroid.feature"}, strict = false,
        format = {"pretty", "json:target/cucumber.json"}, tags = {"~@ignore"})
public class WLSignIn extends AbstractTestNGCucumberTests {

    @BeforeClass
    public void launchAppiumServer() {
        // code doing desired action
    }

    @AfterClass
    public void killAppiumServer() {
        // code doing desired action
    }
}
The problem is due to the fact that the Eclipse TestNG plugin doesn't see any @Test methods in your class. I believe the plugin is contextual in nature and hence shows Run As > TestNG Test only when it sees at least one @Test method in your test class. Since the @Test method resides in your base class, the plugin doesn't see it, and hence you don't get the option.
To get past this, you can perhaps add a dummy test method such as the one below, and that should bring back the Run As > TestNG Test option.
@Test(enabled = false)
public void dummyTestMethod() {}
On a side note: you might want to file this as an issue in the TestNG project and see if it's worth getting fixed.
Details that can be used for the bug report:
If the base class resides within a jar (and has one or more @Test-annotated test methods), then the Eclipse TestNG plugin doesn't see the child class (WLSignIn) the first time. But after one adds a disabled @Test method to the child class (WLSignIn), the option shows up. This happens irrespective of whether the child class extends from another class in the same project or from another class which resides in a jar (in your case cucumber.api.testng.AbstractTestNGCucumberTests).

Is there an equivalent of TestNG's @BeforeSuite in JUnit 4?

I'm new to the test automation scene, so forgive me if this is a stupid question, but Google has failed me this time. Or at least anything I've read has just confused me further.
I'm using JUnit 4 and Selenium WebDriver within Eclipse. I have several tests that I need to run as a suite and also individually. At the moment these tests run fine when run on their own. At the start of the test an input box is presented to the tester/user asking first what server they wish to test on (this is a string variable which becomes part of a URL) and what browser they wish to test against. At the moment, when running the tests in a suite, the user is asked this at the beginning of each test, because obviously this is coded into each of their @Before methods.
How do I take in these values once, and pass them to each of the test methods?
So if server = "server1" and browser = "firefox" then firefox is the browser I want selenium to use and the URL I want it to open is http://server1.blah.com/ for all of the following test methods. The reason I've been using seperate #Before methods is because the required URL is slightly different for each test method. i.e each method tests a different page, such as server1.blah.com/something and server1.blah.com/somethingElse
The tests run fine, I just don't want to keep inputting the values because the number of test methods will eventually be quiet large.
I could also convert my tests to TestNG if there is an easier way of doing this in TestNG. I thought the @BeforeSuite annotation might work, but now I'm not sure.
Any suggestions and criticism (the constructive kind) are much appreciated
You can adapt the solution for setting a global variable for a suite in this answer to JUnit 4 Test invocation.
Basically, you extend Suite to create MySuite. This creates a static variable/method which is accessible from your tests. Then, in your tests, you check the value of this variable. If it's set, you use the value. If not, then you get it. This allows you to run a single test and a suite of tests, but you'll only ask the user once.
So, your suite will look like:
public class MySuite extends Suite {

    public static String url;

    /**
     * Called reflectively on classes annotated with <code>@RunWith(Suite.class)</code>
     *
     * @param klass the root class
     * @param builder builds runners for classes in the suite
     * @throws InitializationError
     */
    public MySuite(Class<?> klass, RunnerBuilder builder) throws InitializationError {
        super(klass, builder); // let Suite discover the @SuiteClasses
        // put your global setup here
        MySuite.url = getUrlFromUser();
    }
}
This would be used in your Suite like so:
@RunWith(MySuite.class)
@SuiteClasses({FooTest.class, BarTest.class, BazTest.class})
Then, in your test classes, you can either do something in the @Before/@After, or better look at TestRule, or if you want Before and After behaviour, look at ExternalResource. ExternalResource looks like this:
public class FooTest {

    private String url;

    @Rule
    public ExternalResource resource = new ExternalResource() {
        @Override
        protected void before() throws Throwable {
            url = (MySuite.url != null) ? MySuite.url : getUrlFromUser();
        }

        @Override
        protected void after() {
            // if necessary
        }
    };

    @Test
    public void testFoo() {
        // something which uses url
    }
}
You can of course externalize the ExternalResource class, and use it from multiple Test Cases.
I think the main functionality of TestNG that will be useful here is not just @BeforeSuite but @DataProvider, which makes it trivial to run the same test with a different set of values (and won't require you to use statics, which always become a liability down the road).
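For illustration, here is a minimal, hypothetical sketch of a TestNG data provider feeding server/browser pairs into a single test method (the class, provider, and value names are made up):

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class ServerBrowserTest {

    // Each row is one (server, browser) combination the test will run with.
    @DataProvider(name = "environments")
    public Object[][] environments() {
        return new Object[][] {
            {"server1", "firefox"},
            {"server2", "chrome"}
        };
    }

    // TestNG invokes this test once per row supplied by the provider.
    @Test(dataProvider = "environments")
    public void opensHomePage(String server, String browser) {
        String url = "http://" + server + ".blah.com/";
        // create the WebDriver for 'browser' and open 'url' here
    }
}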
You might also be interested in TestNG's scripting support, which makes it trivial to ask the user for some input before the tests start, here is an example of what you can do with BeanShell.
It might make sense to group the tests so that each test suite shares the same @Before method code, giving you a separate suite for each page.
Another option might be to use the same base URL for each test but navigate to the specific page by getting Selenium to click through to where you want to carry out the test.
If using @RunWith(Suite.class), you can add static methods with @BeforeClass (and @AfterClass), which will run before (and after) the entire Suite you define (see the sketch below). See this question.
This of course won't help if you are referring to the entire set of classes found dynamically, and are not using Suite runner.
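A minimal sketch of that approach (class names are illustrative; FooTest/BarTest stand in for your real test classes):

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({FooTest.class, BarTest.class})
public class AllTests {

    // Runs once before any test class in the suite.
    @BeforeClass
    public static void beforeSuite() {
        // e.g. ask the user for server/browser once and stash the values in static fields
    }

    // Runs once after every test class in the suite has finished.
    @AfterClass
    public static void afterSuite() {
        // e.g. clean up anything shared across the suite
    }
}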

Eclipse: how to update a JUnit test file with newly added method in the source file?

Using Eclipse (Helios), I could create a JUnit test file ClassATest.java for the source file ClassA.java by using New -> JUnit Test Case -> Class under test..., then choosing all the methods of ClassA to be tested.
If we later add some more methods to ClassA, how do we easily reflect this addition in ClassATest? (No copy/paste, please.)
One solution is to use MoreUnit
With MoreUnit installed in Eclipse, one can right-click the newly added (and not yet unit-tested) method and choose "Generate Test".
Of course, if one always follows the write-the-test-before-writing-the-method style, then this solution is not needed. However, in reality you sometimes don't have a clear idea of what you want to do; in that case you have to code up some method, play with it, then rethink and code again until you are satisfied with the code and want to make it stable by adding a unit test.
You should look into creating a JUnit test suite which will execute all tests within the classes you specify. Thus, adding new test cases is as simple as creating a new class and adding it to the @Suite.SuiteClasses list (as seen below).
Here's an example.
Example JUnit Test Suite Class:
@RunWith(Suite.class)
@Suite.SuiteClasses({
    TestClassFoo.class
})
public class ExampleTestSuite {}
Example Test Case class:
public class TestClassFoo {

    @Test
    public void testFirstTestCase() {
        // code up test case
    }
}

How to diagnose "TestFixtureSetUp Failed"

We use TeamCity as our CI server, and I've just started seeing "TestFixtureSetUp Failed" in the test failure window.
Any idea how I go about debugging this problem? The tests run fine on my workstation (R# test runner in VS2008).
It is a bit of a flaw in the implementation of TestFixtureSetUp (and TestFixtureTearDown) that any exceptions are not well reported. I wrote the first implementation of them and I never got it to work the way it was supposed to. At the time the concepts in the NUnit code were tightly coupled to the idea that actions were directly related to a single test. So the reporting of everything was related to a test result. There wasn't really a space for reporting something that happened at the suite level without a huge re-write (it isn't a refactoring when you change a sheep into an escalator).
Because of that bit of history it's hard to find out what really happened in a TestFixtureSetUp. There isn't a good place to attach the error. The TestFixtureSetUp call is a side effect of running a test instead of being directly related to it.
@TrueWill has the right idea. Check the logs and then modify the test to add more logging if necessary. You might want to put a try/catch inside the TestFixtureSetUp and log a lot in the catch block. I just thought I could add some background to it (in other words, it's kind of my fault).
I'd check the Build Log first.
If it's not obvious from that, you could try including Console.WriteLines in the tests - I'm not positive, but I think those are written to the Build Log. Alternately you could log to a file (even using log4net if you wanted to get fancy).
If you have Visual Studio installed on the CI server, you could try running the build/tests from there. If it's a connectivity issue, that might resolve it.
I've seen path issues, though, where relative paths to files were no longer correct or absolute paths were used. These are harder to debug, and might require logging the paths and then checking if they exist on the build server.
I ran into this today when creating some integration tests that have long-running setup that I don't want to duplicate. I ended up wrapping all the test fixture setup logic in a try/catch. I then added a SetUp method whose sole purpose is to see if a failure occurred during fixture setup and provide better logging.
Exception testFixtureSetupException = null;

[TestFixtureSetUp]
public void FixtureSetup()
{
    try
    {
        // DoTestFixtureSetup
    }
    catch (Exception ex)
    {
        testFixtureSetupException = ex;
    }
}

[SetUp]
// NUnit doesn't support very useful logging of failures from a TestFixtureSetUp method. We'll do the logging here.
public void CheckForTestFixturefailure()
{
    if (testFixtureSetupException != null)
    {
        string msg = string.Format(
            "There was a failure during test fixture setup, resulting in a {1} exception. You should check the state of the storage accounts in Azure before re-running the RenewStorageAccountE2ETests. {0}Exception Message: {3}{0}Stack Trace:{4}",
            Environment.NewLine, testFixtureSetupException.GetType(), accountNamePrefix, testFixtureSetupException.Message, testFixtureSetupException.StackTrace);
        Assert.Fail(msg);
    }
}
I was getting the same error while running any test with SpecFlow using Visual NUnit. When I tried doing the same from the Unit Test Explorer (provided by ReSharper), it gave a slightly more helpful message: binding methods with more than 10 parameters are not supported. I realized I can't have a SpecFlow method with more than 10 params, so I had to remove the test.
Doing a quick switch to VS Unit Testing let me see that I was not creating my test database correctly. In my case it gave a better explanation of why it failed. I usually use NUnit.
"Unable to create instance of class X. Error: System.Data.SqlClient.SqlException: A file activation error occurred. The physical file name '\DbTest.mdf' may be incorrect. Diagnose and correct additional errors, and retry the operation.
CREATE DATABASE failed. Some file names listed could not be created. Check related errors..
"
Run the unit test in debug mode. You may find a runtime error in the setup.
If you are using SpecFlow and C# in Visual Studio, look at the auto-generated <whatever>.feature.cs file after the test fails. On the public partial class <whatever>Feature line, you should see a symbol which when hovered over will show the reason that the NUnit fixture setup failed. In my case, it was that some of my BeforeFeature methods in my TestHooks class were not static. All BeforeTestRun, AfterTestRun, BeforeFeature, and AfterFeature methods need to be static.
I had this issue and it was caused by adding a private readonly Dictionary in the class, much the same way that you add a private const string.
I tried to make the Dictionary a const, but C# only allows compile-time constants there. I solved this by putting my Dictionary in a method that returns it.
I was troubled by this today. I did the following to get the actual error.
(1) Write another test in a separate fixture which initializes an instance of the troubling test fixture, explicitly calls setup methods such as TestFixtureSetUp and SetUp if any, and then executes the target test method.
(2) Add exception handling around the new code above, and log or output the actual exception somewhere.
You can catch the exception and write it to the console in the TearDown.
Something like:
[SetUpFixture]
public class BaseTest
{
    private Exception caughtException = null;

    [SetUp]
    public void RunBeforeAnyTests()
    {
        try
        {
            throw new Exception("On purpose");
        }
        catch (Exception ex)
        {
            caughtException = ex;
        }
    }

    [TearDown]
    public void RunAfterAnyTests()
    {
        if (caughtException != null)
        {
            Console.WriteLine(string.Format("TestFixtureSetUp failed in {0} - {1}", this.GetType(), caughtException.Message));
        }
    }
}
And the result will be:
TestFixtureSetUp failed in IntegratedTests.Services.BaseTest - On purpose
I had this symptom caused by an error during field initialization. If you initialize your fields in the [SetUp] method, you should see a better error message.
[TestFixture]
internal class CommandParserTest
{
    // obscure error message
    private CommandParser parser = new CommandParser(...);

    ...
}

[TestFixture]
internal class CommandParserTest
{
    private CommandParser parser;

    [SetUp]
    public void BeforeTest()
    {
        // better error message
        parser = new CommandParser(...);
    }

    ...
}