Citrus Framework - echo action does not "echo"

I am attempting to debug/trace an integration test written with the Citrus Framework. Among the various test "actions" that can be taken, there is an echo action which is supposed to do what you might expect: echo something to the console log. The problem is: it does not echo.
When I run the integration test (via the Maven Failsafe plugin), errors from the failing test appear on the console, but nothing else.
What am I missing?
UPDATE:
This appears to only be a problem when running the integration tests as part of a Maven build. When the test is run from Eclipse, the complete console log appears.

This may be an issue with your test names. The Maven Failsafe plugin has a default naming convention and only runs tests that match it (by default, classes named IT*, *IT, or *ITCase).
Please review the default naming pattern of the Failsafe plugin and see whether this fixes your issue.
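If your integration tests follow a different convention, the includes can be overridden; a minimal sketch for the pom.xml (the pattern here is illustrative):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <includes>
      <!-- match your own naming convention instead of the defaults -->
      <include>**/*IntegrationTest.java</include>
    </includes>
  </configuration>
</plugin>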

I was able to figure out how to get logging to capture the Citrus integration test console output; see the related issue Citrus Framework logging - how to enable/use.
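In short: Citrus logs through SLF4J, so a logging binding has to be on the classpath and configured. A minimal sketch of a logback.xml, assuming a Logback binding (pattern and level are illustrative):

<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <!-- INFO should be verbose enough for echo output to appear (assumption) -->
  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>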

Related

Does the XML Report Processing work for NUnit3?

I'm currently moving one of our projects to DNX (.NET Core now) and I was forced to update to NUnit 3. Because of other considerations, we compile the test project as a console app with its own entry point, basically self-hosting the NUnit runner.
I now need to report the results to TeamCity via the XML Reporter, which doesn't seem to parse NUnit 3 TestResults.xml files.
Any advice on how to work around this?
The NUnit 3 console has the option to produce results formatted in the NUnit 2 style.
Use the option:
--result=[filename];format=nunit2
Docs: https://github.com/nunit/nunit/wiki/Console-Command-Line
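For example, assuming the NUnit 3 console runner (assembly and file names are illustrative; quote the argument if your shell treats the semicolon specially):

nunit3-console.exe MyTests.dll "--result=TestResult.xml;format=nunit2"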
To add to the answer above:
NUnitLite inherits the --result CLI parameter which seems to do the trick.
Another option, which I went for in the end, is using the --teamcity CLI parameter:
dotnetbuild --project:<path to project directory> -- --teamcity
which integrates with TeamCity's service messages. This also gives real-time progress updates.

JaCoCo code coverage for a remote machine

I tried to find an answer to this but could hardly find one anywhere. I am doing API testing; as part of the process I need to call a REST API from my local machine. The local machine contains the Maven project and a framework for calling the respective REST API.
I need to check the code coverage of the remote REST API and produce a report based on that coverage. Please help: how do I do that?
Note: I found this link useful, but it does not elaborate clearly on what to do:
http://eclemma.org/jacoco/trunk/doc/agent.html
You will probably have to do a bit of file copying around, depending on the way you run the tests.
JaCoCo runs as a Java agent, so you usually add the -javaagent parameter, as mentioned in the docs you linked, to the start script of your application server:
-javaagent:[yourpath/]jacocoagent.jar=[option1]=[value1],[option2]=[value2]
So it would look like:
java -javaagent:yourpath/jacocoagent.jar -jar myjar.jar
Using Tomcat, you can add the "-javaagent" part to the JAVA_OPTS or CATALINA_OPTS environment variables. It should be similar for other servers.
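For example, for Tomcat (the agent path and destfile location here are assumptions):

export CATALINA_OPTS="-javaagent:/opt/jacoco/jacocoagent.jar=destfile=/tmp/jacoco-it.exec,output=file"
$CATALINA_HOME/bin/startup.sh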
This will create the jacoco*.exec files. You need to copy those back to your build or CI server to show the results (for example, if you use Sonar, you need those files before running the Sonar reporter). It is important to include only the packages you're interested in.
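The agent's includes option handles that filtering, and the exec file can then be copied back afterwards (package name, host, and paths are illustrative):

java -javaagent:jacocoagent.jar=destfile=jacoco-it.exec,includes=com.example.* -jar myjar.jar
scp user@remote-host:/path/to/jacoco-it.exec build/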
You can also create one jacoco.exec file per test flavour (jacoco.exec for unit tests, jacoco-it.exec for integration tests, jacoco-at.exec for application tests).
And I would not mix coverage with performance testing, just to mention that too.
There are some examples on Stack Overflow for JBoss.

Eclipse Grails run as JUnit test

I'm new to using Eclipse for Grails (using STS) and I'm trying to figure out an easy way to run the unit tests. I've seen that I can do it by right-clicking and choosing Run As > Grails Command (test-app). This works but is slow, and the test output goes to the test report HTML page with no apparent clickable stack traces.
I can also do Run As > JUnit Test, which appears to be much faster and gives me the traditional JUnit console available in non-Grails tests. When running unit tests, is there a difference between the two? Is the grails command setting up other things or doing anything else?
You are performing a full-blown test with all bells and whistles on. :)
According to the docs:
test-app: Runs all Grails unit and integration tests and generates reports.
Setting up the container for the integration tests is what makes it more 'expensive'.
You can limit the test cases that are run by passing 'unit:' as a parameter to indicate that only unit tests need to be run (when not running JUnit directly from Eclipse).
In your case you could do:
test-app unit:
or for a specific FooBarTests.groovy file:
test-app unit: FooBar
Optionally, you can add -echoOut or -echoErr to get more verbose output.
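For example, to run a single unit test class with its console output echoed (the class name is illustrative):

grails test-app unit: FooBar -echoOut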
Check out the docs for more info and different phases of testing.

Makegood in Eclipse says "The main script is not found"

I googled this unexpected error message and there was not a single result.
I am using Eclipse Helios (3.6) with the MakeGood plugin to run PHPUnit tests.
PHPUnit is working just fine.
I can also use MakeGood to run a single test class.
But when I run all tests, MakeGood refuses to do it and displays
'Launching <currentfilename>' has encountered a problem.
The main script is not found.
It looks like there is some internal issue with MakeGood. I just don't know how to get started debugging this. Is this an Eclipse or a MakeGood error message? What does it mean? Is there any log or debug mode I could use to understand what's happening?
Recently, I've encountered this problem when executing the Run All Tests command while the project has no PHP script under the specified test folders. Since the Xdebug implementation of PDT requires a PHP file, tests cannot be run in such a state.
To prevent this, MakeGood checks whether the project has at least one PHP file under the specified test folders, and skips a test run if the project has none. But even so, this error is raised for some reason...
I created an issue http://redmine.piece-framework.com/issues/310 to fix this problem.
Thank you for using MakeGood.

How to report the progress when NUnit tests crash on a CruiseControl.NET server?

NUnit works quite well with CruiseControl.NET, but there is one thing that irritates me a lot.
If there is a test that causes NUnit to crash, I get only a little information about the crash, because the XML report of NUnit never gets a chance to be created and merged into the CruiseControl report.
I need a way to report the progress even when Nunit crashes during the execution.
I have tried to force each test to output some information to the console to resolve this problem. I have thought about using a SetUp method, but I haven't found any good way to get the name of the currently running test.
I think a better answer would be to create an NUnit add-in that implements the EventListener interface and captures the TestStarted event to output the progress to the console or a file; a sketch of such an add-in follows the procedure below.
The EventListener interface is documented on NUnit website: http://nunit.org/index.php?p=eventListeners&r=2.5
In addition, we can make the Dashboard report better even when NUnit crashes during execution. We can use the following procedure to ensure that the Dashboard always shows something about the tests:
Run tests with the EventListener which outputs the progress to a separate file
After running tests, use another program to check the file
If the file does not contain a specific "end line", generate a special XML report based on the file and merge it into the CruiseControl log
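A minimal sketch of such an add-in against the NUnit 2.5 extensibility API (the log file path is a placeholder, and the exact member signatures are assumptions to verify against your NUnit version):

using System;
using System.IO;
using NUnit.Core;
using NUnit.Core.Extensibility;

[NUnitAddin(Description = "Writes test progress to a file")]
public class ProgressListener : IAddin, EventListener
{
    // Hypothetical output path - adjust for your environment.
    private const string LogFile = @"C:\logs\test-progress.log";

    public bool Install(IExtensionHost host)
    {
        IExtensionPoint listeners = host.GetExtensionPoint("EventListeners");
        if (listeners == null) return false;
        listeners.Install(this);
        return true;
    }

    public void TestStarted(TestName testName)
    {
        File.AppendAllText(LogFile, "STARTED " + testName.FullName + Environment.NewLine);
    }

    public void TestFinished(TestResult result)
    {
        File.AppendAllText(LogFile, "FINISHED " + result.FullName + Environment.NewLine);
    }

    // Write the "end line" that the post-processing step looks for.
    public void RunFinished(TestResult result)
    {
        File.AppendAllText(LogFile, "RUN COMPLETE" + Environment.NewLine);
    }

    // Remaining EventListener members are not needed here.
    public void RunStarted(string name, int testCount) { }
    public void RunFinished(Exception exception) { }
    public void SuiteStarted(TestName testName) { }
    public void SuiteFinished(TestResult result) { }
    public void UnhandledException(Exception exception) { }
    public void TestOutput(TestOutput testOutput) { }
}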
If getting the name of the currently running test is what you're after, you could grab it with the following:
using System.Diagnostics;
using NUnit.Framework;
...
[Test]
public void SomeTestThatWillCrash()
{
    // A parameterless StackFrame refers to the currently executing method.
    StackFrame sf = new StackFrame();
    Console.WriteLine("Now running method: " + sf.GetMethod().Name);
    ...
}
CruiseControl.NET recommends that you use NUnit through your builder (i.e. NAnt/MSBuild). See here: http://confluence.public.thoughtworks.org/display/CCNET/NUnit+Task. As they describe, it will allow you to run these tests locally first, which should give you an exception that you can clear up.
That being said, are your developers running these unit tests prior to checking in code? That could ease this issue. If it's an integration issue, I would suggest grabbing the latest code base and running the tests locally to see what is out of sorts.
I don't know if NUnit is able to create the results file even when it crashes. Even if it did, you could run into problems if that file is not well formed due to the crash.
You could use @jpoh's approach but do it in the TestSetup method, which would require you to do it per fixture. If really needed, you could write a base class that all your test fixtures inherit from that implements this method.
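A rough sketch of such a base class; this uses NUnit's TestContext (available in later NUnit 2.x releases - treat its availability as an assumption for your version) rather than the StackFrame trick, because a stack frame captured inside SetUp reports SetUp itself:

using System;
using NUnit.Framework;

public abstract class ProgressReportingFixture
{
    [SetUp]
    public void ReportTestStart()
    {
        // Reports the name of the test that is about to run.
        Console.WriteLine("Now running: " + TestContext.CurrentContext.Test.FullName);
    }
}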
Another solution is to use MSBuild to run NUnit via the <NUnit> task in the MSBuild Community Tasks library. This allows you to continue on error and also to get the exit code back from NUnit. You won't learn which method caused the problem, but it might help some. Here is my MSBuild target:
<Target Name="UnitTest" DependsOnTargets="BuildIt">
  <NUnit Assemblies="@(TestAssemblies)"
         ToolPath="$(NUnitx86Path)"
         WorkingDirectory="%(TestAssemblies.RootDir)%(TestAssemblies.Directory)"
         OutputXmlFile="@(TestAssemblies->'%(FullPath).$(NUnitFile)')"
         Condition="'@(TestAssemblies)' != ''"
         ExcludeCategory="$(ExcludeNUnitCategories)"
         ContinueOnError="true">
    <Output TaskParameter="ExitCode" ItemName="NUnitExitCodes"/>
  </NUnit>
  <!-- Copy the test results for the CCNet build before a possible build failure (see next step) -->
  <CallTarget Targets="CopyTestResults" Condition="'@(TestAssemblies)' != ''"/>
  <Error Text="Test error(s) occurred" Code="%(NUnitExitCodes.Identity)" Condition="'%(NUnitExitCodes.Identity)' != '0' And '@(TestAssemblies)' != ''"/>
</Target>
This probably won't fit your needs as is, but it is something to try out and play with.
That said, I would agree with @rifferte that it sounds like you need to debug the problem locally and not rely on CC.NET to handle the reporting.