How to run code before/after cucumber suite? - scala

I'm trying to figure out how to run some code before and after all my Cucumber tests run.
I've been tracking down a bug for a few days where some of our processes create jobs on a server and don't properly clean them up. It's easy to miss, so ideally I don't want engineers to have to manually add a check to every test.
I was hoping there'd be a way to put a hook in before any tests run to cache how many jobs exist on the server, then a hook at the end to ensure that the value hasn't changed.
I know this isn't really the best way to use Cucumber, as this is more of a system-test kind of check, but doing it this way would be the best way to fit it into the existing infrastructure.

Use the @BeforeClass and @AfterClass annotations in your runner file.
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.runner.RunWith;

import cucumber.api.junit.Cucumber;

@RunWith(Cucumber.class)
@Cucumber.Options(
        format = {"json:<the report file>"},
        features = {"<the feature file>"},
        strict = false,
        glue = {"<package with steps classes>"})
public class TestRunFile {

    @BeforeClass
    public static void getJobNumbersOnServerBeforeStarting() {
        // Implement logic
    }

    @AfterClass
    public static void getJobNumbersOnServerAfterCompletion() {
        // Implement logic
    }
}
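For the job-count check described in the question, the hook bodies could look roughly like this (just a sketch; JobServerClient and countJobs() are hypothetical placeholders for however you query your server):

private static int jobCountBefore;

@BeforeClass
public static void getJobNumbersOnServerBeforeStarting() {
    // Cache the job count before any scenario runs (hypothetical client call).
    jobCountBefore = JobServerClient.countJobs();
}

@AfterClass
public static void getJobNumbersOnServerAfterCompletion() {
    // Fail the run if any scenario leaked a job on the server.
    org.junit.Assert.assertEquals("Jobs were leaked on the server",
            jobCountBefore, JobServerClient.countJobs());
}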

How about using tagged hooks?
#Before("#jobCheck")
public void beforeScenario() {
// actions
}
#After("#jobCheck")
public void afterScenario() {
// actions
}
And then for each scenario that requires this check, add @jobCheck before the Scenario definition, as below.
Feature: Some feature description

  @jobCheck
  Scenario: It should process a sentence
    # The steps
More on JVM hooks here: https://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/

Related

How to run test methods individually with Citrus Framework?

I have the following code:
@Test
public class ApiTestIT extends TestNGCitrusTestDesigner {

    @CitrusTest(name = "testApi1IT")
    public void testApi1IT() {
        // TODO here
    }

    @CitrusTest(name = "testApi2IT")
    public void testApi2IT() {
        echo("Hello Citrus!");
    }
}
How can I run test methods individually?
I tried using -Dtest and -Dit.test, but it didn't work; all the tests always run at the same time.
Thanks.
To execute single test methods, you have to specify them in the -Dit.test property, like -Dit.test=ApiTestIT#testApi2IT.
Nevertheless, this functionality is not provided by Citrus but by the Maven Failsafe plugin. For more information, have a look at its documentation on Running a Single Test.
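For example, assuming a standard Failsafe setup where the integration tests are bound to the verify phase, the invocation would look something like:

mvn verify -Dit.test=ApiTestIT#testApi2IT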
Some examples with Citrus can be found in the Citrus samples repository.

Configuration settings make tests unstable with NCrunch

I have a problem with tests becoming unstable under NCrunch. It looks like it has to do with some shadow-copying issue. My test goes something like this:
class SaveViewSettings : ISaveSettings
{
    public void SaveSettings()
    {
        Properties.View.Default.Save();
    }
}

[TestFixture]
// ReSharper disable once InconsistentNaming
class SaveViewSettings_Should
{
    [Test]
    public void Save_Settings()
    {
        var ctx = Properties.View.Default;
        var sut = new SaveViewSettings();
        ctx.LeftSplitter = 12.34;
        sut.SaveSettings();
        ctx.Reload();
        ctx.LeftSplitter.Should().Be(12.34);
    }
}
When reloading the settings using ctx.Reload() I get:
System.Configuration.ConfigurationErrorsException : ...
----> System.Configuration.ConfigurationErrorsException...
(C:\...\AppData\Local\Remco_Software_Ltd\nCrunch.TestRunner.AppDom_Url_q2piuozo0uftcc2pz5zv15hpilzfpoqk\[version]\user.config...)
A similar problem was raised on the NCrunch forum about 3 months ago: Unrecognized configuration section userSettings
You might get similar errors with NCrunch when working on multiple solutions with application settings.
I think the root cause might be that NCrunch always uses the same product and user name when shadow-building, so that all configuration settings are mapped to the same user.config file path.
It seems that, for now, there is no known solution to this. A workaround is to manually delete the user config in
%LOCALAPPDATA%\Remco_Software_Ltd\nCrunch.TestRunner.AppDom_...\user.config.
Note that the usual way to do this, ctx.Reset(), might fail as well, so you really have to locate and delete the user.config yourself using ConfigurationManager.
I have automated this workaround by adding the following code, which stabilizes the test with NCrunch:
[Test]
public void Save_Settings()
{
#if NCRUNCH
    // Every once in a while NCrunch throws ConfigurationErrorsException, cf.:
    // - http://forum.ncrunch.net/yaf_postsm7538_Unrecognized-configuration-section-userSettings.aspx
    // - http://www.codeproject.com/Articles/30216/Handling-Corrupt-user-config-Settings
    // - http://stackoverflow.com/questions/2903610/visual-studio-reset-user-settings-when-debugging
    // - http://stackoverflow.com/questions/9038070/how-do-i-get-the-location-of-the-user-config-file-in-programmatically
    var config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.PerUserRoamingAndLocal);
    if (File.Exists(config.FilePath))
        File.Delete(config.FilePath);
#endif
    ...
Over the past few years I have come to consider .NET configuration settings a kind of legacy feature. It was introduced with .NET 2.0 and was great at the time, but it has some issues that you need to be aware of. Maybe it is a good idea to look for alternatives or abstractions, e.g. HumbleConfig, which makes it easy to switch.

How to write a custom JUnit runner that will log results

I am working with JUnit and I have this idea which I want to implement.
I want to write a runner that will log the results of each test to an Excel or a text file so that I can attach it to my reports.
What do I need to learn to get started?
Two alternatives:
Write a RunListener and use it like:
public static void main(String... args) {
    JUnitCore core = new JUnitCore();
    core.addListener(new MyRunListener());
    core.run(MyTestClass.class);
}
Write a RunListener again, but this time extend an org.junit.runner.Runner implementation and override its run method like:
@Override
public void run(RunNotifier notifier) {
    notifier.addListener(new MyRunListener());
    super.run(notifier);
}
The second approach can also be used in tests with the @RunWith(MyRunner.class) annotation.
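A minimal sketch of what the MyRunListener used in both snippets could look like, logging results to a plain text file (the class name and output path are placeholders; writing to Excel would additionally need a library such as Apache POI):

import java.io.FileWriter;
import java.io.IOException;
import org.junit.runner.Description;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

public class MyRunListener extends RunListener {
    private final FileWriter out;

    public MyRunListener() throws IOException {
        out = new FileWriter("test-results.txt"); // placeholder path
    }

    @Override
    public void testFinished(Description description) throws IOException {
        // Called for every test, passing or failing.
        out.write("finished: " + description.getDisplayName() + System.lineSeparator());
    }

    @Override
    public void testFailure(Failure failure) throws IOException {
        out.write("FAILED: " + failure.getTestHeader() + " - " + failure.getMessage()
                + System.lineSeparator());
    }

    @Override
    public void testRunFinished(Result result) throws IOException {
        out.write(result.getRunCount() + " run, " + result.getFailureCount() + " failed"
                + System.lineSeparator());
        out.close();
    }
}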

TestNG: sending report via mail within the @AfterSuite section

I'd like to send the report generated by TestNG (Java + Eclipse + TestNG) from within the @AfterSuite section.
Sending it is not a problem, but the report is generated after the @AfterSuite section runs, so basically I send the previous report instead of the latest one!
Any idea how I can solve this?
As you are seeing, @AfterSuite runs before the report is generated.
Have you thought about implementing a TestNG IReporter listener?
public class MyReporter implements IReporter {
    @Override
    public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> iSuites, String s) {
        // Create your bespoke results
        // Email results
    }
}
Obviously you can see a flaw, in that you have to generate your own results from the raw results data (which may be advantageous if you just want to email a subset of the data).
The ideal solution would be to extend the default report generator, but I am not sure this can be done. However, there is an existing listener provided by http://reportng.uncommons.org/, which actually produces a much nicer report output.
If you extend this class, call their code, and then add your email code afterwards, it may work:
public class MyReporter extends HTMLReporter {
    @Override
    public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> iSuites, String s) {
        super.generateReport(xmlSuites, iSuites, s);
        // Email results
    }
}
You can attach a listener to a test suite in several ways, as explained on the TestNG website (http://testng.org/doc/documentation-main.html#listeners-testng-xml).
An alternative to all of this would be to use a build tool like Maven to run your tests, then have a post-test event to email the results.
I copied the answer from Krishnan and it works for me.
By the way, in my test environment I needed to extend org.testng.reporters.EmailableReporter2 instead of EmailableReporter to get the correct counts.
See below for reference:
Krishnan Mahadevan, Jul 31, 2012, 8:58 am: I am guessing that you are referring to the TestNG-generated "emailable-report.html" which you would want to mail.
With that assumption, here's how you should be able to do it:
Extend org.testng.reporters.EmailableReporter.
Override org.testng.reporters.EmailableReporter.generateReport(List<XmlSuite>, List<ISuite>, String) and have it do something as below:
@Override
public void generateReport(List<XmlSuite> xml, List<ISuite> suites, String outdir) {
    super.generateReport(xml, suites, outdir);
    SendFileEmail e = new SendFileEmail();
    e.sendEmail();
}
Now add this listener of yours into your suite file using the <listeners> tag.
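For reference, the suite-file wiring might look roughly like this (a sketch; the package and class names are placeholders):

<suite name="MySuite">
  <listeners>
    <listener class-name="com.example.MyReporter" />
  </listeners>
  <test name="MyTest">
    <classes>
      <class name="com.example.MyTestClass" />
    </classes>
  </test>
</suite>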

NUnit extension

Hi all, I have a question regarding the NUnit Extension (2.5.10).
What I am trying to do is write some additional test info to the database. For that I have created an NUnit extension using event listeners.
The problem I am experiencing is that the public void TestFinished(TestResult result) method is being called twice at runtime. My code which writes to the database is in this method, and that leaves me with duplicate entries in the database. The question is: is that the expected behaviour? Can I do something about it?
The extension code is below. Thanks.
using System;
using NUnit.Core;
using NUnit.Core.Extensibility;

namespace NuinitExtension
{
    [NUnitAddinAttribute(Type = ExtensionType.Core,
        Name = "Database Addin",
        Description = "Writes test results to the database.")]
    public class MyNunitExtension : IAddin, EventListener
    {
        public bool Install(IExtensionHost host)
        {
            IExtensionPoint listeners = host.GetExtensionPoint("EventListeners");
            if (listeners == null)
                return false;
            listeners.Install(this);
            return true;
        }

        public void RunStarted(string name, int testCount) {}
        public void RunFinished(TestResult result) {}
        public void RunFinished(Exception exception) {}
        public void TestStarted(TestName testName) {}

        public void TestFinished(TestResult result)
        {
            // this is just sample data
            SqlHelper.SqlConnectAndWRiteToDatabase("test", "test",
                2.0, DateTime.Now);
        }

        public void SuiteStarted(TestName testName) {}
        public void SuiteFinished(TestResult result) {}
        public void UnhandledException(Exception exception) {}
        public void TestOutput(TestOutput testOutput) {}
    }
}
I have managed to fix the issue by simply removing my extension assembly from the NUnit 2.5.10\bin\net-2.0\addins folder. At the moment everything works as expected, but I am not sure why; I thought that you have to have the extension/addin assembly inside the addins folder.
I am running tests by opening a solution via NUnit.exe. My extension project is part of the solution I am testing. I have also raised this issue with the NUnit guys and got the following explanation:
Most likely, your addin was being loaded twice. In order to make it easier to test addins, NUnit searches each test assembly for addins to be loaded, in addition to searching the addins directory. Normally, when you are confident that your addin works, you should remove it from the test assembly and install it in the addins folder. This makes it available to all tests that are run using NUnit. OTOH, if you really only want the addin to apply for a certain project, then you can leave it in the test assembly and not install it as a permanent addin.
http://groups.google.com/group/nunit-discuss/browse_thread/thread/c9329129fd803cb2/47672f15e7cc05d1#47672f15e7cc05d1
Not sure this answer is strictly relevant, but it might be useful.
I was playing around with the NUnit library recently to read NUnit tests in so they could easily be transferred over to our own in-house acceptance testing framework.
It turns out we probably won't stick with this, but I thought it might be useful to share my experience figuring out how to use the NUnit code.
It is different in that it doesn't get run by the NUnit console or GUI runner but just by our own console app.
public class NUnitTestReader
{
    private TestHarness _testHarness;

    public void AddTestsTo(TestHarness testHarness)
    {
        _testHarness = testHarness;
        var package = new TestPackage(Assembly.GetExecutingAssembly().Location) { AutoBinPath = true };
        CoreExtensions.Host.InitializeService();
        var testSuiteBuilder = new TestSuiteBuilder();
        var suite = testSuiteBuilder.Build(package);
        AddTestsFrom(suite);
    }

    private void AddTestsFrom(Test node)
    {
        if (!node.IsSuite)
            AddTest(node);
        else
        {
            foreach (Test test in node.Tests)
                AddTestsFrom(test);
        }
    }

    private void AddTest(Test node)
    {
        _testHarness.AddTest(new WrappedNUnitTest(node, TestFilter.Empty));
    }
}
The above reads NUnit tests in from the current assembly, wraps them up, and then adds them to our in-house test harness. I haven't included these classes, but they're not really important to understanding how the NUnit code works.
The really useful bit of information here is the static call to InitializeService; this took quite a bit of figuring out but is necessary to get the basic set of test readers loaded in NUnit. You also need to be a bit careful when looking at the tests in NUnit itself, as they include failing tests (which I assume don't work because of the number of statics involved), so what looks like useful documentation is actually misleading.
Aside from that, you can then run the tests by implementing EventListener. I was interested in getting a one-to-one mapping between our tests and NUnit tests, so each test is run on its own. To achieve this you just need to implement TestStarted and TestFinished to do the logging:
public void TestStarted(TestName testName)
{
}

public void TestFinished(TestResult result)
{
    string text;
    if (result.IsFailure)
        text = "Failure";
    else if (result.IsError)
        text = "Error";
    else
        return;

    using (var block = CreateLogBlock(text))
    {
        LogFailureTo(block);
        block.LogString(result.Message);
    }
}
There are a couple of problems with this approach: inherited test base classes from other assemblies with SetUp methods that delegate to ones in the current assembly don't get called. It also has problems with TestFixtureSetUp methods, which are only called in NUnit when TestSuites are run (as opposed to running test methods on their own).
These both seem to be problems with NUnit, although if you don't want to construct wrapped tests individually, I think you could just put in a call to suite.Run with the appropriate parameters, and this will fix the latter problem.