Is there a way to skip a test from inside the test (i.e. SkipException) with TestNGCitrusTestRunner? - citrus-framework

Is it possible to skip a test from within the test when using TestNGCitrusTestRunner?
Basically, throwing AssumptionViolatedException (JUnit) or SkipException (TestNG) isn't working for me.
We query data from a database, run the test, and compare the results to the database. If the database has no data, the test fails. Ideally, we would skip the test, since it didn't really pass or fail.
I've tried
throw new AssumptionViolatedException("test");
throw new SkipException("Test");
Both exceptions cause the test to be reported as a failure rather than skipped.

I was able to set the tests to skip using the following:
ITestResult result = Reporter.getCurrentTestResult();
result.setStatus(ITestResult.SKIP);
This changes the TestNG and other reports' output to skipped, but the Citrus report still shows the test as passing.
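For reference, here is a stripped-down sketch of what I am attempting; the testDataAvailable() check is just a placeholder for our real database query, and the imports assume Citrus 2.x package names:
import org.testng.ITestResult;
import org.testng.Reporter;
import org.testng.annotations.Test;

import com.consol.citrus.annotations.CitrusTest;
import com.consol.citrus.dsl.testng.TestNGCitrusTestRunner;

public class DatabaseComparisonIT extends TestNGCitrusTestRunner {

    @Test
    @CitrusTest
    public void compareAgainstDatabase() {
        if (!testDataAvailable()) {
            // Attempt 1: Citrus reports this as a failure, not a skip
            // throw new org.testng.SkipException("no data to compare");

            // Attempt 2: TestNG marks the result as skipped,
            // but the Citrus report still counts the test as passing
            ITestResult result = Reporter.getCurrentTestResult();
            result.setStatus(ITestResult.SKIP);
            return;
        }
        // ... Citrus actions comparing against the database ...
    }

    private boolean testDataAvailable() {
        return false; // placeholder for the real database check
    }
}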

Related

BDD Cucumber issues

I have recently started working on BDD Cucumber. I am using Scala for writing test cases. I am trying to use a Scenario Outline and pass its parameters into step definitions. My code is as follows.
Scenario Outline: Data is parsed and persisted
Given Portal is running
When A data of <type> is received
Then The data of <type> with <Id> should be parsed and persisted
Examples:
| type         | Id |
| Personal     | 1  |
| Professional | 2  |
Now, in my When step definition, I am trying to capture these parameters as follows:
When("""^A data of \"([^\"]*)\" is received$""") {
(type: String) =>
//My code
}
Now, on running my code, I am getting the following error every time:
io.cucumber.junit.UndefinedStepException: The step "A data of Personal is received" is undefined. You can implement it using the snippet(s) below:
When("""A data of Personal is received""") { () =>
// Write code here that turns the phrase above into concrete actions
throw new io.cucumber.scala.PendingException()
}
This happens even though I have a step definition for that When. Also, if I don't use a Scenario Outline it works fine, but I want to use a Scenario Outline for my code.
I am using tags in my feature file to run my test cases. When I run my test cases with the command sbt test #tag1, the test cases execute fine, but once all test cases have finished running I get the following error on the command line:
[error] Expected ';'
[error] #tag1
I tried putting ";" after the tag but still get the same error.
What is this issue and how can I resolve it?
I have 4-5 feature files in my application, which means 4-5 tags. As of now, for the test case I want to run, I give the path of the feature file and "glue" it with the step definition in my Runner class. How can I provide all the tags in my Runner class so that my application runs all the test cases one by one when started?
You are missing the double quotes around <type>:
When A data of "<type>" is received
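With the quotes in place, the expanded step (e.g. When A data of "Personal" is received) matches the quoted capture group in your step definition. A minimal sketch of the pair (class name, parameter name and body are placeholders; I use dataType because type is a reserved word in Scala):
import io.cucumber.scala.{EN, ScalaDsl}

class DataSteps extends ScalaDsl with EN {

  // Matches: When A data of "Personal" is received
  When("""^A data of "([^"]*)" is received$""") { (dataType: String) =>
    // placeholder body: dataType is "Personal" or "Professional"
    println(s"received $dataType data")
  }
}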
Just some general advice.
When cuking, keep things as simple as possible; focus on clarity and simplicity, and don't worry about repetition.
Your task would be much simpler if you wrote two simple scenarios:
Scenario: Personal data
Given Portal is running
When personal data is received
Then personal data should be persisted
Scenario: Professional data
...
Secondly, don't use tags to run your features; you don't need tags yet.
You can cuke much more effectively if you avoid scenario outlines, regexes, tags, transforms and so on. The main power of Cucumber is using natural language to express yourself clearly. Focus on that and keep it simple.

Phoenix / Elixir testing when setting isolation level of transaction

I have a chunk of code that looks something like this:
Repo.transaction(fn ->
  Repo.query!("set transaction isolation level serializable;")
  # do some queries
end)
In my test suite, I continually run into the error:
(Postgrex.Error) ERROR 25001 (active_sql_transaction): SET TRANSACTION ISOLATION LEVEL must be called before any query
I'm wondering if I'm doing something fundamentally wrong, or if there's something about the test environment that I'm missing.
Thanks!
Not sure if you are still looking for an answer to this, but I found a nice solution. For this case I have a setup block like so:
# assumes: alias Ecto.Adapters.SQL.Sandbox
setup tags do
  :ok =
    if tags[:isolation] do
      Sandbox.checkout(Repo, isolation: tags[:isolation])
    else
      Sandbox.checkout(Repo)
    end

  unless tags[:async] do
    Sandbox.mode(Repo, {:shared, self()})
  end

  :ok
end
Then, on a test that is in the path of the serializable transaction, you have to tag it with "serializable" like so:
@tag isolation: "serializable"
test "my test" do
  ...
end
This lets you run the tests that hit a serializable transaction along the way and still use the sandbox.
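Putting it together, a minimal sketch of such a test (module names and the query are placeholders; it assumes your case template runs the setup block above):
defmodule MyApp.SerializableTest do
  # MyApp.DataCase is assumed to pull in the setup block shown above
  use MyApp.DataCase, async: false

  @tag isolation: "serializable"
  test "queries that need a serializable transaction" do
    {:ok, _result} =
      MyApp.Repo.transaction(fn ->
        # the sandbox connection was checked out at the serializable level,
        # so this SET matches the current level and Postgres accepts it
        MyApp.Repo.query!("set transaction isolation level serializable;")
        # do some queries
      end)
  end
end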
The problem is that, for testing purposes, all of the tests are wrapped in a transaction so they can be rolled back and you don't pollute your database with tons of old test data. Otherwise leftover data could cause failures that should have passed and passes that should have failed, depending on how you've written your tests.
You can work around it but it will, again, pollute your test database and you'll have to clean it up yourself:
setup do
  # [Other set up stuff]
  Ecto.Adapters.SQL.Sandbox.checkin(MyApp.Repo) # This effectively closes any open transaction.
  Ecto.Adapters.SQL.Sandbox.checkout(MyApp.Repo, sandbox: false) # This opens a new, non-sandboxed connection.
end
This setup block goes in the test file with your failing tests, if you don't already have a setup. If you don't make the checkin call, you'll most likely get an error about other queries running before the one that sets the transaction isolation level, because something is being inserted before the test.
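With that in place the test hits the real database, so it has to clean up after itself, roughly like this (the MyApp.Widget schema is just a placeholder):
test "work inside a serializable transaction" do
  {:ok, _} =
    MyApp.Repo.transaction(fn ->
      MyApp.Repo.query!("set transaction isolation level serializable;")
      # do some queries
    end)

  # not sandboxed, so remove whatever the test inserted (placeholder schema)
  MyApp.Repo.delete_all(MyApp.Widget)
end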
See here for someone essentially calling out the same issue.

VSTS: passed test cases value as a variable for a custom condition in a build definition

I want to check the passed test cases value (highlighted in the snapshot below), compare it with some threshold, and run the next task in the build only if the passed value is greater than the threshold.
There isn't a built-in variable that exposes the details of the test result.
There are some ways to do it during the build:
Analyze the test result file (e.g. the .trx file in the TestResults folder) through PowerShell or another script
Retrieve the test run through the Test Run REST API, filtering by buildUri (format like vstfs:///Build/Build/{build id}), then read the necessary values (e.g. totalTests, passedTests)
After that, you can set a variable value through a logging command (##vso[task.setvariable variable=MyVar]value)
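For example, a rough PowerShell sketch of the first option (the TestResults path and the PassedTests variable name are just assumptions for illustration):
# Find the newest .trx file produced by the test step (path is an assumption)
$trx = Get-ChildItem "$env:AGENT_BUILDDIRECTORY\TestResults" -Filter *.trx -Recurse |
       Sort-Object LastWriteTime -Descending |
       Select-Object -First 1

# Read the pass count from the <ResultSummary><Counters> element of the trx file
[xml]$run = Get-Content $trx.FullName
$passed = [int]$run.TestRun.ResultSummary.Counters.passed

# Expose it to later tasks via the logging command
Write-Host "##vso[task.setvariable variable=PassedTests]$passed"
A later task can then use a custom condition such as gt(variables['PassedTests'], 10) to run only when the count is above your threshold.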

Getting a realtime dump of test's output

I have a big, long-running set of tests; their textual output is redirected to a file so I can view the logs later. However, NUnit writes this content to the file only when all tests have finished.
Is it possible to make NUnit write each test's output to the file immediately, as the output is produced by the test?
This feature will be added in v3.4 https://github.com/nunit/nunit/issues/1139

Programmatically Gathering NUnit results

I am running some NUnit tests automatically when my nightly build completes. I have a console application which detects the new build, copies the built MSIs to a local folder, and deploys all of my components to a test server. After that, I have a bunch of tests in NUnit DLLs that I run by executing nunit-console.exe using Process/ProcessStartInfo. My question is, how can I programmatically get the numbers for total successful/failed tests?
Did you consider using a continuous integration server like CruiseControl.NET?
It builds and runs the tests for you and displays the results on a web page. If you just want a tool, let nunit-console.exe output the results as XML and parse/transform it with an XSLT script like the ones that come with CruiseControl.
Here is an example of such an XSL file; if you run the transformation on the direct output of nunit-console.exe you will have to adapt the select statements and remove the cruisecontrol elements.
However, it sounds like you might be interested in continuous integration.
We had a similar requirement, and what we did was read the Test Result XML file that is generated by NUnit.
// Requires using System; and using System.Xml;
// Read the summary counts from the root <test-results> element of the NUnit result file.
XmlDocument testresultxmldoc = new XmlDocument();
testresultxmldoc.Load(this.nunitresultxmlfile);
XmlNode mainresultnode = testresultxmldoc.SelectSingleNode("test-results");
this.MachineName = mainresultnode.SelectSingleNode("environment").Attributes["machine-name"].Value;
int ignoredtests = Convert.ToInt16(mainresultnode.Attributes["ignored"].Value);
int errors = Convert.ToInt16(mainresultnode.Attributes["errors"].Value);
int failures = Convert.ToInt16(mainresultnode.Attributes["failures"].Value);
int totaltests = Convert.ToInt16(mainresultnode.Attributes["total"].Value);
int invalidtests = Convert.ToInt16(mainresultnode.Attributes["invalid"].Value);
int inconclusivetests = Convert.ToInt16(mainresultnode.Attributes["inconclusive"].Value);
We recently had a similar requirement, and wrote a small open source library to combine the results files into one aggregate set of results (as if you had run all of the tests with a single run of nunit-console).
You can find it at https://github.com/15below/NUnitMerger
I'll quote from the release notes for nunit 2.4.3:
The console runner now uses negative return codes for errors encountered in trying to run the test. Failures or errors in the test themselves give a positive return code equal to the number of such failures or errors.
(emphasis mine). The implication here is that, as is usual in bash, a return of 0 indicates success, and non-zero indicates failure or error (as above).
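So, in the console application that launches nunit-console.exe, something along these lines is enough to read the failure count straight from the exit code (the paths and file names here are placeholders):
using System;
using System.Diagnostics;

class NightlyTestRunner
{
    static void Main()
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = @"C:\Tools\NUnit\bin\nunit-console.exe",
            Arguments = "MyComponent.Tests.dll /xml:TestResult.xml",
            UseShellExecute = false
        };

        using (var process = Process.Start(startInfo))
        {
            process.WaitForExit();
            int exitCode = process.ExitCode;

            if (exitCode == 0)
                Console.WriteLine("All tests passed.");
            else if (exitCode > 0)
                Console.WriteLine(exitCode + " test(s) failed or errored.");
            else
                Console.WriteLine("Runner error, exit code " + exitCode);
        }
    }
}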
HTH