How do I use the xunit task's skip_test_fail parameter in Albacore? - rake

The Albacore release notes say the xunit task supports a skip_test_fail parameter that
"prevents rake from aborting the build when an xunit test fails. This is useful in continuous integration scenarios, such as running with TeamCity."
No example was shown and my attempt to use it (below) was not successful. How is it supposed to work?
desc "XUnit Test Runner Example"
xunit :xunit do |xunit|
xunit.command = "../xunit-1.8/xunit.console.clr4.exe"
xunit.assembly = "Islambox.Web.Test/bin/Debug/Islambox.Web.Test.dll"
xunit.skip_test_fail
end

I looked through the xunit task source and see that the parameter does exist. It's a regular "property" that can be set to some value.
attr_accessor :html_output, :skip_test_fail
It's used in a postfix if condition; the property evaluates as false only when its value is false or nil, and any other value evaluates as true.
if !result && (!@skip_test_fail || $?.exitstatus > 1)
So, just set it to any value! I recommend true so that it's clearer what's going on. I've updated the wiki with this information.
xunit.skip_test_fail = true
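For completeness, here is a minimal sketch of the asker's task with the fix applied (the command and assembly paths are the ones from the question; adjust them for your project):
desc "XUnit Test Runner Example"
xunit :xunit do |xunit|
  # Paths taken from the question; adjust for your own project layout.
  xunit.command = "../xunit-1.8/xunit.console.clr4.exe"
  xunit.assembly = "Islambox.Web.Test/bin/Debug/Islambox.Web.Test.dll"
  # Any truthy value works; true makes the intent obvious.
  xunit.skip_test_fail = true
end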

Related

How to use assert statement in Katalon Studio?

Can someone tell me please how to use an assert statement in Katalon Studio?
The scenario is: I have to create one user (user=program). Once I click the submit button, I have to capture whether the user was created successfully or not. Only if the user is created successfully should execution proceed further; if not, the test case should fail and further execution should stop.
Please let me know how to use an assert statement in a test case, the object repository, a global variable, or keywords.
Katalon Studio provides default assertions that can be used as validation checkpoints. Please refer to the code below, which checks for the userProfileImg element. If the user profile image is present, the assertion passes and execution continues; otherwise it fails and execution stops.
Code snippet:
assert WebUI.verifyElementVisible(findTestObject('HomePageLocators/userProfileImg')) == true : 'login failed as user profile is not present'
If you are using the Groovy language with Katalon Studio, this is the answer:
def x = 1
assert x == 2
// Output:
// Assertion failed:
// assert x == 2
//        | |
//        1 false
Groovy Language features: http://docs.groovy-lang.org/docs/latest/html/documentation/core-testing-guide.html#_introduction
Katalon Studio also provides a number of customized verification (WebUI.verify*) methods. You can choose one to match your requirement and control how failures are handled. In your case, when you click Submit and the user is created, you probably get an alert or notification message, and/or the new user becomes visible in the user area.
So identify a checkpoint from the above, get hold of that element, and use the appropriate WebUI.verify* method with FailureHandling.STOP_ON_FAILURE.
Katalon Studio provides multiple ways to handle test failures:
FailureHandling.CONTINUE_ON_FAILURE
FailureHandling.STOP_ON_FAILURE // Applicable in your case
FailureHandling.OPTIONAL
The code will look like this:
WebUI.verifyElementPresent(findTestObject('User Locator'), maxWaitTime,
    FailureHandling.STOP_ON_FAILURE) // use this if you want failures to stop further execution
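A hedged sketch of the asker's scenario in script mode (the test-object paths 'UserAdmin/btnSubmit' and 'UserAdmin/newUserRow' are assumptions, not objects from the question):
import static com.kms.katalon.core.testobject.ObjectRepository.findTestObject
import com.kms.katalon.core.model.FailureHandling as FailureHandling
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

// Submit the new-user form.
WebUI.click(findTestObject('UserAdmin/btnSubmit'))

// Fail the test case and stop further execution if the success indicator never appears.
WebUI.verifyElementPresent(findTestObject('UserAdmin/newUserRow'), 10,
    FailureHandling.STOP_ON_FAILURE)

// Execution only reaches this point when the user was created successfully.
WebUI.comment('User created, continuing with the remaining steps')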

Is it possible to override a UVM test that is specified via +UVM_TESTNAME=test1 by also having +uvm_set_type_override=test1,test2?

I am wondering if it is possible to override a test specified on the command line via +UVM_TESTNAME with +uvm_set_type_override.
I have tried it, and this is what I see printed in the log.
UVM_INFO @ 0: reporter [RNTST] Running test Test1...
UVM_INFO @ 0: reporter [UVM_CMDLINE_PROC] Applying type override from the command line: +uvm_set_type_override=Test1,Test2
So it seems to me that the test component is created first and then the factory overrides are applied?
I see the following pieces of code in uvm_root.svh:
// if test now defined, create it using common factory
if (test_name != "") begin
  if(m_children.exists("uvm_test_top")) begin
    uvm_report_fatal("TTINST",
      "An uvm_test_top already exists via a previous call to run_test", UVM_NONE);
    #0; // forces shutdown because $finish is forked
  end
  $cast(uvm_test_top, factory.create_component_by_name(test_name,
    "", "uvm_test_top", null));
It is using the factory, but I don't know whether the overrides are actually applied at that point. I also see the following code:
begin
  if(test_name=="")
    uvm_report_info("RNTST", "Running test ...", UVM_LOW);
  else if (test_name == uvm_test_top.get_type_name())
    uvm_report_info("RNTST", {"Running test ",test_name,"..."}, UVM_LOW);
  else
    uvm_report_info("RNTST", {"Running test ",uvm_test_top.get_type_name()," (via factory override for test \"",test_name,"\")..."}, UVM_LOW);
end
I am wondering if the "else" part in above is ever executed? or under what condition is it executed?
It seems that there is an issue with command line processing ordering in the UVM—UVM_TESTNAME gets processed separate before all the other options.
It is possible to set an override before calling run_test() in the initial block.
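As a minimal sketch of that approach (the test class names and module name are assumptions), the override can be registered with the factory before run_test() constructs the test:
import uvm_pkg::*;
`include "uvm_macros.svh"

class test1 extends uvm_test;
  `uvm_component_utils(test1)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
endclass

class test2 extends test1;
  `uvm_component_utils(test2)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
endclass

module tb_top;
  initial begin
    // Register the override before the test instance is created.
    test1::type_id::set_type_override(test2::get_type());
    run_test("test1"); // the factory now builds test2 as uvm_test_top
  end
endmodule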
But what is the point of setting up the test name, and then overriding it on the same command line? Why not use the overridden test name as the test?
In general, anything registered with the UVM factory can be overridden at runtime with a command-line switch.
In the case of test names, there is a command line switch called +UVM_TESTNAME=selected_test_name_here.
Typically:
We have the base test name as the default in the run_test(your_base_test_name) call in the top module,
And then we select various tests at runtime, without recompiling for each test (as long as each test has been included in the compile),
And we pass +UVM_TESTNAME=selected_test_at_runtime as we cycle through test names when running regressions or switching tests while debugging the design.
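A small sketch of that pattern (the test name, module name, and simulator command are assumptions):
module tb_top;
  import uvm_pkg::*;
  // base_test is compiled in as the default; no recompile is needed to switch tests.
  initial run_test("base_test");
endmodule

// At run time, pick the test per run, e.g.:
//   <simulator> ... +UVM_TESTNAME=smoke_test
//   <simulator> ... +UVM_TESTNAME=error_injection_test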

Debugging test cases when they are combination of Robot framework and python selenium

Currently I'm using Eclipse with the Nokia/RED plugin, which allows me to write Robot Framework test suites. It supports Python 3.6 and Selenium.
My project is called "Automation" and the test suites are in .robot files.
The test suites contain test cases, which call "Keywords".
*** Test Cases ***
Create New Vehicle
    Create new vehicle with next ${registrationno} and ${description}
    Navigate to data section
Those "Keywords" are imported from python library and look like:
#keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
basicVehicleCreation = sideBarPage.createNewVehicle()
basicVehicleCreation.setKennzeichen(registrationno)
basicVehicleCreation.setBeschreibung(description)
TestCaseKeywords.carnumber = basicVehicleCreation.save()
The problem is that when I run test cases, in the log I only get the result of the whole Python function, pass or fail. I can't see at which step it failed: was it the first or the second step of the function?
Is there any plugin or other solution that lets me see exactly which Python call passed or failed? (Of course, a workaround is to wrap every function call in its own keyword and use those in the test case, but that is not what I prefer.)
If you need to "step into" a Python-defined keyword, you need to use a Python debugger together with RED.
This can be done with any Python debugger; if you would like to have everything in one application, PyDev can be used with RED.
Follow the help document below; if you face any problems, leave a comment here.
RED Debug with PyDev
If you want to know which statement in the Python-based keyword failed, you simply need to have it throw an appropriate error. Robot won't do this for you, however; from a reporting standpoint, a Python-based keyword is a black box. You will have to explicitly add logging messages and raise useful errors.
For example, the call to sideBarPage.createNewVehicle() should throw an exception such as "unable to create new vehicle". Likewise, the call to basicVehicleCreation.setKennzeichen(registrationno) should raise an error like "failed to register the vehicle".
If you don't have control over those methods, you can do the error handling from within your keyword:
#keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
try:
basicVehicleCreation = sideBarPage.createNewVehicle()
except:
raise Exception("unable to create new vehicle")
try:
basicVehicleCreation.setKennzeichen(registrationno)
except:
raise exception("unable to register new vehicle")
...
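If you also want the log to show how far the keyword got, a hedged option (using Robot Framework's robot.api logger; the log messages themselves are made up for illustration) is to log each step from inside the keyword:
from robot.api import logger
from robot.api.deco import keyword

@keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self, registrationno, description):
    # Each logger.info() call shows up in the Robot log, so the failing
    # position is visible even though the keyword is a single black box.
    logger.info("Opening the Daten section")
    headerPage = HeaderPage(TestCaseKeywords.driver)
    sideBarPage = headerPage.selectDaten()

    logger.info("Creating a new vehicle")
    basicVehicleCreation = sideBarPage.createNewVehicle()

    logger.info("Setting registration number and description")
    basicVehicleCreation.setKennzeichen(registrationno)
    basicVehicleCreation.setBeschreibung(description)

    TestCaseKeywords.carnumber = basicVehicleCreation.save()
    logger.info("Vehicle saved")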

How to downgrade dut_error to dut_warning in Specman

I have a test where I am using:
expect @eventA => eventually @eventB else dut_error
However, my test treats that dut_error as a dut_warning and the test passed.
Is there any runtime switch in Specman that downgrades all dut_errors to dut_warnings?
To change the effect of all checks, you can also issue:
"set check WARNING"
I recommend that you give names to the checks; among other things, it simplifies controlling their effect.
e.g. -
expect data_flow is @eventA => eventually @eventB else ...
and then -
set check -name = my_checker.data_flow WARNING;
A nice thing is that if you name the expect, you can override it.
expect data_flow is only @eventA => {[3..13]; @eventB} else ...
Yes, set_check can change the error level.
extend sys {
    setup() is also {
        set_check("...", WARNING);
    };
};
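Putting the two pieces together, a hedged sketch (the unit, event, and check names are assumptions, and the set_check() name pattern is a guess at matching the named check):
<'
unit my_checker {
    event eventA;
    event eventB;

    // Named check, so it can be targeted (and overridden) later.
    expect data_flow is @eventA => eventually @eventB
        else dut_error("eventB did not follow eventA");
};

extend sys {
    checker: my_checker is instance;

    setup() is also {
        // Downgrade only this named check; "set check WARNING" would affect all checks.
        set_check("...data_flow...", WARNING);
    };
};
'>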

chai.assert() won't run methods in a test before the assertion (chai assert lib with Protractor)

This is the first time I've posted a question on SO; I hope I'm doing it right.
it(' :: 2.0 service creation :: should fill out service info tab', function() {
    createNewService.setServiceName(e2eConfig.newServiceDetails.basicServiceName);
    createNewService.selectCategory();
    createNewService.setIntroText(e2eConfig.newServiceDetails.introText);
    createNewService.selectParent();
    createNewService.uploadIcon();
    createNewService.nextTab();
    // right now the assert will fire off without running the methods above because
    // we are still on the infoTab
    assert(($(createNewService.selectors.infoTab).isDisplayed()) == true, 'did not move to the next tab');
}, 20000);
What this test does is fill in the inputs, select drop-downs where necessary, and upload a file.
The test then attempts to switch to the next tab in the widget.
To determine whether it managed to switch to the next tab, I want to make a chai assertion with a custom message.
With the current code, the assert will return true because it sees the infoTab, and the test will fail without running any of the methods before the assert.
If I change the assert line to check for '!== true', then it's going to run the methods and move on.
In any case, would it be better to do this in a different manner or perhaps use expect instead of assert?
Chai assert API
Chai expect API
All Protractor function calls return promises that resolve asynchronously, so if the functions you defined on createNewService are all calling Protractor functions, you'll have to wait for them to resolve before calling the assert. Try something like the following:
it(' :: 2.0 service creation :: should fill out service info tab', function(done) {
    createNewService.setServiceName(e2eConfig.newServiceDetails.basicServiceName);
    createNewService.selectCategory();
    createNewService.setIntroText(e2eConfig.newServiceDetails.introText);
    createNewService.selectParent();
    createNewService.uploadIcon();
    createNewService.nextTab().then(function() {
        assert.eventually.strictEqual($(createNewService.selectors.infoTab).isDisplayed(), true, 'did not move to the next tab');
        done();
    });
}, 20000);
A few things to note:
This example assumes that createNewService.nextTab() returns a promise.
You'll need to use a library like chai-as-promised to handle assertions on the values returned from promises; a minimal setup sketch is shown after these notes. In your code you're asserting that a promise object == true, which is truthy due to coercion.
Since your functions run asynchronously, you'll need to pass a callback to your anonymous function and then call it when your test is finished. Information about testing asynchronous code can be found here.
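For reference, here is a minimal chai-as-promised setup sketch (where this lives, for example the Protractor config's onPrepare, is an assumption):
var chai = require('chai');
var chaiAsPromised = require('chai-as-promised');
chai.use(chaiAsPromised);

// Expose the assert interface to the specs; assert.eventually.* now awaits promises.
global.assert = chai.assert;

// In a spec:
// assert.eventually.strictEqual(elem.isDisplayed(), true, 'did not move to the next tab');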