Debugging test cases that combine Robot Framework and Python Selenium in Eclipse

Currently I'm using Eclipse with the Nokia/RED plugin, which allows me to write Robot Framework test suites. It supports Python 3.6 and Selenium.
My project is called "Automation" and the test suites are in .robot files.
The test suites contain test cases, which call "Keywords".
*** Test Cases ***
Create New Vehicle
    Create new vehicle with next ${registrationno} and ${description}
    Navigate to data section
Those "Keywords" are imported from python library and look like:
#keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
basicVehicleCreation = sideBarPage.createNewVehicle()
basicVehicleCreation.setKennzeichen(registrationno)
basicVehicleCreation.setBeschreibung(description)
TestCaseKeywords.carnumber = basicVehicleCreation.save()
The problem is that when I run the test cases, the log only shows the result of the whole Python function, pass or fail. I can't see at which step it failed: was it the first or the second step of the function?
Is there any plugin or other solution that would let me see exactly which Python call passed or failed? (Of course, a workaround is to wrap every function call in its own keyword in the test case, but that is not what I prefer.)

If you need to "step into" a Python-defined keyword, you need to use a Python debugger together with RED.
This can be done with any Python debugger; if you'd like to have everything in one application, PyDev can be used with RED.
Follow the help document below, and if you face any problems leave a comment here.
RED Debug with PyDev
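If you only need a quick look inside a keyword without setting up full IDE debugging, another option (a rough sketch, not part of the linked document) is to drop into Python's built-in pdb debugger from the keyword itself. Robot Framework captures standard output, so the debugger has to be pointed at the real console, and the test must be started from a terminal rather than from the IDE's run view:

import sys
import pdb

def create_new_vehicle_simple(self, registrationno, description):
    # Robot Framework redirects stdout, so attach pdb to the real
    # console; otherwise the debugger prompt is never shown.
    pdb.Pdb(stdout=sys.__stdout__).set_trace()
    # ... the rest of the keyword now runs under the debugger ...

From the pdb prompt you can step through each call with n and inspect variables, which answers the "which step failed" question interactively.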

If you want to know which statement in the Python-based keyword failed, you simply need to have it throw an appropriate error. Robot won't do this for you, however: from a reporting standpoint, a Python-based keyword is a black box. You will have to explicitly add logging messages and raise useful errors.
For example, the call to sideBarPage.createNewVehicle() should throw an exception such as "unable to create new vehicle". Likewise, the call to basicVehicleCreation.setKennzeichen(registrationno) should raise an error like "failed to register the vehicle".
If you don't have control over those methods, you can do the error handling from within your keyword:
#keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
try:
basicVehicleCreation = sideBarPage.createNewVehicle()
except:
raise Exception("unable to create new vehicle")
try:
basicVehicleCreation.setKennzeichen(registrationno)
except:
raise exception("unable to register new vehicle")
...
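If you have many such calls, the try/except blocks get repetitive. A small helper can both log each step to Robot's log.html and turn any failure into a descriptive error. This is only a sketch: the step helper is my own addition, and it assumes robot.api.logger (which ships with Robot Framework) is available:

from robot.api import logger
from robot.api.deco import keyword

def step(description, func, *args):
    # Write the step name into Robot's log, then run it;
    # any failure is re-raised with a readable message.
    logger.info("Step: " + description)
    try:
        return func(*args)
    except Exception as e:
        raise Exception("%s failed: %s" % (description, e))

@keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self, registrationno, description):
    headerPage = HeaderPage(TestCaseKeywords.driver)
    sideBarPage = step("open data section", headerPage.selectDaten)
    vehicle = step("create new vehicle", sideBarPage.createNewVehicle)
    step("set registration number", vehicle.setKennzeichen, registrationno)
    step("set description", vehicle.setBeschreibung, description)
    TestCaseKeywords.carnumber = step("save vehicle", vehicle.save)

Each logged step then shows up in the keyword's log output, so the failing statement is visible at a glance.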

Related

Stop huge error output from testing-library

I love testing-library, have used it a lot in a React project, and I'm trying to use it in an Angular project now, but I've always struggled with the enormous error output, including the HTML text of the render. Not only is this not usually helpful (I couldn't find an element, here's the HTML where it isn't), but it also gets truncated, often before the interesting line, if you're running in debug mode.
I simply added it as a library alongside the standard Angular Karma+Jasmine setup.
I'm sure you could say the components I'm testing are too large if the HTML output causes my console window to spool for ages, but I have a lot of integration tests in Protractor, and they are SO SLOW :(.
I would say the best solution would be to use the configure method and pass a custom function for getElementError which does what you want.
You can read about configuration here: https://testing-library.com/docs/dom-testing-library/api-configuration
An example of this might look like:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
You can then put this in any single test file or use Jest's setupFiles or setupFilesAfterEnv config options to have it run globally.
I am assuming you are running Jest with RTL in your project.
I personally wouldn't turn it off, as it's there to help us, but everyone has their way, so if you have your reasons, fair enough.
1. If you want to disable errors for a specific test, you can mock the console.error.
it('disable error example', () => {
  const errorObject = console.error; // store the state of the object
  console.error = jest.fn(); // mock the object
  // code
  // assertion (expect)
  console.error = errorObject; // assign it back so you can use it in the next test
});
2. If you want to silence it for all the tests, you could use the jest --silent CLI option. Check the docs.
The above might even disable the DOM printing that is done by RTL; I am not sure, as I haven't tried it, but if you look at the docs I linked, it says
"Prevent tests from printing messages through the console."
If the above doesn't work, you almost certainly have everything disabled except the DOM recommendations. In that case you might look into react-testing-library's source code and find out what is used for those print statements. Is it a console.log? Is it a console.warn? Once you know that, just mock it out like option 1 above.
UPDATE
After some digging, I found out that all testing-library DOM printing is built on prettyDOM().
While prettyDOM() can't be disabled, you can limit the number of printed lines to 0, which gives you just the error message and three dots (...) below it.
Here is an example printout I got while messing around with it:
TestingLibraryElementError: Unable to find an element with the text: Hello ther. This could be because the text is broken up by multiple elements. In this case, you can provide a function for your text matcher to make your matcher more flexible.
...
All you need to do is pass in an environment variable before executing your test suite; for example, with an npm script it would look like:
DEBUG_PRINT_LIMIT=0 npm run test
Here is the doc
UPDATE 2:
As per the OP's feature request on GitHub, this can also be achieved without injecting a global variable to limit the prettyDOM line output (in case it's used elsewhere). The getElementError config option needs to be changed:
dom-testing-library/src/config.js
// called when getBy* queries fail. (message, container) => Error
getElementError(message, container) {
  const error = new Error(
    [message, prettyDOM(container)].filter(Boolean).join('\n\n'),
  )
  error.name = 'TestingLibraryElementError'
  return error
},
The call stack can also be removed.
You can change how the message is built by setting the DOM testing library's message-building function with configure. In my Angular project I added this to test.js:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
This was answered here: https://github.com/testing-library/dom-testing-library/issues/773 by https://github.com/wyze.

How to compile domain and problem files and run JSHOP2 in Eclipse?

I am a student from China who has just begun to study SHOP2.
My teacher told me to run JSHOP2 in Eclipse. I can now run the original zenotravel problem and generate the GUI and plans. Likewise, I want to feed other domains and problems to JSHOP2 and produce plans.
But the problem is that I don't know how to compile them. My teacher only asked me to run the main function in InternalDomain, but that doesn't succeed. The following is the original code:
public static void main(String[] args) throws Exception
{
    //compile();
    //compile(args);
    //-- run the planning algorithm
    run(args);
}
This code can run zenotravel. Then I put a domain and a problem, named pfile1 and tdepots respectively, into the SHOP2 folder and changed the code to:
{
    compile(domaintdepots);
    //compile(args);
    //-- run the planning algorithm
    run(args);
}
It warns "domainpdfiles cannot be resolved to a variable".
Or
//compile();
compile(args);
//-- run the planning algorithm
//run(args);
It turns out:
"Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 0
at JSHOP2.InternalDomain.compile(InternalDomain.java:748)
at JSHOP2.InternalDomain.main(InternalDomain.java:720)"
Line 720 is in the main function above, and line 748 is in the compile function:
public static void compile(String[] args) throws Exception
{
    //-- The number of solution plans to be returned.
    int planNo = -1;

    //-- Handle the number of solution plans the user wants to be returned.
    if (args.length == 2 || args[0].substring(0, 2).equals("-r")) {
        if (args[0].equals("-r"))
            planNo = 1;
        else if (args[0].equals("-ra"))
            planNo = Integer.MAX_VALUE;
        else try {
            planNo = Integer.parseInt(args[0].substring(2));
        } catch (NumberFormatException e) {
        }
    }
Finally, following a friend's advice, I put the two PDDL files into the src folder and ran "java Jshop2.InternalDomain domaintdepots" at the CMD prompt, but an error appeared: "the main class Interdomain can't be found or loaded". But I have set the classpath accurately, and the zenotravel planning can run. So how and where can I use the command?
And what should be written inside the brackets of compile() in Eclipse?
I am also not familiar with Java, so concrete instructions would be appreciated. Thanks a lot.
Please describe what you are trying to build, what it is supposed to do, and what the expected end result is.
If you have a valid PDDL domain and problem file, you could try to load them into the online editor at http://editor.planning.domains/ using the File > Load menu. Then press the Solve button and confirm which of the files is the domain and which is the problem. If the PDDL model is valid (and the underlying solver can handle the requirements), you will get a plan back.
If you are trying to build a software solution that needs a PDDL-based planning engine as one of its component, perhaps you could use one of the available implementations: https://nergmada.github.io/pddl-reference/guide/whatisplanner.html#list-of-planners
If you are trying to build your own planning engine in Java using the Eclipse IDE, you probably need a Java-based PDDL parser. Here is a tutorial, how to use pddl4j for that purpose:
https://github.com/pellierd/pddl4j/wiki/A-tutorial-to-develop-your-own-planner
If you need to use JSHOP2 in particular, it looks from its documentation (http://www.cs.umd.edu/projects/shop/description.html) that you do indeed need to compile the domain and problem PDDL into Java code using the following commands:
java JSHOP2.InternalDomain domainFileName
java JSHOP2.InternalDomain -r problemFileName
Edited on June 19th
Java package names (e.g. JSHOP2) and class names (InternalDomain) are case-sensitive, so make sure you type them as per the documentation. That is probably why you are getting the "main class not found" error.
It is difficult to say what line numbers 748 and 720 correspond to exactly, because the code in the GitHub repo https://github.com/mas-group/jshop2/blob/master/src/JSHOP2/InternalDomain.java differs from yours. Can you indicate in your question which lines those are exactly?
The makefile shows how to execute an out-of-the-box example in the distribution:
cd examples\blocks
java JSHOP2.InternalDomain blocks
java JSHOP2.InternalDomain -r problem300
Does that work for you?

Is it possible to override a UVM test specified via +UVM_TESTNAME=test1 by also having +uvm_set_type_override=test1,test2?

I am wondering if it is possible to override the test specified on the command line via +UVM_TESTNAME with +uvm_set_type_override.
I have tried it, and this is what I see printed in the log:
UVM_INFO # 0: reporter [RNTST] Running test Test1...
UVM_INFO # 0: reporter [UVM_CMDLINE_PROC] Applying type override from the command line: +uvm_set_type_override=Test1,Test2
So it seems to me that the test component is created first, and then the factory overrides are applied?
I see the following pieces of code in uvm_root.svh:
// if test now defined, create it using common factory
if (test_name != "") begin
  if (m_children.exists("uvm_test_top")) begin
    uvm_report_fatal("TTINST",
      "An uvm_test_top already exists via a previous call to run_test", UVM_NONE);
    #0; // forces shutdown because $finish is forked
  end
  $cast(uvm_test_top, factory.create_component_by_name(test_name,
    "", "uvm_test_top", null));
It is using the factory, but I don't know whether the overrides have actually been applied at that point. I also see the following code:
begin
  if (test_name == "")
    uvm_report_info("RNTST", "Running test ...", UVM_LOW);
  else if (test_name == uvm_test_top.get_type_name())
    uvm_report_info("RNTST", {"Running test ", test_name, "..."}, UVM_LOW);
  else
    uvm_report_info("RNTST", {"Running test ", uvm_test_top.get_type_name(),
      " (via factory override for test \"", test_name, "\")..."}, UVM_LOW);
end
I am wondering if the "else" branch above is ever executed, and under what condition it runs.
It seems that there is an ordering issue in UVM's command-line processing: +UVM_TESTNAME is processed separately, before all the other options.
It is possible to set an override before calling run_test() in the initial block.
But what is the point of setting a test name and then overriding it on the same command line? Why not just use the overridden test name as the test?
In general, anything registered with the UVM factory can be overridden at runtime with a command-line switch.
In the case of test names, there is a command-line switch called +UVM_TESTNAME=selected_test_name_here.
Typically, we have the base test name as the default in run_test(your_base_test_name) in the top module, and then we select various tests at runtime without recompiling (as long as each test has been included in the compile), passing +UVM_TESTNAME=selected_test_at_runtime as we cycle through test names when running regressions or switching tests while debugging the design.

NUnit long string gets truncated

I am using NUnit to write a test for a class that knows how to serialize itself to XML. The class has lots of properties so the XML fragment produced by the function I'm testing might be very long even with the default state of a new object.
When I run the test in the NUnit test runner and I've purposefully broken the expected returned XML, the test runner only shows a truncated version of the expected and actual string returned from the function that serializes the object to XML. Such as:
MyProject.MyTests.CanCreateObjectAndEdit:
Expected string length 525 but was 1485. Strings differ at index 509.
Expected: "...ffer="False" IsThing="False" /></MyObject>"
But was: "...ffer="False" IsThing="False" /><MySubObject ItemID="60..."
--------------------------------------------^
Is there any way to get NUnit to return the entire expected and actual string? I have NUnit 2.6.3 (the latest release) and I am using the NUnit x86 GUI test runner.
My current workaround is to create a console app, copy the code out of the test, run it and print the output to a debug window, and then paste that output back into my test.
Almost every assertion method (e.g. Assert.AreEqual) takes a "message" parameter as a third argument.
It is printed only on test failures and it is intended to provide useful information to diagnose a test failure. I think it is exactly what you need.
Hope it helps.
(With apologies for any transcription errors in the code below: I was testing this on a remote computer without copy/paste)
I tested the message parameter as suggested by Manuel.
[Test]
public void LongTest()
{
    string s1 = new string('.', 1000);
    string s2 = new string('.', 500) + "+" + new string('.', 500);
    Assert.That(s1, Is.EqualTo(s2));
}
and got a result equivalent to the one in the question.
Changing the Assert to
Assert.That(s1, Is.EqualTo(s2), s1 + "\r\n\r\n" + s2);
changed the result so that the failure message contains both full strings, which is possibly less than helpful, especially when the tooltip comes up showing you the entire thing. However, you can right-click on that area in the GUI runner and copy it, and you do indeed get the whole text copied to the clipboard.

JavaScript cucumber tests pass using the Selenium driver but fail when using Poltergeist

I'm trying to test a jQuery UI autocomplete. I've got the tests passing using the Selenium driver, and I want to switch to Poltergeist for some headless testing, but now my tests are failing.
It doesn't seem to select the autocomplete option, for some reason that I have not yet been able to figure out.
Step
When /^select contract$/ do
  VCR.use_cassette("contract") do
    selector = '.ui-menu-item a:contains("John Smith (123456)")'
    within("div#review") do
      fill_in("contract", with: "john")
    end
    sleep 2
    page.execute_script "$('#{selector}').trigger(\"mouseenter\").click();"
    within("div#myPerformaceReview") do
      find_field("contract").value.should == "John Smith (123456)"
    end
  end
end
The test passes using the Selenium driver without any changes to the step.
Any advice on how I could debug this?
Versions
selenium-webdriver (2.27.2)
poltergeist (1.0.2)
cucumber (1.2.1)
cucumber-rails (1.0.6)
capybara (1.1.4)
phantomjs 1.8.1
I've managed to figure it out: it seems the Capybara Poltergeist driver doesn't trigger the events that jQuery UI uses to display the dropdown list.
I found the answer here: https://github.com/thoughtbot/capybara-webkit/issues/50
I created a form helper in features/support:
module FormHelper
  def fill_in_autocomplete(selector, value)
    page.execute_script %Q{$('#{selector}').val('#{value}').keydown()}
  end

  def choose_autocomplete(text)
    find('ul.ui-autocomplete').should have_content(text)
    page.execute_script("$('.ui-menu-item:contains(\"#{text}\")').find('a').trigger('mouseenter').click()")
  end
end

World(FormHelper)
I then used those methods to fill in the form and select the desired option.
Martin's answer almost worked for me, but I found that the input needs to be focused as well to make it work:
module FormHelper
  def fill_in_autocomplete(selector, value)
    page.execute_script %Q{$('#{selector}').focus().val('#{value}').keydown()}
  end

  def choose_autocomplete(text)
    find('ul.ui-autocomplete').should have_content(text)
    page.execute_script("$('.ui-menu-item:contains(\"#{text}\")').find('a').trigger('mouseenter').click()")
  end
end
Found this on the same page: https://github.com/thoughtbot/capybara-webkit/issues/50#issuecomment-4978108