NUnit long string gets truncated

I am using NUnit to write a test for a class that knows how to serialize itself to XML. The class has lots of properties so the XML fragment produced by the function I'm testing might be very long even with the default state of a new object.
When I run the test in the NUnit test runner after purposefully breaking the expected XML, the runner shows only truncated versions of the expected and actual strings returned from the function that serializes the object to XML, such as:
MyProject.MyTests.CanCreateObjectAndEdit:
Expected string length 525 but was 1485. Strings differ at index 509.
Expected: "...ffer="False" IsThing="False" /></MyObject>"
But was: "...ffer="False" IsThing="False" /><MySubObject ItemID="60..."
--------------------------------------------^
Is there any way to get NUnit to return the entire expected and actual string? I have NUnit 2.6.3 (the latest release) and I am using the NUnit x86 GUI test runner.
My current workaround is to create a console app, copy the code out of the test, run it and print the output to a debug window, and then paste that output back into my test.

Almost every assertion method (e.g. Assert.AreEqual) takes a "message" parameter as a third argument.
It is printed only on test failure and is intended to provide useful information to diagnose the failure. I think it is exactly what you need.
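For example (a minimal sketch; expectedXml and actualXml stand in for the strings from the question):
// The message argument is printed only when the assertion fails,
// so the full strings can be included there.
Assert.AreEqual(expectedXml, actualXml,
    "Expected:\r\n" + expectedXml + "\r\n\r\nActual:\r\n" + actualXml);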
Hope it helps.

(With apologies for any transcription errors in the code below: I was testing this on a remote computer without copy/paste)
I tested the message parameter as suggested by Manuel.
[Test]
public void LongTest()
{
    string s1 = new string('.', 1000);
    string s2 = new string('.', 500) + "+" + new string('.', 500);
    Assert.That(s1, Is.EqualTo(s2));
}
and got a truncated result equivalent to the one in the question.
Changing the Assert to
Assert.That(s1, Is.EqualTo(s2), s1 + "\r\n\r\n" + s2);
changed the result so that both complete strings appear in the failure output, which is possibly less than helpful, especially when the tooltip comes up showing you the entire thing. However, you can right-click on that area in the GUI runner and Copy it, and you do indeed get the whole text copied to the clipboard.


Stop huge error output from testing-library

I love testing-library, have used it a lot in a React project, and I'm trying to use it in an Angular project now - but I've always struggled with the enormous error output, including the HTML text of the render. Not only is this usually unhelpful (I couldn't find an element; here's the HTML where it isn't), but it also gets truncated, often before the interesting line if you're running in debug mode.
I simply added it as a library alongside the standard Angular Karma+Jasmine setup.
I'm sure you could say the components I'm testing are too large if the HTML output causes my console window to spool for ages, but I have a lot of integration tests in Protractor, and they are SO SLOW :(.
I would say the best solution would be to use the configure method and pass a custom function for getElementError which does what you want.
You can read about configuration here: https://testing-library.com/docs/dom-testing-library/api-configuration
An example of this might look like:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
You can then put this in any single test file or use Jest's setupFiles or setupFilesAfterEnv config options to have it run globally.
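For example, the Jest wiring could look like this (a sketch; jest.setup.js is an arbitrary name for a file containing the configure() call above):
// jest.config.js
module.exports = {
  // Runs once per test file, before the tests in it.
  setupFilesAfterEnv: ['<rootDir>/jest.setup.js'],
};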
I am assuming you are running Jest with RTL (React Testing Library) in your project.
I personally wouldn't turn it off, as it's there to help us, but everyone has their own way, so if you have your reasons, fair enough.
1. If you want to disable errors for a specific test, you can mock the console.error.
it('disable error example', () => {
  const errorObject = console.error; // store the original implementation
  console.error = jest.fn(); // mock it out
  // code
  // assertion (expect)
  console.error = errorObject; // assign it back so you can use it in the next test
});
2. If you want to silence it for all the tests, you could use the jest --silent CLI option. Check the docs.
The above might even disable the DOM printing that is done by RTL. I am not sure, as I haven't tried it, but the docs I linked say:
"Prevent tests from printing messages through the console."
If the above doesn't work, you almost certainly have everything disabled except the DOM recommendations. In that case you might look into react-testing-library's source code and find out what is used for those print statements. Is it console.log? Is it console.warn? Once you know, just mock it out as in option 1 above.
UPDATE
After some digging, I found out that all testing-library DOM printing is built on prettyDOM().
While prettyDOM() can't be disabled, you can limit the number of printed lines to 0, which gives you just the error message and three dots (...) below it.
Here is an example printout I got while messing around with it:
TestingLibraryElementError: Unable to find an element with the text: Hello ther. This could be because the text is broken up by multiple elements. In this case, you can provide a function for your text matcher to make your matcher more flexible.
...
All you need to do is to pass in an environment variable before executing your test suite, so for example with an npm script it would look like:
DEBUG_PRINT_LIMIT=0 npm run test
Here is the doc
UPDATE 2:
As per the OP's feature request on GitHub, this can also be achieved without injecting a global variable to limit the prettyDOM line output (in case it's used elsewhere). The getElementError config option needs to be changed:
dom-testing-library/src/config.js
// called when getBy* queries fail. (message, container) => Error
getElementError(message, container) {
  const error = new Error(
    [message, prettyDOM(container)].filter(Boolean).join('\n\n'),
  )
  error.name = 'TestingLibraryElementError'
  return error
},
The call stack can also be removed.
You can change how the message is built by setting the DOM testing library message building function with config. In my Angular project I added this to test.js:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
This was answered here: https://github.com/testing-library/dom-testing-library/issues/773 by https://github.com/wyze.

How to compile documents and run Jshop2 in Eclipse?

I am a student from China who has begun to study SHOP2.
My teacher told me to run JSHOP2 in Eclipse. Now I can run the original zenotravel problem and generate the GUI and plans. Likewise, I want to feed other domains and problems to JSHOP2 and produce plans.
But the problem is that I don't know how to compile them. My teacher only asked me to run the main function in InternalDomain, but it doesn't succeed. The original code follows:
public static void main(String[] args) throws Exception
{
    //compile();
    // compile(args);
    //-- run the planning algorithm
    run(args);
}
This code can run zenotravel. Then I put a domain and problems, named pfile1 and tdepots respectively, into the SHOP2 folder and changed the code to:
{
    compile(domaintdepots);
    // compile(args);
    //-- run the planning algorithm
    run(args);
}
It warns "domainpdfiles cannot be resolved to a variable".
Or
//--compile();
compile(args);
//-- run the planning algorithm
//run(args);
It results in:
"Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 0
at JSHOP2.InternalDomain.compile(InternalDomain.java:748)
at JSHOP2.InternalDomain.main(InternalDomain.java:720)"
Line 720 is in the main function above, and line 748 is in the compile function:
public static void compile(String[] args) throws Exception
{
    //-- The number of solution plans to be returned.
    int planNo = -1;

    //-- Handle the number of solution plans the user wants to be returned.
    if (args.length == 2 || args[0].substring(0, 2).equals("-r")) {
        if (args[0].equals("-r"))
            planNo = 1;
        else if (args[0].equals("-ra"))
            planNo = Integer.MAX_VALUE;
        else try {
            planNo = Integer.parseInt(args[0].substring(2));
        } catch (NumberFormatException e) {
        }
    }
Finally, following a friend's advice, I put the two PDDL files into the src folder and ran "java Jshop2.InternalDomain domaintdepots" at the CMD prompt, but an error appeared: "the main class Interdomain can't be found or loaded". But I have set the classpath correctly, and the zenotravel planning can run. So how and where can I use the command?
And what should be written inside the brackets of compile() in Eclipse?
I am also not familiar with Java, so concrete instructions would help. Thanks a lot.
Please describe what you are trying to build, what it is supposed to do, and what the expected end result is.
If you do have a valid PDDL domain and problem file, you could try to load them into the online editor at http://editor.planning.domains/ using the File > Load menu. Then press the Solve button and confirm which of the files is the domain and which is the problem. If the PDDL model is valid (and the underlying solver can handle the requirements), you will get a plan back.
If you are trying to build a software solution that needs a PDDL-based planning engine as one of its components, perhaps you could use one of the available implementations: https://nergmada.github.io/pddl-reference/guide/whatisplanner.html#list-of-planners
If you are trying to build your own planning engine in Java using the Eclipse IDE, you probably need a Java-based PDDL parser. Here is a tutorial, how to use pddl4j for that purpose:
https://github.com/pellierd/pddl4j/wiki/A-tutorial-to-develop-your-own-planner
If you need to use JSHOP2 in particular, it looks from its documentation (http://www.cs.umd.edu/projects/shop/description.html) as if you do indeed need to compile the domain and problem PDDL into Java code using the following commands:
java JSHOP2.InternalDomain domainFileName
java JSHOP2.InternalDomain -r problemFileName
Edited on June 19th
Java package names (e.g. JSHOP2) and class names (InternalDomain) are case-sensitive, so make sure you type them as per the documentation. That is probably why you are getting the "main class not found" error.
It is difficult to say what line numbers 748 and 720 correspond to exactly, because the code in the GitHub repo https://github.com/mas-group/jshop2/blob/master/src/JSHOP2/InternalDomain.java is different from yours. Can you indicate in your question which lines those are exactly?
The makefile shows how to execute an out-of-the-box example in the distribution:
cd examples\blocks
java JSHOP2.InternalDomain blocks
java JSHOP2.InternalDomain -r problem300
Does that work for you?

Debugging test cases that are a combination of Robot Framework and Python Selenium

Currently I'm using Eclipse with the Nokia/RED plugin, which allows me to write Robot Framework test suites. It supports Python 3.6 and Selenium.
My project is called "Automation" and the test suites are in .robot files.
Test suites contain test cases, which are built from "Keywords", for example:
*** Test Cases ***
Create New Vehicle
    Create new vehicle with next ${registrationno} and ${description}
    Navigate to data section
Those "Keywords" are imported from python library and look like:
@keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self, registrationno, description):
    headerPage = HeaderPage(TestCaseKeywords.driver)
    sideBarPage = headerPage.selectDaten()
    basicVehicleCreation = sideBarPage.createNewVehicle()
    basicVehicleCreation.setKennzeichen(registrationno)
    basicVehicleCreation.setBeschreibung(description)
    TestCaseKeywords.carnumber = basicVehicleCreation.save()
The problem is that when I run the test cases, the log only shows the result of the whole Python function: pass or fail. I can't see at which step it failed: was it the first or the second step of the function?
Is there any plugin or other solution that would let me see exactly which Python call passed or failed? (Of course, a workaround is to wrap every function in its own keyword in the test case, but that is not what I prefer.)
If you need to "step into" a Python-defined keyword, you need to use a Python debugger together with RED.
This can be done with any Python debugger; if you would like to have everything in one application, PyDev can be used with RED.
Follow the help document below, and if you face any problems, leave a comment here.
RED Debug with PyDev
If you want to know which statement in the Python-based keyword failed, you simply need to have it raise an appropriate error. Robot won't do this for you, however; from a reporting standpoint, a Python-based keyword is a black box. You will have to explicitly add logging messages and raise useful errors.
For example, the call to sideBarPage.createNewVehicle() should throw an exception such as "unable to create new vehicle". Likewise, the call to basicVehicleCreation.setKennzeichen(registrationno) should raise an error like "failed to register the vehicle".
If you don't have control over those methods, you can do the error handling from within your keyword:
@keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self, registrationno, description):
    headerPage = HeaderPage(TestCaseKeywords.driver)
    sideBarPage = headerPage.selectDaten()
    try:
        basicVehicleCreation = sideBarPage.createNewVehicle()
    except:
        raise Exception("unable to create new vehicle")
    try:
        basicVehicleCreation.setKennzeichen(registrationno)
    except:
        raise Exception("unable to register new vehicle")
    ...
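For the logging side, you can emit log messages from inside the keyword so each step is visible in the Robot log. A minimal sketch using Robot Framework's logging API (the page-object names are the ones from the question; the messages are invented):
from robot.api import logger
from robot.api.deco import keyword

@keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self, registrationno, description):
    headerPage = HeaderPage(TestCaseKeywords.driver)
    logger.info("Navigating to the data section")  # appears under the keyword in log.html
    sideBarPage = headerPage.selectDaten()
    logger.info("Creating a new vehicle")
    basicVehicleCreation = sideBarPage.createNewVehicle()
    # ... one logger.info call per step shows exactly how far the keyword got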

Using the output of a [ValueSourceAttribute] NUnit test in the following test

I am developing a unit test project where I create an item in one test, then create sub-items for it in the following test.
These tests are parameterized, and the parameters are collected at runtime, as soon as the project starts. The second test fails to retrieve the parent item from the database because it has not been created yet, "as I haven't run the first test yet".
Is there a workaround for this?
The first function:
[Test, Sequential]
public void AddInitiative([ValueSourceAttribute("Get_AddInitiatives_Data_FromExcel")] AddInitiative Initiative_Object)
{
    string URL = "http://" + Server_name + Port_number + "/IntegrationHub/IntegrationHub.svc/RestUnsecure/AddInitiative";
    string Token = Get_Security_token("gpmdev\\administrator", "Xyz7890", TenantID_Input);
    var Response = POST_Request(Initiative_Object, URL, Token);
    Guid Returned_GUID = GenericSerializer<Guid>.DeserializeFromJSON(Response);
    DataBase_Queries DB = new DataBase_Queries();
    List<StrategyItem> StrategyItemsFromDB = DB.GetStrategyItemByID(Returned_GUID.ToString());
    Assert.AreEqual(Initiative_Object.Initiative.Name_En, StrategyItemsFromDB[0].Name_En);
}
The second function that fails:
[Test, Sequential]
public void AddInitiativeMilestones([ValueSourceAttribute("Get_AddInitiativeMilestones_Data_FromExcel")] AddMilestone Milestone_Object)
{
    string URL = "http://" + Server_name + Port_number + "/IntegrationHub/IntegrationHub.svc/RestUnsecure/AddInitiativeMilestones";
    string Token = Get_Security_token("gpmdev\\administrator", "Xyz7890", TenantID_Input);
    var Response = POST_Request(Milestone_Object, URL, Token);
    List<Milestone> Returned_Milestone = GenericSerializer<List<Milestone>>.DeserializeFromJSON(Response);
    DataBase_Queries DB = new DataBase_Queries();
    List<StrategyItem> StrategyItemsFromDB = DB.GetStrategyItemByID(Returned_Milestone[0].ID.ToString());
    Assert.AreEqual(Milestone_Object.Milestones[0].Name_En, Returned_Milestone[0].Name_En);
    Assert.AreEqual(Milestone_Object.Milestones[0].Name_En, StrategyItemsFromDB[0].Name_En);
}
Update: When I clicked Clear Fixture in the GUI, the test data was reloaded, but is there a way to do that without the GUI?
It's generally bad practice in unit tests to have one test depend on (i.e. use output from) another test. In this case, with NUnit, it's actually impossible.
It's impossible because NUnit creates tests long before they are executed. NUnit will call your TestCaseSource methods at what we call "load time", when NUnit decides what tests exist and populates a GUI if you use one.
The code in your tests executes at "run time" for the tests. In a GUI, this may happen multiple times for each load: every time you click Run, for example.
Note that I'm explaining this in terms of a GUI because it's an easy way to conceptualize it. NUnit works the same way whether you are running in batch or interactively.
If you want something to happen only once, before any tests are run, you can use OneTimeSetUp (TestFixtureSetUp in NUnit V2) to set it up. You can use a member of the class to save whatever you need from that execution and access it from your tests. However, this will still happen at "run time", decades (in computer terms) after your tests have been loaded.
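A minimal sketch of that pattern (the fixture and helper names here are invented for illustration):
using System;
using NUnit.Framework;

[TestFixture]
public class InitiativeTests
{
    // State shared by all tests in this fixture.
    private Guid _parentId;

    [OneTimeSetUp]
    public void CreateParentItem()
    {
        // Runs once at run time, before any test in the fixture executes.
        _parentId = CreateInitiativeInDatabase(); // hypothetical helper
    }

    [Test]
    public void AddInitiativeMilestones()
    {
        // Every test can rely on the parent item already existing.
        Assert.That(_parentId, Is.Not.EqualTo(Guid.Empty));
    }

    private static Guid CreateInitiativeInDatabase()
    {
        // Stand-in for the OP's POST_Request call that creates the initiative.
        return Guid.NewGuid();
    }
}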

TestNG - emailable-report issue

I have created a Java method to send mail with the TestNG emailable-report attached. The mail and report are sent fine to the specified email address.
My issue is that I call the send-mail method last, after all the other tests complete, but I always get the previous run's report in the mail. So does TestNG update the report only after all classes have executed?
I want to get the latest emailable-report in the mail rather than the previous one. How can I do that?
I want to get the latest emailable-report in the mail rather than the previous one. How can I do that?
I think you're sending the mail in your @AfterSuite. You're getting the previous emailable-report because the test is currently running, and reports are generated only when it finishes.
I suggest using a continuous integration server like Jenkins, because it offers sending emails as a post-build option, or a build tool like Maven or Ant to run your tests and then a post-test event to email the results. Maven also provides plugins to automatically send mail after test execution, such as the Postman mail plugin.
Another solution, if you are not willing to use a continuous integration server (Jenkins), Maven, or Ant:
TestNG IReporter listener
Create your own custom report by implementing the IReporter interface in your class. This interface has only one method to implement, generateReport. This method receives all the information of a complete test execution in its parameters, and we can generate a report from it.
public class CustomReporter implements IReporter {
    @Override
    public void generateReport(List<XmlSuite> arg0, List<ISuite> arg1,
            String outputDirectory) {
        // The second parameter, ISuite, contains all the suites that were executed.
        for (ISuite iSuite : arg1) {
            // Get the result map of a single suite at a time
            Map<String, ISuiteResult> results = iSuite.getResults();
            // Get the keys of the result map
            Set<String> keys = results.keySet();
            // Go through each map value one by one
            for (String key : keys) {
                // The context object of the current result
                ITestContext context = results.get(key).getTestContext();
                // Results of all the test cases are stored in the context object.
                // Ex: context.getFailedTests() gives all failed tests; similarly you can
                // get passed and skipped test results and build your own HTML report from them.
            }
        }
    }
}
Hope this helps you... Kindly get back if you need any further help.
For Windows, I've created (for TestNG) a method in @AfterSuite that sends mail with the attachment, plus a batch file with two steps. The first step launches mvn clean test to execute all tests without previous results. The second step launches the tests without the clean parameter and operates on a non-existent group :)
The batch file is in the folder with the tests.
start /wait cmd /k "mvn clean test && exit" /secondary /minimized
REM send email with results in the file, which was composed after the test suite (mvn without clean)
start /wait cmd /k "mvn test -Dgroups=PhantomGroupNonExist && exit" /secondary /minimized