How to control the WebTau report

When I first used WebTau, it would produce a report file in the project directory where the WebTau tests were run. My tests that use WebTau are no longer generating a report.
Is there a way to (manually) control when the report is produced?
Can I specify an output directory for this report?
There is also the question of how I managed to change things so that the report is no longer generated automatically. I have moved some common code patterns into a TestingSupport project, because I found a bunch of tests were almost cut-and-paste at this early stage. There's no report file appearing in another directory, though.
Another potential explanation is that I commented out the @WebTau annotation on my test class. I did that when I moved the REST calling patterns to a library class. Everything works fine, but of course there's no report. I'm guessing that is going to be a clue.

The @WebTau annotation is essential for JUnit 5 to generate the report. You need to annotate every class that you want to participate in the report.
Alternatively, you can register the global WebTau extension org.testingisdocumenting.webtau.junit5.WebTauJunitExtension following this JUnit 5 guide: https://www.baeldung.com/junit-5-extensions#1-automatic-extension-registration
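For example, a minimal sketch of an annotated test class (the class name, URL path, and assertion are hypothetical, following the WebTau getting-started docs):

import org.junit.jupiter.api.Test;
import org.testingisdocumenting.webtau.junit5.WebTau;
import static org.testingisdocumenting.webtau.WebTauDsl.*;

@WebTau // without this (or the globally registered extension) the class won't appear in the report
class WeatherIT {
    @Test
    void weatherEndpointRespondsWithSaneTemperature() {
        // WebTau's http DSL records this call so it shows up in the generated report
        http.get("/weather", (header, body) -> {
            body.get("temperature").shouldBe(lessThan(100));
        });
    }
}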
Use the reportPath config value to change the report location.
It can be specified in the config file, as a system-property override, or via an environment variable.
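A minimal sketch of the config-file route (the file name webtau.cfg.groovy follows the WebTau docs; the url and path values are placeholders):

// webtau.cfg.groovy in the working directory
url = "http://localhost:8080"
reportPath = "build/reports/webtau/webtau.report.html"

The same key can then be overridden per run, e.g. with a -DreportPath=... system property or a matching environment variable.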

Related

Adjust test name in NUnit

We're running our test suite twice for different configurations. Our CI server aggregates them into a single report, which doesn't work properly because all the test names exist twice in the NUnit report file.
Is there a way to adjust all test names, e.g. by adding a prefix, depending on a configuration value? In other words: is it possible to dynamically prepend a prefix to all the test names (e.g. in the SetUpFixture or something like that)?
Ah, found it. There is an option of the console runner:
--test-name-format="MyPrefix_{m}{a}"
See https://github.com/nunit/docs/wiki/Console-Command-Line and https://github.com/nunit/docs/wiki/Template-Based-Test-Naming for more information.
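For example, running the suite twice with different prefixes ({m} expands to the method name, {a} to the argument list; the assembly name is made up):

nunit3-console.exe MyTests.dll --test-name-format="ConfigA_{m}{a}"
nunit3-console.exe MyTests.dll --test-name-format="ConfigB_{m}{a}"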

Library Startup Script in Dymola

Using Dymola, I'm looking for a way to automatically execute a script when loading a library. The intention is to define additional displayUnits using the defineUnitConversion() command, which are specific to the library that is loaded. Still, I think there are quite a few other cases where this could be helpful.
What I figured out in this regard:
I know that it is possible to add conversions to the file DymolaInstallDir/insert/displayUnits.mos, but this comes with the disadvantage that it has to be done again on every new computer or after every Dymola update. I would like to avoid this.
Other than that, I only found the libraryinfo.mos file, which seems to be read during the start-up of Dymola. I therefore assume it is not the right place to put the conversions, as it contains general information about the library and should only contain the respective functions.
Dymola 2022 has a new (tool-specific) feature that covers exactly this use-case. It is mentioned in the Dymola 2022 release notes in the section "Library startup script" on page 24.
It introduces a new annotation that allows specifying a path to a .mos script, which is executed when the respective library is loaded. Here is the example from the release notes:
package ThisPack
  annotation(__Dymola_startup=
    "modelica://ThisPack/Resources/Scripts/Dymola/startup.mos");
end ThisPack;
The annotation can also be set via the UI...
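For the displayUnit use case from the question, the referenced startup.mos could then hold the conversions, e.g. (a sketch; defineUnitConversion takes the base unit, the display unit, a scale factor, and an optional offset):

// Resources/Scripts/Dymola/startup.mos
defineUnitConversion("K", "degC", 1, -273.15); // show temperatures in degrees Celsius
defineUnitConversion("W", "kW", 0.001); // show power in kilowatts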

Protractor - how to reuse the same spec file for different tests

In my Protractor conf.js file, I'd like to re-use the same spec files multiple times; however, it seems this is not possible.
Some background:
We are reading test cases from a JSON file, launching reports, then testing grid results and various DOM elements.
All reports have the same format. The primary differences lie in the report titles, data columns, actual data results, etc.
So in my conf.js file, ideally I'd like to re-use the same spec files multiple times - but my understanding is that I cannot do this.
For example, my spec array:
specs: [
  'spec/report1-spec.js',
  'spec/report-grid-details-spec.js',
  'spec/report2-spec.js',
  'spec/report-grid-details-spec.js',
  'spec/report3-spec.js',
  'spec/report-grid-details-spec.js',
]
I've read this post (http://ramt.in/how-to-run-identical-jasmine-specs-multiple-times-with-protractor/) where you can move your spec files into a node module, but 1) I don't want to move all spec files there, and 2) it doesn't work anyway when I move even one spec file into a module-export file.
If I can't do it, then I'll just move my report-grid-details-spec.js code into a common page object file and call it whenever it's needed.
Just wondering if anyone out there has found a solution to this need to re-use spec files multiple times in one conf.js configuration.
Thank you,
Bob
"If I can't do it, then I'll just move my report-grid-details-spec.js code into a common page object file and call it whenever it's needed."
This would probably be the easiest way to approach the problem. Though I like the idea of putting specs into modules - it is a plus for reusability overall.
The thing is, Jasmine does not allow executing the same spec file twice in a single test run, and, from what I understand, there is no easy way to change that behavior.
One possible workaround is to completely restart Protractor and hence recreate the Jasmine testing environment, so that the next report-grid-details-spec.js run happens in a fresh environment. This is what the protractor-flake project does to retry failing tests: it restarts Protractor through the command line, passing the failing specs as a comma-separated list to the specs argument (source).
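If you go the shared-code route, it can stay a plain module rather than a full page object: export the grid-details suite as a parameterized function and call it from each report spec, so no spec file appears twice in conf.js. A sketch (file names and selectors are hypothetical):

// spec/report-grid-details.js - shared module, NOT listed under specs
module.exports = function gridDetailsSuite(reportName) {
  describe('grid details: ' + reportName, function () {
    it('renders the results grid', function () {
      var grid = element(by.css('.report-grid')); // selector is an assumption
      expect(grid.isPresent()).toBe(true);
    });
  });
};

// spec/report1-spec.js
var gridDetailsSuite = require('./report-grid-details.js');
describe('report 1', function () {
  // report-specific checks here...
});
gridDetailsSuite('report 1'); // re-used per report, no duplicate spec entries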

xtend2 code generation with ecore meta-model

I created an ecore meta-model, a genmodel, and a corresponding model.
Now I want to generate code from this.
I found this post and wanted to implement it. I get no errors, but how do I get the CodeGenerator to write the desired output into a file like 'test.txt' (taking that example from the referenced question)?
Do I need a workflow file (mwe2), or am I missing something?
I only needed to run the application as Java and the code worked. I don't need any workflow or mwe2!
But apparently I still can't create files the way I can with a Generator.
I can just use a simple FileWriter.
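A minimal sketch of that FileWriter fallback (class name and generated content are placeholders; the point is that plain java.io works where the Xtext Generator's IFileSystemAccess isn't available):

import java.io.FileWriter;
import java.io.IOException;

public class GenerateFromModel {
    public static void main(String[] args) throws IOException {
        // build the output text from the loaded ecore model here
        String generated = "content produced from the model"; // placeholder

        // write it out with a plain FileWriter instead of IFileSystemAccess
        try (FileWriter writer = new FileWriter("test.txt")) {
            writer.write(generated);
        }
    }
}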

Is it possible to run NUnit against a specific (long) list of tests

I have a list of several thousand NUnit tests that I want to run, generated automatically by another tool. (This is a subset of all the tests, and it changes frequently.)
I'd like to be able to run these via NUnit-Console.exe. Unfortunately the /run option only takes a direct list of test names, which in my case would not fit on a single command line. I'd like it to pick the list up from a file.
I appreciate that I could use categories, but the list I want to run changes frequently and so I'd prefer not to have to start changing source code.
Does anyone know if there is a clean way to get NUnit to run my specified tests?
(I could break it down into a series of smaller calls to NUnit-console with a full command line, but that's not very elegant)
(If it's not possible, maybe I should add it as an NUnit feature request.)
I had a reply from Charlie Poole (from the NUnit development team): this is not currently possible, but it has been added as a feature request for NUnit 2.6.
I see what you're saying but, as you say, you can run a single fixture from the command line:
nunit-console /fixture:namespace.fixture tests.dll
How about generating all the tests in the same fixture? Or placing them all in the same assembly?
nunit-console tests.dll
As mentioned in the NUnit link, we need to specify the scenario/test case name. It's simple, but there is a bit of a trick to it. Directly specifying the bare test case name will not work; you will end up with 0 test cases executed. We need to write the fully qualified name.
I don't know how it works for other languages, but using C# I have found a solution. Whenever we create a feature file, a corresponding .feature.cs file gets created in Visual Studio. Open featureFileName.feature.cs and look for the namespace; keep it aside (part 1):
namespace MMBank.Test.Features
Scroll down a bit and you will find the class name. Note that as well and keep it aside (part 2):
public partial class HistoricalTransactionFeature
Keep scrolling down and you will see the code that NUnit understands for execution:
[NUnit.Framework.TestAttribute()]
[NUnit.Framework.DescriptionAttribute("TC_1_A B C D")]
[NUnit.Framework.CategoryAttribute("MM_Bank")]
Below those attributes you can see the method name, which will most likely be TC_1_ABCD with certain parameters:
public virtual void TC_1_ABCD(string username, string password, string visit)
You will have multiple such methods, depending on the number of scenarios in your feature file. Note the method (test case) you want to execute and keep it aside (part 3).
Now join all the parts with dots. You will end up with something like this:
MMBank.Test.Features.HistoricalTransactionFeature.TC_1_ABCD
That's it. Similarly, you can build test case names from multiple feature files and stack them up in a text file, one test case name per line. For running them from the command prompt, browse through the NUnit link above.
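For example, the NUnit 3 console runner can read such a file directly via its --testlist option (assembly and file names are made up; the file holds one fully qualified test name per line):

nunit3-console.exe MMBank.Test.dll --testlist=testcases.txt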