Is it possible to override a UVM test specified via +UVM_TESTNAME=test1 by also having +uvm_set_type_override=test1,test2? - system-verilog

I am wondering if it is possible to override a test specified on the command line via +UVM_TESTNAME with +uvm_set_type_override.
I have tried it, and this is what I see in the log:
UVM_INFO # 0: reporter [RNTST] Running test Test1...
UVM_INFO # 0: reporter [UVM_CMDLINE_PROC] Applying type override from the command line: +uvm_set_type_override=Test1,Test2
So it seems to me that the test component is created first and the factory overrides are applied afterwards?
I see the following pieces of code in uvm_root.svh:
// if test now defined, create it using common factory
if (test_name != "") begin
  if(m_children.exists("uvm_test_top")) begin
    uvm_report_fatal("TTINST",
      "An uvm_test_top already exists via a previous call to run_test", UVM_NONE);
    #0; // forces shutdown because $finish is forked
  end
  $cast(uvm_test_top, factory.create_component_by_name(test_name,
    "", "uvm_test_top", null));
It is using the factory, but I don't know whether the overrides have actually been applied at that point. I also see the following code:
begin
  if(test_name=="")
    uvm_report_info("RNTST", "Running test ...", UVM_LOW);
  else if (test_name == uvm_test_top.get_type_name())
    uvm_report_info("RNTST", {"Running test ",test_name,"..."}, UVM_LOW);
  else
    uvm_report_info("RNTST", {"Running test ",uvm_test_top.get_type_name(),
      " (via factory override for test \"",test_name,"\")..."}, UVM_LOW);
end
I am wondering if the "else" branch above is ever executed, and if so, under what condition?

It seems that there is an issue with command-line processing order in the UVM: +UVM_TESTNAME gets processed separately, before all the other options.
It is possible to set an override before calling run_test() in the initial block.
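For example, a minimal sketch, assuming Test1 and Test2 are both registered with the factory via `uvm_component_utils:

initial begin
  // apply the override before run_test() constructs uvm_test_top
  Test1::type_id::set_type_override(Test2::get_type());
  run_test("Test1"); // Test2 is what actually gets constructed
end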
But what is the point of setting up the test name, and then overriding it on the same command line? Why not use the overridden test name as the test?

In general, anything registered with the UVM factory can be overridden at runtime with a command-line switch.
In the case of test names, there is a command line switch called +UVM_TESTNAME=selected_test_name_here.
Typically, we have the base test as the default in the run_test("your_base_test_name") call in the top module, and then we can select various tests at runtime without recompiling (as long as each test has been included in the compile), passing +UVM_TESTNAME=selected_test_at_runtime as we cycle through test names when running regressions or switching tests while debugging the design.
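A minimal sketch of that pattern (the module and test names here are placeholders):

module top;
  import uvm_pkg::*;
  initial begin
    // used when no +UVM_TESTNAME is given on the command line
    run_test("your_base_test_name");
  end
endmodule

// at run time, pick a different test without recompiling, e.g.:
//   <simulator> ... +UVM_TESTNAME=selected_test_at_runtime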

Related

Stop huge error output from testing-library

I love testing-library, have used it a lot in a React project, and I'm trying to use it in an Angular project now - but I've always struggled with the enormous error output, including the HTML text of the render. Not only is this not usually helpful (I couldn't find an element, here's the HTML where it isn't); but it gets truncated, often before the interesting line if you're running in debug mode.
I simply added it as a library alongside the standard Angular Karma+Jasmine setup.
I'm sure you could say the components I'm testing are too large if the HTML output causes my console window to spool for ages, but I have a lot of integration tests in Protractor, and they are SO SLOW :(.
I would say the best solution is to use the configure method and pass a custom function for getElementError that does what you want.
You can read about configuration here: https://testing-library.com/docs/dom-testing-library/api-configuration
An example of this might look like:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
You can then put this in any single test file or use Jest's setupFiles or setupFilesAfterEnv config options to have it run globally.
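For instance, a minimal sketch of wiring it up globally with Jest (the setup file name and path are just examples):

// jest.config.js
module.exports = {
  setupFilesAfterEnv: ['<rootDir>/jest.setup.js'],
};

// jest.setup.js would then contain the configure({ getElementError: ... }) call shown above.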
I am assuming you are running Jest with RTL in your project.
I personally wouldn't turn it off, as it's there to help us, but everyone has their own way, so if you have your reasons, fair enough.
1. If you want to disable errors for a specific test, you can mock the console.error.
it('disable error example', () => {
  const errorObject = console.error; // store the state of the object
  console.error = jest.fn();         // mock the object
  // code
  // assertion (expect)
  console.error = errorObject;       // assign it back so you can use it in the next test
});
2. If you want to silence it for all the tests, you could use the jest --silent CLI option. Check the docs.
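For example, assuming Jest is invoked through an npm test script, the flag can be forwarded like this:
npm test -- --silent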
The above might even disable the DOM printing that is done by RTL; I am not sure, as I haven't tried it, but if you look at the docs I linked, it says:
"Prevent tests from printing messages through the console."
If the above doesn't work, you almost certainly have everything disabled except the DOM printing. In that case you might look into react-testing-library's source code and find out what is used for those print statements. Is it a console.log? Is it a console.warn? Once you know, just mock it out like in option 1 above.
UPDATE
After some digging, I found out that all testing-library DOM printing is built on prettyDOM();
While prettyDOM() can't be disabled, you can limit the number of lines to 0, which would just give you the error message and three dots ... below the message.
Here is an example printout I messed around with:
TestingLibraryElementError: Unable to find an element with the text: Hello ther. This could be because the text is broken up by multiple elements. In this case, you can provide a function for your text matcher to make your matcher more flexible.
...
All you need to do is to pass in an environment variable before executing your test suite, so for example with an npm script it would look like:
DEBUG_PRINT_LIMIT=0 npm run test
Here is the doc
UPDATE 2:
As per the OP's feature request on GitHub, this can also be achieved without injecting a global variable to limit the prettyDOM line output (in case it's used elsewhere). The getElementError config option needs to be changed:
dom-testing-library/src/config.js
// called when getBy* queries fail. (message, container) => Error
getElementError(message, container) {
  const error = new Error(
    [message, prettyDOM(container)].filter(Boolean).join('\n\n'),
  )
  error.name = 'TestingLibraryElementError'
  return error
},
The call stack can also be removed.
You can change how the message is built by setting the DOM testing library message building function with config. In my Angular project I added this to test.js:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
This was answered here: https://github.com/testing-library/dom-testing-library/issues/773 by https://github.com/wyze.

Elixir Postgres view returning empty dataset when testing

I am trying to test a view created in Postgres, but it is returning an empty result set. However, when testing out the view in an Elixir interactive shell, I get back the expected data. Here are the steps I have taken to create and test the view:
Create a migration:
def up do
  execute """
  CREATE VIEW example_view AS
  ...
Create the schema:
import Ecto.Changeset

schema "test_view" do
  field(:user_id, :string)
Test:
describe "example loads" do
setup [
:with_example_data
]
test "view" do
query = from(ev in Schema.ExampleView)
IO.inspect Repo.all(query)
end
end
The response back is an empty array []
Is there a setting that I am missing to allow for views to be tested in test?
As pointed out in one of the comments:
iex, mix phx.server... run on the :dev environment and the dev DB
tests use the :test environment and run on a separate DB
It actually makes a lot of sense, because you want your test suite to be reproducible and independent of whatever records you might create/edit in your dev env.
You can open iex in the :test environment to confirm that your query returns the empty array here too:
MIX_ENV=test iex -S mix
What you'll need is to populate your test DB with some known records before querying. There are at least 2 ways to achieve that: fixtures and seeds.
Fixtures:
define some helper functions to create records in test/support/test_helpers.ex (typically: takes some attrs, adds some defaults and calls some create_ function from your context)
def foo_fixture(attrs \\ %{}) do
  {:ok, foo} =
    attrs
    |> Enum.into(%{name: "foo", bar: " default bar"})
    |> MyContext.create_foo()

  foo
end
call them within your setup function or test case before querying
side note: you should use DataCase for tests involving the DB. With DataCase, each test is wrapped in its own transaction and any fixture that you created will be rolled back at the end of the test, so tests are isolated and independent of each other.
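For example, a minimal sketch of a setup helper using the fixture above, matching the :with_example_data atom already listed in the test's setup (names are placeholders):

defp with_example_data(_context) do
  foo = foo_fixture(%{name: "example"})
  %{foo: foo}
end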
Seeds:
If you want to include some "long-lasting" records as part of your "default state" (e.g. for a list of countries, categories...), you could define some seeds in priv/repo/seeds.exs.
The file should have been created by the Phoenix generator and shows you how to add seeds (typically using Repo.insert!/1).
By default, mix will run those seeds whenever you run mix ecto.setup or mix ecto.reset, just after your migrations (whatever env is used).
To apply any changes in seeds.exs, you can run the following:
# reset dev DB
mix ecto.reset
# reset test DB
MIX_ENV=test mix ecto.reset
If you need some seeds to be environment specific, you can always introduce different seed files (e.g. dev_seeds.exs) and modify your mix.exs to configure ecto.setup.
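A sketch of what that could look like in mix.exs (the layout mirrors the default Phoenix aliases; the dev_seeds.exs path and alias name are just examples):

defp aliases do
  [
    "ecto.setup": ["ecto.create", "ecto.migrate", "run priv/repo/seeds.exs"],
    "ecto.reset": ["ecto.drop", "ecto.setup"],
    # hypothetical extra alias for dev-only seeds
    "ecto.seed.dev": ["run priv/repo/dev_seeds.exs"]
  ]
end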
Seeds can be very helpful not only for tests but for dev/staging in the early stage of a project, while you are still tinkering a lot with your schema and you are dropping the DB frequently.
I usually find myself using a mix of both approaches.

In Katalon, Test Listeners run for each called test case when the actual test case has a couple of Call Test Case steps

I am using Katalon to prepare UI automation test cases.
Below is the structure of my code:
1. Call Login Test Case
2. Call Book Appointment Test Case
3. Call Logout Test Case.
I expected that once all three activities completed I would get the status of the test case, but instead it runs Step 1 (calling a test case), then the Test Listeners, then Step 2 (calling a test case), then the Test Listeners, and finally Step 3 and then the Test Listeners.
I want the listeners to run only once, after all the steps have completed. How can I restrict that?
How do I use Test Listeners when I have multiple called test cases in my original test case?
You can use KeywordUtil (https://api-docs.katalon.com/com/kms/katalon/core/util/KeywordUtil.html) to mark tests as passed or failed.
For example, you add KeywordUtil.markPassed("Test completed successfully!") (or markFailed) at the end of each of the test cases.
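A minimal sketch of what that could look like at the end of the last called test case (assuming the standard Katalon Groovy script mode):

import com.kms.katalon.core.util.KeywordUtil

// ... logout steps ...
KeywordUtil.markPassed("Test completed successfully!")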

Debugging test cases when they are a combination of Robot Framework and Python Selenium

Currently I'm using Eclipse with the Nokia/RED plugin, which allows me to write Robot Framework test suites. It supports Python 3.6 and Selenium.
My project is called "Automation" and the test suites are in .robot files.
Test suites contain test cases, which call "Keywords".
*** Test Cases ***
Create New Vehicle
    Create new vehicle with next ${registrationno} and ${description}
    Navigate to data section
Those "Keywords" are imported from python library and look like:
#keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
basicVehicleCreation = sideBarPage.createNewVehicle()
basicVehicleCreation.setKennzeichen(registrationno)
basicVehicleCreation.setBeschreibung(description)
TestCaseKeywords.carnumber = basicVehicleCreation.save()
The problem is that when I run test cases, in the log I only get the result of this whole Python function, pass or fail. I can't see at which step it failed: was it the first or the second step of the function?
Is there any plugin or other solution that lets me see exactly which Python call passed or failed? (Of course, a workaround is to wrap every call in its own keyword in the test case, but that is not what I prefer.)
If you need to "step into" a Python-defined keyword, you need to use a Python debugger together with RED.
This can be done with any Python debugger; if you like to have everything in one application, PyDev can be used with RED.
Follow the help document below; if you face any problems, leave a comment here.
RED Debug with PyDev
If you want to know which statement in the Python-based keyword failed, you simply need to have it raise an appropriate error. Robot won't do this for you, however; from a reporting standpoint, a Python-based keyword is a black box. You will have to explicitly add logging messages and raise useful errors.
For example, the call to sideBarPage.createNewVehicle() should throw an exception such as "unable to create new vehicle". Likewise, the call to basicVehicleCreation.setKennzeichen(registrationno) should raise an error like "failed to register the vehicle".
If you don't have control over those methods, you can do the error handling from within your keyword:
#keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
try:
basicVehicleCreation = sideBarPage.createNewVehicle()
except:
raise Exception("unable to create new vehicle")
try:
basicVehicleCreation.setKennzeichen(registrationno)
except:
raise exception("unable to register new vehicle")
...

chai.assert() won't run methods in a test before the assertion (chai assert lib with Protractor)

This is the first time I've posted a question on SO; I hope I'm doing it right.
it(' :: 2.0 service creation :: should fill out service info tab', function() {
  createNewService.setServiceName(e2eConfig.newServiceDetails.basicServiceName);
  createNewService.selectCategory();
  createNewService.setIntroText(e2eConfig.newServiceDetails.introText);
  createNewService.selectParent();
  createNewService.uploadIcon();
  createNewService.nextTab();
  // right now assert will fire off without running the methods above because
  // we are still on the infoTab
  assert(($(createNewService.selectors.infoTab).isDisplayed()) == true, 'did not move to the next tab');
}, 20000);
What this test does is it fills the inputs, selects drop-downs where necessary and uploads a file.
The test then attempts to switch to the next tab in the widget.
To determine whether it managed to switch to the next tab I want to make a chai library assertion with a custom message.
With the current code, the assert will return true because it sees the infoTab, and the test will fail without running any of the methods before the assert.
If I change the assert line to check for '!== true', then it will run the methods and move on.
In any case, would it be better to do this in a different manner or perhaps use expect instead of assert?
Chai assert API
Chai expect API
All Protractor function calls return promises that resolve asynchronously, so if the functions you defined on createNewService are all calling Protractor functions, you'll have to wait for them to resolve before calling the assert. Try something like the following:
it(' :: 2.0 service creation :: should fill out service info tab', function(done) {
  createNewService.setServiceName(e2eConfig.newServiceDetails.basicServiceName);
  createNewService.selectCategory();
  createNewService.setIntroText(e2eConfig.newServiceDetails.introText);
  createNewService.selectParent();
  createNewService.uploadIcon();
  createNewService.nextTab().then(function() {
    assert.eventually.strictEqual($(createNewService.selectors.infoTab).isDisplayed(), true, 'did not move to the next tab');
    done();
  });
}, 20000);
A few things to note:
This example assumes that createNewService.nextTab() returns a promise.
You'll need to use a library like chai-as-promised to handle assertions on the values returned from promises (see the setup sketch after this list). In your code you're asserting that a promise object == true, which is truthy due to coercion.
Since your functions run asynchronously, you'll need to pass a done callback to your anonymous function and call it when your test is finished. Information about testing asynchronous code can be found here.
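A minimal sketch of the chai-as-promised setup assumed by the assert.eventually call above (for example in Protractor's onPrepare; the global assignment is just one option):

const chai = require('chai');
const chaiAsPromised = require('chai-as-promised');

chai.use(chaiAsPromised);
// expose the promise-aware assert to the specs
global.assert = chai.assert;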