How do I associate a data-driven NUnit test method with a test case in Azure DevOps (ADO)?
There is a test case with multiple iterations, and the test automation framework (TAF) uses NUnit. Although NUnit supports data-driven test methods, when you add an association to the test case, each iteration is associated separately. I cannot afford to create a separate test case for each iteration, and I would like to run the tests through the association. What would you suggest?
The only idea I have is to create a wrapper test method that calls each iteration internally. The disadvantage is that if one iteration fails and you want to re-run it, you have to re-run the whole test, which is bad for performance.
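A minimal sketch of that wrapper idea, with hypothetical test names and data (Assert.Multiple lets every iteration run even when an earlier one fails):

    using NUnit.Framework;

    [TestFixture]
    public class CheckoutTests
    {
        // Data-driven method: ADO associates each TestCase iteration separately.
        [TestCase(1, 10)]
        [TestCase(2, 20)]
        [TestCase(3, 30)]
        public void Checkout_total_is_correct(int quantity, int expectedTotal)
        {
            Assert.That(quantity * 10, Is.EqualTo(expectedTotal));
        }

        // Wrapper: a single non-parameterized test that can be associated with one
        // ADO test case. A failed iteration can only be re-run by re-running the
        // whole wrapper, which is the performance drawback mentioned above.
        [Test]
        public void Checkout_total_is_correct_AllIterations()
        {
            Assert.Multiple(() =>
            {
                Checkout_total_is_correct(1, 10);
                Checkout_total_is_correct(2, 20);
                Checkout_total_is_correct(3, 30);
            });
        }
    }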
There are some configuration settings that most of the tests in an integration test suite need to share, such as database connection strings. Fetching these from the system where they are stored is very slow. I'm planning to create a Fixture class with the SetUpFixture attribute in the root namespace of the assembly, and use the OneTimeSetUp attribute on the method that fetches the config data. That ensures it runs only once, before any of the tests start.
I can use a static property on the same Fixture class so that individual tests can read the config items via Fixture.ConfigSettings. This seems to work fine in some preliminary testing. Since the tests only read these settings, there shouldn't be any cross-test interference.
Is an arrangement like this a common way to handle this situation with NUnit? Are there other built-in NUnit features or recommended patterns that might be helpful?
Yes, this will work. You should be clear, however, that a SetUpFixture and a TestFixture serve different purposes: do not use both attributes on the same class, and do not inherit one from the other.
As you noted, only static properties will work in this situation, and the values should not change once set.
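A minimal sketch of that arrangement, using placeholder names (ConfigSettings and ConfigStore are assumptions, not existing types):

    using NUnit.Framework;

    // Placed in the root namespace of the test assembly so the OneTimeSetUp method
    // runs once before any test in the assembly.
    [SetUpFixture]
    public class Fixture
    {
        public static ConfigSettings ConfigSettings { get; private set; }

        [OneTimeSetUp]
        public void LoadSharedConfiguration()
        {
            // The slow fetch happens exactly once per test run; tests only read the result.
            ConfigSettings = ConfigStore.Load();
        }
    }

    // In an individual test:
    //   var connectionString = Fixture.ConfigSettings.DatabaseConnectionString;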
Using xUnit and the TestServer from Microsoft.AspNet.TestHost, how can I wrap each test in a database transaction that can be rolled back after the test?
Here's how I create the TestServer:
TestServer = new TestServer(TestServer.CreateBuilder()
.UseStartup<Startup>());
The Startup that's referenced there is the Startup from the web app project. In the ConfigureServices method in that Startup class I add EF like this:
services.AddEntityFramework()
.AddSqlServer()
.AddDbContext<TrailsDbContext>(options => options.UseSqlServer(Configuration["Data:DefaultConnection:ConnectionString"]));
I could pull the DbContext back out of the service collection and store a static reference on the Startup class, but that seems pretty hacky. Is there any way I can instantiate the DbContext where I create the TestServer and somehow have the web app use that instead of the one in the Startup class?
Edit: I have tried instantiating another instance of the DbContext where I create the TestServer and using that context to delete and recreate the database before each test, but that adds about 10 seconds to each test's run time.
Some advice: the simplest approach would be to destroy the test database at the end of each test run and recreate it for the next one. This ensures there is no lingering test-to-test contamination.
But since you asked how, this can be done by extending xUnit, which allows you to define custom test cases and test runners. A complete solution is hard to fit into an SO answer. The simplest option uses ambient transactions. (Danger! Ambient transactions can be tricky.) xUnit has a sample of a custom BeforeAfterTestAttribute that rolls back a transaction: https://github.com/xunit/samples.xunit/tree/master/AutoRollbackExample. To use ambient transactions, turn off the default EF setting that throws if an ambient transaction is present (optionsBuilder.UseSqlServer().SuppressAmbientTransactionWarning()).
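A rough sketch of such an attribute, along the lines of the linked AutoRollbackExample (an approximation of the sample, not a drop-in copy):

    using System.Reflection;
    using System.Transactions;
    using Xunit.Sdk;

    // Opens an ambient TransactionScope before each test and disposes it without
    // calling Complete(), so everything the test wrote to the database is rolled back.
    public class AutoRollbackAttribute : BeforeAfterTestAttribute
    {
        private TransactionScope _scope;

        public override void Before(MethodInfo methodUnderTest)
        {
            _scope = new TransactionScope();
        }

        public override void After(MethodInfo methodUnderTest)
        {
            _scope.Dispose(); // no Complete() call => rollback
        }
    }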
A more complicated, but better, solution is to override XunitTestCaseRunner and inject a transaction into each test case, making sure to roll it back at the conclusion of each test.
Also, the EF docs provide a sample of using the InMemory provider for testing, which you may find useful:
"Testing In Memory: EF Core Docs"
I am about to retro-fit a suite of unit tests onto a new MVC4 app. Because nearly all the code in the EF data project is copied straight from code generated by the VS2012 EF reverse-engineering tool, I have decided to skip unit tests for that part of the application unless I can somehow generate them automatically. I have no business logic here, and I would like to concentrate my efforts first on ensuring better QA on the business side. But I would like to know how one goes about, first, TDD, and second, unit testing in general here.
Let's assume I don't have to, or want to, mock the database yet. I have often been quite happy unit testing against a test copy of the database before, but with a more conventional, home-rolled ORM.
So, do I start with a test that instantiates my derived DbContext, then derive a DbContext until that test passes? Then test for instantiating an entity and create that entity, going on to test for a DbSet of those entities, which will also include checking that the table is created? All is still good and well, if bloody laborious, but my head explodes as soon as I start thinking of even a hint of testing my fluent mapping classes for all my entities. What now?
Testing against a database is not unit testing; it is integration testing, and integration testing usually doesn't follow the granularity of unit testing. Why is it not unit testing? Because a unit test exercises a single, self-contained unit; all external dependencies are faked. When your test spans both your code and the database, it tests the dependency as well, so it is an integration test.
All EF-dependent code should be covered by integration tests; it doesn't make sense to unit test Microsoft's code. Take your question about mapping as an example. A correct unit test for mapping does something like this:
Preparation: prepare a compiled model from your entity mapping configuration
Execution: create a DbContext from the compiled model and get the metadata workspace from the context
Validation: assert that the metadata workspace contains your mapped entity
Now you can repeat a similar test for every property you want mapped in that entity.
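A rough sketch of such a test, building the EF6 model without touching a database and inspecting it directly (CustomerMap and Customer are placeholders for your fluent mapping class and entity):

    using System.Data.Entity;
    using System.Data.Entity.Infrastructure;
    using System.Linq;
    using NUnit.Framework;

    [TestFixture]
    public class MappingTests
    {
        [Test]
        public void Customer_entity_is_present_in_the_conceptual_model()
        {
            // Build the model from the fluent mapping configuration only.
            var builder = new DbModelBuilder();
            builder.Configurations.Add(new CustomerMap());

            var model = builder.Build(new DbProviderInfo("System.Data.SqlClient", "2012"));

            // Assert the entity was mapped into the conceptual model.
            Assert.That(model.ConceptualModel.EntityTypes.Any(e => e.Name == "Customer"), Is.True);
        }
    }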
That said, this is obviously framework code which should already work; such tests are the job of the people developing the framework.
In your case, simply write integration tests against a local database that load, save, update, and delete an entity, and assert the expectations you have for these operations. If anything in the mapping is wrong, at least one of these tests will fail.
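A minimal sketch of such an integration test; MyDbContext, Customer, and the connection string name are placeholders for your own context and entity:

    using System.Linq;
    using NUnit.Framework;

    [TestFixture]
    public class CustomerPersistenceTests
    {
        [Test]
        public void Customer_can_be_saved_loaded_updated_and_deleted()
        {
            using (var context = new MyDbContext("TestDbConnection"))
            {
                var customer = new Customer { Name = "Test" };

                // Create
                context.Customers.Add(customer);
                context.SaveChanges();

                // Read
                var loaded = context.Customers.Single(c => c.Id == customer.Id);
                Assert.That(loaded.Name, Is.EqualTo("Test"));

                // Update
                loaded.Name = "Updated";
                context.SaveChanges();
                Assert.That(context.Customers.Single(c => c.Id == customer.Id).Name, Is.EqualTo("Updated"));

                // Delete
                context.Customers.Remove(loaded);
                context.SaveChanges();
                Assert.That(context.Customers.Any(c => c.Id == customer.Id), Is.False);
            }
        }
    }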
I have a Spring-based system that uses Hibernate Search 3.4 (on top of Hibernate 3.5.4). Integration tests are managed by Spring, with the @Transactional annotation. At the moment the test data (entities that are to be indexed) is loaded by a Liquibase script; we use its Spring integration. It's very inconvenient to manage.
My new solution is to have test data defined as Spring beans and wire them as Resources, by name. This part works.
I tried to have these beans persisted and indexed in the setUp method of my test cases (and in the test methods themselves), but I failed. They get into the DB fine, but I can't get them indexed. I tried calling index() on FullTextEntityManager (with flushToIndexes), and I tried createIndexer().startAndWait().
What else can I do?
Or maybe there is a better option for testing HS?
Thank you in advance.
My new solution is to have test data defined as Spring beans and wire them as Resources, by name. This part works.
This sounds like a strange setup for a unit test. To be honest, I am not quite sure how you do this.
In Hibernate Search itself, an in-memory database (H2) is used together with a Lucene RAM directory. The benefit of such a setup is that it is fast and makes it easy to avoid dependencies between tests.
I tried to have these beans persisted and indexed in the setUp method of my test cases (and in the test methods themselves), but I failed. They get into the DB fine, but I can't get them indexed.
If automatic indexing is enabled and the test data is persisted within a transaction, it should work. A common mistake in combination with Spring is to use the wrong transaction manager. The Hibernate Search forum has a lot of threads around this, for example this one: https://forum.hibernate.org/viewtopic.php?f=9&t=998155. Since you are not giving any concrete configuration or code examples, it is hard to give more specific advice.
I tried createIndexer().startAndWait()
That is also a good approach. I would recommend it if you want to insert not just a couple of test entities, but a whole set of data. In that case it can make sense to use a framework like DbUnit to insert the test data and then manually index it; createIndexer().startAndWait() is the right tool for that. Extracting all this loading/persisting/indexing functionality into a common test base class is the way to go. The base class can also be responsible for all the Spring bootstrapping.
Again, to get more specific feedback, you will have to refine your question.
I have a completely different approach. When I write any queries, I want to write a complete test suite, but data creation has always been a pain (special mention to when a test customer record gets corrupted and your whole test suite breaks).
To solve this I created Random-JPA. It's simple and easy to integrate. The whole idea is that you create fresh data and test against it.
You can find the full documentation here.
I have read many articles about unit testing.
Most of the articles say that we should not use more than one mock object in a test, but I can't understand why.
Sometimes we really do need more than one mock object in a test.
You can have more than one mock in a unit test depending on the context.
However, I think what 'the articles' might be hinting at is the prevention of over-mocking. When a unit test mocks out all collaborators, you leave a door open: the scenario might still fail when the real collaborators are substituted. By minimizing the number of mocks and using real collaborators as far as feasible, you minimize that risk.
High-coupling alert: if you find yourself having to mock lots of collaborators in order to write a unit test, it might be a design smell indicating that you have high coupling.
You should add as many mocks as necessary to isolate your class under test. You need a mock for every dependency that should not be part of the test.
Sometimes you put two or three classes together in a test, for simplicity, because they build something like a component and are highly coupled. Everything else should be mocked.
I know this "best practice" to have only one mock and also do not understand it. In our unit tests, we have many mocks, some environmental mocks are set up by the test framework I wrote (eg. TransactionService, SecurityService, SessionService). There is only one thing to consider, as Gishu already mentioned in his answer, many mocks are an indication of high dependency. It's up to you to consider when it is too much. We have many small interfaces, which requires many mocks in tests.
To turn the answer around, you should not mock a dependency when:
It is a highly coupled part of the class under test, like an inner class, private class etc.
It is a common .NET framework class, like a collection and the like
You want to write an integration test to test exactly the interaction with that class. (You still mock everything else and you still have unit tests for every involved class in isolation.)
It is just too expensive to mock a certain class. Be careful before deciding something is too expensive: mocks seem hard to set up, but turn out to be a breeze compared to the maintainability problems you'll have when using real classes. However, there are some frameworks and technologies that are not implemented against interfaces and are very hard to mock. If it is too expensive to put these framework classes behind your own interface, you need to live with them in the tests.
I'm not sure what articles you're referring to, but I typically have one mock object per dependency for the class under test.
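For illustration, a small sketch with one mock per dependency, using Moq and invented types (IPaymentGateway, IEmailSender, and OrderService are placeholders, not an existing API):

    using Moq;
    using NUnit.Framework;

    [TestFixture]
    public class OrderServiceTests
    {
        [Test]
        public void PlaceOrder_charges_the_card_and_sends_a_confirmation()
        {
            // One mock per dependency of the class under test.
            var paymentGateway = new Mock<IPaymentGateway>();
            var emailSender = new Mock<IEmailSender>();
            paymentGateway.Setup(p => p.Charge(100m)).Returns(true);

            var service = new OrderService(paymentGateway.Object, emailSender.Object);

            service.PlaceOrder(customerId: 42, amount: 100m);

            paymentGateway.Verify(p => p.Charge(100m), Times.Once);
            emailSender.Verify(e => e.SendConfirmation(42), Times.Once);
        }
    }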