How is state shared between NUnit tests?

There are some configuration settings that most of the tests in an integration test suite will need to share. These include database connection strings and similar items. It's very slow to fetch these from the system where they are stored. I'm planning to create a Fixture class, marked with the SetUpFixture attribute, in the root namespace of the assembly, and to use the OneTimeSetUp attribute on the method that fetches the config data. That ensures it runs only once, before any of the tests start.
I can expose a static property on the same Fixture class so that individual tests can read the config items via Fixture.ConfigSettings. This seems to work fine in preliminary testing. Since the tests only read these settings, there shouldn't be any cross-test interference.
Is an arrangement like this a common way to handle this situation with NUnit? Are there other built-in NUnit features or recommended patterns that might be helpful?

Yes, this will work. You should be clear, however, that a SetUpFixture and a TestFixture serve different purposes: do not use both attributes on the same class, and do not inherit one from the other.
As you noted, only static properties will work in this situation and the values should not change once set.
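For illustration, here is a minimal sketch of the arrangement described in the question. It assumes NUnit 3; the ConfigSettings class and the environment-variable lookup are placeholders for whatever system actually stores the values:

```csharp
using System;
using NUnit.Framework;

// Placeholder for the shared configuration data.
public class ConfigSettings
{
    public string ConnectionString { get; set; }
}

// Lives in the root namespace (or no namespace) of the test assembly,
// so its OneTimeSetUp runs once before any test in the assembly.
[SetUpFixture]
public class Fixture
{
    // Written once during setup; tests only ever read it.
    public static ConfigSettings ConfigSettings { get; private set; }

    [OneTimeSetUp]
    public void FetchConfig()
    {
        // Placeholder fetch; substitute the real (slow) lookup here.
        ConfigSettings = new ConfigSettings
        {
            ConnectionString = Environment.GetEnvironmentVariable("TEST_DB_CONNECTION")
        };
    }
}

[TestFixture]
public class SomeIntegrationTests
{
    [Test]
    public void CanReadSharedSettings()
    {
        Assert.That(Fixture.ConfigSettings.ConnectionString, Is.Not.Null);
    }
}
```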

Related

NUnit Data-Driven & Azure DevOps association

How do I associate a data-driven NUnit test method with a test case in Azure DevOps (ADO)?
There is a test case with multiple iterations, and the test automation framework (TAF) uses NUnit. The problem is that even though NUnit supports data-driven test methods, when you add the association to the test case, each iteration is associated separately. I cannot afford to create a test case for each iteration, and I would like to run the tests through the association. What could you suggest?
The only idea I have is to create a wrapper test method that calls each iteration method internally. The disadvantage is that if one iteration fails and you want to re-run it, you have to re-run the whole test, which is not good for performance.
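To make the wrapper idea concrete, here is a rough sketch; the fixture, the payment methods, and the test body are hypothetical, and only the wrapper method would be associated with the ADO test case:

```csharp
using NUnit.Framework;

[TestFixture]
public class CheckoutTests
{
    // The data-driven iterations still exist as ordinary test cases for local runs.
    [TestCase("visa")]
    [TestCase("mastercard")]
    [TestCase("paypal")]
    public void Checkout_Succeeds_For(string paymentMethod)
    {
        // Hypothetical test body; each call is one "iteration".
        Assert.That(paymentMethod, Is.Not.Empty);
    }

    // Single wrapper method that can be associated with the ADO test case.
    // Drawback (as noted above): if one iteration fails, all of them must be re-run.
    [Test]
    public void Checkout_AllIterations()
    {
        Assert.Multiple(() =>
        {
            Checkout_Succeeds_For("visa");
            Checkout_Succeeds_For("mastercard");
            Checkout_Succeeds_For("paypal");
        });
    }
}
```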

Runtime creation and persistence of executable model rules

We have the need to create and persist rules at runtime. The goal is to create the rules, persist them and then reload them at a later point in time. Using bits and pieces of code cobbled together from Drools unit tests, I can successfully create rules from DRL strings and then persist them to a kjar. And using the new KieBuilder.buildAll overload, the kjar (presumably) is built using the new executable model. All of that seems to work.
But what I really want to do is eliminate the DRL strings entirely and create my rules at runtime using the flow or pattern DSL. Again, using example code, I can create those rules at runtime, and execute them in a session. What I can’t seem to do is actually persist them as a kjar (or any other form that I can devise). It seems that the end result of building a rule using flow or pattern DSL is a KieBase. And there seems to be no way to serialize or persist a KieBase. At some point in the process, I need to be able to getBytes() in order to persist the KieBase.
For example, I can create the KieBase like this:
Rule rule = getRule();
ModelImpl model = new ModelImpl().addRule( rule );
KieBase kieBase = KieBaseBuilder.createKieBaseFromModel( model );
But I then need to be able to persist that newly created kieBase so it can be reloaded later. And there doesn't seem to be a workable way to do that.
Any suggestions? I’m using 7.7.0 for my testing.
UPDATE 2018-07-23
Let me clarify my original question with additional information. There are really two use cases where I’d like to use the new executable model to author rules in Java: 1) at design time; 2) at run time. Each use case has slightly different requirements, and so far I’ve been unsuccessful in getting either one to work completely.
For the 1st use case, at design time I need the ability to write rules in Java (using the new pattern DSL) and then save those rules to a kjar. Once there, they can be loaded into a KieServer instance and executed. Purportedly the Kie Maven Plugin can do this, and I’ve attempted to follow the instructions given in the drools doc (for example section 2.2.1.4 of the 7.8.0 doc). But those instructions appear to be incomplete, and there just aren’t any examples of how to accomplish this. What file or files need to be added to the resources\META-INF folder to identify the rules? How are the rules actually exposed in the Java code? Do they need to be in a particular type of class? Are the rules returned from public methods? How are those methods identified as having rules? Are any Java annotations needed to make this work?
All of those questions would be answered for me if there was just one simple end-to-end example that demonstrated how to author a rule in Java, AND create the kjar containing that rule.
For the 2nd use case (actually the more important of the two for me), I need the ability to dynamically create rules at runtime. Based on configuration data within our application, multiple rules need to be programmatically created and ultimately loaded into a KieServer instance. My assumption was that the process would be similar to use case #1 where a kjar could be programmatically created and then loaded into the KieServer. And remember that in this case, the Maven Plugin isn’t in the picture since this is all being done at runtime, not design time. Using the examples for the executable model (primarily the unit tests), I can author the rules in Java, and I can execute them. But I’ve found no way to actually build a kjar from them, or to directly load them into a KieServer.
To execute the rules, they have to be in a specific Java file, and the kjar needs to have a file in the META-INF folder stating where the rules actually are.
Take a look at what the Maven plugin is doing here:
https://github.com/kiegroup/droolsjbpm-integration/blob/master/kie-maven-plugin/src/main/java/org/kie/maven/plugin/GenerateModelMojo.java#L165
There will probably be an easier way in the future, but I can't tell you when.
Thank you for using the bleeding edge features, and good luck with that.

Assert.AreEqual unit testing for DbContext entities

I wish to unit test that my business logic is loading the correct data, by loading an entity via the business logic and comparing it to an entity loaded directly from the DbContext.
Assert.AreEqual fails, I'm guessing, because the entities are loaded as tracked.
I thought that I could possibly use AsNoTracking(), but it didn't work.
Is there a way of "unwrapping" the entity from entity framework to a POCO?
I've read about disabling proxy creation, but is this the only option?
I'm hoping there is something similar (although I realise it's a completely different concept) to ko.utils.unwrapObservable() in the Knockout JavaScript library.
That is a strange integration test (it is not a unit test at all, because it uses the database). It should be enough to simply define a static expectation instead of loading it again from the database. Dynamic tests are more error-prone and can hide issues.
To make it work you must override Equals to compare data, not references. Disabling proxy creation will not work, because you will still have one reference from your business logic and a different reference from the tested context (unless you share the context, but in that case the test would be even stranger).
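A minimal sketch of the Equals-override approach, using a hypothetical Customer entity; the point is that Assert.AreEqual then compares the data rather than the two different references:

```csharp
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }

    // Value-based equality: two separately loaded (or proxied) instances
    // with the same data now count as equal.
    public override bool Equals(object obj)
    {
        var other = obj as Customer;
        return other != null
            && Id == other.Id
            && Name == other.Name;
    }

    public override int GetHashCode()
    {
        return Id.GetHashCode();
    }
}
```

Because EF dynamic proxies derive from the entity class, the cast in Equals still succeeds for proxied instances, so Assert.AreEqual(expected, actual) works whether or not proxy creation is disabled.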

How to manage test data for Hibernate Search integration tests

I have a Spring-based system that uses Hibernate Search 3.4 (on top of Hibernate 3.5.4). Integration tests are managed by Spring, with the @Transactional annotation. At the moment test data (the entities that are to be indexed) is loaded by a Liquibase script; we use its Spring integration. It's very inconvenient to manage.
My new solution is to have test data defined as Spring beans and wire them as Resources, by name. This part works.
I tried to have these beans persisted and indexed in the setUp method of my test cases (and in the test methods themselves), but I failed. They get into the DB fine, but I can't get them indexed. I tried calling index() on FullTextEntityManager (with flushToIndexes), and I tried createIndexer().startAndWait().
What else can I do?
Or maybe there is a better way of testing Hibernate Search?
Thank you in advance.
My new solution is to have test data defined as Spring beans and wire them as Resources, by name. This part works.
This sounds like a strange setup for a unit test. To be honest, I am not quite sure how you do this.
In Hibernate Search itself, an in-memory database (H2) is used together with a Lucene RAM directory. The benefit of such a setup is that it is fast and makes it easy to avoid dependencies between tests.
I tried to have these beans persisted and indexed in setUp method of my test cases (and in test methods themselves) but I failed. They get into DB fine but I can't get them indexed.
If automatic indexing is enabled and the persisting of the test data occurs within a transaction, it should work. A common mistake in combination with Spring is to use the wrong transaction manager. The Hibernate Search forum has a lot of threads around this, for example this one: https://forum.hibernate.org/viewtopic.php?f=9&t=998155. Since you are not giving any concrete configuration or code examples, it is hard to give more specific advice.
I tried createIndexer().startAndWait()
That is also a good approach. I would recommend it if you want to insert not just a couple of test entities, but a whole set of data. In this case it can make sense to use a framework like DbUnit to insert the test data and then manually index it; createIndexer().startAndWait() is the right tool for that. Extracting all this loading/persisting/indexing functionality into a common test base class is the way to go. The base class can also be responsible for all the Spring bootstrapping.
Again, to give more specific feedback you have to refine your question.
I have a completely different approach. When I write any queries, I want to write a complete test suite, but data creation has always been a pain (special mention to when the test customer gets corrupted and your whole test suite breaks).
To solve this I created Random-JPA. It's simple and easy to integrate. The whole idea is that you create fresh data for each test.
You can find the full documentation here.

Can we use more than one mock object in a unit test?

I have read many articles about unit testing.
Most of the articles say that we should not use more than one mock object in a test, but I can't understand why.
Sometimes we really need more than one mock object in a test.
You can have more than one mock in a unit test depending on the context.
However, I think what 'the articles' might be hinting at is the prevention of over-mocking. When a unit test mocks out all collaborators, you leave the door open: the scenario might still fail when you substitute in the real collaborators. By minimizing the number of mocks and using real collaborators as far as feasible, you minimize that risk.
High-coupling alerts: if you find yourself having to mock lots of collaborators in order to write a unit test, it might be a design smell indicating that you have high coupling.
You should add as many mocks as necessary to isolate your class under test. You need a mock for every dependency that should not be part of the test.
Sometimes you put two or three classes together in a test, for simplicity, because they build something like a component and are highly coupled. Everything else should be mocked.
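As a sketch of "one mock per dependency that should not be part of the test": the OrderProcessor, its two collaborators, and the use of Moq here are all illustrative assumptions, not something from the original answers:

```csharp
using Moq;
using NUnit.Framework;

// Hypothetical collaborators; stand-ins for whatever the class under test depends on.
public interface IOrderRepository { decimal GetOrderTotal(int orderId); }
public interface INotificationService { void Notify(string message); }

// Hypothetical class under test with two dependencies.
public class OrderProcessor
{
    private readonly IOrderRepository _repository;
    private readonly INotificationService _notifications;

    public OrderProcessor(IOrderRepository repository, INotificationService notifications)
    {
        _repository = repository;
        _notifications = notifications;
    }

    public void NotifyIfLargeOrder(int orderId)
    {
        if (_repository.GetOrderTotal(orderId) > 1000m)
            _notifications.Notify($"Order {orderId} is large.");
    }
}

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void NotifyIfLargeOrder_SendsNotification_WhenTotalExceedsThreshold()
    {
        // One mock per dependency that is not itself under test.
        var repository = new Mock<IOrderRepository>();
        var notifications = new Mock<INotificationService>();
        repository.Setup(r => r.GetOrderTotal(42)).Returns(1500m);

        var processor = new OrderProcessor(repository.Object, notifications.Object);
        processor.NotifyIfLargeOrder(42);

        notifications.Verify(n => n.Notify(It.IsAny<string>()), Times.Once());
    }
}
```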
I know this "best practice" of having only one mock and also do not understand it. In our unit tests, we have many mocks; some environmental mocks are set up by the test framework I wrote (e.g. TransactionService, SecurityService, SessionService). There is only one thing to consider: as Gishu already mentioned in his answer, many mocks are an indication of high dependency. It's up to you to decide when it is too much. We have many small interfaces, which requires many mocks in tests.
To turn your answer around, you should not mock a dependency when:
It is a highly coupled part of the class under test, like an inner class, private class etc.
It is a common .NET Framework class, such as a collection.
You want to write an integration test to test exactly the interaction with that class. (You still mock everything else and you still have unit tests for every involved class in isolation.)
It is just too expensive to mock a certain class. Be careful when deciding that something is too expensive: mocks seem hard to set up, but turn out to be a breeze compared to the maintainability problems you'll have when using real classes. There are, however, some frameworks and technologies that are not implemented against interfaces and are very hard to mock. If it is too expensive to put these framework classes behind your own interface, you need to live with them in the tests.
I'm not sure what articles you're referring to, but I typically have one mock object per dependency for the class under test.