Can we use more than one mock object in a unit test?

I have read many articles about unit testing.
Most of those articles say that we should not use more than one mock object in a test, but I can't understand why.
Sometimes we really need more than one mock object in a test.

You can have more than one mock in a unit test depending on the context.
However, I think what 'the articles' might be hinting at is the prevention of over-mocking. When a unit test mocks out all collaborators, you leave the door open for the scenario to fail once real collaborators are substituted in. By minimizing the number of mocks and using real collaborators wherever feasible, you minimize that risk.
High-coupling alert: if you find yourself having to mock lots of collaborators in order to write a unit test, it might be a design smell indicating that you have high coupling.
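For what it's worth, here is a minimal sketch of that idea using NUnit and Moq. All of the types (OrderProcessor, DiscountCalculator, IPaymentGateway, Order) are invented for the example: the slow external collaborator is mocked, while the cheap, deterministic one is used for real.

using Moq;
using NUnit.Framework;

// Hypothetical production types, defined here only so the sketch is self-contained.
public interface IPaymentGateway { void Charge(decimal amount); }

public class DiscountCalculator
{
    public decimal Apply(decimal total) { return total * 0.9m; }   // 10% discount, purely illustrative
}

public class Order { public decimal Total { get; set; } }

public class OrderProcessor
{
    private readonly DiscountCalculator _calculator;
    private readonly IPaymentGateway _gateway;

    public OrderProcessor(DiscountCalculator calculator, IPaymentGateway gateway)
    {
        _calculator = calculator;
        _gateway = gateway;
    }

    public void Process(Order order) { _gateway.Charge(_calculator.Apply(order.Total)); }
}

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void Charges_the_discounted_total()
    {
        var calculator = new DiscountCalculator();   // real collaborator: cheap, deterministic, no I/O
        var gateway = new Mock<IPaymentGateway>();    // mocked collaborator: would hit an external system

        new OrderProcessor(calculator, gateway.Object).Process(new Order { Total = 100m });

        gateway.Verify(g => g.Charge(90m), Times.Once());
    }
}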

You should add as many mocks as necessary to isolate your class under test. You need a mock for every dependency that should not be part of the test.
Sometimes you put two or three classes together in a test, for simplicity, because they build something like a component and are highly coupled. Everything else should be mocked.
I know this "best practice" of having only one mock, and I don't understand it either. In our unit tests we have many mocks; some environmental mocks are set up by the test framework I wrote (e.g. TransactionService, SecurityService, SessionService). There is only one thing to consider: as Gishu already mentioned in his answer, many mocks are an indication of high dependency. It's up to you to decide when it is too much. We have many small interfaces, which require many mocks in tests.
To turn that around, you should not mock a dependency when:
It is a highly coupled part of the class under test, like an inner class or a private class.
It is a common .NET Framework class, like a collection.
You want to write an integration test to test exactly the interaction with that class. (You still mock everything else and you still have unit tests for every involved class in isolation.)
It is just too expensive to mock a certain class. Be careful before deciding something is too expensive: mocks may seem hard to set up, but they turn out to be a breeze compared to the maintainability problems you'll have when using real classes. Still, there are some frameworks and technologies that do not program against interfaces and are very hard to mock. If it is too expensive to put such framework classes behind your own interface (see the sketch below), you need to live with them in the tests.
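As a rough illustration of that last point, the seam might look like this. IMailSender and SmtpMailSender are made-up names, and System.Net.Mail.SmtpClient is just one example of a framework class you might prefer to hide behind your own interface:

using System.Net.Mail;

// Your own seam: trivial to mock in unit tests.
public interface IMailSender
{
    void Send(string to, string subject, string body);
}

// Thin adapter over the framework class; only this class touches SmtpClient.
public class SmtpMailSender : IMailSender
{
    public void Send(string to, string subject, string body)
    {
        var client = new SmtpClient("localhost");                 // host is a placeholder
        client.Send("noreply@example.com", to, subject, body);    // addresses are placeholders
    }
}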

I'm not sure what articles you're referring to, but I typically have one mock object per dependency for the class under test.
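For example (Moq syntax, with hypothetical IOrderRepository, INotificationService, and OrderService types standing in for your own), a class with two dependencies simply gets two mocks, one per dependency:

var repository = new Mock<IOrderRepository>();
var notifier = new Mock<INotificationService>();
var service = new OrderService(repository.Object, notifier.Object);
// ...exercise 'service' and verify only the interactions that matter to this test.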

Related

How is state shared between NUnit tests?

There are some configuration settings that most of the tests in an integration test suite will need to share. These include database connection strings and similar items. It's very slow to fetch these from the system where they are stored. I'm planning to create a Fixture class, marked with the SetUpFixture attribute and placed in the root namespace of the assembly, and use the OneTimeSetUp attribute on the method that gets the config data. That will ensure it runs only once, before any of the tests start.
I can use a static property on the same Fixture class, and then individual tests can read the config items with Fixture.ConfigSettings. This seems to work fine in some preliminary testing. Since the tests only read these settings, there shouldn't be any cross-test interference.
Is an arrangement like this a common way to handle this situation with NUnit? Are there other built-in NUnit features or recommended patterns that may be helpful?
Yes, this will work. You should be clear, however, that a SetUpFixture and a TestFixture serve different purposes. Do not use both attributes on the same class, and do not inherit one from the other.
As you noted, only static properties will work in this situation and the values should not change once set.
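A minimal sketch of the arrangement described, using NUnit 3 attribute names (the ConfigSettings class and the hard-coded connection string are placeholders for wherever your settings actually come from):

using NUnit.Framework;

public class ConfigSettings                      // placeholder for whatever your settings object is
{
    public string DatabaseConnectionString { get; set; }
}

[SetUpFixture]                                   // placed in the assembly's root namespace so it covers all tests
public class Fixture
{
    public static ConfigSettings ConfigSettings { get; private set; }

    [OneTimeSetUp]
    public void FetchSharedConfig()
    {
        // Runs exactly once, before any test in the assembly starts.
        // Replace this with the real (slow) call to wherever the settings live.
        ConfigSettings = new ConfigSettings { DatabaseConnectionString = "Server=...;Database=..." };
    }
}

// In a test:
// var connectionString = Fixture.ConfigSettings.DatabaseConnectionString;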

Deep class inheritance hierarchy -- bad idea?

Hoping a grandmaster can shed some light. The very high-level overview: I am no beginner to coding, but still new to OOP. This set of message classes is at the heart of a large simulation application we're writing, and I don't want to do it stupidly; this interface cuts the application in half, from sequencer to executer and vice versa.
My question is whether or not it's a bad idea to have an inheritance hierarchy this deep (the diagram is not yet fleshed out; it might go 5 or 6 levels deep in the end), as opposed to having some of the child classes just hold a directed association to their parent class instead of inheriting from it.
I've read that a deep inheritance hierarchy is not a good idea, and that if a child class is inheriting simply to have the parent's data, then you should simply include the parent as data in the child, but I'm having a hard time wrapping my head around why. What bad thing is going to happen to us if I decide to make an inheritance hierarchy 7 levels deep or something like that? Clearly there's a small performance hit, and changing things at the top of the hierarchy is going to have huge ripples throughout the app, but other than that I don't see an issue. As an aside, I care little about minor differences in performance.
(bonus question: is there an off-the-shelf package that handles this kind of stuff? We have most of the low-level physical simulations handled, but the sequencing program is something we're going to have to write ourselves. I just have this suspicion that what I've laid out is very similar to what about 10,000 simulation developers before me have done.)
(bonus question #2: any masters of both simulation systems and OOP programming, that would not hate living in Los Angeles? We're hiring.)
that if a child class is inheriting simply to have the parent's data
This is a bad idea. There's this understanding that you define base classes as the most generic of contracts that a set of (concrete) classes are going to honor. This typically means that your contract is about behavior and not implementation.
What bad thing is going to happen to us if I decided to make an inheritance hierarchy 7-deep or something like that?
The major issues here are mundane:
Fragile base classes (changes to base are a nightmare for the derived)
Increased coupling (with too many base classes comes tight coupling)
Encapsulation weakens
Testing issues (leaf-level overridden methods can't always be tested in a way that reproduces end-user behavior correctly, because of the chains of calls up and down the hierarchy)
Maintenance (comes from strong coupling)
(You may want to peruse this paper on why Ada isn't popular; see in particular Item 6, paragraph 6.)
Is there an off-the-shelf package that handles this kind of stuff?
I'm not sure what you are looking for, but if you're looking for an automated hierarchy simplifier then I don't know of any. Also if such a package exists it'll be highly dependent on your language of choice and you haven't mentioned one.
Note that most of the time such issues can be resolved by looking at alternatives like aggregation, traits, or dependency injection. These are design-time issues and are typically (IMO) better ironed out on a whiteboard than with a compiler and millions of LOC.
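As a whiteboard-level illustration of the aggregation alternative (all names are invented for the example), a specialized message can hold the generic parts as data instead of inheriting them through several levels:

using System;

// Instead of Message -> CommandMessage -> TimedCommandMessage -> ..., the
// specialized class has-a header rather than is-a message subtype.
public class MessageHeader
{
    public int SequenceNumber { get; set; }
    public DateTime Timestamp { get; set; }
}

public class TimedCommandMessage
{
    public MessageHeader Header { get; private set; }
    public string Command { get; set; }

    public TimedCommandMessage()
    {
        Header = new MessageHeader();
    }
}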
Seeing this question quite late, but I have had many thoughts on this and have been bitten by deep inheritance hierarchies. One reason they are bad is that you will inevitably get the classification wrong as you specialize the many subclasses. However, once you have the class structure in place, it will be hard to change, because doing so would break client code.
I blogged about this here.
This is an old question, but it's still an active issue in software development, and I wanted to add an opinion that may help.
Maintenance overhead is hard to estimate when you touch base classes that are wired up with dependency injection. This is a major drawback that recently affected our three-level-deep inheritance structure.
Also, if your base is a template, expect to violate the I and D of SOLID if you have too many children hanging off just one derived parent across three levels, for example.
Generally, just to access data, I'd choose an adequate design pattern, or pass a pointer to the data if that doesn't violate SOLID. Depending on whether you only read or also write, I'd also avoid getters and setters, so as not to end up with quasi-classes. It's a rare case for child members to default to 'protected', and I think a structure like that is a candidate for being flawed by design.

Perl DAL Design Questions

Recently I've been working on some Perl projects, and I'm a very novice Perl programmer. I've been experimenting with DBIx::Class and so far I'm really pleased with the flexibility and the ease of use. I'm curious, though. I come from a .NET background, and it seems like we spend a lot of time abstracting our DAL to a certain degree. Is this a good idea with a language like Perl?
Where I want to get to shortly is the ability to start mocking my DAL so I can write unit tests for tasks. Right now, though, I'm struggling with what the overall structure and design of the application should look like.
Re: Relationship of the ORM within the application...
Hopefully this is the kind of answer you are looking for...
With most web app frameworks in the "scripting" world (i.e. Perl, Ruby, Python, PHP), most of the time I've seen the business logic implemented at the ORM object level. E.g. in a Rails app it's at the ActiveRecord level; if you are using DBIx::Class, it would be at the Result-class level.
More concretely, in the case of DBIx::Class, if you have a table named VENDOR there would be a class called MySchema::Result::Vendor which represents a single row in the table VENDOR. Simply add your business methods to this class.
One disadvantage of this approach is that it ties your business logic to the ORM class, which can make (unit) testing more difficult. One solution to this is to use a lightweight database for unit tests (e.g. SQLite); an ORM like DBIx::Class will facilitate switching between the two. Of course, this won't work if you rely on SQL features which are not implemented in SQLite.
Another approach is to place your business logic methods into a Moose role. Then those methods can be composed into either the DBIx::Class Result class or into a mock object for testing. I can elaborate with an example if you'd like.
One big assumption of the above is that your business object = one row in the database. If this is not the case (i.e. your business object spans more than one table), then you'll probably want to create a "shell" or container object which has each of the constituent ORM objects as instance members. Fortunately, Moose has a nice facility for delegating methods (search for Moose delegation and the handles attribute of instance member declarations), so it is relatively easy to make a composite business object out of two or more ORM objects. Again, I can give you an example of this if you'd like.
HTH
I used to work on Perl projects for the web long ago. But after working with things such as Django, Perl's tools like DBI now look rather rudimentary and outdated to me. Have a look at the Django ORM, for example: it's elegant and very productive to use, and you can bypass it if your query is too complex or the ORM gets in the way.
These days I'd go with Python or Ruby for that kind of project.
For one-liners, small text parsing, or sysadmin stuff I still love to use small Perl snippets. But I'm more into DRY than TMTOWTDI for anything more than a few lines of code these days.

Business logic in Entity Framework POCOs using partial classes?

I have business logic that could either sit in a business logic/service layer or be added to new members of an extended domain class (EF T4 generated POCO) that exploits the partial class feature.
So I could have:
a) bool OrderBusiness.OrderCanBeCancelledOnline(Order order) .. or (IOrder order)
or
b) bool order.CanBeCancelledOnline() .. i.e. it is the order itself knows whether or not it can be cancelled.
For me, option b) is more OO. However, option a) allows more complex logic to be applied, e.g. using other domain objects or services.
At the moment I have a mix of both and this doesn't seem elegant.
Any guidance on this would be much appreciated!
The key thing about OO for me is that you tell objects to do things for you. You don't pull attributes out and make the decisions yourself (in a helper class or elsewhere).
So I agree with your assertion about option b). Since you require additional logic, there's no harm in performing an operation on the object whilst passing references to additional helper objects such that they collaborate. Whether you do this at the time of the operation itself, or pre-populate your order object with those collaborating entities is very much dependent upon your current situation.
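To make that concrete, option b) with a collaborating object passed in might look roughly like this (the ICancellationPolicy collaborator and its method are invented for the example; Order is your EF-generated POCO extended via a partial class):

public interface ICancellationPolicy
{
    bool AllowsCancellation(Order order);
}

public partial class Order   // partial class alongside the EF T4-generated POCO
{
    // The order decides for itself, but the collaborating policy is handed to it.
    public bool CanBeCancelledOnline(ICancellationPolicy policy)
    {
        return policy.AllowsCancellation(this);
    }
}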
You can also use extension methods on the POCOs to wrap your BLL methods, so you can keep using your current BLLs.
In C#, something like:
public static class OrderBusiness   // everything must be static: both the class and the method
{
    public static bool CanBeCancelledOnline(this Order order)   // note the 'this' on the first parameter
    {
        // ...your existing business logic goes here...
        return true;   // placeholder
    }
}
And now you can call order.CanBeCancelledOnline().
This is likely to depend on the complexity of your application and does require some judgement that comes with experience. The short answer is that if your project is anything more than a pretty simple one then you are best off putting your logic in the domain classes.
The longer answer:
If you place your logic within a service layer, you are effectively following the transaction script pattern and ending up with an anaemic domain model. This can be a valid route, but it generally works best with simple and small projects. The problem is that the transaction script layer (your service layer) becomes more complicated to maintain as it grows.
So the alternative is to create a rich domain model that contains the logic within it. Keeping logic together with the class it applies to is a key part of good OO design, and in a complex project pretty essential. It usually requires a bit more thought and effort initially, which is why for very simple projects people sometimes use the transaction script pattern.
If you are unsure about which to go with, it is not normally too difficult a job to refactor your logic from your service layer into the domain, but you need to make the call early enough that the job is not too large.
Contrary to one of the answers, using POCO classes does not mean you can't have business logic in your domain classes. POCO is about not applying framework specific structures to your domain classes, such as methods and interfaces specific to a particular ORM. A class with some functions to apply business logic is clearly still a Plain-Old-CLR-Object.
A common question, and one that is partially subjective.
IMO, you should go with Option A.
POCOs should be exactly that: "plain old CLR" objects. If you start applying business logic to them, they cease to be POCOs. :)
You can certainly put your business logic in the same assembly as your POCOs; just don't add methods directly to them. Instead, create helper classes to facilitate business rules. The only things your POCOs should have are properties mapping to your domain model.
It really depends on how complex your business rules are. In our application, the business rules are very straightforward, so we use Option A.
But if your business rules start to get messy, consider using the Specification Pattern.
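If it helps, a bare-bones version of that pattern looks like this. The names and the HasShipped property are purely illustrative, and Order is assumed to be your existing POCO:

public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

public class OrderCanBeCancelledOnlineSpec : ISpecification<Order>
{
    public bool IsSatisfiedBy(Order order)
    {
        return !order.HasShipped;   // the messy business rule lives here, behind one small class
    }
}

// Usage: bool ok = new OrderCanBeCancelledOnlineSpec().IsSatisfiedBy(order);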

EF4 - possible to mock ObjectContext for unit testing?

Can it be done without using TypeMock Isolator? I've found a few suggestions online, such as passing in a metadata-only connection string, but nothing I've come across besides TypeMock seems to truly allow for a mock ObjectContext that can be injected into services for unit testing. Do I plunk down the $$ for TypeMock, or are there alternatives? Has nobody managed to create anything comparable to TypeMock that is open source?
I'm unit testing EF4 easily without mocking. What I did was create a repository interface, using the code from http://elegantcode.com/2009/12/15/entity-framework-ef4-generic-repository-and-unit-of-work-prototype/ as a basis. I then created an InMemoryRepository<T> class that implemented the IRepository<T> interface, replaced the IObjectSet<T> inside the class with a List<T>, and changed the retrieval methods accordingly.
Thus if you need to do unit testing, pass in the InMemoryRepository rather than the DataRepository.
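The shape was roughly the following; the exact IRepository<T> members depend on the linked article, so treat this as a sketch rather than the article's actual code:

using System.Collections.Generic;
using System.Linq;

public interface IRepository<T> where T : class
{
    void Add(T entity);
    void Remove(T entity);
    IQueryable<T> Query();
}

public class InMemoryRepository<T> : IRepository<T> where T : class
{
    private readonly List<T> _items = new List<T>();   // stands in for the IObjectSet<T> of the real repository

    public void Add(T entity) { _items.Add(entity); }
    public void Remove(T entity) { _items.Remove(entity); }
    public IQueryable<T> Query() { return _items.AsQueryable(); }
}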
Put your Linq2Entity query behind an interface, unit test it in isolation against a real database.
Write tests for your business logic with mocks for your query interfaces. Don't let Linq bleed into your business logic!
Don't use the Repository pattern!
Wrap the ObjectContext in a proxy class. Then inject that into your classes.
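Something along these lines, I believe (MyEntities is a stand-in for your generated ObjectContext, and Order for one of your entities; both names are made up for the example):

using System.Linq;

public interface IOrdersContext
{
    IQueryable<Order> Orders { get; }
    int SaveChanges();
}

public class OrdersContextProxy : IOrdersContext
{
    private readonly MyEntities _context = new MyEntities();   // the EF4-generated ObjectContext

    public IQueryable<Order> Orders
    {
        get { return _context.Orders; }   // ObjectSet<Order> exposed only as IQueryable<Order>
    }

    public int SaveChanges()
    {
        return _context.SaveChanges();
    }
}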
I don't think the repository pattern is the only answer to the question (it avoids the problem, sure).
I liked this answer; I think it's more appropriate for introducing tests to an existing codebase: Creating Interface for ObjectContext.