I have a question regarding mocking a DbContext for the purpose of unit testing.
In the code below I mock the DbContext and a DbSet using the Moq library, then call the Create method on the service, and at the end verify that Add and SaveChanges were each called exactly once.
public void Create_Test_Item()
{
// creates a DbSet<TestItem>
var mockSet = new Mock<DbSet<TestItem>>();
mockSet.Setup(x => x.Add(It.IsAny<TestItem>())).Returns((TestItem testChildItem) => testChildItem);
// uses Moq to create a TestContext.
var mockContext = new Mock<TestContext>() { CallBase = true };
// wires it up to be returned from the context's Set<TestItem>() method.
mockContext.Setup(c => c.Set<TestItem>()).Returns(mockSet.Object);
mockContext.Setup(c => c.SaveChanges()).Returns(1);
//context is used to create a new TestsvcInstance<TestItem> which is then used to create a new TestItem
var svcInstance = new TestsvcInstance<TestItem>(mockContext.Object);
var testItem = new TestItem
{
Name = "B01",
Code = "001",
ModuleId = new Guid("1F8B2910-C5D4-E611-80D0-000D3A80FCC4")
};
svcInstance.Create(testItem);
// Finally, the test verifies that the svcInstance added a new TestItem and called SaveChanges on the context.
mockSet.Verify(m => m.Add(It.IsAny<TestItem>()), Times.Once());
mockContext.Verify(m => m.SaveChanges(), Times.Once());
}
I am trying to extend this further to add a uniqueness check. Suppose I try
to create a TestItem with a Code / Name that is already saved in the underlying
context; my mocking implementation should raise an error.
How can I achieve that with the same mocking approach?
Your objective with tests should be to test your business logic, not EF. If you want to ensure that your database is set up with unique constraints use an integration test against a real database.
As far as unit tests are concerned, your business logic might be set up to avoid duplicate unique values. This means that given a scenario where your code would be considering duplicate values, your services/DbContext (if you have to go that deep) can be mocked to expect a validation call to assert that the desired value is not a duplicate.
So let's say I have a routine to validate that a user name for a new user is unique. That code goes to the Users DbSet and does some filtering or what have you to determine whether the user name is unique, i.e. DbContext.Users.Any(u => u.UserName == userName);
If I were mocking a DbContext, then I would mock the Users DbSet in that case to return a List<User> containing a single user with a name that matches the name I am testing with. My code under test should receive that User, fail its validation, and my test should pass. I'd also assert that DbContext.SaveChanges does not get called in that scenario. So I tell the mock: "Hey, give me something from which I should figure out there is a duplicate user name, and make sure I don't call SaveChanges in this case."
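A concrete sketch of that idea (assuming EF6 and Moq; User, AppDbContext, UserService and TryCreate are illustrative names, not from the original post): the mocked DbSet is backed by an in-memory list that already contains the "duplicate" user.

```csharp
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using Moq;

public class User { public string UserName { get; set; } }

public class AppDbContext : DbContext
{
    public virtual DbSet<User> Users { get; set; }
}

public class UserService
{
    private readonly AppDbContext _context;
    public UserService(AppDbContext context) { _context = context; }

    public bool TryCreate(User user)
    {
        // The duplicate check under test.
        if (_context.Users.Any(u => u.UserName == user.UserName))
            return false;

        _context.Users.Add(user);
        _context.SaveChanges();
        return true;
    }
}

public static class DuplicateUserTest
{
    public static void Run()
    {
        // Seed the fake data with the user that makes the new one a duplicate.
        var data = new List<User> { new User { UserName = "bob" } }.AsQueryable();

        var mockSet = new Mock<DbSet<User>>();
        mockSet.As<IQueryable<User>>().Setup(m => m.Provider).Returns(data.Provider);
        mockSet.As<IQueryable<User>>().Setup(m => m.Expression).Returns(data.Expression);
        mockSet.As<IQueryable<User>>().Setup(m => m.ElementType).Returns(data.ElementType);
        mockSet.As<IQueryable<User>>().Setup(m => m.GetEnumerator()).Returns(() => data.GetEnumerator());

        var mockContext = new Mock<AppDbContext>();
        mockContext.Setup(c => c.Users).Returns(mockSet.Object);

        var service = new UserService(mockContext.Object);
        bool created = service.TryCreate(new User { UserName = "bob" });

        // Validation should fail, and SaveChanges must never be reached.
        if (created) throw new Exception("Expected duplicate to be rejected.");
        mockContext.Verify(c => c.SaveChanges(), Times.Never());
    }
}
```

The four IQueryable setups are the standard boilerplate for making a mocked EF6 DbSet answer LINQ queries from an in-memory list.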
To test how your code handles a duplicate ID constraint violation the mocked DbContext can be set up to Throw the exception expected in that case so that you can assert how your code handles that exception. You don't need to go through the trouble of setting up a mock to "detect" a duplicate constraint, just tell it "I'm going to give you one, so here is what you need to do."
So taking the above example, say we want to test a race condition (someone else inserted that user just before, or some other developer mucked up my simple .Any() check with some extra filter that resulted in the duplicate user not being found). In this case I'd mock the Users DbSet to return an empty list, but then mock the SaveChanges call to throw an exception (the same type as you'd receive inserting the duplicate record). From there you can assert the behaviour of your code under test. (Does it call a logging service, return a different result? etc.)
With Moq that would look like:
mockDbContext.Setup(x => x.SaveChanges()).Throws(new SomeDbException("Suitable message."));
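Put together, a hedged sketch of that race-condition setup might look like this (Moq + EF6; AppDbContext is illustrative, and DbUpdateException stands in for whatever exception your provider actually raises on a constraint violation):

```csharp
using System;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using Moq;

public class AppDbContext : DbContext { }

public static class RaceConditionTest
{
    public static void Run()
    {
        var mockContext = new Mock<AppDbContext>();
        // Simulate the unique-constraint violation a concurrent insert
        // would cause when SaveChanges finally hits the database.
        mockContext.Setup(c => c.SaveChanges())
                   .Throws(new DbUpdateException("Simulated duplicate key."));

        try
        {
            mockContext.Object.SaveChanges();
            throw new InvalidOperationException("Expected DbUpdateException.");
        }
        catch (DbUpdateException)
        {
            // Here you would assert how your code under test responds:
            // does it log, retry, or surface a friendly error?
        }
    }
}
```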
Again, though, I'd say that while you can unit test this behaviour, unit tests should be more about your business logic (i.e. did it do the validation as expected?). An integration test against a real database or in-memory database can be set up to handle edge-case scenarios. The difference between integration and unit tests is that integration tests might be run daily or a few times a day, while unit tests with mocks are designed to be run repeatedly as you develop business logic where there is the risk of side effects as you go. They form a safety net around code/logic that is likely in flux. Constraints are relatively static behaviour.
Related
How to test this scenario:
I have a user repository that (so far) only has one method: SaveUser.
"SaveUser" is supposed to receive a user as a parameter and save it to the DB using EF 6.
If the user is new (a new user is defined by an "Email" that is not present in the database) the method is supposed to insert it; if it's not, the method is supposed to only update it.
Technically, if this method is called, all business validations are OK; the only remaining step is the act of actually persisting the user.
My problem here is: I don't actually want to create a new user or update one every time... this would lead to undesired side effects in the future (let's call it a "paper trail")... How do I do it?
Here is the test code:
public void CreateOrUpdateUserTest1()
{
UserDTO dto = new UserDTO();
dto.UniqueId = new Guid("76BCB16B-4AD6-416B-BEF6-388D56217E76");
dto.Name = "CreateOrUpdateUserTest1";
dto.Email = "leo@leo.com";
dto.Created = DateTime.Now;
GeneralRepository repository = new GeneralRepository();
//Now the user should be CREATED on the DB
repository.SaveUser(dto);
dto.Name = "CreateOrUpdateUserTest";
//Now the user should be UPDATED on the DB
repository.SaveUser(dto);
}
Your repository probably needs to invoke some methods of a third-party library to actually persist the data. Unit testing in such a case only makes sense if you can mock the third-party library and verify that the particular persistence methods are being correctly invoked by your repository. To achieve this, you need to refactor your code.
Otherwise, you can't unit-test this class, but also consider that maybe there is no need to. The third party library responsible for persistence is a different component, so testing if DB storage works correctly with your classes is rather a matter of Integration testing.
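One possible refactoring along those lines (a sketch; IUserStore and its method names are hypothetical, not from the question) pushes the EF calls behind an interface so the insert-vs-update decision in SaveUser can be unit-tested with a mock and no database:

```csharp
using System;
using Moq;

public class UserDTO
{
    public string Email { get; set; }
    public string Name { get; set; }
}

// Hypothetical seam: the EF query and persistence calls live behind this.
public interface IUserStore
{
    UserDTO FindByEmail(string email);
    void Insert(UserDTO user);
    void Update(UserDTO user);
}

public class GeneralRepository
{
    private readonly IUserStore _store;
    public GeneralRepository(IUserStore store) { _store = store; }

    public void SaveUser(UserDTO user)
    {
        // "New" is defined by an email not yet in the database.
        if (_store.FindByEmail(user.Email) == null)
            _store.Insert(user);
        else
            _store.Update(user);
    }
}

public static class SaveUserTest
{
    public static void Run()
    {
        var store = new Mock<IUserStore>();
        store.Setup(s => s.FindByEmail("new@example.com")).Returns((UserDTO)null);

        new GeneralRepository(store.Object).SaveUser(
            new UserDTO { Email = "new@example.com" });

        // No user with that email exists, so Insert (not Update) is expected.
        store.Verify(s => s.Insert(It.IsAny<UserDTO>()), Times.Once());
        store.Verify(s => s.Update(It.IsAny<UserDTO>()), Times.Never());
    }
}
```

The real IUserStore implementation would wrap the DbContext; integration tests then cover that thin EF layer separately.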
Apologies, in advance, if this seems like a duplicate question. This question was the closest I could find, but it doesn't really solve the issues I am facing.
I'm using Entity Framework 5 in an ASP.NET MVC4 application and attempting to implement the Unit of Work pattern.
My unit of work class implements IDisposable and contains a single instance of my DbContext-derived object context class, as well as a number of repositories, each of which derives from a generic base repository class that exposes all the usual repository functionality.
For each HTTP request, Ninject creates a single instance of the Unit of Work class and injects it into the controllers, automatically disposing it when the request is complete.
Since EF5 abstracts away the data storage and Ninject manages the lifetime of the object context, it seems like the perfect way for consuming code to access in-memory entity objects without the need to explicitly manage their persistence. In other words, for optimum separation of concerns, I envisage my controller action methods being able to use and modify repository data without the need to explicitly call SaveChanges afterwards.
My first (naive) attempt to implement this idea employed a call to SaveChanges within every repository base-class method that modified data. Of course, I soon realized that this is neither optimal for performance (especially when making multiple successive calls to the same method), nor does it accommodate situations where an action method directly modifies a property of an object retrieved from a repository.
So, I evolved my design to eliminate these premature calls to SaveChanges and replace them with a single call when the Unit of Work instance is disposed. This seemed like the cleanest implementation of the Unit of Work pattern in MVC, since a unit of work is naturally scoped to a request.
Unfortunately, after building this concept, I discovered its fatal flaw - the fact that objects added to or deleted from a DbContext are not reflected, even locally, until SaveChanges has been called.
So, what are your thoughts on the idea that consuming code should be able to use objects without explicitly persisting them? And, if this idea seems valid, what's the best way to achieve it with EF5?
Many thanks for your suggestions,
Tim
UPDATE: Based on @Wahid's response, I am adding below some test code that shows some of the situations in which it becomes essential for the consuming code to explicitly call SaveChanges:
var unitOfWork = _kernel.Get<IUnitOfWork>();
var terms = unitOfWork.Terms.Entities;
// Purge the table so as to start with a known state
foreach (var term in terms)
{
terms.Remove(term);
}
unitOfWork.SaveChanges();
Assert.AreEqual(0, terms.Count());
// Verify that additions are not even reflected locally until committed.
var created = new Term { Pattern = "Test" };
terms.Add(created);
Assert.AreEqual(0, terms.Count());
// Verify that additions are reflected locally once committed.
unitOfWork.SaveChanges();
Assert.AreEqual(1, terms.Count());
// Verify that property modifications to entities are reflected locally immediately
created.Pattern = "Test2";
var another = terms.Single(term => term.Id == created.Id);
Assert.AreEqual("Test2", another.Pattern);
Assert.True(ReferenceEquals(created, another));
// Verify that queries against property changes fail until committed
Assert.IsNull(terms.FirstOrDefault(term => term.Pattern == "Test2"));
// Verify that queries against property changes work once committed
unitOfWork.SaveChanges();
Assert.NotNull(terms.FirstOrDefault(term => term.Pattern == "Test2"));
// Verify that deletions are not even reflected locally until committed.
terms.Remove(created);
Assert.AreEqual(1, terms.Count());
// Verify that deletions are reflected locally once committed.
unitOfWork.SaveChanges();
Assert.AreEqual(0, terms.Count());
First of all, SaveChanges should NOT ever be in the repositories, because that leads you to lose the benefit of the Unit of Work.
Second, you need a dedicated method on the UnitOfWork to save changes.
And if you want to call this method automatically, then you may find some other solution, such as an ActionFilter, or making all your controllers inherit from a BaseController class and handling the SaveChanges in it.
Anyway, the UnitOfWork should always have a SaveChanges method.
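The ActionFilter idea could be sketched like this (ASP.NET MVC; IUnitOfWork and the filter name are illustrative, and resolving the unit of work through DependencyResolver is just one of several possible wirings):

```csharp
using System.Web.Mvc;

public interface IUnitOfWork
{
    void SaveChanges();
}

public class CommitUnitOfWorkAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Commit only when the action completed without throwing, so a
        // failed request never flushes half-finished changes.
        if (filterContext.Exception == null)
        {
            var unitOfWork = DependencyResolver.Current
                .GetService(typeof(IUnitOfWork)) as IUnitOfWork;
            if (unitOfWork != null)
            {
                unitOfWork.SaveChanges();
            }
        }
        base.OnActionExecuted(filterContext);
    }
}
```

Registered as a global filter (or applied on a BaseController), this gives every request the single end-of-request SaveChanges the answer describes.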
Having seen some strong advice against testing EF against mocks, especially Code First, I have decided to go with integration testing against a SqlCe database dedicated to testing, and then use pure unit tests further downstream from the unit of work and repositories provided by DbContext and DbSet.
I am just unclear where to draw the line and what to test where. I know I can mock the DAL in my service layer when I am confident the DAL specific integration tests cover its insides, but what do I test in the DAL? There doesn't seem to be much point testing to see if I can save and read an object, because EF is external and already tested.
You will test your mapping and queries in the DAL using integration tests. Example:
public class Service {
private readonly IDAL _dal;
public Service(IDAL dal) {
// Not null validation here
_dal = dal;
}
public void DoSomething() {
SomeData data = FindSomeData();
// Do some logic
_dal.Commit();
}
protected virtual SomeData FindSomeData() {
return _dal.SomeData.Where(...).FirstOrDefault();
}
}
This is a very simplified example showing:
The Service depends on the DAL; the DAL interface is passed in through constructor injection.
The Service contains a public DoSomething method you want to test to know if the logic is executed correctly. But this method also depends on a DB query and DB persistence (Commit).
The query is part of your logic, but executing such a query is a separate concern, so it is handled by its own method. In a more complex situation this method can live in another class injected into the Service class (a repository). The key criteria for these query methods are:
They don't return IQueryable
They don't accept Expression<> as parameters
How to unit test DoSomething method:
In this simple example your test class will derive from the Service class and override FindSomeData to return test data. In the case of injection you would instead define a fake for the injected class.
You will also mock IDAL so you can verify that Commit was called.
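Based on the Service example above, that test could be sketched as follows (Moq; SomeData, IDAL and the Service body are restated minimally so the sketch compiles, and TestableService/DataToReturn are illustrative names):

```csharp
using System;
using Moq;

public class SomeData { }

public interface IDAL
{
    void Commit();
}

// Minimal restatement of the answer's Service.
public class Service
{
    private readonly IDAL _dal;
    public Service(IDAL dal) { _dal = dal; }

    public void DoSomething()
    {
        SomeData data = FindSomeData();
        if (data == null) throw new InvalidOperationException("No data.");
        _dal.Commit();
    }

    protected virtual SomeData FindSomeData()
    {
        // The real implementation would query the DAL; tests override this.
        throw new NotImplementedException();
    }
}

public class TestableService : Service
{
    public SomeData DataToReturn { get; set; }
    public TestableService(IDAL dal) : base(dal) { }

    // Bypass the real query and return canned test data.
    protected override SomeData FindSomeData() { return DataToReturn; }
}

public static class DoSomethingTest
{
    public static void Run()
    {
        var dal = new Mock<IDAL>();
        var service = new TestableService(dal.Object) { DataToReturn = new SomeData() };

        service.DoSomething();

        // The logic ran against the stubbed data and committed exactly once.
        dal.Verify(d => d.Commit(), Times.Once());
    }
}
```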
What integration test you should use:
You should create test for FindSomeData querying the real database
In general you should also have an integration test for Commit, but it is more difficult to achieve because the example calls Commit directly from DoSomething. You don't want to test that method again, and at the same time the Commit method has too many generic cases because it simply flushes all changes from the current context to the database. I usually have separate tests for inserting, updating and deleting every entity type. When the DoSomething method does some complex modification you can split it into two methods: one handled by a unit test for the real logic, and a second covered by integration tests for the different persistence scenarios your logic can produce.
Entity Framework is tested, but your DAL, especially mapping, is not. I prefer having integration tests to show me that at the very least my mapping is right, and better yet, that I can successfully perform all CRUD operations against my database.
What we usually test DB-wise, is if the complex object graphs can be properly inserted, updated, deleted; basically testing the more complex mappings.
In my opinion there's not really much point in testing if an object with 3 primitive value properties can be inserted and whatnot, because then you'd never see the end of it.
We kind of favor being optimistic (that the simple stuff will just work), we test the more complex associations, and if we encounter an error in our mapping (e.g. an object that should be deleted but isn't) we write an extra test for that.
It's usually not wise to test absolutely everything from a business perspective; you should focus on the high-risk/high-damage stuff first and then work your way down the severity ladder until you think it's not worth it anymore.
1) Have a set of integration tests which test your mappings
2) Make your DAL very lightweight, but with sufficient power to construct queries, something like:
public interface IDb
{
IQueryable<T> Query<T>();
... (save, delete, get-by-id methods)...
}
3) Write objects which encapsulate the logic behind constructing the query against the DAL
public class MuppetSearch
{
public MuppetColor? Color { get; set;}
public string Name{ get; set; }
public IQueryable<Muppet> ConstructQuery(IDb db)
{
var query = db.Query<Muppet>();
if(Color.HasValue)
{
query = query.Where(m=>m.Color == Color.Value);
}
if(!String.IsNullOrEmpty(Name))
{
query = query.Where(m=>m.Name.Contains(Name));
}
return query;
}
}
4) Test those; mocking all the data you need should be pretty trivial
5) In your service use the search classes to do your query construction
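Because ConstructQuery only needs an IQueryable, step 4 needs no mocking library at all: a fake IDb backed by LINQ to Objects is enough. A self-contained sketch (FakeDb is illustrative; MuppetSearch is restated from point 3 so the block compiles on its own):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public enum MuppetColor { Green, Blue }

public class Muppet
{
    public string Name { get; set; }
    public MuppetColor Color { get; set; }
}

public interface IDb
{
    IQueryable<T> Query<T>();
}

// In-memory stand-in for the DAL: queries run via LINQ to Objects.
public class FakeDb : IDb
{
    private readonly Dictionary<Type, object> _sets = new Dictionary<Type, object>();

    public void Add<T>(IEnumerable<T> items)
    {
        _sets[typeof(T)] = items.AsQueryable();
    }

    public IQueryable<T> Query<T>()
    {
        return (IQueryable<T>)_sets[typeof(T)];
    }
}

// Restated from point 3 above.
public class MuppetSearch
{
    public MuppetColor? Color { get; set; }
    public string Name { get; set; }

    public IQueryable<Muppet> ConstructQuery(IDb db)
    {
        var query = db.Query<Muppet>();
        if (Color.HasValue)
        {
            query = query.Where(m => m.Color == Color.Value);
        }
        if (!String.IsNullOrEmpty(Name))
        {
            query = query.Where(m => m.Name.Contains(Name));
        }
        return query;
    }
}

public static class MuppetSearchTest
{
    public static void Run()
    {
        var db = new FakeDb();
        db.Add(new[]
        {
            new Muppet { Name = "Kermit", Color = MuppetColor.Green },
            new Muppet { Name = "Gonzo",  Color = MuppetColor.Blue }
        });

        var search = new MuppetSearch { Name = "Ker", Color = MuppetColor.Green };
        var results = search.ConstructQuery(db).ToList();

        // Both filters applied: only Kermit matches.
        if (results.Count != 1 || results[0].Name != "Kermit")
            throw new Exception("Search returned unexpected results.");
    }
}
```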
We have a scenario in our code when only a few properties of an entity are allowed to be changed. To guarantee that, we have code similar to this:
public void SaveCustomer(Customer customer)
{
var originalCustomer = dbContext.GetCustomerById(customer.Id);
if (customer.Name != originalCustomer.Name)
{
throw new Exception("Customer name may not be changed.");
}
originalCustomer.Address = customer.Address;
originalCustomer.City = customer.City;
dbContext.SaveChanges();
}
The problem with this code is that the call to dbContext.GetCustomerById does not always give me a new instance of the Customer class. If the customer has already been fetched from the database, Entity Framework will keep the instance in memory and return it on every subsequent call.
This leads us to the actual problem - customer and originalCustomer may refer to the same instance. In that case, customer.Name will be equal to originalCustomer.Name and we will not be able to detect if it differs from the database.
I guess the same problem exists with most other ORMs as well, because of the identity map design pattern.
Any ideas how this can be solved? Can I somehow force EF to always give me a new instance of the customer class?
Or should we refactor the code instead? Does anyone know of any good design patterns for this scenario?
You can try detaching the entity from the context; this removes all references to the context (as well as the identity map behaviour).
So, before passing the Customer to your method you can detach it:
yourContext.Detach(customer);
(With the DbContext API, the equivalent is yourContext.Entry(customer).State = EntityState.Detached;)
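An alternative sketch that sidesteps the identity map entirely: fetch the comparison copy with AsNoTracking (available on DbSet in EF 4.1+), so EF always materializes a fresh, untracked instance. Names mirror the question's example; this is a fragment of the repository class and assumes dbContext exposes a Customers set.

```csharp
public void SaveCustomer(Customer customer)
{
    // AsNoTracking bypasses the identity map, so this is always a fresh
    // copy of the database row, never the same instance as 'customer'.
    var originalCustomer = dbContext.Customers
                                    .AsNoTracking()
                                    .Single(c => c.Id == customer.Id);

    if (customer.Name != originalCustomer.Name)
    {
        throw new Exception("Customer name may not be changed.");
    }

    // Update the tracked copy as before.
    var tracked = dbContext.Customers.Single(c => c.Id == customer.Id);
    tracked.Address = customer.Address;
    tracked.City = customer.City;
    dbContext.SaveChanges();
}
```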
Here's part of a controller action:
[HttpPost]
public ActionResult NewComplaint(Complaint complaint)
{
if(!ModelState.IsValid)
{
// some code
}
// some more code...
}
When running the application, the model is automatically validated before the if statement is even called. However, when attempting to unit test this code, the automatic validation does not occur.
If I were to use a FormCollection and call TryUpdateModel instead, the validation would occur but I don't want to use that.
I've found that calling TryValidateModel(model) before the if statement works around the problem well; only requiring one extra line of code. I'd rather get rid of it however.
Any ideas why the automatic validation does not occur when unit testing but occurs when the application is running?
EDIT: Forgot to mention, I'm using ASP.NET MVC3 RC1 and I'm mocking the HTTPContext object of the controller if that makes any difference
Validation occurs during model binding (and TryUpdateModel performs model binding).
But I think the problem is that what you are trying to test is the MVC framework (i.e. the fact that validation occurs before an action method is invoked). You shouldn't test that.
You should assume that part just works (because we test it extensively) and only test your application code. So in this case, the only thing you need to mock is the return value of ModelState.IsValid, and you can do that by adding a validation error manually:
ModelState.AddModelError("some key", "some error message");
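In a test that could look like the sketch below (ComplaintController and NewComplaint follow the question, but their bodies are restated minimally here; the assumption that the invalid branch redisplays the form as a ViewResult is mine):

```csharp
using System.Web.Mvc;

public class Complaint { }

// Minimal restatement of the question's action so the sketch compiles.
public class ComplaintController : Controller
{
    [HttpPost]
    public ActionResult NewComplaint(Complaint complaint)
    {
        if (!ModelState.IsValid)
        {
            return View(complaint);   // redisplay the form
        }
        return RedirectToAction("Index");
    }
}

public static class NewComplaintTest
{
    public static void Run()
    {
        var controller = new ComplaintController();

        // Simulate what model binding would have flagged for bad input;
        // this is the manual validation error the answer suggests.
        controller.ModelState.AddModelError("Title", "Required");

        var result = controller.NewComplaint(new Complaint());

        // The invalid branch should run, not the redirect.
        if (!(result is ViewResult))
            throw new System.Exception("Expected the form to be redisplayed.");
    }
}
```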