Mocking an Entity Framework inner join - entity-framework

I need to unit test an inner join method from my Data Access Layer.
My DAL looks like this:
public class MyDAL:IMyDAL
{
private MyContext context;
public MyDAL(MyContext Context)
{
this.context = Context;
}
public IEnumerable<Parent> Read(int childId)
{
var query = from parent in context.Parent
join child in context.Child on parent.Id equals child.Id
where child.Id == childId
select parent;
return query.ToList();
}
}
And I want to unit test the Read method by mocking Entity Framework, following this link: http://msdn.microsoft.com/en-us/data/dn314429.aspx.
[TestMethod]
public void ReadMethod()
{
var data = new List<Parent>
{
new Parent { Id = 20 },
}.AsQueryable();
var data2 = new List<Child>
{
new Child { Id = 8 },
}.AsQueryable();
var mockSetPar = new Mock<DbSet<Parent>>();
var mockSetChild = new Mock<DbSet<Child>>();
mockSetPar.As<IQueryable<Parent>>().Setup(m => m.Provider).Returns(data.Provider);
mockSetPar.As<IQueryable<Parent>>().Setup(m => m.Expression).Returns(data.Expression);
mockSetPar.As<IQueryable<Parent>>().Setup(m => m.ElementType).Returns(data.ElementType);
mockSetPar.As<IQueryable<Parent>>().Setup(m => m.GetEnumerator()).Returns(data.GetEnumerator());
mockSetChild.As<IQueryable<Child>>().Setup(m => m.Provider).Returns(data2.Provider);
mockSetChild.As<IQueryable<Child>>().Setup(m => m.Expression).Returns(data2.Expression);
mockSetChild.As<IQueryable<Child>>().Setup(m => m.ElementType).Returns(data2.ElementType);
mockSetChild.As<IQueryable<Child>>().Setup(m => m.GetEnumerator()).Returns(data2.GetEnumerator());
var customDbContextMock = new Mock<MyContext>();
customDbContextMock.Setup(x => x.Parent).Returns(mockSetPar.Object);
customDbContextMock.Setup(x => x.Child).Returns(mockSetChild.Object);
var myDAL = new MyDAL(customDbContextMock.Object);
var actual = myDAL.Read(8);
Assert.IsNotNull(actual);
}
The resulting actual is empty because the join hasn't been mocked, so the query returns nothing.
How can I mock the join method to return a value?
Thank you

In-memory tests of DB interactions can be misleading because LINQ to Entities supports only a subset of LINQ to Objects, so you can write a test that is green while the same query always throws an exception in production.
This is also wrong on a purely conceptual level. The DAL is code that sits on the boundary between two systems - the app and the database - and its responsibility is to integrate them. If you mock away those integration components, your test becomes meaningless because it mocks away the core responsibility of the SUT.
To test the query logic, you have to use a database provider that behaves like the production one. So you need integration tests. A useful solution is an in-memory SQLite database: it behaves like a real database in most scenarios covered by Entity Framework, yet performs almost as fast as mocks based on in-memory collections in unit tests.
You can consider SQLite as a database mock on steroids if you like.
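If you go the SQLite route, here is a minimal sketch of what the test from the question could look like. It assumes EF Core with the Microsoft.EntityFrameworkCore.Sqlite provider (with EF6 you would need a SQLite ADO.NET/EF provider and connection-string setup instead) and a MyContext constructor that accepts DbContextOptions; usings are Microsoft.Data.Sqlite, Microsoft.EntityFrameworkCore and System.Linq.
[TestMethod]
public void Read_returns_parents_joined_to_the_given_child()
{
    // Keep the connection open: the in-memory database lives as long as the connection.
    using (var connection = new SqliteConnection("DataSource=:memory:"))
    {
        connection.Open();
        var options = new DbContextOptionsBuilder<MyContext>()
            .UseSqlite(connection)
            .Options;

        // Seed through one context instance.
        using (var context = new MyContext(options))
        {
            context.Database.EnsureCreated();
            context.Parent.Add(new Parent { Id = 8 });
            context.Child.Add(new Child { Id = 8 });
            context.SaveChanges();
        }

        // Query through a fresh context instance, exactly like production code would.
        using (var context = new MyContext(options))
        {
            var myDAL = new MyDAL(context);
            var actual = myDAL.Read(8);
            Assert.AreEqual(1, actual.Count());
        }
    }
}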

I think you have already noticed that mocking EF queries is time-consuming and brittle. My suggestion: don't mock it. Hide as much of the data-access logic as you can behind repository interfaces, which are easy to mock:
public interface IParentRepository
{
IEnumerable<Parent> GetParentsOfChild(int childId);
}
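For completeness, the EF-backed implementation of that interface can own the join from the question. This is only a sketch; cover it with an integration test against a real database rather than with mocks:
public class ParentRepository : IParentRepository
{
    private readonly MyContext context;

    public ParentRepository(MyContext context)
    {
        this.context = context;
    }

    public IEnumerable<Parent> GetParentsOfChild(int childId)
    {
        // Same join as in the question, now hidden behind the interface.
        var query = from parent in context.Parent
                    join child in context.Child on parent.Id equals child.Id
                    where child.Id == childId
                    select parent;
        return query.ToList();
    }
}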
Then the test will look like:
[TestMethod]
public void ReadMethod()
{
int childId = // get sample id
var expected = // get sample parents
var repositoryMock = new Mock<IParentRepository>();
repositoryMock.Setup(r => r.GetParentsOfChild(childId))
.Returns(expected);
var myDAL = new MyDAL(repositoryMock.Object);
var actual = myDAL.Read(childId);
repositoryMock.VerifyAll();
CollectionAssert.AreEqual(expected, actual.ToList());
}
If you want to verify the query implementation itself, the best way to do it is an acceptance/integration test that involves a real database. Keep in mind that analyzing the generated IQueryable is not enough to be sure your query will work in the real environment. For example, you can use the Last() operator on an IQueryable, but Entity Framework will fail to translate that query to SQL.
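For instance, a query as innocent as the following (a sketch using the Parent set from the question) passes against an in-memory fake but blows up in production:
// Green against an in-memory IQueryable, but EF6 cannot translate Last()
// and throws NotSupportedException against a real database.
var newest = context.Parent.Last();

// A translatable equivalent.
var newestTranslatable = context.Parent
    .OrderByDescending(p => p.Id)
    .FirstOrDefault();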

Related

How to control in-memory generated key values in Entity Framework Core?

Background
I am developing a Web application based on ASP.NET Core 2.1 and EntityFrameworkCore 2.1.1. Besides other types of tests, I am using in-memory Web API testing to get some kind of integration testing of the Web API.
Since the test data is rather convoluted, I have generated a JSON file from a slice of a migrated database, and the tests import the data from this JSON file when the in-memory tests kick in.
The issue
Read-only tests are working just fine. However, when I insert something for an entity that has an identity (ValueGenerated == ValueGenerated.OnAdd), it automatically gets a value of 1 (all PKs are ints). This makes sense, since whatever generator EF uses behind the scenes to produce those values was never told to start from a particular value.
However, this clearly breaks for inserts whose generated key value already exists.
Things I have tried
[current working solution] Shifting the key values - upon deserializing all objects, I perform an id-shift operation for all involved entities (they get "large" values). This works, but it is error-prone (e.g. some entities have static ids, I have to ensure that all foreign keys / navigation properties are properly defined, and it is rather slow right now because I rely on reflection to identify the keys / navigation properties that require shifting)
Configuring value generator to start from a large value:
TestStartUp.cs
services.AddSingleton<IValueGeneratorCache, ValueGeneratorCache>();
services.AddSingleton<ValueGeneratorCacheDependencies, ValueGeneratorCacheDependencies>();
Ids generation functionality
public const int StartValue = 100000000;
class ResettableValueGenerator : ValueGenerator<int>
{
private int _current;
public override bool GeneratesTemporaryValues => false;
public override int Next(EntityEntry entry)
{
return Interlocked.Increment(ref _current);
}
public void Reset() => _current = StartValue;
}
public static void ResetValueGenerators(CustomApiContext context, IValueGeneratorCache cache, int startValue = StartValue)
{
var allKeyProps = context.Model.GetEntityTypes()
.Select(e => e.FindPrimaryKey().Properties[0])
.Where(p => p.ClrType == typeof(int));
var keyProps = allKeyProps.Where(p => p.ValueGenerated == ValueGenerated.OnAdd);
foreach (var keyProperty in keyProps)
{
var generator = (ResettableValueGenerator)cache.GetOrAdd(
keyProperty,
keyProperty.DeclaringEntityType,
(p, e) => new ResettableValueGenerator());
generator.Reset();
}
}
When debugging I can see that my entities are being iterated, so the reset is applied.
Pushing the data into the in-memory database
private void InitializeSeeding(IServiceScope scope)
{
using (scope)
{
var services = scope.ServiceProvider;
try
{
var context = services.GetRequiredService<CustomApiContext>();
// this pushes deserialized data + static data into the database
InitDbContext(context);
var valueGeneratorService = services.GetRequiredService<IValueGeneratorCache>();
ResetValueGenerators(context, valueGeneratorService);
}
catch (Exception ex)
{
var logger = services.GetService<ILogger<Startup>>();
logger.LogError(ex, "An error occurred while seeding the database.");
}
}
}
Actual insert
This is done using a generic repository, but boils down to this:
Context.Set<T>().Add(entity);
Context.SaveChanges();
Question: How to control in-memory generated key values in Entity Framework Core?

How can I create a generic update method for One to Many structures in Entity Framework 5?

I am writing a web application in which I get various objects back from the web that need to be either updated or added to the database. On top of this, I need to check that the owner is not modified, since an attacker could potentially get an account and send an update that changes the foreign key to the user model. I don't want to hand-code all of these methods; instead I want to make a single generic call.
Maybe something as simple as this
ctx.OrderLines.AddOrUpdateSet(order.OrderLines, a => a.Order)
Based on the old persisted records that have a foreign key to Order, and on the new incoming records:
Delete old records that are not in the new records list.
Add new records that are not in the old records list.
Update new records that exist in both lists.
ctx.Entry(orderLine).State=EntityState.Deleted;
...
ctx.Entry(orderLine).State=EntityState.Added;
...
ctx.Entry(orderLine).State=EntityState.Modified;
This gets a bit complicated when the old record is loaded to verify that ownership did not change. I get an error if I don't do:
oldorder.OrderLines.remove(oldOrderLine); //for deletes
oldorder.OrderLines.add(oldOrderLine); //for adds
ctx.Entry(header).CurrentValues.SetValues(header); //for modifications
With Entity Framework 5 there is a new extension method called AddOrUpdate, and there was a very interesting (please read) blog entry on how to create this method before it was added.
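For reference, a hedged sketch of how that AddOrUpdate extension (from System.Data.Entity.Migrations) is typically called; the Orders set, the Number natural key and the Total property are made up here for illustration:
using System.Data.Entity.Migrations;

ctx.Orders.AddOrUpdate(
    o => o.Number,                                    // how to locate an existing row
    new Order { Number = "SO-1001", Total = 99.0m }); // inserted or updated accordingly
ctx.SaveChanges();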
I'm not sure if this is too much to ask as a question in StackOverflow, any clues on how to approach the problem may be sufficient. Here are my thoughts so far:
a) leverage AddOrUpdate for some of the functionality.
b) create a secondary context hoping to avoid loading order into the context and avoid extra calls.
c) Set the state of all the saved objects to initially deleted.
Since you have linked to this question from my own question, I thought I'd throw in some newly-acquired Entity Framework experience of my own.
To achieve a common save method in my generic repository with Entity Framework, I do this. (Please note that the Context is a member of my repository, as I am implementing the Unit of Work pattern as well)
public class EFRepository<TEntity> : IRepository<TEntity> where TEntity : class
{
internal readonly AwesomeContext Context;
internal readonly DbSet<TEntity> DbSet;
public EFRepository(AwesomeContext context)
{
if (context == null) throw new ArgumentNullException("context");
Context = context;
DbSet = context.Set<TEntity>();
}
// Rest of implementation removed for brevity
public void Save(TEntity entity)
{
var entry = Context.Entry(entity);
if (entry.State == EntityState.Detached)
DbSet.Add(entity);
else entry.State = EntityState.Modified;
}
}
Honestly, I can't tell you why this works, because I just kept changing the state conditions - however I do have unit (integration) tests to prove that it works. Hopefully someone more into EF than myself can shed some light on this.
Regarding the "cascading updates", I was curious myself as if it would work using the Unit of Work pattern (my question I linked to was when I did not know it existed, and my repositories would basically create a unit of work whenever I wanted to save/get/delete, which is bad), so I threw in a test case in a simple relational DB. Here is a diagram to give you an idea.
IMPORTANT In order for test case number 2 to work, you need to make your POCO reference properties virtual, in order for EF to provide lazy loading.
The repository implementation is just derived from the generic EFRepository<TEntity> as shown above, so I'll leave out that implementation.
These are my test cases, both pass.
public class EFResourceGroupFacts
{
[Fact]
public void Saving_new_resource_will_cascade_properly()
{
// Recreate a fresh database and add some dummy data.
SetupTestCase();
using (var ctx = new LocalizationContext("Localization.CascadeTest"))
{
var cultureRepo = new EFCultureRepository(ctx);
var resourceRepo = new EFResourceRepository(cultureRepo, ctx);
var existingCulture = cultureRepo.Get(1); // First and only culture.
var groupToAdd = new ResourceGroup("Added Group");
var resourceToAdd = new Resource(existingCulture,"New Resource", "Resource to add to existing group.",groupToAdd);
// Verify we got a single resource group.
Assert.Equal(1,ctx.ResourceGroups.Count());
// Saving the resource should also add the group.
resourceRepo.Save(resourceToAdd);
ctx.SaveChanges();
// Verify the group was added without explicitly saving it.
Assert.Equal(2, ctx.ResourceGroups.Count());
}
// try creating a new Unit of Work to really verify it has been persisted..
using (var ctx = new LocalizationContext("Localization.CascadeTest"))
{
Assert.DoesNotThrow(() => ctx.ResourceGroups.First(rg => rg.Name == "Added Group"));
}
}
[Fact]
public void Changing_existing_resources_group_saves_properly()
{
SetupTestCase();
using (var ctx = new LocalizationContext("Localization.CascadeTest"))
{
ctx.Configuration.LazyLoadingEnabled = true;
var cultureRepo = new EFCultureRepository(ctx);
var resourceRepo = new EFResourceRepository(cultureRepo, ctx);
// This resource already has a group.
var existingResource = resourceRepo.Get(2);
Assert.NotNull(existingResource.ResourceGroup); // IMPORTANT: Property must be virtual!
// Verify there is only one resource group in the datastore.
Assert.Equal(1,ctx.ResourceGroups.Count());
existingResource.ResourceGroup = new ResourceGroup("I am implicitly added to the database. How cool is that?");
// Make sure there are 2 resources in the datastore before saving.
Assert.Equal(2, ctx.Resources.Count());
resourceRepo.Save(existingResource);
ctx.SaveChanges();
// Make sure there are STILL only 2 resources in the datastore AFTER saving.
Assert.Equal(2, ctx.Resources.Count());
// Make sure the new group was added.
Assert.Equal(2,ctx.ResourceGroups.Count());
// Refetch from store, verify relationship.
existingResource = resourceRepo.Get(2);
Assert.Equal(2,existingResource.ResourceGroup.Id);
// let's change the group to an existing group
existingResource.ResourceGroup = ctx.ResourceGroups.First();
resourceRepo.Save(existingResource);
ctx.SaveChanges();
// Assert no change in groups.
Assert.Equal(2, ctx.ResourceGroups.Count());
// Refetch from store, verify relationship.
existingResource = resourceRepo.Get(2);
Assert.Equal(1, existingResource.ResourceGroup.Id);
}
}
private void SetupTestCase()
{
// Delete everything first. Database.SetInitializer does not work very well for me.
using (var ctx = new LocalizationContext("Localization.CascadeTest"))
{
ctx.Database.Delete();
ctx.Database.Create();
var culture = new Culture("en-US", "English");
var resourceGroup = new ResourceGroup("Existing Group");
var resource = new Resource(culture, "Existing Resource 1",
"This resource will already exist when starting the test. Initially it has no group.");
var resourceWithGroup = new Resource(culture, "Exising Resource 2",
"Same for this resource, except it has a group.",resourceGroup);
ctx.Cultures.Add(culture);
ctx.ResourceGroups.Add(resourceGroup);
ctx.Resources.Add(resource);
ctx.Resources.Add(resourceWithGroup);
ctx.SaveChanges();
}
}
}
It was interesting to learn this, as I was not sure if it would work.
After working on this for a while I found an open-source project called GraphDiff; here is its blog entry, 'Introducing GraphDiff for Entity Framework Code First - allowing automated updates of a graph of detached entities'. I have only begun using it, but it looks impressive, and it does solve the problem of issuing update/delete/insert statements for many-to-one relationships. It actually generalizes the problem to graphs and allows arbitrary nesting.
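For the order/order-lines case above, the GraphDiff call is roughly the following. This is a sketch based on its documented UpdateGraph/OwnedCollection API; double-check the exact mapping against the library's readme:
using RefactorThis.GraphDiff;

// Declare that the order owns its OrderLines collection, so GraphDiff diffs the
// detached graph against the database and issues the inserts/updates/deletes.
ctx.UpdateGraph(order, map => map.OwnedCollection(o => o.OrderLines));
ctx.SaveChanges();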
Here is the generic method I concocted. It uses AddOrUpdate from the System.Data.Entity.Migrations namespace, which may be reloading records from the db; I'll be checking on that later. The usage is:
ctx.OrderLines.AddOrUpdateSet(l => l.orderId == neworder.Id,
l => l.Id, order.orderLines);
Here is the code:
public static class UpdateExtensions
{
public static void AddOrUpdateSet<TEntity>(this IDbSet<TEntity> set, Expression<Func<TEntity, bool>> predicate,
Func<TEntity, int> selector, IEnumerable<TEntity> newRecords) where TEntity : class
{
List<TEntity> oldRecords = set.Where(predicate).ToList();
IEnumerable<int> keys = newRecords.Select(selector);
foreach (TEntity newRec in newRecords)
set.AddOrUpdate(newRec);
oldRecords.FindAll(old => !keys.Contains(selector(old))).ForEach(detail => set.Remove(detail));
}
}

Testing EF ConcurrencyCheck

I have a base object, that contains a Version property, marked as ConcurrencyCheck
public class EntityBase : IEntity, IConcurrencyEnabled
{
public int Id { get; set; }
[ConcurrencyCheck]
[Timestamp]
public byte[] Version { get; set; }
}
This works, however, I want to write a test to ensure it doesn't get broken. Unfortunately, I can't seem to figure out how to write a test that doesn't rely on the physical database!
And the relevant test code that works, but uses the database...
protected override void Arrange()
{
const string asUser = "ConcurrencyTest1"; // used to anchor and lookup this test record in the db
Context1 = new MyDbContext();
Context2 = new MyDbContext();
Repository1 = new Repository<FooBar>(Context1);
Repository2 = new Repository<FooBar>(Context2);
UnitOfWork1 = new UnitOfWork(Context1);
UnitOfWork2 = new UnitOfWork(Context2);
Sut = Repository1.Find(x => x.CreatedBy.Equals(asUser)).FirstOrDefault();
if (Sut == null)
{
Sut = new FooBar
{
Name = "Concurrency Test"
};
Repository1.Insert(Sut);
UnitOfWork1.SaveChanges(asUser);
}
ItemId = Sut.Id;
}
protected override void Act()
{
_action = () =>
{
var item1 = Repository1.FindById(ItemId);
var item2 = Repository2.FindById(ItemId);
item1.Name = string.Format("Changed # {0}", DateTime.Now);
UnitOfWork1.SaveChanges("test1");
item2.Name = string.Format("Conflicting Change # {0}", DateTime.Now);
UnitOfWork2.SaveChanges("test2"); //Should throw DbUpdateConcurrencyException
};
}
[TestMethod]
[ExpectedException(typeof(DbUpdateConcurrencyException))]
public void Assert()
{
_action();
}
How can I remove the DB requirement???
I would recommend extracting your MyDbContext into an interface IMyDbContext, and then creating a TestDbContext class that will also implement SaveChanges the way you have it up there, except with returning a random value (like 1) instead of actually saving to the database.
At that point then all you'd need to do is to test that, in fact, all of the entities got their version number upped.
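A minimal sketch of that seam, assuming FooBar derives from EntityBase; IMyDbContext and TestDbContext are illustrative names rather than types from the question:
public interface IMyDbContext : IDisposable
{
    IDbSet<FooBar> FooBars { get; }
    int SaveChanges();
}

public class TestDbContext : IMyDbContext
{
    // Back this with whatever in-memory fake DbSet implementation the tests already use.
    public IDbSet<FooBar> FooBars { get; set; }

    public int SaveChanges()
    {
        // Pretend the save succeeded: bump each concurrency token in memory
        // instead of letting SQL Server regenerate the rowversion.
        foreach (var entity in FooBars)
            entity.Version = Guid.NewGuid().ToByteArray();
        return 1;
    }

    public void Dispose() { }
}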
Or you could also do the examples found here or here, as well.
EDIT: I actually just found a direct example with using TimeStamp for concurrency checks on this blog post.
It's my opinion that you should not try to mock this behaviour to enable "pure" unit testing. For two reasons:
it requires quite a lot of code that mocks database behaviour: materializing objects so that they have a version value, caching the original objects (to mock a store), modifying the version value when updating, comparing the version values with the original ones, throwing an exception when a version differs, and maybe more. All this code is potentially subject to bugs and, worse, may differ subtly from what happens in reality.
you'll get trapped in circular reasoning: you write code specifically for unit tests and then... you write unit tests to test this code. Green tests say everything is OK, but essential parts of application code are not covered.
This is only one of the many aspects of LINQ to Entities that are hard (or impossible) to mock. I am compiling a list of these differences here.

How to mock the limitations of EntityFramework's implementation of IQueryable

I am currently writing unit tests for my repository implementation in an MVC4 application. In order to mock the data context, I started by adopting some ideas from this post, but I have now discovered some limitations that make me question whether it is even possible to properly mock IQueryable.
In particular, I have seen some situations where the tests pass but the code fails in production and I have not been able to find any way to mock the behavior that causes this failure.
For example, the following snippet is used to select Post entities that fall within a predefined list of categories:
var posts = repository.GetEntities<Post>(); // Returns IQueryable<Post>
var categories = GetCategoriesInGroup("Post"); // Returns a fixed list of type Category
var filtered = posts.Where(p => categories.Any(c => c.Name == p.Category)).ToList();
In my test environment, I have tried mocking posts using the fake DbSet implementation mentioned above, and also by creating a List of Post instances and converting it to IQueryable using the AsQueryable() extension method. Both of these approaches work under test conditions, but the code actually fails in production, with the following exception:
System.NotSupportedException : Unable to create a constant value of type 'Category'. Only primitive types or enumeration types are supported in this context.
Although LINQ issues like this are easy enough to fix, the real challenge is finding them, given that they do not reveal themselves in the test environment.
Am I being unrealistic in expecting that I can mock the behavior of Entity Framework's implementation of IQueryable?
Thanks for your ideas,
Tim.
I think it is very, very hard, if not impossible, to mock Entity Framework behaviour, first and foremost because it would require profound knowledge of all the peculiarities and edge cases where LINQ to Entities differs from LINQ to Objects. As you say: the real challenge is finding them. Let me point out three main areas without claiming to be even nearly exhaustive:
Cases where Linq-to-Objects succeeds and Linq-to-Entities fails:
.Select(x => x.Property1.ToString()): LINQ to Entities does not recognize the method 'System.String ToString()' method... This applies to nearly all methods of native .NET classes and of course to your own methods. Only a few .NET methods are translated into SQL. See CLR Method to Canonical Function Mapping. As of EF 6.1, ToString is supported by the way - but only the parameterless overload.
Skip() without preceding OrderBy.
Except and Intersect: can produce monstrous queries that throw Some part of your SQL statement is nested too deeply. Rewrite the query or break it up into smaller queries.
Select(x => x.Date1 - x.Date2): DbArithmeticExpression arguments must have a numeric common type.
(your case) .Where(p => p.Category == category): Only primitive types or enumeration types are supported in this context.
Nodes.Where(n => n.ParentNodes.First().Id == 1): The method 'First' can only be used as a final query operation.
context.Nodes.Last(): LINQ to Entities does not recognize the method '...Last...'. This applies to many other IQueryable extension methods. See Supported and Unsupported LINQ Methods.
(See Slauma's comment below): .Select(x => new A { Property1 = (x.BoolProperty ? new B { BProp1 = x.Prop1, BProp2 = x.Prop2 } : new B { BProp1 = x.Prop1 }) }): The type 'B' appears in two structurally incompatible initializations within a single LINQ to Entities query... from here.
context.Entities.Cast<IEntity>(): Unable to cast the type 'Entity' to type 'IEntity'. LINQ to Entities only supports casting EDM primitive or enumeration types.
.Select(p => p.Category?.Name). Using null propagation in an expression throws CS8072 An expression tree lambda may not contain a null propagating operator. This may get fixed one day.
This question: Why does this combination of Select, Where and GroupBy cause an exception? made me aware of the fact that there are even entire query constructions that are not supported by EF, while L2O wouldn't have any trouble with them.
Cases where Linq-to-Objects fails and Linq-to-Entities succeeds:
.Select(p => p.Category.Name): when p.Category is null L2E returns null, but L2O throws Object reference not set to an instance of an object. This can't be fixed by using null propagation (see above).
Nodes.Max(n => n.ParentId.Value) with some null values for n.ParentId. L2E returns a max value, L2O throws Nullable object must have a value.
Using EntityFunctions (DbFunctions as of EF 6) or SqlFunctions.
Cases where both succeed/fail but behave differently:
Nodes.Include("ParentNodes"): L2O has no implementation of include. It will run and return nodes (if Nodes is IQueryable), but without parent nodes.
Nodes.Select(n => n.ParentNodes.Max(p => p.Id)) with some empty ParentNodes collections: both fail but with different exceptions.
Nodes.Where(n => n.Name.Contains("par")): L2O is case sensitive, L2E depends on the database collation (often not case sensitive).
node.ParentNode = parentNode: with a bidirectional relationship, in L2E this will also add the node to the nodes collection of the parent (relationship fixup). Not in L2O. (See Unit testing a two way EF relationship).
Work-around for failing null propagation: .Select(p => p.Category == null ? string.Empty : p.Category.Name): the result is the same, but the generated SQL query also contains the null check and may be harder to optimize.
Nodes.AsNoTracking().Select(n => n.ParentNode): this one is very tricky. With AsNoTracking EF creates new ParentNode objects for each Node, so there can be duplicates. Without AsNoTracking EF reuses existing ParentNodes, because now the entity state manager and entity keys are involved. AsNoTracking() can be called in L2O, but it doesn't do anything, so there will never be a difference with or without it.
And what about mocking lazy/eager loading and the effect of context life cycle on lazy loading exceptions? Or the effect of some query constructs on performance (like constructs that trigger N+1 SQL queries). Or exceptions due to duplicate or missing entity keys? Or relationship fixup?
My opinion: nobody is going to fake that. The most alarming area is where L2O succeeds and L2E fails. Now what's the value of green unit tests? It has been said before that EF can only reliably be tested in integration tests (e.g. here) and I tend to agree.
However, that does not mean that we should forget about unit tests in projects with EF as data layer. There are ways to do it, but, I think, not without integration tests.
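To make the most alarming category concrete, here is a sketch using the Post/Category types and the query from the question; the first query is green against an in-memory list and red against EF, while the rewrite below it is translatable:
var posts = repository.GetEntities<Post>();       // IQueryable<Post>
var categories = GetCategoriesInGroup("Post");    // List<Category>

// Green against a mocked/in-memory set, but against EF this throws:
// "Unable to create a constant value of type 'Category'. Only primitive types
// or enumeration types are supported in this context."
var filtered = posts.Where(p => categories.Any(c => c.Name == p.Category)).ToList();

// A translatable rewrite: compare against a list of primitives instead.
var names = categories.Select(c => c.Name).ToList();
var filteredOk = posts.Where(p => names.Contains(p.Category)).ToList();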
I have written a few unit tests with Entity Framework 6.1.3 using Moq, overriding IQueryable. Note that every DbSet you want to test this way needs to be marked as virtual. Example from Microsoft themselves:
Query:
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
namespace TestingDemo
{
[TestClass]
public class QueryTests
{
[TestMethod]
public void GetAllBlogs_orders_by_name()
{
var data = new List<Blog>
{
new Blog { Name = "BBB" },
new Blog { Name = "ZZZ" },
new Blog { Name = "AAA" },
}.AsQueryable();
var mockSet = new Mock<DbSet<Blog>>();
mockSet.As<IQueryable<Blog>>().Setup(m => m.Provider).Returns(data.Provider);
mockSet.As<IQueryable<Blog>>().Setup(m => m.Expression).Returns(data.Expression);
mockSet.As<IQueryable<Blog>>().Setup(m => m.ElementType).Returns(data.ElementType);
mockSet.As<IQueryable<Blog>>().Setup(m => m.GetEnumerator()).Returns(() => data.GetEnumerator());
var mockContext = new Mock<BloggingContext>();
mockContext.Setup(c => c.Blogs).Returns(mockSet.Object);
var service = new BlogService(mockContext.Object);
var blogs = service.GetAllBlogs();
Assert.AreEqual(3, blogs.Count);
Assert.AreEqual("AAA", blogs[0].Name);
Assert.AreEqual("BBB", blogs[1].Name);
Assert.AreEqual("ZZZ", blogs[2].Name);
}
}
}
Insert:
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;
using System.Data.Entity;
namespace TestingDemo
{
[TestClass]
public class NonQueryTests
{
[TestMethod]
public void CreateBlog_saves_a_blog_via_context()
{
var mockSet = new Mock<DbSet<Blog>>();
var mockContext = new Mock<BloggingContext>();
mockContext.Setup(m => m.Blogs).Returns(mockSet.Object);
var service = new BlogService(mockContext.Object);
service.AddBlog("ADO.NET Blog", "http://blogs.msdn.com/adonet");
mockSet.Verify(m => m.Add(It.IsAny<Blog>()), Times.Once());
mockContext.Verify(m => m.SaveChanges(), Times.Once());
}
}
}
Example service:
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Threading.Tasks;
namespace TestingDemo
{
public class BlogService
{
private BloggingContext _context;
public BlogService(BloggingContext context)
{
_context = context;
}
public Blog AddBlog(string name, string url)
{
var blog = _context.Blogs.Add(new Blog { Name = name, Url = url });
_context.SaveChanges();
return blog;
}
public List<Blog> GetAllBlogs()
{
var query = from b in _context.Blogs
orderby b.Name
select b;
return query.ToList();
}
public async Task<List<Blog>> GetAllBlogsAsync()
{
var query = from b in _context.Blogs
orderby b.Name
select b;
return await query.ToListAsync();
}
}
}
Source: https://learn.microsoft.com/en-us/ef/ef6/fundamentals/testing/mocking
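One caveat about the query example: the async service method (GetAllBlogsAsync) needs extra setup, because the mock must also behave as IDbAsyncEnumerable. The linked article defines TestDbAsyncQueryProvider/TestDbAsyncEnumerator helpers for exactly this; roughly:
// Additional wiring for async LINQ operators (ToListAsync etc.); the
// TestDbAsyncEnumerator/TestDbAsyncQueryProvider classes come from the linked
// Microsoft article and are not shown here.
mockSet.As<IDbAsyncEnumerable<Blog>>()
    .Setup(m => m.GetAsyncEnumerator())
    .Returns(new TestDbAsyncEnumerator<Blog>(data.GetEnumerator()));

mockSet.As<IQueryable<Blog>>()
    .Setup(m => m.Provider)
    .Returns(new TestDbAsyncQueryProvider<Blog>(data.Provider));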

Entity Framework using Generic Predicates

I use DTOs to map between my business and Entity Framework layers via the Repository pattern.
A standard call would look like:
public IClassDTO Fetch(Guid id)
{
var query = from s in _db.Base.OfType<Class>()
where s.ID == id
select s;
return query.First();
}
Now I wish to pass in filtering criteria from the business layer so I tried
public IEnumerable<IClassDTO> FetchAll(ISpecification<IClassDTO> whereclause)
{
var query = _db.Base.OfType<Class>()
.AsExpandable()
.Where(whereclause.EvalPredicate);
return query.ToList().Cast<IClassDTO>();
}
The Call from the business layer would be something like
Specification<IClassDTO> school =
new Specification<IClassDTO>(s => s.School.ID == _schoolGuid);
IEnumerable<IClassDTO> testclasses = _db.FetchAll(school);
The problem I am having is that the .Where clause on the EF query cannot be inferred from the usage. If I use concrete types in the expression then it works fine, but I do not want to expose my business layer to EF directly.
Try making FetchAll generic on the entity class instead, like this:
public IEnumerable<T> FetchAll<T> (Expression<Func<T,bool>> wherePredicate)
where T:IClassDTO //not actually needed
{
var query = _db.Base.OfType<T>()
.AsExpandable()
.Where(wherePredicate);
return query;
}
Pass in school.EvalPredicate instead. FetchAll doesn't appear to need to know about the whole specification; it just needs the predicate, right? If you need to cast the results to IClassDTO, do that after you have them in a list.
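Under those assumptions a call site could look like this sketch (Class, School.ID and IClassDTO are the question's types; the predicate is built over the concrete entity so EF can translate it):
// Sketch only: build the predicate over the concrete EF entity type.
Expression<Func<Class, bool>> bySchool = s => s.School.ID == schoolGuid;

IEnumerable<IClassDTO> testClasses = _db
    .FetchAll(bySchool)        // T is inferred as Class
    .ToList()                  // materialize first...
    .Cast<IClassDTO>();        // ...then cast to the DTO interface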