I want to use Sieve and DTOs in the same project - entity-framework

I want to use Sieve in my Web API project. I am using Data Transfer Objects (DTOs) instead of directly sending my database entity models out over the API. I am using Automapper to quickly map the db model objects to the DTOs. A sample API endpoint:
[HttpGet]
public async Task<ActionResult<IEnumerable<RdtoSetup>>> GetSetups([FromQuery] SieveModel sieveModel)
{
    if (_context.Setups == null)
    {
        return NotFound();
    }

    var setups = _context.Setups.AsNoTracking();
    var results = _mapper.Map<IEnumerable<RdtoSetup>>(await setups.ToListAsync());
    var filtered = _sieveProcessor.Apply(sieveModel, results.AsQueryable());
    return Ok(filtered);
}
As you can see from the code, I am unable to leverage the database for filtering, sorting, etc. I have to load all the results into an IEnumerable<RdtoSetup> before I can apply the Sieve model for filtering.
What am I doing wrong? Is there a way to get the benefit of Sieve without having to load complete tables from db into memory?
Solution from #Steve-py
Updated code that still uses my Automapper profile:
[HttpGet]
public async Task<ActionResult<IEnumerable<RdtoSetup>>> GetSetups([FromQuery] SieveModel sieveModel)
{
    if (_context.Setups == null)
    {
        return NotFound();
    }

    var setups = _context.Setups.AsQueryable();
    var results = _mapper.ProjectTo<RdtoSetup>(setups);
    var filtered = _sieveProcessor.Apply(sieveModel, results);
    return Ok(await filtered.ToListAsync());
}

When working with Automapper and EF, you should leverage Automapper's ProjectTo rather than Map. This will keep the mapping operation within the IQueryable so that the results don't need to be materialized and the Sieve operation should flow down to the query. Using ProjectTo you also don't need to worry about AsNoTracking or eager loading relationships with Include:
var results = _context.Setups.ProjectTo<RdtoSetup>(mapperConfiguration);
var filtered = await _sieveProcessor.Apply(sieveModel, results).ToListAsync();
Where mapperConfiguration is an Automapper Config for converting from your Entity(ies) to DTO(s).
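For reference, a minimal sketch of such a configuration (the profile body is illustrative; only the entity/DTO names come from the question):
using AutoMapper;
using AutoMapper.QueryableExtensions;

var mapperConfiguration = new MapperConfiguration(cfg =>
{
    // Map the EF entity to the DTO; ProjectTo translates this map
    // into the SELECT clause of the generated SQL.
    cfg.CreateMap<Setup, RdtoSetup>();
});

// Filtering/sorting applied afterwards (e.g. by Sieve) still runs in the database.
IQueryable<RdtoSetup> results = _context.Setups.ProjectTo<RdtoSetup>(mapperConfiguration);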
I haven't worked with Sieve before for sorting models, but if that doesn't translate through ProjectTo, then the alternative would be to derive your Sieve Model off the Entity rather than the DTO and do the mapper.Map last:
var setups = _context.Setups.AsNoTracking();
var filtered = _sieveProcessor.Apply(sieveModel, setups);
var results = _mapper.Map<IEnumerable<RdtoSetup>>(await filtered.ToListAsync());
return Ok(results);
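Whichever type Sieve ends up being applied to, remember that Sieve only filters and sorts properties that are explicitly allowed. A minimal sketch, assuming Sieve's attribute-based configuration and illustrative property names:
using Sieve.Attributes;

public class RdtoSetup
{
    // Only properties marked CanFilter/CanSort are honoured by the SieveProcessor.
    [Sieve(CanFilter = true, CanSort = true)]
    public int Id { get; set; }

    [Sieve(CanFilter = true, CanSort = true)]
    public string Name { get; set; }
}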

Related

How to control in-memory generated key values in Entity Framework Core?

Background
I am developing a Web application based on ASP.NET Core 2.1 and EntityFrameworkCore 2.1.1. Besides other types of tests, I am using the InMemory provider for Web API testing, to have some kind of integration testing of the Web API.
Since the test data is rather convoluted, I have generated a JSON file from a slice of a migrated database, and the tests import data from this JSON file when the in-memory tests kick in.
The issue
Read-only tests are working just fine. However, when I insert something for an entity that has an identity (ValueGenerated == ValueGenerated.OnAdd), it automatically gets a value of 1 (all PKs are ints). This makes sense, since whatever generator EF is using behind the scenes to generate those values was not instructed to start from a certain value.
However, this clearly does not work for inserts whose generated key value already exists.
Things I have tried
[current working solution] Shifting the key values - upon deserializing all objects, I shift the IDs of all involved entities (they get "large" values). This works properly, but it is error prone (e.g. some entities have static IDs, and I have to ensure that all foreign keys / navigation properties are properly defined), and it is kind of slow right now, since I rely on reflection to identify the keys / navigation properties that require shifting.
Configuring the value generator to start from a large value:
TestStartUp.cs
services.AddSingleton<IValueGeneratorCache, ValueGeneratorCache>();
services.AddSingleton<ValueGeneratorCacheDependencies, ValueGeneratorCacheDependencies>();
Ids generation functionality
public const int StartValue = 100000000;
class ResettableValueGenerator : ValueGenerator<int>
{
    private int _current;

    public override bool GeneratesTemporaryValues => false;

    public override int Next(EntityEntry entry)
    {
        return Interlocked.Increment(ref _current);
    }

    // Restart the counter so newly generated keys land above the seeded ones.
    public void Reset(int startValue) => _current = startValue;
}

public static void ResetValueGenerators(CustomApiContext context, IValueGeneratorCache cache, int startValue = StartValue)
{
    // All int primary keys that are store-generated on add.
    var allKeyProps = context.Model.GetEntityTypes()
        .Select(e => e.FindPrimaryKey().Properties[0])
        .Where(p => p.ClrType == typeof(int));
    var keyProps = allKeyProps.Where(p => p.ValueGenerated == ValueGenerated.OnAdd);

    foreach (var keyProperty in keyProps)
    {
        var generator = (ResettableValueGenerator)cache.GetOrAdd(
            keyProperty,
            keyProperty.DeclaringEntityType,
            (p, e) => new ResettableValueGenerator());

        generator.Reset(startValue);
    }
}
When debugging I can see that my entities are being iterated, so the reset is applied.
Pushing the data into the in-memory database
private void InitializeSeeding(IServiceScope scope)
{
    using (scope)
    {
        var services = scope.ServiceProvider;
        try
        {
            var context = services.GetRequiredService<CustomApiContext>();

            // this pushes deserialized data + static data into the database
            InitDbContext(context);

            var valueGeneratorService = services.GetRequiredService<IValueGeneratorCache>();
            ResetValueGenerators(context, valueGeneratorService);
        }
        catch (Exception ex)
        {
            var logger = services.GetService<ILogger<Startup>>();
            logger.LogError(ex, "An error occurred while seeding the database.");
        }
    }
}
Actual insert
This is done using a generic repository, but boils down to this:
Context.Set<T>().Add(entity);
Context.SaveChanges();
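For reference, a minimal sketch of how the test context might be registered against the in-memory provider (assuming the Microsoft.EntityFrameworkCore.InMemory package; the database name is illustrative):
// In TestStartUp.ConfigureServices: contexts using the same store name
// share one in-memory database, so the seeded data is visible to queries.
services.AddDbContext<CustomApiContext>(options =>
    options.UseInMemoryDatabase("ApiTestDb"));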
Question: How to control in-memory generated key values in Entity Framework Core?

How to support OData query syntax but return non-Edm models

Exposing my EF models to an API always seemed wrong. I'd like my API to return a custom entity model to the caller but use EF on the back.
So I may have a PersonRestEntity and a controller for CRUD ops against that, with a Person EF code-first entity behind it, and map values between the two.
When I do this, I can no longer use the following to allow ~/people?$top=10 etc. in the URL:
[EnableQuery]
public IQueryable<Person> Get(ODataQueryOptions<Person> query) { ... }
Because that exposes Person, which is a private DB implementation detail.
How can I have my cake and eat it?
I found a way. The trick is not to just return the IQueryable from the controller; you need to materialise the query first. This doesn't mean materialising the whole set into RAM, since the query still runs at the database, but by explicitly applying the query options and materialising the results you can return mapped entities afterwards.
Define this action, specifying the DbSet entity type:
public async Task<HttpResponseMessage> Get(ODataQueryOptions<Person> oDataQuery)
And then apply the query manually to the DbSet<Person> like so:
var queryable = oDataQuery.ApplyTo(queryableDbSet);
Then use the following to run the query and turn the results into the collection of entities you publicly expose:
var list = await queryable.ToListAsync(cancellationToken);
return list
    .OfType<Person>()
    .Select(p => MyEntityMapper.MapToRestEntity(p));
Then you can return the list in an HttpResponseMessage as normal.
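Putting those pieces together, the full action might look roughly like this (a sketch only; dbContext.People is an assumed DbSet name, and MyEntityMapper.MapToRestEntity is the mapping helper from above):
public async Task<HttpResponseMessage> Get(ODataQueryOptions<Person> oDataQuery, CancellationToken cancellationToken)
{
    // Apply $filter/$orderby/$skip/$top to the EF queryable; the query
    // still executes at the database when materialised below.
    IQueryable queryable = oDataQuery.ApplyTo(dbContext.People);

    var list = await queryable.ToListAsync(cancellationToken);

    // Map the materialised EF entities to the publicly exposed REST entities.
    var entities = list
        .OfType<Person>()
        .Select(p => MyEntityMapper.MapToRestEntity(p))
        .ToList();

    return Request.CreateResponse(HttpStatusCode.OK, entities);
}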
That's it, though obviously where the property names between the entities don't match or are absent on either class, there are going to be some issues, so it's probably best to ensure the properties you want to expose to query options are named the same in both entities.
Else, I guess you could choose not to support filters and just allow $top and $skip, imposing a default order yourself. This can be achieved like so, making sure to order the queryable first, then skip, then top:
IQueryable queryable = people
    .GetQueryable(operationContext)
    .OrderBy(r => r.Name);

if (oDataQuery.Skip != null)
    queryable = oDataQuery.Skip.ApplyTo(queryable, new System.Web.OData.Query.ODataQuerySettings());

if (oDataQuery.Top != null)
    queryable = oDataQuery.Top.ApplyTo(queryable, new System.Web.OData.Query.ODataQuerySettings());

var list = await queryable.ToListAsync(operationContext.CreateToken());

return list
    .OfType<Person>()
    .Select(i => this.BuildPersonEntity(i));
More information:
If you simply use the non-generic ODataQueryOptions you get
Cannot create an EDM model as the action 'Get' on controller 'People'
has a return type 'System.Net.Http.HttpResponseMessage' that does not
implement IEnumerable
And other errors occur under different circumstances.

Mocking entity framework inner join

I need to unit test an inner join method from my Data Access Layer.
My DAL looks like this:
public class MyDAL : IMyDAL
{
    private MyContext context;

    public MyDAL(MyContext Context)
    {
        this.context = Context;
    }

    public IEnumerable<Parent> Read(int childId)
    {
        var query = from parent in context.Parent
                    join child in context.Child on parent.Id equals child.Id
                    where child.Id == childId
                    select parent;
        return query.ToList();
    }
}
And I want to unit test the Read method by mocking Entity framework, following this link http://msdn.microsoft.com/en-us/data/dn314429.aspx.
[TestMethod]
public void ReadMethod()
{
    var data = new List<Parent>
    {
        new Parent { Id = 20 },
    }.AsQueryable();

    var data2 = new List<Child>
    {
        new Child { Id = 8 },
    }.AsQueryable();

    var mockSetPar = new Mock<DbSet<Parent>>();
    var mockSetChild = new Mock<DbSet<Child>>();

    mockSetPar.As<IQueryable<Parent>>().Setup(m => m.Provider).Returns(data.Provider);
    mockSetPar.As<IQueryable<Parent>>().Setup(m => m.Expression).Returns(data.Expression);
    mockSetPar.As<IQueryable<Parent>>().Setup(m => m.ElementType).Returns(data.ElementType);
    mockSetPar.As<IQueryable<Parent>>().Setup(m => m.GetEnumerator()).Returns(data.GetEnumerator());

    mockSetChild.As<IQueryable<Child>>().Setup(m => m.Provider).Returns(data2.Provider);
    mockSetChild.As<IQueryable<Child>>().Setup(m => m.Expression).Returns(data2.Expression);
    mockSetChild.As<IQueryable<Child>>().Setup(m => m.ElementType).Returns(data2.ElementType);
    mockSetChild.As<IQueryable<Child>>().Setup(m => m.GetEnumerator()).Returns(data2.GetEnumerator());

    var customDbContextMock = new Mock<MyContext>();
    customDbContextMock.Setup(x => x.Parent).Returns(mockSetPar.Object);
    customDbContextMock.Setup(x => x.Child).Returns(mockSetChild.Object);

    var myDAL = new MyDAL(customDbContextMock.Object);
    var actual = myDAL.Read(8);

    Assert.IsNotNull(actual);
}
The result actual is empty because the join hasn't been mocked, so it returns nothing.
How can I mock the join so that it returns a value?
Thank you
In-memory tests of DB interactions might be misleading, because the LINQ to Entities capabilities are a subset of the LINQ to Objects capabilities, so you can write a test that is green even though the query will always throw an exception in production.
This is also wrong on a purely conceptual level. The DAL is code that lies on the boundary of two systems, the app and the db. Its responsibility is to integrate them. So if you isolate those integration components, your test becomes meaningless, as it mocks away the core responsibility of the SUT.
To test the query logic, you have to use a database provider that behaves like the production one, which means you need integration tests. A useful solution is to use a SQLite in-memory database. It will behave like the real database in most scenarios covered by Entity Framework, yet perform almost as fast as mocks based on in-memory collections in unit tests.
You can consider SQLite as a database mock on steroids if you like.
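A minimal sketch of that approach, assuming EF Core with the Microsoft.EntityFrameworkCore.Sqlite provider and a MyContext that accepts DbContextOptions (both are assumptions, since the question's code does not show the provider):
using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;

// The in-memory database lives as long as the connection stays open.
using (var connection = new SqliteConnection("DataSource=:memory:"))
{
    connection.Open();

    var options = new DbContextOptionsBuilder<MyContext>()
        .UseSqlite(connection)
        .Options;

    using (var context = new MyContext(options))
    {
        context.Database.EnsureCreated(); // create the schema

        // Seed ids that actually satisfy the join (parent.Id equals child.Id).
        context.Parent.Add(new Parent { Id = 8 });
        context.Child.Add(new Child { Id = 8 });
        context.SaveChanges();

        var myDAL = new MyDAL(context);
        var actual = myDAL.Read(8);
        // assert on 'actual', knowing the query ran through a real relational provider
    }
}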
I think you have already noticed that mocking EF queries is time-consuming and brittle. My suggestion: do not mock it. You can hide as much of the data-access logic as you can behind repository interfaces, which are easy to mock:
public interface IParentRepository
{
    IEnumerable<Parent> GetParentsOfChild(int childId);
}
Then the test will look like:
[TestMethod]
public void ReadMethod()
{
    int childId = // get sample id
    var expected = // get sample parents

    var repositoryMock = new Mock<IParentRepository>();
    repositoryMock.Setup(r => r.GetParentsOfChild(childId))
                  .Returns(expected);

    // assumes MyDAL has been refactored to depend on IParentRepository
    var myDAL = new MyDAL(repositoryMock.Object);
    var actual = myDAL.Read(childId);

    repositoryMock.VerifyAll();
    CollectionAssert.AreEqual(expected, actual);
}
If you want to verify the query implementation, then the best way to do it is an acceptance/integration test that involves a real database. Keep in mind that analyzing the generated IQueryable is not enough to be sure your query will work in a real environment. E.g. you can use the operator Last() with an IQueryable, but the query will fail to be translated to SQL by Entity Framework.
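For example (an illustration of that point, assuming EF6 against a relational provider):
// Throws NotSupportedException when EF6 tries to translate it to SQL,
// yet passes happily when the DbSet is faked with an in-memory list:
var last = context.Parent.OrderBy(p => p.Id).Last();

// A translatable equivalent that works against the real database:
var first = context.Parent.OrderByDescending(p => p.Id).First();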

Problem using Include() when serializing Entity Framework 4 POCO classes with WCF

I have a WCF service with an Entity Framework 4 model, using POCO classes that are serialized and sent over to client applications. I have LazyLoadingEnabled and ProxyCreationEnabled set to false, and I'm using LINQ to Entities to query an entity and return it in a List<> to the client. Everything works perfectly when I don't use Include():
public List<TBLTable1> GetTBLTable1(string pCode)
{
    using (PcFactoryEntities oPcFactoryDB = new PcFactoryEntities())
    {
        oPcFactoryDB.ContextOptions.ProxyCreationEnabled = false;
        oPcFactoryDB.ContextOptions.LazyLoadingEnabled = false;

        var oRS = oPcFactoryDB.TBLTable1
            .Where(c => c.Code == pCode).ToList();

        XmlObjectSerializer serializer = new DataContractSerializer(typeof(TBLTable1));
        serializer.WriteObject(new XmlTextWriter(Console.Out) { Formatting = Formatting.Indented }, oRS[0]);

        return oRS;
    }
}
After the LINQ query, I use the serializer to simulate the serialization process that happens when the POCO class is sent to the client, and it works great. However, when I add an Include() to load one of the navigation lists for the class, it starts serializing all of Table2's navigation lists as if LazyLoadingEnabled were set to true, and it goes on forever, probably serializing the whole database!
public List<TBLTable1> GetTBLTable1(string pCode)
{
    using (PcFactoryEntities oPcFactoryDB = new PcFactoryEntities())
    {
        oPcFactoryDB.ContextOptions.ProxyCreationEnabled = false;
        oPcFactoryDB.ContextOptions.LazyLoadingEnabled = false;

        var oRS = oPcFactoryDB.TBLTable1
            .Include("TBLTable2")
            .Where(c => c.Code == pCode).ToList();

        XmlObjectSerializer serializer = new DataContractSerializer(typeof(TBLTable1));
        serializer.WriteObject(new XmlTextWriter(Console.Out) { Formatting = Formatting.Indented }, oRS[0]);

        return oRS;
    }
}
Why is this happening? Shouldn't LazyLoadingEnabled set to false apply to the manually included class as well, leaving all of its navigation lists null, as happens with all of the other navigation lists of Table1? Is there a way to fix this so I can return Table1 with some navigation lists filled in, but with their own nested navigation lists set to null?
Thanks
Instead of trying to serialize the entity directly, try projecting to a DTO and serializing that. I agree that what you're seeing is bizarre behaviour, but it could be that the EF internal graph is taking over when you're serializing the entities; if you serialize a DTO, EF should not intervene.
E.g.:
var dto = oPcFactoryDB.TBLTable1
    .Where(x => x.Code == pCode)
    .Select(x => new SpecialisedDTO
    {
        PropertyOne = x,
        PropertyTwo = x.TBLTable2
    })
    .ToList();
And then serialize that.
Since you're projecting, you don't need to eager load; EF will grab what it needs based on the query you have provided.
It's usually good practice in N-Tier situations to transmit DTOs over the wire, rather than the pure POCO entities.
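Note that the SpecialisedDTO above still embeds the entities themselves. If the serializer should have no entity graph to walk at all, the DTO can be fully flattened; a sketch with illustrative, hypothetical names:
[DataContract]
public class Table1Dto // hypothetical flat DTO
{
    [DataMember]
    public string Code { get; set; }

    [DataMember]
    public int Table2Count { get; set; }
}

// Project straight into the flat shape; only scalar values get serialized.
var dtos = oPcFactoryDB.TBLTable1
    .Where(x => x.Code == pCode)
    .Select(x => new Table1Dto
    {
        Code = x.Code,
        Table2Count = x.TBLTable2.Count()
    })
    .ToList();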
Do you have a navigation property on TBLTable1 to TBLTable2? The .Include() is used to include entities that are linked via FK relationships, and .Include() is passed the name of the navigation property.
So if you have a Person entity with a navigation property to an Addresses entity called PersonAddresses, you would execute the following to get the person and their addresses:
var p = dbContext.Person
    .Where(x => x.Id == id)
    .Include("PersonAddresses")
    .FirstOrDefault();

Entity Framework using Generic Predicates

I use DTOs to map between my business and Entity Framework layers via the Repository pattern.
A standard call would look like:
public IClassDTO Fetch(Guid id)
{
    var query = from s in _db.Base.OfType<Class>()
                where s.ID == id
                select s;
    return query.First();
}
Now I wish to pass in filtering criteria from the business layer, so I tried:
public IEnumerable<IClassDTO> FetchAll(ISpecification<IClassDTO> whereclause)
{
    var query = _db.Base.OfType<Class>()
        .AsExpandable()
        .Where(whereclause.EvalPredicate);
    return query.ToList().Cast<IClassDTO>();
}
The call from the business layer would be something like:
Specification<IClassDTO> school =
    new Specification<IClassDTO>(s => s.School.ID == _schoolGuid);
IEnumerable<IClassDTO> testclasses = _db.FetchAll(school);
The problem I am having is that the .Where clause on the EF query cannot be inferred from the usage. If I use concrete types in the expression then it works fine, but I do not want to expose my business layer to EF directly.
Try making FetchAll into a generic method instead, like this:
public IEnumerable<T> FetchAll<T>(Expression<Func<T, bool>> wherePredicate)
    where T : IClassDTO // not actually needed
{
    var query = _db.Base.OfType<T>()
        .AsExpandable()
        .Where(wherePredicate);
    return query;
}
Pass in school.EvalPredicate instead. FetchAll doesn't appear to need to know about the whole specification; it just needs the predicate, right? If you need to cast the results to IClassDTO, do that after you have them in a List.
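A hypothetical call site under that design, with the predicate typed against the EF entity so it can be translated, and the cast done last:
// The lambda converts to Expression<Func<Class, bool>> at the call site;
// casting to the DTO interface happens only after materialisation.
IEnumerable<IClassDTO> testclasses = _db
    .FetchAll<Class>(c => c.School.ID == _schoolGuid)
    .ToList()
    .Cast<IClassDTO>();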