Assert.AreEqual unit testing for DbContext entities - entity-framework

I wish to unit test that my business logic is loading the correct data, by loading an entity via the business logic and comparing it to an entity loaded directly from the DbContext.
Assert.AreEqual fails, I'm guessing because the entities are loaded as tracked.
I thought that I could possibly use AsNoTracking(), but it didn't work.
Is there a way of "unwrapping" the entity from entity framework to a POCO?
I've read about disabling proxy creation, but is this the only option?
I'm hoping there is something similar (although I realise it's a completely different concept) to ko.utils.unwrapObservable() in the Knockout JavaScript library.

That is a strange integration test (it is not a unit test at all, because it uses the database) - it should be enough to simply define a static expectation instead of loading the entity again from the database. Dynamic expectations are more error prone and can hide issues.
To make it work you must override Equals to compare data, not references. Disabling proxy creation will not help, because you will still have one reference coming from your business logic and a different reference coming from the test's context (unless you share the context, but in that case the test would be even stranger).
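For illustration, a minimal sketch of such a value-based Equals override, assuming a hypothetical Customer entity with Id and Name properties (placeholder names, not from the question):

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }

    // Compare by data, not by reference, so two instances loaded through
    // different contexts can still be "equal" in an assertion.
    public override bool Equals(object obj)
    {
        var other = obj as Customer;
        if (other == null) return false;
        return Id == other.Id && Name == other.Name;
    }

    public override int GetHashCode()
    {
        return Id.GetHashCode();
    }
}

With such an override in place, Assert.AreEqual(expected, actual) compares the entities' data instead of their references.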

Related

How do we switch from Telerik Open Access to anything else?

Our company has been using Telerik Open Access for years. We have multiple projects using it, including some in development and some in production that need to be updated. Because Telerik no longer updates or supports Open Access, we are having a variety of problems. We've got users who have to go to another workstation because we can't get Open Access on their computers, and we've got projects where we can't add or update tables because the visual designer doesn't work in modern Visual Studio versions. So my question is: how do we convert these, and what do we convert them to?
I've heard of Microsoft's Entity Framework, and we used to just call stored procedures instead of having a separate data project. Obviously our clients aren't going to pay us for hours of switching, so we need something that works quickly. How do we convert our existing Telerik Open Access project to Entity Framework, straight SQL queries, or some other data layer option?
Here's an example of what we have currently.
A separate Visual Studio project that acts as our data layer where all the code was created by Telerik Open Access's visual designer:
We then have a DataAccess.cs class in our main project that creates the instance of the data layer:
Then we call it by using linq statements in the main project:
I don't know OA hands-on, so I can only put my two-cents in.
I'm afraid this isn't going to be an easy transition. I've yet to see the first seamless transition from one data layer implementation to another (and I've seen a few). The main cause for this is that IQueryable is a leaky abstraction. That is, the data layer exposes IQueryables, but
it doesn't support all features of the interface, and
it adds its own features, and
it's got its own interpretation of how to implement the features that are supported.
This means that if you're going to port your data layer to EF, you may notice that some LINQ queries throw runtime errors because they contain unsupported .Net methods/properties (for instance DateTime.Date), or perform worse -- or better, or return data in subtly different shapes or sorting orders.
Some important OA features that are missing in EF:
Runtime mappings (EF's mapping is static)
Bulk update/delete functions (with EF: only by using third-party libraries)
Second-level cache
Profiler and Tuning Advisor
Streaming of large objects
Mixing database-side and client-side evaluation of LINQ queries (EF6: only db-evaluation)
On the other hand, the basic architectures of OA and EF don't seem to be too different. They both -
support POCOs
work with simple navigation properties
have a fluent mapping API (see the sketch after this list)
support LINQ through IQueryable<T>, where T is an entity class.
most importantly: both revolve around the Unit of Work and Repository patterns (for EF: DbContext and DbSet, respectively).
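For readers who haven't used it, here is a minimal sketch of EF6's fluent mapping API. The DockReport entity, its properties, and the ShippingContext name are assumptions for illustration only:

using System;
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration;

public class DockReport
{
    public int Id { get; set; }
    public DateTime CreatedOn { get; set; }
}

// Fluent mapping for the entity, kept in its own configuration class.
public class DockReportMap : EntityTypeConfiguration<DockReport>
{
    public DockReportMap()
    {
        ToTable("DockReports");
        HasKey(d => d.Id);
        Property(d => d.CreatedOn).IsRequired();
    }
}

public class ShippingContext : DbContext
{
    public DbSet<DockReport> DockReports { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new DockReportMap());
    }
}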
All in all it's going to be a delicate process of converting, trying and testing. One good thing is that your current DAL is already abstracted to a certain extent. Another is that the syntax doesn't even look too different. Where you have ...
dbContext.Add(newDockReport);
dbContext.SaveChanges();
... using EF this would become ...
dbContext.DockReports.Add(newDockReport);
dbContext.SaveChanges();
With EF-core it wouldn't even have to change one bit.
But that's another important choice to make: EF6 or EF-core? EF6 is stable, mature, feature-rich, but at the end of its life cycle (a phrase you've probably come to hate by now). EF-core, on the other hand, is the future, but is presently struggling to get bug-free in its major functions, not yet as feature-rich as EF6 (and some features will never return, although other new features are great improvements compared to EF6). At the moment, I'd be wary of using EF-core for enterprise applications, but I'm pretty sure that a year from now this is not an issue any more.
Whichever way you go, I'd start the process by writing large amounts of integration tests, if you haven't done so already. The advantage of integration tests is that you can avoid the hassle of mocking either framework first (which isn't trivial).
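For example, a characterization test along these lines (MSTest; ShippingContext and DockReport are the hypothetical names from the mapping sketch above, and the query is made up for illustration) can pin down the current DAL's behaviour before you start porting:

using System;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DockReportQueryTests
{
    [TestMethod]
    public void RecentDockReports_KeepTheSameShapeAfterMigration()
    {
        using (var db = new ShippingContext())
        {
            // Record the behaviour of the current data layer so the same
            // assertions can be run against the EF-based replacement.
            var reports = db.DockReports
                .Where(r => r.CreatedOn >= new DateTime(2017, 1, 1))
                .OrderByDescending(r => r.CreatedOn)
                .ToList();

            Assert.IsTrue(reports.Count > 0);
            CollectionAssert.AllItemsAreUnique(reports.Select(r => r.Id).ToList());
        }
    }
}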
I have never heard of a tool that can do that. Expect it to take time.
You have to figure out how to do it by yourself:
(for the exact, safer way to migrate)
First, you must have a simple page that uses your data layer; it will be your test page. Pick a simple one whose LINQ logic you can adapt.
Add a LINQ to SQL class: right-click > Add > LINQ to SQL Classes.
Drop only the tables this page actually uses onto the designer, and add the associations if needed.
In the test page, create a new LINQ to SQL data context.
Use it, fixing the types and renaming whatever has to be renamed.
Fix errors until it compiles.
Stock up on coffee and anything else that can boost your brain.
Add tables to your context to match the ones you had in Telerik Data Access.
Debug for days.
Come back with a new question on how to fix a new issue twice a day.
To help with the dbContext.Add() difference between the two frameworks, you could use this extension method in EF 6.x:
public static void Add<T>(this DbContext db, T entityToCreate) where T : class
{
    // Resolve the matching DbSet<T>, add the entity, and save immediately.
    db.Set<T>().Add(entityToCreate);
    db.SaveChanges();
}
Then you could do:
db.Add(obj);
See Gert Arnold's answer: In Entity Framework, how do I add a generic entity to its corresponding DbSet without a switch statement that enumerates all the possible DbSets?

Serializing Entity Framework objects for Azure Cache

We use Azure Caching directly (and not through one of the available Entity Framework wrappers). Apparently, for distributed caching, we need to serialize the objects. Unfortunately, this causes issues with lazy-loaded DbContext-based proxies used for navigation properties.
I see we can use a custom serializer in order to map proxies to empty collections (if not loaded) or to normal objects (if loaded), but I am not sure about the implementation. One possible implementation can be based on the one used by WCF, but I am not sure Azure works the same way.
The ideal solution (and that's why I point to ProxyDataContractResolver) would be one where, when serialization happens:
If the navigation property has already been loaded, the data would be serialized as if it were a normal collection,
and if it is not loaded, it won't be serialized (I would like lazy loading to work again after deserialization in the latter case, but it's acceptable if it doesn't).
Has anyone manually fixed that problem in an elegant way?
Thanks in advance!
I will presume that if you are wanting to cache EF objects, you don't require lazy loading or change tracking on those entities.
I believe that both of those are enabled through object proxies that will cause serialization issues (since you don't want to serialize the proxy).
If you disable the property DbContext.Configuration.ProxyCreationEnabled, then serialization of the actual object, not the proxy, should work fine. This is typically required when returning POCO objects over WCF, but is likely the same for other serialization scenarios such as this.
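A minimal sketch of that approach, assuming a hypothetical BloggingContext with Posts and Comments (placeholder names, not from the question):

// Requires: using System.Data.Entity; using System.Linq;
int postId = 1; // example key
using (var db = new BloggingContext())
{
    // Return plain POCO instances instead of change-tracking/lazy-loading
    // proxies, so the object graph can be serialized for the cache.
    db.Configuration.ProxyCreationEnabled = false;
    db.Configuration.LazyLoadingEnabled = false;

    var post = db.Posts
        .Include(p => p.Comments)   // explicitly load what should be cached
        .First(p => p.Id == postId);

    // 'post' is now a plain object graph, safe to hand to the cache serializer.
}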
If you detach the EF entity from the DbContext before serializing it, that disables lazy loading, so your custom serializer won't try to serialize anything that isn't already part of the entity's graph.
Then when you get it back from the cache, if you attach it to a new (identical) DbContext, that should reenable lazy loading.
(Caveat: once you detach the entity from the context, any new queries that include that same object will create a new, attached, copy, so you will need to code with some care to avoid running into trouble with multiple potentially-different versions of the same object running around. But that said, this should let you do what you want.)
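A rough sketch of the detach/re-attach flow, under the same hypothetical BloggingContext assumptions as above:

// Requires: using System.Data.Entity; using System.Linq;
Post post;
using (var db = new BloggingContext())
{
    post = db.Posts.First(p => p.Id == postId);

    // Detach before serializing so lazy loading is switched off and the
    // serializer only sees what has already been loaded into the graph.
    db.Entry(post).State = EntityState.Detached;
}
// ... serialize 'post' and store it in the cache ...

// Later, after reading it back from the cache:
using (var db = new BloggingContext())
{
    // Attach it to a fresh, identical context so it takes part in
    // change tracking (and related data can be loaded) again.
    db.Posts.Attach(post);
}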

Understanding EF Code-First eager-loading

I've been working with EF Code First since EF 4.1 went live, more than a year ago, and I feel pretty comfortable working with it now. I'm used to custom entity validators, overriding .SaveChanges() to modify some of its behaviours, and to some non-trivial concepts like mapping to non-table DB objects. But there's one part of EF that remains cloudy to me: context.Configuration.LazyLoadingEnabled = false;.
I understand the basics: LINQ queries will be sent to the database as soon as they are enumerated, dependent collections won't get loaded if I don't explicitly ask for them, yadda yadda yadda.
What I would love to understand is:
In what situations should I disable lazy loading? And why?
What are the practical benefits and/or drawbacks of disabling it?
Any additional clarification is welcome.
EF v1 did not support lazy loading. When lazy loading was added in the second release (EF4), it could cause problems when porting apps written with EF v1 to EF4, because suddenly a lot more queries would be sent to the database where this had not happened before. Therefore the easiest way to make an EF v1 app work on EF4 was to disable lazy loading.
Another interesting thing is to take a look at how lazy loading is implemented - EF will dynamically create a type derived from your entity type and will add some code to handle lazy loading. This means that EF is not actually using your type but the type derived from your type. This is usually OK but sometimes may cause issues.
Finally, sometimes you may want to control when queries are actually sent to the database (e.g. due to latency when using SQL Azure, it is usually better/faster to send one query returning a bigger result instead of a lot of queries returning smaller results). With lazy loading it often happens that people don't realize they are hitting the database hard when it is not necessary or not efficient. One thing to note here is that you can mix both worlds - .Include() will force loading of related entities regardless of the lazy loading setting.
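For illustration, a small sketch contrasting the two, assuming a hypothetical Blog/Post model (not from the question):

// Requires: using System.Data.Entity; using System.Linq;
int blogId = 1; // example key
using (var db = new BloggingContext())
{
    // Lazy loading (proxies enabled, LazyLoadingEnabled = true):
    // touching the Posts collection fires a second query.
    var blog = db.Blogs.First(b => b.Id == blogId);
    var postCount = blog.Posts.Count;   // extra query happens here

    // Eager loading: one query brings back the blog together with its posts,
    // regardless of the lazy loading setting.
    var blogWithPosts = db.Blogs
        .Include(b => b.Posts)
        .First(b => b.Id == blogId);
}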
You may read more about this here: http://thedatafarm.com/blog/data-access/a-look-at-lazy-loading-in-ef4/ and here: http://msdn.microsoft.com/en-us/magazine/hh205756.aspx

How can I do TDD and Unit Testing for EF Code First entity declaration and mapping?

I am about to retro-code a suite of unit tests for a new MVC4 app. Because nearly all my code in the EF data project is copied straight from code generated by the VS2012 EF Reverse Engineering tool, I have decided to skip unit tests in this part of the application, unless I can somehow automatically generate them. I have no business logic here and I would like to first concentrate my efforts on ensuring better QA on the business side. But, I would like to know how one goes about first TDD, and second, just unit testing in general here.
Let's assume I don't have to or want to mock the database yet. I have often been quite happy unit testing against a test DB copy before, but with more conventional, home rolled ORM.
So, do I start with a test that instantiates my derived DbContext, then derive a DbContext until that test passes? Then, test for instantiating an entity, and create an entity, going on to test for a DbSet of those entities, which test will also include checking if the table is created. All is still well and good, if not bloody laborious, but my head asplode as soon as I start thinking of even a hint of testing my fluent mapping classes for all my entities. What now?
Testing against a database is not unit testing - it is integration testing, and integration testing usually doesn't follow the granularity of unit testing. Why is it not unit testing? Because a unit test exercises a single self-contained unit - all external dependencies are faked. When your test spans both your code and the database, it tests the dependency as well, which makes it an integration test.
All EF-dependent code should be tested with integration tests. It doesn't make sense to unit test Microsoft's code. For example, take your question about mapping. A correct unit test for mapping does something like this:
Preparation: Prepare a compiled model with your entity mapping configuration
Execution: Create a DbContext from the compiled model and get the metadata workspace from the context
Validation: Assert that the metadata workspace contains your mapped entity
Now you can repeat a similar test for every property you want to map in that entity.
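A rough sketch of what such a check could look like in EF6, assuming a hypothetical MyContext and Customer entity (this reads the metadata workspace from your own derived context rather than compiling the model separately):

using System.Data.Entity.Core.Metadata.Edm;
using System.Data.Entity.Infrastructure;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MappingTests
{
    [TestMethod]
    public void Customer_IsMappedWithNameProperty()
    {
        using (var ctx = new MyContext())
        {
            var workspace = ((IObjectContextAdapter)ctx).ObjectContext.MetadataWorkspace;

            // The conceptual model (CSpace) should contain the mapped entity type.
            var entity = workspace.GetItems<EntityType>(DataSpace.CSpace)
                .SingleOrDefault(t => t.Name == "Customer");

            Assert.IsNotNull(entity);
            Assert.IsTrue(entity.Properties.Any(p => p.Name == "Name"));
        }
    }
}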
That is obviously framework code which should already work - these tests should be done by people developing the framework.
In your case, simply make integration tests against a local database which try to load, save, update and delete an entity, and assert the expectations you have of these operations. If anything in the mapping is wrong, at least one of these tests will fail.
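For example, a minimal CRUD round-trip sketch against a local test database (again assuming the hypothetical MyContext and Customer; adjust to your own model):

[TestMethod]
public void Customer_CanBeSavedAndReloaded()
{
    int id;
    using (var ctx = new MyContext())
    {
        var customer = new Customer { Name = "Integration Test" };
        ctx.Customers.Add(customer);
        ctx.SaveChanges();
        id = customer.Id;
    }

    using (var ctx = new MyContext())
    {
        // Reload in a fresh context so the test really exercises the mapping.
        var reloaded = ctx.Customers.Find(id);
        Assert.IsNotNull(reloaded);
        Assert.AreEqual("Integration Test", reloaded.Name);

        ctx.Customers.Remove(reloaded);
        ctx.SaveChanges();
    }
}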

What is the correct way to manage dependency injection for Entity Framework ObjectContext in ASP.NET MVC controllers?

In my MVC controllers, I'm using an IoC container (Ninject), but am not sure how best to use it when it comes to the Entity Framework ObjectContext.
Currently, I'm doing something like:
using (var context = new MyObjectContext())
{
    var stuff = m_repository.GetStuff(context);
}
This is the best way to manage from the point of view of keeping the database connection open for the shortest time possible.
If I were to create the ObjectContext via Ninject on a per request basis, this obviously keeps the database connection open for too long.
Also the above code would become...
var stuff = m_repository.GetStuff(m_myObjectContext);
(And when would I dispose of the context...?)
Should I be creating a factory for the ObjectContext and passing that in via DI? This would loosen the coupling, but does it really help with testability if there is no easy means of maintaining an interface for the ObjectContext (that I know of)?
Is there a better way? Thanks
"This is the best way to manage from the point of view of keeping the database connection open for the shortest time possible. If I were to create the ObjectContext via Ninject on a per request basis, this obviously keeps the database connection open for too long."
Entity Framework will close the connection directly after the execution of each query (except when supplying an open connection from the outside), so your argument for doing things like this does not hold.
In the past I used to let my business logic (my command handlers, to be precise) have control over the context (create, commit, and dispose it), but the downside is that you need to pass this context on to all other methods and all dependencies. When the application logic gets more complex, this results in less readable, less maintainable code.
For that reason I moved to a model where the unit of work (your MyObjectContext) is created, committed, and disposed outside the control of the business logic. This allows you to inject the unit of work into all dependencies and reuse the same unit of work for all objects. The downside is that this makes your DI configuration a bit harder. Some things you need to make sure of:
The unit of work must be created per web request or within a certain scope (see the Ninject sketch after this list).
The unit of work must be disposed at the end of the request or scope (although it is probably not a problem when the DbContext is not disposed, since the underlying connection is closed and DbContext does not implement a finalizer).
You need to explicitly commit the unit of work, but you can't do this at the end of the web request, since at that point you have no idea whether it is safe to commit (since you don't want to commit when your business logic threw an exception, but at the end of the request there is no way to correctly detect if this actually happened).
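As a rough sketch of the per-request part with Ninject (this uses the Ninject.Web.Common package's InRequestScope; MyObjectContext is the asker's context, the module name is made up):

using Ninject.Modules;
using Ninject.Web.Common;   // provides InRequestScope()

public class DataModule : NinjectModule
{
    public override void Load()
    {
        // One ObjectContext (unit of work) per web request; in request scope
        // Ninject deactivates (and disposes) it when the request ends.
        Bind<MyObjectContext>().ToSelf().InRequestScope();
    }
}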
One tip I can give you is to model the business logic in the system around command handlers, since this allows you to define a single decorator that handles the transactional behavior (committing the unit of work and perhaps even running everything in a database transaction) at a single point. This decorator can be wrapped around each handler in the system.
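A minimal sketch of that idea (the interface and class names are illustrative, not from the question; the real implementation would depend on your command types):

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// Decorator that commits the unit of work after the wrapped handler succeeds.
public class TransactionCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly MyObjectContext context;
    private readonly ICommandHandler<TCommand> decorated;

    public TransactionCommandHandlerDecorator(
        MyObjectContext context, ICommandHandler<TCommand> decorated)
    {
        this.context = context;
        this.decorated = decorated;
    }

    public void Handle(TCommand command)
    {
        this.decorated.Handle(command);

        // Only commit when the business logic did not throw.
        this.context.SaveChanges();
    }
}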
I must admit that I have no idea how to register generic types and generic decorators with Ninject, but you'll probably get an answer quickly when asking here at Stack Overflow.