I've been reading up on alternatives to Entity Framework, so far I've seen Dapper, OrmLite, NPoco, PetaPoco and Massive.
And they ALL look like ADO.NET with a different name to me. They operate by executing SQL queries, specified in plain text, just like ADO.NET.
I'm sure if you delved deeper into them you'd find some differences, but am I missing something, or are they really just slight variations on ADO.NET?
Your question starts out talking about Entity Framework, but then it segues into ADO.NET. Those are not the same thing.
Entity Framework is an ORM (object relational mapping) framework, like the other packages you mention in your first paragraph. You're right that they all do essentially the same thing: They let you read and write data from a database by using strongly typed C# objects instead of more abstract concepts like DataTable and DataRow. The syntaxes are different for each, and you may find some easier to work with than others. You might also be interested in looking at some of the speed benchmarks that are out there for the different frameworks.
It's also important to note that ORMs don't always require you to write plain-text SQL; they take care of generating that themselves. For example, something like this works in PetaPoco and NPoco:
public class Holiday
{
    public DateTime Date { get; set; }
    public string Name { get; set; }
}

using (var db = new Database("MyDB"))
{
    // No SQL here -- it gets generated for you
    IEnumerable<Holiday> holidays = db.Query<Holiday>("");
}
ADO.NET is the underlying database access technology used by .NET, so anything that talks to a database is going to use that at some point. All of these ORMs rely on ADO.NET.
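To make this concrete, here is a minimal sketch of how a micro-ORM like Dapper sits directly on top of ADO.NET: it is just extension methods on ADO.NET's own IDbConnection interface. The connection string and the Holidays table are illustrative assumptions.

using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper; // adds Query<T>() and friends to IDbConnection

using (var connection = new SqlConnection("Data Source=.;Initial Catalog=MyDB;Integrated Security=True"))
{
    // You still write the SQL yourself, but Dapper handles opening the
    // connection and mapping each row to a Holiday object.
    IEnumerable<Holiday> holidays =
        connection.Query<Holiday>("SELECT Date, Name FROM Holidays");
}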
Related
I'm looking to implement Entity Framework version 4.3.1 in my existing project, which doesn't currently use EF. The database is already developed and is currently accessed using ADO.NET. In this case, how do I start working with EF: Database First or Code First?
Even when a database already exists I still use the Code First approach, mapping the tables using annotations, because the domain ends up far more organized than in the EDMX file. If there are many tables, the visual EDMX designer can become all but useless, since the design surface will be overcrowded with entities and connections all over the place.
In two steps you can begin with this approach:
1) Create a domain model class, Customer for example, and map it to your table using data annotations:
[Table("tbl_cust")]
public class Customer
{
[Key]
[Column("cust_id")]
public int CustomerId { get; set; }
[Column("cust_name")]
public string Name { get; set; }
// Add other properties below
}
2) Create a context class deriving from DbContext and add a DbSet<T> property for each model. We have only one in our case, so:
public class MyApplicationContext : DbContext
{
    public MyApplicationContext() : base("name=ConnectionStringName") { }

    public DbSet<Customer> Customers { get; set; }
}
Now, anywhere in your code, you can instantiate the derived DbContext class and run queries using LINQ:
var _db = new MyApplicationContext();
var customer = _db.Customers.Where(c => c.CustomerId == 37).FirstOrDefault();
Don't forget to add a reference to the EntityFramework assembly using NuGet.
Good Luck.
Since your database already exists the obvious choice is Database first. If the database is designed with common sense it (mostly) works great.
I think the question is whether you want to use the EF Designer to visualize your database or not. Since you are looking at EF 4.3.1 (in fact you should be looking at EF5, not 4.3.1 - EF5 is the latest version) I assume you don't care about the designer. In this case you could use EF Power Tools to reverse engineer your database. This will create a set of classes that match your database. Note that since the database has already been created, EF will not be able to detect changes in your classes (as opposed to databases created by Code First, where additional information is stored in the database and EF is able to tell whether the model has changed). Make sure to read this blog post - it contains a lot of details you may find helpful in making the decision.
If you care about being able to see your model in the designer, you can just use VS to reverse engineer the DB. If you use VS2012 you will get EF5 and DbContext by default. The difference from using Code First is that instead of EF building the model it needs based on your classes, the model is saved in the edmx file that is part of your project (and used to generate code for you).
I'm trying to understand the best architecture for my MVC2 site.
As I have been experimenting with getting data in and out of a database with Entity Framework, I am beginning to realize that the simple domain models I have constructed so far do not map to all the needs of my planned views. So I am considering following the accepted answer to this question: Why Two Classes, View Model and Domain Model?.
But there seems to be redundancy with little payoff that I can perceive between the domain models and the EF models, and I can hardly understand the conceptual difference. I do NOT have a requirement to switch data sources down the road, and I do not foresee the need to switch my ORM solution either.
QUESTION:
If I follow this pattern then, since I am using Entity Framework, shouldn't I just use my EF entities to serve directly as the domain models? (note: I haven't thought through the "how" of that, but answers there are welcome too.) Or am I still advised to manage a separate set of domain-models?
It seems you've got some redundancy here. Reading your paragraph:
But there seems to be redundancy with little payoff that I can perceive between the domain models and the EF models, and I can hardly understand the conceptual difference.
I would argue that there is no real difference between the EF Model and your Domain Model. In the projects I create, my EF Model is my Domain model.
However, my Domain model classes are not the same as my ViewModels. A Domain model class might contain data that is of no interest to the View, or the View might need information that is calculated from the data in the model. A simple example might be:
public class Session // Domain model (and EF model)
{
    public int Id { get; set; }
    public DateTime Start { get; set; }
    public int DurationInMinutes { get; set; }
}

public class SessionViewModel // The viewmodel :p
{
    public DateTime Start { get; set; }
    public int DurationInMinutes { get; set; }

    public DateTime End
    {
        get
        {
            return Start.Add(TimeSpan.FromMinutes(DurationInMinutes));
        }
    }
}
In this example I'm interested in displaying the actual end time in my View, but I have no interest in storing it in the database, as that might lead to data discrepancies (DurationInMinutes + Start might not equal End if data is corrupted upon saving).
When I first started coding this way, I ended up doing a lot of manual work mapping my Domain models to ViewModels and back. AutoMapper changed all that :) Google it, or NuGet it, and it will make your life a whole lot easier :)
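As a minimal sketch, assuming AutoMapper's classic static API (removed in later versions in favor of MapperConfiguration), the mapping for the example above might look like this:

// One-time configuration, e.g. in Application_Start.
// Start and DurationInMinutes are copied automatically by name matching;
// the read-only End property is computed by the ViewModel itself.
Mapper.CreateMap<Session, SessionViewModel>();

// Wherever you prepare data for the View
// (loadedSession is a hypothetical Session fetched from the database):
SessionViewModel viewModel = Mapper.Map<SessionViewModel>(loadedSession);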
Hope this helps a little. Please comment if I'm totally missing the point :)
Update to address the comment
DataAnnotations would then be applied to the ViewModel, because normally DataAnnotations denote how the data should be displayed and validated in the View.
For instance, you would put the [Required] attribute on public DateTime Start { get; set; } so that the Html helper extensions automatically validate your markup according to your data annotations.
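A minimal sketch of that annotated ViewModel (the error message text is just an illustration):

using System.ComponentModel.DataAnnotations;

public class SessionViewModel
{
    [Required(ErrorMessage = "A start time is required")]
    public DateTime Start { get; set; }

    // ... remaining properties as in the example above
}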
By definition (by some, anyway) the Domain Model should not contain any code or logic related to your business rules. The Domain Model is simply responsible for holding the data in a fairly raw form matching your datastore. Personally I like to put some sort of Service layer in between that is responsible for fetching the data and returning ViewModels, and for doing the reverse as well.
The ultimate goal is to avoid referencing your domain model directly from your controllers.
Of course, all these points have to be weighed against the size of the project. It's certainly overkill to do all this just to mock up a test site - but in any other project where you'll actually be deploying something that might scale, expand or otherwise change, it's a good practice to get used to, as it seriously increases your ability to do so.
Another key point of this approach is that you are forced to abstract your operations into smaller and more manageable units, enabling better and more precise unit tests.
So it turns out that I am the last person to discover the fundamental flaw in Microsoft's Entity Framework when implementing TPT (Table Per Type) inheritance.
Having built a prototype with 3 subclasses, the base table/class consisting of 20+ columns and the child tables consisting of ~10 columns each, everything worked beautifully, and I continued to work on the rest of the application having proved the concept. Now the time has come to add the other 20 subtypes and, OMG, I've just started looking at the SQL generated for a simple select, even though I'm only interested in accessing the fields on the base class.
This page has a wonderful description of the problem.
Has anyone gone into production using TPT and EF? Are there any workarounds that will mean I won't have to:
a) Convert the schema to TPH (which goes against everything I try to achieve with my DB design - urrrgghh!)?
b) Rewrite with another ORM?
The way I see it, I should be able to add a reference to a stored procedure from within EF (probably using EFExtensions) containing the TSQL that selects only the fields I need. Even reusing the code EF generates for the monster UNION/JOIN inside the SP would at least prevent that SQL from being generated on every call - not something I would intend to do, but you get the idea.
The killer I've found is this: when I'm selecting a list of entities linked to the base table (but the entity I'm selecting is not a subclass table), and I want to filter by the PK of the base table, I use .Include("BaseClassTableName") to let me filter with x => x.BaseClass.PK == 1 and access other properties - and it performs the same monster SQL generation there too.
I can't use EF4, as I'm limited to the .NET 2.0 runtime with .NET 3.5 SP1 installed.
Has anyone got any experience of getting out of this mess?
This seems a bit confused. You're talking about TPT, but when you say:
The way I see it, I should be able to add a reference to a stored procedure from within EF (probably using EFExtensions) containing the TSQL that selects only the fields I need. Even reusing the code EF generates for the monster UNION/JOIN inside the SP would at least prevent that SQL from being generated on every call - not something I would intend to do, but you get the idea.
Well, that's Table per Concrete Class mapping (using a proc rather than a table, but still, the mapping is TPC...). The EF supports TPC, but the designer doesn't. You can do it in code-first if you get the CTP.
Your preferred solution of using a proc will cause performance problems if you restrict queries, like this:
var q = from c in Context.SomeChild
        where c.SomeAssociation.Foo == foo
        select c;
The DB optimizer can't see through the proc implementation, so you get a full scan of the results.
So before you tell yourself that this will fix your results, double-check that assumption.
Note that you can always specify custom SQL for any mapping strategy with ObjectContext.ExecuteStoreQuery.
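For instance, a minimal sketch with EF4's ObjectContext (BaseOnly is a hypothetical POCO whose property names match the selected columns):

// Bypasses EF's SQL generation entirely; columns in the result set are
// mapped to properties of BaseOnly by name.
var baseRows = context.ExecuteStoreQuery<BaseOnly>(
    "SELECT Id, Name FROM BaseTable WHERE Id = {0}", 1);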
However, before you do any of this, consider that, as RPM1984 points out, your design seems to overuse inheritance. I like this quote from NHibernate in Action:
[A]sk yourself whether it might be better to remodel inheritance as delegation in the object model. Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. [Your ORM] acts as a buffer between the object and relational models, but that doesn't mean you can completely ignore persistence concerns when designing your object model.
We've hit this same problem and are considering porting our DAL from EF4 to LLBLGen because of this.
In the meantime, we've used compiled queries to alleviate some of the pain:
Compiled Queries (LINQ to Entities)
This strategy doesn't prevent the mammoth queries, but the time it takes to generate the query (which can be huge) is only done once.
You can use compiled queries with Include() like so:
static readonly Func<AdventureWorksEntities, int, Subcomponent> subcomponentWithDetailsCompiledQuery =
    CompiledQuery.Compile<AdventureWorksEntities, int, Subcomponent>(
        (ctx, id) => ctx.Subcomponents
            .Include("SubcomponentType")
            .Include("A.B.C.D")
            .FirstOrDefault(s => s.Id == id));

public Subcomponent GetSubcomponentWithDetails(int id)
{
    return subcomponentWithDetailsCompiledQuery.Invoke(ObjectContext, id);
}
I am unit testing code written against the ADO.NET Entity Framework. I would like to populate an in-memory database with rows and make sure that my code retrieves them properly.
I can mock the Entity Framework using Rhino Mocks, but that would not be sufficient. I would be telling the query what entities to return to me. This would neither test the where clause nor the .Include() statements. I want to be sure that my where clause matches only the rows I intend, and no others. I want to be sure that I have asked for the entities that I need, and none that I don't.
For example:
class CustomerService
{
    ObjectQuery<Customer> _customerSource;

    public CustomerService(ObjectQuery<Customer> customerSource)
    {
        _customerSource = customerSource;
    }

    public Customer GetCustomerById(int customerId)
    {
        var customers = from c in _customerSource.Include("Order")
                        where c.CustomerID == customerId
                        select c;
        return customers.FirstOrDefault();
    }
}
If I mock the ObjectQuery to return a known customer populated with orders, how do I know that CustomerService has the right where clause and Include? I would rather insert some customer rows and some order rows, then assert that the right customer was selected and the orders are populated.
An InMemory provider is included in EF7 (pre-release).
You can use either the NuGet package, or read about it in the EF repo on GitHub.
The article http://www.codeproject.com/Articles/460175/Two-strategies-for-testing-Entity-Framework-Effort describes Effort, an Entity Framework provider that runs in memory.
You can still use your DbContext or ObjectContext classes within unit tests, without having to have an actual database.
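A minimal sketch of how Effort is typically wired up, assuming a hypothetical MyContext whose constructor passes the DbConnection through to DbContext:

// Each transient connection gets its own empty in-memory database.
var connection = Effort.DbConnectionFactory.CreateTransient();

using (var context = new MyContext(connection))
{
    // Arrange: seed the in-memory database.
    context.Customers.Add(new Customer { Name = "Test" });
    context.SaveChanges();

    // Act/Assert: query context.Customers as if it were a real DB.
}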
A better approach here might be to use the Repository pattern to encapsulate your EF code. When testing your services you can use mocks or fakes. When testing your repositories you will want to hit the real DB to ensure that you are getting the results you expect.
There is not currently an in-memory provider for EF, but if you take a look at Highway.Data it has a base abstraction interface and an InMemoryDataContext.
Testing Data Access and EF with Highway.Data
Yes, there is at least one such provider - SQLite. I have used it a bit and it works. You can also try SQL Server Compact. It's an embedded database and has EF providers too.
Edit:
SQLite has support for in-memory databases. All you need is to specify a connection string like "Data Source=:memory:;Version=3;New=True;". If you need an example, you may look at SharpArchitecture.
I am not familiar with Entity Framework and the ObjectQuery class but if the Include method is virtual you can mock it like this:
// Arrange
var customerSourceStub = MockRepository.GenerateStub<ObjectQuery<Customer>>();
var customers = new Customer[]
{
    // Populate your customers as if they were coming from the DB
};
customerSourceStub
    .Stub(x => x.Include("Order"))
    .Return(customers);
var sut = new CustomerService(customerSourceStub);

// Act
var actual = sut.GetCustomerById(5);

// Assert
Assert.IsNotNull(actual);
Assert.AreEqual(5, actual.CustomerID);
You could try SQL Server Compact but it has some quite wild limitations:
SQL Server Compact does not support SKIP expressions in paging queries when it is used with the Entity Framework
SQL Server Compact does not support entities with server-generated keys or values when it is used with the Entity Framework
No outer joins, collate, modulo on floats, aggregates
In EF Core there are two main options for doing this:
SQLite in-memory mode allows you to write efficient tests against a provider that behaves like a relational database.
The InMemory provider is a lightweight provider with minimal dependencies, but it does not always behave like a relational database.
I am using SQLite and it supports all the queries I need to run against the Azure SQL production database.
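A minimal sketch of the SQLite in-memory approach in EF Core (MyContext, Customers and Customer are illustrative assumptions):

using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;

// The in-memory database lives only as long as this open connection.
var connection = new SqliteConnection("DataSource=:memory:");
connection.Open();

var options = new DbContextOptionsBuilder<MyContext>()
    .UseSqlite(connection)
    .Options;

using (var context = new MyContext(options))
{
    context.Database.EnsureCreated(); // build the schema from the model

    context.Customers.Add(new Customer { Name = "Test" });
    context.SaveChanges();
    // Run the queries under test against a real relational engine.
}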
When using the Entity Framework, does ESQL perform better than Linq to Entities?
I'd prefer to use Linq to Entities (mainly because of the strong type checking), but some of my other team members are citing performance as a reason to use ESQL. I would like to get a full picture of the pros and cons of using either method.
The most obvious differences are:
Linq to Entities is strongly typed code including nice query comprehension syntax. The fact that the “from” comes before the “select” allows IntelliSense to help you.
Entity SQL uses traditional string based queries with a more familiar SQL like syntax where the SELECT statement comes before the FROM. Because eSQL is string based, dynamic queries may be composed in a traditional way at run time using string manipulation.
The less obvious key difference is:
Linq to Entities allows you to change the shape of, or "project", the results of your query into any shape you require with the "select new {...}" syntax. Anonymous types, new in C# 3.0, are what make this possible.
Projection is not possible using Entity SQL, as you must always return an ObjectQuery<T>. In some scenarios it is possible to use ObjectQuery<object>; however, you must work around the fact that .Select always returns ObjectQuery<DbDataRecord>. See the code below...
ObjectQuery<DbDataRecord> query = DynamicQuery(context,
    "Products",
    "it.ProductName = 'Chai'",
    "it.ProductName, it.QuantityPerUnit");

public static ObjectQuery<DbDataRecord> DynamicQuery(MyContext context, string root, string selection, string projection)
{
    ObjectQuery<object> rootQuery = context.CreateQuery<object>(root);
    ObjectQuery<object> filteredQuery = rootQuery.Where(selection);
    ObjectQuery<DbDataRecord> result = filteredQuery.Select(projection);
    return result;
}
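For contrast, a minimal sketch of the equivalent shape-changing projection in LINQ to Entities (the Products set and its properties are illustrative assumptions):

// Projection into an anonymous type -- the "select new {...}" syntax
// described above, with compile-time checking intact.
var query = from p in context.Products
            where p.ProductName == "Chai"
            select new { p.ProductName, p.QuantityPerUnit };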
There are other more subtle differences described by one of the team members in detail here and here.
ESQL can also generate some particularly vicious SQL. I had to track down a problem with such a query that used inherited classes, and I found that my piddly little 4-line ESQL query got translated into a 100,000-character monster of a SQL statement.
Doing the same thing with Linq, the compiled code was much more manageable - let's say 20 lines of SQL.
Plus, as other people have mentioned, Linq is strongly typed, although very annoying to debug without the Edit and Continue feature.
Entity-SQL (eSQL) allows you to do things such as dynamic queries more easily than LINQ to Entities. However, if you don't have a scenario that requires eSQL, I would be hesitant to rely on it over LINQ because it will be much harder to maintain (e.g. no more compile-time checking, etc).
I believe LINQ allows you to precompile your queries as well, which might give you better performance. Rico Mariani blogged about LINQ performance a while back and discusses compiled queries.
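A minimal sketch of such a precompiled query (the MyEntities context and Customers set are illustrative assumptions):

// Compiled once; later invocations skip the expensive
// LINQ-to-store-SQL translation step.
static readonly Func<MyEntities, int, IQueryable<Customer>> customersById =
    CompiledQuery.Compile<MyEntities, int, IQueryable<Customer>>(
        (ctx, id) => ctx.Customers.Where(c => c.CustomerId == id));

// Usage:
// var customer = customersById(context, 37).FirstOrDefault();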
There's a nice graph showing performance comparisons here:
Entity Framework Performance Explored
Not much difference is seen between ESQL and LINQ to Entities, but the overall difference between using entities and direct queries is significant.
Entity Framework uses two layers of object mapping (compared to a single layer in LINQ to SQL), and the additional mapping has performance costs. At least in EF version 1, application designers should choose Entity Framework only if the modeling and ORM mapping capabilities can justify that cost.
The more code you can cover with compile-time checking, the better - that's something I'd place a higher premium on than performance. Having said that, at this stage I'd probably lean towards ESQL, not just because of the performance but because it's also (at present) a lot more flexible in what it can do. There's nothing worse than using a technology stack that doesn't have a feature you really, really need.
The Entity Framework doesn't support things like custom properties or custom queries (for when you really need to tune performance), and it does not function the same as LINQ to SQL (i.e. there are features that simply don't work in the Entity Framework).
My personal impression of the Entity Framework is that there is a lot of potential, but it's probably a bit too "rigid" in its implementation to use in a production environment in its current state.
For direct queries I'm using Linq to Entities; for dynamic queries I'm using ESQL. Maybe the answer isn't either/or, but and/also.