Differences between System.Linq.Dynamic, EntitySQL and Expression Trees - entity-framework

I am currently working on a large project that uses Entity Framework extensively. Part of the functionality we have implemented is dynamic querying (Filter/Sort) of the various data models based on user-supplied filters.
To achieve this I ended up using System.Linq.Dynamic, which allows me, through various means, to create string-based filters like "SomeProperty.StartsWith(#P0)" and so on, and then pass these strings (and their attendant parameters) to the Dynamic Linq extension methods for IQueryable<T> (Where, etc.) so that they get executed against the database and everyone is happy.
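For illustration, here is a minimal sketch of that string-based approach, assuming the classic System.Linq.Dynamic extensions (which use @0-style placeholders rather than the #P0 shown above) and an illustrative Customer entity:

using System.Linq;
using System.Linq.Dynamic;   // the classic Dynamic LINQ sample library

public class Customer
{
    public int ID { get; set; }
    public string Name { get; set; }
}

public static class CustomerFilters
{
    public static IQueryable<Customer> FilterByNamePrefix(IQueryable<Customer> customers, string prefix)
    {
        // The predicate is an ordinary string; the value is supplied separately
        // and substituted for the @0 placeholder when the string is parsed.
        return customers.Where("Name.StartsWith(@0)", prefix);
    }
}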
I didn't know any other way to do this at the time except for a vague notion of Expression Trees, and to be honest I just could not get my head around them - I spent several weeks poring over a decompilation of a component that used expressions to implement dynamic querying, and I balked :)
Plus it felt like I was reinventing the wheel, when the functionality I needed had effectively already been written, by far cleverer people than myself, in the System.Linq.Dynamic extensions.
Now, the current code all works quite well as a generalised solution for filtering, sorting, etc. on any of my entities, and I'm happy enough with it. However, as I became more and more familiar with EF I started to come across things like:
ObjectQuery
EntityClient
EntitySQL
Expression Trees
And I started to wonder: given that System.Linq.Dynamic is nearly 6 years old, and hasn't really had anything done with it in that time, am I missing out on anything? Or have I missed some fundamental point?
Should I bite the bullet and move my codebase over to use EntitySQL? (I assume this is like the spiritual successor to System.Linq.Dynamic, or am I wrong?)
Or should I go back and learn how to use Expression Trees because they are the way of the future/all the cool kids do it, etc.? I'm not a fan of change for change's sake, and I like code that works, but I am worried that at some point in the future string-based dynamic LINQ becomes a dead end and I'm stuck using it.
If anyone can help to clarify the differences between System.Linq.Dynamic and EntitySQL, or can identify any good reason for moving to Expression Trees I'd really appreciate it.
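For comparison, the Expression Tree route for the same kind of StartsWith filter looks roughly like the sketch below; the helper name is illustrative and not from any library:

using System;
using System.Linq;
using System.Linq.Expressions;

public static class ExpressionFilters
{
    public static IQueryable<T> WhereStartsWith<T>(IQueryable<T> source, string propertyName, string prefix)
    {
        // Build e => e.<propertyName>.StartsWith(prefix) by hand.
        ParameterExpression parameter = Expression.Parameter(typeof(T), "e");
        MemberExpression property = Expression.Property(parameter, propertyName);
        var startsWith = typeof(string).GetMethod("StartsWith", new[] { typeof(string) });
        MethodCallExpression call = Expression.Call(property, startsWith, Expression.Constant(prefix));
        var predicate = Expression.Lambda<Func<T, bool>>(call, parameter);
        return source.Where(predicate);
    }
}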

We are using Dynamic Linq extensively in our project...
It's clean and it works well, but its code is very complicated if you ever want to peep into it or change anything.
One of the problems I found when using a combination of Dynamic Linq and EF 6 is query caching: EF 6 uses query caching to perform faster retrieval of data, and the Where clause that Dynamic Linq builds does not take advantage of this feature, so we had to change the Where to use query caching.
This is just one small example to show that Dynamic Linq was not designed with the newer EF versions in mind.
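Roughly speaking, the issue looks like the sketch below: EF 6 keys its translation cache on the query's expression tree, so a value embedded as a constant (which, as I understand it, is what the classic Dynamic Linq parser produces for literals) gets a separate cache entry per value, whereas a captured variable is translated to a parameter and the cached translation is reused. The Customer type here is illustrative.

using System.Linq;

public class Customer
{
    public int ID { get; set; }
    public string Name { get; set; }
}

public static class CachingExamples
{
    public static IQueryable<Customer> NotCacheFriendly(IQueryable<Customer> customers)
    {
        // The literal becomes a ConstantExpression, so each distinct value
        // produces a new cache entry and new SQL.
        return customers.Where(c => c.Name == "Chai");
    }

    public static IQueryable<Customer> CacheFriendly(IQueryable<Customer> customers, string name)
    {
        // The captured variable is translated to a parameter, so one cached
        // translation serves every value of 'name'.
        return customers.Where(c => c.Name == name);
    }
}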
Dynamic Linq is a wonderful solution if you want to work with untyped collections like IQueryable, but it's very difficult to maintain.
I am hoping you are working in a typed environment (IQueryable<T>); otherwise you would essentially need to modify Dynamic Linq to really take advantage of EF 6.

Related

Complex mappings with ORM

I have used Entity Framework Code First once, and although it is easy to deal with, I feel like it forces you to fight your OOP principles, as I tend to break many habits and design decisions just so Code First can understand my entities and map/read them from the database, for example:
You can't use ReadOnlyCollections
You can't have a collection of a Complex Type (Value Type)
Forced to use a hack to make enumerations work (most of our market's customers still have Windows XP)
and I can name a few more. What I would like to know is whether NHibernate supports the things mentioned above on Windows XP, plus other things (like whether it can work with SQL CE), without forcing you to change your design just to make it work.
I would like to hear from an NHibernate professional/expert on this.
Not sure about ReadOnlyCollections in particular, as NHibernate requires using interfaces and then uses its own collection implementation (which you can replace). But you can always map a private field and use a projection.
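A rough sketch of the private-field idea, assuming a collection mapped with access="field" in hbm.xml (or the equivalent field accessor in mapping-by-code); the class and member names are illustrative:

using System.Collections.Generic;
using System.Collections.ObjectModel;

public class Order
{
    // NHibernate populates this field directly (access="field" in the mapping),
    // so the public surface can stay read-only.
    private IList<OrderLine> _lines = new List<OrderLine>();

    public virtual ReadOnlyCollection<OrderLine> Lines
    {
        get { return new ReadOnlyCollection<OrderLine>(_lines); }
    }

    public virtual void AddLine(OrderLine line)
    {
        _lines.Add(line);
    }
}

public class OrderLine
{
    public virtual int Id { get; set; }
}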
The others work out of the box.

Perl DAL Design Questions

Recently I've been working on some Perl projects and I'm a very novice Perl programmer. I've been experimenting with DBIx::Class and so far I'm really pleased with the flexibility and the ease of use. I'm curious, though. I come from a .NET background and it seems like we spend a lot of time abstracting our DAL to a certain degree. Is this a good idea with a language like Perl?
Where I want to get to shortly is the ability to start mocking my DAL so I can write unit tests for tasks. Right now, though, I'm struggling with how the overall structure and design of the application should look.
Re: Relationship of the ORM within the application...
Hopefully this is the kind of answer you are looking for...
With most web app frameworks in the "scripting" world (i.e. Perl, Ruby, Python, PHP), most of the time I've seen the business logic implemented at the ORM object level. E.g. in a Rails app it's at the ActiveRecord level; if you are using DBIx::Class it would be at the Result-class level.
More concretely, in the case of DBIx::Class, if you have a table named VENDOR there would be a class called MySchema::Result::Vendor which represents a single row in the table VENDOR. Simply add your business methods to this class.
One disadvantage of this approach is that it ties your business logic to the ORM class, which can make (unit) testing more difficult. One solution to this is to use a lightweight database for unit tests (e.g. SQLite), and an ORM like DBIx::Class will facilitate switching between the two. Of course, this won't work if you rely on SQL features which are not implemented in SQLite.
Another approach is to place your business logic methods into a Moose role. Then those methods can be composed into either the DBIx::Class Result class or into a mock object for testing. I can elaborate with an example if you'd like.
One big assumption of the above is that your business object = one row in the database. If this is not the case (i.e. your business object spans more than one table), then you'll probably want to create a "shell" or container object which has as instance members each of the constituent ORM objects. Fortunately, Moose has a nice facility for delegating methods (search for Moose delegation and the handles attribute of instance member declarations), so it is relatively easy to make a composite business object out of two or more ORM objects. Again, I can give you an example of this if you'd like.
HTH
I used to work in perl projects for the web long ago. But after working with things such as Django, perl's tools like DBI, etc now look to me rather rudimentary and outdated. Have a look at the django ORM for example, it's elegant and very productive to use, you can bypass it if your query is too complex or the ORM gets in the way...
These days I'd go python or ruby for that kind of projects.
For one liners, small text parsing or sysadmin stuff I still love to use small perl snippets. But I'm more into DRY than TMTOWTDI for more than a few lines of code these days.

Rules of thumb for writing "queries" using ADO.NET Entity Framework

I’m currently working on a prototype of a medium-size web application, and I thought that it would be good to also experiment with Entity Framework. The problem is that the major part of the application is not the data layer and logic, so I don't have much time to play with Entity Framework. On the other hand, the database schema is quite simple.
One of the problems I’m facing is that I cannot find a consistent way to "write queries". As far as I can tell, there are four "interfaces" for the job:
LINQ to Entities
LINQ to Entities using LINQ extension methods
Entity SQL
Query builder
OK, the first two are essentially the same, but it’s good to use just one for maintenance and consistency.
I’m mostly puzzled by the fact that none of them seems to be complete and the most general. I often find myself cornered and using some ugly looking combination of several of them. My guess is that Entity SQL is the most general one, but writing queries using strings feels like a step back. The main reason I’m experimenting with something like Entity Framework is that I like the compile time checking.
Some other random thoughts / issues:
I often also use the ObjectQuery.Include() method, but again it takes a string. Is this the only way?
When to use ObjectQuery.Execute() (vs. ToList())? Does it actually execute the query?
Should I execute queries as soon as possible (e.g. using ToList()), or should I not care and just leave execution to the first enumeration that comes along?
Are ObjectQuery.Skip() and ObjectQuery.Take() available only as extension methods? Is there a better way to do paging? It’s 2009 and almost every web application deals with paging.
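For what it's worth, the usual pattern with the standard Queryable operators looks something like this sketch (the Product entity and paging arguments are illustrative):

using System.Linq;

public class Product
{
    public int ID { get; set; }
    public string Name { get; set; }
}

public static class Paging
{
    public static IQueryable<Product> GetPage(IQueryable<Product> products, int pageIndex, int pageSize)
    {
        // Skip/Take need a stable ordering to page deterministically.
        return products
            .OrderBy(p => p.Name)
            .Skip(pageIndex * pageSize)
            .Take(pageSize);
    }
}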
Overall, I understand there are many difficulties when implementing an ORM, and often one has to compromise. On the other hand, direct database access (e.g. ADO.NET) is plain and simple and has a well-defined interface (tabular results, data readers), so all code - no matter who writes it, and when - is consistent. I don't want to be faced with too many choices whenever I write a database query. It's too tedious, and more than likely different developers will come up with different ways.
What are your rules of thumbs?
I use LINQ-to-Entities as much as possible. I also try to standardise on the lambda form, as opposed to the SQL-style query syntax. I have to admit to having had problems enforcing relationships and making compromises on efficiency just to expedite my coding of our application (e.g. Master->Child tables may need to be manually loaded), but all in all, EF is a good product.
I do use EF's .Include() method for eager loading, which, as you say, does require a string input. I find no problem with this, other than identifying the string to use, which is relatively simple. I guess if you're keen on compile-time checking of such relations, a model similar to Parent.GetChildren() might be more appropriate.
My application does require some "dynamic" queries to be performed, though. I have two ways of meeting this:
a) I create a mediator object, eg. ClientSearchMediator, which "knows" how to search for clients by name, etc. I can then put this through a SearchHandler.Search(ISearchMediator[] mediators) call (for example). This can be used to target specific data structures and sort results accordingly using LINQ-to-Entities.
b) For a looser experience, possibly as a result of a user designing their own query (using high-level tools our application provides), eSQL is ideal for this purpose. It can be made injection-safe.
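By way of illustration, parameterised eSQL along these lines is what keeps it injection-safe; the container, entity set and Client type are assumptions, not names from a real model:

using System.Data.Objects;

public static class ClientQueries
{
    // Client is assumed to be an entity type mapped in the model,
    // and MyEntities the name of the entity container.
    public static ObjectQuery<Client> SearchByNamePrefix(ObjectContext context, string namePrefix)
    {
        // The user-supplied value travels as an ObjectParameter rather than
        // being concatenated into the query text.
        var query = new ObjectQuery<Client>(
            "SELECT VALUE c FROM MyEntities.Clients AS c WHERE c.Name LIKE @prefix",
            context);
        query.Parameters.Add(new ObjectParameter("prefix", namePrefix + "%"));
        return query;
    }
}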
I don't have enough knowledge to address all of this, but I'll at least take a few stabs.
I don't know why you think ADO.NET is more consistent than Entity Framework. There are many different ways to use ADO.NET and I've definitely seen inconsistency within a single code base.
Entity Framework is currently a 1.0 release and it suffers from many 1.0 type problems (incomplete & inconsistent API, missing features, etc.).
In regards to Include, I assume you are referring to eager loading. Multiple people (outside of Microsoft) have developed solutions for getting "type safe" includes (try googling something like: Entity Framework ObjectQueryExtension Include). That said, Include is more of a hint than anything: you can't force eager loading, and you always have to remember to check IsLoaded to see whether your request was fulfilled. As far as I know, the way "Include" works is not changing at all in the next version of Entity Framework (4.0 - to ship with VS 2010).
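Those community helpers generally boil down to something like the sketch below: turn a property expression into the dotted path string that the built-in ObjectQuery<T>.Include expects (this version handles only a single-level member access, and is not any particular library's implementation):

using System;
using System.Data.Objects;
using System.Linq.Expressions;

public static class ObjectQueryExtensions
{
    public static ObjectQuery<T> Include<T, TProperty>(this ObjectQuery<T> query, Expression<Func<T, TProperty>> path)
    {
        // For a simple access like o => o.Customer this produces "Customer",
        // which is then handed to the string-based Include.
        var member = (MemberExpression)path.Body;
        return query.Include(member.Member.Name);
    }
}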
As far as executing the Linq query as soon as it's built vs. the last possible moment, that decision is situational. Personally, I would probably execute it as soon as it's built for the most part unless there was a compelling reason not to, but I can see other people going the opposite direction.
There are more mature ORMs on the market and Entity Framework isn't necessarily your best option. For the most part, you can bend Entity Framework to your will, but you may end up rolling your own implementation of features that come out of the box with other ORMs.

What are the pros/cons of returning POCO objects from a Repository rather than EF Entities?

Following the way Rob does it, I have the classes that are generated by the Linq to SQL wizard, and then a copy of those classes that are POCOs. In my repositories I return these POCOs rather than the Linq to SQL models:
return from c in DataContext.Customer
       where c.ID == id
       select new MyPocoModels.Customer { ID = c.ID, Name = c.Name };
I understand that the benefit of this is that the POCO models can be instantiated more easily, which will make my code more testable.
I'm now moving from Linq to SQL over to Entity Framework and I'm about half way through an EF book. It seems there's a lot of goodness I'm going to lose out on by returning POCOs from my repositories rather than the EF entities.
I still haven't really embraced unit testing, so I feel like I'm wasting a lot of time creating these extra POCOs and writing the code to populate them, when all I appear to be gaining is testable code, yet I'm also going to lose a lot of the benefits of EF by not being able to track my objects.
Does anyone have any advice for a relative newb to all this ORM/Repository stuff?
Anthony
Another reason people don't like the auto-generated objects (in LINQ to SQL for example) is because of their built-in "magic".
Usually the magic is invisible and you never notice it, but when you try to do things like serialize one of those objects and then deserialize it (for example when using web services) its internal connection to the data source is broken and special hacks need to be employed to "put the magic back in".
With POCOs, you don't have to worry about those sorts of things and can get a better separation between your data and service layers. The downside of course is that you have to write lots of boring POCO -> magic object and magic object -> POCO conversion code. But in the end I think it's usually worth it, especially for large or complex projects.
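The conversion code in question is usually nothing more exotic than the sketch below, reusing the MyPocoModels.Customer from the question and assuming the wizard-generated entity lives in an illustrative DataModels namespace:

public static class CustomerMapper
{
    public static MyPocoModels.Customer ToPoco(DataModels.Customer entity)
    {
        return new MyPocoModels.Customer { ID = entity.ID, Name = entity.Name };
    }

    public static DataModels.Customer ToEntity(MyPocoModels.Customer poco)
    {
        return new DataModels.Customer { ID = poco.ID, Name = poco.Name };
    }
}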
The main reason is that a lot of people like to develop their model with a specific mindset: like DDD, for instance. They might want to use a specific pattern (like Spec or State) for things like statuses (instead of enums), or they might want to use a Factory for instantiation.
OO breaks when you try to use Tables as Objects when things get more complex. Simple sites work OK - but when you get to big big things, it gets ugly.
So - as always - it depends what you think your project will turn into.
My experience is that when you start writing some complex queries the .Include method is worthless and you will find yourself either:
a) writing a lot of queries to get the data you want, or
b) abusing anonymous types to load the data in a single query and then writing a lot of code just to pass that data to your entities.
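As a sketch of what option (b) tends to look like in practice (the context, entities and POCO types here are illustrative, loosely following the MyPocoModels naming from the question):

using System.Collections.Generic;
using System.Linq;

public static class OrderLoader
{
    public static List<MyPocoModels.Order> LoadOrders(MyEntities context, int customerId)
    {
        // One query, projected into an anonymous type...
        var data = (from o in context.Orders
                    where o.CustomerID == customerId
                    select new
                    {
                        o.ID,
                        CustomerName = o.Customer.Name,
                        LineCount = o.OrderLines.Count()
                    }).ToList();

        // ...followed by the hand-written code that copies the data across.
        return data.Select(d => new MyPocoModels.Order
        {
            ID = d.ID,
            CustomerName = d.CustomerName,
            LineCount = d.LineCount
        }).ToList();
    }
}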
POCOs are the way to go, IMHO.

Performance of Linq to Entities vs ESQL

When using the Entity Framework, does ESQL perform better than Linq to Entities?
I'd prefer to use Linq to Entities (mainly because of the strong-type checking), but some of my other team members are citing performance as a reason to use ESQL. I would like to get a full idea of the pros/cons of using either method.
The most obvious differences are:
Linq to Entities is strongly typed code including nice query comprehension syntax. The fact that the “from” comes before the “select” allows IntelliSense to help you.
Entity SQL uses traditional string based queries with a more familiar SQL like syntax where the SELECT statement comes before the FROM. Because eSQL is string based, dynamic queries may be composed in a traditional way at run time using string manipulation.
The less obvious key difference is:
Linq to Entities allows you to change the shape of, or "project", the results of your query into any shape you require with the “select new {...}” syntax. Anonymous types, new in C# 3.0, make this possible.
Projection is not possible using Entity SQL, as you must always return an ObjectQuery<T>. In some scenarios it is possible to use ObjectQuery<object>; however, you must work around the fact that .Select always returns ObjectQuery<DbDataRecord>. See the code below...
ObjectQuery<DbDataRecord> query = DynamicQuery(context,
    "Products",
    "it.ProductName = 'Chai'",
    "it.ProductName, it.QuantityPerUnit");

public static ObjectQuery<DbDataRecord> DynamicQuery(MyContext context, string root, string selection, string projection)
{
    // Start from the named entity set, then compose the eSQL fragments.
    ObjectQuery<object> rootQuery = context.CreateQuery<object>(root);

    // Apply the string-based filter (eSQL uses "it" to refer to the current item).
    ObjectQuery<object> filteredQuery = rootQuery.Where(selection);

    // Project the requested columns; the result is always an ObjectQuery<DbDataRecord>.
    ObjectQuery<DbDataRecord> result = filteredQuery.Select(projection);
    return result;
}
There are other more subtle differences described by one of the team members in detail here and here.
ESQL can also generate some particularly vicious SQL. I had to track down a problem with such a query that was using inherited classes, and I found out that my piddly little ESQL of 4 lines got translated into a 100,000-character monster SQL statement.
Doing the same thing with Linq, the generated SQL was much more manageable, let's say 20 lines.
Plus, as other people mentioned, Linq is strongly typed, although it is very annoying to debug without the edit-and-continue feature.
AD
Entity-SQL (eSQL) allows you to do things such as dynamic queries more easily than LINQ to Entities. However, if you don't have a scenario that requires eSQL, I would be hesitant to rely on it over LINQ because it will be much harder to maintain (e.g. no more compile-time checking, etc).
I believe LINQ allows you to precompile your queries as well, which might give you better performance. Rico Mariani blogged about LINQ performance a while back and discusses compiled queries.
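For reference, the precompilation being referred to is CompiledQuery.Compile; a minimal sketch, assuming a generated ObjectContext called MyEntities with a Products set (illustrative names):

using System;
using System.Data.Objects;
using System.Linq;

public static class ProductQueries
{
    // The LINQ to Entities translation happens once; later calls reuse it.
    private static readonly Func<MyEntities, string, IQueryable<Product>> ByNamePrefix =
        CompiledQuery.Compile((MyEntities context, string prefix) =>
            context.Products.Where(p => p.Name.StartsWith(prefix)));

    public static IQueryable<Product> GetByNamePrefix(MyEntities context, string prefix)
    {
        return ByNamePrefix(context, prefix);
    }
}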
There is a nice graph showing performance comparisons here: Entity Framework Performance Explored.
Not much difference is seen between ESQL and Linq to Entities, but the overall difference is significant when using Entities compared to direct queries.
Entity Framework uses two layers of object mapping (compared to a single layer in LINQ to SQL), and the additional mapping has performance costs. At least in EF version 1, application designers should choose Entity Framework only if the modeling and ORM mapping capabilities can justify that cost.
For me, covering more code with compile-time checking is something I'd place a higher premium on than performance. Having said that, at this stage I'd probably lean towards ESQL, not just because of the performance, but because it's also (at present) a lot more flexible in what it can do. There's nothing worse than using a technology stack that doesn't have a feature you really, really need.
The Entity Framework doesn't support things like custom properties or custom queries (for when you really need to tune performance), and it does not function the same as LINQ to SQL (i.e. there are features that simply don't work in the Entity Framework).
My personal impression of the Entity Framework is that there is a lot of potential, but it's probably a bit too "rigid" in its implementation to use in a production environment in its current state.
For direct queries I'm using Linq to Entities; for dynamic queries I'm using ESQL. Maybe the answer isn't either/or, but and/also.