How to escape from ORM limitations, or should I avoid them? - entity-framework

In short, ORMs like Entity Framework provide a fast solution, but with many limitations. When should ORMs be avoided?
I want to create the engine of a DMS system, and I wonder how I should build the Business Logic Layer.
I'll discuss the following options:
Using Entity Framework and providing it as a business layer for the engine's clients.
The problem is losing control over the properties and the validation, because it's generated code.
Creating my own business layer classes manually, without using Entity Framework or any ORM.
The problem is that it's a hard task, something like reinventing the wheel.
Creating my own business layer classes on top of Entity Framework.
The problem seems to be code repetition: creating new classes with the same names, where every property duplicates the one generated by the ORM.
Am I looking at the problem the right way?

In short, ORMs should be avoided when:
your program will perform bulk inserts/updates/deletes (such as insert-selects, and updates/deletes that are conditional on something non-unique). ORMs are not designed to do these kinds of bulk operations efficiently; you will end up deleting each record one at a time (see the sketch after this list).
you are using highly custom data types or conversions. ORMs are generally bad at dealing with BLOBs, and there are limits to how they can be told how to "map" objects.
you need the absolute highest performance in your communication with SQL Server. ORMs can suffer from N+1 problems and other query inefficiencies, and overall they add a layer of (usually reflective) translation between your request for an object and a SQL statement which will slow you down.
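To make the bulk-operation point concrete, here is a minimal sketch (EF-style, assuming a hypothetical Orders set on a context named db) contrasting what an ORM typically does with the single set-based statement you would write by hand:
// ORM style: every matching row is materialized, then deleted
// one DELETE statement at a time on SaveChanges.
var cancelled = db.Orders.Where(o => o.Status == "Cancelled").ToList();
foreach (var order in cancelled)
    db.Orders.Remove(order);
db.SaveChanges();
// Set-based alternative: one statement, nothing loaded into memory.
db.Database.ExecuteSqlCommand("DELETE FROM Orders WHERE Status = 'Cancelled'");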
ORMs should instead be used in most cases of application-based record maintenance, where the user is viewing aggregated results and/or updating individual records, consisting of simple data types, one at a time. ORMs have the extreme advantage over raw SQL in their ability to provide compiler-checked queries using Linq providers; virtually all of the popular ORMs (Linq2SQL, EF, NHibernate, Azure) have a Linq query interface that can catch a lot of "fat fingers" and other common mistakes in queries that you don't catch when using "magic strings" to form SQLCommands.
ORMs also generally provide database independence. Classic NHibernate HBM mappings are XML files, which can be swapped out as necessary to point the repository at MSS, Oracle, SQLite, Postgres, and other RDBMSes. Even "fluent" mappings, which are classes in code files, can be swapped out if correctly architected. EF has similar functionality.
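To illustrate the compiler-checking point, a minimal sketch (assuming a hypothetical Customers set on an EF context named db):
// "Magic string": the typo in the column name surfaces only at runtime.
var cmd = new SqlCommand("SELECT PhoneNumbre FROM Customers WHERE Id = @id", conn);
// Linq equivalent: mistyping PhoneNumber simply does not compile.
var phone = db.Customers.Where(c => c.Id == id)
                        .Select(c => c.PhoneNumber)
                        .Single();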

So are you asking how to do "X" without doing "X"? An ORM is an abstraction, and like any other abstraction it has disadvantages - but not the ones you mentioned.
Code (in EFv4) can be generated by a T4 template, and a T4 template is code that you can modify
The generated code consists of partial classes, which can be combined with your own partial part containing your logic (see the sketch below)
Writing classes manually is the very common case - using a designer, as available in Entity Framework, is rarer
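A minimal sketch of the partial-class approach, assuming a generated entity named Customer:
// In the generated file (simplified):
public partial class Customer
{
    public string Email { get; set; }
}
// In your own file - same class, same namespace - holding your logic:
public partial class Customer
{
    public void Validate()
    {
        if (string.IsNullOrEmpty(Email) || !Email.Contains("@"))
            throw new InvalidOperationException("Invalid email address.");
    }
}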

Disclaimer: I work at Mindscape that builds the LightSpeed ORM for .NET
As you don't ask about a specific issue, but about approaches to solving the flexibility problem with an ORM I thought I'd chime in with some views from a vendor perspective. It may or may not be of use to you but might give some food for thought :-)
When designing an O/R Mapper it's important to take into consideration what we call "escape hatches". An ORM will inevitably push a certain set of default behaviours, which is one way developers gain productivity.
One of the lessons we have learned with LightSpeed has been where developers need those escape hatches. For example, KeithS here states that ORMs are not good for bulk operations - and in most cases this is true. We had this scenario come up with some customers and added an overload to our Remove() operation that allowed you to pass in a query that removes all records that match. This saved having to load entities into memory and delete them. Listening to where developers are having pain and helping solve those problems quickly is important for helping build solid solutions.
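The shape of such an escape hatch might look like this sketch (hypothetical names and signature, not LightSpeed's actual API):
// Hypothetical predicate-based overload: issues one set-based DELETE
// on the server; no entities are loaded into memory first.
unitOfWork.Remove<Order>(o => o.Status == "Cancelled");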
All ORMs should efficiently batch queries. Having said that, we have been surprised to see that many don't. This is strange, given that batching can often be done rather easily: several queries can be bundled up and sent to the database at once to save round trips. This is something we've done since day 1 for any database that supports it. That's just an aside to the point about batching made in this thread. The quality of those batched queries is the real challenge and, frankly, some ORMs generate some TERRIBLE SQL statements.
Overall, you should select an ORM that gives you immediate productivity gains (almost demo-ware style: 'see, I queried data in 30s!') but that has also paid attention to larger-scale solutions, which is where escape hatches and some of the less-demoed but hugely useful features are needed.
I hope this post hasn't come across too salesy, but I wanted to draw attention to taking into account the thought process that goes behind any product when selecting it. If the philosophy matches the way you need to work then you're probably going to be happier than selecting one that does not.
If you're interested, you can learn about our LightSpeed ORM for .NET.

In my experience, you should avoid using an ORM when your application does the following kinds of data manipulation:
1) Bulk deletes: most ORM tools won't truly delete the data; they mark it with a garbage-collection ID (GC record) to keep the database consistent. Worse, the ORM collects all the data you want to delete before it marks it as deleted. That means that if you want to delete 1,000,000 rows, the ORM will first fetch the data, load it into your application, mark it as GC, and then update the database - which I believe is a huge waste of resources.
2) Bulk inserts and data import: most ORM tools create business-layer validations on the business classes. This is good if you want to validate one record, but if you are going to insert/import hundreds of thousands or even millions of records, the process could take days (see the SqlBulkCopy sketch after this list).
3) Report generation: ORM tools are good for creating simple list reports or simple table joins, as in an order/order-details scenario. In most other cases the ORM will only slow down data retrieval and will add more joins than you need for the report, which translates into giving the DB engine more work than you would with a plain SQL approach.
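For the bulk insert/import case, the usual way around the ORM in .NET is ADO.NET's SqlBulkCopy. A minimal sketch (hypothetical table name; rows is a DataTable whose columns match the destination):
// using System.Data.SqlClient;
using (var bulk = new SqlBulkCopy(connectionString))
{
    bulk.DestinationTableName = "dbo.ImportedRecords";
    bulk.BatchSize = 5000;     // commit in chunks of 5000 rows
    bulk.WriteToServer(rows);  // streams rows, bypassing per-entity validation
}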

Related

Using MicroORM for read layer in CQRS

Folks,
I'm considering using a micro ORM such as Dapper.net for the read-access component of a CQRS application (ASP.NET MVC), with Entity Framework being used for manipulating the domain.
This is CQRS-light; I am not using event sourcing etc. I have seen it mentioned several times that the read-only model in CQRS should be as light/simple as possible when querying the data layer, possibly using something like ADO.NET.
That implies potentially hardcoding SQL query strings in our code or in some XML file. How should I go about justifying this approach, where we have to maintain the domain mappings on one side and SQL statements on the other?
Has anyone used MicroORM's in a CQRS solution in this way?
Thanks
Mick
Yes, absolutely you can use Dapper, PetaPoco, Massive, Simple.Data, or any other micro ORM you would like. In the past we have used NHibernate to solve the problem but it was a 10,000 lbs. gorilla compared to what we needed.
One thing that we really liked about Simple.Data and Petapoco in our evaluation of those libraries was that they each could adapt your queries to different database engines (including Mongo) with minimal tweaking necessary, whereas Dapper was basically one big bunch of SQL strings--it was "stringly typed". Don't get me wrong, Dapper's great and is very, very fast and will absolutely work great. Just evaluate your functional and non-functional requirements before committing.
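For reference, a Dapper read-side query looks roughly like this sketch (hypothetical view model, SQL, and connection string):
// using Dapper; Query<T> is an extension method on IDbConnection that
// maps result columns to OrderSummary properties by name.
using (var conn = new SqlConnection(connectionString))
{
    var orders = conn.Query<OrderSummary>(
        "SELECT Id, CustomerName, Total FROM vw_OrderSummaries WHERE CustomerId = @id",
        new { id = customerId });
}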
Here are the relative numbers of downloads on NuGet for each of the primary micro ORMs (as of about 1/1/2012). For us, having a good community with lots of downloads is always a must, in order to help iron out issues when they arise:
5568 Simple.Data
4990 Petapoco
4913 Dapper
2203 Massive
1152 OrmLite
Lastly, one thing you may want to investigate is your reasoning behind using SQL at all for your read models. If your domain is publishing events (regardless of event sourcing), and you're writing to simple, flat/non-relational view models, you may be able to get away with something as simple as JSON files that are pushed to the browser, which the browser then interprets and uses to populate your HTML templates. There are all kinds of options available; you just need to determine what works best in your scenario.

Entity Framework - is it suitable for Enterprise Level application?

I have a web application with:
1 Terabyte DB
200+ tables
At least 50 tables with 1+ million records each
10+ developers
1000s of concurrent users
This project is currently using Ad-Hoc Sql, which is generated by custom ORM solution.
Instead of supporting custom ORM (which is missing a lot of advanced features), I am thinking to switch to Entity Framework.
I used EF 4.1 (Code-First) on a smaller project and it worked pretty well, but is it scalable for a much larger project above?
I (highly) agree with marvelTracker's (and Ayende's) thoughts.
Here is some further information though:
Key Strategy
There is a well-known cost to using GUIDs as primary keys. It was described by Jimmy Nilsson and is publicly available at http://www.informit.com/articles/article.aspx?p=25862. NHibernate supports the GUIDCOMB primary key strategy; achieving the same in EntityFramework, however, is a little tricky and requires additional steps.
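For illustration, the comb idea is to overwrite the last six bytes of a random GUID with a timestamp, so successive keys sort sequentially and avoid clustered-index page splits. A sketch roughly following NHibernate's guid.comb generator:
public static Guid NewCombGuid()
{
    byte[] guidArray = Guid.NewGuid().ToByteArray();
    var baseDate = new DateTime(1900, 1, 1);
    DateTime now = DateTime.UtcNow;
    // Days since the base date, and milliseconds since midnight scaled
    // to SQL Server's ~1/300s datetime resolution.
    byte[] daysArray = BitConverter.GetBytes(new TimeSpan(now.Ticks - baseDate.Ticks).Days);
    byte[] msecsArray = BitConverter.GetBytes((long)(now.TimeOfDay.TotalMilliseconds / 3.333333));
    Array.Reverse(daysArray);
    Array.Reverse(msecsArray);
    // SQL Server compares uniqueidentifiers by their last 6 bytes first,
    // so the timestamp goes there.
    Array.Copy(daysArray, daysArray.Length - 2, guidArray, guidArray.Length - 6, 2);
    Array.Copy(msecsArray, msecsArray.Length - 4, guidArray, guidArray.Length - 4, 4);
    return new Guid(guidArray);
}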
Enums
EntityFramework doesn't support enums natively. Until the June CTP, which adds support for enums (http://blogs.msdn.com/b/adonet/archive/2011/06/30/walkthrough-enums-june-ctp.aspx), the only way to map enumerations was through workarounds. Please look at: How to work with Enums in Entity Framework?
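The usual workaround was to map a plain int column and wrap it in an unmapped enum property, as in this minimal sketch (hypothetical Ticket entity):
public enum TicketStatus { Open = 0, Closed = 1 }
public partial class Ticket
{
    // Mapped by EF to an INT column.
    public int StatusValue { get; set; }
    // Not mapped: a typed convenience wrapper over the int column.
    public TicketStatus Status
    {
        get { return (TicketStatus)StatusValue; }
        set { StatusValue = (int)value; }
    }
}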
Queries:
NHibernate offers many ways for querying data:
LINQ (using re-motion's re-linq provider, https://www.re-motion.org/web/)
Named Queries encapsulated in query objects
ICriteria/QueryOver for queries where the criteria are not known in advance
Using QueryOver projections and aggregates (in some cases we only need specific properties of an entity; in others we need the result of an aggregate function, such as average or count)
PagedQueries: In an effort to avoid overwhelming the user, and increase application responsiveness, large result sets are commonly broken into smaller pages of results.
MultiQueries that combine several ICriteria and QueryOver queries into a single database roundtrip
Detached Queries, which are query objects built in parts of the application without access to the NHibernate session; these objects are then executed elsewhere with a session. This is good because we can avoid complex repositories with many methods.
ISession’s QueryOver:
// Query that depends on a session:
premises = session.QueryOver<Premise>().List();
Detached QueryOver:
// Full reusable query!
var query = QueryOver.Of<Premise>();
// Then later, in some other part of the application:
premises = query.GetExecutableQueryOver(session).List(); // Could pass an IStatelessSession too.
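And a projection-plus-paging sketch in the same style, again assuming the Premise entity:
// Fetch only the Name property, rows 21-40, in one roundtrip.
var names = session.QueryOver<Premise>()
    .Select(p => p.Name)
    .Skip(20).Take(20)
    .List<string>();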
Open source
NHibernate has a lot of contribution projects available at http://sourceforge.net/projects/nhcontrib/
This project provides a number of very useful extensions to NHibernate (among others):
Cache Providers (for 2nd-level cache)
Dependency Injection for entities with no default constructor
Full-Text Search (Lucene.NET integration)
Spatial Support (NetTopologySuite integration)
Support
EntityFramework comes with Microsoft support.
NHibernate has an active community:
https://stackoverflow.com/questions/tagged/nhibernate
http://forum.hibernate.org/
http://groups.google.com/group/fluent-nhibernate
Also, have a look at:
http://www.infoq.com/news/2010/01/Comparing-NHibernate-EF-4
NHibernate is the best choice for you because it has good support for complex queries, second-level caching, and optimizations. I think EF is getting there. If you are dealing with legacy systems, NHibernate is the best approach.
http://ayende.com/blog/4351/nhibernate-vs-entity-framework-4-0
Suitable is an interesting term. Is it usable? Yes, and you'll find a number of nice features well suited to rapid application development. That said, it's somewhat of a half-baked technology, and it lacks many advanced features of its own predecessor, LINQ to SQL (even 3 years after its first release). Here are a few annoyances:
Poor complex LINQ support
No Enum property types
Missing SQL Conversions (parse DateTime, parse int, etc.) (though you can implement these via model defined functions)
Poor SQL readability
Problems keeping multiple ssdl/csdl/msl resources independent for sharding (not really a problem with Code First)
Problems with running multiple concurrent Transactions in different ObjectContext's
Problems with Detached entity scenarios
That said, Microsoft has devoted a lot of effort to it, and hopefully it will continue to improve over time. I personally would spend time implementing a well-abstracted Repository/Unit of Work pattern, so your code doesn't know it's using EF at all, and if necessary you can switch to another LINQ-to-DB provider in the future; a minimal sketch follows.
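A minimal sketch of that abstraction (hypothetical interfaces; the EF-backed implementation would live behind them):
public interface IRepository<T> where T : class
{
    T GetById(int id);
    IQueryable<T> Query();
    void Add(T entity);
    void Remove(T entity);
}
public interface IUnitOfWork : IDisposable
{
    IRepository<Customer> Customers { get; }  // hypothetical entity
    void Commit();
}
// Application code depends only on IUnitOfWork, so the EF-based
// implementation can later be swapped for another provider.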
Most modern ORMs will be a step up from ad-hoc SQL.

ORM for large volume database

I am working on a new project which is data-oriented, meaning a very large volume of data (increasing day by day). So kindly suggest which approach I should use to achieve the desired functionality without any hurdles.
Is the database fully normalized?
Which ORM (linq2sql, entity framework) is suitable for this project?
Should I use stored procedures, db functions, triggers, etc?
Whether or not the database is normalized is something you need to know and need to answer!
As for the ORM: it really depends on the type of data and its structure.
Linq-to-SQL is a very simplistic ORM that basically just does a 1:1 mapping of tables to domain objects. As long as you don't need anything else - that's fine. Linq-to-SQL is no longer being actively developed, so that might be a drawback. Also, stored proc support is a bit limited.
Entity Framework (at least in .NET 4) is great and is the current ORM of choice at Microsoft - it's being actively developed, has a lot of backing, lot of flexibility. It offers database-first, model-first and code-first development styles, it supports POCO objects and self-tracking entities, and is very well integrated with stored procs (you can define a stored proc for INSERT, UPDATE, DELETE on every single entity, if you wish to do so). It would be my first choice.
NHibernate is a great, enterprise-level ORM, well established and actively developed - certainly not a "dead end" like Linq-to-SQL. I used it a few years ago, and while it's great and powerful, it's also a bit harder to learn than EF4 (no visual designer; it requires more manual effort). It's great if you really need all its power and are willing to invest the necessary up-front learning time.
As for the database: stored procs are definitely worth while investigating, especially if you need to encapsulate certain database processing into a nice proc to call from your code. I would be rather careful and defensive about using triggers and functions too much - they have their place, but they shouldn't be overused, since they do carry some problems with them (mostly performance problems and problems of "discoverability" - lots of devs don't think about triggers that could be in place, and will not understand what's going on).
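For example, with EF 4.1 you can invoke a stored procedure through the DbContext while the rest of the application keeps using LINQ; a minimal sketch (hypothetical proc and entity names):
// using System.Data.SqlClient;
// Materializes Customer entities from the proc's result set
// instead of from EF-generated SQL.
var top = context.Database.SqlQuery<Customer>(
    "EXEC dbo.GetTopCustomers @count",
    new SqlParameter("@count", 10)).ToList();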
@Xulfee, that's a fairly broad question, and a lot depends on the nature of your project. The approaches you reference affect a lot of aspects of the overall architecture. For example:
Is the database fully normalized?
Database normalization generally aids in tackling the problem of complexity of your conceptual model. When properly (note I did not say "fully") normalized, your model should be fairly straightforward, and consumers of the database (developers, your BI team, domain experts, etc.) should be able to get a good idea of the business problems being approached with your database. That having been said, normalization can lead to a fairly large reporting and analysis problem. When writing a query for a report against a large, fairly normalized database, you may introduce performance problems by joining a lot of tables. Enter snowflake schemas. So, to your question: it depends. What are your reporting requirements? How many transactions on average do you need to support? How complex is your conceptual model? Are you able to break the database into smaller models that are associated, rather than one large one?
Which ORM (linq2sql, entity framework) is suitable for this project?
Again, an ORM is a tool. You must ask yourself what is the specific job that you are trying to accomplish? The choice of an ORM (or in even using an ORM in the first place) is a decision that I would recommend you make fairly early on as it can affect everything from performance to development team cohesion. There are a lot of great choices out there:
Linq-To-Sql
NHibernate
Entity Framework
LLBLGen
Each of the above frameworks does a fantastic job of abstracting your persistence layer. Each has its pros and cons - the majority of which come down to infrastructure concerns: performance, configuration, schema/language compatibility, persistence patterns, vendor support. Given the choice, I would ask myself: which of the frameworks is my development team most comfortable with? Which one supports the level of system activity I expect? With which vendor am I willing to "throw in"? I have seen fairly successful systems that use fairly small ORMs (e.g. Stackoverflow uses a modified version of Linq-To-Sql) as well as fairly large systems fail with fairly complex ORMs.
Should I use stored procedures, db functions, triggers, etc?
This question centers around your persistence store and how you use it (as well as how angry you want to make your DBA :) ). The use of sprocs (stored procedures) lends itself to allowing your DBA to provide security at a very granular level. In addition, if the ORM you are using generates dynamic SQL, you might benefit from the database's ability to cache queries generated using sprocs. DB functions can be a double-edged sword: they offer the ability to add functionality and intelligence to your model, while at the same time allowing you to take a fairly large performance hit (e.g. table-valued UDFs). Triggers have their own pitfalls and should be used with caution, but that discussion could get rather involved. The bottom line for me in this case is: how much logic do you want to keep in the database, and how important are security and performance? Do you have a qualified DBA (not just a developer who knows how to write queries, but a DBA capable of performance tuning and data modeling)? How big is your database? How complex is your data? Think about all of these questions and more when determining how you want to manage your data.
In summary, you are asking some good questions. Don't confuse infrastructure needs with implementation needs. Decide on a stack and run with it, don't get bogged-down in implementation details to the point at which you are unable to successfully complete the project. With the right level of abstraction, you may find it easier to try out new and different technologies without risking the overall success of the project. And remember: there's nothing wrong with experimenting and trying new things, just be prepared to fail gracefully and test, test, test!

Philosophy of correct working with ORM (Entity Framework )

I'm an old-school database programmer, and all my life I've been working with databases via DALs and stored procedures. Now I've got a requirement to use Entity Framework.
Could you tell me about your experience and architectural best practices for working with it?
As far as I know, ORM was made for programmers who don't know SQL, and this is the only benefit of ORM. Am I right?
I got an architecture document and I don't clearly know what I should do with the ORM. I think my steps should be:
1) Create the complete database
2) Create high-level entities in the model, such as "Price", which really consists of a few database tables
3) Map the database tables onto the entities.
An ORM does a lot more than just allow non-SQL programmers to talk to databases!
Instead of having to deal with loads of handwritten DAL code, and getting back a row/column representation of your data, an ORM turns each row of a table into a strongly-typed object.
So you end up with e.g. a Customer, and you can access its phone number as a strongly-typed property:
string customerPhone = MyCustomer.PhoneNumber;
That is a lot better than:
string customerPhone = MyCustomerTable.Rows[5]["PhoneNumber"].ToString();
You get no support whatsoever from the IDE in making this work - beware of mistyping the column name! You won't find out until runtime: either you get no data back, or you get an exception... not very pleasant.
First of all, it's much easier to use that Customer object you get back: the properties are nicely available, strongly typed, and discoverable in IntelliSense, and so forth.
So besides possibly saving you from having to hand-craft a lot of boring SQL and DAL code, an ORM also brings a lot of benefits in using the data from the database - discoverability in your code editor, type safety and more.
I agree - the thought of an ORM generating SQL statements on the fly, and executing those, can be scary. But at least in Entity Framework v4 (.NET 4), Microsoft has done an admirable job of optimizing the SQL being used. It might not be perfect in 100% of the cases, but in a large percentage of the time, it's a lot better than any SQL any non-expert SQL programmer would write...
Plus: in EF4, if you really want to and see a need to, you can always define and use your own Stored procs for INSERT, UPDATE, DELETE on any entity.
I can relate to your sentiment of wanting to have complete control over your SQL. I have been researching ORM usage myself, and while I can't state a case nearly as well as marc_s has, I thought I might chime in with a couple more points.
I think the point of an ORM is to shift the focus away from writing SQL and DAL code, and instead focus more on the business logic. You can be more agile with an ORM tool, because you don't have to refactor your data model or stored procedures every time you change your object model. In fact, an ORM essentially gives you a layer of abstraction, so you can potentially make changes to your schema without affecting your code, and vice versa. An ORM might not always generate the most efficient SQL, but you may benefit from faster development time. For small projects, however, the benefits of an ORM might not be worth the extra time spent configuring it.
I know that doesn't answer your questions though.
To your 2nd question, it seems to me that many developers on S.O. here who are very skilled in SQL still advocate the use of and themselves use ORM tools such as Hibernate, LINQ to SQL, and Entity Framework. In fact, you still need to know SQL sometimes even if you use ORM, and it's typically the more complicated queries, so your theory about ORM being mainly "for programmers who don't know SQL" might be wrong. Plus you get caching from your ORM layer.
Furthermore, Jeff Atwood, who is the lead developer of S.O. (this site here), claims that he loves SQL (and I'd bet he's very good at it), and he also strives to avoid adding extra technologies to his stack, yet he chose to use LINQ to SQL to build S.O. Years ago he already claimed that "Stored Procedures should be considered database assembly language: for use in only the most performance critical situations."
To your 1st question, here's another article from Jeff Atwood's blog that talks about various ways (including using an ORM) to deal with the object-relational impedance mismatch problem, which helped me put things in perspective. It's also interesting because his opinion of ORM must have changed since then. In the article he said you should "either abandon relational databases, or abandon objects," as well as, "I tend to err on the side of the database-as-model camp." But as I said, some of the bullet points helped put things into perspective for me.

Manual DAL & BLL vs. ORM

Which approach is better: 1) to use a third-party ORM system or 2) manually write DAL and BLL code to work with the database?
1) In one of our projects, we decided to use the DevExpress XPO ORM system, and we ran across lots of small problems that wasted a lot of our time. And still, from time to time, we encounter problems and exceptions that come from this ORM, and we do not have full understanding of, or control over, this "black box".
2) In another project, we decided to write the DAL and BLL from scratch. Although this implied writing boring code many, many times, this approach proved to be more versatile and flexible: we had full control over the way data was held in the database, how it was obtained from it, etc. And all the bugs could be fixed in a direct and easy way.
Which approach is generally better? Maybe the problem is just with the ORM that we used (DevExpress XPO), and maybe other ORMs are better (such as NHibernate)?
Is it worth using the ADO.NET Entity Framework?
I found that the DotNetNuke CMS uses its own DAL and BLL code. What about other projects?
I would like to get info on your personal experience: which approach do you use in your projects, which is preferable?
Thank you.
My personal experience has been that ORM is usually a complete waste of time.
First, consider the history behind this. Back in the 60s and early 70s, we had these DBMSes using the hierarchical and network models. These were a bit of a pain to use, since when querying them you had to deal with all of the mechanics of retrieval: follow links between records all over the place and deal with the situation when the links weren't the links you wanted (e.g., were pointing in the wrong direction for your particular query). So Codd thought up the idea of a relational DBMS: specify the relationships between things, say in your query only what you want, and let the DBMS deal with figuring out the mechanics of retrieving it. Once we had a couple of good implementations of this, the database guys were overjoyed, everybody switched to it, and the world was happy.
Until the OO guys came along into the business world.
The OO guys found this impedance mismatch: the DBMSes used in business programming were relational, but internally the OO guys stored things with links (references), and found things by figuring out the details of which links they had to follow and following them. Yup: this is essentially the hierarchical or network DBMS model. So they put a lot of (often ingenious) effort into layering that hierarchical/network model back on to relational databases, incidently throwing out many of the advantages given to us by RDBMSes.
My advice is to learn the relational model, design your system around it if it's suitable (it very frequently is), and use the power of your RDBMS. You'll avoid the impedance mismatch, you'll generally find the queries easy to write, and you'll avoid performance problems (such as your ORM layer taking hundreds of queries to do what it ought to be doing in one).
There is a certain amount of "mapping" to be done when it comes to processing the results of a query, but this goes pretty easily if you think about it in the right way: the heading of the result relation maps to a class, and each tuple in the relation is an object. Depending on what further logic you need, it may or may not be worth defining an actual class for this; it may be easy enough just to work through a list of hashes generated from the result. Just go through and process the list, doing what you need to do, and you're done.
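A minimal sketch of that manual mapping (hypothetical Customer class and an open connection named conn):
// Each tuple in the result relation becomes one object.
var customers = new List<Customer>();
using (var cmd = new SqlCommand("SELECT Id, Name FROM Customers", conn))
using (var reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        customers.Add(new Customer
        {
            Id = reader.GetInt32(0),
            Name = reader.GetString(1)
        });
    }
}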
Perhaps a little of both is the right fit. You could use a product like SubSonic. That way, you can design your database, generate your DAL code (removing all that boring stuff), use partial classes to extend it with your own code, use Stored Procedures if you want to, and generally get more stuff done.
That's what I do. I find it's the right balance between automation and control.
I'd also point out that I think you're on the right path by trying out different approaches and seeing what works best for you. I think that's ultimately the source for your answer.
Recently I made the decision to use Linq to SQL on a new project, and I really like it. It is lightweight, high-performance, intuitive, and has many gurus at microsoft (and others) that blog about it.
Linq to SQL works by creating a data layer of C# objects from your database. DevExpress XPO works in the opposite direction, creating tables for your C# business objects. The Entity Framework is supposed to work either way. I am a database guy, so the idea of a framework designing the database for me doesn't make much sense, although I can see the attractiveness of it.
My Linq to SQL project is a medium-sized project (hundreds, maybe thousands of users). For smaller projects sometimes I just use SQLCommand and SQLConnection objects, and talk directly to the database, with good results. I have also used SQLDataSource objects as containers for my CRUD, but these seem clunky.
DALs make more sense the larger your project is. If it is a web application, I always use some sort of DAL because they have built-in protections against things like SQL injection attacks, better handling of null values, etc.
I debated whether to use the Entity Framework for my project, since Microsoft says this will be their go-to solution for data access in the future. But EF feels immature to me, and if you search StackOverflow for Entity Framework, you will find several people who are struggling with small, obtuse problems. I suspect version 2 will be much better.
I don't know anything about nHibernate, but there are people out there who love it and would not use anything else.
You might try using NHibernate. Since it's open source, it's not exactly a black box. It is very versatile, and it has many extensibility points for you to plug in your own additional or replacement functionality.
Comment 1:
NHibernate is a true ORM, in that it permits you to create a mapping between your arbitrary domain model (classes) and your arbitrary data model (tables, views, functions, and procedures). You tell it how you want your classes to be mapped to tables, for example, whether this class maps to two joined tables or two classes map to the same table, whether this class property maps to a many-to-many relation, etc. NHibernate expects your data model to be mostly normalized, but it does not require that your data model correspond precisely to your domain model, nor does it generate your data model.
Comment 2:
NHibernate's approach is to permit you to write any classes you like, and then after that to tell NHibernate how to map those classes to tables. There's no special base class to inherit from, no special list class that all your one-to-many properties have to be, etc. NHibernate can do its magic without them. In fact, your business object classes are not supposed to have any dependencies on NHibernate at all. Your business object classes, by themselves, have absolutely no persistence or database code in them.
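For illustration, a minimal Fluent NHibernate sketch of a plain class and its separate mapping (hypothetical names):
// A plain business class: no NHibernate base class, no persistence code.
public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}
// using FluentNHibernate.Mapping;
// The mapping lives elsewhere and tells NHibernate how to persist it.
public class CustomerMap : ClassMap<Customer>
{
    public CustomerMap()
    {
        Table("CUSTOMERS");
        Id(x => x.Id).GeneratedBy.Identity();
        Map(x => x.Name).Column("CUST_NAME");
    }
}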
You will most likely find that you can exercise very fine-grained control over the data-access strategies that NHibernate uses, so much so that NHibernate is likely to be an excellent choice for your complex cases as well. However, in any given context, you are free to use NHibernate or not to use it (in favor of more customized DAL code), as you like, because NHibernate tries not to get in your way when you don't need it. So you can use a custom DAL or DevExpress XPO in one BLL class (or method), and you can use NHibernate in another.
I recently took part in a sufficiently large and interesting project. I didn't join it from the beginning, and we had to support an already-implemented architecture. Data access to all objects was implemented through stored procedures and automatically generated .NET wrapper methods that returned DataTable objects. The development process in this system was really slow and inefficient. We had to write a huge PL/SQL stored procedure for every query that could have been expressed as a simple LINQ query. If we had used an ORM, we would have implemented the project several times faster. And I don't see any advantage to such an architecture.
I admit that this was just one particular, not very successful project, but I reached the following conclusion: before refusing to use an ORM, think twice - do you really need that much flexibility and control over the database? I think in most cases it isn't worth the wasted time and money.
As others explain, there is a fundamental difficulty with ORM's that make it such that no existing solution does a very good job of doing the right thing, most of the time. This Blog Post: The Vietnam Of Computer Science echoes some of my feelings about it.
The executive summary is something along these lines: the assumptions and optimizations of the object and relational models are incompatible. Although early returns are good, as the project progresses the abstractions of the ORM fall short, and the extra overhead of working around them tends to cancel out the early successes.
I have used Bold for Delphi for four years now. It is great, but it is no longer available for sale, and it lacks some features like databinding. ECO, its successor, has all that.
No, I'm not selling ECO licenses or anything; I just think it is a pity that so few people realize what MDD (Model Driven Development) can do: the ability to solve more complex problems in less time and with fewer bugs. This is very hard to measure, but I have heard something like 5-10x more efficient development. And as I work with it every day, I know this is true.
Maybe some traditional developers, centered around data and SQL, will say:
"But what about performance?"
"I may lose control of what SQL is run!"
Well...
If you want to load 10,000 instances of a table as fast as possible, it may be better to use stored procedures, but most applications don't do this. Both Bold and ECO use simple SQL queries to load data. Performance depends heavily on the number of queries sent to the database to load a certain amount of data. The developer can help by declaring which data belong together, so that they are loaded as efficiently as possible.
The actual queries that are executed can of course be logged to catch any performance problems. And if you really want to use your hyper-optimized SQL query, that is no problem as long as it doesn't update the database.
There are many ORM systems to choose from, especially if you use .NET. But honestly, it is very, very hard to build a good ORM framework. It should be centered around the model: if the model changes, it should be an easy task to change the database and the code that depends on the model. This makes the system easy to maintain, and the cost of making many small changes stays very low. Many developers make the mistake of centering on the database and adapting everything around that. In my opinion, this is not the best way to work.
More people should try ECO. It is free to use for an unlimited time, as long as the model contains no more than 12 classes. You can do a lot with 12 classes!
I suggest you use the CodeSmith tool to generate a NetTiers layer; that is a good option for developers.