Need some advice concerning MVVM + lightweight objects + EF

We are developing a back-office application with quite a large database.
It's not reasonable to load everything from the DB into memory, so when the model's properties are requested we read them from the DB (via EF).
But many of our UIs are just simple lists of entities with only some (!) properties presented to the user.
For example, we just want to show Id, Title and Name.
Later, when the user selects an item and wants to perform some action, the whole object is needed. At that point we have a list of items stored in memory.
Some properties contain large texts, images or other data.
EF works with entities, and reading a bunch of large objects degrades performance notably.
As far as I understand, the problem can be solved by creating lightweight entities and using them in the appropriate context.
First. I'm afraid that each view will make us create a new LightweightEntity, and we will eventually end up with a bloated object context.
Second. As the Model wraps EF, we need to provide methods for various entities.
Third. ViewModels communicate and pass entities to each other.
So I'm stuck with all these considerations and need good architectural design advice.
Any ideas?

For images and large texts you may consider table splitting, which is commonly used to split a table into a lightweight entity and a "heavy" entity.
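For illustration, here is a minimal code-first sketch of such a split, assuming the EF6 fluent API; the Document/DocumentContent names and columns are made up. List views query Document and never touch the heavy columns; Content loads lazily only when the full object is needed.

    using System.Data.Entity;

    // The slim half: just what list views bind to.
    public class Document
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Name { get; set; }
        public virtual DocumentContent Content { get; set; } // lazy-loaded on demand
    }

    // The heavy half: large text and image columns of the same table.
    public class DocumentContent
    {
        public int Id { get; set; }
        public string LargeText { get; set; }
        public byte[] Image { get; set; }
        public virtual Document Document { get; set; }
    }

    public class BackOfficeContext : DbContext
    {
        public DbSet<Document> Documents { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Both entities share the primary key and map to the same table:
            // that is the table split.
            modelBuilder.Entity<Document>()
                .HasRequired(d => d.Content)
                .WithRequiredPrincipal(c => c.Document);
            modelBuilder.Entity<Document>().ToTable("Documents");
            modelBuilder.Entity<DocumentContent>().ToTable("Documents");
        }
    }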
But I think what you call lightweight "entities" are data transfer objects (DTOs). These are not supplied by the context (so it won't get bloated) but by projection from entities, which is done in a repository or service.
For projection you can use AutoMapper, especially its newer feature that I describe here. This allows you to reduce the number of methods you need to provide "for various entities" (DTOs), because the type to project to can be given as a generic type parameter.
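As a sketch of that idea (assuming AutoMapper's queryable extensions and reusing the hypothetical BackOfficeContext/Document types from the splitting example above; the DTO and repository names are invented):

    using System.Linq;
    using AutoMapper;
    using AutoMapper.QueryableExtensions;

    // One DTO per view shape; only these columns appear in the generated SQL.
    public class DocumentListDto
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Name { get; set; }
    }

    public class ReadRepository
    {
        private readonly BackOfficeContext _context;
        private readonly IConfigurationProvider _mapperConfig;

        public ReadRepository(BackOfficeContext context, IConfigurationProvider mapperConfig)
        {
            _context = context;
            _mapperConfig = mapperConfig;
        }

        // One generic method instead of one method per entity/DTO pair:
        // ProjectTo turns the mapping into a SELECT of just the mapped columns.
        public IQueryable<TDto> GetList<TEntity, TDto>() where TEntity : class
        {
            return _context.Set<TEntity>().ProjectTo<TDto>(_mapperConfig);
        }
    }

    // Usage sketch:
    // var config = new MapperConfiguration(cfg => cfg.CreateMap<Document, DocumentListDto>());
    // var rows = new ReadRepository(new BackOfficeContext(), config)
    //     .GetList<Document, DocumentListDto>().ToList();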

Related

Using ViewModels instead of DTOs as the result of a CQRS query

Reading an SO question, I realized that my Read services could provide smarter objects like ViewModels instead of plain DTOs. This makes me reconsider what information should be provided by the objects returned by the Read services.
Before, using just DTOs, my Read service just did a flat mapping of a database query into a hash-like structure, with minimal normalization and no behavior.
However, I tend to think of a ViewModel as something "smarter" that can carry generated information not provided by the database, like a status icon, calculated values, reformatted values, default values, etc.
I am starting to see that the construction of some ViewModel objects might get more complicated, and that there are potential downsides if I make my generic ReadServiceInterface return only ViewModels:
(1) Should I plan some design restrictions for the ViewModels returned by my CQRS? Like making sure that their construction is almost as fast as a plain DTO's?
(2) DTOs by nature are easily serialized and ready to be sent to an external system in a SOA architecture or embedded into a message. Does this mean that using ViewModels will have a negative impact on my architecture?
(3) Which type of ViewModels should I keep outside my Read Services?
(4) Should I expect all ViewModels to be retrieved from Read Services?
In the past I implemented some ViewModels that needed more than one query. In CQRS, I suppose, that is a design smell, since everything they provide should come from a single query.
I am starting a new project where I thought that any query would return either aggregate objects or DTOs. But now ViewModels come into play, and I am wondering:
(5) Should I plan for queries within my architecture to yield two types of objects (ViewModels + Aggregates) or three (+ DTOs)?
View Models (VMs) serve a single master: the View. We usually consider the VM a pretty dumb object, so in this regard there's no technical difference between a VM and a DTO; only their purpose and semantics are different.
How you build a VM is an implementation detail. Some VMs are pre-generated and stored in a VM repository. Others are built in real time by a service (or a query handler), either by querying the db directly or by querying other repos/services and then assembling the results. There's no right or wrong and no rules about how to do it. It comes down to preference.
In CQRS the important part is the separation of commands from queries, i.e. more than one model. There's no rule about how many queries you should do or whether you should return a view model or a DTO. As long as you have at least one read model dedicated to queries, it's CQRS.
Don't let technicalities complicate your design. Proper design is more about high-level structure than low-level implementation. Use CQRS because having a read model simplifies your app, not for other reasons. Aim for simplification and clean code, not for rigid rules that dictate a "how to" recipe.
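As a concrete illustration of the "built in real time by a query handler" option, here is a minimal sketch; every name in it (IQueryHandler, OrderSummaryQuery, ReadDb, the status-icon rule) is invented rather than taken from any framework:

    using System.Data.Entity;
    using System.Linq;

    // Read-side entity and context (hypothetical).
    public class Order
    {
        public int Id { get; set; }
        public string CustomerName { get; set; }
        public decimal Total { get; set; }
        public string Status { get; set; }
    }

    public class ReadDb : DbContext
    {
        public DbSet<Order> Orders { get; set; }
    }

    public interface IQueryHandler<TQuery, TResult>
    {
        TResult Handle(TQuery query);
    }

    public class OrderSummaryQuery { public int OrderId { get; set; } }

    // The VM carries view-ready data the database never stored.
    public class OrderSummaryViewModel
    {
        public int Id { get; set; }
        public string Customer { get; set; }
        public string Total { get; set; }       // reformatted value
        public string StatusIcon { get; set; }  // generated value
    }

    public class OrderSummaryQueryHandler : IQueryHandler<OrderSummaryQuery, OrderSummaryViewModel>
    {
        private readonly ReadDb _db;
        public OrderSummaryQueryHandler(ReadDb db) { _db = db; }

        public OrderSummaryViewModel Handle(OrderSummaryQuery query)
        {
            // One query against the read model; the "smart" bits are computed on the way out.
            var row = _db.Orders.Single(o => o.Id == query.OrderId);
            return new OrderSummaryViewModel
            {
                Id = row.Id,
                Customer = row.CustomerName,
                Total = row.Total.ToString("C"),
                StatusIcon = row.Status == "Paid" ? "ok.png" : "warning.png"
            };
        }
    }

Whether the handler returns this VM or a flat DTO changes nothing about the CQRS split itself; only the assembly step differs.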

Can a project have two different EF data models that reference the same table?

I have a system with a primary data model that performs most of the work.
The model has quite a few tables, so with performance in mind, when I came to add an administrative feature to the application I decided to use a second, separate data model.
All works well until my second data model needs to access a table that is also in the primary data model. Now, from digging around I can see this can cause problems.
The two possible workarounds I've come up with are to either:
Put the data models in separate projects.
Use views / stored procedures for accessing the table in question when required.
Method 1 seems the simplest, but I'm concerned about whether there would be any performance loss. Method 2 seems a bit messy and defeats the point of using EF.
Before I plump for method 1, is there an easier workaround that I could use?
In the end I decided to put the two data models into separate projects, and so far there hasn't been any slowdown that I've been able to notice (I've not done any benchmarking, but it has passed the perception test).
In one of her online tutorials, EF guru Julie Lerman says that you should put your data model in a separate project anyway, so I don't think this has been a bad workaround.
I am working with 2 models in the same project because I connect to 2 different databases. I have set different namespaces using "Custom Tool Namespace" on the *.tt files, but that is not necessary. It generally works, but it cannot handle the situation where an entity (table) with the same name is in both models. When you save one model, the entity with the same name is deleted from the other model.
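For what it's worth, here is a code-first sketch of the separate-projects workaround (all names invented): each context lives in its own project and namespace and maps the shared table independently, so EF treats them as two unrelated models.

    using System.Data.Entity;

    namespace Primary.Data
    {
        // Full-width view of the shared table for the main application.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public string Notes { get; set; }
        }

        public class PrimaryContext : DbContext
        {
            public DbSet<Customer> Customers { get; set; }
        }
    }

    namespace Admin.Data
    {
        // Narrower view of the same table for the administrative feature.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class AdminContext : DbContext
        {
            public DbSet<Customer> Customers { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Point the second model at the same physical table.
                modelBuilder.Entity<Customer>().ToTable("Customers");
            }
        }
    }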

Is core data implementing data mapper pattern?

I know that Core Data should not be considered an ORM, but it still offers functionality similar to an ORM's. Just curious: is it implementing the Data Mapper pattern? I know that "the Data Mapper is a layer of software that separates the in-memory objects from the database. Its responsibility is to transfer data between the two and also to isolate them from each other." (Martin Fowler). IMHO the context handles all the SQL work in a single transaction, which is a very performance-conscious design, and IMHO Core Data might be considered to implement the Data Mapper pattern.
One year later, I will contribute my two cents.
I am not an ORM expert and only recently started something using a Data Mapper, but as a long-time Core Data user I can say: no. The main objective of this pattern is a clean separation of domain objects from all database-related operations.
Once I start writing unit tests, the first thing I notice is that I must load a database; even if it is just some in-memory store, I still must load one. Also, there are no per-class mappers, and I have no control over how each relation is stored.
Core Data loads lots of meta-information about your object graph and forces some structure onto it. Although you can change the persistent store and bake something of your own, you will have lots of restrictions on how to do it, with a clear "relational" feeling to it.
The idea is good; we might say Core Data is some variation of it. Something that I do love is that the save operation is done by the context, not by the object itself. So there is some type of separation.
However, look at functions like "awakeFromFetch" or "didSave": both operations are related to the data store, not to a plain domain object. A proper Data Mapper pattern would allow you to define those operations for each persistent store, not unify them in a single object.
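To make the contrast concrete, here is a minimal sketch of the Data Mapper idea, written in C# with invented names since the pattern is language-neutral: the domain object has no store hooks at all, and each persistent store gets its own mapper, so a test can swap in an in-memory one without loading any database.

    using System.Collections.Generic;

    // Plain domain object: no base class, no awakeFromFetch/didSave-style hooks.
    public class Person
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // One mapper per persistent store owns all the load/save logic.
    public interface IPersonMapper
    {
        Person Find(int id);
        void Insert(Person person);
    }

    // For unit tests: nothing to load, and the domain object never notices.
    public class InMemoryPersonMapper : IPersonMapper
    {
        private readonly Dictionary<int, Person> _rows = new Dictionary<int, Person>();

        public Person Find(int id) => _rows[id];
        public void Insert(Person person) => _rows[person.Id] = person;
    }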
UPDATE:
Funnily enough, one day after my answer I had to deal with an old Core Data based project and came back to improve this answer. To make things clear, I do consider that "seems like a pattern" is not enough. For example, the implementations of the facade and adapter patterns are quite similar, but you name them differently depending on how you use them.
Is Core Data implementing data mapper?
I must say that my "not quite" should have been "definitely not!"
I have just been very angry because I needed to rename some fields and later add new ones. Although I know quite well how auto-migrations work with Core Data, I had forgotten how annoying they are.
How many times do you need some new field, rename something, or experiment until you get it right... and every single tiny change requires a full-blown database migration? With Data Mappers this never happens, because domain objects are perfectly decoupled. You only touch the database to catch up with the domain objects after you finish some new feature. Core Data forces you to bind every single detail of your domain objects at every single moment.
Boy, how sweet life was while I had forgotten that "tiny" annoyance of Core Data being the exact opposite of what you can achieve with Data Mappers.

How do I use entity framework with hierarchical data?

I'm working with a large hierarchical data set in SQL Server, modelled using the standard "EntityID, ParentID" kind of approach. There are about 25,000 nodes in the whole tree.
I often need to access subtrees of the tree, and then access related data that hangs off the nodes of the subtree. I built a data access layer a few years ago based on table-valued functions, using recursive queries to fetch an arbitrary subtree, given the root node of the subtree.
I'm thinking of using Entity Framework, but I can't see how to query hierarchical data like this. AFAIK there is no recursive querying in LINQ, and I can't expose a TVF in my entity data model.
Is the only solution to keep using stored procs? Has anyone else solved this?
Clarification: By 25,000 nodes in the tree I'm referring to the size of the hierarchical dataset, not to anything to do with objects or the Entity Framework.
It may be best to use a pattern called "Nested Set", which allows you to get an arbitrary subtree with one query. This is especially useful if the nodes aren't manipulated very often: Managing hierarchical data in MySQL.
In a perfect world, Entity Framework would provide facilities to save and query data using this pattern.
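To illustrate why that shape suits EF (a sketch with invented names): each node stores left/right bounds, and an entire subtree is just a range test, so plain LINQ expresses it without recursion.

    using System.Linq;

    public class Node
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public int Lft { get; set; }  // nested-set left bound
        public int Rgt { get; set; }  // nested-set right bound
    }

    public static class HierarchyQueries
    {
        // One non-recursive query returns the whole subtree under 'root'.
        public static IQueryable<Node> Subtree(IQueryable<Node> nodes, Node root)
        {
            return nodes
                .Where(n => n.Lft >= root.Lft && n.Rgt <= root.Rgt)
                .OrderBy(n => n.Lft); // depth-first document order
        }
    }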
Everything IS possible with Entity Framework, but you have to hack and slash your way into it. The database I am currently working against has too many "holder tables", since Points, for instance, is shared by both teams and users. Both users and teams can also have a blog.
When you say 25,000 nodes, do you mean navigational properties? If so, I think it could be tricky to get the data access in place. It's not hard to navigate, search, etc. with Entity Framework, but I tend to model on paper and then create the database based on how I want to navigate while using Entity Framework. Sounds like you don't have that option.
Thanks for these suggestions.
I'm beginning to realise that the answer is to remodel the data in the database - either along the lines of nested sets as Georg suggests, or maybe a transitive closure table, which I've just come across.
That way, I'm hoping to get two key benefits:
a) faster querying against arbitrary subtrees
b) a data model which no longer requires recursive querying - so perhaps bringing it within easy reach of the Entity Framework!
It's always amazing how so often the right answer to a difficult problem is not to answer it, but to do something else instead!

Linq to SQL, Entity Framework, Repository Pattern, and Dependency Injection

Stephan Walters' video on MVC and Models is a very good and light discussion of the various topics listed in this question's title. The one question listed in the notes that went unanswered was:
If you create an Interface / Repository pattern for Linq2SQL, do Linq2SQL's classes still cause a dependency on LINQ, even though you pass the classes out via ToList()?
The answer is probably an easy YES; however, what standard mechanism would you use to represent the data?
Let's say you have a Product entity that is made up of three tables (Prices, Text, and Photos); you could have sets of prices for different regions, different text for localization, and different photos. (Sounds like a builder pattern.) Would you create a slice of these tables, grabbing the right prices, text, and photos, into a single List? Since Lists may be proprietary, would you use a Dictionary object?
I thank you for your answers. I am very interested in the "standard and proper" way to do it rather than 101 possibilities.
Another quick question: is Entity Framework ready for a complicated database yet? There are a lot of constructs that Linq2SQL likes that EF does not. EF seems to require identity fields as primary keys (HAHA), but it seems like every demo does this. I want to use EF, but I constantly fail to make it work, falling back to Linq2SQL.
If you keep the L2S on the other side of the Repository facade (remember, that's all a Repository is: a facade), then you decouple the rest of your application from L2S. This means the job of the code behind your repository is to turn the L2S classes into "domain" objects (custom classes), and the Repository returns those.
In this sense, the Repository is returning fully formed "Product" objects with all their related Price, Text, and Photo data. This is called an Aggregate Root.
There shouldn't be a problem with Lists, since they are CLR objects.
As far as EF for advanced scenarios, my advice would be not yet, for the reasons you note.
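A sketch of that facade, assuming a generated L2S DataContext (here called ShopDataContext) with Prices, Texts, and Photos tables; all names are invented. The L2S types never leave the repository, which hands back a fully assembled Product aggregate:

    using System.Collections.Generic;
    using System.Linq;

    // Plain CLR aggregate root: no L2S attributes or base classes.
    public class Product
    {
        public int Id { get; set; }
        public decimal Price { get; set; }
        public string Text { get; set; }
        public IList<string> PhotoUrls { get; set; }
    }

    public interface IProductRepository
    {
        Product GetProduct(int id, string region, string language);
    }

    public class LinqToSqlProductRepository : IProductRepository
    {
        private readonly ShopDataContext _db; // the generated L2S context stays behind the facade

        public LinqToSqlProductRepository(ShopDataContext db) { _db = db; }

        public Product GetProduct(int id, string region, string language)
        {
            // Slice the right rows from the three tables and assemble the aggregate.
            return new Product
            {
                Id = id,
                Price = _db.Prices.Single(p => p.ProductId == id && p.Region == region).Amount,
                Text = _db.Texts.Single(t => t.ProductId == id && t.Language == language).Body,
                PhotoUrls = _db.Photos.Where(ph => ph.ProductId == id)
                                      .Select(ph => ph.Url).ToList()
            };
        }
    }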
The standard mechanism I'd use to represent the data is a Data Transfer Object. I would never return a LINQ to SQL or Entity Framework object across a service boundary, and I would hesitate to return one across a layer boundary of any kind. This is because these objects will serialize implementation-dependent data.