I'm using the database-first EF model, and BreezeJS for client-side data management.
Let's say I have a table [User] in my database, with a field called 'AccessCode'. I want to expose the User object through Breeze to the client side, but do not want to expose the AccessCode property. As far as I know, I have the following options:
1) Make the AccessCode property on the EF-generated entity class internal.
2) Create a DTO and omit the AccessCode property. The DTO is exposed to the client side.
The second option doesn't work well with Breeze, since we should be able to add/modify the User object directly from the client side.
Is there anything wrong with the first option? My concern is that if we make the property internal, the change will be wiped out the next time the model is regenerated. I know that if we want to enforce validations we can use partial classes alongside the entity class, but a partial class can't change the access modifier of an already-generated property, so we can't do that here.
What would be the best way to achieve the data-hiding here?
DTOs are almost always the right answer when you want to hide pieces of data from different layers.
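For example, here is a minimal sketch of the DTO approach; UserDto, the Users() action and the _context field are hypothetical names. The point is simply that AccessCode is never part of what gets serialized to the client:

    // Hypothetical DTO: AccessCode is deliberately omitted so it never leaves the server.
    public class UserDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // In a Web API controller, project the entity into the DTO shape.
    [HttpGet]
    public IQueryable<UserDto> Users()
    {
        return _context.Users                       // _context: your DbContext instance
                       .Select(u => new UserDto { Id = u.Id, Name = u.Name });
    }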
I have a domain model architecture in which my domain/business objects were created based on the problem domain, independent of any knowledge of the physical data model or persistence structures. So far I'm on track, because it's perfectly acceptable and often the case that there is an impedance mismatch between the domain model and the data model. A DBA created the database to capture the data they required, but it does not encapsulate the application's entire domain model or design.
The result: I have my own set of domain model objects. However, all of the fields that need to be persisted do exist somewhere within my domain model, but not necessarily in the shape my auto-generated .edmx POCO entities have them. So I have all the data; it's just not shaped exactly like the tables from which the auto-generated POCO entities are created.
I have seen a few posts on this topic like converting POCO entity to business entity and Entity Framework 4 with Existing Domain Model that make statements like the following:
"Create the entities in your entity data model with the same names as
your domain classes. The entity properties should also have the same
names and types as in the domain classes"
What!? No way. Why should I have to reshape my domain model into POCOs modeled exactly after the data model / table structure in the database? For example, in my case of having 5 given properties, 2 might be in class 'A' and 3 in class 'B', whereas an auto-generated POCO class has all 5 in a single class 'A'.
This is the entire point: I want separation between my object model and my data model, yet I still want to use an ORM like EF 5.0 to map between them. I do not want to have to create classes and properties shaped and named after the data model.
Right now my .edmx in EF 5.0 is generating the POCO classes for me, but my question is how to dissolve these and rewire everything to my domain objects, which contain all this data, just in a different shape?
By the way, any solution using a Code First approach is not an option, so please do not offer one. I need some guidance or (ideally) a tutorial using EF 5 (if possible, because EF 4 examples always have POCOs inheriting from ObjectContext) on wiring up my own business objects to the .edmx.
Any help or guidance is appreciated, thanks!
This sounds like exactly the use case for Entity Framework. I am making a few assumptions here. First, when you make this statement:
"I have a domain model architecture in which my domain/business objects were created based on the problem domain and independent of any knowledge of the physical data model or persistence structures."
That you mean this domain was created in the EF designer? But then you say:
"However all of the fields that need to be persisted do exist somewhere or another within my domain model, but not necessarily in the shape that my auto generated .edmx POCO entities have them."
This sounds to me like my first assumption is incorrect.
Next, you dismiss Code First? If your domain model/business objects are code-based and you want to persist them to a relational database, that is exactly the use case for Code First. You have the code; now you need to create your DbContext and map it to your physical model.
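For reference, a minimal sketch of that kind of Code First mapping (EF 5, DbContext API); DomainContext, the domain class A, FirstProperty and the table/column names are assumptions, not taken from the question:

    public class DomainContext : DbContext
    {
        public DbSet<A> As { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Map the domain shape onto the existing database shape.
            modelBuilder.Entity<A>().ToTable("SomeExistingTable");
            modelBuilder.Entity<A>()
                        .Property(a => a.FirstProperty)
                        .HasColumnName("existing_column_name");
        }
    }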
However you dismiss that... so some thoughts:
If you have a domain model of code-based business objects and you have an EDMX that is used for other things, I think you would want to create a repository layer that uses something like AutoMapper or manual projections to query your entities and return your business objects.
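A minimal sketch of such a repository layer, assuming a hypothetical EDMX-generated context (MyEntities) with a CustomerEntities set and a hand-written Customer domain class:

    public class CustomerRepository
    {
        private readonly MyEntities _context = new MyEntities();

        public IList<Customer> GetCustomers()
        {
            // Manual projection from the generated entity shape into the domain shape.
            return _context.CustomerEntities
                           .Select(e => new Customer { Id = e.Id, Name = e.Name })
                           .ToList();
        }
    }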
If you have a domain model of code-based business objects and an EDMX that is not used for anything other than persisting those business objects, I would say you need to express your domain in the EDMX and then map it onto your existing database. This is really the use case for an ORM. Having two domain models and mapping one onto the other, where one model matches your domain and one matches your database, adds an extra, unneeded layer of plumbing.
The latter approach is what is called "Model First" in EF parlance. There are several articles written about it, although the bulk of them just generate the database from the model. You would not do that step; rather, you would map your entities onto your existing database.
The basic steps are to "update from the database" without selecting any of the database objects (otherwise entities would be created). Or you can take your existing .edmx in the designer (which it sounds like you have) and modify the entities to match your business domain. Or just delete all the entities in your EDMX model, create the entities as you want them, and then map them all.
Here is a Jing screencast I made where I use the EF designer to bring in the model store (the only way to do this is to let it generate entities) and then delete the entities, allowing the store information to stay by clicking No when it asks if you want to delete the table info.
http://screencast.com/t/8eiPg2kcp
I didn't add the POCO generator to this, but if I did it would generate the Entities in the designer as POCO classes.
The statement quoted above is not suggesting that you rewrite your domain objects to match your POCOs; it is suggesting that you update the EDMX to match your domain model.
In your example you could create an entity in your EDMX that maps all 5 properties from both tables, and EF will manage the mapping to and from the single generated POCO onto your tables.
Of course this means that you then have duplicate domain objects and POCOs, meaning you would either have to manually convert your domain objects to POCOs when interacting with EF, or you could define your domain data objects as interfaces and provide partial implementations of the POCOs that essentially identify each POCO as a concrete implementation of a domain object.
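A minimal sketch of the interface idea, assuming a hypothetical ICustomer domain interface and a T4-generated partial Customer POCO whose property names happen to match:

    public interface ICustomer
    {
        int Id { get; set; }
        string Name { get; set; }
    }

    // The T4-generated class is partial, so this lives in a separate, hand-written file.
    public partial class Customer : ICustomer
    {
        // Nothing needed here when the generated property names already satisfy the
        // interface; otherwise implement the members explicitly and delegate to them.
    }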
There are probably several other ways to skin this particular cat, but I don't think you can provide predefined C# objects for use in an EDMX.
One approach might be to select into a ViewModel (suited to your particular business logic) and automatically map some data from the context into it, like so: https://stackoverflow.com/a/8588843/201648. This uses AutoMapper to map entity properties from an EF context into a ViewModel. It won't do everything for you, but it might make life a bit easier. If you're unhappy with the way this happens automatically, you can configure AutoMapper to do things differently (see Projection): https://github.com/AutoMapper/AutoMapper/wiki/Projection
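For illustration, a minimal sketch of mapping an entity into a ViewModel with AutoMapper's classic static API; Customer, CustomerViewModel and MyDbContext are assumed names, not from the linked answer:

    public CustomerViewModel GetCustomerViewModel(int id)
    {
        // Mapping configuration is normally registered once at application start.
        Mapper.CreateMap<Customer, CustomerViewModel>();

        using (var context = new MyDbContext())
        {
            var customer = context.Customers.Find(id);
            return Mapper.Map<CustomerViewModel>(customer);  // copies matching properties
        }
    }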
You might know this already, but it's also possible to automatically generate POCOs from your EDMX using T4: http://visualstudiogallery.msdn.microsoft.com/72a60b14-1581-4b9b-89f2-846072eff19d. If you define the templates to generate partial classes, you can then have another partial class with your custom properties for that POCO. That way most properties are populated automatically, but you can add custom properties that you populate with your own rules from the context/repository. This takes a lot of the monotony out of generating these, and you can then focus on reshaping the data using the technique above.
If you're seriously unhappy with both, you could always map a stored procedure to get the exact field names you want without needing to stuff around. This will of course affect how you work with the data, but I have done it before for optimisation purposes, or where a procedure already existed that did exactly what I wanted. See http://msdn.microsoft.com/en-us/data/gg699321.aspx
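A related, code-side alternative to a function import mapped in the designer is to call the procedure directly and project into a hand-rolled shape; CustomerSummary, GetCustomerSummary and MyDbContext below are hypothetical names:

    public class CustomerSummary
    {
        public int Id { get; set; }
        public string DisplayName { get; set; }
    }

    public IList<CustomerSummary> LoadSummaries(int customerId)
    {
        using (var context = new MyDbContext())
        {
            // SqlQuery materializes the result set into the shape you define, not an entity.
            return context.Database
                          .SqlQuery<CustomerSummary>("EXEC GetCustomerSummary @p0", customerId)
                          .ToList();
        }
    }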
I have been working with Entity Framework + self-tracking entities, and ran into a problem:
Is there any way to determine when an entity has been changed?
For example: if you have an entity User with two fields, Name and Password, you can know whether a User instance has been changed by checking:
<user>.ChangeTracker.State != ObjectState.Unchanged;
My problem is when the User has a Person, and the Person has an Email field. I want the corresponding User to be marked as changed when the Email field changes.
I have been trying methods such as <user>.StartTrackingAll(), but this does not work with navigation properties (or maybe I am doing something wrong). Some help about this can be found here.
Remember that the self-tracking entities are auto-generated via T4 templates, so the classes can't be modified.
First, when wanting to know whether any entity in an object graph has changed, you can recurse through all entities contained in trackable collections or one-to-one navigation properties of a root entity (User in your case). This way you can know if a Person inside the root entity has changed. This is actually how I check complex object graphs for changes in any of the contained entities, and also for critical validation errors (so the user can't persist them yet).
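A minimal sketch of such a recursive check, assuming the hypothetical User/Person graph from the question and the ChangeTracker members the STE template generates:

    public static bool HasChanges(User user)
    {
        if (user.ChangeTracker.State != ObjectState.Unchanged)
            return true;

        // Walk one-to-one navigation properties...
        if (user.Person != null &&
            user.Person.ChangeTracker.State != ObjectState.Unchanged)
            return true;

        // ...and recurse into any trackable collections (e.g. user.Roles) the same way.
        return false;
    }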
"Remember that the self-tracking entities are auto-generated via T4 templates, so the classes can't be modified."
Not true. First of all, you can modify the T4 template to generate more (complex) code to achieve what you want. And second, it generates partial classes, which can easily be extended with custom (non-generated) code.
If you change the email on the Person instance, only that instance is marked as modified. That is absolutely correct behaviour. What do you expect? Should a change to a property of a related entity propagate the modified state along its relations? That would make STEs completely useless, because any single change to the entity graph would mark every entity in the graph as modified, and each such modification causes an additional roundtrip to the database.
If you want to set the User as modified when you change the email, simply create some method or handle some event and call person.User.MarkAsModified().
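A minimal sketch of that idea, assuming the generated Person class has a User navigation property and that the STE classes implement INotifyPropertyChanged (the default STE template does); HookUserPropagation is a hypothetical helper you would call after constructing or materializing the Person:

    public partial class Person
    {
        public void HookUserPropagation()
        {
            PropertyChanged += (sender, e) =>
            {
                // Propagate the change to the parent User via the MarkAsModified mentioned above.
                if (e.PropertyName == "Email" && User != null)
                    User.MarkAsModified();
            };
        }
    }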
I am using the Service Layer --> Repository --> Entity Framework (Code-First) w/POCO objects approach, and I am having a hard time with updating entities.
I am using AutoMapper to map my Domain Objects to my View Models, and that works well for getting the data. Now how do I get those changes back into the database?
Using pure POCO objects, I would assume there is no change tracking, so I see my only option as handling it myself. Do you just make sure that your View Models have the EXACT same properties as your Domain Objects? What if I only change a field or two on the View Model? Won't the rest of the fields on the Domain Object get overwritten in the database with default values?
With that said, what is the best approach?
Thanks!
Edit
So what I am stumbling on is this; let's take, for example, a simple Customer:
1) The Controller has a service, CustomerService, and calls the service's GetCustomerByID method.
2) The Service calls into the CustomerRepository and retrieves the Customer object.
3) Controller uses AutoMapper to map the Customer to the ViewModel.
4) Controller hands the model to the View. Everything is great!
Now in the view you do some modifications of the customer and post it back to the controller to persist the changes to the database.
I would assume at this point the object is detached. So should the model have the EXACT same properties as the Customer object? And do you have to make hidden fields for each item that you do not want to show, so they can persist back?
How do you handle saving the object back to the database? What happens if your view/model only deals with a couple of the fields on the object?
If you're using EF Code First, i.e. the DbContext API, then you still have change tracking, which is taken care of by your context class.
After making changes to your objects, all you have to do is call SaveChanges() on your context, and that will persist the changes to your database.
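For example, a minimal sketch assuming a hypothetical MyDbContext with a Customers DbSet:

    public void RenameCustomer(int customerId, string newName)
    {
        using (var context = new MyDbContext())
        {
            var customer = context.Customers.Find(customerId);  // tracked by the context
            customer.Name = newName;                             // change is detected automatically
            context.SaveChanges();                               // issues the UPDATE
        }
    }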
EDIT:
Since you are creating a "copy" of the entity using AutoMapper, it's no longer attached to your context.
I guess what you could do is something similar to what you would do in ASP.NET MVC (with UpdateModel): get the original entity from your context, take your ViewModel (which may contain changed properties) and update the old entity, either manually (just the modified properties) or using AutoMapper. Then persist the changes using context.SaveChanges().
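A rough sketch of that approach in an MVC action; CustomerViewModel, MyDbContext and the property names are assumptions:

    [HttpPost]
    public ActionResult Edit(CustomerViewModel model)
    {
        using (var context = new MyDbContext())
        {
            var customer = context.Customers.Find(model.Id);  // original, attached entity
            customer.Name = model.Name;                        // copy only what the view edits
            customer.Email = model.Email;
            // or, with a configured map: Mapper.Map(model, customer);
            context.SaveChanges();
        }
        return RedirectToAction("Index");
    }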
Another solution would be to send the model entity as [part of] the ViewModel. This way, you'll have your entity attached to the context and change tracking will still work.
Hope this helps :)
You are absolutely right that with a detached object you are responsible for informing the context about changes in your detached entity.
The basic approach is to just set the entity as modified. This works for scalar and complex properties, but it doesn't work for navigation properties (except FK relations); for further reading about the problems with navigation properties check this answer (it relates to EFv4 and the ObjectContext API, but the same problems exist with the DbContext API). The disadvantage of this approach is that all fields in the database will be updated. If you only want to modify a single field, you still have to correctly fill the others, or your database record will be corrupted.
There is a way to explicitly define which fields have changed: set the modified state per property instead of for the whole entity. It is a little bit harder to solve this generically, but I tried to show one way for EFv4 and for EFv4.1.
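A minimal sketch of both variants with the DbContext API, assuming a hypothetical detached Customer and MyDbContext:

    public void SaveDetached(Customer customer)
    {
        using (var context = new MyDbContext())
        {
            // Whole-entity approach: attaches the entity and marks every column for UPDATE.
            context.Entry(customer).State = EntityState.Modified;

            // Per-property alternative: attach as Unchanged, then flag only the changed columns.
            // context.Customers.Attach(customer);
            // context.Entry(customer).Property(c => c.Name).IsModified = true;

            context.SaveChanges();
        }
    }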
I agree with @AbdouMoumen that it's much simpler to use the model entities at the view level. The service layer should provide an API to persist those entities in the data store (db). The service layer shouldn't dumbly duplicate the repository layer (i.e. Save(entity) for every entity) but rather provide a high-level save for an aggregate of entities. For instance, you could have a Save(order) in the service layer which results in updating more basic entities like inventory, customer and account.
I'm using DataNucleus as a JPA implementation to store my classes in my web application. I use a set of converters which all have toDTO() and fromDTO().
My issue is, that I want to avoid the whole DB being sent over the wire:
If I lazy load, the converter will try to access ALL the fields and load them (resulting in very eager loading).
If I don't lazy load, I'll get a huge part of the DB, since a user contains groups, groups contain users, and so on.
Is there a way to explicitly load some fields and leave the others as NULL in my loaded class?
I've tried the DataNucleus docs with no luck.
Your DTOs are probably too fine-grained, i.e. don't plan to have a DTO per JPA entity. If you have to use DTOs, then make them more coarse-grained and construct them manually.
Recently we have had the whole "to DTO or not to DTO, that is the question" discussion AGAIN. The requirement for them (especially in the context of a JPA app) is often no longer there, but one of the arguments FOR DTOs tends to be that the view has coarser data requirements.
To load only the data you really require, you would need to use a custom select clause containing only those elements that you are about to use for your DTOs. I know how painful this is, especially when it involves joins, which is why I created Blaze-Persistence Entity Views, which takes care of making the query efficient.
You define your DTO as an interface with mappings to the entity, using the attribute name as the default mapping. This looks very simple and a lot like a subset of an entity, though it doesn't have to. You can use any JPQL expression as the mapping for your DTO attributes.
I would like to use both EF and MVVM and am trying to see how they fit together. I can't find much in the way of examples, so I hope you guys can answer a few questions.
Let's say I have a single table in a database called Customer. I run the EF designer and get a data model.
The next step is to run some LINQ to get data out of the data model. Let's create a new class called CustomerRepository to do this.
Now I'm guessing the Model would call CustomerRepository.GetCustomers to get a list of customers.
Here is my question - CustomerModel has a list of customer objects that were defined by EF in the data model. How do I add validation attributes or any kind of validation to it?
There just seems to be a bit of a disconnect between EF and MVVM. I'm sure some of you have hit this before - any ideas? Any better ways of approaching this?
Cheers
Steve
The validation, the business rules, the presentation of your Customer object should live in the ViewModel that will serve as a controller or presenter for your View.
In terms of how to create that ViewModel, you have a couple of options:
Include the Model as a property of the VM, and pass the model instance into the VM's constructor. You can then expose the Customer's properties and just wire them through to the underlying Model's corresponding properties.
Generate a ViewModel using T4 templates and Reflection (or preferably Introspection) to 'read' the Model, and generate the properties that will map directly to it.
Now you can add custom validation rules to the VM, such that when the appropriate command is sent from the View you can perform your business rules, and if appropriate you can update the Model using EF's API to persist those changes back to the database...
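A minimal sketch of the first option, assuming a hypothetical EF-generated Customer entity and WPF-style validation via IDataErrorInfo:

    public class CustomerViewModel : INotifyPropertyChanged, IDataErrorInfo
    {
        private readonly Customer _customer;

        public CustomerViewModel(Customer customer)
        {
            _customer = customer;
        }

        // Wrap the entity's properties so the View binds to the VM, not to EF's class.
        public string Name
        {
            get { return _customer.Name; }
            set { _customer.Name = value; OnPropertyChanged("Name"); }
        }

        // Validation lives here instead of on the generated entity.
        public string this[string columnName]
        {
            get
            {
                if (columnName == "Name" && string.IsNullOrWhiteSpace(Name))
                    return "Name is required.";
                return null;
            }
        }

        public string Error { get { return null; } }

        public event PropertyChangedEventHandler PropertyChanged;
        private void OnPropertyChanged(string name)
        {
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(name));
        }
    }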