Is there a server-side equivalent of the Breeze metadata helper? With the client-side helper, I have to know the model at design time and keep the code in sync whenever the server model is updated, which is a big maintenance burden. With a server-side helper, I could write generic code that gets entity and relationship information at run time. That would not only reduce maintenance work but also make the code more reusable.
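If your server is .NET with Entity Framework, entity and relationship information is available at run time from EF's MetadataWorkspace (and, if I recall correctly, Breeze's own EFContextProvider can generate the Breeze metadata server-side). A minimal sketch, assuming EF6; the context you pass in is whatever your application uses:

```csharp
using System;
using System.Data.Entity;
using System.Data.Entity.Core.Metadata.Edm;
using System.Data.Entity.Infrastructure;

public static class ModelInspector
{
    // Enumerates entity types and their relationships at run time,
    // so nothing has to be kept in sync by hand.
    public static void DumpModel(DbContext context)
    {
        var workspace = ((IObjectContextAdapter)context).ObjectContext.MetadataWorkspace;

        foreach (var entity in workspace.GetItems<EntityType>(DataSpace.CSpace))
        {
            Console.WriteLine(entity.Name);
            foreach (var nav in entity.NavigationProperties)
                Console.WriteLine("  -> {0} ({1})",
                    nav.Name, nav.ToEndMember.RelationshipMultiplicity);
        }
    }
}
```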
I recently had a very interesting discussion with a colleague. He has years of coding experience but no engineering background whatsoever. He seriously tried to persuade me that using the same model for the presentation, business, and persistence of entities is actually considered a best practice.
We work on the same project. It's basically a Web API that delivers JSON and uses a MongoDB database. For all our entities, the JSON response, the business classes in the code, and the MongoDB documents all have the same structure.
I think his approach is totally wrong. He argued that keeping those three things separated, as in the MVVM and MVC patterns, is old-school thinking, because every time you create a new business entity you must "write mapping code" for the JSON response as well as for the MongoDB documents.
When it comes to business entities, his design approach is really strange: he never uses inheritance or aggregation/composition. His logic is this: "When I design business entities, I must keep frontend developers in mind. The JSON response must have a structure that is easily usable and accessible by frontend code."
All our business entities in the code, and I mean all, show no sign of inheritance, aggregation, or composition. No associations whatsoever. Every time a new feature comes in, he "designs" a completely new entity. He also changes the structure of entities already running in production (by adding new fields for new features), which not only affects how the MongoDB documents are (and can be) loaded, it also changes the JSON response. In other words, this completely wrecks the stability of the code and the usability of the API.
What do you think?
We are currently rewriting an existing internal ASP.NET Web Forms application. Our application consists of a Web API back end that uses Entity Framework 6 for data access and a front end that uses AngularJS.
We have an existing large database for which I've created EF models using the Code First from an Existing Database approach, and we are using data transfer object classes as inputs/outputs to our API methods so we aren't directly exposing our model classes. So basically I'm trying to become proficient with EF, Web API, and AngularJS all at the same time. For the most part I'm fairly comfortable with the latter two, but EF is the one I haven't completely gotten comfortable with. I've watched a lot of the videos on Microsoft Virtual Academy, but this is the first time I've had hands-on experience with it.
We've been working on this application for a few months, and so far we've only had to work with CRUD operations on our entities (POCO DTOs), which are flat objects with simple properties. However, we've finally come across situations where we need to deal not only with our classes but also with properties that are classes themselves: a parent-child relationship.
Therefore, I have the following questions:
I see that when we have a proper foreign key relationship in our DB, virtual properties are created in EF, which, from what I recall, exist to support lazy loading. However, lazy loading isn't really feasible in this environment, where we are using web services (Web API). Our object model does allow for some really large class hierarchies, where a fully populated object and its children would mean a large amount of data being passed around when that really isn't necessary; in most cases a first-level object is all we need. In some cases, however, we do want to populate child classes, so my question is: how do we do that, and where? I've looked at the automatically generated code in the DB context, but we have also used scaffolded code to create our controllers. In which place do we need to do this? I've seen code samples showing how to do it, but none said specifically where that code lives. It appears to be within a controller, but I could be wrong. (See the sketch following this post.)
If we do allow a hierarchy of objects two or more levels deep, does EF automatically handle operations (updates, deletes, etc.)? For example, if we have a "Company" object with a collection of "Customer" objects and we delete the "Company" object, do the related "Customer" objects get deleted too? Also, is a multi-step operation like that automatically performed within a transaction, or do we need to set that up explicitly?
Since the model classes and the DB context are automatically generated, modifying them is generally bad practice because my changes could be overwritten, so I'm assuming the controller code is where I want to make my changes. I'm aware of database migrations, but I have no experience with them, and I'm sure I'll need them at some point because I'm fairly confident our database doesn't yet have all the foreign key relationships EF needs to do everything we want.
I know this is a long post, but I'd appreciate any guidance on how to do these things. It's not just me dealing with this: two other developers on my team are working on this project, and we're all equally inexperienced with it. Thanks.
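For question 1: the usual EF answer is eager loading with Include(), and that code lives wherever the query is issued - typically the controller (or a repository/service it calls), not the generated context. A minimal sketch with hypothetical names (MyDbContext, Company, Customers stand in for your model):

```csharp
using System.Data.Entity;   // brings the Include() extension method into scope
using System.Linq;
using System.Web.Http;

public class CompaniesController : ApiController
{
    private readonly MyDbContext db = new MyDbContext();

    public CompaniesController()
    {
        // With Web API, disable lazy loading so the serializer can't
        // accidentally pull in the whole object graph.
        db.Configuration.LazyLoadingEnabled = false;
        db.Configuration.ProxyCreationEnabled = false;
    }

    // First-level object only: navigation properties stay unloaded.
    public Company Get(int id)
    {
        return db.Companies.Find(id);
    }

    // Opt in to the children only where you need them.
    public Company GetWithCustomers(int id)
    {
        return db.Companies
                 .Include(c => c.Customers)           // eager load in one query
                 .SingleOrDefault(c => c.Id == id);
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing) db.Dispose();
        base.Dispose(disposing);
    }
}
```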
For the purpose of sending data across a web service, I'd suggest creating a DTO to hold the data you want to send and mapping your entities to the DTO instead of trying to send the entities themselves in your payload. It also shields your API from changes to your entities.
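A minimal hand-rolled example of that mapping; the DTO shapes and names are placeholders, and Company is assumed to be one of your entities:

```csharp
using System.Collections.Generic;
using System.Linq;

// Only the fields the client actually needs go over the wire.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CompanyDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<CustomerDto> Customers { get; set; }   // null unless requested
}

public static class CompanyMapper
{
    public static CompanyDto ToDto(Company entity, bool includeCustomers = false)
    {
        return new CompanyDto
        {
            Id = entity.Id,
            Name = entity.Name,
            Customers = includeCustomers
                ? entity.Customers
                        .Select(c => new CustomerDto { Id = c.Id, Name = c.Name })
                        .ToList()
                : null
        };
    }
}
```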
Cascading deletes are configurable via the fluent API; if I recall correctly, EF6's convention is to cascade deletes on required relationships by default. A single SaveChanges() call already runs inside a transaction, but an operation spanning multiple SaveChanges() calls needs an explicit one.
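In EF6 both look roughly like this (a sketch; the entity shapes are illustrative):

```csharp
using System.Collections.Generic;
using System.Data.Entity;

public class Company
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Customer> Customers { get; set; }
}

public class Customer
{
    public int Id { get; set; }
    public int CompanyId { get; set; }
    public virtual Company Company { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Company> Companies { get; set; }
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Customer>()
            .HasRequired(c => c.Company)
            .WithMany(co => co.Customers)
            .WillCascadeOnDelete(true);   // deleting a Company deletes its Customers
    }
}

public static class CompanyOperations
{
    public static void DeleteCompanyMultiStep(AppDbContext db, Company company)
    {
        // One SaveChanges() call is already atomic; wrap several explicitly.
        using (var tx = db.Database.BeginTransaction())
        {
            db.Companies.Remove(company);
            db.SaveChanges();
            // ... further steps against db ...
            db.SaveChanges();
            tx.Commit();
        }
    }
}
```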
Not exactly sure what you are asking here. In general, how your entities/tables change depends on whether you are using database-first or code-first. If you are using database-first (you will have a .edmx file in your solution whose model matches your schema), you update the SQL directly and refresh your entity model via the .edmx. If you use code-first, you change the entities the way you want them and run a database migration to bring your database in line.
MSDN article about code-first migration: https://msdn.microsoft.com/en-us/data/jj591621.aspx
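For a taste of what a code-first migration looks like, here is a sketch of the class scaffolded by Enable-Migrations / Add-Migration in the Package Manager Console; the table and column names are purely illustrative:

```csharp
using System.Data.Entity.Migrations;

public partial class AddCustomerCompanyFk : DbMigration
{
    public override void Up()
    {
        AddColumn("dbo.Customers", "CompanyId", c => c.Int(nullable: false));
        CreateIndex("dbo.Customers", "CompanyId");
        AddForeignKey("dbo.Customers", "CompanyId", "dbo.Companies", "Id",
                      cascadeDelete: true);
    }

    public override void Down()
    {
        DropForeignKey("dbo.Customers", "CompanyId", "dbo.Companies");
        DropIndex("dbo.Customers", new[] { "CompanyId" });
        DropColumn("dbo.Customers", "CompanyId");
    }
}
```

Run Update-Database afterwards to apply it.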
I am writing a RIA service. I need to decide where to put the business logic.
I see two possibilities:
Use the CRUD methods called by SubmitChanges and put the business logic there. The main problem is that in some cases it will take extra effort to detect what has changed in my object, since I don't really know which fields changed. The methods themselves are also expected to become big, since they need to deal with multiple entity changes.
Give the client specific invoke/named-update methods for some of the update operations. These would be called by the UI when making a specific data-model change, so the effort on the server side would be smaller (the server knows better what operation is being performed) and the server methods might be less complicated.
Amit
The second approach makes more sense to me. The client is used only as a terminal to store and show data, but there is no contraindication to invoking different server functions as long as you can keep your data consistent. A huge perk is simplicity; I've been taught: the easier, the better.
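To illustrate the difference, a rough sketch only: the domain service and entity are made up, and the exact attribute set depends on your WCF RIA Services version, so treat this as an assumption rather than a recipe:

```csharp
using System.ServiceModel.DomainServices.Server;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
}

public class OrdersDomainService : DomainService
{
    // Option 1: generic CRUD update driven by SubmitChanges - the server
    // must diff the incoming entity to discover what actually changed.
    public void UpdateOrder(Order order)
    {
        // compare against the stored Order, then persist...
    }

    // Option 2: an intent-revealing operation - the server knows exactly
    // which change is requested, so the method stays small and focused.
    [Invoke]
    public void ApproveOrder(int orderId)
    {
        // load the order, validate the transition, set Status...
    }
}
```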
I have an architectural question about EF and WCF.
We are developing a three-tier application using Entity Framework (with an Oracle database), and a GUI based on WPF. The GUI communicates with the server through WCF.
Our data model is quite complex (more than a hundred tables), with lots of relations. We are currently using the default EF code generation template, and we are having a lot of trouble with tracking the state of our entities.
The user interfaces on the client are also fairly complex; sometimes an object graph of more than 50 objects is sent down to a single user interface, with several layers of aggregation between the entities. An important goal is to be able to easily decide, in the BLL layer, which objects have been modified on the client and which have been newly created.
What would be the clearest approach to managing entities and entity states between the two layers? Self-tracking entities (STEs)? What are the most common pitfalls in this scenario?
Could those who have used STEs in a real production environment share their experiences?
STEs are supposed to solve this scenario, but they are not a silver bullet. I have never used them in a real project (I don't like them), but I have spent some time playing with them. The main pitfalls I found are:
Coupling your data layer to your client application - you must share the entity assembly between the projects (it also means it is a .NET-only solution, but that should not be a problem in your case)
Large data transfers - you pass 50 entities to the client, the client changes a single entity, and you pass all 50 entities back. It will take some fighting with STEs to avoid passing unnecessary data
Unnecessary updates to the database - normally, when EF works with attached entities, it tracks changes at the property level, but STEs track changes at the entity level. So if the user modifies a single property of an entity with 100 properties, the generated UPDATE will set all of them. Avoiding that means modifying the template and adding property-level change tracking (see the sketch after this list).
The client application should use STEs directly (binding STEs to the UI) to get the most out of their self-tracking ability. Otherwise you will have to implement code that moves data from the UI back into the self-tracking entity and sets its state.
They are not proxied, so they don't support lazy loading (in the case of a WCF service that is good behavior)
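To illustrate the property-level alternative mentioned above, here is plain EF6 with a detached entity; the types are placeholders:

```csharp
using System.Data.Entity;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class MyDbContext : DbContext
{
    public DbSet<Person> People { get; set; }
}

public static class PersonUpdater
{
    // Attach the detached entity and flag only the edited field, so the
    // generated UPDATE touches one column instead of all of them.
    public static void UpdateName(MyDbContext db, Person detached)
    {
        db.People.Attach(detached);
        db.Entry(detached).Property(p => p.Name).IsModified = true;
        db.SaveChanges();   // UPDATE People SET Name = ... WHERE Id = ...
    }
}
```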
I described a way to solve this without STEs today. There is also a related question about tracking over web services (check @Richard's answer and the provided links).
We have developed a layered application with STEs: a user-interface layer with ASP.NET and Model-View-Presenter, a business layer, a WCF service layer, and a data layer with Entity Framework.
When I first read about STEs, the documentation said they are easier than using custom DTOs - the "quick and easy way" - and that only really big projects should use hand-written DTOs.
But we ran into a lot of problems using STEs. One of the main problems is that if your entities come from multiple service calls (for example, in a master-detail view) and thus from different contexts, you will run into problems when composing the graphs on the server and trying to save them. So our server functions still have to check manually which data has changed and then recompose the object graph on the server. A lot has been written about this topic, but it's still not easy to fix.
Another problem we ran into was that STEs wouldn't work without WCF: the change tracking is only activated when the entities are serialized. We had originally designed an architecture in which WCF could be disabled and the service calls would simply run in-process (this was a requirement for our unit tests, which run much faster without WCF and are easier to set up). It turned out that STEs are not the right choice for that.
I've also noticed that developers sometimes included a lot of data in their queries and just sent it all to the client instead of really thinking about which data they needed.
After this project, we decided to use custom DTOs with AutoMapper from server to client, and to use the POCO template in our data layer on the new project.
So, since you already state that your project is big, I would opt for custom DTOs and service functions that are specifically created for one goal, instead of "Update(Person person)" functions that send a lot of data.
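A minimal AutoMapper setup for that, sketched with AutoMapper's instance API and placeholder types:

```csharp
using AutoMapper;

public class Person    { public int Id { get; set; } public string Name { get; set; } }
public class PersonDto { public int Id { get; set; } public string Name { get; set; } }

public static class Mapping
{
    // Flat, same-named properties map automatically; nothing else crosses the wire.
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg => cfg.CreateMap<Person, PersonDto>())
            .CreateMapper();

    public static PersonDto ToDto(Person entity)
    {
        return Mapper.Map<PersonDto>(entity);
    }
}
```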
Hope this helps :)
I'm developing a program that allows users to input some information, which is then stored and used to dynamically create an image.
I was going to use Entity Framework to do the work with the data, but then I obviously need a way to generate the image. My thinking was that the "correct" way to do this is either to somehow extend the data entity to include a method like "CreateImage", or alternatively to create a separate class outside EF called "DataImage" with a "generate" method.
Extending the EF entity seems the "pure" way to do this, but I'm not sure how, or whether it's more practical than using the separate class.
Any thoughts on the best way to do this and how to do it using EF?
Putting this functionality in the EF entities would be a major violation of the Single Responsibility Principle (SRP), and breaking SRP has cascading negative effects as your application grows.
The approach you most likely want to take instead is a totally separate, encapsulated image-generation service that accepts interfaces your EF entities implement. This decouples your image service from your data access completely; you get full testability and zero extra dependencies right away.
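A rough sketch of what that could look like; all names are hypothetical, and System.Drawing stands in for whatever imaging library you use:

```csharp
using System.Drawing;

// A narrow interface: the image service needs data, not EF.
public interface IImageSource
{
    string Title { get; }
    string Description { get; }
}

// A partial class keeps the generated entity code untouched; this assumes
// the entity already has matching Title/Description properties.
public partial class UserSubmission : IImageSource { }

// The service has no reference to EF at all, so it is trivially testable.
public class ImageGenerationService
{
    public Bitmap CreateImage(IImageSource source)
    {
        var bmp = new Bitmap(400, 200);
        using (var g = Graphics.FromImage(bmp))
        {
            g.Clear(Color.White);
            g.DrawString(source.Title, SystemFonts.DefaultFont, Brushes.Black, 10, 10);
            g.DrawString(source.Description, SystemFonts.DefaultFont, Brushes.Gray, 10, 40);
        }
        return bmp;
    }
}
```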