Can’t find the best way to apply code design techniques in software development - MVVM

[Pre]
I have to say that I'm a complete newbie trying to fit together important puzzle pieces such as DDD, TDD, MVVM, and EF Core. I have about 10 years of Windows Forms development experience, all done in completely the wrong manner, and after joining Pluralsight I understood that I had simply wasted those 10 years, which is really sad :).
[Problem description]
I have an app that I want to rewrite from scratch using the latest and greatest techniques I've learned over the last 6 months on Pluralsight, but this new knowledge is holding me back, because I'm simply afraid I'll do it wrong again... (that's silly, I know, but it is what it is).
So, back to my questions. I have a big problem domain and pretty well-documented business logic, which I have to turn into code. I understand that my starting point is designing the data layer, and for this purpose I want to use Entity Framework Core (I saw Julie Lerman's course on Pluralsight and I think she is amazing; she inspired me to use EF Core as the ORM for my app). But at the same time, my lack of experience produces more questions than what I've learned on Pluralsight can answer, and I will try to write them all down (please don't judge me too harshly):
1) It looks like I will need 2 or even more data model projects in my solution. Here is why: I have multiple document set types, and each type contains more than one reference book used to generate unique file names and data sheets. But it seems weird to me to have 3 data model projects such as MyApp.PackType1.DataModel, MyApp.PackType2.DataModel, and so on, each with EF Core installed and each generating its own database from its own EF data context. Isn't that very redundant, or is this the correct way?
2) I don't understand how to join these multiple data model projects, including a Shared Kernel, into one coherent model.
3) I don't understand the best way to design my data classes. Should they be plain POCOs, or can I design them as nicer classes with private fields and public properties? What are the best practices here?
4) I also don't understand the best practice for using the MVVM pattern on top of all that - and is MVVM applicable at all in this case?
5) Should I keep my tests in separate projects like MyApp.PackType1.DataModel.Tests, or in the same project?
Best regards,
Maks!
P.S.
Apologies for the unclear definitions and questions; English isn't my native language.

It's very hard to answer your question because you have asked about a lot of details, but I'm going to provide a brief answer and I hope it will be helpful.
1) You can have a single model for your entities (DDD) and create sub-models from it in your end-level projects (Web API or UI).
2) Read point #1.
3) You should create an entity-layer project that represents your database, and then create DTOs for specific scenarios.
4) From my point of view, use Angular - you could use another UI framework such as React or Vue.js, but I prefer Angular - to build the UI and consume the .NET Core Web API from the client.
5) Create unit tests and integration tests for your Web API projects; additionally, you can use an in-memory database provider for the tests.
Maybe this guide will be useful: https://www.codeproject.com/Articles/1160586/Entity-Framework-Core-for-Enterprise
Regards

Hm, multiple DbContexts (models) usually come about when you have distinct databases. The general rule is one context = one database. Exceptions can occur when there are many tables that can be grouped functionally, but there are downsides to that approach.
A DbContext already combines the Repository and Unit of Work patterns, with each DbSet acting as a repository over an individual table. Layering a custom repository provider with a Unit of Work on top would allow you to make everything "appear" as a single database, hiding the complexity from the front end.
Your entity descriptions are usually created as plain POCOs. You can get creative with different DTOs on top of them.
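For example, a minimal sketch of a plain POCO entity and a DTO shaped for one specific screen (all names are hypothetical):

    using System;

    // Plain POCO entity: no base class, no EF-specific attributes required.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public DateTime CreatedUtc { get; set; }
    }

    // DTO shaped for a specific scenario, e.g. a list view.
    public class CustomerListItemDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }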
In a nutshell, an MVVM pattern goes like this (a small sketch follows the list):
A request comes from the UI to a controller.
The controller possibly issues multiple calls to the data layer to gather data.
The data is assembled into a single ViewModel (everything the page needs).
The ViewModel is returned to the UI.
The beauty of this approach is the single round trip (request/response) to the UI.
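A minimal sketch of that flow (the repositories, OrderDto, and page names are hypothetical; CustomerListItemDto is from the sketch above):

    using System.Collections.Generic;
    using Microsoft.AspNetCore.Mvc;

    // Everything the page needs, assembled into a single ViewModel.
    public class CustomerPageViewModel
    {
        public CustomerListItemDto Customer { get; set; }
        public List<OrderDto> RecentOrders { get; set; }
    }

    public class CustomerController : Controller
    {
        private readonly ICustomerRepository _customers;
        private readonly IOrderRepository _orders;

        public CustomerController(ICustomerRepository customers, IOrderRepository orders)
        {
            _customers = customers;
            _orders = orders;
        }

        public IActionResult Details(int id)
        {
            // Possibly several data-layer calls...
            var vm = new CustomerPageViewModel
            {
                Customer = _customers.GetListItem(id),
                RecentOrders = _orders.GetRecentForCustomer(id)
            };
            // ...but a single round trip back to the UI.
            return View(vm);
        }
    }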
A separate project, in my opinion. There are also techniques to spoof the database connection with EF so that you are not testing against "live" data (see the sketch below).
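For example, with EF Core's InMemory provider (a minimal sketch; MyAppContext is assumed to be your EF Core context with a constructor taking DbContextOptions, and Customer is from the sketch above):

    using System.Linq;
    using Microsoft.EntityFrameworkCore;
    using Xunit;

    public class CustomerDataTests
    {
        [Fact]
        public void Added_Customer_Can_Be_Queried()
        {
            // Each distinct database name gives an isolated in-memory database.
            var options = new DbContextOptionsBuilder<MyAppContext>()
                .UseInMemoryDatabase(databaseName: "Added_Customer_Can_Be_Queried")
                .Options;

            using (var context = new MyAppContext(options))
            {
                context.Customers.Add(new Customer { Name = "Test" });
                context.SaveChanges();
            }

            // A fresh context over the same named database sees the saved data.
            using (var context = new MyAppContext(options))
            {
                Assert.Equal(1, context.Customers.Count());
            }
        }
    }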
That CodeProject article will come in handy.

Related

How many layers should a small REST system have?

I am building a simple REST service, deployable using Spring Boot. I decided to create it by writing a failing acceptance test first, followed by TDD until it's green.
My module is pretty simple; I have 3 APIs:
Retrieve a list of data from the datastore.
Add an item to the datastore.
Delete an item from the datastore.
I feel like it is a good idea to abstract the datastore, maybe backed by a Map data structure for testing purposes, and use it with either a NoSQL or SQL DB for deployments/releases and end-to-end testing.
On the service-layer side I am unsure, since it would just delegate calls to the repository with no logic.
So the standard approach would be controller -> service -> repository. In my case the service does not do much (possibly some exception handling, but not more), and I would end up with an extra interface and implementation as well as a few more lines of code. I feel like going for a controller -> repository solution in my situation, but it is not a practice I have seen, and I'm not sure how others would view it.
What's the best way to implement this sort of system?
I feel like it is a good idea to abstract the datastore
You are right. This abstraction is called a 'Repository' in DDD (Domain-Driven Design), for example.
On the service-layer side I am unsure, since it would just delegate calls to the repository with no logic.
I'm pretty sure there is data that you want to validate, so you should have a layer in the middle (e.g. the domain layer) that is in charge of this validation.
Even so, if you feel your application is simple and doesn't require such layers, go without them. You will have a less supple design, but more simplicity at first. Be careful, though: as the app evolves, you could run into trouble.
Hope this helps.
This is rather an opinion-based question, but if you are asking whether a 3-layer architecture is a must, I say no. Be pragmatic: if you don't see a reason for a class/layer/module to exist, it does not need to exist.
A repository has a purpose (to store/retrieve), and the API layer has a purpose: to offer those things over HTTP. A sketch of that shape follows.
Here is an article about building small services with the Spark framework: https://dzone.com/articles/building-simple-restful-api
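To make the controller -> repository shape concrete, here is a minimal sketch (written in C#/ASP.NET Core terms with hypothetical names; the Spring Boot equivalent maps one-to-one). The repository interface is the datastore abstraction, with a Map-backed implementation for tests:

    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using Microsoft.AspNetCore.Mvc;

    // The datastore abstraction; swap the Map-backed version for a real DB later.
    public interface IItemRepository
    {
        IEnumerable<string> GetAll();
        void Add(string item);
        void Delete(string item);
    }

    // In-memory, Map-backed implementation for tests and early development.
    public class InMemoryItemRepository : IItemRepository
    {
        private readonly ConcurrentDictionary<string, bool> _items =
            new ConcurrentDictionary<string, bool>();

        public IEnumerable<string> GetAll() => _items.Keys;
        public void Add(string item) => _items.TryAdd(item, true);
        public void Delete(string item) => _items.TryRemove(item, out _);
    }

    // The controller talks to the repository directly - no pass-through service layer.
    [ApiController]
    [Route("api/items")]
    public class ItemsController : ControllerBase
    {
        private readonly IItemRepository _repo;
        public ItemsController(IItemRepository repo) => _repo = repo;

        [HttpGet] public IEnumerable<string> Get() => _repo.GetAll();
        [HttpPost] public void Post([FromBody] string item) => _repo.Add(item);
        [HttpDelete("{item}")] public void Delete(string item) => _repo.Delete(item);
    }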

Entity Framework 6 Database-First and Onion Architecture

I am using Entity Framework 6, database-first. I am converting the project to the Onion Architecture to move towards better separation of concerns. I have read many articles and watched many videos but am having trouble deciding on my solution structure.
I have 4 projects: Core, Infrastructure, Web & Tests.
From what I've learned, the .edmx file should be placed in my "Infrastructure" project. However, I have also read about using the Repository and Unit of Work patterns to assist with decoupling EF and with dependency injection.
With this being said:
1) Will I have to create repository interfaces under Core for ALL entities in my model? If so, how would one maintain this with a huge database? I have looked into AutoMapper but found issues with it presenting IEnumerables vs. IQueryables, though there is an extension available to help with this. I can dig deeper into that route, but I want to hear back first.
2) As an alternative, should I leave my .edmx in Infrastructure and move the .tt T4 files for my entities to Core? Does this introduce tight coupling, or is it a good solution?
3) Would a generic repository interface work well with the suggestion you provide? Or does EF6 already address the Repository and UoW patterns?
Thank you for looking at my question and please present any alternative responses as well.
I found a similar post here that was not answered:
EF6 and Onion architecture - database first and without Repository pattern
Database-first doesn't completely rule out Onion Architecture (aka Ports and Adapters, or Hexagonal Architecture, so if you see references to those, they're the same thing), but it's certainly more difficult. Onion Architecture and the separation of concerns it allows fit very nicely with domain-driven design (I think you mentioned on Twitter that you'd already seen some of my videos on this subject on Pluralsight).
You should definitely avoid putting the EDMX in the Core or Web projects - Infrastructure is the right location for it. At that point, with database-first, you're going to have EF entities in Infrastructure. You want your business objects/domain entities to live in Core, though. You then have basically two options if you want to continue down this path:
1) Switch from database-first to code-first (perhaps using a tool) so that you can have POCO entities in Core.
2) Map back and forth between your Infrastructure entities and your Core objects, perhaps using something like AutoMapper. Before EF supported POCO entities, this was the approach I followed, and I would write repositories that dealt only with Core objects but internally mapped to EF-specific entities.
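A minimal sketch of option 2 (all names hypothetical): the domain type and interface live in Core, while the EF-aware repository lives in Infrastructure and maps to the EDMX-generated entity internally.

    // Core project: a persistence-ignorant domain type.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Core project: the abstraction the rest of the application depends on.
    public interface IProductRepository
    {
        Product GetById(int id);
        void Save(Product product);
    }

    // Infrastructure project: ProductEntity is the EDMX-generated EF entity.
    public class EfProductRepository : IProductRepository
    {
        private readonly MyDbContext _db;
        public EfProductRepository(MyDbContext db) { _db = db; }

        public Product GetById(int id)
        {
            var entity = _db.Products.Find(id);
            // Map EF entity -> Core object (AutoMapper could do this instead).
            return new Product { Id = entity.Id, Name = entity.Name };
        }

        public void Save(Product product)
        {
            var entity = _db.Products.Find(product.Id)
                         ?? _db.Products.Add(new ProductEntity());
            // Map Core object -> EF entity.
            entity.Name = product.Name;
            _db.SaveChanges();
        }
    }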
As to your questions about Repositories and Units of Work, there's been a lot written about this already, on SO and elsewhere. You can certainly use a generic repository implementation to allow for easy CRUD access to a large set of entities, and it sounds like that may be a quick way for you to move forward in your scenario. However, my general recommendation is to avoid generic repositories as your go-to means of accessing your business objects, and instead use Aggregates (see DDD or my DDD course w/Julie Lerman on Pluralsight) with one concrete repository per Aggregate Root. You can separate out complex business entities from CRUD operations, too, and only follow the Aggregate approach where it is warranted. The benefit you get from this approach is that you're constraining how the objects are accessed, and getting similar benefits to a Facade over your (large) set of database entities.
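For the Aggregate approach, the shape is one concrete repository per aggregate root. A sketch with a hypothetical Order aggregate:

    using System;
    using System.Collections.Generic;

    // OrderLine is only ever reached through its aggregate root.
    public class OrderLine
    {
        public string Sku { get; private set; }
        public int Quantity { get; private set; }
        public OrderLine(string sku, int quantity) { Sku = sku; Quantity = quantity; }
    }

    // Order is the aggregate root; invariants are enforced here, not in the repository.
    public class Order
    {
        private readonly List<OrderLine> _lines = new List<OrderLine>();
        public int Id { get; private set; }
        public IReadOnlyList<OrderLine> Lines { get { return _lines; } }

        public void AddLine(string sku, int quantity)
        {
            if (quantity <= 0) throw new ArgumentException("quantity must be positive");
            _lines.Add(new OrderLine(sku, quantity));
        }
    }

    // One concrete repository per aggregate root - there is no IOrderLineRepository.
    public interface IOrderRepository
    {
        Order GetById(int id);
        void Add(Order order);
        void Update(Order order);
    }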
Don't feel like you can only have one DbContext per application. It sounds like you are evolving this design over time, not starting with a green-field application. To that end, you could keep your .edmx file and perhaps a generic repository for CRUD purposes, but then create a new code-first DbContext for a specific set of operations that warrant POCO entities, separation of concerns, increased testability, etc. Over time you can shift the bulk of the essential code to use the new context, while keeping the existing one so you don't lose any current functionality.
I am using entity framework 6.1 in my DDD project. Code first works out very well if you want to do Onion Architecture.
In my project we have completely isolated the repositories from the domain model. An application service is what uses the repositories to load aggregates from, and persist aggregates to, the database. Hence there are no repository interfaces in the domain (Core); a sketch follows below.
The second option, using T4 to generate POCOs into a separate assembly, is a good idea. Remember that your domain model (Core) should be persistence-ignorant.
While generic repositories are good for enforcing aggregate-level operations, I prefer specific repositories, simply because not every aggregate is going to need all of the generic repository's operations.
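A sketch of that arrangement (hypothetical names, reusing the Order aggregate and IOrderRepository shapes from the answer above): the application service orchestrates, the repository persists, and the domain model stays persistence-ignorant.

    // Application layer: loads the aggregate, invokes domain behavior, persists.
    public class OrderApplicationService
    {
        private readonly IOrderRepository _orders;
        public OrderApplicationService(IOrderRepository orders) { _orders = orders; }

        public void AddLineToOrder(int orderId, string sku, int quantity)
        {
            var order = _orders.GetById(orderId); // repository loads the aggregate
            order.AddLine(sku, quantity);         // domain logic lives in the aggregate
            _orders.Update(order);                // repository persists it
        }
    }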
http://codingcraft.wordpress.com/

Entity Framework - Code First - Is it really a good approach for big projects?

I have been doing some reading on the code-first approach of Entity Framework and the other approaches (model-first, database-first).
1.) The argument supporting Entity Framework code first in a lot of blogs seems to be keeping developers happy so that they don't have to work with designers. I am surprised at this argument because you develop a project to keep your customers happy, not your developers. How many projects out there have developer happiness in the project plan? Another argument is avoiding a lot of mappings in XML. Well, if not XML, we end up doing that anyway in OnModelCreating and by adding [Key] attributes to domain models, so mapping is not eliminated.
2.) Also, when you generate a database from the code (domain model), the domain model is designed in an OO way, and the generated database structure might not be the optimal one, which makes me think that the code-first approach is only suitable for small projects.
Are these arguments correct?
1) The reason for the code-first paradigm is to save development time by reducing the amount of tedious, repetitive work developers have to do (which in turn makes them happier). Code first allows developers to forget about SQL for the most part (sometimes you will still have to write stored procedures) and focus on the data model and business logic. It is much easier to version your database (especially during development), as each developer only has to change their C# models rather than write a new SQL script, check it in, and make sure everyone else on the team knows to run it. As for the XML files: in other ORMs I've found it a pain to constantly look up the corresponding XML file when modifying the domain classes. I personally find the fluent interfaces quicker when making changes to my data model (a small sketch follows below). Also, the XML mapping files Entity Framework used in v1.0 became so large that they were a real pain to edit and manage once your database reached any real-world size.
2) I can't think of a scenario where a data schema could only be created via SQL and not expressed as C# with Entity Framework code first.
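To illustrate point 1, a minimal EF6 code-first sketch (hypothetical names) with the mapping kept in fluent configuration rather than in XML or attributes:

    using System.Data.Entity; // EF6

    public class Blog
    {
        public int BlogId { get; set; }
        public string Title { get; set; }
    }

    public class BloggingContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // The mapping lives next to the model: no XML files, no [Key] attributes.
            modelBuilder.Entity<Blog>().HasKey(b => b.BlogId);
            modelBuilder.Entity<Blog>()
                .Property(b => b.Title)
                .HasMaxLength(200)
                .IsRequired();
        }
    }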

EF + WCF in three-layered application with complex object graphs. Which pattern to use?

I have an architectural question about EF and WCF.
We are developing a three-tier application using Entity Framework (with an Oracle database) and a GUI based on WPF. The GUI communicates with the server through WCF.
Our data model is quite complex (more than a hundred tables) with lots of relations. We are currently using the default EF code-generation template, and we are having a lot of trouble tracking the state of our entities.
The user interfaces on the client are also fairly complex; sometimes an object graph of more than 50 objects is sent down to a single user interface, with several layers of aggregation between the entities. An important goal is to be able to easily decide in the BLL which objects have been modified on the client and which have been newly created.
What would be the cleanest approach to managing entities and entity state between the two layers? Self-tracking entities? What are the most common pitfalls in this scenario?
Could those who have used STEs in a real production environment share their experiences?
STEs are supposed to solve this scenario, but they are not a silver bullet. I have never used them in a real project (I don't like them), but I spent some time playing with them. The main pitfalls I found are:
Coupling your data layer to your client application - you must share the entity assembly between projects (it also means it is a .NET-only solution, but that should not be a problem in your case).
Large data transfers - you pass 50 entities to the client, the client changes a single entity, and you pass all 50 entities back. It will take some fighting with STEs to avoid passing unnecessary data.
Unnecessary updates to the database - normally, when EF works with attached entities, it tracks changes at the property level, but STEs track changes at the entity level. So if the user modifies a single property of an entity with 100 properties, the generated UPDATE will set all of them. Avoiding this requires modifying the template and adding property-level change tracking.
The client application should use STEs directly (binding STEs to the UI) to get the most out of their self-tracking ability. Otherwise you will have to implement code that moves data from the UI back into the self-tracking entity and modifies its state.
They are not proxied = they don't support lazy loading (in the case of a WCF service, that is good behavior).
I described a way to solve this without STEs earlier today. There is also a related question about tracking over web services (check @Richard's answer and the provided links).
We have developed a layered application with STEs: a user interface layer with ASP.NET and Model-View-Presenter, a business layer, a WCF service layer, and a data layer with Entity Framework.
When I first read about STEs, the documentation said that they are easier than using custom DTOs. They were supposed to be the 'quick and easy way', and only on really big projects should you use hand-written DTOs.
But we ran into a lot of problems using STEs. One of the main problems is that if your entities come from multiple service calls (for example, in a master-detail view) and thus from different contexts, you will run into problems when composing the graphs on the server and trying to save them. So our server functions still have to check manually which data has changed and then recompose the object graph on the server. A lot has been written about this topic, but it's still not easy to fix.
Another problem we ran into was that STEs don't work without WCF. The change tracking is activated when the entities are serialized. We had originally designed an architecture where WCF could be disabled and the service calls would just be in-process (this was a requirement for our unit tests, which run a lot faster without WCF and are easier to set up). It turned out that STEs are not the right choice for this.
I've also noticed that developers sometimes included a lot of data in their queries and just sent it all to the client, instead of really thinking about which data they actually needed.
After this project, we decided to use custom DTOs with AutoMapper from server to client, and to use the POCO template in the data layer of our next project.
So, since you already state that your project is big, I would opt for custom DTOs and service functions that are created specifically for one goal, instead of 'Update(Person person)' functions that send a lot of data. A sketch of that approach follows.
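A minimal sketch (hypothetical names; Person is assumed to be the EF entity), using AutoMapper to fill a purpose-built DTO plus a service contract designed for one goal:

    using AutoMapper;

    // Purpose-built DTO: only what this particular call needs.
    public class PersonNameDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class PersonMappings
    {
        private static readonly IMapper Mapper = new MapperConfiguration(cfg =>
        {
            cfg.CreateMap<Person, PersonNameDto>();
        }).CreateMapper();

        public static PersonNameDto ToNameDto(Person person)
        {
            return Mapper.Map<PersonNameDto>(person);
        }
    }

    // A service function created for one goal, instead of a generic Update(Person).
    public interface IPersonService
    {
        void RenamePerson(int personId, string newName);
    }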
Hope this helps :)

Dynamic ORM - Perl

I am looking to upgrade an existing Perl web-based application and am wondering if there are any suggestions on how to solve a particular problem:
The application is used by several clients, each of whom has a very customized dataset behind the scenes. There is very little overlap between the clients' datasets. However, they all load and use the same software. There are numerous configuration files that tell the software how to process each client and understand its customized dataset.
In essence, there are common functions but different datasets for those functions to work upon. I'm looking for a way to abstract the datasets into an ORM. However, most ORMs seem to expect a common dataset behind the scenes. I need to either load the ORM modules dynamically based on the client being used, or dynamically create the ORM structure the same way.
e.g.
The software provides View/Edit/Delete functionality, but:
Client A manages tables.
Client B manages automobiles.
The View function loads configuration files and has custom template files for each client that are relevant to the type of data they are managing.
Any suggestions?
Check out Jorge
See Rose::DB::Object (RDBO).
It supports loading the database structure at runtime via its Loader package. John Siracusa, the author of RDBO, is always kind enough to respond to questions in #rdbo on irc.perl.org or on the mailing list.
It's also very fast (once loaded) and powerful. I can really recommend it if you have a DB application more complex than any example app.