Migrating away from Core Data - Swift

Our team is in the process of moving from Core Data to GRDB. One issue is the transition between the two databases and keeping them in sync as we test.
One idea would be to just keep using the SQLite database that Core Data has created directly, or to make a copy and, if we revert, copy it back into place with the updated records.
Does anyone have experience with this approach? We're looking for any issues we might encounter.
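Concretely, the copy-and-restore idea would look something like the sketch below (paths are placeholders). One gotcha we already know about: Core Data opens its stores in WAL journal mode by default, so the -wal and -shm sidecar files have to travel with the main .sqlite file, or recent transactions can be lost.

```swift
import Foundation

/// Copies a Core Data SQLite store together with its WAL sidecar files.
/// Core Data uses WAL journaling by default, so recent transactions may
/// still live in the -wal file; copying only the .sqlite file loses them.
func copyStore(from source: URL, to destination: URL) throws {
    let fm = FileManager.default
    for suffix in ["", "-wal", "-shm"] {
        let src = URL(fileURLWithPath: source.path + suffix)
        let dst = URL(fileURLWithPath: destination.path + suffix)
        if fm.fileExists(atPath: dst.path) {
            try fm.removeItem(at: dst)
        }
        if fm.fileExists(atPath: src.path) {
            try fm.copyItem(at: src, to: dst)
        }
    }
}

// Hypothetical usage: snapshot before testing GRDB, restore on revert.
let store = URL(fileURLWithPath: "/path/to/Model.sqlite")
let backup = URL(fileURLWithPath: "/path/to/Model.backup.sqlite")
do {
    try copyStore(from: store, to: backup)   // snapshot
    // ... run the app against GRDB ...
    try copyStore(from: backup, to: store)   // revert
} catch {
    print("Copy failed: \(error)")
}
```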

I am doing this right now. I think that if you prepare a good persistence-layer abstraction, i.e. protocols and their implementations for the different databases, where the protocols have no dependency on any specific database library's classes and data types, then the migration should be fine and quite easy. Once you have the Core Data implementation, you can write another implementation with GRDB and switch your application over to it. It's the Strategy design pattern.
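For what it's worth, a minimal Swift sketch of that abstraction might look like this. The Player type and the method names are invented for illustration; the point is that the protocol mentions no Core Data or GRDB types, so swapping databases is a one-line change.

```swift
import Foundation

// A plain domain type: no NSManagedObject, no GRDB record type.
struct Player {
    var id: Int64?
    var name: String
}

// The persistence protocol depends only on domain types.
protocol PlayerStore {
    func fetchAll() throws -> [Player]
    func save(_ player: Player) throws
}

// One implementation wraps Core Data...
final class CoreDataPlayerStore: PlayerStore {
    func fetchAll() throws -> [Player] {
        // Run an NSFetchRequest and map NSManagedObjects to Player here.
        fatalError("sketch only")
    }
    func save(_ player: Player) throws {
        // Write into an NSManagedObjectContext here.
        fatalError("sketch only")
    }
}

// ...another wraps GRDB; the rest of the app never knows which is active.
final class GRDBPlayerStore: PlayerStore {
    func fetchAll() throws -> [Player] {
        // Run a GRDB query and map rows to Player here.
        fatalError("sketch only")
    }
    func save(_ player: Player) throws {
        // Execute a GRDB insert or update here.
        fatalError("sketch only")
    }
}

// Switching databases happens in one place:
let store: PlayerStore = GRDBPlayerStore() // or CoreDataPlayerStore()
```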

Related

Can't find the best way to apply good code-design techniques in software development

[Pre]
I have to say that I'm a complete newbie trying to put together important puzzle pieces such as DDD, TDD, MVVM, and EF Core. I have about 10 years of Windows Forms development experience, gained in a completely wrong manner, and after I joined Pluralsight I understood that I had simply lost those last 10 years, which is really sad :).
[Problem description]
I have an app that I want to rewrite from scratch using the latest and greatest techniques I've learned over the last 6 months on Pluralsight, but the problem is that this new knowledge is stopping me, because I'm simply afraid I'll do it wrong again... (that's stupid, I know, but it is what it is).
So back to my questions. I have a big problem domain and pretty well-documented business logic that I have to turn into code. I understand that my starting point is designing the data layer, and for this purpose I want to use Entity Framework Core (I saw Julie Lerman's course on Pluralsight; I think she is amazing, and she inspired me to use EF Core as the ORM for my app). But at the same time my lack of experience produces more questions than what I've learned on Pluralsight answers, and I will try to write them all down (please don't judge me too hard).
It looks like I will need 2 or even more data-model projects in my solution, and here is why: I have multiple document-set types, and each type contains more than one reference book used to generate unique file names and data sheets. But it looks weird to me to have 3 data-model projects such as MyApp.PackType1.DataModel and MyApp.PackType2.DataModel, each with EF Core installed and each generating its own database from its own EF data context. Isn't that very redundant, or is this the correct way?
I don't understand how to join these multiple data-model projects, including a Shared Kernel, into one nice model.
I don't understand the best way to design my data classes. Should they be just POCOs, or can I design them as nice-looking classes with private fields and public properties? What are the best practices here?
I also don't understand the best practice for using the MVVM pattern on top of all that, and whether MVVM is applicable at all in this case.
Should I keep my tests in separate projects like MyApp.PackType1.DataModel.Tests, or keep them in the same project?
Best regards,
Maks!
P.S.
Apologies for the unclear definitions and questions; English isn't my native language.
It's difficult to answer your question completely because you have asked about a lot of details, but I am going to provide a brief answer and I hope it will be helpful.
You can have just one model for your entities (DDD) and create sub-models from it in your end-level projects (Web API or UI)
Read point #1
You should create an entity-layer project that represents your database, and then you can create DTOs for specific scenarios
From my point of view, use Angular, though you can use another UI framework such as React or Vue.js; I prefer Angular for building UIs and consuming a .NET Core Web API from the client
Create unit tests and integration tests for your Web API projects, and as an additional option you can use an in-memory database provider for tests (see the sketch below)
Maybe this guide is useful: https://www.codeproject.com/Articles/1160586/Entity-Framework-Core-for-Enterprise
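The in-memory-database idea isn't specific to EF Core. Since this page opened with a GRDB thread, here is the same testing technique sketched in Swift with GRDB, where a DatabaseQueue created without a path is an in-memory SQLite database (table and test names are invented):

```swift
import GRDB
import XCTest

final class PlayerStoreTests: XCTestCase {
    func testInsertAndCount() throws {
        // An in-memory database: nothing touches disk, every test starts clean.
        let dbQueue = try DatabaseQueue()

        try dbQueue.write { db in
            try db.create(table: "player") { t in
                t.autoIncrementedPrimaryKey("id")
                t.column("name", .text).notNull()
            }
            try db.execute(sql: "INSERT INTO player (name) VALUES (?)",
                           arguments: ["test"])
        }

        let count = try dbQueue.read { db in
            try Int.fetchOne(db, sql: "SELECT COUNT(*) FROM player")
        }
        XCTAssertEqual(count, 1)
    }
}
```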
Regards
Hm, multiple DbContexts (models) usually come about when you have distinct databases in use. The general rule is: one context = one database. Exceptions can occur when there are a lot of tables that can be grouped functionally, but there are downsides to that approach.
A DbContext is essentially the repository pattern applied to individual tables. Using a Unit of Work pattern, layered with a custom repository provider, would allow you to make it "appear" as a single database, hiding the complexity from the front end.
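As a language-agnostic sketch of that Unit of Work idea (written in Swift to match the thread at the top of this page; all names are invented), repositories queue changes and the unit of work commits them in one atomic flush:

```swift
import Foundation

struct User { var name: String }
struct Order { var total: Double }

// A minimal repository: queues changes instead of writing immediately.
final class Repository<T> {
    private(set) var pendingInserts: [T] = []
    func add(_ item: T) { pendingInserts.append(item) }
    func clear() { pendingInserts.removeAll() }
}

// The Unit of Work owns the repositories and flushes them together,
// so callers see "one database" regardless of what sits underneath.
final class UnitOfWork {
    let users = Repository<User>()
    let orders = Repository<Order>()

    func commit() throws {
        // A real implementation would open one transaction here and write
        // every repository's pending changes before clearing them.
        print("Committing \(users.pendingInserts.count) user(s), \(orders.pendingInserts.count) order(s)")
        users.clear()
        orders.clear()
    }
}

let uow = UnitOfWork()
uow.users.add(User(name: "Ada"))
uow.orders.add(Order(total: 9.99))
try uow.commit() // single atomic flush
```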
Your entity descriptions are usually created as straight POCOs. You can get creative with different DTOs.
In a nutshell, an MVVM pattern goes like this:
Request from UI to a controller
Controller possibly issues multiple calls to Data Layer to gather data
Assemble data in a single ViewModel (everything the page needs)
Return to UI
The beauty of the approach is the single round trip (request/response) to the UI.
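A hedged sketch of that flow in Swift (types and data-layer calls invented for illustration): the controller makes several data calls, then assembles one view model containing everything the page needs:

```swift
import Foundation

struct Customer { var name: String }
struct Order { var total: Double }

// Everything the page needs, in one object: one round trip to the UI.
struct CustomerPageViewModel {
    let customerName: String
    let orderCount: Int
    let lifetimeTotal: Double
}

// Stand-ins for the data layer.
func fetchCustomer(id: Int) -> Customer { Customer(name: "Ada") }
func fetchOrders(customerId: Int) -> [Order] { [Order(total: 9.99), Order(total: 20.0)] }

// The controller: multiple data-layer calls in, one view model out.
func customerPage(id: Int) -> CustomerPageViewModel {
    let customer = fetchCustomer(id: id)
    let orders = fetchOrders(customerId: id)
    return CustomerPageViewModel(
        customerName: customer.name,
        orderCount: orders.count,
        lifetimeTotal: orders.reduce(0) { $0 + $1.total }
    )
}

print(customerPage(id: 1)) // single response with everything the UI needs
```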
Separate project, in my opinion. There are techniques to spoof the database connection using EF so you are not using "live" data.
That CodeProject article will come in handy.

Entity Framework 6 Database-First and Onion Architecture

I am using Entity Framework 6 database-first. I am converting the project to implement the Onion Architecture to move towards better separation of concerns. I have read many articles and watched many videos, but I am having some issues deciding on my solution structure.
I have 4 projects: Core, Infrastructure, Web & Tests.
From what I've learned, the .edmx file should be placed under my "Infrastructure" folder. However, I have also read about using the Repository and Unit of Work patterns to help decouple EF and to use dependency injection.
With this being said:
Will I have to create repository interfaces under Core for ALL entities in my model? If so, how would one maintain this on a huge database? I have looked into AutoMapper but found issues with it presenting IEnumerables vs. IQueryables, though there is an extension available to help with this. I can dig deeper into this route, but I want to hear back first.
As an alternative, should I leave my .edmx in Infrastructure and move the .tt T4 files for my entities to Core? Does this introduce tight coupling, or is it a good solution?
Would a generic repository interface work well with the suggestion you provide? Or does EF6 already resolve the Repository and UoW pattern issue on its own?
Thank you for looking at my question, and please present any alternative approaches as well.
I found a similar post here that was not answered:
EF6 and Onion architecture - database first and without Repository pattern
Database-first doesn't completely rule out Onion Architecture (aka Ports and Adapters or Hexagonal Architecture, so if you see references to those, they're the same thing), but it's certainly more difficult. Onion Architecture and the separation of concerns it allows fit very nicely with domain-driven design (I think you mentioned on Twitter that you'd already seen some of my videos on this subject on Pluralsight).
You should definitely avoid putting the EDMX in the Core or Web projects; Infrastructure is the right location for it. At that point, with database-first, you're going to have EF entities in Infrastructure. You want your business objects/domain entities to live in Core, though. You then have basically two options if you want to continue down this path:
1) Switch from database first to code first (perhaps using a tool) so that you can have POCO entities in Core.
2) Map back and forth between your Infrastructure entities and your Core objects, perhaps using something like AutoMapper. Before EF supported POCO entities, this was the approach I followed when using it: I would write repositories that only dealt with Core objects but internally mapped to EF-specific entities.
As to your questions about Repositories and Units of Work, there's been a lot written about this already, on SO and elsewhere. You can certainly use a generic repository implementation to allow for easy CRUD access to a large set of entities, and it sounds like that may be a quick way for you to move forward in your scenario. However, my general recommendation is to avoid generic repositories as your go-to means of accessing your business objects, and instead use Aggregates (see DDD or my DDD course w/Julie Lerman on Pluralsight) with one concrete repository per Aggregate Root. You can separate out complex business entities from CRUD operations, too, and only follow the Aggregate approach where it is warranted. The benefit you get from this approach is that you're constraining how the objects are accessed, and getting similar benefits to a Facade over your (large) set of database entities.
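A rough Swift sketch of that recommendation (all names invented): one concrete repository per aggregate root, which internally maps persistence-layer rows to domain objects, so Core only ever sees domain types and all access is funneled through the root:

```swift
import Foundation

// Core: the aggregate, persistence-ignorant.
struct OrderLine { var product: String; var quantity: Int }
struct Order {                      // the aggregate root
    var id: Int64
    var lines: [OrderLine]          // reached only through the root
}

// Core: the one concrete repository contract for this aggregate.
protocol OrderRepository {
    func find(id: Int64) throws -> Order?
    func save(_ order: Order) throws
}

// Infrastructure: database-shaped records, kept out of Core.
struct OrderRow { var id: Int64 }
struct OrderLineRow { var orderId: Int64; var product: String; var quantity: Int }

final class SQLOrderRepository: OrderRepository {
    func find(id: Int64) throws -> Order? {
        // Stand-ins for real queries; the mapping is the point.
        let row = OrderRow(id: id)
        let lineRows = [OrderLineRow(orderId: id, product: "widget", quantity: 2)]
        return Order(
            id: row.id,
            lines: lineRows.map { OrderLine(product: $0.product, quantity: $0.quantity) }
        )
    }
    func save(_ order: Order) throws {
        // Map the aggregate back to rows and write them in one transaction.
    }
}
```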
Don't feel like you can only have one DbContext per application. It sounds like you are evolving this design over time, not starting with a green-field application. To that end, you could keep your .edmx file and perhaps a generic repository for CRUD purposes, but then create a new code-first DbContext for a specific set of operations that warrant POCO entities, separation of concerns, increased testability, etc. Over time you can shift the bulk of the essential code to use this, while still keeping the existing DbContext so you don't lose any current functionality.
I am using Entity Framework 6.1 in my DDD project. Code-first works out very well if you want to do Onion Architecture.
In my project we have completely isolated the Repository from the Domain Model. The Application Service is what uses repositories to load aggregates from, and persist aggregates to, the database. Hence there are no repository interfaces in the domain (core).
The second option, using T4 to generate POCOs in a separate assembly, is a good idea. Please remember that your domain model (core) should be persistence-ignorant.
While generic repositories are good for enforcing aggregate-level operations, I prefer specific repositories, simply because not every aggregate is going to need all of those generic repository operations.
http://codingcraft.wordpress.com/

Entity Framework - Code First - Is it really a good approach for big projects?

I have been doing some reading on the code-first approach of Entity Framework and the other approaches (model-first, database-first).
1.) The main argument supporting Entity Framework's code-first approach in a lot of blogs seems to be keeping developers happy, so that they don't have to work with designers. I am surprised at this argument because you develop a project to keep your customers happy, not your developers. How many projects out there have developer happiness in the project plan? Another argument is avoiding a lot of XML mappings. Well, if not XML, we end up doing the mapping anyway in OnModelCreating and by adding [Key] attributes to domain models, so mapping is not eliminated.
2.) Also, when you generate a database from the code (domain model), the domain model is designed in an OO way, and the generated database structure might not be the optimal one, which forces me to think that the code-first approach is only suitable for small projects.
Are these arguments correct?
1) The reason for the code-first paradigm is to save development time by reducing the amount of tedious, repetitive work developers have to do (which in turn makes them happier). Code-first allows developers to forget about SQL for the most part (sometimes you will still have to write stored procedures) and focus on the data model and business logic. It is much easier to version your database (especially during development), as each developer only has to change their C# models rather than write a new SQL script, check it in, and make sure everyone else on the team knows to run it (see the sketch after point 2). As for the XML files: with other ORMs in the past, I found it a pain to constantly look up the corresponding XML file when modifying the domain classes; I personally find the fluent interfaces quicker when making changes to my data model. Also, the XML files Entity Framework used in v1.0 became so large that they were a real pain to edit and manage once your database reached any real-world size.
2) I can't think of a scenario where a data schema could only be created via SQL and not expressed as C# with Entity Framework code-first.
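That code-defined versioning workflow from point 1 is not EF-specific. In the GRDB world this page opened with, the equivalent tool is DatabaseMigrator: each schema change is a named migration checked in with the code, and only the migrations a given database hasn't seen yet are run (paths and names below are placeholders):

```swift
import GRDB

var migrator = DatabaseMigrator()

// Each developer adds a named migration instead of a SQL script.
migrator.registerMigration("createPlayer") { db in
    try db.create(table: "player") { t in
        t.autoIncrementedPrimaryKey("id")
        t.column("name", .text).notNull()
    }
}

migrator.registerMigration("addScore") { db in
    try db.alter(table: "player") { t in
        t.add(column: "score", .integer).notNull().defaults(to: 0)
    }
}

let dbQueue = try DatabaseQueue(path: "/path/to/db.sqlite")
try migrator.migrate(dbQueue) // applies only the pending migrations
```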

EF + WCF in three-layered application with complex object graphs. Which pattern to use?

I have an architectural question about EF and WCF.
We are developing a three-tier application using Entity Framework (with an Oracle database), and a GUI based on WPF. The GUI communicates with the server through WCF.
Our data model is quite complex (more than a hundred tables), with lots of relations. We are currently using the default EF code-generation template, and we are having a lot of trouble tracking the state of our entities.
The user interfaces on the client are also fairly complex; sometimes an object graph of more than 50 objects is sent down to a single user interface, with several layers of aggregation between the entities. An important goal is to be able to easily decide in the BLL which objects have been modified on the client and which objects have been newly created.
What would be the clearest approach to managing entities and entity state between the two layers? Self-tracking entities? What are the most common pitfalls in this scenario?
Could those who have used STEs in a real production environment share their experiences?
STEs are supposed to solve this scenario, but they are not a silver bullet. I have never used them in a real project (I don't like them), but I spent some time playing with them. The main pitfalls I found are:
Coupling your data layer to your client application: you must share the entity assembly between projects (it also means it is a .NET-only solution, but that should not be a problem in your case)
Large data transfers: you pass 50 entities to the client, the client changes a single entity, and you pass 50 entities back. It will require some fighting with STEs to avoid passing unnecessary data
Unnecessary updates to the database: normally, when EF works with attached entities, it tracks changes at the property level, but STEs track changes at the entity level. So if the user modifies a single property on an entity with 100 properties, it will generate an update setting all of them. Avoiding this requires modifying the template and adding property-level change tracking (see the sketch after this list)
The client application should use STEs directly (binding STEs to the UI) to get the most out of their self-tracking ability. Otherwise you will have to implement code that moves data from the UI back into the self-tracking entity and sets its state
They are not proxied, so they don't support lazy loading (in the case of a WCF service, that is good behavior)
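To make the property-level vs. entity-level distinction concrete, here is a toy sketch in Swift (invented types, not EF's actual template): recording which properties changed lets the generated UPDATE touch only the dirty columns, whereas entity-level tracking would rewrite every column:

```swift
import Foundation

// A toy self-tracking entity: records *which* properties changed.
final class TrackedPlayer {
    private(set) var changedColumns: Set<String> = []

    var name: String { didSet { changedColumns.insert("name") } }
    var score: Int   { didSet { changedColumns.insert("score") } }

    init(name: String, score: Int) {
        self.name = name     // didSet does not fire during init,
        self.score = score   // so a fresh entity starts clean
    }

    // Property-level tracking: SET only what changed.
    func updateSQL() -> String? {
        guard !changedColumns.isEmpty else { return nil }
        let sets = changedColumns.sorted()
            .map { "\($0) = ?" }
            .joined(separator: ", ")
        return "UPDATE player SET \(sets) WHERE id = ?"
    }
}

let p = TrackedPlayer(name: "Ada", score: 10)
p.score = 11
print(p.updateSQL() ?? "no changes")
// → UPDATE player SET score = ? WHERE id = ?
// Entity-level tracking would emit: SET name = ?, score = ? (all columns)
```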
I described a way to solve this without STEs today. There is also a related question about tracking over web services (check Richard's answer and the provided links).
We have developed a layered application with STEs: a user-interface layer with ASP.NET and Model-View-Presenter, a business layer, a WCF service layer, and a data layer with Entity Framework.
When I first read about STEs, the documentation said they are easier than using custom DTOs. They were supposed to be the 'quick and easy way', and only on really big projects should you use hand-written DTOs.
But we ran into a lot of problems using STEs. One of the main problems is that if your entities come from multiple service calls (for example, in a master-detail view) and thus from different contexts, you will run into problems when composing the graphs on the server and trying to save them. So our server functions still have to check manually which data has changed and then recompose the object graph on the server. A lot has been written about this topic, but it's still not easy to fix.
Another problem we ran into was that STEs don't work without WCF. The change tracking is activated when the entities are serialized. We had originally designed an architecture where WCF could be disabled and the service calls would just be in-process (this was a requirement for our unit tests, which run a lot faster without WCF and are easier to set up). It turned out that STEs are not the right choice for this.
I also noticed that developers sometimes included a lot of data in their queries and just sent it to the client, instead of really thinking about which data they needed.
After this project we decided to use custom DTOs, with AutoMapper from server to client, and the POCO template in our data layer for the next project.
So, since you already state that your project is big, I would opt for custom DTOs and service functions that are created specifically for one goal, instead of 'Update(Person person)' functions that send a lot of data (a sketch follows below).
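In rough Swift terms (all names invented), the difference looks like this: a small, purpose-built DTO and a service function for one goal, instead of shipping the whole entity graph in both directions:

```swift
import Foundation

// The full entity graph: what a generic Update(Person) would ship twice.
struct Person {
    var id: Int64
    var name: String
    var email: String
    var orders: [String]   // imagine 50 related objects hanging off this
}

// A goal-specific DTO: only what this one operation needs.
struct RenamePersonRequest {
    let personId: Int64
    let newName: String
}

// A service function created for one goal, not a generic update.
func renamePerson(_ request: RenamePersonRequest) {
    // Server-side: load the entity, apply the single change, persist.
    print("UPDATE person SET name = ? WHERE id = \(request.personId)")
}

renamePerson(RenamePersonRequest(personId: 7, newName: "Ada"))
```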
Hope this helps :)

Is it that easy working with the ADO.NET Entity Framework in real programming?

Hi guys,
I was watching this video series about Entity Framework:
http://msdn.microsoft.com/en-us/data/ff191186.aspx
Is it that easy to build applications in real-world programming? And is it reliable? Does it have good performance?
(I am a graduate.)
Thanks
Entity Framework is a valid real-world data-access tool. It is very easy to get up and running with EF: you simply import (or, in EF 4, create) your data model, then rename things to make the model more code-friendly, and you are off querying databases.
Performance
I have been on multiple projects that use it, some requiring high throughput, others with low performance requirements. Entity Framework out of the box is not the fastest solution in the world, so there is a lot of performance tweaking to do, but it's all doable.
Reliability
We never have issues with reliability. We have never had an issue with EF in general; it's always data-content related: trying to insert duplicated data, etc.
Other Tangibles
EF follows a pattern that allows you to do some fun stuff with templates and abstract classes. All entities inherit from a class; entities that have references inherit from other classes. All entity contexts inherit from ObjectContext, which provides a base set of functionality that allows you to create generic DAO implementations that can be reused throughout the enterprise.
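The same shape can be sketched in Swift, with a protocol extension standing in for the shared base class (everything here is invented for illustration): the common CRUD plumbing is written once, and every concrete DAO inherits it for free:

```swift
import Foundation

protocol Entity {
    var id: Int64 { get }
}

// The generic DAO: reusable plumbing shared by every entity type.
protocol DAO {
    associatedtype Model: Entity
    var storage: [Int64: Model] { get set }   // stand-in for the real context
}

extension DAO {
    mutating func save(_ model: Model) { storage[model.id] = model }
    func find(id: Int64) -> Model? { storage[id] }
    func all() -> [Model] { Array(storage.values) }
}

// A concrete DAO gets CRUD for free.
struct Product: Entity { let id: Int64; var name: String }
struct ProductDAO: DAO {
    var storage: [Int64: Product] = [:]
}

var dao = ProductDAO()
dao.save(Product(id: 1, name: "Widget"))
print(dao.find(id: 1)?.name ?? "missing")   // → Widget
```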
If you are doing UI dev, you can also use Data Services that wrap EF as a fast gateway to your database. The only downside is that you don't have access to the full suite of Entity Framework features.