Entity Framework Code First Library and Database Update Implications

We have recently begun using Entity Framework for accessing all the various databases we touch on a regular basis. We've established a collection of library projects, one for each of these. For many of them, we're accessing established databases that do not change, and using DB first works just great.
For some projects, though, we're developing evolving databases that are having new fields and tables added periodically. Because these libraries are used by multiple projects (at the moment, just two, but eventually many more), running a migration on the production database necessitates a republish of both/all sites that use that particular DB's library. Failure to update the library on any other site of course produces the error that the model backing the context has changed.
How can we effectively manage the deployment/update of the Code-First libraries to all of the sites that use them each time a change to the database is made?

A year later, here's what we came up with and have been using.
We now include the following line in the Application_Start() method:
Database.SetInitializer<EFLib.MyHousing.MyHousingMVCContext>(null);
This prevents EF from throwing a fit when the current database model doesn't exactly match what's in the code. While there is still potential for problems if non-backward-compatible changes are made, it allows new functionality to be added without redeploying every site that uses these libraries, so long as the changes are not relevant to that particular site.
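For context, here is roughly how that looks in a minimal Global.asax.cs; the context type is the one from the line above, and the other registration calls are the standard MVC template ones:

using System.Data.Entity;
using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Passing null disables the database initializer, so EF skips the
        // model-compatibility check and no longer throws when the database
        // schema has drifted from the code-first model.
        Database.SetInitializer<EFLib.MyHousing.MyHousingMVCContext>(null);

        AreaRegistration.RegisterAllAreas();
        RouteConfig.RegisterRoutes(RouteTable.Routes);
    }
}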

Related

Is it possible to use Entity Framework Core with ODBC?

My company's main software application hasn't been updated in twenty years. I expect to soon be working on a complete rewrite of it. To that end, I am beginning to work my way through the book "Pro ASP.NET Core 3" by Adam Freeman (8th edition).
Our application was written to be independent of specific database types. Most of our customers use PostgreSQL, but a few use SQL Server. Therefore, we use ODBC because ODBC drivers exist for both of those databases, as well as several others. The application does not do anything fancy with the databases, and ODBC works well. We configure an ODBC DSN to talk to whichever database the customer has, and the application itself doesn't have to be changed.
A search on "Entity Framework Core ODBC" led me to the EF Core Github, where people have asked similar questions, and the answers were mostly along the lines of "why on earth would you want to do that?". Well, I need to do that.
Can I use EF Core with ODBC, or is there some other way that I can set up an Entity Framework Core application that does not have to be modified if the underlying database changes from PostgreSQL to SQL Server?
You could use your appsettings.json to store a value used to swap between the two. Those environment configs get reloaded on change (though you might have to restart your application to read them again; I'm not sure on that one).
Regardless, something along these lines would suit your needs, I think:
if (Configuration.GetSection("dbOptions").GetValue<bool>("postgres"))
    services.AddDbContext<ApplicationDbContext>(options => options.UseNpgsql(Configuration.GetConnectionString("PostgresConnectionString")));
else
    services.AddDbContext<ApplicationDbContext>(options => options.UseSqlServer(Configuration.GetConnectionString("SqlServerConnectionString")));
EDIT: I placed this in Startup.cs, where you would normally configure the DbContext. I use a similar solution, reading off the environment type to load either the Prod or QA connection strings based on deployment. In principle, this should accomplish the same task without the need to rebuild and redeploy the code base.
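For what it's worth, the matching appsettings.json could look something like this (the section, flag, and connection-string names are simply the ones the snippet above assumes):

{
  "dbOptions": {
    "postgres": true
  },
  "ConnectionStrings": {
    "PostgresConnectionString": "Host=localhost;Database=app;Username=app;Password=secret",
    "SqlServerConnectionString": "Server=localhost;Database=app;Trusted_Connection=True;"
  }
}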

Persisting to multiple databases in Spring

We have a DB2 database which is used by legacy applications that we are in the process of decommissioning, and we have an Oracle database that we are developing new applications for. In order to maintain compatibility with the legacy applications until they are completely decommissioned, and to keep the data in sync, we are using Atomikos for two-phase commits. This, however, results in a lot of duplicated entities and repositories, and thus technical debt, because the same entity and repository classes cannot be shared between entity managers; we have to duplicate them and place them under different packages for entity scanning.
In most cases we want to read data from the legacy database and persist to both, but ideally this would be configurable.
Any help on this would be greatly appreciated.

Keeping database in sync with EF Core

I have read multiple posts about this but do not have a clear answer yet.
We are transitioning to EF Core 2.0 company-wide, one project at a time.
The challenge is this:
A new project starts and a database is created using code first, migrations, etc.
Another programmer needs to create a project targeting the same database.
This programmer can use Scaffold-DbContext to generate the current models (an example command appears after this question).
However, a new column is needed, and this second programmer adds it.
Now...how do we best update the other projects?
Is there something that checks and syncs, or shows what is out of sync, between your model and the database? Meaning: check the database for changes, not the model.
We don't mind buying a tool if that is the best solution.
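For reference, the scaffolding step mentioned in the question is typically run from the Package Manager Console along these lines (the server, database, and output folder here are placeholders):

Scaffold-DbContext "Server=.;Database=SharedDb;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer -OutputDir Models

Re-running it with the -Force switch overwrites the generated model classes, which is one way for the second programmer to pick up a column someone else added.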
The solution we have been using, very successfully, is the Database project in Visual Studio.
With that approach, each developer has the project in their solution and makes changes against it locally.
Then we can do a "Schema Compare" inside of VS.
Four of us have been using this extensively for the past three weeks with almost no issues.
This has even worked for keeping versions and changes to our stored procedures current.
It works well with VSTS also.
Here are some of the posts I read that helped me understand it:
https://www.sqlchick.com/entries/2016/1/10/why-you-should-use-a-ssdt-database-project-for-your-data-warehouse
https://weblogs.asp.net/gunnarpeipman/using-visual-studio-database-projects-in-real-life
...and this forum had a lot of relevant questions/answers:
https://social.msdn.microsoft.com/Forums/sqlserver/en-us/home?forum=ssdt

EF Code-First concerns in an ecosystem of apps

I have a concern that I would like some input on to see if there is a solution. We have an ecosystem of about 30 web applications that all connect to the same database. We are going to be updating the applications and with that we are going to be moving to a new database schema. I have a process that will be pulling all the old data into the new schema using EF Code-First for the new schema.
Well, today I ran into an issue where two branches of the solution I wrote to migrate the data had slightly different schemas (one branch put a MaxLength on one field and an index on another). The problem was that the database was out of sync with one branch but up-to-date with the other, and the out-of-sync branch would not run until I ran Update-Database.
I was thinking about putting the code-first poco classes into a library along with the DataContext to be able to be used by the various web apps. I would then make this library available to the team using an internal NuGet server.
My concern is that if the schema changes (doesn't happen very often, but it does happen) what happens to all of the web applications that are relying on this library for data connectivity? Would they all break (my assumption is that they would)? If so, all of the production web apps would be down. Is there a way to get around this?
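One mitigation, echoing the first answer above, is to bake the null initializer into the shared library itself so that additive schema changes do not break consuming applications. A minimal sketch, with hypothetical type and connection names:

using System.Data.Entity;

public class SharedDataContext : DbContext
{
    static SharedDataContext()
    {
        // Disable the model-compatibility check for every application that
        // consumes this NuGet package; purely additive schema changes then
        // stop forcing a simultaneous redeploy of all the web apps.
        Database.SetInitializer<SharedDataContext>(null);
    }

    public SharedDataContext() : base("name=SharedDb") { }

    public DbSet<Customer> Customers { get; set; }
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

Destructive changes (renames, dropped columns) would still break older consumers, so those would still require a coordinated redeploy.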

Code First model and deployment of new versions of the software

I'm looking at Entity Framework at the moment and working through a Code First example. So far I can see that the framework does not handle model changes easily: whenever I want to add another field to a class/table, the framework drops the entire database and creates it from scratch.
I saw similar behaviour in (N)Hibernate. (I could be wrong here; it was a long time ago.)
That is fine as long as I'm working through a tutorial. When a real-life project is involved, you can't afford to drop the database every time you need a new field in a table.
Just imagine the scenario: you are working on a project with many clients, and every client has their own database. In release 1.0.1 I need to add a new field to one of the tables. If I drop the database in my dev environment, it's not a big deal. (Still, I need to run a script to populate test data every time the DB is dropped, and sometimes even that is not viable.)
But what do I do when I need to deploy this new version? Make a SQL script to update the clients' databases without dropping them, then deploy the binaries?
But how is this better than doing database mods separately from code changes?
(sorry for my bad English)
This is exactly why Code First Migrations exist. Take a look at automatic migrations and at code-based migrations.
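To make the contrast concrete, here is a minimal sketch of a code-based migration (the table and column names are hypothetical): adding a field becomes an incremental Up/Down step instead of a drop-and-recreate.

using System.Data.Entity.Migrations;

// Generated by: Add-Migration AddPhoneNumberToClient
public partial class AddPhoneNumberToClient : DbMigration
{
    public override void Up()
    {
        AddColumn("dbo.Clients", "PhoneNumber", c => c.String(maxLength: 20));
    }

    public override void Down()
    {
        DropColumn("dbo.Clients", "PhoneNumber");
    }
}

Update-Database then applies any pending migrations, and its -Script switch emits the SQL instead of executing it, which covers the deploy-a-script-to-clients scenario described in the question.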