I have reverse engineered our model from an existing database using EF Core Scaffold-DbContext. Since then a couple of tables have been added to the database within SSMS and I now need to update the model. How can I do this without overwriting the original model classes (changes have been made to them)?
You should not make changes to the generated code. The classes are generated as partial classes, so you can put your customizations in a separate file and then simply re-generate the scaffolded files.
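For example (Customer and FullName here are just illustrative names), the scaffolded file and your own file can both declare the same partial class, and only the scaffolded one is overwritten on re-generation:

    // Customer.cs - generated by Scaffold-DbContext, overwritten on every re-scaffold
    public partial class Customer
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

    // Customer.Custom.cs - your own file, never touched by the scaffolder
    public partial class Customer
    {
        // hand-written members survive re-generation because the class is partial
        public string FullName => $"{FirstName} {LastName}";
    }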
I am using EF Core 2.0 in a sample project with some value object configurations. I modify the code and generate migrations via the CLI. In the latest migration, rather than adding a new database table as it should, EF is trying to rename existing tables to each other and to create an extra table for an existing one. I could not figure out the reason for it.
The issue is that with EF Core the snapshot is a separate auto-generated file from the migration itself, so I don't want to modify the snapshot by hand.
I only want to modify the migration script so that it will not rename multiple tables, and then generate the snapshot from the migrations I created.
I did not see any command for this in the CLI. Is it such a bad practice to modify the scaffolded migration and regenerate, or am I missing some obvious documentation that explains how to manually modify migration scripts?
Thanks a bunch.
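To be concrete, by "modify the migration script" I mean hand-editing the generated Up/Down methods, roughly like this (the table and column names are made up):

    using Microsoft.EntityFrameworkCore.Migrations;

    public partial class AddNewTable : Migration
    {
        protected override void Up(MigrationBuilder migrationBuilder)
        {
            // instead of the generated RenameTable calls, just create the new table
            migrationBuilder.CreateTable(
                name: "NewTable",
                columns: table => new
                {
                    Id = table.Column<int>(nullable: false),
                    Name = table.Column<string>(nullable: true)
                },
                constraints: table =>
                {
                    table.PrimaryKey("PK_NewTable", x => x.Id);
                });
        }

        protected override void Down(MigrationBuilder migrationBuilder)
        {
            migrationBuilder.DropTable(name: "NewTable");
        }
    }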
Update 1: After comments, added info about the snapshot from this link.
Because the current database schema is represented in code, EF Core doesn't have to interact with the database to create migrations. When you add a migration, EF determines what changed by comparing the data model to the snapshot file. EF interacts with the database only when it has to update the database.
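For reference, that snapshot file is itself just generated C# code, roughly of this shape (the context and entity names here are illustrative):

    using Microsoft.EntityFrameworkCore;
    using Microsoft.EntityFrameworkCore.Infrastructure;

    [DbContext(typeof(MyDbContext))]
    partial class MyDbContextModelSnapshot : ModelSnapshot
    {
        protected override void BuildModel(ModelBuilder modelBuilder)
        {
            // one block per entity in the current model;
            // Add-Migration diffs the data model against this file
            modelBuilder.Entity("MyApp.NewTable", b =>
                {
                    b.Property<int>("Id").ValueGeneratedOnAdd();
                    b.Property<string>("Name");
                    b.HasKey("Id");
                    b.ToTable("NewTable");
                });
        }
    }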
I examined my generated snapshot code from source control. It has added exactly one extra table, which is what I needed.
The migration script generated to get there is hectic at best: it renames multiple tables to each other and then warns that this could break things, causing multiple issues.
Since this is a sample project with only mock data in it for now, I decided to go with the generated script and not break the automated workflow. I am willing to lose some mock data at this stage rather than waste time on it.
If this were a production database I would be extremely careful and would intervene manually, modifying both the scaffolded snapshot and the migration file to produce the same result.
I am accepting this one as the answer (basically saying that current EF Core does not support it, to the best of my knowledge) since there is no other candidate right now - I will be more than glad to accept a better answer if one shows up.
I have been using Database First Entity Framework (EDMX) and SQL Server Data Tools Database Projects in combination very successfully: change the schema in the database and 'Update Model from Database' to pull the changes into the EDMX. I see, though, that Entity Framework 7 will be dropping the EDMX format, and I am looking for a new process that will allow me to use Code First in combination with Database Projects.
Lots of my existing development and deployment processes rely on having a database project that contains the schema. This goes into source control, is deployed along with the code, and is used to update the production database, complete with data migration using pre- and post-deployment scripts. I would be reluctant to drop it.
I would be keen to split one big EDMX into many smaller models as part of this work. This will mean multiple Code First models referencing the same database.
Assuming that I have an existing database and a database project to go with it, I am thinking that I would start by using the following wizard to create an initial set of entity and context classes; I would do this for each of the models.
Add | New Item... | Visual C# Items | Data | ADO.NET Entity Data Model | Code first from database
My problem is - where do I go from there? How do I handle schema changes? As long as I can get the database schema updated, I can use a schema compare operation to get the changes into the project.
These are the options that I am considering.
Make changes in the database and use the wizard from above to regenerate. I guess that I would need to keep any modifications to the entity and/or context classes in partial classes so that they do not get overwritten. Automating this with a list of tables etc. to include would be handy - PowerShell or T4 templates maybe? SqlSharpener (suggested by Keith in comments) looks like it might help here. I would also look at disabling all but the checks for database existence and schema compatibility, as suggested by Steve Green in the comments.
Make changes in code and use migrations to get these changes applied to the database. From what I understand, not having models map cleanly to database schemas (mine don't) might pose problems. I also see some complaints on the net that migrations do not cover all database object types - this was also my experience when I played around with Code First a while back - unique constraints I think were not covered. Has this improved in Entity Framework 7?
Make changes in the database and then use migrations as a kind of comparison between code and the database. See what the differences are and adjust the code to suit. Keep going until there are no differences.
Make changes manually in both code and the database. Obviously, this is not very appealing.
Which of these would be best? Is there anything that I would need to know before trying to implement it? Are there any other, better options?
So the path that we ended up taking was to create some T4 templates that generate both a DbContext and our entities. We provide the entity T4 template with a list of tables from which to generate entities, and we have a syntax to indicate that the entity based on one table should inherit from the entity based on another. Custom code goes in partial classes. So our solution looks most like option 1 from above.
Also, we started out generating fluent configuration in OnModelCreating in the DbContext, but we have switched to using attributes on the entities (where attributes exist - HasPrecision was one that we still had to configure fluently). We found it more concise, and it is easier to locate the configuration for a property when it is right there decorating that property.
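As an illustration (the Order entity and its properties are hypothetical, shown with the EF6-style API), most configuration now sits on the entity as attributes, and only the pieces with no attribute equivalent, such as HasPrecision, stay in OnModelCreating:

    using System.ComponentModel.DataAnnotations;
    using System.ComponentModel.DataAnnotations.Schema;
    using System.Data.Entity;

    [Table("Orders")]
    public partial class Order
    {
        [Key]
        public int OrderId { get; set; }

        [Required]
        [StringLength(50)]
        public string Reference { get; set; }

        // no data annotation covers decimal precision, so this stays fluent
        public decimal Total { get; set; }
    }

    public partial class MyDbContext : DbContext
    {
        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Order>()
                .Property(o => o.Total)
                .HasPrecision(18, 4);
        }
    }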
I'm using EF6 code-first migrations for an existing database, but the initial DbContext does not fully cover the existing schema (since it's massive). So from time to time I have to make updates to the model in a database-first style. For example, when I need an entity mapping for a table or a column that is already in the database but not reflected in the code, I do the following:
Make all the changes (add a new entity, rename a column mapping or add a new property)
Scaffold a migration representing the latest model snapshot (stub_migration)
Copy-paste the latest serialized model from stub_migration into the last_migration resource file (see the Designer sketch after these steps)
Delete stub_migration
Revert last_migration in the database
Update-Database so that the model snapshot in the [__MigrationHistory] table is also updated
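For context, the serialized model that gets copied lives in the migration's .resx file and is read back through the generated Designer file, which looks roughly like this (the migration name and id are illustrative):

    using System.Data.Entity.Migrations.Infrastructure;
    using System.Resources;

    // 201801010000000_last_migration.Designer.cs (generated by Add-Migration)
    public sealed partial class last_migration : IMigrationMetadata
    {
        private readonly ResourceManager Resources =
            new ResourceManager(typeof(last_migration));

        string IMigrationMetadata.Id
        {
            get { return "201801010000000_last_migration"; }
        }

        string IMigrationMetadata.Source
        {
            get { return null; }
        }

        // "Target" is the compressed model snapshot stored in the .resx file;
        // replacing that resource value is what step 3 above does
        string IMigrationMetadata.Target
        {
            get { return Resources.GetString("Target"); }
        }
    }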
I understand that this aproach is a bit hackish and the proper way would be to leave empty stub_migration but this would force lots of empty migrations which I would rather avoid.
Looking at a similar scenario from MSDN article (Option 2: Update the model snapshot in the last migration) I wouldn't imagine that there is an easier way rather than writing power shell script, managed code or both to make it work. But I would rather ask community first before diving deep into it.
So I wonder: is there a simple way to automate generation of new model snapshot in latest migration and reaplying it?
I'm doing something similar. I have a large database and I am using the EF Tools for VS 2013 to reverse engineer it in small parts into my DEV environment. The tool creates my POCOs and Context changes in a separate folder. I move them to my data project, create a fluent configuration and then apply a migration (or turn automigration on).
After a while I want a single migration for TEST or PROD, so I roll them up into a single migration using the technique explained here: http://cpratt.co/migrating-production-database-with-entity-framework-code-first/
You can simplify the steps for updating the DbContext snapshot of the last migration applied to the database by re-scaffolding it with Entity Framework:
Revert the last migration if it is applied to the database:
Update-Database -TargetMigration Previous_Migration
Re-scaffold the last migration with Add-Migration The_name_of_the_last_migration, which will recreate the last migration's *.resx and *.Designer.cs files (but not the migration code itself), which is quite handy.
Those two steps cover the four steps (2-5) from the original question.
You can also get different behavior, depending on what you want, by specifying the -IgnoreChanges and/or -Force flags.
And by the way, the major problem with updating the DbContext snapshot is not how to automate those steps, but how to conditionally apply them to TEST/PROD environments, depending on whether you actually want to suppress the warning because you've mapped existing DB-first entities in your DbContext, or you want it to fail the build in case you've created new entities and forgot to create a code-first migration for them.
So, try to avoid those steps altogether and maybe create empty migrations when you just want to map existing tables to your code.
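A migration created for that purpose (for example with Add-Migration MapExistingTables -IgnoreChanges; the name is made up here) ends up with empty Up and Down methods and only brings the stored model snapshot up to date:

    using System.Data.Entity.Migrations;

    public partial class MapExistingTables : DbMigration
    {
        // intentionally empty: the tables already exist in the database,
        // this migration only updates the model snapshot in __MigrationHistory
        public override void Up()
        {
        }

        public override void Down()
        {
        }
    }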
I have created an MVC project with Entity Framework Code First. The project has a decent sized database and is in Production. Now, I am adding a large new set of features that will pretty much double the size (number of tables) of the database. As I'm developing it, I expect to make a lot of tweaks to the POCO objects and Fluent model building logic. But, I don't want to have 100 "migrations" as I make little changes.
If I was doing Database First, I would change the database and recreate the model from it iteratively. When finished, I could compare the final schema with the previous schema and create the change scripts.
I am inclined to create a new temporary DbContext and develop my Code First model for the new tables there, recreating a new database from scratch as I iterate. And then when I have the model where I'm happy with it, move it over into the main DbContext and create one big migration. But this seems painful. It also has the problem that there are some relationships between new objects and existing objects that need to be put in place.
So, my specific question is how do I make many small changes to a Code First database:
Without re-creating the existing database
And without creating a (permanent) migration for each change I want to test
You say you created the project with Code First, so I assume you don't need to reverse engineer the database.
To avoid recreating the existing database, use a MigrateDatabaseToLatestVersion database initializer (sketched after the commands below).
To avoid creating a permanent migration for each change, you could roll back each minor change and then force the migration to re-run.
To roll back: Update-Database -TargetMigration 0
To force migration to re-run: Add-Migration "OneMigrationToRuleThemAll" -Force
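For the initializer mentioned above, registration is a one-liner at application startup; MyDbContext and Migrations.Configuration below are placeholders for your own context and migrations configuration types:

    using System.Data.Entity;

    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            // MyDbContext and Migrations.Configuration are placeholders
            // for your own context and migrations configuration classes
            Database.SetInitializer(
                new MigrateDatabaseToLatestVersion<MyDbContext, Migrations.Configuration>());
        }
    }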
On the other hand....
"Learning to stop sweating the small stuff involves deciding what things to engage in and what things to ignore" (Richard Carlson)
These tips for Entity Framework migrations are worth a read.
I've been using EF for a while (version 4 with model first) and so far I've not created any mapping manually. Whenever I need more entities/tables, I add an entity and the associations (all foreign key) and click "update database from model", which, as is well known, doesn't actually update any database from the model (although it does need a database connection, for reasons I don't know). What it does is generate a storage model and the appropriate mappings to it, which are all stored back into the same edmx XML file.
So far that has always been enough for me, but I'm wondering what the workflow would be if one wanted to tweak the mappings and storage model manually. "Update database from model" overwrites all manual customization - so how is one supposed to fix up the mappings and storage model? Because I clearly don't want to do it all by hand - in fact, I couldn't even figure out how to create a table in the storage model other than by editing the edmx XML directly.
I have the same problem. I just use a mixture of methods. If I add a field to the database, I just add the field to the model file. If I do a major restructure, I delete the table and recreate it by generating it from the database. Sometimes, I actually edit the edmx as XML to change or add things. You just kinda gotta figure out what process works best for you. I have managed to avoid heavy customization in the edmx by using the T4 template or changing the database and regenerating.