Inside OnModelCreating, I want to be able to ignore a column if the database is on an older migration. EF Core 5 throws an exception if I attempt to read from the database directly, or indirectly by querying the applied migrations. I'm not certain that it's even a good idea, since OnModelCreating is used during the migration 😩, but I'll burn that bridge when I cross it.
There are some examples on how one would do this with EF6, but they don't seem to apply anymore with EF Core.
While Ivan Stoev is right that, generally, you should model the target database without outside input, the real world isn't always that clear-cut. In my particular case, multiple service instances (Azure Functions) need to read and write to a single database. To maintain zero downtime, those Functions must not read or write columns that don't yet exist.
I solved the problem the way Serge suggested. The database has a known version, populated with seed data, that increments with every migration. On startup, the service reads that version with a regular old Microsoft.Data.SqlClient SqlConnection. This version is then added to the IServiceCollection as a singleton to be used by the DbContext constructor.
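A minimal sketch of that startup step (the SchemaVersion table, the query, and the DatabaseVersion wrapper type are illustrative assumptions, not from the original code):

using Microsoft.Data.SqlClient;
using Microsoft.Extensions.DependencyInjection;

// Assumed wrapper type so the version can be injected into the DbContext.
public sealed record DatabaseVersion(int Value);

public static void AddDatabaseVersion(IServiceCollection services, string connectionString)
{
    int version;

    // Read the schema version once at startup with a plain SqlConnection.
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT TOP 1 Version FROM SchemaVersion", connection))
    {
        connection.Open();
        version = (int)command.ExecuteScalar();
    }

    // Register it for the DbContext to use in OnModelCreating.
    services.AddSingleton(new DatabaseVersion(version));
}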
When talking to an older database version, OnModelCreating does things like this:
builder.Entity<Widget>(widget =>
{
    // another option would be to use the migrations table instead of an integer
    if (DatabaseVersion < ContextVersions.WidgetNewPropertyAddedVersion)
    {
        widget.Ignore(w => w.NewProperty);
    }
    else
    {
        widget.Property(w => w.NewProperty)
            .HasDefaultValue(0);
    }
});
The startup code also detects if it's been started by the Entity Framework tools and does not read the database version, instead assuming "latest". This way, we do not ignore new properties when building the migration.
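One way to do that check (a sketch; EF.IsDesignTime exists as of EF Core 5, while ContextVersions.Latest and ReadVersionFromDatabase are placeholders for the pieces described above):

using Microsoft.EntityFrameworkCore;

// Skip the database read when the EF tools are driving the code and assume
// the newest known model version instead.
DatabaseVersion version = EF.IsDesignTime
    ? new DatabaseVersion(ContextVersions.Latest)
    : ReadVersionFromDatabase(connectionString); // e.g. the SqlConnection query above

services.AddSingleton(version);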
Figuring out how to let the service instances know that the database has been upgraded, and that they should restart to pick up the new database model, is left as an exercise for the reader. :)
I'm playing around with spring-data-jdbc and discovered a problem which I can't solve using Google.
No matter what I try to do, I just can't push a trivial object into the database (Bean1.java:25):
carRepository.save(new Car(2L, "BMW", "5"));
Both without and with a TransactionManager + @Transactional, the database (apparently) does not commit the record.
The code is based on a Postgres database, but you might also simply use H2 instead and get the same result.
Here is the (minimalistic) source code:
https://github.com/bitmagier/spring-data-jdbc-sandbox/tree/stackoverflow-question
Can somebody tell me, why the car is not inserted into the database?
This is not related to transactions not working.
Instead, it's about Spring Data JDBC considering your instance an existing instance that needs updating (instead of inserting).
You can verify this is the problem by activating logging for org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate. You should see an update but no insert.
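For example, assuming a Spring Boot application.properties (the SQL only shows up at DEBUG level):

# application.properties (assumption: Spring Boot is used for configuration)
logging.level.org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate=DEBUG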
By default, Spring Data JDBC considers an entity new when its id is of an object type and has a value of null, or is of a primitive type (e.g. int or long) and has a value of 0.
If your entity has an attribute with the @Version annotation, that attribute will be used to determine whether the instance is new.
You have the following options in order to make it work:
1. Set the id to null and configure your database schema to generate a new value on insert. After the save, your entity instance will contain the generated value from the database. Note: Spring Data JDBC will set the id even if it is final in your entity.
2. Leave the id null and set it to the desired value in a before-convert listener.
3. Let your entity implement Persistable. This allows you to control when an entity is considered new (see the sketch after this list). You'll probably need a listener as well so you can let the entity know it is no longer new.
4. Beginning with version 1.1 of Spring Data JDBC you'll also be able to use a JdbcAggregateTemplate to do a direct insert, without inspecting the id; see https://jira.spring.io/browse/DATAJDBC-282. Of course, you can do that in a custom method of your repository, as is done in this example: https://github.com/spring-projects/spring-data-examples/pull/441
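A rough sketch of the Persistable option, reusing the Car from the question (the field names and the isNew bookkeeping are assumptions, not taken from the linked repository):

import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.Transient;
import org.springframework.data.domain.Persistable;

public class Car implements Persistable<Long> {

    @Id
    private Long id;
    private String make;
    private String model;

    @Transient
    private boolean isNew = true; // not persisted; controls insert vs. update

    public Car(Long id, String make, String model) {
        this.id = id;
        this.make = make;
        this.model = model;
    }

    @Override
    public Long getId() {
        return id;
    }

    @Override
    public boolean isNew() {
        return isNew;
    }

    // Call this (e.g. from a listener after the save) once the row exists,
    // so later saves become updates.
    public void markNotNew() {
        this.isNew = false;
    }
}

With something like this in place, carRepository.save(new Car(2L, "BMW", "5")) results in an insert, because isNew() returns true for a freshly constructed instance.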
Currently I am working on a project which needs to be backwards compatible with (non-EF) databases, but I also want to be able to create a new database from the model.
For this task I save the current schema somewhere (in XML form) and update the databases with raw SQL update steps until they match the schema, which is working fine.
Also, the modelBuilder currently matches the schema (as in, my algorithm finds no difference between a database newly created by context.Database.Create() and my saved schema).
Since the schema will most likely change in later stages of development, I have to support two ways to create an up-to-date database, and I was wondering if I could combine them. Right now I have to update the saved target schema, create the update steps AND update my modelBuilder so that it creates exactly the database I need, which is quite a tedious task.
Since there is probably no way to "translate" my schema into modelBuilder entries, and because there is additional, unmapped information in my POCO classes (which rules out the approach of updating a correct database and then updating my classes database-first), the only way visible to me would be to somehow gather the CREATE TABLE statements a context would generate when I call Database.Create(), which I could then use to update my schema and the update steps accordingly.
I'm quite sure I could do the same by logging the context while calling the Create() method; however, this would take quite some time, issue some queries I do not need, and create a throwaway database I would have to get rid of afterwards each time I update my model.
So I was wondering if there is a way to inspect the modelBuilder (or the context, of course) and somehow see what the tables it maps to would look like.
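Not part of the original question, but one approach that may do exactly this in EF6: ObjectContext.CreateDatabaseScript() returns the DDL that Database.Create() would execute, without creating anything. A minimal sketch (MyDbContext is a placeholder for your own context type):

using System;
using System.Data.Entity.Infrastructure;

using (var context = new MyDbContext())
{
    // Generates the CREATE TABLE statements the model maps to,
    // without actually creating or touching a database.
    string ddl = ((IObjectContextAdapter)context).ObjectContext.CreateDatabaseScript();
    Console.WriteLine(ddl);
}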
I'm building my EF (v4.0.30319) data model from my SQL Server database. Each table has Created and Updated fields populated via database triggers.
This database is the backend of an ASP.NET Web API application, and I recently discovered a problem. Since the Created and Updated fields are not populated in the model passed into the API endpoint, they are written to the database as NULL. This overwrites any values already in the database for those properties.
I discovered I can edit the EF data model and just delete those two columns from the entity. It works and the datetimes are not overwritten with NULL. But this leads to another, less serious but more annoying, problem... my data model has a bunch of tables that contain these properties and all the tables need to be updated by removing these two columns.
Is there a way to tell EF to ignore certain columns in entities across the entire data model without manually deleting them?
As far as I know, no. Generating the model from the database is going to always create all of the fields from the database table. Well, as long as it has a primary key it will.
It is possible to only update the fields that you want, i.e. don't include the "Created" and "Updated" fields in your create and update methods. I'd have to say, though, that I think it'd be better if those fields didn't even exist on the model at that point. You may at some point see those fields on the model and not remember that they won't get persisted to the DB.
You may want to look into just inserting the datetimes into those fields when you call your create() and update() methods. Then you could just ditch the triggers. You'd obviously want to use a class library for all of your database operations so this functionality would be in one place. To keep it nice and neat, you know?
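A minimal sketch of that idea, assuming a simple repository-style class library (the entity, context, and repository names are placeholders, not from the question):

// Stamp the audit columns in one place instead of relying on triggers.
public class WidgetRepository
{
    private readonly MyEntities _context; // the generated EF4 ObjectContext

    public WidgetRepository(MyEntities context)
    {
        _context = context;
    }

    public void Create(Widget entity)
    {
        var now = DateTime.UtcNow;
        entity.Created = now;
        entity.Updated = now;
        _context.Widgets.AddObject(entity); // ObjectSet<T>.AddObject in EF4
        _context.SaveChanges();
    }

    public void Update(Widget entity)
    {
        entity.Updated = DateTime.UtcNow;
        _context.SaveChanges();
    }
}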
Consider an iPhone application that is a catalogue of animals. The application should allow the user to add custom information for each animal -- let's say a rating (on a scale of 1 to 5), as well as some notes they can enter in about the animal. However, the user won't be able to modify the animal data itself. Assume that when the application gets updated, it should be easy for the (static) catalogue part to change, but we'd like the (dynamic) custom user information part to be retained between updates, so the user doesn't lose any of their custom information.
We'd probably want to use Core Data to build this app. Let's also say that we have a previous process already in place to read in animal data to pre-populate the backing (SQLite) store that Core Data uses. We can embed this database file into the application bundle itself, since it doesn't get modified. When a user downloads an update to the application, the new version will include the latest (static) animal catalogue database, so we don't ever have to worry about it being out of date.
But, now the tricky part: how do we store the (dynamic) user custom data in a sound manner?
My first thought is that the (dynamic) database should be stored in the Documents directory for the app, so application updates don't clobber the existing data. Am I correct?
My second thought is that since the (dynamic) user custom data database is not in the same store as the (static) animal catalogue, we can't naively make a relationship between the Rating and the Notes entities (in one database) and the Animal entity (in the other database). In this case, I would imagine one solution would be to have an "animalName" string property in the Rating/Notes entity, and match it up at runtime. Is this the best way to do it, or is there a way to "sync" two different databases in Core Data?
Here's basically how I ended up solving this.
While Amorya's and MHarrison's answers were valid, they had one assumption: that once created, not only the tables but each row in each table would always be the same.
The problem is that my process to pre-populate the "Animals" database, using existing data (which is updated periodically), creates a new database file each time. In other words, I can't rely on creating a relationship between the (static) Animal entity and a (dynamic) Rating entity in Core Data, since that entity may not exist the next time I regenerate the application. Why not? Because I have no control over how Core Data stores that relationship behind the scenes. Since it's an SQLite backing store, it's likely using a table with foreign key relations. But when you regenerate the database, you can't assume anything about what value each row gets for a key. The primary key for Lion may be different the second time around, if I've added a Lemur to the list.
The only way to avoid this problem would require pre-populating the database only once, and then manually updating rows each time there's an update. However, that kind of process isn't really possible in my case.
So, what's the solution? Well, since I can't rely on the foreign key relations that Core Data makes, I have to make up my own. What I do is introduce an intermediate step in my database generation process: instead of taking my raw data (which happens to be UTF-8 text but is actually MS Word files) and creating the SQLite database with Core Data directly, I first convert the .txt to .xml. Why XML? Not because it's a silver bullet, but simply because it's a data format I can parse very easily. So what does this XML file add? A hash value that I generate for each Animal, using MD5, that I'll assume is unique. What is the hash value for? Well, now I can create two databases: one for the "static" Animal data (for which I have a process already), and one for the "dynamic" Ratings data, which the iPhone app creates and which lives in the application's Documents directory. For each Rating, I create a pseudo-relationship with the Animal by saving the Animal entity's hash value. Every time the user brings up an Animal detail view on the iPhone, I query the "dynamic" database to find out whether a Rating entity exists that matches the Animal.md5Hash value.
Since I'm saving this intermediate XML data file, the next time there's an update, I can diff it against the last XML file I used to see what's changed. Now, if the name of an animal was changed -- let's say a typo was corrected -- I revert the hash value for that Animal in situ. This means that even if an Animal name is changed, I'll still be able to find a matching Rating, if it exists, in the "dynamic" database.
This solution has another nice side effect: I don't need to handle any migration issues. The "static" Animal database that ships with the app can stay embedded as an app resource. It can change all it wants. The "dynamic" Ratings database may need migration at some point, if I modify its data model to add more entities, but in effect the two data models stay totally independent.
The way I'm doing this is: ship a database of the static stuff as part of your app bundle. On app launch, check if there is a database file in Documents. If not, copy the one from the app bundle to Documents. Then open the database from Documents: this is the only one you read from and edit.
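A minimal sketch of that first-launch copy step in Swift (the file name "animals.sqlite" is a placeholder, and the original answer predates Swift, so treat this as an illustration of the idea rather than the answerer's code):

import Foundation

func workingDatabaseURL() throws -> URL {
    let fileManager = FileManager.default
    let documents = try fileManager.url(for: .documentDirectory,
                                        in: .userDomainMask,
                                        appropriateFor: nil,
                                        create: true)
    let destination = documents.appendingPathComponent("animals.sqlite")

    // Only copy the bundled database the first time; afterwards the copy in
    // Documents is the only one that gets read and edited.
    if !fileManager.fileExists(atPath: destination.path),
       let bundled = Bundle.main.url(forResource: "animals", withExtension: "sqlite") {
        try fileManager.copyItem(at: bundled, to: destination)
    }
    return destination
}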
When an upgrade has happened, the new static content will need to be merged with the user's editable database. Each static item (Animal, in your case) has a field called factoryID, which is a unique identifier. On the first launch after an update, load the database from the app bundle, and iterate through each Animal. For each one, find the appropriate record in the working database, and update any fields as necessary.
There may be a quicker solution, but since the upgrade process doesn't happen too often then the time taken shouldn't be too problematic.
Storing your SQLite database in the Documents directory (NSDocumentDirectory) is certainly the way to go.
In general, you should avoid application changes that modify or delete SQL tables as much as possible (adding is ok). However, when you absolutely have to make a change in an update, something like what Amorya said would work - open up the old DB, import whatever you need into the new DB, and delete the old one.
Since it sounds like you want a static database with an "Animal" table that can't be modified, simply replacing this table in an upgrade shouldn't be an issue, as long as the IDs of the entries don't change. The way you should store user data about animals is to create a relation with a foreign key to an animal ID for each entry the user creates. This is what you would need to migrate when an upgrade changes it.