I've got a very strange issue that I'd like some advice on. I have an existing, well-established Sails app using Sails 1.0.2 (originally created with 0.13 and upgraded) and sails-postgresql#1.0.0-13. For quite a while now, the app has had 9 models (all using pg). I'm now trying to add three more. Since we don't like automigrations, I created the tables in Postgres first, then created Sails models based on the Postgres tables using sails generate model. These models look very similar to the ones that already exist.
However, if I try to do a simple query, I get some very strange errors. (Note that "User" is one of my pre-existing models, not one of the new ones -- so this is apparently taking down the whole ORM hook.)
sails> User.find().then(console.log)
Promise {
_bitField: 0,
_fulfillmentHandler0: undefined,
_rejectionHandler0: undefined,
_promise0: undefined,
_receiver0: undefined }
sails> Unhandled rejection AdapterError: Unexpected error from database
adapter: Could not run select() because of 2 problems:
------------------------------------------------------
• "datastore" is required, but it was not defined.
• "models" is required, but it was not defined.
------------------------------------------------------
If I remove the new model files, everything works fine. If the files are present (even if their entire contents are commented out, or all the properties of the model are commented out!), I get the same error. I've tried recreating and simplifying the tables (since I used some advanced Postgres features), but that made no difference.
The only thing I've found that makes any difference is adding the "datastore" attribute to the new models. In that case, I can query the "old" models, but I get the same error when I try to query a new one.
I've confirmed that all my settings in config/models.js & config/datastores.js look right.
Inside of OnModelCreating, I want to be able to ignore a column if the database is on an older migration. EF Core 5 throws an exception if I attempt to read from the database directly, or indirectly by querying the applied migrations. I'm not certain that it's even a good idea, since OnModelCreating is used during the migration 😩, but I'll burn that bridge when I cross it.
There are some examples on how one would do this with EF6, but they don't seem to apply anymore with EF Core.
While Ivan Stoev is right that -- generally -- you should model the target database without outside input, the real world isn't always that clear-cut. In my particular case, there are multiple service instances (Azure Functions) that need to read and write to a single database. In order to maintain zero downtime, those Functions must not read or write columns that don't yet exist.
I solved the problem the way Serge suggested. The database has a known version, populated with seed data, that increments with every migration. On startup, the service reads that version with a regular old Microsoft.Data.SqlClient.SqlConnection. This version is then added to the IServiceCollection as a singleton to be used by the DbContext constructor.
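For illustration, a minimal sketch of that startup read, assuming a DatabaseVersion table and a small wrapper type for DI (both names here are illustrative, not the real ones):

using Microsoft.Data.SqlClient;

// Hypothetical wrapper so the version can be registered in the IServiceCollection.
public record DatabaseVersion(int Value);

public static class DatabaseVersionReader
{
    public static DatabaseVersion Read(string connectionString)
    {
        // Plain ADO.NET, so no DbContext (and no OnModelCreating) is involved yet.
        using var connection = new SqlConnection(connectionString);
        using var command = connection.CreateCommand();
        command.CommandText = "SELECT MAX([Version]) FROM [DatabaseVersion]";
        connection.Open();
        return new DatabaseVersion((int)command.ExecuteScalar());
    }
}

// At startup, before the DbContext is registered:
// services.AddSingleton(DatabaseVersionReader.Read(connectionString));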
When talking to an older database version, OnModelCreating does things like this:
builder.Entity<Widget>(w =>
{
    // another option would be to use the migrations table instead of an integer
    if (DatabaseVersion < ContextVersions.WidgetNewPropertyAddedVersion)
    {
        w.Ignore(x => x.NewProperty);
    }
    else
    {
        w.Property(x => x.NewProperty)
            .HasDefaultValue(0);
    }
});
The startup code also detects if it's been started by the Entity Framework tools and does not read the database version, instead assuming "latest". This way, we do not ignore new properties when building the migration.
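A sketch of that check, using the EF.IsDesignTime flag that EF Core 5 added for exactly this situation (ContextVersions.Latest is a hypothetical constant, and the reader reuses the sketch above):

using Microsoft.EntityFrameworkCore;

// The dotnet-ef tools set EF.IsDesignTime, so skip the database read and
// assume the newest model; migrations then include every property.
DatabaseVersion version = EF.IsDesignTime
    ? new DatabaseVersion(ContextVersions.Latest)
    : DatabaseVersionReader.Read(connectionString);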
Figuring out how to let the service instances know that the database has been upgraded and they should restart to get the new database model is an exercise left up to the reader. :)
I am developing an application in which the database is selected by the end user at runtime. The database can either be on a MS SQL server or an IBM DB2 server. I am currently using IBM DB2 10 Express-c on a windows server for testing. I am developing using Visual Studio 2013 C# and Entity Framework 6. I have installed the EntityFramework.IBM.DB2 Nuget package for the DB2 support. I am using reverse-engineer code-first against an existing SQL server database to generate my base code. The application works fine against a SQL Server database.
I am using System.Data.Common.DbProviderFactories.GetFactory to generate the provider.
System.Data.EntityClient.EntityConnectionStringBuilder connectString = new System.Data.EntityClient.EntityConnectionStringBuilder(a_Connection);
System.Data.Common.DbConnection conn = System.Data.Common.DbProviderFactories.GetFactory(connectString.Provider).CreateConnection();
conn.ConnectionString = connectString.ProviderConnectionString;
LB500Database = new LB402_TestContext(conn, true);
a_Connection is provider=IBM.Data.DB2;provider connection string="Database=LISTBILL;User ID=xxxx;Password=yyyy;Server=db210:50000"
and is being parsed correctly by the EntityConnectionStringBuilder.
I then try to access a table in the database with
LBData500.LB_System oneSystem;
System.Linq.IQueryable<LB_System> allSystem = LB500Database.LB_System.Where(g => g.DatabaseVersion == databaseVersion && g.CompanyID == companyID);
I get an InvalidOperationException "Sequence contains no matching element", which means that no elements are returned. If I remove the Where so that all rows are returned (there is one row in the table) and try to enumerate the result set using the VS debugger, I see the message:
"The context cannot be used while the model is being created. This exception may be thrown if the context is used inside the OnModelCreating method or if the same context instance is accessed by multiple threads concurrently. Note that instance members of DbContext and related classes are not guaranteed to be thread safe."
I am not using multi-threading. I am not inside the OnModelCreating.
Just changing the connect string to point to SQL server works fine, so I think my basic approach is sound. If I were getting some kind of error back from the server I would have something to go on. I can run the query from inside Visual Studio, so I have connectivity.
Any pointers would be appreciated.
UPDATE:
It turns out the EF objects were generated using EF5 while the EF6 runtime was being used. I regenerated the EF objects using EF6 reverse-engineer code-first. I can now connect to the database and get an error message:
"ERROR [42704] [IBM][DB2/NT64] SQL0204N \"DBO.LB_SYSTEM\" is an undefined name."
The schema in the DB2 database is the same as my user ID (in this case; not always). I added CurrentSchema=xxxx to the provider connection string, but EF is still passing dbo as the schema name.
Now I need a way to change the schema name at run time. I saw a link to EFModelAdapter on CodePlex (http://efmodeladapter.codeplex.com), so I may give that a try.
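(For reference, one EF6 built-in worth noting here is DbModelBuilder.HasDefaultSchema, with the schema passed into the context. This is a hypothetical sketch rather than my actual code, and since EF6 builds the model once per context type, it only helps when the schema is fixed for the process lifetime.)

public class LB402_TestContext : System.Data.Entity.DbContext
{
    private readonly string schema;

    public LB402_TestContext(System.Data.Common.DbConnection conn, string schema)
        : base(conn, contextOwnsConnection: true)
    {
        this.schema = schema;
    }

    protected override void OnModelCreating(System.Data.Entity.DbModelBuilder modelBuilder)
    {
        // Applies to every entity that doesn't declare its own schema, so DB2
        // is queried as e.g. XXXX.LB_SYSTEM instead of DBO.LB_SYSTEM.
        modelBuilder.HasDefaultSchema(schema);
    }
}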
Update 2: After looking through EFModelAdapter, I decided to take a different route. Since I only need database access and not schema management, I decided to go with Dapper (https://github.com/StackExchange/dapper-dot-net). This works great for what I need and allows me to change the schema name when accessing DB2 databases.
As per my Update 2, Entity Framework was a little overkill for what I needed. I switched to Dapper (https://github.com/StackExchange/dapper-dot-net) and I am working fine against multiple DBMSs.
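For illustration, a rough sketch of what the earlier query looks like through Dapper, with the schema supplied at run time (the method shape is mine; the schema comes from configuration, not user input, so interpolating it into the SQL is acceptable here):

using System.Data.Common;
using Dapper;

public static LB_System GetSystem(DbConnection conn, string schema, int databaseVersion, int companyID)
{
    // Dapper extends the plain DbConnection, so the same provider-factory
    // connection works for SQL Server and DB2 alike.
    var sql = $"SELECT * FROM {schema}.LB_SYSTEM WHERE DatabaseVersion = @databaseVersion AND CompanyID = @companyID";
    return conn.QuerySingleOrDefault<LB_System>(sql, new { databaseVersion, companyID });
}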
I created a new project and added an empty data model. I added a few entities and properties and then generated the database according to this tutorial. So far, so good. I then went back and added additional entities.
Now, I am no longer able to Generate Database From Model... because I receive an Error 11007: Entity Type 'xxxx' is not mapped for all of the new entities I added. According to MSDN, I can follow the instructions here to resolve the mapping issue between the conceptual and storage models. However, it appears those instructions assume the entities are already present in my storage model (which they are not). When I try to map them manually, the only two tables I have to choose from are the original two tables I created.
I appreciate any help you can offer.
This was a phantom error of sorts. I had another error with a default value I was trying to set for a datetime that appeared at the bottom of my error list. I resolved this error and everything is now working as it should.
I have the following:
val s1 = autoIncremented("advert_id_seq")
on(car)(attributes => declare(attributes.id is (s1)))
on(danceInstructor)(attributes => declare(attributes.id is (s1)))
When I run my app, I catch the following exception:
org.postgresql.util.PSQLException: ERROR: relation "advert_id_seq" already exists
As I realized, Squeryl tries to create the sequence twice and gets an error.
I'm guessing that your issue is with Schema generation, not with querying the database. If that's the case, then you probably just want to avoid having Squeryl create the tables directly. Squeryl's schema generation is purposefully basic. When you outgrow what it can do I think you're better off adopting some method that gives you greater control than a "read your model and generate stuff" tool can offer. Tools like Flyway or Liquibase are good for this.
If you don't want to adopt a new library, you can also use Squeryl to output the schema to a file through one of the Schema.printDdl methods, then remove the extraneous sequence before executing it.
Ruby on Rails has polymorphic associations, but Scala ActiveRecord doesn't. I made something like them, and will push it to a GitHub repo.
What is the best practice for upgrading the database using ORM (DevExpress XPO, NHibernate or MS Entity Framework)?
I'm starting a new project and have to pick an ORM. The development process requires releasing intermediate test builds quite often, and it's likely that each build will have changes in the database structure. Each new version has to upgrade the DB gently to keep current data.
For old solutions I would provide a set of SQL scripts for upgrading the database from v1 to v2, from v2 to v3, etc. and execute them sequentially.
But how is it going to work for ORM? Should I still write SQL scripts to upgrade the DB?
I understand that simple adding new fields wouldn't cause a problem (e.g. see UpdateSchema() method for XPO), but what if I have to split a table and reallocate current records into 2 new tables?
I can't comment on the other ORM's, but I have used DevExpress XPO for a corporate treasury application since 2007. The schema changes a little with every release but there have also been some big schema changes over the years as well. A somewhat extended version of the default XPO upgrade mechanism has comfortably catered for all the changes.
There is good basic information here about upgrading XPO applications.
DevExpress provide a DBUpdater tool to assist you with the task of upgrading production environments. You can extend this tool to cater for additional requirements. In my application, we have added some options for logging, preview with rollback, etc.
Each module has virtual UpdateDatabaseBeforeUpdateSchema() and UpdateDatabaseAfterUpdateSchema() methods. You can significantly control the upgrade process within these.
As you mention, some of the upgrade will be handled automatically by XPO (e.g., adding a new column), but some things need additional control such as initialising the new column with a default value for existing records.
For instance, let's say MyNewField has been added to the MyEntity XPO class in version 2.0 of your application, and that it should default to a value of 3 for existing records. XPO will handle the creation of the new column, but existing records will be NULL. (If you specify a default value in the XPO class, it would only pertain to new records.) To correct the value for existing records, you would add something like the following to the entity module's overridden UpdateDatabaseAfterUpdateSchema():
public override void UpdateDatabaseAfterUpdateSchema()
{
    base.UpdateDatabaseAfterUpdateSchema();
    if (CurrentDBVersion < new Version(2, 0, 0, 0))
    {
        ObjectSpace.GetSession().ExecuteNonQuery(
            "UPDATE [MyEntity] SET [MyNewField] = 3 WHERE [MyNewField] IS NULL");
    }
}
(You could also use ObjectSpace.GetObjects<MyEntity>() and a foreach if you prefer to avoid the direct SQL.)
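A minimal sketch of that non-SQL variant, assuming MyNewField is declared nullable so the unset rows can be detected:

public override void UpdateDatabaseAfterUpdateSchema()
{
    base.UpdateDatabaseAfterUpdateSchema();
    if (CurrentDBVersion < new Version(2, 0, 0, 0))
    {
        // The same fix-up as the SQL above, but through the object model.
        foreach (MyEntity entity in ObjectSpace.GetObjects<MyEntity>())
        {
            if (entity.MyNewField == null)
            {
                entity.MyNewField = 3;
            }
        }
        ObjectSpace.CommitChanges();
    }
}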
In your more extreme example of splitting a table in two, you can use the same approach, but you would override UpdateDatabaseBeforeUpdateSchema() instead: run the SQL to split the table, let XPO perform any other schema updates, and, if necessary, populate any default values in UpdateDatabaseAfterUpdateSchema().
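As a rough sketch of that shape (the table and column names are made up; the session call mirrors the earlier snippet):

public override void UpdateDatabaseBeforeUpdateSchema()
{
    base.UpdateDatabaseBeforeUpdateSchema();
    if (CurrentDBVersion < new Version(3, 0, 0, 0))
    {
        var session = ObjectSpace.GetSession();
        // Copy the columns that are moving into the new table, then drop them
        // from the original; XPO sorts out the rest of the schema afterwards.
        session.ExecuteNonQuery(
            "SELECT [ID], [Street], [City] INTO [MyEntityAddress] FROM [MyEntity]");
        session.ExecuteNonQuery(
            "ALTER TABLE [MyEntity] DROP COLUMN [Street], [City]");
    }
}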
You will find that you bump into constraint problems, e.g. foreign key violations, so you might need to write some general routines such as DropAllForeignKeyConstraints() as part of UpdateDatabaseBeforeUpdateSchema(). Sometimes you find that XPO already provides something, sometimes not. Missing constraints and indexes will get regenerated in the schema update. (In my experience, switching a master data table's primary key turned out to be the hardest update routine to get right.)
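For what it's worth, a SQL Server-flavoured sketch of such a routine (the session call again mirrors the snippet above; as noted, the dropped constraints get regenerated by the schema update):

private void DropAllForeignKeyConstraints()
{
    ObjectSpace.GetSession().ExecuteNonQuery(@"
        DECLARE @sql NVARCHAR(MAX) = N'';
        SELECT @sql = @sql + N'ALTER TABLE '
            + QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + N'.'
            + QUOTENAME(OBJECT_NAME(parent_object_id))
            + N' DROP CONSTRAINT ' + QUOTENAME(name) + N';'
        FROM sys.foreign_keys;
        EXEC sp_executesql @sql;");
}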
By default the calls all happen in an SQL transaction so if anything fails it should all roll back.
The developers need to be aware of when a change to the domain model is likely to cause a problem with the underlying schema.
For testing, we keep a few old customer databases and run a bunch of before-and-after tests as part of the build process to make sure that existing customers are able to upgrade properly whatever version they are upgrading from. In production whenever we run into a problem upgrading, the problem data is added into this test library to prevent similar problems in the future.
We are dealing with major international companies and banks. The customers are quite happy with the result. In situations where a corporate's DBA needs to sign off on the changes, they don't seem to mind having a command line tool to do the upgrade rather than a script.
Most migration solutions can handle easy tasks, like adding a new column or relationship, or removing one, but fail when you rename a column (is that an add? Or a remove followed by an add, which equals a rename? And what should happen to the data in that case?).
All three solutions have basic migrations support; XPO even lets you run your own scripts as a part of the process (to insert static/test/constant data, etc.).
There's also the MigratorDotNet project, which you can use so as not to rely on any ORM-specific migration features.
Personally, I would use auto migration only in dev/test environments, and would keep a full set of upgrade scripts for running against client-specific databases to, say, upgrade from v1 to v2.
How is it going to work for ORM? Should I still write SQL scripts to upgrade the DB?
A clear answer to this question can be found in the Programmers Stack Exchange thread What are the criteria for evaluating an ORM for .NET? There I got a simple answer to the question you asked, and it matches my experience with ORMs from developing projects with Entity Framework and CodeSmith ORM templates.
How does the ORM manage changes in the data model? What if I have to split a table and reallocate current records into 2 new tables?
Some can update the DB automatically within a certain measure, others don't do anything and you'll have to do the dirty work yourself; others provide a framework for handling change that lets you control database updates. That means every couple of days someone needs to spend an hour updating the model to add a table or change datatypes that are changing.
Ref:
https://softwareengineering.stackexchange.com/questions/6543/what-are-the-benefits-of-using-database-abstraction-by-orm
https://softwareengineering.stackexchange.com/questions/41739/best-arguments-for-against-introducing-orm-technology-into-a-companies-dev-proce/41833#41833
If you ask what the best practice is for upgrading the database using an ORM, my answer is: don't use it if your application is more than a hobbyist app.
There are a lot of scenarios where many ORMs are unable to support your specific database needs, e.g. creating stored procedures, indexes, views, or even indexed views/materialized tables, without writing SQL scripts. Problems like adding a new non-nullable column to an existing table are much harder to solve in ORM migration code than by writing SQL scripts.
Current tools like Visual Studio Data Tools handle these kinds of problems far better.