How should I handle database evolutions when using Play and Slick? Must I manually write SQL? - scala

I'm looking at the "Hello Slick" tutorial. A users table is defined and then created using users.schema.create (the code on GitHub is outdated, so there it's users.ddl.create, but when I create the app in Activator it's schema because it's using Slick 3.0.0, which is close enough). However, if I run the app a second time there's an error because the table already exists. I see no option like users.schema.createIfNotExists, which is a bit surprising. But in any case I would need something more sophisticated if I added a column to the table sometime in the future. So does Slick have no way of helping with migrations/evolutions?
I'm using Play and supposedly Play Slick has special support for database evolutions. It's not clear what is offered in addition to the usual Play evolutions that is specific to Slick: we're just told to add a dependency. I can't see any further documentation.
Do I have to manually write the SQL for evolutions (1.sql, Ups, Downs, etc.)? If so, it seems pretty silly to have to also write code like column[Int]("ID", O.PrimaryKey, O.AutoInc). I'm bothered by the duplication of effort and worried that if my SQL/DDL is wrong, subtle bugs will appear when I access the database. I may be wrong, but I seem to remember that migrations can be automatically generated after changing a model in Django, so it doesn't seem like an unsolvable problem. Is this just not something that's been implemented, or am I missing something?
I'm using PostgreSQL if that's relevant.
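For concreteness, the kind of hand-written conf/evolutions/default/1.sql I'm talking about would look something like this (only roughly matching the tutorial's users table):

# --- !Ups
create table "users" (
  "id" serial primary key,
  "name" varchar not null
);

# --- !Downs
drop table "users";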

You could use Slick's schema code generation feature:
http://slick.typesafe.com/doc/3.0.0/code-generation.html
This way, if you update the DB schema, you don't have to hand-write the Slick classes to correspond with the tables; the generator will just do it for you.
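A minimal sketch of invoking the generator with Slick 3.0 (the connection details, output folder and package name are placeholders to adapt to your own app):

// build.sbt: libraryDependencies += "com.typesafe.slick" %% "slick-codegen" % "3.0.0"
object GenerateTables extends App {
  slick.codegen.SourceCodeGenerator.main(Array(
    "slick.driver.PostgresDriver",       // Slick profile
    "org.postgresql.Driver",             // JDBC driver
    "jdbc:postgresql://localhost/mydb",  // database URL (placeholder)
    "app",                               // output folder
    "models",                            // package for the generated Tables.scala
    "dbuser",                            // user (placeholder)
    "dbpassword"                         // password (placeholder)
  ))
}

The generated Tables.scala then stays in sync with whatever schema the evolutions (or any other migration tool) have applied to the database.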

Related

Play: Exclude certain tables from being managed by the evolutions?

I have a Scala Play app over a MySQL instance. I store my evolutions as conf/evolutions/$db/$step.sql files. However, some of my tables are dynamic i.e. their schema may be modified during the runtime of the Play app. What is the best way to exclude these tables from Play's evolutions framework?
I have couple of choices and none of them look especially elegant:
1) Move all the offending tables to a separate database where the evolutions plugin is disabled - This is not that great since I have to move all related tables that have foreign key constraints out of the current database too.
2) Somehow override Play's evolutions framework - Unfortunately, Play's evolutions framework is neither modular nor extensible. I was hoping it would have some Scala or Java hooks like def onUp(tableName: String) and def onDown(tableName: String) that I could override, but Play's evolutions framework has no such nice abstractions, it seems, and is quite monolithic.
3) I know Play creates an entry in a table called play_evolutions - I can modify that table from my app in onStart to manually take out all the offending table-related stuff. That would work but is very hackish and has a hard dependency on Play's internal representation/handling of schema changes.
4) Simply move all offending table SQL statements to conf/evolutions/$db/ignore_evolution_$step.sql - This way these tables are out of the watchful eyes of the evolutions framework, but I essentially have to roll my own framework to parse these files and execute them (a rough sketch of what that could look like follows this list).
5) Anything else I missed?
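A minimal sketch of option 4, assuming Play 2.4+ with the injected play.api.db.Database API; the directory, file-name pattern and naive statement splitting are all placeholders:

import java.nio.file.{Files, Paths}
import javax.inject.Inject
import scala.collection.JavaConverters._
import play.api.db.Database

class IgnoredEvolutions @Inject()(db: Database) {
  // Apply every conf/evolutions/default/ignore_evolution_*.sql on startup,
  // outside of Play's evolutions framework.
  def applyAll(): Unit = {
    val dir = Paths.get("conf/evolutions/default")
    val scripts = Files.list(dir).iterator().asScala
      .filter(_.getFileName.toString.matches("ignore_evolution_\\d+\\.sql"))
      .toSeq
      .sortBy(_.getFileName.toString)
    db.withConnection { conn =>
      val stmt = conn.createStatement()
      scripts.foreach { script =>
        // Naive split on ';' -- fine for plain DDL, not for functions/triggers.
        new String(Files.readAllBytes(script), "UTF-8")
          .split(";")
          .map(_.trim)
          .filter(_.nonEmpty)
          .foreach(sql => stmt.execute(sql))
      }
      stmt.close()
    }
  }
}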

How can I use one sequence to auto-increment several tables in Squeryl (PostgreSQL)?

I have the following:
val s1 = autoIncremented("advert_id_seq")
on(car)(attributes => declare(attributes.id is (s1)))
on(danceInstructor)(attributes => declare(attributes.id is (s1)))
When I run my app I catch the following exception:
org.postgresql.util.PSQLException: ERROR: relation "advert_id_seq" already exists
As I realized, Squeryl tries to create the sequence twice and gets an error.
I'm guessing that your issue is with Schema generation, not with querying the database. If that's the case, then you probably just want to avoid having Squeryl create the tables directly. Squeryl's schema generation is purposefully basic. When you outgrow what it can do I think you're better off adopting some method that gives you greater control than a "read your model and generate stuff" tool can offer. Tools like Flyway or Liquibase are good for this.
If you don't want to adopt a new library, you can also use Squeryl to output the schema to a file through one of the Schema.printDdl methods, then remove the extraneous sequence before executing it.
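A minimal sketch of the printDdl route; MySchema is a placeholder for your Schema object (the one declaring car and danceInstructor), and it assumes a SessionFactory is already configured:

import java.io.PrintWriter
import org.squeryl.PrimitiveTypeMode._

// Dump the DDL Squeryl would generate so it can be edited by hand
// (e.g. removing the duplicate CREATE SEQUENCE) before running it.
val writer = new PrintWriter("schema.sql")
try {
  transaction {              // printDdl resolves the database adapter from the session
    MySchema.printDdl(writer)
  }
} finally {
  writer.close()
}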
Ruby on Rails has polymorphic associations, but Scala ActiveRecord doesn't have them. I made something similar and will push it to a GitHub repo.

Entity Framework code first - development strategies

Working on a brand new project from the ground up. That means the data model is in constant flux, doubly so because things are, inevitably, not as well planned as they should be. Model classes are being created and changed fairly regularly.
The plan was to use the latest version of EF with all the neat code-first stuff in it. But we're constantly tripping over the limitations the framework has in terms of adding or updating tables. The initialization options seem to allow only the complete deletion and re-creation of the database, which isn't really ideal.
I've had a look at migrations, but this seems like a sledgehammer to crack a nut: we don't need to detail every single small change and update with a new migration scaffold.
Are there some better strategies to deal with this? For instance, I started writing some unit tests to pre-populate one of the contexts with some test data, but because this causes the whole Db to drop and re-create, it causes problems with all the other contexts. Or perhaps making use of a custom initialiser to seed the data for us? How can we easily exclude these in production code?
We're also wondering about perhaps abandoning code-first and going back to EDMX diagrams. At least that way changes result in updated SQL commands which can be run directly against the database.
Any suggestions gratefully received.
I think, IMHO, that:
since the database schema must at least match your model, you should/must detail every single change, and Code First Migrations allows that and traces the changes over time
Code First Migrations can also migrate the database schema for you
Code First Migrations can also produce the SQL that migrates the schema for you
For these reasons Code First is as good as (if not better than) the EDMX approach
Please take a few minutes to work through http://msdn.microsoft.com/en-us/data/jj591621.aspx
One other point, again IMHO and in a perfect world: if you unit test the business logic of your model, you should not need the DAL; use generic collections instead. Be aware of the different behaviour of LINQ to Objects vs LINQ to Entities, for example concerning case sensitivity.

DevExpress XPO vs NHibernate vs Entity Framework: database upgrading issue

What is the best practice for upgrading the database using ORM (DevExpress XPO, NHibernate or MS Entity Framework)?
I'm starting a new project and have to pick an ORM. The development process requires releasing intermediate test builds quite often, and it's likely that each build will have changes in the database structure. Each new version has to upgrade the DB gently to keep the current data.
For old solutions I would provide a set of SQL scripts for upgrading the database from v1 to v2, from v2 to v3, etc. and execute them sequentially.
But how is it going to work for ORM? Should I still write SQL scripts to upgrade the DB?
I understand that simply adding new fields wouldn't cause a problem (e.g. see the UpdateSchema() method for XPO), but what if I have to split a table and reallocate current records into 2 new tables?
I can't comment on the other ORM's, but I have used DevExpress XPO for a corporate treasury application since 2007. The schema changes a little with every release but there have also been some big schema changes over the years as well. A somewhat extended version of the default XPO upgrade mechanism has comfortably catered for all the changes.
There is good basic information here about upgrading XPO applications.
DevExpress provide a DBUpdater tool to assist you with the task of upgrading production environments. You can extend this tool to cater for additional requirements. In my application, we have added some options for logging, preview with rollback, etc.
Each module has virtual UpdateDatabaseBeforeUpdateSchema() and UpdateDatabaseAfterUpdateSchema() methods. You can significantly control the upgrade process within these.
As you mention, some of the upgrade will be handled automatically by XPO (e.g., adding a new column), but some things need additional control such as initialising the new column with a default value for existing records.
For instance, let's say MyNewField has been added to the MyEntity XPO class in version 2.0 of your application. Let's say it should default to a value of 3 for existing records. XPO will handle the creation of the new column, but existing records will be NULL. (If you specify a default value in the XPO class, it would only pertain to new records.) In order to correct the value for existing records you would add something like the following to the entity module's overridden UpdateDatabaseAfterUpdateSchema():
public override void UpdateDatabaseAfterUpdateSchema()
{
    base.UpdateDatabaseAfterUpdateSchema();
    if (CurrentDBVersion < new Version(2, 0, 0, 0))
        ObjectSpace.GetSession().ExecuteNonQuery(
            "UPDATE [MyEntity] SET [MyNewField] = 3 WHERE [MyNewField] IS NULL");
}
(You could also use ObjectSpace.GetObjects<MyEntity>() and a foreach if you prefer to avoid the direct SQL.)
In your more extreme example of splitting a table in two, you can use the same method, but you would override UpdateDatabaseBeforeUpdateSchema() instead, run the SQL to split the table, let XPO perform any other schema updates and, if necessary, populate any default values in the UpdateDatabaseAfterUpdateSchema().
You will find that you bump into constraint problems, e.g. foreign key violations, so you might find you need to write some general routines such as DropAllForeignKeyConstraints() as part of UpdateDatabaseBeforeUpdateSchema(). Sometimes you find that XPO already provides something, sometimes not. Missing constraints and indexes will get regenerated in the schema update. (In my experience, switching a master data table's primary key turned out to be the hardest update routine to get right.)
By default the calls all happen in an SQL transaction so if anything fails it should all roll back.
The developers need to be aware of when a change to the domain model is likely to cause a problem with the underlying schema.
For testing, we keep a few old customer databases and run a bunch of before-and-after tests as part of the build process to make sure that existing customers are able to upgrade properly whatever version they are upgrading from. In production whenever we run into a problem upgrading, the problem data is added into this test library to prevent similar problems in the future.
We are dealing with major international companies and banks. The customers are quite happy with the result. In situations where a corporate's DBA needs to sign off on the changes, they don't seem to mind having a command line tool to do the upgrade rather than a script.
Most migration solutions can handle easy tasks, like adding a new column or relationship, or removing one, but fail when you rename a column (is that an add? Or a remove followed by an add, which equals a rename? What should you do with the data in that case?).
All three solutions have basic migrations support; XPO even lets you run your own scripts as a part of the process (to insert static/test/constant data, etc.).
There's also the MigratorDotNet project that you can use so as not to rely on any ORM-specific migration feature.
Personally, I would use auto-migration only in a dev/test environment and would have a full set of upgrade scripts when running against a client-specific database, to upgrade from, say, v1 to v2.
How is it going to work for ORM? Should I still write SQL scripts to upgrade the DB?
A clear answer to this question can be found in the Programmers Stack Exchange thread - What are the criteria for evaluating an ORM for .NET? - where I got a simple answer to the question you asked, and it matches my experience with ORMs while developing a project with Entity Framework and CodeSmith ORM templates.
How does the ORM manage changes in the data model? What if I have to split a table and reallocate current records into 2 new tables?
Some can update the DB automatically within a certain measure, others don't do anything and you'll have to do the dirty work yourself; others provide a framework for handling change that lets you control database updates. That means every couple of days someone needs to spend an hour updating the model to add a table or change datatypes that are changing.
Ref:
https://softwareengineering.stackexchange.com/questions/6543/what-are-the-benefits-of-using-database-abstraction-by-orm
https://softwareengineering.stackexchange.com/questions/41739/best-arguments-for-against-introducing-orm-technology-into-a-companies-dev-proce/41833#41833
If you ask - what is the best practice for upgrading the DB using an ORM - my answer is: don't use it if your application is more than a hobbyist app.
There are a lot of scenarios where many ORMs are unable to support your specific database needs, e.g. creating stored procedures, indices, views, or even indexed views/materialized tables, without writing SQL scripts. Problems like adding a new non-nullable column to an existing table are much harder to solve in ORM migration code than by writing SQL scripts.
Current tools like Visual Studio Data Tools handle these kinds of problems much better.

Entity Framework equivalent for NHibernate SchemaExport

Is there an equivalent in Entity Framework to NHibernate's SchemaExport?
Given I have a working Entity-Model, I would like to programmatically initialize a database.
I would like to use this functionality in the setup of my integration tests.
Creating the matching DDL for an Entity-Model would also suffice.
Yes - given that you're working with Entity Framework 4 (which is, confusingly enough, the second version...)
Edit: This is the way to do it with just EF4. My original post below describes how to accomplish the same thing with the Code-Only approach in EF CTP3.
How to: Export model to database in EF4
To export a model to database, right-click anywhere in the designer (where you don't have an entity) and choose "Generate database from model..." and follow the steps described in the wizard. Voila!
Original post, targeting EF4 CTP3 and Code-Only: This is code I use in a little setup utility.
var builder = new ContextBuilder<ObjectContext>();

// Register all configurations you need here
builder.Configurations.Add(new EntryConfiguration());
builder.Configurations.Add(new TagConfiguration());

var conn = GetUnOpenedSqlConnection();
var db = builder.Create(conn);

if (db.DatabaseExists())
{
    db.DeleteDatabase();
}
db.CreateDatabase();
It works on my machine (although here I've simplified a little bit for brevity...), so if something does not work it's because I over-simplified.
Note that, as TomTom stated, you will only get the basics. But it's pretty useful even if you have a more complicated schema - you only have to manually write DDL to add the complicated stuff onto the generated DB schema.
Nope, and seriously I do wonder why NHibernate bothers having this.
Problem is: an O/R mapper has LESS information about the database than needed for non-trivial setups.
Missing are:
Indices, fully configured
Information about server-side constraints and triggers (yes, there may be some)
Information about object distribution over elements like table spaces
Information about permissions
I really love a test method (check that the database is good enough for all the objects you know about), but generation is VERY tricky - been there, done that. You need some serious additional annotations in the ORM to even be able to generate sensible indices.