Play: Exclude certain tables from being managed by the evolutions?

I have a Scala Play app over a MySQL instance. I store my evolutions as conf/evolutions/$db/$step.sql files. However, some of my tables are dynamic i.e. their schema may be modified during the runtime of the Play app. What is the best way to exclude these tables from Play's evolutions framework?
I have a couple of choices, and none of them look especially elegant:
1) Move all the offending tables to a separate database where the evolutions plugin is disabled - This is not that great since I have to move all related tables that have foreign key constraints out of the current database too.
2) Somehow override Play's evolutions framework - Unfortunately, Play's evolutions framework is neither modular nor extensible. I was hoping it would have some Scala or Java hooks such as def onUp(tableName: String) and def onDown(tableName: String) that I could override, but it seems to have no such abstractions and is quite monolithic.
3) I know Play creates an entry in a table called play_evolutions - I can modify that table from my app in onStart to manually take out everything related to the offending tables. That would work but is very hackish and has a hard dependency on Play's internal representation/handling of schema changes.
4) Simply move all offending-table SQL statements to conf/evolutions/$db/ignore_evolution_$step.sql - This way these tables are out of the watchful eyes of the evolutions framework, but I essentially have to roll my own framework to parse these files and execute them (see the sketch after this list).
5) Anything else I missed?
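For what it's worth, option 4 needs surprisingly little code. Below is a minimal sketch, assuming plain JDBC and the ignore_evolution_ naming convention above; the object name, the naive splitting on semicolons, and the startup wiring are illustrative assumptions, not Play APIs:

import java.io.File
import java.sql.Connection
import scala.io.Source

object IgnoredEvolutions {
  // Executes every conf/evolutions/<db>/ignore_evolution_*.sql file in name order.
  // Splitting on ";" is naive and will break on statements containing literal
  // semicolons; good enough for simple DDL (an assumption of this sketch).
  def runIgnoredScripts(conn: Connection, db: String): Unit = {
    val dir = new File(s"conf/evolutions/$db")
    val scripts = Option(dir.listFiles()).getOrElse(Array.empty[File])
      .filter(f => f.getName.startsWith("ignore_evolution_") && f.getName.endsWith(".sql"))
      .sortBy(_.getName)
    for (script <- scripts) {
      val src = Source.fromFile(script, "UTF-8")
      val sql = try src.mkString finally src.close()
      for (stmt <- sql.split(";").map(_.trim) if stmt.nonEmpty) {
        val st = conn.createStatement()
        try st.execute(stmt) finally st.close()
      }
    }
  }
}

You would call runIgnoredScripts from onStart (or an eager binding) with a connection borrowed from your connection pool.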

Related

How should I handle database evolutions when using Play and Slick? Must I manually write SQL?

I'm looking at the "Hello Slick" tutorial. A users table is defined and then created using users.schema.create (the code on GitHub is outdated, so there it's users.ddl.create, but when I create the app in Activator it's schema, because it's using Slick 3.0.0, which is close enough). However, if I run the app a second time there's an error because the table already exists. I see no option like users.schema.createIfNotExists, which is a bit surprising. But in any case I would need something more sophisticated if I added a column to the table sometime in the future. So does Slick have no way of helping with migrations/evolutions?
I'm using Play and supposedly Play Slick has special support for database evolutions. It's not clear what is offered in addition to the usual Play evolutions that is specific to Slick: we're just told to add a dependency. I can't see any further documentation.
Do I have to manually write the SQL for evolutions (1.sql, Ups, Downs, etc.)? If so it seems pretty silly to have to write code like column[Int]("ID", O.PrimaryKey, O.AutoInc) in addition. I'm bothered by the duplication of effort and worried that if my SQL/DDL is wrong subtle bugs will appear when I access the database. I may be wrong but I seem to remember that migrations can be automatically generated after changing a model in Django so it doesn't seem like an unsolvable problem. Is this just not something that's been implemented or am I missing something?
I'm using PostgreSQL if that's relevant.
You could use Slick's schema code generation feature:
http://slick.typesafe.com/doc/3.0.0/code-generation.html
This way, if you update the database schema, you don't have to hand-write the Slick classes to correspond with the tables; the generator will do it for you.
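For reference, the generator ships in the slick-codegen artifact and can be invoked as a plain main class; here is a minimal sketch for Slick 3.0 against PostgreSQL (the connection URL, output folder, and package are placeholders):

import slick.codegen.SourceCodeGenerator

object GenerateTables extends App {
  // Arguments, in order: Slick profile, JDBC driver, URL, output folder, package.
  SourceCodeGenerator.main(Array(
    "slick.driver.PostgresDriver",      // Slick 3.0.x profile for PostgreSQL
    "org.postgresql.Driver",            // JDBC driver class
    "jdbc:postgresql://localhost/mydb", // connection URL (placeholder)
    "src/main/scala",                   // where the generated source goes
    "generated"                         // package for the generated Tables object
  ))
}

Running this emits a Tables.scala mirroring the current schema, so the Scala side can be regenerated whenever an evolution changes the database.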

Where to initialize a database in Play framework 2.4 with Slick 3?

Slick 3.0.2 doesn't automatically create database tables when they don't exist, so you have to do something like:
val setup = DBIO.seq(
  (table1.schema ++ table2.schema).create,
  //...
)
Where do you put this code in Play 2.4?
In an eager binding?
https://www.playframework.com/documentation/2.4.x/ScalaDependencyInjection#Eager-bindings
From the point of view of the Play framework developers, you should use evolutions to define your schema.
https://www.playframework.com/documentation/2.4.x/Evolutions
https://www.playframework.com/documentation/2.4.x/PlaySlick
Of course, that might be a bit boring and repetitive, as you also define your model in Slick.
If you want to run some code on startup an eager binding is the way to go.
If you have problems with eager bindings, please let us know.
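To make that concrete, here is a hedged sketch of an eager binding under Play 2.4 with Play-Slick 1.x; the Users table and all class names are placeholders, and blocking with Await at startup is a simplification:

import javax.inject.{Inject, Singleton}
import play.api.db.slick.DatabaseConfigProvider
import play.api.inject.{Binding, Module}
import play.api.{Configuration, Environment}
import slick.driver.JdbcProfile
import scala.concurrent.Await
import scala.concurrent.duration._

@Singleton
class DatabaseSetup @Inject() (dbConfigProvider: DatabaseConfigProvider) {
  private val dbConfig = dbConfigProvider.get[JdbcProfile]
  import dbConfig.driver.api._

  // Placeholder table; substitute your real table definitions.
  class Users(tag: Tag) extends Table[(Long, String)](tag, "users") {
    def id   = column[Long]("id", O.PrimaryKey, O.AutoInc)
    def name = column[String]("name")
    def *    = (id, name)
  }
  val users = TableQuery[Users]

  // Runs once when the eager binding is instantiated at startup.
  // Note this still fails on a second run, since Slick 3.0 has no createIfNotExists.
  Await.result(dbConfig.db.run(users.schema.create), 30.seconds)
}

// Enable with: play.modules.enabled += "modules.DatabaseSetupModule"
class DatabaseSetupModule extends Module {
  def bindings(environment: Environment, configuration: Configuration): Seq[Binding[_]] =
    Seq(bind[DatabaseSetup].toSelf.eagerly())
}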

Development process for Code First Entity Framework and SQL Server Data Tools Database Projects

I have been using Database First Entity Framework (EDMX) and SQL Server Data Tools Database Projects in combination very successfully - change the schema in the database and use 'Update Model from Database' to pull the changes into the EDMX. I see, though, that Entity Framework 7 will be dropping the EDMX format, and I am looking for a new process that will allow me to use Code First in combination with Database Projects.
Lots of my existing development and deployment processes rely on having a database project that contains the schema. This goes in source control, is deployed along with the code, and is used to update the production database, complete with data migration using pre- and post-deployment scripts. I would be reluctant to drop it.
I would be keen to split one big EDMX into many smaller models as part of this work. This will mean multiple Code First models referencing the same database.
Assuming that I have an existing database and a database project to go with it - I am thinking that I would start by using the following wizard to create an initial set of entity and context classes - I would do this for each of the models.
Add | New Item... | Visual C# Items | Data | ADO.NET Entity Data Model | Code first from database
My problem is - where do I go from there? How do I handle schema changes? As long as I can get the database schema updated, I can use a schema compare operation to get the changes into the project.
These are the options that I am considering.
Make changes in the database and use the wizard from above to regenerate. I guess that I would need to keep any modifications to the entity and/or context classes in partial classes so that they do not get overwritten. Automating this with a list of tables etc. to include would be handy - PowerShell or T4 templates, maybe? SqlSharpener (suggested by Keith in the comments) looks like it might help here. I would also look at disabling all but the checks for database existence and schema compatibility, as suggested by Steve Green in the comments.
Make changes in code and use migrations to get these changes applied to the database. From what I understand, not having models map cleanly to database schemas (mine don't) might pose problems. I also see some complaints on the net that migrations do not cover all database object types - this was also my experience when I played around with Code First a while back - unique constraints I think were not covered. Has this improved in Entity Framework 7?
Make changes in the database and then use migrations as a kind of comparison between code and the database. See what the differences are and adjust the code to suit. Keep going until there are no differences.
Make changes manually in both code and the database. Obviously, this is not very appealing.
Which of these would be best? Is there anything that I would need to know before trying to implement it? Are there any other, better options?
So the path that we ended up taking was to create some T4 templates that generate both a DbContext and our entities. We provide the entity T4 with a list of tables from which to generate entities, and have a syntax to indicate that the entity based on one table should inherit from the entity based on another. Custom code goes in partial classes. So our solution looks most like option 1 from above.
Also, we started out generating fluent configuration in OnModelCreating in the DbContext but have switched to using attributes on the entities (where attributes exist - HasPrecision was one that we had to use fluent configuration for). We found that it is more concise and easier to locate the configuration for a property when it is right there decorating that property.

Creating RDBMS DDL from Scala classes

Is there a straightforward way to generate RDBMS DDL for a set of Scala classes?
That is, to derive a table DDL for each class (whereby each case class field would translate to a field of the table, with a corresponding RDBMS type).
Or, to directly create the database objects in the RDBMS.
I have found some documentation about Ebean being embedded in the Play framework, but I am not sure what side effects enabling Ebean in Play may have, and how much taming Ebean would require to avoid them. I have never even used Ebean before...
I would actually rather use something outside of Play, but if it's simple to accomplish in Play I would dearly like to know a clean way. Thanks in advance!
Is there a straightforward way to generate RDBMS DDL for a set of Scala classes?
Yes.
Ebean
Ebean is the default ORM provided by Play (for Java): you just have to create an entity and enable evolutions (enabled by default). It will create a .sql file in the conf/evolutions/default directory, and when you hit localhost:9000 it will show you an apply script. But your tags say you are using Scala, and you can't really use Ebean with Scala. If you do, you will have to "sacrifice the immutability of your Scala class, and use the Java collections API instead of the Scala one. Using Scala this way will just bring more trouble than using Java directly." (Source)
JPA
JPA (using Hibernate as the implementation) is the default way to access and manage an SQL database in a standard Play Java application. It is still possible to use JPA from a Play Scala application, but it is probably not the best way, and it should be considered legacy and deprecated. (Source)
Anorm (if you want to write the DDL yourself)
Anorm is not an object-relational mapper, so you have to write the DDL manually. (Source)
Slick
Functional relational mapping for Scala. (Source) A minimal DDL-generation sketch with Slick appears after this list.
Activate
Activate is a framework to persist objects in Scala. (Source)
Skinny
It is built upon the ScalikeJDBC library, which is a thin but powerful JDBC wrapper. (Details 1, Details 2)
Also check RDBMS with Scala and Best data access option for Play Scala.
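As promised above, a minimal sketch of the Slick route under Slick 3.x; Person, People, and the table name are placeholders:

import slick.driver.PostgresDriver.api._

case class Person(id: Long, name: String)

// Each case class field becomes a column of the mapped table.
class People(tag: Tag) extends Table[Person](tag, "people") {
  def id   = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def *    = (id, name) <> (Person.tupled, Person.unapply)
}

object PrintDdl extends App {
  val people = TableQuery[People]
  // Prints the CREATE statements without touching the database;
  // db.run(people.schema.create) would execute them instead.
  people.schema.createStatements.foreach(println)
}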

How can I use one sequence to auto-increment several tables in Squeryl (PostgreSQL)?

I have the following:
val s1 = autoIncremented("advert_id_seq")
on(car)(attributes => declare(attributes.id is (s1)))
on(danceInstructor)(attributes => declare(attributes.id is (s1)))
When I run my app, I catch the following exception:
org.postgresql.util.PSQLException: ERROR: relation "advert_id_seq" already exists
As far as I can tell, Squeryl tries to create the sequence twice and gets an error.
I'm guessing that your issue is with Schema generation, not with querying the database. If that's the case, then you probably just want to avoid having Squeryl create the tables directly. Squeryl's schema generation is purposefully basic. When you outgrow what it can do I think you're better off adopting some method that gives you greater control than a "read your model and generate stuff" tool can offer. Tools like Flyway or Liquibase are good for this.
If you don't want to adopt a new library, you can also use Squeryl to output the schema to a file through one of the Schema.printDdl methods, then remove the extraneous sequence before executing it.
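A hedged sketch of that workflow, assuming your org.squeryl.Schema subclass is called AdvertSchema (the name is a placeholder):

import java.io.PrintWriter

object DumpDdl extends App {
  // Write Squeryl's generated DDL to a file instead of executing it.
  val out = new PrintWriter("schema.sql")
  try AdvertSchema.printDdl(out) finally out.close()
  // Edit schema.sql so only one CREATE SEQUENCE advert_id_seq remains,
  // then run the script against PostgreSQL yourself.
}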
Ruby on Rails has polymorphic associations, but Scala ActiveRecord doesn't have them. I made something like it and will push it to a GitHub repo.