Where to initialize a database in Play framework 2.4 with Slick 3? - playframework-2.4

Slick 3.0.2 doesn't automatically create database tables when they don't exist, so you have to do something like:
val setup = DBIO.seq(
  (table1.schema ++ table2.schema).create
  //...
)
db.run(setup)
Where do you put this code in Play 2.4?
In an eager binding?
https://www.playframework.com/documentation/2.4.x/ScalaDependencyInjection#Eager-bindings

From the point of view of the Play Framework developers, you should use evolutions to define your schema.
https://www.playframework.com/documentation/2.4.x/Evolutions
https://www.playframework.com/documentation/2.4.x/PlaySlick
Of course, that might be a bit boring and repetitive, as you also define your model in Slick.
If you want to run some code on startup an eager binding is the way to go.
If you have problems with eager bindings please, let us know.
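A minimal sketch of such an eager binding, assuming the play-slick module is on the classpath; the `Users` table and the `SchemaSetup`/`OnStartupModule` names are illustrative, not part of any Play API:

```scala
import javax.inject.{Inject, Singleton}
import scala.concurrent.Await
import scala.concurrent.duration._

import com.google.inject.AbstractModule
import play.api.db.slick.DatabaseConfigProvider
import slick.driver.JdbcProfile

// Instantiated eagerly on startup, so the schema exists before requests arrive.
@Singleton
class SchemaSetup @Inject()(dbConfigProvider: DatabaseConfigProvider) {
  private val dbConfig = dbConfigProvider.get[JdbcProfile]
  import dbConfig.driver.api._

  // A hypothetical table, standing in for your real ones.
  class Users(tag: Tag) extends Table[(Long, String)](tag, "USERS") {
    def id   = column[Long]("ID", O.PrimaryKey, O.AutoInc)
    def name = column[String]("NAME")
    def *    = (id, name)
  }
  val users = TableQuery[Users]

  // Block briefly so the table exists before the first request is served.
  Await.result(dbConfig.db.run(users.schema.create), 10.seconds)
}

// Register the eager binding in a Guice module enabled via application.conf.
class OnStartupModule extends AbstractModule {
  override def configure(): Unit =
    bind(classOf[SchemaSetup]).asEagerSingleton()
}
```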

Related

Play Framework 2.5.x Scala Slick implementation style

I have kind of philosophical question.
I have been a very happy user of Play Framework for Java for a couple of years now. Now I am trying to dive into Scala and functional programming. In Java-based Play I have been using Ebean, so following the Play documentation I extended Ebean's Model class and implemented my own models. In each model I declared a static variable of type Finder in order to run queries. All of this is documented and working well.
However, in Scala-based Play (v2.5.x) there is not much documentation about the persistence layer. OK, I understand Play Slick is recommended, as it follows the ideas of functional programming. I am excited about that, but there is almost no documentation on how to use it. I found out how to enable Slick, how to configure the data source and db server, and how to inject the db into a controller. There is also a very small example of running a simple query against the db.
The question is: How to actually use Slick? I researched some third party tutorials and blogs and it seems there are multiple ways.
1) How do I define models? It seems that I should use case classes to define the model itself, and then define a class extending Table to define the columns and their properties?
2) What is the project structure? Should I create a new Scala file for each model? By Java conventions I should, but I have sometimes seen all models in one Scala file (as in Python's Django). I assume separate files are better.
3) Should I create DAOs for manipulating models, or should I create something like a Service layer? The code would probably be much the same; what I am asking about is the structure of the project.
Thank you in advance for any ideas.
I had the same questions about Slick and came up with a solution that works for me. Have a look at this example project:
https://github.com/nemoo/play-slick3-example
Most other example projects are too basic, so I created this project with a broader scope, similar to what I have found in real-life Play code. I tried out various approaches, including services. In the end I found the additional layer hard to work with, because I never knew where to put the code. You can see the thought process in the past commits :)
Let me quote from the readme: Repositories handle interactions with domain aggregates. All public methods are exposed as Futures. Internally, in some cases we need to compose various queries into one block that is carried out within a single transaction. In this case, the individual queries return DBIO query objects. A single public method runs those queries and exposes a Future to the client.
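The pattern described in that readme can be sketched roughly as follows; the `Accounts` table and the method names are made up for illustration, assuming plain Slick 3 with an H2 profile:

```scala
import slick.driver.H2Driver.api._
import scala.concurrent.{ExecutionContext, Future}

// A hypothetical domain table; names are illustrative only.
class Accounts(tag: Tag) extends Table[(Long, BigDecimal)](tag, "ACCOUNTS") {
  def id      = column[Long]("ID", O.PrimaryKey)
  def balance = column[BigDecimal]("BALANCE")
  def *       = (id, balance)
}

class AccountRepository(db: Database)(implicit ec: ExecutionContext) {
  val accounts = TableQuery[Accounts]

  // Internal building blocks return composable DBIO actions...
  private def debit(id: Long, amount: BigDecimal): DBIO[Int] =
    accounts.filter(_.id === id).map(_.balance).result.head.flatMap { b =>
      accounts.filter(_.id === id).map(_.balance).update(b - amount)
    }

  private def credit(id: Long, amount: BigDecimal): DBIO[Int] =
    accounts.filter(_.id === id).map(_.balance).result.head.flatMap { b =>
      accounts.filter(_.id === id).map(_.balance).update(b + amount)
    }

  // ...while the public method composes them into a single transaction
  // and exposes only a Future to the client.
  def transfer(from: Long, to: Long, amount: BigDecimal): Future[Unit] =
    db.run((debit(from, amount) >> credit(to, amount)).transactionally)
      .map(_ => ())
}
```

The point of keeping `debit` and `credit` private as `DBIO` values is that they can be combined with `.transactionally` before being run; once a method returns a `Future`, it can no longer participate in a caller's transaction.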
I can wholeheartedly recommend the Getting Started part of the Slick documentation.
There is also a Typesafe Activator template for Slick, Hello Slick, which you can find here and then explore and continue from there.
To get started with Slick and Play you would need to add the dependency in your build.sbt file:
"com.typesafe.play" %% "play-slick" % "2.0.0"
You will also want evolutions (which I recommend):
"com.typesafe.play" %% "play-slick-evolutions" % "2.0.0"
And of course the driver for the database:
"com.h2database" % "h2" % "${H2_VERSION}" // replace `${H2_VERSION}` with an actual version number
Then you would have to specify the configuration for your database:
slick.dbs.default.driver="slick.driver.H2Driver$"
slick.dbs.default.db.driver="org.h2.Driver"
slick.dbs.default.db.url="jdbc:h2:mem:play"
If you want to have a nice overview of all this and more you should definitely take a look at THE BEST STARTING POINT - a complete project with Models, DAOs, Controllers, adapted to Play 2.5.x
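With the dependencies and configuration above in place, wiring the database into a controller could look something like this; the `Users` table and `UsersController` are hypothetical names, assuming the play-slick 2.0.x API:

```scala
import javax.inject.Inject

import play.api.db.slick.DatabaseConfigProvider
import play.api.mvc.{Action, Controller}
import slick.driver.JdbcProfile

import scala.concurrent.ExecutionContext.Implicits.global

class UsersController @Inject()(dbConfigProvider: DatabaseConfigProvider) extends Controller {
  // Resolves the "default" database configured in application.conf.
  private val dbConfig = dbConfigProvider.get[JdbcProfile]
  import dbConfig.driver.api._

  // A hypothetical table, for illustration only.
  class Users(tag: Tag) extends Table[(Long, String)](tag, "USERS") {
    def id   = column[Long]("ID", O.PrimaryKey, O.AutoInc)
    def name = column[String]("NAME")
    def *    = (id, name)
  }
  val users = TableQuery[Users]

  // db.run returns a Future, which Action.async turns into a response.
  def count = Action.async {
    dbConfig.db.run(users.length.result).map(n => Ok(n.toString))
  }
}
```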

How should I handle database evolutions when using Play and Slick? Must I manually write SQL?

I'm looking at the "Hello Slick" tutorial. A users table is defined and then created using users.schema.create (the code on GitHub is outdated, so there it's users.ddl.create, but when I create the app in Activator it's schema because it's using Slick 3.0.0, which is close enough). However, if I run the app a second time there's an error because the table already exists. I see no option like users.schema.createIfNotExists, which is a bit surprising. But in any case I would need something more sophisticated if I added a column to the table sometime in the future. So does Slick have no way of helping with migrations/evolutions?
I'm using Play, and supposedly Play Slick has special support for database evolutions. It's not clear what it offers, beyond the usual Play evolutions, that is specific to Slick: we're just told to add a dependency. I can't find any further documentation.
Do I have to manually write the SQL for evolutions (1.sql, Ups, Downs, etc.)? If so it seems pretty silly to have to write code like column[Int]("ID", O.PrimaryKey, O.AutoInc) in addition. I'm bothered by the duplication of effort and worried that if my SQL/DDL is wrong subtle bugs will appear when I access the database. I may be wrong but I seem to remember that migrations can be automatically generated after changing a model in Django so it doesn't seem like an unsolvable problem. Is this just not something that's been implemented or am I missing something?
I'm using PostgreSQL if that's relevant.
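For reference, a hand-written evolutions file (conf/evolutions/default/1.sql) has the Ups/Downs shape the question mentions; the table here is hypothetical:

```sql
# --- !Ups

create table "users" (
  "id"   bigserial primary key,
  "name" varchar not null
);

# --- !Downs

drop table "users";
```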
You could use Slick's schema code generation feature:
http://slick.typesafe.com/doc/3.0.0/code-generation.html
This way, if you update the db schema, you don't have to hand-write the Slick classes to correspond with the tables; the generator does it for you.
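Invoking the generator can be as simple as calling its main method with the profile, JDBC driver, URL, output folder, and package; the URL and package below are placeholders:

```scala
// Generates a Tables.scala file under the given package by introspecting
// the live database schema (Slick 3.x code generator).
object GenerateTables extends App {
  slick.codegen.SourceCodeGenerator.main(Array(
    "slick.driver.PostgresDriver",          // Slick profile
    "org.postgresql.Driver",                // JDBC driver class
    "jdbc:postgresql://localhost/mydb",     // database URL (placeholder)
    "app",                                  // output source folder
    "models"                                // package for the generated code
  ))
}
```

This is typically run as a one-off sbt task after applying an evolution, so the generated Slick classes and the SQL schema never drift apart.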

Creating rdbms DDL from scala classes

Is there a straightforward way to generate RDBMS DDL for a set of Scala classes?
I.e. to derive a table DDL for each class (whereby each case class field would translate to a field of the table, with a corresponding RDBMS type).
Or, alternatively, to create the database objects in the RDBMS directly.
I have found some documentation about Ebean being embedded in the Play framework, but I was not sure what side effects enabling Ebean in Play might have, and how much taming Ebean would require to avoid them. I have never even used Ebean before...
I would actually rather use something outside of Play, but if it's simple to accomplish in Play I would dearly like to know a clean way. Thanks in advance!
Is there a straightforward way to generate rdbms ddl, for a set of scala classes?
Yes
Ebean
Ebean is the default ORM provided by Play. You just have to create an entity and enable evolutions (which is enabled by default). It will create a .sql file in the conf/evolutions/default directory, and when you hit localhost:9000 it will show you an apply script. But your tag says you are using Scala, so you can't really use Ebean with Scala. If you do, you will have to sacrifice the immutability of your Scala classes and use the Java collections API instead of the Scala one.
Using Scala this way will just bring more trouble than using Java directly.
Source
JPA
JPA (using Hibernate as the implementation) is the default way to access and manage an SQL database in a standard Play Java application. It is still possible to use JPA from a Play Scala application, but it is probably not the best way, and it should be considered legacy and deprecated. Source
Anorm (if you want to write DDL yourself)
Anorm is not an object-relational mapper, so you have to write DDL manually. Source
Slick
Functional relational mapping for Scala. Source
Activate
Activate is a framework to persist objects in Scala. Source
Skinny
It is built upon the ScalikeJDBC library, which is a thin but powerful JDBC wrapper. Details1, Details2
Also check RDBMS with Scala, Best data access option for Play Scala
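With Slick specifically, deriving the DDL from case-class-backed tables is built in: every table's schema can emit its CREATE statements without touching a database. A minimal sketch (the `User` model is made up for illustration):

```scala
import slick.driver.H2Driver.api._

// A minimal case-class-backed table; names are illustrative.
case class User(id: Long, name: String)

class Users(tag: Tag) extends Table[User](tag, "USERS") {
  def id   = column[Long]("ID", O.PrimaryKey, O.AutoInc)
  def name = column[String]("NAME")
  def *    = (id, name) <> (User.tupled, User.unapply)
}

object PrintDdl extends App {
  val users = TableQuery[Users]
  // createStatements yields the CREATE TABLE DDL as strings,
  // without connecting to any database.
  users.schema.createStatements.foreach(println)
}
```

Running `db.run(users.schema.create)` instead would create the objects in the RDBMS directly, which covers the second half of the question.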

Play: Exclude certain tables from being managed by the evolutions?

I have a Scala Play app over a MySQL instance. I store my evolutions as conf/evolutions/$db/$step.sql files. However, some of my tables are dynamic i.e. their schema may be modified during the runtime of the Play app. What is the best way to exclude these tables from Play's evolutions framework?
I have a couple of options, and none of them looks especially elegant:
1) Move all the offending tables to a separate database where the evolutions plugin is disabled - This is not that great since I have to move all related tables that have foreign key constraints out of the current database too.
2) Somehow override Play's evolutions framework - Unfortunately, Play's evolutions framework is neither modular nor extensible. I was hoping it would have some Scala or Java hooks like def onUp(tableName: String) and def onDown(tableName: String) that I could override, but it seems to have no such abstractions and is quite monolithic.
3) I know Play records its state in a table called play_evolutions - I could modify that table from my app in onStart to manually remove everything related to the offending tables. That would work, but it is very hackish and depends hard on Play's internal representation/handling of schema changes.
4) Simply move all SQL statements for the offending tables to conf/evolutions/$db/ignore_evolution_$step.sql - This way these tables are out of the watchful eye of the evolutions framework, but I essentially have to roll my own framework to parse these files and execute them.
5) Anything else I missed?

Entity Framework equivalence for NHibernte SchemaExport

Is there an equivalent in Entity Framework to NHibernate's SchemaExport?
Given I have a working Entity-Model, I would like to programmatically initialize a database.
I would like to use this functionality in the setup of my integration tests.
Creating the matching DDL for an Entity-Model would also suffice.
Yes, given that you're working with Entity Framework 4 (which is, confusingly enough, the second version...).
Edit: This is the way to do it with just EF4. In my original post below is described how to accomplish the same thing with the Code-Only approach in EF CTP3.
How to: Export model to database in EF4
To export a model to database, right-click anywhere in the designer (where you don't have an entity) and choose "Generate database from model..." and follow the steps described in the wizard. Voila!
Original post, targeting EF4 CTP3 and Code-Only: This is code I use in a little setup utility.
var builder = new ContextBuilder<ObjectContext>();

// Register all configurations you need here
builder.Configurations.Add(new EntryConfiguration());
builder.Configurations.Add(new TagConfiguration());

var conn = GetUnOpenedSqlConnection();
var db = builder.Create(conn);

// Recreate the database from scratch
if (db.DatabaseExists())
{
    db.DeleteDatabase();
}
db.CreateDatabase();
It works on my machine (although here I've simplified a little bit for brevity...), so if something does not work it's because I over-simplified.
Note that, as TomTom stated, you will only get the basics. But it's pretty useful even if you have a more complicated schema - you only have to manually write DDL to add the complicated stuff onto the generated DB schema.
Nope, and seriously, I do wonder why NHibernate bothers having this.
The problem is that an O/R mapper has LESS information about the database than is needed for non-trivial setups.
Missing are:
Indices, fully configured
Information about server side constraints, triggers (yes, there may be some)
Information about object distribution over elements like table spaces
Information about permissions
I really love a test method (please check that the database is good enough for all the objects you know about), but generation is VERY tricky - been there, done that. You need some serious additional annotations in the ORM to even be able to generate sensible indices.