I am using Slick in the Play Framework. I need to know how to write the following in Slick:
nvl(max(s.version), 0)
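For reference, here is a sketch of how that aggregate could look in Slick's lifted embedding, assuming a hypothetical Snapshots table with a version column (table, column, and value names are invented for illustration): max yields an optional value, and getOrElse supplies the default, playing the role of SQL's NVL/COALESCE.

```scala
import slick.driver.H2Driver.api._ // substitute your database's driver

// Hypothetical table, for illustration only.
class Snapshots(tag: Tag) extends Table[(Long, Int)](tag, "SNAPSHOT") {
  def id      = column[Long]("ID", O.PrimaryKey)
  def version = column[Int]("VERSION")
  def *       = (id, version)
}
val snapshots = TableQuery[Snapshots]

// nvl(max(s.version), 0): `max` gives a Rep[Option[Int]],
// and `getOrElse` supplies the default when the table is empty.
val maxVersion = snapshots.map(_.version).max.getOrElse(0)

// db.run(maxVersion.result) then yields a Future[Int].
```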
From the JIRA ticket Hide UserDefinedType in Spark 2.0, it seems that Spark hid the UDT API from version 2.0 onwards.
Is there an alternative function or API we can use in version 2.2 to define a UserDefinedType? I want to use a custom type in a DataFrame or in Structured Streaming.
There is no alternative API, and UDT remains private (https://issues.apache.org/jira/browse/SPARK-7768).
Generic encoders (org.apache.spark.sql.Encoders.kryo and org.apache.spark.sql.Encoders.javaSerialization) serve a similar purpose for Datasets, but they are not a direct replacement:
How to store custom objects in Dataset?
Questions about the future of UDTs and Encoders
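Since the UDT API is private, a generic encoder is the usual workaround for Datasets. Below is a minimal sketch with a made-up GeoPoint type (the class and all names are illustrative, not from the question). Note the trade-off: the whole object is serialized into a single opaque binary column, so you lose column pruning and the ability to query individual fields in SQL.

```scala
import org.apache.spark.sql.{Encoder, Encoders, SparkSession}

// Hypothetical custom type with no built-in encoder.
case class GeoPoint(lat: Double, lon: Double)

object KryoEncoderExample extends App {
  val spark = SparkSession.builder()
    .master("local[*]")
    .appName("udt-workaround")
    .getOrCreate()

  // Generic binary encoder: the object is stored in one
  // opaque "value" column of type binary.
  implicit val geoEncoder: Encoder[GeoPoint] = Encoders.kryo[GeoPoint]

  val ds = spark.createDataset(Seq(GeoPoint(52.52, 13.40), GeoPoint(48.86, 2.35)))
  ds.printSchema() // a single binary column, not lat/lon fields
}
```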
I have a somewhat philosophical question.
I have been a very happy user of the Play Framework for Java for a couple of years now. Now I am trying to dive into Scala and functional programming. In Java-based Play I used Ebean, so following the Play documentation I extended the Ebean Model class and implemented my own models. In each model I declared a static variable of type Finder in order to run queries. All of this is documented and works well.
However, in Scala-based Play (v2.5.x) there is not much documentation about the persistence layer. OK, I understood that Play Slick is the recommendation, as it embraces the ideas of functional programming. I am quite excited about that, but there is almost no documentation on how to use it. I found out how to enable Slick, how to configure the data source and database server, and how to inject the database into a controller. There is also a very small example of running a simple query.
The question is: how do I actually use Slick? I researched some third-party tutorials and blogs, and it seems there are multiple ways.
1) How do I define models? It seems that I should use a case class for the model itself, and then define a class extending Table to describe the columns and their properties?
2) What is the project structure? Should I create a new Scala file for each model? By Java conventions I should, but I have sometimes seen all models in one Scala file (as in Python's Django). I assume separate files are better.
3) Should I create DAOs for manipulating models? Or should I create something like a service layer? The code would probably be much the same; what I am asking about is the structure of the project.
Thank you in advance for any ideas.
I had the same questions about Slick and came up with a solution that works for me. Have a look at this example project:
https://github.com/nemoo/play-slick3-example
Most other example projects are too basic, so I created this project with a broader scope, similar to what I have found in real-life Play code. I tried out various approaches, including services. In the end I found the additional layer hard to work with, because I never knew where to put the code. You can see the thought process in the past commits :)
Let me quote from the README: "Repositories handle interactions with domain aggregates. All public methods are exposed as Futures. Internally, in some cases we need to compose various queries into one block that is carried out within a single transaction. In those cases, the individual queries return DBIO query objects. A single public method runs those queries and exposes a Future to the client."
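That pattern might be sketched like this (all names here are hypothetical; the point is that the private methods return composable DBIO actions, while the single public method runs them in one transaction and exposes a Future):

```scala
import scala.concurrent.{ExecutionContext, Future}
import slick.driver.H2Driver.api._ // substitute your database's driver

class AccountRepository(db: Database)(implicit ec: ExecutionContext) {

  // Internal building blocks return DBIO actions, not Futures...
  private def debit(account: Long, amount: BigDecimal): DBIO[Int] =
    sqlu"update ACCOUNT set BALANCE = BALANCE - $amount where ID = $account"

  private def credit(account: Long, amount: BigDecimal): DBIO[Int] =
    sqlu"update ACCOUNT set BALANCE = BALANCE + $amount where ID = $account"

  // ...so the public method can compose them into a single transaction
  // and expose only a Future to its callers.
  def transfer(from: Long, to: Long, amount: BigDecimal): Future[Unit] =
    db.run((debit(from, amount) andThen credit(to, amount)).transactionally)
      .map(_ => ())
}
```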
I can wholeheartedly recommend the Getting Started section of the Slick documentation.
There is also a Typesafe Activator template for Slick, Hello Slick, which you can find here; explore it and continue from there.
To get started with Slick and Play you would need to add the dependency in your build.sbt file:
"com.typesafe.play" %% "play-slick" % "2.0.0"
Also add evolutions (which I recommend):
"com.typesafe.play" %% "play-slick-evolutions" % "2.0.0"
And, of course, the driver for your database:
"com.h2database" % "h2" % "${H2_VERSION}" // replace `${H2_VERSION}` with an actual version number
Then you would have to specify the configuration for your database:
slick.dbs.default.driver="slick.driver.H2Driver$"
slick.dbs.default.db.driver="org.h2.Driver"
slick.dbs.default.db.url="jdbc:h2:mem:play"
If you want to have a nice overview of all this and more you should definitely take a look at THE BEST STARTING POINT - a complete project with Models, DAOs, Controllers, adapted to Play 2.5.x
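To make questions 1) and 3) concrete, here is a rough sketch of one common layout for play-slick 2.0.0 on Play 2.5: one file per model, containing a plain case class, a Table mapping, and an injectable DAO. All names here (Cat, CatDao, table and column names) are invented for illustration.

```scala
import javax.inject.Inject
import scala.concurrent.Future
import play.api.db.slick.{DatabaseConfigProvider, HasDatabaseConfigProvider}
import slick.driver.JdbcProfile

// The model itself is a plain case class...
case class Cat(id: Long, name: String)

// ...and a Table describes how it maps to columns.
trait CatTable { self: HasDatabaseConfigProvider[JdbcProfile] =>
  import driver.api._

  class Cats(tag: Tag) extends Table[Cat](tag, "CAT") {
    def id   = column[Long]("ID", O.PrimaryKey, O.AutoInc)
    def name = column[String]("NAME")
    def *    = (id, name) <> (Cat.tupled, Cat.unapply)
  }
  lazy val cats = TableQuery[Cats]
}

// A DAO per model exposes Futures to the controllers.
class CatDao @Inject() (protected val dbConfigProvider: DatabaseConfigProvider)
    extends HasDatabaseConfigProvider[JdbcProfile] with CatTable {
  import driver.api._

  def all(): Future[Seq[Cat]]       = db.run(cats.result)
  def insert(cat: Cat): Future[Int] = db.run(cats += cat)
}
```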
Slick 3.0.2 doesn't automatically create database tables when they don't exist, so you have to run something like:
val setup = DBIO.seq(
  (table1.schema ++ table2.schema).create
  // ...
)
db.run(setup)
Where do you put this code in Play 2.4?
In an eager binding?
https://www.playframework.com/documentation/2.4.x/ScalaDependencyInjection#Eager-bindings
From the point of view of the Play Framework developers, you should use evolutions to define your schema.
https://www.playframework.com/documentation/2.4.x/Evolutions
https://www.playframework.com/documentation/2.4.x/PlaySlick
Of course, that might be a bit boring and repetitive, as you also define your model in Slick.
If you want to run some code on startup, an eager binding is the way to go.
If you have problems with eager bindings, please let us know.
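A sketch of how the eager-binding approach could look in Play 2.4 with Guice, reusing table1/table2 from the snippet above (the class names and the "mydb" config key are made up; the blocking Await is simply the easiest way to make sure the schema exists before requests are served):

```scala
import javax.inject.{Inject, Singleton}
import scala.concurrent.Await
import scala.concurrent.duration._
import com.google.inject.AbstractModule
import slick.driver.H2Driver.api._ // substitute your database's driver

// Runs the schema setup as a side effect of being instantiated.
@Singleton
class SchemaSetup @Inject() () {
  private val db = Database.forConfig("mydb") // illustrative config key
  private val setup = DBIO.seq(
    (table1.schema ++ table2.schema).create
  )
  Await.result(db.run(setup), 30.seconds)
}

// Enabled via play.modules.enabled in application.conf; the eager
// singleton is then created on application startup.
class OnStartupModule extends AbstractModule {
  override def configure(): Unit =
    bind(classOf[SchemaSetup]).asEagerSingleton()
}
```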
Is there a straightforward way to generate RDBMS DDL for a set of Scala classes?
That is, to derive a table DDL for each class (where each case class field would translate to a field of the table with a corresponding RDBMS type), or to directly create the database objects in the RDBMS.
I have found some documentation about Ebean being embedded in the Play Framework, but I was not sure what side effects enabling Ebean in Play might have, and how much taming Ebean would require to avoid them. I have never even used Ebean before...
I would actually rather use something outside of Play, but if it's simple to accomplish in Play I would dearly like to know a clean way. Thanks in advance!
Is there a straightforward way to generate rdbms ddl, for a set of scala classes?
Yes
Ebean
Ebean is the default ORM provided by Play. You just have to create an entity and enable evolutions (which is enabled by default). It will create a .sql file in the conf/evolutions/default directory, and when you hit localhost:9000 it will show you an "apply script" page. But your tags say you are using Scala, and you can't really use Ebean with Scala. If you do, you will have to
sacrifice the immutability of your Scala classes, and use the Java
collections API instead of the Scala one.
Using Scala this way will just bring more trouble than using Java directly.
Source
JPA
JPA (using Hibernate as the implementation) is the default way to access and manage an SQL database in a standard Play Java application. It is still possible to use JPA from a Play Scala application, but it is probably not the best way, and it should be considered legacy and deprecated. Source
Anorm (if you want to write the DDL yourself)
Anorm is not an Object Relational Mapper, so you have to write the DDL manually. Source
Slick
Functional-relational mapping for Scala. Source
Activate
Activate is a framework for persisting objects in Scala. Source
Skinny
It is built upon the ScalikeJDBC library, which is a thin but powerful JDBC wrapper. Details 1, Details 2
Also check RDBMS with Scala and Best data access option for Play Scala.
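Regarding the original DDL question: with Slick you can obtain the generated DDL directly from a table's schema, without touching a database. A sketch with an invented User class and table (all names are illustrative):

```scala
import slick.driver.PostgresDriver.api._ // substitute your database's driver

// Hypothetical case class and its mapped table.
case class User(id: Long, email: String)

class Users(tag: Tag) extends Table[User](tag, "users") {
  def id    = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def email = column[String]("email")
  def *     = (id, email) <> (User.tupled, User.unapply)
}
val users = TableQuery[Users]

// Print the generated DDL without connecting to a database...
users.schema.createStatements.foreach(println)

// ...or create the objects directly in the RDBMS:
// db.run(users.schema.create)
```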
I'm new to Scala and Slick and was surprised by something in the Slick documentation:
The following primitive types are supported out of the box for
JDBC-based databases in JdbcProfile
...
Unit
...
I don't get why this list contains Unit. From my understanding, Unit is similar to Java's void: something I can neither save to nor read from my database. What is the intention behind it?
edit: you can find it here.
One way to look at Slick is as a way of running Scala code with your database as the execution engine. We are working on supporting more Scala code over time. An expression that contains a Unit value, e.g. in a tuple, is a valid Scala expression and should therefore be runnable by Slick unless there is a good reason why not. So we support Unit.
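For illustration, an untested sketch of what that means in practice (the users query is hypothetical and assumed defined elsewhere): a projection that contains a Unit literal is still a valid Slick expression.

```scala
import slick.driver.H2Driver.api._
import slick.lifted.LiteralColumn

// Assuming `users` is some TableQuery defined elsewhere:
// a tuple projection containing a Unit literal still compiles and runs.
val q = users.map(u => (u.id, LiteralColumn(())))
// q.result would yield a Seq[(Long, Unit)]
```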