Slick how to insert row with db-computed fields and get id - scala

I'm trying to write a simple Active Record style DAO in Slick and I'm finding it very hard going.
I'd like to insert a row using the case class representation, set its "updated" column using database time and return the new row values as a case class.
i.e. the DAO should look like this:
class FooDao {
def db: JdbcBackend.Database
def insert(foo: FooRow): Future[FooRow] = {
???
}
}
// This class is auto-generated by slick codegen
case class FooRow(
id: Int,
created: Option[java.sql.Timestamp],
updated: Option[java.sql.Timestamp],
x1: String,
x2: String,
x3: String)
I want the returned FooRow object to have the correct ID and created timestamp that was assigned by the database.
I'll also want an update method that updates a FooRow, using the database clock for the updated column and returning a new FooRow.
Things I have tried:
Using "returning"
I can insert the case class and get the generated id using the returning helper as described at Slick 3.0 Insert and then get Auto Increment Value, i.e.
class FooDao {
def db: JdbcBackend.Database
def insert(foo: FooRow): Future[FooRow] = {
db.run((Foos returning Foos.map(_.id)) += foo)
.map(id => foo.copy(id = id))
}
}
This works but I can't see any way to use CURRENT_TIMESTAMP() in the insert statement to get the updated column filled in. The types just don't allow it.
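One possible workaround (sketched below, untested) would be to keep the type-safe insert for the generated id, then issue a raw UPDATE that uses the database clock, and finally re-read the row so the returned FooRow carries the DB-assigned timestamps. This assumes the profile's api._ import and the codegen Foos table are in scope:
def insert(foo: FooRow): Future[FooRow] = {
  val action = (for {
    // type-safe insert, returning the auto-generated id
    id  <- (Foos returning Foos.map(_.id)) += foo
    // set the timestamp columns with the database clock
    _   <- sqlu"UPDATE foos SET created = current_timestamp(), updated = current_timestamp() WHERE id = $id"
    // read the row back so the returned value has the DB-assigned fields
    row <- Foos.filter(_.id === id).result.head
  } yield row).transactionally
  db.run(action)
}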
Using sqlu
I can write the SQL that I'm aiming for using sqlu, but then I can't see any way to get the generated key out of Slick. The JDBC API allows it via getGeneratedKeys() but I can't see how to access it here:
class FooDao {
def db: JdbcBackend.Database
def insert(foo: FooRow): Future[FooRow] = {
val sql: DBIO[Int] = sqlu"""
INSERT INTO foos
(created,
updated,
x1,
x2,
x3)
VALUES (current_timestamp(),
current_timestamp(),
${foo.x1},
${foo.x2},
${foo.x3})
"""
db.run(sql)
.map(rowCount => ???)
}
}
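Another possible escape hatch (a sketch, untested) is Slick's SimpleDBIO, which drops down to the underlying JDBC connection so getGeneratedKeys becomes reachable; this assumes the profile's api._ import is in scope and that id is the auto-generated key column:
def insert(foo: FooRow): Future[FooRow] = {
  val action = SimpleDBIO[FooRow] { ctx =>
    val ps = ctx.connection.prepareStatement(
      "INSERT INTO foos (created, updated, x1, x2, x3) " +
        "VALUES (current_timestamp(), current_timestamp(), ?, ?, ?)",
      java.sql.Statement.RETURN_GENERATED_KEYS)
    try {
      ps.setString(1, foo.x1)
      ps.setString(2, foo.x2)
      ps.setString(3, foo.x3)
      ps.executeUpdate()
      val keys = ps.getGeneratedKeys
      keys.next()
      // Only the id comes back here; the DB-assigned timestamps would still
      // need a follow-up SELECT.
      foo.copy(id = keys.getInt(1))
    } finally ps.close()
  }
  db.run(action)
}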
Using active-slick
The https://github.com/strongtyped/active-slick project looks close to what I need, but it doesn't appear to support updated/created columns.
Using database default values / triggers
I suppose I could implement the created column using a database default value, but how would I prevent Slick from passing in an explicit value in its insert statement and overruling that?
I suppose I could implement the updated column using a database trigger, but I'd prefer to keep my logic in the Scala layer and keep the database simple.
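If the default-value route were taken, one way (again a sketch, untested) to stop Slick from sending explicit values would be to insert through a projection that omits the generated columns, so the database fills them in; the generated id can still be captured with returning, though reading back the timestamps would need a follow-up select:
// created/updated are omitted from the insert, so the DB default (or trigger) applies
val insertOnlyDataColumns =
  Foos.map(f => (f.x1, f.x2, f.x3))
    .returning(Foos.map(_.id)) += ((foo.x1, foo.x2, foo.x3))

db.run(insertOnlyDataColumns) // Future[Int] containing the generated id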
Using the web-server clock instead of the DB clock
The closest I can get to this at the moment is to use the web-server clock instead of the DB clock:
class FooDao {
def db: JdbcBackend.Database
def insert(foo: FooRow): Future[FooRow] = {
val now = Some(Timestamp.from(Instant.now()))
val toInsert = foo.copy(
created = now,
updated = now)
db.run((Foos returning Foos.map(_.id)) += toInsert)
.map(id => toInsert.copy(id = id))
}
}
However, I'd much prefer to use the database clock for these columns, in case different web servers have different times.
Any ideas would be gratefully received. I'm strongly considering abandoning Slick and using JOOQ, where this would be easy.

Related

Silently dropping a nullable database column with slick

I have a database table for objects called Campaigns, containing three fields:
Id (int, not nullable)
Version (int, not nullable)
Stuff (Text, nullable)
Let's call CampaignsRow the corresponding Slick entity class.
When I select rows from Campaigns, I don't always need to read Stuff, which contains big chunks of text.
However, I'd very much like to keep working with the CampaignsRow class in the codebase rather than a tuple, so I want to be able to sometimes drop the Stuff column from a query while retaining the original result type.
Basically, I'm trying to write the following function :
//Force dropping the Stuff column from the current Query
def smallCampaign(campaigns: Query[Campaigns, CampaignsRow, Seq]): Query[Campaigns, CampaignsRow, Seq] = {
val smallCampaignQuery = campaigns.map {
row => CampaignsRow(row.id, row.version , None : Option[String])
}
smallCampaignQuery /* Fails because the type is now wrong: I have a Query[(Rep[Int], Rep[Int], Rep[Option[String]]), (Int, Int, Option[String]), Seq] */
}
Any idea how to do this? I suspect this has to do with Shape in Slick, but I can't find a resource to start understanding this class, and the Slick source code is proving too complex for me to follow.
You're actually already doing almost what you want in def *, the default mapping. You can use the same tools in the map method. Your two tools are mapTo and <>.
As you've found, there is the mapTo method which you can only use if your case class exactly matches the shape of the tuple, so if you wanted a special case class just for this purpose:
case class CampaignLite(id: Int, version: Int)
val smallCampaignQuery = campaigns.map {
row => (row.id, row.version).mapTo[CampaignLite]
}
As you want to reuse your existing class, you can write your own convert functions instead of using the standard tupled and unapply and pass those to <>:
object CampaignsRow {
def tupleLite(t: (Int, Int)) = CampaignsRow(t._1, t._2, None)
def unapplyLite(c: CampaignsRow) = Some((c.id, c.version))
}
val smallCampaignQuery = campaigns.map {
row => (row.id, row.version) <> (CampaignsRow.tupleLite, CampaignsRow.unapplyLite)
}
This gives you the most flexibility, as you can do whatever you like in your convert functions, but it's a bit more wordy.
As row is an instance of the Campaigns table you could always define it there alongside *, if you need to use it regularly.
class Campaigns ... {
...
def * = (id, version, stuff).mapTo[CampaignsRow]
def liteMapping = (id, version) <> (CampaignsRow.tupleLite, CampaignsRow.unapplyLite)
}
val liteCampaigns = campaigns.map(_.liteMapping)
Reference: Essential Slick 3, section 5.2.1
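A hypothetical usage note (assuming Slick 3): running the lite projection still yields CampaignsRow values, but the generated SELECT only touches the id and version columns.
val action: DBIO[Seq[CampaignsRow]] = liteCampaigns.result
// db.run(action) gives Future[Seq[CampaignsRow]] with stuff = None in every row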
If I understand your requirement correctly, you could consider making CampaignsRow a case class that models your Campaigns table, by having Campaigns extend Table[CampaignsRow] and providing the bidirectional mapping for the * projection:
case class CampaignsRow(id: Int, version: Int, stuff: Option[String])
class Campaigns(tag: Tag) extends Table[CampaignsRow](tag, "CAMPAIGNS") {
// ...
def * = (id, version, stuff) <> (CampaignsRow.tupled, CampaignsRow.unapply)
}
You should then be able to do something like below:
val campaigns = TableQuery[Campaigns]
val smallCampaignQuery = campaigns.map(row =>
(row.id, row.version, slick.lifted.LiteralColumn(Option.empty[String])) <> (CampaignsRow.tupled, CampaignsRow.unapply))
For a relevant example, here's a Slick doc.

Extracting DDL from case classes

I'm experimenting with scalikejdbc (trying to move from Slick), and I'm stuck on creating my schema from the entities (read: case classes).
// example Slick equivalent
case class X(id: Int, ...)
class XTable(tag: Tag) extends Table[X] (tag, "x") {
def id = column[Int]("id")
... //more columns
def * = (id, ...) <> (X.tupled, X.unapply)
}
val xTable = TableQuery[XTable]
db.run(xTable.schema.create) // creates in the DB a table named "x", with "id" and all other columns
It seemed like using SQLSyntaxSupport could be a step in the right direction, with something like
// scalikejdbc
case class X (id: Int, ...)
object X extends SQLSyntaxSupport[X] {
def apply(x: ResultName[X])(rs: WrappedResultSet): X = new X(id = rs.get(x.id), ...)
}
X.table.??? // what to do now?
but could not figure out the next step.
What I'm looking for is the opposite of the reverse-engineering tool described at http://scalikejdbc.org/documentation/reverse-engineering.html
Any help or ideas, in particular pointers to the relevant part of the documentation, will be appreciated.
You can use the statements method to get the SQL code, like for
most other SQL-based Actions. Schema Actions are currently the only
Actions that can produce more than one statement.
schema.create.statements.foreach(println)
schema.drop.statements.foreach(println)
http://slick.typesafe.com/doc/3.0.0/schemas.html
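If the goal is to execute those generated statements through ScalikeJDBC rather than Slick, a rough sketch (untested, and assuming a connection pool has already been initialised with ConnectionPool.singleton) could feed the DDL strings into plain SQL calls:
import scalikejdbc._

DB autoCommit { implicit session =>
  xTable.schema.create.statements.foreach { ddl =>
    SQL(ddl).execute.apply() // run each Slick-generated CREATE statement
  }
}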

Reading/Writing None values as null with ReactiveMongo

We are in the process of migrating an existing REST service from Spring/Java to Spray using ReactiveMongo. One of the requirements for the migration (the first phase of it, anyway) is that all inputs and outputs must match the current system. The issue with this is that the business objects allow null values, both at rest in the datastore and when returned by GET methods on the service. Fields can be missing as input to the service for PUT/POST, but the corresponding values must still be written as null to the datastore and returned the same way.
Normally 'not required' fields aren't an issue for Scala/Spray through the use of Option, but the issue I'm having is actually writing the values of the Option fields as null when persisting, and setting the fields as None when reading the same null from Mongo.
In the research I've been doing, I have not been able to find a way to do this.
Here are snippets of my code:
UserPersistent
case class UserPersistent(id: Option[String], name: Option[String])
PersistentUser
object PersistentUser {
implicit object PersistentUserReader extends BSONDocumentReader[UserPersistent] {
def read(doc: BSONDocument): UserPersistent = UserPersistent(
id = doc.getAs[String]("_id"),
name = doc.getAs[String]("name")
)
}
implicit object PersistentUserWriter extends BSONDocumentWriter[UserPersistent] {
override def write(persisted: UserPersistent): BSONDocument = {
BSONDocument(
"_id" -> persisted.id,
"name" -> persisted.name
)
}
}
}
I have tried the following on the write() side; although the code compiles, it throws a NullPointerException when executed:
"name" -> {
val nnn = persisted.name match {
case Some(n) => n
case _ => null
}
nnn
}
I have used OptionFormat for the 'presentation' of the data, which returns nulls (but for everything), but I need to take care of the Mongo side of this.
Surely there's a way to do this - what am I missing?
Try This:
object PersistentUser {
implicit val reader: BSONDocumentReader[UserPersistent] = Macros.reader[UserPersistent]
implicit val writer: BSONDocumentWriter[UserPersistent] = Macros.writer[UserPersistent]
}
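If explicit nulls are still required on the write side, one option (a sketch, not tested) is to emit BSONNull yourself when the Option is empty; this assumes the BSON API in use accepts a raw BSONValue as a field value via its identity writer. The NullPointerException above presumably comes from handing a bare null to the implicit String writer, which BSONNull avoids, and on the read side getAs[String] should already give None for both missing fields and BSON nulls.
import reactivemongo.bson._

// Map None to an explicit BSON null instead of dropping the field.
def stringOrNull(opt: Option[String]): BSONValue =
  opt.fold[BSONValue](BSONNull)(BSONString(_))

implicit object PersistentUserWriter extends BSONDocumentWriter[UserPersistent] {
  override def write(persisted: UserPersistent): BSONDocument = BSONDocument(
    "_id" -> stringOrNull(persisted.id),
    "name" -> stringOrNull(persisted.name)
  )
}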

SLICK: A simple query using union and parameter lists

I'm new to Slick (2.1) and am lost creating my first query using union. Because the parameters will eventually be provided externally (via a web interface), I made them optional. Please see the comment in the code below. How do I create an appropriate query?
My actual class is more complex which I simplified for the sake of this question.
case class MyStuff(id: Int, value: Int, text: String)
class MyTable (tag: Tag) extends Table[MyStuff](tag, "MYSTUFF"){
def id = column[Int]("ID", O.NotNull)
def value = column[Int]("VALUE", O.NotNull)
def text = column[String]("TEXT", O.NotNull)
def * =
(id,
value,
text).shaped <> ((MyStuff.apply _).tupled, MyStuff.unapply)
}
object myTable extends TableQuery(new MyTable(_)){
def getStuff(ids: Option[List[Int]], values: Option[List[Int]])(implicit session: Session): Option[List[MyStuff]] = {
/*
1) If 'ids' are given, retrieve all matching entries, if any.
2) If 'values' are given, retrieve all matching entries (if any), union with the results of the previous step, and remove duplicate entries.
3) If neither 'ids' nor 'values' are given, retrieve all entries.
*/
}
}
getStuff is called like this:
db.withSession { implicit session => val myStuff = myTable.getStuff(...) } // where db is a Database
You can use inSet when the parameters are Some, falling back to a literal false, and only apply the filtering at all when at least one parameter is defined:
if(ids.isDefined || values.isDefined)
myTable.filter(row =>
ids.map(row.id inSet _).getOrElse(slick.lifted.LiteralColumn(false))
) union myTable.filter(row =>
values.map(row.value inSet _).getOrElse(slick.lifted.LiteralColumn(false))
)
else myTable
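For completeness, a rough sketch (untested) of how that expression might be wrapped up inside getStuff; for simplicity it returns a plain List rather than the Option[List[MyStuff]] in the original signature, and it uses Slick 2.1's .list invoker to run the query with the implicit session (note that in Slick 2.1 LiteralColumn actually lives under scala.slick.lifted):
def getStuff(ids: Option[List[Int]], values: Option[List[Int]])(implicit session: Session): List[MyStuff] = {
  val query =
    if (ids.isDefined || values.isDefined)
      myTable.filter(row => ids.map(row.id inSet _).getOrElse(slick.lifted.LiteralColumn(false))) union
        myTable.filter(row => values.map(row.value inSet _).getOrElse(slick.lifted.LiteralColumn(false)))
    else myTable
  query.list // union already removes duplicate rows
}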
If I understand you correctly you want to build a filter at runtime from the given input. You can look at the extended docs for 3.0 (http://slick.typesafe.com/doc/3.0.0-RC1/queries.html#sorting-and-filtering) at "building criteria using a "dynamic filter" e.g. from a webform". This part of the docs is also valid for version 2.1.
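A rough sketch (untested) of that "dynamic filter" pattern adapted to MyTable, again in 2.1-style syntax and with the same LiteralColumn package caveat as above: the optional criteria are ORed together, and a literal true keeps every row when neither parameter is given.
val filtered = myTable.filter { row =>
  List(
    ids.map(row.id inSet _),
    values.map(row.value inSet _)
  ).collect { case Some(criterion) => criterion }
    .reduceLeftOption(_ || _)
    .getOrElse(slick.lifted.LiteralColumn(true)) // no criteria given: keep everything
}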

Squeryl session management with 'using'

I'm learning Squeryl and trying to understand the 'using' syntax but can't find documentation on it.
In the following example two databases are created: A contains the word Hello, and B contains Goodbye. The intention is to query the contents of A, then append the word World and write the result to B.
Expected console output is Inserted Message(2,HelloWorld)
object Test {
def main(args: Array[String]) {
Class.forName("org.h2.Driver");
import Library._
val sessionA = Session.create(DriverManager.getConnection(
"jdbc:h2:file:data/dbA","sa","password"),new H2Adapter)
val sessionB = Session.create(DriverManager.getConnection(
"jdbc:h2:file:data/dbB","sa","password"),new H2Adapter)
using(sessionA){
drop; create
myTable.insert(Message(0,"Hello"))
}
using(sessionB){
drop; create
myTable.insert(Message(0,"Goodbye"))
}
using(sessionA){
val results = from(myTable)(s => select(s))//.toList
using(sessionB){
results.foreach(m => {
val newMsg = m.copy(msg = (m.msg+"World"))
myTable.insert(newMsg)
println("Inserted "+newMsg)
})
}
}
}
case class Message(val id: Long, val msg: String) extends KeyedEntity[Long]
object Library extends Schema { val myTable = table[Message] }
}
As it stands, the code prints Inserted Message(2,GoodbyeWorld), unless toList is uncommented at the end of the val results line.
Is there some way to bind the results query to use sessionA even when evaluated inside the using(sessionB)? This seems preferable to using toList to force the query to evaluate and store the contents in memory.
Update
Thanks to Dave Whittaker's answer, the following snippet fixes it without resorting to 'toList' and corrects my understanding of both 'using' and the running of queries.
val results = from(myTable)(s => select(s))
using(sessionA){
results.foreach(m => {
val newMsg = m.copy(msg = (m.msg+"World"))
using(sessionB){myTable.insert(newMsg)}
println("Inserted "+newMsg)
})
}
First off, I apologize for the lack of documentation. The using() construct is a new feature that is only available in SNAPSHOT builds. I actually talked to Max about some of the documentation issues for early adopters yesterday and we are working to fix them.
There isn't a way that I can think of to bind a specific Session to a Query. Looking at your example, it looks like an easy work around would be to invert your transactions. When you create a query, Squeryl doesn't actually access the DB, it just creates an AST representing the SQL to be performed, so you don't need to issue your using(sessionA) at that point. Then, when you are ready to iterate over the results you can wrap the query invocation in a using(sessionA) nested within your using(sessionB). Does that make sense?