Generic update with mapping in Slick - scala

I'm writing a CRUD app using Slick, and I want my update queries to update only a specific set of columns; I use .map().update() for that.
I have a function in my table definition that returns a tuple of the updatable columns (def writableFields), and a function that extracts the tuple of values to write from a case class.
It works fine, but it's annoying to create a repo and write the whole update function for every table. I want to create a generic form of this function and make my tables and their companion objects extend some trait, but I cannot come up with the correct type definitions.
Slick expects the output of map() to be compatible with the argument of update(), and I don't know how to write a generic type for the tuples involved.
Is it even possible to accomplish? Or is there an alternative way to limit code duplication? Ideally I'd like to avoid writing repos at all and just instantiate a generic class or call a generic method.
object ProjectsRepo extends BaseRepository[Projects, Project] {
  protected val query = lifted.TableQuery[Projects]

  def update(id: Long, c: Project): Future[Option[Project]] = {
    val q = filterByIdQuery(id).map(_.writableFields)
      .update(Projects.mapFormToTable(c))
    (db run q).flatMap(affected =>
      if (affected > 0) findOneById(id)
      else Future.successful(None)
    )
  }
}
class Projects(tag: Tag) extends Table[Project](tag, "projects") with IdentifiableTable[Long] {
  val id         = column[Long]("id", O.PrimaryKey, O.AutoInc)
  val title      = column[String]("title")
  val slug       = column[String]("slug")
  val created_at = column[Timestamp]("created_at")
  val updated_at = column[Timestamp]("updated_at")

  def writableFields = (title, slug)

  def readableFields = (id, created_at, updated_at)

  def allFields = writableFields ++ readableFields // tuple concatenation via shapeless

  def * = allFields <> (Projects.mapFromTable, (_: Project) => None)
}
object Projects {
  // FormFields is the tuple type matching writableFields, here (String, String)
  def mapFormToTable(c: Project): FormFields = (c.title, c.slug)
}
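For what it's worth, a rough sketch of one possible generic shape (assuming Slick 3.x with slick.driver.PostgresDriver; the IdentifiableTable trait and db handle mirror the code above and are otherwise assumptions, not a verified solution). The idea is to make the lifted fields and their value type parameters of a generic update method, and let Slick's Shape implicit tie .map and .update together:
import scala.concurrent.Future
import slick.driver.PostgresDriver.api._
import slick.lifted.{FlatShapeLevel, Shape}

// Assumed: every table exposes its primary key through this trait.
trait IdentifiableTable[I] { this: Table[_] =>
  def id: Rep[I]
}

abstract class BaseRepository[T <: Table[M] with IdentifiableTable[Long], M](db: Database) {
  protected val query: TableQuery[T]

  // F = the lifted fields, e.g. (Rep[String], Rep[String])
  // U = their value type, e.g. (String, String); G is inferred.
  // The Shape implicit is what makes map() and update() agree on the types.
  def updateFields[F, G, U](rowId: Long)(fields: T => F)(values: U)(
      implicit shape: Shape[_ <: FlatShapeLevel, F, U, G]): Future[Int] =
    db.run(query.filter(_.id === rowId).map(fields).update(values))
}

// Usage sketch:
// projectsRepo.updateFields(42L)(t => (t.title, t.slug))(("new title", "new-slug"))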

Related

Aggregate root implementation with slick

I am trying to implement a simple aggregate root in Slick, but I don't really know the best way to do it.
Here are my domain objects:
case class Project(id: UUID,
                   name: String,
                   state: ProjectState,
                   description: String,
                   team: String,
                   tags: Set[String])
I would like to store the "tags" in a separate table and build up the "Project" objects from "projects_table" and "project_tags_table".
Here is my table definition:
class ProjectTable(tag: Tag) extends Table[ProjectTableRecord](tag, Some("octopus_service"), "projects") {
  def id: Rep[UUID] = column[UUID]("id", O.PrimaryKey)
  def name: Rep[String] = column[String]("name")
  def state: Rep[ProjectState] = column[ProjectState]("state")
  def description: Rep[String] = column[String]("description")
  def team: Rep[String] = column[String]("team")
  // created and lastModified are used in * below but their definitions were
  // missing here; column names and types are assumed
  def created: Rep[Timestamp] = column[Timestamp]("created")
  def lastModified: Rep[Timestamp] = column[Timestamp]("last_modified")

  override def * : ProvenShape[ProjectTableRecord] =
    (id, name, state, description, team, created, lastModified) <> (
      (ProjectTableRecord.apply _).tupled, ProjectTableRecord.unapply
    )
}
class ProjectTagTable(tag: Tag) extends Table[ProjectTag](tag, Some("octopus_service"), "project_tags") {
  def projectID: Rep[UUID] = column[UUID]("project_id")
  def name: Rep[String] = column[String]("name")

  def project = foreignKey("PROJECT_FK", projectID, TableQuery[ProjectTable])(
    _.id, onUpdate = ForeignKeyAction.Restrict, onDelete = ForeignKeyAction.Cascade)

  override def * : ProvenShape[ProjectTag] = (projectID, name) <> (
    ProjectTag.tupled, ProjectTag.unapply
  )
}
How can I generate "Project" objects from joining these 2 tables?
Thanks in advance :)
I think there is a misconception about the level of responsibility. Slick gives you access to a relational database (to some extent the same way SQL does). It's basically a DAO layer.
An aggregate root really lives a level above that (it's a domain concept, not a database-level one, although the two often overlap to a large extent).
So basically you need a level above the Slick tables that performs the different queries and aggregates the results into a single object.
Before we start though - you should create and store your TableQuery objects somewhere, perhaps like this:
lazy val ProjectTable = TableQuery[ProjectTable]
lazy val ProjectTagTable = TableQuery[ProjectTagTable]
You could put them somewhere near your Table definitions.
So first, as I mentioned, your aggregate root Project needs to be pulled up by something; let's call it ProjectRepository.
Let's say it will have a method def load(id: UUID): Future[Project].
This method would perhaps look like this:
class ProjectRepository {
  def load(id: UUID): Future[Project] = {
    db.run(
      for {
        project <- ProjectTable.filter(_.id === id).result.head
        tags    <- ProjectTagTable.filter(_.projectID === id).result
      } yield {
        Project(
          id = project.id,
          name = project.name,
          state = project.state,
          description = project.description,
          team = project.team,
          tags = tags.map(_.name).toSet // Project.tags is a Set[String]
        )
      }
    )
  }
  // another example - if you wanted to extract multiple projects
  // (in reality you would probably apply some paging here)
  def findAll(): Future[Seq[Project]] = {
    db.run(
      ProjectTable
        .join(ProjectTagTable).on(_.id === _.projectID)
        .result
        .map {
          _.groupBy(_._1)
            .map { case (project, grouped) =>
              Project(
                id = project.id,
                name = project.name,
                state = project.state,
                description = project.description,
                team = project.team,
                tags = grouped.map(_._2.name).toSet
              )
            }
            .toSeq // groupBy + map yields an Iterable, the method returns a Seq
        }
    )
  }
}
Digression:
If you wanted to have paging in the findAll method, you would need to do something like this:
ProjectTable
  .drop(pageNumber * pageSize)
  .take(pageSize)
  .join(ProjectTagTable).on(_.id === _.projectID)
  .result
The above produces a sub-query, but that is the typical way to do paging over multiple joined relations (without the sub-query you would page over the whole joined result set, which is usually not what you want!).
Coming back to the main part:
Obviously it would all be easier if you defined your Project as:
case class Project(project: ProjectRecord, tags: Seq[ProjectTag])
then your yield would simply be:
yield {
Project(project, tags)
}
but that's definitely a matter of taste (it actually makes sense to do it as you did - to hide the internal record layout).
Basically there are potentially multiple things that could be improved here. I am not really an expert on DDD, but at least from the Slick perspective the first change that should be made is to the method:
def load(id: UUID): Future[Project]
to
def load(id: UUID): DBIO[Project]
and perform the db.run(...) operation at some higher level. The reason is that in Slick, as soon as you call db.run (thus converting the DBIO into a Future), you lose the ability to compose multiple operations within a single transaction. Therefore a common pattern is to push DBIO values pretty high up the application layers, basically up to whatever business level defines the transactional boundaries.
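A sketch of what that enables (auditRepository and its recordAccess method are hypothetical names; load is the DBIO-returning variant just described):
// Two repository calls composed into one atomic action; the single
// db.run at the boundary converts the DBIO into a Future.
def loadWithAudit(id: UUID): Future[Project] = {
  val action = for {
    project <- projectRepository.load(id)       // DBIO[Project]
    _       <- auditRepository.recordAccess(id) // hypothetical DBIO[Int]
  } yield project
  db.run(action.transactionally)
}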

How to return full row using Slick's insertOrUpdate

I am currently learning Play 2, Scala and Slick 3.1, and am pretty stuck on the syntax for insertOrUpdate; I wonder if anyone can please help me.
What I want is for insertOrUpdate to return the full row, including the auto-inc primary key, but I have only managed to return the number of updated/inserted rows.
Here is my table definition:
package models

final case class Report(session_id: Option[Long], session_name: String, tester_name: String,
                        date: String, jira_ref: String, duration: String, environment: String,
                        notes: Option[String])

trait ReportDBTableDefinitions {
  import slick.driver.PostgresDriver.api._

  class Reports(tag: Tag) extends Table[Report](tag, "REPORTS") {
    def session_id   = column[Long]("SESSION_ID", O.PrimaryKey, O.AutoInc)
    def session_name = column[String]("SESSION_NAME")
    def tester_name  = column[String]("TESTER_NAME")
    def date         = column[String]("DATE")
    def jira_ref     = column[String]("JIRA_REF")
    def duration     = column[String]("DURATION")
    def environment  = column[String]("ENVIRONMENT")
    def notes        = column[Option[String]]("NOTES")
    def * = (session_id.?, session_name, tester_name, date, jira_ref, duration, environment, notes) <> (Report.tupled, Report.unapply)
  }

  lazy val reportsTable = TableQuery[Reports]
}
Here is the section of my DAO that relates to insertOrUpdate; it works just fine, but only returns the number of updated/inserted rows:
package models

import com.google.inject.Inject
import play.api.db.slick.DatabaseConfigProvider
import scala.concurrent.Future

class ReportsDAO @Inject()(protected val dbConfigProvider: DatabaseConfigProvider) extends DAOSlick {
  import driver.api._

  def save_report(report: Report): Future[Int] = {
    dbConfig.db.run(reportsTable.insertOrUpdate(report).transactionally)
  }
}
I have tried playing with returning, but I can't get the syntax right and keep getting type mismatches, e.g. the below doesn't compile (because it's probably completely wrong!):
def save_report(report: Report): Future[Report] = {
  dbConfig.db.run(reportsTable.returning(reportsTable).insertOrUpdate(report))
}
Any help appreciated - I'm new to Scala and Slick, so apologies if I'm missing something really obvious.
Solved - posting it in case it helps anyone else trying to do something similar:
// will return the new session_id on insert, and None on update
def save_report(report: Report): Future[Option[Long]] = {
  val insertQuery = (reportsTable returning reportsTable.map(_.session_id)).insertOrUpdate(report)
  dbConfig.db.run(insertQuery)
}
Works well - it seems insertOrUpdate doesn't return anything on update, so if I need the updated data after an update operation I can run a subsequent query to fetch it by session id.
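Building on that, the follow-up lookup can also be composed into the same action instead of being run as a second round trip. A sketch against the table definitions above (the -1L is just a sentinel for the no-id case; save_and_fetch is a hypothetical name):
def save_and_fetch(report: Report): Future[Option[Report]] = {
  val action = for {
    // Some(id) on insert, None on update
    maybeId <- reportsTable returning reportsTable.map(_.session_id) insertOrUpdate report
    row <- reportsTable
      .filter(_.session_id === maybeId.orElse(report.session_id).getOrElse(-1L))
      .result.headOption
  } yield row
  dbConfig.db.run(action.transactionally)
}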
You cannot return the whole Report directly; first return the id (returning(reportsTable.map(_.session_id))) and then fetch the whole object with it.
Check if the report exists in the database: if it exists, update it; if not, go ahead and insert the report.
Note: do the above operations in an all-or-none fashion by using transactions.
def getReportDBIO(id: Long): DBIO[Report] =
  reportsTable.filter(_.session_id === id).result.head

def save_report(report: Report): Future[Report] = {
  // session_id is an Option, so fall back to a sentinel when it is empty
  val reportId = report.session_id.getOrElse(-1L)
  val query = reportsTable.filter(_.session_id === reportId)
  val existsAction = query.exists.result
  val insertOrUpdateAction = (for {
    exists <- existsAction
    result <- if (exists) {
      query.update(report).flatMap(_ => getReportDBIO(reportId))
    } else {
      val insertAction = reportsTable.returning(reportsTable.map(_.session_id)) += report
      insertAction.flatMap(id => getReportDBIO(id))
    }
  } yield result).transactionally // transactionally is important: all or none
  dbConfig.db.run(insertOrUpdateAction)
}
Update your insertOrUpdate function accordingly
You can return the full row, but it is an Option: as the documentation states, it will be None on an update and a Some(...) representing the inserted row on an insert.
So the correct code would be:
def save_report(report: Report): Future[Option[Report]] = {
  dbConfig.db.run(reportsTable.returning(reportsTable).insertOrUpdate(report))
}
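Calling code can then distinguish the two cases by pattern matching, e.g.:
save_report(report).map {
  case Some(inserted) => inserted // insert: the full row, including the generated key
  case None           => report   // update: run a separate query if the stored row is needed
}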

case class filter criteria in Slick 3.0.3

I have a model, a corresponding table and a repository. In my repository, using TableQuery, I want to find model objects based on some criteria (a function from model to Boolean) over which the repository has no control; it is injected as a parameter. E.g.
case class JournalEntryModel(id: Option[Long] = None, isDebit: Boolean, principal: Double)

class JournalEntryTable(tag: Tag) extends Table[JournalEntryModel](tag, "journal_entries") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def isDebit = column[Boolean]("is_debit")
  def principal = column[Double]("principal")
  def * = (id.?, isDebit, principal) <> (JournalEntryModel.tupled, JournalEntryModel.unapply)
}

object journalEntries extends TableQuery(new JournalEntryTable(_))

object Repository {
  def find(criteria: JournalEntryModel => Boolean): List[JournalEntryModel] = db.run {
    journalEntries.filter(je => criteria(je)).result
  } toList
}

val credits = Repository.find(!_.isDebit && _.principal > 500.0)
How do I write such a filter function?
I think your question is "How do I write a function literal for JournalEntryModel => Boolean?"
If you want to do it with a literal like you have, you need to name the parameter:
// A parameter list ("model" here) is required: each underscore in your version
// would introduce a separate parameter. The argument type is inferred from "find".
val credits = Repository.find(model => !model.isDebit && model.principal > 500.0)
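One caveat worth adding (an aside, not part of the original answer): with criteria typed as JournalEntryModel => Boolean, Slick cannot translate the predicate to SQL, because filter works on the lifted row type (JournalEntryTable), not on the model. If the intent is to filter in the database, the injected criteria would need the lifted type, along the lines of:
// Sketch: the criteria now operates on the lifted table row, so Slick
// can compile it into a SQL WHERE clause.
def find(criteria: JournalEntryTable => Rep[Boolean]): Future[Seq[JournalEntryModel]] =
  db.run(journalEntries.filter(criteria).result)

val credits = find(je => !je.isDebit && je.principal > 500.0)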

Scala Reflection to update a case class val

I'm using Scala and Slick here, and I have a BaseRepository which is responsible for the basic CRUD of my classes.
As a design decision, we have the updatedTime and createdTime columns handled entirely by the application, not by triggers in the database. Both of these fields are Joda DateTime instances.
Those fields are defined in two traits called HasUpdatedAt and HasCreatedAt, for the tables:
trait HasCreatedAt {
  val createdAt: Option[DateTime]
}

case class User(name: String, createdAt: Option[DateTime] = None) extends HasCreatedAt
I would like to know how I can use reflection to call the user's copy method, to update the createdAt value during the database insert.
Edit after #vptron and #kevin-wright comments
I have a repo like this:
trait BaseRepo[ID, R] {
  def insert(r: R)(implicit session: Session): ID
}
I want to implement the insert just once, and have createdAt updated there; that's why I'm not calling the copy method directly, otherwise I would need to implement it everywhere I use the createdAt column.
This question was answered here to help others with this kind of problem.
I ended up using this code to execute the copy method of my case classes using Scala reflection:
import reflect._
import scala.reflect.runtime.universe._
import scala.reflect.runtime._

// marker for members that are not accessors and must be skipped
class Empty

val mirror = universe.runtimeMirror(getClass.getClassLoader)

// paramName is the parameter whose value I want to replace
// paramValue is the new parameter value
def updateParam[R: ClassTag](r: R, paramName: String, paramValue: Any): R = {
  val instanceMirror = mirror.reflect(r)
  val decl = instanceMirror.symbol.asType.toType
  val members = decl.members
    .map(method => transformMethod(method, paramName, paramValue, instanceMirror))
    .filter {
      case _: Empty => false
      case _        => true
    }
    .toArray
    .reverse // members come back in reverse declaration order
  val copyMethod = decl.declaration(newTermName("copy")).asMethod
  val copyMethodInstance = instanceMirror.reflectMethod(copyMethod)
  copyMethodInstance(members: _*).asInstanceOf[R]
}

def transformMethod(method: Symbol, paramName: String, paramValue: Any, instanceMirror: InstanceMirror) = {
  val term = method.asTerm
  if (term.isAccessor) {
    if (term.name.toString == paramName) paramValue
    else instanceMirror.reflectField(term).get
  } else new Empty
}
With this I can execute the copy method of my case classes, replacing a given field's value.
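For example (a usage sketch, reusing the User class from the question):
val user = User("john")
// builds a new User with createdAt replaced, via the reflected copy method
val stamped = updateParam(user, "createdAt", Some(DateTime.now()))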
As the comments have said, don't change a val using reflection. Would you do that with a Java final variable? It makes your code do really unexpected things. If you need to change the value of a val, don't use a val, use a var.
trait HasCreatedAt {
  var createdAt: Option[DateTime] = None
}

case class User(name: String) extends HasCreatedAt
Although having a var in a case class may bring some unexpected behavior, e.g. copy would not work as expected (the var is not a constructor parameter, so it is not carried over). This may lead to preferring not to use a case class for this.
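A quick demonstration of that pitfall:
val u = User("bob")
u.createdAt = Some(DateTime.now())
val u2 = u.copy()
// u2.createdAt is None again: the var lives outside the constructor
// parameter list, so copy() does not carry it over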
Another approach would be to make the insert method return an updated copy of the case class, e.g.:
trait HasCreatedAt[R <: HasCreatedAt[R]] {
  val createdAt: Option[DateTime]
  // the F-bounded type parameter lets withCreatedAt return the concrete
  // subclass type (a this.type result would not typecheck against copy)
  def withCreatedAt(dt: DateTime): R
}

case class User(name: String, createdAt: Option[DateTime] = None) extends HasCreatedAt[User] {
  def withCreatedAt(dt: DateTime) = this.copy(createdAt = Some(dt))
}

trait BaseRepo[ID, R <: HasCreatedAt[R]] {
  def insert(r: R)(implicit session: Session): (ID, R) = {
    val id = ??? // insert into db
    (id, r.withCreatedAt(??? /* now */))
  }
}
EDIT:
Since I didn't answer your original question and you may know what you are doing, I am adding a way to do this:
import scala.reflect.runtime.universe._

val user = User("aaa", None)
val m = runtimeMirror(getClass.getClassLoader)
val im = m.reflect(user)
val decl = im.symbol.asType.toType.declaration(newTermName("createdAt")).asTerm
val fm = im.reflectField(decl)
fm.set(??? /* now */)
But again, please don't do this. Read this Stack Overflow answer to get some insight into what it can cause (vals map to final fields).

Using Auto Incrementing fields with PostgreSQL and Slick

How does one insert records into PostgreSQL using AutoInc keys with Slick mapped tables? If I use an Option for the id in my case class and set it to None, then PostgreSQL complains on insert that the field cannot be null. This works for H2, but not for PostgreSQL:
//import scala.slick.driver.H2Driver.simple._
//import scala.slick.driver.BasicProfile.SimpleQL.Table
import scala.slick.driver.PostgresDriver.simple._
import Database.threadLocalSession

object TestMappedTable extends App {
  case class User(id: Option[Int], first: String, last: String)

  object Users extends Table[User]("users") {
    def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
    def first = column[String]("first")
    def last = column[String]("last")
    def * = id.? ~ first ~ last <> (User, User.unapply _)
    def ins1 = first ~ last returning id
    val findByID = createFinderBy(_.id)
    def autoInc = id.? ~ first ~ last <> (User, User.unapply _) returning id
  }

  // implicit val session = Database.forURL("jdbc:h2:mem:test1", driver = "org.h2.Driver").createSession()
  implicit val session = Database.forURL("jdbc:postgresql:test:slicktest",
    driver = "org.postgresql.Driver",
    user = "postgres",
    password = "xxx")

  session.withTransaction {
    Users.ddl.create

    // insert data
    print(Users.insert(User(None, "Jack", "Green")))
    print(Users.insert(User(None, "Joe", "Blue")))
    print(Users.insert(User(None, "John", "Purple")))
    val u = Users.insert(User(None, "Jim", "Yellow"))
    // println(u.id.get)
    print(Users.autoInc.insert(User(None, "Johnathan", "Seagul")))
  }

  session.withTransaction {
    val queryUsers = for {
      user <- Users
    } yield (user.id, user.first)
    println(queryUsers.list)

    Users.where(_.id between (1, 2)).foreach(println)
    println("ID 3 -> " + Users.findByID.first(3))
  }
}
Using the above with H2 succeeds, but if I comment that out and switch to PostgreSQL, then I get:
[error] (run-main) org.postgresql.util.PSQLException: ERROR: null value in column "id" violates not-null constraint
org.postgresql.util.PSQLException: ERROR: null value in column "id" violates not-null constraint
This is working here:
object Application extends Table[(Long, String)]("application") {
  def idlApplication = column[Long]("idlapplication", O.PrimaryKey, O.AutoInc)
  def appName = column[String]("appname")
  def * = idlApplication ~ appName
  def autoInc = appName returning idlApplication
}

var id = Application.autoInc.insert("App1")
This is how my SQL looks:
CREATE TABLE application
(idlapplication BIGSERIAL PRIMARY KEY,
appName VARCHAR(500));
Update:
The specific problem with regard to a mapped table with User (as in the question) can be solved as follows:
def forInsert = first ~ last <>
  ({ (f, l) => User(None, f, l) }, { u: User => Some((u.first, u.last)) })
This is from the test cases in the Slick git repository.
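Usage would then look something like this (an untested sketch in the same Slick 1.x syntax):
// forInsert omits the id column, so the database fills it from the sequence
val newId: Int = (Users.forInsert returning Users.id).insert(User(None, "Jack", "Green"))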
I tackled this problem in a different way. Since I expect my User objects to always have an id in my application logic, and the only point where one would not have it is during insertion to the database, I use an auxiliary NewUser case class which doesn't have an id.
case class User(id: Int, first: String, last: String)
case class NewUser(first: String, last: String)

object Users extends Table[User]("users") {
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def first = column[String]("first")
  def last = column[String]("last")
  def * = id ~ first ~ last <> (User, User.unapply _)
  def autoInc = first ~ last <> (NewUser, NewUser.unapply _) returning id
}

val id = Users.autoInc.insert(NewUser("John", "Doe"))
Again, User maps 1:1 to the database entry/row while NewUser could be replaced by a tuple if you wanted to avoid having the extra case class, since it is only used as a data container for the insert invocation.
EDIT:
If you want more safety (with somewhat increased verbosity), you can make use of a trait for the case classes, like so:
trait UserT {
  def first: String
  def last: String
}

case class User(id: Int, first: String, last: String) extends UserT
case class NewUser(first: String, last: String) extends UserT
// ... the rest remains intact
In this case you would apply your model changes to the trait first (including any mixins you might need), and optionally add default values to the NewUser.
Author's opinion: I still prefer the no-trait solution, as it is more compact; changes to the model are a matter of copy-pasting the User params and then removing the id (the auto-inc primary key), both in the case class declaration and in the table projections.
We're using a slightly different approach. Instead of creating a further projection, we request the next id for the table, copy it into the case class, and use the default projection * for inserting the table entry.
For Postgres it looks like this:
Let your table objects implement this trait:
trait TableWithId { this: Table[_] =>
  /** can be overridden if the plural of the table name is irregular */
  val idColName: String = s"${tableName.dropRight(1)}_id"
  def id = column[Int](idColName, O.PrimaryKey, O.AutoInc)
  def getNextId = (Q[Int] + s"""select nextval('"${tableName}_${idColName}_seq"')""").first
}
All your entity case classes need a method like this (it should also be defined in a trait):
case class Entity(...) {
  def withId(newId: Id): Entity = this.copy(id = Some(newId))
}
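Picking up the "should also be defined in a trait" remark, one hedged way to spell that trait (Account is just an illustrative entity):
// each entity implements withId in one line; generic code can then
// require E <: HasWithId[E] and share the save logic
trait HasWithId[E] { def withId(newId: Int): E }

case class Account(id: Option[Int], owner: String) extends HasWithId[Account] {
  def withId(newId: Int): Account = copy(id = Some(newId))
}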
New entities can now be inserted this way:
object Entities extends Table[Entity]("entities") with TableWithId {
  override val idColName: String = "entity_id"
  ...
  def save(entity: Entity) = this insert entity.withId(getNextId)
}
The code is still not DRY, because you need to define the withId method for each entity. Furthermore, you have to request the next id before you insert an entity, which might have a performance impact, but it shouldn't be noticeable unless you insert thousands of entries at a time.
The main advantage is that there is no need for a second projection, which makes the code less error-prone, in particular for tables with many columns.
The simplest solution was to use the SERIAL type like this:
def id = column[Long]("id", SqlType("SERIAL"), O.PrimaryKey, O.AutoInc)
Here's a more concrete block:
// A case class to be used as table map
case class CaseTable(id: Long = 0L, dataType: String, strBlob: String)

// Class for our Table
class MyTable(tag: Tag) extends Table[CaseTable](tag, "mytable") {
  // Define the columns
  def dataType = column[String]("datatype")
  def strBlob = column[String]("strblob")

  // Auto-increment the id primary key column
  def id = column[Long]("id", SqlType("SERIAL"), O.PrimaryKey, O.AutoInc)

  // the * projection (e.g. select * ...) auto-transforms the tupled column values
  def * = (id, dataType, strBlob) <> (CaseTable.tupled, CaseTable.unapply _)
}
// Insert and get the auto-incremented primary key
def insertData(dataType: String, strBlob: String, id: Long = 0L): Long = {
  // DB connection
  val db = Database.forURL(jdbcUrl, pgUser, pgPassword, driver = driverClass)
  // Query handle for our table
  val myTable = TableQuery[MyTable]
  try {
    // Form the query
    val query = myTable returning myTable.map(_.id) += CaseTable(id, dataType, strBlob)
    // Execute it, wait for the result, and return the generated id
    Await.result(db.run(query), maxWaitMins)
  } catch {
    case e: Exception =>
      logger.error("Error in inserting using Slick: ", e.getMessage)
      e.printStackTrace()
      -1L
  }
}
I faced the same problem when trying to run the computer-database sample from play-slick-3.0 after I changed the db to Postgres. What solved the problem was changing the id column (primary key) type to SERIAL in the evolution file /conf/evolutions/default/1.sql (originally it was BIGINT). Take a look at https://groups.google.com/forum/?fromgroups=#%21topic/scalaquery/OEOF8HNzn2U for the whole discussion.
Cheers,
ReneX
Another trick is making the id of the case class a var:
case class Entity(var id: Long)
To insert an instance, create it like this:
Entity(null.asInstanceOf[Long])
I've tested that it works.
The solution I've found is to use SqlType("SERIAL") in the column definition. I haven't tested it extensively yet, but it seems to work so far.
So instead of
def id: Rep[PK[SomeTable]] = column[PK[SomeTable]]("id", O.PrimaryKey, O.AutoInc)
You should do:
def id: Rep[PK[SomeTable]] = column[PK[SomeTable]]("id", SqlType("SERIAL"), O.PrimaryKey, O.AutoInc)
Where PK is defined like the example in the "Essential Slick" book:
final case class PK[A](value: Long = 0L) extends AnyVal with MappedTo[Long]
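For completeness, a hypothetical row type keyed by that wrapper might look like:
// MappedTo[Long] lets Slick store PK[SomeTable] as a plain numeric column
final case class SomeRow(id: PK[SomeTable] = PK(0L), name: String)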