Map Slick oneToMany results to tree format - scala

I have written a simple Play! 2 REST with Slick application. I have following domain model:
case class Company(id: Option[Long], name: String)
case class Department(id: Option[Long], name: String, companyId: Long)

class Companies(tag: Tag) extends Table[Company](tag, "COMPANY") {
  def id = column[Long]("ID", O.AutoInc, O.PrimaryKey)
  def name = column[String]("NAME")
  def * = (id.?, name) <> (Company.tupled, Company.unapply)
}
val companies = TableQuery[Companies]

class Departments(tag: Tag) extends Table[Department](tag, "DEPARTMENT") {
  def id = column[Long]("ID", O.AutoInc, O.PrimaryKey)
  def name = column[String]("NAME")
  def companyId = column[Long]("COMPANY_ID")
  def company = foreignKey("FK_DEPARTMENT_COMPANY", companyId, companies)(_.id)
  def * = (id.?, name, companyId) <> (Department.tupled, Department.unapply)
}
val departments = TableQuery[Departments]
and here's my method to query all companies with all related departments:
override def findAll: Future[List[(Company, Department)]] = {
  db.run((companies join departments on (_.id === _.companyId)).to[List].result)
}
Unfortunately, I want to display the data in a tree JSON format, so I will have to build a query that gets all companies with their departments and somehow map them to a CompanyDTO, something like this:
case class CompanyDTO(id: Option[Long], name: String, departments: List[Department])
Do you know what the best solution for this is? Should I map the List[(Company, Department)] with JSON formatters, or should I change my query to use CompanyDTO? If so, how can I map the results to CompanyDTO?

To my knowledge, a one-to-many relationship can't be fetched as a nested structure in a single query in an RDBMS. The best you can do while avoiding the N+1 problem is doing it in two queries. Here's how it would go in your case:
for {
  comps <- companies.result
  // build one departments query per company and union them;
  // note that reduceLeft assumes there is at least one company
  deps <- comps
    .map(c => departments.filter(_.companyId === c.id.get))
    .reduceLeft((carry, item) => carry unionAll item)
    .result
  grouped = deps.groupBy(_.companyId)
} yield comps.map { c =>
  val companyDeps = grouped.getOrElse(c.id.get, Seq()).toList
  CompanyDTO(c.id, c.name, companyDeps)
}
There are some fixed parts in this query that you'll come to recognize over time. That makes it a good candidate for abstraction: something you can reuse to fetch one-to-many relationships in Slick in general.
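Alternatively, if you keep the existing findAll that already returns List[(Company, Department)], the tree shape can be built in plain Scala with a groupBy after the query runs. A minimal sketch (note the caveat that the inner join drops companies that have no departments):

```scala
case class Company(id: Option[Long], name: String)
case class Department(id: Option[Long], name: String, companyId: Long)
case class CompanyDTO(id: Option[Long], name: String, departments: List[Department])

// Group the flat join rows by company and collect each company's departments.
def toTree(rows: List[(Company, Department)]): List[CompanyDTO] =
  rows.groupBy(_._1).map { case (company, pairs) =>
    CompanyDTO(company.id, company.name, pairs.map(_._2))
  }.toList
```

This keeps the JSON formatters trivial: they only ever see the already-nested CompanyDTO.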

Future and Option in for comprehension in slick

I'm pretty new to Slick and I'm now facing the issue of how to retrieve some data from two tables.
I have one table
class ExecutionTable(tag: Tag) extends Table[ExecTuple](tag, "execution") {
  val id: Rep[String] = column[String]("id")
  val executionDefinitionId: Rep[Long] = column[Long]("executionDefinitionId")
  // other fields are omitted
  def * = ???
}
and another table
class ServiceStatusTable(tag: Tag)
    extends Table[(String, Option[String])](tag, "serviceStatus") {
  def serviceId: Rep[String] = column[String]("serviceId")
  def detail: Rep[String] = column[String]("detail")
  def * = (serviceId, detail.?)
}
In my DAO I convert the data from these two tables into a business object
case class ServiceStatus(
  id: String,
  detail: Option[String] = None
  // other fields
)
like this
private lazy val getServiceStatusCompiled = Compiled {
  (id: Rep[String], tenantId: Rep[String]) =>
    for {
      exec <- getExecutionById(id, tenantId)
      status <- serviceStatuses if exec.id === status.serviceId
    } yield mapToServiceStatus(exec, status)
}
and later
def getServiceStatus(id: String, tenantId: String): Future[Option[ServiceStatus]] =
  db.run(getServiceStatusCompiled(id, tenantId).result.transactionally)
    .map(_.headOption)
The problem is that not every entry in the execution table has a corresponding entry in the serviceStatus table. I cannot modify the execution table and add a detail field to it, as that field is service-specific.
When I run the query in the case where a matching serviceStatus entry exists for the execution entry, everything works as expected. But if there is no entry in serviceStatus, Future[None] is returned.
Question: Is there any way to obtain the status in the for comprehension as an Option, depending on whether a matching entry exists in serviceStatus, or some other workaround?
Usually, when the join condition does not find a corresponding record in the "right" table but the result should still contain the row from the "left" table, a left join is used.
In your case you can do something like:
Execution
.filter(...execution table filter...)
.joinLeft(ServiceStatus).on(_.id===_.serviceId)
This gives you pair of
(Execution, Rep[Option[ServiceStatus]])
and after query execution:
(Execution, Option[ServiceStatus])
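A sketch of how that might be wired up end to end, assuming table queries named executions and serviceStatuses matching the question's table definitions (the names and the field mapping are illustrative, not from the original code):

```scala
// Sketch: left-join so executions without a serviceStatus row still come back,
// then map into the business object. statusOpt is None when no row matched.
def getServiceStatus(id: String): Future[Option[ServiceStatus]] =
  db.run(
    executions
      .filter(_.id === id)
      .joinLeft(serviceStatuses).on(_.id === _.serviceId)
      .result
      .headOption
  ).map(_.map { case (exec, statusOpt) =>
    ServiceStatus(
      id = exec.id,
      // the status table's detail column is Option[String] in the projection,
      // so flatten the two Option layers into one
      detail = statusOpt.flatMap(_._2)
    )
  })
```

The key point is that the Option now appears in the yielded row itself, rather than the whole result collapsing to empty.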

Aggregate root implementation with slick

I am trying to implement a simple aggregation root in slick.
But I don't really know what is the best way to do that.
Here are my domain objects:
case class Project(id: UUID,
                   name: String,
                   state: ProjectState,
                   description: String,
                   team: String,
                   tags: Set[String])
I would like to store the "tags" in a separate table and build up the "Project" objects from "projects_table" and "project_tags_table"
Here is my table definition:
class ProjectTable(tag: Tag) extends Table[ProjectTableRecord](tag, Some("octopus_service"), "projects") {
  def id: Rep[UUID] = column[UUID]("id", O.PrimaryKey)
  def name: Rep[String] = column[String]("name")
  def state: Rep[ProjectState] = column[ProjectState]("state")
  def description: Rep[String] = column[String]("description")
  def team: Rep[String] = column[String]("team")
  override def * : ProvenShape[ProjectTableRecord] = (id, name, state, description, team, created, lastModified) <> (
    (ProjectTableRecord.apply _).tupled, ProjectTableRecord.unapply
  )
}

class ProjectTagTable(tag: Tag) extends Table[ProjectTag](tag, Some("octopus_service"), "project_tags") {
  def projectID: Rep[UUID] = column[UUID]("project_id")
  def name: Rep[String] = column[String]("name")
  def project = foreignKey("PROJECT_FK", projectID, TableQuery[ProjectTable])(_.id, onUpdate = ForeignKeyAction.Restrict, onDelete = ForeignKeyAction.Cascade)
  override def * : ProvenShape[ProjectTag] = (projectID, name) <> (
    ProjectTag.tupled, ProjectTag.unapply
  )
}
How can I generate "Project" objects from joining these 2 tables?
Thanks in advance :)
I think there is a misconception about the level of responsibility. Slick allows you to access a relational database (to some extent in the same way SQL does). It's basically a DAO layer.
An aggregate root is really a level above this (it's a domain-level thing, not a db-level thing, although the two often coincide to a large extent).
So basically you need a level above the Slick tables that performs the different queries and aggregates the results into a single entity.
Before we start though, you should create your TableQuery objects and store them somewhere, perhaps like this:
lazy val ProjectTable = TableQuery[ProjectTable]
lazy val ProjectTagTable = TableQuery[ProjectTagTable]
You could probably put them somewhere near your Table definitions.
So first, as I mentioned, your aggregate root Project needs to be pulled by something. Let's call it ProjectRepository.
Let's say it will have a method def load(id: UUID): Future[Project].
This method would perhaps look like this:
class ProjectRepository {
  def load(id: UUID): Future[Project] = {
    db.run(
      for {
        // .head: filtering by primary key yields a single row
        project <- ProjectTable.filter(_.id === id).result.head
        tags <- ProjectTagTable.filter(_.projectID === id).result
      } yield {
        Project(
          id = project.id,
          name = project.name,
          state = project.state,
          description = project.description,
          team = project.team,
          tags = tags.map(_.name).toSet
        )
      }
    )
  }

  // another example - if you wanted to extract multiple projects
  // (in reality you would probably apply some paging here)
  def findAll(): Future[Seq[Project]] = {
    db.run(
      ProjectTable
        .join(ProjectTagTable).on(_.id === _.projectID)
        .result
        .map {
          _.groupBy(_._1)
            .map { case (project, grouped) =>
              Project(
                id = project.id,
                name = project.name,
                state = project.state,
                description = project.description,
                team = project.team,
                tags = grouped.map(_._2.name).toSet
              )
            }
            .toSeq
        }
    )
  }
}
Digression:
If you wanted to have paging in the findAll method, you would need to do something like this:
ProjectTable
  .drop(pageNumber * pageSize)
  .take(pageSize)
  .join(ProjectTagTable).on(_.id === _.projectID)
  .result
The above produces a sub-query, but that is the typical way to do paging with multiple joined relations (without the sub-query you would page over the whole result set, which is usually not what you want!).
Coming back to main part:
Obviously it would all be easier if you defined your Project as:
case class Project(project: ProjectRecord, tags: Seq[ProjectTag])
then your yield would be simply:
yield {
Project(project, tags)
}
but that's definitely a matter of taste (it actually makes sense to do it as you did, to hide the internal record layout).
Basically there are potentially multiple things that could be improved here. I am not really an expert on DDD, but at least from the Slick perspective the first change that should be made is to the method signature:
def load(id: UUID): Future[Project]
to
def load(id: UUID): DBIO[Project]
and to perform the db.run(...) operation at some higher level. The reason is that in Slick, as soon as you fire db.run (thus converting DBIO to Future), you lose the ability to compose multiple operations within a single transaction. Therefore a common pattern is to push DBIO pretty high up the application layers, basically up to the business level that defines the transactional boundaries.
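A sketch of that DBIO-first pattern, under the same assumptions as the repository above (loadPair is an illustrative caller, not from the original):

```scala
// The repository returns DBIO instead of Future, so callers can compose:
def load(id: UUID): DBIO[Project] =
  for {
    project <- ProjectTable.filter(_.id === id).result.head
    tags    <- ProjectTagTable.filter(_.projectID === id).result
  } yield Project(project.id, project.name, project.state,
                  project.description, project.team, tags.map(_.name).toSet)

// A higher layer decides the transaction boundary: both loads (or any mix of
// reads and writes) run on one connection inside one transaction.
def loadPair(a: UUID, b: UUID): Future[(Project, Project)] =
  db.run((load(a) zip load(b)).transactionally)
```

Had load already called db.run internally, the two operations could no longer share a transaction.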

Scala Slick How to insert rows with a previously returned autoincremental ID

Here is the thing.
I'm using Slick 3.1.0 and basically I have two models, one of which has a FK constraint on the other. Let's say:
case class FooRecord(id: Option[Int], name: String, oneValue: Int)

class FooTable(tag: Tag) extends Table[FooRecord](tag, "FOO") {
  def id = column[Int]("ID", O.PrimaryKey, O.AutoInc)
  def name = column[String]("NAME")
  def oneValue = column[Int]("ONEVALUE")
  def * = (id.?, name, oneValue) <> (FooRecord.tupled, FooRecord.unapply)
}
val fooTable = TableQuery[FooTable]
and
case class BarRecord(id: Option[Int], name: String, foo_id: Int)

class BarTable(tag: Tag) extends Table[BarRecord](tag, "BAR") {
  def id = column[Int]("ID", O.PrimaryKey, O.AutoInc)
  def name = column[String]("NAME")
  def fooId = column[Int]("FOO_ID")
  def foo = foreignKey("foo_fk", fooId, fooTable)(_.id)
  def * = (id.?, name, fooId) <> (BarRecord.tupled, BarRecord.unapply)
}
val barTable = TableQuery[BarTable]
These are intended to store one Foo and several Bars, where each Bar belongs to exactly one Foo.
I've been trying to figure out how to perform the complete set of insert statements in a single transaction, i.e.:
DBIO.seq(
  (fooTable returning fooTable.map(_.id)) += FooRecord(None, "MyName", 42),
  (barTable returning barTable.map(_.id)) += BarRecord(None, "MyBarname", FOO_KEY)
)
but the thing is I could not find a way to get the Foo ID to use as FOO_KEY in the Bar instance.
Of course I could perform the DB action twice, but that's pretty awful in my opinion.
Any thought?
Thanks in advance
The simplest way is just to sequence your DBIO actions in a for comprehension and pass the result of the first into the second:
val insertDBIO = for {
  fooId <- (fooTable returning fooTable.map(_.id)) += FooRecord(None, "MyName", 42)
  barId <- (barTable returning barTable.map(_.id)) += BarRecord(None, "MyBarname", fooId)
} yield (fooId, barId)

db.run(insertDBIO.transactionally)
The transactionally call will ensure that both are run on the same connection.
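The same pattern extends to inserting several Bars for one Foo: a batch insert with ++= combined with returning yields the generated ids. A sketch against the question's tables (the extra Bar rows are illustrative):

```scala
// One Foo and two Bars inserted atomically; rolling back the whole
// batch if any single statement fails.
val insertAll = for {
  fooId <- (fooTable returning fooTable.map(_.id)) +=
             FooRecord(None, "MyName", 42)
  barIds <- (barTable returning barTable.map(_.id)) ++= Seq(
              BarRecord(None, "MyBarname", fooId),
              BarRecord(None, "OtherBar", fooId)
            )
} yield (fooId, barIds)

db.run(insertAll.transactionally)
```

barIds comes back as a Seq of the generated Bar keys, in insertion order.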

Slick 3 many to many relations: how to get all the element of a table and their relations if they exist?

I'm working with Slick 3 and Play! 2.4 and I have a very common problem that I haven't managed to resolve.
I have a playlist table that can be linked to tracks through the relation table playlistsTracks. I want to be able to get all the playlists with their track relations and the tracks themselves. My problem is that I can't get the playlists that don't have any relations.
Here are the 3 tables:
class Playlists(tag: Tag) extends Table[Playlist](tag, "playlists") {
  def id = column[Long]("playlistid", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def * = (id.?, name) <> ((Playlist.apply _).tupled, Playlist.unapply)
}

class PlaylistsTracks(tag: Tag) extends Table[PlaylistTrack](tag, "playliststracks") {
  def playlistId = column[Long]("playlistid")
  def trackId = column[UUID]("trackid")
  def trackRank = column[Double]("trackrank")
  def * = (playlistId, trackId, trackRank) <> ((PlaylistTrack.apply _).tupled, PlaylistTrack.unapply)
  def aFK = foreignKey("playlistId", playlistId, playlists)(_.id, onDelete = ForeignKeyAction.Cascade)
  def bFK = foreignKey("trackId", trackId, tracks)(_.uuid, onDelete = ForeignKeyAction.Cascade)
}

class Tracks(tag: Tag) extends Table[Track](tag, "tracks") {
  def uuid = column[UUID]("trackid", O.PrimaryKey)
  def title = column[String]("title")
  def * = (uuid, title) <> ((Track.apply _).tupled, Track.unapply)
}
For the moment the snippet of code that gets the playlists looks like this:
val query = for {
  playlist <- playlists
  playlistTrack <- playlistsTracks if playlistTrack.playlistId === playlist.id
  track <- tracks if playlistTrack.trackId === track.uuid
} yield (playlist, playlistTrack, track)

db.run(query.result) map { println }
It prints something like Vector(Playlist, PlaylistTrack, Track) (what I want), but I only get the playlists that have relations, instead of all the playlists, including the ones without relations, as I would like.
I tried a lot of things like join (or joinFull, joinLeft, joinRight...) but without success, and it is unfortunately difficult to find example projects that go beyond very simple relations.
You need to use a left join between the Playlists and PlaylistsTracks tables and an inner join between PlaylistsTracks and Tracks.
There are some things missing in the example so I can't actually compile the following, but I think you can try something like:
val query = for {
  (playlist, optionalPlaylistTrackAndTrack) <-
    playlists joinLeft
      (playlistsTracks join tracks on (_.trackId === _.uuid)) on
      (_.id === _._1.playlistId)
} yield (playlist, optionalPlaylistTrackAndTrack)
Note that optionalPlaylistTrackAndTrack is an Option of a tuple representing a playlist track and a track. This is because there may be a playlist without a playlistTrack.
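Once the rows come back as Seq[(Playlist, Option[(PlaylistTrack, Track)])], you can collapse them to one entry per playlist; playlists without relations simply end up with an empty track list. A sketch building on the query above:

```scala
// Group the flat left-join rows by playlist and collect the tracks.
db.run(query.result).map { rows =>
  rows.groupBy(_._1).map { case (playlist, grouped) =>
    // flatMap drops the Nones produced by playlists without relations,
    // so such playlists map to an empty track list
    playlist -> grouped.flatMap(_._2).map { case (_, track) => track }
  }
}
```

This yields a Map from each Playlist to its (possibly empty) tracks, which is usually the shape you want for rendering.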

Using Auto Incrementing fields with PostgreSQL and Slick

How does one insert records into PostgreSQL using AutoInc keys with Slick mapped tables? If I use an Option for the id in my case class and set it to None, then PostgreSQL will complain on insert that the field cannot be null. This works for H2, but not for PostgreSQL:
//import scala.slick.driver.H2Driver.simple._
//import scala.slick.driver.BasicProfile.SimpleQL.Table
import scala.slick.driver.PostgresDriver.simple._
import Database.threadLocalSession

object TestMappedTable extends App {
  case class User(id: Option[Int], first: String, last: String)

  object Users extends Table[User]("users") {
    def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
    def first = column[String]("first")
    def last = column[String]("last")
    def * = id.? ~ first ~ last <> (User, User.unapply _)
    def ins1 = first ~ last returning id
    val findByID = createFinderBy(_.id)
    def autoInc = id.? ~ first ~ last <> (User, User.unapply _) returning id
  }

  // implicit val session = Database.forURL("jdbc:h2:mem:test1", driver = "org.h2.Driver").createSession()
  implicit val session = Database.forURL("jdbc:postgresql:test:slicktest",
    driver = "org.postgresql.Driver",
    user = "postgres",
    password = "xxx")

  session.withTransaction {
    Users.ddl.create
    // insert data
    print(Users.insert(User(None, "Jack", "Green")))
    print(Users.insert(User(None, "Joe", "Blue")))
    print(Users.insert(User(None, "John", "Purple")))
    val u = Users.insert(User(None, "Jim", "Yellow"))
    // println(u.id.get)
    print(Users.autoInc.insert(User(None, "Johnathan", "Seagul")))
  }

  session.withTransaction {
    val queryUsers = for {
      user <- Users
    } yield (user.id, user.first)
    println(queryUsers.list)

    Users.where(_.id between (1, 2)).foreach(println)
    println("ID 3 -> " + Users.findByID.first(3))
  }
}
Using the above with H2 succeeds, but if I comment it out and change to PostgreSQL, then I get:
[error] (run-main) org.postgresql.util.PSQLException: ERROR: null value in column "id" violates not-null constraint
org.postgresql.util.PSQLException: ERROR: null value in column "id" violates not-null constraint
This is working here:
object Application extends Table[(Long, String)]("application") {
  def idlApplication = column[Long]("idlapplication", O.PrimaryKey, O.AutoInc)
  def appName = column[String]("appname")
  def * = idlApplication ~ appName
  def autoInc = appName returning idlApplication
}

var id = Application.autoInc.insert("App1")
This is how my SQL looks:
CREATE TABLE application
(idlapplication BIGSERIAL PRIMARY KEY,
appName VARCHAR(500));
Update:
The specific problem with regard to a mapped table with User (as in the question) can be solved as follows:
def forInsert = first ~ last <>
({ (f, l) => User(None, f, l) }, { u:User => Some((u.first, u.last)) })
This is from the test cases in the Slick git repository.
I tackled this problem in a different way. Since I expect my User objects to always have an id in my application logic, and the only point where one would not have it is during insertion into the database, I use an auxiliary NewUser case class which doesn't have an id.
case class User(id: Int, first: String, last: String)
case class NewUser(first: String, last: String)
object Users extends Table[User]("users") {
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def first = column[String]("first")
  def last = column[String]("last")
  def * = id ~ first ~ last <> (User, User.unapply _)
  def autoInc = first ~ last <> (NewUser, NewUser.unapply _) returning id
}
val id = Users.autoInc.insert(NewUser("John", "Doe"))
Again, User maps 1:1 to the database entry/row while NewUser could be replaced by a tuple if you wanted to avoid having the extra case class, since it is only used as a data container for the insert invocation.
EDIT:
If you want more safety (with somewhat increased verbosity) you can make use of a trait for the case classes like so:
trait UserT {
  def first: String
  def last: String
}

case class User(id: Int, first: String, last: String) extends UserT
case class NewUser(first: String, last: String) extends UserT
// ... the rest remains intact
In this case you would apply your model changes to the trait first (including any mixins you might need), and optionally add default values to the NewUser.
Author's opinion: I still prefer the no-trait solution, as it is more compact, and changes to the model are a matter of copy-pasting the User params and then removing the id (auto-inc primary key), both in the case class declaration and in the table projections.
We're using a slightly different approach. Instead of creating a further projection, we request the next id for a table, copy it into the case class and use the default projection '*' for inserting the table entry.
For postgres it looks like this:
Let your Table-Objects implement this trait
trait TableWithId { this: Table[_] =>
  /**
   * can be overridden if the plural of the table name is irregular
   **/
  val idColName: String = s"${tableName.dropRight(1)}_id"
  def id = column[Int](idColName, O.PrimaryKey, O.AutoInc)
  def getNextId = (Q[Int] + s"""select nextval('"${tableName}_${idColName}_seq"')""").first
}
All your entity case classes need a method like this (it should also be defined in a trait):
case class Entity(...) {
  def withId(newId: Id): Entity = this.copy(id = Some(newId))
}
New entities can now be inserted this way:
object Entities extends Table[Entity]("entities") with TableWithId {
  override val idColName: String = "entity_id"
  ...
  def save(entity: Entity) = this insert entity.withId(getNextId)
}
The code is still not DRY, because you need to define the withId method for each table. Furthermore, you have to request the next id before you insert an entity, which might have a performance impact, but it shouldn't be noticeable unless you insert thousands of entries at a time.
The main advantage is that there is no need for a second projection, which makes the code less error-prone, in particular for tables with many columns.
The simplest solution was to use the SERIAL type like this:
def id = column[Long]("id", SqlType("SERIAL"), O.PrimaryKey, O.AutoInc)
Here's a more concrete block:
// A case class to be used as the table mapping
case class CaseTable(id: Long = 0L, dataType: String, strBlob: String)

// Class for our Table
class MyTable(tag: Tag) extends Table[CaseTable](tag, "mytable") {
  // Define the columns
  def dataType = column[String]("datatype")
  def strBlob = column[String]("strblob")
  // Auto-increment the id primary key column
  def id = column[Long]("id", SqlType("SERIAL"), O.PrimaryKey, O.AutoInc)
  // the * projection (e.g. select * ...) auto-transforms the tupled column values
  def * = (id, dataType, strBlob) <> (CaseTable.tupled, CaseTable.unapply _)
}
// Insert and get the auto-incremented primary key
def insertData(dataType: String, strBlob: String, id: Long = 0L): Long = {
  // DB connection
  val db = Database.forURL(jdbcUrl, pgUser, pgPassword, driver = driverClass)
  // Variable to run queries on our table
  val myTable = TableQuery[MyTable]
  val insert = try {
    // Form the query
    val query = myTable returning myTable.map(_.id) += CaseTable(id, dataType, strBlob)
    // Execute it and wait for the result
    val autoId = Await.result(db.run(query), maxWaitMins)
    // Return the ID
    autoId
  } catch {
    case e: Exception =>
      logger.error("Error in inserting using Slick: " + e.getMessage, e)
      e.printStackTrace()
      -1L
  }
  insert
}
I faced the same problem trying to run the computer-database sample from play-slick-3.0 after I changed the db to Postgres. What solved the problem was changing the id column (primary key) type to SERIAL in the evolution file /conf/evolutions/default/1.sql (originally it was BIGINT). Take a look at https://groups.google.com/forum/?fromgroups=#%21topic/scalaquery/OEOF8HNzn2U for the whole discussion.
Cheers,
ReneX
Another trick is making the id of the case class a var
case class Entity(var id: Long)
To insert an instance, create it like below
Entity(null.asInstanceOf[Long])
I've tested that it works.
The solution I've found is to use SqlType("Serial") in the column definition. I haven't tested it extensively yet, but it seems to work so far.
So instead of
def id: Rep[PK[SomeTable]] = column[PK[SomeTable]]("id", O.PrimaryKey, O.AutoInc)
You should do:
def id: Rep[PK[SomeTable]] = column[PK[SomeTable]]("id", SqlType("SERIAL"), O.PrimaryKey, O.AutoInc)
Where PK is defined like the example in the "Essential Slick" book:
final case class PK[A](value: Long = 0L) extends AnyVal with MappedTo[Long]
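With a SERIAL-typed column like the one above, getting the generated key back on insert might look like this under Slick 3 syntax (someTable, SomeTable and SomeRow are illustrative names, not from the original):

```scala
// Sketch: combine the SERIAL-typed id column with `returning`, so the
// database-generated key comes back from the insert action. The placeholder
// PK(0L) is ignored by the database thanks to O.AutoInc.
val insertAction: DBIO[PK[SomeTable]] =
  (someTable returning someTable.map(_.id)) += SomeRow(PK(0L), "payload")

db.run(insertAction).foreach(newId => println(s"generated id: ${newId.value}"))
```

The value-class wrapper keeps ids of different tables from being mixed up at compile time while still mapping to a plain BIGINT/SERIAL column.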