How to join two tables and map the result to a case class in slick - scala

I am working on a project with the Scala Play 2 framework, where I am using Slick as the FRM and a PostgreSQL database.
In my project, customer is an entity, so I created a customer table along with a Customer case class and object. Another entity is account, for which I also created an account table, case class, and object. The code is given below:
case class Customer(id: Option[Int],
status: String,
balance: Double,
payable: Double,
created: Option[Instant],
updated: Option[Instant]) extends GenericEntity {
def this(status: String,
balance: Double,
payable: Double) = this(None, status, balance, payable, None, None)
}
class CustomerTable(tag: Tag) extends GenericTable[Customer](tag, "customer"){
override def id = column[Option[Int]]("id")
def status = column[String]("status")
def balance = column[Double]("balance")
def payable = column[Double]("payable")
def account = foreignKey("fk_customer_account", id, Accounts.table)(_.id, onUpdate = ForeignKeyAction.Restrict, onDelete = ForeignKeyAction.Cascade)
def * = (id, status, balance, payable, created, updated) <> ((Customer.apply _).tupled, Customer.unapply)
}
object Customers extends GenericService[Customer, CustomerTable] {
override val table = TableQuery[CustomerTable]
val accountTable = TableQuery[AccountTable]
override def copyEntityFields(entity: Customer, id: Option[Int],
created: Option[Instant], updated: Option[Instant]): Customer = {
entity.copy(id = id, created = created, updated = updated)
}
}
Now I want to join the customer table and the account table and map the result to a case class named CustomerWithAccount.
I have tried the following code:
case class CustomerDetail(id: Option[Int],
status: String,
name: String)
object Customers extends GenericService[Customer, CustomerTable] {
override val table = TableQuery[CustomerTable]
val accountTable = TableQuery[AccountTable]
def getAllCustomersWithAccount = db.run(table.join(accountTable).on(_.id === _.id).map { row =>
//for (row1 <- row) {
for {
id <- row._1.id
status <- row._1.status.toString()
name <- row._2.name.toString()
} yield CustomerDetail(id = id, status = status, name = name)
//}
}.result)
override def copyEntityFields(entity: Customer, id: Option[Int], created:Option[Instant], updated: Option[Instant]): Customer = {
entity.copy(id = id, created = created, updated = updated)
}
}
But this did not work.
Please help me.

You can try this query:
db.run((table.join(accountTable).on(_.id === _.id)
.map{
case (t,a) => ((t.id, t.status, a.name) <> (CustomerDetail.tupled, CustomerDetail.unapply _))
}).result)
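On Slick 3.2 or later, the same projection can also be written with mapTo, which is a little less noisy than <>. A minimal sketch, assuming the table, accountTable, and CustomerDetail definitions from the question:
def getAllCustomersWithAccount =
  db.run(
    table.join(accountTable).on(_.id === _.id)
      // build the mapped projection inside the query, so only the three columns are fetched
      .map { case (c, a) => (c.id, c.status, a.name).mapTo[CustomerDetail] }
      .result
  )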

You can do this with three case classes: one per table and then one for the joined result. Since CustomerWithAccount nests a full Account, the mapping is done on the result of the query rather than inside it:
db.run(customerTable.join(accountTable).on(_.id === _.id).result)
  .map(_.map { case (c, a) =>
    // build the DTO from each joined row; an implicit ExecutionContext is needed for the Future map
    CustomerWithAccount(status = c.status, created = c.created, account = a, ...)
  })

Related

How to use scala case class as function parameter to specify different fields to use?

I have some duplication of code due to having to do some grouping on 3 different fields of a case class and then populate a new case class with the result. Since they share a common schema, it should be possible to write a single function that is told which of the 3 fields to use and populates accordingly. However, I am not exactly sure how to do this.
Schemas:
case class Transaction(
senderBank: Bank,
receiverBank: Bank,
intermediaryBank: Bank)
case class Bank(
name: String,
country: Option[String],
countryCode: Option[String])
case class GroupedBank(
name: String,
country: Option[String],
countryCode: Option[String],
bankType: String)
Current function I tried to do:
def groupedBank(transactionSeq: Seq[Transaction], bankName: Bank, bankTypeString: String): Iterable[Seq[GroupedBank]] = {
transactionSeq.groupBy(_ => bankName.name).map {
case (key, transactionSeq) =>
val bankGroupedSeq = transactionSeq.map(_ => {
GroupedBank(
name = bankName.name,
country = bankName.country,
countryCode = bankName.countryCode,
bankType = bankTypeString)
})
bankGroupedSeq
}
}
I need to do the grouping for senderBank, receiverBank, and intermediaryBank. However, I am not sure how to refer to them correctly through the function parameter bankName. For senderBank I would want to do something like Transaction.senderBank, which would point to the correct fields for name, country, and so on for senderBank. For receiverBank it should be similar, so Transaction.receiverBank, which then refers to the correct fields for receiverBank, and the same logic applies to intermediaryBank. My question is therefore: how can I accomplish something like this, or is there another way that would be more appropriate?
You can pass a function to extract the bank with the correct type from a transaction:
def groupedBank(
transactionSeq: Seq[Transaction],
getBank: Transaction => Bank,
bankTypeString: String
): Iterable[Seq[GroupedBank]] = {
transactionSeq.groupBy(t => getBank(t).name).map {
case (key, transactionSeq) =>
transactionSeq.map { transaction =>
val bank = getBank(transaction)
GroupedBank(
name = bank.name,
country = bank.country,
countryCode = bank.countryCode,
bankType = bankTypeString)
}
}
}
And then call it like this:
groupedBank(transactionSeq, _.senderBank, "sender")
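The receiver and intermediary groupings follow the same pattern, only the extractor function changes:
groupedBank(transactionSeq, _.receiverBank, "receiver")
groupedBank(transactionSeq, _.intermediaryBank, "intermediary")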
It could also be a good idea to abstract the bank type concept into a separate trait:
sealed trait BankGroup {
def name: String
def getBank(transaction: Transaction): Bank
def groupBanks(transactionSeq: Seq[Transaction]): Iterable[Seq[GroupedBank]] = {
transactionSeq.groupBy(t => getBank(t).name).map {
case (key, transactionSeq) =>
transactionSeq.map { transaction =>
val bank = getBank(transaction)
GroupedBank(
name = bank.name,
country = bank.country,
countryCode = bank.countryCode,
bankType = name)
}
}
}
}
object BankGroup {
object Sender extends BankGroup {
def name: String = "sender"
def getBank(transaction: Transaction): Bank = transaction.senderBank
}
object Receiver extends BankGroup {
def name: String = "receiver"
def getBank(transaction: Transaction): Bank = transaction.receiverBank
}
object Intermediary extends BankGroup {
def name: String = "intermediary"
def getBank(transaction: Transaction): Bank = transaction.intermediaryBank
}
val values: Seq[BankGroup] = Seq(Sender, Receiver, Intermediary)
def byName(name: String): BankGroup = values.find(_.name == name)
.getOrElse(sys.error(s"unknown bank type: $name"))
}
And you can call it in one of these ways:
BankGroup.Sender.groupBanks(transactionSeq)
BankGroup.byName("sender").groupBanks(transactionSeq)

Map Slick oneToMany results to tree format

I have written a simple Play 2 REST application with Slick. I have the following domain model:
case class Company(id: Option[Long], name: String)
case class Department(id: Option[Long], name: String, companyId: Long)
class Companies(tag: Tag) extends Table[Company](tag, "COMPANY") {
def id = column[Long]("ID", O.AutoInc, O.PrimaryKey)
def name = column[String]("NAME")
def * = (id.?, name) <> (Company.tupled, Company.unapply)
}
val companies = TableQuery[Companies]
class Departments(tag: Tag) extends Table[Department](tag, "DEPARTMENT") {
def id = column[Long]("ID", O.AutoInc, O.PrimaryKey)
def name = column[String]("NAME")
def companyId = column[Long]("COMPANY_ID")
def company = foreignKey("FK_DEPARTMENT_COMPANY", companyId, companies)(_.id)
def * = (id.?, name, companyId) <> (Department.tupled, Department.unapply)
}
val departments = TableQuery[Departments]
and here's my method to query all companies with all related departments:
override def findAll: Future[List[(Company, Department)]] = {
db.run((companies join departments on (_.id === _.companyId)).to[List].result)
}
Unfortunately, I want to display the data in a tree JSON format, so I will have to build a query that gets all companies with their departments and map them somehow to CompanyDTO, something like this:
case class CompanyDTO(id: Option[Long], name: String, departments: List[Department])
Do you know what the best solution for this is? Should I map List[(Company, Department)] with JSON formatters, or should I change my query to use CompanyDTO? If so, how can I map the results to CompanyDTO?
To my knowledge, one-to-many relationships cannot be fetched in a single query in an RDBMS. The best you can do while avoiding the n+1 problem is to do it in two queries. Here's how it would go in your case:
for {
comps <- companies.result
deps <- comps.map(c => departments.filter(_.companyId === c.id.get))
.reduceLeft((carry,item) => carry unionAll item).result
grouped = deps.groupBy(_.companyId)
} yield comps.map{ c =>
val companyDeps = grouped.getOrElse(c.id.get,Seq()).toList
CompanyDTO(c.id,c.name,companyDeps)
}
There are some fixed parts in this query that you'll come to recognize over time. That makes it a good candidate for abstraction: something you can reuse to fetch one-to-many relationships in Slick in general.
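For comparison, if you keep the single joined query from the question, the grouping can also be done in memory after the query has run. A rough sketch, assuming the findAll method and CompanyDTO from the question (the method name findAllAsTree is made up, and an implicit ExecutionContext is assumed to be in scope):
def findAllAsTree: Future[List[CompanyDTO]] =
  findAll.map { rows =>
    rows
      .groupBy { case (company, _) => company }   // group the joined rows by company
      .map { case (company, pairs) =>
        CompanyDTO(company.id, company.name, pairs.map(_._2))
      }
      .toList
  }
Note that, like the joined findAll itself, this drops companies that have no departments, because the inner join never returns them.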

How to create entity relationships in slick?

I am working on a project with the Scala Play 2 framework, where I am using Slick as the FRM and a PostgreSQL database.
In my project, customer is an entity, so I created a customer table along with a Customer case class and object. Another entity is account. The code is given below:
case class Customer(id: Option[Int],
status: String,
balance: Double,
payable: Double,
created: Option[Instant],
updated: Option[Instant]) extends GenericEntity {
def this(status: String,
balance: Double,
payable: Double) = this(None, status, balance, payable, None, None)
}
class CustomerTable(tag: Tag) extends GenericTable[Customer](tag, "customer"){
override def id = column[Option[Int]]("id")
def status = column[String]("status")
def balance = column[Double]("balance")
def payable = column[Double]("payable")
def account = foreignKey("fk_customer_account", id, Accounts.table)(_.id, onUpdate = ForeignKeyAction.Restrict, onDelete = ForeignKeyAction.Cascade)
def * = (id, status, balance, payable, created, updated) <> ((Customer.apply _).tupled, Customer.unapply)
}
object Customers extends GenericService[Customer, CustomerTable] {
override val table = TableQuery[CustomerTable]
val accountTable = TableQuery[AccountTable]
override def copyEntityFields(entity: Customer, id: Option[Int],
created: Option[Instant], updated: Option[Instant]): Customer = {
entity.copy(id = id, created = created, updated = updated)
}
}
Now the problem is: how can I create an account object inside the customer object? Or how can I get the account of a customer?
Slick is not an ORM; it is just a functional relational mapper. Nested objects cannot be used directly in Slick the way they can in Hibernate or other ORMs. Instead:
//Usual ORM mapping works with nested objects
case class Person(name: String, age: Int, address: Address) //nested object
//With Slick
case class Address(addressStr: String, id: Long) //id primary key
case class Person(name: String, age: Int, addressId: Long) //addressId is the id of the address object and its going to the foreign key
Account object in Customer
case class Account(id: Long, ....) // .... means other fields.
case class Customer(accountId: Long, ....)
Place accountId inside the Customer object and make it a foreign key.
For more info, refer to the Slick example provided as part of the documentation.
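To answer the second part of the question directly: with a foreign key in place, the account of a customer is read with a join rather than through a nested field. A rough sketch, keeping the foreign key exactly as declared in the question (customer.id referencing account.id) and assuming the Customers and Accounts objects shown there:
def getCustomerWithAccount(customerId: Int) =
  db.run(
    Customers.table
      .join(Accounts.table).on(_.id === _.id)   // follows fk_customer_account
      .filter { case (customer, _) => customer.id === customerId }
      .result
      .headOption                               // Option[(Customer, Account)]
  )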

better slick dynamic query coding style

private def buildQuery(query: TweetQuery) = {
var q = Tweets.map { t =>
t
}
query.isLocked.foreach { isLocked =>
q = q.filter(_.isLocked === isLocked)
}
query.isProcessed.foreach { isProcessed =>
q = q.filter(_.processFinished === isProcessed)
}
query.maxScheduleAt.foreach { maxScheduleAt =>
q = q.filter(_.expectScheduleAt < maxScheduleAt)
}
query.minScheduleAt.foreach { minScheduleAt =>
q = q.filter(_.expectScheduleAt > minScheduleAt)
}
query.status.foreach { status =>
q = q.filter(_.status === status)
}
query.scheduleType.foreach { scheduleType =>
q = q.filter(_.scheduleType === scheduleType)
}
q
}
I am writing things like the above to build dynamic queries. It is really boring; is there any better way to do this?
Maybe the MaybeFilter can help you: https://gist.github.com/cvogt/9193220
I think this is the correct migrated code for Slick 2.1.0:
case class MaybeFilter[X, Y](val query: Query[X, Y, Seq]) {
def filter[T, R: CanBeQueryCondition](data: Option[T])(f: T => X => R) = {
data.map(v => MaybeFilter(query.withFilter(f(v)))).getOrElse(this)
}
}
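A usage sketch for the version above, applied to the Tweets query from the question (the table and field names are taken from the question, and Tweets is assumed to be a TableQuery):
private def buildQuery(query: TweetQuery) =
  MaybeFilter(Tweets)
    .filter(query.isLocked)(locked => t => t.isLocked === locked)
    .filter(query.isProcessed)(processed => t => t.processFinished === processed)
    .filter(query.maxScheduleAt)(max => t => t.expectScheduleAt < max)
    .filter(query.minScheduleAt)(min => t => t.expectScheduleAt > min)
    .filter(query.status)(status => t => t.status === status)
    .filter(query.scheduleType)(scheduleType => t => t.scheduleType === scheduleType)
    .query   // unwrap the plain Query again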
I modified cvogt's answer in order to make it work with Slick 2.1.0. Explanations of what has changed are here.
Hope it helps someone :)
case class MaybeFilter[X, Y](val query: scala.slick.lifted.Query[X, Y, Seq]) {
def filter(op: Option[_])(f:(X) => Column[Option[Boolean]]) = {
op map { o => MaybeFilter(query.filter(f)) } getOrElse { this }
}
}
Regards.
Corrected example:
//Class definition
import scala.slick.driver.H2Driver.simple._
import scala.slick.lifted.{ProvenShape, ForeignKeyQuery}
// A Suppliers table with 6 columns: id, name, street, city, state, zip
class Suppliers(tag: Tag)
extends Table[(Int, String, String, String, String, String)](tag, "SUPPLIERS") {
// This is the primary key column:
def id: Column[Int] = column[Int]("SUP_ID", O.PrimaryKey)
def name: Column[String] = column[String]("SUP_NAME")
def street: Column[String] = column[String]("STREET")
def city: Column[String] = column[String]("CITY")
def state: Column[String] = column[String]("STATE")
def zip: Column[String] = column[String]("ZIP")
// Every table needs a * projection with the same type as the table's type parameter
def * : ProvenShape[(Int, String, String, String, String, String)] =
(id, name, street, city, state, zip)
}
//I changed the name of the def from filter to filteredBy to ease the
//implicit conversion
case class MaybeFilter[X, Y](val query: scala.slick.lifted.Query[X, Y, Seq]) {
def filteredBy(op: Option[_])(f:(X) => Column[Option[Boolean]]) = {
op map { o => MaybeFilter(query.filter(f)) } getOrElse { this }
}
}
//Implicit conversion to the MaybeFilter in order to minimize ceremony
implicit def maybeFilterConversor[X,Y](q:Query[X,Y,Seq]) = new MaybeFilter(q)
val suppliers: TableQuery[Suppliers] = TableQuery[Suppliers]
suppliers += (101, "Acme, Inc.", "99 Market Street", "Groundsville", "CA", "95199")
//Dynamic query here
//try this assignment: val nameFilter:Option[String] = Some("cme") and see the results
val nameFilter:Option[String] = Some("Acme")
//also try assigning None here, like this: val supIDFilter:Option[Int] = None, and see the results
val supIDFilter:Option[Int] = Some(101)
suppliers
.filteredBy(supIDFilter){_.id === supIDFilter}
.filteredBy(nameFilter){_.name like nameFilter.map("%" + _ + "%").getOrElse("")}
.query.list
Complete example:
https://github.com/neowinx/hello-slick-2.1-dynamic-filter
Are isLocked, isProcessed, etc. Options?
Then you can also write things like
for (locked <- query.isLocked) { q = q.filter(_.isLocked is locked) }
if that's of any consolation :-}
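On Slick 3.3 and later there is also a built-in filterOpt that covers exactly this case, so no helper class is needed. A sketch, assuming the Tweets table and TweetQuery from the question:
private def buildQuery(query: TweetQuery) =
  Tweets
    .filterOpt(query.isLocked)(_.isLocked === _)   // each filter is applied only when the Option is defined
    .filterOpt(query.isProcessed)(_.processFinished === _)
    .filterOpt(query.maxScheduleAt)(_.expectScheduleAt < _)
    .filterOpt(query.minScheduleAt)(_.expectScheduleAt > _)
    .filterOpt(query.status)(_.status === _)
    .filterOpt(query.scheduleType)(_.scheduleType === _)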
Well, it seems like this code violates the Open/Closed Principle. Take a look at this article: even though it's not about Scala, it explains how to properly design such methods.

Using Auto Incrementing fields with PostgreSQL and Slick

How does one insert records into PostgreSQL using AutoInc keys with Slick mapped tables? If I use an Option for the id in my case class and set it to None, then PostgreSQL will complain on insert that the field cannot be null. This works for H2, but not for PostgreSQL:
//import scala.slick.driver.H2Driver.simple._
//import scala.slick.driver.BasicProfile.SimpleQL.Table
import scala.slick.driver.PostgresDriver.simple._
import Database.threadLocalSession
object TestMappedTable extends App{
case class User(id: Option[Int], first: String, last: String)
object Users extends Table[User]("users") {
def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
def first = column[String]("first")
def last = column[String]("last")
def * = id.? ~ first ~ last <> (User, User.unapply _)
def ins1 = first ~ last returning id
val findByID = createFinderBy(_.id)
def autoInc = id.? ~ first ~ last <> (User, User.unapply _) returning id
}
// implicit val session = Database.forURL("jdbc:h2:mem:test1", driver = "org.h2.Driver").createSession()
implicit val session = Database.forURL("jdbc:postgresql:test:slicktest",
driver="org.postgresql.Driver",
user="postgres",
password="xxx")
session.withTransaction{
Users.ddl.create
// insert data
print(Users.insert(User(None, "Jack", "Green" )))
print(Users.insert(User(None, "Joe", "Blue" )))
print(Users.insert(User(None, "John", "Purple" )))
val u = Users.insert(User(None, "Jim", "Yellow" ))
// println(u.id.get)
print(Users.autoInc.insert(User(None, "Johnathan", "Seagul" )))
}
session.withTransaction{
val queryUsers = for {
user <- Users
} yield (user.id, user.first)
println(queryUsers.list)
Users.where(_.id between(1, 2)).foreach(println)
println("ID 3 -> " + Users.findByID.first(3))
}
}
Using the above with H2 succeeds, but if I comment that out and switch to PostgreSQL, I get:
[error] (run-main) org.postgresql.util.PSQLException: ERROR: null value in column "id" violates not-null constraint
org.postgresql.util.PSQLException: ERROR: null value in column "id" violates not-null constraint
This is working here:
object Application extends Table[(Long, String)]("application") {
def idlApplication = column[Long]("idlapplication", O.PrimaryKey, O.AutoInc)
def appName = column[String]("appname")
def * = idlApplication ~ appName
def autoInc = appName returning idlApplication
}
var id = Application.autoInc.insert("App1")
This is how my SQL looks:
CREATE TABLE application
(idlapplication BIGSERIAL PRIMARY KEY,
appName VARCHAR(500));
Update:
The specific problem with regard to a mapped table with User (as in the question) can be solved as follows:
def forInsert = first ~ last <>
({ (f, l) => User(None, f, l) }, { u:User => Some((u.first, u.last)) })
This is from the test cases in the Slick git repository.
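A short usage sketch in the same old Slick style as the question (an implicit session is assumed, as in the question's withTransaction block):
// the id column is left out of the projection, so PostgreSQL fills it from the sequence
Users.forInsert.insert(User(None, "Jack", "Green"))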
I tackled this problem in a different way. Since I expect my User objects to always have an id in my application logic, and the only point where one would not have an id is during insertion into the database, I use an auxiliary NewUser case class which doesn't have an id.
case class User(id: Int, first: String, last: String)
case class NewUser(first: String, last: String)
object Users extends Table[User]("users") {
def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
def first = column[String]("first")
def last = column[String]("last")
def * = id ~ first ~ last <> (User, User.unapply _)
def autoInc = first ~ last <> (NewUser, NewUser.unapply _) returning id
}
val id = Users.autoInc.insert(NewUser("John", "Doe"))
Again, User maps 1:1 to the database entry/row while NewUser could be replaced by a tuple if you wanted to avoid having the extra case class, since it is only used as a data container for the insert invocation.
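The tuple variant mentioned above would look roughly like this (insertValues is just a made-up name); it is essentially the ins1 projection already present in the question's code:
// inside the Users table object
def insertValues = first ~ last returning id
// insert a (first, last) tuple and get the generated id back
val newId = Users.insertValues.insert(("John", "Doe"))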
EDIT:
If you want more safety (with somewhat increased verbosity) you can make use of a trait for the case classes like so:
trait UserT {
def first: String
def last: String
}
case class User(id: Int, first: String, last: String) extends UserT
case class NewUser(first: String, last: String) extends UserT
// ... the rest remains intact
In this case you would apply your model changes to the trait first (including any mixins you might need), and optionally add default values to the NewUser.
Author's opinion: I still prefer the no-trait solution as it is more compact and changes to the model are a matter of copy-pasting the User params and then removing the id (auto-inc primary key), both in case class declaration and in table projections.
We're using a slightly different approach. Instead of creating a further projection, we request the next id for a table, copy it into the case class, and use the default '*' projection for inserting the table entry.
For Postgres it looks like this:
Let your table objects implement this trait:
trait TableWithId { this: Table[_] =>
/**
* can be overridden if the plural of the table name is irregular
**/
val idColName: String = s"${tableName.dropRight(1)}_id"
def id = column[Int](s"${idColName}", O.PrimaryKey, O.AutoInc)
def getNextId = (Q[Int] + s"""select nextval('"${tableName}_${idColName}_seq"')""").first
}
All your entity case classes need a method like this (should also be defined in a trait):
case class Entity (...) {
def withId(newId: Id): Entity = this.copy(id = Some(newId))
}
New entities can now be inserted this way:
object Entities extends Table[Entity]("entities") with TableWithId {
override val idColName: String = "entity_id"
...
def save(entity: Entity) = this insert entity.withId(getNextId)
}
The code is still not DRY, because you need to define the withId method for each table. Furthermore, you have to request the next id before you insert an entity, which might have a performance impact, though it shouldn't be noticeable unless you insert thousands of entries at a time.
The main advantage is that there is no need for a second projection, which makes the code less error-prone, in particular for tables with many columns.
The simplest solution was to use the SERIAL type like this:
def id = column[Long]("id", SqlType("SERIAL"), O.PrimaryKey, O.AutoInc)
Here's a more concrete block:
// A case class to be used as table map
case class CaseTable( id: Long = 0L, dataType: String, strBlob: String)
// Class for our Table
class MyTable(tag: Tag) extends Table[CaseTable](tag, "mytable") {
// Define the columns
def dataType = column[String]("datatype")
def strBlob = column[String]("strblob")
// Auto Increment the id primary key column
def id = column[Long]("id", SqlType("SERIAL"), O.PrimaryKey, O.AutoInc)
// the * projection (e.g. select * ...) auto-transforms the tupled column values
def * = (id, dataType, strBlob) <> (CaseTable.tupled, CaseTable.unapply _)
}
// Insert and get auto incremented primary key
def insertData(dataType: String, strBlob: String, id: Long = 0L): Long = {
// DB Connection
val db = Database.forURL(jdbcUrl, pgUser, pgPassword, driver = driverClass)
// Variable to run queries on our table
val myTable = TableQuery[MyTable]
val insert = try {
// Form the query
val query = myTable returning myTable.map(_.id) += CaseTable(id, dataType, strBlob)
// Execute it and wait for result
val autoId = Await.result(db.run(query), maxWaitMins)
// Return ID
autoId
}
catch {
case e: Exception => {
logger.error("Error in inserting using Slick: ", e.getMessage)
e.printStackTrace()
-1L
}
}
insert
}
I faced the same problem trying to make the computer-database sample from play-slick-3.0 work when I changed the db to Postgres. What solved the problem was to change the id column (primary key) type to SERIAL in the evolution file /conf/evolutions/default/1.sql (originally it was BIGINT). Take a look at https://groups.google.com/forum/?fromgroups=#%21topic/scalaquery/OEOF8HNzn2U for the whole discussion.
Cheers,
ReneX
Another trick is making the id of the case class a var
case class Entity(var id: Long)
To insert an instance, create it like below
Entity(null.asInstanceOf[Long])
I've tested that it works.
The solution I've found is to use SqlType("Serial") in the column definition. I haven't tested it extensively yet, but it seems to work so far.
So instead of
def id: Rep[PK[SomeTable]] = column[PK[SomeTable]]("id", O.PrimaryKey, O.AutoInc)
You should do:
def id: Rep[PK[SomeTable]] = column[PK[SomeTable]]("id", SqlType("SERIAL"), O.PrimaryKey, O.AutoInc)
Where PK is defined like the example in the "Essential Slick" book:
final case class PK[A](value: Long = 0L) extends AnyVal with MappedTo[Long]
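For context, a rough sketch of how that column definition and the PK wrapper might fit together in a table definition (the City names are made up, and Slick 3.2+ is assumed for mapTo):
import slick.jdbc.PostgresProfile.api._
import slick.sql.SqlProfile.ColumnOption.SqlType

final case class City(id: PK[CityTable] = PK(0L), name: String)

final class CityTable(tag: Tag) extends Table[City](tag, "city") {
  // SERIAL lets PostgreSQL generate the key; the PK wrapper keeps ids of different tables apart
  def id   = column[PK[CityTable]]("id", SqlType("SERIAL"), O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def *    = (id, name).mapTo[City]
}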