Join three tables in Scala Slick or flatten nested tuples

I need to INNER JOIN three tables because of the foreign keys in the first one, defined as follows:
CREATE TABLE "product" (
"id" INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
"name" VARCHAR NOT NULL,
"price" FLOAT NOT NULL,
"categoryid" INT NOT NULL,
"supplierid" INT NOT NULL,
FOREIGN KEY(categoryid) references category(id),
FOREIGN KEY(supplierid) references supplier(id)
);
In my model I have a method that lists everything:
def list(): Future[Seq[(Product, Category, Supplier)]] = db.run {
productTable.join(categoryTable).on(_.categoryid === _.id).join(supplierTable).on(_._1.supplierid === _.id).result
}
But this returns a nested tuple, ((Product, Category), Supplier), instead of a flat one.
How should I join these tables to get a flat tuple, or, if that is not possible, how can I flatten the nested one?
EDIT:
Actually, the only solution I have found that works for me is a manual map:
def list(): Future[Seq[(Product, Category, Supplier)]] = db.run {
productTable.join(categoryTable).on(_.categoryid === _.id).join(supplierTable).on(_._1.supplierid === _.id).result.map(a => Seq((a(1)._1._1,a(1)._1._2,a(1)._2)))
}
This looks and feels horrible, but it is the only thing that works so far...
Any better ideas?

Inner joins are expressed as for comprehensions in Slick:
def list(): Future[Seq[(Product, Category, Supplier)]] = db.run {
for {
product <- productTable
category <- categoryTable if product.categoryid === category.id
supplier <- supplierTable if product.supplierid === supplier.id
} yield (product, category, supplier)
}
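If you prefer to keep the explicit join syntax, you can also flatten the nested tuple with a pattern-matching map on the result. A minimal sketch, assuming an implicit ExecutionContext is in scope (your own map-based version already requires one):
def list(): Future[Seq[(Product, Category, Supplier)]] = db.run {
  productTable
    .join(categoryTable).on(_.categoryid === _.id)
    .join(supplierTable).on(_._1.supplierid === _.id)
    .result
    // reshape ((Product, Category), Supplier) into (Product, Category, Supplier)
    .map(_.map { case ((product, category), supplier) => (product, category, supplier) })
}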
I would also recommend checking out Slick's support for foreign key queries. That would make the query quite a bit simpler; it would probably look something like this:
def list(): Future[Seq[(Product, Category, Supplier)]] = db.run {
for {
product <- productTable
category <- product.category
supplier <- product.supplier
} yield (product, category, supplier)
}
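Note that this variant relies on foreign key definitions in the Product table, which are not shown in the question. A minimal sketch of what they might look like, assuming a Product case class with fields matching the columns (category and supplier are the helper names used in the query above):
class ProductTable(tag: Tag) extends Table[Product](tag, "product") {
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def price = column[Double]("price")
  def categoryid = column[Int]("categoryid")
  def supplierid = column[Int]("supplierid")
  // foreign key queries used as product.category / product.supplier above
  def category = foreignKey("category_fk", categoryid, categoryTable)(_.id)
  def supplier = foreignKey("supplier_fk", supplierid, supplierTable)(_.id)
  def * = (id, name, price, categoryid, supplierid) <> (Product.tupled, Product.unapply)
}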

Related

Group by in many-to-many join with Quill

I am trying to achieve with Quill what the following PostgreSQL query does:
select books.*, array_agg(authors.name) from books
join authors_books on(books.id = authors_books.book_id)
join authors on(authors.id = authors_books.author_id)
group by books.id
For now I have this in my Quill version:
val books = quote(querySchema[Book]("books"))
val authorsBooks = quote(querySchema[AuthorBook]("authors_books"))
val authors = quote(querySchema[Author]("authors"))
val q: db.Quoted[db.Query[(db.Query[Book], Seq[String])]] = quote{
books
.join(authorsBooks).on(_.id == _.book_id)
.join(authors).on(_._2.author_id == _.id)
.groupBy(_._1._1.id)
.map {
case (bId, q) => {
(q.map(_._1._1), unquote(q.map(_._2.name).arrayAgg))
}
}
}
How can I get rid of the nested query in the result (db.Query[Book]) and get a Book instead?
I might be a little bit rusty with SQL, but are you sure that your query is valid? In particular, I find it suspicious that you select books.* while grouping by books.id, i.e. you directly return fields that you didn't group by. Attempting to translate that wrong query directly is what makes things go wrong.
One way to fix it is to group by all the fields. Assuming Book is declared as:
case class Book(id: Int, name: String)
you can do
val qq: db.Quoted[db.Query[((Index, String), Seq[String])]] = quote {
books
.join(authorsBooks).on(_.id == _.book_id)
.join(authors).on(_._2.author_id == _.id)
.groupBy(r => (r._1._1.id, r._1._1.name))
.map {
case (bId, q) => {
// (Book.tupled(bId), unquote(q.map(_._2.name).arrayAgg)) // doesn't work
(bId, unquote(q.map(_._2.name).arrayAgg))
}
}
}
val res = db.run(qq).map(r => (Book.tupled(r._1), r._2))
Unfortunately it seems that you can't apply Book.tupled inside the quote, because you get the error
The monad composition can't be expressed using applicative joins
but you can easily do it after the db.run call to get your Book back.
Another option is to group by just Book.id and then join the books table again to get all the fields back. This might actually be cleaner and faster if there are many fields inside Book.
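A rough sketch of that second option, reusing the quoted queries from above. Whether your Quill version renders the join against the grouped subquery (and the arrayAgg extension) exactly like this is an assumption, so treat it as an illustration rather than tested code:
val namesPerBook = quote {
  books
    .join(authorsBooks).on(_.id == _.book_id)
    .join(authors).on(_._2.author_id == _.id)
    .groupBy(_._1._1.id)
    .map { case (bookId, rows) => (bookId, unquote(rows.map(_._2.name).arrayAgg)) }
}
val booksWithAuthors = quote {
  books
    .join(namesPerBook).on(_.id == _._1)        // join back on the grouped id
    .map { case (book, agg) => (book, agg._2) } // keep the Book row and the aggregated names
}
// db.run(booksWithAuthors) should then yield (Book, Seq[String]) pairs directly.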

Slick.io - Handling Joined Tables

I have a simple query in Slick (2.1) which joins two tables in a one-to-many relationship, defined roughly as follows...
class Users( tag: Tag ) extends Table[ User ]( tag, "users" )
// each field here...
def * = ( id, name ).shaped <> ( User.tupled, User.unapply)
class Items( tag: Tag ) extends Table[ Item ]( tag, "items" )
// each field here...
// foreign key to Users table
def userId = column[ Int ]( "user_id")
def user_fk = foreignKey( "users_fk", userId, Users )( _.id )
def * = ( id, userId.?, description ).shaped <> ( Item.tupled, Item.unapply)
A single User can have multiple Items. The User case class I want to marshal to looks like...
case class User(id: Option[Int] = None, name:String, items:Option[List[Item]] = None)
I then query the database with an implicit join like this...
for{
u <- Users
i <- Items
if i.userId === u.id
} yield(u, i)
This "runs" fine. However, the query obviously duplicates the "Users" record for each "Item" that belongs to the user giving...
List(
(User(Some(1),"User1Name"),Item(Some(1),Some(1),"Item Description 1")),
(User(Some(1),"User1Name"),Item(Some(2),Some(1),"Item Description 2")))
Is there an elegant way of pulling the "many" part into the User case class, whether in Slick or in plain Scala? What I would ideally like to end up with is...
User(Some(1),"User1Name",
List(Item(Some(1),Some(1),"Item Description 1"),
Item(Some(2),Some(1),"Item Description 2")))
Thanks!
One way to do it in Scala:
val results = List((User(Some(1), "User1Name"), Item(Some(1), Some(1), "Item Description 1")),
(User(Some(1), "User1Name"), Item(Some(2), Some(1), "Item Description 2")))
val grouped = results.groupBy(_._1)
.map { case (user, item: List[(User, Item)]) =>
user.copy(items = Option(item.map(_._2))) }
This handles multiple distinct Users (grouped is an Iterable[User]).
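Putting that together with the query from the question, a sketch in Slick 2.1 style (assuming an implicit Session is in scope):
val query = for {
  u <- Users
  i <- Items
  if i.userId === u.id
} yield (u, i)

// run the query, then fold the duplicated user rows into one User per group
val usersWithItems: Iterable[User] =
  query.list.groupBy(_._1).map { case (user, pairs) =>
    user.copy(items = Option(pairs.map(_._2)))
  }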

Scala Slick 3.0.1 Relationship to self

I have an entity called Category which has a relationship to itself. There are two types of categories: parent categories and subcategories. A subcategory carries the id of its parent category in its idParent attribute.
I defined the schema this way:
class CategoriesTable(tag: Tag) extends Table[Category](tag, "CATEGORIES") {
def id = column[String]("id", O.PrimaryKey)
def name = column[String]("name")
def idParent = column[Option[String]]("idParent")
def * = (id, name, idParent) <> (Category.tupled, Category.unapply)
def categoryFK = foreignKey("category_fk", idParent, categories)(_.id.?)
def subcategories = TableQuery[CategoriesTable].filter(_.id === idParent)
}
And I have this data:
id name idParent
------------------------------
parent Parent
child1 Child1 parent
child2 Child2 parent
Now I want to get the result in a map grouped by the parent category like
Map(
(parent,Parent,None) -> Seq((child1,Child1,parent),(child2,Child2,parent))
)
For that I tried with the following query:
def findChildrenWithParents() = {
db.run((for {
c <- categories
s <- c.subcategories
} yield (c,s)).sortBy(_._1.name).result)
}
If at this point I execute the query with:
categoryDao.findChildrenWithParents().map {
case categoryTuples => categoryTuples.map(println _)
}
I get this:
(Category(child1,Child1,Some(parent)),Category(parent,Parent,None))
(Category(child2,Child2,Some(parent)),Category(parent,Parent,None))
There are two things here that already puzzle me:
It is returning Future[Seq[(Category, Category)]] instead of the Future[Seq[(Category, Seq[Category])]] that I would expect.
The order is inverted; I would expect the parent to appear first, like:
(Category(parent,Parent,None),Category(child1,Child1,Some(parent)))
(Category(parent,Parent,None),Category(child2,Child2,Some(parent)))
Now I want to group them. As I am having problems with nested queries in Slick, I perform the groupBy on the result like this:
categoryDao.findChildrenWithParents().map {
case categoryTuples => categoryTuples.groupBy(_._2).map(println _)
}
But the result is really a mess:
(Category(parent,Parent,None),Vector((Category(child1,Child1,Some(parent)),Category(parent,Parent,None)),(Category(child2,Child2,Some(parent)),Category(parent,Parent,None))))
I would have expected:
(Category(parent,Parent,None),Vector(Category(child1,Child1,Some(parent)),Category(child2,Child2,Some(parent))))
Can you please help me with the inverted result and with the group by?
Thanks in advance.
OK, I managed to fix it myself. Here is the answer in case someone wants to learn from it:
def findChildrenWithParents() = {
val result = db.run((for {
c <- categories
s <- c.subcategories
} yield (c,s)).sortBy(_._1.name).result)
result map {
case categoryTuples => categoryTuples.groupBy(_._2).map{
case (parent, pairs) => (parent, pairs.map(_._1))
}
}
}
The solution isn't perfect; I would like to do the grouping in Slick itself, but this retrieves what I wanted.
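For reference, a small sketch that is not part of the original answer: if the helper on the table is defined the other way round, selecting rows whose idParent points at the current row, the tuples come back as (parent, child) straight away and the grouping reads more naturally (children is a hypothetical name):
// inside CategoriesTable, alongside the existing columns:
def children = TableQuery[CategoriesTable].filter(_.idParent === id)

// the pairs then already come back as (parent, child):
def findChildrenWithParents() =
  db.run((for {
    parent <- categories
    child <- parent.children
  } yield (parent, child)).sortBy(_._1.name).result)
    .map(_.groupBy(_._1).map { case (p, pairs) => (p, pairs.map(_._2)) })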

Slick/Scala - how do I access fields of the mapped projection/projected table part of a join in a where query

I have a number of basic queries defined, and am using query composition to add things such as ordering, paging, where clauses and so on...
But I have a problem accessing the fields of the joined second table in the where clause...
Here are my table queries and my tables. All tables are mapped to case classes.
val basicCars = TableQuery[CarTable]
val basicCarValues = TableQuery[CarValueTable]
val carsWithValues = for {
(c, v) <- basicCars leftJoin basicCarValues on (_.id === _.carId)
} yield (c, v.?)
Now I reuse/compose queries by doing stuff such as
carsWithValues.where(_._1.id === someId)
which works perfectly...
But if I want to access any field of the second table and I try
carsWithValues.where(_._2.latestPrice === somePrice)
It tells me that latestPrice is not a member of MappedProjection...
error: value latestPrice is not a member of scala.slick.lifted.MappedProjection[Option[com......datastore.slick.generated.Tables.CarValue],(Option[Long], Option[Long], Option[String],.....
I understand that this kind of thing can't work, because _._2 is a MappedProjection and not just a CarValue sitting in the tuple.
But I can't figure out how to use any field of the table behind the MappedProjection in a where clause.
The .? from the Slick code generator is implemented using a MappedProjection, which no longer has the individual column members. If you postpone the call to .?, it works:
val carsWithValues = for {
(c, v) <- basicCars leftJoin basicCarValues on (_.id === _.carId)
} yield (c, v)
carsWithValues.where(_._2.latestPrice === somePrice).map{ case (c,v) => (c,v.?) }

Recursive tree-like table query with Slick

My table data forms a tree structure where one row can reference a parent row in the same table.
What I am trying to achieve, using Slick, is to write a query that will return a row and all its children. I would also like to write a query that will return a child and all its ancestors.
In other words:
findDown(1) should return
List(Group(1, 0, "1"), Group(3, 1, "3 (Child of 1)"))
findUp(5) should return
List(Group(5, 2, "5 (Child of 2)"), Group(2, 0, "2"))
Here is a fully functional worksheet (except for the missing solutions ;-).
package com.exp.worksheets
import scala.slick.driver.H2Driver.simple._
object ParentChildTreeLookup {
implicit val session = Database.forURL("jdbc:h2:mem:test1;", driver = "org.h2.Driver").createSession()
session.withTransaction {
Groups.ddl.create
}
Groups.insertAll(
Group(1, 0, "1"),
Group(2, 0, "2"),
Group(3, 1, "3 (Child of 1)"),
Group(4, 3, "4 (Child of 3)"),
Group(5, 2, "5 (Child of 2)"),
Group(6, 2, "6 (Child of 2)"))
case class Group(
id: Long = -1,
id_parent: Long = -1,
label: String = "")
object Groups extends Table[Group]("GROUPS") {
def id = column[Long]("ID", O.PrimaryKey, O.AutoInc)
def id_parent = column[Long]("ID_PARENT")
def label = column[String]("LABEL")
def * = id ~ id_parent ~ label <> (Group, Group.unapply _)
def autoInc = id_parent ~ label returning id into {
case ((_, _), id) => id
}
def findDown(groupId: Long)(implicit session: Session) = { ??? }
def findUp(groupId: Long)(implicit session: Session) = { ??? }
}
}
A really bad and static attempt at findDown might be something like:
private def groupsById = for {
group_id <- Parameters[Long]
g <- Groups; if g.id === group_id
} yield g
private def childrenByParentId = for {
parent_id <- Parameters[Long]
g <- Groups; if g.id_parent === parent_id
} yield g
def findDown(groupId: Long)(implicit session: Session) = { groupsById(groupId).list union childrenByParentId(groupId).list }
But I'm looking for a way for Slick to recursively search the same table using the id and id_parent link. Any other good way to solve the problem is very welcome. Keep in mind, though, that it would be best to minimise the number of database round-trips.
You could try calling plain SQL from Slick. The SQL to go up the hierarchy would look something like this (this is for SQL Server):
WITH org_name AS
(
SELECT DISTINCT
parent.id AS parent_id,
parent.label AS parent_label,
child.id AS child_id,
child.label AS child_label
FROM
GROUPS parent RIGHT OUTER JOIN
GROUPS child ON child.id_parent = parent.id
),
jn AS
(
SELECT
parent_id,
parent_label,
child_id,
child_label
FROM
org_name
WHERE
parent_id = 5
UNION ALL
SELECT
C.parent_id,
C.parent_label,
C.child_id,
C.child_label
FROM
jn AS p JOIN
org_name AS C ON C.child_id = p.parent_id
)
SELECT DISTINCT
jn.parent_id,
jn.parent_label,
jn.child_id,
jn.child_label
FROM
jn
ORDER BY
1;
If you want to go down the hierarchy change the line:
org_name AS C ON C.child_id = p.parent_id
to
org_name AS C ON C.parent_id = p.child_id
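To actually issue such a statement from the Slick version used in the worksheet, plain SQL support via StaticQuery can be used. A small sketch, assuming the CTE above is held in a string with the literal 5 replaced by a ? bind parameter (and that the target database supports recursive CTEs):
import scala.slick.jdbc.{StaticQuery => Q}

// returns (parent_id, parent_label, child_id, child_label) rows from the CTE
def findUpRaw(recursiveCte: String, groupId: Long)(implicit session: Session): List[(Long, String, Long, String)] =
  Q.query[Long, (Long, String, Long, String)](recursiveCte)(groupId).list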
In plain SQL this would be tricky. You would have multiple options:
Use a stored procedure to collect the correct records (recursively). Then convert those records into a tree using code
Select all the records and convert those into a tree using code
Use a more advanced technique as described here (from Optimized SQL for tree structures) and here. Then convert those records into a tree using code
Depending on the way you want to do it in SQL, you need to build a matching Slick query. The concept of Leaky Abstractions is very evident here.
So getting the tree structure requires two steps:
Get the correct records (or all of them)
Build a tree from those records using regular code (a sketch follows below)
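For the second step, the in-memory tree building is plain Scala. A sketch, where Node is a hypothetical wrapper type rather than part of the question's model:
case class Node(group: Group, children: List[Node])

def buildTree(all: List[Group], rootId: Long): Option[Node] = {
  // index the flat rows by parent id once, then expand recursively
  val byParent = all.groupBy(_.id_parent)
  def expand(g: Group): Node =
    Node(g, byParent.getOrElse(g.id, Nil).map(expand))
  all.find(_.id == rootId).map(expand)
}
// e.g. buildTree(Query(Groups).list, 1) after fetching all the rows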
Since you are using Slick I don't think it's an option, but another database type might be a better fit for your data model. Check out NoSQL for the differences between the different types.