Slick.io - Handling Joined Tables - scala

I have a simple query in Slick (2.1) which joins two tables in a one-to-many relationship. Defined roughly as follows...
class Users( tag: Tag ) extends Table[ User ]( tag, "users" ) {
  // each field here...
  def * = ( id, name ).shaped <> ( User.tupled, User.unapply )
}

class Items( tag: Tag ) extends Table[ Item ]( tag, "items" ) {
  // each field here...
  // foreign key to Users table
  def userId = column[ Int ]( "user_id" )
  def user_fk = foreignKey( "users_fk", userId, Users )( _.id )
  def * = ( id, userId.?, description ).shaped <> ( Item.tupled, Item.unapply )
}
A single User can have multiple Items. The User case class I want to marshal to looks like...
case class User(id: Option[Int] = None, name:String, items:Option[List[Item]] = None)
I then query the database with an implicit join like this...
for {
  u <- Users
  i <- Items
  if i.userId === u.id
} yield (u, i)
This "runs" fine. However, the query obviously duplicates the "Users" record for each "Item" that belongs to the user, giving...
List(
  (User(Some(1),"User1Name"), Item(Some(1),Some(1),"Item Description 1")),
  (User(Some(1),"User1Name"), Item(Some(2),Some(1),"Item Description 2")))
Is there an elegant way of pulling the "many" part into the User case class, whether in Slick or plain Scala? What I would ideally like to end up with is...
User(Some(1),"User1Name",
  List(Item(Some(1),Some(1),"Item Description 1"),
       Item(Some(2),Some(1),"Item Description 2")))
Thanks!

One way to do it in Scala:
val results = List(
  (User(Some(1), "User1Name"), Item(Some(1), Some(1), "Item Description 1")),
  (User(Some(1), "User1Name"), Item(Some(2), Some(1), "Item Description 2")))

val grouped = results.groupBy(_._1).map { case (user, pairs: List[(User, Item)]) =>
  user.copy(items = Option(pairs.map(_._2)))
}
This handles multiple distinct Users (grouped is an Iterable[User]).
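The same regrouping can be exercised end to end with plain collections, no database needed; the case classes below mirror the ones from the question:

```scala
// Self-contained sketch of the client-side regrouping (no Slick involved).
case class Item(id: Option[Int], userId: Option[Int], description: String)
case class User(id: Option[Int] = None, name: String, items: Option[List[Item]] = None)

val results = List(
  (User(Some(1), "User1Name"), Item(Some(1), Some(1), "Item Description 1")),
  (User(Some(1), "User1Name"), Item(Some(2), Some(1), "Item Description 2")))

// Group the (User, Item) pairs by user, then fold each group's items into the user.
val grouped: List[User] = results
  .groupBy(_._1)
  .map { case (user, pairs) => user.copy(items = Some(pairs.map(_._2))) }
  .toList
```

Note that groupBy does not guarantee the order of the resulting users; sort grouped afterwards if ordering matters.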


How can I convert Query[MappedProjection[Example, (Option[String], Int, UUID, UUID)], Example, Seq] to Query[Examples, Example, Seq]?
Details
I am trying to drop a column from an existing table (Examples in this case) and move the data to another table (Examples2 in this case). I don't want to change the existing code base, so I plan to join these two tables and map the results to Example.
import slick.lifted.Tag
import slick.driver.PostgresDriver.api._
import java.util.UUID
case class Example(
  field1: Option[String] = None,
  field2: Int,
  someForeignId: UUID,
  id: UUID,
)
object Example
class Examples(tag: Tag) extends Table[Example](tag, "entityNotes") {
  def field1 = column[Option[String]]("field1")
  def field2 = column[Int]("field2")
  def someForeignId = column[UUID]("someForeignId")
  def id = column[UUID]("id", O.PrimaryKey)
  def someForeignKey = foreignKey(
    "someForeignIdToExamples2",
    someForeignId,
    Examples2.query,
  )(
    _.id.?
  )
  def * =
    (
      field1.?,
      field2,
      someForeignId,
      id,
    ) <> ((Example.apply _).tupled, Example.unapply)
}

object Examples {
  val query = TableQuery[Examples]
}
Basically, all the functions in the codebase call Examples.query. If I update that query by joining two tables, the problem will be solved (of course with a performance shortcoming because of one extra join for each call).
To use the query with the existing code base, we need to keep the type the same. For example, we can use filter as follows:
val query_ = TableQuery[Examples]
val query: Query[Examples, Example, Seq] = query_.filter(_.field2 > 5)
Everything will work without a problem since we keep the type of the query as it is supposed to be.
However, I cannot do that with a join if I want to use data from the second table.
val query_ = TableQuery[Examples]
val query = query_
  .join(Examples2.query)
  .on(_.someForeignId === _.id)
  .map {
    case (e, e2) =>
      (
        e2.value.?,
        e.field2,
        e2.id,
        e.id,
      ) <> ((Example.apply _).tupled, Example.unapply)
  }
This is where I got stuck. Its type is Query[MappedProjection[Example, (Option[String], Int, UUID, UUID)], Example, Seq].
Can anyone help? Btw, we don't have to use map. This is just what I got so far.
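For what it's worth, one common direction (a sketch, untested against this schema, not an answer from the thread): a joined-and-mapped query can never regain the mixed type Examples, so signatures that only consume Example values can instead be widened to an existential mixed type:

```scala
// Sketch (assumes the Examples/Examples2 definitions above; not compiled here).
// The mixed type is widened to `_`, since only the element type Example matters
// to callers that just run the query.
val joined: Query[_, Example, Seq] =
  Examples.query
    .join(Examples2.query)
    .on(_.someForeignId === _.id)
    .map { case (e, e2) =>
      (e2.value.?, e.field2, e2.id, e.id) <> ((Example.apply _).tupled, Example.unapply)
    }
```

Call sites that access Examples columns on the query would still need changes; only code that treats the query as a source of Example rows keeps working.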

Scala Slick 3.0.1 Relationship to self

I have an entity called Category which has a relationship to itself. There are two types of categories: parent categories and subcategories. Subcategories store the id of their parent category in the idParent attribute.
I defined the schema this way:
class CategoriesTable(tag: Tag) extends Table[Category](tag, "CATEGORIES") {
  def id = column[String]("id", O.PrimaryKey)
  def name = column[String]("name")
  def idParent = column[Option[String]]("idParent")
  def * = (id, name, idParent) <> (Category.tupled, Category.unapply)
  def categoryFK = foreignKey("category_fk", idParent, categories)(_.id.?)
  def subcategories = TableQuery[CategoriesTable].filter(_.id === idParent)
}
And I have this data:
id       name     idParent
------------------------------
parent   Parent
child1   Child1   parent
child2   Child2   parent
Now I want to get the result in a map grouped by the parent category like
Map(
  (parent,Parent,None) -> Seq[(child1,Child1,parent),(child2,Child2,parent)]
)
For that I tried with the following query:
def findChildrenWithParents() = {
  db.run((for {
    c <- categories
    s <- c.subcategories
  } yield (c, s)).sortBy(_._1.name).result)
}
If at this point I execute the query with:
categoryDao.findChildrenWithParents().map {
  case categoryTuples => categoryTuples.map(println _)
}
I get this:
(Category(child1,Child1,Some(parent)),Category(parent,Parent,None))
(Category(child2,Child2,Some(parent)),Category(parent,Parent,None))
Here there are two facts that already disconcerted me:
It is returning Future[Seq[(Category, Category)]] instead of the Future[Seq[(Category, Seq[Category])]] that I would expect.
The order is inverted, I would expect the parent to appear first like:
(Category(parent,Parent,None),Category(child1,Child1,Some(parent)))
(Category(parent,Parent,None),Category(child2,Child2,Some(parent)))
Now I would try to group them. As I am having problems with nested queries in Slick, I perform a groupBy on the result like this:
categoryDao.findChildrenWithParents().map {
  case categoryTuples => categoryTuples.groupBy(_._2).map(println _)
}
But the result is really a mess:
(Category(parent,Parent,None),Vector((Category(child1,Child1,Some(parent)),Category(parent,Parent,None)),(Category(child2,Child2,Some(parent)),Category(parent,Parent,None))))
I would have expected:
(Category(parent,Parent,None),Vector(Category(child1,Child1,Some(parent)),Category(child2,Child2,Some(parent))))
Can you please help me with the inverted result and with the group by?
Thanks in advance.
OK, I managed to fix it myself. Here is the answer if someone wants to learn from it:
def findChildrenWithParents() = {
  val result = db.run((for {
    c <- categories
    s <- c.subcategories
  } yield (c, s)).sortBy(_._1.name).result)
  result map {
    case categoryTuples => categoryTuples.groupBy(_._1).map {
      case (k, v) => (k, v.map(_._2))
    }
  }
}
The solution isn't perfect. I would like to do the group by in Slick itself, but this retrieves what I wanted.
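The client-side regrouping in that fix can be checked in isolation with plain collections; Category here matches the schema above, and the tuples are assumed to already come back in (parent, child) order:

```scala
case class Category(id: String, name: String, idParent: Option[String])

val parent = Category("parent", "Parent", None)
val child1 = Category("child1", "Child1", Some("parent"))
val child2 = Category("child2", "Child2", Some("parent"))

// What the query would return once the tuples are in (parent, child) order:
val categoryTuples = Seq((parent, child1), (parent, child2))

// groupBy on the parent, then strip the parent out of each group's tuples.
val groupedMap: Map[Category, Seq[Category]] =
  categoryTuples.groupBy(_._1).map { case (k, v) => (k, v.map(_._2)) }
```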

Join on two foreign keys from same table in scalikejdbc

So I have one table that has two foreign keys pointing at the same table.
For example: a Message table with columns sender and receiver that both reference id in the user table.
When I write a query to fetch a message and join on both, the result is the same user for both, namely the first one.
Here is how I'm trying to do it.
import scalikejdbc._
Class.forName("org.h2.Driver")
ConnectionPool.singleton("jdbc:h2:mem:hello", "user", "pass")
implicit val session = AutoSession
sql"""
create table members (
id serial not null primary key,
name varchar(64),
created_at timestamp not null
)
""".execute.apply()
sql"""
create table message (
id serial not null primary key,
msg varchar(64) not null,
sender int not null,
receiver int not null
)
""".execute.apply()
Seq("Alice", "Bob", "Chris") foreach { name =>
sql"insert into members (name, created_at) values (${name}, current_timestamp)".update.apply()
}
Seq(
("msg1", 1, 2),
("msg2", 1, 3),
("msg3", 2, 1)
) foreach { case (m, s, r) =>
sql"insert into message (msg, sender, receiver) values (${m}, ${s}, ${r})".update.apply()
}
import org.joda.time._
case class Member(id: Long, name: Option[String], createdAt: DateTime)
object Member extends SQLSyntaxSupport[Member] {
override val tableName = "members"
def apply(mem: ResultName[Member])(rs: WrappedResultSet): Member = new Member(
rs.long("id"), rs.stringOpt("name"), rs.jodaDateTime("created_at"))
}
case class Message(id: Long, msg: String, sender: Member, receiver: Member)
object Message extends SQLSyntaxSupport[Message] {
override val tableName = "message"
def apply(ms: ResultName[Message], s: ResultName[Member], r: ResultName[Member])(rs: WrappedResultSet): Message = new Message(
rs.long("id"), rs.string("msg"), Member(s)(rs), Member(r)(rs))
}
val mem = Member.syntax("m")
val s = Member.syntax("s")
val r = Member.syntax("r")
val ms = Message.syntax("ms")
val msgs: List[Message] = sql"""
select *
from ${Message.as(ms)}
join ${Member.as(s)} on ${ms.sender} = ${s.id}
join ${Member.as(r)} on ${ms.receiver} = ${r.id}
""".map(rs => Message(ms.resultName, s.resultName, r.resultName)(rs)).list.apply()
Am I doing something wrong or is it a bug?
Sorry for the late reply. We have the Google Groups mailing list, and I actively read notifications from the group.
When you're in a hurry, please post Stack Overflow URLs there. https://groups.google.com/forum/#!forum/scalikejdbc-users-group
In this case, you need to write select ${ms.result.*}, ${s.result.*} instead of select *. Please read this page for details. http://scalikejdbc.org/documentation/sql-interpolation.html
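Applying that advice to the query above (a sketch, not re-run here): each alias's columns are selected explicitly so the two member aliases don't collide, and the apply methods read the columns through the ResultName aliases rather than raw column names (m.createdAt assumes scalikejdbc's default camelCase-to-snake_case mapping):

```scala
// Select each alias's columns explicitly instead of `select *`:
val msgs: List[Message] = sql"""
  select ${ms.result.*}, ${s.result.*}, ${r.result.*}
  from ${Message.as(ms)}
    join ${Member.as(s)} on ${ms.sender} = ${s.id}
    join ${Member.as(r)} on ${ms.receiver} = ${r.id}
""".map(rs => Message(ms.resultName, s.resultName, r.resultName)(rs)).list.apply()

// And Member.apply should read via the ResultName, not raw column names:
def apply(m: ResultName[Member])(rs: WrappedResultSet): Member =
  new Member(rs.long(m.id), rs.stringOpt(m.name), rs.jodaDateTime(m.createdAt))
```

The same applies to Message.apply: read rs.long(ms.id) and rs.string(ms.msg) through its ResultName parameter.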

How to return a sequence generation for an Id

In Scala Slick, if you are not using an auto-incremented id but a sequence generation strategy for the id, how do you return that id?
Let's say you have the following case class and Slick table:
case class User(id: Option[Int], first: String, last: String)
object Users extends Table[User]("users") {
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def first = column[String]("first")
  def last = column[String]("last")
  def * = id.? ~ first ~ last <> (User, User.unapply _)
}
The important thing to consider here is that User.id is an Option, because when we create a user we will set it to None and the DB will generate the number for it.
Now you need to define a new insert mapping which omits the autoincremented column. This is needed because some databases don't allow you to insert into a column which is labeled as Auto Incremental. So instead of:
INSERT INTO users VALUES (NULL, "first", "last")
Slick will generate:
INSERT INTO users(first, last) VALUES ("first", "last")
The mapping looks like this (which must be placed inside Users):
def forInsert = first ~ last <> ({ t => User(None, t._1, t._2)}, { (u: User) => Some((u.first, u.last))})
Finally, getting the auto-generated id is simple. We only need to specify the id column in returning:
val userId = Users.forInsert returning Users.id insert User(None, "First", "Last")
Or you could instead move the returning statement:
def forInsert = first ~ last <> ({ t => User(None, t._1, t._2)}, { (u: User) => Some((u.first, u.last))}) returning id
And simplify your insert calls:
val userId = Users.forInsert insert User(None, "First", "Last")

Scala Slick: Issues with groupBy and missing shapes

I'm trying to use Slick to query a many-to-many relationship, but I'm running into a variety of errors, the most prominent being "Don't know how to unpack (User, Skill) to T and pack to G".
The structure of the tables is similar to the following:
case class User(val name: String, val picture: Option[URL], val id: Option[UUID])

object Users extends Table[User]("users") {
  def name = column[String]("name")
  def picture = column[Option[URL]]("picture")
  def id = column[UUID]("id")
  def * = name ~ picture ~ id.? <> (User, User.unapply _)
}

case class Skill(val name: String, val id: Option[UUID])

object Skills extends Table[Skill]("skill") {
  def name = column[String]("name")
  def id = column[UUID]("id")
  def * = name ~ id.? <> (Skill, Skill.unapply _)
}

case class UserSkill(val userId: UUID, val skillId: UUID, val id: Option[UUID])

object UserSkills extends Table[UserSkill]("user_skill") {
  def userId = column[UUID]("userId")
  def skillId = column[UUID]("skillId")
  def id = column[UUID]("id")
  def * = userId ~ skillId ~ id.? <> (UserSkill, UserSkill.unapply _)
  def user = foreignKey("userFK", userId, Users)(_.id)
  def skill = foreignKey("skillFK", skillId, Skills)(_.id)
}
Ultimately, what I want to achieve is something of the form
SELECT u.*, group_concat(s.name) FROM user_skill us, users u, skills s WHERE us.skillId = s.id && us.userId = u.id GROUP BY u.id
but before I spend the time trying to get group_concat to work as well, I have been trying to produce the simpler query (which I believe is still valid...)
SELECT u.* FROM user_skill us, users u, skills s WHERE us.skillId = s.id && us.userId = u.id GROUP BY u.id
I've tried a variety of scala code to produce this query, but an example of what causes the shape error above is
(for {
  us <- UserSkills
  user <- us.user
  skill <- us.skill
} yield (user, skill)).groupBy(_._1.id).map { case (_, xs) => xs.first }
Similarly, the following produces a packing error regarding "User" instead of "(User, Skill)"
(for {
  us <- UserSkills
  user <- us.user
  skill <- us.skill
} yield (user, skill)).groupBy(_._1.id).map { case (_, xs) => xs.map(_._1).first }
If anyone has any suggestions, I would be very grateful: I've spent most of today and yesterday scouring Google and Google Groups as well as the Slick source, but I don't have a solution yet.
(Also, I'm using Postgres, so group_concat would actually be string_agg.)
EDIT
So it seems like when groupBy is used, the mapped projection gets applied because something like
(for {
  us <- UserSkills
  u <- us.user
  s <- us.skill
} yield (u, s)).map(_._1)
works fine, because _._1 gives the type Users, which has a Shape since Users is a table. However, when we call xs.first (as we do when we use groupBy), we actually get back a mapped projection type (User, Skill); or, if we apply map(_._1) first, we get the type User, which is not Users! As far as I can tell, there is no Shape with User as the mixed type, because the only shapes defined are Shape[Column[T], T, Column[T]] and, for a table T <: TableNode, Shape[T, NothingContainer#TableNothing, T], as defined in slick.lifted.Shape. Furthermore, if I do something like
(for {
  us <- UserSkills
  u <- us.user
  s <- us.skill
} yield (u, s))
  .groupBy(_._1.id)
  .map { case (_, xs) => xs.map(_._1.id).first }
I get a strange error of the form "NoSuchElementException: key not found: #1515100893", where the numeric key value changes each time. This is not the query I want, but it is a strange issue nonetheless.
I've run up against similar situations as well. While I love working with Scala and Slick, I do believe there are times when it is easier to denormalize an object in the database itself and link the Slick Table to a view.
For example, I have an application that has a Tree object that is normalized into several database tables. Since I'm comfortable with SQL, I think it is a cleaner solution than writing a plain Scala Slick query. The Scala code:
case class DbGFolder(
  id: String,
  eTag: String,
  url: String,
  iconUrl: String,
  title: String,
  owner: String,
  parents: Option[String],
  children: Option[String],
  scions: Option[String],
  created: LocalDateTime,
  modified: LocalDateTime)

object DbGFolders extends Table[DbGFolder]("gfolder_view") {
  def id = column[String]("id")
  def eTag = column[String]("e_tag")
  def url = column[String]("url")
  def iconUrl = column[String]("icon_url")
  def title = column[String]("title")
  def owner = column[String]("file_owner")
  def parents = column[String]("parent_str")
  def children = column[String]("child_str")
  def scions = column[String]("scion_str")
  def created = column[LocalDateTime]("created")
  def modified = column[LocalDateTime]("modified")
  def * = id ~ eTag ~ url ~ iconUrl ~ title ~ owner ~ parents.? ~
    children.? ~ scions.? ~ created ~ modified <> (DbGFolder, DbGFolder.unapply _)

  def findAll(implicit s: Session): List[GFolder] = {
    Query(DbGFolders).list().map { v =>
      GFolder(
        id = v.id,
        eTag = v.eTag,
        url = v.url,
        iconUrl = v.iconUrl,
        title = v.title,
        owner = v.owner,
        parents = v.parents.map { parentStr =>
          parentStr.split(",").toSet }.getOrElse(Set()),
        children = v.children.map { childStr =>
          childStr.split(",").toSet }.getOrElse(Set()),
        scions = v.scions.map { scionStr =>
          scionStr.split(",").toSet }.getOrElse(Set()),
        created = v.created,
        modified = v.modified)
    }
  }
}
And the underlying (postgres) view:
CREATE VIEW scion_view AS
WITH RECURSIVE scions(id, scion) AS (
SELECT c.id, c.child
FROM children AS c
UNION ALL
SELECT s.id, c.child
FROM children AS c, scions AS s
WHERE c.id = s.scion)
SELECT * FROM scions ORDER BY id, scion;
CREATE VIEW gfolder_view AS
SELECT
f.id, f.e_tag, f.url, f.icon_url, f.title, m.name, f.file_owner,
p.parent_str, c.child_str, s.scion_str, f.created, f.modified
FROM
gfiles AS f
JOIN mimes AS m ON (f.mime_type = m.name)
LEFT JOIN (SELECT DISTINCT id, string_agg(parent, ',' ORDER BY parent) AS parent_str
FROM parents GROUP BY id) AS p ON (f.id = p.id)
LEFT JOIN (SELECT DISTINCT id, string_agg(child, ',' ORDER BY child) AS child_str
FROM children GROUP BY id) AS c ON (f.id = c.id)
LEFT JOIN (SELECT DISTINCT id, string_agg(scion, ',' ORDER BY scion) AS scion_str
FROM scion_view GROUP BY id) AS s ON (f.id = s.id)
WHERE
m.category = 'folder';
Try this; hopefully it yields what you expected. Find the Slick code below the case classes.
See the Slick documentation on lifted embedding for reference.
case class User(val name: String, val picture: Option[URL], val id: Option[UUID])

class Users(_tableTag: Tag) extends Table[User](_tableTag, "users") {
  def name = column[String]("name")
  def picture = column[Option[URL]]("picture")
  def id = column[UUID]("id")
  def * = name ~ picture ~ id.? <> (User, User.unapply _)
}
lazy val userTable = new TableQuery(tag => new Users(tag))

case class Skill(val name: String, val id: Option[UUID])

class Skills(_tableTag: Tag) extends Table[Skill](_tableTag, "skill") {
  def name = column[String]("name")
  def id = column[UUID]("id")
  def * = name ~ id.? <> (Skill, Skill.unapply _)
}
lazy val skillTable = new TableQuery(tag => new Skills(tag))

case class UserSkill(val userId: UUID, val skillId: UUID, val id: Option[UUID])

class UserSkills(_tableTag: Tag) extends Table[UserSkill](_tableTag, "user_skill") {
  def userId = column[UUID]("userId")
  def skillId = column[UUID]("skillId")
  def id = column[UUID]("id")
  def * = userId ~ skillId ~ id.? <> (UserSkill, UserSkill.unapply _)
  def user = foreignKey("userFK", userId, Users)(_.id)
  def skill = foreignKey("skillFK", skillId, Skills)(_.id)
}
lazy val userSkillTable = new TableQuery(tag => new UserSkills(tag))
(for {
  ((userSkill, user), skill) <- userSkillTable join userTable on
    (_.userId === _.id) join skillTable on (_._1.skillId === _.id)
} yield (userSkill, user, skill)).groupBy(_._2.id)
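Since lifted-embedding groupBy expects the grouped values to be aggregated (count, max, and so on), the group_concat/string_agg step the question asks about is often simplest client-side. A plain-Scala sketch with simplified, hypothetical case classes (no database involved):

```scala
case class User(name: String, id: Int)
case class Skill(name: String, id: Int)

// (user, skill) rows, as the join would return them.
val rows = List(
  (User("alice", 1), Skill("scala", 10)),
  (User("alice", 1), Skill("sql", 11)),
  (User("bob", 2), Skill("scala", 10)))

// In-memory equivalent of GROUP BY u.id plus string_agg(s.name, ',').
val agg: Map[User, String] =
  rows.groupBy(_._1).map { case (u, pairs) => u -> pairs.map(_._2.name).mkString(",") }
```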