I have an entity called Category which has a relationship to itself. There are two types of categories: parent categories and subcategories. A subcategory stores the id of its parent category in its idParent attribute.
I defined the schema this way:
class CategoriesTable(tag: Tag) extends Table[Category](tag, "CATEGORIES") {
  def id = column[String]("id", O.PrimaryKey)
  def name = column[String]("name")
  def idParent = column[Option[String]]("idParent")
  def * = (id, name, idParent) <> (Category.tupled, Category.unapply)
  def categoryFK = foreignKey("category_fk", idParent, categories)(_.id.?)
  def subcategories = TableQuery[CategoriesTable].filter(_.id === idParent)
}
And I have this data:
id      name    idParent
--------------------------
parent  Parent
child1  Child1  parent
child2  Child2  parent
Now I want to get the result in a map grouped by the parent category, like:
Map(
  (parent,Parent,None) -> Seq((child1,Child1,parent), (child2,Child2,parent))
)
For that I tried the following query:
def findChildrenWithParents() = {
  db.run((for {
    c <- categories
    s <- c.subcategories
  } yield (c, s)).sortBy(_._1.name).result)
}
If at this point I execute the query with:
categoryDao.findChildrenWithParents().map {
  case categoryTuples => categoryTuples.map(println _)
}
I get this:
(Category(child1,Child1,Some(parent)),Category(parent,Parent,None))
(Category(child2,Child2,Some(parent)),Category(parent,Parent,None))
Two things here already disconcerted me:
It is returning Future[Seq[(Category, Category)]] instead of the Future[Seq[(Category, Seq[Category])]] that I would expect.
The order is inverted; I would expect the parent to appear first, like:
(Category(parent,Parent,None),Category(child1,Child1,Some(parent)))
(Category(parent,Parent,None),Category(child2,Child2,Some(parent)))
Now I would try to group them. As I am having problems with nested queries in Slick, I perform a groupBy on the result like this:
categoryDao.findChildrenWithParents().map {
  case categoryTuples => categoryTuples.groupBy(_._2).map(println _)
}
But the result is really a mess:
(Category(parent,Parent,None),Vector((Category(child1,Child1,Some(parent)),Category(parent,Parent,None)),(Category(child2,Child2,Some(parent)),Category(parent,Parent,None))))
I would have expected:
(Category(parent,Parent,None),Vector(Category(child1,Child1,Some(parent)),Category(child2,Child2,Some(parent))))
Can you please help me with the inverted result and with the group by?
Thanks in advance.
Ok, I managed to fix it by myself. Here is the answer if someone wants to learn from it:
def findChildrenWithParents() = {
  val result = db.run((for {
    c <- categories
    s <- c.subcategories
  } yield (c, s)).sortBy(_._1.name).result)
  result map {
    case categoryTuples => categoryTuples.groupBy(_._1).map {
      case (k, v) => (k, v.map(_._2))
    }
  }
}
The solution isn't perfect. I would like to do the groupBy in Slick itself, but this retrieves what I wanted.
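As for the inverted order: the subcategories definition above filters on _.id === idParent, so for a given row it actually resolves that row's parent, which is why the comprehension yields (child, parent) pairs. A minimal sketch of how the definition and the query could be turned around (same schema as above, untested):

// Sketch: filter on idParent so the query really returns the children of a row.
def subcategories = TableQuery[CategoriesTable].filter(_.idParent === id)

def findChildrenWithParents() = {
  val result = db.run((for {
    c <- categories       // c is now the parent
    s <- c.subcategories  // s ranges over its children
  } yield (c, s)).sortBy(_._1.name).result)
  result.map(_.groupBy(_._1).map { case (parent, rows) => (parent, rows.map(_._2)) })
}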
Related
I'm trying to model the following with Slick 3.1.0:
case class Review(txt: String, userId: Long, id: Long)
case class User(name: String, id: Long)
case class ReviewEvent(event: String, reviewId: Long)
I need to populate a class called FullReview, which looks like:
case class FullReview(r: Review, user: User, evts: Seq[ReviewEvent])
Assuming I have the right tables for each of the models, I'm trying to fetch a FullReview using a combination of join and group by, like so:
val withUser = for {
  (r, u) <- RTable join UTable on (_.userId === _.id)
} yield (r, u)

val withUAndEvts = (for {
  ((r, user), evts) <- withUser joinLeft ETable on {
    case ((r, _), ev) => r.id === ev.reviewId
  }
} yield (r, user, evts)).groupBy(_._1.id)
This seems to yield a nested Query type, from what I can see. What am I doing wrong here?
If I understand you correctly, you can use the following example:
val users = TableQuery[Users]
val reviews = TableQuery[Reviews]
val events = TableQuery[ReviewEvents]
override def findAllReviews(): Future[Seq[FullReview]] = {
  val query = reviews
    .join(users).on(_.userId === _.id)
    .joinLeft(events).on(_._1.id === _.reviewId)
  db.run(query.result).map { a =>
    a.groupBy(_._1._1.id).map { case (_, tuples) =>
      val ((review, user), _) = tuples.head
      val reviewEvents = tuples.flatMap(_._2)
      FullReview(review, user, reviewEvents)
    }.toSeq
  }
}
If you want to add pagination to this request, I've already answered that here, and here is a full example.
From some tinkering around, I figured it would just be better to do the aggregation on the client. Indirectly, that means that if 100 rows in ETable match a single row in RTable, you get multiple rows back on the client. The client then has to implement its own aggregation to group all the ReviewEvents by Review.
As far as pagination is concerned, you may do something like;
def withUser(page: Int, pageSize: Int) = for {
  (r, u) <- RTable.drop(page * pageSize).take(pageSize) join UTable on (_.userId === _.id)
} yield (r, u)
I guess this is elegant enough for now. If someone has a better answer, I'd be happy to hear it.
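For completeness, here is a sketch of how the pagination could be combined with the grouping from the answer above, reusing the reviews/users/events queries defined there. The sortBy is my own assumption (added so that paging is deterministic) and the method name is made up; treat it as untested:

def findReviews(page: Int, pageSize: Int): Future[Seq[FullReview]] = {
  // page over reviews first, then join, so the limit applies to reviews rather than to joined rows
  val pagedReviews = reviews.sortBy(_.id).drop(page * pageSize).take(pageSize)
  val query = pagedReviews
    .join(users).on(_.userId === _.id)
    .joinLeft(events).on(_._1.id === _.reviewId)
  db.run(query.result).map { rows =>
    rows.groupBy(_._1._1.id).map { case (_, tuples) =>
      val ((review, user), _) = tuples.head
      FullReview(review, user, tuples.flatMap(_._2))
    }.toSeq
  }
}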
I'm working with Play! Scala 2.4 and Slick 3.
I have a many-to-many relation as follows:
class Artists(tag: Tag) extends Table[Artist](tag, "artists") {
  def id = column[Long]("artistid", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def * = (id.?, name) <> ((Artist.apply _).tupled, Artist.unapply)
}
The relation table:
class ArtistsGenres(tag: Tag) extends Table[ArtistGenreRelation](tag, "artistsgenres") {
  def artistId = column[Long]("artistid")
  def genreId = column[Int]("genreid")
  def * = (artistId, genreId) <> ((ArtistGenreRelation.apply _).tupled, ArtistGenreRelation.unapply)
  def aFK = foreignKey("artistid", artistId, artists)(_.id, onDelete = ForeignKeyAction.Cascade)
  def bFK = foreignKey("genreid", genreId, genres)(_.id, onDelete = ForeignKeyAction.Cascade)
}
and the third table:
class Genres(tag: Tag) extends Table[Genre](tag, "genres") {
  def id = column[Int]("genreid", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def * = (id.?, name) <> ((Genre.apply _).tupled, Genre.unapply)
}
Until now I just wanted to get all the artists (along with their genres) by genre name, as follows:
def findAllByGenre(genreName: String, offset: Int, numberToReturn: Int): Future[Seq[ArtistWithGenre]] = {
  val query = for {
    genre <- genres if genre.name === genreName
    artistGenre <- artistsGenres if artistGenre.genreId === genre.id
    artist <- artists joinLeft
      (artistsGenres join genres on (_.genreId === _.id)) on (_.id === _._1.artistId)
      if artist._1.id === artistGenre.artistId
  } yield artist
  db.run(query.result) map { seqArtistAndOptionalGenre =>
    ArtistsAndOptionalGenresToArtistsWithGenres(seqArtistAndOptionalGenre)
  }
}
The method ArtistsAndOptionalGenresToArtistsWithGenres groups the response by artists. This worked like a charm. Now I want to limit the number of artists I get from the database.
But I can't manage to use the Slick functions take and drop correctly: since my query returns a list of artists and relations, if I add a take before the .result I don't receive the number of artists I want (it depends on how many relations each artist has).
I could drop and take after I have grouped my result by artist, but I see a problem there: the RDBMS won't optimize the request, i.e. I would fetch all the artists (which can be a lot), do the groupBy, and only then take a few, instead of limiting the number of artists returned before the groupBy.
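(ArtistsAndOptionalGenresToArtistsWithGenres itself isn't shown in the question. A minimal sketch of what such a client-side grouping helper might look like, assuming an ArtistWithGenre(artist, genres) shape, untested:)

// Hypothetical sketch of the grouping helper referenced above.
// ArtistWithGenre(artist: Artist, genres: Seq[Genre]) is an assumed shape.
def ArtistsAndOptionalGenresToArtistsWithGenres(
    rows: Seq[(Artist, Option[(ArtistGenreRelation, Genre)])]): Seq[ArtistWithGenre] =
  rows.groupBy(_._1).map { case (artist, tuples) =>
    ArtistWithGenre(artist, tuples.flatMap(_._2).map(_._2))
  }.toSeq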
I found the following solution (with 2 queries but 1 DB call):
def findAllByGenre(genreName: String, offset: Int, numberToReturn: Int): Future[Seq[ArtistWithWeightedGenres]] = {
  val query = for {
    genre <- genres.filter(_.name === genreName)
    artistGenre <- artistsGenres.filter(_.genreId === genre.id)
    artist <- artists.filter(_.id === artistGenre.artistId)
  } yield artist
  val artistsIdFromDB = query.drop(offset).take(numberToReturn) map (_.id)
  val query2 = for {
    artistWithGenres <- artists.filter(_.id in artistsIdFromDB) joinLeft
      (artistsGenres join genres on (_.genreId === _.id)) on (_.id === _._1.artistId)
  } yield artistWithGenres
  db.run(query2.result) map { seqArtistAndOptionalGenre =>
    ArtistsAndOptionalGenresToArtistsWithGenres(seqArtistAndOptionalGenre)
  } map (_.toVector)
}
If anyone has a better solution...
I'm trying to use Slick to query a many-to-many relationship, but I'm running into a variety of errors, the most prominent being "Don't know how to unpack (User, Skill) to T and pack to G".
The structure of the tables is similar to the following:
case class User(val name: String, val picture: Option[URL], val id: Option[UUID])

object Users extends Table[User]("users") {
  def name = column[String]("name")
  def picture = column[Option[URL]]("picture")
  def id = column[UUID]("id")
  def * = name ~ picture ~ id.? <> (User, User.unapply _)
}

case class Skill(val name: String, val id: Option[UUID])

object Skills extends Table[Skill]("skill") {
  def name = column[String]("name")
  def id = column[UUID]("id")
  def * = name ~ id.? <> (Skill, Skill.unapply _)
}

case class UserSkill(val userId: UUID, val skillId: UUID, val id: Option[UUID])

object UserSkills extends Table[UserSkill]("user_skill") {
  def userId = column[UUID]("userId")
  def skillId = column[UUID]("skillId")
  def id = column[UUID]("id")
  def * = userId ~ skillId ~ id.? <> (UserSkill, UserSkill.unapply _)
  def user = foreignKey("userFK", userId, Users)(_.id)
  def skill = foreignKey("skillFK", skillId, Skills)(_.id)
}
Ultimately, what I want to achieve is something of the form
SELECT u.*, group_concat(s.name) FROM user_skill us, users u, skills s WHERE us.skillId = s.id && us.userId = u.id GROUP BY u.id
but before I spend the time trying to get group_concat to work as well, I have been trying to produce the simpler query (which I believe is still valid...)
SELECT u.* FROM user_skill us, users u, skills s WHERE us.skillId = s.id && us.userId = u.id GROUP BY u.id
I've tried a variety of Scala code to produce this query, but an example of what causes the shape error above is:
(for {
  us <- UserSkills
  user <- us.user
  skill <- us.skill
} yield (user, skill)).groupBy(_._1.id).map { case (_, xs) => xs.first }
Similarly, the following produces a packing error regarding "User" instead of "(User, Skill)"
(for {
  us <- UserSkills
  user <- us.user
  skill <- us.skill
} yield (user, skill)).groupBy(_._1.id).map { case (_, xs) => xs.map(_._1).first }
If anyone has any suggestions, I would be very grateful: I've spent most of today and yesterday scouring Google/Google Groups as well as the Slick source, but I haven't found a solution yet.
(Also, I'm using Postgres, so group_concat would actually be string_agg.)
EDIT
So it seems like when groupBy is used, the mapped projection gets applied because something like
(for {
  us <- UserSkills
  u <- us.user
  s <- us.skill
} yield (u, s)).map(_._1)
works fine because _._1 gives the type Users, which has a Shape since Users is a table. However, when we call xs.first (as we do when we call groupBy), we actually get back a mapped projection type (User, Skill), or if we apply map(_._1) first, we get the type User, which is not Users! As far as I can tell, there is no shape with User as the mixed type because the only shapes defined are for Shape[Column[T], T, Column[T]] and for a table T <: TableNode, Shape[T, NothingContainer#TableNothing, T] as defined in slick.lifted.Shape. Furthermore, if I do something like
(for {
  us <- UserSkills
  u <- us.user
  s <- us.skill
} yield (u, s))
  .groupBy(_._1.id)
  .map { case (_, xs) => xs.map(_._1.id).first }
I get a strange error of the form "NoSuchElementException: key not found: #1515100893", where the numeric key value changes each time. This is not the query I want, but it is a strange issue nonetheless.
I've run up against similar situations as well. While I love working with Scala and Slick, I do believe there are times when it is easier to denormalize an object in the database itself and link the Slick Table to a view.
For example, I have an application that has a Tree object that is normalized into several database tables. Since I'm comfortable with SQL, I think it is a cleaner solution than writing a plain Scala Slick query. The Scala code:
case class DbGFolder(id: String,
                     eTag: String,
                     url: String,
                     iconUrl: String,
                     title: String,
                     owner: String,
                     parents: Option[String],
                     children: Option[String],
                     scions: Option[String],
                     created: LocalDateTime,
                     modified: LocalDateTime)
object DbGFolders extends Table[DbGFolder]("gfolder_view") {
  def id = column[String]("id")
  def eTag = column[String]("e_tag")
  def url = column[String]("url")
  def iconUrl = column[String]("icon_url")
  def title = column[String]("title")
  def owner = column[String]("file_owner")
  def parents = column[String]("parent_str")
  def children = column[String]("child_str")
  def scions = column[String]("scion_str")
  def created = column[LocalDateTime]("created")
  def modified = column[LocalDateTime]("modified")
  def * = id ~ eTag ~ url ~ iconUrl ~ title ~ owner ~ parents.? ~
    children.? ~ scions.? ~ created ~ modified <> (DbGFolder, DbGFolder.unapply _)

  def findAll(implicit s: Session): List[GFolder] = {
    Query(DbGFolders).list().map { v =>
      GFolder(id = v.id,
              eTag = v.eTag,
              url = v.url,
              iconUrl = v.iconUrl,
              title = v.title,
              owner = v.owner,
              parents = v.parents.map { parentStr =>
                parentStr.split(",").toSet }.getOrElse(Set()),
              children = v.children.map { childStr =>
                childStr.split(",").toSet }.getOrElse(Set()),
              scions = v.scions.map { scionStr =>
                scionStr.split(",").toSet }.getOrElse(Set()),
              created = v.created,
              modified = v.modified)
    }
  }
}
And the underlying (postgres) view:
CREATE VIEW scion_view AS
WITH RECURSIVE scions(id, scion) AS (
SELECT c.id, c.child
FROM children AS c
UNION ALL
SELECT s.id, c.child
FROM children AS c, scions AS s
WHERE c.id = s.scion)
SELECT * FROM scions ORDER BY id, scion;
CREATE VIEW gfolder_view AS
SELECT
f.id, f.e_tag, f.url, f.icon_url, f.title, m.name, f.file_owner,
p.parent_str, c.child_str, s.scion_str, f.created, f.modified
FROM
gfiles AS f
JOIN mimes AS m ON (f.mime_type = m.name)
LEFT JOIN (SELECT DISTINCT id, string_agg(parent, ',' ORDER BY parent) AS parent_str
FROM parents GROUP BY id) AS p ON (f.id = p.id)
LEFT JOIN (SELECT DISTINCT id, string_agg(child, ',' ORDER BY child) AS child_str
FROM children GROUP BY id) AS c ON (f.id = c.id)
LEFT JOIN (SELECT DISTINCT id, string_agg(scion, ',' ORDER BY scion) AS scion_str
FROM scion_view GROUP BY id) AS s ON (f.id = s.id)
WHERE
m.category = 'folder';
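A brief usage sketch (my addition, not part of the original answer): it assumes a Slick 1.x Database value named db and uses the GFolder fields from the mapping above.

// Sketch only: read the denormalized view through the mapped table.
db.withSession { implicit session: Session =>
  val folders: List[GFolder] = DbGFolders.findAll
  folders.foreach { f =>
    println(s"${f.title} children=${f.children.mkString(",")}")
  }
}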
Try this. Hopefully it yields what you expected. Find the Slick code below the case classes.
See the Slick reference on lifted embedding for background.
case class User(val name: String, val picture: Option[URL], val id: Option[UUID])

class Users(_tableTag: Tag) extends Table[User](_tableTag, "users") {
  def name = column[String]("name")
  def picture = column[Option[URL]]("picture")
  def id = column[UUID]("id")
  def * = name ~ picture ~ id.? <> (User, User.unapply _)
}

lazy val userTable = new TableQuery(tag => new Users(tag))

case class Skill(val name: String, val id: Option[UUID])

class Skills(_tableTag: Tag) extends Table[Skill](_tableTag, "skill") {
  def name = column[String]("name")
  def id = column[UUID]("id")
  def * = name ~ id.? <> (Skill, Skill.unapply _)
}

lazy val skillTable = new TableQuery(tag => new Skills(tag))

case class UserSkill(val userId: UUID, val skillId: UUID, val id: Option[UUID])

class UserSkills(_tableTag: Tag) extends Table[UserSkill](_tableTag, "user_skill") {
  def userId = column[UUID]("userId")
  def skillId = column[UUID]("skillId")
  def id = column[UUID]("id")
  def * = userId ~ skillId ~ id.? <> (UserSkill, UserSkill.unapply _)
  def user = foreignKey("userFK", userId, userTable)(_.id)
  def skill = foreignKey("skillFK", skillId, skillTable)(_.id)
}

lazy val userSkillTable = new TableQuery(tag => new UserSkills(tag))

(for {
  ((userSkill, user), skill) <- userSkillTable join userTable on
    (_.userId === _.id) join skillTable on (_._1.skillId === _.id)
} yield (userSkill, user, skill)).groupBy(_._2.id)
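Note that a Slick-level groupBy still has to be followed by an aggregation in a map, which is exactly where the original shape errors come from. If the goal is simply "each user with their skills", a sketch of the client-side grouping used elsewhere in this thread may be simpler (Slick 2.x style; assumes the driver's simple._ import and an implicit Session, untested):

// Sketch only: run the plain join and group in Scala instead of in SQL.
def skillsByUser(implicit session: Session): Map[User, Seq[Skill]] =
  (for {
    us <- userSkillTable
    u <- userTable if us.userId === u.id
    s <- skillTable if us.skillId === s.id
  } yield (u, s)).list
    .groupBy(_._1)
    .map { case (user, pairs) => (user, pairs.map(_._2)) }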
With a class and table definition looking like this:
case class Group(
  id: Long = -1,
  id_parent: Long = -1,
  label: String = "",
  description: String = "")

object Groups extends Table[Group]("GROUPS") {
  def id = column[Long]("ID", O.PrimaryKey, O.AutoInc)
  def id_parent = column[Long]("ID_PARENT")
  def label = column[String]("LABEL")
  def description = column[String]("DESC")
  def * = id ~ id_parent ~ label ~ description <> (Group, Group.unapply _)
  def autoInc = id_parent ~ label ~ description returning id into {
    case ((_, _, _), id) => id
  }
}
To update a record, I can do this:
def updateGroup(id: Long) = Groups.where(_.id === id)
def updateGroup(g: Group)(implicit session: Session) = updateGroup(g.id).update(g)
But I can't get updates to work using for expressions:
val findGById = for {
  id <- Parameters[Long]
  g <- Groups; if g.id === id
} yield g

def updateGroupX(g: Group)(implicit session: Session) = findGById(g.id).update(g)
----------------------------------------------------------------------------^
Error: value update is not a member of scala.slick.jdbc.MutatingUnitInvoker[com.exp.Group]
I'm obviously missing something in the documentation.
The update method is supplied by the type UpdateInvoker. An instance of that type can be implicitly created from a Query by the methods productQueryToUpdateInvoker and/or tableQueryToUpdateInvoker (found in the BasicProfile), if they are in scope.
Now, the type of your findGById method is not a Query but a BasicQueryTemplate[Long, Group]. Looking at the docs, I can find no way to get from a BasicQueryTemplate (which is a subtype of StatementInvoker) to an UpdateInvoker, neither implicit nor explicit. Thinking about it, that makes some sense to me, since I understand a query template (invoker) to be something that has already been "compiled" from an abstract syntax tree (Query) to a prepared statement rather early, before parameterization, whereas an update invoker can only be built from an abstract syntax tree, i.e. a Query object, because it needs to analyze the query and extract its parameters/columns. At least that's the way it appears to work at present.
With that in mind, a possible solution unfolds:
def findGById(id: Long) = for {
  g <- Groups; if g.id === id
} yield g

def updateGroupX(g: Group)(implicit session: Session) = findGById(g.id).update(g)
Here findGById(id: Long) has the type Query[Groups, Group], which is converted by productQueryToUpdateInvoker to an UpdateInvoker[Group], on which the update method can finally be called.
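As a small follow-up sketch (my addition, not from the original answer): once you have a real Query, you can also update a subset of columns by mapping the query first, for example:

// Sketch, Slick 1.x lifted embedding: update only the label column of one group.
def renameGroup(id: Long, newLabel: String)(implicit session: Session) =
  findGById(id).map(_.label).update(newLabel)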
Hope this helped.
Refer to http://madnessoftechnology.blogspot.ru/2013/01/database-record-updates-with-slick-in.html
I got stuck with updating today, and this blog post helped me a lot. Also refer to the first comment under the post.
I'm trying to produce this SQL with Slick 1.0.0:
select
    cat.categoryId,
    cat.title,
    (
        select count(product.productId)
        from products product
        right join products_categories productCategory on productCategory.productId = product.productId
        right join categories c on c.categoryId = productCategory.categoryId
        where
            c.leftValue >= cat.leftValue and
            c.rightValue <= cat.rightValue
    ) as productCount
from categories cat
where cat.parentCategoryId = 2;
My most successful attempt is (I dropped the "joins" part, so it's more readable):
def subQuery(c: CategoriesTable.type) = (for {
  p <- ProductsTable
} yield (p.id.count))

for {
  c <- CategoriesTable
  if (c.parentId === 2)
} yield (c.id, c.title, (subQuery(c).asColumn))
which produces SQL that lacks the parentheses around the subquery:
select
    x2.categoryId,
    x2.title,
    select count(x3.productId) from products x3
from
    categories x2
where x2.parentCategoryId = 2
which is obviously invalid SQL.
Any thoughts on how to have Slick put these parentheses in the right place? Or maybe there is a different way to achieve this?
I had never used Slick or ScalaQuery before, so it was quite an adventure to find out how to achieve this. Slick is very extensible, but the documentation on extending it is a bit tricky. A feature like this might already exist, but this is what I came up with. If I have done something incorrect, please correct me.
First we need to create a custom driver. I extended the H2Driver to be able to test easily.
trait CustomDriver extends H2Driver {
  // make sure we create our query builder
  override def createQueryBuilder(input: QueryBuilderInput): QueryBuilder =
    new QueryBuilder(input)

  // extend the H2 query builder
  class QueryBuilder(input: QueryBuilderInput) extends super.QueryBuilder(input) {
    // we override the expr method in order to support the 'As' function
    override def expr(n: Node, skipParens: Boolean = false) = n match {
      // if we match our function we simply build the appropriate query
      case CustomDriver.As(column, LiteralNode(name: String)) =>
        b"("
        super.expr(column, skipParens)
        b") as ${name}"
      // we don't know how to handle this, so let super handle it
      case _ => super.expr(n, skipParens)
    }
  }
}
object CustomDriver extends CustomDriver {
  // simply define 'As' as a function symbol
  val As = new FunctionSymbol("As")

  // we override SimpleQL to add an extra implicit
  trait SimpleQL extends super.SimpleQL {
    // This is the part that makes it easy to use on queries. It's an enrichment class.
    implicit class RichQuery[T: TypeMapper](q: Query[Column[T], T]) {
      // here we redirect our as call to the As method we defined in our custom driver
      def as(name: String) =
        CustomDriver.As.column[T](Node(q.unpackable.value), name)
    }
  }

  // we need to override simple to use our version
  override val simple: SimpleQL = new SimpleQL {}
}
In order to use it we need to import specific things:
import CustomDriver.simple._
import Database.threadLocalSession
Then, to use it you can do the following (I used the tables from the official Slick documentation in my example).
// first create a function to create a count query
def countCoffees(supID: Column[Int]) =
  for {
    c <- Coffees
    if (c.supID === supID)
  } yield (c.length)

// create the query to combine name and count
val coffeesPerSupplier =
  for {
    s <- Suppliers
  } yield (s.name, countCoffees(s.id) as "test")

// print out the name and count
coffeesPerSupplier foreach { case (name, count) =>
  println(s"$name has $count type(s) of coffee")
}
The result is this:
Acme, Inc. has 2 type(s) of coffee
Superior Coffee has 2 type(s) of coffee
The High Ground has 1 type(s) of coffee