I want to be able to create a query with Slick that lets me filter left joins in a dynamic way.
case class Player(
id: Long,
createdAt: DateTime,
lastModificationDate: DateTime,
name: String
)
class PlayerTable(tag: Tag) extends Table[Player](tag, "players") {
def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
def createdAt = column[DateTime]("createdAt")
def lastModificationDate = column[DateTime]("lastModificationDate")
def name = column[String]("name")
override def * : ProvenShape[Player] = (
id,
createdAt,
lastModificationDate,
name
) <> (Player.tupled, Player.unapply)
}
case class PlayerGame(
id: Long,
createdAt: DateTime,
lastModificationDate: DateTime,
playerId: Long,
level: Int,
status: String
)
class PlayerGameTable(tag: Tag) extends Table[PlayerGame](tag, "player_games") {
def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
def createdAt = column[DateTime]("createdAt")
def lastModificationDate = column[DateTime]("lastModificationDate")
def playerId = column[Long]("playerId")
def level = column[Int]("level")
def status = column[String]("status")
override def * : ProvenShape[PlayerGame] = (
id,
createdAt,
lastModificationDate,
playerId,
level,
status
) <> (PlayerGame.tupled, PlayerGame.unapply)
}
I want to write a query like this with Slick, where the WHERE clause is dynamic. I wrote two examples:
SELECT *
FROM players
LEFT JOIN player_games AS playerGamesOne ON players.id = playerGamesOne.playerId AND playerGamesOne.level = 1
LEFT JOIN player_games AS playerGamesTwo ON players.id = playerGamesTwo.playerId AND playerGamesTwo.level = 2
WHERE playerGamesOne.status LIKE 'gameOver'
OR playerGamesTwo.status LIKE 'gameOver'
SELECT *
FROM players
LEFT JOIN player_games AS playerGamesOne ON players.id = playerGamesOne.playerId AND playerGamesOne.level = 1
LEFT JOIN player_games AS playerGamesTwo ON players.id = playerGamesTwo.playerId AND playerGamesTwo.level = 2
WHERE playerGamesOne.status LIKE 'playing'
OR playerGamesTwo.status NOT LIKE 'gameOver'
I was trying something like this, but I get Rep[Option[PlayerGameTable]] as the parameter. Maybe there is a different way of doing something like this
val baseQuery = for {
((p, g1), g2) <- PlayerTable.playerQuery joinLeft
PlayerGameTable.playerGameQuery ON ((x, y) => x.id === y.playerId && y.level === 1) joinLeft
PlayerGameTable.playerGameQuery ON ((x, y) => x._1.id === y.playerId && y.level === 2)
} yield (p, g1, g2)
private def filterPlayerGames(gameStatus: String, playerGamesOneOpt: Option[PlayerGameTable], playerGamesTwoOpt: Option[PlayerGameTable]) = {
(gameStatus, playerGamesOneOpt, playerGamesTwoOpt) match {
case (gameStatus: String, Some(playerGamesOne: PlayerGameTable), Some(playerGamesTwo: PlayerGameTable)) if gameStatus == "gameOver" => playerGamesOne.status === "gameOver" || playerGamesTwo.status === "gameOver"
}
}
It is a complex question; if something is not clear, please let me know and I will try to clarify it.
There are a couple of issues:
With multiple conditions, the underscore placeholder used within your on clause would not work the way you intended
_.level = something is an assignment, not an equality condition
Assuming PlayerTable.playerQuery is TableQuery[PlayerTable] and PlayerGameTable.playerGameQuery is TableQuery[PlayerGameTable], your baseQuery should look like this:
val baseQuery = for {
((p, g1), g2) <- PlayerTable.playerQuery joinLeft
PlayerGameTable.playerGameQuery on ((x, y) => x.id === y.playerId && y.level === 1) joinLeft
PlayerGameTable.playerGameQuery on ((x, y) => x._1.id === y.playerId && y.level === 2)
} yield (p, g1, g2)
It's not entirely clear to me how your filterPlayerGames method is going to handle dynamic conditions, nor do I think any filtering wrapper method will be flexible enough to cover multiple conditions with arbitrary and/or/negation operators. I would suggest that you use the baseQuery for the necessary joins and build filtering queries on top of it, similar to the ones below:
val query1 = baseQuery.filter{ case (_, g1, g2) =>
g1.filter(_.status === "gameOver").isDefined || g2.filter(_.status === "gameOver").isDefined
}
val query2 = baseQuery.filter{ case (_, g1, g2) =>
g1.filter(_.status === "playing").isDefined || g2.filter(_.status =!= "gameOver").isDefined
}
Note that with the left joins, g1 and g2 are of Option type, thus isDefined is applied for the or operation.
On a separate note, given that your filtering conditions are only on PlayerGameTable, it would probably be more efficient to perform filtering before the joins.
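For example, a rough sketch of that idea for the first query, reusing the playerQuery and playerGameQuery values from baseQuery (an outline rather than tested code):
// Sketch: pre-filter the games on status, then left-join per level and keep
// players that matched on at least one side.
val gamesOver = PlayerGameTable.playerGameQuery.filter(_.status === "gameOver")
val query1Prefiltered = (for {
  ((p, g1), g2) <- PlayerTable.playerQuery joinLeft
    gamesOver on ((player, game) => player.id === game.playerId && game.level === 1) joinLeft
    gamesOver on ((pg, game) => pg._1.id === game.playerId && game.level === 2)
} yield (p, g1, g2)).filter { case (_, g1, g2) => g1.isDefined || g2.isDefined }
The second example with its NOT LIKE condition would need its own pre-filtered game query, so this mainly pays off when the status conditions are known up front.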
Related
I'm trying to model the following with Slick 3.1.0:
case class Review(txt: String, userId: Long, id: Long)
case class User(name: String, id: Long)
case class ReviewEvent(event: String, reviewId: Long)
I need to populate a class called a FullReview, which looks like:
case class FullReview(r: Review, user: User, evts: Seq[ReviewEvent])
Assuming I have the right tables for each of the models, I'm trying to fetch a FullReview using a combination of join and group by, like so:
val withUser = for {
(r, u) <- RTable join UTable on (_.userId === _.id)
} yield (r, u)
val withUAndEvts = (for {
((r, user), evts) <- withUser joinLeft ETable on {
case ((r, _), ev) => r.id === ev.reviewId
}
} yield (r, user, evts)).groupBy(_._1.id)
This seems to yield a nested Query type, from what I can see. What am I doing wrong here?
If I understand you correctly, you can use the following example:
val users = TableQuery[Users]
val reviews = TableQuery[Reviews]
val events = TableQuery[ReviewEvents]
override def findAllReviews(): Future[Seq[FullReview]] = {
val query = reviews
.join(users).on(_.userId === _.id)
.joinLeft(events).on(_._1.id === _.reviewId)
db.run(query.result).map { a =>
a.groupBy(_._1._1.id).map { case (_, tuples) =>
val ((review, user), _) = tuples.head
val reviewEvents = tuples.flatMap(_._2)
FullReview(review, user, reviewEvents)
}.toSeq
}
}
If you want to add pagination to this request, I've already answered that here, and here is a full example.
From some tinkering around, I figured it would just be better to do the aggregation on the client. What that would mean, indirectly, is that if 100 rows on the table ETable would match a single row on the RTable, you would get multiple rows on the client. The client then has to implement its own aggregation to group all the ReviewEvent by Review.
As far as pagination is concerned, you may do something like:
def withUser(page: Int, pageSize: Int) = for {
(r, u) <- RTable.drop(page * pageSize).take(pageSize) join UTable on (_.userId === _.id)
} yield (r, u)
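To sketch how that paginated base query plugs into the join above (same RTable/UTable/ETable names as before; fullReviewRows is just an illustrative name, not tested code):
// Rough sketch: reuse the paginated withUser inside the left join with the events table.
def fullReviewRows(page: Int, pageSize: Int) =
  withUser(page, pageSize) joinLeft ETable on {
    case ((r, _), ev) => r.id === ev.reviewId
  }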
I guess this is elegant enough for now. If someone has a better answer, I'd be happy to hear it.
I'm working with Play! Scala 2.4 and Slick 3.
I have a many-to-many relation as follows:
class Artists(tag: Tag) extends Table[Artist](tag, "artists") {
def id = column[Long]("artistid", O.PrimaryKey, O.AutoInc)
def name = column[String]("name")
def * = (id.?, name) <> ((Artist.apply _).tupled, Artist.unapply)
}
The relation table:
class ArtistsGenres(tag: Tag) extends Table[ArtistGenreRelation](tag, "artistsgenres") {
def artistId = column[Long]("artistid")
def genreId = column[Int]("genreid")
def * = (artistId, genreId) <> ((ArtistGenreRelation.apply _).tupled, ArtistGenreRelation.unapply)
def aFK = foreignKey("artistid", artistId, artists)(_.id, onDelete = ForeignKeyAction.Cascade)
def bFK = foreignKey("genreid", genreId, genres)(_.id, onDelete = ForeignKeyAction.Cascade)
}
and the third table:
class Genres(tag: Tag) extends Table[Genre](tag, "genres") {
def id = column[Int]("genreid", O.PrimaryKey, O.AutoInc)
def name = column[String]("name")
def * = (id.?, name) <> ((Genre.apply _).tupled, Genre.unapply)
}
Until now I just wanted to get all the artists by their genre name (and their genres as well), as follows:
def findAllByGenre(genreName: String, offset: Int, numberToReturn: Int): Future[Seq[ArtistWithGenre]] = {
val query = for {
genre <- genres if genre.name === genreName
artistGenre <- artistsGenres if artistGenre.genreId === genre.id
artist <- artists joinLeft
(artistsGenres join genres on (_.genreId === _.id)) on (_.id === _._1.artistId)
if artist._1.id === artistGenre.artistId
} yield artist
db.run(query.result) map { seqArtistAndOptionalGenre =>
ArtistsAndOptionalGenresToArtistsWithGenres(seqArtistAndOptionalGenre)
}
}
The method ArtistsAndOptionalGenresToArtistsWithGenres groups the response by artists. This worked like a charm. Now I want to limit the number of artists I get from the database.
But I can't manage to use the Slick functions take and drop correctly: since my query returns a list of artists and relations, if I add a take before the .result I don't get the number of artists I want (it depends on the number of relations each artist has).
I could drop and take after I have grouped my result by artist, but I see a problem there: the RDBMS won't optimize the request, i.e. I would fetch all the artists (which can be a lot), do the groupBy, and only then take a few, instead of limiting the number of artists returned before the groupBy.
I found the following solution (with 2 queries but 1 DB call):
def findAllByGenre(genreName: String, offset: Int, numberToReturn: Int): Future[Seq[ArtistWithWeightedGenres]] = {
val query = for {
genre <- genres.filter(_.name === genreName)
artistGenre <- artistsGenres.filter(_.genreId === genre.id)
artist <- artists.filter(_.id === artistGenre.artistId)
} yield artist
val artistsIdFromDB = query.drop(offset).take(numberToReturn) map (_.id)
val query2 = for {
artistWithGenres <- artists.filter(_.id in artistsIdFromDB) joinLeft
(artistsGenres join genres on (_.genreId === _.id)) on (_.id === _._1.artistId)
} yield artistWithGenres
db.run(query2.result) map { seqArtistAndOptionalGenre =>
ArtistsAndOptionalGenresToArtistsWithGenres(seqArtistAndOptionalGenre)
} map(_.toVector)
}
If anyone has a better solution...
I'm trying to join rows with other rows in the same table, a.k.a. a self join.
This is my model (a bit simplified; my version has 12 more columns):
case class Log(id: Option[Long], createdAt: Date, state: Int, duplicateOf: Option[Long] = None)
class LogsTable(tag: Tag) extends Table[Log](tag, "log") {
def id = column[Option[Long]]("id", O.PrimaryKey, O.AutoInc)
def createdAt = column[Date]("created_at", O.NotNull)
def state = column[Int]("state", O.NotNull)
def duplicateOf = column[Option[Long]]("duplicate_of", O.Nullable)
def * = (id, createdAt, state, duplicateOf) <> (Log.tupled, Log.unapply _)
}
This is my query:
val q = for {
(logs, duplicates) <- Tables.logs.filter(_.duplicateOf.isEmpty) leftJoin Tables.logs on (_.id === _.duplicateOf )
} yield (logs, duplicates)
which is failing with a
[SlickException: Read NULL value (null) for ResultSet column Path s2._19]
Since the column is defined as Option[Long] and Nullable, I'm not really sure why it fails. Any suggestions?
Right now Slick has some limitations on left, right, and full outer joins for nullable rows.
But you can handle nullable rows like this (maybe this is not the best approach):
def getLogs()(implicit session: Session): List[(Log, Option[Log])] = {
(for {
(logs, duplicates) <- Tables.logs.filter(_.duplicateOf.isEmpty) leftJoin Tables.logs on (_.id === _.duplicateOf)
} yield (logs, (duplicates.id, duplicates.createdAt.?, duplicates.state.?, duplicates.duplicateOf))).list
.map {
case (log1, log2) =>
(log1, if (log2._1.isDefined) Some(Log(log2._1, log2._2.get, log2._3.get, log2._4)) else None)
}
}
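For completeness, a usage sketch of the helper above (it assumes a Slick 2.x Database value named db, which is not shown here):
// Usage sketch: run the self join and pair each original log with its duplicate, if any.
val logsWithDuplicates: List[(Log, Option[Log])] =
  db.withSession { implicit session =>
    getLogs()
  }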
I think you want to use filterNot instead of filter.
Not sure how to do this correctly. I'm trying to do this:
def byId(id: Column[Int], locationId: Column[Int]) = {
for {
m <- users if m.id === id && m.locationId == locationId
} yield m
}
val byIdCompiled = Compiled(byId _) // ???????????? how to pass second parameter?
def getById(id: Int, locationId: Int): Option[User] {
byIdCompiled(id, locationId).firstOption
}
How do I curry a function with 2 parameters when compiling my Slick query?
The example provided in the Slick docs uses a single underscore to turn a function with multiple parameters into a value that Compiled accepts:
http://slick.typesafe.com/doc/2.0.0/queries.html
def userNameByIDRange(min: Column[Int], max: Column[Int]) =
for {
u <- users if u.id >= min && u.id < max
} yield u.first
val userNameByIDRangeCompiled = Compiled(userNameByIDRange _)
// The query will be compiled only once:
val names1 = userNameByIDRangeCompiled(2, 5).run
val names2 = userNameByIDRangeCompiled(1, 3).run
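Applied to the byId query from the question, the same pattern would look roughly like this (an untested sketch reusing the users query and columns from the question; note === rather than == for the second comparison, and an implicit Session added for execution):
// Sketch: Compiled accepts the two-parameter function directly.
def byId(id: Column[Int], locationId: Column[Int]) =
  for {
    m <- users if m.id === id && m.locationId === locationId
  } yield m
val byIdCompiled = Compiled(byId _)
def getById(id: Int, locationId: Int)(implicit session: Session): Option[User] =
  byIdCompiled(id, locationId).firstOption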
I'm trying to use Slick to query a many-to-many relationship, but I'm running into a variety of errors, the most prominent being "Don't know how to unpack (User, Skill) to T and pack to G".
The structure of the tables is similar to the following:
case class User(val name: String, val picture: Option[URL], val id: Option[UUID])
object Users extends Table[User]("users") {
def name = column[String]("name")
def picture = column[Option[URL]]("picture")
def id = column[UUID]("id")
def * = name ~ picture ~ id.? <> (User, User.unapply _)
}
case class Skill(val name: String, val id: Option[UUID])
object Skills extends Table[Skill]("skill") {
def name = column[String]("name")
def id = column[UUID]("id")
def * = name ~ id.? <> (Skill, Skill.unapply _)
}
case class UserSkill(val userId: UUID, val skillId: UUID, val id: Option[UUID])
object UserSkills extends Table[UserSkill]("user_skill") {
def userId = column[UUID]("userId")
def skillId = column[UUID]("skillId")
def id = column[UUID]("id")
def * = userId ~ skillId ~ id.? <> (UserSkill, UserSkill.unapply _)
def user = foreignKey("userFK", userId, Users)(_.id)
def skill = foreignKey("skillFK", skillId, Skills)(_.id)
}
Ultimately, what I want to achieve is something of the form
SELECT u.*, group_concat(s.name) FROM user_skill us, users u, skills s WHERE us.skillId = s.id && us.userId = u.id GROUP BY u.id
but before I spend the time trying to get group_concat to work as well, I have been trying to produce the simpler query (which I believe is still valid...)
SELECT u.* FROM user_skill us, users u, skills s WHERE us.skillId = s.id && us.userId = u.id GROUP BY u.id
I've tried a variety of Scala code to produce this query, but an example of what causes the shape error above is:
(for {
us <- UserSkills
user <- us.user
skill <- us.skill
} yield (user, skill)).groupBy(_._1.id).map { case(_, xs) => xs.first }
Similarly, the following produces a packing error regarding "User" instead of "(User, Skill)"
(for {
us <- UserSkills
user <- us.user
skill <- us.skill
} yield (user, skill)).groupBy(_._1.id).map { case(_, xs) => xs.map(_._1).first }
If anyone has any suggestions, I would be very grateful: I've spent most of today and yesterday scouring Google and Google Groups as well as the Slick source, but I haven't found a solution yet.
(Also, I'm using Postgres, so group_concat would actually be string_agg.)
EDIT
So it seems like when groupBy is used, the mapped projection gets applied because something like
(for {
us <- UserSkills
u <- us.user
s <- us.skill
} yield (u,s)).map(_._1)
works fine because _._1 gives the type Users, which has a Shape since Users is a table. However, when we call xs.first (as we do when we call groupBy), we actually get back a mapped projection type (User, Skill), or if we apply map(_._1) first, we get the type User, which is not Users! As far as I can tell, there is no shape with User as the mixed type because the only shapes defined are for Shape[Column[T], T, Column[T]] and for a table T <: TableNode, Shape[T, NothingContainer#TableNothing, T] as defined in slick.lifted.Shape. Furthermore, if I do something like
(for {
us <- UserSkills
u <- us.user
s <- us.skill
} yield (u,s))
.groupBy(_._1.id)
.map { case (_, xs) => xs.map(_._1.id).first }
I get a strange error of the form "NoSuchElementException: key not found: #1515100893", where the numeric key value changes each time. This is not the query I want, but it is a strange issue none the less.
I've run up against similar situations as well. While I love working with Scala and Slick, I do believe there are times when it is easier to denormalize an object in the database itself and link the Slick Table to a view.
For example, I have an application that has a Tree object that is normalized into several database tables. Since I'm comfortable with SQL, I think it is a cleaner solution than writing a plain Scala Slick query. The Scala code:
case class DbGFolder(id: String,
eTag: String,
url: String,
iconUrl: String,
title: String,
owner: String,
parents: Option[String],
children: Option[String],
scions: Option[String],
created: LocalDateTime,
modified: LocalDateTime)
object DbGFolders extends Table[DbGFolder]("gfolder_view") {
def id = column[String]("id")
def eTag = column[String]("e_tag")
def url = column[String]("url")
def iconUrl = column[String]("icon_url")
def title = column[String]("title")
def owner = column[String]("file_owner")
def parents = column[String]("parent_str")
def children = column[String]("child_str")
def scions = column[String]("scion_str")
def created = column[LocalDateTime]("created")
def modified = column[LocalDateTime]("modified")
def * = id ~ eTag ~ url ~ iconUrl ~ title ~ owner ~ parents.? ~
children.? ~ scions.? ~ created ~ modified <> (DbGFolder, DbGFolder.unapply _)
def findAll(implicit s: Session): List[GFolder] = {
Query(DbGFolders).list().map {v =>
GFolder(id = v.id,
eTag = v.eTag,
url = v.url,
iconUrl = v.iconUrl,
title = v.title,
owner = v.owner,
parents = v.parents.map { parentStr =>
parentStr.split(",").toSet }.getOrElse(Set()),
children = v.children.map{ childStr =>
childStr.split(",").toSet }.getOrElse(Set()),
scions = v.scions.map { scionStr =>
scionStr.split(",").toSet }.getOrElse(Set()),
created = v.created,
modified = v.modified)
}
}
}
And the underlying (postgres) view:
CREATE VIEW scion_view AS
WITH RECURSIVE scions(id, scion) AS (
SELECT c.id, c.child
FROM children AS c
UNION ALL
SELECT s.id, c.child
FROM children AS c, scions AS s
WHERE c.id = s.scion)
SELECT * FROM scions ORDER BY id, scion;
CREATE VIEW gfolder_view AS
SELECT
f.id, f.e_tag, f.url, f.icon_url, f.title, m.name, f.file_owner,
p.parent_str, c.child_str, s.scion_str, f.created, f.modified
FROM
gfiles AS f
JOIN mimes AS m ON (f.mime_type = m.name)
LEFT JOIN (SELECT DISTINCT id, string_agg(parent, ',' ORDER BY parent) AS parent_str
FROM parents GROUP BY id) AS p ON (f.id = p.id)
LEFT JOIN (SELECT DISTINCT id, string_agg(child, ',' ORDER BY child) AS child_str
FROM children GROUP BY id) AS c ON (f.id = c.id)
LEFT JOIN (SELECT DISTINCT id, string_agg(scion, ',' ORDER BY scion) AS scion_str
FROM scion_view GROUP BY id) AS s ON (f.id = s.id)
WHERE
m.category = 'folder';
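With the view in place, the mapped table is queried like any other; a minimal usage sketch (assuming a Slick 2.x Database value named db, which is not part of the snippet above):
// Usage sketch: read the denormalized folders through the view-backed table.
val folders: List[GFolder] = db.withSession { implicit s =>
  DbGFolders.findAll
}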
Try this; I hope it yields what you expected. Find the Slick code below the case classes.
See the Slick documentation on lifted embedding for reference.
case class User(val name: String, val picture: Option[URL], val id: Option[UUID])
class Users(_tableTag: Tag) extends Table[User](_tableTag,"users") {
def name = column[String]("name")
def picture = column[Option[URL]]("picture")
def id = column[UUID]("id")
def * = name ~ picture ~ id.? <> (User, User.unapply _)
}
lazy val userTable = new TableQuery(tag => new Users(tag))
case class Skill(val name: String, val id: Option[UUID])
class Skills(_tableTag: Tag) extends Table[Skill](_tableTag,"skill") {
def name = column[String]("name")
def id = column[UUID]("id")
def * = name ~ id.? <> (Skill, Skill.unapply _)
}
lazy val skillTable = new TableQuery(tag => new Skills(tag))
case class UserSkill(val userId: UUID, val skillId: UUID, val id: Option[UUID])
class UserSkills(_tableTag: Tag) extends Table[UserSkill](_tableTag,"user_skill") {
def userId = column[UUID]("userId")
def skillId = column[UUID]("skillId")
def id = column[UUID]("id")
def * = userId ~ skillId ~ id.? <> (UserSkill, UserSkill.unapply _)
def user = foreignKey("userFK", userId, userTable)(_.id)
def skill = foreignKey("skillFK", skillId, skillTable)(_.id)
}
lazy val userSkillTable = new TableQuery(tag => new UserSkills(tag))
(for {((userSkill, user), skill) <- userSkillTable join userTable on
(_.userId === _.id) join skillTable on (_._1.skillId === _.id)
} yield (userSkill, user, skill)).groupBy(_._2.id)
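Note that a groupBy in the lifted embedding has to be followed by an aggregation before the query can run; a minimal sketch of that last step, counting skills per user id purely as an illustration:
// Sketch: aggregate the grouped rows, e.g. count skills per user id.
val skillCountsPerUser =
  (for {
    ((userSkill, user), skill) <- userSkillTable join userTable on
      (_.userId === _.id) join skillTable on (_._1.skillId === _.id)
  } yield (user, skill))
    .groupBy { case (user, _) => user.id }
    .map { case (userId, group) => (userId, group.length) }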