Slick 3: how to drop and take on collections with some relations - scala

I'm working with Play! Scala 2.4 and Slick 3.
I have a many-to-many relation, as follows:
class Artists(tag: Tag) extends Table[Artist](tag, "artists") {
def id = column[Long]("artistid", O.PrimaryKey, O.AutoInc)
def name = column[String]("name")
def * = (id.?, name) <> ((Artist.apply _).tupled, Artist.unapply)
}
The relation table:
class ArtistsGenres(tag: Tag) extends Table[ArtistGenreRelation](tag, "artistsgenres") {
def artistId = column[Long]("artistid")
def genreId = column[Int]("genreid")
def * = (artistId, genreId) <> ((ArtistGenreRelation.apply _).tupled, ArtistGenreRelation.unapply)
def aFK = foreignKey("artistid", artistId, artists)(_.id, onDelete = ForeignKeyAction.Cascade)
def bFK = foreignKey("genreid", genreId, genres)(_.id, onDelete = ForeignKeyAction.Cascade)
}
and the third table:
class Genres(tag: Tag) extends Table[Genre](tag, "genres") {
def id = column[Int]("genreid", O.PrimaryKey, O.AutoInc)
def name = column[String]("name")
def * = (id.?, name) <> ((Genre.apply _).tupled, Genre.unapply)
}
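The queries below also refer to the TableQuery values artists, artistsGenres and genres, which aren't shown here; presumably something along these lines:
// Assumed TableQuery instances matching the names used in the queries below.
lazy val artists = TableQuery[Artists]
lazy val artistsGenres = TableQuery[ArtistsGenres]
lazy val genres = TableQuery[Genres]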
Until now I just wanted to get all the artists (and their genres as well) for a given genre name, as follows:
def findAllByGenre(genreName: String, offset: Int, numberToReturn: Int): Future[Seq[ArtistWithGenre]] = {
val query = for {
genre <- genres if genre.name === genreName
artistGenre <- artistsGenres if artistGenre.genreId === genre.id
artist <- artists joinLeft
(artistsGenres join genres on (_.genreId === _.id)) on (_.id === _._1.artistId)
if artist._1.id === artistGenre.artistId
} yield artist
db.run(query.result) map { seqArtistAndOptionalGenre =>
ArtistsAndOptionalGenresToArtistsWithGenres(seqArtistAndOptionalGenre)
}
}
The method ArtistsAndOptionalGenresToArtistsWithGenres groups the response by artists. This worked like a charm. Now I want to limit the number of artists I get from the database.
But I can't manage to use the Slick functions take and drop correctly: since my query returns a list of artists together with their relations, adding a take before the .result doesn't give me the number of artists I want (it depends on how many relations each artist has).
I could drop and take after grouping the result by artist, but I see a problem there: the RDBMS won't be able to optimize the request, i.e. I would fetch all the artists (which can be a lot), do the groupBy, and only then take a slice, instead of limiting the number of artists returned before the groupBy.

I found the following solution (with 2 queries but 1 DB call):
def findAllByGenre(genreName: String, offset: Int, numberToReturn: Int): Future[Seq[ArtistWithWeightedGenres]] = {
val query = for {
genre <- genres.filter(_.name === genreName)
artistGenre <- artistsGenres.filter(_.genreId === genre.id)
artist <- artists.filter(_.id === artistGenre.artistId)
} yield artist
val artistsIdFromDB = query.drop(offset).take(numberToReturn) map (_.id)
val query2 = for {
artistWithGenres <- artists.filter(_.id in artistsIdFromDB) joinLeft
(artistsGenres join genres on (_.genreId === _.id)) on (_.id === _._1.artistId)
} yield artistWithGenres
db.run(query2.result) map { seqArtistAndOptionalGenre =>
ArtistsAndOptionalGenresToArtistsWithGenres(seqArtistAndOptionalGenre)
} map(_.toVector)
}
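One caveat with this approach: drop and take translate to OFFSET/LIMIT, and without an ORDER BY the database is free to return rows in any order, so consecutive pages may overlap or skip artists. A sketch of the same id subquery with a sort added (still a single DB call); sorting by id is just an assumption, any stable ordering works:
// Sort before paginating so OFFSET/LIMIT is deterministic.
val artistsIdFromDB = query
  .sortBy(_.id)
  .drop(offset)
  .take(numberToReturn)
  .map(_.id)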
If anyone has a better solution...

Related

Scala Slick joinLeft and combined conditions

I want to be able to create a query with Slick that lets me filter left joins in a dynamic way.
case class Player(
id: Long,
createdAt: DateTime,
lastModificationDate: DateTime,
name: String
)
class PlayerTable(tag: Tag) extends Table[Player](tag, "players") {
def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
def createdAt = column[DateTime]("createdAt")
def lastModificationDate = column[DateTime]("lastModificationDate")
def name = column[String]("name")
override def * : ProvenShape[Player] = (
id,
createdAt,
lastModificationDate,
name
) <> (Player.tupled, Player.unapply)
}
case class PlayerGame(
id: Long,
createdAt: DateTime,
lastModificationDate: DateTime,
playerId: Long,
level: Int,
status: String
)
class PlayerGameTable(tag: Tag) extends Table[PlayerGame](tag, "player_games") {
def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
def createdAt = column[DateTime]("createdAt")
def lastModificationDate = column[DateTime]("lastModificationDate")
def playerId = column[Long]("playerId")
def level = column[Int]("level")
def status = column[String]("status")
override def * : ProvenShape[PlayerGame] = (
id,
createdAt,
lastModificationDate,
playerId,
level,
status
) <> (PlayerGame.tupled, PlayerGame.unapply)
}
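The attempts below refer to PlayerTable.playerQuery and PlayerGameTable.playerGameQuery, which aren't defined in the snippet; presumably they are companion objects along these lines:
// Assumed companion objects exposing the TableQuery values used below.
object PlayerTable {
  val playerQuery = TableQuery[PlayerTable]
}
object PlayerGameTable {
  val playerGameQuery = TableQuery[PlayerGameTable]
}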
I want to write a query like this with Slick, where the WHERE clause is dynamic. I wrote two examples:
SELECT *
FROM players
LEFT JOIN player_games AS playerGamesOne ON players.id = playerGamesOne.playerId AND playerGamesOne.level = 1
LEFT JOIN player_games AS playerGamesTwo ON players.id = playerGamesTwo.playerId AND playerGamesTwo.level = 2
WHERE playerGamesOne.status LIKE 'gameOver'
OR playerGamesTwo.status LIKE 'gameOver'
SELECT *
FROM players
LEFT JOIN player_games AS playerGamesOne ON players.id = playerGamesOne.playerId AND playerGamesOne.level = 1
LEFT JOIN player_games AS playerGamesTwo ON players.id = playerGamesTwo.playerId AND playerGamesTwo.level = 2
WHERE playerGamesOne.status LIKE 'playing'
OR playerGamesTwo.status NOT LIKE 'gameOver'
I was trying something like this, but I get Rep[Option[PlayerGameTable]] as the parameter. Maybe there is a different way of doing something like this.
val baseQuery = for {
((p, g1), g2) <- PlayerTable.playerQuery joinLeft
PlayerGameTable.playerGameQuery ON ((x, y) => x.id === y.playerId && y.level === 1) joinLeft
PlayerGameTable.playerGameQuery ON ((x, y) => x._1.id === y.playerId && y.level === 2)
} yield (p, g1, g2)
private def filterPlayerGames(gameStatus: String, playerGamesOneOpt: Option[PlayerGameTable], playerGamesTwoOpt: Option[PlayerGameTable]) = {
(gameStatus, playerGamesOneOpt, playerGamesOneOpt) match {
case (gameStatus: String, Some(playerGamesOne: PlayerGameTable), Some(playerGamesOne: PlayerGameTable)) if gameStatus == "gameOver" => playerGamesOne.status === "gameOver" || playerGamesTwo.status === "gameOver"
}
}
It is a complex question; if something is not clear, please let me know and I will try to clarify it.
There are a couple of issues:
With multiple conditions, the underscore placeholder used within your on clause would not work as intended
_.level = something is an assignment, not a condition
Assuming PlayerTable.playerQuery is TableQuery[PlayerTable] and PlayerGameTable.playerGameQuery is TableQuery[PlayerGameTable], your baseQuery should look like this:
val baseQuery = for {
((p, g1), g2) <- PlayerTable.playerQuery joinLeft
PlayerGameTable.playerGameQuery on ((x, y) => x.id === y.playerId && y.level === 1) joinLeft
PlayerGameTable.playerGameQuery on ((x, y) => x._1.id === y.playerId && y.level === 2)
} yield (p, g1, g2)
It's not entirely clear to me how your filterPlayerGames method is going to handle dynamic conditions. Nor do I think any filtering wrapper method will be flexible enough to cover multiple conditions with arbitrary and/or/negation operators. I would suggest that you use the baseQuery for the necessary joins and build filtering queries on top of it, similar to the following:
val query1 = baseQuery.filter{ case (_, g1, g2) =>
g1.filter(_.status === "gameOver").isDefined || g2.filter(_.status === "gameOver").isDefined
}
val query2 = baseQuery.filter{ case (_, g1, g2) =>
g1.filter(_.status === "playing").isDefined || g2.filter(_.status =!= "gameOver").isDefined
}
Note that with the left joins, g1 and g2 are of Option type, thus isDefined is applied for the or operation.
On a separate note, given that your filtering conditions are only on PlayerGameTable, it would probably be more efficient to perform filtering before the joins.
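A hedged sketch of that last point: pre-filter player_games on the status, left-join the filtered query twice, and keep query1's "either level matched" semantics with an isDefined filter (playerQuery/playerGameQuery are the assumed TableQuery values from above):
// Sketch: filter player_games first so the database joins fewer rows.
def playersWithGameStatus(gameStatus: String) = {
  val matching = PlayerGameTable.playerGameQuery.filter(_.status === gameStatus)
  (for {
    ((p, g1), g2) <- PlayerTable.playerQuery joinLeft
      matching on ((x, y) => x.id === y.playerId && y.level === 1) joinLeft
      matching on ((x, y) => x._1.id === y.playerId && y.level === 2)
  } yield (p, g1, g2)).filter { case (_, g1, g2) => g1.isDefined || g2.isDefined }
}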

Scala Slick 3.0.1 Relationship to self

I have an entity called Category which has a relationship to itself. There are two types of categories: parent categories and subcategories. A subcategory stores the id of its parent category in the idParent attribute.
I defined the schema this way:
class CategoriesTable(tag: Tag) extends Table[Category](tag, "CATEGORIES") {
def id = column[String]("id", O.PrimaryKey)
def name = column[String]("name")
def idParent = column[Option[String]]("idParent")
def * = (id, name, idParent) <> (Category.tupled, Category.unapply)
def categoryFK = foreignKey("category_fk", idParent, categories)(_.id.?)
def subcategories = TableQuery[CategoriesTable].filter(_.id === idParent)
}
And I have this data:
id      name    idParent
------------------------------
parent  Parent
child1  Child1  parent
child2  Child2  parent
Now I want to get the result in a map grouped by the parent category like
Map(
(parent,Parent,None) -> Seq((child1,Child1,parent),(child2,Child2,parent))
)
For that I tried with the following query:
def findChildrenWithParents() = {
db.run((for {
c <- categories
s <- c.subcategories
} yield (c,s)).sortBy(_._1.name).result)
}
If at this point I execute the query with:
categoryDao.findChildrenWithParents().map {
case categoryTuples => categoryTuples.map(println _)
}
I get this:
(Category(child1,Child1,Some(parent)),Category(parent,Parent,None))
(Category(child2,Child2,Some(parent)),Category(parent,Parent,None))
There are two things here that puzzled me:
It returns Future[Seq[(Category, Category)]] instead of the Future[Seq[(Category, Seq[Category])]] I would expect.
The order is inverted; I would expect the parent to appear first, like:
(Category(parent,Parent,None),Category(child1,Child1,Some(parent)))
(Category(parent,Parent,None),Category(child2,Child2,Some(parent)))
Now I want to group them. As I am having problems with nested queries in Slick, I perform the group by on the result like this:
categoryDao.findChildrenWithParents().map {
case categoryTuples => categoryTuples.groupBy(_._2).map(println _)
}
But the result is really a mess:
(Category(parent,Parent,None),Vector((Category(child1,Child1,Some(parent)),Category(parent,Parent,None)),(Category(child2,Child2,Some(parent)),Category(parent,Parent,None))))
I would have expected:
(Category(parent,Parent,None),Vector(Category(child1,Child1,Some(parent)),Category(child2,Child2,Some(parent))))
Can you please help me with the inverted result and with the group by?
Thanks in advance.
OK, I managed to fix it myself. Here is the answer in case someone wants to learn from it:
def findChildrenWithParents() = {
val result = db.run((for {
c <- categories
s <- c.subcategories
} yield (c,s)).sortBy(_._1.name).result)
result map {
case categoryTuples => categoryTuples.groupBy(_._1).map{
case (k,v) => (k,v.map(_._2))
}
}
}
The solution isn't perfect; I would like to do the group by in Slick itself, but this retrieves what I wanted.
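For what it's worth, moving the grouping into Slick is limited by SQL itself: GROUP BY can only produce aggregates per group (counts, sums, and so on), not a collection of child rows, so building the Map of parent -> children is naturally done on the Scala side as above. A sketch of what the database side can do, counting subcategories per parent id (categories is assumed to be the TableQuery[CategoriesTable]):
// Sketch: aggregate per parent on the database side, e.g. a child count.
val childCountPerParent =
  categories
    .filter(_.idParent.isDefined)
    .groupBy(_.idParent)
    .map { case (parentId, group) => (parentId, group.length) }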

self join crash with NULL value exception

I'm trying to join rows with other rows in the same table, a.k.a. a self join.
This is my model (a bit simplified; my version has 12 more columns):
case class Log(id: Option[Long], createdAt: Date, state: Int, duplicateOf: Option[Long] = None)
class LogsTable(tag: Tag) extends Table[Log](tag, "log") {
def id = column[Option[Long]]("id", O.PrimaryKey, O.AutoInc)
def createdAt = column[Date]("created_at", O.NotNull)
def state = column[Int]("state", O.NotNull)
def duplicateOf = column[Option[Long]]("duplicate_of", O.Nullable)
def * = (id, createdAt, state, duplicateOf) <> (Log.tupled, Log.unapply _)
}
This is my query:
val q = for {
(logs, duplicates) <- Tables.logs.filter(_.duplicateOf.isEmpty) leftJoin Tables.logs on (_.id === _.duplicateOf )
} yield (logs, duplicates)
which is failing with a
[SlickException: Read NULL value (null) for ResultSet column Path s2._19]
Since the column is defined as Option[Long] and Nullable, I'm not really sure why it fails. Any suggestions?
Right now Slick has some limitations on left, right and full outer joins with nullable rows.
But you can handle nullable rows like this (maybe not the best approach):
def getLogs()(implicit session: Session): List[(Log, Option[Log])] = {
(for {
(logs, duplicates) <- Tables.logs.filter(_.duplicateOf.isEmpty) leftJoin Tables.logs on (_.id === _.duplicateOf)
} yield (logs, (duplicates.id, duplicates.createdAt.?, duplicates.state.?, duplicates.duplicateOf))).list
.map {
case (log1, log2) =>
// a defined id means the outer join actually found a duplicate row
(log1, if (log2._1.isDefined) Some(Log(log2._1, log2._2.get, log2._3.get, log2._4)) else None)
}
}
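For completeness: on Slick 3 this workaround shouldn't be needed, because joinLeft types the right-hand side as an Option row and maps missing rows to None instead of failing on the NULLs. A sketch under that assumption (db being a Slick 3 Database):
// Slick 3 sketch: joinLeft yields Option rows, so NULLs come back as None.
val q = for {
  (log, duplicate) <- Tables.logs.filter(_.duplicateOf.isEmpty) joinLeft
    Tables.logs on (_.id === _.duplicateOf)
} yield (log, duplicate)

// db.run(q.result) gives a Future[Seq[(Log, Option[Log])]]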
I think you want to use filterNot instead of filter.

Scala Slick: Issues with groupBy and missing shapes

I'm trying to use Slick to query a many-to-many relationship, but I'm running into a variety of errors, the most prominent being "Don't know how to unpack (User, Skill) to T and pack to G".
The structure of the tables is similar to the following:
case class User(val name: String, val picture: Option[URL], val id: Option[UUID])
object Users extends Table[User]("users") {
def name = column[String]("name")
def picture = column[Option[URL]]("picture")
def id = column[UUID]("id")
def * = name ~ picture ~ id.? <> (User, User.unapply _)
}
case class Skill(val name: String, val id: Option[UUID])
object Skills extends Table[Skill]("skill") {
def name = column[String]("name")
def id = column[UUID]("id")
def * = name ~ id.? <> (Skill, Skill.unapply _)
}
case class UserSkill(val userId: UUID, val skillId: UUID, val id: Option[UUID])
object UserSkills extends Table[UserSkill]("user_skill") {
def userId = column[UUID]("userId")
def skillId = column[UUID]("skillId")
def id = column[UUID]("id")
def * = userId ~ skillId ~ id.? <> (UserSkill, UserSkill.unapply _)
def user = foreignKey("userFK", userId, Users)(_.id)
def skill = foreignKey("skillFK", skillId, Skills)(_.id)
}
Ultimately, what I want to achieve is something of the form
SELECT u.*, group_concat(s.name) FROM user_skill us, users u, skills s WHERE us.skillId = s.id && us.userId = u.id GROUP BY u.id
but before I spend the time trying to get group_concat to work as well, I have been trying to produce the simpler query (which I believe is still valid...)
SELECT u.* FROM user_skill us, users u, skills s WHERE us.skillId = s.id && us.userId = u.id GROUP BY u.id
I've tried a variety of scala code to produce this query, but an example of what causes the shape error above is
(for {
us <- UserSkills
user <- us.user
skill <- us.skill
} yield (user, skill)).groupBy(_._1.id).map { case(_, xs) => xs.first }
Similarly, the following produces a packing error regarding "User" instead of "(User, Skill)"
(for {
us <- UserSkills
user <- us.user
skill <- us.skill
} yield (user, skill)).groupBy(_._1.id).map { case(_, xs) => xs.map(_._1).first }
If anyone has any suggestions, I would be very grateful: I've spent most of today and yesterday scouring Google and the Google groups as well as the Slick source, but I haven't found a solution yet.
(Also, I'm using Postgres, so group_concat would actually be string_agg.)
EDIT
So it seems like when groupBy is used, the mapped projection gets applied because something like
(for {
us <- UserSkills
u <- us.user
s <- us.skill
} yield (u,s)).map(_._1)
works fine because _._1 gives the type Users, which has a Shape since Users is a table. However, when we call xs.first (as we do when we call groupBy), we actually get back a mapped projection type (User, Skill), or if we apply map(_._1) first, we get the type User, which is not Users! As far as I can tell, there is no shape with User as the mixed type because the only shapes defined are for Shape[Column[T], T, Column[T]] and for a table T <: TableNode, Shape[T, NothingContainer#TableNothing, T] as defined in slick.lifted.Shape. Furthermore, if I do something like
(for {
us <- UserSkills
u <- us.user
s <- us.skill
} yield (u,s))
.groupBy(_._1.id)
.map { case (_, xs) => xs.map(_._1.id).first }
I get a strange error of the form "NoSuchElementException: key not found: #1515100893", where the numeric key value changes each time. This is not the query I want, but it is a strange issue nonetheless.
I've run up against similar situations as well. While I love working with Scala and Slick, I do believe there are times when it is easier to denormalize an object in the database itself and link the Slick Table to a view.
For example, I have an application that has a Tree object that is normalized into several database tables. Since I'm comfortable with SQL, I think it is a cleaner solution than writing a plain Scala Slick query. The Scala code:
case class DbGFolder(id: String,
eTag: String,
url: String,
iconUrl: String,
title: String,
owner: String,
parents: Option[String],
children: Option[String],
scions: Option[String],
created: LocalDateTime,
modified: LocalDateTime)
object DbGFolders extends Table[DbGFolder]("gfolder_view") {
def id = column[String]("id")
def eTag = column[String]("e_tag")
def url = column[String]("url")
def iconUrl = column[String]("icon_url")
def title = column[String]("title")
def owner = column[String]("file_owner")
def parents = column[String]("parent_str")
def children = column[String]("child_str")
def scions = column[String]("scion_str")
def created = column[LocalDateTime]("created")
def modified = column[LocalDateTime]("modified")
def * = id ~ eTag ~ url ~ iconUrl ~ title ~ owner ~ parents.? ~
children.? ~ scions.? ~ created ~ modified <> (DbGFolder, DbGFolder.unapply _)
def findAll(implicit s: Session): List[GFolder] = {
Query(DbGFolders).list().map {v =>
GFolder(id = v.id,
eTag = v.eTag,
url = v.url,
iconUrl = v.iconUrl,
title = v.title,
owner = v.owner,
parents = v.parents.map { parentStr =>
parentStr.split(",").toSet }.getOrElse(Set()),
children = v.children.map{ childStr =>
childStr.split(",").toSet }.getOrElse(Set()),
scions = v.scions.map { scionStr =>
scionStr.split(",").toSet }.getOrElse(Set()),
created = v.created,
modified = v.modified)
}
}
}
And the underlying (postgres) view:
CREATE VIEW scion_view AS
WITH RECURSIVE scions(id, scion) AS (
SELECT c.id, c.child
FROM children AS c
UNION ALL
SELECT s.id, c.child
FROM children AS c, scions AS s
WHERE c.id = s.scion)
SELECT * FROM scions ORDER BY id, scion;
CREATE VIEW gfolder_view AS
SELECT
f.id, f.e_tag, f.url, f.icon_url, f.title, m.name, f.file_owner,
p.parent_str, c.child_str, s.scion_str, f.created, f.modified
FROM
gfiles AS f
JOIN mimes AS m ON (f.mime_type = m.name)
LEFT JOIN (SELECT DISTINCT id, string_agg(parent, ',' ORDER BY parent) AS parent_str
FROM parents GROUP BY id) AS p ON (f.id = p.id)
LEFT JOIN (SELECT DISTINCT id, string_agg(child, ',' ORDER BY child) AS child_str
FROM children GROUP BY id) AS c ON (f.id = c.id)
LEFT JOIN (SELECT DISTINCT id, string_agg(scion, ',' ORDER BY scion) AS scion_str
FROM scion_view GROUP BY id) AS s ON (f.id = s.id)
WHERE
m.category = 'folder';
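A usage sketch for findAll, assuming the pre-3.0 session API that its implicit Session parameter implies (database is a placeholder for however the Database is obtained):
// Sketch: findAll runs inside a session (pre-Slick-3 API).
val folders: List[GFolder] = database.withSession { implicit session: Session =>
  DbGFolders.findAll
}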
Try this; hopefully it yields what you expected. The Slick code is below the case classes.
See the Slick documentation on lifted embedding for reference.
case class User(val name: String, val picture: Option[URL], val id: Option[UUID])
class Users(_tableTag: Tag) extends Table[User](_tableTag,"users") {
def name = column[String]("name")
def picture = column[Option[URL]]("picture")
def id = column[UUID]("id")
def * = (name, picture, id.?) <> ((User.apply _).tupled, User.unapply)
}
lazy val userTable = new TableQuery(tag => new Users(tag))
case class Skill(val name: String, val id: Option[UUID])
class Skills(_tableTag: Tag) extends Table[Skill](_tableTag,"skill") {
def name = column[String]("name")
def id = column[UUID]("id")
def * = (name, id.?) <> ((Skill.apply _).tupled, Skill.unapply)
}
lazy val skillTable = new TableQuery(tag => new Skills(tag))
case class UserSkill(val userId: UUID, val skillId: UUID, val id: Option[UUID])
class UserSkills(_tableTag: Tag) extends Table[UserSkill](_tableTag,"user_skill") {
def userId = column[UUID]("userId")
def skillId = column[UUID]("skillId")
def id = column[UUID]("id")
def * = (userId, skillId, id.?) <> ((UserSkill.apply _).tupled, UserSkill.unapply)
def user = foreignKey("userFK", userId, userTable)(_.id)
def skill = foreignKey("skillFK", skillId, skillTable)(_.id)
}
lazy val userSkillTable = new TableQuery(tag => new UserSkills(tag))
(for {((userSkill, user), skill) <- userSkillTable join userTable on
(_.userId === _.id) join skillTable on (_._1.skillId === _.id)
} yield (userSkill, user, skill)).groupBy(_._2.id)
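Note that as written the groupBy still needs to be mapped to an aggregate before it can run, and SQL's GROUP BY can only return aggregates, not row collections, while Slick has no built-in group_concat/string_agg. A pragmatic alternative is to fetch the joined pairs and aggregate per user in Scala; a sketch, assuming a db handle and an implicit ExecutionContext are in scope:
// Sketch: join, then emulate group_concat/string_agg on the client side.
val joined = for {
  us    <- userSkillTable
  user  <- userTable  if us.userId  === user.id
  skill <- skillTable if us.skillId === skill.id
} yield (user, skill.name)

val usersWithSkills = db.run(joined.result).map { rows =>
  rows.groupBy(_._1).map { case (user, pairs) =>
    (user, pairs.map(_._2).mkString(",")) // one comma-separated skill list per user
  }
}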

How to use SQL "LIKE" operator in SLICK

Maybe a silly question, but I have not found an answer so far: how do you express SQL's LIKE operator in Slick?
Exactly as you normally would!
val query = for {
coffee <- Coffees if coffee.name like "%expresso%"
} yield (coffee.name, coffee.price)
Will generate SQL like
SELECT name, price FROM coffees WHERE NAME like '%expresso%';
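Slick also offers startsWith and endsWith on string columns, which compile down to LIKE with the wildcard added for you, and toLowerCase can be combined with like for a case-insensitive match. A small sketch, assuming a coffees TableQuery in the Slick 3 style:
val byPrefix   = coffees.filter(_.name startsWith "expresso")         // LIKE 'expresso%'
val bySuffix   = coffees.filter(_.name endsWith "expresso")           // LIKE '%expresso'
val containsCI = coffees.filter(_.name.toLowerCase like "%expresso%") // case-insensitive contains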
This is how I got it to work:
// argMap is a Map[String, String] of column name -> value to match (assumed non-empty)
val query = for {
coffee <- coffees if (
argMap.map { case (k, v) =>
coffee.column[String](k) like s"%${v}%" // look up the column dynamically by name
}.reduce(_ && _)
)
} yield(coffee.name)
And then you can run this using your db:
val res = db.run(query.result)
Of course, res is a Future here; you need to await it (or map over it) to get the actual result.
Suppose you have a table named logs with 3 fields:
id
message
datetime
You want to perform a LIKE operation. It will be:
def data(data: ReqData): Future[Seq[Syslog]] = {
  db.run( // db is the Slick Database instance in scope
    sysLogTable
      .filter(_.datetime >= data.datetimeFrom)
      .filter(_.datetime <= data.datetimeUntil)
      .filter(_.message like s"%${data.phrase}%")
      .result
  )
}
Note: for sysLogTable
val sysLogTable: TableQuery[SyslogsTable] = TableQuery[SyslogsTable]
class SyslogsTable(tag: Tag) extends Table[Syslog](tag, "logs") {
def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
def message = column[String]("message")
def datetime = column[Timestamp]("date")
def * = (id.?, message, datetime) <> ((Syslog.apply _).tupled, Syslog.unapply)
}
Note: for Syslog case class
case class Syslog(
id: Option[Long],
message: String,
datetime: Timestamp
)
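ReqData isn't shown in the answer; a guess at its shape, inferred from the fields used above, together with a usage sketch:
// Assumed shape of ReqData, inferred from data.datetimeFrom / datetimeUntil / phrase.
case class ReqData(
  datetimeFrom: Timestamp,
  datetimeUntil: Timestamp,
  phrase: String
)

// Usage sketch: rows from 2016 whose message contains "timeout".
val rows: Future[Seq[Syslog]] = data(ReqData(
  Timestamp.valueOf("2016-01-01 00:00:00"),
  Timestamp.valueOf("2016-12-31 23:59:59"),
  "timeout"
))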