Scala meaning of tilde - scala

Hi, I'm new to Scala and have a problem with the following example:
import scala.slick.driver.MySQLDriver.simple._

case class Customer(id: Option[Long], firstName: String, lastName: String, birthday: Option[java.util.Date])

/**
 * Mapped customers table object.
 */
object Customers extends Table[Customer]("customers") {

  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def firstName = column[String]("first_name")
  def lastName = column[String]("last_name")
  def birthday = column[java.util.Date]("birthday", O.Nullable)

  def * = id.? ~ firstName ~ lastName ~ birthday.? <> (Customer, Customer.unapply _)

  implicit val dateTypeMapper = MappedTypeMapper.base[java.util.Date, java.sql.Date](
    { ud => new java.sql.Date(ud.getTime) },
    { sd => new java.util.Date(sd.getTime) }
  )

  val findById = for {
    id <- Parameters[Long]
    c <- this if c.id is id
  } yield c
}
What is the meaning of this line:
def * = id.? ~ firstName ~ lastName ~ birthday.? <> (Customer, Customer.unapply _)
How should I interpret the tilde signs and the question marks?

You're looking at a Slick Table definition that follows the Slick 1.0 style of defining the table's default projection via the method named *. The ~s combine the Columns into the default view returned by queries, in a kind of projection-builder pattern. The ?s mark the columns that correspond to Option fields in the Customer class, and <> is a method on the Projection trait. You can think of <> as the two-way mapping used to take things out of, or put things into, the database for a Customer. If you have something that doesn't map well, for example if that Table didn't have the implicit dateTypeMapper, the <> functions are where you would manually adjust the values going into and coming out of the Customer case class for the Date conversion.
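To make the two directions concrete, here is an untested sketch (Slick 1.x) of the same default projection with the pair (Customer, Customer.unapply _) written out as explicit functions:
// Equivalent to <> (Customer, Customer.unapply _), with the mapping functions spelled out:
def * = id.? ~ firstName ~ lastName ~ birthday.? <> (
  // read side: projected column values -> Customer
  (id: Option[Long], first: String, last: String, bday: Option[java.util.Date]) =>
    Customer(id, first, last, bday),
  // write side: Customer -> Option of the projected column values (used for inserts and updates)
  (c: Customer) => Some((c.id, c.firstName, c.lastName, c.birthday))
)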
Honestly, finding out where these methods come from is easier inside an IDE because the docs don't describe the class details and there are a lot of classes in the Slick scaladocs.
Here's a link to the 1.0.1 Lifted Embedded documentation.

Related

Using the Parser API with nullable columns in Anorm 2.4

I'm really struggling to get rid of deprecation warnings now that I've upgraded to Anorm 2.4. I've had a look at How to handle null in Anorm but it didn't help me enough.
Let's take a simple example: the account database table:
id (bigint not null)
email_address (varchar not null)
first_name (varchar)
last_name (varchar)
I could have 2 functions in my Scala code: getAccountOfId and getAccountsOfLastName.
getAccountOfId returns 0 or 1 account, therefore Option[(Long, String, Option[String], Option[String])] to keep our example simple
getAccountsOfLastName returns a list of accounts (which could potentially have a size of 0), therefore List[(Long, String, Option[String], String)] to keep our example simple
Part of the code of these 2 functions:
def getAccountOfId(id: Long): Option[(Long, String, Option[String], Option[String])] = {
  DB.withConnection { implicit c =>
    val query = """select email_address, first_name, last_name
                   from account
                   where id = {id};"""
    /* Rest of the code that I struggle with unless I use deprecated functions */
  }
}

def getAccountsOfLastName(lastName: String): List[(Long, String, Option[String], String)] = {
  DB.withConnection { implicit c =>
    val query = """select id, email_address, first_name
                   from account
                   where last_name = {lastName};"""
    /* Rest of the code that I struggle with unless I use deprecated functions */
  }
}
I want the "rest of the code" in these 2 functions to be based on Anorm's Parser API.
Not sure if this helps, but using Anorm 2.4, I have a case class which looks like this:
final case class Role(id: Int,
                      label: String,
                      roletype: Int,
                      lid: Option[Int],
                      aid: Option[Int],
                      created: DateTime,
                      modified: DateTime)
and then just have a parser combinator for it which looks like this:
val roleOptionRowParser = int("id") ~ str("label") ~ int("roletype") ~ (int("lid") ?) ~ (int("aid") ?) ~
  get[DateTime]("created") ~ get[DateTime]("modified") map {
  case id ~ label ~ roletype ~ lid ~ aid ~ created ~ modified ⇒ Some(Role(id, label, roletype, lid, aid, created, modified))
  case _ ⇒ None
}
So you basically just parse out the optional fields using the ? combinator, then match on what you extract from the SQL result row. You can then apply this to queries in the following way:
SQL(s"""
| select * from $source
| where $clause
""".stripMargin).on(params : _*).as(rowParser.single).get
where 'rowParser' in this case is just a reference to the roleOptionRowParser defined in the last lump of code.
If you have multiple rows returned from your query (or expect to have multiple rows) then you can apply the same combinators (such as ? or *) before passing them through to the 'as' function like this:
SQL(s"""
| select * from $source
| where $clause
""".stripMargin).on(params : _*).as(rowParser *).flatten
or
SQL(s"""
| select * from $source
| where $clause
""".stripMargin).on(params : _*).as(rowParser ?).flatten
Ah - forgot to mention that the 'flatten' on the end is there because my parser in this example returns an Option[Role], depending on whether all the necessary column values are present in the returned row (this bit):
case id~label~roletype~lid~vid~created~modified ⇒ Some(Role(id, label, roletype, lid, vid, created, modified))
So, when multiple rows are returned, I just apply 'flatten' to uplift out of the Option type so that I end up with a list of actual 'Role' instances.
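To illustrate what flatten does here, in plain Scala (Role is a stand-in, not the real class):
case class Role(id: Int)                                    // stand-in for the real Role
val parsed: List[Option[Role]] = List(Some(Role(1)), None, Some(Role(2)))
val roles: List[Role] = parsed.flatten                      // List(Role(1), Role(2)): Nones dropped, Somes unwrapped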
Cheers,
HTH.
Turns out it was easy:
def getAccountOfId(id: Long): Option[(Long, String, Option[String], Option[String])] = {
  DB.withConnection { implicit c =>
    val query = """select email_address, first_name, last_name
                   from account
                   where id = {id};"""
    val rowParser = str("email_address") ~ (str("first_name") ?) ~ (str("last_name") ?) map {
      case emailAddress ~ firstNameOpt ~ lastNameOpt => (id, emailAddress, firstNameOpt, lastNameOpt)
    }
    SQL(query).on("id" -> id).as(rowParser.singleOpt)
  }
}

def getAccountsOfLastName(lastName: String): List[(Long, String, Option[String], String)] = {
  DB.withConnection { implicit c =>
    val query = """select id, email_address, first_name
                   from account
                   where last_name = {lastName};"""
    val rowParser = long("id") ~ str("email_address") ~ (str("first_name") ?) map {
      case id ~ emailAddress ~ firstNameOpt => (id, emailAddress, firstNameOpt, lastName)
    }
    SQL(query).on("lastName" -> lastName).as(rowParser.*)
  }
}

strategy for loading related entities with slick 2

I am using Play 2.3 with Slick 2.1.
I have two related entities, Message and User (a simplified example domain). Messages are written by users.
A recommended way (the only way?) of expressing such a relation is with an explicit userId in Message.
My classes and table mappings look like this:
case class Message(
  text: String,
  userId: Int,
  date: Timestamp = new Timestamp(new Date().getTime()),
  id: Option[Int] = None)

case class User(
  userName: String,
  displayName: String,
  passHash: String,
  creationDate: Timestamp = new Timestamp(new Date().getTime()),
  lastVisitDate: Option[Timestamp] = None,
  // etc
  id: Option[Int] = None)
class MessageTable(tag: Tag) extends Table[Message](tag, "messages") {
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def text = column[String]("text")
  def userId = column[Int]("user_id")
  def date = column[Timestamp]("creation_date")
  def * = (text, userId, date, id.?) <> (Message.tupled, Message.unapply)
  def author = foreignKey("message_user_fk", userId, TableQuery[UserTable])(_.id)
}
class UserTable(tag: Tag) extends Table[User](tag, "USER") {
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def username = column[String]("username")
  def passHash = column[String]("password")
  def displayname = column[String]("display_name")
  def * = (username, passHash, created, lastvisit, ..., id.?) <> (User.tupled, User.unapply)
}
And a convenient helper object:
object db {
object users extends TableQuery(new UserTable(_)) {
// helper methods, compiled queries
}
object messages extends TableQuery(new MessageTable(_)) {
// helper methods, compiled queries
}
}
Now, this is all perfect internally, but if I want to display an actual message, I want the message class to be able to return its author's name when used in templates.
Here are my considerations:
I don't want (and wasn't able to, anyway) to pass an implicit Slick Session around where it doesn't belong, such as the templating engine and model classes
I'd like to avoid requesting additional data for messages one by one in this particular case
I am more familiar with Hibernate than Slick; in Hibernate I'd use join-fetching. With Slick, the best idea I came up with is to use another data holder class for display:
class LoadedMessage(user: User, message: Message) {
  // getters
  def text = message.text
  def date = message.date
  def author = user.displayName
}

object LoadedMessage {
  def apply(u: User, m: Message) = new LoadedMessage(u, m)
}
and populate it with the results of a join query:
val messageList: List[LoadedMessage] = (
  for (
    u <- db.users;
    m <- db.messages if u.id === m.userId
  ) yield (u, m))
  .sortBy({ case (u, m) => m.date })
  .drop(n)
  .take(amount)
  .list.map { case (u, m) => LoadedMessage(u, m) }
and then pass it wherever it's needed. My concern is with this extra class: it's not very DRY, it adds an unnecessary conversion (which it doesn't seem I can make implicit), and quite a bit of boilerplate.
What is the common approach?
Is there a way to cut down on extra classes and actually make a model able to return its associations?
Following my comment:
How to handle join results is, in my opinion, a matter of personal taste; your approach is what I would also use. You have an ad hoc data structure which encapsulates your data and can easily be passed around (for example to views) and accessed.
Two other approaches which come to mind are:
Query for fields instead of objects. It's less legible, and I usually hate working with tuples (or triples in this case) because I find the notation _._1 far less readable than MyClass.myField. This means doing something like this:
val messageList = (
  for (
    u <- db.users;
    m <- db.messages if u.id === m.userId
  ) yield (u.displayName, m.date, m.text))
  .sortBy({ case (name, date, text) => date })
  .drop(n)
  .take(amount)
  .list()
This returns a triple which can then be passed to your view; possible, but something I would never do.
Another option is to pass the tuple of objects, which is very similar to your approach except for the last map:
val messageList: List[(User, Message)] = (
  for (
    u <- db.users;
    m <- db.messages if u.id === m.userId
  ) yield (u, m))
  .sortBy({ case (u, m) => m.date })
  .drop(n)
  .take(amount)
  .list()
Here you can pass something like this to your view:
#(messagesAndAuthors: List[(User, Message)])
and then access the data using the tuple and class accessors, but again your template would be a collection of _1s, which is horrible to read; plus you have to do something like messagesAndAuthors._1.name just to get one value.
In the end I prefer passing variables to my views as clean as I can (I guess this is one of the few universally accepted principles in computer science) and hiding the logic in the models; for that, an ad hoc case class is the way to go in my opinion. Case classes are a very powerful tool and it's nice to have them. It may look less DRY and more verbose than other approaches (like the Hibernate one you mentioned), but when another developer reads your code it will be as easy to follow as it can get, and that small extra class will save hours of head-banging.
For the Session trouble, see this related question; at the moment it is not possible to avoid passing it around (as far as I know), but you should be able to contain the situation and avoid passing the Session to your templates. I usually create a session in my entry point (most of the time a controller) and pass it to the model.
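A rough sketch of that arrangement (untested; it assumes the play-slick plugin for Play 2.3 / Slick 2.1, and MessageModel.latest is a hypothetical model method that takes the implicit Session):
import play.api.Play.current
import play.api.db.slick.DB
import play.api.mvc.{Action, Controller}

object MessagesController extends Controller {
  def recent(n: Int, amount: Int) = Action {
    DB.withSession { implicit session =>
      // the Session lives only inside the controller...
      val msgs: List[LoadedMessage] = MessageModel.latest(n, amount)
      // ...and the template only ever sees plain data
      Ok(views.html.messages(msgs))
    }
  }
}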
Note: the code is untested, there may be some mistakes.

Scala Slick: Issues with groupBy and missing shapes

I'm trying to use Slick to query a many-to-many relationship, but I'm running into a variety of errors, the most prominent being "Don't know how to unpack (User, Skill) to T and pack to G".
The structure of the tables is similar to the following:
case class User(val name: String, val picture: Option[URL], val id: Option[UUID])

object Users extends Table[User]("users") {
  def name = column[String]("name")
  def picture = column[Option[URL]]("picture")
  def id = column[UUID]("id")
  def * = name ~ picture ~ id.? <> (User, User.unapply _)
}

case class Skill(val name: String, val id: Option[UUID])

object Skills extends Table[Skill]("skill") {
  def name = column[String]("name")
  def id = column[UUID]("id")
  def * = name ~ id.? <> (Skill, Skill.unapply _)
}

case class UserSkill(val userId: UUID, val skillId: UUID, val id: Option[UUID])

object UserSkills extends Table[UserSkill]("user_skill") {
  def userId = column[UUID]("userId")
  def skillId = column[UUID]("skillId")
  def id = column[UUID]("id")
  def * = userId ~ skillId ~ id.? <> (UserSkill, UserSkill.unapply _)
  def user = foreignKey("userFK", userId, Users)(_.id)
  def skill = foreignKey("skillFK", skillId, Skills)(_.id)
}
Ultimately, what I want to achieve is something of the form
SELECT u.*, group_concat(s.name) FROM user_skill us, users u, skills s WHERE us.skillId = s.id && us.userId = u.id GROUP BY u.id
but before I spend the time trying to get group_concat to work as well, I have been trying to produce the simpler query (which I believe is still valid...)
SELECT u.* FROM user_skill us, users u, skills s WHERE us.skillId = s.id && us.userId = u.id GROUP BY u.id
I've tried a variety of scala code to produce this query, but an example of what causes the shape error above is
(for {
  us <- UserSkills
  user <- us.user
  skill <- us.skill
} yield (user, skill)).groupBy(_._1.id).map { case (_, xs) => xs.first }
Similarly, the following produces a packing error regarding "User" instead of "(User, Skill)"
(for {
  us <- UserSkills
  user <- us.user
  skill <- us.skill
} yield (user, skill)).groupBy(_._1.id).map { case (_, xs) => xs.map(_._1).first }
If anyone has any suggestions, I would be very grateful: I've spent most of today and yesterday scouring Google and the Google Groups as well as the Slick source, but I haven't found a solution yet.
(Also, I'm using Postgres, so group_concat would actually be string_agg.)
EDIT
So it seems that the mapped projection is what causes trouble when groupBy is used. Something like
(for {
  us <- UserSkills
  u <- us.user
  s <- us.skill
} yield (u, s)).map(_._1)
works fine because _._1 gives the type Users, which has a Shape since Users is a table. However, when we call xs.first (as we do after groupBy), we actually get back the mapped projection type (User, Skill), or, if we apply map(_._1) first, the type User, which is not Users! As far as I can tell, there is no Shape with User as the mixed type, because the only shapes defined are Shape[Column[T], T, Column[T]] and, for a table T <: TableNode, Shape[T, NothingContainer#TableNothing, T], as defined in slick.lifted.Shape. Furthermore, if I do something like
(for {
  us <- UserSkills
  u <- us.user
  s <- us.skill
} yield (u, s))
  .groupBy(_._1.id)
  .map { case (_, xs) => xs.map(_._1.id).first }
I get a strange error of the form "NoSuchElementException: key not found: #1515100893", where the numeric key value changes each time. This is not the query I want, but it is a strange issue nonetheless.
I've run up against similar situations as well. While I love working with Scala and Slick, I do believe there are times when it is easier to denormalize an object in the database itself and link the Slick Table to a view.
For example, I have an application that has a Tree object that is normalized into several database tables. Since I'm comfortable with SQL, I think it is a cleaner solution than writing a plain Scala Slick query. The Scala code:
case class DbGFolder(id: String,
                     eTag: String,
                     url: String,
                     iconUrl: String,
                     title: String,
                     owner: String,
                     parents: Option[String],
                     children: Option[String],
                     scions: Option[String],
                     created: LocalDateTime,
                     modified: LocalDateTime)

object DbGFolders extends Table[DbGFolder]("gfolder_view") {
  def id = column[String]("id")
  def eTag = column[String]("e_tag")
  def url = column[String]("url")
  def iconUrl = column[String]("icon_url")
  def title = column[String]("title")
  def owner = column[String]("file_owner")
  def parents = column[String]("parent_str")
  def children = column[String]("child_str")
  def scions = column[String]("scion_str")
  def created = column[LocalDateTime]("created")
  def modified = column[LocalDateTime]("modified")

  def * = id ~ eTag ~ url ~ iconUrl ~ title ~ owner ~ parents.? ~
    children.? ~ scions.? ~ created ~ modified <> (DbGFolder, DbGFolder.unapply _)

  def findAll(implicit s: Session): List[GFolder] = {
    Query(DbGFolders).list().map { v =>
      GFolder(id       = v.id,
              eTag     = v.eTag,
              url      = v.url,
              iconUrl  = v.iconUrl,
              title    = v.title,
              owner    = v.owner,
              parents  = v.parents.map { parentStr =>
                           parentStr.split(",").toSet }.getOrElse(Set()),
              children = v.children.map { childStr =>
                           childStr.split(",").toSet }.getOrElse(Set()),
              scions   = v.scions.map { scionStr =>
                           scionStr.split(",").toSet }.getOrElse(Set()),
              created  = v.created,
              modified = v.modified)
    }
  }
}
And the underlying (postgres) view:
CREATE VIEW scion_view AS
  WITH RECURSIVE scions(id, scion) AS (
    SELECT c.id, c.child
    FROM children AS c
    UNION ALL
    SELECT s.id, c.child
    FROM children AS c, scions AS s
    WHERE c.id = s.scion)
  SELECT * FROM scions ORDER BY id, scion;

CREATE VIEW gfolder_view AS
  SELECT
    f.id, f.e_tag, f.url, f.icon_url, f.title, m.name, f.file_owner,
    p.parent_str, c.child_str, s.scion_str, f.created, f.modified
  FROM
    gfiles AS f
    JOIN mimes AS m ON (f.mime_type = m.name)
    LEFT JOIN (SELECT DISTINCT id, string_agg(parent, ',' ORDER BY parent) AS parent_str
               FROM parents GROUP BY id) AS p ON (f.id = p.id)
    LEFT JOIN (SELECT DISTINCT id, string_agg(child, ',' ORDER BY child) AS child_str
               FROM children GROUP BY id) AS c ON (f.id = c.id)
    LEFT JOIN (SELECT DISTINCT id, string_agg(scion, ',' ORDER BY scion) AS scion_str
               FROM scion_view GROUP BY id) AS s ON (f.id = s.id)
  WHERE
    m.category = 'folder';
Try this; I hope it yields what you expected. You'll find the Slick code below the case classes.
See the Slick documentation on the lifted embedding for reference.
case class User(val name: String, val picture: Option[URL], val id: Option[UUID])

class Users(_tableTag: Tag) extends Table[User](_tableTag, "users") {
  def name = column[String]("name")
  def picture = column[Option[URL]]("picture")
  def id = column[UUID]("id")
  def * = (name, picture, id.?) <> (User.tupled, User.unapply)
}

lazy val userTable = new TableQuery(tag => new Users(tag))

case class Skill(val name: String, val id: Option[UUID])

class Skills(_tableTag: Tag) extends Table[Skill](_tableTag, "skill") {
  def name = column[String]("name")
  def id = column[UUID]("id")
  def * = (name, id.?) <> (Skill.tupled, Skill.unapply)
}

lazy val skillTable = new TableQuery(tag => new Skills(tag))

case class UserSkill(val userId: UUID, val skillId: UUID, val id: Option[UUID])

class UserSkills(_tableTag: Tag) extends Table[UserSkill](_tableTag, "user_skill") {
  def userId = column[UUID]("userId")
  def skillId = column[UUID]("skillId")
  def id = column[UUID]("id")
  def * = (userId, skillId, id.?) <> (UserSkill.tupled, UserSkill.unapply)
  def user = foreignKey("userFK", userId, userTable)(_.id)
  def skill = foreignKey("skillFK", skillId, skillTable)(_.id)
}

lazy val userSkillTable = new TableQuery(tag => new UserSkills(tag))

(for {
  ((userSkill, user), skill) <- userSkillTable join userTable on (_.userId === _.id) join
                                skillTable on (_._1.skillId === _.id)
} yield (userSkill, user, skill)).groupBy(_._2.id)

Trouble updating a record with Slick

With a class and table definition looking like this:
case class Group(
  id: Long = -1,
  id_parent: Long = -1,
  label: String = "",
  description: String = "")

object Groups extends Table[Group]("GROUPS") {
  def id = column[Long]("ID", O.PrimaryKey, O.AutoInc)
  def id_parent = column[Long]("ID_PARENT")
  def label = column[String]("LABEL")
  def description = column[String]("DESC")

  def * = id ~ id_parent ~ label ~ description <> (Group, Group.unapply _)

  def autoInc = id_parent ~ label ~ description returning id into {
    case ((_, _, _), id) => id
  }
}
To update a record, I can do this:
def updateGroup(id: Long) = Groups.where(_.id === id)
def updateGroup(g: Group)(implicit session: Session) = updateGroup(g.id).update(g)
But I can't get updates to work using for expressions:
val findGById = for {
  id <- Parameters[Long]
  g <- Groups; if g.id === id
} yield g
def updateGroupX(g: Group)(implicit session: Session) = findGById(g.id).update(g)
----------------------------------------------------------------------------^
Error: value update is not a member of scala.slick.jdbc.MutatingUnitInvoker[com.exp.Group]
I'm obviously missing something in the documentation.
The update method is supplied by the type UpdateInvoker. An instance of that type can be implicitly created from a Query by the methods productQueryToUpdateInvoker and/or tableQueryToUpdateInvoker (found in the BasicProfile), if they are in scope.
Now, the type of your findGById method is not a Query but a BasicQueryTemplate[Long, Group]. Looking at the docs, I can find no way to get from a BasicQueryTemplate (which is a subtype of StatementInvoker) to an UpdateInvoker, neither implicit nor explicit. Thinking about it, that makes some sense to me: I understand a query template (invoker) to be something that has already been "compiled" from an abstract syntax tree (Query) to a prepared statement rather early, before parameterization, whereas an update invoker can only be built from an abstract syntax tree, i.e. a Query object, because it needs to analyze the query and extract its parameters/columns. At least that's the way it appears to work at present.
With that in mind, a possible solution unfolds:
def findGById(id: Long) = for {
  g <- Groups; if g.id === id
} yield g
def updateGroupX(g: Group)(implicit session: Session) = findGById(g.id).update(g)
Here findGById(id: Long) has the type Query[Groups, Group], which is converted by productQueryToUpdateInvoker to an UpdateInvoker[Group] on which the update method can finally be called.
Hope this helped.
Refer to http://madnessoftechnology.blogspot.ru/2013/01/database-record-updates-with-slick-in.html
I was stuck on updating today, and this blog post helped me a lot. Also refer to the first comment under the post.

Describing optional fields in Slick

The Slick DSL allows two ways to create optional fields in tables.
For this case class:
case class User(id: Option[Long] = None, fname: String, lname: String)
You can create a table mapping in one of the following ways:
object Users extends Table[User]("USERS") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def fname = column[String]("FNAME")
  def lname = column[String]("LNAME")
  def * = id.? ~ fname ~ lname <> (User, User.unapply _)
}
and
object Users extends Table[User]("USERS") {
  def id = column[Option[Long]]("id", O.PrimaryKey, O.AutoInc)
  def fname = column[String]("FNAME")
  def lname = column[String]("LNAME")
  def * = id ~ fname ~ lname <> (User, User.unapply _)
}
What is the difference between the two? Is one the old way and the other the new way, or do they serve different purposes?
I prefer the second choice, where you make the id optional as part of its column definition, because it's more consistent.
The .? operator in the first one lets you defer the choice of whether a field is optional until you define your projections. Sometimes that's not what you want, but defining your PK to be an Option is perhaps a bit odd, because one would expect a PK to be NOT NULL.
You can use .? in additional projections besides *, for example:
def partial = id.? ~ fname
Then you could do Users.partial.insert(None, "Jacobus") and not worry about fields you're not interested in.
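As a minimal usage sketch (untested, Slick 1.x, assuming the driver's simple._ is imported and a Database instance called db is available):
db.withSession { implicit session: Session =>
  // only the id and fname columns take part in this insert; id is auto-generated
  Users.partial.insert(None, "Jacobus")
}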