How to return a sequence-generated id - Scala

In Scala Slick, if you are not using an auto-incremented id but rather a sequence generation strategy for the id, how do you return that id?

Let's say you have the following case class and Slick table:
case class User(id: Option[Int], first: String, last: String)
object Users extends Table[User]("users") {
def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
def first = column[String]("first")
def last = column[String]("last")
def * = id.? ~ first ~ last <> (User, User.unapply _)
}
The important thing to consider here is that User.id is an Option, because when we create a user we set it to None and the DB generates the number for it.
Now you need to define a new insert mapping which omits the auto-incremented column. This is needed because some databases don't allow you to insert into a column that is marked as auto-incrementing. So instead of:
INSERT INTO users VALUES (NULL, "first", "last")
Slick will generate:
INSERT INTO users (first, last) VALUES ("first", "last")
The mapping looks like this (which must be placed inside Users):
def forInsert = first ~ last <> ({ t => User(None, t._1, t._2)}, { (u: User) => Some((u.first, u.last))})
Finally, getting the auto-generated id is simple. We only need to specify the id column in returning:
val userId = Users.forInsert returning Users.id insert User(None, "First", "Last")
Or you could instead move the returning statement:
def forInsert = first ~ last <> ({ t => User(None, t._1, t._2)}, { (u: User) => Some((u.first, u.last))}) returning id
And simplify your insert calls:
val userId = Users.forInsert insert User(None, "First", "Last")
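For completeness, here is roughly how the call site might look (a minimal sketch, assuming Slick 1.x sessions; the H2 URL is purely illustrative):
import scala.slick.driver.H2Driver.simple._
import scala.slick.session.Database
import Database.threadLocalSession

val db = Database.forURL("jdbc:h2:mem:test", driver = "org.h2.Driver")
val generatedId: Int = db withSession {
  // forInsert was defined with "returning id", so insert yields the new id
  Users.forInsert insert User(None, "First", "Last")
}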

Related

Using the Parser API with nullable columns in Anorm 2.4

I'm really struggling to get rid of deprecation warnings now that I've upgraded to Anorm 2.4. I've had a look at "How to handle null in Anorm" but it didn't help me enough.
Let's take a simple example: the account database table:
id (bigint not null)
email_address (varchar not null)
first_name (varchar)
last_name (varchar)
I could have 2 functions in my Scala code: getAccountOfId and getAccountsOfLastName.
getAccountOfId returns 0 or 1 account, therefore Option[(Long, String, Option[String], Option[String])] to keep our example simple
getAccountsOfLastName returns a list of accounts (which could potentially have a size of 0), therefore List[(Long, String, Option[String], String)] to keep our example simple
Part of the code of these 2 functions:
def getAccountOfId(id: Long): Option[(Long, String, Option[String], Option[String])] = {
DB.withConnection { implicit c =>
val query = """select email_address, first_name, last_name
from account
where id = {id};"""
/* Rest of the code that I struggle with unless I use deprecated functions */
}
}
def getAccountsOfLastName(lastName: String): List[(Long, String, Option[String], String)] = {
DB.withConnection { implicit c =>
val query = """select id, email_address, first_name
from account
where last_name = {lastName};"""
/* Rest of the code that I struggle with unless I use deprecated functions */
}
}
I want the "rest of the code" in these 2 functions to be based on Anorm's Parser API.
Not sure if this helps, but using Anorm 2.4, I have a case class which looks like this:
final case class Role(id: Int,
label: String,
roletype: Int,
lid: Option[Int],
aid: Option[Int],
created: DateTime,
modified: DateTime)
and then just have parser combinator for it which looks like this:
val roleOptionRowParser = int("id") ~ str("label") ~ int("roletype") ~ (int("lid")?) ~ (int("aid")?) ~ get[DateTime]("created") ~
get[DateTime]("modified") map {
case id ~ label ~ roletype ~ lid ~ aid ~ created ~ modified => Some(Role(id, label, roletype, lid, aid, created, modified))
case _ => None
}
so you basically just parse out using the ? combinator for optional fields, then match based on what you extract from the SQL result row. You can then apply this to queries in the following way:
SQL(s"""
| select * from $source
| where $clause
""".stripMargin).on(params : _*).as(rowParser.single).get
where 'rowParser' in this case is just a reference to the roleOptionRowParser defined in the last lump of code.
If you have multiple rows returned from your query (or expect to have multiple rows) then you can apply the same combinators (such as ? or *) before passing them through to the 'as' function like this:
SQL(s"""
| select * from $source
| where $clause
""".stripMargin).on(params : _*).as(rowParser *).flatten
or
SQL(s"""
| select * from $source
| where $clause
""".stripMargin).on(params : _*).as(rowParser ?).flatten
Ah - forgot to mention that the 'flatten' on the end is there because my parser in this example returns an Option[Role], depending on whether all the necessary column values are present in the returned row (this bit):
case id ~ label ~ roletype ~ lid ~ aid ~ created ~ modified => Some(Role(id, label, roletype, lid, aid, created, modified))
So, when multiple rows are returned, I just apply 'flatten' to uplift out of the Option type so that I end up with a list of actual 'Role' instances.
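For instance (a tiny illustration, with role1 and role2 as stand-in values):
// flatten drops the Nones and unwraps the Somes
List(Some(role1), None, Some(role2)).flatten // => List(role1, role2)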
Cheers,
HTH.
Turns out it was easy:
def getAccountOfId(id: Long): Option[(Long, String, Option[String], Option[String])] = {
DB.withConnection { implicit c =>
val query = """select email_address, first_name, last_name
from account
where id = {id};"""
val rowParser = str("email_address") ~ (str("first_name") ?) ~ (str("last_name") ?) map {
case emailAddress ~ firstNameOpt ~ lastNameOpt => (id, emailAddress, firstNameOpt, lastNameOpt)
}
SQL(query).on("id" -> id).as(rowParser.singleOpt)
}
}
def getAccountsOfLastName(lastName: String): List[(Long, String, Option[String], String)] = {
DB.withConnection { implicit c =>
val query = """select id, email_address, first_name
from account
where last_name = {lastName};"""
val rowParser = long("id") ~ str("email_address") ~ (str("first_name") ?) map {
case id ~ emailAddress ~ firstNameOpt => (id, emailAddress, firstNameOpt, lastName)
}
SQL(query).on("lastName" -> lastName).as(rowParser.*)
}
}
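A hypothetical call site for the two functions, assuming a running Play application with a configured default database:
val account = getAccountOfId(42L)           // Option[(Long, String, Option[String], Option[String])]
val smiths = getAccountsOfLastName("Smith") // List[(Long, String, Option[String], String)]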

Play Slick 2.1.0: This DBMS allows only a single AutoInc column to be returned from an INSERT

In the following code I can insert my records just fine. But I would really like to get back the ID of the inserted value so that I can then return the object as part of my response.
def postEntry = DBAction { request =>
request.body.asJson.map {json =>
json.validate[(String, Long, String)].map {
case (name, age, lang) => {
implicit val session = request.dbSession
val something = Entries += Entry(None, name, age, lang)
Ok("Hello!!!: " + something)
}
}.recoverTotal {
e => BadRequest("Detected error: " + JsError.toFlatJson(e))
}
}.getOrElse {
BadRequest("Expecting Json data")
}
}
So I tried changing the insert to:
val something = (Entries returning Entries.map(_.id)) += Entry(None, name, age, lang)
But I get the following exception:
SlickException: This DBMS allows only a single AutoInc column to be returned from an INSERT
There is a note about it here: http://slick.typesafe.com/doc/2.1.0/queries.html
"Note that many database systems only allow a single column to be returned which must be the table’s auto-incrementing primary key. If you ask for other columns a SlickException is thrown at runtime (unless the database actually supports it)."
But it doesn't say how to just request the ID column.
Ende Nue above gave me the hint to find the problem: I needed to have the column marked as primary key and auto-increment in the table definition.
class Entries(tag: Tag) extends Table[Entry](tag, "entries") {
def id = column[Option[Long]]("id", O.PrimaryKey, O.AutoInc)
def name = column[String]("name")
def age = column[Long]("age")
def lang = column[String]("lang")
def * = (id, name, age, lang).shaped <> ((Entry.apply _).tupled, Entry.unapply _)
}
The key was marking the id column with O.PrimaryKey and O.AutoInc.
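With the column flagged this way, the returning-style insert from the question works as intended (a sketch mirroring the earlier snippet; the literal values are illustrative and an implicit session is assumed):
// id is Option[Long] here, so the generated key comes back as an Option
val newId: Option[Long] = (Entries returning Entries.map(_.id)) += Entry(None, "alice", 30L, "scala")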

Scala Slick: Issues with groupBy and missing shapes

I'm trying to use Slick to query a many-to-many relationship, but I'm running into a variety of errors, the most prominent being "Don't know how to unpack (User, Skill) to T and pack to G".
The structure of the tables is similar to the following:
case class User(val name: String, val picture: Option[URL], val id: Option[UUID])
object Users extends Table[User]("users") {
def name = column[String]("name")
def picture = column[Option[URL]]("picture")
def id = column[UUID]("id")
def * = name ~ picture ~ id.? <> (User, User.unapply _)
}
case class Skill(val name: String, val id: Option[UUID])
object Skills extends Table[Skill]("skill") {
def name = column[String]("name")
def id = column[UUID]("id")
def * = name ~ id.? <> (Skill, Skill.unapply _)
}
case class UserSkill(val userId: UUID, val skillId: UUID, val id: Option[UUID])
object UserSkills extends Table[UserSkill]("user_skill") {
def userId = column[UUID]("userId")
def skillId = column[UUID]("skillId")
def id = column[UUID]("id")
def * = userId ~ skillId ~ id.? <> (UserSkill, UserSkill.unapply _)
def user = foreignKey("userFK", userId, Users)(_.id)
def skill = foreignKey("skillFK", skillId, Skills)(_.id)
}
Ultimately, what I want to achieve is something of the form
SELECT u.*, group_concat(s.name) FROM user_skill us, users u, skill s WHERE us.skillId = s.id AND us.userId = u.id GROUP BY u.id
but before I spend the time trying to get group_concat to work as well, I have been trying to produce the simpler query (which I believe is still valid...)
SELECT u.* FROM user_skill us, users u, skill s WHERE us.skillId = s.id AND us.userId = u.id GROUP BY u.id
I've tried a variety of scala code to produce this query, but an example of what causes the shape error above is
(for {
us <- UserSkills
user <- us.user
skill <- us.skill
} yield (user, skill)).groupBy(_._1.id).map { case(_, xs) => xs.first }
Similarly, the following produces a packing error regarding "User" instead of "(User, Skill)"
(for {
us <- UserSkills
user <- us.user
skill <- us.skill
} yield (user, skill)).groupBy(_._1.id).map { case(_, xs) => xs.map(_._1).first }
If anyone has any suggestions, I would be very grateful: I've spent most of today and yesterday scouring Google/Google Groups as well as the Slick source, but I haven't found a solution yet.
(Also, I'm using Postgres, so group_concat would actually be string_agg.)
EDIT
So it seems like when groupBy is used, the mapped projection gets applied because something like
(for {
us <- UserSkills
u <- us.user
s <- us.skill
} yield (u,s)).map(_._1)
works fine: _._1 gives the type Users, which has a Shape, since Users is a table. However, when we call xs.first (as we do when we call groupBy), we actually get back a mapped projection type, (User, Skill); or, if we apply map(_._1) first, we get the type User, which is not Users! As far as I can tell, there is no Shape with User as the mixed type, because the only shapes defined are Shape[Column[T], T, Column[T]] and, for a table T <: TableNode, Shape[T, NothingContainer#TableNothing, T], as defined in slick.lifted.Shape. Furthermore, if I do something like
(for {
us <- UserSkills
u <- us.user
s <- us.skill
} yield (u,s))
.groupBy(_._1.id)
.map { case (_, xs) => xs.map(_._1.id).first }
I get a strange error of the form "NoSuchElementException: key not found: #1515100893", where the numeric key value changes each time. This is not the query I want, but it is a strange issue nonetheless.
I've run up against similar situations as well. While I love working with Scala and Slick, I do believe there are times when it is easier to denormalize an object in the database itself and link the Slick Table to a view.
For example, I have an application that has a Tree object that is normalized into several database tables. Since I'm comfortable with SQL, I think it is a cleaner solution than writing a plain Scala Slick query. The Scala code:
case class DbGFolder(id: String,
eTag: String,
url: String,
iconUrl: String,
title: String,
owner: String,
parents: Option[String],
children: Option[String],
scions: Option[String],
created: LocalDateTime,
modified: LocalDateTime)
object DbGFolders extends Table[DbGFolder]("gfolder_view") {
def id = column[String]("id")
def eTag = column[String]("e_tag")
def url = column[String]("url")
def iconUrl = column[String]("icon_url")
def title = column[String]("title")
def owner = column[String]("file_owner")
def parents = column[String]("parent_str")
def children = column[String]("child_str")
def scions = column[String]("scion_str")
def created = column[LocalDateTime]("created")
def modified = column[LocalDateTime]("modified")
def * = id ~ eTag ~ url ~ iconUrl ~ title ~ owner ~ parents.? ~
children.? ~ scions.? ~ created ~ modified <> (DbGFolder, DbGFolder.unapply _)
def findAll(implicit s: Session): List[GFolder] = {
Query(DbGFolders).list().map {v =>
GFolder(id = v.id,
eTag = v.eTag,
url = v.url,
iconUrl = v.iconUrl,
title = v.title,
owner = v.owner,
parents = v.parents.map { parentStr =>
parentStr.split(",").toSet }.getOrElse(Set()),
children = v.children.map{ childStr =>
childStr.split(",").toSet }.getOrElse(Set()),
scions = v.scions.map { scionStr =>
scionStr.split(",").toSet }.getOrElse(Set()),
created = v.created,
modified = v.modified)
}
}
}
And the underlying (Postgres) views:
CREATE VIEW scion_view AS
WITH RECURSIVE scions(id, scion) AS (
SELECT c.id, c.child
FROM children AS c
UNION ALL
SELECT s.id, c.child
FROM children AS c, scions AS s
WHERE c.id = s.scion)
SELECT * FROM scions ORDER BY id, scion;
CREATE VIEW gfolder_view AS
SELECT
f.id, f.e_tag, f.url, f.icon_url, f.title, m.name, f.file_owner,
p.parent_str, c.child_str, s.scion_str, f.created, f.modified
FROM
gfiles AS f
JOIN mimes AS m ON (f.mime_type = m.name)
LEFT JOIN (SELECT DISTINCT id, string_agg(parent, ',' ORDER BY parent) AS parent_str
FROM parents GROUP BY id) AS p ON (f.id = p.id)
LEFT JOIN (SELECT DISTINCT id, string_agg(child, ',' ORDER BY child) AS child_str
FROM children GROUP BY id) AS c ON (f.id = c.id)
LEFT JOIN (SELECT DISTINCT id, string_agg(scion, ',' ORDER BY scion) AS scion_str
FROM scion_view GROUP BY id) AS s ON (f.id = s.id)
WHERE
m.category = 'folder';
Try this; hopefully it yields what you expected. Find the Slick code below the case classes. See the Slick documentation on the lifted embedding for reference.
case class User(val name: String, val picture: Option[URL], val id: Option[UUID])
class Users(_tableTag: Tag) extends Table[User](_tableTag, "users") {
def name = column[String]("name")
def picture = column[Option[URL]]("picture")
def id = column[UUID]("id")
def * = (name, picture, id.?) <> (User.tupled, User.unapply)
}
lazy val userTable = new TableQuery(tag => new Users(tag))
case class Skill(val name: String, val id: Option[UUID])
class Skills(_tableTag: Tag) extends Table[Skill](_tableTag, "skill") {
def name = column[String]("name")
def id = column[UUID]("id")
def * = (name, id.?) <> (Skill.tupled, Skill.unapply)
}
lazy val skillTable = new TableQuery(tag => new Skills(tag))
case class UserSkill(val userId: UUID, val skillId: UUID, val id: Option[UUID])
class UserSkills(_tableTag: Tag) extends Table[UserSkill](_tableTag, "user_skill") {
def userId = column[UUID]("userId")
def skillId = column[UUID]("skillId")
def id = column[UUID]("id")
def * = (userId, skillId, id.?) <> (UserSkill.tupled, UserSkill.unapply)
def user = foreignKey("userFK", userId, userTable)(_.id)
def skill = foreignKey("skillFK", skillId, skillTable)(_.id)
}
lazy val userSkillTable = new TableQuery(tag => new UserSkills(tag))
(for {
  ((userSkill, user), skill) <- userSkillTable join userTable on (_.userId === _.id) join
    skillTable on (_._1.skillId === _.id)
} yield (userSkill, user, skill)).groupBy(_._2.id)
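Slick's lifted embedding has no counterpart for group_concat/string_agg, but simpler aggregates do compile. As a sketch against the TableQuerys above, here is a grouped query counting skills per user instead of concatenating them:
val skillCounts = (for {
  us <- userSkillTable
  u <- userTable if us.userId === u.id
} yield (u.id, us.skillId))
  .groupBy(_._1)
  .map { case (userId, group) => (userId, group.length) } // length is supported on grouped queries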

[SlickException: Read NULL value for column (USERS /670412212).LOGIN_ID]

I am using Slick 1.0.0 with Play Framework 2.1.0. I am getting the following error when I query my Users table. The value of LOGIN_ID is null in the DB.
The query I am executing is:
val user = { for { u <- Users if u.providerId === id.id } yield u}.first
This results in the following error:
play.api.Application$$anon$1: Execution exception[[SlickException: Read NULL value for column (USERS /670412212).LOGIN_ID]]
at play.api.Application$class.handleError(Application.scala:289) ~[play_2.10.jar:2.1.0]
at play.api.DefaultApplication.handleError(Application.scala:383) [play_2.10.jar:2.1.0]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$24.apply(PlayDefaultUpstreamHandler.scala:314) [play_2.10.jar:2.1.0]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$24.apply(PlayDefaultUpstreamHandler.scala:312) [play_2.10.jar:2.1.0]
at play.api.libs.concurrent.PlayPromise$$anonfun$extend1$1.apply(Promise.scala:113) [play_2.10.jar:2.1.0]
at play.api.libs.concurrent.PlayPromise$$anonfun$extend1$1.apply(Promise.scala:113) [play_2.10.jar:2.1.0]
scala.slick.SlickException: Read NULL value for column (USERS /670412212).LOGIN_ID
at scala.slick.lifted.Column$$anonfun$getResult$1.apply(ColumnBase.scala:29) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.lifted.TypeMapperDelegate$class.nextValueOrElse(TypeMapper.scala:158) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.driver.BasicTypeMapperDelegatesComponent$TypeMapperDelegates$StringTypeMapperDelegate.nextValueOrElse(BasicTypeMapperDelegatesComponent.scala:146) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.lifted.Column.getResult(ColumnBase.scala:28) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.lifted.Projection15.getResult(Projection.scala:627) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.lifted.Projection15.getResult(Projection.scala:604) ~[slick_2.10-1.0.0.jar:1.0.0]
My User table is defined as:
package models
import scala.slick.driver.MySQLDriver.simple._
case class User(userId:String,email:String,loginId:String,fullName:String,firstName:String,lastName:String,location:String,homeTown:String,providerId:String,provider:String,state:String,zip:String,accessKey:String,refreshKey:String,avatarUrl:String)
object Users extends Table[User]("USERS") {
def userId = column[String]("USER_ID", O.PrimaryKey) // This is the primary key column
def email = column[String]("EMAIL",O.NotNull)
def loginId = column[String]("LOGIN_ID",O.Nullable)
def fullName = column[String]("FULL_NAME",O.NotNull)
def firstName = column[String]("FIRST_NAME",O.Nullable)
def lastName = column[String]("LAST_NAME",O.Nullable)
def location = column[String]("LOCATION",O.Nullable)
def homeTown = column[String]("HOME_TOWN",O.Nullable)
def providerId = column[String]("PROVIDER_ID",O.Nullable)
def provider = column[String]("PROVIDER",O.Nullable)
def state = column[String]("STATE",O.Nullable)
def zip = column[String]("ZIP",O.Nullable)
def accessKey = column[String]("ACCESS_KEY",O.Nullable)
def refreshKey = column[String]("REFRESH_KEY",O.Nullable)
def avatarUrl = column[String]("AVATAR_URL",O.Nullable)
// Every table needs a * projection with the same type as the table's type parameter
def * = userId ~ email ~ loginId ~ fullName ~ firstName ~ lastName ~ location ~ homeTown ~ providerId ~ provider ~ state ~ zip ~ accessKey ~ refreshKey ~ avatarUrl <> (User,User.unapply _)
}
Please help. It looks like Slick cannot handle NULL values from the DB?
Your case class is not OK: if a column is O.Nullable, the corresponding property has to be an Option (Option[String] here).
If you get this error, you'll have to either map the O.Nullable columns to Option fields, or specify in the query that the value may be absent (using .?).
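The first option looks roughly like this (a minimal sketch, not the poster's full schema):
// Map the nullable column straight to an Option field
case class User(userId: String, email: String, loginId: Option[String]) // remaining fields omitted
def loginId = column[Option[String]]("LOGIN_ID")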
For example let's say you do a rightJoin you might not want to make the properties of the right record optional. In that case you can customize the way you yield your results using .?
val results = (for {
(left, right) <- rightRecord.table rightJoin leftRecord.table on (_.xId === _.id)
} yield (left.id, right.name.?)).list
results map (r => SomeJoinedRecord(Some(r._1), r._2.getOrElse(default)))
This problem arises when a column declared non-optional receives a NULL at runtime. In the code below, my cust_id is nullable in the schema but holds no NULL values, since a job makes sure it is never null, so the mapping works. However, it is best practice to look at your table structure and define the class accordingly; this avoids nasty runtime exceptions.
If the table definition on database is like:
CREATE TABLE public.perf_test (
dwh_id serial NOT NULL,
cust_id int4 NULL,
cust_address varchar(30) NULL,
partner_id int4 NULL,
CONSTRAINT perf_test_new_dwh_id_key UNIQUE (dwh_id)
);
The corresponding class definition can be as below, though it would be advisable to make cust_id an Option[Int] as well. As long as the column has values and no NULLs, you will not encounter the error.
import slick.jdbc.PostgresProfile.api._
class PerfTest(tag: Tag) extends Table[(Int, Int, Option[String], Option[Int])](tag, "perf_test") {
def dwhId = column[Int]("dwh_id")
def custId = column[Int]("cust_id")
def custAddress = column[Option[String]]("cust_address")
def partnerId = column[Option[Int]]("partner_id")
def * = (dwhId, custId, custAddress,partnerId)
}
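Following that advice, a safer variant of the same mapping might look like this (a sketch; the class name is illustrative):
class PerfTestSafe(tag: Tag) extends Table[(Int, Option[Int], Option[String], Option[Int])](tag, "perf_test") {
  def dwhId = column[Int]("dwh_id")
  def custId = column[Option[Int]]("cust_id") // nullable in the DDL, so Option in the mapping
  def custAddress = column[Option[String]]("cust_address")
  def partnerId = column[Option[Int]]("partner_id")
  def * = (dwhId, custId, custAddress, partnerId)
}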
What happened to me was that I had anomalies in the DB and some values were accidentally null - ones that I didn't expect to be. So do not forget to check your data too :)
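If you want to audit for such anomalies from code, a plain-SQL count is enough (a sketch in Slick 3 style, mirroring the profile import above; adjust names to your schema):
import slick.jdbc.PostgresProfile.api._
// counts rows where a supposedly non-null column is actually NULL
val nullLoginIds: DBIO[Int] = sql"select count(*) from USERS where LOGIN_ID is null".as[Int].head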

Describing optional fields in Slick

The Slick DSL allows two ways to create optional fields in tables.
For this case class:
case class User(id: Option[Long] = None, fname: String, lname: String)
You can create a table mapping in one of the following ways:
object Users extends Table[User]("USERS") {
def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
def fname = column[String]("FNAME")
def lname = column[String]("LNAME")
def * = id.? ~ fname ~ lname <> (User, User.unapply _)
}
and
object Users extends Table[User]("USERS") {
def id = column[Option[Long]]("id", O.PrimaryKey, O.AutoInc)
def fname = column[String]("FNAME")
def lname = column[String]("LNAME")
def * = id ~ fname ~ lname <> (User, User.unapply _)
}
What is the difference between the two? Is one the old way and the other the new way, or do they serve different purposes?
I prefer the second choice, where you define the id as optional as part of its column definition, because it's more consistent.
The .? operator in the first one allows you to defer the choice of having your field be optional to the moment of defining your projections. Sometimes that's not what you want, but defining your PK to be an Option is perhaps a bit funny because one might expect a PK to be NOT NULL.
You can use .? in additional projections besides *, for example:
def partial = id.? ~ fname
Then you could do Users.partial.insert(None, "Jacobus") and not worry about fields you're not interested in.
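To make the trade-off concrete, a quick sketch of both projections in use (assuming an implicit Slick 1.x Session is in scope):
Users.partial.insert(None, "Jacobus") // the DB generates the id; insert returns the affected row count
val everyone: List[User] = Query(Users).list // full rows come back through the * projection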