Play 2's Anorm can't work on PostgreSQL

I found that the row parsers of Play 2's Anorm depend on the metadata returned by the JDBC driver.
So in the built-in sample "zentasks" provided by Play, I can find code like this:
object Project {
  val simple = {
    get[Pk[Long]]("project.id") ~
    get[String]("project.folder") ~
    get[String]("project.name") map {
      case id ~ folder ~ name => Project(id, folder, name)
    }
  }
}
Please notice that the fields all have a project. prefix.
It works well on the H2 database, but not on PostgreSQL. With PostgreSQL, I have to write it as:
object Project {
  val simple = {
    get[Pk[Long]]("id") ~
    get[String]("folder") ~
    get[String]("name") map {
      case id ~ folder ~ name => Project(id, folder, name)
    }
  }
}
I asked about this in Play's Google group, and Guillaume Bort said:
Yes if you are using postgres it's probably the cause. The postgresql
jdbc driver is broken and doesn't return table names.
If the PostgreSQL JDBC driver really has this issue, I think there is a problem for Anorm: if two tables have fields with the same name and I query them with a join, Anorm won't get the correct values, since it can't tell which name belongs to which table.
So I wrote a test.
1. Create tables on PostgreSQL
create table a (
  id   text not null primary key,
  name text not null
);
create table b (
  id   text not null primary key,
  name text not null,
  a_id text,
  foreign key(a_id) references a(id) on delete cascade
);
2. Create Anorm models
case class A(id: Pk[String] = NotAssigned, name: String)
case class B(id: Pk[String] = NotAssigned, name: String, aId: String)

object A {
  val simple = {
    get[Pk[String]]("id") ~
    get[String]("name") map {
      case id ~ name => A(id, name)
    }
  }

  def create(a: A): A = {
    DB.withConnection { implicit connection =>
      val id = newId() // id generator defined elsewhere in the app
      SQL("""
        insert into a (id, name)
        values ({id}, {name})
      """).on('id -> id, 'name -> a.name).executeUpdate()
      a.copy(id = Id(id))
    }
  }

  def findAll(): Seq[(A, B)] = {
    DB.withConnection { implicit conn =>
      SQL("""
        select a.*, b.* from a as a left join b as b on a.id = b.a_id
      """).as(A.simple ~ B.simple map {
        case a ~ b => a -> b
      } *)
    }
  }
}

object B {
  val simple = {
    get[Pk[String]]("id") ~
    get[String]("name") ~
    get[String]("a_id") map {
      case id ~ name ~ aId => B(id, name, aId)
    }
  }

  def create(b: B): B = {
    DB.withConnection { implicit connection =>
      val id = UUID.randomUUID().toString
      SQL("""
        insert into b (id, name, a_id)
        values ({id}, {name}, {aId})
      """).on('id -> id, 'name -> b.name, 'aId -> b.aId).executeUpdate()
      b.copy(id = Id(id))
    }
  }
}
3. Test cases with ScalaTest
class ABTest extends DbSuite {
  "AB" should "get one-to-many" in {
    running(fakeApp) {
      val a = A.create(A(name = "AAA"))
      val b1 = B.create(B(name = "BBB1", aId = a.id.get))
      val b2 = B.create(B(name = "BBB2", aId = a.id.get))
      val ab = A.findAll()
      ab foreach {
        case (a, b) => {
          println("a: " + a)
          println("b: " + b)
        }
      }
    }
  }
}
4. The output
a: A(dbc52793-0f6f-4910-a954-940e508aab26,BBB1)
b: B(dbc52793-0f6f-4910-a954-940e508aab26,BBB1,4a66ebe7-536e-4bd5-b1bd-08f022650f1f)
a: A(d1bc8520-b4d1-40f1-af92-52b3bfe50e9f,BBB2)
b: B(d1bc8520-b4d1-40f1-af92-52b3bfe50e9f,BBB2,4a66ebe7-536e-4bd5-b1bd-08f022650f1f)
You can see that the "a"s have the names "BBB1"/"BBB2" rather than "AAA".
I tried to redefine the parsers with prefixes as:
val simple = {
  get[Pk[String]]("a.id") ~
  get[String]("a.name") map {
    case id ~ name => A(id, name)
  }
}
But it reports errors saying the specified fields can't be found.
Is this a big issue with Anorm, or am I missing something?

The latest Play 2 (RC3) has solved this problem by checking the class name of the metadata object:
// HACK FOR POSTGRES
if (meta.getClass.getName.startsWith("org.postgresql.")) {
  meta.asInstanceOf[{ def getBaseTableName(i: Int): String }].getBaseTableName(i)
} else {
  meta.getTableName(i)
}
But be careful if you want to use it with p6spy: the hack doesn't work there, because the class name of the metadata object will be "com.p6spy....", not "org.postgresql....".
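A possible alternative is to unwrap the proxied metadata instead of matching on the class name. This is an untested sketch (not Play's code): it relies on the standard java.sql.Wrapper API and the driver's public org.postgresql.PGResultSetMetaData interface, and it assumes the p6spy proxy delegates unwrap to the real driver:
import java.sql.ResultSetMetaData
import org.postgresql.PGResultSetMetaData

// Unwrap proxied metadata (e.g. p6spy) down to the PostgreSQL implementation,
// then use getBaseTableName as in the hack above.
def tableName(meta: ResultSetMetaData, i: Int): String =
  if (meta.isWrapperFor(classOf[PGResultSetMetaData]))
    meta.unwrap(classOf[PGResultSetMetaData]).getBaseTableName(i)
  else
    meta.getTableName(i)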

Related

jdbc reading resultSet by colName issue for aliases

I have a generic repository with a method like:
object Queries {
  def getByFieldId(field: String, id: Int): String = {
    s"""
       |SELECT
       |  DF.id AS fileId,
       |  DF.name AS fileName,
       |  AG.id AS groupId,
       |  AG.name AS groupName
       |FROM $tableName DFG
       |INNER JOIN directory_files DF on DF.id = DFG.file_id
       |INNER JOIN ad_groups AG on AG.id = DFG.group_id
       |WHERE DFG.$field = $id
       |""".stripMargin
  }
}
def getByFieldId(field: String, id: Int): Try[List[Object]] = {
  try {
    val sqlQuery = Queries.getByFieldId("ad_group", 1)
    statement = conn.getPreparedStatement(sqlQuery)
    setParameters(statement, params)
    resultSet = statement.executeQuery()
    val metadata = resultSet.getMetaData
    val columnCount = metadata.getColumnCount
    val columns: ListBuffer[String] = ListBuffer.empty
    for (i <- 1 to columnCount) {
      columns += metadata.getColumnName(i)
    }
    var item: List[Object] = List.empty
    while (resultSet.next()) {
      val row = columns.toList.map(x => resultSet.getObject(x))
      item = row
    }
    Success(item)
  } catch {
    case e: Any => Failure(errorHandler(e))
  } finally conn.closeConnection(resultSet, statement)
}
The problem is that my result set ignores the query aliases and returns the columns as (id, name, id, name) instead of (fileId, fileName, groupId, groupName).
One solution I found is to use column indexes instead of column names (as sketched below), but I'm not sure whether that would work across the entire app without breaking other queries.
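For reference, a minimal sketch of that index-based variant (untested; it only changes the lookup inside the loop above):
while (resultSet.next()) {
  // Look columns up by position instead of by name, so duplicate
  // names and ignored aliases no longer matter.
  val row = (1 to columnCount).toList.map(i => resultSet.getObject(i))
  item = row
}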
Another solution I found is here: if I'm right, I can still use column names, but I need to fetch them together with their column types, and then inside resultSet.next() call the matching getter for each, as:
// this part of the code is not tested
// this idea came to me while writing this topic
while (resultSet.next()) {
  val row = columns.toList.map { x =>
    x.colType match {
      case "string"  => resultSet.getString(x.colName)
      case "integer" => resultSet.getInt(x.colName)
      case "decimal" => resultSet.getBigDecimal(x.colName) // JDBC has getBigDecimal, not getDecimal
      case _         => resultSet.getString(x.colName)
    }
  }
  item = row
}
You may try to use getColumnLabel instead of getColumnName, as documented:
Gets the designated column's suggested title for use in printouts and displays. The suggested title is usually specified by the SQL AS clause. If a SQL AS is not specified, the value returned from getColumnLabel will be the same as the value returned by the getColumnName method.
Note that this is highly dependent on the RDBMS used.
For Oracle, both methods return the alias, and there is no way to get the original column name.
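Applied to the code above, the fix is a one-method change (a sketch, assuming a standard JDBC ResultSetMetaData):
for (i <- 1 to columnCount) {
  // getColumnLabel returns the SQL AS alias when one is present,
  // falling back to the plain column name otherwise.
  columns += metadata.getColumnLabel(i)
}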

How to map a query result to case class using Anorm in scala

I have 2 case classes like this:
case class ClassTeacherWrapper(
  success: Boolean,
  classes: List[ClassTeacher]
)
And the 2nd one:
case class ClassTeacher(
  clid: String,
  name: String
)
And a query like this:
val query =
  SQL"""
    SELECT
      s.section_sk::text AS clid,
      s.name AS name
    from
      ********************
  """
P.S. I put * in place of the query for security reasons.
So my query returns 2 values. How do I map them to the case class ClassTeacher?
Currently I am doing something like this:
def getClassTeachersByInstructor(instructor: String, section: String): ClassTeacherWrapper = {
  implicit var conn: Connection = null
  try {
    conn = datamartDatasourceConnectionPool.getDBConnection()
    // Define query
    val query =
      SQL"""
        SELECT
          s.section_sk::text AS clid,
          s.name AS name
        ********
      """
    logger.info("Read from DB: " + query)
    // create a List containing all the datasets from the result set and return it
    new ClassTeacherWrapper(
      success = true,
      query.as(Macro.namedParser[ClassTeacher].*)
    )
    // Trying a new approach:
    // val users = query.map(user => new ClassTeacherWrapper(true, user[Int]("clid"), user[String]("name")).toList
  } catch {
    case NonFatal(e) =>
      logger.error("getGradebookScores: error getting/parsing data from DB", e)
      throw e
  }
}
With this I am getting this exception:
{
"error": "ERROR: operator does not exist: uuid = character varying\n
Hint: No operator matches the given name and argument type(s). You
might need to add explicit type casts.\n Position: 324"
}
Can anyone help with where I am going wrong? I am new to Scala and Anorm.
What should I modify in the query.as part of the code?
Do you need the success field? Often an empty list would suffice.
I find parsers very useful (and reusable), so something like the following in the ClassTeacher singleton (or similar location):
val fields = "s.section_sk::text AS clid, s.name"

val classTeacherP =
  get[String]("clid") ~ // the query casts section_sk to text, so read it as a String
  get[String]("name") map {
    case clid ~ name => ClassTeacher(clid, name)
  }

def allForInstructorSection(instructor: String, section: String): List[ClassTeacher] =
  DB.withConnection { implicit c => //-- or injected db
    SQL(s"""select $fields from ******""")
      .on('instructor -> instructor, 'section -> section)
      .as(classTeacherP *)
  }
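As for the quoted "operator does not exist: uuid = character varying" error: it comes from the SQL itself, not from the parser. It usually means a string parameter is being compared against a uuid column; since the query is redacted this is only a guess, but an explicit cast in the WHERE clause, e.g. s.section_sk = {section}::uuid, typically resolves it. Wiring the parser into the wrapper is then straightforward (a sketch, reusing the method above):
def getClassTeachersByInstructor(instructor: String, section: String): ClassTeacherWrapper =
  ClassTeacherWrapper(
    success = true,
    classes = allForInstructorSection(instructor, section)
  )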

In Anorm is it possible to apply multiple ColumnAliaser to the same query

The scenario is similar to the question at How to better parse the same table twice with Anorm?, but the solutions described there can no longer be used.
In a scenario where a Message has 2 users, I need to parse the from_user and to_user with SQL joins.
case class User(id: Long, name: String)
case class Message(id: Long, body: String, to: User, from: User)

def userParser(alias: String): RowParser[User] = {
  get[Long](alias + "_id") ~ get[String](alias + "_name") map {
    case id ~ name => User(id, name)
  }
}

val parser: RowParser[Message] = {
  userParser("from_user") ~
  userParser("to_user") ~
  get[Long]("messages.id") ~
  get[String]("messages.name") map {
    case from ~ to ~ id ~ body => Message(id, body, to, from)
  }
}

// More aliases possible here?
val aliaser: ColumnAliaser = ColumnAliaser.withPattern((0 to 2).toSet, "from_user.")

SQL"""
  SELECT from_user.* , to_user.*, message.* FROM MESSAGE
  JOIN USER from_user on from_user.id = message.from_user_id
  JOIN USER to_user on to_user.id = message.to_user
""".asTry(parser, aliaser)
If I'm right in thinking that you want to apply multiple ColumnAliasers with different aliasing policies to the same query, it's important to understand that ColumnAliaser is "just" a specific implementation of Function[(Int, ColumnName), Option[String]], so it can be defined/composed like any Function, and the factory functions in its companion object are merely conveniences.
import anorm.{ ColumnAliaser, ColumnName }

val aliaser = new ColumnAliaser {
  def as1 = ColumnAliaser.withPattern((0 to 2).toSet, "from_user.")
  def as2 = ColumnAliaser.withPattern((2 to 4).toSet, "to_user.")

  def apply(column: (Int, ColumnName)): Option[String] =
    as1(column).orElse(as2(column))
}
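The composed aliaser is then passed to the query exactly as in the question (usage sketch, assuming an implicit Connection is in scope):
SQL"""
  SELECT from_user.* , to_user.*, message.* FROM MESSAGE
  JOIN USER from_user on from_user.id = message.from_user_id
  JOIN USER to_user on to_user.id = message.to_user
""".asTry(parser, aliaser)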

Play Slick 2.1.0 This DBMS allows only a single AutoInc column to be returned from an INSERT

In the following code I can insert my records just fine. But I would really like to get back the ID of the inserted value so that I can then return the object as part of my response.
def postEntry = DBAction { request =>
  request.body.asJson.map { json =>
    json.validate[(String, Long, String)].map {
      case (name, age, lang) => {
        implicit val session = request.dbSession
        val something = Entries += Entry(None, name, age, lang)
        Ok("Hello!!!: " + something)
      }
    }.recoverTotal {
      e => BadRequest("Detected error: " + JsError.toFlatJson(e))
    }
  }.getOrElse {
    BadRequest("Expecting Json data")
  }
}
So I tried changing the insert to:
val something = (Entries returning Entries.map(_.id)) += Entry(None, name, age, lang)
But I get the following exception:
SlickException: This DBMS allows only a single AutoInc column to be returned from an INSERT
There is a note about it here: http://slick.typesafe.com/doc/2.1.0/queries.html
"Note that many database systems only allow a single column to be returned which must be the table’s auto-incrementing primary key. If you ask for other columns a SlickException is thrown at runtime (unless the database actually supports it)."
But it doesn't say how to just request the ID column.
Ende Nue above gave me the hint to find the problem: I needed to have the column marked primary key and auto-increment in the table definition.
class Entries(tag: Tag) extends Table[Entry](tag, "entries") {
  def id = column[Option[Long]]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def age = column[Long]("age")
  def lang = column[String]("lang")
  def * = (id, name, age, lang).shaped <> ((Entry.apply _).tupled, Entry.unapply _)
}
O.PrimaryKey, O.AutoInc
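With the id column marked O.PrimaryKey and O.AutoInc, the returning-style insert from the question should then work (a sketch using the same names as above):
// Returns the generated key; an Option[Long] given the column type above.
val newId: Option[Long] =
  (Entries returning Entries.map(_.id)) += Entry(None, name, age, lang)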

Scala Slick: Issues with groupBy and missing shapes

I'm trying to use Slick to query a many-to-many relationship, but I'm running into a variety of errors, the most prominent being "Don't know how to unpack (User, Skill) to T and pack to G".
The structure of the tables is similar to the following:
case class User(val name: String, val picture: Option[URL], val id: Option[UUID])

object Users extends Table[User]("users") {
  def name = column[String]("name")
  def picture = column[Option[URL]]("picture")
  def id = column[UUID]("id")
  def * = name ~ picture ~ id.? <> (User, User.unapply _)
}

case class Skill(val name: String, val id: Option[UUID])

object Skills extends Table[Skill]("skill") {
  def name = column[String]("name")
  def id = column[UUID]("id")
  def * = name ~ id.? <> (Skill, Skill.unapply _)
}

case class UserSkill(val userId: UUID, val skillId: UUID, val id: Option[UUID])

object UserSkills extends Table[UserSkill]("user_skill") {
  def userId = column[UUID]("userId")
  def skillId = column[UUID]("skillId")
  def id = column[UUID]("id")
  def * = userId ~ skillId ~ id.? <> (UserSkill, UserSkill.unapply _)
  def user = foreignKey("userFK", userId, Users)(_.id)
  def skill = foreignKey("skillFK", skillId, Skills)(_.id)
}
Ultimately, what I want to achieve is something of the form
SELECT u.*, group_concat(s.name) FROM user_skill us, users u, skills s WHERE us.skillId = s.id && us.userId = u.id GROUP BY u.id
but before I spend the time trying to get group_concat to work as well, I have been trying to produce the simpler query (which I believe is still valid...)
SELECT u.* FROM user_skill us, users u, skills s WHERE us.skillId = s.id && us.userId = u.id GROUP BY u.id
I've tried a variety of scala code to produce this query, but an example of what causes the shape error above is
(for {
  us <- UserSkills
  user <- us.user
  skill <- us.skill
} yield (user, skill)).groupBy(_._1.id).map { case (_, xs) => xs.first }
Similarly, the following produces a packing error regarding "User" instead of "(User, Skill)"
(for {
  us <- UserSkills
  user <- us.user
  skill <- us.skill
} yield (user, skill)).groupBy(_._1.id).map { case (_, xs) => xs.map(_._1).first }
If anyone has any suggestions, I would be very grateful: I've spent most of today and yesterday scouring Google/Google Groups as well as the Slick source, but I haven't found a solution yet.
(Also, I'm using Postgres, so group_concat would actually be string_agg.)
EDIT
So it seems like when groupBy is used, the mapped projection gets applied because something like
(for {
  us <- UserSkills
  u <- us.user
  s <- us.skill
} yield (u, s)).map(_._1)
works fine because _._1 gives the type Users, which has a Shape since Users is a table. However, when we call xs.first (as we do when we call groupBy), we actually get back a mapped projection type (User, Skill), or if we apply map(_._1) first, we get the type User, which is not Users! As far as I can tell, there is no shape with User as the mixed type because the only shapes defined are for Shape[Column[T], T, Column[T]] and for a table T <: TableNode, Shape[T, NothingContainer#TableNothing, T] as defined in slick.lifted.Shape. Furthermore, if I do something like
(for {
  us <- UserSkills
  u <- us.user
  s <- us.skill
} yield (u, s))
  .groupBy(_._1.id)
  .map { case (_, xs) => xs.map(_._1.id).first }
I get a strange error of the form "NoSuchElementException: key not found: #1515100893", where the numeric key value changes each time. This is not the query I want, but it is a strange issue nonetheless.
I've run up against similar situations as well. While I love working with Scala and Slick, I do believe there are times when it is easier to denormalize an object in the database itself and link the Slick Table to a view.
For example, I have an application that has a Tree object that is normalized into several database tables. Since I'm comfortable with SQL, I think it is a cleaner solution than writing a plain Scala Slick query. The Scala code:
case class DbGFolder(id: String,
                     eTag: String,
                     url: String,
                     iconUrl: String,
                     title: String,
                     owner: String,
                     parents: Option[String],
                     children: Option[String],
                     scions: Option[String],
                     created: LocalDateTime,
                     modified: LocalDateTime)

object DbGFolders extends Table[DbGFolder]("gfolder_view") {
  def id = column[String]("id")
  def eTag = column[String]("e_tag")
  def url = column[String]("url")
  def iconUrl = column[String]("icon_url")
  def title = column[String]("title")
  def owner = column[String]("file_owner")
  def parents = column[String]("parent_str")
  def children = column[String]("child_str")
  def scions = column[String]("scion_str")
  def created = column[LocalDateTime]("created")
  def modified = column[LocalDateTime]("modified")

  def * = id ~ eTag ~ url ~ iconUrl ~ title ~ owner ~ parents.? ~
          children.? ~ scions.? ~ created ~ modified <> (DbGFolder, DbGFolder.unapply _)

  def findAll(implicit s: Session): List[GFolder] = {
    Query(DbGFolders).list().map { v =>
      GFolder(id = v.id,
              eTag = v.eTag,
              url = v.url,
              iconUrl = v.iconUrl,
              title = v.title,
              owner = v.owner,
              parents = v.parents.map { parentStr =>
                parentStr.split(",").toSet }.getOrElse(Set()),
              children = v.children.map { childStr =>
                childStr.split(",").toSet }.getOrElse(Set()),
              scions = v.scions.map { scionStr =>
                scionStr.split(",").toSet }.getOrElse(Set()),
              created = v.created,
              modified = v.modified)
    }
  }
}
And the underlying (Postgres) views:
CREATE VIEW scion_view AS
  WITH RECURSIVE scions(id, scion) AS (
    SELECT c.id, c.child
    FROM children AS c
    UNION ALL
    SELECT s.id, c.child
    FROM children AS c, scions AS s
    WHERE c.id = s.scion)
  SELECT * FROM scions ORDER BY id, scion;

CREATE VIEW gfolder_view AS
  SELECT
    f.id, f.e_tag, f.url, f.icon_url, f.title, m.name, f.file_owner,
    p.parent_str, c.child_str, s.scion_str, f.created, f.modified
  FROM
    gfiles AS f
    JOIN mimes AS m ON (f.mime_type = m.name)
    LEFT JOIN (SELECT DISTINCT id, string_agg(parent, ',' ORDER BY parent) AS parent_str
               FROM parents GROUP BY id) AS p ON (f.id = p.id)
    LEFT JOIN (SELECT DISTINCT id, string_agg(child, ',' ORDER BY child) AS child_str
               FROM children GROUP BY id) AS c ON (f.id = c.id)
    LEFT JOIN (SELECT DISTINCT id, string_agg(scion, ',' ORDER BY scion) AS scion_str
               FROM scion_view GROUP BY id) AS s ON (f.id = s.id)
  WHERE
    m.category = 'folder';
Try this; I hope it yields what you expected. Find the Slick code below the case classes.
Click here for the reference regarding lifted embedding.
case class User(val name: String, val picture: Option[URL], val id: Option[UUID])

class Users(_tableTag: Tag) extends Table[User](_tableTag, "users") {
  def name = column[String]("name")
  def picture = column[Option[URL]]("picture")
  def id = column[UUID]("id")
  def * = name ~ picture ~ id.? <> (User, User.unapply _)
}
lazy val userTable = new TableQuery(tag => new Users(tag))

case class Skill(val name: String, val id: Option[UUID])

class Skills(_tableTag: Tag) extends Table[Skill](_tableTag, "skill") {
  def name = column[String]("name")
  def id = column[UUID]("id")
  def * = name ~ id.? <> (Skill, Skill.unapply _)
}
lazy val skillTable = new TableQuery(tag => new Skills(tag))

case class UserSkill(val userId: UUID, val skillId: UUID, val id: Option[UUID])

class UserSkills(_tableTag: Tag) extends Table[UserSkill](_tableTag, "user_skill") {
  def userId = column[UUID]("userId")
  def skillId = column[UUID]("skillId")
  def id = column[UUID]("id")
  def * = userId ~ skillId ~ id.? <> (UserSkill, UserSkill.unapply _)
  def user = foreignKey("userFK", userId, userTable)(_.id)   // target the TableQuery, not the class
  def skill = foreignKey("skillFK", skillId, skillTable)(_.id)
}
lazy val userSkillTable = new TableQuery(tag => new UserSkills(tag))
(for {
  ((userSkill, user), skill) <- userSkillTable join userTable on (_.userId === _.id) join
                                skillTable on (_._1.skillId === _.id)
} yield (userSkill, user, skill)).groupBy(_._2.id)