How to create an Anorm parser with an array column? - postgresql

I am using PostgreSQL, which supports array columns. To parse a row I use the parser below, but it fails to compile on the Array field. I guess I did it wrongly.
case class ServiceRequest(
  id: Pk[Long],
  firstname: String,
  lastname: String,
  images: Array[String])

val parser: RowParser[ServiceRequest] = {
  get[Pk[Long]]("id") ~
  get[String]("firstname") ~
  get[String]("lastname") ~
  get[Array[String]]("images") map { // <<< error here
    case id ~ firstname ~ lastname ~ images =>
      ServiceRequest(id, firstname, lastname, images)
  }
}
Thanks

The Array[T] type is now natively supported in Play 2.4.x, so you don't have to roll your own converters.
But it's still not straightforward to use arrays in insert or update statements. For example:
def updateTags(id: Long, values: Seq[String]): Int = {
  DB.withConnection { implicit conn =>
    SQL("UPDATE entries SET tags = {value} WHERE id = {id}")
      .on('value -> values, 'id -> id).executeUpdate()
  }
}
will give you an error:
play - Cannot invoke the action, eventually got an error:
org.postgresql.util.PSQLException: ERROR: syntax error at or near "$2"
Put simply, you should create the PreparedStatement using the java.sql.Array type:
.on('value -> conn.createArrayOf("varchar", values.toArray[AnyRef]), ...
As of now (Play 2.4-M1), java.sql.Array is not converted to ParameterValue by default, which means
SQL(...).on('arr -> values: java.sql.Array) still gives a compilation error; you need another implicit conversion to make it compile:
import java.sql.PreparedStatement
import anorm.ToStatement
implicit object sqlArrayToStatement extends ToStatement[java.sql.Array] {
  def set(s: PreparedStatement, i: Int, n: java.sql.Array) = s.setArray(i, n)
}
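Putting the pieces together, the update from above could then be written like this (a sketch based on the snippets in this answer; the entries table and tags column are just the names from the example):
def updateTags(id: Long, values: Seq[String]): Int =
  DB.withConnection { implicit conn =>
    // Build a java.sql.Array from the Seq; "varchar" must match the column's element type
    val sqlArray = conn.createArrayOf("varchar", values.toArray[AnyRef])
    SQL("UPDATE entries SET tags = {value} WHERE id = {id}")
      .on('value -> sqlArray, 'id -> id) // relies on the ToStatement[java.sql.Array] above
      .executeUpdate()
  }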

I have solved my problem by adding this converter:
implicit def rowToStringArray: Column[Array[String]] = Column.nonNull { (value, meta) =>
  val MetaDataItem(qualified, nullable, clazz) = meta
  value match {
    case o: java.sql.Array => Right(o.getArray().asInstanceOf[Array[String]])
    case _ => Left(TypeDoesNotMatch("Cannot convert " + value + ":" + value.asInstanceOf[AnyRef].getClass))
  }
}
This converter transforms the PostgreSQL JDBC type into the Java type that the parser needs for a SELECT. When you insert, you need another converter in the opposite direction, from the Java type to the PostgreSQL JDBC type.
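For the insert direction, one option (a sketch, not part of the original answer; it assumes the target column is a varchar[]/text[] array) is a ToStatement[Array[String]] that builds the java.sql.Array from the statement's own connection:
import java.sql.PreparedStatement
import anorm.ToStatement

implicit object stringArrayToStatement extends ToStatement[Array[String]] {
  // Convert the Scala array into a java.sql.Array before binding it
  def set(s: PreparedStatement, index: Int, values: Array[String]): Unit = {
    val sqlArray = s.getConnection.createArrayOf("varchar", values.asInstanceOf[Array[AnyRef]])
    s.setArray(index, sqlArray)
  }
}
With this in scope, SQL("INSERT ...").on('images -> serviceRequest.images) can bind the array directly.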

There is currently a pull request to add support for java.sql.Array as a column mapping: https://github.com/playframework/playframework/pull/3062.
Best,

Related

How to map a query result to a case class using Anorm in Scala

I have 2 case classes like this:
case class ClassTeacherWrapper(
  success: Boolean,
  classes: List[ClassTeacher]
)
The 2nd one:
case class ClassTeacher(
  clid: String,
  name: String
)
And a query like this:
val query =
  SQL"""
    SELECT
      s.section_sk::text AS clid,
      s.name AS name
    from
      ********************
  """
P.S. I put * in place of the query for security reasons.
So my query returns 2 columns. How do I map them to the case class ClassTeacher?
Currently I am doing something like this:
def getClassTeachersByInstructor(instructor: String, section: String): ClassTeacherWrapper = {
  implicit var conn: Connection = null
  try {
    conn = datamartDatasourceConnectionPool.getDBConnection()
    // Define query
    val query =
      SQL"""
        SELECT
          s.section_sk::text AS clid,
          s.name AS name
        ********
      """
    logger.info("Read from DB: " + query)
    // create a List containing all the datasets from the resultset and return
    new ClassTeacherWrapper(
      success = true,
      query.as(Macro.namedParser[ClassTeacher].*)
    )
    // Trying new approach
    // val users = query.map(user => new ClassTeacherWrapper(true, user[Int]("clid"), user[String]("name")).toList
  } catch {
    case NonFatal(e) =>
      logger.error("getGradebookScores: error getting/parsing data from DB", e)
      throw e
  }
}
With this I am getting this exception:
{
"error": "ERROR: operator does not exist: uuid = character varying\n
Hint: No operator matches the given name and argument type(s). You
might need to add explicit type casts.\n Position: 324"
}
Can anyone help with where I am going wrong? I am new to Scala and Anorm.
What should I modify in the query.as part of the code?
Do you need the success field? Often an empty list would suffice?
I find parsers very useful (and reusable), so something like the following in the ClassTeacher singleton (or similar location):
val fields = "s.section_sk::text AS clid, s.name"

val classTeacherP =
  get[String]("clid") ~
  get[String]("name") map {
    case clid ~ name =>
      ClassTeacher(clid, name)
  }

def allForInstructorSection(instructor: String, section: String): List[ClassTeacher] =
  DB.withConnection { implicit c => //-- or injected db
    SQL(s"""select $fields from ******""")
      .on('instructor -> instructor, 'section -> section)
      .as(classTeacherP.*)
  }
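As for the "operator does not exist: uuid = character varying" error: that is raised by PostgreSQL itself, not by Anorm. It usually means the (redacted) WHERE clause compares a uuid column with a string parameter, so one side needs an explicit cast, e.g. (instructor_sk is a made-up column name for illustration):
... WHERE s.instructor_sk = {instructor}::uuid
-- or, equivalently, cast the column instead:
... WHERE s.instructor_sk::text = {instructor}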

Anorm: implicit conversion of [all values (including null)] to [String]

I'm new to Scala and the Play framework. I am trying to query all the data for selected columns from a database table and save it as an Excel file.
The selected columns usually have different types, such as Int, String, Timestamp, etc.
I want to convert all value types, including null, into String
(null converts to the empty string "")
without knowing the actual type of a column, so the code can be used for any table.
According to Play's documentation I can write the implicit converter below; however, it cannot handle null. I have googled this for a long time and cannot find a solution. Can someone please let me know how to handle null in the implicit converter?
Thanks in advance~
implicit def valueToString: anorm.Column[String] =
  anorm.Column.nonNull1[String] { (value, meta) =>
    val MetaDataItem(qualified, nullable, clazz) = meta
    value match {
      case s: String => Right(s) // Provided-default case
      case i: Int => Right(i.toString) // Int to String
      case t: java.sql.Clob => Right(t.toString) // Clob/Text to String
      case d: java.sql.Timestamp => Right(d.toString) // Datetime to String
      case _ => Left(TypeDoesNotMatch(s"Cannot convert $value: ${value.asInstanceOf[AnyRef].getClass} to String for column $qualified"))
    }
  }
As indicated in the documentation, if there is a Column[T] allowing a column of type T to be parsed, and if the column can be null, then Option[T] should be asked for, benefiting from the generic support for Option[T].
Here there is a custom Column[String] (make sure the custom one is used, not the provided Column[String]), so Option[String] should be asked for.
import myImplicitStrColumn
val parser = get[Option[String]]("col")
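For example, to fall back to an empty string for NULL values, the Option can be mapped afterwards (a sketch; "col" is a placeholder column name and the custom valueToString is assumed to be in implicit scope):
import anorm.SqlParser.get

// NULL becomes None, which is then turned into ""
val colAsString = get[Option[String]]("col").map(_.getOrElse(""))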

The "~:" is what meaning in slick and why the DateTime did not recognize

I use Slick and Play framework 2.1.
Why is "~" not recognized as an operator?
object Comments extends IdTable[CommentId, Comment]("COMMENTS") {
  def text = column[String]("TEXT")
  // you can use your type-safe ID here - it will be mapped to long in database
  def authorId = column[UserId]("AUTHOR")
  def author = foreignKey("COMMENTS_AUTHOR_FK", authorId, Users)(_.id)
  def date = column[DateTime]("DATE")
  def base = text ~ authorId ~ date
  override def * = id.? ~: base <> (Comment.apply _, Comment.unapply _) // the error happens here
  override def insertOne(elem: Comment)(implicit session: Session): CommentId =
    saveBase(base, Comment.unapply _)(elem)
}
the error is:
[error] C:\assigment\slick-advanced\app\models\Comment.scala:30: value ~: is not
a member of scala.slick.lifted.Projection3[String,models.UserId,org.joda.time.DateTime]
Why can the ~ operator not be applied?
The operator it complains about is ~:, not ~. The error is probably just a consequence of the DateTime error. Slick does not support Joda-Time out of the box. This should help: https://github.com/tototoshi/slick-joda-mapper
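If you would rather not add the dependency, a hand-rolled mapper along these lines should also work with the Slick 1.x lifted API used by Play 2.1 (a sketch; it stores the DateTime as a java.sql.Timestamp):
import java.sql.Timestamp
import org.joda.time.DateTime
import scala.slick.lifted.MappedTypeMapper

// Makes column[DateTime] resolvable by mapping DateTime <-> Timestamp
implicit val dateTimeMapper =
  MappedTypeMapper.base[DateTime, Timestamp](
    dt => new Timestamp(dt.getMillis),
    ts => new DateTime(ts.getTime)
  )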

Access database column names from a Table?

Let's say I have a table:
object Suppliers extends Table[(Int, String, String, String)]("SUPPLIERS") {
  def id = column[Int]("SUP_ID", O.PrimaryKey)
  def name = column[String]("SUP_NAME")
  def state = column[String]("STATE")
  def zip = column[String]("ZIP")
  def * = id ~ name ~ state ~ zip
}
Table's database name
The table's database name can be accessed by going: Suppliers.tableName
This is supported by the Scaladoc on AbstractTable.
For example, the above table's database name is "SUPPLIERS".
Columns' database names
Looking through AbstractTable, getLinearizedNodes and indexes looked promising. No column names in their string representations though.
I assume that * means "all the columns I'm usually interested in." * is a MappedProjection, which has this signature:
final case class MappedProjection[T, P <: Product](
child: Node,
f: (P) ⇒ T,
g: (T) ⇒ Option[P])(proj: Projection[P])
extends ColumnBase[T] with UnaryNode with Product with Serializable
*.getLinearizedNodes contains a huge sequence of numbers, and I realized that at this point I'm just doing a brute-force inspection of everything in the API in the hope of finding the column names in a String.
Has anybody also encountered this problem before, or could anybody give me a better understanding of how MappedProjection works?
It requires you to rely on Slick internals, which may change between versions, but it is possible. Here is how it works for Slick 1.0.1: You have to go via the FieldSymbol. Then you can extract the information you want like how columnInfo(driver: JdbcDriver, column: FieldSymbol): ColumnInfo does it.
To get a FieldSymbol from a Column you can use fieldSym(node: Node): Option[FieldSymbol] and fieldSym(column: Column[_]): FieldSymbol.
To get the (qualified) column names you can simply do the following:
Suppliers.id.toString
Suppliers.name.toString
Suppliers.state.toString
Suppliers.zip.toString
It's not explicitly stated anywhere that the toString will yield the column name, so your question is a valid one.
Now, if you want to programmatically get all the column names, then that's a bit harder. You could try using reflection to get all the methods that return a Column[_] and call toString on them, but it wouldn't be elegant. Or you could hack a bit and get a select * SQL statement from a query like this:
val selectStatement = DB withSession {
Query(Suppliers).selectStatement
}
And then parse out the column names.
This is the best I could do. If someone knows a better way then please share - I'm interested too ;)
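For completeness, the reflection route mentioned above could look roughly like this (a sketch; it calls every public no-argument method on the table object that returns a Column[_] and collects each one's toString):
import scala.slick.lifted.Column

def columnNames(table: AnyRef): Seq[String] =
  table.getClass.getMethods.toSeq
    .filter(m => m.getParameterTypes.isEmpty &&
                 classOf[Column[_]].isAssignableFrom(m.getReturnType))
    .map(m => m.invoke(table).toString)

// columnNames(Suppliers) would yield something like Seq("SUPPLIERS.SUP_ID", "SUPPLIERS.SUP_NAME", ...)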
The code is based on the Lightbend Activator template "slick-http-app".
Slick version: 3.1.1
I added this method to the BaseDal:
def getColumns(): mutable.Map[String, Type] = {
  val columns = mutable.Map.empty[String, Type]
  def selectType(t: Any): Option[Any] = t match {
    case t: TableExpansion => Some(t.columns)
    case t: Select => Some(t.field)
    case _ => None
  }
  def selectArray(t: Any): Option[ConstArray[Node]] = t match {
    case t: TypeMapping => Some(t.child.children)
    case _ => None
  }
  def selectFieldSymbol(t: Any): Option[FieldSymbol] = t match {
    case t: FieldSymbol => Some(t)
    case _ => None
  }
  val t = selectType(tableQ.toNode)
  val c = selectArray(t.get)
  for (se <- c.get) {
    val col = selectType(se)
    val fs = selectFieldSymbol(col.get)
    columns += (fs.get.name -> fs.get.tpe)
  }
  columns
}
This method gets the column names (the real names in the DB) and their types from the TableQuery.
The imports used are:
import slick.ast._
import slick.util.ConstArray
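Usage could then look like this (a sketch; suppliersDal stands in for whichever BaseDal instance wraps the table in the app):
// Print every column's database name and its Slick type
suppliersDal.getColumns().foreach { case (name, tpe) =>
  println(s"$name: $tpe")
}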

Anorm parse float values

In Play framework 2.0, I'm trying to load a real (i.e. single precision float) type column from PostgreSQL using a row parser like this:
case class Foo(bar: Float)

object Foo {
  def all = DB.withConnection { implicit c =>
    SQL("SELECT * FROM foo").as(fooParser *)
  }

  val fooParser = {
    get[Float]("bar") map {
      case bar => Foo(bar)
    }
  }
}
This generates an error: could not find implicit value for parameter extractor: anorm.Column[Float]
When using double precision types everything works fine. Is it somehow possible to use single precision floats with Anorm?
You can always create your own column parser based on the existing ones:
implicit def rowToFloat: Column[Float] = Column.nonNull { (value, meta) =>
  val MetaDataItem(qualified, nullable, clazz) = meta
  value match {
    case d: Float => Right(d)
    case _ => Left(TypeDoesNotMatch("Cannot convert " + value + ":" + value.asInstanceOf[AnyRef].getClass + " to Float for column " + qualified))
  }
}
but it matches on the type of the value returned by the JDBC driver, which may not be what you expect (it depends on the column definition).
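If the driver hands back a different numeric boxing, a more defensive variant can accept several numeric types (a sketch, not from the original answer; note the possible precision loss when narrowing to Float):
implicit def rowToFloat: Column[Float] = Column.nonNull { (value, meta) =>
  val MetaDataItem(qualified, nullable, clazz) = meta
  value match {
    case f: Float => Right(f)
    case d: Double => Right(d.toFloat) // narrowing: may lose precision
    case bd: java.math.BigDecimal => Right(bd.floatValue) // narrowing: may lose precision
    case _ => Left(TypeDoesNotMatch("Cannot convert " + value + ":" +
      value.asInstanceOf[AnyRef].getClass + " to Float for column " + qualified))
  }
}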