Custom LocalDateTime parser in Slick 3 - scala

I'm converting a bunch of java.sql.Timestamp columns from my Slick 3 models into LocalDateTime. My database backend is MySQL 8 and the columns I'm converting are either TIMESTAMP or DATETIME.
I ran into issues with MySQL returning dates in format yyyy-MM-dd HH:mm:ss, while LocalDateTime.parse expects yyyy-MM-dd'T'HH:mm:ss. This results in runtime errors such as java.time.format.DateTimeParseException: Text '2022-12-05 08:01:08' could not be parsed at index 10.
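The difference is only the separator character: java.time's default ISO-8601 parser expects a 'T' between the date and time parts, while the MySQL driver hands back a space. The behavior can be reproduced with the plain JDK, no Slick involved:

```scala
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter
import scala.util.Try

val raw = "2022-12-05 08:01:08" // what the MySQL driver returns as a string

// The default ISO-8601 parser fails on the space separator:
println(Try(LocalDateTime.parse(raw)).isFailure) // DateTimeParseException at index 10

// A formatter matching MySQL's format parses it fine:
val formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")
println(LocalDateTime.parse(raw, formatter)) // 2022-12-05T08:01:08
```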
I found that it could be solved with a custom formatter, like this:
private val formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")

val localDateTimeMapper: BaseColumnType[LocalDateTime] = MappedJdbcType.base[LocalDateTime, String](
  ldt => ldt.format(formatter),
  s => LocalDateTime.parse(s, formatter)
)
Normally I would define the formatter as implicit, but that creates a compile error in the model: No implicits found for parameter tt: TypedType[LocalDateTime]. Applying the formatter explicitly works wonderfully for column[LocalDateTime], but does not work for column[Option[LocalDateTime]] (it causes Type mismatch, required TypedType[Option[LocalDateTime]]).
class Users(tag: Tag) extends Table[User](tag, "users") {
  def uuid = column[UUID]("uuid", O.PrimaryKey)
  def name = column[String]("name")
  def email = column[String]("email")
  def lastSignedInAt = column[Option[LocalDateTime]]("last_signed_in_at")(localDateTimeMapper)
  def createdAt = column[LocalDateTime]("created_at")(localDateTimeMapper)
  override def * = (uuid, name, email, lastSignedInAt, createdAt) <> (User.tupled, User.unapply)
}
Other custom types (such as enums) work without issues using the implicit formatter approach, but I suspect the issue here is that Slick already has a LocalDateTime mapper that I'm trying to override. From what I can tell, Slick wants LocalDateTime objects to be stored as VARCHAR rather than as date types, but I don't want to convert the database columns.
Any advice on how I can make my custom formatter work (or use built-in functionality in Slick) to allow LocalDateTime to work with MySQL's date types?

I eventually found a way that works by extending Slick's MySQLProfile:
package lib
import slick.jdbc.JdbcProfile
import java.sql.PreparedStatement
import java.sql.ResultSet
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter
trait ExMySQLProfile extends JdbcProfile with slick.jdbc.MySQLProfile { driver =>
  private val localDateTimeFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")

  override val columnTypes = new ExJdbcTypes

  class ExJdbcTypes extends super.JdbcTypes {

    @inline
    private[this] def stringToMySqlString(value: String): String = {
      value match {
        case null => "NULL"
        case _ =>
          val sb = new StringBuilder
          sb.append('\'')
          for (c <- value) c match {
            case '\'' => sb.append("\\'")
            case '"'  => sb.append("\\\"")
            case 0    => sb.append("\\0")
            case 26   => sb.append("\\Z")
            case '\b' => sb.append("\\b")
            case '\n' => sb.append("\\n")
            case '\r' => sb.append("\\r")
            case '\t' => sb.append("\\t")
            case '\\' => sb.append("\\\\")
            case _    => sb.append(c)
          }
          sb.append('\'')
          sb.toString
      }
    }

    /**
     * Override the LocalDateTime handler to parse values as we expect them.
     *
     * The default implementation in Slick does not support TIMESTAMP or DATETIME
     * columns, but expects timestamps to be stored as VARCHAR.
     */
    override val localDateTimeType: LocalDateTimeJdbcType = new LocalDateTimeJdbcType {
      override def sqlType: Int = java.sql.Types.TIMESTAMP

      override def setValue(v: LocalDateTime, p: PreparedStatement, idx: Int): Unit = {
        // Use the formatter here as well: v.toString would emit the ISO 'T' separator
        p.setString(idx, if (v == null) null else v.format(localDateTimeFormatter))
      }

      override def getValue(r: ResultSet, idx: Int): LocalDateTime = {
        r.getString(idx) match {
          case null       => null
          case dateString => LocalDateTime.parse(dateString, localDateTimeFormatter)
        }
      }

      override def updateValue(v: LocalDateTime, r: ResultSet, idx: Int): Unit = {
        r.updateString(idx, if (v == null) null else v.format(localDateTimeFormatter))
      }

      override def valueToSQLLiteral(value: LocalDateTime): String = {
        stringToMySqlString(value.format(localDateTimeFormatter))
      }
    }
  }
}
trait MySQLProfile extends ExMySQLProfile {}
object MySQLProfile extends MySQLProfile
In my application.conf I've configured the profile with:
slick.dbs.default {
  profile = "lib.MySQLProfile$"
}
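With the profile in place, the table definition no longer needs a per-column mapper; the only change is importing the api from the extended profile instead of from slick.jdbc.MySQLProfile. A sketch, reusing the Users table from the question:

```scala
import lib.MySQLProfile.api._ // instead of slick.jdbc.MySQLProfile.api._
import java.time.LocalDateTime
import java.util.UUID

class Users(tag: Tag) extends Table[User](tag, "users") {
  def uuid  = column[UUID]("uuid", O.PrimaryKey)
  def name  = column[String]("name")
  def email = column[String]("email")
  // Both the plain and the optional column now pick up the overridden
  // localDateTimeType implicitly - no explicit mapper argument needed.
  def lastSignedInAt = column[Option[LocalDateTime]]("last_signed_in_at")
  def createdAt      = column[LocalDateTime]("created_at")
  override def * = (uuid, name, email, lastSignedInAt, createdAt) <> (User.tupled, User.unapply)
}
```

Because the override lives in the profile's JdbcTypes, Slick derives the Option[LocalDateTime] instance automatically, which is exactly what the explicit-mapper approach could not do.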

Related

Slick: Composed, optional columns to composed optional type

Given:
case class Money(value: BigDecimal, currency: Currency)
trait Currency
case class EUR ( ... ) extends Currency
... and so on
I want to model an optional Money type which contains a value part (BigDecimal) and a Currency part (an enum via Enumeratum).
To map this in the DB, I have two optional columns:
private def moneyOptionalValue: Rep[Option[BigDecimal]] = column[Option[BigDecimal]]("money_value")
private def moneyOptionalCurrency: Rep[Option[Currency]] = column[Option[Currency]]("money_currency")
The desired state would be to have this:
def money: Rep[Option[Money]] = ...
I tried with this approach, as documented in
http://slick.lightbend.com/doc/3.1.0/userdefined.html#using-custom-record-types-in-queries
private implicit def myCurrencyType: BaseColumnType[Currency] = MappedColumnType.base[Currency, String](
  c => c.shortName,
  s => Currency.withNameUppercaseOnly(s.toUpperCase)
)

case class LiftedMoney(value: Rep[BigDecimal], currency: Rep[Currency])
case class LiftedMoneyOptional(a: Rep[Option[BigDecimal]], b: Rep[Option[Currency]])
case class MoneyOptional(a: Option[BigDecimal], b: Option[Currency])

// custom case class mapping
implicit object MoneyOptionalShape extends CaseClassShape(LiftedMoneyOptional.tupled, MoneyOptional.tupled)
implicit object MoneyShape extends CaseClassShape(LiftedMoney.tupled, Money.tupled)

implicit object MoneyToOptional
  extends Isomorphism[Option[Money], MoneyOptional](
    {
      case Some(m) => MoneyOptional(Some(m.value), Some(m.currency))
      case None => MoneyOptional(None, None)
    }, {
      case MoneyOptional(Some(v), Some(c)) => Some(Money(v, c))
      case MoneyOptional(_, _) => None
    }
  )

private def moneyOptionalValue: Rep[Option[BigDecimal]] = column[Option[BigDecimal]]("money_value")
private def moneyOptionalCurrency: Rep[Option[Currency]] = column[Option[Currency]]("money_currency")

def moneyOptional: Rep[MoneyOptional] =
  moneyOptionalValue.zip(moneyOptionalCurrency).mapTo[MoneyOptional]

def moneyOptional1: MappedProjection[MoneyOptional, (Option[BigDecimal], Option[Currency])] =
  moneyOptionalValue.zip(moneyOptionalCurrency).mapTo[MoneyOptional]

// this compiles if it's not optional, but would break at runtime when null values are in the columns:
private def moneyValue: Rep[BigDecimal] = column[BigDecimal]("money_value") // this column can be nullable!
private def moneyCurrency: Rep[Currency] = column[Currency]("money_currency") // this column can be nullable!
def money: Rep[Money] = moneyValue.zip(moneyCurrency).value.mapTo[Money]

// I also tried this, but cannot compile my code with that for some reason
def optionMoney: Rep[Option[Money]] = RepOption[Money](money, money.toNode)
Any help would be greatly appreciated!
What about this one?
def money = (moneyOptionalValue, moneyOptionalCurrency) <> [Option[Money]](
  {
    case (value, currency) if value.isDefined && currency.isDefined => Some(Money(value.get, currency.get))
    case _ => None
  },
  (mapped: Option[Money]) => mapped.map(m => Some((Some(m.value), Some(m.currency)))).getOrElse(Some((None, None)))
)

Can't deal with UUID in Play (jdbc)

I'm using Joda DateTime and UUID in my Play project. I'm struggling to write them to and read them from PostgreSQL:
import org.joda.time.DateTime

case class MyClass(id: Pk[UUID], name: String, addedAt: DateTime)

object MyClass {
  val simple =
    SqlParser.get[Pk[UUID]]("id") ~
      SqlParser.get[String]("name") ~
      SqlParser.get[DateTime]("added_at") map {
        case id ~ name ~ addedAt => MyClass(id, name, addedAt)
      }

  implicit def rowToId = Column.nonNull[UUID] { (value, meta) =>
    maybeValueToUUID(value) match {
      case Some(uuid) => Right(uuid)
      case _ => Left(TypeDoesNotMatch(s"Cannot convert $value: ${value.asInstanceOf[Any].getClass} to UUID"))
    }
  }

  implicit def idToStatement = new ToStatement[UUID] {
    def set(s: PreparedStatement, index: Int, aValue: UUID): Unit = s.setObject(index, toByteArray(aValue))
  }

  def getSingle(id: UUID): Option[MyClass] = {
    DB withConnection { implicit con =>
      SQL("SELECT my_table.id, my_table.name, my_table.added_at FROM my_table WHERE id = {id}")
        .on('id -> id)
        .as(MyClass.simple.*)
    } match {
      case List(x) => Some(x)
      case _ => None
    }
  }
Implicit functions for Joda DateTime are omitted because they don't cause any error at this point. What causes an error is getSingle(...), the conversion from and to UUID. The error is:
org.postgresql.util.PSQLException: operator does not exist: uuid = bytea
Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.
The four helper functions:
private def maybeValueToUUID(value: Any): Option[UUID] = maybeValueToByteArray(value) match {
  case Some(bytes) => Some(fromByteArray(bytes))
  case _ => None
}

private def maybeValueToByteArray(value: Any): Option[Array[Byte]] =
  try {
    value match {
      case bytes: Array[Byte] => Some(bytes)
      case clob: Clob => None // todo
      case blob: Blob => None // todo
      case _ => None
    }
  } catch {
    case e: Exception => None
  }

def toByteArray(uuid: UUID) = {
  val buffer = ByteBuffer.wrap(new Array[Byte](16))
  buffer putLong uuid.getMostSignificantBits
  buffer putLong uuid.getLeastSignificantBits
  buffer.array
}

def fromByteArray(b: Array[Byte]) = {
  val buffer = ByteBuffer.wrap(b)
  val high = buffer.getLong
  val low = buffer.getLong
  new UUID(high, low)
}
Note that a record I'm trying to retrieve exists and has a correct format.
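The exception itself points at the cause: idToStatement sends a byte array, so Postgres ends up comparing uuid = bytea. A sketch of a fix, assuming the PostgreSQL JDBC driver (which maps java.util.UUID to the native uuid type directly), is to drop the byte-array round trip on both sides:

```scala
import java.sql.PreparedStatement
import java.util.UUID

// Pass the UUID object through unchanged; the PostgreSQL driver binds it
// as the native uuid type, so no byte conversion is needed.
implicit def idToStatement = new ToStatement[UUID] {
  def set(s: PreparedStatement, index: Int, aValue: UUID): Unit =
    s.setObject(index, aValue)
}

// Reading back: the driver already returns a java.util.UUID for uuid columns.
implicit def rowToId = Column.nonNull[UUID] { (value, meta) =>
  value match {
    case uuid: UUID => Right(uuid)
    case _ => Left(TypeDoesNotMatch(s"Cannot convert $value to UUID"))
  }
}
```

The toByteArray/fromByteArray helpers then become unnecessary for PostgreSQL.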

Idiomatic Scala way of deserializing delimited strings into case classes

Suppose I was dealing with a simple colon-delimited text protocol that looked something like:
Event:005003:information:2013 12 06 12 37 55:n3.swmml20861:1:Full client swmml20861 registered [entry=280 PID=20864 queue=0x4ca9001b]
RSET:m3node:AUTRS:1-1-24:A:0:LOADSHARE:INHIBITED:0
M3UA_IP_LINK:m3node:AUT001LKSET1:AUT001LK1:r
OPC:m3node:1-10-2(P):A7:NAT0
....
I'd like to deserialize each line as an instance of a case class, but in a type-safe way. My first attempt uses type classes to define 'read' methods for each possible type that I can encounter, in addition to the 'tupled' method on the case class to get back a function that can be applied to a tuple of arguments, something like the following:
case class Foo(a: String, b: Integer)

trait Reader[T] {
  def read(s: String): T
}

object Reader {
  implicit object StringParser extends Reader[String] { def read(s: String): String = s }
  implicit object IntParser extends Reader[Integer] { def read(s: String): Integer = s.toInt }
}

def create[A1, A2, Ret](fs: Seq[String], f: ((A1, A2)) => Ret)(implicit A1Reader: Reader[A1], A2Reader: Reader[A2]): Ret = {
  f((A1Reader.read(fs(0)), A2Reader.read(fs(1))))
}

create(Seq("foo", "42"), Foo.tupled) // gives me a Foo("foo", 42)
The problem though is that I'd need to define the create method for each tuple and function arity, so that means up to 22 versions of create. Additionally, this doesn't take care of validation, or receiving corrupt data.
As there is a Shapeless tag, here is a possible solution using it, but I'm not an expert and I guess one can do better.
First, about the lack of validation: you should simply have read return Try, scalaz.Validation, or just Option if you do not care about an error message.
Then, about the boilerplate: you may try to use HList. This way you don't need to handle all the arities.
import scala.util._
import shapeless._

trait Reader[+A] { self =>
  def read(s: String): Try[A]
  def map[B](f: A => B): Reader[B] = new Reader[B] {
    def read(s: String) = self.read(s).map(f)
  }
}

object Reader {
  // convenience
  def apply[A: Reader]: Reader[A] = implicitly[Reader[A]]
  def read[A: Reader](s: String): Try[A] = implicitly[Reader[A]].read(s)

  // base types
  implicit object StringReader extends Reader[String] {
    def read(s: String) = Success(s)
  }
  implicit object IntReader extends Reader[Int] {
    def read(s: String) = Try { s.toInt }
  }

  // HLists, parts separated by ":"
  implicit object HNilReader extends Reader[HNil] {
    def read(s: String) =
      if (s.isEmpty) Success(HNil)
      else Failure(new Exception("Expect empty"))
  }

  implicit def HListReader[A: Reader, H <: HList: Reader]: Reader[A :: H] =
    new Reader[A :: H] {
      def read(s: String) = {
        val (before, colonAndBeyond) = s.span(_ != ':')
        val after = if (colonAndBeyond.isEmpty) "" else colonAndBeyond.tail
        for {
          a <- Reader.read[A](before)
          b <- Reader.read[H](after)
        } yield a :: b
      }
    }
}
Given that, you have a reasonably short reader for Foo:
case class Foo(a: Int, s: String)

object Foo {
  implicit val FooReader: Reader[Foo] =
    Reader[Int :: String :: HNil].map(Generic[Foo].from _)
}
It works:
println(Reader.read[Foo]("12:text"))
Success(Foo(12,text))
Without Scalaz and Shapeless, I think the idiomatic Scala way to parse such input is Scala parser combinators. In your example, I would try something like this:
import org.joda.time.DateTime
import scala.util.parsing.combinator.JavaTokenParsers

val input =
  """Event:005003:information:2013 12 06 12 37 55:n3.swmml20861:1:Full client swmml20861 registered [entry=280 PID=20864 queue=0x4ca9001b]
    |RSET:m3node:AUTRS:1-1-24:A:0:LOADSHARE:INHIBITED:0
    |M3UA_IP_LINK:m3node:AUT001LKSET1:AUT001LK1:r
    |OPC:m3node:1-10-2(P):A7:NAT0""".stripMargin

trait LineContent
case class Event(number: Int, typ: String, when: DateTime, stuff: List[String]) extends LineContent
case class Reset(node: String, stuff: List[String]) extends LineContent
case class Other(typ: String, stuff: List[String]) extends LineContent

object LineContentParser extends JavaTokenParsers {
  override val whiteSpace = """:""".r
  val space = """\s+""".r
  val lineEnd = """\n""".r // or, more leniently: """\s*(\r?\n\r?)+""".r
  val field = """[^:]*""".r

  def stuff: Parser[List[String]] = rep(field)
  def integer: Parser[Int] = log(wholeNumber ^^ { _.toInt })("integer")
  def date: Parser[DateTime] = log((repsep(integer, space) filter (_.length == 6)) ^^ (l =>
    new DateTime(l(0), l(1), l(2), l(3), l(4), l(5), 0)
  ))("date")

  def event: Parser[Event] = "Event" ~> integer ~ field ~ date ~ stuff ^^ {
    case number ~ typ ~ when ~ stuff => Event(number, typ, when, stuff)
  }
  def reset: Parser[Reset] = "RSET" ~> field ~ stuff ^^ { case node ~ stuff =>
    Reset(node, stuff)
  }
  def other: Parser[Other] = ("M3UA_IP_LINK" | "OPC") ~ stuff ^^ { case typ ~ stuff =>
    Other(typ, stuff)
  }

  def line: Parser[LineContent] = event | reset | other
  def lines = repsep(line, lineEnd)

  def parseLines(s: String) = parseAll(lines, s)
}

LineContentParser.parseLines(input)
LineContentParser.parseLines(input)
The patterns in the parser combinators are self-explanatory. I always convert each successfully parsed chunk as early as possible into a partial result. The partial results are then combined into the final result.
A hint for debugging: you can always add the log parser. It prints before and after a rule is applied. Together with the given name (e.g. "date"), it also prints the current position in the input source where the rule is applied and, when applicable, the parsed partial result.
An example output looks like this:
trying integer at scala.util.parsing.input.CharSequenceReader@108589b
integer --> [1.13] parsed: 5003
trying date at scala.util.parsing.input.CharSequenceReader@cec2e3
trying integer at scala.util.parsing.input.CharSequenceReader@cec2e3
integer --> [1.30] parsed: 2013
trying integer at scala.util.parsing.input.CharSequenceReader@14da3
integer --> [1.33] parsed: 12
trying integer at scala.util.parsing.input.CharSequenceReader@1902929
integer --> [1.36] parsed: 6
trying integer at scala.util.parsing.input.CharSequenceReader@17e4dce
integer --> [1.39] parsed: 12
trying integer at scala.util.parsing.input.CharSequenceReader@1747fd8
integer --> [1.42] parsed: 37
trying integer at scala.util.parsing.input.CharSequenceReader@1757f47
integer --> [1.45] parsed: 55
date --> [1.45] parsed: 2013-12-06T12:37:55.000+01:00
I think this is an easy and maintainable way to parse input into well-typed Scala objects. It is all in the core Scala API, hence I would call it "idiomatic". When typing the example code in an IDEA Scala worksheet, completion and type information worked very well, so this approach seems to be well supported by the IDEs.

Scala Macros: Accessing members with quasiquotes

I'm trying to implement an implicit materializer as described here: http://docs.scala-lang.org/overviews/macros/implicits.html
I decided to create a macro that converts a case class from and to a String using quasiquotes for prototyping purposes. For example:
case class User(id: String, name: String)
val foo = User("testid", "foo")
Converting foo to text should result in "testid foo" and vice versa.
Here is the simple trait and its companion object I have created:
trait TextConvertible[T] {
  def convertTo(obj: T): String
  def convertFrom(text: String): T
}

object TextConvertible {
  import language.experimental.macros
  import QuasiTest.materializeTextConvertible_impl
  implicit def materializeTextConvertible[T]: TextConvertible[T] = macro materializeTextConvertible_impl[T]
}
and here is the macro:
object QuasiTest {
  import reflect.macros._

  def materializeTextConvertible_impl[T: c.WeakTypeTag](c: Context): c.Expr[TextConvertible[T]] = {
    import c.universe._
    val tpe = weakTypeOf[T]
    val fields = tpe.declarations.collect {
      case field if field.isMethod && field.asMethod.isCaseAccessor => field.asMethod.accessed
    }
    val strConvertTo = fields.map {
      field => q"obj.$field"
    }.reduce[Tree] {
      case (acc, elem) => q"""$acc + " " + $elem"""
    }
    val strConvertFrom = fields.zipWithIndex map {
      case (field, index) => q"splitted($index)"
    }
    val quasi = q"""
      new TextConvertible[$tpe] {
        def convertTo(obj: $tpe) = $strConvertTo
        def convertFrom(text: String) = {
          val splitted = text.split(" ")
          new $tpe(..$strConvertFrom)
        }
      }
    """
    c.Expr[TextConvertible[T]](quasi)
  }
}
which generates
{
final class $anon extends TextConvertible[User] {
def <init>() = {
super.<init>();
()
};
def convertTo(obj: User) = obj.id.$plus(" ").$plus(obj.name);
def convertFrom(text: String) = {
val splitted = text.split(" ");
new User(splitted(0), splitted(1))
}
};
new $anon()
}
The generated code looks fine, yet I still get the error value id in class User cannot be accessed in User when compiling code that uses the macro.
I suspect I am using a wrong type for fields. I tried field.asMethod.accessed.name, but it results in def convertTo(obj: User) = obj.id .$plus(" ").$plus(obj.name ); (note the extra spaces after id and name), which naturally results in the error value id is not a member of User.
What am I doing wrong?
Ah, I figured it out almost immediately after posting my question.
I changed the lines
val fields = tpe.declarations.collect {
  case field if field.isMethod && field.asMethod.isCaseAccessor => field.asMethod.accessed
}
to
val fields = tpe.declarations.collect {
  case field if field.isMethod && field.asMethod.isCaseAccessor => field.name
}
which solved the problem.
The field you get with accessed.name has a special suffix attached to it, to avoid naming conflicts.
The special suffix is scala.reflect.api.StandardNames$TermNamesApi.LOCAL_SUFFIX_STRING, which has the value, you guessed it, a space char.
This is quite evil, of course.
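The suffix can be made visible outside a macro as well. A quick sketch using runtime reflection (Scala 2, with scala-reflect on the classpath), which inspects the same symbols the macro sees:

```scala
import scala.reflect.runtime.universe._

case class User(id: String, name: String)

// For each case accessor method (named "id", "name"), the underlying val
// it reads has LOCAL_SUFFIX_STRING (a space) appended to its name.
val accessedNames = typeOf[User].decls.collect {
  case m: MethodSymbol if m.isCaseAccessor => m.accessed.name.toString
}

// Printing with brackets exposes the trailing space, e.g. "[id ]"
accessedNames.foreach(n => println(s"[$n]"))
```

This is why quoting `obj.$field` with the accessed symbol produced `obj.id ` and failed to resolve, while `field.name` (the accessor's name, without the suffix) works.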

How do I add an additional implicit extractor in Play 2.1.4 and actually use it?

I am using Play 2.1.4 against a PostgreSQL db. In the PostgreSQL db, I am using uuid as my pk datatype, which correlates to java.util.UUID. The SqlParser.get[T] function in Anorm doesn't have an implicit extractor for java.util.UUID. That makes sense, because I don't think many people use it; however, I can't seem to find instructions on how to add one. Does anyone know how to add an additional implicit extractor to anorm.SqlParser in Play?
The error I am getting is below:
could not find implicit value for parameter extractor: anorm.Column[java.util.UUID]
I'm really new to Scala and Play, so if my approach is completely wrong, please let me know, but I'd really like to be able to do something like what you see below.
case class App(appId: UUID, appName: String, appServerName: String,
               appComponent: String, appDescription: String,
               appDateCreated: DateTime, appDateModified: DateTime,
               appValidated: Boolean)

val app = {
  get[UUID]("app_id") ~
    get[String]("app_name") ~
    get[String]("app_server_name") ~
    get[String]("app_component") ~
    get[String]("app_description") ~
    get[java.util.Date]("app_date_created") ~
    get[java.util.Date]("app_date_modified") ~
    get[Boolean]("app_validated") map {
      case id ~ name ~ serverName ~ component ~ description ~ dateCreated ~
        dateModified ~ validated => App(id, name, serverName, component,
        description, new DateTime(dateCreated.getTime),
        new DateTime(dateModified.getTime), validated)
    }
}

def all(): List[App] = DB.withConnection { implicit conn =>
  SQL("SELECT * FROM apps").as(app *)
}
Here is a simplified variation of @r.piesnikowski's answer for a JDBC driver that returns java.util.UUID, as PostgreSQL's does:
/**
 * Implicit conversion from UUID to Anorm statement value
 */
implicit def uuidToStatement = new ToStatement[UUID] {
  def set(s: java.sql.PreparedStatement, index: Int, aValue: UUID): Unit = s.setObject(index, aValue)
}

/**
 * Implicit conversion from Anorm row to UUID
 */
implicit def rowToUUID: Column[UUID] = {
  Column.nonNull[UUID] { (value, meta) =>
    value match {
      case v: UUID => Right(v)
      case _ => Left(TypeDoesNotMatch(s"Cannot convert $value:${value.asInstanceOf[AnyRef].getClass} to UUID for column ${meta.column}"))
    }
  }
}
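With these two implicits in scope, the parser and queries from the question can use UUID directly. A sketch, reusing the App case class and the `app` parser defined above (it assumes uuidToStatement and rowToUUID are visible at the call site):

```scala
import java.util.UUID
import anorm._

def byId(id: UUID): Option[App] = DB.withConnection { implicit conn =>
  SQL("SELECT * FROM apps WHERE app_id = {id}")
    .on('id -> id)       // uuidToStatement binds the UUID parameter
    .as(app.singleOpt)   // rowToUUID extracts app_id via get[UUID]("app_id")
}
```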
Maybe this post will be helpful. (Used in my project; working fine.)
/**
 * Attempt to convert a SQL value into a UUID
 *
 * @param value value to convert
 * @return UUID
 */
private def valueToUUIDOption(value: Any): Option[UUID] = {
  try {
    valueToByteArrayOption(value) match {
      case Some(bytes) => Some(UUIDHelper.fromByteArray(bytes))
      case _ => None
    }
  } catch {
    case e: Exception => None
  }
}
/**
 * Implicit conversion from UUID to anorm statement value
 */
implicit def uuidToStatement = new ToStatement[UUID] {
  def set(s: java.sql.PreparedStatement, index: Int, aValue: UUID): Unit = s.setObject(index, aValue)
}

/**
 * Implicit conversion from anorm row to UUID
 */
implicit def rowToUUID: Column[UUID] = {
  Column.nonNull[UUID] { (value, meta) =>
    val MetaDataItem(qualified, nullable, clazz) = meta
    valueToUUIDOption(value) match {
      case Some(uuid) => Right(uuid)
      case _ => Left(TypeDoesNotMatch("Cannot convert " + value + ":" + value.asInstanceOf[AnyRef].getClass + " to UUID for column " + qualified))
    }
  }
}