I am having trouble deserializing an org.joda.time.DateTime field from JSON into a case class.
The JSON:
val ajson=parse(""" { "creationDate": "2013-01-02T10:48:41.000-05:00" }""")
I also set these serialization options:
implicit val formats = Serialization.formats(NoTypeHints) ++ net.liftweb.json.ext.JodaTimeSerializers.all
And the deserialization:
val val1=ajson.extract[Post]
where Post is:
case class Post(creationDate: DateTime){ ... }
The exception I get is:
net.liftweb.json.MappingException: No usable value for creationDate
Invalid date format 2013-01-02T10:48:41.000-05:00
How can I deserialize that date string into a DateTime object?
EDIT:
This works: val date3= new DateTime("2013-01-05T06:24:53.000-05:00")
which uses the same date string from the JSON as in the deserialization. What's happening here?
It seems to be the DateParser format that Lift uses by default. Digging into the code, you can see that the parser attempts DateParser.parse(s, format) before passing the result to the constructor for org.joda.time.DateTime:
object DateParser {
def parse(s: String, format: Formats) =
format.dateFormat.parse(s).map(_.getTime).getOrElse(throw new MappingException("Invalid date format " + s))
}
case object DateTimeSerializer extends CustomSerializer[DateTime](format => (
{
case JString(s) => new DateTime(DateParser.parse(s, format))
case JNull => null
},
{
case d: DateTime => JString(format.dateFormat.format(d.toDate))
}
))
The format that Lift seems to be using is: yyyy-MM-dd'T'HH:mm:ss.SSS'Z'
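The mismatch can be reproduced with plain java.text.SimpleDateFormat (a minimal sketch, not Lift code; it assumes a Java 7+ runtime for the XXX offset pattern):

```scala
import java.text.SimpleDateFormat
import scala.util.Try

object FormatMismatch {
  val input = "2013-01-02T10:48:41.000-05:00"

  // Lift's default pattern expects a literal 'Z', so "-05:00" cannot match.
  val defaultFmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'")
  // A pattern with an ISO-8601 offset (XXX) parses the same string fine.
  val offsetFmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSXXX")

  val failsWithDefault = Try(defaultFmt.parse(input)).isFailure
  val worksWithOffset  = Try(offsetFmt.parse(input)).isSuccess
}
```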
To get around that, you could either specify the correct pattern and add that to your serialization options, or if you would prefer to just have the JodaTime constructor do all the work, you could create your own serializer like:
case object MyDateTimeSerializer extends CustomSerializer[DateTime](format => (
{
case JString(s) => tryo(new DateTime(s)).openOr(throw new MappingException("Invalid date format " + s))
case JNull => null
},
{
case d: DateTime => JString(format.dateFormat.format(d.toDate))
}
))
And then add that to your list of formats, instead of net.liftweb.json.ext.JodaTimeSerializers.all
Perhaps not 100% elegant, but it is just a few lines, quite readable, and it works:
val SourceISODateTimeFormat = DateTimeFormat.forPattern("YYYY-MM-dd'T'HH:mm:ss.SSSZ")
val IntermediateDateTimeFormat = DateTimeFormat.forPattern("YYYY-MM-dd'T'HH:mm:ss'Z'")
def transformTimestamps(jvalue: JValue) = jvalue.transform {
case JField(name @ ("createdTime" | "updatedTime"), JString(value)) =>
val dt = SourceISODateTimeFormat.parseOption(value).get
JField(name, JString(IntermediateDateTimeFormat.print(dt)))
}
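For comparison, the same two-pattern round-trip can be sketched self-contained with the standard library's java.time instead of Joda-Time (patterns mirror the ones above; XXX is java.time's ISO offset token):

```scala
import java.time.OffsetDateTime
import java.time.format.DateTimeFormatter

object TimestampTransform {
  // Source pattern: ISO-8601 with milliseconds and a numeric offset.
  private val source = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSXXX")
  // Intermediate pattern: seconds precision, literal 'Z'.
  private val intermediate = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss'Z'")

  // Re-render an offset timestamp in the intermediate format, keeping local time.
  def transform(value: String): String =
    OffsetDateTime.parse(value, source).format(intermediate)
}
```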
Let's say I have a case class with the optional field nickName and codec like this:
final case class Person(name: String, nickName: Option[String])
object Person {
implicit val personCodec: JsonCodec[Person] = DeriveJsonCodec.gen
}
I want to encode it using zio-json (v1.5.0) and have this as result:
{"name":"SomeName"}
And this is my test for it:
encoder.encodeJson(Person("SomeName", None), None).toString shouldBe """{"name":"SomeName"}"""
It looks like zio-json encodes None as null, and I get this test failure:
Expected :"{"name":"SomeName"[]}"
Actual :"{"name":"SomeName"[,"nickName":null]}"
I checked the code and found the encoder for Option https://github.com/zio/zio-json/blob/52d007ee22f214d12e1706b016f149c3243c632c/zio-json/shared/src/main/scala/zio/json/encoder.scala#L188-L202
Any idea how I can encode it as a missing JSON field?
implicit val OptionStringCodec: JsonCodec[Option[String]] = JsonCodec.string.xmap( s =>
s match {
case null | "" => None
case s => Some(s)
},
_.getOrElse("")
)
Note: I'm not familiar with zio-json; there might be another way, such as a configuration option, to achieve the same thing.
Given the sample of code you linked, you can easily write an encoder that doesn't write anything in the None case by copying most of that code but modifying:
def unsafeEncode(oa: Option[A], indent: Option[Int], out: Write): Unit = oa match {
case None => () // out.write("null")
case Some(a) => A.unsafeEncode(a, indent, out)
}
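The target behavior itself is easy to state in plain Scala. Here is a toy field renderer (illustrative only, not the zio-json API) showing the desired output shape, with None fields dropped rather than rendered as null:

```scala
// Toy renderer: collect only the Some fields and join them as a JSON object.
// Assumes string values only; real encoders must also escape them.
def encodeFields(fields: List[(String, Option[String])]): String =
  fields.collect { case (key, Some(value)) => s""""$key":"$value"""" }
    .mkString("{", ",", "}")
```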
I have the following implementation:
val dateFormats = Seq("dd/MM/yyyy", "dd.MM.yyyy")
implicit def dateTimeCSVConverter: CsvFieldReader[DateTime] = (s: String) => Try {
val elem = dateFormats.map {
format =>
try {
Some(DateTimeFormat.forPattern(format).parseDateTime(s))
} catch {
case _: IllegalArgumentException =>
None
}
}.collectFirst {
case e if e.isDefined => e.get
}
if (elem.isDefined)
elem.get
else
throw new IllegalArgumentException(s"Unable to parse DateTime $s")
}
So basically what I'm doing is running over my Seq and trying to parse the DateTime with different formats. I then collect the first one that succeeds, and if none do, I throw the exception back.
I'm not completely satisfied with the code. Is there a better way to make it simpler? I need the exception message passed on to the caller.
The one problem with your code is that it tries all the patterns even if the date was already parsed. You could use a lazy collection, like Stream, to solve this problem:
def dateTimeCSVConverter(s: String) = Stream("dd/MM/yyyy", "dd.MM.yyyy")
  .map(f => Try(DateTimeFormat.forPattern(f).parseDateTime(s)))
  .dropWhile(_.isFailure)
  .headOption
Even better is the solution proposed by jwvh with find (you don't have to call headOption):
def dateTimeCSVConverter(s: String) = Stream("dd/MM/yyyy", "dd.MM.yyyy")
  .map(f => Try(DateTimeFormat.forPattern(f).parseDateTime(s)))
  .find(_.isSuccess)
It returns None if none of the patterns matched. If you want to throw an exception in that case, you can unwrap the option with getOrElse:
...
.dropWhile(_.isFailure)
.headOption
.getOrElse(throw new IllegalArgumentException(s"Unable to parse DateTime $s"))
The important thing is that when any validation succeeds, it won't go further but will return the parsed date right away.
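The same first-success strategy, sketched self-contained with the standard library's java.time instead of Joda-Time (LazyList is Scala 2.13's replacement for the deprecated Stream):

```scala
import java.time.LocalDate
import java.time.format.DateTimeFormatter
import scala.util.Try

object MultiFormatParser {
  private val formats =
    Seq("dd/MM/yyyy", "dd.MM.yyyy").map(p => DateTimeFormatter.ofPattern(p))

  // Lazily try each pattern and stop at the first success.
  def parse(s: String): LocalDate =
    formats.to(LazyList)
      .map(f => Try(LocalDate.parse(s, f)))
      .find(_.isSuccess)
      .map(_.get)
      .getOrElse(throw new IllegalArgumentException(s"Unable to parse DateTime $s"))
}
```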
This is a possible solution that iterates through all the options
val dateFormats = Seq("dd/MM/yyyy", "dd.MM.yyyy")
val dates = Vector("01/01/2019", "01.01.2019", "01-01-2019")
dates.foreach(s => {
val d: Option[Try[DateTime]] = dateFormats
.map(format => Try(DateTimeFormat.forPattern(format).parseDateTime(s)))
.filter(_.isSuccess)
.headOption
d match {
case Some(d) => println(d.toString)
case _ => throw new IllegalArgumentException("foo")
}
})
This is an alternative solution that returns the first successful conversion, if any
val dateFormats = Seq("dd/MM/yyyy", "dd.MM.yyyy")
val dates = Vector("01/01/2019", "01.01.2019", "01-01-2019")
dates.foreach(s => {
dateFormats.find(format => Try(DateTimeFormat.forPattern(format).parseDateTime(s)).isSuccess) match {
case Some(format) => println(DateTimeFormat.forPattern(format).parseDateTime(s))
case _ => throw new IllegalArgumentException("foo")
}
})
I made it sweet like this now! I like this a lot better! Use this if you want to collect all the successes and all the failures. Note that this might be a bit inefficient when you need to break out of the loop as soon as you find one success!
implicit def dateTimeCSVConverter: CsvFieldReader[DateTime] = (s: String) => Try {
  val (successes, failures) = dateFormats.map { format =>
    Try(DateTimeFormat.forPattern(format).parseDateTime(s))
  }.partition(_.isSuccess)

  if (successes.nonEmpty)
    successes.head.get
  else
    failures.head.get
}
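For a self-contained illustration of the partition idea, here is the same shape using the standard library's java.time rather than Joda-Time (hypothetical object name):

```scala
import java.time.LocalDate
import java.time.format.DateTimeFormatter
import scala.util.Try

object PartitionParser {
  private val dateFormats = Seq("dd/MM/yyyy", "dd.MM.yyyy")

  // Try every pattern, then split the attempts into successes and failures.
  // Returns the first success, or the first failure if nothing parsed.
  def parse(s: String): Try[LocalDate] = {
    val (successes, failures) = dateFormats
      .map(p => Try(LocalDate.parse(s, DateTimeFormatter.ofPattern(p))))
      .partition(_.isSuccess)
    if (successes.nonEmpty) successes.head else failures.head
  }
}
```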
I am using case classes to extract json with json4s's extract method. Unfortunately, the Natural Earth source data I am using isn't consistent about casing... at some resolutions a field is called iso_a2 and at some it's ISO_A2. I can only make json4s accept the one that matches the field in the case class:
object TopoJSON {
case class Properties(ISO_A2: String)
...
// only accepts capitalised version.
Is there any way to make json4s ignore case and accept both?
There is no way to make it case insensitive using the configuration properties, but a similar result can be achieved by either lowercasing or uppercasing the field names in the parsed JSON.
For example, we have our input:
case class Properties(iso_a2: String)
implicit val formats = DefaultFormats
val parsedLower = parse("""{ "iso_a2": "test1" }""")
val parsedUpper = parse("""{ "ISO_A2": "test2" }""")
We can lowercase all field names using a short function:
private def lowercaseAllFieldNames(json: JValue) = json transformField {
case (field, value) => (field.toLowerCase, value)
}
or make it for specific fields only:
private def lowercaseFieldByName(fieldName: String, json: JValue) = json transformField {
case (field, value) if field == fieldName => (fieldName.toLowerCase, value)
}
Now, to extract the case class instances:
val resultFromLower = lowercaseAllFieldNames(parsedLower).extract[Properties]
val resultFromUpper = lowercaseAllFieldNames(parsedUpper).extract[Properties]
val resultByFieldName = lowercaseFieldByName("ISO_A2", parsedUpper).extract[Properties]
// all produce expected items:
// Properties(test1)
// Properties(test2)
// Properties(test2)
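The key-normalization trick is independent of json4s; the same idea on a plain Map (illustrative helper only):

```scala
// Lowercase every key so that subsequent lookups become case-insensitive.
def lowercaseKeys[V](fields: Map[String, V]): Map[String, V] =
  fields.map { case (k, v) => (k.toLowerCase, v) }
```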
I'm new to Scala and the Play Framework. I'm trying to query all the data for selected columns from a database table and save it as an Excel file.
Selected columns usually have different types, such as Int, String, Timestamp, etc.
I want to convert all value types, including null, into String
(null converts to the empty string "")
without knowing the actual type of a column, so the code can be used for any table.
According to Play's documentation, I can write the implicit converter below; however, it cannot handle null. I've googled this for a long time and cannot find a solution. Can someone please let me know how to handle null in the implicit converter?
Thanks in advance~
implicit def valueToString: anorm.Column[String] =
anorm.Column.nonNull1[String] { (value, meta) =>
val MetaDataItem(qualified, nullable, clazz) = meta
value match {
case s: String => Right(s) // Provided-default case
case i: Int => Right(i.toString()) // Int to String
case t: java.sql.Clob => Right(t.toString()) // Blob/Text to String
case d: java.sql.Timestamp => Right(d.toString()) // Datetime to String
case _ => Left(TypeDoesNotMatch(s"Cannot convert $value: ${value.asInstanceOf[AnyRef].getClass} to String for column $qualified"))
}
}
As indicated in the documentation, if there is a Column[T] allowing a column of type T to be parsed, and if the column(s) can be null, then Option[T] should be asked for, benefiting from the generic support for Option[T].
Here it is a custom Column[String] (make sure the custom one is used, not the provided Column[String]), so Option[String] should be asked for.
import myImplicitStrColumn
val parser = get[Option[String]]("col")
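Independent of Anorm, the null-to-empty-string conversion itself can be sketched in plain Scala with Option (a hypothetical helper, not Anorm API):

```scala
// Option(value) collapses null to None, so null safely becomes "".
def toDisplayString(value: Any): String =
  Option(value).map {
    case s: String => s
    case other     => other.toString
  }.getOrElse("")
```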
I am struggling with using the extractor pattern in a certain use case where it seems that it could be very powerful.
I start with an input of Map[String, String] coming from a web request. This is either a searchRequest or a countRequest to our API.
searchRequest has keys
query(required)
fromDate(optional-defaulted)
toDate(optional-defaulted)
nextToken(optional)
maxResults(optional-defaulted)
countRequest has keys
query(required)
fromDate(optional-defaulted)
toDate(optional-defaulted)
bucket(optional-defaulted)
Then, I want to convert both of these to a composition type structure like so
protected case class CommonQueryRequest(
originalQuery: String,
fromDate: DateTime,
toDate: DateTime
)
case class SearchQueryRequest(
commonRequest: CommonQueryRequest,
maxResults: Int,
nextToken: Option[Long])
case class CountRequest(commonRequest: CommonQueryRequest, bucket: String)
As you can see, I am essentially converting Strings to DateTimes, Ints, Longs, etc. My issue is that I really need distinct errors for an invalid fromDate vs. an invalid toDate format vs. invalid maxResults vs. an invalid nextToken, IF available.
At the same time, I need to stick in defaults (which vary depending on whether it is a search or a count request).
Naturally, with the Map being passed in, you can tell search vs. count so in my first go at this, I added a key="type" with value of search or count so that I could match at least on that.
Am I even going down the correct path? I thought perhaps using matching could be cleaner than our existing implementation but the further I go down this path, it seems to be getting a bit uglier.
thanks,
Dean
I would suggest you take a look at scalaz.Validation and ValidationNel. It's a super nice way to collect validation errors, a perfect fit for input request validation.
You can learn more about Validation here: http://eed3si9n.com/learning-scalaz/Validation.html. Note that my example uses scalaz 7.1, which can differ a little from what is described in that article, but the main idea remains the same.
Here's a small example for your use case:
import java.util.NoSuchElementException
import org.joda.time.DateTime
import org.joda.time.format.DateTimeFormat
import scala.util.Try
import scalaz.ValidationNel
import scalaz.syntax.applicative._
import scalaz.syntax.validation._
type Input = Map[String, String]
type Error = String
case class CommonQueryRequest(originalQuery: String,
fromDate: DateTime,
toDate: DateTime)
case class SearchQueryRequest(commonRequest: CommonQueryRequest,
maxResults: Int,
nextToken: Option[Long])
case class CountRequest(commonRequest: CommonQueryRequest, bucket: String)
def stringField(field: String)(input: Input): ValidationNel[Error, String] =
input.get(field) match {
case None => s"Field $field is not defined".failureNel
case Some(value) => value.successNel
}
val dateTimeFormat = DateTimeFormat.fullTime()
def dateTimeField(field: String)(input: Input): ValidationNel[Error, DateTime] =
Try(dateTimeFormat.parseDateTime(input(field))) recover {
case _: NoSuchElementException => DateTime.now()
} match {
case scala.util.Success(dt) => dt.successNel
case scala.util.Failure(err) => err.toString.failureNel
}
def intField(field: String)(input: Input): ValidationNel[Error, Int] =
Try(input(field).toInt) match {
case scala.util.Success(i) => i.successNel
case scala.util.Failure(err) => err.toString.failureNel
}
def countRequest(input: Input): ValidationNel[Error, CountRequest] =
(
stringField ("query") (input) |#|
dateTimeField("fromDate")(input) |#|
dateTimeField("toDate") (input) |#|
stringField ("bucket") (input)
) { (query, from, to, bucket) =>
CountRequest(CommonQueryRequest(query, from, to), bucket)
}
val validCountReq = Map("query" -> "a", "bucket" -> "c")
val badCountReq = Map("fromDate" -> "invalid format", "bucket" -> "c")
println(countRequest(validCountReq))
println(countRequest(badCountReq))
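If pulling in scalaz isn't an option, the error-accumulation idea can be sketched with the standard library's Either (illustrative helpers with hypothetical names; scalaz's |#| does this generically via the applicative instance):

```scala
object Validation {
  type Errors = List[String]
  type Input  = Map[String, String]

  def stringField(field: String)(input: Input): Either[Errors, String] =
    input.get(field).toRight(List(s"Field $field is not defined"))

  def intField(field: String)(input: Input): Either[Errors, Int] =
    stringField(field)(input).flatMap(v =>
      v.toIntOption.toRight(List(s"Field $field is not an Int: $v")))

  // Combine two validations, keeping the errors from both sides
  // (unlike Either's flatMap, which stops at the first failure).
  def zipAcc[A, B](a: Either[Errors, A], b: Either[Errors, B]): Either[Errors, (A, B)] =
    (a, b) match {
      case (Right(x), Right(y)) => Right((x, y))
      case (Left(e1), Left(e2)) => Left(e1 ++ e2)
      case (Left(e), _)         => Left(e)
      case (_, Left(e))         => Left(e)
    }
}
```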
Scalactic looks pretty cool as well and I may go that route (though I'm not sure if we can use that lib, but I think I will just proceed forward until someone says no).