Scala, cats - convert FUUID with Circe

I use this library https://christopherdavenport.github.io/fuuid/ to create IDs for custom objects and persist them into a database.
I have a simple case class which is my model:
import io.chrisdavenport.fuuid.FUUID
case class Bet(
  betId: Option[FUUID],
  home: String,
  away: String,
  stake: BigDecimal,
  betType: String)
I used FUUID here as an Option parameter. I also have routes created with http4s which should take JSON from the input and map it into the model:
class BettingRoutes[F[_] : Async](service: BettingService[F]) extends Http4sDsl[F] {
  def routes: HttpRoutes[F] = HttpRoutes.of[F] {
    case req @ PUT -> Root / "bets" =>
      for {
        bet <- req.as[Bet]
        created <- service.put(bet)
        response <- Created(created)
      } yield response
  }
}
I also added some implicits to encode and decode with Circe:
object jsons {
  implicit def circeDecoder[A[_] : Sync, B: Decoder]: EntityDecoder[A, B] = jsonOf[A, B]
  implicit def circeEncoder[A[_] : Sync, B: Encoder]: EntityEncoder[A, B] = jsonEncoderOf[A, B]
}
The problem is that when I try to compile the project, I get errors like this in the routes class:
Error:(23, 22) Cannot decode into a value of type model.Bet, because no EntityDecoder[F, model.Bet] instance could be found.
bet <- req.as[Bet]
Error:(23, 22) not enough arguments for method as: (implicit F: cats.Functor[F], implicit decoder: org.http4s.EntityDecoder[F,model.Bet])F[model.Bet].
Unspecified value parameter decoder.
bet <- req.as[Bet]
Error:(25, 28) Cannot convert from model.Bet to an Entity, because no EntityEncoder[F, model.Bet] instance could be found.
response <- Created(created)
etc. I investigated it and it appears to be caused by using FUUID. I changed all the FUUID fields to Long, and after that to Java's UUID, and then everything compiled correctly without errors. The problem occurs only with FUUID, probably because of how it is converted. I tried to use the Circe integration as shown in the FUUID link above, but it did not help. Do you know how to fix this code so that everything compiles with FUUID and Circe?
I am new to cats and the related libraries, so maybe it is a simple mistake, but it is not obvious to me right now.

In order to have an EntityDecoder[F, Bet] via jsons.circeDecoder we first need a Decoder[Bet]. It can be auto-generated by Circe if we have decoders for all fields. The thing is, there is a Decoder[UUID] but no Decoder[FUUID].
So just define the necessary implicit:
implicit val fuuidDecoder: Decoder[FUUID] = Decoder[UUID].map(FUUID.fromUUID)
Similarly for the encoder:
implicit val fuuidEncoder: Encoder[FUUID] = Encoder[UUID].contramap(FUUID.Unsafe.toUUID)
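For completeness, here is a minimal sketch of how these codecs fit together with generic derivation (assuming circe-generic is on the classpath and the Bet case class from the question is in scope; the BetJson object name is just for illustration):
import java.util.UUID
import io.circe.{Decoder, Encoder}
import io.circe.generic.semiauto.{deriveDecoder, deriveEncoder}
import io.chrisdavenport.fuuid.FUUID

object BetJson {
  // FUUID codecs built on top of Circe's UUID codecs
  implicit val fuuidDecoder: Decoder[FUUID] = Decoder[UUID].map(FUUID.fromUUID)
  implicit val fuuidEncoder: Encoder[FUUID] = Encoder[UUID].contramap(FUUID.Unsafe.toUUID)

  // with the FUUID codecs in scope, the Bet codecs can be derived
  implicit val betDecoder: Decoder[Bet] = deriveDecoder[Bet]
  implicit val betEncoder: Encoder[Bet] = deriveEncoder[Bet]
}
Importing BetJson._ together with jsons._ should then provide the EntityDecoder[F, Bet] and EntityEncoder[F, Bet] that the route needs.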

Related

Scala Dynamic Parse Json using case class No Manifest available for T

I have a JSON string and I created a function which parses this JSON into an object using a Scala case class. I wrote the code below to parse it in a generic way. However, it gave me an error:
def getJsonObj[T](jsonString: String): T = {
  implicit val formats: DefaultFormats.type = DefaultFormats
  parse(jsonString).extract[T]
}
The error is shown below:
Error:(19, 32) No Manifest available for T.
  parse(jsonString).extract[T]
Error:(19, 32) not enough arguments for method extract: (implicit formats: org.json4s.Formats, implicit mf: scala.reflect.Manifest[T])T. Unspecified value parameter mf.
  parse(jsonString).extract[T]
I found this: No Manifest available for Type, but I don't know how to fix it in my code. I also found this: Spark Scala - How to construct Scala Map from nested JSON (Scala: "No manifest available for type T"), but I need to pass the case class to the function in a generic way. It seems like a common problem, but I can't solve it using the available answers as I am new to Scala.
Another point: how can I add a try-catch to check whether it was parsed correctly or not?
I think this answer solves your question: Scala: "No manifest available for type T". It is easily solved by passing the manifest for the type implicitly to the method. I've added an example of the code and a simple function for error handling.
// imports assumed for this example: json4s with the jackson backend
import org.json4s._
import org.json4s.jackson.JsonMethods.parse
import scala.util.{Failure, Success, Try}

val jsonStr: String = """{"airports":[{"name":"sfo","score":1},{"name":"phx","score":1},{"name":"sjc","score":1}]}"""
case class AirPortScores(name: String, score: Double)
case class JsonRulesHandler(airports: List[AirPortScores])

val json: JsonRulesHandler = getJsonObj[JsonRulesHandler](jsonStr)
println(json)

def getJsonObj[T](jsonString: String)(implicit m: Manifest[T]): T = {
  extractFrom[T](jsonString) match {
    case Success(jsonParsed) ⇒
      jsonParsed
    case Failure(exc) ⇒
      throw new IllegalArgumentException(exc)
  }
}

private def extractFrom[T](jsonString: String)(implicit m: Manifest[T]): Try[T] = {
  implicit val formats: DefaultFormats.type = DefaultFormats
  Try {
    parse(jsonString).extract[T]
  }
}
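To see the error handling in action, here is a quick hypothetical check with deliberately malformed input, reusing JsonRulesHandler from above:
// "airports" is not a list here, so extraction fails inside Try and the
// Failure branch rethrows it wrapped in an IllegalArgumentException
val badJson = """{"airports": "not-a-list"}"""
try {
  getJsonObj[JsonRulesHandler](badJson)
} catch {
  case e: IllegalArgumentException => println(s"parsing failed: ${e.getMessage}")
}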

Scala Type Classes Understanding Interface Syntax

I was reading about cats and I encountered the following code snippet, which is about serializing objects to JSON.
It starts with a trait like this:
trait JsonWriter[A] {
  def write(value: A): Json
}
After this, there are some instances of our domain object:
final case class Person(name: String, email: String)

object JsonWriterInstances {
  implicit val stringWriter: JsonWriter[String] =
    new JsonWriter[String] {
      def write(value: String): Json =
        JsString(value)
    }
  implicit val personWriter: JsonWriter[Person] =
    new JsonWriter[Person] {
      def write(value: Person): Json =
        JsObject(Map(
          "name" -> JsString(value.name),
          "email" -> JsString(value.email)
        ))
    }
  // etc...
}
So far so good! I can then use this like this:
import JsonWriterInstances._
Json.toJson(Person("Dave", "dave@example.com"))
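(For context, the Json.toJson call relies on an "interface object" along these lines, which the snippet above doesn't show; this is a sketch assuming the same Json/JsString/JsObject ADT used throughout these examples:)
object Json {
  // simply dispatches to whichever JsonWriter instance is in implicit scope
  def toJson[A](value: A)(implicit w: JsonWriter[A]): Json =
    w.write(value)
}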
Later on I come across something called the interface syntax, which uses extension methods to extend existing types with interface methods like below:
object JsonSyntax {
  implicit class JsonWriterOps[A](value: A) {
    def toJson(implicit w: JsonWriter[A]): Json =
      w.write(value)
  }
}
This then simplifies the call to serializing a Person as:
import JsonWriterInstances._
import JsonSyntax._
Person("Dave", "dave#example.com").toJson
What I don't understand is how the Person is boxed into JsonWriterOps such that I can directly call toJson as though it were defined on the Person case class itself. I like this magic, but I fail to understand this last step about JsonWriterOps. So what is the idea behind this interface syntax, and how does it work? Any help?
This is actually a standard Scala feature: since JsonWriterOps is marked implicit and is in scope, the compiler can apply it at compile time when needed.
Hence scalac will do the following transformations:
Person("Dave", "dave#example.com").toJson
new JsonWriterOps(Person("Dave", "dave#example.com")).toJson
new JsonWriterOps[Person](Person("Dave", "dave#example.com")).toJson
Side note:
It's much more efficient to define implicit classes as value classes, like this:
implicit class JsonWriterOps[A](val value: A) extends AnyVal
This also lets the compiler optimize away the wrapper object construction where possible, compiling the whole implicit conversion + method call into a simple function call.
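A minimal sketch of the full value-class variant of the JsonSyntax object above (assuming the JsonWriter trait and Json type from the earlier snippets):
object JsonSyntax {
  implicit class JsonWriterOps[A](val value: A) extends AnyVal {
    def toJson(implicit w: JsonWriter[A]): Json =
      w.write(value)
  }
}
Value classes must be defined at the top level or as members of an object, so keeping the syntax inside JsonSyntax as before works fine.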

Generically Serialize Java Enums to json using json4s

Our Finatra application uses json4s to serialize objects to JSON in our controller responses. However, I noticed that when trying to serialize enums, it produces an empty object.
I saw this response that would resolve my issue but would have to be replicated for each enum:
https://stackoverflow.com/a/35850126/2668545
class EnumSerializer[E <: Enum[E]](implicit ct: Manifest[E]) extends CustomSerializer[E](format ⇒ ({
  case JString(name) ⇒ Enum.valueOf(ct.runtimeClass.asInstanceOf[Class[E]], name)
}, {
  case dt: E ⇒ JString(dt.name())
}))

// first enum I could find
case class X(a: String, enum: java.time.format.FormatStyle)

implicit val formats = DefaultFormats + new EnumSerializer[java.time.format.FormatStyle]()

// {"a":"test","enum":"FULL"}
val jsonString = Serialization.write(X("test", FormatStyle.FULL))
Serialization.read[X](jsonString)
Is there a way to make a generic custom serializer that would handle all Java enum instances by grabbing their .name() value when serializing to JSON?
I don't think there is a clean solution because of the type-safety constraints. Still, if you are OK with a hacky solution that relies on the fact that Java uses type erasure, here is one that seems to work:
class EnumSerializer() extends Serializer[Enum[_]] {
  override def deserialize(implicit format: Formats): PartialFunction[(TypeInfo, JValue), Enum[_]] = {
    // using Json4sFakeEnum is a huge HACK here but it seems to work
    case (TypeInfo(clazz, _), JString(name)) if classOf[Enum[_]].isAssignableFrom(clazz) =>
      Enum.valueOf[Json4sFakeEnum](clazz.asInstanceOf[Class[Json4sFakeEnum]], name)
  }

  override def serialize(implicit format: Formats): PartialFunction[Any, JValue] = {
    case v: Enum[_] => JString(v.name())
  }
}
where Json4sFakeEnum is really a fake enum defined in Java (actually any enum should work, but I prefer to make it explicitly fake):
enum Json4sFakeEnum {
}
With such a definition, an example similar to yours:
// first enum I could find
case class X(a: String, enum: java.time.format.FormatStyle)

def js(): Unit = {
  implicit val formats = DefaultFormats + new EnumSerializer()
  val jsonString = Serialization.write(X("test", FormatStyle.FULL))
  println(s"jsonString '$jsonString'")
  val r = Serialization.read[X](jsonString)
  println(s"res ${r.getClass} '$r'")
}
produces the following output:
jsonString '{"a":"test","enum":"FULL"}'
res class so.Main$X 'X(test,FULL)'
Update, or: how does it work and why do you need Json4sFakeEnum?
There are 2 important things:
1. Extending Serializer instead of CustomSerializer. This is important because it allows creating a single non-generic instance that can handle all Enum types. This works because the function created by Serializer.deserialize receives TypeInfo as an argument, so it can analyze the runtime class.
2. The Json4sFakeEnum hack. From a high-level point of view it is enough to have just the Class of the given enum to get all the names, because they are stored in the Class object. However, at the implementation level the simplest way to access them is the Enum.valueOf method, which has the following signature:
public static <T extends Enum<T>> T valueOf(Class<T> enumType, String name)
The unlucky part here is that it has a generic signature with the restriction T extends Enum<T>. It means that even though we have the proper Class object, the best type we know statically is still Enum[_], and that doesn't fit the self-referencing restriction extends Enum<T>. On the other hand, Java uses type erasure, so valueOf is actually compiled to something like:
public static Enum<?> valueOf(Class<Enum<?>> enumType, String name)
It means that if we just trick the compiler into allowing us to call valueOf, everything will be fine at runtime. And this is where Json4sFakeEnum comes onto the scene: we just need some specific subclass of Enum, known at compile time, to make the valueOf call.
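As a quick sketch of the genericity claim (DayOfWeek and FormatStyle are just arbitrary JDK enums here, and the json4s jackson backend is assumed):
import org.json4s.DefaultFormats
import org.json4s.jackson.Serialization

case class Y(day: java.time.DayOfWeek, style: java.time.format.FormatStyle)

// the single, non-generic serializer from above handles both unrelated enum types
implicit val formats = DefaultFormats + new EnumSerializer()
val s = Serialization.write(Y(java.time.DayOfWeek.MONDAY, java.time.format.FormatStyle.SHORT))
// roughly: {"day":"MONDAY","style":"SHORT"}
val back = Serialization.read[Y](s)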

TypeTag for case classes

I would like to make a case class Bla that takes a type parameter A and knows the type of A at runtime (it stores it in its info field).
My attempt is shown in the example below. The problem is that this example does not compile.
case class Bla[A]() {
  val info = Run.paramInfo(this) // this does not compile
}
import scala.reflect.runtime.universe._

object Run extends App {
  val x = Bla[Int]
  def paramInfo[T](x: T)(implicit tag: TypeTag[T]): String = {
    val targs = tag.tpe match { case TypeRef(_, _, args) => args }
    val tinfo = s"type of $x has type arguments $targs"
    println(tinfo)
    tinfo
  }
  paramInfo(x)
}
However, when I comment out val info = Run.paramInfo(this), the program runs fine and prints:
type of Bla() has type arguments List(Int)
Is there a way to make this example compile? (Or to achieve the same goal in some other way, i.e. have a case class that is aware of the type of its type parameter?)
There's little point in using reflection-based APIs for this; shapeless has a type class that exposes compile-time information at runtime using an implicit macro.
import shapeless.Typeable

class Test[T : Typeable] {
  def info: String = implicitly[Typeable[T]].describe
}
It's also relatively easy to roll your own thing here, with the added inconvenience of having to compile the implicit macro in a different compilation unit than whatever is using it.
You just need to pass the implicit type tag parameter to the case class constructor (otherwise the type information is lost before calling paramInfo, which requires it):
case class Bla[A : TypeTag]() { ... }
Which is shorthand for:
case class Bla[A](implicit tag: TypeTag[A]) { ... }
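A minimal self-contained sketch of that fix, mirroring the original example:
import scala.reflect.runtime.universe._

// Requiring a TypeTag for A lets the compiler materialize TypeTag[Bla[A]]
// at the Run.paramInfo(this) call site, so the type argument survives to runtime.
case class Bla[A: TypeTag]() {
  val info: String = Run.paramInfo(this)
}

object Run extends App {
  def paramInfo[T](x: T)(implicit tag: TypeTag[T]): String = {
    val targs = tag.tpe match { case TypeRef(_, _, args) => args }
    s"type of $x has type arguments $targs"
  }
  println(Bla[Int]().info) // type of Bla() has type arguments List(Int)
}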

spray-json and spray-routing: how to invoke JsonFormat write in complete

I am trying to figure out how to get a custom JsonFormat write method to be invoked when using the routing directive complete. JsonFormats created with the jsonFormat set of helper functions work fine, but a fully hand-written JsonFormat does not get called.
sealed trait Error
sealed trait ErrorWithReason extends Error {
  def reason: String
}
case class ValidationError(reason: String) extends ErrorWithReason
case object EntityNotFound extends Error
case class DatabaseError(reason: String) extends ErrorWithReason

case class Record(a: String, b: String, error: Error)

object MyJsonProtocol extends DefaultJsonProtocol {
  implicit object ErrorJsonFormat extends JsonFormat[Error] {
    def write(err: Error) = err match {
      case e: ErrorWithReason => JsString(e.reason)
      case x => JsString(x.toString())
    }
    def read(value: JsValue) = {
      value match {
        // Really only intended to serialize to JSON for API responses, not implementing read
        case _ => throw new DeserializationException("Can't reliably deserialize Error")
      }
    }
  }
  implicit val record2Json = jsonFormat3(Record)
}
And then a route like:
import MyJsonProtocol._

trait TestRoute extends HttpService with Json4sSupport {
  path("testRoute") {
    val response: Record = getErrorRecord()
    complete(response)
  }
}
If I add logging, I can see that the ErrorJsonFormat.write method never gets called.
The ramifications are shown below: the output I'm trying to get versus what I actually get. Let's say the Record instance was Record("something", "somethingelse", EntityNotFound).
actual:
{
  "a": "something",
  "b": "somethingelse",
  "error": {}
}
intended:
{
  "a": "something",
  "b": "somethingelse",
  "error": "EntityNotFound"
}
I was expecting complete(record) to use the implicit JsonFormat for Record, which in turn relies on the implicit object ErrorJsonFormat whose write method creates the appropriate JsString field. Instead it seems to recognize the provided ErrorJsonFormat while ignoring its instructions for serializing.
I feel like there should be a solution that does not involve needing to replace implicit val record2Json = jsonFormat3(Record) with an explicit implicit object RecordJsonFormat extends JsonFormat[Record] { ... }
So to summarize what I am asking:
1. Why does the serialization of Record fail to call the ErrorJsonFormat write method (what does it even do instead)? (answered below)
2. Is there a way to match my expectation while still using complete(record)?
Edit
Digging through the spray-json source code, there is an sbt-boilerplate template that seems to define the jsonFormat series of methods: https://github.com/spray/spray-json/blob/master/src/main/boilerplate/spray/json/ProductFormatsInstances.scala.template
and the relevant product for jsonFormat3 from that seems to be:
def jsonFormat3[P1 :JF, P2 :JF, P3 :JF, T <: Product :ClassManifest](construct: (P1, P2, P3) => T): RootJsonFormat[T] = {
  val Array(p1, p2, p3) = extractFieldNames(classManifest[T])
  jsonFormat(construct, p1, p2, p3)
}
def jsonFormat[P1 :JF, P2 :JF, P3 :JF, T <: Product](construct: (P1, P2, P3) => T, fieldName1: String, fieldName2: String, fieldName3: String): RootJsonFormat[T] = new RootJsonFormat[T] {
  def write(p: T) = {
    val fields = new collection.mutable.ListBuffer[(String, JsValue)]
    fields.sizeHint(3 * 4)
    fields ++= productElement2Field[P1](fieldName1, p, 0)
    fields ++= productElement2Field[P2](fieldName2, p, 1)
    fields ++= productElement2Field[P3](fieldName3, p, 2)
    JsObject(fields: _*)
  }
  def read(value: JsValue) = {
    val p1V = fromField[P1](value, fieldName1)
    val p2V = fromField[P2](value, fieldName2)
    val p3V = fromField[P3](value, fieldName3)
    construct(p1V, p2V, p3V)
  }
}
From this it would seem that jsonFormat3 itself is perfectly fine (if you trace into productElement2Field, it grabs the writer and directly calls write). The problem must then be that complete(record) doesn't involve JsonFormat at all and marshals the object by some other means.
So this seems to answer part 1: Why does the serialization of Record fail to call the ErrorJsonFormat write method (what does it even do instead?). No JsonFormat is called because complete marshals via some other means.
It seems the remaining question is whether it is possible to provide a marshaller for the complete directive that will use the JsonFormat if it exists and otherwise default to its normal behavior. I realize that I can generally rely on the default marshaller for basic case class serialization. But when I get a complicated trait/case class setup like in this example, I need to use JsonFormat to get the proper response. Ideally, someone writing routes shouldn't need to know in which situations the default marshaller suffices and in which a JsonFormat must be invoked. Or in other words, needing to distinguish whether a given type should be written as complete(someType) or complete(someType.toJson) feels wrong.
After digging further, it seems the root of the problem was a confusion between the json4s and spray-json libraries in the code. In trying to track down examples of various elements of JSON handling, I didn't readily recognize the separation between the two libraries and ended up with code that mixed some of each, which explains the unexpected behavior.
In this question, the offending piece is pulling in Json4sSupport in the router. The proper definition should use SprayJsonSupport:
import MyJsonProtocol._

trait TestRoute extends HttpService with SprayJsonSupport {
  path("testRoute") {
    val response: Record = getErrorRecord()
    complete(response)
  }
}
With this all considered, the answers are more apparent.
1. Why does the serialization of Record fail to call the ErrorJsonFormat write method (what does it even do instead)?
No JsonFormat is called because complete marshals via some other means. That other means is the marshaling provided implicitly by Json4s with Json4sSupport. You can use record.toJson to force spray-json serialization of the object, but the output will not be clean (it will include nested JS objects and "fields" keys).
2. Is there a way to match my expectation while still using complete(record)?
Yes, using SprayJsonSupport will use implicit RootJsonReader and/or RootJsonWriter where needed to automatically create a relevant Unmarshaller and/or Marshaller. Documentation reference
So with SprayJsonSupport in scope, it will see the RootJsonWriter defined by jsonFormat3(Record), and complete(record) will serialize as expected.
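For a quick sanity check of the expected output, here is a sketch using spray-json directly (assuming the MyJsonProtocol defined above is in scope):
import MyJsonProtocol._
import spray.json._

val rec = Record("something", "somethingelse", EntityNotFound)
println(rec.toJson.compactPrint)
// expected: {"a":"something","b":"somethingelse","error":"EntityNotFound"}
// (field order may differ)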