I am using the liftweb JSON converter and got it working by including the dependency in build.sbt like this:
"net.liftweb" %% "lift-json" % "2.6.2"
This all worked until I added Enumerations.
I can see here that Enumerations are supported, and you should do something like this:
// Scala enums
implicit val formats = net.liftweb.json.DefaultFormats + new EnumSerializer(MyEnum)
But the problem is that in my environment the net.liftweb.json.ext package is not recognized, and that is the package where EnumSerializer lives.
There is a separate extensions library that you need to include. Adding an extra line like this:
"net.liftweb" %% "lift-json-ext" % "2.6.2"
should do the trick.
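With that on the classpath, here is a minimal round-trip sketch (MyEnum and Wrapper are illustrative assumptions, not from the question):
import net.liftweb.json._
import net.liftweb.json.ext.EnumSerializer

// hypothetical enumeration and wrapper, purely for illustration
object MyEnum extends Enumeration {
  val Foo, Bar = Value
}
case class Wrapper(e: MyEnum.Value)

implicit val formats: Formats = DefaultFormats + new EnumSerializer(MyEnum)

val json = Serialization.write(Wrapper(MyEnum.Foo)) // encodes the enum by its id
val back = parse(json).extract[Wrapper]             // Wrapper(Foo)
Note that EnumSerializer encodes values by their id; the same package also provides EnumNameSerializer if you want the name instead.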
I had an enumeration that was generated from a gRPC proto, and in that case the EnumSerializer didn't work for me, so I created a custom serializer instead, and it worked great:
import net.liftweb.json._
import io.grpc.Status
// GrpcServiceException comes from the gRPC runtime (e.g. akka.grpc);
// TimeUnit is the proto-generated enumeration
case object GrpcTimeUnitSerializer extends CustomSerializer[TimeUnit](format => (
  {
    case JString(tu) => TimeUnit.fromName(tu.toUpperCase).get
    case JNull => throw new GrpcServiceException(
      Status.INTERNAL.withDescription("Not allowed null value for the type TimeUnit."))
  },
  {
    case tu: TimeUnit => JString(tu.toString)
  }
))
And here is the DefaultFormats definition:
implicit val formats: Formats = DefaultFormats + GrpcTimeUnitSerializer
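A quick usage sketch (Interval is a hypothetical case class, and TimeUnit.SECONDS is assumed to exist in the generated enum):
case class Interval(unit: TimeUnit, length: Int) // hypothetical wrapper

val in = parse("""{"unit": "seconds", "length": 5}""").extract[Interval]
// => Interval(SECONDS, 5)
val out = Serialization.write(in)
// => {"unit":"SECONDS","length":5}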
This question is based upon Scala 2.12.12
scalaVersion := "2.12.12"
using play-json
"com.typesafe.play" %% "play-json" % "2.9.1"
If I have a Json object that looks like this:
{
"UpperCaseKey": "some value",
"AnotherUpperCaseKey": "some other value"
}
I know I can create a case class like so:
case class Yuck(UpperCaseKey: String, AnotherUpperCaseKey: String)
and follow that up with this chaser:
implicit val jsYuck = Json.format[Yuck]
and that will, of course, give me both reads[Yuck] and writes[Yuck] to and from Json.
I'm asking this because I have a use case where I'm not the one deciding the casing of the keys, and I'm being handed a JSON object full of keys that start with an uppercase letter.
In this use case I will have to read and convert millions of them so performance is a concern.
I've looked into @JsonAnnotations and Scala's transformers. The former doesn't seem to have much documentation for field-level use in Scala, and the latter seems to be a lot of boilerplate for something that might be very simple another way, if only I knew how...
Bear in mind as you answer this that some Keys will be named like this:
XXXYyyyyZzzzzz
So the predefined Snake/Camel case conversions will not work.
Writing a custom conversion seems to be an option, yet I'm unsure how to do that in Scala.
Is there a way to arbitrarily request that the JSON read take the key "XXXYyyyZzzz" and match it to a field labeled "xxxYyyyZzzz" in a Scala case class?
Just to be clear, I may also need to convert (or at least know how to) a JSON key named "AbCdEf" into a field labeled "fghi".
Just use the provided PascalCase naming:
case class Yuck(
  upperCaseKey: String,
  anotherUpperCaseKey: String)

object Yuck {
  import play.api.libs.json._

  implicit val jsonFormat: OFormat[Yuck] = {
    implicit val cfg = JsonConfiguration(naming = JsonNaming.PascalCase)
    Json.format
  }
}
play.api.libs.json.Json.parse("""{
"UpperCaseKey": "some value",
"AnotherUpperCaseKey": "some other value"
}""").validate[Yuck]
// => JsSuccess(Yuck(some value,some other value),)
play.api.libs.json.Json.toJson(Yuck(
upperCaseKey = "foo",
anotherUpperCaseKey = "bar"))
// => JsValue = {"UpperCaseKey":"foo","AnotherUpperCaseKey":"bar"}
I think the only way play-json supports such a scenario is by defining your own Format.
Let's assume we have:
case class Yuck(xxxYyyyZzzz: String, fghi: String)
So we can define a Format on the companion object:
object Yuck {
  import play.api.libs.json._
  import play.api.libs.functional.syntax._

  implicit val format: Format[Yuck] = (
    (__ \ "XXXYyyyZzzz").format[String] and
    (__ \ "AbCdEf").format[String]
  )(Yuck.apply(_, _), yuck => (yuck.xxxYyyyZzzz, yuck.fghi))
}
Then the following:
val jsonString = """{ "XXXYyyyZzzz": "first value", "AbCdEf": "second value" }"""
val yuck = Json.parse(jsonString).validate[Yuck]
println(yuck)
yuck.map(yuckResult => Json.toJson(yuckResult)).foreach(println)
Will output:
JsSuccess(Yuck(first value,second value),)
{"XXXYyyyZzzz":"first value","AbCdEf":"second value"}
As we can see, XXXYyyyZzzz was mapped into xxxYyyyZzzz and AbCdEf into fghi.
Code run at Scastie.
Another option you have is to use JsonNaming, as @cchantep suggested in the comments. If you define:
object Yuck {
  val keysMap = Map("xxxYyyyZzzz" -> "XXXYyyyZzzz", "fghi" -> "AbCdEf")

  implicit val config = JsonConfiguration(JsonNaming(keysMap))
  implicit val format = Json.format[Yuck]
}
Running the same code will produce the same output. Code run at Scastie.
I need to make a BSON serializer for my case class Filter, which looks like this:
case class Filter(name: String,
                  `type`: LogicType,
                  value: JsArray,
                  operator: OperatorType)
Within my serializer class I already have the following:
implicit val LogicTypeHandler: BSONHandler[LogicType] = EnumHandler.handler(LogicType)
implicit val OperatorTypeHandler: BSONHandler[OperatorType] = EnumHandler.handler(OperatorType)
// >>> Missing BSON serializer for JsArray
implicit val FilterHandler: BSONDocumentHandler[Filter] = Macros.handler[Filter]
The issue is that I need to save value as a JsArray, and only later at runtime will I know what exactly it contains.
In my case I'm expecting it to be either an array of Strings or of Ints.
The question is how to implement a BSON serializer for this JsArray. I'm getting this error:
Implicit not found for 'Filter.value': reactivemongo.api.bson.BSONReader[play.api.libs.json.JsArray]
Additional info:
"org.reactivemongo" %% "play2-reactivemongo" % "1.0.0-play28"
"com.typesafe.play" % "sbt-plugin" % "2.8.5"
Please advise.
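One way to unblock Macros.handler[Filter] is to put a BSONHandler[JsArray] in scope. Here is a minimal sketch, assuming the reactivemongo.api.bson 1.0 API and covering only the String/Int element types mentioned above:
import scala.util.{Failure, Try}
import play.api.libs.json._
import reactivemongo.api.bson._

implicit object JsArrayHandler extends BSONHandler[JsArray] {
  private def toBson(js: JsValue): BSONValue = js match {
    case JsString(s)                 => BSONString(s)
    case JsNumber(n) if n.isValidInt => BSONInteger(n.toIntExact)
    case other => throw new IllegalArgumentException(s"Unsupported element: $other")
  }

  private def toJson(bson: BSONValue): JsValue = bson match {
    case BSONString(s)  => JsString(s)
    case BSONInteger(i) => JsNumber(i)
    case other => throw new IllegalArgumentException(s"Unsupported element: $other")
  }

  def writeTry(arr: JsArray): Try[BSONValue] =
    Try(BSONArray(arr.value.map(toBson)))

  def readTry(bson: BSONValue): Try[JsArray] = bson match {
    case arr: BSONArray => Try(JsArray(arr.values.map(toJson)))
    case other => Failure(new IllegalArgumentException(s"Expected BSONArray, got $other"))
  }
}
With that in scope, Macros.handler[Filter] can derive the document handler; any other element type would need additional cases.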
I'm trying to build a simple typeclass IsEnum[T] using a macro.
I use knownDirectSubclasses to get all the direct subclasses of T, ensure T is a sealed trait, and check that all subclasses are case objects (using subSymbol.asClass.isModuleClass && subSymbol.asClass.isCaseClass).
Now I'm trying to build a Seq containing the case objects referred to by those subclass symbols.
It's working, using a workaround:
Ident(subSymbol.asInstanceOf[scala.reflect.internal.Symbols#Symbol].sourceModule.asInstanceOf[Symbol])
But I copied that from some other question, and it seems hacky and wrong. Why does it work, and is there a cleaner way to achieve this?
In 2.13 you can materialize scala.ValueOf:
val instanceTree = c.inferImplicitValue(appliedType(typeOf[ValueOf[_]].typeConstructor, subSymbol.asClass.toType))
q"$instanceTree.value"
The generated tree will be different:
sealed trait A
object A {
  case object B extends A
  case object C extends A
}
//scalac: Seq(new scala.ValueOf(A.this.B).value, new scala.ValueOf(A.this.C).value)
but at runtime it's still Seq(B, C).
In 2.12, shapeless.Witness can be used instead of ValueOf:
val instanceTree = c.inferImplicitValue(appliedType(typeOf[Witness.Aux[_]].typeConstructor, subSymbol.asClass.toType))
q"$instanceTree.value"
//scalac: Seq(Witness.mkWitness[App.A.B.type](A.this.B.asInstanceOf[App.A.B.type]).value, Witness.mkWitness[App.A.C.type](A.this.C.asInstanceOf[App.A.C.type]).value)
libraryDependencies += "com.chuusai" %% "shapeless" % "2.4.0-M1" // in 2.3.3 it doesn't work
Shapeless itself uses something like:
subSymbol.asClass.toType match {
  case ref @ TypeRef(_, sym, _) if sym.isModuleClass => mkAttributedQualifier(ref)
}
https://github.com/milessabin/shapeless/blob/master/core/src/main/scala/shapeless/singletons.scala#L230
or in our case simply
mkAttributedQualifier(subSymbol.asClass.toType)
but their mkAttributedQualifier also uses downcasting to compiler internals and the tree obtained is like Seq(A.this.B, A.this.C).
Also
Ident(subSymbol.companionSymbol)
seems to work (the tree is Seq(B, C)), but .companionSymbol is deprecated (the Scaladoc says it "may return unexpected results for module classes", i.e. for objects).
Following an approach similar to the one used by @MateuszKubuszok in his library enumz, you can also try:
val objectName = subSymbol.fullName
c.typecheck(c.parse(objectName))
and the tree is Seq(App.A.B, App.A.C).
Finally, if you're interested in the tree Seq(B, C) (and not some more complicated tree), it seems you can replace
Ident(subSymbol.asInstanceOf[scala.reflect.internal.Symbols#Symbol].sourceModule.asInstanceOf[Symbol/*ModuleSymbol*/])
with more conventional
Ident(subSymbol.owner.info.decl(subSymbol.name.toTermName)/*.asModule*/)
or (the shortest option)
Ident(subSymbol.asClass.module/*.asModule*/)
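For context, here is a minimal sketch of how the whole macro might look using that last option (the IsEnum shape and the materializer wiring are assumptions, not from the question):
import scala.language.experimental.macros
import scala.reflect.macros.blackbox

trait IsEnum[T] { def values: Seq[T] }

object IsEnum {
  implicit def materialize[T]: IsEnum[T] = macro impl[T]

  def impl[T: c.WeakTypeTag](c: blackbox.Context): c.Tree = {
    import c.universe._
    val tpe = weakTypeOf[T]
    val cls = tpe.typeSymbol.asClass
    if (!cls.isSealed) c.abort(c.enclosingPosition, s"$tpe must be a sealed trait")
    // one Ident per case object, built with the shortest option above
    val idents = cls.knownDirectSubclasses.toSeq.map(s => Ident(s.asClass.module))
    q"new IsEnum[$tpe] { val values = _root_.scala.Seq[$tpe](..$idents) }"
  }
}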
Quasiquotes simplify many things when writing macros in Scala. However, I noticed that macros containing quasiquotes can be recompiled every time a compilation in SBT is triggered, even though neither the macro implementation nor any of its call sites have changed and need recompilation.
This doesn't seem to happen if the code in the quasiquotes is fairly simple; it seems to happen only if there's a dependency on another class. I noticed that rewriting everything with reify seems to solve the recompilation problem, but I can't manage to rewrite the last part without quasiquotes...
My macro avoids reflection on startup by creating wrapper functions during compilation.
I have the following classes:
object ExportedFunction {

  def apply[R: Manifest](f: Function0[R], fd: FunctionDescription): ExportedExcelFunction =
    new ExcelFunction0[R] {
      def apply: R = f()
      val functionDescription = fd
    }

  def apply[T1: Manifest, R: Manifest](f: Function1[T1, R], fd: FunctionDescription): ExportedExcelFunction =
    new ExcelFunction1[T1, R] {
      def apply(t1: T1): R = f(t1)
      val functionDescription = fd
    }

  // ... and so on, up to Function17 ...
}
I then analyze an object and export any member function using the described interface like so:
def export(registrar: FunctionRegistrar,
           root: Object,
           <...more args...>) = macro exportImpl

def exportImpl(c: Context)(registrar: c.Expr[FunctionRegistrar],
                           root: c.Expr[Object],
                           <...>): c.Expr[Any] = {
  import c.universe._
  <... the following is simplified ...>
  root.typeSignature.members.flatMap {
    case x if x.isMethod =>
      val method = x.asMethod
      val callee = c.Expr(method)
      val desc = q"""FunctionDescription(<...result from reflective lookup...>)"""
      val export = q"ExportedFunction($callee _, $desc)"
      q"$registrar.+=({$export})"
I can rewrite the first and last lines with reify, but I can't manage to rewrite the second line; my best shot is with quasiquotes:
val export = reify {
  ...
  ExportedFunction(c.Expr(q"""$callee _"""), desc)
  ...
}.tree
But this results in:
overloaded method value apply with alternatives... cannot be applied to (c.Expr[Nothing], c.universe.Expr[FunctionDescription])
I think the compiler is missing the implicits, or maybe this code will only work for a function with a fixed number of arguments, since it needs to know at macro compile time how many arguments the method has? However, it works if everything is written using quasiquotes...
From the description of the SBT problem I can assume that you're using macro paradise for 2.10.x and are facing https://github.com/scalamacros/paradise/issues/11. I was planning to address this issue this week, so the fix should arrive really soon. In the meanwhile you could use a workaround described on the issue page.
As for the reify problem, not all quasiquotes can be rewritten using reify. Limitations such as the one you have faced here were a very strong motivator towards shifting our focus to quasiquotes in Scala 2.11.
For the record, these SBT settings (upgrading to a newer version) fixed it:
...
settings = Seq(
libraryDependencies += "org.scala-lang" % "scala-reflect" % scalaVersion.value,
libraryDependencies += "org.scalamacros" % "quasiquotes" % "2.0.0-M3" cross CrossVersion.full,
autoCompilerPlugins := true,
addCompilerPlugin("org.scalamacros" % "paradise" % "2.0.0-M3" cross CrossVersion.full)
)
I found one library for this, https://github.com/daltontf/scala-yaml, but it seems like not many developers use it and it's pretty outdated. Configgy (http://www.lag.net/configgy/) might also have been an option, if its link weren't dead.
I wonder, what is the most popular or de-facto library for working with YAML in Scala?
Here's an example of using the Jackson YAML databinding.
First, here's our sample document:
name: test
parameters:
  "VERSION": 0.0.1-SNAPSHOT
things:
  - colour: green
    priority: 128
  - colour: red
    priority: 64
Add these dependencies:
libraryDependencies ++= Seq(
  "com.fasterxml.jackson.core" % "jackson-core" % "2.1.1",
  "com.fasterxml.jackson.core" % "jackson-annotations" % "2.1.1",
  "com.fasterxml.jackson.core" % "jackson-databind" % "2.1.1",
  "com.fasterxml.jackson.dataformat" % "jackson-dataformat-yaml" % "2.1.1"
)
Here's our outermost class (Preconditions is a Guava-like check that raises an exception if the given field is not in the YAML):
import java.util.{List => JList, Map => JMap}
import scala.collection.JavaConversions._
import com.fasterxml.jackson.annotation.JsonProperty

class Sample(@JsonProperty("name") _name: String,
             @JsonProperty("parameters") _parameters: JMap[String, String],
             @JsonProperty("things") _things: JList[Thing]) {
  val name = Preconditions.checkNotNull(_name, "name cannot be null")
  val parameters: Map[String, String] = Preconditions.checkNotNull(_parameters, "parameters cannot be null").toMap
  val things: List[Thing] = Preconditions.checkNotNull(_things, "things cannot be null").toList
}
And here's the inner class:
import com.fasterxml.jackson.annotation.JsonProperty

class Thing(@JsonProperty("colour") _colour: String,
            @JsonProperty("priority") _priority: Int) {
  val colour = Preconditions.checkNotNull(_colour, "colour cannot be null")
  val priority = Preconditions.checkNotNull(_priority, "priority cannot be null")
}
Finally, here's how to instantiate it:
import java.io.FileReader
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory

val reader = new FileReader("sample.yaml")
val mapper = new ObjectMapper(new YAMLFactory())
val config: Sample = mapper.readValue(reader, classOf[Sample])
A little late to the party, but I think this method works in the most seamless way. This method has:
Automatic conversion to Scala collection types
Support for case classes
No need for boilerplate code like BeanProperty/JsonProperty
Uses Jackson-YAML & Jackson-Scala
Code:
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory
import com.fasterxml.jackson.module.scala.DefaultScalaModule
case class Prop(url: List[String])
// uses Jackson YAML for parsing; relies on SnakeYAML for low-level handling
val mapper: ObjectMapper = new ObjectMapper(new YAMLFactory())
// provides all of the Scala goodiness
mapper.registerModule(DefaultScalaModule)
val prop: Prop = mapper.readValue("url: [abc, def]", classOf[Prop])
// prints List(abc, def)
println(prop.url)
SnakeYAML is a high-quality, actively maintained YAML parser/renderer for Java. You can of course use it from Scala.
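For instance, a minimal untyped load from Scala (a sketch; SnakeYAML hands back java.util collections):
import org.yaml.snakeyaml.Yaml

val yaml = new Yaml()
// an untyped load returns java.util.Map / java.util.List instances
val doc: Object = yaml.load("name: test\nversion: 1")
val map = doc.asInstanceOf[java.util.Map[String, Any]]
println(map.get("name")) // test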
If you're already working with circe, you might be interested in circe-yaml which uses SnakeYAML to parse a YAML file and then converts the result to a circe AST.
I would love to see a library that could parse either JSON or YAML (or whatever -- pluggable) to a common AST and then construct Scala objects using typeclasses. Several JSON libraries work like that (and of course can also render JSON for objects using the same typeclasses), but I don't know of such a facility for YAML.
PS: There also appear to be a number of seemingly abandoned wrappers for SnakeYAML, namely HelicalYAML and yaml4s
And now we have circe-yaml https://github.com/circe/circe-yaml
SnakeYAML provides a Java API for parsing YAML and marshalling its structures into JVM classes. However, you might find circe's way of marshalling into a Scala ADT preferable -- using compile-time specification or derivation rather than runtime reflection. This enables you to parse YAML into Json, and use your existing (or circe's generic) Decoders to perform the ADT marshalling. You can also use circe's Encoder to obtain a Json, and print that to YAML using this library.
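A small sketch of that flow (the Config case class is illustrative; circe-generic derives its Decoder):
import io.circe.Error
import io.circe.generic.auto._
import io.circe.yaml.parser

case class Config(name: String, version: Int)

// parse YAML into a circe Json AST, then decode it into the case class
val config: Either[Error, Config] =
  parser.parse("name: test\nversion: 1").flatMap(_.as[Config])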
I came across moultingyaml today.
MoultingYAML is a Scala wrapper for SnakeYAML based on spray-json.
It looks quite familiar to me, having worked for years with spray-json. I think it might fit @sihil's need for a "compelling" and "mature" Scala YAML library.
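A taste of its spray-json-like API (a sketch based on the project's README; the Color model is illustrative):
import net.jcazevedo.moultingyaml._

case class Color(name: String, red: Int, green: Int, blue: Int)

object MyYamlProtocol extends DefaultYamlProtocol {
  implicit val colorFormat = yamlFormat4(Color)
}
import MyYamlProtocol._

val yaml = Color("CadetBlue", 95, 158, 160).toYaml // YAML AST
val color = yaml.convertTo[Color]                  // back to the case class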
For anyone else who runs across this answer and is looking for help and examples, I found a basic example that uses SnakeYAML. Hope it helps. Here's the code:
package yaml

import org.yaml.snakeyaml.Yaml
import org.yaml.snakeyaml.constructor.Constructor
import scala.beans.BeanProperty

object YamlBeanTest1 {

  val text = """
accountName: Ymail Account
username: USERNAME
password: PASSWORD
mailbox: INBOX
imapServerUrl: imap.mail.yahoo.com
protocol: imaps
minutesBetweenChecks: 1
usersOfInterest: [barney, betty, wilma]
"""

  def main(args: Array[String]): Unit = {
    val yaml = new Yaml(new Constructor(classOf[EmailAccount]))
    val e = yaml.load(text).asInstanceOf[EmailAccount]
    println(e)
  }
}

/**
 * With the SnakeYAML Constructor approach shown in the main method,
 * this class must have a no-args constructor.
 */
class EmailAccount {
  @BeanProperty var accountName: String = null
  @BeanProperty var username: String = null
  @BeanProperty var password: String = null
  @BeanProperty var mailbox: String = null
  @BeanProperty var imapServerUrl: String = null
  @BeanProperty var minutesBetweenChecks: Int = 0
  @BeanProperty var protocol: String = null
  @BeanProperty var usersOfInterest = new java.util.ArrayList[String]()

  override def toString: String =
    "acct (%s), user (%s), url (%s)".format(accountName, username, imapServerUrl)
}
So I don't have enough reputation to comment (41 atm) but I thought my experience was worth mentioning.
After reading this thread, I decided to try the Jackson YAML parser because I didn't want zero-argument constructors and it was much more readable. What I didn't realize was that there is no support for inheritance (merging), and there is only limited support for anchor references (isn't that the whole point of YAML??).
Merge is explained here.
Anchor references are explained here. While it appears that complex anchor references are supported, I could not get them to work in a simple case.
In my experience, JSON libraries for Scala are more mature and easier to use (none of the YAML approaches are enormously compelling or as mature as the JSON equivalents when it comes to dealing with case classes or writing custom serialisers and deserialisers).
As such, I prefer converting from YAML to JSON and then using a JSON library. This might sound slightly crazy, but it works really well provided that:
You are only working with YAML that is a subset of JSON (a great deal of use cases in my experience)
The path is not performance critical (as there is overhead in taking this approach)
The approach I use for converting from YAML to JSON leverages Jackson:
import com.fasterxml.jackson.core.util.DefaultPrettyPrinter
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory

val tree = new ObjectMapper(new YAMLFactory()).readTree(yamlTemplate)
val json = new ObjectMapper()
  .writer(new DefaultPrettyPrinter().withoutSpacesInObjectEntries())
  .writeValueAsString(tree)
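From there, the json string can be handed to whichever JSON library you prefer, for example with play-json:
val parsed = play.api.libs.json.Json.parse(json)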