I need to write a BSON serializer for my case class Filter, which looks like this:
case class Filter(
    name: String,
    `type`: LogicType,
    value: JsArray,
    operator: OperatorType)
Within my serializer class I already have the following handlers:
implicit val LogicTypeHandler: BSONHandler[LogicType] = EnumHandler.handler(LogicType)
implicit val OperatorTypeHandler: BSONHandler[OperatorType] = EnumHandler.handler(OperatorType)
// >>> Missing BSON serializer for JsArray
implicit val FilterHandler: BSONDocumentHandler[Filter] = Macros.handler[Filter]
The issue is that I need to save "value" as a JsArray, and I will only know what it contains later, at runtime.
In my case I'm expecting it to be either an array of Strings or an array of Ints.
The question here is how to implement a BSON serializer for this JsArray. I'm getting this error:
Implicit not found for 'Filter.value': reactivemongo.api.bson.BSONReader[play.api.libs.json.JsArray]
Additional info:
"org.reactivemongo" %% "play2-reactivemongo" % "1.0.0-play28"
"com.typesafe.play" % "sbt-plugin" % "2.8.5"
Please advise.
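One way to resolve this is to define the missing handler yourself. Below is a minimal sketch of a custom BSONHandler[JsArray] that only covers the String/Int elements described above; the conversion rules and error handling are assumptions to adjust for your data (ReactiveMongo's play-json compatibility module may also offer ready-made converters):
import scala.util.{ Failure, Try }
import play.api.libs.json._
import reactivemongo.api.bson._

// Sketch: translate JsArray <-> BSONArray element by element.
implicit object JsArrayHandler extends BSONHandler[JsArray] {
  def readTry(bson: BSONValue): Try[JsArray] = bson match {
    case arr: BSONArray =>
      Try(JsArray(arr.values.map {
        case BSONString(s)  => JsString(s)
        case BSONInteger(i) => JsNumber(i)
        case BSONLong(l)    => JsNumber(l)
        case BSONDouble(d)  => JsNumber(d)
        case other          => sys.error(s"Unsupported BSON element: $other")
      }))
    case other =>
      Failure(new IllegalArgumentException(s"Expected BSONArray, got $other"))
  }

  def writeTry(arr: JsArray): Try[BSONValue] =
    Try(BSONArray(arr.value.map {
      case JsString(s)                 => BSONString(s)
      case JsNumber(n) if n.isValidInt => BSONInteger(n.toIntExact)
      case JsNumber(n)                 => BSONDouble(n.toDouble)
      case other                       => sys.error(s"Unsupported JSON element: $other")
    }))
}
With a handler like this in scope before FilterHandler, the Macros.handler[Filter] derivation should be able to find a reader and writer for the value field.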
This question is based upon Scala 2.12.12
scalaVersion := "2.12.12"
using play-json
"com.typesafe.play" %% "play-json" % "2.9.1"
If I have a Json object that looks like this:
{
"UpperCaseKey": "some value",
"AnotherUpperCaseKey": "some other value"
}
I know I can create a case class like so:
case class Yuck(UpperCaseKey: String, AnotherUpperCaseKey: String)
and follow that up with this chaser:
implicit val jsYuck = Json.format[Yuck]
and that will, of course, give me both reads[Yuck] and writes[Yuck] to and from Json.
I'm asking this because I have a use case where I don't get to decide the case of the keys: I'm being handed a Json object full of keys that start with an uppercase letter.
In this use case I will have to read and convert millions of them, so performance is a concern.
I've looked into @JsonAnnotations and Scala's transformers. The former doesn't seem to have much documentation for field-level use in Scala, and the latter seems to be a lot of boilerplate for something that might be very simple another way, if only I knew how...
Bear in mind as you answer this that some keys will be named like this:
XXXYyyyyZzzzzz
So the predefined Snake/Camel case conversions will not work.
Writing a custom conversion seems to be an option, yet I'm unsure how to do that in Scala.
Is there a way to arbitrarily request that the Json read will take the key "XXXYyyyZzzz" and match it to a field labeled "xxxYyyyZzzz" in a Scala case class?
Just to be clear, I may also need to convert, or at least know how to, a Json key named "AbCdEf" into a field labeled "fghi".
Just use the provided JsonNaming.PascalCase:
case class Yuck(
  upperCaseKey: String,
  anotherUpperCaseKey: String)

object Yuck {
  import play.api.libs.json._

  implicit val jsonFormat: OFormat[Yuck] = {
    implicit val cfg = JsonConfiguration(naming = JsonNaming.PascalCase)
    Json.format
  }
}
play.api.libs.json.Json.parse("""{
"UpperCaseKey": "some value",
"AnotherUpperCaseKey": "some other value"
}""").validate[Yuck]
// => JsSuccess(Yuck(some value,some other value),)
play.api.libs.json.Json.toJson(Yuck(
upperCaseKey = "foo",
anotherUpperCaseKey = "bar"))
// => JsValue = {"UpperCaseKey":"foo","AnotherUpperCaseKey":"bar"}
I think the only way play-json supports such a scenario is by defining your own Format.
Let's assume we have:
case class Yuck(xxxYyyyZzzz: String, fghi: String)
So we can define Format on the companion object:
object Yuck {
  import play.api.libs.json._
  import play.api.libs.functional.syntax._

  implicit val format: Format[Yuck] = (
    (__ \ "XXXYyyyZzzz").format[String] and
    (__ \ "AbCdEf").format[String]
  )(Yuck.apply(_, _), yuck => (yuck.xxxYyyyZzzz, yuck.fghi))
}
Then the following:
val jsonString = """{ "XXXYyyyZzzz": "first value", "AbCdEf": "second value" }"""
val yuck = Json.parse(jsonString).validate[Yuck]
println(yuck)
yuck.map(yuckResult => Json.toJson(yuckResult)).foreach(println)
Will output:
JsSuccess(Yuck(first value,second value),)
{"XXXYyyyZzzz":"first value","AbCdEf":"second value"}
As we can see, XXXYyyyZzzz was mapped into xxxYyyyZzzz and AbCdEf into fghi.
Code run at Scastie.
Another option you have is to use JsonNaming, as @cchantep suggested in the comment. If you define:
object Yuck {
  import play.api.libs.json._

  val keysMap = Map("xxxYyyyZzzz" -> "XXXYyyyZzzz", "fghi" -> "AbCdEf")

  implicit val config = JsonConfiguration(JsonNaming(keysMap))
  implicit val format = Json.format[Yuck]
}
Running the same code will output the same. Code run at Scastie.
I am using the json4s library in my Scala program.
My build.sbt looks like:
libraryDependencies ++= Seq(
"org.json4s" % "json4s-native_2.11" % "3.3.0"
)
In my code, I have a function:
import org.json4s._
import org.json4s.native.JsonMethods._
import org.json4s.JValue
class Foo {
  def parse(content: String) = {
    val json = parse(content)
  }
}
but the IDE complains "Recursive method parse needs result type"
The Scala compiler usually infers the return type of methods based on their implementations, but it has trouble inferring the type of recursive methods.
The message recursive method parse needs result type is due to this shortcoming. Your def parse(content: String) recurses by calling parse(content). This makes the method recursive (infinitely so, but I'm assuming you were planning on changing it later). In order for it to compile, you'll need to explicitly state the return type, e.g. def parse(content: String): Unit.
I'm going to take a further guess and say that there is a parse method being imported from either json4s or JsonMethods, which is shadowed by your own parse method. If you actually want to call JsonMethods.parse, you'll need to qualify it as JsonMethods.parse to resolve the ambiguity.
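Putting both fixes together, a minimal corrected version might look like this (assuming the intent was to delegate to json4s's parse):
import org.json4s._
import org.json4s.native.JsonMethods

class Foo {
  // Qualified call, so it no longer refers to this method,
  // plus an explicit return type.
  def parse(content: String): JValue =
    JsonMethods.parse(content)
}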
I am using the Liftweb JSON converter and got it working by including the dependency in build.sbt like this:
"net.liftweb" %% "lift-json" % "2.6.2"
This all worked until I added Enumerations.
I can see here that Enumerations are supported, and you should do something like this:
// Scala enums
implicit val formats = net.liftweb.json.DefaultFormats + new EnumSerializer(MyEnum)
But the problem is that in my environment the net.liftweb.json.ext package is not recognized; this is the package where EnumSerializer lives.
There is a separate extensions lib that you need to include. Adding an extra line like:
"net.liftweb" %% "lift-json-ext" % "2.6.2"
should do the trick.
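With that dependency in place, the import should resolve. A minimal sketch, assuming MyEnum is a plain scala.Enumeration:
import net.liftweb.json._
import net.liftweb.json.ext.EnumSerializer

object MyEnum extends Enumeration {
  val Foo, Bar = Value
}

object JsonSupport {
  // Adds (de)serialisation support for MyEnum values.
  implicit val formats: Formats = DefaultFormats + new EnumSerializer(MyEnum)
}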
I had an enumeration that was generated from the gRPC proto, and in that case the EnumSerializer didn't work for me, so I created a custom serializer, which worked great.
case object GrpcTimeUnitSerializer extends CustomSerializer[TimeUnit](format => (
  {
    case JString(tu) => TimeUnit.fromName(tu.toUpperCase).get
    case JNull => throw new GrpcServiceException(Status.INTERNAL.withDescription("Not allowed null value for the type TimeUnit."))
  },
  {
    case tu: TimeUnit => JString(tu.toString)
  }
))
And here is the DefaultFormats definition:
implicit val formats: Formats = DefaultFormats + GrpcTimeUnitSerializer
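For completeness, here is how extraction with those formats might look; Schedule and its fields are hypothetical names, and TimeUnit is the gRPC-generated enum from above:
import org.json4s._
import org.json4s.native.JsonMethods._

// Hypothetical payload type carrying the gRPC enum field.
case class Schedule(interval: Int, unit: TimeUnit)

// Relies on the implicit formats defined above being in scope.
val schedule = parse("""{"interval": 5, "unit": "SECONDS"}""").extract[Schedule]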
I found one library for this, https://github.com/daltontf/scala-yaml, but it seems like not many developers use it and it's pretty outdated. There might also have been http://www.lag.net/configgy/, but the link is dead.
I wonder, what is the most popular or de-facto library for working with YAML in Scala?
Here's an example of using the Jackson YAML databinding.
First, here's our sample document:
name: test
parameters:
  "VERSION": 0.0.1-SNAPSHOT
things:
  - colour: green
    priority: 128
  - colour: red
    priority: 64
Add these dependencies:
libraryDependencies ++= Seq(
"com.fasterxml.jackson.core" % "jackson-core" % "2.1.1",
"com.fasterxml.jackson.core" % "jackson-annotations" % "2.1.1",
"com.fasterxml.jackson.core" % "jackson-databind" % "2.1.1",
"com.fasterxml.jackson.dataformat" % "jackson-dataformat-yaml" % "2.1.1"
)
Here's our outermost class (Preconditions is a Guava-like check and raises an exception if said field is not in the YAML):
import java.util.{List => JList, Map => JMap}
import collection.JavaConversions._
import com.fasterxml.jackson.annotation.JsonProperty
class Sample(@JsonProperty("name") _name: String,
             @JsonProperty("parameters") _parameters: JMap[String, String],
             @JsonProperty("things") _things: JList[Thing]) {
  val name = Preconditions.checkNotNull(_name, "name cannot be null")
  val parameters: Map[String, String] = Preconditions.checkNotNull(_parameters, "parameters cannot be null").toMap
  val things: List[Thing] = Preconditions.checkNotNull(_things, "things cannot be null").toList
}
And here's the inner object:
import com.fasterxml.jackson.annotation.JsonProperty
class Thing(@JsonProperty("colour") _colour: String,
            @JsonProperty("priority") _priority: Int) {
  val colour = Preconditions.checkNotNull(_colour, "colour cannot be null")
  val priority = Preconditions.checkNotNull(_priority, "priority cannot be null")
}
Finally, here's how to instantiate it:
import java.io.FileReader
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory

val reader = new FileReader("sample.yaml")
val mapper = new ObjectMapper(new YAMLFactory())
val config: Sample = mapper.readValue(reader, classOf[Sample])
A little late to the party, but I think this method works in the most seamless way. It has:
Automatic conversion to Scala collection types
Uses case classes
No need for boilerplate like BeanProperty/JsonProperty
Uses Jackson-YAML & Jackson-Scala
Code:
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory
import com.fasterxml.jackson.module.scala.DefaultScalaModule
case class Prop(url: List[String])
// uses Jackson YAML for parsing; relies on SnakeYAML for low-level handling
val mapper: ObjectMapper = new ObjectMapper(new YAMLFactory())
// provides all of the Scala goodiness
mapper.registerModule(DefaultScalaModule)
val prop: Prop = mapper.readValue("url: [abc, def]", classOf[Prop])
// prints List(abc, def)
println(prop.url)
SnakeYAML is a high-quality, actively maintained YAML parser/renderer for Java. You can of course use it from Scala.
If you're already working with circe, you might be interested in circe-yaml which uses SnakeYAML to parse a YAML file and then converts the result to a circe AST.
I would love to see a library that could parse either JSON or YAML (or whatever -- pluggable) to a common AST and then construct Scala objects using typeclasses. Several JSON libraries work like that (and of course can also render JSON for objects using the same typeclasses), but I don't know of such a facility for YAML.
PS: There also appear to be a number of seemingly abandoned wrappers for SnakeYAML, namely HelicalYAML and yaml4s
And now we have circe-yaml https://github.com/circe/circe-yaml
SnakeYAML provides a Java API for parsing YAML and marshalling its structures into JVM classes. However, you might find circe's way of marshalling into a Scala ADT preferable -- using compile-time specification or derivation rather than runtime reflection. This enables you to parse YAML into Json, and use your existing (or circe's generic) Decoders to perform the ADT marshalling. You can also use circe's Encoder to obtain a Json, and print that to YAML using this library.
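For illustration, a minimal sketch of that pipeline, assuming circe-yaml and circe-generic are on the classpath (Config is a made-up target type):
import io.circe.generic.auto._
import io.circe.yaml.parser

// Hypothetical target type for the decoded YAML.
case class Config(name: String, port: Int)

// Parse YAML to a circe Json, then decode it like any other Json.
val decoded: Either[io.circe.Error, Config] =
  parser.parse("name: test\nport: 8080").flatMap(_.as[Config])
// => Right(Config(test,8080))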
I came across moultingyaml today.
MoultingYAML is a Scala wrapper for SnakeYAML based on spray-json.
It looks quite familiar to me, having worked for years with spray-json. I think it might fit @sihil's need for a "compelling" and "mature" Scala YAML library.
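A small usage sketch in the spray-json style, following the conventions from the project's readme (Server is a made-up type):
import net.jcazevedo.moultingyaml._
import net.jcazevedo.moultingyaml.DefaultYamlProtocol._

// Hypothetical type; yamlFormat2 derives the format, spray-json style.
case class Server(host: String, port: Int)
implicit val serverFormat: YamlFormat[Server] = yamlFormat2(Server)

val server = "host: localhost\nport: 8080".parseYaml.convertTo[Server]
val yaml = server.toYaml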
For anyone else who runs across this answer and is looking for help and examples: I found a basic example that uses SnakeYAML. Hope it helps. Here's the code:
package yaml

import org.yaml.snakeyaml.Yaml
import org.yaml.snakeyaml.constructor.Constructor
import scala.beans.BeanProperty // was scala.reflect.BeanProperty before Scala 2.11

object YamlBeanTest1 {

  val text = """
accountName: Ymail Account
username: USERNAME
password: PASSWORD
mailbox: INBOX
imapServerUrl: imap.mail.yahoo.com
protocol: imaps
minutesBetweenChecks: 1
usersOfInterest: [barney, betty, wilma]
"""

  def main(args: Array[String]): Unit = {
    val yaml = new Yaml(new Constructor(classOf[EmailAccount]))
    val e = yaml.load(text).asInstanceOf[EmailAccount]
    println(e)
  }
}

/**
 * With the SnakeYAML Constructor approach shown in the main method,
 * this class must have a no-args constructor.
 */
class EmailAccount {
  @BeanProperty var accountName: String = null
  @BeanProperty var username: String = null
  @BeanProperty var password: String = null
  @BeanProperty var mailbox: String = null
  @BeanProperty var imapServerUrl: String = null
  @BeanProperty var minutesBetweenChecks: Int = 0
  @BeanProperty var protocol: String = null
  @BeanProperty var usersOfInterest = new java.util.ArrayList[String]()

  override def toString: String =
    "acct (%s), user (%s), url (%s)".format(accountName, username, imapServerUrl)
}
So I don't have enough reputation to comment (41 atm) but I thought my experience was worth mentioning.
After reading this thread, I decided to try the Jackson YAML parser because I didn't want zero-argument constructors and it was much more readable. What I didn't realize was that there is no support for inheritance (merging) and only limited support for anchor references (isn't that the whole point of YAML??).
Merge is explained here.
Anchor references are explained here. While it appears that complex anchor references are supported, I could not get them to work in a simple case.
In my experience, JSON libraries for Scala are more mature and easier to use (none of the YAML approaches is as compelling or as mature as the JSON equivalents when it comes to dealing with case classes or writing custom serialisers and deserialisers).
As such I prefer converting from YAML to JSON and then using a JSON library. This might sound slightly crazy, but it works really well, provided that:
You are only working with YAML that is a subset of JSON (a great deal of use cases in my experience)
The path is not performance critical (as there is overhead in taking this approach)
The approach I use for converting from YAML to JSON leverages Jackson:
import com.fasterxml.jackson.core.util.DefaultPrettyPrinter
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory

// yamlTemplate is the input YAML string
val tree = new ObjectMapper(new YAMLFactory()).readTree(yamlTemplate)
val json = new ObjectMapper()
  .writer(new DefaultPrettyPrinter().withoutSpacesInObjectEntries())
  .writeValueAsString(tree)
I'd like to do something like
def getMeASammy() { println("getMeASammy") }
def getMeADrink() { println("getMeADrink") }
def getMeASub() { println("getMeASub") }
But, I don't want to explicitly type out the name of the function.
scala> def currentMethodName() : String = Thread.currentThread.getStackTrace()(2).getMethodName
currentMethodName: ()String
scala> def getMeASammy() = { println(currentMethodName()) }
getMeASammy: ()Unit
scala> getMeASammy()
getMeASammy
I wrote a simple library which uses a macro to get the name of the function. It might be a more elegant solution than Thread.currentThread.getStackTrace()(2).getMethodName if you don't mind an additional dependency:
libraryDependencies += "com.github.katlasik" %% "functionmeta" % "0.2.3" % "provided"
import io.functionmeta._
def getMeASammy() {
  println(functionName) // will print "getMeASammy"
}
It's somewhat revolting, but the only supported way to get the name of the current method from the JVM is to create an exception (but not throw it), and then read the method name out of the exception's stack trace.
def methodName: String = new Exception().getStackTrace()(1).getMethodName()
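A quick usage sketch, assuming the helper above (the stack frame at index 1 is whoever called methodName directly):
def getMeASammy(): Unit = println(methodName) // prints "getMeASammy"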
Nameof does exactly this, but at compile time, so there is no exception creation, stack-trace inspection, or other reflection-ish overhead.
Relevant example from the nameof readme:
import com.github.dwickern.macros.NameOf._
def startCalculation(value: Int): Unit = {
println(s"Entered ${nameOf(startCalculation _)}")
}
// compiles to:
def startCalculation(value: Int): Unit = {
println(s"Entered startCalculation")
}
In your case
def getMeASammy() = println(nameOf(getMeASammy _))
Also, nameOf works with classes, class attributes, types, etc.