Transforming a Scala case class into JSON

I have two case classes. The main one, Request, contains two maps.
The first map has a string for both key and value.
The second map has a string key and a value that is an instance of the second case class, KVMapList.
case class Request (var parameters:MutableMap[String, String] = MutableMap[String, String](), var deps:MutableMap[String, KVMapList] = MutableMap[String, KVMapList]())
case class KVMapList(kvMap:MutableMap[String, String], list:ListBuffer[MutableMap[String, String]])
The requirement is to transform Request into a JSON representation.
The following code is trying to do this:
import java.io.IOException

import com.fasterxml.jackson.annotation.JsonAutoDetect.Visibility
import com.fasterxml.jackson.annotation.PropertyAccessor
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper

def test(req: Request): String = {
  val mapper = new ObjectMapper() with ScalaObjectMapper
  mapper.setVisibility(PropertyAccessor.ALL, Visibility.ANY)
  var jsonInString: String = null
  try {
    jsonInString = mapper.writeValueAsString(req)
  }
  catch {
    case e: IOException =>
      e.printStackTrace()
  }
  jsonInString
}
This however is not working. Even when the Request class is populated, the output is :
{"parameters":{"underlying":{"some-value":""},"empty":false,"traversableAgain":true},"deps":{"sizeMapDefined":false,"empty":false,"traversableAgain":true}}
Using the JSON object mapper with the corresponding Java classes is straightforward, but I have not yet got it working in Scala. Any assistance is very much appreciated.

To some degree, Jackson is a bad old memory in Scala land. You should use a native Scala library for JSON processing, ideally one that is really good at compile-time derivation of JSON serializers, such as circe.
I'm aware this doesn't directly answer your question, but after using circe I would never go back to anything else.
import io.circe.generic.auto._
import io.circe.parser._
import io.circe.syntax._
val req = new Request(...)
val json = req.asJson.noSpaces
val reparsed = decode[Request](json)
On a different note, using mutable maps inside case classes is as non-idiomatic as it gets, and it should be quite trivial to implement immutable ops for your maps using the auto-generated copy method.
case class Request(parameters: Map[String, String]) {
  def +(key: String, value: String): Request =
    this.copy(parameters = parameters + (key -> value))
}
You should really avoid mutability wherever possible, and it looks like avoiding it here wouldn't be much work at all.
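For example, usage of the copy-based update above might look like this (a tiny sketch using the simplified single-field Request from the snippet; the keys and values are made up):
val base    = Request(Map("env" -> "prod"))
val updated = base + ("region", "eu-west-1") // returns a new Request; base is unchanged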

I am not sure what ScalaObjectMapper does here; it doesn't look like it is needed.
If you add mapper.registerModule(DefaultScalaModule) in the beginning, it should work ... assuming that by MutableMap you mean mutable.Map, and not some sort of home-made class (because, if you do, you'd have to provide a serializer for it yourself).
(DefaultScalaModule is in jackson-module-scala library. Just add it to your build if you don't already have it).
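Putting that together, a minimal sketch of the fixed helper might look like this (the helper name toJson is arbitrary; DefaultScalaModule comes from jackson-module-scala):
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule
import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper

def toJson(req: Request): String = {
  val mapper = new ObjectMapper() with ScalaObjectMapper
  mapper.registerModule(DefaultScalaModule) // teaches Jackson about Scala collections, Options, etc.
  mapper.writeValueAsString(req)
}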

Related

How to ignore a field from serializing when using circe in scala

I am using circe in Scala and have the following requirement:
Let's say I have a class like the one below and I want to prevent the password field from being serialised. Is there any way to let circe know that it should not serialise the password field?
Other libraries have annotations like @transient that prevent a field from being serialised; is there any such annotation in circe?
case class Employee(
  name: String,
  password: String
)
You could make a custom encoder that redacts some fields:
implicit val encodeEmployee: Encoder[Employee] = new Encoder[Employee] {
  final def apply(a: Employee): Json = Json.obj(
    ("name", Json.fromString(a.name)),
    ("password", Json.fromString("[REDACTED]"))
  )
}
LATER UPDATE
In order to avoid spelling out all the fields, contramap a semiauto/auto-derived encoder:
import io.circe.generic.semiauto._

implicit val encodeEmployee: Encoder[Employee] =
  deriveEncoder[Employee]
    .contramap[Employee](unredacted => unredacted.copy(password = "[REDACTED]"))
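With either encoder in scope, serialising an Employee redacts the password (a small usage sketch with made-up values):
import io.circe.syntax._

val json = Employee("Alice", "secret").asJson.noSpaces
// json == """{"name":"Alice","password":"[REDACTED]"}"""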
Although @gatear's answer is useful, it doesn't actually answer the question.
Unfortunately circe (at least up to version 0.14.2) does not have an annotation to ignore fields. So far there is only a single annotation (@JsonKey), and it is used to rename fields.
In order to ignore a field when serialising (which circe calls encoding), you can omit that field in the Encoder implementation.
So instead of including the password field:
implicit val employeeEncoder: Encoder[Employee] =
Encoder.forProduct2("name", "password")(employee => (employee.name, employee.password))
you omit it:
implicit val employeeEncoder: Encoder[Employee] =
  Encoder.forProduct1("name")(employee => employee.name)
Alternatively what I've been using is creating a smaller case class which only includes the fields I'm interested in. Then I let Circe's automatic derivation kick in with io.circe.generic.auto._:
import io.circe.generic.auto._
import io.circe.syntax._
case class EmployeeToEncode(name: String)
// Then given an employee object:
EmployeeToEncode(employee.name).asJson
implicit val encodeEmployee: Encoder[Employee] =
  deriveEncoder[Employee].mapJsonObject(_.remove("password"))

No instance of play.api.libs.json.Format is available for akka.actor.typed.ActorRef[org.knoldus.eventSourcing.UserState.Confirmation]

No instance of play.api.libs.json.Format is available for akka.actor.typed.ActorRef[org.knoldus.eventSourcing.UserState.Confirmation] in the implicit scope (Hint: if declared in the same file, make sure it's declared before)
[error] implicit val userCommand: Format[AddUserCommand] = Json.format
I am getting this error even though I have defined an implicit instance of Json Format for AddUserCommand.
Here is my code:
trait UserCommand extends CommandSerializable

object AddUserCommand {
  implicit val format: Format[AddUserCommand] = Json.format[AddUserCommand]
}

final case class AddUserCommand(user: User, reply: ActorRef[Confirmation]) extends UserCommand
Can anyone please help me with this error and how to solve it?
As Gael noted, you need to provide a Format for ActorRef[Confirmation]. The complication is that the natural serialization, using the ActorRefResolver, requires that an ExtendedActorSystem be present, which means that the usual approaches to defining a Format in a companion object won't quite work.
Note that because of the way Lagom does dependency injection, this approach doesn't really work in Lagom: commands in Lagom basically can't use Play JSON.
import akka.actor.ExtendedActorSystem
import akka.actor.typed.{ActorRef, ActorRefResolver}
import akka.actor.typed.scaladsl.adapter.ClassicActorSystemOps
import play.api.libs.json._

class PlayJsonActorRefFormat(system: ExtendedActorSystem) {
  def reads[A] = new Reads[ActorRef[A]] {
    def reads(jsv: JsValue): JsResult[ActorRef[A]] =
      jsv match {
        case JsString(s) => JsSuccess(ActorRefResolver(system.toTyped).resolveActorRef(s))
        case _ => JsError(Seq(JsPath() -> Seq(JsonValidationError(Seq("ActorRefs are strings"))))) // hopefully parenthesized that right...
      }
  }

  def writes[A] = new Writes[ActorRef[A]] {
    def writes(a: ActorRef[A]): JsValue = JsString(ActorRefResolver(system.toTyped).toSerializationFormat(a))
  }

  def format[A] = Format[ActorRef[A]](reads, writes)
}
You can then define a format for AddUserCommand as
object AddUserCommand {
  def format(arf: PlayJsonActorRefFormat): Format[AddUserCommand] = {
    implicit def arfmt[A]: Format[ActorRef[A]] = arf.format
    Json.format[AddUserCommand]
  }
}
Since you're presumably using JSON to serialize the messages sent around a cluster (otherwise, the ActorRef shouldn't be leaking out like this), you would then construct an instance of the format in your Akka Serializer implementation.
(NB: I've only done this with Circe, not Play JSON, but the basic approach is common)
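To sketch that wiring (the serializer class name, identifier, and byte-level encoding here are assumptions; you would still need to bind it in your Akka serialization config):
import java.nio.charset.StandardCharsets

import akka.actor.ExtendedActorSystem
import akka.serialization.SerializerWithStringManifest
import play.api.libs.json._

// Hypothetical serializer wiring the PlayJsonActorRefFormat into Akka serialization.
class AddUserCommandSerializer(system: ExtendedActorSystem) extends SerializerWithStringManifest {
  private val arf = new PlayJsonActorRefFormat(system)
  private implicit val addUserCommandFormat: Format[AddUserCommand] = AddUserCommand.format(arf)

  override def identifier: Int = 100042 // must be unique and stable across the cluster
  override def manifest(o: AnyRef): String = o.getClass.getName

  override def toBinary(o: AnyRef): Array[Byte] = o match {
    case cmd: AddUserCommand =>
      Json.stringify(Json.toJson(cmd)).getBytes(StandardCharsets.UTF_8)
    case other =>
      throw new IllegalArgumentException(s"Cannot serialize ${other.getClass.getName}")
  }

  override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
    Json.parse(new String(bytes, StandardCharsets.UTF_8)).as[AddUserCommand]
}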
The error says that it cannot construct a Format for AddUserCommand because there's no Format for ActorRef[Confirmation].
When using Json.format[X], all the members of the case class X must have a Format defined.
In your case, you probably don't want to define a formatter for this case class (serializing an ActorRef doesn't make much sense) but rather build another case class with data only.
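For instance, a data-only command (the name AddUserData is hypothetical) could look like:
final case class AddUserData(user: User) extends UserCommand

object AddUserData {
  implicit val format: Format[AddUserData] = Json.format[AddUserData]
}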
Edit: See Levi's answer on how to provide a formatter for ActorRef if you really do want to send the actor reference out there.

How to create an Encoder for Scala collection (to implement custom Aggregator)?

Spark 2.3.0 with Scala 2.11. I'm implementing a custom Aggregator according to the docs here. The aggregator requires 3 types for input, buffer, and output.
My aggregator has to act upon all previous rows in the window so I declared it like this:
case class Foo(...)
object MyAggregator extends Aggregator[Foo, ListBuffer[Foo], Boolean] {
  // other override methods
  override def bufferEncoder: Encoder[ListBuffer[Mod]] = ???
}
One of the override methods is supposed to return the encoder for the buffer type, which in this case is a ListBuffer. I can't find any suitable encoder for org.apache.spark.sql.Encoders nor any other way to encode this so I don't know what to return here.
I thought of creating a new case class which has a single property of type ListBuffer[Foo] and using that as my buffer class, and then using Encoders.product on that, but I am not sure if that is necessary or if there is something else I am missing. Thanks for any tips.
You should just let Spark SQL do its work and find the proper encoder using ExpressionEncoder as follows:
scala> spark.version
res0: String = 2.3.0

scala> case class Mod(id: Long)

scala> import org.apache.spark.sql.Encoder
scala> import scala.collection.mutable.ListBuffer
scala> import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

scala> val enc: Encoder[ListBuffer[Mod]] = ExpressionEncoder()
enc: org.apache.spark.sql.Encoder[scala.collection.mutable.ListBuffer[Mod]] = class[value[0]: array<struct<id:bigint>>]
I cannot see anything in org.apache.spark.sql.Encoders that could be used to directly encode a ListBuffer, or for that matter even a List.
One option seems to be putting it in a case class, as you suggested:
import org.apache.spark.sql.Encoders
case class Foo(field: String)
case class Wrapper(lb: scala.collection.mutable.ListBuffer[Foo])
Encoders.product[Wrapper]
Another option could be to use kryo:
Encoders.kryo[scala.collection.mutable.ListBuffer[Foo]]
Or finally you could look at ExpressionEncoders, which extend Encoder:
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
ExpressionEncoder[scala.collection.mutable.ListBuffer[Foo]]
This is the best solution, as it keeps everything transparent to catalyst and therefore allows it to do all of its wonderful optimisations.
One thing I noticed whilst having a play:
ExpressionEncoder[scala.collection.mutable.ListBuffer[Foo]].schema == ExpressionEncoder[List[Foo]].schema
I haven't tested any of the above whilst executing aggregations, so there may be runtime issues. Hope this is helpful.
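Putting the pieces together, a rough sketch of the aggregator from the question using ExpressionEncoder for its buffer might look like this (the single-field Foo and the zero/reduce/merge/finish bodies are placeholder assumptions for illustration):
import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
import org.apache.spark.sql.expressions.Aggregator

import scala.collection.mutable.ListBuffer

case class Foo(id: Long) // hypothetical single-field Foo

// Collects every input row into the buffer and checks a condition at the end.
object MyAggregator extends Aggregator[Foo, ListBuffer[Foo], Boolean] {
  override def zero: ListBuffer[Foo] = ListBuffer.empty[Foo]
  override def reduce(buffer: ListBuffer[Foo], row: Foo): ListBuffer[Foo] = buffer += row
  override def merge(b1: ListBuffer[Foo], b2: ListBuffer[Foo]): ListBuffer[Foo] = b1 ++= b2
  override def finish(reduction: ListBuffer[Foo]): Boolean = reduction.nonEmpty // placeholder logic
  override def bufferEncoder: Encoder[ListBuffer[Foo]] = ExpressionEncoder()
  override def outputEncoder: Encoder[Boolean] = Encoders.scalaBoolean
}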

ReactiveMongo w/ Play Scala

New to Play, Scala, and ReactiveMongo, and the documentation is not very noob friendly.
I see the Bulk Insert section in the documentation, but I don't know why it isn't shown contained in a method.
I am expecting a request with JSON data containing multiple objects in it. How do I set up a bulk insert that handles multiple inserts and can return errors?
For example, my single insert method is as follows:
def createFromJson = Action(parse.json) { request =>
  try {
    val person = request.body.validate[Person].get
    val mongoResult = Await.result(collection.insert(person), Duration.apply(20, "seconds"))
    if (mongoResult.hasErrors) throw new Exception(mongoResult.errmsg.getOrElse("something unknown"))
    Created(Json.toJson(person))
  }
  catch {
    case e: Exception => BadRequest(e.getMessage)
  }
}
Here is a full example of how you can do it:
class ExampleController @Inject()(database: DefaultDB) extends Controller {

  case class Person(firstName: String, lastName: String)

  val personCollection: BSONCollection = database.collection("persons")

  implicit val PersonJsonReader: Reads[Person] = Json.reads[Person]
  implicit val PersonSeqJsonReader: Reads[Seq[Person]] = Reads.seq(PersonJsonReader)
  implicit val PersonJsonWriter: Writes[Person] = Json.writes[Person]
  implicit val PersonSeqJsonWriter: Writes[Seq[Person]] = Writes.seq(PersonJsonWriter)
  implicit val PersonBsonWriter = Macros.writer[Person]

  def insertMultiple = Action.async(parse.json) { implicit request =>
    val validationResult: JsResult[Seq[Person]] = request.body.validate[Seq[Person]]
    validationResult.fold(
      invalidValidationResult => Future.successful(BadRequest),
      // [1]
      validValidationResult => {
        val bulkDocs = validValidationResult.
          map(implicitly[personCollection.ImplicitlyDocumentProducer](_))
        personCollection.bulkInsert(ordered = true)(bulkDocs: _*).map {
          case insertResult if insertResult.ok =>
            Created(Json.toJson(validationResult.get))
          case insertResult =>
            InternalServerError
        }
      }
    )
  }
}
The meat of it all sits in the lines after [1]. validValidationResult is a variable of type Seq[Person] and contains valid data at this point. That's what we want to insert into the database.
To do that we need to prepare the documents by mapping each document through the ImplicitlyDocumentProducer of your target collection (here personCollection). That leaves you with bulkDocs of type Seq[personCollection.ImplicitlyDocumentProducer]. You can just use bulkInsert() with that:
personCollection.bulkInsert(ordered = true)(bulkDocs: _*)
We use _* here to splat the Seq, since bulkInsert() expects varargs and not a Seq. See this thread for more info about it. And that's basically it already.
The remaining code handles Play results and validates the received request body to make sure it contains valid data.
Here are a few general tips to work with play/reactivemongo/scala/futures:
Avoid Await.result. You basically never need it in production code. The idea behind futures is to perform non-blocking operations; making them blocking again with Await.result defeats the purpose. It can be useful for debugging or test code, but even then there are usually better ways to go about things. Scala futures (unlike Java ones) are very powerful and you can do a lot with them; see e.g. flatMap/map/filter/foreach/... in the Future scaladoc.
The above code makes use of exactly that: it uses Action.async instead of Action for the controller method. This means it has to return a Future[Result] instead of a Result, which is great, because ReactiveMongo returns Futures for all of its operations. So all you have to do is execute bulkInsert, which returns a Future, and use map() to turn the returned Future[MultiBulkWriteResult] into a Future[Result]. This results in no blocking, and Play can work with the returned future just fine.
Of course the above example can be improved a bit; I tried to keep it simple.
For instance, you should return proper error messages when returning BadRequest (request body validation failed) or InternalServerError (database write failed); you can get more info about the errors from invalidValidationResult and insertResult. And you could use Formats instead of that many Reads/Writes (and also use them for ReactiveMongo). Check the Play JSON documentation as well as the ReactiveMongo documentation for more info on that.
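To make the Await.result tip concrete, here is a sketch of the question's single-insert action rewritten in the same non-blocking style (it reuses the question's collection and Person and assumes an implicit ExecutionContext in scope):
def createFromJson = Action.async(parse.json) { request =>
  request.body.validate[Person].fold(
    errors => Future.successful(BadRequest(JsError.toJson(errors))),
    person =>
      collection.insert(person).map { writeResult =>
        if (writeResult.ok) Created(Json.toJson(person))
        else InternalServerError(writeResult.errmsg.getOrElse("something unknown"))
      }
  )
}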
Although the previous answer is correct, we can reduce the boilerplate by using a JSONCollection:
package controllers

import javax.inject._

import play.api.libs.json._
import play.api.mvc._
import play.modules.reactivemongo._
import reactivemongo.play.json.collection.{JSONCollection, _}
import utils.Errors

import scala.concurrent.{ExecutionContext, Future}

case class Person(name: String, age: Int)

object Person {
  implicit val formatter = Json.format[Person]
}

@Singleton
class PersonBulkController @Inject()(val reactiveMongoApi: ReactiveMongoApi)(implicit exec: ExecutionContext) extends Controller with MongoController with ReactiveMongoComponents {

  val persons: JSONCollection = db.collection[JSONCollection]("person")

  def createBulkFromJson = Action.async(parse.json) { request =>
    Json.fromJson[Seq[Person]](request.body) match {
      case JsSuccess(newPersons, _) =>
        val documents = newPersons.map(implicitly[persons.ImplicitlyDocumentProducer](_))
        persons.bulkInsert(ordered = true)(documents: _*).map { multiResult =>
          Created(s"Created ${multiResult.n} persons")
        }
      case JsError(errors) =>
        Future.successful(BadRequest("Could not build an array of persons from the json provided. " + errors))
    }
  }
}
In build.sbt
libraryDependencies ++= Seq(
"org.reactivemongo" %% "play2-reactivemongo" % "0.11.12"
)
Tested with Play 2.5.1, although it should compile with previous versions of Play.
FYI, as previous answers said, there are two ways to manipulate JSON data: use ReactiveMongo module + Play JSON library, or use ReactiveMongo's BSON library.
The documentation of ReactiveMongo module for Play Framework is available online. You can find code examples there.

Scala pickling: how?

I'm trying to use "pickling" serialization in Scala, and I keep seeing the same example demonstrating it:
import scala.pickling._
import json._
val pckl = List(1, 2, 3, 4).pickle
Unpickling is just as easy as pickling:
val lst = pckl.unpickle[List[Int]]
This example raises some questions. First of all, it skips converting the object to a string. Apparently you need to call pckl.value to get the JSON string representation.
Unpickling is even more confusing. Deserialization is the act of turning a string (or bytes) into an object. How can this "example" demonstrate deserialization if there is no string/binary representation of the object?
So, how do I deserialize a simple object with the pickling library?
Use the type system and case classes to achieve your goals. You can unpickle to some supertype in your hierarchy (up to and including AnyRef). Here is an example:
trait Zero
case class One(a: Int) extends Zero
case class Two(s: String) extends Zero

object Test extends App {
  import scala.pickling._
  import json._

  // String that can be sent down a wire
  val wire: String = Two("abc").pickle.value

  // On the other side, just use a case class
  wire.unpickle[Zero] match {
    case One(a) => println(a)
    case Two(s) => println(s)
    case unknown => println(unknown.getClass.getCanonicalName)
  }
}
Ok, I think I understood it.
import scala.pickling._
import json._
var str = Array(1,2,3).pickle.value // this is JSON string
println(str)
val x = str.unpickle[Array[Int]] // unpickle from string
will produce JSON string:
{
  "tpe": "scala.Array[scala.Int]",
  "value": [
    1,
    2,
    3
  ]
}
So, the same way we pickle any type, we can unpickle a string. The type of serialization is controlled by the implicit format brought in by import json._, which can be replaced by import binary._.
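For example, switching the import gives a binary pickle instead (a small sketch, assuming the binary format exposes the same pickle/unpickle API as the JSON one):
import scala.pickling._
import binary._

val pckl = Array(1, 2, 3).pickle     // a binary pickle instead of a JSON-based one
val bytes: Array[Byte] = pckl.value  // raw bytes that can be sent down a wire
val restored = pckl.unpickle[Array[Int]]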
It does look like you will be starting with a pickle to unpickle to a case class. But the JSON string can be fed to the JSONPickle class to get the starting pickle.
Here's an example based on their array-json test
package so
import scala.pickling._
import json._
case class C(arr: Array[Int]) { override def toString = s"""C(${arr.mkString("[", ",", "]")})""" }
object PickleTester extends App {
val json = """{"arr":[ 1, 2, 3 ]}"""
val cPickle = JSONPickle( json )
val unpickledC: C = cPickle.unpickle[C]
println( s"$unpickledC, arr.sum = ${unpickledC.arr.sum}" )
}
The output printed is:
C([1,2,3]), arr.sum = 6
I was able to drop the "tpe" from the test, as well as the .stripMargin.trim on the input JSON. It works all in one line, but I thought it might be more apparent split up. It's unclear to me whether that "tpe" from the test is supposed to provide a measure of type safety for the incoming JSON.
Looks like the only other class they support for pickling is a BinaryPickle unless you want to roll your own. The latest scala-pickling snapshot jar requires quasiquotes to compile the code in this answer.
I tried something more complicated this morning and discovered that the "tpe" is required for non-primitives in the incoming JSON, which points out that the serialized string really must be compatible with the pickler (which I mixed into the above code):
case class J(a: Option[Boolean], b: Option[String], c: Option[Int]) { override def toString = s"J($a, $b, $c)" }
...
val jJson = """{"a": {"tpe": "scala.None.type"},
              | "b":{"tpe": "scala.Some[java.lang.String]","x":"donut"},
              | "c":{"tpe": "scala.Some[scala.Int]","x":47}}"""
val jPickle = JSONPickle( jJson.stripMargin.trim )
val unpickledJ: J = jPickle.unpickle[J]
println( s"$unpickledJ" )
...
where, naturally, I had to use .value on a J(None, Some("donut"), Some(47)) to figure out how to create the jJson input value and prevent the unpickling from throwing an exception.
The output for J is like:
J(None, Some(donut), Some(47))
Looking at this test, it appears that if the incoming JSON is all primitives or case classes (or combinations of them) then the JSONPickle magic works, but some other classes, like Options, require extra "tpe" type information to unpickle correctly.