Scala, Couchbase - convert AsyncN1qlQueryResult into a custom object

I have a case class with simple data:
case class MyClass(
  details: Details,
  names: List[String],
  id: String
)
I have created a couchbase query which should retrieve all documents from database:
val query = s"SELECT * from `docs`"
for {
docs<- bucket
.query(N1qlQuery.simple(query))
.flatMap((rows: AsyncN1qlQueryResult) => rows.rows())
.toList
.parse[F]
.map(_.asScala.toList)
} yield docs
parse[F] is a simple function that converts from Observable. The problem is that I get a type-mismatch error: found List[AsyncN1qlQueryResult], but required List[MyClass]. How should I convert AsyncN1qlQueryResult rows into MyClass objects?
I'm using Circe to parse documents.
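For reference, one way to close the type gap with Circe is to decode each row before collecting the list. A minimal sketch (assuming the Java SDK's AsyncN1qlQueryRow, whose value() returns a JsonObject, and a derived Circe decoder; note that SELECT * nests each document under the bucket name):

import io.circe.generic.auto._
import io.circe.parser.decode

// Each row's value() is a JsonObject whose toString is plain JSON, so it
// can be handed to Circe. `SELECT *` wraps each document under the bucket
// name, hence getObject("docs"). This helper is illustrative, not from the SDK.
def rowToMyClass(row: AsyncN1qlQueryRow): Either[io.circe.Error, MyClass] =
  decode[MyClass](row.value().getObject("docs").toString)

Mapping this over the row observable (before toList) yields decoded MyClass values instead of raw query results.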

I'm happy to report that there is now an early release of the native Couchbase Scala SDK available, which does include support for converting each row result of a N1QL query directly into your case class:
case class Address(line1: String)
case class User(name: String, age: Int, addresses: Seq[Address])

object User {
  // Define a Codec so the SDK knows how to convert User to/from JSON
  implicit val codec: Codec[User] = Codecs.codec[User]
}

val statement = """select * from `users`;"""

val rows: Try[Seq[User]] = cluster.query(statement)
  .map(result => result.rows.flatMap(row => row.contentAs[User].toOption))

rows match {
  case Success(rows: Seq[User]) =>
    rows.foreach(row => println(row))
  case Failure(err) =>
    println(s"Error: $err")
}
This is the blocking API. There are also APIs for getting the results as Futures, or as Flux/Mono types from reactive programming, so you have a lot of flexibility in how you retrieve the data.
You can see how to get started here: https://docs.couchbase.com/scala-sdk/1.0alpha/hello-world/start-using-sdk.html
Please note that this is an alpha release, to let the community get an idea of where we're heading with it and give them an opportunity to provide feedback. It shouldn't be used in production. The forums (https://forums.couchbase.com/) are the best place to raise any feedback you have.

Related

Creating a `Decoder` for arbitrary JSON

I am building a GraphQL endpoint for an API using Finch, Circe and Sangria. The variables that come through in a GraphQL query are basically an arbitrary JSON object (let's assume there's no nesting). So for example, in my test code as Strings, here are two examples:
val variables = List(
  "{\n \"foo\": 123\n}",
  "{\n \"foo\": \"bar\"\n}"
)
The Sangria API expects a type for these of Map[String, Any].
I've tried a bunch of ways but have so far been unable to write a Decoder for this in Circe. Any help appreciated.
The Sangria API expects a type for these of Map[String, Any]
This is not true. Variables for an execution in sangria can be of an arbitrary type T; the only requirement is that you have an instance of the InputUnmarshaller[T] type class for it. All marshalling integration libraries provide an instance of InputUnmarshaller for the corresponding JSON AST type.
This means that sangria-circe defines InputUnmarshaller[io.circe.Json] and you can import it with import sangria.marshalling.circe._.
Here is a small and self-contained example of how you can use circe Json as variables:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

import io.circe.Json
import sangria.schema._
import sangria.execution._
import sangria.macros._
import sangria.marshalling.circe._

val query =
  graphql"""
    query ($$foo: Int!, $$bar: Int!) {
      add(a: $$foo, b: $$bar)
    }
  """

val QueryType = ObjectType("Query", fields[Unit, Unit](
  Field("add", IntType,
    arguments = Argument("a", IntType) :: Argument("b", IntType) :: Nil,
    resolve = c ⇒ c.arg[Int]("a") + c.arg[Int]("b"))))

val schema = Schema(QueryType)

val vars = Json.obj(
  "foo" → Json.fromInt(123),
  "bar" → Json.fromInt(456))

val result: Future[Json] =
  Executor.execute(schema, query, variables = vars)
As you can see in this example, I used io.circe.Json as the variables for an execution. The execution would produce the following result JSON:
{
  "data": {
    "add": 579
  }
}
Here's a decoder that works.
import io.circe.{Decoder, DecodingFailure, Json}

type GraphQLVariables = Map[String, Any]

val graphQlVariablesDecoder: Decoder[GraphQLVariables] = Decoder.instance { c =>
  val variablesString = c.downField("variables").focus.flatMap(_.asString)
  val parsedVariables = variablesString.flatMap { str =>
    val variablesJsonObject = io.circe.jawn.parse(str).toOption.flatMap(_.asObject)
    variablesJsonObject.map(j => j.toMap.transform { (_, value: Json) =>
      val transformedValue: Any = value.fold(
        (),
        bool => bool,
        number => number.toDouble,
        str => str,
        array => array.map(_.toString),
        obj => obj.toMap.transform((s: String, json: Json) => json.toString)
      )
      transformedValue
    })
  }
  parsedVariables match {
    case None            => Left(DecodingFailure("Unable to decode GraphQL variables", c.history))
    case Some(variables) => Right(variables)
  }
}
We basically parse the JSON, turn it into a JsonObject, then transform the values within the object fairly simplistically.
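For illustration, here is how that decoder might be exercised on a raw body (a hypothetical payload; the decoder is passed explicitly since it isn't marked implicit above):

import io.circe.jawn

// Hypothetical request body whose "variables" field holds a JSON string.
val body = """{ "variables": "{ \"foo\": 123, \"bar\": \"baz\" }" }"""

val vars: Either[io.circe.Error, GraphQLVariables] =
  jawn.parse(body).flatMap(_.as(graphQlVariablesDecoder))
// Right(Map(foo -> 123.0, bar -> baz))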
Although the above answers work for the specific case of Sangria, I'm interested in the original question: What's the best approach in Circe (which generally assumes that all types are known up front) for dealing with arbitrary chunks of Json?
It's fairly common when encoding/decoding Json that 95% of the Json is specified, but the last 5% is some type of "additional properties" chunk which can be any Json object.
Solutions I've played with:
Encode/Decode the free-form chunk as Map[String,Any]. This means you'll have to introduce implicit encoders/decoders for Map[String, Any], which can be done, but is dangerous as that implicit can be pulled into places you didn't intend.
Encode/Decode the free-form chunk as Map[String, Json]. This is the easiest approach and is supported out of the box in Circe. But now the Json serialization logic has leaked out into your API (often you'll want to keep the Json stuff completely wrapped, so you can swap in other non-json formats later).
Encode/Decode to a String, where the string is required to be a valid Json chunk. At least you haven't locked your API into a specific Json library, but it doesn't feel very nice to have to ask your users to create Json chunks in this manual way.
Create a custom trait hierarchy to hold the data (e.g. sealed trait Free; case class FreeInt(i: Int) extends Free; case class FreeMap(m: Map[String, Free]) extends Free; ...). Now you can create specific encoders/decoders for it. But what you've really done is replicate the Json type hierarchy that already exists in Circe.
I'm leaning more toward option 3, since it's the most flexible and introduces the fewest dependencies in the API. But none of them is entirely satisfying. Any other ideas?
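For concreteness, here is a minimal sketch of option 2, which Circe supports with no extra machinery (Payload is a hypothetical type; semiauto derivation from circe-generic is assumed):

import io.circe.{Decoder, Encoder, Json}
import io.circe.generic.semiauto._

// The free-form 5% lives in a Map[String, Json] field; Circe already
// provides codecs for Json and for maps with String keys.
case class Payload(id: String, additionalProperties: Map[String, Json])

implicit val payloadDecoder: Decoder[Payload] = deriveDecoder
implicit val payloadEncoder: Encoder[Payload] = deriveEncoder

The trade-off, as noted above, is that io.circe.Json now appears in your API's types.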

Conversion from bson.Document to JsObject and applying reads on it with joda.DateTime

I am currently integrating part of our system with MongoDB and we decided to use the official scala driver for it.
We have a case class with joda.DateTime parameters:
case class Schema(templateId: Muid,
                  createdDate: DateTime,
                  updatedDate: DateTime,
                  columns: Seq[Column])
We also defined a Format for it:
implicit lazy val checklistSchemaFormat: Format[Schema] = (
  (__ \ "templateId").format[Muid] and
  (__ \ "createdDate").format[DateTime] and
  (__ \ "updatedDate").format[DateTime] and
  (__ \ "columns").format[Seq[Column]]
)(Schema.apply _, unlift(Schema.unapply))
When I serialize this object to JSON and write it to Mongo, createdDate and updatedDate get converted to Long (which is technically fine). This is how we do it:
val bsonDoc = Document(Json.toJson(schema).toString())
collection(DbViewSchemasCollectionName).replaceOne(filter, bsonDoc, new UpdateOptions().upsert(true)).subscribe(new Observer[UpdateResult] {
  override def onNext(result: UpdateResult): Unit = logger.info(s"Successfully updated checklist template schema with result: $result")
  override def onError(e: Throwable): Unit = logger.info(s"Failed to update checklist template schema with error: $e")
  override def onComplete(): Unit = {}
})
As a result, Mongo stores this kind of object:
{
  "_id": ObjectId("56fc4247eb3c740d31b04f05"),
  "templateId": "gaQP3JIB3ppJtro9rO9BAw",
  "createdDate": NumberLong(1459372615507),
  "updatedDate": NumberLong(1459372615507),
  "columns": [
    ...
  ]
}
Now, I am trying to read it like so:
collection(DbViewSchemasCollectionName).find(filter).first().head() map { document =>
  ChecklistSchema.checklistSchemaFormat reads Json.parse(document.toJson()) match {
    case JsSuccess(value, _) =>
      Some(value)
    case JsError(e) =>
      throw JsResultException(e)
  }
} recover { case e =>
  logger.info("ERROR " + e)
  None
}
And at this point the reads always fails, since createdDate and updatedDate now look like this:
"createdDate" : { "$numberLong" : "1459372615507" }, "updatedDate" :
{"$numberLong" : "1459372615507" }
How do I deal with this situation? Is there an easier conversion between bson.Document and JsObject? Or am I digging in completely the wrong direction?
Thanks,
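For reference, one direct way to attack the $numberLong mismatch is a custom Reads[DateTime] that accepts both shapes; a minimal sketch, assuming Play JSON and joda-time, wired in place of the default DateTime reads:

import org.joda.time.DateTime
import play.api.libs.json._

// Accepts either a plain JSON number (as written on the way in) or
// Mongo's extended-JSON form {"$numberLong": "1459372615507"}.
implicit val mongoDateTimeReads: Reads[DateTime] = Reads {
  case JsNumber(millis) =>
    JsSuccess(new DateTime(millis.toLong))
  case obj: JsObject =>
    (obj \ "$numberLong").validate[String].map(s => new DateTime(s.toLong))
  case other =>
    JsError(s"Cannot read DateTime from $other")
}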
You can use the following approach to resolve your issue.
First, I used json4s for reading and writing JSON to case classes.
Example:
import org.joda.time.DateTime
import org.json4s.DefaultFormats
import org.json4s.ext.JodaTimeSerializers
import org.json4s.jackson.Serialization.{read, write}

implicit val formats = DefaultFormats ++ JodaTimeSerializers.all

case class User(_id: Option[Int], username: String, firstName: String, createdDate: DateTime, updatedDate: DateTime)

// A small wrapper to convert a case class to a Document
def toBson[A <: AnyRef](x: A): Document = {
  val json = write[A](x)
  Document(json)
}

def today() = DateTime.now

val user = User(Some(212), "binkabir", "lacmalndl", today, today)
val bson = toBson(user)

usersCollection.insertOne(bson).subscribe((x: Completed) => println(x))

val f = usersCollection.find(equal("_id", 212)).toFuture()
f.map(_.head.toJson).map(x => read[User](x)).foreach(println)
The code above creates a User case class instance, converts it to a Document, saves it to the Mongo DB, queries the DB, and prints the returned User case class.
I hope this makes sense!
To answer your second question (bson.Document <-> JsObject) - YES; this is a solved problem, check out Play2-ReactiveMongo, which makes it seem like you're storing/retrieving JsObject instances - plus it's fully asynchronous and super-easy to get going in a Play application.
You can even go a step further and use a library like Mondrian (full disclosure: I wrote it!) to get the basic CRUD operations on top of ReactiveMongo for your Play-JSON domain case-classes.
Obviously I'm biased, but I think these solutions are a great fit if you've already defined your models as case classes in Play - you can forget about the whole BSONDocument family and stick to Json._ and JsObject etc that you already know well.
EDIT:
At the risk of further downvotes, I will demonstrate how I would go about storing and retrieving the OP's Schema object, using Mondrian. I'm going to show pretty much everything for completeness; bear in mind you've actually already done most of this. Your final code will have fewer lines than your current code, as you'd expect when you use a library.
Model Objects
There's a Column class referenced here that's never defined; for simplicity, let's just say it's:
case class Column (name:String, width:Int)
Now we can get on with the Schema, which is just:
import com.themillhousegroup.mondrian._
case class Schema(_id: Option[MongoId],
                  createdDate: DateTime,
                  updatedDate: DateTime,
                  columns: Seq[Column]) extends MongoEntity
So far, we've just implemented the MongoEntity trait, which just required the templateId field to be renamed and given the required type.
JSON Converters
import com.themillhousegroup.mondrian._
import play.api.libs.json._
import play.api.libs.functional.syntax._
object SchemaJson extends MongoJson {
  implicit lazy val columnFormat = Json.format[Column]

  // Pick one - easy:
  implicit lazy val schemaFormat = Json.format[Schema]

  // Pick one - backwards-compatible (uses "templateId"):
  implicit lazy val checklistSchemaFormat: Format[Schema] = (
    (__ \ "templateId").formatNullable[MongoId] and
    (__ \ "createdDate").format[DateTime] and
    (__ \ "updatedDate").format[DateTime] and
    (__ \ "columns").format[Seq[Column]]
  )(Schema.apply _, unlift(Schema.unapply))
}
The JSON converters are standard Play-JSON stuff; we pick up the MongoId Format by extending MongoJson. I've shown two different ways of defining the Format for Schema. If you have clients out in the wild using templateId (or if you prefer it) then use the second, more verbose declaration.
Service layer
For brevity I'll skip the application-configuration, you can read the Mondrian README.md for that. Let's define the SchemaService that is responsible for persistence operations on Schema instances:
import com.themillhousegroup.mondrian._
import SchemaJson._
class SchemaService extends TypedMongoService[Schema]("schemas")
That's it. We've linked the model object, the name of the MongoDB collection ("schemas") and (implicitly) the necessary converters.
Saving a Schema and finding a Schema based on some criteria
Now we start to realize the value of Mondrian. save and findOne are standard operations - we get them for free in our Service, which we inject into our controllers in the standard way:
class SchemaController @Inject() (schemaService: SchemaService) extends Controller {
  ...
  // Returns a Future[Boolean]
  schemaService.save(mySchema).map { saveOk =>
    ...
  }
  ...
  ...
  // Define filter criteria using standard Play-JSON:
  val targetDate = new DateTime()
  val criteria = Json.obj("createdDate" -> Json.obj("$gt" -> targetDate.getMillis))

  // Returns a Future[Option[Schema]]
  schemaService.findOne(criteria).map { maybeFoundSchema =>
    ...
  }
}
So there we go. No sign of the BSON family, just the Play JSON that, as you say, we all know and love. You'll only need to reach for the Mongo documentation when you need to construct a JSON query (that $gt stuff) although in some cases you can use Mondrian's overloaded findOne(example:Schema) method if you are just looking for a simple object match, and avoid even that :-)

Basic Create-from-Json Method Walkthrough

I'm new to play, scala, and reactivemongo and was wondering if someone could explain to me the following code in easy terms to understand.
def createFromJson = Action.async(parse.json) { request =>
  import play.api.libs.json.Reads._

  val transformer: Reads[JsObject] =
    Reads.jsPickBranch[JsString](__ \ "name") and
    Reads.jsPickBranch[JsNumber](__ \ "age") and
    Reads.jsPut(__ \ "created", JsNumber(new java.util.Date().getTime())) reduce

  request.body.transform(transformer).map { result =>
    collection.insert(result).map { lastError =>
      Logger.debug(s"Successfully inserted with LastError: $lastError")
      Created
    }
  }.getOrElse(Future.successful(BadRequest("invalid json")))
}
I know that it creates a user from a JSON user with name and age attributes. What I don't understand is the way the input JSON is read in this method, and the concepts of Action.async(parse.json), request =>, getOrElse, Future, etc.
Also, any easier/simpler ways of writing this method would be greatly appreciated.
Thanks in advance!
I believe you found this code in a template I made following the excellent ReactiveMongo documentation:
http://reactivemongo.org/releases/0.11/documentation/index.html
http://reactivemongo.org/releases/0.11/documentation/tutorial/play2.html
I feel a bit obliged to explain it. Let's run through the code.
def createFromJson = Action.async(parse.json) { request =>
The function createFromJson returns an Action (Play stuff) that is asynchronous (it returns a Future of a Result) and that handles a body in JSON format. To do that it uses the request.
Documentation: https://www.playframework.com/documentation/2.5.x/ScalaAsync
A JSON value can be anything that follows the JSON format: for example an array, a string, an object, ...
Our transformer is going to take only the data that we are interested in from the JSON and return a clean JSON object:
val transformer: Reads[JsObject] =
  Reads.jsPickBranch[JsString](__ \ "name") and
  Reads.jsPickBranch[JsNumber](__ \ "age") and
  Reads.jsPut(__ \ "created", JsNumber(new java.util.Date().getTime())) reduce
As you can see, it picks the name branch as a string and the age branch as a number. It also adds a created field, holding the time of creation, to the final JSON object.
Note that we are not transforming it into a Person instance; it is just a JsObject instance, as declared in
val transformer: Reads[JsObject] ....
Play offers a few ways to handle JSON more simply. This example tries to show the power of manipulating the JSON values directly, without converting to a model.
For example if you have a case class
case class Person(name: String, age: Int)
You could automatically create a Reads for it:
val personReads: Reads[Person] = Json.reads[Person]
But to just store it in MongoDB, there is no reason to build this instance and then transform it to JSON again.
Of course, if you need to apply some logic to the models before inserting them, you might need to create the model, as sketched below.
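For contrast, here is a sketch of the model-based version of the same endpoint (assuming the same collection and implicit ExecutionContext as the walkthrough; the age adjustment is purely illustrative):

import play.api.libs.json._
import scala.concurrent.Future

case class Person(name: String, age: Int)
implicit val personFormat: OFormat[Person] = Json.format[Person]

def createFromModel = Action.async(parse.json) { request =>
  request.body.validate[Person].map { person =>
    // Hypothetical domain logic before persisting:
    val adjusted = person.copy(age = person.age.max(0))
    collection.insert(Json.toJson(adjusted).as[JsObject]).map { lastError =>
      Created
    }
  }.getOrElse(Future.successful(BadRequest("invalid json")))
}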
Documentation:
https://www.playframework.com/documentation/2.5.x/ScalaJson
https://www.playframework.com/documentation/2.5.x/ScalaJsonCombinators
https://www.playframework.com/documentation/2.5.x/ScalaJsonAutomated
https://www.playframework.com/documentation/2.5.x/ScalaJsonTransformers
With this in mind the rest of the code should be clear
request.body.transform(transformer).map { result =>
  collection.insert(result).map { lastError =>
    Logger.debug(s"Successfully inserted with LastError: $lastError")
    Created
  }
}
From the request we take the body (a JsValue), transform it into a JsObject (result), and insert it into the collection.
insert returns a Future with the last error; when the person is stored, the last error is logged and a Created (201 code) is returned to the client of the API.
The last bit should be also clear by now
}.getOrElse(Future.successful(BadRequest("invalid json")))
If there is any problem parsing and transforming the JSON body of the request into our JsObject, an already completed Future with the result BadRequest (400 code) is returned to the client.
It is a Future because Action.async needs a Future of a Result as the return type.
Enjoy Scala.

How to print contents of a BSONDocument

I am trying to implement CRUD operations using ReactiveMongo, and here is my find function from a tutorial online.
def findTicker(ticker: String) = {
  val query = BSONDocument("firstName" -> ticker)
  val future = collection.find(query).one
  future.onComplete {
    case Failure(e) => throw e
    case Success(result) =>
      println(result)
  }
}
However I am getting this Result:
Some(BSONDocument(<non-empty>))
How can I see the actual, readable JSON data I am looking for:
{ "_id" : ObjectId("569914557b85c62b49634c1d"), "firstName" : "Stephane", "lastName" : "Godbillon", "age" : 29 }
You can do this without the Play framework module. There is a pretty function specifically for this:
result match {
  case Some(document) => println(BSONDocument.pretty(document))
  case None           => println("No document")
}
With Play-ReactiveMongo
So you have a few options. It looks like you're using the Play framework, and I assume the Play-ReactiveMongo plugin. If that's the case, check out this question. It's a bit different, but I think you can reuse the ideas from the submitted answer.
import play.modules.reactivemongo.json.BSONFormats._
and then in your success case
case Success(result) => {
  result.map { data =>
    Json.toJson(data)
  }
}
There are other options to convert BSONDocuments to JSON but Play-ReactiveMongo makes things easier.
Without the Play-ReactiveMongo plugin you will need to tell ReactiveMongo how to write and read your data. To do this, ReactiveMongo uses BSONDocumentReaders & BSONDocumentWriters. It provides a macro to generate these for most classes; this link has more info.
import reactivemongo.bson._

// let's say your domain/case class is called Person
implicit val personHandler: BSONHandler[BSONDocument, Person] = Macros.handler[Person]
A BSONHandler gathers both the BSONReader and BSONWriter traits, and you can place this implicit in Person's companion object.
ReactiveMongo's one method is generic in the type of entity it looks for and takes an implicit reader for your entity:
def one[T](readPreference: ReadPreference)(implicit reader: Reader[T], ec: ExecutionContext): Future[Option[T]]
So in this example it would use the reader generated by the macro above to return a Future[Option[Person]] instead of a Future[Option[BSONDocument]]. Then you can use Play JSON to write your domain object as JSON.
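Putting the pieces together, a sketch (assuming the collection and query from the question, plus the Person handler above):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import reactivemongo.bson._

// With the implicit handler in scope, `one` can target the case class directly:
val futurePerson: Future[Option[Person]] =
  collection.find(BSONDocument("firstName" -> "Stephane")).one[Person]

futurePerson.foreach {
  case Some(person) => println(person) // readable case-class toString
  case None         => println("No document")
}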
For full disclosure, you can write your own custom writers rather than use the macro; these end up being similar to writing Play JSON writers and readers.
THIS ANSWER IS BASED ON @Barry's PREVIOUS ANSWER BEFORE THE EDITS:
I got it to work using the play-reactivemongo updated version:
"org.reactivemongo" %% "play2-reactivemongo" % "0.11.9",
Now,
result.map { data =>
  println(Json.toJson(data))
}
returns what I want:
{"_id":0,"name":"MongoDB","type":"database","count":1,"info":{"x":203,"y":102}}

Serializing case class with trait mixin using json4s

I've got a case class Game which I have no trouble serializing/deserializing using json4s.
case class Game(name: String, publisher: String, website: String, gameType: GameType.Value)
In my app I use mapperdao as my ORM. Because Game uses a surrogate id, I do not have id as part of its constructor.
However, when mapperdao returns an entity from the DB it supplies the id of the persisted object using a trait.
Game with SurrogateIntId
The code for the trait is
trait SurrogateIntId extends DeclaredIds[Int] {
  def id: Int
}

trait DeclaredIds[ID] extends Persisted

trait Persisted {
  @transient
  private var mapperDaoVM: ValuesMap = null
  @transient
  private var mapperDaoDetails: PersistedDetails = null

  private[mapperdao] def mapperDaoPersistedDetails = mapperDaoDetails
  private[mapperdao] def mapperDaoValuesMap = mapperDaoVM

  private[mapperdao] def mapperDaoInit(vm: ValuesMap, details: PersistedDetails) {
    mapperDaoVM = vm
    mapperDaoDetails = details
  }
  .....
}
When I try to serialize Game with SurrogateIntId I get empty parentheses returned; I assume this is because json4s doesn't know how to deal with the attached trait.
I need a way to serialize Game with only id added to its properties, and, almost as importantly, a way to do this for any T with SurrogateIntId, as I use these for all of my domain objects.
Can anyone help me out?
So this is an extremely specific solution, since the origin of my problem comes from the way mapperDao returns DOs; however, it may be helpful for general use, since I'm delving into custom serializers in json4s.
The full discussion on this problem can be found on the mapperDao google group.
First, I found that calling copy() on any persisted Entity (returned from mapperDao) returned the clean copy (just the case class) of my DO -- which is then serializable by json4s. However, I did not want to have to remember to call copy() any time I wanted to serialize a DO or deal with mapping lists, etc., as this would be unwieldy and prone to errors.
So, I created a CustomSerializer that wraps around the returned Entity(case class DO + traits as an object) and gleans the class from generic type with an implicit manifest. Using this approach I then pattern match my domain objects to determine what was passed in and then use Extraction.decompose(myDO.copy()) to serialize and return the clean DO.
// Entity[Int, Persisted, Class[T]] is how my DOs are returned by mapperDao
class EntitySerializer[T: Manifest] extends CustomSerializer[Entity[Int, Persisted, Class[T]]](formats => (
  { PartialFunction.empty }, // This PF is for extracting from JSON and not needed
  {
    case g: Game => // Each type is one of my DOs
      implicit val formats: Formats = DefaultFormats // include primitive formats for serialization
      Extraction.decompose(g.copy()) // get plain DO and then serialize with json4s
    case u: User =>
      implicit val formats: Formats = DefaultFormats + new LinkObjectEntitySerializer // See below for explanation on LinkObject
      Extraction.decompose(u.copy())
    case t: Team =>
      implicit val formats: Formats = DefaultFormats + new LinkObjectEntitySerializer
      Extraction.decompose(t.copy())
    ...
  }
))
The only need for a separate serializer is in the event that you have non-primitives as parameters of a case class being serialized, because the serializer can't use itself to serialize. In this case you create a serializer for each basic class (i.e. one with only primitives) and then include it in the next serializer for objects that depend on those basic classes.
class LinkObjectEntitySerializer[T: Manifest] extends CustomSerializer[Entity[Int, Persisted, Class[T]]](formats => (
  { PartialFunction.empty },
  {
    // Team and User have Set[TeamUser] parameters; we need to define this
    // "dependency" so it can be included in formats
    case tu: TeamUser =>
      implicit val formats: Formats = DefaultFormats
      ("Team" -> // Using custom-built representation of object
        ("name" -> tu.team.name) ~
        ("id" -> tu.team.id) ~
        ("resource" -> "/team/") ~
        ("isCaptain" -> tu.isCaptain)) ~
      ("User" ->
        ("name" -> tu.user.globalHandle) ~
        ("id" -> tu.user.id) ~
        ("resource" -> "/user/") ~
        ("isCaptain" -> tu.isCaptain))
  }
))
This solution is hardly satisfying. Eventually I will need to replace mapperDao or json4s (or both) to find a simpler solution. However, for now, it seems to be the fix with the least amount of overhead.
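For completeness, the wiring at the point of use would look something like this (a sketch; registering custom serializers via + is standard json4s, but persistedGame and the serializer types are specific to the code above):

import org.json4s._
import org.json4s.native.Serialization

// Register the custom serializers alongside the defaults, then
// serialize a persisted entity (Game with SurrogateIntId) as usual:
implicit val formats: Formats =
  DefaultFormats + new EntitySerializer[Game] + new LinkObjectEntitySerializer[TeamUser]

val json: String = Serialization.write(persistedGame)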