I'm using reactivemongo in my Play Framework App and I noticed that all documents represented as for example
{name: "Robert", age: 41 }
are stored in MongoDB as
{_id: { $oid:"574005977e356b7310bcdc8d"}, name: "Robert", age: 41 }
and that's fine. This is the method I use to save documents:
// Scala code
def save(document: JsObject)
        (implicit ec: ExecutionContext): Future[WriteResult] = {
  collection.insert(document)
}
The latter representation is also what I get when I fetch the same document from the DB, using this method:
def find(query: JsObject, queryOptions: QueryOpts, projection: JsObject,
         sort: JsObject, pageSize: Int)
        (implicit ec: ExecutionContext): Future[List[JsObject]] = {
  collection.find(query, projection)
    .options(queryOptions)
    .sort(sort)
    .cursor[JsObject](ReadPreference.primaryPreferred)
    .collect[List](pageSize)
}
but in this case I'd like to get a representation like
{_id: "574005977e356b7310bcdc8d", name: "Robert", age: 41 }
in order to send the documents to the requesting client via my ReSTful API. How can I get this?
You can use Play's JSON transformers (see "Case 6: Prune a branch from input JSON" in the Play documentation):
...
.collect[List](pageSize)
.map(JsArray)
.map(
  _.transform(
    Reads.list(
      (__ \ "_id").json.prune
    )
  )
)
... then transform the JsResult to your needs
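If, instead of dropping _id, you want to flatten it to a plain string (the representation asked for in the question), a copyFrom-based transformer works too. This is a sketch using plain Play-JSON; the document shape is taken from the question:

```scala
import play.api.libs.json._

// Rewrite {"_id": {"$oid": "..."}} into {"_id": "..."} for one document.
// __.json.update merges the produced branch over the original, so the
// nested $oid string replaces the whole _id sub-object.
val flattenId: Reads[JsObject] = __.json.update(
  (__ \ "_id").json.copyFrom((__ \ "_id" \ "$oid").json.pick)
)

val doc = Json.obj(
  "_id"  -> Json.obj("$oid" -> "574005977e356b7310bcdc8d"),
  "name" -> "Robert",
  "age"  -> 41
)

val flattened: JsResult[JsObject] = doc.transform(flattenId)
```

Mapped over the fetched list, exactly like the prune example above, this yields documents with _id as a plain string.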
First let me say that the following representation is not what is stored in MongoDB (which uses BSON), but how the document is serialized using the JSON extended syntax (see the JSON documentation for ReactiveMongo).
Then, when using .cursor[T] on a query builder, you are free to provide a custom document reader (in the implicit scope).
When using the JSON serialization pack, it means providing the appropriate Reads[T].
I would also add that the functions .find and .save are essentially what is already provided by the ReactiveMongo Collection API.
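As a sketch of that approach, assuming a hypothetical Person case class matching the documents from the question, a hand-written Reads in implicit scope lets the cursor deserialize straight to the model:

```scala
import play.api.libs.json._
import play.api.libs.functional.syntax._

// Hypothetical model matching the documents in the question
case class Person(id: String, name: String, age: Int)

// A Reads that lifts the nested "$oid" out of "_id"; with this in implicit
// scope you could write collection.find(query, projection).cursor[Person](...)
implicit val personReads: Reads[Person] = (
  (__ \ "_id" \ "$oid").read[String] and
  (__ \ "name").read[String] and
  (__ \ "age").read[Int]
)(Person.apply _)
```

The RESTful response can then be built from Person values directly, with the id already flattened to a string.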
Related
I have a case class with simple data:
case class MyClass(
  details: Details,
  names: List[String],
  id: String
)
I have created a couchbase query which should retrieve all documents from database:
val query = "SELECT * FROM `docs`"
for {
  docs <- bucket
    .query(N1qlQuery.simple(query))
    .flatMap((rows: AsyncN1qlQueryResult) => rows.rows())
    .toList
    .parse[F]
    .map(_.asScala.toList)
} yield docs
parse[F] is a simple function to convert from Observable. The problem is that I get a type-mismatch error: it found List[AsyncN1qlQueryResult] where List[MyClass] is required. How should I convert AsyncN1qlQueryResult into MyClass objects?
I'm using Circe to parse documents.
I'm happy to report that there is now an early release of the native Couchbase Scala SDK available, which does include support for converting each row result of a N1QL query directly into your case class:
case class Address(line1: String)
case class User(name: String, age: Int, addresses: Seq[Address])
object User {
  // Define a Codec so the SDK knows how to convert User to/from JSON
  implicit val codec: Codec[User] = Codecs.codec[User]
}
val statement = """select * from `users`;"""
val rows: Try[Seq[User]] = cluster.query(statement)
  .map(result => result
    .rows.flatMap(row =>
      row.contentAs[User].toOption))

rows match {
  case Success(rows: Seq[User]) =>
    rows.foreach(row => println(row))
  case Failure(err) =>
    println(s"Error: $err")
}
This is the blocking API. There are also APIs for getting the results as Futures, or as Flux/Mono from reactive programming, so you have a lot of flexibility in how you get the data.
You can see how to get started here: https://docs.couchbase.com/scala-sdk/1.0alpha/hello-world/start-using-sdk.html
Please note that this is an alpha release to let the community get an idea where we're heading with it and give them an opportunity to provide feedback. It shouldn't be used in production. The forums (https://forums.couchbase.com/) are the best place to raise any feedback you have.
I have a query object:
case class SearchQuery(keyword: String, count: Int, sort: String)
I serialize this object to send it to a RESTful API to get a search response.
Is it possible to omit some of the properties during serialization based on a condition? For example, if sort is empty I want the JSON string to be "{keyword: 'whatever', count: 25}", and if sort is non-empty I want "{keyword: 'whatever', count: 25, sort: 'unitPrice'}". What is the best way to achieve this?
I am using Lift-JSON for serialization.
Any help is greatly appreciated. Thank you.
Update
val reqHeaders: scala.collection.immutable.Seq[HttpHeader] = scala.collection.immutable.Seq(
  RawHeader("accept", "application/json"),
  RawHeader("authorization", "sgdg545wf34rergt34tg"),
  RawHeader("content-type", "application/json"),
  RawHeader("x-customer-id", "45645"),
  RawHeader("x-locale-currency", "usd"),
  RawHeader("x-locale-language", "en"),
  RawHeader("x-locale-shiptocountry", "US"),
  RawHeader("x-locale-site", "us"),
  RawHeader("x-partner-id", "45sfg45fgd5")
)
implicit val formats = DefaultFormats
val searchObject = net.liftweb.json.Serialization.write(req) //req is search object
val searchObjectEntity = HttpEntity(ContentTypes.`application/json`, searchObject)
val request = HttpRequest(HttpMethods.POST, "https://api.xxxxxxx.com/services/xxxxxxxx/v1/search?client_id=654685", reqHeaders, searchObjectEntity)
In Lift-JSON, None values are simply not serialized. So if you change your case class to case class SearchQuery(keyword: String, count: Int, sort: Option[String]), you get exactly the behavior you want.
See "Any value can be optional" in
https://github.com/lift/lift/tree/master/framework/lift-base/lift-json
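A minimal sketch of that behavior with Lift-JSON (field values taken from the question; DefaultFormats assumed):

```scala
import net.liftweb.json._
import net.liftweb.json.Serialization.write

case class SearchQuery(keyword: String, count: Int, sort: Option[String])

implicit val formats: Formats = DefaultFormats

// None fields are dropped from the output entirely
val withoutSort = write(SearchQuery("whatever", 25, None))

// Some fields are serialized as the plain underlying value
val withSort = write(SearchQuery("whatever", 25, Some("unitPrice")))
```

No conditional logic is needed in your own code; the Option in the case class carries the condition.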
You can make your sort field optional, since Scala's Option type is the standard way to handle fields that may be absent, and then use Lift-JSON or Jerkson for serialization.
Here is the sample code with Jerkson Json.
case class SearchQuery(keyword: String, count: Int, sort: Option[String])
com.codahale.jerkson.Json.generate(SearchQuery("keyword",1,None))
This will give you the output:
{"keyword":"keyword","count":1}
I am currently integrating part of our system with MongoDB and we decided to use the official scala driver for it.
We have case class with joda.DateTime as parameters:
case class Schema(templateId: Muid,
                  createdDate: DateTime,
                  updatedDate: DateTime,
                  columns: Seq[Column])
We also defined format for it:
implicit lazy val checklistSchemaFormat: Format[Schema] = (
  (__ \ "templateId").format[Muid] and
  (__ \ "createdDate").format[DateTime] and
  (__ \ "updatedDate").format[DateTime] and
  (__ \ "columns").format[Seq[Column]]
)(Schema.apply _, unlift(Schema.unapply))
When I serialize this object to JSON and write it to Mongo, createdDate and updatedDate get converted to Long (which is technically fine). This is how we do it:
val bsonDoc = Document(Json.toJson(schema).toString())
collection(DbViewSchemasCollectionName)
  .replaceOne(filter, bsonDoc, new UpdateOptions().upsert(true))
  .subscribe(new Observer[UpdateResult] {
    override def onNext(result: UpdateResult): Unit =
      logger.info(s"Successfully updated checklist template schema with result: $result")
    override def onError(e: Throwable): Unit =
      logger.info(s"Failed to update checklist template schema with error: $e")
    override def onComplete(): Unit = {}
  })
as a result Mongo has this type of object:
{
  "_id": ObjectId("56fc4247eb3c740d31b04f05"),
  "templateId": "gaQP3JIB3ppJtro9rO9BAw",
  "createdDate": NumberLong(1459372615507),
  "updatedDate": NumberLong(1459372615507),
  "columns": [
    ...
  ]
}
Now, I am trying to read it like so:
collection(DbViewSchemasCollectionName).find(filter).first().head() map { document =>
  ChecklistSchema.checklistSchemaFormat reads Json.parse(document.toJson()) match {
    case JsSuccess(value, _) =>
      Some(value)
    case JsError(e) =>
      throw JsResultException(e)
  }
} recover { case e =>
  logger.info("ERROR " + e)
  None
}
And at this point the read always fails, since createdDate and updatedDate now look like this:
"createdDate" : { "$numberLong" : "1459372615507" }, "updatedDate" :
{"$numberLong" : "1459372615507" }
How do I deal with this situation? Is there an easier conversion between bson.Document and JsObject? Or am I digging in completely the wrong direction?
Thanks,
You can use the following approach to resolve your issue.
Firstly, I used json4s for reading and writing JSON to case classes.
Example:
case class User(_id: Option[Int], username: String, firstName: String, createdDate: DateTime, updatedDate: DateTime)
// A small wrapper to convert a case class to a Document
def toBson[A <: AnyRef](x: A): Document = {
  val json = write[A](x)
  Document(json)
}
def today() = DateTime.now
val user = User(Some(212), "binkabir", "lacmalndl", today, today)
val bson = toBson(user)
usersCollection.insertOne(bson).subscribe((x: Completed) => println(x))
val f = usersCollection.find(equal("_id",212)).toFuture()
f.map(_.head.toJson).map( x => read[User](x)).foreach(println)
The code above creates a User case class instance, converts it to a Document, saves it to MongoDB, queries the DB, and prints the returned User case class.
I hope this makes sense!
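If you'd rather stay with Play-JSON instead of switching to json4s, another option is a custom Reads[DateTime] that understands the extended-JSON shape from the question. This is a sketch (joda-time assumed, as in the original model); it accepts both a plain millisecond number and the {"$numberLong": "..."} form produced by document.toJson():

```scala
import play.api.libs.json._
import org.joda.time.DateTime

implicit val mongoDateTimeReads: Reads[DateTime] = Reads {
  // plain millis, e.g. 1459372615507
  case JsNumber(millis) => JsSuccess(new DateTime(millis.toLong))
  // extended JSON, e.g. {"$numberLong": "1459372615507"}
  case obj: JsObject =>
    (obj \ "$numberLong").validate[String].map(s => new DateTime(s.toLong))
  case other => JsError(s"Cannot read DateTime from $other")
}
```

You would pair this with a matching Writes (or split the DateTime Format used by checklistSchemaFormat into separate Reads and Writes), so that the Long you write and the $numberLong you read back both round-trip.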
To answer your second question (bson.Document <-> JsObject) - YES; this is a solved problem, check out Play2-ReactiveMongo, which makes it seem like you're storing/retrieving JsObject instances - plus it's fully asynchronous and super-easy to get going in a Play application.
You can even go a step further and use a library like Mondrian (full disclosure: I wrote it!) to get the basic CRUD operations on top of ReactiveMongo for your Play-JSON domain case-classes.
Obviously I'm biased, but I think these solutions are a great fit if you've already defined your models as case classes in Play - you can forget about the whole BSONDocument family and stick to Json._ and JsObject etc that you already know well.
EDIT:
At the risk of further downvotes, I will demonstrate how I would go about storing and retrieving the OP's Schema object, using Mondrian. I'm going to show pretty-much everything for completeness; bear in mind you've actually already done most of this. Your final code will have fewer lines than your current code, as you'd expect when you use a library.
Model Objects
There's a Column class referenced here that's never defined; for simplicity let's just say it's:
case class Column (name:String, width:Int)
Now we can get on with the Schema, which is just:
import com.themillhousegroup.mondrian._
case class Schema(_id: Option[MongoId],
                  createdDate: DateTime,
                  updatedDate: DateTime,
                  columns: Seq[Column]) extends MongoEntity
So far, we've just implemented the MongoEntity trait, which just required the templateId field to be renamed and given the required type.
JSON Converters
import com.themillhousegroup.mondrian._
import play.api.libs.json._
import play.api.libs.functional.syntax._
object SchemaJson extends MongoJson {
  implicit lazy val columnFormat = Json.format[Column]

  // Pick one - easy:
  implicit lazy val schemaFormat = Json.format[Schema]

  // Pick one - backwards-compatible (uses "templateId"):
  implicit lazy val checklistSchemaFormat: Format[Schema] = (
    (__ \ "templateId").formatNullable[MongoId] and
    (__ \ "createdDate").format[DateTime] and
    (__ \ "updatedDate").format[DateTime] and
    (__ \ "columns").format[Seq[Column]]
  )(Schema.apply _, unlift(Schema.unapply))
}
The JSON converters are standard Play-JSON stuff; we pick up the MongoId Format by extending MongoJson. I've shown two different ways of defining the Format for Schema. If you have clients out in the wild using templateId (or if you prefer it) then use the second, more verbose declaration.
Service layer
For brevity I'll skip the application-configuration, you can read the Mondrian README.md for that. Let's define the SchemaService that is responsible for persistence operations on Schema instances:
import com.themillhousegroup.mondrian._
import SchemaJson._
class SchemaService extends TypedMongoService[Schema]("schemas")
That's it. We've linked the model object, the name of the MongoDB collection ("schemas") and (implicitly) the necessary converters.
Saving a Schema and finding a Schema based on some criteria
Now we start to realize the value of Mondrian. save and findOne are standard operations - we get them for free in our Service, which we inject into our controllers in the standard way:
class SchemaController @Inject() (schemaService: SchemaService) extends Controller {
...
// Returns a Future[Boolean]
schemaService.save(mySchema).map { saveOk =>
...
}
...
...
// Define a filter criteria using standard Play-JSON:
val targetDate = new DateTime()
val criteria = Json.obj("createdDate" -> Json.obj("$gt" -> targetDate.getMillis))
// Returns a Future[Option[Schema]]
schemaService.findOne(criteria).map { maybeFoundSchema =>
...
}
}
So there we go. No sign of the BSON family, just the Play-JSON that, as you say, we all know and love. You'll only need to reach for the Mongo documentation when you need to construct a JSON query (that $gt stuff), although in some cases you can use Mondrian's overloaded findOne(example: Schema) method if you are just looking for a simple object match, and avoid even that :-)
I'm writing a generic update method to simplify saving case class changes to MongoDB. My model's T trait has the following function:
def update(id: BSONObjectID, t: T)(implicit writer: OFormat[T]): Future[WriteResult] = {
  collection.update(Json.obj("_id" -> id), t)
}
When I call it, it fails with the following error:
Caused by: reactivemongo.api.commands.UpdateWriteResult:
DatabaseException['The _id field cannot be changed from {_id: ObjectId('4ec58120cd6cad6afc000001')} to {_id: "4ec58120cd6cad6afc000001"}.' (code = 16837)]
Which makes sense, because MongoDB does not allow updating the document ID, even though it's the same value.
I'm wondering how I would remove the _id from my case class instance so I can update it in MongoDB. I guess I have to transform the instance before it is converted to BSON, but I don't know how to do that. This is my example case class:
case class User(
  _id: BSONObjectID,
  email: String
)
thanks
I agree with ipoteka. I would use the findAndModify command from reactive mongo. There is an example gist here that should help.
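If you'd rather keep the generic update method, another option is to strip the _id key from the serialized document before sending it. The key operation is JsObject's - method; here is a self-contained sketch (field values assumed from the question):

```scala
import play.api.libs.json._

// Stand-in for Json.toJson(t).as[JsObject] on the User from the question
val serialized = Json.obj(
  "_id"   -> "4ec58120cd6cad6afc000001",
  "email" -> "robert@example.com"
)

// "-" removes a top-level key, leaving the rest of the document intact
val withoutId = serialized - "_id"
```

In the update method that would look like collection.update(Json.obj("_id" -> id), Json.toJson(t).as[JsObject] - "_id"), so the selector still targets the id but the replacement document no longer tries to rewrite it.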
My database looks like
[
  {
    name: "domenic",
    records: {
      today: 5,
      yesterday: 1.5
    }
  },
  {
    name: "bob",
    records: { ... }
  }
]
When I try queries like
val result: Option[DBObject] = myCollection.findOne(
  MongoDBObject("name" -> "domenic"),
  MongoDBObject("records" -> 1)
)
val records = result.get.getAs[BasicDBObject]("records").get
grater[Map[String, Number]].asObject(records)
it fails (at runtime!) with
GRATER GLITCH - unable to find or instantiate a grater using supplied path name
REASON: Class scala.collection.immutable.Map is an interface
Context: 'global'
Path from pickled Scala sig: 'scala.collection.immutable.Map'
I think I could make this work by creating a case class whose only field is a Map[String, Number] and then getting its property. Is that really necessary?
grater doesn't take a collection as a type argument, only a case class or a trait/abstract class whose concrete representations are case classes. Since you're just querying for a map, just extract the values you need out of the DBObject using getAs[T].
Number may not be a supported type in Salat - I've certainly never tried it. If you need Number you can write a custom transformer or send a pull request to add real support to Salat.
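A sketch of that getAs[T] extraction with Casbah (the document shape is taken from the question, built in memory here rather than coming from findOne):

```scala
import com.mongodb.casbah.Imports._
import scala.collection.JavaConverters._

// In-memory stand-in for the DBObject returned by findOne
val doc: DBObject = MongoDBObject(
  "name" -> "domenic",
  "records" -> MongoDBObject("today" -> 5, "yesterday" -> 1.5)
)

// Pull the sub-document out with getAs[T] and view it as a plain Scala map;
// no grater is involved at any point
val records: Map[String, Double] = doc
  .getAs[DBObject]("records")
  .map(_.toMap.asScala.collect { case (k: String, v: Number) => k -> v.doubleValue }.toMap)
  .getOrElse(Map.empty)
```

The collect also sidesteps the Number question: java.lang.Number values are narrowed to Double on the way out, so nothing non-standard ever reaches Salat.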