Scala Salat Deserialization: how to get a Map[String, Number]? - mongodb

My database looks like
[
  {
    name: "domenic",
    records: {
      today: 5,
      yesterday: 1.5
    }
  },
  {
    name: "bob",
    records: { ... }
  }
]
When I try queries like
val result: Option[DBObject] = myCollection.findOne(
  MongoDBObject("name" -> "domenic"),
  MongoDBObject("records" -> 1)
)
val records = result.get.getAs[BasicDBObject]("records").get
grater[Map[String, Number]].asObject(records)
it fails (at runtime!) with
GRATER GLITCH - unable to find or instantiate a grater using supplied path name
REASON: Class scala.collection.immutable.Map is an interface
Context: 'global'
Path from pickled Scala sig: 'scala.collection.immutable.Map'
I think I could make this work by creating a case class whose only field is a Map[String, Number] and then getting its property. Is that really necessary?

grater doesn't take a collection as its type argument, only a case class or a trait/abstract class whose concrete representations are case classes. Since you're only querying for a map, just extract the values you need from the DBObject using getAs[T].
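For instance, a minimal sketch of that extraction (assuming Casbah's getAs and BSON numeric values, which deserialize to java.lang.Number subtypes):

import scala.collection.JavaConverters._

// Sketch: skip the grater entirely and read the numeric entries straight
// out of the DBObject; the pattern match on java.lang.Number keeps only
// the numeric values.
val records: Map[String, Number] =
  result.get.getAs[BasicDBObject]("records")
    .map(_.toMap.asScala.collect { case (k: String, v: Number) => k -> v }.toMap)
    .getOrElse(Map.empty)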
Number may not be a supported type in Salat - I've certainly never tried it. If you need Number you can write a custom transformer or send a pull request to add real support to Salat.

Related

Prevent empty values in an array being inserted into Mongo collection

I am trying to prevent empty values being inserted into my mongoDB collection. The field in question looks like this:
MongoDB Field
"stadiumArr" : [
"Old Trafford",
"El Calderon",
...
]
Sample of (mapped) case class
case class FormData(_id: Option[BSONObjectID], stadiumArr: Option[List[String]], ..)
Sample of Scala form
object MyForm {
  val form = Form(
    mapping(
      "_id" -> ignored(Option.empty[BSONObjectID]),
      "stadiumArr" -> optional(list(text)),
      ...
    )(FormData.apply)(FormData.unapply)
  )
}
I am also using the Repeated Values functionality in Play Framework like so:
Play Template
@(myForm: Form[models.db.FormData])(implicit request: RequestHeader, messagesProvider: MessagesProvider)

@import helper._

@repeatWithIndex(myForm("stadiumArr"), min = 5) { (stadium, idx) =>
  @inputText(stadium, '_label -> ("stadium #" + (idx + 1)))
}
This ensures that, whether or not the array already holds 5 values, there will still be (at least) 5 input boxes created. However, if one (or more) of the input boxes is empty when the form is submitted, an empty string is still added as a value in the array, e.g.
"stadiumArr" : [
"Old Trafford",
"El Calderon",
"",
"",
""
]
Based on some other ways of converting types to/from the database, I've tried playing around with a few solutions, such as:
implicit val arrayWrite: Writes[List[String]] = new Writes[List[String]] {
  def writes(list: List[String]): JsValue = Json.arr(list.filterNot(_.isEmpty))
}
.. but this isn't working. Any ideas on how to prevent empty values being inserted into the database collection?
Without knowing the specific versions or libraries you're using it's hard to give you an answer, but since you linked to the Play 2.6 documentation I'll assume that's what you're using. The other assumption I'm going to make is that you're using the ReactiveMongo library. Whether or not you're using the Play plugin for that library is why I'm giving you two different answers here:
In that library, with no plugin, you'll have defined a BSONDocumentReader and a BSONDocumentWriter for your case class. These might be auto-generated for you with macros or not, but regardless of how you get them, both classes have useful methods you can use to transform the reads/writes you have into new ones. So, let's say I defined a reader and writer for you like this:
import reactivemongo.bson._

case class FormData(_id: Option[BSONObjectID], stadiumArr: Option[List[String]])

implicit val formDataReaderWriter = new BSONDocumentReader[FormData] with BSONDocumentWriter[FormData] {
  def read(bson: BSONDocument): FormData = {
    FormData(
      _id = bson.getAs[BSONObjectID]("_id"),
      stadiumArr = bson.getAs[List[String]]("stadiumArr").map(_.filterNot(_.isEmpty))
    )
  }

  def write(formData: FormData) = {
    BSONDocument(
      "_id" -> formData._id,
      "stadiumArr" -> formData.stadiumArr
    )
  }
}
Great, you say, that works! You can see that in the read I went ahead and filtered out any empty strings, so even if they're in the data they get cleaned up. That's nice and all, but notice that I didn't do the same for the write. I did that so I can show you a useful method called afterWrite. So pretend the reader and writer weren't the same class but were separate; then I can do this:
val initialWriter = new BSONDocumentWriter[FormData] {
  def write(formData: FormData) = {
    BSONDocument(
      "_id" -> formData._id,
      "stadiumArr" -> formData.stadiumArr
    )
  }
}

implicit val cleanWriter = initialWriter.afterWrite { bsonDocument =>
  val fixedField = bsonDocument.getAs[List[String]]("stadiumArr").map(_.filterNot(_.isEmpty))
  bsonDocument.remove("stadiumArr") ++ BSONDocument("stadiumArr" -> fixedField)
}
Note that cleanWriter is the implicit one; that means when the insert call on the collection happens, it will be the one chosen to be used.
Now, that's all a bunch of work. If you're using the plugin/module for Play that lets you use JSONCollections, then you can get by with just defining Play JSON Reads and Writes. If you look at the documentation, you'll see that the Reads trait has a useful map function you can use to transform one Reads into another.
So, you'd have:
val jsonReads = Json.reads[FormData]

implicit val cleanReads = jsonReads.map(formData =>
  formData.copy(stadiumArr = formData.stadiumArr.map(_.filterNot(_.isEmpty)))
)
And again, because only the clean Reads is implicit, the collection methods for mongo will use that.
NOW, all of that said, doing this at the database level is one thing, but really, I personally think you should be dealing with this at your Form level.
val form = Form(
  mapping(
    "_id" -> ignored(Option.empty[BSONObjectID]),
    "stadiumArr" -> optional(list(text)),
    ...
  )(FormData.apply)(FormData.unapply)
)
Mainly because, surprise surprise, form has a way to deal with this. Specifically, the mapping class itself. If you look there you'll find a transform method you can use to filter out empty values easily. Just call it on the mapping you need to modify, for example:
"stadiumArr" -> optional(
list(text).transform(l => l.filter(_.nonEmpty), l => l.filter(_.nonEmpty))
)
To explain this method a little more, in case you're not used to reading the signatures in the scaladoc:
def transform[B](f1: (T) ⇒ B, f2: (B) ⇒ T): Mapping[B]
says that by calling transform on some mapping of type Mapping[T] you can create a new mapping of type Mapping[B]. In order to do this you must provide functions that convert from one to the other. So the code above causes the list mapping (Mapping[List[String]]) to become a Mapping[List[String]] (the type did not change here), but when it does so it removes any empty elements. If I break this code down a little it might be more clear:
def convertFromTtoB(list: List[String]): List[String] = list.filter(_.nonEmpty)
def convertFromBtoT(list: List[String]): List[String] = list.filter(_.nonEmpty)
...
list(text).transform(convertFromTtoB, convertFromBtoT)
You might be wondering why you need to provide both. The reason is that when you call Form.fill and the form is populated with values, the second function is called so that the data goes into the format the Play form is expecting. This is more obvious when the type actually changes. For example, if you had a text area where people could enter CSV but you wanted to map it to a form model with a proper List[String], you might do something like:
def convertFromTtoB(raw: String): List[String] = raw.split(",").filter(_.nonEmpty)
def convertFromBtoT(list: List[String]): String = list.mkString(",")
...
text.transform(convertFromTtoB, convertFromBtoT)
Note that when I've done this in the past, I've sometimes had to write a separate method and pass it in if I didn't want to fully specify all the types, but you should be able to work from here given the documentation and the type signature for the transform method on Mapping.
The reason I suggest doing this in the form binding is that the form/controller should be the one concerned with dealing with your user data and cleaning things up. But you can always have multiple layers of cleaning; it's not bad to be safe!
I've gone for this (which always seems obvious when it's written and tested):
implicit val arrayWrite: Writes[List[String]] = new Writes[List[String]] {
  def writes(list: List[String]): JsValue = Json.toJson(list.filterNot(_.isEmpty).toIndexedSeq)
}
But I would be interested to know how to .map the existing Reads rather than redefining it from scratch, as @cchantep suggests.
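A hedged sketch of the Writes-side equivalent (assuming play-json 2.6): Writes has a contramap that pre-processes the value before the underlying writer runs, so the stock writer can be derived once and adapted rather than redefined.

// Derive the default writer non-implicitly, then adapt it. Writes is
// contravariant in its type parameter, so the Traversable writer serves
// for List[String].
val defaultWrites: Writes[List[String]] = Writes.traversableWrites[String]

implicit val arrayWrite: Writes[List[String]] =
  defaultWrites.contramap(_.filterNot(_.isEmpty))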

Mapping aggregation results using ReactiveMongo JSON

I'm trying to write a function that aggregates data from my MongoDB using ReactiveMongo (0.12), with Play JSON serialisation (similar to this question).
So this is what I have:
def getPopAggregate(col: JSONCollection) = {
  import col.BatchCommands.AggregationFramework.{AggregationResult, Group, Match, SumField}

  col.aggregate(
    Group(JsString("$rstId"))("totalPopulation" -> SumField("population")),
    List(Match(Json.obj("totalPopulation" -> Json.obj("$gte" -> 1000))))
  ).map(_.firstBatch)
}
This outputs a Future[List[JsObject]]; however, I want to map the results to a list of my case class (i.e. Future[Seq[PopAggregate]]).
case class PopAggregate(rstId: Option[BSONObjectID], totalPopulation: Double)

object PopAggregate {
  implicit val popAggregateFormat = Json.format[PopAggregate]
}
I hope someone can spare a moment to help me past this one. Many thanks!
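A possible sketch (not from the thread): the Group stage emits its key as "_id", so it needs renaming to "rstId" before the implicit Format can bind it. This assumes the BSONObjectID JSON handlers from reactivemongo.play.json are in scope.

import play.api.libs.json._
import reactivemongo.play.json._ // assumed: supplies JSON handlers for BSONObjectID
import scala.concurrent.{ExecutionContext, Future}

// Sketch: rename the aggregation key, then bind each JsObject to the case class.
def getPopAggregates(col: JSONCollection)(implicit ec: ExecutionContext): Future[Seq[PopAggregate]] =
  getPopAggregate(col).map(_.map { js =>
    val renamed = (js - "_id") + ("rstId" -> (js \ "_id").getOrElse(JsNull))
    renamed.as[PopAggregate]
  })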

Reading/Writing None values as null with ReactiveMongo

We are in the process of migrating an existing REST service from Spring/Java to Spray using ReactiveMongo. One of the requirements for the migration (the first phase of it anyway), is that all inputs and outputs must match the current system. The issue with this is the business objects allow null values - both at rest in the datastore, and when returned in GET methods on the service. Fields can be missing as input to the service for PUT/POST, but the corresponding values must still be written as null to the datastore and returned the same.
Normally 'not required' fields aren't an issue for Scala/Spray through the use of Option, but the issue I'm having is actually writing the values of the Option fields as null when persisting, and setting the fields back to None when reading the same null from Mongo.
In the research I've been doing, I have not been able to find a way to do this.
Here are snippets of my code:
UserPersistent
case class UserPersistent(id: Option[String], name: Option[String])
PersistentUser
object PersistentUser {
  implicit object PersistentUserReader extends BSONDocumentReader[UserPersistent] {
    def read(doc: BSONDocument): UserPersistent = UserPersistent(
      id = doc.getAs[String]("_id"),
      name = doc.getAs[String]("name")
    )
  }

  implicit object PersistentUserWriter extends BSONDocumentWriter[UserPersistent] {
    override def write(persisted: UserPersistent): BSONDocument = {
      BSONDocument(
        "_id" -> persisted.id,
        "name" -> persisted.name
      )
    }
  }
}
I have tried the following on the write() side, and although the code compiles and runs, it throws a NullPointerException when executed:
"name" -> {
val nnn = persisted.name match {
case Some(n) => n
case _ => null
}
nnn
}
I have used OptionFormat for the 'presentation' of the data, which returns nulls (but for everything), but I need to take care of the Mongo side of this.
Surely there's a way to do this - what am I missing?
Try This:
object PersistentUser {
  implicit val reader: BSONDocumentReader[UserPersistent] = Macros.reader[UserPersistent]
  implicit val writer: BSONDocumentWriter[UserPersistent] = Macros.writer[UserPersistent]
}
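If the macros don't give the exact null behaviour required, a hand-written writer can fold each Option into BSONNull explicitly (a sketch against the 0.12-era BSON API; the NullPointerException above is consistent with handing a raw Scala null to the implicit String writer, whereas BSONNull is the BSON-level null):

import reactivemongo.bson._

implicit object PersistentUserWriter extends BSONDocumentWriter[UserPersistent] {
  override def write(persisted: UserPersistent): BSONDocument =
    BSONDocument(
      // fold each Option into a concrete BSONValue: BSONNull when empty,
      // so the field is written as null rather than being dropped
      "_id" -> persisted.id.map(BSONString(_)).getOrElse(BSONNull),
      "name" -> persisted.name.map(BSONString(_)).getOrElse(BSONNull)
    )
}

On the read side, getAs[String] already yields None for a BSON null, so the reader shown in the question keeps working.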

Elasticsearch index array of made up of another class

I have a class which I want to begin indexing into ElasticSearch using the Scala client elastic4s. I have extended DocumentMap to allow me to insert the documents. The simple values like String, Int etc are working but I cannot seem to get a List of another class to map correctly.
The documents look similar to this:
case class AThing(UserName: String, Comment: String, Time: String) extends DocumentMap {
  override def map: Map[String, Any] = Map(
    "UserName" -> UserName,
    "Comment" -> Comment,
    "Time" -> Time
  )
}

case class ThingsThatHappened(Id: String, Things: Seq[AThing] = Nil) extends DocumentMap {
  override def map: Map[String, Any] = Map(
    "Id" -> Id,
    "Things" -> Things
  )
}
It maps the Id field fine within Elasticsearch, but then I get an incorrect value that looks similar to this when the document is inserted:
List(AThing(id_for_the_thing,user_name_a,typed_in_comment,2015-03-12))
Obviously this is wrong; I am expecting something akin to this JSON structure once it has been inserted into Elasticsearch:
"events" : [
{
"UserName" :"user_name_a",
"Comment": "typed_in_comment",
"Time": "2015-03-12"
}
]
Does anyone know a way to map an array of complex types when indexing data using elastic4s?
Neither elastic4s nor the Java client is (currently) smart enough to figure out that you have a nested sequence or array, but it would work if it were a nested Java map (still a bit rubbish from the Scala point of view).
I think the best thing to do is use the new Indexable typeclass that was added in 1.4.13
So, given
case class AThing(UserName: String, Comment: String, Time: String)
Then create a type class and bring it into scope
implicit object AThingIndexable extends Indexable[AThing] {
  def json = ... // create json here using Jackson or similar, which will handle nested sequences properly
}
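For illustration, a minimal sketch of such a typeclass using Jackson (this assumes jackson-module-scala is on the classpath and that Indexable's abstract method takes the instance and returns a JSON string; applied here to the outer document so the nested Seq[AThing] is exercised):

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

// One shared mapper; DefaultScalaModule teaches Jackson about case classes
// and Scala collections, so a nested Seq[AThing] serializes as a proper
// JSON array of objects.
val mapper = new ObjectMapper().registerModule(DefaultScalaModule)

implicit object ThingsThatHappenedIndexable extends Indexable[ThingsThatHappened] {
  override def json(t: ThingsThatHappened): String = mapper.writeValueAsString(t)
}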
Then you should be able to do:
client.execute { index into "myIndex/AThings" source aThing }
It's not quite as automatic as using the DocumentMap but gives you more control.
See a unit test here with it in action
First of all, you need to create the index in elastic4s. I assume you did this:
client.execute {
  create index "myIndex" mappings (
    "AThings" as (
      "UserName" typed StringType,
      "Comment" typed StringType,
      "Time" typed StringType
    )
  )
}
If you create this index, then you can put the case class into it directly:
val aThing = AThing("username", "comment", "time")

client.execute { index into "myIndex/AThings" doc aThing }

How to implement a generic REST api for tables in Play2 with squeryl and spray-json

I'm trying to implement a controller in Play2 which exposes a simple REST-style API for my db tables. I'm using Squeryl for database access and spray-json for converting objects to/from JSON.
My idea is to have a single generic controller to do all the work, so I've set up the following routes in conf/routes:
GET /:tableName controllers.Crud.getAll(tableName)
GET /:tableName/:primaryKey controllers.Crud.getSingle(tableName, primaryKey)
.. and the following controller:
object Crud extends Controller {
  def getAll(tableName: String) = Action {..}
  def getSingle(tableName: String, primaryKey: Long) = Action {..}
}
(Yes, missing create/update/delete, but let's get read to work first)
I've mapped tables to case classes by extending Squeryl's Schema:
object MyDB extends Schema {
  val accountsTable = table[Account]("accounts")
  val customersTable = table[Customer]("customers")
}
And I've told spray-json about my case classes so it knows how to convert them.
object MyJsonProtocol extends DefaultJsonProtocol {
  implicit val accountFormat = jsonFormat8(Account)
  implicit val customerFormat = jsonFormat4(Customer)
}
So far so good; it actually works pretty well as long as I'm using the table instances directly. The problem surfaces when I try to generify the code so that I end up with exactly one controller for accessing all tables: I'm stuck with a piece of code that doesn't compile, and I'm not sure what the next step is.
It seems to be a type issue with spray-json which occurs when I'm trying to convert the list of objects to json in my getAll function.
Here is my generic attempt:
def getAll(tableName: String) = Action {
  val json = inTransaction {
    // look up the table based on the url
    val table = MyDB.tables.find(t => t.name == tableName).get

    // execute select all and convert to json
    from(table)(t =>
      select(t)
    ).toList.toJson // causes compile error
  }

  // convert json to string and set correct content type
  Ok(json.compactPrint).as(JSON)
}
Compile error:
[error] /Users/code/api/app/controllers/Crud.scala:29:
Cannot find JsonWriter or JsonFormat type class for List[_$2]
[error] ).toList.toJson
[error] ^
[error] one error found
I'm guessing the problem could be that the JSON library needs to know at compile time which model type I'm throwing at it, but I'm not sure (notice the List[_$2] in that compile error). I have tried the following changes to the code, which compile and return results:
Remove the generic table lookup (MyDB.tables.find(.....).get) and instead use a specific table instance, e.g. MyDB.accountsTable. This proves that the JSON serialization works. However, it is not generic and would require a unique controller and route config per table in the db.
Convert the list of objects from the db query to a string before calling toJson, i.e. toList.toJson --> toList.toString.toJson. This proves that the generic lookup of tables works, but it is not a proper JSON response since it is a string-serialized list of objects.
Thoughts anyone?
Your guess is correct. MyDB.tables is a Seq[Table[_]]; in other words, it could hold any type of table. There is no way for the compiler to figure out the type of the table you locate using the find method, and that type is needed for the JSON conversion. There are ways to get around that, but you'd need some form of access to the model class.
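One hedged sketch of such a workaround (names illustrative): pair each table with the JsonWriter for its row type in a registry, so the element type is captured once, where it is still statically known, and hidden behind a monomorphic method the controller can call.

import org.squeryl.Table
import org.squeryl.PrimitiveTypeMode._
import spray.json._
import MyJsonProtocol._

// Each handler captures the row type T while it is known, then erases it
// behind allAsJson, so the controller never needs to name T.
case class TableHandler[T](table: Table[T], writer: JsonWriter[T]) {
  def allAsJson: JsValue = inTransaction {
    JsArray(from(table)(t => select(t)).toList.map(writer.write).toVector)
  }
}

val handlers: Map[String, TableHandler[_]] = Map(
  "accounts" -> TableHandler(MyDB.accountsTable, accountFormat),
  "customers" -> TableHandler(MyDB.customersTable, customerFormat)
)

def getAll(tableName: String) = Action {
  handlers.get(tableName) match {
    case Some(h) => Ok(h.allAsJson.compactPrint).as(JSON)
    case None => NotFound
  }
}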