I'm using the Play 2.1 framework for Scala and the MongoDB Salat plugin.
When I update an Enumeration.Value I get an exception:
java.lang.IllegalArgumentException: can't serialize class scala.Enumeration$Val
at org.bson.BasicBSONEncoder._putObjectField(BasicBSONEncoder.java:270) ~[mongo-java-driver-2.11.1.jar:na]
at org.bson.BasicBSONEncoder.putIterable(BasicBSONEncoder.java:295) ~[mongo-java-driver-2.11.1.jar:na]
at org.bson.BasicBSONEncoder._putObjectField(BasicBSONEncoder.java:234) ~[mongo-java-driver-2.11.1.jar:na]
at org.bson.BasicBSONEncoder.putObject(BasicBSONEncoder.java:174) ~[mongo-java-driver-2.11.1.jar:na]
at org.bson.BasicBSONEncoder.putObject(BasicBSONEncoder.java:120) ~[mongo-java-driver-2.11.1.jar:na]
at com.mongodb.DefaultDBEncoder.writeObject(DefaultDBEncoder.java:27) ~[mongo-java-driver-2.11.1.jar:na]
Inserting the Enumeration.Value works fine. My case class looks like:
case class User(
  @Key("_id") id: ObjectId = new ObjectId,
  username: String,
  email: String,
  @EnumAs language: Language.Value = Language.DE,
  balance: Double,
  added: Date = new Date)
and my update code:
object UserDAO extends ModelCompanion[User, ObjectId] {
  val dao = new SalatDAO[User, ObjectId](collection = mongoCollection("users")) {}

  def update(): WriteResult = {
    UserDAO.dao.update(q = MongoDBObject("_id" -> new ObjectId(id)), o = MongoDBObject("$set" -> MongoDBObject("language" -> Language.EN)))
  }
}
Any ideas how to get that working?
EDIT:
Workaround: it works if I convert the Enumeration.Value to a String with toString, but that's not how it should be...
UserDAO.dao.update(q = MongoDBObject("_id" -> new ObjectId(id)), o = MongoDBObject("$set" -> MongoDBObject("language" -> Language.EN.toString)))
It is possible to add a BSON encoding hook for Enumeration, so the conversion is done transparently.
Here is the code:
import org.bson.{BSON, Transformer}
import com.mongodb.casbah.commons.conversions.scala.RegisterConversionHelpers

RegisterConversionHelpers()
custom()

def custom() {
  val transformer = new Transformer {
    def transform(o: AnyRef): AnyRef = o match {
      case e: Enumeration#Value => e.toString
      case _ => o
    }
  }
  // scala.Enumeration$Val (the runtime class of enum values, as seen in the exception)
  // cannot be referenced directly from Scala source, so look it up by name.
  BSON.addEncodingHook(Class.forName("scala.Enumeration$Val"), transformer)
}
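Assuming the hooks are registered once at application startup, before the DAO is used, the update from the question should then work without the explicit toString:
UserDAO.dao.update(
  q = MongoDBObject("_id" -> new ObjectId(id)),
  o = MongoDBObject("$set" -> MongoDBObject("language" -> Language.EN))
)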
At the time of writing, MongoDB doesn't play nice with Scala enums, so I use a decorator method as a workaround.
Say you have this enum:
object EmployeeType extends Enumeration {
  type EmployeeType = Value
  val Manager, Worker = Value
}
and this mongodb record:
import EmployeeType._
case class Employee(
id: ObjectId = new ObjectId
)
In your mongoDB, store the integer index of the enum instead of the enum itself:
case class Employee(
  id: ObjectId = new ObjectId,
  var employeeTypeIndex: Int = 0
) {
  def employeeType = EmployeeType(employeeTypeIndex)                 /* getter */
  def employeeType_=(v: EmployeeType) = { employeeTypeIndex = v.id } /* setter */
}
The extra methods implement getters and setters for the employee type enum.
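With that in place, usage reads like an ordinary field while only the Int index is persisted (a small sketch, assuming the import EmployeeType._ shown above):
val e = Employee()
e.employeeType = Manager                       // the setter stores Manager.id in employeeTypeIndex
val currentType: EmployeeType = e.employeeType // the getter rebuilds the enum value from the stored index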
Salat only does its work when you serialize to and from your model object with the grater, not when you do queries with MongoDB objects yourself. The Mongo driver API knows nothing about the @EnumAs annotation. (Besides, even if you could use Salat for that, how would it know that you are referring to User.language in a generic key -> value MongoDBObject?)
So you have to do it like you describe in your workaround: provide the "value" of the enum yourself when you want to do queries.
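For example, a minimal sketch based on the question's DAO (setLanguage is just an illustrative name, not part of Salat):
object UserDAO extends ModelCompanion[User, ObjectId] {
  val dao = new SalatDAO[User, ObjectId](collection = mongoCollection("users")) {}

  // Convert the enum to its string form yourself when building raw query objects.
  def setLanguage(id: ObjectId, language: Language.Value): WriteResult =
    dao.update(
      q = MongoDBObject("_id" -> id),
      o = MongoDBObject("$set" -> MongoDBObject("language" -> language.toString))
    )
}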
Related
I have a Scala job where I need to insert a nested JSON file into BigQuery. The solution for that is to create a BQ table with the field type Record for the nested fields.
I wrote a case class that looks like this:
case class AvailabilityRecord(
  nestedField: NestedRecord,
  timezone: String
) {
  def toMap(): java.util.Map[String, Any] = {
    val map = new java.util.HashMap[String, Any]
    map.put("nestedField", nestedField)
    map.put("timezone", timezone)
    map
  }
}
case class NestedRecord(
  from: String,
  to: String
)
I'm using the Java dependency "com.google.cloud" % "google-cloud-bigquery" % "2.11.0" in my program.
When I try to insert the JSON value that I parsed into the case class into BQ, the value of the timezone field of type String is inserted; however, the nested field of type Record is inserted as null.
For insertion, I'm using the following code:
def insertData(records: Seq[AvailabilityRecord], gcpService: GcpServiceImpl): Task[Unit] = Task.defer {
  val recordsToInsert = records.map(record => InsertBigQueryRecord("XY", record.toMap()))
  gcpService.insertIntoBq(recordsToInsert, TableId.of("dataset", "table"))
}

override def insertIntoBq(records: Iterable[InsertBigQueryRecord],
                          tableId: TableId): Task[Unit] = Task {
  val builder = InsertAllRequest.newBuilder(tableId)
  records.foreach(record => builder.addRow(record.key, record.record))
  bqContext.insertAll(builder.build)
}
What might be the reason that fields of Record type are inserted as null?
The issue was that I needed to map the nested case class too, because the Java API does not know anything about Scala case class objects.
This is what helped me solve the issue:
case class NestedRecord(
  from: String,
  to: String
) {
  def toMap(): java.util.Map[String, String] = {
    val map = new java.util.HashMap[String, String]
    map.put("from", from)
    map.put("to", to)
    map
  }
}
And in the parent case class, the edit would take place in the toMap method:
map.put("nestedField", nestedField.toMap)
map.put("timezone", timezone)
I have a trait and two case classes:
import com.fasterxml.jackson.annotation.JsonSubTypes.Type
import com.fasterxml.jackson.annotation.{JsonSubTypes, JsonTypeInfo, JsonTypeName}

@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.EXTERNAL_PROPERTY, property = "type")
@JsonSubTypes(Array(
  new Type(value = classOf[ScreenablePlainTextAddressValue], name = "ScreenablePlainTextAddressValue"),
  new Type(value = classOf[ScreenableAddressValue], name = "ScreenableAddressValue")))
trait ScreenableAddress {
  def screeningAddressType: String
  def addressFormat: String
}

@JsonTypeName(value = "ScreenablePlainTextAddressValue")
case class ScreenablePlainTextAddressValue(addressFormat: String, screeningAddressType: String,
                                           mailingAddress: MailingAddress) extends ScreenableAddress {
  //code
}

@JsonTypeName(value = "ScreenableAddressValue")
case class ScreenableAddressValue(addressFormat: String, screeningAddressType: String, value: String)
  extends ScreenableAddress {
  //code
}
I am trying to deserialize a JSON using the Jackson mapper for the above classes, and I get an error as below:
missing property 'type' that is to contain type id (for class com.amazon.isp.execution.model.ScreenableAddress)
My JSON looks like this:
"{\"schemaVersion\":\"1.0\",\"screeningEntities\":[{\"screeningRecordList\":[{\"address\":{}},{\"address\":{\"addressFormat\":\"ADDRESS_ID\",\"value\":\"SK36SAV4WD4BSGDLGZQEG05RMMA0261533395I3IZ4AMMRVPXTQ2EIA2OXZFEHLF\",\"screeningAddressType\":\"OTHER_ADDRESS\"}},{\"name\":{}},{\"name\":{\"screeningNameType\":\"OTHER_NAME\",\"nameFormat\":\"KEYMASTER_SEALED_FULL_NAME\",\"value\":\"AAAAAAAAAADYzV1Zr7RxcB5y53pylNH4KgAAAAAAAAD3vya/1QtdPxVBZM+alrMIZZvaz9DHm+w+qby7z/c8YutbxeoX6so+mGY=\"}},{\"name\":{\"screeningNameType\":\"LEGAL_NAME\",\"nameFormat\":\"KEYMASTER_SEALED_FULL_NAME\",\"value\":\"AAAAAAAAAADYzV1Zr7RxcB5y53pylNH4KgAAAAAAAACRUkz7uKJGOQCB0pZWXRXTFrk9Enpucj33hV+/yHRM/dQKo2yWGxGcjB8=\"}}]}],\"screeningMetaData\":{\"customerId\":\"" + CustomerID + "\"}}"
Any help will be really appreciated.
I'm new to Slick, so I'm not sure whether the problem is caused by incorrect usage of implicits or whether Slick doesn't allow what I'm trying to do.
In short, I use the slick-pg extension for JSONB support in Postgres. I also use spray-json to deserialize JSONB fields into case classes.
In order to automagically convert columns into objects, I wrote the generic implicit JsonColumnType that you can see below. It allows any case class for which I have defined a JSON formatter to be converted to a jsonb field.
On the other hand, I want to have a JsValue-typed alias of the same column so that I can use JSONB operators.
import com.github.tminglei.slickpg._
import com.github.tminglei.slickpg.json.PgJsonExtensions
import org.bson.types.ObjectId
import slick.ast.BaseTypedType
import slick.jdbc.JdbcType
import spray.json.{JsValue, RootJsonWriter, RootJsonReader}

import scala.reflect.ClassTag

trait MyPostgresDriver extends ExPostgresDriver with PgArraySupport with PgDate2Support with PgRangeSupport with PgHStoreSupport with PgSprayJsonSupport with PgJsonExtensions with PgSearchSupport with PgNetSupport with PgLTreeSupport {
  override def pgjson = "jsonb" // jsonb support is in postgres 9.4.0 onward; for 9.3.x use "json"

  override val api = MyAPI

  private val plainAPI = new API with SprayJsonPlainImplicits

  object MyAPI extends API with DateTimeImplicits with JsonImplicits with NetImplicits with LTreeImplicits with RangeImplicits with HStoreImplicits with SearchImplicits with SearchAssistants { //with ArrayImplicits
    implicit val ObjectIdColumnType = MappedColumnType.base[ObjectId, Array[Byte]](
      { obj => obj.toByteArray }, { arr => new ObjectId(arr) }
    )

    implicit def JsonColumnType[T: ClassTag](implicit reader: RootJsonReader[T], writer: RootJsonWriter[T]) = {
      val columnType: JdbcType[T] with BaseTypedType[T] = MappedColumnType.base[T, JsValue]({ obj => writer.write(obj) }, { json => reader.read(json) })
      columnType
    }
  }
}

object MyPostgresDriver extends MyPostgresDriver
Here is how my table is defined (minimized version):
case class Article(id: ObjectId, ids: Ids)
case class Ids(doi: Option[String], pmid: Option[Long])

class ArticleRow(tag: Tag) extends Table[Article](tag, "articles") {
  def id = column[ObjectId]("id", O.PrimaryKey)
  def idsJson = column[JsValue]("ext_ids")
  def ids = column[Ids]("ext_ids")

  private val fromTuple: ((ObjectId, Ids)) => Article = {
    case (id, ids) => Article(id, ids)
  }
  private val toTuple = (v: Article) => Option((v.id, v.ids))

  def * = ProvenShape.proveShapeOf((id, ids) <> (fromTuple, toTuple))(MappedProjection.mappedProjectionShape)
}

private val articles = TableQuery[ArticleRow]
Finally, I have a function that looks up articles by the value of a JSON field:
def getArticleByDoi(doi: String): Future[Article] = {
  val query = (for (a <- articles if (a.idsJson +>> "doi").asColumnOf[String] === doi) yield a).take(1).result
  slickDb.run(query).map { items =>
    items.headOption.getOrElse(throw new RuntimeException(s"Article with doi $doi is not found"))
  }
}
Sadly, I get the following exception at runtime:
java.lang.ClassCastException: spray.json.JsObject cannot be cast to server.models.db.Ids
The problem is in SpecializedJdbcResultConverter.base, where ti.getValue is called with the wrong ti. It should be slick.driver.JdbcTypesComponent$MappedJdbcType, but instead it is com.github.tminglei.slickpg.utils.PgCommonJdbcTypes$GenericJdbcType. As a result, the wrong type is passed into my tuple converter.
What makes Slick choose a different type for the column, even though there is an explicit definition of the projection in the table row class?
A sample project that demonstrates the issue is here.
Following this article: https://github.com/FasterXML/jackson-module-scala/wiki/Enumerations
The enumeration is declared as follows:
object UserStatus extends Enumeration {
  type UserStatus = Value
  val Active, Paused = Value
}

class UserStatusType extends TypeReference[UserStatus.type]

case class UserStatusHolder(@JsonScalaEnumeration(classOf[UserStatusType]) enum: UserStatus.UserStatus)
The DTO is declared as:
class UserInfo(val emailAddress: String, val userStatus:UserStatusHolder) {
}
and the serialization code is:
val mapper = new ObjectMapper()
mapper.registerModule(DefaultScalaModule)

def serialize(value: Any): String = {
  import java.io.StringWriter
  val writer = new StringWriter()
  mapper.writeValue(writer, value)
  writer.toString
}
The resulting JSON serialization is
{
  "emailAddress": "user1@test.com",
  "userStatus": {"enum": "Active"}
}
Is it possible to get it in the following form?
{
  "emailAddress": "user1@test.com",
  "userStatus": "Active"
}
Have you tried:
case class UserInfo(
  emailAddress: String,
  @JsonScalaEnumeration(classOf[UserStatusType]) userStatus: UserStatus.UserStatus
)
The Jackson wiki's example is a little misleading: you don't need the holder class. It's just an example of a thing that has that element. The thing you need is the annotation.
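A quick way to check the output (a sketch, assuming the same ObjectMapper setup with DefaultScalaModule as above):
val mapper = new ObjectMapper()
mapper.registerModule(DefaultScalaModule)

val json = mapper.writeValueAsString(UserInfo("user1@test.com", UserStatus.Active))
// expected to produce: {"emailAddress":"user1@test.com","userStatus":"Active"}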
I have gone through some of the ReactiveMongo examples and found the conversion between the model and the BSON object done as below.
case class Artist(
  name: String,
  slug: String,
  artistId: Int,
  albums: Vector[Album] = Vector(),
  id: ObjectId = new ObjectId())

object ArtistMap {
  def toBson(artist: Artist): DBObject = {
    MongoDBObject(
      "name" -> artist.name,
      "slug" -> artist.slug,
      "artistId" -> artist.artistId,
      "albums" -> artist.albums.map(AlbumMap.toBson),
      "_id" -> artist.id
    )
  }

  def fromBson(o: DBObject): Artist = {
    Artist(
      name = o.as[String]("name"),
      slug = o.as[String]("slug"),
      artistId = o.as[Int]("artistId"),
      albums = o.as[MongoDBList]("albums").toVector
        .map(doc => AlbumMap.fromBson(doc.asInstanceOf[DBObject])),
      id = o.as[ObjectId]("_id")
    )
  }
}
Is there any other way to get rid of the overhead of mapping each field of the case classes, maybe some framework on top of ReactiveMongo or some utility for this?
I don't fully understand your comment, but my assumption is that you want functionality like this.
(I haven't written any Scala for a few months, so sorry for any silly mistakes.)
import reactivemongo.bson._

case class Album(name: String, year: Int)

/*
then you may implement the serialization interface (trait) or use macros
*/
implicit val albumHandler = Macros.handler[Album]

/*
I'm not sure whether there is a built-in implementation for Vector[].
*/
class VectorHandler[T](implicit handler: BSONHandler[BSONValue, T]) extends BSONHandler[BSONArray, Vector[T]] {
  def read(bson: BSONArray): Vector[T] = {
    /*
    iterate over the array and deserialize it into a Vector with the help of the handler,
    or just persist the Vector[] as a List[]
    */
    ???
  }

  def write(t: Vector[T]): BSONArray = {
    ???
  }
}

implicit val albumVectorHandler = new VectorHandler[Album]()
implicit val artistHandler = Macros.handler[Artist]