I am saving a polymorphic model to MongoDB, and the data is saved correctly. For example:
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.EXISTING_PROPERTY, property = "type", visible = true)
@JsonSubTypes(
    JsonSubTypes.Type(value = B::class, name = "test_a"),
    JsonSubTypes.Type(value = C::class, name = "test_b")
)
data class Base(
    val list: List<A>? = null
)

open class A(
    val type: String? = null
)

data class B(
    val value: String? = null
) : A()

data class C(
    val values: List<String>? = null
) : A()
The data is reflected in the database as follows:
{
"list":[
{
"type": "type a",
"value": "test"
},
{
"type": "type b",
"values": [
"test 1",
"test 2"
]
}
]
}
But when I try to read the data back from MongoDB, it is returned without the attributes of the child classes B and C:
{
"list":[
{
"type": "type a"
},
{
"type": "type b"
}
]
}
I am fetching the data using a MongoRepository with this annotation:
@Query("{'id' : ?0 }")
fun findById(@Param("id") id: String): Base?
Do you know what could be happening?
I am trying to extract the following Avro record:
{
"StateName": "Alabama",
"Capital": "Montgomery",
"Counties": [{
"CountyName": "Baldwin",
"CountyPopulation": 200000,
"Cities": [{
"CityName": "Daphne",
"CityPopulation": 20000
},
{
"CityName": "Foley",
"CityPopulation": 14000
}
]
}, {
"CountyName": "Calhoun",
"CountyPopulation": 100000,
"Cities": [{
"CityName": "Anniston",
"CityPopulation": 23000
},
{
"CityName": "Glencoe",
"CityPopulation": 5000
}
]
}]
}
and modify it to create new individual records like this (extract each county and create a new record per county):
{
"StateName": "Alabama",
"Capital": "Montgomery",
"CountyName": "Baldwin",
"CountyPopulation": 200000,
"Cities": [{
"CityName": "Daphne",
"CityPopulation": 20000
},
{
"CityName": "Foley",
"CityPopulation": 14000
}
]
}
I am trying to extract the records using json4s, following the reference at https://nmatpt.com/blog/2017/01/29/json4s-custom-serializer/:
val StateName = avroRecord.get("StateName").asInstanceOf[Utf8].toString
val Capital = avroRecord.get("Capital").asInstanceOf[Utf8].toString
val CountyArray = avroRecord.get("Counties").toString
val jsonData = parse(CountyArray, useBigDecimalForDouble = true)
val CountyList = jsonData match {
  case JArray(_)  => jsonData.extract[List[CountyArrayRecord]]
  case JObject(_) => List(jsonData.extract[CountyArrayRecord])
  case _          => List()
}
Custom serializer
implicit val formats: Formats = Serialization.formats(NoTypeHints) + new TestSerializer

class TestSerializer extends CustomSerializer[CountyArrayRecord](format => (
  {
    case jsonObj: JObject =>
      val countyName = (jsonObj \ "CountyName").extract[String]
      val countyPopulation = (jsonObj \ "CountyPopulation").extract[Int]
      val cities = (jsonObj \ "Cities").extract[List[GenericRecord]]
      CountyArrayRecord(countyName, countyPopulation, cities)
  },
  PartialFunction.empty // serializing back to JSON is not needed here
))
Once extracted, I am trying to create a list of new records using avro4s, following https://github.com/sksamuel/avro4s#avro-records:
val returnList = CountyList.map { countyListRecord =>
  val record = FinalCountyRecord(StateName, Capital, countyListRecord.CountyName,
    countyListRecord.CountyPopulation, countyListRecord.Cities)
  val format = RecordFormat[FinalCountyRecord]
  format.to(record)
}
returnList
But this does not seem to work, since the county record has another list (Cities) inside it.
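One way around this, sketched under the assumption that the nested cities can be modeled as plain case classes rather than GenericRecord (CityRecord is hypothetical, and the field layouts below are assumed), is to let json4s extract the whole tree and avro4s derive the nested Avro schema:

```scala
import org.json4s._
import org.json4s.native.JsonMethods._
import com.sksamuel.avro4s.RecordFormat

// Assumed shapes for the case classes referenced above; the key change is
// Cities as List[CityRecord] instead of List[GenericRecord].
case class CityRecord(CityName: String, CityPopulation: Int)
case class CountyArrayRecord(CountyName: String, CountyPopulation: Int, Cities: List[CityRecord])
case class FinalCountyRecord(StateName: String, Capital: String, CountyName: String,
                             CountyPopulation: Int, Cities: List[CityRecord])

implicit val formats: Formats = DefaultFormats // no custom serializer needed

val countyList = parse(CountyArray, useBigDecimalForDouble = true) match {
  case arr: JArray  => arr.extract[List[CountyArrayRecord]]
  case obj: JObject => List(obj.extract[CountyArrayRecord])
  case _            => Nil
}

// avro4s derives the nested record schema for FinalCountyRecord, Cities included.
val format = RecordFormat[FinalCountyRecord]
val records = countyList.map(c =>
  format.to(FinalCountyRecord(StateName, Capital, c.CountyName, c.CountyPopulation, c.Cities)))
```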
I am trying to parse a list of dictionaries (given as a string) in Scala. Basically I want to build another list so that I can traverse it using a for loop.
When I have a single dictionary it works fine:
import scala.util.parsing.json.JSON

class CC[T] { def unapply(a: Any): Option[T] = Some(a.asInstanceOf[T]) }

object M extends CC[Map[String, Any]]
object A extends CC[List[Any]] // for s3
object I extends CC[Double]
object S extends CC[String]
object N extends CC[String]
object F extends CC[String]
object G extends CC[Map[String, Any]]
val jsonString =
"""
{
"index": 1,
"source": "a",
"name": "v",
"s3": [{
"path": "s3://1",
"bucket": "p",
"key": "r"
}]
}
""".stripMargin
val result = for {
  Some(M(map)) <- List(JSON.parseFull(jsonString))
  I(index)      = map("index")
  S(source)     = map("source")
  N(name)       = map("name")
  A(s3q)        = map("s3")
  G(s3data)    <- s3q
  F(path)       = s3data("path")
} yield (index.toInt, source, name, path)
But when I added another dictionary, making the top level a JSON array, it gives this error: "java.lang.ClassCastException: scala.collection.immutable.$colon$colon cannot be cast to scala.collection.immutable.Map"
class CC[T] { def unapply(a: Any): Option[T] = Some(a.asInstanceOf[T]) }

object M extends CC[Map[String, Any]]
object A extends CC[List[Any]] // for s3
object I extends CC[Double]
object S extends CC[String]
object N extends CC[String]
object F extends CC[String]
object G extends CC[Map[String, Any]]
val jsonString =
"""
[{
"index": 1,
"source": "a",
"name": "v",
"s3": [{
"path": "s3://1",
"bucket": "p",
"key": "r"
}]
},{
"index": 1,
"source": "a",
"name": "v",
"s3": [{
"path": "s3://1",
"bucket": "p",
"key": "r"
}]
}]
""".stripMargin
val result = for {
  Some(M(map)) <- List(JSON.parseFull(jsonString))
  I(index)      = map("index")
  S(source)     = map("source")
  N(name)       = map("name")
  A(s3q)        = map("s3")
  G(s3data)    <- s3q
  F(path)       = s3data("path")
} yield (index.toInt, source, name, path)
I am trying to model a product for my MongoDB collection "products".
So it looks like this:
{
  "_id": "abc123",
  "sku": "sku123",
  "name": "some product name",
  "description": "this is a description",
  "specifications": {
    "name": "name1",
    "metadata": [
      { "name1": "value1" },
      { "name2": "value2" }
    ]
  }
}
So my case classes would look like:
case class Product(
id: String,
sku: String,
name: String,
description: String,
specifications: Specification
)
case class Specification(
name: String,
metadata: Metadata
)
case class Metadata(
kvp: Map[String, String]
)
So now I will have to create handlers for each type (Product, Specification and Metadata) so that the correct data mapping is performed when data is read from or written to Mongo? How will I map the Metadata case class? I am a little confused.
As indicated in the documentation, the Reader/Writer can simply be generated most of the time:
import reactivemongo.api.bson._
// Add in Metadata companion object to be in default implicit scope
implicit val metadataHandler: BSONDocumentHandler[Metadata] = Macros.handler
// Add in Specification companion object to be in default implicit scope
implicit val specHandler: BSONDocumentHandler[Specification] = Macros.handler
// Add in Product companion object to be in default implicit scope
implicit val productHandler: BSONDocumentHandler[Product] = Macros.handler
Then any function using the BSON Reader/Writer typeclasses will accept Product/Specification/Metadata:
BSON.writeDocument(Product(
id = "X",
sku = "Y",
name = "Z",
description = "...",
specifications = Specification(
name = "Lorem",
metadata = Metadata(Map("foo" -> "bar"))))).foreach { doc: BSONDocument =>
println(BSONDocument pretty doc)
}
/* {
'id': 'X',
'sku': 'Y',
'name': 'Z',
'description': '...',
'specifications': {
'name': 'Lorem',
'metadata': {
'kvp': {
'foo': 'bar'
}
}
}
} */
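Reading works through the same handlers, since a BSONDocumentHandler is both a reader and a writer; a minimal sketch:

```scala
// Reading back: the same implicit handlers decode the document into the case classes.
val doc = BSONDocument(
  "id" -> "X", "sku" -> "Y", "name" -> "Z", "description" -> "...",
  "specifications" -> BSONDocument(
    "name" -> "Lorem",
    "metadata" -> BSONDocument("kvp" -> BSONDocument("foo" -> "bar"))))

BSON.readDocument[Product](doc).foreach { product =>
  println(product.specifications.metadata.kvp) // Map(foo -> bar)
}
```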
I have some class:
class SomeClass(val client: ElasticClient, val config: Config, val configName: String)(implicit val ec: ExecutionContext)
  extends ElasticSearchRepositoryWrapper[AnotherClass] {

  override def mapping: Option[MappingDefinition] = Some(
    properties(
      KeywordField("id"),
      TextField("name").fielddata(true).analyzer("ngram_analyzer"),
      KeywordField("lang"),
      BasicField("order", "long"),
      ...
    )
  )
}
I'm creating an index with repository.createIndexIfNotExists() using this mapping.
Now I must create ngram_analyzer in my index settings:
"settings": {
"index": {
"analysis": {
"analyzer": {
"ngram_analyzer": {
"filter": [
"lowercase"
],
"tokenizer": "my_tokenizer"
}
},
"tokenizer": {
"my_tokenizer": {
"token_chars": [
"letter",
"digit"
],
"min_gram": "3",
"type": "ngram",
"max_gram": "3"
}
}
}
}
How can I do that using elastic4s?
OK. A lot of the functions used by createIndexIfNotExists() were deprecated, so I used CreateIndexRequest, where I put my analyzer:
CreateIndexRequest(repository.indexName, analysis = Option(ngramAnalyzer), mapping = repository.mapping)
.shards(repository.shards)
.replicas(repository.replicas)
And I initialized my analyzer like this:
val ngramAnalyzer = Analysis(
List(CustomAnalyzer(
name = "ngram_analyzer",
tokenizer = "ngram",
charFilters = Nil,
tokenFilters = List("lowercase")
))
)
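Note that this uses the built-in ngram tokenizer rather than the custom my_tokenizer from the settings JSON. If the exact original settings are needed (a 3-gram tokenizer restricted to letters and digits), the custom tokenizer can be declared alongside the analyzer; a sketch, assuming elastic4s's NGramTokenizer:

```scala
import com.sksamuel.elastic4s.analysis.{Analysis, CustomAnalyzer, NGramTokenizer}

// Declare the custom tokenizer from the settings JSON and reference it by name.
val ngramAnalysis = Analysis(
  analyzers = List(
    CustomAnalyzer(
      name = "ngram_analyzer",
      tokenizer = "my_tokenizer",
      tokenFilters = List("lowercase")
    )
  ),
  tokenizers = List(
    NGramTokenizer(
      name = "my_tokenizer",
      minGram = 3,
      maxGram = 3,
      tokenChars = List("letter", "digit")
    )
  )
)
```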
I'm new to Scala and I'm learning Scala and the Play Framework.
I'm trying to dynamically create JSON with Play/Scala, starting from a sequence of data named "tables", by using Map(...), List(...) and Json.toJson(...).
My result should look like the resultCustomJsonData shown below:
var resultCustomJsonData = [
{
text: "Parent 1",
nodes: [
{
text: "Child 1",
nodes: [
{
text: "Grandchild 1"
},
{
text: "Grandchild 2"
}
]
},
{
text: "Child 2"
}
]
},
{
text: "Parent 2"
},
{
text: "Parent 3"
},
{
text: "Parent 4"
},
{
text: "Parent 5"
}
];
My Scala code is below:
val customJsonData = Json.toJson(
tables.map { t => {
Map(
"text" -> t.table.name.name, "icon" -> "fa fa-cube", "nodes" -> List (
Map( "text" -> "properties" )
)
)
}}
)
But I'm getting this error:
No Json serializer found for type Seq[scala.collection.immutable.Map[String,java.io.Serializable]]. Try to implement an implicit Writes or Format for this type.
Here is a way to do it without using a temporary Map:
import play.api.libs.json.Json
val customJsonData = Json.toJson(
tables.map { t =>
Json.obj(
"text" -> t.table.name.name,
"icon" -> "fa fa-cube",
"nodes" -> Json.arr(Json.obj("text" -> "properties"))
)
}
)
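The expected resultCustomJsonData nests nodes arbitrarily deep, while the snippet above only emits one level. If the full tree is needed, a recursive helper along these lines would build it (Node is a hypothetical case class standing in for the tables data):

```scala
import play.api.libs.json._

// Hypothetical recursive node type for the tree in the question.
case class Node(text: String, nodes: List[Node] = Nil)

def nodeToJson(node: Node): JsObject = {
  val base = Json.obj("text" -> node.text)
  if (node.nodes.isEmpty) base
  else base + ("nodes" -> JsArray(node.nodes.map(nodeToJson).toIndexedSeq))
}

val tree = List(
  Node("Parent 1", List(
    Node("Child 1", List(Node("Grandchild 1"), Node("Grandchild 2"))),
    Node("Child 2"))),
  Node("Parent 2"), Node("Parent 3"), Node("Parent 4"), Node("Parent 5"))

val customJsonData = JsArray(tree.map(nodeToJson).toIndexedSeq)
```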
I think you should try to implement a custom serializer/writer; see the Play JSON documentation on Writes.
For example:
import play.api.libs.json._

implicit val userWrites: Writes[User] = new Writes[User] {
  def writes(user: User) = Json.obj(
    "id" -> user.id,
    "email" -> user.email,
    "firstName" -> user.firstName,
    "lastName" -> user.lastName
  )
}
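With that Writes in implicit scope, Json.toJson can serialize the type directly; the same pattern would apply to the table nodes in the question:

```scala
// Assuming a User(id, email, firstName, lastName) case class with the Writes above in scope:
val json: JsValue = Json.toJson(User(1L, "jane@example.com", "Jane", "Doe"))
```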