I have a simple JSON:
{
  "name": "John",
  "placesVisited": [
    {
      "name": "Paris",
      "data": {
        "weather": "warm",
        "date": "31/01/22"
      }
    },
    {
      "name": "New York",
      "data": [
        {
          "weather": "warm",
          "date": "31/01/21"
        },
        {
          "weather": "cold",
          "date": "28/01/21"
        }
      ]
    }
  ]
}
As you can see, this JSON has a placesVisited field: when the name is "New York" the "data" field is a list, and when the name is "Paris" it is an object.
What I want to do is pull the placesVisited element where "name": "New York" and then parse it into a case class I have. I can't use that case class for both objects in placesVisited because they have different types for the same field name.
So what I thought is to do something like:
(myJson \ "placesVisited") and then add something that gives me the element whose name is "New York". How can I do that?
My result should be this:
{
  "name": "New York",
  "data": [
    {
      "weather": "warm",
      "date": "31/01/21"
    },
    {
      "weather": "cold",
      "date": "28/01/21"
    }
  ]
}
Something like this could maybe work, but it's horrible, haha:
(Json.parse(myjson) \ "placesVisited").as[List[JsObject]].find(item => {
  item.value.get("name").toString.contains("New York")
}).getOrElse(throw new Exception("could not find New York element")).as[NewYorkModel]
item.value.get("name").toString can be slightly simplified to (item \ "name").as[String], but otherwise there's not much to improve.
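Putting that simplification in, the cleaned-up version of the same lookup would be the following sketch (NewYorkModel and its Reads are assumed to already exist, as in the question):

(Json.parse(myjson) \ "placesVisited")
  .as[List[JsObject]]
  .find(item => (item \ "name").as[String] == "New York")
  .getOrElse(throw new Exception("could not find New York element"))
  .as[NewYorkModel]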
Another option is to use a case class Place(name: String, data: JsValue) and do it like this:
(Json.parse(myjson) \ "placesVisited")
.as[List[Place]]
.find(_.name == "New York")
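For completeness, a minimal sketch of what this second option could look like end to end; the companion object with the Json.reads macro is my addition, and NewYorkModel (with its own Reads for the list-shaped data) is assumed to exist as in the question:

import play.api.libs.json._

final case class Place(name: String, data: JsValue)

object Place {
  // Macro-derived Reads that maps the "name" and "data" JSON fields onto the case class.
  implicit val reads: Reads[Place] = Json.reads[Place]
}

val newYorkData: JsValue =
  (Json.parse(myjson) \ "placesVisited")
    .as[List[Place]]
    .find(_.name == "New York")
    .getOrElse(throw new Exception("could not find New York element"))
    .data

// newYorkData can now be decoded separately with whatever Reads fits the list-shaped New York data.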
I had to change one of the fields of my collection in MongoDB from an object to an array of objects containing a lot of data. New documents get inserted without any problem, but when I attempt to read old data, it never maps to the original DTO correctly and runs into errors.
subject is the field that was changed in the Students collection.
I was wondering whether there is any way to update all the records so they all have the same data type, without losing any data.
The old version of Student:
{
  "_id": "5fb2ae251373a76ae58945df",
  "isActive": true,
  "details": {
    "picture": "http://placehold.it/32x32",
    "age": 17,
    "eyeColor": "green",
    "name": "Vasquez Sparks",
    "gender": "male",
    "email": "vasquezsparks#orbalix.com",
    "phone": "+1 (962) 512-3196",
    "address": "619 Emerald Street, Nutrioso, Georgia, 6576"
  },
  "subject": {
    "id": 0,
    "name": "math",
    "module": {
      "name": "Advanced",
      "semester": "second"
    }
  }
}
This needs to be updated to the new version like this:
{
  "_id": "5fb2ae251373a76ae58945df",
  "isActive": true,
  "details": {
    "picture": "http://placehold.it/32x32",
    "age": 17,
    "eyeColor": "green",
    "name": "Vasquez Sparks",
    "gender": "male",
    "email": "vasquezsparks#orbalix.com",
    "phone": "+1 (962) 512-3196",
    "address": "619 Emerald Street, Nutrioso, Georgia, 6576"
  },
  "subject": [
    {
      "id": 0,
      "name": "math",
      "module": {
        "name": "Advanced",
        "semester": "second"
      }
    },
    {
      "id": 1,
      "name": "history",
      "module": {
        "name": "Basic",
        "semester": "first"
      }
    },
    {
      "id": 2,
      "name": "English",
      "module": {
        "name": "Basic",
        "semester": "second"
      }
    }
  ]
}
I understand there might be a way to rename the old collection, create a new one, and insert the data from the old collection into the new one. I was wondering whether there is a more direct way.
The goal is to turn subject into a one-element array if it is not already an array, and otherwise leave it alone. This will do the trick; the update arguments are (predicate, actions, options).
db.foo.update(
    // Match only those docs where subject is an object (i.e. not yet turned into an array):
    {$expr: {$eq: [{$type: "$subject"}, "object"]}},
    // Actions: set subject to be an array containing $subject. You MUST use the pipeline version
    // of the update actions to correctly substitute $subject in the expression!
    [ {$set: {subject: ["$subject"]}} ],
    // Do this for ALL matches, not just the first:
    {multi: true});
You can run this converter over and over because it will ignore converted docs.
If the goal is to convert and add some new subjects, preserving the first one, then we can set up the additional subjects and concatenate them into one array as follows:
var mmm = [ {id: 8, name: "CORN"}, {id: 9, name: "DOG"} ];
rc = db.foo.update(
    {$expr: {$eq: [{$type: "$subject"}, "object"]}},
    [ {$set: {subject: {$concatArrays: [["$subject"], mmm]}}} ],
    {multi: true});
I have a collection with documents like:
{
  "_id": "Mongo ObjectID",
  "some_prop": "some_value",
  "features": [
    { "name": "A", "icon": "01.png" },
    { "name": "B", "icon": "02.png" }
  ]
}
Another document sample:
{
  "_id": "Mongo ObjectID",
  "some_prop": "other one",
  "features": [
    { "name": "B", "icon": "02.png" },
    { "name": "C", "icon": "03.png" },
    { "name": "D", "icon": "04.png" }
  ]
}
Notice that the first and the second document contain the same feature B. This occurs across many documents.
What I need is to update all features B to a new icon, something like this:
{ "name": "B", "icon": "10.png" }
I need to apply this change to all documents that have a feature with name B.
I already wrote some very horrible code that gets all documents and updates them one by one in a loop. But my guess is there is a better way to do it, maybe in a single collection.update command? I'm new to MongoDB and so far googling didn't help.
You need to use the $ positional operator to update fields inside an array:
db.collection.updateMany(
{ "features.name": "B" },
{ "$set": { "features.$.icon": "10.png" }}
)
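If the same update has to be issued from application code rather than the shell, roughly the equivalent call with the official MongoDB Scala driver might look like the sketch below (the collection handle and the helper name updateFeatureIcon are assumptions, not part of the question):

import org.mongodb.scala.{Document, MongoCollection}
import org.mongodb.scala.model.{Filters, Updates}

// Positional "$" update from Scala: in every matched document, the first
// "features" element whose name is "B" gets its icon replaced.
def updateFeatureIcon(collection: MongoCollection[Document]) =
  collection.updateMany(
    Filters.equal("features.name", "B"),
    Updates.set("features.$.icon", "10.png")
  )
// Note: the returned Observable still has to be subscribed to (or awaited) for the update to run.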
I'm forced to use the aggregation framework and the project operation of Spring Data MongoDB.
What I'd like to do is create an array of objects as the result of a project operation.
Considering this intermediate aggregation result:
{
  "processes": [
    {
      "id": "101a",
      "assignees": [
        {
          "id": "201a",
          "username": "carl93"
        },
        {
          "id": "202a",
          "username": "susan"
        }
      ]
    },
    {
      "id": "101b",
      "assignees": [
        {
          "id": "201a",
          "username": "carl93"
        },
        {
          "id": "202a",
          "username": "susan"
        }
      ]
    }
  ]
}
I'm trying to get, for each process, all the assignee usernames and ids. Hence, what I want to obtain is something like this:
[
  {
    "results": [
      {
        "id": "201a",
        "value": "carl93",
        "parentObjectId": "101a"
      },
      {
        "id": "202a",
        "value": "susan",
        "parentObjectId": "101a"
      },
      {
        "id": "201a",
        "value": "carl93",
        "parentObjectId": "101b"
      },
      {
        "id": "202a",
        "value": "susan",
        "parentObjectId": "101b"
      }
    ]
  }
]
To reach this goal I'm using 2 nested VariableOperators.mapItemsOf, obtaining:
org.springframework.data.mapping.MappingException: Cannot convert [Document{{id= 201a, value= carl93, parentObjectId= 101a}}, Document{{id= 202a, value = susan, parentObjectId= 101a}}]
of type class java.util.ArrayList into an instance of class java.lang.Object!
Implement a custom Converter<class java.util.ArrayList, class java.lang.Object> and register it with the CustomConversions.
Here's the code that I'm currently using:
new ProjectionOperation().and(
    VariableOperators.mapItemsOf("processes")
        .as("pr")
        .andApply(
            VariableOperators.mapItemsOf("$pr.ownership.assignees")
                .as("ass")
                .andApply(aggregationOperationContext -> {
                    Document document = new Document();
                    document.append("id", "$$ass.id");
                    document.append("value", "$$ass.username");
                    document.append("parentObjectId", "$$pr.id");
                    return document;
                })
        )
).as("results");
The code produces this:
[
  [
    {
      "id": "201a",
      "value": "carl93",
      "parentObjectId": "101a"
    },
    {
      "id": "202a",
      "value": "susan",
      "parentObjectId": "101a"
    }
  ],
  [
    {
      "id": "201a",
      "value": "carl93",
      "parentObjectId": "101b"
    },
    {
      "id": "202a",
      "value": "susan",
      "parentObjectId": "101b"
    }
  ]
]
As you can see there are 2 nested arrays, [[],[]]. This is the reason why the exception is thrown.
Nevertheless, what I want to obtain is just one array containing all the objects (possibly without duplicates or null values). I've tried the addToSet operator and other aggregation operators, without any success.
Use $reduce with $concatArrays to join the arrays.
new ProjectionOperation().and(
    ArrayOperators.arrayOf("processes")
        .reduce(ArrayOperators.ConcatArrays.arrayOf("$$value").concat(
            VariableOperators.mapItemsOf("$$this.ownership.assignees")
                .as("ass")
                .andApply(aggregationOperationContext -> {
                    Document document = new Document();
                    document.append("id", "$$ass.id");
                    document.append("value", "$$ass.username");
                    document.append("parentObjectId", "$$this.id");
                    return document;
                })
        )).startingWith(Arrays.asList())
).as("results");
Considering 2 sets of data as follows:
JSON1 => {
  "data": [
    {
      "id": "1-abc",
      "model": "Agile",
      "status": "open",
      "configuration": {
        "state": "running",
        "rootVolumeSize": "0.00000",
        "count": "2",
        "type": "large",
        "platform": "Linux"
      },
      "stateId": "123-567"
    }
  ]
}
JSON2 => {
  "data": [
    {
      "id": "1-abc",
      "model": "Agile",
      "configuration": {
        "state": "running",
        "diskSize": "0",
        "type": "small",
        "platform": "Windows"
      }
    }
  ]
}
I need to compare JSON1 and JSON2 based on the first field, id, and if they match, I need to merge JSON1 into JSON2 while retaining the existing values in JSON2 (only appending fields that are not present).
I have coded this as below:
private def merger(JSON1: Seq[JSON], JSON2: Seq[JSON]): Seq[JSON] = {
  // Index JSON1 records by id for lookup.
  val abcKey = JSON1.groupBy(_.id).map { case (k, v) => (k, v.head) }
  val mergedRecords = for {
    xyzJSON <- JSON2
  } yield (
    abcKey.get(xyzJSON.id) match {
      case Some(json1) => xyzJSON.copy(status = json1.status,
        stateId = json1.stateId)
      case None => xyzJSON.copy(origin = "N/A")
    }
  )
  mergedRecords
}
I am not able to arrive at a solution for reconciling the fields within the configuration map (a possible sketch follows the expected output below).
Expected result set should be like:
{
  "data": [
    {
      "id": "1-abc",
      "model": "Agile",
      "status": "open",
      "configuration": {
        "state": "running",
        "diskSize": "0",
        "rootVolumeSize": "0.00000",
        "count": "2",
        "type": "small",
        "platform": "Windows"
      },
      "stateId": "123-567"
    }
  ]
}
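For the configuration part that the question is stuck on, one option is to let JSON2's entries win and only append the keys that are missing from it. A minimal sketch, assuming configuration is modeled as a Map[String, String] on the JSON case class (the helper name and field types here are illustrative, not from the original code):

// Merge two configuration maps: keys present in both keep config2's value,
// keys present only in config1 are appended to the result.
def mergeConfiguration(config1: Map[String, String],
                       config2: Map[String, String]): Map[String, String] =
  config1 ++ config2 // with Map's ++, the right-hand operand wins on duplicate keys

// With the values from the question this keeps diskSize, type ("small") and
// platform ("Windows") from JSON2 and appends rootVolumeSize and count from JSON1.

Inside the Some(json1) branch of the match above, the copy call could then also pass something like configuration = mergeConfiguration(json1.configuration, xyzJSON.configuration), assuming the case class exposes the configuration map as a field.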
I have some documents in the "company" collection structured this way:
[
  {
    "company_name": "Company 1",
    "contacts": {
      "main": {
        "email": "main#company1.com",
        "name": "Mainuser"
      },
      "store1": {
        "email": "store1#company1.com",
        "name": "Store1 user"
      },
      "store2": {
        "email": "store2#company1.com",
        "name": "Store2 user"
      }
    }
  },
  {
    "company_name": "Company 2",
    "contacts": {
      "main": {
        "email": "main#company2.com",
        "name": "Mainuser"
      },
      "store1": {
        "email": "store1#company2.com",
        "name": "Store1 user"
      },
      "store2": {
        "email": "store2#company2.com",
        "name": "Store2 user"
      }
    }
  }
]
I'm trying to retrieve the doc that has store1#company2.com as a contact, but cannot find how to query a specific value of a specific property of an "indexed" list of objects.
My feeling is that the contacts list should not be indexed, resulting in the following structure:
{
  "company_name": "Company 1",
  "contacts": [
    {
      "email": "main#company1.com",
      "name": "Mainuser",
      "label": "main"
    },
    {
      "email": "store1#company1.com",
      "name": "Store1 user",
      "label": "store1"
    },
    {
      "email": "store2#company1.com",
      "name": "Store2 user",
      "label": "store2"
    }
  ]
}
This way I can retrieve matching documents through the following request:
db.company.find({"contacts.email":"main#company1.com"})
But is there any way to do a similar request on documents using the previous structure?
Thanks a lot for your answers!
P.S.: same question for documents structured this way:
{
  "company_name": "Company 1",
  "contacts": {
    "0": {
      "email": "main#company1.com",
      "name": "Mainuser"
    },
    "4": {
      "email": "store1#company1.com",
      "name": "Store1 user"
    },
    "1": {
      "email": "store2#company1.com",
      "name": "Store2 user"
    }
  }
}
Short answer: yes, they can be queried, but it's probably not what you want and it's not going to be very efficient.
The document structure in the first and third blocks is basically the same - you have an embedded document. The only difference between them is the names of the keys in the contacts object.
To query documents with that kind of structure you will have to do a query like this:
db.company.find({ $or : [
{"contacts.main.email":"main#company1.com"},
{"contacts.store1.email":"main#company1.com"},
{"contacts.store2.email":"main#company1.com"}
]});
This query will not be efficient, especially if you have a lot of keys in the contacts object. Also, creating a query will be unnecessarily difficult and error prone.
The second document structure, with an array of embedded objects, is optimal. You can create a multikey index on the contacts array which will make your query faster. The bonus is that you can use a short and simple query.
I think the easiest is really to shape your document using the structure described in your 2nd example (I have not fixed the JSON):
{
"company_name": "Company 1",
"contacts":{[
{"email":"main#company1.com","name":"Mainuser", "label": "main", ...}
{"email":"store1#company1.com","name":"Store1 user", "label": "store1",...}
{"email":"store2#company1.com","name":"Store2 user", "label": "store2",...}
]}
}
Like that, you can easily query on email independently of the "label".
So if you really want to use the other structure (but you need to fix the JSON too), you will have to write more complex code or aggregation pipelines, since we do not know the names and number of attributes when querying the system. These structures are also probably harder for developers to use, independently of MongoDB queries.
Since it was not clear, let me show what I have in mind:
db.company.save(
  {
    "company_name": "Company 1",
    "contacts": [
      {"email": "main#company1.com", "name": "Mainuser", "label": "main"},
      {"email": "store1#company1.com", "name": "Store1 user", "label": "store1"},
      {"email": "store2#company1.com", "name": "Store2 user", "label": "store2"}
    ]
  }
);
db.company.save(
  {
    "company_name": "Company 2",
    "contacts": [
      {"email": "main#company2.com", "name": "Mainuser", "label": "main"},
      {"email": "store1#company2.com", "name": "Store1 user", "label": "store1"},
      {"email": "store2#company2.com", "name": "Store2 user", "label": "store2"}
    ]
  }
);
db.company.ensureIndex( { "contacts.email" : 1 } );
db.company.find( { "contacts.email" : "store1#company2.com" } );
This allows you to store many emails, and query with an index.