In MongoDB, with a $project stage, I want to remove an array that contains a single empty object ([{}]), which results from an $unwind with preserveNullAndEmptyArrays followed by a $group.
[{
"title": "Papaye",
"childrens": [{}],
"parents": [{
"title": "Arbres fruitiers",
"url": "/documents/plantes/arboriculture/arbres-fruitiers"
}
],
"url": "/documents/plantes/arboriculture/arbres-fruitiers/papaye"
},
{
"title": "Arbres fruitiers",
"childrens": [{
"title": "Tavelure",
"url": "/documents/maladies/tavelure"
},
{
"title": "Longane",
"url": "/documents/plantes/arboriculture/arbres-fruitiers/longane"
}],
"parents": [{
"title": "Arboriculture",
"url": "/documents/plantes/arboriculture"
}],
"url": "/documents/plantes/arboriculture/arbres-fruitiers"
}]
The pipeline looks like this:
var pipeline = [];
pipeline.push({$match:{url:/^\//}});
(...)
var proj = {};
proj.title = true;
proj.parents = true;
proj.url = true;
proj.parents = ???
proj.childrens = ???
pipeline.push({$project:proj});
db.getCollection('Pages').aggregate(pipeline)
Thanks in advance
OK, I found the solution: just test whether url is null for the first array entry.
proj.childrens = {
    "$cond": [
        { "$eq": [ { "$arrayElemAt": ["$childrens.url", 0] }, null ] },
        [],
        "$childrens"
    ]
};
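Plugged back into the pipeline from the question, a minimal sketch (the stages elided behind (...) are the $unwind with preserveNullAndEmptyArrays and the $group):
var pipeline = [];
pipeline.push({$match: {url: /^\//}});
// (...) $unwind with preserveNullAndEmptyArrays and $group stages go here
var proj = {};
proj.title = true;
proj.parents = true;
proj.url = true;
// If the first element has no url, childrens is the leftover [{}]:
// replace it with an empty array, otherwise keep it untouched.
proj.childrens = {
    "$cond": [
        { "$eq": [ { "$arrayElemAt": ["$childrens.url", 0] }, null ] },
        [],
        "$childrens"
    ]
};
pipeline.push({$project: proj});
db.getCollection('Pages').aggregate(pipeline);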
I have created a WireMock JSON mapping file as follows:
{
"request": {
"method": "POST",
"url": "/some/thing",
"bodyPatterns": [
{
"equalToJson": {
"items": [
{
"name": "${json-unit.any-string}",
"phone": "${json-unit.regex}(^[0-9]{10}$)"
},
{
"address": "${json-unit.any-string}"
}
]
},
"ignoreArrayOrder": true
}
]
},
"response": {
"status": 200,
"body": "Hello world!"
}
}
Now when I send a JSON request where the number of elements in the items list is more than two, it does not match the above mapping.
Is there any way to change the above mapping in such way that it matches JSON requests having two or more elements in its items list?
The ignoreExtraElements parameter should do the job:
"equalToJson": {
"items": [
{
"name": "${json-unit.any-string}",
"phone": "${json-unit.regex}(^[0-9]{10}$)"
},
{
"address": "${json-unit.any-string}"
}
]
},
"ignoreArrayOrder": true,
"ignoreExtraElements": true
I have a collection with the following documents (for example):
{
"_id": {
"$oid": "61acefe999e03b9324czzzzz"
},
"matchId": {
"$oid": "61a392cc54e3752cc71zzzzz"
},
"logs": [
{
"actionType": "CREATE",
"data": {
"talent": {
"talentId": "qq",
"talentVersion": "2.10",
"firstName": "Joelle",
"lastName": "Doe",
"socialLinks": [
{
"type": "FACEBOOK",
"url": "https://www.facebook.com"
},
{
"type": "LINKEDIN",
"url": "https://www.linkedin.com"
}
],
"webResults": [
{
"type": "VIDEO",
"date": "2021-11-28T14:31:40.728Z",
"link": "http://placeimg.com/640/480",
"title": "Et necessitatibus",
"platform": "Repellendus"
}
]
},
"createdBy": "DEVELOPER"
}
},
{
"actionType": "UPDATE",
"data": {
"talent": {
"firstName": "Joelle new",
"webResults": [
{
"type": "VIDEO",
"date": "2021-11-28T14:31:40.728Z",
"link": "http://placeimg.com/640/480",
"title": "Et necessitatibus",
"platform": "Repellendus"
}
]
}
}
}
]
},
{
"_id": {
"$oid": "61acefe999e03b9324caaaaa"
},
"matchId": {
"$oid": "61a392cc54e3752cc71zzzzz"
},
"logs": [....]
}
A brief breakdown: I have many objects like this one in the collection. They are a kind of audit log for actions taken on other documents ('Matches'), for example CREATE plus the data, UPDATE plus the data, etc.
As you can see, the logs field of the document is an array of objects, each describing one of these actions.
The data for each action may or may not contain specific fields, which in turn can also be arrays of objects: socialLinks and webResults.
I'm trying to remove sensitive data from all of these documents with specified Match ids.
For each document, I want to go over the logs array field and change the value of specific fields only if they exist: for example, change firstName to *****, and the same for lastName, if they appear. Also, go over the socialLinks array if it exists, and for each element inside it, if a url field exists, change it to ***** as well.
What I've tried so far are many minor variations of this query:
$set: {
'logs.$[].data.talent.socialLinks.$[].url': '*****',
'logs.$[].data.talent.webResults.$[].link': '*****',
'logs.$[].data.talent.webResults.$[].title': '*****',
'logs.$[].data.talent.firstName': '*****',
'logs.$[].data.talent.lastName': '*****',
},
and some experimenting with this kind of aggregation query:
[{
$set: {
'talent.socialLinks.$[el].url': {
$cond: [{ $ne: ['el.url', null] },'*****', undefined],
},
},
}]
resulting in errors like: message: "The path 'logs.0.data.talent.socialLinks' must exist in the document in order to apply array updates.",
But I just can't get it to work... :(
Would love an explanation of how exactly to achieve this kind of set-only-if-exists behaviour.
A working example would also be much appreciated, thanks.
I would suggest using $[<identifier>] (the filtered positional operator) with arrayFilters to update the nested document(s) in the array field.
In arrayFilters, use $exists to check that the field is present, so that only documents matching the condition are updated.
db.collection.update({},
{
$set: {
"logs.$[a].data.talent.socialLinks.$[].url": "*****",
"logs.$[b].data.talent.webResults.$[].link": "*****",
"logs.$[b].data.talent.webResults.$[].title": "*****",
"logs.$[c].data.talent.firstName": "*****",
"logs.$[d].data.talent.lastName": "*****",
}
},
{
arrayFilters: [
{
"a.data.talent.socialLinks": {
$exists: true
}
},
{
"b.data.talent.webResults": {
$exists: true
}
},
{
"c.data.talent.firstName": {
$exists: true
}
},
{
"d.data.talent.lastName": {
$exists: true
}
}
]
})
Sample Mongo Playground
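Because the goal is to scrub only documents with specified Match ids, a sketch of the same update restricted by matchId and applied with updateMany (matchIds here is a hypothetical array of ObjectId values you supply):
db.collection.updateMany(
    // matchIds: hypothetical list of ObjectIds for the Matches to scrub
    { matchId: { $in: matchIds } },
    {
        $set: {
            "logs.$[a].data.talent.socialLinks.$[].url": "*****",
            "logs.$[b].data.talent.webResults.$[].link": "*****",
            "logs.$[b].data.talent.webResults.$[].title": "*****",
            "logs.$[c].data.talent.firstName": "*****",
            "logs.$[d].data.talent.lastName": "*****"
        }
    },
    {
        arrayFilters: [
            { "a.data.talent.socialLinks": { $exists: true } },
            { "b.data.talent.webResults": { $exists: true } },
            { "c.data.talent.firstName": { $exists: true } },
            { "d.data.talent.lastName": { $exists: true } }
        ]
    }
)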
We are having difficulty adding header_text and description_text to a Service Alerts protobuf file. We are attempting to match the example shown on this page:
https://developers.google.com/transit/gtfs-realtime/examples/alerts
Our data starts in the following dictionary:
alerts_dict = {
"header": {
"gtfs_realtime_version": "1",
"timestamp": "1543318671",
"incrementality": "FULL_DATASET"
},
"entity": [{
"497": {
"active_period": [{
"start": 1525320000,
"end": 1546315200
}],
"url": "http://www.capmetro.org/planner",
"effect": 4,
"header_text": "South 183: Airport",
"informed_entity": [{
"route_type": "3",
"route_id": "17",
"trip": "",
"stop_id": "3304"
}, {
"route_type": "3",
"route_id": "350",
"trip": "",
"stop_id": "3304"
}],
"description_text": "Stop closed temporarily",
"cause": 2
},
"460": {
"active_period": [{
"start": 1519876800,
"end": 1546315200
}],
"url": "http://www.capmetro.org/planner",
"effect": 4,
"header_text": "Ave F / Duval Detour",
"informed_entity": [{
"route_type": "3",
"route_id": "7",
"trip": "",
"stop_id": "1167"
}, {
"route_type": "3",
"route_id": "7",
"trip": "",
"stop_id": "1268"
}],
"description_text": "Stop closed temporarily",
"cause": 2
}
}]
}
Our Python code is as follows:
newfeed = gtfs_realtime_pb2.FeedMessage()
newfeedheader = newfeed.header
newfeedheader.gtfs_realtime_version = '2.0'
for alert_id, alert_dict in alerts_dict["entity"][0].iteritems():
    print(alert_id)
    print(alert_dict)
    newentity = newfeed.entity.add()
    newalert = newentity.alert
    newentity.id = str(alert_id)
    newtimerange = newalert.active_period.add()
    newtimerange.end = alert_dict['active_period'][0]['end']
    newtimerange.start = alert_dict['active_period'][0]['start']
    for informed in alert_dict['informed_entity']:
        newentityselector = newalert.informed_entity.add()
        newentityselector.route_id = informed['route_id']
        newentityselector.route_type = int(informed['route_type'])
        newentityselector.stop_id = informed['stop_id']
    print(alert_dict['description_text'])
    newdescription = newalert.header_text
    newdescription = alert_dict['description_text']
    newalert.cause = alert_dict['cause']
    newalert.effect = alert_dict['effect']
pb_feed = newfeed.SerializeToString()
with open("servicealerts.pb", 'wb') as fout:
    fout.write(pb_feed)
The frustrating part is that we don't receive any sort of error message. Everything appears to run properly but the resulting pb file doesn't contain the new header_text or description_text items.
We are able to read the pb file using the following code:
feed = gtfs_realtime_pb2.FeedMessage()
response = open("servicealerts.pb", 'rb')  # binary mode; the feed is not text
feed.ParseFromString(response.read())
print(feed)
We truly appreciate any help that anyone can offer in pointing us in the right direction of figuring this out.
I was able to find the answer. This Python Notebook showed that by properly formatting the dictionary, the PB could be generated with a few lines of code.
from google.transit import gtfs_realtime_pb2
from google.protobuf.json_format import ParseDict

newfeed = gtfs_realtime_pb2.FeedMessage()
ParseDict(alerts_dict, newfeed)
pb_feed = newfeed.SerializeToString()
with open("servicealerts.pb", 'wb') as fout:
    fout.write(pb_feed)
All I had to do was format my dictionary properly.
if ALERT_GROUP_ID not in entity_dict.keys():
    entity_dict[ALERT_GROUP_ID] = {
        "id": ALERT_GROUP_ID,
        "alert": {
            "active_period": [{
                "start": int(START_TIME),
                "end": int(END_TIME)
            }],
            "cause": cause_dict.get(CAUSE, ""),
            "effect": effect_dict.get(EFFECT),
            "url": {
                "translation": [{
                    "text": URL,
                    "language": "en"
                }]
            },
            "header_text": {
                "translation": [{
                    "text": HEADER_TEXT,
                    "language": "en"
                }]
            },
            "informed_entity": [{
                'route_id': ROUTE_ID,
                'route_type': ROUTE_TYPE,
                'trip': TRIP,
                'stop_id': STOP_ID
            }],
            "description_text": {
                "translation": [{
                    "text": "Stop closed temporarily",
                    "language": "en"
                }]
            },
        },
    }
    # print(entity_dict[ALERT_GROUP_ID]["alert"]['informed_entity'])
else:
    entity_dict[ALERT_GROUP_ID]["alert"]['informed_entity'].append({
        'route_id': ROUTE_ID,
        'route_type': ROUTE_TYPE,
        'trip': TRIP,
        'stop_id': STOP_ID
    })
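For anyone wondering why the original code failed silently: newdescription = newalert.header_text followed by newdescription = alert_dict['description_text'] only rebinds the local Python name newdescription; the message itself is never touched. header_text is a TranslatedString, so when setting it by hand you have to go through its translation list. A sketch of the direct alternative to ParseDict (assuming English-only text):
# Mutate the message through the field itself instead of rebinding a name.
translation = newalert.header_text.translation.add()
translation.text = alert_dict['header_text']
translation.language = 'en'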
I'm forced to use the aggregation framework and the project operation of Spring Data MongoDB.
What I'd like to do is create an array of objects as the result of a project operation.
Considering this intermediate aggregation result:
{
"processes": [
{
"id": "101a",
"assignees": [
{
"id": "201a",
"username": "carl93"
},
{
"id": "202a",
"username": "susan"
}
]
},
{
"id": "101b",
"assignees": [
{
"id": "201a",
"username": "carl93"
},
{
"id": "202a",
"username": "susan"
}
]
}
]
}
I'm trying to get, for each process, all the assignee usernames and ids. Hence, what I want to obtain is something like this:
[
{
"results": [
{
"id": "201a",
"value": "carl93",
"parentObjectId": "101a"
},
{
"id": "202a",
"value": "susan",
"parentObjectId": "101a"
},
{
"id": "201a",
"value": "carl93",
"parentObjectId": "101b"
},
{
"id": "202a",
"value": "susan",
"parentObjectId": "101b"
}
]
}
]
To reach this goal I'm using two nested VariableOperators.mapItemsOf, obtaining:
org.springframework.data.mapping.MappingException: Cannot convert [Document{{id= 201a, value= carl93, parentObjectId= 101a}}, Document{{id= 202a, value = susan, parentObjectId= 101a}}]
of type class java.util.ArrayList into an instance of class java.lang.Object!
Implement a custom Converter<class java.util.ArrayList, class java.lang.Object> and register it with the CustomConversions.
Here's the code that I'm currently using:
new ProjectionOperation().and(
VariableOperators.mapItemsOf("processes")
.as("pr")
.andApply(
VariableOperators.mapItemsOf("$pr.ownership.assignees")
.as("ass")
.andApply(aggregationOperationContext -> {
Document document = new Document();
document.append("id", "$$ass.id");
document.append("value", "$$ass.username");
document.append("parentObjectId", "$$pr.id");
return document;
})
)
).as("results");
The code produces this:
[
[
{
"id": "201a",
"value": "carl93",
"parentObjectId": "101a"
},
{
"id": "202a",
"value": "susan",
"parentObjectId": "101a"
}
],
[
{
"id": "201a",
"value": "carl93",
"parentObjectId": "101b"
},
{
"id": "202a",
"value": "susan",
"parentObjectId": "101b"
}
]
]
As you can see there are two nested arrays, [[],[]]. This is the reason why the exception is thrown.
Nevertheless, what I want to obtain is just one array containing all the objects (possibly without duplicates or null values). I've tried the addToSet operator and other aggregation operators, without any success.
Use $reduce with $concatArrays to join the arrays.
new ProjectionOperation().and(
ArrayOperators.arrayOf("processes")
.reduce(ArrayOperators.ConcatArrays.arrayOf("$$value").concat(
VariableOperators.mapItemsOf("$$this.ownership.assignees")
.as("ass")
.andApply(aggregationOperationContext -> {
Document document = new Document();
document.append("id", "$$ass.id");
document.append("value", "$$ass.username");
document.append("parentObjectId", "$$this.id");
return document;
})
)).startingWith(Arrays.asList())
).as("results");
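For reference, the projection this builds corresponds roughly to this shell stage (a sketch, using the same paths as the Java code above):
{ $project: {
    results: {
        $reduce: {
            input: "$processes",
            initialValue: [],
            in: {
                $concatArrays: [
                    "$$value",
                    { $map: {
                        input: "$$this.ownership.assignees",
                        as: "ass",
                        in: {
                            id: "$$ass.id",
                            value: "$$ass.username",
                            parentObjectId: "$$this.id"
                        }
                    } }
                ]
            }
        }
    }
} }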
I have a MongoDB collection with this model:
_id: ObjectId("5705005b240166e927f841cb")
chapters: {
type: Array,
default: [
{
"id":"capitulo_0",
"active": true,
"title": "CAPÍTULO 0 - INTRODUCCIÓN",
"sections": [
{
"title": "Institucional",
"type": "Video",
"id": "d74fb24654a2",
"url": "jPTG5P0528k",
"active": true
}
]
},
{
"id":"capitulo_1",
"active": false,
"title": "CAPÍTULO 1 - BIENVENIDA",
"sections": [
{
"title": "Introducción",
"type": "Video",
"url": "j2TG1P05k8k",
"id": "b2454d7f66de",
"active": false
}
]
},
...
]
}
For the query I have the user_id and the id of the section, and I need to update the active field in the sections array.
I'm doing this:
User.findOneAndUpdate({_id: userId, 'chapters.sections':{$elemMatch: {id:sectionId}}}, {$set: {'sections.$.active': false}}).exec(function (err, doc) {console.log(doc)});
The active field does not change.
How can I write this query?
Thanks!
I don't have your data, so run this:
User.findOne({_id: userId, 'chapters.sections':{$elemMatch: {id:sectionId}}})
and see if you get a response and the record is found, because your update looks fine.
UPDATE:
After seeing your data, I think you are missing chapters in your query:
User.findOneAndUpdate({_id: userId, 'chapters.sections':{$elemMatch: {id:sectionId}}}, {$set: {'chapters.sections.$.active': false}}).exec(function (err, doc) {console.log(doc)});
I hope this helps
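One caveat: chapters and sections are nested arrays, and the single positional $ only tracks the first array matched by the query, so the update above may not reach the right section. On MongoDB 3.6+ (Mongoose 5+) the filtered positional operator is the more reliable route; a sketch under that assumption:
User.findOneAndUpdate(
    { _id: userId },
    // $[] walks every chapter; $[s] selects the sections whose id matches
    { $set: { 'chapters.$[].sections.$[s].active': false } },
    { arrayFilters: [ { 's.id': sectionId } ] }
).exec(function (err, doc) { console.log(doc); });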