Currently, I have 3 schemas that I want to join. The first schema is the user schema.
{
"_id": {
"$oid": "6147f87ac51f060e8c1bc8c7"
},
"userID": "344431410360090625",
"currency": 72590,
"level": 10,
"exp": 78.5,
"sp": 338,
"location": {
"area": 2,
"floor": 3
},
"inv": {
"Rag Hood#363": {
"emote": "",
"description": "",
"rarity": "Common",
"type": "equipment",
"image": "",
"equipmentType": "helmet",
"level": 20,
"ascension": 0,
"exp": 0,
"quantity": 0,
"expToLevelUp": 0,
"equipped": false
},
"Jericho Jehammad": {
"emote": "<:Jericho:823551572029603840>",
"description": "Enhance your weapons with this mysterious item",
"rarity": "Common",
"type": "special",
"image": "",
"quantity": 7147,
"listed": 6964
}
},
"__v": 0
}
I want to be able to use the object names "Rag Hood#363" and "Jericho Jehammad" as the join keys. First, the equipment schema is shown below.
{
"_id": {
"$oid": "61474cb047a1b66f2cb1b6d8"
},
"itemName": "Rag Hood",
"stats": {
"defense": {
"flat": 2,
"multi": 0
}
},
"equipmentType": "helmet",
"ascensionRequirements": [],
"statsUpPerLvl": {
"defense": 0.5
}}
Next, the items schema is used to join "Jericho Jehammad". Equipment in our database is named with a #, followed by the item number, while other items are identified by just the itemName.
{
"_id": {
"$oid": "60c5c5e6d2d78c794d33b7ae"
},
"itemName": "Jericho Jehammad",
"emote": "<:Jericho:823551572029603840>",
"description": "",
"rarity": "Common",
"type": "special",
"image": ""}
I want to return an object in which each value is overridden by the corresponding value from the equipment or items schema when one is present there; if not, the value from the user schema is kept.
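For reference, this kind of keyed-inventory merge can be sketched with the aggregation framework. This is only a sketch: the collection names users, equipment and items are assumptions (they are not given above), and it assumes MongoDB 4.2+ for $set/$unset. It splits each inv key on "#" to recover the itemName, looks that name up in both collections, and merges any match over the inventory value:
db.users.aggregate([
  // turn the keyed "inv" object into an array of { k, v } pairs
  { $set: { invArray: { $objectToArray: "$inv" } } },
  // derive the name to join on: everything before an optional "#<number>" suffix
  { $set: { invArray: { $map: {
      input: "$invArray",
      as: "e",
      in: {
        k: "$$e.k",
        v: "$$e.v",
        lookupName: { $arrayElemAt: [ { $split: [ "$$e.k", "#" ] }, 0 ] }
      }
  } } } },
  // pull in any matching equipment and item documents
  { $lookup: { from: "equipment", localField: "invArray.lookupName",
               foreignField: "itemName", as: "equipmentDocs" } },
  { $lookup: { from: "items", localField: "invArray.lookupName",
               foreignField: "itemName", as: "itemDocs" } },
  // rebuild "inv": fields from a matched equipment/item doc override the inventory copy
  { $set: { inv: { $arrayToObject: { $map: {
      input: "$invArray",
      as: "e",
      in: { k: "$$e.k", v: { $mergeObjects: [
        "$$e.v",
        { $arrayElemAt: [ { $filter: {
            input: { $concatArrays: [ "$equipmentDocs", "$itemDocs" ] },
            as: "d",
            cond: { $eq: [ "$$d.itemName", "$$e.lookupName" ] }
        } }, 0 ] }
      ] } }
  } } } } },
  { $unset: [ "invArray", "equipmentDocs", "itemDocs" ] }
])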
I am looking for how to use $graphLookup and its match condition to return all nodes until a condition is met, including the match that stops the recursion: the first node that satisfies the condition on each separate branch.
This should work for an arbitrary topology.
EDIT: as Asya highlights, it's the first time that the condition is satisfied on each branch, and not on the entire recursion stack.
Example (note: "_id" fields in docs are omitted for brevity):
Data:
[
{
"key": 1,
"parent": null,
"name": "tom"
},
{
"key": 2,
"parent": 1,
"name": "tom"
},
{
"key": 3,
"parent": 2,
"name": "jack"
},
{
"key": 4,
"parent": 3,
"name": "jonny"
},
{
"key": 5,
"parent": 1,
"name": "jack"
},
{
"key": 6,
"parent": 5,
"name": "jack"
}
]
Query:
db.collection.aggregate([
  {
    "$match": {
      "parent": null
    }
  },
  {
    "$graphLookup": {
      "from": "collection",
      "startWith": "$key",
      "connectFromField": "key",
      "connectToField": "parent",
      "as": "children",
      "restrictSearchWithMatch": {
        "name": "jack"
      },
      "matchType": "firstHitAlongEachBranch" // made-up option
    }
  }
])
Desired result:
[
{
"children": [
{
"key": 2,
"name": "tom",
"parent": 1
},
{
"key": 3,
"parent": 2,
"name": "jack"
},
{
"key": 5,
"parent": 1,
"name": "jack"
}
],
"key": 1,
"name": "tom",
"parent": null
}
]
Note that of the three "jack" nodes, only the first one on each branch appears in the result (as expected).
Thank you
Apparently it's not possible to date.
There's an open Jira for it.
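In the meantime, one possible workaround (a sketch only, assuming MongoDB 3.6+ for the $lookup pipeline and 4.2+ for $set/$unset) is to invert the restriction: let $graphLookup traverse only the non-matching nodes, then fetch the "jack" documents that are direct children of the root or of any collected node. Because the first match on each branch is always a direct child of a non-matching ancestor (or of the root), the union gives the desired result:
db.collection.aggregate([
  { $match: { parent: null } },
  { $graphLookup: {
      from: "collection",
      startWith: "$key",
      connectFromField: "key",
      connectToField: "parent",
      as: "children",
      restrictSearchWithMatch: { name: { $ne: "jack" } }  // traverse only non-matching nodes
  } },
  // the first "jack" on each branch is a direct child of the root or of a collected node
  { $lookup: {
      from: "collection",
      let: { keys: { $concatArrays: [ [ "$key" ], "$children.key" ] } },
      pipeline: [
        { $match: { $expr: { $and: [
            { $in: [ "$parent", "$$keys" ] },
            { $eq: [ "$name", "jack" ] }
        ] } } }
      ],
      as: "firstMatches"
  } },
  { $set: { children: { $concatArrays: [ "$children", "$firstMatches" ] } } },
  { $unset: "firstMatches" }
])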
I have the document below. Within exams, I want to update the last status entry whose status is "started" (according to its date): I need to push another number onto its "done" array.
{
"_id": ObjectID("5d2a371cec7eaf00119df614"),
"firstname": "Laura",
"lastname": "Warriner",
"image": "2019-07-14t05:25:23.352z_13.jpg",
"exams": [
{
"examId": "Ba6apmRmz",
"name": "general",
"language": "es",
"version": 2,
"testId": "2000",
"status": [
{
"status": "created",
"date": ISODate("2019-09-08T11:49:42.124Z")
},
{
"status": "started",
"date": ISODate("2019-09-08T11:55:09.873Z"),
"done": [1]
},
{
"status": "started",
"date": ISODate("2019-09-09T12:01:57.886Z"),
"done": [2,3]
},
{
"status": "completed",
"date": ISODate("2019-09-09T12:01:57.886Z")
}
]
}
]
}
Thanks in advance
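One possible approach (a sketch only; the collection name people, the examId filter and the pushed value 4 are assumptions made for illustration) is to compute the index of the latest "started" status in application code and then build the positional update path:
// load the document and locate the exam (collection name "people" is assumed)
const doc = db.people.findOne({ _id: ObjectId("5d2a371cec7eaf00119df614") });
const exam = doc.exams.find(e => e.examId === "Ba6apmRmz");

// find the index of the "started" status entry with the most recent date
let idx = -1, latest = null;
exam.status.forEach((s, i) => {
  if (s.status === "started" && (latest === null || s.date > latest)) {
    latest = s.date;
    idx = i;
  }
});

// push the new number onto that entry's "done" array (4 is just an example value)
db.people.updateOne(
  { _id: doc._id, "exams.examId": "Ba6apmRmz" },
  { $push: { ["exams.$.status." + idx + ".done"]: 4 } }
);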
In the below JSON response, what is the date format for createdDate and updatedDate? I am not sure how to work in reverse to find what format the API is using for dates. I couldn't find this anywhere in the documentation.
{
"size": 1,
"limit": 25,
"isLastPage": true,
"values": [
{
"id": 101,
"version": 1,
"title": "Talking Nerdy",
"description": "It’s a kludge, but put the tuple from the database in the cache.",
"state": "OPEN",
"open": true,
"closed": false,
"createdDate": 1359075920,
"updatedDate": 1359085920,
"fromRef": {
"id": "refs/heads/feature-ABC-123",
"repository": {
"slug": "my-repo",
"name": null,
"project": {
"key": "PRJ"
}
}
},
"toRef": {
"id": "refs/heads/master",
"repository": {
"slug": "my-repo",
"name": null,
"project": {
"key": "PRJ"
}
}
},
"locked": false,
"author": {
"user": {
"name": "tom",
"emailAddress": "tom#example.com",
"id": 115026,
"displayName": "Tom",
"active": true,
"slug": "tom",
"type": "NORMAL"
},
"role": "AUTHOR",
"approved": true
},
"reviewers": [
{
"user": {
"name": "jcitizen",
"emailAddress": "jane#example.com",
"id": 101,
"displayName": "Jane Citizen",
"active": true,
"slug": "jcitizen",
"type": "NORMAL"
},
"role": "REVIEWER",
"approved": true
}
],
"participants": [
{
"user": {
"name": "dick",
"emailAddress": "dick#example.com",
"id": 3083181,
"displayName": "Dick",
"active": true,
"slug": "dick",
"type": "NORMAL"
},
"role": "PARTICIPANT",
"approved": false
},
{
"user": {
"name": "harry",
"emailAddress": "harry#example.com",
"id": 99049120,
"displayName": "Harry",
"active": true,
"slug": "harry",
"type": "NORMAL"
},
"role": "PARTICIPANT",
"approved": true
}
],
"link": {
"url": "http://link/to/pullrequest",
"rel": "self"
},
"links": {
"self": [
{
"href": "http://link/to/pullrequest"
}
]
}
}
],
"start": 0
}
Just making a note that in my case it is a UNIX timestamp, but in milliseconds, so I have to remove the three trailing zeroes. E.g. the data looks like this:
"createdDate": 1555621993000
If interpreted as a UNIX timestamp in seconds, that would be 09/12/51265 @ 4:16am (UTC).
By removing the three trailing zeroes I get 1555621993, which is the correct time, 04/18/2019 @ 9:13pm (UTC).
Your mileage may vary, but that was a key discovery for me :)
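For what it's worth, the two examples above are consistent with that: a 13-digit value is a UNIX timestamp in milliseconds and a 10-digit value is one in seconds. A quick JavaScript sketch (not taken from the Bitbucket docs) converts either:
// 13 digits: milliseconds since the epoch; can be passed to Date directly
console.log(new Date(1555621993000).toISOString());     // 2019-04-18T21:13:13.000Z

// 10 digits: seconds since the epoch; multiply by 1000 first
console.log(new Date(1359075920 * 1000).toISOString()); // 2013-01-25T01:05:20.000Z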
It looks like a UNIX timestamp.
https://en.wikipedia.org/wiki/Unix_time
I'm working on a prototype that will be used for reporting (read only) where the record is a very rich set of objects embedded into a single document. Essentially the document structure is this (edited for brevity):
{
"_id": ObjectId("56b3af6f84ef45c8903acc51"),
"id": "7815dd97-e895-46e5-b6c9-45184c6eae89",
"survey": {
"id": "1fb21c69-6a5c-4805-b1cf-fabef7a5d0e6",
"type": "Survey",
"data": {
"description": "Testing reporting and data ouput",
"id": "1fb21c69-6a5c-4805-b1cf-fabef7a5d0e6",
"start_date": "2016-02-04T11:12:46Z",
"questions": [
{
"sequence": 1,
"modified_at": "2016-02-04T16:11:04.505849+00:00",
"id": "2a77921b-6853-463b-80e7-5713c82c51ca",
"previous_question": null,
"created_at": "2016-02-04T16:10:56.647746+00:00",
"parent_question": "",
"next_question": "",
"validators": [
"required",
"email"
],
"question_data": {
"modified_at": "2016-02-04T16:10:37.542715+00:00",
"type": "open-ended",
"text": "Please provide your email address",
"id": "27aa00db-4a56-4a3e-bc30-226179062af0",
"reporting_name": "email address",
"created_at": "2016-02-04T16:10:37.542695+00:00"
}
},
{
"sequence": 2,
"modified_at": "2016-02-04T16:09:53.539073+00:00",
"id": "c034819d-9281-4943-801f-c53f4047d03e",
"previous_question": null,
"created_at": "2016-02-04T16:09:53.539051+00:00",
"parent_question": "",
"next_question": null,
"validators": [
"alpha-numeric"
],
"question_data": {
"modified_at": "2016-02-04T16:05:31.008363+00:00",
"type": "open-ended",
"text": "Is there anything else that we could have done to improve your experience?",
"id": "e33c7804-20cb-4473-abfa-77b3c2a3113c",
"reporting_name": "more info open-ended",
"created_at": "2016-02-01T20:19:55.036899+00:00"
}
},
{
"sequence": 1,
"modified_at": "2016-02-04T16:08:55.681461+00:00",
"id": "f91fd70e-f204-4c38-9a56-dd6ff25e4cd8",
"previous_question": "",
"created_at": "2016-02-04T16:08:55.681441+00:00",
"parent_question": "",
"next_question": null,
"validators": [
"required"
],
"question_data": {
"modified_at": "2016-02-04T16:04:56.848528+00:00",
"type": "nps",
"text": "On a scale of 0-10 how likely are you to recommend us to a friend?",
"id": "fdb6b74d-96a3-4680-af35-8b2f6aa2bbc9",
"reporting_name": "key nps",
"created_at": "2016-02-01T20:19:27.371920+00:00"
}
}
],
"name": "Reporting Survey",
"end_date": "2016-02-11T11:12:47Z",
"trigger_active": false,
"created_at": "2016-02-04T16:13:16.808108Z",
"url": "http://www.peoplemetrics.com",
"fatigue_limit": "monthly",
"modified_at": "2016-02-04T16:13:16.808132Z",
"template": {
"id": "0ea02379-c80b-4e17-b0a6-d621d49076b9",
"type": "Template"
},
"landing_page": null,
"trigger": null,
"slug": "test-reporting-survey"
}
},
"invite_code": "7801",
"end_date": null,
"created_at": "2016-02-04T19:38:31.931147Z",
"url": "http://127.0.0.1:8000/api/v0/responses/7815dd97-e895-46e5-b6c9-45184c6eae89",
"answers": {
"data": [
{
"id": "bcc3d0dd-5419-4661-9900-ccda3ac9a308",
"end_datetime": "2016-01-22T19:57:03Z",
"survey_question": {
"id": "662fcdf9-3c92-415e-b779-ac5b0fd330d3",
"type": "SurveyQuestion"
},
"response": {
"id": "7815dd97-e895-46e5-b6c9-45184c6eae89",
"type": "Response"
},
"modified_at": "2016-02-04T19:38:31.972717Z",
"value_type": "number",
"created_at": "2016-02-04T19:38:31.972687Z",
"value": "10",
"slug": "",
"start_datetime": "2016-01-21T10:10:21Z"
},
{
"id": "8696f11e-679a-43da-b6e2-aee72a70ca9b",
"end_datetime": "2016-01-28T13:45:37Z",
"survey_question": {
"id": "f118c9dd-1c03-47e0-80ef-2a36eb3b9a29",
"type": "SurveyQuestion"
},
"response": {
"id": "7815dd97-e895-46e5-b6c9-45184c6eae89",
"type": "Response"
},
"modified_at": "2016-02-04T19:38:32.001970Z",
"value_type": "boolean",
"created_at": "2016-02-04T19:38:32.001939Z",
"value": "True",
"slug": "",
"start_datetime": "2016-02-15T04:51:24Z"
}
]
},
"modified_at": "2016-02-04T19:38:31.931171Z",
"start_date": "2016-02-01T16:14:13Z",
"invite_date": "2016-02-01T13:14:08Z",
"contact": {
"id": "94833455-b9b8-4206-9bc9-a2f96c1706ca",
"type": "Contact",
"external_contactid": null,
"name": "Miss Marceline Herzog PhD"
},
"referring_source": "web"
}
Given a structure in that format, I'm unsure of the best path forward using Mongoose as the ODM. Again, this is read-only, so it would seem that creating a nested schema would work, but the mapping itself seems tedious, to say the least. Is there a better/different option available for something with this much embedding?
Interesting. First, I would think about whether I need the whole document and all of its embedded subdocument fields. You said it will be read-only, so will each call need the entire document?
If not, I recommend taking a look at the MongoDB drivers (Node.js, .NET, Python, etc.) and using their aggregation pipelines to simplify the document where possible.
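For example, a projection like the following (a sketch only; the collection name responses and the chosen fields are assumptions) trims each response down to a flat, report-friendly shape before it ever reaches Mongoose:
// Node.js driver: reduce each response to the fields a report might need
db.collection('responses').aggregate([
  { $project: {
      invite_code: 1,
      referring_source: 1,
      contact_name: '$contact.name',
      survey_name: '$survey.data.name',
      // keep only the numeric answers (e.g. the NPS score)
      numeric_answers: {
        $filter: {
          input: '$answers.data',
          as: 'a',
          cond: { $eq: ['$$a.value_type', 'number'] }
        }
      }
  } }
]).toArray();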
If you're using Mongoose, you will probably end up with two or three schemas, with schemas nested inside arrays (see the Mongoose subdocument docs), e.g.:
var surveySchema = new Schema({
  "type"             : { type: String },   // a path literally named "type" needs this form
  "data"             : [dataSchema],
  "invite_code"      : String,
  "end_date"         : Date,
  "created_at"       : Date,
  "url"              : String,
  "answers"          : { "data": [answersSchema] },
  "modified_at"      : Date,
  "start_date"       : Date,
  "invite_date"      : Date,
  "contact"          : [ContactSchema],
  "referring_source" : String
});
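For completeness, the sub-schema names referenced above (dataSchema, answersSchema, ContactSchema) would need their own definitions. A trimmed, hypothetical sketch based on a few of the sample document's fields, using the same Schema constructor, might look like:
// hypothetical sub-schemas; field lists are trimmed to a handful of keys from the sample
var questionSchema = new Schema({
  "sequence"      : Number,
  "validators"    : [String],
  "question_data" : {
    "type"           : { type: String },   // path literally named "type"
    "text"           : String,
    "reporting_name" : String
  }
});

var dataSchema = new Schema({
  "description" : String,
  "start_date"  : Date,
  "name"        : String,
  "questions"   : [questionSchema]
});

var answersSchema = new Schema({
  "value_type"     : String,
  "value"          : String,
  "start_datetime" : Date,
  "end_datetime"   : Date
});

var ContactSchema = new Schema({
  "type"               : { type: String },
  "external_contactid" : String,
  "name"               : String
});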
Or, you can use Mongoose references and build your own schema depending on what data you need for your report. A simple example:
var surveySchema = {
  "id"          : { type: Schema.Types.ObjectId },
  "description" : { type: String, ref: 'Data' },    // ref expects a model name
  "contact"     : { type: String, ref: 'Contact' }
}