Mongoose deeply nested document add or update (approach) + REST/GQL

Let's assume we have documents where document A has a collection of documents B, and each B contains a collection of documents C.
So it looks more or less like this:
A
  B
    C
All of them have an ID, so I can easily find the A element and add B elements to the A document.
But what about C? How can/should I organize adding/updating the collection of C inside a B element?
To accomplish that, I should find the A element by ID, then the B element by ID, and then add a new C element or update one by its ID, right? That looks like a fairly complicated algorithm...
Is there any smarter way to do this?
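One way to avoid the find-A, find-B, mutate, save dance is MongoDB's filtered positional operator (`arrayFilters`, MongoDB 3.6+, usable through Mongoose's `updateOne`): a single update addressed by all three IDs. A minimal sketch, assuming field names `bs`/`cs` for the nested collections (those names are not from the question):

```javascript
// Sketch: update one C element nested under a B element inside document A
// with a single updateOne, using arrayFilters to pick the right B and C.
// The field names "bs" and "cs" are assumptions about the schema.
function buildUpdateC(aId, bId, cId, newValue) {
  return {
    filter: { _id: aId },
    update: { $set: { "bs.$[b].cs.$[c].value": newValue } },
    options: { arrayFilters: [{ "b._id": bId }, { "c._id": cId }] },
  };
}

// Usage with Mongoose (hypothetical model A):
//   const q = buildUpdateC(aId, bId, cId, 42);
//   await A.updateOne(q.filter, q.update, q.options);
// Adding a new C instead of updating one would be a $push on "bs.$[b].cs"
// with only the "b" array filter.
```

This keeps the whole operation to one round trip and lets MongoDB do the nested matching instead of your application code.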
BTW:
What about the API then (REST / GQL)?
Normally, I would do something like this: POST api/a/1/b/1/c or PATCH api/a/1/b/1/c/1.
Again, is there an easier / smarter way to do it?
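The nested route shape already carries exactly the three IDs such an update needs, so it maps cleanly onto the data model. A small framework-free sketch of extracting them (the route shape is from the question; in Express this would just be `app.patch("/api/a/:aId/b/:bId/c/:cId", ...)` with the same values in `req.params`):

```javascript
// Sketch: pull aId/bId/cId out of paths like "api/a/1/b/1/c" (collection)
// or "api/a/1/b/1/c/1" (single element); cId is null for the former.
const cRoute = /^api\/a\/([^/]+)\/b\/([^/]+)\/c(?:\/([^/]+))?$/;

function parseCRoute(path) {
  const m = cRoute.exec(path);
  return m && { aId: m[1], bId: m[2], cId: m[3] ?? null };
}
```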

Related

Group the keys of a KeyValueGroupedDataset based on a specific rule in Scala

I'm a beginner and need your help.
I have a KeyValueGroupedDataset[String, some_complex_data].
Let's say the keys are:

key     some_complex_data
A_1_Z   collection of something
B_1_Z   collection of something
A_2_X   collection of something
B_2_X   collection of something
A_3_Y   collection of something
I also have another piece of information, for now in a DataFrame, saying that A and B should be grouped together, but only if the rest of the string matches.
For example, A_1_Z and B_1_Z go together in one single group, and A_2_X with B_2_X, but A_3_Y has no match, so there is no grouping for it. And of course the purpose is to aggregate all the collections of something together:
key            some_complex_data
A_1_Z, B_1_Z   collection of something
A_2_X, B_2_X   collection of something
A_3_Y          collection of something
Is there a way to group and aggregate a KeyValueGroupedDataset further, or what other strategy can be used to do this grouping?
Please let me know if more explanation is needed.
Thank you :)
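One common strategy is to re-key and group again: derive a canonical key (here, drop the A_/B_ prefix so that A_1_Z and B_1_Z both map to 1_Z), then group on the new key and concatenate the collections. In Spark this would be a `groupByKey` with the transformed key followed by `mapGroups`/`reduceGroups` over the flattened data. The re-keying rule itself is sketched below in plain JavaScript for illustration only; it assumes the DataFrame rule reduces to "same suffix", which is a guess based on the example:

```javascript
// Re-key rule: keys like "A_1_Z" and "B_1_Z" share the suffix "1_Z", so
// grouping on the suffix merges them; "A_3_Y" has no partner and keeps
// its own group, as in the desired output table.
function regroup(pairs /* array of [key, values[]] */) {
  const groups = new Map();
  for (const [key, values] of pairs) {
    const suffix = key.slice(key.indexOf("_") + 1); // "A_1_Z" -> "1_Z"
    if (!groups.has(suffix)) groups.set(suffix, { keys: [], values: [] });
    const g = groups.get(suffix);
    g.keys.push(key);
    g.values.push(...values); // aggregate the "collection of something"
  }
  return groups;
}
```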

Find oldest for every joined document

I have two collections, A and B. Each document in collection A corresponds to many documents in collection B. Documents in collection B have two important properties, "AID", the ID of the corresponding collection A document, and "date", something I want to sort by.
I now have a find query on collection A, which returns many documents in that collection. For each of the returned documents, I now want to find the oldest corresponding document in collection B.
So far, I used a for-each on my find query in collection A, and then used a find({"AID": doc._id}).sort({"date": 1}).limit(1) on collection B.
This is obviously very inefficient, since I traverse my enormous collection B once for every document in collection A. Can I somehow reduce this to just two queries, perhaps using an aggregation pipeline? How can I traverse collection B only once?
Thanks!
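A single aggregation over B can produce the oldest document per AID, which you can then match against your A results (or pull in via `$lookup` from A on MongoDB 3.6+). A sketch with the pipeline as plain data plus an in-memory equivalent of the same one-pass idea (field names `AID`/`date` are from the question):

```javascript
// Pipeline sketch: sort by date ascending, then keep the first (i.e.
// oldest) document per AID. Run as db.B.aggregate(oldestPerAIDPipeline).
const oldestPerAIDPipeline = [
  { $sort: { date: 1 } },
  { $group: { _id: "$AID", oldest: { $first: "$$ROOT" } } },
];

// The same idea in memory: one traversal of B, keeping a running
// minimum date per AID.
function oldestPerAID(bDocs) {
  const out = new Map();
  for (const doc of bDocs) {
    const cur = out.get(doc.AID);
    if (!cur || doc.date < cur.date) out.set(doc.AID, doc);
  }
  return out;
}
```

That replaces the N sorted queries with one sort-and-group pass over B plus your original find on A.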

How can I filter different keywords from one collection into another and put them together?

I used aggregate with $match and $out; however, if I then filter C and also put it into collection B, the problem is that A disappears and only C is left.
So is there a query I can use to filter the "location" collection and put all my keywords (A, C and the others) into collection B?
It seems $out can only keep the results of one keyword.
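The behaviour described is exactly what `$out` does: it replaces the whole target collection on every run. Two ways around it: match all keywords in one pipeline with `$in`, and/or write with `$merge` (MongoDB 4.2+), which adds to the target instead of replacing it. A sketch; the field name `keyword` is an assumption about the "location" schema:

```javascript
// Sketch: one pipeline that matches several keywords at once and merges
// the results into collection B instead of overwriting it.
// Run as db.location.aggregate(keywordsToB).
const keywordsToB = [
  { $match: { keyword: { $in: ["A", "C"] } } }, // all keywords in one pass
  { $merge: { into: "B" } },                    // append/upsert, not replace
];

// The $in condition itself, as a plain predicate:
function matchesKeywords(doc, keywords) {
  return keywords.includes(doc.keyword);
}
```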

Mongo query for number of items in a sub collection

This seems like it should be very simple but I can't get it to work. I want to select all documents A where there are one or more B elements in a sub collection.
Like if a Store document had a collection of Employees. I just want to find Stores with 1 or more Employees in it.
I tried something like:
{"Store.Employees": {$size: {$ne: 0}}}
or
{"Store.Employees": {$size: {$gt: 0}}}
I just can't get it to work.
This isn't supported: $size only matches documents whose array size equals the given value; range searches with it aren't possible.
What people normally do is cache the array length in a separate field in the same document. They then index that field and can run very efficient queries against it.
Of course, this requires a little more work from you (you must not forget to keep that length field current).
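For the specific "one or more elements" case there are query-side options too (the cached-length field above remains the right call for indexed range queries). Two common forms, sketched below; the `$expr` variant needs MongoDB 3.6+:

```javascript
// Match stores whose Employees array is non-empty by asserting that the
// first element exists -- this works on old MongoDB versions as well.
const nonEmpty = { "Store.Employees.0": { $exists: true } };

// Or compare the array size directly with $expr (MongoDB 3.6+).
const nonEmptyExpr = { $expr: { $gt: [{ $size: "$Store.Employees" }, 0] } };

// The condition both queries express, as a plain predicate:
function hasEmployees(doc) {
  const e = doc.Store && doc.Store.Employees;
  return Array.isArray(e) && e.length > 0;
}
```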

How to best filter a MongoDB collection using a predicate

We would like to filter a MongoDB collection using an "overspecified" find() query. For example: collection A, the collection we want to filter, has documents that contain a set of requirements on attributes. Examples are document a, which contains the requirement {req: {age: {min: 20, max: 30}}}, and b, which contains the requirement {req: {gender: "male"}}.
We also have a document, d, from collection D that contains the following attributes: d = {age: 21, gender: "male"}.
In this case, both a and b should be in the set of documents that d is eligible for, as d fulfills the requirements for both.
However, if we include all of d's attributes in a find query, we get (in pseudocode) db.A.find({d.age > req.age.min, d.age < req.age.max, d.gender: req.gender}), which would exclude both a and b from our result: a has no gender requirement and b has no age requirement, so neither matches every condition.
What is the best way to select all the documents in A that d fulfills the requirements for, given that d may contain more attributes than the requirements of a document in A specify, and that the requirements in A and attributes in D are not fixed? We would like to avoid specifying every possible attribute in D in all A.req documents as we would like our requirements to be as flexible as possible.
There is no straightforward way to do this. The only route you can take is adding an existence check for each requirement, which doesn't result in the most elegant queries imaginable. Using your query format:
db.A.find({$and: [{$or: [{"req.age.min": {$exists: false}}, {d.age > req.age.min}]}, ...]})
In other words, you modify your query so it follows "if d's attribute has a requirement in A, check whether it meets that requirement; otherwise ignore it". Frankly, I think looking at a more appropriate schema might be the more elegant route, though.
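The "requirement absent OR satisfied" rule can be spelled out per attribute. A sketch of both the query builder and an in-memory version of the same rule (the attributes `age`/`gender` come from the question; each further attribute in D would need its own clause):

```javascript
// Query sketch: for each of d's attributes, either A has no requirement
// on it, or the requirement is met. Built for d = {age: 21, gender: "male"}.
function eligibilityQuery(d) {
  return {
    $and: [
      { $or: [{ "req.age": { $exists: false } },
              { "req.age.min": { $lte: d.age }, "req.age.max": { $gte: d.age } }] },
      { $or: [{ "req.gender": { $exists: false } },
              { "req.gender": d.gender }] },
    ],
  };
}

// The same rule in memory: every requirement that is present must be
// satisfied; absent requirements are ignored.
function fulfills(d, req) {
  return Object.entries(req).every(([attr, r]) =>
    r !== null && typeof r === "object"
      ? (r.min === undefined || d[attr] >= r.min) &&
        (r.max === undefined || d[attr] <= r.max)
      : d[attr] === r);
}
```

With d = {age: 21, gender: "male"}, both a ({age: {min: 20, max: 30}}) and b ({gender: "male"}) satisfy the rule, matching the expected result in the question.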