What is the maximum depth of a Map field in Firestore? - google-cloud-firestore

I am using a map field to store all the medical aid IDs of a member. I don't want to use a subcollection because its maximum depth is only 100, and a member can have more than 100 medical aids.
Here is an example of my map field:
medicalAids: { id1: true, id2: true, id3: true }
How many IDs can I store in this medicalAids map?
Should I use subcollection instead?

What you're describing isn't called "depth". That's simply the number of entries in a map field. You can have as many map entries as you want, up to the maximum size of a document, which is 1 MiB. The documentation explains how to calculate the size of a document.
Subcollections are not limited in the number of documents they can hold. The maximum subcollection depth you're referring to is about nesting subcollections under subcollections, not about the number of documents. If you have an unbounded number of items to store, you should definitely use documents in a subcollection, not a map field, because a map field will eventually run out of that 1 MiB of space.
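For example, here is a minimal sketch of the subcollection approach with the Firebase web SDK (the members collection, field names, and member ID are just placeholders):

// Each medical aid becomes its own document, so the member document
// never grows toward the 1 MiB limit, no matter how many aids there are.
const memberId = "member123"; // hypothetical member
const memberRef = db.collection("members").doc(memberId);

// Add one medical aid; add() generates a random document ID.
memberRef.collection("medicalAids").add({ aidId: "id1", active: true });

// Read them all back (this can also be paginated with limit()).
memberRef.collection("medicalAids").get().then((snapshot) => {
  snapshot.forEach((doc) => console.log(doc.id, doc.data()));
});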

Related

There is a doc in Firestore that has an ID_number field when created. If I want to create another doc, how do I make its ID equal to the previous ID + 1?

I'm using Flutter and Firestore, and I want to be able to create documents with an assigned ID field inside them, but I don't know how to make the new document's ID field equal to the last document's ID field number + 1.
For example, if a document has correlativeNumber: 86, I want the next document's correlativeNumber to equal 86 + 1 = 87.
The best option that you have is to store the last correlativeNumber into a document. Each time a new document is added, increment that number by 1. In this way, you can always know which number was used previously.
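A minimal sketch of that counter idea with the Firebase web SDK (the counters document path and field names here are hypothetical), wrapped in a transaction so two clients can't grab the same number:

// One document holds the last number that was handed out.
const counterRef = db.collection("counters").doc("correlativeNumber");

db.runTransaction((transaction) => {
  return transaction.get(counterRef).then((snapshot) => {
    // Read the previous value and compute the next one atomically.
    const next = (snapshot.exists ? snapshot.data().last : 0) + 1;
    transaction.set(counterRef, { last: next });

    // Create the new document with a random built-in ID, but a
    // sequential correlativeNumber field inside it.
    const newDocRef = db.collection("documents").doc();
    transaction.set(newDocRef, { correlativeNumber: next });
    return next;
  });
});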
But there is something you should take into consideration. When it comes to document IDs, the official documentation says:
Do not use monotonically increasing document IDs such as:
Customer1, Customer2, Customer3, ...
Product 1, Product 2, Product 3, ...
Such sequential IDs can lead to hotspots that impact latency.
So it's best to use Firestore's built-in identifiers, which are random, effectively unique, and well distributed across the keyspace.

Firestore - If I give Document ID in Alphabetical Order will it create hotspots/inefficiency?

I want to build users collection which will have Documents with IDs as A,B,C,...Z.
Each Document will contain a subcollection which will contain data of all users with starting letter A,B,...Z depending on document ID.
In the Firestore documentation it is mentioned:
Do not use monotonically increasing document IDs such as:
Customer1, Customer2, Customer3, ...
Product 1, Product 2, Product 3, ...
Such sequential IDs can lead to hotspots that impact latency.
So does using single letters as document IDs create the same issue and cause hotspots or other inefficiencies?
Thanks
Yes, you could observe the same effect. It's best to use fully randomized IDs, like the ones you get when you use add() to create a new document.
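For instance, with the Firebase web SDK (the collection name and fields are just placeholders):

// add() creates the document under an auto-generated, fully random ID,
// so consecutive writes are spread across the keyspace instead of
// piling up in one lexicographic range.
db.collection("users").add({ name: "Ada", startsWith: "A" }).then((docRef) => {
  console.log("Created with random ID:", docRef.id);
});

If you still need to look users up by starting letter, store the letter as a field (as in the sketch) and filter on it with where(), rather than encoding it in the document ID.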

Firestore - how to do a "reverse" array-contains?

I have a collection users in firestore where one of the fields is "contact_person".
Then I have an array arrNames = ['Jim', 'Danny', 'Rachel'] in my frontend, and I want to get all users that have any of these names in their "contact_person" field.
Something like where("contact_person" IN arrNames)
How could I do that?
This is currently not possible within the Firestore API. You will need to do a separate get for each document, unless the names happen to be in a single contiguous range.
Also see:
FireStore Where In Query
Google Firestore - how to get document by multiple ids in one round trip?
It has been possible since November 2019 (https://firebase.googleblog.com/2019/11/cloud-firestore-now-supports-in-queries.html), but keep in mind that the list of values can't contain more than 10 items.
db.collection("projects").where("status", "in", ["public", "unlisted", "secret"]);

How would I fetch random pairs from mongodb

So I have an interesting use case that I'm stuck trying to find an efficient Mongo query for.
To begin, I have 12,000 categories with 100,000 posts. I need to randomly select 100 pairs of posts from random categories. The pairs are drawn from random categories, but both posts in a pair must belong to the same category.
Users look at each pair to rate and once they finish looking at the 100, they fetch another 100 random posts (preferably not any of the same pairs they've already seen).
So the requirements are:
Fetch 100 pairs of posts randomly from a random set of categories
Optional requirements:
Not to return the same pairs they've already rated
Mongo collections:
Users
Categories
Posts (each post has a CategoryId field and an embedded Ratings array)
How would I do this in Mongo... should I move some of this data off of mongo to another db if it's easier?
Yes. Very interesting question. My suggestion is to put a randomVal field on your post documents. Then you can sort on {CategoryId: 1, randomVal: 1}. The result will be a cursor that groups all the posts by CategoryId but randomly within that grouping. If you conceptually think of this as an array, you can pick all the even indexed posts, and pair them with an odd neighbor to get unique random pairs within categories.
I think that how to select the random pairs from this list will take some experimentation, but my gut instinct is that the best approach would be to have a separate process that periodically caches a collection of pairs which are sorted by a separate randomVal2. The user facing queries would just increment through this pairs collection 100 at a time.
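A rough sketch of that approach in the mongo shell (randomVal is a field you add yourself; the pairing loop just walks the sorted cursor):

// Backfill: give every post a random value (or set it at insert time).
db.posts.find().forEach(function (p) {
  db.posts.update({ _id: p._id }, { $set: { randomVal: Math.random() } });
});

// Index so the grouped-but-random ordering comes straight off the index.
db.posts.ensureIndex({ CategoryId: 1, randomVal: 1 });

// Posts arrive grouped by category, shuffled within each category;
// pair neighboring posts whenever they share a category.
var prev = null;
db.posts.find().sort({ CategoryId: 1, randomVal: 1 }).forEach(function (post) {
  if (prev && prev.CategoryId === post.CategoryId) {
    printjson([prev._id, post._id]); // one random pair from this category
    prev = null;                     // consume both members of the pair
  } else {
    prev = post;
  }
});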
I think you can achieve this in two queries. First, run a map-reduce operation on the Posts collection: in the map phase, use the category id as the key and emit post ids to the reducer.
In the reduce phase, choose two random ids from each category. At the end of the map-reduce you will have a list of post id pairs; then retrieve those posts from the Posts collection.
Add a ratedBy field to the Post document, and when a user rates a post, add their userName to ratedBy. Then use that field as a filter in the map-reduce command so you don't bring already-rated documents back to the user.
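A hedged sketch of that map-reduce in the mongo shell (field names follow the question; mapReduce is deprecated in recent MongoDB releases, and the random pick here happens in a finalize step):

var userName = "someUser"; // hypothetical current user

var mapFn = function () {
  // Key: the category; value: a wrapper holding this post's id.
  emit(this.CategoryId, { ids: [this._id] });
};

var reduceFn = function (key, values) {
  // Merge all the id arrays emitted for one category.
  var all = [];
  values.forEach(function (v) { all = all.concat(v.ids); });
  return { ids: all };
};

var finalizeFn = function (key, reduced) {
  // Pick two distinct random ids per category, if it has at least two.
  var ids = reduced.ids;
  if (ids.length < 2) return null;
  var i = Math.floor(Math.random() * ids.length);
  var j = Math.floor(Math.random() * (ids.length - 1));
  if (j >= i) j++;
  return { pair: [ids[i], ids[j]] };
};

db.posts.mapReduce(mapFn, reduceFn, {
  finalize: finalizeFn,
  out: { inline: 1 },
  query: { ratedBy: { $ne: userName } } // skip posts this user already rated
});

A second query with { _id: { $in: [...] } } then fetches the paired posts themselves.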
Good luck

MongoDB vs Columnar

Is MongoDB a good fit when several different combinations of columns are used for querying, so that creating indexes on all of the columns is not feasible? How does MongoDB perform when, say, you have no index on a column and you have millions of entries for that column?
If you have no index, a table scan is performed, as with any database system.
If the documents are in memory this will still be relatively fast, but it will take time proportional to the number of documents in the collection, since the database must look at each one: O(n).
Is the problem that you have a small set of varying keys per document, or a large number of keys that every document must have?
Column-oriented datastores must store a large number of columns to model varying attributes, but MongoDB is more flexible because of its document data model.
If your documents have a small number of varying attributes (out of a large set of possible attributes), this is indexable and lookups will be O(log n).
Your documents would look like this:
{
  "name": "some name",
  "attrs": [
    {"n": "subject", "v": "the subject"},
    {"n": "description", "v": "Some amazing description"},
    {"n": "comments", "v": "Comments on this thing"}
  ]
}
and be indexable like this:
db.mycollection.ensureIndex({"attrs.n":1, "attrs.v":1})
and be queryable like this:
db.mycollection.find({attrs: {$elemMatch: {n: "subject", v: "the subject"}}})