Firestore security rules: check a condition for every value in an array

In Firestore security rules, is there any way to check a condition for every value in an array?
I have a document that has a subcollection. The document has an order field which is an array of IDs of documents in the subcollection; this array defines a custom user-defined order for those documents.
I want a security rule that checks that any values added to the order array correspond to a document in the subcollection (i.e. that the document exists). That is, it needs to check this condition for every value in the array.

What you call an array is known as a List in Firestore security rules, and there are no list-comprehension-style operations on it beyond the literal membership checks hasAll(), hasAny(), and hasOnly().
The problem is that such a looped check would quickly become a performance (and cost) bottleneck, since every exists() call counts as a document read.
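For illustration, a sketch in the rules language of what is and isn't expressible (the parents/items collection names and the literal IDs are hypothetical):

match /databases/{database}/documents {
  match /parents/{parentId} {
    // Literal membership checks on a List do exist...
    allow update: if request.resource.data.order.hasOnly(['id1', 'id2'])
      // ...and a single fixed position can be checked with exists(),
      // but there is no way to repeat this for every element:
      || exists(/databases/$(database)/documents/parents/$(parentId)/items/$(request.resource.data.order[0]));
  }
}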

Related

Firestore index on maps and array - clarification

I'm trying to understand how Firestore creates indexes on fields. Given the following sample document, how are indexes created, especially for the maps/arrays?
I read the documentation at Index types in Cloud Firestore multiple times and I'm still unsure. There it says:
Automatic indexing
By default, Cloud Firestore automatically maintains single-field indexes for each field in a document and each subfield in a map. Cloud Firestore uses the following default settings for single-field indexes:
For each non-array and non-map field, Cloud Firestore defines two collection-scope single-field indexes, one in ascending mode and one in descending mode.
For each map field, Cloud Firestore creates one collection-scope ascending index and one descending index for each non-array and non-map subfield in the map.
For each array field in a document, Cloud Firestore creates and maintains a collection-scope array-contains index.
Single-field indexes with collection group scope are not maintained by default.
If I understand this correctly, then there is an index created for each of these fields, even for the values in the alternate_names array.
So if I want to search for any document where fields.alternate_names contains a value of (for example) "Caofang", then Firestore would use an index for its search.
Is my assumption/understanding correct?
No, your understanding is not correct. fields.alternate_names is an array subfield inside a map field, so it is excluded by the second rule above, which only covers non-array and non-map subfields of a map. You can test your assumption simply by issuing the query: if it fails, the error message will say that it failed due to a missing index.
Firestore will simply not allow queries that are not indexed. The error message from that failure will contain a link to the console that will let you create the index necessary for that query, if such a thing is possible.
If you want to be able to query the contents of fields.alternate_names, consider promoting it to its own top-level field, which will be indexed by default.
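For example, once alternate_names is promoted to a top-level array field, the automatic array-contains index serves the query. A sketch with the Firebase JS SDK (the places collection name is an assumption):

// db is a firebase.firestore() instance; 'places' is a hypothetical collection.
db.collection('places')
  .where('alternate_names', 'array-contains', 'Caofang')
  .get();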

How to create an index exemption on Firestore subdocuments?

We have a database structured as follows:
Collection foo
  Documents
    Collection bar
      Documents with many fields (approaching the 1 MB limit)
Trying to write a document to the bar collection containing 34571 fields, I get (from the Go API):
rpc error: code = InvalidArgument desc = too many builtin index entries for entity
OK, fine, it seems I need to add an exemption:
Large array or map fields
Large array or map fields can approach the limit of 20,000 index entries per document. If you are not querying based on a large array or map field, you should exempt it from indexing.
But how? The console only lets me set a single collection name and a single field path, and slashes aren't accepted.
I tried other combinations, but / isn't accepted in either the Collection ID or the Field path, and using ., while not clearly forbidden, results in a generic error when trying to save the exemption. I'm also not sure if * is allowed.
Index exemptions are based on collection ID and not collection path. In this case, you can enter bar as the collection ID. This also means the exemption applies to all collections with ID bar, regardless of hierarchy.
As for the fields, you can specify only a single field path per exemption. The "*" all-selector is not supported. There is a limit of 200 index exemptions, so you wouldn't be able to exempt all 34571 fields. If possible, I suggest moving your fields into a map; then you could disable indexing on the map field.
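For instance, assuming the 34571 fields were consolidated into a single map field named data, one exemption would then cover all of them. A gcloud sketch (verify the exact command and flags against the current CLI reference):

# Disable indexing for the hypothetical "data" map field in every
# collection whose ID is "bar".
gcloud firestore indexes fields update data \
    --collection-group=bar \
    --disable-indexes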

Can you predefine allowed fields on documents in Meteor/MongoDB?

I searched everywhere for this but can't find anything on the matter. If you don't predefine fields in MongoDB, couldn't a user with insert permission insert a document with any fields he wants via Collection.insert? If I'm thinking correctly here, is there a way to restrict this?
You can restrict which fields may be inserted in these two ways (see the sketch after this list):
Use collection.allow/deny (http://docs.meteor.com/#/full/allow) - the insert callback has a doc parameter containing the exact document the user wants to insert; you can inspect it and deny the insertion if you spot fields that are not allowed.
Use the SimpleSchema (https://github.com/aldeed/meteor-simple-schema) and Collection2 (https://github.com/aldeed/meteor-collection2) packages to define a schema and attach it to your collection - it will prevent the insertion if a document has additional/missing fields (or fields of an unexpected type).
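A minimal sketch of both approaches (the Posts collection and its title/body fields are hypothetical):

Posts = new Mongo.Collection('posts');

// Option 1: allow/deny - accept an insert only if every field in doc
// is on the whitelist.
Posts.allow({
  insert: function (userId, doc) {
    var allowed = ['_id', 'title', 'body'];
    return Object.keys(doc).every(function (key) {
      return allowed.indexOf(key) !== -1;
    });
  }
});

// Option 2: SimpleSchema + Collection2 - attach a schema; documents with
// extra or missing fields (or wrong types) are rejected automatically.
Posts.attachSchema(new SimpleSchema({
  title: { type: String },
  body: { type: String }
}));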
This is my personal preference. The fieldNames parameter in the allow/deny update callback (userId, doc, fieldNames, modifier) only lists the top-level fields affected in doc, so nested fields are very hard to track.
So I don't use collection allow/deny rules at all. Without allow/deny rules, client-side Collection.insert/Collection.update calls are rejected by the server. Instead I use Meteor methods to insert/update documents, so I can decide exactly which fields get written.
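A sketch of that method-based approach (collection and field names are again hypothetical):

// With no allow/deny rules, direct client-side writes are rejected, so
// all updates go through this method, which copies only whitelisted fields.
Meteor.methods({
  'posts.update': function (postId, changes) {
    check(postId, String);
    check(changes, Object);

    var fields = {};
    if (typeof changes.title === 'string') fields.title = changes.title;
    if (typeof changes.body === 'string') fields.body = changes.body;

    return Posts.update(postId, { $set: fields });
  }
});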

MongoDB: findOne with $or condition

A request such as db.collection.findOne({$or: [{email: 'email@example.com'}, {'linkedIn.id': 'profile.id'}]}); may return an array with two records.
Is it possible to specify to return only the first occurrence so that I always have a model as a response, not an array?
E.g. if there is a record with a specified email, return it and do not return another record, if any, matching profile.id?
Another question is whether the order of the parameters 'email' and 'linkedIn.id' matters.
All this hassle is about the LinkedIn strategy, which never returns an email (at least for me), yet I have to cater for the case where it does. So I construct my query depending on whether an email is present, and if it is, the query uses the $or operator. But I would like to avoid checking whether the response is an object or an array and then doing extra work on the array values to figure out which one to use.
According to the MongoDB documentation,
findOne()
always returns a single document, regardless of how many documents match.
Regarding the order of retrieval: it returns the first match in natural order; only in capped collections is natural order guaranteed to be the insertion order.
For a more detailed description of findOne, see the documentation:
https://docs.mongodb.org/manual/reference/method/db.collection.findOne/
According to the MongoDB docs for db.collection.findOne():
Returns one document that satisfies the specified query criteria. If multiple documents satisfy the query, this method returns the first document according to the natural order which reflects the order of documents on the disk. In capped collections, natural order is the same as insertion order. If no document satisfies the query, the method returns null.
You can't receive multiple records from db.collection.findOne(). Are you sure you're not using db.collection.find()?
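If the goal is to prefer the email match explicitly rather than relying on natural order, one option is to skip $or and run two queries in priority order. A sketch (Mongo shell; the users collection name is an assumption):

// findOne returns null when nothing matches, so || falls through
// to the second, lower-priority query.
var user = db.users.findOne({ email: 'email@example.com' }) ||
           db.users.findOne({ 'linkedIn.id': profile.id });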

MongoDB Aggregate Framework - Grouping with Multiple Fields in _id

Before marking this question as a duplicate - please read through. I don't think a sufficiently conclusive and general answer has been given yet, as most questions have focused on specific examples.
The MongoDB documentation says that you can specify an aggregate key for the _id value of a $group operation. There are a number of previously answered questions about using MongoDB's aggregate framework to group over multiple fields in this way, i.e.:
{$group: {_id:{field_a:'$field_a', field_b:'$field_b'} } }
Q: In the most general sense, what does this action do?
If grouping documents by field A condenses any documents sharing the same value of field A into a single document, does grouping by fields A and B condense documents with matching values of both A and B into a single document?
Is the grouping operation sequential?
If so, does that imply any level of precedence between 'field_a' and 'field_b' depending on their ordering?
If grouping documents by field A condenses any documents sharing the same value of field A into a single document, does grouping by fields A and B condense documents with matching values of both A and B into a single document?
Yes. Let the grouping key be the single value { a: A, b: B }; then this follows automatically from your assumption, because grouping on two fields is just grouping on one key whose value happens to be a document. You made no assumption about the type of the key, which is correct: the type doesn't matter. When the key is a document, the usual comparison rules apply (equal content is considered equal).
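For example (Mongo shell; the orders collection is hypothetical), documents sharing the same (field_a, field_b) pair collapse into one group:

db.orders.aggregate([
  { $group: {
      _id: { field_a: '$field_a', field_b: '$field_b' },
      count: { $sum: 1 }  // how many documents were condensed into this group
  } }
]);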
Is the grouping operation sequential?
I'm not sure what that means. The aggregation pipeline runs accumulator functions on all items in each stage, so it certainly iterates the entire set, but I'd refrain from making assumptions about the exact order in which that happens, i.e. from performing any non-associative operations.
If so, does that imply any level of precedence between 'field_a' and 'field_b' depending on their ordering?
No; documents are compared field by field, and MongoDB makes no strict guarantees about the ordering of fields ("attempts to..."). However, one can, in principle, create documents that contain multiple fields of the same name, where the ordering might matter, but that is hard to do in practice, since most client interfaces don't allow duplicate field names.