Firestore subdocument read pricing - google-cloud-firestore

In Firestore, if I read a collection that contains 100 documents, does Firebase count that as 100 read operations or as 1 read operation?
If one of my documents contains another 2 subcollections and each subcollection contains 10 docs, how much will the total read count be in that case?
If it counts the subcollections and their docs separately, then Firestore's pricing is very, very high.

If you read all documents from a collection that contains 100 documents, then you're reading 100 documents. So you'll be charged for 100 document reads.
If you're reading documents from subcollections, then there too: you'll be charged for each document you read.
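The billing arithmetic for the scenario in the question can be sketched in a few lines. The numbers (100 top-level docs, two subcollections of 10 docs) come straight from the question; the key point is that subcollections are never fetched automatically with their parent:

```javascript
// Read-cost arithmetic for the scenario above: Firestore bills one read
// per document actually fetched from the server.
function readsForCollection(docCount) {
  return docCount; // one billed read per document returned
}

// Reading the top-level collection: 100 docs -> 100 reads.
const topLevel = readsForCollection(100);

// Subcollections are NOT fetched along with the parent document.
// Reading both subcollections of one document (10 docs each) costs extra:
const subcollections = readsForCollection(10) + readsForCollection(10);

console.log(topLevel, subcollections, topLevel + subcollections); // 100 20 120
```

So reading everything in the question's example would cost 120 reads, but only if you actually fetch all of it; documents in subcollections you never read cost nothing.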
If you're struggling to find a data model that strikes a balance between a flexible structure and limiting the number of reads you need, I recommend watching the Getting to know Cloud Firestore video series, specifically these episodes:
What is a NoSQL Database? How is Cloud Firestore structured?
Cloud Firestore Pricing
How to Structure Your Data

Related

Firebase Read Operation Calculation

When I use .collection().doc() and specify a document ID, so that only one document is fetched from the collection, does this operation count as one read, or as reading all of the documents in the Firestore database?
StreamBuilder(
  stream: firestore
      .collection('users')
      .doc(auth.currentUser!.uid)
      .snapshots(),
  // ...
)
Additional question: the .where() query reads all the documents in the collection, right? So the total reads is not the number of documents I get as a result of the query, but the total number of documents in the collection? Thank you.
You are only charged for the documents that you read from/on the server. Since you only read one document, it's charged as one read (plus any charges for the bandwidth required to transfer the data).
A query does not read all documents in a collection, but instead uses one or more indexes to determine what documents to read. You are not charged explicitly for those index reads, unless there are no results for a query: in that case you get charged for one document read.
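The billing rule just described reduces to a tiny formula: a query is charged one read per document it returns, with a minimum of one read when it returns nothing (the charge for the empty index lookup). A sketch:

```javascript
// Charged reads for a Firestore query, per the rule above:
// one read per result, minimum one read for an empty result set.
function chargedReads(resultCount) {
  return Math.max(resultCount, 1);
}

console.log(chargedReads(1)); // 1  (single .doc() get)
console.log(chargedReads(0)); // 1  (empty query still costs one read)
console.log(chargedReads(5)); // 5  (a .where() query returning 5 docs)
```

Note that the `.where()` filtering itself happens against indexes, which you are not billed for; only the returned documents count.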

Firestore Document Size Limitations

I have a Google Cloud Firestore project. My database model is like this:
Each store has its own document. The sales and inventory collections have a lot of documents, and their size increases every day.
There is a maximum size limit for documents in Firestore. The document named Store1 has sales and inventory subcollections, and they store every sale and item. Does the Store1 document have a max size limitation? Would the growing size of the sales and inventory collections be a problem? If it would, my data model must be incorrect, and if it's incorrect, how should it be structured?
The document size limit in Firestore (1 MiB) is enforced per individual document, and does not include the size of the documents in subcollections of that document. It is relatively uncommon for folks to hit the document size limit.
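If you want a rough client-side sanity check against that 1 MiB limit, you can estimate a document's size before writing it. This JSON-based estimate is an approximation only; Firestore's real accounting also counts field names and the document path, so treat it as a conservative guide, not the exact billed size:

```javascript
// Rough check against Firestore's 1 MiB (1,048,576-byte) per-document limit.
// Approximation only: Firestore's actual size calculation differs slightly
// (it includes field names and the document's full path).
const FIRESTORE_DOC_LIMIT = 1_048_576;

function roughDocSizeBytes(doc) {
  return Buffer.byteLength(JSON.stringify(doc), 'utf8');
}

function fitsInOneDoc(doc) {
  return roughDocSizeBytes(doc) < FIRESTORE_DOC_LIMIT;
}

console.log(fitsInOneDoc({ store: 'Store1', open: true })); // true
```

Because subcollection documents do not count toward the parent's limit, the Store1 model above only needs each individual sale or inventory document to pass this check.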

Firestore reading quota

I have 1 collection in my Firestore database and there are 2000 test documents (records) in this collection. Firestore gives a free daily quota of 50,000 reads. When I run my JavaScript code to query documents, my read quota decreases more than I expected. If I count all documents using one query, does that mean 2000 read operations or only 1 read operation?
Currently Firestore doesn't have any native support for aggregate queries over documents, such as the sum of some fields or even a count of documents.
So yes, when you count the total number of documents in a collection, you are actually first fetching at least the references for those docs.
So, with 2000 documents in a collection, using a query to count the number of docs means you are actually doing 2000 reads.
To accomplish what you want, you can also take a look at https://stackoverflow.com/a/49407570
The Firebase Spark free plan gives you:
1 GiB total - size of data you can store
10 GiB/month - network egress. Egress in the world of networking implies traffic that exits an entity or a network boundary, while ingress is traffic that enters the boundary of a network; in short, the network bandwidth of the database.
20K/day writes
50K/day reads
20K/day deletes
How much reading 2000 documents costs depends on how you read them: every document fetched from the server counts as one read, so fetching all 2000 to count them costs 2000 reads.
The Firebase console also consumes some reads and writes, which is why your quota decreases more than you expected.
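Back-of-envelope quota math makes the cost of this counting pattern concrete. Using the numbers from the question (2000 docs, 50,000 free daily reads on the Spark plan):

```javascript
// How many times per day you can "count" a collection by fetching all of
// its documents before exhausting the Spark plan's free daily read quota.
const FREE_DAILY_READS = 50_000;

function countsPerDay(docsInCollection) {
  return Math.floor(FREE_DAILY_READS / docsInCollection);
}

console.log(countsPerDay(2000)); // 25
```

Just 25 full counts per day would exhaust the free quota, which is why maintaining a counter document (as the linked answer suggests) is the usual workaround.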

Firebase read calculations for document queries?

Quick question about how firestore reads are calculated. Say I have a collection with 100 items in it, and I do
citiesRef.order(by: "name").limit(to: 3)
This would technically have to look at all 100 items, order them by name, and then return 3. Would this count for 3 reads or would it count for 100 reads, since we're looking at 100 items?
Thanks.
If the above query returns 3 documents then it would count as 3 reads.
You are charged for each document read, write, and delete that you perform with Cloud Firestore.
Charges for writes and deletes are straightforward. For writes, each set or update operation counts as a single write.
Charges for reads have some nuances that you should keep in mind. The following sections explain these nuances in detail:
https://firebase.google.com/docs/firestore/pricing#operations
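The rule for limited queries reduces to a one-liner: you are billed for the documents the query returns, not for every document the index scans while ordering. A sketch:

```javascript
// Billed reads for an ordered, limited query (like the citiesRef example
// above): the query returns at most `limitCount` docs, and you pay only
// for the docs returned, not for the 100 docs the index considered.
function readsForLimitedQuery(totalDocs, limitCount) {
  return Math.min(totalDocs, limitCount);
}

console.log(readsForLimitedQuery(100, 3)); // 3
console.log(readsForLimitedQuery(2, 3));   // 2 (fewer docs than the limit)
```

The ordering work happens in Firestore's indexes, which are maintained on write and are not billed as reads.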

Mapping datasets to NoSQL (MongoDB) collections

What do I have?
I have data for 'n' departments.
Each department has more than 1000 datasets.
Each dataset has more than 10,000 CSV files (each larger than 10 MB), each with a different schema.
This data will grow even more in the future.
What do I want to do?
I want to map this data into MongoDB.
What approaches have I tried?
I can't map each dataset to a document in Mongo, since documents are limited to 16 MB (4 MB in very old versions).
I cannot create a collection for each dataset, as the maximum number of collections is also limited (<24,000).
So finally I thought to create a collection for each department, and in that collection one document for each record in the CSV files belonging to that department.
I want to know from you :
will there be a performance issue if we map each record to document?
is there any max limit for number of documents?
is there any other design i can do?
will there be a performance issue if we map each record to document?
Mapping each record to a document in MongoDB is not a bad design. You can have a look at the FAQ on the MongoDB site:
http://docs.mongodb.org/manual/faq/fundamentals/#do-mongodb-databases-have-tables
It says,
...Instead of tables, a MongoDB database stores its data in collections,
which are the rough equivalent of RDBMS tables. A collection holds one
or more documents, which corresponds to a record or a row in a
relational database table....
Along with the BSON document size limit (16 MB), MongoDB also has a maximum nesting depth of 100 levels for documents:
http://docs.mongodb.org/manual/reference/limits/#BSON Document Size
...Nested Depth for BSON Documents Changed in version 2.2.
MongoDB supports no more than 100 levels of nesting for BSON document...
So it's better to go with one document for each record.
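The one-document-per-record mapping can be sketched in a few lines. The column names and department value here are made up for illustration; real CSV parsing should use a proper parser that handles quoting and escapes:

```javascript
// Minimal sketch of the one-document-per-CSV-record mapping suggested
// above: each CSV row becomes one small document, tagged with its
// department so all rows can share the per-department collection.
function csvToDocuments(csvText, department) {
  const [headerLine, ...rows] = csvText.trim().split('\n');
  const headers = headerLine.split(',');
  return rows.map(row => {
    const doc = { department };
    row.split(',').forEach((value, i) => { doc[headers[i]] = value; });
    return doc;
  });
}

const docs = csvToDocuments('sku,qty\nA1,5\nB2,3', 'sales');
console.log(docs.length); // 2
```

Each resulting document is far below the 16 MB BSON limit, regardless of how large the source CSV file was, which is the point of the design.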
is there any max limit for number of documents?
No. It's mentioned in the MongoDB reference manual:
...Maximum Number of Documents in a Capped Collection. Changed in version 2.4.
If you specify a maximum number of documents for a capped collection using the max parameter to create, the limit must be less than 2^32 documents. If you do not specify a maximum number of documents when creating a capped collection, there is no limit on the number of documents...
is there any other design i can do?
If your document is too large, you can consider partitioning it at the application level. But this imposes higher computational requirements on the application layer.
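One way such application-level partitioning might look, as a sketch: split an oversized record's rows across several part-documents, each kept under a byte budget (16 MB for BSON in practice; the names `recordId` and `part` are made up for this example):

```javascript
// Application-level partitioning sketch: greedily pack rows into
// part-documents, starting a new part whenever the serialized size
// would exceed the budget. Assumes no single row exceeds the budget.
function partitionRecord(id, rows, maxBytes) {
  const parts = [];
  let current = { recordId: id, part: 0, rows: [] };
  for (const row of rows) {
    current.rows.push(row);
    if (Buffer.byteLength(JSON.stringify(current), 'utf8') > maxBytes) {
      current.rows.pop();                 // row didn't fit: close this part
      parts.push(current);
      current = { recordId: id, part: parts.length, rows: [row] };
    }
  }
  parts.push(current);
  return parts;
}
```

Reads then have to re-assemble the parts by `recordId`, which is exactly the extra application-layer work the answer warns about.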
will there be a performance issue if we map each record to document?
That depends entirely on how you search them. When you use a lot of queries that each affect only one document, it is likely even faster that way. When a finer document granularity results in a lot of document-spanning queries, it will get slower, because MongoDB can't perform such cross-document operations itself.
is there any max limit for number of documents?
No.
is there any other design i can do?
Maybe, but that depends on how you want to query your data. When you are content with treating files as BLOBs that are retrieved as a whole but not searched or analyzed at the database level, you could consider storing them in GridFS. It's a way to store files larger than 16 MB in MongoDB.
In General, MongoDB database design doesn't depend so much on what and how much data you have, but rather on how you want to work with it.