I'm trying to fetch reports from a MongoDB-API Azure Cosmos DB account,
but I get the error below. Any ideas about "Request rate is large"?
[MongoDB\Driver\Exception\RuntimeException]
Message: {"Errors":["Request rate is large"]}
ActivityId: 3ed9b0b0-0000-0000-0000-000000000000, Request URI: /apps/56e5f1c8-3a07-4d35-974e-aabfdb9d95c3/services/1aead77f-7daf-4dd5-b514-c4694384803c/partitions/a9eb8681-b325-4b62-9601-9d57b325da3a/replicas/131818510503404005p, RequestStats:
RequestStartTime: 2018-10-01T11:56:27.9231945Z, Number of regions attempted: 1
, SDK: Microsoft.Azure.Documents.Common/2.0.0.0
"Request rate is large" is a CosmosDB error that you will get if your actions cost more data than the provisioned throughput can provide you with.
It means that your collection's provisioned RU/s are less that what your query costs to run in one second. You can get around this by increasing the retry count for these errors in the IDocumentClient by setting the RetryOptions MaxRetryAttemptsOnThrottledRequests property to something higher. This is an object of the ConnectionPolicy object used to initialise the IDocumentClient.
The other way of course is to increate the throughput of the collection from the portal or your C# code.
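The property names above come from the .NET SDK; a roughly equivalent sketch with the Azure DocumentDB Java SDK (the endpoint, key, and retry values here are illustrative, not taken from the question) looks like this:

```java
import com.microsoft.azure.documentdb.ConnectionPolicy;
import com.microsoft.azure.documentdb.ConsistencyLevel;
import com.microsoft.azure.documentdb.DocumentClient;
import com.microsoft.azure.documentdb.RetryOptions;

public class ThrottleTolerantClient {
    public static DocumentClient create(String endpoint, String masterKey) {
        // Retry throttled (429 "Request rate is large") requests more times,
        // and wait longer overall, before the error is surfaced to the caller.
        RetryOptions retryOptions = new RetryOptions();
        retryOptions.setMaxRetryAttemptsOnThrottledRequests(9);
        retryOptions.setMaxRetryWaitTimeInSeconds(30);

        ConnectionPolicy policy = new ConnectionPolicy();
        policy.setRetryOptions(retryOptions);

        return new DocumentClient(endpoint, masterKey, policy, ConsistencyLevel.Session);
    }
}
```

The retries only smooth out short bursts; if the workload is consistently above the provisioned RU/s, raising the throughput is the real fix.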
I'm using a MongoDB Atlas paid-tier replica set (Primary-Secondary-Secondary).
Every hour I create a new collection and insert about 1.5 to 2 million documents.
When I check the Atlas cluster metrics after each insert, the primary is unchanged, but Query Targeting on the secondaries rises rapidly.
As a result, the Atlas alert fires every hour; this is very noisy and interferes with alerts for genuinely dangerous COLLSCAN operations.
The alert is triggered because my application reads with readPreference=secondary, so it is difficult to simply disable it.
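For context, a readPreference=secondary setup looks roughly like this (a minimal Java-driver sketch; the cluster address is a placeholder and my actual application code differs):

```java
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.ReadPreference;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class SecondaryReads {
    public static MongoClient connect() {
        // All reads are routed to the secondaries, which is why Query Targeting
        // moves on the secondaries rather than on the primary.
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString("mongodb+srv://cluster0.example.mongodb.net")) // placeholder
                .readPreference(ReadPreference.secondary())
                .build();
        return MongoClients.create(settings);
    }
}
```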
I'd like an opinion on why this can happen.
Below is the Atlas metrics page that I checked.
Does Azure DevOps Server 2020 (on-prem) have any limits on the number of OData queries and/or the amount of data returned by OData queries (for the Analytics feature)? I found this documentation, https://learn.microsoft.com/en-us/azure/devops/integrate/concepts/rate-limits?view=azure-devops-2020, but it implicitly refers to Azure DevOps Services by referencing information such as Usage views/settings that are not available on-prem; so I don't believe it to be accurate for AZD on-prem.
The documentation does not mention any limit on the number of OData queries. As for the amount of data returned by OData queries, the document says the following:
Analytics forces paging when query results exceed 10000 records. In that case, you will get the first page of data and a link to follow to get the next page. The link (@odata.nextLink) can be found at the end of the JSON output. It will look like the original query followed by $skip or $skiptoken.
See here for more information.
So I don't think there are any limits on the number of OData queries or the amount of data returned by OData queries.
The document describes rate limits that delay requests from individual users when their usage exceeds the threshold consumption of a resource within a (sliding) five-minute window. So the rate limits do not affect the number of OData queries or the amount of data returned; they only affect how frequently the OData queries call Azure DevOps Services.
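For illustration, here is a minimal sketch of following the paging link described above. The endpoint URL, project, selected fields, and credentials are placeholders, not values from the question:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AnalyticsPaging {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        // Hypothetical on-prem Analytics endpoint and query; replace with your own.
        String next = "https://devops.example.local/DefaultCollection/MyProject/_odata/v3.0-preview/WorkItems?$select=WorkItemId,Title";

        while (next != null) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(next))
                    .header("Accept", "application/json")
                    .header("Authorization", "Basic <base64-encoded PAT>") // placeholder credentials
                    .build();
            String body = http.send(request, HttpResponse.BodyHandlers.ofString()).body();

            // ... process the current page of results here ...

            // Naive extraction of @odata.nextLink; a real client would use a JSON library.
            int key = body.indexOf("\"@odata.nextLink\"");
            if (key < 0) {
                next = null; // no more pages
            } else {
                int start = body.indexOf('"', body.indexOf(':', key) + 1) + 1;
                next = body.substring(start, body.indexOf('"', start));
            }
        }
    }
}
```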
I have 1 collection in my Firestore database and there are 2,000 test documents (records) in this collection. Firestore gives a free quota of 50,000 document reads per day. When I run my JavaScript code to query documents, my read quota decreases more than I expected. If I count all documents using one query, does that mean 2,000 read operations or only 1 read operation?
Currently Firestore doesn't have any native support for aggregate queries over documents, such as summing a field or even counting documents.
So yes, when you count the total number of documents in the collection, you are actually first fetching at least the references for those docs.
So, with 2,000 documents in a collection, using a query to count the number of docs in that collection actually costs 2,000 reads.
To accomplish what you want, you can also take a look at this answer: https://stackoverflow.com/a/49407570
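To make the cost concrete, here is a minimal sketch using the Firestore server SDK for Java (the question uses JavaScript; Java is used here only for illustration, and the collection name "test" is a placeholder):

```java
import com.google.cloud.firestore.Firestore;
import com.google.cloud.firestore.FirestoreOptions;
import com.google.cloud.firestore.QuerySnapshot;

public class CountDocuments {
    public static void main(String[] args) throws Exception {
        Firestore db = FirestoreOptions.getDefaultInstance().getService();

        // Fetching the whole collection to count it bills one read per returned document,
        // so a 2,000-document collection costs roughly 2,000 reads.
        QuerySnapshot snapshot = db.collection("test").get().get();
        System.out.println("Documents: " + snapshot.size());
    }
}
```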
The Firebase Spark (free) plan gives you:
1 GiB total - size of data you can store
10 GiB/month - network egress. Egress in networking means traffic that exits an entity or a network boundary, while ingress is traffic that enters that boundary; in short, this is the network bandwidth used by the database.
20K/day writes
50K/day reads
20K/day deletes
Whether reading 2,000 documents costs 2,000 reads depends on how you read them: a single document read causes one read, and if you read multiple documents with one call, the count depends on how many documents that call returns. So the answer depends on how you perform your reads.
The Firebase Console itself also consumes some reads and writes, which is why your quota decreases more than you expected.
I'm trying to create a document in the IBM Cloudant service on Bluemix. The size of this document is in excess of 10 MB. Now every time I try to create this document it throws an error saying '413 Request Entity Too Large'.
I've tried creating documents of similar size in the past but never had any problems. Please help.
The maximum document size is limited to 1 MB on the Bluemix Public Standard and Lite plans.
I have deployed MongoDB 2.x (64-bit) on an AWS m1.large instance.
I am trying to find the best performance that Mongo can give us on AWS, in light of http://www.snailinaturtleneck.com/blog/tag/mongodb/ (and mongodb read/write performance and mongo hosting in the cloud).
I created one db with one collection, i.e. user, and inserted 100,000 records/JSON objects (each JSON object is about 4 KB), using a random number as a suffix to "user-". I also created an index on the user id.
Further, I set the DB profiler to log slow queries taking 20 ms or more. I then ran a Java program with 10 threads. Each thread generates a user id with a random number and looks it up in the user collection in an infinite loop. Under this load I observed query/read latency of up to 60 ms.
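For reference, the reader threads look roughly like this (a simplified sketch with the current Java driver; the connection string and the userId field name are assumptions, since the original program isn't shown):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;

import java.util.Random;

public class ReadLoadTest {
    public static void main(String[] args) {
        MongoClient client = MongoClients.create("mongodb://localhost:27017"); // placeholder URI
        MongoCollection<Document> users = client.getDatabase("test").getCollection("user");

        for (int t = 0; t < 10; t++) {           // 10 reader threads, as in the test
            new Thread(() -> {
                Random random = new Random();
                while (true) {                   // infinite find loop
                    // Look up a random user by the indexed user id, e.g. "user-57213".
                    String userId = "user-" + random.nextInt(100_000);
                    Document doc = users.find(Filters.eq("userId", userId)).first();
                    if (doc == null) {
                        // generated id not present; ignore and keep querying
                    }
                }
            }).start();
        }
    }
}
```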
I also observed that when I run a smaller number of threads, say 3 or 4 (putting a query load of about 5K finds per second on the user collection), I see no latency, or latency below 2 ms.
I fail to understand why increasing the load of finding users in the collection causes latency. I believe MongoDB can handle far more concurrent reads than what I am attempting, without such an impact on performance.
One possibility I can think of is that Mongo has performance issues when a large number of queries hit a single collection, as in our case; I expect 10K to 20K queries per second on a single collection.
We would appreciate your thoughts / suggestion.
Some information is missing - what is your disk configuration? EBS may contribute to the latency if everything is persisted to disk.
Amazon has released a whitepaper with best practices on how to run Mongo on EC2: MongoDB on AWS. Here's its description:
This whitepaper provides an overview of general best practices that apply to all major NoSQL systems and highlights one of popular NoSQL systems - MongoDB - and discusses how to best run it on the AWS cloud. It further examines different MongoDB configurations so you can optimize it for performance, durability, and security.