I'm trying to create a document in the IBM Cloudant service on Bluemix. The size of this document is in excess of 10 MB. Every time I try to create it, I get the error '413 Request Entity Too Large'.
I've created documents of similar size in the past and never had any problems. Please help.
The maximum document size is limited to 1 MB on the Bluemix Public Standard and Lite plans.
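If you want to catch this before the server does, here is a minimal C# sketch (the account URL and database/document names are placeholders) that measures the serialized size and refuses documents over 1 MB before a PUT against Cloudant's CouchDB-style HTTP API:

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    static class CloudantWriter
    {
        const int MaxDocBytes = 1024 * 1024;   // 1 MB plan limit

        public static async Task SaveAsync(HttpClient http, string json)
        {
            // Measure the UTF-8 payload size before sending.
            if (Encoding.UTF8.GetByteCount(json) > MaxDocBytes)
                throw new InvalidOperationException(
                    "Document exceeds 1 MB; split it or move the bulk to object storage.");

            // Cloudant speaks the CouchDB HTTP API: PUT /{db}/{docId}.
            // The URL below is a placeholder.
            var resp = await http.PutAsync(
                "https://<account>.cloudant.com/mydb/mydoc",
                new StringContent(json, Encoding.UTF8, "application/json"));
            resp.EnsureSuccessStatusCode();   // a 413 here means the limit was hit server-side
        }
    }

For payloads genuinely larger than the limit, the usual workarounds are splitting the document or storing the bulk in object storage and keeping a small reference document in Cloudant.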
I want to add a new object to an existing MongoDB document which I don't control, and I don't want to break the vendor's application. I've tried this in test code and it works fine, but I wanted some confirmation.
I'm using a REST API to drive a commercial product, and under the hood the application uses MongoDB to persist data. I can add new, arbitrary fields/objects to the JSON messages and they're persisted into Mongo as expected. Am I right that as long as my naming is different from existing and future vendor fields, the vendor's application should just keep working, ignoring my new data?
Bonus points if there's an article covering this that I can reference.
MongoDB does not have a fixed schema, so documents in the same collection can have different shapes. The WiredTiger storage engine even provides document-level concurrency control. So adding new fields to documents in an existing collection should not matter in most cases. However, if you are going to query on that new field and it's not indexed, the read time will be high.
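As an illustration, a minimal sketch with the C# driver (the database, collection, and field names here are all placeholders): a $set update adds a namespaced field without touching anything the vendor's code reads.

    using System;
    using MongoDB.Bson;
    using MongoDB.Driver;

    var client = new MongoClient("mongodb://localhost:27017");
    var profiles = client.GetDatabase("vendordb")
                         .GetCollection<BsonDocument>("profiles");

    // $set only adds or updates the named field; all vendor fields are untouched.
    var filter = Builders<BsonDocument>.Filter.Eq("externalId", "cust-42");
    var update = Builders<BsonDocument>.Update.Set(
        "acmeExtensions",   // prefixed name, unlikely to collide with vendor fields
        new BsonDocument { { "tag", "priority" }, { "addedAt", DateTime.UtcNow } });
    await profiles.UpdateOneAsync(filter, update);

A distinctive prefix for your fields is the main safeguard; the residual risks are a future vendor release claiming the same field name, or vendor code that validates document shapes strictly.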
We are using .NET Core and Node.js microservices, some of them with MongoDB.
Currently we have the following DB structure:
Every customer gets their own database.
So if we have a microservice for invoices, every new customer adds one new DB for that microservice:
Invoice_customerA
Invoice_customerB
etc...
The collections in each such DB remain the same (usually we have 1-3 collections in each DB).
In terms of logic, we choose the right DB from the request input at runtime.
I am now thinking about changing this a bit, to separate by collection instead:
So if we take the same example from before, this time the invoice service will have only one DB,
Invoice_allCustomers
and there will be one new collection for each customer in it (or more, if the service had more collections):
collection_customerA
collection_customerB
What I am trying to understand is whether there is any difference performance-wise.
Or is it mostly a "cosmetic" change?
Or maybe there are some other considerations?
P.S.
If the change is mostly cosmetic, I think the new solution is better for us, since we usually have only 1-2 collections per microservice,
and it will be easier to navigate when there are significantly fewer databases.
As far as I know, in microservices each service should have its own database. If it is not a different service, then you can use one database with different collections in it. It is more of a cosmetic change, but I should also warn you that MongoDB still has its limits, which you can find here. It really depends on the amount of data that will be stored and retrieved.
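For what it's worth, both layouts resolve to an IMongoCollection at request time, so the calling code barely changes; a sketch with placeholder names:

    using MongoDB.Driver;

    public record Invoice(string Id, decimal Amount);

    public static class InvoiceStore
    {
        // Current layout: one database per customer, fixed collection name.
        public static IMongoCollection<Invoice> PerCustomerDatabase(
            IMongoClient client, string customerId) =>
            client.GetDatabase($"Invoice_{customerId}")
                  .GetCollection<Invoice>("invoices");

        // Proposed layout: one shared database, one collection per customer.
        public static IMongoCollection<Invoice> PerCustomerCollection(
            IMongoClient client, string customerId) =>
            client.GetDatabase("Invoice_allCustomers")
                  .GetCollection<Invoice>($"collection_{customerId}");
    }

Performance-wise the differences are mostly operational: every collection and index carries fixed overhead (WiredTiger keeps a file per collection and per index), so very large numbers of either will eventually hurt, while per-customer backup, restore, and deletion are simpler with separate databases.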
I have a quite large request to save, and it is really necessary to save it. I have read across the web, and according to the documentation the request size limit should be between 2-4 MB, but when I save it I get the error below:
"Mongo Error: Request size is too large"
It contains lots of text, plus images that the user is required to upload, so the document gets really big. How can I save large request data in Cosmos DB?
Based on the official documentation for Cosmos DB limits:
There are no restrictions on the item payloads like number of properties and nesting depth, except for the length restrictions on partition key and id values, and the overall size restriction of 2 MB.
Also, the max request size is 2 MB and the max response size is 4 MB (link).
If your data is over 2 MB, you could follow the strategy in this blog: Cosmos DB document size is limited to 2 MB, and it is not supposed to be used for content storage. For larger payloads, use Azure Blob Storage instead.
Or you could consider using MongoDB Atlas on Azure if you'd like full MongoDB feature support.
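A minimal sketch of that strategy (container, database, and collection names are placeholders; it uses Azure.Storage.Blobs plus the MongoDB driver, since your error comes from the Mongo API):

    using System;
    using System.IO;
    using System.Threading.Tasks;
    using Azure.Storage.Blobs;
    using MongoDB.Bson;
    using MongoDB.Driver;

    static class LargeItemStore
    {
        // Upload the heavy payload to Blob Storage and keep only a small
        // reference document in Cosmos DB, staying under the 2 MB limit.
        public static async Task SaveAsync(
            BlobContainerClient container,
            IMongoCollection<BsonDocument> collection,
            Stream image, string text)
        {
            var blobName = $"uploads/{Guid.NewGuid():N}.jpg";
            await container.UploadBlobAsync(blobName, image);

            var doc = new BsonDocument
            {
                { "text", text },   // keep the document itself small
                { "imageUrl", container.GetBlobClient(blobName).Uri.ToString() }
            };
            await collection.InsertOneAsync(doc);
        }
    }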
I'm trying to fetch reports from Azure Cosmos DB via its MongoDB API,
but I get the error below. Any ideas about "Request rate is large"?
[MongoDB\Driver\Exception\RuntimeException]
Message: {"Errors":["Request rate is large"]}
ActivityId: 3ed9b0b0-0000-0000-0000-000000000000, Request URI: /apps/56e5f1c8-3a07-4d35-974e-aabfdb9d95c3/services/1aead77f-7daf-4dd5-b514-c4694384803c/partitions/a9eb8681-b325-4b62-9601-9d57b325da3a/replicas/131818510503404005p, RequestStats:
RequestStartTime: 2018-10-01T11:56:27.9231945Z, Number of regions attempted: 1
, SDK: Microsoft.Azure.Documents.Common/2.0.0.0
"Request rate is large" is a CosmosDB error that you will get if your actions cost more data than the provisioned throughput can provide you with.
It means that your collection's provisioned RU/s are less than what your query costs to run in one second. You can get around this by increasing the retry count for these errors in the IDocumentClient, by setting the RetryOptions.MaxRetryAttemptsOnThrottledRequests property to something higher. RetryOptions is a property of the ConnectionPolicy object used to initialise the IDocumentClient.
The other way, of course, is to increase the throughput of the collection from the portal or from your C# code.
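For example, with the .NET SDK (the endpoint and key below are placeholders):

    using System;
    using Microsoft.Azure.Documents.Client;

    var policy = new ConnectionPolicy
    {
        RetryOptions = new RetryOptions
        {
            // How many times the SDK retries a throttled (429) request...
            MaxRetryAttemptsOnThrottledRequests = 9,
            // ...and the total time budget for those retries.
            MaxRetryWaitTimeInSeconds = 30
        }
    };

    var client = new DocumentClient(
        new Uri("https://<your-account>.documents.azure.com:443/"),   // placeholder
        "<your-key>",                                                  // placeholder
        policy);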
We implemented SolrCloud with three replicas and three shards. We import data from Mongo and index it into Solr. For adding data to Solr we found two solutions:
Index live data: as a user registers and creates a profile, index it in Solr immediately (see the sketch after this question).
Use a cron job to pull data from Mongo and index it into Solr.
Solution 2 is complex, as we need to maintain failure status and so on.
If we select solution 1, what would be the actual problems in a production environment? Are any benchmarks available?
High availability and performance are our main concerns.
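For illustration, a minimal sketch of option 1 (host and collection names are placeholders), posting the new profile to Solr's JSON update endpoint right after registration and letting commitWithin batch the commits:

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    static class ProfileIndexer
    {
        public static async Task IndexAsync(HttpClient http, string profileJson)
        {
            // commitWithin lets Solr batch commits instead of committing per document.
            var url = "http://solr-host:8983/solr/profiles/update/json/docs"
                    + "?commitWithin=10000";
            var resp = await http.PostAsync(
                url, new StringContent(profileJson, Encoding.UTF8, "application/json"));
            // On failure, queue the profile id for a catch-up/reconciliation pass.
            resp.EnsureSuccessStatusCode();
        }
    }

The main production concern with option 1 is exactly this failure path: a per-document HTTP call adds latency to registration and still needs a retry story, which is why live indexing is often paired with a periodic catch-up job.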