Mongo Atlas free tier limit MongoCommandException

I'm using the MongoDB Atlas free tier.
I thought the free tier size limit was 512 MB, but data keeps getting inserted beyond that. Can someone explain how this happens?
Sometimes I receive the response below, and then I can't insert any more data.
But my free tier cluster holds over 1 GB of data, so it seems that as long as that response doesn't appear, I can keep inserting.
This makes the size limit confusing to me.
What is the real limit, and how can I insert more than 512 MB?
com.mongodb.MongoCommandException: Command failed with error 8000 (AtlasError): 'you are over your space quota, using 513 MB of 512 MB' on server XXX. The full response is {
"ok": 0,
"errmsg": "you are over your space quota, using 513 MB of 512 MB",
"code": 8000,
"codeName": "AtlasError"
}
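
For context: Atlas seems to enforce the free-tier quota against the cluster's logical data size rather than blocking each insert at exactly 512 MB, which is why usage can drift past the limit before writes start failing. A minimal Java sketch for monitoring your own usage with the dbStats command (the connection string and database name below are placeholders):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class QuotaCheck {
    public static void main(String[] args) {
        // Placeholder URI; use your own Atlas connection string.
        try (MongoClient client = MongoClients.create("mongodb+srv://user:pass@cluster0.example.mongodb.net")) {
            MongoDatabase db = client.getDatabase("mydb"); // hypothetical database name
            // dbStats reports sizes in bytes; the quota appears to track
            // logical data size rather than physical storage allocation.
            Document stats = db.runCommand(new Document("dbStats", 1));
            System.out.println("dataSize:  " + stats.get("dataSize"));
            System.out.println("indexSize: " + stats.get("indexSize"));
        }
    }
}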

Related

Slow query fetch time

I am using GCP Cloud SQL (MySQL 8.0.18) and I am trying to execute a query that fetches only 5,000 rows:
SELECT * FROM client_1079.c_crmleads ORDER BY LeadID DESC LIMIT 5000;
but the execution takes a long time to fetch the data.
Here are the timing details:
Affected rows: 0 Found rows: 5,000 Warnings: 0 Duration for 1 query: 0.797 sec. (+ 117.609 sec. network)
Instance configuration: vCPU: 8, RAM: 20 GB, SSD: 410 GB.
[screenshot of the GCP Cloud SQL instance]
I am also facing issues with a high table_open_cache and high RAM utilization.
How do I reduce table_open_cache, and how do I increase instance performance?
It looks like the amount of data retrieved is quite large, and the time spent sending that data from the SQL instance to your app is the reason for the observed latency (note the 117 seconds of network time versus 0.8 seconds of query time).
You may want to review your use case and retrieve less information, parallelize the queries, or improve the SQL instance's I/O performance (which is tied to the DB disk size).
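
As a concrete illustration of "retrieve less information": select only the columns the application actually uses instead of SELECT *, and stream rows instead of buffering the whole result set. A JDBC sketch (the host, credentials, and the non-LeadID column names are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class LeadsFetch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://CLOUD_SQL_IP:3306/client_1079"; // placeholder host
        try (Connection conn = DriverManager.getConnection(url, "user", "pass")) {
            // Selecting specific columns cuts the bytes sent over the network,
            // which is where the 117 seconds are being spent.
            String sql = "SELECT LeadID, LeadName, CreatedAt " // hypothetical columns
                       + "FROM c_crmleads ORDER BY LeadID DESC LIMIT 5000";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                // MySQL Connector/J-specific: stream rows one at a time instead
                // of materializing the full result set in client memory.
                ps.setFetchSize(Integer.MIN_VALUE);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        long leadId = rs.getLong("LeadID");
                        // ... application logic ...
                    }
                }
            }
        }
    }
}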

Mongo insert and update queries perform slowly. Why?

I have a server that continuously inserts and updates data in MongoDB collections.
I have 3 collections: handTab, video, handHistory.
When the number of documents in those collections reaches 40,000, 40,000, and 80,000 respectively, the performance of the insert, update, and findAndModify commands degrades.
In MongoDB Compass I can see that these queries are the slowest operations.
The CPU utilization on the production server also goes up to 100%.
This leads to connection timeouts in my game-server API.
The database runs on one machine and the game server on another. Why does the CPU utilization of the database machine reach 100% once the collections grow large, when I am issuing the same number of Mongo operations as before?
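
One common cause of this pattern, though nothing in the question confirms it, is that findAndModify filters on fields that are not indexed: each call then scans the whole collection, so CPU cost grows with the document count even though the operation rate is unchanged. A minimal Java-driver sketch of adding an index (the database name and query field are hypothetical):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class IndexSetup {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://db-host:27017")) { // placeholder
            MongoCollection<Document> handTab =
                    client.getDatabase("game").getCollection("handTab"); // db name assumed
            // An index on the field used in the findAndModify filter turns a
            // full collection scan into an index lookup.
            handTab.createIndex(Indexes.ascending("tableId")); // hypothetical query field
        }
    }
}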

What happens if the chunk size goes beyond the limit (64 MB) for a single shard key in MongoDB?

We have a sharded MongoDB cluster whose shard key is sellerId. We have nearly 20k sellers, and we capture responses for sellers. Some sellers may have a huge response set. Now let's say sellerId 10001 has some very good listings and gets millions of responses; in that case the single shard key value 10001 has huge data and goes beyond the default 64 MB chunk size. As per the MongoDB documentation, all documents with the same shard key value must reside in a single chunk. What will happen with this chunk? Does the chunk size automatically increase?
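
For illustration: MongoDB cannot split a chunk whose documents all share one shard key value, so the usual remedy is to shard on a compound key with more cardinality. A sketch using the Java driver's runCommand against a mongos (the namespace and second key field are hypothetical):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

public class ShardSetup {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://mongos-host:27017")) { // placeholder
            // With {sellerId, responseId} as the shard key, chunks can be split
            // inside one seller's data, so a hot seller like 10001 no longer
            // produces a single unsplittable (jumbo) chunk.
            Document cmd = new Document("shardCollection", "mydb.responses") // hypothetical namespace
                    .append("key", new Document("sellerId", 1).append("responseId", 1));
            client.getDatabase("admin").runCommand(cmd);
        }
    }
}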

OrientDB disk utilization

I have been working with OrientDB and stored about 120 million records in it; the size on disk was 24 GB. I then deleted all the records by running the following commands against the console:
Delete from E unsafe
Delete from V unsafe
When I checked the DB size on disk afterwards, it was still 24 GB. Is there anything extra I need to do to free the disk space?
In OrientDB, when you delete a record the disk space remains allocated. The only way to free it is to export and then re-import the DB.
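For example, from the OrientDB console (the database URL, credentials, and file path are placeholders):
CONNECT remote:localhost/mydb admin admin
EXPORT DATABASE /tmp/mydb.export.json.gz
Then create a fresh empty database, connect to it, and run:
IMPORT DATABASE /tmp/mydb.export.json.gz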

Extremely slow deserialization for large documents with the MongoDB C# driver

I am using 10 threads to do find-and-modify on large documents (one document at a time on each thread). The documents are about 600 kB each, and each contains a 10,000-element array. The client and the MongoDB server are on the same LAN in a datacenter and are connected via a very fast connection. The client runs on a small machine (fairly slow CPU, 768 MB of RAM).
The problem is that every find-and-modify operation is extremely slow.
For example here is the timeline of an operation:
11:56:04: Calling FindAndModify on the client.
12:05:59: The operation is logged on the server ("responseLength" : 682598, "millis" : 7), so supposedly the server returns almost immediately.
12:38:39: FindAndModify returns on the client.
The CPU usage on the machine is only 10-20%, but memory seems to be running low: the Available Bytes performance counter is around 40 MB, and the Private Bytes of the process calling MongoDB is 800 MB (more than the physical RAM on the machine). The Pages/sec performance counter is also hovering around 4,000.
My theory is that it took 33 minutes for the driver to deserialize the document, because the OS is swapping due to the high memory pressure caused by deserializing the large documents into CLR objects. Still, this does not really explain why 10 minutes passed between the call to FindAndModify on the client and the execution on the server.
How likely is it that this is the case? Do you have another theory? How can I verify this is the case?