OpenShift says "Quota limit reached" - MongoDB

In OpenShift I have 4 projects with 25 GB of space allocated to them, and the database I use is MongoDB (version 3.2).
OpenShift is now showing the message "Quota limit reached", and according to OpenShift all 25 GB has been used.
But if I check in MongoDB using db.stats() across all the projects, I have only used 5.7 GB.
I want to know where the remaining space is being used, or how to find the exact amount of space I am using.

I think you should double-check a few things about your resource issue.
Check which resource limit was actually reached - is it storage?
Check the event logs, which provide more details.
Check what quota limits are configured for your cluster or project.
Have you experienced any problems after the message appeared, such as the database hanging or the pod not responding?
These are just troubleshooting pointers, but I hope they help; a few starting commands are sketched below.
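As a rough sketch of those checks, assuming the oc CLI, a project named my-project (a placeholder) and shell access to the MongoDB pod; the data directory path below is the default for the OpenShift MongoDB image and may differ in your setup:

    # List the quotas configured for the project and how much of each is used
    oc get resourcequota -n my-project
    oc describe quota -n my-project

    # Recent events usually name the exact quota that was exceeded
    oc get events -n my-project --sort-by=.lastTimestamp

    # Check the persistent volume claims that back the databases
    oc get pvc -n my-project

    # Inside the MongoDB pod, compare what is on disk with what db.stats() reports;
    # the data directory also holds indexes, the journal and pre-allocated files,
    # so it is normally larger than dataSize (add -u/-p if auth is enabled)
    oc rsh <mongodb-pod> du -sh /var/lib/mongodb/data
    oc rsh <mongodb-pod> mongo --quiet --eval 'db.adminCommand({listDatabases: 1})'

The listDatabases output includes sizeOnDisk per database, which is usually closer to what the quota actually counts than the dataSize figure from db.stats().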

Related

How to reset MongoDB Connections?

I'm using an M0 cloud-managed MongoDB cluster. I'm facing the problem of the 500-connection limit. I've gone through the docs on restarting connections, but I can't find a Restart option in the Clusters menu in Cloud Manager. Am I missing something? Any help is highly appreciated. Here is the document I've checked:
https://docs.cloudmanager.mongodb.com/tutorial/restart-deployment/
You have application(s) that are connected to your deployment that are responsible for these connections. Identify the applications in question, then identify whether they are leaking connections and if so fix them. If your applications genuinely need more than 500 connections you need a higher tier of Atlas.
The document you located is not applicable to you: Cloud Manager is used when you host MongoDB yourself.
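If you want to see what is actually holding those connections before moving tiers, a quick check from the mongo shell may help (the connection string and credentials are placeholders, and some commands are restricted on shared tiers, so your user needs the appropriate privileges):

    # Current vs. available connections on the deployment
    mongo "mongodb+srv://cluster0.example.mongodb.net/admin" -u user -p pass \
          --eval 'printjson(db.serverStatus().connections)'

    # Show active operations with their client address and appName, which usually
    # points at the application(s) holding the connections
    mongo "mongodb+srv://cluster0.example.mongodb.net/admin" -u user -p pass \
          --eval 'db.currentOp(true).inprog.forEach(function(op) { print((op.client || "") + "  " + (op.appName || "")); })'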

RDS Serverless - Could not verify and start postgres

For the last few days I've been having a weird issue with my serverless Postgres RDS.
After deploying new code to the backend service, the RDS server becomes unavailable. The only evidence I could find is the Freeable Memory (MB) metric chart.
The only document I found is this one, which says AWS is working on fixing this issue.
Any help will be much appreciated.
As per the AWS Blog on RDS serverless best practices:
Aurora Serverless scales up when capacity constraints are seen in CPU or connections. However, finding a scaling point can take time (see the Scale-blocking operations section). If there is a sudden spike in requests, you can overwhelm the database. Aurora Serverless might not be able to find a scaling point and scale quickly enough due to a shortage of resources.
The error "Error restarting database: Unable to find shared memory value in the postgres.log file from pg_ctl getSharedMemory command" generally points to a memory allocation issue.
The best way to handle it is to keep a buffer, i.e. a higher minimum allocation of memory, when you are expecting load on the server.
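For Aurora Serverless v1 that usually means raising the minimum capacity; a sketch with the AWS CLI (the cluster identifier and capacity values are placeholders):

    # Keep a higher minimum ACU so the cluster always has headroom, and force a
    # capacity change even when a clean scaling point cannot be found (note that
    # ForceApplyCapacityChange can drop connections)
    aws rds modify-db-cluster \
        --db-cluster-identifier my-serverless-cluster \
        --scaling-configuration MinCapacity=4,MaxCapacity=32,AutoPause=false,TimeoutAction=ForceApplyCapacityChange \
        --apply-immediately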

MongoDB Atlas - Replica Set Has No Primary

I'm fairly new to MongoDB (Atlas - free tier), where I have created a project using it for storing my data. I had it set up and working fine for a couple of weeks, when suddenly I received an email with: An alert is open for your Atlas project: Replica set has no primary. I have no idea what this means and I don't believe I have done anything in the last couple of days/weeks that could warrant this alert. However, after checking my project, it seems that I can no longer connect to my cluster and access my data.
After checking on MongoDB Cloud, it seems that my cluster has stopped working and only the secondary shard (don't know if this is the right terminology) is running, while the other two seem to be down. Can anyone explain what this means, why it is happening or how to fix it? Thanks.
To troubleshoot issues like this, read the server logs and act based on the information therein.
For free and shared tiers in Atlas the logs are apparently not available. Therefore:
For a free tier cluster (M0), delete this cluster and create a new one. If you don't have a backup you should be able to dump via a direct connection to any of the operational secondary nodes or using the secondary read preference (see the sketch after this list).
For a shared tier cluster (M2/M5), use the official MongoDB support channels for assistance.
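A sketch of such a dump and restore with mongodump/mongorestore (host names, credentials and the output directory are placeholders):

    # Dump everything from whichever secondary is still reachable
    mongodump \
        --uri "mongodb+srv://user:pass@old-cluster.example.mongodb.net/?readPreference=secondary" \
        --out ./atlas-backup

    # Restore into the newly created cluster
    mongorestore \
        --uri "mongodb+srv://user:pass@new-cluster.example.mongodb.net/" \
        ./atlas-backup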

googleapi: Error 403: Quota 'BACKEND_SERVICES' exceeded. Limit: 9.0, quotaExceeded

I'm encountering the following error for my ingress controller.
Warning GCE googleapi: Error 403: Quota 'BACKEND_SERVICES' exceeded. Limit: 9.0, quotaExceeded
My limit is set to 9, and this has previously worked, so I'm not sure why this error is being encountered now.
I did delete the cluster and then create a new one. What do these backend services refer to? How can I remove any old ones that have not been deleted?
You could also ask for a small increase on the backend services quota page.
If the increase is small enough it will get auto-accepted.
I had to delete the previously created load balancers and the related "backends" in the Google Cloud console.
The quota was updated shortly after that.
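If you prefer to do it from the CLI, something along these lines can help you find and remove the leftovers (the resource name below is a placeholder in the style GKE ingress uses; double-check before deleting anything):

    # See what is currently counting against the quota
    gcloud compute backend-services list
    gcloud compute forwarding-rules list
    gcloud compute url-maps list
    gcloud compute target-http-proxies list

    # Delete an orphaned backend service left behind by a removed ingress/cluster
    gcloud compute backend-services delete k8s-be-30000--example --global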
Just a heads up: I ran into this while quickly trialing a multi-region GCE ingress deployed using kubemci. Since you are essentially duplicating your backend across many regions, the maximum number of regions you can use on a GCP trial account is 5.
GCP will force you to upgrade to a full account (and enter billing details if you haven't yet). Not a big deal, but in my case I had to do this in order to test a service being served from more than 5 regions at once, and the error was not immediately evident in the logs.
When troubleshooting the rest of the multi-region ingress process this one was tricky to track down, so hopefully this saves a bit of time for someone trying to deploy many clusters on a trial account (like I was!).

AWS EB should create a new instance once my Docker container reaches its maximum memory limit

I have deployed my Dockerized microservices to AWS using Elastic Beanstalk; they are written with Akka HTTP (https://github.com/theiterators/akka-http-microservice) and Scala.
I have allocated 512 MB of memory to each Docker container and I am seeing performance problems. I have noticed that CPU usage increases when the server gets more requests (like 20%, 23%, 45%..., depending on load) and then automatically comes back down to a normal state (0.88%). Memory usage, however, keeps increasing with every request and is not released even after CPU usage returns to normal; once it reaches 100%, the Docker container is killed and restarted.
I have also enabled the auto scaling feature in EB to handle a large number of requests, but it only creates another instance after the CPU usage of the running instance reaches its maximum.
How can I set up auto scaling to create another instance once memory usage reaches its maximum limit (i.e. 500 MB out of 512 MB)?
Please suggest a way to resolve this as soon as possible, as it is a very critical problem for us.
CloudWatch doesn't natively report memory statistics. But there are some scripts that Amazon provides (usually just referred to as the "CloudWatch Monitoring Scripts for Linux") that will get the statistics into CloudWatch so you can use those metrics to build a scaling policy.
The Elastic Beanstalk documentation provides some information on installing the scripts on the Linux platform at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-cw.html.
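Roughly, what those scripts end up doing on the instance looks like this (the download URL/version, dependency list and schedule are from the AWS docs and may have changed; the doc above shows how to wrap the same steps in an .ebextensions config):

    # Install the CloudWatch Monitoring Scripts for Linux and their Perl dependencies
    sudo yum install -y perl-Switch perl-DateTime perl-Sys-Syslog perl-LWP-Protocol-https
    curl -O https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.2.zip
    sudo unzip CloudWatchMonitoringScripts-1.2.2.zip -d /opt && rm CloudWatchMonitoringScripts-1.2.2.zip

    # Push memory metrics to CloudWatch every 5 minutes (the instance profile needs
    # cloudwatch:PutMetricData permission)
    echo '*/5 * * * * root /opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --from-cron' \
        | sudo tee /etc/cron.d/cloudwatch-mem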
However, this will come with another caveat in that you cannot use the native Docker deployment JSON as it won't pick up the .ebextensions folder (see Where to put ebextensions config in AWS Elastic Beanstalk Docker deploy with dockerrun source bundle?). The solution here would be to create a zip of your application that includes the JSON file and .ebextensions folder and use that as the deployment artifact.
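For example, from the project root (file names as per your project):

    # Bundle the Docker deployment JSON together with the .ebextensions folder and
    # upload the zip as the application version instead of the bare JSON file
    zip -r deploy.zip Dockerrun.aws.json .ebextensions/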
There is also one thing I am unclear on, and that is whether these metrics will be available to choose from under the Configuration -> Scaling section of the application. You may need to create another .ebextensions config file to set the custom metric, such as:
option_settings:
  aws:elasticbeanstalk:customoption:
    BreachDuration: 3
    LowerBreachScaleIncrement: -1
    MeasureName: MemoryUtilization
    Period: 60
    Statistic: Average
    Threshold: 90
    UpperBreachScaleIncrement: 2
Now, even if this works, if the application does not lower its memory usage after scaling out and the load goes down, the scaling policy will just keep triggering and eventually reach the maximum number of instances.
I'd first see if you can get some garbage collection statistics for the JVM and maybe tune the JVM to do garbage collection more often to help bring memory down faster after application load goes down.
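As a starting point (the values and jar name are only illustrative and depend on your workload), you could cap the heap well below the 512 MB container limit and turn on GC logging in the container's start command:

    # Leave headroom for JVM overhead (metaspace, threads, native buffers) and
    # log GC activity so you can see whether memory is actually being reclaimed
    java -Xms256m -Xmx350m \
         -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
         -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/app-gc.log \
         -jar akka-http-microservice.jar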