Queries time out. How to free RAM in Cloud SQL Postgres? - postgresql

I run nightly jobs with quite a few long-running, memory-heavy queries on a Cloud SQL Postgres instance (PostgreSQL 12.11 with 12 CPUs and 40 GB of memory).
The workload and the amount of data have increased lately, and I have started seeing issues with the database more and more often: the nightly jobs (when the database is under the most load) run forever and never succeed, or time out. As I understand it, this is because of memory usage (the Total memory usage section also shows memory reaching 100% capacity during peak hours).
The only thing that helps is a restart, which frees the memory, but that is only an emergency short-term fix.
From the database configs, I have these set:
increased work_mem to 400MB = 1% of RAM (1-5% recommended). Should I increase or decrease it?
increased maintenance_work_mem to 4GB = 10% of RAM (10-20% recommended)
shared_buffers = 13.5GB (default)
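For reference, a quick way to confirm what the running instance actually resolves these settings to (a minimal sketch using the standard pg_settings view; nothing Cloud SQL-specific is assumed):

-- show the memory-related settings the server is actually running with
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('work_mem', 'maintenance_work_mem', 'shared_buffers', 'max_connections');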
How can I configure the instance to handle the load without having to increase resources? Maybe there is a way to free RAM without having to restart the instance?
Thank you so much in advance!

Related

AWS RDS Aurora Postgres freeable memory decreasing leads to db restart

I have an RDS Aurora Postgres instance crashing with an OOM error when the freeable memory gets close to 0. The error triggers a database restart, and once the freeable memory drops again over time, a new restart happens. After each restart, the freeable memory goes back to around 16500MB.
Does anyone have any idea why this is happening?
Some information:
Number of ongoing IDLE connections: around 450 IDLE connections
Instance class = db.r6g.2xlarge
vCPU = 8
RAM = 64 GB
Engine version = 11.9
shared_buffers = 43926128kB
work_mem = 4MB
temp_buffers = 8MB
wal_buffers = 16MB
max_connections = 5000
maintenance_work_mem = 1042MB
autovacuum_work_mem = -1
These connections are kept alive for reuse by the application: when new queries go to the database, one of those connections is used. This is not a connection pool implementation; rather, 100 instances of my application are connected directly to the database.
It seems some of these connections/processes are "eating" memory over time. Checking the OS processes, I saw that some of them are growing in RES memory. For example, one idle process had 920.9 MB as its RES metric but now has 3.96 GiB.
The RES metric refers to the physical memory used by the process, as per this AWS doc: https://docs.amazonaws.cn/en_us/AmazonRDS/latest/AuroraUserGuide/USER_Monitoring.OS.Viewing.html
I'm wondering if this issue is related to these idle connections as described here: https://aws.amazon.com/blogs/database/resources-consumed-by-idle-postgresql-connections/
Maybe I should reduce the number of connections to the database.
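As a rough check of how many idle sessions are hanging around and for how long (a minimal sketch against the standard pg_stat_activity view; adjust for your setup):

-- count backends by state and see how long the oldest one has been in that state
SELECT state,
       count(*) AS connections,
       max(now() - state_change) AS longest_in_state
FROM pg_stat_activity
WHERE pid <> pg_backend_pid()
GROUP BY state
ORDER BY connections DESC;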
Freeable Memory on CloudWatch graph:
General metrics on CloudWatch:
Enhanced monitoring metrics:
OS processes (around 100 ongoing processes):

My Postgres RDS database is constantly and suddenly restarting due to heavy memory consumption, but memory is not fully consumed. postgres RDS

Logs:
The database process was killed by the OS due to excessive memory consumption. It is recommended to tune your database workload and/or parameter usage to reduce memory consumption.
But memory is not fully consumed; see the graph below (check from the 12:30 mark in the graph).

Low RAM usage in Postgres Cloud SQL instance?

I have a production environment set up with a Postgres Cloud SQL instance. My database is around 30GB, and I have 8GB of RAM on the master and 16GB on the slave. The weird thing is that the memory usage on both master and slave is stuck at 43%. I am not sure what the reason is. Can anyone help with this?
I cannot tell exactly what number the graph represents, but I assume it is allocated memory.
Then that would be fine, because the "free" RAM is actually used by the kernel to cache files, and PostgreSQL uses that memory indirectly via the kernel cache.

High CPU load and free RAM

I have a cloud server on Cloudways. The CPU load is very high even after I upgraded my server by two levels, but the strange thing is that the RAM is almost free (server: 16 GB RAM, 6 cores). Is there anything we can do to take advantage of that free RAM to reduce the CPU load?
Regards
No, CPU and RAM are different things.
Check the reason why your CPU is so highly loaded.
Maybe the host your VM runs on is overloaded. Did you try to contact your cloud provider?

High CPU spikes in PostgreSQL 9.6

We upgraded PostgreSQL from 9.3 to 9.6 and found that, after the upgrade, PostgreSQL consumes around 75% of overall CPU.
We have a web-based application (JBoss) which communicates with PostgreSQL 9.6.
We have performance-measurement scripts which call the REST API, run for 1 hour, and collect the overall CPU utilization.
Overall CPU utilization after 1 hour is around 75% with the configuration below.
postgresql.conf:
max_worker_processes=16
max_parallel_workers_per_gather=8
force_parallel_mode=on
shared_buffers=3GB
maintenance_work_mem=1GB
backend_flush_after=1MB
bgwriter_flush_after=1MB
bgwriter_delay=10ms
min_wal_size = 1GB
max_wal_size = 2GB
synchronous_commit=off
effective_io_concurrency = 100
max_prepared_transactions=1000
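For reference, the parallel-query settings actually in effect can be confirmed from a SQL session (a minimal sketch using pg_settings):

-- confirm the parallel-query settings the server is running with
SELECT name, setting
FROM pg_settings
WHERE name IN ('force_parallel_mode', 'max_parallel_workers_per_gather', 'max_worker_processes');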
Hardware Configuration
CPU = 8 Core
Memory = 16GB
The PostgreSQL data directory resides on NFS.
These performance spikes were not seen in PostgreSQL 9.3 (overall CPU consumption was around 50%).
Can anyone help with this?