In the Postgres documentation, it says the parameter "shared_buffers" sets the amount of memory the database server uses for shared memory buffers. I know that if this value is too high, the database server might use more memory than is available, which may cause paging to occur.
However, what happens if this value is too low? Would the database just crash if it didn't have enough memory for an intensive query? Specifically, what would it lead to? High IO wait times? High CPU usage?
It won't crash; it will just perform poorly. With shared_buffers set too low, PostgreSQL has to fetch pages from the operating system cache, and ultimately from disk, more often, which typically shows up as higher I/O wait rather than as errors or crashes.
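A rough way to check whether a small shared_buffers is actually hurting, assuming any reasonably recent PostgreSQL: compare buffer hits with reads in pg_stat_database. Keep in mind a "read" here only means the page was not in shared_buffers; it may still have come from the OS cache rather than disk.
-- Illustrative hit-ratio query, per database
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
FROM pg_stat_database
ORDER BY blks_read DESC;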
Related
In v4.0.12 or later, the wiredTigerMaxCacheOverflowSizeGB option is available to specify the maximum size of the "lookaside table" file.
Is there any parameter that can limit memory usage in MongoDB v3.4?
wiredTigerMaxCacheOverflowSizeGB limits disk usage, not memory usage.
For memory usage on 3.4 I use:
--wiredTigerCacheSizeGB 0.25
WiredTiger memory usage: see here
Tell MongoDB how much memory exists in the system: see here
Note that, generally, limiting the memory available to the database (such as through system-level configuration) is not useful, because if such a limit is reached the database process will typically be terminated immediately. Instead, one would generally either:
Understand how much memory is required for the workloads being executed, and provision that much memory for the database, or
Limit workloads to use the memory which is available (for example through adding indexes or sharding the data).
Limiting database memory is a database anti-pattern. The primary goal of a database is to keep your working set in memory. If you really want to limit memory consumption, run MongoDB inside a container with a memory cap.
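A minimal sketch of the container approach, assuming Docker and the official mongo image; the 2 GB cap and 0.25 GB cache size are placeholders to adjust for your workload, and the cache should stay well below the cap so the process isn't killed for hitting it:
# Cap the container at 2 GB and keep the WiredTiger cache well below that
docker run -d --name mongo34 --memory 2g mongo:3.4 --wiredTigerCacheSizeGB 0.25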
Why should I decrease max_connections in PostgreSQL when I use PgBouncer? Will there be a difference if I set max_connections in PostgreSQL's config to 100 or to 1000 when I use PgBouncer to limit connections below either value?
Each possible connection reserves some resources in shared memory, and some backend private memory is also scaled to it. Reserving this memory when it will never be used is a waste of resources. This was more of an issue in the past, when shared memory resources were much more fiddly than they are on a modern OS.
Also, there is some code which needs to iterate over all of those resources, possibly while holding locks, so it takes more time to do that if there is more data to iterate over. The exact nature of the iteration and locking has changed from version to version, as the code was optimized to scale better to large numbers of CPUs.
Neither of these effects is likely to be huge when most of the possible connections are not actually used. Maybe the most important reason to lower max_connections is to get an instant diagnosis in case pgbouncer has been misconfigured and is not doing its job correctly.
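If you want an instant check that the pooler really is keeping the server-side connection count down, compare live backends with the configured ceiling; a minimal sketch to run against the PostgreSQL server itself, not through PgBouncer:
-- Client connections currently open (backend_type exists from PostgreSQL 10 on)
SELECT count(*) FROM pg_stat_activity WHERE backend_type = 'client backend';
-- The configured ceiling
SHOW max_connections;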
I have been considering the idea of moving to a RAM disk for a while. I know its risks, but I just wanted to do a little benchmark. I have two questions: (a) when reading the query plan, will it still differentiate between disk and buffer hits? If so, should I assume that both are equally expensive, or should I assume that there is a difference between them?
(b) A RAM disk is not persistent, but if I want to export some results to persistent storage, are there some precautions I would need to take? Is it the same as usual, e.g. the COPY command?
I do not recommend using RAM disks in PostgreSQL for persistent storage. With careful tuning, you can get PostgreSQL not to use more disk I/O than what is required to make your data persistent.
I recommend doing this (a sample settings sketch follows the list):
Have more RAM in your machine than the size of the database.
Define shared_buffers big enough to contain the database (on Linux, define memory hugepages to contain them).
Increase checkpoint_timeout and max_wal_size to get fewer checkpoints.
Set synchronous_commit = off to keep PostgreSQL from syncing WAL to disk on every commit.
If you are happy to lose all your data in the case of a crash, define your tables UNLOGGED. The data will survive a normal shutdown.
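To make the list above concrete, here is a sketch of the corresponding settings; the sizes are placeholders for a machine whose RAM exceeds the database size, and the shared_buffers and huge_pages changes only take effect after a restart:
ALTER SYSTEM SET shared_buffers = '64GB';        -- big enough to hold the whole database (restart required)
ALTER SYSTEM SET huge_pages = 'try';             -- on Linux, after reserving huge pages in the kernel (restart required)
ALTER SYSTEM SET checkpoint_timeout = '30min';   -- fewer timed checkpoints
ALTER SYSTEM SET max_wal_size = '16GB';          -- fewer checkpoints triggered by WAL volume
ALTER SYSTEM SET synchronous_commit = 'off';     -- no WAL flush on every commit
SELECT pg_reload_conf();
-- Only if losing the data after a crash is acceptable
CREATE UNLOGGED TABLE scratch_results (id bigint, payload text);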
Anyway, to answer your questions:
(a) You should set seq_page_cost and random_page_cost way lower to tell PostgreSQL how fast your storage is.
(b) You could run backups with either pg_dump or pg_basebackup; they don't care what kind of storage you have.
when reading the query plan, will it still differentiate between disk and buffer hits?
It never distinguished between them in the first place. It distinguishes between "hit" and "read", but the "read" count can't tell you which pages truly came from disk and which came from the OS/filesystem cache.
PostgreSQL has no idea you are running on a RAM disk, so will continue to report those as it always has.
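To see that split for yourself, the BUFFERS option of EXPLAIN prints it per plan node; the table and column here are made up, and on a RAM disk the "read" figures simply mean the page was not already in shared_buffers:
-- Produces lines like "Buffers: shared hit=1234 read=56"
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM my_table
WHERE created_at > now() - interval '1 day';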
If so, should I assume that both are equally expensive or should I assume that there is a difference between them?
This is a question that should be answered through your benchmarking. On some systems, memory can be read-ahead from main memory into the faster caches, making sequential reads still faster than random reads. If you care, you will have to benchmark it on your own system.
Reading data from RAM into shared_buffers is still surprisingly expensive due to things like lock management. So as a rough starting point, maybe seq_page_cost=0.1 and random_page_cost=0.15.
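One way to apply those starting values, sketched with ALTER SYSTEM (both parameters take effect on a reload, no restart needed):
ALTER SYSTEM SET seq_page_cost = 0.1;
ALTER SYSTEM SET random_page_cost = 0.15;
SELECT pg_reload_conf();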
a RAM disk is not persistent, but if I want to export some results to persistent storage, are there some precautions I would need to take?
The risk would be that your system crashes before the export has finished. But what precaution can you take against that?
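For the export itself, the usual COPY works the same whether the cluster lives on a RAM disk or not; a minimal sketch, with a made-up table and path, assuming the target directory sits on persistent storage and the role may write server-side files (superuser or pg_write_server_files), otherwise use psql's \copy from the client:
COPY (SELECT * FROM my_results)
TO '/mnt/persistent/my_results.csv'
WITH (FORMAT csv, HEADER);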
I have a question regarding the freeable memory for AWS Aurora Postgres.
We recently wanted to create an index on one of our databases; the database died and failed over to the replica, which all worked fine. It looks like freeable memory dropped by the configured 500 MB of maintenance_work_mem, which brought it down to around 800 MB - right after that, the 32 GB instance died.
1) I am wondering whether the freeable memory is the overall system memory, and whether low freeable memory here could invoke the system OOM killer on the AWS Aurora instance? If so, we may want to plan in more headroom for operational tasks and autovacuum runs so we don't run into this issue again.
2) As far as I understood, the actual work of the index creation should have used the free local storage, so the size of the index shouldn't have mattered, right?
Thanks in advance,
Chris
Regarding 1)
Freeable Memory (from https://forums.aws.amazon.com/thread.jspa?threadID=209720):
The freeable memory includes the amount of physical memory left unused by the system, plus the total amount of buffer or page cache memory that is free and available.
So it's freeable memory across the entire system. While MySQL is the main consumer of memory on the host, we do have internal processes, in addition to the OS, that use up a small amount of additional memory.
If you see your freeable memory near 0, or also start seeing swap usage, then you may need to scale up to a larger instance class or adjust MySQL memory settings. For example, decreasing the innodb_buffer_pool_size (by default set to 75% of physical memory) is one way of adjusting MySQL memory settings.
That also means that if memory gets low, it's likely to impact the process in some form. In this thread (https://forums.aws.amazon.com/thread.jspa?messageID=881320), for example, it was mentioned that it caused the MySQL server to restart.
Regarding 2)
This is as described in the documentation (https://aws.amazon.com/premiumsupport/knowledge-center/postgresql-aurora-storage-issue/), so I guess that's right and the size shouldn't have mattered.
Storage used for temporary data and logs (local storage). All DB temporary files (for example, logs and temporary tables) are stored in the instance local storage. This includes sorting operations, hash tables, and grouping operations that are required by queries.
Each Aurora instance contains a limited amount of local storage that is determined by the instance class. Typically, the amount of local storage is twice the amount of memory on the instance. If you perform a sort or index creation operation that requires more memory than is available on your instance, Aurora uses the local storage to fulfill the operation.
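As a side note on headroom: maintenance_work_mem does not have to be raised instance-wide for a one-off index build; it can be set just for the session doing the build, which keeps more memory free for everything else and lets Aurora spill to local storage instead. A sketch with made-up table and index names:
SET maintenance_work_mem = '256MB';   -- applies only to this session
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);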
I would like to know what happens when AWS RDS CPU utilisation is at 100%.
Do the database requests fail or are the requests put on hold until the CPU utilisation drops below 100%?
I'm using RDS Postgres. Thanks in advance for your help.
Your query performance will degrade, and further queries may start to fail or time out.
If your RDS instance is the sole database for your application, your entire application could come to a standstill.
You will need to figure out whether the CPU is peaking because of overall high load or because a single query is consuming all the resources; a quick way to check is sketched below.
If it's under heavy load, adding a read replica might help if the workload is read-heavy. If it's write-heavy, you may need to scale up to the next instance class or think about sharding your data.
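A quick way to check which of the two it is, assuming the pg_stat_statements extension is installed (the total_exec_time column is the name used in PostgreSQL 13 and later; older versions call it total_time):
-- Statements running right now, longest-running first
SELECT pid, state, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY runtime DESC;
-- Cumulative top consumers since the stats were last reset
SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;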
This could lead to a lot of issues, namely:
There is a good chance that if the CPU stays at 100% consistently, your instance will crash. Now, if this is a Multi-AZ instance, an automatic failover can reduce the downtime from any unexpected reboot to around 2 minutes. However, if this is a Single-AZ instance, the downtime can be significantly longer.
Your DB instance may stop accepting new connections even though it has not reached the value of max_connections. There is a good chance that some existing transactions will roll back due to the performance degradation.
A continuous spike to 100% CPU may lead to memory pressure, i.e. very high swap usage and low freeable memory, and an eventual instance crash.
The workload will hit its IOPS threshold, and read and write IOPS won't go any higher.