PostgreSQL + Apache connection pooling: MaxIdle and MaxActive vs RAM usage

We are running a PostgreSQL database on Google Cloud. On top of it we have an app in which we can configure runtime connection pooling settings for the database.
Our Google Cloud SQL server has 30 GB of RAM, so the default max_connections is 500, as I read in the Google docs. In our app we have set the following connection pooling (Apache Commons pooling) settings, roughly as sketched in the code below:
MaxActive: 200
MaxIdle: 200
MinIdle: 50
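Roughly, this is what the pool setup looks like, assuming the pool is Apache Commons DBCP 1.x's BasicDataSource (the JDBC URL, credentials, and the maxWait value below are made-up placeholders, not our real configuration):

```java
import org.apache.commons.dbcp.BasicDataSource;

// Minimal sketch, assuming Apache Commons DBCP 1.x (BasicDataSource).
// URL and credentials are placeholders; only the pool sizing mirrors the question.
public class PoolConfig {
    public static BasicDataSource createDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        ds.setUrl("jdbc:postgresql://<cloud-sql-host>:5432/appdb"); // placeholder
        ds.setUsername("app");                                      // placeholder
        ds.setPassword("secret");                                   // placeholder

        ds.setMaxActive(200); // hard cap on concurrent connections (the flatline in our graph)
        ds.setMaxIdle(200);   // idle connections are never trimmed below the peak
        ds.setMinIdle(50);    // the pool always keeps at least 50 connections open

        // How long a borrow waits for a free connection before throwing an error;
        // this determines how quickly the "SQL connection errors" appear once the cap is hit.
        ds.setMaxWait(5000);  // milliseconds, illustrative value
        return ds;
    }
}
```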
We are experiencing issues with these settings. First of all, we often run into the MaxActive limit: I can see a flatline in the connections graph at 200 connections a couple of times a day, and at those moments our logs are flooded with SQL connection errors.
The server is using around 28 GB of RAM at peak moments (with 200 active connections), so we are close to the RAM limit as well.
Instead of blindly increasing the RAM and MaxActive, I was hoping to get some insight into what would be best practice in our situation. I see two solutions to our problem:
Increase RAM, increase MaxActive and increase MaxIdle (not very cost efficient)
Increase MaxActive, keep MaxIdle the same (or even lower) and keep MinIdle the same (or even lower)
Option 1 would be more expensive, so I am wondering about option 2. Because I would lower the idle connections, the pool would take up less RAM. However, will this have a noticeable impact on performance? I was taught to keep MaxIdle as close to MaxActive as possible, to minimize the overhead of creating new connections.
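If we went for option 2, I assume it would look roughly like this (again DBCP 1.x, and all numbers are illustrative, not a recommendation):

```java
// Sketch of option 2: allow bursts up to maxActive, but let the pool shrink
// back when traffic drops so idle backends stop holding RAM on the database
// server. Builds on the createDataSource() sketch above; values are illustrative.
BasicDataSource ds = PoolConfig.createDataSource();

ds.setMaxActive(250);  // a bit more headroom for the daily peaks
ds.setMaxIdle(50);     // trim the pool back to 50 once load drops
ds.setMinIdle(20);     // keep a small warm floor so bursts don't start completely cold

// Background eviction: without these, maxIdle is only enforced when a
// connection is returned, not while the pool sits idle.
ds.setTimeBetweenEvictionRunsMillis(30_000);  // check every 30 s
ds.setMinEvictableIdleTimeMillis(300_000);    // close connections idle > 5 min
ds.setTestWhileIdle(true);                    // validate surviving connections in the background
ds.setValidationQuery("SELECT 1");
```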
I have done quite a bit of research, but my conclusion is that tuning these settings is very situation-specific and there is no general best practice. I could not find a definitive answer on the performance impact of option 1 versus option 2.
PS: we are also experiencing some slow queries in our app, so of course we can optimize things or change the design of our app to decrease the number of concurrent connections.
I really hope someone can give some helpful insights / advice / best practices. Thanks a lot in advance!

Related

Locust eats CPU after 2-3 hours running

I have a simple HTTP server that I was load testing. This server interacts with other HTTP servers and a Cassandra DB.
I was using 100 users at 1 request/s each, so the server saw 100 tps in total. What I noticed from the Docker stats was that CPU usage kept climbing, and after roughly 2-3 hours it reached the 90% mark and beyond. At that point I got a notice from Locust stating that the measurements may be inconsistent. But latencies did not increase, so I do not know why this is happening.
Can you please suggest possible cause(s) of the problem? I think 100 tps should be manageable for one vCPU.
Thanks,
AM
There's no way for us to know exactly what's wrong without at the very least seeing some code, and even then, things like the environment, the data, or the server you're running against could be factors we can't see.
It's possible there is a problem with the code for your Locust users, such as a memory leak, or they're simply doing too much work for a single worker to handle that many users. For users only doing simple HTTP calls, a single CPU can typically handle upwards of thousands of requests per second. Do anything more than that and you should expect a single worker to handle fewer users. It's also possible you may just need a more powerful CPU (or more RAM or bandwidth) to do what you want at the scale you want.
Do some profiling to see if you can find any inefficiencies in your code. Run smaller tests to see if the same behavior is evident with smaller loads. Run the same load but with additional Locust workers on other CPUs.
It's also just as possible your DB can't handle the load. The increasing CPU usage could be down to how your code handles waiting on the connection from the DB. Perhaps the DB could sustain, say, 80 users at an acceptable rate, but any additional users make it fall further and further behind, and your Locust users then wait longer and longer for the requested data.
For more suggestions, check out the Locust FAQ https://github.com/locustio/locust/wiki/FAQ#increase-my-request-raterps

Should I decrease max_connections in PostgreSQL when using PgBouncer?

Why should I decrease max_connections in PostgreSQL when I use PgBouncer? Will it make a difference whether I set max_connections to 100 or 1000 in PostgreSQL's config when PgBouncer limits connections to below either value?
Each possible connection reserves some resources in shared memory, and some backend private memory is also scaled to it. Reserving this memory when it will never be used is a waste of resources. This was more of an issue in the past, when shared memory resources were much more fiddly than they are on a modern OS.
Also, there is some code which needs to iterate over all of those resources, possibly while holding locks, so it takes more time to do that if there is more data to iterate over. The exact nature of the iteration and locking has changed from version to version, as the code was optimized to scale to large numbers of CPUs.
Neither of these effects is likely to be huge when most of the possible connections are not actually used. Perhaps the most important reason to lower max_connections is to get an instant diagnosis if PgBouncer has been misconfigured and is not doing its job correctly.

Rule of thumb for Max Postgres connections to set?

As a rule of thumb, how many max connections should I configure my Postgres server for? For instance, if I have 8 GB of memory and a quad-core 3.2 GHz machine, and the server is dedicated to Postgres only, how many max connections would be safe?
There is no real rule of thumb since it really depends on your load.
If you do lots of tiny queries, then you can easily increase the number of connections.
If you have a few heavy queries, then you will probably have increased work_mem, so you'll run out of memory with a lot of connections.
The basic things are:
don't have more connections than your memory allows (a rough sizing sketch follows below).
don't kill and recreate connections if you can avoid it (pgbouncer springs to mind).
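As a back-of-the-envelope illustration of the first point (the per-connection figure is an assumption, and depends heavily on work_mem and how many sorts/hashes a query runs):

```java
// Rough, hypothetical sizing sketch for the 8 GB machine in the question.
// The per-connection estimate is an assumption, not a measured number:
// each backend needs a few MB of private memory plus up to work_mem
// per sort/hash node in its active query.
public class ConnectionSizing {
    public static void main(String[] args) {
        long totalRamMb      = 8 * 1024;  // 8 GB machine
        long sharedBuffersMb = 2 * 1024;  // e.g. ~25% of RAM for shared_buffers
        long osAndCacheMb    = 2 * 1024;  // leave room for the OS and filesystem cache
        long perConnectionMb = 20;        // assumed: base backend + one work_mem-sized sort

        long available = totalRamMb - sharedBuffersMb - osAndCacheMb;
        System.out.printf("Rough upper bound: ~%d connections%n", available / perConnectionMb);
        // Prints ~204 with these assumptions; heavier queries push the number way down,
        // which is exactly the "it depends on your load" point above.
    }
}
```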

Scaling DB2 to increase tps

I want to brainstorm with you all about the scaling options that DB2 offers. I hope you can help me resolve the problem.
I need to scale my DB2 database to anticipate flash-crowd traffic to the database server. My database can only serve around 200+ transactions per second (in application terms, not database tps) before it completely stalls and runs out of CPU.
If I want to reach 2000+ transactions per second, roughly 10 times the current rate, what options do I have to scale my database?
Recently I read about the pureScale feature. It looks promising, but it is not a flexible solution for us because it can only be deployed on IBM System x, and ours is not. Are there other solutions like pureScale that take a shared-everything approach?
The second option may be database partitioning. Can database partitioning (the shared-nothing approach) help resolve my problem and add processing power to my system?
Thanks and regards,
Fritz
Before you worry about how to scale up (more hardware in 1 server) or out (more servers), look at how to tune your database. Buying your way out of a performance problem is almost always more expensive than spending time to find and fix the performance problem.
Assuming that the process(es) consuming CPU on your database server belong to the database engine, high CPU activity with low I/O activity indicates that you're doing a LOT of reads, but they are all in memory. Scanning a huge table is still inefficient, even if that table is stored completely in memory (buffer pools).
Find the SQL statements that are using the most CPU. Look at the explain plans, and figure out how to make them more efficient. There are LOTS of resources on the web for database performance tuning.

Postgres causing swapping on CentOS

All,
I am running CentOS 6.0 with PostgreSQL 8.4 and can't seem to figure out how to prevent so much disk swapping from occurring. I have 12 GB of RAM and 4 processors, and I am doing some simple updates (1 table at a time). I thought for a minute that the inserts happening in parallel from a script I wrote were causing the large memory usage, but when I saw a simple update causing it too I basically threw in the towel and decided to ask for help.
I pasted the conf file here. http://pastebin.com/e0jdBu0J
You can see that I set the buffers relatively low and the connection count high. The DB service will not start if I set shared_buffers any higher than 64 MB. Does anyone have an idea what may be causing this?
Thanks,
Adam
If you're going into swap, increasing shared_buffers will make the problem worse; you'll be taking RAM away from the part that's running out and swapping, and instead dedicating it to database caching. It's worth fixing SHMMAX etc. just on general principle and for later tuning work, but that's not going to help with this problem.
Guessing at the identity of your memory-gobbling source is a crapshoot. It's far better to look at data from "top -c" and ps to find which processes are using a lot of it. It's possible for a really bad query to consume way more memory than it should. If you see memory use spike for a PostgreSQL process running something, check the process ID against the information in pg_stat_activity to see what it's doing.
There are a couple of things that can cause this sort of issue that often surprise people. If you are doing a large number of row updates in a single transaction and there are foreign key checks or triggers involved, that can run you out of memory. The queue of things to check in each of those cases is kept in RAM and can be surprisingly big.
There are two problems with your PostgreSQL settings that might be related. Databases don't actually work very well if you have a lot more active connections than cores in the server; best performance is normally 2 to 3 active clients per core. And all sorts of things go wrong once you've got more than a few hundred connections. There is some connections^2 behavior that gets ugly performance-wise, and there are some memory issues too. If you really need 1250 connections, you should be using a connection pooler such as pgBouncer or pgpool-II.
And effective_io_concurrency = 1000 is way too high for any hardware on the planet. Useful values for it are a small multiple of how many disks you have in the server. I have no idea what happens to memory usage when you set it that high, but it has not been tested very well at that range. Normal settings are more like 1 to 25. The parameters outlined at Tuning Your PostgreSQL Server are much more important than it is; the concurrency value only impacts one particular type of table scan.
CentOS 6 seems to have a very conservative shmmax default.
Set your shared_buffers to the value recommended by Postgres tuning resources (they explain how to set it).
To experiment you can (as root) use sysctl -w kernel.shmmax=n, where n is the value that the startup error message says Postgres is trying to allocate. When you have identified the value you wish to use permanently, set it in /etc/sysctl.conf.