The configuration of postgresql.conf for PostgreSQL

I have a server with 32 GB RAM on a Windows platform. Of that, 8 GB is assigned to the JVM. The maximum number of Postgres connections is 500. The database size is around 24 GB, and the Postgres version is 9.2.3.
What would be the best configuration in postgresql.conf?
I am a newbie with Postgres databases; I appreciate your help.
This is the current configuration:
max_connections = 500
shared_buffers = 1GB
temp_buffers = 512MB
work_mem = 24MB
maintenance_work_mem = 512MB
wal_buffers = 8MB
effective_cache_size = 8GB
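For reference, you can confirm which values are actually in effect from a psql session before and after any change; a minimal sketch (on 9.2, changes themselves still have to be made by editing postgresql.conf and reloading or restarting):

SHOW shared_buffers;

-- list the relevant settings and where each value came from
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('max_connections', 'shared_buffers', 'work_mem',
               'maintenance_work_mem', 'effective_cache_size');

Note that shared_buffers and max_connections only take effect after a full restart, while work_mem and effective_cache_size can be changed with a reload or per session.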

Related

Handle more than 5000 connections without PGBouncer in PostgreSQL

We are using PostgreSQL version 14.
Our production server configuration:
Windows 2016 Server
32 GB RAM
8 TB Hard disk
8 Core CPU
In my postgresql.conf file:
shared_buffers = 8GB
work_mem = 1GB
maintenance_work_mem = 1GB
max_connections = 1000
I wish to handle 5000 connections at a time. Somebody suggested I go with PgBouncer,
but we want to start with PostgreSQL without PgBouncer at first.
I need to know: is my configuration OK for 5000 connections, or do we need to increase RAM or anything else?
This is our first PostgreSQL implementation, so please suggest how to start with PostgreSQL without PgBouncer.
Thank you.
Note:
In SQL Server, if we set -1, it will handle a larger number of connections. Is there a similar configuration available in PostgreSQL?
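For what it's worth, PostgreSQL has no equivalent of a "-1 = unlimited" setting: max_connections is a hard limit that only changes after a restart, and each connection is a separate server process. A minimal sketch for checking the limit against actual usage:

-- the configured hard limit
SHOW max_connections;

-- connections currently open, grouped by state (active vs. idle)
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state;

If the number of truly active connections is far below 5000, a pooler such as PgBouncer is the usual way to fan thousands of clients onto a much smaller number of server connections.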

How to use more memory to run a PostgreSQL server in a Singularity Container on an HPC cluster

I am running PostgreSQL 13.4 on Singularity 3.6.4 on a well-resourced HPC cluster as a data warehouse for 1.5 TB of data used by my team for research projects. The HPC cluster uses LSF for job scheduling. At present, I am running the server container with 4 cores and 30 GB of available memory.
The LSF summary for the job running the server shows a MEMLIMIT of 29.7 GB, but a MAX MEM usage of less than 500 MB.
Ultimately, I am trying to increase query speed, and I believe that increasing memory utilization will improve overall performance.
Current non-default postgresql.conf settings are below:
max_connections = 30
shared_buffers = 7680MB
effective_cache_size = 23040MB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 500
random_page_cost = 4
effective_io_concurrency = 2
work_mem = 64MB
min_wal_size = 4GB
max_wal_size = 16GB
max_worker_processes = 4
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_parallel_maintenance_workers = 2
The Singularity image was built from docker://postgres.
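As a rough way to see where more memory would actually help (and to let a single analytic session use more of it), a hedged sketch against PostgreSQL 13; the table and column names are hypothetical:

-- raise the per-operation sort/hash memory for this session only
SET work_mem = '1GB';

-- BUFFERS shows shared-buffer hits vs. disk reads; an "external merge Disk" step in
-- the plan means a sort spilled to disk and more work_mem could help that query
EXPLAIN (ANALYZE, BUFFERS)
SELECT some_column, count(*)
FROM some_large_table          -- hypothetical table
GROUP BY some_column;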

Increase max_connections in PostgreSQL

My server config:
CPU: 16 cores
RAM: 64 GB
Storage: 2 TB
OS: CentOS 64-bit
I have the database and a Java application on the same server.
My Postgres config file has the following:
max_connections = 9999
shared_buffers = 6GB
However, when I check the database via SHOW max_connections, it shows only 500.
How can I increase the max_connections value?
Either you forgot to remove the comment (#) at the beginning of the postgresql.conf line, or you didn't restart PostgreSQL.
But a setting of 500 is already much too high, unless you have some 100 cores in the machine and an I/O system to match. Use a connection pool.
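A minimal sketch of the fix the answer describes (the value shown is just the one from the question; the advice to keep it much lower and use a pooler still stands):

# postgresql.conf -- the line must not start with '#', or it is ignored
max_connections = 9999    # only takes effect after a full server restart, not a reload

-- after restarting the service, verify from psql:
SHOW max_connections;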

High concurrency on PgPool + PostgreSQL Cluster

I have a Pgpool-II cluster with 2 PostgreSQL 9.5 backends (4 vCores, 8 GB RAM) doing load balancing + replication. My use case is a website that provides just SSO login/registration; it is a relatively small database and the queries are very simple, but it needs to support very high concurrency (several thousand concurrent users).
Before adding more backends, I want to make sure that the configuration of the current cluster is optimal. I ran some tests with pgbench (regular SELECT queries simulating the normal behaviour of the website) and was able to overload the connection pool without too much effort (pgbench -c 64 -j 4 -t 1000 -f queries.sql), even when there was plenty of CPU/RAM available on the load balancer and the backends.
These are the relevant settings:
Pgpool-II
listen_backlog_multiplier = 3
connection_cache = on
num_init_children = 62
max_pool = 4
child_life_time = 0
child_max_connections = 0
connection_life_time = 0
client_idle_limit = 0
PostgreSQL
max_connections = 256
shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 8MB
maintenance_work_mem = 512MB
min_wal_size = 1GB
max_wal_size = 2GB
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 100
Increasing num_init_children/max_pool will force me to increase max_connections on the backends, and that doesn't seem to be recommended. Any suggestions? Thanks!
There is no way to achieve more concurrent connections through Pgpool-II than the value of num_init_children.
num_init_children directly corresponds to the maximum number of concurrent client connections Pgpool-II can handle, so you cannot set num_init_children lower than the maximum number of concurrent connections you want to support.
But to keep max_connections low on the PostgreSQL side, you can use a lower value for the max_pool setting, since a Pgpool-II child process opens a new backend connection only if the requested [user, database] pair is not already in its cache. If the application uses only one user to connect to only one database, say [user1, db1], then you can set max_pool to 1 and have max_connections on the PostgreSQL backend equal to (num_init_children + 1).
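A sketch of how those settings relate, assuming the single [user1, db1] pair described above (the numbers only illustrate the formula, they are not a recommendation):

# pgpool.conf
num_init_children = 62   # maximum concurrent client connections Pgpool-II will accept
max_pool = 1             # one cached backend connection per child (single user/db pair)

# postgresql.conf on each backend
max_connections = 63     # num_init_children + 1, per the answer above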

Very slow loading of LinkedGeoData in PostgreSQL

I have installed and tuned my PostgreSQL database, and I downloaded the LinkedGeoData files from here. I then executed the line lgd-createdb -h localhost -d databasename -U user -W password -f bremen-latest.osm.pbf (12 MB), and the same for saarland-latest.osm.pbf (21.6 MB); both worked fine and finished in under 15 minutes. But when I tried to load a heavier file like Mecklenburg-Vorpommern-latest.osm.pbf (54 MB), it did not go well: the system executes that line, but I have been waiting for the result since yesterday.
The values in my PostgreSQL configuration file postgresql.conf are:
shared_buffers = 2GB
effective_cache_size = 4GB
checkpoint_segments = 256
checkpoint_completion_target = 0.9
autovacuum = off
work_mem = 256MB
maintenance_work_mem = 256MB
My PostgreSQL version is 9.1 on a Debian machine.
How can I solve this issue?
Thank you in advance.
I am the developer of the lgd-createdb script, and I just tried to reproduce the problem using PostgreSQL 9.3 (on Ubuntu 14.04) on a notebook with a quad-core i7, an SSD, and 8 GB RAM; for me, the Mecklenburg-Vorpommern-latest.osm.pbf file loaded in less than 10 minutes.
My settings were:
shared_buffers = 2GB
temp_buffers = 64MB
work_mem = 64MB
maintenance_work_mem = 256MB
checkpoint_segments = 64
checkpoint_completion_target = 0.9
checkpoint_warning = 30s
effective_cache_size = 2GB
so quite similar to yours.
I even created a new version of the LGD script (not in the repo yet), where Osmosis is configured to first load the data into the "snapshot" schema and afterwards convert it to the "simple" schema. Osmosis is optimized for the former schema, and indeed on a single run (using the CompactTempFile option) it was slightly faster (8 min snapshot vs. 8:30 min simple).
Do you have SSDs? The latter loading strategy might be significantly faster on non-SSDs (although it shouldn't take hours for a 50 MB file).
Maybe a system load indicator such as htop or indicator-multiload could help you reveal resource problems (such as running out of RAM or high disk I/O by another process).
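Along the same lines, it can help to check from a second psql session whether the import is still doing work or is stuck waiting on a lock; a minimal sketch for 9.1 (these column names were renamed in 9.2 and later):

-- what each backend is running and whether it is blocked waiting for a lock
SELECT procpid, waiting, query_start, current_query
FROM pg_stat_activity
ORDER BY query_start;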