I am trying to set a value for effective_cache_size and issuing a SIGHUP, but the value does not change. I am running PostgreSQL 9.5.5. Based on the documentation, this parameter does not require a restart, merely a reload.
Here is the value I inserted into postgresql.conf
effective_cache_size = 12GB
I am not calling other configuration files from within the postgresql.conf.
When I query pg_settings, the source file shown is /data1/pgdata/mydb/postgresql.auto.conf rather than /data1/pgdata/mydb/postgresql.conf.
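For reference, a query along these lines shows where the value is coming from (the column list here is just the subset of pg_settings I found relevant):
SELECT name, setting, unit, source, sourcefile, sourceline
FROM pg_settings
WHERE name = 'effective_cache_size';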
This gets a little more bizarre. I used
ALTER SYSTEM SET effective_cache_size = '12GB';
and ran
SELECT pg_reload_conf();
when I run
show effective_cache_size;
it says 12 GB
Any ideas?
pg_settings shows it in blocks, typically 8KB - perhaps that's the problem.
I had the same problem, which actually led me here. I set 6GB and pg_settings showed 786432, with the source listed as "configuration file", which was confusing. When I changed it to 4GB and restarted, it said 524288. Aha, so it is changing!
Documentation for the variable says: "If this value is specified without units, it is taken as blocks, that is BLCKSZ bytes, typically 8kB."
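For anyone else doing the conversion, the block math works out as follows (assuming the default 8kB BLCKSZ; the query simply shows the raw setting next to its unit):
-- 4GB  = 4 * 1024 * 1024 kB / 8 kB = 524288 blocks
-- 6GB  = 6 * 1024 * 1024 kB / 8 kB = 786432 blocks
-- 12GB = 12 * 1024 * 1024 kB / 8 kB = 1572864 blocks
SELECT name, setting, unit FROM pg_settings WHERE name = 'effective_cache_size';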
I have a problem. I am learning PostgreSQL and I work with pgAdmin 4 v4. At this point, I am trying to make PostgreSQL use more RAM for buffers than my computer has. I am thinking of using something like SET shared_buffers TO '256MB', but I am not sure if it is correct. Do you have any ideas?
SET shared_buffers TO '256MB'
This will not work because shared_buffers must be set at server start and cannot be changed later, and SET is a command you would run after the server is already running. You would have to put the setting in postgresql.conf, or specify it with the -B option to the "postgres" command.
You could also set it with the ALTER SYSTEM command, and it would take effect at the next restart. However, you could easily make it a setting that causes your system to fail to start again (indeed, that appears to be your goal...), at which point you can't use ALTER SYSTEM to fix it and will have to dig into the .conf files.
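A minimal sketch of those options, with the 256MB value and the data directory path as placeholders:
# in postgresql.conf, takes effect only after a full server restart
shared_buffers = 256MB
-- or from SQL, again only applied at the next restart
ALTER SYSTEM SET shared_buffers = '256MB';
# or on the command line: -B is the number of 8kB buffers (32768 * 8kB = 256MB)
postgres -B 32768 -D /path/to/data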
I have a PostgreSQL text dump file, approximately 4.5GB in size (uncompressed), that I am trying to restore, but the restore always fails due to running out of memory.
Interestingly enough, no matter what I try it always fails at the exact same line number of the dump file, which leads me to believe the changes I have attempted have had no effect. (I did look at this line number in the file and it is just another row of data, nothing significant is occurring at that point in the file.)
I am using psql with the -f option, as I read that it can be better than feeding the dump to standard input. Both methods fail, however.
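For completeness, the invocation looks roughly like this (database and file names are placeholders; ON_ERROR_STOP just makes psql stop at the first error instead of ploughing on):
psql -d mydb -v ON_ERROR_STOP=1 -f dump.sql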
I have tried the following:
increase work_mem from 4MB to 128MB
increase shared_buffers from 128MB to 2GB
increase VM memory from 8GB to 16GB
Using both top and pg_top I can see (what I believe shows) that both the OS and the database still have memory available when psql fails. I'm not doubting that something somewhere is running out of memory; I just wish I had a better way of telling what exactly that was.
Other information that may be helpful:
PostgreSQL 10.5
Ubuntu 16.04 LTS running on VMWare Workstation
I am trying to change some parameters in the postgresql.conf file. I changed the parameters to the following values:
shared_buffers: 8000MB
work_mem: 3200MB
maintenance_work_mem: 1600MB
I have PostgreSQL installed on a server with 128GB of RAM. After making these changes I restarted the PostgreSQL server. After that, when I use psql to check these parameters using SHOW parameter_name, I get the following values:
shared_buffers: 8000MB
work_mem: 4MB
maintenance_work_mem: 2047MB
Why did the changes take effect only for the shared_buffers parameter and not for the other two?
I also changed max_wal_size to 4GB and min_wal_size to 1000MB, but these parameters did not change either; the values shown are 1GB and 80MB. So in conclusion, of all the changes I made, only the change to the shared_buffers parameter took effect while the others did not.
Some possibilities for what might be the problem:
You edited the wrong postgresql.conf.
You restarted the wrong server.
The value was configured with ALTER SYSTEM.
The value was configured with ALTER USER or ALTER DATABASE.
Use the psql command \drds to see such settings.
To figure out from where PostgreSQL takes the setting, use
SELECT * FROM pg_settings WHERE name = 'work_mem';
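A slightly wider version of that query also shows exactly which file and line (or which ALTER SYSTEM / per-role setting) each value comes from; the column list is just the relevant subset of pg_settings:
SELECT name, setting, unit, source, sourcefile, sourceline
FROM pg_settings
WHERE name IN ('work_mem', 'maintenance_work_mem', 'max_wal_size', 'min_wal_size');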
I am using a Postgres DB for my product. While doing a batch insert using Slick 3, I am getting an error message:
org.postgresql.util.PSQLException: FATAL: sorry, too many clients already.
My batch insert operation will involve more than a thousand records.
The max_connections setting for my Postgres is 100.
How to increase the max connections?
Just increasing max_connections is a bad idea. You need to increase shared_buffers and kernel.shmmax as well.
Considerations
max_connections determines the maximum number of concurrent connections to the database server. The default is typically 100 connections.
To support a higher connection count you might need to scale up your deployment, but before doing that, you should consider whether you really need an increased connection limit.
Each PostgreSQL connection consumes RAM for managing the connection or the client using it. The more connections you have, the more RAM you will be using that could instead be used to run the database.
A well-written app typically doesn't need a large number of connections. If you have an app that does, consider using a tool such as PgBouncer, which can pool connections for you. As each connection consumes RAM, you should be looking to minimize their use.
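As an illustration, a minimal PgBouncer setup might look something like the following (database name, auth file, and pool sizes are placeholders you would adjust):
; pgbouncer.ini
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb
[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
The application then connects to port 6432 instead of 5432, and PgBouncer multiplexes those client connections onto a much smaller number of server connections.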
How to increase max connections
1. Increase max_connections and shared_buffers
in /var/lib/pgsql/{version_number}/data/postgresql.conf
change
max_connections = 100
shared_buffers = 24MB
to
max_connections = 300
shared_buffers = 80MB
The shared_buffers configuration parameter determines how much memory is dedicated to PostgreSQL to use for caching data.
If you have a system with 1GB or more of RAM, a reasonable starting value for shared_buffers is 1/4 of the memory in your system.
It's unlikely you'll find that using more than 40% of RAM works better than a smaller amount (like 25%).
Be aware that if your system or PostgreSQL build is 32-bit, it might not be practical to set shared_buffers above 2 ~ 2.5GB.
Note that on Windows, large values for shared_buffers aren't as effective, and you may find better results keeping it relatively low and using the OS cache more instead. On Windows the useful range is 64MB to 512MB.
2. Change kernel.shmmax
You would need to increase the kernel's maximum shared memory segment size so that it is slightly larger than shared_buffers.
In /etc/sysctl.conf, set the parameter as shown below. It is applied at boot (or when sysctl is reloaded), and PostgreSQL then has to be restarted to pick up the larger shared_buffers (the following line sets the kernel maximum to 96MB):
kernel.shmmax=100663296
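To apply the change without waiting for a reboot, something like the following should work on most Linux systems (assuming root or sudo access), followed by a PostgreSQL restart:
sudo sysctl -p            # reload /etc/sysctl.conf
sysctl kernel.shmmax      # verify the new value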
References
Postgres Max Connections And Shared Buffers
Tuning Your PostgreSQL Server
Adding to Winnie's great answer,
If anyone is not able to find the postgresql.conf file location in their setup, you can always ask PostgreSQL itself:
SHOW config_file;
For me, changing max_connections alone did the trick.
EDIT: From #gies0r: In Ubuntu 18.04 it is at
/etc/postgresql/11/main/postgresql.conf
If your postgres instance is hosted by Amazon RDS, Amazon configures the max connections for you based on the amount of memory available.
Their documentation says you get 112 connections per 1 GB of memory (with a limit of 5000 connections no matter how much memory you have), but we found we started getting error messages closer to 80 connections in an instance with only 1 GB of memory. Increasing to 2 GB let us use 110 connections without a problem (and probably more, but that's the most we've tried so far.) We were able to increase the memory of an existing instance from 1 GB to 2 GB in just a few minutes pretty easily.
Here's the link to the relevant Amazon documentation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html#RDS_Limits.MaxConnections
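For what it's worth, the default in the RDS parameter group appears to be derived from instance memory, which is where the roughly 112-per-GB figure comes from:
-- RDS default (per the linked docs): LEAST({DBInstanceClassMemory/9531392}, 5000)
-- 1 GiB = 1073741824 bytes; 1073741824 / 9531392 ≈ 112 connections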
Change the max_connections variable in the postgresql.conf file located in /var/lib/pgsql/data/ or /usr/local/pgsql/data/.
Locate the postgresql.conf file with the command below:
locate postgresql.conf
Edit the postgresql.conf file with the command below:
sudo nano /etc/postgresql/14/main/postgresql.conf
Change
max_connections = 100
shared_buffers = 24MB
to
max_connections = 300
shared_buffers = 80MB
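After saving the file, a restart is needed for max_connections to take effect; on a systemd-based Ubuntu install (matching the path above) that would typically be:
sudo systemctl restart postgresql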
I'm trying to configure a PostgreSQL 9.6 database to limit the size of the pg_xlog folder. I've read a lot of threads about this issue or similar ones but nothing I've tried helped.
I wrote a setup script for my PostgreSQL 9.6 service instance. It executes initdb, registers a Windows service, starts it, creates an empty database, and restores a dump into the database. After the script is done, the database structure is fine and the data is there, but the xlog folder already contains 55 files (880 MB).
To reduce the size of the folder, I tried setting wal_keep_segments to 0 or 1, setting max_wal_size to 200MB, reducing checkpoint_timeout, setting archive_mode to off, and setting archive_command to an empty string. I can see the properties have been set correctly when I query pg_settings.
I then forced checkpoints through SQL, vacuumed the database, restarted the Windows service, and tried pg_archivecleanup; nothing really worked. My xlog folder shrank to 50 files (800 MB), not anywhere near the 200 MB limit I set in the config.
I have no clue what else to try. If anyone can tell me what I'm doing wrong, I would be very grateful. If more information is required, I'll be glad to provide it.
Many thanks
PostgreSQL won't aggressively remove WAL segments that were already allocated while max_wal_size was at its default value of 1GB.
The reduction will happen gradually, whenever a WAL segment is full and needs to be recycled. Then PostgreSQL will decide whether to delete the file (if max_wal_size is exceeded) or rename it to a new WAL segment for future use.
If you don't want to wait that long, you can force a number of WAL switches by calling the pg_switch_xlog() function, which should reduce the number of files in your pg_xlog.
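A rough sketch of doing that by hand (pg_switch_xlog() is the 9.6 name; it was renamed pg_switch_wal() in version 10):
CHECKPOINT;
SELECT pg_switch_xlog();  -- repeat as needed; it is a no-op if nothing was written since the last switch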