We have a hosted PostgreSQL instance, with no access to the system or *.conf files.
I do have admin access and can connect to it using Oracle SQL Developer.
Can I run any command to increase max_connections? All the other parameters seem to be OK; shared memory and buffers can hold more connections, so there is no problem there.
Changing the max_connections parameter requires a PostgreSQL restart.
Commands
Check max_connections first, to keep the current value in mind:
SHOW max_connections;
Change the max_connections value:
ALTER SYSTEM SET max_connections TO '500';
Restart the PostgreSQL server.
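If you want to confirm that the new value is staged before restarting, pg_settings exposes a pending_restart flag (a sketch; on a self-managed server you would then restart, e.g. with pg_ctl restart -D <your data directory>):
SELECT name, setting, pending_restart
FROM pg_settings
WHERE name = 'max_connections';
-- pending_restart = t means ALTER SYSTEM wrote the new value to
-- postgresql.auto.conf and it is waiting for the restart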
Apparently, the hosted Postgres we are using (compose.io) does not provide this option.
So the workaround is to use PgBouncer to manage your connections better.
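A minimal, illustrative pgbouncer.ini sketch (the host, database name, and pool sizes are all assumptions, not values from this setup):
[databases]
mydb = host=db.example.com port=5432 dbname=mydb
[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20
Clients connect to port 6432, and PgBouncer multiplexes them onto a small pool of real server connections, which is how it works around a low max_connections.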
Related
I'm trying to audit connections to my Postgres databases.
I have 32 databases in my installation and one postgresql.conf for all of them.
I've set log_connections = on, and now my log file contains information about connections to all 32 databases.
But what should I do to monitor only the databases I need?
For example, I need to monitor connections to only 5 of them; the others are not interesting to me.
Where should I configure this?
It would be really nice if I could do it in postgresql.conf.
With the log_connections parameter, you cannot get that granularity; it cannot audit selected databases only. I would suggest you use the pgAudit extension.
By default pgAudit will log all databases, but you can change it to log per database by using
ALTER DATABASE <database name> SET pgaudit.log = '<value>';
If you are using AWS RDS/Aurora, refer to https://aws.amazon.com/premiumsupport/knowledge-center/rds-postgresql-pgaudit/
For community Postgres, see https://github.com/pgaudit/pgaudit/blob/master/README.md
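For example, a sketch of auditing only the databases you care about (the database names and log classes are hypothetical, and it assumes the instance-wide pgaudit.log is 'none', so that only the overridden databases are logged):
ALTER DATABASE sales SET pgaudit.log = 'read, write';
ALTER DATABASE billing SET pgaudit.log = 'ddl, write';
-- ...repeat for the other three databases of interest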
My team and I are currently experiencing an issue where we can't connect to Cloud SQL's Postgres instance(s) from anything other than the psql cli tool. We get a too many connections for database "postgres" error (in pgAdmin, DBeaver, and our Node TypeORM/pg backend). It initially happened on our (only) Postgres database instance. After restarting, stopping and starting again, and increasing machine CPU/memory all proved to do nothing, I deleted the database instance entirely and created a new one from scratch.
However, after a few hours the problem came back. I know that we're not actually hitting too many connections, as I am able to query pg_stat_activity from the psql command line, and only one of the connections listed (under the postgres username) is ours.
My coworker also can't connect at all - not even from the psql cli.
If it matters, we are using PostgreSQL 13, europe-west2 (London), single zone availability, db-g1-small instance with 1.7GB memory, 10GB HDD, and we have public IP enabled and the correct IP addresses whitelisted.
I'd really appreciate it if anyone has any insights into what's causing this.
EDIT: I further increased the instance size (to no longer be a shared core), and I managed to successfully connect my backend to it. However, my psql cli no longer works; it appears that only the first client to connect after a restart is allowed to connect (even after it disconnects, other clients can't connect...).
From the error message, it is clear that the database "postgres" has a custom connection limit (set, for example, by ALTER DATABASE postgres CONNECTION LIMIT 1). And apparently, it is quite small. Why is everyone trying to connect to that database anyway? Usually the 'postgres' database is reserved for maintenance operations, and you should create other databases for daily use.
You can see the setting with:
SELECT datconnlimit FROM pg_database WHERE datname = 'postgres';
I don't know if the low setting is something you did, or maybe Google does it on its own for their cloud offering.
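If the limit does turn out to be set at the database level and you have the privileges to change it, it can be lifted like this (-1 means no per-database limit):
ALTER DATABASE postgres CONNECTION LIMIT -1;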
@jjanes had the right idea.
I created another database within the Cloud SQL instance that wasn't named postgres, and then it was fine.
It wasn't anything to do with maximum connection settings (as this was within Google Cloud SQL) or with not closing connections (as TypeORM/pg handles this already).
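For reference, a minimal sketch of that fix (the database name is made up):
CREATE DATABASE appdb;
-- then point pgAdmin/DBeaver/TypeORM at appdb instead of the
-- maintenance database "postgres"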
I have a problem. I am learning PostgreSQL and I work with pgAdmin 4 v4. At this point, I am trying to set PostgreSQL to use more RAM for buffers than my computer has. I am thinking of using something like SET shared_buffers TO '256MB', but I am not sure if it is correct. Do you have any ideas?
SET shared_buffers TO '256MB'
This will not work, because shared_buffers must be set at server start and cannot be changed later, while SET is a command you would run after the server is already running. You would have to put the setting in postgresql.conf, or specify it with the -B option to the "postgres" command.
You could also set it with the ALTER SYSTEM command, and it would take effect at the next restart. However, you could easily make it a setting that causes your system to fail to start again (indeed, that appears to be your goal...), at which point you can't use ALTER SYSTEM to fix it and will have to dig into the .conf files.
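For completeness, ALTER SYSTEM writes its settings to postgresql.auto.conf inside the data directory, so that is the file to edit by hand if a bad value keeps the server from starting. A sketch, assuming you can reach the files:
ALTER SYSTEM SET shared_buffers = '256MB'; -- written to postgresql.auto.conf, applied at the next restart
ALTER SYSTEM RESET shared_buffers; -- undoes it, but only works while the server is still running
-- if the server no longer starts, delete the shared_buffers line from
-- postgresql.auto.conf with a text editor and start it again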
I am trying to change some parameters in the postgresql.conf file. I changed the parameters to the following values:
shared_buffers: 8000MB
work_mem: 3200MB
maintenance_work_mem: 1600MB
I have PostgreSQL installed on a server with 128GB of RAM. After making these changes, I restarted the PostgreSQL server. When I then use psql to check these parameters with SHOW <parameter_name>, I get the following values:
shared_buffers: 8000MB
work_mem: 4MB
maintenance_work_mem: 2047MB
Why were the changes reflected correctly in the shared_buffers parameter but not in the other two?
I also changed max_wal_size to 4GB and min_wal_size to 1000MB, but these parameters did not change either; the values shown are 1GB and 80MB. In conclusion, of all the changes I made, only the change to shared_buffers was reflected, while the others did not change.
Some possibilities for what might be the problem:
You edited the wrong postgresql.conf.
You restarted the wrong server.
The value was configured with ALTER SYSTEM.
The value was configured with ALTER USER or ALTER DATABASE.
Use the psql command \drds to see such settings.
To figure out from where PostgreSQL takes the setting, use
SELECT * FROM pg_settings WHERE name = 'work_mem';
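In particular, the source, sourcefile, and sourceline columns show exactly where each active value comes from, along these lines:
SELECT name, setting, unit, source, sourcefile, sourceline
FROM pg_settings
WHERE name IN ('work_mem', 'maintenance_work_mem', 'max_wal_size');
-- source = 'configuration file' points at the file named in sourcefile,
-- while 'database' or 'user' means an ALTER DATABASE/ALTER ROLE override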
I am using pgAdmin to connect remotely to my database, as phpPgAdmin is a bit limited in its features. The only problem is that if I leave the SQL window open without running a query for a few minutes, it says I need to reconnect.
Is this a setting I need to change in my database to keep remote connections alive for longer, or is it a pgAdmin setting?
It is a client setting.
You need to specify connect_timeout in your config file.
29.1. Database Connection Control Functions
29.14. The Connection Service File
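As an illustrative sketch (the service name, host, and values are assumptions), a libpq service file entry could look like this; connect_timeout and keepalives_idle are standard libpq connection parameters:
[mydb]
host=db.example.com
port=5432
dbname=mydb
connect_timeout=10
# send TCP keepalives after 60 seconds of inactivity, which can help
# keep idle remote connections from being dropped
keepalives_idle=60
pgAdmin and other libpq-based clients can then connect with service=mydb.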