I am using pgAdmin to connect remotely to my database as phpPgAdmin is a bit limited in its features. Only problem is if I leave the SQL window open without running a query for a few minutes, then it says I need to reconnect.
Is this a setting I need to change in my database to keep remote connections alive for longer or is it a pgAdmin setting?
It is a client setting. You need to specify connect_timeout in your config file. See the PostgreSQL documentation:
29.1. Database Connection Control Functions
29.14. The Connection Service File
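For example, a hypothetical service entry in ~/.pg_service.conf (host and database names are made up here) could set it; as an assumption about the actual cause of the drops, the libpq keepalives_idle parameter is included too, since TCP keepalives are what usually stop an idle session from being cut:

# ~/.pg_service.conf -- connect with: psql "service=mydb"
[mydb]
host=db.example.com
port=5432
dbname=mydb
# how long to wait when establishing a connection, in seconds
connect_timeout=10
# assumption: sending TCP keepalives every 60s keeps the idle session open
keepalives_idle=60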
What is the quickest way to open a command-line interface to a Google Cloud SQL database?
I like the old-school mysql command-line interface, so currently I open a terminal from the cloud console and then connect with gcloud sql connect .... This shows a message "Allowlisting your IP for incoming connection for 5 minutes..", which then sits for well over a minute before the password prompt is given.
Compounding things, the cloud console disconnects if you leave the tab for 10 minutes, so you have to do it all over again.
Are there any options to more quickly open a mysql command line client for quick queries? Should I spin up a linux server and connect from there? Load a MySQL client on my PC and connect from there? All of those are extra steps that I have to figure out, so I was wondering which connection method will give me the quickest connection speed just for simple querying.
Use the Cloud SQL Auth Proxy with a local database client, and just keep the proxy running.
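A minimal sketch, assuming the v2 cloud-sql-proxy binary and a made-up instance connection name:

# start the proxy once and leave it running in the background
./cloud-sql-proxy --port 3306 my-project:europe-west2:my-instance

# from then on, connections are instant -- no allowlisting wait
mysql -h 127.0.0.1 -P 3306 -u root -p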
My team and I are currently experiencing an issue where we can't connect to Cloud SQL's Postgres instance(s) from anything other than the psql CLI tool. We get a too many connections for database "postgres" error (in pgAdmin, DBeaver, and our Node typeorm/pg backend). It initially happened on our (only) Postgres database instance. After restarting, stopping and starting again, and increasing machine CPU/memory all proved to do nothing, I deleted the database instance entirely and created a new one from scratch.
However, after a few hours the problem came back. I know that we don't actually have too many connections, as I am able to query pg_stat_activity from the psql command line and see the active connections.
Only one of those (postgres username) connections is ours.
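The exact query isn't shown above, but a check along these lines lists the connections per database and user:

select datname, usename, count(*)
from pg_stat_activity
group by datname, usename;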
My coworker also can't connect at all - not even from the psql CLI.
If it matters, we are using PostgreSQL 13, europe-west2 (London), single zone availability, db-g1-small instance with 1.7GB memory, 10GB HDD, and we have public IP enabled and the correct IP addresses whitelisted.
I'd really appreciate if anyone has any insights into what's causing this.
EDIT: I further increased the instance size (to no longer be a shared core), and I managed to successfully connect my backend to it. However, my psql CLI no longer works - it appears that only the first client to connect is allowed to connect after a restart (even if it disconnects, other clients can't connect...).
From the error message, it is clear that the database "postgres" has a custom connection limit (set, for example, by ALTER DATABASE postgres CONNECTION LIMIT 1). And apparently, it is quite small. Why is everyone trying to connect to that database anyway? Usually the 'postgres' database is reserved for maintenance operations, and you should create other databases for daily use.
You can see the setting with:
select datconnlimit from pg_database where datname='postgres';
I don't know if the low setting is something you did, or maybe Google does it on its own for their cloud offering.
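If the limit was set by hand and your user has the privileges to change it, it can be lifted again (-1 means no limit):

alter database postgres connection limit -1;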
@jjanes had the right idea.
I created another database within the Cloud SQL instance that wasn't named postgres and then it was fine.
It wasn't anything to do with maximum connection settings (as this was within Google Cloud SQL) or with unclosed connections (TypeORM/pg already closes them).
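In other words, the fix amounted to something like this (the database name here is hypothetical), then pointing every client at the new database instead of postgres:

create database appdb;
-- then, e.g.: psql -h <instance-ip> -U postgres -d appdb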
When I try to connect to my Postgres database, I always receive a connection timeout error. For instance, I want to connect from pgAdmin. Can you please help with it?
PostgreSQL databases on PythonAnywhere are protected by a firewall, so external computers can't access them directly -- you need to use a thing called an SSH tunnel, which opens a secure SSH connection to PythonAnywhere, then sends the Postgres stuff over it.
This help page on the PythonAnywhere site has the details on how to set that up.
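The general shape of such a tunnel, with placeholder host names and ports (the real values are on that help page):

# forward local port 5433 to the Postgres server as seen from PythonAnywhere
ssh -L 5433:<postgres-host>:<postgres-port> <username>@ssh.pythonanywhere.com

# then point pgAdmin (or psql) at the local end of the tunnel
psql -h 127.0.0.1 -p 5433 -U <username> -d <dbname>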
I'm using several connections in SQL Developer to connect to different Oracle databases. For some connections I have to change the schema to that of another user. This can be done in several ways:
By using: alter session set current_schema = <otheruser>;
The drawback is that I have to enter this for every connection I want to open and with a different <otheruser> for each connection.
Using the global connection startup script in Preferences > Database > Filename for connection startup script. The drawback of this method is that SQL Developer runs the same global startup script for every connection I open, probably trying to set a non-existent schema in all but one of the connections.
Is there a way to automatically set the default schema on connecting to a database for individual connections?
Connection  Desired schema
conn_1      Leave current schema unchanged for this connection
conn_2      Change current schema to <schema_A> for this connection
conn_3      Leave current schema unchanged for this connection
conn_4      Change current schema to <schema_B> for this connection
conn_5      Change current schema to <schema_C> for this connection
A solution would be very helpful.
No, that is not a feature. We assume that when you define the connection, you are using the schema you want to work with.
The tool is VERY connection driven - using alter session set current_schema will work for queries you run in a SQL Worksheet, but won't have any effect on the rest of the tool, say browsing your tables in the Connections navigation tree.
Now, if you have PROXY connect privs, you could set up your connection to actually connect to your 'default' schema via proxy.
I show how here.
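A rough sketch of that setup, with hypothetical user and schema names (a DBA grants the privilege once, and the connection then authenticates as the proxy user but lands in the target schema):

-- as a DBA: allow myuser to proxy in as schema_a
alter user schema_a grant connect through myuser;

-- e.g. in SQL*Plus (SQL Developer's connection dialog has an equivalent proxy option):
connect myuser[schema_a]/mypassword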
I have postgres with an hba configuration file and pgbouncer for connection pooling.
I want to connect to pgbouncer (instead of postgres) only by changing the port number of the connection string (6543 instead of 5432). Both postgres and pgbouncer run on the same server.
So far, I've been able to have pgbouncer run with its own hba file with duplicated users/passwords. It's not maintainable (or at least very painful) to manually sync postgres and pgbouncer users/passwords.
Is there any way I can make pgbouncer forward user/password login attempts to postgres as-is? Or am I trying to work my conf against the way things should go?
Which version of pgbouncer do you use? Starting from 1.6, it is able to load users/passwords directly from the database. You just need to specify "auth_query" in your config file.
https://pgbouncer.github.io/config.html
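A minimal sketch of the relevant pgbouncer.ini lines, assuming an auth_user that is allowed to read pg_shadow (all names here are hypothetical):

[pgbouncer]
listen_port = 6543
auth_type = md5
; the one account that still needs an entry in pgbouncer's auth_file
auth_user = pgbouncer
; pgbouncer's documented default auth_query
auth_query = SELECT usename, passwd FROM pg_shadow WHERE usename=$1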