We are looking to configure gorm (https://gorm.io) with a highly available Postgres cluster. Currently, when we fail over the primary Postgres node, we start to see error messages such as "driver: bad connection" in the logs. We are hoping there is a way to configure the connection pool so that it can detect the bad connections, remove them from the pool, and establish new connections. Is this possible?
Tried failing over the primary postgres node to determine if gorm would establish new connections.
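For reference, GORM v2 exposes the underlying database/sql pool, so the settings we have been experimenting with look roughly like this (the DSN and the numbers are placeholders, not recommendations). A short ConnMaxLifetime/ConnMaxIdleTime bounds how long a connection that died during failover can linger before it is discarded and redialed:

package main

import (
	"time"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

func main() {
	// Placeholder DSN; point it at the cluster's primary endpoint.
	dsn := "host=db.example.com user=app password=secret dbname=app port=5432"
	db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
	if err != nil {
		panic(err)
	}

	// GORM delegates pooling to database/sql.
	sqlDB, err := db.DB()
	if err != nil {
		panic(err)
	}

	sqlDB.SetMaxOpenConns(20)
	sqlDB.SetMaxIdleConns(10)
	// Retire connections regularly so ones broken by a failover are
	// not handed out and do not fail with "driver: bad connection".
	sqlDB.SetConnMaxLifetime(5 * time.Minute)
	sqlDB.SetConnMaxIdleTime(1 * time.Minute)
}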
I have a job on my k8s cluster that initializes a Postgres DB, but during the run it can't connect to the DB. I have deployed the same job in another cluster, with a different RDS Postgres DB, without any issues.
Error:
Unable to connect to the database at "postgresql://<username>:<password>@<endpoint>:5432/boundary?sslmode=disable"
CREATE DATABASE "boundary"
WITH ENCODING='UTF8'
OWNER=<username>
CONNECTION LIMIT=-1;
This is how my job is trying to establish the connection.
boundary database migrate -config /boundary/boundary-config.hcl || boundary database init -config /boundary/boundary-config.hcl || sleep 10000
I can also connect to the db myself, but the job can't. Since this job runs fine on other clusters, I'm trying to figure out what could be wrong with this db. The db usernames have the same privileges as well. What do you think would cause such issues?
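For context, the manual check that succeeds for me is essentially this, run from a throwaway pod in the same namespace (the pod name and image tag are just placeholders):

kubectl run pg-check --rm -it --restart=Never --image=postgres:13 -- \
  psql "postgresql://<username>:<password>@<endpoint>:5432/boundary?sslmode=disable" -c "select 1;"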
Thanks!
My team and I are currently experiencing an issue where we can't connect to Cloud SQL's Postgres instance(s) from anything other than the psql CLI tool. We get a too many connections for database "postgres" error (in pgAdmin, DBeaver, and our Node typeorm/pg backend). It initially happened on our (only) Postgres database instance. After restarting, stopping and starting again, and increasing machine CPU/memory all proved to do nothing, I deleted the database instance entirely and created a new one from scratch.
However, after a few hours the problem came back. I know that we're not actually opening too many connections, as I am able to query pg_stat_activity from the psql command line, and only one of the connections listed (under the postgres username) is ours.
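The query I ran was along these lines (simplified):

select datname, usename, state, count(*)
from pg_stat_activity
group by datname, usename, state;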
My coworker also can't connect at all - not even from psql cli.
If it matters, we are using PostgreSQL 13, europe-west2 (London), single zone availability, db-g1-small instance with 1.7GB memory, 10GB HDD, and we have public IP enabled and the correct IP addresses whitelisted.
I'd really appreciate it if anyone has any insights into what's causing this.
EDIT: I further increased the instance size (to no longer be a shared core), and I managed to successfully connect my backend to it. However my psql cli no longer works - it appears that only the first client to connect is allowed to connect after a restart (even if it disconnects, other clients can't connect...).
From the error message, it is clear that the database "postgres" has a custom connection limit (set, for example, by ALTER DATABASE postgres CONNECTION LIMIT 1), and apparently it is quite small. Why is everyone trying to connect to that database anyway? Usually the postgres database is reserved for maintenance operations, and you should create other databases for daily use.
You can see the setting with:
select datconnlimit from pg_database where datname='postgres';
I don't know if the low setting is something you did, or maybe Google does it on its own for their cloud offering.
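If that limit does turn out to be the problem and you have the privileges to change it, you can lift it (-1 means unlimited):

alter database postgres connection limit -1;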
@jjanes had the right idea.
I created another database within the Cloud SQL instance that wasn't named postgres and then it was fine.
It wasn't anything to do with maximum connection settings (as this was within Google Cloud SQL) or not closing connections (as TypeORM/pg does this already).
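For anyone else landing here, the fix amounted to something like this (the database name is whatever your app should use):

create database appdb;
-- then point the client connection string at appdb instead of postgres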
When I deploy my Rundeck and database in a different cloud region, sometimes the following exception occurs:
java.net.SocketTimeoutException: connect timed out
The app which I developed uses a database connection pool, but the connection to the database never happens. How can I make Rundeck connect to the database?
You can enable the connection pool in Rundeck when you set up the RDB datasource. Use the following setting in rundeck-config.properties:
dataSource.pooled = true
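For reference, a complete Postgres datasource section in rundeck-config.properties looks something like this (host, database name, and credentials are placeholders):

dataSource.url = jdbc:postgresql://db.example.com:5432/rundeck
dataSource.driverClassName = org.postgresql.Driver
dataSource.username = rundeckuser
dataSource.password = rundeckpassword
dataSource.pooled = true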
Hope it helps.
I have two Postgres 9.3.5 instances in RDS, both in one security group that allows all inbound traffic from within the security group and all outbound traffic. I'm trying to set up one database to be able to select from a few tables from the other via postgres_fdw.
I've created the server -
create server master
foreign data wrapper postgres_fdw
OPTIONS (dbname 'main',
host 'myinstance.xxxxx.amazonaws.com');
as well as the requisite user mapping and foreign table -
create foreign table condition_fdw (
cond_id integer,
cond_name text
) server master options(table_name 'condition', schema_name 'data');
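The user mapping follows the usual pattern, along these lines (credentials elided):

create user mapping for current_user
server master
options (user '<username>', password '<password>');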
However, a simple select count(*) from condition_fdw gives me
ERROR: could not connect to server "master"
DETAIL: could not connect to server: Connection timed out
Is the server running on host "myinstance.xxxxxx.amazonaws.com" (xx.xx.xx.xx) and accepting
TCP/IP connections on port 5432?
I can connect to both databases via psql from an EC2 instance. I know that until recently RDS didn't support postgres_fdw, but I'm running a newer version that does.
In the create server statement, I have tried replacing "myinstance.xxxxxx.amazonaws.com" with the IP address it resolves to, no luck.
Any ideas?
Further Testing
I installed Postgres on an EC2 instance in the same security group; foreign tables pointing at the master server behave as expected.
postgres_fdw between databases on the same RDS instance works.
This all leads me to think it must be some issue with outgoing connections from postgres_fdw on my Postgres RDS instance.
To get postgres_fdw to work between two instances in AWS RDS (with VPC) I had to:
1. Enable custom_dns_resolution: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html#Appendix.PostgreSQL.CommonDBATasks.CustomDNS
2. Add the public IP of the querying database to the inbound rules of the security group, OR use the internal IP of the RDS instance as the server host: https://forums.aws.amazon.com/thread.jspa?threadID=235154
The first one is documented pretty well, but I could not get it to work until stumbling over the second one.
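For step 1, assuming the instance uses a custom DB parameter group (the group name below is a placeholder), the parameter can be set from the AWS CLI and takes effect after a reboot:

aws rds modify-db-parameter-group \
  --db-parameter-group-name my-postgres-params \
  --parameters "ParameterName=rds.custom_dns_resolution,ParameterValue=1,ApplyMethod=pending-reboot"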
It would appear that Amazon does not allow outgoing connections from RDS instances, so until that changes, using postgres_fdw across RDS instances is not possible. I'll have to run an EC2 instance as my Postgres server in order to use a foreign table to a database on another server.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html
I experienced a similar problem and found that postgres_fdw would only work in the EU-WEST-1 RDS region when the master and remote instances were in the same availability zone and on 9.3.5; if you crossed AZs, it wouldn't connect. My security group was 5432 TCP inbound only and all outbound.
If you are in the same availability zone, there is no reason why it wouldn't work.
I have Tomcat and PostgreSQL installed on a server. I'm having a connection problem trying to connect from my servlet to the PostgreSQL database using a c3p0 pool.
I can reach the DB if I run Tomcat locally on my laptop, and I can connect from the server to the DB using psql (i.e. the command-line SQL utility). But when I deploy my servlet to the server and try to establish a connection, I get the following error:
java.sql.SQLException: Connections could not be acquired from the underlying database!
com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106)
...
com.mchange.v2.resourcepool.CannotAcquireResourceException: A ResourcePool could not acquire a resource from its primary factory or source.
com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1319)
com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:557)
com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:477)
What should I check to locate the problem? It's probably a trivial issue, but maybe because it's 4 a.m. I'm missing something :) Thanks in advance!
PS: Connections from all network interfaces are allowed to the database. The PostgreSQL JDBC driver and c3p0 pool are distributed in the WAR. The Tomcat configuration is very much default. JNDI is not used.
You need to check a few things:
- java.policy which Tomcat is using (e.g. /etc/tomcat5.5/policy.d/02debian.policy)
- DB server settings (e.g. /etc/postgresql/pg_hba.conf)
- Try connecting without the pool first, as in my case c3p0 was hiding important information from me
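On the last point, a minimal pool-free test looks something like this (URL and credentials are placeholders); whatever exception it throws is usually the real root cause that c3p0 wraps:

import java.sql.Connection;
import java.sql.DriverManager;

public class PgConnectTest {
    public static void main(String[] args) throws Exception {
        // Plain JDBC, no pool: any failure here is the underlying problem.
        String url = "jdbc:postgresql://localhost:5432/mydb";
        try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}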
Adding to @Alexey's answer, I had this issue with Tomcat and PostgreSQL 9.4. In my case, the md5 authentication method in Postgres was causing the issue.
If you are using a Windows or RHEL server, make sure you update the authentication method in the pg_hba.conf file: change it to trust and restart PostgreSQL.
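For illustration, that means changing a line like this in pg_hba.conf (the address range is an example) and then restarting PostgreSQL; note that trust skips password checks entirely, so treat it as a debugging step:

# before
host    all    all    127.0.0.1/32    md5
# after
host    all    all    127.0.0.1/32    trust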