Unable to connect to the database postgres - postgresql

I have a job on my k8s cluster that initializes a Postgres DB, but during the run, it can't connect to the db. I have deployed the same job in another cluster with a different RDS Postgres DB without having any issues.
Error:
Unable to connect to the database at "postgresql://<username>:<password>#<endpoint>:5432/boundary?sslmode=disable"
CREATE DATABASE "boundary"
WITH ENCODING='UTF8'
OWNER=<username>
CONNECTION LIMIT=-1;
This is how my job is trying to establish the connection.
boundary database migrate -config /boundary/boundary-config.hcl || boundary database init -config /boundary/boundary-config.hcl || sleep 10000
I can also connect to the DB myself, but the job can't. Since this job is able to run on other clusters, I'm trying to figure out what could be wrong with this DB. The DB usernames have the same privileges as well. What do you think could cause such issues?
Thanks!
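
One way to narrow down whether the problem is the database or the job's environment is to try the same connection URL from a throwaway pod in the cluster the job runs in. A minimal sketch with psycopg2 (assuming it's available in a debug image; the URL is a placeholder in the usual postgresql://user:password@host:port/db form, mirroring whatever the job passes to Boundary):

# Minimal connectivity check, assuming psycopg2 is available in a debug pod.
# The URL below is a placeholder; use the same value the job passes to Boundary.
import psycopg2

url = "postgresql://boundary:change-me@my-rds-endpoint.rds.amazonaws.com:5432/boundary?sslmode=disable"

try:
    conn = psycopg2.connect(url)
    with conn.cursor() as cur:
        cur.execute("SELECT current_user, version();")
        print(cur.fetchone())
    conn.close()
except psycopg2.OperationalError as exc:
    # DNS, security-group, credential, or pg_hba problems all surface here.
    print(f"connection failed: {exc}")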

Related

Use Terraform on Google Cloud SQL Postgres to create a Replication Slot

Overall I'm trying to create a Datastream Connection to a Postgres database in Cloud SQL.
As I'm trying to configure it all through Terraform, I'm stuck on how I should create a Replication Slot. This guide explains how to do it through the Postgres client by running SQL commands, but I thought there might be a way to do it directly in the Terraform configuration.
Example SQL that I would like to replicate in Terraform:
ALTER USER [CURRENT_USER] WITH REPLICATION;
CREATE PUBLICATION [PUBLICATION_NAME] FOR ALL TABLES;
SELECT PG_CREATE_LOGICAL_REPLICATION_SLOT('[REPLICATION_SLOT_NAME]', 'pgoutput');
If not, does anyone know how to run the Postgres SQL commands against the Cloud SQL database through Terraform?
I have set up the Datastream and Postgres connections for all the other parts. I'm expecting that there is a Terraform setting I'm missing, or a way to run Postgres commands against the Google Cloud SQL Postgres database.
Unfortunately, there is no Terraform resource for specifying a replication slot on a google_sql_database_instance.
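
If the slot can't be managed purely in HCL, the SQL from the guide can still be run out of band (for example from a script invoked by a Terraform local-exec provisioner or a CI step); community providers such as cyrilgdn/postgresql also expose publications and replication slots as resources, though it's worth verifying they work with Cloud SQL's permission model. A rough sketch of the out-of-band route in Python with psycopg2, where the connection details, publication name, and slot name are all placeholders:

# Rough sketch: run the same SQL as the guide, outside Terraform.
# Connection details, publication name, and slot name are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="10.0.0.5",          # Cloud SQL private IP or proxy address (placeholder)
    dbname="postgres",
    user="datastream_user",   # placeholder
    password="change-me",     # placeholder
)
conn.autocommit = True  # run each statement in its own transaction

with conn.cursor() as cur:
    cur.execute("ALTER USER CURRENT_USER WITH REPLICATION;")
    cur.execute("CREATE PUBLICATION datastream_pub FOR ALL TABLES;")
    cur.execute(
        "SELECT pg_create_logical_replication_slot(%s, 'pgoutput');",
        ("datastream_slot",),
    )
conn.close()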

Postgres ODBC connection error: FATAL: no pg_hba.conf entry

I am trying to create a linked server between the warehouse and an Amazon cloud service.
The service provider is using a PostgreSQL database.
I have installed the ODBC Driver (12.10) on my server but I keep getting this error.
I am not sure how to work around this as I have never used Postgres before.
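
A pg_hba.conf rejection means the server has no rule matching the combination of client address, database, user, and SSL mode the driver is sending, so one thing worth checking is whether the server insists on (or refuses) SSL. A rough diagnostic sketch with psycopg2, trying both modes with the same credentials (host and credentials are placeholders):

# Rough diagnostic, assuming Python and psycopg2 are available on the same server
# as the ODBC driver. Host and credentials are placeholders.
import psycopg2

params = dict(
    host="mydb.example.amazonaws.com",
    dbname="postgres",
    user="warehouse_reader",
    password="change-me",
)

for sslmode in ("require", "disable"):
    try:
        conn = psycopg2.connect(sslmode=sslmode, **params)
        print(f"sslmode={sslmode}: connected")
        conn.close()
    except psycopg2.OperationalError as exc:
        print(f"sslmode={sslmode}: {exc}")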

"error: too many connections for database 'postgres'" when trying to connect to any Postgres 13 instance

My team and I are currently experiencing an issue where we can't connect to Cloud SQL's Postgres instance(s) from anything other than the psql CLI tool. We get a too many connections for database "postgres" error (in pgAdmin, DBeaver, and our Node TypeORM/pg backend). It initially happened on our (only) Postgres database instance. After restarting it, stopping and starting it again, and increasing machine CPU/memory all proved to do nothing, I deleted the database instance entirely and created a new one from scratch.
However, after a few hours the problem came back. I know that we don't actually have too many connections, as I am able to query pg_stat_activity from the psql command line and see the following:
Only one of those (postgres username) connections is ours.
My coworker also can't connect at all - not even from psql cli.
If it matters, we are using PostgreSQL 13, europe-west2 (London), single zone availability, db-g1-small instance with 1.7GB memory, 10GB HDD, and we have public IP enabled and the correct IP addresses whitelisted.
I'd really appreciate it if anyone has any insights into what's causing this.
EDIT: I further increased the instance size (to no longer be a shared core), and I managed to successfully connect my backend to it. However my psql cli no longer works - it appears that only the first client to connect is allowed to connect after a restart (even if it disconnects, other clients can't connect...).
From the error message, it is clear that the database "postgres" has a custom connection limit (set, for example, by ALTER DATABASE postgres CONNECTION LIMIT 1), and apparently it is quite small. Why is everyone trying to connect to that database anyway? Usually the 'postgres' database is reserved for maintenance operations, and you should create other databases for daily use.
You can see the setting with:
select datconnlimit from pg_database where datname='postgres';
I don't know if the low setting is something you did, or maybe Google does it on its own for their cloud offering.
@jjanes had the right idea.
I created another database within the Cloud SQL instance that wasn't named postgres and then it was fine.
It wasn't anything to do with maximum connection settings (as this was within Google Cloud SQL) or not closing connections (as TypeORM/pg does this already).
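
For reference, the same check and workaround from code; psycopg2 is assumed, and the connection details and the new database name are just example values:

# Sketch: inspect the per-database connection limit and create a separate
# application database instead of connecting to "postgres".
# Connection details and the new database name are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="127.0.0.1", dbname="postgres", user="postgres", password="change-me"
)
conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction block

with conn.cursor() as cur:
    cur.execute("SELECT datconnlimit FROM pg_database WHERE datname = 'postgres';")
    print("connection limit for 'postgres':", cur.fetchone()[0])  # -1 means no limit
    cur.execute("CREATE DATABASE appdb;")  # dedicated DB for the application
conn.close()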

Connecting to MongoDB from AWS Glue

I am trying to create a connection from AWS Glue to MongoDB, but when I test the connection it fails with the error: "Check that your connection definition references your Mongo database with correct URL syntax, username, and password. Exiting with error code 30". I know that my connection parameters are correct because I can connect with the same host, port, database, user name, and password from another client application (DataGrip). And I know that my VPC configuration should be correct too, because I have another connection in Glue, to an on-premises PostgreSQL database with a public IP, that works just fine.
My MongoDB version is 4.4.1. I am out of ideas what else could cause the problem. Has anyone successfully connected to MongoDB from Glue and run the Crawler?
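
One way to confirm the URI syntax, credentials, and network path independently of Glue (for example from a small EC2 instance in one of the connection's subnets) is a minimal pymongo check; the URI below is a placeholder:

# Minimal connectivity test, assuming pymongo is installed; the URI is a placeholder.
from pymongo import MongoClient
from pymongo.errors import PyMongoError

uri = "mongodb://glue_user:change-me@10.0.1.25:27017/mydb"

client = MongoClient(uri, serverSelectionTimeoutMS=5000)
try:
    client.admin.command("ping")  # forces server selection, fails fast on bad host/auth
    print("connected")
except PyMongoError as exc:
    print(f"connection failed: {exc}")
finally:
    client.close()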

Connecting to and executing queries on a TimescaleDB running in an EC2 or Lightsail instance from a Lambda function

I am planning to have a TimescaleDB database running in an EC2 or Lightsail instance. I would like to be able to connect to and run queries on this database from a Lambda function, to insert data into and read data from the DB.
I know TimescaleDB is a Postgres extension, and there are plenty of articles online documenting the process of connecting to a Postgres DB running inside AWS RDS from a Lambda, but I can't seem to find any describing how I would connect to one running in an EC2 or Lightsail instance.
Question: How do I connect to a TimescaleDB running in an EC2 or Lightsail instance from a Lambda function?
I'd say the answer is the same as for how to connect to RDS, as documented here:
https://docs.aws.amazon.com/lambda/latest/dg/vpc-rds.html
This answer also gives a good example of how to connect to a PostgreSQL RDS instance; the only difference is that instead of using rds_config, you need to specify the hostname/IP and other connection details so that they point to your EC2 instance. For example, if your EC2 instance's public DNS name is ec2-3-14-229-184.us-east-2.compute.amazonaws.com:
import sys
import logging

import psycopg2

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Connection details for the EC2-hosted TimescaleDB (example values from above).
host = "ec2-3-14-229-184.us-east-2.compute.amazonaws.com"
name = "demo_user"
password = "p#assword"
db_name = "demo"

try:
    # TimescaleDB speaks the Postgres protocol, so a plain psycopg2 connection works.
    conn = psycopg2.connect(host=host,
                            database=db_name,
                            user=name,
                            password=password)
    cur = conn.cursor()
except psycopg2.Error:
    logger.error("ERROR: Unexpected error: Could not connect to PostgreSQL instance.")
    sys.exit()