Connecting to and executing queries on a TimescaleDB running in an EC2 or Lightsail instance from a Lambda function - postgresql

I am planning to have a TimescaleDB database running in an EC2 or Lightsail instance. I would like to be able to connect to and run queries on this TimescaleDB database from a Lambda function, in order to insert data into and read data from the DB.
I know TimescaleDB is a Postgres extension, and there are plenty of articles online documenting the process of connecting to a Postgres DB running inside AWS RDS from a Lambda, but I can't seem to find any describing how I would connect to one running in an EC2 or Lightsail instance.
Question: How do I connect to a TimescaleDB database running in an EC2 or Lightsail instance from a Lambda function?

I'd say the answer is the same as for connecting to RDS, as documented here:
https://docs.aws.amazon.com/lambda/latest/dg/vpc-rds.html
This answer also gives a good example of how to connect to a PostgreSQL RDS instance; the difference is that instead of using rds_config, you specify the hostname/IP and the other connection details so that they point to your EC2 instance. For example, if your EC2 instance's public DNS name is ec2-3-14-229-184.us-east-2.compute.amazonaws.com:
import sys
import logging

import psycopg2

logger = logging.getLogger()
logger.setLevel(logging.INFO)

host = "ec2-3-14-229-184.us-east-2.compute.amazonaws.com"
name = "demo_user"
password = "p#assword"
db_name = "demo"

try:
    # Connect to the TimescaleDB/PostgreSQL instance running on the EC2 host
    conn = psycopg2.connect(host=host,
                            database=db_name,
                            user=name,
                            password=password)
    cur = conn.cursor()
except psycopg2.OperationalError:
    logger.error("ERROR: Unexpected error: Could not connect to PostgreSQL instance.")
    sys.exit(1)
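
For completeness, here is a minimal sketch of how the conn and cur created above might be used inside the Lambda handler to insert and read data; the hypertable name conditions and its columns are hypothetical and only for illustration:

def lambda_handler(event, context):
    # Hypothetical hypertable "conditions" with (time, device_id, temperature) columns;
    # adjust the table and column names to your actual schema.
    cur.execute(
        "INSERT INTO conditions (time, device_id, temperature) VALUES (now(), %s, %s)",
        (event.get("device_id"), event.get("temperature")),
    )
    conn.commit()

    cur.execute(
        "SELECT time, device_id, temperature FROM conditions ORDER BY time DESC LIMIT 10"
    )
    rows = cur.fetchall()
    # Convert timestamps to strings so the return value is JSON-serializable.
    return {"rows": [[str(t), device, temp] for (t, device, temp) in rows]}

Keeping the connection at module level, as above, lets warm Lambda invocations reuse it instead of reconnecting on every call.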

Related

Can't connect to AWS Aurora MySQL

I'm exploring AWS RDS for the first time.
I have two MySQL instances. The first is the traditional MySQL RDS and the second is an Aurora MySQL instance.
Both RDSs have the same region, VPC, and security groups.
I can successfully connect to the traditional MySQL RDS from MySQL Workbench on my localhost (macOS Monterey, FWIW).
I cannot connect to the Aurora instance. When I attempt to do this I get the message "Unable to connect to localhost".
Clearly the hostname I'm using is not localhost (see attached). I've tried connecting from other SQL clients without any success. For example, from a Windows box using ODBC I get "unable to connect to " without any other explanation.

Error trying to do logical replication between PostgreSQL and AWS Aurora PostgreSQL could not connect to the publisher: could not connect to server

I'm trying to create a subscription.
It works well when I create it on a PostgreSQL instance or cluster (to PostgreSQL).
But when I try to do logical replication from PostgreSQL to AWS Aurora PostgreSQL I see the following error:
ERROR: could not connect to the publisher: could not connect to server: No route to host
Is the server running on host "my-db.dfsfdsfsdfd.us-east-1.rds.amazonaws.com" (10.2.5.7) and accepting
TCP/IP connections on port 5432?
Some notes:
I updated the parameter group to enable replication: rds.logical_replication (from 0 to 1)
The RDS policy has enough permissions.
Both DBs are in the same VPC, same subnets and share the security group
I can connect to both DBs, and as I said, subscription works if instead of using Aurora I use PostgreSQL
Any idea why this could be happening?
Thanks!
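
"No route to host" is a network-level failure rather than a PostgreSQL error, so one thing worth checking is whether the publisher endpoint is reachable at all on port 5432 from the machine or instance acting as the subscriber. A minimal sketch, using the placeholder hostname from the error message above:

import socket

# Placeholder endpoint from the error message; substitute the real publisher host.
publisher = "my-db.dfsfdsfsdfd.us-east-1.rds.amazonaws.com"

try:
    # If this fails, the problem is networking (security groups, routes, NACLs),
    # not the logical replication configuration itself.
    with socket.create_connection((publisher, 5432), timeout=5):
        print("TCP connection to the publisher on port 5432 succeeded")
except OSError as exc:
    print(f"Could not reach the publisher on port 5432: {exc}")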

Unable to connect to the database postgres

I have a job on my k8s cluster that initializes a Postgres DB, but during the run, it can't connect to the db. I have deployed the same job in another cluster with a different RDS Postgres DB without having any issues.
Error:
Unable to connect to the database at "postgresql://<username>:<password>@<endpoint>:5432/boundary?sslmode=disable"
CREATE DATABASE "boundary"
WITH ENCODING='UTF8'
OWNER=<username>
CONNECTION LIMIT=-1;
This is how my job is trying to establish the connection.
boundary database migrate -config /boundary/boundary-config.hcl || boundary database init -config /boundary/boundary-config.hcl || sleep 10000
I can also connect to the DB myself, but the job can't do so. Since this job is able to run on other clusters, I'm trying to figure out what could be wrong with the DB. The DB username has the same privileges as well. What do you think would cause such issues?
Thanks!
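
One way to narrow this down is to run the exact connection string the job uses from another pod in the same cluster and namespace and see whether it fails the same way. A minimal sketch with psycopg2, reusing the placeholder DSN from the error above:

import psycopg2

# Placeholder DSN; use the same values the job builds from its config.
dsn = "postgresql://<username>:<password>@<endpoint>:5432/boundary?sslmode=disable"

try:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("SELECT current_user, current_database(), version()")
        print(cur.fetchone())
except psycopg2.OperationalError as exc:
    # A failure here points at networking or credentials rather than the job itself.
    print(f"Connection failed: {exc}")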

Connect to AWS RDS database via psycopg2

I am trying to connect to my RDS database from my computer with a python script using psycopg2.
python code:
import psycopg2
from db_credentials import *
import logging
def get_psql_conn():
    conn = psycopg2.connect(dbname=DB_NAME, user=DB_USER, password=DB_PASS, host=DB_HOST)
    logging.info("connected to DB!")
    return conn
I get the following error:
psycopg2.OperationalError: could not connect to server: Operation timed out
Is the server running on host ********* and accepting
TCP/IP connections on port 5432?
The security groups assigned to the RDS database are SG 1 and SG 2 (screenshots omitted here).
I then tried adding a third security group, SG 3, which allows my computer's IP to access the DB.
I can connect to the DB from my EC2 instances, running the same Python script as above. This seemingly has to do with the second security group: when I remove it, I can no longer connect from my EC2 instances either, and I then get the same error as when trying to connect from my computer.
I have little understanding of RDS or security groups; I just followed internet tutorials and seemingly couldn't make much sense of it.
Any help is greatly appreciated! Thanks
When accessing an Amazon RDS database from the Internet, the database needs to be configured for Publicly Accessible = Yes.
This will assign a Public IP address to the database instance. The DNS Name of the instance will also resolve to the public IP address.
For good security on publicly-accessible databases, ensure that the Security Group only permits access from your personal IP address.
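
For reference, roughly what that setup might look like with boto3; the security group ID, DB instance identifier, and IP address below are placeholders, not values from the question:

import boto3

# Placeholder identifiers; substitute your own values.
SECURITY_GROUP_ID = "sg-0123456789abcdef0"
DB_INSTANCE_ID = "my-rds-instance"
MY_IP_CIDR = "203.0.113.10/32"  # your personal IP as a /32

rds = boto3.client("rds")
ec2 = boto3.client("ec2")

# Make the RDS instance reachable from the Internet.
rds.modify_db_instance(
    DBInstanceIdentifier=DB_INSTANCE_ID,
    PubliclyAccessible=True,
    ApplyImmediately=True,
)

# Allow inbound PostgreSQL traffic (port 5432) only from your own IP address.
ec2.authorize_security_group_ingress(
    GroupId=SECURITY_GROUP_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": MY_IP_CIDR, "Description": "my workstation"}],
    }],
)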

Amazon RDS Postgres name or service not known

I have 2 Amazon EC2 instances. I am using one for development. I started a new one that I want to get working properly so I can use this cleaner one to make an AMI. I am using Django with a Postgres backend in an RDS instance. The RDS instance is running Postgresql 9.4.4. The development EC2 instance (which is the one that works) is running Postgresql 9.3.9. The new instance is running Postgresql 9.3.10.
On the development instance I have no trouble connecting to and using the RDS instance with the command line:
psql --host django.xxxxxxxxx.us-east-1.rds.amazonaws.com --port 5432 --username django_login --dbname django_db
But if I use the same command on the new instance, I get
psql: could not translate host name "django.xxxxxxxxx.us-east-1.rds.amazonaws.com" to address: Name or service not known
Both EC2 instances are in the same security group (launch-wizard-1). The RDS has a security group with all TCP, all UDP and all ICMP allowed with launch-wizard-1 as the source.
The development instance is in us-east-1d. The new instance is in us-east-1c. The RDS instance is in us-east-1d. I suspect that might be the problem but as I understand the RDS documentation it should be fine. If however, that is the problem, do you know how to change the Availability Zone of an EC2 instance?
I have tried this with the RDS instance set to private and then to public. It did not make a difference.
Any ideas will be appreciated.
The problem was as I had suspected: the instance was in a different subnet. I made an AMI from the instance in us-east-1c and created a new instance in the us-east-1d subnet from that AMI. Now I can connect from the new instance.
BTW - it is not immediately obvious when you create an instance that you should set the subnet to match your other instances. Look for that option on the instance configuration page.
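
For reference, a rough sketch with boto3 of the AMI-and-relaunch step described above; the instance ID, subnet ID, and image name are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder IDs; substitute the real instance and subnet IDs.
SOURCE_INSTANCE_ID = "i-0123456789abcdef0"     # instance currently in us-east-1c
TARGET_SUBNET_ID = "subnet-0123456789abcdef0"  # a subnet in us-east-1d

# Create an AMI from the existing instance and wait until it is available.
image = ec2.create_image(InstanceId=SOURCE_INSTANCE_ID, Name="django-app-copy")
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# Launch a new instance from that AMI into the desired subnet (and hence AZ).
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t2.micro",
    SubnetId=TARGET_SUBNET_ID,
    MinCount=1,
    MaxCount=1,
)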