Cannot connect to AWS RDS using Postgres and SQLAlchemy

I created a DB instance in AWS RDS with Easy create (which applies the default, recommended configuration),
then I set Publicly accessible to true.
When I try to connect from my FastAPI app with this code:
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
SQLALCHEMY_DATABASE_URL = "postgresql://USER:PASSWORD@HOST/DB"
engine = create_engine(SQLALCHEMY_DATABASE_URL)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
It fails with this error:
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "database-1.cshqjmfxxxxx.eu-central-1.rds.amazonaws.com" (3.72.174.85), port 5432 failed: Connection timed out (0x0000274C/10060)
Is the server running on that host and accepting TCP/IP connections?
The configuration of the DB instance is: [screenshot of the instance configuration]

Related

FastAPI + Postgres + SQLAlchemy - Fatal: remaining connection slots are reserved

I am using FastAPI full-stack with Jinja2 templates.
The main problem involves SQLAlchemy and Postgres.
Here's an example of the main page:
async def read_posts(request: Request, page: int = 1, page_size: int = 12, db: Session = Depends(get_db)):
    start = (page - 1) * page_size  # offset of the first post on this page
    posts = db.query(models.Post).offset(start).limit(page_size).all()
    return templates.TemplateResponse("index.html", {"request": request, "posts": posts})
I just have a blog with a lot of posts, and page loading is very slow. I think I am somehow building the database queries wrong, but I can't find my mistake; it is a very simple app.
But the main problem is that the website cannot withstand load. Here are the statistics from one of the load-checking services:
[load statistics screenshot]
Here are the error logs under load:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "localhost" (::1), port 5432 failed: FATAL: remaining connection slots are reserved for non-replication superuser connections
I found out that it is a connection leak, but I can't find its source. I spent two days looking for the problem and got nothing.
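To confirm a suspected leak, one option (a sketch assuming psycopg2 and a login that can read pg_stat_activity; the DSN is a placeholder) is to count open connections per database while the app is under load:

import psycopg2

# placeholder DSN; point it at the same server the app uses
with psycopg2.connect("dbname=postgres user=postgres host=localhost") as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname")
        for datname, n in cur.fetchall():
            print(datname, n)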
I found out the answer.
In FastAPI you connect to the database like this:
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

SQLALCHEMY_DATABASE_URL = "sqlite:///./my.db"
engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)  # was missing from the snippet

# Dependency
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
The most important part is get_db(): you use this dependency wherever you access the database, assuming that when your query finishes the session will finally be closed, but that does not happen.
When you return posts to a template, as in the example above, the connection to the database is still held, and because of it there is a connection overflow.
This doesn't happen if you use a JSONResponse, for example, but with a TemplateResponse the database connection stays open and in use.
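A minimal sketch of one workaround (not from the original answer, and the route path is illustrative): materialize the rows and hand the session back yourself before rendering, so the connection returns to the pool before the template response is sent.

from fastapi import Depends, Request
from sqlalchemy.orm import Session

# app, models, templates and get_db are the ones defined in the question's setup
@app.get("/")
async def read_posts(request: Request, db: Session = Depends(get_db)):
    posts = db.query(models.Post).limit(12).all()  # .all() loads the rows into memory
    db.close()  # return the connection to the pool before the template renders
    return templates.TemplateResponse("index.html", {"request": request, "posts": posts})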

Psycopg2 can't connect to PostgreSQL

I've set up a PostgreSQL database running in Docker and am currently trying to use psycopg2-binary in a script that creates tables in the Postgres DB. I keep getting the error below and don't know what to do next. Please help!
My script:
import psycopg2
import os
from dotenv import load_dotenv

HOST = os.environ.get("POSTGRES_HOST")
USER = os.environ.get("POSTGRES_USER")
PASSWORD = os.environ.get("POSTGRES_PASSWORD")
DATABASE = os.environ.get("POSTGRES_DB")
PORT = os.environ.get("POSTGRES_PORT")

class Connection():
    def __init__(self):
        self.conn = psycopg2.connect(
            database=DATABASE,
            user=USER,
            password=PASSWORD,
            host=HOST,
            port=PORT
        )
        self.cursor = self.conn.cursor()
Error:
File "venv/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
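A hedged note: that socket path means psycopg2 fell back to the local Unix socket, which happens when host is None; here that would follow from load_dotenv being imported but never called. A quick sanity check under that assumption:

import os
from dotenv import load_dotenv

load_dotenv()  # must actually be called; importing it alone loads nothing
for var in ("POSTGRES_HOST", "POSTGRES_USER", "POSTGRES_DB", "POSTGRES_PORT"):
    print(var, "=", os.environ.get(var))  # any None here explains the socket fallback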

How to connect to an AWS Postgres DB over SSH using Python

I have a Postgres DB on AWS, and currently we connect to it using the Postico client by providing the information below:
DB Host
DB Port
DB Username
DB Password
DB Name
SSH Host (it is a domain)
SSH Port
SSH Private Key
For this I used my organisation's VPN. But now I have to make the same connection from Python code, and I believe that if I can connect with Postico I should be able to through code as well. I have used the code below but am unable to connect to the DB and fetch records, so can anyone give an idea or sample code?
class Postgresql_connect:  # class line restored; the original snippet began at __init__
    def __init__(self, pgres_host, pgres_port, db, ssh, ssh_user, ssh_host, ssh_pkey):
        # SSH Tunnel Variables
        self.pgres_host = pgres_host
        self.pgres_port = pgres_port
        if ssh == True:
            self.server = SSHTunnelForwarder(
                (ssh_host, 22),
                ssh_username=ssh_user,
                ssh_private_key=ssh_pkey,
                remote_bind_address=(pgres_host, pgres_port),
            )
            server = self.server
            server.start()  # start ssh server
            self.local_port = server.local_bind_port
            print(f'Server connected via SSH || Local Port: {self.local_port}...')
        elif ssh == False:
            pass

    def query(self, db, query, psql_user, psql_pass):
        engine = create_engine(f'postgresql://{psql_user}:{psql_pass}@{self.pgres_host}:{self.local_port}/{db}')
        print(f'Database [{db}] session created...')
        print(f'host [{self.pgres_host}]')
        self.query_df = pd.read_sql(query, engine)
        print('<> Query Successful <>')
        engine.dispose()
        return self.query_df

pgres = Postgresql_connect(pgres_host=p_host, pgres_port=p_port, db=db, ssh=ssh, ssh_user=ssh_user, ssh_host=ssh_host, ssh_pkey=ssh_pkey)
print(ssh_pkey)
query_df = pgres.query(db=db, query='Select * from table', psql_user=user, psql_pass=password)
print(query_df)
Connect just as you would locally after creating an SSH tunnel:
https://www.howtogeek.com/168145/how-to-use-ssh-tunneling/
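A minimal sketch of that advice, assuming sshtunnel and SQLAlchemy; all values are placeholders. The key difference from the code above is connecting to 127.0.0.1 at the tunnel's local port instead of the remote host:

from sshtunnel import SSHTunnelForwarder
from sqlalchemy import create_engine, text

# placeholder credentials; substitute the real ones from the question
ssh_host, ssh_user, ssh_pkey = "bastion.example.com", "ec2-user", "~/.ssh/id_rsa"
pgres_host, pgres_port = "mydb.xxxx.eu-central-1.rds.amazonaws.com", 5432
user, password, db = "USER", "PASSWORD", "DB"

with SSHTunnelForwarder(
    (ssh_host, 22),
    ssh_username=ssh_user,
    ssh_pkey=ssh_pkey,
    remote_bind_address=(pgres_host, pgres_port),
) as tunnel:
    # connect to the local end of the tunnel, not the remote host
    engine = create_engine(
        f"postgresql://{user}:{password}@127.0.0.1:{tunnel.local_bind_port}/{db}"
    )
    with engine.connect() as conn:
        print(conn.execute(text("SELECT 1")).scalar())
    engine.dispose()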

When trying to connect to Redshift through Python using the psycopg2 module, the following error is displayed

I was getting the following error when trying to connect to Redshift via Python using the psycopg2 module:
import psycopg2
my_db = 'dbname'
my_host = 'red-shift hostname'
my_port = '5439'
my_user = 'username'
my_password = 'password'
con = psycopg2.connect(dbname=my_db,host=my_host,port=my_port,user=my_user,password=my_password)
Error:
OperationalError: could not translate host name "redshift://redshift-cluster-1.cqxnjksdfndsjsdf.us-east-2.redshift.amazonaws.com" to address: Unknown host
I faced a similar issue; it seems to be an SSL issue when trying to connect.
Use sqlalchemy-redshift to connect to your Redshift cluster; it will work.
This is how the docs show the connection:
import sqlalchemy as sa
db_uri = 'redshift+psycopg2://username:password@redshift-cluster-1.cqxnxldsfjjbsdc.us-east-2.redshift.amazonaws.com:5439/dbname'
eng = sa.create_engine(db_uri)
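To verify the engine, a short follow-up sketch (not from the original answer):

with eng.connect() as conn:
    print(conn.execute(sa.text("SELECT 1")).scalar())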

Flask-SQLAlchemy losing connection after DB server restart

I use Flask-SQLAlchemy in my application. The DB is PostgreSQL 9.3.
I have a simple init of the db, a model, and a view:
from config import *
from flask import Flask, request, render_template
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://%s:%s@%s/%s' % (DB_USER, DB_PASSWORD, HOST, DB_NAME)
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    login = db.Column(db.String(255), unique=True, index=True, nullable=False)

db.create_all()
db.session.commit()

@app.route('/users/')
def users():
    users = User.query.all()
    return '1'
And all works fine. But when the DB server restarts (sudo service postgresql restart), on the first request to /users/ I get sqlalchemy.exc.OperationalError:
OperationalError: (psycopg2.OperationalError) terminating connection due to administrator command
SSL connection has been closed unexpectedly
[SQL: ....
Is there any way to renew the connection inside the view, or to set up Flask-SQLAlchemy another way so that it renews the connection automatically?
UPDATE.
I ended up using plain SQLAlchemy, declaring the engine, metadata and db_session for every view where I critically need them.
It is not a solution to the question, just a 'hack'.
So the question is open. I am sure it would be nice to find a solution for this :)
The SQLAlchemy documentation explains that the default behaviour is to handle disconnects optimistically. Did you try another request? The connection should have re-established itself. I've just tested this with a Flask/Postgres/Windows project and it works.
In a typical web application using an ORM Session, the above condition would correspond to a single request failing with a 500 error, then the web application continuing normally beyond that. Hence the approach is “optimistic” in that frequent database restarts are not anticipated.
If you want the connection state to be checked prior to a connection attempt, you need to write code that handles disconnects pessimistically. The following example code is provided in the documentation:
from sqlalchemy import exc
from sqlalchemy import event
from sqlalchemy.pool import Pool

@event.listens_for(Pool, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except:
        # optional - dispose the whole pool
        # instead of invalidating one at a time
        # connection_proxy._pool.dispose()

        # raise DisconnectionError - pool will try
        # connecting again up to three times before raising.
        raise exc.DisconnectionError()
    cursor.close()
Here are some screenshots of the event being caught in PyCharm's debugger, on the first DB request and again after a DB restart, in two environments:
[screenshots] Windows 7 (Postgres 9.4, Flask 0.10.1, SQLAlchemy 1.0.11, Flask-SQLAlchemy 2.1 and psycopg 2.6.1)
[screenshots] Ubuntu 14.04 (Postgres 9.4, Flask 0.10.1, SQLAlchemy 1.0.8, Flask-SQLAlchemy 2.0 and psycopg 2.5.5)
In plain SQLAlchemy you can add the pool_pre_ping=True kwarg when calling the create_engine function to fix this issue.
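For example (a minimal sketch; the URL is a placeholder):

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://user:password@host/dbname",
    pool_pre_ping=True,  # ping pooled connections with a lightweight SELECT before reuse
)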
When using Flask-SQLAlchemy you can use the same argument, but you need to pass it as a dict in the engine_options kwarg:
app.db = SQLAlchemy(app, engine_options={"pool_pre_ping": True})