Psycopg2 can't connect to PostgreSQL - postgresql

I've set up a PostgreSQL database running in Docker and am currently trying to use psycopg2-binary to run a script that creates tables in the PostgreSQL db. I keep getting this error and I don't understand what to do next. Please help!
My script:
import psycopg2
import os
from dotenv import load_dotenv

HOST = os.environ.get("POSTGRES_HOST")
USER = os.environ.get("POSTGRES_USER")
PASSWORD = os.environ.get("POSTGRES_PASSWORD")
DATABASE = os.environ.get("POSTGRES_DB")
PORT = os.environ.get("POSTGRES_PORT")

class Connection():
    def __init__(self):
        self.conn = psycopg2.connect(
            database=DATABASE,
            user=USER,
            password=PASSWORD,
            host=HOST,
            port=PORT
        )
        self.cursor = self.conn.cursor()
Error:
File "venv/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
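One thing worth checking in the script above: load_dotenv is imported but never called, so the POSTGRES_* variables come back as None, and with an empty host psycopg2 falls back to the local Unix socket (/tmp/.s.PGSQL.5432) named in the error. A minimal sketch of the idea (pg_params and its defaults are hypothetical; match them to your docker-compose values) that forces a TCP connection instead:

```python
import os

def pg_params(env=None):
    # Fall back to TCP on localhost when POSTGRES_HOST is unset.
    # An empty/None host is what makes psycopg2 try the Unix socket.
    env = os.environ if env is None else env
    return {
        "host": env.get("POSTGRES_HOST") or "localhost",
        "port": int(env.get("POSTGRES_PORT") or 5432),
        "dbname": env.get("POSTGRES_DB") or "postgres",
        "user": env.get("POSTGRES_USER") or "postgres",
        "password": env.get("POSTGRES_PASSWORD") or "",
    }

# After calling load_dotenv() first, connect over TCP as usual:
# conn = psycopg2.connect(**pg_params())
params = pg_params({})
```

Remember to call load_dotenv() before reading the environment; importing it alone does nothing.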

Related

Cannot connect to aws rds using postgres and sqlalchemy

I created a db instance in AWS RDS with Easy create (which applies the default, recommended config), then I set Publicly accessible to true.
When I try to connect using my FastAPI app and the code:
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
SQLALCHEMY_DATABASE_URL = "postgresql://USER:PASSWORD@HOST/DB"
engine = create_engine(SQLALCHEMY_DATABASE_URL)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
It shows in the error:
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "database-1.cshqjmfxxxxx.eu-central-1.rds.amazonaws.com" (3.72.174.85), port 5432 failed: Connection timed out (0x0000274C/10060)
Is the server running on that host and accepting TCP/IP connections?
The configuration of the db instance is:

How to connect aws postgres db over the ssh using python

I have a Postgres db over the AWS and currently we connect that using postico client by providing below information-
DB Host
DB Port
DB Username
DB Password
DB Name
SSH Host (it is domain)
SSH Port
SSH Private Key
At the time I was using my organisation's VPN. Now I have to make the same connection from Python code, and since I can connect with Postico I believe it should work through code as well. I used the code below but am unable to connect to the db and fetch records, so can anyone give an idea or sample code?
def __init__(self, pgres_host, pgres_port, db, ssh, ssh_user, ssh_host, ssh_pkey):
    # SSH Tunnel Variables
    self.pgres_host = pgres_host
    self.pgres_port = pgres_port
    if ssh == True:
        self.server = SSHTunnelForwarder(
            (ssh_host, 22),
            ssh_username=ssh_user,
            ssh_private_key=ssh_pkey,
            remote_bind_address=(pgres_host, pgres_port),
        )
        server = self.server
        server.start()  # start ssh server
        self.local_port = server.local_bind_port
        print(f'Server connected via SSH || Local Port: {self.local_port}...')
    elif ssh == False:
        pass
def query(self, db, query, psql_user, psql_pass):
    engine = create_engine(f'postgresql://{psql_user}:{psql_pass}@{self.pgres_host}:{self.local_port}/{db}')
    print(f'Database [{db}] session created...')
    print(f'host [{self.pgres_host}]')
    self.query_df = pd.read_sql(query, engine)
    print('<> Query Successful <>')
    engine.dispose()
    return self.query_df
pgres = Postgresql_connect(pgres_host=p_host, pgres_port=p_port, db=db, ssh=ssh, ssh_user=ssh_user, ssh_host=ssh_host, ssh_pkey=ssh_pkey)
print(ssh_pkey)
query_df = pgres.query(db=db, query='Select * from table', psql_user=user, psql_pass=password)
print(query_df)
Connect just as you would locally after creating an SSH tunnel:
https://www.howtogeek.com/168145/how-to-use-ssh-tunneling/
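To make "connect as you would locally" concrete: once the tunnel is up, the engine URL should point at 127.0.0.1 and the forwarded local port, not at the remote pgres_host (which is only reachable from inside the tunnel). A minimal sketch (tunnel_dsn is a hypothetical helper):

```python
def tunnel_dsn(local_port, db, user, password):
    # After SSHTunnelForwarder.start(), use server.local_bind_port
    # as local_port and connect to the loopback address.
    return f"postgresql://{user}:{password}@127.0.0.1:{local_port}/{db}"

# e.g. engine = create_engine(tunnel_dsn(server.local_bind_port, db, user, password))
dsn = tunnel_dsn(6543, "mydb", "user", "secret")
```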

When trying to connect to Redshift through Python using the psycopg2 module, the following error is displayed

I was getting the following error when trying to connect to Redshift via Python using the psycopg2 module.
import psycopg2
my_db = 'dbname'
my_host = 'red-shift hostname'
my_port = '5439'
my_user = 'username'
my_password = 'password'
con = psycopg2.connect(dbname=my_db,host=my_host,port=my_port,user=my_user,password=my_password)
Error:
OperationalError: could not translate host name "redshift://redshift-cluster-1.cqxnjksdfndsjsdf.us-east-2.redshift.amazonaws.com" to address: Unknown host
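Note that the host string in the error still carries a "redshift://" scheme, and psycopg2's host parameter expects a bare hostname; passing a URL is exactly what produces "could not translate host name". A small sketch of stripping the scheme (bare_host is a hypothetical helper, and the hostname shown is a placeholder):

```python
from urllib.parse import urlparse

def bare_host(value):
    # If the value parses as a URL (e.g. "redshift://host"), return just
    # the hostname; otherwise assume it is already a bare hostname.
    parsed = urlparse(value)
    return parsed.hostname or value

host = bare_host("redshift://redshift-cluster-1.example.us-east-2.redshift.amazonaws.com")
```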
I faced a similar issue. It seems to be an SSL issue when trying to connect.
Use sqlalchemy-redshift to connect to your redshift cluster, it will work.
This is how the docs show to connect
import sqlalchemy as sa
db_uri = 'redshift+psycopg2://username:password@redshift-cluster-1.cqxnxldsfjjbsdc.us-east-2.redshift.amazonaws.com:5439/dbname'
eng = sa.create_engine(db_uri)

Trouble connecting to SQL database in Jupyter Notebook

I'm having trouble connecting to a SQL database in Jupyter Notebook.
I have changed the port to match the port on my computer (because it was not the default).
dbconn = psycopg2.connect(dbname='mimic',user=[hidden], host='localhost', password=[hidden], port=5433)
I'm using the exact username and password I use to login into the database online, so I expect it to successfully set up the connection, however Jupyter Notebook keeps giving me an Operational Error:
~\AppData\Local\Continuum\anaconda3\lib\site-packages\psycopg2\__init__.py in connect(dsn, connection_factory, cursor_factory, **kwargs)
124
125 dsn = _ext.make_dsn(dsn, **kwargs)
--> 126 conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
127 if cursor_factory is not None:
128 conn.cursor_factory = cursor_factory
OperationalError: FATAL: password authentication failed for user

how to fix "OperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly"

Services
My service based on flask + postgresql + gunicorn + supervisor + nginx
When deploying with Docker, after starting the service and then accessing the API, it sometimes reports this error and sometimes works fine.
The SQLAlchemy database connection also adds the parameter 'sslmode: disable'.
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 287, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1107, in _execute_clauseelement
    distilled_params,
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1248, in _execute_context
    e, statement, parameters, cursor, context
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1466, in _handle_dbapi_exception
    util.raise_from_cause(sqlalchemy_exception, exc_info)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 383, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1244, in _execute_context
    cursor, statement, parameters, context
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 552, in do_execute
    cursor.execute(statement, parameters)
OperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Information
Docker for Mac: version: 2.0.0.3 (31259)
macOS: version 10.14.2
Python: version 2.7.15
Recurrence method
When viewing port information with the command
lsof -i:5432
(port 5432 is the PostgreSQL default port), if the console output was:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
postgres 86469 user 4u IPv6 0xxddd 0t0 TCP *:postgresql (LISTEN)
postgres 86469 user 5u IPv4 0xxddr 0t0 TCP *:postgresql (LISTEN)
it would display the error message:
OperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly
but if the console output shows this:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 62421 user 26u IPv4 0xe93 0t0 TCP 192.168.2.7:6435->192.168.2.7:postgresql (ESTABLISHED)
postgres 86460 user 4u IPv6 0xed3 0t0 TCP *:postgresql (LISTEN)
postgres 86460 user 5u IPv4 0xe513 0t0 TCP *:postgresql (LISTEN)
postgres 86856 user 11u IPv4 0xfe93 0t0 TCP 192.168.2.7:postgresql->192.168.2.7:6435 (ESTABLISHED)
In that situation, the API works well.
Is it because of Docker for Mac?
Refer to https://github.com/docker/for-mac/issues/2442 ; that issue does not solve my problem.
Noticed a similar problem?
Refer to Python & Sqlalchemy - Connection pattern -> Disconnected from the remote server randomly ; that issue also does not solve my problem.
Solution
flask_sqlalchemy needs the parameter pool_pre_ping:
from flask_sqlalchemy import SQLAlchemy as _BaseSQLAlchemy

class SQLAlchemy(_BaseSQLAlchemy):
    def apply_pool_defaults(self, app, options):
        super(SQLAlchemy, self).apply_pool_defaults(app, options)
        options["pool_pre_ping"] = True

db = SQLAlchemy()
The same logic applies to plain sqlalchemy.orm (on which flask_sqlalchemy is based, by the way):
engine = sqlalchemy.create_engine(connection_string, pool_pre_ping=True)
More protection strategies can be setup such as it is described in the doc: https://docs.sqlalchemy.org/en/13/core/pooling.html#disconnect-handling-pessimistic
For example, here is my engine instantiation:
engine = sqlalchemy.create_engine(connection_string,
                                  pool_size=10,
                                  max_overflow=2,
                                  pool_recycle=300,
                                  pool_pre_ping=True,
                                  pool_use_lifo=True)

sqlalchemy.orm.sessionmaker(bind=engine, query_cls=RetryingQuery)
For RetryingQuery code, cf: Retry failed sqlalchemy queries
I'm posting my own answer to this, since none of the above addressed my particular setup (Postgres 12.2, SQLAlchemy 1.3).
To stop the OperationalErrors, I had to pass in some additional connect_args to create_engine:
create_engine(
    connection_string,
    pool_pre_ping=True,
    connect_args={
        "keepalives": 1,
        "keepalives_idle": 30,
        "keepalives_interval": 10,
        "keepalives_count": 5,
    }
)
Building on the Solution in the answer above and the info from @MaxBlax360's answer, I think the proper way to set these config values in Flask-SQLAlchemy is via app.config['SQLALCHEMY_ENGINE_OPTIONS']:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)

# pool_pre_ping should help handle DB connection drops
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {"pool_pre_ping": True}
app.config['SQLALCHEMY_DATABASE_URI'] = \
    f'postgresql+psycopg2://{POSTGRES_USER}:{dbpass}@{POSTGRES_HOST}:{POSTGRES_PORT}/{POSTGRES_DBNAME}'

db = SQLAlchemy(app)
See also Flask-SQLAlchemy docs on Configuration Keys
My db configuration:
app = Flask(__name__)
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {"pool_pre_ping": True}
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ['DATABASE_URL']
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
# Play with the following options:
app.config['SQLALCHEMY_POOL_SIZE'] = 10
app.config['SQLALCHEMY_MAX_OVERFLOW'] = 20
app.config['SQLALCHEMY_POOL_RECYCLE'] = 1800
db = SQLAlchemy(app)