Flask & Alchemy - (psycopg2.OperationalError) FATAL: password authentication failed - postgresql

I'm new to Python. I have to develop a simple Flask app (on my local Ubuntu 16.04) with PostgreSQL as the database.
I installed pgAdmin, Flask, SQLAlchemy and Postgres, and this is my app code:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://dbUserName:userNamePassword@localhost/dbName'
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True)
    email = db.Column(db.String(120), unique=True)

    def __init__(self, username, email):
        self.username = username
        self.email = email

    def __repr__(self):
        return '<User %r>' % self.username

@app.route('/')
def index():
    return "Hello Flask"

if __name__ == "__main__":
    app.run()
I also created a database and a new user in pgAdmin (and put them into the related variables in my code), but when I try to test this code in the Python shell I get an error.
My Python code:
from app import db
result:
/home/user/point2map2/venv/lib/python3.5/site-packages/flask_sqlalchemy/__init__.py:839: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and '
Then:
db.create_all()
result:
(psycopg2.OperationalError) FATAL: password authentication failed for user "dbUserName"
FATAL: password authentication failed for user "dbUserName"
After a lot of searching in forums I found this guide:
in your pg_hba.conf
# IPv4 local connections:
# TYPE DATABASE USER CIDR-ADDRESS METHOD
host all all 127.0.0.1/32 trust
But it does not work for me.

I was stuck with this same error.
The problem for me was that I hadn't set the password for the psql user.
See a similar question with an answer here:
https://askubuntu.com/questions/413585/postgres-password-authentication-fails
It was solved when I ran:
ALTER USER db_username PASSWORD 'new_password'

It's an old question and I guess it's no longer important to you, but for people with the same problem in the future: I was stuck too. I found that PostgreSQL's default behavior folds unquoted identifiers to lowercase.
My problem was solved when I converted my username to lowercase.
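If you are not sure what the role name on the server actually is, here is a small sketch (placeholder superuser credentials): since PostgreSQL folds unquoted identifiers to lowercase, the stored name may not match what you typed, and this lists the role names exactly as stored.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="postgres", user="postgres", password="postgres")
cur = conn.cursor()
cur.execute("SELECT usename FROM pg_user;")
print(cur.fetchall())  # role names exactly as stored on the server
cur.close()
conn.close()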

After some debugging of my SQLAlchemy code, I saw that the URL that SQLAlchemy used was a URL-decoded string (at least for Postgres). This means that if you have a substring in your connection string such as %34, the SQLAlchemy connection string will contain 4, as that is the URL-decoded value. The solution for this problem is simple: replace all occurrences of % in the connection string with %25, as that is the URL encoding for %. The code for this is simply:
from sqlalchemy import create_engine
connection_string_orig = "postgres://user_with_%34_in_the_string:pw@host:port/db"
connection_string = connection_string_orig.replace("%", "%25")
engine = create_engine(connection_string)
print(engine.url) # should be identical to connection_string_orig
engine.connect()
This probably doesn't solve everyone's problem, but it's nevertheless good to be aware of it.
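A related sketch, not from the answer above: the SQLAlchemy documentation suggests URL-escaping passwords that contain special characters with urllib.parse.quote_plus, which handles % (and characters like @ or /) in one go. The user, host, port and database names here are hypothetical placeholders.
from urllib.parse import quote_plus
from sqlalchemy import create_engine

password = "pw_with_%34_and_@_in_it"  # hypothetical password containing URL-special characters
url = "postgresql://user:%s@host:5432/db" % quote_plus(password)
engine = create_engine(url)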

Related

FastAPI + Postgres + SQLAlchemy - Fatal: remaining connection slots are reserved

I am using FastAPI as a full stack with Jinja2 templates.
The main problem is with SQLAlchemy and Postgres.
Here's an example of the main page:
async def read_posts(request: Request, page: int = 1, page_size: int = 12, db: Session = Depends(get_db)):
    posts = db.query(models.Post).offset(start).limit(end).all()
    return templates.TemplateResponse("index.html", {"request": request, "posts": posts})
I just have a blog with a lot of posts, and the page loading speed is very slow. I think I somehow built the database queries wrongly, but I can't find what I did wrong; it is a very simple app.
But the main problem is that the website is not able to withstand load; here are statistics from one of the services used to check the load:
LOAD STATS
Here are the error logs from when there is load:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "localhost" (::1), port 5432 failed: FATAL: remaining connection slots are reserved for non-replication superuser connections
I found out that it is a connection leak, but I can't find the source of the problem. I spent 2 days trying to find the problem and got nothing.
I found the answer.
In FastAPI you connect to the database like this:
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

SQLALCHEMY_DATABASE_URL = "sqlite:///./my.db"

engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)  # session factory used by get_db() below
Base = declarative_base()

# Dependency
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
The most important part is get_db(): you call this dependency wherever you access the database and assume that when your query has finished the session will finally be closed, but that doesn't happen.
When you return posts to a template, as in the example above, there is still a connection to the database, and because of that the connections overflow.
It won't happen if you use JSONResponse, for example, but with TemplateResponse the database connection stays open and working.
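A minimal sketch of one workaround (a hypothetical route, reusing the app, templates, models and get_db objects from the question): materialize the query results and close the session before handing them to TemplateResponse, so the connection goes back to the pool even while the template renders.
from fastapi import Depends, Request
from sqlalchemy.orm import Session

@app.get("/")
async def read_posts(request: Request, db: Session = Depends(get_db)):
    # .all() fully loads the rows, so nothing lazy is left pointing at the session
    posts = db.query(models.Post).limit(12).all()
    db.close()  # release the connection now; the close() in get_db's finally block is then a harmless no-op
    return templates.TemplateResponse("index.html", {"request": request, "posts": posts})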

Why does Flask not show a traceback when the DB connection is bad?

I could not find any answer to this. I am trying to create two separate objects for different configurations, to separate the development environment from production.
So I came up with this config.py file:
class BaseConfig:
    SECRET_KEY = 'MYSECRET'

class DevConfig(BaseConfig):
    SQLALCHEMY_DATABASE_URI = 'sqlite:///market.db'
    DEBUG = True

class ProdConfig(BaseConfig):
    DEBUG = False
    pg_user = 'admin'
    pg_password = 'admin'
    pg_db = 'mytable'
    pg_host = 'localhost'
    pg_port = '5432'
    SQLALCHEMY_DATABASE_URI = f"postgresql://{pg_user}:{pg_password}@{pg_host}:{pg_port}/{pg_db}"
Now that I have this config file, I want to test it out and see a failure on purpose when I create a Flask instance and read from the ProdConfig object (note: I still don't have a Postgres instance running on localhost), so I expect a connection failure when I start my app. But why does it even let me run the application?
The project has one package; here is its __init__.py:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_bcrypt import Bcrypt
from flask_login import LoginManager
from market.config import DevConfig, ProdConfig
app = Flask(__name__)
app.config.from_object(ProdConfig) # Note this one!
db = SQLAlchemy(app)
bcrypt = Bcrypt(app)
login_manager = LoginManager(app)
login_manager.login_view = "login_page"
login_manager.login_message_category = "info"
And I have a run.py outside of the package that imports this package, so:
from market import app

# Checks whether run.py has been executed directly and not imported
if __name__ == '__main__':
    app.run()
I would expect Flask to fail immediately when the connection could not be established against the DB.
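For reference, SQLAlchemy engines connect lazily: no connection is attempted until the first query or an explicit connect, which is why the app starts without complaint. A minimal sketch of a run.py that forces a connection attempt at startup (assuming the app and db objects from the package above):
from sqlalchemy import text
from market import app, db

if __name__ == '__main__':
    with app.app_context():
        # raises OperationalError right away if the ProdConfig database is unreachable
        db.session.execute(text("SELECT 1"))
    app.run()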

Database 'height_collector' does not exist! Python flask

When I am trying to connect to my existing database (it really does exist) I get an error.
Here is my code and error:
from flask import Flask, render_template, request
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://postgres:postgres510@localhost/height_collector'
db = SQLAlchemy(app)

class Data(db.Model):
    __tablename__ = "data"
    id = db.Column(db.Integer, primary_key=True)
    email_ = db.Column(db.String(120), unique=True)
    height_ = db.Column(db.Integer)

    def __init__(self, email_, height_):
        self.email_ = email_
        self.height_ = height_

@app.route("/")
def index():
    return render_template("index.html")

@app.route("/success", methods=['post'])
def success():
    if request.method == 'POST':
        email = request.form["email_name"]
        height = request.form["height_name"]
        print(email, height)
    return render_template("success.html")

if __name__ == '__main__':
    app.debug = True
    app.run()
And then I get an error:
Data base ".." doesn't exist!
Here is a picture of my database
It's hard to be sure, but this is probably related to either network names, postgres server config, or permissions.
You need to go through different possibilities step by step and eliminate them as the cause. You can connect and see the db in pgAdmin, and you can't connect in Flask. Somewhere between these two is a difference which stops it from working.
Double-check that in pgAdmin you can correctly open the database which you see pictured and look at the tables (if any). It could be that pgAdmin is showing this db open but it isn't connectable any more.
Can you make sure that in pgAdmin, you use localhost as the host name of your connection, and not the IP address of the machine or anything else. If this is the problem, you need to look at how Postgres is configured, and in particular the listen_addresses setting in the Postgres config. If listen_addresses includes localhost, you should be good.
I don't see where you mentioned that you are using Windows, but another answerer seems to have assumed this; is this the case? Does the command ping localhost succeed in a shell?
Connect in pgAdmin using the exact user and password that you use in your Flask code.
Try to connect in Python, not in Flask. Open a Python shell, import psycopg2 and call psycopg2.connect(host='localhost', user='postgres', ...)
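For example, a minimal sketch of that check, filling in the credentials from the question's connection URI:
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    dbname="height_collector",
    user="postgres",
    password="postgres510",
)
print(conn.get_dsn_parameters())  # shows the connection parameters actually in effect
conn.close()
If this raises the same "database ... does not exist" error, the problem is on the PostgreSQL side rather than in your Flask code.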

Unable to connect to MS-SQL with ISQL

First post on StackExchange - please go easy :)
I have set up ODBC on CentOS 6 in order to perform MS-SQL queries from my Asterisk installation.
My config files are:
/etc/odbc.ini
[asterisk-connector]
Description = MS SQL connection to 'asterisk' database
Driver = /usr/lib64/libtdsodbc.so
Setup = /usr/lib64/libtdsS.so
Servername = SQL2
Port = 1433
Username = MyUsername
Password = MyPassword
TDS_Version = 7.0
/etc/odbcinst.ini
[odbc-test]
Description = TDS connection
Driver = /usr/lib64/libtdsodbc.so
Setup = /usr/lib64/libtdsS.so
UsageCount = 1
FileUsage = 1
/etc/asterisk/res_odbc.conf
[asterisk-connector]
enabled => yes
dsn => asterisk-connector
username => MyUsername
password => MyPassword
pooling => no
limit =>
pre-connect => yes
I am able to connect via ISQL when I pass in the password and username:
[root@TestVM etc]# isql -v asterisk-connector MyUsername MyPassword
+---------------------------------------+
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
+---------------------------------------+
SQL>
...but I should be able to connect without the username/password. All that returns is:
[root@TestVM etc]# isql -v asterisk-connector
[S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source
[01000][unixODBC][FreeTDS][SQL Server]Adaptive Server connection failed
[ISQL]ERROR: Could not SQLConnect
It is as if ISQL cannot read the username and password from the config files.
I need to be able to perform MS-SQL lookups from within the Asterisk dialplan, but for that to happen I must be able to call ISQL with just the data source name and can't pass in the authentication parameters.
All the guides I've read online state that I should be able to connect with just the
isql -v asterisk-connector
command, but that's not happening for me.
I've been pulling my hair out for a few days on this, so any help or pointers in the right direction would be much appreciated.
Thanks in advance.
Edit:
I have turned on logging, and may have a clue. The username and password definitely aren't being passed in. Look:
[ODBC][27557][1455205133.129690][SQLConnect.c][3614]
Entry:
Connection = 0xac3080
Server Name = [asterisk-connector][length = 18 (SQL_NTS)]
User Name = [NULL]
Authentication = [NULL]
UNICODE Using encoding ASCII 'ISO8859-1' and UNICODE 'UCS-2LE'
DIAG [01000] [FreeTDS][SQL Server]Adaptive Server connection failed
DIAG [S1000] [FreeTDS][SQL Server]Unable to connect to data source
So User Name and Authentication here are [NULL]. It's obviously not picking up the username / password in odbc.ini or res_odbc.conf, but the question is why. I'll keep investigating :)
Edit2:
The OSQL utility returns:
[root@TestVM etc]# osql -S SQL2 -U MyUsername -P MyPassword
checking shared odbc libraries linked to isql for default directories...
strings: '': No such file
trying /tmp/sqlH ... no
trying /tmp/sqlL ... no
trying /etc ... OK
checking odbc.ini files
reading /root/.odbc.ini
[SQL2] not found in /root/.odbc.ini
reading /etc/odbc.ini
[SQL2] found in /etc/odbc.ini
found this section:
looking for driver for DSN [SQL2] in /etc/odbc.ini
no driver mentioned for [SQL2] in odbc.ini
looking for driver for DSN [default] in /etc/odbc.ini
osql: error: no driver found for [SQL2] in odbc.ini
I would replace "Username" with "UID" and "Password" with "PWD" in your odbc.ini. From the FreeTDS Manual - Chapter 4 - Preparing ODBC:
The original ODBC solution to this conundrum employed the odbc.ini file. odbc.ini stored information about a server, known generically as a Data Source Name (DSN). ODBC applications connected to the server by calling the function SQLConnect(DSN, UID, PWD), where DSN is the Data Source Name entry in odbc.ini, UID is the username, and PWD the password. Any and all information about the DSN was kept in odbc.ini. And all was right with the world.
The ODBC 3.0 specification introduced a new function: SQLDriverConnect. The connection attributes are provided as a single argument, a string of concatenated name-value pairs. SQLDriverConnect subsumed the functionality of SQLConnect, in that the name-value pair string allowed the caller to pass — in addition to the original DSN, UID, and PWD — any other parameters the driver could accept. Moreover, the application can specify which driver to use. In effect, it became possible to specify the entire set of DSN properties as parameters to SQLDriverConnect, obviating the need for odbc.ini. This led to the use of the so-called DSN-less configuration, a setup with no odbc.ini.
OK, so I solved it (pretty much). The password and username in my odbc files were being ignored. Because I was calling the DB queries from Asterisk, I was using a file called res_odbc.conf too. This contained my username and password also, and when I run the query from Asterisk, it connects and returns the correct result.
In case it helps, here is my final working configuration.
odbc.ini
[asterisk-connector]
Description = MS SQL connection to asterisk database
driver = /usr/lib64/libtdsodbc.so
servername = SQL2
Port = 1433
User = MyUsername
Password = MyPassword
odbcinst.ini
[FreeTDS]
Description = TDS connection
Driver = /usr/lib64/libtdsodbc.so
UsageCount = 1
[ODBC]
trace = Yes
TraceFile = /tmp/sql.log
ForceTrace = Yes
freetds.conf
# $Id: freetds.conf,v 1.12 2007/12/25 06:02:36 jklowden Exp $
#
# This file is installed by FreeTDS if no file by the same
# name is found in the installation directory.
#
# For information about the layout of this file and its settings,
# see the freetds.conf manpage "man freetds.conf".
# Global settings are overridden by those in a database
# server specific section
[global]
# TDS protocol version
; tds version = 4.2
# Whether to write a TDSDUMP file for diagnostic purposes
# (setting this to /tmp is insecure on a multi-user system)
dump file = /tmp/freetds.log
; debug flags = 0xffff
# Command and connection timeouts
; timeout = 10
; connect timeout = 10
# If you get out-of-memory errors, it may mean that your client
# is trying to allocate a huge buffer for a TEXT field.
# Try setting 'text size' to a more reasonable limit
text size = 64512
# A typical Sybase server
[egServer50]
host = symachine.domain.com
port = 5000
tds version = 5.0
# A typical Microsoft server
[SQL2]
host = 192.168.1.59
port = 1433
tds version = 8.0
res_odbc.conf
[asterisk-connector]
enabled = yes
dsn = asterisk-connector
username = MyUsername
password = MyPassword
pooling = no
limit = 1
pre-connect = yes
Remember, if you are using 64-bit CentOS, to modify the driver path to lib64. Most of the guides online have the wrong paths for 64-bit.
Good luck - it's a headache :)
I contacted Nick Gorham, the developer of unixODBC, about this exact issue, and he confirmed that isql does not read the username/password from the config file:
Hi Nick,
I think unixODBC is a great project but I was surprised to see that it
is insecure (or at least I don’t know how to use it properly).
When I connect to the database using the isql I have to type in the
password. On a shared server this is insecure because the
$ ps -aux
Command shows the password in clear.
Is there a fix for that? Can I put the password in a file readable
only by my user?
Thank you for your help.
The answer:
Hi,
It depends on the driver. Some can read the user and password from the
odbc.ini or ~/.odbc.ini file so you can store the password there.
isql is only designed as a simple test app, there is nothing stopping
you from modifying isql to pull the user and password from a file of
your choice, decrypting it if needed.
I was having a slightly different issue, but my Google search led me here. When trying to connect through isql, I was getting Login failed for user '' even though I had specified a user in my odbc.ini file:
[SQLSERVER_SAMPLE]
Driver=ODBC Driver 17 for SQL Server
Server=SERVER
Database=DATABASE
Trusted_Connection=no
UID=USER
PWD=PASSWORD
I tried both UID and User, but both gave the same error. After reading @Andrei Sura's solution, I figured out that the username and password were being ignored.
My solution was to run isql -v SQLSERVER_SAMPLE USER PASSWORD even though the username and password were specified in the odbc.ini file - and it connected.

Flask-sqlalchemy losing connection after restarting of DB server

I use Flask-SQLAlchemy in my application. The DB is PostgreSQL 9.3.
I have a simple init of the db, a model and a view:
from config import *
from flask import Flask, request, render_template
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://%s:%s@%s/%s' % (DB_USER, DB_PASSWORD, HOST, DB_NAME)
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    login = db.Column(db.String(255), unique=True, index=True, nullable=False)

db.create_all()
db.session.commit()

@app.route('/users/')
def users():
    users = User.query.all()
    return '1'
And all works fine. But when the DB server restarts (sudo service postgresql restart), on the first request to /users/ I get sqlalchemy.exc.OperationalError:
OperationalError: (psycopg2.OperationalError) terminating connection due to administrator command
SSL connection has been closed unexpectedly
[SQL: ....
Is there any way to renew the connection inside the view, or to set up Flask-SQLAlchemy in another way so that it renews the connection automatically?
UPDATE.
I ended up using plain SQLAlchemy, declaring the engine, metadata and db_session for every view where I critically need it.
It is not a solution to the question, just a 'hack'.
So the question is open. I am sure it would be nice to find a solution for this :)
The SQLAlchemy documentation explains that the default behaviour is to handle disconnects optimistically. Did you try another request - the connection should have re-established itself? I've just tested this with a Flask/Postgres/Windows project and it works.
In a typical web application using an ORM Session, the above condition would correspond to a single request failing with a 500 error, then the web application continuing normally beyond that. Hence the approach is “optimistic” in that frequent database restarts are not anticipated.
If you want the connection state to be checked prior to a connection attempt, you need to write code that handles disconnects pessimistically. The following example code is provided in the documentation:
from sqlalchemy import exc
from sqlalchemy import event
from sqlalchemy.pool import Pool

@event.listens_for(Pool, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except:
        # optional - dispose the whole pool
        # instead of invalidating one at a time
        # connection_proxy._pool.dispose()

        # raise DisconnectionError - pool will try
        # connecting again up to three times before raising.
        raise exc.DisconnectionError()
    cursor.close()
Here are some screenshots of the event being caught in PyCharm's debugger:
Windows 7 (Postgres 9.4, Flask 0.10.1, SQLAlchemy 1.0.11, Flask-SQLAlchemy 2.1 and psycopg 2.6.1)
On first db request
After db restart
Ubuntu 14.04 (Postgres 9.4, Flask 0.10.1, SQLAlchemy 1.0.8, Flask-SQLAlchemy 2.0 and psycopg 2.5.5)
On first db request
After db restart
In plain SQLAlchemy you can add the pool_pre_ping=True kwarg when calling the create_engine function to fix this issue.
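For example, a minimal sketch with plain SQLAlchemy (placeholder connection URI):
from sqlalchemy import create_engine

# pool_pre_ping issues a lightweight "ping" when a connection is checked out of the
# pool and transparently replaces stale connections, e.g. after a database restart.
engine = create_engine(
    "postgresql://user:password@localhost/dbname",
    pool_pre_ping=True,
)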
When using Flask-SQLAlchemy you can use the same argument, but you need to pass it as a dict in the engine_options kwarg:
app.db = SQLAlchemy(app, engine_options={"pool_pre_ping": True})
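If you prefer to keep everything in the Flask config, Flask-SQLAlchemy (2.4 and newer) also reads the same engine options from the SQLALCHEMY_ENGINE_OPTIONS config key, so the following should be equivalent:
app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {"pool_pre_ping": True}
db = SQLAlchemy(app)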