I am using flask-testing to do some unit tests on a Postgres application. According to the docs, I have the following code.
from flask_testing import TestCase
from src.game.models import db
from src import create_app

class BaseTest(TestCase):
    SQLALCHEMY_DATABASE_URI = 'postgresql://demo:demo@postgres:5432/test_db'
    TESTING = True

    def create_app(self):
        # pass in test configuration
        return create_app(self)

    def setUp(self):
        db.create_all()

    def tearDown(self):
        db.session.remove()
        db.drop_all()
Of course, I got this error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL:
database "test_db" does not exist
I do have a database at postgresql://demo:demo@postgres:5432/demo, which is my production database.
How can I create test_db from this BaseTest class? I am using Python 3.6 and the latest Flask and Flask-SQLAlchemy. Thank you very much.
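One thing worth knowing here is that db.create_all() creates tables, not databases; test_db has to exist before the first connection succeeds. A common approach is to create it once from the always-present postgres maintenance database. The sketch below assumes psycopg2 is installed and that the credentials match the URI above; the helper names are mine, not from flask-testing.

```python
def maintenance_uri(uri):
    # Swap the database name at the end of a URI like
    # postgresql://demo:demo@postgres:5432/test_db for the
    # always-present 'postgres' maintenance database.
    base, _, _ = uri.rpartition("/")
    return base + "/postgres"

def create_database_if_missing(uri, name="test_db"):
    import psycopg2  # local import; assumes psycopg2 is installed
    conn = psycopg2.connect(maintenance_uri(uri))
    conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction
    with conn.cursor() as cur:
        cur.execute("SELECT 1 FROM pg_database WHERE datname = %s", (name,))
        if cur.fetchone() is None:
            # fine for a fixed test name; use psycopg2.sql for untrusted names
            cur.execute('CREATE DATABASE "%s"' % name)
    conn.close()
```

Calling create_database_if_missing(self.SQLALCHEMY_DATABASE_URI) in a setUpClass before db.create_all() would make the test run self-contained; the low-tech alternative is a one-off CREATE DATABASE test_db; by hand.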
I could not find any answer to this. I am trying to create two separate config objects to keep the development environment apart from production.
So I came up with this config.py file
class BaseConfig:
    SECRET_KEY = 'MYSECRET'

class DevConfig(BaseConfig):
    SQLALCHEMY_DATABASE_URI = 'sqlite:///market.db'
    DEBUG = True

class ProdConfig(BaseConfig):
    DEBUG = False
    pg_user = 'admin'
    pg_password = 'admin'
    pg_db = 'mytable'
    pg_host = 'localhost'
    pg_port = '5432'
    SQLALCHEMY_DATABASE_URI = f"postgresql://{pg_user}:{pg_password}@{pg_host}:{pg_port}/{pg_db}"
Now that I have this config file, I want to provoke a failure on purpose: I create a Flask instance that reads from the ProdConfig object (note: I still don't have a Postgres instance running on localhost), so I expect a connection failure when I start my app. But why does it let me run the application at all?
The project has one package; here is its __init__.py:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_bcrypt import Bcrypt
from flask_login import LoginManager
from market.config import DevConfig, ProdConfig
app = Flask(__name__)
app.config.from_object(ProdConfig) # Note this one!
db = SQLAlchemy(app)
bcrypt = Bcrypt(app)
login_manager = LoginManager(app)
login_manager.login_view = "login_page"
login_manager.login_message_category = "info"
And I have a run.py outside of the package that imports this package, so:
from market import app

# Checks whether run.py was executed directly and not imported
if __name__ == '__main__':
    app.run()
I would expect Flask to fail immediately when the connection to the DB could not be established.
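The reason nothing fails at startup is that SQLAlchemy connects lazily: create_engine() (and therefore SQLAlchemy(app)) performs no network I/O, and the first real connection only happens on the first query. A minimal sketch of an eager check you could run at startup (the function name is mine):

```python
from sqlalchemy import create_engine, text

def database_reachable(uri):
    engine = create_engine(uri)  # no connection is opened here
    try:
        with engine.connect() as conn:  # the actual TCP connect + auth
            conn.execute(text("SELECT 1"))
        return True
    except Exception:
        return False
```

Calling database_reachable(app.config['SQLALCHEMY_DATABASE_URI']) from run.py before app.run() would surface the Postgres misconfiguration immediately instead of on the first request.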
I am trying to use pytest to run some simple integration tests against a PostgreSQL DB. I have to use Python 2.7, so testcontainers are not an option.
My conftest.py looks like:
import pytest
import docker  # the official Docker SDK for Python package

@pytest.fixture(scope="session")
def finalizer_function(container_id):
    container_id.stop()

@pytest.fixture(scope="session", autouse=True)
def db_init():
    """
    This db_init() fixture starts up a docker container
    which is used throughout all the tests; once all the
    tests are completed, the container is automatically
    removed.
    """
    client = docker.from_env()
    con_id = client.containers.run(
        image="postgres:9.6",
        name="postgres",
        detach=True,
        environment={
            'POSTGRES_DB': 'postgres',
            'POSTGRES_PASSWORD': 'test',
            'POSTGRES_USER': 'postgres',
        },
        remove=True,
        ports={"5432/tcp": 5432},
    )
    # yield con_id.stop()
And here is a very basic test I am not able to run, test_postgres.py:
import socket

import psycopg2
from postgres import Postgres

def test_connection_to_db(db_init):
    print(socket.gethostname())
    dbo = Postgres(
        default_dbuser='postgres',
        default_db='postgres',
        host='127.0.0.1',
        password='test',
        port='5432'
    )
    _superuser = 'userx'
    dbo.create_user(user=_superuser, password='start123', superuser=True)
Consider that from postgres import Postgres is my custom postgres module, which works with no problem.
PYTHONPATH=lib pytest -s -vv tests/test_postgres.py
...
E OperationalError: server closed the connection unexpectedly
E This probably means the server terminated abnormally
E before or while processing the request.
However, when I start the docker container using the db_init fixture and try to connect to it with psql, I can connect with no problem:
psql -h 127.0.0.1 -p 5432 -U postgres -d postgres
Password for user postgres:
psql (10.6, server 9.6.12)
Type "help" for help.
postgres=#
Please advise; I am sure there is just a silly mistake somewhere.
Thanks.
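A likely culprit, given that psql works when you connect by hand a moment later: containers.run() returns as soon as the container process starts, but postgres inside it needs a few seconds to initialize, so the test connects to a server that is still starting up. A minimal retry helper (Python 2.7 compatible; the connect argument is any zero-argument callable, e.g. lambda: psycopg2.connect(host='127.0.0.1', user='postgres', password='test', dbname='postgres'), matching the fixture's environment above):

```python
import time

def wait_for_db(connect, attempts=30, delay=1.0):
    # Poll until the server accepts a connection; each failed
    # attempt sleeps briefly before retrying.
    last_error = None
    for _ in range(attempts):
        try:
            connect().close()
            return True
        except Exception as err:
            last_error = err
            time.sleep(delay)
    raise RuntimeError("database never became ready: %r" % (last_error,))
```

Calling wait_for_db(...) at the end of db_init, before the tests run, should remove the race; testcontainers performs essentially the same wait internally.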
When I try to connect to my existing database (it really does exist) I get an error.
Here are my code and the error:
from flask import Flask, render_template, request
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://postgres:postgres510@localhost/height_collector'
db = SQLAlchemy(app)

class Data(db.Model):
    __tablename__ = "data"
    id = db.Column(db.Integer, primary_key=True)
    email_ = db.Column(db.String(120), unique=True)
    height_ = db.Column(db.Integer)

    def __init__(self, email_, height_):
        self.email_ = email_
        self.height_ = height_  # note: was misspelled 'heigth_'

@app.route("/")
def index():
    return render_template("index.html")

@app.route("/success", methods=['POST'])
def success():
    if request.method == 'POST':
        email = request.form["email_name"]
        height = request.form["height_name"]
        print(email, height)
    return render_template("success.html")

if __name__ == '__main__':
    app.debug = True
    app.run()
And then I get this error:
database ".." doesn't exist!
Here is a picture of my database
It's hard to be sure, but this is probably related to either network names, postgres server config, or permissions.
You need to go through different possibilities step by step and eliminate them as the cause. You can connect and see the db in pgAdmin, and you can't connect in Flask. Somewhere between these two is a difference which stops it from working.
Double-check that in pgAdmin you can correctly open the database which you see pictured and look at the tables (if any). It could be that pgAdmin is showing this db open but it isn't connectable any more.
Make sure that in pgAdmin you use localhost as the host name of your connection, and not the IP address of the machine or anything else. If this is the problem, you need to look at how Postgres is configured, in particular the listen_addresses setting in postgresql.conf. If listen_addresses includes localhost, you should be good.
I don't see where you mentioned that you are using Windows, but another answerer seems to have assumed this; is that the case? Does the command ping localhost succeed in a shell?
Connect in pgAdmin using the exact user and password that you use in your Flask code.
Try to connect in Python, not in Flask. Open a Python shell, import psycopg2 and call psycopg2.connect(host='localhost', user='postgres', ...)
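That last step can be scripted. The sketch below is my own (the function names and message patterns are assumptions, based on common psycopg2.OperationalError texts) and just separates the three usual causes discussed above:

```python
def classify_failure(message):
    # Map common psycopg2.OperationalError texts to the likely cause.
    msg = message.lower()
    if "does not exist" in msg:
        return "wrong database name"
    if "password authentication failed" in msg:
        return "wrong user or password"
    if "could not connect" in msg or "connection refused" in msg:
        return "server unreachable (host, port or listen_addresses)"
    return "other"

def diagnose(**connect_kwargs):
    import psycopg2  # local import; assumes psycopg2 is installed
    try:
        psycopg2.connect(**connect_kwargs).close()
        return "ok"
    except psycopg2.OperationalError as err:
        return classify_failure(str(err))
```

Running diagnose(host='localhost', user='postgres', password='postgres510', dbname='height_collector') from a Python shell narrows the problem down in one step.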
I'm new to Python. I have to develop a simple Flask app (on my local Ubuntu 16.04) with PostgreSQL as the database.
I installed pgAdmin, Flask, SQLAlchemy and Postgres, and this is my app code:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://dbUserName:userNamePassword@localhost/dbName'
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True)
    email = db.Column(db.String(120), unique=True)

    def __init__(self, username, email):
        self.username = username
        self.email = email

    def __repr__(self):
        return '<User %r>' % self.username

@app.route('/')
def index():
    return "Hello Flask"

if __name__ == "__main__":
    app.run()
I also created a database and a new user in pgAdmin (and substituted them for the related variables in my code), but when I try to test this code in a Python shell I get an error.
my python code:
from app import db
result:
/home/user/point2map2/venv/lib/python3.5/site-packages/flask_sqlalchemy/__init__.py:839: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and '
Then:
db.create_all()
result:
(psycopg2.OperationalError) FATAL: password authentication failed for user "dbUserName"
FATAL: password authentication failed for user "dbUserName"
After a lot of searching in forums, I found this guide:
in your pg_hba.conf
# IPv4 local connections:
# TYPE DATABASE USER CIDR-ADDRESS METHOD
host all all 127.0.0.1/32 trust
But it did not work for me.
I was stuck with this same error.
The problem for me was that I hadn't set the password for the psql user.
See similar question with answer here:
https://askubuntu.com/questions/413585/postgres-password-authentication-fails
It got solved when I ran:
ALTER USER db_username PASSWORD 'new_password';
It's an old question and I guess it's no longer important to the asker, but for people with the same problem in the future: I was stuck too. I found that Postgres by default folds unquoted identifiers to lowercase.[1] My problem was solved when I converted my username to lowercase.
After some debugging of my SQLAlchemy code, I saw that the URL SQLAlchemy used was a URL-decoded string (at least for Postgres). This means that if your connection string contains a substring such as %34, SQLAlchemy will see it as 4, since that is what %34 URL-decodes to. The solution is simple: replace every occurrence of % in the connection string with %25, which is the URL encoding for %. The code is simply:
from sqlalchemy import create_engine

connection_string_orig = "postgres://user_with_%34_in_the_string:pw@host:port/db"
connection_string = connection_string_orig.replace("%", "%25")
engine = create_engine(connection_string)
print(engine.url)  # should be identical to connection_string_orig
engine.connect()
This probably doesn't solve everyone's problem, but it's nevertheless good to be aware of it.
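A more targeted alternative to the blanket replace, if you build the URI yourself, is to percent-encode only the userinfo with the standard library, so that %, @ and : in credentials survive URL parsing. The helper below is my own sketch, not an established API:

```python
from urllib.parse import quote_plus

def build_pg_uri(user, password, host, port, dbname):
    # quote_plus escapes '%' as '%25', '@' as '%40', ':' as '%3A',
    # so the credentials cannot be mis-parsed as URL syntax.
    return "postgresql://%s:%s@%s:%s/%s" % (
        quote_plus(user), quote_plus(password), host, port, dbname)
```

Example: build_pg_uri("user_with_%34", "p@ss:word", "host", 5432, "db") yields a URI whose decoded credentials round-trip correctly.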
I use Flask-SQLAlchemy in my application. The DB is PostgreSQL 9.3.
I have a simple init of the db, a model and a view:
from config import *
from flask import Flask, request, render_template
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://%s:%s@%s/%s' % (DB_USER, DB_PASSWORD, HOST, DB_NAME)
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    login = db.Column(db.String(255), unique=True, index=True, nullable=False)

db.create_all()
db.session.commit()

@app.route('/users/')
def users():
    users = User.query.all()
    return '1'
And it all works fine. But when the DB server is restarted (sudo service postgresql restart), the first request to /users/ raises sqlalchemy.exc.OperationalError:
OperationalError: (psycopg2.OperationalError) terminating connection due to administrator command
SSL connection has been closed unexpectedly
[SQL: ....
Is there any way to renew the connection inside the view, or to set up Flask-SQLAlchemy so that it renews the connection automatically?
UPDATE.
I ended up using plain SQLAlchemy, declaring the engine, metadata and db_session for every view where I critically need it.
It is not a solution to the question, just a 'hack'.
So the question is still open. I am sure it would be nice to find a solution for this :)
The SQLAlchemy documentation explains that the default behaviour is to handle disconnects optimistically. Did you try another request? The connection should have re-established itself. I've just tested this with a Flask/Postgres/Windows project and it works.
In a typical web application using an ORM Session, the above condition would correspond to a single request failing with a 500 error, then the web application continuing normally beyond that. Hence the approach is “optimistic” in that frequent database restarts are not anticipated.
If you want the connection state to be checked prior to a connection attempt, you need to handle disconnects pessimistically. The following example code is provided in the documentation:
from sqlalchemy import exc
from sqlalchemy import event
from sqlalchemy.pool import Pool

@event.listens_for(Pool, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except:
        # optional - dispose the whole pool
        # instead of invalidating one at a time
        # connection_proxy._pool.dispose()

        # raise DisconnectionError - pool will try
        # connecting again up to three times before raising.
        raise exc.DisconnectionError()
    cursor.close()
Here are some screenshots of the event being caught in PyCharm's debugger:
Windows 7 (Postgres 9.4, Flask 0.10.1, SQLAlchemy 1.0.11, Flask-SQLAlchemy 2.1 and psycopg 2.6.1)
On first db request
After db restart
Ubuntu 14.04 (Postgres 9.4, Flask 0.10.1, SQLAlchemy 1.0.8, Flask-SQLAlchemy 2.0 and psycopg 2.5.5)
On first db request
After db restart
In plain SQLAlchemy you can add the pool_pre_ping=True kwarg when calling the create_engine function to fix this issue.
When using Flask-SQLAlchemy you can use the same argument, but you need to pass it as a dict in the engine_options kwarg:
app.db = SQLAlchemy(app, engine_options={"pool_pre_ping": True})
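For comparison, the plain-SQLAlchemy form looks like this. The sqlite URL is only so the snippet runs without a server; with Postgres you would pass your real URI. pool_pre_ping makes the pool test each connection with a cheap round trip before handing it out, transparently replacing connections killed by a server restart:

```python
from sqlalchemy import create_engine, text

# Any URL works here; sqlite keeps the demonstration self-contained.
engine = create_engine("sqlite://", pool_pre_ping=True)
with engine.connect() as conn:
    value = conn.execute(text("SELECT 1")).scalar()
```

This is generally preferred over the checkout-event recipe above on SQLAlchemy 1.2+, since the ping and reconnect logic is built in.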