Unable to connect to docker postgres container with pytest - postgresql

I am trying to use pytest to run some simple integration tests against a PostgreSQL db. I have to use Python 2.7, so testcontainers is not an option.
my conftest.py looks like:

import pytest
import docker  # the official Docker SDK for Python (the "docker" package)


@pytest.fixture(scope="session", autouse=True)
def db_init():
    """
    This db_init() fixture starts a docker container
    which is used throughout all the tests; once
    all the tests are completed, the container is
    stopped (and, because remove=True, automatically removed).
    """
    client = docker.from_env()
    container = client.containers.run(
        image="postgres:9.6",
        name="postgres",
        detach=True,
        environment={
            'POSTGRES_DB': 'postgres',
            'POSTGRES_PASSWORD': 'test',
            'POSTGRES_USER': 'postgres',
        },
        remove=True,
        ports={"5432/tcp": 5432},
    )
    yield container
    container.stop()
and here is a very basic test, test_postgres.py, that I am not able to run:
import docker
import socket
import psycopg2
from postgres import Postgres


def test_connection_to_db(db_init):
    """
    :param db_init:
    :return:
    """
    print(socket.gethostname())
    dbo = Postgres(
        default_dbuser='postgres',
        default_db='postgres',
        host='127.0.0.1',
        password='test',
        port='5432'
    )
    _superuser = 'userx'
    dbo.create_user(user=_superuser, password='start123', superuser=True)
Note that from postgres import Postgres is my custom postgres module, which works with no problem.
PYTHONPATH=lib pytest -s -vv tests/test_postgres.py
...
E OperationalError: server closed the connection unexpectedly
E This probably means the server terminated abnormally
E before or while processing the request.
However, when I start the docker container via the db_init fixture and then try to connect to it with psql, I can connect to the db with no problem:
psql -h 127.0.0.1 -p 5432 -U postgres -d postgres
Password for user postgres:
psql (10.6, server 9.6.12)
Type "help" for help.
postgres=#
Please advise; I am sure there is just a silly mistake.
Thanks
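A common cause of "server closed the connection unexpectedly" right after containers.run() returns is that the container is up but postgres inside it is still initializing, so the first connection attempt races the server start (which is also why a manual psql a moment later succeeds). A minimal retry helper the fixture could call before yielding - a sketch only; the helper name and timings are made up, and it stays Python 2.7 compatible:

```python
import time


def wait_for_db(connect, attempts=30, delay=1.0):
    """Call connect() until it succeeds or the attempts run out.

    connect is any zero-argument callable that raises while the server
    is not ready yet, e.g.:
        lambda: psycopg2.connect(host='127.0.0.1', port=5432,
                                 user='postgres', password='test',
                                 dbname='postgres').close()
    """
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as exc:  # psycopg2.OperationalError in practice
            last_error = exc
            time.sleep(delay)
    raise RuntimeError("database never became ready: %r" % (last_error,))
```

Calling wait_for_db(...) inside db_init, after containers.run() and before the test body runs, would make the fixture block until the database actually accepts connections.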

Related

How to create a test Postgres db in flask-testing

I am using flask-testing to do some unit tests on a Postgres application. According to the docs, I have the following code.
from flask_testing import TestCase
from src.game.models import db
from src import create_app


class BaseTest(TestCase):
    SQLALCHEMY_DATABASE_URI = 'postgresql://demo:demo@postgres:5432/test_db'
    TESTING = True

    def create_app(self):
        # pass in test configuration
        return create_app(self)

    def setUp(self):
        db.create_all()

    def tearDown(self):
        db.session.remove()
        db.drop_all()
Of course I got this error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL:
database "test_db" does not exist
I do have a database at postgresql://demo:demo@postgres:5432/demo, which is my production database.
How can I create test_db in this BaseTest class? I am using Python 3.6 and the latest flask and flask-sqlalchemy. Thank you very much.
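db.create_all() only creates tables, never the database itself, so test_db has to exist before setUp runs. One way is to connect to the maintenance postgres database on the same server and issue CREATE DATABASE if it is missing - a sketch under assumptions: the helper names are made up and psycopg2 is assumed available:

```python
def with_database(uri, dbname):
    """Swap the database name at the end of a postgres URI."""
    base, _, _ = uri.rpartition('/')
    return base + '/' + dbname


def ensure_database(test_uri, maintenance_db='postgres'):
    """Create the database named in test_uri if it does not exist,
    by connecting to the maintenance database on the same server."""
    import psycopg2  # imported lazily so with_database stays dependency-free
    conn = psycopg2.connect(with_database(test_uri, maintenance_db))
    conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction
    dbname = test_uri.rsplit('/', 1)[-1]
    with conn.cursor() as cur:
        cur.execute("SELECT 1 FROM pg_database WHERE datname = %s", (dbname,))
        if cur.fetchone() is None:
            cur.execute('CREATE DATABASE "%s"' % dbname)
    conn.close()
```

Calling ensure_database(self.SQLALCHEMY_DATABASE_URI) at the top of setUp, before db.create_all(), would then guarantee the database exists; the sqlalchemy-utils package offers the same thing ready-made as database_exists()/create_database().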

Flyway not able to find role with postgres docker

I am trying to run my first flyway example using the docker postgres image, but I am getting the following error:
INFO: Flyway Community Edition 6.4.2 by Redgate
Exception in thread "main" org.flywaydb.core.internal.exception.FlywaySqlException:
Unable to obtain connection from database (jdbc:postgresql://localhost/flyway-service) for user 'flyway-service': FATAL: role "flyway-service" does not exist
-------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL State : 28000
Error Code : 0
Message : FATAL: role "flyway-service" does not exist
at org.flywaydb.core.internal.jdbc.JdbcUtils.openConnection(JdbcUtils.java:65)
at org.flywaydb.core.internal.jdbc.JdbcConnectionFactory.<init>(JdbcConnectionFactory.java:80)
I looked inside the docker container and can see that the user role flyway-service is created as part of the docker-compose execution:
$ docker exec -it flywayexample_postgres_1 bash
root@b2037e382112:/# psql -U flyway-service;
psql (12.2 (Debian 12.2-2.pgdg100+1))
Type "help" for help.
flyway-service=# \du;
List of roles
Role name | Attributes | Member of
----------------+------------------------------------------------------------+-----------
flyway-service | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
flyway-service=#
Main class is:
public static void main( String[] args ) {
    var flyway = Flyway.configure().schemas("flyway_test_schema")
        .dataSource("jdbc:postgresql://localhost/flyway-service", "flyway-service",
            "password")
        .load()
        .migrate();
    System.out.println( "Flyway example's hello world!" );
}
The migration called src/main/resources/db/migration/V1__Create_person_table.sql:
create table PERSON (
    ID int not null,
    NAME varchar(100) not null
);
Docker-compose yml file:
version: "3.8"
services:
  postgres:
    image: postgres:12.2
    ports: ["5432:5432"]
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=flyway-service
I am running this code on macOS. I assume I am missing something obvious here, but I am not sure what! Any pointers would be appreciated.
I finally managed to figure out the issue with the help of a friend! The problem was not with the attached code, but with a postgres daemon process left running on port 5432 by an old Postgres installation.
I found the complete uninstallation procedure here. After removing the extra daemon process, only one process is listening on the port:
$ lsof -n -i4TCP:5432
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 654 root 50u IPv6 0x7ae1b5f8fbcf1cb 0t0 TCP *:postgresql (LISTEN)
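The same clash can be detected programmatically before starting a container; a probe like this (a sketch - it only tells you the port is taken, not by whom) returns True whenever anything is already accepting connections on 5432:

```python
import socket


def port_in_use(host="127.0.0.1", port=5432, timeout=1.0):
    """Return True if something is accepting TCP connections on host:port."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
    except OSError:
        return False
    sock.close()
    return True
```

If port_in_use() is True before your container is up, some other server (like the stale daemon above) will answer your client instead of the dockerized one.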

When trying to connect to Redshift through Python using the psycopg2 module, the following error is displayed

I was getting the following error when trying to connect to Redshift via Python using the psycopg2 module.
import psycopg2

my_db = 'dbname'
my_host = 'red-shift hostname'
my_port = '5439'
my_user = 'username'
my_password = 'password'
con = psycopg2.connect(dbname=my_db, host=my_host, port=my_port, user=my_user, password=my_password)
Error:
OperationalError: could not translate host name "redshift://redshift-cluster-1.cqxnjksdfndsjsdf.us-east-2.redshift.amazonaws.com" to address: Unknown host
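Note that the hostname quoted in the traceback still carries a scheme: psycopg2's host parameter must be a bare hostname, so a value starting with "redshift://" can never resolve. Stripping the scheme before connecting is the direct fix - a sketch, with a made-up helper name:

```python
def bare_host(host):
    """psycopg2 expects host= to be a plain hostname; strip any
    scheme prefix such as 'redshift://' or 'postgresql://'."""
    _, sep, rest = host.partition('://')
    return rest if sep else host
```

Passing host=bare_host(my_host) into psycopg2.connect(...) then gives the driver a hostname it can actually resolve.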
I faced a similar issue. Use sqlalchemy-redshift to connect to your Redshift cluster; it will work.
This is how the docs show to connect:
import sqlalchemy as sa

db_uri = 'redshift+psycopg2://username:password@redshift-cluster-1.cqxnxldsfjjbsdc.us-east-2.redshift.amazonaws.com:5439/dbname'
eng = sa.create_engine(db_uri)

PGBouncer: Can't connect to the right db

I am facing an issue. I have installed pgbouncer on a production server, on which I also have an Odoo instance and postgresql.
In my logs, I am seeing this:
2018-09-10 16:39:16.389 10123 WARNING C-0x1eb5478:
(nodb)/(nouser)@unix(18272):6432 pooler error: no such database: postgres
2018-09-10 16:39:16.389 10123 LOG C-0x1eb5478: (nodb)/(nouser)@unix(18272):6432 login failed: db=postgres user=oerppreprod
Here is the current conf of pgbouncer:
pgbouncer_archive = host=127.0.0.1 port=5432 dbname=archive
admin_users = postgres
ignore_startup_parameters = extra_float_digits
with the default config otherwise (I have only added/edited the above).
Why is it trying to connect to the postgres database?
When I go back to the previous conf (without PGBouncer, just swapping from port 6432 back to 5432), everything works....
Any idea?
Thanks in advance!
I had the same issue in my situation; maybe this will be useful to somebody. I solved it with a few steps:
At the beginning of every request, your framework or PDO (or whatever client) runs an initial query against the postgres catalog to check that the database you are asking for exists, before processing your request.
I removed the part "user=project_user password=mytestpassword" from the database section of the pgbouncer.ini file. As I tested, if you remove this part, pgbouncer will use your configured auth (in my case, the userlist.txt file).
I added the line "postgres = host=127.0.0.1 port=5432 dbname=postgres":
[databases]
postgres = host=127.0.0.1 port=5432 dbname=postgres
my_database = host=127.0.0.1 port=5432 dbname=my_database
My userlist.txt file looks like this (I am using auth_type = md5, so my password is an md5 hash):
"my_user" "md5passwordandsoelse"
I added my admin users to my pgbouncer.ini file:
admin_users = postgres, my_user
After all these manipulations, I advise you to check which user you are running queries as, using this simple query:
select current_user;
It should return the username you selected (in my case, my_user).
p.s. I should also mention that I used 127.0.0.1 because my pgbouncer is installed on the same server as postgres.

What is the client property of knexfile.js

In the knex documentation of the knexfile.js configuration for PostgreSQL, there is a property called client, which looks this way:
...
client: 'pg'
...
However, going through some other projects that utilize PostgreSQL I noticed that they have a different value there, which looks this way:
...
client: 'postgresql'
...
Does this string correspond to the name of some command line tool used with the project, or am I misunderstanding something?
PostgreSQL is based on a server-client model, as described in 'Architectural Fundamentals'.
psql is the standard CLI client of postgres, as mentioned here in the docs.
A client may as well be a GUI such as pgAdmin, or a node package such as 'pg' - here's a list.
The client parameter is required and determines which client adapter will be used with the library.
You should also read the docs on 'Server Setup and Operation'.
To initialize the library you can do the following (in this case on localhost):
var knex = require('knex')({
  client: 'mysql',
  connection: {
    host : '127.0.0.1',
    user : 'your_database_user',
    password : 'your_database_password',
    database : 'myapp_test'
  }
})
The standard user of the client daemon is 'postgres' - which you can use, of course, but it is highly advisable to create a new user, as stated in the docs, and/or to apply a password to the standard user 'postgres'.
On Debian stretch, for example:
# su - postgres
$ psql -d template1 -c "ALTER USER postgres WITH PASSWORD 'SecretPasswordHere';"
Make sure you delete the command line history so nobody can read out your pwd:
rm ~/.psql_history
Now you can add a new user (e.g. foobar) on the system and for postgres:
# adduser foobar
and
# su - postgres
$ createuser --pwprompt --interactive foobar
Let's look at the following setup:
module.exports = {
  development: {
    client: 'xyz',
    connection: { user: 'foobar', database: 'my_app' }
  },
  production: { client: 'abc', connection: process.env.DATABASE_URL }
};
This basically tells us the following:
In dev - use the client xyz to connect to postgresql's database my_app as the user foobar (in this case without a pwd).
In prod - read the db-server URL from the environment variable DATABASE_URL and connect via the client abc.
Here's an example of how node's pg client package opens a connection pool:
const { Pool } = require('pg')

const pool = new Pool({
  user: 'foobar',
  host: 'someUrl',
  database: 'someDataBaseName',
  password: 'somePWD',
  port: 5432,
})
If you could clarify or elaborate on your setup, or on what you would like to achieve, I could give you some more detailed info - but I hope that helped anyway.