SQLAlchemy explanation of schema generation - PostgreSQL

I am trying to use SQLAlchemy to create a schema in Postgres. I cannot find a working example of creating the schema via MetaData and the reflect function, where one declares the name of the schema.
conn = db.postgres_db(host, port, database, password, user)  # 'pass' is a reserved word in Python
postg_meta = sql.MetaData(bind=conn.engine)
postg_meta.reflect(schema='transactions')
stat_query = sql.select([sql.distinct(accounts.c.user_id)])
This is a snippet of the code given to me, and I am attempting to clarify where and how the schema ('transactions') is defined. I am new to SQLAlchemy and the rationale of some of its working pieces, so I would appreciate some assistance.
Could this refer to a file with an external class where the schema is defined?
Thanks.

Postgres has the concept of a schema, which sits hierarchically between the database and the table: database.schema.table.
SQLAlchemy supports this concept of schemas for Postgres and other databases.
Since you are performing a reflection, it is implicit that this schema already exists in your database. This is what the schema keyword is referring to.
You can confirm that this is the case by listing all the schemas in your database:
select schema_name
from information_schema.schemata
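For illustration, here is a minimal, self-contained sketch of the same reflect(schema=...) call. Since the mechanics on the SQLAlchemy side are identical, it uses an in-memory SQLite database with ATTACH as a stand-in for a real Postgres schema (the names `transactions` and `accounts` are taken from the snippet above):

```python
from sqlalchemy import MetaData, create_engine, text

# Stand-in engine; with Postgres this would be your real connection URL.
engine = create_engine("sqlite://")
with engine.begin() as conn:
    # Emulate CREATE SCHEMA transactions (SQLite has no schemas, so we
    # ATTACH a second in-memory database under that name instead).
    conn.execute(text("ATTACH DATABASE ':memory:' AS transactions"))
    conn.execute(text("CREATE TABLE transactions.accounts (user_id INTEGER)"))

# The schema must already exist: reflect() only reads what is there.
meta = MetaData()
meta.reflect(bind=engine, schema="transactions")
print(sorted(meta.tables))  # reflected tables are keyed as 'schema.table'
```

Note that reflected tables are keyed by their schema-qualified name, so the snippet above accesses the table as meta.tables['transactions.accounts'].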

Related

postgresql combine data from different schema tables if that schema exists

Need to create a view based on these conditions:
There are several schemas and tables in the db
We will create a union from tables from certain schemas
If a schema doesn't exist, we should skip it in our union.
It is given that if a schema exists, the associated table definitely exists; no need to check that.
The query should not give an error if any of the schemas is missing.
At the time of running the query, any schema could be missing; which ones is not known until the query is run.
So far, creating the view using unions is simple enough, but I'm not able to figure out the best way to include that check for schema existence. I'm sorry if this is a trivial or duplicate question; any advice or reference would be helpful.
Thanks,
CJ
In PostgreSQL you can check whether a schema exists:
SELECT schema_name FROM information_schema.schemata WHERE schema_name = 'name';
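One way to act on that check is to fetch the list of existing schemas first and only include the ones that are present when assembling the view. The helper below is only a sketch under assumed names (build_union_view, one shared table name per schema); in practice existing_schemas would come from the information_schema query above:

```python
def build_union_view(view_name, table, wanted_schemas, existing_schemas):
    """Build a CREATE VIEW statement that UNIONs <schema>.<table> for every
    wanted schema that actually exists, silently skipping the missing ones."""
    parts = [
        "SELECT * FROM {}.{}".format(schema, table)
        for schema in wanted_schemas
        if schema in existing_schemas
    ]
    if not parts:
        return None  # nothing to union; don't emit a broken statement
    return "CREATE OR REPLACE VIEW {} AS\n{};".format(
        view_name, "\nUNION ALL\n".join(parts)
    )

ddl = build_union_view(
    "all_orders", "orders",
    wanted_schemas=["tenant_a", "tenant_b", "tenant_c"],
    existing_schemas={"tenant_a", "tenant_c"},  # e.g. read from information_schema.schemata
)
print(ddl)
```

The missing tenant_b is simply left out of the generated statement, so the view creation never references a schema that isn't there.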

Why doesn't SQLAlchemy reflect tables in non-public schema?

I have a PostgreSQL database, which has had all objects in the public schema.
I used SQLAlchemy to successfully connect to it and reflect objects from it.
Now I needed to create a separate schema schema2 in the same database. I assigned to new user all rights in that schema, checked that I can connect from command line with psql and do things in it.
But SQLAlchemy doesn't see any tables in the schema when I try to use the same method to reflect its tables, despite trying various ways to specify the schema!
This is what worked for initial connection to public schema:
from sqlalchemy.ext.automap import automap_base
from sqlalchemy import create_engine
from sqlalchemy import Table, Integer, String, Text, Column, ForeignKey
Base=automap_base()
sql_conn='postgres://my_user@/foo'
engine=create_engine(sql_conn)
Base.prepare(engine, reflect=True)
Then I could use Base.classes.* for tables and I didn't need to create Table classes on my own.
Now, this same works for this new user for public schema as well.
But whenever I somehow try to pass the schema2 to reflect tables from schema2, I always get empty Base.classes.*
I tried all the solutions in SQLAlchemy support of Postgres Schemas but I don't get anything at all reflected!
I tried:
making the user's default schema schema2 via SQL:
ALTER ROLE new_user SET search_path=schema2;
tried to pass schema in engine.connect via engine options
tried to set schema in MetaData and use that, as per SQLAlchemy docs:
Doing:
meta=MetaData(bind=engine,schema='schema2')
meta.reflect()
does work, as I can see all the tables correctly in meta.tables afterwards.
However, when I try to get Base.classes working as per SQLAlchemy automapper docs, the Base.classes don't get populated:
from sqlalchemy.ext.automap import automap_base
from sqlalchemy import create_engine
from sqlalchemy import Table, Integer, String, Text, Column, ForeignKey, MetaData
sql_conn='postgres://new_user@/foo'
engine=create_engine(sql_conn)
meta=MetaData(bind=engine,schema='schema2')
Base=automap_base(metadata=meta)
Base.prepare(engine, reflect=True)
Base.classes is empty...
I am now stumped. Any ideas?
PS. I am working on newest SQLAlchemy available (1.3) under pip for Python2.7, Ubuntu 18.04LTS.

Default schema for native SQL queries (spring-boot + hibernate + postgresql + postgis)

I am introducing spring to the existing application (hibernate has already been there) and encountered a problem with native SQL queries.
A sample query:
SELECT ST_MAKEPOINT(cast(longitude as float), cast(latitude as float)) FROM
OUR_TABLE;
OUR_TABLE is in OUR_SCHEMA.
When we connect to the db with OUR_SCHEMA as the current schema:
spring.datasource.url: jdbc:postgresql://host:port/db_name?currentSchema=OUR_SCHEMA
the query fails because the function ST_MAKEPOINT is not found: it is located in the PUBLIC schema.
When we connect to the db without specifying the schema, ST_MAKEPOINT is found and runs correctly, though the schema name then needs to be added to the table name in the query.
As we are talking about thousands of such queries and all the tables are located in OUR_SCHEMA, is there a chance to anyhow specify the default schema, so still functions from PUBLIC schema were visible?
So far, I have tried the following springboot properties - with no success:
spring.jpa.properties.hibernate.default_schema: OUR_SCHEMA
spring.datasource.tomcat.initSQL: ALTER SESSION SET CURRENT_SCHEMA=OUR_SCHEMA
spring.datasource.initSQL: ALTER SESSION SET CURRENT_SCHEMA=OUR_SCHEMA
Also, it worked before switching to springboot config - specifying hibernate.default-schema = OUR_SCHEMA in persistence.xml was enough.
Stack:
spring-boot: 2.0.6
hibernate: 5.3.1.Final
postgresql: 42.2.5
postgis: 2.2.1
You're probably looking for the PostgreSQL search_path variable, which controls which schemas are checked when resolving database object names. The path accepts several schema names, which are checked in order, so you can use the following:
SET search_path=our_schema,public;
This will make PostgreSQL look for your tables (and functions!) first in our_schema, and then in public. Your JDBC driver may or may not support multiple schemas in its current_schema parameter.
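As a toy model of that lookup order (the resolve helper and the catalog dict are purely illustrative, not part of any library):

```python
def resolve(name, search_path, catalog):
    """Mimic PostgreSQL name resolution: walk search_path in order and
    return the first schema that contains the object."""
    for schema in search_path:
        if name in catalog.get(schema, set()):
            return "{}.{}".format(schema, name)
    return None  # unqualified name not visible on this path

# our_schema holds the tables, public holds the PostGIS functions.
catalog = {
    "our_schema": {"our_table"},
    "public": {"st_makepoint", "our_table"},
}
print(resolve("our_table", ["our_schema", "public"], catalog))     # our_schema wins
print(resolve("st_makepoint", ["our_schema", "public"], catalog))  # falls back to public
```

This is why a path of our_schema,public lets unqualified table names hit OUR_SCHEMA while the PostGIS functions are still found in public.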
Another option is to install the PostGIS extension (which provides the make_point() function) in the our_schema schema:
CREATE EXTENSION postgis SCHEMA our_schema;
This way you only have to have one schema in your search path.
The JDBC param currentSchema explicitly allows specifying several schemas, separated by commas:
jdbc:postgresql://postgres-cert:5432/db?currentSchema=my,public&connectTimeout=4&ApplicationName=my-app
From https://jdbc.postgresql.org/documentation/head/connect.html
currentSchema = String
Specify the schema (or several schema separated by commas) to be set in the search-path. This schema will be used to resolve unqualified object names used in statements over this connection.
Note you probably need Postgres 9.6 or better for currentSchema support.
PS: Probably a better solution is to set search_path per user:
ALTER USER myuser SET search_path TO mydb,pg_catalog;
If you use hibernate.default_schema, then for native queries you need to provide the {h-schema} placeholder, something like this:
SELECT ST_MAKEPOINT(cast(longitude as float), cast(latitude as float)) FROM {h-schema}OUR_TABLE;

Create PostgreSQL schemas before the create_all function starts creating the tables in Flask-SQLAlchemy

I am starting to develop an application with Flask and PostgreSQL for my back-end and I am using PostgreSQL schemas.
My problem is that when I call the database.create_all() function, SQLAlchemy throws an error because the schema doesn't exist.
I am trying to create the required schemas before the creation process starts, using SQLAlchemy events and CreateSchema, but I don't understand well how to use CreateSchema. This is the code:
from flask.ext.sqlalchemy import SQLAlchemy
from sqlalchemy.event import listens_for
from sqlalchemy.schema import CreateSchema

database = SQLAlchemy()

@listens_for(database.metadata, 'before_create')
def create_schemas(target, b, **kw):
    CreateSchema('app')
    database.session.commit()
The listener is called, but the error still persists; the schema isn't being created in the database.
NOTE:
The database is initialized with the Flask application instance in another module as following:
from .model import database

database.init_app(app)
with app.app_context():
    database.create_all()
So, I am not getting any problem related to application context.
EDIT:
CreateSchema represents a CREATE SCHEMA SQL statement, which must be executed with database.session.execute(CreateSchema('app')).
Now I realize that the listener is called every time a table is created, throwing the error:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) schema "app" already exists
Thanks.
The fast way I solved my problem was to create the schemas right before calling database.create_all(). I used the statement CREATE SCHEMA IF NOT EXISTS schema_name, which creates a schema only if it doesn't exist yet. This statement is only available starting with PostgreSQL 9.3.
# my model module
def create_schemas():
    """Note that this function runs a SQL statement available only in
    PostgreSQL 9.3+.
    """
    database.session.execute('CREATE SCHEMA IF NOT EXISTS schema_name')
    database.session.execute('CREATE SCHEMA IF NOT EXISTS schema_name2')
    database.session.commit()
I initialize the database and call the function from another module:
from .model import database, create_schemas

database.init_app(app)
with app.app_context():
    create_schemas()
    database.create_all()
Well, this is a solution that works for my specific situation.
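For completeness, the event-listener variant the question started from can also be made to work by executing the DDL on the connection the listener receives. Below is a self-contained sketch with placeholder names (`app`, `users`); it uses plain SQLAlchemy against an in-memory SQLite database, with ATTACH standing in for CREATE SCHEMA since SQLite has no schemas. On Postgres the listener would instead execute text("CREATE SCHEMA IF NOT EXISTS app"):

```python
from sqlalchemy import (
    Column, Integer, MetaData, Table, create_engine, event, inspect, text,
)

engine = create_engine("sqlite://")  # stand-in for the Postgres engine
metadata = MetaData()
Table("users", metadata, Column("id", Integer, primary_key=True), schema="app")

@event.listens_for(metadata, "before_create")
def _create_schema(target, connection, **kw):
    # On PostgreSQL: connection.execute(text("CREATE SCHEMA IF NOT EXISTS app"))
    # SQLite has no CREATE SCHEMA, so ATTACH a database under that name instead.
    connection.execute(text("ATTACH DATABASE ':memory:' AS app"))

# checkfirst=False so the table-existence probe doesn't touch the schema
# before the listener has had a chance to create it.
metadata.create_all(engine, checkfirst=False)
print(inspect(engine).get_table_names(schema="app"))
```

The key difference from the question's listener is that the DDL is executed on the connection argument the event hands in, inside the same transaction as create_all(), rather than constructing a CreateSchema object and never executing it.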

Specifying Postgres table schemas in Esqueleto/Persistent

I'm using Esqueleto with Postgres and I don't see a way to specify the schema that a table resides in. Currently I'm issuing the following SQL to set the schemas:
set search_path to foo,bar;
This allows me to use the tables I want as long as they are in schema foo or schema bar. Is there a way to just set the schema for each table?