How could the following table be represented as an SQLA model?
CREATE TABLE x
(
    x_id integer GENERATED ALWAYS AS IDENTITY,
    x_name text UNIQUE
);
I think that if I create a column using:
id = Column(Integer, nullable=False, primary_key=True)
The generated SQL won't use GENERATED ALWAYS AS IDENTITY, but will instead use SERIAL.
What I'm unsure of is how to complete the following in order to use the GENERATED ALWAYS AS IDENTITY syntax:
class X(Base):
    __tablename__ = "x"
    x_id = Column( <something here> )
    x_name = Column(Text, unique=True)
You can use an Identity in your column definition.
class X(Base):
    __tablename__ = "x"
    x_id = Column(Integer, Identity(always=True), primary_key=True)
    x_name = Column(Text, unique=True)
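For reference, Identity requires SQLAlchemy 1.4 or later, and the model above should then emit DDL roughly along these lines instead of SERIAL:

CREATE TABLE x (
    x_id INTEGER GENERATED ALWAYS AS IDENTITY,
    x_name TEXT,
    PRIMARY KEY (x_id),
    UNIQUE (x_name)
)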
PS: there's no need to set nullable=False if you set primary_key=True; primary key columns are implicitly NOT NULL.
I am trying to build a system with Postgres in which users can connect with each other by sending requests. I am handling the security logic with RLS. However, I am having some trouble with the structure of one of my policies.
Here are the tables of concern, stripped of any nonessential columns:
CREATE TABLE profile (
    id UUID PRIMARY KEY,
    name text
);

CREATE TABLE request (
    id serial PRIMARY KEY,
    sender_id UUID,
    recipient_id UUID
);

CREATE TABLE connection (
    id serial PRIMARY KEY,
    owner_id UUID,
    contact_id UUID
);
Users are only allowed to add a connection if:
There is a row in the request table whose recipient_id references their profile id.
There is no existing connection that involves both users (regardless of which one is the owner_id and which is the contact_id).
Here is the Policy I've written:
CREATE POLICY "Participants INSERT" ON public.connection FOR
INSERT
WITH CHECK (
auth.uid() = owner_id
AND NOT EXISTS(
SELECT
*
FROM
public.connection c
WHERE
(
(
c.owner_id = owner_id --> Problem: How to reference query
AND c.contact_id = contact_id
)
OR (
c.contact_id = owner_id
AND c.owner_id = contact_id
)
)
)
AND EXISTS(
SELECT
*
FROM
public.request r
WHERE
(
r.sender_id = contact_id
AND r.recipient_id = owner_id
)
)
);
However, whenever I try to create a connection with a user whose id is already present in any row of the connection table (irrespective of whether it appears together with the correct contact_id), I get a policy violation error.
I think I might be referencing owner_id and contact_id incorrectly in the subqueries, because if I replace them manually with the appropriate ids as strings, it works.
I'd really appreciate everyone's input.
I found the answer: the parent query can be accessed through the name of the table. So all I had to do was qualify owner_id and contact_id like this:
connection.owner_id
connection.contact_id
On a side note: the values can NOT be accessed with the schema-qualified form:
public.connection.owner_id
public.connection.contact_id
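Putting it together, the working policy (same logic as above, with the outer columns qualified by the table name):

CREATE POLICY "Participants INSERT" ON public.connection
FOR INSERT
WITH CHECK (
    auth.uid() = owner_id
    AND NOT EXISTS (
        SELECT *
        FROM public.connection c
        WHERE (c.owner_id = connection.owner_id
               AND c.contact_id = connection.contact_id)
           OR (c.contact_id = connection.owner_id
               AND c.owner_id = connection.contact_id)
    )
    AND EXISTS (
        SELECT *
        FROM public.request r
        WHERE r.sender_id = connection.contact_id
          AND r.recipient_id = connection.owner_id
    )
);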
I'm using SQLAlchemy 1.3.4 and PostgreSQL 11.3.
I have the following (simplified) table definition:
class MyModel(Base):
    __tablename__ = 'mymodel'
    id = Column(Integer, primary_key=True)
    col1 = Column(Unicode, nullable=False)
    col2 = Column(Unicode, nullable=False)
    col3 = Column(Unicode, nullable=False)
    col4 = Column(Boolean)
    created_at = Column(DateTime(timezone=True), nullable=False)
    updated_at = Column(DateTime(timezone=True), nullable=False)
    __table_args__ = (
        Index('uq_mymodel_col1_col2_col3_col4',
              col1, col2, col3, col4,
              unique=True, postgresql_where=col4.isnot(None)),
        Index('uq_mymodel_col1_col2_col3',
              col1, col2, col3,
              unique=True, postgresql_where=col4.is_(None)),
    )
(I had to create two unique indexes rather than a UniqueConstraint, because a UniqueConstraint would allow multiple rows with the same (col1, col2, col3) if col4 is null, which I do not want.)
I'm trying to do the following query:
INSERT INTO mymodel (col1, col2, col3, col4, created_at, updated_at)
VALUES (%(col1)s, %(col2)s, %(col3)s, %(col4)s, %(created_at)s, %(updated_at)s)
ON CONFLICT DO UPDATE SET updated_at = %(param_1)s
RETURNING mymodel.id
I can't figure out how to properly use SQLAlchemy's on_conflict_do_update() though. :-/
Here is what I tried:
from sqlalchemy.dialects.postgresql import insert

values = {…}
stmt = insert(MyModel.__table__).values(**values)
stmt = stmt.returning(MyModel.__table__.c.id)
stmt = stmt.on_conflict_do_update(set_={'updated_at': values['updated_at']})
result = dbsession.connection().execute(stmt)
However, SQLAlchemy complains: Either constraint or index_elements, but not both, must be specified unless DO NOTHING
I find it very unclear how to use constraint or index_elements.
I tried a few things, to no avail. For example:
values = {…}
stmt = insert(MyModel.__table__).values(**values)
stmt = stmt.returning(MyModel.__table__.c.id)
stmt = stmt.on_conflict_do_update(constraint='uq_mymodel_col1_col2_col3_col4',
                                  set_={'updated_at': values['updated_at']})
result = dbsession.connection().execute(stmt)
But then this doesn't work either: constraint "uq_mymodel_col1_col2_col3_col4" for table "mymodel" does not exist. But it does exist! (I even copy-pasted the name from psql to make sure I hadn't made a typo.)
In any case, I have two unique constraints which can raise a conflict, but on_conflict_do_update() seems to only take one. So I also tried specifying both like this:
values = {…}
stmt = insert(MyModel.__table__).values(**values)
stmt = stmt.returning(MyModel.__table__.c.id)
stmt = stmt.on_conflict_do_update(constraint='uq_mymodel_col1_col2_col3_col4',
                                  set_={'updated_at': values['updated_at']})
stmt = stmt.on_conflict_do_update(constraint='uq_mymodel_col1_col2_col3',
                                  set_={'updated_at': values['updated_at']})
result = dbsession.connection().execute(stmt)
But I get the same error: that uq_mymodel_col1_col2_col3_col4 does not exist.
At this point I just can't figure out how to do the above query, and would really appreciate some help.
OK, I think I figured it out. The problem didn't come from SQLAlchemy after all; I was actually misusing PostgreSQL.
First, the SQL query I pasted above didn't work because, like SQLAlchemy, PostgreSQL requires specifying either the index columns or a constraint name.
And when I specified one of my constraints, PostgreSQL gave me the same error as SQLAlchemy. That's because my "constraints" weren't actually constraints but unique indexes, and ON CONFLICT really wants a unique constraint, not a unique index (even though that index has the same effect as a unique constraint).
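For illustration, here are the two forms PostgreSQL accepts, on a made-up toy table (the names here are hypothetical, not from the model above):

-- the conflict target can be inferred from a column list...
CREATE TABLE t (a int, b int, CONSTRAINT uq_t_a UNIQUE (a));

INSERT INTO t (a, b) VALUES (1, 1)
ON CONFLICT (a) DO UPDATE SET b = EXCLUDED.b;

-- ...or a constraint can be named explicitly (a unique index's name won't work here)
INSERT INTO t (a, b) VALUES (1, 2)
ON CONFLICT ON CONSTRAINT uq_t_a DO UPDATE SET b = EXCLUDED.b;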
So I rewrote the model as follows:
# Feel free to use the following code under the MIT license
from sqlalchemy import Column, DateTime, Enum, Integer, Unicode, UniqueConstraint
from sqlalchemy.types import TypeDecorator

class NullableBoolean(TypeDecorator):
    """A three-state boolean, which allows working with UNIQUE constraints

    In PostgreSQL, when making a composite UNIQUE constraint where one of the
    columns is a nullable boolean, null values for that column are counted
    as always different.

    So if you have:

        class MyModel(Base):
            __tablename__ = 'mymodel'
            id = Column(Integer, primary_key=True)
            col1 = Column(Unicode, nullable=False)
            col2 = Column(Unicode, nullable=False)
            col3 = Column(Boolean)
            __table_args__ = (
                UniqueConstraint(col1, col2, col3,
                                 name='uq_mymodel_col1_col2_col3'),
            )

    then you could INSERT multiple records which have the same (col1, col2)
    when col3 is None.

    If you want None to be considered a "proper" value that triggers the
    uniqueness constraint, then use this type instead of a nullable Boolean.
    """
    impl = Enum

    def __init__(self, **kwargs):
        kwargs['name'] = 'nullable_boolean_enum'
        super().__init__('true', 'false', 'unknown', **kwargs)

    def process_bind_param(self, value, dialect):
        """Convert the Python values into the SQL ones"""
        return {
            True: 'true',
            False: 'false',
            None: 'unknown',
        }[value]

    def process_result_value(self, value, dialect):
        """Convert the SQL values into the Python ones"""
        return {
            'true': True,
            'false': False,
            'unknown': None,
        }[value]

class MyModel(Base):
    __tablename__ = 'mymodel'
    id = Column(Integer, primary_key=True)
    col1 = Column(Unicode, nullable=False)
    col2 = Column(Unicode, nullable=False)
    col3 = Column(Unicode, nullable=False)
    col4 = Column(NullableBoolean, nullable=False)  # None is stored as 'unknown'
    created_at = Column(DateTime(timezone=True), nullable=False)
    updated_at = Column(DateTime(timezone=True), nullable=False)
    __table_args__ = (
        UniqueConstraint(col1, col2, col3, col4,
                         name='uq_mymodel_col1_col2_col3_col4'),
    )
And now it seems to be working as expected.
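For completeness: passing constraint='uq_mymodel_col1_col2_col3_col4' to on_conflict_do_update() should now render, give or take parameter names, the statement I was after:

INSERT INTO mymodel (col1, col2, col3, col4, created_at, updated_at)
VALUES (%(col1)s, %(col2)s, %(col3)s, %(col4)s, %(created_at)s, %(updated_at)s)
ON CONFLICT ON CONSTRAINT uq_mymodel_col1_col2_col3_col4
DO UPDATE SET updated_at = %(param_1)s
RETURNING mymodel.id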
Hope that helps someone in the future. If anybody has a better idea though, I'm interested. :)
I have a table for lawyers:
CREATE TABLE lawyers (
    id SERIAL PRIMARY KEY,
    name VARCHAR,
    name_url VARCHAR,
    pic_url VARCHAR(200)
);
And a table for firms:
CREATE TABLE firms (
    id SERIAL PRIMARY KEY,
    name VARCHAR,
    address JSONb
);
Then, to map the many-to-many relationship, I'm using a mapping table lawyers_firms:
CREATE TABLE lawyers_firms (
    lawyer_id INTEGER,
    firm_id INTEGER
);
I'm not sure how to retrieve values from lawyers and from firms given a lawyers_firms.firm_id.
For example:
1. SELECT name, name_url and pic_url FROM lawyers.
2. also SELECT name and address FROM firms.
3. WHERE lawyers_firms.firm_id = 1.
Try this:
SELECT l.name, l.name_url, l.pic_url, f.name, f.address
FROM lawyers l
INNER JOIN lawyers_firms lf ON lf.lawyer_id = l.id
INNER JOIN firms f ON f.id = lf.firm_id
WHERE lf.firm_id = 1;
I have the following model:
class GeoLocation(Base):
    __tablename__ = "geolocations"
    id = Column(SmallInteger, primary_key=True)
    name = Column(String(8), nullable=False)
    coordinates = Column(String(80), nullable=False)
Using SQLAlchemy 0.9.8 and PostgreSQL 9.3.6, it produces the right CREATE statement:
CREATE TABLE geolocations (
    id SMALLSERIAL NOT NULL,
    name VARCHAR(8) NOT NULL,
    coordinates VARCHAR(80) NOT NULL,
    PRIMARY KEY (id)
)
On another machine with the same SQLAlchemy version but PostgreSQL 9.1.13, it does not, producing:
CREATE TABLE geolocations (
    id SMALLINT NOT NULL,
    name VARCHAR(8) NOT NULL,
    coordinates VARCHAR(80) NOT NULL,
    PRIMARY KEY (id)
)
Any pointers?
It seems that the difference is in support for the SMALLSERIAL datatype, which is not available in PostgreSQL 9.1:
http://www.postgresql.org/docs/9.1/static/datatype-numeric.html
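If you are stuck on 9.1 and want SERIAL-like behaviour for a SMALLINT key, here is a sketch of the manual equivalent (roughly what SERIAL expands to under the hood):

-- sequence backing the id column
CREATE SEQUENCE geolocations_id_seq;

CREATE TABLE geolocations (
    id SMALLINT NOT NULL DEFAULT nextval('geolocations_id_seq'),
    name VARCHAR(8) NOT NULL,
    coordinates VARCHAR(80) NOT NULL,
    PRIMARY KEY (id)
);

-- tie the sequence's lifetime to the column, as SERIAL would
ALTER SEQUENCE geolocations_id_seq OWNED BY geolocations.id;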
This is the SQL I want to generate:
CREATE UNIQUE INDEX users_lower_email_key ON users (LOWER(email));
From the SQLAlchemy Index documentation I would expect this to work:
Index('users_lower_email_key', func.lower(users.c.email), unique=True)
But after I call metadata.create_all(engine), the table is created but this index is not. I've tried:
from conf import dsn, DEBUG
engine = create_engine(dsn.engine_info())
metadata = MetaData()
metadata.bind = engine
users = Table('users', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('email', String),
    Column('first_name', String, nullable=False),
    Column('last_name', String, nullable=False),
)
Index('users_lower_email_key', func.lower(users.c.email), unique=True)
metadata.create_all(engine)
Viewing the table definition in PostgreSQL I see that this index was not created.
\d users
Table "public.users"
Column | Type | Modifiers
------------+-------------------+---------------------------------------------------------
user_id | integer | not null default nextval('users_user_id_seq'::regclass)
email | character varying |
first_name | character varying | not null
last_name | character varying | not null
Indexes:
"users_pkey" PRIMARY KEY, btree (user_id)
How can I create my lower, unique index?
I have no idea why you want to index an integer column in lower case; the problem is that the generated SQL does not typecheck:
LINE 1: CREATE UNIQUE INDEX banana123 ON mytable (lower(col5))
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
'CREATE UNIQUE INDEX banana123 ON mytable (lower(col5))' {}
On the other hand, if you use an actual string type:
Column('col5string', String),
...
Index('banana123', func.lower(mytable.c.col5string), unique=True)
The index is created as expected. If, for some very strange reason, you insist on this absurd index, you just need to fix the types:
Index('lowercasedigits', func.lower(cast(mytable.c.col5, String)), unique=True)
Which produces the perfectly nice:
CREATE UNIQUE INDEX lowercasedigits ON mytable (lower(CAST(col5 AS VARCHAR)))