How to create a unique lowercase functional index using SQLAlchemy on PostgreSQL?

This is the SQL I want to generate:
CREATE UNIQUE INDEX users_lower_email_key ON users (LOWER(email));
From the SQLAlchemy Index documentation I would expect this to work:
Index('users_lower_email_key', func.lower(users.c.email), unique=True)
But after I call metadata.create_all(engine) the table is created but this index is not. I've tried:
from conf import dsn, DEBUG
engine = create_engine(dsn.engine_info())
metadata = MetaData()
metadata.bind = engine
users = Table('users', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('email', String),
    Column('first_name', String, nullable=False),
    Column('last_name', String, nullable=False),
)
Index('users_lower_email_key', func.lower(users.c.email), unique=True)
metadata.create_all(engine)
Viewing the table definition in PostgreSQL I see that this index was not created.
\d users
Table "public.users"
Column | Type | Modifiers
------------+-------------------+---------------------------------------------------------
user_id | integer | not null default nextval('users_user_id_seq'::regclass)
email | character varying |
first_name | character varying | not null
last_name | character varying | not null
Indexes:
"users_pkey" PRIMARY KEY, btree (user_id)
How can I create my lower, unique index?

I have no idea why you would want to index an integer column in lower case; the problem is that the generated SQL does not typecheck:
LINE 1: CREATE UNIQUE INDEX banana123 ON mytable (lower(col5))
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
'CREATE UNIQUE INDEX banana123 ON mytable (lower(col5))' {}
On the other hand, if you use an actual string type:
Column('col5string', String),
...
Index('banana123', func.lower(mytable.c.col5string), unique=True)
The index is created as expected. If, for some very strange reason, you insist on this absurd index, you just need to fix the types:
Index('lowercasedigits', func.lower(cast(mytable.c.col5, String)), unique=True)
Which produces perfectly nice:
CREATE UNIQUE INDEX lowercasedigits ON mytable (lower(CAST(col5 AS VARCHAR)))
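As a sanity check, you can ask SQLAlchemy to render the index DDL without touching a database. This is a minimal sketch (the table and column names mirror the question; the exact rendering may vary slightly between SQLAlchemy versions) that compiles the Index against the PostgreSQL dialect:

```python
from sqlalchemy import Table, Column, Integer, String, MetaData, Index, func
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateIndex

metadata = MetaData()
users = Table(
    'users', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('email', String),
)
idx = Index('users_lower_email_key', func.lower(users.c.email), unique=True)

# Render the DDL the dialect would emit, without connecting to a database.
ddl = str(CreateIndex(idx).compile(dialect=postgresql.dialect()))
print(ddl)
```

If the printed statement contains lower(email), the index definition is wired up correctly and should be emitted by metadata.create_all(engine).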


PostgreSQL primary key id datatype from serial to bigserial?

I did some research but can't find the exact answer I'm looking for. Currently I have a primary key column 'id' which is set to serial, but I want to change it to bigserial to map to Long in the Java layer. What is the best way to achieve this, considering this is an existing table? I think my Postgres version is 10.5. I am also aware that serial and bigserial are not actual data types.
In Postgres 9.6 or earlier the sequence created by a serial column already returns bigint. You can check this using psql:
drop table if exists my_table;
create table my_table(id serial primary key, str text);
\d my_table
Table "public.my_table"
Column | Type | Collation | Nullable | Default
--------+---------+-----------+----------+--------------------------------------
id | integer | | not null | nextval('my_table_id_seq'::regclass)
str | text | | |
Indexes:
"my_table_pkey" PRIMARY KEY, btree (id)
\d my_table_id_seq
Sequence "public.my_table_id_seq"
Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
--------+-------+---------+---------------------+-----------+---------+-------
bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
Owned by: public.my_table.id
So you should only alter the type of the serial column:
alter table my_table alter id type bigint;
The behavior has changed in Postgres 10:
Also, sequences created for SERIAL columns now generate positive 32-bit wide values, whereas previous versions generated 64-bit wide values. This has no visible effect if the values are only stored in a column.
Hence in Postgres 10+:
alter sequence my_table_id_seq as bigint;
alter table my_table alter id type bigint;
An alternative is to rebuild the id column:
-- backup table first
CREATE TABLE tablenamebackup as select * from tablename;
--add new column idx
alter table tablename add column idx bigserial not null;
-- copy id to idx
update tablename set idx = id ;
-- drop id column
alter table tablename drop column id ;
-- rename idx to id
alter table tablename rename column idx to id ;
-- Reset Sequence to max + 1
SELECT setval(pg_get_serial_sequence('tablename', 'id'), coalesce(max(id)+1, 1), false) FROM tablename ;

How to use nextval in a sequence using node-postgres

My table in a Postgres database has an auto-increment key.
In conventional Postgres SQL I can do:
INSERT INTO CITY(ID,NAME) VALUES(nextval('city_sequence_name'),'my first city');
How can I do this using node-postgres? So far, I have tried:
myConnection.query('INSERT INTO city(id,name) VALUES ($1,$2)', ['nextval("city_sequence_name")','my first city'],callback);
But I get this error:
error { error: invalid input syntax for integer: "nextval("city_sequence_name")"
So, I was able to identify the solution to this case:
connection.query("INSERT INTO city(id,name) VALUES(nextval('sequence_name'),$1)", ['first city'],callback);
This is a bad practice. Instead, just set the column to DEFAULT nextval() or use the serial type.
# CREATE TABLE city ( id serial PRIMARY KEY, name text );
CREATE TABLE
# \d city
Table "public.city"
Column | Type | Modifiers
--------+---------+---------------------------------------------------
id | integer | not null default nextval('city_id_seq'::regclass)
name | text |
Indexes:
"city_pkey" PRIMARY KEY, btree (id)
# INSERT INTO city (name) VALUES ('Houston'), ('Austin');
INSERT 0 2
test=# TABLE city;
id | name
----+---------
1 | Houston
2 | Austin
(2 rows)

Postgres: create foreign key relationship, getting 'Key is not present in table'?

I am working in Postgres 9.1 and I want to create a foreign key relationship for two tables that don't currently have one.
These are my tables:
# \d frontend_item;
Table "public.frontend_item"
Column | Type | Modifiers
-------------------+-------------------------+--------------------------------------------------------------------
id | integer | not null default nextval('frontend_prescription_id_seq'::regclass)
presentation_code | character varying(15) | not null
pct_code | character varying(3) | not null
Indexes:
"frontend_item_pkey" PRIMARY KEY, btree (id)
# \d frontend_pct;
Column | Type | Modifiers
------------+--------------------------+-----------
code | character varying(3) | not null
Indexes:
"frontend_pct_pkey" PRIMARY KEY, btree (code)
"frontend_pct_code_1df55e2c36c298b2_like" btree (code varchar_pattern_ops)
This is what I'm trying:
# ALTER TABLE frontend_item ADD CONSTRAINT pct_fk
FOREIGN KEY (pct_code) REFERENCES frontend_pct(code) ON DELETE CASCADE;
But I get this error:
ERROR: insert or update on table "frontend_item" violates
foreign key constraint "pct_fk"
DETAIL: Key (pct_code)=(5HQ) is not present in table "frontend_pct"
I guess this makes sense, because currently the frontend_pct table is empty, while the frontend_item has values in it.
Firstly, is the syntax of my ALTER TABLE correct?
Secondly, is there an automatic way to create the required values in frontend_pct? It would be great if there was some way to say to Postgres "create the foreign key, and insert values into the foreign key table if they don't exist".
Your syntax seems correct.
No, there is not an automatic way to insert the required values.
You can only do it manually, before adding the constraint. In your case it would be something like
INSERT INTO frontend_pct (code)
SELECT code FROM
(
    SELECT DISTINCT pct_code AS code
    FROM frontend_item
    WHERE pct_code NOT IN (SELECT code FROM frontend_pct)
) AS a;
Note: the query can be heavy if you have a lot of data.

Altering a parent table in Postgresql 8.4 breaks child table defaults

The problem: In PostgreSQL, if table temp_person_two inherits from temp_person, default column values on the child table are ignored if the parent table is altered.
How to replicate:
First, create a table and a child table. The child table should have one column that has a default value.
CREATE TEMPORARY TABLE temp_person (
    person_id SERIAL,
    name VARCHAR
);
CREATE TEMPORARY TABLE temp_person_two (
    has_default character varying(4) DEFAULT 'en'::character varying NOT NULL
) INHERITS (temp_person);
Next, create a trigger on the parent table that copies its data to the child table (I know this appears like bad design, but this is a minimal test case to show the problem).
CREATE FUNCTION temp_person_insert() RETURNS trigger
LANGUAGE plpgsql
AS '
BEGIN
    INSERT INTO temp_person_two VALUES ( NEW.* );
    RETURN NULL;
END;
';
CREATE TRIGGER temp_person_insert_trigger
BEFORE INSERT ON temp_person
FOR EACH ROW
EXECUTE PROCEDURE temp_person_insert();
Then insert data into parent and select data from child. The data should be correct.
INSERT INTO temp_person (name) VALUES ('ovid');
SELECT * FROM temp_person_two;
person_id | name | has_default
-----------+------+-------------
1 | ovid | en
(1 row)
Finally, alter parent table by adding a new, unrelated column. Attempt to insert data and watch a "not-null constraint" violation occur:
ALTER TABLE temp_person ADD column foo text;
INSERT INTO temp_person(name) VALUES ('Corinna');
ERROR: null value in column "has_default" violates not-null constraint
CONTEXT: SQL statement "INSERT INTO temp_person_two VALUES ( $1 .* )"
PL/pgSQL function "temp_person_insert" line 2 at SQL statement
My version:
testing=# select version();
version
-------------------------------------------------------------------------------------------------------
PostgreSQL 8.4.17 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit
(1 row)
It's there all the way to 9.3, but it's going to be tricky to fix, and I'm not sure if it's just undesirable behaviour rather than a bug.
The constraint is still there, but look at the column-order.
Table "pg_temp_2.temp_person"
Column | Type | Modifiers
-----------+-------------------+-----------------------------------------------------------------
person_id | integer | not null default nextval('temp_person_person_id_seq'::regclass)
name | character varying |
Number of child tables: 1 (Use \d+ to list them.)
Table "pg_temp_2.temp_person_two"
Column | Type | Modifiers
-------------+----------------------+-----------------------------------------------------------------
person_id | integer | not null default nextval('temp_person_person_id_seq'::regclass)
name | character varying |
has_default | character varying(4) | not null default 'en'::character varying
Inherits: temp_person
ALTER TABLE
Table "pg_temp_2.temp_person_two"
Column | Type | Modifiers
-------------+----------------------+-----------------------------------------------------------------
person_id | integer | not null default nextval('temp_person_person_id_seq'::regclass)
name | character varying |
has_default | character varying(4) | not null default 'en'::character varying
foo | text |
Inherits: temp_person
It works in your first example because you are effectively doing:
INSERT INTO temp_person_two (person_id,name)
VALUES (person_id, name)
BUT look where your new column is added in the child table - at the end! So you end up with
INSERT INTO temp_person_two (person_id,name,has_default)
VALUES (person_id, name, foo)
rather than what you hoped for:
INSERT INTO temp_person_two (person_id,name,foo)...
So - what's the correct behaviour here? If PostgreSQL shuffled the columns in the child table, that could break code. If it doesn't, that can also break code. As it happens, I don't think the first option is doable without substantial PG code changes, so it's unlikely to happen in the medium term.
Moral of the story: explicitly list your INSERT column-names.
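Following that moral, a fixed version of the trigger function from the question might spell out the target columns, so a later ALTER TABLE on the parent cannot silently shift values into the wrong child columns (a sketch; has_default is simply left to its default):

```sql
CREATE OR REPLACE FUNCTION temp_person_insert() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    -- Name the target columns explicitly instead of relying on NEW.*
    -- lining up with the child table's column order.
    INSERT INTO temp_person_two (person_id, name)
        VALUES (NEW.person_id, NEW.name);
    RETURN NULL;
END;
$$;
```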
Could take a while by hand. You know any languages with regexes? ;-)
It's not a bug. NEW.* expands to the values of each column in the new row, so you're doing INSERT INTO temp_person_two VALUES ( NEW.person_id, NEW.name, NEW.foo ), the last of which is indeed NULL if you didn't specify it (and wrong if you did).
I'm surprised it even works before you added the new column, since the number of values doesn't match the number of fields in the child table. Presumably it assumes the default for missing trailing values.

Postgres remove constraint by column names

Is there a way I can remove a constraint based on column names?
I have Postgres 8.4, and when I upgrade my project the upgrade fails because a constraint was named differently in a different version.
Basically, I need to remove a constraint if it exists or I can just remove the constraint using the column names.
The name of the constraint is the only thing that has changed. Any idea if that's possible?
In this case, I need to remove "patron_username_key"
discovery=# \d patron
Table "public.patron"
Column | Type | Modifiers
--------------------------+-----------------------------+-----------
patron_id | integer | not null
create_date | timestamp without time zone | not null
row_version | integer | not null
display_name | character varying(255) | not null
username | character varying(255) | not null
authentication_server_id | integer |
Indexes:
"patron_pkey" PRIMARY KEY, btree (patron_id)
"patron_username_key" UNIQUE, btree (username, authentication_server_id)
You can use the system catalogs to find information about constraints. Note that some constraints, like keys, are recorded in the separate pg_constraint table, while others, like NOT NULL, are essentially columns in the pg_attribute table.
For the keys, you can use this query to get a list of constraint definitions:
SELECT pg_get_constraintdef(c.oid) AS def
FROM pg_class t
JOIN pg_constraint c ON c.conrelid=t.oid
WHERE t.relkind='r' AND t.relname = 'table';
You can then filter out the ones that references your column and dynamically construct ALTER TABLE ... DROP CONSTRAINT ... statements.
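As a sketch of that filtering step (table and column names copied from the question; quote_ident is used since format() is not available on 8.4):

```sql
-- Build an ALTER TABLE ... DROP CONSTRAINT statement for every
-- constraint on "patron" that involves the "username" column.
SELECT 'ALTER TABLE ' || quote_ident(t.relname)
    || ' DROP CONSTRAINT ' || quote_ident(c.conname) || ';'
FROM pg_constraint c
JOIN pg_class t ON c.conrelid = t.oid
WHERE t.relname = 'patron'
  AND EXISTS (
        SELECT 1
        FROM pg_attribute a
        WHERE a.attrelid = t.oid
          AND a.attnum = ANY (c.conkey)
          AND a.attname = 'username'
      );
```

Each returned row is a statement you can then EXECUTE from a DO block or run by hand.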
Assuming that unique index is the result of adding a unique constraint, you can use the following SQL statement to remove that constraint:
do $$
declare
    cons_name text;
begin
    select constraint_name
      into cons_name
      from information_schema.constraint_column_usage
     where constraint_schema = current_schema()
       and column_name in ('authentication_server_id', 'username')
       and table_name = 'patron'
     group by constraint_name
    having count(*) = 2;

    execute 'alter table patron drop constraint '||cons_name;
end;
$$
I'm not sure if this will work if you have "only" added a unique index (instead of a unique constraint).
If you need to do that for more than 2 columns you also need to adjust the having count(*) = 2 part to match the number of columns in the column_name in .. condition.
(As you did not specify your PostgreSQL version I'm assuming the current version)