I have a schema:
id | company_id | tag
I am trying to add a new, unique tag for every company_id.
What would be the best way to achieve this? Should I select by company_id first and refuse to add the tag if it exists? Or can Postgres do it in one shot?
So if I try to add tag tag1 again, nothing should happen:
id | company_id | tag
1 | 1 | tag1
2 | 1 | tag2
3 | 1 | tag3
Add a UNIQUE constraint to the column tag, and Postgres will refuse duplicates for you:
ALTER TABLE my_table ADD CONSTRAINT tag_un UNIQUE (tag);
This makes tag unique across the whole table. If you need tag to be unique per company_id, then:
ALTER TABLE my_table ADD CONSTRAINT tag_company_un UNIQUE (tag, company_id);
demo:db<>fiddle
You can create a UNIQUE constraint over both columns, so that the combination of the two is unique. Using the same tag with another company_id would be OK:
ALTER TABLE my_table ADD CONSTRAINT tag_unique UNIQUE (company_id, tag);
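A quick sanity check (the values are illustrative; the table is the one from the question):
INSERT INTO my_table (company_id, tag) VALUES (1, 'tag1');  -- OK
INSERT INTO my_table (company_id, tag) VALUES (2, 'tag1');  -- OK: same tag, different company
INSERT INTO my_table (company_id, tag) VALUES (1, 'tag1');  -- ERROR: duplicate key value violates unique constraint "tag_unique"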
Hi, I want to add a unique, non-nullable column to a table. It already has data. I would therefore like to instantly populate the new column with unique values, e.g. 'ABC123', 'ABC124', 'ABC125', etc.
The data will eventually be wiped and replaced with proper data, so I don't want to introduce a sequence just to populate the default value.
Is it possible to generate a default value for the existing rows, based on something like row_number()? I realise the use case is ridiculous, but is it possible to achieve... if so, how?
...
foo text not null unique default 'ABC'||row_number() -- or something similar?
...
generate_series can be applied:
select 'ABC' || generate_series(123,130)::text;
ABC123
ABC124
ABC125
ABC126
ABC127
ABC128
ABC129
ABC130
Variant 2: add the column as UNIQUE and NOT NULL:
begin;
alter table test_table add column foo text not null default 'ABC';
with s as (
    select id, (row_number() over (order by id))::text as t
    from test_table
)
update test_table
set foo = foo || s.t
from s
where test_table.id = s.id;
alter table test_table add CONSTRAINT unique_foo1 UNIQUE(foo);
commit;
Results:
select * from test_table;
id | foo
----+------
1 | ABC1
2 | ABC2
3 | ABC3
4 | ABC4
5 | ABC5
6 | ABC6
I am trying to create a database that stores videos for products, and I intend to add a few million of them, so obviously I want performance to be as good as possible.
I wanted to achieve the following:
BIGINT     | SMALLSERIAL | VARCHAR(30)
product_id | video_id    | video_hash
1          | 1           | Dkfjoie124
1          | 2           | POoieqlgkQ
1          | 3           | Xd2t9dakcx
2          | 1           | Df2459Afdw
However, when I insert a new video for a product:
INSERT INTO my_table (product_id, video_hash) VALUES (2, 'DSpewirncS');
I want the following to happen:
BIGINT     | SMALLSERIAL | VARCHAR(30)
product_id | video_id    | video_hash
1          | 1           | Dkfjoie124
1          | 2           | POoieqlgkQ
1          | 3           | Xd2t9dakcx
2          | 1           | Df2459Afdw
2          | 2           | DSpewirncS
Will this happen when I set the column type of video_id to SMALLSERIAL? I am afraid that it will instead insert a different value (the highest in the entire column), which I do not want.
Thanks.
No. A serial column is bound to a sequence, and the sequence doesn't reset unless you tell it to.
But if you want a per-product ordinal for the videos, you can produce it at query time with the row_number() window function:
SELECT product_id,
       row_number() OVER (PARTITION BY product_id
                          ORDER BY video_id) AS video_ordinal,
       video_hash
FROM my_table;  -- substitute your actual table name
You could also create a view for this query for convenience, so that you can query the view instead of the table and the view would look like you want it.
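For example (vw_product_videos is just a placeholder name):
CREATE VIEW vw_product_videos AS
SELECT product_id,
       row_number() OVER (PARTITION BY product_id
                          ORDER BY video_id) AS video_ordinal,
       video_hash
FROM my_table;
Then SELECT * FROM vw_product_videos WHERE product_id = 2; returns the videos of that product numbered 1, 2, ...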
I have run into a unique index violation in a bigger database. The original problem occurs in a stored PL/pgSQL function.
I have simplified everything to show my problem. I can reproduce it in a rather simple table:
CREATE TABLE public.test
(
id integer NOT NULL DEFAULT nextval('test_id_seq'::regclass),
pos integer,
text text,
CONSTRAINT text_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE public.test
OWNER TO root;
GRANT ALL ON TABLE public.test TO root;
I define a unique index on 'pos':
CREATE UNIQUE INDEX test_idx_pos
ON public.test
USING btree
(pos);
Before the UPDATE the data in the table looks like this:
testdb=# SELECT * FROM test;
id | pos | text
----+-----+----------
2 | 1 | testpos1
3 | 2 | testpos2
1 | 5 | testpos4
4 | 4 | testpos3
(4 rows)
Now I want to decrement all pos values that are bigger than 2 by 1, and I get an error (the German psql output is translated to English here):
testdb=# UPDATE test SET pos = pos - 1 WHERE pos > 2;
ERROR: duplicate key value violates unique constraint "test_idx_pos"
DETAIL: Key (pos)=(4) already exists.
If the UPDATE had run to completion, the table would be unique again and look like this:
testdb=# SELECT * FROM test;
id | pos | text
----+-----+----------
2 | 1 | testpos1
3 | 2 | testpos2
1 | 4 | testpos4
4 | 3 | testpos3
(4 rows)
How can I avoid this situation? I learned that stored PL/pgSQL functions run inside transactions, so I thought this problem couldn't appear?
Unique indexes are evaluated per row, not per statement (which differs from e.g. Oracle's implementation).
The solution to this problem is a unique constraint, which can be deferred and is then evaluated at the end of the transaction.
So instead of the unique index, define a constraint:
alter table test add constraint test_idx_pos unique (pos)
deferrable initially deferred;
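A quick check using the table from the question (note the old index has to be dropped first, since its name collides with the constraint's):
DROP INDEX test_idx_pos;  -- run this before the ALTER TABLE above
BEGIN;
UPDATE test SET pos = pos - 1 WHERE pos > 2;  -- no per-row check any more
COMMIT;  -- the deferred constraint is verified here, after all rows are updated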
In PostgreSQL, when a serial column is inherited from a parent table, the sequence is shared by the parent and the child table.
Is it possible to inherit the serial column while letting the two tables have separate sequence values, e.g. so that the column could have the value 1 in both tables?
Is this possible and reasonable, and if yes, how can it be done?
Update:
The reasons that I want to avoid sequence sharing are:
Sharing a single int range among multiple tables might use up MAX_INT; using bigint would improve this, but it takes more space too.
There is a kind of resource locking when multiple tables insert concurrently, so I guess it's a performance issue.
Ids that jump from 1 to 5 and then maybe to 1000 don't look as clean as they could.
Summary
Solutions:
If you want the child table to have its own sequence while still keeping the global sequence shared between parent and child table (as described in @wildplasser's answer):
add a sub_id serial column to each child table.
If you want the child table to have its own sequence and don't need a global sequence shared between parent and child table,
there are two ways:
Use int instead of serial (as described in @lsilva's answer). Steps:
define the type as int or bigint in the parent table,
create an individual sequence for each parent and child table,
set the default value of the int column in each table using nextval of its own sequence,
don't forget to maintain/reset the sequence when re-creating the table.
Define id serial directly in the child table, and not in the parent table.
DROP schema tmp CASCADE;
CREATE schema tmp;
set search_path = tmp, pg_catalog;
CREATE TABLE common
( seq SERIAL NOT NULL PRIMARY KEY
);
CREATE TABLE one
( subseq SERIAL NOT NULL
, payload integer NOT NULL
)
INHERITS (tmp.common)
;
CREATE TABLE two
( subseq SERIAL NOT NULL
, payload integer NOT NULL
)
INHERITS (tmp.common)
;
/**
\d common
\d one
\d two
\q
***/
INSERT INTO one(payload)
SELECT gs FROM generate_series(1,5) gs
;
INSERT INTO two(payload)
SELECT gs FROM generate_series(101,105) gs
;
SELECT * FROM common;
SELECT * FROM one;
SELECT * FROM two;
Results:
NOTICE: drop cascades to table tmp.common
DROP SCHEMA
CREATE SCHEMA
SET
CREATE TABLE
CREATE TABLE
CREATE TABLE
INSERT 0 5
INSERT 0 5
seq
-----
1
2
3
4
5
6
7
8
9
10
(10 rows)
seq | subseq | payload
-----+--------+---------
1 | 1 | 1
2 | 2 | 2
3 | 3 | 3
4 | 4 | 4
5 | 5 | 5
(5 rows)
seq | subseq | payload
-----+--------+---------
6 | 1 | 101
7 | 2 | 102
8 | 3 | 103
9 | 4 | 104
10 | 5 | 105
(5 rows)
But: in fact you don't need the subseq columns, since you can always enumerate them by means of row_number():
CREATE VIEW vw_one AS
SELECT seq
, row_number() OVER (ORDER BY seq) as subseq
, payload
FROM one;
CREATE VIEW vw_two AS
SELECT seq
, row_number() OVER (ORDER BY seq) as subseq
, payload
FROM two;
[results are identical]
And you could add UNIQUE and PRIMARY KEY constraints to the child tables, like:
CREATE TABLE one
( subseq SERIAL NOT NULL UNIQUE
, payload integer NOT NULL
)
INHERITS (tmp.common)
;
ALTER TABLE one ADD PRIMARY KEY (seq);
[similar for table two]
I use this:
Parent table definition:
CREATE TABLE parent_table (
    id bigint NOT NULL,
    ...
);
Child table definition:
CREATE TABLE child_schema.child_table
(
    id bigint NOT NULL DEFAULT nextval('child_schema.child_table_id_seq'::regclass),
    ...
);
I am emulating serial by using the sequence's next value as the column default.
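Spelled out end to end, the emulation could look like this minimal sketch (the sequence name parent_table_id_seq and the single-column tables are assumptions; the snippets above elide the full column lists):
CREATE SCHEMA child_schema;
CREATE SEQUENCE parent_table_id_seq;
CREATE SEQUENCE child_schema.child_table_id_seq;  -- one sequence per table
CREATE TABLE parent_table (
    id bigint NOT NULL DEFAULT nextval('parent_table_id_seq'::regclass)
);
CREATE TABLE child_schema.child_table (
    id bigint NOT NULL DEFAULT nextval('child_schema.child_table_id_seq'::regclass)
) INHERITS (parent_table);  -- the child's own default overrides the inherited one
-- both tables now count independently:
INSERT INTO parent_table DEFAULT VALUES;              -- id = 1
INSERT INTO child_schema.child_table DEFAULT VALUES;  -- id = 1 as well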
The following is a simplified illustration.
TABLE: EMPLOYEE (TENANT_ID is a FK)
ID | NAME | TENANT_ID
1 | John | 1
TABLE: DEPARTMENT
ID | NAME | TENANT_ID
1 | Physics | 1
2 | Math | 2
TABLE: EMPLOYEE_DEPARTMENTS (join between EMPLOYEE and DEPARTMENT)
ID | EMPLOYEE_ID | DEPARTMENT_ID
1 | 1 | 1
Is there a way to make an insert into EMPLOYEE_DEPARTMENTS fail if the EMPLOYEE row belongs to tenant 1 and the DEPARTMENT row belongs to tenant 2? E.g. where employee_id = 1 belongs to tenant 1 and department_id = 2 belongs to tenant 2:
ID | EMPLOYEE_ID | DEPARTMENT_ID
2 | 1 | 2
Is there a way to prevent such an insert, either at the app or the db level? PS: there is no room for triggers, and I don't want to use them.
Without triggers, the only way to do this is to copy the tenant id into every table and use a composite primary key or unique constraint together with a composite foreign key.
E.g. if you had a UNIQUE constraint on EMPLOYEE(TENANT_ID, ID) and on DEPARTMENT(TENANT_ID, ID), you could add FOREIGN KEY (TENANT_ID, EMPLOYEE_ID) REFERENCES EMPLOYEE (TENANT_ID, ID) and FOREIGN KEY (TENANT_ID, DEPARTMENT_ID) REFERENCES DEPARTMENT (TENANT_ID, ID) to EMPLOYEE_DEPARTMENTS.
This requires that the join table incorporate the TENANT_ID.
I suggest defining the PRIMARY KEY of EMPLOYEE_DEPARTMENTS as (TENANT_ID, DEPARTMENT_ID, EMPLOYEE_ID) and getting rid of the useless surrogate key ID on the EMPLOYEE_DEPARTMENTS table, unless your toolkit/framework/ORM can't cope without it. A sketch of the resulting DDL follows.
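A minimal sketch of that DDL (the column types are assumptions; the question only shows sample data):
CREATE TABLE employee (
    id        bigint NOT NULL,
    name      text,
    tenant_id bigint NOT NULL,
    PRIMARY KEY (id),
    UNIQUE (tenant_id, id)          -- target for the composite FK below
);
CREATE TABLE department (
    id        bigint NOT NULL,
    name      text,
    tenant_id bigint NOT NULL,
    PRIMARY KEY (id),
    UNIQUE (tenant_id, id)
);
CREATE TABLE employee_departments (
    tenant_id     bigint NOT NULL,
    employee_id   bigint NOT NULL,
    department_id bigint NOT NULL,
    PRIMARY KEY (tenant_id, department_id, employee_id),
    FOREIGN KEY (tenant_id, employee_id)   REFERENCES employee (tenant_id, id),
    FOREIGN KEY (tenant_id, department_id) REFERENCES department (tenant_id, id)
);
With this in place, the insert from the question (employee 1 of tenant 1 paired with department 2 of tenant 2) fails the foreign key check, because no single tenant_id value can satisfy both references at once.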