PostgreSQL - `serial` column & inheritance (sequence sharing policy)

In PostgreSQL, when a child table inherits a serial column from its parent table, the sequence is shared by the parent and child tables.
Is it possible to inherit the serial column while letting the two tables have separate sequence values, e.g. so that both tables' columns could have the value 1?
Is this possible and reasonable, and if yes, how can it be done?
#Update
The reasons I want to avoid sequence sharing are:
Sharing a single int range among multiple tables might use up MAX_INT; using bigint would improve this, but it takes more space too.
There is a kind of resource locking when multiple tables insert concurrently, so I guess it's also a performance issue.
Ids that jump from 1 to 5 and then maybe to 1000 don't look as clean as they could.
#Summary
Solutions:
If you want each child table to have its own sequence while still keeping the global sequence shared among the parent & child tables (as described in @wildplasser's answer):
add a sub_id serial column to each child table.
If you want each child table to have its own sequence and don't need a global sequence shared among the parent & child tables,
there are 2 ways:
Use int instead of serial (as described in @lsilva's answer). Steps:
define the type as int or bigint in the parent table,
create an individual sequence for each parent & child table,
set the default value of the int column in each table using nextval of its own sequence,
don't forget to maintain/reset the sequence when re-creating a table (see the sketch below).
Define id serial directly in the child table, and not in the parent table.
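A minimal sketch of the first way (all table and sequence names here are made up for illustration):
-- Way 1: plain bigint columns plus one sequence per table.
CREATE SEQUENCE parent_id_seq;
CREATE TABLE parent
( id bigint NOT NULL DEFAULT nextval('parent_id_seq')
);
CREATE SEQUENCE child_id_seq;
CREATE TABLE child () INHERITS (parent);
-- Override the inherited default with the child's own sequence,
-- so parent.id and child.id can both start at 1:
ALTER TABLE child ALTER COLUMN id SET DEFAULT nextval('child_id_seq');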

DROP schema tmp CASCADE;
CREATE schema tmp;
set search_path = tmp, pg_catalog;
CREATE TABLE common
( seq SERIAL NOT NULL PRIMARY KEY   -- one sequence (common_seq_seq), shared via inheritance
);
CREATE TABLE one
( subseq SERIAL NOT NULL            -- this child's own sequence (one_subseq_seq)
, payload integer NOT NULL
)
INHERITS (tmp.common)
;
CREATE TABLE two
( subseq SERIAL NOT NULL
, payload integer NOT NULL
)
INHERITS (tmp.common)
;
/**
\d common
\d one
\d two
\q
***/
INSERT INTO one(payload)
SELECT gs FROM generate_series(1,5) gs
;
INSERT INTO two(payload)
SELECT gs FROM generate_series(101,105) gs
;
SELECT * FROM common;
SELECT * FROM one;
SELECT * FROM two;
Results:
NOTICE: drop cascades to table tmp.common
DROP SCHEMA
CREATE SCHEMA
SET
CREATE TABLE
CREATE TABLE
CREATE TABLE
INSERT 0 5
INSERT 0 5
seq
-----
1
2
3
4
5
6
7
8
9
10
(10 rows)
seq | subseq | payload
-----+--------+---------
1 | 1 | 1
2 | 2 | 2
3 | 3 | 3
4 | 4 | 4
5 | 5 | 5
(5 rows)
seq | subseq | payload
-----+--------+---------
6 | 1 | 101
7 | 2 | 102
8 | 3 | 103
9 | 4 | 104
10 | 5 | 105
(5 rows)
But: in fact you don't need the subseq columns, since you can always enumerate them by means of row_number():
CREATE VIEW vw_one AS
SELECT seq
, row_number() OVER (ORDER BY seq) as subseq
, payload
FROM one;
CREATE VIEW vw_two AS
SELECT seq
, row_number() OVER (ORDER BY seq) as subseq
, payload
FROM two;
[results are identical]
And you could add UNIQUE and PRIMARY KEY constraints to the child tables, like:
CREATE TABLE one
( subseq SERIAL NOT NULL UNIQUE
, payload integer NOT NULL
)
INHERITS (tmp.common)
;
ALTER TABLE one ADD PRIMARY KEY (seq);
[similar for table two]

I use this:
Parent table definition:
CREATE TABLE parent_table (
id bigint NOT NULL,
Child table definition:
CREATE SEQUENCE child_schema.child_table_id_seq;  -- the sequence must exist before being used as a default
CREATE TABLE child_schema.child_table
(
id bigint NOT NULL DEFAULT nextval('child_schema.child_table_id_seq'::regclass),
I am emulating serial by using a sequence as the default.
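One detail worth noting (my addition, not part of the original answer): unlike a real serial, a hand-created sequence is not tied to the column, so drop/reload maintenance is manual:
-- Optionally tie the sequence's lifetime to the column, as serial would do implicitly:
ALTER SEQUENCE child_schema.child_table_id_seq OWNED BY child_schema.child_table.id;
-- After re-creating or bulk-loading the table, reset the sequence to match the data:
SELECT setval('child_schema.child_table_id_seq', coalesce(max(id), 0) + 1, false)
FROM child_schema.child_table;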

Related

postgres: temporary column default that is unique and not nullable, without relying on a sequence?

Hi, I want to add a unique, non-nullable column to a table. It already has data, so I would like to instantly populate the new column with unique values, e.g. 'ABC123', 'ABC124', 'ABC125', etc.
The data will eventually be wiped and replaced with proper data, so I don't want to introduce a sequence just to populate the default value.
Is it possible to generate a default value for the existing rows, based on something like row_number()? I realise the use case is ridiculous, but is it possible to achieve... and if so, how?
...
foo text not null unique default 'ABC' || rownumber() -- or something similar?
...
Can generate_series be applied?
select 'ABC' || generate_series(123,130)::text;
ABC123
ABC124
ABC125
ABC126
ABC127
ABC128
ABC129
ABC130
Variant 2: add the column UNIQUE and NOT NULL
begin;
alter table test_table add column foo text not null default 'ABC';
with s as (
    select id, (row_number() over (order by id))::text t
    from test_table
)
update test_table
set foo = foo || s.t
from s
where test_table.id = s.id;
alter table test_table add CONSTRAINT unique_foo1 UNIQUE(foo);
commit;
Results:
select * from test_table;
id | foo
----+------
1 | ABC1
2 | ABC2
3 | ABC3
4 | ABC4
5 | ABC5
6 | ABC6

How to push down filters to a view's GROUP BY clause?

We have a third-party BI tool on the project that can only add a WHERE clause with specified filters to a select on a table/view. We use a set of 4 source tables; they have indexes on the columns that can be filtered through the BI's UI. For each table we have a view that groups by the indexed columns and adds 1 additional column. Then we have another view that joins all the data from those 4 views on the index columns; that is the view queried from our BI's UI, and the BI adds the WHERE clause to its queries.
The problem is that the indexes on the source tables are not utilized: filters are not pushed down to the level of the tables, but instead applied at the very end. We can't use a set-returning function; all our BI tool can do is select from a table/view and add a WHERE clause.
We thought about intercepting a select's WHERE condition in Pg, but I'm not sure whether that is possible. Or maybe it's possible to hint the optimizer that filters need to be pushed down. We could query the source tables directly without using views, but that would multiply the number of data sources/elements in the UI, which is not desirable. Are there any other ways we can solve this in PostgreSQL?
Update 1
Below are examples of the schemas/queries we use for our tables and views:
CREATE TABLE source_table_1
(
dim1 VARCHAR(255) NOT NULL,
dim2 VARCHAR(255) NOT NULL,
dim3 VARCHAR(255) NOT NULL,
meausre1 Bigint NOT NULL,
meausre2 Bigint NOT NULL,
meausre3 Bigint NOT NULL
);
CREATE INDEX ON source_table_1 (dim1, dim2, dim3);
... another 3 tables
CREATE OR REPLACE VIEW view1 AS
SELECT
"type1" as type,
dim1,
dim2,
dim3,
sum(meausre1) AS meausre1,
sum(meausre2) AS meausre2,
sum(meausre3) AS meausre3
FROM source_table_1
GROUP BY 1, 2, 3, 4;
... another 3 views
CREATE OR REPLACE VIEW view_union AS
SELECT
coalesce(view1.dim1, view2.dim1, view3.dim1, view4.dim1) AS dim1,
... two other dims
view1.meausre1 AS meausre1_1,
view2.meausre1 AS meausre2_1,
view3.meausre1 AS meausre3_1,
view4.meausre1 AS meausre4_1,
... two meausres
FROM view1
FULL JOIN view2 ON
view1.dim1 = view2.dim1 AND
view1.dim2 = view2.dim2 AND
view1.dim3 = view2.dim3
FULL JOIN view3 ON ...
FULL JOIN view4 ON ...
WHERE -- this is where filters on dims are inserted
;
You cannot push a WHERE condition into a full outer join.
See this example:
CREATE TABLE a(id integer NOT NULL, a1 integer NOT NULL);
INSERT INTO a VALUES (1, 20), (2, 20);
CREATE TABLE b(id integer NOT NULL, b1 integer NOT NULL);
INSERT INTO b VALUES (2, 30), (3, 30);
SELECT *
FROM a
FULL JOIN b USING (id)
WHERE b1 = 30;
id | a1 | b1
----+----+----
2 | 20 | 30
3 | | 30
(2 rows)
SELECT *
FROM a
FULL JOIN (SELECT *
FROM b
WHERE b1 = 30) AS b_red
USING (id);
id | a1 | b1
----+----+----
1 | 20 |
2 | 20 | 30
3 | | 30
(3 rows)
So you would have to modify the underlying queries/views.
If you used inner joins, it would not be a problem.
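To see the contrast (a small addition reusing the tables a and b from above): with an inner join, filtering before or after the join is equivalent, so the planner is free to push the condition down into the scan of b:
SELECT *
FROM a
JOIN b USING (id)
WHERE b1 = 30;
 id | a1 | b1
----+----+----
  2 | 20 | 30
(1 row)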

Postgresql track serial by ID in another column

So I am trying to create a database that can store videos for products, and I intend to add a few million of them, so obviously I want the performance to be as good as possible.
I wanted to achieve the following:
BIGINT     | SMALLSERIAL | VARCHAR(30)
product_id | video_id    | video_hash
1          | 1           | Dkfjoie124
1          | 2           | POoieqlgkQ
1          | 3           | Xd2t9dakcx
2          | 1           | Df2459Afdw
However, when I insert a new video for a product:
INSERT INTO TABLE (product_id, video_hash) VALUES (2, 'DSpewirncS')
I want the following to happen:
BIGINT     | SMALLSERIAL | VARCHAR(30)
product_id | video_id    | video_hash
1          | 1           | Dkfjoie124
1          | 2           | POoieqlgkQ
1          | 3           | Xd2t9dakcx
2          | 1           | Df2459Afdw
2          | 2           | DSpewirncS
Will this happen when I set the column type for video_id to SMALLSERIAL? Because I am afraid that it will insert a different value (the highest in the entire column), which I do not want.
Thanks.
No, a serial is bound to a sequence, and that doesn't reset without being told to do so.
But if you want an ordinal for the videos per product, you can query the table to produce it using the row_number() window function.
SELECT product_id,
row_number() OVER (PARTITION BY product_id
ORDER BY video_id) video_ordinal,
video_hash
FROM table;
You could also create a view for this query for convenience, so that you can query the view instead of the table and the view would look like you want it.
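A sketch of such a view (the question never names the underlying table, so product_videos is assumed here):
-- Convenience view exposing the per-product ordinal:
CREATE VIEW product_video_ordinals AS
SELECT product_id,
       row_number() OVER (PARTITION BY product_id
                          ORDER BY video_id) AS video_ordinal,
       video_hash
FROM product_videos;  -- assumed table name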

unique index violation during update

I have run into a unique index violation in a bigger db. The original problem occurs in a stored PL/pgSQL function.
I have simplified everything to show my problem. I can reproduce it in a rather simple table:
CREATE TABLE public.test
(
id integer NOT NULL DEFAULT nextval('test_id_seq'::regclass),
pos integer,
text text,
CONSTRAINT text_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE public.test
OWNER TO root;
GRANT ALL ON TABLE public.test TO root;
I define a unique index on 'pos':
CREATE UNIQUE INDEX test_idx_pos
ON public.test
USING btree
(pos);
Before the UPDATE the data in the table looks like this:
testdb=# SELECT * FROM test;
id | pos | text
----+-----+----------
2 | 1 | testpos1
3 | 2 | testpos2
1 | 5 | testpos4
4 | 4 | testpos3
(4 rows)
Now I want to decrement by 1 all 'pos' values that are bigger than 2, and I get an error (output translated from German):
testdb=# UPDATE test SET pos = pos - 1 WHERE pos > 2;
ERROR: duplicate key value violates unique constraint "test_idx_pos"
DETAIL: Key (pos)=(4) already exists.
If the UPDATE had run to completion, the table would look like this and be unique again:
testdb=# SELECT * FROM test;
id | pos | text
----+-----+----------
2 | 1 | testpos1
3 | 2 | testpos2
1 | 4 | testpos4
4 | 3 | testpos3
(4 rows)
How can I avoid such a situation? I learned that stored PL/pgSQL functions are embedded in transactions, so I thought this problem shouldn't appear?
Unique indexes are evaluated per row, not per statement (which is different from, e.g., Oracle's implementation).
The solution to this problem is to use a unique constraint, which can be deferred and is thus evaluated at the end of the transaction.
So instead of the unique index, define a constraint:
alter table test add constraint test_idx_pos unique (pos)
deferrable initially deferred;
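With the deferred constraint in place (and the original unique index dropped), the UPDATE from the question runs to completion; a minimal sketch:
BEGIN;
-- The transient duplicate (pos = 4) while rows are updated one by one is now tolerated;
-- the unique check runs at COMMIT, when the data is consistent again.
UPDATE test SET pos = pos - 1 WHERE pos > 2;
COMMIT;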

Does the returning clause always execute first?

I have a many-to-many relation representing containers holding items.
I have a primary key row_id in the table.
I insert four rows with (container_id, item_id) values (1778712425160346751, 4). These rows are identical except for the aforementioned unique row_id.
I subsequently execute the following query:
delete from contains
where item_id = 4 and
container_id = '1778712425160346751' and
row_id =
(
select max(row_id) from contains
where container_id = '1778712425160346751' and
item_id = 4
)
returning
(
select count(*) from contains
where container_id = '1778712425160346751' and
item_id = 4
);
Now I expected to get 3 returned from this query, but I got 4. Getting 4 is the desired behavior, but it is not what I expected.
My question is: can I always expect that the returning clause executes before the delete, or is this an idiosyncrasy of certain versions or specific software?
The use of a subquery in the RETURNING section is allowed but not documented. From the documentation:
output_expression
An expression to be computed and returned by the DELETE command after each row is deleted. The expression can use any column names of the table named by table_name or table(s) listed in USING. Write * to return all columns.
It seems logical that the subquery sees the table in its state before the delete, as the statement is not yet complete.
create temp table test as
select id from generate_series(1, 4) id;
delete from test
returning id, (select count(*) from test);
id | count
----+-------
1 | 4
2 | 4
3 | 4
4 | 4
(4 rows)
The same applies to UPDATE:
create temp table test as
select id from generate_series(1, 4) id;
update test
set id = id+ 1
returning id, (select sum(id) from test);
id | sum
----+-----
2 | 10
3 | 10
4 | 10
5 | 10
(4 rows)
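A follow-up sketch (my addition): even moving the DELETE into a data-modifying CTE does not change the picture, because all sub-statements of one statement share the same snapshot; to get the post-delete count, subtract the rows actually deleted (reusing the contains query from the question):
WITH deleted AS (
    DELETE FROM contains
    WHERE item_id = 4
      AND container_id = '1778712425160346751'
      AND row_id = (SELECT max(row_id) FROM contains
                    WHERE container_id = '1778712425160346751'
                      AND item_id = 4)
    RETURNING row_id
)
-- The outer query still sees contains in its pre-delete state,
-- so subtract the number of deleted rows:
SELECT (SELECT count(*) FROM contains
        WHERE container_id = '1778712425160346751'
          AND item_id = 4)
     - (SELECT count(*) FROM deleted) AS count_after_delete;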