In my CakePHP 3.7.5 app I have a test database in which one of the tables is partitioned as follows:
-- Table: public.impressions
-- DROP TABLE public.impressions;
CREATE TABLE public.impressions
(
id bigint NOT NULL DEFAULT nextval('impressions_id_seq'::regclass),
company_id bigint NOT NULL,
ad_id bigint,
seen_at timestamp without time zone NOT NULL,
site_url text COLLATE pg_catalog."default" NOT NULL,
cost_per_impression_usd numeric(20,10),
user_ip inet NOT NULL,
user_data jsonb,
CONSTRAINT impressions_pkey PRIMARY KEY (id),
CONSTRAINT impressions_company_id_fkey FOREIGN KEY (ad_id, company_id)
REFERENCES public.ads (id, company_id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE public.impressions
OWNER to postgres;
-- Rule: impressions_one ON public.impressions
-- DROP Rule impressions_one ON public.impressions;
CREATE OR REPLACE RULE impressions_one AS
ON INSERT TO public.impressions
WHERE new.company_id < 25
DO INSTEAD
INSERT INTO impressions_one (id, company_id, ad_id, seen_at, site_url, cost_per_impression_usd, user_ip, user_data)
VALUES (new.id, new.company_id, new.ad_id, new.seen_at, new.site_url, new.cost_per_impression_usd, new.user_ip, new.user_data);
-- Rule: impressions_three ON public.impressions
-- DROP Rule impressions_three ON public.impressions;
CREATE OR REPLACE RULE impressions_three AS
ON INSERT TO public.impressions
WHERE new.company_id >= 50
DO INSTEAD
INSERT INTO impressions_three (id, company_id, ad_id, seen_at, site_url, cost_per_impression_usd, user_ip, user_data)
VALUES (new.id, new.company_id, new.ad_id, new.seen_at, new.site_url, new.cost_per_impression_usd, new.user_ip, new.user_data);
-- Rule: impressions_two ON public.impressions
-- DROP Rule impressions_two ON public.impressions;
CREATE OR REPLACE RULE impressions_two AS
ON INSERT TO public.impressions
WHERE new.company_id >= 25 AND new.company_id < 50
DO INSTEAD
INSERT INTO impressions_two (id, company_id, ad_id, seen_at, site_url, cost_per_impression_usd, user_ip, user_data)
VALUES (new.id, new.company_id, new.ad_id, new.seen_at, new.site_url, new.cost_per_impression_usd, new.user_ip, new.user_data);
All the rules work correctly and the values are inserted properly into their partitions, but only when I issue a plain INSERT INTO SQL command.
The problem with my partitioned table and CakePHP is that CakePHP retrieves the inserted entity using the RETURNING clause after the SQL INSERT. This is from the CakePHP debug output:
INSERT INTO impressions (company_id, seen_at, site_url,
cost_per_impression_usd, user_ip, user_data) VALUES (:c0, :c1, :c2,
:c3, :c4, :c5) RETURNING *
and of course, when I run this query in Postgres, it raises the same exception as CakePHP:
ERROR: cannot perform INSERT RETURNING on relation «impressions»
HINT: You need an unconditional ON INSERT DO INSTEAD rule with a
RETURNING clause. SQL state: 0A000
The problem is that PostgreSQL does not allow this kind of scenario with RETURNING, but this is the CakePHP way. I did the same thing with Symfony against the same database, and everything worked fine.
Is this CakePHP behavior changeable or configurable, so that when I save an entity it skips the RETURNING clause on the INSERT? Something like:
$this->Impressions->save($impression, ['skip_returning_stm' => true]);
I need both the partitioned table and the CakePHP framework; I only tried Symfony to rule out a general framework incompatibility.
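For reference, if changing the schema were an option, PostgreSQL's native declarative partitioning (PostgreSQL 11+, which allows a primary key on a partitioned table provided it includes the partition key) accepts INSERT ... RETURNING without any rules. A minimal sketch, not my current schema:
CREATE TABLE public.impressions
(
id bigint NOT NULL DEFAULT nextval('impressions_id_seq'::regclass),
company_id bigint NOT NULL,
ad_id bigint,
seen_at timestamp without time zone NOT NULL,
site_url text NOT NULL,
cost_per_impression_usd numeric(20,10),
user_ip inet NOT NULL,
user_data jsonb,
CONSTRAINT impressions_pkey PRIMARY KEY (id, company_id) -- the partition key must be part of the PK
) PARTITION BY RANGE (company_id);
CREATE TABLE public.impressions_one PARTITION OF public.impressions FOR VALUES FROM (MINVALUE) TO (25);
CREATE TABLE public.impressions_two PARTITION OF public.impressions FOR VALUES FROM (25) TO (50);
CREATE TABLE public.impressions_three PARTITION OF public.impressions FOR VALUES FROM (50) TO (MAXVALUE);
With that layout the INSERT ... RETURNING * that CakePHP issues works as-is, because tuple routing happens in the executor rather than through rules.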
Thanks in advance!!!
I have 2 tables like this
drop table if exists public.table_1;
drop table if exists public.table_2;
CREATE TABLE public.table_1 (
id serial NOT NULL,
user_id bigint not null,
status varchar(255) not null,
date_start date NOT NULL,
date_end date NULL
);
CREATE TABLE public.table_2 (
id serial NOT NULL,
user_id bigint not null,
status varchar(255) not null,
date_start date NOT NULL,
date_end date NULL
);
alter table public.table_1
add constraint my_constraint_1
EXCLUDE USING gist (user_id with =, daterange(date_start, date_end, '[]') WITH &&)
where (status != 'deleted');
alter table public.table_2
add constraint my_constraint_2
EXCLUDE USING gist (user_id with =, daterange(date_start, date_end, '[]') WITH &&)
where (status != 'deleted');
Each table contains rows related to a user, and the rows of the same user cannot overlap in range. In addition, some rows may be logically deleted, so I added a WHERE condition.
So far it works without problems, but the two constraints operate separately on each table.
I need to create a constraint which covers both sets of rows, so that a single daterange (of the same user and not deleted) may appear only once across the two different tables.
Can the EXCLUDE syntax be extended to work across different tables, or do I need to check this with a trigger? If a trigger is the answer, what is the simplest way to do this? Create a temporary table with the union of the two, add the constraint on it, and check whether it fails?
Starting from @Laurenz Albe's suggestion, this is what I made:
-- #################### SETUP SAMPLE TABLES ####################
drop table if exists public.table_1;
drop table if exists public.table_2;
CREATE TABLE public.table_1 (
id serial NOT NULL,
user_id bigint not null,
status varchar(255) not null,
date_start date NOT NULL,
date_end date NULL
);
CREATE TABLE public.table_2 (
id serial NOT NULL,
user_id bigint not null,
status varchar(255) not null,
date_start date NOT NULL,
date_end date NULL
);
alter table public.table_1
add constraint my_constraint_1
EXCLUDE USING gist (user_id with =, daterange(date_start, date_end, '[]') WITH &&)
where (status != 'deleted');
alter table public.table_2
add constraint my_constraint_2
EXCLUDE USING gist (user_id with =, daterange(date_start, date_end, '[]') WITH &&)
where (status != 'deleted');
-- #################### SETUP TRIGGER ####################
create or REPLACE FUNCTION check_date_overlap_trigger_hook()
RETURNS trigger as
$body$
DECLARE
l_table text;
l_sql text;
l_row record;
begin
l_table := TG_ARGV[0];
l_sql := format('
select *
from public.%I as t -- %I safely quotes the table name passed via TG_ARGV
where
t.user_id = %s -- Include only records of the same user
and t.status != ''deleted'' -- Include only records that are active
', l_table, new.user_id);
for l_row in execute l_sql
loop
IF daterange(l_row.date_start, COALESCE(l_row.date_end, 'infinity'::date)) && daterange(new.date_start, COALESCE(new.date_end, 'infinity'::date))
THEN
RAISE EXCEPTION 'Date interval is overlapping with another one in table %', l_table
USING HINT = 'You can''t have the same interval across table1 AND table2';
END IF;
end loop;
RETURN NEW;
end
$body$
LANGUAGE plpgsql;
-- #################### INSTALL TRIGGER ####################
create trigger check_date_overlap
BEFORE insert or update
ON public.table_1
FOR EACH row
EXECUTE PROCEDURE check_date_overlap_trigger_hook('table_2');
create trigger check_date_overlap
BEFORE insert or update
ON public.table_2
FOR EACH row
EXECUTE PROCEDURE check_date_overlap_trigger_hook('table_1');
-- #################### INSERT DEMO ROWS ####################
insert into public.table_1 (user_id, status, date_start, date_end) values (1, 'active', '2020-12-10', '2020-12-20');
insert into public.table_1 (user_id, status, date_start, date_end) values (1, 'deleted', '2020-12-15', '2020-12-25');
insert into public.table_1 (user_id, status, date_start, date_end) values (2, 'active', '2020-12-10', '2020-12-20');
insert into public.table_1 (user_id, status, date_start, date_end) values (2, 'deleted', '2020-12-15', '2020-12-25');
-- This will fail for overlap on the same table
-- insert into public.table_1 (user_id, status, date_start, date_end) values (1, 'active', '2020-12-15', '2020-12-25');
-- This will fail as the user 1 already has an overlapping period on table 1
-- insert into public.table_2 (user_id, status, date_start, date_end) values (1, 'active', '2020-12-15', '2020-12-25');
-- This insert succeeds, because the row is flagged as deleted ...
insert into public.table_2 (user_id, status, date_start, date_end) values (1, 'deleted', '2020-12-15', '2020-12-25');
-- ... but this update will fail, as user 1 already has an overlapping active period on table_1
update public.table_2 set status = 'active' where id = 1;
select 'table_1' as src_table, * from public.table_1
union
select 'table_2', * from public.table_2
You can probably use a trigger, but triggers are always vulnerable to race conditions (unless you are using SERIALIZABLE isolation).
If your tables really have the same columns, why don't you use a single table (and perhaps add a type column to disambiguate)?
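To make the single-table idea concrete, here is a minimal sketch (the combined table and its src column are my own naming, not part of the original schema; btree_gist is needed for the = operator on bigint inside a GiST exclusion constraint):
CREATE EXTENSION IF NOT EXISTS btree_gist;
CREATE TABLE public.table_combined (
id serial NOT NULL,
src text NOT NULL, -- 'table_1' or 'table_2': records where the row logically belongs
user_id bigint not null,
status varchar(255) not null,
date_start date NOT NULL,
date_end date NULL,
EXCLUDE USING gist (user_id with =, daterange(date_start, date_end, '[]') WITH &&)
where (status != 'deleted')
);
A single exclusion constraint then covers what used to be two tables, enforced atomically and without the race conditions a trigger is exposed to.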
So I'm setting up a schema in which I can input the transactions of a journal entry independently of each other, but which also rely on each other (mainly to ensure that debits = credits). I set up the tables, function, and trigger. Then, when I try to input values into the transactions table, I get the error below. I'm doing all of this in pgAdmin 4.
CREATE TABLE transactions (
transactions_id UUID PRIMARY KEY DEFAULT uuid_generate_v1(),
entry_id INTEGER NOT NULL,
post_date DATE NOT NULL,
account_id INTEGER NOT NULL,
contact_id INTEGER NULL,
description TEXT NOT NULL,
reference_id UUID NULL,
document_id UUID NULL,
amount NUMERIC(12,2) NOT NULL
);
CREATE TABLE entries (
id UUID PRIMARY KEY,
test_date DATE NOT NULL,
balance NUMERIC(12,2)
CHECK (balance = 0.00)
);
CREATE OR REPLACE FUNCTION transactions_biut()
RETURNS TRIGGER
LANGUAGE plpgsql
AS $$
BEGIN
EXECUTE 'INSERT INTO entries (id,test_date,balance)
SELECT
entry_id,
post_date,
SUM(amount) AS ''balance''
FROM
transactions
GROUP BY
entry_id;';
END;
$$;
CREATE TRIGGER transactions_biut
BEFORE INSERT OR UPDATE ON transactions
FOR EACH ROW EXECUTE PROCEDURE transactions_biut();
INSERT INTO transactions (
entry_id,
post_date,
account_id,
description,
amount
)
VALUES
(
'1',
'2019-10-01',
'101',
'MISC DEBIT: PAID FOR FACEBOOK ADS',
-200.00
),
(
'1',
'2019-10-01',
'505',
'MISC DEBIT: PAID FOR FACEBOOK ADS',
200.00
);
After I execute this input, I get the following error:
ERROR: column "id" of relation "entries" does not exist
LINE 1: INSERT INTO entries (id,test_date,balance)
^
QUERY: INSERT INTO entries (id,test_date,balance)
SELECT
entry_id,
post_date,
SUM(amount) AS "balance"
FROM
transactions
GROUP BY
entry_id;
CONTEXT: PL/pgSQL function transactions_biut() line 2 at EXECUTE
SQL state: 42703
There are a few problems here:
You're not returning anything from the trigger function => it should probably be RETURN NEW or RETURN OLD, since you're not modifying anything
Since you're executing the trigger before each row, it's bound to fail for any transaction that isn't balanced to 0 => maybe you want a deferred constraint trigger? (see the sketch after this list)
You're not grouping by post_date, so your SELECT should fail
You've defined entry_id as INTEGER, but entries.id is of type UUID
Also note that this isn't really going to scale (you're summing up all transactions of all days, so this will get slower and slower...)
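For the deferred constraint trigger idea mentioned above, a minimal sketch (the function name and error wording are mine; it re-checks the balance of the touched entry at commit time instead of before each row):
CREATE OR REPLACE FUNCTION check_entry_balance()
RETURNS TRIGGER
LANGUAGE plpgsql
AS $$
DECLARE
l_balance NUMERIC(12,2);
BEGIN
-- Sum only the rows of the entry being touched, not the whole table
SELECT COALESCE(SUM(amount), 0) INTO l_balance
FROM transactions
WHERE entry_id = NEW.entry_id;
IF l_balance <> 0.00 THEN
RAISE EXCEPTION 'entry % does not balance (sum = %)', NEW.entry_id, l_balance;
END IF;
RETURN NULL; -- the result of an AFTER trigger is ignored
END;
$$;
CREATE CONSTRAINT TRIGGER transactions_balance
AFTER INSERT OR UPDATE ON transactions
DEFERRABLE INITIALLY DEFERRED
FOR EACH ROW EXECUTE PROCEDURE check_entry_balance();
Because the check is INITIALLY DEFERRED, both rows of a balanced pair are already visible when it runs, so the two-row INSERT above succeeds while an unbalanced entry fails at COMMIT.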
@chirs I was able to figure out how to create a functioning solution using statement-level triggers:
CREATE TABLE transactions (
transactions_id UUID PRIMARY KEY DEFAULT uuid_generate_v1(),
entry_id INTEGER NOT NULL,
post_date DATE NOT NULL,
account_id INTEGER NOT NULL,
contact_id INTEGER NULL,
description TEXT NOT NULL,
reference_id UUID NULL,
document_id UUID NULL,
amount NUMERIC(12,2) NOT NULL
);
CREATE TABLE entries (
entry_id INTEGER PRIMARY KEY,
post_date DATE NOT NULL,
balance NUMERIC(12,2),
CHECK (balance = 0.00)
);
CREATE OR REPLACE FUNCTION transactions_entries() RETURNS TRIGGER AS $$
BEGIN
IF (TG_OP = 'DELETE') THEN
INSERT INTO entries
SELECT o.entry_id, o.post_date, SUM(o.amount) FROM old_table o GROUP BY o.entry_id, o.post_date;
ELSIF (TG_OP = 'UPDATE') THEN
INSERT INTO entries
SELECT o.entry_id, n.post_date, SUM(n.amount) FROM new_table n, old_table o GROUP BY o.entry_id, n.post_date;
ELSIF (TG_OP = 'INSERT') THEN
INSERT INTO entries
SELECT n.entry_id,n.post_date, SUM(n.amount) FROM new_table n GROUP BY n.entry_id, n.post_date;
END IF;
RETURN NULL; -- result is ignored since this is an AFTER trigger
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER transactions_ins
AFTER INSERT ON transactions
REFERENCING NEW TABLE AS new_table
FOR EACH STATEMENT EXECUTE PROCEDURE transactions_entries();
CREATE TRIGGER transactions_upd
AFTER UPDATE ON transactions
REFERENCING OLD TABLE AS old_table NEW TABLE AS new_table
FOR EACH STATEMENT EXECUTE PROCEDURE transactions_entries();
CREATE TRIGGER transactions_del
AFTER DELETE ON transactions
REFERENCING OLD TABLE AS old_table
FOR EACH STATEMENT EXECUTE PROCEDURE transactions_entries();
Any thoughts on optimization?
I want to create a temp table using SELECT INTO syntax, like:
select top 0 * into #AffectedRecord from MyTable
MyTable has a primary key. When I insert records using MERGE INTO syntax, the primary key becomes a problem. How could I drop the PK constraint from the temp table?
The "SELECT TOP (0) INTO.." trick is clever but my recommendation is to script out the table yourself for reasons just like this. SELECT INTO when you're actually bringing in data, on the other hand, is often faster than creating the table and doing the insert. Especially on 2014+ systems.
The existence of a primary key has nothing to do with your problem. Key constraints and indexes don't get created when using SELECT INTO from another table; the data types and NULLability do. Consider the following code and note my comments:
USE tempdb -- a good place for testing on non-prod servers.
GO
IF OBJECT_ID('dbo.t1') IS NOT NULL DROP TABLE dbo.t1;
IF OBJECT_ID('dbo.t2') IS NOT NULL DROP TABLE dbo.t2;
GO
CREATE TABLE dbo.t1
(
id int identity primary key clustered,
col1 varchar(10) NOT NULL,
col2 int NULL
);
GO
INSERT dbo.t1(col1) VALUES ('a'),('b');
SELECT TOP (0)
id, -- this creates the column including the identity but NOT the primary key
CAST(id AS int) AS id2, -- this will create the column but it will be nullable. No identity
ISNULL(CAST(id AS int),0) AS id3, -- this creates the column and makes it NOT NULL. No identity.
col1,
col2
INTO dbo.t2
FROM t1;
Here's the (cleaned up for brevity) DDL for the new table I created:
-- New table
CREATE TABLE dbo.t2
(
id int IDENTITY(1,1) NOT NULL,
id2 int NULL,
id3 int NOT NULL,
col1 varchar(10) NOT NULL,
col2 int NULL
);
Notice that the primary key is gone. When I brought in id as-is it kept the identity. Casting the id column as an int (even though it already is an int) is how I got rid of the identity property. Wrapping a column in ISNULL is how to make it NOT NULL.
By default, identity insert is set to off, so this query will fail:
INSERT dbo.t2 (id, id3, col1) VALUES (1, 1, 'x');
Msg 544, Level 16, State 1, Line 39
Cannot insert explicit value for identity column in table 't2' when IDENTITY_INSERT is set to OFF.
Setting identity insert on will fix the problem:
SET IDENTITY_INSERT dbo.t2 ON;
INSERT dbo.t2 (id, id3, col1) VALUES (1, 1, 'x');
But now you MUST provide a value for that column. Note the error here:
INSERT dbo.t2 (id3, col1) VALUES (1, 'x');
Msg 545, Level 16, State 1, Line 51
Explicit value must be specified for identity column in table 't2' either when IDENTITY_INSERT is set to ON
Hopefully this helps.
On a side note: this is a good way to play around with and understand how SELECT INTO works. I used a permanent table because it's easier to find.
Using Postgres, what I would like to achieve is to be able to have many different instrument types, with corresponding [TYPE].instrument tables, which all have a unique ID in the table, but also reference a unique ID in the instrument.master table. I have the following:
create schema instrument;
create schema futures; -- needed below for futures.definition and futures.instrument
CREATE TABLE instrument.type (
id smallserial NOT NULL,
name text not null,
code text not null,
CONSTRAINT pk_instrument_type PRIMARY KEY (id)
);
ALTER TABLE instrument.type ADD CONSTRAINT unq_instrument_type_code UNIQUE(code);
ALTER TABLE instrument.type ADD CONSTRAINT unq_instrument_type_name UNIQUE(name);
insert into instrument.type (name, code) values ('futures', 'f');
CREATE TABLE instrument.master (
id serial NOT NULL,
type smallint not null references instrument.type (id),
timestamp timestamp with time zone not null,
CONSTRAINT pk_instrument_master PRIMARY KEY (id)
);
CREATE TABLE futures.definition (
id smallserial NOT NULL,
code text not null,
CONSTRAINT pk_futures_definition PRIMARY KEY (id)
);
ALTER TABLE futures.definition ADD CONSTRAINT unq_futures_definition_code UNIQUE(code);
insert into futures.definition (code) values ('ED');
CREATE TABLE futures.instrument (
id smallserial NOT NULL,
master serial not null references instrument.master (id),
definition smallint not null references futures.definition (id),
month smallint not null,
year smallint not null,
CONSTRAINT pk_futures_instrument PRIMARY KEY (id),
check (month >= 1),
check (month <= 12),
check (year >= 1900)
);
ALTER TABLE futures.instrument ADD CONSTRAINT unq_futures_instrument UNIQUE(definition, month, year);
CREATE OR REPLACE FUNCTION trigger_master_futures()
RETURNS trigger AS
$BODY$
BEGIN
insert into instrument.master (type, timestamp)
select id, current_timestamp from instrument.type where code = 'f';
NEW.master := currval('instrument.master_id_seq');
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
create trigger trg_futures_instrument before insert on futures.instrument
for each row
execute procedure trigger_master_futures();
I then test with:
insert into futures.instrument (definition, month, year)
select id, 3, 2015 from futures.definition where code = 'ED';
Everything works almost as I would like it to. The only issue is that somehow, instrument.master.id ends up being one more than futures.instrument.master. I am not sure what I need to do to achieve the behavior I want, which is that whenever an entry is inserted into futures.instrument, an entry should be inserted into instrument.master, and the id entry of the latter should be inserted into the master entry of the former. I actually think it should have failed since the foreign key relationship is violated somehow.
As it turns out, everything was correct. The issue was that in futures.instrument, the type of the master column is serial, and it should have been int.
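For what it's worth, with master as a plain int, the trigger function can also capture the generated id via RETURNING instead of currval(); a minimal sketch of the same function:
CREATE OR REPLACE FUNCTION trigger_master_futures()
RETURNS trigger AS
$BODY$
BEGIN
insert into instrument.master (type, timestamp)
select id, current_timestamp from instrument.type where code = 'f'
RETURNING id INTO NEW.master; -- replaces the separate currval() lookup
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;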
I currently have a parent table:
CREATE TABLE members (
member_id SERIAL NOT NULL, UNIQUE, PRIMARY KEY
first_name varchar(20)
last_name varchar(20)
address address (composite type)
contact_numbers varchar(11)[3]
date_joined date
type varchar(5)
);
and two related tables:
CREATE TABLE basic_member (
activities varchar[3])
INHERITS (members)
);
CREATE TABLE full_member (
activities varchar[])
INHERITS (members)
);
If the type is 'full', the details are entered into the full_member table; if the type is 'basic', into the basic_member table. What I want is that if I run an UPDATE and change the type to 'basic' or 'full', the tuple moves into the corresponding table.
I was wondering if I could do this with a rule like:
CREATE RULE tuple_swap_full
AS ON UPDATE TO full_member
WHERE new.type = 'basic'
INSERT INTO basic_member VALUES (old.member_id, old.first_name, old.last_name,
old.address, old.contact_numbers, old.date_joined, new.type, old.activities);
... then delete the record from the full_member
Just wondering if my rule is anywhere near right, or if there is a better way.
You don't need
member_id SERIAL NOT NULL, UNIQUE, PRIMARY KEY
A PRIMARY KEY implies UNIQUE NOT NULL automatically:
member_id SERIAL PRIMARY KEY
I wouldn't use a hard-coded max length like varchar(20). Just use text and add a CHECK constraint if you really must enforce a maximum length. Easier to change around.
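For example, a sketch (pick whatever limit you actually need to enforce):
first_name text CHECK (length(first_name) <= 20) -- only if a cap is really required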
Syntax for INHERITS is mangled. The key word goes outside the parens around columns.
CREATE TABLE full_member (
activities text[]
) INHERITS (members);
Table names are inconsistent (members <-> member). I use the singular form everywhere in my test case.
Finally, I would not use a RULE for the task. A trigger AFTER UPDATE seems preferable.
Consider the following test case:
Tables:
CREATE SCHEMA x; -- I put everything in a test schema named "x".
-- DROP TABLE x.member CASCADE;
CREATE TABLE x.member (
member_id SERIAL PRIMARY KEY
,first_name text
-- more columns ...
,type text);
CREATE TABLE x.basic_member (
activities text[3]
) INHERITS (x.member);
CREATE TABLE x.full_member (
activities text[]
) INHERITS (x.member);
Trigger function:
Data-modifying CTEs (WITH x AS (DELETE ...)) are the best tool for the purpose. They require PostgreSQL 9.1 or later.
For older versions, first INSERT, then DELETE (see the sketch after the function body below).
CREATE OR REPLACE FUNCTION x.trg_move_member()
RETURNS trigger AS
$BODY$
BEGIN
CASE NEW.type
WHEN 'basic' THEN
WITH x AS (
DELETE FROM x.member
WHERE member_id = NEW.member_id
RETURNING *
)
INSERT INTO x.basic_member (member_id, first_name, type) -- more columns
SELECT member_id, first_name, type -- more columns
FROM x;
WHEN 'full' THEN
WITH x AS (
DELETE FROM x.member
WHERE member_id = NEW.member_id
RETURNING *
)
INSERT INTO x.full_member (member_id, first_name, type) -- more columns
SELECT member_id, first_name, type -- more columns
FROM x;
END CASE;
RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
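For pre-9.1 versions, the 'basic' branch could look like this sketch instead (note ONLY, so the DELETE does not remove the row that was just copied into the child table):
INSERT INTO x.basic_member (member_id, first_name, type) -- more columns
SELECT member_id, first_name, type -- more columns
FROM x.member
WHERE member_id = NEW.member_id;
DELETE FROM ONLY x.member
WHERE member_id = NEW.member_id;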
Trigger:
Note that it is an AFTER trigger and has a WHEN condition.
The WHEN condition requires PostgreSQL 9.0 or later. For earlier versions, you can just leave it out; the CASE statement in the trigger function takes care of it.
CREATE TRIGGER up_aft
AFTER UPDATE
ON x.member
FOR EACH ROW
WHEN (NEW.type IN ('basic','full')) -- OLD.type cannot be IN ('basic','full')
EXECUTE PROCEDURE x.trg_move_member();
Test:
INSERT INTO x.member (first_name, type) VALUES ('peter', NULL);
UPDATE x.member SET type = 'full' WHERE first_name = 'peter';
SELECT * FROM ONLY x.member;
SELECT * FROM x.basic_member;
SELECT * FROM x.full_member;