I am trying to create one view (or a view in each schema that can be used without modification) on a table that exists in multiple schemas with the same name:
create schema company_1;
create schema company_2;
...
CREATE TABLE company_1.orders
(
id serial NOT NULL,
amount real,
paid real,
CONSTRAINT orders_pkey PRIMARY KEY (id )
)
WITH (
OIDS=FALSE
);
CREATE TABLE company_2.orders
(
id serial NOT NULL,
amount real,
paid real,
CONSTRAINT orders_pkey PRIMARY KEY (id )
)
WITH (
OIDS=FALSE
);
....
What is the correct way to create a view on the orders table without hard-coding the schema for every view, or by referring to the current schema?
What I need and failed to get is either
CREATE OR REPLACE VIEW
public.full_orders AS
SELECT id, amount FROM orders;
or
CREATE OR REPLACE VIEW
company_1.full_orders AS
-- company_2.full_orders AS
-- company_n.full_orders AS
SELECT id, amount FROM current_schema.orders;
Using PostgreSQL 9.2.2.
EDIT: The way I went:
CREATE VIEW company_1.full_orders AS
SELECT id, amount FROM company_1.orders;
In the schema-copy routine discussed here, I brutally do this:
FOR src_table IN
SELECT table_name
FROM information_schema.TABLES
WHERE table_schema = source_schema AND table_type = 'VIEW'
LOOP
SELECT view_definition
FROM information_schema.views
WHERE table_name = src_table AND table_schema = source_schema INTO q;
trg_table := target_schema||'.'||src_table;
EXECUTE 'CREATE VIEW ' || trg_table || ' AS '||replace(q, source_schema, target_schema);
END LOOP;
Still looking for a better solution...
It's not possible to do this with a straightforward view. A view records the underlying table's identity at creation time, so it is not affected by the schema search path in effect later on.
You could do it with a set-returning function that uses dynamic SQL, and then wrap that in a view. But I don't think that's a good solution.
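For illustration, here is a minimal sketch of that approach, assuming the id/amount columns from the question; the function name full_orders_func is made up:
CREATE OR REPLACE FUNCTION full_orders_func()
  RETURNS TABLE (id integer, amount real)
  LANGUAGE plpgsql STABLE AS
$func$
BEGIN
    -- Resolve the orders table in the caller's current schema at call time.
    RETURN QUERY EXECUTE
        format('SELECT o.id, o.amount FROM %I.orders o', current_schema());
END
$func$;
CREATE OR REPLACE VIEW public.full_orders AS
SELECT * FROM full_orders_func();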
I would just create quasi-duplicates for the view, as you have been doing, and enhance my deployment script to keep them all up to date.
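A possible sketch of such a deployment step, assuming every schema follows the company_N naming pattern from the question:
DO $$
DECLARE
    s text;
BEGIN
    -- (Re)create the per-schema view in every company_* schema.
    FOR s IN
        SELECT nspname FROM pg_namespace WHERE nspname LIKE 'company_%'
    LOOP
        EXECUTE format(
            'CREATE OR REPLACE VIEW %I.full_orders AS SELECT id, amount FROM %I.orders',
            s, s);
    END LOOP;
END
$$;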
I have a child table that looks like this:
CREATE TABLE folder_item (
id uuid PRIMARY KEY DEFAULT gen_random_uuid()
,parent_id uuid REFERENCES folder_item (id) ON DELETE CASCADE
,role text NOT NULL DEFAULT 'inherit'
);
With a permissions model
CREATE POLICY folder_item_rolecheck ON folder_item FOR SELECT USING ( role = assigned_role );
However, if it finds a row with role 'inherit', I want it to look at the parent's role instead (recursively).
Is that possible?
-- Turn off FORCE ROW LEVEL SECURITY on table "folder_item" so RLS does not apply to the table OWNER
ALTER TABLE folder_item NO FORCE ROW LEVEL SECURITY;
-- Create a function with a RECURSIVE query and SECURITY DEFINER, owned by the same owner as table "folder_item"
CREATE OR REPLACE FUNCTION folder_item_check_child(
in_parent_id uuid
, in_role text)
RETURNS boolean
LANGUAGE 'plpgsql'
COST 100
STABLE SECURITY DEFINER
AS $BODY$BEGIN
RETURN EXISTS(
WITH RECURSIVE
childs AS (
SELECT tt.id, tt.role FROM folder_item AS tt
WHERE tt.parent_id=in_parent_id
UNION
SELECT child.id, child.role
FROM childs AS parent
INNER JOIN folder_item AS child ON child.parent_id=parent.id
)
SELECT * FROM childs AS tt WHERE tt.role=in_role
);
END$BODY$;
-- CREATE POLICY
CREATE POLICY folder_item_rolecheck ON folder_item FOR SELECT USING ( role = assigned_role
OR folder_item_check_child(id, assigned_role)
);
I am writing a migration script to migrate a database. I have to duplicate a row while incrementing its primary key, keeping in mind that different databases can have any number of different columns in the table, so I can't write out each and every column in the query. If I simply copy the row, I get a duplicate key error.
Query: INSERT INTO table_name SELECT * FROM table_name WHERE id=255;
ERROR: duplicate key value violates unique constraint "table_name_pkey"
DETAIL: Key (id)=(255) already exists.
Here it's good that I don't have to mention all the column names; I can select all columns with *. But at the same time I get the duplicate key error.
What's the solution to this problem? Any help would be appreciated. Thanks in advance.
If you are willing to type all column names, you may write
INSERT INTO table_name (
pri_key
,col2
,col3
)
SELECT (
SELECT MAX(pri_key) + 1
FROM table_name
)
,col2
,col3
FROM table_name
WHERE id = 255;
Another option (without typing all the columns, as long as you know the primary key column) is to create a temp table, update it, and re-insert within a transaction.
BEGIN;
CREATE TEMP TABLE temp_tab ON COMMIT DROP AS SELECT * FROM table_name WHERE id=255;
UPDATE temp_tab SET pri_key_col = ( select MAX(pri_key_col) + 1 FROM table_name );
INSERT INTO table_name select * FROM temp_tab;
COMMIT;
This is just a DO block, but you could create a function that takes things like the table name as parameters.
Setup:
CREATE TABLE public.t1 (a TEXT, b TEXT, c TEXT, id SERIAL PRIMARY KEY, e TEXT, f TEXT);
INSERT INTO public.t1 (e) VALUES ('x'), ('y'), ('z');
Code to duplicate values without the primary key column:
DO $$
DECLARE
_table_schema TEXT := 'public';
_table_name TEXT := 't1';
_pk_column_name TEXT := 'id';
_columns TEXT;
BEGIN
SELECT STRING_AGG(column_name, ',')
INTO _columns
FROM information_schema.columns
WHERE table_name = _table_name
AND table_schema = _table_schema
AND column_name <> _pk_column_name;
EXECUTE FORMAT('INSERT INTO %1$s.%2$s (%3$s) SELECT %3$s FROM %1$s.%2$s', _table_schema, _table_name, _columns);
END $$;
The query it creates and runs is: INSERT INTO public.t1 (a,b,c,e,f) SELECT a,b,c,e,f FROM public.t1. It selects all the columns apart from the PK one. You could put this code in a function and use it for any table you want (see the sketch below), or just use it like this and edit it for whatever table.
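For example, a hypothetical wrapper function along those lines (the name, parameters and the raw filter argument are all illustrative; the filter string is spliced in unescaped, so it is only for trusted input):
CREATE OR REPLACE FUNCTION duplicate_rows(
    _table_schema   text,
    _table_name     text,
    _pk_column_name text,
    _filter         text DEFAULT 'true'  -- raw SQL fragment, e.g. 'id = 255'
) RETURNS void
LANGUAGE plpgsql AS
$$
DECLARE
    _columns text;
BEGIN
    -- Collect every column except the primary key column.
    SELECT string_agg(quote_ident(column_name), ',')
    INTO _columns
    FROM information_schema.columns
    WHERE table_name = _table_name
      AND table_schema = _table_schema
      AND column_name <> _pk_column_name;
    -- Re-insert the matching rows; the PK default supplies new key values.
    EXECUTE format(
        'INSERT INTO %1$I.%2$I (%3$s) SELECT %3$s FROM %1$I.%2$I WHERE %4$s',
        _table_schema, _table_name, _columns, _filter);
END
$$;
-- e.g. duplicate the row with id = 255:
-- SELECT duplicate_rows('public', 't1', 'id', 'id = 255');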
I have a table with a column storing the OID of the table the records come from. This field must be filled (NOT NULL), but it should get a default value if none is provided. I would like the default to be the current table's OID at insert time.
CREATE TABLE t (
Source regclass NOT NULL DEFAULT current_table_name()::regclass
);
Is there any function (current_table_name) in PostgreSQL to perform this task?
There is no such function, but you can achieve the goal in three steps:
Create the table without DEFAULT.
Find the OID of the new table.
ALTER the table and set the DEFAULT.
It is better to use the table OID than to cast the table name to regclass, because with the latter INSERTs will suddenly start failing after the table is renamed.
Here is a DO block that would achieve that:
DO $$DECLARE
reloid oid;
BEGIN
CREATE TABLE t (source regclass NOT NULL);
SELECT t.oid INTO reloid
FROM pg_class t
JOIN pg_namespace n ON t.relnamespace = n.oid
WHERE t.relname = 't' AND n.nspname = current_schema;
EXECUTE 'ALTER TABLE t ALTER source SET DEFAULT ' || reloid;
END;$$;
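A quick check of the result (assuming the DO block above has been run):
INSERT INTO t DEFAULT VALUES;  -- the default supplies the table's own OID
SELECT source FROM t;          -- displayed as 't' via the regclass output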
I'm using the following syntax for all the tables I'm creating with a script.
if ((select count(*) from sys.tables where Name = 'SpecificName') > 0)
drop table SpecificName
create table SpecificName( ... )
Then I got tired of editing three places every time I add a table, so I tried the following.
declare @name varchar(100) = 'NameToBe'
if ((select count(*) from sys.tables where Name = @name) > 0)
drop table @name
create table @name( ... )
Of course, this doesn't work because the server won't let me drop or create tables referred to by a string. Can this be done, and if so, how?
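One common workaround (a sketch, assuming SQL Server; the column list is just a placeholder) is to splice the name into dynamic SQL:
declare @name sysname = 'NameToBe';
declare @sql nvarchar(max);
if exists (select 1 from sys.tables where Name = @name)
begin
    set @sql = N'drop table ' + quotename(@name);
    exec sp_executesql @sql;
end
set @sql = N'create table ' + quotename(@name) + N' ( id int primary key )';
exec sp_executesql @sql;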
I currently have a parent table:
CREATE TABLE members (
member_id SERIAL NOT NULL, UNIQUE, PRIMARY KEY
first_name varchar(20)
last_name varchar(20)
address address (composite type)
contact_numbers varchar(11)[3]
date_joined date
type varchar(5)
);
and two related tables:
CREATE TABLE basic_member (
activities varchar[3])
INHERITS (members)
);
CREATE TABLE full_member (
activities varchar[])
INHERITS (members)
);
If the type is 'full', the details are entered into the full_member table; if the type is 'basic', into the basic_member table. What I want is that if I run an update and change the type to 'basic' or 'full', the tuple moves into the corresponding table.
I was wondering if I could do this with a rule like:
CREATE RULE tuple_swap_full
AS ON UPDATE TO full_member
WHERE new.type = 'basic'
INSERT INTO basic_member VALUES (old.member_id, old.first_name, old.last_name,
old.address, old.contact_numbers, old.date_joined, new.type, old.activities);
... then delete the record from the full_member
Just wondering if my rule is anywhere near or if there is a better way.
You don't need
member_id SERIAL NOT NULL, UNIQUE, PRIMARY KEY
A PRIMARY KEY implies UNIQUE NOT NULL automatically:
member_id SERIAL PRIMARY KEY
I wouldn't use a hard-coded max length like varchar(20). Just use text and add a CHECK constraint if you really must enforce a maximum length. That is easier to change around.
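A minimal sketch of that alternative (table and constraint names are illustrative):
CREATE TABLE member_demo (
    first_name text CONSTRAINT first_name_max_len CHECK (char_length(first_name) <= 20)
);
-- Relaxing the limit later is a single statement:
-- ALTER TABLE member_demo DROP CONSTRAINT first_name_max_len;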
Syntax for INHERITS is mangled. The key word goes outside the parens around columns.
CREATE TABLE full_member (
activities text[]
) INHERITS (members);
Table names are inconsistent (members <-> member). I use the singular form everywhere in my test case.
Finally, I would not use a RULE for the task. A trigger AFTER UPDATE seems preferable.
Consider the following
Test case:
Tables:
CREATE SCHEMA x; -- I put everything in a test schema named "x".
-- DROP TABLE x.members CASCADE;
CREATE TABLE x.member (
member_id SERIAL PRIMARY KEY
,first_name text
-- more columns ...
,type text);
CREATE TABLE x.basic_member (
activities text[3]
) INHERITS (x.member);
CREATE TABLE x.full_member (
activities text[]
) INHERITS (x.member);
Trigger function:
Data-modifying CTEs (WITH x AS (DELETE ..) ..) are the best tool for the purpose. This requires PostgreSQL 9.1 or later.
For older versions, first INSERT, then DELETE (see the sketch after the trigger function below).
CREATE OR REPLACE FUNCTION x.trg_move_member()
RETURNS trigger AS
$BODY$
BEGIN
CASE NEW.type
WHEN 'basic' THEN
WITH x AS (
DELETE FROM x.member
WHERE member_id = NEW.member_id
RETURNING *
)
INSERT INTO x.basic_member (member_id, first_name, type) -- more columns
SELECT member_id, first_name, type -- more columns
FROM x;
WHEN 'full' THEN
WITH x AS (
DELETE FROM x.member
WHERE member_id = NEW.member_id
RETURNING *
)
INSERT INTO x.full_member (member_id, first_name, type) -- more columns
SELECT member_id, first_name, type -- more columns
FROM x;
END CASE;
RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
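For versions before 9.1, as mentioned above, one branch of the CASE could look like this instead (note ONLY, so the row just copied into the child table is not deleted again):
-- 'basic' branch without a data-modifying CTE: INSERT first, then DELETE
INSERT INTO x.basic_member (member_id, first_name, type) -- more columns
SELECT member_id, first_name, type                        -- more columns
FROM   ONLY x.member
WHERE  member_id = NEW.member_id;
DELETE FROM ONLY x.member
WHERE  member_id = NEW.member_id;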
Trigger:
Note that it is an AFTER trigger and has a WHEN condition.
The WHEN condition requires PostgreSQL 9.0 or later. For earlier versions, you can just leave it out; the CASE statement in the trigger function itself takes care of it.
CREATE TRIGGER up_aft
AFTER UPDATE
ON x.member
FOR EACH ROW
WHEN (NEW.type IN ('basic','full')) -- OLD.type cannot be IN ('basic','full')
EXECUTE PROCEDURE x.trg_move_member();
Test:
INSERT INTO x.member (first_name, type) VALUES ('peter', NULL);
UPDATE x.member SET type = 'full' WHERE first_name = 'peter';
SELECT * FROM ONLY x.member;
SELECT * FROM x.basic_member;
SELECT * FROM x.full_member;