Postgres: generate SQL output using data dictionary tables

All I need is to get SQL query output like this:
ALTER TABLE TABLE_NAME
ADD CONSTRAINT
FOREIGN KEY (COLUMN_NAME)
REFERENCES (PARENT_TABLE_NAME);
I'm running the below dynamic query using the data dictionary tables:
SELECT DISTINCT
'ALTER TABLE ' || cs.TABLE_NAME ||
'ADD CONSTRAINT' || rc.CONSTRAINT_NAME ||
'FOREIGN KEY' || c.COLUMN_NAME ||
'REFERENCES' || cs.TABLE_NAME ||
' (' || cs.CONSTRAINT_NAME || ') ' ||
' ON UPDATE ' || rc.UPDATE_RULE ||
' ON DELETE ' || rc.DELETE_RULE
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS RC,
INFORMATION_SCHEMA.TABLE_CONSTRAINTS CS,
INFORMATION_SCHEMA.COLUMNS C
WHERE cs.CONSTRAINT_NAME = rc.CONSTRAINT_NAME
AND cs.TABLE_NAME = c.TABLE_NAME
AND UPPER(cs.TABLE_SCHEMA) = 'SSP2_PCAT';
But even though I'm able to generate the desired output, the concern is that it's not giving the PARENT_TABLE_NAME here; rather, it's giving the same table_name after the ALTER TABLE keywords.
I hope this is clear, as we are using dynamic SQL here, and any help is absolutely appreciated!

Your query is missing a couple of join tables and join conditions. Also, don't forget that a foreign key can be defined on more than one column. Finally, your query is vulnerable to SQL injection via object names.
But it would be much simpler if you used pg_catalog.pg_constraint rather than the `information_schema`:
SELECT format('ALTER TABLE %s ADD CONSTRAINT %I %s',
conrelid::regclass,
conname,
pg_get_constraintdef(oid))
FROM pg_catalog.pg_constraint
WHERE contype = 'f'
AND upper(connamespace::regnamespace::text) = 'SSP2_PCAT';
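For a hypothetical schema ssp2_pcat with a child table orders referencing customers, the generated statement would look roughly like this (the table, column, and constraint names are made up for illustration):
ALTER TABLE orders ADD CONSTRAINT orders_customer_id_fkey FOREIGN KEY (customer_id) REFERENCES customers(customer_id)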


How to find all foreign key constraints that are broken in Postgresql

We recently discovered an issue with our database where a foreign key constraint was not working correctly. Basically the primary table did not have any primary ids that matched the foreign key in the child table. When we dropped the foreign key constraint and tried to recreate it, it then threw an error that the foreign key constraint could not be created because there were foreign keys with no matching id in the parent table. Once those were cleaned up, it allowed us to recreate the foreign key.
Of course we are wondering how this happened to begin with. I worked with Oracle for 15 years and never saw a foreign key failing in this way. But our concern right now is how many other foreign keys are not working correctly. This is a problem because we have some BEFORE DELETE triggers that fail silently when the calling function returns a null because of a foreign_key_violation (this is how we discovered the issue to begin with).
EXCEPTION
WHEN foreign_key_violation
THEN RETURN NULL;
What we want to do is get all of the foreign keys in the database (probably a few thousand), loop over them all, and check every single one against its parent table to see if any are "broken".
Basically:
Select all foreign keys using Postgres system tables.
Loop over all of them and do something like:
select count(*) from child_table
where foreign_key_id not in (
    select parent_id as foreign_key_id
    from parent_table
);
For all the ones that are not 0, drop the foreign key constraint, fix the orphaned data, and recreate the foreign key constraint.
Does this sound reasonable? Has anyone done something like this before? What is the best way to get the foreign key constraints from Postgres?
Edit:
What we realized is that if we dropped and recreated the foreign keys, it would tell us which ones were problematic.
SELECT 'Alter table ' || conrelid::regclass || ' drop constraint ' || conname ||
       '; alter table ' || conrelid::regclass || ' add constraint ' || conname ||
       ' ' || pg_get_constraintdef(oid) || ';'
FROM pg_constraint
WHERE contype = 'f'
AND connamespace = 'public'::regnamespace
ORDER BY conrelid::regclass::text, contype DESC;
Next, run all the commands. If there are any foreign key violations, they will show up when the ALTER TABLE ... ADD CONSTRAINT command tries to run. Make a note of which ones had a problem and keep running the commands until you have a list of all issues.
The following is an example of the SQL commands to fix the issues.
delete from child_table where id_child_table in (
select id_child_table from child_table
where id_parent_id not in (
select id_parent_id
from parent_table
)
);
alter table child_table drop constraint child_table_fk1;
alter table child_table add constraint child_table_fk1 FOREIGN KEY (id_parent_table) REFERENCES parent_table(id_parent_table) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE;
Using a basic data sample:
create table test1 (id1 serial, seq1 int, constraint pk1 primary key (id1, seq1));
create table test2 (id2 int, seq2 int, descr2 text, constraint pk2 primary key (id2, seq2), constraint fk1 foreign key (id2, seq2) references test1 (id1, seq1) ON DELETE RESTRICT ON UPDATE RESTRICT) ;
insert into test1 (seq1) values (1), (2), (3), (4), (5);
insert into test2 (id2, seq2, descr2) values (1,1,'A'), (2,2,'B'), (3,3,'C'), (4,4,'D'), (5,5,'E');
Considering the following query to get the list of foreign keys (found on the internet):
SELECT conrelid::regclass AS table_name,
conname AS foreign_key,
pg_get_constraintdef(oid)
FROM pg_constraint
WHERE contype = 'f'
AND connamespace = 'public'::regnamespace
ORDER BY conrelid::regclass::text, contype DESC;
Result:
 table_name | foreign_key |                                 pg_get_constraintdef
------------+-------------+--------------------------------------------------------------------------------------------
 test2      | fk1         | FOREIGN KEY (id2, seq2) REFERENCES test1(id1, seq1) ON UPDATE RESTRICT ON DELETE RESTRICT
With some changes to get a result list that can be processed programmatically:
SELECT conrelid::regclass AS ReferencingTable
, a.ReferencingKey
, b.ReferencedTable
, b.ReferencedKey
FROM pg_constraint AS pg
CROSS JOIN LATERAL
( SELECT '(' || string_agg(conrelid::regclass || '.' || fkey, ',' ORDER BY fkey.ORDINALITY) || ')' AS ReferencingKey
FROM pg_get_constraintdef(pg.oid) AS fk
CROSS JOIN LATERAL string_to_table(translate(regexp_substr(fk, '\([^\)]*\)', 1, 1), '() ', ''), ',') WITH ORDINALITY AS fkey
) AS a
CROSS JOIN LATERAL
( SELECT '(' || string_agg(ReferencedTable || '.' || rkey, ',' ORDER BY rkey.ORDINALITY) || ')' AS ReferencedKey
, ReferencedTable
FROM pg_get_constraintdef(pg.oid) AS fk
CROSS JOIN LATERAL split_part((regexp_match(fk, 'REFERENCES\s\w+'))[1], ' ', 2) AS ReferencedTable
CROSS JOIN LATERAL string_to_table(translate(regexp_substr(fk, '\([^\)]*\)', 1, 2), '() ', ''), ',') WITH ORDINALITY AS rkey
GROUP BY ReferencedTable
) AS b
WHERE contype = 'f'
AND connamespace = 'public' ::regnamespace
Result:
 referencingtable |     referencingkey     | referencedtable |     referencedkey
------------------+------------------------+-----------------+------------------------
 test2            | (test2.id2,test2.seq2) | test1           | (test1.id1,test1.seq1)
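As a side note, an alternative sketch that avoids parsing the text of pg_get_constraintdef is to unnest conkey/confkey against pg_attribute; it returns one row per key column and assumes PostgreSQL 9.4+ for multi-argument unnest ... WITH ORDINALITY:
SELECT c.conrelid::regclass   AS referencing_table,
       a.attname              AS referencing_column,
       c.confrelid::regclass  AS referenced_table,
       af.attname             AS referenced_column
FROM pg_constraint c
CROSS JOIN LATERAL unnest(c.conkey, c.confkey) WITH ORDINALITY AS k(attnum, fattnum, ord)
JOIN pg_attribute a  ON a.attrelid  = c.conrelid  AND a.attnum  = k.attnum
JOIN pg_attribute af ON af.attrelid = c.confrelid AND af.attnum = k.fattnum
WHERE c.contype = 'f'
  AND c.connamespace = 'public'::regnamespace
ORDER BY c.conrelid::regclass::text, c.conname, k.ord;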
Creating a plpgsql function that takes schema_name as input and runs a dynamic query:
CREATE OR REPLACE FUNCTION test(IN schema_name text, OUT Referencing_Table text, OUT Referencing_Key text, OUT Referenced_Table text)
RETURNS setof record LANGUAGE plpgsql AS
$$
DECLARE
_row record ;
BEGIN
FOR _row IN
( SELECT conrelid::regclass AS ReferencingTable
, a.ReferencingKey
, b.ReferencedTable
, b.ReferencedKey
FROM pg_constraint AS pg
CROSS JOIN LATERAL
( SELECT '(' || string_agg(conrelid::regclass || '.' || fkey, ',' ORDER BY fkey.ORDINALITY) || ')' AS ReferencingKey
FROM pg_get_constraintdef(pg.oid) AS fk
CROSS JOIN LATERAL string_to_table(translate(regexp_substr(fk, '\([^\)]*\)', 1, 1), '() ', ''), ',') WITH ORDINALITY AS fkey
) AS a
CROSS JOIN LATERAL
( SELECT '(' || string_agg(ReferencedTable || '.' || rkey, ',' ORDER BY rkey.ORDINALITY) || ')' AS ReferencedKey
, ReferencedTable
FROM pg_get_constraintdef(pg.oid) AS fk
CROSS JOIN LATERAL split_part((regexp_match(fk, 'REFERENCES\s\w+'))[1], ' ', 2) AS ReferencedTable
CROSS JOIN LATERAL string_to_table(translate(regexp_substr(fk, '\([^\)]*\)', 1, 2), '() ', ''), ',') WITH ORDINALITY AS rkey
GROUP BY ReferencedTable
) AS b
WHERE contype = 'f'
AND connamespace = schema_name ::regnamespace )
LOOP
RETURN QUERY EXECUTE FORMAT (
'SELECT %L :: text AS Referencing_Table, %s :: text AS Referencing_Key, %L :: text AS Referenced_Table
FROM %I LEFT JOIN %I ON %s = %s
WHERE %s IS NULL'
, _row.ReferencingTable
, _row.ReferencingKey
, _row.ReferencedTable
, _row.ReferencingTable
, _row.ReferencedTable
, _row.ReferencedKey
, _row.ReferencingKey
, _row.ReferencedKey
) ;
END LOOP ;
END ;
$$ ;
SELECT * FROM test('public') should return the list of foreign keys of any Referencing_Table in schema schema_name that have no corresponding row in Referenced_Table,
but I was not able to test the result in dbfiddle because breaking a foreign key requires superuser privileges.
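For anyone who wants to reproduce a broken foreign key locally on the sample above, one rough way (requires superuser, since it disables the constraint's internal triggers, which is why it cannot be done on dbfiddle) is:
ALTER TABLE test1 DISABLE TRIGGER ALL;   -- superuser only: also disables the FK's internal triggers
DELETE FROM test1 WHERE id1 = 3;         -- leaves the row (3,3,'C') orphaned in test2
ALTER TABLE test1 ENABLE TRIGGER ALL;
SELECT * FROM test('public');            -- should now report the broken reference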

Compare two tables and find the missing column using left join

I wanted to compare the two tables employees and employees_a and find the missing columns in the table employees_a.
select a.Column_name
From User_tab_columns a
LEFT JOIN User_tab_columns b
ON upper(a.table_name) = upper(b.table_name)||'_A'
AND a.column_name = b.column_name
Where upper(a.Table_name) = 'EMPLOYEES'
AND upper(b.table_name) = 'EMPLOYEES_A'
AND b.column_name is NULL
;
But this doesn't seem to be working; no rows are returned.
My employees table has the below columns
emp_name
emp_id
base_location
department
current_location
salary
manager
employees_a table has below columns
emp_name
emp_id
base_location
department
current_location
I want to find the remaining two columns and add them to the employees_a table.
I have more than 50 tables like this to compare, find the missing columns, and add those columns into their respective "_a" tables.
Missing columns? Why not use the MINUS set operator? It seems way simpler, e.g.
select column_name from user_tab_columns where table_name = 'EMP_1'
minus
select column_name from user_tab_columns where table_name = 'EMP_2'
Firstly, check whether user_tab_columns contains the columns of your tables (in my case user_tab_columns is empty and I have to use all_tab_columns):
select a.Column_name
From User_tab_columns a
Where upper(a.Table_name) = 'EMPLOYEES'
Secondly, remove the line AND upper(b.table_name) = 'EMPLOYEES_A', because upper(b.table_name) is NULL when a column is not found. You already have b.table_name in the JOIN condition.
select a.Column_name
From User_tab_columns a
LEFT JOIN User_tab_columns b
ON upper(a.table_name) = upper(b.table_name)||'_A'
AND a.column_name = b.column_name
Where upper(a.Table_name) = 'EMPLOYEES'
AND b.column_name is NULL
You do not need any joins and can use:
select 'ALTER TABLE EMPLOYEES_A ADD "'
|| Column_name || '" '
|| CASE MAX(data_type)
WHEN 'NUMBER'
THEN 'NUMBER(' || MAX(data_precision) || ',' || MAX(data_scale) || ')'
WHEN 'VARCHAR2'
THEN 'VARCHAR2(' || MAX(data_length) || ')'
END
AS sql
From User_tab_columns
Where Table_name IN ('EMPLOYEES', 'EMPLOYEES_A')
GROUP BY COLUMN_NAME
HAVING COUNT(CASE table_name WHEN 'EMPLOYEES' THEN 1 END) = 1
AND COUNT(CASE table_name WHEN 'EMPLOYEES_A' THEN 1 END) = 0;
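For the employees example above, this would emit something like the following (the data types are assumptions, since the question doesn't list them):
ALTER TABLE EMPLOYEES_A ADD "SALARY" NUMBER(10,2)
ALTER TABLE EMPLOYEES_A ADD "MANAGER" VARCHAR2(100)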
Or, for multiple tables:
select 'ALTER TABLE ' || MAX(table_name) || '_A ADD "'
|| Column_name || '" '
|| CASE MAX(data_type)
WHEN 'NUMBER'
THEN 'NUMBER(' || MAX(data_precision) || ',' || MAX(data_scale) || ')'
WHEN 'VARCHAR2'
THEN 'VARCHAR2(' || MAX(data_length) || ')'
END
AS sql
From User_tab_columns
Where Table_name IN ('EMPLOYEES', 'EMPLOYEES_A', 'SOMETHING', 'SOMETHING_A')
GROUP BY
CASE
WHEN table_name LIKE '%_A'
THEN SUBSTR(table_name, 1, LENGTH(table_name) - 2)
ELSE table_name
END,
COLUMN_NAME
HAVING COUNT(CASE WHEN table_name NOT LIKE '%_A' THEN 1 END) = 1
AND COUNT(CASE WHEN table_name LIKE '%_A' THEN 1 END) = 0;

How to perform a database-wide full text search in PostgreSQL?

I have a PostgreSQL database with about 500 tables. Each table has a unique ID column named id and a user ID column named user_id. I would like to perform a full-text search of all varchar columns across all of these tables for a particular user. I do this today with ElasticSearch but I'd like to simplify my architecture. I understand that I can add full text search columns to all of the tables with things like stored generated columns and then add indices for fast full text search:
ALTER TABLE pgweb
ADD COLUMN textsearchable_index_col tsvector
GENERATED ALWAYS AS (to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))) STORED;
CREATE INDEX textsearch_idx ON pgweb USING GIN (textsearchable_index_col);
However, I'm not familiar with how to do cross-table searches efficiently. Maybe a view across all textsearchable_index_col columns? I'd like the result to be something like the table name and id of the matching row. For example:
table_name | id
-------------+-------
table1 | 492
table42 | 20
If it matters, I'm using Ruby on Rails as the client with ActiveRecord. I'm using a managed PostgreSQL 13 database at Digital Ocean so I won't be able to install custom psql plugins.
Maybe it is not the answer you are looking for, because I am not sure there is a better approach, but first I would try to automate the process.
I will build two dynamic queries: the first one to create the textsearchable_index_col columns (in each table with at least one varchar column) and the other to create an index on those columns (one index per table).
You could add a textsearchable_index_col column for each "character varying" column instead of only one concatenating all "character varying" columns, but in this case I will create one textsearchable_index_col column per table, as you propose.
I assume table schema "public", but you can use the real one.
-- Create columns textsearchable_index_col:
SELECT 'ALTER TABLE ' || table_schema || '.' || table_name || E' ADD COLUMN textsearchable_index_col tsvector GENERATED ALWAYS AS (to_tsvector(\'english\', coalesce(' ||
string_agg(column_name, E', \'\') || \' \' || coalesce(') || E', \'\'))) STORED;'
FROM information_schema.columns
WHERE table_schema = 'public' AND data_type IN ('character varying')
GROUP BY table_schema, table_name;
-- Create indexes on textsearchable_index_col columns:
SELECT 'CREATE INDEX ' || table_name || '_textsearch_idx ON ' || table_schema || '.' || table_name || ' USING GIN (textsearchable_index_col);'
FROM information_schema.columns
WHERE table_schema = 'public' AND data_type IN ('character varying')
GROUP BY table_schema, table_name;
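For example, for a hypothetical table public.articles with varchar columns title and body, the two queries above would emit roughly:
ALTER TABLE public.articles ADD COLUMN textsearchable_index_col tsvector GENERATED ALWAYS AS (to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))) STORED;
CREATE INDEX articles_textsearch_idx ON public.articles USING GIN (textsearchable_index_col);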
Then I will use a dynamic query to build the search query (using UNION ALL) across all those textsearchable_index_col columns.
You need to replace the question marks with your parameters (user_id and the searched text) and remove the trailing UNION ALL:
SELECT E'SELECT \'' || table_name || E'\' AS table_name, id FROM ' || table_schema || '.' || table_name ||
       E' WHERE user_id = ? AND textsearchable_index_col @@ to_tsquery(?) UNION ALL'
FROM information_schema.columns
WHERE table_schema = 'public' AND data_type IN ('character varying')
GROUP BY table_schema, table_name;
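Assembled, the generated search query for two hypothetical tables would look roughly like this (with the ? placeholders still to be bound and the trailing UNION ALL removed):
SELECT 'table1' AS table_name, id FROM public.table1 WHERE user_id = ? AND textsearchable_index_col @@ to_tsquery(?)
UNION ALL
SELECT 'table42' AS table_name, id FROM public.table42 WHERE user_id = ? AND textsearchable_index_col @@ to_tsquery(?);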

How do I find all the NUMERIC columns in a table and do a SUM() on them?

I have a few tables in Netezza, DB2, and PostgreSQL databases that I need to reconcile, and the best way we have come up with is to do a SUM() across all the NUMERIC table columns in all 3 databases.
Does anyone have a quick and simple way to find all the columns that are NUMERIC, INTEGER, or BIGINT and then run a SUM() on all of them?
For comparing the results I can do it manually, but it would be even better if someone has a way to capture these results in a common table and automatically check the differences in the SUMs.
For DB2 you can use this metadata query, which will help you find out the data type of each column:
SELECT
COLUMN_NAME || ' ' || REPLACE(REPLACE(DATA_TYPE,'DECIMAL','NUMERIC'),'CHARACTER','VARCHAR') ||
CASE
WHEN DATA_TYPE = 'TIMESTAMP' THEN ''
ELSE
' (' ||
CASE
WHEN CHARACTER_MAXIMUM_LENGTH IS NOT NULL THEN CAST(CHARACTER_MAXIMUM_LENGTH AS VARCHAR(30))
WHEN NUMERIC_PRECISION IS NOT NULL THEN CAST(NUMERIC_PRECISION AS VARCHAR(30)) ||
CASE
WHEN NUMERIC_SCALE = 0 THEN ''
ELSE ',' || CAST(NUMERIC_SCALE AS VARCHAR(3))
END
ELSE ''
END || ')'
END || ',' "SQLCOL",
COLUMN_NAME,
DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, NUMERIC_PRECISION, NUMERIC_SCALE, ORDINAL_POSITION
FROM SYSIBM.COLUMNS
WHERE TABLE_NAME = 'insert your table name'
AND TABLE_SCHEMA = 'insert your table schema'
ORDER BY ORDINAL_POSITION
For Netezza, I got the following query:
SELECT 0 AS ATTNUM, 'SELECT' AS SQL
UNION
SELECT ATTNUM, 'SUM(' || ATTNAME || ') AS S_' || ATTNAME || ',' AS COLMN
FROM _V_RELATION_COLUMN RC
WHERE NAME = '<table-name>'
AND FORMAT_TYPE= 'NUMERIC'
UNION
SELECT 10000 AS ATTNUM, ' 0 AS FLAG FROM ' || '<table-name>'
ORDER BY ATTNUM
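Concatenating the rows of that result in ATTNUM order yields something like the following, assuming two numeric columns AMOUNT and QTY (the column names are just examples):
SELECT
SUM(AMOUNT) AS S_AMOUNT,
SUM(QTY) AS S_QTY,
 0 AS FLAG FROM <table-name>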
Still looking at how to do this across DB2 and PostgreSQL.
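For PostgreSQL, a comparable sketch using information_schema.columns might look like this (the schema and table names below are placeholders):
SELECT 'SELECT '
       || string_agg(format('SUM(%I) AS %I', column_name, 's_' || column_name), ', ' ORDER BY ordinal_position)
       || format(' FROM %I.%I;', table_schema, table_name)
FROM information_schema.columns
WHERE table_schema = 'public'          -- placeholder schema
  AND table_name = 'your_table'        -- placeholder table
  AND data_type IN ('numeric', 'integer', 'bigint', 'smallint')
GROUP BY table_schema, table_name;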

Problem with Postgres ALTER TABLE

I have a problem with ALTER TABLE in Postgres. I want to change the size of a varchar column. When I try to do this, it says that a view is dependent on that column. I can't drop the view because something else is dependent on it. Is there any other way than to drop everything and recreate it again?
I just found one option, which is to remove the table join from the view; as long as I don't change the returned columns, I can do that. But still, there are more views I'll need to change. Isn't there any way to say that the check should be deferred and verified at commit?
I have run into this problem and couldn't find any way around it. Unfortunately, as best I can tell, one must drop the views, alter the column type on the underlying table, and then recreate the views. This can happen entirely in a single transaction.
Constraint deferral doesn't apply to this problem. In other words, even SET CONSTRAINTS ALL DEFERRED has no impact on this limitation. To be specific, constraint deferral does not apply to the consistency check that prints ERROR: cannot alter type of a column used by a view or rule when one tries to alter the type of a column underlying a view.
I'm a little late to the party, but years after this question was posted, a brilliant solution was posted via an article referenced below (not mine -- I'm simply a thankful beneficiary of his brilliance).
I just tested this on an object that is referenced (on the first level) in 136 separate views, and each of those views is referenced in other views. The solution ran in mere seconds.
So, read this article and copy and paste the table and two functions listed:
http://mwenus.blogspot.com/2014/04/postgresql-how-to-handle-table-and-view.html
Implementation example:
alter table mdm.global_item_master_swap
alter column prod_id type varchar(128),
alter column prod_nme type varchar(512);
ERROR: cannot alter type of a column used by a view or rule
DETAIL: rule _RETURN on view toolbox_reporting."Average_setcost" depends on column "prod_id"
********** Error **********
ERROR: cannot alter type of a column used by a view or rule
And now for the PostgreSQL ninja's magic:
select util.deps_save_and_drop_dependencies('mdm', 'global_item_master_swap');
alter table mdm.global_item_master_swap
alter column prod_id type varchar(128),
alter column prod_nme type varchar(512);
select util.deps_restore_dependencies('mdm', 'global_item_master_swap');
-- EDIT 11/13/2018 --
It appears the link above might be dead. Here is the code for the two procedures:
Table that stores DDL:
CREATE TABLE util.deps_saved_ddl
(
deps_id serial NOT NULL,
deps_view_schema character varying(255),
deps_view_name character varying(255),
deps_ddl_to_run text,
CONSTRAINT deps_saved_ddl_pkey PRIMARY KEY (deps_id)
);
Save and Drop:
-- Edit 8/28/2020 --
-- This stopped working with Pg12. The fix is below to change the parameters of p_view_schema and p_view_name from varchar to name:
CREATE OR REPLACE FUNCTION util.deps_save_and_drop_dependencies(
p_view_schema name, p_view_name name)
RETURNS void
LANGUAGE plpgsql
COST 100
AS $BODY$
declare
v_curr record;
begin
for v_curr in
(
select obj_schema, obj_name, obj_type from
(
with recursive recursive_deps(obj_schema, obj_name, obj_type, depth) as
(
select p_view_schema, p_view_name, null::varchar, 0
union
select dep_schema::varchar, dep_name::varchar, dep_type::varchar, recursive_deps.depth + 1 from
(
select ref_nsp.nspname ref_schema, ref_cl.relname ref_name,
rwr_cl.relkind dep_type,
rwr_nsp.nspname dep_schema,
rwr_cl.relname dep_name
from pg_depend dep
join pg_class ref_cl on dep.refobjid = ref_cl.oid
join pg_namespace ref_nsp on ref_cl.relnamespace = ref_nsp.oid
join pg_rewrite rwr on dep.objid = rwr.oid
join pg_class rwr_cl on rwr.ev_class = rwr_cl.oid
join pg_namespace rwr_nsp on rwr_cl.relnamespace = rwr_nsp.oid
where dep.deptype = 'n'
and dep.classid = 'pg_rewrite'::regclass
) deps
join recursive_deps on deps.ref_schema = recursive_deps.obj_schema and deps.ref_name = recursive_deps.obj_name
where (deps.ref_schema != deps.dep_schema or deps.ref_name != deps.dep_name)
)
select obj_schema, obj_name, obj_type, depth
from recursive_deps
where depth > 0
) t
group by obj_schema, obj_name, obj_type
order by max(depth) desc
) loop
insert into util.deps_saved_ddl(deps_view_schema, deps_view_name, deps_ddl_to_run)
select p_view_schema, p_view_name, 'COMMENT ON ' ||
case
when c.relkind = 'v' then 'VIEW'
when c.relkind = 'm' then 'MATERIALIZED VIEW'
else ''
end
|| ' ' || n.nspname || '.' || c.relname || ' IS ''' || replace(d.description, '''', '''''') || ''';'
from pg_class c
join pg_namespace n on n.oid = c.relnamespace
join pg_description d on d.objoid = c.oid and d.objsubid = 0
where n.nspname = v_curr.obj_schema and c.relname = v_curr.obj_name and d.description is not null;
insert into util.deps_saved_ddl(deps_view_schema, deps_view_name, deps_ddl_to_run)
select p_view_schema, p_view_name, 'COMMENT ON COLUMN ' || n.nspname || '.' || c.relname || '.' || a.attname || ' IS ''' || replace(d.description, '''', '''''') || ''';'
from pg_class c
join pg_attribute a on c.oid = a.attrelid
join pg_namespace n on n.oid = c.relnamespace
join pg_description d on d.objoid = c.oid and d.objsubid = a.attnum
where n.nspname = v_curr.obj_schema and c.relname = v_curr.obj_name and d.description is not null;
insert into util.deps_saved_ddl(deps_view_schema, deps_view_name, deps_ddl_to_run)
select p_view_schema, p_view_name, 'GRANT ' || privilege_type || ' ON ' || table_schema || '.' || table_name || ' TO ' || grantee
from information_schema.role_table_grants
where table_schema = v_curr.obj_schema and table_name = v_curr.obj_name;
if v_curr.obj_type = 'v' then
insert into util.deps_saved_ddl(deps_view_schema, deps_view_name, deps_ddl_to_run)
select p_view_schema, p_view_name, 'CREATE VIEW ' || v_curr.obj_schema || '.' || v_curr.obj_name || ' AS ' || view_definition
from information_schema.views
where table_schema = v_curr.obj_schema and table_name = v_curr.obj_name;
elsif v_curr.obj_type = 'm' then
insert into util.deps_saved_ddl(deps_view_schema, deps_view_name, deps_ddl_to_run)
select p_view_schema, p_view_name, 'CREATE MATERIALIZED VIEW ' || v_curr.obj_schema || '.' || v_curr.obj_name || ' AS ' || definition
from pg_matviews
where schemaname = v_curr.obj_schema and matviewname = v_curr.obj_name;
end if;
execute 'DROP ' ||
case
when v_curr.obj_type = 'v' then 'VIEW'
when v_curr.obj_type = 'm' then 'MATERIALIZED VIEW'
end
|| ' ' || v_curr.obj_schema || '.' || v_curr.obj_name;
end loop;
end;
$BODY$;
Restore:
CREATE OR REPLACE FUNCTION util.deps_restore_dependencies(
p_view_schema character varying,
p_view_name character varying)
RETURNS void AS
$BODY$
declare
v_curr record;
begin
for v_curr in
(
select deps_ddl_to_run
from util.deps_saved_ddl
where deps_view_schema = p_view_schema and deps_view_name = p_view_name
order by deps_id desc
) loop
execute v_curr.deps_ddl_to_run;
end loop;
delete from util.deps_saved_ddl
where deps_view_schema = p_view_schema and deps_view_name = p_view_name;
end;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
If you don't need to change the type of the field, but just the size of it, this approach should work:
Starting with these tables:
CREATE TABLE foo (id integer primary key, names varchar(10));
CREATE VIEW voo AS (SELECT id, names FROM foo);
\d foo and \d voo both show the length as 10:
id | integer | not null
names | character varying(10) |
Now change the lengths to 20 in the pg_attribute table:
UPDATE pg_attribute SET atttypmod = 20+4
WHERE attrelid IN ('foo'::regclass, 'voo'::regclass)
AND attname = 'names';
(note: the 20+4 is a PostgreSQL storage detail; for varchar, atttypmod stores the declared length plus a 4-byte header, so the +4 is compulsory.)
Now \d foo shows:
id | integer | not null
names | character varying(20) |
Bonus: that was waaay faster than doing:
ALTER TABLE foo ALTER COLUMN names TYPE varchar(20);
Technically you can change the size of the table column without changing the size of the view column, but no guarantees on what side effects that will have; it's probably best to change them both at once.
source and fuller explanation: http://sniptools.com/databases/resize-a-column-in-a-postgresql-table-without-changing-data
I ran into this problem today and found a workaround that avoids dropping and recreating the VIEW. I cannot just drop my VIEW because it is a master VIEW that has many dependent VIEWs built on top of it. Short of having a rebuild script to DROP CASCADE and then recreate ALL of my VIEWs, this is a workaround.
I changed my master VIEW to use a dummy value for the offending column, altered the column in the table, and then switched my VIEW back to the column. Using a setup like this:
CREATE TABLE base_table
(
base_table_id integer,
base_table_field1 numeric(10,4)
);
CREATE OR REPLACE VIEW master_view AS
SELECT
base_table_id AS id,
(base_table_field1 * .01)::numeric AS field1
FROM base_table;
CREATE OR REPLACE VIEW dependent_view AS
SELECT
id AS dependent_id,
field1 AS dependent_field1
FROM master_view;
Trying to alter base_table_field1 type like this:
ALTER TABLE base_table ALTER COLUMN base_table_field1 TYPE numeric(10,6);
Will give you this error:
ERROR: cannot alter type of a column used by a view or rule
DETAIL: rule _RETURN on view master_view depends on column "base_table_field1"
If you change master_view to use a dummy value for the column like this:
CREATE OR REPLACE VIEW master_view AS
SELECT
base_table_id AS id,
0.9999 AS field1
FROM base_table;
Then run your alter:
ALTER TABLE base_table ALTER COLUMN base_table_field1 TYPE numeric(10,6);
And switch your view back:
CREATE OR REPLACE VIEW master_view AS
SELECT
base_table_id AS id,
(base_table_field1 * .01)::numeric AS field1
FROM base_table;
It all depends on whether your master_view has an explicit type that does not change. Since my VIEW uses (base_table_field1 * .01)::numeric AS field1 it works, but base_table_field1 AS field1 would not, because the column type changes. This approach might help in some cases like mine.
I wanted to comment on the second answer but cannot since I'm too new to stackoverflow, so here is my comment:
To those interested in the original article mentioned in that answer, the blogspot entry is not available any more but the wayback machine has it still stored: https://web.archive.org/web/20180323155900/http://mwenus.blogspot.com/2014/04/postgresql-how-to-handle-table-and-view.html
Here is the article itself in case archive.org should be turned off at some future point in time:
2014-04-22
PostgreSQL: How to handle table and view dependencies
PostgreSQL is very restrictive when it comes to modifying existing objects. Very often when you try to ALTER TABLE or REPLACE VIEW it tells you that you cannot do it, because there's another object (typically a view or materialized view) which depends on the one you want to modify. It seems that the only solution is to DROP the dependent objects, make the desired changes to the target object and then recreate the dropped objects.
It is tedious and cumbersome, because those dependent objects can have further dependencies, which also may have other dependencies and so on. I created utility functions which can help in such situations.
The usage is very simple - you just have to call:
select deps_save_and_drop_dependencies(p_schema_name, p_object_name);
You have to pass two arguments: the name of the schema and the name of the object in that schema. This object can be a table, a view or a materialized view. The function will drop all views and materialized views dependent on p_schema_name.p_object_name and save DDL which restores them in a helper table.
When you want to restore those dropped objects (for example when you are done modifying p_schema_name.p_object_name), you just need to make another simple call:
select deps_restore_dependencies(p_schema_name, p_object_name);
and the dropped objects will be recreated.
These functions take care about:
dependencies hierarchy
proper order of dropping and creating views/materialized views across hierarchy
restoring comments and grants on views/materialized views
Click here for a working sqlfiddle example or check this gist for a complete source code.
Author: Mateusz Wenus at 19:32
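For a single dependent view, the same idea can also be done inline: capture the view definition with pg_get_viewdef, drop the view, alter the column, and recreate the view from the saved definition in one DO block (the object names below come from the original snippet):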
do $$
declare
    gorev_lisans_ihlali_def text;
    exec_text text;
begin
    -- save the current definition of the dependent view
    gorev_lisans_ihlali_def := pg_get_viewdef('public.gorev_lisans_ihlali');
    drop view public.gorev_lisans_ihlali;
    exec_text := format('create view public.gorev_lisans_ihlali as %s',
                        gorev_lisans_ihlali_def);
    -- alter the column now that the view is gone
    ALTER TABLE public.ara_bakis_duyma
        ALTER COLUMN gain TYPE DOUBLE PRECISION;
    -- recreate the view from the saved definition
    execute exec_text;
end $$;