I would like to find all column names in a table where the value in any given record contains a certain string, i.e. all columns whose record values match a string such as
'%ABC%' or '%QAW%' or '%IGH%'.
If possible, give me all the tables and columns in a DB schema, so I do not have to query every table manually.
Edit (2016-06-15):
So I got a little further: I can now get all the values from each column in each row in each table. Now I need to see if that value (v_value) exists in a list of airport codes, e.g. ['LAS','LAX','BIL'].
I have all the airports in a table that I want to read into an array.
I am having trouble with creating that array and getting the data into it.
Here is what I have so far.
Look at the TODOs:
CREATE OR REPLACE PROCEDURE "CMSDB"."TEST1"
()
LANGUAGE SQL
SPECIFIC SQL3
P1: BEGIN
DECLARE v_tabschema VARCHAR(255);
DECLARE v_tabname VARCHAR(255);
DECLARE v_colname VARCHAR(255);
DECLARE v_airport VARCHAR(255);
DECLARE v_stmt VARCHAR(3000);
DECLARE V_SQL VARCHAR(3000);
DECLARE v_value VARCHAR(255);
DECLARE SQLSTATE CHAR(5) DEFAULT '00000';
DECLARE v_stmt2 STATEMENT;
DECLARE v_value_cursor CURSOR FOR v_stmt2;
DECLARE v_airport_cursor CURSOR FOR select IDX from CMSDB.AIRPORTS;
DECLARE syscat_cursor CURSOR FOR select trim(tabschema), tabname, colname from syscat.columns where tabname = 'ACCTGROUP' and tabschema = 'CMSDB' and TYPENAME = 'VARCHAR' and colname not in ('CHGDATE','CHGPAGE','CHGPROG','CHGTYPE','CHGUSER','CREATEDATETIME','CREATEDBYID','REC_ID');
open v_airport_cursor;
FETCH FROM v_airport_cursor INTO v_airport;
WHILE (SQLSTATE = '00000') DO
call DBMS_OUTPUT.PUT_LINE(v_airport);
-- TODO Add each value to a list/array that can be used to check whether v_value is in the list.
FETCH FROM v_airport_cursor INTO v_airport;
END WHILE;
close v_airport_cursor;
OPEN syscat_cursor;
FETCH FROM syscat_cursor INTO v_tabschema, v_tabname, v_colname;
WHILE (SQLSTATE = '00000') DO
--call DBMS_OUTPUT.PUT_LINE(v_tabschema || ' ' || v_tabname || ' ' || v_colname);
SET v_stmt = 'select ' || v_colname || ' from ' || v_tabschema || '.' || v_tabname;
--call DBMS_OUTPUT.PUT_LINE(v_stmt);
PREPARE v_stmt2 FROM v_stmt;
OPEN v_value_cursor;
FETCH FROM v_value_cursor INTO v_value;
WHILE (SQLSTATE = '00000') DO
-- TODO
--IF ( airportList contains v_value) THEN
--call DBMS_OUTPUT.PUT_LINE(v_value);
--END IF;
FETCH FROM v_value_cursor INTO v_value;
END WHILE;
CLOSE v_value_cursor;
FETCH FROM syscat_cursor INTO v_tabschema, v_tabname, v_colname;
END WHILE;
close syscat_cursor;
END P1
You can use sysibm.syscolumns:
select name
from sysibm.syscolumns
where tbname = 'XX' and
      (name like '%ABC%' or name like '%QAW%' or name like '%IGH%');
You'll need to create a cursor over SYSTABLES that returns all the tables in the system. Then have another cursor that returns all the column names in a given table. Once you have those, you can build a dynamic statement that checks all the columns in a given table for the value you are looking for. Fetch the next table name and do it all over again.
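Here is a minimal sketch of that nested-cursor approach, folded together with the airport-code check from the question's edit. It assumes, as in the question, that the codes live in CMSDB.AIRPORTS.IDX; rather than loading them into an array, it lets the database do the membership test with a dynamic IN-subquery:
--#SET TERMINATOR #
CREATE OR REPLACE PROCEDURE CMSDB.FIND_AIRPORT_COLUMNS ()
LANGUAGE SQL
BEGIN
  DECLARE v_tabschema VARCHAR(128);
  DECLARE v_tabname VARCHAR(128);
  DECLARE v_colname VARCHAR(128);
  DECLARE v_stmt VARCHAR(3000);
  DECLARE v_count INTEGER;
  DECLARE SQLSTATE CHAR(5) DEFAULT '00000';
  DECLARE v_dyn STATEMENT;
  DECLARE v_cnt_cursor CURSOR FOR v_dyn;
  DECLARE syscat_cursor CURSOR FOR
    select trim(tabschema), tabname, colname
    from syscat.columns
    where tabschema = 'CMSDB' and typename = 'VARCHAR';
  OPEN syscat_cursor;
  FETCH FROM syscat_cursor INTO v_tabschema, v_tabname, v_colname;
  WHILE (SQLSTATE = '00000') DO
    -- Count the rows whose value in this column matches any airport code.
    SET v_stmt = 'select count(*) from "' || v_tabschema || '"."' || v_tabname ||
                 '" where "' || v_colname || '" in (select IDX from CMSDB.AIRPORTS)';
    PREPARE v_dyn FROM v_stmt;
    OPEN v_cnt_cursor;
    FETCH FROM v_cnt_cursor INTO v_count;
    CLOSE v_cnt_cursor;
    IF v_count > 0 THEN
      call DBMS_OUTPUT.PUT_LINE(v_tabschema || '.' || v_tabname || '.' || v_colname);
    END IF;
    FETCH FROM syscat_cursor INTO v_tabschema, v_tabname, v_colname;
  END WHILE;
  CLOSE syscat_cursor;
END#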
Obviously, if you can narrow down your search to a particular schema, or even limit it to tables/columns with a particular naming pattern, you'd be better off.
Another technique, depending on your platform and version of DB2: you might be able to do some sort of bulk export to a set of text files, then use a tool that searches the contents of those text files.
Writing an audit trigger. Inside the PostgreSQL function I'm trying to do:
'INSERT INTO ' || table_name || ' (' || columns || ') VALUES ' || NEW || ';'
When NEW is turned into a string, varchar values will not have quotes around them, which causes the INSERT to fail. It would be easier to turn all the column values of NEW into varchar values; Postgres would then automatically cast them to the right types when the INSERT is executed.
Can I loop over the NEW record without turning it into JSON?
Looking around, I couldn't find a good resource explaining how to work with the Postgres record type.
If your target table's structure is identical to the new structure, you don't really need to iterate over the columns.
Something like this will work:
create function audit_trigger()
returns trigger
as
$$
declare
l_columns text;
l_table_name text;
begin
-- this builds the name of the target table dynamically
l_table_name := tg_table_name||'_audit';
execute format('insert into %I select ($1).*', l_table_name) using new;
return new;
end;
$$
language plpgsql;
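For instance, to wire this up (table names here are made up for illustration; the audit table must have the same structure as the audited one):
create table users (id int, username text);
create table users_audit (id int, username text);

create trigger users_audit_trg
  after insert or update on users
  for each row
  execute procedure audit_trigger();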
Even if you don't want to store the changed data in a JSONB column, you can still use JSON functions to iterate over the columns of the new record if you think you need that.
The following will store the list of column names of the new record in the variable l_columns:
select string_agg(quote_ident(col), ',')
into l_columns
from jsonb_each_text(to_jsonb(new)) as t(col, val);
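And as a minimal sketch of iterating over the columns themselves (the function name is made up; it merely logs each column and value):
create function log_new_columns()
returns trigger
as
$$
declare
  l_col text;
  l_val text;
begin
  for l_col, l_val in
    select col, val from jsonb_each_text(to_jsonb(new)) as t(col, val)
  loop
    raise notice 'column % = %', l_col, l_val;
  end loop;
  return new;
end;
$$
language plpgsql;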
I am very new to DB2, though I have experience in Oracle, and I am not able to resolve this issue. I have a requirement to find child records whose key is missing from the parent table. The parent table, child table and join key are all passed as input parameters.
I tried this in a procedure and was able to achieve it, but the admin wants it in a function so that they can just use it in a SELECT statement and get the result in table format. Since the parent table, child table and join key come in as input parameters, I am not able to run them as dynamic SQL.
create or replace function missing_child_rec(PARENT_TBL VARCHAR(255),JOIN_KEY VARCHAR(255),CHILD_TBL VARCHAR(255))
RETURNS TABLE(Key VARCHAR(255))
LANGUAGE SQL
BEGIN
DECLARE V_SQL VARCHAR(500);
DECLARE C_SQL CURSOR WITH RETURN FOR S_SQL;
SET V_PARENT_TAB = PARENT_TBL;
SET V_KEY = JOIN_KEY;
SET V_CHILD_TAB = CHILD_TBL;
SET V_SQL = 'SELECT DISTINCT '|| JOIN_KEY || ' FROM ' || V_CHILD_TAB || ' A WHERE NOT EXISTS
(SELECT ' ||V_KEY || ' FROM ' || V_PARENT_TAB || ' B WHERE A.'||JOIN_KEY || '= B.'||JOIN_KEY ||' )' ;
PREPARE S_SQL FROM V_SQL;
OPEN C_SQL;
CLOSE C_SQL;
RETURN
END
When I try to compile it, it says PREPARE is invalid. I have even tried EXECUTE IMMEDIATE, but that also gave an error. Can you please help me with how to use dynamic SQL in a UDF, or with alternative logic for this problem?
There is more than one way to solve this; here is one.
If you already have a working stored-procedure that returns the correct result-set then you can call that stored-procedure from a pipelined table function. The idea is that a pipelined table function can consume the result-set and pipe it to the caller.
This will work on Db2-LUW v10.1 or higher, as long as the database is not partitioned over multiple nodes.
It may work on Db2-for-i v7.1 or higher.
It will not work with Db2 for z/OS at current versions.
Suppose your stored procedure is sp_missing_child_rec and it takes the same input parameters as the function you show in your question, and suppose the data type of the join column is varchar(100).
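For completeness, such a procedure might look something like this (a sketch only, reusing the dynamic SQL from the question; the WITH RETURN cursor is left open so the result set reaches the caller):
--#SET TERMINATOR #
CREATE OR REPLACE PROCEDURE sp_missing_child_rec (
  IN PARENT_TBL VARCHAR(255),
  IN JOIN_KEY VARCHAR(255),
  IN CHILD_TBL VARCHAR(255))
LANGUAGE SQL
DYNAMIC RESULT SETS 1
BEGIN
  DECLARE V_SQL VARCHAR(500);
  DECLARE S_SQL STATEMENT;
  DECLARE C_SQL CURSOR WITH RETURN FOR S_SQL;
  SET V_SQL = 'SELECT DISTINCT ' || JOIN_KEY || ' FROM ' || CHILD_TBL ||
              ' A WHERE NOT EXISTS (SELECT 1 FROM ' || PARENT_TBL ||
              ' B WHERE A.' || JOIN_KEY || ' = B.' || JOIN_KEY || ')';
  PREPARE S_SQL FROM V_SQL;
  OPEN C_SQL; -- intentionally not closed: the open cursor is what gets returned
END#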
The pipelined wrapper table function would look something like this:
--#SET TERMINATOR #
create or replace function missing_child_rec(PARENT_TBL VARCHAR(255),JOIN_KEY VARCHAR(255),CHILD_TBL VARCHAR(255))
returns table ( join_column_value varchar(100))
begin
declare v_rs result_set_locator varying;
declare v_row varchar(100); -- to match the join_column_datatype, adjust as necessary
declare sqlstate char(5) default '00000';
CALL sp_missing_child_rec( parent_tbl, join_key, child_tbl);
associate result set locator (v_rs) with procedure sp_missing_child_rec ;
allocate v_rscur cursor for result set v_rs;
fetch from v_rscur into v_row;
while ( sqlstate = '00000') do
pipe(v_row);
fetch from v_rscur into v_row;
end while;
return;
end#
select * from table(missing_child_rec( 'parent_table' , 'join_column', 'child_table'))
#
I have a fields table to store column information for other tables:
CREATE TABLE public.fields (
schema_name varchar(100),
table_name varchar(100),
column_text varchar(100),
column_name varchar(100),
column_type varchar(100) default 'varchar(100)',
column_visible boolean
);
And I'd like to create a function to fetch data for a specific table.
I just tried something like this:
create or replace function public.get_table(schema_name text,
table_name text,
active boolean default true)
returns setof record as $$
declare
entity_name text default schema_name || '.' || table_name;
r record;
begin
for r in EXECUTE 'select * from ' || entity_name loop
return next r;
end loop;
return;
end
$$
language plpgsql;
With this function I have to specify columns when I call it!
select * from public.get_table('public', 'users') as dept(id int, uname text);
I want to pass schema_name and table_name as parameters to function and get record list, according to column_visible field in public.fields table.
Solution for the simple case
As explained in the referenced answers below, you can use registered (row) types, and thus implicitly declare the return type of a polymorphic function:
CREATE OR REPLACE FUNCTION public.get_table(_tbl_type anyelement)
RETURNS SETOF anyelement AS
$func$
BEGIN
RETURN QUERY EXECUTE format('TABLE %s', pg_typeof(_tbl_type));
END
$func$ LANGUAGE plpgsql;
Call:
SELECT * FROM public.get_table(NULL::public.users); -- note the syntax!
Returns the complete table (with all user columns).
Wait! How?
Detailed explanation in this related answer, chapter
"Various complete table types":
Refactor a PL/pgSQL function to return the output of various SELECT queries
TABLE foo is just short for SELECT * FROM foo:
Is there a shortcut for SELECT * FROM?
2 steps for completely dynamic return type
But what you are trying to do is strictly impossible in a single SQL command.
I want to pass schema_name and table_name as parameters to function and get record list, according to column_visible field in
public.fields table.
There is no direct way to return an arbitrary selection of columns (return type not known at call time) from a function - or any SQL command. SQL demands to know number, names and types of resulting columns at call time. More in the 2nd chapter of this related answer:
How do I generate a pivoted CROSS JOIN where the resulting table definition is unknown?
There are various workarounds. You could wrap the result in one of the standard document types (json, jsonb, hstore, xml).
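As a sketch of the jsonb variant (the function name is made up): the declared return type stays fixed, because every row comes back as a single jsonb value; filtering down to the visible columns could then be done at the jsonb level:
CREATE OR REPLACE FUNCTION public.get_table_jsonb(_schema_name text, _table_name text)
  RETURNS SETOF jsonb AS
$func$
BEGIN
   RETURN QUERY EXECUTE format('SELECT to_jsonb(t) FROM %I.%I t'
                             , _schema_name, _table_name);
END
$func$ LANGUAGE plpgsql;

-- call:
SELECT * FROM public.get_table_jsonb('public', 'users');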
Or you generate the query with one function call and execute the result with the next:
CREATE OR REPLACE FUNCTION public.generate_get_table(_schema_name text, _table_name text)
RETURNS text AS
$func$
SELECT format('SELECT %s FROM %I.%I'
, string_agg(quote_ident(column_name), ', ')
, schema_name
, table_name)
FROM fields
WHERE column_visible
AND schema_name = _schema_name
AND table_name = _table_name
GROUP BY schema_name, table_name
ORDER BY schema_name, table_name;
$func$ LANGUAGE sql;
Call:
SELECT public.generate_get_table('public', 'users');
This creates a query of the form:
SELECT usr_id, usr FROM public.users;
Execute it in the 2nd step. (You might want to add column numbers and order columns.)
Or append \gexec in psql to execute the return value immediately. See:
How to force evaluation of subquery before joining / pushing down to foreign server
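For example, in psql the generated query can be run in the same step:
SELECT public.generate_get_table('public', 'users')\gexec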
Be sure to defend against SQL injection:
INSERT with dynamic table name in trigger function
Define table and column names as arguments in a plpgsql function?
Asides
varchar(100) does not make much sense for identifiers, which are limited to 63 characters in standard Postgres:
Maximum characters in labels (table names, columns etc)
If you understand how the object identifier type regclass works, you might replace schema and table name with a single regclass column.
I think you just need another query to get the list of columns you want.
Maybe something like (this is untested):
create or replace function public.get_table(_schema_name text, _table_name text, active boolean default true) returns setof record as $$
declare
entity_name text default _schema_name || '.' || _table_name;
r record;
columns varchar;
begin
-- Get the list of columns
SELECT string_agg(column_name, ', ')
INTO columns
FROM public.fields
WHERE fields.schema_name = _schema_name
AND fields.table_name = _table_name
AND fields.column_visible = TRUE;
-- Return rows from the specified table
RETURN QUERY EXECUTE 'select ' || columns || ' from ' || entity_name;
RETURN;
end
$$
language plpgsql;
Keep in mind that column/table references may need to be surrounded by double quotes if they have certain characters in them.
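A sketch of the quoting-safe variant of the two dynamic pieces in the function above, using quote_ident() for the column list and format() with %I for the table reference:
  SELECT string_agg(quote_ident(column_name), ', ')
  INTO columns
  FROM public.fields
  WHERE fields.schema_name = _schema_name
  AND fields.table_name = _table_name
  AND fields.column_visible = TRUE;
  RETURN QUERY EXECUTE format('SELECT %s FROM %I.%I'
                            , columns, _schema_name, _table_name);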
I have a problem creating a PostgreSQL (9.3) trigger on update of a table.
I want to set new values in the loop, like this:
EXECUTE 'NEW.'|| fieldName || ':=''some prepend data'' || NEW.' || fieldName || ';';
where fieldName is set dynamically. But this string raises the error
ERROR: syntax error at or near "NEW"
How do I go about achieving that?
You can implement that rather conveniently with the hstore operator #=:
Make sure the additional module is installed properly (once per database), in a schema that's included in your search_path:
How to use % operator from the extension pg_trgm?
Best way to install hstore on multiple schemas in a Postgres database?
Trigger function:
CREATE OR REPLACE FUNCTION tbl_insup_bef()
RETURNS TRIGGER AS
$func$
DECLARE
_prefix CONSTANT text := 'some prepend data'; -- your prefix here
_prelen CONSTANT int := 17; -- length of above string (optional optimization)
_col text := quote_ident(TG_ARGV[0]);
_val text;
BEGIN
EXECUTE 'SELECT $1.' || _col
USING NEW
INTO _val;
IF left(_val, _prelen) = _prefix THEN
-- do nothing: prefix already there!
ELSE
NEW := NEW #= hstore(_col, _prefix || _val);
END IF;
RETURN NEW;
END
$func$ LANGUAGE plpgsql;
Trigger (reuse the same func for multiple tables):
CREATE TRIGGER insup_bef
BEFORE INSERT OR UPDATE ON tbl
FOR EACH ROW
EXECUTE PROCEDURE tbl_insup_bef('fieldName'); -- unquoted, case-sensitive column name
Closely related with more explanation and advice:
Assignment of a column with dynamic column name
How to access NEW or OLD field given only the field's name?
Get values from varying columns in a generic trigger
Your problem is that EXECUTE can only be used to execute SQL statements and not PL/pgSQL statements like the assignment in your question.
You can maybe work around that like this:
Let's assume that table testtab is defined like this:
CREATE TABLE testtab (
id integer primary key,
val text
);
Then a trigger function like the following will work:
BEGIN
EXECUTE 'SELECT $1.id, ''prefix '' || $1.val' INTO NEW USING NEW;
RETURN NEW;
END;
I used hard-coded id and val in my example, but that is not necessary.
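A sketch of the dynamic variant (the function name is made up, and it assumes the prefixed column, passed as a trigger argument, is of a text type): the select list is built from the catalog in column order, so nothing is hard-coded:
CREATE OR REPLACE FUNCTION testtab_prefix_trig()
RETURNS trigger AS
$$
DECLARE
  l_list text;
BEGIN
  -- one select-list entry per live column, prefixing only the target column
  SELECT string_agg(
           CASE WHEN attname = TG_ARGV[0]
                THEN format('''prefix '' || $1.%I', attname)
                ELSE format('$1.%I', attname)
           END, ', ' ORDER BY attnum)
  INTO l_list
  FROM pg_attribute
  WHERE attrelid = TG_RELID
    AND attnum > 0
    AND NOT attisdropped;
  EXECUTE 'SELECT ' || l_list INTO NEW USING NEW;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;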
I found a working solution: the trigger should execute after insert/update, not before. Then the desired row takes the form
EXECUTE 'UPDATE ' || TG_TABLE_SCHEMA || '.' || TG_TABLE_NAME ||
' SET ' || fieldName || '= ''prefix:'' ||''' || fieldValue || ''' WHERE id = ' || NEW.id;
fieldName and fieldValue I get in the next way:
FOR fieldName, fieldValue IN select key, value from each(hstore(NEW)) LOOP
  IF .... THEN
    -- build and EXECUTE the UPDATE shown above
  END IF;
END LOOP;
I have a table with many columns, one of which is a lastUpdate column.
I am writing a trigger in PL/pgSQL for Postgres 9.1 that should set a value for lastUpdate upon an UPDATE to the record.
The challenge is to exclude some pre-defined columns from that trigger; meaning, updating those specific columns shouldn't affect the lastUpdate value of the record.
Any advice?
In PostgreSQL you can access the previous values using the OLD row variable and the new ones using NEW. There is even a specific example in the docs for what you need:
CREATE TRIGGER check_update
BEFORE UPDATE ON accounts
FOR EACH ROW
WHEN (OLD.balance IS DISTINCT FROM NEW.balance)
EXECUTE PROCEDURE check_account_update();
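Applied to the lastUpdate case, that might look like this (table and column names are made up; list in the WHEN clause only the columns that should bump the timestamp):
CREATE OR REPLACE FUNCTION set_lastupdate()
RETURNS trigger AS
$$
BEGIN
  NEW.lastupdate := now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_lastupdate_trg
BEFORE UPDATE ON mytable
FOR EACH ROW
WHEN (OLD.col1 IS DISTINCT FROM NEW.col1
   OR OLD.col2 IS DISTINCT FROM NEW.col2)
EXECUTE PROCEDURE set_lastupdate();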
I know this is an old question, but I found myself with the same need and I managed to do it with a trigger using the information_schema.columns table.
I attach here a possible solution, where the only parameters to edit would be TIMEUPDATE_FIELD and EXCLUDE_FIELDS in the trigger function check_update_testtrig():
CREATE TABLE testtrig
(
id bigserial NOT NULL,
col1 integer,
col2 integer,
col3 integer,
lastupdate timestamp not null default now(),
lastread timestamp,
CONSTRAINT testtrig_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
CREATE OR REPLACE FUNCTION check_update_testtrig()
RETURNS trigger AS
$BODY$
DECLARE
TIMEUPDATE_FIELD text := 'lastupdate';
EXCLUDE_FIELDS text[] := ARRAY['lastread'];
PK_FIELD text := 'id';
ROW_RES RECORD;
IS_DISTINCT boolean := false;
COND_RES integer := 0;
BEGIN
FOR ROW_RES IN
SELECT column_name
FROM information_schema.columns
WHERE table_schema = TG_TABLE_SCHEMA
AND table_name = TG_TABLE_NAME
AND column_name != TIMEUPDATE_FIELD
AND NOT(column_name = ANY (EXCLUDE_FIELDS))
LOOP
EXECUTE 'SELECT CASE WHEN $1.' || ROW_RES.column_name || ' IS DISTINCT FROM $2.' || ROW_RES.column_name || ' THEN 1 ELSE 0 END'
INTO STRICT COND_RES
USING NEW, OLD;
IS_DISTINCT := IS_DISTINCT OR (COND_RES = 1);
END LOOP;
IF (IS_DISTINCT)
THEN
EXECUTE 'UPDATE ' || TG_TABLE_SCHEMA || '.' || TG_TABLE_NAME || ' SET ' || TIMEUPDATE_FIELD || ' = now() WHERE ' || PK_FIELD || ' = $1.' || PK_FIELD
USING NEW;
END IF;
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
CREATE TRIGGER trigger_update_testtrig
AFTER UPDATE
ON testtrig
FOR EACH ROW
EXECUTE PROCEDURE check_update_testtrig();
Looking at your question and your comment on the answer of Jakub Kania, I would say that part of the solution is that you will create an extra table.
The issue is that constraints on columns should only apply to the functioning of the column itself; they should not affect the values of other columns in the table. Specifying which columns should influence the status column 'lastUpdate' is, imo, business logic.
Which columns should have an impact on the value of 'lastUpdate' changes along with the business, not with the table design. Therefore the solution should, imo, consist of a table in combination with a trigger.
I would add a table with a column holding a list of column names (the column can be of type array) that can be used in a trigger on the table, like the one described by Jakub Kania. If the default behaviour should be that a new column changes the value of 'lastUpdate', then I would have that table list only the names of columns that do not change the value of 'lastUpdate'. If the default behaviour is to not change 'lastUpdate', then I would add a column's name to the list only if it should change the value of 'lastUpdate'.
If the table column is within the list of columns, then it should update the field lastUpdate; a sketch follows below.
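A sketch of that idea (all names are made up): a per-table list of columns that should not touch lastUpdate, consulted by the trigger at run time:
CREATE TABLE lastupdate_excluded_columns (
  table_schema text NOT NULL,
  table_name   text NOT NULL,
  column_names text[] NOT NULL,
  PRIMARY KEY (table_schema, table_name)
);

-- Inside a trigger function like check_update_testtrig() above, the hard-coded
-- array would then be replaced by a lookup:
SELECT column_names
INTO EXCLUDE_FIELDS
FROM lastupdate_excluded_columns
WHERE table_schema = TG_TABLE_SCHEMA
  AND table_name = TG_TABLE_NAME;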