automatically selecting databases and columns and searching data - metadata

Is it possible to write a query that automatically selects all database names and column names from the dbc.Columns table in Teradata, and searches them for a particular set of values?
Set of values:
WHERE abc in (1,2,3)
Selecting dbc.columns:
SELECT DatabaseName, TableName FROM dbc.COLUMNS
WHERE ColumnName LIKE '%abc%'
How can I combine these and write a query that returns only those combinations of DatabaseName and TableName where the column contains a specific set of values?
UPDATE:
This query finds all database-table combinations:
SELECT TRIM(BOTH FROM a.DatabaseName) || '.' || TRIM( BOTH FROM a.TableName)
FROM dbc.COLUMNS AS a
WHERE ColumnName LIKE '%abc%'
Is it possible to define some variables or something else?

You need to write Dynamic SQL statements like
SELECT
'SELECT ''' || DatabaseName || '.' || TableName || '.' || ColumnName || '''' ||
' WHERE EXISTS (SELECT * FROM ' || DatabaseName || '.' || TableName ||
' WHERE ' || ColumnName || ' IN (1,2,3));'
FROM dbc.ColumnsVX
WHERE ColumnName LIKE '%abc%';
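For a hypothetical matching column db1.tab1.abc_col, one generated row would look like this:
SELECT 'db1.tab1.abc_col' WHERE EXISTS (SELECT * FROM db1.tab1 WHERE abc_col IN (1,2,3));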
Running the resulting queries will return one result set with zero or one row for each table.
To get a single result set you need to write a stored procedure that opens a cursor on the dbc.ColumnsVX result (with an INSERT INTO temptable added to each generated statement), runs EXECUTE IMMEDIATE on each row, and finally returns the rows of the temptable.
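A rough, untested sketch of such a procedure (the target table abc_hits is a placeholder you would create beforehand, e.g. as a volatile table with a single VARCHAR column):
REPLACE PROCEDURE find_abc_values()
BEGIN
    FOR rec AS cur CURSOR FOR
        SELECT 'INSERT INTO abc_hits SELECT ''' ||
               TRIM(DatabaseName) || '.' || TRIM(TableName) ||
               ''' WHERE EXISTS (SELECT * FROM ' ||
               TRIM(DatabaseName) || '.' || TRIM(TableName) ||
               ' WHERE ' || TRIM(ColumnName) || ' IN (1,2,3));' AS stmt
        FROM dbc.ColumnsVX
        WHERE ColumnName LIKE '%abc%'
    DO
        EXECUTE IMMEDIATE rec.stmt;
    END FOR;
END;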
Unless you're an experienced SQL programmer, your DBA will not grant you the right to create SPs.
But why do you actually need this kind of info? Looking for a needle in a haystack?

Related

Getting A "Could Not Open Relation" Error On Simple Query

I have a function that creates a set of INSERT INTO ... VALUES scripts. If I uncomment the dvp.content line, the function fails with "ERROR: could not open relation with OID ###", where the OID refers to the temp table. The content column is of type jsonb. I'm not sure where to begin.
CREATE OR REPLACE FUNCTION export_docs_as_sql(doc_list uuid[], to_org_id uuid)
RETURNS table(id integer, sql text)
AS $$
BEGIN
    ...
    -- use a temp table to gather all INSERT statements
    CREATE TEMP TABLE IF NOT EXISTS doc_data_export(
        id serial PRIMARY KEY,
        sql text
    );
    ...
    -- get doc_version_pages
    INSERT INTO doc_data_export(sql)
    SELECT 'INSERT INTO doc_version_pages(id, doc_version_id, persona_id, care_category_id, patient_group_id, title, content, created_at, updated_at, is_guide, is_root) VALUES (' ||
           quote_literal(dvp.id::TEXT) || ', ' ||
           quote_literal(dvp.doc_version_id::TEXT) || ', ' ||
           CASE WHEN p.name IS NOT NULL THEN '(SELECT px.id FROM personas px WHERE px.org_id = ' || quote_literal(dv.id::TEXT) || ' AND px.name = ' || quote_literal(p.name) || '), ' ELSE 'NULL, ' END ||
           CASE WHEN c.name IS NOT NULL THEN '(SELECT cx.id FROM care_categories cx WHERE cx.org_id = ' || quote_literal(to_org_id) || ' AND cx.name = ' || quote_literal(c.name) || '), ' ELSE 'NULL, ' END ||
           CASE WHEN g.name IS NOT NULL THEN '(SELECT gx.id FROM patient_groups gx WHERE gx.org_id = ' || quote_literal(to_org_id) || ' AND gx.name = ' || quote_literal(g.name) || '), ' ELSE 'NULL, ' END ||
           quote_literal(dvp.title::TEXT) || ', ' ||
           --dvp.content || ', ' ||
           quote_literal(dvp.created_at::TEXT) || ', ' ||
           quote_literal(now()::timestamp) || ', ' ||
           quote_literal(dvp.is_guide::TEXT) || ', ' ||
           quote_literal(dvp.is_root::TEXT) || ');'
    FROM unnest(doc_list) l
    INNER JOIN doc_versions dv ON l = dv.doc_id
    INNER JOIN doc_version_pages dvp ON dv.id = dvp.doc_version_id
    LEFT JOIN personas p ON dvp.persona_id = p.id
    LEFT JOIN care_categories c ON dvp.care_category_id = c.id
    LEFT JOIN patient_groups g ON dvp.patient_group_id = g.id;
    ...
    -- output all inserts
    RETURN QUERY SELECT * FROM doc_data_export;
    -- drop temp table
    DROP TABLE doc_data_export;
END;
$$ LANGUAGE plpgsql;
The "Could Not Open Relation" problem is occurring due to the bug described here, which remains an issue as of Postgres 14.0:
What seems to be happening is that if the strings are large enough to be
toasted, then the data returned out of the function with RETURN QUERY
contains toast pointers referencing the temp table's toast table.
If you drop the temp table then those pointers will fail upon use.
To explain further: when a column value is larger than the TOAST_TUPLE_THRESHOLD (normally 2 kB, fixed at compile time) and cannot be compressed below it, or when the column is configured with a storage setting of EXTERNAL, the value will be broken down into chunks and stored in a special secondary table called a TOAST table. This table lives in the pg_toast schema and is named like pg_toast.pg_toast_<table OID>.
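For illustration, you can look up a table's TOAST table (if it has one) like this:
SELECT reltoastrelid::regclass FROM pg_class WHERE relname = 'doc_version_pages';
-- returns something like pg_toast.pg_toast_16453, or "-" if the table has no TOAST table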
So when you add dvp.content to the SQL statement that you insert into doc_data_export, some of those values exceed the limits above and are therefore TOASTed, and your RETURN QUERY sends back only the pointers to the values in the TOAST table. After the return is done, the temporary table and its corresponding TOAST table are dropped. Thus, when the outer query attempts to materialize the results, it can't find the TOAST table that these pointers reference; hence the cryptic error message you see.
You can avoid sending TOAST pointers for the temporary table (and thus safely DROP it after the RETURN QUERY) by performing an operation on the sql column that returns the same value:
RETURN QUERY SELECT id, sql || '' FROM doc_data_export;
The simple function below will reproduce a minimal example of the TOAST bug when you set fail to true and demonstrate the successful workaround when you set fail to false.
DROP FUNCTION IF EXISTS buttered_toast(boolean);
CREATE OR REPLACE FUNCTION buttered_toast(fail boolean)
RETURNS table(id integer, enormous_data text)
AS $$
BEGIN
    CREATE TEMPORARY TABLE tbl_with_toasts (
        id integer PRIMARY KEY,
        enormous_data text
    ) ON COMMIT DROP;
    -- generate a giant string that is sure to produce a TOAST table
    INSERT INTO tbl_with_toasts(id, enormous_data)
    SELECT 1, string_agg(gen_random_uuid()::text, '-')
    FROM generate_series(1, 10000) AS ints(int);
    IF buttered_toast.fail THEN
        -- will return pointers to tbl_with_toasts's TOAST table for the "enormous_data" column
        RETURN QUERY SELECT tbl_with_toasts.id, tbl_with_toasts.enormous_data FROM tbl_with_toasts;
    ELSE
        -- will generate and return new values for the "enormous_data" column
        RETURN QUERY SELECT tbl_with_toasts.id, tbl_with_toasts.enormous_data || '' FROM tbl_with_toasts;
    END IF;
    DROP TABLE tbl_with_toasts;
END;
$$ LANGUAGE plpgsql;
-- fails with "Could Not Open Relation"
select * from buttered_toast(true);
-- succeeds
select * from buttered_toast(false);

postgresql function query execute using parameter for ANY clause - get error - query string argument of EXECUTE is null

PostgreSQL 11.4, compiled by Visual C++ build 1914, 64-bit
Reviewed dozens of articles on Stack Overflow, no real match. The need: pass a comma-separated string of id values and use that list with the PostgreSQL ANY clause.
Code
return query execute
'select aa.id, aa.course_id, aa.comments, aa.curr_cont_t_id, aa.date_done, ' ||
'aa.unit_id, aa.time_seq, aa.week_num, bb.module_id, bb.expected_hrs, ' ||
'bb.title unit_title, cc.module_name, cc.tally_hours, cc.time_of_day, ' ||
'bb.file_upload_expected, aa.app_files_id, xx.facility_id ' ||
'from course_content aa ' ||
'left outer join units bb on aa.unit_id = bb.id ' ||
'left outer join module_categories cc on bb.module_id = cc.id ' ||
'left outer join courses xx on aa.course_id = xx.id ' ||
'where xx.facility_id = any(''{' || $1 || '}'') '
using p_facilities;
I have checked p_facilities to ensure it is not empty or null. I have even specifically set p_facilities to a value inside the function like this:
p_facilities text = '3';
The returned error is consistently: 'query string argument of EXECUTE is null (SQL State 22004)'
The problem is that you are not referring to the USING parameter anywhere in your query. Instead, you are concatenating $1 directly into your query string, and this $1 refers to the first argument of the PL/pgSQL function you are in (which apparently is NULL, so the whole concatenated string becomes NULL).
To use parameters in dynamically executed SQL and pass them through USING, you need to write the literal text $1 into the query string:
EXECUTE 'SELECT … WHERE xx.facility_id = any($1)' USING some_array;
To interpolate a string into the query, you don't need any USING clause; just refer to the string directly:
EXECUTE 'SELECT … WHERE xx.facility_id = any(''{' || p_facilities || '}'')';
However, notice that you don't need (and shouldn't use) dynamic SQL here at all. You're constructing a value, not SQL structure. You can just refer to that value directly in a normal query:
SELECT … WHERE xx.facility_id = any( ('{' || p_facilities || '}')::int[] );
-- or better
SELECT … WHERE xx.facility_id = any( string_to_array(p_facilities, ',')::int[] );
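With that, the whole dynamic query collapses into an ordinary one. A minimal sketch, assuming integer facility ids and trimming the select list down to two columns:
CREATE OR REPLACE FUNCTION course_content_for(p_facilities text)
RETURNS TABLE (id integer, course_id integer)
AS $$
BEGIN
    RETURN QUERY
    SELECT aa.id, aa.course_id
    FROM course_content aa
    LEFT OUTER JOIN courses xx ON aa.course_id = xx.id
    WHERE xx.facility_id = ANY (string_to_array(p_facilities, ',')::int[]);
END;
$$ LANGUAGE plpgsql;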

Issue while forming regex string in a stored procedure

I am trying to run the query below in a stored procedure and it's not working.
We printed the query using RAISE NOTICE and saw that an E gets prepended to the regex, and that's the reason the query doesn't show any output.
Not working
select order,version from mytable
where substring(version from quote_literal('[0-9]+\.[0-9]{1}'))
IN ('6.2') and order= 'ABC';
But if I run the same query from the pgAdmin query tool, it works fine.
Working
select order,version from mytable
where substring(version from '[0-9]+\.[0-9]{1}')
IN ('6.2') and order= 'ABC';
My requirement is to form the regex dynamically in the stored procedure. Please advise on how to achieve this.
Below is the line of code in my stored procedure,
sql_query = sql_query || ' AND substring(version from ' || quote_literal( '[0-9]+\.[0-9]{1}' ) || ') IN (' || quote_literal(compatibleVersions) || ')';
raise notice 'Value: %', sql_query;
EXECUTE sql_query INTO query_result ;
and from the NOTICE I am getting the output below:
AND substring(version from E'[0-9]+\\.[0-9]{1}') IN ('{6.2}')
My requirement is to make this regex work.
I narrowed it down to these queries:
working
select substring(version from '[0-9]+\.[0-9]{1}') from mytable ;
not working
select substring(version from quote_literal('[0-9]+\.[0-9]{1}')) from mytable ;
Now I think it's easy to fix. You can also try running the queries above at your end.
Since your problem is not really the extended string literal syntax using E, but the string representation of the array in the IN list, your PL/pgSQL should look somewhat like this:
sql_query = sql_query ||
    ' AND substring(version from ' || quote_literal('[0-9]+\.[0-9]{1}') ||
    ') IN (' || (SELECT string_agg(quote_literal(x), ', ')
                 FROM unnest(compatibleVersions) AS x(x)) || ')';
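You can check the aggregated IN list on its own; for compatibleVersions = ARRAY['6.2','6.3'] the subquery yields:
SELECT string_agg(quote_literal(x), ', ') FROM unnest(ARRAY['6.2','6.3']) AS x(x);
-- result: '6.2', '6.3'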
quote_literal should be used in situations where you want to construct queries dynamically. In those situations the quote_literal output shows up as an E'...' literal in the final constructed query.
Right way to use it:
select * from config_support_module where substring(firmware from '[0-9]+\.[0-9]{1}') IN ('6.2');
select * from config_support_module where substring(firmware from E'[0-9]+\\.[0-9]{1}') IN ('6.2');
Wrong usage of quote_literal in static queries:
select * from config_support_module where substring(firmware from quote_literal('[0-9]+\.[0-9]{1}')) IN ('6.2') ;
This doesn't give you any errors or output.
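A quick way to see why: quote_literal returns its argument with the quoting characters embedded in the string itself, so the "pattern" handed to substring() starts with an E and a quote character and matches nothing:
SELECT quote_literal('[0-9]+\.[0-9]{1}');
-- result: E'[0-9]+\\.[0-9]{1}'  (the E, the quotes and the doubled backslash are all part of the string)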
quote_literal usage in dynamic queries:
sql_query = sql_query || ' AND substring(version from ' || quote_literal( '[0-9]+\.[0-9]{1}' ) || ') ... .. ...

Table and column names in dynamic PostgreSQL queries

I'm struggling with a stored procedure which heavily uses dynamic queries. Among others I need to store maximum value of an existing column into a variable.
The Postgres documentation states that "if you want to use dynamically determined table or column names, you must insert them into the command string textually". Based on that I've come up with the following statement:
EXECUTE 'SELECT MAX(' || pkColumn::regclass || ') FROM ' ||
tableName::regclass INTO maxValue;
The table name seems to be OK, but the column name triggers an error.
What am I doing wrong?
Pavel
EXECUTE 'SELECT MAX(' || pkColumn ||'::regclass) FROM ' || ...
::regclass is a cast done inside the query. You can also skip it, or quote the identifier with double quotes ("), which in PG works the same. So please try one of:
EXECUTE 'SELECT MAX(' || pkColumn || ') FROM ' || ...
or
EXECUTE 'SELECT MAX("' || pkColumn || '") FROM ' || ...
All three should work. If not, just let me know; in that case it is my fault, since PostgreSQL simply works.
There is no reason to cast parameters as they are just identifiers. For better control and readability use the function format(), e.g.:
declare
    pkcolumn text = 'my_column';
    tablename text = 'my_table';
...
execute format('select max(%I) from %I', pkcolumn, tablename)
into maxvalue;
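For illustration, %I double-quotes an identifier only when it needs it:
select format('select max(%I) from %I', 'my col', 'MyTable');
-- result: select max("my col") from "MyTable"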

Postgresql create a log schema

So my problem is simple. I have a schema prod with many tables, and another one, log, with the exact same tables and structure (only the primary keys change).
When I do an UPDATE or DELETE in the schema prod, I want to record the old data in the log schema.
I have the following function, called after an update or delete:
CREATE FUNCTION prod.log_data() RETURNS trigger
LANGUAGE plpgsql AS $$
DECLARE
    v RECORD;
    column_names text;
    value_names text;
BEGIN
    -- get column names of current table and store the list in a text var
    column_names = '';
    value_names = '';
    FOR v IN
        SELECT * FROM information_schema.columns
        WHERE table_name = quote_ident(TG_TABLE_NAME)
          AND table_schema = quote_ident(TG_TABLE_SCHEMA)
    LOOP
        column_names = column_names || ',' || v.column_name;
        value_names = value_names || ',$1.' || v.column_name;
    END LOOP;
    -- remove first char ','
    column_names = substring(column_names FROM 2);
    value_names = substring(value_names FROM 2);
    -- execute the insert into log schema
    EXECUTE 'INSERT INTO log.' || TG_TABLE_NAME || ' ( ' || column_names || ' ) VALUES ( ' || value_names || ' )' USING OLD;
    RETURN NULL; -- no need to return, it is executed after update
END;$$;
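For context, the function would be attached to each table in prod with a trigger like this (the table name is just an example):
CREATE TRIGGER log_user_changes
AFTER UPDATE OR DELETE ON prod."user"
FOR EACH ROW EXECUTE PROCEDURE prod.log_data();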
The annoying part is that I have to get column names from information_schema for each row.
I would rather use this:
EXECUTE 'INSERT INTO log.' || TG_TABLE_NAME || ' SELECT ' || OLD;
But some values can be NULL so this will execute:
INSERT INTO log.user SELECT 2,,,"2015-10-28 13:52:44.785947"
instead of
INSERT INTO log.user SELECT 2,NULL,NULL,"2015-10-28 13:52:44.785947"
Any idea to convert ",," to ",NULL,"?
Thanks
-Quentin
First of all I must say that in my opinion, using the PostgreSQL system catalogs (like information_schema) is the proper way for such a use case, especially since you only have to write it once: you create the function prod.log_data() and you're done. Moreover, it may be dangerous to use OLD in that context (just like *, as always), because the element order is not specified.
But, to answer your exact question: the only way I know is to do some operations on OLD. Just observe that you cast OLD to text by doing the concatenation ... ' SELECT ' || OLD. The default cast creates those ugly double commas, so you can then play with that text. In the end I propose:
DECLARE
    tmp TEXT;
    ...
BEGIN
    ...
    /* to make OLD -> text like (2,,3,4,,) */
    SELECT '' || OLD INTO tmp; /* step 1 */
    /* take care of commas at the beginning and end: '(,' ',)' */
    tmp := replace(replace(tmp, '(,', '(NULL,'), ',)', ',NULL)'); /* step 2 */
    /* turn the remaining empty elements between commas into NULL */
    SELECT array_to_string(string_to_array(tmp, ',', ''), ',', 'NULL') INTO tmp; /* step 3 */
    /* now we can do the EXECUTE */
    EXECUTE 'INSERT INTO log.' || TG_TABLE_NAME || ' SELECT ' || tmp;
Of course you can do steps 1-3 in one big step:
SELECT array_to_string(string_to_array(replace(replace('' || OLD, '(,', '(NULL,'), ',)', ',NULL)'), ',', ''), ',', 'NULL') INTO tmp;
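Applied to the example row text from the question, the combined expression produces the desired result:
SELECT array_to_string(
         string_to_array(
           replace(replace('(2,,,"2015-10-28 13:52:44.785947")', '(,', '(NULL,'), ',)', ',NULL)'),
           ',', ''),
         ',', 'NULL');
-- result: (2,NULL,NULL,"2015-10-28 13:52:44.785947")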
In my opinion this approach isn't any better than using information_schema, but it's your call.