DB2 Performance Tuning

I have a DB2 stored procedure that runs for nearly 4 minutes and fetches 6000 pages of data.
May I know if there are any query-performance tuning methods in DB2 so that the procedure could be made to execute faster?
The basic structure of the stored procedure is:
CREATE PROCEDURE SAMPLE_PROC(ID INT)
RESULT SETS 1
P1: BEGIN
    DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMP1
    (
        COLUMN_NAMES
    ) ON COMMIT DROP TABLE;

    BEGIN
        DECLARE V_PARAM_1 VARCHAR(8000);
        DECLARE V_PARAM_2 VARCHAR(8000);

        DECLARE CURSOR1 CURSOR WITH RETURN FOR
            SELECT COLUMN_NAMES
            FROM SESSION.TEMP1
            GROUP BY COLUMN_NAMES
            WITH UR;

        SELECT RTRIM(IFNULL(PARAM_1, '')) PARAM_1,
               RTRIM(IFNULL(PARAM_2, '')) PARAM_2
        INTO V_PARAM_1,
             V_PARAM_2
        FROM PARAM_TABLE
        WHERE PARAM_ID = ID;

        INSERT INTO SESSION.TEMP1 (COLUMN_NAMES)
        SELECT COLUMNS FROM (....) WHERE (...);

        OPEN CURSOR1;
    END;
END P1#
The procedure consists of a SELECT that returns values from PARAM_TABLE, which are then passed as parameters to the main query. Multiple comma-separated values are passed from the table to the query (a common pattern for splitting such values is sketched below).
Thanks
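For reference, one common DB2 technique for that comma-separated parameter is to split it into rows with a recursive common table expression, so the main query can join against the values instead of matching strings or building dynamic SQL. This is only a hedged sketch: the 'A,B,C' literal stands in for V_PARAM_1, and the column names are illustrative.

-- Sketch: peel one token per recursion step until nothing remains.
WITH SPLIT (VAL, REST) AS (
    SELECT VARCHAR('', 8000), VARCHAR('A,B,C', 8000)  -- seed: the raw parameter value
    FROM SYSIBM.SYSDUMMY1
    UNION ALL
    SELECT CASE WHEN LOCATE(',', REST) > 0
                THEN SUBSTR(REST, 1, LOCATE(',', REST) - 1)
                ELSE REST END,
           CASE WHEN LOCATE(',', REST) > 0
                THEN SUBSTR(REST, LOCATE(',', REST) + 1)
                ELSE '' END
    FROM SPLIT
    WHERE REST <> ''
)
SELECT VAL FROM SPLIT WHERE VAL <> '';

Joining the split values to the main query lets the optimizer use indexes on the filtered columns, which is usually where the biggest gains are for a procedure like this.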

Related

Postgres - function to get the values from one query then insert as a dynamic SQL string

I am building a function in PostgreSQL: basically it receives one id from one table, rebuilds the INSERT statement for that row, and stores that statement as a string column in another table.
I have this table; in the insert_query column I want to store the INSERT statement of one row, with its values:
create table get_insert (tabname varchar(30), insert_query varchar(5000));
I want to store something like this on insert_query column:
Insert into baseball_table (code, name) select '01','Robet';
Currently my function stores just this, which doesn't work since I need to store the real values:
INSERT INTO baseball_table(code,name) SELECT code,name FROM baseball_table WHERE id=1;
This is my function:
CREATE OR REPLACE FUNCTION get_values(
    _id character varying
)
RETURNS boolean
LANGUAGE plpgsql
VOLATILE PARALLEL UNSAFE
AS $function$
DECLARE
    v_id integer := _id::integer;
    sql_query varchar;
BEGIN
    sql_query := 'INSERT INTO baseball_table(code,name) SELECT code,name FROM core.brand WHERE id=' || v_id || ';';
    INSERT INTO core.clone_brand (tabname, insert_query) VALUES ('brand', sql_query);
    RETURN true;
END;
$function$;
Which is the best way to get the real values without making variables of each column?
Regards
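One possible approach, as an untested sketch: it still names the columns inside format(), but it captures the row's real values via the %L literal-quoting specifier, so no per-column variables are needed.

CREATE OR REPLACE FUNCTION get_values(_id integer)
RETURNS boolean
LANGUAGE plpgsql AS
$function$
DECLARE
    sql_query text;
BEGIN
    -- %L renders each value as a properly quoted SQL literal
    SELECT format('INSERT INTO baseball_table (code, name) SELECT %L, %L;',
                  b.code, b.name)
      INTO sql_query
      FROM core.brand b
     WHERE b.id = _id;

    INSERT INTO core.clone_brand (tabname, insert_query)
    VALUES ('brand', sql_query);

    RETURN true;
END;
$function$;

For a row with code '01' and name 'Robet', the stored text would be INSERT INTO baseball_table (code, name) SELECT '01', 'Robet'; which matches the desired example above.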

How to iterate over all schemas and find the count from tables with the same name in every schema, every 5 minutes?

Imagine there are 5 schemas in my database, and in every schema there is a table with a common name (e.g. table1). Records get inserted into table1 every 5 minutes. How can I iterate over all schemas and calculate the count of table1? (I have to automate the process, so I am going to write the code in a function and call that function every 5 minutes using crontab; a sample crontab entry is sketched at the end of the answer below.)
Basically two options. Hard-code schema.table and union the results, so something like:
create or replace function count_rows_in_each_table1()
returns table (schema_name text, number_of_rows bigint)
language sql
as $$
select 'schema1', count(*) from schema1.table1 union all
select 'schema2', count(*) from schema2.table1 union all
select 'schema3', count(*) from schema3.table1 union all
...
select 'scheman', count(*) from scheman.table1;
$$;
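The hard-coded version can then be called directly, or from whatever scheduler you use:
select * from count_rows_in_each_table1();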
The alternative is building the query dynamically from information_schema:
create or replace function count_rows_in_each_table1()
returns table (schema_name text, number_of_rows bigint)
language plpgsql
as $$
declare
    c_rows_count cursor is
        select table_schema::text
          from information_schema.tables
         where table_name = 'table1';
    l_tbl record;
    l_sql_statement text = '';
    l_connector text = '';
    l_base_select text = 'select ''%s'', count(*) from %I.table1';
begin
    for l_tbl in c_rows_count
    loop
        l_sql_statement = l_sql_statement ||
                          l_connector ||
                          format(l_base_select, l_tbl.table_schema, l_tbl.table_schema);
        l_connector = ' union all ';
    end loop;
    raise notice E'Running Query: \n%', l_sql_statement;
    return query execute l_sql_statement;
end;
$$;
Which is better? With few schemas and infrequent schema adds/drops, opt for the first: it is direct and easily shows what you are doing. If you add/drop schemas often, opt for the second. If you have many schemas but seldom add/drop them, modify the second to generate the first, save it, and schedule execution of the generated query.
NOTE: Not tested
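For the every-5-minutes part, a crontab entry along these lines could run either function. The database name mydb and passwordless local access are assumptions for illustration, not from the question:
*/5 * * * * psql -d mydb -c "SELECT * FROM count_rows_in_each_table1();"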

PostgreSQl function return multiple dynamic result sets

I have an old MSSQL procedure that needs to be ported to a PostgreSQL function. Basically the SQL procedure consists of a CURSOR over a SELECT statement. For each cursor entity I have three SELECT statements based on the current cursor output.
FETCH NEXT FROM @cursor INTO @entityId
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT * FROM table1 WHERE col1 = @entityId
    SELECT * FROM table2 WHERE col2 = @entityId
    SELECT * FROM table3 WHERE col3 = @entityId
    FETCH NEXT FROM @cursor INTO @entityId
END
The tables from the SELECT statements have different columns.
I know that PostgreSQL uses refcursor in order to return multiple result sets, but the question is whether it is possible to open and return multiple dynamic refcursors inside a loop.
The Npgsql .NET data provider is used for handling the results.
Postgres test code with only one cursor inside the loop:
CREATE OR REPLACE FUNCTION "TestCursor"(refcursor)
RETURNS SETOF refcursor AS
$BODY$
DECLARE
entity_id integer;
BEGIN
FOR entity_id IN SELECT "FolderID" from "Folder"
LOOP
OPEN $1 FOR SELECT * FROM "FolderInfo" WHERE "FolderID" = entity_id;
RETURN NEXT $1;
CLOSE $1;
END LOOP;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;
Then the test code:
BEGIN;
SELECT * FROM "TestCursor"('c');
FETCH ALL IN c;
COMMIT;
The SELECT * FROM "TestCursor"('c'); call returns the cursor name c, one row per loop iteration.
Then when I try to fetch the data I get the error: ERROR: cursor "c" does not exist
You can emulate it via SETOF refcursor, but it is not a good idea. This T-SQL pattern is not well supported in Postgres and should be avoided when possible. PostgreSQL supports functions that can return a scalar, a vector, or a relation; that is all. Usually, in 90% of cases, it is possible to rewrite T-SQL procedures as clean PostgreSQL functions.
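If you do choose to emulate it, here is a minimal corrected sketch (my reconstruction, untested against the asker's schema). The test function above fails for two reasons: the single name c is reused for every OPEN, and each cursor is CLOSEd before the caller can fetch from it. Declaring the refcursor variable without a preset name lets PostgreSQL generate a distinct portal name per iteration, and the cursors must stay open until the caller has fetched:

CREATE OR REPLACE FUNCTION "TestCursor"()
RETURNS SETOF refcursor AS
$BODY$
DECLARE
    entity_id integer;
    c refcursor;  -- NULL, so each OPEN receives an auto-generated portal name
BEGIN
    FOR entity_id IN SELECT "FolderID" FROM "Folder"
    LOOP
        OPEN c FOR SELECT * FROM "FolderInfo" WHERE "FolderID" = entity_id;
        RETURN NEXT c;   -- hand the generated portal name back to the caller
        c := NULL;       -- reset so the next OPEN gets a fresh name
        -- no CLOSE here: the caller fetches after the function returns
    END LOOP;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;

The caller then fetches by the names the function returned, for example:
BEGIN;
SELECT * FROM "TestCursor"();       -- returns e.g. <unnamed portal 1>, <unnamed portal 2>, ...
FETCH ALL IN "<unnamed portal 1>";  -- one FETCH per returned name
COMMIT;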

How to select from a variable that is a table name in PostgreSQL >= 9.2

I have a variable that is the name of a table. How can I select or update from it using the variable in a query? For example:
create or replace function pg_temp.testtst ()
returns varchar(255) as
$$
declare
r record; t_name name;
begin
for r in SELECT tablename FROM pg_tables WHERE schemaname = 'public' limit 100 loop
t_name = r.tablename;
update t_name set id = 10 where id = 15;
end loop;
return seq_name;
end;
$$
language plpgsql;
it shows
ERROR: relation "t_name" does not exist
The correct reply is a comment from Anton Kovalenko:
You cannot use a variable as a table or column name in embedded SQL, ever.
UPDATE dynamic_table_name SET ....
PostgreSQL uses prepared and saved plans for embedded SQL, and references to the target objects (tables) are deeply hard-coded in those plans. A table's characteristics have a significant impact on the plan: for one table an index can be used, for another not. Query planning is relatively slow, so PostgreSQL doesn't re-plan transparently (with a few exceptions).
You should use dynamic SQL; one of its purposes is exactly situations like this. A new SQL string is generated every time, and plans are not saved:
DO $$
DECLARE r record;
BEGIN
FOR r IN SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
LOOP
EXECUTE format('UPDATE %I SET id = 10 WHERE id = 15', r.table_name);
END LOOP;
END $$;
Attention: dynamic SQL is unsafe (there is a risk of SQL injection) without parameter sanitization. I used the format function for that. Another way is using the quote_ident function:
EXECUTE 'UPDATE ' || quote_ident(r.table_name) || ' SET ...

postgresql copy with schema support

I'm trying to load some data from CSV using the postgresql COPY command. The trick is that I'd like to implement multi-tenancy on a userid (which is contained in the CSV). Is there an easy way to tell the postgres copy command to filter based on this userid when loading the csv?
i.e. all rows with userid=x go to schema=x, rows with userid=y go to schema=y.
There is no way of doing this with just the COPY command, but you could copy all your data into a master table and then put together a simple PL/pgSQL function that does it for you. Something like this:
CREATE OR REPLACE FUNCTION public.spike()
RETURNS void AS
$BODY$
DECLARE
    user_id integer;
    destination_schema text;
BEGIN
    FOR user_id IN SELECT userid FROM master_table GROUP BY userid LOOP
        CASE user_id
            WHEN 1 THEN
                destination_schema := 'foo';
            WHEN 2 THEN
                destination_schema := 'bar';
            ELSE
                destination_schema := 'baz';
        END CASE;
        EXECUTE 'INSERT INTO ' || destination_schema || '.my_table SELECT * FROM master_table WHERE userid=$1' USING user_id;
        -- EXECUTE 'DELETE FROM master_table WHERE userid=$1' USING user_id;
    END LOOP;
    TRUNCATE TABLE master_table;
    RETURN;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
This gets all unique user_ids from master_table, uses a CASE statement to determine the destination schema, and executes an INSERT ... SELECT to move each user's rows; finally it clears master_table (the per-user DELETE is left commented out in favor of a single TRUNCATE once all rows have been moved).
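A hypothetical end-to-end run would then look like this; the file path, CSV options, and table layout are assumptions for illustration, not from the original question:

-- Load the CSV into the staging table, then fan rows out per schema.
COPY master_table FROM '/tmp/data.csv' WITH (FORMAT csv, HEADER true);
SELECT public.spike();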