PL/SQL Error Happening - triggers


Your problem
You double-fetch your cursor:
[...]
OPEN UPDATETRIGGER;
FOR UT IN UPDATETRIGGER -- This performs fetching into UT
LOOP
FETCH UPDATETRIGGER -- Here You Fetch again..
[...]
What your loop should look like
CREATE OR REPLACE TRIGGER Display_Update_Message
BEFORE UPDATE
ON JOBS
FOR EACH ROW
WHEN ( (old.IsFilled != new.IsFilled) AND (new.isFilled = 'yes'))
DECLARE
   CURSOR UPDATETRIGGER
   IS
      SELECT J.JobID JobID,
             J.JobName JobName,
             J.StopDate StopDate,
             JS.LastName LastName,
             JS.FirstName FirstName,
             JS.Email Email
        FROM JOBS J
             FULL OUTER JOIN JOBAPPLICATIONS JA ON J.JobID = JA.JobID
             FULL OUTER JOIN JOBSEEKERS JS ON JA.JSID = JS.JSID
       WHERE J.JobID = :new.JobID;

   JobID       NUMBER (3);
   JobName     CHAR (30);
   LastName    CHAR (15);
   FirstName   CHAR (15);
   Email       CHAR (30);
   StopDate    DATE;
BEGIN
   DBMS_OUTPUT.PUT_LINE (
      'Seekers affected by closing job ' || :new.JobID || ': ' || :new.JobName);

   OPEN UPDATETRIGGER;

   LOOP                                    -- infinite loop
      FETCH UPDATETRIGGER
         INTO JobID,
              JobName,
              StopDate,
              LastName,
              FirstName,
              Email;

      EXIT WHEN UPDATETRIGGER%NOTFOUND;    -- loop-breaker

      :new.StopDate := SYSDATE;
      DBMS_OUTPUT.PUT_LINE (
         '--' || LastName || ', ' || FirstName || ' ' || Email);
   END LOOP;

   CLOSE UPDATETRIGGER;
END;

You do not need to include the JOBS table in the query: you already have the information you require in the :NEW namespace. Leaving JOBS out of the query will also stop the ORA-04091 (mutating table) error.
Also, you have two sets of cursor control statements; choose one. I prefer the implicit cursor syntax because it is less typing (and marginally more efficient).
CREATE OR REPLACE TRIGGER Display_Update_Message
BEFORE UPDATE ON JOBS
FOR EACH ROW WHEN ((old.IsFilled != new.IsFilled) AND (new.isFilled = 'yes'))
BEGIN
DBMS_OUTPUT.PUT_LINE('Seekers affected by closing job '
|| :new.JobID || ': ' || :new.JobName);
FOR UT IN (SELECT JS.LastName
, JS.FirstName
, JS.Email Email
FROM JOBAPPLICATIONS JA
FULL OUTER JOIN JOBSEEKERS JS ON JA.JSID = JS.JSID
WHERE JA.JobID = :new.JobID )
LOOP
DBMS_OUTPUT.PUT_LINE('--' || UT.LastName || ', ' || UT.FirstName || ' ' || UT.Email);
END LOOP;
:new.StopDate := sysdate;
END;
/
Incidentally, I'm not sure why you have FULL OUTER JOIN in your cursor. I would have thought INNER JOIN was the correct solution. Surely you only want Job Seekers who have applied for the job you're closing? However, I have left that in because I don't know your business rules, and anyway I've changed enough of your code already :)
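For comparison, the implicit-cursor loop with an INNER JOIN would look like this (just a sketch, reusing the same table and column names, and only relevant if you really want just the seekers who applied for the job):
FOR UT IN (SELECT JS.LastName
                , JS.FirstName
                , JS.Email
             FROM JOBAPPLICATIONS JA
             INNER JOIN JOBSEEKERS JS ON JA.JSID = JS.JSID
            WHERE JA.JobID = :new.JobID)
LOOP
   DBMS_OUTPUT.PUT_LINE('--' || UT.LastName || ', ' || UT.FirstName || ' ' || UT.Email);
END LOOP;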


Getting A "Could Not Open Relation" Error On Simple Query

I have a function that creates a set of INSERT INTO ... VALUES scripts. If I uncomment the dvp.content line, the function fails with an "ERROR: could not open relation with OID ###", which refers to the temp table. The content column is a jsonb type. Not sure where to begin?
CREATE OR REPLACE FUNCTION export_docs_as_sql(doc_list uuid[], to_org_id uuid)
RETURNS table(id integer, sql text)
AS $$
BEGIN
...
-- use a temp table to gather all INSERT statements
CREATE TEMP TABLE IF NOT EXISTS doc_data_export(
id serial PRIMARY KEY,
sql text
);
...
-- get doc_version_pages
INSERT INTO doc_data_export(sql)
SELECT 'INSERT INTO doc_version_pages(id, doc_version_id, persona_id, care_category_id, patient_group_id, title, content, created_at, updated_at, is_guide, is_root) VALUES (' ||
quote_literal(dvp.id::TEXT) || ', ' ||
quote_literal(dvp.doc_version_id::TEXT) || ', ' ||
CASE WHEN p.name IS NOT NULL THEN '(SELECT px.id FROM personas px WHERE px.org_id = ' || quote_literal(dv.id::TEXT) || ' AND px.name = ' || quote_literal(p.name) || '), ' ELSE 'NULL, ' END ||
CASE WHEN c.name IS NOT NULL THEN '(SELECT cx.id FROM care_categories cx WHERE cx.org_id = ' || quote_literal(to_org_id) || ' AND cx.name = ' || quote_literal(c.name) || '), ' ELSE 'NULL, ' END ||
CASE WHEN g.name IS NOT NULL THEN '(SELECT gx.id FROM patient_groups gx WHERE gx.org_id = ' || quote_literal(to_org_id) || ' AND gx.name = ' || quote_literal(g.name) || '), ' ELSE 'NULL, ' END ||
quote_literal(dvp.title::TEXT) || ', ' ||
--dvp.content || ', ' ||
quote_literal(dvp.created_at::TEXT) || ', ' ||
quote_literal(now()::timestamp) || ', ' ||
quote_literal(dvp.is_guide::TEXT) || ', ' ||
quote_literal(dvp.is_root::TEXT) || ');'
FROM unnest(doc_list) l
INNER JOIN doc_versions dv ON l = dv.doc_id
INNER JOIN doc_version_pages dvp ON dv.id = dvp.doc_version_id
LEFT JOIN personas p ON dvp.persona_id = p.id
LEFT JOIN care_categories c ON dvp.care_category_id = c.id
LEFT JOIN patient_groups g ON dvp.patient_group_id = g.id;
...
-- output all inserts
RETURN QUERY SELECT * FROM doc_data_export;
-- drop temp table
DROP TABLE doc_data_export;
END;
$$ LANGUAGE plpgsql;
The "Could Not Open Relation" problem is occurring due to the bug described here, which remains an issue as of Postgres 14.0:
What seems to be happening is that if the strings are large enough to be
toasted, then the data returned out of the function with RETURN QUERY
contains toast pointers referencing the temp table's toast table.
If you drop the temp table then those pointers will fail upon use.
To explain further, when a column value is greater than the TOAST_TUPLE_THRESHOLD configuration parameter (usually 2KB) and cannot be compressed or when the column is configured with a storage parameter of EXTERNAL, the value will be broken down into chunks and stored in a special secondary table called a TOAST table. This table will be stored in the pg_toast schema and will be named like pg_toast.pg_toast_<table OID>.
So when you add dvp.content to the sql statement that you insert into doc_data_export, some of those values are larger than the aforementioned threshold and are thus TOASTed. Your RETURN QUERY is only sending the pointers to the values in the TOAST table. After the return is done, the temporary table and its corresponding TOAST table are dropped. Thus, when the outer query attempts to materialize the results, it can't find the TOAST table that these pointers reference, hence the cryptic error message you see.
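Incidentally, you can see which TOAST table backs a given table with a quick catalog query; it is shown here against the temp table from the question, but any table name works (this is just an aside for inspection, not part of the fix):
-- returns the pg_toast.pg_toast_<oid> relation that stores the out-of-line values
SELECT reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE oid = 'doc_data_export'::regclass;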
You can avoid sending TOAST pointers for the temporary table (and thus safely DROP it after the RETURN QUERY) by performing an operation on the sql column that returns the same value:
RETURN QUERY SELECT id, sql || '' FROM doc_data_export;
The simple function below will reproduce a minimal example of the TOAST bug when you set fail to true and demonstrate the successful workaround when you set fail to false.
DROP FUNCTION IF EXISTS buttered_toast(boolean);

CREATE OR REPLACE FUNCTION buttered_toast(fail boolean)
RETURNS table(id integer, enormous_data text)
AS $$
BEGIN
  CREATE TEMPORARY TABLE tbl_with_toasts (
    id integer PRIMARY KEY,
    enormous_data text
  ) ON COMMIT DROP;

  -- generate a giant string that is sure to generate a TOAST table.
  INSERT INTO tbl_with_toasts(id, enormous_data)
  SELECT 1, string_agg(gen_random_uuid()::text, '-')
  FROM generate_series(1,10000) as ints(int);

  IF buttered_toast.fail THEN
    -- will return pointers to tbl_with_toast's TOAST table for the "enormous_data" column.
    RETURN QUERY SELECT tbl_with_toasts.id, tbl_with_toasts.enormous_data FROM tbl_with_toasts;
  ELSE
    -- will generate and return new values for the "enormous_data" column
    RETURN QUERY SELECT tbl_with_toasts.id, tbl_with_toasts.enormous_data || '' FROM tbl_with_toasts;
  END IF;

  DROP TABLE tbl_with_toasts;
END;
$$ LANGUAGE plpgsql;
-- fails with "Could Not Open Relation"
select * from buttered_toast(true);
-- succeeds
select * from buttered_toast(false);

Postgresql create a log schema

So my problem is simple. I have a schema prod with many tables, and another one, log, with the exact same tables and structure (only the primary keys differ).
When I do an UPDATE or DELETE in the prod schema, I want to record the old data in the log schema.
I have the following function, called after an update or delete:
CREATE FUNCTION prod.log_data() RETURNS trigger
LANGUAGE plpgsql AS $$
DECLARE
  v RECORD;
  column_names text;
  value_names text;
BEGIN
  -- get column names of current table and store the list in a text var
  column_names = '';
  value_names = '';
  FOR v IN SELECT * FROM information_schema.columns
           WHERE table_name = quote_ident(TG_TABLE_NAME)
             AND table_schema = quote_ident(TG_TABLE_SCHEMA) LOOP
    column_names = column_names || ',' || v.column_name;
    value_names = value_names || ',$1.' || v.column_name;
  END LOOP;
  -- remove first char ','
  column_names = substring( column_names FROM 2);
  value_names = substring( value_names FROM 2);
  -- execute the insert into log schema
  EXECUTE 'INSERT INTO log.' || TG_TABLE_NAME || ' ( ' || column_names || ' ) VALUES ( ' || value_names || ' )' USING OLD;
  RETURN NULL; -- no need to return, it is executed after update
END;$$;
The annoying part is that I have to get column names from information_schema for each row.
I would rather use this:
EXECUTE 'INSERT INTO log.' || TG_TABLE_NAME || ' SELECT ' || OLD;
But some values can be NULL so this will execute:
INSERT INTO log.user SELECT 2,,,"2015-10-28 13:52:44.785947"
instead of
INSERT INTO log.user SELECT 2,NULL,NULL,"2015-10-28 13:52:44.785947"
Any idea to convert ",," to ",NULL,"?
Thanks
-Quentin
First of all, I must say that in my opinion using the PostgreSQL system catalogs (like information_schema) is the proper way for such a use case, especially since you only have to write it once: you create the function prod.log_data() and you're done. Moreover, it may be dangerous to use OLD in that context (just like *) because the element order is not specified.
But, to answer your exact question, the only way I know is to do some operations on OLD. Just observe that you cast OLD to text by doing the concatenation ... ' SELECT ' || OLD. The default cast creates those ugly double commas, so next you can play with that text. In the end I propose:
DECLARE
  tmp TEXT;
  ...
BEGIN
  ...
  /* to make OLD -> text like (2,,3,4,,) */
  SELECT '' || OLD INTO tmp;                                                     /* step 1 */
  /* take care of commas at the beginning and end: '(,' ',)' */
  tmp := replace(replace(tmp, '(,', '(NULL,'), ',)', ',NULL)');                  /* step 2 */
  /* replace the remaining empty slots between commas with NULL */
  SELECT array_to_string(string_to_array(tmp, ',', ''), ',', 'NULL') INTO tmp;   /* step 3 */
  /* Now we can do EXECUTE */
  EXECUTE 'INSERT INTO log.' || TG_TABLE_NAME || ' SELECT ' || tmp;
Of course you can do steps 1-3 in one big step:
SELECT array_to_string(string_to_array(replace(replace('' || OLD, '(,', '(NULL,'), ',)', ',NULL)'), ',', ''), ',', 'NULL') INTO tmp;
In my opinion this approach isn't any better than using information_schema, but it's your call.
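As a standalone illustration of what that expression produces (the row values here are made up, standing in for OLD):
SELECT array_to_string(
         string_to_array(
           replace(replace(ROW(2, NULL, NULL, '2015-10-28 13:52:44')::text, '(,', '(NULL,'), ',)', ',NULL)'),
           ',', ''),
         ',', 'NULL');
-- result: (2,NULL,NULL,"2015-10-28 13:52:44")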

db2 issue selecting dynamically from an unknown table

I need a stored procedure that loops over a table that returns the name of a table in DB2, and depending on that name I need to do a SELECT statement from the named table. I have tried doing it with an EXECUTE IMMEDIATE in so many ways that I lost count. Here is an example of the EXECUTE IMMEDIATE:
set insertstring = 'INSERT INTO pribpm.TEMP_T_TOQUE_CICLO (idSemana,tiempo_ciclo,tiempo_toque)
SELECT to_number(to_char( '''|| ' time_stamp ' ||''' ,' || ' IW ' || ')) ,SUM(KPITOTALTIMECLOCK),SUM(s.KPIEXECUTIONTIMECLOCK) FROM ' || TABLA || ' where to_number(to_char( '''|| ' time_stamp ' ||''' ,' || ' IW ' || ')) between ' || (to_number(to_char(FECHA,'IW'))-3) || ' and ' || to_number(to_char(FECHA,'IW')) || ' GROUP BY to_number(to_char('''|| ' time_stamp ' ||''' ,' || ' IW ' || '))';
PREPARE stmt FROM insertstring;
EXECUTE IMMEDIATE insertstring;
where TABLA is a string that contains the name of the table and FECHA is a date of TIMESTAMP type.
Besides, I've tried it with cursors like this:
set select_ = 'SELECT time_stamp, KPITOTALTIMECLOCK, KPIEXECUTIONTIMECLOCK FROM ' || tabla;
PREPARE stmt FROM select_;
FOR v2 AS
c2 cursor for
execute select_
do
if to_number(to_char(time_stamp,'IW')) between
(to_number(to_char(fecha,'IW'))-3) and to_number(to_char(fecha,'IW')) then
--something here
end if;
END FOR;
but with no success.
Could someone please help me find my error, or give me some other idea about what I'm trying to do?
All of this is in a DB2 environment.
Write a procedure that loops over SYSCAT.TABLES to get the table names, and then loops again to fire a SELECT query for each table.
I am not 100% sure, as it has been a long time since I worked on DB2.
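To make that a bit more concrete, here is a rough, untested sketch of such a procedure in SQL PL. The TABNAME filter, the target table, and the use of WEEK_ISO() in place of TO_NUMBER(TO_CHAR(..., 'IW')) are all assumptions based on the question; adjust them to your environment and run it with an alternate statement terminator (e.g. @):
CREATE OR REPLACE PROCEDURE LOAD_TOQUE_CICLO ()
LANGUAGE SQL
BEGIN
  DECLARE v_stmt VARCHAR(2000);
  -- loop over the catalog to get candidate table names
  FOR v AS
    SELECT TABSCHEMA, TABNAME
      FROM SYSCAT.TABLES
     WHERE TYPE = 'T'
       AND TABNAME LIKE 'KPI%'   -- placeholder: whatever rule identifies your source tables
  DO
    -- build and run one dynamic INSERT ... SELECT per table
    SET v_stmt = 'INSERT INTO pribpm.TEMP_T_TOQUE_CICLO (idSemana, tiempo_ciclo, tiempo_toque) '
              || 'SELECT WEEK_ISO(time_stamp), SUM(KPITOTALTIMECLOCK), SUM(KPIEXECUTIONTIMECLOCK) '
              || 'FROM ' || RTRIM(v.TABSCHEMA) || '.' || RTRIM(v.TABNAME)
              || ' GROUP BY WEEK_ISO(time_stamp)';
    EXECUTE IMMEDIATE v_stmt;
  END FOR;
END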

PostgreSQL backend process high memory usage issue

We are evaluating using PostgreSQL to implement a multitenant database,
Currently we are running some tests on a single-database-multiple-schema model
(basically, all tenants have the same set of database objects under their own schema within the same database).
The application will maintain a connection pool that will be shared among all tenants/schemas.
e.g. If the database has 500 tenants/schemas and each tenants has 200 tables/views,
the total number of tables/views will be 500 * 200 = 100,000.
Since the connection pool will be used by all tenants, eventually each connection will hit all the tables/views.
In our tests, when the connection hits more views, we found the memory usage of the backend process increases quite fast, and most of it is private memory.
That memory is held until the connection is closed.
We have a test case where one backend process uses more than 30 GB of memory and eventually gets an out-of-memory error.
To help understand the issue, I wrote code to create simplified test cases:
- MTDB_destroy: used to clear tenant schemas
- MTDB_Initialize: used to create a multitenant DB
- MTDB_RunTests: simplified test case, basically select from all tenant views one by one.
The tests I've done were on PostgreSQL 9.0.3 on CentOS 5.4.
To make sure I had a clean environment, I re-created the database cluster and left the majority of configurations as default
(the only thing I HAD to change was to increase "max_locks_per_transaction", since MTDB_destroy needs to drop many objects).
This is what I do to reproduce the issue:
create a new database
create the three functions using the code attached
connect to the newly created db and run the initialize scripts
-- Initialize
select MTDB_Initialize('tenant', 100, 100, true);
-- not sure if vacuum analyze is useful here, I just run it
vacuum analyze;
-- check the tables/views created
select table_schema, table_type, count(*) from information_schema.tables where table_schema like 'tenant%' group by table_schema, table_type order by table_schema, table_type;
open another connection to the newly created db and run the test scripts
-- get backend process id for current connection
SELECT pg_backend_pid();
-- open a linux console and run ps -p and watch VIRT, RES and SHR
-- run tests
select MTDB_RunTests('tenant', 1);
Observations:
when the connection for running tests was first created,
VIRT = 182MB, RES = 6240K, SHR=4648K
after running the tests once (took 175 seconds)
VIRT = 1661MB RES = 1.5GB SHR = 55MB
re-run the test again (took 167 seconds)
VIRT = 1661MB RES = 1.5GB SHR = 55MB
re-run the test again (took 165 seconds)
VIRT = 1661MB RES = 1.5GB SHR = 55MB
As we scale up the number of tables, the memory usage in the tests goes up too.
Can anyone help explain what's happening here?
Is there a way we can control memory usage of PostgreSQL backend process?
Thanks.
Samuel
-- MTDB_destroy
create or replace function MTDB_destroy (schemaNamePrefix varchar(100))
returns int as $$
declare
  curs1 cursor(prefix varchar) is select schema_name from information_schema.schemata where schema_name like prefix || '%';
  schemaName varchar(100);
  count integer;
begin
  count := 0;
  open curs1(schemaNamePrefix);
  loop
    fetch curs1 into schemaName;
    if not found then exit; end if;
    count := count + 1;
    execute 'drop schema ' || schemaName || ' cascade;';
  end loop;
  close curs1;
  return count;
end $$ language plpgsql;
-- MTDB_Initialize
create or replace function MTDB_Initialize (schemaNamePrefix varchar(100), numberOfSchemas integer, numberOfTablesPerSchema integer, createViewForEachTable boolean)
returns integer as $$
declare
  currentSchemaId integer;
  currentTableId integer;
  currentSchemaName varchar(100);
  currentTableName varchar(100);
  currentViewName varchar(100);
  count integer;
begin
  -- clear
  perform MTDB_Destroy(schemaNamePrefix);
  count := 0;
  currentSchemaId := 1;
  loop
    currentSchemaName := schemaNamePrefix || ltrim(currentSchemaId::varchar(10));
    execute 'create schema ' || currentSchemaName;
    currentTableId := 1;
    loop
      currentTableName := currentSchemaName || '.' || 'table' || ltrim(currentTableId::varchar(10));
      execute 'create table ' || currentTableName || ' (f1 integer, f2 integer, f3 varchar(100), f4 varchar(100), f5 varchar(100), f6 varchar(100), f7 boolean, f8 boolean, f9 integer, f10 integer)';
      if (createViewForEachTable = true) then
        currentViewName := currentSchemaName || '.' || 'view' || ltrim(currentTableId::varchar(10));
        execute 'create view ' || currentViewName || ' as ' ||
                'select t1.* from ' || currentTableName || ' t1 ' ||
                ' inner join ' || currentTableName || ' t2 on (t1.f1 = t2.f1) ' ||
                ' inner join ' || currentTableName || ' t3 on (t2.f2 = t3.f2) ' ||
                ' inner join ' || currentTableName || ' t4 on (t3.f3 = t4.f3) ' ||
                ' inner join ' || currentTableName || ' t5 on (t4.f4 = t5.f4) ' ||
                ' inner join ' || currentTableName || ' t6 on (t5.f5 = t6.f5) ' ||
                ' inner join ' || currentTableName || ' t7 on (t6.f6 = t7.f6) ' ||
                ' inner join ' || currentTableName || ' t8 on (t7.f7 = t8.f7) ' ||
                ' inner join ' || currentTableName || ' t9 on (t8.f8 = t9.f8) ' ||
                ' inner join ' || currentTableName || ' t10 on (t9.f9 = t10.f9) ';
      end if;
      currentTableId := currentTableId + 1;
      count := count + 1;
      if (currentTableId > numberOfTablesPerSchema) then exit; end if;
    end loop;
    currentSchemaId := currentSchemaId + 1;
    if (currentSchemaId > numberOfSchemas) then exit; end if;
  end loop;
  return count;
END $$ language plpgsql;
-- MTDB_RunTests
create or replace function MTDB_RunTests(schemaNamePrefix varchar(100), rounds integer)
returns integer as $$
declare
  curs1 cursor(prefix varchar) is select table_schema || '.' || table_name from information_schema.tables where table_schema like prefix || '%' and table_type = 'VIEW';
  currentViewName varchar(100);
  count integer;
begin
  count := 0;
  loop
    rounds := rounds - 1;
    if (rounds < 0) then exit; end if;
    open curs1(schemaNamePrefix);
    loop
      fetch curs1 into currentViewName;
      if not found then exit; end if;
      execute 'select * from ' || currentViewName;
      count := count + 1;
    end loop;
    close curs1;
  end loop;
  return count;
end $$ language plpgsql;
Are these connections idle in transaction or just idle? Sounds like unfinished transactions are holding onto memory, or maybe you've got a memory leak or something.
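A quick way to check that is to look at pg_stat_activity (a sketch; these column names are for current PostgreSQL versions, while 9.0-era releases used procpid and current_query instead of pid, state and query):
-- list sessions that are sitting idle or idle inside an open transaction
SELECT pid, state, xact_start, query_start, query
FROM pg_stat_activity
WHERE state IN ('idle', 'idle in transaction');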
For people who see this thread when searching around (as I did), I found what appeared to be the same problem in a different context: idle processes slowly consuming more and more memory until the OOM killer takes them out (causing periodic DB crashes).
We traced the problem back to really long-running PHP scripts which kept one connection open for a long time. We were able to get the memory under control by periodically closing the connection and re-connecting.
From what I've read, Postgres does a lot of caching, so if you have one session hitting a lot of different tables/queries, this cached data can continue to grow and grow.
-Ken

How can I measure the amount of space taken by blobs on a Firebird 2.1 database?

I have a production database, using Firebird 2.1, where I need to find out how much space is used by each table, including the blobs. The blob part is the tricky one, because it is not covered by the standard statistical report.
I do not have easy access to the server's desktop, so installing UDFs etc. is not a good solution.
How can I do this easily?
You can count the total size of all BLOB fields in a database with the following statement:
EXECUTE BLOCK RETURNS (BLOB_SIZE BIGINT)
AS
  DECLARE VARIABLE RN CHAR(31) CHARACTER SET UNICODE_FSS;
  DECLARE VARIABLE FN CHAR(31) CHARACTER SET UNICODE_FSS;
  DECLARE VARIABLE S BIGINT;
BEGIN
  BLOB_SIZE = 0;
  FOR
    SELECT r.rdb$relation_name, r.rdb$field_name
      FROM rdb$relation_fields r JOIN rdb$fields f
        ON r.rdb$field_source = f.rdb$field_name
     WHERE f.rdb$field_type = 261
      INTO :RN, :FN
  DO BEGIN
    EXECUTE STATEMENT
      'SELECT SUM(OCTET_LENGTH(' || :FN || ')) FROM ' || :RN ||
      ' WHERE NOT ' || :FN || ' IS NULL'
      INTO :S;
    BLOB_SIZE = :BLOB_SIZE + COALESCE(:S, 0);
  END
  SUSPEND;
END
I modified the code example of Andrej to show the size of each blob field, not only the sum of all blobs.
I also used SET TERM so you can copy & paste this snippet directly into tools like FlameRobin.
SET TERM #;
EXECUTE BLOCK
RETURNS (BLOB_SIZE BIGINT, TABLENAME CHAR(31), FIELDNAME CHAR(31))
AS
  DECLARE VARIABLE RN CHAR(31) CHARACTER SET UNICODE_FSS;
  DECLARE VARIABLE FN CHAR(31) CHARACTER SET UNICODE_FSS;
  DECLARE VARIABLE S BIGINT;
BEGIN
  BLOB_SIZE = 0;
  FOR
    SELECT r.rdb$relation_name, r.rdb$field_name
      FROM rdb$relation_fields r JOIN rdb$fields f
        ON r.rdb$field_source = f.rdb$field_name
     WHERE f.rdb$field_type = 261
      INTO :RN, :FN
  DO BEGIN
    EXECUTE STATEMENT
      'SELECT SUM(OCTET_LENGTH(' || :FN || ')) AS BLOB_SIZE, ''' || :RN || ''', ''' || :FN || '''
       FROM ' || :RN ||
      ' WHERE NOT ' || :FN || ' IS NULL'
      INTO :BLOB_SIZE, :TABLENAME, :FIELDNAME;
    SUSPEND;
  END
END
#
SET TERM ;#
This example doesn't work with ORDER BY; maybe a more elegant solution without EXECUTE BLOCK exists.
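One possible workaround for the ORDER BY limitation (an untested sketch with an assumed procedure name): wrap the same body in a selectable stored procedure instead of an EXECUTE BLOCK, and then order its output from plain SQL:
SET TERM #;
CREATE PROCEDURE BLOB_FIELD_SIZES
RETURNS (BLOB_SIZE BIGINT, TABLENAME CHAR(31), FIELDNAME CHAR(31))
AS
  DECLARE VARIABLE RN CHAR(31) CHARACTER SET UNICODE_FSS;
  DECLARE VARIABLE FN CHAR(31) CHARACTER SET UNICODE_FSS;
BEGIN
  FOR
    SELECT r.rdb$relation_name, r.rdb$field_name
      FROM rdb$relation_fields r JOIN rdb$fields f
        ON r.rdb$field_source = f.rdb$field_name
     WHERE f.rdb$field_type = 261
      INTO :RN, :FN
  DO BEGIN
    EXECUTE STATEMENT
      'SELECT SUM(OCTET_LENGTH(' || :FN || ')) AS BLOB_SIZE, ''' || :RN || ''', ''' || :FN || '''
       FROM ' || :RN ||
      ' WHERE NOT ' || :FN || ' IS NULL'
      INTO :BLOB_SIZE, :TABLENAME, :FIELDNAME;
    SUSPEND;
  END
END
#
SET TERM ;#
-- now ordering works in plain SQL:
SELECT * FROM BLOB_FIELD_SIZES ORDER BY BLOB_SIZE DESC;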