Mirth Connect Database Reader not returning a result - mirth

I'm trying to pull a report of message volumes (counts) across all channels using the PostgreSQL block below. I set up a channel with a Database Reader as the source and a File Writer as the destination to store the result as an XML file in a specific directory. After I enabled the channel it just stays in a polling state, with the error below in the server log.
But when I replace this block with a simple SELECT query, it works. It would be really helpful if you could help me understand where I'm going wrong, or can a Mirth channel not read a PL/pgSQL block?
do $channelLoop$
declare
    tableName text;
    msgCount int;
    channelName text;
    channelID text;
    countDate date;
    PortNum text;
begin
    raise info ' | CHANNEL NAME | CHANNEL ID | DATE | COUNT | PORT NUM';
    execute 'select CURRENT_DATE-1' into countDate;
    <<"Yesterday's Received Message Count">>
    for tableName in (select local_channel_id from d_channels) loop
        execute 'select count(*) from d_mm' || tableName || ' where connector_name = ''Source'' and received_date between (select CURRENT_DATE-31 || '' 00:00:00.00'')::timestamp and (select CURRENT_DATE-1 || '' 23:59:59.99'')::timestamp' into msgCount;
        execute 'select channel.name from channel inner join d_channels on d_channels.channel_id = channel.id where d_channels.local_channel_id = ' || tableName into channelName;
        execute 'select channel.id from channel inner join d_channels on d_channels.channel_id = channel.id where d_channels.local_channel_id = ' || tableName into channelID;
        execute 'select substring (channel.channel, position (''<port>'' IN channel)+6, 4) AS port from channel inner join d_channels on d_channels.channel_id = channel.id where d_channels.local_channel_id = ' || tableName into PortNum;
        raise info ' | %', channelName || ' | ' || channelID || ' | ' || countDate || ' | ' || msgCount || ' | ' || PortNum;
    end loop "Yesterday's Received Message Count";
end;
$channelLoop$;
Error:
[2021-06-28 04:07:16,410] ERROR (com.mirth.connect.connectors.jdbc.DatabaseReceiverQuery:207): An error occurred while polling for messages, retrying after 10000 ms...
org.postgresql.util.PSQLException: No results were returned by the query.
at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:117)
at com.mirth.connect.connectors.jdbc.DatabaseReceiverQuery.poll(DatabaseReceiverQuery.java:190)
at com.mirth.connect.connectors.jdbc.DatabaseReceiver.poll(DatabaseReceiver.java:111)
at com.mirth.connect.donkey.server.channel.PollConnectorJob.execute(PollConnectorJob.java:49)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557)

I'm honestly not sure what's happening here.
Mirth is going to take whatever you put as your SQL Query in the DB Reader and use it to create a java.sql.PreparedStatement using whichever JDBC driver you have selected. Then it executes the PreparedStatement, passing any parameters if necessary based on whether you used replacement tokens or not.
I do not know if the postgres driver will allow this to be compiled to a prepared statement and executed. It seems like maybe it does since it's not complaining until after you try executing the query.
The problem appears to be that your query does not return a ResultSet. Looking at your code, I think your RAISE INFO output goes to the postgres server log instead of coming back to Mirth as rows.
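One way around that, sketched below and untested, is to wrap the same loop in a set-returning function so the Database Reader gets a ResultSet back. The function name is made up here; the Mirth tables (channel, d_channels, d_mm<N>) and columns are the ones your block already references, and the 31-day window mirrors the BETWEEN in your block.
-- Hypothetical helper: returns one row per channel instead of RAISE INFO output.
create or replace function channel_message_counts()
returns table(channel_name text, channel_id text, count_date date, msg_count bigint)
language plpgsql
as $fn$
declare
    localId text;
begin
    for localId in select local_channel_id from d_channels loop
        return query execute
            'select c.name::text, c.id::text, current_date - 1, '
            || '(select count(*) from d_mm' || localId
            || ' where connector_name = ''Source'''
            || ' and received_date >= current_date - 31'
            || ' and received_date < current_date) '
            || 'from channel c join d_channels d on d.channel_id = c.id '
            || 'where d.local_channel_id = ' || localId;
    end loop;
end;
$fn$;
The Database Reader query then becomes select * from channel_message_counts();, which returns rows Mirth can turn into messages.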

Related

Exporting a query per row to csv in postgresql

I have a users table that has an id, first_name, and last_name.
I also have a messages table that has a user_id, and text.
How do I export a user's messages to a CSV file for each user?
I can do it for one user:
COPY (SELECT text FROM messages WHERE user_id = 'my user id') TO 'Firstname_Lastname.csv' DELIMITER ',' CSV HEADER;
but postgres doesn't seem to have a for loop or anything? I did some googling and stumbled into LATERAL but could not get that to work...
Using Adrian's link and a lot of reading and cursing at my screen I hacked something together:
CREATE OR REPLACE FUNCTION export()
RETURNS integer
LANGUAGE plpgsql
AS $func$
DECLARE
    myuser RECORD;
    filename TEXT;
BEGIN
    FOR myuser IN SELECT id, first_name, last_name FROM users LOOP
        RAISE NOTICE 'user %', myuser;
        EXECUTE $$SELECT '/home/me/' || $1 || '_' || $2 || '.csv'$$ INTO filename USING myuser.first_name, myuser.last_name;
        RAISE NOTICE 'filename %', filename;
        EXECUTE $$COPY (SELECT text FROM messages WHERE user_id = '$$ || myuser.id || $$') TO '$$ || filename || $$' DELIMITER ',' CSV HEADER$$;
    END LOOP;
    RETURN 1;
END;
$func$;
It's quite a mess and is not idiomatic at all I'm sure. But it worked for me.
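For later readers, a somewhat tidier sketch of the same idea (untested; it assumes the same users and messages tables, PostgreSQL 9.1+ for format(), a server-writable /home/me/ directory, and a made-up function name) quotes the literals with format() instead of hand-concatenating them:
CREATE OR REPLACE FUNCTION export_messages()
RETURNS integer
LANGUAGE plpgsql
AS $func$
DECLARE
    myuser RECORD;
    exported integer := 0;
BEGIN
    FOR myuser IN SELECT id, first_name, last_name FROM users LOOP
        -- %L quotes the user id and the target path as SQL literals
        EXECUTE format(
            $$COPY (SELECT text FROM messages WHERE user_id = %L) TO %L DELIMITER ',' CSV HEADER$$,
            myuser.id,
            '/home/me/' || myuser.first_name || '_' || myuser.last_name || '.csv'
        );
        exported := exported + 1;
    END LOOP;
    RETURN exported;
END;
$func$;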

postgresql two phase commit prepare transaction error: transactions cannot be started in PL/pgSQL

I would like to do a two-phase commit with PREPARE TRANSACTION in PostgreSQL.
Could you help with the error?
I can't work out how to connect to the remote database via dblink together with PREPARE TRANSACTION.
create or replace function insert_into_table_a() returns void as $$
declare
    trnxprepare text;
    trnxcommit text;
    trnxrollback text;
    trnxid varchar;
begin
    select uuid_generate_v4() into trnxid;
    select 'prepare transaction ' || ' ''' || trnxid || ' ''' into trnxprepare;
    select 'commit prepared ' || ' ''' || trnxid || ' ''' into trnxcommit;
    select 'rollback prepared ' || ' ''' || trnxid || ' ''' into trnxrollback;
    insert into table_a values ('test');
    perform dblink_connect('cn','dbname=test2 user=test2user password=123456');
    perform dblink_exec('cn','insert into table_b values (''test 2'');');
    perform dblink_disconnect('cn');
    execute trnxprepare;
    execute trnxcommit;
exception
    when others then
        execute trnxrollback;
        perform dblink_disconnect('cn');
        raise notice '% %', sqlerrm, sqlstate;
end;
$$ language plpgsql;
select insert_into_table_a();
ERROR: cannot begin/end transactions in PL/pgSQL
HINT: Use a BEGIN block with an EXCEPTION clause instead.
CONTEXT: PL/pgSQL function insert_into_table_a() line 24 at EXECUTE statement
SQL state: 0A000
So, in Postgres, you can't control transactions from inside functions for the most part. You can raise errors to abort them indirectly, if they aren't caught, but you can't begin or end them directly like this.
To manage transactions, you'd either need a worker process as a loadable module, or to control the transaction from a client through a connection.
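For illustration only, here is a rough sketch of driving the two-phase commit from the client session instead of from inside a function. It reuses the question's dblink connection string and tables, the transaction identifiers are made up, and it assumes max_prepared_transactions > 0 on both servers:
-- Local work, controlled by the client rather than by a function.
begin;
insert into table_a values ('test');

-- Remote work on a named dblink connection; its transaction stays open across calls.
select dblink_connect('cn', 'dbname=test2 user=test2user password=123456');
select dblink_exec('cn', 'begin');
select dblink_exec('cn', 'insert into table_b values (''test 2'')');
select dblink_exec('cn', 'prepare transaction ''txn_remote''');

-- Phase 1 locally, then phase 2 on both sides once every PREPARE has succeeded.
prepare transaction 'txn_local';
commit prepared 'txn_local';
select dblink_exec('cn', 'commit prepared ''txn_remote''');
select dblink_disconnect('cn');
If anything fails before the commits, you would issue rollback prepared on the side(s) that already prepared instead.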

Db2 stored procedure error (not valid in the context where it is used)

My issue is that when I call the stored procedure, it does not work.
drop procedure DELETE_WITH_COMMIT_COUNT1
DB20000I The SQL command completed successfully.
CREATE PROCEDURE DELETE_WITH_COMMIT_COUNT1 (IN v_TABLE_NAME VARCHAR(24), IN v_COMMIT_COUNT INTEGER)
  NOT DETERMINISTIC
  LANGUAGE SQL
P1: BEGIN
  -- DECLARE statements
  DECLARE SQLCODE INTEGER;
  DECLARE v_DELETE_QUERY VARCHAR(1024);
  DECLARE v_DELETE_STATEMENT STATEMENT;
  P2: BEGIN
    DECLARE V1 CHAR(24) FOR BIT DATA;
    DECLARE V2 CHAR(24) FOR BIT DATA;
    DECLARE cur1 CURSOR WITH RETURN TO CLIENT FOR
      select min(MESSAGE_ID), max(MESSAGE_ID) from TESTING
      where TIMESTAMP between (select TIMESTAMP(date(min(timestamp))) from TESTING with ur)
                          and (select TIMESTAMP(date(min(timestamp))) + 1 day from TESTING with ur);
    OPEN cur1;
    FETCH FROM cur1 INTO V1, V2;
    SET v_DELETE_QUERY = 'DELETE FROM (SELECT 1 FROM ' || v_TABLE_NAME || ' WHERE MESSAGE_ID between V1 and V2 '
                      || ' FETCH FIRST ' || RTRIM(CHAR(v_COMMIT_COUNT)) || ' ROWS ONLY) AS DELETE_TABLE';
    PREPARE v_DELETE_STATEMENT FROM v_DELETE_QUERY;
    DEL_LOOP:
    LOOP
      EXECUTE v_DELETE_STATEMENT;
      IF SQLCODE = 100 THEN
        LEAVE DEL_LOOP;
      END IF;
      COMMIT;
    END LOOP;
    COMMIT;
  END P2;
END P1
DB20000I The SQL command completed successfully.
My procedure was created successfully, but when I call it the following error occurs:
db2 "call DELETE_WITH_COMMIT_COUNT1 ('TESTING',50)" SQL0206N "V1" is
not valid in the context where it is used. SQLSTATE=42703
More information:
db2 "select min(MESSAGE_ID),max(MESSAGE_ID) from TESTING where TIMESTAMP between (select TIMESTAMP(date(min(timestamp))) from TESTING with ur) and (select TIMESTAMP(date(min(timestamp))) + 1 day from TESTING with ur)"
1 2
--------------------------------------------------- ---------------------------------------------------
x'4B5753313032353039313133392020202020202020202020' x'4B5753313032353039313230302020202020202020202020'
1 record(s) selected.
I want to delete the records between these values, and I currently have 99 records between the minimum and maximum message_id.
The message_id column is defined as CHAR(24) FOR BIT DATA on the table.
Your problem is with this statement:
SET v_DELETE_QUERY = 'DELETE FROM (SELECT 1 FROM ' || v_TABLE_NAME || ' WHERE MESSAGE_ID between V1 and V2 '
|| ' FETCH FIRST ' || RTRIM(CHAR(v_COMMIT_COUNT)) || ' ROWS ONLY) AS DELETE_TABLE';
In this statement, it looks like you want to use the values of your previously declared variables, V1 and V2:
' WHERE MESSAGE_ID between V1 and V2 '
DB2 treats that part of the string as literal text, so when the dynamic statement is prepared, V1 and V2 look like column names that do not exist, which is what SQL0206N is complaining about. Instead, try changing this part of the statement like so:
SET v_DELETE_QUERY = 'DELETE FROM (SELECT 1 FROM ' || v_TABLE_NAME || ' WHERE MESSAGE_ID between ' || V1 || ' and ' || V2
|| ' FETCH FIRST ' || RTRIM(CHAR(v_COMMIT_COUNT)) || ' ROWS ONLY) AS DELETE_TABLE';
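Also, since MESSAGE_ID is CHAR(24) FOR BIT DATA, splicing the fetched values into the statement text can get awkward; a sketch of an alternative (keeping the rest of the procedure unchanged) is to leave parameter markers in the dynamic statement and pass V1 and V2 at execution time:
SET v_DELETE_QUERY = 'DELETE FROM (SELECT 1 FROM ' || v_TABLE_NAME
    || ' WHERE MESSAGE_ID BETWEEN ? AND ? FETCH FIRST '
    || RTRIM(CHAR(v_COMMIT_COUNT)) || ' ROWS ONLY) AS DELETE_TABLE';
PREPARE v_DELETE_STATEMENT FROM v_DELETE_QUERY;
EXECUTE v_DELETE_STATEMENT USING V1, V2;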

Can't catch null_value_not_allowed exception in postgresql 9.3

I was wondering if I am missing something obvious here. I have a function that determines a table name and then queries that table in an EXECUTE clause. However, if there are no data in the main table (one_min, fifteen_min, etc.) I get back a null_value_not_allowed exception with code 22004. When I put a handler around the exception, it seems to bypass it completely and still dies. I also tried wrapping the handler around the bigger IF condition, still with no luck.
CREATE OR REPLACE FUNCTION loadresults(force_update boolean)
  RETURNS date AS
$BODY$
declare
    inter int;
    startdate date;
    table_name varchar(50);
BEGIN
    select 1440 / avg(array_length(intervals,1))::int into inter from temptable;
    case inter
        when 1 then
            table_name := 'one_min';
        when 15 then
            table_name := 'fifteen_mins';
        when 30 then
            table_name := 'half_hour';
        when 60 then
            table_name := 'one_hour';
        else
            raise EXCEPTION 'I do not recognise the interval %', inter;
    end case;
    SET CONSTRAINTS ALL DEFERRED;
    if force_update is true then
        select min(sday) into startdate from temptable;
    else
        begin
            execute ' select max(sday) from ' || table_name
                || ' where (orgid,householdid) in
                    (select orgid, householdid from temptable limit 1 )'
                into startdate;
        EXCEPTION when null_value_not_allowed then
            select min(sday) into startdate from temptable;
        end;
    end if;
    return startdate;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
All the tables exist and work fine (the function carries on to load the data), and it works fine when the force_update flag is true.
When the force_update flag is false and there are no data in the one_min table, I get this error back:
ERROR: query string argument of EXECUTE is null
SQL state: 22004
Context: PL/pgSQL function loadresults(boolean) line 39 at EXECUTE statement
This points to the execute statement where the query will not return any values.
Any ideas why that might happen? I'd rather keep the error handling within Postgres than in my remaining code.
Update
I have now updated the query in the execute clause with this one:
execute ' select coalesce(res, tem) from ' ||
' (select max(sday) as res from ' || table_name || '
where (orgid,householdid) in (select orgid, householdid from temptable limit 1 )) t1,
(select min(sday) as tem from temptable) m ' into startdate ;
This seems to do the trick as the exception is not raised. I would still like to understand why the exception can't be caught.
Weird, but it seems there are two null_value_not_allowed exceptions (22004 and 39004).
Try to catch it by its SQLSTATE, like:
BEGIN
-- ...
EXCEPTION WHEN SQLSTATE '22004' THEN
-- ...
END;
Or you can achieve the same result with an additional condition:
IF force_update OR table_name IS NULL THEN
SELECT min(sday) INTO startdate FROM temptable;
ELSE
EXECUTE 'select max(sday) from '
|| table_name
|| ' where (orgid,householdid) in (select orgid, householdid from temptable limit 1 )'
INTO startdate;
END IF;
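To convince yourself that the SQLSTATE form does catch this particular error, a small self-contained block (not from the original post) reproduces it:
do $$
declare
    q text := null;   -- deliberately null, like a missing table_name
    startdate date;
begin
    execute q into startdate;
exception
    when sqlstate '22004' then
        raise notice 'caught: query string argument of EXECUTE is null';
end;
$$;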

PostgreSQL backend process high memory usage issue

We are evaluating PostgreSQL for a multitenant database.
Currently we are running some tests on a single-database-multiple-schema model
(basically, all tenants have the same set of database objects under their own schema within the same database).
The application will maintain a connection pool that is shared among all tenants/schemas.
e.g. if the database has 500 tenants/schemas and each tenant has 200 tables/views,
the total number of tables/views will be 500 * 200 = 100,000.
Since the connection pool is used by all tenants, eventually each connection will hit all the tables/views.
In our tests, as a connection hits more views, we found that the memory usage of the backend process increases quite fast, and most of it is private memory.
That memory is held until the connection is closed.
We have a test case where one backend process uses more than 30 GB of memory and eventually gets an out-of-memory error.
To help understand the issue, I wrote code to create a simplified test case:
- MTDB_destroy: used to clear tenant schemas
- MTDB_Initialize: used to create a multitenant DB
- MTDB_RunTests: simplified test case, basically selecting from all tenant views one by one
The tests I've done were on PostgreSQL 9.0.3 on CentOS 5.4.
To make sure I had a clean environment, I re-created the database cluster and left most configurations at their defaults
(the only thing I HAD to change was to increase "max_locks_per_transaction", since MTDB_destroy needs to drop many objects).
This is what I do to reproduce the issue:
create a new database
create the three functions using the code attached
connect to the newly created db and run the initialize scripts
-- Initialize
select MTDB_Initialize('tenant', 100, 100, true);
-- not sure if vacuum analyze is useful here, I just run it
vacuum analyze;
-- check the tables/views created
select table_schema, table_type, count(*) from information_schema.tables where table_schema like 'tenant%' group by table_schema, table_type order by table_schema, table_type;
open another connection to the newly created db and run the test scripts
-- get backend process id for current connection
SELECT pg_backend_pid();
-- open a linux console and run ps -p and watch VIRT, RES and SHR
-- run tests
select MTDB_RunTests('tenant', 1);
Observations:
when the connection for running tests was first created,
VIRT = 182MB, RES = 6240K, SHR=4648K
after running the tests once (took 175 seconds)
VIRT = 1661MB RES = 1.5GB SHR = 55MB
re-run the test again (took 167 seconds)
VIRT = 1661MB RES = 1.5GB SHR = 55MB
re-run the test again (took 165 seconds)
VIRT = 1661MB RES = 1.5GB SHR = 55MB
as we scale up the number of tables, the memory usage in the tests goes up too.
Can anyone help explain what's happening here?
Is there a way we can control the memory usage of a PostgreSQL backend process?
Thanks.
Samuel
-- MTDB_destroy
create or replace function MTDB_destroy (schemaNamePrefix varchar(100))
returns int as $$
declare
    curs1 cursor(prefix varchar) is select schema_name from information_schema.schemata where schema_name like prefix || '%';
    schemaName varchar(100);
    count integer;
begin
    count := 0;
    open curs1(schemaNamePrefix);
    loop
        fetch curs1 into schemaName;
        if not found then exit; end if;
        count := count + 1;
        execute 'drop schema ' || schemaName || ' cascade;';
    end loop;
    close curs1;
    return count;
end $$ language plpgsql;
-- MTDB_Initialize
create or replace function MTDB_Initialize (schemaNamePrefix varchar(100), numberOfSchemas integer, numberOfTablesPerSchema integer, createViewForEachTable boolean)
returns integer as $$
declare
    currentSchemaId integer;
    currentTableId integer;
    currentSchemaName varchar(100);
    currentTableName varchar(100);
    currentViewName varchar(100);
    count integer;
begin
    -- clear
    perform MTDB_Destroy(schemaNamePrefix);
    count := 0;
    currentSchemaId := 1;
    loop
        currentSchemaName := schemaNamePrefix || ltrim(currentSchemaId::varchar(10));
        execute 'create schema ' || currentSchemaName;
        currentTableId := 1;
        loop
            currentTableName := currentSchemaName || '.' || 'table' || ltrim(currentTableId::varchar(10));
            execute 'create table ' || currentTableName || ' (f1 integer, f2 integer, f3 varchar(100), f4 varchar(100), f5 varchar(100), f6 varchar(100), f7 boolean, f8 boolean, f9 integer, f10 integer)';
            if (createViewForEachTable = true) then
                currentViewName := currentSchemaName || '.' || 'view' || ltrim(currentTableId::varchar(10));
                execute 'create view ' || currentViewName || ' as ' ||
                    'select t1.* from ' || currentTableName || ' t1 ' ||
                    ' inner join ' || currentTableName || ' t2 on (t1.f1 = t2.f1) ' ||
                    ' inner join ' || currentTableName || ' t3 on (t2.f2 = t3.f2) ' ||
                    ' inner join ' || currentTableName || ' t4 on (t3.f3 = t4.f3) ' ||
                    ' inner join ' || currentTableName || ' t5 on (t4.f4 = t5.f4) ' ||
                    ' inner join ' || currentTableName || ' t6 on (t5.f5 = t6.f5) ' ||
                    ' inner join ' || currentTableName || ' t7 on (t6.f6 = t7.f6) ' ||
                    ' inner join ' || currentTableName || ' t8 on (t7.f7 = t8.f7) ' ||
                    ' inner join ' || currentTableName || ' t9 on (t8.f8 = t9.f8) ' ||
                    ' inner join ' || currentTableName || ' t10 on (t9.f9 = t10.f9) ';
            end if;
            currentTableId := currentTableId + 1;
            count := count + 1;
            if (currentTableId > numberOfTablesPerSchema) then exit; end if;
        end loop;
        currentSchemaId := currentSchemaId + 1;
        if (currentSchemaId > numberOfSchemas) then exit; end if;
    end loop;
    return count;
END $$ language plpgsql;
-- MTDB_RunTests
create or replace function MTDB_RunTests(schemaNamePrefix varchar(100), rounds integer)
returns integer as $$
declare
    curs1 cursor(prefix varchar) is select table_schema || '.' || table_name from information_schema.tables where table_schema like prefix || '%' and table_type = 'VIEW';
    currentViewName varchar(100);
    count integer;
begin
    count := 0;
    loop
        rounds := rounds - 1;
        if (rounds < 0) then exit; end if;
        open curs1(schemaNamePrefix);
        loop
            fetch curs1 into currentViewName;
            if not found then exit; end if;
            execute 'select * from ' || currentViewName;
            count := count + 1;
        end loop;
        close curs1;
    end loop;
    return count;
end $$ language plpgsql;
Are these connections idle in transaction or just idle? Sounds like unfinished transactions are holding onto memory, or maybe you've got a memory leak or something.
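To tell the two apart, you can look at pg_stat_activity; the query below is just a diagnostic sketch and uses the state columns added in PostgreSQL 9.2 (the 9.0.3 release in the question exposes the same information through the current_query column instead):
-- List backends that are idle or idle in transaction, and for how long.
select pid, usename, state, now() - state_change as in_state_for, query
from pg_stat_activity
where state in ('idle', 'idle in transaction')
order by in_state_for desc;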
For people who see this thread when searching around (as I did), I found what appeared to be the same problem in a different context: idle processes slowly consuming more and more memory until the OOM killer takes them out (causing periodic DB crashes).
We traced the problem back to really long-running PHP scripts which kept one connection open for a long time. We were able to get the memory under control by periodically closing the connection and re-connecting.
From what I've read, Postgres does a lot of per-backend caching, so if you have one session hitting a lot of different tables/queries, this cached data can continue to grow and grow.
-Ken