I have a sample function in PostgreSQL that raises a notice.
Sample function:
CREATE OR REPLACE FUNCTION raise_test() RETURNS TEXT AS
$body$
DECLARE
retStr TEXT;
BEGIN
SELECT current_timestamp into retStr;
RAISE NOTICE '%', retStr ;
RETURN retStr;
END;
$body$
LANGUAGE plpgsql;
Is there any way to update the above function so that the entire notice is stored in a file?
For example, if I run "call raise_test();", I'd like to end up with an out.txt in a specific location containing the entire notice.
P.S. I have tried inserting the notice into a temp table and then using:
COPY (select * from temp) TO '\home\pgsql\out.txt'
The answer depends on what is available to you. You cannot do this with the usual tools. There are two possibilities: first, use a PostgreSQL extension that can create and write files, such as Orafce; or write your own C extension that uses a PostgreSQL log hook - then you can do whatever you want with the messages.
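If Orafce is an option, a rough, untested sketch of the first approach could look like this. It assumes the Orafce extension is installed and that /home/pgsql has been whitelisted for utl_file; the utl_file calls follow Orafce's Oracle-style API:
-- rough sketch, not a drop-in solution: requires the Orafce extension and a
-- directory whitelisted for utl_file; adjust names and error handling as needed
CREATE OR REPLACE FUNCTION raise_test() RETURNS TEXT AS
$body$
DECLARE
    retStr TEXT;
    f      utl_file.file_type;
BEGIN
    SELECT current_timestamp INTO retStr;
    RAISE NOTICE '%', retStr;

    f := utl_file.fopen('/home/pgsql', 'out.txt', 'a');  -- open in append mode
    PERFORM utl_file.put_line(f, retStr);                -- write the same text as the notice
    PERFORM utl_file.fclose(f);

    RETURN retStr;
END;
$body$
LANGUAGE plpgsql;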
Related
My requirement is to create a generic function to which I can pass any other function and its parameters, and which returns the appropriate output (i.e. it may be a table result, a single result, etc.), all within a single statement.
This is what I have found and tried, but I don't want to run multiple statements.
CREATE FUNCTION CustomerWithOrdersByState() RETURNS SETOF refcursor AS $$
DECLARE
ref1 refcursor; -- Declare cursor variables
ref2 refcursor;
BEGIN
OPEN ref1 FOR SELECT * FROM "table1" limit 10;
RETURN NEXT ref1;
OPEN ref2 FOR SELECT * FROM "table2" limit 10;
RETURN NEXT ref2;
END;
$$ LANGUAGE plpgsql;
==================================================================
begin;
select * from CustomerWithOrdersByState();
FETCH ALL FROM "<unnamed portal 31>";
-- FETCH ALL FROM "<unnamed portal 30>";
commit;
I am using Postgres version 11.4.
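As an aside on the cursor example: a common variant (a sketch following the usual pattern for returning cursors, not necessarily what you need) is to pass the cursor names in, so the FETCH statements don't depend on the "<unnamed portal N>" names. Fetching still takes multiple statements, which the answer below works around by running the whole script through psql.
-- sketch only: same function, but with caller-supplied cursor names
CREATE OR REPLACE FUNCTION CustomerWithOrdersByState(c1 refcursor, c2 refcursor)
RETURNS SETOF refcursor AS $$
BEGIN
    -- open the cursors under the names supplied by the caller
    OPEN c1 FOR SELECT * FROM "table1" LIMIT 10;
    RETURN NEXT c1;
    OPEN c2 FOR SELECT * FROM "table2" LIMIT 10;
    RETURN NEXT c2;
END;
$$ LANGUAGE plpgsql;

begin;
SELECT * FROM CustomerWithOrdersByState('ref1', 'ref2');
FETCH ALL FROM "ref1";
FETCH ALL FROM "ref2";
commit;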
I've had what I believe is a similar issue where I wanted a way to execute a script with multiple result sets in a single batch.
As pointed out above, pgAdmin 4 (and many other clients I've tried) only seems to process a single command at a time, meaning that you have to select a statement, execute it, select the next one, execute it, and so on.
One quick way I found which appears to work is to save the script as a single file and then execute it on the CLI via psql.
So, as an example, I created a file called myscript.sql as follows:
DROP TABLE IF EXISTS sampledata;
CREATE TABLE if not exists sampledata as select x,1 as c2,2 as c3, md5(random()::text) from generate_series(1,5) x;
CREATE OR REPLACE FUNCTION GET_RECORDS(ref refcursor) RETURNS REFCURSOR AS $$
BEGIN
OPEN ref FOR SELECT * FROM SAMPLEDATA; -- OPEN A CURSOR
RETURN ref; -- RETURN THE CURSOR TO THE CALLER
END;
$$ LANGUAGE PLPGSQL;
/*
In PGManage, you would need to execute these commands one at a time (i.e., 4 times).
*/
BEGIN;
SELECT get_records('r1');
FETCH ALL IN "r1";
COMMIT;
I then created a bash script (runscript.sh) which allowed for easy execution of different files.
#!/bin/bash
# Can be used to execute scripts.
# Like this: ./runscript.sh hello.sql
psql -U xuser -d postgres < "$1"
I set the script to be executable:
chmod a+x runscript.sh
And then execute as follows:
./runscript.sh myscript.sql
The script executes and I see the results in the CLI. I can iterate quickly on the file, save it and execute it in the shell.
I want to measure the performance of PostgreSQL code I wrote. In the code, tables get created, self-written functions get called, etc.
Looking around, I found that EXPLAIN ANALYZE is the way to go.
However, as far as I understand it, the code only gets executed once. For a more realistic analysis I want to execute the code many, many times and have the results of each iteration written somewhere, ideally into a table (for statistics later).
Is there a way to do this with a native PostgreSQL function? If there is no native PostgreSQL function, could I accomplish this with a simple loop? And how would I write out the information of every EXPLAIN ANALYZE iteration?
One way to do this is to write a function that runs the EXPLAIN and then spool its output into a file (or insert it into a table).
E.g.:
create or replace function show_plan(to_explain text)
returns table (line_nr integer, line text)
as
$$
declare
l_plan_line record;
l_line integer;
begin
l_line := 1;
for l_plan_line in execute 'explain (analyze, verbose) '||to_explain loop
return query select l_line, l_plan_line."QUERY PLAN";
l_line := l_line + 1;
end loop;
end;
$$
language plpgsql;
Then you can use generate_series() to run a statement multiple times:
select g.i as run_nr, e.*
from show_plan('select * from foo') e
cross join generate_series(1,10) as g(i)
order by g.i, e.line_nr;
This will run the function 10 times with the passed SQL statement. The result can either be spooled to a file (how you do that depends on the SQL client you are using) or inserted into a table.
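If you want the results in a table for later statistics (as the question asks), a minimal sketch along the same lines could be the following; the plan_runs table is made up for illustration:
-- minimal sketch: collect the plan lines of every run in a (made-up) table
CREATE TABLE IF NOT EXISTS plan_runs (run_nr int, line_nr int, line text);

INSERT INTO plan_runs (run_nr, line_nr, line)
SELECT g.i, e.line_nr, e.line
FROM generate_series(1,10) AS g(i)
CROSS JOIN show_plan('select * from foo') e;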
For automated analysis it's probably easier to use a more parseable explain format, e.g. XML or JSON. This is also easier to handle in the output, as the plan is a single XML (or JSON) value instead of multiple text lines:
create or replace function show_plan_xml(to_explain text)
  returns xml
as
$$
declare
  l_plan xml;
begin
  execute 'explain (analyze, verbose, format xml) '||to_explain
     into l_plan;
  return l_plan;
end;
$$
language plpgsql;
Then use:
select g.i as run_nr, show_plan_xml('select * from foo')
from generate_series(1,10) as g(i)
order by g.i;
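If you prefer JSON, an untested sketch along the same lines returns the plan as jsonb, which makes it easy to pull out single values such as the reported execution time (the exact key names, like "Execution Time", depend on the server version; show_plan_json is a made-up name):
-- untested sketch: JSON variant of the same idea
create or replace function show_plan_json(to_explain text)
   returns jsonb
as
$$
declare
   l_plan text;
begin
   execute 'explain (analyze, verbose, format json) '||to_explain
      into l_plan;
   return l_plan::jsonb;
end;
$$
language plpgsql;

-- e.g. keep only the measured execution time (in milliseconds) of each run
select g.i as run_nr,
       (show_plan_json('select * from foo') -> 0 ->> 'Execution Time')::numeric as exec_ms
from generate_series(1,10) as g(i)
order by g.i;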
Edit: After posting I found Erwin Brandstetter's answer to a similar question. It sounds like in 9.2+ I could use the last option he listed, but none of the other alternatives sound workable for my situation. However, the comment from Jakub Kania, reiterated by Craig Ringer, suggesting that I use COPY (or \copy in psql) appears to solve my problem.
My goal is to get the results of executing a dynamically created query into a text file.
The names and number of columns are unknown; the query generated at run time is a 'pivot' one, and the names of columns in the SELECT list are taken from values stored in the database.
What I envision is being able to run, from the command line:
$ psql -o "myfile.txt" -c "EXECUTE mySQLGeneratingFuntion(param1, param2)"
But what I'm finding is that I can't get results from an EXECUTEd query unless I know the number and types of the columns in the query's result.
create or replace function carrier_eligibility.createSQL() returns varchar AS
$$
begin
return 'SELECT * FROM carrier_eligibility.rule_result';
-- actual procedure writes a pivot query whose columns aren't known til run time
end
$$ language plpgsql;
create or replace function carrier_eligibility.RunSQL() returns setof record AS
$$
begin
return query EXECUTE carrier_eligibility.createSQL();
end
$$ language plpgsql;
-- this works, but I want to be able to get the results into a text file without knowing
-- the number of columns
select * from carrier_eligibility.RunSQL() AS (id int, uh varchar, duh varchar, what varchar)
Using psql isn't a requirement. I just want to get the results of the query into a text file, with the column names in the first row.
What format of a text file do you want? Something like csv?
How about something like this:
CREATE OR REPLACE FUNCTION sql_to_csv(in_sql text) returns setof text
SECURITY INVOKER -- CRITICAL DO NOT CHANGE THIS TO SECURITY DEFINER
LANGUAGE PLPGSQL AS
$$
DECLARE t_row RECORD;
t_out text;
BEGIN
FOR t_row IN EXECUTE in_sql LOOP
t_out := t_row::text;
t_out := regexp_replace(regexp_replace(t_out, E'^\\(', ''), E'\\)$', '');
return next t_out;
END LOOP;
END;
$$;
This should produce properly quoted CSV strings, without a header. Embedded newlines may be a problem, but you could write a quick Perl script to connect and write out the data, or something similar.
Note this presumes that the tuple structure (parenthesized CSV) does not change in future versions; it currently works from 8.4 at least through 9.2.
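If the file should be written without relying on a particular client, one option is to wrap that function in a server-side COPY. A sketch (the path is a placeholder; writing server-side files requires superuser or, on newer versions, the pg_write_server_files role):
-- sketch: server-side COPY writes the file on the database server, not the client,
-- and needs the appropriate privileges; path and query are placeholders
COPY (SELECT sql_to_csv('SELECT * FROM carrier_eligibility.rule_result'))
TO '/tmp/rule_result.csv';
From psql, \copy with the same query writes the file on the client side instead, which matches the approach mentioned in the question's edit.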
Let's say I have a function show_files(IN file text, IN suffix text, OUT statement text). In the next step the function is called:
SELECT * FROM show_files(file := 'example', suffix := '.png');
My question is: Is there any way to get, from inside the function, the statement that called it?
I mean, after running the SELECT, the output of the function (OUT statement text) should be: 'SELECT * FROM show_files(file := 'example', suffix := '.png');' - or is it possible to assign this statement to a variable inside the function?
I need the functionality like those with TG_NAME, TG_OP, etc. in trigger procedures.
Maybe it is possible to retrieve this statement with SELECT current_query FROM pg_stat_activity?
When I try to use it inside a function, I get an empty record:
CREATE OR REPLACE FUNCTION f_snitch(text)
RETURNS text AS
$BODY$
declare
rr text;
BEGIN
RAISE NOTICE '.. from f_snitch.';
-- do stuff
SELECT current_query into rr FROM pg_stat_activity
WHERE current_query ilike 'f_snitch';
RETURN rr;
END
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
Any help and suggestions would be happily welcome!
TG_NAME and friends are special variables that only exist for trigger functions. Regular plpgsql functions don't have anything like that. I am fresh out of ideas how you could possibly get this inside the called function in plpgsql.
You could add RAISE NOTICE to your function so you get the desired information
CREATE OR REPLACE FUNCTION f_snitch(text)
RETURNS text LANGUAGE plpgsql AS
$func$
BEGIN
RAISE NOTICE '.. from f_snitch.';
-- do stuff
RETURN 'Snitch says hi!';
END
$func$;
Call:
SELECT f_snitch('foo')
In addition to the result, this returns a notice:
NOTICE: .. from f_snitch.
Fails to please in two respects:
Calling statement is not in the notice.
No CONTEXT in the notice.
For 1. you can use RAISE LOG instead (or set your cluster up to log NOTICES, too - which I usually don't, too verbose for me). With standard settings, you get an additional line with the STATEMENT in the database log:
LOG: .. from f_snitch.
STATEMENT: SELECT f_snitch('foo')
For 2., have a look at this related question at dba.SE. CONTEXT would look like:
CONTEXT: SQL statement "SELECT f_raise('LOG', 'My message')"
PL/pgSQL function "f_snitch" line 5 at PERFORM
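On more recent versions (9.4 and later), a plpgsql function can also read its own call stack with GET DIAGNOSTICS ... PG_CONTEXT. That shows the PL/pgSQL call chain rather than the top-level SQL text, but it may be enough in some cases; a minimal sketch (f_show_context is a made-up name):
-- minimal sketch (PostgreSQL 9.4+): read the call stack from inside the function
CREATE OR REPLACE FUNCTION f_show_context()
  RETURNS text LANGUAGE plpgsql AS
$func$
DECLARE
   _ctx text;
BEGIN
   GET DIAGNOSTICS _ctx = PG_CONTEXT;
   RETURN _ctx;
END
$func$;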
Ok, I've got it!
CREATE OR REPLACE FUNCTION f_snitch(text)
RETURNS setof record AS
$BODY$
BEGIN
RETURN QUERY
SELECT current_query
FROM pg_stat_activity
WHERE current_query ilike 'select * from f_snitch%';
-- much more reliable than ORDER BY length(current_query) DESC LIMIT 1
END
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
select * from f_snitch('koper') AS (tt text);
The result is the calling statement itself. It's probably not a 100% reliable solution, but for small systems (a few users) it's quite OK.
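For what it's worth, on 9.2 and later the pg_stat_activity column is named query, and pg_backend_pid() lets you pick out your own session instead of pattern matching; an untested sketch of the same idea:
-- sketch for 9.2+: look up the current backend's own entry instead of pattern matching
CREATE OR REPLACE FUNCTION f_snitch(text)
  RETURNS text AS
$BODY$
   SELECT query
   FROM   pg_stat_activity
   WHERE  pid = pg_backend_pid();
$BODY$
LANGUAGE sql VOLATILE;

SELECT f_snitch('koper');   -- returns the calling statement itself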
While executing the trigger code shown below using Ant, I am getting the error:
org.postgresql.util.PSQLException: ERROR: unterminated quoted string at or near "' DECLARE timeout integer"
Position: 57
I am able to successfully execute the code below through pgAdmin (provided with Postgres) and the command-line utility psql, and the trigger function is added, but when executing it through Ant it fails every time.
BEGIN TRANSACTION;
CREATE OR REPLACE FUNCTION sweeper() RETURNS trigger as '
DECLARE
timeout integer;
BEGIN
timeout = 30 * 24 * 60 * 60 ;
DELETE FROM diagnosticdata WHERE current_timestamp - teststarttime > (timeout * ''1 sec''::interval);
return NEW;
END;
' LANGUAGE 'plpgsql';
-- Trigger: sweep on diagnosticdata
CREATE TRIGGER sweep
AFTER INSERT
ON diagnosticdata
FOR EACH ROW
EXECUTE PROCEDURE sweeper();
END;
I encountered this error in Liquibase, and this page was one of the first search results, so I guess I'll share my solution here:
You can put your whole SQL in a separate file and include it in the changeset.
It's important to set the splitStatements option to false.
The whole changeset would then look like this:
<changeSet author="fgrosse" id="530b61fec3ac9">
<sqlFile path="your_sql_file_here.sql" splitStatements="false"/>
</changeSet>
I always like to have those big SQL parts (like function updates and such) in separate files.
This way you get proper syntax highlighting when opening the SQL file and don't have to intermix XML and SQL in one file.
Edit: as mentioned in the comments, it's worth noting that the sql change supports the splitStatements option as well (thanks to AndreyT for pointing that out).
I had the same problem with the JDBC driver used by Liquibase.
It seems that the driver splits the script at each semicolon and runs each piece as a separate SQL command. That is why the code below will be executed by the JDBC driver as the following sequence:
CREATE OR REPLACE FUNCTION test(text) RETURNS VOID AS ' DECLARE tmp text
BEGIN tmp := "test"
END;
' LANGUAGE plpgsql
Of course, this is invalid SQL and causes the following error:
unterminated dollar-quoted string at or near ' DECLARE tmp text
To correct this, you need to add a backslash after each line that ends with a semicolon:
CREATE OR REPLACE FUNCTION test(text)
RETURNS void AS ' DECLARE tmp text; \
BEGIN
tmp := "test"; \
END;' LANGUAGE plpgsql;
Alternatively, you can place the whole definition in one line.
I am using the HeidiSQL client, and this was solved by placing DELIMITER // before the CREATE OR REPLACE statement. There is also a 'Send batch in one go' option in HeidiSQL that essentially achieves the same thing.
This error arises as an interaction between the particular client used to connect to the server and the form of the function. To illustrate:
The following code will run without trouble in NetBeans 7, Squirrel, DbSchema, and PgAdmin3:
CREATE OR REPLACE FUNCTION author.revision_number()
RETURNS trigger AS
$BODY$
begin
new.rev := new.rev + 1;
new.revised := current_timestamp;
return new;
end;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
Please note that the begin statement comes immediately after the opening dollar-quote.
The next piece of code will break in all the above clients except PgAdmin3:
CREATE OR REPLACE FUNCTION author.word_count()
RETURNS trigger AS
$BODY$
declare
wordcount integer := 0; -- counter for words
indexer integer := 1; -- position in the whole string
charac char(1); -- the first character of the word
prevcharac char(1);
begin
while indexer <= length(new.blab) loop
charac := substring(new.blab,indexer,1); -- first character of string
if indexer = 1 then
prevcharac := ' '; -- absolute start of counting
else
prevcharac := substring(new.blab, indexer - 1, 1); -- indexer has increased
end if;
if prevcharac = ' ' and charac != ' ' then
wordcount := wordcount + 1;
end if;
indexer := indexer + 1;
end loop;
new.words := wordcount;
return new;
end;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
The crucial difference in the second example is the declare section. The ploy of using backslashes raises an error in PgAdmin3.
In summary, I suggest trying different tools. Some tools, even though they are supposed to write plain text files, put invisible characters into the text. Notoriously, this occurs with a Unicode BOM, which will break any PHP file that tries to use sessions or namespaces.
While this is not a solution, I hope it helps.
I had the same problem with Zeos and C++ Builder.
The solution in my case:
Change the delimiter property (usually ";") to something else in the component (class) I used.
dm->ZSQLProcessor1->DelimiterType=sdGo;
Perhaps Ant has something similar.
I know this question was asked a long time ago, but I had much the same issue with a PostgreSQL script (run from Jenkins) using Ant's SQL task.
I tried to run this SQL (saved in a file named audit.sql):
DROP SCHEMA IF EXISTS audit CASCADE
;
CREATE SCHEMA IF NOT EXISTS audit AUTHORIZATION faktum
;
CREATE FUNCTION audit.extract_interval_trigger ()
RETURNS trigger AS $extractintervaltrigger$
BEGIN
NEW."last_change_ts" := current_timestamp;
NEW."last_change_by" := current_user;
RETURN NEW;
END;
$extractintervaltrigger$ LANGUAGE plpgsql
;
but got the error "unterminated dollar-quoted string". No problem running it from pgAdmin.
I found out that it is not the driver that splits the script at every ";" but rather Ant.
At http://grokbase.com/t/postgresql/pgsql-jdbc/06cjx3s3y0/ant-sql-tag-for-dollar-quoting I found the answer:
Ant eats double-$$ as part of its variable processing. You have to use
$BODY$ (or similar) in the stored procs, and put the delimiter on its
own line (with delimitertype="row"). Ant will cooperate then.
My Ant SQL script looks like this and it works:
<sql
driver="org.postgresql.Driver" url="jdbc:postgresql://localhost:5432/jenkins"
userid="user" password="*****"
keepformat="true"
autocommit="true"
delimitertype="row"
encoding="utf-8"
src="audit.sql"
/>
This example worked for me with PostgreSQL 14.1 and HeidiSQL 9.4.0.5125
DROP TABLE IF EXISTS emp;
CREATE TABLE emp (
empname text NOT NULL,
salary integer
);
DROP TABLE IF EXISTS EMP_AUDIT;
CREATE TABLE emp_audit(
operation char(1) NOT NULL,
stamp timestamp NOT NULL,
userid text NOT NULL,
empname text NOT NULL,
salary integer
);
DELIMITER //
CREATE OR REPLACE FUNCTION process_emp_audit() RETURNS TRIGGER AS $$
BEGIN
--
-- Create a row in emp_audit to reflect the operation performed on emp,
-- make use of the special variable TG_OP to work out the operation.
--
IF (TG_OP = 'DELETE') THEN
INSERT INTO emp_audit SELECT 'D', now(), user, OLD.*;
RETURN OLD;
ELSIF (TG_OP = 'UPDATE') THEN
INSERT INTO emp_audit SELECT 'U', now(), user, NEW.*;
RETURN NEW;
ELSIF (TG_OP = 'INSERT') THEN
INSERT INTO emp_audit SELECT 'I', now(), user, NEW.*;
RETURN NEW;
END IF;
RETURN NULL; -- result is ignored since this is an AFTER trigger
END;
$$ LANGUAGE plpgsql;
DROP TRIGGER IF EXISTS emp_audit ON emp;
CREATE TRIGGER emp_audit
AFTER INSERT OR UPDATE OR DELETE ON emp
FOR EACH ROW EXECUTE PROCEDURE process_emp_audit();
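To verify that the trigger fires, a quick smoke test with made-up data could look like this after running the script above:
-- quick smoke test with made-up data: each statement should add a row to emp_audit
INSERT INTO emp (empname, salary) VALUES ('alice', 1000);
UPDATE emp SET salary = 1100 WHERE empname = 'alice';
DELETE FROM emp WHERE empname = 'alice';
SELECT * FROM emp_audit;   -- expect one 'I', one 'U' and one 'D' row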
I was receiving the same error because I had my semicolon on a new line, like this:
WHERE colA is NULL
;
Make sure it is on the same line:
WHERE colA is NULL;