This code:
ALTER TABLE myschema.mytable add column geom geometry (point,4326);
CREATE INDEX mytable_idx on myschema.mytable using GIST(geom);
UPDATE myschema.mytable set geom = st_setsrid(st_point(mytable.long, mytable.lat), 4326);
This works fine when updating a single table. How would you convert it into a dynamic SQL function, with schema and table as parameters?
Since the function input must be an existing table, the simplest safe way would be to use a regclass input parameter, as demonstrated here:
Table name as a PostgreSQL function parameter
However, you also need the bare table name for the concatenated index name, so I'll stick with taking text for schema and table separately:
CREATE OR REPLACE FUNCTION create_geom(_sch text, _tab text)
RETURNS void
LANGUAGE plpgsql AS
$func$
BEGIN
   EXECUTE format(
      'ALTER TABLE %1$I.%2$I ADD COLUMN geom geometry(POINT,4326);
       UPDATE %1$I.%2$I SET geom = st_setsrid(st_point(long, lat), 4326);
       CREATE INDEX %3$I ON %1$I.%2$I USING gist(geom);'
    , _sch, _tab
    , _tab || '_geom_gist_idx');
END
$func$;
Call:
SELECT create_geom('myschema', 'mytable');
Use a single EXECUTE, no need for multiple calls.
Just omit table-qualification for columns in the UPDATE. Since no other tables are joined, column names are unambiguous. Otherwise, use a table alias, which can be constant. Like:
UPDATE %1$I.%2$I AS x SET geom = st_setsrid(st_point(x.long, x.lat), 4326);
But it's smarter to populate the column before you build the index. That's a lot faster and produces a balanced index without bloat. So I switched the commands.
Note how I concatenate the index name first (_tab || '_geom_gist_idx'), and then double-quote as required with %3$I. That's the safe way. Something like %I_idx fails with non-standard names.
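To see the difference, compare (with a hypothetical non-standard name "my table"):
SELECT format('%I_idx', 'my table');        -- "my table"_idx → broken identifier
SELECT format('%I', 'my table' || '_idx');  -- "my table_idx" → valid identifier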
That said, it's typically a mistake to add columns with redundant information to a table. (What keeps you from changing one or the other? Why bloat the table?) Either just use an expression index instead of all of the above:
CREATE INDEX ON myschema.mytable USING gist (st_setsrid(st_point(long, lat), 4326));
Or drop the now redundant long & lat from the table. Those can be extracted from the new geom cheaply on the fly.
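For example, with the PostGIS point accessors:
SELECT ST_X(geom) AS long, ST_Y(geom) AS lat FROM myschema.mytable;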
Or, if you need all columns (for special performance reasons?), consider a generated column instead. See:
Computed / calculated / virtual / derived columns in PostgreSQL
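A minimal sketch of that route, assuming PostgreSQL 12 or later (st_point and st_setsrid are immutable, as required for a stored generated column):
ALTER TABLE myschema.mytable
  ADD COLUMN geom geometry(POINT,4326)
  GENERATED ALWAYS AS (st_setsrid(st_point(long, lat), 4326)) STORED;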
Alternatively, keep your queries as SQL templates and use the format() function for the identifiers:
CREATE OR REPLACE FUNCTION public.create_geom(sch text, tab text)
RETURNS void language plpgsql AS $body$
DECLARE
   DYNSQLA constant text := 'ALTER TABLE %I.%I add column geom geometry (point,4326)';
   DYNSQLB constant text := 'CREATE INDEX %I_idx on %I.%I using GIST(geom);';
   DYNSQLC constant text := 'UPDATE %I.%I set geom = st_setsrid(st_point(%I.long, %I.lat), 4326)';
BEGIN
   execute format(DYNSQLA, sch, tab);
   execute format(DYNSQLB, tab, sch, tab);
   execute format(DYNSQLC, sch, tab, tab, tab);
END;
$body$;
SELECT create_geom('myschema','mytable');
I am trying to remove duplicated data from some of our databases based upon unique ids. All deleted data should be stored in a separate table for auditing purposes. Since it concerns quite a few databases with different schemas and tables, I wanted to start using variables to reduce the chance of errors and the amount of work it will take me.
This is the best example query I could think of, but it doesn't work:
do $$
declare #source_schema varchar := 'my_source_schema';
declare #source_table varchar := 'my_source_table';
declare #target_table varchar := 'my_target_schema' || source_table || '_duplicates'; --target schema and appendix are always the same, source_table is a variable input.
declare #unique_keys varchar := ('1', '2', '3')
begin
select into #target_table
from #source_schema.#source_table
where id in (#unique_keys);
delete from #source_schema.#source_table where export_id in (#unique_keys);
end ;
$$;
The query syntax works with hard-coded values.
Most of the time my variables are interpreted as columns or not recognized at all. :(
You need to create and then call a plpgsql procedure with input parameters:
CREATE OR REPLACE PROCEDURE duplicates_suppress
(my_target_schema text, my_source_schema text, my_source_table text, unique_keys text[])
LANGUAGE plpgsql AS
$$
BEGIN
EXECUTE FORMAT(
'WITH list AS (INSERT INTO %1$I.%3$I_duplicates SELECT * FROM %2$I.%3$I WHERE array[id] <@ %4$L::integer[] RETURNING id)
 DELETE FROM %2$I.%3$I AS t USING list AS l WHERE t.id = l.id', my_target_schema, my_source_schema, my_source_table, unique_keys::text);
END ;
$$ ;
The procedure duplicates_suppress inserts into my_target_schema.my_source_table || '_duplicates' the rows from my_source_schema.my_source_table whose id is in the array unique_keys, and then deletes these rows from the table my_source_schema.my_source_table.
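A call could look like this (assuming the target table my_target_schema.my_source_table_duplicates already exists):
CALL duplicates_suppress('my_target_schema', 'my_source_schema', 'my_source_table', ARRAY['1','2','3']);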
See the test result in dbfiddle.
As has been commented, you need some kind of dynamic SQL, in a FUNCTION, PROCEDURE, or DO statement, to do it on the server.
You should be comfortable with PL/pgSQL. Dynamic SQL is no beginners' toy.
Example with a PROCEDURE, as Edouard already suggested. You'll need a FUNCTION instead if you want to wrap it in an outer transaction (as you very well might). See:
When to use stored procedure / user-defined function?
CREATE OR REPLACE PROCEDURE pg_temp.f_archive_dupes(_source_schema text, _source_table text, _unique_keys int[], OUT _row_count int)
LANGUAGE plpgsql AS
$proc$
-- target schema and appendix are always the same, source_table is a variable input
DECLARE
_target_schema CONSTANT text := 's2'; -- hardcoded
_target_table text := _source_table || '_duplicates';
_sql text := format(
'WITH del AS (
DELETE FROM %I.%I
WHERE id = ANY($1)
RETURNING *
)
INSERT INTO %I.%I TABLE del', _source_schema, _source_table
, _target_schema, _target_table);
BEGIN
RAISE NOTICE '%', _sql; -- debug
EXECUTE _sql USING _unique_keys; -- execute
GET DIAGNOSTICS _row_count = ROW_COUNT;
END
$proc$;
Call:
CALL pg_temp.f_archive_dupes('s1', 't1', '{1, 3}', 0);
db<>fiddle here
I made the procedure temporary, since I assume you don't need to keep it permanently. Create it once per database. See:
How to create a temporary function in PostgreSQL?
Passed schema and table names are case-sensitive strings! (Unlike unquoted identifiers in plain SQL.) Either way, be wary of SQL-injection when concatenating SQL dynamically. See:
Are PostgreSQL column names case-sensitive?
Table name as a PostgreSQL function parameter
Made _unique_keys type int[] (array of integer) since your sample values look like integers. Use the actual data type of your id column!
The variable _sql holds the query string, so it can easily be inspected before actually executing it - using RAISE NOTICE '%', _sql; for that purpose.
I suggest commenting out the EXECUTE line until you are sure.
I made the PROCEDURE return the number of processed rows. You didn't ask for that, but it's typically convenient. At hardly any cost. See:
Dynamic SQL (EXECUTE) as condition for IF statement
Best way to get result count before LIMIT was applied
Last, but not least, use DELETE ... RETURNING * in a data-modifying CTE. Since it has to find the rows only once, it comes at about half the cost of a separate SELECT and DELETE. And it's perfectly safe: if anything goes wrong, the whole transaction is rolled back anyway.
Two separate commands can also run into concurrency issues or race conditions which are ruled out this way, as DELETE implicitly locks the rows to delete. Example:
Replicating data between Postgres DBs
Or you can build the statements in a client program. Like psql, and use \gexec. Example:
Filter column names from existing table for SQL DDL statement
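A rough psql sketch of that route, assuming the procedure above and that every table in schema s1 should be processed with the same keys (adjust the WHERE clause to taste):
SELECT format('CALL pg_temp.f_archive_dupes(%L, %L, %L::int[], 0)', 's1', tablename, '{1,3}')
FROM   pg_tables
WHERE  schemaname = 's1' \gexec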
Based on Erwin's answer, minor optimization...
create or replace procedure pg_temp.p_archive_dump
(_source_schema text, _source_table text,
_unique_key int[],_target_schema text)
language plpgsql as
$$
declare
_row_count bigint;
_target_table text := '';
BEGIN
_target_table := _source_table || '_' || array_to_string(_unique_key, '_');
raise notice 'the deleted records will be stored in %.%', _target_schema, _target_table;
execute format('create table %I.%I as select * from %I.%I limit 0',_target_schema, _target_table,_source_schema,_source_table );
execute format('with mm as ( delete from %I.%I where id = any (%L) returning * ) insert into %I.%I table mm'
,_source_schema,_source_table,_unique_key, _target_schema, _target_table);
GET DIAGNOSTICS _row_count = ROW_COUNT;
RAISE notice 'rows affected: %', _row_count;
end
$$;
If your _unique_key array is short, this solution also creates the target table for you (you still need to create the target schema yourself). If the array is long, the generated table name gets unwieldy or may exceed the identifier length limit, so customize the naming of the dumped table accordingly.
Let's call it.
call pg_temp.p_archive_dump('s1','t1', '{1,2}','s2');
s1 is the source schema, t1 is the source table, {1,2} is the array of unique keys you want to extract to the new table, and s2 is the target schema.
I'm trying to EXECUTE some SELECTs to use inside a function, my code is something like this:
DECLARE
result_one record;
BEGIN
EXECUTE 'WITH Q1 AS
(
SELECT id
FROM table_two
INNER JOINs, WHERE, etc, ORDER BY... DESC
)
SELECT Q1.id
FROM Q1
WHERE, ORDER BY...DESC';
RETURN final_result;
END;
I know how to do it in MySQL, but in PostgreSQL I'm failing. What should I change or how should I do it?
For a function to be able to return multiple rows, it has to be declared as returns table(...) (or returns setof ...).
And to actually return a result from within a PL/pgSQL function you need to use return query (as documented in the manual)
To build dynamic SQL in Postgres it is highly recommended to use the format() function to properly deal with identifiers (and to make the source easier to read).
So you need something like:
create or replace function get_data(p_sort_column text)
returns table (id integer)
as
$$
begin
return query execute
format(
'with q1 as (
select id
from table_two
join table_three on ...
)
select q1.id
from q1
order by %I desc', p_sort_column);
end;
$$
language plpgsql;
Note that the order by inside the CTE is pretty much useless if you are sorting the final query, unless you use a LIMIT or distinct on () inside the CTE.
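For illustration, a case where the inner order by is meaningful because of a LIMIT (created_at is a hypothetical column):
with q1 as (
   select id
   from table_two
   order by created_at desc  -- picks the 10 newest rows
   limit 10
)
select q1.id
from q1
order by id;  -- final sort is independent of the inner one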
You can make your life even easier if you use another level of dollar quoting for the dynamic SQL:
create or replace function get_data(p_sort_column text)
returns table (id integer)
as
$$
begin
return query execute
format(
$query$
with q1 as (
select id
from table_two
join table_three on ...
)
select q1.id
from q1
order by %I desc
$query$, p_sort_column);
end;
$$
language plpgsql;
What a_horse said. And:
How to return result of a SELECT inside a function in PostgreSQL?
Plus, to pick a column for ORDER BY dynamically, you have to add that column to the SELECT list of your CTE, which leads to complications if the column can be duplicated (like with passing 'id') ...
Better yet, remove the CTE entirely. There is nothing in your question to warrant its use anyway. (Only use CTEs when needed; in Postgres they are typically slower than equivalent subqueries or simple queries.)
CREATE OR REPLACE FUNCTION get_data(p_sort_column text)
RETURNS TABLE (id integer) AS
$func$
BEGIN
RETURN QUERY EXECUTE format(
$q$
SELECT t2.id -- assuming you meant t2?
FROM table_two t2
JOIN table_three t3 on ...
ORDER BY t2.%I DESC NULLS LAST -- see below!
$q$, $1);
END
$func$ LANGUAGE plpgsql;
I appended NULLS LAST - you'll probably want that, too:
PostgreSQL sort by datetime asc, null first?
If p_sort_column is from the same table all the time, hard-code that table name / alias in the ORDER BY clause. Else, pass the table name / alias separately and auto-quote them separately to be safe:
Define table and column names as arguments in a plpgsql function?
I suggest to table-qualify all column names in a bigger query with multiple joins (t2.id not just id). Avoids various kinds of surprising results / confusion / abuse.
And you may want to schema-qualify your table names (myschema.table_two) to avoid similar troubles when calling the function with a different search_path:
How does the search_path influence identifier resolution and the "current schema"
How can I write a dynamic SELECT INTO query inside a PL/pgSQL function in Postgres?
Say I have a variable called tb_name which is filled in a FOR loop from information_schema.tables. Now I have a variable called tc which will be taking the row count for each table. I want something like the following:
FOR tb_name in select table_name from information_schema.tables where table_schema='some_schema' and table_name like '%1%'
LOOP
EXECUTE FORMAT('select count(*) into' || tc 'from' || tb_name);
END LOOP
What should be the data type of tb_name and tc in this case?
CREATE OR REPLACE FUNCTION myfunc(_tbl_pattern text, _schema text = 'public')
RETURNS void AS -- or whatever you want to return
$func$
DECLARE
_tb_name information_schema.tables.table_name%TYPE; -- currently varchar
_tc bigint; -- count() returns bigint
BEGIN
FOR _tb_name IN
SELECT table_name
FROM information_schema.tables
WHERE table_schema = _schema
AND table_name ~ _tbl_pattern -- see below!
LOOP
EXECUTE format('SELECT count(*) FROM %I.%I', _schema, _tb_name)
INTO _tc;
-- do something with _tc
END LOOP;
END
$func$ LANGUAGE plpgsql;
Notes
I prepended all parameters and variables with an underscore (_) to avoid naming collisions with table columns. Just a useful convention.
_tc should be bigint, since that's what the aggregate function count() returns.
The data type of _tb_name is derived from its parent column dynamically: information_schema.tables.table_name%TYPE. See the chapter Copying Types in the manual.
Are you sure you only want tables listed in information_schema.tables? Makes sense, but be aware of implications. See:
How to check if a table exists in a given schema
a_horse already pointed to the manual and Andy provided a code example. This is how you assign a single row or value returned from a dynamic query with EXECUTE to a (row) variable. A single column (like count in the example) is decomposed from the row type automatically, so we can assign to the scalar variable tc directly - in the same way we would assign a whole row to a record or row variable. Related:
How to get the value of a dynamically generated field name in PL/pgSQL
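A self-contained illustration of both assignments (runnable as-is):
DO
$$
DECLARE
   _rec record;
   _cnt bigint;
BEGIN
   EXECUTE 'SELECT 1 AS a, 2 AS b' INTO _rec;          -- whole row into a record variable
   EXECUTE 'SELECT count(*) FROM pg_class' INTO _cnt;  -- single column into a scalar variable
   RAISE NOTICE 'rec: %, cnt: %', _rec, _cnt;
END
$$;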
Schema-qualify the table name in the dynamic query. There may be other tables of the same name in the current search_path, which would result in completely wrong (and very confusing!) results without schema-qualification. Sneaky bug! Or this schema is not in the search_path at all, which would make the function raise an exception immediately.
How does the search_path influence identifier resolution and the "current schema"
Always quote identifiers properly to defend against SQL injection and random errors. Schema and table have to be quoted separately! See:
Table name as a PostgreSQL function parameter
Truncating all tables in a Postgres database
I use the regular expression operator ~ in table_name ~ _tbl_pattern instead of table_name LIKE ('%' || _tbl_pattern || '%') because that's simpler. Be wary of special characters in the pattern parameter either way! See:
PostgreSQL Reverse LIKE
Escape function for regular expression or LIKE patterns
Pattern matching with LIKE, SIMILAR TO or regular expressions in PostgreSQL
I set a default for the schema name in the function call: _schema text = 'public'. Just for convenience, you may or may not want that. See:
Assigning default value for type
Addressing your comment: to pass values, use the USING clause like:
EXECUTE format('SELECT count(*) FROM %I.%I
                WHERE some_column = $1', _schema, _tb_name)
USING user_def_variable;
Related:
INSERT with dynamic table name in trigger function
It looks like you want the %I placeholder for FORMAT so that it treats your variable as an identifier. Also, the INTO clause should go outside the prepared statement.
FOR tb_name in select table_name from information_schema.tables where table_schema='some_schema' and table_name like '%1%'
LOOP
EXECUTE FORMAT('select count(*) from %I', tb_name) INTO tc;
END LOOP;
I have defined a function that determines the timezone from table tz_world for a set of lon, lat values:
create function get_timezone(numeric, numeric)
returns character varying(30) as $$
select tzid from tz_world where ST_Contains(geom, ST_MakePoint($1, $2));
$$ language SQL immutable;
Now I would like to use this function in the SELECT clause of a query on a different table:
select get_timezone(lon, lat) from event where...;
The function is rather slow, so I tried using an index to speed things up:
create index event_timezone_idx on event (get_timezone(event.lon, event.lat));
While this speeds up queries where the function is used in the WHERE clause, it has no effect on the variant above where get_timezone(lon, lat) is used in the SELECT clause.
Is it possible to rephrase the query and/or index to speed up the timezone determination?
Update
Thank you for the answers!! I decided to include an extra column for the timezone in the end and populate it when creating/updating the events.
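For reference, a sketch of that approach (the column name tzid and the trigger function are my assumptions, not from the original; EXECUTE FUNCTION requires PostgreSQL 11+, use EXECUTE PROCEDURE on older versions):
ALTER TABLE event ADD COLUMN tzid varchar(30);

CREATE FUNCTION event_set_tzid()
RETURNS trigger LANGUAGE plpgsql AS
$$
BEGIN
   NEW.tzid := get_timezone(NEW.lon, NEW.lat);  -- reuses the function defined above
   RETURN NEW;
END
$$;

CREATE TRIGGER event_set_tzid
BEFORE INSERT OR UPDATE OF lon, lat ON event
FOR EACH ROW EXECUTE FUNCTION event_set_tzid();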
I would recommend creating a local temporary table from the part of the SELECT you want to index, and then creating an index on that temporary table:
CREATE LOCAL TEMPORARY TABLE temp_table AS (
select
.
.
.
);
CREATE INDEX temp_table_idx
ON temp_table
USING btree
(col1, col2, ...);
Otherwise, write out the WHERE condition you actually want: indexes are mainly used to speed up row selection (WHERE, JOIN, ORDER BY), and the indexed expression must match the expression used in the query. An expression appearing only in the SELECT list is normally recomputed for every result row, so the index does not help there.
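For the record, the expression index created above can only kick in when a query repeats the exact indexed expression, typically in a filter ('Europe/Berlin' is just a hypothetical value):
SELECT * FROM event WHERE get_timezone(lon, lat) = 'Europe/Berlin';  -- can use event_timezone_idx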
I am new to PostgreSQL and found a trigger which serves my purpose completely, except for one little thing. The trigger is quite generic and runs across different tables and logs different field changes. I found it here.
What I now need to do is test for a specific field which changes as the tables change on which the trigger fires. I thought of using substr as all the columns will have the same name format e.g. XXX_cust_no, but the XXX can change to 2 or 4 characters. I need to log the value in the XXX_cust_no field with every record that is written to the history_ / audit table. Using a bunch of IF / ELSE statements to accomplish this is not something I would like to do.
The trigger as it now works logs the table_name, column_name, old_value, new_value. However, I also need to log the XXX_cust_no of the record that was changed.
Basically you need dynamic SQL for dynamic column names. format helps to format the DML command. Pass values from NEW and OLD with the USING clause.
Given these tables:
CREATE TABLE tbl (
t_id serial PRIMARY KEY
,abc_cust_no text
);
CREATE TABLE log (
id int
,table_name text
,column_name text
,old_value text
,new_value text
);
It could work like this:
CREATE OR REPLACE FUNCTION trg_demo()
RETURNS TRIGGER AS
$func$
BEGIN
EXECUTE format('
INSERT INTO log(id, table_name, column_name, old_value, new_value)
SELECT ($2).t_id
, $3
, $4
,($1).%1$I
,($2).%1$I', TG_ARGV[0])
USING OLD, NEW, TG_RELNAME, TG_ARGV[0];  -- TG_RELNAME is deprecated; TG_TABLE_NAME is the modern equivalent
RETURN NEW;
END
$func$ LANGUAGE plpgsql;
CREATE TRIGGER demo
BEFORE UPDATE ON tbl
FOR EACH ROW EXECUTE PROCEDURE trg_demo('abc_cust_no'); -- col name here.
SQL Fiddle.
Related answer on dba.SE:
How to access NEW or OLD field given only the field's name?
List of special variables visible in plpgsql trigger functions in the manual.
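A quick smoke test of the trigger above (hypothetical values):
INSERT INTO tbl (abc_cust_no) VALUES ('A001');       -- not logged: the trigger fires on UPDATE only
UPDATE tbl SET abc_cust_no = 'A002' WHERE t_id = 1;
SELECT * FROM log;  -- one row: old_value = 'A001', new_value = 'A002'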