Questions about transposing have been asked many times before, but I cannot find a good answer for the case of generate_series with dates, because the columns may vary.
WITH range AS
(SELECT to_char(generate_series('2015-01-01','2015-01-05', interval '1 day'),'YYYY-MM-DD'))
SELECT * FROM range;
The normal output from generate_series is:
2015-01-01
2015-01-02
2015-01-03
... and so on
http://sqlfiddle.com/#!15/9eecb7db59d16c80417c72d1e1f4fbf1/5478
But I want the dates to be columns instead:
2015-01-01 2015-01-02 2015-01-03 ...and so on
It seems that crosstab should do the trick, but I only get errors:
select * from crosstab('(SELECT to_char(generate_series('2015-01-01','2015-01-05', interval '1 day'),'YYYY-MM-DD'))')
as ct (dynamic columns?)
How do I get crosstab to work with generate_series(date, date) and different intervals dynamically?
TIA
Taking reference from PostgreSQL query with generated columns,
you can generate the columns dynamically:
create or replace function sp_test()
returns void as
$$
declare
    cases character varying;
    sql_statement text;
begin
    drop table if exists temp_series;
    create temporary table temp_series as
    SELECT to_char(generate_series('2015-01-01','2015-01-02', interval '1 day'),'YYYY-MM-DD') as series;

    select string_agg(concat('max(case when t1.series=','''',series,'''',' then t1.series else ''0000-00-00'' end) as ','"', series,'"'),',')
    into cases
    from temp_series;

    drop table if exists temp_data;
    sql_statement := concat('create temporary table temp_data as select ', cases, ' from temp_series t1');
    raise notice '%', sql_statement;
    execute sql_statement;
end;
$$
language plpgsql;
Call the function in the following way to get the output:
select sp_test(); select * from temp_data;
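For reference, the statement that RAISE NOTICE prints (and that EXECUTE then runs) has roughly this shape for the two-day series above; this is reconstructed from the string_agg expression, not captured output:
create temporary table temp_data as
select max(case when t1.series='2015-01-01' then t1.series else '0000-00-00' end) as "2015-01-01",
       max(case when t1.series='2015-01-02' then t1.series else '0000-00-00' end) as "2015-01-02"
from temp_series t1;
Selecting from temp_data then returns a single row whose column names are the dates.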
Updated function which takes two date parameters:
create or replace function sp_test(start_date timestamp without time zone, end_date timestamp without time zone)
returns void as
$$
declare
    cases character varying;
    sql_statement text;
begin
    drop table if exists temp_series;
    create temporary table temp_series as
    SELECT to_char(generate_series(start_date, end_date, interval '1 day'),'YYYY-MM-DD') as series;

    select string_agg(concat('max(case when t1.series=','''',series,'''',' then t1.series else ''0000-00-00'' end) as ','"', series,'"'),',')
    into cases
    from temp_series;

    drop table if exists temp_data;
    sql_statement := concat('create temporary table temp_data as select ', cases, ' from temp_series t1');
    raise notice '%', sql_statement;
    execute sql_statement;
end;
$$
language plpgsql;
Function call:
select sp_test('2015-01-01','2015-01-10'); select * from temp_data;
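The question also asks about different intervals. The answer above hard-codes interval '1 day'; a variant that takes the step as a third parameter could look like the sketch below (my assumption, not part of the original answer; the 'YYYY-MM-DD' format still assumes steps of at least one day, otherwise column names would collide):
create or replace function sp_test(start_date timestamp without time zone, end_date timestamp without time zone, step interval)
returns void as
$$
declare
    cases character varying;
    sql_statement text;
begin
    drop table if exists temp_series;
    create temporary table temp_series as
    SELECT to_char(generate_series(start_date, end_date, step),'YYYY-MM-DD') as series;

    -- quote_literal/quote_ident do the same quoting the original concat does by hand
    select string_agg(concat('max(case when t1.series=', quote_literal(series), ' then t1.series else ''0000-00-00'' end) as ', quote_ident(series)), ',')
    into cases
    from temp_series;

    drop table if exists temp_data;
    sql_statement := concat('create temporary table temp_data as select ', cases, ' from temp_series t1');
    raise notice '%', sql_statement;
    execute sql_statement;
end;
$$
language plpgsql;
For example, one column per week:
select sp_test('2015-01-01','2015-03-01', interval '1 week'); select * from temp_data;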
I'm trying to figure out how to use the PostgreSQL EXTRACT function to convert a given date_variable into its equivalent day of the week. I understand that it converts the date_variable into a number from 0 to 6 (0 is Sunday, 6 is Saturday, etc.).
I've created a simple table to test my queries. Here I will attempt to convert the start_date into its DOW equivalent.
DROP TABLE IF EXISTS test;
CREATE TABLE test(
start_date date PRIMARY KEY,
end_date date
);
INSERT INTO test (start_date, end_date) VALUES ('2021-03-31', '2021-03-31'); -- Today (wed), hence 3
INSERT INTO test (start_date, end_date) VALUES ('2021-03-30', '2021-03-30'); -- Yesterday (tues), hence 2
INSERT INTO test (start_date, end_date) VALUES ('2021-03-29', '2021-03-29'); -- Day before (mon), hence 1
If I were to run the query below
SELECT (EXTRACT(DOW FROM t.start_date)) AS day FROM test t;
It works fine and returns the result as intended (a single-column table with the values 3, 2, 1 respectively).
However, when I attempt to write a function to return the exact same query
CREATE OR REPLACE FUNCTION get_day()
RETURNS TABLE (day integer) AS $$
BEGIN
RETURN QUERY
SELECT (EXTRACT(DOW FROM t.start_date)) as day
FROM test t;
END;
$$ LANGUAGE plpgsql;
SELECT * FROM get_day(); -- throws error "structure of query does not match function result type"
I get an error instead. I can't seem to find the issue and don't know what is causing it.
extract() returns a double precision value, but your function is declared to return an integer. You need to cast the value:
CREATE OR REPLACE FUNCTION get_day()
RETURNS TABLE (day integer) AS $$
BEGIN
RETURN QUERY
SELECT EXTRACT(DOW FROM t.start_date)::int as day
FROM test t;
END;
$$ LANGUAGE plpgsql;
But you don't really need PL/pgSQL for this, a language SQL function would also work.
CREATE OR REPLACE FUNCTION get_day()
RETURNS TABLE (day integer) AS $$
SELECT EXTRACT(DOW FROM t.start_date)::int as day
FROM test t;
$$
LANGUAGE sql
stable;
As you are not passing any parameters, I would actually use a view for this:
create or replace view day_view
AS
SELECT EXTRACT(DOW FROM t.start_date)::int as day
FROM test t;
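The view is then queried just like the function; a trivial usage example, assuming the test table from the question:
SELECT * FROM day_view;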
The extract function returns values of type double precision.
You declare result to be integer.
You should cast the result of EXTRACT to integer:
CREATE OR REPLACE FUNCTION get_day()
RETURNS TABLE (day integer) AS $$
BEGIN
RETURN QUERY
SELECT EXTRACT(DOW FROM t.start_date)::integer as day
FROM test t;
END;
$$ LANGUAGE plpgsql;
I'm trying to implement table partitioning with dynamic table creation, using a BEFORE INSERT trigger to create new tables and indexes when necessary, with the following solution:
create table mylog (
mylog_id serial not null primary key,
ts timestamp(0) not null default now(),
data text not null
);
CREATE OR REPLACE FUNCTION mylog_insert() RETURNS trigger AS
$BODY$
DECLARE
_name text;
_from timestamp(0);
_to timestamp(0);
BEGIN
SELECT into _name 'mylog_'||replace(substring(date_trunc('day', new.ts)::text from 0 for 11), '-', '');
IF NOT EXISTS (SELECT 1 FROM information_schema.tables WHERE table_name=_name) then
SELECT into _from date_trunc('day', new.ts)::timestamp(0);
SELECT into _to _from + INTERVAL '1 day';
EXECUTE 'CREATE TABLE '||_name||' () INHERITS (mylog)';
EXECUTE 'ALTER TABLE '||_name||' ADD CONSTRAINT ts_check CHECK (ts >= '||quote_literal(_from)||' AND ts < '||quote_literal(_to)||')';
EXECUTE 'CREATE INDEX '||_name||'_ts_idx on '||_name||'(ts)';
END IF;
EXECUTE 'INSERT INTO '||_name||' (ts, data) VALUES ($1, $2)' USING
new.ts, new.data;
RETURN null;
END;
$BODY$
LANGUAGE plpgsql;
CREATE TRIGGER mylog_insert
BEFORE INSERT
ON mylog
FOR EACH ROW
EXECUTE PROCEDURE mylog_insert();
Everything works as expected, but each day, when concurrent INSERT statements are fired for the first time that day, one of them fails trying to create a table that already exists. I suspect this is caused by the triggers firing concurrently, both trying to create the new table, and only one can succeed.
I could use CREATE TABLE IF NOT EXISTS, but I cannot detect the outcome, so I cannot reliably create the constraints and indexes.
What can I do to avoid this problem? Is there any way to signal to the other concurrent triggers that the table has already been created? Or is there a way of knowing whether CREATE TABLE IF NOT EXISTS created a new table or not?
What I do is create a pgAgent job to run every day and create 3 months of tables ahead of time.
CREATE OR REPLACE FUNCTION avl_db.create_alltables()
RETURNS numeric AS
$BODY$
DECLARE
rec record;
BEGIN
FOR rec IN
SELECT date_trunc('day', i::timestamp without time zone) as table_day
FROM generate_series(now()::date,
now()::date + '3 MONTH'::interval,
'1 DAY'::interval) as i
LOOP
PERFORM avl_db.create_table (rec.table_day);
END LOOP;
PERFORM avl_db.avl_partition(now()::date,
now()::date + '3 MONTH'::interval);
RETURN 0;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION avl_db.create_alltables()
OWNER TO postgres;
create_table is very similar to your CREATE TABLE code.
avl_partition updates the BEFORE INSERT trigger, but I see you do that part with a dynamic query; I will have to check that again.
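For context, a create_table helper along those lines could look like the sketch below. This is an assumption modelled on the CREATE TABLE / ALTER TABLE / CREATE INDEX statements from the question's trigger (using the mylog table), not the actual avl_db.create_table:
CREATE OR REPLACE FUNCTION create_table_for_day(_day timestamp without time zone)
RETURNS void AS
$BODY$
DECLARE
    _name text := 'mylog_' || to_char(_day, 'YYYYMMDD');
    _from timestamp(0) := date_trunc('day', _day);
    _to   timestamp(0) := date_trunc('day', _day) + interval '1 day';
BEGIN
    IF NOT EXISTS (SELECT 1 FROM information_schema.tables WHERE table_name = _name) THEN
        EXECUTE 'CREATE TABLE '||_name||' () INHERITS (mylog)';
        EXECUTE 'ALTER TABLE '||_name||' ADD CONSTRAINT ts_check CHECK (ts >= '||quote_literal(_from)||' AND ts < '||quote_literal(_to)||')';
        EXECUTE 'CREATE INDEX '||_name||'_ts_idx ON '||_name||' (ts)';
    END IF;
END;
$BODY$
LANGUAGE plpgsql;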
Also, I see you are using inheritance, but you are missing a very important CONSTRAINT:
CONSTRAINT route_sources_20170601_event_time_check CHECK (
event_time >= '2017-06-01 00:00:00'::timestamp without time zone
AND event_time < '2017-06-02 00:00:00'::timestamp without time zone
)
This improves queries a lot when searching on event_time, because the planner doesn't have to check every table.
With the constraint in place, a search for a single day no longer has to check all the tables for the month (constraint exclusion prunes the non-matching children).
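For example, a single-day lookup like the one below can then be pruned to the matching child table by constraint exclusion (the parent table name route_sources is my guess, inferred from the constraint name above):
EXPLAIN
SELECT *
FROM route_sources
WHERE event_time >= '2017-06-01 00:00:00'
  AND event_time <  '2017-06-02 00:00:00';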
Eventually I wrapped the CREATE TABLE in a BEGIN ... EXCEPTION block that catches the duplicate_table exception. This did the trick, but creating the tables upfront in a cron job is a much better approach performance-wise.
CREATE OR REPLACE FUNCTION mylog_insert() RETURNS trigger AS
$BODY$
DECLARE
_name text;
_from timestamp(0);
_to timestamp(0);
BEGIN
SELECT into _name 'mylog_'||replace(substring(date_trunc('day', new.ts)::text from 0 for 11), '-', '');
IF NOT EXISTS (SELECT 1 FROM information_schema.tables WHERE table_name=_name) then
SELECT into _from date_trunc('day', new.ts)::timestamp(0);
SELECT into _to _from + INTERVAL '1 day';
BEGIN
EXECUTE 'CREATE TABLE '||_name||' () INHERITS (mylog)';
EXECUTE 'ALTER TABLE '||_name||' ADD CONSTRAINT ts_check CHECK (ts >= '||quote_literal(_from)||' AND ts < '||quote_literal(_to)||')';
EXECUTE 'CREATE INDEX '||_name||'_ts_idx on '||_name||'(ts)';
EXCEPTION WHEN duplicate_table THEN
RAISE NOTICE 'table exists -- ignoring';
END;
END IF;
EXECUTE 'INSERT INTO '||_name||' (ts, data) VALUES ($1, $2)' USING
new.ts, new.data;
RETURN null;
END;
$BODY$
LANGUAGE plpgsql;
I want to select persons from a table where the date is within a given month.
This is what I have so far, but it's not working:
CREATE OR REPLACE FUNCTION u7()
RETURNS character varying AS
$BODY$
DECLARE
data varchar=`data`;
mes varchar=`2016-11-21`;
incidencia varchar=`expulsions`;
valor varchar;
BEGIN
EXECUTE `SELECT `
||quote_ident(data)
||`FROM `
||quote_ident(incidencia)
||` WHERE data IN(select date_part(`month`, TIMESTAMP $1))`
INTO valor USING mes;
return valor;
END;
$BODY$
LANGUAGE plpgsql;
select * FROM u7();
Clean syntax for what you are trying to do could look like this:
CREATE OR REPLACE FUNCTION u7()
RETURNS TABLE (valor text) AS
$func$
DECLARE
data text := 'data'; -- the first 3 would typically be function parameters
incidencia text := 'expulsions';
mes timestamp = '2016-11-21';
mes0 timestamp := date_trunc('month', mes);
mes1 timestamp := (mes0 + interval '1 month');
BEGIN
RETURN QUERY EXECUTE format(
'SELECT %I
FROM %I
WHERE datetime_column_name >= $1
AND datetime_column_name < $2'
, data, incidencia)
USING mes0, mes1;
END
$func$ LANGUAGE plpgsql;
SELECT * FROM u7();
Obviously, data cannot be a text column and a timestamp or date column at the same time. I use datetime_column_name for the timestamp column, assuming its data type is timestamp.
Aside from various syntax errors, do not use the construct with date_part(): with it, you would have to process every row of the table and could not use an index on datetime_column_name, which my proposed alternative can.
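To make the difference concrete, compare the two WHERE clauses below; the names (expulsions, data, datetime_column_name) follow the assumptions above:
-- date_part() must be evaluated for every row, so an index on datetime_column_name cannot be used:
SELECT data FROM expulsions
WHERE date_part('month', datetime_column_name) = 11
  AND date_part('year',  datetime_column_name) = 2016;

-- a plain range condition can use that index:
SELECT data FROM expulsions
WHERE datetime_column_name >= '2016-11-01'
  AND datetime_column_name <  '2016-12-01';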
See related answers for explanation:
EXECUTE...INTO...USING statement in PL/pgSQL can't execute into a record?
Table name as a PostgreSQL function parameter
How do I match an entire day to a datetime field?
I am fairly new to Postgres, and what I am trying to do is calculate sum values for each day of every month (i.e. daily sum values). Based on scattered information, I came up with something like this:
CREATE OR REPLACE FUNCTION sumvalues() RETURNS double precision AS
$BODY$
BEGIN
FOR i IN 0..31 LOOP
SELECT SUM("Energy")
FROM "public"."EnergyWh" e
WHERE e."DateTime" = day('01-01-2005 00:00:00'+ INTERVAL 'i' DAY);
END LOOP;
END
$BODY$
LANGUAGE plpgsql VOLATILE NOT LEAKPROOF;
ALTER FUNCTION public.sumvalues()
OWNER TO postgres;
The query returned successfully, so I thought I had made it. However, when I try to insert the values of the function into a table (which may be wrong):
INSERT INTO "SumValues"
("EnergyDC")
(
SELECT sumvalues()
);
I get this:
ERROR: invalid input syntax for type interval: "01-01-2005 00:00:00"
LINE 3: WHERE e."DateTime" = day('01-01-2005 00:00:00'+ INTERVAL...
I tried to debug it myself, but I am still not sure which of the two I am doing wrong (or both), and why.
Here is an example of the EnergyWh table (I am using systemid and datetime as a composite PK, but that should not matter).
See the GROUP BY clause: http://www.postgresql.org/docs/9.2/static/tutorial-agg.html
SELECT EXTRACT(day FROM e."DateTime"), EXTRACT(month FROM e."DateTime"),
EXTRACT(year FROM e."DateTime"), sum("Energy")
FROM "public"."EnergyWh" e
GROUP BY 1,2,3
but the following query should work too:
SELECT e."DateTime"::date, sum("Energy")
FROM "public"."EnergyWh" e
GROUP BY 1
I am using the short syntax for GROUP BY: GROUP BY 1 means group by the first column.
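If the goal is to fill "SumValues" as in the question, the grouped query can feed the INSERT directly; a sketch assuming "SumValues"("EnergyDC") should receive one sum per day:
INSERT INTO "SumValues" ("EnergyDC")
SELECT sum("Energy")
FROM "public"."EnergyWh" e
GROUP BY e."DateTime"::date
ORDER BY e."DateTime"::date;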
Here is a simple example that can help you:
Table:
create table demo (value double precision);
Function
CREATE OR REPLACE FUNCTION sumvalues() RETURNS void AS
$BODY$
DECLARE
inte text;
BEGIN
FOR i IN 0..31 LOOP
inte := 'INSERT INTO demo SELECT EXTRACT (DAY FROM TIMESTAMP ''01-01-2005 00:00:00''+ INTERVAL '''||i||' Days'')';
EXECUTE inte;
END LOOP;
END
$BODY$
LANGUAGE plpgsql VOLATILE NOT LEAKPROOF;
ALTER FUNCTION public.sumvalues()
OWNER TO postgres;
Function Call
SELECT sumvalues();
Output
SELECT * FROM demo;
If you want to use a variable value inside an SQL query, you have to use a dynamic query for that.
Reference: Dynamic query in pgsql
I have plpgsql function:
CREATE OR REPLACE FUNCTION test() RETURNS VOID AS
$$
DECLARE
my_row my_table%ROWTYPE;
BEGIN
SELECT * INTO my_row FROM my_table WHERE id='1';
my_row.date := now();
END;
$$ LANGUAGE plpgsql;
I would like to know if it's possible to directly UPDATE my_row record.
The only way I've found to do it now is:
UPDATE my_table SET date=now() WHERE id='1';
Note this is only an example function, the real one is far more complex than this.
I'm using PostgreSQL 9.2.
UPDATE:
Sorry for the confusion, what I wanted to say is:
SELECT * INTO my_row FROM my_table WHERE id='1';
make_lots_of_complicated_modifications_to(my_row, other_complex_parameters);
UPDATE my_row;
I.e., use my_row to persist the information in the underlying table. I have lots of parameters to update.
I would like to know if it's possible to directly update "my_row"
record.
It is.
You can update columns of a row or record variable in plpgsql - just like you have it. It should be working, obviously?
This, on the other hand, would update the underlying table, of course, not the variable:
UPDATE my_table SET date=now() WHERE id='1';
You are confusing two things here ...
Answer to clarification in comment
I don't think there is syntax in PostgreSQL that can UPDATE a whole row. You can UPDATE a column list, though. Consider this demo:
Note how I use thedate instead of date as the column name; date is a reserved word in every SQL standard and a type name in PostgreSQL.
CREATE TEMP TABLE my_table (id serial, thedate date);
INSERT INTO my_table(thedate) VALUES (now());
CREATE OR REPLACE FUNCTION test_up()
RETURNS void LANGUAGE plpgsql AS
$func$
DECLARE
_r my_table;
BEGIN
SELECT * INTO _r FROM my_table WHERE id = 1;
_r.thedate := now()::date + 5 ;
UPDATE my_table t
-- explicit list of columns to be to updated
SET (id, thedate) = (_r.id, _r.thedate)
WHERE t.id = 1;
END
$func$;
SELECT test_up();
SELECT * FROM my_table;
However, you can INSERT a whole row easily. Just don't supply a column list for the table (which you normally should, but in this case it is perfectly OK not to).
As an UPDATE is internally a DELETE followed by an INSERT anyway, and a function automatically encapsulates everything in a transaction, I don't see why you couldn't use this instead:
CREATE OR REPLACE FUNCTION x.test_delins()
RETURNS void LANGUAGE plpgsql AS
$func$
DECLARE
_r my_table;
BEGIN
SELECT * INTO _r
FROM my_table WHERE id = 1;
_r.thedate := now()::date + 10;
DELETE FROM my_table t WHERE t.id = 1;
INSERT INTO my_table SELECT _r.*;
END
$func$;
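A quick check of the delete-and-insert variant, mirroring the calls after the first demo (assuming the x schema used in the function definition exists):
SELECT x.test_delins();
SELECT * FROM my_table;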
I managed to get this working in PLPGSQL in a couple of lines of code.
Given a table called table in a schema called example, and a record of the same type declared as _record, you can update all the columns in the table to match the record using the following hack:
declare _record example.table;
declare _columns text;  -- _columns (used below) needs to be declared as well
...
-- get the columns in the correct order, as a string
select string_agg(format('%I', column_name), ',' order by ordinal_position)
into _columns
from information_schema.columns
where table_schema='example' and table_name='table';
execute 'update example.table set (' || _columns || ') = row($1.*) where pkey=$2'
using _record, _record.pkey;
In the above example, of course, _record.pkey is the table's primary key.
PostgreSQL has no syntax to SET a whole row in an UPDATE.
If you want to update the full row, you have to assign a value to each column separately.
Yes, it's possible to update / append to the row-type variable:
CREATE OR REPLACE FUNCTION test() RETURNS VOID AS $$
DECLARE
my_row my_table%ROWTYPE;
BEGIN
SELECT * INTO my_row FROM my_table WHERE id='1';
my_row.date := now();
raise notice 'date : %; ',my_row.date;
END;
$$ LANGUAGE plpgsql;
Here the RAISE NOTICE will display today's date.
But this will not update the date column in my_table.