EXECUTE ... INTO with multiple rows (PostgreSQL)

this is my function:
CREATE OR REPLACE FUNCTION SANDBOX.DAILYVERIFY_DATE(TABLE_NAME regclass, DATE_DIFF INTEGER)
RETURNS void AS $$
DECLARE
RESULT BOOLEAN;
DATE DATE;
BEGIN
EXECUTE 'SELECT VORHANDENES_DATUM AS DATE, CASE WHEN DATUM IS NULL THEN FALSE ELSE TRUE END AS UPDATED FROM
(SELECT DISTINCT DATE VORHANDENES_DATUM FROM ' || TABLE_NAME ||
' WHERE DATE > CURRENT_DATE -14-'||DATE_DIFF|| '
) A
RIGHT JOIN
(
WITH compras AS (
SELECT ( NOW() + (s::TEXT || '' day'')::INTERVAL )::TIMESTAMP(0) AS DATUM
FROM generate_series(-14, -1, 1) AS s
)
SELECT DATUM::DATE
FROM compras)
B
ON DATUM = VORHANDENES_DATUM'
INTO date,result;
RAISE NOTICE '%', result;
INSERT INTO SANDBOX.UPDATED_TODAY VALUES (TABLE_NAME, DATE, RESULT);
END;
$$ LANGUAGE plpgsql;
It is supposed to insert rows into the table SANDBOX.UPDATED_TODAY, which contains a table name, a date and a boolean.
The boolean shows whether there was an entry for that date in the table. The whole part inside EXECUTE ... INTO works fine on its own and gives me those days.
However, this code only inserts the first row of the query's result. What I want is for all 14 rows to get inserted. Obviously I need to change it into something like a loop, or something completely different, but how exactly would that work?
Side note: I removed some unnecessary parts regarding the two parameters you can see; they have nothing to do with the problem.

Put the INSERT statement inside the EXECUTE. You don't need the result of the SELECT for anything other than inserting it into that table, right? So just insert it directly as part of the same query:
CREATE OR REPLACE FUNCTION SANDBOX.DAILYVERIFY_DATE(TABLE_NAME regclass, DATE_DIFF INTEGER)
RETURNS void AS
$$
BEGIN
EXECUTE
'INSERT INTO SANDBOX.UPDATED_TODAY
SELECT ' || QUOTE_LITERAL(TABLE_NAME) || ', VORHANDENES_DATUM, CASE WHEN DATUM IS NULL THEN FALSE ELSE TRUE END
FROM (
SELECT DISTINCT DATE VORHANDENES_DATUM FROM ' || TABLE_NAME ||
' WHERE DATE > CURRENT_DATE -14-'||DATE_DIFF|| '
) A
RIGHT JOIN (
WITH compras AS (
SELECT ( NOW() + (s::TEXT || '' day'')::INTERVAL )::TIMESTAMP(0) AS DATUM
FROM generate_series(-14, -1, 1) AS s
)
SELECT DATUM::DATE
FROM compras
) B
ON DATUM = VORHANDENES_DATUM';
END;
$$ LANGUAGE plpgsql;
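A call then fills SANDBOX.UPDATED_TODAY with all 14 rows in a single statement ('sandbox.my_table' is just a placeholder for one of your real tables):
SELECT SANDBOX.DAILYVERIFY_DATE('sandbox.my_table', 0);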

The idiomatic way to loop through dynamic query results would be
FOR date, result IN
EXECUTE 'SELECT ...'
LOOP
INSERT INTO ...
END LOOP;
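Applied to your function, that loop might look like this (a sketch; the query string is the same one from the question, abbreviated here):
CREATE OR REPLACE FUNCTION SANDBOX.DAILYVERIFY_DATE(TABLE_NAME regclass, DATE_DIFF INTEGER)
RETURNS void AS $$
DECLARE
    RESULT BOOLEAN;
    DATE DATE;
BEGIN
    FOR date, result IN EXECUTE
        'SELECT VORHANDENES_DATUM, CASE WHEN DATUM IS NULL THEN FALSE ELSE TRUE END
         FROM ...' -- same dynamic query as in the question
    LOOP
        -- one INSERT per row of the dynamic query's result
        INSERT INTO SANDBOX.UPDATED_TODAY VALUES (TABLE_NAME, date, result);
    END LOOP;
END;
$$ LANGUAGE plpgsql;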

Related

Syntax error when using EXECUTE command in PostgreSQL function

I'm trying to write a function which executes a dynamically prepared SQL query and returns the result as a table.
I was referring to an SO answer which said to use LANGUAGE plpgsql, but even after using it I'm getting the same syntax error.
Below is the function code.
CREATE OR REPLACE FUNCTION public.execute_test(
ddsmappingids text,
totalnumberofrecords bigint,
skiprecords bigint,
pagesize integer,
cid bigint)
RETURNS TABLE(sampletime timestamp without time zone, jsonstring text, rowscount bigint)
LANGUAGE 'plpgsql'
COST 100
VOLATILE PARALLEL UNSAFE
ROWS 1000
AS $BODY$
DECLARE
table_names text:='dynamictable';
sql_query text;
dsids_array int[];
sub_query text;
BEGIN
if cid>=2 then
select 'dynamictable_'||cid into table_names;
end if;
select string_to_array(ddsmappingids, ',')::int[] into dsids_array;
sub_query:='SELECT * , COUNT(*) over() AS row_count FROM (SELECT start_time, cast(jsonb_object_agg(vw.dataseries_name, ROUND(CAST(o.dbl_value as numeric), 4)) as text) AS DataSeriesValue
FROM ' || quote_ident(table_names) ||' o
join vwdds vw on o.ddmid=vw.id    
where ddmid in (' || array_to_string(dsids_array,',') || ')
and o.dbl_value is not null
GROUP BY start_time
ORDER BY start_time desc
limit ' || totalnumberofrecords || ') t offset '|| skiprecords || ' rows fetch next ' || pagesize || ' rows only;';
RAISE NOTICE 'Temporary table created';
RETURN QUERY Execute sub_query;
END
$BODY$;
Error message when running the function:
The EXECUTE command works fine when I try it with some simple query, but with the query in the code snippet above it gives a syntax error.
Please help me find where I'm going wrong.

Using array in dynamic commands inside PL/pgSQL function

In a PL/pgSQL function, I am creating a view using the EXECUTE statement. The WHERE clause in the view takes as input some Jenkins job names. These job names are passed to the function as a comma-separated string. They are then converted to an array so that they can be used as an argument to ANY in the WHERE clause. See basic code below:
CREATE OR REPLACE FUNCTION FETCH_ALL_TIME_AGGR_KPIS(jobs VARCHAR)
RETURNS SETOF GenericKPI AS $$
DECLARE
job_names TEXT[];
BEGIN
job_names = string_to_array(jobs,',');
EXECUTE 'CREATE OR REPLACE TEMP VIEW dynamicView AS ' ||
'with pipeline_aggregated_kpis AS (
select
jenkins_build_parent_id,
sum (duration) as duration
from test_all_finished_job_builds_enhanced_view where job_name = ANY (' || array(select quote_ident(unnest(job_names))) || ') and jenkins_build_parent_id is not null
group by jenkins_build_parent_id)
select ' || quote_ident('pipeline-job') || ' as job_name, b1.jenkins_build_id, pipeline_aggregated_kpis.status, pipeline_aggregated_kpis.duration FROM job_builds_enhanced_view b1 INNER JOIN pipeline_aggregated_kpis ON (pipeline_aggregated_kpis.jenkins_build_parent_id = b1.jenkins_build_id)';
RETURN QUERY (select
count(*) as total_executions,
round(avg (duration) FILTER (WHERE status = 'SUCCESS')::numeric,2) as average_duration
from dynamicView);
END
$$
LANGUAGE plpgsql;
The creation of the function is successful but an error message is returned when I try to call the function. See below:
eea_ci_db=> select * from FETCH_ALL_TIME_AGGR_KPIS('integration,test');
ERROR: malformed array literal: ") and jenkins_build_parent_id is not null
group by jenkins_build_parent_id)
select "
LINE 7: ...| array(select quote_ident(unnest(job_names))) || ') and jen...
^
DETAIL: Array value must start with "{" or dimension information.
CONTEXT: PL/pgSQL function fetch_all_time_aggr_kpis(character varying) line 8 at EXECUTE
It seems like something is going wrong with the quotes and the passing of an array of strings. I tried all of the following options, with the same result:
where job_name = ANY (' || array(select quote_ident(unnest(job_names))) || ') and jenkins_build_parent_id is not null
or
where job_name = ANY (' || quote_ident(job_names)) || ') and jenkins_build_parent_id is not null
or
where job_name = ANY (' || job_names || ') and jenkins_build_parent_id is not null
Any ideas?
Thank you
There is no need for dynamic SQL at all. There isn't even a need for PL/pgSQL to do this:
CREATE OR REPLACE FUNCTION FETCH_ALL_TIME_AGGR_KPIS(jobs VARCHAR)
RETURNS SETOF GenericKPI
AS
$$
with pipeline_aggregated_kpis AS (
select jenkins_build_parent_id,
sum (duration) as duration
from test_all_finished_job_builds_enhanced_view
where job_name = ANY (string_to_array(jobs,','))
and jenkins_build_parent_id is not null
group by jenkins_build_parent_id
), dynamic_view as (
select "pipeline-job" as job_name,
b1.jenkins_build_id,
pipeline_aggregated_kpis.status,
pipeline_aggregated_kpis.duration
FROM job_builds_enhanced_view b1
JOIN pipeline_aggregated_kpis
ON pipeline_aggregated_kpis.jenkins_build_parent_id = b1.jenkins_build_id
)
select count(*) as total_executions,
round(avg (duration) FILTER (WHERE status = 'SUCCESS')::numeric,2) as average_duration
from dynamic_view;
$$
language sql;
You could do this with PL/pgSQL as well; you just need to use RETURN QUERY WITH ....
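A minimal, self-contained sketch of that RETURN QUERY WITH pattern (the function and its query are made up for illustration, not taken from your schema):
CREATE OR REPLACE FUNCTION sum_even_odd()
RETURNS TABLE (is_even boolean, total bigint) AS
$$
BEGIN
    RETURN QUERY
    WITH nums AS (
        SELECT g FROM generate_series(1, 10) AS g
    )
    SELECT g % 2 = 0, sum(g)  -- sum(int) returns bigint
    FROM nums
    GROUP BY g % 2 = 0;
END
$$ LANGUAGE plpgsql;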

Resolve PgAttribute and Where Conditional Clause

What I'm trying to do:
- Modular function querying for a time range of any table I specify (pg_attribute, with the table name as string input)
- Chain multiple conditions as function inputs in the WHERE clause, after ::regclass is called
- Reference the attrelid = _tablename::regclass in the chained WHERE clauses
CREATE OR REPLACE FUNCTION the_func(
rngstart timestamptz,
rngend timestamptz,
_tablename VARCHAR(16),
_id INT
) AS $$
SELECT *
FROM pg_attribute
WHERE attrelid = _tablename::regclass
AND id = _id
AND time > rngstart
AND time <= rngend
$$ LANGUAGE sql STABLE;
You have to write a PL/pgSQL function and use dynamic SQL, something along the lines of
BEGIN
RETURN QUERY EXECUTE format(
'SELECT ... FROM %I '
'WHERE id = $1 '
'AND time > $2 AND time <= $3',
_tablename::text)
USING _id, rngstart, rngend;
END;
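Fleshed out into a complete function, it might look like this (a sketch under assumptions: the target tables have id and time columns, and only those two are returned; adjust the column list and return type to your tables):
CREATE OR REPLACE FUNCTION the_func(
    rngstart timestamptz,
    rngend timestamptz,
    _tablename VARCHAR(16),
    _id INT
) RETURNS TABLE (id int, "time" timestamptz) AS $$
BEGIN
    -- %I safely quotes the table name; values are passed via USING
    RETURN QUERY EXECUTE format(
        'SELECT id, time FROM %I '
        'WHERE id = $1 '
        'AND time > $2 AND time <= $3',
        _tablename::text)
    USING _id, rngstart, rngend;
END;
$$ LANGUAGE plpgsql STABLE;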

postgres lag and window to create cohort table [duplicate]

I am trying to create crosstab queries in PostgreSQL such that the crosstab columns are generated automatically instead of being hardcoded. I have written a function that dynamically generates the column list that I need for my crosstab query. The idea is to substitute the result of this function in the crosstab query using dynamic SQL.
I know how to do this easily in SQL Server, but my limited knowledge of PostgreSQL is hindering my progress here. I was thinking of storing the result of function that generates the dynamic list of columns into a variable and use that to dynamically build the sql query. It would be great if someone could guide me regarding the same.
-- Table which has be pivoted
CREATE TABLE test_db
(
kernel_id int,
key int,
value int
);
INSERT INTO test_db VALUES
(1,1,99),
(1,2,78),
(2,1,66),
(3,1,44),
(3,2,55),
(3,3,89);
-- This function dynamically returns the list of columns for crosstab
CREATE FUNCTION test() RETURNS TEXT AS '
DECLARE
key_id int;
text_op TEXT = '' kernel_id int, '';
BEGIN
FOR key_id IN SELECT DISTINCT key FROM test_db ORDER BY key LOOP
text_op := text_op || key_id || '' int , '' ;
END LOOP;
text_op := text_op || '' DUMMY text'';
RETURN text_op;
END;
' LANGUAGE 'plpgsql';
-- This query works. I just need to convert the static list
-- of crosstab columns to be generated dynamically.
SELECT * FROM
crosstab
(
'SELECT kernel_id, key, value FROM test_db ORDER BY 1,2',
'SELECT DISTINCT key FROM test_db ORDER BY 1'
)
AS x (kernel_id int, key1 int, key2 int, key3 int); -- How can I replace ..
-- .. this static list with a dynamically generated list of columns ?
You can use the provided C function crosstab_hash for this.
The manual is not very clear in this respect. It's mentioned at the end of the chapter on crosstab() with two parameters:
You can create predefined functions to avoid having to write out the
result column names and types in each query. See the examples in the
previous section. The underlying C function for this form of crosstab
is named crosstab_hash.
For your example:
CREATE OR REPLACE FUNCTION f_cross_test_db(text, text)
RETURNS TABLE (kernel_id int, key1 int, key2 int, key3 int)
AS '$libdir/tablefunc','crosstab_hash' LANGUAGE C STABLE STRICT;
Call:
SELECT * FROM f_cross_test_db(
'SELECT kernel_id, key, value FROM test_db ORDER BY 1,2'
,'SELECT DISTINCT key FROM test_db ORDER BY 1');
Note that you need to create a distinct crosstab_hash function for every crosstab function with a different return type.
Related:
PostgreSQL row to columns
Your function to generate the column list is rather convoluted, and the result is incorrect (int missing after kernel_id). It can be replaced with this SQL query:
SELECT 'kernel_id int, '
|| string_agg(DISTINCT key::text, ' int, ' ORDER BY key::text)
|| ' int, DUMMY text'
FROM test_db;
And it cannot be used dynamically anyway.
@erwin-brandstetter: The return type of the function isn't an issue if you're always returning a JSON type with the converted results.
Here is the function I came up with:
CREATE OR REPLACE FUNCTION report.test(
i_start_date TIMESTAMPTZ,
i_end_date TIMESTAMPTZ,
i_interval INT
) RETURNS TABLE (
tab JSON
) AS $ab$
DECLARE
_key_id TEXT;
_text_op TEXT = '';
_ret JSON;
BEGIN
-- SELECT DISTINCT for query results
FOR _key_id IN
SELECT DISTINCT at_name
FROM report.company_data_date cd
JOIN report.company_data_amount cda ON cd.id = cda.company_data_date_id
JOIN report.amount_types at ON cda.amount_type_id = at.id
WHERE date_start BETWEEN i_start_date AND i_end_date
AND interval_type_id = i_interval
LOOP
-- build function_call with datatype of column
IF char_length(_text_op) > 1 THEN
_text_op := _text_op || ', ' || _key_id || ' NUMERIC(20,2)';
ELSE
_text_op := _text_op || _key_id || ' NUMERIC(20,2)';
END IF;
END LOOP;
-- build query with parameter filters
RETURN QUERY
EXECUTE '
SELECT array_to_json(array_agg(row_to_json(t)))
FROM (
SELECT * FROM crosstab(''SELECT date_start, at.at_name, cda.amount ct
FROM report.company_data_date cd
JOIN report.company_data_amount cda ON cd.id = cda.company_data_date_id
JOIN report.amount_types at ON cda.amount_type_id = at.id
WHERE date_start between $$' || i_start_date::TEXT || '$$ AND $$' || i_end_date::TEXT || '$$
AND interval_type_id = ' || i_interval::TEXT || ' ORDER BY date_start'')
AS ct (date_start timestamptz, ' || _text_op || ')
) t;';
END;
$ab$ LANGUAGE 'plpgsql';
So, when you run it, you get the dynamic results in JSON, and you don't need to know how many values were pivoted:
select * from report.test(now()- '1 week'::interval, now(), 1);
tab
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[{"date_start":"2015-07-27T08:40:01.277556-04:00","burn_rate":0.00,"monthly_revenue":5800.00,"cash_balance":0.00},{"date_start":"2015-07-27T08:50:02.458868-04:00","burn_rate":34000.00,"monthly_revenue":15800.00,"cash_balance":24000.00}]
(1 row)
Edit: If you have mixed datatypes in your crosstab, you can add logic to look up the type for each column with something like this:
SELECT a.attname AS column_name, format_type(a.atttypid, a.atttypmod) AS data_type
FROM pg_attribute a
JOIN pg_class b ON a.attrelid = b.oid  -- attrelid references pg_class.oid, not relfilenode
JOIN pg_catalog.pg_namespace n ON n.oid = b.relnamespace
WHERE n.nspname = $$schema_name$$ AND b.relname = $$table_name$$ AND a.attstattarget = -1;
I realise this is an older post, but I struggled for a little while on the same issue.
My problem statement:
I had a table with multiple values in a field and wanted to create a crosstab query with 40+ column headings per row.
My solution was to create a function which looped through the table column to grab the values that I wanted to use as column headings within the crosstab query.
Within this function I could then create the crosstab query. In my use case I added the crosstab result into a separate table.
E.g.
CREATE OR REPLACE FUNCTION field_values_ct ()
RETURNS VOID AS $$
DECLARE
rec RECORD;
str text;
BEGIN
str := '"Issue ID" text,';
-- looping to get column heading string
FOR rec IN SELECT DISTINCT field_name
FROM issue_fields
ORDER BY field_name
LOOP
str := str || '"' || rec.field_name || '" text' ||',';
END LOOP;
str := substring(str, 0, length(str)); -- drop the trailing comma
EXECUTE 'CREATE EXTENSION IF NOT EXISTS tablefunc;
DROP TABLE IF EXISTS temp_issue_fields;
CREATE TABLE temp_issue_fields AS
SELECT *
FROM crosstab(''select issue_id, field_name, field_value from issue_fields order by 1'',
''SELECT DISTINCT field_name FROM issue_fields ORDER BY 1'')
AS final_result ('|| str ||')';
END;
$$ LANGUAGE plpgsql;
The approach described here worked well for me: instead of retrieving the pivot table directly, let a function generate the SQL query string, then execute the resulting string on demand.
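For the test_db example above, a sketch of that two-step approach (the helper name test_db_crosstab_sql is made up):
CREATE OR REPLACE FUNCTION test_db_crosstab_sql()
RETURNS text
LANGUAGE sql AS
$$
-- build the full crosstab statement, with one "keyN int" per distinct key
SELECT 'SELECT * FROM crosstab(
    ''SELECT kernel_id, key, value FROM test_db ORDER BY 1,2'',
    ''SELECT DISTINCT key FROM test_db ORDER BY 1''
) AS x (kernel_id int, '
    || string_agg(DISTINCT 'key' || key::text || ' int', ', ' ORDER BY 'key' || key::text || ' int')
    || ')'
FROM test_db;
$$;
-- in psql, generate the statement and execute it in one go:
SELECT test_db_crosstab_sql() \gexec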

Self-managing PostgreSQL partition tables

I am trying to make a self-managing partition table setup with Postgres. It all revolves around this function but I can't seem to get Postgres to accept my table names. Any ideas or examples of self-managing partition table trigger functions?
My current function:
DECLARE
day integer;
year integer;
tablename text;
startdate text;
enddate text;
BEGIN
day:=date_part('doy',to_timestamp(NEW.date));
year:=date_part('year',to_timestamp(NEW.date));
tablename:='pings_'||year||'_'||day||'_'||NEW.id;
-- RAISE EXCEPTION 'tablename=%',tablename;
PERFORM 'tablename' FROM pg_tables WHERE 'schemaname'=tablename;
-- RAISE EXCEPTION 'found=%',FOUND;
IF FOUND <> TRUE THEN
startdate:=date_part('year',to_timestamp(NEW.date))||'-'||date_part('month',to_timestamp(NEW.date))||'-'||date_part('day',to_timestamp(NEW.date));
enddate:=startdate::timestamp + INTERVAL '1 day';
EXECUTE 'CREATE TABLE $1 (
CHECK ( date >= DATE $2 AND date < DATE $3 )
) INHERITS (pings)' USING quote_ident(tablename),startdate,enddate;
END IF;
EXECUTE 'INSERT INTO $1 VALUES (NEW.*)' USING quote_ident(tablename);
RETURN NULL;
END;
I want it to auto-create a table called pings_YEAR_DOY_ID but it always fails with:
2011-10-24 13:39:04 CDT [15804]: [1-1] ERROR: invalid input syntax for type double precision: "-" at character 45
2011-10-24 13:39:04 CDT [15804]: [2-1] QUERY: SELECT date_part('year',to_timestamp( $1 ))+'-'+date_part('month',to_timestamp( $2 ))+'-'+date_part('day',to_timestamp( $3 ))
2011-10-24 13:39:04 CDT [15804]: [3-1] CONTEXT: PL/pgSQL function "ping_partition" line 15 at assignment
2011-10-24 13:39:04 CDT [15804]: [4-1] STATEMENT: INSERT INTO pings VALUES (0,0,5);
TRY 2
After applying the changes and modifying it some more (date is a Unix timestamp column; my thinking being that an integer column is faster than a timestamp column when selecting), I get the error below. I'm not sure if I am using the proper syntax for USING NEW.
Updated function:
CREATE FUNCTION ping_partition() RETURNS trigger
LANGUAGE plpgsql
AS $_$DECLARE
day integer;
year integer;
tablename text;
startdate text;
enddate text;
BEGIN
day:=date_part('doy',to_timestamp(NEW.date));
year:=date_part('year',to_timestamp(NEW.date));
tablename:='pings_'||year||'_'||day||'_'||NEW.id;
-- RAISE EXCEPTION 'tablename=%',tablename;
PERFORM 'tablename' FROM pg_tables WHERE 'schemaname'=tablename;
-- RAISE EXCEPTION 'found=%',FOUND;
IF FOUND <> TRUE THEN
startdate := to_char(to_timestamp(NEW.date), 'YYYY-MM-DD');
enddate:=startdate::timestamp + INTERVAL '1 day';
EXECUTE 'CREATE TABLE ' || quote_ident(tablename) || ' (
CHECK ( date >= EXTRACT(EPOCH FROM DATE ' || quote_literal(startdate) || ')
AND date < EXTRACT(EPOCH FROM DATE ' || quote_literal(enddate) || ') )
) INHERITS (pings)';
END IF;
EXECUTE 'INSERT INTO ' || quote_ident(tablename) || ' SELECT $1' USING NEW;
RETURN NULL;
END;
$_$;
My statement:
INSERT INTO pings VALUES (0,0,5);
SQL error:
ERROR: column "date" is of type integer but expression is of type pings
LINE 1: INSERT INTO pings_1969_365_0 SELECT $1
^
HINT: You will need to rewrite or cast the expression.
QUERY: INSERT INTO pings_1969_365_0 SELECT $1
CONTEXT: PL/pgSQL function "ping_partition" line 22 at EXECUTE statement
Note: Since Postgres 10, declarative partitioning is typically superior to partitioning by inheritance, as used in this case.
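For reference, the declarative equivalent needs no trigger at all; a minimal sketch (table and partition names are just examples, with the column simplified to type date):
CREATE TABLE pings (ping_id int, the_date date) PARTITION BY RANGE (the_date);
CREATE TABLE pings_2011_10_25 PARTITION OF pings
    FOR VALUES FROM ('2011-10-25') TO ('2011-10-26');
-- rows are routed to the matching partition automatically:
INSERT INTO pings VALUES (1, '2011-10-25');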
You are mixing the double precision output of date_part() with the text '-'. That doesn't make sense to PostgreSQL; you would need an explicit cast to text. But there is a much simpler way to do all of this. Instead of:
startdate:=date_part('year',to_timestamp(NEW.date))
||'-'||date_part('month',to_timestamp(NEW.date))
||'-'||date_part('day',to_timestamp(NEW.date));
Use instead:
startdate := to_char(NEW.date, 'YYYY-MM-DD');
This makes no sense either:
EXECUTE 'CREATE TABLE $1 (
CHECK (date >= DATE $2 AND date < DATE $3 )
) INHERITS (pings)' USING quote_ident(tablename),startdate,enddate;
You can only supply values with the USING clause. Read the manual here. Try instead:
EXECUTE 'CREATE TABLE ' || quote_ident(tablename) || ' (
CHECK ("date" >= ''' || startdate || ''' AND
"date" < ''' || enddate || '''))
INHERITS (ping)';
Or better yet, use format(). See below.
Also, like @a_horse answered: you need to put your text values in single quotes.
Similar here:
EXECUTE 'INSERT INTO $1 VALUES (NEW.*)' USING quote_ident(tablename);
Instead:
EXECUTE 'INSERT INTO ' || quote_ident(tablename) || ' VALUES ($1.*)'
USING NEW;
Related answer:
How to dynamically use TG_TABLE_NAME in PostgreSQL 8.2?
Aside: While "date" is allowed for a column name in PostgreSQL it is a reserved word in every SQL standard. Don't name your column "date", it leads to confusing syntax errors.
Complete working demo
CREATE TABLE ping (ping_id integer, the_date date);
CREATE OR REPLACE FUNCTION trg_ping_partition()
RETURNS trigger
LANGUAGE plpgsql SET client_min_messages = 'WARNING' AS
$func$
DECLARE
_schema text := 'public'; -- double-quoted if necessary
_tbl text := to_char(NEW.the_date, '"ping_"YYYY_DDD_') || NEW.ping_id;
BEGIN
EXECUTE format('CREATE TABLE IF NOT EXISTS %1$s.%2$s
(CHECK (the_date >= %3$L
AND the_date < %4$L)) INHERITS (%1$s.ping)'
, _schema -- %1$s
, _tbl -- %2$s -- legal(!) name needs no quotes
, to_char(NEW.the_date, 'YYYY-MM-DD') -- %3$L
, to_char(NEW.the_date + 1, 'YYYY-MM-DD') -- %4$L
);
EXECUTE 'INSERT INTO ' || _tbl || ' VALUES ($1.*)'
USING NEW;
RETURN NULL;
END
$func$;
CREATE TRIGGER insbef
BEFORE INSERT ON ping
FOR EACH ROW EXECUTE FUNCTION trg_ping_partition();
Postgres 9.1 added the clause IF NOT EXISTS for CREATE TABLE. See:
PostgreSQL create table if not exists
Postgres 11 added the more appropriate syntax variant EXECUTE FUNCTION for triggers. Use EXECUTE PROCEDURE in older versions.
See:
Trigger uses a procedure or a function?
to_char() can take a date as $1. That's converted to timestamp automatically. See:
The manual on date / time functions.
I SET client_min_messages = 'WARNING' for the scope of the function to silence the flood of notices that would otherwise be raised on conflict by IF NOT EXISTS.
Multiple other simplifications and improvements. Compare the code.
Tests:
INSERT INTO ping VALUES (1, now()::date);
INSERT INTO ping VALUES (2, now()::date);
INSERT INTO ping VALUES (2, now()::date + 1);
INSERT INTO ping VALUES (2, now()::date + 1);
Dynamic partitioning in PostgreSQL is just a bad idea. Your code is not safe in a multi-user environment: to make it safe you would have to use locks, which slows down execution. The optimal number of partitions is about one hundred, and you can easily create that many well in advance, which dramatically simplifies the logic necessary for partitioning.
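Pre-creating inheritance partitions, say one per day for the next 100 days, could look like this (a sketch; it assumes a parent table pings with a date column of type date, unlike the epoch integer used above):
DO $$
DECLARE
    d date;
BEGIN
    FOR d IN SELECT generate_series(current_date, current_date + 99, interval '1 day')::date LOOP
        -- %I quotes the table name, %L the date literals
        EXECUTE format(
            'CREATE TABLE IF NOT EXISTS %I
             (CHECK ("date" >= %L AND "date" < %L)) INHERITS (pings)',
            'pings_' || to_char(d, 'YYYY_DDD'), d, d + 1);
    END LOOP;
END
$$;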
You need to put your date literals in single quotes. Currently you are executing something like this:
CHECK ( date >= DATE 2011-10-25 AND date < DATE 2011-11-25 )
which is invalid. In this case 2011-10-25 is interpreted as 2011 minus 10 minus 25
Your code needs to create the SQL using single quotes around the date literal:
CHECK ( date >= DATE '2011-10-25' AND date < DATE '2011-11-25' )
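format() with %L (or quote_literal()) takes care of that quoting for you, e.g.:
SELECT format('CHECK (date >= DATE %L AND date < DATE %L)',
              DATE '2011-10-25', DATE '2011-11-25');
-- CHECK (date >= DATE '2011-10-25' AND date < DATE '2011-11-25')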
I figured out the whole thing and it works great; it even auto-deletes tables after 30 days. I hope this helps out future people looking for an auto-partitioning trigger function.
CREATE FUNCTION ping_partition() RETURNS trigger
LANGUAGE plpgsql
AS $_$
DECLARE
_keepdate text;
_tablename text;
_startdate text;
_enddate text;
_result record;
BEGIN
_keepdate:=to_char(to_timestamp(NEW.date) - interval '30 days', 'YYYY-MM-DD');
_startdate := to_char(to_timestamp(NEW.date), 'YYYY-MM-DD');
_tablename:='pings_'||NEW.id||'_'||_startdate;
PERFORM 1
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
AND c.relname = _tablename
AND n.nspname = 'pinglog';
IF NOT FOUND THEN
_enddate:=_startdate::timestamp + INTERVAL '1 day';
EXECUTE 'CREATE TABLE pinglog.' || quote_ident(_tablename) || ' (
CHECK ( date >= EXTRACT(EPOCH FROM DATE ' || quote_literal(_startdate) || ')
AND date < EXTRACT(EPOCH FROM DATE ' || quote_literal(_enddate) || ')
AND id = ' || quote_literal(NEW.id) || '
)
) INHERITS (pinglog.pings)';
EXECUTE 'CREATE INDEX ' || quote_ident(_tablename||'_indx1') || ' ON pinglog.' || quote_ident(_tablename) || ' USING btree (microseconds) WHERE microseconds IS NULL';
EXECUTE 'CREATE INDEX ' || quote_ident(_tablename||'_indx2') || ' ON pinglog.' || quote_ident(_tablename) || ' USING btree (date, id)';
EXECUTE 'CREATE INDEX ' || quote_ident(_tablename||'_indx3') || ' ON pinglog.' || quote_ident(_tablename) || ' USING btree (date, id, microseconds) WHERE microseconds IS NULL';
END IF;
EXECUTE 'INSERT INTO pinglog.' || quote_ident(_tablename) || ' VALUES ($1.*)' USING NEW;
FOR _result IN SELECT * FROM pg_tables WHERE schemaname='pinglog' LOOP
IF char_length(substring(_result.tablename from '[0-9-]*$')) <> 0 AND (to_timestamp(NEW.date) - interval '30 days') > to_timestamp(substring(_result.tablename from '[0-9-]*$'),'YYYY-MM-DD') THEN
-- RAISE EXCEPTION 'timestamp=%,table=%,found=%',to_timestamp(substring(_result.tablename from '[0-9-]*$'),'YYYY-MM-DD'),_result.tablename,char_length(substring(_result.tablename from '[0-9-]*$'));
-- could have it check for non-existant ids as well, or for archive bit and only delete if the archive bit is not set
EXECUTE 'DROP TABLE pinglog.' || quote_ident(_result.tablename);
END IF;
END LOOP;
RETURN NULL;
END;
$_$;