Raise error when date is not valid - postgresql

What I'm trying to do is raise an out-of-range error for dates outside the supported range, the way a typecast does. I'm using PostgreSQL 9.1.6 on CentOS. The issue is shown below:
postgres=# select to_date('20130229','yyyymmdd');
to_date
------------
2013-03-01
(1 row)
But the output I want to see is:
postgres=# select '20130229'::date;
ERROR: date/time field value out of range: "20130229"
Surfing the web, I found an informative page, so I added an IS_VALID_JULIAN check to the body of to_date(), i.e. the four lines marked with + below in formatting.c:
Datum
to_date(PG_FUNCTION_ARGS)
{
    text       *date_txt = PG_GETARG_TEXT_P(0);
    text       *fmt = PG_GETARG_TEXT_P(1);
    DateADT     result;
    struct pg_tm tm;
    fsec_t      fsec;

    do_to_timestamp(date_txt, fmt, &tm, &fsec);

+   if (!IS_VALID_JULIAN(tm.tm_year, tm.tm_mon, tm.tm_mday))
+       ereport(ERROR,
+               (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
+                errmsg("date out of range: \"%s\"", text_to_cstring(date_txt))));

    result = date2j(tm.tm_year, tm.tm_mon, tm.tm_mday) - POSTGRES_EPOCH_JDATE;

    PG_RETURN_DATEADT(result);
}
Then I rebuilt PostgreSQL:
pg_ctl -m fast stop # 1. stopping pgsql
vi src/backend/utils/adt/formatting.c # 2. using the version above
rm -rf /usr/local/pgsql/* # 3. getting rid of all bin files
./configure --prefix=/usr/local/pgsql \
    --enable-nls --with-perl --with-libxml \
    --with-pam --with-openssl
make && make install # 4. rebuilding source
pg_ctl start # 5. starting the engine
My bin directory info is below.
[/home/postgres]echo $PATH
/usr/lib64/qt-3.3/bin:
/usr/local/bin:
/bin:
/usr/bin:
/usr/local/sbin:
/usr/sbin:
/sbin:
/home/postgres/bin:
/usr/bin:
/usr/local/pgsql/bin:
/usr/local/pgpool/bin:
/usr/local/pgtop/bin/pg_top:
[/home/postgres]which pg_ctl
/usr/local/pgsql/bin/pg_ctl
[/home/postgres]which postgres
/usr/local/pgsql/bin/postgres
[/usr/local/bin]which psql
/usr/local/pgsql/bin/psql
But upon checking to_date again, the result remained the same.
postgres=# select to_date('20130229','yyyymmdd');
to_date
------------
2013-03-01
(1 row)
Is there anything I missed?

You can write your own to_date() function, but you have to call it with its schema-qualified name. (I used the schema "public", but there's nothing special about that.)
create or replace function public.to_date(any_date text, format_string text)
returns date as
$$
select to_date((any_date::date)::text, format_string);
$$
language sql;
Using the bare function name executes the native to_date() function.
select to_date('20130229', 'yyyymmdd');
2013-03-01
Using the schema-qualified name executes the user-defined function.
select public.to_date('20130229', 'yyyymmdd');
ERROR: date/time field value out of range: "20130229"
SQL state: 22008
I know that's not quite what you're looking for. But . . .
It's simpler than rebuilding PostgreSQL from source.
Fixing up your existing SQL and PL/pgSQL source code is a simple search-and-replace with a stream editor. I'm pretty sure that can't go wrong, as long as you really want every use of the native to_date() to be public.to_date().
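A rough sketch of that search-and-replace, assuming the scripts live under a sql/ directory, GNU sed is available, and no call is schema-qualified yet (running it twice would double-qualify calls):
# Replace every bare to_date( with public.to_date( in all .sql files under sql/.
# sed -i edits the files in place; take a backup or rely on version control.
find sql/ -name '*.sql' -print0 | xargs -0 sed -i 's/\bto_date(/public.to_date(/g'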
The native to_date() function will still work as designed. Extensions and other code might rely on its somewhat peculiar behavior. Think hard and long before you change the behavior of native functions.
New SQL and PL/pgSQL would need to be reviewed, though. I wouldn't expect developers to remember to write public.to_date() every time. If you use version control, you might be able to write a pre-commit hook to make sure only public.to_date() is used.
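For example, a rough sketch of such a hook (saved as .git/hooks/pre-commit and made executable; it assumes a Git repository, and the pattern is only illustrative):
#!/bin/sh
# Reject the commit if any .sql file touched by the commit still calls the
# bare to_date(). Note this checks the files as they exist on disk.
files=$(git diff --cached --name-only --diff-filter=ACM -- '*.sql')
[ -z "$files" ] && exit 0
if echo "$files" | xargs grep -nE '(^|[^.[:alnum:]_])to_date\(' ; then
    echo "Commit rejected: use public.to_date() instead of the bare to_date()." >&2
    exit 1
fi
exit 0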
The native to_date() function has behavior I don't see documented. Not only can you call it with February 29, you can call it with February 345, or February 9999.
select to_date('201302345', 'yyyymmdd');
2014-01-11
select to_date('2013029999', 'yyyymmdd');
2040-06-17

Related

Using \set variables in psql cli working in normal queries but not working/expanding in \copy

As described in the title, the issue is that psql variables set using \set work for me, except when used inside the \copy meta-command provided by the psql client.
Is there some special syntax required to reference a psql variable inside a \copy? Or am I out of luck? And if so, is this documented anywhere?
I couldn't find this issue on Stack Overflow or documented anywhere. I looked at ~20 posts but found nothing. I also checked the documentation on \copy for PostgreSQL 11 (the version of the CLI) and saw no caveats about this; I searched the page for "variable" and found nothing related, and likewise for "expansion" and "expand". So now I'm here asking for help...
The version of PostgreSQL client is 11.10 with whatever downstream patches Debian applied:
psql (PostgreSQL) 11.10 (Debian 11.10-1.pgdg100+1)
I'm pretty sure the server version has little to no relevance, but just to be thorough, the server is version 10.13 as shipped by Ubuntu:
psql (PostgreSQL) 10.13 (Ubuntu 10.13-1.pgdg16.04+1)
Reproducing
I'm aware of the difference between \copy and COPY (one being implemented as a feature of the psql client, the other being a server feature executing in the context of the server process), and for this task what I need to use is definitely \copy.
The standard query that shows I'm setting and referencing the variables correctly:
[local:/tmp]:5432 dbuser#dbdev# \set var_tname ag_test
[local:/tmp]:5432 dbuser#dbdev# \set var_cname fname
[local:/tmp]:5432 dbuser#dbdev# SELECT * from :var_tname WHERE :var_cname = 'TestVal' LIMIT 1;
fname|lname|score|nonce
TestVal|C|100|b
(1 row)
Time: 88.786 ms
The failing cases seem to fail because the variables are referenced inside \copy; I don't see any other difference between them and the working example:
[local:/tmp]:5432 dbuser#dbdev# \set var_tname ag_test
[local:/tmp]:5432 dbuser#dbdev# \set var_cname fname
[local:/tmp]:5432 dbuser#dbdev# \copy (SELECT * from :var_tname WHERE :var_cname = 'TestVal' LIMIT 1) TO 'testvar.csv';
ERROR: syntax error at or near ":"
LINE 1: COPY ( SELECT * from :var_tname WHERE :var_cname = 'TestVal...
^
Time: 193.322 ms
Obviously, based on the error, the expansion is not happening and the query is trying to reference a table with the literal name :var_tname.
I wasn't expecting quoting to help with this, but I tried it just in case; who knows, it could be a bizarre exception, right? Unsurprisingly, it's of no help either:
[local:/tmp]:5432 dbuser#dbdev# \copy (SELECT * from :'var_tname' WHERE :var_cname = 'TestVal' LIMIT 1) TO 'testvar.csv';
ERROR: syntax error at or near ":"
LINE 1: COPY ( SELECT * from : 'var_tname' WHERE :var_cname = 'Test...
^
Time: 152.407 ms
[local:/tmp]:5432 dbuser#dbdev# \set var_tname 'ag_test'
[local:/tmp]:5432 dbuser#dbdev# \copy (SELECT * from :var_tname WHERE :var_cname = 'TestVal' LIMIT 1) TO 'testvar.csv';
ERROR: syntax error at or near ":"
LINE 1: COPY ( SELECT * from :var_tname WHERE :var_cname = 'TestVal...
^
Time: 153.001 ms
[local:/tmp]:5432 dbuser#dbdev# \copy (SELECT * from :'var_tname' WHERE :var_cname = 'TestVal' LIMIT 1) TO 'testvar.csv';
ERROR: syntax error at or near ":"
LINE 1: COPY ( SELECT * from : 'var_tname' WHERE :var_cname = 'Test...
^
Time: 153.459 ms
I also tried setting the variables with single quotes (which is probably a best practice anyway) but this made no difference:
[local:/tmp]:5432 dbuser#dbdev# \set var_tname 'ag_test'
[local:/tmp]:5432 dbuser#dbdev# \set var_cname 'fname'
... <same behavior as above> ...
Is variable expansion just not supported inside \copy? If so, that seems like a really crummy limitation, and it doesn't seem to be documented.
One last thing to add, as I expect someone may ask: there is a reason I'm not implementing these as functions or stored procedures. First, my version of PostgreSQL doesn't support stored procedures at all, and it doesn't support transactions in functions either. Even if it did, the real reason I want these queries in psql files in the application repository is that they are very easy to read for code reviews, easy to maintain during development, and act as documentation.
It's not necessary to read beyond this point unless you also have this problem and want ideas for workarounds.
Beyond this I've documented a bunch of workarounds I could quickly think of; the problem can be worked around 1001 different ways. But if there's a solution to this quirky behavior that lets me stick with it, I would much rather know about it than apply any workarounds. I also added use-case information below, as it's not unheard of for responses to be along the lines of "why are you doing this? Just don't use xyz feature, problem solved!". I'm hoping not to receive any of those responses :>
Thanks for anyone willing to help out!
Options for Workarounds
I have plenty of options for workarounds, but I really would like to understand why this doesn't work, whether it's documented somewhere, or whether there is some special way to cause the expansion to occur inside \copy, to avoid needing to change anything, for reasons I explain below in the Use-Case section.
Here are the workarounds I've come up with...
SELECT into a temporary table using the variables, then \copy that fixed-name table
SELECT * INTO tmp_table FROM :var_tname WHERE :var_cname = 'TestVal' LIMIT 1;
\copy (SELECT * FROM tmp_table) TO 'testvar.csv'
That works, but it's kind of clunky and seems like it shouldn't be necessary.
Produce a TSV using \pset fieldsep and redirect stdout to the file (clunky, may have escaping issues)
The other option would be not using \copy and piping stdout to a file after setting the delimiter to tab:
[local:/tmp]:5432 dbuser#dbdev# \set var_tname ag_test
[local:/tmp]:5432 dbuser#dbdev# \pset format unaligned
Output format is unaligned.
[local:/tmp]:5432 dbuser#dbdev# \pset fieldsep '\t'
Field separator is " ".
[local:/tmp]:5432 dbuser#dbdev# SELECT * from :var_tname LIMIT 1;
fname lname score nonce
TestVal G 500 a
(1 row)
Time: 91.596 ms
[local:/tmp]:5432 dbuser#dbdev#
This could be invoked via psql -f query.psql > /the/output/path.tsv. I haven't checked yet, but I'm assuming that should produce a valid TSV file. The one thing I'm not sure of is whether it will correctly escape or quote column values that contain tabs, like \copy or COPY would.
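For what it's worth, the same output settings can also be passed on psql's command line instead of via \pset inside the file (a sketch, not verified against this particular psql build):
# -A selects unaligned output, -F sets the field separator, and -P footer=off
# keeps the "(1 row)" footer out of the redirected file.
psql -A -F $'\t' -P footer=off -f query.psql > /the/output/path.tsv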
Do the expansion in a shell script and write to a temporary psql file, then use psql -f tmp.psql
The final workaround would be setting the variables in a shell script and invoking psql -c "$shellvar", or writing the shell-expanded query to a temporary .psql file, invoking it with psql -f, and then deleting the temporary file.
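A minimal sketch of that approach, using the test table and column names from above (the output path is just an example):
#!/bin/bash
# Expand the names in the shell, then hand the finished \copy command to psql -c.
tname=ag_test
cname=fname
psql -c "\copy (SELECT * FROM ${tname} WHERE ${cname} = 'TestVal' LIMIT 1) TO 'testvar.csv'"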
The Use-Case (and why I don't particularly like some of the workarounds)
I should probably mention the use case... I have several separate (but related) Python applications that collect, parse, and process data and then load it into the database using psycopg2. Once the raw data is in the database, I delegate a bunch of heavier logic to psql files, for readability and to reduce the amount of code that needs to be maintained.
The psql files are invoked at completion of the application using something like this:
for psql_file in glob.glob(os.path.expandvars("$VIRTUAL_ENV/etc/<appname>/psql-post.d/*.psql")):
    subprocess.call([which('psql'), '-f', psql_file])
One of the reasons I want to use variables for the table names (and some column names) is that the database is currently being refactored/rebuilt, so the table names and a few column names are going to be renamed over time. Because some of the .psql scripts are quite extensive, the table names are referenced quite a few times in them, so it just makes more sense to set them once at the top using \set; as each table gets changed in the database, only one change in each psql file is required. There may also be minor changes in the future that make this approach better than one where you need to search and replace 10-15 instances of various column or table names.
One last workaround I don't really want to use: templating psql files from Python
I realize I could use some home-grown templating or Jinja2 directly from the Python code to dynamically generate the psql files from templates as well. But I much prefer to have pure psql in the files, as it makes the project much more readable and editable for those who may need to perform a code review or take over maintenance of the projects in the future. It's also easier for me to work with. Obviously there are tons of options for workarounds once we start talking about doing this from within Python using queries via psycopg2, but having the .psql files in the same relative directory of each project repository serves a very useful purpose.
It seems to be a parsing issue with \copy.
UPDATE: Actually a documented behavior:
https://www.postgresql.org/docs/current/app-psql.html
\copy
...
Unlike most other meta-commands, the entire remainder of the line is always taken to be the arguments of \copy, and neither variable interpolation nor backquote expansion are performed in the arguments.
Tip
Another way to obtain the same result as \copy ... to is to use the SQL COPY ... TO STDOUT command and terminate it with \g filename or \g |program. Unlike \copy, this method allows the command to span multiple lines; also, variable interpolation and backquote expansion can be used.
\set var_tname 'cell_per'
\copy (select * from :var_tname) to stdout WITH (FORMAT CSV, HEADER);
ERROR: syntax error at or near ":"
LINE 1: COPY ( select * from :var_tname ) TO STDOUT WITH (FORMAT CS...
\copy (select * from :"var_tname") to stdout WITH (FORMAT CSV, HEADER);
ERROR: syntax error at or near ":"
LINE 1: COPY ( select * from : "var_tname" ) TO STDOUT WITH (FORMAT...
-- Note the added space in the error message when a variable is used as the
-- table name.
copy (select * from :var_tname) to stdout WITH (FORMAT CSV, HEADER);
copy (select * from :"var_tname") to stdout WITH (FORMAT CSV, HEADER);
-- Using COPY directly works, since variable interpolation does happen for
-- plain SQL statements.
-- So:
\o cp.csv
copy (select * from :var_tname) to stdout WITH (FORMAT CSV, HEADER);
\o
-- This opens the file cp.csv, COPYs to it, and then closes the file.
-- Or, per the docs example in the UPDATE above:
copy (select * from :var_tname) to stdout WITH (FORMAT CSV, HEADER) \g cp.csv
cat cp.csv
line_id,category,cell_per,ts_insert,ts_update,user_insert,user_update,plant_type,season,short_category
5,H PREM 3.5,18,,06/02/2004 15:11:26,,postgres,herb,none,HP3
7,HERB G,1,,06/02/2004 15:11:26,,postgres,herb,none,HG
9,HERB TOP,1,,06/02/2004 15:11:26,,postgres,herb,none,HT
10,VEGGIES,1,,06/02/2004 15:11:26,,postgres,herb,annual,VG

How do I use a variable in Postgres scripts?

I'm working on a prototype that uses Postgres as its backend. I don't do a lot of SQL, so I'm feeling my way through it. I made a .pgsql file I run with psql that executes each of many files that set up my database, and I use a variable to define the schema that will be used so I can test features without mucking up my "good" instance:
\set schema_name 'example_schema'
\echo 'The Schema name is' :schema_name
\ir sql/file1.pgsql
\ir sql/file2.pgsql
This has been working well. I've defined several functions that expand :schema_name properly:
CREATE OR REPLACE FUNCTION :schema_name.get_things_by_category(...
For reasons I can't figure out, this isn't working in my newest function:
CREATE OR REPLACE FUNCTION :schema_name.update_thing_details(_id uuid, _details text)
RETURNS text
LANGUAGE 'plpgsql'
AS $BODY$
BEGIN
UPDATE :schema_name.things
...
The syntax error indicates that it's interpreting :schema_name literally after UPDATE instead of expanding it. How do I get it to use the variable's value instead of the literal text here? I get that the BEGIN..END block is maybe a different context, but surely there's a way to script this schema name in all places?
I can think of three approaches, since psql cannot do this directly.
Shell script
Use a bash script to perform the variable substitution and pipe the result into psql, like:
#!/bin/bash
schemaName=$1
contents=$(sed -e "s/#SCHEMA_NAME#/$schemaName/g" script.sql)
echo "$contents" | psql
This would probably be a lot of boilerplate if you have a lot of .sql scripts.
Staging Schema
Keep the approach you have now with a hard-coded schema of something like staging and then have a bash script go and rename staging to whatever you want the actual schema to be.
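A rough sketch of that rename step, assuming everything was created in a schema literally named staging and that $1 is the schema name you actually want:
#!/bin/bash
schemaName=$1
# Rename the hard-coded staging schema to the requested name.
psql -c "ALTER SCHEMA staging RENAME TO ${schemaName};"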
Customize the search path
Your entry point could be an inline script within bash that is piped into psql, does an up-front update of the default connection schema, then uses \ir to include all of your .sql files, which should not specify a schema.
#!/bin/bash
schemaName=$1
psql <<SCRIPT
SET search_path TO $schemaName;
\ir sql/file1.pgsql
\ir sql/file2.pgsql
SCRIPT
Some details: How to select a schema in postgres when using psql?
Personally I am leaning towards the latter approach as it seems the simplest and most scalable.
The documentation says:
Variable interpolation will not be performed within quoted SQL literals and identifiers. Therefore, a construction such as ':foo' doesn't work to produce a quoted literal from a variable's value (and it would be unsafe if it did work, since it wouldn't correctly handle quotes embedded in the value).
Now the function body is a “dollar-quoted” string literal ($BODY$...$BODY$), so the variable will not be replaced there.
I can't think of a way to do this with psql variables.

how to declare a date (with time) variable in pl/sql

I want to use a date (DD/MM/YYYY HH:MI:SS) variable in PL/SQL. I'm using the following code, but it doesn't work:
BEGIN
declare dateMig date ;
dateMig := to_date('19/05/2017 05:05:00', 'DD/MM/YYYY HH:MI:SS');
exec P_MY_PROC(100,'CHECK',dateMig);
END;
Can anyone help, please? What am I doing wrong?
It would be helpful if you could explain what you mean by "doesn't work" - i.e. any error messages and/or unexpected results that you get.
However, there are a couple of obvious things wrong with your procedure:
You have the declaration section inside the execution block - that's not going to work for what you're wanting to do. PL/SQL programs consist of the declaration section, the execution section and the exception section in that order.
You're attempting to call a procedure using exec inside the PL/SQL program. That's not going to work, as exec (or, to give it its full name, execute) is a SQL*Plus command, not a PL/SQL command; it lets you run a procedure from the command line without having to nest it in a begin/end block. In PL/SQL you don't need to use exec.
So, your code should look something like:
declare
datemig date;
begin
datemig := to_date('19/05/2017 05:05:00', 'dd/mm/yyyy hh24:mi:ss');
p_my_proc(100, 'CHECK', datemig);
end;

Postgresql escape dollar sign

I have very complex data that I'm inserting into PostgreSQL, and I'm using dollar quoting ($$) to escape it. However, I have one row that ends with a dollar sign and is causing an error.
The original row is something like 'dd^d\w=dd$', and when escaped it becomes '$$dd^d\w=dd$$$'.
How can I escape this specific row?
Use any string between the double dollars to differentiate the delimiter:
select $anything$abc$$anything$;
?column?
----------
abc$
The insert is similar:
insert into t (a, b) values
($anything$abc$$anything$, $xyz$abc$$xyz$);
INSERT 0 1
select * from t;
a | b
------+------
abc$ | abc$
I found this question while troubleshooting a problem with executing a query containing a double dollar sign in a literal from within a Linux shell. For example, select '$abc$' in psql gives the correct result $abc$, while psql -U me -c "select '$abc$'" called from the Linux shell produces the incorrect result $ (provided there's no environment variable abc).
In that case, wrapping it in another delimiter ($wrapper$$abc$$wrapper$) won't help, since the primary problem is the shell interpreting the dollar signs. A possible solution is escaping the dollars (psql -U me -c "select '\$abc\$'"), but that produces literal backslashes when the same text is run inside psql. To get a query usable both inside psql and from the Linux shell, psql -U me -c "select concat(chr(36),'abc',chr(36))" is a universal solution.
While Clodoaldo is quite right I think there's another aspect of this you need to look at:
Why are you doing the quoting yourself? You should generally be using parameterised ("prepared") statements via your programming language's client driver. See bobby tables for some examples in common languages. Using parameterised statements means you don't have to care about doing any quoting at all anymore, it's taken care of for you by the database client driver.
I'd give you an example but you haven't mentioned your client language/tools.

execute command in while loop

I have a Postgres function which runs the following loop:
while x<=colnum LOOP
EXECUTE
'Update attendrpt set
slot'||x||' = pres
from (SELECT branch, semester, attend_date , div, array_to_string(ARRAY_AGG(first_name||':'||alias_name||':'||lect_type||':'||
to_char(present,'99')),';')
As pres
from attend1 where lecture_slot_no ='||x||'
group by branch, semester, attend_date , div ) j
where attendrpt.branch=j.branch and attendrpt.semester=j.semester
and attendrpt.attenddate=j.attend_date and attendrpt.div=j.div;';
x := x + 1;
END LOOP;
The problem here is that the single quotes inside the query conflict with the quotes that delimit the EXECUTE string. Is there any way to solve this?
Thanks in advance.
Quote your function definition with dollar-quoting (like $BODY$ or just $$) as per the manual.
Use execute ... using instead of string substitution. For substituting identifiers, use the %I format specifier of the format() function; a rough sketch follows after this list.
If you absolutely must use || string concatenation, say if you're on some ancient version of PostgreSQL, you need to use the quote_literal and quote_ident functions to avoid issues with quoting and potential security problems.
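A rough illustration of the second point, reusing the attendrpt/slotN names from the question (the table has to exist for it to actually run): format() with %I quotes the computed column name safely, and the value travels through EXECUTE ... USING instead of being concatenated into the string.
#!/bin/bash
psql <<'SQL'
DO $sketch$
DECLARE
    x int := 1;
BEGIN
    -- %I quotes the identifier built from 'slot' || x; $1 is bound via USING.
    EXECUTE format('UPDATE attendrpt SET %I = $1', 'slot' || x)
    USING 'pres value';
END;
$sketch$;
SQL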
Beyond that, it looks like the whole approach is completely unnecessary; you're doing something that looks like it can be done in simple SQL.