Enforcing default time when only date in timestamptz provided - postgresql

Assume I have the table:
postgres=# create table foo (datetimes timestamptz);
CREATE TABLE
postgres=# \d+ foo
Table "public.foo"
Column | Type | Modifiers | Storage | Description
-----------+--------------------------+-----------+---------+-------------
datetimes | timestamp with time zone | | plain |
Has OIDs: no
So let's insert some values into it...
postgres=# insert into foo values
('2012-12-12'), --This is the value I want to catch for.
(null),
('2012-12-12 12:12:12'),
('2012-12-12 12:12');
INSERT 0 4
And here's what we have:
postgres=# select * from foo ;
datetimes
------------------------
2012-12-12 00:00:00+00

2012-12-12 12:12:12+00
2012-12-12 12:12:00+00
(4 rows)
Ideally, I'd like to set up a default timestamp value for when a TIME is not provided with the input: rather than the de facto time for 2012-12-12 being 00:00:00, I would like it to default to 15:45:10.
Meaning, my results should look like:
postgres=# select * from foo ;
datetimes
------------------------
2012-12-12 15:45:10+00 --This one gets the default time.

2012-12-12 12:12:12+00
2012-12-12 12:12:00+00
(4 rows)
I'm not really sure how to do this in Postgres 8.4; I can't find anything in the datetime section of the manual or in the sections about column default values.

Values for new rows can be tweaked in a BEFORE INSERT trigger. Such a trigger
could test whether there's a non-zero time component in NEW.datetimes and, if not, set it to the desired fixed time.
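A sketch of what such a trigger could look like (names are made up for illustration; note that, as explained next, it cannot tell a bare date apart from an explicit midnight):
create function foo_set_default_time() returns trigger as $$
begin
  -- treat a midnight time-of-day as "no time given" and substitute the default
  if NEW.datetimes::time = time '00:00:00' then
    NEW.datetimes := NEW.datetimes::date + time '15:45:10';
  end if;
  return NEW;
end;
$$ language plpgsql;
create trigger foo_set_default_time
  before insert on foo
  for each row execute procedure foo_set_default_time();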
However, the case where the time part is explicitly set to zero in the INSERT statement cannot be handled with this technique, because '2012-12-12'::timestamptz is equal to '2012-12-12 00:00:00'::timestamptz. It would be like trying to distinguish 0.0 from 0.00.
Technically, tweaking the value should happen before the implicit cast from string to the column's type, which even a RULE (dynamic query rewriting) cannot do.
It seems to me that the best option is to rewrite the INSERT and apply a function to each value converting it explicitly from string to timestamp. This function would test the input format and add the time part when needed:
create function conv(text) returns timestamptz as $$
  select case when length($1) = 10 then ($1 || ' 15:45:10')::timestamptz
              else $1::timestamptz
         end;
$$ language sql strict immutable;
insert into foo values
(conv('2012-12-12')),
(conv(null)),
(conv('2012-12-12 12:12:12')),
(conv('2012-12-12 12:12'));
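With this in place, the first row gets the 15:45:10 default because its input is exactly 10 characters long (the bare yyyy-mm-dd form), the other values pass through unchanged, and conv(null) simply yields null because the function is declared strict.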

How to get the next sequence value in PostgreSQL?

Currently I have a problem finding the next sequence value for a column.
The my_list table is created by
CREATE SEQUENCE my_list_id_seq;
CREATE TABLE IF NOT EXISTS
my_list (id int PRIMARY KEY UNIQUE NOT NULL DEFAULT nextval('my_list_id_seq'),
mycardno int);
ALTER SEQUENCE my_list_id_seq OWNED BY my_list.id;
I tried to use a currval query to find the next sequence value (it should be 1), but I get:
SELECT currval('my_list_id_seq');
ERROR: currval of sequence "my_list_id_seq" is not yet defined in this session
If I use a nextval query, I get 2, which is not what I want, and the sequence is increased by 1 by this query.
SELECT nextval('my_list_id_seq');
nextval
---------
2
(1 row)
After the nextval query, I can now use the currval query.
SELECT currval('my_list_id_seq');
currval
---------
2
(1 row)
As I cannot find another solution at this stage, my current steps to get the value are to:
Do nextval and take that as the "next sequence" value.
Use ALTER SEQUENCE my_list_id_seq RESTART WITH "next sequence - 1"; to set the sequence value back. The "next sequence - 1" arithmetic is done in C# (I use WPF).
I would like to know if there is a more direct approach to this problem. Thank you.
As per the documentation, currval only works when nextval has been called in a session:
Returns the value most recently obtained by nextval for this sequence in the current session. (An error is reported if nextval has never been called for this sequence in this session.)
If you want to get the current value, you may want to try SELECT * FROM my_list_id_seq and use the corresponding value from the last_value column:
postgres=# CREATE SEQUENCE my_list_id_seq;
CREATE SEQUENCE
postgres=# CREATE TABLE IF NOT EXISTS
my_list (id int PRIMARY KEY UNIQUE NOT NULL DEFAULT nextval('my_list_id_seq'),
mycardno int);
CREATE TABLE
postgres=# ALTER SEQUENCE my_list_id_seq OWNED BY my_list.id;
ALTER SEQUENCE
postgres=# select * from my_list_id_seq;
last_value | log_cnt | is_called
------------+---------+-----------
1 | 0 | f
(1 row)
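If what you want is the value the next nextval call would hand out, you can also derive it from last_value and is_called (a sketch that assumes the default increment of 1 and ignores concurrent sessions):
SELECT CASE WHEN is_called THEN last_value + 1 ELSE last_value END AS next_value
FROM my_list_id_seq;
Note that this only peeks at the sequence; it does not reserve the value the way nextval does.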

PostgreSQL - to_timestamp is not properly converting unix timestamp

I'm trying to get the current UTC time and insert it into a PostgreSQL timestamp column, but it's not working properly.
I am using the following command:
INSERT INTO public.rt_block_height
VALUES(to_timestamp('2018-09-09 00:36:00.778653', 'yyyy-mm-dd hh24:mi:ss.MS.US'), 83.7)
However, when I check the result, it looks like this:
tutorial=# select * from rt_block_height;
time | block_height
-------------------------+--------------
2018-09-09 00:48:58.653 | 83.7
(1 row)
I don't know what's causing this mismatch.
FYI, here my source code for table:
CREATE TABLE IF NOT EXISTS public.rt_BLOCK_HEIGHT
(
"time" timestamp without time zone,
BLOCK_HEIGHT double precision
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE public.rt_BLOCK_HEIGHT
OWNER to postgres;
SELECT create_hypertable('rt_BLOCK_HEIGHT', 'time');
There is a logical error in the format string, as you should not use MS and US at the same time. However, you do not need the function at all; just cast the string to timestamp:
INSERT INTO public.rt_block_height
VALUES('2018-09-09 00:36:00.778653'::timestamp, 83.7)
From the documentation:
to_timestamp and to_date exist to handle input formats that cannot be converted by simple casting. For most standard date/time formats, simply casting the source string to the required data type works, and is much easier.
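If you prefer to keep to_timestamp, a format string that parses the fractional seconds only once should also work (a sketch using the same literal value as in the question):
INSERT INTO public.rt_block_height
VALUES (to_timestamp('2018-09-09 00:36:00.778653', 'yyyy-mm-dd hh24:mi:ss.US'), 83.7);
Keep in mind that to_timestamp() returns timestamp with time zone, which is then converted to the timestamp without time zone column using your session time zone.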

Back date Windows 2012 server to use program on another date

I have an ERP application that uses the system date when posting transactions. The database is PostgreSQL. I'm able to use https://www.nirsoft.net/utils/run_as_date.html to backdate the application, but I notice that the transactions are still posted as of "today", and I think that may be because PostgreSQL uses the system date.
Is there any way I can set the date back for PostgreSQL? Or any other way to do this? The process in the ERP application does not have an option to backdate.
The easiest would be to add a trigger to the database that would change the date for inserted rows:
create table testpast(
id serial primary key,
time timestamp with time zone not null default now()
);
insert into testpast (time) values (default);
select * from testpast;
id | time
----+-------------------------------
1 | 2018-03-16 00:09:20.219419+01
(1 row)
create function time_20_years_back() returns trigger as $$
begin
  NEW.time := now() - interval '20 years';
  return NEW;
end;
$$ language plpgsql;
create trigger testpast_time_20_years_back
  before insert on testpast
  for each row
  execute procedure time_20_years_back();
insert into testpast (time) values (default);
select * from testpast;
id | time
----+-------------------------------
1 | 2018-03-16 00:09:20.219419+01
2 | 1998-03-16 00:09:55.741345+01
(2 rows)
Though I have no idea what would be the purpose of such a hack.

Is there a way to create a table variable in postgresql

I have created a function that creates a temporary table and inserts into the table. The problem is that I need this to work in a read-only instance as well, but in a read-only instance I can't create a table and/or insert into it. Is there any other way of doing this? Maybe by creating a table variable in a way similar to other SQL languages?
I did some research and there doesn't seem to be a table variable, but maybe an array of records? Any ideas?
UPDATE:
To answer people's questions, I am trying to create a function that returns a list of dates from now until x intervals ago in intervals of y.
So for instance, select * from getDates('3 days', '1 day') returns:
startdate | enddate
------------+------------
2016-07-20 | 2016-07-21
2016-07-19 | 2016-07-20
2016-07-18 | 2016-07-19
And select * from getDates('3 months', '1 month'); returns:
startdate | enddate
------------+------------
2016-07-01 | 2016-08-01
2016-06-01 | 2016-07-01
2016-05-01 | 2016-06-01
I currently do this by using a while loop and going back per interval until I hit the time given by the first parameter. I then insert those values into a temp table, and when finished I select everything from the table. I can include the code if necessary.
You can create a permanent named Composite Type representing the structure of your temporary table, and then use an array variable to manipulate a set of rows inside a function:
-- Define columns outside function
CREATE TYPE t_foo AS
(
  id int,
  bar text
);
CREATE OR REPLACE FUNCTION test()
RETURNS SETOF t_foo AS
$BODY$
DECLARE
  -- Create an empty array of records of the appropriate type
  v_foo t_foo[] := ARRAY[]::t_foo[];
BEGIN
  -- Add some rows to the array
  v_foo := v_foo || (42, 'test')::t_foo;
  v_foo := v_foo || (-1, 'nothing')::t_foo;
  -- Convert the array to a result set as though it were a table
  RETURN QUERY SELECT * FROM unnest(v_foo);
END;
$BODY$
LANGUAGE plpgsql;
SELECT * FROM test();
The crucial part here is the variable of type t_foo[] - that is, an array of records of the pre-defined type t_foo.
This is not as easy to work with as a temporary table or table variable, because you need to use array functions to get data in and out, but may be useful.
It's worth considering though whether you really need the complex local state, or whether your problem can be re-framed to use a different approach, e.g. sub-queries, CTEs, or a set-returning function with RETURN NEXT.
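For instance, the same two rows could be produced without any array handling, using RETURN NEXT (a sketch reusing the t_foo type from above):
CREATE OR REPLACE FUNCTION test_return_next()
RETURNS SETOF t_foo AS
$BODY$
BEGIN
  -- emit each row directly instead of accumulating it in an array
  RETURN NEXT (42, 'test')::t_foo;
  RETURN NEXT (-1, 'nothing')::t_foo;
  RETURN;
END;
$BODY$
LANGUAGE plpgsql;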
Maybe the best way to approach it is to get your administrator to GRANT TEMPORARY ON DATABASE database_name TO the user account performing your actions. You still will only have read-only access to the database.
Declare the function as returning a table:
create function f()
returns table (
  a int,
  b text
) as $$
  select x, y from t;
$$ language sql;
Use it as:
select *
from f()
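Applied to the getDates example from the question, a set-based sketch along these lines could replace the while loop and the temp table entirely (assuming generate_series over timestamps; the month variant in the question additionally snaps to month boundaries, which would need an extra date_trunc('month', ...)):
create function getDates(span interval, step interval)
returns table (startdate date, enddate date) as $$
  -- walk backwards from now() in steps of "step" until "span" is covered
  select (d - step)::date, d::date
  from generate_series(now(), now() - span + step, -step) as g(d);
$$ language sql stable;
select * from getDates('3 days', '1 day');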

Issue with PostgreSQL: 'now' keeps returning same old value

I have an old web app, the relevant current stack is: Java 8, Tomcat 7, Apache Commons DBCP 2.1, Spring 2.5 (for transactions), iBatis, PostgreSQL 9.2 with postgresql-9.4.1208.jar
Part of the code inserts new records into the incidents table, where the field begin_date (timestamp(3) with time zone) is a creation timestamp, filled with 'now':
insert into incidents
(...., begin_date, )
values
(..., 'now' ....)
All this is executed via iBatis, with transactions managed programmatically via Spring and connections acquired from a DBCP pool. The webapp (actually a pair, client side and back office, which share most of the code and jars) has been working for years.
Lately, perhaps after some library updates and reorganization (nothing important, it seemed), I've been experiencing an intermittent, hard-to-reproduce, nasty problem: now seems to freeze, and it begins returning the same "old" value. Then many records appear with the same creation timestamp, hours or days in the past:
db=# select 'now'::timestamptz;
timestamp
-------------------------
2016-06-10 21:59:03.637+00
db=# select rid,begin_date from incidents order by rid desc limit 6;
rid | begin_date
-------+----------------------------
85059 | 2016-06-08 00:11:06.503+00
85058 | 2016-06-08 00:11:06.503+00
85057 | 2016-06-08 00:11:06.503+00
85056 | 2016-06-08 00:11:06.503+00
85055 | 2016-06-08 00:11:06.503+00
85054 | 2016-06-08 00:11:06.503+00
(All the above records were actually created minutes before 2016-06-10 21:50)
How can this happen? It might be some problem related to transactions and/or connection pooling, but I can't figure out what.
I know that 'now()' is an alias of transaction_timestamp(); it returns the time at the start of the transaction. This would suggest that a transaction was not properly closed and that the inserts above were written (unintentionally) in a single long transaction. But this looks rather incredible to me.
First, I can insert a new record (via the webapp) and, using a psql console, see that it has been written with the same begin_date (if the transaction were uncommitted, I should not see the new record; I have the default isolation level).
Furthermore, the pg_stat_activity view only shows idle connections.
Any clues?
There's the constant (special timestamp value) 'now'.
And there's the function now().
The fact that you are mixing them freely suggests that you are unaware of the all-important difference. The manual:
Special Values
PostgreSQL supports several special date/time input values for
convenience, as shown in Table 8-13. The values infinity and -infinity
are specially represented inside the system and will be displayed
unchanged; but the others are simply notational shorthands that will
be converted to ordinary date/time values when read. (In particular,
now and related strings are converted to a specific time value as soon
as they are read.) All of these values need to be enclosed in single
quotes when used as constants in SQL commands.
Bold emphasis mine.
And (as you mentioned yourself already), quoting the manual:
now() is a traditional PostgreSQL equivalent to transaction_timestamp().
And:
transaction_timestamp() is equivalent to CURRENT_TIMESTAMP
There is more, read the whole chapter.
Now (no pun intended), since you are using the special value instead of the function, you get a different (unexpected for you) behavior with prepared statements.
Consider this demo:
test=# BEGIN;
BEGIN
test=# PREPARE foo AS
test-# SELECT timestamptz 'now' AS now_constant, now() AS now_function;
PREPARE
test=# EXECUTE foo;
now_constant | now_function
-------------------------------+-------------------------------
2016-06-11 03:09:05.622783+02 | 2016-06-11 03:09:05.622783+02 -- identical
(1 row)
test=# commit;
COMMIT
test=# EXECUTE foo;
now_constant | now_function
-------------------------------+------------------------------
2016-06-11 03:09:05.622783+02 | 2016-06-11 03:10:00.92488+02 -- different!
(1 row)
While you run both in the same transaction, 'now' and now() produce the same value. But a prepared statement is designed to last for the duration of your session (possibly across many transactions). Next time you execute the prepared statement, you'll see the difference.
In other words: 'now' implements "early binding", while now() implements "late binding".
You may have introduced prepared statements and/or connection pooling (which can preserve prepared statements for a longer period of time) - both generally good ideas. But the hidden problem in your INSERT now starts kicking in.
The "idle connections" you see indicate as much: connections stay open, preserving prepared statements.
In short: Use now().
Alternatively, set the column default of begin_date to now() (not 'now'!) and don't mention the column in the INSERT. Your "creation timestamp" is saved automatically.
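For example (a sketch using the table and column names from the question):
ALTER TABLE incidents ALTER COLUMN begin_date SET DEFAULT now();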
I think the function you are looking for is now(), not 'now'...
insert into incidents
(..., begin_date, ...)
values
(..., now(), ...)
or at least that works from a psql shell.
robert.kuhar=# select now();
now
-------------------------------
2016-06-10 18:10:05.953661-07
(1 row)