now() default values are all showing same timestamp - postgresql

I have created my tables with a column (type: timestamp with time zone) and set its default value to now() (i.e. current_timestamp).
I run a series of inserts as separate statements in a single function, and I noticed that all the timestamps are equal down to the millisecond. Is the function value somehow cached and shared for the entire function call or transaction?

That is expected and documented behaviour:
From the manual:
Since these functions return the start time of the current transaction, their values do not change during the transaction. This is considered a feature: the intent is to allow a single transaction to have a consistent notion of the "current" time, so that multiple modifications within the same transaction bear the same time stamp.
If you want something that changes each time you run a statement, you need statement_timestamp(); if you need a value that changes even within a single statement, use clock_timestamp() (again, see the descriptions in the manual).
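A quick psql sketch of the difference (the sleep and the surrounding transaction are just for illustration):
BEGIN;
SELECT now(), statement_timestamp(), clock_timestamp();
SELECT pg_sleep(1);  -- wait a second
SELECT now(), statement_timestamp(), clock_timestamp();
-- now() is unchanged; statement_timestamp() and clock_timestamp() have moved on
COMMIT;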

now() and current_timestamp (the latter without parentheses, as the odd SQL standard demands) are STABLE functions returning the point in time when the transaction started, as timestamptz.
Consider one of the other options PostgreSQL offers, in particular statement_timestamp(). The manual:
statement_timestamp() returns the start time of the current statement (more specifically, the time of receipt of the latest command message from the client)
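If the column default should track each statement rather than the transaction, changing the default is enough. A minimal sketch, assuming a table tbl with a timestamptz column created_at (both names made up):
ALTER TABLE tbl ALTER COLUMN created_at SET DEFAULT statement_timestamp();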
Related:
Difference between now() and current_timestamp


Default date for insert doesn't change in continuous transformation

I created the table below.
create table foo
(
  ibutton     text NULL,
  severidade  int4 NULL,
  dt_insercao timestamptz NULL DEFAULT now()
);
My insert:
insert into foo (ibutton, severidade) values ('aa', 4);
In every case, the value of dt_insercao, which should default to "now", comes out as '2017-06-08 10:35:35'...
I have no idea where this value comes from.
These inserts are executed inside my PipelineDB continuous transformation. When I execute them in my client, pgAdmin, the date is correct.
Not sure how PipelineDB comes into play here, but in Postgres, now() returns the same value for all inserts in a single transaction:
Quote from the manual
Since these functions return the start time of the current transaction, their values do not change during the transaction. This is considered a feature: the intent is to allow a single transaction to have a consistent notion of the "current" time, so that multiple modifications within the same transaction bear the same time stamp.
If you need a different value for each row inserted in one transaction, use clock_timestamp() in your table definition instead.
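Applied to the table from the question, a minimal sketch (clock_timestamp() is volatile, so every row gets its own value, even within one statement):
ALTER TABLE foo ALTER COLUMN dt_insercao SET DEFAULT clock_timestamp();
-- two inserts in one transaction now get distinct timestamps:
BEGIN;
insert into foo (ibutton, severidade) values ('aa', 4);
insert into foo (ibutton, severidade) values ('bb', 5);
COMMIT;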

PostgreSQL: Insert Date and Time of day in the same field/box?

After performing the following:
INSERT INTO times_table (start_time, end_time) VALUES (to_date('2/3/2016 12:05',
'MM/DD/YYYY HH24:MI'), to_date('2/3/2016 15:05', 'MM/DD/YYYY HH24:MI'));
PostgreSQL only displays the date.
If possible, would I have to run a separate select statement to extract the time (i.e. 12:05 and 15:05) stored in that field? Or are the times completely discarded when the query gets executed?
I don't want to use timestamp, since I'd like to execute this in Oracle SQL as well.
to_date returns... a date. Surprise! So yeah, it's not going to give you the time.
You should be using the timestamp data type to store times and functions which return timestamps. So use to_timestamp.
Oracle also has a timestamp data type and to_timestamp function.
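A sketch of the insert from the question rewritten with to_timestamp() (in Postgres, to_timestamp() returns timestamptz, which is coerced to the column type on insert):
INSERT INTO times_table (start_time, end_time)
VALUES (to_timestamp('2/3/2016 12:05', 'MM/DD/YYYY HH24:MI'),
        to_timestamp('2/3/2016 15:05', 'MM/DD/YYYY HH24:MI'));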
In general, trying to write one set of SQL that works with multiple databases results in either having to write very simple SQL that doesn't take advantage of any of the database's features, or madness.
Instead, use a SQL query builder that writes your SQL for you, takes care of compatibility issues, and lets you add clauses to existing statements. For example, JavaScript has Knex.js and Perl has SQL::Abstract.

Issue with PostgreSQL: 'now' keeps returning same old value

I have an old web app, the relevant current stack is: Java 8, Tomcat 7, Apache Commons DBCP 2.1, Spring 2.5 (for transactions), iBatis, PostgreSQL 9.2 with postgresql-9.4.1208.jar
Part of the code inserts new records into the incidents table, where the field begin_date (timestamp(3) with time zone) is a creation timestamp, filled with 'now':
insert into incidents
(..., begin_date, ...)
values
(..., 'now', ...)
All this is executed via iBatis, transactions managed programmatically via Spring, connections acquired via a DBCP pool. The webapp (actually a pair, client side and back office, which share most of the code and jars) has been working for years.
Lately, perhaps after some library updates and reorganization (nothing important, it seemed), I've been experiencing an intermittent, hard-to-reproduce, nasty problem: now seems to freeze, and it begins returning the same "old" value. Then many records appear with the same creation timestamp, hours or days old:
db=# select 'now'::timestamptz;
        timestamptz
----------------------------
 2016-06-10 21:59:03.637+00
(1 row)
db=# select rid, begin_date from incidents order by rid desc limit 6;
  rid  |         begin_date
-------+----------------------------
 85059 | 2016-06-08 00:11:06.503+00
 85058 | 2016-06-08 00:11:06.503+00
 85057 | 2016-06-08 00:11:06.503+00
 85056 | 2016-06-08 00:11:06.503+00
 85055 | 2016-06-08 00:11:06.503+00
 85054 | 2016-06-08 00:11:06.503+00
(6 rows)
(All the above records were actually created minutes before 2016-06-10 21:50)
How can this happen? It might be some problem related to transactions and/or connection pooling, but I can't figure out what.
I know that now() is an alias of transaction_timestamp(); it returns the time at the start of the transaction. This would suggest that a transaction was not properly closed and the inserts above were written (unintentionally) in a single long transaction. But this looks rather incredible to me.
First, I can insert a new record (via the webapp) and, using a psql console, see that it has been written with the same begin_date (if the transaction were uncommitted, I should not see the new record; I have the default isolation level).
Furthermore, the pg_stat_activity view only shows idle connections.
Any cues?
There's the constant (special timestamp value) 'now'.
And there's the function now().
The fact that you are mixing them freely suggests that you are unaware of the all-important difference. The manual:
Special Values
PostgreSQL supports several special date/time input values for convenience, as shown in Table 8-13. The values infinity and -infinity are specially represented inside the system and will be displayed unchanged; but the others are simply notational shorthands that will be converted to ordinary date/time values when read. (In particular, now and related strings are converted to a specific time value as soon as they are read.) All of these values need to be enclosed in single quotes when used as constants in SQL commands.
Bold emphasis mine.
And (as you mentioned yourself already), quoting the manual:
now() is a traditional PostgreSQL equivalent to transaction_timestamp().
And:
transaction_timestamp() is equivalent to CURRENT_TIMESTAMP
There is more, read the whole chapter.
Now (no pun intended), since you are using the special value instead of the function, you get a different (unexpected for you) behavior with prepared statements.
Consider this demo:
test=# BEGIN;
BEGIN
test=# PREPARE foo AS
test-# SELECT timestamptz 'now' AS now_constant, now() AS now_function;
PREPARE
test=# EXECUTE foo;
now_constant | now_function
-------------------------------+-------------------------------
2016-06-11 03:09:05.622783+02 | 2016-06-11 03:09:05.622783+02 -- identical
(1 row)
test=# commit;
COMMIT
test=# EXECUTE foo;
now_constant | now_function
-------------------------------+------------------------------
2016-06-11 03:09:05.622783+02 | 2016-06-11 03:10:00.92488+02 -- different!
(1 row)
While you run both in the same transaction, 'now' and now() produce the same value. But a prepared statement is designed to last for the duration of your session (possibly across many transactions). Next time you execute the prepared statement, you'll see the difference.
In other words: 'now' implements "early binding", while now() implements "late binding".
You may have introduced prepared statements and / or connection pooling (which can preserve prepared statements for a longer period of time) - both generally good ideas. But the hidden problem in your INSERT now starts kicking in.
The "idle connections" you see indicate as much: connections stay open, preserving prepared statements.
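As a quick check, you can list the prepared statements of your own session; note that this view only covers the current session, so it won't show what pooled application connections are holding:
SELECT name, statement, prepare_time
FROM   pg_prepared_statements;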
In short: Use now().
Alternatively, set the column default of begin_date to now() (not 'now'!) and don't mention the column in the INSERT. Your "creation timestamp" is saved automatically.
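A minimal sketch of that alternative, reusing the names from the question:
ALTER TABLE incidents ALTER COLUMN begin_date SET DEFAULT now();
-- inserts that omit begin_date now get the transaction start time automatically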
I think the function you are looking for is now(), not 'now'...
insert into incidents
(..., begin_date, ...)
values
(..., now(), ...)
or at least that works from a psql shell.
robert.kuhar=# select now();
now
-------------------------------
2016-06-10 18:10:05.953661-07
(1 row)

Create timestamp index from JSON on PostgreSQL

I have a table in PostgreSQL with a field named data that is jsonb, containing a lot of objects, and I want to create an index to speed up the queries. I'm using a few rows to test the data (just 15 rows), but I don't want problems with the queries in the future. I'm getting data from the Twitter API, so within a week I get around 10 GB of data. If I create the plain index:
CREATE INDEX ON tweet((data->>'created_at'));
I get a text index. If I try:
Create index on tweet((CAST(data->>'created_at' AS timestamp)));
I get
ERROR: functions in index expression must be marked IMMUTABLE
I've tried to make it "immutable" by setting the time zone with
date_trunc('seconds', CAST(data->>'created_at' AS timestamp) at time zone 'GMT')
but I'm still getting the "immutable" error. So, how can I build a timestamp index from JSON? I know that I could add a plain column with the date, since it will probably remain constant over time, but I want to learn how to do this.
This expression won't be allowed in the index either:
(CAST(data->>'created_at' AS timestamp) at time zone 'UTC')
It's not immutable, because the first cast depends on your DateStyle setting (among other things). Doesn't help to translate the result to UTC after the function call, uncertainty has already crept in ...
The solution is a function that makes the cast immutable by fixing the time zone (like @a_horse already hinted).
I suggest to use to_timestamp() (which is also only STABLE, not IMMUTABLE) instead of the cast to rule out some source of trouble - DateStyle being one.
CREATE OR REPLACE FUNCTION f_cast_isots(text)
RETURNS timestamptz AS
$$SELECT to_timestamp($1, 'YYYY-MM-DD HH24:MI')$$ -- adapt to your needs
LANGUAGE sql IMMUTABLE;
Note that this returns timestamptz. Then:
CREATE INDEX foo ON t (f_cast_isots(data->>'created_at'));
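Queries can then use the index as long as they repeat the same expression. A hedged example (the cutoff value is made up):
SELECT *
FROM   t
WHERE  f_cast_isots(data->>'created_at') >= timestamptz '2017-06-01 00:00+00';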
Detailed explanation for this technique in this related answer:
Does PostgreSQL support "accent insensitive" collations?
Related:
Query on a time range ignoring the date of timestamps

Changing time zone value of data

I have to import data without time zone information in it (however, I know the specific time zone of the data I want to import), but I need the timestamp with time zone format in the database. Once I import it and set the data type to timestamp with time zone, Postgres will automatically assume that the data in the table is from my time zone and assign my time zone to it. Unfortunately, the data I want to import is not from my time zone, so this does not work.
The database also contains data with different time zones. However, the time zone within one table is always the same.
Now, I could set the time zone of the database to the time zone of the data I want to import before importing (using the SET time zone command) and change it back to my time zone once the import is done; I am pretty sure already-stored data will not be affected by the time zone change. But this seems to be a pretty dirty approach and may cause problems later on.
I wonder if there is a more elegant way to specify the time zone for the import without having the time zone data in the data itself?
Also, I have not found a way to edit time zone information after import. Is there a way not to convert, but simply to edit the time zone for a whole table, assuming that the whole table has the same time zone offset (i.e. if a wrong one has been assigned upon data entry/import)?
Edit:
I managed to specify a time zone upon import, the whole command being:
set session time zone 'UTC';
COPY tbl FROM 'c:\Users\Public\Downloads\test.csv' DELIMITERS ',' CSV;
set session time zone 'CET';
The data then gets imported using the session time zone. I assume this has no effect on queries running at the same time from other connections?
Edit 2:
I found out how to change the time zone of a table afterwards:
PostgreSQL update time zone offset
I suppose it is more elegant to change the time zone of the table after the import than to use the session setting to change the local time zone temporarily, assuming the whole table has the same time zone, of course.
So the code would now be something along the lines of:
COPY tbl FROM 'c:\Users\Public\Downloads\test.csv' DELIMITERS ',' CSV;
UPDATE tbl SET <tstz_field> = <tstz_field> AT TIME ZONE '<correct_time_zone>';
It is a lot more efficient to set the time zone for your import session than to update the values later.
I get the impression that you think of the time zone like a setting that applies to otherwise unchanged values in the tables. But it's not like that at all. Think of it as an input / output modifier. Actual timestamp values (with or without time zone) are always stored as UTC timestamps internally (microseconds since '2000-01-01 00:00'). A lot more details:
Ignoring time zones altogether in Rails and PostgreSQL
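A quick sketch of that input / output behavior: the same stored instant is merely displayed according to the session's timezone setting:
SET timezone = 'UTC';
SELECT timestamptz '2016-01-01 12:00+00';   -- 2016-01-01 12:00:00+00
SET timezone = 'Europe/Berlin';
SELECT timestamptz '2016-01-01 12:00+00';   -- 2016-01-01 13:00:00+01, same instant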
The UPDATE in your second example doubles the size of the table, as every single row is invalidated and a new version added (that's how UPDATE works with MVCC in Postgres). In addition to the expensive operation, VACUUM will have to do more work later to clean up the table bloat. Very inefficient.
It is perfectly safe to SET the local time zone for the session. This doesn't affect concurrent operations in any way. BTW, SET SESSION is the same as plain SET because SESSION is the default anyway.
If you want to be absolutely sure, you can limit the setting to the current transaction with SET LOCAL. I quote the manual here:
The effects of SET LOCAL last only till the end of the current transaction, whether committed or not. A special case is SET followed by SET LOCAL within a single transaction: the SET LOCAL value will be seen until the end of the transaction, but afterwards (if the transaction is committed) the SET value will take effect.
Put together:
BEGIN;
SET LOCAL timezone = 'UTC';
COPY tabledata FROM 'c:\Users\Public\Downloads\test.csv' DELIMITERS ',' CSV;
COMMIT;
Check:
SHOW timezone;