Postgres running in a Docker container: change the current time (now()) to a custom date and time

I have a Postgres TimescaleDB database running in Docker. For the purposes of API testing I want SELECT NOW() to return, let's say, 2010-12-01 23:00:44.851242 +00:00. Basically, every time I start up the container I want it to think the current date is some time in December 2010.
How can I achieve this? I can't seem to find any command to set the current time in Postgres. Do I need to change the system time in the Docker container before the database comes up? Is that even something I can do?

You can achieve this by creating a custom now() function in a separate schema and then adjusting the search_path so that it is preferred over the built-in now() function:
CREATE SCHEMA test;
CREATE OR REPLACE FUNCTION test.now() RETURNS timestamptz LANGUAGE SQL AS
$$ SELECT '2000-01-01 0:00'::timestamptz; $$;
SET search_path TO test,pg_catalog,public;
-- make search_path change permanent for a specific user
ALTER USER <testuser> SET search_path TO test,pg_catalog,public;
SELECT now();
now
------------------------
2000-01-01 00:00:00+01
(1 row)
Time: 1.826 ms
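If the goal is for every fresh container to come up with the fake clock already in place, one option is to put the whole setup into an init script. This is only a sketch: it assumes the official postgres/timescaledb Docker image, which runs *.sql files from /docker-entrypoint-initdb.d the first time the data directory is initialized, and the database name mydb is made up. It uses the December 2010 timestamp from the question:
-- init-fake-now.sql, mounted into /docker-entrypoint-initdb.d/ (assumes the official image's init hook)
CREATE SCHEMA IF NOT EXISTS test;
CREATE OR REPLACE FUNCTION test.now() RETURNS timestamptz LANGUAGE SQL AS
$$ SELECT '2010-12-01 23:00:44.851242+00'::timestamptz; $$;
-- make the override the default for every new connection to this database (mydb is hypothetical)
ALTER DATABASE mydb SET search_path TO test,pg_catalog,public;
Only unqualified calls to now() are redirected; anything that explicitly calls pg_catalog.now() still sees the real clock.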

Related

Vertx Oracle Datetime in LocalTime

Version
Vertx 4.3.4, JDBCClient
Question
Our Oracle database is set up in the local timezone "Europe/Vienna" and returns the local time over JDBC. But during the conversion in the Vert.x JDBC client, the time is always marked as UTC.
If you do a
pool.preparedQuery("select SESSIONTIMEZONE as sessionTimeZone, sysdate as dateValue, to_char(sysdate,'yyyy-dd-mm hh24:mi:ss') localTime from dual")
The output will be:
"sessionTimeZone":"Europe/Vienna",
"dateValue":"2022-10-17T10:06:45Z",
"localTime":"2022-17-10 10:06:45"
I assume it is related to this code section (JDBCDecoderImpl.java, line 188), where the returned timestamp is hard-coded to UTC:
if (descriptor.jdbcType() == JDBCType.TIMESTAMP || descriptor.jdbcType() == JDBCType.TIMESTAMP_WITH_TIMEZONE) {
    return LocalDateTime.parse(value.toString(), DateTimeFormatter.ISO_LOCAL_DATE_TIME).atOffset(ZoneOffset.UTC);
}
The Oracle JDBC driver always returns the timestamp in the timezone the database is set up with. Unlike MySQL, there is no option to define the timezone for the connection.
Is there a way to work around this conversion?
This is a known issue with JDBC and Oracle databases. In order to set the timezone correctly (without requiring any code changes), you can issue the following statement at the start of each connection:
ALTER SESSION SET TIME_ZONE = 'UTC'
This sets the timezone for the session so that it matches the application; in this case, UTC.
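If you want this to run automatically for every new connection without touching the application, one option is an Oracle AFTER LOGON trigger. This is a sketch only; the trigger name is made up and it assumes you have the privileges to create database-level triggers:
-- hypothetical trigger name; fires for every new session on the database
CREATE OR REPLACE TRIGGER set_session_tz_utc
AFTER LOGON ON DATABASE
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET TIME_ZONE = ''UTC''';
END;
/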
There are other options too. For example, if you can modify the database, you can set the database timezone at creation time:
CREATE DATABASE myDB
...
SET TIME_ZONE='UTC';
Or modify it after creation:
ALTER DATABASE myExistingDB
...
SET TIME_ZONE='UTC';
If you can't or don't want to alter the database and want to keep using the Vert.x client without managing the connection yourself (in other words, keep using one-shot queries), you can also specify the timezone at the query level:
SELECT
FROM_TZ(TIMESTAMP '2030-01-01 12:30:35', '-04:00')
AT TIME ZONE '+12:00'
FROM DUAL;
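Applied to the sysdate column from the query in the question, that could look roughly like this (a sketch; it assumes the session timezone really is Europe/Vienna, as reported above, and the column alias is made up):
SELECT
  FROM_TZ(CAST(sysdate AS TIMESTAMP), 'Europe/Vienna')
    AT TIME ZONE 'UTC' AS date_value_utc
FROM dual;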

LAMBDA current_timestamp towards Postgres

The question is very simple:
We are executing select current_timestamp from a Lambda against PostgreSQL Aurora (which is set to the America/New_York timezone).
Scenario:
When we execute select current_timestamp directly on the PostgreSQL server, it gives the time in EST.
But when we execute the same from Lambda, it gives the UTC time zone.
I know that the Lambda timezone is UTC.
But I am confused here, because I am picking up this piece of code "select current_timestamp" from DynamoDB and passing it as a string through Lambda to PostgreSQL. And I can see this statement is executing in PostgreSQL too. It should pick up the timezone of PostgreSQL; not sure why it is picking up Lambda's.
I need to fix this: I need to execute this statement from Lambda and get the current_timestamp of PostgreSQL (not Lambda).
current_timestamp returns the timestamp formatted according to the current session's time zone setting, which in your case is UTC. You have a few options (a sketch of the first one follows below):
SELECT current_timestamp at time zone 'America/New_York'; This is the most explicit.
Start each session with SET TIME ZONE 'America/New_York';
ALTER USER youruser SET TIME ZONE 'America/New_York'; This makes this time zone the default for that user, but the client can still override it.
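As a minimal sketch of the first option, this is a query the Lambda could send as-is (the column alias is made up):
-- returns a timestamp without time zone, rendered in New York local time,
-- regardless of what the session's TimeZone setting happens to be
SELECT current_timestamp AT TIME ZONE 'America/New_York' AS ny_now;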

PG SQL 10 set default time zone to UTC

I need to change the database config to use UTC as the default for a PostgreSQL 10 instance hosted on AWS RDS. I want it permanently changed at the database level, never reverting to any other timezone, no matter what.
I've tried running this, but it shows 0 updated rows:
ALTER DATABASE <my-db> SET timezone='UTC';
I've also tried attaching a custom parameter group to the DB in RDS and modifying the timezone entry there (and rebooted afterwards).
No matter what I do, when I run select * from pg_settings where name = 'TimeZone'; or SHOW timezone, it shows 'America/Chicago'.
It seems like this should be easy to do, but it is proving to be a challenge.
If you want to store your timestamps in UTC and always want the database to send the data to the client in UTC as well, you should use the data type timestamp without time zone, which will not perform any time zone handling for you. That would be the simplest solution.
To convert the data, you could proceed like this:
SET timezone = 'UTC';
ALTER TABLE mytable ALTER timestampcol TYPE timestamp without time zone;
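Expanded slightly, and still using the mytable/timestampcol names from the snippet above, the conversion can be wrapped in a transaction so the timezone setting only applies to the conversion itself (a sketch):
BEGIN;
SET LOCAL timezone = 'UTC';   -- the cast to timestamp strips the offset using this setting
ALTER TABLE mytable ALTER timestampcol TYPE timestamp without time zone;
COMMIT;
-- verify the new column type
SELECT pg_typeof(timestampcol) FROM mytable LIMIT 1;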
Based on this dba.stackexchange.com question. Apparently PG stores the timestamp in UTC time but then converts it to the session time zone. From what I've gathered, since my timestamps don't include time zone information, I need to tell PG that the timezone being stored is UTC, and then convert that to whatever local time is needed, so something like this:
SELECT my_timestamp_in_utc AT TIME ZONE 'UTC' AT TIME ZONE 'America/Denver' as my_local_time
FROM my_table;
This is a little verbose, but I'll go with it for now.

Issue with PostgreSQL: 'now' keeps returning same old value

I have an old web app, the relevant current stack is: Java 8, Tomcat 7, Apache Commons DBCP 2.1, Spring 2.5 (for transactions), iBatis, PostgreSQL 9.2 with postgresql-9.4.1208.jar
Part of the code inserts new records into the incidents table, where the field begin_date (timestamp(3) with time zone) is a creation timestamp, filled with 'now':
insert into incidents
  (..., begin_date, ...)
values
  (..., 'now', ...)
All this is executed via iBatis, with transactions managed programmatically via Spring and connections acquired from the DBCP pool. The webapp (actually a pair, client-side and backoffice, which share most of the code and jars) has been working for years.
Lately, perhaps after some library updates and reorganization (nothing important, it seemed), I've been experiencing an intermittent, hard-to-reproduce, nasty problem: now seems to freeze, and it begins returning the same "old" value. Then many records appear with the same creation timestamp, hours or days old:
db=# select 'now'::timestamptz;
timestamp
-------------------------
2016-06-10 21:59:03.637+00
db=# select rid,begin_date from incidents order by rid desc limit 6;
rid | begin_date
-------+----------------------------
85059 | 2016-06-08 00:11:06.503+00
85058 | 2016-06-08 00:11:06.503+00
85057 | 2016-06-08 00:11:06.503+00
85056 | 2016-06-08 00:11:06.503+00
85055 | 2016-06-08 00:11:06.503+00
85054 | 2016-06-08 00:11:06.503+00
(All the above records were actually created minutes before 2016-06-10 21:50)
How can this happen? It might be some problem related with transactions and/or connection pooling, but I can't figure out what.
I know that now() is an alias of transaction_timestamp(); it returns the time at the start of the transaction. This would suggest that a transaction was not properly closed and the record inserts above were written (unintentionally) in a single long transaction. But this looks rather incredible to me.
First, I can insert a new record (via the webapp) and, using a psql console, I see that it has been written with the same begin_date (if the transaction were uncommitted, I should not see the new record; I have the default isolation level).
Furthermore, the pg_stat_activity view only shows idle connections.
Any clues?
There's the constant (special timestamp value) 'now'.
And there's the function now().
The fact that you are mixing them freely suggests that you are unaware of the all-important difference. The manual:
Special Values
PostgreSQL supports several special date/time input values for convenience, as shown in Table 8-13. The values infinity and -infinity are specially represented inside the system and will be displayed unchanged; but the others are simply notational shorthands that will be converted to ordinary date/time values when read. (In particular, now and related strings are converted to a specific time value as soon as they are read.) All of these values need to be enclosed in single quotes when used as constants in SQL commands.
Bold emphasis mine.
And (like you mentioned yourself already), but quoting the manual:
now() is a traditional PostgreSQL equivalent to transaction_timestamp().
And:
transaction_timestamp() is equivalent to CURRENT_TIMESTAMP
There is more, read the whole chapter.
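As a quick illustration of those equivalences (a hedged sketch; run it as a single statement and compare the columns):
SELECT now()                   AS now_func,
       transaction_timestamp() AS txn_ts,      -- same value as now()
       CURRENT_TIMESTAMP       AS current_ts,  -- same value again
       clock_timestamp()       AS wall_clock;  -- actual wall clock, keeps advancing even inside a transaction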
Now (no pun intended), since you are using the special value instead of the function, you get a different (unexpected for you) behavior with prepared statements.
Consider this demo:
test=# BEGIN;
BEGIN
test=# PREPARE foo AS
test-# SELECT timestamptz 'now' AS now_constant, now() AS now_function;
PREPARE
test=# EXECUTE foo;
now_constant | now_function
-------------------------------+-------------------------------
2016-06-11 03:09:05.622783+02 | 2016-06-11 03:09:05.622783+02 -- identical
(1 row)
test=# commit;
COMMIT
test=# EXECUTE foo;
now_constant | now_function
-------------------------------+------------------------------
2016-06-11 03:09:05.622783+02 | 2016-06-11 03:10:00.92488+02 -- different!
(1 row)
While you run both in the same transaction, 'now' and now() produce the same value. But a prepared statement is designed to last for the duration of your session (possibly across many transactions). Next time you execute the prepared statement, you'll see the difference.
In other words: 'now' implements "early binding", while now() implements "late binding".
You may have introduced prepared statements and/or connection pooling (which can preserve prepared statements for a longer period of time), both generally good ideas. But the hidden problem in your INSERT now starts kicking in.
The "idle connections" you see indicate as much: connections stay open, preserving prepared statements.
In short: Use now().
Alternatively, set the column default of begin_date to now() (not 'now'!) and don't mention the column in the INSERT. Your "creation timestamp" is saved automatically.
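A minimal sketch of that alternative, using the incidents table from the question:
ALTER TABLE incidents
  ALTER COLUMN begin_date SET DEFAULT now();
-- from now on, any INSERT that omits begin_date gets the transaction start time automatically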
I think the function you are looking for is now(), not 'now'...
insert into incidents
(..., begin_date, ...)
values
(..., now(), ...)
or at least that works from a psql shell.
robert.kuhar=# select now();
now
-------------------------------
2016-06-10 18:10:05.953661-07
(1 row)

No getdate() function in EnterpriseDB PostgreSQL

Do you have any idea how to get a getdate() function in EnterpriseDB PostgreSQL? I upgraded to EDB-PSQL, and when I try to restore old data from the free PSQL, it returns errors on some tables since there is no getdate().
I believed this would automatically be created when creating a new database, but it wasn't. :( There is only a now() function.
Can I create the function instead? Help!
If getdate() is like now() (as it is in SQL Server), you can simply say:
create function public.getdate() returns timestamptz
stable language sql as 'select now()';
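As a quick hedged check that the wrapper behaves like the built-in:
SELECT getdate(), now();  -- both columns should show the same transaction timestamp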