Can Perl DBIx::Class override the way a column is retrieved from the database? - perl

I have never used DBIx::Class until today, so I'm completely new at it.
I'm not sure if this is possible or not, but basically I have a table in my SQLite database that has a timestamp column in it. The default value for the timestamp column is "CURRENT_TIMESTAMP". SQLite stores this in the GMT timezone, but my server is in the CDT timezone.
My SQLite query to get the timestamp in the correct timezone is this:
select datetime(timestamp, 'localtime') from mytable where id=1;
I am wondering if it is possible in my DBIx::Class schema for "MyTable" to force it to apply the datetime function every time it retrieves the "timestamp" field from the database?
In the cookbook it looks like it is possible to do this when using the ->search() function, but is it possible to make it so that search(), find(), all(), find_or_new(), or any other method that pulls this column from the database will apply the SQLite datetime() function to it?
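For example, from the cookbook it looks like I could apply it to a single search() with the +select/+as recipe, something roughly like this (untested, and 'MyTable' is just my Result class name):

my $rs = $schema->resultset('MyTable')->search(
    undef,
    {
        # ask SQLite to convert the stored value to local time
        '+select' => [ \"datetime(timestamp, 'localtime')" ],
        '+as'     => [ 'local_timestamp' ],
    },
);

# the converted value comes back via get_column, not the normal accessor
my $local = $rs->first->get_column('local_timestamp');

But that only covers that one call.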
DBIx::Class seems to have great documentation - I think I'm just so new at it I'm not finding the right places/things to search for.
Thanks in advance!

I've used InflateColumn::DateTime in this way and with a timestamp, and I can confirm it works, but I wonder if you have this backward.
If your column is in UTC, mark the column as UTC, and it will come back as a UTC DateTime when you load it. Then, when you call set_time_zone on the DateTime (presumably output is where you care that it's locally zoned), you can set it to local time and it will make the necessary adjustment.
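A minimal sketch of what I mean, assuming the column is named timestamp as in your question and that your local zone is America/Chicago (adjust to taste):

# in the Result class
__PACKAGE__->load_components(qw(InflateColumn::DateTime));
__PACKAGE__->add_columns(
    timestamp => {
        data_type => 'datetime',
        timezone  => 'UTC',    # the stored values are UTC
    },
);

# later, when a row is read back
my $dt = $row->timestamp;                 # DateTime object in UTC
$dt->set_time_zone('America/Chicago');    # convert only for output
print $dt->strftime('%Y-%m-%d %H:%M:%S'), "\n";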

Related

PostgreSQL: require explicit time zone on INSERT/UPDATE of timestamptz columns

I'm digging into how Postgres works and have decided that any date/time data in my database should be of datatype timestamptz.
The rules that govern how Postgres parses date/time information vary based on the server's timezone, the client session timezone, and/or the database timezone setting. I can't expect my developers to know all of this, so to avoid any ambiguity I would like to somehow procedurally require that a time zone be specified in any INSERT or UPDATE to a timestamptz column, and for any INSERTs or UPDATEs to fail when the input value for a timestamptz column doesn't explicitly include a time zone. I've created a regex that I can use to match against the input value; I just don't know how to hook up the plumbing.
I first thought I could do this with a custom domain; however, it appears that the CHECK constraint on a domain is done after the input string has already been parsed, so that won't work. (By then, the server has already inferred the time zone for values where time zone wasn't explicitly included.)
I could use a custom data type, but that's a whole can of worms, and I'm not sure that doing so would preserve all of the operators and functions that operate on the underlying timestamptz type.
I could use BEFORE INSERT and BEFORE UPDATE triggers, but doing so would require me to iterate over every column in the NEW record, determine its datatype, then check the value against a regex to ensure a time zone is specified.
Does the community have any ideas on how to accomplish this? I think the BEFORE INSERT/BEFORE UPDATE is likely the best place to do this work, but I don't know how to iterate over the new record and find the data type for each column.
Is there an easier way to accomplish this that I've missed?
I can't expect my developers to know all of this
I think that's your problem. If you want to use PostgreSQL and work with time zones, you need your developers to understand it.
It's all very simple: just set the timezone parameter correctly for the client session, and everything will work.
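If the client happens to be a Perl application like the one in the main question, a sketch of doing that from DBIx::Class (My::Schema, the DSN, and the zone name are placeholders) would be to issue the SET on every new connection:

my $schema = My::Schema->connect(
    'dbi:Pg:dbname=mydb;host=localhost',
    $user,
    $pass,
    # run this on every connection DBIx::Class opens
    { on_connect_do => [ "SET timezone TO 'UTC'" ] },
);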

Truncate datetimes by second for all queries, but keep milliseconds stored in Postgres

I'm trying to find a way to tell Postgres to truncate all datetime columns so that they are displayed and filtered by seconds (ignoring milliseconds).
I'm aware of the
date_trunc('second', my_date_field)
method, but I do not want to do that for every datetime field in every SELECT and WHERE clause that mentions them. Dates in the WHERE clause also need to match records at second granularity.
Ideally, I'd avoid stripping milliseconds from the data when it is stored. But then again, maybe this is the best way. I'd really like to avoid that data migration.
I can imagine Postgres having some kind of runtime configuration like this:
SET DATE_TRUNC 'seconds';
similar to how timezones are configured, but of course that doesn't work and I'm unable to find anything else in the docs. Do I need to write my own Postgres extension? Did someone already write this?

Can I automatically populate `created_at` timestamp to transaction timestamp with Google Spanner?

I am looking through the documentation for Google Cloud Spanner, and it looks like write operations return a timestamp when the row was actually written.
But when reading rows, it doesn't seem possible to re-capture that timestamp (either as a column that can be read or as a column that could be limited and sorted on).
I assume that I could just update the row after it is written to append a new column (created_at), but ideally it would be nice to have that field automatically appended.
Is there any way to access the original transaction timestamp when querying spanner? I also noticed that there was a CURRENT_TIMESTAMP() sql function. Is that equivalent to the transaction timestamp?
You can create commit timestamp columns, and Cloud Spanner writes the timestamp as part of the transaction:
https://cloud.google.com/spanner/docs/commit-timestamp
Currently, updating the timestamp column is the closest we can get.
CURRENT_TIMESTAMP() returns the current time.
For more information, see:
https://cloud.google.com/spanner/docs/functions-and-operators#current_timestamp

Perl DBIx::Class: getting the current time from the Database

Here is my problem:
I want to calculate how long ago a record was updated in a DB.
The DB is in PostgreSQL, the update_time field is populated by a trigger that uses CURRENT_TIMESTAMP(2). The field is inflated to a DateTime object by DBIx::Class. I get the current time in my code using DateTime->now()
My problem is that when I retrieve the field value, it's off by 1 hour (i.e. it's 1 hour ahead of DateTime->now()). I am in the CET time zone, so currently 1 hour ahead of UTC.
The right way to solve the problem is likely at the DB level. I have tried to replace CURRENT_TIMESTAMP with LOCALTIMESTAMP, to no avail.
I think actually a more robust solution (ie one that doesn't rely on getting the DB right) would be to get the current time stamp from the DB itself. I really just need the epoch, since that's what I use to compute the difference.
So the question is: is there a simple way to do this: get the current time from the DB using DBIx::Class?
A different way to get the DB and DateTime to agree on what the current time is would also be OK!
You can use dbh_do from your DBIx::Class::Storage to run arbitrary queries. With that, just SELECT the CURRENT_TIMESTAMP.
my ( $timestamp ) = $schema->storage->dbh_do(
    sub {
        my ( $storage, $dbh ) = @_;
        $dbh->selectrow_array("SELECT CURRENT_TIMESTAMP");
    },
);
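Since you say you really just need the epoch, a variant of the same call can ask the database for that directly (this assumes PostgreSQL's extract() function):

my ( $epoch ) = $schema->storage->dbh_do(
    sub {
        my ( $storage, $dbh ) = @_;
        # extract(epoch FROM ...) returns seconds since 1970-01-01 UTC
        $dbh->selectrow_array('SELECT extract(epoch FROM CURRENT_TIMESTAMP)');
    },
);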
I always recommend doing all date/time-related things on the app server and not relying on the database server(s). Essentially that means not using a trigger, but passing the datetime on insert/update and making it mandatory (NOT NULL).
Besides that you should store datetimes in UTC and convert to your local or other required timezone in your code.
Your issue likely happens because of an incorrect or missing timezone configuration, in which case DateTime defaults to its floating timezone.

Postgres timestamp to date

I am building a map in CartoDB, which uses Postgres. I'm simply trying to display my dates as 10-16-2014, but haven't been able to because Postgres includes an unneeded timestamp in every date column.
Should I alter the column to remove the timestamp, or is it simply a matter of a (correct) SELECT query? I can SELECT records from a date range no problem with:
SELECT * FROM mytable
WHERE myTableDate >= '2014-01-01' AND myTableDate < '2014-12-31'
However, my dates appear in my CartoDB maps as: 2014-10-16T00:00:00Z and I'm just trying to get the popups on my maps to read: 10-16-2014.
Any help would be appreciated - Thank you!
You are confusing storage with display.
Store a timestamp or date, depending on whether you need the time or not.
If you want formatted output, ask the database for formatted output with to_char, e.g.
SELECT col1, col2, to_char(col3, 'MM-DD-YYYY'), ... FROM ...;
See the PostgreSQL manual.
There is no way to set a user-specified date output format. Dates are always output in ISO format. If PostgreSQL let you specify other formats without changing the SQL query text it'd really confuse client drivers and applications that expect the date format the protocol specifies and get something entirely different.
You have two basic options.
1. Change the column from a timestamp to a date column.
2. Cast to date in your SQL query (i.e. mytimestamp::date works).
In general if this is a presentation issue, I don't usually think that is a good reason to muck around with the database structure. That's better handled by client-side processing or casting in an SQL query. On the other hand if the issue is a semantic one, then you may want to revisit your database structure.