I have two columns one is a time the other a timestamp
ALTER TABLE public.tour
ADD COLUMN reprocess_toupdate timestamp without time zone DEFAULT NOW();
ALTER TABLE public.tour
ADD COLUMN reprocess_updated time without time zone DEFAULT NOW();
when I execute:
select reprocess_toupdate, reprocess_updated
from tour
where reprocess_toupdate::date > reprocess_updated::date;
I get an error:
ERROR: cannot cast type time without time zone to date
without ::date, I get this error:
ERROR: operator does not exist: timestamp without time zone > time without time zone
That is because a TIME column does not have a date component. Its range of values is 00:00:00 to 24:00:00 (see the documentation, Section 8.5 Date/Time Types). Since it has no date component, you cannot cast it to date. The proper solution would be to change the column type to "timestamp without time zone". If that is not possible, then either compare just the times, or "reattach" a date and then compare:
with dateset as (
    select '2019-06-02 13:00:00'::time without time zone as tm,
           (now() - interval '1 day')::timestamp without time zone as dt
)
select tm, dt, date_trunc('day', dt) + tm as redt
from dateset;
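If changing the column type is an option, the conversion can even be done in place. This is only a sketch; it assumes the existing time values should be interpreted as times of day on the current date, which may or may not match your data:
-- Drop the old default first, since it was defined for the time type
alter table public.tour alter column reprocess_updated drop default;
-- Assumption: old values are times of day on the current date
alter table public.tour
    alter column reprocess_updated type timestamp without time zone
    using current_date + reprocess_updated;
alter table public.tour alter column reprocess_updated set default now();
After that, the original comparison reprocess_toupdate::date > reprocess_updated::date works as written.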
Works here:
create temporary table so (id serial primary key, ts timestamp default now());
insert into so (ts) values (now());
select * from so where ts::date < now();
Output:
+------+----------------------------+
| id | ts |
|------+----------------------------|
| 1 | 2019-07-01 10:16:43.093662 |
+------+----------------------------+
Hi, I have an entrytime and an exittime timestamp in my database. How can I query it to display only the rows where the person exited more than an hour later?
Select * from store where EXTRACT(EPOCH FROM (exittime - entrytime))/3600 >60
That's what I have so far, but it won't work; any help would be appreciated.
Just subtract the values and compare the result with an interval:
Select *
from store
where exittime - entrytime > interval '1 hour';
This assumes that both columns are defined as timestamptz or timestamp.
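If you would rather keep the EXTRACT(EPOCH ...) approach from the question, the comparison just needs to be against 3600 seconds (or keep the division by 3600 and compare against 1); a minimal sketch:
-- extract(epoch from interval) yields seconds, so "more than an hour" is > 3600
select *
from store
where extract(epoch from (exittime - entrytime)) > 3600;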
How can I convert this to work in PostgreSQL?
TO_CHAR(CAST(FROM_TZ(CAST(columnname AS TIMESTAMP), 'GMT') AT TIME ZONE 'US/Eastern' AS DATE),'MM/DD/YY HH:MI AM') AS dt
testdb=# select TO_CHAR(CAST('2020-02-28T18:43' AS TIMESTAMP) AT TIME ZONE 'UTC' AT TIME ZONE 'US/Eastern','MM/DD/YY HH:MI AM') as dt;
dt
-------------------
02/28/20 01:43 PM
(1 row)
To make it clear what's going on, we'll start with the cast to TIMESTAMP, show that adding the first AT TIME ZONE makes it a time-zone-aware timestamp, and then show how the second one does the time zone conversion.
testdb=# select CAST('2020-02-28T18:43' AS TIMESTAMP),
testdb-# CAST('2020-02-28T18:43' AS TIMESTAMP) AT TIME ZONE 'GMT',
testdb-# CAST('2020-02-28T18:43' AS TIMESTAMP) AT TIME ZONE 'GMT' AT TIME ZONE 'US/Eastern';
timestamp | timezone | timezone
---------------------+------------------------+---------------------
2020-02-28 18:43:00 | 2020-02-28 18:43:00+00 | 2020-02-28 13:43:00
(1 row)
See the timezone conversion docs for more details.
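Applied to a column rather than a literal, the Oracle expression from the question would translate to something like the sketch below; it assumes columnname is a timestamp without time zone, and some_table is a hypothetical table name:
-- Interpret the naive timestamp as GMT, convert to US/Eastern, then format
select to_char(
         (columnname at time zone 'GMT') at time zone 'US/Eastern',
         'MM/DD/YY HH:MI AM'
       ) as dt
from some_table;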
There is this question about how to extract microseconds from an interval field
I want to do the opposite: I want to create an interval from a numeric value of microseconds. How would I do this?
The reason is I want to take a table of this format
column_name | data_type
-------------+--------------------------
id | bigint
date | date
duration | numeric
and import it into a table like this
column_name | data_type
-------------+--------------------------
id | integer
date | date
duration | interval
Currently I am trying:
select CAST(duration AS interval) from boboon.entries_entry;
which gives me:
ERROR: cannot cast type numeric to interval
LINE 1: select CAST(duration AS interval) from boboon.entries_entry;
You can do:
select duration * interval '1 microsecond'
This is how you convert any date part to an interval in Postgres. Postgres supports microseconds, as well as more common units.
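For the actual migration, that same expression can go into an ALTER TABLE ... USING clause; a sketch, assuming the numeric duration column holds microseconds as described in the question:
-- Convert the column in place: numeric microseconds -> interval
alter table boboon.entries_entry
    alter column duration type interval
    using duration * interval '1 microsecond';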
You can also append the unit name and then cast the result to interval.
example:
select (123.1234 || ' seconds')::interval
outputs:
00:02:03.1234
valid units are the following (and their plural forms):
microsecond
millisecond
second
minute
hour
day
week
month
quarter
year
decade
century
millennium
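Applied to the microseconds from the question, this approach would look roughly like this (a sketch using the table and column names from the question):
-- Append the unit name, then cast the resulting text to interval
select (duration || ' microseconds')::interval as duration_interval
from boboon.entries_entry;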
Question: How is query 1 "semantically" different from query 2?
Background:
I need to extract data from a table in a database that is in my local time zone (AT TIME ZONE 'America/New_York').
The table has data for various time zones, such as 'America/Los_Angeles' and 'America/North_Dakota/New_Salem'.
(Postgres stores the table data for the various time zones in my local time zone.)
So every time I retrieve data for a location other than my local one, I convert it to the relevant time zone for evaluation purposes.
Query 1:
test_db=# select count(id) from click_tb where date::date AT TIME ZONE 'America/Los_Angeles' = '2017-05-22'::date AT TIME ZONE 'America/Los_Angeles';
count
-------
1001
(1 row)
Query 2:
test_db=# select count(id) from click_tb where (date AT TIME ZONE 'America/Los_Angeles')::date = '2017-05-22'::date;
count
-------
5
(1 row)
Table structure:
test_db=# \d+ click_tb
Table "public.click_tb"
Column | Type | Modifiers | Storage | Stats target | Description
-----------------------------------+--------------------------+-------------------------------------------------------------+----------+--------------+-------------
id | integer | not null default nextval('click_tb_id_seq'::regclass) | plain | |
date | timestamp with time zone | | plain | |
Indexes:
"click_tb_id" UNIQUE CONSTRAINT, btree (id)
"click_tb_date_index" btree (date)
Query 1 and query 2 do not produce consistent results.
As per my tests, query 3 below semantically addresses my requirement.
Your critical feedback is welcome.
Query 3:
test_db=# select count(id) from click_tb where ((date AT TIME ZONE 'America/Los_Angeles')::timestamp with time zone)::date = '2017-05-22'::date;
Do not convert the timestamp field. Instead, do a range query. Since your data is already using a timestamp with time zone type, just set the time zone of your query accordingly.
set TimeZone = 'America/Los_Angeles';
select count(id) from click_tb
where date >= '2017-01-02'
and date < '2017-01-03';
Note how this uses a half-open interval of dates (at the start of day in the set time zone). If you want to compute the second date from your first date, then:
set TimeZone = 'America/Los_Angeles';
select count(id) from click_tb
where date >= '2017-01-02'
and date < (timestamp with time zone '2017-01-02' + interval '1 day');
This properly handles daylight saving time, and it keeps the predicate sargable, so the index on date can still be used.
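If you would rather not change the session TimeZone setting, the same half-open range can be written with explicit AT TIME ZONE boundaries; a sketch under the same assumptions (the column itself stays untouched, so the index on date remains usable):
-- Boundaries are Los Angeles local midnights converted to timestamptz
select count(id)
from click_tb
where date >= timestamp '2017-01-02' at time zone 'America/Los_Angeles'
  and date <  timestamp '2017-01-03' at time zone 'America/Los_Angeles';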
I have a table with epoch values (one per minute, the epoch itself is in milliseconds) and temperatures.
select * from outdoor_temperature order by time desc;
time | value
---------------+-------
1423385340000 | 31.6
1423385280000 | 31.6
1423385220000 | 31.7
1423385160000 | 31.7
1423385100000 | 31.7
1423385040000 | 31.8
1423384980000 | 31.8
1423384920000 | 31.8
1423384860000 | 31.8
[...]
I want to get the highest single value in a given day, which I'm doing like this:
SELECT *
FROM
outdoor_temperature
WHERE
value = (
SELECT max(value)
FROM outdoor_temperature
WHERE
((timestamp with time zone 'epoch' + (time::float/1000) * interval '1 second') at time zone 'Australia/Sydney')::date
= '2015-02-05' at time zone 'Australia/Sydney'
)
AND
((timestamp with time zone 'epoch' + (time::float/1000) * interval '1 second') at time zone 'Australia/Sydney')::date
= '2015-02-05' at time zone 'Australia/Sydney'
ORDER BY time DESC LIMIT 1;
On my Linode, running CentOS 5 and Postgres 8.4, it returns perfectly (I get a single value, within that date, with the maximum temperature). On my MacBook Pro with Postgres 9.3.5, however, the exact same query against the exact same data doesn't return anything. I started simplifying everything to work out what was going wrong, and got to here:
SELECT max(value)
FROM outdoor_temperature
WHERE
((timestamp with time zone 'epoch' + (time::float/1000) * interval '1 second') at time zone 'Australia/Sydney')::date
= '2015-02-05' at time zone 'Australia/Sydney';
max
-----
(1 row)
It's empty, and yet returning one row?!
My questions are:
Firstly, why is that query working against Postgres 8.4 and doing something different on 9.3.5?
Secondly, is there a much simpler way to achieve what I'm trying to do? I feel like there should be but if so I've not managed to work it out. This ultimately needs to work on Postgres 8.4.
I'm not really sure why you're getting no results - you seem to simply be missing data for this day.
But you really should use another query for selecting a date, as your query would not be able to use an index.
You should select like this:
-- time stores epoch milliseconds, so scale the epoch-second boundaries by 1000
select max(value)
from outdoor_temperature
where time >= extract(epoch from '2015-02-05'::timestamp at time zone 'Australia/Sydney') * 1000
  and time <  extract(epoch from ('2015-02-05'::timestamp + interval '1 day') at time zone 'Australia/Sydney') * 1000;
This is much simpler, and this way the database can use an index on time, which should probably be the primary key (giving you an automatic index).
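To get back the full row with the highest temperature for the day (the original goal), the same range predicate can be combined with an ORDER BY instead of the correlated max() subquery; a sketch, assuming the same millisecond epoch column:
-- Highest reading on 2015-02-05, Sydney time; the latest reading wins ties
select *
from outdoor_temperature
where time >= extract(epoch from '2015-02-05'::timestamp at time zone 'Australia/Sydney') * 1000
  and time <  extract(epoch from ('2015-02-05'::timestamp + interval '1 day') at time zone 'Australia/Sydney') * 1000
order by value desc, time desc
limit 1;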