In Postgres, if I do the following:
select (now() - created_at) from my_table
I get results like this:
854 days 12:04:50.29658
Whereas, if I do:
select age(now(), created_at) from my_table
I get results like this:
2 years 4 mons 3 days 12:04:50.29658
According to pg_typeof(...) they are both of type interval
But if I try to extract the years:
select extract(years from age(now(), created_at)) from my_table
I get:
2
Whereas, with:
select extract(years from (now() - created_at)) from my_table
I get:
0
Is there a consistent way to extract the number of years from an interval value (no matter how it was generated)?
Note: I don't have write access to the db, so can't define stored procedures, etc. Needs to be a select statement.
------ UPDATE ------
justify_interval(...) was suggested below, but unfortunately it seems to be inaccurate in its calculations.
E.g:
select age('2018-01-03'::timestamp, '2016-01-05'::timestamp);
gives the correct answer:
1 year 11 mons 29 days
Whereas:
select justify_interval('2018-01-03'::timestamp - '2016-01-05'::timestamp);
gives:
2 years 9 days
I believe this is because it (incorrectly) assumes that all months have 30 days in them
(see justify_days
here: https://www.postgresql.org/docs/current/static/functions-datetime.html)
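For example, justify_days folds every 30 days into one month, regardless of how long the calendar months actually were:
select justify_days(interval '60 days');   -- 2 mons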
The function justify_interval does what you want. Use it in combination with EXTRACT to get the years:
SELECT EXTRACT(years FROM
justify_interval(INTERVAL '1 year 900 days 700 hours'));
date_part
-----------
3
(1 row)
If 30 days = 1 month isn't accurate enough for you, you'll have to use EXTRACT to get the number of days and divide by 365.25.
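A minimal sketch of that approach, using the question's my_table and created_at (floor() just rounds down to whole years; this applies to intervals produced by direct timestamp subtraction, which contain only days and time):
select floor(extract(day from (now() - created_at)) / 365.25) as years
from my_table;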
There is a theoretical limit to how exact you can be, because the number of years in an interval depends somewhat on which dates the interval lies between.
The two-argument age function gives a precise result for the number of years between two dates.
Related
I need to subtract 4 hours from CURRENT_TIMESTAMP in DB2. My query works in SQL Developer, because I can see the records I need to see, but when I run the query from Eclipse there is some problem and I don't know what it is. This is the query I need to run:
"SELECT yerror, COUNT(yerror) AS CantidadErrores FROM kexg.VD03154 WHERE tumod BETWEEN (CURRENT_TIMESTAMP - 4 HOUR) AND CURRENT_TIMESTAMP GROUP BY yerror;";
This query returns an error, but if I put 24 HOUR it is correct. It is also correct if I put 5 DAYS, for example.
I have tried subtracting 04 HOUR, and that is wrong as well.
Thanks in advance
The following works beautifully for me on DB2 for z/OS:
select current timestamp as rightnow,
current timestamp - 4 hours
from sysibm.sysdummy1
;
RIGHTNOW COL1
-------------------------- --------------------------
2020-02-27-08.16.55.142456 2020-02-27-04.16.55.142456
I would like to determine the number of days that an account has been open.
Ideally, I would like to compare the return value to an integer (days),
i.e.
I would like to see if age(open_date) > 14
If anyone has any better ideas...
Thanks
Here is how you get the number of days comparing two dates:
SQL> select extract(day from now()-'2015-02-21'::timestamptz);
date_part
-----------
122
(1 row)
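To get the question's age(open_date) > 14 style of check, note that subtracting one date from another yields an integer number of days directly; a sketch, with accounts and open_date as assumed names:
select *
from accounts
where current_date - open_date::date > 14;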
I'm a SQL newbie.
I have a Postgres table that has datetimes and values, with sub-minute entries. I want to create a new table that takes the average of each minute and saves that, instead of having sub-minute entries. So something like this:
1-12-07 12:29:56:00 2
1-12-07 12:29:56:16 3
1-12-07 12:29:56:34 3
1-12-07 12:29:56:58 4
1-12-07 12:30:00:00 7
to
1-12-07 12:29:00 3
1-12-07 12:30:00 #
Is there a way to do it in Postgres?
The only solution I can think of is using a Python script to do the trick, but that will take forever, as I have a significant amount of data.
To do this, you must group the results by the time truncated to the minute:
insert into "NewTable"
select minute, avg(value)
from (select date_trunc('minute', date) as minute, value from "Time") as average
group by minute;
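The subquery isn't strictly required; grouping on the truncated expression directly is equivalent (a sketch using the same assumed table and column names):
insert into "NewTable"
select date_trunc('minute', date) as minute, avg(value)
from "Time"
group by 1;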
This is going to be awful and inefficient and an incomplete answer; I recommend writing a script (just my .02).
It should be something like this, since there is no sqlFiddle provided in the question:
select avg(my_count)
from
(
  select my_date, extract(minute from my_time) as my_minute, my_count
  from my_table
) as per_minute
group by my_date, my_minute
I am trying to solve an interesting problem. I have a table that has, among other data, these columns (dates in this sample are shown in European format - dd/mm/yyyy):
n_place_id dt_visit_date
(integer) (date)
========== =============
1 10/02/2012
3 11/03/2012
4 11/05/2012
13 14/06/2012
3 04/10/2012
3 03/11/2012
5 05/09/2012
13 18/08/2012
Basically, each place may be visited multiple times - and the dates may be in the past (completed visits) or in the future (planned visits). For the sake of simplicity, today's visits are part of planned future visits.
Now, I need to run a select on this table, which would pull unique place IDs from this table (without date) sorted in the following order:
Future visits go before past visits
Future visits take precedence in sorting over past visits for the same place
For future visits, the earliest date must take precedence in sorting for the same place
For past visits, the latest date must take precedence in sorting for the same place.
For example, for the sample data shown above, the result I need is:
5 (earliest future visit)
3 (next future visit into the future)
13 (latest past visit)
4 (previous past visit)
1 (earlier visit in the past)
Now, I can achieve the desired sorting using case when in the order by clause like so:
select
n_place_id
from
place_visit
order by
(case when dt_visit_date >= now()::date then 1 else 2 end),
(case when dt_visit_date >= now()::date then 1 else -1 end) * extract(epoch from dt_visit_date)
This sort of does what I need, but it does contain repeated IDs, whereas I need unique place IDs. If I try to add distinct to the select statement, postgres complains that the order by expressions must appear in the select list - but then distinct won't be sensible any more, as I would have dates in there.
Somehow I feel that there should be a way to get the result I need in one select statement, but I can't get my head around how to do it.
If this can't be done, then, of course, I'll have to do the whole thing in the code, but I'd prefer to have this in one SQL statement.
P.S. I am not worried about the performance, because the dataset I will be sorting is not large. After the where clause is applied, it will rarely contain more than about 10 records.
With DISTINCT ON you can easily show additional columns of the row with the resulting n_place_id:
SELECT n_place_id, dt_visit_date
FROM (
SELECT DISTINCT ON (n_place_id) *
,dt_visit_date < now()::date AS prio -- future first
,@ (now()::date - dt_visit_date) AS diff -- closest first
FROM place_visit
ORDER BY n_place_id, prio, diff
) x
ORDER BY prio, diff;
Effectively I pick the row with the earliest future date (including "today") per n_place_id - or latest date in the past, failing that.
Then the resulting unique rows are sorted by the same criteria.
FALSE sorts before TRUE
The "absolute value" # helps to sort "closest first"
More on the Postgres specific DISTINCT ON in this related answer.
Result:
n_place_id | dt_visit_date
------------+--------------
5 | 2012-09-05
3 | 2012-10-04
13 | 2012-08-18
4 | 2012-05-11
1 | 2012-02-10
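If you want to see how prio and diff rank the rows before DISTINCT ON collapses them, the inner expressions can be run on their own against the same table:
SELECT n_place_id, dt_visit_date,
       dt_visit_date < now()::date AS prio,
       @ (now()::date - dt_visit_date) AS diff
FROM place_visit
ORDER BY prio, diff;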
Try this
select n_place_id
from
(
select *,
extract(epoch from (dt_visit_date - now())) as seconds,
1 - SIGN(extract(epoch from (dt_visit_date - now())) ) as futurepast
from place_visit
) v
group by n_place_id
order by max(futurepast) desc, min(abs(seconds))
I'm a bit of a newbie when it comes to Postgres, so bear with me a wee bit and I'll see if I can put up enough information.
I insert weather data into a table every 10 mins, and I have a time column that is stamped with an epoch date.
I have a column with the last hour's rainfall, and every hour that number of course changes with the running total (for that hour).
What I would like to do is skim through the rows to the end of each hour and get that row, but do it over the last 4 hours, so I would only be returning 4 rows, say.
Is this possible in 1 query? Or should I do multiple queries?
I would like to do this in 1 query, but I'm not fussed...
Thanks
Thanks guys for your answers. I was/am a bit confused by yours, Gavin - sorry :) It comes from not knowing this terribly well.
I'm still a bit unsure about this, so I'll try and explain it a bit better...
I have a C program that inserts data into the database every 10 mins. It reads the data from a device that keeps the last hour's rainfall, so every 10 mins it could go up by x amount.
So I guess I have 6 rows/hr of data.
My plan was to go back (in my PHP page) every 7 rows, which would be the last entry for each hour, and just grab that value. Hence why I would only ever need 4 rows... just spaced out a bit!
My table (readings) has data like this:
index | time (text) | last hrs rain fall (text)
1 | 1316069402 | 1.2
All ears to better ways of storing it too :) I very much appreciate your help, guys - thanks.
You should be able to do it in one query...
Would something along the lines of:
SELECT various_columns,
the_hour,
SUM ( column_to_be_summed )
FROM ( SELECT various_columns,
column_to_be_summed,
extract ( hour FROM TIME ) AS the_hour
FROM readings
WHERE TIME > ( NOW() - INTERVAL '4 hour' ) ) a
GROUP BY various_columns,
the_hour ;
do what you need?
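One caveat: the question stores time as epoch seconds in a text column, so it would need converting before the interval comparison and the hour extraction will work; a sketch of that conversion, assuming the question's readings table:
SELECT to_timestamp(time::bigint) AS ts,
       extract ( hour FROM to_timestamp(time::bigint) ) AS the_hour
FROM readings
WHERE to_timestamp(time::bigint) > ( NOW() - INTERVAL '4 hour' );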
SELECT SUM(rainfall) FROM weatherdata WHERE time > (NOW() - INTERVAL '4 hour' );
I don't know your column names, but that should do it - the ones in caps are pgsql types. Is that what you are after?
I am not sure if this is exactly what you are looking for but perhaps it may serve as a basis for adaptation.
I often have a requirement to produce summary data over time periods, though I don't use epoch time, so there may be better ways of manipulating the values than I have come up with.
Create and populate a test table:
create table epoch_t(etime numeric);
insert into epoch_t
select extract(epoch from generate_series(now(),now() - interval '6 hours',interval '-10 minutes'));
To divide up time into period buckets:
select generate_series(to_char(now(),'yyyy-mm-dd hh24:00:00')::timestamptz,
to_char(now(),'yyyy-mm-dd hh24:00:00')::timestamptz - interval '4 hours',
interval '-1 hour');
Convert epoch time to postgres timestamp:
select timestamptz 'epoch' + etime * '1 second'::interval from epoch_t;
Then truncate to the hour:
select to_char(timestamptz 'epoch' + etime * '1 second'::interval,
'yyyy-mm-dd hh24:00:00')::timestamptz from epoch_t
To provide summary information by hour :
select to_char(timestamptz 'epoch' + etime * '1 second'::interval,
'yyyy-mm-dd hh24:00:00')::timestamptz,
count(*)
from epoch_t
group by 1
order by 1 desc;
If you might have gaps in the data but need to report zero results, use generate_series to create period buckets and left join to the data table.
In this case I create sample hour buckets going back further than the data populated above - 9 hours instead of 6 - and join on the conversion of epoch time to a timestamp truncated to the hour.
select per.sample_hour,
sum(case etime is null when true then 0 else 1 end) as etcount
from (select generate_series(to_char(now(),
'yyyy-mm-dd hh24:00:00')::timestamptz,
to_char(now(),'yyyy-mm-dd hh24:00:00')::timestamptz - interval '9 hours',
interval '-1 hour') as sample_hour) as per
left join epoch_t on to_char(timestamptz 'epoch' + etime * '1 second'::interval,
'yyyy-mm-dd hh24:00:00')::timestamptz = per.sample_hour
group by per.sample_hour
order by per.sample_hour desc;