I know how to extract a DOW from a date, e.g. SELECT EXTRACT(DOW FROM '2018-04-23'::date).
But how can I do the inverse? How can I take a series of DOW and convert them into the next date for a given week? (relative to the current week).
+-----+---------+
| id  | the_dow |
+-----+---------+
| 358 |       1 |
| 359 |       2 |
| 360 |       5 |
| 361 |       2 |
| 362 |       3 |
+-----+---------+
Just add that number to the start of the week:
date_trunc('week', current_date)::date + the_dow
As far as I know, date_trunc() uses the ISO definition of the week, so the first day will be Monday. Using isodow for the extract and then subtracting 1 from that value would be easier, because isodow numbers Monday as 1 while dow numbers Sunday as 0.
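A sketch combining both steps, assuming the_dow holds isodow values (1 = Monday .. 7 = Sunday) and a hypothetical table name:

```sql
-- Start of the current ISO week (a Monday), plus the stored day offset.
SELECT id,
       date_trunc('week', current_date)::date + (the_dow - 1) AS next_date
FROM mytable;  -- hypothetical table name
```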
Given a Postgres table with columns highwater_datetime::timestamp and highwater::integer, I am trying to construct a SELECT statement for a given highwater_datetime range that generates one row per hour, with a column for the max highwater in that hour (first occurrence when there are duplicates) and another column showing the highwater_datetime when it occurred (truncated to the minute and ordered by highwater_datetime ascending). E.g.:
+--------------------+---------------+
| highwater_datetime | max_highwater |
+--------------------+---------------+
| 2021-01-27 20:05   |             8 |
| 2021-01-27 21:00   |             7 |
| 2021-01-27 22:00   |             7 |
| 2021-01-27 23:00   |             7 |
| 2021-01-28 00:00   |             7 |
| 2021-01-28 01:32   |             7 |
| 2021-01-28 02:00   |             7 |
| 2021-01-28 03:00   |             7 |
| 2021-01-28 04:22   |             9 |
+--------------------+---------------+
DISTINCT ON should do the trick:
SELECT DISTINCT ON (date_trunc('hour', highwater_datetime))
highwater_datetime,
highwater
FROM mytable
ORDER BY date_trunc('hour', highwater_datetime),
highwater DESC,
highwater_datetime;
DISTINCT ON will output the first row for each entry with the same hour according to the ORDER BY clause.
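The question also asks for the timestamp truncated to the minute; a sketch that wraps the query above in a subquery (the output aliases are assumptions):

```sql
SELECT date_trunc('minute', highwater_datetime) AS highwater_datetime,
       highwater AS max_highwater
FROM (
    SELECT DISTINCT ON (date_trunc('hour', highwater_datetime))
           highwater_datetime,
           highwater
    FROM mytable
    ORDER BY date_trunc('hour', highwater_datetime),
             highwater DESC,
             highwater_datetime
) sub
ORDER BY highwater_datetime;
```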
I have a table with the following columns: local_id | time_in | time_out | date | employee_id
I have to calculate the average working hours (calculated from time_out and time_in) on a monthly basis in PostgreSQL. I have no clue how to do that; I was thinking about using the date_part function...
Here are the table details:
local_id | time_in | time_out | date | employee_id
---------+----------+----------+------------+-------------
7 | 08:00:00 | 17:00:00 | 2020-02-12 | 2
6 | 08:00:00 | 17:00:00 | 2020-02-12 | 4
8 | 09:00:00 | 17:00:00 | 2020-02-12 | 3
13 | 08:05:00 | 17:00:00 | 2020-02-17 | 3
12 | 08:00:00 | 18:09:00 | 2020-02-13 | 2
Demo: db<>fiddle (extended example covering two months)
SELECT
employee_id,
date_trunc('month', the_date) AS month, -- 1
AVG(time_out - time_in) -- 2, 3
FROM
mytable
GROUP BY employee_id, month -- 3
date_trunc() "shortens" the date to a certain date part. In that case, all dates are truncated to the month. This gives the opportunity to group by month. (for your "monthly basis")
Calculate the working time by calculating the difference of both times
Grouping by employee_id and calculated month, calculating the average of the time differences.
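Applied to the five sample rows (note the query refers to the date column as the_date, while the sample table calls it date, so the name has to be adjusted), the averages should work out to something like:

```
 employee_id |        month        |   avg
-------------+---------------------+----------
           2 | 2020-02-01 00:00:00 | 09:34:30
           3 | 2020-02-01 00:00:00 | 08:27:30
           4 | 2020-02-01 00:00:00 | 09:00:00
```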
Hello, I have been trying to generate a report based on some DB data.
I need to calculate per finished DAY, so in this case the day for calculation will be 2001-01-02 while the current date is 2001-01-03.
So basically the day before the current date. I need:
MAX count of locker_orders occupancy in that day + time of occurrence (peak max load of lockers per place)
MIN count of locker_orders occupancy in that day + time of occurrence (peak min load of lockers per place)
AVG count of locker_orders occupancy in that day (average load in that day, based on min, max and the number of lockers per place)
grouped PER place_id
grouped PER each minute of that day
NUMBER of all lockers in the store on that day (may change over time)
Where there is no pickup date the locker is still occupied; it may span into other days.
I was able to write a simple query grouping by place and by the minute the locker order was created at, but I currently have a problem restricting it to the scope of that day.
(The question included a hand-drawn timeline image here.)
Given a schema of data containing
DB DATA
LOCKERS
+----+-----------------------+
| id | created_at (DATETIME) |
+----+-----------------------+
|  1 | 2001-01-01 00:00      |
|  2 | 2001-01-01 00:00      |
|  3 | 2001-01-01 00:00      |
|  4 | 2001-01-01 00:00      |
|  5 | 2001-01-01 00:00      |
+----+-----------------------+
LOCKER_ORDERS
+----+-----------------------+------------------------+----------+-----------+
| id | created_at (DATETIME) | pickup_date (DATETIME) | place_id | locker_id |
+----+-----------------------+------------------------+----------+-----------+
|  1 | 2001-01-02 10:00      | 2001-01-02 13:25       |        1 |         2 |
|  2 | 2001-01-02 07:45      | 2001-01-02 11:50       |        1 |         1 |
|  3 | 2001-01-02 19:30      | NULL                   |        1 |         4 |
|  4 | 2001-01-01 14:40      | 2001-01-01 21:15       |        1 |         5 |
|  5 | 2001-01-02 12:25      | NULL                   |        1 |         3 |
|  6 | 2001-01-02 13:30      | 2001-01-02 18:40       |        1 |         2 |
|  7 | 2001-01-02 12:45      | 2001-01-02 20:50       |        1 |         1 |
|  8 | 2001-01-02 07:40      | 2001-01-02 18:15       |        1 |         5 |
+----+-----------------------+------------------------+----------+-----------+
OUTPUT DATA - the desired output
+------------+----------+-----+-----+-----+----------------------------------------------+
| Date (day) | place_id | min | max | avg | NO of all lockers in that day in given place |
+------------+----------+-----+-----+-----+----------------------------------------------+
| 2001-01-02 |        1 |   0 |   4 |   2 |                                            8 |
+------------+----------+-----+-----+-----+----------------------------------------------+
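No answer is quoted here; a minimal sketch of one possible approach, assuming the table and column names above. A per-minute series for the previous day is cross-joined with the places so that minutes with no open orders count as 0 (the peak's time of occurrence is left out but could be added with DISTINCT ON):

```sql
WITH minutes AS (
    SELECT generate_series(
               (current_date - 1)::timestamp,
               current_date::timestamp - interval '1 minute',
               interval '1 minute') AS minute
),
occupancy AS (
    -- one row per place and minute; occupied = open orders at that minute
    SELECT p.place_id,
           m.minute,
           count(o.id) AS occupied
    FROM (SELECT DISTINCT place_id FROM locker_orders) p
    CROSS JOIN minutes m
    LEFT JOIN locker_orders o
           ON o.place_id = p.place_id
          AND o.created_at <= m.minute
          AND (o.pickup_date IS NULL OR o.pickup_date > m.minute)
    GROUP BY p.place_id, m.minute
)
SELECT place_id,
       min(occupied) AS min,
       max(occupied) AS max,
       round(avg(occupied), 2) AS avg
FROM occupancy
GROUP BY place_id;
```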
I have the following table with epoch timestamps in Postgres. I would like to select the timestamps where the time is from 20:00 to 21:00 in PST. I have tried the following partial attempt, but I can't seem to extract both the hours and the minutes.
SELECT timestamp from table where extract('hour' from to_timestamp(created_at) at time zone 'America/Los_angeles') > 20
| created_at |
| 1526528788 |
| 1526442388 |
| 1526309188 |
| 1526359588 |
| 1526532388 |
| 1526489188 |
Expected result:
| created_at |
| 1526528788 |
| 1526442388 |
Any help would be greatly appreciated.
Why do you write America/Los Angeles when you mean PST? They are (sometimes) different.
Does that solve your problem:
... WHERE extract(hour FROM
to_timestamp(1526309188) AT TIME ZONE 'PST'
) BETWEEN 20 AND 21;
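Applied to the created_at column, a sketch (the table name is hypothetical). Note that BETWEEN 20 AND 21 also matches times up to 21:59; for a strict 20:00 to 21:00 window, comparing the hour with = 20 may be what is wanted:

```sql
SELECT created_at
FROM   events  -- hypothetical table name
WHERE  extract(hour FROM
           to_timestamp(created_at) AT TIME ZONE 'PST'
       ) = 20;
```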
I have some data:
id_merchant | data | sell
11 | 2009-07-20 | 1100.00
22 | 2009-07-27 | 1100.00
11 | 2005-07-27 | 620.00
31 | 2009-08-07 | 2403.20
33 | 2009-08-12 | 4822.00
52 | 2009-08-14 | 4066.00
52 | 2009-08-15 | 295.00
82 | 2009-08-15 | 0.00
23 | 2011-06-11 | 340.00
23 | 2012-03-22 | 1000.00
23 | 2012-04-08 | 1000.00
23 | 2012-07-13 | 36.00
23 | 2013-07-17 | 2480.00
23 | 2014-04-09 | 1000.00
23 | 2014-06-10 | 1500.00
23 | 2014-07-20 | 700.50
I want to create a table as SELECT with an interval of 2 years. The first date for a merchant is min(data). So I generate a series: generate_series(min(data)::date, current_date, '2 years')
I want to get to table like that:
id_merchant | data | sum(sell)
23 | 2011-06-11 | 12382.71
23 | 2013-06-11 | 12382.71
23 | 2015-06-11 | 12382.71
But there is some mistake in my query, because sum(sell) is the same for all series and the sum is wrong. Even if I sum the sales myself, the total is about 6000, not 12382.71.
My query:
select m.id_gos_pla,
generate_series(m.min::date,dath()::date,'2 years')::date,
sum(rch.suma)
from rch, minmax m
where rch.id_gos_pla=m.id_gos_pla
group by m.id_gos_pla,m.min,m.max
order by 1,2;
Please help.
I would do it this way:
select
periods.id_merchant,
periods.date as period_start,
(periods.date + interval '2' year - interval '1' day)::date as period_end,
coalesce(sum(merchants.amount), 0) as sum
from
(
select
id_merchant,
generate_series(min(date), max(date), '2 year'::interval)::date as date
from merchants
group by id_merchant
) periods
left join merchants on
periods.id_merchant = merchants.id_merchant and
merchants.date >= periods.date and
merchants.date < periods.date + interval '2' year
group by periods.id_merchant, periods.date
order by periods.id_merchant, periods.date
We use a sub-query to generate the date periods for each id_merchant, starting from the first date for that merchant with the required interval. Then we join it with the merchants table on a date-within-period condition and group by merchant_id and period (periods.date, the starting date of the period, is enough). Finally we select everything we need: the starting date, the ending date, the merchant and the sum.
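For merchant 23 in the sample data (first date 2011-06-11, last date 2014-07-20, so the series yields two period starts), the query should produce something like the following. Note the answer assumes columns named date and amount, while the sample table calls them data and sell:

```
 id_merchant | period_start | period_end |   sum
-------------+--------------+------------+---------
          23 | 2011-06-11   | 2013-06-10 | 2376.00
          23 | 2013-06-11   | 2015-06-10 | 5680.50
```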