Aggregated values depending on another field - amazon-redshift

I have a table with a date-time column and multiple properties, some of which I group by and some of which I aggregate; a typical query would be "get me revenue per customer for last week".
Now I want to see the change between the requested period and the previous one, so I will have two columns: revenue and previous_revenue.
Right now I'm querying the rows of the requested period plus the rows of the previous period, and for each aggregated field I add a CASE statement that returns the value, or 0 if the row is not in the period I want.
That leads to as many CASE expressions as there are aggregated fields, all with the same conditional.
I'm wondering if there is a better design for this use case...
SELECT
  customer,
  SUM(
    CASE WHEN TIMESTAMP_CMP(ft.date_hour, '2016-07-01 00:00:00') >= 0 THEN
      revenue
    ELSE 0 END
  ) AS revenue,
  SUM(
    CASE WHEN TIMESTAMP_CMP(ft.date_hour, '2016-07-01 00:00:00') < 0 THEN
      revenue
    ELSE 0 END
  ) AS previous_revenue
FROM _revenue ft -- table name assumed; the question's FROM clause is not shown
WHERE ft.date_hour >= '2016-06-01 00:00:00'
  AND ft.date_hour <= '2016-07-31 23:59:59'
GROUP BY customer
(In my real use case I have many columns, which makes it even uglier.)

First, I'd suggest refactoring out the timestamps and precalculating the current and previous periods for later use. This is not strictly necessary to solve your problem, though:
create temporary table _period as
select
    '2016-07-01 00:00:00'::timestamp as curr_period_start
  , '2016-07-31 23:59:59'::timestamp as curr_period_end
  , '2016-06-01 00:00:00'::timestamp as prev_period_start
  , '2016-06-30 23:59:59'::timestamp as prev_period_end
;
Now a possible design that avoids repeating the timestamps and CASE expressions is to group by period first and then do a FULL OUTER JOIN of that table with itself:
with _aggregate as (
    select
        case
            when date_hour between prev_period_start and prev_period_end then 'previous'
            when date_hour between curr_period_start and curr_period_end then 'current'
        end::varchar(20) as period
      , customer
      -- < other columns to group by go here >
      , sum(revenue) as revenue
      -- < other aggregates go here >
    from
        _revenue, _period
    where
        date_hour between prev_period_start and curr_period_end
    group by 1, 2
)
select
    customer
  , current_period.revenue as revenue
  , previous_period.revenue as previous_revenue
from
    (select * from _aggregate where period = 'previous') previous_period
    full outer join (select * from _aggregate where period = 'current') current_period
    using(customer) -- All columns which have been grouped by must go into the using() clause:
                    -- e.g. using(customer, some_column, another_column)
;
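One detail to watch: with the FULL OUTER JOIN, a customer who has rows in only one of the two periods ends up with NULL for the other period's revenue. If you prefer zeros, wrap the joined columns in COALESCE (a small variation on the final SELECT above):
select
    customer
  , coalesce(current_period.revenue, 0)  as revenue
  , coalesce(previous_period.revenue, 0) as previous_revenue
from
    (select * from _aggregate where period = 'previous') previous_period
    full outer join (select * from _aggregate where period = 'current') current_period
    using(customer)
;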

Related

How to get timestamp associated with percentile(x) value using timescale db time_bucket

I need to find the percentile(50) value and its timestamp using the TimescaleDB time_bucket. Finding P50 is easy, but I don't know how to get the timestamp.
Select time_bucket('120 sec',timestamp_utc) as interval_size,
first(timestamp_utc,int_val) as minTime,
min(int_val) as minVal,
last(timestamp_utc,int_val) as maxTime,
max(int_val) as maxVal,
-- timestamp of percentile value below.
percentile_disc(0.5) within group (order by int_val) as medianVal
from timeseries.raw
where timestamp_utc > NOW() - INTERVAL '10 min'
AND tag_id = 59560544877390423
group by interval_size
order by interval_size desc
I think what you're looking for can be done by selecting, in a LATERAL subquery, the rows where int_val is equal to the median value (percentile_disc does ensure that a value exactly equal to the result exists; there may be more than one, and depending on what you want you could deal with the more-than-one case in different ways). Building on a previous answer and making it work a bit better, I think it would look something like this:
WITH p50 AS (
Select time_bucket('120 sec',timestamp_utc) as interval_size,
first(timestamp_utc,int_val) as minTime,
min(int_val) as minVal,
last(timestamp_utc,int_val) as maxTime,
max(int_val) as maxVal,
-- timestamp of percentile value below.
percentile_disc(0.5) within group (order by int_val) as medianVal
from timeseries.raw
where timestamp_utc > NOW() - INTERVAL '10 min'
AND tag_id = 59560544877390423
group by interval_size
order by interval_size desc
) SELECT p50.*, rmed.*
FROM p50, LATERAL (SELECT * FROM timeseries.raw r
-- copy over the same where clause from above so we're dealing with the same subset of data
WHERE timestamp_utc > NOW() - INTERVAL '10 min'
AND tag_id = 59560544877390423
-- add a where clause on the median value
AND r.int_val = p50.medianVal
-- now add a where clause to account for the time bucket
AND r.timestamp_utc >= p50.interval_size
AND r.timestamp_utc < p50.interval_size + '120 sec'::interval
-- Can add an order by something desc limit 1 if you want to avoid ties
) rmed;
Note that this will do a second scan of the table. It should be reasonably efficient, especially if you have an index on that column, but it does cause another scan; there isn't a great way that I know of to do it without one.
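For reference, an index along these lines should let the LATERAL subquery find the matching rows quickly (a sketch only; the index name is made up, and the column order assumes lookups by tag, then value, then time):
-- Hypothetical index; name and column order are assumptions based on the query above.
CREATE INDEX raw_tag_val_time_idx
    ON timeseries.raw (tag_id, int_val, timestamp_utc);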

Merge overlapping date intervals into big intervals, preserving uniqueness of some id inside each merged group

I have some date intervals, each characterized by a known "prop_id". My goal is to merge overlapping intervals into big intervals while keeping the "prop_id" values unique inside each merged group. I have some code that helps me get the big intervals, but I've no idea how to keep the uniqueness condition. Thanks in advance for any assistance.
________1 ________1
___________2
________1 |________1
_________|__2
[1,2]_________|________[1,2]
For SQLFiddle:
CREATE SEQUENCE ido_seq;
create table slots (
ido integer NOT NULL default nextval('ido_seq'),
begin_at date,
end_at date,
prop_id integer);
ALTER SEQUENCE ido_seq owned by slots.ido;
INSERT INTO slots (ido, begin_at, end_at, prop_id) VALUES
(0, '2014-10-05', '2014-10-10', 1),
(1, '2014-10-08', '2014-10-15', 2),
(2, '2014-10-13', '2014-10-20', 1),
(3, '2014-10-21', '2014-10-30', 2);
-- desired output:
-- start, end, props
-- 2014-10-05, 2014-10-12, [1,2] --! the whole group is (2014-10-05, 2014-10-20, [1,2,1]), but props should be unique
-- 2014-10-13, 2014-10-20, [1,2] --so, we obtain 2 ranges instead of 1, each one with 2 generating prop_id
-- 2014-10-21, 2014-10-30 [2]
How do we get it:
if two date intervals overlap, we merge them. The first ['2014-10-05', '2014-10-10'] and the second ['2014-10-08', '2014-10-15'] have the part ['2014-10-08', '2014-10-10'] in common, so we can merge them into ['2014-10-05', '2014-10-15']. The generating props are unique - OK. The next one ['2014-10-13', '2014-10-20'] overlaps with the previously calculated ['2014-10-05', '2014-10-15'], but we can't merge them without breaking the uniqueness condition. So we have to split the big interval ['2014-10-05', '2014-10-20'] into two smaller ones at the begin date of the repeated prop ('2014-10-13'), keeping the condition, and we receive ['2014-10-05', '2014-10-12'] (as '2014-10-13' minus 1 day) and ['2014-10-13', '2014-10-20'], both generated by props 1 and 2.
My attempt to get merged intervals (not keeping the uniqueness condition):
SELECT min(begin_at), max(enddate), array_agg(prop_id) AS props
FROM (
SELECT *,
count(nextstart > enddate OR NULL) OVER (ORDER BY begin_at DESC, end_at DESC) AS grp
FROM (
SELECT
prop_id
, begin_at
, end_at
, end_at AS enddate
, lead(begin_at) OVER (ORDER BY begin_at, end_at) AS nextstart
FROM slots
) a
) b
GROUP BY grp
ORDER BY 1;
The right solution here is probably to use a recursive CTE to find the large intervals no matter how many smaller intervals need to be combined, and then to remove the intervals that we do not need.
with recursive intervals(idos, begin_at, end_at, prop_ids) as (
select array[ido], begin_at, end_at, array[prop_id]
from slots
union
select i.idos || s.ido
, least(s.begin_at, i.begin_at)
, greatest(s.end_at, i.end_at)
, i.prop_ids || s.prop_id
from intervals i
join slots s
on (s.begin_at, s.end_at) overlaps (i.begin_at, i.end_at)
and not (i.prop_ids && array[s.prop_id]) -- check that the prop id is not already in the large interval
where s.begin_at < i.begin_at -- to avoid having double intervals
)
select * from intervals i
-- finally, remove the intervals that are a subinterval of an included interval
where not exists (select 1 from intervals i2
where i2.idos @> i.idos -- @> is the array "contains" operator
and i2.idos <> i.idos);
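As a quick refresher on the array operators this solution relies on (a minimal standalone sketch):
-- && tests for overlap (any shared element); @> tests for containment.
SELECT ARRAY[1,2] && ARRAY[2,3]   AS has_overlap, -- true: element 2 is shared
       ARRAY[1,2,3] @> ARRAY[2,1] AS contains;    -- true: {2,1} is a subset of {1,2,3}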

How to select corresponding record alongside aggregate function with having clause

Let's say I have an orders table with customer_id, order_total, and order_date columns. I'd like to build a report that shows all customers who haven't placed an order in the last 30 days, with a column for the total amount their last order was.
This gets all of the customers who should be on the report:
select customer, max(order_date),
       (select order_total from orders o2
        where o2.customer = orders.customer
        order by order_date desc limit 1)
from orders
group by 1
having max(order_date) < NOW() - '30 days'::interval
Is there a better way to do this that doesn't require a subquery but instead uses a window function or other more efficient method in order to access the total amount from the most recent order? The techniques from How to select id with max date group by category in PostgreSQL? are related, but the extra having restriction seems to stop me from using something like DISTINCT ON.
demo:db<>fiddle
Solution with row_number window function (https://www.postgresql.org/docs/current/static/tutorial-window.html)
SELECT
customer, order_date, order_total
FROM (
SELECT
*,
first_value(order_date) OVER w as last_order,
first_value(order_total) OVER w as last_total,
row_number() OVER w as row_count
FROM orders
WINDOW w AS (PARTITION BY customer ORDER BY order_date DESC)
) s
WHERE row_count = 1 AND order_date < CURRENT_DATE - 30
Solution with DISTINCT ON (https://www.postgresql.org/docs/9.5/static/sql-select.html#SQL-DISTINCT):
SELECT
customer, order_date, order_total
FROM (
SELECT DISTINCT ON (customer)
*,
first_value(order_date) OVER w as last_order,
first_value(order_total) OVER w as last_total
FROM orders
WINDOW w AS (PARTITION BY customer ORDER BY order_date DESC)
ORDER BY customer, order_date DESC
) s
WHERE order_date < CURRENT_DATE - 30
Explanation:
In both solutions I am working with the first_value window function. The window is partitioned by customer, and the rows within each customer's group are ordered descending by date, which puts the latest row first (last_value does not work as expected every time). So it is possible to get the last order_date and the last order_total of that order.
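To illustrate the last_value caveat: with the default window frame (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW), last_value just returns the current row's value, so the frame has to be widened explicitly. A minimal sketch against the orders table:
SELECT customer,
       last_value(order_total) OVER (
           PARTITION BY customer ORDER BY order_date
           ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
       ) AS latest_total
FROM orders;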
The difference between the two solutions is the filtering. I showed both versions because sometimes one of them is significantly faster.
The row_number version creates a row count within each partition, and every first row can be filtered later. The benefit of this solution comes out when you want the first two or three rows per group: you simply change the filter from WHERE row_count = 1 to WHERE row_count <= 2 (see the sketch below).
But if you want only one single row per group, you just need to ensure that the expected row is ordered first within its group. Then DISTINCT ON can delete all following rows: DISTINCT ON (customer) gives the first (ordered) row per customer group.
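For example, to report each customer's two most recent orders instead of one (a sketch reusing the row_number subquery from above):
SELECT customer, order_date, order_total
FROM (
    SELECT *,
           row_number() OVER w AS row_count
    FROM orders
    WINDOW w AS (PARTITION BY customer ORDER BY order_date DESC)
) s
WHERE row_count <= 2; -- the two latest orders per customer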
Try joining the table to itself:
select o1.customer, max(o1.order_date)
from orders o1
join orders o2 on o1.id = o2.id
group by o1.customer
having max(o1.order_date) < NOW() - '30 days'::interval
A subquery in the SELECT list is a bad idea, because the DB will execute a query for each row.
If you use Postgres you can also try to use a CTE:
https://www.postgresql.org/docs/9.6/static/queries-with.html
WITH t AS (
  -- a correlated LIMIT 1 is not valid inside a plain CTE,
  -- so DISTINCT ON is used here to express the intended "latest order per customer"
  SELECT DISTINCT ON (customer) customer, order_date, order_total
  FROM orders
  ORDER BY customer, order_date DESC
)
SELECT customer, order_date, order_total
FROM t
WHERE order_date < NOW() - '30 days'::interval

how to concatenate timestamp in different rows in postgresql?

I'm looking for a way to concatenate timestamps from two different rows. For example, I have this table:
I want it to be grouped by weekday, concatenating the min(start_hour) with the max(stop_hour), to get something like this:
and I'm using this query to retrieve the result shown in the first image.
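Judging from the query in the answer below, the source table presumably has roughly this shape (a hypothetical sketch; the screenshots are not reproduced, and the types are assumed):
-- Hypothetical reconstruction; column names taken from the answer's query.
CREATE TABLE test22 (
    day        text, -- weekday name
    start_hour time,
    stop_hour  time
);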
The query below should give you what you are looking for, given the information supplied. I made some assumptions: that '00:00:00' in the start and stop hours is not a valid time and can be ignored. If those values should be considered valid, then Friday's output would be one entry of '00:00:00' - '11:30:00'.
I created two CTEs, one for the start hours and the other for the stop hours, keeping only the values that are not '00:00:00'. I added a row number to each CTE so I can match up the day & row_number to pair each start with its stop.
SELECT day
,array_to_string(array_agg(t.shift), ',') shifts
FROM (
WITH cte_start AS (
SELECT row_number() OVER (PARTITION BY day)
,day
,start_hour
FROM test22
WHERE start_hour <> '00:00:00'::time
)
,cte_stop AS (
SELECT row_number() OVER (PARTITION BY day)
,day
,stop_hour
FROM test22
WHERE stop_hour <> '00:00:00'::time
)
SELECT cte_start.day
,cte_start.start_hour::varchar || ' - ' || cte_stop.stop_hour::varchar AS shift
FROM cte_start
LEFT OUTER JOIN cte_stop ON cte_start.day = cte_stop.day
AND cte_start.row_number = cte_stop.row_number
) T
GROUP BY T.day
-HTH

How to create a function that loops through another function in PostgreSQL?

I'm using PostgreSQL 9.3.9 and I have a procedure called list_all_upsells that takes in the beginning of a month and the end of a month. (see sqlfiddle.com/#!15/abd02 for sample data) For example, the below code would list the count of upselled accounts for the month of October:
select COUNT(up.*) as "Total Upsell Accounts in October" from
list_all_upsells('2015-10-01 00:00:00'::timestamp, '2015-10-31 23:59:59'::timestamp) as up
where up.user_id not in
(select distinct user_id from paid_users_no_more
where concat(extract(month from payment_stop_date),'-',extract(year from payment_stop_date))<>
concat(extract(month from payment_start_date),'-',extract(year from payment_start_date)));
The list_all_upsells procedure looks like this:
DECLARE
payor_email_2 text;
BEGIN
FOR payor_email_2 in select distinct payor_email from paid_users LOOP
return query
execute
'select paid_users.* from paid_users,
(
select payment_start_date as first_time from paid_users
where payor_email = $3
order by payment_start_date limit 1
) as dummy
where payor_email = $3
and payment_start_date > first_time
and payment_start_date between $1 and $2
and first_time < $1'
using a, b, payor_email_2;
END LOOP;
return;
END
I want to be able to run this for all months that we have records and query the data together in one table like this:
Month | Total Upselled Accounts
---------------------------------
08/2014 | 23
09/2014 | 35
ETC...
10/2015 | 56
I have a query to grab the first of each month and last of each month for the months we have been in business:
select distinct date_trunc('month', payment_start_date)::date as startmonth
from paid_users ORDER BY startmonth;
Last of month:
SELECT distinct (date_trunc('MONTH', payment_start_date) +
INTERVAL '1 MONTH - 1 day')::date as endmonth from paid_users
ORDER BY endmonth;
Now how would I create a function to loop through list_all_upsells and grab the count for each of these months? I.e. the first query for startmonth gives me 2014-03-01, 2014-04-01, ... through 2015-10-01, whereas the second query for endmonth gives me 2014-03-31, 2014-04-30, ... through 2015-10-31. I want to run list_all_upsells on each of these months so that I can get an aggregate count of upselled accounts for each month.
My paid_users table looks like this:
CREATE TABLE paid_users
(
user_id integer,
user_email character varying(255),
payor_id integer,
payor_email character varying(255),
payment_start_date timestamp without time zone DEFAULT now()
)
paid_users_no_more:
CREATE TABLE paid_users_no_more
(
user_id integer,
payment_stop_date timestamp without time zone DEFAULT now()
)
You have a couple of issues with your function, so let's start there. The short of it is that (1) you need only a single parameter to indicate the month; using the beginning and end of the month is setting yourself up for problems; (2) you do not need a dynamic query, because you are not changing identifiers (table or column names); (3) you do not need a loop; and (4) your logic is wrong. I could also mention that PostgreSQL uses functions and that they all start with a line like CREATE FUNCTION list_all_upsells(...), but that would be just too picky.
To start with the logic: apparently a user identified by his email address takes out a subscription from a certain payment_start_date until a certain payment_stop_date, and can do this multiple times. You are looking for those users who took out their first subscription before the month in question and who started a new subscription (but not a first subscription) in the month in question. In that case the filter payment_start_date > first_time is useless, because you already filter for a first subscription prior to the month in question (first_time < $1) and for a new subscription (payment_start_date BETWEEN $1 AND $2).
Points (1), (2) and (3) really only become obvious when rewriting the query inside the function:
CREATE FUNCTION list_all_upsells(timestamp) RETURNS SETOF paid_users AS $$
SELECT paid_users.*
FROM paid_users
JOIN ( -- This JOIN keeps only those rows where the payor_email has a prior subscription
SELECT DISTINCT payor_email,
first_value(payment_start_date) OVER (PARTITION BY payor_email ORDER BY payment_start_date) AS dummy
FROM paid_users
WHERE payment_start_date < date_trunc('month', $1)
) dummy USING (payor_email)
-- This filter keeps only those rows with new subscriptions in the month
WHERE date_trunc('month', payment_start_date) = date_trunc('month', $1)
$$ LANGUAGE sql STRICT;
Since the body of the function has reduced to a single SQL statement, the function is now a sql language function, which is more efficient than plpgsql. You now supply only a single parameter, which can be any moment in the month you want the data for, so list_all_upsells(LOCALTIMESTAMP) will give you the results for the current month. In terms of the query you posted it would be:
SELECT count(up.*) AS "Total Upsell Accounts in October"
FROM list_all_upsells(LOCALTIMESTAMP) up
WHERE up.user_id NOT IN
(SELECT DISTINCT user_id FROM paid_users_no_more
WHERE date_trunc('month', payment_stop_date) <>
date_trunc('month', up.payment_start_date)
);
This, incidentally, raises the question why you have the table paid_users_no_more at all. Why not simply add a column payment_stop_date to table paid_users? Where that column is NULL, the user is still subscribed. But the whole query is rather odd, because list_all_upsells() returns new subscriptions during the month, so why bother with subscriptions that were cancelled at some other time?
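A sketch of that suggested schema change (an assumption-laden migration; it presumes user_id is enough to match the two tables, as in your query above):
-- Hypothetical migration; adjust if a user can have more than one stop date.
ALTER TABLE paid_users ADD COLUMN payment_stop_date timestamp;

UPDATE paid_users pu
SET payment_stop_date = punm.payment_stop_date
FROM paid_users_no_more punm
WHERE punm.user_id = pu.user_id;

-- A NULL payment_stop_date then means the user is still subscribed.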
Now on to your real question:
SELECT months.m "Month", coalesce(count(up.*), 0) "Total Upselled Accounts"
FROM generate_series('2014-08-01'::timestamp,
date_trunc('month', LOCALTIMESTAMP),
'1 month') AS months(m)
LEFT JOIN list_all_upsells(months.m) AS up ON date_trunc('month', payment_start_date) = m
GROUP BY 1
ORDER BY 1;
Generate a series of months from some starting month until the current month, then count the new subscriptions for each month, possibly 0.
SQLFiddle