If today's results are blank, show the totals from yesterday - postgresql

My query produces an accumulated total of revenue over a period of time. If a single day is blank (no revenue for that day), I need it to show the total from the day before, along the lines of: CASE WHEN (today is blank) THEN yesterday's data ELSE today's total.
I am not sure what the syntax is for this.
select distinct
date_trunc('day',admit_date) as admit_date,
revenue,
sum(revenue) over(order by admit_date) as running_rev
from dailyrev
order by admit_date
Expected Results:
Day 1: $100
Day 2: $200
Day 3: (no data so show Day 2 data) $200

Maybe this is what you need:
SELECT admit_date,
prev_revs[cardinality(prev_revs)] AS adj_revenue,
sum(prev_revs[cardinality(prev_revs)])
OVER (ORDER BY admit_date) AS running_sum
FROM (SELECT date_trunc('day', admit_date) AS admit_date,
array_remove(array_agg(revenue)
OVER (order by admit_date),
NULL) AS prev_revs
FROM dailyrev) AS q
ORDER BY admit_date;
Unfortunately PostgreSQL doesn't yet support the IGNORE NULLS clause; with it, this would have been simpler.
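For reference, on an engine that does support IGNORE NULLS (for example Oracle, Snowflake, or Redshift; the exact placement of the clause varies by engine), a sketch of that simpler version might look like this, using the question's dailyrev table:
-- Hedged sketch, not valid in current PostgreSQL: carry the last non-NULL
-- revenue forward, then accumulate it in an outer query.
SELECT admit_date,
adj_revenue,
sum(adj_revenue) OVER (ORDER BY admit_date) AS running_sum
FROM (SELECT admit_date,
last_value(revenue IGNORE NULLS)
OVER (ORDER BY admit_date
ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS adj_revenue
FROM dailyrev) q
ORDER BY admit_date;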

I am not sure if this is what you want, but try this:
SELECT
gs.date::date AS admit_date,
(SELECT revenue FROM dailyrev WHERE admit_date::date = gs.date) AS revenue,
(SELECT SUM(revenue) FROM dailyrev WHERE admit_date::date <= gs.date) AS accumulated_total
FROM
generate_series(
(SELECT MIN(admit_date::date) FROM dailyrev),
(SELECT MAX(admit_date::date) FROM dailyrev),
INTERVAL '1 day'
) gs
ORDER BY gs.date::date;
Yes, it does not look that nice, but it works.

Using 'over' function results in column "table.id" must appear in the GROUP BY clause or be used in an aggregate function

I'm currently writing an application which shows the growth of the total number of events in my table over time. I currently have the following query to do this:
query = session.query(
count(Event.id).label('count'),
extract('year', Event.date).label('year'),
extract('month', Event.date).label('month')
).filter(
Event.date.isnot(None)
).group_by('year', 'month').all()
This results in the following output:
Count  Year  Month
------------------
100    2021  1
50     2021  2
75     2021  3
While this is okay on its own, I want it to display the total number over time, not just the number of events that month, so the desired output should be:
Count  Year  Month
------------------
100    2021  1
150    2021  2
225    2021  3
I read in various places that I should use a window function via SQLAlchemy's over function, but I can't seem to wrap my head around it, and every time I try using it I get the following error:
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.GroupingError) column "event.id" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: SELECT count(event.id) OVER (PARTITION BY event.date ORDER...
^
[SQL: SELECT count(event.id) OVER (PARTITION BY event.date ORDER BY EXTRACT(year FROM event.date), EXTRACT(month FROM event.date)) AS count, EXTRACT(year FROM event.date) AS year, EXTRACT(month FROM event.date) AS month
FROM event
WHERE event.date IS NOT NULL GROUP BY year, month]
This is the query I used:
session.query(
count(Event.id).over(
order_by=(
extract('year', Event.date),
extract('month', Event.date)
),
partition_by=Event.date
).label('count'),
extract('year', Event.date).label('year'),
extract('month', Event.date).label('month')
).filter(
Event.date.isnot(None)
).group_by('year', 'month').all()
Could someone show me what I'm doing wrong? I've been searching for hours but can't figure out how to get the desired output, since adding event.id to the GROUP BY would stop my rows from being grouped by month and year.
The final query I ended up using:
query = session.query(
extract('year', Event.date).label('year'),
extract('month', Event.date).label('month'),
func.sum(func.count(Event.id)).over(order_by=(
extract('year', Event.date),
extract('month', Event.date)
)).label('count'),
).filter(
Event.date.isnot(None)
).group_by('year', 'month')
I'm not 100% sure what you want, but I'm assuming you want the cumulative number of events up to and including each month. You first need to calculate the number of events per month and then sum those counts with a PostgreSQL window function.
You can do that in a single SELECT statement:
SELECT extract(year FROM events.date) AS year
, extract(month FROM events.date) AS month
, SUM(COUNT(events.id)) OVER(ORDER BY extract(year FROM events.date), extract(month FROM events.date)) AS total_so_far
FROM events
GROUP BY 1,2
but it might be easier to think about if you split it into two:
SELECT year, month, SUM(events_count) OVER(ORDER BY year, month)
FROM (
SELECT extract(year FROM events.date) AS year
, extract(month FROM events.date) AS month
, COUNT(events.id) AS events_count
FROM events
GROUP BY 1,2
) AS monthly
but I'm not sure how to do that in SQLAlchemy.
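For what it's worth, a rough SQLAlchemy sketch of that two-step version could use a subquery; this assumes the same Event model and session as in the question, and the name monthly is purely illustrative:
from sqlalchemy import extract, func

# Hedged sketch: per-month counts in a subquery, then a running SUM ... OVER
# the months in the outer query.
monthly = session.query(
    extract('year', Event.date).label('year'),
    extract('month', Event.date).label('month'),
    func.count(Event.id).label('events_count'),
).filter(
    Event.date.isnot(None)
).group_by('year', 'month').subquery()

query = session.query(
    monthly.c.year,
    monthly.c.month,
    func.sum(monthly.c.events_count).over(
        order_by=(monthly.c.year, monthly.c.month)
    ).label('count'),
).all()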

postgresql - data for 1st day of each month, or next day if no data for that day

I need to get the balance for the 1st of each month from a table ordered by date; if the 1st is missing from the dataset for a certain month, then for that month I want the next available date's data.
I have tried many things. The attempt below puts a CASE in the WHERE clause, but it just gives me the 1st and the 2nd of each month. Any ideas? Maybe a window (OVER) function?
select date_
, balance
from mytable
where case when extract(day from date_) = 1 then extract(day from date_) = 1 else (extract (day from date_) = 2 )end
group by date_
order by date_ desc
You can use the window function row_number():
select * from
(
select *, row_number() over (partition by date_trunc('month', date_) order by date_) rn
from mytable
) t
where rn = 1
DISTINCT ON should do the trick:
SELECT DISTINCT ON (date_trunc('month', date_))
date_, balance
FROM mytable
ORDER BY date_trunc('month', date_), date_;
That will get the first available date in each month.

How to form a dynamic pivot table or return multiple values from GROUP BY subquery

I'm having some major issues with the following query formation:
I have projects with start and end dates
Name Start End
---------------------------------------
Project 1 2020-08-01 2020-09-10
Project 2 2020-01-01 2025-01-01
and I'm trying to count the monthly working days within each project with the following subquery
select date_trunc('month', days) as d_month, count(days) as d_count
from generate_series(greatest('2020-08-01'::date, p.start), least('2020-09-14'::date, p."end"), '1 day'::interval) days
where extract(DOW from days) not IN (0, 6)
group by d_month
where p.start is from the aliased main query and the dates are hard-coded for now. This correctly gives me the following result:
{"d_month"=>2020-08-01 00:00:00 +0000, "d_count"=>21}
{"d_month"=>2020-09-01 00:00:00 +0000, "d_count"=>10}
However, subqueries can't return multiple values. The date range for the query is dynamic, so I would either need to somehow return the result as:
Name Start End 2020-08-01 2020-09-01 ...
-------------------------------------------------------------------------
Project 1 2020-08-01 2020-09-10 21 8
Project 2 2020-01-01 2025-01-01 21 10
Or simply return the whole subquery as JSON, but that doesn't seem to be working either.
Any ideas on how to achieve this, or whether there is a simpler solution?
The most correct solution would be to create an actual calendar table that holds every possible day of interest to your business and, at a minimum for your purpose here, marks work days.
Ideally you would have columns to hold fiscal quarters, periods, and weeks to match your industry. You would also mark holidays. Joining to this table makes these kinds of calculations a snap.
create table calendar (
ddate date not null primary key,
is_work_day boolean default true
);
insert into calendar
select ts::date as ddate,
extract(dow from ts) not in (0,6) as is_work_day
from generate_series(
'2000-01-01'::timestamp,
'2099-12-31'::timestamp,
interval '1 day'
) as gs(ts);
Assuming a calendar table is not within scope, you can do this:
with bounds as (
select min(start) as first_start, max("end") as last_end
from my_projects
), cal as (
select ts::date as ddate,
extract(dow from ts) not in (0,6) as is_work_day
from bounds
cross join generate_series(
first_start,
last_end,
interval '1 day'
) as gs(ts)
), bymonth as (
select p.name, p.start, p.end,
date_trunc('month', c.ddate) as month_start,
count(*) as work_days
from my_projects p
join cal c on c.ddate between p.start and p.end
where c.is_work_day
group by p.name, p.start, p.end, month_start
)
select jsonb_object_agg(to_char(month_start, 'YYYY-MM-DD'), work_days)
|| jsonb_object_agg('name', name)
|| jsonb_object_agg('start', start)
|| jsonb_object_agg('end', "end") as result
from bymonth
group by name;
Doing a pivot from rows to columns in SQL is usually a bad idea, so the query produces json for you.
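If you later need a single month back out of that json, access is straightforward; a hypothetical example, assuming the query above has been saved as a view named project_workdays (an illustrative name):
-- Hypothetical usage: pull each project's name and its August 2020 work-day
-- count out of the aggregated jsonb column produced above.
SELECT result->>'name' AS project_name,
(result->>'2020-08-01')::int AS work_days_2020_08
FROM project_workdays;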

Monthly retention in Amazon redshift

I'm trying to calculate monthly retention rate in Amazon Redshift and have come up with the following query:
Query 1
SELECT EXTRACT(year FROM activity.created_at) AS Year,
EXTRACT(month FROM activity.created_at) AS Month,
COUNT(DISTINCT activity.member_id) AS active_users,
COUNT(DISTINCT future_activity.member_id) AS retained_users,
COUNT(DISTINCT future_activity.member_id) / COUNT(DISTINCT activity.member_id)::float AS retention
FROM ads.fbs_page_view_staging activity
LEFT JOIN ads.fbs_page_view_staging AS future_activity
ON activity.mongo_id = future_activity.mongo_id
AND datediff ('month',activity.created_at,future_activity.created_at) = 1
GROUP BY Year,
Month
ORDER BY Year,
Month
For some reason this query returns zero retained_users and zero retention. I'd appreciate any help with why this may be happening, or a completely different query for monthly retention that would work.
I modified the query as per another SO post, and here it is:
Query 2
WITH t AS (
SELECT member_id
,date_trunc('month', created_at) AS month
,count(*) AS item_transactions
,lag(date_trunc('month', created_at)) OVER (PARTITION BY member_id
ORDER BY date_trunc('month', created_at))
= date_trunc('month', created_at) - interval '1 month'
OR NULL AS repeat_transaction
FROM ads.fbs_page_view_staging
WHERE created_at >= '2016-01-01'::date
AND created_at < '2016-04-01'::date -- time range of interest.
GROUP BY 1, 2
)
SELECT month
,sum(item_transactions) AS num_trans
,count(*) AS num_buyers
,count(repeat_transaction) AS repeat_buyers
,round(
CASE WHEN sum(item_transactions) > 0
THEN count(repeat_transaction) / sum(item_transactions) * 100
ELSE 0
END, 2) AS buyer_retention
FROM t
GROUP BY 1
ORDER BY 1;
This query gives me the following error:
An error occurred when executing the SQL command:
WITH t AS (
SELECT member_id
,date_trunc('month', created_at) AS month
,count(*) AS item_transactions
,lag(date_trunc('m...
[Amazon](500310) Invalid operation: Interval values with month or year parts are not supported
Details:
-----------------------------------------------
error: Interval values with month or year parts are not supported
code: 8001
context: interval months: "1"
query: 616822
location: cg_constmanager.cpp:145
process: padbmaster [pid=15116]
-----------------------------------------------;
I have a feeling that Query 2 would fare better than Query 1, so I'd prefer to fix the error on that.
Any help would be much appreciated.
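For the interval error itself, Redshift rejects month and year parts in interval arithmetic, but its DATEADD function accepts a month datepart, so one hedged possibility is to rewrite only the lag comparison in Query 2 (plus a cast to avoid integer division in the retention ratio):
WITH t AS (
SELECT member_id
,date_trunc('month', created_at) AS month
,count(*) AS item_transactions
-- dateadd(month, -1, ...) replaces "- interval '1 month'", which Redshift rejects
,lag(date_trunc('month', created_at)) OVER (PARTITION BY member_id
ORDER BY date_trunc('month', created_at))
= dateadd(month, -1, date_trunc('month', created_at))
OR NULL AS repeat_transaction
FROM ads.fbs_page_view_staging
WHERE created_at >= '2016-01-01'::date
AND created_at < '2016-04-01'::date -- time range of interest.
GROUP BY 1, 2
)
SELECT month
,sum(item_transactions) AS num_trans
,count(*) AS num_buyers
,count(repeat_transaction) AS repeat_buyers
,round(
CASE WHEN sum(item_transactions) > 0
-- ::decimal keeps the ratio from truncating to an integer
THEN count(repeat_transaction)::decimal / sum(item_transactions) * 100
ELSE 0
END, 2) AS buyer_retention
FROM t
GROUP BY 1
ORDER BY 1;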
Query 1 looks good. I tried a similar one; see below. You are using a self join on the same table (ads.fbs_page_view_staging) with mongo_id as the join key. Assuming mongo_id is unique, each row can only join to itself, so datediff('month', activity.created_at, future_activity.created_at) will always return 0, and the condition datediff('month', activity.created_at, future_activity.created_at) = 1 will always be false.
-- Count distinct events of join_col_id that have lapsed for one month.
SELECT count(distinct E.join_col_id) dist_ct
FROM public.fact_events E
JOIN public.dim_table Z
ON E.join_col_id = Z.join_col_id
WHERE datediff('month', event_time, sysdate) = 1;
-- 2771654 -- dist_ct
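Based on that observation, a hedged adjustment of Query 1 would be to self join on member_id instead of mongo_id, so that a member's activity in the following month can actually match:
-- Hedged sketch: same structure as Query 1, but the self join is on member_id,
-- so a row can pair with the same member's activity one month later.
SELECT EXTRACT(year FROM activity.created_at) AS Year,
EXTRACT(month FROM activity.created_at) AS Month,
COUNT(DISTINCT activity.member_id) AS active_users,
COUNT(DISTINCT future_activity.member_id) AS retained_users,
COUNT(DISTINCT future_activity.member_id) / COUNT(DISTINCT activity.member_id)::float AS retention
FROM ads.fbs_page_view_staging activity
LEFT JOIN ads.fbs_page_view_staging AS future_activity
ON activity.member_id = future_activity.member_id
AND datediff('month', activity.created_at, future_activity.created_at) = 1
GROUP BY Year, Month
ORDER BY Year, Month;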

Last 12 months, group by week

I have a table with a column REGDATE, a registration date (YYYY-MM-DD HH:MM:SS). I would like to show a histogram (ExtJS) in order to understand in which periods of the year users sign up. I would like to do this for the past twelve months with respect to the current date, grouping the dates by week.
Any hints?
FWIW in PostgreSQL, Karaszi has an answer that works, but there is a faster query:
SELECT date_trunc('week', REGDATE) AS "Week" , count(*) AS "No. of users"
FROM <<TABLE>>
WHERE REGDATE > now() - interval '12 months'
GROUP BY 1
ORDER BY 1;
I based this on the work of Ben Goodacre.
in MySQL:
SELECT COUNT(*), DATE_FORMAT(regdate, "%X%V") AS regweek FROM table GROUP BY regweek;
or
SELECT COUNT(*), YEARWEEK(regdate, 2) AS regweek FROM table GROUP BY regweek;
in PostgreSQL:
SELECT COUNT(*), EXTRACT(YEAR FROM regdate)::text || EXTRACT(WEEK FROM regdate)::text AS regweek FROM table GROUP BY regweek;
Maybe this?
select to_char(REGDATE,'WW') "Week number",
count(*) "number of signups"
from YOUR_TABLE
where REGDATE > current_date-365
group by to_char(REGDATE,'WW')
order by to_char(REGDATE,'WW')
Hint (SQL Server; note this groups by month rather than week):
SELECT CONVERT (VARCHAR(7), REGDATE, 120) AS [RegistrationMonth]
FROM ...
GROUP BY CONVERT (VARCHAR(7), REGDATE, 120)
ORDER BY CONVERT (VARCHAR(7), REGDATE, 120)