I am working with a database and am using the following query:
SELECT
evt_block_time,
COUNT(*) filter (
WHERE
uniswap_version = 'v1'
) OVER (
ORDER BY
evt_block_time
) as v1_pairs,
COUNT(*) filter (
WHERE
uniswap_version = 'v2'
) OVER (
ORDER BY
evt_block_time
) as v2_pairs
FROM
(
SELECT
'v2' as uniswap_version,
evt_block_time
FROM
uniswap_v2."Factory_evt_PairCreated"
UNION ALL
SELECT
'v1' as uniswap_version,
evt_block_time
FROM
uniswap."Factory_evt_NewExchange"
ORDER BY
evt_block_time
) as creations
Here's a glimpse at what it returns:
I would like to do a few things. First of all, truncate the timestamps, evt_block_time, by week and then group by week.
NOTE: I tried using date_trunc('week', evt_block_time) under each of my select statements, but it throws an error. See below:
SELECT
date_trunc('week', evt_block_time),
COUNT(*) filter (
WHERE
uniswap_version = 'v1'
) OVER (
ORDER BY
evt_block_time
) as v1_pairs,
COUNT(*) filter (
WHERE
uniswap_version = 'v2'
) OVER (
ORDER BY
evt_block_time
) as v2_pairs
FROM
(
SELECT
'v2' as uniswap_version,
date_trunc('week', evt_block_time)
FROM
uniswap_v2."Factory_evt_PairCreated"
UNION ALL
SELECT
'v1' as uniswap_version,
date_trunc('week', evt_block_time)
FROM
uniswap."Factory_evt_NewExchange"
ORDER BY
evt_block_time
) as creations
which returns:
Column "evt_block_time" does not exist at line 31, position 26.
Additionally, though I guess it's not required, I would like to only query data from the last 52 weeks (1 year).
Obviously, I'm kinda new to this SQL thing but I'm trying my best. Any help whatsoever would be appreciated!
The problem is that you're selecting evt_block_time from the subquery, but the subquery no longer contains evt_block_time; it contains date_trunc('week', evt_block_time).
To fix this, give that expression an alias like evt_block_week and select the alias.
The ORDER BY in the subquery refers to the old column name too, and ordering inside a subquery does nothing anyway. Remove it; if you want an order, apply it in the surrounding query.
The ORDER BY clauses inside the windowed counts turned them into running totals; since you want a plain count per week, drop the OVER clauses entirely.
Finally, to get the number of pairs of each version per week, group by evt_block_week, and order by evt_block_week as well.
SELECT
evt_block_week,
COUNT(*) filter (
WHERE
uniswap_version = 'v1'
) as v1_pairs,
COUNT(*) filter (
WHERE
uniswap_version = 'v2'
) as v2_pairs
FROM
(
SELECT
'v2' as uniswap_version,
date_trunc('week', evt_block_time) as evt_block_week
FROM
uniswap_v2."Factory_evt_PairCreated"
UNION ALL
SELECT
'v1' as uniswap_version,
date_trunc('week', evt_block_time) as evt_block_week
FROM
uniswap."Factory_evt_NewExchange"
) as creations
group by evt_block_week
order by evt_block_week
If you only want a range of weeks, use generate_series to build the list of weeks. To see every week, even one with no creations, use that series as the FROM subquery and LEFT JOIN creations onto it, then group and order by the generated week.
SELECT
weeks.week,
COUNT(*) filter (
WHERE
uniswap_version = 'v1'
) as v1_pairs,
COUNT(*) filter (
WHERE
uniswap_version = 'v2'
) as v2_pairs
from (
select
generate_series(
date_trunc('week', '2020-01-01'::date), date_trunc('week', '2020-12-31'::date), '1 week'
) as week
) as weeks
left join
(
SELECT
'v2' as uniswap_version,
date_trunc('week', evt_block_time) as evt_block_week
FROM
uniswap_v2."Factory_evt_PairCreated"
UNION ALL
SELECT
'v1' as uniswap_version,
date_trunc('week', evt_block_time) as evt_block_week
FROM
uniswap."Factory_evt_NewExchange"
) as creations on weeks.week = evt_block_week
group by week
order by week
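To keep a rolling window of the last 52 weeks instead of a fixed calendar year, you could derive the series bounds from now(); a sketch, untested against these tables:
select generate_series(
date_trunc('week', now() - interval '52 weeks'),
date_trunc('week', now()),
'1 week'
) as week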
Your subquery creations does not have a column with the alias evt_block_time, so you cannot use that column name in the main query.
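In other words, alias the expression inside the subquery and reference that alias outside; a minimal sketch with just the v1 table:
SELECT evt_block_week
FROM (
SELECT date_trunc('week', evt_block_time) as evt_block_week
FROM uniswap."Factory_evt_NewExchange"
) as creations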
I want to get the number of consecutive days from the current date using PostgreSQL.
[Screenshot: sample watch_history rows with the expected consecutive-days counts highlighted.]
Below is the SQL query I have created, but it's not returning the expected result:
with grouped_dates as (
select user_id, created_at::timestamp::date,
(created_at::timestamp::date - (row_number() over (partition by user_id order by created_at::timestamp::date) || ' days')::interval)::date as grouping_date
from watch_history
)
select * , dense_rank() over (partition by grouping_date order by created_at::timestamp::date) as in_streak
from grouped_dates where user_id = 702
order by created_at::timestamp::date
Can anyone please help me to resolve this issue?
If we can somehow apply DISTINCT to the created_at field in the query below, that would solve my issue.
WITH list AS
(
SELECT user_id,
(created_at::timestamp::date - (row_number() over (partition by user_id order by created_at::timestamp::date) || ' days')::interval)::date as next_day
FROM watch_history
)
SELECT user_id, count(*) AS number_of_consecutive_days
FROM list
WHERE next_day IS NOT NULL
GROUP BY user_id
Does anyone have an idea how to apply DISTINCT to created_at in the above query?
To get the "number of consecutive days" for the same user_id :
WITH list AS
(
SELECT user_id
, array_agg(created_at) OVER (PARTITION BY user_id ORDER BY created_at RANGE BETWEEN CURRENT ROW AND '1 day' FOLLOWING) AS consecutive_days
FROM watch_history
)
SELECT user_id, count(DISTINCT d.day) AS number_of_consecutive_days
FROM list
CROSS JOIN LATERAL unnest(consecutive_days) AS d(day)
WHERE array_length(consecutive_days, 1) > 1
GROUP BY user_id
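As a worked example (hypothetical dates): if a user watched on 2023-05-01, 2023-05-02, and 2023-05-05, the per-row arrays are {05-01, 05-02}, {05-02}, and {05-05}. Only the first is longer than 1, and unnesting it yields 2 distinct days, so number_of_consecutive_days is 2.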
To get the list of "consecutive days" for the same user_id:
WITH list AS
(
SELECT user_id
, array_agg(created_at) OVER (PARTITION BY user_id ORDER BY created_at RANGE BETWEEN CURRENT ROW AND '1 day' FOLLOWING) AS consecutive_days
FROM watch_history
)
SELECT user_id
, array_agg(DISTINCT d.day ORDER BY d.day) AS list_of_consecutive_days
FROM list
CROSS JOIN LATERAL unnest(consecutive_days) AS d(day)
WHERE array_length(consecutive_days, 1) > 1
GROUP BY user_id
full example & result in dbfiddle
I'm using SQL Server 2000 (80). So, it's not possible to use the LAG function.
I have a data set with four columns:
Purchase_Date
Facility_no
Seller_id
Sale_id
I need to identify missing Sale_ids. Every Sale_id is strictly sequential, so there should not be any gaps in the order.
This code works for a specific date and store when they are specified, but I need it to work on the entire data set, looping through every Facility_no and every Seller_id for every Purchase_date:
declare @MAXCOUNT int
set @MAXCOUNT =
(
select MAX(Sale_Id)
from #table
where
Facility_no in (124) and
Purchase_date = '2/7/2020'
and Seller_id = 1
)
;WITH TRX_COUNT AS
(
SELECT 1 AS Number
union all
select Number + 1 from TRX_COUNT
where Number < @MAXCOUNT
)
select * from TRX_COUNT
where
Number NOT IN
(
select Sale_Id
from #table
where
Facility_no in (124)
and Purchase_Date = '2/7/2020'
and seller_id = 1
)
order by Number
OPTION (maxrecursion 0)
My Dataset
This column:
case when
Sale_Id = 0 or Sale_Id - LAG(Sale_Id) over (partition by Facility_no, Purchase_Date, Seller_id order by Sale_Id) = 1
then 'OK' else 'Previous Missing' end
will tell you which Seller_ids have a sale missing. If you want to go a step further and get exactly your desired output, filter for the distinct 'Previous Missing' rows and join with a tally table using NOT EXISTS.
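For context, a sketch of how that column slots into a full query (this needs SQL Server 2012+ for LAG; #table is the temp table from the question):
select Facility_no, Purchase_Date, Seller_id, Sale_id,
case when Sale_id = 0
or Sale_id - LAG(Sale_id) over (partition by Facility_no, Purchase_Date, Seller_id order by Sale_id) = 1
then 'OK' else 'Previous Missing' end as gap_check
from #table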
Edit: OP mentions in comments they can't use LAG(). My suggestion, then, would be:
Make a temp table that has the max(Sale_id), grouped by facility/seller.
Then you can get your missing results with this pseudocode query:
Select ...
from temptable t
inner join tally N on N.num <= t.maxsale
where not exists( select ... from sourcetable s where s.facility=t.facility and s.seller=t.seller and s.sale=N.num)
> because the only way to "construct" nonexisting combinations is to construct them all and just remove the existing ones.
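A fuller sketch of that pseudocode, runnable on SQL Server 2000 (assuming a numbers table dbo.Tally(num) that covers the Sale_id range; if your Sale_ids start at 0, the tally must include 0):
-- max Sale_id per facility/seller/date
select Facility_no, Purchase_Date, Seller_id, max(Sale_id) as maxsale
into #maxsales
from #table
group by Facility_no, Purchase_Date, Seller_id

-- every number up to that max with no matching Sale_id is a gap
select t.Facility_no, t.Purchase_Date, t.Seller_id, N.num as missing_sale_id
from #maxsales t
inner join dbo.Tally N on N.num <= t.maxsale
where not exists (
select 1 from #table s
where s.Facility_no = t.Facility_no
and s.Purchase_Date = t.Purchase_Date
and s.Seller_id = t.Seller_id
and s.Sale_id = N.num
)
order by t.Facility_no, t.Purchase_Date, t.Seller_id, N.num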
This one worked out:
; WITH cte_Rn AS (
SELECT *, ROW_NUMBER() OVER(PARTITION BY Facility_no, Purchase_Date, Seller_id ORDER BY Sale_id) AS [Rn_Num]
FROM (
SELECT
Facility_no,
Purchase_Date,
Seller_id,
Sale_id
FROM MyTable WITH (NOLOCK)
) a
)
, cte_Rn_0 as (
SELECT
Facility_no,
Purchase_Date,
Seller_id,
Sale_id,
-- [Rn_Num] AS 'Skipped Sale'
-- , case when Sale_id = 0 Then [Rn_Num] - 1 Else [Rn_Num] End AS 'Skipped Sale for 0'
[Rn_Num] - 1 AS 'Skipped Sale for 0'
FROM cte_Rn a
)
SELECT
Facility_no,
Purchase_Date,
Seller_id,
Sale_id,
-- [Skipped Sale],
[Skipped Sale for 0]
FROM cte_Rn_0 a
WHERE NOT EXISTS
(
select * from cte_Rn_0 b
where b.Sale_id = a.[Skipped Sale for 0]
and a.Facility_no = b.Facility_no
and a.Purchase_Date = b.Purchase_Date
and a.Seller_id = b.Seller_id
)
--ORDER BY Purchase_Date ASC
I have a table called user_activity in Redshift that has department, user_id, activity_type, activity_id, activity_date.
I'd like to query a daily report of how many days since the last event (of any type). Using CROSS APPLY (SQL Server) or LATERAL JOIN (Postgres 9+), I'd do something like...
SELECT d.date, a.last_activity_date
FROM date_table d
CROSS JOIN (
SELECT DISTINCT user_id FROM activity_table
) u
CROSS APPLY (
SELECT TOP 1 activity_date as last_activity_date
FROM activity_table
WHERE user_id = u.user_id AND activity_date <= d.date
ORDER BY activity_date DESC
) a
For now, I write it like the query below, but it is a bit slow and I am afraid it will only get slower.
with user_activity as (
select distinct activity_date, user_id from activity_table
)
select
d.date, u.user_id,
max(u.activity_date) as last_activity_date
from date_table d
inner join user_activity u on u.activity_date <= d.date
where d.date between '2020-01-01' and current_date
group by 1, 2
Can someone suggest a good alternative for my needs, or a Redshift substitute for CROSS APPLY / LATERAL JOIN?
As you are seeing, cross joins and inequality joins slow down as your data grows; they are generally not the approach you want in Redshift because of the blowup in intermediate data size they cause on the large tables that are typical there.
You want to use window functions for this type of analysis, but you will need to step back and rethink how you structure the SQL. A MAX(activity_date) window function, partitioned by user_id, ordered by date, and with a frame clause of all preceding rows, will find the most recent activity as of each row.
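A sketch of that window call (Redshift requires an explicit frame clause when a window aggregate has an ORDER BY; cal_date is the combined calendar-date column built below):
max(activity_date) over (
partition by user_id
order by cal_date
rows between unbounded preceding and current row
) as last_activity_date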
Now this will produce rows only for user_ids and dates that exist in the data table, and it looks like you want one row for each date for each user_id, right? To do this you need to UNION in a framework of data that has one row per date per user_id ahead of the window function. You will need NULLs for the other columns so that the column lists match, and you will want the calendar date in a separate column from activity_date. Now all dates for all user_ids are in the source, and the window function will give you the result you want.
You also ask how this is better than the joins. In the joins you are replicating every data record by the number of dates, which can get really big. In this approach you just have the original data records plus one row per user_id per date (which is the size of your output), and it doesn't blow up as the number of records per user_id grows.
——— Request to modify asker’s code per comments made to their approach ———
Your code is definitely on the right track as you have removed the massive inequality join of your original. I made 2 comments about it. The first is that I believe you need GROUP BY user_id, date to prevent multiple rows per user_id per date that would result if there are records for the same user_id on a single date with differing activity_types. This is a simple oversight.
The second is to state that I intended for you to use UNION ALL, not LEFT JOIN, to combine the actual data with the user_id/date framework. Your approach works fine, but I have found that with very large amounts of data unioning is generally faster than joining, though you do need to make sure the columns match up. Either way we end up with a data segment with 3 columns: 2 date columns, one with NULLs for framework rows, and 1 user_id. Your approach is fine, and the difference in performance is likely very small unless you have huge tables.
Since you asked for a rewrite, here it is with both changes. (NOTE: my laptop is in the shop, so I don't have ready access to Redshift at the moment and this SQL is untested. If the intent is not clear and you need me to debug it, that will be delayed by a few days. I'm keeping your setup methods and SQL structure.)
with date_table as (
select '2000-01-01'::date as date
union all
select '2000-01-02'::date
union all
select '2000-01-03'::date
union all
select '2000-01-04'::date
union all
select '2000-01-05'::date
union all
select '2000-01-06'::date
),
users as (
select 1 as user_id
union all
select 2
union all
select 3
),
user_activity as (
select 1 as user_id, '2000-01-01'::date as activity_date
union all
select 1 as user_id, '2000-01-04'::date as activity_date
union all
select 3 as user_id, '2000-01-03'::date as activity_date
union all
select 1 as user_id, '2000-01-05'::date as activity_date
union all
select 1 as user_id, '2000-01-06'::date as activity_date
),
user_dates as (
select d.date, u.user_id
from date_table d
cross join users u
),
user_date_activity as (
select cal_date, user_id,
lag(max(activity_date), 1) ignore nulls over (partition by user_id order by cal_date) as last_activity_date
from (
Select user_id, date as cal_date, NULL as activity_date from user_dates
Union all
Select user_id, activity_date as cal_date, activity_date from user_activity
) as combined
Group by user_id, cal_date
)
select * from user_date_activity
order by user_id, cal_date
This was my query based on Bill's answer.
with date_table as (
select '2000-01-01'::date as date
union all
select '2000-01-02'::date
union all
select '2000-01-03'::date
union all
select '2000-01-04'::date
union all
select '2000-01-05'::date
union all
select '2000-01-06'::date
),
users as (
select 1 as user_id
union all
select 2
union all
select 3
),
user_activity as (
select 1 as user_id, '2000-01-01'::date as activity_date
union all
select 1 as user_id, '2000-01-04'::date as activity_date
union all
select 3 as user_id, '2000-01-03'::date as activity_date
union all
select 1 as user_id, '2000-01-05'::date as activity_date
union all
select 1 as user_id, '2000-01-06'::date as activity_date
),
user_dates as (
select d.date, u.user_id
from date_table d
cross join users u
),
user_date_activity as (
select ud.date, ud.user_id,
lag(ua.activity_date, 1) ignore nulls over (partition by ud.user_id order by ud.date) as last_activity_date
from user_dates ud
left join user_activity ua on ud.date = ua.activity_date and ud.user_id = ua.user_id
)
select * from user_date_activity
order by user_id, date
I have 3 tables, which I query like this:
with current_exclusive as(
select id_station, area_type,
count(*) as total_entries
from c1169.data_cashier
where id_station IN(2439,2441,2443,2445,2447,2449) and date >= '2017-10-30' and date <= '2017-12-30'
group by id_station, area_type
), current_table as(
select id_station, area_type,
sum(total_time) filter (where previous_status = 1) as total_time
from c1169.data_table
where id_station IN(2439,2441,2443,2445,2447,2449) and date >= '2017-10-30' and date < '2017-12-30'
group by id_station, area_type
), current_cashier as(
select id_station, area_type,
sum(1) as total_transactions
from c1169.data_cashier
where id_station IN(2439,2441,2443,2445,2447,2449) and date >= '2017-10-30' and date < '2017-12-30'
group by id_station, area_type
)
select *
from current_exclusive
full join current_table on current_exclusive.id_station = current_table.id_station and current_exclusive.area_type = current_table.area_type
full join current_cashier on current_exclusive.id_station = current_cashier.id_station and current_exclusive.area_type = current_cashier.area_type
and the result is:
but my expected result is:
Is there any way to select * and get the expected result? When I do a full join, id_station and area_type can be null in some tables, so it's very hard to choose which column is not null.
Something like select case when id_station is not null then id_station else id_station1 end would work, but I have up to 10 tables, so I can't write a CASE expression for every column.
Use USING, per the documentation:
USING ( join_column [, ...] )
A clause of the form USING ( a, b, ... ) is shorthand for ON left_table.a = right_table.a AND left_table.b = right_table.b .... Also, USING implies that only one of each pair of equivalent columns will be included in the join output, not both.
select *
from current_exclusive
full join current_table using (id_station, area_type)
full join current_cashier using (id_station, area_type)
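Note that with a full join, the merged column USING produces is effectively COALESCE(left.id_station, right.id_station), so select * returns a single non-null id_station and area_type with no manual CASE logic.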
You cannot accomplish this with select *, since you are getting the values from different tables.
The option you have is to include a COALESCE block which gives you the first non-null value from the list of columns.
So, you could use:
select COALESCE( current_exclusive.id_station, current_table.id_station, current_cashier.id_station ) as id_station ,
COALESCE( current_exclusive.area_type , current_table.area_type, current_cashier.area_type ) as area_type ,.....
...
from current_exclusive
full join current_table..
...
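A fuller sketch with the three CTEs from the question (aliases illustrative; note the later join conditions also need COALESCE, because current_exclusive's keys may themselves be null after a full join):
select coalesce(ce.id_station, ct.id_station, cc.id_station) as id_station,
coalesce(ce.area_type, ct.area_type, cc.area_type) as area_type,
ce.total_entries, ct.total_time, cc.total_transactions
from current_exclusive ce
full join current_table ct
on ce.id_station = ct.id_station and ce.area_type = ct.area_type
full join current_cashier cc
on coalesce(ce.id_station, ct.id_station) = cc.id_station
and coalesce(ce.area_type, ct.area_type) = cc.area_type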
TRANSFORM Count(qryEAOCalls.CALLID) AS CountOfCALLID
SELECT qryEAOCalls.TAPSTAFFNAME, Count(qryEAOCalls.CALLID) AS [Total Calls]
FROM qryEAOCalls
WHERE qryEAOCalls.CALLDATE Between #1/1/1900# And Date()
GROUP BY qryEAOCalls.TAPSTAFFNAME
PIVOT qryEAOCalls.Status In ("Unassigned","Open","Closed","Follow-up Needed");
How do I convert this to the T-SQL equivalent?
You should be able to use something similar to the following:
select TAPSTAFFNAME,
Unassigned, [Open], Closed, [Follow-up Needed],
TotalCalls
from
(
select e.TAPSTAFFNAME,
e.CALLID,
e.Status,
count(*) over(partition by e.TAPSTAFFNAME) TotalCalls
from qryEAOCalls e
where e.CALLDATE >= '1900-01-01'
and e.CALLDATE <= getdate()
) src
pivot
(
count(CALLID)
for status in (Unassigned, [Open], Closed, [Follow-up Needed])
) piv
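One caveat: Open must be bracketed as [Open] because OPEN is a reserved T-SQL keyword, and the same applies to any status value that is not a valid identifier (like Follow-up Needed).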