Counting dates that fall between two dates in the same column - amazon-redshift

I have two tables. For each ID and Level combination in table1, I need to count how many times the matching ID appears in table2 between that level's Time and the next level's Time in table1.
For example, for ID = 1 and Level = 1 in table1, two Time entries from table2 for ID = 1 fall between the Time of Level = 1 and the Time of Level = 2 in table1, so the result is 2 in the result table.
table1:
ID Level Time
1 1 6/7/13 7:03
1 2 6/9/13 7:05
1 3 6/12/13 12:02
1 4 6/17/13 5:01
2 1 6/18/13 8:38
2 3 6/20/13 9:38
2 4 6/23/13 10:38
2 5 6/28/13 1:38
table2:
ID Time
1 6/7/13 11:51
1 6/7/13 14:15
1 6/9/13 16:39
1 6/9/13 19:03
2 6/20/13 11:02
2 6/20/13 15:50
Result would be
ID Level Count
1 1 2
1 2 2
1 3 0
1 4 0
2 1 0
2 3 2
2 4 0
2 5 0

select transformed_tab1.id, transformed_tab1.level, count(tab2.id)
from
  (select tab1.id, tab1.level, tm,
          -- next level's timestamp per id; NULL for the last level, so nothing
          -- falls in that window and its count stays 0
          lead(tm) over (partition by id order by tm) as next_tm
   from
     (select 1 as id, 1 as level, '2013-06-07 07:03'::timestamp as tm union
      select 1 as id, 2 as level, '2013-06-09 07:05'::timestamp as tm union
      select 1 as id, 3 as level, '2013-06-12 12:02'::timestamp as tm union
      select 1 as id, 4 as level, '2013-06-17 05:01'::timestamp as tm union
      select 2 as id, 1 as level, '2013-06-18 08:38'::timestamp as tm union
      select 2 as id, 3 as level, '2013-06-20 09:38'::timestamp as tm union
      select 2 as id, 4 as level, '2013-06-23 10:38'::timestamp as tm union
      select 2 as id, 5 as level, '2013-06-28 01:38'::timestamp as tm) tab1
  ) transformed_tab1
left join
  (select 1 as id, '2013-06-07 11:51'::timestamp as tm union
   select 1 as id, '2013-06-07 14:15'::timestamp as tm union
   select 1 as id, '2013-06-09 16:39'::timestamp as tm union
   select 1 as id, '2013-06-09 19:03'::timestamp as tm union
   select 2 as id, '2013-06-20 11:02'::timestamp as tm union
   select 2 as id, '2013-06-20 15:50'::timestamp as tm) tab2
  on transformed_tab1.id = tab2.id
 and tab2.tm between transformed_tab1.tm and transformed_tab1.next_tm
group by transformed_tab1.id, transformed_tab1.level
order by transformed_tab1.id, transformed_tab1.level;
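Applied to the real tables instead of inline literals, the same pattern would look roughly like the sketch below; the names table1/table2 and the column name "time" are assumptions taken from the question's sample data.
-- Sketch against the real tables; lead() runs natively in Redshift.
select t1.id, t1.level, count(t2.id) as "count"
from (select id, level, "time",
             lead("time") over (partition by id order by "time") as next_time
      from table1) t1
left join table2 t2
       on t2.id = t1.id
      and t2."time" between t1."time" and t1.next_time
group by t1.id, t1.level
order by t1.id, t1.level;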

SQL Fiddle
select t1.id, level, count(t2.id)
from
(
select id, level,
tsrange(
"time",
lead("time", 1, 'infinity') over(
partition by id order by level
),
'[)'
) as time_range
from t1
) t1
left join
t2 on t1.id = t2.id and t1.time_range @> t2."time"
group by t1.id, level
order by t1.id, level
The solution starts by creating a range of timestamps using the lead window function. Notice the '[)' parameter to the tsrange constructor: it includes the lower bound and excludes the upper bound.
Then it joins the two tables with the @> range operator, which tests whether the range contains the element.
The left join from t1 is necessary to keep the zero counts.
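As a quick standalone check of those bound semantics (PostgreSQL only; Redshift does not provide tsrange):
-- Lower bound is included, upper bound is excluded with the '[)' flag.
select tsrange('2013-06-07 07:03'::timestamp, '2013-06-09 07:05'::timestamp, '[)') @> '2013-06-07 07:03'::timestamp as lower_included,  -- true
       tsrange('2013-06-07 07:03'::timestamp, '2013-06-09 07:05'::timestamp, '[)') @> '2013-06-09 07:05'::timestamp as upper_included;  -- false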

Related

How to calculate the number of messages within 10 seconds before the previous one?

I have a table with messages, and I need to find chats where there were two or more messages within a period of 10 seconds.
table:
id message_id time
1 1 2021.11.10 13:09:00
1 2 2021.11.10 13:09:01
1 3 2021.11.10 13:09:50
2 1 2021.11.10 15:18:00
2 2 2021.11.10 15:20:00
3 1 2021.11.12 15:00:00
3 2 2021.11.12 15:10:00
3 2 2021.11.12 15:10:10
So the result looks like
id
1
3
I can't come up with a way to group by a period, or maybe it can be done another way?
select id
from t
group by id, ?
having count(message_id) > 1
You can join the table with itself, matching them on the chat id and your timeframe.
create table messages (chat_id integer,message_id integer,"time" timestamp);
insert into messages values
(1,1,'2021.11.10 13:09:00'),
(1,2,'2021.11.10 13:09:01'),
(1,3,'2021.11.10 13:09:50'),
(2,1,'2021.11.10 15:18:00'),
(2,2,'2021.11.10 15:20:00'),
(3,1,'2021.11.12 15:00:00'),
(3,2,'2021.11.12 15:10:00'),
(3,2,'2021.11.12 15:10:10');
select target_chat,
target_message,
count(*) "number of messages preceding by no more than 10 seconds"
from
(select t1.chat_id target_chat,
t1.message_id target_message,
t1.time,
t2.chat_id,
t2.message_id,
t2.time
from messages t1
inner join messages t2
on t1.chat_id=t2.chat_id
and t1.message_id<>t2.message_id
and (t2.time<=t1.time-'10 seconds'::interval and t2.time<=t1.time)) a
group by 1,2;
-- target_chat | target_message | number of messages preceding by no more than 10 seconds
-- ------------+----------------+----------------------------------------------------------
--           1 |              3 |                                                        2
--           2 |              2 |                                                        1
--           3 |              2 |                                                        2
-- (3 rows)
From that you can select the records with your desired number of preceding messages.
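As a sketch of that final step, reusing the answer's own join and counting logic (the >= 2 threshold is an assumption taken from the question's "two or more messages"):
-- Sketch: wrap the aggregation above and keep chats where some message has
-- two or more qualifying earlier messages; returns chat ids 1 and 3 for the
-- sample data inserted above.
select distinct target_chat as chat_id
from (select t1.chat_id    as target_chat,
             t1.message_id as target_message
      from messages t1
      inner join messages t2
              on t1.chat_id = t2.chat_id
             and t1.message_id <> t2.message_id
             and t2.time <= t1.time - '10 seconds'::interval
             and t2.time <= t1.time) a
group by target_chat, target_message
having count(*) >= 2;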
This is a simple query that, for each message, checks whether the next message in the same chat falls within our 10-second interval:
select id from test_table t where
t.time + interval '10 second' >=
  (select time from test_table where id = t.id and time > t.time order by time limit 1)
group by id;
results
id
----
1
3
To find rows within a period of time, you can typically use a window function, which avoids a self join on the table (RANGE with an interval offset requires PostgreSQL 11 or later):
SELECT id,
       count(*) OVER (PARTITION BY id
                      ORDER BY time
                      RANGE BETWEEN CURRENT ROW AND '10 seconds' FOLLOWING) AS ct
FROM t
Then you can use this query as a sub-query if you only want the ids whose count is greater than 1:
SELECT DISTINCT ON (l.id) l.id
FROM ( SELECT id,
              count(*) OVER (PARTITION BY id
                             ORDER BY time
                             RANGE BETWEEN CURRENT ROW AND '10 seconds' FOLLOWING) AS ct
       FROM t
     ) AS l
WHERE l.ct > 1 ;

How to enumerate rows by division?

I have the following table
id num sub_id
1 3 1
1 5 2
1 1 1
1 4 2
2 1 5
2 2 5
I want to get this result
id num sub_id number
1 3 1 1
1 5 2 2
1 1 1 1
1 4 2 2
2 1 5 1
2 2 5 1
I tried row_number() over (partition by id order by num, sub_id DESC), but the result obviously differs.
I don't fully understand your business logic, because you haven't explained it or given more information about it, but maybe this query helps you?
Result and info: dbfiddle
with recursive
cte_r as (
select id,
num,
sub_id,
row_number() over () as rn
from test),
cte as (
select id,
num,
sub_id,
rn,
rn as grp
from cte_r
where rn = 1
union all
select cr.id,
cr.num,
cr.sub_id,
cr.rn,
case
when cr.id != c.id then 1
when cr.id = c.id and cr.sub_id = c.sub_id then c.grp
when cr.id = c.id and cr.sub_id > c.sub_id then c.grp + 1
when cr.id = c.id and cr.sub_id < c.sub_id then 1
end
from cte c,
cte_r cr
where c.rn = cr.rn - 1)
select id,
num,
sub_id,
grp
from cte
order by id
It looks like you actually want to ignore the num column and then use DENSE_RANK on sub_id:
SELECT *, dense_rank() OVER (PARTITION BY id ORDER BY sub_id) AS number FROM …;

Join data from 3 tables

table_1
id customer_id
---------------
1 1
2 2
3 1
4 1
5 3
6 4
table_2
id id_table1 device_mac
-------------------------------------
1 1 aa:bb:cc:dd:ee:ff
2 1 11:22:33:44:55:66
3 2 1a:2a:3a:4a:5a:6a
4 3 2b:3b:4b:5b:6b:7b
5 4 3c:4c:5c:6c:7c:8c
6 2 4d:5d:6d:7d:8d:9d
table_3
id device_mac device_name
---------------------------------------
1 aa:bb:cc:dd:ee:ff loc1
2 11:22:33:44:55:66 loc2
3 1a:2a:3a:4a:5a:6a loc3
4 2b:3b:4b:5b:6b:7b loc4
5 3c:4c:5c:6c:7c:8c loc5
6 4d:5d:6d:7d:8d:9d loc6
I have a requirement where I need to get the details below by customer_id, using Python and a Postgres DB.
For example, get the details for customer_id = 1:
table1_id count(table_2) device_names
1 2 [loc1, loc2]
3 1 [loc4]
4 1 [loc5]
I tried with individual queries from Python (roughly):
select id from table_1 where customer_id = 1;
for each t1_id in the ids above:
    select * from table_2 where id_table1 = t1_id
    for each t2_row in the rows above:
        select * from table_3 where device_mac = t2_row.device_mac
        # generate expected rows
Can I just do this in a single query?
Join the tables and aggregate:
SELECT t1.id,
COUNT(*) count,
STRING_AGG(t3.device_name, ',' ORDER BY t3.device_name) device_names
FROM table_1 t1
INNER JOIN table_2 t2 ON t2.id_table1 = t1.id
INNER JOIN table_3 t3 ON t3.device_mac = t2.device_mac
WHERE t1.customer_id = 1
GROUP BY t1.id
If you are getting duplicate device_names you may use DISTINCT:
STRING_AGG(DISTINCT t3.device_name, ',' ORDER BY t3.device_name) device_names
See the demo.
Results:
id | count | device_names
---+-------+--------------
 1 |     2 | loc1,loc2
 3 |     1 | loc4
 4 |     1 | loc5
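Since the expected output in the question shows a list like [loc1, loc2], a variant with array_agg would return an actual array instead of a comma-separated string. A sketch of the same join:
-- Sketch: same join, but aggregate the names into an array instead of a string.
SELECT t1.id,
       COUNT(*) AS count,
       ARRAY_AGG(t3.device_name ORDER BY t3.device_name) AS device_names
FROM table_1 t1
INNER JOIN table_2 t2 ON t2.id_table1 = t1.id
INNER JOIN table_3 t3 ON t3.device_mac = t2.device_mac
WHERE t1.customer_id = 1
GROUP BY t1.id;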

SQL - add sequential counter column starting at condition

I have a table:
id market
1 mkt1
2 mkt2
3 mkt1
4 special
5 mkt2
6 mkt2
7 special
How can I select all columns from the table while also adding a sequential counter column, which starts counting once a condition has been triggered? In this example, when market = 'special':
id market count
1 mkt1 0
2 mkt2 0
3 mkt1 0
4 special 1
5 mkt2 2
6 mkt2 3
7 special 4
Here's one option using row_number with union all:
with cte as (
select min(id) as id from t where market = 'special'
)
select t.id, t.market, 0 rn
from t join cte on t.id < cte.id
union all
select t.id, t.market, row_number() over (order by t.id) rn
from t join cte on t.id >= cte.id
Online Demo
Edited to use min after your edits...
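A single-pass alternative without the union, as a sketch (assuming id defines the row order and the table is named t as above):
-- Sketch: flag whether a 'special' row has appeared yet (cumulative max),
-- then a running sum of that flag counts every row from the first 'special' on.
select id,
       market,
       sum(started) over (order by id rows unbounded preceding) as "count"
from (select id,
             market,
             max(case when market = 'special' then 1 else 0 end)
               over (order by id rows unbounded preceding) as started
      from t) s
order by id;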

Select rows with second highest value for each ID repeated multiple times

Id values
1 10
1 20
1 30
1 40
2 3
2 9
2 0
3 14
3 5
3 7
Answer should be
Id values
1 30
2 3
3 7
I tried as below
Select distinct
id,
(select max(values)
from table
where values not in (select max(values) from table)
)
You need the row_number window function. It adds a column with a row number within each group (in your case, each id). In a subquery you can then ask for the second row of each group.
demo:db<>fiddle
SELECT
  id, "values"
FROM (
  SELECT
    *,
    row_number() OVER (PARTITION BY id ORDER BY "values" DESC)
  FROM
    "table"
) s
WHERE row_number = 2