How to get info about an element's position in the table? - postgresql

I have a query:
Select * from mytable order by "date"
And the result:
date       | item_id | user_id | some_data
-------------------------------------------
2015-01-01 | 1       | 1       | null
2015-01-01 | 1       | 1       | null
2015-01-02 | 1       | 1       | null
2015-01-03 | 1       | 1       | null
2015-01-03 | 1       | 2       | null
2015-01-04 | 1       | 1       | null
2015-01-05 | 1       | 2       | null
And I want to get the position of the first row where user_id = 2. In this example it would be 5. How can I do that?

select pos_overall
from (
    select user_id,
           row_number() over (order by "date") as pos_overall,
           row_number() over (partition by user_id order by "date") as user_pos
    from mytable
) t
where user_id = 2
  and user_pos = 1

You can use the row_number() function to number the rows in order of date, user_id and then select the minimum value:
select min(rn)
from (
    select
        user_id, row_number() over (order by date, user_id) as rn
    from mytable
) x
where user_id = 2;
If the item_id can change, you might want to include it in the order by clause of the row_number function in the derived table.
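For example, a sketch of that variation, using the column names from the sample above:
select min(rn)
from (
    select
        user_id,
        row_number() over (order by "date", item_id, user_id) as rn
    from mytable
) x
where user_id = 2;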

Related

Distinct Count Dates by timeframe

I am trying to find the daily count of frequent visitors from a very large data set. Frequent visitors in this case are visitor IDs used on 2 distinct days in a rolling 3-day period.
My data set looks like this:
ID | Date | Location | State | Brand |
1 | 2020-01-02 | A | CA | XYZ |
1 | 2020-01-03 | A | CA | BCA |
1 | 2020-01-04 | A | CA | XYZ |
1 | 2020-01-06 | A | CA | YQR |
1 | 2020-01-06 | A | WA | XYZ |
2 | 2020-01-02 | A | CA | XYZ |
2 | 2020-01-05 | A | CA | XYZ |
This is the result I am going for. The count in the Visits column is equal to the count of distinct days from the Date column within the current day and the 2 preceding days, for each ID. So for ID 1 on 2020-01-05, there were visits on the 3rd and 4th, so the count is 2.
Date | ID | Visits | Frequent Prior 3 Days
2020-01-01 | Null | Null | Null
2020-01-02 | 1 | 1 | No
2020-01-02 | 2 | 1 | No
2020-01-03 | 1 | 2 | Yes
2020-01-03 | 2 | 1 | No
2020-01-04 | 1 | 3 | Yes
2020-01-04 | 2 | 1 | No
2020-01-05 | 1 | 2 | Yes
2020-01-05 | 2 | 1 | No
2020-01-06 | 1 | 2 | Yes
2020-01-06 | 2 | 1 | No
2020-01-07 | 1 | 1 | No
2020-01-07 | 2 | 1 | No
2020-01-08 | 1 | 1 | No
2020-01-09 | 1 | null | Null
I originally tried to use the following line to get the result for the Visits column, but ended up with 3 in every successive row once it first reached 3 for that ID:
, count(ID) over (Partition by ID order by Date ASC rows between 3 preceding and current row) as visits
I've scoured the forum, but every somewhat similar question seems to involve counting the values rather than the dates, and I haven't been able to figure out how to tweak it to get what I need. Any help is much appreciated.
You can aggregate the dataset by user and date, then use window functions with a range frame to look at the three preceding days.
You did not say which database you are running, and not all databases support window range frames or use the same syntax for interval literals. In standard SQL, you would write:
select
    id,
    date,
    count(*) cnt_visits,
    case
        when sum(count(*)) over(
                 partition by id
                 order by date
                 range between interval '3' day preceding and current row
             ) >= 2
        then 'Yes'
        else 'No'
    end is_frequent_visitor
from mytable
group by id, date
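For reference, PostgreSQL 11 and later support this kind of range frame; a minimal sketch of the same query using the usual Postgres interval spelling (table and column names assumed from the sample above):
select
    id,
    date,
    count(*) cnt_visits,
    case
        when sum(count(*)) over(
                 partition by id
                 order by date
                 range between interval '3 days' preceding and current row
             ) >= 2
        then 'Yes'
        else 'No'
    end is_frequent_visitor
from mytable
group by id, date;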
On the other hand, if you want a record for every user and every day (even when there is no visit), then it is a bit different. You can generate the dataset first, then bring in the table with a left join:
select
    i.id,
    d.date,
    count(t.id) cnt_visits,
    case
        when sum(count(t.id)) over(
                 partition by i.id
                 order by d.date
                 range between interval '3' day preceding and current row
             ) >= 2
        then 'Yes'
        else 'No'
    end is_frequent_visitor
from (select distinct id from mytable) i
cross join (select distinct date from mytable) d
left join mytable t
    on  t.date = d.date
    and t.id = i.id
group by i.id, d.date
I would be inclined to approach this by expanding out the days and visitors using a cross join and then just window functions. Assuming you have all dates in the data:
select i.id, d.date,
       count(t.id) over (partition by i.id
                         order by d.date
                         rows between 2 preceding and current row
                        ) as cnt_visits,
       (case when count(t.id) over (partition by i.id
                                    order by d.date
                                    rows between 2 preceding and current row
                                   ) >= 2
             then 'Yes' else 'No'
        end) as is_frequent_visitor
from (select distinct id from t) i cross join
     (select distinct date from t) d left join
     (select distinct id, date from t) t
     on t.date = d.date and
        t.id = i.id;

Postgres join and distinct query

I have two tables
user
id | name
-----------
 1 | User1
 2 | User2
 3 | User3
 4 | User4
A user can change their name at any moment.
And another table
order
id | user_name | user_id | price | order_date
------------------------------------------------------
 1 | OldUser3  | 3       | 5     | 2017-07-12 08:01:00.000000
 2 | NewUser3  | 3       | 6     | 2017-07-12 09:01:00.000000
 3 | User1     | 1       | 8     | 2017-07-12 10:01:00.000000
 4 | NewUser   |         | 10    | 2017-07-12 11:01:00.000000
 5 | NewUser   |         | 100   | 2017-07-12 12:01:00.000000
user_name is copied from the user table at the moment the order is made, so if a user changes their name several times there can be different values.
user_id can be null if it's not a registered user.
I need a result table like this:
order
no | user_name | user_id | total_pr | count | last_order
----------------------------------------------------------
 1 | NewUser3  | 3       | 11       | 2     | 2017-07-12 09:01:00.000000
 2 | User1     | 1       | 8        | 1     | 2017-07-12 10:01:00.000000
 3 | NewUser   |         | 10       | 1     | 2017-07-12 11:01:00.000000
 4 | NewUser   |         | 100      | 1     | 2017-07-12 12:01:00.000000
The user_name value must be taken from the row with the biggest order_date, and I need to be able to sort by any column.
And if user_id is null, users with the same name are still different users.
I tried this:
SELECT order.user_id, order.user_name, SUM(price), COUNT(order.user_id), MAX(order_date)
FROM order, user
WHERE
order.order_date >= '2017-07-01 08:01:00.000000'
AND order.order_date <= '2017-07-15 08:01:00.000000'
GROUP BY user_id, user_name ORDER BY count ASC
but it's not all I need.
Try this:
WITH users_cte (user_name, user_id, total_pr, count, last_order) AS (
    -- Fetching data for members who are in the users table
    SELECT user_name, user_id, total_pr, count, last_order
    FROM (
        SELECT o.user_name, o.user_id,
               ROW_NUMBER() OVER (PARTITION BY o.user_id ORDER BY order_date DESC) AS rno,
               SUM(price) OVER (PARTITION BY o.user_id) AS total_pr,
               COUNT(o.user_id) OVER (PARTITION BY o.user_id) AS count,
               MAX(order_date) OVER (PARTITION BY o.user_id) AS last_order
        FROM orders o
        LEFT JOIN users u ON o.user_id = u.id
        WHERE u.id IS NOT NULL
          AND o.order_date >= '2017-07-01 08:01:00.000000'
          AND o.order_date <= '2017-07-15 08:01:00.000000'
    ) a
    WHERE a.rno = 1
    UNION ALL
    -- Fetching data for new members
    SELECT o.user_name, NULL AS user_id,
           SUM(price) AS total_pr, COUNT(o.user_name), MAX(order_date)
    FROM orders o
    LEFT JOIN users u ON o.user_id = u.id
    WHERE u.id IS NULL
      AND o.order_date >= '2017-07-01 08:01:00.000000'
      AND o.order_date <= '2017-07-15 08:01:00.000000'
    GROUP BY o.user_name
)
SELECT ROW_NUMBER() OVER (ORDER BY last_order) AS no, *
FROM users_cte
Try:
SELECT order.user_id, order.user_name, SUM(price), COUNT(order.user_id) AS count, MAX(order_date)
FROM order
LEFT OUTER JOIN user ON order.user_id = user.id
WHERE
    order.order_date >= '2017-07-01 08:01:00.000000'
    AND order.order_date <= '2017-07-15 08:01:00.000000'
GROUP BY user_id, user_name
ORDER BY count ASC

Updating multiple rows with a certain value from the same table

So, I have the following table:
time     | name   | ID   |
12:00:00 | access | 1    |
12:05:00 | select | null |
12:10:00 | update | null |
12:15:00 | insert | null |
12:20:00 | out    | null |
12:30:00 | access | 2    |
12:35:00 | select | null |
The table is bigger (approx. 1-1.5 million rows), there will be IDs equal to 2, 3, 4, etc., and there are rows in between.
The following should be the result:
time     | name   | ID |
12:00:00 | access | 1  |
12:05:00 | select | 1  |
12:10:00 | update | 1  |
12:15:00 | insert | 1  |
12:20:00 | out    | 1  |
12:30:00 | access | 2  |
12:35:00 | select | 2  |
What is the simplest method to update the rows without filling up the log? Like, one ID at a time.
You can do it with a subquery:
UPDATE YourTable t
SET t.ID = (SELECT TOP 1 s.ID
            FROM YourTable s
            WHERE s.time < t.time AND s.name = 'access'
            ORDER BY s.time DESC)
WHERE t.name <> 'access'
An index on (ID, time, name) will help.
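For example, it might be created like this (the index name is made up, and the exact syntax can vary by database):
CREATE INDEX ix_yourtable_id_time_name ON YourTable (ID, time, name);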
You can do it using a CTE as below:
;WITH myCTE
AS ( SELECT time
          , name
          , ROW_NUMBER() OVER ( PARTITION BY name ORDER BY time ) AS [rank]
          , ID
     FROM YourTable
   )
UPDATE myCTE
SET myCTE.ID = myCTE.rank

SELECT *
FROM YourTable
ORDER BY ID

How can I get the sum(value) on the latest gather_time per group(name,col1) in PostgreSQL?

Actually, I got a good answer about a similar issue on the thread below, but I need one more solution for a different data set.
How to get the latest 2 rows ( PostgreSQL )
The data set has historical data, and I just want to get sum(value) for each group at its latest gather_time.
The final result should be as follows:
name | col1 | gather_time | sum
-------+------+---------------------+-----
first | 100 | 2016-01-01 23:12:49 | 6
first | 200 | 2016-01-01 23:11:13 | 4
However, I can only see the data for one group (first-100) with the query below, meaning that there is no data for the second group (first-200).
The thing is that I need to get one row per group.
The number of groups can vary.
select name,col1,gather_time,sum(value)
from testtable
group by name,col1,gather_time
order by gather_time desc
limit 2;
name | col1 | gather_time | sum
-------+------+---------------------+-----
first | 100 | 2016-01-01 23:12:49 | 6
first | 100 | 2016-01-01 23:11:19 | 6
(2 rows)
Can you advise me on how to accomplish this requirement?
Data set
create table testtable
(
name varchar(30),
col1 varchar(30),
col2 varchar(30),
gather_time timestamp,
value integer
);
insert into testtable values('first','100','q1','2016-01-01 23:11:19',2);
insert into testtable values('first','100','q2','2016-01-01 23:11:19',2);
insert into testtable values('first','100','q3','2016-01-01 23:11:19',2);
insert into testtable values('first','200','t1','2016-01-01 23:11:13',2);
insert into testtable values('first','200','t2','2016-01-01 23:11:13',2);
insert into testtable values('first','100','q1','2016-01-01 23:11:11',2);
insert into testtable values('first','100','q1','2016-01-01 23:12:49',2);
insert into testtable values('first','100','q2','2016-01-01 23:12:49',2);
insert into testtable values('first','100','q3','2016-01-01 23:12:49',2);
select *
from testtable
order by name,col1,gather_time;
name | col1 | col2 | gather_time | value
-------+------+------+---------------------+-------
first | 100 | q1 | 2016-01-01 23:11:11 | 2
first | 100 | q2 | 2016-01-01 23:11:19 | 2
first | 100 | q3 | 2016-01-01 23:11:19 | 2
first | 100 | q1 | 2016-01-01 23:11:19 | 2
first | 100 | q3 | 2016-01-01 23:12:49 | 2
first | 100 | q1 | 2016-01-01 23:12:49 | 2
first | 100 | q2 | 2016-01-01 23:12:49 | 2
first | 200 | t2 | 2016-01-01 23:11:13 | 2
first | 200 | t1 | 2016-01-01 23:11:13 | 2
One option is to join your original table to a table containing only the records with the latest gather_time for each name, col1 group. Then you can take the sum of the value column for each group to get the result set you want.
SELECT t1.name, t1.col1, MAX(t1.gather_time) AS gather_time, SUM(t1.value) AS sum
FROM testtable t1
INNER JOIN
(
    SELECT name, col1, col2, MAX(gather_time) AS maxTime
    FROM testtable
    GROUP BY name, col1, col2
) t2
    ON t1.name = t2.name AND t1.col1 = t2.col1 AND t1.col2 = t2.col2
   AND t1.gather_time = t2.maxTime
GROUP BY t1.name, t1.col1
If you wanted to use a subquery in the WHERE clause, as you attempted in your OP, to restrict to only the records with the latest gather_time, then you could try the following:
SELECT name, col1, gather_time, SUM(value) AS sum
FROM testtable t1
WHERE gather_time =
    (
        SELECT MAX(gather_time)
        FROM testtable t2
        WHERE t1.name = t2.name AND t1.col1 = t2.col1
    )
GROUP BY name, col1, gather_time

PostgreSQL: Combine Count and DISTINCT ON

Given this table
| id | name | created_at |
| 1 | test | 2015-02-24 11:13:28.605968 |
| 2 | other | 2015-02-24 13:04:56.968004 |
| 3 | test | 2015-02-24 11:14:24.670765 |
| 4 | test | 2015-02-24 11:15:05.293904 |
And this query, which returns only the rows with id 2 and id 4:
SELECT DISTINCT ON (documents.name) documents.*
FROM "documents"
ORDER BY documents.name, documents.created_at DESC
How can I return the number of rows affected? Something like:
SELECT COUNT(DISTINCT ON (documents.name) documents.*) FROM "documents"
You can use an outer query:
SELECT COUNT(1)
FROM (
    SELECT DISTINCT ON (name) *
    FROM documents
    ORDER BY name, created_at DESC
) alias
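Side note: since DISTINCT ON (name) keeps exactly one row per name, counting the distinct names directly gives the same number here, provided name is never null:
SELECT COUNT(DISTINCT name) FROM documents;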