The objective is to display the most recent transaction by customer_id - postgresql

My code ranks each customer's transactions by row number as planned, but I cannot filter the join down to only the last transaction per customer. The objective is to display the last detailed transaction per customer_id. I attempted to use a window function and then filter on the resulting column.
CREATE TABLE customer1 (
customer_id INT PRIMARY KEY,
first_name VARCHAR(255),
last_name VARCHAR(255),
email VARCHAR(255),
created_at TIMESTAMP WITH TIME ZONE NOT NULL
);
CREATE TABLE purchase (
purchase_id INT PRIMARY KEY,
purchase_time TIMESTAMP WITH TIME ZONE NOT NULL,
customer_id INT NOT NULL,
FOREIGN KEY (customer_id) REFERENCES customer1(customer_id)
);
CREATE TABLE purchase_item (
purchase_item_id INT PRIMARY KEY,
purchase_id INT NOT NULL,
sku VARCHAR(255),
quantity INT NOT NULL,
total_amount_paid DECIMAL(10,2) NOT NULL,
FOREIGN KEY (purchase_id) REFERENCES purchase(purchase_id)
);
INSERT INTO customer1 (customer_id, first_name, last_name, email, created_at) VALUES
(1, 'James', 'Smith', 'jamessmith#example.com', clock_timestamp()),
(2, 'Mary', 'Johnson', 'maryjohnson#example.com', clock_timestamp()),
(3, 'John', 'Williams', 'johnwilliams#example.com', clock_timestamp()),
(4, 'Patricia', 'Brown', 'patriciabrown#example.com', clock_timestamp()),
(5, 'Michael', 'Garcia', 'michaelgarcia#example.com', clock_timestamp());
INSERT INTO purchase (purchase_id, purchase_time, customer_id) VALUES
(100, clock_timestamp(), 1),
(101, clock_timestamp(), 1),
(102, clock_timestamp(), 1),
(103, clock_timestamp(), 2),
(104, clock_timestamp(), 3),
(105, clock_timestamp(), 5);
INSERT INTO purchase_item(purchase_item_id, purchase_id, sku, quantity, total_amount_paid) VALUES
(200, 100, 'shoe_blk_42', 3, 300),
(201, 100, 'shoe_lace_white', 3, 2.5),
(202, 101, 'shorts', 1, 40),
(203, 102, 'bike', 1, 1995),
(204, 103, 'bike', 2, 3990),
(205, 103, 'shoe_wht_39', 2, 200),
(206, 104, 'shirt', 1, 60),
(207, 105, 'headphones', 1, 400);
SELECT DISTINCT customer1.customer_id,
first_name,
last_name,
email,
purchase.purchase_id,
purchase.purchase_time,
purchase_item.quantity,
purchase_item.total_amount_paid,
ROW_NUMBER() OVER (
PARTITION BY purchase.customer_id
ORDER BY purchase.purchase_time DESC
) AS order_queue
FROM customer1
JOIN purchase ON customer1.customer_id = purchase.customer_id
JOIN purchase_item ON purchase.purchase_id = purchase_item.purchase_id
WHERE order_queue = 1;

You can use DISTINCT ON to solve this:
select distinct on (customer1.customer_id)
customer1.customer_id,
first_name,
last_name,
email,
purchase.purchase_id,
purchase.purchase_time,
purchase_item.quantity,
purchase_item.total_amount_paid
FROM customer1
LEFT JOIN purchase ON customer1.customer_id = purchase.customer_id
LEFT JOIN purchase_item ON purchase.purchase_id = purchase_item.purchase_id
ORDER BY customer1.customer_id, purchase_time desc;
 customer_id | first_name | last_name | email                     | purchase_id | purchase_time                 | quantity | total_amount_paid
-------------+------------+-----------+---------------------------+-------------+-------------------------------+----------+-------------------
           1 | James      | Smith     | jamessmith#example.com    |         102 | 2019-06-14 20:17:26.759086+00 |        1 |           1995.00
           2 | Mary       | Johnson   | maryjohnson#example.com   |         103 | 2019-06-14 20:17:26.759098+00 |        2 |            200.00
           3 | John       | Williams  | johnwilliams#example.com  |         104 | 2019-06-14 20:17:26.759109+00 |        1 |             60.00
           4 | Patricia   | Brown     | patriciabrown#example.com |             |                               |          |
           5 | Michael    | Garcia    | michaelgarcia#example.com |         105 | 2019-06-14 20:17:26.75912+00  |        1 |            400.00
(5 rows)
You can change the LEFT JOINs to JOINs if you don't want to see customers with no purchases.
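As a side note (not part of the answer above), the ROW_NUMBER() approach from the question also works once the filter is moved to an outer query, because a window function cannot be referenced in the WHERE clause of the same SELECT. A minimal sketch against the tables above; like DISTINCT ON, it keeps a single item row per customer:
SELECT *
FROM (
    SELECT customer1.customer_id,
           first_name,
           last_name,
           email,
           purchase.purchase_id,
           purchase.purchase_time,
           purchase_item.quantity,
           purchase_item.total_amount_paid,
           ROW_NUMBER() OVER (
               PARTITION BY purchase.customer_id
               ORDER BY purchase.purchase_time DESC
           ) AS order_queue
    FROM customer1
    JOIN purchase ON customer1.customer_id = purchase.customer_id
    JOIN purchase_item ON purchase.purchase_id = purchase_item.purchase_id
) AS ranked
WHERE order_queue = 1;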

Related

Postgres Query to Generate a Table Matrix

I am attempting to generate a "matrix" (I may be using the term incorrectly here) from 3 tables using a Postgres query.
How can I achieve this with an SQL query?
Here are the example tables I have at the moment:
Company
+----+------------+
| id | name       |
+----+------------+
| 1  | 9999999991 |
| 2  | 9999999992 |
| 3  | 9999999993 |
| 4  | 9999999994 |
| 5  | 9999999995 |
| 6  | 9999999996 |
| 7  | 9999999997 |
| 8  | 9999999998 |
+----+------------+
Services
+----+-----------+
| id | name      |
+----+-----------+
| 1  | Service 1 |
| 2  | Service 2 |
| 3  | Service 3 |
| 4  | Service 4 |
+----+-----------+
Service Company Map
+----+------------+----------+
| id | company    | services |
+----+------------+----------+
| 1  | 9999999991 | 2        |
| 2  | 9999999991 | 4        |
| 3  | 9999999992 | 1        |
| 4  | 9999999992 | 4        |
| 5  | 9999999993 | 1        |
| 6  | 9999999993 | 3        |
| 7  | 9999999993 | 4        |
+----+------------+----------+
Here is an example of the matrix I am attempting to generate
+-----------+-----------+-----------+-----------+-----------+
|           | Service 1 | Service 2 | Service 3 | Service 4 |
+-----------+-----------+-----------+-----------+-----------+
| Company 1 | -         | X         | -         | X         |
| Company 2 | X         | -         | -         | X         |
| Company 3 | X         | -         | X         | X         |
| Company 4 | -         | -         | -         | -         |
+-----------+-----------+-----------+-----------+-----------+
(Note, I did reference this question, but we seem to be after different things: Postgres query for matrix table)
Update: Basic DDL / Inserts per #SQLpro's request
CREATE TABLE service_client_mappings
( id int NOT NULL,
company_id int NOT NULL,
service_id int NOT NULL,
CONSTRAINT service_client_mappings_pk PRIMARY KEY (id)
);
CREATE TABLE services
( id int NOT NULL,
name char(50) NOT NULL,
CONSTRAINT services_pk PRIMARY KEY (id)
);
CREATE TABLE company
( id int NOT NULL,
name char(50) NOT NULL,
CONSTRAINT company_pk PRIMARY KEY (id)
);
INSERT INTO company
(id, name)
VALUES
(1, 'ACME');
INSERT INTO company
(id, name)
VALUES
(2, 'Target');
INSERT INTO company
(id, name)
VALUES
(3, 'Walmart');
INSERT INTO services
(id, name)
VALUES
(1, 'Service A');
INSERT INTO services
(id, name)
VALUES
(2, 'Service B');
INSERT INTO services
(id, name)
VALUES
(3, 'Service C');
INSERT INTO services
(id, name)
VALUES
(4, 'Service D');
INSERT INTO service_client_mappings
(id, company_id, service_id)
VALUES
(1, 1, 2);
INSERT INTO service_client_mappings
(id, company_id, service_id)
VALUES
(1, 1, 4);
INSERT INTO service_client_mappings
(id, company_id, service_id)
VALUES
(1, 2, 1);
INSERT INTO service_client_mappings
(id, company_id, service_id)
VALUES
(1, 3, 2);
INSERT INTO service_client_mappings
(id, company_id, service_id)
VALUES
(1, 3, 3);
SELECT name,
CASE WHEN EXISTS(SELECT *
FROM services AS S
JOIN service_client_mappings AS m
ON S.id = m.service_id
WHERE s.name = 'Service A'
AND c.id = m.company_id)
THEN 'X'
ELSE '-'
END AS "Service A",
CASE WHEN EXISTS(SELECT *
FROM services AS S
JOIN service_client_mappings AS m
ON S.id = m.service_id
WHERE s.name = 'Service B'
AND c.id = m.company_id)
THEN 'X'
ELSE '-'
END AS "Service B",
CASE WHEN EXISTS(SELECT *
FROM services AS S
JOIN service_client_mappings AS m
ON S.id = m.service_id
WHERE s.name = 'Service C'
AND c.id = m.company_id)
THEN 'X'
ELSE '-'
END AS "Service C",
CASE WHEN EXISTS(SELECT *
FROM services AS S
JOIN service_client_mappings AS m
ON S.id = m.service_id
WHERE s.name = 'Service D'
AND c.id = m.company_id)
THEN 'X'
ELSE '-'
END AS "Service D"
FROM company AS c;
By the way, your data are incorrect... For INSERT INTO service_client_mappings, you should have:
INSERT INTO service_client_mappings
(id, company_id, service_id)
VALUES
(1, 1, 2);
INSERT INTO service_client_mappings
(id, company_id, service_id)
VALUES
(2, 1, 4);
INSERT INTO service_client_mappings
(id, company_id, service_id)
VALUES
(3, 2, 1);
INSERT INTO service_client_mappings
(id, company_id, service_id)
VALUES
(4, 3, 2);
INSERT INTO service_client_mappings
(id, company_id, service_id)
VALUES
(5, 3, 3);
Result is:
name           Service A Service B Service C Service D
-------------- --------- --------- --------- ---------
ACME           -         X         -         X
Target         X         -         -         -
Walmart        -         X         X         -
In addition, to build this with a variable number of services you can use dynamic SQL.
A first way to do this is:
SELECT CONCAT(
'SELECT name,
CASE WHEN EXISTS(SELECT *
FROM services AS S
JOIN service_client_mappings AS m
ON S.id = m.service_id
WHERE s.name = ''', name, '''
AND c.id = m.company_id)
THEN ''X''
ELSE ''-''
END AS "', name , '",') AS SQL_STRING
FROM services;
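The query above only emits one CASE fragment per service. As a possible next step (a sketch, not part of the original answer; the sql_string alias and the use of string_agg/format are assumptions), the fragments can be glued into one runnable statement, which could then be executed with EXECUTE inside a PL/pgSQL function:
SELECT 'SELECT c.name'
       || string_agg(
              format(', CASE WHEN EXISTS (SELECT 1 FROM service_client_mappings AS m'
                     || ' WHERE m.service_id = %s AND m.company_id = c.id)'
                     || ' THEN ''X'' ELSE ''-'' END AS %I',
                     s.id, trim(s.name)),
              '')
       || ' FROM company AS c' AS sql_string
FROM services AS s;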

MySQL select to get consecutive day count per user where the value is less than the previous day's value

MySQL v8.0
Question: How do I write a MySQL SELECT to get, per user, the consecutive-day count where the weight value is less than the previous day's weight value, breaking the streak when the days are no longer consecutive or the weight is the same as or greater than the previous day's weight for that user?
create table userData (recordDate DATE, userName varchar(100), weight FLOAT);
insert into userData (recordDate, userName, weight)
values
('2020/8/1','Chris', 78),
('2021/8/2','Chris', 77),
('2021/8/3','Chris', 76),
('2021/8/1','Aamir', 78),
('2021/8/2','Aamir', 77),
('2021/8/1','Alex', 78),
('2021/8/2','Alex', 77),
('2021/8/3','Alex', 76),
('2021/8/5','Chris', 78),
('2021/8/6','Chris', 77),
('2021/8/7','Chris', 76),
('2021/8/8','Chris', 75),
('2021/8/8','Aamir', 78),
('2021/8/8','Alex', 78),
('2021/8/9','John', 78),
('2021/8/1','Ali', 78),
('2021/8/10','Chris', 78);
The expected output is
| userName | streakDays | startingDate | endingDate |
| -------- | ---------- | ------------ | ---------- |
| Alex     | 3          | 2021-08-01   | 2021-08-03 |
| Chris    | 3          | 2021-08-06   | 2021-08-08 |
| Aamir    | 2          | 2021-08-01   | 2021-08-02 |
| Ali      | 1          | 2021-08-01   | 2021-08-01 |
| John     | 1          | 2021-08-09   | 2021-08-09 |
Any help would be appreciated.
According to the data inserted in the table, this SELECT query works fine:
select u.userName,
count(u.recordDate) as streakDays,
(select min(d.recordDate) from userData d where d.userName = u.userName) as startingDate,
(select max(d.recordDate) from userData d where d.userName = u.userName) as endingDate
from userData u
group by u.userName
And it gives output like:
| userName | streakDays | startingDate | endingDate |
| -------- | ---------- | ------------ | ---------- |
| Aamir    | 3          | 2021-08-01   | 2021-08-08 |
| Alex     | 4          | 2021-08-01   | 2021-08-08 |
| Ali      | 1          | 2021-08-01   | 2021-08-01 |
| Chris    | 8          | 2021-08-01   | 2021-08-10 |
| John     | 1          | 2021-08-09   | 2021-08-09 |
Let me know if this works for you or not!
Problem resolved with the following query:
select
streakBreakersRemoved.userName,
streakBreakersRemoved.streakDays,
streakBreakersRemoved.startingDate,
streakBreakersRemoved.endingDate
from
(
select
userName,
count(*) as streakDays,
min(recordDate) as startingDate,
max(recordDate) as endingDate,
row_number() over (partition by userName
order by
count(*) desc) as seqNum
from
(
select
initialRecords.*,
row_number() over (partition by userName
order by
recordDate) as initialSeqNum
from
(
select
userData.*,
lag(weight) over (partition by userName
order by
recordDate) as previousWeight
from
userData
)
initialRecords
where
if(previousWeight is null || previousWeight > weight, 1, 0) = 1
)
recordsWithSeqNum
group by
userName,
to_days(recordDate) - initialSeqNum
)
streakBreakersRemoved
where
seqNum = 1
order by
streakDays desc;
I would appreciate it if anyone would like to optimize the above query.
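A note for readers on why the accepted query works: the GROUP BY in recordsWithSeqNum is the usual gaps-and-islands trick. Within an unbroken streak, to_days(recordDate) and initialSeqNum both increase by 1 per row, so their difference stays constant and identifies the streak. A small sketch of just that building block (assuming the userData table from the question; the streakKey name is made up here):
SELECT userName,
       recordDate,
       -- plays the role of to_days(recordDate) - initialSeqNum in the query above
       TO_DAYS(recordDate)
           - ROW_NUMBER() OVER (PARTITION BY userName ORDER BY recordDate) AS streakKey
FROM (
    SELECT userData.*,
           LAG(weight) OVER (PARTITION BY userName ORDER BY recordDate) AS previousWeight
    FROM userData
) AS withPrev
WHERE previousWeight IS NULL OR previousWeight > weight;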

postgresql unique index preventing overlapping

My table permission looks like:
id serial,
person_id integer,
permission_id integer,
valid_from date,
valid_to date
I'd like to prevent creating permissions whose (valid_from, valid_to) date ranges overlap for the same person and permission,
e.g.
1 | 1 | 1 | 2010-10-01 | 2999-12-31
2 | 1 | 2 | 2010-10-01 | 2020-12-31
3 | 2 | 1 | 2015-10-01 | 2999-12-31
these can be added:
4 | 1 | 3 | 2011-10-01 | 2999-12-31 - because there is no such permission yet
5 | 2 | 1 | 2011-10-10 | 2999-12-31 - because there is no such person yet
6 | 1 | 2 | 2021-01-01 | 2999-12-31 - because it doesn't overlap id 2
but these can't:
7 | 1 | 1 | 2009-10-01 | 2010-02-01 - because it overlaps id 1
8 | 1 | 2 | 2019-01-01 | 2022-12-31 - because it overlaps id 2
9 | 2 | 1 | 2010-01-01 | 2016-12-31 - because it overlaps id 3
I can do the checking outside the database, but I wonder if it is possible to do it in the database.
A unique constraint is based on an equality operator and cannot be used in this case, but you can use an exclusion constraint. The constraint combines the btree equality operator = on the integer columns with the range overlap operator &&; to use = inside a GiST index you have to install the btree_gist extension.
create extension if not exists btree_gist;
create table permission(
id serial,
person_id integer,
permission_id integer,
valid_from date,
valid_to date,
exclude using gist (
person_id with =,
permission_id with =,
daterange(valid_from, valid_to) with &&)
);
These inserts are successful:
insert into permission values
(1, 1, 1, '2010-10-01', '2999-12-31'),
(2, 1, 2, '2010-10-01', '2020-12-31'),
(3, 2, 1, '2015-10-01', '2999-12-31'),
(4, 1, 3, '2011-10-01', '2999-12-31'),
(5, 3, 1, '2011-10-10', '2999-12-31'), -- you meant person_id = 3 I suppose
(6, 1, 2, '2021-01-01', '2999-12-31'),
(7, 1, 1, '2009-10-01', '2010-02-01'); -- ranges do not overlap!
but this one is not:
insert into permission values
(8, 1, 2, '2019-01-01', '2022-12-31');
ERROR: conflicting key value violates exclusion constraint "permission_person_id_permission_id_daterange_excl"
DETAIL: Key (person_id, permission_id, daterange(valid_from, valid_to))=(1, 2, [2019-01-01,2022-12-31)) conflicts with existing key (person_id, permission_id, daterange(valid_from, valid_to))=(1, 2, [2010-10-01,2020-12-31)).
Try it in db<>fiddle.
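One detail worth adding (not part of the answer above): daterange(valid_from, valid_to) builds a half-open range [valid_from, valid_to) by default, so two rows that merely share an endpoint date do not conflict. If valid_to is meant to be inclusive, the same constraint can pass explicit bounds inside the create table, e.g.:
-- variant of the exclusion constraint with an inclusive upper bound
exclude using gist (
    person_id with =,
    permission_id with =,
    daterange(valid_from, valid_to, '[]') with &&)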

DB2: How to join indirectly referenced data

I have the following table structure (I've removed some columns and created a stub) to support versioning and reduce duplication of data. Imagine an article review process where each step is stored in the database (article_meta). Whenever the article itself changes, that data is stored in the DB, too.
The versioning is done by a reference to the predecessor (pre_meta_id).
WITH
t_article_meta (id, pre_meta_id, user_id, state) as (
values (1, NULL, 101, 'submitted')
union all values (2, 1, 7, 'inreview')
union all values (3, 2, 7, 'rejected')
union all values (4, 3, 101, 'submitted')
union all values (5, NULL, 202, 'submitted')
union all values (6, 5, 7, 'inreview')
union all values (7, 6, 7, 'accepted')
union all values (8, 4, 7, 'inreview')
union all values (9, 8, 7, 'accepted')
),
t_article (id, meta_id, content) as (
values (1, 1, 'Hello wordl')
union all values (2, 4, 'Hello world')
union all values (3, 5, 'Lorem ipsum doloret')
)
SELECT ...;
Now I want to create a view that combines meta data and article data even if there is no direct reference (only an indirect one via the predecessor).
id | pre_meta_id | user_id | state | content (left join) | content (I want to have)
---|-------------|---------|-----------|---------------------|-------------------------
1 | NULL | 101 | submitted | Hello wordl | Hello wordl
2 | 1 | 7 | inreview | NULL | Hello wordl
3 | 2 | 7 | rejected | NULL | Hello wordl
4 | 3 | 101 | submitted | Hello world | Hello world
5 | NULL | 202 | submitted | Lorem ipsum doloret | Lorem ipsum doloret
6 | 5 | 7 | inreview | NULL | Lorem ipsum doloret
7 | 6 | 7 | accepted | NULL | Lorem ipsum doloret
8 | 4 | 7 | inreview | NULL | Hello world
9 | 8 | 7 | accepted | NULL | Hello world
How can I realize something like that in DB2 in a performant way? My first idea, a join on a function (to walk back to the predecessor that has an article attached), sounds really expensive to me.
This SQL would do the job:
SELECT m.id, m.pre_meta_id, user_id, state, content,
last_value(content, 'IGNORE NULLS') over (order by m.id) as last_value
FROM article_meta m
LEFT JOIN article a
ON m.id = a.meta_id
ORDER BY m.id
It is the regular join to combine the tables, with an additional column (given another name than in your expected result, to show the difference).
You might want to rename that column and remove content to get an exact match to your expected result.
For the adjusted requirements the SQL gets more complex, as we have to define a recursive query to get the title/content for all the children; it will look like this:
with temp (id, pre_meta_id, user_id, state, level, parent, root) as (
select m.id, m.pre_meta_id, m.user_id, m.state, 1 as level, m.pre_meta_id as parent, m.id as root
from article_meta m, article a
where m.id = a.meta_id
union all
select m.id, m.pre_meta_id, m.user_id, m.state, level + 1 as level, t.id as parent, t.root
from temp t, article_meta m
where m.pre_meta_id = t.id
and m.id not in (select meta_id from article)
and level < 10
)
select *
from temp t
left join article a
on t.root = a.meta_id
order by 1
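To reproduce exactly the columns of the expected result, only the final SELECT of that statement needs to change; the WITH part stays the same. A sketch:
select t.id, t.pre_meta_id, t.user_id, t.state, a.content
from temp t
left join article a
on t.root = a.meta_id
order by t.id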

Select statement with join, or subquery limit

For a few days now I've been trying to solve this problem.
I have the tables group_user and group_name.
What I want to do is select a user's groups, then the description of each group (from group_name), and 10 other users from each group.
The first two are not a problem. The problem is that I can't manage to limit the users.
I can select the user's groups and the other users in those groups; I just don't know how to limit that.
Using:
SELECT a.g_id, b.`group`, b.userid
FROM group_user AS a
RIGHT JOIN
(SELECT g_id as `group`, u_id as userid FROM group_user) AS b ON a.g_id = b.`group`
WHERE u_id = 112
It shows me my user's groups and the users in those groups. But when I try to apply a limit in the subquery, it limits everything, not each particular group.
I also tried selecting users with IN over the groups my user is in, without luck.
I was thinking maybe GROUP BY and HAVING would help, but I can't see how to use them here.
So my question is: how can I limit a subquery result in MySQL when the subquery is built on the result of the outer query?
I think I'm overloaded and maybe I'm not seeing something.
UPDATE: to show what I really want to accomplish, here's another piece of code.
SELECT g_id FROM group_user WHERE user_id = 112
This gets all the groups the user is in. Say each result of that select is a variable extra_group; then the second query would be
SELECT u_id FROM group_user WHERE group_id = extra_group LIMIT 10
I need to do the same as above in one query.
Another UPDATE after Mike's post.
I should add that a user can be in more than one group. So I think the real problem is that I don't have any clue how to select those groups and, in the same query, select 10 users for the selected groups, so the result could be
g_id | u_id
1 | 2
1 | 3
1 | 4
3 | 3
3 | 8
where g_id is one of the user's groups from this query:
SELECT g_id FROM group_user WHERE user_id = 112
Create sample tables and add data:
CREATE TABLE `group_user` (
`u_id` int(11) DEFAULT NULL,
`g_id` int(11) DEFAULT NULL,
`apply_date` date DEFAULT NULL
);
CREATE TABLE `group_name` (
`g_id` int(11) DEFAULT NULL,
`g_name` varchar(255) DEFAULT NULL
);
INSERT INTO `group_name` VALUES
(1, 'Group 1'), (2, 'Group 2'), (3, 'Group 3'), (4, 'Group 4'), (5, 'Group 5');
INSERT INTO `group_user` VALUES
(1, 1, '2010-12-01'), (1, 2, '2010-12-01'), (1, 3, '2010-12-01'), (1, 4, '2010-12-01'), (1, 5, '2010-12-01'),
(2, 1, '2010-12-02'), (2, 2, '2010-12-02'),
(3, 1, '2010-12-03'), (3, 2, '2010-12-03'), (3, 3, '2010-12-03'), (3, 4, '2010-12-03'),
(4, 1, '2010-12-04'), (4, 2, '2010-12-04'),
(5, 1, '2010-12-05'), (5, 2, '2010-12-05'),
(6, 1, '2010-12-06'), (6, 2, '2010-12-06'),
(7, 1, '2010-12-07'), (7, 2, '2010-12-07'), (7, 3, '2010-12-07'), (7, 4, '2010-12-07'), (7, 5, '2010-12-07'),
(8, 1, '2010-12-08'), (8, 2, '2010-12-08'),
(9, 1, '2010-12-09'), (9, 2, '2010-12-09'), (9, 3, '2010-12-09'), (9, 4, '2010-12-09'), (9, 5, '2010-12-09');
Select the groups of which user u_id = 1 is a member. Then, for each group, select a maximum of 4 members (excluding user u_id = 1), ordered by descending apply_date:
SELECT u3.g_id, g.g_name, u3.u_id, u3.apply_date
FROM (
SELECT
u1.g_id,
u1.u_id,
u1.apply_date,
IF( @prev_gid <> u1.g_id, @user_index := 1, @user_index := @user_index + 1 ) AS user_index,
@prev_gid := u1.g_id AS prev_gid
FROM group_user AS u1
JOIN (SELECT @prev_gid := 0, @user_index := NULL) AS vars
JOIN group_user AS u2
ON u2.g_id = u1.g_id
AND u2.u_id = 1
AND u1.u_id <> 1
ORDER BY u1.g_id, u1.apply_date DESC, u1.u_id
) AS u3
JOIN group_name AS g ON g.g_id = u3.g_id
WHERE u3.user_index <= 4
ORDER BY u3.g_id, u3.apply_date DESC, u3.u_id;
+------+---------+------+------------+
| g_id | g_name  | u_id | apply_date |
+------+---------+------+------------+
|    1 | Group 1 |    5 | 2010-12-05 |
|    1 | Group 1 |    4 | 2010-12-04 |
|    1 | Group 1 |    3 | 2010-12-03 |
|    1 | Group 1 |    2 | 2010-12-02 |
|    2 | Group 2 |    5 | 2010-12-05 |
|    2 | Group 2 |    4 | 2010-12-04 |
|    2 | Group 2 |    3 | 2010-12-03 |
|    2 | Group 2 |    2 | 2010-12-02 |
|    3 | Group 3 |    9 | 2010-12-09 |
|    3 | Group 3 |    7 | 2010-12-07 |
|    3 | Group 3 |    3 | 2010-12-03 |
|    4 | Group 4 |    9 | 2010-12-09 |
|    4 | Group 4 |    7 | 2010-12-07 |
|    4 | Group 4 |    3 | 2010-12-03 |
|    5 | Group 5 |    9 | 2010-12-09 |
|    5 | Group 5 |    7 | 2010-12-07 |
+------+---------+------+------------+
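On MySQL 8.0 and later, the same per-group limit can be written with a window function instead of user variables. A sketch against the sample tables above (not part of the original answer, but it mirrors its logic: the groups of user 1, at most 4 other members per group, newest apply_date first):
SELECT g_id, g_name, u_id, apply_date
FROM (
    SELECT u1.g_id,
           g.g_name,
           u1.u_id,
           u1.apply_date,
           ROW_NUMBER() OVER (
               PARTITION BY u1.g_id
               ORDER BY u1.apply_date DESC, u1.u_id
           ) AS user_index
    FROM group_user AS u1
    JOIN group_user AS u2
      ON u2.g_id = u1.g_id
     AND u2.u_id = 1
    JOIN group_name AS g
      ON g.g_id = u1.g_id
    WHERE u1.u_id <> 1
) AS ranked
WHERE user_index <= 4
ORDER BY g_id, apply_date DESC, u_id;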