How to get unique values out of a GROUP BY clause - tsql

I have a table that has a unique identifier column, a relational key column, and a varchar column.
| Id | ForeignId | Fruits |
---------------------------------
| 1 | 1 | Apple |
| 2 | 2 | Apple |
| 3 | 3 | Apple |
| 4 | 4 | Banana |
What I would like to do, is group the data by the Fruits column, and also return a list of all the ForeignId keys that are in that particular group. Like so:
| Fruit | ForeignKeys |
---------------------------
| Apple | 1, 2, 3 |
| Banana | 4 |
So far I have the SQL that gets me the grouped Fruit values, as that is trivial. But I cannot seem to find a good solution for retrieving the ForeignId values that are contained within each group.
SELECT Fruits FROM FruitTable GROUP BY Fruits
I found the GROUP_CONCAT function in MySQL, which appears to do what I need, but it isn't available in SQL Server 2017.
Any help on this would be appreciated.

If you are using SQL Server 2016 or older:
SELECT Fruit = Fruits,
       ForeignKeys = STUFF((
           SELECT ',' + CAST(ForeignId AS VARCHAR(100))
           FROM FruitTable t2
           WHERE t1.Fruits = t2.Fruits
           FOR XML PATH('')
       ), 1, 1, '')
FROM FruitTable t1
GROUP BY Fruits;
If you are using SQL Server 2017 or later, you can write this more simply with STRING_AGG:
SELECT Fruit = Fruits, ForeignKeys = STRING_AGG(ForeignId, ',')
FROM FruitTable
GROUP BY Fruits;
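If the order of the keys inside each list matters, STRING_AGG (also available from SQL Server 2017) accepts a WITHIN GROUP clause; a sketch against the same table:

```sql
-- Concatenate the keys in ascending ForeignId order within each fruit group.
SELECT Fruit = Fruits,
       ForeignKeys = STRING_AGG(CAST(ForeignId AS VARCHAR(100)), ',')
                     WITHIN GROUP (ORDER BY ForeignId)
FROM FruitTable
GROUP BY Fruits;
```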

SELECT fruits,
       foreignkeys = STUFF((
           SELECT ',' + CAST(foreignid AS VARCHAR(100))
           FROM fruittable t2
           WHERE t2.fruits = t1.fruits
           FOR XML PATH('')
       ), 1, 1, '')
FROM fruittable t1
GROUP BY fruits;
This should work.


How to select rows based on properties of another row?

Had a question..
| a_id | name | r_id | message | date
_____________________________________________
| 1 | bob | 77 | bob here | 1-jan
| 1 | bob | 77 | bob here again | 2-jan
| 2 | jack | 77 | jack here. | 2-jan
| 1 | bob | 79 | in another room| 3-feb
| 3 | gill | 79 | gill here | 4-feb
These are basically accounts (a_id) chatting inside different rooms (r_id).
I'm trying to find the last chat message for every room that jack (a_id = 2) is chatting in.
What I've tried so far is using DISTINCT ON (r_id) ... ORDER BY r_id, date DESC.
But this incorrectly gives me the last message in every room, instead of only the last message in every room that jack belongs to.
| 2 | jack | 77 | jack here. | 2-jan
| 3 | gill | 79 | gill here | 4-feb
Is this a partitioning problem rather than a job for DISTINCT ON?
I would suggest:
to group the rows by r_id with a GROUP BY clause;
to select only the groups that include a_id = 2, using a HAVING clause that aggregates the a_id values of each group: HAVING array_agg(a_id) @> array[2];
to select the latest message of each selected group by aggregating its rows into an array with ORDER BY date DESC and taking the first element of the array: (array_agg(t.*))[1];
to convert the selected row into a JSON object and then display the expected result using the json_populate_record function.
The full query is:
SELECT (json_populate_record(null :: my_table, (array_agg(to_json(t.*) ORDER BY date DESC))[1])).*
FROM my_table AS t
GROUP BY r_id
HAVING array_agg(a_id) @> array[2]
and the result is:
a_id | name | r_id | message  | date
-----+------+------+----------+-----------
1    | bob  | 77   | bob here | 2022-01-01
see dbfiddle
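Alternatively, the DISTINCT ON approach from the question works once the rooms are first restricted to the ones jack participates in; a sketch, assuming the table is named my_table as in the query above:

```sql
-- Keep only rooms where a_id = 2 has posted, then take the newest row per room.
SELECT DISTINCT ON (r_id) a_id, name, r_id, message, date
FROM my_table
WHERE r_id IN (SELECT r_id FROM my_table WHERE a_id = 2)
ORDER BY r_id, date DESC;
```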
For the last message in every chat room, it would simply be:
select a_id, name, r_id, to_char(max(date),'dd-mon') from chats
where a_id =2
group by r_id, a_id,name;
Fiddle https://www.db-fiddle.com/f/keCReoaXg2eScrhFetEq1b/0
Or, to see the messages:
with last_message as (
select a_id, name, r_id, to_char(max(date),'dd-mon') date from chats
where a_id =1
group by r_id, a_id,name
)
select l.*, c.message
from last_message l
join chats c on (c.a_id= l.a_id and l.r_id=c.r_id and l.date=to_char(c.date,'dd-mon'));
Fiddle https://www.db-fiddle.com/f/keCReoaXg2eScrhFetEq1b/1
Though all this complication could be avoided with a primary key on your table.

Counting consecutive days in postgres

I'm trying to count the number of consecutive days in two tables with the following structure:
| id | email | timestamp |
| -------- | -------------- | -------------- |
| 1 | hello#example.com | 2021-10-22 00:35:22 |
| 2 | hello2#example.com | 2021-10-21 21:17:41 |
| 1 | hello#example.com | 2021-10-19 00:35:22 |
| 1 | hello#example.com | 2021-10-18 00:35:22 |
| 1 | hello#example.com | 2021-10-17 00:35:22 |
I would like to count the number of consecutive days of activity. The data above would show:
| id | email | length |
| -------- | -------------- | -- |
| 1 | hello#example.com | 1 |
| 2 | hello2#example.com | 1 |
| 1 | hello#example.com | 3 |
This is made more difficult because I need to join the two tables using a UNION (or something similar) and then run the grouping. I tried to build on this query (Finding the length of a series in postgres) but I'm unable to group by consecutive days.
select max(id) as max_id, email, count(*) as length
from (
select *, row_number() over wa - row_number() over wp as grp
from began_playing_video
window
wp as (partition by email order by id desc),
wa as (order by id desc)
) s
group by email, grp
order by 1 desc
Any ideas on how I could do this in Postgres?
First create an aggregate function in order to count the adjacent dates within an ascending ordered list. The jsonb data type is used because it allows mixing various data types inside the same array:
CREATE OR REPLACE FUNCTION count_date(x jsonb, y jsonb, d date)
RETURNS jsonb LANGUAGE sql AS
$$
SELECT CASE
WHEN d IS NULL
THEN COALESCE(x,y)
ELSE
to_jsonb(d :: text)
|| CASE
WHEN COALESCE(x,y) = '[]' :: jsonb
THEN '[1]' :: jsonb
WHEN COALESCE(x->>0, y->>0) :: date + 1 = d :: date
THEN jsonb_set(COALESCE(x-0, y-0), '{-1}', to_jsonb(COALESCE(x->>-1, y->>-1) :: integer + 1))
ELSE COALESCE(x-0, y-0) || to_jsonb(1)
END
END ;
$$ ;
DROP AGGREGATE IF EXISTS count_date(jsonb, date) ;
CREATE AGGREGATE count_date(jsonb, date)
(
sfunc = count_date
, stype = jsonb
) ;
Then apply the count_date aggregate to your table, grouped by id:
WITH list AS (
SELECT id, email, count_date('[]', "timestamp" :: date ORDER BY "timestamp") AS count_list
FROM your_table
GROUP BY id, email
)
SELECT id, email, jsonb_array_elements(count_list - 0) AS length
FROM list
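A more conventional alternative to the custom aggregate is the gaps-and-islands technique: collapse each id's activity to distinct calendar days, subtract a per-id row number from the day, and group on the difference, which stays constant within a consecutive run. A sketch, assuming the UNION of the two tables is available as your_table:

```sql
SELECT id, email, COUNT(*) AS length
FROM (
    SELECT id, email, day,
           -- day minus its rank is the same value for every day in a consecutive run
           day - (ROW_NUMBER() OVER (PARTITION BY id ORDER BY day))::int AS grp
    FROM (
        SELECT DISTINCT id, email, "timestamp"::date AS day
        FROM your_table
    ) AS days
) AS runs
GROUP BY id, email, grp;
```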

Postgresql use more than one row as expression in sub query

As the title says, I need to create a query where I SELECT all items from one table and use those items as expressions in another query. Suppose I have the main table that looks like this:
main_table
-------------------------------------
id | name | location | //more columns
---|------|----------|---------------
1 | me | pluto | //
2 | them | mercury | //
3 | we | jupiter | //
And the sub query table looks like this:
some_table
---------------
id | item
---|-----------
1 | sub-col-1
2 | sub-col-2
3 | sub-col-3
where each item in some_table has a price which is in an amount_table like so:
amount_table
--------------
1 | 1000
2 | 2000
3 | 3000
So that the query returns results like this:
name | location | sub-col-1 | sub-col-2 | sub-col-3 |
----------------------------------------------------|
me | pluto | 1000 | | |
them | mercury | | 2000 | |
we | jupiter | | | 3000 |
My query currently looks like this
SELECT name, location, (SELECT item FROM some_table)
FROM main_table
INNER JOIN amount_table -- WHERE match the id's
But I'm running into the error more than one row returned by a subquery used as an expression
How can I formulate this query to return the desired results?
You should decide on the expected result.
To get a one-to-many relation:
SELECT name, location, some_table.item
FROM main_table
JOIN some_table ON true -- or on the ids if they match
INNER JOIN amount_table -- WHERE match the id's
To get one-to-one with all rows aggregated:
SELECT name, location, (SELECT array_agg(item) FROM some_table)
FROM main_table
INNER JOIN amount_table -- WHERE match the id's
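If the pivoted layout shown in the question is the real goal, conditional aggregation sidesteps the subquery-as-expression error entirely. A sketch, assuming amount_table has id and price columns and that the three tables share matching id values:

```sql
SELECT m.name, m.location,
       -- one column per item; rows that don't match a filter stay NULL
       MAX(a.price) FILTER (WHERE s.item = 'sub-col-1') AS "sub-col-1",
       MAX(a.price) FILTER (WHERE s.item = 'sub-col-2') AS "sub-col-2",
       MAX(a.price) FILTER (WHERE s.item = 'sub-col-3') AS "sub-col-3"
FROM main_table m
JOIN some_table s   ON s.id = m.id
JOIN amount_table a ON a.id = s.id
GROUP BY m.name, m.location;
```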

Resolve many to many relationship in SQL

I'm using Postgresql. Let's say I have 3 tables:
Classes
id | name
1 | Biology
2 | Math
Students
id | name
1 | John
2 | Jane
Student_Classes
id | student_id | class_id | registration_token
1 | 1 | 1 | abc
2 | 1 | 2 | def
3 | 2 | 1 | zxc
I want to obtain a result set like this:
Results
student_name | biology | math
John | abc | def
Jane | zxc | NULL
I can get this result set with this query:
SELECT
student.name as student_name,
biology.registration_token as biology,
math.registration_token as math
FROM
Students
LEFT JOIN (
SELECT student_id, registration_token FROM Student_Classes WHERE class_id = (
SELECT id FROM Classes WHERE name = 'Biology'
)
) AS biology
ON Students.id = biology.student_id
LEFT JOIN (
SELECT student_id, registration_token FROM Student_Classes WHERE class_id = (
SELECT id FROM Classes WHERE name = 'Math'
)
) AS math
ON Students.id = math.student_id
Is there a way to get this same result set without having a join statement for each class? With this solution, if I want to add a class, I need to add another join statement.
You can do this via the PostgreSQL tablefunc extension's crosstab function, but such presentation requirements may be handled better outside of SQL.
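Short of crosstab, conditional aggregation needs only a single join pass, though each class still has to be named once in the SELECT list; a sketch against the tables above:

```sql
SELECT st.name AS student_name,
       MAX(sc.registration_token) FILTER (WHERE c.name = 'Biology') AS biology,
       MAX(sc.registration_token) FILTER (WHERE c.name = 'Math')    AS math
FROM Students st
LEFT JOIN Student_Classes sc ON sc.student_id = st.id
LEFT JOIN Classes c          ON c.id = sc.class_id
GROUP BY st.id, st.name;
```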

How can I get the sum(value) on the latest gather_time per group(name,col1) in PostgreSQL?

Actually, I got a good answer about a similar issue in the thread below, but I need one more solution for a different data set.
How to get the latest 2 rows ( PostgreSQL )
The Data set has historical data, and I just want to get sum(value) for the group on the latest gather_time.
The final result should be as following:
name | col1 | gather_time | sum
-------+------+---------------------+-----
first | 100 | 2016-01-01 23:12:49 | 6
first | 200 | 2016-01-01 23:11:13 | 4
However, with the query below I can only see the data for the first group (first-100), meaning that there is no data for the second group (first-200).
The thing is that I need to get one row per group.
The number of groups can vary.
select name,col1,gather_time,sum(value)
from testtable
group by name,col1,gather_time
order by gather_time desc
limit 2;
name | col1 | gather_time | sum
-------+------+---------------------+-----
first | 100 | 2016-01-01 23:12:49 | 6
first | 100 | 2016-01-01 23:11:19 | 6
(2 rows)
Can you advice me to accomplish this requirement?
Data set
create table testtable
(
name varchar(30),
col1 varchar(30),
col2 varchar(30),
gather_time timestamp,
value integer
);
insert into testtable values('first','100','q1','2016-01-01 23:11:19',2);
insert into testtable values('first','100','q2','2016-01-01 23:11:19',2);
insert into testtable values('first','100','q3','2016-01-01 23:11:19',2);
insert into testtable values('first','200','t1','2016-01-01 23:11:13',2);
insert into testtable values('first','200','t2','2016-01-01 23:11:13',2);
insert into testtable values('first','100','q1','2016-01-01 23:11:11',2);
insert into testtable values('first','100','q1','2016-01-01 23:12:49',2);
insert into testtable values('first','100','q2','2016-01-01 23:12:49',2);
insert into testtable values('first','100','q3','2016-01-01 23:12:49',2);
select *
from testtable
order by name,col1,gather_time;
name | col1 | col2 | gather_time | value
-------+------+------+---------------------+-------
first | 100 | q1 | 2016-01-01 23:11:11 | 2
first | 100 | q2 | 2016-01-01 23:11:19 | 2
first | 100 | q3 | 2016-01-01 23:11:19 | 2
first | 100 | q1 | 2016-01-01 23:11:19 | 2
first | 100 | q3 | 2016-01-01 23:12:49 | 2
first | 100 | q1 | 2016-01-01 23:12:49 | 2
first | 100 | q2 | 2016-01-01 23:12:49 | 2
first | 200 | t2 | 2016-01-01 23:11:13 | 2
first | 200 | t1 | 2016-01-01 23:11:13 | 2
One option is to join your original table to a derived table containing only the latest gather_time for each name, col1 group. Then you can take the sum of the value column for each group to get the result set you want.
SELECT t1.name, t1.col1, t2.maxTime AS gather_time, SUM(t1.value) AS sum
FROM testtable t1
INNER JOIN
(
    SELECT name, col1, MAX(gather_time) AS maxTime
    FROM testtable
    GROUP BY name, col1
) t2
ON t1.name = t2.name AND t1.col1 = t2.col1 AND t1.gather_time = t2.maxTime
GROUP BY t1.name, t1.col1, t2.maxTime
If you wanted to use a subquery in the WHERE clause, as you attempted in your OP, to restrict to only records with the latest gather_time then you could try the following:
SELECT name, col1, gather_time, SUM(value) AS sum
FROM testtable t1
WHERE gather_time =
(
SELECT MAX(gather_time)
FROM testtable t2
WHERE t1.name = t2.name AND t1.col1 = t2.col1
)
GROUP BY name, col1
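DISTINCT ON offers another way to express this in Postgres: compute the per-gather_time sums with a window function and keep only the newest row of each name, col1 group; a sketch against the same testtable:

```sql
SELECT DISTINCT ON (name, col1)
       name, col1, gather_time,
       -- sum over all rows sharing this group's gather_time
       SUM(value) OVER (PARTITION BY name, col1, gather_time) AS sum
FROM testtable
ORDER BY name, col1, gather_time DESC;
```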