I have a table like this:
id | ciaps
1 | a|b|c
And I have a second table like:
cod | desc
a | item a
b | item b
c | item c
I need code to join these tables like:
id | ciaps
1 | item a|item b|item c
Use array_agg to collect the values and array_to_string to concatenate them with '|' into the expected format.
-- PostgreSQL (v11)
SELECT t1.id, t2.descr ciaps
FROM test1 t1
INNER JOIN (SELECT array_to_string(array_agg(cod), '|') cod
, array_to_string(array_agg(descr), '|') descr
FROM test2) t2
ON t1.ciaps = t2.cod;
Please check the fiddle at https://dbfiddle.uk/?rdbms=postgres_11&fiddle=6fffc7f1da6a02a48018b3691c99ad17
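If each row's ciaps can hold an arbitrary combination of codes, the same array_agg / array_to_string idea can be applied per row by splitting ciaps first. A hedged sketch, assuming the test1/test2 tables above with the description column named descr as in the query:
-- Split ciaps per row, look up each code, then rebuild the '|' separated string
SELECT t1.id,
       array_to_string(array_agg(t2.descr ORDER BY c.ord), '|') AS ciaps
FROM test1 t1
CROSS JOIN LATERAL unnest(string_to_array(t1.ciaps, '|'))
     WITH ORDINALITY AS c(cod, ord)
JOIN test2 t2 ON t2.cod = c.cod
GROUP BY t1.id;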
I have two tables
The first table contains three text fields (username, email, num); the second has only one column, a random birth_date DATE.
I need to merge the tables without a join condition.
For example
first table:
+----------+--------------+-----------+
| username | email | num |
+----------+--------------+-----------+
| 'user1' | 'user1#mail' | '+794949' |
| 'user2' | 'user2#mail' | '+799999' |
+----------+--------------+-----------+
second table:
+--------------+
| birth_date |
+--------------+
| '2001-01-01' |
| '2002-02-02' |
+--------------+
And I need result like
+----------+------------+-------------+--------------+
| username | email | num | birth_date |
+----------+------------+-------------+--------------+
| 'user1' | 'us1#mail' | '+7979797' | '2001-01-01' |
| 'user2' | 'us2#mail' | '+79898998' | '2002-02-02' |
+----------+------------+-------------+--------------+
I need the result table to have 100 rows too.
I tried different JOINs, but there is no join condition here.
Sure there is a join condition, about the simplest there is: join on true, or a cross join. Either is the basic way to merge tables without a condition. However, this alone does not give what you want, as it generates a result set of 10k rows, but you can then use limit:
select *
from table1
join table2 on true
order by random()
limit 100;
select *
from table1
cross join table2
order by random()
limit 100;
There is another option, which I think may be closer to what you want: assign a value to each row of each table, then join on this assigned value:
select <column list>
from (select *, row_number() over() rn from table1) t1
join (select *, row_number() over() rn from table2) t2
on (t1.rn = t2.rn);
To eliminate the assigned value you must specifically list each column desired in the result. But that is the way it should be done anyway.
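For example, with the columns from the question (a hedged sketch; the table and column names are assumed from the example above):
select t1.username, t1.email, t1.num, t2.birth_date
from (select *, row_number() over() rn from table1) t1
join (select *, row_number() over() rn from table2) t2
  on t1.rn = t2.rn;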
See demo here. (The demo uses just 3 rows instead of 100.)
This is the DB query:
select users.Id,jsonb_object_keys(orders.metadata::jsonb) from users left join orders on users.userId=orders.userId where users.userId=2;
users table
------------------
| userId | name  |
| 1      | john  |
| 2      | doe   |
------------------

orders table
--------------------------------------------------------
| userId | orderId | metadata                           |
| 1      | 1       | {"orderName":"chess","quantity":1} |
| 1      | 2       | {"orderName":"cube","quantity":1}  |
--------------------------------------------------------
Why are no rows returned by the query?
Very nice and tricky question. To achieve what you want, try the query below:
select
t1.userid,
t2.keys
from
users t1
left join (select userid, orderid, jsonb_object_keys(metadata) as keys from orders) t2
on t1.userid=t2.userid
Your query seems correct, but there is a catch. When you left join both tables without jsonb_object_keys(metadata), it works as you expect. But jsonb_object_keys() is a set-returning function: used in the SELECT list, it produces a set of rows for each input row, which is joined back to the rest of the columns internally. For the rows where metadata is NULL it produces no rows at all, which is why the rows with a NULL value in the second column are removed.
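A minimal sketch of that behaviour (not from the original post; the inline VALUES list is just an illustration): a set-returning function in the SELECT list emits zero rows for a NULL input, so the whole input row disappears.
SELECT x, jsonb_object_keys(j) AS k
FROM (VALUES (1, '{"a":1}'::jsonb), (2, NULL)) AS v(x, j);
-- only the row with x = 1 appears in the output; x = 2 produces no rows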
You should left join to the result of the jsonb_object_keys() call:
select users.userid, meta.*
from users
left join orders on users.userid = orders.userid
left join jsonb_object_keys(orders.metadata::jsonb) as meta on true
where users.userid = 2;
create table dupt(cat varchar(10), num int);

insert into dupt(cat, num) values
  ('A',1),('A',2),('A',3),
  ('B',1),('B',2),
  ('C',1),('C',2),('C',3),
  ('D',1),('D',2),('D',4),
  ('E',1),('E',2),
  ('F',1),('F',2);
I need to create a report which finds duplicate data. From the sample data above, the report needs to show that the data for cat A is duplicated by cat C (note the num values and the number of records), and that cat B is duplicated by cats E and F. What is the best way to show that?
Example output
-------------
|cat | dupby|
-------------
| A | C |
| B | E, F |
-------------
Updated: switched to traditional set matching using a common table expression, and applied the stuff() with select ... for xml path ('') method of string concatenation only to the final results:
;with cte as (
select *
, cnt = count(*) over (partition by cat)
from dupt
)
, duplicates as (
select
x.cat
, dup_cat = x2.cat
from cte as x
inner join cte as x2
on x.cat < x2.cat
and x.num = x2.num
and x.cnt = x2.cnt
group by x.cat, x2.cat, x.cnt
having count(*) = x.cnt
)
select
d.cat
, dupby = stuff((
select ', '+i.dup_cat
from duplicates i
where i.cat = d.cat
for xml path (''), type).value('.','varchar(8000)')
,1,2,'')
from duplicates d
where not exists (
select 1
from duplicates i
where d.cat = i.dup_cat
)
group by d.cat
rextester demo: http://rextester.com/KHAG98718
returns:
+-----+-------+
| cat | dupby |
+-----+-------+
| A | C |
| B | E, F |
+-----+-------+
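On SQL Server 2017 or later, the stuff() ... for xml path ('') step could arguably be replaced with string_agg(). A hedged sketch, reusing the cte and duplicates expressions from above:
select
    d.cat
  , dupby = string_agg(d.dup_cat, ', ')
from duplicates d
where not exists (
    select 1
    from duplicates i
    where d.cat = i.dup_cat
    )
group by d.cat;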
I'm pretty new to SQL and I can't get this to work. I've got these two tables below:
Table A             Table B
_________________   _________________
| A | 2015-10-4 |   | B | 2015-11-6 |
| B | 2015-11-4 |   | C | 2015-05-4 |
| C | 2015-05-6 |   | D | 2015-05-8 |
| D | 2015-05-7 |   | C | 2015-05-5 |
I'm trying to write a stored procedure that will get all letters from table B that have a date less than the date in table A, and any letter that doesn't exist in table B.
This is what I have so far
SELECT *
FROM A q JOIN
B c ON q.Letter = c.Letter AND q.Date > c.Date OR c.Letter IS NULL
This returns C, but I can't get it to return A as well. Joining and comparing tables is still confusing to me.
I do not want duplicate rows; the results I would be expecting are:
| A | 2015-10-4
| C | 2015-05-6
EDIT
I'm running into an issue now where if I have a case like this
Table A             Table B
_________________   _________________
| A | 2015-10-4 |   | B | 2015-11-6 |
| B | 2015-11-4 |   | C | 2015-05-4 |
| C | 2015-05-6 |   | D | 2015-05-8 |
| D | 2015-05-7 |   | C | 2015-05-5 |
                    | C | 2015-05-7 |
It will still return C for some reason. Using a.date > max(b.date) doesn't work because max can't be used that way, and I have to assume the max date can be anywhere in table B.
So now my new results would be
| A | 2015-10-4
But I am getting A and C still.
You should use a LEFT JOIN:
SELECT DISTINCT A.letter, A.[Date]
FROM dbo.TableA A
LEFT JOIN dbo.TableB B
ON A.letter = B.letter
WHERE B.[Date] < A.[Date] OR B.letter IS NULL;
UPDATE
You should have explained your requirements as: "get all letters from table B in which every date is less than...."
SELECT DISTINCT A.letter, A.[Date]
FROM dbo.TableA A
LEFT JOIN (SELECT letter, MAX([Date]) [Date]
FROM dbo.TableB
GROUP BY letter) B
ON A.letter = B.letter
WHERE B.[Date] < A.[Date] OR B.letter IS NULL;
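The same requirement can also be expressed with NOT EXISTS, which covers both cases (no matching letter in B, or no B date on or after A's date) in one predicate. A hedged sketch using the same table names:
SELECT A.letter, A.[Date]
FROM dbo.TableA A
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.TableB B
    WHERE B.letter = A.letter
      AND B.[Date] >= A.[Date]
);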
I would go for a UNION / UNION ALL, so that you get the result subset for the first condition plus the one for the second.
Something similar to this should do the job:
sqlite> create table A (letter, my_date);
sqlite> create table B (letter, my_date);
sqlite> insert into A values ('A', '2015-10-04');
sqlite> insert into A values ('B', '2015-11-04');
sqlite> insert into A values ('C', '2015-05-06');
sqlite> insert into A values ('D', '2015-05-07');
sqlite> insert into B values ('B', '2015-11-06');
sqlite> insert into B values ('C', '2015-05-04');
sqlite> insert into B values ('D', '2015-05-08');
sqlite> insert into B values ('C', '2015-05-05');
sqlite> select B.* from A, B where A.letter = B.letter and B.my_date < A.my_date UNION ALL select A.* from A where not exists (select 1 from B where B.letter=A.letter);
letter my_date
---------- ----------
C 2015-05-04
C 2015-05-05
A 2015-10-04
I have a GROUP BY query which fetches some records. What if I wish to find the other column details for those records?
Suppose I have a query as follows: Select id, max(date) from records group by id;
to fetch the most recent entry in the table.
I wish to fetch another column representing those records.
I want to do something like this (this incorrect query is just an example):
Select type from (Select id, max(date) from records group by id) but here type doesn't exist in the inner query.
I am not able to define the question in a simpler manner. I apologise for that.
Any help is appreciated.
EDIT :
Column | Type | Modifiers
--------+-----------------------+-----------
id | integer |
rdate | date |
type | character varying(20) |
Sample Data :
id | rdate | type
----+------------+------
1 | 2013-11-03 | E1
1 | 2013-12-12 | E1
2 | 2013-12-12 | A3
3 | 2014-01-11 | B2
1 | 2014-01-15 | A1
4 | 2013-12-23 | C1
5 | 2014-01-05 | C
7 | 2013-12-20 | D
8 | 2013-12-20 | D
9 | 2013-12-23 | A1
Meanwhile I was trying something like this (I'm no good at SQL): select type from records as r1 inner join (Select id, max(rdate) from records group by id) r2 on r1.rdate = r2.rdate;
or
select type from records as r1 ,(Select id,max(rdate) from records group by id) r2 inner join r1 on r1.rdate = r2.rdate ;
You can easily do this with a window function:
SELECT id, rdate, type
FROM (
    SELECT id, rdate, type, rank() OVER (PARTITION BY id ORDER BY rdate DESC) rnk
    FROM records
) foo
WHERE rnk = 1
ORDER BY id;
The window definition OVER (PARTITION BY id ORDER BY rdate DESC) takes all records with the same id value, then sorts them from most recent to least recent rdate and assigns a rank to each row. The rank of 1 is the most recent, so it is equivalent to max(rdate).
If I've understood the question right, then this should work (or at least get you something you can work with):
SELECT
b.id, b.maxdate, a.type
FROM
records a -- this is the records table, where you'll get the type
INNER JOIN -- now join it to the group by query
(select id, max(rdate) as maxdate FROM records GROUP BY id) b
ON -- join on both rdate and id, otherwise you'll get lots of duplicates
b.id = a.id
AND b.maxdate = a.rdate
Note that if you have records with different types for the same id and rdate combination you'll get duplicates.
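If those duplicates are a concern and the database is PostgreSQL (the \d-style table description above suggests it is), DISTINCT ON is another common way to keep exactly one row per id. A hedged sketch:
SELECT DISTINCT ON (id) id, rdate, type
FROM records
ORDER BY id, rdate DESC;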