How to pull out records based on an array of values - PostgreSQL

Suppose the following structure:
CREATE SCHEMA IF NOT EXISTS my_schema;
CREATE TABLE IF NOT EXISTS my_schema.user (
id SERIAL PRIMARY KEY,
tag_id BIGINT NOT NULL
);
CREATE TABLE IF NOT EXISTS my_schema.conversation (
id SERIAL PRIMARY KEY,
user_ids BIGINT[] NOT NULL
);
INSERT INTO my_schema.user VALUES
(1, 55555),
(2, 77777);
INSERT INTO my_schema.conversation VALUES
(1, '{1,2}');
I can pull out the my_schema.conversation records if I know the my_schema.user.id values:
SELECT *
FROM my_schema.conversation
WHERE user_ids @> '{1}'
The above works, but I need to use my_schema.user.tag_id instead of my_schema.user.id:
How can I do this?

You would have to join the two tables on the array values
SELECT *
FROM my_schema.user u
JOIN my_schema.conversation c
ON u.id = ANY(c.user_ids)
WHERE u.tag_id=55555;
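If you only need the conversation rows themselves (one row per conversation, even when several users carry the tag), an equivalent sketch with EXISTS, using the tables above:
-- Sketch: keep one row per conversation, matching any user with the given tag
SELECT c.*
FROM my_schema.conversation c
WHERE EXISTS (
    SELECT 1
    FROM my_schema.user u
    WHERE u.tag_id = 55555
      AND u.id = ANY (c.user_ids)
);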

Related

PostgreSQL not choosing rows grouping

I have a query. There is a construction like this example (online demo):
You will see the created_at field in the result. I have to query by the created_at field, so I have to include created_at in the SELECT. But I don't want the created_at field in the SELECT, because there are millions of records in the deposits table. How can I avoid this problem?
(Note: I have many tables to query like the "deposits" table; this is just a short example.)
create table payment_methods
(
    payment_method_id bigserial not null
        constraint payment_methods_pkey
            primary key
);
create table currencies_of_payment_methods
(
    copm_id bigserial not null
        constraint currencies_of_payment_methods_pkey
            primary key,
    payment_method_id integer not null
);
create table deposits
(
    deposit_id bigserial not null
        constraint deposits_pkey
            primary key,
    amount numeric(18,2) not null,
    copm_id integer not null,
    created_at timestamp(0)
);
INSERT INTO payment_methods (payment_method_id) VALUES (1);
INSERT INTO payment_methods (payment_method_id) VALUES (2);
INSERT INTO currencies_of_payment_methods (copm_id, payment_method_id) VALUES (1, 1);
INSERT INTO deposits (amount, copm_id, created_at) VALUES (100, 1, '2020-09-10 08:49:37');
INSERT INTO deposits (amount, copm_id, created_at) VALUES (200, 1, '2020-09-10 08:49:37');
INSERT INTO deposits (amount, copm_id, created_at) VALUES (40, 1, '2020-09-10 08:49:37');
Query:
SELECT payment_methods.payment_method_id,
       deposit_copm_id.deposit_copm_id,
       manuel_deposit_amount.manuel_deposit_amount,
       manuel_deposit_amount.created_at
FROM payment_methods
CROSS JOIN LATERAL (
    SELECT currencies_of_payment_methods.copm_id AS deposit_copm_id
    FROM currencies_of_payment_methods
    WHERE currencies_of_payment_methods.payment_method_id = payment_methods.payment_method_id
) deposit_copm_id
CROSS JOIN LATERAL (
    SELECT sum(deposits.amount) AS manuel_deposit_amount,
           array_agg(deposits.created_at) AS created_at
    FROM deposits
    WHERE deposits.copm_id = deposit_copm_id.deposit_copm_id
) manuel_deposit_amount
WHERE payment_methods.payment_method_id = 1

Inner join request to get the items available for the user

I have four tables:
CREATE TABLE t_users (
user_id varchar PRIMARY KEY,
user_email varchar
);
CREATE TABLE t_items (
item_id varchar PRIMARY KEY,
owner_id varchar not null references t_users(user_id),
title varchar
);
CREATE TABLE t_access_gropes (
access_group_id varchar PRIMARY KEY,
user_id varchar not null references t_users(user_id)
);
CREATE TABLE t_access_sets (
access_set_id varchar PRIMARY KEY,
item_id varchar not null references t_items(item_id),
access_group_id varchar not null references t_access_gropes(access_group_id)
);
With data:
INSERT INTO t_users VALUES ('us123', 'us123@email.com');
INSERT INTO t_users VALUES ('us456', 'us456@email.com');
INSERT INTO t_users VALUES ('us789', 'us789@email.com');
INSERT INTO t_items VALUES ('it123', 'us123', 'title1');
INSERT INTO t_items VALUES ('it456', 'us456', 'title2');
INSERT INTO t_items VALUES ('it678', 'us789', 'title3');
INSERT INTO t_items VALUES ('it323', 'us123', 'title4');
INSERT INTO t_items VALUES ('it764', 'us456', 'title5');
INSERT INTO t_items VALUES ('it826', 'us789', 'title6');
INSERT INTO t_items VALUES ('it568', 'us123', 'title7');
INSERT INTO t_items VALUES ('it038', 'us456', 'title8');
INSERT INTO t_items VALUES ('it728', 'us789', 'title9');
INSERT INTO t_access_gropes VALUES ('ag123', 'us123');
INSERT INTO t_access_gropes VALUES ('ag456', 'us456');
INSERT INTO t_access_gropes VALUES ('ag789', 'us789');
INSERT INTO t_access_sets VALUES ('as123', 'it123', 'ag123');
INSERT INTO t_access_sets VALUES ('as456', 'it456', 'ag123');
The t_access_gropes table forms groups of users.
The t_access_sets table forms security kits.
How do I write a query to get all the items available for the user? Something like:
select *
from t_items
inner join t_users on t_items.owner_id = t_users.user_id
inner join t_access_gropes on t_users.user_id = t_access_gropes.user_id
inner join t_access_sets on t_items.item_id = t_access_sets.item_id
where t_access_gropes.user_id = 'us123';
Thank you.
select u.user_id, u.user_email, i.item_id, i.title
from t_users u
join t_items i on i.owner_id = u.user_id
where u.user_id = 'us123'
I believe this is what you want for your exact request!
Otherwise, what you wrote is fine; however, I don't see the relevance of the tables you joined, since you directly connected your users table and items table, which is why you wouldn't need to join the other two tables (groups & sets). In some cases you will find a user_items table in between the users and items tables.
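If the goal is also to include items shared with the user through access groups (which t_access_sets appears to model), a possible sketch, assuming the schema above, combines owned items with granted ones:
-- Sketch: items the user owns, plus items granted to any access group the user belongs to
select i.item_id, i.title
from t_items i
where i.owner_id = 'us123'
union
select i.item_id, i.title
from t_items i
join t_access_sets s on s.item_id = i.item_id
join t_access_gropes g on g.access_group_id = s.access_group_id
where g.user_id = 'us123';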

How to load data as nested JSONB from non-JSONB postgres tables

I'm trying to construct an object for use from my postgres backend. The tables in question look something like this:
We have some Things that essentially act as rows for a matrix where the columns are Field_Columns. Field_Values are filled cells.
Create Table Platform_User (
serial id PRIMARY KEY
)
Create Table Things (
serial id PRIMARY KEY,
INTEGER user_id REFERENCES Platform_User(id)
)
Create Table Field_Columns (
serial id PRIMARY KEY,
TEXT name,
)
Create Table Field_Values (
INTEGER field_column_id REFERENCES Field_Columns(id),
INTEGER thing_id REFERENCES Things(id)
TEXT content,
PRIMARY_KEY(field_column_id, thing_id)
)
This would be simple if I were trying to load just the Field_Values for a single Thing as JSON, which would look like this:
SELECT JSONB_OBJECT(
    ARRAY(
        SELECT name
        FROM Field_Columns
        ORDER BY Field_Columns.id
    ),
    ARRAY(
        SELECT Field_Values.content
        FROM Field_Columns
        LEFT JOIN Field_Values ON Field_Values.field_column_id = Field_Columns.id
            AND Field_Values.thing_id = Things.id
        ORDER BY Field_Columns.id
    )
)
FROM Things
WHERE Things.id = $1
However, I'd like to construct the JSON object to look like this when returned. I want to get an object of all the Fields:Field_Values objects for the Things that a user owns:
{
  14: {
    'first field': 'asdf',
    'other field': ''
  },
  25: {
    'first field': 'qwer',
    'other field': 'dfgdsfg'
  },
  43: {
    'first field': '',
    'other field': ''
  }
}
My efforts to construct this query look like this, but I'm running into the problem that the JSONB object function doesn't want to construct an object where the value of a field is an object itself:
SELECT (
    JSONB_OBJECT(
        ARRAY(
            SELECT Things.id::TEXT
            FROM Things
            WHERE Things.user_id = $2
            ORDER BY Things.id
        ),
        ARRAY(
            SELECT JSONB_OBJECT(
                ARRAY(
                    SELECT name
                    FROM Field_Columns
                    ORDER BY Field_Columns.id
                ),
                ARRAY(
                    SELECT Field_Values.content
                    FROM Field_Columns
                    LEFT JOIN Field_Values ON Field_Values.field_column_id = Field_Columns.id
                        AND Field_Values.thing_id = Things.id
                    ORDER BY Field_Columns.id
                )
            )
            FROM Things
            WHERE Things.user_id = $2
            ORDER BY Things.id
        )
    )
) AS thing_fields
The specific error I get is function jsonb_object(text[], jsonb[]) does not exist. Is there a way to do this that doesn't involve copious text conversions and nonsense like that? Or will I just need to abandon trying to sort my data in the query and do it in my code instead?
Your DDL scripts are syntactically incorrect so I created these for you:
create table platform_users (
id int8 PRIMARY KEY
);
create table things (
id int8 PRIMARY KEY,
user_id int8 REFERENCES platform_users(id)
);
create table field_columns (
id int8 PRIMARY KEY,
name text
);
create table field_values (
field_column_id int8 REFERENCES field_columns(id),
thing_id int8 REFERENCES things(id),
content text,
PRIMARY KEY(field_column_id, thing_id)
);
I also created some scripts to populate the db:
insert into platform_users(id) values (1);
insert into platform_users(id) values (2);
insert into platform_users(id) values (3);
insert into platform_users(id) values (4);
insert into platform_users(id) values (5);
insert into things(id, user_id) values(1, 1);
insert into things(id, user_id) values(2, 1);
insert into things(id, user_id) values(3, 2);
insert into things(id, user_id) values(4, 2);
insert into field_columns(id, name) values(1, 'col1');
insert into field_columns(id, name) values(2, 'col2');
insert into field_values(field_column_id, thing_id, content) values(1, 1, 'thing1 val1');
insert into field_values(field_column_id, thing_id, content) values(2, 1, 'thing1 val2');
insert into field_values(field_column_id, thing_id, content) values(1, 2, 'thing2 val1');
insert into field_values(field_column_id, thing_id, content) values(2, 2, 'thing2 val2');
Please include such scripts next time when you ask for help, and make sure that your scripts are correct. This will reduce the work needed to answer your question.
You can get your jsonb value by aggregating the key/value pairs with jsonb_object_agg:
select
t.id,
jsonb_object_agg(fc.name, fv.content)
from
things t inner join
field_values fv on fv.thing_id = t.id inner join
field_columns fc on fv.field_column_id = fc.id
group by 1
The results look like this:
thing_id;jsonb_value
1;"{"col1": "thing1 val1", "col2": "thing1 val2"}"
2;"{"col1": "thing2 val1", "col2": "thing2 val2"}"
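If you want the whole structure keyed by thing id, as in the desired output above, a sketch (assuming the corrected tables) nests the aggregation one level further:
-- Sketch: aggregate each thing's fields first, then aggregate the things per user,
-- producing {thing_id: {column_name: content, ...}, ...} per user
select
    per_thing.user_id,
    jsonb_object_agg(per_thing.id, per_thing.fields) as things
from (
    select
        t.id,
        t.user_id,
        jsonb_object_agg(fc.name, fv.content) as fields
    from
        things t inner join
        field_values fv on fv.thing_id = t.id inner join
        field_columns fc on fv.field_column_id = fc.id
    group by t.id, t.user_id
) per_thing
group by per_thing.user_id;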

Selecting one specific data row (required), and 3 others (specific data row must be included)

I need to select a specific row and 2 other rows that are not that specific row (a total of 3). The specific row must always be included in the 3 results. How should I go about this? I think it can be done with a UNION ALL, but do I have another choice? Thanks all! :)
Here are my scripts to create the sample tables:
create table users (
user_id serial primary key,
user_name varchar(20) not null
);
create table result_table1 (
result_id serial primary key,
user_id int4 references users(user_id),
result_1 int4 not null
);
create table result_table2 (
result_id serial primary key,
user_id int4 references users(user_id),
result_2 int4 not null
);
insert into users (user_name) values ('Kevin'),('John'),('Batman'),('Someguy');
insert into result_table1 (user_id, result_1) values (1, 20),(2, 40),(3, 70),(4, 42);
insert into result_table2 (user_id, result_2) values (1, 4),(2, 3),(3, 7),(4, 5);
Here is my UNION query:
SELECT result_table1.user_id,
result_1,
result_2
FROM result_table1
INNER JOIN (
SELECT user_id
FROM users
) users
ON users.user_id = result_table1.user_id
INNER JOIN (
SELECT result_table2.user_id,
result_2
FROM result_table2
) result_table2
ON result_table2.user_id = result_table1.user_id
WHERE users.user_id = 1
UNION ALL
SELECT result_table1.user_id,
result_1,
result_2
FROM result_table1
INNER JOIN (
SELECT user_id
FROM users
) users
ON users.user_id = result_table1.user_id
INNER JOIN (
SELECT result_table2.user_id,
result_2
FROM result_table2
) result_table2
ON result_table2.user_id = result_table1.user_id
WHERE users.user_id != 1
LIMIT 3;
Are there any options other than a UNION? The query works and does what I want for now, but will it always include user_id = 1 if I had a larger set of rows (assume that user_id = 1 will always be there)? :(
Thank you all! :)
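One alternative sketch, assuming user_id = 1 always has rows in both result tables: sort the required user to the front so the LIMIT is deterministic.
-- Sketch: put user_id = 1 first (false sorts before true in PostgreSQL), then take 3 rows
SELECT r1.user_id, r1.result_1, r2.result_2
FROM result_table1 r1
INNER JOIN result_table2 r2 ON r2.user_id = r1.user_id
ORDER BY (r1.user_id <> 1), r1.user_id
LIMIT 3;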

How to use WHILE EXISTS in a loop

CREATE TABLE [CandidateDocsAssociation](
[Row_ID] [bigint] IDENTITY(1,1) NOT NULL,
[Doc_ID] [bigint] NOT NULL,
[Candidate_ID] [bigint] NOT NULL
) ON [PRIMARY]
GO
I have the above table structure to store the association between documents and candidates. Row_ID is an auto generated primary key. Doc_ID is a foreign key referencing the documents table. Candidate_ID is also a foreign key referencing the Candidates table.
A candidate can be associated with more than one document and one document can be associated with multiple candidates.
What I want to achieve is to insert a default common document (Doc_ID) for all candidates (DISTINCT) if a Candidate_ID row with a Doc_ID of 2 does not already exist.
Below is what I'm trying, but it isn't working:
WHILE EXISTS (SELECT DISTINCT Candidate_ID from CandidateDocsAssociation
WHERE Doc_ID <> (SELECT Doc_ID FROM Doc_Table WHERE Doc_Name = N'Default'))
BEGIN
INSERT CandidateDocsAssociation (Doc_ID, Candidate_ID) VALUES ((SELECT Doc_ID FROM Doc_Table WHERE Doc_Name = N'Default'),Candidate_ID)
END
GO
Forget the loop and do a set-based operation. Assuming you have a Candidates table:
INSERT INTO CandidateDocsAssociation (Doc_ID, Candidate_ID)
SELECT dt.Doc_ID, c.Candidate_ID
FROM Doc_Table dt
CROSS JOIN Candidates c
WHERE dt.Doc_Name = N'Default'
AND NOT EXISTS(SELECT * FROM CandidateDocsAssociation cda
WHERE cda.Candidate_ID=c.Candidate_ID
AND cda.Doc_ID=dt.Doc_ID)
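If there is no separate Candidates table, a sketch of the same set-based idea, assuming the distinct Candidate_ID values already present in the association table are the population you want:
-- Sketch: draw the candidate list from the association table itself
INSERT INTO CandidateDocsAssociation (Doc_ID, Candidate_ID)
SELECT dt.Doc_ID, c.Candidate_ID
FROM Doc_Table dt
CROSS JOIN (SELECT DISTINCT Candidate_ID FROM CandidateDocsAssociation) c
WHERE dt.Doc_Name = N'Default'
AND NOT EXISTS(SELECT * FROM CandidateDocsAssociation cda
               WHERE cda.Candidate_ID=c.Candidate_ID
               AND cda.Doc_ID=dt.Doc_ID)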
Try this (using a NOT IN clause):
WHILE EXISTS (SELECT DISTINCT Candidate_ID from CandidateDocsAssociation
WHERE Doc_ID NOT IN (SELECT Doc_ID FROM Doc_Table WHERE Doc_Name = N'Default'))
BEGIN
INSERT CandidateDocsAssociation (Doc_ID, Candidate_ID) VALUES ((SELECT Doc_ID FROM Doc_Table WHERE Doc_Name = N'Default'),Candidate_ID)
END
GO