Table a has 100 rows with a name column.
Table b has the same structure as table a, but has 10 million rows.
I need a query that returns the name values of table a that are not in table b.
However, comparing the values of table a with the values of table b takes too long.
I want the query to complete within 5 seconds, but I don't know how.
Below are the methods I tried. Both tables' name columns have B-tree indexes.
1.
select
name
from
a
where
not exists (select
name
from
b
where
a.name = b.name
);
2.
select
a.name
from
a left outer join b
on a.name = b.name
where
b.name is null;
You want the following index on the B table:
CREATE INDEX name_idx ON b (name);
This should allow Postgres to rapidly look up each of the 100 names in the a table against the above index, avoiding a full table scan of the b table, which, as you are seeing, can be costly.
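If it is still slow after that, running the query through EXPLAIN can confirm whether the index is actually used; for example, with the left-join form from above (purely a diagnostic step, not a change to the query):
EXPLAIN (ANALYZE, BUFFERS)
select a.name
from a left outer join b on a.name = b.name
where b.name is null;
An Index Scan or Index Only Scan on name_idx in the plan means Postgres is probing the index 100 times instead of scanning all 10 million rows of b.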
In a Postgres DB, I have 2 varchar[] columns in 2 tables with a common ID that contain data as below:
I need to return rows where all values of t_col are present in f_col (with or without extra data).
Desired result:
I tried the below conditions in a self-join query.
select *
from t1 a, t2 b
where a.id = b.id
and not (a.f_col #> b.t_col)
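For reference, PostgreSQL has an array containment operator, @>, which reads "the left array contains all elements of the right array", so the condition can be written without negation. A sketch using the table and column names from the question (the exact schema is assumed):
-- rows where f_col contains every value of t_col (extra values in f_col are allowed)
select a.id, a.f_col, b.t_col
from t1 a
join t2 b on a.id = b.id
where a.f_col @> b.t_col;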
I have two tables A and B.
Both tables have the same number of columns.
Table A always contains all ids of Table B.
I need to fetch the row from Table B first; if it does not exist there, I have to fetch it from Table A.
I was trying to do this dynamically:
select
CASE
WHEN b.id is null THEN
a.*
ELSE
b.*
END
from A a
left join B b on b.id = a.id
I think this syntax is not correct.
Can someone suggest how to proceed?
It looks like you want to select all columns from table A except when a matching ID exists in table B. In that case you want to select all columns from table B.
That can be done with this query as long as the number and types of columns in both tables are compatible:
select * from a where not exists (select 1 from b where b.id = a.id)
union all
select * from b
If the number, types, or order of columns differs, you will need to explicitly specify the columns to return in each subquery.
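For example, assuming both tables have hypothetical columns id and name, the explicit version would be:
-- explicit column lists keep both branches of the union all aligned
select a.id, a.name
from a
where not exists (select 1 from b where b.id = a.id)
union all
select b.id, b.name
from b;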
I want to insert into a specific column of my table A, which belongs to DB 1, values coming from table B in my DB 2.
In table A I have a unique ID field called F6; table B has the same in a field named F68. Both fields are simply copies of each other, which gives me the opportunity to join on them.
So far so good. What I want now is to insert into field F110 of my table A the values from table B's F64; since I join on the "IDs", they should line up correctly.
All fields are of type VARCHAR.
INSERT INTO [D061_15018659].[dbo].[A](F110)
SELECT v.F64,v.F68
FROM [VFM6010061V960P].[dbo].[B] v LEFT JOIN
ON v.F68 = F6
The problem is that I get an error on "ON", and I can't figure out why.
Your select query provides 2 columns, so you need to concatenate the columns.
You also need to repeat table A in the join clause.
Try this:
INSERT INTO [D061_15018659].[dbo].[A] (F110)
SELECT
CONCAT(v.F64, v.F68) as theNewF110
FROM
[VFM6010061V960P].[dbo].[B] v
LEFT JOIN
[D061_15018659].[dbo].[A] w ON v.F68 = w.F6
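Alternatively, if the intent was to insert only the F64 values into F110, as the question's description suggests, the select list should contain just that one column; a sketch under that assumption:
-- insert only F64 into F110; table A is still repeated in the join for the ON clause
INSERT INTO [D061_15018659].[dbo].[A] (F110)
SELECT v.F64
FROM [VFM6010061V960P].[dbo].[B] v
LEFT JOIN [D061_15018659].[dbo].[A] w ON v.F68 = w.F6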
I'd like to create a table view and include a new column that is based on values from another table.
Many rows of table B belong to one row of table A.
Table B has a status column (with values like active, completed, etc) and a foreign key (for table A).
In the new view (for A) I want to create an active column (true / false) that is based on whether any related row in table B (one with a matching foreign key) has a status value of active.
If it is just about checking whether the value exists, then this should do the job:
select A.c1,
A.c2,
-- other columns from A
case when exists (select 1 from B_Table B where A.FK = B.FK and B.status = 'active')
then 'true'
else 'false'
end as Active
from A_Table A
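Since the question asks for a view, the same query can be wrapped in a CREATE VIEW; a minimal sketch reusing the hypothetical names above (the view name a_with_active is made up):
-- view over A_Table exposing the computed Active flag
create view a_with_active as
select A.c1,
A.c2,
case when exists (select 1 from B_Table B where A.FK = B.FK and B.status = 'active')
then 'true'
else 'false'
end as Active
from A_Table A;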
In PostgreSQL, I have N tables, each consisting of two columns: id and value. Within each table, id is a unique identifier and value is numeric.
I would like to join all the tables using id and, for each id, create a sum of the values from all the tables where the id is present (meaning the id may be present in only a subset of tables).
I was trying the following query:
SELECT COALESCE(a.id, b.id, c.id) AS id,
COALESCE(a.value,0) + COALESCE(b.value,0) + COALESCE(c.value,0) AS value
FROM
a
FULL OUTER JOIN
b
ON (a.id=b.id)
FULL OUTER JOIN
c
ON (b.id=c.id)
But it doesn't work for cases when the id is present in a and c, but not in b.
I suppose I would have to do some bracketing like:
SELECT COALESCE(x.id, c.id) AS id, x.value+c.value AS value
FROM
(SELECT COALESCE(a.id, b.id) AS id, a.value+b.value AS value
FROM
a
FULL OUTER JOIN
b
ON (a.id=b.id)
) AS x
FULL OUTER JOIN
c
ON (x.id = c.id)
It was only 3 tables and the code is ugly enough already, imho. Is there some elegant, systematic way to do the join for N tables, so I don't get lost in my code?
I would also like to point out that I did some simplifications in my example. Tables a, b, c, ..., are actually results of quite complex queries over several materialized views. But the syntactical problem remains the same.
I understood you need to sum the values from N tables and group them by id, correct?
For that I would do this:
select x.id, sum(x.value) as value
from (
select * from a
union all
select * from b
union all ...
) as x
group by x.id;
Since the N tables are composed of the same fields, you can union them all, creating one big result containing all the id-value tuples from all tables. Use union all, because plain union filters out duplicates!
Then just sum the values grouped by id.
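Since the tables are said to be results of more complex queries, selecting the two columns explicitly in each branch keeps the union all independent of any extra columns or their order; a sketch for three tables:
-- each branch contributes only (id, value); add one more union all per extra table
select x.id, sum(x.value) as value
from (
select id, value from a
union all
select id, value from b
union all
select id, value from c
) as x
group by x.id;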