Add a new column in table 1 by comparing with table 2 - PostgreSQL

Table 1 has columns (with rows) like:

date        product     comment
12.01.2014  1201/sm/pb  yes
13.01.2014  1202/sa/pa  no
14.01.2014  1215/ja/pc  yes

Table 2 has columns (with rows) like:

certificate  name
1201         pencil
1202         pen
1215         parker

I want to add one column (name) to table 1:

date        product     comment  name
12.01.2014  1201/sm/pb  yes      pencil
13.01.2014  1202/sa/pa  no       pen
14.01.2014  1215/ja/pc  yes      parker

Can someone please tell me how I can add a column whose rows satisfy the condition (product.table1 = certificate.table2 ==> name in table1)?
Thank you.

You need to join the tables on the prefix of the product column (the part before the first '/'):
select t1.date, t1.product, t1.comment, t2.name
from table_1 t1
join table_2 t2 on left(t1.product, strpos(t1.product, '/') - 1) = t2.certificate;
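
If you actually want to store the name in table 1 rather than just select it, a minimal sketch, assuming the certificate column is text (cast the extracted prefix if it is numeric); split_part is a PostgreSQL alternative to left/strpos:
-- add the new column to table 1 (assumes it does not already exist)
alter table table_1 add column name text;
-- fill it by matching the certificate prefix of product against table 2
update table_1 t1
set name = t2.name
from table_2 t2
where split_part(t1.product, '/', 1) = t2.certificate;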

Related

Remove overlapping entries from a table KDB+/Q

I have two tables, called table1 and table2. Both have date and id as columns, and table2 is a small subset of table1 entries. How do I remove all entries that exist in table2 from table1 by matching up date and id?
select from table1 where not([]date;id)in table2
/ Begin with example input of 10 entries
q)t1:([]date:.z.d+til 10;id:til 10;c:til 10)
/ Pick 3 random entries to form key table t2
q)t2:3?delete c from t1
/ Key t1 by date and id and drop entries matching t2
q)t2 _ `date`id xkey t1

Insert into Table using JOIN T-SQL

I want to insert into a specific column of my table A, which belongs to DB 1, from table B in my DB 2.
In table A I have a unique ID field called F6; the same goes for table B, where the field is named F68. Both fields hold the same values (one is simply a copy of the other), which gives me the opportunity to join on them.
So far so good. What I want now is to insert into field F110 of my table A the values from F64 of table B; since I join on the IDs, the values should line up correctly.
All fields are of type VARCHAR.
INSERT INTO [D061_15018659].[dbo].[A](F110)
SELECT v.F64,v.F68
FROM [VFM6010061V960P].[dbo].[B] v LEFT JOIN
ON v.F68 = F6
The problem is that I get an error on "ON", and I can't figure out why.
Your select query provides 2 columns, so you need to concatenate them into one. You also need to repeat table A in the join clause (note that T-SQL uses + for string concatenation rather than ||).
Try this:
INSERT INTO [D061_15018659].[dbo].[A] (F110)
SELECT
    v.F64 + v.F68 AS theNewF110
FROM
    [VFM6010061V960P].[dbo].[B] v
    LEFT JOIN [D061_15018659].[dbo].[A] w ON v.F68 = w.F6
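
If the goal is actually to fill F110 on rows that already exist in table A (rather than insert brand-new rows), an UPDATE with the same join may be closer to what you want; a sketch, assuming F6 and F68 match one-to-one:
-- update existing rows of A with the matching F64 value from B
UPDATE w
SET    w.F110 = v.F64
FROM   [D061_15018659].[dbo].[A] w
JOIN   [VFM6010061V960P].[dbo].[B] v ON v.F68 = w.F6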

How to add a sum of a column where the id is matching to another column in SQL

My table has columns called id, totalsales, and p_sales. p_sales is an empty column that I added and I want to fill in this column with the sum of totalsales where the id is matching.
For example, if the first row has an id of 2, I want the p_sales to be filled with the sum of all totalsales with the id of 2. In the next row, if the id is 6 I want the p_sales to be filled with the sum of all totalsales with an id of 6 and so on so forth.
How do I achieve this using SQL?
Try this, changing dodge to the name of your table; the correlated subquery sums totalsales over every row that shares the same id:
update dodge t
set p_sales = (select sum(t2.totalsales)
               from dodge t2
               where t2.id = t.id);
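
If you only need the value at query time rather than stored in the table, a window function avoids the correlated subquery entirely; a minimal sketch against the same table:
select id,
       totalsales,
       sum(totalsales) over (partition by id) as p_sales
from dodge;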

Simple SELECT, but adding JOIN returns too many rows

The query below returns 9,817 records. Now, I want to SELECT one more field from another table. See the 2 lines that are commented out, where I've simply selected this additional field and added a JOIN statement to bind this new columns. With these lines added, the query now returns 649,200 records and I can't figure out why! I guess something is wrong with my WHERE criteria in conjunction with the JOIN statement. Please help, thanks.
SELECT DISTINCT dbo.IMPORT_DOCUMENTS.ITEMID, BEGDOC, BATCHID
--, dbo.CATEGORY_COLLECTION_CATEGORY_RESULTS.CATEGORY_ID
FROM IMPORT_DOCUMENTS
--JOIN dbo.CATEGORY_COLLECTION_CATEGORY_RESULTS ON
dbo.CATEGORY_COLLECTION_CATEGORY_RESULTS.ITEMID = dbo.IMPORT_DOCUMENTS.ITEMID
WHERE (BATCHID LIKE 'IC0%' OR BATCHID LIKE 'LP0%')
AND dbo.IMPORT_DOCUMENTS.ITEMID IN
(SELECT dbo.CATEGORY_COLLECTION_CATEGORY_RESULTS.ITEMID FROM
CATEGORY_COLLECTION_CATEGORY_RESULTS
WHERE SCORE >= .7 AND SCORE <= .75 AND CATEGORY_ID IN(
SELECT CATEGORY_ID FROM CATEGORY_COLLECTION_CATS WHERE COLLECTION_ID IN (11,16))
AND Sample_Id > 0)
AND dbo.IMPORT_DOCUMENTS.ITEMID NOT IN
(SELECT ASSIGNMENT_FOLDER_DOCUMENTS.Item_Id FROM ASSIGNMENT_FOLDER_DOCUMENTS)
One possible reason is that one of your tables contains data at a lower level than your join key. For example, there may be multiple records per ITEMID in CATEGORY_COLLECTION_CATEGORY_RESULTS, so the same ITEMID is repeated several times and each repeat multiplies the joined rows. Without knowing the data, try running the modified query below; if the output is not what you're looking for, convert it into a SELECT within a SELECT.
Hope this helps.
Try this SQL:
SELECT DISTINCT a.ITEMID, a.BEGDOC, a.BATCHID, b.CATEGORY_ID
FROM IMPORT_DOCUMENTS a
JOIN (SELECT DISTINCT ITEMID, CATEGORY_ID
      FROM CATEGORY_COLLECTION_CATEGORY_RESULTS
      WHERE SCORE >= .7 AND SCORE <= .75
        AND CATEGORY_ID IN (SELECT DISTINCT CATEGORY_ID
                            FROM CATEGORY_COLLECTION_CATS
                            WHERE COLLECTION_ID IN (11,16))
        AND Sample_Id > 0) b ON a.ITEMID = b.ITEMID
WHERE (a.BATCHID LIKE 'IC0%' OR a.BATCHID LIKE 'LP0%')
  AND a.ITEMID NOT IN (SELECT DISTINCT Item_Id FROM ASSIGNMENT_FOLDER_DOCUMENTS)
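
To confirm that row multiplication is really the cause, a quick check of how many result rows each ITEMID has in the joined table may help; a sketch:
SELECT ITEMID, COUNT(*) AS result_rows
FROM CATEGORY_COLLECTION_CATEGORY_RESULTS
GROUP BY ITEMID
HAVING COUNT(*) > 1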

Removing rows with duplicate secondary values

This one is probably a softball question for any DBA, but here's my challenge. I have a table that looks like this:
id   parent_id   active
---  ---------   ------
1    5           y
2    6           y
3    6           y
4    6           y
5    7           y
6    8           y
The way the system I am working on operates, it should only have one active row per parent. Thus, it'd be ok if ID #2 and #3 were active = 'n'.
I need to run a query that finds all rows that have duplicate parent_ids and are active, and flip all but the highest ID to active = 'n'.
Can this be done in a single query, or do I have to write a script for it? (Using Postgresql, btw)
ANSI style:
update table set
active = 'n'
where
id <> (select max(id) from table t1 where t1.parent_id = table.parent_id)
Postgres specific (in an UPDATE ... FROM, the join back to the target table goes in the WHERE clause):
update table t1 set
  active = 'n'
from
  (select max(id) as top_id, parent_id from table group by parent_id) t2
where
  t1.parent_id = t2.parent_id
  and t1.id < t2.top_id;
The second one is probably a bit faster, since it's not doing a correlated subquery for each row. Enjoy!
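
Either way, a quick follow-up check that every parent is now left with at most one active row might look like this (a sketch, using the same placeholder table name as above):
select parent_id, count(*) as active_rows
from table
where active = 'y'
group by parent_id
having count(*) > 1;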