How to create a table that merges two tables sharing the same primary key, without duplicates and keeping all the data - PostgreSQL

Postgres:
create table stock(item_id int primary key, balance float);
insert into stock values(10,2200);
insert into stock values(20,1900);
select * from stock;
create table buy(item_id int primary key, volume float);
insert into buy values(10,1000);
insert into buy values(30,300);
select * from buy;
results:
 item_id | balance
---------+---------
      10 |    2200
      20 |    1900
(2 rows)

 item_id | volume
---------+--------
      10 |   1000
      30 |    300
(2 rows)
Now I want another table that includes both tables' data: a table with 3 rows of data, item_ids 10, 20 and 30, and no duplication.
I need a query for this, either by merge or by join.

I'm guessing:
- that you really want a view rather than a table
- that the values in the 'buy' table are supposed to be deducted from the 'stock'
So here's what I think you are after:
create view v_current_stock as
select item_id, sum(balance) as balance
from ( select item_id, balance from stock
       union all
       select item_id, -volume from buy ) as t
group by item_id;
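With the sample data above, the view nets each buy off the stock balance (item 10: 2200 - 1000 = 1200; item 30 comes out negative since it was never in stock):

select * from v_current_stock;

 item_id | balance
---------+---------
      10 |    1200
      20 |    1900
      30 |    -300
(3 rows)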
EDIT: seems like my guesswork was a bit off (see comments). Perhaps you are looking for a full join:
create view v as
select * from stock full join buy using (item_id);
select * from v;
 item_id | balance | volume
---------+---------+--------
      10 |    2200 |   1000
      20 |    1900 |
      30 |         |    300
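If the intent really is to net the buys off the stock while also keeping items that appear in only one table, both ideas can be combined. A minimal sketch, assuming a missing row should count as zero:

create view v_net_stock as
select item_id,
       coalesce(balance, 0) - coalesce(volume, 0) as net_balance
from stock
full join buy using (item_id);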

You can use an insert into ... select syntax:
create table mytable(item_id int primary key, balance float, volume float);
insert into mytable
select distinct stock.item_id, balance, volume
from stock
inner join buy on buy.item_id = stock.item_id;
You can use a different type of join if needed (left join or full join). In your case, I think you need a full join, but since I'm not sure I'll stick with the inner join in the example.
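For reference, a sketch of the full-join variant, which also keeps the item_ids (20 and 30) that exist in only one of the two tables; the missing values simply come out as NULL:

insert into mytable
select item_id, balance, volume
from stock
full join buy using (item_id);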

Related

delete duplicates in a table and update references

I have a table with an id column. We recently added a new field holding unique identifiers calculated from an external source, which made us realize we actually have duplicates in the database:
Main Table
id | unique_id | ...
---|-----------|----
 4 | A         |
 5 | A         |
 6 | B         |
We can see: 5 is actually a duplicate of 4, as they both have the same unique_id.
Now this needs to be cleaned up.
I sadly cannot simply delete those duplicates (5), as other tables depend on them:
Other Table (OtherTable.main_id REFERENCES MainTable.id)
id | main_id | ...
---|---------|-----
 1 | 4       | Blah
 2 | 5       |
 3 | 6       |
Now I have to clean up the duplicates, e.g. here:
UPDATE OtherTable SET main_id = 4 WHERE main_id = 5
How can I do that in an efficient update?
I tried to simply update every reference to point to the first row with the same unique_id; however, that didn't complete within a day:
UPDATE "OtherTable"
SET "main_id" = (SELECT "id"
                 FROM "MainTable"
                 WHERE "unique_id" = (SELECT "unique_id"
                                      FROM "MainTable"
                                      WHERE "id" = "OtherTable"."main_id")
                 LIMIT 1)
If it helps, the MainTable contains about 750,000 entries, the OtherTable contains 12,000,000 rows.
That's probably because the triple-nested select is quite inefficient.
For the simpler part of deleting the duplicates (once I'm done re-pointing the references to the first row of each kind), I found this query to work swiftly enough:
DELETE FROM MainTable
WHERE id IN (SELECT id
             FROM (SELECT id,
                          ROW_NUMBER() OVER (PARTITION BY unique_id
                                             ORDER BY id) AS row_num
                   FROM MainTable) t
             WHERE t.row_num > 1);
However I need a way to update the references to the non-deleted ones of the duplicates.
Instead of UPDATE with a nested query, I'd suggest using UPDATE FROM for a join, and the same window function as in your DELETE statement:
UPDATE "OtherTable" AS other
SET main_id = main.min_id
FROM (SELECT
id,
first_value(id) OVER (PARTITION BY unique_id ORDER BY id) AS min_id
FROM "MainTable"
) AS main
WHERE main.id = other.main_id
AND main.id <> main.min_id
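Given the table sizes (12 million rows in OtherTable), supporting indexes matter as much as the query shape. A hedged sketch; the index names are my own, and MainTable's primary key index on id is assumed to exist already:

-- the UPDATE joins on OtherTable.main_id, and the window function
-- scans MainTable per unique_id, so both of these help:
CREATE INDEX IF NOT EXISTS othertable_main_id_idx ON "OtherTable" (main_id);
CREATE INDEX IF NOT EXISTS maintable_unique_id_idx ON "MainTable" (unique_id, id);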

Postgresql track serial by ID in another column

So I am trying to create a database that can store videos for products, and I intend to add a few million of them. So obviously I want the performance to be as good as possible.
I wanted to achieve the following:
BIGINT     | SMALLSERIAL | VARCHAR(30)
product_id | video_id    | video_hash
         1 |           1 | Dkfjoie124
         1 |           2 | POoieqlgkQ
         1 |           3 | Xd2t9dakcx
         2 |           1 | Df2459Afdw
However, when I insert a new video for a product:
INSERT INTO TABLE (product_id, video_hash) VALUES (2, 'DSpewirncS')
I want the following to happen:
BIGINT     | SMALLSERIAL | VARCHAR(30)
product_id | video_id    | video_hash
         1 |           1 | Dkfjoie124
         1 |           2 | POoieqlgkQ
         1 |           3 | Xd2t9dakcx
         2 |           1 | Df2459Afdw
         2 |           2 | DSpewirncS
Will this happen when I set the column type for video_id to SMALLSERIAL? Because I am afraid that it will insert a different value (the highest in the entire column), which I do not want.
Thanks.
No, a serial is bound to a sequence, and that doesn't reset without being told to do so.
But if you want an ordinal for the videos per product, you can query the table to produce it using the row_number() window function.
SELECT product_id,
       row_number() OVER (PARTITION BY product_id
                          ORDER BY video_id) AS video_ordinal,
       video_hash
FROM table;
You could also create a view for this query for convenience, so that you can query the view instead of the table and the view would look like you want it.
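A minimal sketch of such a view (the table name videos is a stand-in for your actual table):

CREATE VIEW product_video_ordinals AS
SELECT product_id,
       row_number() OVER (PARTITION BY product_id
                          ORDER BY video_id) AS video_ordinal,
       video_hash
FROM videos;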

How can I SUM distinct records in a Postgres database where there are duplicate records?

Imagine a table that looks like this:
The SQL to get this data was just SELECT *
The first column is "row_id", the second is "id" (the order ID), and the third is "total" (the revenue).
I'm not sure why there are duplicate rows in the database, but when I do a SUM(total) it includes the second entry as well, even though the order ID is the same. This makes my numbers larger than if I select distinct(id), total, export to Excel, and then sum the values manually.
So my question is: how can I SUM over just the distinct order IDs so that I get the same revenue as if I exported every distinct order ID row to Excel?
Thanks in advance!
Easy - just divide by the count:
select id, sum(total) / count(id)
from orders
group by id
See live demo.
Also handles any level of duplication, eg triplicates etc.
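A quick sanity check, borrowing the sample test table built in the next answer (two copies each of totals 112 and 40.37):

select id, sum(total) / count(id) as actual_total
from test
group by id;
-- 1509 -> (112 + 112) / 2 = 112
-- 3284 -> (40.37 + 40.37) / 2 = 40.37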
You can try something like this (with your example):
Table
create table test (
  row_id int,
  id int,
  total decimal(15,2)
);
insert into test values
  (6395, 1509, 112), (22986, 1509, 112),
  (1393, 3284, 40.37), (24360, 3284, 40.37);
Query
with distinct_records as (
  select distinct id, total from test
)
select a.id, b.actual_total, array_agg(a.row_id) as row_ids
from test a
inner join (select id, sum(total) as actual_total
            from distinct_records
            group by id) b
  on a.id = b.id
group by a.id, b.actual_total;
Result
|   id | actual_total | row_ids    |
|------|--------------|------------|
| 1509 | 112          | 6395,22986 |
| 3284 | 40.37        | 1393,24360 |
Explanation
We do not know why orders and totals appear more than once with different row_ids. So, in a common table expression (CTE) introduced with the with ... clause, we get the distinct id and total.
Below the CTE, we use this distinct data to do the totaling: we join the id in the original table against the aggregation over the distinct values. Then we collect the row_ids so that the information looks cleaner.
SQLFiddle example
http://sqlfiddle.com/#!15/72639/3
Create a custom aggregate:
CREATE OR REPLACE FUNCTION sum_func (
  double precision, pg_catalog.anyelement, double precision
)
RETURNS double precision AS
$body$
  SELECT CASE WHEN $3 IS NOT NULL THEN COALESCE($1, 0) + $3 ELSE $1 END
$body$
LANGUAGE sql;

CREATE AGGREGATE dist_sum (
  pg_catalog."any",
  double precision
)
(
  SFUNC = sum_func,
  STYPE = float8
);
And then calculate the distinct sum like:
select dist_sum(distinct id, total)
from orders;
SQLFiddle
You can use DISTINCT in your aggregate functions:
SELECT id, SUM(DISTINCT total) FROM orders GROUP BY id
Documentation here: https://www.postgresql.org/docs/9.6/static/sql-expressions.html#SYNTAX-AGGREGATES
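Against the sample test data from the earlier answer, this collapses the repeated total within each id before summing:

SELECT id, SUM(DISTINCT total) FROM test GROUP BY id;
-- 1509 -> 112
-- 3284 -> 40.37

One caveat: if the same id could legitimately contain two different rows with the same total, DISTINCT would drop one of them.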
If we can trust that the total for one order is actually one row, we can eliminate the duplicates in a sub-query by selecting the MAX of the PK id column. An example:
CREATE TABLE test2 (id int, order_id int, total int);
insert into test2 values (1, 1, 50);
insert into test2 values (2, 1, 50);
insert into test2 values (5, 1, 50);
insert into test2 values (3, 2, 100);
insert into test2 values (4, 2, 100);

select order_id, sum(total)
from test2 t
join (select max(id) as id
      from test2
      group by order_id) as sq
  on t.id = sq.id
group by order_id;
sql fiddle
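With this sample data the join keeps only rows 5 and 4 (the max id per order_id), so the expected result is:

 order_id | sum
----------+-----
        1 |  50
        2 | 100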
In difficult cases (for instance, when the duplication comes from a join fan-out and the same row_id can therefore appear several times), you can aggregate the rows into a JSONB object keyed by row_id, so each physical row is counted exactly once:
select id,
       (SELECT SUM(value::int4)
        FROM jsonb_each_text(jsonb_object_agg(row_id, total))) as total
from orders
group by id;
I would suggest just using a sub-query:
SELECT "a"."id", SUM("a"."total")
FROM (SELECT DISTINCT ON ("id") * FROM "Database"."Schema"."Table") AS "a"
GROUP BY "a"."id";
The above will give you the total of each id.
Use the one below if you want the full total with every duplicate removed:
SELECT SUM("a"."total")
FROM (SELECT DISTINCT ON ("id") * FROM "Database"."Schema"."Table") AS "a";
Using a subselect (http://sqlfiddle.com/#!7/cef1c/51):
select sum(total)
from (select distinct id, total
      from orders) t;
Using a CTE (http://sqlfiddle.com/#!7/cef1c/53):
with distinct_records as (
  select distinct id, total from orders
)
select sum(total) from distinct_records;
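With the sample test data from the earlier answers, both forms return the same grand total: 112 + 40.37 = 152.37.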

Need an efficient select query

I would like to know an efficient way to fetch the data in the following case.
There are two tables, say Table1 and Table2, having two common fields, say country and pincode, and a third table, Table3, holding the key fields of the first two tables (DNO, MPNO).
Here is the little glitch: in the Table3 data, a row that has a DNO won't have an MPNO.
So when the user enters anything on the selection screen (pic no. 2), the result should be as follows:
MFID  | DNO     | MPNO    | COUNTRY | PINCODE
------+---------+---------+---------+--------
00001 | 10011   | novalue | IN      | 4444
00002 | novalue | 1200    | IN      | 5555
00003 | 300     | novalue | US      | 9999
(as you can observe, if DNO is present there is no MPNO, and vice versa)
Please have a look at the pictures for a clear picture :-)
Table Relation:
Selection screen with select options:
The code shouldn't be long.
PSEUDO CODE:
Select queries:
SELECT * FROM table3 INTO TABLE it_table3.

SELECT * FROM table1 FOR ALL ENTRIES IN it_table3 INTO TABLE it_table1
  WHERE dno = it_table3-dno.

SELECT * FROM table2 FOR ALL ENTRIES IN it_table3 INTO TABLE it_table2
  WHERE mpno = it_table3-mpno.
Loop at internal table 3 and build the final table:
LOOP AT it_table3 INTO wa_table3.
  IF wa_table3-dno IS NOT INITIAL.
    READ TABLE it_table1 INTO wa_table1 WITH KEY dno = wa_table3-dno.
  ELSE.
    READ TABLE it_table2 INTO wa_table2 WITH KEY mpno = wa_table3-mpno.
  ENDIF.
ENDLOOP.
Hope this was the answer you were hoping to find!
Building an efficient select requires information about the obligatory fields on your selection screen, as well as about the expected production size of all 3 tables. Without that information, let's assume that table1 and table2 are reference tables and table3 is a transaction table, as one can infer from their structure. It would be sensible to build the selection in the following way:
Select data from the reference tables first. As you said, the fields DNO/MPNO are mutually exclusive, so no country/pincode pair will hit both reference tables, and a JOIN is useless here. However, we can merge the 2 result sets into a single itab without violating any constraints.
TYPES: BEGIN OF tt_result,
         dno     TYPE table1-dno,
         mpno    TYPE table2-mpno,
         country TYPE table1-country,
         pincode TYPE table1-pincode,
         " ...other fields from table3
       END OF tt_result.
DATA: itab_result TYPE TABLE OF tt_result.

SELECT dno country pincode
  FROM table1
  INTO CORRESPONDING FIELDS OF TABLE itab_result
  WHERE pincode IN so_pincode
    AND country IN so_country.

SELECT mpno country pincode
  FROM table2
  APPENDING CORRESPONDING FIELDS OF TABLE itab_result
  WHERE pincode IN so_pincode
    AND country IN so_country.
The FOR ALL ENTRIES addition allows specifying the same table in the FOR ALL ENTRIES clause and in the INTO clause, so we can fill our result table with the missing table3 data by the DNO/MPNO key.
SELECT *
  FROM table3
  INTO CORRESPONDING FIELDS OF TABLE itab_result
  FOR ALL ENTRIES IN itab_result
  WHERE dno  = itab_result-dno
    AND mpno = itab_result-mpno.

How to rank in postgres query

I'm trying to rank a subset of data within a table but I think I am doing something wrong. I cannot find much information about the rank() feature for postgres, maybe I'm looking in the wrong place. Either way:
I'd like to know the rank of an id that falls within a cluster of a table based on a date. My query is as follows:
select cluster_id, feed_id, pub_date, rank
from (select feed_id, pub_date, cluster_id,
             rank() over (order by pub_date asc)
      from url_info) as bar
where cluster_id = 9876 and feed_id = 1234;
I'm modeling this after the following stackoverflow post: postgres rank
The reason I think I am doing something wrong is that there are only 39 rows in url_info that are in cluster_id 9876, yet this query ran for 10 minutes and never came back. (I actually re-ran it for quite a while and it returned no results, even though there is a row in cluster 9876 for id 1234.) I'm expecting this to tell me something like "id 1234 was 5th for the criteria given". It will return a relative rank according to my query constraints, correct?
This is postgres 8.4 btw.
By placing the rank() function in the subselect and not specifying a PARTITION BY in the over clause or any predicate in that subselect, your query asks for a rank over the entire url_info table ordered by pub_date. This is likely why it ran so long: to rank over all of url_info, Pg must sort the entire table by pub_date, which will take a while if the table is very large.
It appears you want to generate a rank for just the set of records selected by the where clause, in which case all you need do is eliminate the subselect, and the rank function will implicitly run over the set of records matching that predicate.
select
     cluster_id
    ,feed_id
    ,pub_date
    ,rank() over (order by pub_date asc) as rank
from url_info
where cluster_id = 9876 and feed_id = 1234;
If what you really wanted was the rank within the cluster, regardless of the feed_id, you can rank in a subselect which filters to that cluster:
select ranked.*
from (
    select
         cluster_id
        ,feed_id
        ,pub_date
        ,rank() over (order by pub_date asc) as rank
    from url_info
    where cluster_id = 9876
) as ranked
where feed_id = 1234;
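As an aside, if ties on pub_date should still get distinct positions, row_number() is a drop-in alternative to rank() in either query above; a small sketch:

select
     feed_id
    ,pub_date
    ,row_number() over (order by pub_date asc) as pos
from url_info
where cluster_id = 9876;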
Sharing another example, of DENSE_RANK() in PostgreSQL: a sample query to find the top 3 students.
Reference taken from this blog:
Create a table with sample data:
CREATE TABLE tbl_Students
(
    StudID INT
    ,StudName CHARACTER VARYING
    ,TotalMark INT
);

INSERT INTO tbl_Students
VALUES
    (1,'Anvesh',88), (2,'Neevan',78)
    ,(3,'Roy',90),   (4,'Mahi',88)
    ,(5,'Maria',81), (6,'Jenny',90);
Using DENSE_RANK(), Calculate RANK of students:
;WITH cteStud AS
(
SELECT
StudName
,Totalmark
,DENSE_RANK() OVER (ORDER BY TotalMark DESC) AS StudRank
FROM tbl_Students
)
SELECT
StudName
,Totalmark
,StudRank
FROM cteStud
WHERE StudRank <= 3;
The Result:
 studname | totalmark | studrank
----------+-----------+----------
 Roy      |        90 |        1
 Jenny    |        90 |        1
 Anvesh   |        88 |        2
 Mahi     |        88 |        2
 Maria    |        81 |        3
(5 rows)
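For contrast, plain RANK() on the same data would skip positions after ties (Roy and Jenny 1, Anvesh and Mahi 3, Maria 5), so the WHERE StudRank <= 3 filter would then drop Maria.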