How to write a SQL query that selects rows where a column value changed from the previous row - PostgreSQL

CREATE TABLE status (
  id serial NOT NULL,
  val integer,
  plan smallint,
  time timestamp without time zone,
  CONSTRAINT data_pkey PRIMARY KEY (id)
)
WITH (OIDS=FALSE);
ALTER TABLE status
  OWNER TO postgres;
-- Index: data_idx
CREATE INDEX data_idx
  ON status
  USING btree
  (time, id);
I have a table like this
id val plan time
1 8300 1 2011-01-01
2 8300 1 2011-01-02
3 8300 2 2011-01-03
4 9600 1 2011-01-04
5 9600 2 2011-01-05
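For reference, the sample rows can be loaded into the table defined above with something like this (a sketch; the serial column generates ids 1-5):
INSERT INTO status (val, plan, time) VALUES
  (8300, 1, '2011-01-01'),
  (8300, 1, '2011-01-02'),
  (8300, 2, '2011-01-03'),
  (9600, 1, '2011-01-04'),
  (9600, 2, '2011-01-05');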
How do I select the rows where the plan (sigplan in the queries below) changed from the previous row for the same val (siteId)?
In the example above, the query should return the rows for
2011-01-03 (sigplan changed from 1 to 2 between 2011-01-01 and 2011-01-03 for 8300), and
2011-01-05 (sigplan changed from 1 to 2 between 2011-01-04 and 2011-01-05 for 9600).
The table contains a lot of data, so the query should be optimized.

SELECT siteId, sigplan, MAX(server_time) FROM traffview.status_data
GROUP BY siteId, sigplan
HAVING COUNT(1) > 1 AND MAX(server_time) > 'XXXXX' AND MAX(server_time) < 'XXXXX'

The annoying part is figuring out the id of the previous row with the same siteId. After that it is pretty easy: join the table with itself.
SELECT t1.* FROM table t1, table t2
WHERE t1.siteId = t2.siteId
AND t1.sigplan != t2.sigplan
AND t2.id = (SELECT MAX(t3.id) FROM table t3
             WHERE t3.id < t1.id AND t3.siteId = t1.siteId)
If the table is moderately (not extremely) large, I would consider doing this in application code instead, or storing the change flag in its own column when writing a new row. A subquery for each row in the table has very poor performance.
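For the second idea (storing the change flag when the row is written), a minimal sketch in PostgreSQL might look like the following; the plan_changed column and the trigger are my own illustration, not part of the original schema:
-- add a flag column that is filled in at insert time
ALTER TABLE status ADD COLUMN plan_changed boolean NOT NULL DEFAULT false;

CREATE OR REPLACE FUNCTION mark_plan_change() RETURNS trigger AS $$
BEGIN
    -- compare the incoming plan with the latest existing plan for the same val (siteId)
    NEW.plan_changed := COALESCE((
        SELECT s.plan IS DISTINCT FROM NEW.plan
        FROM status s
        WHERE s.val = NEW.val
        ORDER BY s.time DESC
        LIMIT 1
    ), false);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER status_plan_change
BEFORE INSERT ON status
FOR EACH ROW EXECUTE PROCEDURE mark_plan_change();
Selecting the change rows then becomes a plain lookup: SELECT * FROM status WHERE plan_changed;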

This version doesn't have a sub-query, but does assume that you have consecutive IDs.
SELECT t1.*
FROM traffview AS t1, traffview AS t2
WHERE
t1.siteId = t2.siteId
AND t1.sigplan <> t2.sigplan
AND t1.id - t2.id = 1
ORDER BY
t1.server_time

When you need to compare with previous rows, it is useful to use the LAG window function, which does the job for you:
SELECT sub.*
FROM (
    SELECT
        plan AS curr_plan,
        LAG(plan) OVER (PARTITION BY val ORDER BY time) AS prev_plan,
        val,
        time
    FROM status
) sub
WHERE
    sub.prev_plan IS NOT NULL AND sub.prev_plan <> sub.curr_plan;
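Run against the sample data in the question, this should return just the two change rows (a sketch of the expected output):
 curr_plan | prev_plan | val  |    time
-----------+-----------+------+------------
         2 |         1 | 8300 | 2011-01-03
         2 |         1 | 9600 | 2011-01-05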

Related

postgresql - removing duplicates

I have 4 tables that I combined with a UNION; the total count is roughly 13 million rows for the service_id field and about 790k distinct service_id values.
cust_id  service_id
1        0423
2        0456
3        0423
When I did a count using
SELECT COUNT(service_id) as full_count, service_id
FROM temp
GROUP BY service_id
HAVING COUNT(1) > 1
I got service_id 0423 with a count of 2.
However, when I did
SELECT COUNT(*) as full_count, COUNT(DISTINCT cust_id) as dist_cust_id
FROM temp
I got the result of
full_count  dist_cust_id
3           3
My question is: how do I remove the duplicate service_id rows, keeping one, and knowing that the one I keep is a valid one? Some tables have a last_update column and some don't, which adds to the complexity.
I have tried a SELF JOIN of the table on a.cust_id <> b.cust_id AND a.service_id = b.service_id to identify which service_id values have two or more cust_id (see the sketch below). I'm not sure how to keep the right one.
Thanks
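For reference, a minimal sketch of that self-join (table and column names as posted above); it only lists the colliding rows, it does not decide which one to keep:
SELECT a.service_id, a.cust_id, b.cust_id AS other_cust_id
FROM temp a
JOIN temp b
  ON  a.service_id = b.service_id
  AND a.cust_id   <> b.cust_id;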

Can lead() return the next row only when a condition is met?

Recently my company upgraded from SQL Server 2008 to 2016, so I want to take advantage of some "new" features, one of which is lead().
I understand the basic usage, but I want to know if I can return the next row only when a condition is met. My original query looked like the following, where x.next_id is null if the next row isn't more than 12 days past the current row.
SELECT
a.id,
a.date_a,
x.next_id
FROM
table a
OUTER APPLY
(SELECT TOP 1
next_id = i.intIndex
FROM
table i
WHERE
i.date_a > DATEADD(DAY, 12, a.date_a)
ORDER BY
date_a, id ASC) x
ORDER BY
date_a, id ASC
Data might look like the following, where the third column is added by the query:
id date_a next_id
--------------------------------
1798678 2014-12-01 NULL
1798689 2013-01-05 1798688
1798688 2014-03-31 NULL
1798696 2013-04-03 1798694
1798694 2013-08-12 1798691
1798691 2014-09-30 NULL
1798698 2013-05-14 1798697
1798697 2013-08-29 NULL
Assuming this data set (your result table; minus the result column):
CREATE TABLE some_table(id INT PRIMARY KEY,date_a DATE);
INSERT INTO some_table(id,date_a)
VALUES (1798678,'2014-12-01'),
(1798689,'2013-01-05'),
(1798688,'2014-03-31'),
(1798696,'2013-04-03'),
(1798694,'2013-08-12'),
(1798691,'2014-09-30'),
(1798698,'2013-05-14'),
(1798697,'2013-08-29');
This query returns the same result set as the query you have:
SELECT
id,
date_a,
next_id=
CASE WHEN LEAD(date_a) OVER (ORDER BY date_a,id)>DATEADD(DAY,12,date_a)
THEN LEAD(id) OVER (ORDER BY date_a,id)
ELSE NULL
END
FROM
some_table
ORDER BY
date_a,id;
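If you prefer, the CASE can also be written with IIF, available since SQL Server 2012, which behaves the same way here (a sketch against the same some_table):
SELECT
    id,
    date_a,
    next_id = IIF(LEAD(date_a) OVER (ORDER BY date_a, id) > DATEADD(DAY, 12, date_a),
                  LEAD(id)     OVER (ORDER BY date_a, id),
                  NULL)
FROM
    some_table
ORDER BY
    date_a, id;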

Update Multiple Columns in One Statement Based On a Field with the Same Value as the Column Name

Not sure if this is possible without some sort of Dynamic SQL or a Pivot (which I want to stay away from)... I have a report that displays total counts for various types/ various status combinations... These types and statuses are always going to be the same and present on the report, so returning no data for a specific combination yields a zero. As of right now there are only three caseTypes (Vegetation, BOA, and Zoning) and 8 statusTypes (see below).
I am first setting up the skeleton of the report using a temp table. I have been careful to name the temp table columns the same as what the "statusType" column will contain in my second table "#ReportData". Is there a way to update the different columns in "#FormattedReport" based on the value of the "statusType" column in my second table?
Creation of Formatted Table (for report):
CREATE TABLE #FormattedReport (
caseType VARCHAR(50)
, underInvestigation INT NOT NULL DEFAULT 0
, closed INT NOT NULL DEFAULT 0
, closedDPW INT NOT NULL DEFAULT 0
, unsubtantiated INT NOT NULL DEFAULT 0
, currentlyMonitored INT NOT NULL DEFAULT 0
, judicialProceedings INT NOT NULL DEFAULT 0
, pendingCourtAction INT NOT NULL DEFAULT 0
, other INT NOT NULL DEFAULT 0
)
INSERT INTO #FormattedReport (caseType) VALUES ('Vegetation')
INSERT INTO #FormattedReport (caseType) VALUES ('BOA')
INSERT INTO #FormattedReport (caseType) VALUES ('Zoning')
Creation of Data Table (to populate #FormattedReport):
SELECT B.Name AS caseType, C.Name AS StatusType, COUNT(*) AS Amount
INTO #ReportData
FROM table1 A
INNER JOIN table2 B ...
INNER JOIN table3 C ...
WHERE ...
GROUP BY B.Name, C.Name
CURRENT Update Statement (currently one UPDATE per column in #FormattedReport):
UPDATE A SET underInvestigation = Amount FROM #ReportData B
INNER JOIN #FormattedReport A ON B.CaseType LIKE CONCAT('%', A.caseType, '%')
WHERE B.StatusType = 'Under Investigation'
UPDATE A SET closed = Amount FROM #ReportData B
INNER JOIN #FormattedReport A ON B.CaseType LIKE CONCAT('%', A.caseType, '%')
WHERE B.StatusType = 'Closed'
...
REQUESTED Update Statement: I would like to have ONE update statement that knows which column to update when "#ReportData.statusType" is the same as a "#FormattedReport" column's name. For my "other" column, I'll just do that one manually using a NOT IN.
Assuming I understand the question, I think you can use conditional aggregation for this:
;WITH CTE AS
(
SELECT CaseType
,SUM(CASE WHEN StatusType = 'Under Investigation' THEN Amount ELSE 0 END) As underInvestigation
,SUM(CASE WHEN StatusType = 'Closed' THEN Amount ELSE 0 END) As closed
-- ... More of the same
FROM #ReportData
GROUP BY CaseType
)
UPDATE A
SET underInvestigation = B.underInvestigation
,closed = b.closed
-- more of the same
FROM #FormattedReport A
INNER JOIN CTE B
ON B.CaseType LIKE CONCAT('%', A.caseType, '%')
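For the "other" column mentioned in the question, the NOT IN bucket can be folded into the same pattern. A sketch (the literal status strings here are guesses based on the #FormattedReport column names, so adjust them to the real values in #ReportData):
;WITH CTE AS
(
    SELECT CaseType
          -- the status strings below are assumed, derived from the column names
          ,SUM(CASE WHEN StatusType NOT IN ('Under Investigation', 'Closed', 'Closed DPW',
                                            'Unsubstantiated', 'Currently Monitored',
                                            'Judicial Proceedings', 'Pending Court Action')
                    THEN Amount ELSE 0 END) AS other
    FROM #ReportData
    GROUP BY CaseType
)
UPDATE A
   SET other = B.other
  FROM #FormattedReport A
 INNER JOIN CTE B
    ON B.CaseType LIKE CONCAT('%', A.caseType, '%')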

How can I SUM distinct records in a Postgres database where there are duplicate records?

Imagine a table where the first column is "row_id", the second is "id" (the order ID) and the third is "total" (the revenue). The SQL to get this data was just SELECT *.
I'm not sure why there are duplicate rows in the database, but when I do a SUM(total) it includes the second entry as well, even though the order ID is the same, which makes my numbers larger than if I select distinct(id), total, export to Excel and then sum the values manually.
So my question is: how can I SUM over just the distinct order IDs so that I get the same revenue as if I exported every distinct order ID row to Excel?
Thanks in advance!
Easy - just divide by the count:
select id, sum(total) / count(id)
from orders
group by id
This also handles any level of duplication, e.g. triplicates etc.
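As a quick sanity check with the duplicated order used later in this thread (id 1509 stored twice with a total of 112): SUM(total) = 224 and COUNT(id) = 2, so 224 / 2 = 112. The per-order total is recovered as long as every duplicate carries the same total.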
You can try something like this (with your example):
Table
create table test (
row_id int,
id int,
total decimal(15,2)
);
insert into test values
(6395, 1509, 112), (22986, 1509, 112),
(1393, 3284, 40.37), (24360, 3284, 40.37);
Query
with distinct_records as (
select distinct id, total from test
)
select a.id, b.actual_total, array_agg(a.row_id) as row_ids
from test a
inner join (select id, sum(total) as actual_total from distinct_records group by id) b
on a.id = b.id
group by a.id, b.actual_total
Result
| id | actual_total | row_ids |
|------|--------------|------------|
| 1509 | 112 | 6395,22986 |
| 3284 | 40.37 | 1393,24360 |
Explanation
We do not know the reason for orders and totals appearing more than once with different row_id values. So, using a common table expression (CTE) introduced with the WITH ... clause, we get the distinct id and total.
On top of the CTE we use this distinct data to do the totaling. We join id in the original table with the aggregation over the distinct values. Then we comma-separate the row_ids so that the information looks cleaner.
SQLFiddle example
http://sqlfiddle.com/#!15/72639/3
Create custom aggregate:
CREATE OR REPLACE FUNCTION sum_func (
double precision, pg_catalog.anyelement, double precision
)
RETURNS double precision AS
$body$
SELECT case when $3 is not null then COALESCE($1, 0) + $3 else $1 end
$body$
LANGUAGE 'sql';
CREATE AGGREGATE dist_sum (
pg_catalog."any",
double precision)
(
SFUNC = sum_func,
STYPE = float8
);
And then calculate the distinct sum like:
select dist_sum(distinct id, total)
from orders
You can use DISTINCT inside your aggregate functions (this works here because every duplicate row for a given id carries the same total):
SELECT id, SUM(DISTINCT total) FROM orders GROUP BY id
Documentation here: https://www.postgresql.org/docs/9.6/static/sql-expressions.html#SYNTAX-AGGREGATES
If we can trust that the total for one order is simply repeated on each duplicate row, we can eliminate the duplicates in a sub-query by selecting the MAX of the PK id column. An example:
CREATE TABLE test2 (id int, order_id int, total int);
insert into test2 values (1,1,50);
insert into test2 values (2,1,50);
insert into test2 values (5,1,50);
insert into test2 values (3,2,100);
insert into test2 values (4,2,100);
select order_id, sum(total)
from test2 t
join (
select max(id) as id
from test2
group by order_id) as sq
on t.id = sq.id
group by order_id
In difficult cases:
select
id,
(
SELECT SUM(value::int4)
FROM jsonb_each_text(jsonb_object_agg(row_id, total))
) as total
from orders
group by id
I would suggest just using a sub-query:
SELECT "a"."id", SUM("a"."total")
FROM (SELECT DISTINCT ON ("id") * FROM "Database"."Schema"."Table") AS "a"
GROUP BY "a"."id"
The above will give you the total for each id.
Use the query below if you want the overall total with every duplicate removed:
SELECT SUM("a"."total")
FROM (SELECT DISTINCT ON ("id") * FROM "Database"."Schema"."Table") AS "a"
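A note on DISTINCT ON: without an ORDER BY, PostgreSQL keeps an arbitrary row from each id group. That is harmless here because the duplicates share the same total, but if you want the pick to be deterministic, add an ORDER BY inside the sub-query. A sketch using the orders table and row_id column from the other answers in this thread:
SELECT "a"."id", SUM("a"."total")
FROM (SELECT DISTINCT ON (id) * FROM orders ORDER BY id, row_id) AS "a"
GROUP BY "a"."id";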
Using subselect (http://sqlfiddle.com/#!7/cef1c/51):
select sum(total) from (
  select distinct id, total
  from orders
) t;
Using CTE (http://sqlfiddle.com/#!7/cef1c/53):
with distinct_records as (
select distinct id, total from orders
)
select sum(total) from distinct_records;

PostgreSQL: set a column with the ordinal of the row sorted via another field

I have a table segnature describing an item with a varchar field deno and a numeric field ord. A foreign key fk_collection tells which collection the row is part of.
I want to update the field ord so that it contains the ordinal of the row within its collection, sorted by the field deno.
E.g. if I have something like
[deno]  [ord]  [fk_collection]
abc             10
aab             10
bcd             10
zxc             20
vbn             20
Then I want a result like
[deno]  [ord]  [fk_collection]
abc       1     10
aab       0     10
bcd       2     10
zxc       1     20
vbn       0     20
I tried with something like
update segnature s1 set ord = (select count(*)
from segnature s2
where s1.fk_collection=s2.fk_collection and s2.deno<s1.deno
)
but the query is really slow: about 150 collections with roughly 30,000 items in total take around 10 minutes to update.
Any suggestion to speed up the process?
Thank you!
You can use a window function to generate the "ordinal" number:
with numbered as (
select deno, fk_collection,
row_number() over (partition by fk_collection order by deno) as rn,
ctid as id
from segnature
)
update segnature
set ord = n.rn
from numbered n
where n.id = segnature.ctid;
This uses the internal column ctid to uniquely identify each row. The ctid comparison is quite slow, so if you have a real primary (or unique) key in that table, use that column instead.
Alternatively without the common table expression:
update segnature
set ord = n.rn
from (
select deno, fk_collection,
row_number() over (partition by fk_collection order by deno) as rn,
ctid as id
from segnature
) as n
where n.id = segnature.ctid;
SQLFiddle example: http://sqlfiddle.com/#!15/e997f/1
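One detail worth noting: the sample output in the question is zero-based (ord counts the rows that sort before the current one), while row_number() starts at 1. If you need to match the example exactly, subtract 1 in the window expression, e.g.:
row_number() over (partition by fk_collection order by deno) - 1 as rn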