How would you optimize this Postgres query so that the output format remains the same?

Original expensive and slow query is this:
select 'INTEGRATED' as "STATUS", coalesce(sum(response_count),0) as "COUNT" from f_wfc_transaction_summary_isocc_ft_sum_no_tid('2015-07-22 00:00:00','2015-08-05 01:00:00', null, 'LINK')
where wfc_result in ('TM_OK', 'TM_NO_CHANGE')
union
select 'WFC_FALLOUT' as "STATUS", coalesce(sum(response_count),0) as "COUNT" from f_wfc_transaction_summary_isocc_ft_sum_no_tid('2015-07-22 00:00:00','2015-08-05 01:00:00', null, 'LINK')
where wfc_result LIKE 'WFC%'
union
select 'TM_FALLOUT' as "STATUS", coalesce(sum(response_count),0) as "COUNT" from f_wfc_transaction_summary_isocc_ft_sum_no_tid('2015-07-22 00:00:00','2015-08-05 01:00:00', null, 'LINK')
where wfc_result = 'TM_FAIL'
union
select 'WFC_PENDING' as "STATUS", coalesce(sum(response_count),0) as "COUNT" from f_wfc_transaction_summary_isocc_ft_sum_no_tid('2015-07-22 00:00:00','2015-08-05 01:00:00', null, 'LINK')
where wfc_result = 'PENDING';
Output:
STATUS        COUNT
INTEGRATED       40
TM_FALLOUT       50
WFC_PENDING      60
WFC_FALLOUT      70
The output format above is what I need. But this query takes a lot of time.
The query I would like to use instead is the following, as it runs much faster:
select
sum(CASE WHEN wfc_result IN ('TM_OK','TM_NO_CHANGE') THEN response_count ELSE 0 END) as "INTEGRATED",
sum(CASE WHEN wfc_result LIKE 'WFC%' THEN response_count ELSE 0 END) as "WFC_FALLOUT",
sum(CASE WHEN wfc_result = 'TM_FAIL' THEN response_count ELSE 0 END) as "TM_FALLOUT",
sum(CASE WHEN wfc_result = 'PENDING' THEN response_count ELSE 0 END) as "WFC_PENDING"
from f_wfc_transaction_summary_isocc_ft_sum_no_tid('2015-07-22 00:00:00','2015-08-05 01:00:00',null, 'LINK')
However, this query's output is as follows, while I need the same output as the first query gives:
INTEGRATED  WFC_FALLOUT  TM_FALLOUT  WFC_PENDING
40          70           50          60
I tried several ways but could not figure out how to transform it.

Assuming that every wfc_result starting with 'WFC' actually starts with 'WFC_', you can do the following:
SELECT status.lbl AS status, coalesce(sum(response_count), 0) AS count
FROM (VALUES
        ('TM_O', 'INTEGRATED'),
        ('TM_N', 'INTEGRATED'),
        ('WFC_', 'WFC_FALLOUT'),
        ('TM_F', 'TM_FALLOUT'),
        ('PEND', 'WFC_PENDING')) status(cls, lbl)
LEFT JOIN f_wfc_transaction_summary_isocc_ft_sum_no_tid('2015-07-22 00:00:00', '2015-08-05 01:00:00', NULL, 'LINK') wfc
       ON substr(wfc.wfc_result, 1, 4) = status.cls
GROUP BY 1;
If there are only a few wfc_result strings LIKE 'WFC%', you could also spell them out (as well as the other classes) in the VALUES clause and then drop the substr() call from the join condition.
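For illustration, if the only strings matching 'WFC%' were, say, 'WFC_TIMEOUT' and 'WFC_ERROR' (hypothetical values; substitute the real ones), the spelled-out variant would look like this:

```sql
SELECT status.lbl AS status, coalesce(sum(response_count), 0) AS count
FROM (VALUES
        ('TM_OK',        'INTEGRATED'),
        ('TM_NO_CHANGE', 'INTEGRATED'),
        ('WFC_TIMEOUT',  'WFC_FALLOUT'),  -- hypothetical; list every real WFC% value
        ('WFC_ERROR',    'WFC_FALLOUT'),  -- hypothetical
        ('TM_FAIL',      'TM_FALLOUT'),
        ('PENDING',      'WFC_PENDING')) status(cls, lbl)
LEFT JOIN f_wfc_transaction_summary_isocc_ft_sum_no_tid('2015-07-22 00:00:00', '2015-08-05 01:00:00', NULL, 'LINK') wfc
       ON wfc.wfc_result = status.cls
GROUP BY 1;
```

With exact matches the planner can also compare plain equality instead of computing substr() per row.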
To get the total output, the query is much simpler:
SELECT sum(response_count) AS total_count
FROM f_wfc_transaction_summary_isocc_ft_sum_no_tid('2015-07-22 00:00:00', '2015-08-05 01:00:00', NULL, 'LINK');
This does of course make another call to the expensive function; with two separate queries there is no way around that on the database side, but in your client application you could simply sum up the "count" values returned by the previous query.
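Depending on the Postgres version, a CTE can let the breakdown and the total share a single function call (a sketch; note that before Postgres 12 a CTE is always materialized, which is exactly what we want here):

```sql
WITH wfc AS (
    SELECT * FROM f_wfc_transaction_summary_isocc_ft_sum_no_tid(
        '2015-07-22 00:00:00', '2015-08-05 01:00:00', NULL, 'LINK')
)
SELECT status.lbl AS status, coalesce(sum(response_count), 0) AS count
FROM (VALUES
        ('TM_O', 'INTEGRATED'),
        ('TM_N', 'INTEGRATED'),
        ('WFC_', 'WFC_FALLOUT'),
        ('TM_F', 'TM_FALLOUT'),
        ('PEND', 'WFC_PENDING')) status(cls, lbl)
LEFT JOIN wfc ON substr(wfc.wfc_result, 1, 4) = status.cls
GROUP BY 1
UNION ALL
SELECT 'TOTAL', coalesce(sum(response_count), 0) FROM wfc;
```

The function result is evaluated once and both branches of the UNION ALL read from the materialized CTE.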

Related

PostgreSQL: set ORDER BY DESC or ASC depending on a variable passed into a function

I have a function that takes product pricing data from today and yesterday, works out the difference, orders it by price_delta_percentage, and limits the result to 5 rows. Currently I order by price_delta_percentage DESC, which returns the top 5 products that have increased in price since yesterday.
I would like to pass in a variable, sort, to make the function sort either DESC or ASC. I have tried IF statements, which give syntax errors, and CASE statements, which fail with the message that price_delta_percentage doesn't exist.
Script:
RETURNS TABLE(
product_id varchar,
name varchar,
price_today numeric,
price_yesterday numeric,
price_delta numeric,
price_delta_percentage numeric
)
LANGUAGE 'sql'
COST 100
STABLE STRICT PARALLEL SAFE
AS $BODY$
WITH cte AS (
SELECT
product_id,
name,
SUM(CASE WHEN rank = 1 THEN trend_price ELSE NULL END) price_today,
SUM(CASE WHEN rank = 2 THEN trend_price ELSE NULL END) price_yesterday,
SUM(CASE WHEN rank = 1 THEN trend_price ELSE 0 END) - SUM(CASE WHEN rank = 2 THEN trend_price ELSE 0 END) as price_delta,
ROUND(((SUM(CASE WHEN rank = 1 THEN trend_price ELSE NULL END) / SUM(CASE WHEN rank = 2 THEN trend_price ELSE NULL END) - 1) * 100), 2) as price_delta_percentage
FROM (
SELECT
magic_sets_cards.name,
pricing.product_id,
pricing.trend_price,
pricing.date,
RANK() OVER (PARTITION BY product_id ORDER BY date DESC) AS rank
FROM pricing
JOIN magic_sets_cards_identifiers ON magic_sets_cards_identifiers.mcm_id = pricing.product_id
JOIN magic_sets_cards ON magic_sets_cards.id = magic_sets_cards_identifiers.card_id
JOIN magic_sets ON magic_sets.id = magic_sets_cards.set_id
WHERE date BETWEEN CURRENT_DATE - days AND CURRENT_DATE
AND magic_sets.code = set_code
AND pricing.trend_price > 0.25) p
WHERE rank IN (1,2)
GROUP BY product_id, name
ORDER BY price_delta_percentage DESC)
SELECT * FROM cte WHERE (CASE WHEN price_today IS NULL OR price_yesterday IS NULL THEN 'NULL' ELSE 'VALID' END) !='NULL'
LIMIT 5;
$BODY$;
CASE Statement:
ORDER BY CASE WHEN sort = 'DESC' THEN price_delta_percentage END DESC, CASE WHEN sort = 'ASC' THEN price_delta_percentage END ASC)
Error:
ERROR: column "price_delta_percentage" does not exist
LINE 42: ORDER BY CASE WHEN sort = 'DESC' THEN price_delta_percenta...
You can't use CASE to decide between ASC and DESC like that: those keywords are not data, they are part of the SQL grammar. You would need to build the whole statement as a string and execute that string as a dynamic query, which means using PL/pgSQL rather than plain SQL.
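A minimal PL/pgSQL sketch of that dynamic approach (query_text is a hypothetical variable holding the body shown above minus its ORDER BY and LIMIT; the direction is whitelisted via CASE rather than concatenated raw):

```sql
-- Inside a PL/pgSQL function body (sketch):
DECLARE
    dir text := CASE WHEN upper(sort) = 'ASC' THEN 'ASC' ELSE 'DESC' END;
BEGIN
    RETURN QUERY EXECUTE
        query_text || ' ORDER BY price_delta_percentage ' || dir || ' LIMIT 5';
END;
```

Whitelisting the direction keeps arbitrary text in the sort parameter from being injected into the statement.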
But since your column is numeric, you could just order by the product of the column and an indicator variable which is either 1 or -1.
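That indicator trick keeps the function in plain SQL; for example (a sketch, assuming sort is the function's text parameter):

```sql
-- Descending by default; multiplying by -1 inverts the order for 'ASC'.
ORDER BY price_delta_percentage * CASE WHEN sort = 'ASC' THEN -1 ELSE 1 END DESC
```

Note that NULL values sort the same either way, and an index on the column can no longer be used for the ordering, which is usually fine for a 5-row result.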

Postgres very hard dynamic select statement with COALESCE

Having a table and data like this
CREATE TABLE solicitations
(
id SERIAL PRIMARY KEY,
name text
);
CREATE TABLE donations
(
id SERIAL PRIMARY KEY,
solicitation_id integer REFERENCES solicitations, -- can be null
created_at timestamp without time zone NOT NULL DEFAULT (now() at time zone 'utc'),
amount bigint NOT NULL DEFAULT 0
);
INSERT INTO solicitations (name) VALUES
('solicitation1'), ('solicitation2');
INSERT INTO donations (created_at, solicitation_id, amount) VALUES
('2018-06-26', null, 10), ('2018-06-26', 1, 20), ('2018-06-26', 2, 30),
('2018-06-27', null, 10), ('2018-06-27', 1, 20),
('2018-06-28', null, 10), ('2018-06-28', 1, 20), ('2018-06-28', 2, 30);
How can I make the solicitation ids dynamic in the following SELECT statement, using only Postgres?
SELECT
"created_at"
-- make dynamic this begins
, COALESCE("no_solicitation", 0) AS "no_solicitation"
, COALESCE("1", 0) AS "1"
, COALESCE("2", 0) AS "2"
-- make dynamic this ends
FROM crosstab(
$source_sql$
SELECT
created_at::date as row_id
, COALESCE(solicitation_id::text, 'no_solicitation') as category
, SUM(amount) as value
FROM donations
GROUP BY row_id, category
ORDER BY row_id, category
$source_sql$
, $category_sql$
-- parametrize with ids from here begins
SELECT unnest('{no_solicitation}'::text[] || ARRAY(SELECT DISTINCT id::text FROM solicitations ORDER BY id))
-- parametrize with ids from here ends
$category_sql$
) AS ct (
"created_at" date
-- make dynamic this begins
, "no_solicitation" bigint
, "1" bigint
, "2" bigint
-- make dynamic this ends
)
The select should return data like this
created_at no_solicitation 1 2
____________________________________
2018-06-26 10 20 30
2018-06-27 10 20 0
2018-06-28 10 20 30
The solicitation ids that should parametrize select are the same as in
SELECT unnest('{no_solicitation}'::text[] || ARRAY(SELECT DISTINCT id::text FROM solicitations ORDER BY id))
One can fiddle the code here
I decided to use json, which is much simpler than crosstab:
WITH
all_solicitation_ids AS (
SELECT
unnest('{no_solicitation}'::text[] ||
ARRAY(SELECT DISTINCT id::text FROM solicitations ORDER BY id))
AS col
)
, all_days AS (
SELECT
-- TODO: compute days ad hoc, from min created_at day of donations to max created_at day of donations
generate_series('2018-06-26', '2018-06-28', '1 day'::interval)::date
AS col
)
, all_days_and_all_solicitation_ids AS (
SELECT
all_days.col AS created_at
, all_solicitation_ids.col AS solicitation_id
FROM all_days, all_solicitation_ids
ORDER BY all_days.col, all_solicitation_ids.col
)
, donations_ AS (
SELECT
created_at::date as created_at
, COALESCE(solicitation_id::text, 'no_solicitation') as solicitation_id
, SUM(amount) as amount
FROM donations
GROUP BY created_at, solicitation_id
ORDER BY created_at, solicitation_id
)
, donations__ AS (
SELECT
all_days_and_all_solicitation_ids.created_at
, all_days_and_all_solicitation_ids.solicitation_id
, COALESCE(donations_.amount, 0) AS amount
FROM all_days_and_all_solicitation_ids
LEFT JOIN donations_
ON all_days_and_all_solicitation_ids.created_at = donations_.created_at
AND all_days_and_all_solicitation_ids.solicitation_id = donations_.solicitation_id
)
SELECT
jsonb_object_agg(solicitation_id, amount) ||
jsonb_object_agg('date', created_at)
AS data
FROM donations__
GROUP BY created_at
which results
data
______________________________________________________________
{"1": 20, "2": 30, "date": "2018-06-28", "no_solicitation": 10}
{"1": 20, "2": 30, "date": "2018-06-26", "no_solicitation": 10}
{"1": 20, "2": 0, "date": "2018-06-27", "no_solicitation": 10}
Though it is not quite what I asked for: it returns a single data column instead of date, no_solicitation, 1, 2, .... To split it into columns I would need json_to_record, but I don't know how to produce its AS argument dynamically.
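For reference, json_to_record/jsonb_to_record always need the column definition list spelled out in the AS clause, and that list is precisely the part that cannot be made dynamic in plain SQL. A static sketch with the ids from the sample data:

```sql
SELECT r.*
FROM jsonb_to_record(
         '{"1": 20, "2": 30, "date": "2018-06-26", "no_solicitation": 10}'::jsonb
     ) AS r("date" date, no_solicitation bigint, "1" bigint, "2" bigint);
```

A truly dynamic column list requires either a second round trip (build the AS clause in the client from SELECT id FROM solicitations) or a PL/pgSQL function that EXECUTEs a generated statement.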

Count based on Or is not differentiating the count

My results show both counts the same, but some should differ, since CarCode is sometimes null.
SELECT distinct car.carKey,
car.Weight,
car.CarCode,
COUNT(car.carKey)OVER(PARTITION BY car.carKey) AS TotalCarKeyCount,
COUNT(Case When (car.[Weight] is not null) and (car.CarCode is null)
           then 0
           else car.carKey End) OVER(PARTITION BY car.carKey) AS CarCountWithoutCode
from car
The results show TotalCarKeyCount and CarCountWithoutCode always with the same counts, as if the CASE expression isn't working.
It sounds like you might want to use SUM() instead:
SELECT distinct car.carKey,
car.Weight,
car.CarCode,
COUNT(car.carKey)OVER(PARTITION BY car.carKey) AS TotalCarKeyCount,
SUM(Case When (car.[Weight] is not null) and (car.CarCode is null)
         then 0 else 1 End) OVER(PARTITION BY car.carKey) AS CarCountWithoutCode
from car
SQL Fiddle demo showing the difference between using COUNT() and SUM():
create table test
(
id int
);
insert into test values
(1), (null), (23), (4), (2);
select
count(case when id is null then 0 else id end) [count],
sum(case when id is null then 0 else 1 end) [sum]
from test;
Count returns 5 and Sum returns 4. Alternatively, you can change the COUNT() to produce NULL, and the NULL values will be excluded from the final count:
select
count(case when id is null then null else id end) [count],
sum(case when id is null then 0 else 1 end) [sum]
from test;
Your query would be:
SELECT distinct car.carKey,
car.Weight,
car.CarCode,
COUNT(car.carKey)OVER(PARTITION BY car.carKey) AS TotalCarKeyCount,
COUNT(Case When (car.[Weight] is not null) and (car.CarCode is null)
           then null else 1 End) OVER(PARTITION BY car.carKey) AS CarCountWithoutCode
from car
Change the then 0 to then null. Zero values are counted, nulls are not.

Merge row columns from a query in sql server

I have a simple query which returns the following rows :
Current rows:
Empl ECode DCode LCode Earn Dedn Liab
==== ==== ===== ===== ==== ==== ====
123 PerHr Null Null 13 0 0
123 Null Union Null 0 10 0
123 Null Per Null 0 20 0
123 Null Null MyHealth 0 0 5
123 Null Null 401 0 0 10
123 Null Null Train 0 0 15
123 Null Null CAFTA 0 0 20
However, I needed to see the above rows as follows :
Empl ECode DCode LCode Earn Dedn Liab
==== ==== ===== ===== ==== ==== ====
123 PerHr Union MyHealth 13 10 5
123 Null Per 401 0 20 10
123 Null Null Train 0 0 15
123 Null Null CAFTA 0 0 20
It's more like merging the succeeding rows into the preceding rows wherever there are Nulls encountered for EarnCode, DednCode & LiabCode. Actually what I wanted to see was to roll up everything to the preceding rows.
In Oracle we had this LAST_VALUE function which we could use, but in this case, I simply cannot figure out what to do with this.
In the example above, ECode's sum value column is Earn, DCode is Dedn, and LCode is Liab; notice that whenever either of ECode, DCode, or LCode is not null, there is a corresponding value in Earn, Dedn, or the Liab columns.
By the way, we are using SQL Server 2008 R2 at work.
Hoping for your advice, thanks.
This is basically the same technique as Tango_Guy's, but without the temporary tables and with the sort made explicit. Because the number of rows per Empl in each class is <= the number of rows already in place, I didn't need a dummy leftmost table; I just filtered the base data to rows where at least one of the 3 codes matched. Also, going by your discussion, Earn and ECode move together. In fact a non-zero Earn in a row without an ECode is effectively lost (a good case for a constraint: a non-zero Earn is not allowed when ECode is NULL):
http://sqlfiddle.com/#!3/7bd04/3
CREATE TABLE data(ID INT IDENTITY NOT NULL,
Empl VARCHAR(3),
ECode VARCHAR(8),
DCode VARCHAR(8),
LCode VARCHAR(8),
Earn INT NOT NULL,
Dedn INT NOT NULL,
Liab INT NOT NULL ) ;
INSERT INTO data (Empl, ECode, DCode, LCode, Earn, Dedn, Liab)
VALUES ('123', 'PerHr', NULL, NULL, 13, 0, 0),
('123', NULL, 'Union', NULL, 0, 10, 0),
('123', NULL, 'Per', NULL, 0, 20, 0),
('123', NULL, NULL, 'MyHealth', 0, 0, 5),
('123', NULL, NULL, '401', 0, 0, 10),
('123', NULL, NULL, 'Train', 0, 0, 15),
('123', NULL, NULL, 'CAFTA', 0, 0, 20);
WITH basedata AS (
SELECT *, ROW_NUMBER () OVER(ORDER BY ID) AS OrigSort, ROW_NUMBER () OVER(PARTITION BY Empl ORDER BY ID) AS EmplSort
FROM data
),
E AS (
SELECT Empl, ECode, Earn, ROW_NUMBER () OVER(PARTITION BY Empl ORDER BY OrigSort) AS EmplSort
FROM basedata
WHERE ECode IS NOT NULL
),
D AS (
SELECT Empl, DCode, Dedn, ROW_NUMBER () OVER(PARTITION BY Empl ORDER BY OrigSort) AS EmplSort
FROM basedata
WHERE DCode IS NOT NULL
),
L AS (
SELECT Empl, LCode, Liab, ROW_NUMBER () OVER(PARTITION BY Empl ORDER BY OrigSort) AS EmplSort
FROM basedata
WHERE LCode IS NOT NULL
)
SELECT basedata.Empl, E.ECode, D.Dcode, L.LCode, E.Earn, D.Dedn, L.Liab
FROM basedata
LEFT JOIN E
ON E.Empl = basedata.Empl AND E.EmplSort = basedata.EmplSort
LEFT JOIN D
ON D.Empl = basedata.Empl AND D.EmplSort = basedata.EmplSort
LEFT JOIN L
ON L.Empl = basedata.Empl AND L.EmplSort = basedata.EmplSort
WHERE E.ECode IS NOT NULL OR D.DCode IS NOT NULL OR L.LCode IS NOT NULL
ORDER BY basedata.Empl, basedata.EmplSort
Not sure if it is what you need, but have you tried COALESCE?
SELECT Name, Class, Color, ProductNumber,
COALESCE(Class, Color, ProductNumber) AS FirstNotNull
FROM Production.Product ;
I have a solution, but it is very kludgy. If anyone has something better, that would be great.
However, an algorithm:
1) Get rownumbers for each distinct list of values in the columns
2) Join all columns based on rownumber
Example:
select Distinct ECode into #Ecode from source_table order by rowid;
select Distinct DCode into #Dcode from source_table order by rowid;
select Distinct LCode into #Lcode from source_table order by rowid;
select Distinct Earn into #Earn from source_table order by rowid;
select Distinct Dedn into #Dedn from source_table order by rowid;
select Distinct Liab into #Liab from source_table order by rowid;
select b.ECode, c.DCode, d.LCode, e.Earn, f.Dedn, g.Liab
from source_table a -- Note: a source for row numbers that will be >= the below
left outer join #Ecode b on a.rowid = b.rowid
left outer join #DCode c on a.rowid = c.rowid
left outer join #LCode d on a.rowid = d.rowid
left outer join #Earn e on a.rowid = e.rowid
left outer join #Dedn f on a.rowid = f.rowid
left outer join #Liab g on a.rowid = g.rowid
where
b.ecode is not null or
c.dcode is not null or
d.lcode is not null or
e.earn is not null or
f.dedn is not null or
g.liab is not null;
I didn't include Empl, since I don't know what role you want it to play. If this is all true for a given Empl, then you could just add it, join on it, and carry it through.
I don't like this solution at all, so hopefully someone else will come up with something more elegant.
Best,
David

SQL Running Subtraction

Briefly, the business scenario: the table is for a goods receipt. The first few rows are the expected lines, each with a PurchaseOrder (PO). We then receive each expected line physically, and at that point the quantities may differ, e.g. because of damage or a short shipment. So we maintain a status per received line (e.g. OK, DAMAGE), and the short quantity has to be calculated from the total expected quantity of each item versus the total of its received lines.
if object_id('DEV..Temp','U') is not null
drop table Temp
CREATE TABLE Temp
(
ID INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
Item VARCHAR(32),
PO VARCHAR(32) NULL,
ExpectedQty INT NULL,
ReceivedQty INT NULL,
[STATUS] VARCHAR(32) NULL,
BoxName VARCHAR(32) NULL
)
The first few rows (those with a PO) are the expected lines; the remaining rows are the received lines:
INSERT INTO TEMP (Item,PO,ExpectedQty,ReceivedQty,[STATUS],BoxName)
SELECT 'ITEM01','PO-01','30',NULL,NULL,NULL UNION ALL
SELECT 'ITEM01','PO-02','20',NULL,NULL,NULL UNION ALL
SELECT 'ITEM02','PO-01','40',NULL,NULL,NULL UNION ALL
SELECT 'ITEM03','PO-01','50',NULL,NULL,NULL UNION ALL
SELECT 'ITEM03','PO-02','30',NULL,NULL,NULL UNION ALL
SELECT 'ITEM03','PO-03','20',NULL,NULL,NULL UNION ALL
SELECT 'ITEM04','PO-01','30',NULL,NULL,NULL UNION ALL
SELECT 'ITEM01',NULL,NULL,'20','OK','box01' UNION ALL
SELECT 'ITEM01',NULL,NULL,'25','OK','box02' UNION ALL
SELECT 'ITEM01',NULL,NULL,'5','DAMAGE','box03' UNION ALL
SELECT 'ITEM02',NULL,NULL,'38','OK','box04' UNION ALL
SELECT 'ITEM02',NULL,NULL,'2','DAMAGE','box05' UNION ALL
SELECT 'ITEM03',NULL,NULL,'30','OK','box06' UNION ALL
SELECT 'ITEM03',NULL,NULL,'30','OK','box07' UNION ALL
SELECT 'ITEM03',NULL,NULL,'30','OK','box08' UNION ALL
SELECT 'ITEM03',NULL,NULL,'10','DAMAGE','box09' UNION ALL
SELECT 'ITEM04',NULL,NULL,'25','OK','box10'
The table below is my expected result based on the above data; this is how I need to show it, so I would appreciate an appropriate query for it.
(Note: the first row is blank because it is actually my table header. :)
SELECT '' as 'ITEM', '' as 'PO#', '' as 'ExpectedQty',
'' as 'ReceivedQty','' as 'DamageQty' ,'' as 'ShortQty' UNION ALL
SELECT 'ITEM01','PO-01','30','30','0' ,'0' UNION ALL
SELECT 'ITEM01','PO-02','20','15','5' ,'0' UNION ALL
SELECT 'ITEM02','PO-01','40','38','2' ,'0' UNION ALL
SELECT 'ITEM03','PO-01','50','50','0' ,'0' UNION ALL
SELECT 'ITEM03','PO-02','30','30','0' ,'0' UNION ALL
SELECT 'ITEM03','PO-03','20','10','10','0' UNION ALL
SELECT 'ITEM04','PO-01','30','25','0' ,'5'
Note: we never receive more than was expected.
The solution should work on SQL Server 2000.
You should reconsider how you store this data. Separate Expected and Received+Damaged in different tables (you have many unused (null) cells). This way any query should become more readable.
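A sketch of that split, with hypothetical table names (receipt lines reference expected lines only by Item, as in the original data):

```sql
CREATE TABLE ExpectedLine
(
    Item        VARCHAR(32) NOT NULL,
    PO          VARCHAR(32) NOT NULL,
    ExpectedQty INT         NOT NULL,
    PRIMARY KEY (Item, PO)
);

CREATE TABLE ReceivedLine
(
    Item        VARCHAR(32) NOT NULL,
    BoxName     VARCHAR(32) NOT NULL,
    ReceivedQty INT         NOT NULL,
    [STATUS]    VARCHAR(32) NOT NULL  -- e.g. 'OK' or 'DAMAGE'
);
```

With this layout none of the columns need to be nullable, and the expected-versus-received comparison becomes a straightforward join on Item.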
I think what you're trying to do can be achieved more easily with a stored procedure.
Anyway, try this query:
SELECT Item, PO, ExpectedQty,
CASE WHEN [rec-consumed] > 0 THEN ExpectedQty
ELSE CASE WHEN [rec-consumed] + ExpectedQty > 0
THEN [rec-consumed] + ExpectedQty
ELSE 0
END
END ReceivedQty,
CASE WHEN [rec-consumed] < 0
THEN CASE WHEN DamageQty >= -1*[rec-consumed]
THEN -1*[rec-consumed]
ELSE DamageQty
END
ELSE 0
END DamageQty,
CASE WHEN [rec_damage-consumed] < 0
THEN DamageQty - [rec-consumed]
ELSE 0
END ShortQty
FROM (
select t1.Item,
t1.PO,
t1.ExpectedQty,
st.sum_ReceivedQty_OK
- (sum(COALESCE(t2.ExpectedQty,0))
+t1.ExpectedQty)
[rec-consumed],
st.sum_ReceivedQty_OK + st.sum_ReceivedQty_DAMAGE
- (sum(COALESCE(t2.ExpectedQty,0))
+t1.ExpectedQty)
[rec_damage-consumed],
st.sum_ReceivedQty_DAMAGE DamageQty
from Temp t1
left join Temp t2 on t1.Item = t2.Item
and t1.PO > t2.PO
and t2.PO is not null
join (select Item
, sum(CASE WHEN status = 'OK' THEN ReceivedQty ELSE 0 END)
sum_ReceivedQty_OK
, sum(CASE WHEN status != 'OK' THEN ReceivedQty ELSE 0 END)
sum_ReceivedQty_DAMAGE
from Temp where PO is null
group by Item) st on t1.Item = st.Item
where t1.PO is not null
group by t1.Item, t1.PO, t1.ExpectedQty,
st.sum_ReceivedQty_OK,
st.sum_ReceivedQty_DAMAGE
) a
order by Item, PO