IIF in postgres - postgresql

I am attempting to convert an MS Access query to a Postgres statement so I can use it in SSRS. It seems to work great except for the IIF statement:
SELECT labor_sort_1.ncm_id
,IIf(labor_sort_1.sortby_employeeid = 3721
, ((labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 29 * labor_sort_1.number_of_ops)
, IIf(labor_sort_1.sortby_employeeid = 3722
, ((labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 24 * labor_sort_1.number_of_ops)
, IIf(labor_sort_1.sortby_employeeid = 3755, ((labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 24 * labor_sort_1.number_of_ops)
, ((labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 17 * labor_sort_1.number_of_ops)))) AS labor_cost
FROM ...
It returns the following error message:
function iif(boolean, interval, interval) does not exist
How would I solve this problem?

You'll need to switch the logic over to a CASE expression. CASE expressions are standard across most RDBMSs, so they're worth learning. In your case (pun intended) it would translate to:
CASE
WHEN labor_sort_1.sortby_employeeid = 3721
THEN (labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 29 * labor_sort_1.number_of_ops
WHEN labor_sort_1.sortby_employeeid = 3722
THEN (labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 24 * labor_sort_1.number_of_ops
WHEN labor_sort_1.sortby_employeeid = 3755
THEN (labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 24 * labor_sort_1.number_of_ops
ELSE
(labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 17 * labor_sort_1.number_of_ops
END AS labor_cost
Which is a lot cleaner looking, since you don't have to monkey with nested iif() calls and all that, and should you need to add more employee IDs to the list of hard-coded labor costs, it's no biggie.
You might also find it advantageous to use the IN condition instead, so you only need two WHEN clauses:
CASE
WHEN labor_sort_1.sortby_employeeid = 3721
THEN (labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 29 * labor_sort_1.number_of_ops
WHEN labor_sort_1.sortby_employeeid IN (3722, 3755)
THEN (labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 24 * labor_sort_1.number_of_ops
ELSE
(labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 17 * labor_sort_1.number_of_ops
END AS labor_cost
Also, you could move the CASE expression into the equation so the logic only needs to determine whatever number you wish to multiply by:
(labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime)
* 24
* CASE
WHEN labor_sort_1.sortby_employeeid = 3721 THEN 29
WHEN labor_sort_1.sortby_employeeid IN (3722,3755) THEN 24
ELSE 17
END
* labor_sort_1.number_of_ops AS labor_cost

Same as @Daniel's answer, but generalized to any data type.
CREATE or replace FUNCTION iIF(
condition boolean, -- IF condition
true_result anyelement, -- THEN
false_result anyelement -- ELSE
) RETURNS anyelement AS $f$
SELECT CASE WHEN condition THEN true_result ELSE false_result END
$f$ LANGUAGE SQL IMMUTABLE;
SELECT iif(0=1,1,2);
SELECT iif(0=0,'Hello'::text,'Bye'); -- need to say that string is text.
This is good when you are building a public snippets library.
A NOTE about IMMUTABLE and "PL/pgSQL vs SQL".
The IMMUTABLE clause is very important for code snippets like this because, as the Guide says, it "allows the optimizer to pre-evaluate the function when a query calls it with constant arguments".
PL/pgSQL is the preferred language in general, except for "pure SQL" functions like this one. For JIT optimization (and sometimes for parallelism) SQL functions can be optimized better: the body can be inlined, which is something like copying and pasting a small piece of code instead of paying for a function call.
Important conclusion: after optimization this function is as fast as the CASE in @JNevill's answer; it compiles to (exactly) the same internal representation. So, although iif() is not standard for PostgreSQL, it can be a standard for your projects through a centralized and reusable "library of snippets" such as pg_pubLib.
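To see the effect, one option (a sketch; the exact plan text depends on your PostgreSQL version and settings) is to look at the plan with EXPLAIN:
-- With the IMMUTABLE SQL-language iif() above, the body is inlined and the
-- planner folds the whole expression to a constant, so the plan's output is just 2.
EXPLAIN (VERBOSE, COSTS OFF) SELECT iif(0 = 1, 1, 2);
-- A non-IMMUTABLE (default VOLATILE) version is not pre-evaluated, so its plan
-- still shows the iif(...) call.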

I know this has been sitting around for a while but another option is to create a user defined function. If you happen to stumble upon this in your internet searches, this may be a solution for you.
CREATE FUNCTION IIF(
condition boolean, true_result TEXT, false_result TEXT
) RETURNS TEXT LANGUAGE plpgsql AS $$
BEGIN
IF condition THEN
RETURN true_result;
ELSE
RETURN false_result;
END IF;
END
$$;
SELECT IIF(2=1,'dan the man','false foobar');
Should TEXT not tickle your fancy, then try function overloading.
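For example (a sketch that just follows the same pattern as the TEXT version above; the INTEGER overload shown here is only an illustration), you can define the same function name for other argument types:
CREATE FUNCTION IIF(
condition boolean, true_result INTEGER, false_result INTEGER
) RETURNS INTEGER LANGUAGE plpgsql AS $$
BEGIN
IF condition THEN
RETURN true_result;
ELSE
RETURN false_result;
END IF;
END
$$;
SELECT IIF(2=1, 42, 7); -- returns 7
PostgreSQL picks the overload whose parameter types match the arguments, so the TEXT and INTEGER versions can coexist.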

Related

Is it possible to change the = operator via a variable inside a function in PostgreSQL?

I don't know if this is possible, but this is the question: I am trying to change the = operator to > when paramvalue = 0.
AS $BODY$
declare
operator text;
begin
operator:='=';
if (paramvalue = 0) then
operator:='>';
end if;
select * from tablaname where id #operator 20
thanks!
I think overloading an operator is more complex than what you need for this behavior. You could try using a CASE expression instead:
SELECT *
FROM tablename
WHERE CASE WHEN paramvalue = 0
THEN id > 20
ELSE id = paramvalue
END
;
If you really want to overload an operator, I suggest taking a look at the Postgres documentation on CREATE OPERATOR.
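Purely as an illustration of the syntax (a sketch: the operator name |>| and the helper function gt_abs are made up for this example, and on PostgreSQL versions before 11 the CREATE OPERATOR parameter is spelled PROCEDURE rather than FUNCTION):
CREATE FUNCTION gt_abs(a numeric, b numeric) RETURNS boolean AS $$
SELECT abs(a) > abs(b);
$$ LANGUAGE SQL IMMUTABLE;
CREATE OPERATOR |>| (
LEFTARG = numeric,
RIGHTARG = numeric,
FUNCTION = gt_abs
);
SELECT -5 |>| 3; -- true: compares absolute values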
REMARK: NOT TESTED!!
In your scenario the solution could be something like:
.....
if (paramvalue = 0) then
select * from tablaname where id > 20
Else
select * from tablaname where id = 20
end if;
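Not tested either, but here is a sketch of how that IF/ELSE might look inside a complete function (the function name get_rows and the SETOF tablaname return type are assumptions; adjust them to your real signature):
CREATE OR REPLACE FUNCTION get_rows(paramvalue integer)
RETURNS SETOF tablaname AS $BODY$
BEGIN
IF paramvalue = 0 THEN
RETURN QUERY SELECT * FROM tablaname WHERE id > 20;
ELSE
RETURN QUERY SELECT * FROM tablaname WHERE id = 20;
END IF;
END
$BODY$ LANGUAGE plpgsql;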
Please read this: 9.16. Conditional Expressions.
And the condition would rather be written with a CASE expression:
SELECT * FROM tablaname
WHERE CASE WHEN paramvalue = 0 THEN id > 20
ELSE id = 20
END;

Show Only 2 Decimal Places in SQL Server

I've got some columns stored as INT values that I am doing some addition and division on, and I would like to display the results limited to 2 digits after the decimal point. I've tried different combinations of DECIMAL / NUMERIC / ROUND, but I can't get the solution.
Could anyone offer any advice on how to get the desired solution?
Query:
SELECT cc.code AS [CountyID], RTRIM(LTRIM(cc.[description])) AS [CountyName],
SUM(ISNULL(ps.push_count,0)) AS [CountyPushCounts],
SUM(ISNULL(ps.push_unique_count,0)) AS [UniquePushCount],
SUM(ISNULL(ps.error_count,0)) AS [PushErrorCount],
SUM(ISNULL(ps.warning_count,0)) AS [PushWarningCount],
(CAST(SUM(ISNULL(ps.push_unique_count,0)) AS DECIMAL(15,2)) / CAST(SUM(ISNULL(ps.push_count,0)) AS DECIMAL(15,2)) * 100.0) AS [Unique Push Per Day] ,
((CAST(SUM(ISNULL(ps.error_count,0)) AS DECIMAL(15,2)) + CAST(SUM(ISNULL(ps.warning_count,0)) AS DECIMAL(15,2))) / CAST(SUM(ISNULL(ps.push_count,0)) AS DECIMAL(15,2)) * 100.0) AS [Data Error Rate]
FROM dbo.push_stats AS [ps]
INNER JOIN CCIS.dbo.county_codes AS [cc] ON ps.county_code = cc.code
WHERE DATEPART(YEAR,ps.ldstat_date) = 2017
AND DATEPART(MONTH,ps.ldstat_date) = 3
GROUP BY cc.code, cc.[description]
ORDER BY cc.[description];
And my data set looks as follows:
CountyID CountyName PushCounts UniquePushCount PushErrorCount PushWarningCount [Unique Push Per Day] [Data Error Rate]
1 ALACHUA 210422 77046 0 39 36.61499273 0.018534184
2 BAKER 8099 5306 0 71 65.51426102 0.876651438
3 BAY 3178214 434893 117 2793 13.68356568 0.091560857
4 BRADFORD 17654 12119 0 131 68.64733205 0.742041464
I think this will do it:
SELECT cc.code AS [CountyID], RTRIM(LTRIM(cc.[description])) AS [CountyName],
SUM(ISNULL(ps.push_count,0)) AS [CountyPushCounts],
SUM(ISNULL(ps.push_unique_count,0)) AS [UniquePushCount],
SUM(ISNULL(ps.error_count,0)) AS [PushErrorCount],
SUM(ISNULL(ps.warning_count,0)) AS [PushWarningCount],
CAST((SUM(ISNULL(ps.push_unique_count,0)) * 100.00) / SUM(ISNULL(ps.push_count,0)) AS DECIMAL(15,2)) AS [Unique Push Per Day] ,
CAST((SUM(ISNULL(ps.error_count,0)) + SUM(ISNULL(ps.warning_count,0))) * 100.00 / SUM(ISNULL(ps.push_count,0)) AS DECIMAL(15,2)) AS [Data Error Rate]
FROM dbo.push_stats AS [ps]
INNER JOIN CCIS.dbo.county_codes AS [cc] ON ps.county_code = cc.code
WHERE DATEPART(YEAR,ps.ldstat_date) = 2017
AND DATEPART(MONTH,ps.ldstat_date) = 3
GROUP BY cc.code, cc.[description]
ORDER BY cc.[description];
There are two main points here:
With the division operation, get at least one term to a non-integer numeric type (DECIMAL, FLOAT, etc.) before the division happens, so that it doesn't do integer division and truncate the decimal portion of the result. You're okay as long as either term is a non-integer numeric type, and you can accomplish this simply by moving the * 100.00 earlier.
You want to allow much greater than 2 decimal places for the internal calculations, and only set your output format at the very end. Rounding or casting to limited types too soon in an expression can change intermediate values in small ways that are exaggerated in the final results. This means you only want one big CAST() operation around the whole set.
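To illustrate both points with the ALACHUA row from the sample data (77046 unique pushes out of 210422), as standalone SELECTs rather than the aggregate query:
SELECT 77046 / 210422;                                 -- 0: integer division truncates
SELECT 77046 * 100.00 / 210422;                        -- 36.614992...: multiply by 100.00 before dividing
SELECT CAST(77046 * 100.00 / 210422 AS DECIMAL(15,2)); -- 36.61: format only at the very end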
You have to cast the final result, not just the individual numerator and denominator
DECLARE @Numerator INTEGER
DECLARE @Denominator INTEGER
SET @Numerator = 10
SET @Denominator = 3;
-- This will produce 3.33333333
SELECT CAST(@Numerator AS DECIMAL(5,2))/CAST(@Denominator AS DECIMAL(5,2))
-- This will give you 3.33
SELECT CAST(
CAST(@Numerator AS DECIMAL(5,2))
/CAST(@Denominator AS DECIMAL(5,2))
AS DECIMAL(5,2))

Postgresql, maintaining hierarchical data with triggers

I have adjacency list table account, with columns id, code, name, and parent_id.
To make sorting and displaying easier I added two more columns: depth and path (materialized path). I know PostgreSQL has a dedicated data type for materialized paths, but I'd like to use a more generic approach that is not specific to PostgreSQL. I also applied several rules to my design:
1) code can be up to 10 characters long
2) Max depth is 9, so a root account can have sub-accounts at most 8 levels deep.
3) Once set, parent_id is never changed, so there's no need to move a branch of the tree to another part of the tree.
4) path is an account's materialized path, which is up to 90 characters long; it is built by concatenating account codes, each right-padded to 10 characters, for example '10000______10001______'.
So, to automatically maintain depth and path columns, I created a trigger and a trigger function for the account table:
CREATE FUNCTION public.fn_account_set_hierarchy()
RETURNS TRIGGER AS $$
DECLARE d INTEGER; p CHARACTER VARYING;
BEGIN
IF TG_OP = 'INSERT' THEN
IF NEW.parent_id IS NULL THEN
NEW.depth := 1;
NEW.path := rpad(NEW.code, 10);
ELSE
BEGIN
SELECT depth, path INTO d, p
FROM public.account
WHERE id = NEW.parent_id;
NEW.depth := d + 1;
NEW.path := p || rpad(NEW.code, 10);
END;
END IF;
ELSE
IF NEW.code IS DISTINCT FROM OLD.code THEN
UPDATE public.account
SET path = OVERLAY(path PLACING rpad(NEW.code, 10)
FROM (OLD.depth - 1) * 10 + 1 FOR 10)
WHERE SUBSTRING(path FROM (OLD.depth - 1) * 10 + 1 FOR 10) =
rpad(OLD.code, 10);
END IF;
END IF;
RETURN NEW;
END$$
LANGUAGE plpgsql;
CREATE TRIGGER tg_account_set_hierarchy
BEFORE INSERT OR UPDATE ON public.account
FOR EACH ROW
EXECUTE PROCEDURE public.fn_account_set_hierarchy();
The above seems to work for INSERTs, but for UPDATEs an error is thrown: "UPDATE statement on table 'account' expected to update 1 row(s); 0 were matched." I have doubts about the "UPDATE public.account ..." part. Can someone help me correct the above trigger?
Well, in the above code, the UPDATE part updates all matching records, including the record on which the trigger was fired (a concurrency exception?). That does not work, so I had to issue two different statements:
UPDATE {0}.{1} SET path = OVERLAY(path PLACING rpad(NEW.code, 10)
FROM (OLD.depth - 1) * 10 + 1 FOR 10)
WHERE SUBSTRING(path FROM (OLD.depth - 1) * 10 + 1 FOR 10) = rpad(OLD.code, 10)
AND id <> NEW.id;
NEW.path = OVERLAY(OLD.path PLACING rpad(NEW.code, 10)
FROM (OLD.depth - 1) * 10 + 1 FOR 10);
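For reference, here is a sketch of how those two statements might slot back into the original trigger function (assembled from the code above; not independently tested):
CREATE OR REPLACE FUNCTION public.fn_account_set_hierarchy()
RETURNS TRIGGER AS $$
DECLARE d INTEGER; p CHARACTER VARYING;
BEGIN
IF TG_OP = 'INSERT' THEN
IF NEW.parent_id IS NULL THEN
NEW.depth := 1;
NEW.path := rpad(NEW.code, 10);
ELSE
SELECT depth, path INTO d, p
FROM public.account
WHERE id = NEW.parent_id;
NEW.depth := d + 1;
NEW.path := p || rpad(NEW.code, 10);
END IF;
ELSE
IF NEW.code IS DISTINCT FROM OLD.code THEN
-- update descendants, but skip the row currently being updated
UPDATE public.account
SET path = OVERLAY(path PLACING rpad(NEW.code, 10)
FROM (OLD.depth - 1) * 10 + 1 FOR 10)
WHERE SUBSTRING(path FROM (OLD.depth - 1) * 10 + 1 FOR 10) = rpad(OLD.code, 10)
AND id <> NEW.id;
-- the updated row itself gets its new path via NEW rather than a nested UPDATE
NEW.path := OVERLAY(OLD.path PLACING rpad(NEW.code, 10)
FROM (OLD.depth - 1) * 10 + 1 FOR 10);
END IF;
END IF;
RETURN NEW;
END$$
LANGUAGE plpgsql;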

Can I temporarily store a value in the Select Statement of SQL?

In my SELECT statement, I have two returned columns that are calculations. It looks like this:
SELECT
a.FormRate AS 'F-Rate'
,a.LoI AS 'Liability'
,CAST((A.FormRate * a.LoI / 100) AS DECIMAL(38,2)) 'F-Premium'
,a.EndorsementRate AS 'E-Rate'
,CAST((A.EndorsementRate * a.LoI / 100) AS DECIMAL(38,2)) 'E-Premium'
FROM tblAspecs a
Once I have those five expressions in the SELECT, I'd like to be able to total E-Premium + F-Premium into a new column. I could always do it this way:
,CAST(((A.EndorsementRate * a.Loi / 100) + (a.FormRate * a.LoI / 100)) AS DECIMAL(38,2)) 'Total Premium'
...but that just seems to be quite sloppy and bulky. Is there a way to store the individual premiums so that I can just do a simple CAST((F-Premium + E-Premium) AS DECIMAL(38,2)) 'Total Premium'?
The bulky way also doesn't keep F-Premium and E-Premium dynamic: if I ever change how they're calculated, I'd also have to change the calculation in the Total Premium column.
Using Microsoft SQL Server Management Studio 2010
Research Common Table Expressions.
It will look something like this:
;WITH [Data]
(
[FormRate],
[Liability],
[FormPremium],
[EndorsementRate],
[EndorsementPremium]
)
AS
(
SELECT
a.[FormRate],
a.[LoI] AS [Liability],
CAST((a.[FormRate] * a.[LoI] / 100) AS DECIMAL(38,2)),
a.[EndorsementRate],
CAST((a.[EndorsementRate] * a.[LoI] / 100) AS DECIMAL(38,2))
FROM
tblAspecs a
)
SELECT
[Data].*,
CAST(([Data].[FormPremium] + [Data].[EndorsementPremium]) AS DECIMAL(38,2)) 'Total Premium'
FROM
[Data]

Re-Write complex query (COS, SIN, RADIANS, ACOS) into Entity-to-SQL

The following stored procedure retrieves the nearest 500 addresses for the given latitude and longitude. Many applications use it, and it is one of the most useful queries.
Is it possible to rewrite it with Entity-to-SQL? If so, could you please point me in the right direction (I am not new to Entity-to-SQL)? Thanks in advance.
DECLARE @CntXAxis FLOAT
DECLARE @CntYAxis FLOAT
DECLARE @CntZAxis FLOAT
SET @CntXAxis = COS(RADIANS(-118.4104684)) * COS(RADIANS(34.1030032))
SET @CntYAxis = COS(RADIANS(-118.4104684)) * SIN(RADIANS(34.1030032))
SET @CntZAxis = SIN(RADIANS(-118.4104684))
SELECT TOP 500
*,
ProxDistance = 3961 * ACOS( dbo.XAxis(LAT, LONG)*@CntXAxis + dbo.YAxis(LAT, LONG)*@CntYAxis + dbo.ZAxis(LAT)*@CntZAxis)
FROM
tbl_ProviderLocation
WHERE
(3961 * ACOS( dbo.XAxis(LAT, LONG)*@CntXAxis + dbo.YAxis(LAT, LONG)*@CntYAxis + dbo.ZAxis(LAT)*@CntZAxis) <= 10)
ORDER BY
ProxDistance ASC
If you are using MS SQL Server, you can use the SqlClient functions with Entity SQL:
http://msdn.microsoft.com/en-us/library/bb399586.aspx
According to this, those functions are available for LINQ queries as well. I couldn't find an example, but it seems straightforward:
var qry = from r in mytable
          select new { Acos = SqlFunctions.ACos(r.mycolumn) };