Dynamic calculation in postgres

I am trying to do a dynamic calculation in Postgres, as follows:
drop table if exists calculation_t;
create table calculation_t(
    Id serial,
    calculation_logic varchar(50)
);
Then I inserted a calculation logic:
insert into calculation_t(calculation_logic)
values ('$1-$2');
Checking the calculation by providing the dynamic numbers 2 and 1:
select (2,1,calculation_logic::numeric) from calculation_t
throws an error: "ERROR: Can't serialize transient record types"
I am expecting to get the result '1' (because my calculation logic is '$1-$2' from table calculation_t for id 1, so it's 2-1) by doing the select
select (2,1,calculation_logic::numeric) from calculation_t
I also tried:
select replace(replace(calculation_logic, '$1', '2'), '$2', '1') from calculation_t
Result:
"2-1"
But I need the actual value of the subtraction 2-1 instead, that is 1.
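One way to get an actual numeric result is to run the stored expression as dynamic SQL from a PL/pgSQL function. This is only a minimal sketch (the function name eval_calculation is made up here), and building SQL by string substitution like this is open to SQL injection if the stored expressions are not trusted:

create or replace function eval_calculation(p1 numeric, p2 numeric, expr text)
returns numeric
language plpgsql
as $func$
declare
    result numeric;
begin
    -- substitute the placeholders, then run the expression as dynamic SQL
    execute 'select ' || replace(replace(expr, '$1', p1::text), '$2', p2::text)
    into result;
    return result;
end;
$func$;

select eval_calculation(2, 1, calculation_logic) from calculation_t;  -- 2-1 = 1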

Related

PGSQL - How to efficiently flatten key/value table [duplicate]

Does anyone know how to create crosstab queries in PostgreSQL?
For example I have the following table:
Section   Status     Count
A         Active     1
A         Inactive   2
B         Active     4
B         Inactive   5
I would like the query to return the following crosstab:
Section   Active   Inactive
A         1        2
B         4        5
Is this possible?
Install the additional module tablefunc once per database, which provides the function crosstab(). Since Postgres 9.1 you can use CREATE EXTENSION for that:
CREATE EXTENSION IF NOT EXISTS tablefunc;
Improved test case
CREATE TABLE tbl (
section text
, status text
, ct integer -- "count" is a reserved word in standard SQL
);
INSERT INTO tbl VALUES
('A', 'Active', 1), ('A', 'Inactive', 2)
, ('B', 'Active', 4), ('B', 'Inactive', 5)
, ('C', 'Inactive', 7); -- ('C', 'Active') is missing
Simple form - not fit for missing attributes
crosstab(text) with 1 input parameter:
SELECT *
FROM crosstab(
'SELECT section, status, ct
FROM tbl
ORDER BY 1,2' -- needs to be "ORDER BY 1,2" here
) AS ct ("Section" text, "Active" int, "Inactive" int);
Returns:
 Section | Active | Inactive
---------+--------+----------
 A       |      1 |        2
 B       |      4 |        5
 C       |      7 |           -- !!
No need for casting and renaming.
Note the incorrect result for C: the value 7 is filled in for the first column. Sometimes, this behavior is desirable, but not for this use case.
The simple form is also limited to exactly three columns in the provided input query: row_name, category, value. There is no room for extra columns like in the 2-parameter alternative below.
Safe form
crosstab(text, text) with 2 input parameters:
SELECT *
FROM crosstab(
'SELECT section, status, ct
FROM tbl
ORDER BY 1,2' -- could also just be "ORDER BY 1" here
, $$VALUES ('Active'::text), ('Inactive')$$
) AS ct ("Section" text, "Active" int, "Inactive" int);
Returns:
 Section | Active | Inactive
---------+--------+----------
 A       |      1 |        2
 B       |      4 |        5
 C       |        |        7  -- !!
Note the correct result for C.
The second parameter can be any query that returns one row per attribute matching the order of the column definition at the end. Often you will want to query distinct attributes from the underlying table like this:
'SELECT DISTINCT attribute FROM tbl ORDER BY 1'
That's in the manual.
Since you have to spell out all columns in a column definition list anyway (except for pre-defined crosstabN() variants), it is typically more efficient to provide a short list in a VALUES expression as demonstrated:
$$VALUES ('Active'::text), ('Inactive')$$
Or (not in the manual):
$$SELECT unnest('{Active,Inactive}'::text[])$$ -- short syntax for long lists
I used dollar quoting to make quoting easier.
You can even output columns with different data types with crosstab(text, text) - as long as the text representation of the value column is valid input for the target type. This way you might have attributes of different kind and output text, date, numeric etc. for respective attributes. There is a code example at the end of the chapter crosstab(text, text) in the manual.
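For illustration, a minimal sketch with a hypothetical attribute table (values stored as text, cast to int and date via the column definition list):

CREATE TEMP TABLE attrs (person text, attr text, val text);
INSERT INTO attrs VALUES
  ('alice', 'age' , '42')
, ('alice', 'born', '1983-01-01')
, ('bob'  , 'age' , '20');

SELECT *
FROM   crosstab(
   'SELECT person, attr, val FROM attrs ORDER BY 1'
 , $$VALUES ('age'::text), ('born')$$
   ) AS ct (person text, age int, born date);  -- '42' and '1983-01-01' are cast to int / date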
db<>fiddle here
Effect of excess input rows
Duplicate input rows for the same ("row_name", "category") combination - (section, status) in the above example - are handled differently by the two forms.
The 1-parameter form fills in available value columns from left to right. Excess values are discarded.
Earlier input rows win.
The 2-parameter form assigns each input value to its dedicated column, overwriting any previous assignment.
Later input rows win.
Typically, you don't have duplicates to begin with. But if you do, carefully adjust the sort order to your requirements - and document what's happening.
Or get fast arbitrary results if you don't care. Just be aware of the effect.
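A quick sketch of the effect, reusing the test case from above and adding one duplicate (section, status) row; the third ORDER BY column only makes the order of the duplicates deterministic for this illustration:

INSERT INTO tbl VALUES ('A', 'Active', 10);   -- duplicate of ('A', 'Active')

-- 1-parameter form: values for section A are filled in from left to right,
-- so Active = 1, Inactive = 10, and the real 'Inactive' value 2 is discarded
SELECT *
FROM   crosstab(
   'SELECT section, status, ct FROM tbl ORDER BY 1, 2, 3'
   ) AS ct ("Section" text, "Active" int, "Inactive" int);

-- 2-parameter form: each value goes to its dedicated column and the later
-- duplicate overwrites the earlier one, so Active = 10, Inactive = 2
SELECT *
FROM   crosstab(
   'SELECT section, status, ct FROM tbl ORDER BY 1, 2, 3'
 , $$VALUES ('Active'::text), ('Inactive')$$
   ) AS ct ("Section" text, "Active" int, "Inactive" int);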
Advanced examples
Pivot on Multiple Columns using Tablefunc - also demonstrating the mentioned "extra columns" (see the sketch after this list)
Dynamic alternative to pivot with CASE and GROUP BY
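For reference, a minimal sketch of those "extra columns" with crosstab(text, text), assuming a hypothetical extra column txt in tbl that is constant per section:

SELECT *
FROM   crosstab(
   'SELECT section, txt, status, ct   -- txt is a hypothetical extra column, constant per section
    FROM   tbl
    ORDER  BY 1'
 , $$VALUES ('Active'::text), ('Inactive')$$
   ) AS ct ("Section" text, "txt" text, "Active" int, "Inactive" int);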
\crosstabview in psql
Postgres 9.6 added this meta-command to its default interactive terminal psql. You can run the query you would use as the first crosstab() parameter and feed it to \crosstabview (immediately or in the next step). Like:
db=> SELECT section, status, ct FROM tbl \crosstabview
Similar result as above, but it's a representation feature on the client side exclusively. Input rows are treated slightly differently, hence ORDER BY is not required. Details for \crosstabview in the manual. There are more code examples at the bottom of that page.
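The columns to use can also be named explicitly (a small sketch, same data as above; first the vertical header, then the horizontal header, then the value column):
db=> SELECT section, status, ct FROM tbl \crosstabview section status ct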
Related answer on dba.SE by Daniel Vérité (the author of the psql feature):
How do I generate a pivoted CROSS JOIN where the resulting table definition is unknown?
You can also pivot manually with conditional aggregation:
SELECT section,
       SUM(CASE status WHEN 'Active'   THEN count ELSE 0 END) AS active,   -- pivot each status value into its own column explicitly
       SUM(CASE status WHEN 'Inactive' THEN count ELSE 0 END) AS inactive
FROM t
GROUP BY section;
You can use the crosstab() function of the additional module tablefunc - which you have to install once per database. Since PostgreSQL 9.1 you can use CREATE EXTENSION for that:
CREATE EXTENSION tablefunc;
In your case, I believe it would look something like this:
CREATE TABLE t (Section CHAR(1), Status VARCHAR(10), Count integer);
INSERT INTO t VALUES ('A', 'Active', 1);
INSERT INTO t VALUES ('A', 'Inactive', 2);
INSERT INTO t VALUES ('B', 'Active', 4);
INSERT INTO t VALUES ('B', 'Inactive', 5);
SELECT row_name AS Section,
       category_1::integer AS Active,
       category_2::integer AS Inactive
FROM crosstab('select section::text, status, count::text from t order by 1,2', 2)
AS ct (row_name text, category_1 text, category_2 text);
DB Fiddle here:
Everything works: https://dbfiddle.uk/iKCW9Uhh
Without CREATE EXTENSION tablefunc; you get this error: https://dbfiddle.uk/j8W1CMvI
ERROR: function crosstab(unknown, integer) does not exist
LINE 4: FROM crosstab('select section::text, status, count::text fro...
             ^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
Solution with JSON aggregation: json_object_agg(status, ct) builds one JSON object per section (e.g. {"Active" : 1, "Inactive" : 2} for section A), and the outer query extracts the two keys, returning NULL where a status is missing (as for section C):
CREATE TEMP TABLE t (
section text
, status text
, ct integer -- don't use "count" as column name.
);
INSERT INTO t VALUES
('A', 'Active', 1), ('A', 'Inactive', 2)
, ('B', 'Active', 4), ('B', 'Inactive', 5)
, ('C', 'Inactive', 7);
SELECT section,
       (obj ->> 'Active')::int   AS active,
       (obj ->> 'Inactive')::int AS inactive
FROM  (
   SELECT section, json_object_agg(status, ct) AS obj
   FROM   t
   GROUP  BY section
   ) x;
Sorry this isn't complete because I can't test it here, but it may get you headed in the right direction. I'm translating from something I use that makes a similar query:
select mt.section, mt1.count as Active, mt2.count as Inactive
from mytable mt
left join (select section, count from mytable where status='Active')mt1
on mt.section = mt1.section
left join (select section, count from mytable where status='Inactive')mt2
on mt.section = mt2.section
group by mt.section,
mt1.count,
mt2.count
order by mt.section asc;
The code I'm working from is:
select m.typeID, m1.highBid, m2.lowAsk, m1.highBid - m2.lowAsk as diff, 100*(m1.highBid - m2.lowAsk)/m2.lowAsk as diffPercent
from mktTrades m
left join (select typeID,MAX(price) as highBid from mktTrades where bid=1 group by typeID)m1
on m.typeID = m1.typeID
left join (select typeID,MIN(price) as lowAsk from mktTrades where bid=0 group by typeID)m2
on m1.typeID = m2.typeID
group by m.typeID,
m1.highBid,
m2.lowAsk
order by diffPercent desc;
which will return a typeID, the highest price bid and the lowest price asked and the difference between the two (a positive difference would mean something could be bought for less than it can be sold).
There's a different dynamic method that I've devised, one that employs a dynamic record type (a temp table, built via an anonymous procedure) and JSON. This may be useful for an end user who can't install the tablefunc/crosstab extension, but can still create temp tables or run anonymous procedures.
The example assumes all the crosstab columns are the same type (INTEGER), but the number of columns is data-driven and variadic. That said, JSON aggregate functions do allow for mixed data types, so there's potential for innovation via the use of embedded composite (mixed) types.
The real meat of it can be reduced down to one step if you want to statically define the record type inside the JSON recordset function (via nested SELECTs that emit a composite type).
dbfiddle.uk
https://dbfiddle.uk/N1EzugHk
The crosstab function is available in the tablefunc extension. You'll have to create this extension once per database.
CREATE EXTENSION tablefunc;
You can use the code below to create a pivot table using crosstab:
create table test_crosstab(
    section text,
    status text,
    count numeric
);

insert into test_crosstab values ('A', 'Active', 1)
, ('A', 'Inactive', 2)
, ('B', 'Active', 4)
, ('B', 'Inactive', 5);

select * from crosstab(
    'select section
           ,status
           ,count
     from test_crosstab
     order by 1,2'   -- the source query should be ordered by row_name, category
    ) as ctab ("Section" text, "Active" numeric, "Inactive" numeric);

Inserting records into table1 depending on row value in table2

For each row in table exam 'where exam.examRegulation isnull', I want to insert one corresponding row in table examRegulation and copy the column values from exam to examRegulation. Apparently the following query is too naive and must be improved:
insert into examRegulation (graduation, course, examnumber, examversion)
values (exam.graduation, exam.course, exam.examnumber, exam.examversion)
where ?? (select graduation, course, examnumber, examversion
from exam
where exam.examRegulation isnull)
Is there a way to do this in postgresql?
You may rephrase this as an INSERT INTO ... SELECT statement:
INSERT INTO examRegulation (graduation, course, examnumber, examversion)
SELECT graduation, course, examnumber, examversion
FROM exam
WHERE examRegulation IS NULL;
The VALUES clause, as the name implies, can only be used with literal values. If you need to populate an insert using query logic, then you need to use a SELECT clause.

Conversion failure varchar to int on a column cast as int

I've created a view called vReceivedEmail which includes a varchar column called RowPrimaryKeyValue. This column usually stores primary key id values, such as 567781, but every now and then has the descriptive text 'Ad hoc message'.
I've set the view up so that it only shows records that would hold the primary key values and then CAST the column as int.
SELECT CAST(Email.RowPrimaryKeyValue as int) as DetailID
FROM MessageStore.dbo.Email Email
WHERE ISNUMERIC(Email.RowPrimaryKeyValue) = 1
When I test the view, I only get records with the primary key values I expected, and when I look at the column listing for the view in the object explorer, the column is saved as the int data type.
I've then tried to apply the following WHERE clause in a separate query, referencing my saved view:
SELECT PotAcc.DetailID
FROM PotentialAccounts PotAcc
WHERE PotAcc.DetailID NOT IN (
SELECT RecEmail.DetailID
FROM vReceivedEmail RecEmail
)
...but I'm returning the following error:
Conversion failed when converting the varchar value 'Ad hoc message'
to data type int.
'Ad hoc message' is data from the column I filtered and converted to int, but I don't know how it's managing to trip up on this, since the view explicitly filters this out and converts the column to int.
Since you are on SQL Server 2012, how about using try_convert(int,RowPrimaryKeyValue) is not null instead of isnumeric()?
Your view would look like so:
select try_convert(int, RowPrimaryKeyValue) as DetailID
from MessageStore.dbo.Email Email
where try_convert(int, RowPrimaryKeyValue) is not null
In SQL Server 2012 and up: each of these will return null when the conversion fails instead of an error.
try_convert(datatype,val)
try_cast(val as datatype)
try_parse(val as datatype [using culture])
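For example, all three return NULL for the problem value instead of raising an error (a quick sketch):
SELECT TRY_CONVERT(int, 'Ad hoc message') AS a,   -- NULL
       TRY_CAST('Ad hoc message' AS int)  AS b,   -- NULL
       TRY_PARSE('Ad hoc message' AS int) AS c;   -- NULL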
Why doesn’t isnumeric() work correctly? (SQL Spackle)
Isnumeric() can be deceiving... however I'm betting this is due to SQL Server optimization: there is no guarantee that the ISNUMERIC filter is evaluated before the CAST. Though your view works, that doesn't mean the outer query guarantees that only INT is returned. Instead of using a view... use a CTE and try it. And since you are on 2012, this can all be avoided as SqlZim explained above.
WITH CTE AS(
SELECT CAST(Email.RowPrimaryKeyValue as int) as DetailID
FROM MessageStore.dbo.Email Email
WHERE ISNUMERIC(Email.RowPrimaryKeyValue) = 1)
SELECT PotAcc.DetailID
FROM PotentialAccounts PotAcc
WHERE PotAcc.DetailID NOT IN (
SELECT RecEmail.DetailID
FROM CTE RecEmail
)

unique date field postgresql default value

I have a date column which I want to be unique once populated, but want the date field to be ignored if it is not populated.
In MySQL the way this is accomplished is to set the date column to "not null" and give it a default value of '0000-00-00' - this allows all other fields in the unique index to be "checked" even if the date column is not populated yet.
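Roughly what that MySQL setup looks like (a sketch, not my actual schema):
CREATE TABLE uniq_test (
    number BIGINT NOT NULL,
    date   DATE   NOT NULL DEFAULT '0000-00-00',   -- needs a sql_mode without NO_ZERO_DATE / strict mode
    UNIQUE (number, date)
);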
This does not work in PostgreSQL because '0000-00-00' is not a valid date, so you cannot store it in a date field (this makes sense to me).
At first glance, leaving the field nullable seemed like an option, but this creates a problem:
=> create table uniq_test(NUMBER bigint not null, date DATE, UNIQUE(number, date));
CREATE TABLE
=> insert into uniq_test(number) values(1);
INSERT 0 1
=> insert into uniq_test(number) values(1);
INSERT 0 1
=> insert into uniq_test(number) values(1);
INSERT 0 1
=> insert into uniq_test(number) values(1);
INSERT 0 1
=> select * from uniq_test;
 number | date
--------+------
      1 |
      1 |
      1 |
      1 |
(4 rows)
NULL apparently "isn't equal to itself", so duplicate rows with a NULL date do not violate the unique constraint.
If I add an additional unique constraint only on the number field, it checks only number and not date and so I cannot have two numbers with different dates.
I could select a default date that is a 'valid date' (but outside working scope) to get around this, and could (in fact) get away with that for the current project, but there are actually cases I might be encountering in the next few years where it will not in fact be evident that the date is a non-real date just because it is "a long time ago" or "in the future."
The advantage the '0000-00-00' mechanic had for me was precisely that this date isn't real and therefore indicated a non-populated entry (where 'non-populated' was a valid uniqueness attribute). When I look around for solutions to this on the internet, most of what I find is "just use NULL" and "storing zeros is stupid."
TL;DR
Is there a PostgreSQL best practice for needing to include "not populated" as a possible value in a unique constraint including a date field?
Not clear what you want. This is my guess:
create table uniq_test (number bigint not null, date date);
create unique index i1 on uniq_test (number, date)
where date is not null;
create unique index i2 on uniq_test (number)
where date is null;
There will be a unique constraint for non-null dates and another one for null dates, effectively turning the (number, date) tuples into distinct values.
Check partial index
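A quick demonstration of the combined effect (the example inserts below are a sketch):
insert into uniq_test values (1, null);          -- ok
insert into uniq_test values (1, null);          -- fails: duplicate key violates i2
insert into uniq_test values (1, '2017-01-01');  -- ok
insert into uniq_test values (1, '2017-01-01');  -- fails: duplicate key violates i1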
It's not a best practice, but you can do it this way:
t=# create table so35(i int, d date);
CREATE TABLE
t=# create unique index i35 on so35(i, coalesce(d,'-infinity'));
CREATE INDEX
t=# insert into so35 (i) select 1;
INSERT 0 1
t=# insert into so35 (i) select 2;
INSERT 0 1
t=# insert into so35 (i) select 2;
ERROR: duplicate key value violates unique constraint "i35"
DETAIL: Key (i, (COALESCE(d, '-infinity'::date)))=(2, -infinity) already exists.
STATEMENT: insert into so35 (i) select 2;

Temporary Table Value into a Table-Value UDF

I was having some trouble with a SQL 2k sproc, which we moved to SQL 2k5 so we could use Table Value UDFs instead of Scalar UDFs.
This is simplified, but this is my problem.
I have a temporary table that I fill up with product information. I then pass that product information into a UDF and return the information back to my main results set. It doesn't seem to work.
Am I not allowed to pass a Temporary Table value into a CROSS APPLY'd Table Value UDF?
--CREATE AND FILL #brandInfo
SELECT sku, upc, prd_id, cp.customerPrice
FROM products p
JOIN #brandInfo b ON p.brd_id=b.brd_id
CROSS APPLY f_GetCustomerPrice(b.priceAdjustmentValue, b.priceAdjustmentAmount, p.Price) cp
--f_GetCustomerPrice uses the AdjValue, AdjAmount, and Price to calculate the user's actual price
When I put dummy values in for b.priceAdjustmentValue and b.priceAdjustmentAmount it works great. But as soon as I try to load the temp table values in it bombs.
Msg 207, Level 16, State 1, Line 140
Invalid column name 'b.priceAdjustmentValue'.
Msg 207, Level 16, State 1, Line 140
Invalid column name 'b.priceAdjustmentAmount'.
Have you tried:
--CREATE AND FILL #brandInfo
SELECT sku, upc, prd_id, cp.customerPrice
FROM products p
JOIN #brandInfo b ON p.brd_id=b.brd_id
CROSS APPLY (
    SELECT *
    FROM f_GetCustomerPrice(b.priceAdjustmentValue, b.priceAdjustmentAmount, p.Price)
) cp
--f_GetCustomerPrice uses the AdjValue, AdjAmount, and Price to calculate the user's actual price
Giving the UDF the proper context in order to resolve the column references?
EDIT:
I have built the following UDF in my local Northwind 2005 database:
CREATE FUNCTION dbo.f_GetCustomerPrice(@adjVal DECIMAL(28,9), @adjAmt DECIMAL(28,9), @price DECIMAL(28,9))
RETURNS TABLE
AS RETURN
(
    SELECT Level = 'One', AdjustValue = @adjVal,     AdjustAmount = @adjAmt,     Price = @price
    UNION
    SELECT Level = 'Two', AdjustValue = 2 * @adjVal, AdjustAmount = 2 * @adjAmt, Price = 2 * @price
)
GO
And referenced it in the following query without issue:
SELECT p.ProductID,
p.ProductName,
b.CompanyName,
f.Level
FROM Products p
JOIN Suppliers b
ON p.SupplierID = b.SupplierID
CROSS APPLY dbo.f_GetCustomerPrice(p.UnitsInStock, p.ReorderLevel, p.UnitPrice) f
Are you certain that your definition of #brandInfo has the priceAdjustmentValue and priceAdjustmentAmount columns defined on it? More importantly, if you are putting this in a stored procedure as you mentioned, does there exist a #brandInfo table already without those columns defined? I know #brandInfo is a temporary table, but if it exists at the time you attempt to create the stored procedure and it lacks the columns, the parsing engine may be getting tripped up. Oddly, if the table doesn't exist at all, the parsing engine simply glides past the missing table and creates the SP for you.
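For example, a definition along these lines (the column names and types are assumptions based on the query above) should let the parser resolve the references before the stored procedure is created:
CREATE TABLE #brandInfo (
    brd_id                INT,
    priceAdjustmentValue  DECIMAL(28,9),
    priceAdjustmentAmount DECIMAL(28,9)
);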