PostgreSQL: maintaining hierarchical data with triggers

I have an adjacency-list table, account, with the columns id, code, name, and parent_id.
To make sorting and displaying easier I added two more columns: depth and path (a materialized path). I know PostgreSQL has a dedicated data type for materialized paths, but I'd like to use a more generic approach that is not specific to PostgreSQL. I also applied several rules to my design (a sketch of the resulting table follows the list):
1) code can be up to 10 characters long
2) Maximum depth is 9, so a root account can have sub-accounts at most 8 levels deep.
3) Once set, parent_id is never changed, so there is no need to move a branch of the tree to another part of the tree.
4) path is the account's materialized path, up to 90 characters long; it is built by concatenating account codes, each right-padded to 10 characters, for example '10000______10001______' (the underscores stand in for the padding spaces).
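A minimal sketch of the table these rules describe (column names from the question; exact types and constraints are illustrative):

CREATE TABLE public.account (
    id        serial PRIMARY KEY,          -- types/constraints are illustrative
    code      varchar(10) NOT NULL,
    name      text NOT NULL,
    parent_id integer REFERENCES public.account (id),
    depth     integer,
    path      varchar(90)
);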
So, to automatically maintain depth and path columns, I created a trigger and a trigger function for the account table:
CREATE FUNCTION public.fn_account_set_hierarchy()
RETURNS TRIGGER AS $$
DECLARE
    d INTEGER;
    p CHARACTER VARYING;
BEGIN
    IF TG_OP = 'INSERT' THEN
        IF NEW.parent_id IS NULL THEN
            NEW.depth := 1;
            NEW.path := rpad(NEW.code, 10);
        ELSE
            SELECT depth, path INTO d, p
            FROM public.account
            WHERE id = NEW.parent_id;
            NEW.depth := d + 1;
            NEW.path := p || rpad(NEW.code, 10);
        END IF;
    ELSE
        IF NEW.code IS DISTINCT FROM OLD.code THEN
            UPDATE public.account
            SET path = OVERLAY(path PLACING rpad(NEW.code, 10)
                       FROM (OLD.depth - 1) * 10 + 1 FOR 10)
            WHERE SUBSTRING(path FROM (OLD.depth - 1) * 10 + 1 FOR 10) =
                  rpad(OLD.code, 10);
        END IF;
    END IF;
    RETURN NEW;
END$$
LANGUAGE plpgsql;

CREATE TRIGGER tg_account_set_hierarchy
BEFORE INSERT OR UPDATE ON public.account
FOR EACH ROW
EXECUTE PROCEDURE public.fn_account_set_hierarchy();
The above seems to work for INSERTs, but for UPDATEs an error is thrown: "UPDATE statement on table 'account' expected to update 1 row(s); 0 were matched.". I suspect the "UPDATE public.account ..." part. Can someone help me correct the trigger?

Well, in the above code the UPDATE inside the trigger also updates the very row the trigger was fired for. When the trigger's own UPDATE touches that row first, the triggering UPDATE ends up matching 0 rows, which is what the error above (it reads like a concurrency exception from the data-access layer) is complaining about. So I had to issue two different statements: one UPDATE that explicitly skips the current row, and an assignment to NEW for the current row itself:
UPDATE {0}.{1} SET path = OVERLAY(path PLACING rpad(NEW.code, 10)
FROM (OLD.depth - 1) * 10 + 1 FOR 10)
WHERE SUBSTRING(path FROM (OLD.depth - 1) * 10 + 1 FOR 10) = rpad(OLD.code, 10)
AND id <> NEW.id;
NEW.path = OVERLAY(OLD.path PLACING rpad(NEW.code, 10)
FROM (OLD.depth - 1) * 10 + 1 FOR 10);
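For reference, this is how the UPDATE branch of the trigger function reads with those two statements folded in (using public.account directly, as in the original function; the {0}.{1} above is just a schema/table placeholder):

    ELSE
        IF NEW.code IS DISTINCT FROM OLD.code THEN
            -- re-label the old code segment everywhere it occurs,
            -- except in the row the trigger is firing for
            UPDATE public.account
            SET path = OVERLAY(path PLACING rpad(NEW.code, 10)
                       FROM (OLD.depth - 1) * 10 + 1 FOR 10)
            WHERE SUBSTRING(path FROM (OLD.depth - 1) * 10 + 1 FOR 10) =
                  rpad(OLD.code, 10)
              AND id <> NEW.id;
            -- the current row is fixed through NEW instead, so the
            -- triggering UPDATE still matches it
            NEW.path := OVERLAY(OLD.path PLACING rpad(NEW.code, 10)
                        FROM (OLD.depth - 1) * 10 + 1 FOR 10);
        END IF;
    END IF;
    RETURN NEW;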

Related

PLSQL to TSQL - REGEXP

I'm trying to convert a script from PL/SQL to T-SQL and am stuck on a couple of lines:
table(cast(multiset(select level from dual connect by level <= len (regexp_replace(t.image, '[^**]+'))/2) as sys.OdciNumberList)) levels
where substr(REGEXP_SUBSTR (t.image, '[^**]+',1, levels.column_value),1,instr( REGEXP_SUBSTR (t.image, '[^**]+',1, levels.column_value),'=',1) -1)
IMAGE
Any help would be great.
Chris
For a better answer it would be good to include some sample input and desired results, especially when translating between SQL dialects. Perhaps including a PL/SQL tag would help find someone who understands both PL/SQL and T-SQL. It would also be helpful to include DDL, specifically the datatype of "Level". Again, I say this not to be critical but rather to guide you towards getting better answers here.
All that said, you can accomplish what you are trying to do in T-SQL by leveraging a tally table, an N-Grams function, and a couple of other functions, which are included at the end of this post.
regexp_replace
To replace or remove characters that match a pattern in T-SQL you can use patReplace8K. Here's an example of how to use it to replace the digits with '*':
SELECT pr.NewString
FROM samd.patReplace8K('My phone number is 555-2211','[0-9]','*') AS pr;
Returns: My phone number is ***-****
regexp_substr
Here's an example of how to extract all phone numbers from a string:
DECLARE
@string VARCHAR(8000) = 'Call me later at 222-3333 or tomorrow at 312.555.2222,
(313)555-6789, or at 1+800-555-4444 before noon. Thanks!',
@pattern VARCHAR(50) = '%[^0-9()+.-]%';
-- EXTRACTOR
SELECT ItemNumber = ROW_NUMBER() OVER (ORDER BY f.position),
ItemIndex = f.position,
ItemLength = itemLen.l,
Item = SUBSTRING(f.token, 1, itemLen.l)
FROM
(
SELECT ng.position, SUBSTRING(@string,ng.position,DATALENGTH(@string))
FROM samd.NGrams8k(@string, 1) AS ng
WHERE PATINDEX(@pattern, ng.token) < --<< this token does NOT match the pattern
ABS(SIGN(ng.position-1)-1) + --<< are you the first row? OR
PATINDEX(@pattern,SUBSTRING(@string,ng.position-1,1)) --<< always 0 for 1st row
) AS f(position, token)
CROSS APPLY (VALUES(ISNULL(NULLIF(PATINDEX(@pattern,f.token),0), --CROSS APPLY (VALUES(ISNULL(NULLIF(PATINDEX('%'+@pattern+'%',f.token),0),
DATALENGTH(@string)+2-f.position)-1)) AS itemLen(l)
WHERE itemLen.L > 6 -- this filter is more harmful to the extractor than the splitter
ORDER BY ItemNumber;
T-SQL INSTR Function
I included a T-SQL version of Oracle's INSTR function at the end of this post. Note these examples:
DECLARE
@string VARCHAR(8000) = 'AABBCC-AA123-AAXYZPDQ-AA-54321',
@search VARCHAR(8000) = '-AA',
@position INT = 1,
@occurance INT = 2;
-- 1.1. Get me the 2nd @occurance "-AA" in @string beginning at @position 1
SELECT f.* FROM samd.instr8k(@string,@search,@position,@occurance) AS f;
-- 1.2. Retrieve everything *BEFORE* the second instance of "-AA"
SELECT
ItemIndex = f.ItemIndex,
Item = SUBSTRING(@string,1,f.itemindex-1)
FROM samd.instr8k(@string,@search,@position,@occurance) AS f;
-- 1.3. Retrieve everything *AFTER* the second instance of "-AA"
SELECT
ItemIndex = MAX(f.ItemIndex),
Item = MAX(SUBSTRING(@string,f.itemindex+f.itemLength,8000))
FROM samd.instr8k(@string,@search,@position,@occurance) AS f;
regexp_replace (ADVANCED)
Here's a more complex example, leveraging ngrams8k to replace phone numbers with the text '<REMOVED>':
DECLARE
@string VARCHAR(8000) = 'Call me later at 222-3333 or tomorrow at 312.555.2222, (313)555-6789, or at 1+800-555-4444 before noon. Thanks!',
@pattern VARCHAR(50) = '%[0-9()+.-]%';
SELECT NewString = (
SELECT IIF(IsMatch=1 AND patSplit.item LIKE '%[0-9][0-9][0-9]%','<REMOVED>', patSplit.item)
FROM
(
SELECT 1, i.Idx, SUBSTRING(@string,1,i.Idx), CAST(0 AS BIT)
FROM (VALUES(PATINDEX(@pattern,@string)-1)) AS i(Idx) --FROM (VALUES(PATINDEX('%'+@pattern+'%',@string)-1)) AS i(Idx)
WHERE SUBSTRING(@string,1,1) NOT LIKE @pattern
UNION ALL
SELECT r.RN,
itemLength = LEAD(r.RN,1,DATALENGTH(@string)+1) OVER (ORDER BY r.RN)-r.RN,
item = SUBSTRING(@string,r.RN,
LEAD(r.RN,1,DATALENGTH(@string)+1) OVER (ORDER BY r.RN)-r.RN),
isMatch = ABS(t.p-2+1)
FROM core.rangeAB(1,DATALENGTH(@string),1,1) AS r
CROSS APPLY (VALUES (
CAST(PATINDEX(@pattern,SUBSTRING(@string,r.RN,1)) AS BIT),
CAST(PATINDEX(@pattern,SUBSTRING(@string,r.RN-1,1)) AS BIT),
SUBSTRING(@string,r.RN,r.Op+1))) AS t(c,p,s)
WHERE t.c^t.p = 1
) AS patSplit(ItemIndex, ItemLength, Item, IsMatch)
FOR XML PATH(''), TYPE).value('.','varchar(8000)');
Returns:
Call me later at <REMOVED> or tomorrow at <REMOVED>, <REMOVED>, or at <REMOVED> before noon. Thanks!
CREATE FUNCTION core.rangeAB
(
@Low BIGINT, -- (start) Lowest number in the set
@High BIGINT, -- (stop) Highest number in the set
@Gap BIGINT, -- (step) Difference between each number in the set
@Row1 BIT -- Base: 0 or 1; should RN begin with 0 or 1?
)
/****************************************************************************************
[Purpose]:
Creates a lazy, in-memory, forward-ordered sequence of up to 531,441,000,000 integers
starting with #Low and ending with #High (inclusive). RangeAB is a pure, 100% set-based
alternative to solving SQL problems using iterative methods such as loops, cursors and
recursive CTEs. RangeAB is based on Itzik Ben-Gan's getnums function for producing a
sequence of integers and uses logic from Jeff Moden's fnTally function which includes a
parameter for determining if the "row-number" (RN) should begin with 0 or 1.
I wanted to use the name "Range" because it functions and performs almost identically to
the Range function built into Python and Clojure. RANGE is a reserved SQL keyword so I
went with "RangeAB". Functions/Algorithms developed using rangeAB can be easilty ported
over to Python, Clojure or any other programming language that leverages a lazy sequence.
The two major differences between RangeAB and the Python/Clojure versions are:
1. RangeAB is *Inclusive* where the other two are *Exclusive". range(0,3) in Python and
Clojure return [0 1 2], core.rangeAB(0,3) returns [0 1 2 3].
2. RangeAB has a fourth Parameter (#Row1) to determine if RN should begin with 0 or 1.
[Author]:
Alan Burstein
[Compatibility]:
SQL Server 2008+
[Syntax]:
SELECT r.RN, r.OP, r.N1, r.N2
FROM core.rangeAB(#Low,#High,#Gap,#Row1) AS r;
[Parameters]:
#Low = BIGINT; represents the lowest value for N1.
#High = BIGINT; represents the highest value for N1.
#Gap = BIGINT; represents how much N1 and N2 will increase each row. #Gap is also the
difference between N1 and N2.
#Row1 = BIT; represents the base (first) value of RN. When #Row1 = 0, RN begins with 0,
when #row = 1 then RN begins with 1.
[Returns]:
Inline Table Valued Function returns:
RN = BIGINT; a row number that works just like T-SQL ROW_NUMBER() except that it can
start at 0 or 1 which is dictated by #Row1. If you need the numbers:
(0 or 1) through #High, then use RN as your "N" value, ((#Row1=0 for 0, #Row1=1),
otherwise use N1.
OP = BIGINT; returns the "finite opposite" of RN. When RN begins with 0 the first number
in the set will be 0 for RN, the last number in will be 0 for OP. When returning the
numbers 1 to 10, 1 to 10 is retrurned in ascending order for RN and in descending
order for OP.
Given the Numbers 1 to 3, 3 is the opposite of 1, 2 the opposite of 2, and 1 is the
opposite of 3. Given the numbers -1 to 2, the opposite of -1 is 2, the opposite of 0
is 1, and the opposite of 1 is 0.
The best practie is to only use OP when #Gap > 1; use core.O instead. Doing so will
improve performance by 1-2% (not huge but every little bit counts)
N1 = BIGINT; This is the "N" in your tally table/numbers function. this is your *Lazy*
sequence of numbers starting at #Low and incrementing by #Gap until the next number
in the sequence is greater than #High.
N2 = BIGINT; a lazy sequence of numbers starting #Low+#Gap and incrementing by #Gap. N2
will always be greater than N1 by #Gap. N2 can also be thought of as:
LEAD(N1,1,N1+#Gap) OVER (ORDER BY RN)
[Dependencies]:
N/A
[Developer Notes]:
1. core.rangeAB returns one billion rows in exactly 90 seconds on my laptop:
4X 2.7GHz CPU's, 32 GB - multiple versions of SQL Server (2005-2019)
2. The lowest and highest possible numbers returned are whatever is allowable by a
bigint. The function, however, returns no more than 531,441,000,000 rows (8100^3).
3. #Gap does not affect RN, RN will begin at #Row1 and increase by 1 until the last row
unless its used in a subquery where a filter is applied to RN.
4. #Gap must be greater than 0 or the function will not return any rows.
5. Keep in mind that when #Row1 is 0 then the highest RN value (ROWNUMBER) will be the
number of rows returned minus 1
6. If you only need is a sequential set beginning at 0 or 1 then, for best performance
use the RN column. Use N1 and/or N2 when you need to begin your sequence at any
number other than 0 or 1 or if you need a gap between your sequence of numbers.
7. Although #Gap is a bigint it must be a positive integer or the function will
not return any rows.
8. The function will not return any rows when one of the following conditions are true:
* any of the input parameters are NULL
* #High is less than #Low
* #Gap is not greater than 0
To force the function to return all NULLs instead of not returning anything you can
add the following code to the end of the query:
UNION ALL
SELECT NULL, NULL, NULL, NULL
WHERE NOT (#High&#Low&#Gap&#Row1 IS NOT NULL AND #High >= #Low AND #Gap > 0)
This code was excluded as it adds a ~5% performance penalty.
9. There is no performance penalty for sorting by RN ASC; there is a large performance
penalty, however for sorting in descending order. If you need a descending sort the
use OP in place of RN then sort by rn ASC.
10. When setting the #Row1 to 0 and sorting by RN you will see that the 0 is added via
MERGE JOIN concatination. Under the hood the function is essentially concatinating
but, because it's using a MERGE JOIN operator instead of concatination the cost
estimations are needlessly high. You can circumvent this problem by changing:
ORDER BY core.rangeAB.RN to: ORDER BY ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
[Examples]:
-----------------------------------------------------------------------------------------
[Revision History]:
Rev 00 - 20140518 - Initial Development - AJB
Rev 05 - 20191122 - Developed this "core" version for open source distribution;
updated notes and did some final code clean-up
*****************************************************************************************/
RETURNS TABLE WITH SCHEMABINDING AS RETURN
WITH
L1(N) AS
(
SELECT 1
FROM (VALUES
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0)) T(N) -- 90 values
),
L2(N) AS (SELECT 1 FROM L1 a CROSS JOIN L1 b CROSS JOIN L1 c),
iTally(RN) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 1)) FROM L2 a CROSS JOIN L2 b)
SELECT r.RN, r.OP, r.N1, r.N2
FROM
(
SELECT
RN = 0,
OP = (@High-@Low)/@Gap,
N1 = @Low,
N2 = @Gap+@Low
WHERE @Row1 = 0
UNION ALL -- (@High-@Low)/@Gap+1:
SELECT TOP (ABS((ISNULL(@High,0)-ISNULL(@Low,0))/ISNULL(@Gap,0)+ISNULL(@Row1,1)))
RN = i.RN,
OP = (@High-@Low)/@Gap+(2*@Row1)-i.RN,
N1 = (i.rn-@Row1)*@Gap+@Low,
N2 = (i.rn-(@Row1-1))*@Gap+@Low
FROM iTally AS i
ORDER BY i.RN
) AS r
WHERE @High&@Low&@Gap&@Row1 IS NOT NULL AND @High >= @Low
AND @Gap > 0;
GO
CREATE FUNCTION samd.ngrams8k
(
@String VARCHAR(8000), -- Input string
@N INT -- requested token size
)
/*****************************************************************************************
[Purpose]:
A character-level N-Grams function that outputs a contiguous stream of #N-sized tokens
based on an input string (#String). Accepts strings up to 8000 varchar characters long.
For more information about N-Grams see: http://en.wikipedia.org/wiki/N-gram.
[Author]:
Alan Burstein
[Compatibility]:
SQL Server 2008+, Azure SQL Database
[Syntax]:
--===== Autonomous
SELECT ng.Position, ng.Token
FROM samd.ngrams8k(#String,#N) AS ng;
--===== Against a table using APPLY
SELECT s.SomeID, ng.Position, ng.Token
FROM dbo.SomeTable AS s
CROSS APPLY samd.ngrams8k(s.SomeValue,#N) AS ng;
[Parameters]:
#String = The input string to split into tokens.
#N = The size of each token returned.
[Returns]:
Position = BIGINT; the position of the token in the input string
token = VARCHAR(8000); a #N-sized character-level N-Gram token
[Dependencies]:
1. core.rangeAB (iTVF)
[Developer Notes]:
1. ngrams8k is not case sensitive;
2. Many functions that use ngrams8k will see a huge performance gain when the optimizer
creates a parallel execution plan. One way to get a parallel query plan (if the
optimizer does not choose one) is to use make_parallel by Adam Machanic which can be
found here:
sqlblog.com/blogs/adam_machanic/archive/2013/07/11/next-level-parallel-plan-porcing.aspx
3. When #N is less than 1 or greater than the datalength of the input string then no
tokens (rows) are returned. If either #String or #N are NULL no rows are returned.
This is a debatable topic but the thinking behind this decision is that: because you
can't split 'xxx' into 4-grams, you can't split a NULL value into unigrams and you
can't turn anything into NULL-grams, no rows should be returned.
For people who would prefer that a NULL input forces the function to return a single
NULL output you could add this code to the end of the function:
UNION ALL
SELECT 1, NULL
WHERE NOT(#N > 0 AND #N <= DATALENGTH(#String)) OR (#N IS NULL OR #String IS NULL)
4. ngrams8k is deterministic. For more about deterministic functions see:
https://msdn.microsoft.com/en-us/library/ms178091.aspx
[Examples]:
--===== 1. Split the string, "abcd" into unigrams, bigrams and trigrams
SELECT ng.Position, ng.Token FROM samd.ngrams8k('abcd',1) AS ng; -- unigrams (#N=1)
SELECT ng.Position, ng.Token FROM samd.ngrams8k('abcd',2) AS ng; -- bigrams (#N=2)
SELECT ng.Position, ng.Token FROM samd.ngrams8k('abcd',3) AS ng; -- trigrams (#N=3)
[Revision History]:
------------------------------------------------------------------------------------------
Rev 00 - 20140310 - Initial Development - Alan Burstein
Rev 01 - 20150522 - Removed DQS N-Grams functionality, improved iTally logic. Also Added
conversion to bigint in the TOP logic to remove implicit conversion
to bigint - Alan Burstein
Rev 05 - 20171228 - Small simplification; changed:
(ABS(CONVERT(BIGINT,(DATALENGTH(ISNULL(#String,''))-(ISNULL(#N,1)-1)),0)))
to:
(ABS(CONVERT(BIGINT,(DATALENGTH(ISNULL(#String,''))+1-ISNULL(#N,1)),0)))
Rev 06 - 20180612 - Using CHECKSUM(N) in the to convert N in the token output instead of
using (CAST N as int). CHECKSUM removes the need to convert to int.
Rev 07 - 20180612 - re-designed to: Use core.rangeAB - Alan Burstein
*****************************************************************************************/
RETURNS TABLE WITH SCHEMABINDING AS RETURN
SELECT
Position = r.RN,
Token = SUBSTRING(@String,CHECKSUM(r.RN),@N)
FROM core.rangeAB(1,LEN(@String)+1-@N,1,1) AS r
WHERE @N > 0 AND @N <= LEN(@String);
GO
CREATE FUNCTION samd.patReplace8K
(
@string VARCHAR(8000),
@pattern VARCHAR(50),
@replace VARCHAR(20)
)
/*****************************************************************************************
[Purpose]:
Given a string (#string), a pattern (#pattern), and a replacement character (#replace)
patReplace8K will replace any character in #string that matches the #Pattern parameter
with the character, #replace.
[Author]:
Alan Burstein
[Compatibility]:
SQL Server 2008+
[Syntax]:
--===== Basic Syntax Example
SELECT pr.NewString
FROM samd.patReplace8K(#String,#Pattern,#Replace) AS pr;
[Developer Notes]:
1. Required SQL Server 2008+
2. #Pattern IS case sensitive but can be easily modified to make it case insensitive
3. There is no need to include the "%" before and/or after your pattern since since we
are evaluating each character individually
4. Certain special characters, such as "$" and "%" need to be escaped with a "/"
like so: [/$/%]
[Examples]:
--===== 1. Replace numeric characters with a "*"
SELECT pr.NewString
FROM samd.patReplace8K('My phone number is 555-2211','[0-9]','*') AS pr;
[Revision History]:
Rev 00 - 10/27/2014 Initial Development - Alan Burstein
Rev 01 - 10/29/2014 Mar 2007 - Alan Burstein
- Redesigned based on the dbo.STRIP_NUM_EE by Eirikur Eiriksson
(see: http://www.sqlservercentral.com/Forums/Topic1585850-391-2.aspx)
- change how the cte tally table is created
- put the include/exclude logic in a CASE statement instead of a WHERE clause
- Added Latin1_General_BIN Colation
- Add code to use the pattern as a parameter.
Rev 02 - 20141106
- Added final performane enhancement (more cudo's to Eirikur Eiriksson)
- Put 0 = PATINDEX filter logic into the WHERE clause
Rev 03 - 20150516
- Updated to deal with special XML characters
Rev 04 - 20170320
- changed #replace from char(1) to varchar(1) to address how spaces are handled
Rev 05 - Re-write using samd.NGrams
*****************************************************************************************/
RETURNS TABLE WITH SCHEMABINDING AS RETURN
SELECT newString =
(
SELECT CASE WHEN @string = CAST('' AS VARCHAR(8000)) THEN CAST('' AS VARCHAR(8000))
WHEN @pattern+@replace+@string IS NOT NULL THEN
CASE WHEN PATINDEX(@pattern,token COLLATE Latin1_General_BIN)=0
THEN ng.token ELSE @replace END END
FROM samd.NGrams8K(@string, 1) AS ng
ORDER BY ng.position
FOR XML PATH(''),TYPE
).value('text()[1]', 'VARCHAR(8000)');
GO
CREATE FUNCTION samd.Instr8k
(
@string VARCHAR(8000),
@search VARCHAR(8000),
@position INT,
@occurance INT
)
/*****************************************************************************************
[Purpose]:
Returns the position (ItemIndex) of the Nth(#occurance) occurrence of one string(#search) within
another(#string). Similar to Oracle's PL/SQL INSTR funtion.
https://www.techonthenet.com/oracle/functions/instr.php
[Author]:
Alan Burstein
[Compatibility]:
SQL Server 2008+
[Syntax]:
--===== Autonomous
SELECT ins.ItemIndex, ins.ItemLength, ins.ItemCount
FROM samd.Instr8k(#string,#search,#position,#occurance) AS ins;
--===== Against a table using APPLY
SELECT s.SomeID, ins.ItemIndex, ins.ItemLength, ins.ItemCount
FROM dbo.SomeTable AS s
CROSS APPLY samd.Instr8k(s.string,#search,#position,#occurance) AS ins
[Parameters]:
#string = VARCHAR(8000); Input sting to evaluate
#search = VARCHAR(8000); Token to search for inside of #string
#position = INT; Where to begin searching for #search; identical to the third
parameter in SQL Server CHARINDEX [, start_location]
#occurance = INT; Represents the Nth instance of the search string (#search)
[Returns]:
ItemIndex = Position of the Nth (#occurance) instance of #search inside #string
ItemLength = Length of #search (in case you need it, no need to re-evaluate the string)
ItemCount = Number of times #search appears inside #string
[Dependencies]:
1. samd.ngrams8k
1.1. dbo.rangeAB (iTVF)
2. samd.substringCount8K_lazy
[Developer Notes]:
1. samd.Instr8k does not treat the input strings (#string and #search) as case sensitive.
2. Don't use instr8k for "SubstringBetween" functionality; for better performance use
samd.SubstringBetween8k instead.
3. The #position parameter is the key benefit of this function when dealing with long
strings where the search item is towards the back of the string. For example, take a
5000 character string where, what you are looking for is always *at least* 3000
characters deep. Setting #position to 3000 will dramatically improve performance.
4. Unlike Oracle's PL/SQL INSTR function, Instr8k does not accept numbers less than 1.
[Examples]:
[Revision History]:
------------------------------------------------------------------------------------------
Rev 00 - 20191112 - Initial Development - Alan Burstein
*****************************************************************************************/
RETURNS TABLE WITH SCHEMABINDING AS RETURN
SELECT
ItemIndex = ISNULL(MAX(ISNULL(instr.Position,1)+(a.Pos-1)),0),
ItemLength = ISNULL(MAX(LEN(@search)),LEN(@search)),
ItemCount = ISNULL(MAX(items.SubstringCount),0)
FROM (VALUES(ISNULL(@position,1),LEN(@search))) AS a(Pos,SrchLn)
CROSS APPLY (VALUES(SUBSTRING(@string,a.Pos,8000))) AS f(String)
CROSS APPLY samd.substringCount8K_lazy(f.string,@search) AS items
CROSS APPLY
(
SELECT TOP (@occurance) RN = ROW_NUMBER() OVER (ORDER BY ng.position), ng.position
FROM samd.ngrams8k(f.string,a.SrchLn) AS ng
WHERE ng.token = @search
ORDER BY RN
) AS instr
WHERE a.Pos > 0
AND @occurance <= items.SubstringCount
AND instr.RN = @occurance;
GO
CREATE FUNCTION samd.substringCount8K_lazy
(
@string varchar(8000),
@searchstring varchar(1000)
)
/*****************************************************************************************
[Purpose]:
Scans the input string (#string) and counts how many times the search character
(#searchChar) appears. This function is Based on Itzik Ben-Gans cte numbers table logic
[Compatibility]:
SQL Server 2008+
Uses TABLE VALUES constructor (not available pre-2008)
[Author]: Alan Burstein
[Syntax]:
--===== Autonomous
SELECT f.substringCount
FROM samd.substringCount8K_lazy(#string,#searchString) AS f;
--===== Against a table using APPLY
SELECT f.substringCount
FROM dbo.someTable AS t
CROSS APPLY samd.substringCount8K_lazy(t.col, #searchString) AS f;
Parameters:
#string = VARCHAR(8000); input string to analyze
#searchString = VARCHAR(1000); substring to search for
[Returns]:
Inline table valued function returns -
substringCount = int; Number of times that #searchChar appears in #string
[Developer Notes]:
1. substringCount8K_lazy does NOT take overlapping values into consideration. For
example, this query will return a 1 but the correct result is 2:
SELECT substringCount FROM samd.substringCount8K_lazy('xxx','xx')
When overlapping values are a possibility or concern then use substringCountAdvanced8k
2. substringCount8K_lazy is what is referred to as an "inline" scalar UDF." Technically
it's aninline table valued function (iTVF) but performs the same task as a scalar
valued user defined function (UDF); the difference is that it requires the APPLY table
operator to accept column values as a parameter. For more about "inline" scalar UDFs
see thisarticle by SQL MVP Jeff Moden:
http://www.sqlservercentral.com/articles/T-SQL/91724/
and for more about how to use APPLY see the this article by SQL MVP Paul White:
http://www.sqlservercentral.com/articles/APPLY/69953/.
Note the above syntax example and usage examples below to better understand how to
use the function. Although the function is slightly more complicated to use than a
scalar UDF it will yield notably better performance for many reasons. For example,
unlike a scalar UDFs or multi-line table valued functions, the inline scalar UDF does
not restrict the query optimizer's ability generate a parallel query execution plan.
3. substringCount8K_lazy returns NULL when either input parameter is NULL and returns 0
when either input parameter is blank.
4. substringCount8K_lazy does not treat parameters as cases senstitive
5. substringCount8K_lazy is deterministic. For more deterministic functions see:
https://msdn.microsoft.com/en-us/library/ms178091.aspx
[Examples]:
--===== 1. How many times does the substring "abc" appear?
SELECT f.* FROM samd.substringCount8k_lazy('abc123xxxabc','abc') AS f;
--===== 2. Return records from a table where the substring "ab" appears more than once
DECLARE #table TABLE (string varchar(8000));
DECLARE #searchString varchar(1000) = 'ab';
INSERT #table VALUES ('abcabc'),('abcd'),('bababab'),('baba'),(NULL);
SELECT searchString = #searchString, t.string, f.substringCount
FROM #table AS t
CROSS APPLY samd.substringCount8k_lazy(string,'ab') AS f
WHERE f.substringCount > 1;
-----------------------------------------------------------------------------------------
[Revision History]:
Rev 00 - 20180625 - Initial Development - Alan Burstein
Rev 01 - 20190102 - Added logic to better handle #searchstring = char(32) - Alan Burstein
*****************************************************************************************/
RETURNS TABLE WITH SCHEMABINDING AS RETURN
SELECT substringCount = (LEN(v.s)-LEN(REPLACE(v.s,v.st,'')))/d.l
FROM (VALUES(DATALENGTH(@searchstring))) AS d(l)
CROSS APPLY (VALUES(@string,CASE WHEN d.l>0 THEN @searchstring END)) AS v(s,st);
GO

How to make a self-referential window function

I have a table like this:
amount type app owe
1 a 10 10
2 a 8 -2
3 a 20 12
4 i 30 10
5 a 40 10
owe is computed as:
if type = 'a': app - sum(owe) over all rows with a smaller amount
otherwise:     greatest(app - sum(owe) over all rows with a smaller amount, 0)
So I'd need a window function over the very column that the window function produces. There are frames such as ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING, but they have to operate on an existing column, not on the column I'm currently computing. Is there a way to reference the same column the window function is defining?
I tried an alias:
case
when type = a
then app - sum(owe)over(ROWS BETWEEN UNBOUNDED PRECEDING AND 1 preceding) as owe
else
greatest(0,app - sum(owe)over(ROWS BETWEEN UNBOUNDED PRECEDING AND 1 preceding))
end as owe
But since owe doesn't exist yet at the point where I reference it, I get:
owe doesn't exist.
Is there some other way?
You cannot do that with window functions. Your only chance using SQL is a recursive CTE:
WITH RECURSIVE tab_owe AS (
SELECT amount, type, app,
CASE WHEN type = 'a'
THEN app
ELSE GREATEST(app, 0)
END AS owe
FROM tab
ORDER BY amount LIMIT 1
UNION ALL
SELECT t.amount, t.type, t.app,
CASE WHEN t.type = 'a'
THEN t.app - sum(tab_owe.owe)
ELSE GREATEST(t.app - sum(tab_owe.owe), 0)
END AS owe
FROM (SELECT amount, type, app
FROM tab
WHERE amount > (SELECT max(amount) FROM tab_owe)
ORDER BY amount
LIMIT 1) AS t
CROSS JOIN tab_owe
GROUP BY t.amount, t.type, t.app
)
SELECT amount, type, app, owe
FROM tab_owe;
(untested)
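As written, PostgreSQL will reject the recursive term (aggregates and GROUP BY are not allowed there) and the ORDER BY ... LIMIT in the anchor needs its own parentheses, so a runnable variant of the same idea has to carry the running sum along instead of aggregating. A sketch, assuming the table is named tab as above:

WITH RECURSIVE tab_owe AS (
    (SELECT amount, type, app,
            CASE WHEN type = 'a' THEN app ELSE GREATEST(app, 0) END AS owe,
            CASE WHEN type = 'a' THEN app ELSE GREATEST(app, 0) END AS owe_sum
     FROM tab
     ORDER BY amount
     LIMIT 1)
    UNION ALL
    SELECT t.amount, t.type, t.app,
           CASE WHEN t.type = 'a'
                THEN t.app - o.owe_sum
                ELSE GREATEST(t.app - o.owe_sum, 0) END,
           o.owe_sum + CASE WHEN t.type = 'a'
                            THEN t.app - o.owe_sum
                            ELSE GREATEST(t.app - o.owe_sum, 0) END
    FROM tab_owe AS o
    CROSS JOIN LATERAL (SELECT amount, type, app   -- next row by amount
                        FROM tab
                        WHERE amount > o.amount
                        ORDER BY amount
                        LIMIT 1) AS t
)
SELECT amount, type, app, owe
FROM tab_owe;

With the sample rows from the question this yields owe = 10, -2, 12, 10, 10.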
This would be much easier to write in procedural code, so consider using a table function.
This is what I came up with. Of course, I'm not a real programmer, so I'm sure there's a smarter way:
insert into mort (amount, "type", app)
values
(1,'a',10),
(2,'a',8),
(3,'a',20),
(4,'i',30),
(5,'a',40)
CREATE OR REPLACE FUNCTION mort_v ()
RETURNS TABLE (
zamount int,
ztype text,
zapp int,
zowe double precision
) AS $$
DECLARE
var_r record;
charlie double precision;
sam double precision;
BEGIN
charlie = 0;
FOR var_r IN(SELECT
amount,
"type",
app
FROM mort order by 1)
LOOP
zamount = var_r.amount;
ztype = var_r.type;
zapp = var_r.app;
sam = var_r.app - charlie;
if ztype = 'a' then
zowe = sam;
else
zowe = greatest(sam, 0);
end if;
charlie = charlie + zowe;
RETURN NEXT;
END LOOP;
END; $$
LANGUAGE 'plpgsql';
select * from mort_v()
So with my limited skills you'll notice I had to add a 'z' in front of the columns that are already in the table so I can spit them out again. If your table has 30 columns you'd normally have to do this 30 times. But I asked a real engineer, and he mentioned that if you just spit out the primary key with the calculated column, you can join it back to the original table. That's smarter than what I have; if there's an even better solution, that would be great. This does serve as a nice reference for how to do something like a cursor in Postgres and how to make variables without an '@' in front like in MS SQL Server.
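A sketch of that suggestion against the mort table from this question (assuming amount is its key; mort_owe is a hypothetical helper name): return only the key plus the calculated column, then join back:

CREATE OR REPLACE FUNCTION mort_owe()   -- hypothetical name
RETURNS TABLE (zamount int, zowe double precision) AS $$
DECLARE
    var_r record;
    charlie double precision := 0;
    sam double precision;
BEGIN
    FOR var_r IN SELECT amount, "type", app FROM mort ORDER BY 1
    LOOP
        zamount := var_r.amount;
        sam := var_r.app - charlie;
        IF var_r.type = 'a' THEN
            zowe := sam;
        ELSE
            zowe := greatest(sam, 0);
        END IF;
        charlie := charlie + zowe;
        RETURN NEXT;
    END LOOP;
END; $$
LANGUAGE plpgsql;

-- join the calculated column back to the full row by the key
SELECT m.*, o.zowe AS owe
FROM mort AS m
JOIN mort_owe() AS o ON o.zamount = m.amount
ORDER BY m.amount;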

How can I write an iterative function that bases the current row output off of the prior row output?

I need to determine whether my current row value is positive or negative, which is a function of a starting value, scheduled increases, and daily decrement (which is different depending on if the prior day output value was positive or negative).
I only know my starting number for day 1, my schedule of increases, and my decrement values if positive or negative.
If "Prior Day Output" + "Today scheduled increase" is positive, then "Prior Day Output" + "Today scheduled increase" - 2(decrement value)
If "Prior Day Output" + "Today scheduled increase" is negative, then "Prior Day Output" + "Today scheduled increase" - 1(decrement value)
I haven't tried anything, as I can't think of an algebraic way to perform this. New to iterative functions or loops.
Here is the data I have to start with:
Here is what I want to end with:
I believe I have a solution for you.
Note: you have a stipulation saying that if start_val is positive (>= 0) the decrement is set to 2, but on Day 11 of your example output you have the decrement set to 1 where start_val + increase = 0.
This solution will match your example output which considers 0 to be negative. That is easily changeable in the segment that sets new_dec. Just move the = to the appropriate location.
CREATE OR REPLACE FUNCTION update_vals()
RETURNS SETOF test
AS
$$
DECLARE
new_dec integer;
new_end integer;
new_start integer;
rec record;
BEGIN
FOR rec IN
SELECT * FROM test
LOOP
new_start := NULL::integer;
IF rec.start_val IS NULL
THEN
SELECT end_val
INTO new_start
FROM
(
SELECT MAX(id) last_id FROM test WHERE id < rec.id
) a
JOIN test ON id = a.last_id
;
END IF;
IF COALESCE(rec.start_val, new_start) + rec.increase > 0
THEN
new_dec := 2;
ELSIF COALESCE(rec.start_val, new_start) + rec.increase <= 0
THEN
new_dec := 1;
END IF;
new_end := COALESCE(rec.start_val, new_start) + rec.increase - new_dec;
IF new_start IS NOT NULL
THEN
RETURN QUERY
UPDATE test
SET (start_val, decrement, end_val) = (new_start, new_dec, new_end)
WHERE id = rec.id
RETURNING *
;
ELSE
RETURN QUERY
UPDATE test
SET (decrement, end_val) = (new_dec, new_end)
WHERE id = rec.id
RETURNING *
;
END IF;
END LOOP;
END;
$$ LANGUAGE PLPGSQL;
Here is a db-fiddle to show a working example.
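The function both updates the test table in place and returns the updated rows, so (assuming the table from the fiddle) it would be run simply as:

SELECT * FROM update_vals();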

How to maintain a PostgreSQL lock from a BEFORE UPDATE trigger through the update operation itself

I apologize if this is an answered question, I did some research, and I couldn't find an answer.
I'm maintaining a folder/file-like structure in my code, with ordered items that cascade order changes on update and deletion. However, these triggers need both to lock rows to ensure that the order changes are completed, and to keep holding that lock through the completion of the operation itself.
The updating process is relatively simple. This is the governing pseudo-code for the entire operation:
check if pg_trigger_depth() >= 1
return because this was a cascaded update from a trigger
lock the table for update on items with the old folder_parent_id
lock the table for update on items with the new folder_parent_id
update the old rows setting order_number -= 1 where the order_number is > the old order_number, and the folder_parent_id is the same as the old one
update the new rows setting order_number +=1 where the order_number is >= the new order_number and the folder_parent_id is the same as the new one
allow the update operation to go through (setting the order_number/folder_parent_id of this row to its new location)
release the lock for update on items with the old folder_parent_id
release the lock for update on items with the new folder_parent_id
If the lock is released before the actual operation goes through, this sort of race condition can happen. In this sample problem, two updates are being called simultaneously:
Given children of a folder: a(0), b(1), c(2), d(3), e(4)
the letters are the identifying properties and the numbers are the order numbers
we want to run these operations: c(2 -> 1), d(3 -> 0)
Here's the timeline for these operations:
BEFORE UPDATE ON c:
decrement everything > OLD c.order_number (d--, e--)
increment everything >= NEW c.order_number (b++, d++, e++)
CURRENT STATE: a(0), b(2), c(2), d(3), e(4)
BEFORE UPDATE ON d:
decrement everything > OLD d.order_number (e--)
increment everything >= NEW d.order_number (a++, b++, c++, e++)
CURRENT STATE: a(1), b(3), c(3), d(3), e(4)
SET c = 1
SET d = 0
FINAL STATE: d(0), a(1), c(1), b(3), e(4)
Clearly, the race condition here is the fact that c and d both alter each other's position in the list, but if the before operation trigger runs on each one before the state change happens, then the operations they perform on each other are discarded.
Is there a straightforward way to either make sure that locks are maintained on these tables through from start to finish of this operation, or otherwise to do this in a way that fixes this sort of race condition? I've been considering creating a separate table File_Structure_Lock that would be locked for update in a before trigger, and then unlocked in the after trigger to circumvent the PostgreSQL locking system, but I figured that there had to be a better method.
EDIT: I was asked for the actual SQL. My issue here arose while preparing to refactor existing code that had race conditions causing errors. I can try to mark this up in a minute, but here is the raw code that I'm working with, with a few variable names changed to make it more generally understandable:
CREATE OR REPLACE FUNCTION getOrderLock() RETURNS TRIGGER AS $getOrderLock$
BEGIN
PERFORM * FROM Folders FOR UPDATE;
PERFORM * FROM Files FOR UPDATE;
IF (TG_OP = 'INSERT' OR TG_OP = 'UPDATE') THEN
RETURN NEW;
ELSIF (TG_OP = 'DELETE') THEN
RETURN OLD;
END IF;
END;
$getOrderLock$ LANGUAGE plpgsql;
CREATE TRIGGER trigger_folder_lock_rows
BEFORE INSERT OR UPDATE OR DELETE ON Folders
FOR EACH STATEMENT
WHEN (pg_trigger_depth() < 1)
EXECUTE PROCEDURE getOrderLock();
CREATE TRIGGER trigger_file_lock_rows
BEFORE INSERT OR UPDATE OR DELETE ON Files
FOR EACH STATEMENT
WHEN (pg_trigger_depth() < 1)
EXECUTE PROCEDURE getOrderLock();
CREATE OR REPLACE FUNCTION adjust_order_numbers_after_folder_update() RETURNS TRIGGER AS $adjust_order_numbers_after_nav_update$
BEGIN
--update old location
UPDATE Folders
SET order_number = Folders.order_number - 1
WHERE Folders.order_number >= OLD.order_number
AND Folders.page_id = OLD.page_id
AND COALESCE(Folders.folder_parent_id, 0) = COALESCE(OLD.folder_parent_id, 0)
AND Folders.id != NEW.id;
UPDATE Files
SET order_number = Files.order_number - 1
WHERE Files.order_number >= OLD.order_number
AND Files.page_id = OLD.page_id
AND COALESCE(Files.folder_parent_id, 0) = COALESCE(OLD.folder_parent_id, 0);
--update new location
UPDATE Folders
SET order_number = Folders.order_number + 1
WHERE Folders.order_number >= NEW.order_number
AND Folders.page_id = NEW.page_id
AND COALESCE(Folders.folder_parent_id, 0) = COALESCE(NEW.folder_parent_id, 0)
AND Folders.id != NEW.id;
UPDATE Files
SET order_number = Files.order_number + 1
WHERE Files.order_number >= NEW.order_number
AND Files.page_id = NEW.page_id
AND COALESCE(Files.folder_parent_id, 0) = COALESCE(NEW.folder_parent_id, 0);
RETURN NEW;
END;
$adjust_order_numbers_after_nav_update$ LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION adjust_order_numbers_after_file_update() RETURNS TRIGGER AS $adjust_order_numbers_after_file_update$
BEGIN
--update old location
UPDATE Folders
SET order_number = Folders.order_number - 1
WHERE Folders.order_number >= OLD.order_number
AND Folders.page_id = OLD.page_id
AND COALESCE(Folders.folder_parent_id, 0) = COALESCE(OLD.folder_parent_id, 0);
UPDATE Files
SET order_number = Files.order_number - 1
WHERE Files.order_number >= OLD.order_number
AND Files.page_id = OLD.page_id
AND COALESCE(Files.folder_parent_id, 0) = COALESCE(OLD.folder_parent_id, 0)
AND Files.id != NEW.id;
--update new location
UPDATE Folders
SET order_number = Folders.order_number + 1
WHERE Folders.order_number >= NEW.order_number
AND Folders.page_id = NEW.page_id
AND COALESCE(Folders.folder_parent_id, 0) = COALESCE(NEW.folder_parent_id, 0);
UPDATE Files
SET order_number = Files.order_number + 1
WHERE Files.order_number >= NEW.order_number
AND Files.page_id = NEW.page_id
AND COALESCE(Files.folder_parent_id, 0) = COALESCE(NEW.folder_parent_id, 0)
AND Files.id != NEW.id;
RETURN NEW;
END;
$adjust_order_numbers_after_file_update$ LANGUAGE plpgsql;
CREATE TRIGGER trigger_folder_order_shift
AFTER UPDATE ON Folders
FOR EACH ROW
WHEN (
(
COALESCE(OLD.folder_parent_id, 0) != COALESCE(NEW.folder_parent_id, 0)
OR OLD.order_number != NEW.order_number
OR Old.page_id != New.page_id
)
AND pg_trigger_depth() < 1
)
EXECUTE PROCEDURE adjust_order_numbers_after_folder_update();
CREATE TRIGGER trigger_file_order_shift
AFTER UPDATE ON Files
FOR EACH ROW
WHEN (
(
COALESCE(OLD.folder_parent_id, 0) != COALESCE(NEW.folder_parent_id, 0)
OR OLD.order_number != NEW.order_number
OR Old.page_id != New.page_id
)
AND pg_trigger_depth() < 1
)
EXECUTE PROCEDURE adjust_order_numbers_after_file_update();
The problem seems to come from the order_number that you insist on being a gap-less sequence of integers ordering the items in each folder. If you want to maintain that, you have to shuffle all the items around, and it is indeed hard to do that without some major locking.
But if all you want to do is to maintain a certain order of the items, I would relax the requirement of a gap-less sequence and instead use double precision values to describe the order of items. Then it is easy to insert an item anywhere without changing order_number in any other element – you can always assign the moved item an order_number that is between any two existing ones.
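A sketch of what that looks like, assuming order_number is changed to double precision and using hypothetical ids: moving an item only touches the moved row, which gets the midpoint of its new neighbours' values:

-- 101 and 102 are hypothetical ids of the rows just above and just below the
-- target slot; 103 is the hypothetical id of the row being moved
UPDATE Folders
SET order_number = (SELECT (a.order_number + b.order_number) / 2.0
                    FROM Folders AS a, Folders AS b
                    WHERE a.id = 101 AND b.id = 102)
WHERE id = 103;

Since no other row changes, there is nothing for the cascading triggers to shuffle and nothing that needs a table-wide lock; if two neighbouring values ever get too close together, the folder can be renumbered in a separate, infrequent maintenance statement.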

PL/pgSQL Control Structures non-SETOF issue

I have two tables with date columns (year, month, day, and a combination of the three called "stamp") and associated values (avalue). I need my function to return the ratio between the two tables' values at a specified date, to return a fixed value after a previously specified limit date, and, if the requested date is not present in the data (but is earlier than the limit), to use the first available date following the input.
Here's the code I wrote:
CREATE OR REPLACE FUNCTION myfunction(theyear int, themonth int, theday int) RETURNS real AS '
DECLARE
foo tablenamea%rowtype;
BEGIN
IF ((theyear >= 2000) AND (themonth >= 6)) OR (theyear > 2000) THEN
RETURN 0.1;
ELSE
FOR foo IN SELECT (a.avalue/b.avalue) FROM tablenamea AS a, tablenameb AS b
WHERE a.stamp = b.stamp AND a.year = theyear AND a.month = themonth AND a.day >= theday ORDER BY a.year, a.month, a.day
LOOP
RETURN NEXT foo;
END LOOP;
RETURN;
END IF;
END;
' LANGUAGE plpgsql;
This keeps giving me this error:
cannot use RETURN NEXT in a non-SETOF function
It's telling you that the function is declared to return a single real, so RETURN NEXT (which is only for set-returning functions) is not allowed. Return foo.somefield, a real, instead.
Also, add a LIMIT 1 instead of a FOR loop, since I presume you're only really interested in the first row. If not, declare the function as returning e.g. TABLE (ratio real) and use RETURN QUERY.
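A sketch of the single-value variant, keeping the question's table and column names; the set-returning alternative would instead declare RETURNS TABLE (ratio real) and use RETURN QUERY with the same SELECT:

CREATE OR REPLACE FUNCTION myfunction(theyear int, themonth int, theday int)
RETURNS real AS $$
BEGIN
    IF ((theyear >= 2000) AND (themonth >= 6)) OR (theyear > 2000) THEN
        RETURN 0.1;
    ELSE
        -- first available date at or after the requested one
        RETURN (SELECT a.avalue / b.avalue
                FROM tablenamea AS a
                JOIN tablenameb AS b ON b.stamp = a.stamp
                WHERE a.year = theyear
                  AND a.month = themonth
                  AND a.day >= theday
                ORDER BY a.year, a.month, a.day
                LIMIT 1);
    END IF;
END;
$$ LANGUAGE plpgsql;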