I am interested in a T-SQL algorithm for calculating the Levenshtein distance.
I implemented the standard Levenshtein edit distance function in TSQL with several optimizations that improve the speed over the other versions I'm aware of. The speedup is significant when the two strings share a prefix, share a suffix, or when the strings are large and a max edit distance is provided. For example, when the inputs are two very similar 4000-character strings and a max edit distance of 2 is specified, this is almost three orders of magnitude faster than the edit_distance_within function in the accepted answer, returning the answer in 0.073 seconds (73 milliseconds) vs 55 seconds. It's also memory efficient, using space equal to the larger of the two input strings plus some constant space. It uses a single nvarchar "array" representing a column of the matrix, does all computations in place in it, and keeps a few helper int variables.
Optimizations:
skips processing of shared prefix and/or suffix
early return if larger string starts or ends with entire smaller string
early return if difference in sizes guarantees max distance will be exceeded
uses only a single array representing a column in the matrix (implemented as nvarchar)
when a max distance is given, the time complexity drops from O(len1 * len2) to O(min(len1, len2) * max), i.e. effectively linear in the string length
when a max distance is given, early return as soon as max distance bound is known not to be achievable
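For readers who want to see the shape of the algorithm outside T-SQL, here is a minimal Python sketch of the same optimizations (shared prefix/suffix trimming, a single reusable row, a band of width 2*max+1 around the diagonal, and early exit). It is an illustration of the approach, not a line-by-line translation of the T-SQL function.

```python
def levenshtein_capped(s, t, max_dist=None):
    """Banded, single-row Levenshtein distance; None once max_dist is exceeded."""
    if len(s) > len(t):
        s, t = t, s                          # keep the shorter string in s
    if max_dist is None:
        max_dist = len(t)
    if len(t) - len(s) > max_dist:
        return None                          # the length gap alone exceeds the cap
    while s and t and s[-1] == t[-1]:        # shared suffix can be ignored
        s, t = s[:-1], t[:-1]
    while s and t and s[0] == t[0]:          # shared prefix can be ignored
        s, t = s[1:], t[1:]
    if not s:                                # s swallowed by prefix/suffix of t
        return len(t) if len(t) <= max_dist else None
    # one row of the virtual matrix; out-of-band cells are capped
    v = [min(j, max_dist + 1) for j in range(1, len(t) + 1)]
    for i in range(1, len(s) + 1):
        sc = s[i - 1]
        lo, hi = max(1, i - max_dist), min(len(t), i + max_dist)
        if lo == 1:
            dist, diag = i, i - 1            # matrix border values
        else:
            dist, diag = max_dist + 1, v[lo - 2]  # left neighbor is out of band
        row_best = max_dist + 1
        for j in range(lo, hi + 1):
            above = v[j - 1]
            dist = diag if sc == t[j - 1] else 1 + min(diag, above, dist)
            v[j - 1], diag = dist, above
            row_best = min(row_best, dist)
        if row_best > max_dist:
            return None                      # no cell in the band can recover
    return dist if dist <= max_dist else None
```

The band works because any cell farther than max_dist from the diagonal must hold a value greater than max_dist, so it can never contribute to an answer within the cap.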
Here is the code (updated 1/20/2014 to speed it up a bit more):
-- =============================================
-- Computes and returns the Levenshtein edit distance between two strings, i.e. the
-- number of insertion, deletion, and substitution edits required to transform one
-- string to the other, or NULL if @max is exceeded. Comparisons use the case-
-- sensitivity configured in SQL Server (case-insensitive by default).
--
-- Based on Sten Hjelmqvist's "Fast, memory efficient" algorithm, described
-- at http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm,
-- with some additional optimizations.
-- =============================================
CREATE FUNCTION [dbo].[Levenshtein](
    @s nvarchar(4000)
  , @t nvarchar(4000)
  , @max int
)
RETURNS int
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @distance int = 0 -- return variable
        , @v0 nvarchar(4000) -- running scratchpad for storing computed distances
        , @start int = 1 -- index (1 based) of first non-matching character between the two strings
        , @i int, @j int -- loop counters: i for s string and j for t string
        , @diag int -- distance in cell diagonally above and left if we were using an m by n matrix
        , @left int -- distance in cell to the left if we were using an m by n matrix
        , @sChar nchar -- character at index i from s string
        , @thisJ int -- temporary storage of @j to allow SELECT combining
        , @jOffset int -- offset used to calculate starting value for j loop
        , @jEnd int -- ending value for j loop (stopping point for processing a column)
        -- get input string lengths including any trailing spaces (which SQL Server would otherwise ignore)
        , @sLen int = datalength(@s) / datalength(left(left(@s, 1) + '.', 1)) -- length of smaller string
        , @tLen int = datalength(@t) / datalength(left(left(@t, 1) + '.', 1)) -- length of larger string
        , @lenDiff int -- difference in length between the two strings
    -- if strings of different lengths, ensure shorter string is in s. This can result in a little
    -- faster speed by spending more time spinning just the inner loop during the main processing.
    IF (@sLen > @tLen) BEGIN
        SELECT @v0 = @s, @i = @sLen -- temporarily use v0 for swap
        SELECT @s = @t, @sLen = @tLen
        SELECT @t = @v0, @tLen = @i
    END
    SELECT @max = ISNULL(@max, @tLen)
        , @lenDiff = @tLen - @sLen
    IF @lenDiff > @max RETURN NULL
    -- suffix common to both strings can be ignored
    WHILE (@sLen > 0 AND SUBSTRING(@s, @sLen, 1) = SUBSTRING(@t, @tLen, 1))
        SELECT @sLen = @sLen - 1, @tLen = @tLen - 1
    IF (@sLen = 0) RETURN @tLen
    -- prefix common to both strings can be ignored
    WHILE (@start < @sLen AND SUBSTRING(@s, @start, 1) = SUBSTRING(@t, @start, 1))
        SELECT @start = @start + 1
    IF (@start > 1) BEGIN
        SELECT @sLen = @sLen - (@start - 1)
            , @tLen = @tLen - (@start - 1)
        -- if all of shorter string matches prefix and/or suffix of longer string, then
        -- edit distance is just the delete of additional characters present in longer string
        IF (@sLen <= 0) RETURN @tLen
        SELECT @s = SUBSTRING(@s, @start, @sLen)
            , @t = SUBSTRING(@t, @start, @tLen)
    END
    -- initialize v0 array of distances
    SELECT @v0 = '', @j = 1
    WHILE (@j <= @tLen) BEGIN
        SELECT @v0 = @v0 + NCHAR(CASE WHEN @j > @max THEN @max ELSE @j END)
        SELECT @j = @j + 1
    END
    SELECT @jOffset = @max - @lenDiff
        , @i = 1
    WHILE (@i <= @sLen) BEGIN
        SELECT @distance = @i
            , @diag = @i - 1
            , @sChar = SUBSTRING(@s, @i, 1)
            -- no need to look beyond window of upper left diagonal (@i) + @max cells
            -- and the lower right diagonal (@i - @lenDiff) - @max cells
            , @j = CASE WHEN @i <= @jOffset THEN 1 ELSE @i - @jOffset END
            , @jEnd = CASE WHEN @i + @max >= @tLen THEN @tLen ELSE @i + @max END
        WHILE (@j <= @jEnd) BEGIN
            -- at this point, @distance holds the previous value (the cell above if we were using an m by n matrix)
            SELECT @left = UNICODE(SUBSTRING(@v0, @j, 1))
                , @thisJ = @j
            SELECT @distance =
                CASE WHEN (@sChar = SUBSTRING(@t, @j, 1)) THEN @diag -- match, no change
                     ELSE 1 + CASE WHEN @diag < @left AND @diag < @distance THEN @diag -- substitution
                                   WHEN @left < @distance THEN @left -- insertion
                                   ELSE @distance -- deletion
                              END END
            SELECT @v0 = STUFF(@v0, @thisJ, 1, NCHAR(@distance))
                , @diag = @left
                , @j = CASE WHEN (@distance > @max) AND (@thisJ = @i + @lenDiff) THEN @jEnd + 2 ELSE @thisJ + 1 END
        END
        SELECT @i = CASE WHEN @j > @jEnd + 1 THEN @sLen + 1 ELSE @i + 1 END
    END
    RETURN CASE WHEN @distance <= @max THEN @distance ELSE NULL END
END
As mentioned in the comments of this function, the case sensitivity of the character comparisons follows the collation that's in effect. By default, SQL Server's collation results in case-insensitive comparisons.
One way to make this function always case sensitive is to add a specific collation to the two places where strings are compared. However, I have not thoroughly tested this, especially for side effects when the database uses a non-default collation.
Here is how the two lines would be changed to force case-sensitive comparisons:
-- prefix common to both strings can be ignored
WHILE (@start < @sLen AND SUBSTRING(@s, @start, 1) = SUBSTRING(@t, @start, 1) COLLATE SQL_Latin1_General_Cp1_CS_AS)
and
SELECT @distance =
    CASE WHEN (@sChar = SUBSTRING(@t, @j, 1) COLLATE SQL_Latin1_General_Cp1_CS_AS) THEN @diag -- match, no change
Arnold Fribble had two proposals on sqlteam.com/forums:
one from June 2005 and
an updated one from May 2006.
This is the more recent one, from 2006:
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
CREATE FUNCTION edit_distance_within(@s nvarchar(4000), @t nvarchar(4000), @d int)
RETURNS int
AS
BEGIN
    DECLARE @sl int, @tl int, @i int, @j int, @sc nchar, @c int, @c1 int,
        @cv0 nvarchar(4000), @cv1 nvarchar(4000), @cmin int
    SELECT @sl = LEN(@s), @tl = LEN(@t), @cv1 = '', @j = 1, @i = 1, @c = 0
    WHILE @j <= @tl
        SELECT @cv1 = @cv1 + NCHAR(@j), @j = @j + 1
    WHILE @i <= @sl
    BEGIN
        SELECT @sc = SUBSTRING(@s, @i, 1), @c1 = @i, @c = @i, @cv0 = '', @j = 1, @cmin = 4000
        WHILE @j <= @tl
        BEGIN
            SET @c = @c + 1
            SET @c1 = @c1 - CASE WHEN @sc = SUBSTRING(@t, @j, 1) THEN 1 ELSE 0 END
            IF @c > @c1 SET @c = @c1
            SET @c1 = UNICODE(SUBSTRING(@cv1, @j, 1)) + 1
            IF @c > @c1 SET @c = @c1
            IF @c < @cmin SET @cmin = @c
            SELECT @cv0 = @cv0 + NCHAR(@c), @j = @j + 1
        END
        IF @cmin > @d BREAK
        SELECT @cv1 = @cv0, @i = @i + 1
    END
    RETURN CASE WHEN @cmin <= @d AND @c <= @d THEN @c ELSE -1 END
END
GO
IIRC, with SQL Server 2005 and later you can write stored procedures in any .NET language: Using CLR Integration in SQL Server 2005. With that, it shouldn't be hard to write a procedure for calculating the Levenshtein distance.
A simple Hello, World! extracted from the help:
using System;
using System.Data;
using Microsoft.SqlServer.Server;
using System.Data.SqlTypes;
public class HelloWorldProc
{
[Microsoft.SqlServer.Server.SqlProcedure]
public static void HelloWorld(out string text)
{
SqlContext.Pipe.Send("Hello world!" + Environment.NewLine);
text = "Hello world!";
}
}
Then in your SQL Server run the following:
CREATE ASSEMBLY helloworld from 'c:\helloworld.dll' WITH PERMISSION_SET = SAFE
CREATE PROCEDURE hello
    @i nchar(25) OUTPUT
AS
EXTERNAL NAME helloworld.HelloWorldProc.HelloWorld
And now you can test run it:
DECLARE @J nchar(25)
EXEC hello @J out
PRINT @J
Hope this helps.
You can use Levenshtein Distance Algorithm for comparing strings
Here you can find a T-SQL example at http://www.kodyaz.com/articles/fuzzy-string-matching-using-levenshtein-distance-sql-server.aspx
CREATE FUNCTION edit_distance(@s1 nvarchar(3999), @s2 nvarchar(3999))
RETURNS int
AS
BEGIN
    DECLARE @s1_len int, @s2_len int
    DECLARE @i int, @j int, @s1_char nchar, @c int, @c_temp int
    DECLARE @cv0 varbinary(8000), @cv1 varbinary(8000)
    SELECT
        @s1_len = LEN(@s1),
        @s2_len = LEN(@s2),
        @cv1 = 0x0000,
        @j = 1, @i = 1, @c = 0
    WHILE @j <= @s2_len
        SELECT @cv1 = @cv1 + CAST(@j AS binary(2)), @j = @j + 1
    WHILE @i <= @s1_len
    BEGIN
        SELECT
            @s1_char = SUBSTRING(@s1, @i, 1),
            @c = @i,
            @cv0 = CAST(@i AS binary(2)),
            @j = 1
        WHILE @j <= @s2_len
        BEGIN
            SET @c = @c + 1
            SET @c_temp = CAST(SUBSTRING(@cv1, @j+@j-1, 2) AS int) +
                CASE WHEN @s1_char = SUBSTRING(@s2, @j, 1) THEN 0 ELSE 1 END
            IF @c > @c_temp SET @c = @c_temp
            SET @c_temp = CAST(SUBSTRING(@cv1, @j+@j+1, 2) AS int) + 1
            IF @c > @c_temp SET @c = @c_temp
            SELECT @cv0 = @cv0 + CAST(@c AS binary(2)), @j = @j + 1
        END
        SELECT @cv1 = @cv0, @i = @i + 1
    END
    RETURN @c
END
(Function developed by Joseph Gama)
Usage:
select
dbo.edit_distance('Fuzzy String Match','fuzzy string match'),
dbo.edit_distance('fuzzy','fuzy'),
dbo.edit_distance('Fuzzy String Match','fuzy string match'),
dbo.edit_distance('levenshtein distance sql','levenshtein sql server'),
dbo.edit_distance('distance','server')
The algorithm simply returns the number of steps required to change one string into the other, where each step replaces, inserts, or deletes a single character.
I was looking for a code example of the Levenshtein algorithm, too, and was happy to find it here. Of course I wanted to understand how the algorithm works, so I played around a little with one of the above examples, the one posted by Veve. To get a better understanding of the code I built the matrix in an EXCEL sheet.
distance for FUZZY compared with FUZY
Images say more than 1000 words.
With this EXCEL sheet I found that there was potential for additional performance optimization. All values in the upper right red area do not need to be calculated: the value of each red cell is the value of the cell to its left plus 1, because in that area the second string is always longer than the first one, which increases the distance by 1 for each additional character.
You can reflect that by using the statement IF @j <= @i and increasing the value of @i prior to this statement.
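To reproduce the matrix from the EXCEL sheet, here is a small Python sketch (illustrative, not the T-SQL function) that builds the full Wagner-Fischer table for FUZZY vs FUZY:

```python
def levenshtein_matrix(s, t):
    """Full Wagner-Fischer matrix; d[i][j] = distance between s[:i] and t[:j]."""
    d = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(len(s) + 1):
        d[i][0] = i                              # deletions down the first column
    for j in range(len(t) + 1):
        d[0][j] = j                              # insertions along the first row
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d

m = levenshtein_matrix("FUZZY", "FUZY")
for row in m:
    print(row)
print("distance:", m[-1][-1])                    # 1 edit: delete one Z
```

Printing each row gives exactly the table the spreadsheet shows, which makes it easy to inspect which cells a shortcut would skip.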
CREATE FUNCTION [dbo].[f_LevenshteinDistance](@s1 nvarchar(3999), @s2 nvarchar(3999))
RETURNS int
AS
BEGIN
    DECLARE @s1_len int;
    DECLARE @s2_len int;
    DECLARE @i int;
    DECLARE @j int;
    DECLARE @s1_char nchar;
    DECLARE @c int;
    DECLARE @c_temp int;
    DECLARE @cv0 varbinary(8000);
    DECLARE @cv1 varbinary(8000);
    SELECT
        @s1_len = LEN(@s1),
        @s2_len = LEN(@s2),
        @cv1 = 0x0000,
        @j = 1,
        @i = 1,
        @c = 0
    WHILE @j <= @s2_len
        SELECT @cv1 = @cv1 + CAST(@j AS binary(2)), @j = @j + 1;
    WHILE @i <= @s1_len
    BEGIN
        SELECT
            @s1_char = SUBSTRING(@s1, @i, 1),
            @c = @i,
            @cv0 = CAST(@i AS binary(2)),
            @j = 1;
        SET @i = @i + 1;
        WHILE @j <= @s2_len
        BEGIN
            SET @c = @c + 1;
            IF @j <= @i
            BEGIN
                SET @c_temp = CAST(SUBSTRING(@cv1, @j + @j - 1, 2) AS int) + CASE WHEN @s1_char = SUBSTRING(@s2, @j, 1) THEN 0 ELSE 1 END;
                IF @c > @c_temp SET @c = @c_temp
                SET @c_temp = CAST(SUBSTRING(@cv1, @j + @j + 1, 2) AS int) + 1;
                IF @c > @c_temp SET @c = @c_temp;
            END;
            SELECT @cv0 = @cv0 + CAST(@c AS binary(2)), @j = @j + 1;
        END;
        SET @cv1 = @cv0;
    END;
    RETURN @c;
END;
In TSQL, the best and fastest way to compare two items is a SELECT statement that joins tables on indexed columns. Therefore this is how I suggest implementing the edit distance if you want to benefit from the advantages of an RDBMS engine. TSQL loops will work too, but for large-volume comparisons, Levenshtein distance calculations are faster in other languages than in TSQL.
I have implemented the edit distance in several systems using series of joins against temporary tables designed for that purpose only. It requires some heavy pre-processing steps - the preparation of the temporary tables - but it works very well with a large number of comparisons.
In a few words: the pre-processing consists of creating, populating, and indexing temp tables. The first one contains reference ids, a one-letter column, and a charindex column. This table is populated by running a series of insert queries that split every word into letters (using SELECT SUBSTRING) to create as many rows as the words in the source list have letters (I know, that's a lot of rows, but SQL Server can handle billions of rows). Then make a second table with a 2-letter column, another table with a 3-letter column, etc. The end result is a series of tables which contain the reference ids and substrings of each of the words, as well as the position of each substring within its word.
Once this is done, the whole game is about duplicating these tables and joining them against their duplicates in a GROUP BY select query which counts the number of matches. This creates a series of measures for every possible pair of words, which are then re-aggregated into a single Levenshtein distance per pair of words.
Technically this is very different from most other implementations of the Levenshtein distance (or its variants), so you need to deeply understand how the Levenshtein distance works and why it was designed as it is. Investigate the alternatives as well, because with this method you end up with a series of underlying metrics which can help calculate many variants of the edit distance at the same time, providing interesting machine learning potential improvements.
Another point already mentioned by previous answers on this page: try to pre-process as much as possible to eliminate the pairs that do not require distance measurement. For example, a pair of words that do not have a single letter in common should be excluded, because their edit distance can be derived directly from the lengths of the strings. And do not measure the distance between two copies of the same word, since it is 0 by nature. Or remove duplicates before doing the measurement; if your list of words comes from a long text, it is likely that the same words will appear more than once, so measuring the distance only once per distinct pair will save processing time, etc.
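As an illustration of the join-based idea, here is a hypothetical Python sketch of the bigram step only: index words by their 2-character substrings, then count shared bigrams per pair, the way the GROUP BY over the duplicated temp tables would. The names and the restriction to unpositioned bigrams are my simplifications; the approach described above also uses 1-grams, 3-grams, etc. and keeps character positions.

```python
from collections import defaultdict

def bigrams(word):
    """The set of 2-character substrings of a word (a simplified stand-in
    for the positioned n-gram temp tables described above)."""
    return {word[i:i + 2] for i in range(len(word) - 1)}

def shared_bigram_counts(words):
    """Count bigrams in common for every pair of words -- the GROUP BY /
    COUNT(*) step of the join-based approach, usable as a prefilter."""
    index = defaultdict(set)                 # bigram -> words containing it
    for w in words:
        for g in bigrams(w):
            index[g].add(w)
    counts = defaultdict(int)
    for g, ws in index.items():
        for a in ws:
            for b in ws:
                if a < b:                    # each unordered pair once
                    counts[(a, b)] += 1
    return dict(counts)

pairs = shared_bigram_counts(["fuzzy", "fuzy", "server"])
```

Pairs that never appear in the result (no bigram in common) can be dropped before any exact distance computation, which is the pre-filtering benefit described above.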
My mods for Azure Synapse (changed to use SET instead of SELECT):
-- =============================================
-- Computes and returns the Levenshtein edit distance between two strings, i.e. the
-- number of insertion, deletion, and substitution edits required to transform one
-- string to the other, or NULL if @max is exceeded. Comparisons use the case-
-- sensitivity configured in SQL Server (case-insensitive by default).
--
-- Based on Sten Hjelmqvist's "Fast, memory efficient" algorithm, described
-- at http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm,
-- with some additional optimizations.
-- =============================================
CREATE FUNCTION [dbo].[Levenshtein](
    @s nvarchar(4000)
  , @t nvarchar(4000)
  , @max int
)
RETURNS int
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @distance int = 0 -- return variable
        , @v0 nvarchar(4000) -- running scratchpad for storing computed distances
        , @start int = 1 -- index (1 based) of first non-matching character between the two strings
        , @i int, @j int -- loop counters: i for s string and j for t string
        , @diag int -- distance in cell diagonally above and left if we were using an m by n matrix
        , @left int -- distance in cell to the left if we were using an m by n matrix
        , @sChar nchar -- character at index i from s string
        , @thisJ int -- temporary storage of @j to allow SELECT combining
        , @jOffset int -- offset used to calculate starting value for j loop
        , @jEnd int -- ending value for j loop (stopping point for processing a column)
        -- get input string lengths including any trailing spaces (which SQL Server would otherwise ignore)
        , @sLen int = datalength(@s) / datalength(left(left(@s, 1) + '.', 1)) -- length of smaller string
        , @tLen int = datalength(@t) / datalength(left(left(@t, 1) + '.', 1)) -- length of larger string
        , @lenDiff int -- difference in length between the two strings
    -- if strings of different lengths, ensure shorter string is in s. This can result in a little
    -- faster speed by spending more time spinning just the inner loop during the main processing.
    IF (@sLen > @tLen) BEGIN
        SET @v0 = @s -- temporarily use v0 for swap
        SET @i = @sLen
        SET @s = @t
        SET @sLen = @tLen
        SET @t = @v0
        SET @tLen = @i
    END
    SET @max = ISNULL(@max, @tLen)
    SET @lenDiff = @tLen - @sLen
    IF @lenDiff > @max RETURN NULL
    -- suffix common to both strings can be ignored
    WHILE (@sLen > 0 AND SUBSTRING(@s, @sLen, 1) = SUBSTRING(@t, @tLen, 1))
    BEGIN
        SET @sLen = @sLen - 1
        SET @tLen = @tLen - 1
    END
    IF (@sLen = 0) RETURN @tLen
    -- prefix common to both strings can be ignored
    WHILE (@start < @sLen AND SUBSTRING(@s, @start, 1) = SUBSTRING(@t, @start, 1))
        SET @start = @start + 1
    IF (@start > 1) BEGIN
        SET @sLen = @sLen - (@start - 1)
        SET @tLen = @tLen - (@start - 1)
        -- if all of shorter string matches prefix and/or suffix of longer string, then
        -- edit distance is just the delete of additional characters present in longer string
        IF (@sLen <= 0) RETURN @tLen
        SET @s = SUBSTRING(@s, @start, @sLen)
        SET @t = SUBSTRING(@t, @start, @tLen)
    END
    -- initialize v0 array of distances
    SET @v0 = ''
    SET @j = 1
    WHILE (@j <= @tLen) BEGIN
        SET @v0 = @v0 + NCHAR(CASE WHEN @j > @max THEN @max ELSE @j END)
        SET @j = @j + 1
    END
    SET @jOffset = @max - @lenDiff
    SET @i = 1
    WHILE (@i <= @sLen) BEGIN
        SET @distance = @i
        SET @diag = @i - 1
        SET @sChar = SUBSTRING(@s, @i, 1)
        -- no need to look beyond window of upper left diagonal (@i) + @max cells
        -- and the lower right diagonal (@i - @lenDiff) - @max cells
        SET @j = CASE WHEN @i <= @jOffset THEN 1 ELSE @i - @jOffset END
        SET @jEnd = CASE WHEN @i + @max >= @tLen THEN @tLen ELSE @i + @max END
        WHILE (@j <= @jEnd) BEGIN
            -- at this point, @distance holds the previous value (the cell above if we were using an m by n matrix)
            SET @left = UNICODE(SUBSTRING(@v0, @j, 1))
            SET @thisJ = @j
            SET @distance =
                CASE WHEN (@sChar = SUBSTRING(@t, @j, 1)) THEN @diag -- match, no change
                     ELSE 1 + CASE WHEN @diag < @left AND @diag < @distance THEN @diag -- substitution
                                   WHEN @left < @distance THEN @left -- insertion
                                   ELSE @distance -- deletion
                              END
                END
            SET @v0 = STUFF(@v0, @thisJ, 1, NCHAR(@distance))
            SET @diag = @left
            SET @j = CASE WHEN (@distance > @max) AND (@thisJ = @i + @lenDiff)
                          THEN @jEnd + 2
                          ELSE @thisJ + 1 END
        END
        SET @i = CASE WHEN @j > @jEnd + 1 THEN @sLen + 1 ELSE @i + 1 END
    END
    RETURN CASE WHEN @distance <= @max THEN @distance ELSE NULL END
END
I want to convert/cast these operators ('/' and '*') if this is possible. Maybe you know how that works, or you know what the internal coding of division and multiplication is, so I can continue from there?
What I want to do is, based on the last digit of the current system time, decide whether that digit is odd or even, and then do a multiplication or a division in the next calculations.
CREATE FUNCTION CalculateFactor()
RETURNS NVARCHAR(50)
AS
BEGIN
    DECLARE @time DATETIME2(7)
    DECLARE @length INT
    DECLARE @value NVARCHAR(20)
    DECLARE @operator NVARCHAR(20)
    SET @time = SYSDATETIME()
    SELECT @length = LEN(@time)
    SELECT @value = RIGHT(@time, 1);
    IF (@value % 2 = 0)
        SET @operator = '*'
    ELSE SET @operator = '/'
    DECLARE @SQL NVARCHAR(20)
    SET @SQL = '10' + @operator + '2' -- I get only '10 / 2' and '10 * 2' because these are strings; I cannot get the result of the operation, but I want 5 and 20
    RETURN @SQL
END
GO
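The underlying problem is that the expression is assembled as a string, which never gets evaluated. One fix is to select an operation directly instead of an operator character; here is a hedged Python sketch of that idea (the function name and the use of the microsecond field as the "last digit" are my assumptions, for illustration only):

```python
import operator
from datetime import datetime

def calculate_factor(now=None):
    """Map the parity of the time's last digit to an operation, instead of
    building '10*2' / '10/2' as a string that never gets evaluated."""
    now = now or datetime.now()
    last_digit = int(str(now.microsecond)[-1])
    op = operator.mul if last_digit % 2 == 0 else operator.truediv
    return op(10, 2)
```

In T-SQL the analogous move is a CASE expression, e.g. CASE WHEN @value % 2 = 0 THEN 10 * 2 ELSE 10 / 2 END, which avoids dynamic SQL entirely.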
How to generate a matrix of random numbers where the values in each row add up to X in T-SQL?
The solution matrix should be dynamic:
User can specify number of columns to be returned in the result
User can specify number of rows to be returned in the result
Each row must sum to X (e.g. 1)
create proc RandomNumberGenerator
(
    @rows int
  , @cols int
  , @rowsumtotal float
)
as
....
First create a UDF...
CREATE FUNCTION [dbo].[_ex_fn_SplitToTable] (@str varchar(5000), @sep varchar(1) = null)
RETURNS @ReturnVal table (n int, s varchar(5000))
AS
/*
Alpha Test
-----------
select * from [dbo].[_ex_fn_SplitToTable]('a b c d e',' ')
*/
BEGIN
    if @sep = ' '
    begin
        set @sep = CHAR(167)
        set @str = REPLACE(@str,' ',CHAR(167))
    end
    declare @str2 varchar(5000)
    declare @sep2 varchar(1)
    if LEN(ISNULL(@sep,'')) = 0
    begin
        declare @i int
        set @i = 0
        set @str2 = ''
        declare @char varchar(1)
        startloop:
        set @i += 1
        --print @i
        set @char = substring(@str,@i,1)
        set @str2 = @str2 + @char + ','
        if LEN(@str) <= @i
            goto exitloop
        goto startloop
        exitloop:
        set @str2 = left(@str2,LEN(@str2) - 1)
        set @sep2 = ','
        --print @str2
    end
    else
    begin
        set @str2 = @str
        set @sep2 = @sep
    end
    ;WITH Pieces(n, start, stop) AS (
        SELECT 1, 1, CHARINDEX(@sep2, @str2)
        UNION ALL
        SELECT n + 1, stop + 1, CHARINDEX(@sep2, @str2, stop + 1)
        FROM Pieces
        WHERE stop > 0
    )
    insert into @ReturnVal(n,s)
    SELECT n,
        SUBSTRING(@str2, start, CASE WHEN stop > 0 THEN stop-start ELSE 5000 END) AS s
    FROM Pieces option (maxrecursion 32767)
    RETURN
END
GO
Then, create this stored proc...
CREATE PROC [dbo].[RandomNumberGenerator]
(
    @Pockets int = 6,
    @SumTo float = 100,
    @i_iterations int = 100
)
/*
ALPHA TEST
----------
exec RandomNumberGenerator 10, 100, 500
*/
AS
if object_id('tempdb..#_Random_00') is not null drop table #_Random_00
declare @columnstring varchar(max) = (SELECT REPLICATE('c ',@Pockets) as Underline)
print @columnstring
if object_id('tempdb..#_Random_columns') is not null drop table #_Random_columns
select s+CONVERT(varchar,dbo.PadLeft(convert(varchar,n),'0',3)) cols
into #_Random_columns
from [dbo].[_ex_fn_SplitToTable](@columnstring,' ') where LEN(s) > 0
-- ===========================
--select * from #_Random_columns
-- ===========================
declare @columns_sql varchar(max)
set @columns_sql =
(
    select distinct
        stuff((SELECT distinct + cast(cols as varchar(50)) + ' float, '
            FROM (
                select cols
                from #_Random_columns
            ) t2
            --where t2.n = t1.n
            FOR XML PATH('')),3,0,'')
    from (
        select cols
        from #_Random_columns
    ) t1
)
set @columns_sql = LEFT(@columns_sql,LEN(@columns_sql) - 1)
print @columns_sql
declare @sql varchar(max)
set @sql = 'if object_id(''tempdb..##_proctable_Random_01'') is not null drop table ##_proctable_Random_01 '
print @sql
execute(@sql)
set @sql = 'create table ##_proctable_Random_01 (rowid int,' + @columns_sql + ')'
print @sql
execute(@sql)
declare @TotalOfRand float
declare @i_inner int
declare @i_outer int
set @i_outer = 0
start_outer:
set @i_outer = @i_outer + 1
set @i_inner = 0
declare @sumstring varchar(max)
set @sumstring = ''
start_inner:
set @i_inner = @i_inner + 1
set @sumstring = @sumstring + CONVERT(varchar, rand()) + ','
if @i_inner >= @Pockets
    goto exit_inner
goto start_inner
exit_inner:
set @TotalOfRand = ( select sum(convert(float,s)) from dbo._ex_fn_SplitToTable(@sumstring,',') )
declare @sumstring_quotient varchar(max)
set @sumstring_quotient = replace(@sumstring,',', '/' + Convert(varchar,@TotalOfRand) + '*' + convert(varchar,@SumTo) + ',')
set @sumstring_quotient = LEFT(@sumstring_quotient,len(@sumstring_quotient) - 1)
print @sumstring_quotient
set @sql = '
insert into ##_proctable_Random_01
select
( select count(*) + 1 from ##_proctable_Random_01 ) rowid,' + @sumstring_quotient
execute(@sql)
if @i_outer >= @i_iterations
    goto exit_outer
goto start_outer
exit_outer:
select * from ##_proctable_Random_01
drop table ##_proctable_Random_01
GO
My problem is pretty simple. I get a value from a SQL select which looks like this:
ARAMAUBEBABRBGCNDKDEEEFOFIFRGEGRIEISITJPYUCAKZKG
and I need it like this:
AR,AM,AU,BE,BA,BR,BG,CN,DK,DE,EE,FO,FI,FR,GE,GR,IE,IS,IT,JP,YU,CA,KZ,KG
The length is different in each dataset.
I tried it with format(), stuff() and so on but nothing brought me the result I need.
Thanks in advance
With a little help from a numbers table and FOR XML PATH.
-- Sample table
declare @T table
(
    Value nvarchar(100)
)
-- Sample data
insert into @T values
('ARAMAU'),
('ARAMAUBEBABRBGCNDKDEEEFOFIFRGEGRIEISITJPYUCAKZKG')
declare @Len int
set @Len = 2;
select stuff(T2.X.value('.', 'nvarchar(max)'), 1, 1, '')
from @T as T1
cross apply (select ','+substring(T1.Value, 1+Number*@Len, @Len)
             from Numbers
             where Number >= 0 and
                   Number < len(T1.Value) / @Len
             order by Number
             for xml path(''), type) as T2(X)
Try on SE-Data
Time to update your resume.
create function DontDoThis (
    @string varchar(max),
    @count int
)
returns varchar(max)
as
begin
    declare @result varchar(max) = ''
    declare @token varchar(max) = ''
    while DATALENGTH(@string) > 0
    begin
        select @token = left(@string, @count)
        -- drop only the leading chunk (REPLACE would remove every occurrence of it)
        select @string = substring(@string, @count + 1, datalength(@string))
        select @result += @token + case when DATALENGTH(@string) = 0 then '' else ',' end
    end
    return @result
end
Call:
declare @test varchar(max) = 'ARAMAUBEBABRBGCNDKDEEEFOFIFRGEGRIEISITJPYUCAKZKG'
select dbo.DontDoThis(@test, 2)
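For reference, the transformation being asked for is plain fixed-size chunking; in Python (shown as a cross-language sketch, not a T-SQL solution) it is a one-liner:

```python
def chunk_join(s, size=2, sep=","):
    """Split s into fixed-size chunks and join them with a separator."""
    return sep.join(s[i:i + size] for i in range(0, len(s), size))
```

A trailing chunk shorter than the requested size is kept as-is, matching the behavior the question implies.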
gbn's comment is exactly right, if not very diplomatic :) TSQL is a poor language for string manipulation, but if you write a CLR function to do this then you will have the best of both worlds: .NET string functions called from pure TSQL.
I believe this is what QQping is looking for.
-- select dbo.DelineateEachNth('ARAMAUBEBABRBGCNDKDEEEFOFIFRGEGRIEISITJPYUCAKZKG',2,',')
create function DelineateEachNth
(
    @str varchar(max), -- Incoming string to parse
    @length int, -- Length of desired segment
    @delimiter varchar(100) -- Segment delimiter (comma, tab, line-feed, etc)
)
returns varchar(max)
AS
begin
    declare @resultString varchar(max) = ''
    -- only set delimiter(s) when length of string is longer than desired segment
    if LEN(@str) > @length
    begin
        -- continue as long as there is a remaining string to parse
        while len(@str) > 0
        begin
            -- as long as we know we still need to create a segment...
            if LEN(@str) > @length
            begin
                -- build result string from leftmost segment length
                set @resultString = @resultString + left(@str, @length) + @delimiter
                -- continually shorten result string by current segment
                set @str = right(@str, len(@str) - @length)
            end
            -- as soon as the remaining string is segment length or less,
            -- just use the remainder and empty the string to close the loop
            else
            begin
                set @resultString = @resultString + @str
                set @str = ''
            end
        end
    end
    -- if string is less than segment length, just pass it through
    else
    begin
        set @resultString = @str
    end
    return @resultString
end
With a little help from Regex
select Wow=
(select case when MatchIndex %2 = 0 and MatchIndex!=0 then ',' + match else match end
from dbo.RegExMatches('[^\n]','ARAMAUBEBABRBGCNDKDEEEFOFIFRGEGRIEISITJPYUCAKZKG',1)
for xml path(''))
The question is self explanatory. Could you please point out a way to put spaces between each capital letter of a string.
SELECT dbo.SpaceBeforeCap('ThisIsATestString')
would result in
This Is A Test String.
This will add spaces only if the previous and next character is lowercase. That way 'MyABCAnalysis' will be 'My ABC Analysis'.
I added a check for a previous space too. Since some of our strings are prefixed with 'GR_' and some also contain underscores, we can use the replace function as follows:
select dbo.GR_SpaceBeforeCap(replace('GR_ABCAnalysis_Test','_',' '))
Returns 'GR ABC Analysis Test'
CREATE FUNCTION GR_SpaceBeforeCap (
    @str nvarchar(max)
)
returns nvarchar(max)
as
begin
    declare
        @i int, @j int
        , @cp nchar, @c0 nchar, @c1 nchar
        , @result nvarchar(max)
    select
        @i = 1
        , @j = len(@str)
        , @result = ''
    while @i <= @j
    begin
        select
            @cp = substring(@str,@i-1,1)
            , @c0 = substring(@str,@i+0,1)
            , @c1 = substring(@str,@i+1,1)
        if @c0 = UPPER(@c0) collate Latin1_General_CS_AS
        begin
            -- Add space if Current is UPPER
            -- and either Previous or Next is lower
            -- and Previous or Current is not already a space
            if @c0 = UPPER(@c0) collate Latin1_General_CS_AS
                and (
                    @cp <> UPPER(@cp) collate Latin1_General_CS_AS
                    or @c1 <> UPPER(@c1) collate Latin1_General_CS_AS
                )
                and @cp <> ' '
                and @c0 <> ' '
                set @result = @result + ' '
        end -- if @c0
        set @result = @result + @c0
        set @i = @i + 1
    end -- while
    return @result
end
Assuming SQL Server 2005 or later, this is modified from code taken here: http://www.kodyaz.com/articles/case-sensitive-sql-split-function.aspx
CREATE FUNCTION SpaceBeforeCap
(
    @str nvarchar(max)
)
returns nvarchar(max)
as
begin
    declare @i int, @j int
    declare @returnval nvarchar(max)
    set @returnval = ''
    select @i = 1, @j = len(@str)
    declare @w nvarchar(max)
    while @i <= @j
    begin
        if substring(@str,@i,1) = UPPER(substring(@str,@i,1)) collate Latin1_General_CS_AS
        begin
            if @w is not null
                set @returnval = @returnval + ' ' + @w
            set @w = substring(@str,@i,1)
        end
        else
            set @w = @w + substring(@str,@i,1)
        set @i = @i + 1
    end
    if @w is not null
        set @returnval = @returnval + ' ' + @w
    return ltrim(@returnval)
end
This can then be called just as you have suggested above.
This function combines previous answers. Selectively choose to preserve adjacent CAPS:
CREATE FUNCTION SpaceBeforeCap (
    @InputString NVARCHAR(MAX),
    @PreserveAdjacentCaps BIT
)
RETURNS NVARCHAR(MAX)
AS
BEGIN
    DECLARE
        @i INT, @j INT,
        @previous NCHAR, @current NCHAR, @next NCHAR,
        @result NVARCHAR(MAX)
    SELECT
        @i = 1,
        @j = LEN(@InputString),
        @result = ''
    WHILE @i <= @j
    BEGIN
        SELECT
            @previous = SUBSTRING(@InputString,@i-1,1),
            @current = SUBSTRING(@InputString,@i+0,1),
            @next = SUBSTRING(@InputString,@i+1,1)
        IF @current = UPPER(@current) COLLATE Latin1_General_CS_AS
        BEGIN
            -- Add space if Current is UPPER
            -- and either Previous or Next is lower or user chose not to preserve adjacent caps
            -- and Previous or Current is not already a space
            IF @current = UPPER(@current) COLLATE Latin1_General_CS_AS
                AND (
                    @previous <> UPPER(@previous) COLLATE Latin1_General_CS_AS
                    OR @next <> UPPER(@next) COLLATE Latin1_General_CS_AS
                    OR @PreserveAdjacentCaps = 0
                )
                AND @previous <> ' '
                AND @current <> ' '
                SET @result = @result + ' '
        END
        SET @result = @result + @current
        SET @i = @i + 1
    END
    RETURN @result
END
GO
SELECT dbo.SpaceBeforeCap('ThisIsASampleDBString', 1)
GO
SELECT dbo.SpaceBeforeCap('ThisIsASampleDBString', 0)
CLR and regular expressions, or 26 REPLACE statements with a case-sensitive COLLATE clause and a trim.
Another strategy would be to check the ascii value of each character:
create function SpaceBeforeCap
(@str nvarchar(max))
returns nvarchar(max)
as
begin
    declare @result nvarchar(max) = left(@str, 1),
        @i int = 2
    while @i <= len(@str)
    begin
        if ascii(substring(@str, @i, 1)) between 65 and 90
            select @result += ' '
        select @result += substring(@str, @i, 1)
        select @i += 1
    end
    return @result
end
/***
SELECT dbo.SpaceBeforeCap('ThisIsATestString')
**/
To avoid loops altogether, use of a tally table can help here. If you are running on SQL 2022, then the generate_series function can remove even this dependency. This method will be significantly faster than iterating through a loop.
create function core.ufnAddSpaceBeforeCapital
(
    @inputString nvarchar(max)
)
returns nvarchar(max)
as
begin
    declare @outputString nvarchar(max)
    select
        @outputString = string_agg(iif(t.value = 1, upper(substring(@inputString,t.value,1)), iif(ascii(substring(@inputString,t.value,1)) between 65 and 90, ' ','') + substring(@inputString,t.value,1)),'')
    from
        generate_series(1,cast(len(@inputString) as int)) t
    return @outputString
end
The scalar function is not inlineable, so I've provided an alternative inline table-valued function if that's what you need.
create function core.ufnAddSpaceBeforeCapitalITVF
(
    @inputString nvarchar(max)
)
returns table
as
return
(
    select string_agg(
            iif(t.value = 1,
                upper(substring(@inputString, t.value, 1)),
                iif(ascii(substring(@inputString, t.value, 1)) between 65 and 90, ' ', '')
                    + substring(@inputString, t.value, 1)
            ), '') within group (order by t.value) as outputString
    from generate_series(1, cast(len(@inputString) as int)) t
)
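An inline table-valued function is consumed with a plain SELECT, or applied per row with CROSS APPLY. A sketch; the table and column names in the second query are placeholders:

```sql
-- Single value
SELECT f.outputString
FROM core.ufnAddSpaceBeforeCapitalITVF('ThisIsATestString') AS f;

-- Applied per row (dbo.SomeTable / RawName are hypothetical names)
SELECT s.RawName, f.outputString
FROM dbo.SomeTable AS s
CROSS APPLY core.ufnAddSpaceBeforeCapitalITVF(s.RawName) AS f;
```

Because the function is inlineable, the optimizer can expand it into the calling query rather than invoking it row by row as it would a scalar UDF.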
While I really like the character-looping answers, I was not thrilled with the performance. I have found this performs in a fraction of the time for my use case.
CREATE function SpaceBeforeCap
(@examine nvarchar(max))
returns nvarchar(max)
as
begin
    DECLARE @index as INT
    SET @index = PATINDEX('%[^ ][A-Z]%', @examine COLLATE Latin1_General_BIN)
    WHILE @index > 0
    BEGIN
        SET @examine = SUBSTRING(@examine, 1, @index) + ' ' + SUBSTRING(@examine, @index + 1, LEN(@examine))
        SET @index = PATINDEX('%[^ ][A-Z]%', @examine COLLATE Latin1_General_BIN)
    END
    RETURN LTRIM(@examine)
end
This makes use of the fact that a case-sensitive pattern search only works in certain collations. The character class [^ ] means any character except a space, so as we add the missing spaces we match farther into the string until it is complete.
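A quick sanity check of the PATINDEX version (the input string is just an example):

```sql
SELECT dbo.SpaceBeforeCap('ThisIsATestString');
-- 'This Is A Test String'
```

Note that this version splits adjacent capitals as well (e.g. 'DB' becomes 'D B'), since it has no equivalent of the @PreserveAdjacentCaps flag from the first answer.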
I am trying to use a Levenshtein algorithm I found on the net to calculate the closest value to a search term, in order to implement fuzzy term matching. My current query runs for about 45 seconds, and I'm hoping I can optimize it. I've already added indexes for the fields I'm calculating the Levenshtein value for. The Levenshtein function I found may not be the most optimized, and I take no credit for its implementation. Here is that function:
CREATE FUNCTION [dbo].[LEVENSHTEIN]( @s NVARCHAR(MAX), @t NVARCHAR(MAX) )
/*
Levenshtein Distance Algorithm: TSQL Implementation
by Joseph Gama
http://www.merriampark.com/ldtsql.htm
Returns the Levenshtein Distance between strings s1 and s2.
Original developer: Michael Gilleland http://www.merriampark.com/ld.htm
Translated to TSQL by Joseph Gama
Fixed by Herbert Oppolzer / devio
as described in http://devio.wordpress.com/2010/09/07/calculating-levenshtein-distance-in-tsql
*/
RETURNS INT AS
BEGIN
    DECLARE @d NVARCHAR(MAX), @LD INT, @m INT, @n INT, @i INT, @j INT,
            @s_i NCHAR(1), @t_j NCHAR(1), @cost INT
    --Step 1
    SET @n = LEN(@s)
    SET @m = LEN(@t)
    SET @d = REPLICATE(NCHAR(0), (@n+1)*(@m+1))
    IF @n = 0
    BEGIN
        SET @LD = @m
        GOTO done
    END
    IF @m = 0
    BEGIN
        SET @LD = @n
        GOTO done
    END
    --Step 2: initialize the first column and first row of the matrix
    SET @i = 0
    WHILE @i <= @n BEGIN
        SET @d = STUFF(@d, @i+1, 1, NCHAR(@i)) --d(i, 0) = i
        SET @i = @i + 1
    END
    SET @i = 0
    WHILE @i <= @m BEGIN
        SET @d = STUFF(@d, @i*(@n+1)+1, 1, NCHAR(@i)) --d(0, j) = j
        SET @i = @i + 1
    END
    --Step 3
    SET @i = 1
    WHILE @i <= @n BEGIN
        SET @s_i = SUBSTRING(@s, @i, 1)
        --Step 4
        SET @j = 1
        WHILE @j <= @m BEGIN
            SET @t_j = SUBSTRING(@t, @j, 1)
            --Step 5
            IF @s_i = @t_j
                SET @cost = 0
            ELSE
                SET @cost = 1
            --Step 6
            SET @d = STUFF(@d, @j*(@n+1)+@i+1, 1,
                NCHAR(dbo.MIN3(
                    UNICODE(SUBSTRING(@d, @j*(@n+1)+@i-1+1, 1)) + 1,
                    UNICODE(SUBSTRING(@d, (@j-1)*(@n+1)+@i+1, 1)) + 1,
                    UNICODE(SUBSTRING(@d, (@j-1)*(@n+1)+@i-1+1, 1)) + @cost)
                ))
            SET @j = @j + 1
        END
        SET @i = @i + 1
    END
    --Step 7
    SET @LD = UNICODE(SUBSTRING(@d, @n*(@m+1)+@m+1, 1))
done:
    RETURN @LD
END
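The function above calls dbo.MIN3, which isn't included in the post. The name and signature follow the call site above; the body is my assumption of a straightforward three-way minimum:

```sql
-- Assumed helper: three-way minimum, as referenced by dbo.LEVENSHTEIN above
CREATE FUNCTION dbo.MIN3 (@a INT, @b INT, @c INT)
RETURNS INT AS
BEGIN
    DECLARE @m INT = @a
    IF @b < @m SET @m = @b
    IF @c < @m SET @m = @c
    RETURN @m
END
```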
And here is the query I'm using:
SELECT [Address], [dbo].[LEVENSHTEIN](@searchTerm, [Address]) AS LevenshteinDistance
FROM Streets
ORDER BY LevenshteinDistance
I'm not a DBA, so please forgive my ignorance of any best practices - that's why I'm here to learn :). I really don't want to offload this processing to the business layer and am hoping to keep it in the data layer, but with only 16k records taking 45 seconds to process, it's currently not usable. And this is only a small subset of the records that will comprise the entire data store once I'm done processing the input files. Thanks in advance.
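One cheap optimization if only near matches matter: the Levenshtein distance can never be less than the difference in string lengths, so rows whose length difference alone exceeds an acceptable threshold can be filtered out before the expensive function ever runs. A sketch; the @maxDistance threshold is my assumption:

```sql
-- Hypothetical pre-filter: skip rows whose length difference already
-- rules out a match within @maxDistance edits
DECLARE @maxDistance INT = 2;

SELECT [Address], [dbo].[LEVENSHTEIN](@searchTerm, [Address]) AS LevenshteinDistance
FROM Streets
WHERE ABS(LEN([Address]) - LEN(@searchTerm)) <= @maxDistance
ORDER BY LevenshteinDistance;
```

This doesn't speed up the function itself, but it can cut the number of rows the function is evaluated against, which is often where most of the 45 seconds goes.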
If you want it to run really fast, consider creating a DLL in C# (a SQL CLR function). It will improve your speed by 150% ;)
Here is my blog post that explains how to do it:
http://levenshtein.blogspot.com/2011/04/how-it-is-done.html
You should also read this thread and the links in it: http://www.vbforums.com/showthread.php?t=575471