I need to be able to query a SharePoint database for survey results. The type of data I'm having problems with is a "Rating Scale" value. So the data in each table column represents a whole group of sub-questions and their answers.
So the following is an example of what is found in ONE column:
1. Our function has defined how Availability is measured the hardware/software in Production;#3#2. Availability threshold levels exist for our function (e.g., SLA's);#3#3. Our function follows a defined process when there are threshold breaches;#4#4. Our function collects and maintains Availability data;#4#5. Comparative analysis helps identify trending with the Availability data;#4#6. Operating Level Agreements (OLA's) guide our interaction with other internal teams;#4#
The questions end with a semicolon, and their answers are enclosed between two # signs, so the answer to the first question is 3.
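Just to make the delimiters concrete, here is a minimal sketch of pulling the first question/answer pair out of a hard-coded fragment (illustrative only, not my real column):
DECLARE @QA varchar(200)
SET @QA = '1. Our function has defined how Availability is measured the hardware/software in Production;#3#'
SELECT
LEFT(@QA, CHARINDEX(';#', @QA)) AS Question, -- question text, up to and including the semicolon
SUBSTRING(@QA, CHARINDEX(';#', @QA) + 2,
CHARINDEX('#', @QA, CHARINDEX(';#', @QA) + 2) - (CHARINDEX(';#', @QA) + 2)) AS Answer -- the value between the two # signs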
When I export the results of the survey, each question becomes a column header and the answer becomes the value in the cell below, which is ideal for getting an average for each question. I would love to be able to replicate that with a SQL query.
But if I could get query results into two columns (Question, Answer)...I'd be thrilled with that.
Any help is appreciated.
Thanks very much
Hank Stallings
ADDENDUM:
This was my version of astander's solution...THANKS again!
DECLARE @Table TABLE(
QuestionSource VARCHAR(50),
QA VARCHAR(5000)
)
DECLARE @ReturnTable TABLE(
QuestionSource VARCHAR(50),
Question VARCHAR(5000),
Answer int
)
DECLARE @XmlField XML,
@QuestionSource VARCHAR(50)
INSERT INTO @Table SELECT
'Availability' AS QuestionSource,CONVERT(varchar(5000),ntext1) FROM UserData WHERE tp_ContentType = 'My Survey'
INSERT INTO @Table SELECT
'Capacity' AS QuestionSource,CONVERT(varchar(5000),ntext2) FROM UserData WHERE tp_ContentType = 'My Survey'
--SELECT * FROM @Table
DECLARE Cur CURSOR FOR
SELECT QuestionSource,
CAST(Val AS XML) XmlVal
FROM (
SELECT QuestionSource,
LEFT(Vals, LEN(Vals) - LEN('<option><q>')) Val
FROM (
SELECT QuestionSource,
'<option><q>' + REPLACE(REPLACE(REPLACE(QA,'&','&amp;'), ';#','</q><a>'), '#', '</a></option><option><q>') Vals
FROM @Table
) sub
) sub
OPEN Cur
FETCH NEXT FROM Cur INTO @QuestionSource,@XmlField
WHILE @@FETCH_STATUS = 0
BEGIN
INSERT INTO @ReturnTable
SELECT @QuestionSource,
T.split.query('q').value('.', 'nvarchar(max)') question,
T.split.query('a').value('.', 'nvarchar(max)') answer
FROM @XmlField.nodes('/option') T(split)
FETCH NEXT FROM Cur INTO @QuestionSource,@XmlField
END
CLOSE Cur
DEALLOCATE Cur
SELECT * FROM @ReturnTable
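And since the whole point was getting an average per question, this kind of follow-up against @ReturnTable (as declared above) seems to do the trick -- a sketch; Answer is an int, so I cast before averaging:
SELECT QuestionSource,
Question,
AVG(CAST(Answer AS decimal(10,2))) AS AvgAnswer -- cast so AVG isn't truncated to an int
FROM @ReturnTable
GROUP BY QuestionSource, Question
ORDER BY QuestionSource, Question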
You have to have a split function set up, but once you have it, try this cursor-free solution. I prefer the numbers table approach to splitting a string in T-SQL.
For this method to work, you need to do this one-time table setup:
SELECT TOP 10000 IDENTITY(int,1,1) AS Number
INTO Numbers
FROM sys.objects s1
CROSS JOIN sys.objects s2
ALTER TABLE Numbers ADD CONSTRAINT PK_Numbers PRIMARY KEY CLUSTERED (Number)
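If you want to double-check the one-time setup, a quick look at the Numbers table should show values running from 1 up to 10,000 (assuming sys.objects was large enough to produce that many cross-joined rows):
SELECT MIN(Number) AS MinNumber,
MAX(Number) AS MaxNumber,
COUNT(*) AS TotalRows
FROM Numbers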
Once the Numbers table is set up, create this split function, which WILL return empty strings and row numbers:
CREATE FUNCTION [dbo].[FN_ListToTableRows]
(
@SplitOn char(1) --REQUIRED, the character to split the @List string on
,@List varchar(8000)--REQUIRED, the list to split apart
)
RETURNS TABLE
AS
RETURN
(
----------------
--SINGLE QUERY-- --this WILL return empty rows
----------------
SELECT
ROW_NUMBER() OVER(ORDER BY number) AS RowNumber
,LTRIM(RTRIM(SUBSTRING(ListValue, number+1, CHARINDEX(@SplitOn, ListValue, number+1)-number - 1))) AS ListValue
FROM (
SELECT @SplitOn + @List + @SplitOn AS ListValue
) AS InnerQuery
INNER JOIN Numbers n ON n.Number < LEN(InnerQuery.ListValue)
WHERE SUBSTRING(ListValue, number, 1) = @SplitOn
);
GO
You can now easily split a CSV string into a table and join on it. Note that this split function returns empty strings and row numbers:
select * from dbo.FN_ListToTableRows(',','1,2,3,,,4,5,6777,,,')
OUTPUT:
RowNumber ListValue
-------------------- ------------
1 1
2 2
3 3
4
5
6 4
7 5
8 6777
9
10
11
(11 row(s) affected)
You can now use a CROSS APPLY to split every row in your table, like this:
DECLARE @YourTable table (RowID int, RowValue varchar(8000))
INSERT INTO @YourTable VALUES (1,'1. Our function has defined how Availability is measured the hardware/software in Production;#3#2. Availability threshold levels exist for our function (e.g., SLA''s);#3#3. Our function follows a defined process when there are threshold breaches;#4#4. Our function collects and maintains Availability data;#4#5. Comparative analysis helps identify trending with the Availability data;#4#6. Operating Level Agreements (OLA''s) guide our interaction with other internal teams;#4#')
INSERT INTO @YourTable VALUES (2,'1. one;#1#2. two;#2#3. three;#3#')
INSERT INTO @YourTable VALUES (3,'1. aaa;#1#2. bbb;#2#3. ccc;#3#')
;WITH AllRows As
(
SELECT
o.RowID,st.RowNumber,st.ListValue AS RowValue
FROM @YourTable o
CROSS APPLY dbo.FN_ListToTableRows('#',LEFT(o.RowValue,LEN(o.RowValue)-1)) AS st
)
SELECT
a.RowID,a.RowValue AS Question, b.RowValue AS Answer
FROM AllRows a
LEFT OUTER JOIN AllRows b ON a.RowID=b.RowID AND a.RowNumber+1=b.RowNumber
WHERE a.RowNumber % 2 = 1
OUTPUT:
RowID Question Answer
----------- ----------------------------------------------------------------------------------------------- -------
1 1. Our function has defined how Availability is measured the hardware/software in Production; 3
1 2. Availability threshold levels exist for our function (e.g., SLA's); 3
1 3. Our function follows a defined process when there are threshold breaches; 4
1 4. Our function collects and maintains Availability data; 4
1 5. Comparative analysis helps identify trending with the Availability data; 4
1 6. Operating Level Agreements (OLA's) guide our interaction with other internal teams; 4
2 1. one; 1
2 2. two; 2
2 3. three; 3
3 1. aaa; 1
3 2. bbb; 2
3 3. ccc; 3
(12 row(s) affected)
OK, let's see. I had to use a cursor, as this would probably have been better achieved from a programming language like C#, but here goes... Using SQL Server 2005, try the following. Let me know if you need any explanations.
DECLARE @Table TABLE(
QuestionSource VARCHAR(50),
QA VARCHAR(1000)
)
DECLARE @ReturnTable TABLE(
QuestionSource VARCHAR(50),
Question VARCHAR(1000),
Answer VARCHAR(10)
)
DECLARE @XmlField XML,
@QuestionSource VARCHAR(40)
INSERT INTO @Table SELECT
'Availability','1. Our function has defined how Availability is measured the hardware/software in Production;#3#2. Availability threshold levels exist for our function (e.g., SLA''s);#3#3. Our function follows a defined process when there are threshold breaches;#4#4. Our function collects and maintains Availability data;#4#5. Comparative analysis helps identify trending with the Availability data;#4#6. Operating Level Agreements (OLA''s) guide our interaction with other internal teams;#4#'
INSERT INTO @Table SELECT
'Capacity', '1. Our function has defined how Availability is measured the hardware/software in Production;#1#2. Availability threshold levels exist for our function (e.g., SLA''s);#2#3. Our function follows a defined process when there are threshold breaches;#3#4. Our function collects and maintains Availability data;#4#5. Comparative analysis helps identify trending with the Availability data;#5#6. Operating Level Agreements (OLA''s) guide our interaction with other internal teams;#6#'
DECLARE Cur CURSOR FOR
SELECT QuestionSource,
CAST(Val AS XML) XmlVal
FROM (
SELECT QuestionSource,
LEFT(Vals, LEN(Vals) - LEN('<option><q>')) Val
FROM (
SELECT QuestionSource,
'<option><q>' + REPLACE(REPLACE(QA, ';#','</q><a>'), '#', '</a></option><option><q>') Vals
FROM @Table
) sub
) sub
OPEN Cur
FETCH NEXT FROM Cur INTO @QuestionSource, @XmlField
WHILE @@FETCH_STATUS = 0
BEGIN
INSERT INTO @ReturnTable
SELECT @QuestionSource,
T.split.query('q').value('.', 'nvarchar(max)') question,
T.split.query('a').value('.', 'nvarchar(max)') answer
FROM @XmlField.nodes('/option') T(split)
FETCH NEXT FROM Cur INTO @QuestionSource, @XmlField
END
CLOSE Cur
DEALLOCATE Cur
SELECT *
FROM @ReturnTable
I have this DECLARE statement:
declare @ReferralLevelData table([Type of Contact] varchar(10));
insert into @ReferralLevelData values ('f2f'),('nf2f'),('Travel'),('f2f'),('nf2f'),('Travel'),('f2f'),('nf2f'),('Travel');
select (row_number() over (order by [Type of Contact]) % 3) +1 as [Referral ID]
,[Type of Contact]
from @ReferralLevelData
order by [Referral ID]
,[Type of Contact];
It does not insert into the table, so I feel this is not working as expected, i.e. it doesn't modify the table.
If it did work, I was hoping to modify the statement to make it an update.
At the moment the query just prints this result:
1 f2f
1 nf2f
1 Travel
2 f2f
2 nf2f
2 Travel
3 f2f
3 nf2f
3 Travel
EDIT:
I want to update the table to enter recurring data in groups of three.
I have a table of data; it is duplicated twice in the same table to make three sets.
Its "ReferenceID" is the primary key. I want to group the three rows that share a ReferenceID and inject the three values "f2f", "nf2f" and "Travel" into the column called "Type", in any order, but ensure that each ReferenceID gets each of those values only once.
Do you mean the following?
declare @ReferralLevelData table(
[Referral ID] int,
[Type of Contact] varchar(10)
);
insert into @ReferralLevelData([Referral ID],[Type of Contact])
select
(row_number() over (order by [Type of Contact]) % 3) +1 as [Referral ID]
,[Type of Contact]
from
(
values ('f2f'),('nf2f'),('Travel'),('f2f'),('nf2f'),('Travel'),('f2f'),('nf2f'),('Travel')
) v([Type of Contact]);
If it suits you, you can also use the following query to generate the data:
select r.[Referral ID],ct.[Type of Contact]
from
(
values ('f2f'),('nf2f'),('Travel')
) ct([Type of Contact])
cross join
(
values (1),(2),(3)
) r([Referral ID]);
I have a table with unstructured data that I am trying to analyze in order to build a relational lookup. I do not have access to word-cloud software.
I really have no idea how to solve this problem. Searching for solutions has led me to tools that might do this for me but cost money, not coded solutions.
Basically my data looks like this:
CK1 CK2 Comment
--------------------------------------------------------------
1 A This is a comment.
2 A Another comment here.
And this is what I need to create:
CK1 CK2 Words
--------------------------------------------------------------
1 A This
1 A is
1 A a
1 A comment.
2 A Another
2 A comment
2 A here.
What you are trying to do is tokenize a string using a space as the delimiter. In the SQL world, people often refer to functions that do this as a "splitter". The potential pitfall of using a splitter for this type of thing is that words can be separated by multiple spaces, tabs, CHAR(10)'s, CHAR(13)'s, etc. Poor grammar, such as not adding a space after a period, results in this:
" End of sentence.Next sentence"
sentence.Next is returned as a word.
The way I like to tokenize human text is to:
Replace any text that isn't a character with a space
Replace duplicate spaces
Trim the string
Split the newly transformed string using a space as the delimiter.
Below is my solution followed by the DDL to create the functions used.
-- Sample Data
DECLARE @yourtable TABLE (CK1 INT, CK2 CHAR(1), Comment VARCHAR(8000));
INSERT @yourtable (CK1, CK2, Comment)
VALUES
(1,'A','This is a typical comment...Follewed by another...'),
(2,'A','This comment has double spaces and tabs and even carriage
returns!');
-- Solution
SELECT t.CK1, t.CK2, split.itemNumber, split.itemIndex, split.itemLength, split.item
FROM @yourtable AS t
CROSS APPLY samd.patReplace8K(t.Comment,'[^a-zA-Z ]',' ') AS c1
CROSS APPLY dbo.RemoveDupChar8K(c1.newString,' ') AS c2
CROSS APPLY samd.delimitedSplitAB8K(LTRIM(RTRIM(c2.NewString)),' ') AS split;
Results (truncated for brevity):
CK1 CK2 itemNumber itemIndex itemLength item
----------- ---- -------------------- ----------- ----------- --------------
1 A 1 1 4 This
1 A 2 6 2 is
1 A 3 9 1 a
1 A 4 11 7 typical
1 A 5 19 7 comment
...
2 A 1 1 4 This
2 A 2 6 7 comment
2 A 3 14 3 has
2 A 4 18 6 double
...
Note that the splitter I'm using is based on Jeff Moden's DelimitedSplit8K, with a couple of tweaks.
Functions used:
CREATE FUNCTION dbo.rangeAB
(
@low bigint,
@high bigint,
@gap bigint,
@row1 bit
)
RETURNS TABLE WITH SCHEMABINDING AS RETURN
WITH L1(N) AS
(
SELECT 1
FROM (VALUES
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0)) T(N) -- 90 values
),
L2(N) AS (SELECT 1 FROM L1 a CROSS JOIN L1 b CROSS JOIN L1 c),
iTally AS (SELECT rn = ROW_NUMBER() OVER (ORDER BY (SELECT 1)) FROM L2 a CROSS JOIN L2 b)
SELECT r.RN, r.OP, r.N1, r.N2
FROM
(
SELECT
RN = 0,
OP = (@high-@low)/@gap,
N1 = @low,
N2 = @gap+@low
WHERE @row1 = 0
UNION ALL -- COALESCE required in the TOP statement below for error handling purposes
SELECT TOP (ABS((COALESCE(@high,0)-COALESCE(@low,0))/COALESCE(@gap,0)+COALESCE(@row1,1)))
RN = i.rn,
OP = (@high-@low)/@gap+(2*@row1)-i.rn,
N1 = (i.rn-@row1)*@gap+@low,
N2 = (i.rn-(@row1-1))*@gap+@low
FROM iTally AS i
ORDER BY i.rn
) AS r
WHERE @high&@low&@gap&@row1 IS NOT NULL AND @high >= @low AND @gap > 0;
GO
CREATE FUNCTION samd.NGrams8k
(
@string VARCHAR(8000), -- Input string
@N INT -- requested token size
)
RETURNS TABLE WITH SCHEMABINDING AS RETURN
SELECT
position = r.RN,
token = SUBSTRING(@string, CHECKSUM(r.RN), @N)
FROM dbo.rangeAB(1, LEN(@string)+1-@N,1,1) AS r
WHERE @N > 0 AND @N <= LEN(@string);
GO
CREATE FUNCTION samd.patReplace8K
(
@string VARCHAR(8000),
@pattern VARCHAR(50),
@replace VARCHAR(20)
)
RETURNS TABLE WITH SCHEMABINDING AS RETURN
SELECT newString =
(
SELECT CASE WHEN @string = CAST('' AS VARCHAR(8000)) THEN CAST('' AS VARCHAR(8000))
WHEN @pattern+@replace+@string IS NOT NULL THEN
CASE WHEN PATINDEX(@pattern,token COLLATE Latin1_General_BIN)=0
THEN ng.token ELSE @replace END END
FROM samd.NGrams8K(@string, 1) AS ng
ORDER BY ng.position
FOR XML PATH(''),TYPE
).value('text()[1]', 'VARCHAR(8000)');
GO
CREATE FUNCTION samd.delimitedSplitAB8K
(
@string VARCHAR(8000), -- input string
@delimiter CHAR(1) -- delimiter
)
RETURNS TABLE WITH SCHEMABINDING AS RETURN
SELECT
itemNumber = ROW_NUMBER() OVER (ORDER BY d.p),
itemIndex = CHECKSUM(ISNULL(NULLIF(d.p+1, 0),1)),
itemLength = CHECKSUM(item.ln),
item = SUBSTRING(@string, d.p+1, item.ln)
FROM (VALUES (DATALENGTH(@string))) AS l(s) -- length of the string
CROSS APPLY
(
SELECT 0 UNION ALL -- for handling leading delimiters
SELECT ng.position
FROM samd.NGrams8K(@string, 1) AS ng
WHERE token = @delimiter
) AS d(p) -- delimiter.position
CROSS APPLY (VALUES( --LEAD(d.p, 1, l.s+l.d) OVER (ORDER BY d.p) - (d.p+l.d)
ISNULL(NULLIF(CHARINDEX(@delimiter,@string,d.p+1),0)-(d.p+1), l.s-d.p))) AS item(ln);
GO
CREATE FUNCTION dbo.RemoveDupChar8K(@string varchar(8000), @char char(1))
RETURNS TABLE WITH SCHEMABINDING AS RETURN
SELECT NewString =
replace(replace(replace(replace(replace(replace(replace(
@string COLLATE LATIN1_GENERAL_BIN,
replicate(@char,33), @char), --33
replicate(@char,17), @char), --17
replicate(@char,9 ), @char), -- 9
replicate(@char,5 ), @char), -- 5
replicate(@char,3 ), @char), -- 3
replicate(@char,2 ), @char), -- 2
replicate(@char,2 ), @char); -- 2
GO
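As a quick, optional sanity check that the helper functions above were created with the expected signatures, something like this should run:
-- Unigrams of a short literal via NGrams8K
SELECT ng.position, ng.token
FROM samd.NGrams8K('abc', 1) AS ng;
-- A simple space-delimited split via delimitedSplitAB8K
SELECT s.itemNumber, s.item
FROM samd.delimitedSplitAB8K('This is a comment', ' ') AS s;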
1) If we are using SQL Server 2016 or above, then we should probably use the built-in function STRING_SPLIT.
-- SQL 2016 and above
DECLARE @txt NVARCHAR(100) = N'This is a comment.'
select [value] from STRING_SPLIT(@txt, ' ')
2) Only if 1 does not fit: if the number of separators (the space in our case) is at most 3, which fits your sample data, then we should probably use PARSENAME.
-- Before SQL 2016, if we have no more than 4 parts
DECLARE @txt NVARCHAR(100) = N'This is a comment.'
DECLARE @Temp NVARCHAR(200) = REPLACE (@txt,'.','#')
SELECT t FROM (VALUES(1),(2),(3),(4))T1(n)
CROSS APPLY (SELECT REPLACE(PARSENAME(REPLACE(@Temp,' ','.'),T1.n), '#','.'))T2(t)
3) Only if options 1 and 2 do not fit should we use a SQLCLR function:
http://dataeducation.com/sqlclr-string-splitting-part-2-even-faster-even-more-scalable/
4) Only if we cannot use 1 or 2, and we cannot use SQLCLR (which implies a real administration problem and has nothing to do with security, since you can keep all the SQLCLR functions in a read-only database for the use of all users, as I explain in my lectures), then you can use T-SQL and create a UDF:
https://sqlperformance.com/2012/07/t-sql-queries/split-strings
I want my Firebird SQL to loop through part of the code WHILE a condition is met.
Initially I didn't even think it was possible. However, I have done some reading and now believe that I can use a WHILE loop.
I understand a FOR loop is not what I want as it applies to the whole code, not just part of it.
I am using this in Excel and could use some VBA code to do what I want, but it would be better if I can do it all via Firebird SQL as then I can apply it elsewhere.
SELECT
'1' as "Qty",
'of ' || ALP3.PROPERTYVALUE AS "Total Qty"
FROM ASSEMBLYLINES
LEFT JOIN ASSEMBLYLINEPROPS ALP1 ON ALP1.HEADERSYSUNIQUEID = ASSEMBLYLINES.SYSUNIQUEID AND ALP1.PROPERTYNAME = 'Process2'
LEFT JOIN ASSEMBLYLINEPROPS ALP2 ON ALP2.HEADERSYSUNIQUEID = ASSEMBLYLINES.SYSUNIQUEID AND ALP2.PROPERTYNAME = 'Process3'
LEFT JOIN ASSEMBLYLINEPROPS ALP3 ON ALP3.HEADERSYSUNIQUEID = ASSEMBLYLINES.SYSUNIQUEID AND ALP3.PROPERTYNAME = 'Job Quantity'
LEFT JOIN ASSEMBLYLINEPROPS ALP4 ON ALP4.HEADERSYSUNIQUEID = ASSEMBLYLINES.SYSUNIQUEID AND ALP4.PROPERTYNAME = 'Drawing No'
WHERE ASSEMBLYLINES.ORDERNUMBER='16708R01'
AND ASSEMBLYLINES.LINECODE='FABPART'
AND ASSEMBLYLINES.SYSUSERCREATED <> 'EXTERNAL USER'
ORDER BY ALP4.PROPERTYVALUE
My results using the code above are:
Qty Total Qty
1 4
However, what I want is:
Qty Total Qty
1 4
2 4
3 4
4 4
I understand the While loop would be something like:
While Qty <= ALP3.PROPERTYVALUE Do
<<output>>
Loop
So, your "quantity" column is not actually a quantity of some real data (like quantity of containers in cargo ship), but a row number in some your output report/grid.
And then what you want is limiting the output "rowset" - matrix, table, grid - to some N first rows.
Well, that is exactly how it is done, asking for the first rows only.
Select FIRST(4) column1, column2, column3
From table1
Where condition1 and condition2 or condition3
See the "first" clause in documentation: https://www.firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref25-dml-select.html
Also see "Limiting result rows" chapyer in Wikipedia: https://en.wikipedia.org/wiki/Select_%28SQL%29#Limiting_result_rows
You can also use "window functions" starting with Firebird version 3, but they are somewhat overkill for the simple task of "only give me first N rows".
Now, there is one more method that lets you embed an arbitrary cancel condition, but it comes from the "ugly hacks" toolbox and does not work in the typical situation where several simultaneous connections from different client programs are running. You can use a "generator" as part of the WHERE clause:
Select .....
Where (GEN_ID(cancel_generator_name, 0) = 0) AND ( ...your normal conditions... )
You set the generator value to 0 before the query, and your client evaluates some conditions of your choice while reading the data; when it wants to, it issues the generator-change command from some other SQL command/library object, which immediately skips the rest of the query. This is sometimes a useful technique, but only in very specific, rare situations.
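Roughly, the moving parts would look like this (the generator and table names here are made up purely for illustration):
-- Connection A: reset the flag before starting the long query
SET GENERATOR CANCEL_FLAG TO 0;
-- Connection A: the long-running query checks the flag on every row
SELECT b.ID, b.PAYLOAD
FROM BIG_TABLE b
WHERE GEN_ID(CANCEL_FLAG, 0) = 0 -- stays true until someone bumps the generator
AND b.STATUS = 'OPEN'; -- the "normal" conditions
-- Connection B: bump the flag; the running query stops matching rows
SET GENERATOR CANCEL_FLAG TO 1;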
Since Mark seems to be better at guessing than me, here are some outlines for future guesswork.
SP is a standard abbreviation for SQL Stored Procedure. Firebird's Execute Block is essentially an anonymous non-persistent SP.
So, we start with a persistent and named SP.
create or alter procedure SEQ (
FROM_1_TO integer not null)
returns (
COUNTER integer)
as
begin
counter = 1;
while ( counter <= from_1_to ) do begin
suspend;
counter = counter + 1;
end
end
Select 1, s.counter from rdb$database, seq(5) s
CONSTANT COUNTER
1 1
1 2
1 3
1 4
1 5
The next question would be how to
join the table with SP (stored procedure) dependent upon specific table row values
avoid SP being executed with NULL parameter values
The answer is - LEFT JOIN, as shown in the FAQ: http://www.firebirdfaq.org/faq143/
CREATE TABLE T2 (
ID INTEGER NOT NULL PRIMARY KEY,
TITLE VARCHAR(10) NOT NULL,
QTY INTEGER NOT NULL
);
INSERT INTO T2 (ID, TITLE, QTY) VALUES (1, 'aaaa', 2);
INSERT INTO T2 (ID, TITLE, QTY) VALUES (2, 'bbbb', 5);
INSERT INTO T2 (ID, TITLE, QTY) VALUES (3, 'ccccc', 4);
Select * from t2 t
left join seq(t.qty) s on 1=1
ID TITLE QTY COUNTER
1 aaaa 2 1
1 aaaa 2 2
2 bbbb 5 1
2 bbbb 5 2
2 bbbb 5 3
2 bbbb 5 4
2 bbbb 5 5
3 ccccc 4 1
3 ccccc 4 2
3 ccccc 4 3
3 ccccc 4 4
If you have many different queries on different tables/fields that require this row-cloning, then having a dedicated counter-generating SP makes sense.
However, if you only need this rather exotic row-cloning once, then polluting the global namespace with an SP you would never need again is less of a good idea.
It seems one cannot select from an EB, though: Select from execute block?
So you would have to make a specific ad hoc EB exactly for your SELECT statement. Which, arguably, might be the very raison d'être of the anonymous non-persistent EB.
execute block
returns (ID INTEGER, TITLE VARCHAR(10), QTY INTEGER, COUNTER INTEGER)
as
begin
for select
id, title, qty from t2
into :id, :title, :qty
do begin
counter = 1;
while
(counter <= qty)
do begin
suspend;
counter = counter + 1;
end
end
end
However, the data access library your application uses to connect to Firebird then has to understand that, while this query is not a SELECT query, it still returns a "rowset". Usually they do, but who knows.
I have a scenario wherein I have to remove all the strings except a, b, or c.
My sample table is as follows:
Id Product
------------------
1. a,b,Da,c
2. Ty,a,b,c
3. a,sds,b
Sample output
Id Product
----------------
1. a,b,c
2. a,b,c
3. a,b
My current version is Microsoft SQL Server 2008 R2
This should help you out. As I state in the comments, I make use of Jeff Moden's DelimitedSplit8K, as you're using an older version of SQL Server. If you were using 2016+, you would have access to STRING_SPLIT. I also normalise your data, as storing delimited data is almost always a bad idea.
CREATE TABLE #Sample (id int, Product varchar(20));
INSERT INTO #Sample
VALUES (1,'a,b,Da,c'),
(2,'Ty,a,b,c'),
(3,'a,sds,b');
GO
--The first problem you have is you're storing delimited data
--You really should be storing each item on a separate row.
--This is, however, quite easy to do. i'm going to use a different
--table, however, you can change this fairly easily for your
--needs.
CREATE TABLE #Sample2 (id int, Product varchar(2));
GO
--You can split the data out by using a Splitter.
--My personal preference is Jeff Moden's DelimitedSplit8K
--which I've linked to above.
INSERT INTO #Sample2 (id, Product)
SELECT id, Item AS Product
FROM #Sample S
CROSS APPLY dbo.DelimitedSplit8K(S.Product,',') DS
WHERE DS.Item IN ('a','b','c');
GO
--And hey presto! Your normalised data, and without the unwanted values
SELECT *
FROM #Sample2;
GO
DROP TABLE #Sample;
DROP TABLE #Sample2;
If you have to keep the delimited format, you can use STUFF and FOR XML PATH:
WITH Split AS(
SELECT id,
Item AS Product,
ItemNumber
FROM #Sample S
CROSS APPLY dbo.DelimitedSplit8K(S.Product,',') DS
WHERE DS.Item IN ('a','b','c'))
SELECT id,
STUFF((SELECT ',' + Product
FROM Split sq
WHERE sq.id = S.id
ORDER BY ItemNumber
FOR XML PATH('')),1,1,'')
FROM Split S
GROUP BY id;
This will also do the thing, using XML only:
select * into #t from (values('a,b,Da,c'),('Ty,a,b,c'),('a,sds,b'))v(Product)
;
with x as (
SELECT t.Product, st.sProduct
FROM #t t
cross apply (
SELECT CAST(N'<root><r>' + REPLACE(t.Product,',', N'</r><r>') + N'</r></root>' as xml) xProduct
)xt
cross apply (
select CAST(r.value('.','NVARCHAR(MAX)') as nvarchar) sProduct
from xt.xProduct.nodes(N'//root/r') AS RECORDS(r)
) st
where st.sProduct in ('a', 'b', 'c')
)
select distinct x.Product, REVERSE(SUBSTRING(REVERSE(cleared.cProduct), 2, 999)) cleared
from x
cross apply ( select (
select distinct ref.sProduct + ','
from x ref
where ref.Product = x.Product
for xml path('') )
)cleared(cProduct)
;
drop table #t
To satisfy security requirements, I need to find a way to replace SSN's with unique, random 9 digit numbers, before providing said database to a developer. The SSN is in a column in a table of a database. There may be 10's of thousands of rows in said table. The number does not need hyphens. I am a beginner with SQL and programming in general.
I have been unable to find a solution for my specific needs. Nothing seems quite right. But if you know of a thread that I have missed, please let me know.
Thanks for any help!
Here is one way.
I'm assuming that you already have a backup of the real data as this update is not reversible.
Below I've assumed your table name is Person with your ssn column named SSN.
UPDATE Person SET
SSN = CAST(LEFT(CAST(ABS(CAST(CAST(NEWID() as BINARY(10)) as int)) as varchar(max)) + '00000000',9) as int)
If they do not have to be random, you could just replace them with ascending numeric values. Failing that, you'd have to generate a random number. As you may have discovered, the RAND function will only generate a single value per query statement (select, update, etc.); the work-around to that is the newid() function, which generates a GUID for each row produced by a query (run SELECT newid() from MyTable to see how this works). Wrap this in a checksum() to generate an integer; take that modulo 1,000,000,000 to get a value within the SSN range (0 to 999,999,999); and, assuming you're storing it as a char(9), prefix it with leading zeros.
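Pulled out on its own, the expression described above looks something like this (FakeSSN is just an illustrative alias):
SELECT RIGHT('000000000' + CAST(ABS(CHECKSUM(NEWID())) % 1000000000 AS varchar(9)), 9) AS FakeSSN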
The next trick is ensuring it's unique for all values in your table. This gets tricky, and I'd do it by setting up a temp table with the values, populating it, then copying them over. Let's see now…
DECLARE @DummySSN as table
(
PrimaryKey int not null
,NewSSN char(9) not null
)
-- Load initial values
INSERT @DummySSN
select
UserId
,right('000000000' + cast(abs(checksum(newid()))%1000000000 as varchar(9)), 9)
from Users
-- Check for dups
select NewSSN from @DummySSN group by NewSSN having count(*) > 1
-- Loop until values are unique
WHILE exists (SELECT 1 from @DummySSN group by NewSSN having count(*) > 1)
UPDATE @DummySSN
set NewSSN = right('000000000' + cast(abs(checksum(newid()))%1000000000 as varchar(9)), 9)
where NewSSN in (select NewSSN from @DummySSN group by NewSSN having count(*) > 1)
-- Check for dups
select NewSSN from @DummySSN group by NewSSN having count(*) > 1
This works for a small table I have, and it should work for a large one. I don't see this turning into an infinite loop, but even so you might want to add a check to exit the loop after, say, 10 iterations.
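For example, a capped version of the de-duplication step (continuing the same batch as the script above) might look like:
DECLARE @i int
SET @i = 0
WHILE EXISTS (SELECT 1 FROM @DummySSN GROUP BY NewSSN HAVING COUNT(*) > 1)
AND @i < 10 -- bail out after 10 passes, just in case
BEGIN
UPDATE @DummySSN
SET NewSSN = right('000000000' + cast(abs(checksum(newid()))%1000000000 as varchar(9)), 9)
WHERE NewSSN IN (SELECT NewSSN FROM @DummySSN GROUP BY NewSSN HAVING COUNT(*) > 1)
SET @i = @i + 1
END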
I've run a couple of million tests on this and it seems to generate random (URN) 9-digit numbers (no leading zeros).
I cannot think of a more efficient way to do this.
SELECT CAST(FLOOR(RAND(CHECKSUM(NEWID())) * 900000000 ) + 100000000 AS BIGINT)
The test used:
;WITH Fn(N) AS
(
SELECT CAST(FLOOR(RAND(CHECKSUM(NEWID())) * 900000000 ) + 100000000 AS BIGINT)
UNION ALL
SELECT CAST(FLOOR(RAND(CHECKSUM(NEWID())) * 900000000 ) + 100000000 AS BIGINT)
FROM Fn
)
,Tester AS
(
SELECT TOP 5000000 *
FROM Fn
)
SELECT LEN(MIN(N))
,LEN(MAX(N))
,MIN(N)
,MAX(N)
FROM Tester
OPTION (MAXRECURSION 0)
Not as fast, but easiest... I added some dots...
DECLARE @tr NVARCHAR(40)
SET @tr = CAST(ROUND((888*RAND()+111),0) AS CHAR(3)) + '.' +
CAST(ROUND((8888*RAND()+1111),0) AS CHAR(4)) + '.' + CAST(ROUND((8888*RAND()+1111),0) AS
CHAR(4)) + '.' + CAST(ROUND((88*RAND()+11),0) AS CHAR(2))
PRINT @tr
If the requirement is to obfuscate a database, then this will return the same unique value for each distinct SSN in any table, preserving referential integrity in the output without having to do a lookup and translate.
SELECT CAST(RAND(SSN)*999999999 AS INT)
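Applied as an update, with the same assumed Person/SSN names used earlier in this thread:
UPDATE Person
SET SSN = CAST(RAND(SSN)*999999999 AS INT) -- the same input SSN always maps to the same replacement value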