Convert Varchar to Ascii - tsql

I'm trying to convert the contents of a VARCHAR field into a unique number that can be easily referenced by a 3rd party.
How can I convert a varchar to its ASCII string equivalent in T-SQL? The ASCII() function converts a single character, but what can I do to convert an entire string?
I've tried using
CAST(ISNULL(ASCII(Substring(RTRIM(LTRIM(PrimaryContactRegion)),1,1)),'')AS VARCHAR(3))
+ CAST(ISNULL(ASCII(Substring(RTRIM(LTRIM(PrimaryContactRegion)),2,1)),'')AS VARCHAR(3))
....but this is tedious, stupid looking, and just doesn't really work for long strings. Or, if it is a better option, how would I do the same thing in SSRS?

try something like this:
DECLARE @YourString varchar(500)
SELECT @YourString = 'Hello World!'

;WITH AllNumbers AS
(
    SELECT 1 AS Number
    UNION ALL
    SELECT Number+1
    FROM AllNumbers
    WHERE Number < LEN(@YourString)
)
SELECT
    (SELECT
         ASCII(SUBSTRING(@YourString, Number, 1))
     FROM AllNumbers
     ORDER BY Number
     FOR XML PATH(''), TYPE
    ).value('.', 'varchar(max)') AS NewValue
--OPTION (MAXRECURSION 500) --<< needed if you have a string longer than 100 characters
OUTPUT:
NewValue
---------------------------------------
72101108108111328711111410810033
(1 row(s) affected)
just to test it out:
;WITH AllNumbers AS
(
    SELECT 1 AS Number
    UNION ALL
    SELECT Number+1
    FROM AllNumbers
    WHERE Number < LEN(@YourString)
)
SELECT SUBSTRING(@YourString, Number, 1), ASCII(SUBSTRING(@YourString, Number, 1)), *
FROM AllNumbers
OUTPUT:
                 Number
---- ----------- -----------
H    72          1
e    101         2
l    108         3
l    108         4
o    111         5
     32          6
W    87          7
o    111         8
r    114         9
l    108         10
d    100         11
!    33          12
(12 row(s) affected)
Also, you might want to use this:
RIGHT('000'+CONVERT(varchar(max),ASCII(SUBSTRING(@YourString,Number,1))),3)
to force all ASCII values into 3 digits; I'm not sure whether this is necessary for your usage or not.
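For example, plugging that expression into the FOR XML query above (a sketch assuming the same @YourString and AllNumbers CTE):
;WITH AllNumbers AS
(
    SELECT 1 AS Number
    UNION ALL
    SELECT Number+1
    FROM AllNumbers
    WHERE Number < LEN(@YourString)
)
SELECT
    (SELECT
         RIGHT('000'+CONVERT(varchar(max),ASCII(SUBSTRING(@YourString,Number,1))),3)
     FROM AllNumbers
     ORDER BY Number
     FOR XML PATH(''), TYPE
    ).value('.', 'varchar(max)') AS NewValue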
Output using 3 digits per character:
NewValue
-------------------------------------
072101108108111032087111114108100033
(1 row(s) affected)

Well, I think that a solution to this will be very slow, but I guess that you could do something like this:
DECLARE @count INT, @string VARCHAR(100), @ascii VARCHAR(MAX)
SET @count = 1
SET @string = 'put your string here'
SET @ascii = ''
WHILE @count <= DATALENGTH(@string)
BEGIN
    -- CAST is needed here: ASCII() returns an int, which cannot be concatenated to a varchar directly
    SELECT @ascii = @ascii + '&#' + CAST(ASCII(SUBSTRING(@string, @count, 1)) AS VARCHAR(3)) + ';'
    SET @count = @count + 1
END
SET @ascii = LEFT(@ascii, LEN(@ascii) - 1)
SELECT @ascii
I'm not at a PC with a database engine, so I can't really test this code. If it works, then you can create a UDF based on this.
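For instance, a minimal sketch of such a UDF, mirroring the loop above (the name dbo.VarcharToAscii is just illustrative, and this is equally untested):
CREATE FUNCTION dbo.VarcharToAscii (@string VARCHAR(100))
RETURNS VARCHAR(MAX)
AS
BEGIN
    DECLARE @count INT, @ascii VARCHAR(MAX)
    SET @count = 1
    SET @ascii = ''
    WHILE @count <= DATALENGTH(@string)
    BEGIN
        SET @ascii = @ascii + '&#' + CAST(ASCII(SUBSTRING(@string, @count, 1)) AS VARCHAR(3)) + ';'
        SET @count = @count + 1
    END
    -- Guard against an empty input before trimming the trailing character
    RETURN CASE WHEN @ascii = '' THEN '' ELSE LEFT(@ascii, LEN(@ascii) - 1) END
END
-- Usage: SELECT dbo.VarcharToAscii('Hello World!')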

Taking N-samples from each group in PostgreSQL

I have a table containing data that has a column named id that looks like below:
id   value 1   value 2   value 3
1    244       550       1000
1    251       551       700
1    540       60        1200
...  ...       ...       ...
2    19        744       2000
2    10        903       100
2    44        231       600
2    120       910       1100
...  ...       ...       ...
I want to take 50 sample rows per id that exists but if less than 50 exist for the group to simply take the entire set of data points.
For example I would like a maximum 50 data points randomly selected from id = 1, id = 2 etc...
I cannot find any previous questions similar to this, but I have tried taking a stab at at least logically working through a solution where I could iterate and UNION ALL queries by id, limited to 50:
SELECT * FROM (SELECT * FROM schema.table AS tbl WHERE tbl.id = X LIMIT 50) UNION ALL;
But it's obvious that you cannot use this type of solution, because UNION ALL requires combining outputs from one id to the next, and I do not have a list of id values to use in place of X in tbl.id = X.
Is there a way to accomplish this by gathering that list of unique id values and UNION ALLing the results, or is there a more optimal way this could be done?
If you want to select a random sample for each id, then you need to randomize the rows somehow. Here is a way to do it:
select * from (
select *, row_number() over (partition by id order by random()) as u
from schema.table
) as a
where u <= 50;
Example (limiting to 3, and adding a row number within each id so you can see the randomness of the selection):
Setup
DROP TABLE IF EXISTS foo;
CREATE TABLE foo
(
id int,
value1 int,
idrow int
);
INSERT INTO foo
select 1 as id, (1000*random())::int as value1, generate_series(1, 100) as idrow
union all
select 2 as id, (1000*random())::int as value1, generate_series(1, 100) as idrow
union all
select 3 as id, (1000*random())::int as value1, generate_series(1, 100) as idrow;
Selection
select * from (
select *, row_number() over (partition by id order by random()) as u
from foo
) as a
where u <= 3;
Output:
id   value1   idrow   u
1    542      6       1
1    24       86      2
1    155      74      3
2    505      95      1
2    100      46      2
2    422      33      3
3    966      88      1
3    747      89      2
3    664      19      3
In case you are looking to get 50 (or fewer) rows from each group of IDs, then you can use windowing -
From question - "I want to take 50 sample rows per id that exists but if less than 50 exist for the group to simply take the entire set of data points."
Query -
with data as (
select row_number() over (partition by id order by random()) rn,
* from table_name)
select * from data where rn<=50 order by id;
Fiddle.
Your description of trying to get the UNION ALL without specifying all the branches ahead of time is aiming for a LATERAL join. And that is one way to solve the problem. But unless you have a table of all distinct ids, you would have to compute one on the fly. For example (using the same fiddle as Pankaj used):
with uniq as (select distinct id from test)
select foo.* from uniq cross join lateral
(select * from test where test.id=uniq.id order by random() limit 3) foo
This could be either slower or faster than the Window Function method, depending on your system and your data and your indexes. In my hands, it was quite a bit faster even with the need to dynamically compute the list of distinct ids.

T-SQL: Combining rows based on another table

I am seeking to alter a table's content based on information from another table, using a stored procedure. To make my point (and work around my rusty English skills) I created the following simplification.
I have a table with fragment amounts of the form
SELECT * FROM [dbo].[obtained_fragments] ->
fragment amount
22 42
76 7
101 31
128 4
177 22
212 6
and a table that lists all possible combinations to combine these fragments to other fragments.
SELECT * FROM [dbo].[possible_combinations] ->
fragment consists_of_f1 f1_amount_needed consists_of_f2 f2_amount_needed
1001 128 1 22 3
1004 151 1 101 12
1012 128 1 177 6
1047 212 1 76 4
My aim is to alter the first table so that all possible fragment combinations are performed, leading to
SELECT * FROM [dbo].[obtained_fragments] ->
fragment amount
22 30
76 3
101 31
177 22
212 5
1001 4
1047 1
In words, combined fragments are added to the table based on [dbo].[possible_combinations], and the amount of needed fragments is reduced. Depleted fragments are removed from the table.
How do I achieve this fragment transformation in an easy way? I started writing a while loop, checking if sufficient fragments are available, inside of a for loop iterating through the fragment numbers. However, I am unable to come up with a functional amount check and begin to wonder if this is even possible in T-SQL this way.
The code doesn't have to be super efficient since both tables will always be smaller than 200 rows.
It is important to note that it doesn't matter which combinations are created.
It might come in handy that [f1_amount_needed] always has a value of 1.
UPDATE
Using the solution of iamdave, which works perfectly fine as long as I don't touch it, I receive the following error message:
Column name or number of supplied values does not match table definition.
I barely changed anything really. Is there a chance that using existing tables with more than the necessary columns instead of declaring the tables (as iamdave did) makes this difference?
DECLARE @t TABLE(Binding_ID int, Exists_of_Binding_ID_2 int, Exists_of_Pieces_2 int, Binding1 int, Binding2 int);

WHILE 1=1
BEGIN
    DELETE @t

    INSERT INTO @t
    SELECT TOP 1
         k.Binding_ID
        ,k.Exists_of_Binding_ID_2
        ,k.Exists_of_Pieces_2
        ,g1.mat_Binding_ID AS Binding1
        ,g2.mat_Binding_ID AS Binding2
    FROM [dbo].[vwCombiBinding] AS k
    JOIN [leer].[sandbox5] AS g1
        ON k.Exists_of_Binding_ID_1 = g1.mat_Binding_ID AND g1.Amount >= 1
    JOIN [leer].[sandbox5] AS g2
        ON k.Exists_of_Binding_ID_2 = g2.mat_Binding_ID AND g2.Amount >= k.Exists_of_Pieces_2
    ORDER BY k.Binding_ID

    IF (SELECT COUNT(1) FROM @t) = 1
    BEGIN
        UPDATE g
        SET Amount = g.Amount + 1
        FROM [leer].[sandbox5] AS g
        JOIN @t AS t
            ON g.mat_Binding_ID = t.Binding_ID

        INSERT INTO [leer].[sandbox5]
        SELECT
             t.Binding_ID
            ,1
        FROM @t AS t
        WHERE NOT EXISTS (SELECT NULL FROM [leer].[sandbox5] AS g WHERE g.mat_Binding_ID = t.Binding_ID);

        UPDATE g
        SET Amount = g.Amount - 1
        FROM [leer].[sandbox5] AS g
        JOIN @t AS t
            ON g.mat_Binding_ID = t.Binding1

        UPDATE g
        SET Amount = g.Amount - t.Exists_of_Pieces_2
        FROM [leer].[sandbox5] AS g
        JOIN @t AS t
            ON g.mat_Binding_ID = t.Binding2
    END
    ELSE
        BREAK
END
SELECT * FROM [leer].[sandbox5]
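If the mismatch is indeed caused by sandbox5 having more columns than the two values being inserted, then listing the target columns explicitly on that INSERT would be one way to rule it out - a sketch, assuming the columns are called mat_Binding_ID and Amount and that any remaining columns are nullable or have defaults:
INSERT INTO [leer].[sandbox5] (mat_Binding_ID, Amount)  -- hypothetical column list; adjust to the real table definition
SELECT
     t.Binding_ID
    ,1
FROM @t AS t
WHERE NOT EXISTS (SELECT NULL FROM [leer].[sandbox5] AS g WHERE g.mat_Binding_ID = t.Binding_ID);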
You can do this with a while loop that contains several statements to handle your iterative data updates. As you need to make changes based on a re-assessment of your data each iteration this has to be done in a loop of some kind:
declare @f table(fragment int, amount int);
insert into @f values (22,42),(76,7),(101,31),(128,4),(177,22),(212,6);

declare @c table(fragment int, consists_of_f1 int, f1_amount_needed int, consists_of_f2 int, f2_amount_needed int);
insert into @c values (1001,128,1,22,3),(1004,151,1,101,12),(1012,128,1,177,6),(1047,212,1,76,4);

declare @t table(fragment int, consists_of_f2 int, f2_amount_needed int, fragment1 int, fragment2 int);

while 1 = 1
begin
    -- Clear out staging area
    delete @t;

    -- Populate with the latest possible combination
    insert into @t
    select top 1 c.fragment
                ,c.consists_of_f2
                ,c.f2_amount_needed
                ,f1.fragment as fragment1
                ,f2.fragment as fragment2
    from @c as c
    join @f as f1
        on c.consists_of_f1 = f1.fragment
            and f1.amount >= 1
    join @f as f2
        on c.consists_of_f2 = f2.fragment
            and f2.amount >= c.f2_amount_needed
    order by c.fragment;

    -- Update fragments table if a new combination can be made
    if (select count(1) from @t) = 1
    begin
        -- Update if additional fragment
        update f
        set amount = f.amount + 1
        from @f as f
            join @t as t
                on f.fragment = t.fragment;

        -- Insert if a new fragment
        insert into @f
        select t.fragment
              ,1
        from @t as t
        where not exists(select null
                         from @f as f
                         where f.fragment = t.fragment
                         );

        -- Update fragment1 amounts
        update f
        set amount = f.amount - 1
        from @f as f
            join @t as t
                on f.fragment = t.fragment1;

        -- Update fragment2 amounts
        update f
        set amount = f.amount - t.f2_amount_needed
        from @f as f
            join @t as t
                on f.fragment = t.fragment2;
    end
    else -- If no new combinations possible, break the loop
        break
end;

select *
from @f;
Output:
+----------+--------+
| fragment | amount |
+----------+--------+
| 22 | 30 |
| 76 | 3 |
| 101 | 31 |
| 128 | 0 |
| 177 | 22 |
| 212 | 5 |
| 1001 | 4 |
| 1047 | 1 |
+----------+--------+
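Note that this output keeps fragment 128 with an amount of 0, while the question expects depleted fragments to disappear. If they should be removed entirely, a final cleanup before the last select would be enough - a sketch:
delete @f where amount = 0;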

PostgreSQL - dynamic INSERT on column names

I'm looking to dynamically insert a set of columns from one table to another in PostgreSQL. What I think I'd like to do is read in a 'checklist' of column headings (those columns which exist in table 1, the storage table), and if they exist in the export table (table 2) then insert them all at once into the storage table. Table 2 will be variable in its columns, though - once imported I'll drop it and import new data with a potentially different column structure. So I need to import it based on the column names.
e.g.
Table 1. - The storage table
ID NAME YEAR LITH_AGE PROV_AGE SIO2 TIO2 CAO MGO COMMENTS
1 John 1998 2000 3000 65 10 5 5 comment1
2 Mark 2005 2444 3444 63 8 2 3 comment2
3 Luke 2001 1000 1500 77 10 2 2 comment3
Table 2. - The export table
ID NAME MG# METHOD SIO2 TIO2 CAO MGO
1 Amy 4 Method1 65 10 5 5
2 Poe 3 Method2 63 8 2 3
3 Ben 2 Method3 77 10 2 2
As you can see the export table may include columns which do not exist in the storage table, so these would be ignored.
I want to insert all of these columns at once, as I've found that if I do it column by column it extends the number of rows each time on the insert (maybe someone can solve this issue instead? Currently I've written a function to check whether a column name exists in table 2 and, if it does, insert it, but as said this extends the rows of the table every time and NULLs the rest of the columns).
The INSERT line from my function:
EXECUTE format('INSERT INTO %s (%s) (SELECT %s::%s FROM %s);',_tbl_import, _col,_col,_type,_tbl_export);
As a type of 'code example' for my question:
EXECUTE FORMAT('INSERT INTO table1 (%s) (SELECT (%s) FROM table2)',columns)
where 'columns' would be some variable denoting the columns that exist in the export table that need to go into the storage table. This will be variable as table 2 will be different every time.
This would ideally update Table 1 as:
ID NAME YEAR LITH_AGE PROV_AGE SIO2 TIO2 CAO MGO COMMENTS
1 John 1998 2000 3000 65 10 5 5 comment1
2 Mark 2005 2444 3444 63 8 2 3 comment2
3 Luke 2001 1000 1500 77 10 2 2 comment3
4 Amy NULL NULL NULL 65 10 5 5 NULL
5 Poe NULL NULL NULL 63 8 2 3 NULL
6 Ben NULL NULL NULL 77 10 2 2 NULL
UPDATED answer
My original answer did not meet the requirement, but I was asked to post an alternative example for the information_schema solution, so here it is.
I made two versions of the solution:
V1 - equivalent to the already given example using information_schema. But that solution relies on table1 column DEFAULTs; meaning, if a table1 column that does not exist in table2 does not have DEFAULT NULL, it will be filled with whatever its default is.
V2 - modified to force NULL where the two tables' columns do not match, and does not inherit table1's own DEFAULTs.
Version1:
CREATE OR REPLACE FUNCTION insert_into_table1_v1()
RETURNS void AS $main$
DECLARE
columns text;
BEGIN
SELECT string_agg(c1.attname, ',')
INTO columns
FROM pg_attribute c1
JOIN pg_attribute c2
ON c1.attrelid = 'public.table1'::regclass
AND c2.attrelid = 'public.table2'::regclass
AND c1.attnum > 0
AND c2.attnum > 0
AND NOT c1.attisdropped
AND NOT c2.attisdropped
AND c1.attname = c2.attname
AND c1.attname <> 'id';
-- Following is the actual result of query above, based on given data examples:
-- -[ RECORD 1 ]----------------------
-- string_agg | name,si02,ti02,cao,mgo
EXECUTE format(
' INSERT INTO table1 ( %1$s )
SELECT %1$s
FROM table2
',
columns
);
END;
$main$ LANGUAGE plpgsql;
Version2:
CREATE OR REPLACE FUNCTION insert_into_table1_v2()
RETURNS void AS $main$
DECLARE
t1_cols text;
t2_cols text;
BEGIN
SELECT string_agg( c1.attname, ',' ),
string_agg( COALESCE( c2.attname, 'NULL' ), ',' )
INTO t1_cols,
t2_cols
FROM pg_attribute c1
LEFT JOIN pg_attribute c2
ON c2.attrelid = 'public.table2'::regclass
AND c2.attnum > 0
AND NOT c2.attisdropped
AND c1.attname = c2.attname
WHERE c1.attrelid = 'public.table1'::regclass
AND c1.attnum > 0
AND NOT c1.attisdropped
AND c1.attname <> 'id';
-- Following is the actual result of query above, based on given data examples:
-- t1_cols | t2_cols
-- --------------------------------------------------------+--------------------------------------------
-- name,year,lith_age,prov_age,si02,ti02,cao,mgo,comments | name,NULL,NULL,NULL,si02,ti02,cao,mgo,NULL
-- (1 row)
EXECUTE format(
' INSERT INTO table1 ( %s )
SELECT %s
FROM table2
',
t1_cols,
t2_cols
);
END;
$main$ LANGUAGE plpgsql;
Also, here is a link to the documentation about the pg_attribute table columns in case something is unclear: https://www.postgresql.org/docs/current/static/catalog-pg-attribute.html
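For reference, roughly the same column list can also be built from information_schema.columns instead of pg_attribute - a minimal sketch of the V1 query, assuming both tables live in the public schema:
SELECT string_agg(c1.column_name::text, ',')
INTO columns
FROM information_schema.columns c1
JOIN information_schema.columns c2
  ON c2.table_schema = 'public'
 AND c2.table_name = 'table2'
 AND c2.column_name = c1.column_name
WHERE c1.table_schema = 'public'
  AND c1.table_name = 'table1'
  AND c1.column_name <> 'id';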
Hopefully this helps :)

Round number UP to multiple of 10

How do I round a number UP to a multiple of 10 in PostgreSQL easily?
Example:
In Out
100 --> 100
111 --> 120
123 --> 130
Sample data:
create table sample(mynumber numeric);
insert into sample values (100);
insert into sample values (111);
insert into sample values (123);
I can use:
select
mynumber,
case
when mynumber = round(mynumber,-1) then mynumber
else round(mynumber,-1) + 10 end as result
from
sample;
This works well, but looks ugly. Is there a simpler way of doing this?
You can find SQLFiddle here
select ceil(a::numeric / 10) * 10
from (values (100), (111), (123)) s(a);
?column?
----------
100
120
130
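Applied to the sample table from the question, a minimal sketch:
select mynumber, ceil(mynumber / 10) * 10 as result
from sample;

 mynumber | result
----------+--------
      100 |    100
      111 |    120
      123 |    130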

How to generate larger sets of lottery numbers efficiently

I am a beginner with SQL and I was looking to get more experience with it, hence I decided to design a procedure to generate X amount of random lotto picks. The lottery here in my area allows you to pick 5 numbers from 1-47 and 1 "mega" number from 1-27. The trick is that the "mega" number can repeat with the 5 numbers picked previously, i.e. 1, 2, 3, 4, 5, mega 1.
I created the following procedure to generate 10 million lottery picks, and it took 12 hours and 57 minutes for the process to finish, while my friends tested the same thing with Java and it took seconds. I was wondering if there are any improvements I can make to the code, or if there are any mistakes that I've made? I'm new at this, hence I am trying to learn better approaches; all comments welcome.
USE lotto

DECLARE
     @counter INT,
     @counter1 INT,
     @pm SMALLINT,
     @i1 SMALLINT,
     @i2 SMALLINT,
     @i3 SMALLINT,
     @i4 SMALLINT,
     @i5 SMALLINT,
     @sort int

SET @counter1 = 0

TRUNCATE TABLE picks

WHILE @counter1 < 10000000
BEGIN
    -- Draw 5 distinct picks; start over whenever a duplicate shows up
    TRUNCATE TABLE sort
    SET @counter = 1
    WHILE @counter < 6
    BEGIN
        INSERT INTO sort (pick)
        SELECT CAST(((47 + 1) - 0) * RAND() + 1 AS TINYINT)

        IF (SELECT COUNT(DISTINCT pick) FROM sort) < @counter
        BEGIN
            TRUNCATE TABLE sort
            SET @counter = 1
        END
        ELSE IF (SELECT COUNT(DISTINCT pick) FROM sort) = @counter
        BEGIN
            SET @counter = @counter + 1
        END
    END

    -- Order the 5 picks ascending (sort = 0..4)
    SET @sort = 0
    WHILE @sort < 5
    BEGIN
        UPDATE sort
        SET sort = @sort
        WHERE pick = (SELECT MIN(pick) FROM sort WHERE sort IS NULL)
        SET @sort = @sort + 1
    END

    SET @i1 = (SELECT pick FROM sort WHERE sort = 0)
    SET @i2 = (SELECT pick FROM sort WHERE sort = 1)
    SET @i3 = (SELECT pick FROM sort WHERE sort = 2)
    SET @i4 = (SELECT pick FROM sort WHERE sort = 3)
    SET @i5 = (SELECT pick FROM sort WHERE sort = 4)

    -- Draw the mega number independently of the 5 picks
    SET @pm = (CAST(((27 + 1) - 0) * RAND() + 1 AS TINYINT))

    INSERT INTO picks(
        First,
        Second,
        Third,
        Fourth,
        Fifth,
        Mega,
        Sequence
    )
    VALUES(
        @i1,
        @i2,
        @i3,
        @i4,
        @i5,
        @pm,
        @counter1
    )

    SET @counter1 = @counter1 + 1
END
I generated 10000 rows in 0 sec. I did it in another way. Hope this will help you:
;WITH Nbrs ( n ) AS (
SELECT 1 UNION ALL
SELECT 1 + n FROM Nbrs WHERE n < 10000 )
SELECT
(ABS(CHECKSUM(NewId())) % 47 + 1) AS First,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Second,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Third,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Fourth,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Fifth,
(ABS(CHECKSUM(NewId())) % 27 + 1) AS Mega,
Nbrs.n AS Sequence
FROM
Nbrs
OPTION ( MAXRECURSION 0 )
10000 rows 0 sec
100000 rows 1 sec
1000000 rows 13 sec
10000000 rows 02 min 21 sec
Or with cross joins
WITH E00(N) AS (SELECT 1 UNION ALL SELECT 1),
E02(N) AS (SELECT 1 FROM E00 a, E00 b),
E04(N) AS (SELECT 1 FROM E02 a, E02 b),
E08(N) AS (SELECT 1 FROM E04 a, E04 b),
E16(N) AS (SELECT 1 FROM E08 a, E08 b),
E32(N) AS (SELECT 1 FROM E16 a, E16 b),
Nbrs(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY N) FROM E32)
SELECT
(ABS(CHECKSUM(NewId())) % 47 + 1) AS First,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Second,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Third,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Fourth,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Fifth,
(ABS(CHECKSUM(NewId())) % 27 + 1) AS Mega,
Nbrs.n AS Sequence
FROM Nbrs
WHERE N <= 10000000;
10000 rows 0 sec
100000 rows 1 sec
1000000 rows 14 sec
10000000 rows 03 min 29 sec
I should also mention that the reason I am using
(ABS(CHECKSUM(NewId())) % 47 + 1)
is that it returns a random number per row. The solution with
CAST(((47+ 1) - 0) * RAND() + 1 AS TINYINT)
returns the same random number for each row if you select them in one go. To test this, run this example:
;WITH Nbrs ( n ) AS (
SELECT 1 UNION ALL
SELECT 1 + n FROM Nbrs WHERE n < 5 )
SELECT
CAST(((47+ 1) - 0) * RAND() + 1 AS TINYINT) AS Random,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS RadomCheckSum,
Nbrs.n AS Sequence
FROM Nbrs
Ok. So I did see your comment and I have a solution for that as well, if you really want to order the numbers. The complexity of the algorithm goes up, and that also means the running time increases, but I still think it is doable - just not in the same neat way.
--Yeah declaring a temp table for just the random order number
DECLARE @tbl TABLE(value int)
--The same function but with the number of the random numbers
;WITH Nbrs ( n ) AS (
SELECT 1 UNION ALL
SELECT 1 + n FROM Nbrs WHERE n < 5 )
INSERT INTO @tbl
(
value
)
SELECT
Nbrs.n AS Sequence
FROM Nbrs
;WITH Nbrs ( n ) AS (
SELECT CAST(1 as BIGINT) UNION ALL
SELECT 1 + n FROM Nbrs WHERE n < 100000 )
SELECT
tblOrderRandomNumbers.[1] AS First,
tblOrderRandomNumbers.[2] AS Second,
tblOrderRandomNumbers.[3] AS Third,
tblOrderRandomNumbers.[4] AS Fourth,
tblOrderRandomNumbers.[5] AS Fifth,
(ABS(CHECKSUM(NewId())) % 27 + 1) AS Mega,
Nbrs.n AS Sequence
FROM
Nbrs
--This cross join. Joins with the declared table
CROSS JOIN
(
SELECT
[1], [2], [3], [4], [5]
FROM
(
SELECT
Random,
ROW_NUMBER() OVER(ORDER BY tblRandom.Random ASC) AS RowNumber
FROM
(
SELECT
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Random
FROM
@tbl AS tblNumbers
) AS tblRandom
)AS tblSortedRadom
--A pivot makes the rows to columns. Using the row index over order of the random number
PIVOT
(
AVG(Random)
FOR RowNumber IN ([1], [2], [3], [4],[5])
) as pivottable
) AS tblOrderRandomNumbers
OPTION ( MAXRECURSION 0 )
But still I managed to do it in little time:
10000 Rows : 0 sec
100000 Rows : 4 sec
1000000 Rows : 43 sec
10000000 Rows : 7 min 9 sec
I hope this helps.
I wrote this script just out of curiosity. It should do better than your script, but I can't tell for sure.
Beware that I use a declared table variable; if you use a real table, performance should be better when generating larger amounts of rows.
I generated 10000 rows in about 13 seconds, which works out to about 3.5 hours to generate 10 000 000 rows. Still far worse than the Java case you described.
set nocount on
go
declare @i int = 1
declare @t table(nr1 int, nr2 int, nr3 int, nr4 int, nr5 int, mega int, seq int)
while @i <= 10000
begin
;with numbers(nr)
as
(
select 1
union all
select nr+1
from numbers
where nr < 47
)
,mega(nr)
as
(
select 1
union all
select nr+1
from mega
where nr < 27
)
,selectednumbers(nr)
as
(
select top 5 nr
from numbers
order by newid()
)
,selectedmega(mega)
as
(
select top 1 nr
from mega
order by newid()
)
,tmp
as
(
select *
,row_number() over(order by nr) as rownr
from selectednumbers
)
insert into @t
select max(nr1) as nr1
,max(nr2) as nr2
,max(nr3) as nr3
,max(nr4) as nr4
,max(nr5) as nr5
,(select mega from selectedmega) as mega
,@i as seq
from (
select case when rownr = 1 then nr else 0 end as nr1
,case when rownr = 2 then nr else 0 end as nr2
,case when rownr = 3 then nr else 0 end as nr3
,case when rownr = 4 then nr else 0 end as nr4
,case when rownr = 5 then nr else 0 end as nr5
from tmp
) x
set @i = @i + 1
end
select * from @t