How to use LIMIT in SQLite - Perl

get '/' => sub {
    my $db  = connect_db();
    my $sql = 'select id, title, text from entries order by id desc limit 3 offset 4';
    my $sth = $db->prepare($sql) or die $db->errstr;
    $sth->execute or die $sth->errstr;
    template 'show_entries.tt', {
        'msg'           => get_flash(),
        'add_entry_url' => uri_for('/add'),
        'entries'       => $sth->fetchall_hashref('id'),
    };
};
I want to show only 3 entries on the main page, but for some reason it shows all of them. What am I doing wrong?

Works for me
$ perl -E'
use DBI;
my $dbh = DBI->connect("dbi:SQLite:", undef, undef,
{ PrintError => 0, RaiseError => 1, AutoCommit => 1 });
$dbh->do("CREATE TEMP TABLE MyTable ( id INT )");
$dbh->do("INSERT INTO MyTable VALUES (?)", undef, $_) for 1..9;
my $sql = "SELECT * FROM MyTable LIMIT 3 OFFSET 4";
my $sth = $dbh->prepare($sql);
$sth->execute();
say for keys %{ $sth->fetchall_hashref("id") };
'
5
6
7
Or from the command line client
$ sqlite3
SQLite version 3.3.6
Enter ".help" for instructions
sqlite> CREATE TEMP TABLE MyTable ( id INT );
sqlite> INSERT INTO MyTable VALUES (1);
sqlite> INSERT INTO MyTable VALUES (2);
sqlite> INSERT INTO MyTable VALUES (3);
sqlite> INSERT INTO MyTable VALUES (4);
sqlite> INSERT INTO MyTable VALUES (5);
sqlite> INSERT INTO MyTable VALUES (6);
sqlite> INSERT INTO MyTable VALUES (7);
sqlite> INSERT INTO MyTable VALUES (8);
sqlite> INSERT INTO MyTable VALUES (9);
sqlite> SELECT * FROM MyTable LIMIT 3 OFFSET 4;
5
6
7
sqlite> .q
Versions:
$ perl -MDBI -E'
my $dbh = DBI->connect("dbi:SQLite:", undef, undef);
say for
$DBD::SQLite::VERSION,
$dbh->{sqlite_version}
'
1.40
3.7.17


Pivot or reshape SQL [duplicate]

I've been tasked with coming up with a means of translating the following data:
date       category  amount
1/1/2012   ABC       1000.00
2/1/2012   DEF       500.00
2/1/2012   GHI       800.00
2/10/2012  DEF       700.00
3/1/2012   ABC       1100.00
into the following:
date       ABC      DEF     GHI
1/1/2012   1000.00
2/1/2012            500.00
2/1/2012                    800.00
2/10/2012           700.00
3/1/2012   1100.00
The blank spots can be NULLs or blanks, either is fine, and the categories would need to be dynamic. Another possible caveat to this is that we'll be running the query in a limited capacity, which means temp tables are out. I've tried to research and have landed on PIVOT but as I've never used that before I really don't understand it, despite my best efforts to figure it out. Can anyone point me in the right direction?
Dynamic SQL PIVOT:
create table temp
(
    date datetime,
    category varchar(3),
    amount money
)

insert into temp values ('1/1/2012', 'ABC', 1000.00)
insert into temp values ('2/1/2012', 'DEF', 500.00)
insert into temp values ('2/1/2012', 'GHI', 800.00)
insert into temp values ('2/10/2012', 'DEF', 700.00)
insert into temp values ('3/1/2012', 'ABC', 1100.00)

DECLARE @cols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX);

SET @cols = STUFF((SELECT distinct ',' + QUOTENAME(c.category)
                   FROM temp c
                   FOR XML PATH(''), TYPE
                  ).value('.', 'NVARCHAR(MAX)')
                 , 1, 1, '')

set @query = 'SELECT date, ' + @cols + ' from
            (
                select date
                     , amount
                     , category
                from temp
            ) x
            pivot
            (
                max(amount)
                for category in (' + @cols + ')
            ) p '

execute(@query)

drop table temp
Results:
Date ABC DEF GHI
2012-01-01 00:00:00.000 1000.00 NULL NULL
2012-02-01 00:00:00.000 NULL 500.00 800.00
2012-02-10 00:00:00.000 NULL 700.00 NULL
2012-03-01 00:00:00.000 1100.00 NULL NULL
Dynamic SQL PIVOT
A different approach for creating the columns string:
create table #temp
(
    date datetime,
    category varchar(3),
    amount money
)

insert into #temp values ('1/1/2012', 'ABC', 1000.00)
insert into #temp values ('2/1/2012', 'DEF', 500.00)
insert into #temp values ('2/1/2012', 'GHI', 800.00)
insert into #temp values ('2/10/2012', 'DEF', 700.00)
insert into #temp values ('3/1/2012', 'ABC', 1100.00)

DECLARE @cols AS NVARCHAR(MAX) = '';
DECLARE @query AS NVARCHAR(MAX) = '';

SELECT @cols = @cols + QUOTENAME(category) + ',' FROM (select distinct category from #temp) as tmp
select @cols = substring(@cols, 0, len(@cols)) --trim "," at end

set @query =
'SELECT * from
(
    select date, amount, category from #temp
) src
pivot
(
    max(amount) for category in (' + @cols + ')
) piv'

execute(@query)

drop table #temp
Result
date ABC DEF GHI
2012-01-01 00:00:00.000 1000.00 NULL NULL
2012-02-01 00:00:00.000 NULL 500.00 800.00
2012-02-10 00:00:00.000 NULL 700.00 NULL
2012-03-01 00:00:00.000 1100.00 NULL NULL
I know this question is older, but I was looking through the answers and thought that I might be able to expand on the "dynamic" portion of the problem and possibly help someone out.
First and foremost, I built this solution to solve a problem a couple of coworkers were having with inconsistent and large data sets needing to be pivoted quickly.
This solution requires the creation of a stored procedure, so if that is out of the question for your needs, please stop reading now.
This procedure takes in the key variables of a pivot statement to dynamically create pivot statements for varying tables, column names, and aggregates. The static column is used as the group by / identity column for the pivot (this can be stripped out of the code if not necessary, but it is pretty common in pivot statements and was necessary to solve the original issue). The pivot column is where the resulting column names are generated from, and the value column is what the aggregate is applied to. The table parameter is the name of the table, including the schema (schema.tablename); this portion of the code could use some love, because it is not as clean as I would like it to be. It worked for me because my usage was not publicly facing and SQL injection was not a concern. The aggregate parameter accepts any standard SQL aggregate ('AVG', 'SUM', 'MAX', etc.) and defaults to MAX; that default is not required, but the audience this was originally built for did not understand pivots and typically used MAX as the aggregate anyway.
Let's start with the code to create the stored procedure. This code should work in all versions of SQL Server 2005 and above, although I have not tested it in 2005 or 2016; I cannot see why it would not work.
create PROCEDURE [dbo].[USP_DYNAMIC_PIVOT]
(
    @STATIC_COLUMN VARCHAR(255),
    @PIVOT_COLUMN VARCHAR(255),
    @VALUE_COLUMN VARCHAR(255),
    @TABLE VARCHAR(255),
    @AGGREGATE VARCHAR(20) = null
)
AS
BEGIN
    SET NOCOUNT ON;

    declare @AVAIABLE_TO_PIVOT NVARCHAR(MAX),
            @SQLSTRING NVARCHAR(MAX),
            @PIVOT_SQL_STRING NVARCHAR(MAX),
            @TEMPVARCOLUMNS NVARCHAR(MAX),
            @TABLESQL NVARCHAR(MAX)

    if isnull(@AGGREGATE,'') = ''
    begin
        SET @AGGREGATE = 'MAX'
    end

    SET @PIVOT_SQL_STRING = 'SELECT top 1 STUFF((SELECT distinct '', '' + CAST(''[''+CONVERT(VARCHAR,' + @PIVOT_COLUMN + ')+'']'' AS VARCHAR(50)) [text()]
        FROM ' + @TABLE + '
        WHERE ISNULL(' + @PIVOT_COLUMN + ','''') <> ''''
        FOR XML PATH(''''), TYPE)
        .value(''.'',''NVARCHAR(MAX)''),1,2,'' '') as PIVOT_VALUES
        from ' + @TABLE + ' ma
        ORDER BY ' + @PIVOT_COLUMN + ''

    declare @TAB AS TABLE(COL NVARCHAR(MAX))

    INSERT INTO @TAB EXEC SP_EXECUTESQL @PIVOT_SQL_STRING, @AVAIABLE_TO_PIVOT

    SET @AVAIABLE_TO_PIVOT = (SELECT * FROM @TAB)

    SET @TEMPVARCOLUMNS = (SELECT replace(@AVAIABLE_TO_PIVOT,',',' nvarchar(255) null,') + ' nvarchar(255) null')

    SET @SQLSTRING = 'DECLARE @RETURN_TABLE TABLE (' + @STATIC_COLUMN + ' NVARCHAR(255) NULL,' + @TEMPVARCOLUMNS + ')
        INSERT INTO @RETURN_TABLE(' + @STATIC_COLUMN + ',' + @AVAIABLE_TO_PIVOT + ')
        select * from (
        SELECT ' + @STATIC_COLUMN + ' , ' + @PIVOT_COLUMN + ', ' + @VALUE_COLUMN + ' FROM ' + @TABLE + ' ) a
        PIVOT
        (
        ' + @AGGREGATE + '(' + @VALUE_COLUMN + ')
        FOR ' + @PIVOT_COLUMN + ' IN (' + @AVAIABLE_TO_PIVOT + ')
        ) piv
        SELECT * FROM @RETURN_TABLE'

    EXEC SP_EXECUTESQL @SQLSTRING
END
Next we will get our data ready for the example. I have taken the data from the accepted answer and added a couple of rows to this proof of concept to show the varied outputs as the aggregate changes.
create table temp
(
    date datetime,
    category varchar(3),
    amount money
)

insert into temp values ('1/1/2012', 'ABC', 1000.00)
insert into temp values ('1/1/2012', 'ABC', 2000.00) -- added
insert into temp values ('2/1/2012', 'DEF', 500.00)
insert into temp values ('2/1/2012', 'DEF', 1500.00) -- added
insert into temp values ('2/1/2012', 'GHI', 800.00)
insert into temp values ('2/10/2012', 'DEF', 700.00)
insert into temp values ('2/10/2012', 'DEF', 800.00) -- added
insert into temp values ('3/1/2012', 'ABC', 1100.00)
The following examples show the execution statement with each of the different aggregates as a simple example. I did not change the static, pivot, and value columns, to keep the example simple. You should be able to just copy and paste the code and start messing with it yourself.
exec [dbo].[USP_DYNAMIC_PIVOT] 'date','category','amount','dbo.temp','sum'
exec [dbo].[USP_DYNAMIC_PIVOT] 'date','category','amount','dbo.temp','max'
exec [dbo].[USP_DYNAMIC_PIVOT] 'date','category','amount','dbo.temp','avg'
exec [dbo].[USP_DYNAMIC_PIVOT] 'date','category','amount','dbo.temp','min'
These executions return the following data sets, respectively.
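Worked out by hand from the sample rows above, the SUM call should return ABC = 3000.00 for 1/1/2012; DEF = 2000.00 and GHI = 800.00 for 2/1/2012; DEF = 1500.00 for 2/10/2012; and ABC = 1100.00 for 3/1/2012, with the remaining cells NULL. The MAX, AVG, and MIN calls differ only in the aggregated figures (MAX, for example, gives 2000.00, 1500.00, 800.00, 800.00, and 1100.00 for the same cells).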
Updated version for SQL Server 2017 using STRING_AGG function to construct the pivot column list:
create table temp
(
    date datetime,
    category varchar(3),
    amount money
);

insert into temp values ('20120101', 'ABC', 1000.00);
insert into temp values ('20120201', 'DEF', 500.00);
insert into temp values ('20120201', 'GHI', 800.00);
insert into temp values ('20120210', 'DEF', 700.00);
insert into temp values ('20120301', 'ABC', 1100.00);

DECLARE @cols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX);

SET @cols = (SELECT STRING_AGG(category, ',') FROM (SELECT DISTINCT category FROM temp WHERE category IS NOT NULL) t);

set @query = 'SELECT date, ' + @cols + ' from
            (
                select date
                     , amount
                     , category
                from temp
            ) x
            pivot
            (
                max(amount)
                for category in (' + @cols + ')
            ) p ';

execute(@query);

drop table temp;
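One thing to note: unlike the FOR XML versions above, this STRING_AGG line does not bracket the category names with QUOTENAME. A minimal variation that does (assuming the rest of the script stays unchanged) would be:
SET @cols = (SELECT STRING_AGG(QUOTENAME(category), ',') FROM (SELECT DISTINCT category FROM temp WHERE category IS NOT NULL) t);
This keeps the generated pivot working if a category value ever contains spaces or other characters that need quoting.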
Here's my solution, cleaning up the unnecessary NULL values:
DECLARE @cols AS NVARCHAR(MAX),
        @maxcols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX)

select @cols = STUFF((SELECT ',' + QUOTENAME(CodigoFormaPago)
                      from PO_FormasPago
                      order by CodigoFormaPago
                      FOR XML PATH(''), TYPE
                     ).value('.', 'NVARCHAR(MAX)')
                    , 1, 1, '')

select @maxcols = STUFF((SELECT ',MAX(' + QUOTENAME(CodigoFormaPago) + ') as ' + QUOTENAME(CodigoFormaPago)
                         from PO_FormasPago
                         order by CodigoFormaPago
                         FOR XML PATH(''), TYPE
                        ).value('.', 'NVARCHAR(MAX)')
                       , 1, 1, '')

set @query = 'SELECT CodigoProducto, DenominacionProducto, ' + @maxcols + '
FROM
(
    SELECT
        CodigoProducto, DenominacionProducto,
        ' + @cols + ' from
        (
            SELECT
                p.CodigoProducto as CodigoProducto,
                p.DenominacionProducto as DenominacionProducto,
                fpp.CantidadCuotas as CantidadCuotas,
                fpp.IdFormaPago as IdFormaPago,
                fp.CodigoFormaPago as CodigoFormaPago
            FROM
                PR_Producto p
                LEFT JOIN PR_FormasPagoProducto fpp
                    ON fpp.IdProducto = p.IdProducto
                LEFT JOIN PO_FormasPago fp
                    ON fpp.IdFormaPago = fp.IdFormaPago
        ) xp
        pivot
        (
            MAX(CantidadCuotas)
            for CodigoFormaPago in (' + @cols + ')
        ) p
) xx
GROUP BY CodigoProducto, DenominacionProducto'

execute(@query);
The code below produces the results with NULL replaced by zero in the output.
Table creation and data insertion:
create table test_table
(
    date nvarchar(10),
    category char(3),
    amount money
)

insert into test_table values ('1/1/2012','ABC',1000.00)
insert into test_table values ('2/1/2012','DEF',500.00)
insert into test_table values ('2/1/2012','GHI',800.00)
insert into test_table values ('2/10/2012','DEF',700.00)
insert into test_table values ('3/1/2012','ABC',1100.00)
Query to generate the exact results which also replaces NULL with zeros:
DECLARE @DynamicPivotQuery AS NVARCHAR(MAX),
        @PivotColumnNames AS NVARCHAR(MAX),
        @PivotSelectColumnNames AS NVARCHAR(MAX)

--Get distinct values of the PIVOT Column
SELECT @PivotColumnNames = ISNULL(@PivotColumnNames + ',','')
       + QUOTENAME(category)
FROM (SELECT DISTINCT category FROM test_table) AS cat

--Get distinct values of the PIVOT Column wrapped in ISNULL
SELECT @PivotSelectColumnNames
       = ISNULL(@PivotSelectColumnNames + ',','')
       + 'ISNULL(' + QUOTENAME(category) + ', 0) AS '
       + QUOTENAME(category)
FROM (SELECT DISTINCT category FROM test_table) AS cat

--Prepare the PIVOT query using the dynamic column lists
SET @DynamicPivotQuery =
    N'SELECT date, ' + @PivotSelectColumnNames + '
    FROM test_table
    pivot(sum(amount) for category in (' + @PivotColumnNames + ')) as pvt';

--Execute the Dynamic Pivot Query
EXEC sp_executesql @DynamicPivotQuery
OUTPUT:
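Given the sample rows above, the output is expected to look roughly like this (row order is not guaranteed), with the NULL cells replaced by 0.00:
date       ABC      DEF     GHI
1/1/2012   1000.00  0.00    0.00
2/1/2012   0.00     500.00  800.00
2/10/2012  0.00     700.00  0.00
3/1/2012   0.00     0.00    1100.00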
A version of Taryn's answer with performance improvements:
Data
CREATE TABLE dbo.Temp
(
[date] datetime NOT NULL,
category nchar(3) NOT NULL,
amount money NOT NULL,
INDEX [CX dbo.Temp date] CLUSTERED ([date]),
INDEX [IX dbo.Temp category] NONCLUSTERED (category)
);
INSERT dbo.Temp
([date], category, amount)
VALUES
({D '2012-01-01'}, N'ABC', $1000.00),
({D '2012-01-02'}, N'DEF', $500.00),
({D '2012-01-02'}, N'GHI', $800.00),
({D '2012-02-10'}, N'DEF', $700.00),
({D '2012-03-01'}, N'ABC', $1100.00);
Dynamic pivot
DECLARE
    @Delimiter nvarchar(4000) = N',',
    @DelimiterLength bigint,
    @Columns nvarchar(max),
    @Query nvarchar(max);

SET @DelimiterLength = LEN(REPLACE(@Delimiter, SPACE(1), N'#'));

-- Before SQL Server 2017
SET @Columns =
    STUFF
    (
        (
            SELECT
                [text()] = @Delimiter,
                [text()] = QUOTENAME(T.category)
            FROM dbo.Temp AS T
            WHERE T.category IS NOT NULL
            GROUP BY T.category
            ORDER BY T.category
            FOR XML PATH (''), TYPE
        )
        .value(N'text()[1]', N'nvarchar(max)'),
        1, @DelimiterLength, SPACE(0)
    );

-- Alternative for SQL Server 2017+ and database compatibility level 110+
SELECT @Columns =
    STRING_AGG(CONVERT(nvarchar(max), QUOTENAME(T.category)), N',')
        WITHIN GROUP (ORDER BY T.category)
FROM
    (
        SELECT T2.category
        FROM dbo.Temp AS T2
        WHERE T2.category IS NOT NULL
        GROUP BY T2.category
    ) AS T;

IF @Columns IS NOT NULL
BEGIN
    SET @Query =
        N'SELECT [date], ' +
        @Columns +
        N'
        FROM
        (
            SELECT [date], amount, category
            FROM dbo.Temp
        ) AS S
        PIVOT
        (
            MAX(amount)
            FOR category IN (' +
        @Columns +
        N')
        ) AS P;';

    EXECUTE sys.sp_executesql @Query;
END;
Execution plans
Results
date                     ABC      DEF     GHI
2012-01-01 00:00:00.000  1000.00  NULL    NULL
2012-01-02 00:00:00.000  NULL     500.00  800.00
2012-02-10 00:00:00.000  NULL     700.00  NULL
2012-03-01 00:00:00.000  1100.00  NULL    NULL
CREATE TABLE #PivotExample
(
    [ID] [nvarchar](50) NULL,
    [Description] [nvarchar](50) NULL,
    [ClientId] [smallint] NOT NULL
)
GO

INSERT #PivotExample ([ID],[Description],[ClientId]) VALUES ('ACI1','ACI1Desc1',1008)
INSERT #PivotExample ([ID],[Description],[ClientId]) VALUES ('ACI1','ACI1Desc2',2000)
INSERT #PivotExample ([ID],[Description],[ClientId]) VALUES ('ACI1','ACI1Desc3',3000)
INSERT #PivotExample ([ID],[Description],[ClientId]) VALUES ('ACI1','ACI1Desc4',4000)
INSERT #PivotExample ([ID],[Description],[ClientId]) VALUES ('ACI2','ACI2Desc1',5000)
INSERT #PivotExample ([ID],[Description],[ClientId]) VALUES ('ACI2','ACI2Desc2',6000)
INSERT #PivotExample ([ID],[Description],[ClientId]) VALUES ('ACI2','ACI2Desc3',7000)

SELECT * FROM #PivotExample

--Declare necessary variables
DECLARE @SQLQuery AS NVARCHAR(MAX)
DECLARE @PivotColumns AS NVARCHAR(MAX)

--Get unique values of pivot column
SELECT @PivotColumns = COALESCE(@PivotColumns + ',','') + QUOTENAME([Description])
FROM (SELECT DISTINCT [Description] FROM #PivotExample) AS PivotExample

--SELECT @PivotColumns

--Create the dynamic query with all the values for
--the pivot column at runtime
SET @SQLQuery =
    N'-- Your pivoted result comes here
    SELECT ID, ' + @PivotColumns + '
    FROM
    (
        -- Source table should be in an inner query
        SELECT ID, [Description], [ClientId]
        FROM #PivotExample
    ) AS P
    PIVOT
    (
        -- Select the values from derived table P
        SUM(ClientId)
        FOR [Description] IN (' + @PivotColumns + ')
    ) AS PVTTable'

--SELECT @SQLQuery

--Execute dynamic query
EXEC sp_executesql @SQLQuery

DROP TABLE #PivotExample
A fully generic way that will work in non-traditional MS SQL environments (e.g. Azure Synapse Analytics serverless SQL pools). It's packaged as a stored procedure, but there's no need to use it as such...
-- DROP PROCEDURE IF EXISTS
if object_id('dbo.usp_generic_pivot') is not null
    DROP PROCEDURE dbo.usp_generic_pivot
GO;

CREATE PROCEDURE dbo.usp_generic_pivot (
    @source NVARCHAR (100),      -- table or view object name
    @pivotCol NVARCHAR (100),    -- the column to pivot
    @pivotAggCol NVARCHAR (100), -- the column with the values for the pivot
    @pivotAggFunc NVARCHAR (20), -- the aggregate function to apply to those values
    @leadCols NVARCHAR (100)     -- comma separated list of other columns to keep and order by
)
AS
BEGIN
    DECLARE @pivotedColumns NVARCHAR(MAX)
    DECLARE @tsql NVARCHAR(MAX)

    SET @tsql = CONCAT('SELECT @pivotedColumns = STRING_AGG(qname, '','') FROM (SELECT DISTINCT QUOTENAME(', @pivotCol, ') AS qname FROM ', @source, ') AS qnames')

    EXEC sp_executesql @tsql, N'@pivotedColumns nvarchar(max) out', @pivotedColumns out

    SET @tsql = CONCAT('SELECT ', @leadCols, ',', @pivotedColumns, ' FROM ', ' ( SELECT ', @leadCols, ',',
                       @pivotAggCol, ',', @pivotCol, ' FROM ', @source, ') as t ',
                       ' PIVOT (', @pivotAggFunc, '(', @pivotAggCol, ')', ' FOR ', @pivotCol,
                       ' IN (', @pivotedColumns, ')) as pvt ', ' ORDER BY ', @leadCols)

    EXEC (@tsql)
END
GO;

-- TEST EXAMPLE
EXEC dbo.usp_generic_pivot
     @source = '[your_db].[dbo].[form_answers]',
     @pivotCol = 'question',
     @pivotAggCol = 'answer',
     @pivotAggFunc = 'MAX',
     @leadCols = 'candidate_id, candidate_name'
GO;
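As a quick sanity check, the same procedure can also be pointed at the sample table used earlier in this thread. This is only a sketch: it assumes the dbo.temp table from the accepted answer has been (re)created and populated, and that STRING_AGG is available.
EXEC dbo.usp_generic_pivot
     @source = 'dbo.temp',
     @pivotCol = 'category',
     @pivotAggCol = 'amount',
     @pivotAggFunc = 'MAX',
     @leadCols = 'date';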

Local variables in PostgreSQL

I'm moving from SQL Server to PostgreSQL. In SQL Server I have code like this:
Declare @usr int = (select id from users where name = 'root');
Select * from info1 where usr = @usr;
Select * from info2 where usr = @usr;
Select * from info3 where usr = @usr;
I want to write similar code in Postgres, but I found there are no local variables in PostgreSQL, so I tried this:
create temporary table _usr on commit drop as
select id from users where name = 'root';
select * from info1 where usr = (select id from _usr);
select * from info2 where usr = (select id from _usr);
select * from info3 where usr = (select id from _usr);
The question is: is there any other way to store a local variable other than in a temporary table? Maybe I'm missing some type of object in PostgreSQL?
PS. Data structure for this example:
create table users
(
id int not null,
name varchar(100)
);
create table info1
(
usr int not null,
i1a text,
i1b text
);
create table info2
(
usr int not null,
i2a text,
i2b text,
i2c text
);
create table info3
(
usr int not null,
i3a text,
i2b decimal(10),
i2c decimal(10),
i2d decimal(10)
);
insert into users values (1, 'root'), (2, 'admin');
insert into info1 values (1, 'one', 'first'), (1, 'two', 'second');
insert into info2 values (1, '1', '11', 't111'), (1, '2', '22', '222');
insert into info3 values (1, '1', 1, 10, 100), (1, '2', 2, 20, 200);
insert into info3 values (2, '3', 3, 30, 300);
UPDATE: The approach with an anonymous code block seems much uglier:
do $$
declare _usr int;
begin
_usr = (select id from users where name = 'root');
create temporary table _info1 on commit drop as
select * from info1 where usr = _usr;
create temporary table _info2 on commit drop as
select * from info2 where usr = _usr;
create temporary table _info3 on commit drop as
select * from info3 where usr = _usr;
end
$$;
select * from _info1;
select * from _info2;
select * from _info3;
There are no variables in SQL in PostgreSQL.
I think the best and simplest solution would be to repeat the (cheap) subquery:
SELECT info1.*
FROM info1
JOIN users ON users.id = info1.usr
WHERE users.name = 'root';

SELECT info2.*
FROM info2
JOIN users ON users.id = info2.usr
WHERE users.name = 'root';

SELECT info3.*
FROM info3
JOIN users ON users.id = info3.usr
WHERE users.name = 'root';
If the three tables have the same structure, you could UNION the queries and use a CTE:
WITH u AS (
SELECT id FROM users WHERE name = 'root'
)
SELECT info1.*
FROM info1
JOIN u ON u.id = info1.usr
UNION ALL
SELECT info2.*
FROM info2
JOIN u ON u.id = info2.usr
UNION ALL
SELECT info3.*
FROM info3
JOIN u ON u.id = info3.usr;
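Another workaround sometimes used (not part of the answer above, just a sketch): abuse a custom configuration setting. set_config() and current_setting() are plain SQL functions, and a session-level setting survives across statements, so it can stand in for a variable. The setting name ('myvars.usr' here) is arbitrary but must contain a dot, and the value is stored as text, hence the casts.
select set_config('myvars.usr', (select id::text from users where name = 'root'), false);
select * from info1 where usr = current_setting('myvars.usr')::int;
select * from info2 where usr = current_setting('myvars.usr')::int;
select * from info3 where usr = current_setting('myvars.usr')::int;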

Take next N rows recursively and sequentially from a SQL table

I have two tables, "main" and "moved". The "main" table has records from which I move 3 rows at a time to the "moved" table by executing a stored procedure. So every time I execute the stored procedure, it should read the next set of 3 rows sequentially from the "main" table, starting from the row where the last move ended, and insert them into the "moved" table.
select rowid from #main
rowid
-----------------------
1
2
3
4
5
Now when I execute the procedure, it should take 3 rows from the "main" table and insert them into the "moved" table. Say I run it several times; this is what I expect the "moved" table to contain after each run.
1,2,3 --"moved" table has these rows 1st time when I run the query
4,5,1 --"moved" table has these rows 2nd time when I run the query
2,3,4 --"moved" table has these rows 3rd time when I run the query
5,1,2 --"moved" table has these rows 4th time when I run the query
3,4,5 --"moved" table has these rows 5th time when I run the query
1,2,3 --"moved" table has these rows 6th time when I run the query
...and so the sequential read continues, each run starting from the row after where the last run ended.
I asked this already, but the answer only works partially. It doesn't work when the values in the "main" table grow or are not in order.
Thanks in advance for your help.
The following example may give you a starting point. Note that there must be an explicit order provided for the values that you want to insert. A table has no natural order and "sequential" is meaningless.
-- Sample data.
declare @Template as Table ( TemplateValue VarChar(10), TemplateRank Int );
-- NB: TemplateRank values are assumed to start at zero and be dense.
insert into @Template ( TemplateValue, TemplateRank ) values
    ( 'one', 0 ), ( 'two', 1 ), ( 'three', 2 ), ( 'four', 3 );
declare @NumberOfTemplateValues Int = ( select Max( TemplateRank ) from @Template ) + 1;
select * from @Template;

declare @AccumulatedStuff as Table ( Id Int Identity, Value VarChar(10) );
declare @GroupSize as Int = 5;

-- Add a block of GroupSize rows to the AccumulatedStuff.
declare @LastValue as VarChar(10) = ( select top 1 Value from @AccumulatedStuff order by Id desc );
declare @LastRank as Int = ( select TemplateRank from @Template where TemplateValue = @LastValue );
select @LastValue as LastValue, @LastRank as LastRank;

with Numbers as (
    select 1 as Number
    union all
    select Number + 1
    from Numbers
    where Number < @GroupSize ),
RankedValues as (
    select TemplateValue, TemplateRank
    from @Template as T inner join
        Numbers as N on ( N.Number + Coalesce( @LastRank, @NumberOfTemplateValues - 1 ) ) % @NumberOfTemplateValues = T.TemplateRank )
insert into @AccumulatedStuff
    select TemplateValue from RankedValues;
select * from @AccumulatedStuff;

-- Repeat.
set @LastValue = ( select top 1 Value from @AccumulatedStuff order by Id desc );
set @LastRank = ( select TemplateRank from @Template where TemplateValue = @LastValue );
select @LastValue as LastValue, @LastRank as LastRank;

with Numbers as (
    select 1 as Number
    union all
    select Number + 1
    from Numbers
    where Number < @GroupSize ),
RankedValues as (
    select TemplateValue, TemplateRank
    from @Template as T inner join
        Numbers as N on ( N.Number + Coalesce( @LastRank, @NumberOfTemplateValues - 1 ) ) % @NumberOfTemplateValues = T.TemplateRank )
insert into @AccumulatedStuff
    select TemplateValue from RankedValues;
select * from @AccumulatedStuff;
DECLARE @main TABLE
(
    RowID INT IDENTITY(1,1),
    DataID INT
)

DECLARE @selected TABLE
(
    RowID INT IDENTITY(1,1),
    DataID INT
)

DECLARE @final TABLE
(
    RowID INT IDENTITY(1,1),
    DataID INT
)

INSERT INTO @main values (1),(2),(3),(4),(5),(6),(7)

--INSERT INTO @selected values (1),(2),(3)
--INSERT INTO @selected values (4),(5),(6)
--INSERT INTO @selected values (7),(1),(2)
--INSERT INTO @selected values (3),(4),(5)
INSERT INTO @selected values (6),(7),(1)
--INSERT INTO @selected values (2),(3),(4)
--INSERT INTO @selected values (5),(6),(7)
--INSERT INTO @selected values (1),(2),(3)

DECLARE
    @LastDataID INT,
    @FinalCount INT

SELECT TOP 1
    @LastDataID = DataID
FROM
    @selected
ORDER BY
    RowID DESC

INSERT INTO @final
SELECT TOP 3
    DataID
FROM
    @main
WHERE
    DataID > @LastDataID

SELECT @FinalCount = COUNT(1) FROM @final

IF (ISNULL(@FinalCount, 0) < 3)
BEGIN
    INSERT INTO @final
    SELECT TOP (3 - @FinalCount)
        DataID
    FROM
        @main
END

SELECT * FROM @final
The key was to keep a reference to the last record moved to the "moved"/"selected" table. It worked as expected.
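For reference, the same wrap-around logic can be packaged into a stored procedure, which is what the question asked for. The sketch below is not the exact procedure from the question; it assumes hypothetical permanent tables dbo.main (DataID int) and dbo.moved (RowID int identity, DataID int) and simply restates the approach above.
CREATE PROCEDURE dbo.usp_MoveNextThree
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @LastDataID int, @missing int;

    -- Last value moved on the previous run (NULL on the very first run).
    SELECT TOP (1) @LastDataID = DataID
    FROM dbo.moved
    ORDER BY RowID DESC;

    -- Take up to 3 rows that follow the last moved value...
    DECLARE @batch TABLE (DataID int);
    INSERT INTO @batch (DataID)
    SELECT TOP (3) DataID
    FROM dbo.main
    WHERE @LastDataID IS NULL OR DataID > @LastDataID
    ORDER BY DataID;

    -- ...and wrap around to the start when fewer than 3 were found.
    SET @missing = 3 - (SELECT COUNT(*) FROM @batch);
    IF @missing > 0
        INSERT INTO @batch (DataID)
        SELECT TOP (@missing) DataID
        FROM dbo.main
        ORDER BY DataID;

    INSERT INTO dbo.moved (DataID)
    SELECT DataID FROM @batch;
END;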

SQL Server 2008 T-SQL UDF odds and ends

I am trying to take a data string from one column and split it into several different columns in SQL Server 2008. Example: Name Account 445566 0010020056893010445478008 AFD 369. I'm using a borrowed space-delimited split function, which works great. The problem is I'm new to T-SQL and have a few questions.
How do I get the function to run through a full table, not just a string literal?
This generates a temporary table; how do I take these values and insert them into my table? Is it just an insert statement?
Here is the script and usage:
CREATE FUNCTION [dbo].[Split]
(
    @String varchar(max)
    ,@Delimiter char
)
RETURNS @Results table
(
    Ordinal int
    ,StringValue varchar(max)
)
as
begin
    set @String = isnull(@String,'')
    set @Delimiter = isnull(@Delimiter,'')

    declare
        @TempString varchar(max) = @String
        ,@Ordinal int = 0
        ,@CharIndex int = 0

    set @CharIndex = charindex(@Delimiter, @TempString)
    while @CharIndex != 0 begin
        set @Ordinal += 1
        insert @Results values
        (
            @Ordinal
            ,substring(@TempString, 0, @CharIndex)
        )
        set @TempString = substring(@TempString, @CharIndex + 1, len(@TempString) - @CharIndex)
        set @CharIndex = charindex(@Delimiter, @TempString)
    end
    if @TempString != '' begin
        set @Ordinal += 1
        insert @Results values
        (
            @Ordinal
            ,@TempString
        )
    end
    return
end
GO

--USAGE
select
    s.*
from dbo.Split('Name Account 445566 0010020056893010445478008 AFD 369', ' ') as s
where rtrim(s.StringValue) != ''
GO
To use a table-valued UDF against a table, you need CROSS APPLY (or maybe OUTER APPLY, depending on how you want to deal with "no rows" from the UDF). This applies the row-by-row operation of the UDF to your table, and its output is itself a table:
SELECT
    *
FROM
    mytable M
CROSS APPLY
    [dbo].[Split] (M.TheColumn, ' ') S
To INSERT
INSERT AnotherTable (col1, col2, ...)
SELECT
    col1, col2, ...
FROM
    mytable M
CROSS APPLY
    [dbo].[Split] (M.TheColumn, ' ') S
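Since the goal is to end up with separate columns rather than rows, one way to finish the job is to fold the ordinals back out into columns. This is only a sketch, reusing the hypothetical mytable / TheColumn names from above and assuming a fixed maximum number of parts; with no key column available it groups by the original string.
SELECT
    M.TheColumn,
    MAX(CASE WHEN S.Ordinal = 1 THEN S.StringValue END) AS Part1,
    MAX(CASE WHEN S.Ordinal = 2 THEN S.StringValue END) AS Part2,
    MAX(CASE WHEN S.Ordinal = 3 THEN S.StringValue END) AS Part3
FROM
    mytable M
CROSS APPLY
    dbo.Split(M.TheColumn, ' ') S
GROUP BY M.TheColumn;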

Can we use a SELECT query's output as a column alias?

I need a SELECT statement's output as a column's alias. Is it possible?
Example:
select columnname as (select value from tablename) from tablename
Output:
values
1
2
3
no you can't, see this:
declare @a table (rowvalue varchar(20))
insert @a values ('aaa')
insert @a values ('bbb')
insert @a values ('ccc')
insert @a values ('ddd')

declare @x table (id int, rowvalue varchar(20))
insert @x values (1, 'wowwee')
insert @x values (2, 'noooo!')

select a.rowvalue as (select rowvalue from @x where id=1)
from @a a
from #a a
OUTPUT:
Msg 102, Level 15, State 1, Line 11
Incorrect syntax near '('.
Msg 156, Level 15, State 1, Line 12
Incorrect syntax near the keyword 'from'.
you'll have to use dynamic sql for this.
EDIT to show dynamic sql:
declare @SQL varchar(500)
       ,@ColumnName varchar(20)

create table #a (rowvalue varchar(20))
insert #a values ('aaa')
insert #a values ('bbb')
insert #a values ('ccc')
insert #a values ('ddd')

declare @x table (id int, rowvalue varchar(20))
insert @x values (1, 'wowwee')
insert @x values (2, 'noooo!')

SELECT @ColumnName = rowvalue FROM @x where id=1
set @SQL = 'select a.rowvalue as ' + @ColumnName + ' from #a a'
exec(@SQL)
OUTPUT:
wowwee
--------------------
aaa
bbb
ccc
ddd
(4 row(s) affected)
Try this
Select (Select value from Tablename) as Columnname from Tablename
The output should resemble the following
ColumnName
- 1
- 2
- 3
I think you have to drop the AS keyword in T-SQL, I mean:
select columnname (select value from tablename) from tablename