I wonder if anyone has a solution to the following requirement. I have a stored procedure which returns a result set of, for example, 1000 rows, and I need to limit this to 100 rows at a time. I will pass in a start and an end index value, and I only want the records between the start index row and the end index row.
So, for example, my stored procedure call signature looks like this:
stp_mystoredproc(startIndex INTEGER, endIndex INTEGER)
So if I set startIndex = 100 and endIndex = 200 then I want the stored procedure to return the records in rows 100 to 200 out of the total result set of 1000.
My first attempt was to put the result set in a temp table with an identity column and then select the range I need based on the identity, but this is somewhat slow. I know Oracle supports pagination so you can page through your result set. Does anyone know if Sybase IQ (v12.6 or v12.7) supports something similar?
The end goal is to page through the entire result set (1000 records), but in pages of 100 rows at a time.
I don't know Sybase, but maybe you could do something like this:
create procedure myproc(@count int, @lastid int)
as
select top @count *
from MyTable
where id > @lastid
order by id
First call:
exec myproc 100, 0
gives you something like:
3    apples
4    banana
...
346  potato
Next call:
exec myproc 100, 346
Sybase IQ and Sybase SQL Anywhere share the same query execution engine and (mostly) SQL syntax, so you can generally use SQL Anywhere syntax. Try this:
select top (endIndex - startIndex) start at startIndex * from <query>
I'm not sure if you can use an expression in the top clause, so you may have to create a string and use execute immediate.
See http://dcx.sybase.com/index.html#1201/en/dbreference/select-statement.html
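If an expression turns out not to be allowed there, a dynamic version might look roughly like this. This is a minimal sketch in SQL Anywhere's Watcom-SQL dialect, assuming a source table my_table ordered by id (the table and ordering column are placeholders, not from the question):

create procedure stp_mystoredproc(in startIndex integer, in endIndex integer)
begin
    declare sql_text long varchar;
    -- build the paged query as a string so TOP/START AT see literal values
    set sql_text = string('select top ', endIndex - startIndex,
                          ' start at ', startIndex,
                          ' * from my_table order by id');
    -- WITH RESULT SET ON lets the dynamic statement return rows to the caller
    execute immediate with result set on sql_text;
end;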
I have a procedure with two selects; it looks something like this:
CREATE PROCEDURE getRec
    @pId INTEGER
AS
BEGIN
    SELECT 'anything'

    SELECT id, name
    FROM my_table
    WHERE id = @pId
END
When my Perl script calls this stored procedure, if my_table has a matching record then it is displayed. However, if the ID passed in has no matches then the stored procedure returns 'anything'.
If there are no rows in the second select then I just want the procedure to return an empty result set. How can I achieve this?
Every SELECT will produce a result set (with the exception of SELECT @var = ...).
So you will first receive the 'anything' result set.
After that, you will receive the empty result set.
You need to get your Perl code to fetch all rows in the first result set, then get the next result set and fetch all rows in that one.
The functions to get the next result set will depend greatly on which Perl library you are using.
1. I want to write a DB2 procedure to do common insert/update/delete operations against a table. The problem is how to generate SQL statements with random values. For example, for a column of integer type the stored procedure could generate numbers between 1 and 10000, and for a column of varchar type it could generate strings of randomly chosen characters with a fixed length, say 10.
2. Does the DB2 SQL syntax support something to put the data from a file into a LOB column for a randomly chosen row? Say I have a table t1(c0 integer, c1 clob); how could I do something like "insert into t1 values(100, some_path_to_a_text_file)"?
3. When using DB2 "import" to load data, if the file contains 10000 rows it seems DB2 will by default commit the entire 10000 rows of insertion in one single transaction. Is there any configuration/option I could use to divide the "import" process into, say, 10 transactions, each with 1000 rows?
Thank you very much!
1) To do a random operation, get a random value and process it according to a set of rules. I have a similar case in a utility I am currently developing.
https://github.com/angoca/log4db2/blob/master/src/examples/sql-pl/bank/DemoBankRandom.sql
It performs an insert, a select, an update or a delete based on a random value.
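For generating the values themselves, DB2's built-in RAND() covers the integer case directly, and a small helper can build random strings. A rough sketch, assuming DB2 LUW 9.7 or later for CHR() and CREATE OR REPLACE; random_string is a hypothetical helper, not an existing routine:

-- a random integer between 1 and 10000
VALUES INT(RAND() * 10000) + 1;

-- hypothetical helper: a random uppercase string of the given length
-- (use an alternate statement terminator, e.g. @, when creating from the CLP)
CREATE OR REPLACE FUNCTION random_string(len INTEGER)
RETURNS VARCHAR(100)
LANGUAGE SQL
BEGIN
    DECLARE s VARCHAR(100) DEFAULT '';
    DECLARE i INTEGER DEFAULT 0;
    WHILE i < len DO
        SET s = s || CHR(65 + INT(RAND() * 26));  -- append one letter A-Z
        SET i = i + 1;
    END WHILE;
    RETURN s;
END;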
2) No idea about that one.
3) For more frequent commits, use the COMMITCOUNT option. For more info, please check the Information Center: http://publib.boulder.ibm.com/infocenter/db2luw/v10r1/topic/com.ibm.db2.luw.admin.cmd.doc/doc/r0008304.html
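For example, to commit every 1000 rows when loading a delimited file (the file and table names here are just placeholders):

IMPORT FROM mydata.del OF DEL COMMITCOUNT 1000 INSERT INTO t1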
If there's a queue of work to do in a table that is going to be periodically polled by a number of different worker clients... what's the best way to prevent the workers from grabbing the same item to work on?
Say a table like: ItemId, LastAttemptDateTime, AttemptCount, and various item details.
There's an index on LastAttemptDateTime, sorted in ascending order, and various clients query the table to grab an item to be worked on.
I use a stored procedure in MS SQL to do this...something like:
CREATE PROCEDURE GetNextQueueItem AS
SET NOCOUNT ON

DECLARE @ItemId INT

UPDATE myqueue
SET @ItemId = ItemId,
    AttemptCount = AttemptCount + 1,
    LastAttemptDateTime = GETDATE()
WHERE ItemId = (SELECT TOP 1 ItemId
                FROM myqueue
                ORDER BY LastAttemptDateTime ASC)

SELECT ItemId, AttemptCount  -- plus the various item detail fields
FROM myqueue
WHERE ItemId = @ItemId
I'm fairly new to PostgreSQL and was wondering if there are alternative approaches available. (The TOP 1 will change to LIMIT 1.)
A PostgreSQL equivalent could look like this:
CREATE OR REPLACE FUNCTION get_next_queue_item()
RETURNS SETOF myqueue AS
$BODY$
BEGIN
RETURN QUERY
UPDATE myqueue
SET attempt_count = attempt_count + 1
,last_attempt_ts = now()
WHERE item_id = (
SELECT item_id
FROM myqueue
ORDER BY last_attempt_ts
LIMIT 1
)
RETURNING myqueue.*;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
Major points
You only need 1 statement to do it all. UPDATE can return the updated row in the same command with the RETURNING clause.
State of the row is post-update. There are ways to get the pre-update state if needed.
No need for any variables.
I changed all identifiers to lower case, which is the cleanest style in PostgreSQL.
I renamed your column LastAttemptDateTime to last_attempt_ts ("ts" for "timestamp", because timestamp is the name of the timestamp/datetime type in Postgres).
As you mentioned yourself, LIMIT 1 instead of TOP 1.
I use RETURNS SETOF myqueue as return type.
myqueue is the associated row-type of the table myqueue - for every table or view a row-type of the same name is automatically created in PostgreSQL.
This declaration allows for multiple rows to be returned, but LIMIT 1 guarantees that it will only ever be one.
This return type allows for RETURN QUERY to return the resulting row directly without any intermediate step. Fast, clean.
Actually, you don't need a plpgsql function at all. You can do it with a simple SQL statement:
UPDATE myqueue
SET attempt_count = attempt_count + 1
,last_attempt_ts = now()
WHERE item_id = (
SELECT item_id
FROM myqueue
ORDER BY last_attempt_ts
LIMIT 1
)
RETURNING myqueue.*;
Since PostgreSQL's sequences are separate from the identity columns they increment and can be used for other things, one nice approach is to have one sequence that sets the id on the table and another for getting the next item:
1. Look at the currval of the hand-out sequence; if it's higher than or equal to the max id of the table, there are no items waiting.
2. Obtain nextval. If there is no item with a matching id, loop back to 1 (this can happen if an insert to the table failed).
3. Obtain the row with the matching id.
This isn't the only way to skin this cat (and not the way I've used with other databases), but it has the advantage of being light on writes to the database (altering only the sequence, not the table).
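A rough sketch of that idea, with made-up table and sequence names:

-- one sequence feeds item_id on insert, a second hands out work
CREATE SEQUENCE item_id_seq;
CREATE SEQUENCE work_seq;

CREATE TABLE myqueue (
    item_id bigint PRIMARY KEY DEFAULT nextval('item_id_seq'),
    payload text
);

-- step 1: anything waiting? compare this against max(item_id)
SELECT last_value FROM work_seq;

-- steps 2 and 3: claim the next id and fetch its row; if nothing comes
-- back (a failed insert burned that id), simply call this again
SELECT * FROM myqueue WHERE item_id = nextval('work_seq');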
I have a page on my site which has multiple drop-down boxes as filters.
So the SQL procedure for that page would be something like this:

IF @Filter1 = 0 AND @Filter2 = 0 AND @Filter3 = 0
BEGIN
    SELECT * FROM Table1
END
ELSE IF @Filter1 = 1 AND @Filter2 = 0 AND @Filter3 = 0
BEGIN
    SELECT * FROM Table2
END
At the beginning there were only a few options per filter, so there weren't that many permutations. However, more filters have been added, such that there are over 20 IF ELSE checks now.
So if each filter has 5 options, I will need 5*5*5 = 125 IF ELSE checks to return data dependent on the filters.
Update
The first filter alters the WHERE condition, the second filter adds more tables to the result set, and the third filter alters the ORDER BY condition.
How can I make this query more scalable, so that I don't have to write a whole new batch of IF ELSE statements every time a new filter is added to the list, besides using dynamic SQL?
You could keep a rule table of formulas (perhaps using bitwise flags), plug the variable data from that table into a string to build the SQL, and then use dynamic SQL to run it.
As much as I dislike dynamic SQL, this may be the time for it. You can build the query a little at a time, then execute it at the end.
If you're unfamiliar, the syntax is something like:
DECLARE @SQL VARCHAR(1000)
SELECT @SQL = 'SELECT * FROM ' + 'SOME_TABLE'
EXEC(@SQL)
Make sure you deal with SQL injection attacks, proper spacing, etc.
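Applied to the three filters described in the question, the build-up might look roughly like this (the tables, columns, and filter meanings are invented for illustration):

DECLARE @SQL VARCHAR(2000)

SELECT @SQL = 'SELECT t1.* FROM Table1 t1'

-- the second filter adds more tables to the result set
IF @Filter2 = 1
    SELECT @SQL = @SQL + ' JOIN Table2 t2 ON t2.id = t1.id'

-- the first filter alters the WHERE condition
SELECT @SQL = @SQL + ' WHERE 1 = 1'
IF @Filter1 = 1
    SELECT @SQL = @SQL + ' AND t1.status = 1'

-- the third filter alters the ORDER BY condition
IF @Filter3 = 1
    SELECT @SQL = @SQL + ' ORDER BY t1.name'
ELSE
    SELECT @SQL = @SQL + ' ORDER BY t1.id'

EXEC(@SQL)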
In this case, I'd do my best to put this logic in application code, but that's not always possible. If you're using LINQ-to-SQL or another LINQ framework, you should be able to do this safely, but it may take some creativity to get the LINQ query built properly.
You can set up a bunch of views, one for each "filter" and then select from the appropriate view based on which "filter" was selected.
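For instance (view names invented; this still needs one branch per combination, but each branch shrinks to a one-line select):

CREATE VIEW v_filter_default AS SELECT * FROM Table1
CREATE VIEW v_filter_1 AS SELECT * FROM Table2

-- in the procedure:
IF @Filter1 = 0 AND @Filter2 = 0 AND @Filter3 = 0
    SELECT * FROM v_filter_default
ELSE IF @Filter1 = 1 AND @Filter2 = 0 AND @Filter3 = 0
    SELECT * FROM v_filter_1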
Imagine the scene: you're updating some legacy Sybase code and come across a cursor. The stored procedure builds up a result set in a #temporary table which is all ready to be returned, except that one of the columns isn't terribly human-readable; it's an alphanumeric code.
What we need to do is figure out the possible distinct values of this code, call another stored procedure to cross-reference these discrete values, and then update the result set with the newly deciphered values:
declare @lookup_code char(8)     -- types assumed for illustration
declare @xref_code   varchar(40)

declare c_lookup_codes cursor for
select distinct lookup_code
from #workinprogress

open c_lookup_codes
while (1=1)
begin
    fetch c_lookup_codes into @lookup_code
    if @@sqlstatus <> 0
    begin
        break
    end
    exec proc_code_xref @lookup_code, @xref_code OUTPUT
    update #workinprogress
    set xref = @xref_code
    where lookup_code = @lookup_code
end
close c_lookup_codes
deallocate cursor c_lookup_codes
Now then, whilst this may give some folks palpitations, it does work. My question is, how best would one avoid this kind of thing?
_NB: for the purposes of this example you can also imagine that the result set is in the region of 500k rows, that there are 100 distinct values of lookup_code, and finally, that it is not possible to have a table with the xref values in it, as the logic in proc_code_xref is too arcane._
You have to have an XRef table if you want to take out the cursor. Assuming you know the 100 distinct lookup values (and that they're static), it's simple to generate one by calling proc_code_xref 100 times and inserting the results into a table.
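Once a cross-reference table exists (call it #xref, a name invented here, holding lookup_code/xref_code pairs), the per-row work collapses into one set-based update:

-- populate #xref by exec'ing proc_code_xref once per distinct code, then:
update #workinprogress
set xref = #xref.xref_code
from #workinprogress, #xref
where #workinprogress.lookup_code = #xref.lookup_code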
Unless you are willing to duplicate the code in the xref proc, there is no way to avoid using a cursor.
They say that if you must use a cursor, then you must have done something wrong ;-) Here's a solution without a cursor:
declare @lookup_code char(8)
declare @xref_code varchar(40)   -- type assumed for illustration

select distinct lookup_code
into #lookup_codes
from #workinprogress

while 1=1
begin
    select @lookup_code = lookup_code from #lookup_codes
    if @@rowcount = 0 break
    exec proc_code_xref @lookup_code, @xref_code OUTPUT
    -- apply the xref, as in the cursor version
    update #workinprogress
    set xref = @xref_code
    where lookup_code = @lookup_code
    delete #lookup_codes
    where lookup_code = @lookup_code
end