Creating pagination with DB2

I have read that DB2 doesn't support LIMIT and OFFSET. I also read that you have to use ROW_NUMBER() and subqueries to get the desired result.
If this is an SQL query:
$sql = "SELECT * FROM ITEMS LIMIT $offset, $rowsperpage";
where $offset is the offset and $rowsperpage is the number of rows from the database I want displayed on the page, what would be the equivalent DB2 query?

Well, depending on which DB2 platform you are using, you didn't read the full story. DB2 LUW does support LIMIT and OFFSET, but you have to turn it on (and don't forget to restart DB2 after setting the flag). If you want to use ROW_NUMBER() as you asked, you could write the query as follows:
SELECT *
FROM (SELECT ROW_NUMBER() OVER() AS rn,
             items.*
      FROM items)
WHERE rn BETWEEN computelowerboundaryhere AND computeupperboundaryhere;
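For the question's $offset and $rowsperpage, the boundaries would presumably be $offset + 1 and $offset + $rowsperpage (ROW_NUMBER starts at 1). As a minimal sketch, assuming an offset of 20, 10 rows per page, and an id column to order by:
SELECT *
FROM (SELECT ROW_NUMBER() OVER(ORDER BY id) AS rn,
             items.*
      FROM items) AS numbered
WHERE rn BETWEEN 21 AND 30;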
There is also an overview article describing the different ways of doing the LIMIT/OFFSET work in DB2.

DB2 for i also has support for LIMIT and OFFSET as of the latest Technology Refreshes (7.1 TR11 and 7.2 TR3).
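On those levels, a sketch of the equivalent of the original query could look like this (the ITEMS table is from the question; the id ordering column is an assumption):
SELECT *
FROM items
ORDER BY id
LIMIT 10 OFFSET 20;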

Related

Amazon Redshift how to get the last date a table inserted data

I am trying to get the last date an insert was performed on a table in Amazon Redshift. Is there any way to do this using the metadata? The tables do not store any timestamp column, and even if they did, we would need to find this out for about 3,000 tables, so checking each one would be impractical; a metadata approach is our strategy. Any tips?
All insert execution steps for queries are logged in STL_INSERT. This query should give you the information you're looking for:
SELECT sti.schema, sti.table, sq.endtime, sq.querytxt
FROM (SELECT MAX(query) AS query, tbl, MAX(i.endtime) AS last_insert
      FROM stl_insert i
      GROUP BY tbl
      ORDER BY tbl) inserts
JOIN stl_query sq ON sq.query = inserts.query
JOIN svv_table_info sti ON sti.table_id = inserts.tbl
ORDER BY inserts.last_insert DESC;
Note: The STL tables only retain approximately two to five days of log history.
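If you only need this for one specific table rather than all of them, a minimal variation (sketch only; 'my_table' is a placeholder name) is to filter on the table name from svv_table_info:
SELECT sti."schema", sti."table", sq.endtime, sq.querytxt
FROM (SELECT MAX(query) AS query, tbl
      FROM stl_insert
      GROUP BY tbl) inserts
JOIN stl_query sq ON sq.query = inserts.query
JOIN svv_table_info sti ON sti.table_id = inserts.tbl
WHERE sti."table" = 'my_table'
ORDER BY sq.endtime DESC;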

Line numbering in result grid in MySQL Workbench

Is there any way to add some line numbers in the result grid in MySQL Workbench?
E.g. red numbers next to each result row (screenshot omitted).
I don't want to have to change the SQL query, which I know I can do using tricks like
SELECT @n := @n + 1 `Number of Submissions`, t.*
FROM (SELECT @n := 0) initvars,
     (SELECT COUNT(*) AS count
      FROM moocdb.submissions
      GROUP BY user_id
      ORDER BY count DESC
     ) t
I also don't want to have to export the results.
Not sure if that is a good question for SO, but anyway: no, this is not possible. Nobody has asked for it so far, so file a feature request at http://bugs.mysql.com to have it added.
MySQL does not provide row_number like Microsoft SQL Server, Oracle, or PostgreSQL. Fortunately, MySQL provides session variables that you can use to emulate the row_number function.
SET @row_number = 0;
SELECT (@row_number := @row_number + 1) AS num, col_1
FROM Table;

Fetching rows in DB2

I know in DB2 (using version 9.7) I can select the first 10 rows of a table by using this query:
SELECT *
FROM myTable
ORDER BY id
FETCH FIRST 10 ROWS ONLY
But how can I get, for example, rows 11 to 20?
I can't use the primary key or the ID to help me...
Thanks in advance!
Here's a sample query that will get rows from a table containing state names, abbreviations, etc.
SELECT *
FROM (
    SELECT stabr, stname, ROW_NUMBER() OVER(ORDER BY stname) AS rownumber
    FROM states
    WHERE stcnab = 'US'
) AS xxx
WHERE rownumber BETWEEN 11 AND 20
ORDER BY stname
Edit: ORDER BY is necessary to guarantee that the row numbering is consistent
between executions of the query.
You can also use the MySQL compatibility mode. You just need to activate the compatibility vector for MYS, and then use LIMIT and OFFSET in your queries.
db2set DB2_COMPATIBILITY_VECTOR=MYS
db2stop
db2start
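After the restart, a query in the spirit of the question could look like this (a sketch, ordering myTable by id as in the question; it returns rows 11 to 20):
SELECT *
FROM myTable
ORDER BY id
LIMIT 10 OFFSET 10;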
An excellent article written by DB2 experts from IBM: https://www.ibm.com/developerworks/mydeveloperworks/blogs/SQLTips4DB2LUW/entry/limit_offset?lang=en
The compatibility vector in the InfoCenter: http://publib.boulder.ibm.com/infocenter/db2luw/v10r1/topic/com.ibm.db2.luw.apdv.porting.doc/doc/r0052867.html
A blog post about this: http://victorsergienko.com/db2-supports-limit-and-offset/

How to write rownum in where clause in PostgreSQL

I am quite new to Postgres database. I have one query:
select offer_id, offer_date
from CMS_OFFER
where ROWNUM < 300
which works in Oracle, but does not execute in Postgres.
I also tried row_number(), but could not get it to work. How can I achieve this?
While not exactly the same as Oracle's ROWNUM, Postgresql has LIMIT:
select offer_id,offer_date from CMS_OFFER LIMIT 299
The difference is that ROWNUM is applied before sorting, and LIMIT after sorting (which is usually what you want anyway).
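For actual pagination you would normally combine LIMIT with ORDER BY and OFFSET. A sketch, assuming you want to page through offers by date:
SELECT offer_id, offer_date
FROM CMS_OFFER
ORDER BY offer_date
LIMIT 100 OFFSET 200;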

Equivalent of LIMIT for DB2

How do you do LIMIT in DB2 for iSeries?
I have a table with more than 50,000 records and I want to return records 0 to 10,000, and records 10,000 to 20,000.
I know that in MySQL you write LIMIT 0,10000 at the end of the query for rows 0 to 10,000, and LIMIT 10000,10000 for rows 10,000 to 20,000.
So how is this done in DB2? What's the code and syntax?
(full query example is appreciated)
Using FETCH FIRST [n] ROWS ONLY:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=/com.ibm.db29.doc.perf/db2z_fetchfirstnrows.htm
SELECT LASTNAME, FIRSTNAME, EMPNO, SALARY
FROM EMP
ORDER BY SALARY DESC
FETCH FIRST 20 ROWS ONLY;
To get ranges, you'd have to use ROW_NUMBER() (available since v5r4) and use it within the WHERE clause (taken from http://www.justskins.com/forums/db2-select-how-to-123209.html):
SELECT code, name, address
FROM (
    SELECT row_number() OVER (ORDER BY code) AS rid, code, name, address
    FROM contacts
    WHERE name LIKE '%Bob%'
) AS t
WHERE t.rid BETWEEN 20 AND 25;
I developed this method:
You NEED a table that has a unique value that can be ordered.
If you want rows 10,000 to 25,000 and your table has 40,000 rows, first you need to compute the starting point and the total rows:
int start = 40000 - 10000;
int total = 25000 - 10000;
And then pass these by code to the query:
SELECT * FROM
    (SELECT * FROM schema.mytable
     ORDER BY userId DESC
     FETCH FIRST {start} ROWS ONLY) AS mini
ORDER BY mini.userId ASC
FETCH FIRST {total} ROWS ONLY
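With the numbers above (start = 30000 and total = 15000), the filled-in query would look roughly like this:
SELECT * FROM
    (SELECT * FROM schema.mytable
     ORDER BY userId DESC
     FETCH FIRST 30000 ROWS ONLY) AS mini
ORDER BY mini.userId ASC
FETCH FIRST 15000 ROWS ONLY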
Support for OFFSET and LIMIT was recently added to DB2 for i 7.1 and 7.2. You need the following DB PTF group levels to get this support:
SF99702 level 9 for IBM i 7.2
SF99701 level 38 for IBM i 7.1
See here for more information: OFFSET and LIMIT documentation, DB2 for i Enhancement Wiki
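With those PTF levels applied, the two ranges from the question can be expressed directly. A sketch, assuming an id column to order by:
SELECT * FROM mytable ORDER BY id LIMIT 10000 OFFSET 0;
SELECT * FROM mytable ORDER BY id LIMIT 10000 OFFSET 10000;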
Here's the solution I came up with:
select FIELD from TABLE where FIELD > LASTVAL order by FIELD fetch first N rows only;
By initializing LASTVAL to 0 (or '' for a text field), then setting it to the last value in the most recent set of records, this will step through the table in chunks of N records.
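A worked sketch of that approach (FIELD, TABLE, and the value 1764 are placeholders): fetch a chunk, note the last FIELD value returned, and plug it into the next query:
SELECT FIELD FROM TABLE WHERE FIELD > 0 ORDER BY FIELD FETCH FIRST 1000 ROWS ONLY;
-- suppose the last row returned had FIELD = 1764; the next chunk is then:
SELECT FIELD FROM TABLE WHERE FIELD > 1764 ORDER BY FIELD FETCH FIRST 1000 ROWS ONLY;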
@elcool's solution is a smart idea, but you need to know the total number of rows (which can even change while you are executing the query!). So I propose a modified version, which unfortunately needs 3 subqueries instead of 2:
select * from (
    select * from (
        select * from MYLIB.MYTABLE
        order by MYID asc
        fetch first {last} rows only
    ) I
    order by MYID desc
    fetch first {length} rows only
) II
order by MYID asc
where {last} should be replaced with the row number of the last record I need and {length} should be replaced with the number of rows I need, calculated as last row - first row + 1.
E.g. if I want rows 10 to 25 (16 rows in total), {last} will be 25 and {length} will be 25 - 10 + 1 = 16.
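Plugging those numbers in, the sketch for rows 10 to 25 becomes:
select * from (
    select * from (
        select * from MYLIB.MYTABLE
        order by MYID asc
        fetch first 25 rows only
    ) I
    order by MYID desc
    fetch first 16 rows only
) II
order by MYID asc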
Try this
SELECT * FROM
(
SELECT T.*, ROW_NUMBER() OVER() R FROM TABLE T
)
WHERE R BETWEEN 10000 AND 20000
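Note that ROW_NUMBER() with an empty OVER() gives no guaranteed ordering, so the 10000-20000 window can differ between runs. If there is a sortable column (an id column is assumed here), a more deterministic sketch is:
SELECT * FROM
(
    SELECT T.*, ROW_NUMBER() OVER(ORDER BY T.id) R FROM TABLE T
)
WHERE R BETWEEN 10000 AND 20000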
The LIMIT clause allows you to limit the number of rows returned by the query. The LIMIT clause is an extension of the SELECT statement that has the following syntax:
SELECT select_list
FROM table_name
ORDER BY sort_expression
LIMIT n [OFFSET m];
In this syntax:
n is the number of rows to be returned.
m is the number of rows to skip before returning the n rows.
Another, shorter version of the LIMIT clause is as follows:
LIMIT m, n;
This syntax means skipping m rows and returning the next n rows from the result set.
A table may store rows in an unspecified order. If you don’t use the ORDER BY clause with the LIMIT clause, the returned rows are also unspecified. Therefore, it is a good practice to always use the ORDER BY clause with the LIMIT clause.
See Db2 LIMIT for more details.
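A short sketch of both forms, reusing the ITEMS table from the original question and an assumed id column:
-- return 10 rows after skipping the first 20
SELECT * FROM ITEMS ORDER BY id LIMIT 10 OFFSET 20;
-- shorter form: skip 20 rows, then return the next 10
SELECT * FROM ITEMS ORDER BY id LIMIT 20, 10;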
You should also consider the OPTIMIZE FOR n ROWS clause. More details on all of this are in the DB2 LUW documentation, in the Guidelines for restricting SELECT statements topic:
The OPTIMIZE FOR clause declares the intent to retrieve only a subset of the result or to give priority to retrieving only the first few rows. The optimizer can then choose access plans that minimize the response time for retrieving the first few rows.
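A hedged example of combining the two clauses, telling the optimizer that only the first page will be consumed (items and id are assumed names):
SELECT *
FROM items
ORDER BY id
FETCH FIRST 20 ROWS ONLY
OPTIMIZE FOR 20 ROWS;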
There are two solutions to paginate efficiently on a DB2 table:
1 - The technique using the row_number() function and the OVER clause, which has been presented in another answer ("SELECT row_number() OVER ( ORDER BY ... )"). On some big tables, I sometimes noticed a degradation in performance.
2 - The technique using a scrollable cursor. The implementation depends on the language used. This technique seems more robust on big tables.
I presented the two techniques, implemented in PHP, during a seminar last year. The slides are available at this link:
http://gregphplab.com/serendipity/uploads/slides/DB2_PHP_Best_practices.pdf
Sorry, but this document is only in French.
There are these available options:
DB2 has several strategies to cope with this problem.
You can use the "scrollable cursor" feature (see the sketch after this list).
In this case you can open a cursor and, instead of re-issuing the query, you can FETCH forward and backward.
This works great if your application can hold state, since it doesn't require DB2 to rerun the query every time.
You can use the ROW_NUMBER() OLAP function to number rows and then return the subset you want.
This is ANSI SQL.
You can use the ROWNUM pseudo-column, which does the same as ROW_NUMBER() but is convenient if you have Oracle skills.
You can use LIMIT and OFFSET if you lean more toward a MySQL or PostgreSQL dialect.
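As a rough illustration of the scrollable cursor option, an embedded-SQL-style sketch (the exact FETCH orientations available differ between DB2 for i, z/OS, and LUW; the items table, the id and name columns, and the host variables :id and :name are placeholders):
DECLARE c1 SCROLL CURSOR FOR
    SELECT id, name
    FROM items
    ORDER BY id;
OPEN c1;
-- position 11 rows past the start, i.e. on row 11 ...
FETCH RELATIVE 11 FROM c1 INTO :id, :name;
-- ... then walk forward one row at a time for the rest of the page
FETCH NEXT FROM c1 INTO :id, :name;
CLOSE c1;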