Performing static SQL queries against DB2 without PureQuery - jpa

I'd like to use JPA over JDBC for a new application. I'm strictly using named queries and the basic CRUD methods of the JPA EntityManager, which allows me (with the help of Hibernate, or any other JPA implementation) to extract all of the native SQL queries that will be performed on the database. With this list of static queries, I understand that I can build a DB2 package containing the execution plans for all of my requests.
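For illustration, a minimal named-query sketch of the kind I mean (the entity and query names here are just examples):

import javax.persistence.*;

@Entity
@NamedQuery(name = "Customer.findByName",
            query = "SELECT c FROM Customer c WHERE c.name = :name")
public class Customer {
    @Id
    private Long id;
    private String name;
}

Because every query is named and declared up front, the JPA provider can render the complete set of native SQL statements ahead of time, which is what makes the static-package idea possible.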
So my question is: will performing those queries through JDBC against DB2 take advantage of those execution plans, or not? I understand that the pureQuery product can capture the list of SQL statements. Does it, still through JDBC and not through the pureQuery-specific API, provide anything more, such as the DB2-specific static bind feature? Or is it equivalent to plain JDBC?
Thanks in advance for any answers.

JDBC applications execute dynamic SQL only (i.e. DB2 does not use static packages).
There are only 2 ways to get static SQL (where the queries are stored in a package in the database): Write your application using SQLJ (which eliminates JPA/Hibernate) or use pureQuery (which sits between JDBC and the database).
Keep in mind that even with dynamic SQL, DB2 does cache the execution plans for queries, so if they are executed frequently enough (i.e., they remain in the cache), you won't see the overhead of query compilation. The cache only helps if the queries are an exact byte-for-byte match, so select * from t1 where c1 = 1 is not the same as select * from t1 where c1 = 2, nor is it the same as select * from t1 where C1 = 1 (which gives the same result, but the query text differs). Using parameter markers (select * from t1 where c1 = ?) is key. Your DBA can tune the size of the package cache (which holds the dynamic statement cache) to help maximize the hit ratio on this cache.
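For example, a parameter marker in JDBC looks like this (a minimal sketch; the connection variable con and the table t1 are assumed from the discussion above):

import java.sql.*;

String sql = "select * from t1 where c1 = ?";
try (PreparedStatement ps = con.prepareStatement(sql)) {
    ps.setInt(1, 1);   // any bound value reuses the same cached plan
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            // process row
        }
    }
}

Because the statement text is identical regardless of the value bound, every execution can hit the same cached plan.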
Although caching helps avoid repeatedly compiling a query, it does not offer the plan stability that static SQL does, so YMMV.

Related

What is the PostgreSQL equivalent of `allowMultiQueries=true`?

I want to execute multiple select statements in the same JDBC query, in order to amortize network latency over a number of related queries, as described in this question:
Multiple queries executed in java in single statement
The accepted answer is to use allowMultiQueries=true. Unfortunately, this is a feature specific to the MySQL JDBC driver.
What is the equivalent in PostgreSQL JDBC?
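(As far as I know, the PostgreSQL JDBC driver accepts multiple semicolon-separated statements in a plain Statement without any special connection property; a hedged sketch of the multi-result pattern, with hypothetical table names and an assumed connection con:)

import java.sql.*;

try (Statement st = con.createStatement()) {
    boolean isResultSet = st.execute(
        "select * from orders; select * from customers");
    while (true) {
        if (isResultSet) {
            try (ResultSet rs = st.getResultSet()) {
                while (rs.next()) {
                    // process row
                }
            }
        } else if (st.getUpdateCount() == -1) {
            break;   // no more results of any kind
        }
        isResultSet = st.getMoreResults();
    }
}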

Difference between the use of FOR FETCH ONLY in query & ResultSet.CONCUR_READ_ONLY in JDBC statement

My application is running 200 select statements per second (like SELECT A, B, C FROM DUMMYSC.DUMMYTB, etc.). 10-15% of the queries fail with the error below:
DB2 SQL Error: SQLCODE=-913, SQLSTATE=57033, SQLERRMC=00C9008E;00000304;DSNDB06 .SYSTSTSS.X'000001C5'.X'0C'
I'm looking to use one of the solutions below, but unable to understand the difference between the two.
ResultSet.CONCUR_READ_ONLY, as in:
statement = connection.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
versus
FOR FETCH ONLY, as in:
SELECT A, B, C FROM DUMMYSC.DUMMYTB FOR FETCH ONLY
FOR FETCH ONLY (aka FOR READ ONLY) prevents the cursor from being used in a positioned update or positioned delete statement (i.e. UPDATE ... WHERE CURRENT OF cursor-name, or DELETE ... WHERE CURRENT OF cursor-name).
At the JDBC level on the client, the ResultSet concurrency option determines whether the Java code can update the result-set contents or not. If you do not need the cursor to be scrollable, don't use TYPE_SCROLL_*; use TYPE_FORWARD_ONLY instead, as that should improve concurrency. CONCUR_READ_ONLY and FOR FETCH ONLY work together.
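A minimal sketch combining the two, using the table from the question (the connection is assumed):

Statement stmt = connection.createStatement(
        ResultSet.TYPE_FORWARD_ONLY,    // no scrolling needed: better concurrency
        ResultSet.CONCUR_READ_ONLY);    // client side: result set is not updatable
ResultSet rs = stmt.executeQuery(
        "SELECT A, B, C FROM DUMMYSC.DUMMYTB FOR FETCH ONLY");
while (rs.next()) {
    // process row
}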
Sometimes it's best to ensure a statement-specific isolation level by using a WITH CS or WITH UR clause on the query, instead of depending on the package isolation or some default that you don't control.
For Db2 on z/OS, if your application can cope with incomplete results (i.e. if that makes business sense), you can use SKIP LOCKED DATA in your query. For Db2 for Linux/UNIX/Windows, other registry settings and special register settings are available to get similar behaviour.
There's also the USE AND KEEP...LOCKS syntax in the isolation clause of the query, which influences the duration of locks.
I cannot tell from your question whether the result set is read-only by nature (for example, if the query is against a read-only view), or how your Java code runs the query (via a prepared statement or not?); both influence the outcome.
A DBA will be able to show you exactly what locks your transaction is taking for a specific combination of JDBC cursor/ResultSet settings and query syntax.
The information you posted is not enough to determine what caused the timeout on the table space access. It could be other SQL statements holding locks, some of these 200 statements attempting updates, or something else.
But if you know for sure that you don't need to update the data in your SQL and you don't mind dirty reads, then you should specify FOR READ ONLY WITH UR in your query. This not only avoids potential timeouts caused by other SQL statements, but also lowers resource overhead and improves system performance.
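For example (a sketch, reusing the statement from the snippet above):

// uncommitted read + read-only: no row locks taken, but dirty reads are possible
ResultSet rs = stmt.executeQuery(
        "SELECT A, B, C FROM DUMMYSC.DUMMYTB FOR READ ONLY WITH UR");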

Need to join oracle and sql server tables in oledb source without using linked server

My SSIS package has an OLE DB source which joins Oracle and SQL Server to get the source data and loads it into a SQL Server OLE DB destination. Earlier we were using a linked server for this purpose, but we cannot use a linked server anymore.
So I am taking the data from SQL Server and want to feed it into the IN clause of the Oracle query, which I am keeping as the SQL command of the OLE DB source.
I tried passing an object-type variable from SQL Server into the IN clause of the Oracle query in the OLE DB source, but I get an error that Oracle cannot have more than 1000 literals in an IN list. So basically I think I have to do something like this:
select * from oracle.db where id in (select id from sqlserver.db).
Since I cannot use a linked server, I was thinking I could have a temp table which can be used throughout the package.
I also tried using a Merge Join in SSIS, but my source data set is really large and the merge join is returning fewer rows than expected. I am badly stuck at this point. I have tried a number of things and nothing seems to be working.
Can someone please help? Any help will be greatly appreciated.
A couple of options to try.
Lookup:
My first instinct was a Lookup Task, but that might not be a great solution depending on the size of your data sets, since all of the records from both tables have to be pulled over the wire and stored in memory on the SSIS server. But if you were able to pull off a Merge Join, then a Lookup should also work, though it might be slow.
Set an OLE DB Source to pull the Oracle data, without the WHERE clause.
Set a Lookup to pull the id column from your SQL Server table.
On the General tab of the Lookup, under Specify how to handle rows with no matching entries, select Redirect rows to no-match output.
The output of the Lookup will just be the Oracle rows that found a matching row in your SQL Server query.
Working Table on the Oracle server
If you have the option of creating a table in the Oracle database, you could create a Data Flow Task to pipe the results of your SQL Server query into a working table on the Oracle box. Then, in a subsequent Data Flow, just construct your Oracle query to use that working table as a filter.
Probably follow that up with an Execute SQL Task to truncate that working table.
Although this requires write access to Oracle, it has the advantage of off-loading the heavy lifting of the query to the database machine, and only pulling the rows you care about over the wire.

HSQL DB: is it possible to simulate Oracle IN clause item limit?

Is there some HSQLDB property which would control how many items can be in the list used in an IN clause? Oracle limits it to 1000 items; when I have more elements, I split the list into chunks of 1000 and execute more queries. I'd need the HSQL database to simulate this scenario (I am writing an automated test and I'd like it to fail when someone removes this list-splitting mechanism in the future).
No such limit can be set in HSQLDB. You should be able to check for the limit with a stored procedure in Oracle and in HSQLDB, so it is not affected by others modifying the application code.
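If it helps, the kind of chunking logic such a test would guard might look like this (a sketch; the method name is hypothetical):

import java.util.ArrayList;
import java.util.List;

static <T> List<List<T>> splitForInClause(List<T> items, int maxPerChunk) {
    List<List<T>> chunks = new ArrayList<>();
    for (int i = 0; i < items.size(); i += maxPerChunk) {
        chunks.add(items.subList(i, Math.min(i + maxPerChunk, items.size())));
    }
    return chunks;
}

A test calling splitForInClause(ids, 1000) can then assert that every chunk holds at most 1000 elements, independently of any database-enforced limit.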

How to perform a Bulk Insert in Sybase SQL

I need to insert a big amount of data (some millions of rows) and I need to do it quickly.
I read about bulk insert via ODBC in .NET and Java, but I need to perform it directly on the database.
I also read about batch inserts, but what I have tried has not seemed to work:
Batch Insert, Example
I'm executing an INSERT SELECT, but it's taking something like 0.360 s per row, which is very slow and I need to make some improvements here.
I would really appreciate some guidance here with examples and documentation if possible.
DATABASE: SYBASE ASE 15.7
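(For reference, the kind of client-side JDBC batching the linked example refers to looks roughly like this; a sketch only, with a hypothetical table and row type, and note it still runs from the client rather than directly on the database:)

String sql = "insert into my_table (id, val) values (?, ?)";
try (PreparedStatement ps = con.prepareStatement(sql)) {
    con.setAutoCommit(false);          // commit in chunks, not per row
    int n = 0;
    for (Row r : rows) {               // Row/rows are hypothetical
        ps.setInt(1, r.id);
        ps.setString(2, r.val);
        ps.addBatch();
        if (++n % 5000 == 0) {         // flush every 5000 rows
            ps.executeBatch();
            con.commit();
        }
    }
    ps.executeBatch();                 // flush the remainder
    con.commit();
}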
Expanding on some of the comments ...
blocking, slow disk IO, and any other 'wait' events (i.e., anything other than actual insert/update activity) can be ascertained from the master..monProcessWaits table (where SPID = spid_of_your_insert_update_process) [see the P&T manual for Monitoring Tables (aka MDA tables)]
master..monProcessObject and master..monProcessStatement will show logical/physical IOs for currently running queries [again, see P&T manual for MDA tables]
master..monSysStatement will show logical/physical IOs for recently completed queries [again, see P&T manual for MDA tables]
for UPDATE statements you'll want to take a look at the query plan to see if you're suffering from a poor join order; also of key importance ... direct (fast/good) updates vs deferred (slow/bad) updates; deferred updates can occur for many reasons ... some fixable, some not ... updating indexed columns, poor join order, updates that cause page splits and/or row forwardings
RI (PK/FK) constraints can be viewed with sp_helpconstraint table_name; query plans will also show the under-the-covers joins required when performing RI (PK/FK) validations during inserts/updates/deletes
triggers are a bit harder to locate (an official sp_helptrigger doesn't show up until ASE 16); check sysobjects.[ins|upd|del]trig where name = your_table - these hold the object id(s) of any insert/update/delete triggers on the table; also check sysobjects records where type = 'TR' and deltrig = object_id(your_table) - these provide support for additional insert/update/delete triggers (I don't recall at the moment if this is just ASE 16+)
if triggers are being fired, need to review the associated query plans to make sure the inserted and deleted tables (if referenced) are driving any queries where these pseudo tables are joined with permanent tables
There are likely some areas I'm forgetting (off the top of my head) ... key take away is that there could be many reasons for 'slow' DML statements.
One (relatively) quick way to find out if RI (PK/FK) constraints or triggers are at play ...
set showplan on
go
insert/update/delete statements
go
Then review the resulting query plan(s); if you see references to any tables other than the ones explicitly listed in the insert/update/delete statements then you're likely dealing with RI constraints and/or triggers.