In free-form RPG using embedded SQL, I specified this for my SQL options:
exec sql
set option commit=*CS,
datfmt=*iso,
closqlcsr=*endmod;
If I specify commit=*CS, do I need to specify WITH CS on my SQL SELECT statement, or is that assumed since I specified it in the SET OPTION?
If I specify commit=*NONE and then specify WITH CS on my SQL SELECT statement, will the WITH CS take effect even though my SET OPTION said *NONE?
The SET OPTION statement sets the default for all statements in the module.
That default can be overridden by the WITH clause on an individual statement.
So with
exec sql
set option commit=*CS,
datfmt=*iso,
closqlcsr=*endmod;
A statement without a WITH clause will use commitment control with an isolation level of cursor stability. A statement with WITH NC will not use commitment control, and a statement with WITH RS will use commitment control with an isolation level of read stability.
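For example, given the SET OPTION above, these two statements behave differently (a minimal sketch; the file, column, and host-variable names are made up):
// runs under the module default: commitment control with *CS
exec sql
select name into :wkName from mylib.mytable
where id = :wkId;
// overrides the default for this statement only: no commitment control
exec sql
select name into :wkName from mylib.mytable
where id = :wkId
with NC;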
Note: closqlcsr=*endmod will hurt performance. It's usually used as a band-aid for poorly designed and/or outdated applications.
My application is running 200 select statements per second (like SELECT A, B, C FROM DUMMYSC.DUMMYTB, etc.). 10-15% of the queries fail with the error below:
DB2 SQL Error: SQLCODE=-913, SQLSTATE=57033, SQLERRMC=00C9008E;00000304;DSNDB06 .SYSTSTSS.X'000001C5'.X'0C'
I'm looking to use one of the solutions below, but unable to understand the difference between the two.
ResultSet.CONCUR_READ_ONLY in
statement = connection.createStatement (ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
&
FOR FETCH ONLY in SELECT A, B, C FROM DUMMYSC.DUMMYTB FOR FETCH ONLY.
FOR FETCH ONLY (a.k.a. FOR READ ONLY) prevents the cursor from being used in a positioned update or positioned delete statement (i.e. UPDATE ... WHERE CURRENT OF cursor-name, or DELETE ... WHERE CURRENT OF cursor-name).
At the JDBC level on the client, the ResultSet concurrency option determines whether the Java code can update the result-set contents or not. If you do not need the cursor to be scrollable, don't use TYPE_SCROLL_*; use TYPE_FORWARD_ONLY instead, as that should improve concurrency. CONCUR_READ_ONLY and FOR FETCH ONLY work together.
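A minimal sketch combining the two, using the query from the question (connection setup is assumed):
// forward-only, read-only result set on the client side
Statement stmt = connection.createStatement(
    ResultSet.TYPE_FORWARD_ONLY,
    ResultSet.CONCUR_READ_ONLY);
// read-only cursor on the server side
ResultSet rs = stmt.executeQuery(
    "SELECT A, B, C FROM DUMMYSC.DUMMYTB FOR FETCH ONLY");
while (rs.next()) {
    // process rs.getString("A"), etc.
}
rs.close();
stmt.close();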
Sometimes it's best to ensure a plan-specific isolation level by using a WITH CS or WITH UR clause on the query, instead of depending on the package isolation or some default that you don't control.
For Db2 for z/OS: if your application can cope with incomplete results, i.e. if that makes business sense, then you can use SKIP LOCKED DATA in your query. For Db2 for Linux/Unix/Windows, other registry settings and special registers are available to get similar behaviour.
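For example, on Db2 for z/OS (using the question's table; whether skipping locked rows is acceptable is a business decision):
-- return committed rows, silently skipping any row another transaction has locked
SELECT A, B, C FROM DUMMYSC.DUMMYTB
WITH CS
SKIP LOCKED DATA;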
There's also the USE AND KEEP...LOCKS syntax in the isolation clause of the query, which influences the duration of locks.
I cannot tell from your question whether the result set is read-only by nature (for example, if the query is against a read-only view), or how your Java code runs the query (via a prepared statement or not?); both influence the outcome.
A DBA will be able to show you exactly what locks your transaction is taking for a specific combination of JDBC cursor/ResultSet settings and query syntax.
The information you posted is not enough to determine what caused the timeout on the table-space access. It could be other statements holding the lock, some of these 200 queries attempting updates, or something else.
But if you know for sure that you don't need to update the data and you are not worried about dirty reads, then you should specify FOR READ ONLY WITH UR in your query. This not only avoids potential timeouts caused by other statements, but also lowers resource overhead and improves system performance.
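For example, with the question's query:
-- read-only, takes no row locks, may return uncommitted (dirty) data
SELECT A, B, C FROM DUMMYSC.DUMMYTB
FOR READ ONLY
WITH UR;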
Going by this link, I should be able to lock the rows being read by a SELECT statement, but when I run the steps below it doesn't lock the rows:
create table test ( col varchar(50));
INSERT INTO test values('100');
select COL from mdmsysdb.test WITH RS USE AND KEEP EXCLUSIVE LOCKS
or
select COL from mdmsysdb.test WITH RR USE AND KEEP EXCLUSIVE LOCKS
When I run an UPDATE statement from a parallel application, it goes through without being blocked.
What is wrong with my approach? Why is the row getting updated in step 4 from a parallel application when the SELECT is defined to hold an exclusive lock?
If you are using RHEL and running the SQL statements on the shell command line (bash, ksh, etc.), then you need to understand the default autocommit behaviour.
Take care to use the correct SQL syntax for the version and platform of the Db2-server. The syntax differs between Linux/Unix/Windows, i-Series, and z/OS; each platform can behave differently, and each has different settings that adjust the locking behaviour.
The Db2 CLP on Windows/Linux/Unix autocommits by default, so any locks taken by a statement are released as soon as the statement completes and the automatic commit happens. This explains why another session is never forced to wait for the lock - the lock is already gone!
So the observed behaviour is correct - working as designed, just not what you imagined. You can change the default behaviour by selectively disabling autocommit.
To disable autocommit, you have several choices. You can do it on the CLP command line to affect the current command (use the +c option), you can use the DB2OPTIONS environment variable to set it permanently for the session (usually a bad idea), or you can enable/disable autocommit on the fly inside a script via the update command options using c off; and update command options using c on; commands.
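A sketch of the script approach, reusing the table from the question (a CLP batch file, run with something like db2 -tvf lockdemo.sql):
update command options using c off;
select COL from mdmsysdb.test WITH RS USE AND KEEP EXCLUSIVE LOCKS;
-- locks are held here; an update from another session has to wait
commit;
update command options using c on;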
To disable autocommit on the Db2 CLP command line, just for a single statement, use the +c option, for example:
db2 +c "select COL from mdmsysdb.test WITH RS USE AND KEEP EXCLUSIVE LOCKS"
When you disable autocommit, you become responsible for performing an explicit commit or rollback. If you have used the +c option, any db2 command that omits the option reverts to the default behaviour (or to DB2OPTIONS, if set). So you have to know what you are doing and take care to test properly.
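Putting it together from the shell, with the question's test table (the parallel update should now block until the commit):
db2 +c "select COL from mdmsysdb.test WITH RS USE AND KEEP EXCLUSIVE LOCKS"
# this session still holds the row locks
db2 commit
# locks released; blocked sessions can proceed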
Has anyone had the experience of Cloud SQL replication changing this parameter (enforce-gtid-consistency) to true, which disallows multiple statements in a transaction and creating temp tables?
How do I change it back to false in Cloud SQL?
Thx
H., this is Danny from Cloud SQL. We switched to using GTID for replication, which ensures no data loss during replica creation or failover. It's going to be the default after MySQL 5.7. With GTID enabled, the flag enforce-gtid-consistency has to be set. When enabled, this option enforces GTID consistency by allowing execution of only those statements that can be logged in a transactionally safe manner. It follows that the operations listed here cannot be used with this option:
1. CREATE TABLE ... SELECT statements
2. CREATE TEMPORARY TABLE statements inside transactions
3. Transactions or statements that update both transactional and nontransactional tables.
If you can share your query, I can help you find a workaround to separate the temp table from the multi-statement transaction. Sorry about the inconvenience.
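As a general pattern, a CREATE TABLE ... SELECT can be split into a separate DDL statement plus an INSERT ... SELECT, which is GTID-safe (table names here are illustrative):
-- not allowed with enforce-gtid-consistency:
--   CREATE TABLE report_copy SELECT * FROM report;
-- GTID-safe equivalent:
CREATE TABLE report_copy LIKE report;
INSERT INTO report_copy SELECT * FROM report;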
I have the same issue, but it's with Magento 2. I cannot easily change how it does its queries, so the only option for me was to build a MySQL server in Compute Engine.
This is very sad; CREATE TABLE ... SELECT is not such an unusual feature to just discard.
I want to log the actual SQL statements executed against a Postgres instance. I am aware that I can enable logging of the SQL statements. Unfortunately, this doesn't log the actual SQL, but rather a parsed version, with certain parameters stripped out and listed separately.
Is there a tool for reliably reconstituting this output into executable SQL statements?
Or is there a way of intercepting the SQL that is sent to the Postgres instance, so that that SQL can be logged?
We want to be able to replay these SQL statements against another database.
Thanks for your help!
Actually, PostgreSQL does log exactly the SQL that got executed. It doesn't strip parameters out; rather, it doesn't interpolate them in: it logs what the application sent, with the bind parameters separate. If your app sends insert into x(a,b) values ($1, $2) with bind params 42 and 18, that's what gets logged.
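For example, with log_statement = 'all' and a driver that uses the extended query protocol, the log contains something like the following (log_line_prefix omitted; the statement name S_1 is whatever the driver assigns):
LOG:  execute S_1: insert into x(a,b) values ($1, $2)
DETAIL:  parameters: $1 = '42', $2 = '18'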
There's no logging option to interpolate bind parameters into the query string.
Your last line is the key part. You don't want logging at all. You're trying to do statement based replication via the logs. This won't work well, if at all, due to volatile functions, the search_path, per-user settings, sequence allocation order/gap issues, and more. If you want replication don't try to do it by log parsing.
If you want to attempt statement-based replication, look into Pgpool-II. It has a limited ability to do so, with caveats aplenty.
Set log_statement to all in postgresql.conf. See the documentation chapter runtime-config-logging.
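If you prefer not to edit postgresql.conf by hand, the same setting can be changed from SQL (requires superuser) and picked up without a restart:
ALTER SYSTEM SET log_statement = 'all';   -- valid values: none, ddl, mod, all
SELECT pg_reload_conf();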
I'm writing a PL/1 subroutine that reads data from DB2. Depending on the input, it uses one of 3 cursors. These have to be opened, fetched, closed, etc., and for every one of these cursor-specific operations I have to specify the cursor's name. This leads to very redundant code, because the remaining operations are exactly the same in every case.
Is it possible to create a reference, to which I would assign the appropriate cursor? Then I could use this to perform the necessary tasks only once.
Because of safety-related restrictions, I'm not allowed to use dynamic (prepared) SQL.
And is there a reference containing all commands I can use in my EXEC SQL statements?
Thanks in advance
David
And is there a reference containing all commands I can use in my EXEC SQL statements?
IBM has documentation for DB2, which contains an SQL reference for the product.