Going by this link, I should be able to isolate the rows being read with a select statement, but when I run the steps below the rows are not locked:
create table test ( col varchar(50));
INSERT INTO test values('100');
select COL from mdmsysdb.test WITH RS USE AND KEEP EXCLUSIVE LOCKS
-- or
select COL from mdmsysdb.test WITH RR USE AND KEEP EXCLUSIVE LOCKS
When I run an update statement from a parallel application, it goes through without waiting.
What is wrong with my approach? Why is the row getting updated in step 4 from a parallel application when the select is defined to hold an exclusive lock?
If you are using RHEL and running the SQL statements on the shell command line (bash, ksh, etc.), then you need to understand the default autocommit behaviour.
Take care to use the correct SQL syntax for the version and platform of the Db2 server. The syntax differs between Linux/Unix/Windows, iSeries, and z/OS; each platform can behave differently, and per-platform settings can adjust the locking behaviour.
The Db2 CLP on Windows/Linux/Unix autocommits by default, so any locks taken by a statement are released as soon as the statement completes and the automatic commit happens. This explains why (in a different session) you cannot force a wait for a lock - the lock is already gone!
So the observed behaviour is correct - working as designed, just not what you expected. You can change the default behaviour by selectively disabling autocommit.
To disable autocommit, you have several choices. You can do it on the CLP command line to affect the current command (use the +c option), you can use the DB2OPTIONS environment variable to set it permanently for the session (usually a bad idea), or you can enable/disable autocommit on the fly inside a script via the update command options using c off; and update command options using c on; commands.
To disable autocommit on the Db2 CLP command line, just for a single statement, use the +c option, for example:
db2 +c "select COL from mdmsysdb.test WITH RS USE AND KEEP EXCLUSIVE LOCKS"
When you disable autocommit, you become responsible for performing an explicit commit or rollback. If you have used the +c option, any db2 command that omits it will revert to the default behaviour (or to DB2OPTIONS, if set). So you have to know what you are doing and take care to test properly.
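For a multi-statement test you can also disable autocommit inside a CLP script, as described above. A minimal sketch, assuming a database named yourdb (a placeholder) and run with db2 -tvf lock_test.sql:
connect to yourdb;
update command options using c off;
select COL from mdmsysdb.test WITH RS USE AND KEEP EXCLUSIVE LOCKS;
-- the row lock is now held; an update from a second session should wait (or hit a lock timeout)
commit;
update command options using c on;
connect reset;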
In building a script that will run against a production SQL Server I'd like to build and test it interactively.
I.e., create a script with a BEGIN TRANSACTION followed by some statements to delete and/or insert and/or update, possibly in batches if required. Then I'd like to execute the script in a query window and, with the transaction still active, query the database (in that window? in a different window?) in various ways to see how it would look if the transaction were committed, and then finally roll back.
Is this possible? Or what should I be doing instead?
It would be best to do testing on a PreProd server, but it's technically possible to do what you're saying.
If you begin a transaction and then run some statements, with no COMMIT, you can then query the affected tables in another window, by first declaring SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED. When you're done, you can then go back to the first window and execute a ROLLBACK statement.
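A minimal sketch of that workflow, assuming a hypothetical table dbo.Orders:
-- Window 1: start the transaction and make changes, but do not commit yet
BEGIN TRANSACTION;
UPDATE dbo.Orders SET Status = 'Archived' WHERE OrderDate < '2020-01-01';

-- Window 2: inspect the uncommitted changes
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Status, COUNT(*) FROM dbo.Orders GROUP BY Status;

-- Window 1: when you are done reviewing, undo everything
ROLLBACK;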
In free form RPG using embedded SQL, I specified this for my SQL options:
exec sql
set option commit=*CS,
datfmt=*iso,
closqlcsr=*endmod;
If I specify commit=*CS, do I need to specify WITH CS on my SQL select statement or is that assumed since I specified it in the set option?
If I specify commit=*none, and then specify WITH CS on my SQL select statement will the WITH CS take effect since on my set option commit I said *none?
The set option statement sets the default for all statements in the module.
That default can be overridden by the WITH clause on an individual statement.
So with
exec sql
set option commit=*CS,
datfmt=*iso,
closqlcsr=*endmod;
A statement without a WITH clause will use commitment control and an isolation level of cursor stability. A statement with WITH NC will not use commitment control, and a statement with WITH RS will use commitment control with an isolation level of read stability.
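For example (the file and host-variable names below are hypothetical):
// Uses the module default from set option: commitment control with *CS
exec sql
  select name into :custName
  from mylib.customer
  where id = :custId;

// WITH NC overrides the default: no commitment control for this statement only
exec sql
  select name into :custName
  from mylib.customer
  where id = :custId
  with NC;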
Note: closqlcsr=*endmod will hurt performance. It's usually used as a band-aid for poorly designed and/or outdated applications.
I need to load a large amount of data into a table in a DB2 database. I am using CLI LOAD mode on the table, set from a C program using the SQLSetStmtAttr function. While load mode is on, select statements do not work (the table is locked).
When the loading of the data completes I turn load mode off. After that the table becomes accessible, so I can perform selects from the db2 command line tools (or Control Center).
The problem is when my C program crashes or fails before turning load mode off. The table is then left locked, and I have to drop it, losing all the previous data.
My question is whether there is a way to recover the table?
DBMS Documentation is your friend. You can read the description of SQL0668N (or any other error!) to find out what reason code 3 means, as well as how to fix it.
Basically, when a LOAD operation fails, you need to perform some cleanup on the table – either restart or terminate it. This can be done using the LOAD utility from outside of your program (e.g., LOAD from /dev/null of del TERMINATE into yourtable nonrecoverable), but you can also do it programmatically.
Typically you would do this using the db2Load() API, and setting the piLongActionString member of the db2LoadStruct parameter you pass to db2Load(), with the same RESTART or TERMINATE operation.
It looks like you can set the SQL_ATTR_LOAD_INFO statement attribute to the same db2LoadStruct when using a CLI LOAD, too, but I am not sure whether this would actually work to complete a load restart/terminate.
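For reference, the out-of-band cleanup from the Db2 command line mentioned above would look something like this (yourtable is a placeholder):
LOAD FROM /dev/null OF DEL TERMINATE INTO yourtable NONRECOVERABLE
-- optionally, confirm the table is no longer in load pending state
LOAD QUERY TABLE yourtable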
I want to do some basic experiment on PostgreSQL, for example to generate deadlocks, to create non-repeatable reads, etc. But I could not figure out how to run multiple transactions at once to see such behavior.
Can anyone give me some ideas?
Open more than one psql session, one terminal per session.
If you're on Windows you can do that by launching psql via the Start menu multiple times. On other platforms open a couple of new terminals or terminal tabs and start psql in each.
I routinely do this when I'm examining locking and concurrency issues, used in answers like:
https://stackoverflow.com/a/12456645/398670
https://stackoverflow.com/a/12831041/398670
... probably more. A useful trick when you want to set up a race condition is to open a third psql session and BEGIN; LOCK TABLE the_table_to_race_on;. Then run statements in your other sessions; they'll block on the lock. ROLLBACK the transaction holding the table lock and the other sessions will race. It's not perfect, since it doesn't simulate offset-start-time concurrency, but it's still very helpful.
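A minimal sketch of that trick, assuming a hypothetical table t(id, n):
-- Session 3: take the lock so the other sessions queue up behind it
BEGIN;
LOCK TABLE t;

-- Sessions 1 and 2: issue the statements you want to race; they block here
UPDATE t SET n = n + 1 WHERE id = 1;

-- Session 3: release the lock; sessions 1 and 2 now race
ROLLBACK;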
Other alternatives are outlined in this later answer on a similar topic.
pgbench is probably the best solution in your case. It lets you test complex database resource contention, deadlocks, and multi-client, multi-threaded access.
To get deadlocks you can simply write a script like this (bench_script.sql):
BEGIN;
LOCK TABLE schm.tbl IN SHARE MODE;
-- the read just does some work while the SHARE lock is held
select count(*) from schm.tbl;
insert into schm.tbl values (1 + 9999*random(), 'test descr');
END;
and pass it to pgbench with -f parameter.
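For example, an invocation that runs the script from several clients at once might look like this (yourdb is a placeholder; -n skips vacuuming the standard pgbench tables, which don't exist here):
pgbench -n -c 8 -j 2 -T 60 -f bench_script.sql yourdb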
For more detailed pgbench usage I would recommend reading the official PostgreSQL pgbench manual
and getting acquainted with my pgbench question that was resolved recently here.
Craig Ringer provided a way to open multiple transactions manually; if you find that inconvenient, you can use pgbench to run multiple transactions at once.
I'm encountering some major performance problems with simple SQL queries generated by the Entity Framework (4.2) running against SQL Server 2008 R2. In some situations (but not all), EF uses the following syntax:
exec sp_executesql 'DYNAMIC-SQL-QUERY-HERE', @param1...
In other situations it simply executes the raw SQL with the provided parameters baked into the query. The problem I'm encountering is that queries executed with sp_executesql ignore all indexes on my target tables, resulting in an extremely poorly performing query (confirmed by examining the execution plan in SSMS).
After a bit of research, it sounds like the issue might be caused by 'parameter sniffing'. If I append the OPTION(RECOMPILE) query hint like so:
exec sp_executesql 'DYNAMIC-SQL-QUERY-HERE OPTION(RECOMPILE)', @param1...
The indexes on the target tables are used and the query executes extremely quickly. I've also tried toggling on the trace flag used to disable parameter sniffing (4136) on the database instance (http://support.microsoft.com/kb/980653), however this didn't appear to have any effect whatsoever.
This leaves me with a few questions:
Is there any way to append the OPTION(RECOMPILE) query hint to the SQL generated by Entity Framework?
Is there any way to prevent Entity Framework from using exec sp_executesql, and instead simply run the raw SQL?
Is anyone else running into this problem? Any other hints/tips?
Additional Information:
I did restart the database instance through SSMS, however, I will try restarting the service from the service management console.
Parameterization is set to SIMPLE (is_parameterization_forced: 0)
Optimize for ad hoc workloads has the following settings:
value: 0
minimum: 0
maximum: 1
value_in_use: 0
is_dynamic: 1
is_advanced: 1
I should also mention that restarting the SQL Server service via the service management console AFTER enabling trace flag 4136 with the below script appears to actually clear the trace flag... perhaps I should be doing this a different way...
DBCC TRACEON(4136,-1)
tl;dr
update statistics
We had a delete query with one parameter (the primary key) that took ~7 seconds to complete when called through EF and sp_executesql. Running the query manually, with the parameter embedded in the first argument to sp_executesql, made the query run quickly (~0.2 seconds). Adding option (recompile) also worked. Of course, those two workarounds aren't available to us since we're using EF.
Probably due to cascading foreign key constraints, the execution plan for the long running query was, uhmm..., huge. When I looked at the execution plan in SSMS I noticed that the arrows between the different steps in some cases were wider than others, possibly indicating that SQL Server had trouble making the right decisions. That led me to thinking about statistics. I looked at the steps in the execution plan to see what table was involved in the suspect steps. Then I ran update statistics Table for that table. Then I re-ran the bad query. And I re-ran it again. And again just to make sure. It worked. Our perf was back to normal. (Still somewhat worse than non-sp_executesql performance, but hey!)
It turned out that this was only a problem in our development environment. (And it was a big problem because it made our integration tests take forever.) In our production environment, we had a job running that updated all statistics on a regular basis.
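A minimal sketch of that fix (the table name is hypothetical; WITH FULLSCAN is optional but gives the optimizer the most accurate picture):
-- Rebuild the optimizer statistics for the table involved in the suspect plan steps
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;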
At this point I would recommend:
Set the optimize for ad hoc workloads setting to true.
EXEC sp_configure 'show advanced', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO
EXEC sp_configure 'optimize for ad hoc', 1;
GO
RECONFIGURE WITH OVERRIDE
GO
EXEC sp_configure 'show advanced', 0;
GO
RECONFIGURE WITH OVERRIDE;
GO
If after some time this setting doesn't seem to have helped, only then would I try the additional support of the trace flag. Trace flags are usually reserved as a last resort. Set the trace flag as a startup parameter via SQL Server Configuration Manager, rather than globally from a query window. See http://msdn.microsoft.com/en-us/library/ms187329.aspx
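For example, in SQL Server Configuration Manager the startup parameters for the instance would get an entry along these lines (each trace flag is its own -T entry):
-T4136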