In my Perl script, I use DBD::Sybase (via the DBI module) to connect to SQL Server 2008. The base program below runs without problems:
use DBI;
# assign values to $host, $usr, $pwd
my $dbh = DBI->connect("dbi:Sybase:$host", $usr, $pwd);
$dbh->do("BEGIN TRAN tr1");
my $update = $dbh->prepare("UPDATE mytable SET qty = ? where name = ?");
$update->execute(100, 'apple');
$dbh->do("END TRAN tr1");
However, if I insert one more prepare statement right before the existing prepare statement, so that the program looks like:
...
my $insert = $dbh->prepare("INSERT INTO mytable (name, qty) VALUES (?, ?)");
my $update = $dbh->prepare("UPDATE mytable SET qty = ? where name = ?");
...
and the rest all the same, then when I run it, I get:
DBD::Sybase::db do failed: Server message number=3902 severity=16 state=1 line=1 server=xxx text=The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.
So it looks like the additional prepare statement somehow disrupted the entire transaction flow. I had been running the same code via the DBD::ODBC driver with no problem against SQL Server 2005. (But my firm upgraded to 2008 and I had to use DBD::Sybase to get around some other problems.)
Any help / suggestion on how to resolve this issue would be much appreciated. In particular, using a different db handle for the other prepare is not a desired solution, since that would defeat the purpose of having them in a single transaction.
UPDATE: It turns out that if I execute the additional insert at least once, then the program again runs fine. So it looks like every prepared statement needs to be run under Sybase. But that isn't a requirement with ODBC and isn't a reasonable requirement in general. Any way to get around it?
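For reference, a rough sketch of what now works (the values in the extra execute are made up; that one execute on $insert is the only change from the failing version):
my $dbh = DBI->connect("dbi:Sybase:$host", $usr, $pwd);
$dbh->do("BEGIN TRAN tr1");
my $insert = $dbh->prepare("INSERT INTO mytable (name, qty) VALUES (?, ?)");
my $update = $dbh->prepare("UPDATE mytable SET qty = ? WHERE name = ?");
$insert->execute('pear', 1);    # executing the extra handle once avoids the error
$update->execute(100, 'apple');
$dbh->do("END TRAN tr1");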
You are learning Perl AND Sybase basics and drawing several incorrect conclusions.
Forget about what it does under ODBC for a moment. ODBC most probably has AUTOCOMMIT turned on, and thus you have no transaction control whatsoever. (Why anyone would use ODBC when DBD:: supports DB-Lib and CT-Lib is beyond me, but that's a separate story.)
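For what it's worth, autocommit is controllable from DBI at connect time; a minimal sketch using the standard DBI connect attributes, shown only to illustrate where the difference comes from:
my $dbh = DBI->connect("dbi:Sybase:$host", $usr, $pwd,
                       { AutoCommit => 0, RaiseError => 1 });  # commits happen only when you ask for them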
Re: "So it looks like every prepared statement needs to be run under Sybase."
Rawheiser is correct. What exactly do you expect to achieve by preparing a batch but performing a do instead? Where else do you expect to execute the batch prepared under Sybase, other than under Sybase?
Do vs prepare/execute are quite different. prepare/execute for Sybase works just fine in millions of programs; you just have to learn what it does, not what you think it should do. prepare lets you load a batch, a block of commands terminated by GO in the normal Sybase sense. Execute executes the prepared batch (supplies the GO and sends the batch to the server), and captures whatever is returned (according to whatever arrays/variables you have set).
Do is immediate, single command, with no prepare. A prepare+execute combined.
Performing only single-statement do's, and only dynamic SQL, simply because that's all that you could get to work, is very limiting and quite unnecessary.
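A tiny illustration of the difference, using the question's table (hedged; it shows only the shape of the two calls):
# do: immediate, single command, no statement handle kept around
$dbh->do("UPDATE mytable SET qty = 100 WHERE name = 'apple'");

# prepare/execute: load the batch first, then execute sends it and captures results
my $sth = $dbh->prepare("UPDATE mytable SET qty = ? WHERE name = ?");
$sth->execute(100, 'apple');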
You currently have:
Prepare:
UPDATE
Execute (100)
ExecuteImmediate(Do):
COMMIT TRAN
So of course, there is no BEGIN TRAN. (The first "do" has already executed and completed; the BEGIN TRAN is gone.)
I think what you want (intended originally) is this. Forget the 'do':
Prepare:
BEGIN TRAN
UPDATE
COMMIT TRAN
Execute (100)
Then change it to:
BEGIN TRAN
INSERT
UPDATE
COMMIT TRAN
Execute (100)
Your $update and $insert will confuse you (you're executing a multi-statement batch, right? Not an isolated single command in the middle of a prepared batch). If you get rid of them, and think in terms of $execute [whatever you have prepared in the batch], it might help you to understand the problem better.
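A minimal sketch of that single-batch idea (literal values are used here because placeholder support inside a multi-statement batch varies by driver; the table and values are from the question):
my $batch = $dbh->prepare(q{
    BEGIN TRAN
    INSERT INTO mytable (name, qty) VALUES ('pear', 1)
    UPDATE mytable SET qty = 100 WHERE name = 'apple'
    COMMIT TRAN
});
$batch->execute();   # the whole transaction goes to the server as one batch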
Do not form conclusions until you have all the above working as intended.
And read up on BEGIN/COMMIT TRAN.
Last, what exactly is an "END TRAN"? I do not think the code block you have posted is real.
Don't dynamically create SQL; it is dangerous (SQL injection).
You should be able to prepare multiple inserts/updates, and your link to the DBI documentation does not say you cannot; it says some drivers may not be able to tell you much about a statement which is ONLY prepared.
I'd post a failing example with error to the dbi-users list for comment as the DBD::Sybase maintainer hangs out there (see dbi.perl.org).
Turns out that DBI's prepare method is not quite portable across various database drivers, as noted here. For the Sybase driver, it is most likely that prepare is not working as intended. One way to tell is that after running prepare, the variable $insert->{NUM_OF_FIELDS} is undefined.
To get around the problem, do one of the following:
1) Do not prepare anything. Just dynamically construct the statement as a text string and run $dbh->do($stmt), or
2) Run finish on all outstanding statement handles (under that database handle) before running COMMIT TRAN. I personally much prefer this way.
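A minimal sketch of option 2, based on the code in the question (the two finish calls are the only addition):
$dbh->do("BEGIN TRAN tr1");
my $insert = $dbh->prepare("INSERT INTO mytable (name, qty) VALUES (?, ?)");
my $update = $dbh->prepare("UPDATE mytable SET qty = ? WHERE name = ?");
$update->execute(100, 'apple');
$insert->finish;            # release the handle that was never executed
$update->finish;
$dbh->do("COMMIT TRAN");    # the commit is no longer disrupted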
Related
I am using Postgres 13.5 and I am unsure how to combine commit and error handling in a stored procedure or DO block. I know that if I include an EXCEPTION clause in my block, then I cannot include a COMMIT.
I am new to Postgres. It has also been over 15 years since I have written SQL that was working with transactions. When I was working with transactions I was using Oracle and recall using AUTONOMOUS_TRANSACTION to resolve some of these issues. I am just not sure how to do something like that in Postgres.
Here is a very simplified DO block. As I said above, I know that the commits will cause the procedure to throw an exception. But if I remove the EXCEPTION clause, then how will I trap an error if it happens? After reading many things, I still have not found a solution, so I am clearly not understanding something that would lead me to it.
DO
$$
DECLARE
    v_Start timestamptz;
    v_id integer;
    v_message_type varchar(500);
BEGIN
    select current_timestamp into v_Start;
    select q.id, q.message_type into v_id, v_message_type from message_queue q;
    call Load_data(v_id, v_message_type);
    commit; -- if Load_Data completes successfully, I want to commit the data
    insert into log (id, message_type, status, "start", "end")
    values (v_id, v_message_type, 'Success', v_Start, current_timestamp);
    commit; -- commit the log insert for success
EXCEPTION
    WHEN others THEN
        insert into log (id, message_type, status, "start", "end", error_message)
        values (v_id, v_message_type, 'Failure', v_Start, current_timestamp, SQLERRM || ', ' || SQLSTATE);
        commit; -- commit the log insert for failure
END;
$$;
Thanks!
Since this is a pattern that I will have to do tens of times, I want to understand the right way to do this.
Since you cannot use transaction management statements in a subtransaction, you will have to move part of the processing to the client side.
But your sample code doesn't need any transaction management at all! Simply remove all the COMMIT statements, and the procedure will work just as you want it to. Remember that PostgreSQL runs in autocommit mode, so your procedure call from the client will automatically run in its own transaction and commit when it is done.
But perhaps your sample code is simplified, and you would like more complicated processing (looping etc.) in your actual use cases. So let's discuss your options:
One option is to remove the EXCEPTION handler and move only that part to the client side: if the procedure causes an error, roll back and insert a log message. Another, perhaps cleaner, method is to move the whole transaction management to the client side. In that case, you would replace the complete procedure with client code and call load_data directly from client code.
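A rough sketch of the second option, in the same Perl/DBI style as the first question in this thread (DBD::Pg assumed; the procedure and table names are taken from the question, and the timestamp columns are left out for brevity):
use DBI;

# Client-side transaction management: the client decides what gets committed,
# so the procedure no longer needs COMMIT or an EXCEPTION handler.
my $dbh = DBI->connect("dbi:Pg:dbname=$db", $usr, $pwd,
                       { AutoCommit => 0, RaiseError => 1 });

my ($id, $message_type) =
    $dbh->selectrow_array("SELECT id, message_type FROM message_queue");

eval {
    $dbh->do("CALL load_data(?, ?)", undef, $id, $message_type);
    $dbh->do("INSERT INTO log (id, message_type, status) VALUES (?, ?, 'Success')",
             undef, $id, $message_type);
    $dbh->commit;    # data and success row become visible together
};
if ($@) {
    $dbh->rollback;  # undo whatever load_data changed
    $dbh->do("INSERT INTO log (id, message_type, status, error_message) VALUES (?, ?, 'Failure', ?)",
             undef, $id, $message_type, "$@");
    $dbh->commit;    # commit only the failure log row
}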
When I'm sketching out SQL statements, I have a file of all the queries I have used to analyse my live data. Each time I write a new statement or group of statements at the end of the file, I select them and click 'Execute' to see the results. I'm paranoid that I may forget the selection stage and accidentally run all the queries in the entire file sequentially, and so I head the file with the line
USE FakeDatabase
so that the queries will fail, as they will be run against a non-existent database. But no, instead I get the error
USE statement is not supported to switch between databases
(N.B. I am using SQL Server Management Studio v17.0 RC1 against a v12 Azure SQL Server database.)
What T-SQL statement can I use that will prevent further execution of the T-SQL statements in the file?
USE is not supported in Azure. You can try the below, but there can be many options depending on your use case.
Replace USE FakeDatabase with the statement below:
if db_name() <> 'FakeDatabase'
return;
You could, instead, put something like this in each script:
IF @@SERVERNAME <> 'Not-Really-My-Server'
BEGIN
raiserror('Database Name Not Set', 20, -1) with log
END
-- Rest of my query...
I have failed to understand the need for pgScript, which can be executed using the pgAdmin tool. When should it be used? What can it do that PL/pgSQL cannot? What is its equivalent in Microsoft SQL Server?
pgScript is a client-side scripting language, while PL/pgSQL runs on the server. This means they have entirely different use cases. For example, pgScript can manage transaction status while PL/pgSQL cannot, but PL/pgSQL can be used to extend the language of SQL while pgScript cannot do that.
Additionally, it means the two handle many other things quite differently, ranging from query plans to dynamic SQL, and while pgScript requires round trips between queries, PL/pgSQL does not.
One use for pgScript is to define variables and use them later in your SQL statements.
For example, you could do something like this:
declare @mytbl, @maxid;
set @mytbl = 'sometable';
set @maxid = 2500;
set @res = select count(*) from @mytbl where id <= @maxid;
print @res;
The idea of this approach is to keep any variables you want to change at the top of your script, rather than having them buried deep inside complex SQL queries.
Of course, pgScript is a feature available only inside the pgAdmin III client, as @Craig Ringer mentioned in his comment.
I used the DDL script generator from Toad Data Modeler.
I ran the script using normal query execution in pgAdmin III, but it kept giving me the error:
"ERROR: relation "user" already exists SQL state: 42P07".
I ran it with Execute pgScript in pgAdmin III and it worked fine.
I am calling update statements one after the other from a servlet to DB2. I am getting error SQLSTATE 40001, reason code 68, which I found is due to a deadlock timeout.
How can I resolve this issue?
Can it be resolved by setting a query timeout?
If yes, then how do I use it with update statements in a servlet, and where should I use it?
The reason code 68 already tells you this is due to a lock timeout (deadlock is reason code 2). It could be due to other users running queries at the same time that use the same data you are accessing, or to your own multiple updates.
Begin by running db2pd -db locktest -locks show detail from a DB2 command line to see where the locks are. You'll then need to run something like:
select tabschema, tabname, tableid, tbspaceid
from syscat.tables where tbspaceid = # and tableid = #
filling in the # symbols with the ID numbers you get from the db2pd command output.
Once you see where the locks are, here are some tips:
Deadlock frequency can sometimes be reduced by ensuring that all applications access their common data in the same order – meaning, for example, that they access (and therefore lock) rows in Table A, followed by Table B, followed by Table C, and so on.
taken from: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.trb.doc/doc/t0055074.html
recommended reading: http://www.ibm.com/developerworks/data/library/techarticle/dm-0511bond/index.html
Addendum: if your servlet or another guilty application is using select statements found to be involved in the deadlock, you can try appending WITH UR to those select statements, if accuracy of the newly updated (or inserted) data isn't important.
For me, the solution was adding FOR READ ONLY WITH UR at the end of all my SELECT statements. (Apparently my select statements were returning so much data that they locked the tables long enough to interfere with other SQL statements.)
See https://www.ibm.com/support/knowledgecenter/SSEPEK_10.0.0/sqlref/src/tpc/db2z_sql_isolationclause.html
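A small sketch of where the clause goes (hypothetical table and column names; the clause is plain DB2 SQL, so it works the same no matter which client issues the statement):
my $sth = $dbh->prepare(
    "SELECT order_id, status FROM app.orders WHERE status = ? FOR READ ONLY WITH UR"
);
$sth->execute('PENDING');   # uncommitted read: this query does not hold row locks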
We have a stored procedure that ran fine until 10 minutes ago, and then it just hangs after you call it.
Observations:
Copying the code into a query window yields the query result in 1 second
SP takes > 2.5 minutes until I cancel it
Activity Monitor shows it's not being blocked by anything, it's just doing a SELECT.
Running sp_recompile on the SP doesn't help
Dropping and recreating the SP doesn't help
Setting LOCK_TIMEOUT to 1 second does not help
What else can be going on?
UPDATE: I'm guessing it had to do with parameter sniffing. I used Adam Machanic's routine to find out which subquery was hanging. I found things wrong with the query plan thanks to the hint by Martin Smith. I learned about EXEC ... WITH RECOMPILE and OPTION (RECOMPILE) for subqueries within the SP, and OPTION (OPTIMIZE FOR (@parameter = 1)) in order to attack parameter sniffing. I still don't know what was wrong in this particular case, but I came out of this battle seasoned and much better armed. I know what to do next time. So here are the points!
I think this is related to parameter sniffing and the need to copy your input params to local params within the SP. Adding WITH RECOMPILE causes the execution plan to be recreated and eliminates much of the benefit of having an SP. We were using WITH RECOMPILE on many reports in an attempt to eliminate this hanging issue, and it occasionally resulted in hanging SPs that may have been related to other locks and/or transactions accessing the same tables simultaneously. See this link for more details:
Parameter Sniffing (or Spoofing) in SQL Server, and change your SPs to the following to fix this:
CREATE PROCEDURE [dbo].[SPNAME] @p1 int, @p2 int
AS
DECLARE @localp1 int, @localp2 int
SET @localp1 = @p1
SET @localp2 = @p2
-- ...and reference @localp1 / @localp2 (not @p1 / @p2) in the queries that follow
Run Adam Machanic's excellent sp_WhoIsActive stored proc while your query is running. It'll give you the wait information - meaning, what the stored proc is waiting on - plus things like the execution plan:
http://www.brentozar.com/archive/2010/09/sql-server-dba-scripts-how-to-find-slow-sql-server-queries/
If you want the outer command (like a calling stored procedure's full text), use the @get_outer_command = 1 parameter as well.
First things first.
Please check if there are any uncommitted transactions: a BEGIN TRANSACTION without a corresponding COMMIT TRANSACTION.
Thanks for all comments.
I still haven't found the answer, but I will post the progress here.
I failed to reproduce the problem before, but today I chanced upon another stored procedure with the same problem. Again the same symptoms appeared:
Hanging piece of the query runs fine and quick (3 secs) in a normal query window (hanging piece identified with sp_WhoIsActive)
No locks, according to Activity Monitor SPID is doing SELECT
Stored procedure runs for over 6 hours without response
Parameters passed to SP and variables declared in window are the same
Using the above hints, I found the SP execution plan, and it showed nothing out of the ordinary (to me, at least). Creating a new stored procedure with the same contents did not solve the problem either. So I started stripping the SP down to less and less content until I encountered a UDF call to another database. When I removed that (replaced the call with the inline contents of the function, a CASE statement), it ran fine again.
So this COULD have been the problem, but I am not very certain, as last time the problem disappeared by itself and I also changed a lot of other things while stripping this SP.
When we add new data, sometimes the execution plan becomes invalid or out of date, and then the stored procedure starts going into this limbo phase. Run the following commands on your database:
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
It will flush the cache memory and rebuild the execution plan the next time you run the stored proc.
msdn.microsoft.com
I think I had the same problem. I removed my parameters from the subqueries. It ran fine after that. Not sure if this is possible in your script but that is what solved it for me.
Brent Ozar's answer might work, but it returns only the active command text by default. For example, it returns WAITFOR DELAY '00:00:05' for a query like:
CREATE PROCEDURE spGetChangeNotifications
AS
BEGIN
SET NOCOUNT ON;
DECLARE
@actionType TINYINT;
WHILE @actionType IS NULL
BEGIN
WAITFOR DELAY '00:00:05';
SELECT TOP 1
@actionType = [ActionType]
FROM
TableChangeNotifications;
END;
SELECT
TOP 1000 [RecordID], [Component], [TableName], [ActionType], [Key1], [Key2], [Key3]
FROM
TableChangeNotifications;
END;
Here's how it looks:
Thus, check the parameter @get_outer_command as described here.
Also, try this one instead (a slightly modified query from the MS docs):
DECLARE
@sessions TABLE
(
[SPID] INT,
STATUS VARCHAR(MAX),
[Login] VARCHAR(MAX),
[HostName] VARCHAR(MAX),
[BlkBy] VARCHAR(MAX),
[DBName] VARCHAR(MAX),
[Command] VARCHAR(MAX),
[CPUTime] INT,
[DiskIO] INT,
[LastBatch] VARCHAR(MAX),
[ProgramName] VARCHAR(MAX),
[SPID_1] INT,
[REQUESTID] INT
);
INSERT INTO @sessions
EXEC sp_who2;
SELECT
[req].[session_id],
[A].[Login] AS 'login',
[A].[HostName] AS 'hostname',
[req].[start_time],
[cpu_time] AS 'cpu_time_ms',
OBJECT_NAME([st].[objectid], [st].[dbid]) AS 'object_name',
SUBSTRING(REPLACE(REPLACE(SUBSTRING([ST].text, ([req].[statement_start_offset] / 2) + 1, ((CASE [statement_end_offset]
WHEN -1
THEN DATALENGTH([ST].text)
ELSE [req].[statement_end_offset]
END - [req].[statement_start_offset]) / 2) + 1), CHAR(10), ' '), CHAR(13), ' '), 1, 512) AS [statement_text],
[ST].text AS 'full_query_text'
FROM
sys.dm_exec_requests AS req
CROSS APPLY
sys.dm_exec_sql_text(req.sql_handle) AS ST
LEFT JOIN @sessions AS A
ON A.SPID = req.session_id
ORDER BY
[cpu_time] DESC;
Here's how it looks:
Of course, it's possible to modify the code from Brent Ozar's answer so that it selects the full query text, too. Nearly the same technique is used there (link to the code as of 18.07.2020, so it might change over time):
I had the same problem today, and I don't know what caused it, but I found a solution. I took the input parameter and saved it into a new parameter, i.e.
declare @parameter2 as x = @parameter   -- where x is the parameter's data type
Then I changed the references to the parameter in the queries from @parameter to @parameter2.