How to see variable values in SQL Profiler when Trigger fires? - tsql

I am creating an "After Update" Trigger on a SQL Server 2008 table. The trigger fires just fine but one of the values it's updating in another table isn't correct. I am looking at a trace in SQL Profiler, but I can't see my variable's values in there.
I read another question on this and so added the RPC:Completed event to my trace, but for some reason there were no instances of that event where I expected them. That is, I see the event at other places in the trace, just not where my trigger is firing.
Just to (hopefully) be clear, my trigger is EXECUTING an SP like this:
EXEC SP_UpdateSomeStuff @variable1, @variable2
... and that's all that I see in the trace. What I wish to see is:
EXEC SP_UpdateSomeStuff @variable1 = 111, @variable2 = 222
... but I can't figure out which events to add to get this. Thanks for any ideas.

"RPC" stands for "Remote Procedure Call" -- generally, queries submitted "from outside" to SQL Server. Trigger events are anything but outside calls, which should be why you are not seeing them in Profiler.
I suspect that you won't be able to see your parameter values via SQL Profiler. Can you temporarily put in debugging code (for example, INSERT INTO DebugTable VALUES (@variable1, ...)) so that the values you are working with get logged somewhere?
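For example, a minimal debugging sketch (the DebugTable name and columns are hypothetical; adjust them to your schema):
CREATE TABLE DebugTable (LoggedAt datetime DEFAULT GETDATE(), Variable1 int, Variable2 int);
-- inside the trigger, just before the EXEC:
INSERT INTO DebugTable (Variable1, Variable2) SELECT @variable1, @variable2;
EXEC SP_UpdateSomeStuff @variable1, @variable2;
After the trigger fires, querying DebugTable shows the values the procedure was actually called with.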

Related

Executing sp_cursoropen causes SSMS to have a severe error

Our program executes a stored procedure via sp_cursoropen. I've managed to extract the exact call via a trace, and when running the code directly in SSMS, we get a severe error:
declare @p1 int
set @p1=0
declare @p3 int
set @p3=16388
declare @p4 int
set @p4=8196
declare @p5 int
set @p5=0
exec sp_cursoropen @p1 output,N' EXEC [dbo].[EventSearchSP] ''hip'' ',@p3 output,@p4 output,@p5 output
select @p1, @p3, @p4, @p5
When running the above code we get the following error:
Executing SQL directly; no cursor.
Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
This code used to work, and only fairly recently has it started failing. It fails across all databases on our instance (over 100).
When running the code on other servers, the procedure executes correctly and it does not fail.
The actual data returned by the stored procedure is fairly generic: a single column of numbers. I don't believe it's the stored procedure itself, as this problem happens when executing any stored procedure from the client program.
I'm fast running out of ideas for resolving the issue. Does anyone know what could be causing this error to suddenly start where it used to work fine?
We've found the problem, and if anyone knows why it is a problem I would love to know, so I can find a way around it.
We recently enabled SET XACT_ABORT ON on connections to SQL Server, after research showed it is generally accepted as best practice. However, with this setting on we get the above error; after turning it off again, the procedure began working as before.
Unfortunately we turned XACT_ABORT ON in order to solve another issue, which we now need to find another solution to!
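One possible workaround, sketched here under the assumption that the other issue only needs XACT_ABORT for specific code paths, is to set the option inside the procedures that need it rather than on every connection; a SET option changed inside a stored procedure reverts when the procedure returns (the procedure name below is hypothetical):
CREATE PROCEDURE dbo.DoCriticalWork
AS
BEGIN
    SET XACT_ABORT ON;   -- scoped to this procedure; reverts on exit
    BEGIN TRANSACTION;
    -- ... the statements that actually needed XACT_ABORT ...
    COMMIT TRANSACTION;
END;
This leaves the client's sp_cursoropen calls unaffected while the critical code still gets the abort-on-error behaviour.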

PostgreSQL - how to determine whether a transaction is active?

Let me open by saying: yes, I am aware of Determine if a transaction is active (Postgres)
Unfortunately the sole answer to that question is far too specific to the use case provided, and doesn't actually indicate whether or not a transaction is active.
The select txid_current(); trick suggested by How to check for pending operations in a PostgreSQL transaction doesn't appear to work - I always get the same transaction ID from adjacent calls to that function. Possibly this is because I'm trying to test it from pgAdmin, which is transparently starting transactions...? (Note: I don't actually care whether there are any pending changes or active locks, so looking at pg_locks isn't helpful - what if nothing's been touched since the transaction was started?)
So: How can I determine in PostgreSQL PL/pgSQL code if a transaction is currently active?
One possible use case is: the SP/FN in question will be doing its own explicit transaction management, and calling it with a transaction already active will greatly interfere with that. I want to raise an error so that the coding mistake of calling this SP/FN in a transaction can be corrected.
There are other use cases, though.
Ideally what I'm looking for is an equivalent to MSSQL's @@TRANCOUNT (though I don't really care how deeply the transactions may be nested...)
Postgres runs PL/pgSQL inside a transaction. Thus you can't control the transaction from inside PL/pgSQL. The calling code will look like:
begin;
select plpgsql_fn();
do 'begin null; /* any PL/pgSQL runs inside the same transaction */ end';
end;
So answering your question:
If you have PL/pgSQL running at the moment, you have a transaction active at that moment...
Of course you can use some trick, like starting/ending work over dblink or similar, but then you can run select txid_current(); over the dblink connection successfully...
If you want to determine if there have been any data modifications in your transaction, call txid_current_if_assigned(). It returns NULL if nothing has been modified yet.
If you only want to know if you are inside some transaction, you can save yourself the trouble, because you always are.
Before PostgreSQL v11, you cannot use transaction control statements in a function.
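A minimal sketch of the txid_current_if_assigned() check (PostgreSQL 10 or later; the function name below is hypothetical), raising an error when the surrounding transaction has already written something:
CREATE OR REPLACE FUNCTION assert_no_pending_writes() RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    IF txid_current_if_assigned() IS NOT NULL THEN
        RAISE EXCEPTION 'called inside a transaction that has already modified data';
    END IF;
END;
$$;
Note that this only detects transactions that have already been assigned a transaction ID (i.e. have modified data), not a read-only BEGIN.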
I haven't found a clean way to do that, but you can always issue BEGIN: if it succeeds, there was no transaction in progress (don't forget to roll back afterwards). If it fails with "there is already a transaction in progress", you are inside a transaction (better not to roll back in that case).

Identify cause of cascading trigger

DB2
I am getting SQLCODE -724, indicating a cascading trigger at the 17th level, and I suppose that some procedure, called by a procedure called by a trigger, updates a column that fires the trigger again.
How would I create a monitor to help identify the sequence in which procedures/triggers are being called so I can put an end to this cascading-trigger problem?
You can use the DB2 profiler written by Serge Rielau:
More than a TRACE. Extending the SQL PL Profiler to do tracing https://www.ibm.com/developerworks/community/blogs/SQLTips4DB2LUW/entry/tracing?lang=en
Reality Check: SQL PL Profiler and plan explain with actual row counts https://www.ibm.com/developerworks/community/blogs/SQLTips4DB2LUW/entry/sql_pl_profiler_and_plan_explain_with_actual_row_counts23?lang=en
Also, you can put a logger tool in the code, like log4db2 https://github.com/angoca/log4db2
With these tools you can see what is happening in the code and identify the source of the problem.
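If installing a profiler is not an option, a low-tech alternative (a sketch; the table and messages are hypothetical) is a debug table that every trigger and procedure in the suspected chain writes to, so the firing order can be reconstructed afterwards:
CREATE TABLE debug_call_log (
    logged_at TIMESTAMP DEFAULT CURRENT TIMESTAMP,
    module    VARCHAR(128),
    note      VARCHAR(255)
);
-- at the top of each suspected trigger/procedure body:
INSERT INTO debug_call_log (module, note)
VALUES ('TRG_ORDERS_AU', 'fired; about to update table X');
Reading the log ordered by logged_at usually makes the cycle obvious.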

DB2 lock timeout

We have a WebSphere cluster with four clones. Identical code runs on each of the clones. We have Quartz periodically kick off a job that runs the code.
The code tries to update a row in a table so that only one of the clones will be able to successfully update the table, and then that clone will run the rest of the job. Something like:
update <table> set status = 'RUNNING' where job_name = 'JOB1' and status = 'STOPPED'
We do not start a transaction when we execute the update statement.
What we see sometimes is that all four clones fail to update the table and all get a lock timeout error (SQLCODE -913).
We've also tried an alternative where we start a transaction, select to see if the row is marked as running, and if not, perform an update and commit; otherwise we roll back.
That had the same problem.
One solution we did not try yet is to modify the select to be a "select for update", although from my googling, I have doubts as to whether that will help.
Any suggestions?
This ended up not being a problem (that's what I get for listening to someone without checking it out myself).
I tested this out in our development environment with two clones. One of the clones would see the -913 lock timeout error occasionally while the other clone would successfully update the table. Other than the ugly log message, everything worked as it should.
Usually, however, we would not get the -913 error, but rather a warning indicating that there was no row to update from one of the clones. Again, this behavior is fine.
So, as we originally thought, and Clockwork-Muse also suggests, using UPDATE statements in this manner to enforce a lock works just fine in DB2.
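For reference, a minimal sketch of the pattern (DB2 SQL PL; the table and variable names are hypothetical): each clone runs the UPDATE and then checks how many rows it touched, so exactly one clone proceeds with the job.
UPDATE job_table
   SET status = 'RUNNING'
 WHERE job_name = 'JOB1'
   AND status = 'STOPPED';
-- v_rows declared as INTEGER earlier in the SQL PL block
GET DIAGNOSTICS v_rows = ROW_COUNT;  -- 1 means this clone claimed the job, 0 means another clone did
The occasional -913 simply means two clones collided on the row; the loser can treat it the same as an update count of 0.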

When does SQL SELECT statements throw exceptions?

T-SQL here, specifically SQL Server 2008 (literally just upgraded).
Concerning stored procedures: Try/Catch
I was trying to make a list of cases where a SELECT statement will throw an exception. The ones I can think of are syntax-related (including null variables) and divide by zero. I'm guessing there are a whole boatload of them for INSERT/ALTER and CREATE/TRUNCATE.
If you happen to know of a good source link, that would be great.
This question came up when I was reading this exhaustive blog post about error handling for SQL server. It's titled for SQL Server 2000, but I think most of it still applies.
edit
Sorry, I meant to link this earlier. . .
http://msdn.microsoft.com/en-us/library/aa175920(v=sql.80).aspx
Outside of compile ("didn't run") errors, you have at least these runtime errors:
arithmetic errors
These change based on various SET statements
Example: getting SQL Server to warn about truncation/rounding
overflow errors
Example: one of the rows overflows smallint in some calculation
CAST errors
E.g. you test ISNUMERIC in a WHERE or CASE and then try to cast 'bob^' or 1.23 to int
See Why use Select Top 100 Percent?
You'd always want to use TRY/CATCH anyway though, surely...?
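For instance, a minimal sketch of catching a runtime error from a SELECT (the conversion below fails at execution time, not at compile time):
BEGIN TRY
    SELECT CAST('bob^' AS int) AS Converted;   -- conversion error raised at run time
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
The CATCH block receives the conversion error instead of the batch simply aborting.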
Adding to gbn's post, you can also get locking errors like lock wait timeouts and deadlocks.
If you are referencing #Temp tables, you can get "Invalid object name '#Temp'" errors, because these are unbound until the statement executes.
If you are in READ UNCOMMITTED or WITH (NOLOCK), you can get error: 601 - "Could not continue scan with NOLOCK due to data movement."
If your code runs .NET code, you would probably get exceptions from there.
If your code selects from a remote server, you could get a whole different set of errors about connections.