Executing sp_cursoropen causes SSMS to raise a severe error - tsql

Our program executes a stored procedure via sp_cursoropen. I've managed to extract the exact call via a trace, and when running the code directly in SSMS, we get a severe error:
declare @p1 int
set @p1=0
declare @p3 int
set @p3=16388
declare @p4 int
set @p4=8196
declare @p5 int
set @p5=0
exec sp_cursoropen @p1 output,N' EXEC [dbo].[EventSearchSP] ''hip'' ',@p3 output,@p4 output,@p5 output
select @p1, @p3, @p4, @p5
When running the above code we get the following error:
Executing SQL directly; no cursor.
Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
This code used to work, and only fairly recently has it started failing. It fails across all databases on our instance (over 100).
When running the code on other servers, the procedure executes correctly and does not fail.
The data the stored procedure returns is fairly generic, a single column of numbers. I don't believe it's the stored procedure itself, as this problem occurs when executing any stored procedure from the client program.
I'm fast running out of ideas for resolving the issue. Does anyone know what could cause this error to suddenly start when it used to work fine?

We've found the problem, and if anyone knows why it is a problem, I would love to know so I can find a way around it.
We recently enabled SET XACT_ABORT ON for connections on the SQL Server after doing research and finding it is generally accepted as best practice. However, with this setting on we get the above error; after turning the setting off again, the procedure began working as before.
Unfortunately we turned XACT_ABORT ON in order to solve another issue, which we now need to find another solution to!
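If the setting is only a problem for the cursor call, one possible sketch (assuming you can control the batch the client sends; not confirmed beyond the scenario above) is to turn it off just around that call:

SET XACT_ABORT OFF;  -- hypothetical workaround: disable only around the cursor call

declare @p1 int, @p3 int, @p4 int, @p5 int
set @p1 = 0
set @p3 = 16388
set @p4 = 8196
set @p5 = 0

exec sp_cursoropen @p1 output, N' EXEC [dbo].[EventSearchSP] ''hip'' ',
     @p3 output, @p4 output, @p5 output

select @p1, @p3, @p4, @p5

SET XACT_ABORT ON;   -- restore it if the rest of the session relies on it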

Related

Does every procedure call create a recursive session in Oracle?

I'm trying to solve a case of constant ORA-00018 "maximum number of sessions exceeded" crashes, even though the sessions parameter is set to 1500. During those crashes the number of v$session entries is sometimes 50% off from the v$resource_limit.current_utilization value for the sessions parameter, so we suspect a lot of recursive sessions are being created, and that quickly brings the DB server down. I know that triggers can cause a lot of recursive sessions. Does every procedure call create the same effect, or is there a specific kind of procedure that generates them? I tried to test by checking the current_utilization value before and after running a simple procedure and didn't see a difference, maybe because my test procedure is too simple and finishes too quickly to notice. I've read this article http://tech.e2sn.com/oracle/oracle-internals-and-architecture/recursive-sessions-and-ora-00018-maximum-number-of-sessions-exceeded but it's not clear to me whether every procedure runs in a separate session. I'm using Oracle 10g.
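A hedged way to watch those numbers directly (v$resource_limit and v$session are standard dynamic performance views; run the pair before, during, and after your test procedure):

-- Sketch: compare what Oracle counts as sessions in use against the rows
-- actually visible in v$session; a large gap can hint at recursive sessions.
SELECT resource_name, current_utilization, max_utilization, limit_value
  FROM v$resource_limit
 WHERE resource_name = 'sessions';

SELECT COUNT(*) AS visible_sessions
  FROM v$session;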

Postgres syntax error at or near "VALUESNSERT"

We are trying to load data from one Postgres table to another Postgres table in the same database using Informatica, and we are having the following issue.
The error message is as follows:
Message Code: WRT_8229
Message: Database errors occurred:
FnName: Execute -- [Informatica][ODBC PostgreSQL Wire Protocol driver][PostgreSQL]ERROR: VERROR; syntax error at or near "VALUESNSERT"(Position 135; File scan.l; Line 1134; Routine scanner_yyerror; ) Error in parameter 6.
FnName: Execute -- [Informatica][ODBC PostgreSQL Wire Protocol driver][PostgreSQL]Failed transaction. The current transaction rolled back. Error in parameter 6.
FnName: Execute -- [DataDirect][ODBC lib] Function sequence error
It works fine if we do not load one of the string columns, which is 3000 bytes. Can anyone please shed some light on this issue?
Note: there are no reserved keywords in our table structure.
If you have already identified the error-causing column, you can follow the steps below to find the root cause:
1. Check the data type and length of the column in Informatica and confirm they match the target column in the database (see the sketch after this list).
2. Make sure you import the target definition from the database. Creating the target from another process or adding a column to an existing target can lead to such errors.
3. Run in verbose or debug mode to see exactly where the issue occurs. Check whether it is reading, transforming, and loading the data properly.
4. Remove the Postgres target and attach a flat file instead. If this works, then the issue is in the database table; check for indexes, constraints, etc. that could lead to this issue.
5. Check the ODBC driver version as well, which may have limitations around data types and length handling. ODBC is also not good at surfacing errors, so you may have to do some guesswork to find the cause.
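For step 1, a minimal sketch of checking the target column on the Postgres side (my_table and my_string_col are placeholders for your actual object names):

-- Hedged sketch: confirm the declared type and length of the 3000-byte column
-- in the target table so it can be compared with the Informatica definition.
SELECT column_name, data_type, character_maximum_length
  FROM information_schema.columns
 WHERE table_name = 'my_table'          -- placeholder table name
   AND column_name = 'my_string_col';   -- placeholder column name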
Thanks everyone. My issue got resolved after implementing Informatica PDO.

Identify cause of cascading trigger

DB2
I am getting SQLCODE -724, indicating a cascading trigger at the 17th level, and I suppose that some procedure called by a procedure called by a trigger updates a field that fires the trigger again.
How would I create a monitor to help identify the sequence in which procedures/triggers are being called so I can put an end to this cascading-trigger problem?
You can use the DB2 profiler written by Serge Rielau:
More than a TRACE. Extending the SQL PL Profiler to do tracing https://www.ibm.com/developerworks/community/blogs/SQLTips4DB2LUW/entry/tracing?lang=en
Reality Check: SQL PL Profiler and plan explain with actual row counts https://www.ibm.com/developerworks/community/blogs/SQLTips4DB2LUW/entry/sql_pl_profiler_and_plan_explain_with_actual_row_counts23?lang=en
Also, you can put a logger tool in the code, like log4db2 https://github.com/angoca/log4db2
With these tools, you can see what is happening in the code and identify the source of the problem.
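Before reaching for a profiler, a hedged starting point is to map the trigger dependencies straight from the catalog (standard SYSCAT views; MYSCHEMA is a placeholder for your schema):

-- Sketch: list each trigger, the table it fires on, and the objects it depends
-- on, to help reconstruct which trigger -> routine -> update chains can cascade.
SELECT t.trigschema, t.trigname, t.tabname, t.trigevent,
       d.btype, d.bschema, d.bname
  FROM syscat.triggers t
  LEFT JOIN syscat.trigdep d
         ON d.trigschema = t.trigschema
        AND d.trigname   = t.trigname
 WHERE t.tabschema = 'MYSCHEMA'          -- placeholder schema
 ORDER BY t.tabname, t.trigname;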

How to see variable values in SQL Profiler when Trigger fires?

I am creating an AFTER UPDATE trigger on a SQL Server 2008 table. The trigger fires just fine, but one of the values it's updating in another table isn't correct. I am looking at a trace in SQL Profiler, but I can't see my variables' values in there.
I read this other question and so added the RPC:Completed event to my trace, but there were no instances of that event in my trace where I expected them, for some reason. That is, I see it in other places in the trace, but not where my trigger is firing.
Just to (hopefully) be clear, my trigger is EXECUTING an SP like this:
EXEC SP_UpdateSomeStuff @variable1, @variable2
... and that's all that I see in the trace. What I wish to see is:
EXEC SP_UpdateSomeStuff @variable1 = 111, @variable2 = 222
... but I can't figure out which events to add to get this. Thanks for any ideas.
"RPC" stands for "Remote Procedure Call" -- generally, queries submitted "from outside" to SQL Server. Trigger events are anything but outside calls, which should be why you are not seeing them in Profiler.
I suspect that you won't be able to see your paremeter values via SQL Profiler. Can you temporarily put in debugging code (insert DebugTable values (Wvariable1, etc.), such that the value you are working with get logged somewhere?
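A minimal sketch of that debug-logging idea (dbo.DebugLog and the varchar lengths are made up; adjust to your actual parameter types):

-- Throwaway log table, dropped once the investigation is over.
CREATE TABLE dbo.DebugLog (
    LoggedAt  datetime     NOT NULL DEFAULT GETDATE(),
    Variable1 varchar(100) NULL,
    Variable2 varchar(100) NULL
);

-- Inside SP_UpdateSomeStuff (or the trigger itself), before the real work:
INSERT INTO dbo.DebugLog (Variable1, Variable2)
VALUES (CONVERT(varchar(100), @variable1), CONVERT(varchar(100), @variable2));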

When do SQL SELECT statements throw exceptions?

T-SQL here. Specifically SQL Server 2008 (literally just upgraded).
Concerning stored procedures: TRY/CATCH.
I was trying to make a list of cases where a SELECT statement will throw an exception. The ones I can think of are syntax related (including null variables) and divide by zero. I'm only guessing there is a whole boatload of them for INSERT/ALTER and CREATE/TRUNCATE.
If you happen to know of a good source link, that would be great.
This question came up when I was reading this exhaustive blog post about error handling for SQL Server. It's titled for SQL Server 2000, but I think most of it still applies.
edit
Sorry, I meant to link this earlier...
http://msdn.microsoft.com/en-us/library/aa175920(v=sql.80).aspx
Outside of compile ("didn't run") errors, you have at least these runtime errors:
arithmetic errors
These change based on various SET statements.
Example: getting SQL Server to warn about truncation / rounding.
overflow errors
Example: one of the rows overflows smallint in some calculation.
CAST errors
E.g. you use ISNUMERIC in a WHERE or CASE and then try to cast 'bob^' or 1.23 to int.
See Why use Select Top 100 Percent?
However, you'd always want to use TRY/CATCH anyway, surely...?
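To make the CAST case concrete, a minimal sketch (the inline values are made up) of a SELECT that compiles fine but fails at runtime, with TRY/CATCH picking it up:

BEGIN TRY
    -- Parses and compiles, but the conversion fails when it reaches 'bob^'.
    SELECT CAST(val AS int) AS val_as_int
    FROM (VALUES ('1'), ('2'), ('bob^')) AS t(val);
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS error_number, ERROR_MESSAGE() AS error_message;
END CATCH;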
Adding to gbn's post, you can also get locking errors like lock wait timeouts and deadlocks.
If you are referencing #Temp tables, you can get "Invalid object name '#Temp'" errors, because these are unbound until the statement executes.
If you are in READ UNCOMMITTED or WITH (NOLOCK), you can get error: 601 - "Could not continue scan with NOLOCK due to data movement."
If your code runs .NET code, you would probably get exceptions from there.
If your code selects from a remote server, you could get a whole different set of errors about connections.
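To illustrate the locking case, a hedged sketch (dbo.SomeTable is a placeholder, assumed to be locked by another session's open transaction) of a SELECT raising error 1222 instead of waiting indefinitely:

SET LOCK_TIMEOUT 1000;   -- give up on lock waits after 1 second

BEGIN TRY
    SELECT * FROM dbo.SomeTable;   -- placeholder table
END TRY
BEGIN CATCH
    -- 1222 = "Lock request time out period exceeded."
    SELECT ERROR_NUMBER() AS error_number, ERROR_MESSAGE() AS error_message;
END CATCH;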