When do SQL SELECT statements throw exceptions? - tsql

T-SQL here, specifically SQL Server 2008 (literally just upgraded).
Concerning stored procedures and TRY/CATCH.
I was trying to make a list of cases in which a SELECT statement will throw an exception. The ones I can think of are syntax-related (including null variables) and divide by zero. I'm only guessing there is a whole boatload of them for INSERT/ALTER and CREATE/TRUNCATE.
If you happen to know of a good source link, that would be great.
This question came up when I was reading this exhaustive blog post about error handling for SQL server. It's titled for SQL Server 2000, but I think most of it still applies.
Edit:
Sorry, I meant to link this earlier...
http://msdn.microsoft.com/en-us/library/aa175920(v=sql.80).aspx

Outside of compile ("didn't run") errors, you have at least these runtime errors:
Arithmetic errors
These change based on various SET options.
Example: get SQL Server to warn about truncation / rounding.
Overflow errors
Example: one of the rows overflows smallint in some calculation.
CAST errors
Example: you test ISNUMERIC in a WHERE or CASE and then try to cast 'bob^' or 1.23 to int.
See "Why use Select Top 100 Percent?"
You'd always want to use TRY/CATCH anyway, surely...?
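For illustration, here is a minimal sketch (the table and column names are made up) of SELECT statements that fail only at runtime, wrapped in TRY/CATCH. Execution jumps to the CATCH block at the first error, so the later statements never run:

BEGIN TRY
    -- Divide by zero: only fails at runtime, on the rows where Qty happens to be 0
    SELECT OrderId, Total / Qty AS UnitPrice
    FROM dbo.Orders;

    -- Overflow: the multiplication no longer fits in int
    SELECT 2147483647 * 2;

    -- Conversion failure: ISNUMERIC accepts strings that CAST to int rejects
    SELECT CAST('1.23' AS int)
    WHERE ISNUMERIC('1.23') = 1;
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS ErrNum, ERROR_MESSAGE() AS ErrMsg;
END CATCH;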

Adding to gbn's post, you can also get locking errors like lock wait timeouts and deadlocks.
If you are referencing #Temp tables, you can get "Invalid object name '#Temp'" errors, because these are unbound until the statement executes.
If you are in READ UNCOMMITTED or WITH (NOLOCK), you can get error: 601 - "Could not continue scan with NOLOCK due to data movement."
If your code runs .NET code, you would probably get exceptions from there.
If your code selects from a remote server, you could get a whole different set of errors about connections.
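As a rough sketch (the query is just a placeholder), a TRY/CATCH block can inspect ERROR_NUMBER() to tell these cases apart, for example treating deadlocks as retryable:

BEGIN TRY
    SELECT COUNT(*) FROM dbo.Orders;   -- stand-in for whatever SELECT might hit these errors
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 1205           -- deadlock victim
        PRINT 'Deadlock - this SELECT is usually safe to retry';
    ELSE IF ERROR_NUMBER() = 601       -- NOLOCK scan aborted by data movement
        PRINT 'Could not continue scan with NOLOCK due to data movement';
    ELSE
    BEGIN
        DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);        -- re-raise anything unexpected
    END
END CATCH;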

Related

Executing sp_cursoropen causes SSMS to report a severe error

Our program executes a stored procedure via sp_cursoropen. I've managed to extract the exact call via a trace, and when running the code directly in SSMS, we get a severe error:
-- parameter values captured from the trace
declare @p1 int
set @p1=0
declare @p3 int
set @p3=16388
declare @p4 int
set @p4=8196
declare @p5 int
set @p5=0
exec sp_cursoropen @p1 output,N' EXEC [dbo].[EventSearchSP] ''hip'' ',@p3 output,@p4 output,@p5 output
select @p1, @p3, @p4, @p5
When running the above code we get the following error:
Executing SQL directly; no cursor.
Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
This code used to work, and only fairly recently has it started failing. It fails across all databases on our instance (over 100).
When running the code on other servers, the procedure executes correctly and it does not fail.
The actual data returned by the stored procedure is fairly generic, a single column of numbers. I don't believe it's the stored procedure itself, as this problem happens when executing any stored procedure from the client program.
I'm fast running out of ideas for resolving the issue. Does anyone know what could cause this error to suddenly start where it used to work fine?
We've found the problem, and if anyone knows why it is a problem I would love to know, so I can find a way around it.
We recently set XACT_ABORT ON in the connections on SQL Server after doing research, it being generally accepted as best practice. However, with this setting on we get the above error; after turning the setting off again, the procedure began working as before.
Unfortunately we turned XACT_ABORT ON in order to solve another issue, which we now need to find another solution to!
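For anyone trying to reproduce the comparison in SSMS, here is a rough sketch (parameter values copied from the trace above; the comments only restate what we observed, not a confirmed explanation):

set xact_abort on    -- the setting our client connections had been given
declare @p1 int = 0, @p3 int = 16388, @p4 int = 8196, @p5 int = 0
exec sp_cursoropen @p1 output,N' EXEC [dbo].[EventSearchSP] ''hip'' ',@p3 output,@p4 output,@p5 output
-- fails: "A severe error occurred on the current command."

set xact_abort off   -- revert on the same connection
exec sp_cursoropen @p1 output,N' EXEC [dbo].[EventSearchSP] ''hip'' ',@p3 output,@p4 output,@p5 output
select @p1, @p3, @p4, @p5   -- now completes and returns the cursor handle in @p1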

Cached plan must not change result type

Our service team is sometimes getting the error "cached plan must not change result type" when I modify the length of a column or add a new column to the table.
I tried solutions mentioned on Stack Overflow like Postgres: "ERROR: cached plan must not change result type"
I have tried autosave=conservative to resolve this issue, but I am still able to reproduce it. I used the JDBC connection string below:
jdbc-url: jdbc:postgresql://172.16.244.10:5432/testdb?autosave=conservative
Why is this property not working in my case?
Also, I tested with prepareThreshold=0 and it's working fine. But I think it will impact performance because it will never use client-side prepared statements.
I just want to know the best solution to avoid this error.
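For reference, the server-side error itself is easy to reproduce directly in psql; this sketch uses made-up table and statement names, and re-executing a stale prepared statement like this is the kind of failure autosave=conservative is meant to paper over:

-- prepare a statement that selects all columns
PREPARE fetch_items AS SELECT * FROM items;

-- change the row shape of the table
ALTER TABLE items ADD COLUMN description text;

-- re-executing the prepared statement now fails:
-- ERROR:  cached plan must not change result type
EXECUTE fetch_items;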

How to skip to the next iteration in case of an error

In NetLogo BehaviorSpace, if one of the runs throws an error, how do I skip that run and ask NetLogo to proceed with the next run?
Is it even possible?
From the docs,
If you do want spreadsheet output, note that if anything interrupts the experiment, such as a runtime error, running out of memory, or a crash or power outage, no spreadsheet results will be written. For long experiments, you may want to also enable table format as a precaution so that if something happens and you get no spreadsheet output you'll at least get partial table output.
So, I'll assume this isn't possible and the best way to fix this would be to handle the situation where your code has an error. Alternatively, you could use the carefully command to handle the error messages.

SQLState 02000 No row was found for FETCH, UPDATE, or DELETE

I'm running jobs through DataStage with the DELETE then INSERT connector. I have several jobs failing with this error:
DB2_Connector: DB2 reported: SQLSTATE = 02000 Native Error Code = 100, Msg = IBM[CLIDriver][DB2/NT64] SQL01000W No row was found for FETCH, UPDATE, or DELETE
When I run the delete statement in Data Studio directly in DB2, it gives this same error so I know it's a DB2 error, not a Datastage error.
Is there any way to suppress the message in DataStage, or, when I run the statement in DB2, is there any way I can keep that message from coming up? It's stopping my DS jobs now with a fatal error and not continuing to load.
There has got to be a way to turn off the message. I know that in SQL Server, if no rows are found, it does not give this error; it just says zero or doesn't return records. But in DB2 this error is coming up, and I'm not sure if there is a way to turn it off.
First of all, you seem to be confused about precisely what an error is, and what a message is.
An error is when something goes wrong.
A message is when some piece of software is kind enough to let you know that something went wrong.
From this it follows that suppressing a message has no bearing whatsoever on the actual error. Your software is not failing because of the message, your software is failing because something is going wrong. Receiving a message about it is actually a good thing: the alternative would be your software failing without you being given any clue whatsoever as to what is going wrong.
Suppressing or otherwise ignoring errors is like hiding your head in the sand: you are still going to end up as a meal.
So, what you need to make go away is the error, not the message.
Which means that you have to figure out what you did wrong.
Luckily, you have the message giving you a hint as to what you did wrong, though you have to keep in mind that messages are sometimes misleading.
SQLState 02000 is not an error, it is a warning. (And note that DB2_Connector is not saying ERROR!!!1!:, it is saying DB2 reported:.) Luckily JDBC issues warnings when it detects situations that might be indicative of errors; there is a lot of software out there that ignores JDBC warnings, (essentially hiding your head in the sand for you, how nice,) but luckily DB2_Connector reports them.
What this means is that one of two things is going wrong:
Either your assumption that it is okay if no rows are found is wrong, and the fact that no rows were found is the cause of your problem, which means that you have to somehow make sure that some rows are found, or
Your assumption that it is okay if no rows are found is correct, in which case the warning reported has absolutely nothing to do with the problem at hand, so it can safely be ignored, and you have to look at the problem elsewhere.
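If you land on the second case, where the no-rows situation really is acceptable, one option is to keep the condition from ever reaching DataStage by absorbing it in a NOT FOUND handler. This is only a sketch, assuming DB2 for LUW and a made-up table name and delete criterion:

-- Run as a single statement (from the CLP you would need a
-- non-';' statement terminator, e.g. db2 -td@)
BEGIN
  DECLARE v_dummy INTEGER DEFAULT 0;
  DECLARE CONTINUE HANDLER FOR NOT FOUND
    SET v_dummy = 1;                 -- swallow "no row was found" and carry on
  DELETE FROM myschema.mytable
  WHERE load_date = CURRENT DATE;    -- hypothetical delete criteria
END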

verbose error information for SQL Server bulk insert

I'm using SQL Server Express 2008 and I'm doing a bulk insert of data. I'd like to have more verbose error messages, ideally printing the data that failed to be inserted. Is that possible?
It is possible, but it can require a lot of effort to do this; I recall working on a subsystem for a few days before I got it to do everything it needed to do. I believe this is one of the (few, but still too many) places where, upon hitting an error, SQL Server returns two error messages back-to-back: the second message is vague and indistinct, and all the error-handling functions can only access info pertaining to that second lame message, not the first one where the real info is. I don't have the code in front of me, but the logic was something like:
Use the "errorfile" option on BULK INSERT to generate an error file IF the bulk insert fails
TRY/CATCH the bulk insert call, and carefully check the returned error number
If the error is the appropriate type, open and read the contents of the file to determine what went wrong where, and build your error message around that
Awkward as anything, but ultimately it worked out pretty well, so long as the drive + path + filename you were inserting from didn't exceed 128 characters (in SQL 2005, and I just bet they didn't fix that in 2008). I do not count BULK INSERT as one of my favorite commands.
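Roughly, the shape of it (a sketch only; the file paths, table name, and format options are made up):

BEGIN TRY
    BULK INSERT dbo.TargetTable
    FROM 'C:\loads\data.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        ERRORFILE       = 'C:\loads\data.csv.rejects'   -- rejected rows land here
    );
END TRY
BEGIN CATCH
    DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();

    -- The rejected rows were written to the ERRORFILE above (SQL Server also
    -- writes a companion file next to it describing why each row failed);
    -- read them back and fold them into a more useful error message.
    SELECT BulkColumn AS RejectedRows
    FROM OPENROWSET(BULK 'C:\loads\data.csv.rejects', SINGLE_CLOB) AS rejects;

    RAISERROR('Bulk insert failed: %s', 16, 1, @msg);
END CATCH;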