How does ANSI_NULLS work in T-SQL?

SET ANSI_NULLS OFF seems to give different results in TSQL depending on whether you're comparing a field from a table or a value. Can anyone help me understand why the last 2 of my queries give no results? I'm not looking for a solution, just an explanation.
select 1 as 'Col' into #a
select NULL as 'Col' into #b
--This query gives results, as expected.
SET ANSI_NULLS OFF
select * from #b
where NULL = Col
--This query gives results, as expected.
SET ANSI_NULLS OFF
select * from #a
where NULL != Col
--This workaround gives results, too.
select * from #a a, #b b
where isnull(a.Col, '') != isnull(b.Col, '')
--This query gives no results, why?
SET ANSI_NULLS OFF
select * from #a a, #b b
where a.Col != b.Col
--This query gives no results, why?
SET ANSI_NULLS OFF
select * from #a a, #b b
where b.Col != a.Col

The reason the last two queries return nothing is that SET ANSI_NULLS ON/OFF only applies when one side of the comparison is a variable or a literal NULL. It does not apply when you are comparing two column values. From the BOL:
SET ANSI_NULLS ON affects a comparison only if one of the operands of the comparison is either a variable that is NULL or a literal NULL. If both sides of the comparison are columns or compound expressions, the setting does not affect the comparison.
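A minimal sketch against the question's temp tables shows the distinction (the variable @v is my addition):
SET ANSI_NULLS OFF
DECLARE @v int -- NULL by default
--One operand is a NULL variable: the setting applies, the NULL row is returned.
select * from #b where Col = @v
--Both operands are columns: the setting does not apply, no rows are returned.
select * from #a a, #b b where a.Col != b.Col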

Anything compared to a NULL value yields UNKNOWN, so the row is filtered out. Even comparing two NULL values behaves this way, and != is no exception, because of the (IMHO) stupid handling of NULL.
That said, the != queries could be rewritten with NOT EXISTS (a plain NOT IN would trip over the NULL in #b.Col for exactly the same reason):
select * from #a a where not exists (select 1 from #b b where b.Col = a.Col)
The last query is identical to the second to last query as the order of the comparison doesn't matter.
Incidentally, your workaround works simply because you are testing for a null value in the #b.Col column and explicitly replacing it with '' (which, the columns being int, is implicitly converted to 0), so the comparison happens between two non-NULL values. An alternative way of writing that would be:
select * from #a a, #b b
where a.Col != COALESCE(b.Col, '')

Update with ISNULL and operation

The original query looks like this:
UPDATE reponse_question_finale t1, reponse_question_finale t2
SET t1.nb_question_repondu = (9-(ISNULL(t1.valeur_question_4)+ISNULL(t1.valeur_question_6)+ISNULL(t1.valeur_question_7)+ISNULL(t1.valeur_question_9)))
WHERE t1.APPLICATION = t2.APPLICATION;
I know you cannot update two tables in a single query, so I tried this:
UPDATE reponse_question_finale t1
SET nb_question_repondu = (9-(COALESCE(t1.valeur_question_4,'')::int+COALESCE(t1.valeur_question_6,'')::int+COALESCE(t1.valeur_question_7)::int+COALESCE(t1.valeur_question_9,'')::int))
WHERE t1.APPLICATION = t1.APPLICATION;
But this query gave me an error: invalid input syntax for integer: ""
I saw that the Postgres equivalent of MySQL's ISNULL() is COALESCE(), so I think I'm on the right track here.
I also know you cannot add varchar to varchar, which is why I tried to cast to integer. I'm not sure whether I put the parentheses in the right places, and judging from the error, maybe I cannot cast to int inside a COALESCE.
One last thing: I could certainly do a correlated sub-select to update my two tables, but I'm a little lost at this point.
The output must be an integer matching the number of questions answered to a backup survey.
Any thoughts?
Thanks.
coalesce() returns the first non-null value from the list supplied. So, if the column value is null, the expression COALESCE(t1.valeur_question_4,'') returns an empty string, and an empty string cannot be cast to an integer; that's why you get the error.
But it seems you want something completely different: you want to check whether each column is null (or empty) and subtract 1 from the total if it is, in order to count the number of answered questions.
To return 1 if a value is null (or empty) and 0 otherwise, you can use:
(nullif(valeur_question_4, '') is null)::int
nullif returns null if the first value equals the second. The IS NULL condition returns a boolean (something that MySQL doesn't have), and that boolean can be cast to an integer (false is cast to 0 and true to 1).
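A quick sanity check of that building block (the sample values are made up):
select (nullif('x', '') is null)::int as answered, -- 0
       (nullif('', '') is null)::int as empty,     -- 1
       (nullif(null, '') is null)::int as missing; -- 1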
So the whole expression should be:
nb_question_repondu = 9 - (
(nullif(t1.valeur_question_4,'') is null)::int
+ (nullif(t1.valeur_question_6,'') is null)::int
+ (nullif(t1.valeur_question_7,'') is null)::int
+ (nullif(t1.valeur_question_9,'') is null)::int
)
Another option is to unpivot the columns and do a select on them in a sub-select:
update reponse_question_finale
set nb_question_repondu = (select count(*)
                           from (values
                                    (valeur_question_4),
                                    (valeur_question_6),
                                    (valeur_question_7),
                                    (valeur_question_9)
                                ) as t(q)
                           where nullif(trim(q),'') is not null);
Adding more columns to consider is quite easy then, as you just need to add a single line to the values() clause.
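To see the counting sub-select in isolation, here it is over some made-up answer values:
select count(*)
from (values ('yes'), (null), (''), ('42')) as t(q)
where nullif(trim(q), '') is not null;
-- returns 2: only 'yes' and '42' count as answered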

postgres `order by` argument type

What is the argument type for the order by clause in Postgresql?
I came across a very strange behaviour (using Postgresql 9.5). Namely, the query
select * from unnest(array[1,4,3,2]) as x order by 1;
produces 1,2,3,4 as expected. However the query
select * from unnest(array[1,4,3,2]) as x order by 1::int;
produces 1,4,3,2, which seems strange. Similarly, whenever I replace 1::int with some function (e.g. greatest(0,1)) or even a case expression, the results come back unordered, contrary to what I would expect.
So which type should an argument of order by have, and how do I get the expected behaviour?
This is expected (and documented) behaviour:
A sort_expression can also be the column label or number of an output column
So the expression:
order by 1
sorts by the first column of the result set (as defined by the SQL standard)
However the expression:
order by 1::int
sorts by the constant value 1; it's essentially the same as sorting by any other constant expression, e.g.:
order by 'foo'::text
(a bare string literal like 'foo' is rejected by Postgres with "non-integer constant in ORDER BY", but any typed constant behaves as described)
Because a constant order by value makes all rows compare equal, they aren't really sorted; they come back in whatever order the executor happens to produce them.
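The unnest example from the question makes this visible; 1 + 0 is used below to force an expression rather than an output-column number:
select * from unnest(array[1,4,3,2]) as x order by 1;     -- output column 1: returns 1,2,3,4
select * from unnest(array[1,4,3,2]) as x order by 1 + 0; -- constant expression: stays 1,4,3,2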
To sort by an expression, just use that:
order by
case
when some_column = 'foo' then 1
when some_column = 'bar' then 2
else 3
end
The above sorts the result based on the result of the case expression.
Actually I have a function with an integer argument which indicates the column to be used in the order by clause.
In the case when all columns are of the same type, this can work:
SELECT ....
ORDER BY
CASE function_to_get_a_column_number()
WHEN 1 THEN column1
WHEN 2 THEN column2
.....
WHEN 1235 THEN column1235
END
If columns are of different types, you can try:
SELECT ....
ORDER BY
CASE function_to_get_a_column_number()
WHEN 1 THEN column1::varchar
WHEN 2 THEN column2::varchar
.....
WHEN 1235 THEN column1235::varchar
END
But these "workarounds" are horrible. You need some other approach than the function returning a column number.
Maybe dynamic SQL?
I would say that dynamic SQL (thanks @kordirko and the others for the hints) is the best solution to the problem I originally had in mind:
create temp table my_data (
id serial,
val text
);
insert into my_data(id, val)
values (default, 'a'), (default, 'c'), (default, 'd'), (default, 'b');
create function fetch_my_data(col text)
returns setof my_data as
$f$
begin
return query execute $$
select * from my_data
order by $$|| quote_ident(col);
end
$f$ language plpgsql;
select * from fetch_my_data('val'); -- order by val
select * from fetch_my_data('id'); -- order by id
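As a side note, quote_ident is what keeps this injection-safe: a malicious argument is quoted as a single identifier and simply fails to resolve (hypothetical call):
select * from fetch_my_data('id; drop table my_data');
-- ERROR:  column "id; drop table my_data" does not exist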
In the beginning I thought this could be achieved using a case expression in the argument of the order by clause - the sort_expression. And here comes the tricky part which confused me: when sort_expression is a kind of identifier (the name of a column or the number of a column), the corresponding column is used when ordering the results. But when sort_expression is some value, we actually order the results by that value itself (computed for each row). This is @a_horse_with_no_name's answer rephrased.
So when I queried ... order by 1::int, in a way I have assigned value 1 to each row and then tried to sort an array of ones, which clearly is useless.
There are some workarounds without dynamic queries, but they require writing more code and do not seem to have any significant advantages.

Initializing multiple variables with one SELECT statement

It is a well-known practice to do this:
DECLARE @A INT
       ,@B INT
SELECT @A = [Column01]
      ,@B = [Column02]
FROM [dbo].[data]
I am wondering whether it is true for the following code:
DECLARE @A INT
       ,@B INT
SELECT @A = [Column01]
      ,@B = @A + [Column02]
FROM [dbo].[data]
that @A always gets its value before @B?
In my real case, [Column01] and [Column02] are expressions over many columns and T-SQL functions, and using @A as a reference simplifies the initialization of @B.

that @A always gets its value before @B?
The answer is no.
Quoted from Itzik Ben-Gan's Microsoft SQL Server 2012 T-SQL Fundamentals
SQL supports a concept called all-at-once operations, which means that all expressions that appear in the same logical query processing phase are evaluated logically at the same point in time.
Please also check All-at-Once Operations in T-SQL
In my experience so far this has always worked 'top down' for me. However, I too feel a bit queasy whenever I write something like that and have been known to split it into two separate commands when I had a more paranoid day, maybe I should have more of those =)
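For reference, the 'paranoid' split version, which does guarantee @A is assigned before @B is computed (at the cost of scanning twice; note that @B then uses the single final value of @A rather than a per-row value):
DECLARE @A INT
       ,@B INT
SELECT @A = [Column01]
FROM [dbo].[data]
SELECT @B = @A + [Column02]
FROM [dbo].[data]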
That said, the question could be extended to the following syntaxes, which I 'assume from experience' to be correct, but again I wonder if anyone has a more definite answer:
DECLARE @a int,
        @b int,
        @x int
SELECT @a = (CASE name WHEN 'A' THEN value ELSE @a END),
       @b = (CASE name WHEN 'B' THEN value ELSE @b END)
FROM myTable
WHERE name IN ('A', 'B')
which gives the same result as below but is quite a bit faster, especially if you have to fetch many of them
SELECT @a = value FROM myTable WHERE name = 'A'
SELECT @b = value FROM myTable WHERE name = 'B'
Or, this one:
DECLARE @a int = 8,
        @b int = 5,
        @x int
UPDATE myTable
SET @x = @a * leftField + @b * rightField,
    mySum = @x,
    mySquare = Power(@x, 2)
WHERE ...
Here I use @x to calculate an intermediate value for a given record and then use said value later on to set a field or as part of a formula again. (I agree it's a stupid example, but right now I can't come up with something more sensible.)
Or this one that now seems to be generally accepted as 'OK', but I do remember the days when this would go haywire if you'd add an ORDER BY to it, which is often a requirement.
DECLARE @a varchar(max)
SELECT @a = (CASE WHEN @a IS NULL THEN name ELSE @a + ',' + name END)
FROM sys.objects
WHERE type = 'S'
SELECT @a
UPDATE: these things (likely) usually work fine as long as the datasets are small, but odd things start to happen when the size of the data grows and the QO decides to use a different plan, multi-threading, etc. So I tried to 'break' things by setting up a largish table that would no longer fit in memory and then seeing what would happen. I took a simple example that could easily be split into multiple threads. The result can be found here, but you'll need to adapt it to your hardware, of course (please don't bring sqlFiddle to its knees!). The results so far are that things keep working, but the query plans differ depending on the query we run (with or without @x and @y)!

SQL invalid conversion return null instead of throwing error

I have a table with a varchar column, and I want to find values that match a certain number. So let's say that column contains the following entries (except with millions of rows in real life):
123456789012
2345678
3456
23 45
713?2
00123456789012
So I decide I want all the rows which are numerically 123456789012 and write a statement that looks something like this:
SELECT * FROM MyTable WHERE CAST(MyColumn as bigint) = 123456789012
It should return the first and last row, but instead the whole query blows up because it can't convert the "23 45" and "713?2" to bigint.
Is there another way to do the conversion that will return NULL for values that can't convert?
SQL Server does NOT guarantee boolean operator short-circuit; see On SQL Server boolean operator short-circuit. So all solutions using ISNUMERIC(...) AND CAST(...) are fundamentally flawed (they may work, but they can arbitrarily fail later depending on the generated plan). A better solution is using CASE, as Thomas suggests: CASE ISNUMERIC(...) WHEN 1 THEN CAST(...) ELSE NULL END. But, as gbn pointed out, ISNUMERIC is notoriously finicky in identifying what 'numeric' means, and there are many cases where one would expect it to return 0 but it returns 1. So mix the CASE with LIKE:
CASE WHEN MyRow NOT LIKE '%[^0-9]%' THEN CAST(MyRow as bigint) ELSE NULL END
But the real problem is that if you have millions of rows and you have to search them like this, you'll always end up scanning end-to-end, since the expression is not SARG-able (no matter how we rewrite it). The real issue here is data purity, and it should be addressed at the appropriate level, where the data is populated. Another thing to consider is whether it is possible to create a persisted computed column with this expression and index it, so that the non-numeric rows (which compute to NULL) can be skipped. That would speed things up a little.
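A sketch of that computed-column idea (the column and index names are my inventions; note that SQL Server does not allow filtered indexes on computed columns, so a plain index is shown):
ALTER TABLE MyTable ADD MyColumnAsBigint AS
    (CASE WHEN MyColumn NOT LIKE '%[^0-9]%' THEN CAST(MyColumn AS bigint) END) PERSISTED; -- assumes the digit strings fit in a bigint
CREATE INDEX IX_MyTable_MyColumnAsBigint ON MyTable (MyColumnAsBigint);
-- non-numeric rows simply carry NULL, and the search becomes an index seek
SELECT * FROM MyTable WHERE MyColumnAsBigint = 123456789012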
If you are using SQL Server 2012 you can use the two new functions:
TRY_CAST()
TRY_CONVERT()
Both are equivalent: they return the value cast to the specified data type if the cast succeeds; otherwise, they return NULL. The only difference is that CONVERT is SQL Server specific while CAST is ANSI, so using TRY_CAST makes your code more portable (although I'm not sure whether any other database vendor implements TRY_CAST).
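Applied to the question's query, that would look like:
SELECT * FROM MyTable
WHERE TRY_CAST(MyColumn AS bigint) = 123456789012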
ISNUMERIC will accept empty string and values like 1.23 or 5E-04 so could be unreliable.
And you don't know what order things will be evaluated in so it could still fail (SQL is declarative, not procedural, so the WHERE clause probably won't be evaluated left to right)
So:
you want to accept values that consist only of the characters 0-9
you need to materialise the "number" filter so it's applied before CAST
Something like:
SELECT
*
FROM
(
SELECT TOP 2000000000 *
FROM MyTable
WHERE MyColumn NOT LIKE '%[^0-9]%' --double negative rejects anything except 0-9
ORDER BY MyColumn
) foo
WHERE
CAST(MyColumn as bigint) = 123456789012 --applied after number check
Edit: quick example that fails.
CREATE TABLE #foo (bigintstring varchar(100))
INSERT #foo (bigintstring )VALUES ('1.23')
INSERT #foo (bigintstring )VALUES ('1 23')
INSERT #foo (bigintstring )VALUES ('123')
SELECT * FROM #foo
WHERE
ISNUMERIC(bigintstring) = 1
AND
CAST(bigintstring AS bigint) = 123
SELECT *
FROM MyTable
WHERE ISNUMERIC(MyRow) = 1
AND CAST(MyRow as float) = 123456789012
The ISNUMERIC() function should give you what you need.
SELECT * FROM MyTable
WHERE ISNUMERIC(MyRow) = 1
AND CAST(MyRow as bigint) = 123456789012
And to add a case statement like Thomas suggested:
SELECT * FROM MyTable
WHERE CASE ISNUMERIC(MyRow)
          WHEN 1 THEN CAST(MyRow as bigint)
          ELSE NULL
      END = 123456789012
http://msdn.microsoft.com/en-us/library/ms186272.aspx
SELECT *
FROM MyTable
WHERE (ISNUMERIC(MyColumn) = 1) AND (CAST(MyColumn as bigint) = 123456789012)
Additionally, you can use a CASE expression in order to get null values for the rows that cannot be cast.
SELECT
CASE
WHEN (ISNUMERIC(MyColumn) = 1) THEN CAST(MyColumn as bigint)
ELSE NULL
END AS 'MyColumnAsBigInt'
FROM tableName
If you require additional filtering, for numerics which are not valid to be cast to bigint, you can use the following instead of ISNUMERIC:
PATINDEX('%[^0-9]%', MyColumn) = 0
If you need decimal values instead of integers, cast to float instead and change the pattern to '%[^0-9.]%'
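Wrapped in a CASE (so the cast cannot be reordered ahead of the digit test), the PATINDEX variant of the original query might look like this sketch:
SELECT *
FROM MyTable
WHERE CASE WHEN PATINDEX('%[^0-9]%', MyColumn) = 0
           THEN CAST(MyColumn AS bigint) -- assumes the digit strings fit in a bigint
      END = 123456789012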

How can I query 'between' numeric data on a not numeric field?

I've got a query that I've just found in the database that is failing and causing a report to fall over. The basic gist of the query:
Select *
From table
Where IsNull(myField, '') <> ''
And IsNumeric(myField) = 1
And Convert(int, myField) Between @StartRange And @EndRange
Now, myField doesn't contain numeric data in all the rows [it is of nvarchar type]... but this query was obviously designed such that it only cares about rows where the data in this field is numeric.
The problem with this is that T-SQL (near as I understand) doesn't short-circuit the Where clause, thus causing it to blow up on records where the data is not numeric, with the following error:
Msg 245, Level 16, State 1, Line 1
Conversion failed when converting the nvarchar value '/A' to data type int.
Short of dumping all the rows where myField is numeric into a temporary table and then querying that for rows where the field is in the specified range, what can I do that is optimal?
My first parse purely to attempt to analyse the returned data and see what was going on was:
Select *
From (
Select *
From table
Where IsNull(myField, '') <> ''
And IsNumeric(myField) = 1
) t0
Where Convert(int, myField) Between @StartRange And @EndRange
But I get the same error I did for the first query which I'm not sure I understand as I'm not converting any data that shouldn't be numeric at this point. The subquery should only have returned rows where myField contains numeric data.
Maybe I need my morning tea, but does this make sense to anyone? Another set of eyes would help.
Thanks in advance
IsNumeric only tells you that the string can be converted to one of the numeric types in SQL Server. It may be able to convert it to money, or to a float, but may not be able to convert it to an int.
Change your
IsNumeric(myField) = 1
to be:
not myField like '%[^0-9]%' and LEN(myField) < 9
(that is, you want myField to contain only digits, and fit in an int)
Edit examples:
select ISNUMERIC('.'), ISNUMERIC('£'), ISNUMERIC('1d9')
result:
----------- ----------- -----------
1           1           1
(1 row(s) affected)
You'd have to force SQL to evaluate the expressions in a certain order.
Here is one solution
Select *
From (
    Select TOP 2000000000 *
    From table
    Where IsNumeric(myField) = 1
    And IsNull(myField, '') <> ''
    ORDER BY Key
) t0
Where Convert(int, myField) Between @StartRange And @EndRange
and another
Select *
From table
Where
CASE
WHEN IsNumeric(myField) = 1 And IsNull(myField, '') <> ''
THEN Convert(int, myField) ELSE @StartRange-1
END Between @StartRange And @EndRange
The first technique is "intermediate materialisation": it forces a sort on a working table.
The second relies on CASE evaluation order being guaranteed.
Neither is pretty or whizzy
SQL is declarative: you tell the optimiser what you want, not how to do it. The tricks above force things to be done in a certain order.
Not sure if this helps you, but I did read somewhere that an invalid conversion using CONVERT will always generate an error in SQL Server, so I think it would be better to use CASE in the where clause to avoid having CONVERT run on all rows.
Use a CASE statement.
declare @StartRange int
declare @EndRange int
set @StartRange = 1
set @EndRange = 3
select *
from TestData
WHERE Case WHEN ISNUMERIC(Value) = 0 THEN 0
WHEN Value IS NULL THEN 0
WHEN Value = '' THEN 0
WHEN CONVERT(int, Value) BETWEEN @StartRange AND @EndRange THEN 1
END = 1