SQL invalid conversion: return NULL instead of throwing an error - T-SQL

I have a table with a varchar column, and I want to find values that match a certain number. So let's say that column contains the following entries (except with millions of rows in real life):
123456789012
2345678
3456
23 45
713?2
00123456789012
So I decide I want all the rows which are numerically 123456789012 and write a statement that looks something like this:
SELECT * FROM MyTable WHERE CAST(MyColumn as bigint) = 123456789012
It should return the first and last row, but instead the whole query blows up because it can't convert the "23 45" and "713?2" to bigint.
Is there another way to do the conversion that will return NULL for values that can't convert?

SQL Server does NOT guarantee boolean operator short-circuit; see On SQL Server boolean operator short-circuit. So all solutions using ISNUMERIC(...) AND CAST(...) are fundamentally flawed (they may work, but they can arbitrarily fail later depending on the generated plan). A better solution is using CASE, as Thomas suggests: CASE ISNUMERIC(...) WHEN 1 THEN CAST(...) ELSE NULL END. But, as gbn pointed out, ISNUMERIC is notoriously finicky in identifying what 'numeric' means, and there are many cases where one would expect it to return 0 but it returns 1. So mix the CASE with LIKE:
CASE WHEN MyRow NOT LIKE '%[^0-9]%' THEN CAST(MyRow as bigint) ELSE NULL END
But the real problem is that if you have millions of rows and you have to search them like this, you'll always end up scanning end-to-end, since the expression is not SARG-able (no matter how we rewrite it). The real issue here is data purity, and it should be addressed at the appropriate level, where the data is populated. Another thing to consider is whether it is possible to create a persisted computed column with this expression and create an index on it which eliminates the NULLs (i.e. non-numeric values). That would speed things up a little, as sketched below.
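A minimal sketch of that idea (the computed column and index names are mine, not from the question):

-- persisted computed column: NULL for anything that is not all digits
-- (assumes the digit-only values are short enough to fit in a bigint)
ALTER TABLE MyTable ADD MyColumnAsBigint AS
    (CASE WHEN MyColumn NOT LIKE '%[^0-9]%'
          THEN CAST(MyColumn AS bigint) END) PERSISTED;

CREATE INDEX IX_MyTable_MyColumnAsBigint
    ON MyTable (MyColumnAsBigint);
-- (a filtered index excluding the NULLs would be narrower still, where the
--  engine permits filtering on a computed column)

-- equality searches can now seek on the index instead of scanning:
SELECT * FROM MyTable WHERE MyColumnAsBigint = 123456789012;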

If you are using SQL Server 2012 you can use the 2 new methods:
TRY_CAST()
TRY_CONVERT()
Both methods are equivalent. They return a value cast to the specified data type if the cast succeeds; otherwise, they return NULL. The only difference is that CONVERT is SQL Server specific while CAST is ANSI, so using TRY_CAST will make your code more portable (although I'm not sure whether any other database provider implements TRY_CAST).
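Applied to the query from the question, that would be simply:

-- rows that cannot be converted yield NULL, which never equals the
-- constant, so they drop out instead of raising a conversion error
SELECT * FROM MyTable
WHERE TRY_CAST(MyColumn AS bigint) = 123456789012;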

ISNUMERIC will accept values like '.', 1.23 or 5E-04, so it could be unreliable.
And you don't know what order things will be evaluated in, so it could still fail (SQL is declarative, not procedural, so the WHERE clause won't necessarily be evaluated left to right).
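A quick illustration of the first point (my example, not from the original answer):

-- all of these return 1, yet none of the values casts cleanly to bigint
SELECT ISNUMERIC('.'), ISNUMERIC('1.23'), ISNUMERIC('5E-04');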
So:
you want to accept value that consist only of the characters 0-9
you need to materialise the "number" filter so it's applied before CAST
Something like:
SELECT *
FROM
(
    SELECT TOP 2000000000 *
    FROM MyTable
    WHERE MyColumn NOT LIKE '%[^0-9]%' --double negative rejects anything except 0-9
    ORDER BY MyColumn
) foo
WHERE
    CAST(MyColumn as bigint) = 123456789012 --applied after the number check
Edit: quick example that fails.
CREATE TABLE #foo (bigintstring varchar(100))
INSERT #foo (bigintstring) VALUES ('1.23')
INSERT #foo (bigintstring) VALUES ('1 23')
INSERT #foo (bigintstring) VALUES ('123')

SELECT * FROM #foo
WHERE
    ISNUMERIC(bigintstring) = 1
    AND
    CAST(bigintstring AS bigint) = 123
-- errors out: ISNUMERIC('1.23') = 1, but '1.23' cannot be cast to bigint

SELECT *
FROM MyTable
WHERE ISNUMERIC(MyRow) = 1
AND CAST(MyRow as float) = 123456789012

The ISNUMERIC() function should give you what you need.
SELECT * FROM MyTable
WHERE ISNUMERIC(MyRow) = 1
AND CAST(MyRow as bigint) = 123456789012
And to add a case statement like Thomas suggested:
SELECT * FROM MyTable
WHERE CASE ISNUMERIC(MyRow)
    WHEN 1 THEN CAST(MyRow as bigint)
    ELSE NULL
END = 123456789012
http://msdn.microsoft.com/en-us/library/ms186272.aspx

SELECT *
FROM MyTable
WHERE (ISNUMERIC(MyColumn) = 1) AND (CAST(MyColumn as bigint) = 123456789012)
Additionally you can use a CASE statement in order to get null values.
SELECT
    CASE
        WHEN (ISNUMERIC(MyColumn) = 1) THEN CAST(MyColumn as bigint)
        ELSE NULL
    END AS 'MyColumnAsBigInt'
FROM tableName
If you require additional filtering, for numerics which are not valid to be cast to bigint, you can use the following instead of ISNUMERIC:
PATINDEX('%[^0-9]%', MyColumn) = 0
If you need decimal values instead of integers, cast to float instead and change the pattern to '%[^0-9.]%'
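A hedged sketch combining both hints (my code, not the answerer's):

SELECT
    CASE
        WHEN PATINDEX('%[^0-9.]%', MyColumn) = 0 THEN CAST(MyColumn AS float)
        ELSE NULL
    END AS 'MyColumnAsFloat'
FROM tableName
-- caveat: a value like '1.2.3' passes the pattern but still breaks the CAST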

Cast to int instead of decimal?

I have a field that has up to 9 comma-separated values, each of which has a string value and a numeric value separated by a colon. After parsing them all, some of the values between 0 and 1 are being set to an integer rather than the numeric they were cast to. The problem is obviously related to the data type, but I am unsure what is causing it or how to fix it. The problem only exists in the case statement; the split_part function seems to be working perfectly.
Things I have tried:
nvl(split_part(one,':',2),0) => COALESCE types text and integer cannot be matched
nvl(split_part(one,':',2)::numeric,0) => Invalid input syntax for type numeric
numerous other cast/convert variations
(CASE WHEN split_part(one,':',2) = '' THEN 0::numeric ELSE split_part(one,':',2)::numeric END)::numeric => runs but returns an int value of 0
When using the split_part function outside of the case statement, it does work correctly. However, I need the result to be zero for null values.
split_part(one,':',2) => 0.02068278096187390979 (expected result)
When running the code above I get zero but expect 0.02068278096187390979
Field "one" has the following value 'xyz: 0.02068278096187390979' before the split_part function.
EXAMPLE:
create table test(one varchar);
insert into test values ('XYZ: 0.50000000000000000000');

select
    one,
    split_part(one, ':', 2) as correct_value_for_those_that_are_not_null,
    case
        when split_part(one, ':', 2) = '' then null
        else split_part(one, ':', 2)::numeric
    end::numeric as this_one_is_the_problem
from test;
However, I need the result to be zero for null values.
Your example does not deal with NULL values at all, though; it only addresses the empty string ('').
To replace either with 0 reliably, efficiently and without casting issues:
SELECT part1, CASE WHEN part2 <> '' THEN part2::numeric ELSE numeric '0' END AS part2
FROM (
SELECT split_part(one, ':', 1) AS part1
, split_part(one, ':', 2) AS part2
FROM test
) sub;
See:
Best way to check for "empty or null value"
Also note that all branches of a SQL CASE expression must agree on a common data type. There have been minor adjustments to the logic that determines the resulting type in the past, so the version of Postgres may play a role in corner cases. I don't recall the details now.
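A tiny illustration of the common-type rule (my example):

-- the untyped literal 'foo' must be resolved to integer to match the
-- other branch, which fails at parse time:
select case when true then 1 else 'foo' end;
-- ERROR:  invalid input syntax for type integer: "foo"

-- numeric branches are widened to a common type instead:
select pg_typeof(case when true then 1 else 1.5 end);  -- numeric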
nvl() is not a Postgres function. You probably meant COALESCE. The manual:
This SQL-standard function provides capabilities similar to NVL and IFNULL, which are used in some other database systems.

Update with ISNULL and operation

The original query looks like this:
UPDATE reponse_question_finale t1, reponse_question_finale t2
SET t1.nb_question_repondu = (9 - (ISNULL(t1.valeur_question_4) + ISNULL(t1.valeur_question_6) + ISNULL(t1.valeur_question_7) + ISNULL(t1.valeur_question_9)))
WHERE t1.APPLICATION = t2.APPLICATION;
I know you cannot update 2 tables in a single query, so I tried this:
UPDATE reponse_question_finale t1
SET nb_question_repondu = (9-(COALESCE(t1.valeur_question_4,'')::int+COALESCE(t1.valeur_question_6,'')::int+COALESCE(t1.valeur_question_7)::int+COALESCE(t1.valeur_question_9,'')::int))
WHERE t1.APPLICATION = t1.APPLICATION;
But this query gave me an error: invalid input syntax for integer: ""
I saw that the Postgres equivalent to MySQL's ISNULL() is COALESCE(), so I think I'm on the right track here.
I also know you cannot add varchar to varchar, so I tried casting to integer to do the addition. I'm not sure whether I cast it correctly, with the parentheses in the right places; given the error, maybe I cannot cast to int with COALESCE.
Lastly, I can certainly do a correlated sub-select to update my two tables, but I'm a little lost at this point.
The output must be an integer matching the number of questions answered to a backup survey.
Any thoughts?
Thanks.
coalesce() returns the first non-null value from the list supplied. So, if the column value is null, the expression COALESCE(t1.valeur_question_4, '') returns an empty string, and that's why you get the error.
But it seems you want something completely different: you want to check whether the column is null (or empty), and subtract accordingly, to count the number of answered questions.
To return 1 if a value is null (or empty) and 0 if it isn't, you can use:
(nullif(valeur_question_4, '') is null)::int
nullif returns null if the first value equals the second. The IS NULL condition returns a boolean (something that MySQL doesn't have), and that can be cast to an integer (false will be cast to 0 and true to 1).
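For example (my illustration, not part of the original answer):

select (nullif('', '') is null)::int         as empty_string -- 1 (unanswered)
     , (nullif(null::text, '') is null)::int as null_value   -- 1 (unanswered)
     , (nullif('yes', '') is null)::int      as answered;    -- 0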
So the whole expression should be:
nb_question_repondu = 9 - (
      (nullif(t1.valeur_question_4, '') is null)::int
    + (nullif(t1.valeur_question_6, '') is null)::int
    + (nullif(t1.valeur_question_7, '') is null)::int
    + (nullif(t1.valeur_question_9, '') is null)::int
)
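That is, the full statement would be something like this (no self-join needed, since the computation only uses columns of the row being updated):

update reponse_question_finale
set nb_question_repondu = 9 - (
      (nullif(valeur_question_4, '') is null)::int
    + (nullif(valeur_question_6, '') is null)::int
    + (nullif(valeur_question_7, '') is null)::int
    + (nullif(valeur_question_9, '') is null)::int
);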
Another option is to unpivot the columns and do a select on them in a sub-select:
update reponse_question_finale
set nb_question_repondu = (select count(*)
                           from (
                               values
                                   (valeur_question_4),
                                   (valeur_question_6),
                                   (valeur_question_7),
                                   (valeur_question_9)
                           ) as t(q)
                           where nullif(trim(q),'') is not null);
Adding more columns to be considered is quite easy then, as you just need to add a single line to the values() clause.

postgres `order by` argument type

What is the argument type for the order by clause in Postgresql?
I came across a very strange behaviour (using Postgresql 9.5). Namely, the query
select * from unnest(array[1,4,3,2]) as x order by 1;
produces 1,2,3,4 as expected. However the query
select * from unnest(array[1,4,3,2]) as x order by 1::int;
produces 1,4,3,2, which seems strange. Similarly, whenever I replace 1::int with some function (e.g. greatest(0,1)) or even a case expression, the results are unordered (contrary to what I would expect).
So which type should an argument of order by have, and how do I get the expected behaviour?
This is expected (and documented) behaviour:
A sort_expression can also be the column label or number of an output column
So the expression:
order by 1
sorts by the first column of the result set (as defined by the SQL standard)
However the expression:
order by 1::int
sorts by the constant value 1, it's essentially the same as:
order by 'foo'
By using a constant value for the order by, all rows have the same sort value and thus aren't really sorted.
To sort by an expression, just use that:
order by
case
when some_column = 'foo' then 1
when some_column = 'bar' then 2
else 3
end
The above sorts the result based on the result of the case expression.
Actually I have a function with an integer argument which indicates the column to be used in the order by clause.
In the case when all columns are of the same type, this can work:
SELECT ....
ORDER BY
CASE function_to_get_a_column_number()
WHEN 1 THEN column1
WHEN 2 THEN column2
.....
WHEN 1235 THEN column1235
END
If columns are of different types, you can try:
SELECT ....
ORDER BY
CASE function_to_get_a_column_number()
WHEN 1 THEN column1::varchar
WHEN 2 THEN column2::varchar
.....
WHEN 1235 THEN column1235::varchar
END
But these "workarounds" are horrible - among other things, casting everything to varchar makes numbers and dates sort lexicographically ('10' before '2'). You need some other approach than the function returning a column number.
Maybe dynamic SQL?
I would say that dynamic SQL (thanks @kordirko and the others for the hints) is the best solution to the problem I originally had in mind:
create temp table my_data (
    id serial,
    val text
);

insert into my_data(id, val)
values (default, 'a'), (default, 'c'), (default, 'd'), (default, 'b');

create function fetch_my_data(col text)
returns setof my_data as
$f$
begin
    return query execute $$
        select * from my_data
        order by $$ || quote_ident(col);
end
$f$ language plpgsql;
select * from fetch_my_data('val'); -- order by val
select * from fetch_my_data('id'); -- order by id
In the beginning I thought this could be achieved using a case expression in the argument of the order by clause - the sort_expression. And here comes the tricky part which confused me: when the sort_expression is a kind of identifier (the name of a column or the number of a column), the corresponding column is used when ordering the results. But when the sort_expression is some value, we actually order the results using that value itself (computed for each row). This is @a_horse_with_no_name's answer rephrased.
So when I queried ... order by 1::int, I in effect assigned the value 1 to each row and then tried to sort an array of ones, which clearly is useless.
There are some workarounds without dynamic queries, but they require writing more code and do not seem to have any significant advantages.

Prepare dynamic case statement using PostgreSQL 9.3

I have the following case statement to prepare dynamically, as shown below:
Example:
I have the case statement:
case
when cola between '2001-01-01' and '2001-01-05' then 'G1'
when cola between '2001-01-10' and '2001-01-15' then 'G2'
when cola between '2001-01-20' and '2001-01-25' then 'G3'
when cola between '2001-02-01' and '2001-02-05' then 'G4'
when cola between '2001-02-10' and '2001-02-15' then 'G5'
else ''
end
Note: Now I want to create the case statement dynamically, because the dates and names are passed as parameters and they may change.
Declare
dates varchar = '2001-01-01to2001-01-05,2001-01-10to2001-01-15,
2001-01-20to2001-01-25,2001-02-01to2001-02-05,
2001-02-10to2001-02-15';
names varchar = 'G1,G2,G3,G4,G5';
The values in the variables may change as per the requirements; they are dynamic. So the case statement should be built dynamically, without using a loop.
You may not need any function for this, just join to a mapping data-set:
with cola_map(low, high, value) as (
    values (date '2001-01-01', date '2001-01-05', 'G1'),
           ('2001-01-10', '2001-01-15', 'G2'),
           ('2001-01-20', '2001-01-25', 'G3'),
           ('2001-02-01', '2001-02-05', 'G4'),
           ('2001-02-10', '2001-02-15', 'G5')
    -- you can include as many rows as you want
)
select table_name.*,
       coalesce(cola_map.value, '') -- else branch from the case expression
from table_name
left join cola_map on table_name.cola between cola_map.low and cola_map.high
If your date ranges could collide, you can use DISTINCT ON or GROUP BY to avoid row duplication.
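For example, a DISTINCT ON sketch (assuming the table has a primary key named id; that name is mine, not the question's):

select distinct on (t.id)      -- keep a single match per source row
       t.*,
       coalesce(m.value, '') as value
from table_name t
left join cola_map m on t.cola between m.low and m.high
order by t.id, m.low;          -- on collision, prefer the earliest range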
Note: you could use a simple sub-select too; I used a CTE because it's more readable.
Edit: passing this data as a single parameter can be achieved with a multi-dimensional array (or an array of row values, but that requires a distinct, predefined composite type).
Passing arrays as parameters can depend on the actual client (& driver) you use, but in general, you can use the array's input representation:
-- sql
with cola_map(low, high, value) as (
    -- note: unnest() flattens a multi-dimensional array completely, so slice
    -- the 2-D parameter by its first dimension with generate_subscripts()
    select a[i][1]::date, a[i][2]::date, a[i][3]
    from (select ?::text[][] as a) p,
         generate_subscripts(p.a, 1) as i
)
select table_name.*,
       coalesce(cola_map.value, '') -- else branch from the case expression
from table_name
left join cola_map on table_name.cola between cola_map.low and cola_map.high
// client pseudo code
query = db.prepare(sql);
query.bind(1, "{{2001-01-10,2001-01-15,G2},{2001-01-20,2001-01-25,G3}}");
query.execute();
Passing each chunk of data separately is also possible with some clients (or with some abstractions), but this highly depends on the driver/ORM/etc. you use.

How can I query 'between' numeric data on a non-numeric field?

I've got a query that I've just found in the database that is failing, causing a report to fall over. The basic gist of the query:
Select *
From table
Where IsNull(myField, '') <> ''
And IsNumeric(myField) = 1
And Convert(int, myField) Between @StartRange And @EndRange
Now, myField doesn't contain numeric data in all the rows [it is of nvarchar type]... but this query was obviously designed such that it only cares about rows where the data in this field is numeric.
The problem with this is that T-SQL (near as I understand) doesn't short-circuit the Where clause, thus causing it to fail on records where the data is not numeric, with the exception:
Msg 245, Level 16, State 1, Line 1
Conversion failed when converting the nvarchar value '/A' to data type int.
Short of dumping all the rows where myField is numeric into a temporary table and then querying that for rows where the field is in the specified range, what can I do that is optimal?
My first parse purely to attempt to analyse the returned data and see what was going on was:
Select *
From (
    Select *
    From table
    Where IsNull(myField, '') <> ''
    And IsNumeric(myField) = 1
) t0
Where Convert(int, myField) Between @StartRange And @EndRange
But I get the same error I did for the first query, which I'm not sure I understand, as I'm not converting any data that shouldn't be numeric at this point. The subquery should only have returned rows where myField contains numeric data.
Maybe I need my morning tea, but does this make sense to anyone? Another set of eyes would help.
Thanks in advance
IsNumeric only tells you that the string can be converted to one of the numeric types in SQL Server. It may be able to convert it to money, or to a float, but may not be able to convert it to an int.
Change your
IsNumeric(myField) = 1
to be:
not myField like '%[^0-9]%' and LEN(myField) < 9
(that is, you want myField to contain only digits, and fit in an int)
Edit examples:
select ISNUMERIC('.'), ISNUMERIC('£'), ISNUMERIC('1d9')
result:
----------- ----------- -----------
          1           1           1

(1 row(s) affected)
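To see the "may convert to money but not to int" point in action (my example, not from the original answer):

select CONVERT(money, '$')    -- succeeds with 0.00, which is why ISNUMERIC('$') = 1
-- select CONVERT(int, '$')  -- fails: Conversion failed when converting ... to data type int.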
You'd have to force SQL to evaluate the expressions in a certain order.
Here is one solution
Select *
From (
    Select TOP 2000000000 *
    From table
    Where IsNumeric(myField) = 1
    And IsNull(myField, '') <> ''
    Order By Key
) t0
Where Convert(int, myField) Between @StartRange And @EndRange
and another
Select *
From table
Where
    CASE
        WHEN IsNumeric(myField) = 1 And IsNull(myField, '') <> ''
        THEN Convert(int, myField)
        ELSE @StartRange - 1
    END Between @StartRange And @EndRange
The first technique is "intermediate materialisation": it forces a sort on a working table.
The 2nd relies on the fact that CASE evaluation order is guaranteed.
Neither is pretty or whizzy.
SQL is declarative: you tell the optimiser what you want, not how to do it. The tricks above force things to be done in a certain order.
Not sure if this helps you, but I did read somewhere that an incorrect conversion using CONVERT will always generate an error in SQL Server. So I think it would be better to use CASE in the WHERE clause to avoid having CONVERT run on all rows.
Use a CASE statement.
declare @StartRange int
declare @EndRange int
set @StartRange = 1
set @EndRange = 3

select *
from TestData
WHERE CASE WHEN ISNUMERIC(Value) = 0 THEN 0
           WHEN Value IS NULL THEN 0
           WHEN Value = '' THEN 0
           WHEN CONVERT(int, Value) BETWEEN @StartRange AND @EndRange THEN 1
      END = 1