"CREATE FUNCTION failed because a column name is not specified for column 1" error for a multi-parameter function - tsql

I wanted to create a function with multiple parameters, but it gives me this error:
CREATE FUNCTION failed because a column name is not specified for
column 1.
Code below:
create function dmt.Impacted(
    @nameOfColumn varchar, @nameOfParam varchar)
returns table
as
return
(select
    case when '[' + @nameOfColumn + ']' is null or len(rtrim('[' + @nameOfColumn + ']')) = 0
         then Convert(nvarchar(2), 0)
         else @nameOfParam
     end
 from employee);

As the error message clearly said, the column in the returned result needs a name. Either give it an alias in the SELECT like
SELECT CASE
...
END a_column_name
...
or define it in the declaration of the return type (which in T-SQL means the multi-statement form with a named table variable) as in
...
RETURNS @result TABLE
    (a_column_name nvarchar(max)
...
As you can see, in the second form you have to specify a data type. As your current code doesn't make much sense as it stands, I cannot figure out what the right type is; you'd need to amend it.
Note that len(rtrim('[' + @nameOfColumn + ']')) = 0 is never true, as len(rtrim('[' + @nameOfColumn + ']')) is either NULL (when @nameOfColumn is NULL) or at least 2 because of the added brackets.
If @nameOfColumn is supposed to be a column name, you shouldn't use varchar (especially without a length specified for it) but sysname, which is a special type for object names.
Either way, you should define a length for @nameOfColumn and @nameOfParam, as just varchar without any length means varchar(1), which is probably not what you want. And maybe instead of varchar you want nvarchar.
You may also want to look into quotename().
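Putting those points together, a corrected version might look like the sketch below (a sketch only: the impacted_value alias, the sysname type, and the nvarchar(128) length are invented for illustration, and the bracket concatenation is dropped per the note above):
CREATE FUNCTION dmt.Impacted
    (@nameOfColumn sysname, @nameOfParam nvarchar(128))
RETURNS TABLE
AS
RETURN
(
    SELECT CASE
               WHEN @nameOfColumn IS NULL
                 OR LEN(RTRIM(@nameOfColumn)) = 0
               THEN CONVERT(nvarchar(128), 0)  -- '0' when no column name was passed
               ELSE @nameOfParam
           END AS impacted_value               -- naming the column fixes the error
    FROM employee
);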

Define the column name in the SELECT statement:
(select case when '[' + @nameOfColumn + ']' is null or
             len(rtrim('[' + @nameOfColumn + ']')) = 0
        then Convert(nvarchar(2), 0)
        else @nameOfParam
        end as name_column -- define column name
 from employee)
Also, your function parameters have no data length. By default @nameOfColumn varchar, @nameOfParam varchar will each accept only 1 character and the rest will be trimmed.

Related

How to limit the length of array text objects in PostgreSQL?

Is there any way to add a constraint on a column that is an array to limit the length of its text elements?
I know that I can do this without constraint:
colA varchar(100)[] not null
I tried to do it in the following way:
alter table "tableA" ADD CONSTRAINT "colA_text_size"
CHECK ((SELECT max(length(pc)) from unnest(colA) as pc) <= 100) NOT VALID;
alter table "tableA" VALIDATE CONSTRAINT colA_text_size;
But got error: cannot use subquery in check constraint (SQLSTATE 0A000)
Try the following definition for your check constraint (see demo; for the demo I limit the length to 25).
check (length(replace(array_to_string( text_array ,','), ',','')) <= 100)
What it does:
First, the function array_to_string( ... ) converts the array to a CSV string.
The replace() function then removes the commas, replacing them with the zero-length string ''.
The length() function gets the number of remaining characters in the string.
Finally, that number is compared to the limit value (100) and the check constraint either passes or fails.
References:
array_to_string(),
replace(), length()
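Note that the constraint above limits the combined length of all the elements. If you need a per-element limit, as the original attempt suggests, a common workaround is to hide the subquery inside a helper function, since only inline subqueries are rejected in check constraints. A sketch, with a made-up function name:
-- hypothetical helper: length of the longest element in the array
CREATE FUNCTION max_elem_length(arr varchar[]) RETURNS integer
    LANGUAGE sql IMMUTABLE AS
$$ SELECT max(length(pc)) FROM unnest(arr) AS pc $$;

ALTER TABLE "tableA"
    ADD CONSTRAINT "colA_text_size"
    CHECK (max_elem_length(colA) <= 100) NOT VALID;
ALTER TABLE "tableA" VALIDATE CONSTRAINT "colA_text_size";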

Check if character varying is between range of numbers

I have data in my database and I need to select all rows where one column's number is between 1 and 100.
I'm having problems because I can't use between 1 and 100, since that column is character varying, not integer. But all the data are numbers (I can't change it to integer).
Code:
dst_db1.eachRow("Select length_to_fault from diags where length_to_fault between 1 AND 100")
Error - operator does not exist: character varying >= integer
Since your column is supposed to contain numeric values but is defined as text (or a variant of text), there will be times when it does not, i.e. you need 2 validations: that the column actually contains numeric data and that it falls into your value restriction. So add the following predicates to your query.
and length_to_fault ~ '^\+?\d+(\.\d*)?$'
and length_to_fault::numeric <# ('[1.0,100.0]')::numrange;
The first builds a regexp that ensures the column is a valid floating point value. The second ensures the numeric value falls within the specified numeric range. See fiddle.
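Applied to the query from the question, the whole statement would look something like this:
SELECT length_to_fault
FROM diags
WHERE length_to_fault ~ '^\+?\d+(\.\d*)?$'                 -- only numeric-looking strings
  AND length_to_fault::numeric <# '[1.0,100.0]'::numrange; -- value within [1,100]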
I understand you cannot change the database, but this looks like a good place for a check constraint, especially if 'n/a' is the only non-numeric value allowed. You may want to talk with your DBA and see about the following constraint.
alter table diags
add constraint length_to_fault_check
check ( lower(length_to_fault) = 'n/a'
or ( length_to_fault ~ '^\+?\d+(\.\d*)?$'
and length_to_fault::numeric <# ('[1.0,100.0]')::numrange
)
);
Then your query need only check that:
lower(length_to_fault) != 'n/a'
The below PostgreSQL query will work:
SELECT length_to_fault FROM diags WHERE regexp_replace(length_to_fault, '[\s+]', '', 'g')::numeric BETWEEN 1 AND 100;

Update with ISNULL and operation

The original query looks like this:
UPDATE reponse_question_finale t1, reponse_question_finale t2
SET t1.nb_question_repondu = (9 - (ISNULL(t1.valeur_question_4)
                                 + ISNULL(t1.valeur_question_6)
                                 + ISNULL(t1.valeur_question_7)
                                 + ISNULL(t1.valeur_question_9)))
WHERE t1.APPLICATION = t2.APPLICATION;
I know you cannot update 2 tables in a single query, so I tried this:
UPDATE reponse_question_finale t1
SET nb_question_repondu = (9-(COALESCE(t1.valeur_question_4,'')::int+COALESCE(t1.valeur_question_6,'')::int+COALESCE(t1.valeur_question_7)::int+COALESCE(t1.valeur_question_9,'')::int))
WHERE t1.APPLICATION = t1.APPLICATION;
But this query gave me an error: invalid input syntax for integer: ""
I saw that the Postgres equivalent of MySQL's ISNULL() is COALESCE(), so I think I'm on the right track here.
I also know you cannot add varchar to varchar, so I tried to cast to integer. I'm not sure whether I placed the casts and parentheses correctly, and judging by the error, maybe I cannot cast to int with coalesce.
Last thing: I could certainly do a correlated sub-select to update my two tables, but I'm a little lost at this point.
The output must be an integer matching the number of questions answered in a backup survey.
Any thoughts?
Thanks.
coalesce() returns the first non-null value from the list supplied. So, if the column value is null, the expression COALESCE(t1.valeur_question_4, '') returns an empty string, and that's why you get the error.
But it seems you want something completely different: you want to check whether the column is null (or empty) and subtract a value if it is, to count the number of non-null columns.
To return 1 if a value is null (or empty) and 0 if it isn't, you can use:
(nullif(valeur_question_4, '') is null)::int
nullif returns null if the first value equals the second. The IS NULL condition returns a boolean (something that MySQL doesn't have), and that can be cast to an integer (where false is cast to 0 and true to 1).
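A quick illustration of how the pieces combine (sample values invented for the demo):
SELECT (nullif('', '') IS NULL)::int;     -- 1: empty string counts as unanswered
SELECT (nullif(NULL, '') IS NULL)::int;   -- 1: NULL counts as unanswered
SELECT (nullif('oui', '') IS NULL)::int;  -- 0: a real answer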
So the whole expression should be:
nb_question_repondu = 9 - (
(nullif(t1.valeur_question_4,'') is null)::int
+ (nullif(t1.valeur_question_6,'') is null)::int
+ (nullif(t1.valeur_question_7,'') is null)::int
+ (nullif(t1.valeur_question_9,'') is null)::int
)
Another option is to unpivot the columns and do a select on them in a sub-select:
update reponse_question_finale
set nb_question_repondu = (select count(*)
from (
values
(valeur_question_4),
(valeur_question_6),
(valeur_question_7),
(valeur_question_9)
) as t(q)
where nullif(trim(q),'') is not null);
Adding more columns to be considered is then quite easy, as you just need to add a single line to the values() clause.

SQL invalid conversion return null instead of throwing error

I have a table with a varchar column, and I want to find values that match a certain number. So let's say that column contains the following entries (except with millions of rows in real life):
123456789012
2345678
3456
23 45
713?2
00123456789012
So I decide I want all the rows which are numerically 123456789012, and write a statement that looks something like this:
SELECT * FROM MyTable WHERE CAST(MyColumn as bigint) = 123456789012
It should return the first and last row, but instead the whole query blows up because it can't convert the "23 45" and "713?2" to bigint.
Is there another way to do the conversion that will return NULL for values that can't convert?
SQL Server does NOT guarantee boolean operator short-circuit; see On SQL Server boolean operator short-circuit. So all solutions using ISNUMERIC(...) AND CAST(...) are fundamentally flawed (they may work, but they can arbitrarily fail later depending on the generated plan). A better solution is using CASE, as Thomas suggests: CASE ISNUMERIC(...) WHEN 1 THEN CAST(...) ELSE NULL END. But, as gbn pointed out, ISNUMERIC is notoriously finicky in identifying what 'numeric' means, and in many cases where one would expect it to return 0 it returns 1. So mix the CASE with LIKE:
CASE WHEN MyRow NOT LIKE '%[^0-9]%' THEN CAST(MyRow as bigint) ELSE NULL END
But the real problem is that if you have millions of rows and you have to search them like this, you'll always end up scanning end-to-end, since the expression is not SARG-able (no matter how we rewrite it). The real issue here is data purity, and it should be addressed at the appropriate level, where the data is populated. Another thing to consider is whether it is possible to create a persisted computed column with this expression and create a filtered index on it which eliminates NULLs (i.e. non-numerics). That would speed things up a little.
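A rough sketch of that computed-column idea, with invented names (a plain index is used here, since filtered indexes have restrictions around computed columns, and the LEN guard avoids overflowing bigint on very long digit strings):
ALTER TABLE MyTable ADD MyColumnAsBigint AS
    (CASE WHEN MyColumn NOT LIKE '%[^0-9]%' AND LEN(MyColumn) <= 18
          THEN CAST(MyColumn AS bigint)
     END) PERSISTED;  -- NULL for anything non-numeric

CREATE INDEX IX_MyTable_MyColumnAsBigint
    ON MyTable (MyColumnAsBigint);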
If you are using SQL Server 2012 you can use the 2 new methods:
TRY_CAST()
TRY_CONVERT()
Both methods are equivalent. They return a value cast to the specified data type if the cast succeeds; otherwise, they return NULL. The only difference is that CONVERT is SQL Server specific while CAST is ANSI, so using TRY_CAST will make your code more portable (although I'm not sure whether any other database provider implements TRY_CAST).
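With either of these, the whole query collapses to a single predicate, since a failed cast yields NULL, which never compares equal to the search value:
SELECT *
FROM MyTable
WHERE TRY_CAST(MyColumn AS bigint) = 123456789012;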
ISNUMERIC will accept an empty string and values like 1.23 or 5E-04, so it could be unreliable.
And you don't know what order things will be evaluated in, so it could still fail (SQL is declarative, not procedural, so the WHERE clause probably won't be evaluated left to right).
So:
you want to accept values that consist only of the characters 0-9
you need to materialise the "number" filter so it's applied before the CAST
Something like:
SELECT
*
FROM
(
SELECT TOP 2000000000 *
FROM MyTable
WHERE MyColumn NOT LIKE '%[^0-9]%' --double negative rejects anything except 0-9
ORDER BY MyColumn
) foo
WHERE
CAST(MyColumn as bigint) = 123456789012 --applied after number check
Edit: quick example that fails.
CREATE TABLE #foo (bigintstring varchar(100))
INSERT #foo (bigintstring )VALUES ('1.23')
INSERT #foo (bigintstring )VALUES ('1 23')
INSERT #foo (bigintstring )VALUES ('123')
SELECT * FROM #foo
WHERE
ISNUMERIC(bigintstring) = 1
AND
CAST(bigintstring AS bigint) = 123
SELECT *
FROM MyTable
WHERE ISNUMERIC(MyRow) = 1
AND CAST(MyRow as float) = 123456789012
The ISNUMERIC() function should give you what you need.
SELECT * FROM MyTable
WHERE ISNUMERIC(MyRow) = 1
AND CAST(MyRow as bigint) = 123456789012
And to add a case statement like Thomas suggested:
SELECT * FROM MyTable
WHERE CASE ISNUMERIC(MyRow)
          WHEN 1 THEN CAST(MyRow as bigint)
          ELSE NULL
      END = 123456789012
http://msdn.microsoft.com/en-us/library/ms186272.aspx
SELECT *
FROM MyTable
WHERE (ISNUMERIC(MyColumn) = 1) AND (CAST(MyColumn as bigint) = 123456789012)
Additionally you can use a CASE statement in order to get null values.
SELECT
CASE
WHEN (ISNUMERIC(MyColumn) = 1) THEN CAST(MyColumn as bigint)
ELSE NULL
END AS 'MyColumnAsBigInt'
FROM tableName
If you require additional filtering, for numerics which are not valid to be cast to bigint, you can use the following instead of ISNUMERIC:
PATINDEX('%[^0-9]%', MyColumn) = 0
If you need decimal values instead of integers, cast to float instead and change the pattern to '%[^0-9.]%'.

Postgres query error

I have a query in postgres
insert into c_d (select * from cd where ak = '22019763');
And I get the following error
ERROR: column "region" is of type integer but expression is of type character varying
HINT: You will need to rewrite or cast the expression.
An INSERT INTO table1 SELECT * FROM table2 depends entirely on the order of the columns, which is part of the table definition. It will line each column of table1 up with the column of table2 at the same ordinal position, regardless of names.
The problem you have here is that whichever column of cd sits at the same ordinal position as c_d's "region" column has an incompatible type, and an implicit typecast is not available to clear up the confusion.
INSERT INTO ... SELECT * statements are stylistically bad form unless the two tables are defined, and will forever be defined, in exactly the same way. All it takes is a single extra column added to cd, and you'll start getting errors about extraneous columns.
If it is at all possible, what I would suggest is explicitly calling out the columns within the SELECT statement. You can call a function to change the type within each of the column references (or you could define a new typecast to do this implicitly -- see CREATE CAST), and you can use AS to set the column label to match that of your target column.
If you can't do this for some reason, indicate that in your question.
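For illustration, here is a sketch with invented column names (the actual schemas of c_d and cd aren't shown in the question):
INSERT INTO c_d (region, site, ak)
SELECT CAST(region AS integer) AS region,  -- the explicit cast fixes the type mismatch
       site,
       ak
FROM cd
WHERE ak = '22019763';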
Check out the PostgreSQL insert documentation. The syntax is:
INSERT INTO table [ ( column [, ...] ) ]
{ DEFAULT VALUES | VALUES ( { expression | DEFAULT } [, ...] ) | query }
which here would look something like:
INSERT INTO c_d (column1, column2, ...) SELECT * FROM cd WHERE ak = '22019763';
This is the syntax you want to use when inserting values from one table to another where the column types and order are not exactly the same.