I have the following statement in a stored procedure. I am passing the name of the column as a parameter and the value to be checked in another variable. Is it possible to accomplish this in SQL Server? Please let me know.
SELECT CaseId FROM app_Case
WHERE @SearchCat = @keywords
ORDER BY CreatedDate DESC
I think the only way to do this would be to generate a dynamic SQL statement. The other option would be to take all column values as parameters, default them to null, and check for that.
i.e.
WHERE (cola = @cola OR @cola IS NULL) AND (colb = @colb OR @colb IS NULL) etc.
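Applied to the question's table, that pattern would look roughly like this (Title and Status are hypothetical columns standing in for the real ones in app_Case):
-- Title and Status are placeholder column names
SELECT CaseId
FROM app_Case
WHERE (Title = @Title OR @Title IS NULL)
  AND (Status = @Status OR @Status IS NULL)
ORDER BY CreatedDate DESC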
You need to create a string of SQL inside the SP and execute it.
DECLARE @SQL AS VARCHAR(8000)
SET @SQL = 'SELECT CaseId FROM app_Case WHERE ' +
           @SearchCat + ' = ''' + @keywords +
           ''' ORDER BY CreatedDate DESC'
EXEC(@SQL)
You can build a dynamic query. Essentially you build a string and then execute it. (Watch out for SQL injection attacks.)
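For instance, a sketch that validates the column name against sys.columns, quotes it with QUOTENAME, and keeps the search value as a real parameter (this assumes SQL Server 2005+ and that app_Case lives in the dbo schema):
DECLARE @sql nvarchar(max);

-- only build the query if @SearchCat is an actual column of app_Case
IF EXISTS (SELECT 1 FROM sys.columns
           WHERE object_id = OBJECT_ID(N'dbo.app_Case') AND name = @SearchCat)
BEGIN
    SET @sql = N'SELECT CaseId FROM dbo.app_Case WHERE ' + QUOTENAME(@SearchCat)
             + N' = @keywords ORDER BY CreatedDate DESC';
    EXEC sp_executesql @sql, N'@keywords nvarchar(4000)', @keywords = @keywords;
END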
Another approach would be to use a CASE statement, which might be worth trying if you don't have a lot of options:
select CaseId from app_Case
where case when @searchCat = 'field1'
           then field1
           else @searchVal
      end = @searchVal
  and case when @searchCat = 'field2'
           then field2
           else @searchVal
      end = @searchVal
Another approach is to do the same thing using OR clauses:
select CaseId from app_Case
where (@searchCat = 'Field1' and Field1 = @searchVal) OR
      (@searchCat = 'Field2' and Field2 = @searchVal)
emailVariable = john@example.com, sally@testing.com
SQL Query:
select *
from [table_1]
where email in (?);
Parameter set to use emailVariable.
This returns nothing, even though both emails are valid.
Am I doing something wrong?
I am using an OLE DB Source Editor.
You can also use string_split:
declare @stringToSplit varchar(255) = 'john@example.com, sally@testing.com'

select *
from [table_1]
where email in (
    select ltrim(rtrim(value)) from string_split(@stringToSplit, ',')  -- or string_split(?, ',') with the OLE DB parameter
)
STRING_SPLIT will return a table of values based on your input string and the delimiter. In your case you also need LTRIM and RTRIM because of the extra spaces.
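As a quick sanity check, this is roughly what the split produces (STRING_SPLIT requires SQL Server 2016+):
SELECT LTRIM(RTRIM(value)) AS email
FROM STRING_SPLIT('john@example.com, sally@testing.com', ',');
-- email
-- ----------------
-- john@example.com
-- sally@testing.com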
This is a classic mistake. Although the following works:
where email in ('john@example.com','sally@testing.com')
you cannot use a single variable to hold multiple values. The commas are not part of a value string; they are treated as code. What you can do is use dynamic SQL:
declare @emailVariable nvarchar(max) = N'''john@example.com'',''sally@testing.com''' -- notice the escaped quotes
declare @sql nvarchar(max)
set @sql = N'select * from [Table_1] where email in (' + @emailVariable + ')'
exec(@sql)
How do I convert this code from T-SQL to DB2 LUW? It seems so easy in T-SQL, but in DB2 I can't find any solution. See the code below:
DECLARE @sqlCommand varchar(1000)
DECLARE @columnList varchar(75)
DECLARE @city varchar(75)
SET @columnList = 'AddressID, AddressLine1, City'
SET @city = '''London'''
SET @sqlCommand = 'SELECT ' + @columnList + ' FROM Person.Address WHERE City = ' + @city
EXEC (@sqlCommand)
The problem is that you can't 'select to nowhere' in a compound statement in DB2. The Db2 CLP can return the result set of a single SQL statement, but it doesn't try to do the same for SELECT statements inside a compound statement. If you want to print the result set from a SELECT statement in a compound statement, you can, for example, declare a cursor, fetch it in a loop, and use DBMS_OUTPUT.PUT_LINE calls to print the values of the variables.
Not pretty, but you can find an example at the bottom of this page:
Stored Procedures and Dynamic SQL Returning a Result set
Essentially you must:
1) create a dynamic SQL string
2) prepare the string into a statement
3) link the statement to a cursor you declare as WITH RETURN
Opening the cursor will be the last line in your procedure, as in the sketch below.
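A minimal SQL PL sketch of that pattern, using the column list and city from the question (the procedure name is made up; run it from the CLP with an alternative statement terminator such as @):
CREATE PROCEDURE GET_ADDRESSES (IN p_columnList VARCHAR(75), IN p_city VARCHAR(75))
  DYNAMIC RESULT SETS 1
  LANGUAGE SQL
BEGIN
  DECLARE v_sql VARCHAR(1000);
  -- the cursor is tied to a statement name and declared WITH RETURN
  DECLARE c1 CURSOR WITH RETURN TO CLIENT FOR s1;
  SET v_sql = 'SELECT ' || p_columnList || ' FROM Person.Address WHERE City = ?';
  PREPARE s1 FROM v_sql;
  OPEN c1 USING p_city;  -- opening the cursor is the last statement
END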
Suppose the following:
CREATE PROCEDURE [MySPROC]
AS
BEGIN
CREATE TABLE #tempSubset(
[MyPrimaryKey] [bigint] NOT NULL,
[OtherColumn] [int] NOT NULL)
INSERT INTO #tempSubset (MyPrimaryKey, OtherColumn)
SELECT SomePrimaryKey, SomeColumn
FROM SomeHugeTable
WHERE LimitingCondition = 1
SELECT MyPrimaryKey, OtherColumn
FROM #tempSubset
WHERE SomeExpensiveCondition = 1
END
When I generate a function import or map a return type, EF either doesn't generate a complex type or tells me:
The selected stored procedure or function returns no columns
How to overcome this?
Other answers suggest using table variables (not going to do this for performance reasons), faking the return schema and commenting out the real stored procedure, or doing something similar with views... but there must be a way to do this without adding unnecessary overhead or requiring me to break a stored procedure just to update the model?
CREATE PROCEDURE [MySPROC]
AS
BEGIN
--supplying a data contract
IF 1 = 2 BEGIN
SELECT
cast(null as bigint) as MyPrimaryKey,
cast(null as int) as OtherColumn
WHERE
1 = 2
END
CREATE TABLE #tempSubset(
[MyPrimaryKey] [bigint] NOT NULL,
[OtherColumn] [int] NOT NULL)
INSERT INTO #tempSubset (MyPrimaryKey, OtherColumn)
SELECT SomePrimaryKey, SomeColumn
FROM SomeHugeTable
WHERE LimitingCondition = 1
SELECT MyPrimaryKey, OtherColumn
FROM #tempSubset
WHERE SomeExpensiveCondition = 1
END
Supplying a faux data contract for the result set is the easiest, cleanest and fastest way to take care of the issue. This same problem exists in data source controls in SSIS too. .NET will read the result set from the unreachable "contract" section of the query and supply the metadata for the complex type. No performance impact and no need to comment out the SQL that does the actual work.
Adding SET FMTONLY OFF to the top of the stored procedure definition allowed the model to infer the schema from the temporary table without issue. As a bonus, it doesn't require additional maintenance for a contract.
Example:
SET FMTONLY OFF
CREATE TABLE #tempTable (
...
)
...
SELECT * FROM #tempTable
Solution 1
Use a table variable instead of a temporary table.
Solution 2
Use the SET FMTONLY OFF; command in the procedure and you will get the column information needed to create a new complex type.
Solution 3
This is not a good way, but it's a very easy one. Just add a select statement with dummy data; it will never execute because 1 = 0.
You can check the details on this link.
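A minimal sketch of that dummy select, reusing the two columns from the example above:
IF 1 = 0
BEGIN
    -- never executes; only supplies result-set metadata for EF
    SELECT CAST(NULL AS bigint) AS MyPrimaryKey,
           CAST(NULL AS int)    AS OtherColumn;
END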
This is incomplete, but when SET FMTONLY OFF does not work, you can generate the data contract using the following:
-- list the temp table's columns (here a temp table named #u)
SELECT *
FROM tempdb.sys.columns
WHERE [object_id] = OBJECT_ID(N'tempdb..#u');

-- generate one "cast(null as ...) as <column>," line per column for the contract
select case system_type_id
            when 62  then 'cast(null as float) as '
            when 175 then 'cast(null as char(' + cast(max_length as varchar(50)) + ')) as '
            when 167 then 'cast(null as varchar(' + cast(max_length as varchar(50)) + ')) as '
            when 56  then 'cast(null as int) as '
            when 104 then 'cast(null as bit) as '
            when 106 then 'cast(null as decimal(' + cast(precision as varchar(50)) + ',' + cast(scale as varchar(50)) + ')) as '
            when 40  then 'cast(null as date) as '
       end
       + name + ','
from tempdb.sys.columns
WHERE [object_id] = OBJECT_ID(N'tempdb..#u');
After getting great help in securing against SQL injection from classic ASP protection against SQL injection, I've encountered a major issue which cannot be solved using parameterized queries.
name = Trim(Request.QueryString("name"))
flds = Trim(Request.QueryString("flds"))
sql = "set rowcount 0 select " & flds & " from [TABLE] where Name = '" & name & "'"
From what I understand, a parameterized query will protect against SQL injection in the WHERE clause (in this case, the name field).
flds is a comma-separated list of the fields the user wants returned. Obviously, it is very vulnerable to SQL injection.
One idea I have to secure my code is to keep a statically generated dict of valid fields, split the flds string on ",", verify each value against the dict, and construct the SQL query from only the fields that are present in the dict.
It seems to me that although this method will work for security, it will require me to modify the static list at every change to the database (however rare those are).
Are there better/proper ways of securing this code against SQL injection attacks?
Create a split function in SQL Server (there are better ones for newer versions, but this is what you get in SQL Server 2000):
CREATE FUNCTION dbo.SplitStrings
(
    @List NVARCHAR(4000),
    @Delimiter CHAR(1)
)
RETURNS @Items TABLE
(
    Item NVARCHAR(4000)
)
AS
BEGIN
    DECLARE
        @Item NVARCHAR(4000),
        @Pos INT;

    WHILE LEN(@List) > 0
    BEGIN
        SET @Pos = CHARINDEX(@Delimiter, @List);
        IF @Pos = 0
            SET @Pos = LEN(@List) + 1;
        SET @Item = LEFT(@List, @Pos - 1);
        INSERT @Items SELECT LTRIM(RTRIM(@Item));
        SET @List = SUBSTRING(@List, @Pos + LEN(@Delimiter), LEN(@List));
        IF LEN(@List) = 0 BREAK;
    END
    RETURN;
END
GO
Then create a stored procedure:
CREATE PROCEDURE dbo.RunScaryQuery
    @columns NVARCHAR(4000),
    @table   NVARCHAR(255)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @collist NVARCHAR(4000), @sql NVARCHAR(4000);

    SELECT @collist = COALESCE(@collist + ',', '') + c.name
    FROM syscolumns AS c
    INNER JOIN dbo.SplitStrings(@columns, ',') AS s
        ON s.Item = c.name
    WHERE c.id = OBJECT_ID(@table);

    SELECT @sql = 'SELECT ' + @collist + ' FROM ' + @table
        -- where ...
    ;

    EXEC sp_executesql @sql;
END
GO
Now call that stored procedure from ASP with a properly parameterized command object.
This will ensure that your SQL query is generated only using column names that actually exist in the table. (Any nonsense will be ignored.)
This presumes that you will get at least one valid column name in the list.
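For reference, a call from T-SQL would look something like this (the column list and table name are placeholders):
EXEC dbo.RunScaryQuery
     @columns = N'Name, Email, Phone',
     @table   = N'Customers';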
I'm at home with no DB to test, but this should do it.
Basically, get all the fields from the DB that fit the WHERE clause, get the requested fields in an array, compare the two lists, and output only the requested fields.
name = Trim(Request.QueryString("name"))
flds = Split(Trim(Request.QueryString("flds")), ",")

sql = "set rowcount 0 select * from [TABLE] where Name = '" & name & "'"
set oRst = oConn.execute(sql)

on error resume next
do while not oRst.eof
    result = ""
    separator = ""
    ' loop over the recordset's fields and output only those that were requested
    for each field in oRst.Fields
        for each requested_field in flds
            if uCase(field.name) = uCase(trim(requested_field)) then
                result = result & separator & field.value
                separator = ","
            end if
        next
    next
    response.write result & "<br>"
    oRst.movenext
loop
Hm... so I'm going with another solution.
I first have an SQL query that returns all the valid fields:
select
tcol.name
from
sysObjects tobj
join syscolumns tcol on tobj.id = tcol.id
where
tobj.xtype = 'U'
and tobj.name = '[TABLE]'
and then I validate every element as suggested by @peter. All the validated parameters are then used to build the query string, and name is passed as a parameter in the second query.
This seems to minimize the overhead and the strain on the database.
Have a look at http://www.userfriendlythinking.com/Blog/BlogDetail.asp?p1=7013&p2=119&p7=3001
which shows usage of parameterized queries.
I'm writing a parameterized stored proc. I know that you can set a parameter's value such that it displays all the results when the parameter is not mentioned in the execute command, but I'm unable to recall how that is achieved. Any help is highly appreciated.
I'd recommend parameterized dynamic SQL (sp_executesql).
Going this route, you can discard any irrelevant parameter when building your WHERE clause.
Example procedure:
create proc dbo.SearchForStuff
(
    @Id int = 0
    ,@Description varchar(100) = ''
)
as
begin
    set nocount on;

    declare @select nvarchar(max) = '
    select
        s.*
    from Stuff as s'

    declare @where nvarchar(max) = ''

    if isnull(@Id, 0) != 0 begin
        set @where += case @where when '' then ' where ' else ' and ' end + 's.Id = @Id'
    end

    if isnull(@Description, '') != '' begin
        set @where += case @where when '' then ' where ' else ' and ' end + 's.[Description] = @Description'
    end

    set @select += @where

    exec sp_executesql
        @select
        ,N'@Id int, @Description varchar(100)'
        ,@Id
        ,@Description
end
Usage:
exec SearchForStuff @Id = 1, @Description = 'omg' -- Returns every item where Id is 1 and Description is 'omg'
exec SearchForStuff @Id = 1 -- Returns every item where Id is 1
exec SearchForStuff @Description = 'omg' -- Returns every item where Description is 'omg'
exec SearchForStuff -- Returns every item
In this fashion your final query is not littered with useless conditions. Further, you can get a bit more granular than I did here. Based upon which parameters were passed, you can tailor your where/join clauses to take advantage of your indexes such that you get the best possible performance. The only drawback is a slight loss of readability (imo).
You can make your WHERE conditions like this:
WHERE (@myParam IS NULL OR @myParam = someValue)
You may be able to use OPTION (RECOMPILE) in SQL 2008 SP1+ (or something similar; I don't know the options for other versions) in the sproc, depending on your RDBMS, to get this to be performant.
Method from Erland Sommarskog:
http://www.sommarskog.se/dyn-search-2008.html#static
From the link:
"The effect of all the #x IS NULL clauses is that if that input parameter is NULL, then that AND-condition is always true. Thus, the only conditions that are in effect are those where the search parameter has a non-NULL value.
As far as maintainability goes, it's difficult to think of a better solution for the search conditions at hand. It's compact, easy to read and to extend. And performance? Very good as long as you include the query hint OPTION (RECOMPILE). This hint forces the query to be recompiled each time, in which case SQL Server will use the actual variable values as if they were constants."
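A sketch of that static pattern for a hypothetical two-parameter search (the table and column names are made up for illustration):
CREATE PROCEDURE dbo.SearchOrders
    @CustomerId int         = NULL,
    @Status     varchar(20) = NULL
AS
BEGIN
    SET NOCOUNT ON;

    SELECT OrderId, CustomerId, Status
    FROM dbo.Orders
    WHERE (@CustomerId IS NULL OR CustomerId = @CustomerId)
      AND (@Status     IS NULL OR Status     = @Status)
    OPTION (RECOMPILE);  -- recompile with the actual parameter values each run
END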
If it is an int, you can use:
SELECT X, Y
FROM T
WHERE C BETWEEN COALESCE(@P, -2147483648) AND COALESCE(@P, 2147483647)
The definitive article on the subject