DESCENDING/ASCENDING parameter to a stored procedure - T-SQL

I have the following SP:
CREATE PROCEDURE GetAllHouses
    @webRegionID INT = 2,
    @sortBy VARCHAR(50) = 'case_no',
    @sortDirection VARCHAR(4) = 'ASC'
AS
BEGIN
Select
tbl_houses.*
from tbl_houses
where
postal in (select zipcode from crm_zipcodes where web_region_id = @webRegionID)
ORDER BY
CASE UPPER(@sortBy)
when 'CASE_NO' then case_no
when 'AREA' then area
when 'FURNISHED' then furnished
when 'TYPE' then [type]
when 'SQUAREFEETS' then squarefeets
when 'BEDROOMS' then bedrooms
when 'LIVINGROOMS' then livingrooms
when 'BATHROOMS' then bathrooms
when 'LEASE_FROM' then lease_from
when 'RENT' then rent
else case_no
END
END
GO
Now everything in that SP works, but I want to be able to choose whether to sort ASCENDING or DESCENDING.
I really can't find a solution for that using SQL and can't find anything on Google.
As you can see I have the parameter @sortDirection, and I have tried using it in multiple ways but always with errors. I've tried CASE statements, IF statements and so on, but it is complicated by the fact that I want to insert a keyword.
Help will be very much appreciated; I have tried most of the things that come to mind but haven't been able to get it right.

You could use two order by fields:
CASE @sortDir WHEN 'ASC' THEN
    CASE UPPER(@sortBy)
    ...
    END
END ASC,
CASE @sortDir WHEN 'DESC' THEN
    CASE UPPER(@sortBy)
    ...
    END
END DESC
A CASE will evaluate as NULL if none of the WHEN clauses match, so that causes one of the two fields to evaluate to NULL for every row (not affecting the sort order) and the other has the appropriate direction.
One drawback, though, is that you'd need to duplicate your @sortBy CASE expression. You could achieve the same thing using dynamic SQL with sp_executesql, writing an 'ASC' or 'DESC' literal depending on the parameter.
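For reference, a minimal sketch of that sp_executesql route, using the column list from the question; @sql and @regionID here are just names for the sketch, and @sortBy still has to be one of the real column names for QUOTENAME to produce a valid ORDER BY:
DECLARE @sql NVARCHAR(MAX) = N'
    SELECT tbl_houses.*
    FROM tbl_houses
    WHERE postal IN (SELECT zipcode FROM crm_zipcodes WHERE web_region_id = @regionID)
    ORDER BY ' + QUOTENAME(@sortBy)
    + CASE WHEN UPPER(@sortDirection) = 'DESC' THEN N' DESC' ELSE N' ASC' END;

-- the region id is passed as a real parameter, so only the ORDER BY parts are concatenated
EXEC sp_executesql @sql, N'@regionID INT', @regionID = @webRegionID;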

That code is going to get very unmanageable very quickly, as you'll need to double-nest your CASE WHENs: one set for the column to order by, and a nested set for whether it's ASC or DESC.
It might be better to consider using dynamic SQL here...
DECLARE @sql NVARCHAR(MAX)
SET @sql = '
    SELECT tbl_houses.*
    FROM tbl_houses
    WHERE postal IN (SELECT zipcode FROM crm_zipcodes
                     WHERE web_region_id = ' + CONVERT(NVARCHAR(10), @webRegionID) + ')
    ORDER BY '
SET @sql = @sql + ' ' + @sortBy + ' ' + @sortDirection
EXEC (@sql)

You could do it with some dynamic SQL and call it with an EXEC. Beware SQL injection, though, if the user has any control over the parameters.
CREATE PROCEDURE GetAllHouses
    @webRegionID INT = 2,
    @sortBy VARCHAR(50) = 'case_no',
    @sortDirection VARCHAR(4) = 'ASC'
AS
BEGIN
    DECLARE @dynamicSQL NVARCHAR(MAX)

    SET @dynamicSQL = '
        SELECT tbl_houses.*
        FROM tbl_houses
        WHERE postal IN
        (
            SELECT zipcode
            FROM crm_zipcodes
            WHERE web_region_id = ' + CONVERT(NVARCHAR(10), @webRegionID) + '
        )
        ORDER BY ' + @sortBy + ' ' + @sortDirection

    EXEC (@dynamicSQL)
END
GO
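One way to blunt the injection risk mentioned above (a sketch, not the only option) is to validate both parameters against a fixed whitelist at the top of the procedure body before concatenating them:
-- fall back to safe defaults if the inputs are not recognised
IF @sortBy NOT IN ('case_no', 'area', 'furnished', 'type', 'squarefeets',
                   'bedrooms', 'livingrooms', 'bathrooms', 'lease_from', 'rent')
    SET @sortBy = 'case_no'
IF UPPER(@sortDirection) NOT IN ('ASC', 'DESC')
    SET @sortDirection = 'ASC'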

Convert value to unique value (ex John to John_1)

The user writes his name and I want to store it in the database. If the name is already in the database I want to insert a postfix, i.e. convert 'John' to the first one available among ('John_1', 'John_2', ... etc.).
This is my way of doing it so far, but I'm sure there's a better way.
select n from
(
    select 'John' n, 0 v
    union
    select 'John' || '_' || generate_series(1,100), generate_series(1,100)
) possible_names
where n not in
(select my_name from all_names u)
order by v
limit 1
Any suggestions?
If you need to worry about concurrency, the simplest way to guarantee uniqueness is by issuing insert statements until one succeeds. (This assumes you have a unique constraint, of course.)
Pseudocode:
while true
if db.execute(insert_sql, [..., name + postfix, ...])
break
end
counter += 1
postfix = '_' + counter
end
You can make the procedure run in a shorter amount of time by starting at the maximum existing postfix (see the other answers with approaches to do that).
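As a concrete illustration, here is a minimal plpgsql sketch of that insert-until-success loop; insert_unique_name is a hypothetical helper and assumes all_names.my_name has a unique constraint:
CREATE FUNCTION insert_unique_name(base_name text) RETURNS text AS $$
DECLARE
    candidate text := base_name;
    counter   int  := 0;
BEGIN
    LOOP
        BEGIN
            INSERT INTO all_names (my_name) VALUES (candidate);
            RETURN candidate;                       -- the insert succeeded, we are done
        EXCEPTION WHEN unique_violation THEN
            counter   := counter + 1;               -- name taken, try the next postfix
            candidate := base_name || '_' || counter;
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;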
An awkward alternative would be to find the maximum existing postfix using a select statement, and then to try to acquire an advisory lock on something unique to the applicable name and postfix, e.g. 'username:' + name + postfix. It's much less robust though, because it opens up the possibility of two transactions finding the same max_postfix, and then one transaction trying to acquire the lock immediately after the other is done committing its insert and releasing that lock -- thus resulting in a duplicate.
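If you did go that route anyway, the lock itself might look something like this (a sketch; hashtext() is only used to map the text key to an integer lock key, and 'John_3' stands in for the candidate name):
BEGIN;
SELECT pg_advisory_xact_lock(hashtext('username:' || 'John_3'));
-- re-check that 'John_3' is still unused, then insert; the lock is released at COMMIT
COMMIT;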
SELECT CASE WHEN num IS NULL THEN 'John' ELSE 'John' || '_' || num END AS new_name
FROM (
SELECT max(substr(my_name, position('_' in my_name) + 1)::int) + 1 AS num
FROM all_names
WHERE my_name ilike 'John' || '_%'
) new_number
Replace all three instances of 'John' with the name the user entered. (This is assuming that the user can't make an underscore part of their name and that a number will always follow the underscore.)
Edit: This is also assuming that 'John' and 'john' should be treated the same. If they shouldn't, then replace the ilike with like instead.
CREATE FUNCTION get_username_proposal(text) RETURNS text AS $$
SELECT
CASE WHEN (SELECT COUNT(*) FROM all_names WHERE my_name = $1)=0 THEN
$1
ELSE
$1 || '_' || COALESCE(MAX(LTRIM(SUBSTRING(my_name FROM '_[0-9]+$'), '_')::int), 0)+1
END
FROM
all_names
WHERE
my_name ~ ($1 || '_[0-9]+$');
$$ LANGUAGE SQL STABLE;
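A quick usage sketch of the function above, given the all_names table from the question:
SELECT get_username_proposal('John');  -- returns 'John' if it is free, otherwise 'John_<next number>'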

Count previous occurrences of a value split by date ranges

Here's a simple query we do for ad hoc requests from our Marketing department on the leads we received in the last 90 days.
SELECT ID
,FIRST_NAME
,LAST_NAME
,ADDRESS_1
,ADDRESS_2
,CITY
,STATE
,ZIP
,HOME_PHONE
,MOBILE_PHONE
,EMAIL_ADDRESS
,ROW_ADDED_DTM
FROM WEB_LEADS
WHERE ROW_ADDED_DTM BETWEEN @START AND @END
They are asking for more derived columns to be added that show the number of previous occurrences of ADDRESS_1 where the EMAIL_ADDRESS matches. But they want this for different date ranges.
So the derived columns would look like this:
,COUNT_ADDRESS_1_LAST_1_DAYS,
,COUNT_ADDRESS_1_LAST_7_DAYS
,COUNT_ADDRESS_1_LAST_14_DAYS
etc.
I've manually filled these derived columns using update statements when there were just a few. The above query is really just a sample of a much larger query with many more columns. The actual request has blossomed into 6 date ranges for 13 columns. I'm asking if there's a better way than using 78 additional update statements.
I think you will have a hard time writing a query that includes all of these 78 metrics per e-mail address without actually creating a query that hard-codes the different choices. However you can generate such a pivot query with dynamic SQL, which will save you some keystrokes and will adjust dynamically as you add more columns to the table.
The result you want to end up with will look something like this (but of course you won't want to type it):
;WITH y AS
(
SELECT
EMAIL_ADDRESS,
/* aggregation portion */
[ADDRESS_1] = COUNT(DISTINCT [ADDRESS_1]),
[ADDRESS_2] = COUNT(DISTINCT [ADDRESS_2]),
... other columns
/* end agg portion */
FROM dbo.WEB_LEADS AS wl
WHERE ROW_ADDED_DTM >= /* one of 6 past dates */
GROUP BY wl.EMAIL_ADDRESS
)
SELECT EMAIL_ADDRESS,
/* pivot portion */
COUNT_ADDRESS_1_LAST_1_DAYS = *count address 1 from 1 day ago*,
COUNT_ADDRESS_1_LAST_7_DAYS = *count address 1 from 7 days ago*,
... other date ranges ...
COUNT_ADDRESS_2_LAST_1_DAYS = *count address 2 from 1 day ago*,
COUNT_ADDRESS_2_LAST_7_DAYS = *count address 2 from 7 days ago*,
... other date ranges ...
... repeat for 11 more columns ...
/* end pivot portion */
FROM y
GROUP BY EMAIL_ADDRESS
ORDER BY EMAIL_ADDRESS;
This is a little involved, and it should all be run as one script, but I'm going to break it up into chunks to intersperse comments on how the above portions are populated without typing them. (And before long @bluefeet will probably come along with a much better PIVOT alternative.) I'll enclose my interspersed comments in /* */ so that you can still copy the bulk of this answer into Management Studio and run it with the comments intact.
Code/comments to copy follows:
/*
First, let's build a table of dates that can be used both to derive labels for pivoting and to assist with aggregation. I've added the three ranges you've mentioned and guessed at a fourth, but hopefully it is clear how to add more:
*/
DECLARE @d DATE = SYSDATETIME();

CREATE TABLE #L(label NVARCHAR(15), d DATE);

INSERT #L(label, d) VALUES
(N'LAST_1_DAYS',  DATEADD(DAY,   -1, @d)),
(N'LAST_7_DAYS',  DATEADD(DAY,   -8, @d)),
(N'LAST_14_DAYS', DATEADD(DAY,  -15, @d)),
(N'LAST_MONTH',   DATEADD(MONTH, -1, @d));
/*
Next, let's build the portions of the query that are repeated per column name. First, the aggregation portion is just in the format col = COUNT(DISTINCT col). We're going to go to the catalog views to dynamically derive the list of column names (except ID, EMAIL_ADDRESS and ROW_ADDED_DTM) and stuff them into a #temp table for re-use.
*/
SELECT name INTO #N FROM sys.columns
WHERE [object_id] = OBJECT_ID(N'dbo.WEB_LEADS')
AND name NOT IN (N'ID', N'EMAIL_ADDRESS', N'ROW_ADDED_DTM');
DECLARE @agg NVARCHAR(MAX) = N'', @piv NVARCHAR(MAX) = N'';

SELECT @agg += ',
    ' + QUOTENAME(name) + ' = COUNT(DISTINCT '
    + QUOTENAME(name) + ')' FROM #N;

PRINT @agg;
/*
Next we'll build the "pivot" portion (even though I am angling for the poor man's pivot - a bunch of CASE expressions). For each column name we need a conditional against each range, so we can accomplish this by cross joining the list of column names against our labels table. (And we'll use this exact technique again in the query later to make the /* one of 6 past dates */ portion work.)
*/
SELECT @piv += ',
    COUNT_' + n.name + '_' + l.label
    + ' = MAX(CASE WHEN label = N''' + l.label
    + ''' THEN ' + QUOTENAME(n.name) + ' END)'
FROM #N AS n CROSS JOIN #L AS l;

PRINT @piv;
/*
Now, with those two portions populated as we'd like them, we can build a dynamic SQL statement that fills out the rest:
*/
DECLARE @sql NVARCHAR(MAX) = N';WITH y AS
(
    SELECT
        EMAIL_ADDRESS, l.label' + @agg + '
    FROM dbo.WEB_LEADS AS wl
    CROSS JOIN #L AS l
    WHERE wl.ROW_ADDED_DTM >= l.d
    GROUP BY wl.EMAIL_ADDRESS, l.label
)
SELECT EMAIL_ADDRESS' + @piv + '
FROM y
GROUP BY EMAIL_ADDRESS
ORDER BY EMAIL_ADDRESS;';

PRINT @sql;
EXEC sp_executesql @sql;
GO
DROP TABLE #N, #L;
/*
Now again, this is a pretty complex piece of code, and perhaps it can be made easier with PIVOT. But I think even @bluefeet will write a version of PIVOT that uses dynamic SQL because there is just way too much to hard-code here IMHO.
*/

SQL Server trigger not evaluating as Insert or Update properly

I want to have one trigger to handle updates and inserts. Most of the SQL actions in the trigger are for both. The only exception is the fields I'm using to record the date and username for an insert and an update. This is what I have, but the updates of the fields used to track inserts and updates are not firing right. If I insert a new record, I get CreatedBy, CreatedOn, LastEditedBy, and LastEditedOn populated, with LastEditedOn 1 second after CreatedOn (which I don't want to happen). When I update the record, only LastEditedBy and LastEditedOn change (which is correct). I'm including my full trigger for reference:
SET ANSI_NULLS ON;
GO
SET QUOTED_IDENTIFIER ON;
GO
-- =================================================================================
-- Author: Paul J. Scipione
-- Create date: 2/15/2012
-- Update date: 6/5/2012
-- Description: To concatenate several fields into a set formatted UnitDescription,
-- to total Span & Loop footages, to set appropriate AcctCode, & track
-- user inserts
-- =================================================================================
IF OBJECT_ID('ProcessCable', 'TR') IS NOT NULL
DROP TRIGGER ProcessCable
GO
CREATE TRIGGER ProcessCable
ON Cable
AFTER INSERT, UPDATE
AS
BEGIN
SET NOCOUNT ON;
-- IF TRIGGER_NESTLEVEL() > 1 RETURN
IF ((SELECT TRIGGER_NESTLEVEL()) > 1 )
RETURN
ELSE
BEGIN
-- record user and date of insert or update
IF EXISTS (SELECT * FROM DELETED)
UPDATE Cable SET LastEditedOn = getdate(), LastEditedBy = REPLACE(user_name(), 'GRTINET\', '')
ELSE IF NOT EXISTS (SELECT * FROM DELETED)
UPDATE Cable SET CreatedOn = getdate(), CreatedBy = REPLACE(user_name(), 'GRTINET\', '')
-- reset Suffix if applicable
UPDATE Cable SET Suffix = NULL WHERE Suffix = 'n/a'
-- create UnitDescription value
UPDATE Cable SET UnitDescription =
isnull (Type, '') +
isnull (CONVERT (NVARCHAR (10), Size), '') +
'-' +
isnull (CONVERT (NVARCHAR (10), Gauge), '') +
CASE
WHEN ExtraTrench IS NOT NULL AND ExtraTrench > 0 THEN
CASE
WHEN Suffix IS NULL THEN 'TE' + '(' + CONVERT (NVARCHAR (10), ExtraTrench) + ')'
ELSE 'TE' + '(' + CONVERT (NVARCHAR (10), ExtraTrench) + ')' + Suffix
END
ELSE isnull (Suffix, '')
END
-- convert any accidental negative numbers entered
UPDATE Cable SET Length = ABS(Length)
-- sum Length with LoopFootage into TotalFootage
UPDATE Cable SET TotalFootage = isnull(Length, 0) + isnull(LoopFootage, 0)
-- set proper AcctCode based on Type
UPDATE Cable SET AcctCode =
CASE
WHEN Type IN ('SEA', 'CW', 'CJ') THEN '32.2421.2'
WHEN Type IN ('BFC', 'BJ', 'SEB') THEN '32.2423.2'
WHEN Type IN ('TIP','UF') THEN '32.2422.2'
WHEN Type = 'unknown' OR Type IS NULL THEN 'unknown'
END
WHERE AcctCode IS NULL OR AcctCode = ' '
END
END
GO
A few things jump out at me when I look at your trigger:
You are doing several additional updates rather than a single update (performance-wise, a single update would be better).
Your update statements are unconstrained (there is no JOIN to the inserted/deleted tables to limit the number of records that you perform these additional updates on).
Most of this logic feels like it should be in the application layer rather than in the database; or perhaps, in some cases, implemented differently.
Some quick examples:
A Suffix of "n/a" should be removed before the insert.
The cable length's absolute value should be taken before the insert (with a CHECK CONSTRAINT to verify that bad data cannot be inserted).
TotalFootage should be a computed column so it is always correct (see the sketch after this list).
The Type/AcctCode relationship seems like it should live in a lookup table referenced by a foreign key.
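For the constraint and computed-column suggestions above, a minimal sketch (assuming no existing rows violate the check, that TotalFootage can be dropped and re-created, and with an arbitrary constraint name):
ALTER TABLE Cable ADD CONSTRAINT CK_Cable_Length_NonNegative CHECK (Length >= 0);

-- replace the stored TotalFootage with a computed column so it can never drift out of date
ALTER TABLE Cable DROP COLUMN TotalFootage;
ALTER TABLE Cable ADD TotalFootage AS (ISNULL(Length, 0) + ISNULL(LoopFootage, 0));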
But ultimately, I think the reason you are seeing the unexpected dates is the unconstrained updates. Without addressing any of the other concerns I brought up above, the statements that set the audit fields should be more like this:
UPDATE Cable SET LastEditedOn = getdate(), LastEditedBy = REPLACE(user_name(), 'GRTINET\', '')
FROM Cable
JOIN deleted on Cable.PrimaryKeyColumn = deleted.PrimaryKeyColumn
UPDATE Cable SET CreatedOn = getdate(), CreatedBy = REPLACE(user_name(), 'GRTINET\', '')
FROM Cable
JOIN inserted on Cable.PrimaryKeyColumn = inserted.PrimaryKeyColumn
LEFT JOIN deleted on Cable.PrimaryKeyColumn = deleted.PrimaryKeyColumn
WHERE deleted.PrimaryKeyColumn IS NULL

TSQL - CONCAT_NULL_YIELDS_NULL ON Not Returning Null?

I have had a look around and seem to have come across a strange issue with SQL Server 2008 R2.
I understand that CONCAT_NULL_YIELDS_NULL = ON means that the following will always resolve to NULL:
SELECT NULL + 'My String'
I'm happy with that; however, when using this in conjunction with COALESCE() it doesn't appear to be working on my database.
Consider the following query, where MyString is VARCHAR(2000):
SELECT COALESCE(MyString + ', ', '') FROM MyTableOfValues
Now in my query, when MyString IS NULL it returns an empty (NOT NULL) string. I can see this in the query results window.
However, unusually enough, when running this in conjunction with an INSERT it fails to recognise CONCAT_NULL_YIELDS_NULL and instead inserts a blank ', '.
The query for the insert is as follows:
SET CONCAT_NULL_YIELDS_NULL ON
INSERT INTO Mytable(StringValue)
SELECT COALESCE(MyString + ', ', '')
FROM MyTableOfValues
Further to this I have also checked the database and CONCAT_NULL_YIELDS_NULL = TRUE…
Use NULLIF(MyString, '') instead of just MyString:
SELECT COALESCE(NULLIF(MyString, '') + ', ', '') FROM MyTableOfValues
Coalesce returns the first nonnull expression among its arguments.
You're getting a ', ' because it's the first nonnull expression in your coalesce call.
http://msdn.microsoft.com/en-us/library/ms190349.aspx
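A quick illustration of that point, with CONCAT_NULL_YIELDS_NULL ON:
SELECT NULL + ', '                  -- NULL
SELECT '' + ', '                    -- ', ' (an empty string is not NULL)
SELECT COALESCE(NULL + ', ', '')    -- ''   (falls through to the second argument)
SELECT COALESCE('' + ', ', '')      -- ', ' (the concatenation is already non-null)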
From some of the answers provided I was able to ascertain a more in-depth understanding of COALESCE().
The reason the above query did not fully work is that, although I was checking for NULLs, an empty string ('') is not considered NULL. So although the above query worked, I should have checked for empty strings in my table first.
e.g.
SELECT COALESCE(FirstName + ', ', '') + Surname
FROM
(
SELECT 'Joe' AS Firstname, 'Bloggs' AS Surname UNION ALL
SELECT NULL, 'Jones' UNION ALL
SELECT '', 'Jones' UNION ALL
SELECT 'Bob', 'Tielly'
) AS [MyTable]
Will return
FullName
-----------
Joe, Bloggs
Jones
, Jones
Bob, Tielly
Now row 3 has returned a ", " prefix which I was not originally expecting, due to a blank but NOT NULL value.
The following code now works as expected as it checks for blank values. It works, but it looks like I took the long way around. There may be a better way.
-- Amended query
SELECT COALESCE(REPLACE(FirstName, Firstname , Firstname + ', '), '') + Surname AS FullName0
FROM
(
SELECT 'Joe' AS Firstname, 'Bloggs' AS Surname UNION ALL
SELECT NULL, 'Jones' UNION ALL
SELECT '', 'Jones' UNION ALL
SELECT 'Bob', 'Tielly'
) AS [MyTable]
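Alternatively, building on the NULLIF suggestion above, treating the empty string as NULL drops the separator in both cases and avoids the REPLACE workaround entirely:
SELECT COALESCE(NULLIF(FirstName, '') + ', ', '') + Surname AS FullName
FROM
(
SELECT 'Joe' AS Firstname, 'Bloggs' AS Surname UNION ALL
SELECT NULL, 'Jones' UNION ALL
SELECT '', 'Jones' UNION ALL
SELECT 'Bob', 'Tielly'
) AS [MyTable]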

Extract the first word of a string in a SQL Server query

What's the best way to extract the first word of a string in a SQL Server query?
SELECT CASE CHARINDEX(' ', @Foo, 1)
    WHEN 0 THEN @Foo -- empty or single word
    ELSE SUBSTRING(@Foo, 1, CHARINDEX(' ', @Foo, 1) - 1) -- multi-word
END
You could perhaps use this in a UDF:
CREATE FUNCTION [dbo].[FirstWord] (@value varchar(max))
RETURNS varchar(max)
AS
BEGIN
    RETURN CASE CHARINDEX(' ', @value, 1)
        WHEN 0 THEN @value
        ELSE SUBSTRING(@value, 1, CHARINDEX(' ', @value, 1) - 1) END
END
GO -- test:
SELECT dbo.FirstWord(NULL)
SELECT dbo.FirstWord('')
SELECT dbo.FirstWord('abc')
SELECT dbo.FirstWord('abc def')
SELECT dbo.FirstWord('abc def ghi')
I wanted to do something like this without making a separate function, and came up with this simple one-line approach:
DECLARE @test NVARCHAR(255)
SET @test = 'First Second'
SELECT SUBSTRING(@test, 1, (CHARINDEX(' ', @test + ' ') - 1))
This would return the result "First"
It's short, just not quite as robust: it assumes your string doesn't start with a space. It will handle one-word inputs, multi-word inputs, and empty-string inputs.
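A few quick checks of the one-liner above, with the expected results as comments:
SELECT SUBSTRING('First Second', 1, (CHARINDEX(' ', 'First Second' + ' ') - 1))  -- 'First'
SELECT SUBSTRING('First', 1, (CHARINDEX(' ', 'First' + ' ') - 1))                -- 'First'
SELECT SUBSTRING('', 1, (CHARINDEX(' ', '' + ' ') - 1))                          -- ''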
An enhancement of Ben Brandt's answer, compensating for strings that start with a space by applying LTRIM(). I tried to edit his answer but it was rejected, so I am posting it here separately.
DECLARE @test NVARCHAR(255)
SET @test = 'First Second'
SELECT SUBSTRING(LTRIM(@test), 1, (CHARINDEX(' ', LTRIM(@test) + ' ') - 1))
Adding the following before the RETURN statement would solve for the cases where a leading space was included in the field:
SET @Value = LTRIM(RTRIM(@Value))
Marc's answer got me most of the way to what I needed, but I had to go with patIndex rather than charIndex because sometimes characters other than spaces mark the ends of my data's words. Here I'm using '%[ /-]%' to look for space, slash, or dash.
Select race_id, race_description
, Case patIndex ('%[ /-]%', LTrim (race_description))
When 0 Then LTrim (race_description)
Else substring (LTrim (race_description), 1, patIndex ('%[ /-]%', LTrim (race_description)) - 1)
End race_abbreviation
from tbl_races
Results...
race_id race_description race_abbreviation
------- ------------------------- -----------------
1 White White
2 Black or African American Black
3 Hispanic/Latino Hispanic
Caveat: this is for a small data set (US federal race reporting categories); I don't know what would happen to performance when scaled up to huge numbers.
DECLARE @string NVARCHAR(50)
SET @string = 'CUT STRING'
SELECT LEFT(@string, PATINDEX('% %', @string + ' ') - 1)
Extract the first word from the indicated field:
SELECT SUBSTRING(field1, 1, CHARINDEX(' ', field1 + ' ') - 1) FROM table1;
Extract the second and successive words from the indicated field:
SELECT SUBSTRING(field1, CHARINDEX(' ', field1)+1, LEN (field1)-CHARINDEX(' ', field1)) FROM table1;
A slight tweak to the function returns the next word from a start point in the entry
CREATE FUNCTION [dbo].[GetWord]
(
    @value varchar(max)
    , @startLocation int
)
RETURNS varchar(max)
AS
BEGIN
    SET @value = LTRIM(RTRIM(@value))
    SELECT @startLocation =
        CASE
            WHEN @startLocation > LEN(@value) THEN LEN(@value)
            ELSE @startLocation
        END
    SELECT @value =
        CASE
            WHEN @startLocation > 1
                THEN LTRIM(RTRIM(RIGHT(@value, LEN(@value) - @startLocation)))
            ELSE @value
        END
    RETURN CASE CHARINDEX(' ', @value, 1)
        WHEN 0 THEN @value
        ELSE SUBSTRING(@value, 1, CHARINDEX(' ', @value, 1) - 1)
    END
END
GO
SELECT dbo.GetWord(NULL, 1)
SELECT dbo.GetWord('', 1)
SELECT dbo.GetWord('abc', 1)
SELECT dbo.GetWord('abc def', 4)
SELECT dbo.GetWord('abc def ghi', 20)