I have a stored procedure in an old SQL 2000 database that takes a comment column, stored as a varchar, and exports it as a money value. At the time this table structure was set up, it was assumed this would be the only data going into the field. The current procedure looks like this:
SELECT CAST(dbo.member_category_assign.coment AS money)
FROM dbo.member_category_assign
WHERE member_id = @intMemberId
AND dbo.member_category_assign.eff_date <= @dtmEndDate
AND
(
    dbo.member_category_assign.term_date >= @dtmBeginDate
    OR
    dbo.member_category_assign.term_date Is Null
)
However, data is now being inserted into this column that is not parsable as money and is causing the procedure to fail. I am unable to remove the "bad" data (since this is a third-party product), but I need to update the stored procedure to test for a money-parsable entry and return that.
How can I update this procedure so that it will only return values that are parsable as money? Do I create a temporary table and iterate through every item, or is there a more clever way to do this? I'm stuck with legacy SQL 2000 (version 6.0), so using any of the newer functions is unfortunately not an option.
Checking with ISNUMERIC may help you: you can simply return a zero value, or return 'N/a' or some other string value instead.
I created the sample below with the columns from your query.
The first query just returns all rows.
The second query returns a MONEY value.
The third one returns a string value with 'N/a' in place of the non-numeric value.
set nocount on
if object_id('tempdb..#MoneyTest') is not null
    drop table #MoneyTest
create table #MoneyTest
(
    MoneyTestId int identity (1, 1),
    coment varchar (100),
    member_id int,
    eff_date datetime,
    term_date datetime
)
insert into #MoneyTest (coment, member_id, eff_date, term_date)
values
(104, 1, '1/1/2008', '1/1/2009'),
(200, 1, '1/1/2008', '1/1/2009'),
(322, 1, '1/1/2008', '1/1/2009'),
(120, 1, '1/1/2008', '1/1/2009')
insert into #MoneyTest (coment, member_id, eff_date, term_date)
values ('XX', 1, '1/1/2008', '1/1/2009')
Select *
FROM #MoneyTest
declare @intMemberId int = 1
declare @dtmBeginDate datetime = '1/1/2008'
declare @dtmEndDate datetime = '1/1/2009'
SELECT
    CASE WHEN ISNUMERIC(coment) = 1 THEN CAST(#MoneyTest.coment AS money) ELSE CAST(0 AS money) END AS MoneyValue
FROM #MoneyTest
WHERE member_id = @intMemberId
AND #MoneyTest.eff_date <= @dtmEndDate
AND
(
    #MoneyTest.term_date >= @dtmBeginDate
    OR
    #MoneyTest.term_date Is Null
)
SELECT
    CASE WHEN ISNUMERIC(coment) = 1 THEN CAST(CAST(#MoneyTest.coment AS money) AS VARCHAR) ELSE 'N/a' END AS StringValue
FROM #MoneyTest
WHERE member_id = @intMemberId
AND #MoneyTest.eff_date <= @dtmEndDate
AND
(
    #MoneyTest.term_date >= @dtmBeginDate
    OR
    #MoneyTest.term_date Is Null
)
Apologies for making a new answer where a comment would suffice, but I lack the required permissions to do so. As to your question, I would only add that you should use the above ISNUMERIC carefully. While it works much as expected, it also treats strings like '1.3E-2' as a valid numeric, which, strangely enough, you cannot cast to numeric or money without generating an exception. I generally end up using:
SELECT
CASE WHEN ISNUMERIC( some_value ) = 1 AND CHARINDEX( 'E', Upper( some_value ) ) = 0
THEN Cast( some_value as money )
ELSE Cast( 0 as money )
END as money_value
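A quick demonstration of the pitfall (the literals here are just illustrative):

-- ISNUMERIC accepts scientific notation...
SELECT ISNUMERIC('1.3E-2')         -- returns 1
-- ...but the money cast of the same string throws a conversion error:
-- SELECT CAST('1.3E-2' AS money)
-- while the float cast succeeds:
SELECT CAST('1.3E-2' AS float)     -- returns 0.013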
Related
This is a scalar query, originally within a function. The result datatype varies depending on which field I'm interested in.
In this example, I expect a scalar result of datatype NVARCHAR ('Andy'), but got an error:
Msg 245, Level 16, State 1, Line xx Conversion failed when converting
the nvarchar value 'Andy' to data type int.
Is there any way to get around this?
CREATE TABLE ATable (
    Idf INT PRIMARY KEY,
    Col1 INT,
    Col2 NVARCHAR(255)
)
GO
INSERT INTO Atable (Idf, Col1, Col2) VALUES (1, 75, 'Andy')
INSERT INTO Atable (Idf, Col1, Col2) VALUES (2, 39, 'Pete')
GO
DECLARE @Idf INT = 1
DECLARE @Col NVARCHAR(15) = 'Col2'
SELECT
    CASE
        WHEN @Col = 'Col1' THEN Col1
        WHEN @Col = 'Col2' THEN Col2
        ELSE NULL
    END AS MyScalarResultOfDynamicDatatype
FROM ATable
WHERE Idf = @Idf
If it is really, really necessary... you might use the sql_variant data type.
CASE
    WHEN @Col = 'Col1' THEN CAST(Col1 AS SQL_VARIANT)
    WHEN @Col = 'Col2' THEN CAST(Col2 AS SQL_VARIANT)
END AS MyScalarResultOfDynamicDatatype
(Note that you do not specifically need an ELSE NULL in your CASE-statement. If there is no matching WHEN, the result will be NULL by default.)
Edit:
Based on a question in comment regarding the drawbacks, I would like to refer to the article Problems Caused by the Use of the SQL_VARIANT Datatype, written by somebody under the alias Phil Factor (Redgate Simple Talk, Redgate Blog, GitHub).
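One practical note: the client then receives a sql_variant, not the underlying type. If the caller needs to know what actually came back, SQL_VARIANT_PROPERTY can report the base type; a minimal sketch against the table above:

SELECT SQL_VARIANT_PROPERTY(CAST(Col2 AS SQL_VARIANT), 'BaseType') AS base_type -- returns 'nvarchar'
FROM ATable
WHERE Idf = 1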
How can I ensure that TSQL will not bark at me when I convert to an int, given values like these: '1.00000000', NULL, '', or 'some value'?
If you are using SQL Server 2012 or later, you may use the TRY_CONVERT function, e.g.
WITH yourTable AS (
    SELECT '123' AS intVal UNION ALL
    SELECT '1.00000000' UNION ALL
    SELECT '' UNION ALL
    SELECT NULL
)
SELECT
    intVal,
    CASE WHEN TRY_CONVERT(int, intVal) IS NOT NULL THEN 'yes' ELSE 'no' END AS can_parse
FROM yourTable;
The TRY_CONVERT function will return NULL if it can't convert the input to an integer. So this is a safe way to probe your data before attempting a formal cast or conversion.
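If you would rather get a default value than NULL (zero, say), wrap it with ISNULL; for example:

SELECT ISNULL(TRY_CONVERT(int, 'some value'), 0) AS int_or_zero -- returns 0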
Here was the answer I found that worked for me...
TSQL - Cast string to integer or return default value
I'm not on 2012 or higher due to customer...
Don't give me credit though :) I was only good at searching for the answer that worked for me...
Although I changed it from returning null to returning zero, since the stupid varchar should really be an int column with a default of zero :)
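For anyone else stuck pre-2012, the linked answer boils down to a guarded conversion; here is a rough sketch (the function name is mine, and the digits-only check ignores signs):

CREATE FUNCTION dbo.fn_ToIntOrZero (@s VARCHAR(100))
RETURNS INT
AS
BEGIN
    -- reject NULL, empty strings, and anything containing a non-digit
    IF @s IS NULL OR @s = '' OR @s LIKE '%[^0-9]%'
        RETURN 0
    RETURN CAST(@s AS INT)
END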
Here's one that works for any value that is truly a VARCHAR and not an int (since VARCHAR is really a variable-length string data type):
WITH tmpTable AS (
SELECT '123' as intVal UNION ALL
SELECT 'dog' UNION ALL
SELECT '345' UNION ALL
SELECT 'cat' UNION ALL
SELECT '987' UNION ALL
SELECT '4f7g7' UNION ALL
SELECT NULL
)
SELECT
    intVal,
    CASE WHEN intVal NOT LIKE '%[^0-9]%' THEN 'yes' ELSE 'no' END AS can_parse
FROM tmpTable;
Credit given to Tim Biegeleisen for his answer above, although his solution will still error out when characters are found, hence the changes.
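To actually perform the conversion with a zero default, rather than just probing, swap the SELECT in the query above for something like:

SELECT
    intVal,
    CASE WHEN intVal NOT LIKE '%[^0-9]%' AND intVal <> ''
         THEN CAST(intVal AS int)
         ELSE 0
    END AS int_or_zero
FROM tmpTable;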
Right now I have a generic notification function that is triggered after create on a couple of tables in my database (there's a node process on the other end listening for notifications). Here's what my create trigger function looks like:
CREATE OR REPLACE FUNCTION notify_create() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
PERFORM pg_notify('update_watchers',
json_build_object(
'eventType', 'new',
'type', TG_TABLE_NAME,
'payload', row_to_json(NEW)
)::text
);
RETURN NEW;
END;
$$;
The problem is, if NEW is too big, this will overflow the limit of 8000 bytes in a couple of limited corner cases (I rarely have a new item in the table that is that big). In the notify_update function, I just report on which columns have changed by listing the column names. That would work here, but what I would rather do is only have row_to_json pull out entries from NEW that are of type integer.
That is because sometimes what I'm notifying is "hey there's a new entry in an entity table". The new entry could be from a couple of different tables (documents, profiles, etc). In that case, I really only need the id, since anyone who is interested in the new value ends up fetching it later anyway.
Sometimes I'm notifying "hey, there's a new entry in a join table", in which case I don't have an id field but instead have something like documents_id and profiles_id.
I could just write a bunch of different notify_create functions, for each scenario. I'd prefer to have one that did something like
row_to_json(NEW.filter(t => typeof t === 'number'))
to mix together plpgsql and javascript notation, but I'm sure you get the point: only include those fields of NEW that are number typed
Is this possible, or should I just write a bunch of different notifiers?
You can easily eliminate JSON values of any type other than number, e.g.:
with my_table(int1, text1, int2, date1, float1) as (
values
(1, 'text1', 100, '2017-01-01'::date, 123.54)
)
select jsonb_object_agg(key, value) filter (where jsonb_typeof(value) = 'number')
from my_table,
jsonb_each(to_jsonb(my_table))
jsonb_object_agg
--------------------------------------------
{"int1": 1, "int2": 100, "float1": 123.54}
(1 row)
The function below leaves only integers:
create or replace function leave_integers(jdata jsonb)
returns jsonb language sql as $$
select jsonb_object_agg(key, value)
filter (
where jsonb_typeof(value) = 'number'
and value::text not like '%.%')
from jsonb_each(jdata)
$$;
with my_table(int1, text1, int2, date1, float1) as (
values
(1, 'text1', 100, '2017-01-01'::date, 123.54)
)
select leave_integers(to_jsonb(my_table))
from my_table;
leave_integers
--------------------------
{"int1": 1, "int2": 100}
(1 row)
Alternative (better) solution
This function checks Postgres types directly and returns values strictly from integer columns.
create or replace function integer_columns_to_jsonb(anyelement)
returns jsonb language sql as $$
select jsonb_object_agg(key, value)
from jsonb_each(to_jsonb($1))
where key in (
select attname
from pg_type t
join pg_attribute on typrelid = attrelid
where t.oid = pg_typeof($1)
and atttypid = 'int'::regtype)
$$;
The example shows that the function handles some corner cases that leave_integers() gets wrong:
create table my_table (int1 int, int2 int, float1 float, text1 text);
insert into my_table values (1, 2, 3, '4');
select integer_columns_to_jsonb(t), leave_integers(to_jsonb(t))
from my_table t;
integer_columns_to_jsonb | leave_integers
--------------------------+-------------------------------------
{"int1": 1, "int2": 2} | {"int1": 1, "int2": 2, "float1": 3}
(1 row)
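To tie this back to the original trigger, the generic notifier can then call the function on NEW; a sketch (the trigger function name here is mine):

CREATE OR REPLACE FUNCTION notify_create_ids() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    PERFORM pg_notify('update_watchers',
        json_build_object(
            'eventType', 'new',
            'type', TG_TABLE_NAME,
            -- only the integer columns, keeping the payload well under 8000 bytes
            'payload', integer_columns_to_jsonb(NEW)
        )::text
    );
    RETURN NEW;
END;
$$;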
-- Given a CSV string like this:
declare @roles varchar(800)
select @roles = 'Pub,RegUser,ServiceAdmin'
-- Question: How to get roles into a table view like this:
select 'Pub'
union
select 'RegUser'
union
select 'ServiceAdmin'
After posting this, I started playing with some dynamic SQL. This seems to work, but it seems like there might be some security risks with dynamic SQL - thoughts on this?
declare @rolesSql varchar(800)
select @rolesSql = 'select ''' + replace(@roles, ',', ''' union select ''') + ''''
exec(@rolesSql)
If you're working with SQL Server compatibility level 130 or higher, then the STRING_SPLIT function is the most succinct method available.
Reference link: https://msdn.microsoft.com/en-gb/library/mt684588.aspx
Usage:
SELECT * FROM string_split('Pub,RegUser,ServiceAdmin',',')
RESULT:
value
-----------
Pub
RegUser
ServiceAdmin
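It works just as well with the variable from the question:

declare @roles varchar(800)
select @roles = 'Pub,RegUser,ServiceAdmin'
SELECT value FROM string_split(@roles, ',')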
See my answer from here
But basically you would:
Create this function in your DB:
CREATE FUNCTION dbo.Split(@origString varchar(max), @Delimiter char(1))
returns @temptable TABLE (items varchar(max))
as
begin
    declare @idx int
    declare @split varchar(max)
    select @idx = 1
    if len(@origString) < 1 or @origString is null return
    while @idx != 0
    begin
        set @idx = charindex(@Delimiter, @origString)
        if @idx != 0
            set @split = left(@origString, @idx - 1)
        else
            set @split = @origString
        if len(@split) > 0
            insert into @temptable(items) values(@split)
        set @origString = right(@origString, len(@origString) - @idx)
        if len(@origString) = 0 break
    end
    return
end
and then call the function and pass in the string you want to split.
Select * From dbo.Split(@roles, ',')
Here's a thorough discussion of your options:
Arrays and Lists in SQL Server
What I do in this case is use some string replacement to convert the CSV to JSON and then open the JSON like a table. It may not be suitable for every use case, but it is very simple to get running and works with strings and files. With files you just need to watch your line-break character; mostly I find it to be Char(13)+Char(10).
declare @myCSV nvarchar(MAX) = N'"Id";"Duration";"PosX";"PosY"
"•P001";223;-30;35
"•P002";248;-28;35
"•P003";235;-26;35'

--CSV to JSON
--convert to json by replacing some stuff
declare @myJson nvarchar(MAX) = '[[' + replace(@myCSV, Char(13)+Char(10), '],[') + ']]'
set @myJson = replace(@myJson, ';', ',') -- Optional: ensure comma delimiters for json if the current delimiter differs
-- set @myJson = replace(@myJson, ',,', ',null,') -- Optional: empty in between
-- set @myJson = replace(@myJson, ',]', ',null]') -- Optional: empty before linebreak

SELECT
    ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 AS LineNumber, *
FROM OPENJSON(@myJson)
with (
    col0 varchar(255) '$[0]'
    ,col1 varchar(255) '$[1]'
    ,col2 varchar(255) '$[2]'
    ,col3 varchar(255) '$[3]'
    ,col4 varchar(255) '$[4]'
    ,col5 varchar(255) '$[5]'
    ,col6 varchar(255) '$[6]'
    ,col7 varchar(255) '$[7]'
    ,col8 varchar(255) '$[8]'
    ,col9 varchar(255) '$[9]'
    --any name column count is possible
) csv
order by (SELECT 0) OFFSET 1 ROWS --hide header row
Using SQL Server's built in XML parsing is also an option. Of course, this glosses over all the nuances of an RFC-4180 compliant CSV.
-- Given a CSV string like this:
declare @roles varchar(800)
select @roles = 'Pub,RegUser,ServiceAdmin'
-- Here's the XML way
select split.csv.value('.', 'varchar(100)') as value
from (
    select cast('<x>' + replace(@roles, ',', '</x><x>') + '</x>' as xml) as data
) as csv
cross apply data.nodes('/x') as split(csv)
If you are using SQL 2016+, using string_split is better, but this is a common way to do this prior to SQL 2016.
Using BULK INSERT you can import a csv file into your sql table -
http://blog.sqlauthority.com/2008/02/06/sql-server-import-csv-file-into-sql-server-using-bulk-insert-load-comma-delimited-file-into-sql-server/
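A minimal sketch of the idea (the file path and target table here are assumptions):

-- assumes a table dbo.Roles(value varchar(100)) and a file reachable from the server
BULK INSERT dbo.Roles
FROM 'C:\data\roles.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')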
The accepted answer works fine, but I found this function much faster, even for thousands of records. Create the function below and use it.
IF EXISTS (
    SELECT 1
    FROM Information_schema.Routines
    WHERE Specific_schema = 'dbo'
    AND specific_name = 'FN_CSVToStringListTable'
    AND Routine_Type = 'FUNCTION'
)
BEGIN
    DROP FUNCTION [dbo].[FN_CSVToStringListTable]
END
GO

CREATE FUNCTION [dbo].[FN_CSVToStringListTable] (@InStr VARCHAR(MAX))
RETURNS @TempTab TABLE (Id NVARCHAR(max) NOT NULL)
AS
BEGIN
    -- Ensure input ends with a comma and collapse doubled commas
    SET @InStr = REPLACE(@InStr + ',', ',,', ',')
    DECLARE @SP INT
    DECLARE @VALUE VARCHAR(1000)
    WHILE PATINDEX('%,%', @InStr) <> 0
    BEGIN
        SELECT @SP = PATINDEX('%,%', @InStr)
        SELECT @VALUE = LEFT(@InStr, @SP - 1)
        SELECT @InStr = STUFF(@InStr, 1, @SP, '')
        INSERT INTO @TempTab (Id)
        VALUES (@VALUE)
    END
    RETURN
END
GO
---Test like this.
declare @v as NVARCHAR(max) = N'asdf,,as34df,234df,fs,,34v,5fghwer,56gfg,';
SELECT Id FROM dbo.FN_CSVToStringListTable(@v)
I was about to use the solution mentioned in the accepted answer, but doing more research led me to use Table Value Types instead.
These are far more efficient, and you don't need a TVF (table-valued function) just to create a table from CSV. You can use the type directly in your scripts or pass it to a stored procedure as a Table Value Parameter. The type can be created as:
CREATE TYPE [UniqueIdentifiers] AS TABLE(
[Id] [varchar](20) NOT NULL
)
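Usage then looks something like this (the procedure name is an assumption; note that a table-valued parameter must be declared READONLY on the procedure side):

DECLARE @ids UniqueIdentifiers
INSERT INTO @ids (Id) VALUES ('Pub'), ('RegUser'), ('ServiceAdmin')
-- hypothetical procedure taking @RoleIds UniqueIdentifiers READONLY
EXEC dbo.ProcessRoles @RoleIds = @ids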
If I select from a table grouping by month, day, and year, it only returns rows where records exist and leaves out combinations without any records, making it appear at a glance that every day or month has activity; you have to actively scan the date column for gaps. How can I get a row for every day/month/year, even when no data is present, in T-SQL?
Create a calendar table and outer join on that table.
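A minimal sketch of that approach, assuming a prebuilt dbo.Calendar table with one row per date and a data table with a datetime column (both names are assumptions):

SELECT
    c.CalendarDate,
    COUNT(t.Id) AS ActivityCount -- 0 on days with no records
FROM dbo.Calendar c
LEFT JOIN dbo.MyData t
    ON t.CreatedDate >= c.CalendarDate
    AND t.CreatedDate < DATEADD(day, 1, c.CalendarDate)
GROUP BY c.CalendarDate
ORDER BY c.CalendarDate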
My developer got back to me with this code. No numbers table required. Our example is complicated a bit by a join to another table, but maybe the code example will help someone someday.
declare @career_fair_id int
select @career_fair_id = 125

create table #data ([date] datetime null, [cumulative] int null)

declare @event_date datetime, @current_process_date datetime, @day_count int
select @event_date = (select careerfairdate from tbl_career_fair where careerfairid = @career_fair_id)
select @current_process_date = dateadd(day, -90, @event_date)

while @event_date <> @current_process_date
begin
    select @current_process_date = dateadd(day, 1, @current_process_date)
    select @day_count = (select count(*) from tbl_career_fair_junction where attendanceregister <= @current_process_date and careerfairid = @career_fair_id)
    if @current_process_date <= getdate()
        insert into #data ([date], [cumulative]) values(@current_process_date, @day_count)
end

select * from #data
drop table #data
Look into using a numbers table. While it can be hackish, it's the best method I've come across to quickly query missing data, show all dates, or anything else where you want to examine values within a range, regardless of whether all values in that range are used.
Building on what SQLMenace said, you can use a CROSS JOIN to quickly populate the table or efficiently create it in memory.
http://www.sitepoint.com/forums/showthread.php?t=562806
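For example, one common way to fill such a table with a CROSS JOIN (the table name and row count here are arbitrary):

-- builds dbo.Numbers with a million sequential integers,
-- assuming sys.all_objects is large enough when squared
SELECT TOP 1000000
    ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
INTO dbo.Numbers
FROM sys.all_objects a
CROSS JOIN sys.all_objects b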
The task calls for a complete set of dates to be left-joined onto your data, such as
DECLARE @StartDate datetime
DECLARE @StartInt int
DECLARE @Increment int
DECLARE @Iterations int
SET @StartDate = '20080101' -- anchor date; missing from the original, so assumed here
SET @StartInt = 0
SET @Increment = 1
SET @Iterations = 365

SELECT
    tCompleteDateSet.[Date]
    ,AggregatedMeasure = SUM(ISNULL(t.Data, 0))
FROM
(
    SELECT
        [Date] = dateadd(dd, GeneratedInt, @StartDate)
    FROM
        [dbo].[tvfUtilGenerateIntegerList] (
            @StartInt,
            @Increment,
            @Iterations
        )
) tCompleteDateSet
LEFT JOIN tblData t
    ON (t.[Date] = tCompleteDateSet.[Date])
GROUP BY
    tCompleteDateSet.[Date]
where the table-valued function tvfUtilGenerateIntegerList is defined as
-- Example Inputs
-- DECLARE @StartInt int
-- DECLARE @Increment int
-- DECLARE @Iterations int
-- SET @StartInt = 56200
-- SET @Increment = 1
-- SET @Iterations = 400
-- DECLARE @tblResults TABLE
-- (
--     IterationId int identity(1,1),
--     GeneratedInt int
-- )
-- =============================================
-- Author: 6eorge Jetson
-- Create date: 11/22/3333
-- Description: Generates and returns the desired list of integers as a table
-- =============================================
CREATE FUNCTION [dbo].[tvfUtilGenerateIntegerList]
(
    @StartInt int,
    @Increment int,
    @Iterations int
)
RETURNS
@tblResults TABLE
(
    IterationId int identity(1,1),
    GeneratedInt int
)
AS
BEGIN
    DECLARE @counter int
    SET @counter = 0
    WHILE (@counter < @Iterations)
    BEGIN
        INSERT @tblResults(GeneratedInt) VALUES(@StartInt + @counter*@Increment)
        SET @counter = @counter + 1
    END
    RETURN
END
--Debug
--SELECT * FROM @tblResults
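A quick call to sanity-check the function:

-- returns 365 rows: 0, 1, 2, ..., 364
SELECT IterationId, GeneratedInt
FROM [dbo].[tvfUtilGenerateIntegerList](0, 1, 365)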