Iterate through result-sets in T-SQL - tsql

I would like to write a stored procedure using a statement to iterate through a result-set of records provided by another statement, and union the end results into one single result-set. Can anyone advise on an approach for this?
For example, a generic set of records to iterate through:
SELECT sys.schemas.name + '.' + sys.objects.name as [schm_obj]
FROM sys.objects
INNER JOIN sys.schemas
ON sys.objects.schema_id = sys.schemas.schema_id
AND sys.schemas.name IN ('dbo')
Generic query to be executed on each record:
SELECT DISTINCT referenced_schema_name + '.' + referenced_entity_name
FROM sys.dm_sql_referenced_entities(@schm_obj,'OBJECT')
The parameter @schm_obj is to be replaced by a single field value returned in each row of the first query; eventually, I would like to union all results. Any advice would be greatly appreciated.

You need dynamic SQL to do this. I am confused, though, as you say "procedure" but then show a table function. Table functions (unless something changed in 2012) cannot execute dynamic SQL. If you basically want a programmable object you can use to get data for a schema or object name that YOU SUPPLY, I would use a dynamic procedure that labels what you asked for and then returns its value. You could then insert the results into a table variable and iterate or insert to your heart's content.
create proc dbo.Dynaminizer
(
@ObjName nvarchar(64)
)
as
BEGIN
declare @SQL nvarchar(1024)
Select @SQL = 'Select ''' + @ObjName + ''' as Name, ' + @ObjName + ' from sys.tables'
EXECUTE sp_executesql @SQL
END
GO
declare @Temp table ( name varchar(32), value varchar(128))
insert into @Temp
exec Dynaminizer 'name'
insert into @Temp
exec Dynaminizer 'object_id'
select *
from @Temp
UPDATE LATER THAT DAY....
If you just want to take values from one table or set and then feed them to a function that takes a schema-and-name combo, there is another method for that. I would use a CTE to build a custom column and then do a CROSS APPLY to run the function against the dataset, once for each row in the set. This evaluates your function as if it were executed for each value and then posts the results. There is no need to union anything unless you have something more specific in mind. You can take it even further: if you want the dependencies inline (as opposed to separate rows), relate the dataset back to itself and use an XML trick to cast the dependencies out into a comma-separated list.
Example usage of both:
-- multi row example to show referencing relationships
with procs as
(
Select
schema_name(schema_id) + '.' + name as Name
from sys.procedures p
)
select
p.Name
, referenced_schema_name + '.' + referenced_entity_name as Ref
from procs p
cross apply sys.dm_sql_referenced_entities(p.Name,'OBJECT')
;
-- take it a step further and put relationships inside a single column
with procs as
(
Select
schema_name(schema_id) + '.' + name as Name
from sys.procedures p
)
, setup as
(
select
p.Name
, referenced_schema_name + '.' + referenced_entity_name as Ref
from procs p
cross apply sys.dm_sql_referenced_entities(p.Name,'OBJECT')
)
Select distinct
Name
, stuff(
(
select
', ' + Ref
from setup x
where x.Name = m.Name
for xml path('')
)
, 1, 2, '') as Dependencies
from setup m

Related

Is there a way to dynamically create tables without knowing how many columns the table will have beforehand?

The following query uses a pivot to turn the values in field [expireDate Year-Month] into column headings. Because the number of year-months regularly increases and is not fixed, this is done dynamically. Is there a way to also dynamically create a table from the output without knowing how many columns the table will have beforehand?
DECLARE @SQLQuery AS NVARCHAR(MAX)
DECLARE @PivotColumns AS NVARCHAR(MAX)
--Get unique values of pivot column
SELECT @PivotColumns = COALESCE(@PivotColumns + ',','') + QUOTENAME([expireDate Year-Month])
FROM (SELECT DISTINCT [expireDate Year-Month] FROM REPORTING_DATA.tableau.vw_vehicleInspDetailsHistMonthlyFinal) AS PivotExample
SELECT @PivotColumns
--Create the dynamic query with all the values for
--pivot column at runtime
SET @SQLQuery =
N'SELECT DISTINCT Vehicle#, ' + @PivotColumns + '
FROM REPORTING_DATA.tableau.vw_vehicleInspDetailsHistMonthlyFinal
PIVOT( MAX(inspectionResult)
FOR [expireDate Year-Month] IN (' + @PivotColumns + '))
AS P
ORDER BY Vehicle# '
SELECT @SQLQuery
--Execute dynamic query
EXEC sp_executesql @SQLQuery
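One way to materialize the pivoted output without declaring the columns up front is to put a SELECT ... INTO inside the dynamic statement, so the new table is created with whatever columns the pivot produces at runtime. This is a sketch, not a tested answer; the target table name dbo.PivotResult is an assumption, and the view name is taken from the question above:

```sql
-- Sketch: SELECT ... INTO infers the column list at runtime.
-- dbo.PivotResult is a hypothetical target table name.
SET @SQLQuery =
N'SELECT DISTINCT Vehicle#, ' + @PivotColumns + '
INTO dbo.PivotResult  -- table is created with whatever columns the pivot yields
FROM REPORTING_DATA.tableau.vw_vehicleInspDetailsHistMonthlyFinal
PIVOT( MAX(inspectionResult)
FOR [expireDate Year-Month] IN (' + @PivotColumns + '))
AS P'
EXEC sp_executesql @SQLQuery
```

The table must not already exist, so a `DROP TABLE IF EXISTS` (or an `OBJECT_ID` check on older versions) usually precedes this.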

Removing all the Alphabets from a string using a single SQL Query [duplicate]

I'm currently doing a data conversion project and need to strip all alphabetical characters from a string. Unfortunately I can't create or use a function as we don't own the source machine making the methods I've found from searching for previous posts unusable.
What would be the best way to do this in a select statement? Speed isn't too much of an issue as this will only be running over 30,000 records or so and is a once off statement.
You can do this in a single statement. You're not really building a statement with 200+ REPLACEs, are you?!
update tbl
set S = U.clean
from tbl
cross apply
(
select Substring(tbl.S,v.number,1)
-- this table will cater for strings up to length 2047
from master..spt_values v
where v.type='P' and v.number between 1 and len(tbl.S)
and Substring(tbl.S,v.number,1) like '[0-9]'
order by v.number
for xml path ('')
) U(clean)
Working SQL Fiddle showing this query with sample data
Replicated below for posterity:
create table tbl (ID int identity, S varchar(500))
insert tbl select 'asdlfj;390312hr9fasd9uhf012 3or h239ur ' + char(13) + 'asdfasf'
insert tbl select '123'
insert tbl select ''
insert tbl select null
insert tbl select '123 a 124'
Results
ID S
1 390312990123239
2 123
3 (null)
4 (null)
5 123124
A recursive CTE comes to the rescue here.
;WITH CTE AS
(
SELECT
[ProductNumber] AS OrigProductNumber
,CAST([ProductNumber] AS VARCHAR(100)) AS [ProductNumber]
FROM [AdventureWorks].[Production].[Product]
UNION ALL
SELECT OrigProductNumber
,CAST(STUFF([ProductNumber], PATINDEX('%[^0-9]%', [ProductNumber]), 1, '') AS VARCHAR(100) ) AS [ProductNumber]
FROM CTE WHERE PATINDEX('%[^0-9]%', [ProductNumber]) > 0
)
SELECT * FROM CTE
WHERE PATINDEX('%[^0-9]%', [ProductNumber]) = 0
OPTION (MAXRECURSION 0)
output:
OrigProductNumber ProductNumber
WB-H098 098
VE-C304-S 304
VE-C304-M 304
VE-C304-L 304
TT-T092 092
RichardTheKiwi's script wrapped in a function, for use in SELECTs without CROSS APPLY.
I also allow the dot (and first convert commas to dots) because in my case I use it on double and money values stored in a varchar field:
CREATE FUNCTION dbo.ReplaceNonNumericChars (@string VARCHAR(5000))
RETURNS VARCHAR(1000)
AS
BEGIN
SET @string = REPLACE(@string, ',', '.')
SET @string = (SELECT SUBSTRING(@string, v.number, 1)
FROM master..spt_values v
WHERE v.type = 'P'
AND v.number BETWEEN 1 AND LEN(@string)
AND (SUBSTRING(@string, v.number, 1) LIKE '[0-9]'
OR SUBSTRING(@string, v.number, 1) LIKE '[.]')
ORDER BY v.number
FOR
XML PATH('')
)
RETURN @string
END
GO
Thanks RichardTheKiwi +1
Well if you really can't use a function, I suppose you could do something like this:
SELECT REPLACE(REPLACE(REPLACE(LOWER(col),'a',''),'b',''),'c','')
FROM dbo.table...
Obviously it would be a lot uglier than that, since I only handled the first three letters, but it should give the idea.
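If the chain really has to cover all 26 letters, it can at least be generated rather than typed out. A rough sketch, assuming the column is called `col` and the table `dbo.tbl` (both placeholders):

```sql
-- Build REPLACE(REPLACE(LOWER(col),'a',''),'b','')... for a-z, then run it.
-- Column and table names here are placeholders.
DECLARE @expr nvarchar(max) = N'LOWER(col)';
DECLARE @i int = 0;
WHILE @i < 26
BEGIN
    SET @expr = N'REPLACE(' + @expr + N', ''' + CHAR(97 + @i) + N''', '''')';
    SET @i = @i + 1;
END
EXEC (N'SELECT ' + @expr + N' FROM dbo.tbl');
```

This sidesteps the "no functions allowed" restriction since it is just a plain batch, at the cost of readability.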

SQL 2008R2: What's the fastest way to do a 'INSERT INTO target <all columns except two> SELECT <all columns> FROM source'

I'm doing a SQL-to-SQL conversion, and have 50+ tables to convert from old (source) to new (target) database. I think the answer is 'there's no really fast way to do this', but I'll ask the question anyways.
Each 'group' has..
Two Source tables: Anywhere from 10 to 700 columns.
These two tables have the same schema, although some columns have different collations.
Target tables: Number of columns = Columns in source tables + 2, as I added start_dt and end_dt.
I can't do a 'INSERT INTO Target SELECT * FROM Source' because of the two extra columns.
Question: What's the fastest way to do an 'INSERT INTO target <all columns except two> SELECT <all columns> FROM source'?
Using a view in the designer I don't see a way to select all and have it show all columns, and then just remove the two I don't need. * displays as * instead of all column names.
I'll entertain third party apps on this one.
Thanks.
Jim
In SQL Server Management studio, expand your table. You will see a couple nodes appear below the table name. Columns, Keys, Constraints, etc... Drag the "Column" node in to a query window and all of the columns will be added to the query window. Tack on your 2 extra columns and execute it.
This is still somewhat manual, but it will save you a ton of typing.
Always try to stay away from SELECT *. In SSMS right click on the table/view -> script as -> select. (The wording may not be exact. I am working from memory.) Then you don't have to type out all the fields.
If you mean the speed of getting the data over, then on the destination, if you can:
- turn off all triggers
- drop indexes
- change the recovery model of the DB to bulk-logged
- do the inserts in batches
What about using SSIS?
How many records are you talking about?
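The checklist above might be sketched roughly like this. Table, database, and column names are placeholders, and the batching predicate assumes some key you can anchor on; treat it as an outline rather than a drop-in script:

```sql
-- Rough sketch of the checklist above; names are placeholders.
ALTER TABLE dbo.Target DISABLE TRIGGER ALL;
-- Disable only NONCLUSTERED indexes: disabling the clustered index
-- makes the table unreadable/unwritable.
ALTER INDEX IX_Target_SomeIndex ON dbo.Target DISABLE;

ALTER DATABASE TargetDb SET RECOVERY BULK_LOGGED;

-- Insert in batches to keep each transaction and the log manageable.
DECLARE @batch int = 50000, @rows int = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.Target (col1, col2, start_dt, end_dt)
    SELECT TOP (@batch) s.col1, s.col2, GETDATE(), NULL
    FROM dbo.Source s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.col1 = s.col1);
    SET @rows = @@ROWCOUNT;
END

ALTER INDEX IX_Target_SomeIndex ON dbo.Target REBUILD;
ALTER TABLE dbo.Target ENABLE TRIGGER ALL;
ALTER DATABASE TargetDb SET RECOVERY FULL;
```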
Question was answered here.
DECLARE @source_table sysname
DECLARE @target_table sysname
DECLARE @col_list varchar(max)
DECLARE @sql varchar(max)
--naturally replaced by cursor table loop in final code
SET @source_table = 'dbs'
SET @target_table = 'dbs'
SELECT @col_list = STUFF((
SELECT ', ' + src.name
FROM sys.columns src
INNER JOIN sys.columns trg ON
trg.name = src.name
WHERE
src.object_id = OBJECT_ID(@source_table) AND
trg.object_id = OBJECT_ID(@target_table)
ORDER BY src.column_id
FOR XML PATH('')
), 1, 1, '')
SET @sql = 'INSERT INTO ' + @target_table + ' ( ' + @col_list + ' ) SELECT ' + @col_list + ' FROM ' + @source_table + ' '
EXEC (@sql)
SET @sql = 'INSERT INTO ' + @target_table + ' ( ' + @col_list + ' ) SELECT ' + @col_list + ' FROM ' + @source_table + '_History '
EXEC (@sql)

Passing In Array Like Variables T-SQL

Can someone help me turn this SQL into a stored proc?
select * from voilets
where cfrw = 'F16'
UNION
(select *
from voilets
where cfrw in ('B05','B12','R02','F01','F16','F17','U11','U03','U04','U21'))
ORDER BY DSCA
where 'F16' is a variable called @default,
and
'B05','B12','R02','F01','F16','F17','U11','U03','U04','U21' is an array called @voilets.
This is not working for me:
@sCarrierSelect varchar(max)
AS
BEGIN
declare @SQL nvarchar(4000)
set @SQL = '
select * from voilets
where t_cfrw = ' + @default + '
UNION
(select *
from carriers
where t_cfrw in (' + @voilets + '))
ORDER BY T_DSCA
'
print @SQL
exec sp_executesql @SQL
END
If your SQL Server is 2008 or later, then:
USE tempdb;
GO
CREATE TABLE voilets
(cfrw char(3), DSCA int)
go
INSERT INTO voilets VALUES ('R02', 2)
INSERT INTO voilets VALUES ('F16', 5)
INSERT INTO voilets VALUES ('F16', 4)
INSERT INTO voilets VALUES ('X77', 9)
go
CREATE TYPE myType AS TABLE (id CHAR(3));
GO
CREATE PROCEDURE usp_myProc
@default char(3),
@voiletsTVP myType READONLY
AS
select * from voilets
where cfrw = @default
UNION
(select *
from voilets
where cfrw in (SELECT * FROM @voiletsTVP))
ORDER BY DSCA
GO
-------------------------
DECLARE @default char(3)
SET @default='F16'
DECLARE @voiletsTVP AS myType;
INSERT INTO @voiletsTVP SELECT * FROM (VALUES ('B05'),('B12'),('R02'),('F01'),('F16'),('F17'),('U11'),('U03'),('U04'),('U21')) q(x)
EXEC usp_myProc @default,@voiletsTVP
GO
Result-set:
cfrw DSCA
R02 2
F16 4
F16 5
Performing that safely in a sproc is actually quite tricky; there are a few common approaches:
use a udf to split a string on a token - google for "split udf" (there will be many), and join on the results
use a table valued parameter
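A minimal split UDF of the kind the first bullet describes might look like the following. This is one common pattern for illustration, not a tuned implementation (on SQL Server 2016+ the built-in STRING_SPLIT does the same job); the function name is hypothetical:

```sql
-- One common split-UDF pattern; dbo.SplitString is a hypothetical name.
CREATE FUNCTION dbo.SplitString (@list varchar(max), @delim char(1))
RETURNS @out TABLE (item varchar(100))
AS
BEGIN
    DECLARE @pos int = CHARINDEX(@delim, @list);
    WHILE @pos > 0
    BEGIN
        INSERT INTO @out VALUES (LEFT(@list, @pos - 1));   -- token before delimiter
        SET @list = SUBSTRING(@list, @pos + 1, LEN(@list)); -- drop consumed token
        SET @pos = CHARINDEX(@delim, @list);
    END
    IF LEN(@list) > 0 INSERT INTO @out VALUES (@list);      -- trailing token
    RETURN;
END
GO
-- usage: ... where cfrw in (select item from dbo.SplitString(@voilets, ','))
```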
Personally, I rarely use sprocs these days; I'd use dapper:
List<string> foo = ...
var items = conn.Query<SomeType>(
"select * from [table] where colName in #foo", new { foo }).ToList();
Most LINQ providers and ORMs will have options here too, involving Contains etc.
You can learn about Passing Arrays in SQL Parameters using XML Data Type in SQL Server 2005
See sample:
/* for this xml:
<list>
<item>42</item>
<item>73</item>
<item>2007</item>
</list>
*/
CREATE FUNCTION [lm].[SplitList]
(
@list AS XML
)
RETURNS TABLE
AS
RETURN
(
SELECT tempTable.item.value('.', 'VARCHAR(MAX)') AS Item
FROM @list.nodes('list/item') tempTable(item)
);
Why not use a SQL CLR function to split your values and pass those into your procedure? Here is a very good and fast split-string implementation: CLR Split String. If you can't use SQL CLR, then look online for 'sql split string'. Whichever you use, put the result of that work into a temporary table and join it to your main table.

Convert multiple rows into one with comma as separator [duplicate]

This question already has answers here:
How to concatenate text from multiple rows into a single text string in SQL Server
(47 answers)
Closed 7 years ago.
If I issue SELECT username FROM Users I get this result:
username
--------
Paul
John
Mary
but what I really need is one row with all the values separated by comma, like this:
Paul, John, Mary
How do I do this?
select stuff((
select ',' + u.username
from users u
order by u.username
for xml path('')
),1,1,'') as userlist
had a typo before, the above works
This should work for you. Tested all the way back to SQL 2000.
create table #user (username varchar(25))
insert into #user (username) values ('Paul')
insert into #user (username) values ('John')
insert into #user (username) values ('Mary')
declare @tmp varchar(250)
SET @tmp = ''
select @tmp = @tmp + username + ', ' from #user
select SUBSTRING(@tmp, 0, LEN(@tmp))
A good review of several approaches:
http://blogs.msmvps.com/robfarley/2007/04/07/coalesce-is-not-the-answer-to-string-concatentation-in-t-sql/
Article copy -
Coalesce is not the answer to string concatentation in T-SQL I've seen many posts over the years about using the COALESCE function to get string concatenation working in T-SQL. This is one of the examples here (borrowed from Readifarian Marc Ridey).
DECLARE @categories varchar(200)
SET @categories = NULL
SELECT @categories = COALESCE(@categories + ',','') + Name
FROM Production.ProductCategory
SELECT @categories
This query can be quite effective, but care needs to be taken, and the use of COALESCE should be properly understood. COALESCE is the version of ISNULL which can take more than two parameters. It returns the first thing in the list of parameters which is not null. So really it has nothing to do with concatenation, and the following piece of code is exactly the same - without using COALESCE:
DECLARE @categories varchar(200)
SET @categories = ''
SELECT @categories = @categories + ',' + Name
FROM Production.ProductCategory
SELECT @categories
But the unordered nature of databases makes this unreliable. The whole reason why T-SQL doesn't (yet) have a concatenate function is that this is an aggregate for which the order of elements is important. Using this variable-assignment method of string concatenation, you may actually find that the answer that gets returned doesn't have all the values in it, particularly if you want the substrings put in a particular order. Consider the following, which on my machine only returns ',Accessories', when I wanted it to return ',Bikes,Clothing,Components,Accessories':
DECLARE @categories varchar(200)
SET @categories = NULL
SELECT @categories = COALESCE(@categories + ',','') + Name
FROM Production.ProductCategory
ORDER BY LEN(Name)
SELECT @categories
Far better is to use a method which does take order into consideration, and which has been included in SQL2005 specifically for the purpose of string concatenation - FOR XML PATH('')
SELECT ',' + Name
FROM Production.ProductCategory
ORDER BY LEN(Name)
FOR XML PATH('')
In the post I made recently comparing GROUP BY and DISTINCT when using subqueries, I demonstrated the use of FOR XML PATH(''). Have a look at this and you'll see how it works in a subquery. The 'STUFF' function is only there to remove the leading comma.
USE tempdb;
GO
CREATE TABLE t1 (id INT, NAME VARCHAR(MAX));
INSERT t1 values (1,'Jamie');
INSERT t1 values (1,'Joe');
INSERT t1 values (1,'John');
INSERT t1 values (2,'Sai');
INSERT t1 values (2,'Sam');
GO
select
id,
stuff((
select ',' + t.[name]
from t1 t
where t.id = t1.id
order by t.[name]
for xml path('')
),1,1,'') as name_csv
from t1
group by id
;
FOR XML PATH is one of the only situations in which you can use ORDER BY in a subquery. The other is TOP. And when you use an unnamed column and FOR XML PATH(''), you will get a straight concatenation, with no XML tags. This does mean that the strings will be HTML Encoded, so if you're concatenating strings which may have the < character (etc), then you should maybe fix that up afterwards, but either way, this is still the best way of concatenating strings in SQL Server 2005.
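The encoding issue mentioned above has a commonly used fix: add the TYPE directive and extract the value from the resulting xml, which round-trips special characters like < and & unescaped. A sketch against the t1 table from the example:

```sql
-- TYPE + .value() avoids HTML-encoding of < > & in the concatenated string.
select
    id,
    stuff((
        select ',' + t.[name]
        from t1 t
        where t.id = t1.id
        order by t.[name]
        for xml path(''), type
    ).value('.', 'varchar(max)'), 1, 1, '') as name_csv
from t1
group by id;
```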
Building on mwigdahl's answer: if you also need grouping, here is how to get output that looks like
group, csv
'group1', 'paul, john'
'group2', 'mary'
--drop table #user
create table #user (groupName varchar(25), username varchar(25))
insert into #user (groupname, username) values ('apostles', 'Paul')
insert into #user (groupname, username) values ('apostles', 'John')
insert into #user (groupname, username) values ('family','Mary')
select
g1.groupname
, stuff((
select ', ' + g.username
from #user g
where g.groupName = g1.groupname
order by g.username
for xml path('')
),1,2,'') as name_csv
from #user g1
group by g1.groupname
You can use this query to do the above task:
DECLARE @test NVARCHAR(max)
SELECT @test = COALESCE(@test + ',', '') + field2 FROM #test
SELECT field2 = @test
For detail and step by step explanation visit the following link
http://oops-solution.blogspot.com/2011/11/sql-server-convert-table-column-data.html
DECLARE @EmployeeList varchar(100)
SELECT @EmployeeList = COALESCE(@EmployeeList + ', ', '') +
CAST(Emp_UniqueID AS varchar(5))
FROM SalesCallsEmployees
WHERE SalCal_UniqueID = 1
SELECT @EmployeeList
source:
http://www.sqlteam.com/article/using-coalesce-to-build-comma-delimited-string
In SQLite this is simpler. I think there are similar implementations for MySQL, MS SQL and Oracle.
CREATE TABLE Beatles (id integer, name string);
INSERT INTO Beatles VALUES (1, 'Paul');
INSERT INTO Beatles VALUES (2, 'John');
INSERT INTO Beatles VALUES (3, 'Ringo');
INSERT INTO Beatles VALUES (4, 'George');
SELECT GROUP_CONCAT(name, ',') FROM Beatles;
You can use STUFF() to convert rows into comma-separated values:
select
EmployeeID,
stuff((
SELECT ',' + FPProjectMaster.GroupName
FROM FPProjectInfo AS t INNER JOIN
FPProjectMaster ON t.ProjectID = FPProjectMaster.ProjectID
WHERE (t.EmployeeID = FPProjectInfo.EmployeeID)
And t.STatusID = 1
ORDER BY t.ProjectID
for xml path('')
),1,1,'') as name_csv
from FPProjectInfo
group by EmployeeID;
Thanks @AlexKuznetsov for the reference to get this answer.
A clean and flexible solution in MS SQL Server 2005/2008 is to create a CLR aggregate function.
You'll find quite a few articles (with code) on Google.
It looks like this article walks you through the whole process using C#.
If you're executing this through PHP, what about this?
$hQuery = mysql_query("SELECT * FROM users");
$hOut = "";
while($hRow = mysql_fetch_array($hQuery)) {
$hOut .= $hRow['username'] . ", ";
}
$hOut = substr($hOut, 0, -2); // drop the trailing ", " (2 characters)
echo $hOut;