How to add a column in T-SQL after a specific column? - tsql
I have a table:
MyTable
ID
FieldA
FieldB
I want to alter the table and add a column so it looks like:
MyTable
ID
NewField
FieldA
FieldB
In MySQL I would do a:
ALTER TABLE MyTable ADD COLUMN NewField int NULL AFTER ID;
One line, nice, simple, works great. How do I do this in Microsoft's world?
Unfortunately, you can't.
If you really want the columns in that order, you'll have to create a new table with the columns in that order and copy the data over (or rename columns, etc.). There is no easy way.
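For reference, the supported T-SQL form of the MySQL statement above simply appends the column at the end of the table; there is no AFTER clause:

```sql
-- T-SQL has no AFTER clause; a new column is always appended last
ALTER TABLE MyTable ADD NewField int NULL;
```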
solution:
This will work for tables that have no dependencies which would trigger cascading events when the table is changed. First make sure you can drop the table you want to restructure without any disastrous repercussions. Take note of all the dependencies and column constraints associated with your table (triggers, indexes, etc.). You may need to put them back when you are done.
STEP 1: create the temp table to hold all the records from the table you want to restructure. Do not forget to include the new column.
CREATE TABLE #tmp_myTable
( [new_column] [int] NOT NULL, -- new column inserted here
[idx] [bigint] NOT NULL,
[name] [nvarchar](30) NOT NULL,
[active] [bit] NOT NULL
)
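The answer verifies the copy in STEP 2 but never shows the copy itself. Assuming the original table is named myTable with columns idx, name, and active, the missing copy step would look like this (the new column is NOT NULL in the temp table, so a placeholder value is supplied):

```sql
-- Copy existing rows into the temp table; the new column has no
-- source value yet, so supply a placeholder (here: 0)
INSERT INTO #tmp_myTable ([new_column], [idx], [name], [active])
SELECT 0, [idx], [name], [active]
FROM myTable;
```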
STEP 2: Make sure all records have been copied over and that the column structure looks the way you want.
SELECT TOP 10 * FROM #tmp_myTable ORDER BY 1 DESC
-- you can do COUNT(*) or anything to make sure you copied all the records
STEP 3: DROP the original table:
DROP TABLE myTable
If you are worried that something could go wrong, just rename the original table instead of dropping it. That way it can always be restored.
EXEC sp_rename myTable, myTable_Copy
STEP 4: Recreate the table myTable the way you want (it should match the #tmp_myTable table structure)
CREATE TABLE myTable
( [new_column] [int] NOT NULL,
[idx] [bigint] NOT NULL,
[name] [nvarchar](30) NOT NULL,
[active] [bit] NOT NULL
)
-- do not forget any constraints you may need
STEP 5: Copy all the records from the temp table #tmp_myTable into the new (improved) table myTable.
INSERT INTO myTable ([new_column],[idx],[name],[active])
SELECT [new_column],[idx],[name],[active]
FROM #tmp_myTable
STEP 6: Check that all the data is back in your new, improved table myTable. If so, clean up after yourself: DROP the temp table #tmp_myTable, and the myTable_Copy table if you chose to rename rather than drop the original.
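The cleanup in STEP 6, as statements (myTable_Copy exists only if you took the rename route in STEP 3):

```sql
DROP TABLE #tmp_myTable;
DROP TABLE myTable_Copy;  -- only if you renamed instead of dropping
```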
You should be able to do this if you create the column using the GUI in Management Studio. I believe Management Studio actually recreates the entire table behind the scenes, which is why this appears to work.
As others have mentioned, the order of columns in a table doesn't matter, and if it does there is something wrong with your code.
In SQL Server Management Studio, open your table in the designer, add the column where you want it, and then (instead of saving the change) generate the change script. You can see how it's done in SQL.
In short, what others have said is right. SQL Management studio pulls all your data into a temp table, drops the table, recreates it with columns in the right order, and puts the temp table data back in there. There is no simple syntax for adding a column in a specific position.
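The script SSMS generates typically follows roughly this pattern. This is a simplified sketch, not the literal SSMS output (which adds SET options, constraint handling, and IDENTITY_INSERT where needed); the int types for FieldA and FieldB are placeholders:

```sql
BEGIN TRANSACTION;

-- New table with the columns in the desired order
CREATE TABLE dbo.Tmp_MyTable
    (ID int NOT NULL, NewField int NULL, FieldA int NULL, FieldB int NULL);

-- Copy the data while holding an exclusive lock on the source
IF EXISTS (SELECT * FROM dbo.MyTable)
    EXEC('INSERT INTO dbo.Tmp_MyTable (ID, FieldA, FieldB)
          SELECT ID, FieldA, FieldB
          FROM dbo.MyTable WITH (HOLDLOCK TABLOCKX)');

DROP TABLE dbo.MyTable;
EXECUTE sp_rename N'dbo.Tmp_MyTable', N'MyTable', 'OBJECT';

COMMIT;
```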
/*
Script to change the column order of a table
Note: this will create a new table to replace the original table.
WARNING: the original table will be dropped.
It does not copy triggers, indexes, or other table properties - just the data.
*/
Generate a new table with the columns in the order that you require
Select Column2, Column1, Column3 Into NewTable from OldTable
Delete the original table
Drop Table OldTable;
Rename the new table
EXEC sp_rename 'NewTable', 'OldTable';
In Microsoft SQL Server Management Studio (the admin tool for MSSQL), just go into "Design" on a table and drag the column to its new position. Not command line, but you can do it.
This is absolutely possible, although you shouldn't do it unless you know what you are dealing with.
It took me about two days to figure out.
Here is a stored procedure where I pass in:
---database name
(schema name is "_" for readability)
---table name
---column name
---column data type
(the added column is always NULL; otherwise you won't be able to insert)
---the position of the new column
Since I'm working with tables from the SAM toolkit (some of which have more than 80 columns), a typical variable cannot hold the whole query. That forces the use of an external file. Be careful where you store that file and who has access to it at the NTFS and network level.
Cheers!
USE [master]
GO
/****** Object: StoredProcedure [SP_Set].[TrasferDataAtColumnLevel] Script Date: 8/27/2014 2:59:30 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [SP_Set].[TrasferDataAtColumnLevel]
(
@database varchar(100),
@table varchar(100),
@column varchar(100),
@position int,
@datatype varchar(20)
)
AS
BEGIN
set nocount on
exec ('
declare @oldC varchar(200), @oldCDataType varchar(200), @oldCLen int, @oldCPos int
create table Test ( dummy int)
declare @columns varchar(max) = ''''
declare @columnVars varchar(max) = ''''
declare @columnsDecl varchar(max) = ''''
declare @printVars varchar(max) = ''''
DECLARE MY_CURSOR CURSOR LOCAL STATIC READ_ONLY FORWARD_ONLY FOR
select column_name, data_type, character_maximum_length, ORDINAL_POSITION from ' + @database + '.INFORMATION_SCHEMA.COLUMNS where table_name = ''' + @table + '''
OPEN MY_CURSOR FETCH NEXT FROM MY_CURSOR INTO @oldC, @oldCDataType, @oldCLen, @oldCPos WHILE @@FETCH_STATUS = 0 BEGIN
if(@oldCPos = ' + convert(varchar(10), @position) + ')
begin
exec(''alter table Test add [' + @column + '] ' + @datatype + ' null'')
end
if(@oldCDataType != ''timestamp'')
begin
set @columns += @oldC + '' , ''
set @columnVars += ''@'' + @oldC + '' , ''
if(@oldCLen is null)
begin
if(@oldCDataType != ''uniqueidentifier'')
begin
set @printVars += '' print convert('' + @oldCDataType + '',@'' + @oldC + '')''
set @columnsDecl += ''@'' + @oldC + '' '' + @oldCDataType + '', ''
exec(''alter table Test add ['' + @oldC + ''] '' + @oldCDataType + '' null'')
end
else
begin
set @printVars += '' print convert(varchar(50),@'' + @oldC + '')''
set @columnsDecl += ''@'' + @oldC + '' '' + @oldCDataType + '', ''
exec(''alter table Test add ['' + @oldC + ''] '' + @oldCDataType + '' null'')
end
end
else
begin
if(@oldCLen < 0)
begin
set @oldCLen = 4000
end
set @printVars += '' print @'' + @oldC
set @columnsDecl += ''@'' + @oldC + '' '' + @oldCDataType + ''('' + convert(character,@oldCLen) + '') , ''
exec(''alter table Test add ['' + @oldC + ''] '' + @oldCDataType + ''('' + convert(varchar(10),@oldCLen) + '') null'')
end
end
if exists (select column_name from INFORMATION_SCHEMA.COLUMNS where table_name = ''Test'' and column_name = ''dummy'')
begin
alter table Test drop column dummy
end
FETCH NEXT FROM MY_CURSOR INTO @oldC, @oldCDataType, @oldCLen, @oldCPos END CLOSE MY_CURSOR DEALLOCATE MY_CURSOR
set @columns = reverse(substring(reverse(@columns), charindex('','',reverse(@columns)) +1, len(@columns)))
set @columnVars = reverse(substring(reverse(@columnVars), charindex('','',reverse(@columnVars)) +1, len(@columnVars)))
set @columnsDecl = reverse(substring(reverse(@columnsDecl), charindex('','',reverse(@columnsDecl)) +1, len(@columnsDecl)))
set @columns = replace(replace(REPLACE(@columns, '' '', ''''), char(9) + char(9),'' ''), char(9), '''')
set @columnVars = replace(replace(REPLACE(@columnVars, '' '', ''''), char(9) + char(9),'' ''), char(9), '''')
set @columnsDecl = replace(replace(REPLACE(@columnsDecl, '' '', ''''), char(9) + char(9),'' ''),char(9), '''')
set @printVars = REVERSE(substring(reverse(@printVars), charindex(''+'',reverse(@printVars))+1, len(@printVars)))
create table query (id int identity(1,1), string varchar(max))
insert into query values (''declare '' + @columnsDecl + ''
DECLARE MY_CURSOR CURSOR LOCAL STATIC READ_ONLY FORWARD_ONLY FOR '')
insert into query values (''select '' + @columns + '' from ' + @database + '._.' + @table + ''')
insert into query values (''OPEN MY_CURSOR FETCH NEXT FROM MY_CURSOR INTO '' + @columnVars + '' WHILE @@FETCH_STATUS = 0 BEGIN '')
insert into query values (@printVars )
insert into query values ( '' insert into Test ('')
insert into query values (@columns)
insert into query values ( '') values ( '' + @columnVars + '')'')
insert into query values (''FETCH NEXT FROM MY_CURSOR INTO '' + @columnVars + '' END CLOSE MY_CURSOR DEALLOCATE MY_CURSOR'')
declare @path varchar(100) = ''C:\query.sql''
declare @query varchar(500) = ''bcp "select string from query order by id" queryout '' + @path + '' -t, -c -S '' + @@servername + '' -T''
exec master..xp_cmdshell @query
set @query = ''sqlcmd -S '' + @@servername + '' -i '' + @path
EXEC xp_cmdshell @query
set @query = ''del '' + @path
exec xp_cmdshell @query
drop table ' + @database + '._.' + @table + '
select * into ' + @database + '._.' + @table + ' from Test
drop table query
drop table Test ')
END
Even though the question is old, a more accurate answer about Management Studio is warranted.
You can create the column manually or with Management Studio. But Management Studio will need to recreate the table, and that will time out if the table already holds too much data; avoid it unless the table is light.
To change the order of the columns you simply need to move them around in Management Studio. This should not require (exceptions most likely exist) Management Studio to recreate the table, since it most likely only changes the ordinal positions of the columns in the table definition.
I've done it this way on numerous occasions with tables I could not add columns to through the GUI because of the data in them: add the column first, then move the columns around with the Management Studio GUI and simply save.
You will go from an assured timeout to a few seconds of waiting.
If you are using the GUI to do this, you must first deselect the option "Prevent saving changes that require table re-creation" (Tools > Options > Designers), allowing the table to be dropped and re-created:
1. Create a script for a new table that includes the new column, e.g. [DBName].[dbo].[TableName]_NEW
2. COPY the old table's data to the new table: INSERT INTO newTable (col1, col2, ...) SELECT col1, col2, ... FROM oldTable
3. Check that the old and new records match
4. DROP the old table
5. Rename newTable to oldTable
6. Re-run whatever stored procedure populates the new column's value
-- 1. Create New Add new Column Table Script
CREATE TABLE newTable
( [new_column] [int] NOT NULL, -- new column inserted here
[idx] [bigint] NOT NULL,
[name] [nvarchar](30) NOT NULL,
[active] [bit] NOT NULL
)
-- 2. COPY old table data to new table:
INSERT INTO newTable ([new_column],[idx],[name],[active])
SELECT [new_column],[idx],[name],[active]
FROM oldTable
-- 3. Check records old and new are the same:
select sum(cnt) FROM (
SELECT 'table_1' AS table_name, COUNT(*) cnt FROM newTable
UNION
SELECT 'table_2' AS table_name, -COUNT(*) cnt FROM oldTable
) AS cnt_sum
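Row counts can match even when contents differ. A stricter check, sketched here under the assumption that the shared columns are idx, name, and active (new_column has no counterpart in oldTable), compares the actual rows:

```sql
-- Should return no rows if the copy is faithful; also run it
-- with the two SELECTs swapped to check the other direction
SELECT [idx], [name], [active] FROM newTable
EXCEPT
SELECT [idx], [name], [active] FROM oldTable;
```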
-- 4. DROP old table
DROP TABLE oldTable
-- 5. rename newtable to oldtable
USE [DB_NAME]
EXEC sp_rename 'newTable', 'oldTable'
You have to rebuild the table. Luckily, the order of the columns doesn't matter at all!
Watch as I magically reorder your columns:
SELECT ID, NewField, FieldA, FieldB FROM MyTable
Also this has been asked about a bazillion times before.
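If some consumer truly depends on a fixed column order, a view can present the columns in any order without rebuilding the table. A sketch (the view name is made up):

```sql
-- The base table keeps whatever physical column order it has;
-- the view fixes the presentation order for consumers
CREATE VIEW MyTableOrdered AS
SELECT ID, NewField, FieldA, FieldB
FROM MyTable;
```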
Related
pivot or reshape sql [duplicate]
I've been tasked with coming up with a means of translating the following data:

date       category  amount
1/1/2012   ABC       1000.00
2/1/2012   DEF       500.00
2/1/2012   GHI       800.00
2/10/2012  DEF       700.00
3/1/2012   ABC       1100.00

into the following:

date       ABC       DEF      GHI
1/1/2012   1000.00
2/1/2012             500.00
2/1/2012                      800.00
2/10/2012            700.00
3/1/2012   1100.00

The blank spots can be NULLs or blanks, either is fine, and the categories would need to be dynamic. Another possible caveat is that we'll be running the query in a limited capacity, which means temp tables are out. I've tried to research and have landed on PIVOT, but as I've never used that before I really don't understand it, despite my best efforts to figure it out. Can anyone point me in the right direction?
Dynamic SQL PIVOT:

create table temp
(
    date datetime,
    category varchar(3),
    amount money
)

insert into temp values ('1/1/2012', 'ABC', 1000.00)
insert into temp values ('2/1/2012', 'DEF', 500.00)
insert into temp values ('2/1/2012', 'GHI', 800.00)
insert into temp values ('2/10/2012', 'DEF', 700.00)
insert into temp values ('3/1/2012', 'ABC', 1100.00)

DECLARE @cols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX);

SET @cols = STUFF((SELECT distinct ',' + QUOTENAME(c.category)
                   FROM temp c
                   FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '')

set @query = 'SELECT date, ' + @cols + ' from
             (
                 select date, amount, category
                 from temp
             ) x
             pivot
             (
                 max(amount)
                 for category in (' + @cols + ')
             ) p '

execute(@query)

drop table temp

Results:

Date                     ABC      DEF     GHI
2012-01-01 00:00:00.000  1000.00  NULL    NULL
2012-02-01 00:00:00.000  NULL     500.00  800.00
2012-02-10 00:00:00.000  NULL     700.00  NULL
2012-03-01 00:00:00.000  1100.00  NULL    NULL
Dynamic SQL PIVOT, with a different approach for creating the columns string:

create table #temp
(
    date datetime,
    category varchar(3),
    amount money
)

insert into #temp values ('1/1/2012', 'ABC', 1000.00)
insert into #temp values ('2/1/2012', 'DEF', 500.00)
insert into #temp values ('2/1/2012', 'GHI', 800.00)
insert into #temp values ('2/10/2012', 'DEF', 700.00)
insert into #temp values ('3/1/2012', 'ABC', 1100.00)

DECLARE @cols AS NVARCHAR(MAX) = '';
DECLARE @query AS NVARCHAR(MAX) = '';

SELECT @cols = @cols + QUOTENAME(category) + ','
FROM (select distinct category from #temp) as tmp

select @cols = substring(@cols, 0, len(@cols)) --trim "," at end

set @query = 'SELECT * from
             (
                 select date, amount, category
                 from #temp
             ) src
             pivot
             (
                 max(amount)
                 for category in (' + @cols + ')
             ) piv'

execute(@query)

drop table #temp

Result:

date                     ABC      DEF     GHI
2012-01-01 00:00:00.000  1000.00  NULL    NULL
2012-02-01 00:00:00.000  NULL     500.00  800.00
2012-02-10 00:00:00.000  NULL     700.00  NULL
2012-03-01 00:00:00.000  1100.00  NULL    NULL
I know this question is older, but I was looking through the answers and thought that I might be able to expand on the "dynamic" portion of the problem and possibly help someone out.

First and foremost, I built this solution to solve a problem a couple of coworkers were having with inconsistent and large data sets needing to be pivoted quickly. This solution requires the creation of a stored procedure, so if that is out of the question for your needs please stop reading now.

The procedure takes in the key variables of a pivot statement to dynamically create pivot statements for varying tables, column names, and aggregates. The static column is used as the group by / identity column for the pivot (this can be stripped out of the code if not necessary, but is pretty common in pivot statements and was necessary to solve the original issue), the pivot column is where the resulting column names will be generated from, and the value column is what the aggregate will be applied to. The table parameter is the name of the table including the schema (schema.tablename); this portion of the code could use some love because it is not as clean as I would like it to be. It worked for me because my usage was not publicly facing and SQL injection was not a concern. The aggregate parameter will accept any standard SQL aggregate ('AVG', 'SUM', 'MAX', etc.). The code defaults to MAX as an aggregate; this is not necessary, but the audience this was originally built for did not understand pivots and typically used MAX.

Let's start with the code to create the stored procedure. This code should work in all versions of SQL Server 2005 and above, though I have not tested it on 2005 or 2016:

create PROCEDURE [dbo].[USP_DYNAMIC_PIVOT]
(
    @STATIC_COLUMN VARCHAR(255),
    @PIVOT_COLUMN VARCHAR(255),
    @VALUE_COLUMN VARCHAR(255),
    @TABLE VARCHAR(255),
    @AGGREGATE VARCHAR(20) = null
)
AS
BEGIN
    SET NOCOUNT ON;
    declare @AVAIABLE_TO_PIVOT NVARCHAR(MAX),
            @SQLSTRING NVARCHAR(MAX),
            @PIVOT_SQL_STRING NVARCHAR(MAX),
            @TEMPVARCOLUMNS NVARCHAR(MAX),
            @TABLESQL NVARCHAR(MAX)

    if isnull(@AGGREGATE,'') = ''
    begin
        SET @AGGREGATE = 'MAX'
    end

    SET @PIVOT_SQL_STRING = 'SELECT top 1 STUFF((SELECT distinct '', '' + CAST(''[''+CONVERT(VARCHAR,' + @PIVOT_COLUMN + ')+'']'' AS VARCHAR(50)) [text()]
        FROM ' + @TABLE + '
        WHERE ISNULL(' + @PIVOT_COLUMN + ','''') <> ''''
        FOR XML PATH(''''), TYPE).value(''.'',''NVARCHAR(MAX)''),1,2,'' '') as PIVOT_VALUES
        from ' + @TABLE + ' ma
        ORDER BY ' + @PIVOT_COLUMN + ''

    declare @TAB AS TABLE(COL NVARCHAR(MAX))

    INSERT INTO @TAB EXEC SP_EXECUTESQL @PIVOT_SQL_STRING, @AVAIABLE_TO_PIVOT

    SET @AVAIABLE_TO_PIVOT = (SELECT * FROM @TAB)

    SET @TEMPVARCOLUMNS = (SELECT replace(@AVAIABLE_TO_PIVOT,',',' nvarchar(255) null,') + ' nvarchar(255) null')

    SET @SQLSTRING = 'DECLARE @RETURN_TABLE TABLE (' + @STATIC_COLUMN + ' NVARCHAR(255) NULL,' + @TEMPVARCOLUMNS + ')
        INSERT INTO @RETURN_TABLE(' + @STATIC_COLUMN + ',' + @AVAIABLE_TO_PIVOT + ')
        select * from (
            SELECT ' + @STATIC_COLUMN + ', ' + @PIVOT_COLUMN + ', ' + @VALUE_COLUMN + '
            FROM ' + @TABLE + '
        ) a
        PIVOT
        (
            ' + @AGGREGATE + '(' + @VALUE_COLUMN + ')
            FOR ' + @PIVOT_COLUMN + ' IN (' + @AVAIABLE_TO_PIVOT + ')
        ) piv
        SELECT * FROM @RETURN_TABLE'

    EXEC SP_EXECUTESQL @SQLSTRING
END

Next we will get our data ready for the example. I have taken the data example from the accepted answer and added a couple of data elements to show the varied outputs of the aggregate change:

create table temp
(
    date datetime,
    category varchar(3),
    amount money
)

insert into temp values ('1/1/2012', 'ABC', 1000.00)
insert into temp values ('1/1/2012', 'ABC', 2000.00) -- added
insert into temp values ('2/1/2012', 'DEF', 500.00)
insert into temp values ('2/1/2012', 'DEF', 1500.00) -- added
insert into temp values ('2/1/2012', 'GHI', 800.00)
insert into temp values ('2/10/2012', 'DEF', 700.00)
insert into temp values ('2/10/2012', 'DEF', 800.00) -- added
insert into temp values ('3/1/2012', 'ABC', 1100.00)

The following example executions show the varied aggregates. I did not opt to change the static, pivot, and value columns, to keep the example simple. You should be able to just copy and paste the code to start messing with it yourself:

exec [dbo].[USP_DYNAMIC_PIVOT] 'date','category','amount','dbo.temp','sum'
exec [dbo].[USP_DYNAMIC_PIVOT] 'date','category','amount','dbo.temp','max'
exec [dbo].[USP_DYNAMIC_PIVOT] 'date','category','amount','dbo.temp','avg'
exec [dbo].[USP_DYNAMIC_PIVOT] 'date','category','amount','dbo.temp','min'

This execution returns the following data sets respectively.
Updated version for SQL Server 2017, using the STRING_AGG function to construct the pivot column list:

create table temp
(
    date datetime,
    category varchar(3),
    amount money
);

insert into temp values ('20120101', 'ABC', 1000.00);
insert into temp values ('20120201', 'DEF', 500.00);
insert into temp values ('20120201', 'GHI', 800.00);
insert into temp values ('20120210', 'DEF', 700.00);
insert into temp values ('20120301', 'ABC', 1100.00);

DECLARE @cols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX);

SET @cols = (SELECT STRING_AGG(category,',')
             FROM (SELECT DISTINCT category FROM temp WHERE category IS NOT NULL) t);

set @query = 'SELECT date, ' + @cols + ' from
             (
                 select date, amount, category
                 from temp
             ) x
             pivot
             (
                 max(amount)
                 for category in (' + @cols + ')
             ) p ';

execute(@query);

drop table temp;
There's my solution, cleaning up the unnecessary null values:

DECLARE @cols AS NVARCHAR(MAX),
        @maxcols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX)

select @cols = STUFF((SELECT ',' + QUOTENAME(CodigoFormaPago)
                      from PO_FormasPago
                      order by CodigoFormaPago
                      FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '')

select @maxcols = STUFF((SELECT ',MAX(' + QUOTENAME(CodigoFormaPago) + ') as ' + QUOTENAME(CodigoFormaPago)
                         from PO_FormasPago
                         order by CodigoFormaPago
                         FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '')

set @query = 'SELECT CodigoProducto, DenominacionProducto, ' + @maxcols + '
FROM (
    SELECT CodigoProducto, DenominacionProducto, ' + @cols + '
    from (
        SELECT p.CodigoProducto as CodigoProducto,
               p.DenominacionProducto as DenominacionProducto,
               fpp.CantidadCuotas as CantidadCuotas,
               fpp.IdFormaPago as IdFormaPago,
               fp.CodigoFormaPago as CodigoFormaPago
        FROM PR_Producto p
        LEFT JOIN PR_FormasPagoProducto fpp ON fpp.IdProducto = p.IdProducto
        LEFT JOIN PO_FormasPago fp ON fpp.IdFormaPago = fp.IdFormaPago
    ) xp
    pivot
    (
        MAX(CantidadCuotas)
        for CodigoFormaPago in (' + @cols + ')
    ) p
) xx
GROUP BY CodigoProducto, DenominacionProducto'

execute(@query);
The code below produces results in which NULL is replaced by zero in the output.

Table creation and data insertion:

create table test_table
(
    date nvarchar(10),
    category char(3),
    amount money
)

insert into test_table values ('1/1/2012','ABC',1000.00)
insert into test_table values ('2/1/2012','DEF',500.00)
insert into test_table values ('2/1/2012','GHI',800.00)
insert into test_table values ('2/10/2012','DEF',700.00)
insert into test_table values ('3/1/2012','ABC',1100.00)

Query to generate the exact results, which also replaces NULL with zeros:

DECLARE @DynamicPivotQuery AS NVARCHAR(MAX),
        @PivotColumnNames AS NVARCHAR(MAX),
        @PivotSelectColumnNames AS NVARCHAR(MAX)

--Get distinct values of the PIVOT Column
SELECT @PivotColumnNames = ISNULL(@PivotColumnNames + ',','') + QUOTENAME(category)
FROM (SELECT DISTINCT category FROM test_table) AS cat

--Get distinct values of the PIVOT Column with ISNULL
SELECT @PivotSelectColumnNames = ISNULL(@PivotSelectColumnNames + ',','')
    + 'ISNULL(' + QUOTENAME(category) + ', 0) AS ' + QUOTENAME(category)
FROM (SELECT DISTINCT category FROM test_table) AS cat

--Prepare the PIVOT query using the dynamic column lists
SET @DynamicPivotQuery = N'SELECT date, ' + @PivotSelectColumnNames + '
    FROM test_table
    pivot(sum(amount) for category in (' + @PivotColumnNames + ')) as pvt';

--Execute the Dynamic Pivot Query
EXEC sp_executesql @DynamicPivotQuery

OUTPUT:
A version of Taryn's answer with performance improvements:

Data:

CREATE TABLE dbo.Temp
(
    [date] datetime NOT NULL,
    category nchar(3) NOT NULL,
    amount money NOT NULL,
    INDEX [CX dbo.Temp date] CLUSTERED ([date]),
    INDEX [IX dbo.Temp category] NONCLUSTERED (category)
);

INSERT dbo.Temp ([date], category, amount)
VALUES
    ({D '2012-01-01'}, N'ABC', $1000.00),
    ({D '2012-01-02'}, N'DEF', $500.00),
    ({D '2012-01-02'}, N'GHI', $800.00),
    ({D '2012-02-10'}, N'DEF', $700.00),
    ({D '2012-03-01'}, N'ABC', $1100.00);

Dynamic pivot:

DECLARE @Delimiter nvarchar(4000) = N',',
        @DelimiterLength bigint,
        @Columns nvarchar(max),
        @Query nvarchar(max);

SET @DelimiterLength = LEN(REPLACE(@Delimiter, SPACE(1), N'#'));

-- Before SQL Server 2017
SET @Columns = STUFF((
        SELECT [text()] = @Delimiter, [text()] = QUOTENAME(T.category)
        FROM dbo.Temp AS T
        WHERE T.category IS NOT NULL
        GROUP BY T.category
        ORDER BY T.category
        FOR XML PATH (''), TYPE
    ).value(N'text()[1]', N'nvarchar(max)'), 1, @DelimiterLength, SPACE(0));

-- Alternative for SQL Server 2017+ and database compatibility level 110+
SELECT @Columns = STRING_AGG(CONVERT(nvarchar(max), QUOTENAME(T.category)), N',') WITHIN GROUP (ORDER BY T.category)
FROM (
    SELECT T2.category
    FROM dbo.Temp AS T2
    WHERE T2.category IS NOT NULL
    GROUP BY T2.category
) AS T;

IF @Columns IS NOT NULL
BEGIN
    SET @Query = N'SELECT [date], ' + @Columns + N'
        FROM (SELECT [date], amount, category FROM dbo.Temp) AS S
        PIVOT (MAX(amount) FOR category IN (' + @Columns + N')) AS P;';

    EXECUTE sys.sp_executesql @Query;
END;

Results:

date                     ABC      DEF     GHI
2012-01-01 00:00:00.000  1000.00  NULL    NULL
2012-01-02 00:00:00.000  NULL     500.00  800.00
2012-02-10 00:00:00.000  NULL     700.00  NULL
2012-03-01 00:00:00.000  1100.00  NULL    NULL
CREATE TABLE #PivotExample
(
    [ID] [nvarchar](50) NULL,
    [Description] [nvarchar](50) NULL,
    [ClientId] [smallint] NOT NULL
)
GO

INSERT #PivotExample ([ID],[Description],[ClientId]) VALUES ('ACI1','ACI1Desc1',1008)
INSERT #PivotExample ([ID],[Description],[ClientId]) VALUES ('ACI1','ACI1Desc2',2000)
INSERT #PivotExample ([ID],[Description],[ClientId]) VALUES ('ACI1','ACI1Desc3',3000)
INSERT #PivotExample ([ID],[Description],[ClientId]) VALUES ('ACI1','ACI1Desc4',4000)
INSERT #PivotExample ([ID],[Description],[ClientId]) VALUES ('ACI2','ACI2Desc1',5000)
INSERT #PivotExample ([ID],[Description],[ClientId]) VALUES ('ACI2','ACI2Desc2',6000)
INSERT #PivotExample ([ID],[Description],[ClientId]) VALUES ('ACI2','ACI2Desc3',7000)

SELECT * FROM #PivotExample

--Declare necessary variables
DECLARE @SQLQuery AS NVARCHAR(MAX)
DECLARE @PivotColumns AS NVARCHAR(MAX)

--Get unique values of pivot column
SELECT @PivotColumns = COALESCE(@PivotColumns + ',','') + QUOTENAME([Description])
FROM (SELECT DISTINCT [Description] FROM [dbo].#PivotExample) AS PivotExample

--SELECT @PivotColumns

--Create the dynamic query with all the values for the pivot column at runtime
SET @SQLQuery = N'
-- Your pivoted result comes here
SELECT ID, ' + @PivotColumns + '
FROM (
    -- Source table should be in an inner query
    SELECT ID, [Description], [ClientId]
    FROM #PivotExample
) AS P
PIVOT (
    -- Select the values from derived table P
    SUM(ClientId)
    FOR [Description] IN (' + @PivotColumns + ')
) AS PVTTable'

--SELECT @SQLQuery

--Execute dynamic query
EXEC sp_executesql @SQLQuery

Drop table #PivotExample
A fully generic way that will work in non-traditional MS SQL environments (e.g. Azure Synapse Analytics serverless SQL pools). It's in a stored procedure, but there's no need to use it as such:

-- DROP PROCEDURE IF EXISTS
if object_id('dbo.usp_generic_pivot') is not null
    DROP PROCEDURE dbo.usp_generic_pivot
GO

CREATE PROCEDURE dbo.usp_generic_pivot
(
    @source NVARCHAR (100),       -- table or view object name
    @pivotCol NVARCHAR (100),     -- the column to pivot
    @pivotAggCol NVARCHAR (100),  -- the column with the values for the pivot
    @pivotAggFunc NVARCHAR (20),  -- the aggregate function to apply to those values
    @leadCols NVARCHAR (100)      -- comma separated list of other columns to keep and order by
)
AS
BEGIN
    DECLARE @pivotedColumns NVARCHAR(MAX)
    DECLARE @tsql NVARCHAR(MAX)

    SET @tsql = CONCAT('SELECT @pivotedColumns = STRING_AGG(qname, '','') FROM (SELECT DISTINCT QUOTENAME(', @pivotCol, ') AS qname FROM ', @source, ') AS qnames')

    EXEC sp_executesql @tsql, N'@pivotedColumns nvarchar(max) out', @pivotedColumns out

    SET @tsql = CONCAT(
        'SELECT ', @leadCols, ',', @pivotedColumns,
        ' FROM (SELECT ', @leadCols, ',', @pivotAggCol, ',', @pivotCol, ' FROM ', @source, ') as t',
        ' PIVOT (', @pivotAggFunc, '(', @pivotAggCol, ')', ' FOR ', @pivotCol, ' IN (', @pivotedColumns, ')) as pvt',
        ' ORDER BY ', @leadCols)

    EXEC (@tsql)
END
GO

-- TEST EXAMPLE
EXEC dbo.usp_generic_pivot
    @source = '[your_db].[dbo].[form_answers]',
    @pivotCol = 'question',
    @pivotAggCol = 'answer',
    @pivotAggFunc = 'MAX',
    @leadCols = 'candidate_id, candidate_name'
GO
Create tables based on contents of other table
I have a table that contains the names of tables to create and the columns each table should have:

CREATE TABLE Tables2Create
(
    table_name nvarchar(256),
    column_names nvarchar(max)
)

INSERT INTO Tables2Create VALUES ('People','Name|Occupation|Hobby')
INSERT INTO Tables2Create VALUES ('Schools','Name|Place|Type ')

Now I need some T-SQL that will dynamically create a table for each row in table_name, splitting the column_names field to decide which columns each table should have. All fields can be nvarchars. For example:

CREATE TABLE People
(
    Name nvarchar(256),
    Occupation nvarchar(256),
    Hobby nvarchar(256)
)

Any idea how to do this?
Below is an example using STRING_SPLIT to extract the column names and STRING_AGG to concatenate the column names and CREATE TABLE statements:

DECLARE @SQL nvarchar(MAX);

SELECT @SQL = STRING_AGG(CreateTableStatement, '')
FROM (
    SELECT 'CREATE TABLE ' + QUOTENAME(table_name) + N' ('
        + (SELECT STRING_AGG(QUOTENAME(value) + ' nvarchar(256)', ',')
           FROM STRING_SPLIT(column_names,'|'))
        + N');'
    FROM dbo.Tables2Create
) AS CreateTableStatements(CreateTableStatement)

EXEC(@SQL);
Can you do this? Yes, you can. Should you? Probably not, if I am honest. Storing delimited data, as I mention in my comment, is always a design flaw; at least normalise your design. That being said, the method I use here is an "all in one" solution; no cursors, no iteration. As you've tagged SQL Server 2019, we can make use of STRING_AGG. This gives something like this:

USE master;
GO
CREATE DATABASE TestDB;
GO
USE TestDB;
GO
CREATE TABLE Tables2Create
(
    table_name sysname, --correct data type for object names
    column_names nvarchar(max)
)
INSERT INTO Tables2Create VALUES (N'People',N'Name|Occupation|Hobby')
INSERT INTO Tables2Create VALUES (N'Schools',N'Name|Place|Type ');
GO
DECLARE @SQL nvarchar(MAX),
        @CRLF nchar(2) = NCHAR(13) + NCHAR(10);
DECLARE @Delim nvarchar(10) = N',' + @CRLF + N'    ';

SET @SQL = (SELECT STRING_AGG(S.SQL,'')
            FROM (SELECT @CRLF + @CRLF + N'CREATE TABLE dbo.' + QUOTENAME(T2C.table_name) + N' (' + @CRLF
                       + N'    ' + STRING_AGG(QUOTENAME(SS.value) + N' nvarchar(256)', @Delim) WITHIN GROUP (ORDER BY SS.[value])
                       + @CRLF + N');' AS SQL
                  FROM dbo.Tables2Create T2C
                  CROSS APPLY STRING_SPLIT(T2C.column_names,N'|') SS
                  GROUP BY T2C.table_name) S);

PRINT @SQL; --Your best friend

EXEC sys.sp_executesql @SQL;
GO
USE master;
GO
DROP DATABASE TestDB;

db<>fiddle
Is there a way to dynamically create tables without knowing how many columns the table will have beforehand?
The following query uses a pivot to turn the values in the field [expireDate Year-Month] into column headings. Because the number of year-months regularly increases and is not fixed, this is done dynamically. Is there a way to also dynamically create a table from the output without knowing how many columns the table will have beforehand?

DECLARE @SQLQuery AS NVARCHAR(MAX)
DECLARE @PivotColumns AS NVARCHAR(MAX)

--Get unique values of pivot column
SELECT @PivotColumns = COALESCE(@PivotColumns + ',','') + QUOTENAME([expireDate Year-Month])
FROM (SELECT DISTINCT [expireDate Year-Month]
      FROM REPORTING_DATA.tableau.vw_vehicleInspDetailsHistMonthlyFinal) AS PivotExample

SELECT @PivotColumns

--Create the dynamic query with all the values for the pivot column at runtime
SET @SQLQuery = N'SELECT DISTINCT Vehicle#, ' + @PivotColumns + '
    FROM REPORTING_DATA.tableau.vw_vehicleInspDetailsHistMonthlyFinal
    PIVOT(
        MAX(inspectionResult)
        FOR [expireDate Year-Month] IN (' + @PivotColumns + ')) AS P
    ORDER BY Vehicle# '

SELECT @SQLQuery

--Execute dynamic query
EXEC sp_executesql @SQLQuery
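One way to answer the question above (a sketch, not tested against this view; the target table name dbo.VehicleInspPivot is made up) is to let SELECT ... INTO create the table inside the dynamic SQL, since SELECT ... INTO derives the target's column list from the result set, so the column count need not be known beforehand:

```sql
-- SELECT ... INTO creates dbo.VehicleInspPivot with whatever columns
-- the pivot produces; drop or rename the table before re-running
SET @SQLQuery = N'SELECT DISTINCT Vehicle#, ' + @PivotColumns + '
    INTO dbo.VehicleInspPivot
    FROM REPORTING_DATA.tableau.vw_vehicleInspDetailsHistMonthlyFinal
    PIVOT(
        MAX(inspectionResult)
        FOR [expireDate Year-Month] IN (' + @PivotColumns + ')) AS P';

EXEC sp_executesql @SQLQuery;
```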
T-SQL Replacement for Access Normalization Using Record Set?
I'm relatively new to T-SQL, so I hope someone with more experience/knowledge can help. I have inherited an Access database that I'm moving to SQL Server. The original database imports and normalizes transaction data from Excel files in the following steps: imports the Excel file to a staging table, updates tables related to several of the columns if any new values are found, and finally moves the data to the main table, but with inner joins to the PK columns on the updated tables from step 2 replacing the actual values. Step 2 above makes use of a "normalizing" table: CREATE TABLE [dbo].[tblNormalize]( [Normalize_ID] [int] IDENTITY(1,1) NOT NULL, [Table_Raw] [nvarchar](255) NULL, [Field_Raw] [nvarchar](255) NULL, [Table_Normal] [nvarchar](255) NULL, [Field_Normal] [nvarchar](255) NULL, [Data_Type] [nvarchar](255) NULL, CONSTRAINT [tblNormalize$ID] PRIMARY KEY CLUSTERED ( [Normalize_ID] ASC )WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY] ) ON [PRIMARY] GO [Table_Raw] is the name of the staging table. [Field_Raw] is the name of the field in the staging table - necessary since the field name could be different from what's in the tables to be updated. [Table_Normal] is the name of the table to be updated. [Field_Normal] is the name of the field to be updated. For example, if one of the values in the Location column of the staging table is "Tennessee", this step would check the corresponding Location column in the Location table to make sure that "Tennessee" exists, and if not, inserts it as a new record and creates a new primary key. So my question: How do I accomplish this step in T-SQL, without using a record set in Access? I've figured out how to use MERGE in a stored procedure to do it for individual columns with the relevant tables, but still using a record set in VBA to move through each row of the normalizing table while calling the stored procedure. 
(All the tables now reside on SQL Server, and I've linked to them in Access using ODBC.) Here's what I have so far.

VBA:

```vba
Public Function funTestNormalize(strTableRaw As String)
    '---Normalizes the data in the tblPayrollStaging table after it's been imported, using the dbo_tblNormalize table---
    Dim db As Database, rst As Recordset, qdef As DAO.QueryDef
    Set db = CurrentDb
    Set rst = db.OpenRecordset("Select * From dbo_tblNormalize WHERE Table_Raw = '" & strTableRaw & "';", dbOpenDynaset, dbSeeChanges)
    'Cycle through each row of dbo_tblNormalize (corresponds to the fields in tblPayrollStaging)
    If Not rst.EOF Then
        rst.MoveFirst
        DoCmd.SetWarnings False
        Set qdef = CurrentDb.QueryDefs("qryPassThru")                  'Sets the QueryDef
        qdef.Connect = CurrentDb.TableDefs("dbo_tblSheet").Connect     'Assigns a connection to the QueryDef
        qdef.ReturnsRecords = False                                    'Avoids the "3065 error"
        Do Until rst.EOF
            With qdef
                'Sets the .SQL value to the needed T-SQL
                .SQL = "EXEC uspUpdateNormalizingTables " & rst![Table_Raw] & ", " & rst![Field_Raw] & ", " & rst![Table_Normal] & ", " & rst!Field_Normal & ";"
                .Execute dbFailOnError                                 'Executes the QueryDef
            End With
            rst.MoveNext
        Loop
    End If
    rst.Close
End Function
```

SQL Server (using SSMS):

```sql
CREATE PROCEDURE [dbo].[uspUpdateNormalizingTables]
    -- Add the parameters for the stored procedure here
    @tableRaw nvarchar(50),
    @fieldRaw nvarchar(50),
    @tableNormal nvarchar(50),
    @fieldNormal nvarchar(50)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements.
    SET NOCOUNT ON
    EXEC('INSERT INTO ' + @tableNormal + ' (' + @fieldNormal + ')'
        + ' SELECT DISTINCT ' + @tableRaw + '.' + @fieldRaw
        + ' FROM ' + @tableRaw
        + ' WHERE (NOT EXISTS (SELECT ' + @fieldNormal + ' FROM ' + @tableNormal
        + ' WHERE ' + @tableNormal + '.' + @fieldNormal + ' = ' + @tableRaw + '.' + @fieldRaw + '))'
        + ' AND (' + @tableRaw + '.' + @fieldRaw + ' IS NOT NULL);')
END
GO
```

Would I need to use cursors (which I haven't used yet, and would have to figure out), or is there maybe a more elegant solution which I haven't considered? Any help you can give is appreciated!
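The VBA recordset loop can be replaced entirely with a server-side T-SQL cursor over dbo.tblNormalize that calls the existing procedure once per row. A sketch, assuming the uspUpdateNormalizingTables procedure from the question; the wrapper name uspNormalizeTable is invented for illustration:

```sql
-- Sketch: iterate the normalizing table on the server instead of in Access.
CREATE PROCEDURE dbo.uspNormalizeTable
    @tableRawFilter nvarchar(50)   -- plays the role of strTableRaw in the VBA
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @tableRaw nvarchar(50), @fieldRaw nvarchar(50),
            @tableNormal nvarchar(50), @fieldNormal nvarchar(50);

    DECLARE normalize_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT Table_Raw, Field_Raw, Table_Normal, Field_Normal
        FROM dbo.tblNormalize
        WHERE Table_Raw = @tableRawFilter;

    OPEN normalize_cursor;
    FETCH NEXT FROM normalize_cursor INTO @tableRaw, @fieldRaw, @tableNormal, @fieldNormal;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- One call per normalizing-table row, as the VBA loop did.
        EXEC dbo.uspUpdateNormalizingTables @tableRaw, @fieldRaw, @tableNormal, @fieldNormal;
        FETCH NEXT FROM normalize_cursor INTO @tableRaw, @fieldRaw, @tableNormal, @fieldNormal;
    END

    CLOSE normalize_cursor;
    DEALLOCATE normalize_cursor;
END
```

With this in place, Access only needs a single pass-through call such as `EXEC dbo.uspNormalizeTable 'tblPayrollStaging';` instead of looping a DAO recordset.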
T-SQL find string with lowercase and uppercase
I have a database with several tables and I need to search every varchar column across the database, for columns that simultaneously contain lower and upper case characters. To clarify: If one column contains helLo the name of the column should be returned by the query, but if the column values only contain either hello or HELLO then the name of the column is not returned.
Let's exclude all UPPER and all LOWER; the rest will be MIXED.

```sql
SELECT someColumn
FROM someTable
WHERE someColumn <> UPPER(someColumn)
  AND someColumn <> LOWER(someColumn)
```

EDIT: As suggested in comments and described in detail here, I need to specify a case-sensitive collation for both comparisons, otherwise a case-insensitive default collation will treat the values as equal:

```sql
SELECT someColumn
FROM someTable
WHERE someColumn <> UPPER(someColumn) COLLATE SQL_Latin1_General_CP1_CS_AS
  AND someColumn <> LOWER(someColumn) COLLATE SQL_Latin1_General_CP1_CS_AS
```
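A quick sanity check of the approach above on made-up sample data (the table variable and values are assumptions for the demo):

```sql
-- Under a case-insensitive default collation, only the COLLATE'd
-- comparisons distinguish 'helLo' from its upper/lower forms.
DECLARE @t TABLE (someColumn varchar(50));
INSERT INTO @t VALUES ('hello'), ('HELLO'), ('helLo');

SELECT someColumn
FROM @t
WHERE someColumn <> UPPER(someColumn) COLLATE SQL_Latin1_General_CP1_CS_AS
  AND someColumn <> LOWER(someColumn) COLLATE SQL_Latin1_General_CP1_CS_AS;
-- returns only 'helLo'
```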
It sounds like you are after a case-sensitive search, so you'd need to use a case-sensitive collation for the WHERE clause. E.g. if your collation is currently SQL_Latin1_General_CP1_CI_AS, which is case-insensitive, you can write a case-sensitive query using:

```sql
SELECT SomeColumn
FROM dbo.SomeTable
WHERE SomeField LIKE '%helLo%' COLLATE SQL_Latin1_General_CP1_CS_AS
```

Here, COLLATE SQL_Latin1_General_CP1_CS_AS tells it to use a case-sensitive collation to perform the filtering.
I think I understand that you want to find any varchar column with mixed-case data within it? If so, you can achieve this with a cursor looking at your column types, which then executes some dynamic SQL on the varchar columns it finds to check the data for mixed-case values. I thoroughly recommend doing this on a non-production server using a copy of your database, not least because you need to create a table to deposit your findings into:

```sql
create table VarcharColumns (TableName nvarchar(max), ColumnName nvarchar(max))

declare @sql nvarchar(max)

declare my_cursor cursor local static read_only forward_only for
    select 'insert into VarcharColumns select t,c from(select ''' + s.name + '.' + tb.name + ''' t, ''' + c.name + ''' c from ' + s.name + '.' + tb.name + ' where ' + c.name + ' like ''%[abcdefghijklmnopqrstuvwxyz]%'' COLLATE SQL_Latin1_General_CP1_CS_AS and ' + c.name + ' like ''%[ABCDEFGHIJKLMNOPQRSTUVWXYZ]%'' COLLATE SQL_Latin1_General_CP1_CS_AS having count(1) > 0) a' as s
    from sys.columns c
    inner join sys.types t on(c.system_type_id = t.system_type_id and t.name = 'varchar')
    inner join sys.tables tb on(c.object_id = tb.object_id)
    inner join sys.schemas s on(tb.schema_id = s.schema_id)

open my_cursor
fetch next from my_cursor into @sql
while @@fetch_status = 0
begin
    print @sql
    exec(@sql)
    fetch next from my_cursor into @sql
end
close my_cursor
deallocate my_cursor

select * from VarcharColumns
```
You can check the hash compared to its upper and lower values... here is a simple test:

```sql
declare @test varchar(256)
set @test = 'MIX' -- Try changing this to a mix case, and then all lower case

select case
           when hashbytes('SHA1', @test) <> hashbytes('SHA1', upper(@test))
            and hashbytes('SHA1', @test) <> hashbytes('SHA1', lower(@test))
           then 'MixedCase'
           else 'Not Mixed Case'
       end
```

So using this in a table... you can do something like this:

```sql
create table #tempT (SomeColumn varchar(256))
insert into #tempT (SomeColumn) values ('some thing lower'),('SOME THING UPPER'),('Some Thing Mixed')

SELECT SomeColumn
FROM #tempT
WHERE 1 = case
              when hashbytes('SHA1', SomeColumn) <> hashbytes('SHA1', upper(SomeColumn))
               and hashbytes('SHA1', SomeColumn) <> hashbytes('SHA1', lower(SomeColumn))
              then 1
              else 0
          end
```
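A lighter-weight variant of the same idea (my addition, not from the original answer) skips hashing and compares the raw bytes directly by casting to varbinary, which is inherently case-sensitive:

```sql
-- Sketch: a varbinary cast makes <> compare byte-for-byte, so
-- 'Some Thing Mixed' differs from both its UPPER and LOWER forms.
-- Reuses the #tempT table from the example above.
SELECT SomeColumn
FROM #tempT
WHERE CAST(SomeColumn AS varbinary(512)) <> CAST(UPPER(SomeColumn) AS varbinary(512))
  AND CAST(SomeColumn AS varbinary(512)) <> CAST(LOWER(SomeColumn) AS varbinary(512))
```

This avoids computing two hashes per comparison, at the cost of comparing the full value; for short varchar columns the difference is unlikely to matter.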