CodeFluent SQL Server producer

We are using the SQL Server producer. We want an index for each foreign key column. SQL Server does not put indexes onto foreign key columns automatically. How can we create an index for each foreign key column automatically? Should we code an aspect for this?

CodeFluent Entities does not generate indices by default. However, you can set index="true" on a property:
<cf:property name="Customer" index="true" />
Then use the SQL Server Template Engine with the template provided by CodeFluent Entities, "C:\Program Files (x86)\SoftFluent\CodeFluent\Modeler\Templates\SqlServer\[Template]CreateIndexes.sql", to create the indices.
If you don't want to add index="true" on each property, you can change the template to automatically include all properties, or you can write an aspect to add the attribute (this is more complex).
Another solution is to use a SQL script:
-- Build a CREATE INDEX statement for every foreign key column that does not already have one
DECLARE @SQL NVARCHAR(max)
SET @SQL = ''
SELECT @SQL = @SQL +
'IF NOT EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N''[dbo].[' + tab.name + ']'') AND name = N''IX_' + cols.name + ''')' + CHAR(13)+CHAR(10) +
'CREATE NONCLUSTERED INDEX [IX_' + cols.name + '] ON [dbo].[' + tab.name + ']( [' + cols.name + '] ASC ) ON [PRIMARY];' + CHAR(13)+CHAR(10)
FROM sys.foreign_keys keys
INNER JOIN sys.foreign_key_columns keyCols ON keys.object_id = keyCols.constraint_object_id
INNER JOIN sys.columns cols ON keyCols.parent_object_id = cols.object_id AND keyCols.parent_column_id = cols.column_id
INNER JOIN sys.tables tab ON keyCols.parent_object_id = tab.object_id
ORDER BY tab.name, cols.name
EXEC(@SQL)
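If you want to review the generated statements before running them, you could print the batch first and execute it only once you are happy with it (a small usage note, not part of the original script):
PRINT @SQL    -- inspect the generated CREATE INDEX statements
-- EXEC(@SQL) -- run them once the output looks right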

Related

How to create a PK in thousands of databases where it does not exist

I'm having trouble working out how to create a primary key in thousands of databases. I have tried using
sp_ineachdb by Aaron Bertrand, but it only works on the first database. I need the script that finds the PKs to be created to run against the current database, which doesn't seem to be the case.
DECLARE @PKScript2 VARCHAR(max)='';
SELECT @PKScript2 += ' ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(obj.SCHEMA_ID))+'.'+
QUOTENAME(obj.name) + ' ADD CONSTRAINT PK_'+ obj.name+
' PRIMARY KEY CLUSTERED (' + QUOTENAME(icol.name) + ')' + CHAR(13)
FROM sys.identity_columns icol INNER JOIN
sys.objects obj on icol.object_id= obj.object_id
WHERE NOT EXISTS (SELECT * FROM sys.key_constraints k
WHERE k.parent_object_id = obj.object_id AND k.type = 'PK')
AND obj.type = 'U'
Order by obj.name
PRINT (@PKScript2);
EXEC [master].[dbo].[sp_ineachdb] @command = @PKScript2, @database_list= '[vosk][vpb][vpbk][vsb][vsh][vst]'
For the sake of the example, I have only used 6 databases.
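One likely cause (a guess based on the code above, not a verified answer): @PKScript2 is built once, in whatever database the batch is running in, and that same static script is then replayed in every database. Building the generator itself inside the command that sp_ineachdb executes should make it run against each database's own catalog views. A minimal sketch, assuming @database_list accepts a comma-separated list:
DECLARE @cmd NVARCHAR(MAX) = N'
DECLARE @PKScript2 NVARCHAR(MAX) = N'''';
SELECT @PKScript2 += '' ALTER TABLE '' + QUOTENAME(SCHEMA_NAME(obj.schema_id)) + ''.''
    + QUOTENAME(obj.name) + '' ADD CONSTRAINT '' + QUOTENAME(''PK_'' + obj.name)
    + '' PRIMARY KEY CLUSTERED ('' + QUOTENAME(icol.name) + '');'' + CHAR(13)
FROM sys.identity_columns icol
INNER JOIN sys.objects obj ON icol.object_id = obj.object_id
WHERE NOT EXISTS (SELECT * FROM sys.key_constraints k
                  WHERE k.parent_object_id = obj.object_id AND k.type = ''PK'')
  AND obj.type = ''U'';
EXEC (@PKScript2);';

EXEC master.dbo.sp_ineachdb @command = @cmd, @database_list = N'vosk,vpb,vpbk,vsb,vsh,vst';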

T-SQL Replacement for Access Normalization Using Record Set?

I'm relatively new to T-SQL, so I hope someone with more experience/knowledge can help.
I have inherited an Access database that I'm moving to SQL Server. The original database imports and normalizes transaction data from Excel files in the following steps:
1. imports the Excel file to a staging table,
2. updates tables related to several of the columns if any new values are found, and
3. finally moves the data to the main table, but with inner joins to the PK columns on the updated tables from step 2 replacing the actual values.
Step 2 above makes use of a "normalizing" table:
CREATE TABLE [dbo].[tblNormalize](
[Normalize_ID] [int] IDENTITY(1,1) NOT NULL,
[Table_Raw] [nvarchar](255) NULL,
[Field_Raw] [nvarchar](255) NULL,
[Table_Normal] [nvarchar](255) NULL,
[Field_Normal] [nvarchar](255) NULL,
[Data_Type] [nvarchar](255) NULL,
CONSTRAINT [tblNormalize$ID] PRIMARY KEY CLUSTERED
(
[Normalize_ID] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
[Table_Raw] is the name of the staging table.
[Field_Raw] is the name of the field in the staging table - necessary since the field name could be different from what's in the tables to be updated.
[Table_Normal] is the name of the table to be updated.
[Field_Normal] is the name of the field to be updated.
For example, if one of the values in the Location column of the staging table is "Tennessee", this step would check the corresponding Location column in the Location table to make sure that "Tennessee" exists, and if not, inserts it as a new record and creates a new primary key.
So my question: How do I accomplish this step in T-SQL, without using a recordset in Access? I've figured out how to use MERGE in a stored procedure to do it for individual columns with the relevant tables, but I'm still using a recordset in VBA to move through each row of the normalizing table while calling the stored procedure. (All the tables now reside on SQL Server, and I've linked to them in Access using ODBC.) Here's what I have so far:
VBA:
Public Function funTestNormalize(strTableRaw As String)
'---Normalizes the data in the tblPayrollStaging table after it's been imported, using the dbo_tblNormalize table---
Dim db As Database, rst As Recordset, qdef As DAO.QueryDef
Set db = CurrentDb
Set rst = db.OpenRecordset("Select * From dbo_tblNormalize WHERE Table_Raw = '" & strTableRaw & "';", dbOpenDynaset, dbSeeChanges)
'Cycle through each row of dbo_tblNormalize (corresponds to the fields in tblPayrollStaging)
If Not rst.EOF Then
rst.MoveFirst
DoCmd.SetWarnings False
Set qdef = CurrentDb.QueryDefs("qryPassThru") 'Sets the QueryDef
qdef.Connect = CurrentDb.TableDefs("dbo_tblSheet").Connect 'Assigns a connection to the QueryDef
qdef.ReturnsRecords = False 'Avoids the "3065 error"
Do Until rst.EOF
With qdef
.SQL = "EXEC uspUpdateNormalizingTables " & rst![Table_Raw] & ", " & rst![Field_Raw] & ", " & rst![Table_Normal] & ", " & rst!Field_Normal & ";" 'Sets the .SQL value to the needed T-SQL
.Execute dbFailOnError 'Executes the QueryDef
End With
rst.MoveNext
Loop
End If
rst.Close
End Function
SQL Server (using SSMS):
CREATE PROCEDURE [dbo].[uspUpdateNormalizingTables]
-- Add the parameters for the stored procedure here
@tableRaw nvarchar(50),
@fieldRaw nvarchar(50),
@tableNormal nvarchar(50),
@fieldNormal nvarchar(50)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements.
SET NOCOUNT ON
EXEC('INSERT INTO ' + @tableNormal + ' (' + @fieldNormal + ')' +
' SELECT DISTINCT ' + @tableRaw + '.' + @fieldRaw +
' FROM ' + @tableRaw +
' WHERE (NOT EXISTS (SELECT ' + @fieldNormal + ' FROM ' + @tableNormal + ' WHERE ' + @tableNormal + '.' + @fieldNormal + ' = ' + @tableRaw + '.' + @fieldRaw + ')) AND (' + @tableRaw + '.' + @fieldRaw + ' IS NOT NULL);')
END
GO
Would I need to use cursors (which I haven't used yet, and would have to figure out), or is there maybe a more elegant solution which I haven't considered? Any help you can give is appreciated!
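A cursor would do exactly what the VBA recordset loop is doing now, just server-side. A minimal sketch, assuming the tblNormalize table and uspUpdateNormalizingTables procedure shown above and the tblPayrollStaging staging table mentioned in the VBA:
DECLARE @tableRaw nvarchar(255), @fieldRaw nvarchar(255),
        @tableNormal nvarchar(255), @fieldNormal nvarchar(255);

DECLARE norm_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT Table_Raw, Field_Raw, Table_Normal, Field_Normal
    FROM dbo.tblNormalize
    WHERE Table_Raw = N'tblPayrollStaging';

OPEN norm_cur;
FETCH NEXT FROM norm_cur INTO @tableRaw, @fieldRaw, @tableNormal, @fieldNormal;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- one call per row of the normalizing table, same as the VBA loop
    EXEC dbo.uspUpdateNormalizingTables @tableRaw, @fieldRaw, @tableNormal, @fieldNormal;
    FETCH NEXT FROM norm_cur INTO @tableRaw, @fieldRaw, @tableNormal, @fieldNormal;
END
CLOSE norm_cur;
DEALLOCATE norm_cur;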

Should I delete Hypothetical Indexes?

I have noticed that hypothetical indexes exist in a certain database. I have searched around, and it appears that this type of index is created by the Database Engine Tuning Advisor and is not always deleted.
There are several topics, including the official documentation, on how to clear/delete these indexes, but I was not able to find whether these indexes have any impact on the server themselves.
What I have checked, using the script below, is that there is no size information about them:
SELECT OBJECT_NAME(I.[object_id]) AS TableName
,I.[name] AS IndexName
,I.[index_id] AS IndexID
,8 * SUM(A.[used_pages]) AS 'Indexsize(KB)'
FROM [sys].[indexes] AS I
INNER JOIN [sys].[partitions] AS P
ON P.[object_id] = I.[object_id]
AND P.[index_id] = I.[index_id]
INNER JOIN [sys].[allocation_units] AS A
ON A.[container_id] = P.[partition_id]
WHERE I.[is_hypothetical] = 1
GROUP BY I.[object_id]
,I.[index_id]
,I.[name]
ORDER BY 8 * SUM(A.[used_pages]) DESC
Having found them, I decided to check whether there is any usage information about them, in order to keep the ones that are often used, but again nothing was returned. (I used the "Existing Indexes Usage Statistics" query from this article.)
Could anyone tell me why keeping these indexes is wrong, and whether I can determine which of them should be kept?
Just USE the database you want to clean and run this:
DECLARE @sql VARCHAR(MAX) = ''
SELECT
@sql = @sql + 'DROP INDEX [' + i.name + '] ON [dbo].[' + t.name + ']' + CHAR(13) + CHAR(10)
FROM
sys.indexes i
INNER JOIN sys.tables t
ON i.object_id = t.object_id
WHERE
i.is_hypothetical = 1
EXECUTE sp_sqlexec @sql
Just delete them. They aren't actually taking up any space or causing any performance hit/benefit at all, but if you're looking at which indexes are defined on a table and forget to exclude hypothetical indexes, they might cause some confusion. Also, in the unlikely event that you try to create an index with the same name as one of these indexes, it will fail because one already exists.
If you use custom schemas, or also want to cover indexed views, the script above needs some further improvements:
DECLARE @sql VARCHAR(MAX) = ''
SELECT @sql = @sql
+ 'DROP INDEX [' + i.name + ']'
+ ' ON [' + OBJECT_SCHEMA_NAME(t.[object_id]) + '].[' + t.name + ']'
+ CHAR(13) + CHAR(10)
FROM sys.indexes i
INNER JOIN sys.[all_objects] t
ON i.object_id = t.object_id
WHERE i.is_hypothetical = 1
PRINT @sql
EXECUTE sp_sqlexec @sql

TSQL: Rowcounts of All Tables In A Server

I am trying to obtain the row count of all tables in a server (NOT a particular database, but all the databases on a server, excluding the msdb, model, master, etc). I don't need any other details to be returned other than the database name, the table name, and row count.
My approach is to get all the databases on the server and assign an ID to each of them, which is then referenced in a while loop (beginning with ID one and ending at the maximum ID). Within the while loop, I obtain the tables and row counts in the database matching that ID. My problem is that USE DatabaseName doesn't seem to allow me to make it dynamic, meaning that I can't store a database name in a variable and use it as the database against which the row-count query runs.
Is there another approach that I'm missing (I've looked at many other examples, often using cursors, which seem to need much more code and appear to use more resources; this query is relatively fast even against the database with the most tables, except that it never moves on to the next database), or am I missing something obvious in the code to make this dynamic?
DECLARE @ServerTable TABLE(
DatabaseID INT IDENTITY(1,1),
DatabaseName VARCHAR(50)
)
DECLARE @count INT
DECLARE @start INT = 1
SELECT @count = COUNT(*) FROM sys.databases WHERE name NOT IN ('master','tempdb','model','msdb')
INSERT INTO @ServerTable (DatabaseName)
SELECT name
FROM sys.databases
WHERE name NOT IN ('master','tempdb','model','msdb')
WHILE @start < @count
BEGIN
DECLARE @db VARCHAR(50)
SELECT @db = DatabaseName FROM @ServerTable WHERE DatabaseID = @start
-- This is the problem, as the USE doesn't seem to allow it to be dynamic.
USE @db
GO
SELECT @db
,o.name [Name]
,ddps.row_count [Row Count]
FROM sys.indexes AS i
INNER JOIN sys.objects AS o ON i.OBJECT_ID = o.OBJECT_ID
INNER JOIN sys.dm_db_partition_stats AS ddps ON i.OBJECT_ID = ddps.OBJECT_ID AND i.index_id = ddps.index_id
WHERE i.index_id < 2 AND o.is_ms_shipped = 0
ORDER BY o.NAME
SET @start = @start + 1
END
Note: I tried checking in the sys.objects and sys.indexes to see if I could filter with a database name, but I had no luck.
Update: I tried turning the SELECT into something dynamic, with no success (note the code below only shows the changed SELECT):
SET @sql = '
SELECT ' + @db + ' [Database]
,o.name [Name]
,ddps.row_count [Row Count]
FROM ' + @db + '.sys.objects
INNER JOIN ' + @db + ' sys.objects AS o ON i.OBJECT_ID = o.OBJECT_ID
INNER JOIN ' + @db + ' sys.dm_db_partition_stats AS ddps ON i.OBJECT_ID = ddps.OBJECT_ID AND i.index_id = ddps.index_id
WHERE i.index_id < 2 AND o.is_ms_shipped = 0
ORDER BY o.NAME'
No, that is essentially the way you do it.
I'm not sure why you think a while loop is faster than a cursor (though this is a common misconception). They are essentially the same thing. I don't always use cursors, but when I do, I use LOCAL FAST_FORWARD - make sure that you do too. See this article for more info:
What impact can different cursor options have?
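For reference, a bare-bones cursor declared with those options might look like this (a sketch; the loop body is just a placeholder):
DECLARE @name SYSNAME;
DECLARE db_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE database_id > 4;
OPEN db_cur;
FETCH NEXT FROM db_cur INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @name;   -- per-database work would go here
    FETCH NEXT FROM db_cur INTO @name;
END
CLOSE db_cur;
DEALLOCATE db_cur;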
To reduce the code required for individual tasks like this, you might be interested in the sp_MSforeachdb replacement I wrote (sp_MSforeachdb is a built-in, undocumented and unsupported stored procedure that will repeat a command for every database, but it is not possible to, say, filter out system databases, and it also has a severe bug where it will sometimes halt execution):
Making a more reliable and flexible sp_MSforeachdb
Execute a Command in the Context of Each Database in SQL Server
Another way would be dynamic SQL.
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql += '
SELECT db = N''' + name + '''
,o.name [Name]
,ddps.row_count [Row Count]
FROM ' + QUOTENAME(name) + '.sys.indexes AS i
INNER JOIN ' + QUOTENAME(name) + '.sys.objects AS o
ON i.OBJECT_ID = o.OBJECT_ID
INNER JOIN ' + QUOTENAME(name) + '.sys.dm_db_partition_stats AS ddps
ON i.OBJECT_ID = ddps.OBJECT_ID AND i.index_id = ddps.index_id
WHERE i.index_id < 2 AND o.is_ms_shipped = 0
ORDER BY o.NAME;'
FROM sys.databases
WHERE database_id > 4;
PRINT @sql;
--EXEC sp_executesql @sql;
(The print is there so that you can inspect the command before executing. It may be truncated at 8K if you have a large number of databases, but don't be alarmed - that is just a display issue in SSMS, the command is complete.)
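If you do want to see the whole batch, a common workaround (a sketch, not from the original answer, reusing @sql from above) is to print it in chunks:
DECLARE @i INT = 1, @len INT = LEN(@sql);
WHILE @i <= @len
BEGIN
    PRINT SUBSTRING(@sql, @i, 4000);   -- note: chunk boundaries may split a line
    SET @i += 4000;
END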
You could also build a #temp table first, and insert into that, so that you have a single resultset to work with, e.g.
CREATE TABLE #x(db SYSNAME, o SYSNAME, rc SYSNAME);
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql += 'INSERT #x(db,o,rc)
SELECT db = N''' + name + '''
,o.name [Name]
,ddps.row_count [Row Count]
FROM ' + QUOTENAME(name) + '.sys.indexes AS i
INNER JOIN ' + QUOTENAME(name) + '.sys.objects AS o
ON i.OBJECT_ID = o.OBJECT_ID
INNER JOIN ' + QUOTENAME(name) + '.sys.dm_db_partition_stats AS ddps
ON i.OBJECT_ID = ddps.OBJECT_ID AND i.index_id = ddps.index_id
WHERE i.index_id < 2 AND o.is_ms_shipped = 0
ORDER BY o.NAME;'
FROM sys.databases
WHERE database_id > 4;
EXEC sp_executesql @sql;
SELECT db, o, rc FROM #x ORDER BY db, o;
Now, don't be fooled into believing this isn't also using a cursor or loop - it is. But it is building the command in a loop as opposed to executing it in a loop.
In terms of making your dynamic query work, rather than using a USE, you could use fully qualified names for your tables, based on your selected @db variable.
So it would be 'FROM ' + @db + '.sys.objects' etc.
You would have to check that your database name is valid (for instance, if you had a name that needed brackets for some reason).
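A small sketch of that, using QUOTENAME so a name that needs brackets still works (the database name here is just an example):
DECLARE @db SYSNAME = N'My Database';   -- example name that needs brackets
DECLARE @sql NVARCHAR(MAX) =
    N'SELECT o.name, ddps.row_count
      FROM ' + QUOTENAME(@db) + N'.sys.objects AS o
      INNER JOIN ' + QUOTENAME(@db) + N'.sys.dm_db_partition_stats AS ddps
          ON o.object_id = ddps.object_id
      WHERE ddps.index_id < 2 AND o.is_ms_shipped = 0;';
EXEC sys.sp_executesql @sql;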

How to add a column in TSQL after a specific column?

I have a table:
MyTable
ID
FieldA
FieldB
I want to alter the table and add a column so it looks like:
MyTable
ID
NewField
FieldA
FieldB
In MySQL I would do:
ALTER TABLE MyTable ADD COLUMN NewField int NULL AFTER ID;
One line, nice, simple, works great. How do I do this in Microsoft's world?
Unfortunately you can't.
If you really want them in that order you'll have to create a new table with the columns in that order and copy data. Or rename columns etc. There is no easy way.
solution:
This will work for tables where there are no dependencies on the changing table which would trigger cascading events. First make sure you can drop the table you want to restructure without any disastrous repercussions. Take a note of all the dependencies and column constraints associated with your table (i.e. triggers, indexes, etc.). You may need to put them back in when you are done.
STEP 1: create the temp table to hold all the records from the table you want to restructure, then copy the data into it. Do not forget to include the new column.
CREATE TABLE #tmp_myTable
( [new_column] [int] NOT NULL, -- new column has been inserted here!
[idx] [bigint] NOT NULL,
[name] [nvarchar](30) NOT NULL,
[active] [bit] NOT NULL
)
-- the new NOT NULL column needs a value for the existing rows (0 is used as a placeholder here)
INSERT INTO #tmp_myTable ([new_column],[idx],[name],[active])
SELECT 0, [idx], [name], [active] FROM myTable
STEP 2: Make sure all records have been copied over and that the column structure looks the way you want.
SELECT TOP 10 * FROM #tmp_myTable ORDER BY 1 DESC
-- you can do COUNT(*) or anything to make sure you copied all the records
STEP 3: DROP the original table:
DROP TABLE myTable
If you are paranoid that bad things could happen, just rename the original table (instead of dropping it). This way it can always be brought back.
EXEC sp_rename myTable, myTable_Copy
STEP 4: Recreate the table myTable the way you want (it should match the #tmp_myTable table structure)
CREATE TABLE myTable
( [new_column] [int] NOT NULL,
[idx] [bigint] NOT NULL,
[name] [nvarchar](30) NOT NULL,
[active] [bit] NOT NULL
)
-- do not forget any constraints you may need
STEP 5: Copy the all the records from the temp #tmp_myTable table into the new (improved) table myTable.
INSERT INTO myTable ([new_column],[idx],[name],[active])
SELECT [new_column],[idx],[name],[active]
FROM #tmp_myTable
STEP 6: Check if all the data is back in your new, improved table myTable. If yes, clean up after yourself and DROP the temp table #tmp_myTable and the myTable_Copy table if you chose to rename it instead of dropping it.
You should be able to do this if you create the column using the GUI in Management Studio. I believe Management Studio actually recreates the table completely behind the scenes, which is why this appears to work.
As others have mentioned, the order of columns in a table doesn't matter, and if it does there is something wrong with your code.
In SQL Enterprise Management Studio, open up your table, add the column where you want it, and then -- instead of saving the change -- generate the change script. You can see how it's done in SQL.
In short, what others have said is right. SQL Management studio pulls all your data into a temp table, drops the table, recreates it with columns in the right order, and puts the temp table data back in there. There is no simple syntax for adding a column in a specific position.
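Roughly, the script SSMS generates follows this pattern (a simplified sketch using the question's table; the real script also wraps everything in a transaction and re-creates constraints and indexes, and the column types here are assumed):
CREATE TABLE dbo.Tmp_MyTable
    (ID int NOT NULL, NewField int NULL, FieldA int NULL, FieldB int NULL);

IF EXISTS (SELECT * FROM dbo.MyTable)
    INSERT INTO dbo.Tmp_MyTable (ID, FieldA, FieldB)
    SELECT ID, FieldA, FieldB FROM dbo.MyTable;

DROP TABLE dbo.MyTable;
EXECUTE sp_rename N'dbo.Tmp_MyTable', N'MyTable', 'OBJECT';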
/*
Script to change the column order of a table
Note this will create a new table to replace the original table.
WARNING : Original Table could be dropped.
HOWEVER it doesn't copy the triggers or other table properties - just the data
*/
Generate a new table with the columns in the order that you require
Select Column2, Column1, Column3 Into NewTable from OldTable
Delete the original table
Drop Table OldTable;
Rename the new table
EXEC sp_rename 'NewTable', 'OldTable';
In Microsoft SQL Server Management Studio (the admin tool for MSSQL) just go into "design" on a table and drag the column to the new position. Not command line but you can do it.
This is absolutely possible, although you shouldn't do it unless you know what you are dealing with.
It took me about 2 days to figure it out.
Here is a stored procedure where I enter:
---database name
(the schema name is "_" for readability)
---table name
---column name
---column data type
(the added column is always nullable, otherwise you won't be able to insert)
---the position of the new column
Since I'm working with tables from the SAM toolkit (and some of them have > 80 columns), a typical variable won't be able to contain the query. That forces the need for an external file. Be careful where you store that file and who has access to it at the NTFS and network level.
Cheers!
USE [master]
GO
/****** Object: StoredProcedure [SP_Set].[TrasferDataAtColumnLevel] Script Date: 8/27/2014 2:59:30 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [SP_Set].[TrasferDataAtColumnLevel]
(
@database varchar(100),
@table varchar(100),
@column varchar(100),
@position int,
@datatype varchar(20)
)
AS
BEGIN
set nocount on
exec ('
declare @oldC varchar(200), @oldCDataType varchar(200), @oldCLen int,@oldCPos int
create table Test ( dummy int)
declare @columns varchar(max) = ''''
declare @columnVars varchar(max) = ''''
declare @columnsDecl varchar(max) = ''''
declare @printVars varchar(max) = ''''
DECLARE MY_CURSOR CURSOR LOCAL STATIC READ_ONLY FORWARD_ONLY FOR
select column_name, data_type, character_maximum_length, ORDINAL_POSITION from ' + @database + '.INFORMATION_SCHEMA.COLUMNS where table_name = ''' + @table + '''
OPEN MY_CURSOR FETCH NEXT FROM MY_CURSOR INTO @oldC, @oldCDataType, @oldCLen, @oldCPos WHILE @@FETCH_STATUS = 0 BEGIN
if(@oldCPos = ' + convert(varchar(10), @position) + ')
begin
exec(''alter table Test add [' + @column + '] ' + @datatype + ' null'')
end
if(@oldCDataType != ''timestamp'')
begin
set @columns += @oldC + '' , ''
set @columnVars += ''@'' + @oldC + '' , ''
if(@oldCLen is null)
begin
if(@oldCDataType != ''uniqueidentifier'')
begin
set @printVars += '' print convert('' + @oldCDataType + '',@'' + @oldC + '')''
set @columnsDecl += ''@'' + @oldC + '' '' + @oldCDataType + '', ''
exec(''alter table Test add ['' + @oldC + ''] '' + @oldCDataType + '' null'')
end
else
begin
set @printVars += '' print convert(varchar(50),@'' + @oldC + '')''
set @columnsDecl += ''@'' + @oldC + '' '' + @oldCDataType + '', ''
exec(''alter table Test add ['' + @oldC + ''] '' + @oldCDataType + '' null'')
end
end
else
begin
if(@oldCLen < 0)
begin
set @oldCLen = 4000
end
set @printVars += '' print @'' + @oldC
set @columnsDecl += ''@'' + @oldC + '' '' + @oldCDataType + ''('' + convert(character,@oldCLen) + '') , ''
exec(''alter table Test add ['' + @oldC + ''] '' + @oldCDataType + ''('' + @oldCLen + '') null'')
end
end
if exists (select column_name from INFORMATION_SCHEMA.COLUMNS where table_name = ''Test'' and column_name = ''dummy'')
begin
alter table Test drop column dummy
end
FETCH NEXT FROM MY_CURSOR INTO @oldC, @oldCDataType, @oldCLen, @oldCPos END CLOSE MY_CURSOR DEALLOCATE MY_CURSOR
set @columns = reverse(substring(reverse(@columns), charindex('','',reverse(@columns)) +1, len(@columns)))
set @columnVars = reverse(substring(reverse(@columnVars), charindex('','',reverse(@columnVars)) +1, len(@columnVars)))
set @columnsDecl = reverse(substring(reverse(@columnsDecl), charindex('','',reverse(@columnsDecl)) +1, len(@columnsDecl)))
set @columns = replace(replace(REPLACE(@columns, '' '', ''''), char(9) + char(9),'' ''), char(9), '''')
set @columnVars = replace(replace(REPLACE(@columnVars, '' '', ''''), char(9) + char(9),'' ''), char(9), '''')
set @columnsDecl = replace(replace(REPLACE(@columnsDecl, '' '', ''''), char(9) + char(9),'' ''),char(9), '''')
set @printVars = REVERSE(substring(reverse(@printVars), charindex(''+'',reverse(@printVars))+1, len(@printVars)))
create table query (id int identity(1,1), string varchar(max))
insert into query values (''declare '' + @columnsDecl + ''
DECLARE MY_CURSOR CURSOR LOCAL STATIC READ_ONLY FORWARD_ONLY FOR '')
insert into query values (''select '' + @columns + '' from ' + @database + '._.' + @table + ''')
insert into query values (''OPEN MY_CURSOR FETCH NEXT FROM MY_CURSOR INTO '' + @columnVars + '' WHILE @@FETCH_STATUS = 0 BEGIN '')
insert into query values (@printVars )
insert into query values ( '' insert into Test ('')
insert into query values (@columns)
insert into query values ( '') values ( '' + @columnVars + '')'')
insert into query values (''FETCH NEXT FROM MY_CURSOR INTO '' + @columnVars + '' END CLOSE MY_CURSOR DEALLOCATE MY_CURSOR'')
declare @path varchar(100) = ''C:\query.sql''
declare @query varchar(500) = ''bcp "select string from query order by id" queryout '' + @path + '' -t, -c -S '' + @@servername + '' -T''
exec master..xp_cmdshell @query
set @query = ''sqlcmd -S '' + @@servername + '' -i '' + @path
EXEC xp_cmdshell @query
set @query = ''del '' + @path
exec xp_cmdshell @query
drop table ' + @database + '._.' + @table + '
select * into ' + @database + '._.' + @table + ' from Test
drop table query
drop table Test ')
END
Even though the question is old, a more accurate answer about Management Studio is warranted.
You can create the column manually or with Management Studio. But adding the column in a specific position forces Management Studio to recreate the table, which will time out if the table already holds a lot of data, so avoid it unless the table is light.
To change the order of existing columns, you simply need to move them around in Management Studio. This should not require (exceptions most likely exist) Management Studio to recreate the table, since it most likely only changes the ordinal position of the columns in the table definition.
I've done it this way on numerous occasions with tables I could not add columns to through the GUI because of the data in them; I then moved the columns around with the Management Studio GUI and simply saved.
You will go from an assured time out to a few seconds of waiting.
If you are using the GUI to do this, you must deselect the option that prevents the table from being dropped and re-created (Tools > Options > Designers > "Prevent saving changes that require table re-creation"). The manual steps are:
1. Create a new table script that includes the new column, e.g. [DBName].[dbo].[TableName]_NEW
2. Copy the old table's data to the new table: INSERT INTO newTable (col1, col2, ...) SELECT col1, col2, ... FROM oldTable
3. Check that the old and new record counts are the same
4. DROP the old table
5. Rename the new table to the old table's name
6. Re-run your SP to add the new column's values
-- 1. Create New Add new Column Table Script
CREATE TABLE newTable
( [new_column] [int] NOT NULL, -- new column has been inserted here!
[idx] [bigint] NOT NULL,
[name] [nvarchar](30) NOT NULL,
[active] [bit] NOT NULL
)
-- 2. COPY old table data to new table:
INSERT INTO newTable ([new_column],[idx],[name],[active])
SELECT [new_column],[idx],[name],[active]
FROM oldTable
-- 3. Check records old and new are the same:
select sum(cnt) FROM (
SELECT 'table_1' AS table_name, COUNT(*) cnt FROM newTable
UNION
SELECT 'table_2' AS table_name, -COUNT(*) cnt FROM oldTable
) AS cnt_sum
-- 4. DROP old table
DROP TABLE oldTable
-- 5. rename newtable to oldtable
USE [DB_NAME]
EXEC sp_rename newTable, oldTable
You have to rebuild the table. Luckily, the order of the columns doesn't matter at all!
Watch as I magically reorder your columns:
SELECT ID, Newfield, FieldA, FieldB FROM MyTable
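If something really does depend on seeing the columns in a fixed order, a view is a lightweight way to present them that way (a sketch; the view name is made up):
CREATE VIEW dbo.vMyTable
AS
SELECT ID, NewField, FieldA, FieldB
FROM dbo.MyTable;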
Also this has been asked about a bazillion times before.