I have a list of values:
(56957,85697,56325,45698,21367,56397,14758,39656)
and a 'template' row in a table.
I want to do this:
for value in valuelist:
{
insert into table1 (field1, field2, field3, field4)
select value1, value2, value3, (value)
from table1
where ID = (ID of template row)
}
I know how I would do this in code, like C# for instance, but I'm not sure how to 'loop' this while passing in a new value to the insert statement. (I know that code makes no sense, I'm just trying to convey what I'm trying to accomplish.)
There is no need to loop here: SQL is a set-based language, and you apply your operations to entire sets of data at once, as opposed to looping through row by row.
Insert statements can take their rows either from an explicit list of values or from the result of a regular select statement, for example:
insert into table1(col1, col2)
select col3
,col4
from table2;
There is nothing stopping you from selecting your data from the same place you are inserting into, which will duplicate all your data:
insert into table1(col1, col2)
select col1
,col2
from table1;
If you want to edit one of these column values - say by incrementing the value currently held, you simply apply this logic to your select statement and make sure the resultant dataset matches your target table in number of columns and data types:
insert into table1(col1, col2)
select col1
,col2+1 as col2
from table1;
Optionally, if you only want to do this for a subset of those values, just add a standard where clause:
insert into table1(col1, col2)
select col1
,col2+1 as col2
from table1
where col1 = <your value>;
Now if this isn't enough for you to work it out by yourself, you can join your dataset to your values list to get a version of the data to be inserted for each value in that list. Because you want each row to join to each value, you can use a cross join:
declare @v table(value int);
insert into @v values(56957),(85697),(56325),(45698),(21367),(56397),(14758),(39656);
insert into table1(col1, col2, value)
select t.col1
,t.col2
,v.value
from table1 as t
cross join @v as v;
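Applied back to the question, this is a minimal sketch reusing the @v list from above, assuming the template row has ID = 1 and the new value goes into field4:
insert into table1 (field1, field2, field3, field4)
select t.field1
,t.field2
,t.field3
,v.value
from table1 as t
cross join @v as v
where t.ID = 1; -- ID of the template row (assumed)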
I am inserting data into a table and want to return the inserted data. The inserted data contains foreign keys. I would like to get the whole data with the joins of the foreign keys.
I have tried putting a SELECT in RETURNING without luck. Is this even possible or do I just have to do another query after inserting the data?
Insert statement:
INSERT INTO someTable (col1, col2, col3, foreign_id)
VALUES ('value1', 'value2', 'value3', 1);
So in this case, could I have a RETURNING that basically would give me:
SELECT someTable.*, foreignTable.*
FROM someTable
JOIN foreignTable ON someTable.foreign_id = foreignTable.id;
demo:db<>fiddle
You can use a CTE for this:
WITH inserting AS (
INSERT INTO...
RETURNING <new data>
)
SELECT i.*, ft.*
FROM inserting i JOIN foreign_table ft ...
In this case the INSERT statement will be executed first. The SELECT statement is executed after that, and it can reference the inserted data.
You can use a CTE for that:
with new_row as (
INSERT INTO some_table (col1, col2, col3, foreign_id)
VALUES ('value1', 'value2', 'value3', 1)
returning *
)
SELECT new_row.*, ft.*
FROM new_row
JOIN foreign_table ft ON new_row.foreign_id = ft.id;
I have the following table:
RecordID
Name
Col1
Col2
....
ColN
The RecordID is BIGINT PRIMARY KEY CLUSTERED IDENTITY(1,1) and RecordID and Name are initialized. The other columns are NULLs.
I have a function which returns information about the other columns by Name.
To initialize my table I use the following algorithm:
1. Create a loop
2. Get a row and select its Name value
3. Execute the function using the selected name, and store its result in temp variables
4. Insert the temp variables into the table
5. Move to the next record
Is there a way to do this without looping?
CROSS APPLY was basically built for this:
SELECT D.deptid, D.deptname, D.deptmgrid
,ST.empid, ST.empname, ST.mgrid
FROM Departments AS D
CROSS APPLY fn_getsubtree(D.deptmgrid) AS ST;
Using APPLY
UPDATE st
SET some_row = ca.another_row,
some_row2 = ca.another_row / 2
FROM some_table st
CROSS APPLY
(SELECT TOP 1 another_row FROM another_table at WHERE at.shared_id = st.shared_id) AS ca
WHERE ...
using cross apply in an update statement
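Tying this back to the question, here is a minimal sketch; dbo.fn_GetInfoByName is a hypothetical table-valued function standing in for the function that returns the other column values for a given Name:
UPDATE t
SET Col1 = f.Col1,
Col2 = f.Col2
FROM MyTable AS t
CROSS APPLY dbo.fn_GetInfoByName(t.Name) AS f -- hypothetical function name
WHERE t.Col1 IS NULL; -- only fill rows that are still uninitialized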
You can simply say the following if you already have the records in the table.
UPDATE MyTable
SET
col1 = dbo.col1Method(Name),
col2 = dbo.col2Method(Name),
...
While inserting new records, assuming RecordID is auto-generated, you can say
INSERT INTO MyTable(Name, Col1, Col2, ...)
VALUES(@Name, dbo.col1Method(@Name), dbo.col2Method(@Name), ...)
where @Name contains the value for the Name column.
I have a table with an XML type column. This column contains a dynamic list of attributes that may be different between records.
I am trying to GROUP BY COUNT over these attributes without having to go through the table separately for each attribute.
For example, one record could have attributes A, B and C and the other would have B, C, D then, when I do the GROUP BY COUNT I would get A = 1, B = 2, C = 2 and D = 1.
Is there any straightforward way to do this?
EDIT in reply to Andrew's answer:
Because my knowledge of this construct is superficial at best, I had to fiddle with it to get it to do what I want. In my actual code I needed to group by the TimeRange as well, and to select only some attributes depending on their name. I am pasting the actual query below:
WITH attributes AS (
SELECT
Timestamp,
N.a.value('@name[1]', 'nvarchar(max)') AS AttributeName,
N.a.value('(.)[1]', 'nvarchar(max)') AS AttributeValue
FROM MyTable
CROSS APPLY AttributesXml.nodes('/Attributes/Attribute') AS N(a)
)
SELECT Datepart(dy, Timestamp), AttributeValue, COUNT(AttributeValue)
FROM attributes
WHERE AttributeName IN ('AttributeA', 'AttributeB')
GROUP BY Datepart(dy, Timestamp), AttributeValue
As a side-note: Is there any way to reduce this further?
WITH attributes AS (
SELECT a.value('(.)[1]', 'nvarchar(max)') AS attribute
FROM YourTable
CROSS APPLY YourXMLColumn.nodes('//path/to/attributes') AS N(a)
)
SELECT attribute, COUNT(attribute)
FROM attributes
GROUP BY attribute
CROSS APPLY is like being able to JOIN the XML as a table. The WITH is needed because you can't use XML methods in a GROUP BY clause.
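If you'd rather avoid the WITH, a plain derived table provides the same wrapper; a sketch under the same assumptions:
SELECT attribute, COUNT(attribute)
FROM (
SELECT a.value('(.)[1]', 'nvarchar(max)') AS attribute
FROM YourTable
CROSS APPLY YourXMLColumn.nodes('//path/to/attributes') AS N(a)
) AS x
GROUP BY attribute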
Here is a way to get the attribute data into a form that you can easily work with, while reducing the number of times you need to go through the main table.
--create test data
declare @tmp table (
field1 varchar(20),
field2 varchar(20),
field3 varchar(20))
insert into @tmp (field1, field2, field3)
values ('A', 'B', 'C'),
('B', 'C', 'D')
--convert the individual fields from separate columns to one column
declare @table table(
field varchar(20))
insert into @table (field)
select field1 from @tmp
union all
select field2 from @tmp
union all
select field3 from @tmp
--run the group by and get the count
select field, count(*)
from @table
group by field
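For what it's worth, the same reshaping can be written with UNPIVOT instead of the UNION ALL; a sketch over the same test data (note that UNPIVOT silently skips NULL values):
--same result using UNPIVOT instead of UNION ALL
select field, count(*)
from @tmp
unpivot (field for source_col in (field1, field2, field3)) as u
group by field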
I know the title may seem strange, but this is what I want to do:
I have a table with many records.
I want to get some of these records and insert them into another table. Something like this:
INSERT INTO TableNew SELECT * FROM TableOld WHERE ...
The tricky part is that I want the rows I have inserted to be deleted from the origin table as well.
Is there an easy way to do this? The only thing I have managed so far is to use a temporary table for saving the selected records, then put them in the second table and delete the matching rows from the first table. It is a solution, but with so many records (over three and a half million) I am looking for some other idea...
In SQL Server 2005+, use the OUTPUT clause like this:
DELETE FROM TableOld
OUTPUT DELETED.* INTO TableNew
WHERE YourCondition
It will be performed in a single transaction, and will either complete or roll back as a whole.
You can use the insert ... output clause to store the IDs of the copied rows in a table variable. Then you can delete the rows from the original table based on that table variable.
declare @Table1 table (id int, name varchar(50))
declare @Table2 table (id int, name varchar(50))

insert @Table1 (id,name)
select 1, 'Mitt'
union all select 2, 'Newt'
union all select 3, 'Rick'
union all select 4, 'Ron'

declare @copied table (id int)

insert @Table2
(id, name)
output inserted.id
into @copied
select id
, name
from @Table1
where name <> 'Mitt'

delete @Table1
where id in
(
select id
from @copied
)

select *
from @Table1
Working example at Data Explorer.
You should do something like this:
INSERT INTO "table1" ("column1", "column2", ...)
SELECT "column3", "column4", ...
FROM "table2"
WHERE ...
DELETE FROM "table2"
WHERE ...
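Note that unlike the OUTPUT approach above these are two separate statements, so to keep the move atomic you would wrap them in a transaction. A minimal sketch, where table_old, table_new and the archive_flag condition are assumed example names:
BEGIN TRANSACTION;

-- copy the selected rows to the destination
INSERT INTO table_new (column1, column2)
SELECT column1, column2
FROM table_old
WHERE archive_flag = 1;

-- then remove them from the source, using the same condition
DELETE FROM table_old
WHERE archive_flag = 1;

COMMIT;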
I have a table like this:
create table table1 (field1 int,
field2 int default 5557,
field3 int default 1337,
field4 int default 1337)
I want to insert a row which has the default values for field2 and field4.
I've tried insert into table1 values (5,null,10,null) but it doesn't work and ISNULL(field2,default) doesn't work either.
How can I tell the database to use the default value for the column when I insert a row?
Best practice is to list your columns so you're independent of table changes (a new column, column order, etc.):
insert into table1 (field1, field3) values (5,10)
However, if you don't want to do this, use the DEFAULT keyword
insert into table1 values (5, DEFAULT, 10, DEFAULT)
Just don't include the columns that you want to use the default value for in your insert statement. For instance:
INSERT INTO table1 (field1, field3) VALUES (5, 10);
...will take the default values for field2 and field4, and assign 5 to field1 and 10 to field3.
This works if all the columns have associated defaults and one does not want to specify the column names:
insert into your_table
default values
Try it like this
INSERT INTO table1 (field1, field3) VALUES (5,10)
Then field2 and field4 should have default values.
I had a case where I had a very simple table, and I basically just wanted an extra row with just the default values. Not sure if there is a prettier way of doing it, but here's one way:
This sets every column in the new row to its default value:
INSERT INTO your_table VALUES ()
Note: This is extra useful for MySQL where INSERT INTO your_table DEFAULT VALUES does not work.
If your columns should not contain NULL values, you need to define them as NOT NULL as well; otherwise the passed-in NULL will be used instead of the default, without producing an error.
If you don't pass in any value to these fields (which requires you to specify the fields that you do want to use), the defaults will be used:
INSERT INTO
table1 (field1, field3)
VALUES (5,10)
You can add a new column with a default constraint like this:
GO
ALTER TABLE Table_name ADD
column_name decimal(18, 2) NOT NULL CONSTRAINT Constant_name DEFAULT 0
GO
ALTER TABLE Table_name SET (LOCK_ESCALATION = TABLE)
GO
COMMIT
To insert the default values you should simply omit those fields, something like this:
Insert into Table (Field2) values(5)
All other fields will be NULL, or will take their default values if any are defined.
CREATE TABLE #dum (id int identity(1,1) primary key, def int NOT NULL default(5), name varchar(25))
-- this works
INSERT #dum (def, name) VALUES (DEFAULT, 'jeff')
SELECT * FROM #dum;
DECLARE @some int
-- this *doesn't* work and I think it should
INSERT #dum (def, name)
VALUES (ISNULL(@some, DEFAULT), 'george')
SELECT * FROM #dum;
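DEFAULT is only legal as a standalone value in the VALUES list, not inside an expression like ISNULL, so one workaround is to branch on the variable instead; a sketch:
-- DEFAULT must stand alone, so branch on the variable
IF @some IS NULL
INSERT #dum (def, name) VALUES (DEFAULT, 'george');
ELSE
INSERT #dum (def, name) VALUES (@some, 'george');

SELECT * FROM #dum;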
CREATE PROC SP_EMPLOYEE -- By using a TYPE parameter and CASE in a stored procedure
(@TYPE INT)
AS
BEGIN
IF @TYPE = 1
BEGIN
SELECT DESIGID, DESIGNAME FROM GP_DESIGNATION
END
IF @TYPE = 2
BEGIN
SELECT ID, NAME, DESIGNAME,
CASE D.ISACTIVE WHEN 'Y' THEN 'ISACTIVE' WHEN 'N' THEN 'INACTIVE' ELSE 'not' END AS ACTIVE
FROM GP_EMPLOYEEDETAILS ED
JOIN GP_DESIGNATION D ON ED.DESIGNATION = D.DESIGID
END
END