How to fill table variable with correct IDENTITY values? - tsql

Well, I have two tables:
CREATE TABLE Temp(
TEMP_ID int IDENTITY(1,1) NOT NULL, ... )
CREATE TABLE TEMP1(
TEMP1_ID int IDENTITY(1,1) NOT NULL,
TEMP_ID int, ... )
They are linked by the TEMP_ID foreign key.
In a stored procedure I need to create tons of Temp and Temp1 rows and update them, so I created a temp table (#TEMP), work against it, and finally make one big INSERT into Temp. My question is: how can I fill #Temp with the correct TEMP_ID values safely when multiple sessions are inserting at the same time?

You can use SCOPE_IDENTITY() to get the identity value of the last row inserted in the current scope. You can use the OUTPUT clause to return all newly inserted (or updated) rows:
create table #t1
(
id int primary key identity,
val int
)
Insert into #t1 (val)
output inserted.id, inserted.val
values (10), (20), (30)
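For the multi-session concern specifically, the OUTPUT ... INTO form is the safe pattern, because it only ever returns rows produced by your own statement. A minimal sketch, assuming Temp and Temp1 carry just one extra column Val (the real column lists will differ):
DECLARE @newIds TABLE (TEMP_ID int, Val int);

-- the "one big INSERT into Temp", capturing the generated identity values
INSERT INTO Temp (Val)
OUTPUT inserted.TEMP_ID, inserted.Val INTO @newIds
SELECT Val FROM #TEMP;

-- @newIds now maps each generated TEMP_ID to its staging row,
-- so the related Temp1 rows can be built from it
INSERT INTO Temp1 (TEMP_ID)
SELECT TEMP_ID FROM @newIds;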

Related

Duplicate rows on insert in Firebird DB and Python

I have a database with a DDL like this:
CREATE TABLE table(
ID INTEGER NOT NULL,
COLUMN1 VARCHAR(50),
PRIMARY KEY (ID)
);
and with ID autoincrementing on insert like this:
CREATE TRIGGER table_BI FOR table BEFORE INSERT AS
BEGIN
NEW.ID = GEN_ID(generator, 1);
END
My problem is that when I add a new row with the same value in COLUMN1 as an already existing row, I get two rows with the same values but different IDs.
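A minimal sketch of one way to handle this, assuming the intent is that COLUMN1 should be unique (the question does not say so explicitly), and using my_table in place of the reserved-word placeholder name:
-- reject a second row with the same COLUMN1 at the database level
ALTER TABLE my_table ADD CONSTRAINT uq_my_table_column1 UNIQUE (COLUMN1);

-- or update the existing row instead of failing; the BEFORE INSERT trigger
-- still fills ID when a real insert happens
UPDATE OR INSERT INTO my_table (COLUMN1)
VALUES ('some value')
MATCHING (COLUMN1);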

Is there pattern to have union table for different items?

I'd like a constraint based on the combination of 2 columns. I don't see a way to use a plain foreign key here, because it would have to be a conditional FK. I hope this basic SQL shows the problem:
CREATE TABLE performer_type (
id serial primary key,
type varchar
);
INSERT INTO performer_type ( id, type ) VALUES (1, 'singer'), ( 2, 'band');
CREATE TABLE singer (
id serial primary key,
name varchar
);
INSERT INTO singer ( id, name ) VALUES (1, 'Robert');
CREATE TABLE band (
id serial primary key,
name varchar
);
INSERT INTO band ( id, name ) VALUES (1, 'Animates'), ( 2, 'Zed Leppelin');
CREATE TABLE gig (
id serial primary key,
performer_type_id int default null, /* FK, no problem */
performer_id int default null /* want FK based on previous FK, no good solution so far */
);
INSERT INTO gig ( performer_type_id, performer_id ) VALUES ( 1,1 ), (2,1), (2,2), (1,2), (2,3);
Now, the last INSERT works, but I'd like it to fail for the last 2 value pairs, because there is no singer with ID 2 and no band with ID 3. How can I set up such a constraint?
I already asked a similar question in a MySQL context, and the only solution was to use a trigger. The problem with a trigger is that you can't have a dynamic list of types and tables; I'd like to add types (and related tables) on the fly.
I also found a very promising pattern, but it is upside down for my case, and I haven't figured out how to turn it around to work for me.
What I am looking for seems such a useful pattern that I think there must be some common way to do it. Is there?
Edit.
It seems I chose bad items for my examples, so let me make it clear: the different performer tables (singer and band) have NO relation between them. The gig table just has to list tasks for different performers, without setting any relations between them.
Another example would be items in stock: I may have an item_type table defining hundreds of item types with related tables (for example, orange and house), and there should be a stock table listing every occurrence of an item.
I am using PostgreSQL 9.6.
Based on @Laurenz Albe's answer, I built a solution for the example above. The main difference: there is a parent table performer, whose PK is the FK/PK of the specific performer tables and is also referenced from the gig table.
CREATE TABLE performer_type (
id serial primary key,
type varchar
);
INSERT INTO performer_type ( id, type ) VALUES (1, 'singer' ), ( 2, 'band' );
CREATE TABLE performer (
id serial primary key,
performer_type_id int REFERENCES performer_type(id)
);
CREATE TABLE singer (
id int primary key REFERENCES performer(id),
name varchar
);
INSERT INTO performer ( performer_type_id ) VALUES (1); -- get PK 1 for next statement
INSERT INTO singer ( id, name ) VALUES (1, 'Robert');
CREATE TABLE band (
id int primary key REFERENCES performer(id),
name varchar
);
INSERT INTO performer ( performer_type_id ) VALUES (2); -- get PK 2 for next statement
INSERT INTO band ( id, name ) VALUES (2, 'Animates');
INSERT INTO performer ( performer_type_id ) VALUES (2); -- get PK 3 for next statement
INSERT INTO band ( id, name ) VALUES (3, 'Zed Leppelin');
CREATE TABLE gig (
id serial primary key,
performer_id int REFERENCES performer(id)
);
INSERT INTO gig ( performer_id ) VALUES (1), (2), (3), (4);
And the last INSERT fails, as expected:
ERROR: insert or update on table "gig" violates foreign key constraint "gig_performer_id_fkey"
DETAIL: Key (performer_id)=(4) is not present in table "performer".
But there is an annoying problem for me: I have no good way to tell which ID belongs to a singer and which to a band, etc. (in the original example the performer_type_id in the gig table served that purpose), because any performer_id may belong to any kind of performer. So I'd like every performer type to have its own ID range, and for that I create a dummy table and a sequence for every type:
CREATE TABLE band_id (
id int primary key,
dummy boolean default null
);
CREATE SEQUENCE band_id_seq START 1;
ALTER TABLE band_id ALTER COLUMN id SET DEFAULT nextval('band_id_seq');
CREATE TABLE singer_id (
id int primary key,
dummy boolean default null
);
CREATE SEQUENCE singer_id_seq START 2000000;
ALTER TABLE singer_id ALTER COLUMN id SET DEFAULT nextval('singer_id_seq');
Now, to insert a new row into a specific performer table, I have to get the next ID for it:
INSERT INTO band_id (dummy) VALUES (NULL);
I am trying to figure out whether this process can be solved at the DB level, or whether something has to be done at the app level. It would be nice if inserting into the band table could (see the sketch after this list):
use a before trigger to insert into band_id and generate the type-specific ID
use a before trigger to insert this new ID into the performer table
include this new ID in the INSERT into band itself
The first 2 points are easy, but the last point is not clear to me yet.
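A minimal PL/pgSQL sketch of those three steps for the band table (band_before_insert and band_bi are names I made up; performer_type_id = 2 is the 'band' type from the examples above). Assigning NEW.id in a BEFORE INSERT trigger is what carries the generated ID into the actual INSERT:
CREATE OR REPLACE FUNCTION band_before_insert() RETURNS trigger AS $$
BEGIN
-- 1. draw the next ID from the band-specific range
INSERT INTO band_id (dummy) VALUES (NULL) RETURNING id INTO NEW.id;
-- 2. register that ID in the parent performer table
INSERT INTO performer (id, performer_type_id) VALUES (NEW.id, 2);
-- 3. the modified NEW row is what actually gets inserted into band
RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER band_bi
BEFORE INSERT ON band
FOR EACH ROW EXECUTE PROCEDURE band_before_insert();

-- now the band ID is generated entirely at the DB level
INSERT INTO band (name) VALUES ('The Kinks');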

sql drop primary key from temp table

I want to create a temp table using SELECT INTO syntax, like:
select top 0 * into #AffectedRecord from MyTable
MyTable has a primary key. When I insert records using MERGE INTO syntax, the primary key becomes a problem. How can I drop the PK constraint from the temp table?
The "SELECT TOP (0) INTO.." trick is clever but my recommendation is to script out the table yourself for reasons just like this. SELECT INTO when you're actually bringing in data, on the other hand, is often faster than creating the table and doing the insert. Especially on 2014+ systems.
The existence of a primary key has nothing to do with your problem. Key constraints and indexes don't get created when using SELECT INTO from another table; only the data types and nullability are carried over. Consider the following code and note my comments:
USE tempdb -- a good place for testing on non-prod servers.
GO
IF OBJECT_ID('dbo.t1') IS NOT NULL DROP TABLE dbo.t1;
IF OBJECT_ID('dbo.t2') IS NOT NULL DROP TABLE dbo.t2;
GO
CREATE TABLE dbo.t1
(
id int identity primary key clustered,
col1 varchar(10) NOT NULL,
col2 int NULL
);
GO
INSERT dbo.t1(col1) VALUES ('a'),('b');
SELECT TOP (0)
id, -- this creates the column, including the identity, but NOT the primary key
CAST(id AS int) AS id2, -- this creates the column but it will be nullable. No identity
ISNULL(CAST(id AS int),0) AS id3, -- this creates the column and makes it NOT NULL. No identity.
col1,
col2
INTO dbo.t2
FROM t1;
Here's the (cleaned up for brevity) DDL for the new table I created:
-- New table
CREATE TABLE dbo.t2
(
id int IDENTITY(1,1) NOT NULL,
id2 int NULL,
id3 int NOT NULL,
col1 varchar(10) NOT NULL,
col2 int NULL
);
Notice that the primary key is gone. When I brought in id as-is, it kept the identity. Casting the id column as an int (even though it already is an int) is how I got rid of the identity property. Wrapping the cast in ISNULL is how to make the column NOT NULL.
By default, IDENTITY_INSERT is OFF here, so this query will fail:
INSERT dbo.t2 (id, id3, col1) VALUES (1, 1, 'x');
Msg 544, Level 16, State 1, Line 39
Cannot insert explicit value for identity column in table 't2' when IDENTITY_INSERT is set to OFF.
Setting identity insert on will fix the problem:
SET IDENTITY_INSERT dbo.t2 ON;
INSERT dbo.t2 (id, id3, col1) VALUES (1, 1, 'x');
But now you MUST provide a value for that column. Note the error here:
INSERT dbo.t2 (id3, col1) VALUES (1, 'x');
Msg 545, Level 16, State 1, Line 51
Explicit value must be specified for identity column in table 't2' either when IDENTITY_INSERT is set to ON
Hopefully this helps.
On a side note: this is a good way to play around with and understand how SELECT INTO works. I used a permanent table because it's easier to find.
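And a minimal sketch of the "script it out yourself" approach recommended at the top; the column list below is hypothetical, since MyTable's definition isn't shown, and the point is simply to leave out the PRIMARY KEY constraint and the IDENTITY property:
IF OBJECT_ID('tempdb..#AffectedRecord') IS NOT NULL DROP TABLE #AffectedRecord;

CREATE TABLE #AffectedRecord
(
id int NOT NULL, -- plain int: no IDENTITY, no PRIMARY KEY
col1 varchar(10) NOT NULL,
col2 int NULL
);
-- the temp table can now be used freely as a MERGE target or OUTPUT sink
-- without the PK or identity getting in the way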

How to insert an identity value into another table

I have two tables:
create table Clients
(
id_client int not null identity(1,1) Primary Key,
name_client varchar(10) not null,
phone_client int not null
)
create table Sales
(
id_sale int not null identity(1,1) Primary Key,
date_sale date not null,
total_sale float not null
id_clients int not null identity(1,1) Foreign Key references Clients
)
So, let's insert ('Ralph', 00000000) into Clients; the id_client is going to be 1 (obviously). The question is: how can I insert that 1 into Sales?
First of all, you cannot have two columns defined as IDENTITY in any table; you will get an error:
Msg 2744, Level 16, State 2, Line 1
Multiple identity columns specified for table 'Sales'. Only one identity column per table is allowed.
So you will not be able to actually create that Sales table.
The id_clients column in the Sales table references an identity column, but it should not itself be defined as identity; it simply receives whatever value the referenced client has.
create table Sales
(
id_sale int not null identity(1,1) Primary Key,
date_sale date not null,
total_sale float not null,
id_clients int not null foreign key references clients(id_client)
)
-- insert a new client - this will create an "id_client" for that entry
insert into dbo.Clients(name_client, phone_client)
values('John Doe', '+44 44 444 4444')
-- get that newly created "id_client" from the INSERT operation
declare @clientID INT = SCOPE_IDENTITY()
-- insert the new "id_client" into your sales table along with other values
insert into dbo.Sales(......, id_clients)
values( ......., @clientID)
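If you insert several clients in one statement, SCOPE_IDENTITY() only gives you the last generated value. A minimal sketch (table and column names taken from the question, sample values made up) that captures all of the new id_client values with OUTPUT ... INTO and then uses them for Sales rows:
declare @newClients table (id_client int, name_client varchar(10));

insert into dbo.Clients (name_client, phone_client)
output inserted.id_client, inserted.name_client into @newClients
values ('Ralph', 12345678), ('Alice', 87654321);

-- one sale per newly created client, using the captured identity values
insert into dbo.Sales (date_sale, total_sale, id_clients)
select getdate(), 0.0, nc.id_client
from @newClients as nc;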
This works for me like a charm. As far as I understand, if I migrate user or customer data from an old copy of the database, the identity column and the UId (user id) or CId (customer id) that gives each row its own unique identity get out of sync. That is why I keep a second column (TId/UId etc.) holding a copy of the identity column's value and relate my data purely through app logic.
When a migration takes place, I can offset the actual SQL identity column (IdColumn) by inserting and then deleting dummy rows up to the largest number found in the old data's TId/CId column. The dummy inserts push the new table's identity value up past the old values, so after the old data (which still carries its old identity values in the second column) has been loaded, new customers or users keep drawing unique values from the identity column. Data in other tables (transactions etc.) stays related to the right customer/user, and new inserts cannot produce duplicates, because the dummy rows have already pushed IdColumn past the old data.
So if I have, say, 920 user records and the largest UId is 950 because 30 got deleted, then I dummy-insert and delete 31 rows in the new table before adding the old 920 records, to ensure the UIds have no duplicates.
It is very roundabout, but it works for my limited understanding of SQL. XD
What this also allows me to do is delete a migrated user/customer/transaction at a later time using the original UId/CId/TId (the copy of the identity) without worrying about hitting the wrong row, which could happen if I aimed at the actual identity column, since that is out of sync for migrated data (rows inserted from an old database). Having a copy = happy days. I could use SET IDENTITY_INSERT [dbo].[UserTable] ON instead, but I would probably screw that up eventually, so this way is, FOR ME, fail safe.
Essentially this makes the data instance-agnostic, to an extent.
I use OUTPUT inserted.IdColumn to save any related pictures with that value in the file name, so I can load them specifically in the picture viewer, and that data is migration safe as well.
To do this, making the copy right at insert time works great:
declare @userId INT = SCOPE_IDENTITY() stores the new identity value.
UPDATE dbo.UserTable picks the table to update, all in this same transaction.
SET UId = @userId writes it to my second column.
WHERE IdColumn = @userId; aims at the correct row.
USE [DatabaseName]
GO
/****** Object: StoredProcedure [dbo].[spInsertNewUser] Script Date:
2022/02/04 12:09:19 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[spInsertNewUser]
@Name varchar(50),
@PhoneNumber varchar(20),
@IDNumber varchar(50),
@Address varchar(200),
@Note varchar(400),
@UserPassword varchar(MAX),
@HideAccount int,
@UserType varchar(50),
@UserString varchar(MAX)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
INSERT INTO dbo.UserTable
([Name],
[PhoneNumber],
[IDNumber],
[Address],
[Note],
[UserPassword],
[HideAccount],
[UserType],
[IsDeleted],
[UserString])
OUTPUT inserted.IdColumn
VALUES
(@Name,
@PhoneNumber,
@IDNumber,
@Address,
@Note,
@UserPassword,
@HideAccount,
@UserType,
0, -- [IsDeleted]: assumed default; the original listing omitted this value
@UserString);
declare @userId INT = SCOPE_IDENTITY()
UPDATE dbo.UserTable
SET UId = @userId
WHERE IdColumn = @userId;
END
Here is the CREATE statement for the table, for testing:
CREATE TABLE [dbo].[UserTable](
[IdColumn] [int] IDENTITY(1,1) NOT NULL,
[UId] [int] NULL,
[Name] [varchar](50) NULL,
[PhoneNumber] [varchar](20) NULL,
[IDNumber] [varchar](50) NULL,
[Address] [varchar](200) NULL,
[Note] [varchar](400) NULL,
[UserPassword] [varchar](max) NULL,
[HideAccount] [int] NULL,
[UserType] [varchar](50) NULL,
[IsDeleted] [int] NULL,
[UserString] [varchar](max) NULL,
CONSTRAINT [PK_UserTable] PRIMARY KEY CLUSTERED
(
[IdColumn] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
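A hypothetical call of the procedure above (all argument values are made up). The OUTPUT clause inside it returns the new IdColumn as a result set to the caller, and the final UPDATE leaves UId equal to that same value:
EXEC dbo.spInsertNewUser
@Name = 'Jane Doe',
@PhoneNumber = '+44 44 444 4444',
@IDNumber = 'ID-0001',
@Address = '1 Example Street',
@Note = 'test row',
@UserPassword = 'hashed-password-here',
@HideAccount = 0,
@UserType = 'Standard',
@UserString = 'user-string-here';
-- returns a one-row result set containing the generated IdColumn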

Generate a key including the value of a column from the row currently being inserted in Oracle - is it possible?

I'm trying to generate a key that includes the value of a column from the row currently being inserted in Oracle. Is it possible to do this?
CREATE TABLE MY_TABLE
(
KEY VARCHAR2(12) not null,
SITEID varchar2(25) not null,
SITENAME varchar2(50),
CONSTRAINT MY_pk PRIMARY KEY (KEY)
);
INSERT INTO MY_TABLE (KEY, SITEID, SITENAME)
VALUES(('ABCD001'||SITEID), 'HYD001', 'HYDERABADSITE');
It would be better to use a BEFORE INSERT trigger to do this, like:
CREATE OR REPLACE
TRIGGER my_table_trigger
BEFORE INSERT ON my_table
FOR EACH ROW
BEGIN
:NEW.KEY := 'ABCD001'||:NEW.siteid;
END;
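With that trigger in place, KEY no longer appears in the INSERT at all. One caveat (my observation, not from the question): with the sample values, 'ABCD001' || 'HYD001' is 13 characters, one more than VARCHAR2(12) allows, so the KEY column needs to be widened or the prefix shortened, for example:
-- widen KEY so the concatenated value fits
ALTER TABLE MY_TABLE MODIFY (KEY VARCHAR2(32));

-- the trigger fills KEY with 'ABCD001' || SITEID = 'ABCD001HYD001'
INSERT INTO MY_TABLE (SITEID, SITENAME)
VALUES ('HYD001', 'HYDERABADSITE');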