Is it possible to configure a check constraint so that it is applied (checked) on every modification, i.e. irrespective of whether or not the checked column is actually modified?
CREATE TABLE dbo.SomeTable (
Id INT IDENTITY(1, 1) NOT NULL
, Name NVARCHAR(50) NOT NULL
, Who NVARCHAR(128) NOT NULL
, CONSTRAINT PK_SomeTable PRIMARY KEY CLUSTERED (Id ASC)
)
GO
ALTER TABLE dbo.SomeTable ADD CONSTRAINT DF_dbo_SomeTable_Who DEFAULT (SUSER_SNAME()) FOR Who;
GO
ALTER TABLE dbo.SomeTable WITH CHECK ADD CONSTRAINT CHK_dbo_SomeTable_WhoEqualsUser CHECK (Who = SUSER_NAME());
GO
INSERT INTO dbo.SomeTable (Name) VALUES (N'SomeBody');
EXECUTE AS USER = N'NotMe';
GO
UPDATE dbo.SomeTable SET Name = N'SomebodyElse'; -- Want this update-attempt to throw an exception
-- But it doesn't throw here because the check constraint isn't checked
UPDATE dbo.SomeTable SET Name = N'SomebodyElse' -- Don't want this update-attempt to throw
, Who = N'NotMe' -- since Who = SUSER_NAME()
GO
REVERT
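For what it's worth: SQL Server only validates a check constraint on UPDATE when one of the columns it references is modified, so a plain check constraint cannot do this. A hedged sketch of the usual workaround, an AFTER trigger that re-validates the rule on every modification (the trigger name is mine):

CREATE TRIGGER TR_dbo_SomeTable_WhoEqualsUser
ON dbo.SomeTable
AFTER INSERT, UPDATE
AS
BEGIN
    -- re-check the rule for every affected row, whether or not Who was modified
    IF EXISTS (SELECT 1 FROM inserted WHERE Who <> SUSER_SNAME())
    BEGIN
        ROLLBACK TRANSACTION;
        THROW 50000, N'Who must equal the current login name.', 1;
    END;
END;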
I want to create the following tables (simplified to just the keys for this example):
CREATE TABLE a (
TestVer VARCHAR(50) PRIMARY KEY,
TestID INT NOT NULL
);
CREATE TABLE b (
RunID SERIAL PRIMARY KEY,
TestID INT NOT NULL
);
Where TestID is not unique, but I want table b's TestID to only contain values from table a's TestID.
I'm fairly certain I can't make it a foreign key, since the target of a foreign key has to be either a primary key or unique; I found that confirmed by this post.
It appears to be possible with triggers according to this post, where mine, on insert, would look something like:
CREATE TRIGGER id_constraint FOR b
BEFORE INSERT
POSITION 0
AS BEGIN
IF (NOT EXISTS(
SELECT TestID
FROM a
WHERE TestID = NEW.TestID)) THEN
EXCEPTION my_exception 'There is no Test with id=' ||
NEW.TestID;
END
But I would rather not use a trigger. What other ways are there to do this, if any?
A trigger is the only way to continuously maintain such a constraint. However, you can delete all unwanted rows as part of a query that uses table b:
with clean_b as (
delete from b
where not exists (select from a where a.TestID = b.TestID)
)
select *
from b
where ...
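That said, if you do go the trigger route in PostgreSQL, a minimal sketch (function and trigger names are mine, assuming the tables above):

CREATE OR REPLACE FUNCTION trg_b_testid_check()
RETURNS trigger AS
$$
BEGIN
    -- reject rows whose TestID has no counterpart in table a
    IF NOT EXISTS (SELECT 1 FROM a WHERE a.TestID = NEW.TestID) THEN
        RAISE EXCEPTION 'There is no Test with id=%', NEW.TestID;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER b_testid_check
BEFORE INSERT OR UPDATE ON b
FOR EACH ROW EXECUTE PROCEDURE trg_b_testid_check();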
I have two tables:
create table Clients
(
id_client int not null identity(1,1) Primary Key,
name_client varchar(10) not null,
phone_client int not null
)
create table Sales
(
id_sale int not null identity(1,1) Primary Key,
date_sale date not null,
total_sale float not null,
id_clients int not null identity(1,1) Foreign Key references Clients
)
So, let's insert ('Ralph', 00000000) into Clients; the id_client is going to be 1 (obviously). The question is: how could I insert that 1 into Sales?
First of all - you cannot have two columns defined as IDENTITY in a table - you will get an error:
Msg 2744, Level 16, State 2, Line 1
Multiple identity columns specified for table 'Sales'. Only one identity column per table is allowed.
So you will not be able to actually create that Sales table.
The id_clients column in the Sales table references an identity column - but it should not itself be defined as identity; it gets whatever value your client row has.
create table Sales
(
id_sale int not null identity(1,1) Primary Key,
date_sale date not null,
total_sale float not null,
id_clients int not null foreign key references clients(id_client)
)
-- insert a new client - this will create an "id_client" for that entry
insert into dbo.Clients(name_client, phone_client)
values('John Doe', 4444444) -- phone_client is declared as int in this schema
-- get that newly created "id_client" from the INSERT operation
declare @clientID INT = SCOPE_IDENTITY()
-- insert the new "id_client" into your sales table along with other values
insert into dbo.Sales(......, id_clients)
values( ......., @clientID)
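Putting it together, a minimal end-to-end sketch (values made up, using the corrected Sales table above):

insert into dbo.Clients (name_client, phone_client)
values ('Ralph', 0);

-- capture the id_client generated by the insert above
declare @clientID INT = SCOPE_IDENTITY();

insert into dbo.Sales (date_sale, total_sale, id_clients)
values (GETDATE(), 99.95, @clientID);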
This works for me like a charm. As far as I understand, if I migrate user or customer data etc. from an old copy of the database, the identity column and the UId (user id) or CId (customer id) used to give each row its own unique identity will become unsynced. Therefore I use a second column (TId/UId etc.) that contains a copy of the identity column's value, and relate my data purely through app logic flow.
When a migration takes place, I can offset the actual SQL identity column (IdColumn) by inserting and then deleting dummy rows, up to the largest number found in the old data's TId/CId column. The dummy inserts push the new table's identity value up past the old data's values, so new inserts cannot produce duplicates. The old rows still carry their old identity values in the second column, so the data in other tables (transactions etc.) stays in sync with the right customer/user, while new customers or users continue to get unique values from the identity column.
So if I have, say, 920 user records and the largest UId is 950 because 30 got deleted, then I dummy-insert and delete 31 rows in the new table before adding the old 920 records, to ensure the UIds will have no duplicates.
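A hedged sketch of that dummy-row trick, assuming a fresh UserTable (created below) and 950 as the largest old UId; note that DBCC CHECKIDENT with RESEED does the same thing in one statement:

-- push the identity value up with throwaway rows
declare @maxOldUId int = 950;
while isnull(IDENT_CURRENT('dbo.UserTable'), 0) < @maxOldUId
begin
    insert into dbo.UserTable ([Name]) values ('dummy');
    delete from dbo.UserTable where [Name] = 'dummy';
end
-- or, equivalently, in one statement:
-- DBCC CHECKIDENT ('dbo.UserTable', RESEED, 950);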
It is very roundabout, but it works for my limited understanding of SQL XD
What this also allows me to do is delete a migrated user/customer/transaction at a later time using the original UId/CId/TId (the copy of the identity), without worrying about the wrong item being deleted, which could happen if I had no copy and aimed at the actual identity column, since that one is out of sync for migrated data (rows inserted from an old database). Having a copy = happy days. I could use SET IDENTITY_INSERT [dbo].[UserTable] ON, but I will probably screw that up eventually, so this way is, FOR ME, fail-safe.
Essentially this makes the data instance-agnostic, to an extent.
I use OUTPUT inserted.IdColumn to get the new id, save any related pictures with it in the file name, and can then call them up specifically for the picture viewer; that data is also migration-safe.
To do this, making the copy at insert time works great:
- declare @userId INT = SCOPE_IDENTITY() to store the new identity value
- UPDATE dbo.UserTable to pick where to update, all in this same transaction
- SET UId = @userId to set it to my second column
- WHERE IdColumn = @userId; to aim at the correct row
USE [DatabaseName]
GO
/****** Object: StoredProcedure [dbo].[spInsertNewUser] Script Date:
2022/02/04 12:09:19 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[spInsertNewUser]
@Name varchar(50),
@PhoneNumber varchar(20),
@IDNumber varchar(50),
@Address varchar(200),
@Note varchar(400),
@UserPassword varchar(MAX),
@HideAccount int,
@UserType varchar(50),
@UserString varchar(MAX)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
INSERT INTO dbo.UserTable
([Name],
[PhoneNumber],
[IDNumber],
[Address],
[Note],
[UserPassword],
[HideAccount],
[UserType],
[IsDeleted],
[UserString])
OUTPUT inserted.IdColumn
VALUES
(@Name,
@PhoneNumber,
@IDNumber,
@Address,
@Note,
@UserPassword,
@HideAccount,
@UserType,
0, -- IsDeleted: the column list includes it, so a value is required; 0 (not deleted) assumed for new users
@UserString);
declare @userId INT = SCOPE_IDENTITY()
UPDATE dbo.UserTable
SET UId = @userId
WHERE IdColumn = @userId;
END
Here is the CREATE statement for the table, for testing:
CREATE TABLE [dbo].[UserTable](
[IdColumn] [int] IDENTITY(1,1) NOT NULL,
[UId] [int] NULL,
[Name] [varchar](50) NULL,
[PhoneNumber] [varchar](20) NULL,
[IDNumber] [varchar](50) NULL,
[Address] [varchar](200) NULL,
[Note] [varchar](400) NULL,
[UserPassword] [varchar](max) NULL,
[HideAccount] [int] NULL,
[UserType] [varchar](50) NULL,
[IsDeleted] [int] NULL,
[UserString] [varchar](max) NULL,
CONSTRAINT [PK_UserTable] PRIMARY KEY CLUSTERED
(
[IdColumn] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
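A hypothetical call, assuming the procedure and table above (all values made up):

EXEC dbo.spInsertNewUser
    @Name = 'Jane Doe',
    @PhoneNumber = '555-0100',
    @IDNumber = 'ID-001',
    @Address = '1 Main Street',
    @Note = '',
    @UserPassword = 'hashed-password-here',
    @HideAccount = 0,
    @UserType = 'standard',
    @UserString = '';
-- returns the new IdColumn via the OUTPUT clause; UId is then set to the same value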
This is my table:
CREATE TABLE [dbo].[TestTable]
(
[Name1] varchar(50) COLLATE French_CI_AS NOT NULL,
[Name2] varchar(255) COLLATE French_CI_AS NULL
)
GO
ALTER TABLE [dbo].[TestTable]
ADD CONSTRAINT [TestTable_uniqueName1]
UNIQUE NONCLUSTERED ([Name1])
ALTER TABLE [dbo].[TestTable]
ADD CONSTRAINT [TestTable_uniqueName1Name2]
UNIQUE NONCLUSTERED ([Name1], [Name2])
GO
ALTER INDEX [TestTable_uniqueName1]
ON [dbo].[TestTable]
DISABLE
GO
My idea is to enable/disable one or the other unique constraint depending on the customer application. That way, I can catch the thrown exception in my C# code and display a specific error message in the GUI.
Now, my problem is to alter the collation of the columns Name1 & Name2: I need to make them case sensitive (French_CS_AS). To alter these columns, I have to drop the two constraints and recreate them. Given the schema above, I cannot create an enabled constraint and then disable it, because for some customers the data contains duplicate keys for one or the other constraint.
For my update script, my idea number 1 was:
1) Save the names of the enabled constraints in a temp table (a sketch of this step follows the list)
2) Drop the constraints
3) Alter the columns
4) Create DISABLED unique constraints
5) Enable specific constraints according to the values saved in step 1
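A sketch of step 1, using sys.indexes (unique constraints are backed by indexes in SQL Server, and is_unique_constraint / is_disabled expose their state):

SELECT i.name
INTO #enabled_unique_constraints
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID(N'dbo.TestTable')
  AND i.is_unique_constraint = 1
  AND i.is_disabled = 0;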
My problem is with step 4: I can't find a way to create a disabled unique constraint with an ALTER TABLE statement. Is it possible to create it directly in the sys.indexes table?
My idea number 2 was:
1) Rename TestTable to TestTableCopy
2) Recreate TestTable with the new column collation, and otherwise the same schema (indexes, FKs, triggers, ...)
3) Disable specific unique constraints in TestTable
4) Migrate the data from TestTableCopy to TestTable
5) Drop TestTableCopy
With this approach, my fear is losing links with other tables/dependencies, because it is a central table in my database.
Is there any other way to achieve my goal?
If necessary, I can use unique indexes instead of unique constraints.
It looks like it is impossible to create a unique index on a column that already has duplicate values.
So, rather than having a disabled unique index either:
not have an index at all (which is the same as having a disabled index from the query processor point of view),
or create a non-unique index.
For those instances where your client has unique data, create a unique index. For those instances where your client has non-unique data, create a non-unique index.
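For example, a sketch where the deployment script decides per customer (the flag variable is hypothetical):

DECLARE @CustomerHasUniqueData bit = 1; -- hypothetical per-customer deployment flag
IF @CustomerHasUniqueData = 1
    CREATE UNIQUE NONCLUSTERED INDEX [TestTable_uniqueName1] ON [dbo].[TestTable] ([Name1]);
ELSE
    CREATE NONCLUSTERED INDEX [TestTable_uniqueName1] ON [dbo].[TestTable] ([Name1]);

The procedure below then enforces the right uniqueness rule itself when inserting or updating: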
CREATE PROCEDURE [dbo].[spUsers_AddUsers]
    @Name1 varchar(50),
    @Name2 varchar(50),
    @Unique bit
AS
declare @err int
begin tran
if @Unique = 1 begin
    if not exists (SELECT * FROM Users WHERE Name1 = @Name1 and Name2 = @Name2)
    begin
        INSERT INTO Users (Name1, Name2)
        VALUES (@Name1, @Name2)
        set @err = @@ERROR
    end else
    begin
        UPDATE Users
        set Name1 = @Name1,
            Name2 = @Name2
        where Name1 = @Name1 and Name2 = @Name2
        set @err = @@ERROR
    end
end else begin
    if not exists (SELECT * FROM Users WHERE Name1 = @Name1)
    begin
        INSERT INTO Users (Name1, Name2)
        VALUES (@Name1, @Name2)
        set @err = @@ERROR
    end else
    begin
        UPDATE Users
        set Name1 = @Name1,
            Name2 = @Name2
        where Name1 = @Name1
        set @err = @@ERROR
    end
end
if @err = 0 commit tran
else rollback tran
So first you check whether you need Name1 and Name2 to be unique together, or just Name1. Then you do an insert or update based on which constraint you have.
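Hypothetical calls, assuming the procedure above:

EXEC dbo.spUsers_AddUsers @Name1 = 'Dupont', @Name2 = 'Jean',  @Unique = 1; -- (Name1, Name2) pair must be unique
EXEC dbo.spUsers_AddUsers @Name1 = 'Dupont', @Name2 = 'Marie', @Unique = 0; -- only Name1 must be unique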
I currently have a parent table:
CREATE TABLE members (
member_id SERIAL NOT NULL, UNIQUE, PRIMARY KEY
first_name varchar(20)
last_name varchar(20)
address address (composite type)
contact_numbers varchar(11)[3]
date_joined date
type varchar(5)
);
and two related tables:
CREATE TABLE basic_member (
activities varchar[3])
INHERITS (members)
);
CREATE TABLE full_member (
activities varchar[])
INHERITS (members)
);
If the type is 'full', the details are entered into the full_member table; if the type is 'basic', into the basic_member table. What I want is that if I run an update and change the type to 'basic' or 'full', the tuple moves into the corresponding table.
I was wondering if I could do this with a rule like:
CREATE RULE tuple_swap_full
AS ON UPDATE TO full_member
WHERE new.type = 'basic'
INSERT INTO basic_member VALUES (old.member_id, old.first_name, old.last_name,
old.address, old.contact_numbers, old.date_joined, new.type, old.activities);
... then delete the record from the full_member
Just wondering if my rule is anywhere near or if there is a better way.
You don't need
member_id SERIAL NOT NULL, UNIQUE, PRIMARY KEY
A PRIMARY KEY implies UNIQUE NOT NULL automatically:
member_id SERIAL PRIMARY KEY
I wouldn't use a hard-coded max length like varchar(20). Just use text and add a check constraint if you really must enforce a maximum length. That's easier to change later.
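For example, such a check constraint could look like this (a sketch):

first_name text CHECK (char_length(first_name) <= 20)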
Syntax for INHERITS is mangled. The key word goes outside the parens around columns.
CREATE TABLE full_member (
activities text[]
) INHERITS (members);
Table names are inconsistent (members <-> member). I use the singular form everywhere in my test case.
Finally, I would not use a RULE for the task. A trigger AFTER UPDATE seems preferable.
Consider the following test case:
Tables:
CREATE SCHEMA x; -- I put everything in a test schema named "x".
-- DROP TABLE x.member CASCADE;
CREATE TABLE x.member (
member_id SERIAL PRIMARY KEY
,first_name text
-- more columns ...
,type text);
CREATE TABLE x.basic_member (
activities text[3]
) INHERITS (x.member);
CREATE TABLE x.full_member (
activities text[]
) INHERITS (x.member);
Trigger function:
Data-modifying CTEs (WITH x AS (DELETE ...)) are the best tool for the purpose. Requires PostgreSQL 9.1 or later.
For older versions, first INSERT then DELETE.
CREATE OR REPLACE FUNCTION x.trg_move_member()
RETURNS trigger AS
$BODY$
BEGIN
CASE NEW.type
WHEN 'basic' THEN
WITH x AS (
DELETE FROM x.member
WHERE member_id = NEW.member_id
RETURNING *
)
INSERT INTO x.basic_member (member_id, first_name, type) -- more columns
SELECT member_id, first_name, type -- more columns
FROM x;
WHEN 'full' THEN
WITH x AS (
DELETE FROM x.member
WHERE member_id = NEW.member_id
RETURNING *
)
INSERT INTO x.full_member (member_id, first_name, type) -- more columns
SELECT member_id, first_name, type -- more columns
FROM x;
END CASE;
RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
Trigger:
Note that it is an AFTER trigger and has a WHEN condition.
WHEN condition requires PostgreSQL 9.0 or later. For earlier versions, you can just leave it away, the CASE statement in the trigger itself takes care of it.
CREATE TRIGGER up_aft
AFTER UPDATE
ON x.member
FOR EACH ROW
WHEN (NEW.type IN ('basic','full')) -- OLD.type cannot be IN ('basic','full')
EXECUTE PROCEDURE x.trg_move_member();
Test:
INSERT INTO x.member (first_name, type) VALUES ('peter', NULL);
UPDATE x.member SET type = 'full' WHERE first_name = 'peter';
SELECT * FROM ONLY x.member;
SELECT * FROM x.basic_member;
SELECT * FROM x.full_member;
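If the trigger works as intended, the first SELECT returns no rows (the row has left the parent table), basic_member is empty, and full_member now contains the row for 'peter' with type 'full'.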
I've got a T-SQL script that converts a field to IDENTITY (in a roundabout way).
How do I convert it to PL/SQL? (And, ideally, figure out whether there is a simpler way to do this, without creating a temporary table.)
The T-SQL script:
-- alter table ts_changes add TS_THREADID VARCHAR(100) NULL;
-- Change the TS_ID field of TS_NOTIFICATIONEVENTS to IDENTITY
BEGIN TRANSACTION
GO
CREATE TABLE dbo.Tmp_TS_NOTIFICATIONEVENTS
(
TS_ID int NOT NULL IDENTITY (1, 1),
TS_TABLEID int NOT NULL,
TS_CASEID int NULL,
TS_WORKFLOWID int NULL,
TS_NOTIFICATIONID int NULL,
TS_PRIORITY int NULL,
TS_STARTDATE int NULL,
TS_TIME int NULL,
TS_WAITSTATUS int NULL,
TS_RECIPIENTID int NULL,
TS_LASTCHANGEDATE int NULL,
TS_ELAPSEDCYCLES int NULL
) ON [PRIMARY]
SET IDENTITY_INSERT dbo.Tmp_TS_NOTIFICATIONEVENTS ON
GO
IF EXISTS(SELECT * FROM dbo.TS_NOTIFICATIONEVENTS)
EXEC('INSERT INTO dbo.Tmp_TS_NOTIFICATIONEVENTS (TS_ID, TS_TABLEID, TS_CASEID, TS_WORKFLOWID, TS_NOTIFICATIONID, TS_PRIORITY, TS_STARTDATE, TS_TIME, TS_WAITSTATUS, TS_RECIPIENTID, TS_LASTCHANGEDATE, TS_ELAPSEDCYCLES)
SELECT TS_ID, TS_TABLEID, TS_CASEID, TS_WORKFLOWID, TS_NOTIFICATIONID, TS_PRIORITY, TS_STARTDATE, TS_TIME, TS_WAITSTATUS, TS_RECIPIENTID, TS_LASTCHANGEDATE, TS_ELAPSEDCYCLES FROM dbo.TS_NOTIFICATIONEVENTS WITH (HOLDLOCK TABLOCKX)')
GO
SET IDENTITY_INSERT dbo.Tmp_TS_NOTIFICATIONEVENTS OFF
GO
DROP TABLE dbo.TS_NOTIFICATIONEVENTS
GO
EXECUTE sp_rename N'dbo.Tmp_TS_NOTIFICATIONEVENTS', N'TS_NOTIFICATIONEVENTS', 'OBJECT'
GO
ALTER TABLE dbo.TS_NOTIFICATIONEVENTS ADD CONSTRAINT
aaaaaTS_NOTIFICATIONEVENTS_PK PRIMARY KEY NONCLUSTERED
(
TS_ID
) WITH( STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
COMMIT
From version 12c, Oracle supports identity columns, e.g.:
CREATE TABLE Tmp_TS_NOTIFICATIONEVENTS
( TS_ID int GENERATED ALWAYS AS IDENTITY,
...
Prior to version 12c, Oracle doesn't have an IDENTITY data type, so there is no equivalent PL/SQL code for this. If you want to ensure that all future inserts automatically get assigned a unique value for TS_ID, you can do this:
1) Find out the highest value currently used:
select max(ts_id) from TS_NOTIFICATIONEVENTS;
2) Create a sequence that starts with a value higher than that, e.g.:
create sequence ts_id_seq start with 100000;
3) Create a trigger to populate the column from the sequence on insert:
create or replace trigger ts_id_trig
before insert on TS_NOTIFICATIONEVENTS
for each row
begin
:new.ts_id := ts_id_seq.nextval;
-- or if pre 11G:
-- select ts_id_seq.nextval into :new.ts_id from dual;
end;
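A quick sanity check (a sketch; it assumes the table definition above, where every column other than TS_ID and TS_TABLEID is nullable):

insert into TS_NOTIFICATIONEVENTS (TS_TABLEID) values (1);
select max(ts_id) from TS_NOTIFICATIONEVENTS;
-- the new row's TS_ID comes from the sequence (>= 100000 in this example)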