SELECT COUNT(*) FROM table_name;
My algorithm is:
check count
count+1 is the new primary key starting point
Then keep on incrementing before every insert operation
But what is this GUID? Does SQL Server provide something that automatically generates an incrementing primary key?
There are 3 options:
CREATE TABLE A
(
ID INT IDENTITY(1,1) PRIMARY KEY,
... Other Columns
)
CREATE TABLE B
(
ID UNIQUEIDENTIFIER DEFAULT NEWID() PRIMARY KEY,
... Other Columns
)
CREATE TABLE C
(
ID UNIQUEIDENTIFIER DEFAULT NEWSEQUENTIALID() PRIMARY KEY,
... Other Columns
)
One reason why you might prefer C rather than B would be to reduce fragmentation if you were to use the ID as the clustered index.
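A minimal usage sketch for the tables above (SomeColumn is a hypothetical stand-in for the omitted columns): leave ID out of the INSERT and the key is generated for you.
INSERT INTO A (SomeColumn) VALUES ('x');
SELECT SCOPE_IDENTITY(); -- the IDENTITY value just generated in this scope
INSERT INTO B (SomeColumn) VALUES ('y'); -- ID defaults to NEWID()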
I'm not sure if you're also asking about IDENTITY or not, but a GUID is a globally unique identifier that is (almost) guaranteed to be unique. It can be used for primary keys but isn't recommended unless you're doing offline work or planning on merging databases.
For example, a "normal" IDENTITY primary key looks like
1 Jason
2 Jake
3 Mike
which when merging with another database which looks like
1 Lisa
2 John
3 Sam
will be tricky. You've got to re-key some columns, make sure that your FKs are in order, etc. Using GUIDs, the data looks like this, and is easy to merge:
1FB74D3F-2C84-43A6-9FB6-0EFC7092F4CE Jason
845D5184-6383-473F-A5D6-4DE98DBFBC39 Jake
8F515331-4457-49D0-A9F5-5814EE7F50BA Mike
CE789C89-E01F-4BCE-AC05-CBDF10419E78 Lisa
4D51B568-107C-4B63-9F7F-24592704118F John
7FA4ED64-7356-4013-A78A-C8CCAB329954 Sam
Note that a GUID takes a lot more space than an INT, and because of this it's recommended to use an INT as a primary key unless you absolutely need to.
create table yourtable
(id int identity(1,1) primary key,
col1 varchar(10)
)
will automatically generate the primary key values for you.
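For example, an insert that leaves id out:
insert into yourtable (col1) values ('abc') -- id is assigned automatically (1, 2, 3, ...)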
Check out GUIDs in T-SQL; I don't have the details at hand right now.
The issue with using count, then count + 1 as the key, is that if you delete a record from the middle, you end up generating a duplicate key. (It is also unsafe under concurrency: two sessions can read the same count and try to insert with the same key.)
EG:
Key Data
1 A
2 B
3 C
4 D
now delete B (the count becomes 3) and insert E. This tries to use 4 as the new primary key, which already exists.
Key Data
1 A
3 C
4 D <--After delete count = 3 here
4 E <--Attempted insert with key 4
You could use a primary key with auto-increment to make sure you don't have this issue (shown here in MySQL's AUTO_INCREMENT syntax; the SQL Server equivalent is IDENTITY, as above):
CREATE TABLE myTable
(
P_Id int NOT NULL AUTO_INCREMENT,
PRIMARY KEY (P_Id)
)
Or you could use a GUID. A GUID is a 128-bit integer, represented as a 32-character hex string:
Key Data
24EC84E0-36AA-B489-0C7B-074837BCEA5D A
.
.
This gives 2^128 possible values (really large), so the chance of one computer creating the same value twice is extremely small. Besides, there are algorithms designed to help ensure that this doesn't happen. So a GUID is a pretty good choice for a key as well.
Whether you use an integer or a GUID usually depends on the application, policy, etc.
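As a quick illustration, each call to NEWID() in T-SQL returns a fresh GUID:
SELECT NEWID(); -- e.g. 24EC84E0-36AA-B489-0C7B-074837BCEA5D
SELECT NEWID(); -- a different value on every call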
I have two tables whose corresponding FOREIGN KEYs I want to fill simultaneously through a TRIGGER at the time of inserting data into the customers table:
CREATE TABLE customers (
customer_id SERIAL PRIMARY KEY,
sld_id integer,
customer_name varchar(35)
);
CREATE TABLE slds (
sld_id SERIAL PRIMARY KEY,
customer_id integer,
sld_code varchar(8) UNIQUE
);
ALTER TABLE customers
ADD CONSTRAINT customers_sld_id_fk
FOREIGN KEY (sld_id)
REFERENCES slds(sld_id);
ALTER TABLE slds
ADD CONSTRAINT slds_customer_id_fk
FOREIGN KEY(customer_id)
REFERENCES customers(customer_id);
I have tried to use an AFTER INSERT trigger function, but NEW.customer_id returned NULL.
Then I used BEFORE INSERT, which got me the value of NEW.customer_id. However, because of the constraint, and because the insertion hadn't taken place yet, the FOREIGN KEY constraint was not fulfilled and I got an error.
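Roughly, the attempt looked like this (a simplified sketch; the function and trigger names are just illustrative):
CREATE OR REPLACE FUNCTION fill_sld() RETURNS trigger AS $$
BEGIN
    -- create the matching slds row for the new customer
    INSERT INTO slds (customer_id) VALUES (NEW.customer_id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- tried as AFTER INSERT first, then as BEFORE INSERT
CREATE TRIGGER customers_fill_sld
AFTER INSERT ON customers
FOR EACH ROW EXECUTE PROCEDURE fill_sld();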
I have read here that currval() and lastval() can be used, but they are not recommended.
So I created a proxy table to store the generated values, and then an AFTER INSERT trigger to fill those fields back into the related tables.
I thought of using a CREATE TEMP TABLE, but found out that it only lasts for the duration of the calling function and not the connection session. Maybe I misunderstood the error message.
Is this a normal efficient practice? Namely, having a dirty table around just to use for such situations.
Or maybe there is another way to achieve this without using a proxy table?
EDITED:
SAMPLE DATA
customers TABLE:
customer_id sld_id customer_name
1 1 johns
3 2 jenn
4 3 thomas
7 4 jeff
8 5 robin
9 6 chris
10 7 larry
slds TABLE:
sld_id sld_code customer_id
1 SL747561 1
2 SL710031 3
3 SL719995 4
4 SL765369 7
5 SL738011 8
6 SL722232 9
7 SL751591 10
EDIT 2:
Forgot to mention that sld_code is generated within a trigger function:
sld_code varchar(8) := 'SL7'||to_char(floor(random() * 100000 + 1)::int, 'fm00000');
I'm guessing the answer is no, but... is it possible to enforce uniqueness on only part of a composite primary key?
create table foo (
id integer,
yesno boolean,
extra text,
primary key (id, yesno, extra)
)
The idea here is that I want id + yesno to be unique for this particular table, but I want to include extra in the index so I can take advantage of Postgres index-only scans.
Yes, I could create a second, unique index on id + yesno, but that would be wasteful.
You can use the INCLUDE option to add extra columns to the index that are not part of the index key itself.
create table foo (
id integer not null,
yesno boolean not null,
extra text
);
create unique index foo_uk
on foo (id, yesno)
include (extra);
You did not indicate what Postgres version you have, so this may not be appropriate, as you need at least version 11.
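As a quick sanity check (hypothetical inserts): the second row repeats (id, yesno) and fails even though extra differs, while extra is still carried in the index for index-only scans.
insert into foo values (1, true, 'a'); -- ok
insert into foo values (1, true, 'b'); -- ERROR: duplicate key value violates unique constraint "foo_uk"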
I have a problem here that I couldn't find a proper solution to in my research; maybe that's because I couldn't figure out the exact terms to search for, so if this is a duplicate I will delete it.
My problem is that I want to know whether it is possible to forbid certain combinations of data between two fields. I will show the structure and the kind of data I want to avoid; it will be easier to understand.
Table_A
------------------------
id integer (PK)
description varchar(50)

Table_B
-------------------------------
id integer (PK)
title varchar(50)
id1_fromA (FK A->id)
id2_fromA (FK A->id)
I'm trying to validate the following data on table Table_B (the combination is between id1_fromA and id2_fromA):
id title id1_fromA id2_fromA
1 Some Title 1 2 --It will be permitted
2 Some other 1 2 --It is a duplicate NOT ALLOWED
3 One more 1 1 --It is equals NOT ALLOWED
4 Another 2 1 --It is the same as registry id 1 so NOT ALLOWED
5 Sample data 3 2 --It is ok
With the above data I can easily solve the problem for registry ID=2 with
ALTER TABLE table_B ADD CONSTRAINT UK_TO_A_FKS UNIQUE (id1_fromA, id2_fromA);
And the problem for registry ID=3 with
ALTER TABLE table_B ADD CONSTRAINT CHK_TO_A_FKS CHECK (id1_fromA != id2_fromA);
My problem is with registry ID=4: I want to avoid duplicated combinations such as 1,2 = 2,1. Is it possible to do this with a CONSTRAINT, an INDEX, or a UNIQUE, or will I need to create a trigger or a procedure?
Thanks in advance.
You can't do this with a unique constraint, but you can do this with a unique index.
create unique index UK_TO_A_FKS
on table_b (least(id1_froma, id2_froma), greatest(id1_froma, id2_froma));
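With that index in place, the sample rows above behave as intended (hypothetical inserts):
insert into table_b values (1, 'Some Title', 1, 2); -- ok
insert into table_b values (4, 'Another', 2, 1);    -- fails: (2,1) normalizes to the (1,2) pair already indexed
insert into table_b values (5, 'Sample data', 3, 2); -- ok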
I have a laboratory analysis database and I'm working on the best data layout. I've seen some suggestions based on similar requirements for using a "Shared Primary Key", but I don't see the advantages over just foreign keys. I'm using PostgreSQL; the tables are listed below:
Sample
___________
sample_id (PK)
sample_type (where in the process the sample came from)
sample_timestamp (when was the sample taken)
Analysis
___________
analysis_id (PK)
sample_id (FK references sample)
method (what analytical method was performed)
analysis_timestamp (when did the analysis take place)
analysis_notes
gc
____________
analysis_id (shared Primary key)
gc_concentration_meoh (methanol concentration)
gc_concentration_benzene (benzene concentration)
spectrophotometer
_____________
analysis_id
spectro_nm (wavelength used)
spectro_abs (absorbance measured)
I could use this design, or I could move the fields from the analysis table into both the gc and spectrophotometer tables, and just use foreign keys between sample, gc, and spectrophotometer tables. The only advantage I see of this design is in cases where I would just want information on how many or what types of analyses were performed, without having to join in the actual results. However, the additional rules to ensure referential integrity between the shared primary keys, and managing extra joins and triggers (on delete cascade, etc) appears to make it more of a headache than the minor advantages. I'm not a DBA, but a scientist, so please let me know what I'm missing.
UPDATE:
A shared primary key (as I understand it) is like a one-to-one foreign key with the additional constraint that each value in the parent table (analysis) must appear in one of the child tables once, and no more than once.
I've seen some suggestions based on similar requirements for using a
"Shared Primary Key", but I don't see the advantages over just foreign
keys.
If I've understood your comments above, the advantage is that only the first implements the requirement that each row in the parent match a row in one child, and only in one child. Here's one way to do that.
create table analytical_methods (
method_id integer primary key,
method_name varchar(25) not null unique
);
insert into analytical_methods values
(1, 'gc'),(2, 'spec'), (3, 'Atomic Absorption'), (4, 'pH probe');
create table analysis (
analysis_id integer primary key,
sample_id integer not null, --references samples, not shown
method_id integer not null references analytical_methods (method_id),
analysis_timestamp timestamp not null,
analysis_notes varchar(255),
-- This unique constraint lets the pair of columns be the target of
-- foreign key constraints from other tables.
unique (analysis_id, method_id)
);
-- The combination of a) the default value and the check() constraint on
-- method_id, and b) the foreign key constraint on the paired columns
-- analysis_id and method_id guarantee that rows in this table match a
-- gc row in the analysis table.
--
-- If all the child tables have similar constraints, a row in analysis
-- can match a row in one and only one child table.
create table gc (
analysis_id integer primary key,
method_id integer not null
default 1
check (method_id = 1),
foreign key (analysis_id, method_id)
references analysis (analysis_id, method_id),
gc_concentration_meoh integer not null,
gc_concentration_benzene integer not null
);
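The spectrophotometer child table follows the same pattern (method_id = 2 per the analytical_methods rows above; the column types here are assumptions, since the original listing doesn't give them):
create table spectrophotometer (
analysis_id integer primary key,
method_id integer not null
default 2
check (method_id = 2),
foreign key (analysis_id, method_id)
references analysis (analysis_id, method_id),
spectro_nm integer not null,   -- wavelength used
spectro_abs numeric not null   -- absorbance measured
);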
It looks like in my case this supertype/subtype model is not the best choice. Instead, I should move the fields from the analysis table into all the child tables and make a series of simple foreign key relationships. The advantage of the supertype/subtype model is when using the primary key of the supertype as a foreign key in another table. Since I am not doing this, the extra layer of complexity will not add anything.
I thought this would be simple, but I can't seem to use AUTO_INCREMENT in my DB2 database. I did some searching and people seem to be using "Generated by Default", but this doesn't work for me.
If it helps, here's the table I want to create with the sid being auto incremented.
create table student(
sid integer NOT NULL <auto increment?>,
sname varchar(30),
PRIMARY KEY (sid)
);
Any pointers are appreciated.
What you're looking for is called an IDENTITY column:
create table student (
sid integer not null GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1)
,sname varchar(30)
,PRIMARY KEY (sid)
);
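With GENERATED ALWAYS you simply leave sid out of the insert and DB2 assigns it:
INSERT INTO student (sname) VALUES ('Lars'); -- sid is generated automatically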
A sequence is another option for doing this, but you need to determine which one is proper for your particular situation. Read this for more information comparing sequences to identity columns.
You can also create an auto-increment field with a sequence object (this object generates a number sequence).
Use the following CREATE SEQUENCE syntax:
CREATE SEQUENCE seq_person
MINVALUE 1
START WITH 1
INCREMENT BY 1
CACHE 10
The code above creates a sequence object called seq_person that starts at 1 and increments by 1. It also caches up to 10 values for performance; the cache option specifies how many sequence values are preallocated and kept in memory for faster access.
To insert a new record into the "Persons" table, we have to retrieve the next value from the seq_person sequence. In DB2 the syntax for this is NEXT VALUE FOR (seq_person.nextval is the Oracle form):
INSERT INTO Persons (P_Id, FirstName, LastName)
VALUES (NEXT VALUE FOR seq_person, 'Lars', 'Monsen')
The SQL statement above inserts a new record into the "Persons" table. The "P_Id" column is assigned the next number from the seq_person sequence, the "FirstName" column is set to "Lars", and the "LastName" column is set to "Monsen".
If you are still not able to make the column auto-increment while creating the table, as a workaround you can first create the table:
create table student(
sid integer NOT NULL,
sname varchar(30),
PRIMARY KEY (sid)
);
and then explicitly alter the column by using the following:
alter table student alter column sid set GENERATED BY DEFAULT AS IDENTITY
Or
alter table student alter column sid set GENERATED BY DEFAULT AS IDENTITY (start with 100)
Here are a few optional parameters added for creating "future safe" sequences (NO MAXVALUE and NO CYCLE ensure the sequence never wraps around and reuses values):
CREATE SEQUENCE <NAME>
START WITH 1
INCREMENT BY 1
NO MAXVALUE
NO CYCLE
CACHE 10;