This may be marked as a duplicate, but I am still having the issue even after referring to
Create Unique case-insensitive constraint on two varchar fields
I have a table std_tbl having some duplicate records in one of the columns say Column_One.
I created a unique constraint on that column
ALTER TABLE std_tbl
ADD CONSTRAINT Unq_Column_One
UNIQUE (Column_One) ENABLE NOVALIDATE;
I used ENABLE NOVALIDATE as I want to keep existing duplicate records and validate future records for duplicates.
But here, the constraint is case sensitive: if Column_One already contains 'abcd', it still allows 'Abcd' and 'ABCD' to be inserted into the table.
I want the check to be case insensitive, so that case is ignored while validating data. For this I came up with the following solution.
CREATE UNIQUE INDEX Unq_Column_One_indx ON std_tbl (LOWER(Column_One));
But it is giving me the error:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
Please help me out...
This occurs when you try to execute a CREATE UNIQUE INDEX statement on one or more columns that contain duplicate values.
Two ways to resolve (that I know of):
Remove the UNIQUE keyword from your CREATE UNIQUE INDEX statement and rerun the command (i.e. if the values need not be unique).
If they must be unique, delete the extraneous records that are causing the duplicate values and rerun the CREATE UNIQUE INDEX statement.
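For the second option, here is a rough sketch (not part of the original answer, and assuming it is acceptable to keep one arbitrary survivor per case-insensitive value) of how the duplicates could be located and removed in Oracle before recreating the index:
-- find the case-insensitive duplicates first
SELECT LOWER(Column_One), COUNT(*)
FROM std_tbl
GROUP BY LOWER(Column_One)
HAVING COUNT(*) > 1;
-- if those rows really are extraneous, keep one row per value and delete the rest,
-- then rerun the CREATE UNIQUE INDEX statement
DELETE FROM std_tbl
WHERE ROWID NOT IN (SELECT MIN(ROWID)
                    FROM std_tbl
                    GROUP BY LOWER(Column_One));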
Related
I have a table ideas with columns idea_id, element_id and element_value.
Initially, I had created a composite primary key (ideas_pkey) using all three columns, but I started facing index size limit issues with the index associated with the primary key because the element_value column can hold huge values.
Hence, I created another unique index hashing the column with potentially large values:
CREATE UNIQUE INDEX ideas_pindex ON public.ideas USING btree (idea_id, element_id, md5(element_value))
Now I deleted the initial primary key ideas_pkey and wanted to recreate it using this newly created index like so
alter table ideas add constraint ideas_pkey PRIMARY KEY ("idea_id", "element_id", "element_value") USING INDEX ideas_pindex;
But this fails with the following error
ERROR: syntax error at or near "ideas_pindex"
LINE 2: ...a_id", "element_id", "element_value") USING INDEX ideas_...
^
SQL state: 42601
Character: 209
What am I doing wrong?
A primary key index can't be a functional index. You can instead just have a unique index on your table, or create another column storing the md5() of your larger column and use it in the PK.
That being said, there is also another error in your query: if you want to specify an index name (USING INDEX), you can't specify the PK columns (they are derived from the underlying index). And if you want to specify the PK columns, you can't specify the index name/definition, as the index will be created automatically. See the docs.
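A minimal sketch of those two directions (the column and index names here are illustrative, not from the original post):
-- Variant 1: store the hash in a real column and use it in the PK
ALTER TABLE public.ideas ADD COLUMN element_value_md5 text;
UPDATE public.ideas SET element_value_md5 = md5(element_value);
ALTER TABLE public.ideas ALTER COLUMN element_value_md5 SET NOT NULL;
ALTER TABLE public.ideas
  ADD CONSTRAINT ideas_pkey PRIMARY KEY (idea_id, element_id, element_value_md5);
-- Variant 2: attach an existing unique index (on plain columns, not expressions)
-- without repeating the column list
-- ALTER TABLE public.ideas ADD CONSTRAINT ideas_pkey PRIMARY KEY USING INDEX some_plain_unique_index;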
I have a table and want to update some of the rows like this:
CREATE TABLE abc(i INTEGER, deleted BOOLEAN);
CREATE UNIQUE INDEX myidx ON abc(i) WHERE NOT deleted;
INSERT INTO abc VALUES (4, false), (5, false);
UPDATE abc SET i = i - 1;
Which works ok because of the order in which the UPDATE is processing the rows, but when the UPDATE is attempted like this, it fails:
UPDATE abc SET i = i + 1;
ERROR: 23505: duplicate key value violates unique constraint "myidx"
DETAIL: Key (i)=(4) already exists.
SCHEMA NAME: public
TABLE NAME: abc
CONSTRAINT NAME: myidx
LOCATION: _bt_check_unique, nbtinsert.c:534
Time: 0.472 ms
The reason for the error is that, in the middle of the update, two rows would have had the value i = 4, even though at the end of the update all rows would have had unique values.
So I thought of changing the index into a deferred constraint, but according to the docs, this is not possible as my index is partial (so it only enforces uniqueness on some rows):
A uniqueness restriction covering only some rows cannot be written as a unique constraint, but it is possible to enforce such a restriction by creating a unique partial index.
The docs say to use partial indexes, but those can't be deferred, so I go back to the original problem.
So far my solution would be to set i = NULL whenever I mark deleted = true so it's not considered duplicated by my constraint anymore.
Is there a better solution to this? Maybe a way to make the UPDATE go always in the direction I want?
Please note:
I cannot DELETE the row, that's why the deleted column is there. The actual delete is done after some human validation happens.
Update:
The reason I'm bulk-updating that unique column is that this table contains a sequence used in the UI for sorting the records (the users drag and drop the records as they wish). They can also delete records, so I shift the sequence values of the elements that come after the deleted one.
The actual columns look more like this (name TEXT, description TEXT, ..., sequence NUMBER).
That sequence column is what I called i in the simplified case. So say I have 3 records with (name, sequence):
("Laptop", 1)
("Mobile", 2)
("Desktop", 3)
And if the user deletes the middle one, I want to end up with:
("Laptop", 1)
("Desktop", 2) // <--- updated here
Firstly, I have a table, USERS, in my database with almost 30 million records in it. I have a separate index on each column, but some of the columns have only 2 to 3 non-NULL values (the rest being NULL), and yet their index size is 847 MB, only a little less than the index on a column that contains a unique value in every row.
Does anyone know why this is?
Secondly, in PostgreSQL an index is created for the primary key of each table by default. What would the consequences be if we deleted that index?
What is that index really used for?
Since I'm searching based on values in other columns only, would it be safe to delete the index for the primary key?
NULL values are stored in indexes just like all other values, so the first part is not surprising.
You cannot delete the primary key index, what you could do is drop the primary key constraint. But then you cannot be certain that no duplicate rows get added to the table. If you think that is no problem, look at the many questions asking for help with exactly that problem.
Every table should have a primary key.
But it might be a good idea to get rid of some other indexes if you don't need them.
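For illustration (the constraint and index names below are hypothetical, not taken from the question):
-- the index backing the primary key can only go away by dropping the constraint
-- (not recommended, as explained above)
ALTER TABLE users DROP CONSTRAINT users_pkey;
-- an unneeded secondary index, on the other hand, can simply be dropped
DROP INDEX users_rarely_used_column_idx;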
There is nothing called a primary key index; it seems you are talking about a unique index.
First of all, you need to understand the difference between a primary key and an index. You can have only one primary key in a table. The primary key is the unique identifier of each row and does not allow NULLs. An index is used to speed up fetching on particular columns, and a unique index still allows NULL values. Deleting a unique index in your table will not impact anything apart from performance. It's your design decision whether to have the index or not.
Let's say I have the following table
CREATE TABLE subgroups (
    group_id      t_group_id      NOT NULL REFERENCES groups(group_id),
    subgroup_name t_subgroup_name NOT NULL,
    -- more attributes ...
);
subgroup_name is UNIQUE to a group(group_id).
A group can have many subgroups.
The subgroup_names are user-supplied. (I would like to avoid using a subgroup_id column: subgroup_name has meaning in the model and is more than just a label. I am providing a list of predetermined names but allow a user to add his own for flexibility.)
This table has 2 levels of referencing child tables containing subgroup attributes (with many-to-one relations).
I would like to have a PRIMARY KEY on (group_id, upper(trim(subgroup_name)));
From what I know, Postgres doesn't allow using PRIMARY KEY/UNIQUE on a function (expression).
IIRC, the relational model also requires columns to be used as stored.
CREATE UNIQUE INDEX ON subgroups (group_id, upper(trim(subgroup_name))); doesn't solve my problem
as other tables in my model will have FOREIGN KEYs pointing to those two columns.
I see two options.
Option A)
Store a cleaned up subgroup name in subgroup_name
Add an extra column called subgroup_name_raw that would contain the uncleaned string
Option B)
Create both a UNIQUE INDEX and PRIMARY KEY on my key pair. (seems like a huge waste)
Any insights?
Note: I'm using Postgres 9.2
Actually, you can enforce uniqueness on the output of a function. You can't do it in the table definition, though. What you need to do is create a unique index afterwards. So something like:
CREATE UNIQUE INDEX subgroups_ukey2 ON subgroups(group_id, upper(trim(subgroup_name)));
PostgreSQL has a number of absolutely amazing indexing capabilities, and the ability to create unique (and partial unique) indexes on function output is quite underrated.
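As a hypothetical demonstration of what that expression index enforces (assuming a group with group_id = 1 exists in groups):
INSERT INTO subgroups (group_id, subgroup_name) VALUES (1, 'Alpha');
INSERT INTO subgroups (group_id, subgroup_name) VALUES (1, '  ALPHA ');
-- ERROR: duplicate key value violates unique constraint "subgroups_ukey2"
The second insert fails because both rows reduce to the same (group_id, upper(trim(subgroup_name))) key.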
I'm new to Postgres. I wonder, what is the PostgreSQL way to set a constraint on a pair of values (so that each pair is unique)? Should I create an INDEX on the bar and baz fields?
CREATE UNIQUE INDEX foo ON table_name(bar, baz);
If not, what is the right way to do that? Thanks in advance.
If each field needs to be unique unto itself, then create unique indexes on each field. If they need to be unique in combination only, then create a single unique index across both fields.
Don't forget to set each field NOT NULL if it should be. NULLs are never unique, so something like this can happen:
create table test (a int, b int);
create unique index test_a_b_unq on test (a,b);
insert into test values (NULL,1);
insert into test values (NULL,1);
and get no error, because two NULLs are never considered equal to each other.
You can do what you are already thinking of: create a unique constraint on both fields. This way, a unique index will be created behind the scenes, and you will get the behavior you need. Plus, that information can be picked up by information_schema to do some metadata inferring if necessary on the fact that both need to be unique. I would recommend this option. You can also use triggers for this, but a unique constraint is way better for this specific requirement.
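A minimal sketch of that recommendation, reusing the hypothetical table and column names from the question:
ALTER TABLE table_name
  ADD CONSTRAINT table_name_bar_baz_key UNIQUE (bar, baz);
-- PostgreSQL creates the backing unique index automatically, and the constraint
-- shows up in information_schema.table_constraints for metadata queries.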