I am autogenerating SQL queries based on some conditions, and the generation resulted in this query:
DO
$$
BEGIN
    IF NOT EXISTS (SELECT FROM pg_attribute
                   WHERE attrelid = 'public.registration'::regclass -- table name here
                     AND attname = 'price' -- column name here
                     AND NOT attisdropped
                  ) THEN
        ALTER TABLE public.registration
            ADD COLUMN price text UNIQUE NULL;
    ELSE
        ALTER TABLE public.registration
            ALTER COLUMN price TYPE text,
            ADD CONSTRAINT IF NOT EXISTS registration_price_key UNIQUE (price);
    END IF;
END
$$
So the table already exists, but if the column is not there it should be added, and a unique constraint should be added to that column if it does not already exist.
I get a syntax error near NOT on this line:
ADD CONSTRAINT IF NOT EXISTS registration_price_key UNIQUE (price);
but why?
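For what it's worth, PostgreSQL's ALTER TABLE ... ADD CONSTRAINT has no IF NOT EXISTS clause (only DROP CONSTRAINT IF EXISTS exists), which is why the parser chokes near NOT. A minimal sketch of one workaround, guarding the branch with a lookup in pg_constraint (the constraint name registration_price_key is an assumption):
DO
$$
BEGIN
    -- ADD CONSTRAINT has no IF NOT EXISTS form, so check pg_constraint first
    IF NOT EXISTS (SELECT FROM pg_constraint
                   WHERE conrelid = 'public.registration'::regclass
                     AND conname = 'registration_price_key'
                  ) THEN
        ALTER TABLE public.registration
            ADD CONSTRAINT registration_price_key UNIQUE (price);
    END IF;
END
$$;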
Is it not supposed to delete the NULL values before altering the table? I'm confused...
My query looks roughly like this:
BEGIN;
DELETE FROM my_table
WHERE my_column IS NULL;
ALTER TABLE my_table DROP CONSTRAINT my_table_pk;
ALTER TABLE my_table ADD PRIMARY KEY (id, my_column);
-- this is to repopulate the data afterwards
INSERT INTO my_table (name, other_table_id, my_column)
SELECT
ya.name,
ot.id,
my_column
FROM other_table ot
LEFT JOIN yet_another ya
ON ya.id = ot."fileId"
WHERE NOT EXISTS (
SELECT
1
FROM my_table mt
WHERE ot.id = mt.other_table_id AND ot.my_column = mt.my_column
) AND my_column IS NOT NULL;
COMMIT;
Sorry for the naming.
There are two possible explanations:
A concurrent session inserted a new row with a NULL value between the start of the DELETE and the start of ALTER TABLE.
To avoid that, lock the table in SHARE mode before you DELETE (see the sketch below).
There is a row where id has a NULL value.
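For the first explanation, a minimal sketch of the locking approach, reusing the statements from the question (SHARE mode blocks concurrent writes but not reads, so no new NULLs can appear between the DELETE and the ALTER):
BEGIN;
-- SHARE mode conflicts with ROW EXCLUSIVE, so concurrent INSERT/UPDATE/DELETE
-- must wait until this transaction commits
LOCK TABLE my_table IN SHARE MODE;
DELETE FROM my_table WHERE my_column IS NULL;
ALTER TABLE my_table DROP CONSTRAINT my_table_pk;
ALTER TABLE my_table ADD PRIMARY KEY (id, my_column);
COMMIT;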
I am writing a migration script to migrate a database. I have to duplicate a row while incrementing its primary key, and since different databases can have any number of different columns in the table, I can't write out each and every column in the query. If I simply copy the row, I get a duplicate key error.
Query: INSERT INTO table_name SELECT * FROM table_name WHERE id=255;
ERROR: duplicate key value violates unique constraint "table_name_pkey"
DETAIL: Key (id)=(255) already exists
It's good that I don't have to mention all the column names here; I can select all columns with *. But at the same time I get the duplicate key error.
What's the solution to this problem? Any help would be appreciated. Thanks in advance.
If you are willing to type all the column names, you may write:
INSERT INTO table_name (
    pri_key
    ,col2
    ,col3
)
SELECT (
    SELECT MAX(pri_key) + 1
    FROM table_name
)
    ,col2
    ,col3
FROM table_name
WHERE pri_key = 255;
Another option (without typing all the columns, as long as you know the primary key) is to CREATE a temp table, update it, and re-insert, all within a transaction:
BEGIN;
-- copy the source row; the temp table is dropped automatically at COMMIT
CREATE TEMP TABLE temp_tab ON COMMIT DROP AS
    SELECT * FROM table_name WHERE id = 255;
-- give the copy a fresh primary key
UPDATE temp_tab SET pri_key_col = (SELECT MAX(pri_key_col) + 1 FROM table_name);
INSERT INTO table_name SELECT * FROM temp_tab;
COMMIT;
This is just a DO block, but you could create a function that takes things like the table name etc. as parameters.
Setup:
CREATE TABLE public.t1 (a TEXT, b TEXT, c TEXT, id SERIAL PRIMARY KEY, e TEXT, f TEXT);
INSERT INTO public.t1 (e) VALUES ('x'), ('y'), ('z');
Code to duplicate values without the primary key column:
DO $$
DECLARE
    _table_schema TEXT := 'public';
    _table_name TEXT := 't1';
    _pk_column_name TEXT := 'id';
    _columns TEXT;
BEGIN
    -- collect every column except the primary key
    SELECT STRING_AGG(column_name, ',')
    INTO _columns
    FROM information_schema.columns
    WHERE table_name = _table_name
      AND table_schema = _table_schema
      AND column_name <> _pk_column_name;
    EXECUTE FORMAT('INSERT INTO %1$s.%2$s (%3$s) SELECT %3$s FROM %1$s.%2$s',
                   _table_schema, _table_name, _columns);
END $$
The query it creates and runs is: INSERT INTO public.t1 (a,b,c,e,f) SELECT a,b,c,e,f FROM public.t1. It selects all the columns apart from the PK one. You could put this code in a function and use it for any table you want, or just use it as-is and edit it for whichever table.
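A sketch of that function wrapper (the name duplicate_rows and the RETURNS void signature are my assumptions, not from the original):
CREATE OR REPLACE FUNCTION duplicate_rows(  -- hypothetical name
    _table_schema TEXT,
    _table_name TEXT,
    _pk_column_name TEXT
) RETURNS void
LANGUAGE plpgsql AS $$
DECLARE
    _columns TEXT;
BEGIN
    SELECT STRING_AGG(column_name, ',')
    INTO _columns
    FROM information_schema.columns
    WHERE table_name = _table_name
      AND table_schema = _table_schema
      AND column_name <> _pk_column_name;
    EXECUTE FORMAT('INSERT INTO %1$s.%2$s (%3$s) SELECT %3$s FROM %1$s.%2$s',
                   _table_schema, _table_name, _columns);
END $$;

-- usage:
SELECT duplicate_rows('public', 't1', 'id');
For untrusted table or schema names you would want FORMAT's %I placeholder instead of %s, so identifiers get quoted properly.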
I have a PostgreSQL database with the following constraints:
CONSTRAINT "Car_Data_3PM_pkey" PRIMARY KEY ("F_ID", "Date"),
CONSTRAINT "Car_Data_3PM_F_ID_fkey" FOREIGN KEY ("F_ID")
REFERENCES "Bike_Data" ("F_ID") MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
When I try to insert multiple values using:
INSERT INTO "Car_Data_3PM" ("F_ID","Date","Price_Type","O","H","L","LT","EQ","V","NAD") VALUES (38,'2016-10-02 08:19:40.056679','x',0,0,0,112.145,0,0,112.145),(14,'2016-10-02 08:19:40.056679','x',0,0,0,5476,0,0,5476),(13,'2016-10-02
I get this error:
ERROR: insert or update on table "Car_Data_3PM" violates foreign key
constraint "Car_Data_3PM_F_ID_fkey" SQL state: 23503 Detail: Key
(F_ID)=(38) is not present in table "Bike_Data".
NO ROWS are inserted.
How can I make Postgres ONLY skip the rows where the constraint is an issue, i.e. insert most of them?
You can't make Postgres ignore the values, but you can rewrite your statement to not insert those rows:
INSERT INTO "Car_Data_3PM" ("F_ID","Date","Price_Type","O","H","L","LT","EQ","V","NAD")
select *
from (
VALUES
(38,'2016-10-02 08:19:40.056679','x',0,0,0,112.145,0,0,112.145),
(14,'2016-10-02 08:19:40.056679','x',0,0,0,5476,0,0,5476),
... -- all other rows
) as x (id, date, price_type, o, h, l, lt, eq, v nad)
where exists (select 1
from "Bike_Data" bd
where bd."F_ID" = x .id)
One way is to write a trigger that filters out the bad values, like this:
CREATE FUNCTION car_insert_filter() RETURNS trigger
LANGUAGE plpgsql AS
$$BEGIN
IF EXISTS(SELECT 1 FROM "Bike_Data" WHERE "F_ID" = NEW."F_ID")
THEN
RETURN NEW;
ELSE
RAISE NOTICE 'Skipping row with "F_ID"=% and "Date"=%',
NEW."F_ID", NEW."Date";
RETURN NULL;
END IF;
END;$$;
CREATE TRIGGER car_insert_filter
BEFORE INSERT ON "Car_Data_3PM" FOR EACH ROW
EXECUTE PROCEDURE car_insert_filter();
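With that trigger in place, the original multi-row INSERT should go through for the valid rows and emit a NOTICE for each skipped one, e.g. (rows taken from the question):
INSERT INTO "Car_Data_3PM" ("F_ID","Date","Price_Type","O","H","L","LT","EQ","V","NAD")
VALUES (38,'2016-10-02 08:19:40.056679','x',0,0,0,112.145,0,0,112.145),
       (14,'2016-10-02 08:19:40.056679','x',0,0,0,5476,0,0,5476);
-- if 38 is missing from "Bike_Data", that row is dropped with:
-- NOTICE: Skipping row with "F_ID"=38 and "Date"=2016-10-02 08:19:40.056679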
I'm using PostgreSQL 8.1.23 on x86_64-redhat-linux-gnu
I have to write a database for reserving seats in language courses, and one requirement is a trigger that checks whether the lector we're trying to assign to a new group already has another group at the same time. I have this table:
CREATE TABLE groups (
    group_id serial PRIMARY KEY,
    lang varchar(3) NOT NULL,
    level varchar(3),
    seats int4,
    lector int4,
    start time,
    day varchar(3),
    FOREIGN KEY (lang) REFERENCES languages(lang) ON UPDATE CASCADE ON DELETE CASCADE,
    FOREIGN KEY (lector) REFERENCES lectors(lector_id) ON UPDATE CASCADE ON DELETE SET NULL);
and such trigger:
CREATE FUNCTION if_available () RETURNS trigger AS '
DECLARE
    r groups%rowtype;
    c groups%rowtype;
BEGIN
    FOR r IN SELECT * FROM groups WHERE r.lector=NEW.lector ORDER BY group_id LOOP
        IF (r.start = NEW.start AND r.day = NEW.day) THEN
            RAISE NOTICE ''Lector already has a group at this time!'';
            c = NULL;
            EXIT;
        ELSE
            c = NEW;
        END IF;
    END LOOP;
    RETURN c;
END;
' LANGUAGE 'plpgsql';
CREATE TRIGGER if_available_t
BEFORE INSERT OR UPDATE ON groups
FOR EACH ROW EXECUTE PROCEDURE if_available();
After inserting a new row into the table groups, e.g.:
INSERT INTO groups (lang, level, seats, lector, start, day) VALUES ('ger','A-2',12,2,'11:45','wed');
I get an error like this:
ERROR: null value in column "group_id" violates not-null constraint
Without this trigger everything is OK. Could anybody help me make it work?
Finally, I have solved it! Right after BEGIN there should be c = NEW;, because when the table groups is empty at the beginning, the FOR loop doesn't run and NULL is returned. Also, I have changed the condition in the FOR loop to ...WHERE lector = NEW.lector.... And finally, I have changed the IF condition to IF (r.group_id <> NEW.group_id AND r.start = NEW.start AND r.day = NEW.day) THEN..., so that the trigger doesn't reject an update of the row's own group. Maybe this will be helpful for someone :)
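For completeness, a sketch of the function with those three changes applied (untested against 8.1, but nothing here should be version-specific):
CREATE OR REPLACE FUNCTION if_available () RETURNS trigger AS '
DECLARE
    r groups%rowtype;
    c groups%rowtype;
BEGIN
    c = NEW;  -- accept the row by default, even when the loop finds no other groups
    FOR r IN SELECT * FROM groups WHERE lector = NEW.lector ORDER BY group_id LOOP
        -- skip the row''s own group so UPDATEs don''t conflict with themselves
        IF (r.group_id <> NEW.group_id AND r.start = NEW.start AND r.day = NEW.day) THEN
            RAISE NOTICE ''Lector already has a group at this time!'';
            c = NULL;
            EXIT;
        END IF;
    END LOOP;
    RETURN c;
END;
' LANGUAGE 'plpgsql';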
This is my table:
CREATE TABLE [dbo].[TestTable]
(
[Name1] varchar(50) COLLATE French_CI_AS NOT NULL,
[Name2] varchar(255) COLLATE French_CI_AS NULL,
CONSTRAINT [TestTable_uniqueName1] UNIQUE ([Name1]),
CONSTRAINT [TestTable_uniqueName1Name2] UNIQUE ([Name1], [Name2])
)
ALTER TABLE [dbo].[TestTable]
ADD CONSTRAINT [TestTable_uniqueName1]
UNIQUE NONCLUSTERED ([Name1])
ALTER TABLE [dbo].[TestTable]
ADD CONSTRAINT [TestTable_uniqueName1Name2]
UNIQUE NONCLUSTERED ([Name1], [Name2])
GO
ALTER INDEX [TestTable_uniqueName1]
ON [dbo].[TestTable]
DISABLE
GO
My idea is to enable/disable one or the other unique constraint depending on the customer application. That way I can catch the thrown exception in my C# code and display a specific error message in the GUI.
Now, my problem is to alter the collation of columns Name1 & Name2: I need to make them case sensitive (French_CS_AS). To alter these fields, I have to drop the two constraints and recreate them. Given the schema above, I cannot create an enabled constraint and then disable it, because for some customers there are duplicate keys for one or the other constraint.
For my update script, my idea number 1 was:
Save the name of enabled constraints in a temp table
Drop the constraints
Alter columns
Create DISABLED unique constraints
Enable specific constraints according to the values saved in point 1.
My problem is in point 4: I can't find a way to create a disabled unique constraint with an ALTER TABLE statement. Is it possible to create it directly in the sys.indexes table?
My idea number 2 was:
Rename TestTable to TestTableCopy
Recreate TestTable with the new fields collation, and otherwise the same schema (indexes, FK, triggers, ...)
Disable specific unique constraints in TestTable
Migrate data from TestTableCopy to TestTable
Drop TestTableCopy
With this approach, my fear is to lose some links with other tables/dependencies, because it is a central table in my database.
Is there any other way to achieve my goal?
If necessary, I can use unique indexes instead of unique constraints.
It looks like it is impossible to create a unique index on a column that already has duplicate values.
So, rather than having a disabled unique index, either:
not have an index at all (which is the same as having a disabled index from the query processor point of view),
or create a non-unique index.
For those instances where your client has unique data, create a unique index. For those instances where your client has non-unique data, create a non-unique index (as sketched below).
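A sketch of that per-customer choice, using the table from the question (the name TestTable_idxName1 is made up):
-- customer data is unique: enforce it
CREATE UNIQUE NONCLUSTERED INDEX [TestTable_uniqueName1]
    ON [dbo].[TestTable] ([Name1]);

-- customer data contains duplicates: index without enforcing uniqueness
CREATE NONCLUSTERED INDEX [TestTable_idxName1]
    ON [dbo].[TestTable] ([Name1]);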
CREATE PROCEDURE [dbo].[spUsers_AddUsers]
    @Name1 varchar(50),
    @Name2 varchar(50),
    @Unique bit
AS
declare @err int
begin tran
if @Unique = 1 begin
    if not exists (SELECT * FROM Users WHERE Name1 = @Name1 and Name2 = @Name2)
    begin
        INSERT INTO Users (Name1, Name2)
        VALUES (@Name1, @Name2)
        set @err = @@ERROR
    end else
    begin
        UPDATE Users
        set Name1 = @Name1,
            Name2 = @Name2
        where Name1 = @Name1 and Name2 = @Name2
        set @err = @@ERROR
    end
end else begin
    if not exists (SELECT * FROM Users WHERE Name1 = @Name1)
    begin
        INSERT INTO Users (Name1, Name2)
        VALUES (@Name1, @Name2)
        set @err = @@ERROR
    end else
    begin
        UPDATE Users
        set Name1 = @Name1,
            Name2 = @Name2
        where Name1 = @Name1
        set @err = @@ERROR
    end
end
if @err = 0 commit tran
else rollback tran
So first you check whether you need Name1 and Name2 to be unique together or just Name1; then you do the insert or update based on which constraint applies.
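A hypothetical call, assuming the procedure is deployed as-is (the parameter values are made up):
-- enforce uniqueness on the (Name1, Name2) pair for this customer
EXEC [dbo].[spUsers_AddUsers] @Name1 = 'Dupont', @Name2 = 'Jean', @Unique = 1;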