I want the following behaviour when inserting data (conflict on id):
if there is no row with the same id in the db, do an INSERT
if there is a row with the same id in the db and that row is newer (updated_at field), do NOT UPDATE
if there is a row with the same id in the db and that row is older (updated_at field), do an UPDATE
I'm using Ecto for this and want to implement it with constraints; however, I cannot find an option to do so in the documentation. Pseudocode for the constraint could look like:
CHECK: NULL(current.updated_at) or incoming.updated_at > current.updated_at
Is such behaviour possible in Postgres?
PostgreSQL does not support CHECK constraints that reference table
data other than the new or updated row being checked. While a CHECK
constraint that violates this rule may appear to work in simple tests,
it cannot guarantee that the database will not reach a state in which
the constraint condition is false (due to subsequent changes of the
other row(s) involved). This would cause a database dump and reload to
fail. The reload could fail even when the complete database state is
consistent with the constraint, due to rows not being loaded in an
order that will satisfy the constraint. If possible, use UNIQUE,
EXCLUDE, or FOREIGN KEY constraints to express cross-row and
cross-table restrictions.
If what you desire is a one-time check against other rows at row
insertion, rather than a continuously-maintained consistency
guarantee, a custom trigger can be used to implement that. (This
approach avoids the dump/reload problem because pg_dump does not
reinstall triggers until after reloading data, so that the check will
not be enforced during a dump/reload.)
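For the behaviour asked about here, a minimal sketch of such a trigger could look like the following (mytable, keep_newest and the column names are illustrative; unlike the ON CONFLICT approach below, this check is race-prone under concurrent inserts):

CREATE FUNCTION keep_newest() RETURNS trigger AS $$
BEGIN
    -- update in place when the existing row is older (or has no timestamp yet)
    UPDATE mytable
       SET updated_at = NEW.updated_at
     WHERE id = NEW.id
       AND (updated_at IS NULL OR updated_at < NEW.updated_at);
    -- if any row with this id already exists, suppress the INSERT itself
    IF EXISTS (SELECT 1 FROM mytable WHERE id = NEW.id) THEN
        RETURN NULL;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER keep_newest_trg
    BEFORE INSERT ON mytable
    FOR EACH ROW
    EXECUTE PROCEDURE keep_newest();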
That should be simple using the WHERE clause of ON CONFLICT ... DO UPDATE:
INSERT INTO mytable (id, updated_at) VALUES (42, '2021-05-29 12:00:00')
ON CONFLICT (id)
DO UPDATE SET updated_at = EXCLUDED.updated_at
WHERE mytable.updated_at < EXCLUDED.updated_at;
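To see the behaviour end to end, here is a sketch with a minimal two-column table (if the existing updated_at can be NULL, extend the WHERE clause with OR mytable.updated_at IS NULL):

CREATE TABLE mytable (id int PRIMARY KEY, updated_at timestamp);
INSERT INTO mytable VALUES (42, '2021-05-29 12:00:00');

-- newer incoming timestamp: the WHERE clause is true, so the row is updated
INSERT INTO mytable VALUES (42, '2021-05-30 12:00:00')
ON CONFLICT (id)
DO UPDATE SET updated_at = EXCLUDED.updated_at
WHERE mytable.updated_at < EXCLUDED.updated_at;

-- older incoming timestamp: the WHERE clause is false, so nothing is changed
INSERT INTO mytable VALUES (42, '2021-05-28 12:00:00')
ON CONFLICT (id)
DO UPDATE SET updated_at = EXCLUDED.updated_at
WHERE mytable.updated_at < EXCLUDED.updated_at;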
I'm looking at a production table in Postgres with the following constraint, which we need to remove due to a third-party collaboration.
"customer_email_unique" UNIQUE CONSTRAINT, btree (customer_email)
This is a production table; what risks are there if I remove the constraint? If it causes problems, can it be recreated afterwards on an existing table, with existing data in it?
It looks like the command to drop the constraint is
ALTER TABLE your_table DROP CONSTRAINT customer_email_unique;
We're a React/Node stack, and I can see what the code is doing with regard to what will happen if the constraint is dropped; my lack of knowledge is more about the data and what happens when you drop a constraint.
Thanks,
This is a production table; what risks are there if I remove the constraint? If it causes problems, can it be recreated afterwards on an existing table, with existing data in it?
The risk is that you'll drop the constraint and non-unique entries will be inserted. You won't be able to reapply the unique constraint without deleting the duplicate rows or updating them to be unique. Another risk is that you'll drop the wrong constraint, or reapply the constraint incorrectly. Finally, there may be code that assumes that column is unique.
To mitigate this risk, write a script to drop the constraint ("up"), and one to restore uniqueness and reapply the constraint ("down"). Test it on an equivalent table on a non-production database.
This is the general idea of schema migrations. Every schema change is done by two scripts, an "up" script to apply the change and a "down" script to undo the change. Many ORMs, such as typeorm, support migrations. They make schemas reproducible so all environments know they have the same schemas, schemas can be tested, and in general mitigate the risk of schema changes.
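A minimal sketch of such an up/down pair in plain SQL (the customers table name and its id column are assumptions; pick whichever de-duplication rule matches how you want to resolve duplicates):

-- up: drop the constraint
ALTER TABLE customers DROP CONSTRAINT customer_email_unique;

-- down: remove duplicates (here: keep the row with the lowest id), then reapply
DELETE FROM customers a
USING customers b
WHERE a.customer_email = b.customer_email
  AND a.id > b.id;
ALTER TABLE customers
  ADD CONSTRAINT customer_email_unique UNIQUE (customer_email);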
Can I do row-specific update/delete operations in a DB2 table via SQL, in a non-unique primary key context?
The table is a physical file on the native system of the AS/400.
It was, like many other files, created without a unique key definition, which leads DB2 to the conclusion that the table (or PF) has no unique key.
And that's my problem. I can't change the structure of the table to add a unique ID column, because I would have to recompile all the related programs on the AS/400, which is a serious issue; many things would "perhaps" stop working. Of course, I could do that refactoring for one table, but our system has thousands of those native files, some well done with a unique key, some without a unique definition...
Well, I work most of the time with DB2 and SQL on those old files, and all files that have a unique key are no problem for me when it comes to those important update/delete operations.
Is there some way to get an additional column in every SELECT with a truly unique row ID or row number? And, much more importantly, how can I use that row number in an UPDATE?
I did some research, and by now I assume there is no way to do precise updates or deletes when no unique key is present. What I would wish for is some additional ID column that is always returned with the table, which I can refer to in my update/delete operations. Perhaps there is a fallacy in my thinking here, and tables without a unique key are meant to be edited in other ways.
Try the RRN function.
SELECT RRN(EMPLOYEE), LASTNAME
FROM EMPLOYEE
WHERE ...;
UPDATE EMPLOYEE
SET ...
WHERE RRN(EMPLOYEE) = ...;
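Note that relative record numbers are only stable until the file is reorganized (e.g. with RGZPFM), so fetch and use them close together. As a sketch, deleting exactly one of several identical rows could look like this (the RRN value 4711 is assumed to have come from the SELECT above):

DELETE FROM EMPLOYEE
WHERE RRN(EMPLOYEE) = 4711;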
I am creating a table in PostgreSQL 9.5 where id is the primary key. While inserting rows into the table, if anyone tries to insert a duplicate id, I want it to be ignored instead of raising an exception. Is there any way to set this up at table creation itself, so that duplicate entries get ignored?
There are many techniques to resolve the duplicate-insertion issue when writing the insert query itself, e.g. using ON CONFLICT DO NOTHING, or using a WHERE EXISTS clause. But I want to handle this on the table-creation side, so that the person writing the insert query doesn't need to bother with it.
Creating a RULE is one possible solution. Are there other possible solutions? Maybe something like this:
`CREATE TABLE dbo.foo (bar int PRIMARY KEY WITH (FILLFACTOR=90, IGNORE_DUP_KEY = ON))`
Although this exact statement doesn't work on PostgreSQL 9.5 on my machine (IGNORE_DUP_KEY is SQL Server syntax).
Add a trigger BEFORE INSERT, or a rule ON INSERT ... DO INSTEAD; otherwise it has to be handled by the inserting query. Both solutions will require more resources on each insert.
An alternative is to use a function with arguments for the insert that checks for duplicates, so end users call the function instead of an INSERT statement.
A WHERE EXISTS sub-query is not atomic, by the way, so you can still get an exception after the check...
On 9.5, ON CONFLICT DO NOTHING is still the best solution.
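For completeness, a sketch of the DO INSTEAD rule mentioned above, written against the foo/bar table from the question (it has the same non-atomicity caveat: two concurrent inserts can both pass the EXISTS check):

CREATE RULE foo_on_duplicate_ignore AS
    ON INSERT TO foo
    WHERE EXISTS (SELECT 1 FROM foo WHERE bar = NEW.bar)
    DO INSTEAD NOTHING;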
I want to change a column to NOT NULL:
ALTER TABLE "foos" ALTER "bar_id" SET NOT NULL
The "foos" table has almost 1 000 000 records. It does fairly low volumes of writes, but quite constantly. There are a lot of reads.
In my experience, changing a column in a big table to NOT NULL like this can cause downtime in the app, presumably because it leads to (b)locks.
I've yet to find a good explanation corroborating this, though.
And if it is true, what can I do to avoid it?
EDIT: The docs (via this comment) say:
Adding a column with a DEFAULT clause or changing the type of an existing column will require the entire table and its indexes to be rewritten.
I'm not sure if changing NULL counts as "changing the type of an existing column", but I believe I did have an index on the column the last time I saw this issue.
Perhaps removing the index, making the column NOT NULL, and then adding the index back would improve things?
I think you can do that using a check constraint rather than SET NOT NULL.
ALTER TABLE foos
    ADD CONSTRAINT id_not_null CHECK (bar_id IS NOT NULL) NOT VALID;
This will still require an ACCESS EXCLUSIVE lock on the table, but it is very quick because Postgres doesn't validate the constraint (so it doesn't have to scan the entire table). This already makes sure that new rows (or changed rows) cannot put a null value into that column.
Then (after committing the alter table!) you can do:
ALTER TABLE foos VALIDATE CONSTRAINT id_not_null;
That does not require an ACCESS EXCLUSIVE lock and still allows access to the table.
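If you are on Postgres 12 or later, there is a final step worth knowing about: SET NOT NULL can use the validated check constraint as proof that no nulls exist and skip the full table scan, so you can end up with a real NOT NULL column and then drop the now-redundant constraint:

ALTER TABLE foos ALTER COLUMN bar_id SET NOT NULL;
ALTER TABLE foos DROP CONSTRAINT id_not_null;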
I have a Postgres 9.6 table with certain columns that must be unique. If I try to insert a duplicate row, I want Postgres to simply ignore the insert and continue, instead of failing or aborting. If the insert is wrapped in a transaction, it shouldn't abort the transaction or affect other updates in the transaction.
I assume there's a way to create the table as described above, but I haven't figured it out yet.
Bonus points if you can show me how to do it in Rails.
This is possible with the ON CONFLICT clause for INSERT:
The optional ON CONFLICT clause specifies an alternative action to
raising a unique violation or exclusion constraint violation error.
For each individual row proposed for insertion, either the insertion
proceeds, or, if an arbiter constraint or index specified by
conflict_target is violated, the alternative conflict_action is taken.
ON CONFLICT DO NOTHING simply avoids inserting a row as its
alternative action.
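For example (mytable and its columns are placeholders, assuming a unique constraint or index on id):

-- skip the row if any unique constraint would be violated
INSERT INTO mytable (id, value)
VALUES (1, 'something')
ON CONFLICT DO NOTHING;

-- or name the specific arbiter columns
INSERT INTO mytable (id, value)
VALUES (1, 'something')
ON CONFLICT (id) DO NOTHING;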
This is a relatively new feature and only available since Postgres 9.5, but that isn't an issue for you.
This is not something you specify at table creation; you'll need to modify each insert. I don't know how this works with Rails, but I guess you'll have to manually write at least part of the queries to do this.
This feature is also often called UPSERT, which is probably a better term to search for if you want to look for an integrated way in Rails to do this.