I have moved some records from my SOURCE table in DB_1 into an ARCHIVE table in another database, DB_2 (i.e. INSERTED the records from SOURCE into ARCHIVE and then DELETED them from SOURCE).
My SOURCE table has the following index created as SOURCE_1:
CREATE UNIQUE NONCLUSTERED INDEX SOURCE_1
ON dbo.SOURCE(TRADE_SET_ID, ORDER_ID)
The problem is - when I try to insert the rows back into SOURCE from ARCHIVE, Sybase throws the following error:
Attempt to insert duplicate key row in object 'SOURCE' with unique index 'SOURCE_1'
And, of course, the insertions subsequently fail.
I confirmed that my SOURCE table does not have these duplicates because the following query returned empty:
select * from DB_1.dbo.SOURCE
join DB_2.dbo.ARCHIVE
on DB_1.dbo.SOURCE.TRADE_SET_ID = DB_2.dbo.ARCHIVE.TRADE_SET_ID
AND DB_1.dbo.SOURCE.ORDER_ID = DB_2.dbo.ARCHIVE.ORDER_ID
If the above query returned nothing, then that means I have not violated my unique index constraint on those two columns, yet Sybase complains that I have.
Does anyone have any ideas on why this is happening?
If Sybase is anything like SQL Server in this regard (which I'm more familiar with), I would suspect that the index is blocking the insert. Try disabling the index (along with any other indexes or autoincrement columns) on your archive version before copying over to it, then re-enabling them. It's probable that Sybase tries to automatically generate IDs for the insertions, which would interfere with the existing records.
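A rough, untested sketch of that idea in Sybase ASE terms - as far as I know ASE has no per-index disable switch, so the index is simply dropped and recreated around the copy:
-- drop the unique index before re-inserting the archived rows
-- (assumes nothing else needs the index during the copy)
DROP INDEX SOURCE.SOURCE_1

-- assumes ARCHIVE has the same column layout as SOURCE
INSERT INTO DB_1.dbo.SOURCE
SELECT * FROM DB_2.dbo.ARCHIVE

-- recreate it afterwards; if this fails, the data really does contain
-- duplicates on (TRADE_SET_ID, ORDER_ID)
CREATE UNIQUE NONCLUSTERED INDEX SOURCE_1
ON dbo.SOURCE(TRADE_SET_ID, ORDER_ID)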
I am migrating data from MSSQL.
I created the database in PostgreSQL via an Npgsql-generated migration. I moved the data across, and now when the code tries to insert a value I am getting
'duplicate key value violates unique constraint'
Npgsql tries to insert a row with Id 1; however, the table already has Ids over a thousand.
Npgsql.EntityFrameworkCore.PostgreSQL is 2.2.3 (latest)
In my context builder, I have
modelBuilder.ForNpgsqlUseIdentityColumns();
In which direction should I dig to resolve such an issue?
The code runs fine if the database is empty and doesn't have any imported data
Thank you
The values inserted during the migration contained the primary key value, so the sequence behind the column was never incremented and stayed at 1. A normal insert - without specifying the PK value - calls the sequence, gets 1, which already exists in the table.
To fix it, you can bump the sequence to the current max value.
SELECT setval(
pg_get_serial_sequence('myschema.mytable','mycolumn'),
max(mycolumn))
FROM myschema.mytable;
If you already know the sequence name, you can shorten it to
SELECT setval('my_sequence_name', max(mycolumn))
FROM myschema.mytable;
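For an EF Core / Npgsql identity column the same call should work as far as I know; you mainly have to watch the case-sensitive quoting. With placeholder names "Orders" and "Id" it would look roughly like this:
-- "Orders" / "Id" are placeholders for your actual EF Core table and key column.
-- The first argument is parsed like an SQL identifier, so case-sensitive names
-- need the embedded double quotes; the column name argument is taken literally.
SELECT setval(
    pg_get_serial_sequence('"Orders"', 'Id'),
    COALESCE(max("Id"), 1))   -- falls back to 1 for an empty table
FROM "Orders";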
I am trying to bulk load records from a temp table into a table using an INSERT ... SELECT statement with an ON CONFLICT ... DO UPDATE strategy.
I want to load as many records as possible. Currently, if there are any foreign key violations, no records get inserted and everything gets rolled back. Is there a way to insert the valid records and skip the faulty ones?
In https://dba.stackexchange.com/a/46477 I saw a strategy of joining against the foreign table in the query to filter out the faulty rows. I don't want to do that either, as I may have many foreign keys on that table and it would make my query more complex and table-specific. I would like it to be generic.
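For reference, that approach boils down to something like this - the table and column names below are made up, not my real schema:
-- only rows whose parent actually exists survive the join,
-- so foreign key violations never reach the target table
INSERT INTO target_table (id, customer_id, amount)
SELECT t.id, t.customer_id, t.amount
FROM temp_table t
JOIN customers c ON c.id = t.customer_id
ON CONFLICT (id) DO UPDATE
SET amount = EXCLUDED.amount;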
Sample use case: if I have 100 rows in the temp table and, say, rows number 5 and 7 cause insertion failures, I want to insert the remaining 98 records and identify which two rows failed.
I want to avoid inserting record by record and catching the error, as it is not efficient. I am doing this whole exercise to avoid loading the table row by row.
Oracle provides support for catching bulk errors in one shot.
Sample https://stackoverflow.com/a/36430893/8575780
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1422998100346727312
I have already explored loading with COPY; it catches NOT NULL constraint and other data-type errors, but when a foreign key violation happens nothing gets committed.
I am looking for something closer to what pgloader does when it encounters an error.
https://pgloader.readthedocs.io/en/latest/pgloader.html#batches-and-retry-behaviour
I am creating a table in PostgreSQL 9.5 where id is the primary key. While inserting rows into the table, if anyone tries to insert a duplicate id, I want it to be ignored instead of raising an exception. Is there any way to set this at table creation itself so that duplicate entries get ignored?
There are many techniques to resolve the duplicate insertion issue when writing the insertion query, e.g. using ON CONFLICT DO NOTHING, or using a WHERE EXISTS clause, etc. But I want to handle this on the table creation side so that the person writing the insertion query doesn't need to bother with it.
Creating a RULE is one possible solution. Are there other possible solutions? Maybe something like this:
`CREATE TABLE dbo.foo (bar int PRIMARY KEY WITH (FILLFACTOR=90, IGNORE_DUP_KEY = ON))`
Although this exact statement doesn't work on PostgreSQL 9.5 on my machine.
Add a BEFORE INSERT trigger, or a rule ON INSERT ... DO INSTEAD - otherwise it has to be handled by the inserting query. Both solutions will require more resources on each insert.
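A rough sketch of the rule variant, assuming a table mytable with primary key column id (the non-atomicity caveat about EXISTS checks mentioned below applies here as well):
-- silently drops inserts whose id already exists in mytable
CREATE RULE ignore_duplicate_id AS
ON INSERT TO mytable
WHERE EXISTS (SELECT 1 FROM mytable WHERE id = NEW.id)
DO INSTEAD NOTHING;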
An alternative is to use a function with arguments for the insert that checks for duplicates, so end users call the function instead of an INSERT statement.
A WHERE EXISTS sub-query is not atomic, by the way - so you can still get an exception after the check...
ON CONFLICT DO NOTHING in 9.5 is still the best solution.
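With the table from the question (dbo.foo with primary key bar), the insert side simply becomes:
INSERT INTO dbo.foo (bar)
VALUES (1)
ON CONFLICT (bar) DO NOTHING;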
I am getting a duplicate key error, DB2 SQL Error: SQLCODE=-803, SQLSTATE=23505, when I try to INSERT records. The primary key is a single column, INTEGER 4, generated, and it is the first column.
The insert looks like this: INSERT INTO SCHEMA.TABLE1 values (DEFAULT, ?, ?, ...)
It's my understanding that using the value DEFAULT will just let DB2 auto-generate the key at the time of insert, which is what I want. This works most of the time, but sometimes/randomly I get the duplicate key error. Thoughts?
More specifically, I'm running against DB2 9.7.0.3, using Scriptella to copy a bunch of records from one database to another. Sometimes I can process a batch with no problems; other times I'll get the error right away, or after 2 records, or 20 records, or 30 records, etc. There does not seem to be a pattern, nor is it the same record every time. If I change the data to copy 1 record instead of a batch, sometimes I'll get the error once and then it's fine the next time.
I thought maybe some other process was inserting records during my batch program and creating keys at the same time. However, the tables I'm copying TO should not have any other users/processes trying to INSERT records during this time frame, although there could be READs happening.
Edit: adding create info:
Create table SCHEMA.TABLE1 (
    SYSTEM_USER_KEY INTEGER NOT NULL
        generated by default as identity (start with 1 increment by 1 cache 20),
    COL2...,
);

alter table SCHEMA.TABLE1
    add constraint SYSTEM_USER_SYSTEM_USER_KEY_IDX
    Primary Key (SYSTEM_USER_KEY);
You most likely have records in your table with IDs that are bigger than the next value of your identity sequence. To find out the current value of your sequence, run the following query:
select s.nextcachefirstvalue-s.cache, s.nextcachefirstvalue-s.increment
from syscat.COLIDENTATTRIBUTES as a inner join syscat.sequences as s on a.seqid=s.seqid
where a.tabschema='SCHEMA'
and a.TABNAME='TABLE1'
and a.COLNAME='SYSTEM_USER_KEY'
So basically what happened is that somehow you got records in your table with IDs that are bigger than the current last value of your identity sequence, so sooner or later these IDs will collide with identity-generated IDs.
There are different reasons how this could have happened. One possibility is that data was loaded which already contained values for the ID column, or that records were inserted with an explicit value for the ID. Another option is that the identity sequence was reset to start at a lower value than the max ID in the table.
Whatever the cause, you will also want the fix:
SELECT MAX(<primary_key_column>) FROM <schema>.<table>;
ALTER TABLE <schema>.<table> ALTER COLUMN <primary_key_column> RESTART WITH <number from previous query + 1>;
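Applied to the table from the question, that would be something like the following (1042 is only an example; use whatever the MAX query actually returns, plus one):
SELECT MAX(SYSTEM_USER_KEY) FROM SCHEMA.TABLE1;
-- suppose that returns 1041; restart the identity one above it
ALTER TABLE SCHEMA.TABLE1 ALTER COLUMN SYSTEM_USER_KEY RESTART WITH 1042;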
I need to insert one table's data into another table, where it is not guaranteed that all rows of the source table are correct; some of the NOT NULL fields may contain NULL values. With this source table I need to insert all valid rows into the target table and find all invalid rows which failed to insert, and return them.
I know we can do this by validating all rows beforehand, but as this is a bulk insert from a CSV parsed by .NET code, we will not validate it on the database side but insert directly.
We could also do this by running a loop, but performance might suffer.
So my question is: is there any way to use a single statement for the insert that skips the rows which have a problem and inserts the ones that are valid?
BULK INSERT is all-or-nothing. SQL Server does not have the ability to shunt erroneous rows into a separate table, alas.
The best thing you can do is to validate all the data thoroughly before inserting it. If the insert still fails (maybe due to a bug), you need to retry the rows one by one and log the errors that occur.
You can also bulk insert to a temp table and move the rows from there to the final table one-by-one.
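For completeness, the temp/staging table route could look roughly like this in T-SQL - every table and column below, including the FailedRows logging table, is a made-up placeholder, so treat it as a sketch rather than a drop-in solution:
DECLARE @id int;

-- walk the staged rows one by one
DECLARE staging_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT StagingId FROM dbo.StagingRows;

OPEN staging_cursor;
FETCH NEXT FROM staging_cursor INTO @id;

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        INSERT INTO dbo.TargetTable (Col1, Col2)
        SELECT Col1, Col2
        FROM dbo.StagingRows
        WHERE StagingId = @id;
    END TRY
    BEGIN CATCH
        -- remember the failing row so it can be reported back later
        INSERT INTO dbo.FailedRows (StagingId, ErrorMessage)
        VALUES (@id, ERROR_MESSAGE());
    END CATCH;

    FETCH NEXT FROM staging_cursor INTO @id;
END;

CLOSE staging_cursor;
DEALLOCATE staging_cursor;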