I have an operation like db.session.query(SomeModel).filter(SomeModel.id == some_id).delete() in my Flask code. It fails when another table references SomeModel with a foreign key, giving the following error:
(psycopg2.errors.ForeignKeyViolation) update or delete on table "some_model" violates foreign key constraint "some_model_id_fkey" on table "other_model" DETAIL: Key (id)=(1) is still referenced from table "other_model".
Also any later operation will suffer from this error:
(psycopg2.errors.InFailedSqlTransaction) current transaction is aborted, commands ignored until end of transaction block
I found a solution for this:
try:
    db.session.query(SomeModel).filter(SomeModel.id == some_id).delete()
    db.session.commit()
except Exception:
    db.session.rollback()
However, my question is: can the rollback() here have side effects? Could some other operation elsewhere in the Flask app also be rolled back if this is called before that operation's commit()? I am unsure because, after this error, every other operation on db.session also seems to fail with the "current transaction is aborted" error above.
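To illustrate what rollback() actually undoes, here is a minimal sketch using Python's built-in sqlite3 as a stand-in for the Postgres session (the transaction semantics are analogous): rollback() discards every uncommitted statement in the current transaction, not just the failing one, so any pending work on the same session is lost too. Other sessions (in Flask-SQLAlchemy, each request gets its own scoped session) are unaffected.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.commit()

# Earlier, unrelated work in the SAME uncommitted transaction.
conn.execute("INSERT INTO items (id, name) VALUES (1, 'pending work')")

try:
    # A failing statement (duplicate primary key, standing in for the FK violation).
    conn.execute("INSERT INTO items (id, name) VALUES (1, 'duplicate')")
    conn.commit()
except sqlite3.IntegrityError:
    # Rolls back ALL uncommitted statements, including 'pending work'.
    conn.rollback()

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)  # 0 -- the earlier insert was rolled back as well
```

The "current transaction is aborted" message is PostgreSQL refusing further commands on a failed transaction until a rollback is issued, which is why every later db.session operation fails until you call db.session.rollback().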
When I run a SQL script in DBeaver v22, a CREATE TABLE query fails on a seemingly random table each time. The script has a few thousand lines with many DROP and CREATE TABLE statements, and the same error occurs at a different CREATE statement on each run.
When I created this thread, the script failed while creating table1, but it could have been any other table. It doesn't seem to be a syntax error in my SQL; it looks like something in the DBeaver 22.2 engine itself, since the failing table changes with each execution.
SQL Error [42P07]: ERROR: relation "table1" already exists
Even though I added a DROP TABLE statement right before the CREATE, the error still occurs when the CREATE runs:
DROP TABLE IF EXISTS sandbox.table1;
CREATE TABLE sandbox.table1 as ();
I wonder whether the DROP takes long enough that the CREATE runs before the table is fully gone, causing the error.
Is that a possible cause?
Do I need a timer to wait for the RDBMS to fully drop the table?
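For what it's worth, DROP TABLE is synchronous in PostgreSQL and most other RDBMSs: by the time the statement returns, the table is gone, so no timer is needed. A quick sketch with Python's built-in sqlite3 (as a stand-in; the synchronous behaviour is the same) showing that a DROP followed immediately by a CREATE of the same name never collides:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER)")

# DROP is synchronous: once execute() returns, the table no longer exists,
# so the immediate re-CREATE can never hit "relation already exists".
for _ in range(100):
    conn.execute("DROP TABLE IF EXISTS table1")
    conn.execute("CREATE TABLE table1 (id INTEGER)")

exists = conn.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='table1'"
).fetchone()[0]
print(exists)  # 1
```

So when the CREATE still fails, the cause is something else, such as the DROP itself having silently failed, as the accepted root cause below shows.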
Looking at further logs, I identified the root cause: a permission error. Because the DROP couldn't actually delete the table, the subsequent CREATE failed:
org.jkiss.dbeaver.model
Error
Wed Dec 07 11:38:44 BRT 2022
SQL Error [42501]: ERROR: permission denied for relation table1
I'm getting an error when trying to update a table. The SQL statement is:
UPDATE dda_accounts SET TYPE_SK = TYPE_SK - 10 WHERE TYPE_SK > 9;
The error I get is:
SQL Error [57007]: Operation not allowed for reason code "7" on table
"BANK_0002_TEST.DDA_ACCOUNTS".. SQLCODE=-668, SQLSTATE=57007, DRIVER=4.27.25
SQLSTATE 57007 says that there's something incomplete after an ALTER TABLE was executed.
I found this resolution, but it's not clear whether the table can be repaired in place or whether the only way to recover it is from a backup.
Running a SELECT statement works; only the UPDATE fails. What is the way to fix this table?
You need to REORG the table to recover; see this page for details.
When you get an error like this, look up the SQL0668N message with reason code "7".
This shows:
The table is in the reorg pending state. This can occur after an ALTER
TABLE statement containing a REORG-recommended operation.
Be aware that the previous ALTER TABLE (the one that put the table into this reorg-pending state) might have happened quite some time ago, possibly without your knowledge.
If you lack the authorisation to run REORG TABLE INPLACE "BANK_0002_TEST.DDA_ACCOUNTS", then contact your DBA for assistance. The DBA may also choose to reorg the indexes at the same time, run RUNSTATS (docs) on the table once the reorg completes, and check whether anything else needs rebinding.
I have a question about error messages in PostgreSQL.
I noticed that when some failure occurs, PostgreSQL reports it as a text message, but the message does not contain the error code.
For instance:
ERROR: Relation "mytable" already exists
or
ERROR: duplicate key value violates unique constraint "id"
Could you please suggest a way to make PostgreSQL include the native error code in these messages, for instance like this:
42P07 ERROR: Relation "mytable" already exists
or
23505 ERROR: duplicate key value violates unique constraint "id"
Is that possible?
Thanks in advance.
You can change the log_error_verbosity parameter in the postgresql.conf file to control how much information is logged for errors. Its value is "default" by default; changing it to "verbose" includes more information about each error, including the SQLSTATE code.
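A sketch of the relevant postgresql.conf settings (they take effect after a configuration reload). As an alternative to verbose output, the %e escape in log_line_prefix prepends the SQLSTATE code to each log line, which matches the format asked for above:

```
log_error_verbosity = verbose   # include the SQLSTATE code (and source location) in error logs
log_line_prefix = '%e '         # alternative: prefix every log line with the SQLSTATE code
```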
I have a composite UNIQUE set of columns in my table, so if an INSERT violates the unique key, PostgreSQL returns an error that my PHP script can read.
When inserting, instead of doing this:
SELECT id FROM table WHERE col1='x' and col2='y'
(if no rows)
INSERT INTO table...
(else if rows are found)
UPDATE table SET...
I prefer to use:
INSERT INTO table...
(if error occurred then attempt to UPDATE)
UPDATE table SET...
The kind of error returned from the above would be: ERROR: duplicate key value violates unique constraint "xxxxxxxx_key"
However, there is no point doing an UPDATE if the INSERT failed for some other reason, such as invalid data. Is there a way of knowing (from PHP/PostgreSQL) whether the error actually came from this duplicate-key violation rather than from invalid data? I'm just curious: an UPDATE would also return an error if the data were invalid, but what would you say is best practice?
Many thanks!
Just check the error to see what kind of error you have: pg_result_error_field() shows it all. Look at the PGSQL_DIAG_SQLSTATE field and the error code list in the PostgreSQL manual for the details.
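In Python terms (the same idea applies in PHP via pg_result_error_field($result, PGSQL_DIAG_SQLSTATE)), the dispatch might look like this sketch. The SQLSTATE values are the standard PostgreSQL codes; should_retry_as_update is a hypothetical helper, not a library function:

```python
# SQLSTATE for "duplicate key value violates unique constraint"
# (see Appendix A of the PostgreSQL manual).
UNIQUE_VIOLATION = "23505"

def should_retry_as_update(sqlstate):
    """Fall back to UPDATE only when the INSERT hit the unique key,
    not when the data itself was invalid."""
    return sqlstate == UNIQUE_VIOLATION

print(should_retry_as_update("23505"))  # True  -- duplicate key: do the UPDATE
print(should_retry_as_update("22P02"))  # False -- invalid input syntax: bail out
```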
You might want to look into this example in the official documentation.
You're free to add more WHEN ... THEN handlers to the EXCEPTION block; the list of available error conditions can also be found in the documentation.
Note that in that example the function re-raises any other error; only unique_violation is treated specially.
On my table I have a secondary unique key labeled md5. Before inserting, I check to see if the MD5 exists, and if not, insert it, as shown below:
-- Attempt to find this item
SELECT INTO oResults (SELECT domain_id FROM db.domains WHERE "md5"=oMD5);
IF (oResults IS NULL) THEN
    -- Not found, so insert this domain
    INSERT INTO db.domains ("md5", "domain", "inserted")
    VALUES (oMD5, oDomain, now());
    RETURN currval('db.domains_seq');
END IF;
This works great for single threaded inserts, my problem is when I have two external applications calling my function concurrently that happen to have the same MD5. I end up with a situation where:
App 1: Sees the MD5 does not exist
App 2: Inserts this MD5 into table
App 1: Now tries to insert the MD5, thinking it doesn't exist, but gets an error because App 2 inserted it right after App 1's check.
Is there a more effective way of doing this?
Can I catch the error on insert and if so, then select the domain_id?
Thanks in advance!
This also seems to be covered at Insert, on duplicate update in PostgreSQL?
You could just go ahead and try to insert the MD5 and catch the error. If you get a unique-constraint-violation error, ignore it and keep going; if you get some other error, bail out. That way you push the duplicate check right down to the database, and your race condition goes away.
Something like this:
Attempt to insert the MD5 value.
If you get a unique violation error, then ignore it and continue on.
If you get some other error, bail out and complain.
If you don't get an error, then continue on.
Do your SELECT INTO oResults (SELECT domain_id FROM db.domains WHERE "md5"=oMD5) to extract the domain_id.
There might be a bit of a performance hit but "correct and a little slow" is better than "fast but broken".
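A runnable sketch of those steps, using Python's built-in sqlite3 as a stand-in (sqlite3.IntegrityError plays the role of PostgreSQL's unique_violation; the table and helper names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE domains (domain_id INTEGER PRIMARY KEY, md5 TEXT UNIQUE)"
)

def get_or_create_domain(conn, md5):
    try:
        # Step 1: just attempt the insert.
        conn.execute("INSERT INTO domains (md5) VALUES (?)", (md5,))
        conn.commit()
    except sqlite3.IntegrityError:
        # Step 2: unique violation -- another caller got there first; ignore.
        conn.rollback()
    # Any other exception type propagates: step 3, bail out and complain.
    # Steps 4-5: the row now definitely exists, so fetch its id.
    row = conn.execute(
        "SELECT domain_id FROM domains WHERE md5 = ?", (md5,)
    ).fetchone()
    return row[0]

a = get_or_create_domain(conn, "abc123")
b = get_or_create_domain(conn, "abc123")  # duplicate: error swallowed, same id
print(a == b)  # True
```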
Eventually you might end up with more exceptions than successful inserts. In that case you could instead try inserting into the table that references (through a foreign key) your db.domains and trap the FK violation there. If you get an FK violation, do the old "insert and ignore unique violations" on db.domains and then retry the insert that gave you the FK violation. It's the same basic idea; just choose whichever approach will probably throw the fewest exceptions and go with that.