Ebean overrides transaction isolation=none in the connection string and applies some level of transaction isolation. This results in java.sql.SQLException: [SQL7008]. For reasons outside my control the table has no journal, and I have been unable to find any information on running with no transaction isolation.
This is the faulty line:
Ebean.createSqlUpdate("UPDATE dummytable SET column1 = 'Test' WHERE id = 1").execute();
This is the connection string:
datasource.as400.databaseUrl=jdbc:as400://localhost/LIBRARY;naming=sql;errors=full;transaction isolation=none;
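For comparison, the same statement run over a plain JDBC connection should honor the URL's transaction isolation=none, since Ebean's transaction management is not involved. A minimal sketch (the credentials are hypothetical, and it assumes the jt400 driver is on the classpath):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RawJdbcUpdate {
    public static void main(String[] args) throws Exception {
        // Same URL as above; with no commitment control in effect, the
        // unjournaled table should not trigger SQL7008.
        String url = "jdbc:as400://localhost/LIBRARY;naming=sql;errors=full;"
                + "transaction isolation=none;";
        try (Connection conn = DriverManager.getConnection(url, "USER", "PASSWORD");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("UPDATE dummytable SET column1 = 'Test' WHERE id = 1");
        }
    }
}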
I have an operation like the following in my Flask code:
db.session.query(SomeModel).filter(SomeModel.id == some_id).delete()
It fails when some other table references SomeModel with a foreign key, giving the following error:
(psycopg2.errors.ForeignKeyViolation) update or delete on table "some_model" violates foreign key constraint "some_model_id_fkey" on table "other_model" DETAIL: Key (id)=(1) is still referenced from table "other_model".
Any later operation then fails with this error:
(psycopg2.errors.InFailedSqlTransaction) current transaction is aborted, commands ignored until end of transaction block
I found the fix for this:
try:
    db.session.query(SomeModel).filter(SomeModel.id == some_id).delete()
    db.session.commit()
except Exception:
    db.session.rollback()
My question, however: can the rollback() here have side effects? Could some other operation elsewhere in the Flask app also be rolled back if this runs before that operation's commit()? I am unsure because, after this error, every other operation on db.session fails with the "current transaction is aborted" error quoted above.
I am using Entity Framework Core in my project, with the latest PostgreSQL. My requirement is to bulk-insert data into a database that has a main table and its partition tables (horizontal partitioning).
The partition tables inherit from the main table and are created in advance automatically by database triggers.
There is one more trigger in PostgreSQL: when a row arrives for insertion, it decides which partition table the row belongs in based on a predetermined column value (say, a timestamp column, routing by date).
The issue is that when I insert data using EF Core methods (adding a model and then calling context.SaveChanges()), the provider throws the following exception:
{"The database operation was expected to affect 1 row(s), but actually affected 0 row(s); data may have been modified or deleted since entities were loaded. See http://go.microsoft.com/fwlink/?LinkId=527962 for information on understanding and handling optimistic concurrency exceptions."}
Data: {System.Collections.ListDictionaryInternal}
Entries: Count = 1
HResult: -2146233088
HelpLink: null
InnerException: null
Message: "The database operation was expected to affect 1 row(s), but actually affected 0 row(s); data may have been modified or deleted since entities were loaded. See http://go.microsoft.com/fwlink/?LinkId=527962 for information on understanding and handling optimistic concurrency exceptions."
Source: "Npgsql.EntityFrameworkCore.PostgreSQL"
StackTrace: " at Npgsql.EntityFrameworkCore.PostgreSQL.Update.Internal.NpgsqlModificationCommandBatch.Consume(RelationalDataReader reader)\r\n at Microsoft.EntityFrameworkCore.Update.ReaderModificationCommandBatch.Execute(IRelationalConnection connection)\r\n at Microsoft.EntityFrameworkCore.Update.Internal.BatchExecutor.Execute(IEnumerable`1 commandBatches, IRelationalConnection connection)\r\n at Microsoft.EntityFrameworkCore.Storage.RelationalDatabase.SaveChanges(IList`1 entries)\r\n at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChanges(IList`1 entriesToSave)\r\n at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChanges(StateManager stateManager, Boolean acceptAllChangesOnSuccess)\r\n at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.<>c.<SaveChanges>b__104_0(DbContext _, ValueTuple`2 t)\r\n at Npgsql.EntityFrameworkCore.PostgreSQL.Storage.Internal.NpgsqlExecutionStrategy.Execute[TState,TResult](TState state, Func`3
operation, Func`3 verifySucceeded)\r\n at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChanges(Boolean acceptAllChangesOnSuccess)\r\n at Microsoft.EntityFrameworkCore.DbContext.SaveChanges(Boolean acceptAllChangesOnSuccess)\r\n at Microsoft.EntityFrameworkCore.DbContext.SaveChanges()\r\n at Efcore_testapp.Controllers.HomeController.Index() in C:\Users\parthraj.panchal\source\repos\Efcore testapp\Efcore testapp\Controllers\HomeController.cs:line 52"
TargetSite: {Void Consume(Microsoft.EntityFrameworkCore.Storage.RelationalDataReader)}
My observation is this: EF Core sends the INSERT for table T and expects confirmation that one row was affected in T, but PostgreSQL's trigger redirects the row into partition table T2 (the BEFORE INSERT trigger returns NULL, so no row lands in T itself), and the statement therefore reports 0 rows affected. That mismatch between what PostgreSQL reports and what EF Core expects is where the conflict happens.
My test: when I disabled the triggers that decide where to insert the data, the whole flow worked fine.
Does anyone have any idea about this?
I am trying to run this DB2 query in DBeaver:
TRUNCATE table departments immediate
but I got this error:
DB2 SQL Error: SQLCODE=-668, SQLSTATE=57016, SQLERRMC=7;DB2INST1.DEPARTMENTS, DRIVER=4.19.49
(It only happens when I run it from DBeaver, i.e. an external connection; run locally it works fine.)
Can someone help?
SQLCODE -668 with SQLERRMC=7 (the 7 is the "reason code") means:
SQL0668N Operation not allowed for reason code "<reason-code>" on table "<table-name>".
and the reason code 7 means:
The table is in the reorg pending state. This can occur after an
ALTER TABLE statement containing a REORG-recommended operation.
If your userid has the correct permissions, then try:
reorg table db2inst1.departments
if you have command-line access to Db2, or, from a JDBC application like DBeaver:
call admin_cmd('reorg table db2inst1.departments')
But the reorg will fail if your account lacks permissions, or if the syntax is not allowed on your Db2 server version; in that case you must ask a DBA to do the work for you, or become user db2inst1 and run the reorg yourself.
When the reorg completes without errors, retry the truncate table.
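If you need to run the reorg from your own JDBC code rather than the DBeaver SQL editor, a minimal sketch of the ADMIN_CMD call (the connection details are hypothetical):
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class ReorgTable {
    public static void main(String[] args) throws Exception {
        // Hypothetical host/port/database and credentials.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:db2://localhost:50000/SAMPLE", "db2inst1", "password");
             CallableStatement cs = conn.prepareCall("CALL SYSPROC.ADMIN_CMD(?)")) {
            cs.setString(1, "REORG TABLE DB2INST1.DEPARTMENTS");
            cs.execute();
        }
    }
}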
I have a problem with prepared statements in JDBC against an OpenEdge database, where we get the following exception:
java.sql.SQLException: [DataDirect][OpenEdge JDBC Driver][OpenEdge] Column Namn in table PUB.Brevlada has value exceeding its max length or precision.
I'm well aware of the fix for this with DBTool, but I have another question. Once the prepared statement has thrown this exception, it cannot be used again, even to fetch other rows in the table that do not have the width problem. We then get a slightly different exception:
java.sql.SQLException: [DataDirect][OpenEdge JDBC Driver][OpenEdge] Column %s in table %s has value exceeding its max length or precision (7864)
Is this a problem with the SQL Statement Cache on the server, and is there any workaround for this other than re-initializing the JDBC connection to the database?
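One thing worth trying before tearing down the connection is to close the failed PreparedStatement and prepare it again on the same connection, so the server discards the failed statement handle. A sketch only, assuming the failure is tied to the statement handle rather than the connection itself (the query is illustrative, and the caller is responsible for closing the returned ResultSet):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class StatementRecycle {
    // Illustrative query against the table from the error message.
    private static final String SQL = "SELECT Namn FROM PUB.Brevlada WHERE id = ?";

    static ResultSet queryWithReprepare(Connection conn, int id) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(SQL);
        try {
            ps.setInt(1, id);
            return ps.executeQuery();
        } catch (SQLException e) {
            // Drop the failed server-side handle and prepare a fresh one on
            // the same connection instead of re-initializing the connection.
            ps.close();
            PreparedStatement fresh = conn.prepareStatement(SQL);
            fresh.setInt(1, id);
            return fresh.executeQuery();
        }
    }
}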
We're using Flyway with Cloud Foundry. In short, we have no control over the database username/password (by design), and the username is a very long string of more than 30 characters. When I try to run the migration, I get the following error:
flyway.core.api.FlywayException: Unable to insert metadata table row for version 0
... stacktrace ...
Caused by: org.postgresql.util.PSQLException: ERROR: value too long for type character varying(30)
... more stacktrace ...
Can I configure Flyway to ignore the installed_by column in the metadata table? I suspect this could be fixed by building Flyway with a larger column, or by truncating the username.
EDIT
I was able to mitigate the issue by logging into the database and expanding the column to 50 characters manually:
alter table schema_version alter column installed_by set data type character varying(50);
It's still a manual step in a setup that's supposed to be hands-off, so this might still warrant a feature request in Flyway (support for longer usernames).
As per Axel's comment, I filed the enhancement request here.
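Note for later readers: newer Flyway releases expose an installedBy configuration option that overrides the value written to that column, which would avoid the manual ALTER entirely. A sketch, assuming the modern fluent API; the datasource details and the "deploy" label are placeholders:
import org.flywaydb.core.Flyway;

public class Migrate {
    public static void main(String[] args) {
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://localhost/app", "user", "password")
                // Short fixed label that fits the 30-character installed_by column.
                .installedBy("deploy")
                .load();
        flyway.migrate();
    }
}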