I have a SQL Server database with two entities and four stored procedures for each entity (Select, Insert, Update, Delete), and I'm using Entity Framework for data access in my application. (The application is just for training purposes.)
I mapped the Insert procedure to an entity and it worked, but then I changed the "Result Column Binding" to "Id" (the identity column).
That caused this exception:
Store update, insert, or delete statement affected an unexpected
number of rows (0). Entities may have been modified or deleted since
entities were loaded. Refresh ObjectStateManager entries.
After searching for a while I didn't find the reason for it;
then I figured it was the "Result Column Binding" change, so I removed it.
I just want to know the cause of this exception: what went wrong?
I'm using Entity Framework Core in my project.
I'm on the latest PostgreSQL, and my requirement is to bulk-insert data into a database that has a main table and its partition tables (horizontal partitioning).
The partition tables inherit from the main table and are created in advance automatically by database triggers.
PostgreSQL has one more trigger: when a row arrives for insertion, it decides which partition table the row must go into, based on a predetermined column value
(say there is a timestamp column, and it routes by date).
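For reference, the routing trigger looks roughly like the sketch below (hypothetical table and column names, with psycopg2 only to keep the snippet self-contained; the real schema is different):

    import psycopg2

    # Minimal sketch of trigger-based (inheritance) partitioning: a BEFORE INSERT
    # trigger routes each row into a child table by its timestamp, then returns
    # NULL so the row is never stored in the parent table itself.
    DDL = """
    CREATE TABLE measurements (ts timestamptz NOT NULL, value int);

    CREATE TABLE measurements_2024_01
        (CHECK (ts >= '2024-01-01' AND ts < '2024-02-01'))
        INHERITS (measurements);

    CREATE FUNCTION measurements_route() RETURNS trigger AS $$
    BEGIN
        IF NEW.ts >= '2024-01-01' AND NEW.ts < '2024-02-01' THEN
            INSERT INTO measurements_2024_01 VALUES (NEW.*);
        END IF;
        RETURN NULL;  -- cancels the insert into the parent table
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER measurements_route_trg
        BEFORE INSERT ON measurements
        FOR EACH ROW EXECUTE FUNCTION measurements_route();
    """

    with psycopg2.connect("dbname=test") as conn, conn.cursor() as cur:
        cur.execute(DDL)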
The issue is that when I try to insert data using EF Core
(adding the model and then calling context.SaveChanges()),
the PostgreSQL provider (Npgsql) throws the exception below.
{"The database operation was expected to affect 1 row(s), but actually affected 0 row(s); data may have been modified or deleted since entities were loaded. See http://go.microsoft.com/fwlink/?LinkId=527962 for information on understanding and handling optimistic concurrency exceptions."}
Data: {System.Collections.ListDictionaryInternal}
Entries: Count = 1
HResult: -2146233088
HelpLink: null
InnerException: null
Message: "The database operation was expected to affect 1 row(s), but actually affected 0 row(s); data may have been modified or deleted since entities were loaded. See http://go.microsoft.com/fwlink/?LinkId=527962 for information on understanding and handling optimistic concurrency exceptions."
Source: "Npgsql.EntityFrameworkCore.PostgreSQL"
StackTrace: " at Npgsql.EntityFrameworkCore.PostgreSQL.Update.Internal.NpgsqlModificationCommandBatch.Consume(RelationalDataReader reader)\r\n at Microsoft.EntityFrameworkCore.Update.ReaderModificationCommandBatch.Execute(IRelationalConnection connection)\r\n at Microsoft.EntityFrameworkCore.Update.Internal.BatchExecutor.Execute(IEnumerable`1 commandBatches, IRelationalConnection connection)\r\n at Microsoft.EntityFrameworkCore.Storage.RelationalDatabase.SaveChanges(IList`1 entries)\r\n at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChanges(IList`1 entriesToSave)\r\n at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChanges(StateManager stateManager, Boolean acceptAllChangesOnSuccess)\r\n at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.<>c.<SaveChanges>b__104_0(DbContext _, ValueTuple`2 t)\r\n at Npgsql.EntityFrameworkCore.PostgreSQL.Storage.Internal.NpgsqlExecutionStrategy.Execute[TState,TResult](TState state, Func`3
operation, Func`3 verifySucceeded)\r\n at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChanges(Boolean acceptAllChangesOnSuccess)\r\n at Microsoft.EntityFrameworkCore.DbContext.SaveChanges(Boolean acceptAllChangesOnSuccess)\r\n at Microsoft.EntityFrameworkCore.DbContext.SaveChanges()\r\n at Efcore_testapp.Controllers.HomeController.Index() in C:\Users\parthraj.panchal\source\repos\Efcore testapp\Efcore testapp\Controllers\HomeController.cs:line 52"
TargetSite: {Void Consume(Microsoft.EntityFrameworkCore.Storage.RelationalDataReader)}
My observation:
EF Core sends the data to PostgreSQL to insert into table T and expects confirmation that one row was inserted into T, but since PostgreSQL actually puts the data into the partition table T2, the statement on T reports zero rows.
That's where the conflict between PostgreSQL and EF Core arises.
My test:
I once tested a scenario where I simply disabled the triggers that decide where to insert the data, and the whole flow worked fine.
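That lines up with what the driver reports: with the trigger in place, the INSERT against the parent table affects 0 rows (the BEFORE INSERT trigger returns NULL after redirecting the row), and EF Core's SaveChanges treats "expected 1 row, got 0" as an optimistic concurrency failure. A quick check with psycopg2 against the sketch above:

    import psycopg2

    with psycopg2.connect("dbname=test") as conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO measurements (ts, value) VALUES (%s, %s);",
            ("2024-01-15 12:00:00+00", 42))
        # Prints 0, not 1: the row went to measurements_2024_01, and that
        # row count is exactly what makes EF Core raise the concurrency error.
        print(cur.rowcount)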
Does anyone have any idea about this?
This error is preventing me from running sequelize db:migrate:undo:all.
The globalLinks table is a table I created in my second migration.
There are associations for this table created in a third migration.
There are no associations in any of the remaining models.
Are the objects mentioned in this error log referring to columns? tables? cells?
I know db:migrate:undo:all would undo each migration in reverse order starting from the most recent, so what would remain by the time I try to drop this table?
If it is any clue: I am undoing all of these migrations because the same table gives me an issue when I try to add a column. I get the error: ERROR: column "userId" of relation "globalLinks" already exists
What's up with this table?
I am trying to bulk-load records from a temp table into a target table using an INSERT ... SELECT statement with an ON CONFLICT ... DO UPDATE clause.
I want to load as many records as possible. Currently, if there are any foreign key violations, no records get inserted and everything is rolled back. Is there a way to insert the valid records and skip the faulty ones?
In https://dba.stackexchange.com/a/46477 I saw the strategy of joining against the foreign table in the query to filter out the faulty rows. I don't want to do that either, since I may have many foreign keys on the table and it would make the query more complex and table-specific; I would like it to be generic.
Sample use case: if there are 100 rows in the temp table and rows 5 and 7 cause insertion failures, I want to insert the other 98 records and identify which two rows failed.
I want to avoid inserting record by record and catching each error, as that is not efficient; the whole point of this exercise is to avoid loading the table row by row.
Oracle provides support for catching bulk errors in one shot. Samples:
https://stackoverflow.com/a/36430893/8575780
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1422998100346727312
I have already explored loading with COPY; it catches NOT NULL constraint violations and other data type errors, but when a foreign key violation happens, nothing gets committed.
I am looking for something closer to what pgloader does when it hits an error:
https://pgloader.readthedocs.io/en/latest/pgloader.html#batches-and-retry-behaviour
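One generic approach in that spirit is pgloader-style bisection: try each batch inside a savepoint, and when it fails, roll back to the savepoint and split the batch until the offending rows are isolated. A sketch with psycopg2 (hypothetical temp_table/target_table schema keyed by id; needs psycopg2 2.8+ for the errors module):

    import psycopg2
    from psycopg2 import errors

    def load(cur, ids, failed):
        """Insert the temp_table rows whose id is in `ids`; on a foreign key
        violation, bisect the batch until single faulty rows are isolated."""
        if not ids:
            return
        cur.execute("SAVEPOINT bulk_retry;")
        try:
            cur.execute(
                "INSERT INTO target_table (id, fk_id, payload) "
                "SELECT id, fk_id, payload FROM temp_table WHERE id = ANY(%s) "
                "ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload;",
                (ids,))
            cur.execute("RELEASE SAVEPOINT bulk_retry;")
        except errors.ForeignKeyViolation:
            cur.execute("ROLLBACK TO SAVEPOINT bulk_retry;")
            if len(ids) == 1:
                failed.append(ids[0])      # isolated one faulty row
            else:
                mid = len(ids) // 2        # split the batch and retry each half
                load(cur, ids[:mid], failed)
                load(cur, ids[mid:], failed)

    with psycopg2.connect("dbname=test") as conn, conn.cursor() as cur:
        cur.execute("SELECT id FROM temp_table ORDER BY id;")
        failed = []
        load(cur, [r[0] for r in cur.fetchall()], failed)
        print("rows skipped:", failed)     # e.g. [5, 7]

When failures are rare, this costs only a logarithmic number of extra statements per bad row instead of one round trip per record.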
I have a Postgres table that holds users' files. Two users can have a file with the same name, but a user isn't allowed to have two files with the same name. Currently, if a user tries to upload a file with a name they already used, the database spits out the error below, as it should.
IntegrityError: duplicate key value violates unique constraint "file_user_id_title_key"
What I would like to do is first query the database with the file name and user ID to see if the file name is already being used by that user. If it is, return an error; otherwise, write the row.
    cur.execute(
        'INSERT INTO files (user_id, title, share) '
        'VALUES (%s, %s, %s) RETURNING id;',
        (user.id, file.title, file.share))
The problem is that you cannot really do that without opening a race condition:
There is nothing to keep somebody else from inserting a conflicting row between the time you query the table and when you try to insert the row, so the error could still happen (unless you go to extreme measures like locking the table before you do that, which would affect concurrency badly).
Moreover, your proposed technique incurs extra load on the database by adding a superfluous second query.
You are right that you should not confront the user with a database error message, but the correct way to handle this is as follows:
You INSERT the new row like you showed.
You check if you get an error.
If the SQLSTATE of the error is the SQL standard value 23505 (unique_violation), you know that there is already such a file for the user, and you can show an appropriate error message.
So you can treat the INSERT statement as an atomic operation: check whether there is already a matching entry and, if not, add the row.
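A minimal sketch of that pattern with psycopg2, extending the snippet above (UniqueViolation needs psycopg2 2.8+; on older versions compare e.pgcode against '23505' instead):

    import psycopg2
    from psycopg2 import errors

    def create_file(conn, user_id, title, share):
        try:
            # the connection context manager commits on success
            # and rolls back if the INSERT raises
            with conn, conn.cursor() as cur:
                cur.execute(
                    'INSERT INTO files (user_id, title, share) '
                    'VALUES (%s, %s, %s) RETURNING id;',
                    (user_id, title, share))
                return cur.fetchone()[0]
        except errors.UniqueViolation:
            # SQLSTATE 23505: someone (possibly a concurrent request) already
            # stored a file with this title for this user
            raise ValueError('You already have a file named "%s".' % title)

If you don't need to report the conflict at all, INSERT ... ON CONFLICT DO NOTHING is another atomic option, but then you must check cur.rowcount to know whether the row was actually written.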
When I add a column in the database, under what conditions do I need to update my EDMX?
To elaborate:
I know that if I add a non-nullable field, I need to update the model if I want to write to the database. What if I just want to read?
What if it's a nullable field? Can I both read and write?
What if I changed the primary key to the new column, but the EDMX still has the old column as the primary key?
1) If you want to port an old database, you need to make sure that every table in it has a primary key. This is the only requirement for creating the EDMX.
2) If you've added a column to a table on the database side and have not updated the EDMX, you simply won't be able to use that column through Entity Framework.
If you created the column as non-nullable with no default value, inserts will fail with an exception like "Cannot insert the value NULL into column ...; the statement has been terminated". And you won't be able to read that column's values through Entity Framework until you update the EDMX.
3) If you've changed the primary key of a table on the database side and the EDMX is not aware of it, your application may hit a runtime exception when performing operations on that table.
Remember, Entity Framework generates SQL queries based on its knowledge of the database, which is defined in the EDMX. So if the EDMX is incorrect, the generated SQL may lead to problems at runtime.