Hibernate Envers cannot delete entity - jpa

I have run into a weird issue. I just want to delete an entity, and I am also using Hibernate Envers for auditing. When I try to delete the entity, I get the following message:
com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException:
Column 'succeed' cannot be null
When I removed @Audited from my entity, I was suddenly able to delete it. I then went to my entity_aud table and deselected NOT NULL for the succeed column, put @Audited back on my entity, and now it works.
So why, if I just want to delete an entity, do I get a NOT NULL error when using Hibernate Envers? What is the reason for this?

When an entity is removed, Envers will also generate an audit entry for that operation. By default, entity data is not captured when a delete audit record is produced, so essentially Envers attempts to insert a row into the audit table that contains only the primary key, revision number, and revision type. All the other columns will be inserted with null values.
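To make this concrete, here is a sketch of the kind of statement Envers ends up issuing for a delete revision; the table and column names are taken from the question, the values are made up, and REVTYPE = 2 is Envers' marker for a deletion:
-- Hypothetical delete-revision insert; every audited column, including
-- succeed, is written as NULL, which violates a NOT NULL constraint.
INSERT INTO entity_aud (id, REV, REVTYPE, succeed)
VALUES (42, 7, 2, NULL);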
Since your audit table defined the succeed column as NOT NULL, the delete threw an exception.
Besides the primary key, revision, and revision type columns, all other columns in the audit table should, for this reason, be created without the NOT NULL specification, meaning they are allowed to be NULL. If you can reproduce Envers generating tables that do not adhere to this, please report it as a bug, attaching the entity model to the issue.
The configuration setting org.hibernate.envers.store_data_at_delete, when set to true, tells Envers to capture not only the entity's primary key, revision, and revision type, but all audited columns as well. This is not enabled by default because the prior revision generally holds the same state, so replicating it is unnecessary; however, some users prefer to have it.
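For reference, a minimal sketch of enabling it, assuming a properties-based Hibernate configuration (the same key works as a persistence.xml property):
# hibernate.properties
org.hibernate.envers.store_data_at_delete=true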

Related

Implement relation conditionally at entity level in typeorm

I have a case where I need to implement the softDelete feature of TypeORM. Somewhere in my entity (let's call it Lead), I have a column that maps to another entity (let's call it Customer) with a OneToOne relation.
............
@OneToOne(type => Customer, {})
@JoinColumn()
customer: Customer;
..........
The problem here is that since soft remove doesn't remove the record completely from the database, whenever I remove a record from the lead table I can't add another lead for the same customer, because of the OneToOne relation.
While searching the internet, I found a few solutions for similar unique-constraint scenarios, such as using:
Partial indexes &
Virtual columns
But here I'm looking for some kind of TypeORM-level solution while mapping relations. What would be the best workaround for this case?
A one-to-one relationship in TypeORM creates a unique foreign key constraint by default. Though the row is soft-deleted from the Lead table, the unique value is still present in the table. So, when inserting another lead for the same customer, TypeORM will throw a unique constraint error.
The solution for this issue is to remove the foreign key constraint from the relationship. This will allow us to insert data for the same customerId in the Lead table.
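If your TypeORM version supports it, the createForeignKeyConstraints relation option expresses this declaratively; a sketch based on the entities from the question:
@OneToOne(type => Customer, { createForeignKeyConstraints: false })
@JoinColumn()
customer: Customer;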
Now what we have to make sure of is:
before inserting a value in the Lead table, check whether another lead that is not soft-deleted already exists for that customerId (see the sketch after this list).
We also have to ensure that before deleting any customer from the Customer table, their leads are soft-deleted from the Lead table.
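A minimal sketch of that pre-insert guard, assuming an explicit customerId column on Lead and a standard repository (all names are illustrative):
// Soft-deleted rows are excluded by default, so this finds only active leads.
const existing = await leadRepository.findOne({ where: { customerId } });
if (existing) {
  throw new Error('An active lead already exists for this customer');
}
await leadRepository.save(newLead);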
P.S.: This is admittedly a hacky solution. But since soft delete doesn't consider foreign key constraint references, it is the most suitable way I have found so far.

Imported data, duplicate key value violates unique constraint

I am migrating data from MSSQL.
I created the database in PostgreSQL via an Npgsql-generated migration. I moved the data across, and now when the code tries to insert a value I am getting
'duplicate key value violates unique constraint'
Npgsql tries to insert a row with Id 1; however, the table already has Ids over a thousand.
Npgsql.EntityFrameworkCore.PostgreSQL is 2.2.3 (latest)
In my context builder, I have
modelBuilder.ForNpgsqlUseIdentityColumns();
In which direction should I dig to resolve such an issue?
The code runs fine if the database is empty and doesn't have any imported data
Thank you
The values inserted during the migration contained the primary key value, so the sequence behind the column wasn't incremented and stays at 1. A normal insert - without specifying the PK value - calls the sequence, gets 1, which already exists in the table.
To fix it, you can bump the sequence to the current max value.
SELECT setval(
pg_get_serial_sequence('myschema.mytable','mycolumn'),
max(mycolumn))
FROM myschema.mytable;
If you already know the sequence name, you can shorten it to
SELECT setval('my_sequence_name', max(mycolumn))
FROM myschema.mytable;
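Note that setval fails on an empty table, because max(mycolumn) returns NULL. A defensive variant, using the same hypothetical names, falls back to 1 and tells the sequence whether a value has been consumed yet:
SELECT setval(
    pg_get_serial_sequence('myschema.mytable', 'mycolumn'),
    COALESCE(max(mycolumn), 1),
    max(mycolumn) IS NOT NULL)  -- is_called: false on an empty table, so the next nextval returns 1
FROM myschema.mytable;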

Entity Framework 6 Cascade Deletes and DropForeignKey fails on auto generated constraint name

I've been running into a bit of an issue with Entity Framework and cascade deletes between two tables on several one-to-many relationships.
Initially it looked like the correct path to take was to configure the table mappings in the OnModelCreating method of DbContext, turning off cascade delete in a manner such as
modelBuilder.Entity<SourceTable>()
.HasOptional(x => x.NavigationProperty)
.WithOptionalDependent()
.WillCascadeOnDelete(false);
This, however, did not work, throwing an exception stating
Cannot delete or update a parent row: a foreign key constraint fails...
More research led me to believe that this is because all affected entities must be loaded into the context (eagerly fetched) so that Entity Framework can set the FK references to null as part of the transaction. This is not practical for my needs, given the size of the relational graph I'd be dealing with.
My next approach was to modify the Seed method of the Configuration class and run some arbitrary SQL to drop the foreign key constraint and re-add it as an ON DELETE SET NULL constraint. This worked in most cases; however, one of the constraints has what appears to be an auto-generated, unpredictable name that is different on each call of Update-Database. Given that the name can't be predicted, the ALTER statements aren't particularly helpful:
context.Database.ExecuteSqlCommand(@"ALTER TABLE SourceTable DROP FOREIGN KEY FK_9405957d032142c3a1227821a9ed1fdf;
ALTER TABLE SourceTable
ADD CONSTRAINT FK_ReasonableName
FOREIGN KEY (NavigationProperty_Id) REFERENCES NavigationProperty (Id) ON DELETE SET NULL;");
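One way to cope with the unpredictable name is to resolve it from information_schema at runtime before issuing the DROP; a sketch for MySQL, using the table and column names from the statement above:
SELECT CONSTRAINT_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'SourceTable'
  AND COLUMN_NAME = 'NavigationProperty_Id'
  AND REFERENCED_TABLE_NAME IS NOT NULL;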
Finally, I've taken the approach of using the migration functionality (DbMigration), overriding the Up method and leveraging the DropForeignKey method alongside more explicit SQL to re-add the constraint (EF does not appear to provide a facility to create an ON DELETE SET NULL constraint).
DropForeignKey("SourceTable", "NavigationProperty_Id", "DestinationTable");
Sql("ALTER TABLE SourceTable ADD CONSTRAINT FK_ReasonableName FOREIGN KEY (NavigationProperty_Id) REFERENCES DestinationTable (Id) ON DELETE SET NULL;");
This works great, up until I encounter the constraint with the auto-generated name. At that point the DropForeignKey method fails with an exception that is swallowed up by
System.Runtime.Serialization.SerializationException: Type is not resolved for member 'MySql.Data.MySqlClient.MySqlException,MySql.Data...
When dumping the migration to a SQL script file, it becomes clear that DropForeignKey simply generates a predictable, non-ambiguous FK name, which does not match the auto-generated one actually in the database.
Is there a proper EF Code First approach to setting FK column values to null when deleting the referenced row, or am I stuck hand-coding SQL in order to gain this functionality?

When I add a column in the database, under what conditions do I need to update my EDMX?

To elaborate:
I know that if I add a non-nullable field, I need to update the model if I want to write to the database. What if I just want to read?
What if it's a nullable field? Can I both read and write?
What if I were to change the primary key to the new column but the EDMX still has the old column as the primary key?
1) If you want to port an old database, you need to make sure that every table in your database has a primary key. This is the only requirement for creating the EDMX.
2) If you've added a column to a table on the database side and have not updated the EDMX, you'll simply not be able to use that column through Entity Framework.
If you create a non-nullable column with no default value, insert operations will fail with the exception "Cannot insert null into column , statement terminated". And you'll not be able to read values of that column through Entity Framework unless you update the EDMX.
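For illustration, a hypothetical sequence that reproduces this (the table and column names are made up):
-- Added on the database side only; succeeds while the table is still empty.
ALTER TABLE Orders ADD Notes nvarchar(100) NOT NULL;
-- EF, working from the stale EDMX, emits an INSERT that omits Notes,
-- so SQL Server rejects it: no value and no default for a NOT NULL column.
INSERT INTO Orders (Id, CustomerName) VALUES (1, 'Acme');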
3) If you've changed the primary key of a table on the database side and the EDMX is not aware of it, your application might throw a runtime exception when performing operations on that table.
Remember, Entity Framework creates SQL queries based on its knowledge of the database (which is defined in the EDMX). So if the EDMX is incorrect, the generated SQL queries might lead to problems at runtime.

Using "rowversion" as primary key column

I am using SQL Server 2012 and I want to create a "changes" table - it will be populated with data from another table when that table's column values change.
I am adding "datetime2" and "rowversion" columns to the "changes" table in order to track when the changes are made.
Is it ok to use "rowversion" as primary key?
I have read here that it will be changed if the current row is updated, and that's why it is not a good candidate for a primary key, as it would make foreign keys invalid.
Anyway, if it won't be used as a foreign key, and the rows of the "changes" table will never be updated (only new rows will be inserted), is it OK to use the rowversion as the PK, or should I use an additional column?
Some good info here:
Careful reading of the MSDN page also shows that duplicate rowversion values are possible if SELECT INTO statements are used improperly. Something to watch out for there.
I would stick with an Identity field in the original data, carried over into the change-tracking table, which has its own Identity field.
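A sketch of that shape, with hypothetical names; the source row's rowversion is stored as plain binary(8) so the copied value is preserved rather than regenerated:
CREATE TABLE dbo.Changes (
    ChangeId  int IDENTITY(1,1) PRIMARY KEY,           -- the change table's own identity
    SourceId  int NOT NULL,                            -- identity carried over from the original row
    ChangedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    SourceVer binary(8) NOT NULL                       -- copy of the source row's rowversion at change time
);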