Insert after delete in the same transaction in Spring Data JPA

Using Spring Data JPA, I have the following flow inside a single transaction (REQUIRES_NEW):
Remove a set of a user's predictions with this Spring Data JPA repository method.
@Query(value = "DELETE FROM TRespuestaUsuarioPrediccion resp WHERE resp.idEvento.id = :eventId AND resp.idUsuario.id = :userId")
@Modifying
void deleteUserPredictions(@Param("userId") int userId, @Param("eventId") int eventId);
Insert the new user's predictions and save the master object (event).
eventRepository.save(event);
When this service method finishes, the commit is performed by AOP, but it only works on the first attempt, not on subsequent ones.
How can I manage this situation without iterating over the event's prediction entries and updating each one individually?
UPDATE
I tried the following and it doesn't work (the adapter re-inserts the objects I removed just before):
@Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = PlayTheGuruException.class)
private void updateUserPredictions(final TUsuario user, final TEvento event, final SubmitParticipationRequestDTO eventParticipationRequestDTO)
{
    eventRepository.deleteUserPredictions(user.getId(), event.getId());
    EventAdapter.predictionParticipationDto2Model(user, event, eventParticipationRequestDTO);
    eventRepository.save(event);
}

Hibernate changes the order of the statements. It executes them in the following order:
Execute all SQL and second-level cache updates, in a special order so that foreign-key constraints cannot be violated:
1. Inserts, in the order they were performed
2. Updates
3. Deletion of collection elements
4. Insertion of collection elements
5. Deletes, in the order they were performed
And that is exactly the case here: when flushing, Hibernate executes all inserts before the delete statements.
The possible options are:
1. Call entityManager.flush() explicitly just after the delete (see the sketch after this list).
OR
2. Wherever possible, update existing rows and build a to-be-deleted list from the rest. This ensures that existing records are updated with new values and only genuinely new records are inserted.
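
For option 1, a minimal sketch based on the code from the question (assuming an EntityManager injected via @PersistenceContext; note the method is public here, since Spring's proxy-based @Transactional is ignored on private methods):

@PersistenceContext
private EntityManager entityManager;

@Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = PlayTheGuruException.class)
public void updateUserPredictions(final TUsuario user, final TEvento event, final SubmitParticipationRequestDTO eventParticipationRequestDTO)
{
    eventRepository.deleteUserPredictions(user.getId(), event.getId());
    // Push the pending DELETE to the database now, so Hibernate cannot
    // reorder it after the INSERTs queued for the new predictions.
    entityManager.flush();
    EventAdapter.predictionParticipationDto2Model(user, event, eventParticipationRequestDTO);
    eventRepository.save(event);
}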

PostgreSQL (and possibly other databases as well) can defer a constraint check until commit. This means it accepts duplicates within the transaction but enforces the unique constraint when committing.
ALTER TABLE <table name> ADD CONSTRAINT <constraint name> UNIQUE(<column1>, <column2>, ...) DEFERRABLE INITIALLY DEFERRED;
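
A sketch of what this allows inside one transaction, in plain JDBC (the prediction table, its columns, and the connection parameters are invented for illustration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

try (Connection con = DriverManager.getConnection(jdbcUrl, dbUser, dbPassword)) {
    con.setAutoCommit(false);
    try (Statement st = con.createStatement()) {
        // Transiently duplicates the (event_id, user_id) key; a constraint
        // declared DEFERRABLE INITIALLY DEFERRED tolerates this for now.
        st.executeUpdate("INSERT INTO prediction (event_id, user_id, pick) VALUES (1, 7, 'B')");
        st.executeUpdate("DELETE FROM prediction WHERE event_id = 1 AND user_id = 7 AND pick = 'A'");
    }
    // The unique check runs here; the duplicate row is already gone, so it passes.
    con.commit();
}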

Is it possible to access current column data on conflict

I want the following behaviour when inserting data (conflict on id):
if there is no model with the same id in the DB, do the INSERT
if there is an entry with the same id in the DB and that entry is newer (updated_at field), do NOT UPDATE
if there is an entry with the same id in the DB and that entry is older (updated_at field), do the UPDATE
I'm using Ecto for this and want to solve it with constraints; however, I cannot find an option for that in the documentation. Pseudo-code of the constraint could look like:
CHECK: NULL(current.updated_at) or incoming.updated_at > current.updated_at
Is such behaviour possible in Postgres?
PostgreSQL does not support CHECK constraints that reference table
data other than the new or updated row being checked. While a CHECK
constraint that violates this rule may appear to work in simple tests,
it cannot guarantee that the database will not reach a state in which
the constraint condition is false (due to subsequent changes of the
other row(s) involved). This would cause a database dump and reload to
fail. The reload could fail even when the complete database state is
consistent with the constraint, due to rows not being loaded in an
order that will satisfy the constraint. If possible, use UNIQUE,
EXCLUDE, or FOREIGN KEY constraints to express cross-row and
cross-table restrictions.
If what you desire is a one-time check against other rows at row
insertion, rather than a continuously-maintained consistency
guarantee, a custom trigger can be used to implement that. (This
approach avoids the dump/reload problem because pg_dump does not
reinstall triggers until after reloading data, so that the check will
not be enforced during a dump/reload.)
That should be simple using the WHERE clause of ON CONFLICT ... DO UPDATE:
INSERT INTO mytable (id, entry) VALUES (42, '2021-05-29 12:00:00')
ON CONFLICT (id)
DO UPDATE SET entry = EXCLUDED.entry
WHERE mytable.entry < EXCLUDED.entry;

Spring JPA save update in certain order

Code snippet in my Spring service to update the existing record:
//Credit Cards
find.getCreditCards().forEach(creditCard -> {
creditCard.setActiveVersion(false);
businessPartnerCreditCardRepository.save(creditCard);
});
And then insert the new one:
//Credit Cards
businessPartner.getCreditCards().forEach(creditCard -> {
creditCard.setVersion(find.getVersion() + 1);
creditCard.setActiveVersion(true);
businessPartnerCreditCardRepository.save(creditCard);
});
The issue is that Spring JPA runs the INSERT statement first and then the UPDATE, instead of running the UPDATE first and then the INSERT.
Why I need the UPDATE to run before the INSERT: there is a DB constraint that only one record can be active at a time, so when JPA inserts before the update has run, the DB complains and kicks the whole thing back. :D
Any update?
Do a flush after the update.
You can use saveAndFlush from the JpaRepository, or write a custom method in your repository that gets the EntityManager injected and performs the flush on it (see the sketch below).
Another option would be to make the constraint a deferred constraint so it is only checked at the end of the transaction.
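
A minimal sketch of the flush-based option, reusing the repositories and entities from the question (both save and flush are standard JpaRepository methods):

// Update pass: deactivate the old versions...
find.getCreditCards().forEach(creditCard -> {
    creditCard.setActiveVersion(false);
    businessPartnerCreditCardRepository.save(creditCard);
});
// ...and push the pending UPDATE statements to the database now,
// before any INSERT for the new versions is flushed.
businessPartnerCreditCardRepository.flush();

// Insert pass: by the time these rows are written, no other version is active,
// so the "only one active record" constraint is satisfied.
businessPartner.getCreditCards().forEach(creditCard -> {
    creditCard.setVersion(find.getVersion() + 1);
    creditCard.setActiveVersion(true);
    businessPartnerCreditCardRepository.save(creditCard);
});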

How to INSERT OR UPDATE while MATCHING a non Primary Key without updating existing Primary Key?

I'm currently working with Firebird and attempting to use its UPDATE OR INSERT functionality to solve a particular new case in our software. Basically, we need to pull data from a source, put it into an existing table, update that data at regular intervals, and add any new references. The source is not a database, so it isn't a matter of using MERGE to link two tables (unless we create a separate table and then merge it, but that seems unnecessary).
The problem rests on the fact that we cannot use the primary key of the existing table for matching, because we need to match based on the ID we get from the source. We can use the MATCHING clause without a problem, but the primary key of the existing table gets updated to the next generated key every time, because it has to appear in the query in case the statement inserts. Here is the query (along with the C# parameter additions) to demonstrate the problem.
UPDATE OR INSERT INTO existingtable (PrimaryKey, UniqueSourceID, Data) VALUES (?,?,?) MATCHING (UniqueSourceID);
this.AddInParameter("PrimaryKey", FbDbType.Integer, itemID);
this.AddInParameter("UniqueSourceID", FbDbType.Integer, source.id);
this.AddInParameter("Data", FbDbType.SmallInt, source.data);
The problem is that every time the UPDATE path fires, the primary key also changes to the next incremented key. I need a way to leave the primary key alone when updating, but still supply one when inserting.
Do not generate the primary key manually; let a trigger generate it when necessary:
CREATE SEQUENCE seq_existingtable;
SET TERM ^ ;
CREATE TRIGGER Gen_PK FOR existingtable
ACTIVE BEFORE INSERT
AS
BEGIN
    IF (NEW.PrimaryKey IS NULL) THEN NEW.PrimaryKey = NEXT VALUE FOR seq_existingtable;
END^
SET TERM ; ^
Now you can omit the PK field from your statement:
UPDATE OR INSERT INTO existingtable (UniqueSourceID, Data) VALUES (?,?) MATCHING (UniqueSourceID);
and when the statement takes the insert path, the trigger will take care of creating the PK. If you need to know the generated PK, use the RETURNING clause of the UPDATE OR INSERT statement (a sketch follows).
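
A hedged sketch of the RETURNING variant, shown here through JDBC rather than the ADO.NET provider used in the question. It assumes the Jaybird driver, which surfaces an explicit RETURNING clause through the standard generated-keys API; con, sourceId, and data are placeholders:

import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

String sql = "UPDATE OR INSERT INTO existingtable (UniqueSourceID, Data) "
           + "VALUES (?, ?) MATCHING (UniqueSourceID) RETURNING PrimaryKey";
try (PreparedStatement ps = con.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
    ps.setInt(1, sourceId);
    ps.setInt(2, data);
    ps.executeUpdate();
    try (ResultSet keys = ps.getGeneratedKeys()) {
        if (keys.next()) {
            // Existing PK on the update path, trigger-generated PK on the insert path.
            int primaryKey = keys.getInt(1);
        }
    }
}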

Possible to let the stored procedure run one by one even if multiple sessions are calling them in postgresql

In PostgreSQL, multiple sessions each want to get one record from the same table, but we need to make sure they don't interfere with each other. I could do it using a message queue: put the data in a queue, and then let each session get its data from the queue. But is it doable in PostgreSQL itself, since it would be easier for the SQL guys to call a stored procedure? Is there any way to configure a stored procedure so that no concurrent calls can happen, or to use some special lock?
I would recommend making sure the stored procedure uses SELECT FOR UPDATE, which should prevent the same row in the table from being accessed by multiple transactions.
Per the Postgres doc:
FOR UPDATE causes the rows retrieved by the SELECT statement to be
locked as though for update. This prevents them from being modified or
deleted by other transactions until the current transaction ends. That
is, other transactions that attempt UPDATE, DELETE, SELECT FOR UPDATE,
SELECT FOR NO KEY UPDATE, SELECT FOR SHARE or SELECT FOR KEY SHARE of
these rows will be blocked until the current transaction ends. The FOR
UPDATE lock mode is also acquired by any DELETE on a row, and also by
an UPDATE that modifies the values on certain columns. Currently, the
set of columns considered for the UPDATE case are those that have an
unique index on them that can be used in a foreign key (so partial
indexes and expressional indexes are not considered), but this may
change in the future.
More info in the SELECT documentation.
So that you don't end up locking all of the rows in the table at once (i.e. by SELECTing all of the records), I would recommend using ORDER BY to sort the table in a consistent manner and adding LIMIT 1, so each call only grabs the next row in the queue. Also add a WHERE clause that checks a status column (e.g. processed), and once a row is handled, set that column to a value that prevents the WHERE clause from picking it up again. A sketch of the whole pattern follows.
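
A sketch of that pattern in plain JDBC (the task_queue table, its columns, and the connection parameters are invented for illustration; the same statements could equally live inside a PL/pgSQL procedure):

import java.sql.*;

try (Connection con = DriverManager.getConnection(jdbcUrl, dbUser, dbPassword)) {
    con.setAutoCommit(false);
    // FOR UPDATE locks the selected row, so other sessions that try to
    // claim the same row block until this transaction ends.
    try (Statement st = con.createStatement();
         ResultSet rs = st.executeQuery(
             "SELECT id, payload FROM task_queue " +
             "WHERE processed = false ORDER BY id LIMIT 1 FOR UPDATE")) {
        if (rs.next()) {
            long id = rs.getLong("id");
            // ... process rs.getString("payload") ...
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE task_queue SET processed = true WHERE id = ?")) {
                ps.setLong(1, id);
                ps.executeUpdate();
            }
        }
    }
    con.commit(); // releases the lock; the WHERE clause now skips this row
}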

SQL Merge Query - Executing Additional Query

I have written a working T-SQL MERGE statement. The premise is that Database A contains records about customer's support calls. If they are returning a product for repair, Database B is to be populated with certain data elements from Database A (e.g. customer name, address, product ID, serial number, etc.) So I will run an SQL Server job that executes an SSIS package every half hour or so, in which the MERGE will do one of the following:
1. If the support call in Database A requires a product return and it is not in Database B, INSERT it into Database B.
2. If the support call in Database A requires a product return and it is in Database B, but data has changed, UPDATE it in Database B.
3. If there is a product return in Database B but it is no longer indicated as a product return in Database A (yes, this can happen: a customer can change their mind at a later time/date and not want to pay for a replacement product), DELETE it from Database B.
My problem is that Database B has an additional table with a one-to-many FK relationship to the table being populated by the MERGE. I do not know how, or even whether, I can use a MERGE statement to first delete the records in the FK-constrained table before deleting the parent records as I currently do in my MERGE statement.
Obviously, one way would be to drop the DELETE from the MERGE and hack around it by writing the IDs to delete into a temp table, then deleting from the FK table, then from the PK table. But if I could somehow delete from both tables in WHEN NOT MATCHED BY SOURCE, that would be cleaner code. Can this be done?
You can only UPDATE, DELETE, or INSERT into/from one table per query.
However, if you add ON DELETE CASCADE to the FK relationship, the sub-table is cleaned up as you delete from the primary table, and it is all handled in a single operation (a sketch follows).
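
A hedged sketch of that change, issued here over JDBC (con is an open Connection; table, column, and constraint names are invented, so adjust them to the real schema):

import java.sql.Statement;

try (Statement st = con.createStatement()) {
    // Re-create the FK so a DELETE on the parent cascades to the child rows,
    // letting the MERGE's WHEN NOT MATCHED BY SOURCE ... DELETE clean up both tables.
    st.execute("ALTER TABLE ReturnDetail DROP CONSTRAINT FK_ReturnDetail_ProductReturn");
    st.execute("ALTER TABLE ReturnDetail ADD CONSTRAINT FK_ReturnDetail_ProductReturn "
             + "FOREIGN KEY (ReturnId) REFERENCES ProductReturn (ReturnId) "
             + "ON DELETE CASCADE");
}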