We have a table called job which has a self-referencing key. We are using JPA with EclipseLink as the JPA provider. Sometimes we get the following exception:
Exception [EclipseLink-4002] (Eclipse Persistence Services -
2.3.2.v20111125-r10461): org.eclipse.persistence.exceptions.DatabaseException Internal
Exception: com.sybase.jdbc3.jdbc.SybSQLException: Your server command
(family id #0, process id #384) encountered a deadlock situation.
Please re-run your command.
We have an action in our UI which, when performed, sends a JMS message to an external component and creates a record in our job table. The job id is then sent to the client, who is redirected to the jobs view, which lists all jobs in the table. After the redirect, the client sends an AJAX request to list all jobs. While this is going on, we receive notifications from the external components and update the records in the job table.
I strongly believe this happens because we try to update the table while the select is still running. Can anyone please tell me how to solve this problem?
Thank you all in advance, and good day.
You may be able to get around the select/update conflict by changing the locking scheme for the table, in addition to having good indexes.
Sybase has good documentation on this here:
Performance and Tuning Series: Locking and Concurrency Control
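The change itself is a single DDL statement that you would normally run directly against the server (for example from isql). Purely as an illustration, here is a JDBC sketch; the table name job comes from the question, the connection details are placeholders, and you should confirm with your DBA that row-level locking suits your workload:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SwitchLockScheme {

    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust host, port, database and credentials.
        try (Connection connection = DriverManager.getConnection(
                     "jdbc:sybase:Tds:dbhost:5000/mydb", "user", "password");
             Statement statement = connection.createStatement()) {
            // Switch the job table from the default allpages locking to row-level
            // (datarows) locking, which reduces page-level select/update contention.
            statement.executeUpdate("alter table job lock datarows");
        }
    }
}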
Related
I am copying Java code (using Spring Boot and Spring Batch) and the database from the dev server to my local desktop and running it there, and I am getting an error.
It works fine on the dev server. Locally, Spring Batch is resetting the job instance id to 1 and causing a primary key error. Is there any option in Spring Batch so that it starts with the next instance id instead of 1? Please let me know.
I referred to the Stack Overflow link below; it seems related, but it was posted a few years back and the referenced links no longer work.
Duplicate Spring Batch Job Instance
@Configuration
@EnableBatchProcessing
public class Jobclass {
    // Rest of the code, with the Job bean and steps, which works fine on the dev server
}
Error:
com.microsoft.sqlserver.jdbc.SQLServerException: Violation of PRIMARY KEY
constraint 'PK__BATCH_JO__4848154AFB5435C7'. Cannot insert duplicate key
in object 'dbo.BATCH_JOB_INSTANCE'. The duplicate key value is (5).
I've had the same thing happen to me when moving an anonymized production database to another system. It turns out that the anonymization tool in question (PostgreSQL Anonymizer) has a bug which strips the commands that set the next value for the exported sequences, so that was the root cause.
This would also cause the ID reported in the stack trace to be incremented by 1 with every attempt, since the sequence was erroneously starting at 1 while a lot of previous executions were already stored in Spring Batch's tables.
When I sidestepped the issue by setting the next value myself, the problem vanished. In my case, this amounted to:
SELECT pg_catalog.setval('public.batch_job_execution_seq', 6482, true);
SELECT pg_catalog.setval('public.batch_job_seq', 6482, true);
SELECT pg_catalog.setval('public.batch_step_execution_seq', 6482, true);
Is there any option in spring batch so that it starts with next instance id
To answer your question, the "option" you are looking for is the RunIdIncrementer. It will increment a job parameter called "run.id" each time so you will have a new instance on each run.
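For example, with a Java-config job similar to the one in the question, the incrementer could be wired in roughly like this (a minimal sketch; the job and step names are made up, not taken from your code):
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.launch.support.RunIdIncrementer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBatchProcessing
public class Jobclass {

    // The incrementer adds an increasing "run.id" job parameter on every launch, so each
    // launch produces a new job instance even with otherwise identical parameters.
    @Bean
    public Job sampleJob(JobBuilderFactory jobBuilderFactory, Step sampleStep) {
        return jobBuilderFactory.get("sampleJob")
                .incrementer(new RunIdIncrementer())
                .start(sampleStep)
                .build();
    }
}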
However, this is not how I would fix the issue (see my comment). You need to check why this duplicate key exception is happening and fix it. Check whether you are launching the job with the same parameters, resulting in the same instance (and even if that happens, you should not get such an exception if the transaction isolation level of your job repository is correctly configured; I would expect a JobInstanceAlreadyCompleteException if the last execution succeeded, or a JobExecutionAlreadyRunningException if the last execution failed and another one is currently running).
Here is the sample project where the exception is reproduced.
This sample illustrates the issue when many concurrent transactions modify an Account balance. An Account can have many Card entities bound to it. Transactions are related to an Order and take some time to complete. Each thread executes as follows:
the client requests '/order/{hashId}' for the first available Order with the given card hash id
the client starts a new tx for the given order - '/tx/{orderId}/start'
the client completes the tx - '/tx/{txId}/stop/{amount}', where the tx amount is subtracted from the Account balance.
Entity Locking
Account and Order entities are versioned with @javax.persistence.Version. In the last step the Account entity is locked with a pessimistic write lock:
@Override
public Account getLockedAccount(Integer id) {
    final Account account = findOne(id);
    em.lock(account, LockModeType.PESSIMISTIC_WRITE);
    return account;
}
Testing
To test concurrent access, use the JMeter script src/main/resources/StressTest.jmx. NB: extra libraries have to be installed into the JMeter home to run the script because it uses the JSON Path extractor. With these specific settings, on an average laptop you can get around 10% errors for the TxEnd request:
{
"timestamp":1425407408204,
"status":500,
"error":"Internal Server Error",
"exception":"org.springframework.orm.ObjectOptimisticLockingFailureException",
"message":"Object of class [sample.data.jpa.domain.Account] with identifier [1]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [sample.data.jpa.domain.Account#1]",
"path":"/tx/1443/stop/46.4"
}
Question
Despite using a pessimistic write lock, I still get the optimistic locking exception. Is there any other approach to ensure the integrity of the account without creating a task execution queue for all updates or synchronizing the methods?
UPD: The workaround with a task executor is in another branch. A Spring ThreadPoolTaskExecutor combined with a transactional task remediates the issue.
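For reference, here is a minimal sketch of the kind of executor-based workaround described above (not the code from the branch; Account, AccountRepository and the BigDecimal balance are assumptions standing in for the sample project's real types): a single worker thread serializes all balance updates, and each update runs in its own transaction.
import java.math.BigDecimal;

import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

@Service
public class BalanceUpdateService {

    private final ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    private final TransactionTemplate transactionTemplate;
    private final AccountRepository accountRepository;

    public BalanceUpdateService(PlatformTransactionManager transactionManager,
                                AccountRepository accountRepository) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);
        this.accountRepository = accountRepository;
        // A single worker thread means balance updates are applied one at a time.
        executor.setCorePoolSize(1);
        executor.setMaxPoolSize(1);
        executor.initialize();
    }

    public void submitWithdrawal(Integer accountId, BigDecimal amount) {
        // Each update runs on the worker thread inside its own transaction, so two
        // updates can never touch the same Account row concurrently.
        executor.execute(() -> transactionTemplate.execute(status -> {
            Account account = accountRepository.findOne(accountId);
            account.setBalance(account.getBalance().subtract(amount));
            return null;
        }));
    }
}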
Between the find and the lock, the Account object may already have been modified by another transaction.
You need to do it in one statement:
em.find(Account.class, id, LockModeType.PESSIMISTIC_WRITE)
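Applied to the repository method above, that would look something like this (a sketch assuming the same em field as in the question's snippet):
@Override
public Account getLockedAccount(Integer id) {
    // Load and lock in a single operation, so no other transaction can modify the row
    // between reading the entity and acquiring the pessimistic write lock.
    return em.find(Account.class, id, LockModeType.PESSIMISTIC_WRITE);
}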
I want to reuse an existing, transactional, paginated service class, which retrieves items from a database using JPA, as a reader inside a Spring Batch job. I want to do that instead of using the JpaPagingItemReader directly, basically because the JPA query is more complex to build and the service already provides this functionality.
My question is: what should I take into account when developing the Spring Batch adapter over this service? Although the reference documentation http://docs.spring.io/spring-batch/trunk/reference/html/readersAndWriters.html#pagingItemReaders has a section on reusing existing services, it doesn't say anything about the constraints, if there are any, of using such a transactional service.
Now, I looked at the JpaPagingItemReader as an example for building the reader, and I came up with a couple of questions I couldn't find answers for, neither in the documentation nor on Stack Overflow, although this post https://stackoverflow.com/a/26549831/4473261 helped.
The first thing I noticed is that the JpaPagingItemReader uses a new transaction for reading a page of data. The post above says that this new transaction is needed "so that features like retry and skip can be correctly performed". I also found this article on the matter, https://blog.codecentric.de/en/2012/03/transactions-in-spring-batch-part-3-skip-and-retry/, which says that "when a skippable exception occurs during reading, we just increase the skip count and keep the exception for a later call on the onSkipInRead method of the SkipListener, if configured. There's no rollback". So I assume the reader has to read the records in a new transaction, so that if the transaction started when the processing of the chunk began is rolled back, the reader is not affected. I am wondering whether this is true, and whether in that case my adapter should create a new transaction, invoke the service inside that transaction, and then commit the transaction, similarly to how the JpaPagingItemReader does it. If that is true, though, I wonder why the framework doesn't provide a template that creates the transaction, delegates the actual call to retrieve the data to the service, and then commits the transaction.
Greetings,
Cristi
From a reader perspective, there really isn't much to be concerned about. You can see in our JmsItemReader, which obviously works with a transactional store, that we don't take any additional precautions within the ItemReader itself.
What really matters is how you configure your step. When configuring your step, you'll need to mark the reader as transactional so that Spring Batch handles rollback correctly. When Spring Batch reads items in a fault-tolerant step, the default behavior is to buffer them so that they won't be re-read on failure (retry, skip, etc.). However, since items read from a transactional store are tied to the transaction (and are therefore reset when a rollback occurs), you need to tell Spring Batch not to buffer the items as they are read.
To mark the ItemReader as transactional, you'll set the not-quite-well-named flag is-reader-transactional-queue to true. You can read more about configuring steps and transactions in the documentation here: http://docs.spring.io/spring-batch/trunk/reference/html/configureStep.html
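The flag above is the XML attribute; if you configure the step with the Java builders, the equivalent (as far as I recall) is the readerIsTransactionalQueue() method on the step builder. A rough sketch, where MyItem and the injected reader/writer beans are placeholders for your own types:
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StepConfiguration {

    @Bean
    public Step serviceBackedStep(StepBuilderFactory stepBuilderFactory,
                                  ItemReader<MyItem> serviceAdapterReader,
                                  ItemWriter<MyItem> writer) {
        return stepBuilderFactory.get("serviceBackedStep")
                .<MyItem, MyItem>chunk(10)
                .reader(serviceAdapterReader)
                // Tell Spring Batch not to buffer read items, since they come from a
                // transactional store (equivalent of is-reader-transactional-queue="true").
                .readerIsTransactionalQueue()
                .writer(writer)
                .faultTolerant()
                .skip(Exception.class)
                .skipLimit(5)
                .build();
    }
}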
Say that I have a User table in my read database (using SQL Server). In a regular read/write database I can put a unique index on the table to make sure that two users aren't added to the table with the same email address.
So if I try to add a user with an email address that already exists in my table for a different user, SQL Server will throw an exception back.
In CQRS I can't do that, since if I decouple the write to my read database from the domain model by putting it on an asynchronous queue, I won't get the exception thrown back to me. I will return "OK" to the UI and the user will think that he was added to the database, when in fact he will never be added to the read database.
I can do a search in the read database to check whether there is already a user with that email address, and if there is one, throw an exception back to the UI. But if two users press the save button at the same time, I will do two checks against the database, see that there isn't any user with that email address, send back that it's okay, put the commands on my queue, and later one of them will fail (by hitting the unique index).
Am I supposed to load all users from my event source (it's a SQL Server) and then do the check on that collection, to see whether a user already has this email address? That sounds a bit crazy to me...
How have you people solved it?
The only way I can see is to not use an asynchronous queue but a synchronous one, but that will affect performance really badly, especially when you have many "read storages" to write to...
Need some help here...
Searching for CQRS Set Based Validation will give you solutions to this issue.
Greg Young posted about the business impact of embracing eventual consistency http://codebetter.com/gregyoung/2010/08/12/eventual-consistency-and-set-validation/
Jérémie Chassaing posted about discovering missing aggregate roots in the domain http://thinkbeforecoding.com/post/2009/10/28/Uniqueness-validation-in-CQRS-Architecture
Related stack overflow questions:
How to handle set based consistency validation in CQRS?
CQRS Validation & uniqueness
Hi,
I am trying to insert a batch of records at a time. When any of the records fails to insert, I need to trap that record, log it to my failed-record maintenance table, and then the insert should continue. Kindly help on how to do this.
If you are using a Spring or EJB container, there is a simple trick which works very well: provide a LogService with a logWarning(String message) method. The method must be annotated/configured with the REQUIRES_NEW transaction setting.
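A minimal Spring sketch of that idea; the JdbcTemplate-based insert and the failed_record_log table are assumptions for illustration:
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class LogService {

    private final JdbcTemplate jdbcTemplate;

    public LogService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // REQUIRES_NEW suspends the caller's transaction and commits the log entry in a
    // transaction of its own, so the entry survives even if the surrounding batch
    // insert is rolled back.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void logWarning(String message) {
        jdbcTemplate.update("insert into failed_record_log (message) values (?)", message);
    }
}
Note that logWarning has to be called through the Spring proxy (that is, from another bean), otherwise the REQUIRES_NEW setting is not applied.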
If you are not using such a container, you'll have to simulate it using API calls: open a separate connection for the logging, begin a transaction when you enter the logging method, and commit it before leaving.
When not using transactions for the insert, there is actually nothing special you need to do, as by default most databases run in autocommit mode and commit after every statement.
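For completeness, here is a plain JDBC sketch of that autocommit-per-statement approach; the table and column names (target_table, failed_record_log) are made up for illustration:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class RecordInserter {

    public void insertAll(Connection connection, List<String> records) throws SQLException {
        connection.setAutoCommit(true); // every statement commits on its own
        try (PreparedStatement insert = connection.prepareStatement(
                     "insert into target_table (data) values (?)");
             PreparedStatement logFailure = connection.prepareStatement(
                     "insert into failed_record_log (data, error) values (?, ?)")) {
            for (String record : records) {
                try {
                    insert.setString(1, record);
                    insert.executeUpdate();
                } catch (SQLException e) {
                    // Trap the failing record, log it, and continue with the next one.
                    logFailure.setString(1, record);
                    logFailure.setString(2, e.getMessage());
                    logFailure.executeUpdate();
                }
            }
        }
    }
}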