I have an aggregate root like this:
public class Room extends AbstractAggregateRoot implements Serializable {

    // lots of code

    @OneToMany(mappedBy = "room", cascade = {CascadeType.MERGE, CascadeType.PERSIST,
            CascadeType.REMOVE})
    private List<Message> messages = new ArrayList<>();

    public void sendMessage(Message message) {
        message.setRoom(this);
        messages.add(message);
        registerEvent(new MessageSavedEvent(message));
    }
}
and I'm creating a message like this:
Room room = roomRepository.findByUid(uid);
room.sendMessage(MessageFactory.createMessage(params.getContent(), sender));
roomRepository.save(room);
The thing is that the event is being published before the message is saved, so in my message-saved handler the id and other things like createdAt are null. I know that I can publish events manually through ApplicationContext, but I would be violating DDD by not publishing events from the aggregate. Is there anything else I can do?
Spring Data doesn't offer out of the box support for this. But JPA has post persist events: http://www.objectdb.com/java/jpa/persistence/event
So I guess what you could do is write a listener for that, triggering the publishing of events.
One challenge is that JPA might replace your instance, and it won't copy over fields that don't get persisted. This might lead to your messages being gone by the time the post-persist event fires. So you might want to use the existing infrastructure to store the messages during the persist somewhere independent of the aggregate root and then publish them afterwards.
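A minimal sketch of what such a listener could look like, assuming the event is published through Spring's ApplicationEventPublisher handed in via a static setter at startup (the listener class name and that wiring are made up, not something Spring Data provides):

import javax.persistence.PostPersist;

import org.springframework.context.ApplicationEventPublisher;

// Hypothetical listener; register it on Message via @EntityListeners(MessagePostPersistListener.class).
public class MessagePostPersistListener {

    // JPA instantiates entity listeners itself, so the Spring-managed publisher
    // has to be handed in statically by some configuration bean at startup.
    private static ApplicationEventPublisher publisher;

    public static void setPublisher(ApplicationEventPublisher eventPublisher) {
        publisher = eventPublisher;
    }

    @PostPersist
    public void messagePersisted(Message message) {
        // The insert has happened at this point, so id and createdAt are populated.
        if (publisher != null) {
            publisher.publishEvent(new MessageSavedEvent(message));
        }
    }
}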
I have a question regarding Spring Data Mongo and Mongo Transactions.
I have successfully implemented transactions, and have verified that commit and rollback work as expected using the Spring @Transactional annotation.
However, I am having a hard time getting the transactions to work the way I would expect in the Spring Data environment.
Spring Data does Mongo -> Java Object mapping. So, the typical pattern for updating something is to fetch it from the database, make modifications, then save it back to the database. Prior to implementing transactions, we have been using Spring's optimistic locking to account for the possibility of updates happening to a record between the fetch and the update.
I was hoping that I would be able to not include the optimistic locking infrastructure for all of my updates once we were able to use Transactions. So, I was hoping that, in the context of a transaction, the fetch would create a lock, so that I could then do my updates and save, and I would be isolated so that no one could get in and make changes like previously.
However, based on what I have seen, the fetch does not create any kind of lock, so nothing prevents any other connection from updating the record, which means it appears that I have to maintain all of my optimistic locking code despite having native mongodb transaction support.
I know I could use MongoDB findAndModify methods to do my updates, and that would not allow interim modifications to occur, but that is contrary to the standard pattern of Spring Data, which loads the data into a Java object. So, rather than just being able to manipulate Java objects, I would have to either sprinkle Mongo-specific code throughout the app or create repository methods for every particular type of update I want to make.
Does anyone have any suggestions on how to handle this situation cleanly while maintaining the Spring Data paradigm of just using Java Objects?
Thanks in advance!
I was unable to find any way to do a 'read' lock within a Spring/MongoDB transaction.
However, in order to be able to continue using the following pattern:
fetch record
make changes
save record
I ended up creating a method which does a findAndModify in order to 'lock' a record during fetch, then I can make the changes and do the save, and it all happens in the same transaction. If another process/thread attempts to update a 'locked' record during the transaction, it is blocked until my transaction completes.
For the lockForUpdate method, I leveraged the version field that Spring already uses for optimistic locking, simply because it is convenient and can easily be modified for a simple lock operation.
I also added my implementation to a Base Repository implementation to enable 'lockForUpdate' on all repositories.
This is the gist of my solution with a bit of domain specific complexity removed:
import static org.springframework.data.mongodb.core.query.Criteria.where;
import static org.springframework.data.mongodb.core.query.Query.query;

import java.io.Serializable;

import org.springframework.data.mongodb.core.FindAndModifyOptions;
import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.data.mongodb.core.query.Update;
import org.springframework.data.mongodb.repository.query.MongoEntityInformation;
import org.springframework.data.mongodb.repository.support.SimpleMongoRepository;

public class BaseRepositoryImpl<T, ID extends Serializable> extends SimpleMongoRepository<T, ID>
        implements BaseRepository<T, ID> {

    private final MongoEntityInformation<T, ID> entityInformation;
    private final MongoOperations mongoOperations;

    public BaseRepositoryImpl(MongoEntityInformation<T, ID> metadata, MongoOperations mongoOperations) {
        super(metadata, mongoOperations);
        this.entityInformation = metadata;
        this.mongoOperations = mongoOperations;
    }

    public T lockForUpdate(ID id) {
        // Verify the class has a version field before trying to increment it to lock the record
        try {
            getEntityClass().getMethod("getVersion");
        } catch (NoSuchMethodException e) {
            throw new InvalidConfigurationException("Unable to lock record without a version field", e);
        }

        return mongoOperations.findAndModify(query(where("_id").is(id)),
                new Update().inc("version", 1L), new FindAndModifyOptions().returnNew(true), getEntityClass());
    }

    private Class<T> getEntityClass() {
        return entityInformation.getJavaType();
    }
}
Then you can make calls along these lines when in the context of a Transaction:
Record record = recordRepository.lockForUpdate(recordId);
...make changes to record...
recordRepository.save(record);
I have a JPA entity class that is also an Elasticsearch document. The environment is a Spring Boot application using Spring Data JPA and Spring Data Elasticsearch.
@Entity
@Document(indexname...etc)
@EntityListeners(MyJpaEntityListener.class)
public class MyEntity {
    //ID, constructor and stuff following here
}
When an instance of this entity gets created, updated or deleted, it gets reindexed to Elasticsearch. This is currently achieved with a JPA EntityListener that reacts on PostPersist, PostUpdate and PostRemove events.
public class MyJpaEntityListener {

    @PostPersist
    @PostUpdate
    public void postPersistOrUpdate(MyEntity entity) {
        // Elasticsearch indexing code goes here
    }

    @PostRemove
    public void postRemove(MyEntity entity) {
        // Elasticsearch delete code goes here
    }
}
That's all working fine at the moment when a single entity or a few entities get modified during a single transaction: each modification triggers a separate index operation. But if a lot of entities get modified inside a transaction, it gets slow.
I would like to bulk index all entities that got modified at the end of a transaction (or after the commit). I took a look at TransactionalEventListeners, AOP and the TransactionSynchronizationManager but wasn't able to come up with a good setup so far.
How can I collect all modified entities per transaction in an elegant way, without doing it by hand in every service method myself?
And how can I trigger a bulk index at the end of a transaction with the entities collected during that transaction?
Thanks for your time and help!
One different and, in my opinion, elegant approach, as you don't mix Elasticsearch-related code into your services and entities, is to use Spring aspects with @AfterReturning on the transactional methods in the service layer.
The pointcut expression can be adjusted to catch all the service methods you want.
@Order(1) guarantees that this code will run after the transaction commit.
The code below is just a sample...you have to adapt it to work with your project.
@Aspect
@Component
@Order(1)
public class StoreDataToElasticAspect {

    @Autowired
    private SampleElasticsearchRepository elasticsearchRepository;

    @AfterReturning(pointcut = "execution(* com.example.DatabaseService.bulkInsert(..))")
    public void synonymsInserted(JoinPoint joinPoint) {
        Object[] args = joinPoint.getArgs();
        // create elasticsearch documents from the method params.
        // can also inject database services if more information is needed for the documents.
        List<String> ids = (List) args[0];
        // create batch from ids
        elasticsearchRepository.save(batch);
    }
}
And here is an example with a logging aspect.
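A minimal sketch of what such a logging aspect might look like (the class name and the package in the pointcut are placeholders, not part of the original example):

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class ServiceLoggingAspect {

    private static final Logger log = LoggerFactory.getLogger(ServiceLoggingAspect.class);

    // Logs every successful service-layer call; the pointcut package is a placeholder.
    @AfterReturning(pointcut = "execution(* com.example.DatabaseService.*(..))", returning = "result")
    public void logSuccessfulCall(JoinPoint joinPoint, Object result) {
        log.info("{} completed, returned: {}", joinPoint.getSignature().toShortString(), result);
    }
}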
I have a Java EE application with JPA implemented using Eclipselink. I have implemented a basic user login system using the ExternalContext session map. However it seems that often a session becomes out of sync with the database.
The basic process is
1. User A Creates a BidOrder entity.
2. User B creates an AskOrder entity.
3. A monitor checks that the two orders match and, if so, creates an OrderBook entity
4. The changes are pushed to all users (using PrimeFaces 5.3 Push)
When I check the prices I use this method in my SessionScoped backing bean for my main view:
public void findLatestPrices()
{
    logger.log(Level.INFO, "findLatestPrices with user {0} ", user.getUserId());
    findAllOrderBooks();
    latestPricesId = new ArrayList<String>();
    setLatestPricesResults(request.findLatestPrices());
    for (Iterator<OrderBook> it = orderBookResults.iterator(); it.hasNext();)
    {
        OrderBook orderBook = it.next();
        logger.log(Level.INFO, "Found {0} orderbook price", orderBook.getPrice());
    }
    logger.log(Level.INFO, "End of findLatestPrices with user {0} ", user.getUserId());
}
This calls my RequestScoped Stateful ejb:
public List<OrderBook> findLatestPrices() {
    List<OrderBook> orderBooks;
    List<OrderBook> orderBooksFiltered;
    Map<Member, OrderBook> map = new LinkedHashMap<Member, OrderBook>();
    try {
        orderBooks = em.createNamedQuery("findAllOrderBooks")
                .setHint("javax.persistence.cache.storeMode", "REFRESH")
                .getResultList();
        for (Iterator<OrderBook> it = orderBooks.iterator(); it.hasNext();) {
            OrderBook orderBook = it.next();
            Member member = orderBook.getBidOrderId().getMember();
            map.put(member, orderBook);
            logger.log(Level.INFO, "findLatestPrices orderbook price : {0}",
                    orderBook.getPrice());
            logger.log(Level.INFO, "findLatestPrices orderbook bidorder member : {0}",
                    orderBook.getBidOrderId().getMember().getMemberId());
            logger.log(Level.INFO, "findLatestPrices orderbook lastupdate : {0}",
                    orderBook.getLastUpdate());
        }
    ...}
I create the EntityManager in the above bean in the following way:
#PersistenceContext
private EntityManager em;
From the logging I can see that sessions return data that is out of sync with the database, i.e. single results when I'd expect two etc. As you can see I've tried setHint to refresh the cache. I've also tried #Cacheable(false) on my OrderBook entity and #Cache(refreshAlways=true) but to no avail.
I'm sending a push event in the #PostPersist of the entity that is created (OrderBook). The javascript event handler in my xhtml page then calls the following remotecommand:
<p:remoteCommand name="updateWidget"
autoRun="false"
actionListener="#{parliamentManager.findLatestPrices}"
update="resultDisplay"
onstart="onStart()"
oncomplete="onComplete()"
onsuccess="onSuccess()"
onerror="onError()">
<f:actionListener binding="#{parliamentManager.findTraders()}" />
<f:actionListener binding="# {parliamentManager.findPortfolios()}" />
</p:remoteCommand>
It seems that often the results of findLatestPrices do not include the latest OrderBook entities for all sessions. Is it possible that an entity is not persisted immediately on a call to @PostPersist, working on the theory that the push is sent to some sessions before the entity is fully persisted and reflected by JPA?
To demonstrate I added a simple command button to call updateWidget() manually. If the session is not updated and I click the button it always updates to the latest data.
Thanks,
Zobbo
There is no locking between sessions, so I'm not quite sure what you mean. Optimistic locking to prevent overwriting stale data is recommended in most JPA provider documentation.
You haven't shown or specified how you are obtaining the EntityManager, or how long it lives, but there are two levels of caching. The first is the EntityManager itself, which is used to track changes to managed entities and maintain their identity. JPA allows but doesn't mandate a second level of caching, shared at the EntityManagerFactory level. This second-level cache is what javax.persistence.cache.storeMode is aimed at: it controls what happens when entities are pulled from the shared cache. If the entities are already loaded in the first-level cache, because this is meant to represent a transactional scope, they are returned as-is, preserving any unsynchronized changes the application might have made and that the JPA provider is required to track.
The only way JPA gives you to force a refresh of a managed entity is by calling em.refresh(), though it can also be accomplished by calling em.clear() and then re-reading the entity using the javax.persistence.cache.storeMode refresh hint. EclipseLink also has an "eclipselink.refresh" query hint that can be used to force the query to refresh the managed entity instance.
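Applied to the OrderBook query from the question, a hedged sketch of both options (em and the named query come from the question; the helper class and method names are made up):

import java.util.List;

import javax.persistence.EntityManager;

public class OrderBookRefreshExamples {

    // Re-read a single managed entity's state from the database.
    public void refresh(EntityManager em, OrderBook orderBook) {
        em.refresh(orderBook);
    }

    // Clear the persistence context first so already-managed (possibly stale)
    // instances are not returned as-is, then bypass the shared cache via the hint.
    public List<OrderBook> findAllFresh(EntityManager em) {
        em.clear();
        return em.createNamedQuery("findAllOrderBooks", OrderBook.class)
                .setHint("javax.persistence.cache.storeMode", "REFRESH")
                // EclipseLink-specific alternative: .setHint("eclipselink.refresh", true)
                .getResultList();
    }
}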
In a Spring Boot application, I have an entity Task with a state that changes during execution:
@Entity
public class Task {

    public enum State {
        PENDING,
        RUNNING,
        DONE
    }

    @Id @GeneratedValue
    private long id;

    private String name;

    private State state = State.PENDING;

    // Setters omitted

    public void setState(State state) {
        this.state = state; // THIS SHOULD BE WRITTEN TO THE DATABASE
    }

    public void start() {
        this.setState(State.RUNNING);

        // do useful stuff
        try { Thread.sleep(2000); } catch (InterruptedException e) {}

        this.setState(State.DONE);
    }
}
If state changes, the object should be saved in the database. I'm using this Spring Data interface as repository:
public interface TaskRepository extends CrudRepository<Task,Long> {}
And this code to create and start a Task:
Task t1 = new Task("Task 1");
Task persisted = taskRepository.save(t1);
persisted.start();
From my understanding, persisted is now attached to a persistence session, and if the object changes, these changes should be stored in the database. But this is not happening: when reloading it, the state is PENDING.
Any ideas what I'm doing wrong here?
tl;dr
Attaching an instance to a persistence context does not mean every change of the state of the object gets persisted directly. Change detection only occurs on certain events during the lifecycle of the persistence context.
Details
You seem to have misunderstood the way change detection works. A very central concept of JPA is the so-called persistence context. It is basically an implementation of the unit-of-work pattern. You can add entities to it in two ways: by loading them from the database (executing a query or issuing an EntityManager.find(…)) or by actively adding them to the persistence context. This is what the call to the save(…) method effectively does.
An important point to realize here is that "adding an entity to the persistence context" does not have to be equal to "stored in the database". The persistence provider is free to postpone the database interaction as long as it thinks is reasonable. Providers usually do that to be able to batch up modifying operations on the data. In a lot of cases however, an initial save(…) (which translates to an EntityManager.persist(…)) will be executed directly, e.g. if you're using auto id increment.
That said, the entity has now become a managed entity. That means the persistence context is aware of it and will persist the changes made to the entity transparently, if events occur that require that to happen. The two most important ones are the following:
The persistence context gets closed. In Spring environments the lifecycle of the persistence context is usually bound to a transaction. In your particular example, the repositories have a default transaction (and thus persistence context) boundary. If you need the entity to stay managed around that, you need to extend the transaction lifecycle (usually by introducing a service layer with @Transactional annotations, as sketched after this list). In web applications we often see the Open EntityManager In View pattern, which is basically a request-bound lifecycle.
The persistence context is flushed. This can either happen manually (by calling EntityManager.flush()) or transparently. E.g. if the persistence provider needs to issue a query, it will usually flush the persistence context to make sure currently pending changes can be found by the query. Imagine you loaded a user, changed his address to a new place and then issued a query to find users by their addresses. The provider will be smart enough to flush the address change first and execute the query afterwards.
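To make that concrete for the Task example from the question, a minimal sketch of such a service layer (the class and method names are made up; the Task constructor taking a name is the one used in the question):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TaskService {

    private final TaskRepository taskRepository;

    public TaskService(TaskRepository taskRepository) {
        this.taskRepository = taskRepository;
    }

    @Transactional
    public Task createAndStart(String name) {
        // The transaction (and thus the persistence context) now spans the whole
        // method, so the entity stays managed and the state changes made inside
        // start() are flushed to the database when the transaction commits.
        Task persisted = taskRepository.save(new Task(name));
        persisted.start();
        return persisted;
    }
}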
I'm new to the whole JPA thing so I have multiple questions about the best way to handle JPA merge and persist.
I have a user object which should be updated (some values like date and name). Do I have to merge the passed object first, or is it safer to find the object again?
Currently my code for updating a user looks like this:
public void updateUserName(User user, String name) {
    // maybe first merge it?
    user.setName(name);
    user.setChangeDate(new Date());
    em.merge(user);
}
How can I be sure that the user has not been manipulated before the update method is called? Is it safer to do something like this:
public void updateUserName(int userId, String name) {
    User user = em.getReference(User.class, userId);
    user.setName(name);
    user.setChangeDate(new Date());
    em.merge(user);
}
Maybe other solutions? I watched multiple videos and saw many examples but they all were different and nobody explained what the best practice is.
What is the best approach to add children to relationships? For example, my user object has a connection to multiple groups. Should I call the JPA handler for users and just add the new group to the user's group list, or should I create the group in a separate group handler with persist and add it manually to my user object?
Hope somebody here has a clue ;)
It depends on what you want to achieve and how much information you have about the origin of the object you're trying to merge.
Firstly, it doesn't matter whether you invoke em.merge(user) in the first line of your method or at the end. If you use JTA and CMT, your entity will be updated when the method invocation finishes. The only difference is that if you invoke em.merge(user) before changing the user, you should use the returned instance instead of your parameter, so either it is:
public void updateUserName(User user, String name) {
    User managedUser = em.merge(user);
    managedUser.setName(name);
    managedUser.setChangeDate(new Date());
    // no need for an additional em.merge(-) here.
    // JTA automatically commits the transaction for this business method.
}
or
public void updateUserName(User user, String name) {
    user.setName(name);
    user.setChangeDate(new Date());
    em.merge(user);
    // JTA automatically commits the transaction for this business method.
}
Now, about updating the entity.
If you just want to update some well-defined fields in your entity, use the second approach as it's safer. You can't be sure that a client of your method hasn't modified some other fields of your entity; em.merge(-) would update them as well, which might not be what you wanted to achieve.
On the other hand - if you want to accept all changes made by user and just override / add some properties like changeDate in your example, the first approach is also fine (merge whole entity passed to the business method.) It really depends on your use-case.
I guess it depends on your cascading settings. If you want to automatically persist / merge all Groups whenever the User entity is changed, it's safe to just add the group to the User's collection (something like User#addGroup(Group g) { groups.add(g); }). If you don't want cascading, you can always create your own methods that propagate the change to the other side of the relationship, something like a User#addGroup(Group g) that automatically invokes g.addUser(this); (see the sketch below).
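A minimal sketch of such helper methods, assuming a bidirectional many-to-many between User and Group (the field and accessor names are illustrative, not from the question):

import java.util.HashSet;
import java.util.Set;

import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.ManyToMany;

@Entity
public class User {

    @ManyToMany(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    private Set<Group> groups = new HashSet<>();

    // JPA only maintains the owning side for you, so these helpers keep both
    // sides of the relationship consistent in memory.
    public void addGroup(Group group) {
        groups.add(group);
        group.getUsers().add(this);
    }

    public void removeGroup(Group group) {
        groups.remove(group);
        group.getUsers().remove(this);
    }
}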
Question 1
The merge method must be called on a detached entity.
The merge method will return the merged object attached to the entityManager.
What does it mean ?
An entity is detached as soon as the entityManager you used to fetch it is closed (i.e. most of the time, because you fetched it in a previous transaction).
In your second code sample, user is attached (because you just fetched it), so calling merge is useless. (BTW: it is not getReference but find.)
In your first sample, we don't know the state of user (detached entity or not?). If it is detached, it makes sense to call merge, but be careful: merge doesn't modify the object passed as an argument. So here is my version of your first sample:
/**
 * @param user : a detached entity
 * @return : the attached updated entity
 **/
public User updateUserName(User user, String name) {
    user.setName(name);
    user.setChangeDate(new Date());
    return em.merge(user);
}
Question 2
Maybe a code sample explaining what you mean by a JPA handler could help us understand your concern. Anyway, I'll try to help you.
If you have a persistent user and you need to create a new group and associate it with the persistent user:
User user = em.find(User.class, userId);
Group group = new Group();
...
em.persist(group);
user.addToGroups(group);
group.addToUsers(user); // JPA won't update the other side of the relationship,
                        // so you have to do it by hand OR be aware of that
If you have a persistent user and a persistent group and you need to associate them:
User user = em.find(User.class, userId);
Group group = em.find(Group.class, groupId);
...
user.addToGroups(group);
group.addToUsers(user);
General considerations
The best practice regarding all of this really depends on how you manage the transactions (and so the lifecycle of the entityManager) versus the lifecycle of your objects.
Most of the time, an entityManager is a really short-lived object. On the other hand, your business objects may live longer, and so you will have to call merge (being careful about the fact that merge doesn't modify the object passed as an argument!).
You can decide to fetch and modify your business objects in the same transaction (i.e. with the same entityManager): it means a lot more database access, and this strategy must generally be combined with a second-level cache for performance reasons. But in this case you won't have to call merge (see the sketch below).
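For illustration, a sketch of that fetch-and-modify-in-one-transaction variant, where no merge is needed because the entity never leaves the persistence context (em is the injected EntityManager from the question; transaction demarcation is assumed to be handled by the container):

public void updateUserName(int userId, String name) {
    // Loaded and modified with the same entityManager inside one transaction:
    // the entity stays managed, so the changes are flushed on commit
    // without any call to merge.
    User user = em.find(User.class, userId);
    user.setName(name);
    user.setChangeDate(new Date());
}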
I hope this helps.