Hi, I have created an EventSubscriber in TypeORM to listen to a specific entity and its events at the database level (quite straightforward).
But this subscriber is being triggered by any CRUD operation on any table, or is maybe fired due to indirect relations with the targeted entity (hopefully not), without the targeted entity/table being CRUD-ed at all.
This is how my subscriber looks:
@EventSubscriber()
export class ImpactViewSubscriber
  implements EntitySubscriberInterface<TargetedEntity>
{
  logger: Logger = new Logger(ImpactViewSubscriber.name);

  listenTo(): any {
    return TargetedEntity;
  }

  afterTransactionCommit(event: TransactionCommitEvent): Promise<any> | void {
    this.logger.log(`Event subscriber fired...`);
    return event.queryRunner.query(`some query...`);
  }
}
And it is (apparently) properly registered in typeorm.config.ts:
....
subscribers: [join(__dirname, '..', '**/*.subscriber.{ts,js}')],
So for some reason the logic inside afterTransactionCommit() is triggered on any interaction with any table, and also when I first start the app (which is annoying).
What am I doing wrong? I just want to fire the logic when a CRUD operation is done to my target entity, ideally after a transaction, as my target entity will only receive bulk INSERTs or DELETEs.
Any idea where the error is?
Thanks in advance
UPDATE
I also tested using afterInsert() and afterRemove(), which do not make the logic trigger on events of other tables, but they are triggered once for each row inserted into the target table. And since I only have bulk operations, this is not useful.
My use cases are bulk inserts into the table and bulk deletes by cascade. I am making sure those happen in a single transaction. Any ideas as to what I can do using TypeORM, avoiding having to manually create specific DB triggers or similar?
Thanks!
I know this is quite an old post, but have you tried removing the : any return type from listenTo()?
listenTo() {
  return TargetedEntity;
}
Related
I have a question regarding Spring Data Mongo and Mongo Transactions.
I have successfully implemented transactions, and have verified that commit and rollback work as expected using Spring's @Transactional annotation.
However, I am having a hard time getting the transactions to work the way I would expect in the Spring Data environment.
Spring Data does Mongo -> Java Object mapping. So, the typical pattern for updating something is to fetch it from the database, make modifications, then save it back to the database. Prior to implementing transactions, we had been using Spring's optimistic locking to account for the possibility of updates happening to a record between the fetch and the update.
I was hoping that I would be able to not include the optimistic locking infrastructure for all of my updates once we were able to use Transactions. So, I was hoping that, in the context of a transaction, the fetch would create a lock, so that I could then do my updates and save, and I would be isolated so that no one could get in and make changes like previously.
However, based on what I have seen, the fetch does not create any kind of lock, so nothing prevents any other connection from updating the record, which means it appears that I have to maintain all of my optimistic locking code despite having native mongodb transaction support.
I know I could use mongodb findAndUpdate methods to do my updates and that would not allow interim modifications from occurring, but that is contrary to the standard pattern of Spring Data which loads the data into a Java Object. So, rather than just being able to manipulate Java Objects, I would have to either sprinkle mongo specific code throughout the app, or create Repository methods for every particular type of update I want to make.
Does anyone have any suggestions on how to handle this situation cleanly while maintaining the Spring Data paradigm of just using Java Objects?
Thanks in advance!
I was unable to find any way to do a 'read' lock within a Spring/MongoDB transaction.
However, in order to be able to continue using the following pattern:
fetch record
make changes
save record
I ended up creating a method which does a findAndModify in order to 'lock' a record during fetch; then I can make the changes and do the save, and it all happens in the same transaction. If another process/thread attempts to update a 'locked' record during the transaction, it is blocked until my transaction completes.
For the lockForUpdate method, I leveraged the version field that Spring already uses for optimistic locking, simply because it is convenient and can easily be repurposed for a simple lock operation.
I also added my implementation to a Base Repository implementation to enable 'lockForUpdate' on all repositories.
This is the gist of my solution with a bit of domain specific complexity removed:
import static org.springframework.data.mongodb.core.query.Criteria.where;
import static org.springframework.data.mongodb.core.query.Query.query;

import java.io.Serializable;

import org.springframework.data.mongodb.core.FindAndModifyOptions;
import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.data.mongodb.core.query.Update;
import org.springframework.data.mongodb.repository.query.MongoEntityInformation;
import org.springframework.data.mongodb.repository.support.SimpleMongoRepository;

public class BaseRepositoryImpl<T, ID extends Serializable> extends SimpleMongoRepository<T, ID>
        implements BaseRepository<T, ID> {

    private final MongoEntityInformation<T, ID> entityInformation;
    private final MongoOperations mongoOperations;

    public BaseRepositoryImpl(MongoEntityInformation<T, ID> metadata, MongoOperations mongoOperations) {
        super(metadata, mongoOperations);
        this.entityInformation = metadata;
        this.mongoOperations = mongoOperations;
    }

    public T lockForUpdate(ID id) {
        // Verify the class has a version field before trying to increment it to lock a record
        try {
            getEntityClass().getMethod("getVersion");
        } catch (NoSuchMethodException e) {
            throw new InvalidConfigurationException("Unable to lock record without a version field", e);
        }

        return mongoOperations.findAndModify(query(where("_id").is(id)),
                new Update().inc("version", 1L), new FindAndModifyOptions().returnNew(true), getEntityClass());
    }

    private Class<T> getEntityClass() {
        return entityInformation.getJavaType();
    }
}
Then you can make calls along these lines when in the context of a Transaction:
Record record = recordRepository.lockForUpdate(recordId);
...make changes to record...
recordRepository.save(record);
I'm trying to figure out how to use Entity Framework Core 2.1's new ChangeTracker.Tracked event to hook into read queries. Unfortunately, I'm not able to understand how to implement this.
Since it's a new feature, it's not possible to find any articles on it, and the official Microsoft docs site does not provide any help or sample code.
My scenario is pretty simple. I have a table with the following columns:
id, customerId, metadata.
When a user queries this table, I want to intercept the query result set and, for every row, compare the customerId with the currently logged-in user.
I'm hoping that the ChangeTracker.Tracked event can help me intercept the returned result set. I'm looking for some sample code on how to achieve the above.
Here is a sample usage of the ChangeTracker.Tracked event.
Add the following method to your context (requires using Microsoft.EntityFrameworkCore.ChangeTracking;):
void OnEntityTracked(object sender, EntityTrackedEventArgs e)
{
    if (e.FromQuery && e.Entry.Entity is YourEntityClass)
    {
        var entity = (YourEntityClass)e.Entry.Entity;
        bool isCurrentUser = entity.customerId == CurrentUserId;
        // do something (not sure what)
    }
}
and attach it to the ChangeTracker.Tracked event in your context constructor:
ChangeTracker.Tracked += OnEntityTracked;
As described in the Tracked event documentation:
An event fired when an entity is tracked by the context, either because it was returned from a tracking query, or because it was attached or added to the context.
Some things to mention.
The event is not fired for no-tracking queries
The event is fired for each entity instance created by the tracking query result set and not already tracked by the context
The bool FromQuery property of the event args is used to distinguish if the event is fired from the tracking query materialization process or via user code (Attach, Add etc. calls).
The EntityEntry Entry property of the event args gives you access to the entity instance and other related information (basically the same information that you get when calling the non-generic DbContext.Entry method)
I am deleting rows in a batch as follows (in an EJB).
int i = 0;
List<Category> list = // Sent by a client, which is JSF in this case.
for (Category category : list) {
    if (++i % 49 == 0) {
        i = 0;
        entityManager.flush();
    }
    entityManager.remove(entityManager.contains(category) ? category : entityManager.merge(category));
}
Where Category is a JPA entity.
There is a callback that listens to this delete event.
@ApplicationScoped
public class CategoryListener {

    @PostPersist
    @PostUpdate
    @PostRemove
    public void onChange(Category category) {
        // ...
    }
}
This callback method is invoked as many times as the number of rows deleted. For example, the method will be called 10 times if 10 rows are deleted.
Is there a way to invoke the callback method only once at the end of a transaction, i.e. as soon as the EJB method in which this code is executed returns, or at least once per batch, i.e. whenever entityManager.flush() occurs? The former is preferred in this case.
Additional Information :
I am doing some real-time updates using WebSockets, where clients are to be notified when such CRUD operations are performed on a few database tables. It is pointless to send a message to all the associated clients for every single row deleted in a batch; they should rather be notified only once, as soon as a transaction (or at least a batch) ends.
The following JPA 2.1 criteria batch delete approach does not work because it does not operate directly upon entities: no JPA callbacks are triggered by this approach, nor by its JPQL equivalent.
CriteriaBuilder criteriaBuilder = entityManager.getCriteriaBuilder();
CriteriaDelete<Category> criteriaDelete = criteriaBuilder.createCriteriaDelete(Category.class);
Root<Category> root = criteriaDelete.from(entityManager.getMetamodel().entity(Category.class));
criteriaDelete.where(root.in(list));
entityManager.createQuery(criteriaDelete).executeUpdate();
I am using EclipseLink 2.5.2 with JPA 2.1.
Unfortunately JPA entity callbacks are required to be called once for each entity instance they listen on, so you will need to add your own functionality to ensure the listener logic runs only once per batch/transaction. The alternative is to use provider-specific behavior, in this case EclipseLink's session event listeners (https://wiki.eclipse.org/Introduction_to_EclipseLink_Sessions_(ELUG)#Session_Event_Manager_Events), to listen for the postCalculateUnitOfWorkChangeSet event or some other event that fires when you need it.
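As a sketch of the "add your own functionality" option: the entity callback can simply record the affected ids in a transaction-scoped collector, and the EJB method (or an afterCompletion synchronization) fires a single notification when the batch completes. The ChangeCollector class and its method names below are hypothetical, not part of JPA or EclipseLink:

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.function.Consumer;

// Hypothetical helper: @PostPersist/@PostUpdate/@PostRemove callbacks call
// record() instead of notifying clients directly; flush() then sends one
// notification per batch/transaction.
class ChangeCollector {

    private static final ThreadLocal<Set<Object>> CHANGED =
            ThreadLocal.withInitial(LinkedHashSet::new);

    // Invoked once per entity by the JPA callback.
    static void record(Object entityId) {
        CHANGED.get().add(entityId);
    }

    // Invoked once at the end of the batch/transaction; passes the collected
    // ids to the notifier (e.g. a WebSocket broadcast) only if there are any.
    static void flush(Consumer<Set<Object>> notifier) {
        Set<Object> changed = CHANGED.get();
        if (!changed.isEmpty()) {
            notifier.accept(new LinkedHashSet<>(changed));
            changed.clear();
        }
    }
}
```

With something like this, the onChange callback would reduce to ChangeCollector.record(category.getId()), and the EJB method would call ChangeCollector.flush(...) once after the loop, so clients receive a single message per batch regardless of how many rows were removed.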
I have a named query that returns a Collection of entities.
These entities have a @PreUpdate-annotated method on them. This method is invoked during query.getResultList(). Because of this, the entity is changed within the persistence context, which means that upon transaction commit, the entity is written back to the database.
Why is this? The JPA 2.0 specification does not explicitly mention that @PreUpdate should be called by query execution.
The specification says:
The PreUpdate and PostUpdate callbacks occur before and after the database update operations to entity data respectively. These database operations may occur at the time the entity state is updated or they may occur at the time state is flushed to the database (which may be at the end of the transaction).
In this case, calling query.getResultList() triggers an em.flush() so that the query can see changes from the current EntityManager session. em.flush() pushes all pending changes to the database (issues all the UPDATE and INSERT statements). Before each UPDATE is sent via JDBC, the corresponding @PreUpdate hooks are called.
This is just my comment on rzymek's answer with some follow-up code:
I tried to reproduce the problem the OP had, because it sounded like the EntityManager would get flushed every time the query is called. But that's not the case. @PostUpdate methods are only called when there are actual changes being written to the database, as far as I can tell. If you made a change with the EntityManager that is not yet flushed to the DB, query.getResultList() will trigger the flush to the DB, which is the behaviour one should expect.
Place valinorDb = em.find(Place.class, valinorId);

// this should not trigger a PostUpdate and doesn't
// TODO: unit-testify this
em.merge(valinorDb);

valinorDb.setName("Valinor123");
valinorDb.setName("Valinor");
// this shouldn't trigger a PostUpdate because the data is the same as in the
// beginning, and doesn't
em.merge(valinorDb);

{
    // this is done to test the behaviour of PostUpdate because of this:
    // http://stackoverflow.com/questions/12097485/why-does-a-jpa-preupdate-annotated-method-get-called-during-a-query
    //
    // this was tested by hand, but should maybe be changed into a unit test?
    // PostUpdate will only get called when there is an actual change present
    // (at least for Hibernate & EclipseLink), so we should be fine to use
    // PostUpdate for automatically updating our index.
    // this doesn't trigger a flush, as the merge didn't trigger one either
    Place place = (Place) em.createQuery("SELECT a FROM Place a")
            .getResultList().get(0);

    Sorcerer newSorcerer = new Sorcerer();
    newSorcerer.setName("Odalbort the Unknown");
    place.getSorcerers().add(newSorcerer);

    // this WILL trigger a PostUpdate, as the underlying data actually has changed
    place = (Place) em.createQuery("SELECT a FROM Place a")
            .getResultList().get(0);
}
In my case a JPA entity listener (@EntityListeners) called query.getResultList() in its logic (to do some validation) and in effect went into a never-ending loop that invoked the same listener again and again, ending in a StackOverflowError. I used flush mode COMMIT to avoid the flush on query, as below. Maybe it will be helpful for someone.
List l = entityManager.createQuery(query)
        /**
         * to NOT do em.flush() on a query that would trigger
         * the @PreUpdate JPA listener
         */
        .setFlushMode(FlushModeType.COMMIT)
        .getResultList();
Working on a project using Entity Framework (4.3.1.0). I'm trying to figure out how to make my code work as a transaction, but it seems like my model doesn't update after the transaction has failed.
Let me show you:
using (TransactionScope trans = new TransactionScope())
{
    _database.Units.Add(new Unit { ... });
    var a = false;
    if (a)
    {
        trans.Complete();
        Refresh();
    }
}
Refresh();
What I experience is that after the TransactionScope is finished, it doesn't roll back to its previous state. When I run the Refresh method, I iterate over all the items in Units and insert the values into an ObservableCollection, which I display in a WPF window.
This mechanism works when I successfully complete the transaction, but when I run the code above, the grid updates with the newly added Unit, and it does not go away after I run Refresh after the transaction.
I have the feeling I'm doing something fundamentally wrong here :)
Entity Framework does not support transactions for in-memory tracked entities: the "ObjectStateManager" which you see on the ObjectContext is not a transactional resource. The TransactionScope only "applies" to the database operations (queries, updates) done within it, not to in-memory operations such as manipulating the object graph (which is what you do with Add).