After-insert trigger on ContentDocumentLink runs twice for one record. How do I prevent this? - triggers

While iterating over Trigger.new for ContentDocumentLink, I'm trying to filter out some ContentDocumentLink records based on the entity they are linked to (code snippet below). The System.debug shows the same result, but twice, a few hundred milliseconds apart. It is causing my functionality to run twice; how do I prevent this?
if (govAgreementIds != null) {
    for (ContentDocumentLink att : (List<ContentDocumentLink>) Trigger.new) {
        if (govAgreementIds.contains(att.LinkedEntityId)) {
            finalcdId.add(att.ContentDocumentId);
        }
    }
}
System.debug('finalcdId>> ' + finalcdId);
Debug logs:
17:08:07:232 USER_DEBUG [251]|DEBUG|finalcdId>> {0698A000000eg6bQAA}
17:08:08:528 USER_DEBUG [251]|DEBUG|finalcdId>> {0698A000000eg6bQAA}

The same ContentDocumentId can be shared across multiple LinkedEntityIds, and your if condition only checks LinkedEntityId. Since there may be multiple LinkedEntityIds for the same ContentDocumentId, you need to guard against duplicates, as below:
if (govAgreementIds != null) {
    for (ContentDocumentLink att : (List<ContentDocumentLink>) Trigger.new) {
        if (govAgreementIds.contains(att.LinkedEntityId)) {
            if (!finalcdId.contains(att.ContentDocumentId)) {
                finalcdId.add(att.ContentDocumentId);
            }
        }
    }
}
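If finalcdId is (or can be) declared as a Set<Id> rather than a List, the explicit contains() check is unnecessary, since a Set ignores duplicate adds. A minimal sketch, assuming the same govAgreementIds collection as above:
// Sketch only: a Set<Id> deduplicates automatically, so no contains() check is needed.
Set<Id> finalcdId = new Set<Id>();
if (govAgreementIds != null) {
    for (ContentDocumentLink att : (List<ContentDocumentLink>) Trigger.new) {
        if (govAgreementIds.contains(att.LinkedEntityId)) {
            finalcdId.add(att.ContentDocumentId);
        }
    }
}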

This is standard Salesforce behaviour.
The documentation on Triggers and Order of Execution mentions that update triggers can fire twice, once before and once after the execution of workflow rules, if a workflow field update changes the record (see points #4, #8 and #11c).
To work around this, create a simple static Boolean class variable (a "flag") and use it to control and avoid the second execution:
public class HelperClass {
    public static Boolean firstRun = true;
}

trigger affectedTrigger on Account (before delete, after delete, after undelete) {
    if (Trigger.isBefore) {
        if (Trigger.isDelete) {
            if (HelperClass.firstRun) { // check if running for the first time
                Trigger.old[0].addError('Before Account Delete Error');
                HelperClass.firstRun = false; // falsify the flag to denote it has already run
            }
        }
    }
}
Source for the code: Answer #26 from Jitendra's blog
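Adapted to the original ContentDocumentLink case, a rough sketch (hypothetical trigger name; assumes the same HelperClass flag) would be:
// Sketch only: guard the filtering logic so it runs only once per transaction.
trigger ContentDocumentLinkTrigger on ContentDocumentLink (after insert) {
    if (HelperClass.firstRun) {
        HelperClass.firstRun = false; // skip the second invocation in this transaction
        // ... build finalcdId from Trigger.new and run the functionality here ...
    }
}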
Hope this helps. Thanks.

Related

Flutter Future timeouts not always working correctly

Hey, I need some help with how to use timeouts in Flutter correctly. First of all, the main goal:
I want to receive data from my Firebase Realtime Database, but I need to guard this request with a timeout of 15 seconds. After 15 seconds the timeout should throw an exception so that the user's frontend can show a timeout alert.
So I used the simple way of calling timeouts on Future functions.
This function should only check whether an ID exists under some Firebase node.
Inside the class where this function is declared I also have an instance called timeoutControl; this is a class that contains a duration and some reasons for the exceptions.
Future<bool> isUserCheckedIn(String oid, String maybeCheckedInUserIdentifier, String onGateId) async {
  try {
    databaseReference = _firebaseDatabase.ref("Boarding").child(oid).child(onGateId);
    final snapshot = await databaseReference
        .get()
        .timeout(Duration(seconds: timeoutControl.durationForTimeOutInSec),
            onTimeout: () => timeoutControl.onEppTimeoutForTask());
    if (snapshot.hasChild(maybeCheckedInUserIdentifier)) {
      return true;
    } else {
      return false;
    }
  } catch (exception) {
    return false;
  }
}
The timeout class that the timeoutControl instance comes from:
class CustomTimeouts {
  int durationForTimeOutInSec = 15; // How long to try before throwing a timeout exception

  CustomTimeouts();

  // TODO: Implement the exception reasons here later ...
  onEppTimeoutForUpload() {
    throw Exception("Some reason ...");
  }

  onEppTimeoutForTask() {
    throw Exception("Some reason ...");
  }

  onEppTimeoutForDownload() {
    throw Exception("Some reason ...");
  }
}
As you can see, I tried to use the implementation above. It works fine ... but sometimes I have to fight with unexplainable things -_-. Let me try to describe what the problem is in some cases:
Inside the frontend class I make this call:
bool isUserCheckedIn = await service.isUserCheckedIn(placeIdentifier, userId, gateId);
Map<String, dynamic> data = {"gateIdActive": isUserCheckedIn};

/*
 The response here is a custom transaction handler which contains an error or a returned param
 etc., so this isn't relevant for you ...
*/
_gateService.updateGate(placeIdentifier, gateId, data).then((response) {
  if (response.hasError()) {
    setState(() {
      EppDialog.showErrorToast(response.getErrorMessage()); // Shows an error message
      isSendButtonDiabled = false; // Reset the button's state
    });
  } else {
    // Create a gate process here ...
    createGateEntrys(); // <-- If the closure's update was successful we also handle some
                        // other data inside the RTDB for other reasons here ...
  }
});
Important to know: I use the returned boolean value from this function call to update some other data, which is then pushed to another node in the RTDB for other reasons. If that was also successful, the application goes on to update some entries inside the RTDB via createGateEntrys(). That function is called last, is also marked async, and is called from the closure's context without an await statement.
The data inside my Firebase RTDB:
"GateCheckIns" / "4mrithabdaofgnL39238nH" (the place identifier) / "NFdxcfadaies45a" (the gate identifier) / "nHz2mhagadzadzgadHjoeua334": 1 (the key is the ID of a user who is checked in)
On real devices this always works without any problems... but whether it is a real device or the simulator shouldn't be the reason I'm facing this problem. Sometimes, inside the simulator, this function always returns false, no matter whether the current user's identifier is under that child node or not. I realized the timeout is always triggered immediately, after only 1-2 seconds, because the exception was always one of those thrown from my CustomTimeouts class, i.e. the function that throws inside the .timeout(duration, onTimeout: () => ...) call. I couldn't figure it out because, as I said, on real devices I don't face this problem.
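For reference, a minimal sketch (hypothetical function name; assumes the FlutterFire firebase_database plugin) of the same check without a custom onTimeout handler. When onTimeout is omitted, Future.timeout completes with a TimeoutException, which can then be caught and distinguished from other failures:
import 'dart:async';
import 'package:firebase_database/firebase_database.dart';

// Sketch only: let Future.timeout surface a TimeoutException instead of
// supplying an onTimeout callback that throws a custom exception.
Future<bool> isUserCheckedInSketch(DatabaseReference ref, String userId) async {
  try {
    final snapshot = await ref.get().timeout(const Duration(seconds: 15));
    return snapshot.hasChild(userId);
  } on TimeoutException {
    // The database read did not finish within 15 seconds.
    return false;
  } catch (e) {
    // Any other failure (permission denied, network error, ...).
    return false;
  }
}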
I hope I was able to explain the problem; it's a little complicated, I know, but it's important to me that someone can explain what I should pay attention to when using timeouts in this style.
(This is my first question here on Stack Overflow :) )

How can I do an offline batch in Firebase RTDB?

I have reason to believe that some ServerValue.increment() commands are not executing.
In my App, when the user submits, two commands are executed:
Future<void> _submit() async {
  alimentoBloc.descontarAlimento(foodId, quantity);
  salidaAlimentoBloc.crearSalidaAlimento(salidaAlimento);
}
The first command updates the amount of inventory left in the warehouse (using ServerValue.increment)...
Future<bool> descontarAlimento(String foodId, int quantity) async {
  try {
    dbRef.child('inventory/$foodId/quantity')
        .set(ServerValue.increment(-quantity));
  } catch (e) {
    print(e);
  }
  return true;
}
The second command creates a food output record, where it stores the quantity, type of food and other key data.
Future<bool> crearSalidaAlimento(SalidaAlimentoModel salidaAlimento) async {
  try {
    dbRef.child('output')
        .push()
        .set(salidaAlimento.toJson());
  } catch (e) {
    print(e);
  }
  return true;
}
After several reviews, I have noticed that the increment command sometimes does not execute, and then the inventory does not match what it should be.
So I would like to do something similar to a transaction, that is: either both commands execute, or neither of them does.
Is it possible to do a batch of commands in Firebase Realtime without losing the offline functionalities?
You can do a multi-path update to perform both writes transactionally:
var id = dbRef.push().key;
Map<String, dynamic> updates = {
  "inventory/$foodId/quantity": ServerValue.increment(-quantity),
  "output/$id": salidaAlimento.toJson(),
};
dbRef.update(updates);
With the above, either both writes are completed, or neither of them is.
While you're offline, the client will fire local events based on its best guess of the current value on the server (which will be 0 if it has never read the value), and it will then send all pending changes to the server when it reconnects. For a quick test, see https://jsbin.com/wuhuyih/2/edit?js,console
You can't use a transaction while the device is offline.
Transactions need to check the current value in the database, and that is not possible while offline. If you want to make sure they succeed, you need to check whether a connection is available or not.
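For illustration, a minimal sketch (assuming the FlutterFire firebase_database plugin) of watching connectivity via the special .info/connected location before attempting a transaction:
import 'package:firebase_database/firebase_database.dart';

// Sketch only: .info/connected is true while the client has an active
// connection to the Realtime Database backend.
void watchConnectionState() {
  final connectedRef = FirebaseDatabase.instance.ref('.info/connected');
  connectedRef.onValue.listen((event) {
    final connected = event.snapshot.value as bool? ?? false;
    if (connected) {
      // Online: transactions can read the current server value.
    } else {
      // Offline: fall back to the multi-path update / queued writes shown above.
    }
  });
}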

Vertx CompositeFuture

I am working on a solution where I am using vertx 3.8.4 and vertx-mysql-client 3.9.0 for asynchronous database calls.
Here is the scenario that I have been trying to resolve, in a proper reactive manner.
I have some master table records which are in an inactive state.
I run a query and get the list of records from the database.
I did this like so:
Future<List<Master>> locationMasters = getInactiveMasterTableRecords();
locationMasters.onSuccess(locationMasterList -> {
    if (locationMasterList.size() > 0) {
        uploadTargetingDataForAllInactiveLocations(vertx, amazonS3Utility, locationMasterList);
    }
});
Now, in the uploadTargetingDataForAllInactiveLocations method, I have a list of items.
I need to iterate over this list and, for each item, download a file from AWS, parse the file and insert its data into the DB.
I understand this can be done using CompositeFuture.
Can someone from the Vert.x dev community help me with this, or point me to some available documentation?
I did not find good content on this by googling.
I'm answering this because I was searching for something similar, spent some time before finding an answer, and hopefully this will be useful to someone else in the future.
I believe you want to use CompositeFuture in Vert.x only if you want to synchronize multiple actions: that is, you want an action to execute either when all of the actions the composite future is built upon succeed, or when at least one of them succeeds.
In the first case I would use CompositeFuture.all(List<Future> futures), and in the second case CompositeFuture.any(List<Future> futures).
As per your question, below is sample code where, for each item in a list, we run an asynchronous operation (namely downloadAndProcessFile()) that returns a Future, and we want to execute an action doAction() once all the async operations have succeeded:
List<Future> futures = new ArrayList<>();
locationMasterList.forEach(elem -> {
    Promise<Void> promise = Promise.promise();
    futures.add(promise.future());
    Future<Boolean> processStatus = downloadAndProcessFile(); // doesn't need to be Boolean
    processStatus.onComplete(asyncProcessStatus -> {
        if (asyncProcessStatus.succeeded()) {
            // eventually do stuff with the result
            promise.complete();
        } else {
            promise.fail("Error while processing file");
        }
    });
});
CompositeFuture.all(futures).onComplete(compositeAsync -> {
    if (compositeAsync.succeeded()) {
        doAction(); // <-- here do what you want to do when all futures complete
    } else {
        // at least 1 future failed
    }
});
This solution is probably not perfect and can surely be improved, but this is what works for me. Hopefully it will work for someone else.
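As a complement, a minimal sketch (reusing the futures list and doAction() from the example above) of the "at least one succeeds" case using CompositeFuture.any:
// Sketch only: the composite succeeds as soon as any future in the list succeeds,
// and fails only if every future fails.
CompositeFuture.any(futures).onComplete(anyAsync -> {
    if (anyAsync.succeeded()) {
        doAction();
    } else {
        // all futures failed
    }
});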

Mongodb reverting the saved transaction on exception

I am having a strange scenario in my Grails application: whenever different users place an order at the same time and the same menu is updated, it throws an optimistic locking exception. It goes like this:
def orderApi() {
    // credits are deducted before the try/catch
    // code
    // .....
    def orderFailed = false
    try {
        // code to place the order
    }
    catch (Exception e) {
        // send mail for the exception
        orderFailed = true
    }
    if (orderFailed) {
        refundUserCredits(order)
    }
}
def refundUserCredits(Order order) {
    User user = order.user
    user.credits = order.credits
    if (!user.save()) {
        println "Unable to save user" // but it does not save the credits
    }
}
I guess that since I caught the exception, refunded the credits, and saved the user object, it should save them. The strange thing is that if it is not saving the user credits, it should fall into !user.save() and print the message, but it is not even doing that. Help!
I think you'd benefit from using a transaction. It would allow you to bundle the order placement and credit deduction together as one all-or-nothing unit. Right now you're implementing your own transaction management. It would go something like this...
Order.withTransaction { status ->
    // deduct credits and attempt to place the order. save() all you want.
    if (orderFailed) status.setRollbackOnly()
}
So the order and user changes are only persisted if all goes well.
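For illustration, a rough sketch (with a hypothetical placeOrder helper; not the poster's actual code) of the order flow wrapped in withTransaction, so that a failure rolls the credit deduction back instead of refunding it manually:
def orderApi() {
    Order.withTransaction { status ->
        try {
            User user = order.user
            user.credits -= order.credits   // deduct the credits
            user.save(failOnError: true)

            placeOrder(order)               // hypothetical: the code that places the order
            order.save(failOnError: true)
        }
        catch (Exception e) {
            // send mail for the exception, then undo everything in this transaction
            status.setRollbackOnly()
        }
    }
}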

Code First - Retrieve and Update Record in a Transaction without Deadlocks

I have an EF Code First context which represents a queue of jobs that processing applications can retrieve and run. These processing applications can be running on different machines but point at the same database.
The context provides a method, CollectQueueItem, that returns a QueueItem if there is any work to do, or null otherwise.
To ensure no two applications can pick up the same job, the collection takes place in a transaction with an ISOLATION LEVEL of REPEATABLE READ. This means that if there are two attempts to pick up the same job at the same time, one will be chosen as the deadlock victim and be rolled back. We can handle this by catching the DbUpdateException and return null.
Here is the code for the CollectQueueItem method:
public QueueItem CollectQueueItem()
{
    using (var transaction = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions { IsolationLevel = IsolationLevel.RepeatableRead }))
    {
        try
        {
            var queueItem = this.QueueItems.FirstOrDefault(qi => !qi.IsLocked);
            if (queueItem != null)
            {
                queueItem.DateCollected = DateTime.UtcNow;
                queueItem.IsLocked = true;
                this.SaveChanges();
                transaction.Complete();
                return queueItem;
            }
        }
        catch (DbUpdateException) // we might have been the deadlock victim. No matter.
        { }
        return null;
    }
}
I ran a test in LinqPad to check that this is working as expected. Here is the test below:
var ids = Enumerable.Range(0, 8).AsParallel().SelectMany(i =>
    Enumerable.Range(0, 100).Select(j => {
        using (var context = new QueueContext())
        {
            var queueItem = context.CollectQueueItem();
            return queueItem == null ? -1 : queueItem.OperationId;
        }
    })
);

var sw = Stopwatch.StartNew();
var results = ids.GroupBy(i => i).ToDictionary(g => g.Key, g => g.Count());
sw.Stop();

Console.WriteLine("Elapsed time: {0}", sw.Elapsed);
Console.WriteLine("Deadlocked: {0}", results.Where(r => r.Key == -1).Select(r => r.Value).SingleOrDefault());
Console.WriteLine("Duplicates: {0}", results.Count(r => r.Key > -1 && r.Value > 1));

//IsolationLevel = IsolationLevel.RepeatableRead:
//Elapsed time: 00:00:26.9198440
//Deadlocked: 634
//Duplicates: 0

//IsolationLevel = IsolationLevel.ReadUncommitted:
//Elapsed time: 00:00:00.8457558
//Deadlocked: 0
//Duplicates: 234
I ran the test a few times. Without the REPEATABLE READ isolation level, the same job is retrieved by different threads (seen in the 234 duplicates). With REPEATABLE READ, jobs are only retrieved once, but performance suffers and there are 634 deadlocked transactions.
My question is: is there a way to get this behaviour in EF without the risk of deadlocks or conflicts? I know that in real life there will be less contention, as the processors won't be continually hitting the database, but nonetheless, is there a way to do this safely without having to handle the DbUpdateException? Can I get performance closer to that of the version without the REPEATABLE READ isolation level? Or are deadlocks in fact not that bad, so that I can safely ignore the exception, let the processor retry after a few milliseconds, and accept that performance will be OK as long as not all the transactions happen at the same time?
Thanks in advance!
I'd recommend a different approach.
a) sp_getapplock
Use the SQL stored procedure that provides an application lock feature.
This way you can have exclusive app behaviour, which might involve reading from the DB or whatever other activity you need to control. It also lets you use EF in a normal way.
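For illustration, a rough sketch (hypothetical resource name; assumes EF6's Database.BeginTransaction and a DbContext instance named context) of taking an application lock around the collect logic:
// Sketch only: take an exclusive, transaction-owned application lock so that
// only one processor at a time runs the collect logic.
using (var tx = context.Database.BeginTransaction())
{
    context.Database.ExecuteSqlCommand(
        "EXEC sp_getapplock @Resource = 'CollectQueueItem', @LockMode = 'Exclusive', " +
        "@LockOwner = 'Transaction', @LockTimeout = 5000");

    // ... find, mark and save the next unlocked queue item here, as in CollectQueueItem ...

    tx.Commit(); // committing the transaction releases the transaction-owned lock
}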
OR
b) Optimistic concurrency
http://msdn.microsoft.com/en-us/data/jj592904
// Object property:
public byte[] RowVersion { get; set; }

// Object configuration:
Property(p => p.RowVersion).IsRowVersion().IsConcurrencyToken();
A logical extension to the app lock, or usable just by itself, is a rowversion concurrency field in the DB. Allow the dirty read, BUT when someone goes to update the record as collected, the update fails if someone beat them to it: out-of-the-box EF optimistic locking.
You can easily delete the "collected" job records later.
This might be the better approach unless you expect high levels of concurrency.
As suggested by Phil, I used optimistic concurrency to ensure the job could not be processed more than once. I realised that rather than having to add a dedicated rowversion column I could use the IsLocked bit column as the ConcurrencyToken. Semantically, if this value has changed since we retrieved the row, the update should fail since only one processor should ever be able to lock it. I used the fluent API as below to configure this, although I could also have used the ConcurrencyCheck data annotation.
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<QueueItem>()
        .Property(p => p.IsLocked)
        .IsConcurrencyToken();
}
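For reference, the equivalent data-annotation form mentioned above would look something like this (a sketch on the QueueItem entity):
using System.ComponentModel.DataAnnotations;

public class QueueItem
{
    // Marks IsLocked as a concurrency token via the data annotation
    // instead of the fluent API.
    [ConcurrencyCheck]
    public bool IsLocked { get; set; }

    // ... other properties ...
}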
I was then able to simplify the CollectQueueItem method, losing the TransactionScope entirely and catching the more specific DbUpdateConcurrencyException.
public OperationQueueItem CollectQueueItem()
{
    try
    {
        var queueItem = this.QueueItems.FirstOrDefault(qi => !qi.IsLocked);
        if (queueItem != null)
        {
            queueItem.DateCollected = DateTime.UtcNow;
            queueItem.IsLocked = true;
            this.SaveChanges();
            return queueItem;
        }
    }
    catch (DbUpdateConcurrencyException) // someone else grabbed the job.
    { }
    return null;
}
I reran the tests, and you can see it's a great compromise. No duplicates, nearly 100x faster than with REPEATABLE READ, and no deadlocks (the "Deadlocked" count below now reflects concurrency conflicts rather than actual deadlocks), so the DBAs won't be on my case. Awesome!
//Optimistic Concurrency:
//Elapsed time: 00:00:00.5065586
//Deadlocked: 624
//Duplicates: 0