CKModifyRecordsOperation completion block fires too early - CloudKit

I delete all records in a recordZone with
CKModifyRecordsOperation *modifyRecordsOperation = [[CKModifyRecordsOperation alloc] initWithRecordsToSave:nil recordIDsToDelete:[arrayWithRecordIdsMutable copy]];
The completion block contains a method that reads all records (just as a check). I would expect no records to be found, but instead there are still some records left. If I read a few minutes later, they are gone.
I tried with
modifyRecordsOperation.modifyRecordsCompletionBlock = ^(NSArray *records, NSArray *deletedRecordIDs, NSError *error) {
as well as with the "normal" completionBlock from NSOperation
[modifyRecordsOperation setCompletionBlock:^{
Still the same result. Does anyone have an idea whether I'm doing something wrong, and/or how to trigger activities directly after the delete has completely happened?
Apple docs says:
If you assign a completion block to the completionBlock property of the operation object, the completion block is called after the operation executes and returns its results to you. You can use a completion block to perform housekeeping chores related to the operation, but do not use it to process the results of the operation itself. Any completion block you specify should be prepared to handle the failure of the operation to complete its task, whether due to an error or an explicit cancellation.
I'm wondering what they mean by "but do not use it to process the results of the operation itself". Maybe this is a hint?

I think what's happening is that all the records are marked as deleted but are only physically removed later, in a batch. I would recommend removing indexes you don't need to speed it up a bit. Also, the development environment is a lot slower than production.
You'll see the same in the CloudKit Dashboard: select 20+ records and hit delete; refresh and you'll still see some of them being deleted.
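For reference, a minimal sketch of the setup described in the question (array name taken from the question, error handling elided). Note that modifyRecordsCompletionBlock fires when the server has acknowledged the operation, not necessarily when every record has been physically purged:
CKModifyRecordsOperation *op = [[CKModifyRecordsOperation alloc] initWithRecordsToSave:nil recordIDsToDelete:[arrayWithRecordIdsMutable copy]];
op.modifyRecordsCompletionBlock = ^(NSArray *savedRecords, NSArray *deletedRecordIDs, NSError *operationError) {
    // The server has accepted the operation; physical cleanup may still be in progress.
    if (operationError == nil) {
        NSLog(@"Server accepted deletion of %lu records", (unsigned long)deletedRecordIDs.count);
    }
};
[[[CKContainer defaultContainer] privateCloudDatabase] addOperation:op];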

Related

How to investigate time required to obtain lock - and why - within a procedure

I am stumped on an issue I am having. The true context is rather complicated, but I can boil it down to these functional points (everything else is not related to the problematic table):
I have a trigger function that contains several SELECTs and then an UPDATE
The update takes an unreasonable amount of time to execute ("unreasonable" = > 1.4s)
The exact same queries, when run outside the trigger (for the same rows, parameters, etc.), do not have any issues (i.e., they execute in 1-2 ms)
I am pretty sure that indexes, etc., are working as necessary; i.e. there shouldn't be any issues.
There are no circular triggers
There is one trigger on the destination table, but even with that removed, the behavior is the same.
I have done many tests to no avail, but these are pretty meaningful:
when the update is replaced with a SELECT, the response time is fast, as expected
when the update is replaced with a SELECT... FOR UPDATE, the response time is slow, the same as the update
This (as well as other things) has led me to believe that the delay is spent waiting to acquire a lock
No other transactions are really happening on that table. I am truly bewildered.
Server context: This is being run in AWS/RDS on db.m5.xlarge.
What I am looking for is a way to get information about locks that are taken mid-transaction, or possibly even a history of acquired locks - or anything else that can give me insight into what is causing a delay that seems so closely tied to acquiring a lock on that table.
Unfortunately, just to make everything even more frustrating, I cannot replicate the issue when I attempt to use EXPLAIN in the function body. The only way to do that (that I know of) is to use the EXECUTE... syntax with a query string. That doesn't have a delay - it's also useless for the trigger.
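One way to get at this: while the trigger is hanging, a second session can inspect pg_locks joined to pg_stat_activity to see which backend holds or is waiting for locks on the table. A minimal sketch, with problem_table as a placeholder name:
-- Run from a second session while the slow UPDATE is in flight.
SELECT l.pid, l.locktype, l.mode, l.granted,
       a.state, a.wait_event_type, a.wait_event, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = 'problem_table'::regclass;
Setting log_lock_waits = on is also worth a try: it logs any lock wait that exceeds deadlock_timeout (1s by default), which lines up with the > 1.4s symptom.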

Rollback in Postgres

As far as I know, we can't use START TRANSACTION within functions, thus we can't use COMMIT and ROLLBACK in functions.
But how, then, do we ROLLBACK based on some if-condition?
How, then, can we perform a sequence of statements at a specific isolation level? I mean a situation where an application wants to call an SQL (PL/pgSQL) function, and that function really needs to run in a transaction with a certain isolation level. What to do in such a case?
And in which cases is it really practical to run ROLLBACK? Only when we manually write a script, check something, and then ROLLBACK manually if we don't like the result? In that same case I also see the practicality of savepoints. Still, this feels like a serious constraint.
If you want to roll back the complete transaction, RAISE an exception.
If you only want to roll back part of your work, start a new block with a BEGIN at the point to which you want to roll back and add an EXCEPTION clause to the block.
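A minimal sketch of that pattern, assuming a throwaway table t: everything done inside the inner block is undone when its exception is caught, while work done before the inner BEGIN survives:
CREATE OR REPLACE FUNCTION demo() RETURNS void AS $$
BEGIN
   INSERT INTO t VALUES (1);      -- survives even if the inner block fails
   BEGIN                          -- implicit savepoint starts here
      INSERT INTO t VALUES (2);
      RAISE EXCEPTION 'undo only the inner block';
   EXCEPTION
      WHEN others THEN
         NULL;                    -- work since the inner BEGIN is rolled back
   END;
END;
$$ LANGUAGE plpgsql;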
Since the transaction is started outside the function, the isolation level already has to be set properly when you are in the function.
You can query
SELECT current_setting('transaction_isolation', TRUE);
and throw an error if the setting is not correct.
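For example, inside the function, assuming SERIALIZABLE is what it requires:
IF current_setting('transaction_isolation', TRUE) <> 'serializable' THEN
   RAISE EXCEPTION 'this function must run in a SERIALIZABLE transaction';
END IF;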
Your last question - when it is really practical to run ROLLBACK - is too general to answer simply.
You roll back a transaction if you have reached a point in your processing where you want to undo everything you have done so far in the transaction.
Often, that happens implicitly (by throwing an error) rather than explicitly.

Extbase: implement locking for concurrent access

In my extension I have a set of operations that are generated by user activities. Each operation consists of several steps.
To handle those operations I implemented a scheduler task (extension "scheduler" 6.2.0). Now the point is: the steps of each operation must be done one after the other, not in parallel. That means: at start, the scheduler task should find the next "free" operation, lock it, and handle it.
For locking purposes, the database table with the operations has an integer column isLocked. So I wanted to use the following SQL statement to lock an operation:
$lockID = time();
$sql = 'UPDATE operations SET isLocked = ' . $lockID . ' WHERE isLocked = 0 AND uid = ' . $freeOperationFound->getUid() . ';';
After this SQL command, I wanted to check whether the lock was set:
$repository->findOneByIsLocked($lockID);
If locking was successful, handling of the operation's steps can start.
If, meanwhile, another instance of the scheduler task locks this operation, the SQL statement above does nothing because of the condition WHERE isLocked = 0.
The problem is: Extbase ignores raw SQL UPDATE statements.
If I just update the free operation object via the repository, the lock of another task instance can be overwritten. I need some kind of "conditional" update.
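In plain SQL the pattern looks like this (:lockId and :uid stand for the values concatenated above); the affected-row count tells you whether the lock was won:
UPDATE operations
   SET isLocked = :lockId   -- e.g. time()
 WHERE isLocked = 0         -- only succeeds if nobody locked it first
   AND uid = :uid;
-- affected rows = 1: lock acquired; 0: another task was faster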
I think I got it: $GLOBALS['TYPO3_DB']->exec_UPDATEquery is the answer.
The only remaining question is whether this method is also deprecated in Flow, like $query->statement of the Repository.
While the exec_UPDATEquery function from the DatabaseConnection class certainly gets the job done, here is the solution via Extbase. It might make more sense if you need to work with the Operation object after you lock it.
$persistenceManager = GeneralUtility::makeInstance('TYPO3\CMS\Extbase\Persistence\PersistenceManager');
$freeOperation = $repository->findOneByIsLocked(0);
$freeOperation->setIsLocked(time());
$repository->update($freeOperation);
$persistenceManager->persistAll();
$freeOperation->myOperation();
$freeOperation->myOtherOperation();
$freeOperation->setIsLocked(0);
$repository->update($freeOperation);
$persistenceManager->persistAll();
The reason you need to persist manually is that your task does not run within the context of an ActionController action. And even if it did, your changes would not be persisted automatically until the end of the action. Doing it through Extbase might be the safer option, because you can be sure you are actually working on the exact same operation that you have just locked.

tsql query kill scenario

In our webapp, we have lots of queries running. Most of them read data, but sometimes high-priority update queries come in. We'd like to cancel the read queries in that case, but rather than using KILL, I'd like the read query to return a certain dataset or execution result upon receiving the cancel.
My intention is to mimic the behavior of signal in C programs for which a signal handler is invoked upon receiving a kill signal.
Is there any method to define an asynchronous KILL signal handler for stored procedures?
This is not a fully tested answer, but it is a bit more than just a comment.
One option is a dirty read (WITH (NOLOCK)). This part is tested - I do this all the time; build a large, scalable app and you may need to resort to this and manage it. A dirty read will not block an update, and that is essentially what you can get here.
A lot of people think a dirty read may return corrupt data. It won't: if you are updating smith to johnson, the select is not going to get smison. The select is going to get smith, and that value is simply immediately stale. But how is that worse than taking a read lock? With a read lock, the read gets smith and blocks the update; only once the read locks are cleared does the update go through. I would contend that blocking an update also gives you stale data.
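A minimal sketch of the hint (table and column names are illustrative):
-- Per-statement hint: read uncommitted rows, do not take shared locks
SELECT CustomerName
FROM dbo.Customers WITH (NOLOCK)
WHERE CustomerId = 42;
-- Or set it for the whole session:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;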
If you are using a reader, I think you could pass the same cancellation token to each SELECT and then just cancel that one token. But it may not process the CancellationToken until it reads a row, so it may not cancel a long-running query that has not yet returned any rows.
DbDataReader.ReadAsync Method (CancellationToken)
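A minimal sketch of that idea in ADO.NET (the connection string, table, and column names are assumptions for illustration):
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient; // or System.Data.SqlClient

class ReaderCancellation
{
    static async Task Main()
    {
        using var cts = new CancellationTokenSource();
        using var conn = new SqlConnection("Server=.;Database=App;Integrated Security=true");
        await conn.OpenAsync(cts.Token);

        using var cmd = new SqlCommand("SELECT CustomerId, CustomerName FROM dbo.Customers", conn);
        using var reader = await cmd.ExecuteReaderAsync(cts.Token);

        // Call cts.Cancel() from the high-priority path to stop every
        // read loop that shares this token.
        while (await reader.ReadAsync(cts.Token))
        {
            Console.WriteLine(reader.GetInt32(0));
        }
    }
}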
Or, if you are not using a reader, look at
SqlCommand.Cancel
As far as getting the cancel to return alternate data goes, I doubt SQL Server is going to do that.

Salesforce.com: UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record

In our production org, we have a system of uploading sales data into Salesforce using the command line data loader. This data is loaded into a temporary object Temp. We have created a formula field (which combines three fields) to form a unique key. The purpose of the object is to reduce the user effort of creating the key manually.
There is an after insert trigger on Temp which calls an asynchronous method that upserts the data into another object, SalesData, using the key. The insert/update trigger on SalesData checks the various fields and creates/updates the records in another object, SalesRecords. After the insert/update is complete, all the records in the temp object Temp are deleted. The SalesRecords object does not have any trigger on it and is a child of another object, Sales. The Sales object has some rollup fields which sum up fields from the SalesRecords object.
Lately, we are getting the below error for some of the records which are updated.
UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record
Please provide some pointers to resolve the issue.
This could be caused either by conflicting DML operations across the various trigger executions or by some recursive trigger execution. I would assume that the async executions cause multiple subsequent updates on the same records, probably on the SalesRecords object. I would recommend trying to simplify the process to avoid too many related trigger executions.
I'm a little surprised you were able to get this to work in the first place. After triggers should be used with caution and only when before triggers can't do the job. One reason for this is that you don't need to perform additional DML to make changes to records: in before triggers you simply change the values, and the insert/update commit happens automatically. But recursive trigger firing is the main problem with after triggers.
One quick way to avoid trigger re-entry is to use a public static Boolean in a class that states whether you're already in this trigger from the same thread of execution.
Something like:
public class TriggerControl {
    public static Boolean isExecuting = false;
}
Once set to true, any trigger code that is a re-fire can be skipped (the class name here is just illustrative):
if (!TriggerControl.isExecuting)
{
    TriggerControl.isExecuting = true;
    // Perform trigger logic
    // ...
}
Additionally, since the order of trigger execution cannot be determined up front, you might be seeing an issue with deletions or other data changes that depend on other parts of your flow to finish first.
Also, without knowing the details of your custom unique 3-part key, I'd wonder if there's a problem there too such as whether it's truly unique or not. Case insensitivity is a common mistake and it's the reason there are 15 AND 18 character Ids in Salesforce. For example, when people export to Excel (a case-insensitive environment) and do VLOOKUPs, they would occasionally find the wrong record. The 3-digit calculated suffix was added to disambiguate for case-insensitive environments.
Googling for this same error led me to this post:
http://boards.developerforce.com/t5/General-Development/Unable-to-obtain-exclusive-access-to-this-record/td-p/345319
Which points out some common causes for this to happen:
Sharing Rules are being calculated.
A picklist value has been replaced and replacement is in progress.
A custom index creation/removal is in progress.
The most unlikely one: someone else is already editing the same record that you are trying to access at the same time.
Posting here in case somebody else needs it.
I got this error multiple times today. It turned out one of our vendors was updating their installed package in the same org during that time. All kinds of other things were going wrong as well - some object validation exceptions were being thrown on DML, without any error message content.
Resolution
The error is shown when a field update, such as a roll-up summary field, is attempted on a parent object that already had a field update causing the roll-up summary field to recalculate. This can also occur if a trigger or another Apex job is running on the master object and is also attempting to do an update.
If this issue occurs, you can either reduce the batch size and try again, or create separate smaller files to be imported.