Sorry if you find this question weird, but let me ask it anyway.
Imagine the following situation. There are two clients, A and B. Client A decides to create a profile, and the whole transaction takes, say, two minutes to complete.
After one minute, client B decides to create a profile with THE SAME username and password. The first transaction is still in progress, and the unique constraint cannot reject the second insert yet, because no user with this username exists so far.
So it will eventually end with a UNIQUE CONSTRAINT violation, and we'll need to roll back.
The question is: how do we avoid this situation?
I've heard about LOCK in PostgreSQL (which lets you lock an EXISTING row so that others can't change or read it), but I haven't found anything that covers this sort of case.
Is there any feature that provides some way to block such potential transactions?
Start the transaction like this:
BEGIN;
SET lock_timeout = 1;  -- one millisecond
INSERT INTO users (username, password) VALUES (...);
RESET lock_timeout;
/* the rest of the transaction */
COMMIT;
The second transaction that tries to create the same user won't block, but will fail right away and can be rolled back.
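For illustration, here is roughly what the second session experiences (a sketch; the exact error text can vary by PostgreSQL version):

BEGIN;
SET lock_timeout = 1;
INSERT INTO users (username, password) VALUES (...);  -- same username as session A
-- ERROR:  canceling statement due to lock timeout
ROLLBACK;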
EDIT: in case someone else stumbles across this, Laurenz Albe's post is a better solution. Use that instead.
The question is: how do we avoid this situation?
A simple way would be to split the commit into two parts, probably using a savepoint:
func createUser(user: User) error {
    // Insert the row on its own and commit immediately; this reserves the username.
    this.db.exec('INSERT INTO users VALUES ($1, $2)', user.username, user.hashedPassword);

    this.db.withTransaction(func (tx Transaction) {
        // Delete the reservation, set a savepoint, then re-create the row.
        tx.exec('DELETE FROM users WHERE username = $1', user.username);
        sp = tx.createSavepoint();
        tx.exec('INSERT INTO users VALUES ($1, $2)', user.username, user.hashedPassword);
        try {
            // your code that takes two minutes
            tx.commit();  // success: delete + re-insert cancel out, the user stays
        } catch (e) {
            tx.rollbackToSavepoint(sp);  // undoes the re-insert, keeps the delete
            tx.commit();                 // the reservation is removed
        }
    });
}
You first insert the row on its own, immediately committing the change. From that point on, no other user can take that username.
Then, in a transaction, delete the user, create a savepoint, and insert the user again. Now, if the long-running work fails, instead of rolling back the entire transaction, roll back to the savepoint: that undoes the re-insert but keeps the delete, so committing removes the reservation. If the work succeeds, the delete and re-insert cancel each other out and the user remains.
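In plain SQL, and assuming a unique constraint on users.username, the sequence looks roughly like this:

-- Step 1: reserve the username in its own, immediately committed statement.
INSERT INTO users (username, password) VALUES ($1, $2);

-- Step 2: the long-running transaction.
BEGIN;
DELETE FROM users WHERE username = $1;
SAVEPOINT before_reinsert;
INSERT INTO users (username, password) VALUES ($1, $2);
-- ... the work that takes two minutes ...
COMMIT;                                -- success: delete + re-insert cancel out

-- On failure, run these instead of the plain COMMIT:
ROLLBACK TO SAVEPOINT before_reinsert; -- undoes the re-insert, keeps the delete
COMMIT;                                -- the reservation is removed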
I was trying out a simple trigger on the Case object. There is a field Hiring_Manager__c (a lookup to User) on Case. On update or insert of a Case, this field has to be populated with the Case owner's manager. I created the trigger as follows. It is not bulkified, as I was just trying it out for a single record.
I could see the value getting populated correctly in the debug statements, but it is not updated on the record.
trigger HiringManagerupdate_case on Case (before insert, before update) {
    Case updatedCaseRec = [SELECT Id, OwnerId, Hiring_Manager__c, CaseNumber
                           FROM Case WHERE Id IN :Trigger.new];
    System.debug('Case Number ' + updatedCaseRec.CaseNumber);
    System.debug('Manager before updation ' + updatedCaseRec.Hiring_Manager__c);
    try {
        User caseOwner = [SELECT Id, Name, ManagerId, Manager.Email
                          FROM User WHERE Id = :updatedCaseRec.OwnerId];
        updatedCaseRec.Hiring_Manager__c = caseOwner.ManagerId;
    } catch (DMLException de) {
        System.debug('Could not find Manager');
    }
    System.debug('Manager ' + updatedCaseRec.Hiring_Manager__c);
}
I got a reply on the developer forum with code that works fine.
trigger HiringManagerupdate_case on Case (before insert, before update) {
    for (Case cas : Trigger.new) {
        try {
            User caseOwner = [SELECT Id, Name, ManagerId, Manager.Email
                              FROM User WHERE Id = :cas.OwnerId];
            cas.Hiring_Manager__c = caseOwner.ManagerId;
        } catch (DMLException de) {
            System.debug('Could not find Manager');
        }
        System.debug('Manager ' + cas.Hiring_Manager__c);
    }
}
I would like to know:
1) What is the difference in logic between the two approaches if I am trying to update only a single record? Mine populates the value in the debug statement, but not on the screen.
2) Will the SELECT statement inside the loop (in the second approach) affect governor limits?
3) Do we need to explicitly call update in before triggers? My understanding is that the save and commit happen after the field update in before triggers, and hence the changes are saved automatically without calling an update statement.
Thanks in advance.
Nisha
1) If you want to modify the record(s) of the object that fired a before trigger, you must iterate over Trigger.new. You cannot query for them: on insert the records may not be in the database yet, and changes made to a separately queried copy are never written back, which is why your value shows up in the debug log but not on the record.
2) Yes, it could. The query executes once for every Case record being inserted or updated; if you have 1,000 such Cases, the query executes 1,000 times (see the bulkified sketch after these answers).
3) No, we do not need to call update for before triggers. Your understanding is correct.
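Regarding 2), a minimal bulkified sketch of the forum's trigger (same object and field names as in the question; note that Cases owned by queues will have owner IDs that don't match any User, which the null check below covers):

trigger HiringManagerupdate_case on Case (before insert, before update) {
    // Gather every owner Id in the batch first.
    Set<Id> ownerIds = new Set<Id>();
    for (Case cas : Trigger.new) {
        ownerIds.add(cas.OwnerId);
    }

    // One SOQL query for the whole batch instead of one per record.
    Map<Id, User> ownersById = new Map<Id, User>(
        [SELECT Id, ManagerId FROM User WHERE Id IN :ownerIds]
    );

    for (Case cas : Trigger.new) {
        User caseOwner = ownersById.get(cas.OwnerId);
        if (caseOwner != null) {
            cas.Hiring_Manager__c = caseOwner.ManagerId;
        }
    }
}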
A user wants to invite a friend but I want to do a check first. For example:
SELECT friends_email from invites where friends_email = $1 limit 1;
If that finds one, I want to return a message such as "This friend has already been invited."
If it does not find one, I want to do an insert:
INSERT INTO invites etc...
but then I need to return the primary user's region_id:
SELECT region_id from users where user_id = $2
What's the best way to do this?
Thanks.
EDIT --------------------------------------------------------------
After many hours, below is what I ended up with in plpgsql.
IF EXISTS (SELECT * FROM invitations WHERE email = friends_email) THEN
return 'Already Invited';
END IF;
INSERT INTO invitations (email) VALUES (friends_email);
return 'Invited';
I understand that there are probably dozens of better ways, but this worked for me.
Without writing the exact code snippet for you...
Consider solving this problem by shaping your data to conform to your business rules. If you can only invite someone once, then you should have an "invites" table that reflects this with a UNIQUE constraint across whatever columns define a unique invite. If it is just an email address, then declare "invites.email" as a unique column.
Then do an INSERT. Write the INSERT so that it takes advantage of Postgres's RETURNING clause to give an answer on success. If the INSERT fails (because you already have that email address, which was the point of the check you wanted to do), then catch the failure in your application code and return the appropriate response.
Pseudocode:
Application:
try
invite.insert(NewGuy)
catch error.UniqueFail
return "He's already been invited"
# ...do other stuff
Postgres:
INSERT INTO invites
(data fields + SELECT region thingy)
VALUES
(some arrangement of data that includes "region_id")
RETURNING region_id
If that's hard to make work the first time you try it, phrasing the insert target as a CTE may be helpful. If all else fails, write it procedurally in plpgsql for the time being, making sure the external interface accepts a normal INSERT (so you don't have to change application code later) and sort it out once you know whether or not performance is an issue.
The basic idea here is to let the relational shape of your data obviate the need for any procedural checking wherever you can. That's at the heart of relational data modeling ...somewhat of a lost art these days.
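As a concrete sketch of that pseudocode, assuming an invites table with a unique email column plus a region_id column, and the users table from the question (adjust names to your schema):

-- invites.email is UNIQUE, so a duplicate invite raises an error
-- that the application code catches.
INSERT INTO invites (email, region_id)
SELECT $1, u.region_id
FROM users u
WHERE u.user_id = $2
RETURNING region_id;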
You can create an SQL stored procedure to implement functionality like that described above.
But it is wrong from an architecture point of view. See: Direct database manipulation an anti-pattern?
The DB has one scope of responsibility: storing data.
You have to put business logic into your business layer.
I'm receiving an exception when I try to use Data Loader with this Apex trigger. It says there's a limit of 100 records to be updated at a time. Here is the code for a trigger on the Account object. All comments are much appreciated.
trigger MaintainPrimaryOverriding on Account (before insert, before update) {
    if (TriggerUpdateController.getPrimaryBranchOverriding()) {
        TriggerAffiliationControl.setLock();
        for (Account s : Trigger.new) {
            if (Trigger.isUpdate) {
                Id ownerId = Trigger.oldMap.get(s.Id).OwnerId;
                if (s.OwnerId != ownerId) {
                    // Use branch associated with owner ID
                    TriggerUpdateController.UpdatePrimaryBranchOfficeForAccountOwnerChange(s);
                    TriggerUpdateController.UpdateAffiliation(s);
                }
            } else if (Trigger.isInsert) {
                TriggerUpdateController.UpdatePrimaryBranchOfficeForAccountOwnerChange(s);
            }
        }
        TriggerAffiliationControl.setUnLock();
    }
}
Thanks!
This error, Too many SOQL queries, is due to the fact that you are hitting a governor limit. Look at Best Practice: Avoid SOQL Queries Inside FOR Loops.
It looks like you are calling the methods below for each updated or inserted record:
TriggerUpdateController.UpdatePrimaryBranchOfficeForAccountOwnerChange(s)
TriggerUpdateController.UpdateAffiliation(s)
It also looks like you are issuing SOQL statements inside those methods, which is what causes the governor limit error.
You have to make three modifications here:
1. Capture the updated or inserted Accounts in a list and pass that list to the two methods above (see the sketch after this list).
2. Make those two methods accept the list of Accounts created in step 1 and process the whole list.
3. If you need to fetch details from some other object, get those details in a single SOQL query.
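A hedged sketch of what the bulkified trigger could look like; the list-accepting method signatures are assumptions, since the code for TriggerUpdateController isn't shown:

trigger MaintainPrimaryOverriding on Account (before insert, before update) {
    if (TriggerUpdateController.getPrimaryBranchOverriding()) {
        TriggerAffiliationControl.setLock();
        // Collect the records first instead of calling the controller per record.
        List<Account> branchUpdates = new List<Account>();
        List<Account> affiliationUpdates = new List<Account>();
        for (Account s : Trigger.new) {
            if (Trigger.isInsert) {
                branchUpdates.add(s);
            } else if (s.OwnerId != Trigger.oldMap.get(s.Id).OwnerId) {
                branchUpdates.add(s);
                affiliationUpdates.add(s);
            }
        }
        // One call per batch; each method should run its SOQL once for the whole list.
        TriggerUpdateController.UpdatePrimaryBranchOfficeForAccountOwnerChange(branchUpdates);
        TriggerUpdateController.UpdateAffiliation(affiliationUpdates);
        TriggerAffiliationControl.setUnLock();
    }
}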
If you could post the code for TriggerUpdateController, I would be in a better position to suggest the correct code.
Regards,
Pundareekam Kudikala
I am using EF code first. Recently I had to replace the following code:
User user = userRepository.GetByEmail("some#email.com");
if (user == null)
{
user = new User { Email = email, CreatedAt = DateTime.Now };
userRepository.Add(user);
unitOfWork.Commit();
}
with
Context.ExecuteSqlCommand(@"IF NOT EXISTS(SELECT 1 FROM Users WHERE Email = {0})
    INSERT INTO Users(Email, CreatedAt)
    VALUES ({0}, GETDATE())", email);
The reason behind this is that EF took a very long time to run the first piece of code when trying to add thousands of rows. Changing it to an ExecuteSqlCommand cut the time to handle that many rows dramatically.
The problem I am seeing now (only occurred twice so far) is the following message from the database: Transaction (Process ID 52) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
How would I go about resolving this? Most of my data access is done through EF, with a few exceptions like the one above. I have never seen a deadlock in my logs before, so I assume this has something to do with the query.
My questions are:
Is there a way to write the query using NOLOCK? How would that query look?
Is there a way to tell EF to use NOLOCK for certain queries?
What you actually want is to lock the table earlier, not to prevent locking. Locking is necessary to ensure that between the two statements in your command, some other process hasn't come along and inserted the same user. (Locking is always necessary when inserting data because the physical storage is being modified.)
Assuming that this is actually the command causing the deadlock, the following should resolve it, because it asks for a restrictive (update) lock up front instead of taking a shared lock and upgrading it later:
// UPDLOCK takes an update lock during the EXISTS check;
// HOLDLOCK keeps it until the transaction ends.
Context.ExecuteSqlCommand(@"IF NOT EXISTS(SELECT 1 FROM Users WITH (UPDLOCK, HOLDLOCK)
                                          WHERE Email = {0})
    INSERT INTO Users(Email, CreatedAt)
    VALUES ({0}, GETDATE())", email);
We have a table called Contracts. These contract records are created by users on an external site and must be approved or rejected by staff on an internal site. When a contract is rejected, it's simply deleted from the db. When it's accepted, however, a new record is generated called Contract Acceptance which is written to its own table and is derived from data that exists on the contract.
The problem is that two internal staff members may each end up opening the same contract. The first user accepts and a contract acceptance record is generated. Then, with the same contract record still open on the page, the second user accepts the contract again, creating a duplicate acceptance record.
The quick and dirty way to get past this is to retrieve the contract from the db just before it's accepted, check the status, and produce an error message saying that it's already been accepted. This would probably work in most circumstances, but two users could still click the Accept button at the exact same time and slip past this validation code.
I've also considered a thread lock deep in the data layer that prevents two threads from entering the same region of code at the same time, but the app exists on two load-balanced servers, so the users could be on separate servers which would render this approach useless.
The only method I can think of would have to exist at the database level. Conceptually, I would like to somehow lock the stored procedure or table so that it can't be updated twice at the same time, but perhaps I don't understand Oracle well enough here. How do updates work? Are update requests somehow queued up so that they do not occur at the exact same time? If so, I could check the status of the record in the SQL and return a value in an out parameter stating it has already been accepted. But if update requests aren't queued, then two people could still get into the update SQL at the exact same time.
I'm looking for good suggestions on how to go about this.
First, if there can only be one Contract Acceptance per Contract, then Contract Acceptance should have the Contract ID as its own primary (or unique) key: that will make duplicates impossible.
Second, to prevent the second user from trying to accept the contract while the first user is accepting it, you can make the acceptance process lock the Contract row:
select ...
from Contract
where contract_id = :the_contract
for update nowait;
insert into Contract_Acceptance ...
The second user's attempt to accept will then fail with an exception:
ORA-00054: resource busy and acquire with nowait specified
In general, there are two approaches to the problem
Option 1: Pessimistic Locking
In this scenario, you're pessimistic so you lock the row in the table when you select it. When a user queries the Contracts table, they'd do something like
SELECT *
FROM contracts
WHERE contract_id = <<some contract ID>>
FOR UPDATE NOWAIT;
Whoever selects the record first will lock it. Whoever selects the record second will get an ORA-00054 error that the application will then catch and let them know that another user has already locked the record. When the first user completes their work, they issue their INSERT into the Contract_Acceptance table and commit their transaction. This releases the lock on the row in the Contracts table.
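A minimal PL/SQL sketch of catching that error (ORA-00054 maps to error code -54; the contract_acceptance column names are assumptions):

DECLARE
  row_locked EXCEPTION;
  PRAGMA EXCEPTION_INIT(row_locked, -54);
  l_contract contracts%ROWTYPE;
BEGIN
  SELECT *
    INTO l_contract
    FROM contracts
   WHERE contract_id = :the_contract
     FOR UPDATE NOWAIT;

  -- Proceed with the acceptance.
  INSERT INTO contract_acceptance (contract_id)
  VALUES (:the_contract);
EXCEPTION
  WHEN row_locked THEN
    -- Another user already has this contract open for acceptance.
    dbms_output.put_line('Contract is being processed by another user.');
END;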
Option 2: Optimistic Locking
In this scenario, you're being optimistic that the two users won't conflict, so you don't lock the record initially. Instead, you select the data you need along with a Last_Updated_Timestamp column that you add to the table if it doesn't already exist. Something like
SELECT <<list of columns>>, Last_Updated_Timestamp
FROM Contracts
WHERE contract_id = <<some contract ID>>
When a user accepts the contract, before doing the INSERT into Contract_Acceptance, they issue an UPDATE on Contracts
UPDATE Contracts
SET last_updated_timestamp = systimestamp
WHERE contract_id = <<some contract ID>>
AND last_updated_timestamp = <<timestamp from the initial SELECT>>;
The first person to do this update will succeed (the statement will update 1 row). The second person to do this will update 0 rows. The application detects the fact that the update didn't modify any rows and tells the second user that someone else has already processed the row.
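In PL/SQL, the zero-row detection might look like this sketch (SQL%ROWCOUNT reports how many rows the UPDATE touched; the contract_acceptance columns are assumptions):

BEGIN
  UPDATE contracts
     SET last_updated_timestamp = systimestamp
   WHERE contract_id = :the_contract
     AND last_updated_timestamp = :timestamp_from_initial_select;

  IF SQL%ROWCOUNT = 0 THEN
    -- Someone else processed the contract first; surface that to the user.
    RAISE_APPLICATION_ERROR(-20001, 'Contract already accepted by another user.');
  END IF;

  INSERT INTO contract_acceptance (contract_id, accepted_at)
  VALUES (:the_contract, systimestamp);
END;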
In Either Case
In either case, you probably want to add a UNIQUE constraint to the Contract_Acceptance table. This will ensure that there is only one row in the Contract_Acceptance table for any given Contract_ID.
ALTER TABLE Contract_Acceptance
  ADD CONSTRAINT unique_contract_id UNIQUE (Contract_ID);
This is a second line of defense that should never be needed but protects you in case the application doesn't implement its logic correctly.