I'm receiving an exception when I try to use Data Loader with this Apex trigger. It says there's a limit of 100 records that can be updated at a time. Here is the code for a trigger on the Account object. All comments are much appreciated.
trigger MaintainPrimaryOverriding on Account (before insert, before update) {
    if (TriggerUpdateController.getPrimaryBranchOverriding()) {
        TriggerAffiliationControl.setLock();
        for (Account s : Trigger.new) {
            if (Trigger.isUpdate) {
                Id ownerId = Trigger.oldMap.get(s.Id).OwnerId;
                if (s.OwnerId != ownerId) {
                    //Use Branch Associated with owner ID
                    TriggerUpdateController.UpdatePrimaryBranchOfficeForAccountOwnerChange(s);
                    TriggerUpdateController.UpdateAffiliation(s);
                }
            }
            else if (Trigger.isInsert) {
                TriggerUpdateController.UpdatePrimaryBranchOfficeForAccountOwnerChange(s);
            }
        }
        TriggerAffiliationControl.setUnLock();
    }
}
Thanks!
This error, "Too many SOQL queries", is due to the fact that you are hitting a governor limit. See Best Practice: Avoid SOQL Queries Inside FOR Loops.
It looks like you are calling the methods below for each updated/inserted record:
"TriggerUpdateController.UpdatePrimaryBranchOfficeForAccountOwnerChange(s)"
"TriggerUpdateController.UpdateAffiliation(s)"
It looks like you are issuing SOQL statements inside those methods, which is causing the governor limit error.
You have to make the following modifications here (a sketch of the bulkified trigger follows this list):
1. Capture the updated or inserted accounts in a list and pass that list to the two methods above.
2. Change those two methods to accept the list of accounts from step 1 and process it in bulk.
3. If you need to fetch details from another object, get them in a single SOQL query outside the loop.
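As a rough illustration, here is a minimal sketch of what the bulkified trigger could look like. The list-based signatures of the controller methods are an assumption, since the code for TriggerUpdateController has not been posted:
trigger MaintainPrimaryOverriding on Account (before insert, before update) {
    if (TriggerUpdateController.getPrimaryBranchOverriding()) {
        TriggerAffiliationControl.setLock();
        List<Account> branchUpdates = new List<Account>();
        List<Account> affiliationUpdates = new List<Account>();
        for (Account s : Trigger.new) {
            if (Trigger.isInsert) {
                branchUpdates.add(s);
            } else if (s.OwnerId != Trigger.oldMap.get(s.Id).OwnerId) {
                branchUpdates.add(s);
                affiliationUpdates.add(s);
            }
        }
        // Hypothetical list-based overloads; each should run its SOQL once for the whole list
        if (!branchUpdates.isEmpty()) {
            TriggerUpdateController.UpdatePrimaryBranchOfficeForAccountOwnerChange(branchUpdates);
        }
        if (!affiliationUpdates.isEmpty()) {
            TriggerUpdateController.UpdateAffiliation(affiliationUpdates);
        }
        TriggerAffiliationControl.setUnLock();
    }
}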
If you could post the code for "TriggerUpdateController", I would be in a better position to suggest the correct code.
Regards,
Pundareekam Kudikala
I'm currently building a forum-like application. Users will be able to see recent posts with the total like count. If a post is interesting to a user, they can like it as well and contribute to the total like count.
The normalized approach would be to have two tables: user_post (contains id, metadata, ...) and liked_post (which contains the user id + post id). When posts are queried, the like count would be determined with a COUNT() on the liked_post table grouped by post id.
I'm thinking of another approach, which requires no GROUP BY on a potentially huge table: add a like_count column to the user_post table and break normalization. This column would be updated whenever a liked_post entry is inserted or deleted. That means every time a user likes a post, there is an update on the user_post table (incrementing the like_count column) plus an insert/delete on the liked_post table (via a trigger or code in the app layer).
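For concreteness, a minimal sketch of the two tables being compared; the column types and the extra columns are assumptions, not taken from the question:
-- like_count is the denormalized counter from the second approach
-- (omit it for the purely normalized design).
CREATE TABLE user_post (
    id         bigserial PRIMARY KEY,
    title      text,
    created_at timestamptz DEFAULT now(),
    like_count integer NOT NULL DEFAULT 0
);

CREATE TABLE liked_post (
    user_id bigint NOT NULL,
    post_id bigint NOT NULL REFERENCES user_post (id),
    PRIMARY KEY (user_id, post_id)  -- one like per user per post
);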
Would this approach have any disadvantages, apart from the consistency concerns? It would enable very simple and fast SELECT queries, but I'm not sure whether the additional update would be an issue.
What are your thoughts?
I'm really interested in the performance impact, not in whether you should do this from the beginning of the project or not.
Your idea is correct and widely used. The problem you will face:
How do you make sure that like_count is valid? Can this number be delayed or approximated somehow?
In general you can do this in the following ways:
update like_count within application code
update like_count by triggers
If you want the values to be exact, you can accumulate those counts with triggers, or do it programmatically while ensuring that the like_count update is always in the same transaction as the insert into liked_posts.
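For the application-side variant, the point is simply that both statements run in one transaction; a minimal sketch ($1 and $2 stand for whatever parameters your application binds):
BEGIN;
INSERT INTO liked_posts (user_id, post_id) VALUES ($1, $2);
UPDATE user_post SET like_count = like_count + 1 WHERE id = $2;
COMMIT;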
Using triggers, it could be something like this:
CREATE FUNCTION public.update_like_count() RETURNS trigger
    LANGUAGE plpgsql
AS $$
BEGIN
    UPDATE user_post SET like_count = like_count + 1
    WHERE user_post.id = NEW.post_id;
    RETURN NEW;
END;
$$;
CREATE TRIGGER update_like_counts
AFTER INSERT ON public.liked_posts
FOR EACH ROW EXECUTE PROCEDURE public.update_like_count();
You should also handle deletes with a separate AFTER DELETE trigger.
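A minimal sketch of that delete-side trigger, mirroring the insert trigger above:
CREATE FUNCTION public.decrement_like_count() RETURNS trigger
    LANGUAGE plpgsql
AS $$
BEGIN
    UPDATE user_post SET like_count = like_count - 1
    WHERE user_post.id = OLD.post_id;
    RETURN OLD;
END;
$$;

CREATE TRIGGER decrement_like_counts
AFTER DELETE ON public.liked_posts
FOR EACH ROW EXECUTE PROCEDURE public.decrement_like_count();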
Be aware that, depending on the transaction isolation level, you may run into a concurrency problem here (if two inserts happen at the same time, both transactions may see the same like_count) and end up with an invalid total.
I've had a problem similar to this in the past, and the solution I went with is similar to what you've described: keeping an aggregated stored value like_count. As you mentioned, the only downside is the consistency concern, but that problem exists even with the normalized approach.
The solution to something like this lies more on the application side, for example utilizing something like WebSockets to keep posts up to date without too much fluff.
When a user's browser/client loads a post, they join a room keyed by the post id, and when a user interacts with a post (like, dislike, etc.) that interaction is broadcast to all users in that room.
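As a rough illustration of the room idea, a minimal sketch assuming Socket.IO on the server; the answer only says "web-sockets", so the library and event names here are made up:
import { Server } from "socket.io";

const io = new Server(3000);

io.on("connection", (socket) => {
  // Client joins the room for a post when its page/feed card loads
  socket.on("watch-post", (postId: string) => {
    socket.join(`post:${postId}`);
  });

  // A like/dislike is broadcast to everyone watching that post
  socket.on("post-interaction", (postId: string, kind: "like" | "dislike") => {
    io.to(`post:${postId}`).emit("post-interaction", { postId, kind });
  });
});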
Finally, when it comes to finding out which users liked a post, you can query/load that at the point when the user clicks to see it. ~ cheers
I'm trying to implement a basic social network project. It has Posts, Comments and Likes like any other.
A post can have many comments
A post can have many likes
A post can have one author
I have a /posts route on the client application. It lists the Posts with pagination and shows their title, image, authorName, commentCount and likesCount.
The GraphQL query looks like this:
query {
  posts(first: 10, after: "123456") {
    totalCount
    edges {
      node {
        id
        title
        imageUrl
        author {
          id
          username
        }
        comments {
          totalCount
        }
        likes {
          totalCount
        }
      }
    }
  }
}
I'm using apollo-server, TypeORM, PostgreSQL and dataloader. I use dataloader to get the author of each post: I batch the requested authorIds with dataloader, get the authors from PostgreSQL with a WHERE user.id IN :authorIds query, and map the query results back to each authorId. You know, the most basic usage of dataloader.
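That basic author loader might look roughly like this; a sketch only, the repository and entity names are assumptions:
import DataLoader from "dataloader";
import { In, Repository } from "typeorm";
import { User } from "./entities/User";

export const createAuthorLoader = (userRepo: Repository<User>) =>
  new DataLoader<string, User | undefined>(async (authorIds) => {
    // One query for the whole batch of requested author ids
    const users = await userRepo.find({ where: { id: In([...authorIds]) } });
    const byId = new Map(users.map((u) => [u.id, u]));
    // DataLoader expects results in the same order as the keys
    return authorIds.map((id) => byId.get(id));
  });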
But when I try to query the comments or likes connection under each post, I get stuck. I could use the same technique with postId for them if there were no pagination, but now I have to include the filter parameters for the pagination, and there may be other filter parameters for some WHERE conditions as well.
I've found the cacheKeyFn option of dataloader. I simply create a string key for the filter object passed to the dataloader, so it doesn't duplicate them and only passes the unique ones to the batchFn. But I can't build a SQL query with TypeORM that gets the results for each first, after and orderBy argument separately and maps the results back to the function that called the dataloader.
I've searched the spectrum.chat source code and I think they don't allow users to query nested connections. I also tried the GitHub GraphQL Explorer, and it does let you query nested connections.
Is there any recommended way to achieve this? I understand how to pass an object to dataloader and batch using cacheKeyFn, but I can't figure out how to get the results from PostgreSQL in one query and map them back so the loader can return them.
Thanks!
So, if you restrict things a bit, this is doable. The restriction is to only allow batched connections on the first page of results, e.g. so that all the connections you're fetching in parallel are being fetched with the same parameters. This is a reasonable constraint because it lets you do things like get the first 10 feed items and the first 3 comments for each of them, which represents a fairly typical use case. Trying to support independent pagination within a single query is unlikely to fulfil any real-world use case for a UI, so it's likely an over-optimisation. With this in mind, you can support the "for each parent, get the first N children" use case in PostgreSQL using window functions.
It's a bit fiddly, but there are answers floating around that will point you in the right direction: Grouped LIMIT in PostgreSQL: show the first N rows for each group?
So use dataloader as you are with cacheKeyFn, and let your loader function recognise whether it can perform the optimisation (e.g. after is null and all other arguments are the same). If it can, use a windowing query; otherwise fall back to unoptimised queries in parallel as you would normally.
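A minimal sketch of such a windowing query for the comments connection; the comment table and column names are assumptions:
SELECT *
FROM (
    SELECT c.*,
           ROW_NUMBER() OVER (PARTITION BY c.post_id ORDER BY c.created_at DESC) AS rn
    FROM comment c
    WHERE c.post_id IN (1, 2, 3)  -- the batched parent (post) ids from the loader
) ranked
WHERE rn <= 3;                    -- the shared "first" argument for the whole batch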
I was trying out a simple trigger on the Case object. There is a field Hiring_Manager__c (a lookup to User) on Case. On update or insert of a Case, this field has to be populated with the Case Owner's Manager. I created the trigger as follows. It is not bulkified, as I was just trying it out for a single record.
I could see the value getting populated correctly in the debug statements, but it is not updated on the record.
trigger HiringManagerupdate_case on Case (before insert, before update) {
    Case updatedCaseRec = [select id, ownerId, Hiring_Manager__c, CaseNumber from case where id in :Trigger.new];
    system.debug('Case Number' + updatedCaseRec.CaseNumber);
    system.debug('Manager before updation' + updatedCaseRec.Hiring_Manager__c);
    try {
        User caseOwner = [Select Id, Name, ManagerId, Manager.Email From User Where Id = :updatedCaseRec.ownerId];
        updatedCaseRec.Hiring_Manager__c = caseOwner.ManagerId;
    }
    Catch(DMLException de) {
        system.debug('Could not find Manager');
    }
    system.debug('Manager' + updatedCaseRec.Hiring_Manager__c);
}
I got a reply for this on the developer forum with code that works fine.
trigger HiringManagerupdate_case on Case (before insert, before update) {
    for (Case cas : Trigger.new) {
        try {
            User caseOwner = [Select Id, Name, ManagerId, Manager.Email From User Where Id = :cas.ownerId];
            cas.Hiring_Manager__c = caseOwner.ManagerId;
        }
        Catch(DMLException de) {
            system.debug('Could not find Manager');
        }
        system.debug('Manager' + cas.Hiring_Manager__c);
    }
}
I would like to know:
1) What is the difference in logic between the two approaches if I am trying to update only a single record? Mine populates the value in the debug statement but not on the screen.
2) Will the SELECT statement inside the loop (of the second approach) affect governor limits?
3) Do we need to explicitly call update for before triggers? My understanding is that the save and commit happen after the field update for before triggers, and hence the changes will be saved automatically without calling an update statement.
Thanks in advance.
Nisha
1) If you are trying to access the record(s) for the object which fired a before trigger, you must iterate over Trigger.new. You cannot query for it because the record(s) may not be in the database yet.
2) Yes, it could. The query executes for every Case record being inserted or updated. If you have 1,000 such Cases, the query executes 1,000 times (a bulkified sketch is shown after this list).
3) No, we do not need to call update for before triggers. Your understanding is correct.
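For reference, a minimal bulkified sketch of the same trigger, querying the owners once per batch instead of once per Case (just an illustration, using the field names from the question):
trigger HiringManagerupdate_case on Case (before insert, before update) {
    // Collect every owner id in the batch
    Set<Id> ownerIds = new Set<Id>();
    for (Case cas : Trigger.new) {
        ownerIds.add(cas.OwnerId);
    }
    // One query for the whole batch
    Map<Id, User> ownersById = new Map<Id, User>(
        [SELECT Id, ManagerId FROM User WHERE Id IN :ownerIds]
    );
    for (Case cas : Trigger.new) {
        User caseOwner = ownersById.get(cas.OwnerId);
        if (caseOwner != null) {
            // Before-trigger field assignments are saved automatically, no DML needed
            cas.Hiring_Manager__c = caseOwner.ManagerId;
        }
    }
}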
Hello fellow code lovers, I am attempting to learn Apex and am stuck on a problem. Since most of you here are avid code lovers and excited about problem solving, I figured I could run my problem by you.
I am attempting to create a trigger on an object called Book that does the following:
On Delete, all associated Chapters are also deleted
There is also an object named Chapter that has a lookup to Book.
Here is my attempt. This is my first ever attempt at Apex, so please be patient. Is anyone willing to dabble in this piece of code?
trigger DeleteChaptersTrigger on Book__c (before delete) {
    List<Book__c> book = Trigger.old;
    List<Chapter__c> chapters = new List<Chapter__c>();
    Set set = new Set();
    for (Book__c b :books){
        if ()
    }
}
You need to write every trigger with the consideration that it might be processing many records at any one time, so you need to bulkify your trigger code.
Here are the variables that are available on the Trigger object.
You want to get all the record ids that will be deleted. Use the keySet method on Trigger.oldMap to get this without looping and building your own collection. Then you can just delete the records returned from the query.
trigger DeleteChaptersTrigger on Book__c (before delete) {
    Set<Id> bookIds = Trigger.oldMap.keySet();
    delete [SELECT Id FROM Chapter__c WHERE Book__c IN :bookIds];
}
Unlike some other languages, Apex gets confused by the reuse of reserved words, such as set as a variable name. Apex is also case-insensitive.
Since you have full control, I recommend changing the way your custom objects are related to each other.
A chapter has no meaning/value without a book, so we want to change the relationship between the two.
Remove the lookup on the Chapter object and replace it with a master-detail relationship. When a master record gets deleted, Salesforce automatically deletes the related detail records. This is what you want, and without any coding.
I am using EF code first. Recently I had to replace the following code:
User user = userRepository.GetByEmail("some#email.com");
if (user == null)
{
    user = new User { Email = email, CreatedAt = DateTime.Now };
    userRepository.Add(user);
    unitOfWork.Commit();
}
with
Context.ExecuteSqlCommand(@"IF NOT EXISTS(SELECT 1 FROM Users WHERE Email = '{0}')
    INSERT INTO Users(Email, CreatedAt)
    VALUES ('email', GETDATE())");
The reason behind this is that it took EF a very long time to run the first piece of code when trying to add thousands of rows. By changing it to an ExecuteSqlCommand call, the time to handle that many rows decreased many times over.
The problem I am seeing now (only occurred twice so far) is the following message from the database: Transaction (Process ID 52) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
How would I go about resolving this? Most of my data access is done through EF, with a few exceptions like the one above. I have never seen a deadlock in my logs before, so I assume this has something to do with the query.
My questions are:
Is there a way to write the query using NOLOCK? How would that query look?
Is there a way to tell EF to use NOLOCK for certain queries?
What you actually want is to lock the table earlier, not to prevent locking. Locking is necessary to ensure that between the two statements in your command, some other process hasn't come along and inserted the same user. (Locking is always necessary when inserting data because the physical storage is being modified.)
Assuming that this is actually the command that is causing the deadlock, the following should resolve it, because the existence check asks for (and holds) an update lock up front:
Context.ExecuteSqlCommand(@"IF NOT EXISTS(SELECT 1 FROM Users WITH (UPDLOCK, HOLDLOCK) WHERE Email = '{0}')
    INSERT INTO Users(Email, CreatedAt)
    VALUES ('email', GETDATE())");