Run an additional block of code depending on whether an Apex trigger is run on an insert vs. an update?

As stated in the title, is it possible to detect the "context" in which a trigger is being run and use that in my trigger code itself?
i.e. I want to run a small piece of extra code in the trigger if it's a newly inserted record, but not if it's an existing one being updated.

Yes, you can use trigger context variables to examine the context in which the trigger is running; the linked documentation includes a variety of examples. For your use case, you can use an if statement like this:
if (Trigger.isInsert) {
    // do something on insert (and not on update)
}
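As a fuller sketch, a trigger that branches on both contexts might look like this (the object and trigger names are placeholders, not from the question):

trigger AccountContextDemo on Account (after insert, after update) {
    if (Trigger.isInsert) {
        // runs only for newly inserted records
        System.debug('Inserted ' + Trigger.new.size() + ' records');
    } else if (Trigger.isUpdate) {
        // runs only for updates; Trigger.oldMap holds the prior field values
        System.debug('Updated ' + Trigger.new.size() + ' records');
    }
}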

Related

A way to know if a Firebird table's data has changed without using a trigger

Is there a way of knowing that a table's data has changed (insert/update/delete) without using a trigger on that table? Perhaps a global trigger to indicate changes on a table?
If you want notification of changes, you will need to add a trigger yourself. Firebird 3 added a feature to simplify identifying changed rows: the pseudo-column RDB$RECORD_VERSION, which contains the number of the transaction that created the current version of a row.
Alternatively, you could try to use the trace facility to monitor for changes, but that is not an out-of-the-box solution: you will need to write the logic to parse the trace output yourself (and take things like transaction commit/rollback into account).

Can I make a trigger in Caché for a mapped global?

I need to create a trigger function that is called whenever I insert or delete data in my table.
Internally, Caché keeps the data in a global.
Going the other way, I can add data directly to the global and view it in the table.
The trigger function works fine when I insert data using an SQL statement (INSERT INTO).
But it is not called when I add data directly to the global.
So how can I make triggers fire when I add data to the global directly, instead of adding it with a query (INSERT INTO table)?
If you use the class to add data to the global, then you can use the callback methods. For example, %OnAfterSave does what you want.
On the other hand, if you put data directly into the global, then you will need some way to track when data is added. You can do this by writing your own agent or by doing what is advised in this post: How can i make copy of a global automatically in my local system?
(This is the link referenced in that answer: http://docs.intersystems.com/cache20141/csp/docbook/DocBook.UI.Page.cls?KEY=GCDI_journal#GCDI_journal_util_ZJRNFILT)

Can a ClearCase trigger be suppressed by another ClearCase trigger?

I have a ClearCase trigger that runs a script after the checkin operation has been performed.
It works when a user checks in a new element version or adds a new element to source control.
When a file is deleted, however, I do not want the trigger to fire (or at least I don't want the script associated with it to run), but I know it will, because after an element is removed, the folder is inevitably checked in.
Is there a way for an rmelem operation trigger to somehow suppress the checkin operation trigger?
You might do that by:
defining a preop trigger on rmelem which sets a flag (like a file written somewhere accessible by any client)
modifying your postop trigger on checkin so that, if that file exists, it deletes the file and does not execute the rest of the trigger.
But my point is: as far as I know, those triggers are independent of one another, so you need to come up with an external coordination mechanism in order for one trigger to influence another.
You could also play with environment variables (if a certain EV is set, the postop trigger unsets it and does not execute itself), but I am not sure whether you can set and persist an EV across different executions of different triggers.
I am not sure whether the trigger has to run for all element types.
You can distinguish in your script whether the element is a directory or a file element using the env var CLEARCASE_ELTYPE. Maybe that helps?
Another point is the env var CLEARCASE_PPID - the fine manual says:
You can use the CLEARCASE_PPID environment variable to help synchronize multiple firings ...

How triggers work internally in SQL Server

Please correct me if I am wrong.
What I know about triggers is that they are triggered by events (insert, update, delete), so we can run a stored procedure etc. in the trigger.
This gives the application good responsiveness, because the query the user interacts with is quite small, and the other, longer-running work is taken care of by the server internally as a separate task.
But I do not know how triggers are handled inside the server. What I exactly want to know is what would happen in a scenario like the one below.
Take an after-insert trigger, and suppose the trigger is executing a long-running stored procedure. In the middle of that trigger, another insert can occur. What will happen to that second trigger? If possible, can I make that second trigger ignore itself?
marc_s has given the correct answer. I will copy it for the sake of completeness.
TRIGGERS ARE SYNCHRONOUS
If you want asynchronous functionality, go for a Service Broker implementation.
Triggers are triggered by events - and then they are executed - right now. Since you cannot control when and how often they are triggered, you should keep the processing in those triggers to an absolute minimum. I always try to make - at most - an entry into another table (an "audit" table), or possibly put a "marker" row into a "command" table. But the actual processing of that info - running stored procedures etc. - should be left to an outside job. Don't do extensive processing in a trigger! This will reliably KILL your performance/responsiveness.

Salesforce.com: UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record

In our production org, we have a system for uploading sales data into Salesforce using the command-line Data Loader. This data is loaded into a temporary object, Temp. We have created a formula field (combining three fields) to form a unique key; the purpose of the object is to save users the effort of creating the key manually.
There is an after insert trigger on Temp which calls an asynchronous method that upserts the data into another object, SalesData, using the key. The insert/update trigger on SalesData checks the various fields and creates/updates records in another object, SalesRecords. After the insert/update is complete, all the records in the temporary object Temp are deleted. The SalesRecords object does not have a trigger on it and is a child of another object, Sales. The Sales object has some rollup fields which sum up fields from the SalesRecords object.
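For illustration only, the hand-off described above might look roughly like this; every object, class, and method name here is an assumption, not taken from the actual org:

trigger TempAfterInsert on Temp__c (after insert) {
    // hand the new rows off to an asynchronous upsert into SalesData
    SalesDataUpserter.upsertFromTemp(Trigger.newMap.keySet());
}

public class SalesDataUpserter {
    @future
    public static void upsertFromTemp(Set<Id> tempIds) {
        // hypothetical handler: re-query the Temp rows, build SalesData
        // records keyed on the formula field, and upsert them
    }
}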
Lately, we have been getting the error below for some of the records being updated.
UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record
Please provide some pointers to resolve the issue.
This could be caused either by conflicting DML operations across the various trigger executions or by some recursive trigger execution. I would assume that the async executions cause multiple subsequent updates on the same records, probably on the SalesRecords object. I would recommend trying to simplify the process to avoid too many related trigger executions.
I'm a little surprised you were able to get this to work in the first place. After triggers should be used with caution, and only when before triggers can't do the job. One reason is that in before triggers you don't need additional DML to change records: you simply change the values, and the insert/update commit happens automatically. But recursive trigger firing is the main problem with after triggers.
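For example, a before trigger can change field values with no explicit DML at all (a minimal sketch; the object and field assignment are just placeholders):

trigger AccountBefore on Account (before insert, before update) {
    for (Account a : Trigger.new) {
        // assigning the field is enough; the change is saved with the record
        a.Description = 'Touched by trigger';
    }
}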
One quick way to avoid trigger re-entry is to use a public static Boolean in a class that states whether you're already in this trigger from the same thread of execution.
Something like:
public static Boolean isExecuting = false;
Once set to true, any re-fired trigger code can be skipped with this check (TriggerGuard stands in for whatever class holds the flag, since Class is a reserved word in Apex):
if (TriggerGuard.isExecuting == false)
{
    TriggerGuard.isExecuting = true;
    // Perform trigger logic
    // ...
}
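Put together, a minimal sketch of the pattern might look like this (class and trigger names are illustrative):

public class TriggerGuard {
    // shared flag for the current transaction / thread of execution
    public static Boolean isExecuting = false;
}

trigger AccountGuarded on Account (after insert, after update) {
    if (TriggerGuard.isExecuting == false) {
        TriggerGuard.isExecuting = true;
        // perform the trigger logic; re-fires in this transaction skip it
    }
}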
Additionally, since the order of trigger execution cannot be determined up front, you might be seeing an issue with deletions or other data changes that depend on other parts of your flow finishing first.
Also, without knowing the details of your custom three-part key, I'd wonder if there's a problem there too, such as whether it's truly unique. Case-insensitive matching of case-sensitive Ids is a common mistake, and it's the reason there are both 15- and 18-character Ids in Salesforce. For example, when people export to Excel (a case-insensitive environment) and do VLOOKUPs, they would occasionally find the wrong record. The 3-character calculated suffix was added to disambiguate Ids in case-insensitive environments.
Googling for this same error led me to this post:
http://boards.developerforce.com/t5/General-Development/Unable-to-obtain-exclusive-access-to-this-record/td-p/345319
Which points out some common causes for this to happen:
Sharing Rules are being calculated.
A picklist value has been replaced and replacement is in progress.
A custom index creation/removal is in progress.
The most unlikely one: someone else is already editing the same record that you are trying to access.
Posting here in case somebody else needs it.
I got this error multiple times today. It turned out one of our vendors was updating their installed package in the same org during that time. All kinds of other things were going wrong as well - some object validation exceptions were being thrown on DML, without any error message content.
Resolution
The error is shown when a field update, such as a roll-up summary calculation, is attempted on a parent object that already has a pending field update causing the roll-up summary field to recalculate. It can also occur if a trigger or another Apex job is running on the master object and is also attempting an update.
If this issue occurs, you can either reduce the batch size and try again, or split the import into separate, smaller files.