I would like to ask whether there is a relationship between workflows and importing files.
For example, a workflow executes when a record is saved and applies to updated records. The action updates a certain field on the target module when a specific field is changed: say, Field A is set to YES when Field B is changed.
It works well when I manually save the record after updating Field B.
How about during importing? Will the workflow still fire, provided that all conditions are met?
I hope you can help me with this. I need to update our TS if hard coding is needed to support this.
Actually, I have already posted this on the Sugar forums. :D
Thanks so much!
Workflows are triggered with a before_save logic hook.
Logic hooks are triggered any time the save() function is called.
Creating records via the import process calls the save() function.
So, yes, importing records will trigger your workflows.
The short answer is yes. There is no long answer.
Sugar uses the SugarBean class to save records, and SugarBean handles workflow. So whether you save through the UI, import records, or use web services, everything goes through SugarBean and therefore runs workflow.
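If hard coding does turn out to be necessary for your TS, the same mechanism the workflow engine relies on is available directly: a custom before_save logic hook runs on manual saves, imports, and web-service saves alike. A minimal sketch, assuming hypothetical custom fields field_a_c and field_b_c on Accounts (all names here are illustrative):
// custom/modules/Accounts/logic_hooks.php
$hook_version = 1;
$hook_array['before_save'][] = array(
    1,
    'Set Field A when Field B changes',
    'custom/modules/Accounts/SetFieldA.php',
    'SetFieldA',
    'setFieldA',
);
// custom/modules/Accounts/SetFieldA.php
class SetFieldA
{
    public function setFieldA($bean, $event, $arguments)
    {
        // fetched_row holds the values as loaded from the database, so
        // comparing it against the bean detects a change to Field B.
        if (isset($bean->fetched_row['field_b_c'])
                && $bean->fetched_row['field_b_c'] != $bean->field_b_c) {
            $bean->field_a_c = 'YES';
        }
    }
}
Because this runs inside save(), it fires for imports in exactly the same way the workflow module does.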
I have some functionality I need to implement in Dynamics CRM 2016. I need to scan all records for a custom entity and update any record where a certain condition is true. This is a bit too complex to do via a workflow (I can't change the owner via a workflow step), so I'm thinking perhaps I could perform this logic in a custom plugin. I don't know if it makes sense to call this plugin from a workflow in CRM, though, as I need to perform the logic on all records for this particular entity, and I need the logic to run regularly, i.e. daily/weekly. What's the best way to do this?
I figured this out. It was actually possible to do entirely within CRM. What I was trying to do was the following.
I have a custom entity called announcement, and it has a custom field called embargo date.
I needed to check periodically whether the embargo date had been reached, meaning: is the embargo date today? If so, I needed to change the owner of this entity.
If the embargo date has not yet been reached, then I need to wait until it is, checking the date again every day until it is reached.
I managed this with a workflow. I added my check conditions; if they were true, I assigned the entity to another user.
If my conditions weren't true, I added a wait step to wait for one day, then another step to Start Workflow, where I called the current workflow recursively. Meaning: if the conditions aren't true, have the workflow call itself again.
Is it possible to add a Ticket Report to the Available Reports via a plugin, so that after installing the plugin it becomes automatically available? Or would one have to manually save a custom query with the intended columns and filters?
If it is possible to do this via python code, what is the Trac interface to be implemented?
You can insert the reports through an implementation of IEnvironmentSetupParticipant. Example usage in env.py. In env.py the reports are inserted in environment_created; you'll want to insert your reports in upgrade_environment. Example code for inserting the report can be found in report.py.
needs_upgrade determines whether upgrade_environment is called on Environment startup, so you'll need to provide the logic that determines whether your reports are already present, returning False if they are present and True if they are not. If True is returned, then upgrade_environment will get called (for your case environment_created can just be pass, or it can directly call upgrade_environment, the latter being a minor optimization discussed in #8172).
See the Database Schema for information on the report table.
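A minimal sketch of that setup, with a hypothetical report title and query (adjust the SQL and description to your needs):
from trac.core import Component, implements
from trac.env import IEnvironmentSetupParticipant

REPORT_TITLE = 'My Ticket Report'  # hypothetical report name
REPORT_QUERY = "SELECT id, summary, owner, status FROM ticket WHERE status <> 'closed'"

class ReportInstaller(Component):
    implements(IEnvironmentSetupParticipant)

    def environment_created(self):
        # Delegate to upgrade_environment so the insert logic lives in one place.
        self.upgrade_environment()

    def environment_needs_upgrade(self, db=None):
        # Trac calls upgrade_environment on startup only if this returns True,
        # so return True exactly when our report is missing.
        for _ in self.env.db_query("SELECT id FROM report WHERE title=%s",
                                   (REPORT_TITLE,)):
            return False
        return True

    def upgrade_environment(self, db=None):
        with self.env.db_transaction as db:
            db("INSERT INTO report (title, query, description) VALUES (%s, %s, %s)",
               (REPORT_TITLE, REPORT_QUERY, 'Installed automatically.'))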
In Trac 1.2 (not yet released) we've tried to make it easier to work with the database by adding methods to the DatabaseManager class. DatabaseManager for Trac 1.1.6 includes methods set_database_version and get_database_version. This is useful for reducing the amount of code needed in IEnvironmentSetupParticipant.needs_upgrade when checking whether the database tables need to be upgraded (even simpler would be to just call DatabaseManager.needs_upgrade). There's also an insert_into_tables method that you could use to insert reports.
That said, I'm not sure you need to put an entry for your plugin in the system table using set_database_version. You can probably just query the report table to check whether your report is present, and use that check to return a boolean from IEnvironmentSetupParticipant.needs_upgrade, which will determine whether IEnvironmentSetupParticipant.upgrade_environment gets called.
If you are developing for Trac 1.0.x, you can copy code from the DatabaseManager class in Trac 1.1.6. Another approach can be seen with CodeReviewerPlugin, for which I made a compat.py module that adds the methods I needed. The advantage of the compat.py approach is that the methods can be copied from the DatabaseManager class without polluting the main modules of your codebase. In the future, when your plugin drops support for Trac < 1.2, you can just delete the compat.py module and modify the imports in your plugin code, without having to change any of your primary plugin logic.
Is there a way to detect that a record being inserted is the result of a clone operation in a trigger?
As part of a managed package, I'd like to clear out some of the custom fields when Opportunity and OpportunityLineItem records are cloned.
Or is a trigger not the correct place to prevent certain fields being cloned?
I had considered creating dedicated code to invoke sObject.Clone() and excluding the fields that aren't required. This doesn't seem like an ideal solution for a managed package as it would also exclude any other custom fields on Opportunity.
In the Winter '16 release, Apex gained two new methods that let you detect whether a record is being cloned and from what source record Id. You can use these in your triggers.
isClone() - Returns true if an entity is cloned from something, even if the entity hasn’t been saved.
getCloneSourceId() - Returns the ID of the entity from which an object was cloned.
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_methods_system_sobject.htm#apex_System_SObject_getCloneSourceId
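For example, a minimal before insert trigger sketch (the custom field names here are placeholders for whatever your package needs to clear):
trigger ClearClonedFields on Opportunity (before insert) {
    for (Opportunity opp : Trigger.new) {
        // isClone() is true only when this record was created by a clone
        if (opp.isClone()) {
            System.debug('Cloned from ' + opp.getCloneSourceId());
            opp.Internal_Notes__c = null;     // hypothetical fields to blank out
            opp.Discount_Reason__c = null;
        }
    }
}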
One approach, albeit kind of kludgy, would be to create a new field, say original_id__c, which gets populated when blank with the Salesforce Id of the record by a workflow (or trigger, depending on your preference for the order of execution). For new records this field will match the standard Salesforce Id; for cloned records it won't. There are a number of variations on when, how, and what to populate the field with, but the key is to give yourself your own hook to differentiate new and cloned records, as in the sketch below.
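A trigger-based variant of that idea might look like this, where Original_Id__c is a hypothetical custom text field:
trigger StampOriginalId on Opportunity (after insert) {
    List<Opportunity> stamps = new List<Opportunity>();
    for (Opportunity opp : Trigger.new) {
        // Blank means a genuinely new record; a clone carries the source's
        // value, which will never match the record's own new Id.
        if (opp.Original_Id__c == null) {
            stamps.add(new Opportunity(Id = opp.Id, Original_Id__c = opp.Id));
        }
    }
    if (!stamps.isEmpty()) {
        update stamps;
    }
}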
If you're only looking to control the experience for the end user (as opposed to a developer extending your managed package), you can override the standard Clone button with a custom page that clears the values for a subset of fields using URL hacking. There are some caveats, namely that each field must be editable and visible on the page layout for the user who clicked the Clone button. As of this writing I don't believe you can package standard button overrides, but the list of what's possible changes with every release.
You cannot detect a clone operation inside the trigger; it is treated as an "Insert" operation.
You can still use dedicated code to invoke sObject.Clone() and exclude the fields that aren't required. You can ensure that you cover all fields, including custom fields added by others, by using the sObject describe information to get hold of every field on the object and then blanking out the ones that are not required.
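A rough sketch of that approach, using the describe field map and a placeholder exclusion list (field API names in lowercase):
public static Opportunity cloneWithExclusions(Opportunity source) {
    // Hypothetical fields that should not carry over to the copy.
    Set<String> excluded = new Set<String>{'internal_notes__c', 'discount_reason__c'};
    Opportunity copy = source.clone(false, true, false, false);
    Map<String, Schema.SObjectField> fieldMap =
        Schema.SObjectType.Opportunity.fields.getMap();
    for (String fieldName : excluded) {
        Schema.DescribeFieldResult dfr = fieldMap.get(fieldName).getDescribe();
        if (dfr.isCreateable()) {
            copy.put(fieldName, null);  // blank out the excluded field
        }
    }
    return copy;
}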
Hope this makes sense!
Anup
In our production org, we have a system of uploading sales data into Salesforce using the command-line Data Loader. This data is loaded into a temporary object, Temp. We have created a formula field (which combines three fields) to form a unique key. The purpose of the object is to reduce the effort users spend creating the key manually.
There is an after insert trigger on Temp that calls an asynchronous method, which upserts the data into another object, SalesData, using the key. The insert/update trigger on SalesData checks the various fields and creates/updates records in another object, SalesRecords. After the insert/update is complete, all the records in the temporary object Temp are deleted. The SalesRecords object does not have any trigger on it and is a child of another object, Sales. The Sales object has some rollup fields that sum up fields from the SalesRecords object.
Lately, we have been getting the error below for some of the records being updated.
UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record
Please provide some pointers to resolve the issue.
This could be caused either by conflicting DML operations across the various trigger executions or by recursive trigger execution. I would assume the async executions cause multiple subsequent updates on the same records, probably on the SalesRecords object. I would recommend trying to simplify the process to avoid too many related trigger executions.
I'm a little surprised you were able to get this to work in the first place. After triggers should be used with caution and only when before triggers can't do the job. One reason is that in before triggers you don't need to perform additional DML to make changes to records: you simply change the values, and the insert/update commit happens automatically. But recursive trigger firing is the main problem with after triggers.
One quick way to avoid trigger re-entry is to use a public static Boolean in a class that states whether you're already in this trigger from the same thread of execution.
Something like:
public class TriggerGuard {  // hypothetical class holding the flag
    public static Boolean isExecuting = false;
}
Once set to true, any trigger code that is a re-fire can be avoided with:
if (TriggerGuard.isExecuting == false) {
    TriggerGuard.isExecuting = true;
    // Perform trigger logic
    // ...
}
Additionally, since the order of trigger execution cannot be determined up front, you might be seeing an issue with deletions or other data changes that depend on other parts of your flow finishing first.
Also, without knowing the details of your custom three-part key, I'd wonder if there's a problem there too, such as whether it's truly unique or not. Case insensitivity is a common mistake, and it's the reason there are both 15- and 18-character Ids in Salesforce: when people exported to Excel (a case-insensitive environment) and did VLOOKUPs, they would occasionally find the wrong record. The 3-character calculated suffix was added to disambiguate Ids in case-insensitive environments.
Googling for this same error led me to this post:
http://boards.developerforce.com/t5/General-Development/Unable-to-obtain-exclusive-access-to-this-record/td-p/345319
It points out some common causes for this to happen:
Sharing Rules are being calculated.
A picklist value has been replaced and replacement is in progress.
A custom index creation/removal is in progress.
Most unlikely one: someone else is editing the same record that you are trying to access at the same time.
Posting here in case somebody else needs it.
I got this error multiple times today. It turned out one of our vendors was updating their installed package in the same org during that time. All kinds of other things were going wrong as well: some object validation exceptions were being thrown on DML, without any error message content.
Resolution
The error is shown when a field update, such as a roll-up summary field, is being attempted on a parent object that already had a field update causing the roll-up summary field to recalculate. This can also occur if a trigger or another Apex job is running on the master object and is also attempting to do an update.
You can either reduce the batch size and try again or create separate smaller files to be imported if this issue occurs.
I remember seeing a trait that automatically adds the created and updated dates when using either Lift's Record or Mapper ORMs.
The question is, is there a similar thing for Squeryl to automatically set the date/time the record was inserted and, less importantly, the last date/time it was updated?
If not, is it possible to make one?
There is no existing trait you can mix in to do it, but if you are using 0.9.5-SNAPSHOT you can create your own using Squeryl's lifecycle callbacks. Take a look at this message for more info: https://groups.google.com/forum/#!searchin/squeryl/lifecycle/squeryl/8FY7n0DN5fs/O2O8OhqVPSUJ. If you run into any trouble, post a message to the group and we'll do what we can to help you out.
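For reference, a rough sketch of what this could look like with those lifecycle callbacks; the trait, entity, and field names are invented here, and the callback DSL shown is from memory of the 0.9.5-SNAPSHOT docs, so double-check it against the thread above:
import java.sql.Timestamp
import org.squeryl.{KeyedEntity, Schema}
import org.squeryl.PrimitiveTypeMode._

trait Timestamped {
  var createdAt = new Timestamp(System.currentTimeMillis)
  var updatedAt = new Timestamp(System.currentTimeMillis)
}

class Item(var name: String) extends KeyedEntity[Long] with Timestamped {
  val id: Long = 0
}

object AppSchema extends Schema {
  val items = table[Item]

  // Stamp both dates on insert; refresh only updatedAt on update.
  override def callbacks = Seq(
    beforeInsert(items) call { (i: Item) =>
      val now = new Timestamp(System.currentTimeMillis)
      i.createdAt = now
      i.updatedAt = now
    },
    beforeUpdate(items) call { (i: Item) =>
      i.updatedAt = new Timestamp(System.currentTimeMillis)
    }
  )
}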