Salesforce: Trigger that fires off a Workflow rule has stopped working - any ideas?

So in one part of our customised Salesforce system, the following happens:
a trigger changes the value of a picklist on a custom object
a Workflow rule detects that change and fires off an email.
Since about the 4th of December though, it seems to have stopped working.
edit: The Debug Logs show that the trigger is firing and changing the value of the picklist, but no Workflow Rules are evaluated.
The workflow rule is pretty simple, so I don't really understand what's preventing it. The details of the rule are:
Operates on a custom object.
Evaluation Criteria: When a record is created, or when a record is edited and did not previously meet the rule criteria
Rule Criteria: ISPICKVAL(Status__c, 'Not Started')
Active: Yes
Immediate Workflow Actions: an email alert
Edit: The Rule does fire if I manually update the object to set the appropriate status. But it isn't firing when a trigger changes the status.
Edit: Did something change on Salesforce around December 4th 2009? That seems to be when this stopped working ...
Any ideas?

If you had said "the trigger does not fire the workflow, even though a manual change via the UI does", I would have responded something like...
Absolutely. That's how it is designed.
Salesforce do not allow anything automated to invoke anything automated (i.e. you cannot start a Workflow rule from a trigger or from another Workflow rule).
Given that you say this stopped working earlier in the month, I am frankly astonished! We wanted to achieve something like this about 10 months ago, and Salesforce told us it could not be done; they like to keep tight control over processes that could potentially run away and consume a lot of CPU (because of the multi-tenanted nature of the offering), hence the stringent governor limits...
This may have changed recently, of course; we built workarounds to get around the restriction...

To answer my own question ... I eventually found out what this was.
The Salesforce Spring '09 Workflow Rule and Roll-Up Summary Field Evaluations update was rolled out to all orgs at the start of Dec '09, and changed certain Workflow behaviours.
The update improves the accuracy of your data and prevents the reevaluation of workflow rules in the event of a recursion.
Our particular problem was that we needed Workflow to be evaluated twice on a single object after the initial action: we had a series of changes to a status field that needed to kick off different things. After the Spring '09 update, Workflow is only evaluated once for an action on an object.
So, it did work, but then the platform changed, and it didn't work anymore. Time to write some code.

Related

MarkLogic DLS Document Checkout Timeout

Does MarkLogic DLS offer a similar file versioning experience to subversion?
Under Subversion, once a file (document) has been locked, others cannot update it until the file is committed (checked in) or the lock is released.
In MarkLogic Library Services (DLS), however, once a document has been checked out, others can still call dls:document-checkout-update-checkin to update it and release the lock. Does this mean the developer is expected to use those dls functions to implement the lock and unlock mechanism?
I tried using the timeout parameter of dls:document-checkout, but the document seems to remain in the checked-out state forever. I do see that parameter when I call dls:document-checkout-status.
Does this mean the developer should compare the current server timestamp with the initial checkout timestamp and the timeout duration to determine whether the file is still locked?
If so, I will need to write some XQuery programs and set up a scheduled task in MarkLogic to clean up stale checkouts daily. Is my understanding correct?
Per https://docs.marklogic.com/guide/app-dev/dls#id_56448, I believe the timeout is not enforced automatically - i.e. there's no background process in MarkLogic that is periodically inspecting documents to see if they should be automatically checked back in or un-checked out. The timeout appears to be meant to be used by a developer to apply their own logic with it - e.g. allowing a UI to state that "Jane checked this document out and only intended to keep it for 10 minutes, but that was 2 hours ago - would you like to break her checkout?"

Tracking an in-progress Cadence workflow from the client

Let's say we need to generate an order after the user finalizes his/her cart.
These are our steps to generate an order:
generate an order in the pending state (order microservice)
authorize the user's credit (accounting microservice)
set the status of the cart to closed (cart microservice)
approve the order (order microservice)
To do this we simply create a Cadence workflow that calls an activity for each step.
Problem 1: How can the client detect that order creation is in progress for that cart if the user opens the cart again or refreshes the page?
(Note: assume our workflow has not been executed by a worker yet.)
My solution for problem 1: create the order and change its status to pending before running the workflow, so that on subsequent requests the client knows the order is in the pending state. But what happens if order creation succeeds and starting the workflow fails? The two steps (order creation and workflow start) are not transactional.
If this is also your solution and you accept its risk, please tell me. I'm new to Cadence.
Problem 2: How do we inform the client once the workflow has finished and the order is ready?
Any solution or article or help, please?
Problem 1: There are multiple solutions to consider:
1.1 Add a step in the workflow that changes the order to the pending state, before calling the order microservice, instead of doing it outside the workflow. This saves you from the consistency issue; you can add retries in the workflow to make sure the state is eventually consistent.
1.2 Add a query method to expose the workflow state; the user/UI can then make a queryWorkflow call to retrieve the workflow state and see the order status (see the sketch after this comparison).
1.3 Put the state into a SearchAttribute of the workflow; the user/UI can then make a describeWorkflow call to get the state.
1.4 After https://github.com/uber/cadence/issues/3729 is implemented, you can use a memo instead of a SearchAttribute, as in 1.3.
Comparison: 1.1 is the choice if you consider order storage the source of truth for order state; 1.2~1.4 make the workflow the source of truth. It really depends on how you want to design the system architecture.
1.2 may not be a good choice if the user/UI expects low latency, because a query call may take a few seconds to return.
1.3~1.4 are much faster: they only require the Cadence server to make a DB call to get the workflow state.
1.3 has another benefit if you have advanced visibility enabled with an ElasticSearch+Kafka setup: you can search/filter/count workflows by order state. The limitation of 1.3 is that you should only put very small data like a string or integer, not a blob of state.
The benefit of 1.4 is that you can put a blob of state.
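For illustration, here is a minimal sketch of 1.2 and 1.3 (with 1.1-style retries) using the Cadence Java client. OrderWorkflow, OrderActivities, the "order-task-list" task list, the status strings and the "OrderState" search attribute are all made-up names for this example; only the com.uber.cadence annotations and helpers come from the client library, and the exact API may differ between client versions.

```java
// Sketch only: hypothetical order-checkout workflow exposing its state.
import com.uber.cadence.activity.ActivityOptions;
import com.uber.cadence.common.RetryOptions;
import com.uber.cadence.workflow.QueryMethod;
import com.uber.cadence.workflow.Workflow;
import com.uber.cadence.workflow.WorkflowMethod;

import java.time.Duration;
import java.util.Collections;

public interface OrderWorkflow {
    @WorkflowMethod(executionStartToCloseTimeoutSeconds = 3600, taskList = "order-task-list")
    void checkout(String cartId);

    // 1.2: the UI can call queryWorkflow with this query to read the current status.
    @QueryMethod
    String getOrderStatus();
}

interface OrderActivities {
    void createPendingOrder(String cartId);
    void authorizeCredit(String cartId);
    void closeCart(String cartId);
    void approveOrder(String cartId);
}

class OrderWorkflowImpl implements OrderWorkflow {
    private String status = "NOT_STARTED";

    private final OrderActivities activities = Workflow.newActivityStub(
            OrderActivities.class,
            new ActivityOptions.Builder()
                    .setScheduleToCloseTimeout(Duration.ofMinutes(5))
                    // 1.1: retries inside the workflow keep state eventually consistent.
                    .setRetryOptions(new RetryOptions.Builder()
                            .setInitialInterval(Duration.ofSeconds(1))
                            .setMaximumAttempts(5)
                            .build())
                    .build());

    @Override
    public void checkout(String cartId) {
        status = "PENDING";
        // 1.3 (optional): publish the status as a search attribute so the UI can
        // read it via describeWorkflowExecution or search on it. Assumes an
        // "OrderState" custom search attribute has been registered in the cluster.
        Workflow.upsertSearchAttributes(Collections.singletonMap("OrderState", status));

        activities.createPendingOrder(cartId);   // order microservice
        activities.authorizeCredit(cartId);      // accounting microservice
        activities.closeCart(cartId);            // cart microservice
        activities.approveOrder(cartId);         // order microservice

        status = "APPROVED";
        Workflow.upsertSearchAttributes(Collections.singletonMap("OrderState", status));
    }

    @Override
    public String getOrderStatus() {
        return status;
    }
}
```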
To prevent the user from finalizing a cart multiple times:
When starting the workflow, use a stable workflowID associated with the cart, so that you can call describeWorkflow before allowing them to finalize/check out a cart. The workflow is persisted once the StartWorkflow request is accepted.
If there is an active workflow (not failed/completed/timed out), it means the cart is pending. For example, if a cart has a UUID, you can use a prefix plus that UUID as the workflowID. Cadence guarantees workflowID uniqueness, so there is no race condition of starting two workflows for the same cart. If a checkout fails, the user can submit the checkout workflow again (see the sketch below).
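A client-side sketch of the stable-workflowID idea, assuming the hypothetical OrderWorkflow interface above and a WorkflowClient that has already been built for your domain; "order-checkout-" is an arbitrary prefix, and exception names may differ slightly between client versions.

```java
// Sketch only: start the checkout workflow with a stable, cart-derived workflow ID.
import com.uber.cadence.client.DuplicateWorkflowException;
import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.client.WorkflowOptions;

import java.time.Duration;

public class CheckoutStarter {
    private final WorkflowClient client;

    public CheckoutStarter(WorkflowClient client) {
        this.client = client;
    }

    public void startCheckout(String cartUuid) {
        WorkflowOptions options = new WorkflowOptions.Builder()
                .setTaskList("order-task-list")
                .setExecutionStartToCloseTimeout(Duration.ofHours(1))
                // Stable ID: prefix + cart UUID. Cadence allows at most one open
                // workflow per ID, so a double-submit cannot start a second checkout.
                .setWorkflowId("order-checkout-" + cartUuid)
                .build();

        OrderWorkflow workflow = client.newWorkflowStub(OrderWorkflow.class, options);
        try {
            WorkflowClient.start(workflow::checkout, cartUuid);
        } catch (DuplicateWorkflowException e) {
            // A checkout workflow for this cart is already running:
            // treat the cart as pending instead of starting another workflow.
        }
    }
}
```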
Problem 2: It depends on what you mean by "inform". The term sounds like an asynchronous notification. If that's the case, you can have another activity send the notification to another microservice, or send a signal to another workflow that needs the notification (see the sketch below).
If you mean something synchronous, like showing it on a web UI, then it can be solved the same way as the solutions I mentioned for problem 1.
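For the asynchronous case, a sketch of what the final notification step could look like, reusing the hypothetical setup above; NotificationActivities and notifyOrderReady are invented names, and how the activity delivers the notification (event, API call, or a signal to another workflow) is up to you.

```java
// Sketch only: a final activity that tells another service the order is ready.
import com.uber.cadence.activity.ActivityOptions;
import com.uber.cadence.workflow.Workflow;

import java.time.Duration;

interface NotificationActivities {
    // e.g. publish an event or call the notification microservice's API
    void notifyOrderReady(String cartId);
}

// Fragment of the workflow implementation: the stub and the last step of checkout().
class NotificationStep {
    private final NotificationActivities notifications = Workflow.newActivityStub(
            NotificationActivities.class,
            new ActivityOptions.Builder()
                    .setScheduleToCloseTimeout(Duration.ofMinutes(1))
                    .build());

    void notifyWhenDone(String cartId) {
        // Called after approveOrder succeeds, as the workflow's final step.
        notifications.notifyOrderReady(cartId);
    }
}
```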

Dynamics CRM workflow: avoid update of record

In a CRM 2016 Online real-time workflow, is it possible to prevent a record from being updated? In particular, I created a real-time workflow of the type "before record status is updated", and my objective is to prevent an opportunity from being activated if the value of a field on the opportunity is "yes".
Is this behaviour achievable with a workflow, or do I need a plugin?
Just for chuckles and giggles: as you said it is a real-time workflow, you could use a custom workflow activity to throw an error; that would prevent record creation or update, since real-time workflows are transactional with the create/update operation.
Recommended: Synchronous plugins were put in place exactly for this very reason, to perform business validations and complex business operations.
To use the system as intended, use a plugin.

Dynamics CRM workflow that triggers on record field change also triggers on record create

I'm guessing that because the field is going from "doesn't exist"/"has no value" to "exists and has a value", this is triggering the change event. Is there an easy way around this? I am using Dynamics CRM 2011 OnPrem.
Your description does not match your title very closely for this question (no offence). Going on your description, the answer would depend entirely on which field or fields you selected for the change event.
In order to understand this better you have to realize that both events fire server-side only when the entity is saved. The On Create event fires first and, if configured, a workflow starts. However, the workflow runs asynchronously, so any changes it makes may not trigger an On Change event right away. As part of the save process, the built-in field “Created On” is populated, and conceivably a second On Change event could fire. (I don’t have a trigger on this field so I am not entirely sure.)
It may be that you clicked the “select all” checkbox when selecting fields to monitor for change. This would cause several “auto-populating” system fields to be monitored and could cause workflows to fire at unanticipated times.
The way I look at it, every CRM configuration option has a “cost”. In this case, if you simply select all, you are accepting a high cost. Conversely, if you thoughtfully go through each field and consider the ramifications, you can keep the cost low by selecting only those fields necessary to meet the requirements. Your results will be much more predictable as well.
By the way, when I mentioned “cost” earlier, I loosely define that as the time and effort that goes into troubleshooting and supporting any given piece of code or configuration selection.

MS CRM recursive workflow and performance

I’m about to write a workflow in CRM that calls itself every day. This is a recursive workflow.
It will run on half a million entities each day and deactivate a record if it has not been updated in the past 3 days.
I’m worried about performance. Has anyone else done this?
I haven't personally implemented anything like this, but that's 500,000 records that are floating around in the DB that the async service has to keep track of, which is going to tax your hardware. In addition, CRM keeps track of recursive workflow instances. I don't have the exact specs in front of me, but if a workflow calls itself a set number of times within a certain timeframe, CRM will kill the workflow.
Could you just write a console app that asks the Crm Service for records that haven't been updated in three days, and then deactivate them? Run it as a scheduled task once a day, and then your CRM system doesn't have the burden of keeping track of all those running workflow instances.
EDIT: Ah, I see now you might have been thinking of one workflow that runs on all the records as opposed to workflows running on each record. benjynito's advice makes sense if you go this route, although I still think a scheduled task would be more appropriate than using workflow.
You'll want to make sure your workflow is running in non-peak hours. Assuming you have an on-premise installation you should be able to get away with that. If you're using a hosted instance, you might be worried about one organization running the workflow while another organization is using the system. Use the timeout and maybe a custom workflow activity, if necessary, to force the start time to a certain period.
I'm assuming you'll be as efficient as possible in figuring out which records to deactivate. (i.e. Query Expression would only bring back the records you'll be deactivating).
The built-in infinite loop-protection offered by CRM shouldn't kill your workflow instances. It stops after a call depth of 8, but it resets to 1 if no calls are made for an hour. So the fact that you're doing this once a day should make you OK on the recursive workflow front.