If a user leaves a comment during a workflow, the comment is duplicated in the workflow comments section by service users for each Auto Advance step in the workflow model. Is there some way to stop these steps from also creating a comment? The Show Non-User Comments toggle doesn't hide these specific workflow comments.
Here's a bit of my workflow where the initiator is requested to make changes and can leave a comment when completing the step:
Here is a comment left by my user "a a" in the Workflow initiator changes requested step, then duplicated by the service users in the following Auto Advance steps:
Hi klementine: What you've used above is not an Auto Advancer step but a No Operation step. Also, it wouldn't be quite right to say that the comments are being "duplicated"; rather, they are being "carried forward". There's a small difference between the two.
By definition, a "No Operation" step does nothing. It is simply a placeholder step that will bring the workflow to a halt if the Handler Advance checkbox is not checked. The comments are copied from the previous step because the step was designed that way: a No Operation step is meant not to intervene in the workflow operation and to keep the metadata intact, so it will not delete any history metadata from the workflow node.
Having said that, even if you use an Auto Advancer step you will see similar behavior, because the Auto Advancer step is likewise designed simply to advance the workflow to the default next step in the case of OR splits. It doesn't alter the workflow metadata either.
It's important to understand the exact business requirement before designing a workflow. I don't feel that a No Operation step is really required in your case.
I'm trying to implement automatic backfills in Argo workflows, and one of the last pieces of the puzzle I'm missing is how to access the lastScheduledTime field from my workflow template.
I see that it's part of the template, and I see it getting updated each time a workflow is scheduled, but I can't find a way to access it from my template to calculate how many executions I might have missed since the last time the scheduler was online.
Is this possible? Or, if not, what is the best way to implement this functionality in Argo?
I've been looking at organisation and project settings but I can't see a setting that would prevent users from creating work items in an Azure DevOps project.
I have a number of users who refuse to follow the guidelines we set out for our projects, so I'd like to inconvenience them (and the wider project team) enough that following the guidelines becomes the easier option. At the moment we've got one-word user stories and/or tasks with estimates of 60-70 hours, which isn't reflective of the way we should be planning.
I'd still want them to be able to edit the stories or tasks and move statuses, but that initial creation should be off-limits for them (for a time at least). Is there a way to do this?
The Azure DevOps Aggregator project allows you to write simple scripts that get triggered when a work item is created or updated. It uses a service hook to trigger when such an event occurs and abstracts most of the API specific stuff away, providing you with an instance of the work item to directly interact with.
You can't block the creation or update with such a policy (Azure DevOps informs the aggregator too late in the creation process to do so), but you can revert changes, close the work item, and so on. There are also a few utility functions to send email.
You need to install the aggregator somewhere: it can be hosted in Azure Functions, and a Docker container is provided that you can spin up anywhere you want. Then link it to Azure DevOps using a PAT with sufficient permissions and write your first policy.
A few sample rules can be found in the aggregator docs.
store.DeleteWorkItem(self);
should put the work item in the Recycle Bin in Azure DevOps. You can create a code snippet around it that checks the creator of the work item (self.CreatedBy.Id) against a list of known bad identities.
Be mindful that when Azure DevOps creates a new work item, the Created and Updated events may fire in rapid succession (this is caused by the mechanism that sets the backlog order on work items), so you may need a way to detect from the metadata that a work item should be deleted. I generally check for a low Revision number (say, < 5) and verify that the last few revisions didn't change any field other than Backlog Priority.
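Aggregator policies themselves are written in a C#-like scripting syntax, but the detection heuristic described above is plain logic. A stdlib-only Go sketch of the decision, where the identity list, the revision threshold, and the field name are all assumptions to adapt:

```go
package main

import "fmt"

// shouldDelete sketches the heuristic described above: delete a freshly
// created work item when it was created by a known-bad identity, its
// revision number is still low, and the recent revisions touched nothing
// but the backlog ordering field. Thresholds and names are assumptions.
func shouldDelete(createdByID string, revision int, changedFields []string, blocked map[string]bool) bool {
	if !blocked[createdByID] || revision >= 5 {
		return false
	}
	for _, f := range changedFields {
		if f != "Backlog Priority" {
			return false
		}
	}
	return true
}

func main() {
	blocked := map[string]bool{"user-123": true} // hypothetical identity list
	fmt.Println(shouldDelete("user-123", 2, []string{"Backlog Priority"}, blocked)) // prints true
	fmt.Println(shouldDelete("user-123", 7, nil, blocked))                          // prints false
	fmt.Println(shouldDelete("user-999", 1, nil, blocked))                          // prints false
}
```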
I'm afraid there is no out-of-the-box setting to do this.
That's because the current permission settings for work items are not yet granular enough to cover this scenario.
The closest related setting is here:
Project Settings -> Team configuration -> Area -> Security:
Setting this value to Deny will prevent users from creating new work items, but it will also prevent them from modifying existing work items.
For your request, you could post a suggestion for this feature on our Developer Community site (https://developercommunity.visualstudio.com/content/idea/post.html?space=21 ), which is our main forum for product suggestions.
Let's say we need to generate the order after the user finalizes their cart.
These are our steps to generate an order:
Generate an order in the pending state (order microservice)
Authorize the user's credit (accounting microservice)
Set the status of the cart to closed (cart microservice)
Approve the order (order microservice)
To do this we simply create a Cadence workflow that calls an activity for each step.
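The step sequence above can be sketched in stdlib-only Go. In a real Cadence workflow each function would be an activity invocation with engine-managed retries; all function names here are illustrative, not Cadence API:

```go
package main

import "fmt"

// Each function stands in for a Cadence activity backed by a microservice;
// the bodies are stubs, the names are illustrative.
func createPendingOrder(cartID string) (string, error) { return "order-for-" + cartID, nil }
func authorizeCredit(orderID string) error             { return nil }
func closeCart(cartID string) error                    { return nil }
func approveOrder(orderID string) error                { return nil }

// checkout runs the four steps in order; any failure aborts the sequence.
func checkout(cartID string) (string, error) {
	orderID, err := createPendingOrder(cartID)
	if err != nil {
		return "", err
	}
	if err := authorizeCredit(orderID); err != nil {
		return "", err
	}
	if err := closeCart(cartID); err != nil {
		return "", err
	}
	if err := approveOrder(orderID); err != nil {
		return "", err
	}
	return orderID, nil
}

func main() {
	orderID, err := checkout("cart-42")
	fmt.Println(orderID, err) // prints: order-for-cart-42 <nil>
}
```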
Problem 1: How can the client detect that order creation is in progress for that cart if the user opens the cart again or refreshes the page?
(Note: assume our workflow has not been executed by a worker yet.)
My solution for problem 1: create the order and change its status to pending before running the workflow, so for subsequent requests the client knows the order is in the pending status. But what happens if order creation succeeds and starting the workflow fails? The two operations (order creation and workflow start) are not transactional.
If this is your solution as well and you accept its risk, please tell me. I'm new to Cadence.
Problem 2: How do I inform the client once the workflow is done and the order is ready?
Any solution or article or help, please?
Problem 1: There are multiple solutions to consider:
1.1 Add a step in the workflow that changes the order to the pending state, before calling the order microservice, instead of doing it outside the workflow. This saves you from the consistency issue, and you can add retries in the workflow to make sure the state is eventually consistent.
1.2 Add a query method to expose the workflow state; the User/UI can then make a queryWorkflow call to retrieve it and see the order status.
1.3 Put the state into a SearchAttribute of the workflow; the User/UI can make a describeWorkflow call to get the state.
1.4 After https://github.com/uber/cadence/issues/3729 is implemented, you can use a memo instead of a SearchAttribute, as in 1.3.
Comparison: 1.1 is the choice if you consider order storage the source of truth for order state; 1.2-1.4 make the workflow the source of truth. It really depends on how you want to design the system architecture.
1.2 may not be a good choice if the User/UI expects low latency, because a query call may take a few seconds to return.
1.3-1.4 are much more performant: they only require the Cadence server to make a DB call to get the workflow state.
1.3 has another benefit if you have advanced visibility enabled with the ElasticSearch+Kafka setup: you can search/filter/count workflows by order state. The limitation of 1.3 is that you should only store very small data, like a string or integer, not a blob of state.
The benefit of 1.4 is that you could put a blob of state.
To prevent the user from finalizing a cart multiple times:
When starting the workflow, use a stable workflowID associated with the cart, so that you can call describeWorkflow before allowing them to finalize/check out a cart. The workflow is persisted once the StartWorkflow request is accepted.
If there is an active workflow (not failed/completed/timed out), it means the cart is pending. For example, if a cart uses a UUID, you can use that UUID plus a prefix to form the workflowID. Cadence guarantees workflowID uniqueness, so there is no race condition in which two workflows start for the same cart. If a checkout fails, the user can submit the checkout workflow again.
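The stable-ID idea can be sketched with stdlib Go only. The map here merely simulates the uniqueness guarantee that Cadence enforces server-side, and the prefix and UUID are arbitrary examples:

```go
package main

import (
	"errors"
	"fmt"
)

// starter simulates a workflow service that rejects a second start for
// the same workflowID. Cadence does this server-side; the map is only
// an in-memory stand-in for illustration.
type starter struct{ active map[string]bool }

var errAlreadyStarted = errors.New("workflow already started")

// checkoutWorkflowID derives a stable ID from the cart; the prefix is
// an arbitrary convention.
func checkoutWorkflowID(cartUUID string) string {
	return "checkout_" + cartUUID
}

func (s *starter) start(cartUUID string) (string, error) {
	id := checkoutWorkflowID(cartUUID)
	if s.active[id] {
		return id, errAlreadyStarted
	}
	s.active[id] = true
	return id, nil
}

func main() {
	s := &starter{active: map[string]bool{}}
	id, err := s.start("9f1c0d2e") // hypothetical cart UUID
	fmt.Println(id, err)           // prints: checkout_9f1c0d2e <nil>
	_, err = s.start("9f1c0d2e")   // same cart again
	fmt.Println(err)               // prints: workflow already started
}
```

An active workflow under that ID then doubles as the "this cart is pending" signal.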
Problem 2: It depends on what you mean by "inform". The term sounds like asynchronous notification. If that's the case, you can add another activity that sends the notification to another microservice, or sends a signal to another workflow that needs the notification.
If you mean something synchronous, like showing the result in a web UI, then it can be solved the same way as in the solutions I mentioned for problem 1.
I have created and added some workflows to CRM 2011 RU13 through the UI.
Through no fault of my own, my development environment is completely air-gapped from my production environment.
I added these workflows to my solution, exported the solution as managed, and gave the solution to the production admin.
When he deploys it, it fails with this message:
The workflow cannot be published or unpublished by someone who is not it's owner
How do I fix this? There is no way to not give workflows an owner, or to say that the owner is the solution.
The production admin gets that message because he is not the owner (inside the target CRM environment) of one or more active workflows included in your solution.
This happens in these situations:
The first time you give your solution to be imported, USER_A performs the operation and all the workflows are automatically assigned to him. If USER_B later tries to import an updated version of the solution, he gets the error message because he is not the owner of the workflow(s).
The first time you give your solution to be imported, USER_A performs the operation and all the workflows are automatically assigned to him. Meanwhile, one or more workflows are reassigned to USER_C. If USER_A later tries to import an updated version of the solution, he gets the error message because he is no longer the owner of the workflow(s).
Before a workflow can be updated it must first be deactivated, and only the owner can deactivate a workflow. This is by design.
In your case, the production admin must become the owner of the processes (he can temporarily assign the workflows to himself, import the solution, and afterwards assign them back to the right user), or the owner of the workflows needs to import the solution (if he has the rights).
A couple of additional points for clarity for the OP:
The owner of the workflows in your dev environment is not relevant; the importing user in prod will become the owner (this does not contradict Guido, I'm just making sure you don't follow a red herring). It is quite right for there to be an "air gap" between dev and prod.
If you know which workflows are in your solution, assign those in prod to yourself, then import, then if and only if you need to, reassign them to the original owner(s).
You may not need to if that owner is just an equivalent system admin user, but if it is a special user (eg "Workflow daemon" so users can see why it updated their records) you will want to re-assign.
Note that after re-assigning them, that user has to activate the workflows. You cannot activate a workflow in someone else's name (otherwise users could write workflows that run as admins and elevate their privileges).
If the workflows have not actually been changed in this version of your solution, take them out of the solution and ignore them. Often I find that a workflow was written, carried across to production in the original "go live", and has been working perfectly fine ever since, but is left in a solution that is constantly updated and re-published (i.e. exported/imported).
Personally, I often have a "go live" solution (or more than one, but that's a different thread...) and then start all over again with a new solution that contains only incremental changes thereafter. This means that all your working workflows, plugins, web resources, etc. do not appear in that solution, which avoids confusion over versions, reduces solution bloat, and avoids this problem of workflow ownership. If a workflow is actually updated, then you need to deal with the import, but don't make that a daily occurrence for unrelated changes.
So in one part of our customised Salesforce system, the following happens:
A trigger changes the value of a picklist on a custom object.
A workflow rule detects that change and fires off an email.
Since about the 4th of December, though, it seems to have stopped working.
Edit: The debug logs show that the trigger is firing and changing the value of the picklist, but no Workflow Rules are evaluated.
The workflow rule is pretty simple, so I don't really understand what's preventing it. The details of the rule are:
Operates on a custom object.
Evaluation Criteria: When a record is created, or when a record is edited and did not previously meet the rule criteria
Rule Criteria: ISPICKVAL(Status__c, 'Not Started')
Active: Yes
Immediate Workflow Actions: an email alert
Edit: The rule does fire if I manually update the object to set the appropriate status, but it isn't firing when a trigger changes the status.
Edit: Did something change on Salesforce around December 4th 2009? That seems to be when this stopped working ...
Any ideas?
If you had said "the trigger does not fire the workflow, even though a manual change via the UI does", I would have responded something like:
Absolutely. That's how it is designed. Salesforce does not allow anything automated to invoke anything automated (i.e. you cannot start a WF from a trigger or another WF).
Given that you say this stopped working earlier in the month, I am frankly astonished! We wanted to achieve something like this about 10 months ago, and Salesforce told us it could not be done; they like to keep tight control over processes that could potentially run away and consume a lot of CPU (because of the multi-tenanted nature of the offering), hence the stringent governor limits.
This may have changed recently, of course; we built workarounds to get around the restriction.
To answer my own question ... I eventually found out what this was.
The Salesforce Spring '09 Workflow Rule and Roll-Up Summary Field Evaluations update was rolled out to all orgs at the start of Dec '09, and changed certain Workflow behaviours.
The update improves the accuracy of your data and prevents the reevaluation of workflow rules in the event of a recursion.
Our particular problem was that we needed workflow to be evaluated twice on a single object after the initial action: we had a series of changes to a status field that needed to kick off different things. After the Spring '09 update, workflow is only evaluated once per action on an object.
So, it did work, but then the platform changed, and it didn't work anymore. Time to write some code.