Let's say we need to send a user an email, wait for the user to reply, and then continue the workflow. Should we create an async activity to send the email and, when the reply email comes, complete the activity? Or should we create a normal activity to send the email, have the workflow await a signal, and, when the reply email comes, send the signal to the workflow? Are these two options equivalent? Or are there differences that can be used to decide which one to use for different activities?
Thanks in advance
I recommend the activity-then-signal approach for this use case. The reason is that sending an email and waiting for a reply are two different tasks with different timeouts and retry policies.
If the send-email activity fails, it is expected to be retried with a short timeout on a pretty tight retry schedule. At the same time, the timeout for the user action is expected to be much larger (potentially days or weeks), and it is usually not retriable.
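Here is a minimal sketch of that shape with the Temporal Go SDK. The SendEmail activity and the "email-reply" signal name are placeholders made up for illustration, not anything from the original question:

```go
package approval

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// SendEmail is a placeholder activity; the real implementation would talk to
// an email provider and is registered with the worker alongside the workflow.
func SendEmail(address string) error { return nil }

// EmailReplyWorkflow sends the email as a short, retriable activity and then
// blocks on a signal that is delivered when the reply email arrives.
func EmailReplyWorkflow(ctx workflow.Context, address string) (string, error) {
	// The tight timeout (and default retry policy) applies only to the send.
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: 30 * time.Second,
	})
	if err := workflow.ExecuteActivity(ctx, SendEmail, address).Get(ctx, nil); err != nil {
		return "", err
	}

	// Waiting for the user can take days or weeks; no activity is involved.
	var reply string
	workflow.GetSignalChannel(ctx, "email-reply").Receive(ctx, &reply)
	return reply, nil
}
```

Whatever system receives the reply email then delivers the signal (for example via the client's SignalWorkflow call) to unblock the waiting workflow.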
Edit to answer the retry question:
But what if we do want to retry? Say we expect the user to reply to the email within a day, and otherwise we send it again. We could retry the entire workflow, but that is not ideal, since sending the email and waiting for the reply are only part of the workflow. Should we make it a child workflow and put the retry on the child workflow?
You retry the whole interaction. See the fileprocessing example of retrying part of a workflow. Here are Go SDK and Java SDK versions of it.
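To retry just the send-and-wait interaction (for example, re-send the email every day until the user replies or a few reminders have gone out), one common shape is a loop inside the workflow around both steps. Continuing the sketch above (same package, imports, and placeholder SendEmail, plus the standard "errors" package), with made-up limits:

```go
// emailWithReminders re-sends the email up to maxAttempts times, waiting
// replyTimeout for the "email-reply" signal after each send.
func emailWithReminders(ctx workflow.Context, address string, maxAttempts int, replyTimeout time.Duration) (string, error) {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: 30 * time.Second,
	})
	replyCh := workflow.GetSignalChannel(ctx, "email-reply")

	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err := workflow.ExecuteActivity(ctx, SendEmail, address).Get(ctx, nil); err != nil {
			return "", err
		}

		var reply string
		received := false

		// Race the reply signal against a durable timer; the first one wins.
		sel := workflow.NewSelector(ctx)
		sel.AddReceive(replyCh, func(c workflow.ReceiveChannel, more bool) {
			c.Receive(ctx, &reply)
			received = true
		})
		sel.AddFuture(workflow.NewTimer(ctx, replyTimeout), func(f workflow.Future) {})
		sel.Select(ctx)

		if received {
			return reply, nil
		}
		// The timer fired first: loop around and send a reminder.
	}
	return "", errors.New("no reply after all reminders")
}
```

The same loop could live in a child workflow if you want the retry policy and history of this interaction kept separate from the parent.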
The Office 365 trigger "When a new email arrives (V3)" is my Logic App's trigger condition.
The workflow does the following:
- triggered on incoming e-mails
- check if the sender is an AD user; if not, send to a special mail address
- analyse the subject, put the information together into new mail bodies, and distribute the mail to other mailboxes
I have seen a few cases I don't understand, and I would like to know the reason for them.
First: I got a succeeded trigger, but the trigger did not fire. Why?
image1: trigger history
image 2: not-fired triggers
The screenshots show succeeded triggers that did not fire to start a run. In this time span there was no email in the inbox.
Second: I got a failed trigger (I cannot reconstruct which mail could be the reason).
image3: failed trigger
For this case I cannot say much; I found no e-mail at that time, but it is possible that, through deactivation/activation of the Logic App, old unread mails tried to trigger it. It was the first trigger after the creation of this Logic App for testing (copied from another subscription).
Can someone tell me possible reasons for these two situations?
A status of "Skipped" just indicates that nothing new was found to fire the logic, because the trigger performs a check operation every once in a while even when the mailbox does not receive new email.
I think you do not need to worry about the records under "Trigger history". You just need to check the records under "Runs history"; they give a more intuitive picture of what your Logic App is doing.
In our design we have something of a paradox. We have a database of projects. Each project has a status. We have a REST API to change a project from "Ready" status to "Cleanup" status. Two things must happen:
1. update the status in the database
2. send out an email to the approvers
Currently the RESTful API does (1), and if that is successful, does (2).
But sometimes the email fails to send, and since (1) is already committed, it is not possible to roll back.
I don't want to send the email prior to commit, because I want to make sure the commit is successful before sending the email.
I thought about undoing step (1), but that is very hard. The status change involves adding new records to the history table, so I would need to delete them. And if another person makes other changes concurrently, the undo might get messed up.
So what can I do? If (2) fails, should I return “200 OK” to the client?
Seems like the best option is to return “500 Server Error” with error message that says “The project status was changed. However, sending the email to the approvers failed. Please take appropriate action.”
Perhaps I should not try to do 1 + 2 in a single operation? But that just puts the burden on the client, which is worse!
Just some random thoughts:
You could have a "notification sent" status flag along with a datetime of submission. When an email succeeds, the flag flips; if not, it stays. When changes are submitted, your code iterates through ALL unsent notifications and tries to send them. No idea what backend DB you are using, but I believe many have the functionality to send emails as well. You could have a scheduled job (SQL Server Agent for MSSQL) that runs hourly and tries to send once the datetime of the submission has lapsed by a certain amount, or starts setting off alarms if it fails as well.
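A rough sketch of that flag-plus-scheduled-job idea (an outbox-style table), assuming Go and a relational database purely for illustration; the table and column names are made up:

```go
package notifications

import (
	"database/sql"
	"time"
)

// changeStatus commits the status update and a pending-notification row in
// the same transaction, so the API can return as soon as the commit succeeds.
func changeStatus(db *sql.DB, projectID int64) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once Commit has succeeded

	if _, err := tx.Exec(`UPDATE projects SET status = 'Cleanup' WHERE id = ?`, projectID); err != nil {
		return err
	}
	// The email is not sent here; only the intent to send it is recorded.
	_, err = tx.Exec(
		`INSERT INTO pending_notifications (project_id, submitted_at, sent) VALUES (?, ?, 0)`,
		projectID, time.Now())
	if err != nil {
		return err
	}
	return tx.Commit()
}

// sendPendingNotifications is run by the scheduled job; it picks up any row
// whose email has not gone out yet and leaves failures for the next run.
func sendPendingNotifications(db *sql.DB, send func(projectID int64) error) error {
	rows, err := db.Query(`SELECT project_id FROM pending_notifications WHERE sent = 0`)
	if err != nil {
		return err
	}
	defer rows.Close()

	var ids []int64
	for rows.Next() {
		var id int64
		if err := rows.Scan(&id); err != nil {
			return err
		}
		ids = append(ids, id)
	}
	if err := rows.Err(); err != nil {
		return err
	}

	for _, id := range ids {
		if err := send(id); err != nil {
			continue // flag stays 0, so the next run retries this one
		}
		if _, err := db.Exec(`UPDATE pending_notifications SET sent = 1 WHERE project_id = ?`, id); err != nil {
			return err
		}
	}
	return nil
}
```

With this shape the REST call can return 200 once the transaction commits, and a stuck email shows up as an old unsent row rather than a failed request.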
If it is that insanely important, then maybe you could integrate a third-party service such as SendGrid to run as a backup sending mechanism. That of course would be more $$ though...
Traditionally I've always separated functions like this into a backend worker process that handles this kind of administrative task across many different applications. Some notifications get sent out every morning, some every 15 minutes, and some are weekly summaries. If I run into a crash-and-burn, I light up the event log, and we are (lucky/unlucky) enough to have server monitoring tools that alert us on specified application events.
I've built a simple task management webapp: User A fills out a form, hits the submit button, and sends the data to a server; if the data validates, User B gets assigned to this task.
I'd like to notify User B by email about this new assignment. However, User A can alter the task data or even delete the task, and the email that has already been sent would be incorrect in that case.
One approach is to delay the notification email for a couple of minutes and then, upon sending, update the email message if needed.
What are the best practices for sending notifications?
I think you have a few choices:
Send out emails whenever task status changes. Don't include details; send a link to user B to let them see what the changes are.
This is a good example of Why Starbucks Does Not Use Two Phase Commit. User B will tolerate "dirty reads" because they aren't life altering.
Send out all notification emails asynchronously on a fixed schedule. Have a timed task query a database, generate all the emails, and send them at once. The task then has the chance to send only the latest one. If User A assigns a task, makes updates, and then deletes it, User B will only get the last meaningful notification. In this case, an assign followed by a delete might result in no email being sent; only an assign or update as the last state will result in an email being sent.
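The coalescing step in that second option can be quite small: group the pending events per task, keep only the most recent one, and skip tasks whose last event is a delete. A hedged sketch in Go, with made-up event fields and kind names:

```go
package notify

import "time"

// Event is a pending notification recorded whenever a task changes.
type Event struct {
	TaskID    int64
	Kind      string // "assigned", "updated", "deleted" (assumed values)
	Timestamp time.Time
}

// latestPerTask reduces a batch of pending events to at most one email per
// task: the most recent event wins, and a trailing delete suppresses the email.
func latestPerTask(events []Event) []Event {
	latest := make(map[int64]Event)
	for _, e := range events {
		if cur, ok := latest[e.TaskID]; !ok || e.Timestamp.After(cur.Timestamp) {
			latest[e.TaskID] = e
		}
	}
	var toSend []Event
	for _, e := range latest {
		if e.Kind == "deleted" {
			continue // assign followed by delete: send nothing
		}
		toSend = append(toSend, e)
	}
	return toSend
}
```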
I am working on an application which needs to notify around 100 people at once when a specific condition is met. Right now, the user whose action results in that condition being met has to wait until all 100 emails are sent, which takes quite long using Gmail SMTP. The application is built on top of CakePHP.
My question is whether there is a way the application can send 100 emails without blocking the user whose action results in meeting the specific condition.
To make my question clear, think of Groupon. It sends a notification to all buyers when the minimum number of buyers is met. So when the nth person makes the purchase, Groupon sends the notification. One way is to notify all buyers immediately after the purchase is complete (which is what we are doing in the context of our application), and the other way is to wait and send the notifications using an external script/app at a pre-defined time.
In the case of the former, the application blocks until sending the emails is complete. Since PHP doesn't support multi-threading, I was wondering if there is an easy way to make this operation asynchronous so it doesn't affect the main application flow.
You could put the notifications in a queue and use a cron job that checks and sends notifications every 5 minutes. That way your user isn't locked up while the operation happens.
I'm not 100% sure, but you might be able to use an AJAX call too, which would keep the user free to carry on after the request is sent.
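The queue idea can be as simple as handing the recipient list to a background worker so the request handler returns immediately. The original app is CakePHP, so the Go sketch below is only an illustration of the shape, with a caller-supplied sendEmail function standing in for the real SMTP code:

```go
package mail

import "log"

// notifier owns a buffered queue of recipient addresses and a background
// worker goroutine that drains it, so the request handler never waits on SMTP.
type notifier struct {
	queue chan string
}

func newNotifier(sendEmail func(to string) error) *notifier {
	n := &notifier{queue: make(chan string, 1000)}
	go func() {
		for to := range n.queue {
			if err := sendEmail(to); err != nil {
				log.Printf("sending to %s failed: %v", to, err)
			}
		}
	}()
	return n
}

// Enqueue returns as soon as the addresses are queued; the worker sends later.
func (n *notifier) Enqueue(recipients []string) {
	for _, to := range recipients {
		n.queue <- to
	}
}
```

In PHP the equivalent is usually a job table or message queue drained by a cron job or a separate worker process, which is exactly the suggestion above.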
How to resume a persisted workflow with a Delay activity without reloading it into memory:
I am creating a workflow for leave applications. My requirement is that if a participant has not responded within the specified time, then the request needs to be passed to the next-level participant for approval.
Suppose a requester submits a leave request and the Team Lead needs to approve it within 7 days. If the Team Lead has not responded within 7 days, then it automatically has to go to the Manager for approval.
In general, to achieve this, we would write a Windows service which checks periodically and sends the notifications once the period has elapsed.
But I want to achieve this without writing a Windows service. Is there any possibility in WF 4.0?
I am trying it like this: once the requester has submitted the request, I show the request in the participant's mailbox and persist the workflow. Once the participant has responded, I resume the workflow (I am saving the workflow instance ID) and pass the participant's response in for further workflow execution.
In this setup, if the participant does not respond, how do I escalate / send the request to the manager without using a Windows service?
Is it possible to do anything with the Delay activity?
If you create a workflow service, it is hosted in the WorkflowServiceHost, which periodically checks whether there are expired timers and resumes those workflows.
You must host the workflow engine somewhere...
If it's not in a Windows service, it should be in IIS.
You can also host it in a "normal" command-line application, but if you close the application the workflow will stop.