Azure Service Bus queue function trigger called twice

I have an Azure function with a ServiceBusTrigger that is being called twice when it is deployed to Azure. It is very easy to reproduce. Just create a new ServiceBus Trigger function and add a message to the queue.
Here's the code to send the message:
static async Task Main(string[] args)
{
    IQueueClient qc = new QueueClient(_sbConnString, "testing");
    string data = "hello";
    var msg = new Message(Encoding.UTF8.GetBytes(data));
    await qc.SendAsync(msg);
    await qc.CloseAsync();
}
Here's the function:
[FunctionName("TestTrigger")]
public static void Run([ServiceBusTrigger("testing", Connection = "myConnString")] string myQueueItem, ILogger log)
{
    log.LogInformation($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}
Log stream shows the following:
2020-03-13T23:51:23.197 [Information] Executing 'Function1' (Reason='New ServiceBus message detected on 'testing'.', Id=1b52f3c0-2497-4473-b5f9-ae406a6dee94)
2020-03-13T23:51:23.198 [Information] Trigger Details: MessageId: 9f2a7af3d4c549bb8202a013c15c0358, DeliveryCount: 1, EnqueuedTime: 3/13/2020 11:51:23 PM, LockedUntil: 3/13/2020 11:51:53 PM
2020-03-13T23:51:23.198 [Information] C# ServiceBus queue trigger function processed message: hello
2020-03-13T23:51:23.198 [Information] Executed 'Function1' (Succeeded, Id=1b52f3c0-2497-4473-b5f9-ae406a6dee94)
2020-03-13T23:51:23.197 [Information] Executing 'Function1' (Reason='New ServiceBus message detected on 'testing'.', Id=1b52f3c0-2497-4473-b5f9-ae406a6dee94)
2020-03-13T23:51:23.198 [Information] Trigger Details: MessageId: 9f2a7af3d4c549bb8202a013c15c0358, DeliveryCount: 1, EnqueuedTime: 3/13/2020 11:51:23 PM, LockedUntil: 3/13/2020 11:51:53 PM
2020-03-13T23:51:23.198 [Information] C# ServiceBus queue trigger function processed message: hello
2020-03-13T23:51:23.198 [Information] Executed 'Function1' (Succeeded, Id=1b52f3c0-2497-4473-b5f9-ae406a6dee94)
I've tried creating this on both the Mac and Windows (VSCode and VS2019 respectively) and get the same results. When I debug locally on VS2019, the trigger only gets called once.
I also checked the queue using Service Bus Explorer and only one message ends up in the queue. The trigger is just called twice.
Am I missing something simple? Looking at the log timestamps, it appears that it is being executed in parallel.

The real function actually imports data into a database, and we are getting twice as many records because the trigger function is called twice.
You should always provide the details or else the question is skewed and hard to answer. In this case, the fact that there's some work taking place in the Function that could lead to a prolonged completion of the message matters. Here's what could be happening.
Your function receives messages in PeekLock mode: each message is leased to the consumer for a limited period of time. If whatever you do in the function takes longer than that lease, the lock expires, the message becomes available again, and Functions will retrieve and process it again. This repeats until either the function completes before the lock expires or the maximum delivery count is exceeded and the message is moved to the dead-letter queue.
You should do the following:
Check the queue's MaxLockDuration and ensure it's longer than the maximum processing time.
Make your function idempotent and discard messages that have already been processed (see the sketch below).
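To illustrate the idempotency idea: the trigger details in your log already include a MessageId, so the handler can record each id it has processed and skip duplicates. Below is only a rough sketch of that check, written in Java with an in-memory set; the class name is made up, and in a real deployment the set of seen ids would have to live in shared, durable storage (for example a database table keyed by MessageId), since an in-memory set does not survive restarts or scale-out.

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical duplicate guard keyed on the Service Bus MessageId.
// In production the store must be shared and durable (e.g. a database table),
// because an in-memory set does not survive restarts or multiple instances.
public class ProcessedMessageStore {

    private final Set<String> seenMessageIds = ConcurrentHashMap.newKeySet();

    /** Returns true only the first time a given MessageId is seen. */
    public boolean markProcessed(String messageId) {
        return seenMessageIds.add(messageId);
    }
}

// Inside the trigger handler (pseudocode):
//   if (!store.markProcessed(messageId)) return;   // duplicate delivery, skip the import
//   importIntoDatabase(myQueueItem);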

Related

Timing of `postMessage` inside service worker `fetch` event handler

I have a simple service worker which, on fetch, posts a message to the client:
// main.js
navigator.serviceWorker.register("./service-worker.js");
console.log("client: addEventListener message");
navigator.serviceWorker.addEventListener("message", event => {
console.log("client: message received", event.data);
});
<script src="main.js"></script>
// service-worker.js
self.addEventListener("fetch", event => {
  console.log("service worker: fetch event");
  event.waitUntil(
    (async () => {
      const clientId =
        event.resultingClientId !== ""
          ? event.resultingClientId
          : event.clientId;
      const client = await self.clients.get(clientId);
      console.log("service worker: postMessage");
      client.postMessage("test");
    })()
  );
});
When I look at the console logs, it's clear that the message event listener is registered after the message is posted by the service worker. Nonetheless, the event listener still receives the message.
I suspect this is because messages are scheduled asynchronously:
postMessage() schedules the MessageEvent to be dispatched only after all pending execution contexts have finished. For example, if postMessage() is invoked in an event handler, that event handler will run to completion, as will any remaining handlers for that same event, before the MessageEvent is dispatched.
https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage#Notes
However, I'm not sure what this means in this specific example. For example, when the fetch event handler has run to completion, is the client JavaScript guaranteed to have run, and therefore the message event listener to be registered?
I have a larger app that is doing something similar to the above, but the client JavaScript runs slightly later in the page load, so I would like to know exactly when the event listener must be registered in order to avoid race conditions and guarantee that the message posted by the service worker will be received.
By default, all messages sent from a page's controlling service worker to the page (using Client.postMessage()) are queued while the page is loading, and get dispatched once the page's HTML document has been loaded and parsed (i.e. after the DOMContentLoaded event fires). It's possible to start dispatching these messages earlier by calling ServiceWorkerContainer.startMessages(), for example if you've invoked a message handler using EventTarget.addEventListener() before the page has finished loading, but want to start processing the messages right away.
https://developer.mozilla.org/en-US/docs/Web/API/ServiceWorkerContainer/startMessages

Trigger a Camunda Timer Intermediate Catch Event via the REST API

I have a simple Camunda BPMN diagram. I added a Timer Intermediate Catch Event and set its duration to 23 hours (PT23H) for testing purposes. I am trying to trigger the timer event via the Camunda REST API while the process is waiting. I tried the following POST request, but it returns an error. Any idea how to call it properly? Thanks.
http://camunda-xxx/rest/message
{
"processInstanceId":"e984e112-27cd-11ea-8f92-0a580a800328",
"messageName":"Test"
}
{
"type": "RestException",
"message": "org.camunda.bpm.engine.MismatchingMessageCorrelationException: Cannot correlate message 'Test': No process definition or execution matches the parameters"
}
You can't trigger a timer via a message.
Either use an event-based gateway, so the process waits for either the timer or a message and then continues, or modify the timer job's due date (a sketch of the REST calls follows below).
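If you go the "move the timer job's due date" route, recent Camunda versions expose the timer job through the REST API: you can look it up for the process instance with GET /job?processInstanceId=...&timers=true and then either fire it with POST /job/{id}/execute or move it with PUT /job/{id}/duedate. Below is a rough Java sketch of those two calls; the base URL and ids are placeholders taken from your example, authentication and error handling are omitted, and the exact endpoints should be checked against your Camunda version.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: find the waiting timer job of a process instance and fire it immediately.
public class FireCamundaTimer {

    static final String BASE = "http://camunda-xxx/rest";

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        String processInstanceId = "e984e112-27cd-11ea-8f92-0a580a800328";

        // 1) Look up the timer job(s) for the instance.
        HttpRequest findJobs = HttpRequest.newBuilder()
                .uri(URI.create(BASE + "/job?processInstanceId=" + processInstanceId + "&timers=true"))
                .GET()
                .build();
        String jobs = http.send(findJobs, HttpResponse.BodyHandlers.ofString()).body();
        System.out.println(jobs); // take the "id" field of the timer job from this JSON array

        // 2) Fire the timer right away (alternatively PUT /job/{id}/duedate with {"duedate": "..."}).
        String jobId = "<job id from step 1>";
        HttpRequest execute = HttpRequest.newBuilder()
                .uri(URI.create(BASE + "/job/" + jobId + "/execute"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        System.out.println(http.send(execute, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}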

How to handle commands sent from a saga in Axon Framework

Using a saga: given an event EventA, the saga starts and sends a command (or several).
How can we make sure that the command was sent successfully and that the actual logic in the other microservice did not throw, etc.?
Let's have an example of email saga:
When a user registers, we create a User aggregate which publishes UserRegisteredEvent; a saga is created, and this saga is responsible for making sure the registration email is sent to the user (the email may contain a verification key, a welcome message, etc.).
Should we use:
commandGateway.sendAndWait with a try/catch -> does it scale?
commandGateway.send and a deadline plus some kind of "fail event" like SendEmailFailedEvent -> requires associating a "token" with the command so the "associationProperty" can be matched to the correct saga that sent SendRegistrationEmailCommand
commandGateway.send(...).handle(...) -> inside handle, can we reference the eventGateway/commandGateway that were injected into MyEmailSaga?
On error, do we send an event, or can we modify/call a method on the saga instance we had? If there is no error, the other service will have sent an event like RegistrationEmailSentEvent, so the saga will end.
use a deadline, because we just use "send" and do not handle the eventual error of the command, which may have failed to be sent (the other service is down, etc.)
something else?
Or a combination of all?
How should we handle the errors below? (Use a deadline, .handle(...), or something else?)
Errors could be:
command has no handlers (no service up, etc)
command was handled but exception is raised in other service and no event is sent (no try/catch in other service)
command was handled, the exception was raised and caught, and the other service publishes an event to notify that it failed to send the email (the saga will receive the event and take the appropriate action depending on the event type and data provided -> maybe the email address is wrong or does not exist, so no need to retry)
other errors I missed?
@Saga
public class MyEmailSaga {

    @Autowired
    transient CommandGateway commandGateway;
    @Autowired
    transient EventGateway eventGateway;
    @Autowired
    transient SomeService someService;

    String id;
    SomeData state;
    /** count how many times we retried sending the email so we can apply logic on it */
    int sendRetryCount;

    @StartSaga
    @SagaEventHandler(associationProperty = "id")
    public void on(UserRegisteredEvent event) {
        id = event.getApplicationId();
        //state = event........
        // what are the possibilities here?
        // Can we use sendAndWait? But it does not scale very well, right?
        commandGateway.send(new SendRegistrationEmailCommand(...));
        // Is a deadline good since we do not handle the "send" of the command?
    }

    // Use a @DeadlineHandler to retry?
    @DeadlineHandler(deadlineName = "retry_send_registration_email")
    public void on() {
        // resend the command and re-schedule a deadline, etc.
    }

    @EndSaga
    @SagaEventHandler(associationProperty = "id")
    public void on(RegistrationEmailSentEvent event) {
    }
}
EDIT (after accepted answer):
Mainly two options (sorry, Kotlin code below):
First option
commandGateway.send(SendRegistrationEmailCommand(...))
    .handle { t, result ->
        if (t != null) {
            // send event (could be caught by the same saga eventually) or send command or both
        } else {
            // send event (could be caught by the same saga eventually) or send command or both
        }
    }
    // If you do not use handle(...) then you can use thenApply as well
    .thenApply { eventGateway.publish(SomeSuccessfulEvent(...)) }
    .thenApply { commandGateway.send(SomeSuccessfulSendOnSuccessCommand) }
Second option:
Use a deadline to make sure the saga does something if SendRegistrationEmailCommand failed and you did not receive any event about the failure (i.e. when you do not handle the result of the command you sent).
You can of course use deadlines for other purposes as well.
When SendRegistrationEmailCommand is received successfully, the receiver publishes an event, so the saga is notified and can act on it.
That could be a RegistrationEmailSentEvent or a RegistrationEmailSendFailedEvent.
Summary:
It seems best to use handle() only when the command failed to be sent or the receiver threw an unexpected exception; in that case, publish an event for the saga to act on.
On success, the receiver should publish the event and the saga will listen for it (and possibly register a deadline just in case); the receiver may also publish an event to signal an error instead of throwing, and the saga will listen for that event too.
Ideally, you would use the asynchronous options to deal with errors. This would be either commandGateway.send(command) or commandGateway.send(command).thenApply(). If the failures are business-logic related, it may make sense to emit events for these failures. A plain gateway.send(command) then makes sense; the saga can react to the events returned as a result. Otherwise, you will have to deal with the result of the command.
Whether you need to use sendAndWait or just send().then... depends on what you need to do when it fails. Unfortunately, when dealing with results asynchronously, you cannot safely modify the state of the saga anymore. Axon may already have persisted the state of the saga, causing those changes to be lost. sendAndWait resolves that. Scalability is not often an issue, because different sagas can be executed in parallel, depending on your processor configuration.
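As a concrete sketch of the deadline variant discussed above (assuming Axon 4 package names and a DeadlineManager injected into the saga; the deadline name and the command constructor are made up, and the retry counter from the question is omitted for brevity):

import java.time.Duration;

import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.deadline.DeadlineManager;
import org.axonframework.deadline.annotation.DeadlineHandler;
import org.axonframework.modelling.saga.EndSaga;
import org.axonframework.modelling.saga.SagaEventHandler;
import org.axonframework.modelling.saga.StartSaga;
import org.axonframework.spring.stereotype.Saga;
import org.springframework.beans.factory.annotation.Autowired;

@Saga
public class MyEmailSaga {

    @Autowired
    transient CommandGateway commandGateway;
    @Autowired
    transient DeadlineManager deadlineManager;

    private String id;
    private SendRegistrationEmailCommand lastCommand; // kept so the deadline handler can resend it

    @StartSaga
    @SagaEventHandler(associationProperty = "id")
    public void on(UserRegisteredEvent event) {
        id = event.getApplicationId();
        lastCommand = new SendRegistrationEmailCommand(id);
        // Fire-and-forget the command, then arm a safety net in case no confirmation ever arrives.
        commandGateway.send(lastCommand);
        deadlineManager.schedule(Duration.ofMinutes(5), "registration-email-not-confirmed");
    }

    @EndSaga
    @SagaEventHandler(associationProperty = "id")
    public void on(RegistrationEmailSentEvent event) {
        // Success path: cancel the safety net; @EndSaga closes the saga.
        deadlineManager.cancelAll("registration-email-not-confirmed");
    }

    @DeadlineHandler(deadlineName = "registration-email-not-confirmed")
    public void onEmailNotConfirmed() {
        // No confirmation arrived in time: resend and re-arm (add a retry limit in real code).
        commandGateway.send(lastCommand);
        deadlineManager.schedule(Duration.ofMinutes(5), "registration-email-not-confirmed");
    }
}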
The Axon team is currently looking at possible APIs that would allow for safe asynchronous execution of logic in Sagas, while still keeping guarantees about thread safety and state persistence.

REST API to start a process instance is holding request until process instance is complete

I'm using Activiti version 6.
I created a BPMN process from activiti-app.
Then I want to start that process from activiti-rest.war using the API.
http://localhost:8080/activiti-rest/service/runtime/process-instances
request body :
{
"processDefinitionKey":"cep_dispatch_process",
"businessKey":"myBusinessKey",
"returnVariables": false
}
header :
Content-Type:application/json
As I see in the log, the process is started on Tomcat threads.
Referring to the latest GitHub code:
Activiti-activiti-6.0.0\modules\activiti-rest\src\main\java\org\activiti\rest\service\api\runtime\process\ProcessInstanceCollectionResource.java
When I look at the method,
@RequestMapping(value = "/runtime/process-instances", method = RequestMethod.POST, produces = "application/json")
public ProcessInstanceResponse createProcessInstance(@RequestBody ProcessInstanceCreateRequest request, HttpServletRequest httpRequest, HttpServletResponse response) {
I can see that the process is started and the code does not wait for it to complete; the HTTP response is 201. So I understand the request should not be held until the process instance completes:
instance = processInstanceBuilder.start();
response.setStatus(HttpStatus.CREATED.value());
But please refer to the log snippet below: the process is executing on the server (request) thread and the request waits until the process has completed.
276-DEBUG 17-01-2019 14:12:07,177- (http-nio-8080-exec-3) ExecutionEntityManagerImpl: Child execution Execution[ id '130023' ] - parent '130021' created with parent 130021
241-DEBUG 17-01-2019 14:12:07,178- (http-nio-8080-exec-3) ContinueProcessOperation: Executing boundary event activityBehavior class org.activiti.engine.impl.bpmn.behavior.BoundaryTimerEventActivityBehavior with execution 130023
171-DEBUG 17-01-2019 14:12:07,202- (http-nio-8080-exec-3) ContinueProcessOperation: Executing activityBehavior class org.activiti.engine.impl.bpmn.behavior.SubProcessActivityBehavior on activity 'sid-1A2A8DF5-764A-4960-8E5D-F347DC10207C' with execution 130021
276-DEBUG 17-01-2019 14:12:07,203- (http-nio-8080-exec-3) ExecutionEntityManagerImpl: Child execution Execution[ id '130025' ] - parent '130021' created with parent 130021
63-DEBUG 17-01-2019 14:12:07,203- (http-nio-8080-exec-3) DefaultActivitiEngineAgenda: Operation class org.activiti.engine.impl.agenda.ContinueProcessOperation added to agenda
70-DEBUG 17-01-2019 14:12:07,203- (http-nio-8080-exec-3) CommandInvoker: Executing operation class org.activiti.engine.impl.agenda.ContinueProcessOperation
The request must not wait for the process to complete.
How can I solve this, so that the request to start the process does not wait for the process instance to complete?
As you see in the response below:
{"id":"130028",
"url":"http://localhost:8080/activiti-rest/service/runtime/process-instances/130028",
"businessKey":"myBusinessKey",
"suspended":false,
"ended":true,
"processDefinitionId":"cep_dispatch_process:13:125033",
"processDefinitionUrl":"http://localhost:8080/activiti-rest/service/repository/process-definitions/cep_dispatch_process:13:125033"
,"processDefinitionKey":"cep_dispatch_process",
"activityId":null,
"variables":[],
"tenantId":"",
"name":null,
"completed":true
}
The API returns only after the process completes; when I add a 2-minute delay in a service task, I can see the request waiting.
I'm not a big guru in Activiti, but as the simplest solution I can suggest activating the async executor and using asynchronous continuations for your service task. This could solve your problem. Activiti's behaviour is expected, because until it has persisted the state to the DB it can't say for sure that the process is created (the transaction could still be rolled back, for example due to a DB error). A rough sketch follows below.
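For illustration, here is a rough sketch of enabling the async executor programmatically (in the activiti-rest.war setup the same flag is usually set in the engine configuration, e.g. activiti.cfg.xml or the Spring context; the datasource values below are placeholders). The service task itself is then marked asynchronous in the BPMN XML with activiti:async="true", so the start request commits after inserting the async job and returns immediately:

import org.activiti.engine.ProcessEngine;
import org.activiti.engine.ProcessEngineConfiguration;

// Sketch: build an engine with the async executor enabled so that activities
// marked with activiti:async="true" run on the async executor's threads
// instead of the HTTP request thread.
public class AsyncEngineBootstrap {

    public static void main(String[] args) {
        ProcessEngine engine = ProcessEngineConfiguration
                .createStandaloneProcessEngineConfiguration()
                .setJdbcUrl("jdbc:h2:mem:activiti;DB_CLOSE_DELAY=1000")   // placeholder datasource
                .setJdbcUsername("sa")
                .setJdbcPassword("")
                .setJdbcDriver("org.h2.Driver")
                .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE)
                .setAsyncExecutorActivate(true)                            // enable the async executor
                .buildProcessEngine();

        // In the process definition, the first service task would carry
        //   <serviceTask id="dispatch" activiti:async="true" ... />
        // so the start request returns 201 as soon as the instance and its async job are persisted.
        System.out.println("Engine started: " + engine.getName());
    }
}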

WebFlux WebClient asynchronous request and processing a Mono

I am new to WebFlux and am not able to find the right material to continue with the implementation.
I want to issue a request and process the response asynchronously. In this case the service call takes about 8-10 ms to respond, so we issue the request, continue doing other work, and look at the response when it is needed for further processing.
Mono<Map<String,Price>> resp = webClient.post()
.uri("/{type}",isCustomerPricing ? "customer" : "profile")
.body(Mono.just(priceDetailsRequest),PriceDetailsRequest.class)
.retrieve().bodyToMono(customerPriceDetailsType);
How do we make this call execute asynchronously on a different thread? (I tried subscribeOn with Schedulers.single / Schedulers.parallel, but didn't see the call being executed until Mono.block() was called.)
How do we achieve the following?
We want this call to execute in parallel on a separate thread, so the current thread can continue with other work
When processing completes, set the response on the context
When the current thread looks for the response and the service call has not completed, block until it completes
You don't need to block to consume the response. Just add an operator that consumes the response to the same chain. An example is given below.
webClient.post()
    .uri("/{type}", isCustomerPricing ? "customer" : "profile")
    .body(Mono.just(priceDetailsRequest), PriceDetailsRequest.class)
    .retrieve()
    .bodyToMono(CustomerPriceDetailsType.class)
    .map(processor::responseToDatabaseEntity) // Create a persistable entity from the response
    .map(priceRepository::save)               // Save the entity to the database
    .subscribe();                             // Subscribing is what triggers the pipeline
Alternatively you can provide a consumer as a parameter of the subscribe() method.
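And for the original requirement in the question ("issue the request now, only block later if the response is not there yet"), one possible sketch is to kick the call off eagerly and hold on to a CompletableFuture; Mono.toFuture() subscribes immediately, so the request is in flight while the caller does other work. The Price and PriceDetailsRequest types come from the question, and the lookup class and method names below are made up for illustration:

import java.util.Map;
import java.util.concurrent.CompletableFuture;

import org.springframework.core.ParameterizedTypeReference;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

// Sketch: start the HTTP call immediately, do other work, and only wait at the
// point where the result is actually needed.
public class PriceLookup {

    private final WebClient webClient;

    public PriceLookup(WebClient webClient) {
        this.webClient = webClient;
    }

    public CompletableFuture<Map<String, Price>> fetchPricesAsync(PriceDetailsRequest request,
                                                                  boolean isCustomerPricing) {
        Mono<Map<String, Price>> prices = webClient.post()
                .uri("/{type}", isCustomerPricing ? "customer" : "profile")
                .body(Mono.just(request), PriceDetailsRequest.class)
                .retrieve()
                .bodyToMono(new ParameterizedTypeReference<Map<String, Price>>() {});
        // toFuture() subscribes right away, so the request is in flight from this point on.
        return prices.toFuture();
    }
}

// Usage:
//   CompletableFuture<Map<String, Price>> future = lookup.fetchPricesAsync(req, true);
//   doOtherWork();
//   Map<String, Price> result = future.join(); // blocks only if the response has not arrived yet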