Event MyEvent has ID 2 which is already in use - powershell

I am implementing event tracing using ETW in a Service Fabric application and am faced with these errors:
ERROR: Exception in Command Processing for EventSource MyCompany-ServiceFabricApplication-LiveDataReader: Event OnCommandMessageReceived has ID 2 which is already in use
The "OnCommandMessageReceived" is my custom event
[Event(2, Level = EventLevel.Verbose, Message = "Queue client created '{0}'")]
public void OnQueueClientCreated(string queueClientName)
{
    if (IsEnabled())
    {
        WriteEvent(2, queueClientName);
    }
}
I have many of these errors and I have tried messing around with the numbers, but ...
Is there some PowerShell command or other tool that can tell me which IDs are in use, or is there a safe range or something?
PS: When that event is fired I can see it in the Visual Studio Diagnostic Events viewer, but the Message is empty. It would be cool if it displayed the message from the payload. Is that possible?

ETW events must have a unique ID per provider. So check whether you have other events with ID 2 and change the ID to a different value.
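Event IDs only need to be unique within a single EventSource, so there is no special "safe range" to pick from. If you want to find the clash without trial and error, a small reflection helper can list every explicit [Event] ID that is declared more than once. This is only a sketch; the EventSource type name in the usage comment is a placeholder for your own class.

using System;
using System.Diagnostics.Tracing;
using System.Linq;
using System.Reflection;

static class EventIdAudit
{
    // Prints every event ID that is declared more than once via [Event(...)]
    // on an EventSource-derived class.
    public static void PrintDuplicateIds(Type eventSourceType)
    {
        var duplicates = eventSourceType
            .GetMethods(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic)
            .Select(method => method.GetCustomAttribute<EventAttribute>())
            .Where(attribute => attribute != null)
            .GroupBy(attribute => attribute.EventId)
            .Where(group => group.Count() > 1);

        foreach (var group in duplicates)
        {
            Console.WriteLine($"Event ID {group.Key} is declared {group.Count()} times.");
        }
    }
}

// Example usage (the type name is a placeholder for your own EventSource class):
// EventIdAudit.PrintDuplicateIds(typeof(LiveDataReaderEventSource));

Keep in mind that EventSource methods without an explicit [Event] attribute are assigned implicit IDs based on their position in the class, so a clash can also come from an unattributed method; the check above only sees explicit attributes.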

Related

Symfony Messenger: Send logged messenger errors per email (via swift_mailer)?

I've configured monolog to send errors via email as described in the symfony docs here: https://symfony.com/doc/4.3/logging/monolog_email.html
Works well for all errors happening during a request, as well as for console command errors.
But it does not send emails for errors which occurred during the handling of a messenger message.
Errors are shown when running the consumer bin/console messenger:consume async -vv and they also show up in prod.log like this:
[2020-01-10 12:52:38] messenger.CRITICAL: Error thrown while handling message...
Thanks for any hints on how to set up monolog to get messenger errors emailed too.
In fact, the monolog swift_mailer type uses SwiftMailerHandler,
which also implements the reset interface and uses a memory spool by default. The memory spool keeps all emails in a buffer until it is destructed, i.e. until the end of the request:
onKernelTerminate
onCliTerminate
or until the reset method is called. This means that for a messenger worker no emails will ever be sent, because there is no instant flush: all of them will be kept in the in-memory buffer, and probably lost if the process is killed.
To solve this, you can just disable the default spool memory setting for swiftmailer.
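As a rough sketch (the exact file and existing keys depend on your setup), that means removing or commenting out the spool section in config/packages/swiftmailer.yaml so SwiftMailer sends each email immediately instead of buffering it:

swiftmailer:
    url: '%env(MAILER_URL)%'
    # With the memory spool disabled (removed or commented out), emails are
    # sent immediately instead of waiting for the kernel/worker to terminate.
    # spool: { type: 'memory' }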
Another solution is to flush your emails after the WorkerMessageFailedEvent event gets fired; you can implement an event subscriber to do this, as shown below.
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\Messenger\Event\WorkerMessageFailedEvent;
use Symfony\Component\Messenger\Event\WorkerMessageHandledEvent;
use Symfony\Contracts\Service\ResetInterface;

/**
 * Class ServiceResetterSubscriber.
 */
class ServiceResetterSubscriber implements EventSubscriberInterface
{
    protected ResetInterface $servicesResetter;

    public function __construct(ResetInterface $servicesResetter)
    {
        $this->servicesResetter = $servicesResetter;
    }

    public function resetServices(): void
    {
        $this->servicesResetter->reset();
    }

    public static function getSubscribedEvents(): array
    {
        return [
            WorkerMessageFailedEvent::class => ['resetServices', 10],
        ];
    }
}
Register your service with the right argument:
App\EventSubscriber\ServiceResetterSubscriber:
    arguments: ['@services_resetter']
By the way, without this (and without a buffer limit) your app will leak memory and no emails will ever be sent.
Another trick:
Make sure that your message implements \JsonSerializable to get the message content in your logs, because Messenger uses its monolog logger directly and its context serializer uses json_encode for serialization.
That's why we need to customize the message's JSON representation when it is encoded with json_encode.

How to handle commands sent from saga in axon framework

Using a saga: given an event EventA, the saga starts and sends a command (or several).
How can we make sure that the command is sent successfully and that the actual logic in the other micro-service did not throw, etc.?
Let's have an example of email saga:
When a user registers, we create a User aggregate which publishes a UserRegisteredEvent; a saga will be created, and this saga is responsible for making sure that the registration email is sent to the user (the email may contain a verification key, welcome message, etc.).
Should we use:
commandGateway.sendAndWait with a try/catch -> does it scale?
commandGateway.send plus a deadline and some kind of "fail event" like SendEmailFailedEvent -> requires associating a "token" with the commands so the "associationProperty" can be matched to the correct saga that sent the SendRegistrationEmailCommand
commandGateway.send(...).handle(...) -> inside handle, can we reference the eventGateway/commandGateway that were in MyEmailSaga?
On error, do we send an event? Or can we modify/call a method on the saga instance we had? If there is no error, the other service will have sent an event like RegistrationEmailSentEvent, so the saga will end.
use a deadline, because we just use "send" and do not handle the eventual error of the command, which may have failed to be sent (the other service is down, etc.)
something else?
Or a combination of all?
How should the errors below be handled? (with a deadline, .handle(...), or something else)
Errors could be:
command has no handlers (no service up, etc)
command was handled but an exception is raised in the other service and no event is sent (no try/catch in the other service)
command was handled, an exception was raised and caught, and the other service publishes an event to notify that it failed to send the email (the saga will receive the event and take the appropriate action depending on the event type and data provided -> maybe the email address is wrong or does not exist, so there is no need to retry)
any other errors I missed?
@Saga
public class MyEmailSaga {

    @Autowired
    transient CommandGateway commandGateway;
    @Autowired
    transient EventGateway eventGateway;
    @Autowired
    transient SomeService someService;

    String id;
    SomeData state;
    /** count how many times we retried sending the email so we can apply logic to it */
    int sendRetryCount;

    @StartSaga
    @SagaEventHandler(associationProperty = "id")
    public void on(UserRegisteredEvent event) {
        id = event.getApplicationId();
        //state = event........
        // what are the possibilities here?
        // Can we use sendAndWait? But it does not scale very well, right?
        commandGateway.send(new SendRegistrationEmailCommand(...));
        // Is a deadline good here, since we do not handle the "send" of the command?
    }

    // Use a @DeadlineHandler to retry?
    @DeadlineHandler(deadlineName = "retry_send_registration_email")
    public void on() {
        // resend the command and re-schedule a deadline, etc.
    }

    @EndSaga
    @SagaEventHandler(associationProperty = "id")
    public void on(RegistrationEmailSentEvent event) {
    }
}
EDIT (after accepted answer):
Mainly two options (sorry, but the code below is Kotlin):
First option:
commandGateway.send(SendRegistrationEmailCommand(...))
    .handle { result, exception ->
        if (exception != null) {
            // send an event (could be caught by the same saga eventually) or send a command, or both
        } else {
            // send an event (could be caught by the same saga eventually) or send a command, or both
        }
    }
    // If you do not use handle(...) then you can use thenApply as well
    .thenApply { eventGateway.publish(SomeSuccessfulEvent(...)) }
    .thenApply { commandGateway.send(SomeSuccessfulSendOnSuccessCommand) }
Second option:
Use a deadline to make sure the saga does something if the SendRegistrationEmailCommand failed and you did not receive any event about the failure (when you do not handle the result of the command you sent).
You can of course use deadlines for other purposes.
When the SendRegistrationEmailCommand is handled successfully, the receiver publishes an event so the saga is notified and can act on it.
This could be a RegistrationEmailSentEvent or a RegistrationEmailSendFailedEvent.
Summary:
It seems best to use handle() only to detect that the command failed to be sent or that the receiver threw an unexpected exception; if so, publish an event for the saga to act on.
In case of success, the receiver should publish the event and the saga will listen for it (and possibly register a deadline just in case); the receiver may also publish an event to notify of an error instead of throwing, and the saga will listen for this event as well.
Ideally, you would use the asynchronous options to deal with errors. This would be either commandGateway.send(command) or commandGateway.send(command).thenApply(). If the failures are business-logic related, then it may make sense to emit events on these failures. A plain gateway.send(command) then makes sense; the Saga can react to the events returned as a result. Otherwise, you will have to deal with the result of the command.
Whether you need to use sendAndWait or just send().then... depends on the activity you need to perform when it fails. Unfortunately, when dealing with results asynchronously, you cannot safely modify the state of the Saga anymore. Axon may have persisted the state of the Saga already, causing these changes to be lost. sendAndWait resolves that. Scalability is not often an issue, because different Sagas can be executed in parallel, depending on your processor configuration.
The Axon team is currently looking at possible APIs that would allow for safe asynchronous execution of logic in Sagas, while still keeping guarantees about thread safety and state persistence.

Distinguish between create and update event of Managed Objects

I'm using Apama in Cumulocity. Whenever a managed object (device) is created in Cumulocity, I would like to provide it with some initial parameters, in this case the required interval within which the device needs to report to Cumulocity before it is considered unavailable.
My problem is that in Apama I don't seem to have any way to distinguish between create and update events. So if I receive a managed object, add some parameters to it and send it back to the managed object channel, I end up in a loop.
I can of course do some check after the event has been received, but I would prefer to filter on only the create events of managed objects and not perform any IF checks.
Is there any way I can filter on only create events? What is the difference between the CHANNEL and the UPDATE_CHANNEL? It doesn't seem to make a difference which one I use.
My current code looks as follows. What I want to achieve is avoid the IF statement and filter directly on create events in the listener.
monitor InitializeDevice {
    action onload() {
        monitor.subscribe(ManagedObject.CHANNEL);
        on all ManagedObject(type = "c8y_MQTTDevice") as mo {
            log "###Received managed object. Content is: " + mo.toString() at INFO;
            if (mo.params.hasKey("c8y_RequiredAvailability")) {
                // Assuming an interval has already been set, do nothing.
                log "###Received managed object with required availability fragment. Doing nothing." at INFO;
            }
            else {
                // Set the response interval on the managed object
                dictionary<string,any> params := mo.params;
                dictionary<string,any> paramssub := new dictionary<string,any>;
                paramssub.add("responseInterval",3);
                params.add("c8y_RequiredAvailability",paramssub);
                mo.params := params;
                log "###Added required interval to managed object. Content is: " + mo.toString() at INFO;
                send mo to ManagedObject.UPDATE_CHANNEL;
            }
        }
    }
}
When I execute this monitor and create a new managed object, this is what is printed to the logs:
2019-05-27 16:15:07.310 INFO [12648] - InitializeDevice [6] ###Received managed object. Content is: com.apama.cumulocity.ManagedObject("5708279","c8y_MQTTDevice","some-device",[],[],[],[],[],[],{},{"c8y_IsDevice":any(dictionary<any,any>,{}),"owner":any(string,"some-owner")})
2019-05-27 16:15:07.310 INFO [12648] - InitializeDevice [6] ###Added required interval to managed object. Content is: com.apama.cumulocity.ManagedObject("5708279","c8y_MQTTDevice","some-device",[],[],[],[],[],[],{},{"c8y_IsDevice":any(dictionary<any,any>,{}),"c8y_RequiredAvailability":any(dictionary<string,any>,{"responseInterval":any(integer,3)}),"owner":any(string,"some-owner")})
2019-05-27 16:15:07.310 INFO [12648] - InitializeDevice [6] ###Received managed object. Content is: com.apama.cumulocity.ManagedObject("5708279","c8y_MQTTDevice","some-device",[],[],[],[],[],[],{},{"c8y_IsDevice":any(dictionary<any,any>,{}),"c8y_RequiredAvailability":any(dictionary<string,any>,{"responseInterval":any(integer,3)}),"owner":any(string,"some-owner")})
2019-05-27 16:15:07.310 INFO [12648] - InitializeDevice [6] ###Received managed object with required availability fragment. Doing nothing.
2019-05-27 16:15:08.244 INFO [7868] - InitializeDevice [6] ###Received managed object. Content is: com.apama.cumulocity.ManagedObject("5708279","c8y_MQTTDevice","some-device",[],[],[],[],[],[],{},{"c8y_Availability":any(dictionary<any,any>,{any(string,"lastMessage"):any(dictionary<any,any>,{any(string,"date"):any(integer,27),any(string,"day"):any(integer,1),any(string,"hours"):any(integer,16),any(string,"minutes"):any(integer,15),any(string,"month"):any(integer,4),any(string,"seconds"):any(integer,7),any(string,"time"):any(integer,1558966507220),any(string,"timezoneOffset"):any(integer,-120),any(string,"year"):any(integer,119)}),any(string,"status"):any(string,"AVAILABLE")}),"c8y_Connection":any(dictionary<any,any>,{any(string,"status"):any(string,"DISCONNECTED")}),"c8y_IsDevice":any(dictionary<any,any>,{}),"c8y_RequiredAvailability":any(dictionary<any,any>,{any(string,"responseInterval"):any(integer,3)}),"owner":any(string,"some-owner")})
2019-05-27 16:15:08.244 INFO [7868] - InitializeDevice [6] ###Received managed object with required availability fragment. Doing nothing.
Is there any way to filter directly on create events? Why do I receive two print statements after the update?
Thanks
Mathias
After some investigation it seems that there isn't any way to distinguish the creation and update messages. So the code you are using at the moment is probably the only way to do this.
Edited:
BUT for the second part of the question:
Why do I receive two print statements after the update?
1. c8y sends the managed object to MO.CHANNEL -> Apama monitor
2. the monitor adds c8y_RequiredAvailability and sends the managed object with the update to MO.UPDATE_CHANNEL -> c8y
3. c8y sends the updated managed object containing c8y_RequiredAvailability -> Apama monitor
4. c8y sends the managed object + c8y_Availability -> Apama monitor
So 3 is the confirmation of your update, and 4 is c8y asynchronously sending the final update with availability on the MO.CHANNEL.
To be explicit - the MO.CHANNEL is where created and updated objects arrive into Apama. Sending on that channel shouldn't have an effect. The MO.UPDATE_CHANNEL is the request channel where you send updates, which may then trigger further messages on the MO.CHANNEL as c8y processes them.

Workday: Put_Customer returning an error

We are using Snaplogic to load records into Workday. Currently, we are extracting customer records from the source and trying to load them into Workday using the Put_Customer object of the Revenue_Management web service.
I was getting the following error:
But I'm not getting any category information from the source. So I tried putting the value of Customer_Category_Reference as 1, but I ended up getting the following error.
The documentation for Workday is not helpful and this has been a blocker for me for some time now.
Any help will be appreciated.
Update:
I am trying to get customer categories using the Get_Customer_Categories object of the Revenue_Management web service in Snaplogic, but I am getting the following error:
Failure: Soap fault, Reason: Processing error occurred. The task submitted is not authorized., Resolution: Address SOAP fault message and retry
Unfortunately I don't have access to a tenant at this time to validate. However, it is likely to work based on prior experience. Perhaps you could create a customer in Workday through the GUI, then do a Get_Customer API call and note the category reference. Then use that in your Put_Customer call.
If you look at the API documentation, you will find that Put_Customer accepts a WID in the Customer_WWS_Data object. If you search for "Customer Categories" in Workday, you will likely find the report of the same name. Just select the category that you want your newly loaded customers to default to (click on the magnifying glass, then on the ellipsis, Integration IDs, View IDs). The Workday ID will appear at the top.
I have not used the Revenue Management API, but my code for creating a position reference in the Compensation API is probably very similar to what you need to do for the Customer Category reference:
public static Position_ElementObjectType getPositionReference(string WID) {
    return new Position_ElementObjectType {
        ID = new Position_ElementObjectIDType[] {
            new Position_ElementObjectIDType {
                type = "WID",
                Value = WID
            }
        }
    };
}
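I have not used the Revenue_Management proxies myself, so the type names below are assumptions based on how Workday WWS client classes are usually generated (check your generated code for the exact names); by analogy with the position reference above, a customer category reference built from the WID might look like this:

// Hypothetical sketch: Customer_CategoryObjectType and Customer_CategoryObjectIDType
// are assumed names; verify them against the generated Revenue_Management client.
public static Customer_CategoryObjectType GetCustomerCategoryReference(string wid)
{
    return new Customer_CategoryObjectType
    {
        ID = new Customer_CategoryObjectIDType[]
        {
            new Customer_CategoryObjectIDType
            {
                type = "WID",   // reference the category by its Workday ID
                Value = wid
            }
        }
    };
}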

When using MDA, should you differentiate between idempotent and non-idempotent event handlers?

The question assumes the use of Event Sourcing.
When rebuilding current state by replaying events, event handlers should be idempotent. For example, when a user successfully updates their username, a UsernameUpdated event might be emitted, the event containing a newUsername string property. When rebuilding current state, the appropriate event handler receives the UsernameUpdated event and sets the username property on the User object to the newUsername property of the UsernameUpdated event object. In other words, the handling of the same message multiple times always yields the same result.
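For illustration, a minimal sketch of such a replay-safe handler (the type and property names here are invented for the example, not taken from any particular framework):

// The handler is a pure state transition with no side effects, so replaying
// the same event any number of times leaves the User in the same state.
public class UsernameUpdated
{
    public string NewUsername { get; set; }
}

public class User
{
    public string Username { get; private set; }

    public void Apply(UsernameUpdated @event)
    {
        Username = @event.NewUsername;
    }
}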
However, how does such an event handler work when integrating with external services? For example, if the user wants to reset their password, the User object might emit a PasswordResetRequested event, which is handled by a piece of code that issues a command to a 3rd-party service to send an SMS. Now when the application state is rebuilt, we do NOT want to re-send this SMS. How is this situation best avoided?
There are two kinds of messages involved in the interaction: commands and events.
I do not regard the system messages in a messaging infrastructure the same as domain events. Command message handling should be idempotent. Event handlers typically would not need to be.
In your scenario I could tell the aggregate root 100 times to update the user name:
public UserNameChanged ChangeUserName(string username, IServiceBus serviceBus)
{
    if (_username.Equals(username))
    {
        return null;
    }

    serviceBus.Send(new SendEMailCommand(*data*));

    return On(new UserNameChanged { UserName = username });
}

public UserNameChanged On(UserNameChanged @event)
{
    _username = @event.UserName;
    return @event;
}
The above code would result in a single event so reconstituting it would not produce any duplicate processing. Even if we had 100 UserNameChanged events the result would still be the same as the On method does not perform any processing. I guess the point to remember is that the command side does all the real work and the event side is used only to change the state of the object.
The above isn't necessarily how I would implement the messaging but it does demonstrate the concept.
I think you are mixing two separate concepts here. The first is reconstructing an object, where the handlers are all internal methods of the entity itself. Sample code from the Axon framework:
public class MyAggregateRoot extends AbstractAnnotatedAggregateRoot {

    @AggregateIdentifier
    private String aggregateIdentifier;
    private String someProperty;

    public MyAggregateRoot(String id) {
        apply(new MyAggregateCreatedEvent(id));
    }

    // constructor needed for reconstruction
    protected MyAggregateRoot() {
    }

    @EventSourcingHandler
    private void handleMyAggregateCreatedEvent(MyAggregateCreatedEvent event) {
        // make sure identifier is always initialized properly
        this.aggregateIdentifier = event.getMyAggregateIdentifier();
        // do something with someProperty
    }
}
Surely you wouldn't put code that talks to an external API inside an aggregate's method.
The second is replaying events on a bounded context, which could cause the problem you are talking about; depending on your case, you may need to divide your event handlers into clusters.
See the Axon framework documentation on this point, "Replaying Events on a Cluster", to get a better understanding of the problem and the solution they went with.
TLDR; store the SMS identifier within the event itself.
A core principle of event sourcing is "idempotency". Events are idempotent, meaning that processing them multiple times will have the same result as if they were processed once. Commands are "non-idempotent", meaning that the re-execution of a command may have a different result for each execution.
The fact that aggregates are identified by UUID (with a very low probability of duplication) means that the client can generate the UUIDs of newly created aggregates. Process managers (a.k.a. "sagas") coordinate actions across multiple aggregates by listening to events in order to issue commands, so in this sense the process manager is also a "client". Because the process manager issues commands, it cannot be considered "idempotent".
One solution I came up with is to include the UUID of the soon-to-be-created SMS in the PasswordResetRequested event. This allows the process manager to only create the SMS if it does not yet already exist, hence achieving idempotency.
Sample code below (C++ pseudo-code):
// The event indicating a password reset was successfully requested.
class PasswordResetRequested : public Event {
public:
    PasswordResetRequested(const Uuid& userUuid, const Uuid& smsUuid, const std::string& passwordResetCode);

    const Uuid userUuid;
    const Uuid smsUuid;
    const std::string passwordResetCode;
};

// The user aggregate root.
class User {
public:
    PasswordResetRequested requestPasswordReset() {
        // Realistically, the password reset functionality would have its own class
        // with functionality like checking request timestamps, generating the random
        // code, etc.
        Uuid smsUuid = Uuid::random();
        passwordResetCode_ = generateRandomString();
        return PasswordResetRequested(userUuid_, smsUuid, passwordResetCode_);
    }

private:
    Uuid userUuid_;
    std::string passwordResetCode_;
};

// The process manager (aka, "saga") for handling password resets.
class PasswordResetProcessManager {
public:
    void on(const PasswordResetRequested& event) {
        if (!smsRepository_.hasSms(event.smsUuid)) {
            smsRepository_.queueSms(event.smsUuid, "Your password reset code is: " + event.passwordResetCode);
        }
    }
};
There are a few things to note about the above solution:
Firstly, while there is a (very) low possibility that the SMS UUIDs can conflict, it can actually happen, which could cause several issues.
Communication with the external service is prevented. For example, if user "bob" requests a password reset that generates an SMS UUID of "1234", then (perhaps 2 years later) user "frank" requests a password reset that generates the same SMS UUID of "1234", the process manager will not queue the SMS because it thinks it already exists, so frank will never see it.
Incorrect reporting in the read model. Because there is a duplicate UUID, the read side may display the SMS sent to "bob" when "frank" is viewing the list of SMSes the system sent him. If the duplicate UUIDs were generated in quick succession, it is possible that "frank" would be able to reset "bob"s password.
Secondly, moving the SMS UUID generation into the event means you must make the User aggregate aware of the PasswordResetProcessManager's functionality (but not of the process manager itself), which increases coupling. However, the coupling here is loose, in that the User is unaware of how to queue an SMS, only that an SMS should be queued. If the User class were to send the SMS itself, you could run into the situation in which the SmsQueued event is stored while the PasswordResetRequested event is not, meaning that the user would receive an SMS but the generated password reset code was not saved on the user, and so entering the code would not reset the password.
Thirdly, if a PasswordResetRequested event is generated but the system crashes before the PasswordResetProcessManager can create the SMS, then the SMS will eventually be sent, but only when the PasswordResetRequested event is re-played (which might be a long time in the future). E.g., the "eventual" part of eventual consistency could be a long time away.
The above approach works (and I can see that it should also work in more complicated scenarios, like the OrderProcessManager described here: https://msdn.microsoft.com/en-us/library/jj591569.aspx). However, I am very keen to hear what other people think about this approach.