Orion Broker: Renew subscriptions without changing message format - fiware-orion

I have one instance of the Orion context broker running and several other services receiving notifications from it through "ONCHANGE" subscriptions.
I also have a simple script that checks the existing subscriptions through GET /v2/subscriptions and then renews them as needed. However, this endpoint does not return the format (XML/JSON) in which the data is sent to each subscriber.
The problem is that different services require different formats, and without knowing the initial Accept header it is not possible to renew the subscription correctly, since the format is also updated when any of the update methods is called (POST /v1/updateContextSubscription or PUT /v1/contextSubscriptions/{subscriptionID}), defaulting to XML.
Is there a way I can know the format of a subscription without accessing the Mongo database directly? Or is there any update method that does not change the format of the messages set up initially?

XML has been deprecated since Orion 0.23.0 (more info here). Thus, I recommend adapting all the notification receivers to process only JSON and always updating subscriptions using JSON.
Otherwise, your subscription-update program needs to keep track of which format is being used by each receiver (in a URL->format table) in order to choose the right one in each case.
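As an illustration, here is a minimal Python sketch of that approach using the requests library. The Orion host, the receiver URLs and the table contents are made up, and the NGSIv1 renewal payloads (especially the XML one) should be checked against the documentation for your Orion version.

```python
import requests

ORION = "http://localhost:1026"  # assumed Orion endpoint

# Manually maintained table: which format each notification receiver expects.
RECEIVER_FORMAT = {
    "http://service-a.example.com/notify": "JSON",
    "http://service-b.example.com/notify": "XML",
}

def renew_subscription(subscription_id, reference_url, duration="P1M"):
    """Renew a subscription using the format its receiver was set up with."""
    fmt = RECEIVER_FORMAT.get(reference_url, "JSON")
    if fmt == "JSON":
        resp = requests.post(
            f"{ORION}/v1/updateContextSubscription",
            json={"subscriptionId": subscription_id, "duration": duration},
            headers={"Accept": "application/json"},
        )
    else:
        # XML renewal; verify the exact payload schema in the NGSIv1 docs.
        body = (
            "<?xml version='1.0' encoding='UTF-8'?>"
            "<updateContextSubscriptionRequest>"
            f"<duration>{duration}</duration>"
            f"<subscriptionId>{subscription_id}</subscriptionId>"
            "</updateContextSubscriptionRequest>"
        )
        resp = requests.post(
            f"{ORION}/v1/updateContextSubscription",
            data=body,
            headers={"Content-Type": "application/xml",
                     "Accept": "application/xml"},
        )
    resp.raise_for_status()
    return resp.json() if fmt == "JSON" else resp.text
```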

Related

Design Commands And Events while Handling External partner with Axon 4

This is a question related to designing command handling with Axon 4.
Let's say I have a domain that models the concept of a Payment.
The actual payment will be done by an external Partner. I want to track it in my system via the following events: a Payment Request Was Issued followed by either
Partner Agreed the Payment or Partner Declined the Payment.
Every event issued by the command should be enlisted in the same database transaction.
What would be the best practice to actually call my partner in Axon 4?
Here's what I've done so far:
Have one command named RequestPaymentCommand
This command will be handled by a Payment Aggregate like this:
do some checks
apply the event PaymentRequestWasIssued
and then call the external partner and, depending on the result, apply either PaymentAccepted or PaymentRefused
In this answer from Stack Overflow, it is said that
All the data that you need to apply the event should normally be available in the command
With this statement in mind, I understand that I should create as many Commands as Events? But in this case, what is the point of all these commands? Should I end up with something like:
My command RequestPaymentCommand will generate the PaymentRequestWasIssued event.
Then, from somewhere, I call my partner and send another command (how should I name it?) that will generate the event based on the result from the partner?
The actual payment will be done by an external Partner
This means that your application is not the source of truth and it should not try to behave like one. It should only observe what is happening in the remote system and possibly react to remote events. To "observe" could mean to duplicate/copy the remote events into local databases, without modifications, just for caching or display reasons. Your system should not give other interpretations to these events than those given by their source.
After the remote events are copied locally, your system can react to them. This could mean that a Saga, after it receives Partner Agreed the Payment, sends an UnlockFeature command to a local Aggregate (see DDD).
With this statement in mind, I understand that I should create as many Commands as Events? But in this case, what is the point of all these commands?
This is an indication that those are not your events: you should not emit them from your code; in the worst case, you store them and react to them (in a Saga/Process manager). This means that you should discover the local business processes and model them as such: they react to events by sending commands.
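For illustration only, here is a framework-agnostic Python sketch of that reaction. Axon 4 itself is a Java framework, where this would be a Saga with @SagaEventHandler methods sending commands through a CommandGateway; the event, command and bus names below are made up.

```python
from dataclasses import dataclass

@dataclass
class PartnerAgreedThePayment:
    """Remote event, copied locally without reinterpretation."""
    payment_id: str

@dataclass
class UnlockFeature:
    """Local command owned by your own domain."""
    payment_id: str

class PaymentProcessManager:
    """Reacts to copied remote events by sending local commands."""

    def __init__(self, command_bus):
        self.command_bus = command_bus

    def on_event(self, event):
        if isinstance(event, PartnerAgreedThePayment):
            self.command_bus.send(UnlockFeature(payment_id=event.payment_id))
```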

Is there any way to avoid evaluation of new subscriptions over existing entities?

When I append a new subscription in ORION, it automatically evaluates the condition and invokes the designated endpoint. I want the new subscription to affect only entities appended later.
Is there any way to avoid this, or do I have to handle it at the endpoint level?
Related to this, is there any batch option to create several subscriptions at the same time for an initial load of the platform?
Orion Version: 1.2.0
Regarding initial notification:
No, there isn't.
We understand that for some use cases this is not convenient. However, behaving in the opposite way ruins other use cases which need to know the "initial state" before starting to get notifications corresponding to actual changes. The best solution to make everybody happy is to make this configurable, so each client can choose what it prefers. This feature is currently on our roadmap (see this issue in github.com).
While this gets implemented in Orion, in your case a possible workaround is just to ignore the first received notification belonging to a subscription (you can identify the subscription to which a notification belongs by the subscriptionId field in the notification payload). All the following notifications belonging to that subscription will correspond to actual changes.
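A rough sketch of that workaround in Python, using Flask for the notification endpoint (the framework, the route and the process_changes helper are just illustrative; in production the set of already-seen subscriptions should be persisted rather than kept in memory):

```python
from flask import Flask, request

app = Flask(__name__)
seen_subscriptions = set()  # subscriptionIds whose initial notification was already received

@app.route("/notify", methods=["POST"])
def notify():
    payload = request.get_json(force=True)
    sub_id = payload.get("subscriptionId")
    if sub_id not in seen_subscriptions:
        # First notification for this subscription: the initial one, ignore it.
        seen_subscriptions.add(sub_id)
        return "", 204
    # Subsequent notifications correspond to actual changes.
    process_changes(payload)  # hypothetical handler for your own logic
    return "", 204
```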
Regarding batch option to create several subscriptions
No, there isn't any operation like that.
EDIT: the possibility of avoiding the initial notification has finally been implemented in Orion. Details are in this section of the documentation. It is now in the master branch (so if you use the fiware/orion:latest Docker image you will get it) and will be included in the next Orion version (2.2.0).

How do I guarantee unique subscriptions in Orion Context Broker?

In my setup I have one application that should subscribe to a specific kind of context change.
The application currently performs the subscription at startup time. However, if I restart the application, the subscription is duplicated. To overcome this issue I started to keep track of subscriptions in a database, so that I have an association between my application ID and the latest subscription ID.
Is there any way to achieve a similar result in Orion (let's call it something like "named subscriptions") without using an external database?
There is a planned subscription "browsing" operation in the Orion development roadmap (see operation ID 45 in this document) that could help in your case.
However, until this operation gets implemented, one alternative to the one you mention (i.e. keeping subscription info in an external DB) would be to access the Orion DB itself to get the subscription information. The data model (described here) is pretty simple and getting the info is quite easy if you are familiar with MongoDB. Note that this solution requires access to the Orion DB (i.e. it is feasible if you control your own instance of Orion).
EDIT: Given that different subscriptions may use the same reference, I'd recommend using the _id field to identify each subscription (_id field values are unique). NGSI doesn't include metadata in subscriptions, but you may associate subscription IDs with applications using Orion itself, e.g. SubscriptionAssociation entities with two attributes: one for the application name and another for the subscription ID being associated.
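For instance, such an association entity could be created in Orion itself with a plain NGSIv2 entity creation request (sketch using Python requests; the entity id, attribute names and values are just examples):

```python
import requests

ORION = "http://localhost:1026"  # assumed Orion endpoint

requests.post(f"{ORION}/v2/entities", json={
    "id": "SubscriptionAssociation1",
    "type": "SubscriptionAssociation",
    "application": {"value": "my-application", "type": "Text"},
    "subscriptionId": {"value": "51c04a21d714fb3b37d7d5a7", "type": "Text"},
}).raise_for_status()
```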
EDIT: since Orion 0.25.0, the GET /v2/subscriptions operation allows you to browse existing subscriptions.
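With that operation available, the duplicate-on-restart problem can be avoided by browsing the existing subscriptions at startup and creating a new one only if no equivalent is found. A minimal sketch in Python (matching on the notification URL; field names as in recent Orion versions, so check them against your release):

```python
import requests

ORION = "http://localhost:1026"  # assumed Orion endpoint

def ensure_subscription(payload):
    """Create the subscription only if none with the same notification URL exists."""
    wanted_url = payload["notification"]["http"]["url"]
    for sub in requests.get(f"{ORION}/v2/subscriptions").json():
        if sub.get("notification", {}).get("http", {}).get("url") == wanted_url:
            return sub["id"]                       # reuse the existing subscription
    resp = requests.post(f"{ORION}/v2/subscriptions", json=payload)
    resp.raise_for_status()
    return resp.headers["Location"].rsplit("/", 1)[-1]  # id of the new subscription
```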

Rest API needs additional operations - how to structure?

My application requires users to sign up before they can use the service, and to do that they create an Application. The initial plan of the interface is as follows...
POST /Users/Applications - Creates an application and returns a unique identifier.
GET /Users/Applications/{id} - Retrieves an existing application.
PUT /Users/Applications/{id} - Updates an existing application.
DELETE /Users/Applications/{id} - Deletes an existing application.
This seems very clean and logical and makes the best use of the HTTP verbs. However, what if I now need to do other operations on an application, e.g.
ActivateApplication - once all of the data is in the system by using PUT, I now want the users to activate their application. This isn't just a matter of updating a status on the application using PUT; there are several additional jobs that should be done to activate an application, such as emailing the HR dept. to inform them that a new application has arrived.
PrintApplication - when called from the client prints the application to the office printer. (Not an ideal example but you get the idea I'm sure!)
How would I structure my REST interface to handle this type of request? Maybe something like this...
POST /Users/Applications/{id}/print
POST /Users/Applications/{id}/activate
...for activate, I'm changing state, so I believe I need to use POST. I understand REST is about documents, but how do I structure my API when I need to perform operations on documents, not just get and update the document itself?
This article by Martin Fowler states that:
Some people incorrectly make a correspondence between POST/PUT and create/update. The choice between them is rather different to that.
When I try to decide between PUT and POST, I follow this rule:
PUT -> Idempotent
POST -> Not Idempotent
Idempotent means that there is no difference between performing the operation once or multiple times: the DB data will be the same after the first operation and after each subsequent one.
In the case of non-idempotent operations, every performed operation changes the data in the DB.
That's why PUT is usually used for UPDATE operations and POST for CREATE. But this is not the correct rule.
Coming back to your question, in my opinion you are using POST correctly as a non-idempotent action, because multiple calls to ActivateApplication will send multiple emails.
Edited
As @elolos has commented, following the Single Responsibility Principle, sending an e-mail should be a separate responsibility, not directly linked to updating the state. A better approach would be to handle an event when the property changes in order to trigger processes like sending emails. This way, the ActivateApplication operation may be idempotent and be called using the PUT HTTP method.
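A small sketch of that edited suggestion in Python, using Flask (the framework, the in-memory store and the publish_event helper are illustrative): PUT sets the state idempotently, and the e-mail is triggered by an event only when the state actually changes.

```python
from flask import Flask, jsonify

app = Flask(__name__)
applications = {}  # stand-in for a real data store: id -> application dict

@app.route("/Users/Applications/<app_id>/activate", methods=["PUT"])
def activate_application(app_id):
    application = applications.get(app_id)
    if application is None:
        return jsonify(error="application not found"), 404
    if application["status"] != "active":
        application["status"] = "active"
        publish_event("ApplicationActivated", app_id)  # a listener sends the HR e-mail
    # Repeating the call leaves the same state and raises no further events,
    # so the operation is idempotent and fits PUT.
    return jsonify(application), 200
```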

New/Read Flags in CQRS

I am currently drafting a concept for a (mostly) HTML-based collaboration suite which I plan to implement using CQRS. This software will contain messages that can be sent to the user (which can either be read or unread, obviously) and other elements which shall be marked "new" if they were created after the last user login.
Hardly something new, but I am not quite sure how that would be correctly implemented using CQRS. As I understand it, change of any kind should, without exception, only be possible via Commands. But creating commands for every single (new) element that is being accessed seems a bit too much, not to mention the overhead.
I don't know if I need it, but what would be the best way to implement a last-accessed timestamp on elements? Basically the same problem as above, with the difference that the change happens EVERY time the element is accessed, not only the first time for each user.
CQRS seems to be an awesome concept but it really needs more learning material. Can't wait till a book is released :)
Regards
[Edit] No one? Wouldn't have thought that this is such a complicated issue..
I assume you're using event sourcing, in which case, once you allow your query service/event handlers to raise appropriate events, this becomes fairly easy to solve.
For your messages/elements: when handling the specific creation events of your elements, either add to existing event handlers or create additional ones to store to a messages read model, with a status of new and appropriate information about the element.
As part of your user login, I don't see why you can't raise a user-logged-in event (from the security/query service, depending on how you're implementing authentication) to say the user has logged in. An event handler could capture this and write the last-login timestamp to a specific user-last-login read model.
In addition, the user-logged-in event handler would need to update all the new messages (for that user) to an unread status. Seeing as we're changing the status of the messages as the user logs in, do you still need to store the last-login timestamp?
For your last-accessed timestamp, perhaps you could just work this into your query service as queries for your different elements complete. Raise a query-completed event with element id/type information.
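To make the read-model side concrete, here is a rough Python sketch of the handlers described above (the event shapes, dictionary stores and field names are all made up; a real projection would write to a database):

```python
from datetime import datetime, timezone

messages_read_model = {}    # message_id -> {"user": ..., "status": ...}
user_last_login = {}        # user_id -> timestamp
element_last_accessed = {}  # (element_type, element_id) -> timestamp

def on_message_created(event):
    # Creation events populate the messages read model with status "new".
    messages_read_model[event["message_id"]] = {
        "user": event["user_id"],
        "status": "new",
    }

def on_user_logged_in(event):
    # Record the last-login timestamp for the user ...
    user_last_login[event["user_id"]] = datetime.now(timezone.utc)
    # ... and flip that user's "new" messages to "unread".
    for message in messages_read_model.values():
        if message["user"] == event["user_id"] and message["status"] == "new":
            message["status"] = "unread"

def on_query_completed(event):
    # Raised by the query service after serving an element; drives the
    # last-accessed timestamp.
    key = (event["element_type"], event["element_id"])
    element_last_accessed[key] = datetime.now(timezone.utc)
```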