Fiware-Orion: Geolocation

Hi, I'm a student and I'm working with the broker for the first time. I understand how entities are created and how they are updated through "update" queries. My question is: can I create an entity containing variables (e.g. geolocation) defined with a "null" or "zero" value, and then initialize them later with the values I'm interested in? So as to have dynamic rather than static variables (i.e. ones that don't need a manual user update)?
Or do we need interaction with the CEP to do this?
From what I have read in the fiware-orion guide, when I create an entity (e.g. a car with a speed attribute and a coordinates attribute of type geo:point), the values of these two attributes must be set statically (e.g. speed 100 and coordinates 40.257, 2.187). As I understand it, I can then only change these values by issuing an update query. So my question is:
Is it possible to update the value of the attributes holding the car's position or speed dynamically, i.e. without having to type the values in at the keyboard? Or does this require the use of Orion's CEP?
In case I haven't explained myself well: more generally, I would like to know whether it is possible to follow the progress of a moving car without me having to enter the values from the keyboard.
Thanks.

Orion Context Broker exposes a REST-based API that (among other things) allows you to create, update and query entities. From the point of view of Orion, it doesn't matter who invokes the API: it can be done manually (for instance, using Postman or curl) or by an automated system developed by you or a third party (for instance, software running on a sensor in the car that measures the speed and periodically sends an update over a wireless communication network).
From a client-server point of view (in case you are familiar with these concepts), Orion takes the role of server of the API, and the one updating the speed (either manually or automatically) takes the role of client of the API.
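For instance, here is a minimal Python sketch of such an automated client, assuming Orion's NGSI v2 API is reachable on localhost:1026 and using a hypothetical read_sensors() function as a stand-in for the real GPS/speed hardware:

    import time
    import requests

    ORION = "http://localhost:1026/v2"  # assumed local Orion endpoint

    # Create the entity once, with placeholder values that will be updated later.
    requests.post(ORION + "/entities", json={
        "id": "Car1",
        "type": "Car",
        "speed": {"value": 0, "type": "Number"},
        "position": {"value": "0, 0", "type": "geo:point"},
    })

    def read_sensors():
        # Hypothetical: in a real deployment this would read the car's GPS and speedometer.
        return 100, "40.257, 2.187"

    while True:
        speed, position = read_sensors()
        # PATCH only touches the listed attributes; the rest of the entity is left as is.
        requests.patch(ORION + "/entities/Car1/attrs", json={
            "speed": {"value": speed, "type": "Number"},
            "position": {"value": position, "type": "geo:point"},
        })
        time.sleep(5)  # push a fresh reading every few seconds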

Related

Multiple GET Rest APIs on different fields of a table or One Rest API in DDD

I want to provide functionality so that clients of my service can get data based on different fields, or sometimes a combination of fields, e.g.
getByA
getByB
getByC
getByAandB
getByAandC
In domain-driven design, when designing the GET APIs, which of the following two should I do:
Should I create an individual GET API for each such piece of functionality I want to provide?
Should I create one GET API covering all the possible lookups by accepting all these fields as query parameters, e.g.
get?A=?&B=?&C=?
Which one is the better way to do this? Any suggestions on best practice?
There is a middle path between using individual GET APIs for each of these queries and creating one GET API.
You could use the Specification pattern to expose one GET API, but translate it into a Domain Specification Object before passing it on to the Domain layer for querying. You typically do this transformation in your View Controller, before invoking the Application Service.
Martin Fowler and Eric Evans have published a great paper on using Specifications: https://martinfowler.com/apsupp/spec.pdf
As the paper states, "The central idea of Specification is to separate the statement of how to match a candidate, from the candidate object that it is matched against."
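As a rough illustration of the idea, here is a small Python sketch; the fields a, b, c and the in-memory repository with a find_all() method are hypothetical stand-ins, not anything from the question:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class EntitySpecification:
        """Specification built from the query string; None means 'not constrained'."""
        a: Optional[str] = None
        b: Optional[str] = None
        c: Optional[str] = None

        def is_satisfied_by(self, candidate) -> bool:
            # A candidate matches if it agrees with every field that was supplied.
            return all(
                getattr(candidate, name) == value
                for name, value in (("a", self.a), ("b", self.b), ("c", self.c))
                if value is not None
            )

    # In the view controller: translate GET /entities?a=x&c=y into a specification
    # and hand it to the application service / repository.
    def get_entities(query_params: dict, repository):
        spec = EntitySpecification(
            a=query_params.get("a"),
            b=query_params.get("b"),
            c=query_params.get("c"),
        )
        return [e for e in repository.find_all() if spec.is_satisfied_by(e)]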
Note:
You are fine if you use this pattern on the query side, as you have outlined in your question, and avoid reusing it in different contexts. For example, do NOT use a specification object on both the query side and the command side if you are using (or plan to use) CQRS; you would be creating a central dependency between two parts that NEED to be kept separate.
Specifications are handy when you want to represent a domain concept. Evaluate your queries (getByAandB and getByAandC) to draw out the question you are asking of the domain (for example, ask your domain expert to describe the data they are trying to fetch).

Model parametrized API Call in Activity Diagram

I have an activity diagram with two swimlanes (Client and Server). I want to model a request call from the Client to the Server.
Is it correct to use Signal notation for calls between systems? Are there alternatives?
The call is parametrized: the Client wants to send something that was created before. How do I model this?
Thanks for any hint! Here's my example:
My answer has to be improved, but here is a first step.
The norm/spec says: "A SendSignalAction is an InvocationAction that creates a Signal instance and transmits the instance to the object given on its target InputPin. A SendSignalAction must have argument InputPins corresponding, in order, to each of the (owned and inherited) Properties of the Signal being sent, with the same type, ordering and multiplicity as the corresponding attribute."
And a SendSignalAction has an association to a target object, which is an InputPin.
So for your question about Request:item, I would use InputPins: one for the object from which the Signal is created and one to define the target (in the schema the target comes from an OutputPin, but a data store may be used). Then, after sending the request, the client waits for the answer. The AcceptEvent is linked to a trigger (not shown on the schema) which is a signal, the one created by the server. But you cannot link the Client's SendRequest to the Server's ReceiveRequest, because that is not how it runs.
For the server, you can apply similar reasoning.
Concerning the parametrization of the call, I would use an InputPin to model the arguments of the call, i.e. the object sent by the call, as shown below.
Signal and Call notations are correct to me, but I am not used to having the send and receive actions in the same diagram, so I will suggest two alternatives.
1) First, remove them...
2) Separate the Client and Server modelling
Let me know what you think about that and what seems clear to you...
I also recognize the tool you used so please find my project at:
https://www.dropbox.com/s/s1mx46cb3linop0/Project1.zip?dl=0
As I see it, it should be modeled like this:
The server runs in an independent loop and starts by waiting for a request. There's an object flow between Create request and Query result set. This symbolizes data placed in a queue (or whatever is appropriate). The receipt of the result set would be done below in a similar way; I just left that out for brevity.
You can also draw an object for the query set instead of the ActionPins.

Looking for RESTful approach to update multiple resources with the same field set

The task: I have multiple resources that need to be updated in one HTTP call.
The resource type, and the field and value to update, are the same for all resources.
Example: I have a set of cars identified by their IDs and need to update the "status" of all of them to "sold".
Classic RESTful approach: use a request URL something like
PUT /cars
with JSON body like
[{"id": 1, "status": "sold"}, {"id": 2, "status": "sold"}, ...]
However, this seems like overkill: "status": "sold" is repeated once per car.
I'm looking for a RESTful way (meaning as close to the "standard" REST conventions as possible) to send "status": "sold" just once for all cars, along with the list of car IDs to update. This is what I would do:
PUT /cars
With JSON
{"ids": [1, 2, ...], "status": "sold"}, but I am not sure whether this is a truly RESTful approach.
Any ideas?
Also, as an added benefit, I would like to be able to avoid JSON for a small number of cars by simply using a URL with parameters, something like this:
PUT /cars?ids=1,2,3&status=sold
Is this RESTful enough?
An even simpler way would just be:
{"sold": [1, 2, ...]}
There's no need to have multiple methods for larger or smaller numbers of requests - it wastes development time and has no notable impact on performance or bandwidth.
As far as it being RESTful goes, as long as it's easily decipherable by the recipient and contains all the information you need, then there's no problem with it.
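For what it's worth, here is a minimal sketch of how a server might accept that body, using Flask and a toy in-memory store (both of which are my own assumptions, not part of the question):

    from flask import Flask, request, jsonify

    app = Flask(__name__)
    cars = {1: {"status": "available"}, 2: {"status": "available"}}  # toy in-memory store

    @app.route("/cars", methods=["PUT"])
    def mark_cars_sold():
        # Expected body: {"sold": [1, 2, ...]}
        ids = request.get_json().get("sold", [])
        for car_id in ids:
            if car_id in cars:
                cars[car_id]["status"] = "sold"
        return jsonify({"updated": ids})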
As I understand it, PUT is not really suited to writing a single property of a resource. One idea is to simply expose the property as a resource itself:
Therefore: PUT /car/carId/status with body content 'Sold'.
Updating more than one car would then result in multiple PUTs, since a request should only target a single resource.
Another idea is to expose a certain protocol where you build a 'batch' resource.
POST /daily-deals-report/ with body content {"sold" : [1, 2, 3, 4, 5]}
Then the system can simply acknowledge the deals being made and update the car statuses itself. This way you create a whole new point of view and enable a more REST-like API than you originally intended.
Also, you should think about exposing a resource listing all cars that are available and therefore ready to be sold (i.e. not sold, not in need of repairs, and not rented out).
GET /cars/pricelist?city=* -> list of all cars ready to be sold, including car ID and price.
This way a car does not carry a status saying who owns it. A resource is usually independent of its owner (the owner aggregates cars; a car is not a composite part of the owner).
Whenever you have difficulty mapping an operation to a resource, your model tends to be flawed by object-oriented thinking. With resources, many relations (parent properties) and status properties tend to be misplaced, since designing resources is even more abstract than thinking in services.
If you need to manipulate many similar objects, you need to identify the business process that triggers those changes and expose this process as a single resource describing its input (just like the daily-deals report).
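A rough sketch of what such a process resource might look like, again using Flask and an in-memory store purely as stand-ins:

    from uuid import uuid4
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    cars = {1: {"status": "available"}, 2: {"status": "available"}}  # toy store
    reports = {}                                                     # created report resources

    @app.route("/daily-deals-report/", methods=["POST"])
    def create_deals_report():
        # Body: {"sold": [1, 2, 3, 4, 5]} -- the report records a business event;
        # updating each car's status is a side effect the system owns.
        body = request.get_json()
        report_id = str(uuid4())
        reports[report_id] = body
        for car_id in body.get("sold", []):
            if car_id in cars:
                cars[car_id]["status"] = "sold"
        return jsonify({"id": report_id}), 201, {"Location": "/daily-deals-report/" + report_id}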

SO style reputation system with CQRS & Event Sourcing

I am making my first forays into CQRS and Event Sourcing and I have a few points I'd like some guidance on. I would like to implement an SO-style reputation system, which seems a perfect fit for this architecture.
Keeping SO as the example: say a question is upvoted. This generates an UpvoteCommand, which increases the question's total score and fires off a QuestionUpvotedEvent.
It seems like the author's User aggregate should subscribe to the QuestionUpvotedEvent, which could increase the reputation score. But how/when you set up this subscription is not clear to me. In Greg Young's example the event/command handling is wired up in global.asax, but this doesn't seem to involve any routing based on aggregate ID.
It seems as though every User aggregate would subscribe to every QuestionUpvotedEvent, which doesn't seem correct. To make such a scheme work, the event handler would have to contain behaviour to identify whether that user owned the question that was just upvoted, and Greg Young implied this should not live in event handler code, which should merely involve state change.
What am I getting wrong here?
Any guidance much appreciated.
EDIT
I guess what we are talking about here is inter-aggregate communication between the Question and User aggregates. One solution I can see is that the QuestionUpvotedEvent is subscribed to by a ReputationEventHandler, which could then fetch the corresponding User AR and call a corresponding method on it, e.g. YourQuestionWasUpvoted. This would in turn generate a user-specific UserQuestionUpvoted event, thereby preserving replayability in the future. Is this heading in the right direction?
EDIT 2
See also the discussion on google groups here.
My understanding is that aggregates themselves should not be subscribing to events. The domain model only raises events. It's the query side or other infrastructure components (such as an emailing component) that subscribe to events.
Domain Services are designed to work with use-cases/commands that involve more than one aggregate.
What I would do in this situation:
VoteUpQuestionCommand gets invoked.
The handler for VoteUpQuestionCommand calls:
IQuestionVotingService.VoteUpQuestion(Guid questionId, Guid UserId);
This then fetches both the question and user aggregates, calling the appropriate methods on both, such as user.IncrementReputation(int amount) and question.VoteUp(). This would raise two events, UsersReputationIncreasedEvent and QuestionUpVotedEvent respectively, which would be handled by the query side.
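A rough Python sketch of that wiring; the repositories, the aggregate methods and the reputation amount are hypothetical stand-ins:

    from dataclasses import dataclass
    from uuid import UUID

    @dataclass(frozen=True)
    class VoteUpQuestionCommand:
        question_id: UUID
        user_id: UUID

    class QuestionVotingService:
        """Domain service coordinating the two aggregates involved in this use case."""
        def __init__(self, question_repo, user_repo):
            self.question_repo = question_repo
            self.user_repo = user_repo

        def vote_up_question(self, question_id: UUID, user_id: UUID):
            question = self.question_repo.get_by_id(question_id)
            user = self.user_repo.get_by_id(user_id)
            question.vote_up()             # raises QuestionUpVotedEvent
            user.increment_reputation(10)  # raises UsersReputationIncreasedEvent
            self.question_repo.save(question)
            self.user_repo.save(user)

    class VoteUpQuestionCommandHandler:
        def __init__(self, voting_service: QuestionVotingService):
            self.voting_service = voting_service

        def handle(self, cmd: VoteUpQuestionCommand):
            self.voting_service.vote_up_question(cmd.question_id, cmd.user_id)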
My rule of thumb: if you do inter-AR communication use a saga. It keeps things within the transactional boundary and makes your links explicit => easier to handle/maintain.
The user aggregate should have a QuestionAuthored event... in that event it subscribes to the QuestionUpvotedEvent... Similarly, it should have a QuestionDeletedEvent and/or QuestionClosedEvent in which it does the proper handling, like unsubscribing from the QuestionUpvotedEvent, etc.
EDIT - as per comment:
I would model the Question as an external event source and handle it via a gateway. The gateway, in turn, is the one responsible for handling any replay correctly so that the end result stays exactly the same - except for special events like rejection events...
This is an old question and tagged as answered, but I think I can add something to it.
After a few months of reading, practising, and building a small framework and application based on CQRS+ES, I think CQRS tries to decouple components' dependencies and responsibilities. Some resources state that for each command you should change at most one aggregate in the command handler (you can load more than one aggregate in the handler, but only one of them may change).
So in your case I think the best practice is @Tom's answer and you should use a saga. If your framework doesn't support sagas (like my small framework), you can create an event handler such as UpdateUserReputationByQuestionVotedEvent. In it, the handler creates an UpdateUserReputation(Guid userId, int amount) OR UpdateUserReputation(Guid userId, Guid questionId, int amount) OR UpdateUserReputation(Guid userId, string description, int amount) command. After the command is sent to its handler, the handler loads the user by userId and updates its state and properties. With this type of handling you can build more complex scenarios or workflows.
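A rough sketch of that kind of event handler in Python; the event shape, command bus and repository are all hypothetical:

    from dataclasses import dataclass
    from uuid import UUID

    @dataclass(frozen=True)
    class UpdateUserReputation:
        user_id: UUID
        question_id: UUID
        amount: int

    class UpdateUserReputationByQuestionVotedEvent:
        """Subscribes to QuestionVotedEvent and dispatches a command against the User aggregate."""
        def __init__(self, command_bus):
            self.command_bus = command_bus

        def handle(self, event):  # event: QuestionVotedEvent(question_id, author_id, delta)
            self.command_bus.send(UpdateUserReputation(
                user_id=event.author_id,
                question_id=event.question_id,
                amount=event.delta,
            ))

    class UpdateUserReputationHandler:
        def __init__(self, user_repo):
            self.user_repo = user_repo

        def handle(self, cmd: UpdateUserReputation):
            user = self.user_repo.get_by_id(cmd.user_id)
            user.update_reputation(cmd.amount)  # raises a user-specific reputation event internally
            self.user_repo.save(user)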

How to get a list of aggregates using JOliver's CommonDomain and EventStore?

The repository in CommonDomain only exposes "GetById()". So what do I do if my handler needs a list of Customers, for example?
Taking your question at face value: if you needed to perform operations on multiple aggregates, you would just provide the IDs of each aggregate in your command (which the client would obtain from the query side), and then get each aggregate from the repository.
However, looking at one of your comments in response to another answer I see what you are actually referring to is set based validation.
This very question has raised quite a lot of debate about how to do this, and Greg Young has written a blog post on it.
The classic question is 'how do I check that the username hasn't already been used when processing my 'CreateUserCommand'. I believe the suggested approach is to assume that the client has already done this check by asking the query side before issuing the command. When the user aggregate is created the UserCreatedEvent will be raised and handled by the query side. Here, the insert query will fail (either because of a check or unique constraint in the DB), and a compensating command would be issued, which would delete the newly created aggregate and perhaps email the user telling them the username is already taken.
The main point is, you assume that the client has done the check. I know this approach is difficult to grasp at first - but it's the nature of eventual consistency.
Also you might want to read this other question which is similar, and contains some wise words from Udi Dahan.
In the classic event sourcing model, queries like get all customers would be carried out by a separate query handler which listens to all events in the domain and builds a query model to satisfy the relevant questions.
If you need to query customers by last name, for instance, you could listen to all customer created and customer name change events and just update one table of last-name to customer-id pairs. You could hold other information relevant to the UI that is showing the data, or you could simply hold IDs and go to the repository for the relevant customers in order to work further with them.
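For example, a minimal projection sketch in Python; the event shapes are assumptions, not anything provided by CommonDomain or EventStore:

    class CustomerByLastNameProjection:
        """Query-side read model: maintains a last-name -> customer-id lookup from events."""
        def __init__(self):
            self.by_last_name = {}  # last_name -> set of customer ids
            self.last_names = {}    # customer id -> current last name

        def on_customer_created(self, event):  # event has customer_id and last_name
            self.last_names[event.customer_id] = event.last_name
            self.by_last_name.setdefault(event.last_name, set()).add(event.customer_id)

        def on_customer_name_changed(self, event):  # event has customer_id and new_last_name
            old = self.last_names.pop(event.customer_id, None)
            if old is not None:
                self.by_last_name[old].discard(event.customer_id)
            self.last_names[event.customer_id] = event.new_last_name
            self.by_last_name.setdefault(event.new_last_name, set()).add(event.customer_id)

        def customers_with_last_name(self, last_name):
            return sorted(self.by_last_name.get(last_name, set()))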
You don't need a list of customers in your handler. Each aggregate MUST be processed in its own transaction. If you want to show this list to the user, just build an appropriate view.
Your command needs to contain the ID of the aggregate root it should operate on.
This ID will be looked up by the client sending the command, using a view in your read model. That view is populated with data from the events that your AR emits.