Is there any queryContext request to get the context subscription information of an entity? Or is it still on the roadmap?
The Orion roadmap includes an operation to get all subscriptions and another to get the details of a given subscription (by subscription ID). I think that is not exactly what you mean (if I understand correctly, you refer to an operation that, given an entity, returns all the subscriptions that refer to that entity), but you could combine the "get all subscriptions" operation with the entity::id filter to get the same effect.
Both operations and the filter are still on the roadmap. This tracker shows the implementation status and is updated each time a new version is planned and released.
Alternatively, until the API operations are implemented (see the other answer to this question), you can get subscription information directly from the Orion database as a workaround. Looking at the description of the subscriptions collection in the Orion Administration manual, and with basic knowledge of MongoDB query operations, you can for example get all the subscriptions to entity ID "foo" with the following query:
db.csubs.find({"entities.id": "foo"})
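If you want to experiment outside the mongo shell, the matching logic of that query can be sketched in plain Python over sample documents (the field names follow the csubs collection described in the manual; the documents themselves are made up):

```python
# Hypothetical sketch: the same matching logic as the mongo query above,
# expressed in plain Python over made-up subscription documents.
def matches_entity(subscription, entity_id):
    """True if any entity reference in the subscription targets entity_id."""
    return any(e.get("id") == entity_id for e in subscription.get("entities", []))

# Sample documents shaped like the csubs collection (ids are invented).
csubs = [
    {"_id": "sub1", "entities": [{"id": "foo", "type": "Room"}]},
    {"_id": "sub2", "entities": [{"id": "bar", "type": "Room"}]},
]

matching = [s for s in csubs if matches_entity(s, "foo")]
```

Note that subscriptions using entity ID patterns would need extra handling; the query above only matches exact ids.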
The question I'd like to ask was raised some time ago (FIWARE Orion: How to retrieve the servicePath of an entity?), but as far as I've seen, there is no final answer.
In short, I'd like to retrieve the service path of entities when I execute a GET query to /v2/entities that returns multiple results.
In our FIWARE instance, we strongly rely on the servicePath element to differentiate between entities with the same id. It is not a good design choice but, unfortunately, we cannot change it, as many applications currently depend on that id convention.
There was an attempt three years ago to add a virtual field 'servicePath' to the query result (https://github.com/telefonicaid/fiware-orion/pull/2880) but the pull request was discarded because it didn't include test coverage for that feature and the final NGSIv2 spec didn't include that field.
Is there any plan to implement such a feature in the future? I guess the answer is no, which brings me to the next question: is there any other way to do it that does not involve creating subscriptions? (We found that the initial notification of a subscription does give you that info, but the notification is limited to 1000 results, which is too low for the number of entities we want to retrieve, and it does not allow pagination either.)
Thanks in advance for your responses.
A possible workaround is to use an attribute (provided by the context producer application) to keep the service path. In essence, this is the same idea as the builtin attribute proposed in PR #2880.
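For example, the context producer could include the service path as a regular attribute when creating the entity. A hypothetical NGSIv2 payload for POST /v2/entities (the attribute name servicePath and all values here are just illustrative conventions):

```json
{
  "id": "Room1",
  "type": "Room",
  "temperature": { "value": 23, "type": "Number" },
  "servicePath": { "value": "/buildingA/floor1", "type": "Text" }
}
```

Queries to /v2/entities would then return this attribute like any other, and it could also be used in q filters to narrow results.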
I am trying to subscribe to changes on delete, create and update mutations.
In my GraphQL schema, I created a subscription field that listens to all those mutations: type Subscription { onAll: Task @aws_subscribe(mutations: ["createTask","updateTask","deleteTask"]) }
Now, when I tried using the amplify-vue components and handling the response with :onSubscriptionMsg="someFunction(response)", I received the old list of tasks from response.data.listOfTasks.
So how can I know which mutation was invoked, and thus how to update data.listOfTasks?
Thanks a heap in advance for answering this question :)
A suggestion would be to break the subscription apart into multiple subscriptions (i.e. CreateTaskSubscription, UpdateTaskSubscription, etc.). That way you can implement your Vue logic separately based on which mutation was invoked, since there is then a 1-to-1 mapping between subscription and mutation, as opposed to the 1-to-many mapping that your onAll subscription currently has.
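For instance, the onAll field could be split like this (the field and mutation names follow the ones in your question; adjust them to your schema):

```graphql
type Subscription {
  onCreateTask: Task @aws_subscribe(mutations: ["createTask"])
  onUpdateTask: Task @aws_subscribe(mutations: ["updateTask"])
  onDeleteTask: Task @aws_subscribe(mutations: ["deleteTask"])
}
```

Each Vue handler then knows unambiguously whether to append to, replace in, or remove from its local task list.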
Some reference docs:
Splitting up subscriptions: https://docs.aws.amazon.com/appsync/latest/devguide/real-time-data.html
Vue handling for each subscription type (go to the API connect section): https://aws-amplify.github.io/docs/js/vue
We are developing a REST API that a client application will call.
The client application will handle the business logic with rollback capabilities (e.g. the client can roll back if the update to the "Shipment" service passes but the update to the "Stock" service fails).
There are many online articles about TCC (Try/Confirm/Cancel) that describe reserving/cancelling a resource via the POST/DELETE methods, but none describe how to handle a PUT request (e.g. update the "Stock" count by 1 and roll back on failure).
Does anyone know of a solution for handling a PUT rollback? Since a PUT request overwrites the original data, how can we roll back to the original data?
Per the TCC pattern, the confirm operation uses PUT for its idempotent characteristics. When implementing such behavior, if a partial reservation is confirmed while others have expired, we can use the cancel operation to mimic rollback behavior: another DELETE request is sent to each confirmed participant link. I wrote a tutorial article about a minimal TCC implementation in Java for a booking system. You can refer to the implementation there.
TCC is only a concept for implementing transactional behaviour over HTTP/REST. It covers a standard way to model the transaction, including timeouts and cancellation. In this model the transaction is bound to some kind of resource with an id, so you are able to either confirm or cancel it at any point. As the transaction has a timeout, you may end up with an invalid or no-longer-existing transaction.
Whatever you do from the start to the end of the transaction is up to you, but you need something to identify a transaction. As an overwrite of a resource using PUT does not create a resource object by itself, you would need a kind of virtual resource to do so.
You may create a new version (maybe using locking) of the resource (entities is just a placeholder here):
Start: PUT /entities/42 -> link rel:tcc, href: /entities/42/version/7
Confirm: PUT <application/tcc> /entities/42/version/7
Cancel: DELETE /entities/42/version/7
Instead of a version you can also use a kind of transaction id, if you have one.
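To make the versioned-resource idea concrete, here is a minimal in-memory sketch of such a TCC participant in Python (the class and method names are made up, and a real service would persist the pending versions rather than keep them in memory):

```python
import time

class TccResource:
    """Minimal in-memory sketch of a TCC participant for a PUT-style update.

    try_update stores the proposed state as a pending version next to the
    committed state; confirm promotes it and cancel (or expiry) discards it,
    so the original data is never overwritten until the client confirms.
    All names here are illustrative, not part of any standard API.
    """

    def __init__(self, state):
        self.state = state      # committed state
        self.pending = {}       # version id -> (proposed state, expiry time)
        self.next_version = 1

    def try_update(self, new_state, timeout=30.0):
        """Start: PUT /entities/42 -> returns a version id for the tcc link."""
        vid = self.next_version
        self.next_version += 1
        self.pending[vid] = (new_state, time.monotonic() + timeout)
        return vid

    def confirm(self, vid):
        """Confirm: PUT /entities/42/version/<vid> promotes the pending state."""
        new_state, expiry = self.pending.pop(vid)
        if time.monotonic() > expiry:
            raise TimeoutError("transaction expired")
        self.state = new_state

    def cancel(self, vid):
        """Cancel: DELETE /entities/42/version/<vid>; committed state untouched."""
        self.pending.pop(vid, None)
```

Because the committed state is only replaced on confirm, "rollback" of a PUT is simply never promoting the pending version.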
Since Shopify's transaction reporting is broken, I'm trying to use the API to get transaction fees for orders and do basic accounting. In their API docs, they list the endpoints and parameters for getting/posting transactions. To "Receive a list of all Transactions", the docs say:
GET /admin/orders/#{id}/transactions.json
but they don't explain what the #{id} is for. The call will only work if I put a transaction ID in, but then it shows only a single transaction rather than a list. The docs state that to "Get the Representation of a specific transaction" you use:
GET /admin/orders/#{id}/transactions/#{id}.json
which has the id in there twice. I can't use a single transaction; I need all of them for a specific range. I've tried /admin/orders/transactions.json and putting all or * in for the id, but it returns errors unless the id is a valid transaction id. Any ideas?
Transactions belong to an order, so the ID you are wondering about identifies one specific order. If you want transactions for your accounting system, the important thing to base your API work on is orders. Set up your code to first download the orders of interest, say for a month. Then, for each order, ask for its transactions and produce your report.
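A sketch of that loop in Python, with the HTTP call abstracted behind a `fetch` callable so you can plug in whatever client and authentication you actually use (nothing here is Shopify-specific beyond the URL shape):

```python
def collect_transactions(order_ids, fetch):
    """Flatten the transactions of many orders into one list.

    `fetch` is a placeholder for an HTTP GET against the Shopify Admin API
    that returns the parsed JSON body; swap in your real HTTP client.
    """
    all_tx = []
    for order_id in order_ids:
        payload = fetch(f"/admin/orders/{order_id}/transactions.json")
        all_tx.extend(payload.get("transactions", []))
    return all_tx
```

In practice you would first page through the orders endpoint with date-range parameters (check the current API docs for the exact names) to get the order ids for your period, then feed those ids into the loop above.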
The repository in the CommonDomain only exposes GetById(). So what do I do if my handler needs a list of Customers, for example?
Taking your question at face value: if you needed to perform operations on multiple aggregates, you would just provide the IDs of each aggregate in your command (which the client would obtain from the query side), then get each aggregate from the repository.
However, looking at one of your comments in response to another answer, I see what you are actually referring to is set-based validation.
This very question has raised quite a lot of debate about how to do this, and Greg Young has written a blog post on it.
The classic question is: how do I check that the username hasn't already been used when processing my CreateUserCommand? I believe the suggested approach is to assume that the client has already done this check by asking the query side before issuing the command. When the user aggregate is created, the UserCreatedEvent will be raised and handled by the query side. There, the insert query will fail (either because of a check or a unique constraint in the DB), and a compensating command would be issued, which would delete the newly created aggregate and perhaps email the user telling them the username is already taken.
The main point is, you assume that the client has done the check. I know this approach is difficult to grasp at first, but it's the nature of eventual consistency.
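As a minimal sketch of that query-side flow (all names here are illustrative, and SQLite's unique constraint stands in for whatever read model you actually use):

```python
import sqlite3

# Illustrative query-side projection: the primary key enforces uniqueness.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT PRIMARY KEY)")

def on_user_created(event, issue_command):
    """Handle a UserCreatedEvent on the query side; compensate on collision."""
    try:
        conn.execute("INSERT INTO users VALUES (?)", (event["username"],))
        conn.commit()
    except sqlite3.IntegrityError:
        # Username already taken: issue a compensating command to undo the
        # aggregate that was optimistically created on the write side.
        issue_command({"type": "DeleteUser", "id": event["id"]})
```

The write side never checks uniqueness itself; the collision is detected when the event is projected, and the compensating command repairs the state afterwards.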
Also you might want to read this other question which is similar, and contains some wise words from Udi Dahan.
In the classic event sourcing model, queries like "get all customers" would be carried out by a separate query handler that listens to all events in the domain and builds a query model to answer the relevant questions.
If you need to query customers by last name, for instance, you could listen to all customer created and customer name change events and just update one table of last-name to customer-id pairs. You could hold other information relevant to the UI that is showing the data, or you could simply hold IDs and go to the repository for the relevant customers in order to work further with them.
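A minimal sketch of such a projection (the event shapes are illustrative, and a real projection would consume the domain's actual event stream incrementally):

```python
from collections import defaultdict

def build_view(events):
    """Fold customer events into a last-name -> [customer ids] lookup table."""
    names = {}  # customer id -> current last name
    for event in events:
        if event["type"] in ("CustomerCreated", "CustomerNameChanged"):
            names[event["id"]] = event["last_name"]
    view = defaultdict(list)
    for customer_id, last_name in names.items():
        view[last_name].append(customer_id)
    return view
```

Queries by last name then hit this table directly, and the returned ids can be fed to the repository's GetById() if the full aggregates are needed.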
You don't need a list of customers in your handler. Each aggregate MUST be processed in its own transaction. If you want to show this list to the user, just build an appropriate view.
Your command needs to contain the id of the aggregate root it should operate on.
This id will be looked up by the client sending the command, using a view in your read model. That view is populated with data from the events your AR emits.