How to Subscribe to All GraphQL Mutations in AWS-Amplify Vue Components?

I am trying to subscribe to changes on delete, create and update mutations.
In my GraphQL schema, I created a subscription field that listens to all of those mutations:
type Subscription {
  onAll: Task
  @aws_subscribe(mutations: ["createTask", "updateTask", "deleteTask"])
}
Now, when using the amplify-vue components and handling the response with :onSubscriptionMsg=SomeFunction(response), I am receiving the old list of tasks from response.data.listOfTasks.
So how can I tell which mutation was invoked, so that I can update data.listOfTasks accordingly?
Thanks a heap in advance for answering this question :)

A suggestion would be to break the subscription apart into multiple subscriptions (e.g. CreateTaskSubscription, UpdateTaskSubscription, etc.). That way you can implement your Vue logic separately based on which mutation was invoked, since there is then a 1-to-1 mapping between subscription and mutation, as opposed to the 1-to-many mapping your onAll subscription has currently.
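For example, the split schema could look roughly like this (a sketch only; the subscription field names are illustrative, while the mutation names are the ones from your schema):
type Subscription {
  onCreateTask: Task
  @aws_subscribe(mutations: ["createTask"])
  onUpdateTask: Task
  @aws_subscribe(mutations: ["updateTask"])
  onDeleteTask: Task
  @aws_subscribe(mutations: ["deleteTask"])
}
Each subscription then fires for exactly one mutation, so your :onSubscriptionMsg handler for, say, onDeleteTask knows it should remove an item from the local list.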
Some reference docs:
Splitting up subscriptions: https://docs.aws.amazon.com/appsync/latest/devguide/real-time-data.html
Vue handling for each subscription type (go to the API connect section): https://aws-amplify.github.io/docs/js/vue

Related

Is it possible to obtain the ServicePath when querying entities in Orion?

The question I'd like to ask was raised some time ago (FIWARE Orion: How to retrieve the servicePath of an entity?) but, as far as I've seen, there is no final answer.
In short, I'd like to retrieve the service path of entities when I execute a GET query to /v2/entities which returns multiple results.
In our FIWARE instance, we strongly rely on the servicePath element to differentiate between entities with the same id. It is not a good design choice but, unfortunately, we cannot change it as many applications use that id convention at the moment.
There was an attempt three years ago to add a virtual field 'servicePath' to the query result (https://github.com/telefonicaid/fiware-orion/pull/2880) but the pull request was discarded because it didn't include test coverage for that feature and the final NGSIv2 spec didn't include that field.
Is there any plan to implement such a feature in the future? I guess the answer is no, which brings me to the next question: is there any other way to do it that does not involve creating subscriptions? (We found that the initial notification of a subscription does give you that info, but the notification is limited to 1000 results, which is too low for the number of entities we want to retrieve, and it does not allow pagination either.)
Thanks in advance for your responses.
A possible workaround is to use an attribute (provided by the context producer application) to keep the service path. In a way, this is the same idea as the builtin attribute proposed in PR #2880.
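For example, the context producer could duplicate the service path into a regular attribute at creation time, so it comes back in any GET /v2/entities query. A minimal sketch in TypeScript using fetch; the host, tenant header value, and attribute name are illustrative assumptions:

// Create an entity that carries its own service path as a normal attribute.
async function createEntityWithServicePath(): Promise<void> {
  const response = await fetch("http://orion:1026/v2/entities", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Fiware-Service": "myservice",         // illustrative tenant
      "Fiware-ServicePath": "/plant1/line3", // the real service path used on creation
    },
    body: JSON.stringify({
      id: "Device001",
      type: "Device",
      servicePath: { value: "/plant1/line3", type: "Text" }, // duplicated as an attribute
    }),
  });
  if (!response.ok) {
    throw new Error(`Orion returned ${response.status}`);
  }
}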

How to merge/consolidate responses from multiple RESTful microservices?

Let's say there are two (or more) RESTful microservices serving JSON. Service (A) stores user information (name, login, password, etc) and service (B) stores messages to/from that user (e.g. sender_id, subject, body, rcpt_ids).
Service (A) on /profile/{user_id} may respond with:
{id: 1, name:'Bob'}
{id: 2, name:'Alice'}
{id: 3, name:'Sue'}
and so on
Service (B) responding at /user/{user_id}/messages returns a list of messages destined for that {user_id} like so:
{id: 1, subj:'Hey', body:'Lorem ipsum', sender_id: 2, rcpt_ids: [1,3]},
{id: 2, subj:'Test', body:'blah blah', sender_id: 3, rcpt_ids: [1]}
How does the client application consuming these services handle putting the message listing together such that names are shown instead of sender/rcpt ids?
Method 1: Pull the list of messages, then start pulling profile info for each id listed in sender_id and rcpt_ids? That may require hundreds of requests and could take a while. It is rather naive and inefficient and may not scale with complex apps.
Method 2: Pull the list of messages, extract all user ids, and make a bulk request for all relevant users separately... this assumes such a service endpoint exists. There is still a delay between getting the message listing, extracting the user ids, sending the request for bulk user info, and then awaiting the bulk user info response.
Ideally I want to serve out a complete response set in one go (messages and user info). My research brings me to merging responses at the service layer... a.k.a. Method 3: the API Gateway technique.
But how does one even implement this?
I can obtain the list of messages, extract the user ids, make a call behind the scenes to obtain the users' data, merge the result sets, and then serve this final result up... This works OK with 2 services behind the scenes... But what if the message listing depends on more services? What if I needed to query multiple services behind the scenes, further parse their responses, query more services based on secondary (tertiary?) results, and then finally merge... where does this madness stop? How does this affect response times?
And I've now effectively created another "client" that combines all microservice responses into one mega-response... which is no different than Method 1 above... except at the server level.
Is that how it's done in the "real world"? Any insights? Are there any open source projects that are built on such API Gateway architecture I could examine?
The solution we used for this kind of problem was denormalization of data, kept up to date through events.
Basically, a microservice holds, ahead of time, the subset of data it requires from other microservices so that it doesn't have to call them at run time. This data is managed through events: when other microservices are updated, they fire an event with the id as context, which can be consumed by any microservice that has an interest in it. This way the data remains in sync (of course, it requires some form of failure handling for events). It seems like a lot of work, but it helps with any future decisions regarding consolidation of data from different microservices. Our microservice always has all the data available locally to process any request, without a synchronous dependency on other services.
In your case, i.e. showing names alongside a message, you can keep an extra property for names in Service (B). Whenever a name is updated in Service (A), it fires an update event with the id of the updated name. Service (B) then consumes the event, fetches the relevant data from Service (A), and updates its database. This way, even if Service (A) is down, Service (B) keeps functioning, albeit with some stale data that becomes consistent again when Service (A) comes back up, and you always have some name to show in the UI.
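A minimal sketch of that event-driven sync in TypeScript (the broker wiring, URLs, and persistence helper are illustrative, not part of the question):

// Service (B) keeps a local copy of user names so it can render messages
// without calling Service (A) at request time.
interface UserNameUpdatedEvent {
  userId: number;
}

// Invoked by whatever broker client Service (B) uses (Kafka, RabbitMQ, etc.).
async function onUserNameUpdated(event: UserNameUpdatedEvent): Promise<void> {
  // Fetch the fresh profile from Service (A) using the id carried in the event.
  const res = await fetch(`http://service-a/profile/${event.userId}`);
  if (!res.ok) return; // retry / dead-letter handling would go here
  const profile: { id: number; name: string } = await res.json();

  // Update the denormalized copy in Service (B)'s own database.
  await saveUserNameLocally(profile.id, profile.name);
}

// Placeholder for Service (B)'s persistence layer.
async function saveUserNameLocally(id: number, name: string): Promise<void> {
  // e.g. UPDATE user_names SET name = $2 WHERE id = $1
}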
https://enterprisecraftsmanship.com/2017/07/05/how-to-request-information-from-multiple-microservices/
You might want to perform response aggregation strategies on your API gateway. I've written an article on how to do this with ASP.NET Core and Ocelot, but there should be a counterpart for other API gateway technologies:
https://www.pogsdotnet.com/2018/09/api-gateway-response-aggregation-with.html
You need to write another service, called an Aggregator, which will internally call both services, get the responses, merge/filter them, and return the desired result. This can easily be done in a non-blocking way using Mono/Flux in Spring Reactive.
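The same aggregator idea sketched in TypeScript rather than Spring Reactive (the service URLs, the bulk profile endpoint, and the response shapes are illustrative assumptions):

// Aggregator: fetch the messages, collect the referenced user ids,
// bulk-fetch the profiles, and merge names into a single response.
interface Message { id: number; subj: string; body: string; sender_id: number; rcpt_ids: number[]; }
interface Profile { id: number; name: string; }

async function getMessagesWithNames(userId: number) {
  const messages: Message[] =
    await (await fetch(`http://service-b/user/${userId}/messages`)).json();

  // Every user id referenced by the messages, deduplicated.
  const ids = [...new Set(messages.flatMap(m => [m.sender_id, ...m.rcpt_ids]))];

  // Assumes Service (A) exposes a bulk profile endpoint; otherwise fan out per id.
  const profiles: Profile[] =
    await (await fetch(`http://service-a/profiles?ids=${ids.join(",")}`)).json();
  const names = new Map<number, string>();
  for (const p of profiles) names.set(p.id, p.name);

  // Merge: replace ids with names in the payload returned to the client.
  return messages.map(m => ({
    id: m.id,
    subj: m.subj,
    body: m.body,
    sender: names.get(m.sender_id),
    recipients: m.rcpt_ids.map(id => names.get(id)),
  }));
}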
An API Gateway often does API composition.
But this is a typical engineering problem when you have microservices implementing the database-per-service pattern.
The API Composition and Command Query Responsibility Segregation (CQRS) patterns are useful ways to implement such queries.
Ideally I want to serve out a complete response set in one go
(messages and user info).
The problem you've described is one Facebook recognized years ago, and they decided to tackle it by creating an open-source specification called GraphQL.
But how does one even implement this?
It is already implemented in various popular programming languages, so you can give it a try in the programming language of your choice.
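For instance, with a GraphQL schema that relates messages to users, the client can ask for the names directly and get everything back in one response (a sketch; the field names are illustrative):
query {
  messages(userId: 1) {
    id
    subj
    body
    sender { name }
    recipients { name }
  }
}
The GraphQL server's resolvers then do the per-service fetching and merging behind the scenes, much like the API gateway composition described above.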

Are AppSync's subscriptions very limited to one specific use case?

I spent the last day trying AWS AppSync and I'm a bit disappointed with what subscriptions can do.
It seems to me that the current state of AppSync subscriptions covers the use case where you have a list of items and you want it kept in sync across all clients.
It's pretty limited compared to what apollo-subscription can do.
So if I understood the doc correctly:
we can't filter which targets the data gets sent to
I have use cases where a mutation, like a vote on a Post, should push data of a different type to the owner of the Post only.
it has to be linked to a specific mutation and it has to be of the same type
I have use cases where a mutation or even a query should send a push to a specific target that is listening for the event.
It is not linked to a resolver
Can you please correct me If I'm wrong?
As you already figured out, the result must be of the same type as the mutation's, and you can't link your subscription to its own resolver.
But concerning your first assumption:
It is possible to filter the results of a mutation.
For example if you have the following mutation:
type Mutation {
  addPost(input: PostAddInput!): Post!
}

input PostAddInput {
  text: String!
  author: ID!
}
You can publish the result of the mutation to the specific user with this subscription:
type Subscription {
  addedPost(author_id: ID!): Post!
  @aws_subscribe(mutations: ["addPost"])
}
Now you will only receive the results if the author_id of the mutation matches the subscribed author_id.
I also created an AppSync RDS repository on GitHub if you want to try it out by yourself.
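On the client, subscribing with the author_id argument might look roughly like this using the Amplify JS library (a sketch; the selected fields and the variable value are illustrative):

import { API, graphqlOperation } from "aws-amplify";

// Subscription document matching the schema above.
const addedPost = `
  subscription AddedPost($author_id: ID!) {
    addedPost(author_id: $author_id) {
      id
      text
    }
  }
`;

// Only posts whose author_id matches the supplied value are delivered to this client.
const subscription = (API.graphql(
  graphqlOperation(addedPost, { author_id: "user-123" })
) as any).subscribe({
  next: (payload: any) => console.log("new post:", payload.value.data.addedPost),
  error: (err: any) => console.error(err),
});

// Later, when the component is torn down:
// subscription.unsubscribe();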

Orion Context Broker (0.17.0) subscriptions information

Are there any queryContext requests to get context subscription information for an entity? Or is that still on the roadmap?
The Orion roadmap includes an operation to get all subscriptions and another to get the details of a given subscription (by subscription ID). I think that is not exactly what you mean (if I understand correctly, you refer to an operation that, given an entity, returns all the subscriptions that refer to that entity), but you could combine the "get all subscriptions" operation with the entity::id filter to get the same effect.
Both operations and the filter are still on the roadmap. This tracker shows the implementation status and is updated each time a new version is planned and released.
Alternatively, until the API operations get implemented (see the other answer to this question), you can get subscription information directly from the Orion database as a workaround. Looking at the description of the subscriptions collection in the Orion Administration manual, and with basic knowledge of MongoDB query operations, you can for example get all the subscriptions to entity ID "foo" with the following query:
db.csubs.find({"entities.id": "foo"})

SO style reputation system with CQRS & Event Sourcing

I am diving into my first forays with CQRS and Event Sourcing and I have a few points I'd like some guidance on. I would like to implement an SO-style reputation system. This seems a perfect fit for this architecture.
Keeping SO as the example: say a question is upvoted; this generates an UpvoteCommand, which increases the question's total score and fires off a QuestionUpvotedEvent.
It seems like the author's User aggregate should subscribe to the QuestionUpvotedEvent, which could increase the reputation score. But how/when to do this subscription is not clear to me. In Greg Young's example the event/command handling is wired up in global.asax, but this doesn't seem to involve any routing based on aggregate id.
It seems as though every User aggregate would subscribe to every QuestionUpvotedEvent, which doesn't seem correct. To make such a scheme work, the event handler would have to check whether that user owned the question that was just upvoted, and Greg Young implied this kind of logic should not be in event handler code, which should merely apply state changes.
What am i getting wrong here?
Any guidance much appreciated.
EDIT
I guess what we are talking about here is inter-aggregate communication between the Question and User aggregates. One solution I can see is that the QuestionUpvotedEvent is subscribed to by a ReputationEventHandler, which could then fetch the corresponding User AR and call a corresponding method on that object, e.g. YourQuestionWasUpvoted. This would in turn generate a user-specific UserQuestionUpvoted event, thereby preserving replayability in the future. Is this heading in the right direction?
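A rough TypeScript-flavored sketch of that idea (the handler and method names follow the edit above; the repository contract is illustrative):

// A process-level event handler (not an aggregate) reacts to QuestionUpvotedEvent
// and routes it to the owning User aggregate.
interface QuestionUpvotedEvent { questionId: string; authorId: string; }

interface UserAggregate {
  yourQuestionWasUpvoted(questionId: string): void; // raises UserQuestionUpvoted internally
}

interface UserRepository {
  getById(id: string): Promise<UserAggregate>;
  save(user: UserAggregate): Promise<void>; // persists newly raised events to the stream
}

class ReputationEventHandler {
  constructor(private users: UserRepository) {}

  async handle(event: QuestionUpvotedEvent): Promise<void> {
    // Load the author's User aggregate by the id carried on the event,
    // so no broad "every user listens to every event" subscription is needed.
    const user = await this.users.getById(event.authorId);
    user.yourQuestionWasUpvoted(event.questionId);
    await this.users.save(user);
  }
}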
EDIT 2
See also the discussion on google groups here.
My understanding is that aggregates themselves should not be subscribing to events. The domain model only raises events. It's the query side or other infrastructure components (such as an emailing component) that subscribe to events.
Domain Services are designed to work with use-cases/commands that involve more than one aggregate.
What I would do in this situation:
VoteUpQuestionCommand gets invoked.
The handler for VoteUpQuestionCommand calls:
IQuestionVotingService.VoteUpQuestion(Guid questionId, Guid UserId);
This then fetches both the question and user aggregates, calling the appropriate methods on both, such as user.IncrementReputation(int amount) and question.VoteUp(). This would raise two events, UsersReputationIncreasedEvent and QuestionUpVotedEvent respectively, which would be handled by the query side.
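Sketched in TypeScript rather than the C#-style signatures above (the names mirror the answer; the repository and event plumbing are illustrative):

// The command handler delegates to a domain service that spans both aggregates,
// and each aggregate raises its own event.
interface VoteUpQuestionCommand { questionId: string; userId: string; }

interface Repository<T> { getById(id: string): Promise<T>; save(item: T): Promise<void>; }
interface Question { voteUp(): void; }                        // raises QuestionUpVotedEvent
interface User { incrementReputation(amount: number): void; } // raises UsersReputationIncreasedEvent

class QuestionVotingService {
  constructor(private questions: Repository<Question>, private users: Repository<User>) {}

  async voteUpQuestion(questionId: string, userId: string): Promise<void> {
    const question = await this.questions.getById(questionId);
    const user = await this.users.getById(userId);

    question.voteUp();
    user.incrementReputation(10); // reputation amount is illustrative

    await this.questions.save(question);
    await this.users.save(user);
  }
}

class VoteUpQuestionHandler {
  constructor(private votingService: QuestionVotingService) {}

  handle(cmd: VoteUpQuestionCommand): Promise<void> {
    return this.votingService.voteUpQuestion(cmd.questionId, cmd.userId);
  }
}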
My rule of thumb: if you do inter-AR communication use a saga. It keeps things within the transactional boundary and makes your links explicit => easier to handle/maintain.
The user aggregate should have a QuestionAuthored event... in that event it subscribes to the QuestionUpvotedEvent... similarly it should have a QuestionDeletedEvent and/or QuestionClosedEvent in which it does the proper handling, like unsubscribing from the QuestionUpvotedEvent, etc.
EDIT - as per comment:
I would treat the Question as an external event source and handle it via a gateway. The gateway in turn is the one responsible for handling any replay correctly so the end result stays exactly the same - except for special events like rejection events...
This is an old question and it is tagged as answered, but I think I can add something to it.
After a few months of reading, practicing, and building a small framework and application based on CQRS+ES, I think CQRS tries to decouple component dependencies and responsibilities. Some resources state that for each command you should change at most one aggregate in the command handler (you can load more than one aggregate in the handler, but only one of them may change).
So in your case I think the best practice is Tom's answer: you should use a saga. If your framework doesn't support sagas (like my small framework), you can create an event handler such as UpdateUserReputationByQuestionVotedEvent. In that handler, create a command like UpdateUserReputation(Guid userId, int amount), UpdateUserReputation(Guid userId, Guid questionId, int amount), or UpdateUserReputation(Guid userId, string description, int amount). After the command is sent to its handler, the handler loads the user by user id and updates its state and properties. With this type of handling you can build more complex scenarios or workflows.