Should I use the Orchestration or Choreography pattern?

I am currently developing a service with a monolithic architecture. When a client clicks, the front end calls the backend service. The backend calls a master method, which runs the other methods sequentially, waiting for each response. The API that invokes the master starts a thread and immediately returns an HTTP 202 response. The backend currently runs in a single pod.
The logic is something like:

def master(request):
    m1 = callMethod1(request)
    if m1.message == 'OK':
        m2 = callMethod2(request)
        if m2.message == 'OK':
            m3 = callMethod3(request)
            # ... around 10 methods chained like this
        else:
            # fail
            ...

def callMethod1(request):
    # do something with request
    # update database
    # return the status
    ...

def callMethod2(request):
    # do something with request
    # update database if pass (OK) or fail (ERROR)
    # return the status
    ...
Concurrent users for this application would be no more than 150.
Tech stack:
Front end - React
Backend - Django
My questions:
Should I convert each method to an API and put each in its own pod?
If so, what pattern do I use, orchestration or choreography?
If choreography, what is a cost-efficient way to implement the publisher-subscriber functionality: Redis or Service Bus?
Or should I just stick to the present architecture?

This use case looks simpler with orchestration. Check out the temporal.io open source project, which would allow converting your code to a workflow without logical changes.
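For a rough idea of what that conversion could look like (this is only a sketch, not an exact recipe; the worker and client setup is omitted and the current Temporal Python SDK docs should be checked for exact signatures), the master method from the question might become a workflow and each callMethodN an activity:

from datetime import timedelta
from temporalio import activity, workflow

@activity.defn
async def callMethod1(request: dict) -> str:
    # do something with request, update the database, return the status
    return 'OK'

@workflow.defn
class MasterWorkflow:
    @workflow.run
    async def run(self, request: dict) -> str:
        # Each step becomes an activity; Temporal persists progress, so a pod
        # restart resumes the workflow instead of losing the in-flight request.
        m1 = await workflow.execute_activity(
            callMethod1, request, start_to_close_timeout=timedelta(minutes=5)
        )
        if m1 != 'OK':
            return 'FAILED'
        # ... chain the remaining ~10 activities the same way
        return 'DONE'

The sequential if/else structure stays intact; what changes is that retries, timeouts, and the "202 then run in background" behaviour are handled by the orchestrator rather than by a hand-started thread.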

Related

How to keep thread safe with multiple pods of a Spring Data Reactive Repository microservice

I have a microservice that wraps access to a MongoDB (DaaS). It is implemented with a Spring Data Reactive Repository (ReactiveMongoRepository).
I have deployed it as a Docker image running on Kubernetes (in Google Cloud), and I have configured the orchestrator to keep a minimum of 2 pods of my microservice (and a maximum of 4).
In another microservice, I have implemented a multithreaded batch process that calls my DaaS with the following sequence:
findById
Modify some fields (including increments of counters)
save
Here is the relevant code:
public Mono<Element> updateElement(String id) {
    return this.daasClient.findById(id)
        .map(elem -> modify(elem))
        .flatMap(elem -> this.daasClient.save(elem));
}
When there are lots of operations, each pod runs some of them (including the previous code), and I have seen that access to the resource (Mongo) is not thread safe, so the result is not as expected.
I guess two pods run findById simultaneously, so the update is not applied to the "real" (latest) document, and whichever pod invokes the save method last overrides the other's changes.
Does anybody know how I could avoid this, i.e., implement this operation in a thread-safe (pod-safe) way?
Thanks
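For reference, the race described here is the classic lost update. When the modification is an increment, one common MongoDB-level remedy is to push it into a single atomic update instead of a read-modify-write cycle. A minimal sketch with plain pymongo for brevity (the question uses reactive Spring Data; the connection string, database, collection, and counter field names below are assumptions):

from pymongo import MongoClient, ReturnDocument

client = MongoClient("mongodb://localhost:27017")   # assumed connection string
elements = client["mydb"]["elements"]               # assumed database/collection names

def increment_counter(element_id):
    # $inc is applied atomically on the server, so two pods updating the same
    # document concurrently cannot overwrite each other's increment.
    return elements.find_one_and_update(
        {"_id": element_id},
        {"$inc": {"counter": 1}},
        return_document=ReturnDocument.AFTER,
    )

If the modification cannot be expressed as a single update operator, optimistic locking (a version field checked in the update filter, with a retry on conflict) is the usual alternative; Spring Data supports this with its @Version annotation.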

REST API - with ordered execution (Spring boot)

We are building a REST API with Spring Boot.
We have a scenario where we cannot execute requests for the same object in parallel; in the back end, parallel execution is not possible for the same object.
Example:
Not supported in parallel
Request 1 for action XYZ for object A
Request 2 for action XYZ for object A
Request 3 for action ABC for object A
Supported in parallel
Request 1 for action XYZ for object A
Request 2 for action XYZ for object B
Request 3 for action ABC for object C
So, I am looking for the best possible way to achieve this. One major drawback I can see is that it won't keep the REST application stateless.
I can think of keeping a set that tracks which objects already have a request in flight and not accepting new requests for those objects, but then I have to reject the similar requests.
The other option, where I don't have to reject requests, is to maintain a queue per object and execute requests for the same object in order; a rough sketch of that idea follows below.
But I can sense that this is not the right way to do this. So, if anybody has solved a similar situation, please guide.
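For illustration only (sketched in Python for brevity; in Java the same structure maps to a ConcurrentHashMap of locks or a keyed executor), the per-object queue can be approximated inside a single instance with one lock per object id: requests for different objects run in parallel, while requests for the same object are serialized. With several instances behind a load balancer, a shared lock (for example in Redis or the database) or a partitioned queue would be needed instead; the names below are made up.

import threading
from collections import defaultdict

_locks = defaultdict(threading.Lock)   # one lock per object id
_locks_guard = threading.Lock()        # protects the lock dictionary itself

def run_serialized(object_id, action):
    with _locks_guard:
        lock = _locks[object_id]
    with lock:          # only one action per object at a time; others wait here
        return action()

# run_serialized("A", lambda: perform_xyz("A")) and
# run_serialized("B", lambda: perform_xyz("B")) can run concurrently,
# but two calls for object "A" execute one after the other.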

How to merge/consolidate responses from multiple RESTful microservices?

Let's say there are two (or more) RESTful microservices serving JSON. Service (A) stores user information (name, login, password, etc) and service (B) stores messages to/from that user (e.g. sender_id, subject, body, rcpt_ids).
Service (A) on /profile/{user_id} may respond with:
{id: 1, name:'Bob'}
{id: 2, name:'Alice'}
{id: 3, name:'Sue'}
and so on
Service (B) responding at /user/{user_id}/messages returns a list of messages destined for that {user_id} like so:
{id: 1, subj:'Hey', body:'Lorem ipsum', sender_id: 2, rcpt_ids: [1,3]},
{id: 2, subj:'Test', body:'blah blah', sender_id: 3, rcpt_ids: [1]}
How does the client application consuming these services handle putting the message listing together such that names are shown instead of sender/rcpt ids?
Method 1: Pull the list of messages, then start pulling profile info for each id listed in sender_id and rcpt_ids? That may require hundreds of requests and could take a while; it is rather naive and inefficient and may not scale with complex apps.
Method 2: Pull the list of messages, extract all user ids, and make a bulk request for all relevant users separately... this assumes such a service endpoint exists. There is still a delay between getting the message listing, extracting the user ids, sending the request for bulk user info, and awaiting the bulk user info response.
Ideally I want to serve out a complete response set in one go (messages and user info). My research brings me to merging responses at the service layer, a.k.a. Method 3: the API Gateway technique.
But how does one even implement this?
I can obtain the list of messages, extract the user ids, make a call behind the scenes to obtain the users' data, merge the result sets, and then serve this final result up... This works OK with 2 services behind the scenes... But what if the message listing depends on more services? What if I needed to query multiple services behind the scenes, further parse their responses, query more services based on secondary (tertiary?) results, and then finally merge... where does this madness stop? How does this affect response times?
And I've now effectively created another "client" that combines all microservice responses into one mega-response... which is no different than Method 1 above... except at the server level.
Is that how it's done in the "real world"? Any insights? Are there any open source projects that are built on such API Gateway architecture I could examine?
The solution we used for this problem was denormalization of data, with events for keeping it updated.
Basically, a microservice keeps a subset of the data it requires from other microservices beforehand, so that it doesn't have to call them at run time. This data is kept in sync through events: when another microservice is updated, it fires an event with the id as context, which can be consumed by any microservice that has an interest in it. This way the data remains in sync (of course it requires some form of failure handling for events). It seems like a lot of work, but it helps us with any future decisions regarding consolidation of data from different microservices. Our microservice always has all the data available locally to process any request, without a synchronous dependency on other services.
In your case, i.e. showing names with a message, you can keep an extra property for names in Service (B). Whenever a name is updated in Service (A), it fires an update event with the id of the updated name. Service (B) then consumes the event, fetches the relevant data from Service (A), and updates its own database. This way, even if Service (A) is down, Service (B) will still function, albeit with some stale data that becomes consistent again once Service (A) comes back up, and you will always have some name to show on the UI.
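A rough sketch of that event handler inside Service (B) might look like this (the endpoint URL, event shape, and local store are illustrative assumptions, not part of the answer; the event transport itself is left abstract):

import requests

USER_SERVICE_URL = "http://service-a/profile"   # assumed Service (A) endpoint
local_user_names = {}                           # stand-in for Service (B)'s own table

def on_user_updated(event):
    # The event carries only the id; fetch the fresh name from Service (A)
    # and denormalize it into Service (B)'s local store.
    user_id = event["id"]
    resp = requests.get(f"{USER_SERVICE_URL}/{user_id}", timeout=5)
    resp.raise_for_status()
    local_user_names[user_id] = resp.json()["name"]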
https://enterprisecraftsmanship.com/2017/07/05/how-to-request-information-from-multiple-microservices/
You might want to perform response aggregation at your API gateway. I've written an article on how to do this with ASP.NET Core and Ocelot, but there should be a counterpart for other API gateway technologies:
https://www.pogsdotnet.com/2018/09/api-gateway-response-aggregation-with.html
You need to write another service, called an Aggregator, which will internally call both services, merge/filter their responses, and return the desired result. This can be achieved in a non-blocking way using Mono/Flux in Spring Reactive.
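The answer above suggests Mono/Flux in Spring Reactive; as a rough sketch of the same fan-out-and-merge idea, here it is in Python with asyncio and httpx, with the endpoint URLs assumed from the question:

import asyncio
import httpx

PROFILE_URL = "http://service-a/profile/{user_id}"          # assumed Service (A) endpoint
MESSAGES_URL = "http://service-b/user/{user_id}/messages"   # assumed Service (B) endpoint

async def get_messages_with_names(user_id):
    async with httpx.AsyncClient() as client:
        # Fetch the message list first, then all referenced profiles concurrently.
        messages = (await client.get(MESSAGES_URL.format(user_id=user_id))).json()
        user_ids = {m["sender_id"] for m in messages} | {
            rid for m in messages for rid in m["rcpt_ids"]
        }
        profiles = await asyncio.gather(
            *(client.get(PROFILE_URL.format(user_id=uid)) for uid in user_ids)
        )
        names = {p.json()["id"]: p.json()["name"] for p in profiles}
    # Merge: replace ids with names before returning one combined payload.
    return [
        {**m, "sender": names[m["sender_id"]],
         "rcpts": [names[r] for r in m["rcpt_ids"]]}
        for m in messages
    ]

Calling asyncio.run(get_messages_with_names(1)) then returns one merged response with names resolved, which is essentially what an API gateway composition layer does on the client's behalf.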
An API Gateway often does API composition.
This is a typical engineering problem when you have microservices implementing the database-per-service pattern.
The API Composition and Command Query Responsibility Segregation (CQRS) patterns are useful ways to implement such queries.
Ideally I want to serve out a complete response set in one go
(messages and user info).
The problem you've described is one Facebook recognized years ago and decided to tackle by creating an open source specification called GraphQL.
But how does one even implement this?
It is already implemented in various popular programming languages and maybe you can give it a try in the programming language of your choice.

Service which provides an interface to an async service and Idempotency violation

Please keep in mind I have a rudimentary understanding of REST and of building services. I am asking this question mostly because I am trying to decouple a service from invoking a CLI (on the same host) by providing a front end to run async jobs in a scalable way.
I want to build a service where you can submit an asynchronous job. The service should be able to tell me status of the job and location of the results.
APIs
1) CreateAsyncJob
Input: JobId, JobFile
Output: 200 OK (if the job was submitted successfully)
2) GetAsyncJobStatus
Input: JobId
Output: Status (InProgress / DoesntExist / Completed / Errored)
3) GetAsyncJobOutput
Input: JobId
Output: OutputFile
Question
The second API, GetAsyncJobStatus, violates the principle of idempotency.
How is idempotency preserved in such APIs, where we need to update the progress of a particular job?
Is idempotency a requirement in such situations?
Based on the link here, idempotency is a behaviour demonstrated by an API: it produces the same result across repeated invocations.
As per my understanding, idempotency applies at the API method level (we are more concerned about what would happen if a client calls this API repeatedly). Hence the best way to maintain idempotency is to segregate read and write operations into separate APIs. This way we can reason more thoroughly about the idempotent behaviour of the individual API methods. Also, while this term is gaining traction with RESTful services, the principles hold true for other API styles as well.
In the use case you have provided, the response to the API call made by the client would differ (depending upon the status of the job). Assuming that this API is read-only and does not perform any write operations on the server, the state on the server would remain the same when invoking only this API. For example, if there were 10 jobs in the system in varied states, calling this API 100 times for a job id could return a different status each time for that job id (based on its progress); however, the number of jobs on the server and their corresponding states would remain the same.
However, if this API were implemented in a way that altered the state of the server, then the call would not necessarily be idempotent.
So keep two APIs: getJobStatus(String jobId) and updateJobStatus(String jobId). getJobStatus is idempotent while updateJobStatus is not; a minimal sketch of this separation follows below.
Hope this helps
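To make that separation concrete, here is a minimal sketch (the endpoint paths, field names, in-memory store, and use of Flask are illustrative assumptions, not from the thread): the status read is safe and idempotent, while progress updates go through a separate write endpoint used by the job worker.

from flask import Flask, jsonify, request

app = Flask(__name__)
jobs = {}   # in-memory stand-in for the job store

@app.route("/jobs/<job_id>/status", methods=["GET"])
def get_job_status(job_id):
    # Read-only: repeated calls never change server state (safe and idempotent).
    job = jobs.get(job_id)
    return jsonify({"status": job["status"] if job else "DoesntExist"})

@app.route("/jobs/<job_id>/status", methods=["PUT"])
def update_job_status(job_id):
    # Write: the worker reports progress here; a PUT of the full status is
    # idempotent in effect (setting the same status twice changes nothing).
    jobs.setdefault(job_id, {})["status"] = request.json["status"]
    return "", 204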

data store with a "get or block" operation?

I'm looking for a data store that has a "get or block" operation. This operation would return the value associated with a key/query if that value exists or block until that value is created.
It's like a pub/sub message queue but with a memory to handle the case when the subscriber connects after the publisher has published the result.
This operation allows unrelated processes to rendezvous with each other, and it seems that it would be a very useful architectural building block to have - especially in a web environment - i.e. a web request comes in which kicks off a backend server process to do some work and the web client can get the results via a future AJAX call.
Here is a blog post I found on how to accomplish this sort of operation with MongoDB:
http://blog.mongodb.org/post/29495793738/pub-sub-with-mongodb
What other solutions are in use today? Can I do the same thing with redis or rabbitmq? I've looked at the docs for both, but it's unclear exactly how it would work. Should I roll my own server with 0MQ? Is there something out there specifically tailored for this problem?
You are correct: both Redis [1] and RabbitMQ [2] have pub/sub capabilities; a sketch of the "get or block" pattern on Redis follows the links below.
[1] http://redis.io/topics/pubsub
[2] http://www.rabbitmq.com/tutorials/tutorial-three-python.html
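To make that concrete, here is a rough sketch of the "get or block" behaviour on top of Redis (the key and channel names are made up for illustration): the producer stores the value and then publishes a notification, and the consumer subscribes before checking the key, so a result published in between cannot be missed.

import redis

r = redis.Redis()

def put_result(key, value):
    # Producer: persist the value, then wake up any waiting consumers.
    r.set(key, value)
    r.publish(f"{key}:ready", value)

def get_or_block(key):
    # Consumer: subscribe before checking, so a publish that happens between
    # the GET and the SUBSCRIBE is not lost.
    pubsub = r.pubsub(ignore_subscribe_messages=True)
    pubsub.subscribe(f"{key}:ready")
    try:
        value = r.get(key)
        if value is not None:            # already produced: behaves like a plain GET
            return value
        for message in pubsub.listen():  # otherwise block until the publish arrives
            return message["data"]
    finally:
        pubsub.close()

If consuming the value exactly once is acceptable, Redis's blocking list pop (BLPOP) gives the blocking behaviour directly, but the value is removed when it is read, so late subscribers would not see it.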