I already have a REST API (for system-to-system communication) which takes a long time to process requests.
I want to have asynchronous processing. I see two options here:
To make the API itself asynchronous, where it returns a Location header giving another URI from which to fetch the result.
To make the client asynchronous - using an asynchronous HTTP client, AsyncRestTemplate, etc.
I was wondering which is the better way in such scenarios, as both seem to solve the issue.
There are some pros and cons to each of the options you mentioned that you may want to consider when making a decision.
Asynchronous API
This approach has a lot of benefits, such as allowing the API to process requests in parallel and improving the overall performance and scalability of the system. However, this approach can also add some complexity to the API, as it requires the API to implement asynchronous processing and provide a mechanism for clients to fetch the result of a request.
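A minimal sketch of that request/Location pattern, assuming an in-memory job store and made-up function names (a real service would persist jobs in a database and run the work on a background worker or queue):

```python
import uuid

# In-memory job store; purely illustrative, not a real framework API.
JOBS = {}

def submit_job(payload):
    """Accept the request immediately and return 202 plus a Location header."""
    job_id = str(uuid.uuid4())
    JOBS[job_id] = {"status": "processing", "result": None}
    return 202, {"Location": f"/jobs/{job_id}"}

def complete_job(job_id, result):
    """Called by the background worker when processing finishes."""
    JOBS[job_id] = {"status": "done", "result": result}

def get_job(job_id):
    """Polled by the client via the URI from the Location header."""
    job = JOBS[job_id]
    if job["status"] == "processing":
        return 202, None          # still working; the client should retry later
    return 200, job["result"]
```

The extra moving parts (the job store, the polling endpoint, the worker) are exactly the added complexity mentioned above.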
Asynchronous client
This approach can provide a simpler and more straightforward solution, as it does not require any changes to the API itself. This approach can also improve the performance and scalability of the system, as it allows the client to process multiple requests in parallel and handle responses asynchronously. However, this approach may require the client to implement additional logic to handle asynchronous processing and fetching results, which can add some complexity to the client.
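As a rough illustration of the client side in Python's asyncio (the question mentions AsyncRestTemplate on the Java side; fetch() below is a stand-in for a real HTTP call, not an actual client library):

```python
import asyncio

async def fetch(url):
    # Stand-in for a real async HTTP request.
    await asyncio.sleep(0.01)      # simulate network latency
    return f"response from {url}"

async def fetch_all(urls):
    # All requests run concurrently; total time is roughly the slowest
    # single request, not the sum of all of them.
    return await asyncio.gather(*(fetch(u) for u in urls))

results = asyncio.run(fetch_all(["/a", "/b", "/c"]))
```

The API itself stays unchanged; only the caller gains concurrency.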
Summary
Making the API asynchronous can provide better performance and scalability but may require a more complex implementation, while making the client asynchronous can be simpler but may not provide the same level of performance and scalability. You will need to weigh the pros and cons of each approach and decide which is best for your specific system based on your requirements and constraints.
It depends on your specific use case and requirements.
If you have a lot of requests coming in from multiple clients, making the API asynchronous may be the best option as it allows you to scale better and process requests in parallel.
On the other hand, if your API is already built and you are just looking to improve the performance of requests, using an asynchronous client may be the best option, as it will allow requests to be sent in parallel and processed faster.
Related
I am unsure how to make use of event-driven architecture in real-world scenarios. Let's say there is a route planning platform consisting of the following back-end services:
user-service (manages user data and roles)
map-data-service (roads & addresses, only modified by admins)
planning-tasks-service (accepts new route planning tasks, keeps track of background tasks, stores results)
The public website will usually request data from all 3 of those services. map-data-service needs information about user-roles on a data change request. planning-tasks-service needs information about users, as well as about map-data to validate new tasks.
Right now those services would just make a sync request to each other to get the needed data. What would be the best way to translate this basic structure into an event-driven architecture? Can dependencies be reduced by making use of events? How will the public website get the needed data?
Cosmin is 100% correct in that you need something to do some orchestration.
One approach to take, if you have a client that needs data from multiple services, is the Experience API approach.
Clients call the experience API, which performs the orchestration - pulling data from different sources and providing it back to the client. The design of the experience API is heavily, and deliberately, biased towards what the client needs.
Based on the details you've said so far, I can't see anything that cries out for event-based architecture. The communication between the client and ExpAPI can be a mix of sync and async, as can the ExpAPI to [Services] communication.
And for what it's worth, putting all of that on an API gateway is not a bad idea, in that they are designed to host APIs and therefore provide the desirable controls and observability for managing them.
Update based on OP Comment
I was really interested in how an event-driven architecture could reduce dependencies between my microservices, as it is often stated
Having components (or systems) talk via events is sort-of the asynchronous equivalent of Inversion of Control, in that the event consumers are not tightly-coupled to the thing that emits the events. That's how the dependencies are reduced.
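A toy in-process sketch of that decoupling (all names are illustrative; a real system would route events through a broker like Kafka or AMQ rather than direct handler calls):

```python
from collections import defaultdict

# Minimal in-process event bus: consumers subscribe to an event name
# without knowing anything about the service that emits it.
class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        for handler in self._subscribers[event_name]:
            handler(payload)

bus = EventBus()
received = []
# planning-tasks-service reacts to map data changes without ever
# calling map-data-service directly:
bus.subscribe("mapDataChanged", lambda event: received.append(event))
bus.publish("mapDataChanged", {"road": "A1"})
```

The emitter knows only the event name and payload; adding or removing consumers requires no change on its side, which is where the dependency reduction comes from.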
One thing you could do would be to do a little side-project just as a learning exercise - take a snapshot of your code and do a rough-n-ready conversion to event-based and just see how that went - not so much as an attempt to event-a-cise your solution but to see what putting events into a real-world solution looks like. If you have the time, of course.
The missing piece in your architecture is the API Gateway, which should be the only entry-point in your system, used by the public website directly.
The API Gateway would play the role of an orchestrator, deciding to which services to route each request, and would also assemble the final response needed by the frontend.
For scalability purposes, the communication between the API Gateway and individual microservices should be done asynchronously through an event-bus (or message queue).
However, the most important step in creating a scalable event-driven architecture which leverages microservices, is to properly define the bounded contexts of your system and understand the boundaries of each functionality.
More details about this architecture can be found here
Event storming is the first thing you need to do to identify domain events (a change of state in your system). For example, 'userCreated', 'userModified', 'locationCreated', 'routeCreated', 'routeCompleted', etc. Then you can define topics that manage these events. Interested parties can consume these events by subscribing to them (via topics/channels) and then act accordingly. An implementation of an event-driven architecture is often composed of loosely coupled microservices that communicate asynchronously through a message broker like Apache Kafka. The free EDA book is an excellent resource for learning most of what there is to know about EDA.
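As a rough sketch of what the event-storming output might look like once events are assigned to topics and services subscribe to them (topic names and subscriptions here are illustrative, not prescriptive):

```python
# Domain events routed to topics; services subscribe to whole topics.
topics = {
    "users":  ["userCreated", "userModified"],
    "routes": ["routeCreated", "routeCompleted"],
}

subscriptions = {
    "planning-tasks-service": ["users"],   # needs user info to validate tasks
    "map-data-service":       ["users"],   # needs roles on data change requests
}

def topic_for(event_name):
    """Find which topic carries a given domain event."""
    for topic, events in topics.items():
        if event_name in events:
            return topic
    raise KeyError(event_name)

def consumers_of(event_name):
    """Which services receive this event via their subscribed topics."""
    topic = topic_for(event_name)
    return [svc for svc, subs in subscriptions.items() if topic in subs]
```

This is the kind of routing table a broker such as Kafka maintains for you once topics and consumer groups are defined.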
Tutorial: Event-driven architecture pattern
I am currently working on a project made of many microservices that will asynchronously broadcast data to many possible client applications.
Additionally, client applications will be able to communicate with the system (i.e. the set of microservices) via a REST Open API.
For broadcasting the data, my first consideration was to use a MOM (Message Oriented Middleware) such as AMQ.
However, I am asked to reconsider this solution and to prefer a ReST endpoint (over HTTP) in order to provide an API more "Open-API oriented".
I am not a big specialist in HTTP, but it seems to me that the main technologies for sending asynchronous data from server to client are:
WebSocket
SSE
I am opening this discussion in order to get advice/feedback from other developers to help me weigh the pros & cons of this new solution. Among other things:
is an HTTP technology such as SSE/WebSocket relevant for my needs?
For additional information, here are a few metrics regarding the amount of data to broadcast:
a considerable number of messages per second
responsiveness matters
more than 100 clients listening for data
Thank you for your help and contribution
There are many different definitions of what people consider REST and not REST, but most people tend to agree that, in practical terms and popular best practice, REST services expose a data model via HTTP and limit operations on that data model to either requesting the state of resources (GET) or updating the state of resources (PUT). From that foundation, things are stacked on top.
What you describe is a pub-sub model. While it might be possible in academic terms to use REST concepts in a pub-sub architecture, I don't think that's really what you're looking for here.
WebSocket and SSE, in most real-world situations, do not fall under the REST umbrella, but they can augment an existing REST service.
If your goal is to simply create a pub-sub system that uses a technology stack that people are familiar with, Websockets are a really good choice. It's widely available and works in browsers.
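For reference, the SSE wire format itself is simple to produce: each event is plain text with a `data:`-prefixed line, terminated by a blank line. A small sketch (the field names come from the SSE specification; a real endpoint would stream these over a long-lived HTTP response with `Content-Type: text/event-stream`):

```python
def sse_event(data, event=None, event_id=None):
    """Serialize one Server-Sent Event in wire format."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")       # lets clients resume after a drop
    if event is not None:
        lines.append(f"event: {event}")       # named event type
    lines.append(f"data: {data}")
    return "\n".join(lines) + "\n\n"          # blank line terminates the event
```

SSE is one-way (server to client) and rides on plain HTTP, which can make it easier to fit into an "Open-API oriented" story than WebSocket's separate protocol upgrade.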
Consider an event-driven, microservice-based web application that should expose some async web APIs. AFAIK the suggested way to achieve an async HTTP request/response is to respond to each API call with, say, a 202 Accepted status code and a Location header to let the caller retrieve the results later.
In this way, we have to generate a unique ID (like a UUID or GUID) for each request and store that ID, along with all related future events, in persistent storage, so the API caller can track the progress of its request.
My question is how this API layer should be implemented considering we may have tens or hundreds of thousands of requests and responses per second. What is the most efficient architecture and tools to make such an API with this load?
One way could be storing all the requests and related events in both a database and a cache like Redis (just for a limited time, e.g. 30 minutes).
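A minimal in-memory sketch of that idea (a stand-in for the Redis layer; real Redis would handle expiry itself via its built-in TTL support, and the database would remain the durable fallback):

```python
import time

class TtlStore:
    """Request-tracking cache whose entries expire after a TTL."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._data = {}   # request_id -> (expiry_time, list of events)

    def put(self, request_id, event):
        expiry, events = self._data.get(
            request_id, (time.time() + self.ttl, []))
        events.append(event)
        self._data[request_id] = (expiry, events)

    def get(self, request_id):
        entry = self._data.get(request_id)
        if entry is None or entry[0] < time.time():
            self._data.pop(request_id, None)
            return None    # expired or unknown: fall back to the database
        return entry[1]
```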
Is there any better pattern/architecture/tools? How big companies and websites solved this issue?
Which database could be better for this scenario? (MongoDB, MySQL, …)
I really appreciate any useful answer specially if you have some production experience.
Very valid question! In terms of architecture and tools, you should check out Zipkin, an open distributed tracing system tried and tested by Twitter. Especially if you have a microservice architecture, it is really useful for tracking down all your requests and responses. Storage options include in-memory, JDBC (MySQL), Cassandra, and Elasticsearch.
If you are using Spring Boot for your microservices, then it is easily pluggable.
Even if you are not totally convinced by Zipkin, its architecture is worth looking into. From production experience: I have used it, and it was really useful.
I'm not sure I correctly understand the notion of a RESTful API. If I understand correctly, such an API should provide functions you can trigger with GET, POST, PUT & DELETE requests. My question is: if an API only provides POST request functions, is it still RESTful?
You should probably watch this lecture and read this article.
REST as such has nothing to do with how many of the available HTTP methods you use. So, the quick answer is: yes, it could be considered "restful" (whatever that actually means).
Buuut ... it most likely - isn't. And it has nothing to do with the abuse of POST calls.
The main indicator of this magical "RESTfulness" has nothing really to do with how you make the HTTP request (methods and pretty URLs are worthless as a determining factor).
What matters is the returned data and whether, by looking at this data, you can learn about other resources and actions related to the resource at any given endpoint. It's basically about discoverability.
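A small sketch of what that discoverability looks like in practice, using the common `_links` convention (the resource names and URIs are made up):

```python
def order_representation(order_id, status):
    """Build a response whose links tell the client what it can do next."""
    links = {"self": {"href": f"/orders/{order_id}"}}
    if status == "open":
        # The client learns it may cancel only because this link is present;
        # nothing about cancellation is hard-coded on the client side.
        links["cancel"] = {"href": f"/orders/{order_id}/cancel"}
    return {"id": order_id, "status": status, "_links": links}
```

The client follows whatever links the representation advertises instead of assuming URIs up front, which is what the discoverability point above is about.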
REST has been a misused term for some time, and the community, especially at Stack Overflow, doesn't even care about its actual intention: the decoupling of clients from server APIs in a distributed system.
Client and server achieve this decoupling by following certain recommendations, like avoiding stateful connections where client state is stored at and managed by the server, using unique identifiers for resources (URIs), and further nice-to-have features like cacheability to reduce the workload both servers and clients have to perform. While Fielding's dissertation lists 6 constraints, he later explained some further rules that applications following the REST architectural style have to follow, and the benefits the system gains by following them. Among these are:
The API should not depend on any single communication protocol, and should adhere to, and not violate, the underlying protocol used. Although REST is used via HTTP most of the time, it is not restricted to this protocol.
Strong focus on resources and their presentation via media-types.
Clients should have no initial knowledge of, or assumptions about, the available resources or their returned state ("typed" resources) in an API, but should learn them on the fly via issued requests and analyzed responses. This gives the server the opportunity to move around or rename resources easily without breaking a client implementation.
So, basically, if you limit yourself only to HTTP you somehow already violate the general idea REST tries to impose.
Since @tereško mentioned the Richardson Maturity Model, I want to clarify that this model is rather nonsense in the scope of REST. Even if level 3 is reached, it does not mean that the architecture follows REST, and any application that hasn't reached level 3 isn't following this architectural style anyway. Note that an application that only partially follows REST isn't actually following it: it's either done properly or not at all.
In regards to RESTful (the dissertation doesn't contain this term) usually one regards a JSON based API exposed via HTTP as such.
To your actual question:
Based on this quote
... such an API should provide functions you can trigger with GET, POST, PUT & DELETE requests
in terms of the REST architectural style I'd say NO, as you basically use such an API for RPC calls (a relaxed, probably JSON-based, SOAP if you will), limit yourself to HTTP only, and do not fully use the semantics of the underlying HTTP protocol. If you follow the JSON-based HTTP API crowd, the answer is probably "it depends on who you ask", as there is no precise definition of the term "RESTful" IMO. I'd say no here as well if you trigger functions rather than resources on the server.
Yes. RESTful has some guidelines you should follow. As long as you use HTTP verbs correctly and follow good practices with regard to URL naming, having only POSTs would be OK. If, on the other hand, a POST request in your application can also delete a record, then I would not call it RESTful.
I am new to REST. I am creating a shopping-cart-like web service where the user needs to authenticate and add items per user. How do I implement this using REST?
What does it mean when they say "REST is stateless"
Can I create a session in a SQL Server database and return it in the response so that the client can use it in further calls? Is that considered scalable?
I have seen posts on stateful REST services, and the answers say scalability will be an issue. Some posts also suggest storing the information in a database: Managing state in RESTful based application
But storing the value in a database is also some sort of statefulness, since the calls need to be executed in order and the client must pass some token in further calls.
So can I conclude that REST is not applicable for shopping-cart-like applications?
The REST stateless constraint says the client-server communication must be stateless, meaning that each request must contain all information needed to fulfill it. In simpler terms, this means you can't have a server-side session, but it's fine for you to have a client-side session.
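A minimal sketch of what "each request must contain all information needed" can look like (the handler, token format, and request shape here are all illustrative only):

```python
def authenticate(token):
    # Stand-in for real token verification (e.g. a signed JWT);
    # crucially, no server-side session store is consulted.
    assert token.startswith("user:")
    return token.split(":", 1)[1]

def handle_checkout(request):
    """Any server instance can handle this: the request carries the
    user's credentials and the full cart state."""
    user = authenticate(request["token"])
    total = sum(item["price"] * item["qty"] for item in request["cart"])
    return {"user": user, "total": total}
```

Because nothing is looked up in per-client server memory, the request can land on any instance behind a load balancer, which is the scalability benefit described below.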
Keep in mind that REST is an architectural style, and that you should follow constraints in order to leverage on the related benefits. If the benefits aren't important for you, it's better to ignore them than to use something that won't be of any good merely to follow the style. The stateless constraint intends to increase visibility, reliability and scalability. Visibility, because the whole request can be understood immediately; reliability because it's easier to recover from server-side failures; and scalability because any server instance can respond any requests. If these aren't important for you, feel free to keep a server-side session if that's easier for you.