Microservices communication JDBC SQL VS REST [closed] - spring-data

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
Which is the better of these two approaches for allowing two microservices to exchange data?
1- Via REST calls.
2- Each microservice exposes its related data as a database view, so that it can be reached by other microservices using Spring's JdbcTemplate or JPA.
Note that each microservice has its own (private) tables in the same database schema.
Thanks.

I would say that from a domain-driven design perspective (and microservices can be considered domains), other domains should not know anything about how your data is stored or structured (Bounded Context). Therefore I would vote for REST. Another point: what if your table/view structure changes? That would cause breaking changes in the other microservices. With REST you can change the underlying code of your routes without bothering your consumers. Direct database queries would only be needed if you have to use stored procedures (or other database-level performance tweaks) for better performance.

The world is imperfect, but in a perfect world your microservices should rarely (if ever) communicate directly with each other. One microservice having knowledge of another inherently couples them more tightly than is preferable for this distributed architecture. This coupling affects CI/CD, reduces fault tolerance and leaks domain information outside each service.
In our system, the only microservice accessed by (nearly) all others is the Authorization service so that if required, each microservice can validate the credentials it receives for a specific requested action. All other communication that occurs between services is asynchronous and passed along over our Integration Bus (RabbitMQ in our case).
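As an illustration of that fire-and-forget style, here is a minimal in-process sketch that uses a BlockingQueue as a stand-in for a broker such as RabbitMQ (the class and method names are made up for the example; a real integration bus adds durability, routing, and acknowledgements):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Stand-in for a broker queue: the producer fires an event and moves on;
// the consumer drains events on its own schedule, so the two services
// never call each other directly.
final class EventBus {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Publisher side: enqueue and return immediately (fire-and-forget).
    void publish(String event) {
        queue.offer(event);
    }

    // Consumer side: block until an event arrives.
    String consume() {
        try {
            return queue.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```

The producing service only needs to know the event contract, not which services (if any) are listening.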
Between the two options you presented, REST is probably better because it adds at least some abstraction between the services, but you might also take a hard look at your modeling to see if you can reduce dependencies between services and eliminate the need entirely. There is a decent (though old) article on Auth0 about dependencies, and a good (long) talk about this issue from a Spring Data project lead.

It depends.
A REST API provides looser coupling between the two services and is usually the preferred way.
The database view would make sense if you issue many queries and the performance of calling the REST API could become an issue.

I prefer REST for exchanging data across different microservices; you can probably adapt something similar in your application. I wrote this in Java, but you can write the equivalent piece of code in any language of your choice.
Hope this helps.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import javax.servlet.http.HttpServletRequest;
import org.springframework.web.bind.annotation.RequestMethod;

public final class MicroServiceDelegator {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    private MicroServiceDelegator() {
    }

    /**
     * Calls the concerned microservice and gets the response JSON.
     *
     * @param request the request object containing the request information
     * @param microserviceUri the URI to make the request to
     * @param requestMethod specifies either the GET or POST request method
     * @return JSON string response from the microservice
     * @throws Exception if the call fails
     */
    public static String callMicroService(final HttpServletRequest request,
            final String microserviceUri, final RequestMethod requestMethod)
            throws Exception {
        // Build a GET or POST request depending on the requested method,
        // forwarding the incoming body for POST
        HttpRequest.Builder builder = HttpRequest.newBuilder(URI.create(microserviceUri));
        if (requestMethod == RequestMethod.POST) {
            builder.POST(HttpRequest.BodyPublishers.ofString(
                    request.getReader().lines().reduce("", String::concat)));
        } else {
            builder.GET();
        }
        // Execute the client and return the (JSON) response body
        HttpResponse<String> response =
                CLIENT.send(builder.build(), HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
This can be used in any microservice, for example
@ResponseBody
@RequestMapping(method = RequestMethod.POST, path = "/all", produces = MediaType.APPLICATION_JSON_UTF8_VALUE)
public String getAllProducts(final HttpServletRequest request, final HttpServletResponse response) {
    String responseString = "";
    try {
        responseString = MicroServiceDelegator.callMicroService(request,
                "http://products-microservice" + "/all-products",
                RequestMethod.POST);
    } catch (Exception e) {
        log.warn("Call to products microservice failed", e);
    }
    return responseString;
}

Related

Providing separate OpenApi definitions

We have a service that provides two separate REST APIs. One is a simple API that our customers use, and the other is an internal API used by a web application.
Our customers are not able to access the web API, so I would like to provide two separate OpenApi specifications: one for our customers and one for our web developers.
I found a pretty straightforward way to achieve what I want by creating an endpoint that retrieves the OpenApi document and filters out the tags belonging to the customer API.
@Inject
OpenApiDocument document;

@Operation(hidden = true)
@GET
@Produces("application/yaml")
public Response customer() throws IOException {
    OpenAPI model = FilterUtil.applyFilter(new MyTagFilter("mytag"), document.get());
    String result = OpenApiSerializer.serialize(model, Format.YAML);
    return Response.ok(result).build();
}
One problem is that the injected OpenApiDocument instance is null in development mode. The OpenApiDocumentProducer seems to be missing some classloader magic that is present in the OpenApiHandler class. Another minor problem is that the filter “MyTagFilter” also needs to filter out Schemas not used by any tagged PathItems and the code becomes somewhat dodgy.
Is there a better way to solve my problem?
Would it be possible to fix OpenApiDocumentProducer to provide a non null and up to date OpenApiDocument in developer mode?
Similar question: Quarkus: Provide multiple OpenApi/Swagger-UI endpoints

Using GraphQL strictly as a query language

I think that my problem is a common one, and I'm weighing the costs and benefits of GraphQL as a solution.
I work on a product whose data is stored behind a monolithic CRUD-based REST API. We have components of our application that expose a search interface for data, and these of course need some kind of server-side support for making requests for that data. This could include sorting, filtering, choosing fields, etc. There are, of course, more traditional ways of providing these functions in a REST context, like query-parameter add-ons for endpoints, but it would be cool to try out GraphQL in this context to build a foundation for expanding its use for querying a bit.
GraphQL exposes a really nice query language for searching on data, and ultimately allows me to tailor the language of search specifically to my domain. However, I'm not sure if there is a great way to leverage the IDL without managing a separate server altogether.
Take the following Java Jersey API Proof-of-Concept example:
@GET
@Path("/api/v1/search")
public Response search(QueryIDL query) throws IOException {
    final SchemaParser schemaParser = new SchemaParser();
    TypeDefinitionRegistry typeDefinitionRegistry = // load schema
    RuntimeWiring runtimeWiring = // wire up data-fetching classes
    SchemaGenerator schemaGenerator = new SchemaGenerator();
    GraphQLSchema graphQLSchema =
            schemaGenerator.makeExecutableSchema(typeDefinitionRegistry, runtimeWiring);
    GraphQL build = GraphQL.newGraphQL(graphQLSchema).build();
    ExecutionResult executionResult = build.execute(query.toString());
    return Response.ok(executionResult.getData()).build();
}
I am just planning to take a request body into my Jersey server that looks exactly like the request that would be sent to a GraphQL server. I'm then leveraging some library support to interpret and execute the request for data.
Without really thinking too much about everything that could go wrong, it looks like a client would be able to use this API similar to the way they would use a GraphQL server, except that I don't need to necessarily manage a separate server just to facilitate my search requirements.
Does it seem valuable, or silly, to use the GraphQL IDL in an endpoint-based context like this?
Apart from not needing to rebuild the schema or the GraphQL instance on each request (there are cases where you may want to rebuild the GraphQL instance, but yours is not one of them), this is pretty much the canonical way of using it.
It is rather uncommon to keep a separate server for GraphQL, and it usually gets introduced exactly the way you described - as just another endpoint next to your usual REST endpoints. So your usage is legit - not silly at all :)
Btw, I'm not sure what QueryIDL would be... the query is just a string. No need for a special class.
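The build-once advice generalizes beyond graphql-java: construct the expensive object a single time and reuse it for every request. A minimal, hypothetical sketch of that caching pattern (the Lazy class is made up for illustration, standing in for however you hold onto the GraphQL instance; it is not a graphql-java API):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Generic build-once holder: the expensive object (here standing in for
// the GraphQL instance built from the schema) is created on first use
// and then reused across all subsequent requests.
final class Lazy<T> {
    private final Supplier<T> factory;
    private final AtomicReference<T> ref = new AtomicReference<>();

    Lazy(Supplier<T> factory) {
        this.factory = factory;
    }

    T get() {
        T value = ref.get();
        if (value == null) {
            // First caller wins; later callers see the cached instance.
            ref.compareAndSet(null, factory.get());
            value = ref.get();
        }
        return value;
    }
}
```

The request handler then calls `get()` per request, and the costly schema parsing and wiring happens only once.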

Limitations of GET vs POST - Restful API [duplicate]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I need to design a RESTful query API that returns a set of objects based on a few filters. The usual HTTP method for this is GET. The only problem is that it can have at least a dozen filters, and if we pass all of them as query parameters, the URL can get quite long (long enough to be blocked by some firewalls).
Reducing the number of parameters is not an option.
One alternative I can think of is to use the POST method on the URI and send the filters as part of the POST body. Is this against being RESTful (making a POST call to query data)?
Anyone have any better design suggestions?
Remember that with a REST API, it's all a question of your point of view.
The two key concepts in a REST API are the endpoints and the resources (entities). Loosely put, an endpoint either returns resources via GET or accepts resources via POST and PUT and so on (or a combination of the above).
It is accepted that with POST, the data you send may or may not result in the creation of a new resource and its associated endpoint(s), which will most likely not "live" under the POSTed url. In other words when you POST you send data somewhere for handling. The POST endpoint is not where the resource might normally be found.
Quoting from RFC 2616 (with irrelevant parts omitted, and relevant parts highlighted):
9.5 POST
The POST method is used to request that the origin server accept the
entity enclosed in the request as a new subordinate of the resource
identified by the Request-URI in the Request-Line. POST is designed to
allow a uniform method to cover the following functions:
...
Providing a block of data, such as the result of submitting a form, to a data-handling process;
...
...
The action performed by the POST method might not result in a resource that can be identified by a URI. In this case, either 200 (OK) or 204 (No Content) is the appropriate response status, depending on whether or not the response includes an entity that describes the result.
If a resource has been created on the origin server, the response SHOULD be 201 (Created)...
We have grown used to endpoints and resources representing 'things' or 'data', be it a user, a message, a book - whatever the problem domain dictates. However, an endpoint can also expose a different resource - for example search results.
Consider the following example:
GET /books?author=AUTHOR
POST /books
PUT /books/ID
DELETE /books/ID
This is a typical REST CRUD. However what if we added:
POST /books/search
{
    "keywords": "...",
    "yearRange": {"from": 1945, "to": 2003},
    "genre": "..."
}
There is nothing un-RESTful about this endpoint. It accepts data (entity) in the form of the request body. That data is the Search Criteria - a DTO like any other. This endpoint produces a resource (entity) in response to the request: Search Results. The search results resource is a temporary one, served immediately to the client, without a redirect, and without being exposed from some other canonical url.
It's still REST, except the entities aren't books - the request entity is book search criteria, and the response entity is book search results.
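Those request and response entities can be ordinary DTOs like any other; a minimal sketch in Java (the type and field names are illustrative, mirroring the JSON above rather than coming from any framework):

```java
import java.util.List;

// The request entity: book search criteria, a plain value object
// mirroring the JSON body of POST /books/search.
record YearRange(int from, int to) {}

record BookSearchCriteria(String keywords, YearRange yearRange, String genre) {}

// The response entity: book search results, a temporary resource served
// directly to the client without a canonical URL of its own.
record BookSearchResults(List<String> titles) {}
```

The endpoint simply deserializes the body into `BookSearchCriteria` and serializes a `BookSearchResults` back, exactly like any other entity-in, entity-out handler.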
A lot of people have accepted the practice that a GET with too long or too complex a query string (e.g. query strings don't handle nested data easily) can be sent as a POST instead, with the complex/long data represented in the body of the request.
Look up the spec for POST in the HTTP spec. It's incredibly broad. (If you want to sail a battleship through a loophole in REST... use POST.)
You lose some of the benefits of the GET semantics ... like automatic retries because GET is idempotent, but if you can live with that, it might be easier to just accept processing really long or complicated queries with POST.
(lol, long digression... I recently discovered that by the HTTP spec, GET can contain a document body. There's one section that says, paraphrasing, "Any request can have a document body except the ones listed in this section"... and the section it refers to doesn't list any. I searched and found a thread where the HTTP authors were talking about that, and it was intentional, so that routers and such wouldn't have to differentiate between different messages. However, in practice a lot of infrastructure pieces do drop the body of a GET. So you could GET with filters represented in the body, like POST, but you'd be rolling the dice.)
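One pragmatic way to act on this advice is to assemble the query string, measure the resulting URL, and fall back to POST above a threshold. A sketch, assuming a 2000-character practical ceiling (a common rule of thumb for proxies and older browsers, not a value from any spec; the class and method names are hypothetical):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.StringJoiner;

// Decides whether filter criteria fit safely in a GET URL, or whether
// they should travel in a POST body instead.
final class QueryPlanner {
    // Practical URL-length ceiling; assumed, not from the HTTP spec.
    static final int MAX_URL_LENGTH = 2000;

    // Percent-encodes each key/value pair and joins them with '&'.
    static String toQueryString(Map<String, String> filters) {
        StringJoiner joiner = new StringJoiner("&");
        for (Map.Entry<String, String> e : filters.entrySet()) {
            joiner.add(URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8)
                    + "=" + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return joiner.toString();
    }

    // true -> a plain GET is fine; false -> send the filters as a POST body.
    static boolean fitsInGet(String baseUrl, Map<String, String> filters) {
        return (baseUrl + "?" + toQueryString(filters)).length() <= MAX_URL_LENGTH;
    }
}
```

With this in place the client stays on GET (and keeps its caching and retry semantics) for the common case, switching to POST only for pathological filter sets.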
In a nutshell: Make a POST but override HTTP method using X-HTTP-Method-Override header.
Real request
POST /books
Entity body
{
    "title": "Ipsum",
    "year": 2017
}
Headers
X-HTTP-Method-Override: GET
On the server side, check whether the X-HTTP-Method-Override header exists; if it does, take its value as the method used to route to the final endpoint in the backend. Also, treat the entity body as the query string. From the backend's point of view, the request has become just a simple GET.
This way you keep the design in harmony with REST principles.
Edit: I know this solution was originally intended to solve the PATCH verb problem in some browsers and servers, but it also works for me with the GET verb in the case of a very long URL, which is the problem described in the question.
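The server-side check described above can be sketched as a small, framework-agnostic helper (the class and method names are hypothetical; a real implementation would live in a servlet filter or middleware in front of the router):

```java
import java.util.Map;

// Resolves the effective HTTP method: if the override header is present
// on a POST, it wins; otherwise the actual method is used.
final class MethodOverride {
    static final String HEADER = "X-HTTP-Method-Override";

    // 'headers' is a plain map here; note a real header lookup should be
    // case-insensitive per the HTTP spec.
    static String effectiveMethod(String actualMethod, Map<String, String> headers) {
        String override = headers.get(HEADER);
        // Only allow overriding POST, so arbitrary methods can't be
        // smuggled past intermediaries on other verbs.
        if ("POST".equalsIgnoreCase(actualMethod) && override != null && !override.isBlank()) {
            return override.toUpperCase();
        }
        return actualMethod;
    }
}
```

Routing then dispatches on the resolved method, and the POST body is interpreted as the query parameters of the logical GET.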
If you are developing in Java with JAX-RS, I recommend you use @QueryParam with @GET.
I had the same question when I needed to pass a list.
See the example:
import java.util.List;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import org.json.JSONObject;

@Path("/poc")
public class UserService {

    @GET
    @Path("/test/")
    @Produces(MediaType.APPLICATION_JSON)
    public Response test(@QueryParam("code") final List<Integer> code) {
        Integer int0 = code.get(0);
        Integer int1 = code.get(1);
        return Response.ok(new JSONObject().put("int01", int0).put("int02", int1)).build();
    }
}
URI pattern: "poc/test?code=1&code=2&code=3"
@QueryParam will convert the query parameter "code=1&code=2&code=3" into a java.util.List automatically.

Is it correct to use Post instead of Get to fetch data in Web API

I am currently creating a RESTful API with the ASP.NET Web API technology. I have two questions related to Web API.
I had done below:
Created below method in Controller class:
public HttpResponseMessage PostOrderData(OrderParam OrderInfo)
Based on Parameter OrderInfo, Query the SQL Server and get list of orders.
Set the Response.Content with the collection object:
List<Orders> ordList = new List<Orders>();
//filled the ordList from SQL query result
var response = Request.CreateResponse<List<Orders>>(HttpStatusCode.OK, ordList);
On Client side,
OrderParam ordparam = new OrderParam();
response = client.PostAsJsonAsync("api/order", ordparam).Result;
if (response.IsSuccessStatusCode)
{
    List<Orders> mydata = response.Content.ReadAsAsync<List<Orders>>().Result;
}
So the question: is it fine to POST data to the server in order to GET data, i.e. is the usage of POST instead of GET correct? Is there any disadvantage to this approach? (One disadvantage: I will not be able to query directly from the browser.) I have used POST here because the parameter "OrderParam" might be extended in the future, and the URL might then become too long.
My 2nd question: I have used classes for the parameter and for the returned objects, i.e. OrderParam and Orders. The consumers (clients) of this Web API are different customers, and they will consume the API through .NET (C#) or through jQuery/JS. So do we need to pass the file containing the definitions of the OrderParam and Orders classes manually to each client, and send it to each client again whenever these classes change?
Thanks in advance
Shah
Typically, no.
POST is neither safe nor idempotent, and as such cannot be cached. It is meant for cases where you are changing state on the server.
If you have big criteria, you need to redesign, but in most cases URL fragments or querystring params work. Have a look at OData, which uses the querystring for very complex queries, and uses GET.
With regard to the second question, also no. The server can expose a schema (similar to WSDL) or docs, but should not know about the client.
Yes you can. REST has nothing to do with security; it is just a convention, and you can use POST for a Web API when you do not need any caching.

Is it a bad practice to return an object in a POST via Web Api?

I'm using Web Api and have a scenario where clients are sending a heartbeat notification every n seconds. There is a heartbeat object which is sent in a POST rather than a PUT, because as I see it they are creating a new heartbeat rather than updating an existing heartbeat.
Additionally, the clients have a requirement that calls for them to retrieve all of the other currently online clients and the number of unread messages that individual client has. It seems to me that I have two options:
Perform the POST followed by a GET, which to me seems cleaner from a pure REST standpoint. I am doing a creation and a retrieval and I think the SOLID principles would prefer to split them accordingly. However, this approach means two round trips.
Have the POST return an object which contains the same information that the GET would otherwise have done. This consolidates everything into a single request, but I'm concerned that this approach would be considered ill-advised. It's not a pure POST.
Option #2 stubbed out looks like this:
public HeartbeatEcho Post(Heartbeat heartbeat)
{
}
HeartbeatEcho is a class which contains properties for the other online clients and the number of unread messages.
Web Api certainly supports option #2, but just because I can do something doesn't mean I should. Is option #2 an abomination, premature optimization, or pragmatism?
The option 2 is not an abomination at all. A POST request creates a new resource, but it's quite common that the resource itself is returned to the caller. For example, if your resources are items in a database (e.g., a Person), the POST request would send the required members for the INSERT operation (e.g., name, age, address), and the response would contain a Person object which in addition to the parameters passed as input it would also have an identifier (the DB primary key) which can be used to uniquely identify the object.
Notice that it's also perfectly valid for the POST request only return the id of the newly created resource, but that's a choice you have, depending on the requirements of the client.
public HttpResponseMessage Post(Person p)
{
    var id = InsertPersonInDBAndReturnId(p);
    p.Id = id;
    var result = this.Request.CreateResponse(HttpStatusCode.Created, p);
    result.Headers.Location = new Uri("the location for the newly created resource");
    return result;
}
Whichever way solves your business problem will work. You're correct: POST for a new record vs PUT for an update to an existing record.
SUGGESTION:
One thing you may want to consider is adding Redis to your stack: the apps can post very fast, and you can then use the Pub/Sub functionality for the echo part, or BLPOP (blocking until a record matches the criteria). It's super fast, may help you scale, and is perfectly designed for what you are trying to do.
See: http://redis.io/topics/pubsub/
See: http://redis.io/commands/blpop
I've used Redis for similar purposes, but also RabbitMQ, and with RabbitMQ we added a socket.io connection to "stream" the heartbeat in real time without the need for long polling.