Spring Cloud: How to manage requests on Zuul to other services? - spring-cloud

I would like to understand the correct approach for managing requests across several microservices, one of which is Zuul:
I have a Zuul app that acts as a proxy in front of my microservices. Zuul runs on port 7777 and exposes routes like /api/service1/get or /api/service2/get. Every service has an echo endpoint, which is reachable at localhost:7777/api/service1/get and works fine.
However, those echo endpoints are also reachable directly on the corresponding services, so I can send a request from Postman straight to service1/get or service2/get.
As far as I understand, anybody can call those services either through Zuul or directly. So what is the difference, and what real value does Zuul add in this case (apart from the fact that Zuul, as a proxy microservice, can authorize users)?
So what is the correct approach for using Zuul with microservices?

Your question is really asking two things: what Zuul is for, and how to use it. I'll answer the first one.
Its purpose is to be the service that sits in front of all the other services you have, like the front door to your system.
The rest of the services should be hidden from the outside world, behind the proxy service.
The point is to route requests to all your services from one place, so with netflix-zuul you are able to intercept the request, manipulate it, authenticate, route, and so on (see the filter sketch below).
You can integrate service discovery (netflix-eureka) so your services register themselves and you don't need to deal with their URLs; you access them through the paths you defined and their registered service ids.
You can integrate load balancing (netflix-ribbon) across your system.
You can control the interactions between your services by adding latency-tolerance and fault-tolerance logic (netflix-hystrix), so you can provide fallback options when errors occur.
And so on...
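To make the intercept/authenticate/route part concrete, here is a minimal sketch of a Zuul pre-filter, assuming Spring Cloud Netflix Zuul; the Authorization-header check is only an illustration, not a complete security setup:

```java
import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import javax.servlet.http.HttpServletRequest;
import org.springframework.stereotype.Component;

// Illustrative pre-filter: rejects requests without an Authorization header
// before Zuul routes them to the downstream service.
@Component
public class AuthPreFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return "pre";                // run before routing
    }

    @Override
    public int filterOrder() {
        return 1;
    }

    @Override
    public boolean shouldFilter() {
        return true;                 // apply to every request
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        HttpServletRequest request = ctx.getRequest();
        if (request.getHeader("Authorization") == null) {
            ctx.setSendZuulResponse(false);     // do not forward downstream
            ctx.setResponseStatusCode(401);
        }
        return null;                            // return value is ignored by Zuul
    }
}
```

Requests that pass the filter are then routed to the backing services, which can stay unreachable from the outside world (e.g. only exposed on the internal network), so Zuul really is the only door in.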

Related

Controlling the user experience when doing canary or A/B deployments with Istio

I have an application with multiple services called from a primary application service. I understand the basics of doing canary and A/B deployments, however all the examples I see show a round robin where each request switches between versions.
What I'd prefer is that once a given user/session is associated with a certain version it stays that way to avoid giving a confusing experience to the user.
How can this be achieved with Kubernetes or Istio/Envoy?
You can do this with Istio using Request Routing - Route based on user identity but I don't know how mature the feature is. It may also be possible to route based on cookies or header values.
We've been grappling with this because we want to deploy test microservices into production and expose them only if the first request contains a "dark release" header.
As mentioned by Jonas, cookies and header values can in theory be used to achieve what you're looking for. It's very easy to achieve if the service that you are canarying is on the edge, and your user is directly accessing.
The problem is, you mention you have multiple services. If you have a chain where the user accesses edge service A which is then making calls to service B, service C etc, the headers or cookies will not be propagated from one service to another.
This is the same problem that we hit when trying to do distributed tracing. The Istio documents currently have this FAQ:
https://istio.io/faq/distributed-tracing/#istio-copy-headers
The long and short of that is that you will have to do header propagation manually. Luckily most of my microservices are built on Spring Boot and I can achieve header propagation with a simple 5-line class that intercepts all outgoing calls. But it is nonetheless invasive and has to be done everywhere. The antithesis of a service mesh.
It's possible there is a clever way around this but it's hard to infer from the docs what is possible and what isn't. I've seen a few github issues raised by Istio developers to address this but every one I've seen has gone stale after initial enthusiasm.
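For reference, here is a sketch of the kind of outgoing-call interceptor described above, assuming Spring's RestTemplate and the B3 header names from the Istio distributed-tracing FAQ; the class name is made up, and the exact header list should match your mesh configuration:

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import javax.servlet.http.HttpServletRequest;
import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;

// Copies tracing/routing headers from the incoming request onto every
// outgoing RestTemplate call, so header-based routing survives the hop
// from service A to services B, C, ...
public class TracingHeaderPropagationInterceptor implements ClientHttpRequestInterceptor {

    private static final List<String> HEADERS = Arrays.asList(
            "x-request-id", "x-b3-traceid", "x-b3-spanid",
            "x-b3-parentspanid", "x-b3-sampled", "x-b3-flags", "x-ot-span-context");

    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body,
                                        ClientHttpRequestExecution execution) throws IOException {
        ServletRequestAttributes attrs =
                (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
        if (attrs != null) {
            HttpServletRequest incoming = attrs.getRequest();
            for (String name : HEADERS) {
                String value = incoming.getHeader(name);
                if (value != null) {
                    request.getHeaders().add(name, value);
                }
            }
        }
        return execution.execute(request, body);
    }
}
```

Register it on the RestTemplate your services use for outgoing calls, e.g. restTemplate.getInterceptors().add(new TracingHeaderPropagationInterceptor()); a custom header such as a "dark release" flag could be added to the same list.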

Share circuit breaker over multiple feign clients

For an application that has multiple Feign clients all connecting to the same external component, we want one shared circuit breaker.
How can this be achieved with spring-cloud-starter-openfeign?
Detailed explanation:
When the providing service is down, all three clients should stop sending requests, since all of them would fail anyway. Is it possible for all three clients to share the same circuit breaker?
I think you can create a single FeignClient (not a circuit breaker) for the providing service, and have each consuming service inject that FeignClient and use it to call the providing service.
See the guides for spring-cloud-feign and circuit-breaker.
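If you want a literally shared breaker rather than one per client method, another option (a sketch only, assuming you use the Hystrix integration of spring-cloud-starter-openfeign with feign.hystrix.enabled=true; the configuration class name and keys are made up) is to give every method of every client that targets the external component the same Hystrix command key via a custom SetterFactory:

```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixCommandKey;
import feign.Target;
import feign.hystrix.SetterFactory;
import java.lang.reflect.Method;
import org.springframework.context.annotation.Bean;

// Feign configuration class: every method of every client that uses it runs
// under the same Hystrix command key, i.e. one shared circuit breaker.
public class SharedBreakerConfiguration {

    @Bean
    public SetterFactory sharedCircuitBreaker() {
        return (Target<?> target, Method method) ->
                HystrixCommand.Setter
                        .withGroupKey(HystrixCommandGroupKey.Factory.asKey("external-component"))
                        .andCommandKey(HystrixCommandKey.Factory.asKey("external-component"));
    }
}
```

Each of the three clients would then reference it, e.g. @FeignClient(name = "external", configuration = SharedBreakerConfiguration.class), so they all trip and recover together.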

How to connect various microservices with Docker?

I have two microservices in Docker and I want to connect one to the other, but I don't know how to do it. Both (and future apps) are REST APIs built with Spring Boot. I've been searching for info and tutorials but can't find anything. My idea is to have a main app that can connect to the other microservices (the REST APIs) and expose them, and I want all of this to run inside Docker containers.
Is it possible?
Does anyone know of a tutorial that explains this?
Thanks so much!
What you are describing could be an API Gateway. Here is a great tutorial explaining this pattern.
Implement an API gateway that is the single entry point for all clients. The API gateway handles requests in one of two ways. Some requests are simply proxied/routed to the appropriate service. It handles other requests by fanning out to multiple services.
A variation of this pattern is the Backend for Front-End pattern. It defines a separate API gateway for each kind of client.
Using an API gateway has the following benefits:
Insulates the clients from how the application is partitioned into microservices
Insulates the clients from the problem of determining the locations of service instances
Provides the optimal API for each client
Reduces the number of requests/roundtrips. For example, the API gateway enables clients to retrieve data from multiple services with a single round-trip. Fewer requests also means less overhead and improves the user experience. An API gateway is essential for mobile applications.
Simplifies the client by moving logic for calling multiple services from the client to API gateway
Translates from a “standard” public web-friendly API protocol to whatever protocols are used internally
The API gateway pattern has some drawbacks:
Increased complexity - the API gateway is yet another moving part that must be developed, deployed and managed
Increased response time due to the additional network hop through the API gateway - however, for most applications the cost of an extra roundtrip is insignificant.
How implement the API gateway?
An event-driven/reactive approach is best if it must scale to handle high loads. On the JVM, NIO-based libraries such as Netty, Spring Reactor, etc. make sense. NodeJS is another option.
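On Spring specifically, one reactive option (my suggestion, not something the quoted pattern text prescribes) is Spring Cloud Gateway, which runs on Netty/Reactor. A minimal route configuration sketch, with made-up service hostnames:

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Routes /api/orders/** and /api/users/** to two hypothetical backend
// services; the gateway itself is the single entry point for clients.
@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("orders", r -> r.path("/api/orders/**")
                        .uri("http://orders-service:8080"))
                .route("users", r -> r.path("/api/users/**")
                        .uri("http://users-service:8080"))
                .build();
    }
}
```

Cross-cutting concerns (auth, rate limiting, response aggregation) can then be added as gateway filters instead of being duplicated in every client.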
To give you the simplest answer:
In general, containers can communicate with each other over any protocol (HTTP, FTP, TCP, UDP), not just REST (HTTP/S):
using internal/external IPs and ports
using internal/external names (DNS):
if your microservices are in the same cluster, even across multiple hosts, your Spring Boot program should be able to call http://{{container service name}}; it's a built-in feature of container networking (see the sketch after this list)
if you have more microservices in different clusters, on different hosts, or across the internet, you can use API management (APIM) or a reverse proxy (NGINX, HAProxy) to manage the service names, e.g.
microservice1.yourdomain.com —> container1 or service1 (cluster)
microservice2.yourdomain.com —> container2 or service2 (cluster)
yourdomain.com/microservice1 —> container2 or service2 (cluster)
yourdomain.com/microservice2 —> container1 or service1 (cluster)
PS: there are more sophisticated techniques out there, but fundamentally it comes down to the approaches above.
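As a tiny illustration of the "call the other container by its service name" case (a sketch; the user-service name and port 8080 are assumptions that must match your Docker Compose or Kubernetes service definition):

```java
import org.springframework.web.client.RestTemplate;

// Calls another container by its service name; Docker's embedded DNS on a
// user-defined (or Compose) network resolves "user-service" to the container.
public class UserClient {

    private final RestTemplate restTemplate = new RestTemplate();

    public String fetchUser(long id) {
        // "user-service" must match the service name in docker-compose.yml
        return restTemplate.getForObject(
                "http://user-service:8080/api/users/" + id, String.class);
    }
}
```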

Microservice Architecture with UI and Auth Server

I am thinking of moving our monolithic company portal to microservices. To do so, I need to create a portal HTML UI that has some kind of redundancy so we don't go down during updates, plus full Spring Security including roles and permissions.
Currently I am stuck deciding on the best practice and where to put the UI.
My Options:
Merge the API Gateway and the edge, treat the UI like any other microservice, and forward /ui/** to it. (The drawback was the resource paths: Zuul did not rewrite them to add the /ui prefix, so I thought of making it the default forward.)
Create two separate gateways as in the above diagram.
If option 2 is the optimal solution, should the REST calls from the HTML be sent directly to the API Gateway, or go to the edge and from there to the API Gateway?
You may end up having different levels of security for the two... so separate gateways might be better.
I would send the requests directly to the API gateway and get rid of the extra hop.

Forward HTTP RESTful API requests from an HTTP server to my application

I have a question about the design of an application I'm working on.
I made a monolithic java application with sockets open 24/7, something like a game server. I'm just trying to say it's a single jar application instead of a modular servlet/page based web application.
I would now like to add a RESTful API to this application, so people/clients can make HTTP requests to it to obtain certain info. Because of the monolithic nature of my Java application I'm unsure how to implement this. One other important thing: I'm expecting multiple requests per second, so it would be nice if I could have an existing HTTP server handle the requests, somehow forward them to my app to build a reply, and have the HTTP server send it back.
Some things I have thought of:
wrap my application in a tomcat application, although I'm not sure if tomcat can run an application continuously instead of mapping to servlets on request.
open a socket and parse incoming HTTP requests myself (or there is probably a lib for that?). I fear this will have an impact on performance, and would rather use existing HTTP servers because they are optimized for high traffic.
use an existing HTTP server to handle the requests (Apache, lighttpd, ...) and have it forward requests to my app via things like SCGI, or use a server that can forward via XML-RPC. Are there any other technologies/protocols to do this?
Any advice on how to handle this?
Thanks!
I'd decouple your RESTful service endpoint as much as possible from your original application. This allows you to scale (add multiple servers for your REST endpoint), but also to change your original application without having to change your REST API directly.
Clients <== REST (HTTP) ==> RESTful endpoint <== legacy (sockets) ==> Legacy backend
So your REST server is on the one hand a service provider for your clients, but at the same time it is also a client of your original backend.
I would design the RESTful API and then pick one of the existing REST frameworks for Java, like Restlet, and implement the REST service itself. At the same time you can start implementing a gateway between the REST server and your original backend, by using sockets.
Pay attention to scalability and performance (i.e. you may want to use connection pools for the REST <=> backend bridge rather than spawning a socket per incoming API request) and also think about the possible advantages of HTTP. You might benefit from caching, etc., as far as your backend application logic allows.
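A minimal sketch of such a REST <=> socket bridge with a tiny connection pool (the legacy-backend host, port 9000, pool size, and line-based protocol are all assumptions; a production pool would also handle broken or stale connections):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// REST endpoint that forwards requests to the legacy socket backend over a
// small pool of persistent connections instead of opening one per request.
@RestController
public class PlayerStatusController {

    private final BlockingQueue<Socket> pool = new ArrayBlockingQueue<>(4);

    public PlayerStatusController() throws IOException {
        for (int i = 0; i < 4; i++) {
            pool.add(new Socket("legacy-backend", 9000)); // assumed host/port
        }
    }

    @GetMapping("/players/{id}/status")
    public String playerStatus(@PathVariable String id) throws Exception {
        Socket socket = pool.take();             // borrow a pooled connection
        try {
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            out.println("STATUS " + id);          // assumed line-based protocol
            return in.readLine();                 // backend replies with one line
        } finally {
            pool.put(socket);                     // return the connection
        }
    }
}
```

The REST layer stays stateless and can be scaled horizontally, while the pool keeps the number of sockets to the legacy backend bounded.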