Manage Fault Tolerance with Feign, Ribbon and Eureka - spring-cloud

I want to implement a resilient microservices architecture using Feign Client, Ribbon and Eureka, and I have run into an issue. When a target microservice is down, I want to redirect to another instance of that microservice without the user seeing it. For example, I have four instances of microservice B and one instance of A:
The browser client calls A, then A calls B1, but B1 is down => A automatically redirects to B2; B2 is also KO, so A calls B3; B3 is up, so it returns a response to A, and A returns the response to the browser client.
How could I implement this, please?
Thanks in advance.

Basically, Ribbon should already find instances that are alive for you: firstly, Eureka stores and updates information on which instances are alive, and secondly, Ribbon runs health-check requests against the instances. If that is not working correctly for you, you can try customising the polling intervals for Ribbon. If you want a failed request to be retried against a different instance, you can use Spring Cloud Netflix Ribbon with Spring Retry (see the documentation).
Having said that, since Spring Cloud Ribbon is now in maintenance mode and will not be making it into the 2020.0.0 release train, I would definitely not encourage adding it at this point. The available alternative is Spring Cloud LoadBalancer. It supports retrieving the instances that are alive from Service Discovery (either with or without caching and health checks). It does not support retries at this point, but there's an issue for it in the project backlog.
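To make the Feign side concrete, here is a minimal sketch assuming Spring Cloud OpenFeign with Hystrix fallbacks enabled (feign.hystrix.enabled=true); the service name, endpoint and fallback message are made up. Retrying against the next instance, as described above, is then driven by Ribbon properties such as service-b.ribbon.MaxAutoRetriesNextServer once Spring Retry is on the classpath:

```java
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.GetMapping;

// Hypothetical client for microservice B: Ribbon picks a live instance that
// Eureka knows about, and the fallback answers if every attempt fails
// (requires feign.hystrix.enabled=true for the fallback to kick in).
@FeignClient(name = "service-b", fallback = ServiceBFallback.class)
public interface ServiceBClient {

    @GetMapping("/get")
    String get();
}

@Component
class ServiceBFallback implements ServiceBClient {

    @Override
    public String get() {
        // Graceful degradation instead of surfacing the failure to the user.
        return "service B is temporarily unavailable";
    }
}
```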

Related

DDD with Microservices and Multiple inputs via REST and Message Queue

I have an aggregate root with the business logic in a C# project. Also in the solution is a REST Web API project that passes commands/requests to the aggregate root to do work and to handle queries. This is my microservice. Now I want some of my events/commands/requests to come off a message queue. I'm considering this:
Put a console app in the solution to listen for messages from a message queue, then reference the aggregate root project in the console app.
Is it a bad pattern to share "microservice business logic" between two services? Because now I have two "services", an API and a console app, doing the work. I would have to ensure that when the business logic changes, both services are deployed.
Personally I think it is fine to do what I suggest; a good CI/CD pipeline should mitigate that. But are there any other cons I might have missed?
For some background I would suggest watching DDD & Microservices: At Last, Some Boundaries! by Eric Evans.
A bounded context is the microservice. How you surface it is another matter. What you describe seems to be what I actually do quite frequently. I have an Identity & Access open source project that I'm working on (so depending on when you read this it may be in a different state) that demonstrates this structure.
Internal to an organization, one may access the BC either via a service bus or via the web-api. External parties would utilize only the web-api, as messaging should not be exposed.
The web-api either returns data from the query layer or sends commands via the service bus (messaging) to the BC functional endpoint. Depending on the complexity of the system, I may introduce an orchestration concern that interacts with multiple BCs. It is probably a BC in its own right, much along the lines of a reporting BC.
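The question is C#, but the shape translates directly. Here is a minimal Spring sketch (all names are hypothetical, and the JMS listener assumes @EnableJms and a message converter are configured) in which both surfaces delegate to one command handler, so the business logic is written once:

```java
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical command DTO.
class RegisterMember {
    public String name;
}

// The single place that touches the aggregate root; both entry points
// below delegate here.
@Service
class RegisterMemberHandler {
    void handle(RegisterMember command) {
        // load the aggregate, invoke its behaviour, persist
    }
}

// Entry point 1: the web API.
@RestController
class MemberController {
    private final RegisterMemberHandler handler;

    MemberController(RegisterMemberHandler handler) { this.handler = handler; }

    @PostMapping("/members")
    public void register(@RequestBody RegisterMember command) {
        handler.handle(command);
    }
}

// Entry point 2: the message-queue listener (queue name is made up).
@Service
class MemberQueueListener {
    private final RegisterMemberHandler handler;

    MemberQueueListener(RegisterMemberHandler handler) { this.handler = handler; }

    @JmsListener(destination = "member.commands")
    public void onMessage(RegisterMember command) {
        handler.handle(command);
    }
}
```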

Controlling the user experience when doing canary or A/B deployments with Istio

I have an application with multiple services called from a primary application service. I understand the basics of doing canary and A/B deployments; however, all the examples I see show round-robin routing, where each request switches between versions.
What I'd prefer is that once a given user/session is associated with a certain version it stays that way to avoid giving a confusing experience to the user.
How can this be achieved with Kubernetes or Istio/Envoy?
You can do this with Istio using Request Routing (route based on user identity), but I don't know how mature the feature is. It may also be possible to route based on cookies or header values.
We've been grappling with this because we want to deploy test microservices into production and expose them only if the first request contains a "dark release" header.
As mentioned by Jonas, cookies and header values can in theory be used to achieve what you're looking for. It's very easy to achieve if the service that you are canarying is on the edge and your user is accessing it directly.
The problem is, you mention you have multiple services. If you have a chain where the user accesses edge service A, which then makes calls to service B, service C, etc., the headers or cookies will not be propagated from one service to another.
This is the same problem that we hit when trying to do distributed tracing. The Istio documents currently have this FAQ:
https://istio.io/faq/distributed-tracing/#istio-copy-headers
The long and short of that is that you will have to do header propagation manually. Luckily, most of my microservices are built on Spring Boot, and I can achieve header propagation with a simple 5-line class that intercepts all outgoing calls. But it is nonetheless invasive and has to be done everywhere: the antithesis of a service mesh.
It's possible there is a clever way around this, but it's hard to infer from the docs what is possible and what isn't. I've seen a few GitHub issues raised by Istio developers to address this, but every one I've seen has gone stale after initial enthusiasm.
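For reference, the kind of class mentioned above might look like this: a minimal sketch assuming RestTemplate is the HTTP client and that the routing header is named x-dark-release (both assumptions):

```java
import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;

import java.io.IOException;

// Copies the "dark release" header (hypothetical name) from the incoming
// servlet request onto every outgoing RestTemplate call, so Istio can keep
// routing the whole chain to the same version.
public class HeaderPropagationInterceptor implements ClientHttpRequestInterceptor {

    private static final String HEADER = "x-dark-release"; // assumed header name

    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body,
                                        ClientHttpRequestExecution execution) throws IOException {
        ServletRequestAttributes attrs =
                (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
        if (attrs != null) {
            String value = attrs.getRequest().getHeader(HEADER);
            if (value != null) {
                request.getHeaders().set(HEADER, value);
            }
        }
        return execution.execute(request, body);
    }
}
```

It is registered once per service, e.g. restTemplate.getInterceptors().add(new HeaderPropagationInterceptor()); which is exactly the per-service boilerplate the answer complains about.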

Spring Cloud: How to manage requests on Zuul to another services?

Actually, I would like to understand the correct approach for managing requests among several microservices, one of which is Zuul:
I have a Zuul app, which is a proxy in front of my microservices. Zuul is started on port 7777 and declares API routes like /api/service1/get or /api/service2/get. Every service has an echo endpoint, which is available at localhost:7777/api/service1/get and works well.
But those echo endpoints are also available directly from the corresponding services. Thus I can make a request from Postman, let's say, to service1/get/ and service2/get.
As far as I understand, anybody can call those services either through Zuul or directly. So what is the difference, and what is the real value of Zuul in such a case (apart from the fact that Zuul can authorize users, let's say as a proxy microservice)?
So what is the correct approach for using Zuul with microservices?
Your question looks like you are asking two things: what is its purpose, and how to use it. I am going to answer the first one.
Its purpose is to be the service in front of all the other services you have, like the front door to your system.
The rest of the services should be hidden from the outside world, behind the proxy service.
The purpose is to route all the services from one place, so with netflix-zuul you are able to intercept the request, manipulate it, authenticate, route... (see the filter sketch after this list).
You can integrate service discovery (netflix-eureka) so your services will be registered there, and you don't need to deal with the URLs of your services; you can access them by the paths you defined and the registered service IDs.
You can integrate load balancing (netflix-ribbon) across your system.
You can control the interactions between your services by adding latency-tolerance and fault-tolerance logic (netflix-hystrix), so you can provide fallback options when an error occurs.
And so on...
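As an illustration of the interception point, here is a minimal sketch of a Zuul "pre" filter, assuming Spring Cloud Netflix Zuul; the auth check is deliberately naive and only stands in for real authorization logic:

```java
import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Component;

// Hypothetical "pre" filter: rejects requests without an Authorization
// header before Zuul routes them to a backing service.
@Component
public class AuthPreFilter extends ZuulFilter {

    @Override
    public String filterType() { return "pre"; }

    @Override
    public int filterOrder() { return 1; }

    @Override
    public boolean shouldFilter() { return true; }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        if (ctx.getRequest().getHeader("Authorization") == null) {
            ctx.setSendZuulResponse(false); // stop routing to the target service
            ctx.setResponseStatusCode(HttpStatus.UNAUTHORIZED.value());
        }
        return null; // the return value is ignored by Zuul
    }
}
```

This is the value over calling the services directly: in production the services themselves would not be reachable from outside, so every request has to pass through filters like this one.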

Guidance on how to make micro-services communicate effectively

We are embarking on a new project where we will have multiple microservices communicating with each other to provide information in a cloud-native system. Our application will be decomposed into multiple services like Text Cleaner, Entities Extractor, Entities Resolver and Output Converter. As you can see in the diagram, we have some forking where the input to one service is required by another service, and so forth.
Only one service is going to be exposed outside. The others would be internal. And we have to provide a synchronous response to clients.
I wanted to check if someone can guide me here to the best patterns:
1- Should we have one wrapper class which has model classes for all projects, since all of the details are needed in the final Output Converter, or how should the data flow so that it is assembled in the last microservice? We want to keep the systems loosely coupled and are thinking about how to orchestrate this flow without having a middle layer which composes all this data.
2- How to orchestrate this flow? Service mesh / API gateway?
Looks like a workflow-based solution. When so many steps are involved, the only response you can give the consumer is that the request was accepted, and the process starts in the background. You cannot let the consumer wait for very long, because they will get a connection timeout.
If all these services are deployed on different servers (which should be the case, given the microservices definition, for scalability), you can communicate via HTTP or using some messaging solution like JMS, or, if you are deployed on the cloud, providers offer workflow-based services.
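To make the "request accepted" idea concrete, here is a minimal Spring sketch: the endpoint, the names and the pipeline body are all hypothetical, and @EnableAsync must be present on a configuration class for @Async to take effect:

```java
import org.springframework.http.ResponseEntity;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

import java.util.UUID;

@RestController
class PipelineController {

    private final PipelineService pipeline;

    PipelineController(PipelineService pipeline) { this.pipeline = pipeline; }

    // Accept the document, hand it to the background pipeline, and return
    // 202 Accepted immediately with a job id the client can poll later.
    @PostMapping("/documents")
    public ResponseEntity<String> submit(@RequestBody String document) {
        String jobId = UUID.randomUUID().toString();
        pipeline.process(jobId, document); // runs on another thread via @Async
        return ResponseEntity.accepted().body(jobId);
    }
}

@Service
class PipelineService {

    // Hypothetical chain: clean -> extract -> resolve -> convert.
    @Async
    public void process(String jobId, String document) {
        // call Text Cleaner, Entities Extractor, Entities Resolver and
        // Output Converter in turn, then store the result under jobId
    }
}
```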

JavaFX interactivity with Spring MVC Restful

I am building a JavaFX client application communicating with a Spring MVC RESTful server (Spring Boot 1.4.1) application, which works as expected.
Some features require fast interaction with the server to validate limits and availability before proceeding to the next input, for example checking whether an inserted member number is valid and whether it has exceeded its insertion limit, during the accumulation of records (each confirmed record is temporarily stored in a TableView before the records are sent to the server for storage) before the records are actually saved.
Within the JavaFX and Spring Framework (in both frontend and backend) scope, how can such features be made to look more interactive (or live) than the normal "let-me-wait-for-response" approach?
If the question is not clear, just ask; otherwise I think it is.
It appears that the only interaction you have between the client (JavaFX) and the server (Spring Boot) is through a REST API. This will make short bursts of data (such as validation) take longer.
Switching to another communication mechanism (for example gRPC, or Netty with MsgPack) could help. Note that once you open the door for non-REST calls, it'll make you re-think the use of REST in the first place.
Non-REST communication may not be an option depending on your requirements (firewalls, etc.), or it may need additional setup in order to surmount other obstacles; in other words, there's no free lunch.
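Even if you stay with REST, you can make the UI feel live by running the validation calls off the JavaFX Application Thread. A minimal sketch with javafx.concurrent.Task, where the endpoint, memberId and memberStatusLabel are all hypothetical:

```java
import javafx.concurrent.Task;
import javafx.scene.control.Label;
import org.springframework.web.client.RestTemplate;

public class MemberValidation {

    // Runs the REST validation in the background so typing stays responsive;
    // the succeeded/failed handlers run back on the JavaFX Application Thread.
    public static void validateAsync(String memberId, Label memberStatusLabel) {
        Task<Boolean> validate = new Task<Boolean>() {
            @Override
            protected Boolean call() {
                return new RestTemplate().getForObject(
                        "http://localhost:8080/api/members/{id}/valid", // assumed endpoint
                        Boolean.class, memberId);
            }
        };
        validate.setOnSucceeded(e -> memberStatusLabel.setText(
                Boolean.TRUE.equals(validate.getValue()) ? "OK" : "Limit exceeded"));
        validate.setOnFailed(e -> memberStatusLabel.setText("Validation unavailable"));
        new Thread(validate, "member-validation").start();
    }
}
```

Firing this on each confirmed TableView row gives the "live" feel without changing the transport at all.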