Spring Cloud Circuit Breakers or Hystrix

Hystrix is predominantly meant for applications built using Spring Cloud.
That said, an application could have multiple service layers,
e.g. Amazon (the Amazon site must have multiple services like login, products, carts, orders, payments and so on):
Client (say a web user) -> web application X -> Service A (uses Data Source A) -> Service B (Data Source B) -> Service C (Data Source C) -> Service D (Data Source D) -> Service E (Data Source E)
In this kind of scenario, when something breaks in Service E, how does that get propagated back to the client?
How can Hystrix be useful here for detecting the unavailability of one specific piece of functionality in Service E?
If that example is wrong, is Hystrix's scope limited to multiple processes inside one service rather than multiple services used by one application?
If so, the above example can be tweaked to something like this:
Client (say a web user) -> web application X -> Service A -> inside Service A, say there are processes like process 1 -> process 2 -> process 3 -> process 4 -> process 5,
and any failure in process 5 gets propagated back to process 1 and then back to the client.
My question is more about maintaining thread state here.
With try-catch, the thread's scope is limited per service (please correct me if I'm wrong).
How does Hystrix maintain the state of the thread during this whole transaction?

Hystrix is predominantly meant for applications built using Spring Cloud
Not exactly. Hystrix is generally used to enable circuit-breaker functionality, and it can be used anywhere, even for a simple method call.
For example:
class A {
    B b;

    public void methodA() {
        b.methodB();
    }
}

class B {
    DatabaseConnectionPool pool;

    @HystrixCommand(fallbackMethod = "abcd")
    public void methodB() {
        // Get a db connection from the pool.
        // Call the db and fetch something.
    }

    // Fallback invoked when methodB fails or its circuit is open.
    public void abcd() {
        // Return a default/cached result instead.
    }
}
It can be used even for a simple method call; it doesn't matter what is done inside the code wrapped by Hystrix.
But generally we use Hystrix around pieces of code that could throw exceptions for unknown reasons (especially when calling different applications).
In this kind of scenario, when something breaks in Service E, how does that get propagated back to the client? How can Hystrix be useful here for detecting the unavailability of one specific piece of functionality in Service E?
If you wrap each call, i.e. Service A --> Service B, Service B --> Service C, and so on, with Hystrix, then each call is treated as a circuit, and you can visualize the state (closed, open, half-open) of each circuit using the Hystrix dashboard.
Let's assume the call from Service B --> Service C fails (throws an exception). Hystrix then wraps the original exception in a Hystrix exception and throws it back. If you have a fallback method, execution goes to the fallback method in Service B, and the value specified by the fallback is returned. If you don't have a fallback, the exception is thrown higher up the chain, and the same thing repeats higher up the chain.
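For example, a minimal sketch of a fallback in Service B guarding its call to Service C (the class, URL, and method names here are illustrative assumptions, using Spring's RestTemplate and the javanica @HystrixCommand annotation):

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
class ServiceBClient {
    private final RestTemplate restTemplate = new RestTemplate();

    // Remote call to Service C; failures count toward opening the circuit.
    @HystrixCommand(fallbackMethod = "getFromCache")
    public String callServiceC(String id) {
        return restTemplate.getForObject("http://service-c/items/" + id, String.class);
    }

    // Invoked when callServiceC throws or the circuit is open; callers of
    // Service B never see the wrapped exception.
    public String getFromCache(String id) {
        return "cached-" + id;
    }
}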
How does Hystrix maintain the state of the thread during this whole transaction?
For each call wrapped with Hystrix, Hystrix maintains a thread pool, which you can configure completely.
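For instance, a sketch of per-command thread-pool configuration with javanica (the pool name, sizes, and method are illustrative; coreSize and maxQueueSize are standard Hystrix thread-pool properties):

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixProperty;

class ServiceCInvoker {
    // Hystrix runs this method on a thread from the dedicated "serviceCPool"
    // pool, isolating the caller's thread from slow or failing calls.
    @HystrixCommand(
            fallbackMethod = "fallback",
            threadPoolKey = "serviceCPool",
            threadPoolProperties = {
                    @HystrixProperty(name = "coreSize", value = "10"),
                    @HystrixProperty(name = "maxQueueSize", value = "20")
            })
    public String callServiceC(String id) {
        return "result-from-service-c-" + id; // the remote call would go here
    }

    public String fallback(String id) {
        return "fallback-" + id;
    }
}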
If I already have the existing Java feature of try-catch, why would someone go for Hystrix explicitly?
Hystrix provides much more functionality; you can't even compare it with try-catch. I suggest you read about the Circuit Breaker pattern.
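To see the difference, here is a toy circuit breaker in plain Java (a sketch of the pattern only, not Hystrix's actual implementation). Unlike a bare try-catch, after enough consecutive failures it stops calling the dependency altogether and fails fast until a retry window elapses:

class ToyCircuitBreaker {
    private static final int FAILURE_THRESHOLD = 5;
    private static final long RETRY_AFTER_MS = 10_000;

    private int consecutiveFailures = 0;
    private long openedAt = 0;

    public String call(java.util.function.Supplier<String> remote, String fallback) {
        if (isOpen()) {
            return fallback; // fail fast; no thread is wasted on a dead dependency
        }
        try {
            String result = remote.get();
            consecutiveFailures = 0; // success closes the circuit
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= FAILURE_THRESHOLD) {
                openedAt = System.currentTimeMillis(); // trip the circuit
            }
            return fallback;
        }
    }

    // Open while the failure threshold is exceeded and the retry window has
    // not elapsed; afterwards one trial call is let through (half-open).
    private boolean isOpen() {
        return consecutiveFailures >= FAILURE_THRESHOLD
                && System.currentTimeMillis() - openedAt < RETRY_AFTER_MS;
    }
}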

Related

Dispatch a blocking service in a Reactive REST GET endpoint with Quarkus/Mutiny

Lately I've implemented a reactive REST GET endpoint with Quarkus/Mutiny using a callback structure;
Connect MyRequestService to Reactive REST GET endpoint with Quarkus/Mutiny
After finishing, I was wondering how this works out with a call to a blocking service:
How do I call a blocking service from my reactive REST GET endpoint with Quarkus/Mutiny?
I didn't see a quick answer in the documentation, but it turned out to be quite simple:
The ServiceResource just forwards the call to the Service.
MyRequestService creates a MyJsonResultSupplier and hands it to a Mutiny Uni via the item() method. The resulting Uni is returned to the ServiceResource.
Mutiny calls get() on the Supplier to obtain a MyJsonResult. The call blocks on an acquire of the semaphore mMyJsonResultSupplierSemaphore. Later, another worker thread calls ready(), which sets mMyJsonResult and releases the semaphore, unblocking get() and handing the result to Mutiny.
Mutiny completely hides the reactive part of the story, so you can simply block on a method call within a registered supplier.
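A self-contained sketch of that pattern (the shape of MyJsonResult and the asUni() helper are assumptions; the supplier and field names follow the description above):

import io.smallrye.mutiny.Uni;
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

class MyJsonResult { String json; }

class MyJsonResultSupplier implements Supplier<MyJsonResult> {
    private final Semaphore mMyJsonResultSupplierSemaphore = new Semaphore(0);
    private volatile MyJsonResult mMyJsonResult;

    // Mutiny calls get() on a worker thread; it blocks until ready() runs.
    @Override
    public MyJsonResult get() {
        mMyJsonResultSupplierSemaphore.acquireUninterruptibly();
        return mMyJsonResult;
    }

    // Called from another worker thread once the result is available.
    public void ready(MyJsonResult result) {
        mMyJsonResult = result;
        mMyJsonResultSupplierSemaphore.release();
    }

    // The endpoint returns this Uni; the blocking get() is hidden inside it.
    public Uni<MyJsonResult> asUni() {
        return Uni.createFrom().item(this);
    }
}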

Dynamic binding of services in Thrift or gRPC / passing service as argument

I work with an existing system that does a lot of dynamic service registration using Android HIDL/AIDL, for example:
Multiple objects implement:
IHandler { Response handleRequest(Id subset, Request r); }
One object implements:
class IHandlerCoordinator {
    Response handleRequest(Id subset, Request r);
    void RegisterHandler(std::vector<Id> subsets, IHandler handler_for_subset_ids);
}
Multiple objects register with the IHandlerCoordinator at startup or dynamically (passing the expected subset of what they can handle), and the IHandlerCoordinator then dispatches incoming requests to them.
In xIDL this requires passing services as arguments; how can that be emulated in Thrift / gRPC?
With regard to Thrift: there is no such thing as callbacks yet. There have been some discussions around the topic (see the mailing list archives and/or JIRA), but there is no implementation. The biggest challenge is doing it in a transport-agnostic way, so the current consensus is that you have to implement it manually if you need it.
Technically, there are two general ways to do it (the second is sketched below):
implement a server instance on the client side as well, which receives the callbacks
integrate long-running calls or a polling mechanism to actively retrieve "callback" data from the server by means of client calls
With gRPC it's easier because gRPC is built on HTTP/2. Thrift has been open from the beginning to any kind of transport you can imagine.
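A minimal sketch of the second option (polling) in Java. Note that CoordinatorClient and the Request/Response types below are hypothetical stand-ins for your generated Thrift/gRPC stubs, not a real API:

import java.util.List;

class Request { long id; String subset; byte[] payload; }
class Response { byte[] payload; }

interface CoordinatorClient {
    List<Request> poll(String handlerId) throws Exception;                // hypothetical RPC
    void submitResponse(long requestId, Response resp) throws Exception;  // hypothetical RPC
}

class PollingHandler implements Runnable {
    private final CoordinatorClient client;
    private final String handlerId;
    private volatile boolean running = true;

    PollingHandler(CoordinatorClient client, String handlerId) {
        this.client = client;
        this.handlerId = handlerId;
    }

    // Actively pulls "callback" work from the server instead of waiting for a push.
    @Override
    public void run() {
        while (running) {
            try {
                for (Request r : client.poll(handlerId)) {
                    Response resp = handle(r); // your local IHandler logic
                    client.submitResponse(r.id, resp);
                }
                Thread.sleep(1000); // poll interval
            } catch (Exception e) {
                // log, back off, and retry on transport errors
            }
        }
    }

    private Response handle(Request r) {
        return new Response();
    }
}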

Spring Cloud Stream AggregateApplication with local Data Flow Server

I am trying to create a Spring Cloud Stream aggregate application that runs with the Data Flow web server, so the application can be managed via the web UI.
Application runner class:
@SpringBootApplication
public class Runner {
    public static void main(String[] args) {
        new AggregateApplicationBuilder(args).web(true)
                .from(JSONFileSourceApplication.class).args("--fixedDelay=5000")
                .via(ProcessorOne.class)
                .to(LoggingSinkApplication.class).run(args);
    }
}
This works OK. Now I'm trying to add a Data Flow server. I create a class:
@SpringBootApplication
@EnableDataFlowServer
public class WebServer {}
And set it as the parent configuration of the AggregateApplicationBuilder:
...
new AggregateApplicationBuilder(WebServer.class, args).web(true)
...
If I run it, the following exception occurs:
BeanCreationException: Error creating bean with name 'initH2TCPServer' ...
Factory method 'initH2TCPServer' threw exception ... Exception opening port "19092" (port may be in use)
It looks like the AggregateApplicationBuilder process tries to create another H2 server instead of using the one from the parent configuration.
If I replace the @SpringBootApplication annotation with @Configuration in my JSONFileSourceApplication, ProcessorOne and LoggingSinkApplication classes, the stream application starts and the web server starts (http://localhost:9393/dashboard), but I don't see my stream components; all tabs in the web UI are empty.
How do I run a Spring Cloud Stream AggregateApplication with the web UI enabled?
As it stands today, SCDF does not support the concept of aggregate applications.
The primary reason is that SCDF assumes apps have known channel types: input, output, or both (for processors). With AggregateApplicationBuilder, however, there are a variety of ways you can compose the channels, and it becomes blurry in the DSL/UI how to discover and bind to the channels in an automated manner.
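For reference, a stream app that SCDF can bind sticks to the known channel contract. Here is a minimal processor sketch using the classic Spring Cloud Stream annotations (the class name and transformation are illustrative):

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.messaging.handler.annotation.SendTo;

// A processor with the single input/output channel pair SCDF knows how to bind.
@EnableBinding(Processor.class)
class UppercaseProcessor {
    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public String transform(String payload) {
        return payload.toUpperCase();
    }
}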
That said, in the upcoming release:
1) We are planning to introduce the concept of "function chaining". This allows composing "multiple" small functions (e.g., filterNulls, transformToUppercase, splitByHyphen, ...) into a single stream application at runtime. As a developer, you'd focus on developing and testing the functions standalone and register them with SCDF. Once they are available in the registry, you will have new DSL primitives to compose them into a single unit, which SCDF internally chains at runtime.
2) We have plans to elevate the visibility of queues/topics. There will be DSL primitives to interact with them and create data pipelines from them directly. Given this flexibility, composition-like use cases will become easier.

How do you register a listener in Service Fabric (Azure)?

I am writing a reliable actor in Service Fabric (Azure) whose job will be to listen to changes in a Firebase DB and run logic based on those changes. I have it functioning, but not correctly. What I've done so far is write the actor code with a method called MonitorRules(), which listens to Firebase using a C# Firebase client wrapper called FireSharp. MonitorRules() looks like this:
public async Task MonitorRules()
{
    FireSharp.FirebaseClient client = new FireSharp.FirebaseClient(new FireSharp.Config.FirebaseConfig
    {
        AuthSecret = "My5up3rS3cr3tAu7h53cr37",
        BasePath = "https://myapp.firebaseio.com/"
    });
    await client.OnAsync("businessRules",
        added: (sender, args) =>
        {
            ActorEventSource.Current.ActorMessage(this, $"{args.Data} added at {args.Path}");
        },
        changed: (sender, args) =>
        {
            ActorEventSource.Current.ActorMessage(this, $"{args.OldData} changed to {args.Data} at {args.Path}");
        }
    );
}
I then call MonitorRules() after the actor is registered, like so, in the service's Main() method:
fabricRuntime.RegisterActor<RuleMonitor>();
var serviceUri = new Uri("fabric:/MyApp.RuleEngine/RuleMonitorActorService");
var actorId = ActorId.NewId();
var ruleMonitor = ActorProxy.Create<IRuleMonitor>(actorId, serviceUri);
ruleMonitor.MonitorRules();
This "works" in that the service opens a connection to Firebase and responds to data changes. The problem is that since the service is run on three nodes of a five node cluster, it's actually listening three times and processes each message three times. Also, if there is no activity for a while, the service is deactivated and no longer responds to changes in Firebase. All in all, not the right way to set something like this up I'm sure, but I can not find any documentation on how to set up a polling client like this in service fabric. Is there a way to set this up that will adhere to the spirit of azure service fabric?
Yeah, there are a few things to familiarize yourself with here. The first is the Actor lifecycle and garbage collection. Tl;dr: Actors are deactivated if they do not receive a client request (via ActorProxy) or a reminder for some period of time, which is configurable.
Second, Actors have Timers and Reminders that you can use to do periodic work, like polling a database for changes. The difference between a timer and reminder is that a timer doesn't count as "being used" meaning that the actor can still be deactivated which shuts down the timer, but a reminder counts as "being used" and can also re-activate a deactivated actor. The way to think about timers and reminders is that you're doing the polling, rather than waiting for a callback from something else like you have here with FireSharp.
Finally, calling MonitorRules() from Main() is not the best idea. The reason is that Main() is actually the entry point for your actor service host process, which is just an EXE used to host instances of your actors. The only thing that should happen in Main() is registering your actor type, and nothing else. Let's look at what's happening here in more detail:
So you deploy your actor service to a cluster. The first thing that happens is we start the host process on as many nodes as necessary to run the actor service (in your case, that's 3). We enter Main(), where the actor service type gets registered, and at that point, that's all we should do, because once the actor service is registered with the host process, we'll create an instance (or multiple instances or replicas, if it's stateful) of the service, and then the service can start doing its work. For actors, that means the actor service is ready to start activating actors when a client application makes a call using ActorProxy. But with the ActorProxy call you have in Main(), you're basically saying "activate an actor on every node where this host is when the host starts", which is why you're listening three times.
With all that in mind, the first question to ask yourself is whether actors are the right model for you. If you just want a simple place to monitor Firebase with a FireSharp client, it might be easier to use a reliable service instead, because you can put your monitoring in RunAsync, which is started automatically when the service starts, unlike actors, which need a client to activate them.

Does Vert.x have centralized filtering?

I am new to Vert.x.
Does Vert.x have a built-in facility for centralized filters? What I mean are the kind of filters you would use in a J2EE application.
For instance, all pages have to go through an auth filter, or something like that.
Is there a standardized way to achieve this in Vert.x?
I know this question is quite old, but for those still looking for filters in Vert.x 3, the solution is to use a subrouter as a filter:
// Your regular routes
router.route("/").handler((ctx) -> {
    ctx.response().end("...");
});
// Have more routes here...

Router filterRouter = Router.router(vertx);
filterRouter.get().handler((ctx) -> {
    // Do something smart
    // Forward to your actual router
    ctx.next();
});
filterRouter.mountSubRouter("/", router);
Filtering is an implementation of the chain-of-responsibility pattern in the servlet container. Vert.x does not have this concept as such, but with Yoke (or Apex in the new release) you can easily reproduce this behavior.
Take a look at the routing section: https://github.com/vert-x3/vertx-web/blob/master/vertx-web/src/main/asciidoc/index.adoc
HTH,
Carlo
Vert.x is unopinionated about how such things should be handled. But generally speaking, these types of features are typically implemented as "busmods" (i.e. modules/verticles which receive input and produce output over the event bus) in Vert.x 2. In fact, the auth manager module may help you get a better understanding of how this is done:
https://github.com/vert-x/mod-auth-mgr
In Vert.x 3 the module system will be/is gone, but the pattern remains the same. It's possible that some higher-level framework built on Vert.x could support these types of filters, but Vert.x core will not.
I'd also recommend you poke around in Vert.x Apex if you're getting started building web applications on Vert.x:
https://github.com/vert-x3/vertx-apex
Vert.x is more similar to Node.js than to any other Java-based framework.
Vert.x relies on middleware: you define handlers, attach them to routes, and they are called in the order in which they are defined.
For example, let's say you have a user application where you would like to run logging and request verification before the controller is called.
You can do something like the following:
Router router = Router.router(vertx);
router.route("/*").handler(BodyHandler.create()); // Makes the request body available in POST calls
router.route().handler(new Handler<RoutingContext>() {
    @Override
    public void handle(RoutingContext event) {
        // Handle logs
        event.next(); // continue to the next matching handler
    }
});
router.route("/user").handler(new Handler<RoutingContext>() {
    @Override
    public void handle(RoutingContext event) {
        // Handle verification for all APIs starting with /user
        event.next();
    }
});
Here, depending on the route, the relevant set of middleware gets called.
From my POV, this is exactly the opposite of what Vert.x tries to achieve. A verticle, being the core building block of the framework, is supposed to keep functions distributed rather than centralized.
For a multithreaded (clustered) async environment that makes sense, because as soon as you start introducing something "centralized" (and usually synchronous), you would lose the async ability.
One option to implement auth in your case would be to exchange messages with a respective auth verticle over the event bus. In this case you would have to handle the async aspect of such a request.
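A sketch of that approach, assuming Vert.x 3.8+ (where eventBus().request() replaces send() with a reply handler). The "auth.check" address and the Boolean reply contract are hypothetical, and router and vertx are assumed to be in scope inside your verticle:

// Ask a (hypothetical) auth verticle to validate the token before continuing.
router.route("/user/*").handler(ctx -> {
    String token = ctx.request().getHeader("Authorization");
    vertx.eventBus().<Boolean>request("auth.check", token, reply -> {
        if (reply.succeeded() && Boolean.TRUE.equals(reply.result().body())) {
            ctx.next(); // authenticated: continue down the handler chain
        } else {
            ctx.response().setStatusCode(401).end();
        }
    });
});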