Spring Cloud Hystrix - Semaphore timeout and response

In Spring Cloud apps that use Hystrix with the semaphore isolation strategy, the semaphore does not interrupt execution. It detects the timeout only after execution has finished, updates the circuit breaker's statistics, and, more importantly, sends an error response to the client. This is a problem because the request has already completed successfully downstream, yet the client sees an error.
Is it possible to send the actual response received from the downstream to the client and only update circuit breaker statistics?

Related

RabbitMQ - design re-try mechanism

Whenever I'm not able to process events/messages that are in RabbitMQ, I store them in MongoDB for automated and/or manual re-try. The question is how to enable an automated re-try mechanism from MongoDB. How can I effectively listen to MongoDB? Is it good to store failed events/messages in MongoDB upfront? Or should I create an error queue where I can listen for failed events/messages and push them to MongoDB for manual re-try whenever automated re-try fails? Any other suggestions?
My intention is to design an effective re-try mechanism for failed RabbitMQ events/messages.
This is a typical use case for dead-letter exchanges.
Any queue can be associated with a dead-letter exchange. A consumer that is unable to process a message, for whatever reason, can reject it. The message will then be routed to the dead-letter exchange, which works like a regular one, so you can apply any routing policy to dead-lettered messages.
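As a rough sketch, assuming the Python pika client and a local broker with default settings (the exchange, queue, and routing-key names here are illustrative, not prescribed), the wiring looks like this; rejecting with requeue=False is what routes the message to the dead-letter exchange:

```python
# Queue arguments that associate the work queue with a dead-letter
# exchange (names are illustrative).
DLX_ARGS = {
    "x-dead-letter-exchange": "retry.dlx",
    "x-dead-letter-routing-key": "failed.events",
}

def process(body):
    """Placeholder for your real message handler (hypothetical)."""
    raise RuntimeError("processing failed")

def setup_and_consume(channel):
    # The dead-letter exchange is a regular exchange; failed messages
    # land in whatever queues you bind to it.
    channel.exchange_declare(exchange="retry.dlx", exchange_type="direct")
    channel.queue_declare(queue="failed-events", durable=True)
    channel.queue_bind(queue="failed-events", exchange="retry.dlx",
                       routing_key="failed.events")

    # The main work queue is tied to the DLX via its arguments.
    channel.queue_declare(queue="events", durable=True, arguments=DLX_ARGS)

    def on_message(ch, method, properties, body):
        try:
            process(body)
            ch.basic_ack(delivery_tag=method.delivery_tag)
        except Exception:
            # requeue=False dead-letters the message instead of requeuing it.
            ch.basic_nack(delivery_tag=method.delivery_tag, requeue=False)

    channel.basic_consume(queue="events", on_message_callback=on_message)

def main():
    # Not invoked here: requires a running broker and `pip install pika`.
    import pika
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    setup_and_consume(ch)
    ch.start_consuming()
```

From the dead-letter queue you can then drive manual re-try, or push to MongoDB only if that second stage also fails.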

Combining a RabbitMQ consumer and a Mojolicious websocket server

I've got a Mojolicious server inspired by this example, and a RabbitMQ consumer inspired by this example. My plan is to combine them, so that a web client can connect to the Mojolicious server and subscribe to some kind of updates. The Mojo server will check a RabbitMQ queue from time to time, and if there are any updates, it will send the data to the connected websocket clients.
I'm struggling to see how this can be done. Do I put the RabbitMQ code inside the Mojo server, or the other way around? How do I prevent the RabbitMQ consume call from blocking incoming websocket connections? I guess I have to use a timeout on the consume, but then I might have to run it in a loop, which might block the websocket. Or have I misunderstood something? Maybe Mojolicious is not the right library to use?
I'm thinking that the server checks the RabbitMQ queue every 10 seconds while still accepting websocket connections.
Anyone have some ideas on how to solve this? Some pseudo code or anything would be appreciated.
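The usual answer is to keep everything on one event loop: in Mojolicious that means a recurring timer (Mojo::IOLoop->recurring) that does a non-blocking fetch from the queue and writes to subscribed websocket clients, instead of a blocking consume loop. To illustrate the shape of that pattern outside Perl, here is a minimal, hypothetical Python asyncio sketch, where a deque stands in for the RabbitMQ queue being polled:

```python
import asyncio
from collections import deque

pending = deque()     # stand-in for the RabbitMQ queue being polled
subscribers = []      # per-client queues, one per websocket connection
received = []         # what clients saw (for demonstration only)

async def poll_and_broadcast(interval):
    # Recurring, non-blocking poll: drain whatever is available, then
    # yield back to the event loop so websocket traffic is never blocked.
    while True:
        while pending:
            msg = pending.popleft()
            for sub in subscribers:
                sub.put_nowait(msg)
        await asyncio.sleep(interval)

async def client(sub_queue):
    # A subscriber awaits pushed updates; it never blocks the poller.
    while True:
        msg = await sub_queue.get()
        received.append(msg)

async def main():
    sub = asyncio.Queue()
    subscribers.append(sub)
    poller = asyncio.create_task(poll_and_broadcast(0.05))
    reader = asyncio.create_task(client(sub))
    pending.extend(["update-1", "update-2"])  # simulate broker messages
    await asyncio.sleep(0.2)                  # let the loop run a few ticks
    poller.cancel()
    reader.cancel()

asyncio.run(main())
```

The key point carries back to Mojolicious: the timer callback must use a non-blocking fetch (e.g. a single get per tick rather than a blocking consume/recv), so the same loop can keep servicing websocket connections between polls.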

Kafka Custom Authorizer

I use Kafka 0.10.1.1 with a custom authorizer.
From the custom authorizer, I call a microservice for authorization. It works fine for a while, then starts throwing the following exception in the logs, and the whole cluster becomes unresponsive. The exception keeps coming until I restart the cluster. Without the custom authorizer, though, the cluster works fine without any issues, even for months. Is there a bug in Kafka 0.10.1.1, or is something wrong with the custom authorizer?
TRACE [ReplicaFetcherThread-0-39], Issuing to broker 1 of fetch request kafka.server.ReplicaFetcherThread$FetchRequest#8c63320 (kafka.server.ReplicaFetcherThread)
[2017-06-30 08:29:17,473] TRACE [ReplicaFetcherThread-2-1], Issuing to broker 1 of fetch request kafka.server.ReplicaFetcherThread$FetchRequest#67a143a (kafka.server.ReplicaFetcherThread)
[2017-06-30 08:29:17,473] WARN [ReplicaFetcherThread-3-1], Error in fetch kafka.server.ReplicaFetcherThread$FetchRequest#12d29e06 (kafka.server.ReplicaFetcherThread)
java.io.IOException: Connection to <HOST:PORT> (id: 1 rack: null) failed
at kafka.utils.NetworkClientBlockingOps$.awaitReady$1(NetworkClientBlockingOps.scala:83)
at kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:93)
at kafka.server.ReplicaFetcherThread.sendRequest(ReplicaFetcherThread.scala:248)
at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:238)
at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:118)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:103)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
My custom authorizer uses a microservice to check authorization and caches the results in a Guava cache with an expiry time of 10 minutes.
Thanks
I suggest taking a thread dump to see what all the threads are doing.
Just a guess here, given there isn't much info to go on.
If you have a single cache instance, what could be happening is that once the cache expires, all requests start hitting the microservice for authorization info and, since this adds latency, the thread pool gets exhausted. A thread dump can tell you how many threads are calling the microservice simultaneously.
If this is indeed the problem, one option you could consider is using a separate cache per thread (via a thread-local variable). That way each thread's cache expires at its own time and won't cause other threads to hit the microservice at exactly the same moment.
Another, and in my opinion better, way is to remove the blocking calls to the microservice from the authorize code path completely. Instead of a read-through cache, keep the cache always up to date by refreshing it from a separate background thread. That way no latency is ever added to the authorize calls.
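A minimal sketch of that background-refresh idea, in Python with hypothetical names (a Java authorizer would do the same with, e.g., a scheduled executor): the authorize path only ever reads a snapshot, and the daemon thread is the only caller of the slow microservice:

```python
import threading
import time

class BackgroundRefreshCache:
    """Authorization cache refreshed off the request path.

    `fetch_all` stands in for the (slow) call to the authorization
    microservice; authorize() never blocks on it.
    """

    def __init__(self, fetch_all, refresh_interval):
        self._fetch_all = fetch_all
        self._interval = refresh_interval
        self._lock = threading.Lock()
        self._acls = fetch_all()  # initial load before serving traffic
        t = threading.Thread(target=self._refresh_loop, daemon=True)
        t.start()

    def _refresh_loop(self):
        while True:
            time.sleep(self._interval)
            fresh = self._fetch_all()   # only this thread pays the latency
            with self._lock:
                self._acls = fresh      # swap in the new snapshot

    def authorize(self, principal, operation):
        with self._lock:
            acls = self._acls
        return operation in acls.get(principal, set())

# Usage: the fetch function and ACL shape are illustrative.
cache = BackgroundRefreshCache(lambda: {"alice": {"READ"}}, 600)
```

Note the trade-off: authorization data can be stale by up to one refresh interval, which is usually acceptable since the expiring Guava cache already allowed up to 10 minutes of staleness.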

Can I use a Kafka queue in my REST web service

I have a REST-based application deployed on a server (Tomcat).
Every request that reaches the server takes 1 second to serve. Sometimes the server receives more requests than it is capable of serving, which makes it unresponsive. I was thinking that if I could store the requests in a queue, the server could pull requests from it and serve them, handling the peak-load issue.
Can Kafka be helpful for this? If yes, any pointers on where to start?
You can use Kafka (or any other messaging system, e.g. ActiveMQ, RabbitMQ).
When the web service receives a request, add it (with all the details required to process it) to a Kafka topic using a Kafka producer.
A separate service (with a Kafka consumer) reads from the topic (queue) and processes the request.
If you need to notify the client once the request is processed, the server can push the information to the client over a WebSocket (or the client can poll for the request status, though that requires a status endpoint and puts load on it).
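The producer/consumer split above can be sketched as follows; this uses an in-process queue.Queue purely as a stand-in for the Kafka topic (with a real Kafka client, the web tier would produce to the topic and the worker would iterate a consumer), so the decoupling pattern itself is what's shown:

```python
import queue
import threading

requests = queue.Queue()   # stand-in for the Kafka topic
results = {}               # request status store (stand-in for a DB)

def handle_http_request(request_id, payload):
    # The web tier only enqueues and returns immediately (HTTP 202),
    # so it stays responsive regardless of processing speed.
    results[request_id] = "PENDING"
    requests.put((request_id, payload))
    return 202

def worker():
    # The consumer side drains the queue at its own pace.
    while True:
        request_id, payload = requests.get()
        results[request_id] = "DONE:" + payload.upper()
        requests.task_done()

threading.Thread(target=worker, daemon=True).start()

handle_http_request("r1", "hello")
handle_http_request("r2", "world")
requests.join()   # wait until the worker has caught up (demo only)
```

The essential property is that accepting a request and processing it are decoupled: a burst fills the queue instead of exhausting the Tomcat thread pool, and clients learn the outcome later via a status endpoint or a WebSocket push.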
Apache Kafka would be helpful in your case. A Kafka broker lets you absorb a peak of requests: the requests are stored in a queue, as you mentioned, and processed by your server at its own speed.
Since you are using Tomcat, I guess you developed your server in Java. Apache Kafka provides a Java API which is quite easy to use.

HornetQ concurrent session usage warning

With regard to this post, I am using HornetQ 2.4.0 embedded, with the HornetQ core API. I use it to queue messages received via a web service call. Another thread dequeues messages synchronously and processes them. When enqueuing, I sometimes get the "WARN: HQ212051: Invalid concurrent session usage" warning. Does this apply to my embedded usage, where I am not using the JMS API? I'm running in embedded Jetty and don't have a container. If I do need to guarantee single-threaded access, I would have to use my own pool or a thread-local session, correct? (I'd rather not synchronize access to the shared objects, for performance reasons.)
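On the thread-local idea: since a core session is not meant to be used from multiple threads concurrently, giving each thread its own session (or checking sessions out of a pool) avoids the warning without adding locks. A hypothetical sketch of the thread-local pattern, in Python for brevity (a Java version would keep a session factory behind a ThreadLocal in the same way; the factory name here is illustrative):

```python
import threading

class ThreadLocalSession:
    """Hand each thread its own session so none is shared concurrently.

    `session_factory` stands in for something like
    ClientSessionFactory.createSession(); names are illustrative.
    """

    def __init__(self, session_factory):
        self._factory = session_factory
        self._local = threading.local()

    def get(self):
        # Lazily create one session per thread on first use.
        if not hasattr(self._local, "session"):
            self._local.session = self._factory()
        return self._local.session

# Demo: two threads see two distinct sessions.
sessions = ThreadLocalSession(lambda: object())

seen = []
def use():
    seen.append(sessions.get())

t1 = threading.Thread(target=use)
t2 = threading.Thread(target=use)
t1.start(); t2.start()
t1.join(); t2.join()
```

The trade-off versus a pool is that sessions are never shared or reclaimed, so the session count grows with the number of threads; with a bounded request thread pool that is usually fine.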