Timeout between request retries with Apache HttpClient

Could somebody share how to configure a modern HttpClient (4.5.3) to retry failed requests and wait for some time before each retry?
So far, if I understand correctly, .setRetryHandler(new DefaultHttpRequestRetryHandler(X, false)) will retry requests X times.
But I cannot figure out how to configure a backoff: according to the JavaDocs, .setConnectionBackoffStrategy() / .setBackoffManager() regulate something else, not the delay between retries.

Regarding a dynamic delay, I would suggest this:
CloseableHttpClient client = HttpClientBuilder.create()
    .setRetryHandler(new HttpRequestRetryHandler() {
        @Override
        public boolean retryRequest(IOException exception, int executionCount, HttpContext context) {
            return executionCount <= maxRetries;
        }
    })
    .setServiceUnavailableRetryStrategy(new ServiceUnavailableRetryStrategy() {
        int waitPeriod = 100;

        @Override
        public boolean retryRequest(HttpResponse response, int executionCount, HttpContext context) {
            waitPeriod *= 2;
            return executionCount <= maxRetries &&
                   response.getStatusLine().getStatusCode() >= 500; // important!
        }

        @Override
        public long getRetryInterval() {
            return waitPeriod;
        }
    })
    .build();
Appendix:
Please note that ServiceUnavailableRetryStrategy.retryRequest will NOT be called if there was an I/O error such as a timeout, a closed port, or a dropped connection. In such cases only HttpRequestRetryHandler.retryRequest is called, and the retry happens either immediately or after a fixed delay (I could not conclusively clarify which). So oleg's answer is actually the right one: there is no way to do it with the support of HttpClient 4.5.
(I would actually call this a design flaw, as delayed retries after an I/O error are vitally important in a modern microservice environment.)

The BackoffManager / ConnectionBackoffStrategy combo can be used to dynamically increase or decrease the max-connections-per-route limit based on the rate of I/O errors and 503 responses. They have no influence on request execution and cannot be used to control request re-execution.
This is the best one can do with the HC 4.x APIs:
CloseableHttpClient client = HttpClientBuilder.create()
    .setRetryHandler(new HttpRequestRetryHandler() {
        @Override
        public boolean retryRequest(IOException exception, int executionCount, HttpContext context) {
            return executionCount <= maxRetries &&
                   exception instanceof SocketException;
        }
    })
    .setServiceUnavailableRetryStrategy(new ServiceUnavailableRetryStrategy() {
        @Override
        public boolean retryRequest(HttpResponse response, int executionCount, HttpContext context) {
            return executionCount <= maxRetries &&
                   response.getStatusLine().getStatusCode() == HttpStatus.SC_SERVICE_UNAVAILABLE;
        }

        @Override
        public long getRetryInterval() {
            return 100;
        }
    })
    .build();
Please note there is presently no elegant way of enforcing a delay between request execution attempts in case of an I/O error, or of dynamically adjusting the retry interval based on the request route.
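If a delay after an I/O error is required anyway, one workaround is to put the retry loop around the request execution itself rather than inside the client. A minimal sketch (maxRetries and retryDelayMs are illustrative names, not HttpClient settings):

CloseableHttpResponse executeWithRetry(CloseableHttpClient client, HttpUriRequest request)
        throws IOException, InterruptedException {
    int maxRetries = 3;
    long retryDelayMs = 1000;
    IOException lastError = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
        try {
            return client.execute(request);
        } catch (IOException e) {
            lastError = e;
            if (attempt < maxRetries) {
                Thread.sleep(retryDelayMs); // wait before the next attempt
            }
        }
    }
    throw lastError;
}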

You can use a lambda:
client.setRetryHandler((e, execCount, httpContext) -> {
    if (execCount > tries) {
        return false;
    } else {
        try {
            Thread.sleep(recalMillis);
        } catch (InterruptedException ex) {
            // ignore
        }
        return true;
    }
});
Notice that the handler only works for IOException types.
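For context, a minimal sketch of how such a lambda plugs into HttpClientBuilder (tries and recalMillis are illustrative values taken from the snippet above, not HttpClient settings; they must be effectively final to be captured by the lambda):

int tries = 3;
long recalMillis = 500;

CloseableHttpClient httpClient = HttpClientBuilder.create()
    .setRetryHandler((e, execCount, httpContext) -> {
        if (execCount > tries) {
            return false;
        }
        try {
            Thread.sleep(recalMillis); // fixed delay before retrying after an IOException
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
        return true;
    })
    .build();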

This worked for us. It does both retries (3) and a delay (1000 ms). We override two things: HttpResponseInterceptor.process() and HttpRequestRetryHandler.retryRequest(). The first throws an exception for invalid (400+) HTTP codes, which then reaches the retryRequest implementation. NOTE: if all retries have been exhausted, you will end up in the final catch at the bottom of this snippet.
// Define an HttpClient with the following features:
// 1) Intercept the HTTP response to detect any 400+ HTTP codes (process() implementation), and if so,
// 2) Force a retry with a delay (HttpRequestRetryHandler.retryRequest() implementation)
final CloseableHttpClient httpClient = HttpClients.custom()
    .addInterceptorLast(new HttpResponseInterceptor() {
        @Override
        public void process(HttpResponse response, HttpContext context) throws HttpException, IOException {
            if (response.getStatusLine().getStatusCode() >= 400) {
                // Throw an IOException to force a retry via HttpRequestRetryHandler.retryRequest()
                throw new IOException("Invalid code returned: " + response.getStatusLine().getStatusCode());
            }
        }
    })
    .setRetryHandler(new HttpRequestRetryHandler() {
        @Override
        public boolean retryRequest(IOException exception, int executionCount, HttpContext context) {
            if (executionCount > MAX_RETRIES) { // MAX_RETRIES = 3
                return false;
            } else {
                try {
                    // Sleep before retrying
                    Thread.sleep(DELAY); // DELAY = 1000 ms
                } catch (InterruptedException ex) {
                    // ... Log or silently swallow
                }
                return true;
            }
        }
    })
    .build();

final HttpGet getOp = new HttpGet("http://yoururl.com/api/123/");
try {
    return httpClient.execute(getOp, new ResponseHandler<String>() {
        @Override
        public String handleResponse(final HttpResponse response) throws ClientProtocolException, IOException {
            // ... Process response after preliminary HTTP code verification
        }
    });
} catch (IOException ioe) {
    // NOTE: Comes here if all retries have failed; throw the error back to the caller
    log.error("All retries have been exhausted");
    throw ioe;
}

Related

Can't handle bad request using doOnError WebFlux

I want to send a DTO object to the server. The server uses the @Valid annotation, and when it receives an invalid DTO it should return the validation errors along with something like HttpStatus.BAD_REQUEST, but when the server returns HttpStatus.BAD_REQUEST, doOnError just ignores it.
POST request from the client:
BookDTO bookDTO = BookDTO.builder()
        .author(authorTf.getText())
        .title(titleTf.getText())
        .publishDate(LocalDate.parse(publishDateDp.getValue().toString()))
        .owner(userAuthRepository.getUser().getLogin())
        .fileData(file.readAllBytes())
        .build();

webClient.post()
        .uri(bookAdd)
        .contentType(MediaType.APPLICATION_JSON)
        .bodyValue(bookDTO)
        .retrieve()
        .bodyToMono(Void.class)
        .doOnError(exception -> log.error("Error on server - [{}]", exception.getMessage()))
        .onErrorResume(WebClientResponseException.class, throwable -> {
            if (throwable.getStatusCode() == HttpStatus.BAD_REQUEST) {
                // My log doesn't contain this error, but the server still has errors from bindingResult
                log.error("BAD_REQUEST!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!");
                return Mono.empty();
            }
            return Mono.error(throwable);
        })
        .block();
Server part:
@PostMapping(value = "/add", consumes = {MediaType.APPLICATION_JSON_VALUE})
public HttpStatus savingBook(@RequestBody @Valid BookDTO bookDTO, BindingResult bindingResult) {
    List<FieldError> errors = bindingResult.getFieldErrors();
    if (bindingResult.hasErrors()) {
        for (FieldError error : errors) {
            log.info("Client post uncorrected data [{}]", error.getDefaultMessage());
        }
        return HttpStatus.BAD_REQUEST;
    } else {
        libraryService.addingBookToDB(bookDTO);
    }
    return null;
}
doOnError is a so-called side-effect operator that can be used for instrumentation (e.g. logging the error) before the onError signal is propagated downstream.
To handle errors you can use onErrorResume. For example, the following code handles WebClientResponseException and returns Mono.empty() instead.
...
.retrieve()
.doOnError(ex -> log.error("Error on server: {}", ex.getMessage()))
.onErrorResume(WebClientResponseException.class, ex -> {
if (ex.getStatusCode() == HttpStatus.BAD_REQUEST) {
return Mono.empty();
}
return Mono.error(ex);
})
...
Alternatively, as @Toerktumlare mentioned in his comment, if you want to handle HTTP statuses, you can use the onStatus method of the WebClient:
...
.retrieve()
.onStatus(HttpStatus.BAD_REQUEST::equals, res -> Mono.empty())
...
Update
While working with block, it's important to understand how reactive signals are transformed:
onNext(T) -> T in case of Mono, and List<T> in case of Flux
onError -> an exception
onComplete -> null, in case the flow completes without onNext
Here is a full example using WireMock for tests:
class WebClientErrorHandlingTest {
    private WireMockServer wireMockServer;

    @BeforeEach
    void init() {
        wireMockServer = new WireMockServer(wireMockConfig().dynamicPort());
        wireMockServer.start();
        WireMock.configureFor(wireMockServer.port());
    }

    @Test
    void test() {
        stubFor(post("/test")
                .willReturn(aResponse()
                        .withHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
                        .withStatus(400)
                )
        );

        WebClient webClient = WebClient.create("http://localhost:" + wireMockServer.port());

        Mono<Void> request = webClient.post()
                .uri("/test")
                .retrieve()
                .bodyToMono(Void.class)
                .doOnError(e -> log.error("Error on server - [{}]", e.getMessage()))
                .onErrorResume(WebClientResponseException.class, e -> {
                    if (e.getStatusCode() == HttpStatus.BAD_REQUEST) {
                        log.info("Ignoring error: {}", e.getMessage());
                        return Mono.empty();
                    }
                    return Mono.error(e);
                });

        Void response = request.block();

        assertNull(response);
    }
}
The response is null because we only had a complete signal (Mono.empty()), which block transformed into null.

Stop processing kafka messages if something goes wrong during process

In my Processor API implementation I store the messages in a key-value store, and every 100 messages I make a POST request. If something fails while trying to send the messages (the API is not responding, etc.), I want to stop processing messages until there is evidence that the API calls work again.
Here is my code:
public class BulkProcessor implements Processor<byte[], UserEvent> {
    private KeyValueStore<Integer, ArrayList<UserEvent>> keyValueStore;
    private BulkAPIClient bulkClient;
    private String storeName;
    private ProcessorContext context;
    private int count;

    @Autowired
    public BulkProcessor(String storeName, BulkClient bulkClient) {
        this.storeName = storeName;
        this.bulkClient = bulkClient;
    }

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
        keyValueStore = (KeyValueStore<Integer, ArrayList<UserEvent>>) context.getStateStore(storeName);
        count = 0;
        // check every 15 minutes whether there are any remainders in the store that have not been sent yet
        this.context.schedule(Duration.ofMinutes(15), PunctuationType.WALL_CLOCK_TIME, (timestamp) -> {
            if (count > 0) {
                sendEntriesFromStore();
            }
        });
    }

    @Override
    public void process(byte[] key, UserEvent value) {
        int userGroupId = Integer.valueOf(value.getUserGroupId());
        ArrayList<UserEvent> userEventArrayList = keyValueStore.get(userGroupId);
        if (userEventArrayList == null) {
            userEventArrayList = new ArrayList<>();
        }
        userEventArrayList.add(value);
        keyValueStore.put(userGroupId, userEventArrayList);
        if (count == 100) {
            sendEntriesFromStore();
        }
    }

    private void sendEntriesFromStore() {
        KeyValueIterator<Integer, ArrayList<UserEvent>> iterator = keyValueStore.all();
        while (iterator.hasNext()) {
            KeyValue<Integer, ArrayList<UserEvent>> entry = iterator.next();
            BulkRequest bulkRequest = new BulkRequest(entry.key, entry.value);
            if (bulkRequest.getLocation() != null) {
                URI url = bulkClient.buildURIPath(bulkRequest);
                try {
                    bulkClient.postRequestBulkApi(url, bulkRequest);
                    keyValueStore.delete(entry.key);
                } catch (BulkApiException e) {
                    logger.warn(e.getMessage(), e.fillInStackTrace());
                }
            }
        }
        iterator.close();
        count = 0;
    }

    @Override
    public void close() {
    }
}
Currently, if a call to the API fails, my code will still iterate over the next 100 messages (and this keeps happening as long as the API keeps failing) and add them to the keyValueStore. I don't want this to happen. Instead I would prefer to stop the stream and continue only once the keyValueStore has been emptied. Is that possible?
Could I throw a StreamsException?
try {
    bulkClient.postRequestBulkApi(url, bulkRequest);
    keyValueStore.delete(entry.key);
} catch (BulkApiException e) {
    throw new StreamsException(e);
}
Would that kill my stream app and so the process dies?
You should only delete the record from the state store after you have made sure the record was successfully processed by the API, so remove any keyValueStore.delete(entry.key) that runs before the POST and keep only the one that runs after it succeeds. Otherwise you can potentially lose messages when keyValueStore.delete is committed to the underlying changelog topic but your messages have not been processed successfully yet, giving you only an at-most-once guarantee.
Just wrap the API-calling code in an infinite loop and keep trying until the record is successfully processed; your processor will not consume new messages from the upstream processor node because it runs on the same StreamThread:
private void sendEntriesFromStore() {
    KeyValueIterator<Integer, ArrayList<UserEvent>> iterator = keyValueStore.all();
    while (iterator.hasNext()) {
        KeyValue<Integer, ArrayList<UserEvent>> entry = iterator.next();
        // remove any state store delete code here: keyValueStore.delete(entry.key);
        BulkRequest bulkRequest = new BulkRequest(entry.key, entry.value);
        if (bulkRequest.getLocation() != null) {
            URI url = bulkClient.buildURIPath(bulkRequest);
            while (true) {
                try {
                    bulkClient.postRequestBulkApi(url, bulkRequest);
                    keyValueStore.delete(entry.key); // only delete after the message is successfully processed, for an at-least-once guarantee
                    break;
                } catch (BulkApiException e) {
                    logger.warn(e.getMessage(), e.fillInStackTrace());
                }
            }
        }
    }
    iterator.close();
    count = 0;
}
Yes, you could throw a StreamsException; the StreamTask will then be migrated to another StreamThread during the rebalance, possibly on the same application instance. If the API keeps causing exceptions until all StreamThreads have died, your application will not exit automatically and you will see the exception below. You should add a custom uncaught-exception handler that exits your app when all stream threads have died, using KafkaStreams#setUncaughtExceptionHandler, or listen for the stream state changing to ERROR:
All stream threads have died. The instance will be in error state and should be closed.
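A minimal sketch of what that could look like, assuming a KafkaStreams instance built elsewhere from a topology and Properties, and an SLF4J-style logger (these names are not from the original post):

KafkaStreams streams = new KafkaStreams(topology, props);

// Option 1: react to uncaught exceptions on individual stream threads
streams.setUncaughtExceptionHandler((thread, throwable) ->
        logger.error("Stream thread {} died", thread.getName(), throwable));

// Option 2: watch for the client transitioning to the ERROR state and shut down
streams.setStateListener((newState, oldState) -> {
    if (newState == KafkaStreams.State.ERROR) {
        logger.error("All stream threads have died; shutting down");
        System.exit(1);
    }
});

streams.start();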
In the end I used a simple KafkaConsumer instead of KafkaStreams, but the bottom line was that I changed BulkApiException to extend RuntimeException, which I re-throw after logging it. So now it looks as follows:
} catch (BulkApiException bae) {
    logger.error(bae.getMessage(), bae.fillInStackTrace());
    throw new BulkApiException();
} finally {
    consumer.close();
    int exitCode = SpringApplication.exit(ctx, () -> 1);
    System.exit(exitCode);
}
This way the application exits and Kubernetes restarts the pod. The reasoning is that if the API I'm forwarding the requests to is down, there is no point in continuing to read messages, so until the other API is back up, Kubernetes will keep restarting the pod.
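For context, a rough sketch of how that catch/finally block could sit inside a plain KafkaConsumer poll loop (the topic name, forwardToBulkApi helper, and ctx Spring ApplicationContext are illustrative, not taken from the original code):

try {
    consumer.subscribe(Collections.singletonList("user-events"));
    while (true) {
        ConsumerRecords<byte[], UserEvent> records = consumer.poll(Duration.ofMillis(500));
        for (ConsumerRecord<byte[], UserEvent> record : records) {
            forwardToBulkApi(record.value()); // may throw BulkApiException (now a RuntimeException)
        }
        consumer.commitSync();
    }
} catch (BulkApiException bae) {
    logger.error(bae.getMessage(), bae.fillInStackTrace());
    throw bae; // re-throw after logging, as described above
} finally {
    consumer.close();
    int exitCode = SpringApplication.exit(ctx, () -> 1);
    System.exit(exitCode);
}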

Vertx delay when call many request to api

This is my code. It seems to execute only one request at a time.
public class RestFulService extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        Router router = Router.router(vertx);

        router.get("/test/hello/:input").handler(new Handler<RoutingContext>() {
            @Override
            public void handle(RoutingContext routingContext) {
                WorkerExecutor executor = vertx.createSharedWorkerExecutor("my-worker-pool", 10, 120000);
                executor.executeBlocking(future -> {
                    try {
                        Thread.sleep(5000);
                        future.complete();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }, false, res -> {
                    System.out.println("The result is: " + res.result());
                    routingContext.response().end("routing1" + res.result());
                    executor.close();
                });
            }
        });
    }
}
When I make 10 requests from the browser at the same time, it takes 50000 ms to finish all of them.
Please guide me on how to fix it.
Try with curl; I suspect your browser is using the same connection for all requests (thus waiting for a response before sending the next request).
By the way, you don't need to call createSharedWorkerExecutor on each request; you can do it once when the verticle starts, for example:
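A rough sketch of that refactoring, keeping the handler logic from the question but creating (and closing) the WorkerExecutor once per verticle; the pool parameters are simply the values from the question, and the HTTP server setup is omitted as in the original snippet:

public class RestFulService extends AbstractVerticle {

    private WorkerExecutor executor;

    @Override
    public void start() throws Exception {
        // Create the worker pool once for the whole verticle instead of per request
        executor = vertx.createSharedWorkerExecutor("my-worker-pool", 10, 120000);

        Router router = Router.router(vertx);
        router.get("/test/hello/:input").handler(routingContext ->
            executor.executeBlocking(future -> {
                try {
                    Thread.sleep(5000); // simulated blocking work
                    future.complete();
                } catch (InterruptedException e) {
                    future.fail(e);
                }
            }, false, res ->
                routingContext.response().end("routing1" + res.result())
            )
        );
    }

    @Override
    public void stop() {
        executor.close();
    }
}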

netty issue when writeAndFlush called from different InboundChannelHandlerAdapter.channelRead

I've got an issue, for which I am unable to post full code (sorry), due to security reasons. The gist of my issue is that I have a ServerBootstrap, created as follows:
bossGroup = new NioEventLoopGroup();
workerGroup = new NioEventLoopGroup();
final ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
    .channel(NioServerSocketChannel.class)
    .childHandler(new ChannelInitializer<SocketChannel>() {
        @Override
        public void initChannel(SocketChannel ch) throws Exception {
            ch.pipeline().addFirst("idleStateHandler", new IdleStateHandler(0, 0, 3000));
            // Adds the MQTT encoder and decoder
            ch.pipeline().addLast("decoder", new MyMessageDecoder());
            ch.pipeline().addLast("encoder", new MyMessageEncoder());
            ch.pipeline().addLast(createMyHandler());
        }
    })
    .option(ChannelOption.SO_BACKLOG, 128)
    .option(ChannelOption.SO_REUSEADDR, true)
    .option(ChannelOption.TCP_NODELAY, true)
    .childOption(ChannelOption.SO_KEEPALIVE, true);

// Bind and start to accept incoming connections.
channelFuture = b.bind(listenAddress, listenPort);
Here createMyHandler() basically returns an extended implementation of ChannelInboundHandlerAdapter.
I also have a "client" listener that listens for incoming connection requests and is loaded as follows:
final String host = getHost();
final int port = getPort();
nioEventLoopGroup = new NioEventLoopGroup();
bootStrap = new Bootstrap();
bootStrap.group(nioEventLoopGroup);
bootStrap.channel(NioSocketChannel.class);
bootStrap.option(ChannelOption.SO_KEEPALIVE, true);
bootStrap.handler(new ChannelInitializer<SocketChannel>() {
    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        ch.pipeline().addFirst("idleStateHandler", new IdleStateHandler(0, 0, getKeepAliveInterval()));
        ch.pipeline().addAfter("idleStateHandler", "idleEventHandler", new MoquetteIdleTimeoutHandler());
        ch.pipeline().addLast("decoder", new MyMessageDecoder());
        ch.pipeline().addLast("encoder", new MyMessageEncoder());
        ch.pipeline().addLast(MyClientHandler.this);
    }
})
.option(ChannelOption.SO_REUSEADDR, true)
.option(ChannelOption.TCP_NODELAY, true);

// Start the client.
try {
    channelFuture = bootStrap.connect(host, port).sync();
} catch (InterruptedException e) {
    throw new MyException("Exception", e);
}
Here MyClientHandler is again a subclassed instance of ChannelInboundHandlerAdapter. Everything works fine: I get messages coming in on the "server" handler, I process them, and I send them back on the same context, and vice versa for the "client" handler.
The problem happens when I have to (for some messages) proxy them from the server or client handler to the other connection. Again, I am very sorry for not being able to post much code, but the gist of it is that I'm calling this from:
serverHandler.channelRead(ChannelHandlerContext ctx, Object msg) {
    if (msg instanceof myProxyingMessage) {
        if (ctx.channel().isActive()) {
            ctx.channel().writeAndFlush(someOtherMessage);
            **getClientHandler().writeAndFlush(myProxyingMessage);**
        }
    }
}
Now here's the problem: the bolded (client) writeAndFlush never actually writes the message bytes, and it doesn't throw any errors. The ChannelFuture returns all false (success, cancelled, done). And if I sync on it, it eventually times out for other reasons (a connection timeout set within my code).
I know I haven't posted all of my code, but I'm hoping that someone has some tips and/or pointers on how to isolate WHY it is not writing to the client context. I'm not a Netty expert by any stretch, and most of this code was written by someone else. Both handlers subclass ChannelInboundHandlerAdapter.
Feel free to ask any questions if you have any.
*****EDIT*********
I tried to proxy the request back to a DIFFERENT context/channel (ie, the client channel) using the following test code:
public void proxyPubRec(int messageId) throws MQTTException {
    logger.log(logLevel, "proxying PUBREC to context: " + debugContext());
    PubRecMessage pubRecMessage = new PubRecMessage();
    pubRecMessage.setMessageID(messageId);
    pubRecMessage.setRemainingLength(2);

    logger.log(logLevel, "pipeline writable flag: " + ctx.pipeline().channel().isWritable());

    MyMQTTEncoder encoder = new MyMQTTEncoder();
    ByteBuf buff = null;
    try {
        buff = encoder.encode(pubRecMessage);
        ctx.channel().writeAndFlush(buff);
    } catch (Throwable t) {
        logger.log(Level.SEVERE, "unable to encode PUBREC");
    } finally {
        if (buff != null) {
            buff.release();
        }
    }
}

public class MyMQTTEncoder extends MQTTEncoder {
    public ByteBuf encode(AbstractMessage msg) {
        PooledByteBufAllocator allocator = new PooledByteBufAllocator();
        ByteBuf buf = allocator.buffer();
        try {
            super.encode(ctx, msg, buf);
        } catch (Throwable t) {
            logger.log(Level.SEVERE, "unable to encode PUBREC, " + t.getMessage());
        }
        return buf;
    }
}
But the line ctx.channel().writeAndFlush(buff) above is NOT writing to the other channel. Any tips/tricks for debugging this sort of issue?
someOtherMessage has to be a ByteBuf.
So, take this:
serverHandler.channelRead(ChannelHandlerContext ctx, Object msg) {
    if (msg instanceof myProxyingMessage) {
        if (ctx.channel().isActive()) {
            ctx.channel().writeAndFlush(someOtherMessage);
            **getClientHandler().writeAndFlush(myProxyingMessage);**
        }
    }
}
...and replace it with this:
serverHandler.channelRead(ChannelHandlerContext ctx, Object msg) {
    if (msg instanceof myProxyingMessage) {
        if (ctx.channel().isActive()) {
            ctx.channel().writeAndFlush(someOtherMessage); // someOtherMessage must be a ByteBuf here
            **getClientHandler().writeAndFlush(myProxyingMessage);**
        }
    }
}
Actually, this turned out to be a threading issue. One of my threads was blocked/waiting while other threads were writing to the context, and because of this the writes were buffered and not sent, even with a flush. Problem solved!
Essentially, I put the first message code in a Runnable/Executor thread, which allowed it to run separately so that the second write/response was able to write to the context. There are still potentially some issues with this (in terms of message ordering), but that is off topic for the original question. Thanks for all your help! A rough sketch of the idea is below.
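A minimal sketch of that workaround, adapted from the placeholder names in the question (MyProxyingMessage, getClientHandler(), and someOtherMessage are those placeholders; the single-thread executor is an illustrative choice):

// Offload the first write to a separate executor so it cannot block the thread
// that needs to perform the second write; Netty channel writes are thread-safe.
private final ExecutorService proxyExecutor = Executors.newSingleThreadExecutor();

@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    if (msg instanceof MyProxyingMessage) {
        if (ctx.channel().isActive()) {
            // First write runs asynchronously so it cannot block the second one
            proxyExecutor.submit(() -> ctx.channel().writeAndFlush(someOtherMessage));
            getClientHandler().writeAndFlush((MyProxyingMessage) msg);
        }
    }
}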

RequestFactory and offline clients

I'm trying to create an application that is able to work even when the network is down.
The idea is to store the data returned from RequestFactory in localStorage, and to use localStorage when the network isn't available.
My problem: I'm not sure exactly how to differentiate between server errors (5xx, 4xx, ...) and network errors.
(I assume that in both cases my Receiver.onFailure() would be called, but I still don't know how to identify this situation.)
Any help would be appreciated.
Thanks,
Gilad.
The response code when there is no internet connection is 0.
With RequestFactory, to identify that the request was unsuccessful because of the network, the response code has to be accessed. The RequestTransport seems like the best place.
Here is a rough implementation of an OfflineAwareRequestTransport:
public class OfflineAwareRequestTransport extends DefaultRequestTransport {

    private final EventBus eventBus;
    private boolean online = true;

    public OfflineAwareRequestTransport(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    @Override
    public void send(final String payload, final TransportReceiver receiver) {
        // super.send(payload, proxy);
        RequestBuilder builder = createRequestBuilder();
        configureRequestBuilder(builder);
        builder.setRequestData(payload);
        builder.setCallback(createRequestCallback(receiver, payload));
        try {
            builder.send();
        } catch (RequestException e) {
        }
    }

    protected static final int SC_OFFLINE = 0;

    protected RequestCallback createRequestCallback(final TransportReceiver receiver,
                                                    final String payload) {
        return new RequestCallback() {

            public void onError(Request request, Throwable exception) {
                receiver.onTransportFailure(new ServerFailure(exception.getMessage()));
            }

            public void onResponseReceived(Request request, Response response) {
                if (Response.SC_OK == response.getStatusCode()) {
                    String text = response.getText();
                    setOnline(true);
                    receiver.onTransportSuccess(text);
                } else if (response.getStatusCode() == SC_OFFLINE) {
                    setOnline(false);
                    boolean processedOk = processPayload(payload);
                    receiver.onTransportFailure(new ServerFailure("You are offline!", OfflineReceiver.name,
                            "", !processedOk));
                } else {
                    setOnline(true);
                    String message = "Server Error " + response.getStatusCode() + " " + response.getText();
                    receiver.onTransportFailure(new ServerFailure(message));
                }
            }
        };
    }