Understanding HTTP 504 Result from Spray Web Service - scala

Given:
client <-- HTTP --> spray web service <-- HTTP --> other web service
The client receives the following HTTP status code and entity body when hitting the spray web service:
504 Gateway Timeout - Empty
Per this answer, my understanding is that the spray web service is not receiving a timely response from the other web service, and thus sending an HTTP 504 to the client.
Assuming that's reasonable, per https://github.com/spray/spray/blob/master/spray-http/src/main/scala/spray/http/StatusCode.scala#L158, I'm guessing that one of these server config values is responsible for the 504 in the spray web service's HTTP response to the client.
What config or implicit would cause the spray web service to reply with a 504 to the client?

I think you are using the default Spray timeouts, and you may need to increase them. If that is the case, there are two values you have to configure:
idle-timeout: how long an idle connection to your server is kept open before it is disconnected (default 60 s).
request-timeout: how long a request from your client (here, from your server to the other service) may go unanswered before it times out (default 20 s).
The first value must always be higher than the second; otherwise the idle-timeout would close connections before the request-timeout could ever trigger, rendering it pointless.
So just override the configuration in your application.conf like this:
spray.can {
  server {
    idle-timeout    = 120 s
    request-timeout = 20 s
  }
}
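If the timeout is actually happening on the outbound call from the spray service to the other web service, the client-side settings may need the same treatment. A sketch under that assumption, using the spray.can.client settings (the values are examples only):
spray.can {
  client {
    idle-timeout    = 120 s
    request-timeout = 60 s
  }
}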

Related

Strange issue with Vertx Http request

I configured an HTTPS website on AWS which only allows visits from a whitelist of IPs.
My local machine runs with a VPN connection whose IP is in the whitelist.
I can visit the website from a web browser or with the java.net.http package using the code below:
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://mywebsite/route"))
        .GET() // GET is the default
        .build();
HttpResponse<Void> response = client.send(request,
        HttpResponse.BodyHandlers.discarding());
But if I replace the code with a Vert.x implementation from the io.vertx.ext.web.client package, I get a 403 Forbidden response from the same website.
WebClientOptions options = new WebClientOptions().setTryUseCompression(true).setTrustAll(true);
HttpRequest<Buffer> request = WebClient.create(vertx, options)
        .getAbs("https://mywebsite/route")
        .ssl(true).putHeaders(headers);
request.send(asyncResult -> {
    if (asyncResult.succeeded()) {
        HttpResponse response = asyncResult.result();
    }
});
Does anyone have an idea why the Vertx implementation is rejected?
Finally got the root cause. I started a local server that accepts the testing request and forwards it to the server on AWS. The testing client sends the request to localhost, so "Host=localhost:8080/..." ends up in the request headers. In the Vert.x implementation, a new header entry "Host=localhost:443/..." is wrongly put into the request headers. I haven't debugged the Vert.x implementation, so I have no idea why it behaves like this, but the AWS firewall then rejected the request with a rule that a request must not come from localhost.
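If anyone else hits the same symptom, a possible workaround (an assumption on my part, not part of the original troubleshooting) is to set the Host header explicitly on the Vert.x request so the upstream sees the intended virtual host. A minimal sketch, reusing the hypothetical URL from the question:
import io.vertx.core.Vertx;
import io.vertx.core.buffer.Buffer;
import io.vertx.ext.web.client.HttpRequest;
import io.vertx.ext.web.client.WebClient;
import io.vertx.ext.web.client.WebClientOptions;

public class HostHeaderWorkaround {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        WebClientOptions options = new WebClientOptions().setTrustAll(true);

        HttpRequest<Buffer> request = WebClient.create(vertx, options)
                .getAbs("https://mywebsite/route") // hypothetical URL from the question
                .ssl(true)
                // set the Host header explicitly instead of relying on what the
                // client derives from the connection target (e.g. localhost:443)
                .putHeader("Host", "mywebsite");

        request.send(asyncResult -> {
            if (asyncResult.succeeded()) {
                System.out.println("status: " + asyncResult.result().statusCode());
            } else {
                asyncResult.cause().printStackTrace();
            }
            vertx.close();
        });
    }
}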

Haproxy agent-check DRAIN(agent) status still accepts new connections

I am trying to set up load balancing for my Node.js application, which uses websockets. I need haproxy to stop sending new connections to a server that has reached its maximum number of connections, while keeping the existing connections intact.
I do this by performing an agent-check for each of my servers. If a server cannot accept new connections, it responds with "drain" to the agent-check; if it can accept a new connection, it responds with "ready".
Here is my haproxy.cfg configuration file:
global
    daemon
    maxconn 240000
    log /dev/log local0 debug
    log /dev/log local1 notice
    tune.ssl.default-dh-param 2048

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option dontlog-normal
    option http-server-close
    option redispatch
    timeout connect 20000ms
    timeout http-request 1m
    timeout client 2100000ms
    timeout server 2100000ms
    timeout queue 30s
    timeout check 5s
    timeout http-keep-alive 180s
    timeout tunnel 3600s
    timeout tarpit 60s

frontend stats
    bind *:8084
    stats enable
    stats uri /stats
    stats refresh 10s
    stats admin if TRUE

frontend test
    mode http
    bind *:5000
    default_backend ws

backend ws
    mode http
    fullconn 100000
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server 1 backend1:9999 check agent-check agent-port 8080 cookie 1 inter 500 fall 1 rise 2
And here is how I respond to haproxy agent-check in my Node.js app:
const healthCheckServer = net.createServer((c) => {
  let data = '';
  if (currentConn < MAX_CONN) {
    data += 'ready';
  } else {
    data += 'drain';
  }
  c.write(data + '\r\n');
  c.destroy();
});
healthCheckServer.listen(8080, '0.0.0.0');
When the number of connections to my application reaches its maximum, haproxy correctly changes the server status to DRAIN (agent) (I can observe this in the haproxy web dashboard). The problem is that new connections are still accepted by the application.
I am new to haproxy, so can someone point out where I am wrong?
Found out that when a server is drained by the agent (status set to DRAIN (agent)) and it is the only server in the backend, it will still accept new connections.
When there are multiple servers present and each of them is drained, the behavior is as expected: haproxy returns HTTP 503.
UPDATE:
Turned out that I was looking in the wrong direction all the time.
First, I had to mark the backend that processes WebSocket connections as non-HTTP (remove the mode http line). My guess is that haproxy counts current sessions incorrectly for an HTTP backend when WebSockets are involved. Removing mode http solved my problem.
Second, returning maxconn:<conn> from the agent-check looks like a much simpler and more idiomatic way to limit the number of concurrent connections.
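The agent-check protocol is plain text over TCP: the agent only has to write a line such as maxconn:100 and close the connection, and HAProxy adjusts that server's maxconn accordingly, queueing connections above the limit instead of sending them on. A minimal sketch of such an agent, written in Java purely for illustration (the port and the limit are examples, not values from the setup above):
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class AgentCheckServer {
    public static void main(String[] args) throws Exception {
        // HAProxy connects here (agent-port 8080 in the backend config above)
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                try (Socket client = server.accept()) {
                    // Tell HAProxy to cap this server at 100 concurrent connections
                    OutputStream out = client.getOutputStream();
                    out.write("maxconn:100\r\n".getBytes(StandardCharsets.US_ASCII));
                    out.flush();
                }
            }
        }
    }
}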
Sources:
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-agent-check
https://www.haproxy.com/blog/websockets-load-balancing-with-haproxy/
I hope this will help someone.

Camel Rest DSL - multicast and early reply to http client

Camel version is 2.18.1
I have the following routes:
1: A rest dsl accepting data from HTTP clients and sending it to the route
below before replying to the client.
rest("/api")
.post("/commands/{command}").to("direct:commands")
2: A multicast to route the command to two endpoints longRunningProces and shortRunningProcessWhichMustSendRespondToHttpClient.
from("direct:commands")
.multicast()
.to("direct:longRunningProces")
.to("direct:shortRunningProcessWhichMustSendRespondToHttpClient");
How can I send the response from the shortRunningProcessWhichMustSendRespondToHttpClient route back to the HTTP client?
Because you use multicast, the main thread waits for the responses from both endpoints.
Use wireTap instead: http://camel.apache.org/wire-tap.html
The route looks like this:
from("direct:commands")
.wireTap("direct:longRunningProces") //<<- seperate thread to process this route
.to("direct:shortRunningProcessWhichMustSendRespondToHttpClient");
Here is the complete code: github
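For completeness, a minimal sketch of how the two routes might sit together in one RouteBuilder; the endpoint names are taken from the question, while the rest component and the bodies of the two sub-routes are placeholders of my own:
import org.apache.camel.builder.RouteBuilder;

public class CommandRoutes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // assumes a servlet-based REST component; adjust to whatever the app really uses
        restConfiguration().component("servlet");

        rest("/api")
            .post("/commands/{command}").to("direct:commands");

        from("direct:commands")
            // fire-and-forget on a separate thread; the HTTP reply does not wait for it
            .wireTap("direct:longRunningProces")
            // the reply of this route becomes the HTTP response to the client
            .to("direct:shortRunningProcessWhichMustSendRespondToHttpClient");

        // placeholder implementations of the two sub-routes
        from("direct:longRunningProces")
            .log("processing command ${header.command} in the background");

        from("direct:shortRunningProcessWhichMustSendRespondToHttpClient")
            .setBody(constant("command accepted"));
    }
}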

Spray.can.server.request-timeout property has no effect

In my src/main/resources/application.conf I include:
spray.can.server {
  request-timeout = 1s
}
To test this, I put a Thread.sleep(10000) in the Future that services my request.
When I issue a request, the server waits 10 seconds and then responds, with no hint of a timeout being sent to the client.
I am not overriding the timeout handler.
Why are my clients (chrome and curl) not receiving a timeout?
The configuration looks correct, so Spray request timeout should be working. One of the frequent reasons for it not working is that your config application.conf is not being used by the application.
The reasons for config being ignored could be that it's in the wrong place, not included in your classpath, or not included in a JAR that you package.
To troubleshoot, first check that the default Spray timeout is working. By default it is 20 s; make your code sleep for 30 s and see whether a timeout is triggered.
Then check what ends up in your final config values by logging them. Set this in your conf:
akka {
  # Log the complete configuration at INFO level when the actor system is started.
  # This is useful when you are uncertain of what configuration is used.
  log-config-on-start = on
}
Finally, keep in mind other timeouts like timeout-timeout = 2 s.
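For reference, a sketch of how request-timeout and timeout-timeout sit together under spray.can.server (the values here are examples only):
spray.can.server {
  request-timeout = 1 s
  # extra window the timeout handler gets to produce the timeout response
  timeout-timeout = 2 s
}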
I think the request-timeout concerns the response to the HTTP client: if the response is not returned within that value, the client will get a timeout from Spray. See the spray doc:
# If a request hasn't been responded to after the time period set here
# a `spray.http.Timedout` message will be sent to the timeout handler.
# Set to `infinite` to completely disable request timeouts.
For example, in my web browser I get the message below:
Ooops! The server was not able to produce a timely response to your request.
Please try again in a short while!
If the timeout is long enough, the browser keeps waiting until a response is returned.

Connection reset by tomcat server on continuous reception of HTTP GET request

I am doing a load test of a web server. Currently I am using Tomcat 6 to test my code. While the test is running, the server resets the connection after a few minutes of receiving continuous GET requests for the same page. If I send GET requests with some gap between them (say 500 ms), it works fine. If I send them with a gap of 10 ms or less, the server resets the connection a few seconds after the start of the test. Please help me figure out how to fix this problem. What is the reason for the reset? Is the server overloaded, or do I have to perform some operation while establishing the connection?
My GET request format is:
GET /index.html HTTP/1.1
Host: 180.168.40.40
Connection: keep-alive
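For context, the request pattern described above can be reproduced with the java.net.http client used earlier on this page; the host, path, and 10 ms gap are taken from the question, and this is only a sketch of the test loop, not a fix:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GetLoadLoop {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://180.168.40.40/index.html")) // host/path from the question
                .GET()
                .build();

        // Fire the same GET repeatedly with a configurable gap; per the question,
        // a 10 ms gap triggers the reset while a 500 ms gap does not.
        long gapMillis = 10;
        for (int i = 0; i < 10_000; i++) {
            HttpResponse<Void> response =
                    client.send(request, HttpResponse.BodyHandlers.discarding());
            System.out.println(i + " -> " + response.statusCode());
            Thread.sleep(gapMillis);
        }
    }
}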