I'd like health checks to an HAProxy instance to fail based solely on whether HAProxy is running. In other words, I don't want the health check to be proxied to a backend server.
I see there is a way to do this by returning a static file. That would work for me, but I was wondering: is there a way to return an empty response with just a status code, without having to serve a file and deal with the false 503? The linked solution seems hacky, and I worry that the behavior will be disallowed in some later version.
monitor-uri <uri>
Intercept a URI used by external components' monitor requests
May be used in sections :   defaults | frontend | listen | backend
                               yes   |    yes   |   yes  |   no
When an HTTP request referencing <uri> will be received on a frontend,
HAProxy will not forward it nor log it, but instead will return either
"HTTP/1.0 200 OK" or "HTTP/1.0 503 Service unavailable", depending on
failure conditions defined with "monitor fail". This is normally
enough for any front-end HTTP probe to detect that the service is UP
and running without forwarding the request to a backend server. Note
that the HTTP method, the version and all headers are ignored, but the
request must at least be valid at the HTTP level. This keyword may
only be used with an HTTP-mode frontend.
Example :
    # Use /haproxy_test to report haproxy's status
    frontend www
        mode http
        monitor-uri /haproxy_test
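If you also want the monitor endpoint to reflect backend health rather than just "HAProxy is running", "monitor fail" can make the same URI return 503 under a condition you define. A minimal sketch, where the backend name and threshold are illustrative:

    frontend www
        mode http
        monitor-uri /haproxy_test
        # Report 503 to monitors when no server is up in www-backend
        acl site_dead nbsrv(www-backend) lt 1
        monitor fail if site_dead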
I have an app with a backend and a frontend that communicate with each other, developed on WildFly.
When I make a PUT request from the backend to the frontend, these requests do not arrive and return a 500 error. However, POST requests work correctly and cause no problems.
I have checked the configuration of the WildFly server's configuration file (standalone.xml), but I can't figure out where the problem might be.
How can I fix this?
So you say that when the backend sends a PUT request the response has status 500, while a POST request to the same URL returns status 2xx?
That does not smell like a connectivity issue. It seems more likely that your PUT request gets blocked somewhere. Try to find out where that 500 is coming from: it could be a proxy or firewall between client and server, or it could even be that WildFly or the application is not configured to respond to PUT.
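To narrow down where the 500 originates, it can help to probe the same URL with each verb and compare the responses. A minimal sketch in Python; the URL and payload are placeholders for your deployment:

    # Probe the same endpoint with OPTIONS, POST, and PUT and compare
    # status codes and headers; the Server and Allow headers often reveal
    # whether the response came from WildFly, the app, or an intermediary.
    import requests

    url = "http://localhost:8080/app/api/resource/1"  # hypothetical endpoint
    payload = {"name": "test"}

    for method in ("OPTIONS", "POST", "PUT"):
        resp = requests.request(method, url, json=payload)
        print(method, resp.status_code,
              "Server:", resp.headers.get("Server"),
              "Allow:", resp.headers.get("Allow"))

If OPTIONS returns an Allow header that omits PUT, the application itself is rejecting the method rather than anything in between.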
I'm busy building an API, and some parts of it heavily depend on a third party.
When we are unable to connect to the third party, or the connection fails, I simply return a 500 error. However, I was wondering whether it wouldn't make more sense to return a 502 Bad Gateway or a 504 Gateway Timeout?
However, my interpretation is that those could only be relevant for proxies, not for APIs?
In that case I would suggest using 503 Service Unavailable and setting the Retry-After header to specify how long the client should wait before retrying.
When it's a matter of RESTful APIs I always check this super complete guide, which contains the answers to all the questions you could ever imagine.
Service Unavailable - service is (temporarily) not available (e.g. if a required component or downstream service is not available) — client retry may be sensible. If possible, the service should indicate how long the client should wait by setting the Retry-After header.
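Following that guideline, the 503 plus Retry-After combination is straightforward to produce. A minimal sketch using Flask; the framework choice, upstream URL, and retry delay are all illustrative:

    # Return 503 Service Unavailable with a Retry-After header when the
    # third-party dependency cannot be reached.
    from flask import Flask
    import requests

    app = Flask(__name__)

    @app.route("/proxy-data")
    def proxy_data():
        try:
            upstream = requests.get("https://third-party.example.com/data", timeout=5)
            upstream.raise_for_status()
        except requests.RequestException:
            # Suggest the client retry after 120 seconds
            return "Upstream dependency unavailable", 503, {"Retry-After": "120"}
        return upstream.text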
I have an AWS EC2 Jira instance running behind an AWS Classic Load Balancer. The site loads fine in the browser, but all API requests return 404 for some reason. It is not a Jira 404, but a generic 404 response with no body and minimal headers. The only useful response header seems to be Server: nginx.
I've tried whitelisting my client IP, opening all ports, sending requests to the LB and directly to the instance with the proper security group settings, etc., but the same 404 response is returned. I'm using Postman to test the API. I noticed that when I load the EC2 instance directly in the browser, it redirects to the load balancer.
GET http://jira (home page)
Returns 200 with HTML. Basic auth works, too.
GET http://jira/rest/api/2/issue/ticket-num (or any other /rest/ endpoint)
Returns 404.
Where should I start looking to debug this 404 issue? I feel like I'm missing something basic. I don't see any Jira configuration for setting up its REST API. Perhaps it's a server configuration issue, although I've never come across manual web server configuration while installing Jira, so maybe it's on the AWS side?
EDIT: still waiting to get ssh access to the instance, so I'll update as I get more info and access.
These HTTP 404 responses with a very limited set of headers could be coming from the default (bottom) rule in the ELB. I experienced a similar issue and got HTTP 404 because, in one of the ELB rules, I set a path condition instead of a host header condition and put the host domain name in it. So the rule never matched, and the default rule returned 404 because no such path exists on the instance.
I would recommend trying the "Redirect to" or "Return fixed response" options on the default rule to check whether requests are falling through to it.
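If the load balancer is in fact an ALB with listener rules (as this answer assumes), the rules and their conditions can be inspected to see what falls through to the default. A hedged sketch using boto3; the listener ARN is a placeholder:

    # List the listener's rules; the default rule has IsDefault=True and
    # fires only when no other rule's conditions match.
    import boto3

    elbv2 = boto3.client("elbv2")
    rules = elbv2.describe_rules(
        ListenerArn="arn:aws:elasticloadbalancing:region:account:listener/app/jira/..."  # placeholder
    )
    for rule in rules["Rules"]:
        print(rule["Priority"], rule.get("IsDefault"),
              rule["Conditions"], rule["Actions"])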
I'm working on a microservice architecture based on Docker, Registrator, Consul, and HAProxy.
I'm also using consul-template to dynamically generate the HAProxy config file. Everything works fine: when I add multiple instances of the same microservice, the HAProxy configuration is updated immediately and requests are dispatched correctly using a round-robin strategy.
My problem occurs when I remove instances (scale down). If a container is shut down while a request is in flight, I get an error.
I'm new to HAProxy, so is there a way to configure it to retry a failing request against another endpoint if a container disappears?
To be precise: I'm using layer 7 routing (mode http) for my frontends and backends. Here is a small sample of my consul-template file:
backend hello-backend
    balance roundrobin
    mode http
    {{range service "HelloWorld" }}server {{.Node}} {{.Address}}:{{.Port}} check
    {{end}}
    # Path stripping
    reqrep ^([^\ ]*)\ /hello/(.*) \1\ /\2

frontend http
    bind *:8080
    mode http
    acl url_hello path_beg /hello
    use_backend hello-backend if url_hello
Thank you for your help.
It isn't possible for HAProxy to resend a request that has already been sent to a backend.
Here's a forum post from Willy, the creator.
redispatch only happens when the request is still in haproxy. Once it has been sent, it cannot be performed. It must not be performed either for non-idempotent requests, because there is no way to know whether some processing has begun on the server before it died and returned an RST.
http://haproxy.formilux.narkive.com/nGKXq6WU/problems-with-haproxy-down-servers-and-503-errors
The post is quite old but it's still applicable based on more recent discussions. If a request is larger than tune.bufsize (the default is around 16KB, iirc), then HAProxy hasn't even retained the entire request in memory at the point an error occurs.
Both fortunately (for the craft) and unfortunately (for purposes of real-world utility), Willy has always insisted on correct behavior by HAProxy, and he is indeed correct that it is inappropriate to retry non-idempotent requests once they have been sent to a back-end server, because there are certainly cases where this would result in duplicate processing.
For GET requests, which by definition should be idempotent (a GET request must be repeatable without consequence; otherwise it should not have been designed to use GET, but rather POST or another verb), there's a viable argument that resending to a different back-end would be a legitimate course of action, but this also is not currently supported.
Varnish, by contrast, does support a do-over, which I have used (behind HAProxy) with success on GET requests where I have on-line and near-line storage for the same object namespace. Old, "unpopular" files are migrated to near-line (slower, cheaper) storage, but all requests are sent to on-line storage, with the retry destination of near-line if on-line returns a 404. But, I've never tried this with requests other than GET.
Ideally, your back-ends would be declared unhealthy before shutdown, perhaps by deliberately failing their HTTP health checks for a draining period. One fairly simple approach is for the health check to require the presence of a static file, which gets deleted from the back-end before shutdown (sketched below). Alternatively, you can put the server into maintenance mode through HAProxy's stats/admin UI or socket, preventing new requests from being initiated while allowing running requests to drain.
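Putting that together with the consul-template sample above, a hedged sketch of a draining-friendly backend; the health-check file name is illustrative:

    backend hello-backend
        balance roundrobin
        mode http
        # Redispatch helps only for requests that have not yet been sent:
        # failed connection attempts are retried against another server
        option redispatch
        retries 3
        # Health check against a static file; delete the file on a server
        # to drain it gracefully before stopping its container
        option httpchk GET /health_check.html
        {{range service "HelloWorld" }}server {{.Node}} {{.Address}}:{{.Port}} check
        {{end}}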
I am trying to create a service that has two dependencies. One dependency is internally managed, while the second requires an outbound HTTP call to a third-party API. The sequence requires updating the resource and then executing the outbound call.
So my question is: in the event of a failure at the second step, what is the correct HTTP status code to return?
Should the response be 424 or 500 with a message body explaining the encountered error?
424: Failed Dependency (named Method Failure in early WebDAV drafts) - Indicates the method was not executed on a particular resource within its scope because some part of the method's execution failed, causing the entire method to be aborted.
500: Internal Server Error.
The failure you're asking about is one that has occurred within the internals of the service itself, so a 5xx status code range is the correct choice. 503 Service Unavailable looks perfect for the situation you've described.
5xx codes are for telling the client that even though the request was fine, the server has had some kind of problem fulfilling the request. On the other hand, 4xx codes are used to tell the client it has done something wrong (and that the server is just fine, thanks). Sections 10.4 and 10.5 of the HTTP 1.1 spec explain the different purposes of 4xx and 5xx codes.
Status code 424 is defined in the WebDAV standard and is for a case where the client needs to change what it is doing - the server isn't experiencing any problem here.
As the second operation is an external service call, you should choose 502 or 504 depending on the situation.
Quoted from: http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.3
10.5.3 502 Bad Gateway
The server, while acting as a gateway or proxy, received an invalid response from the upstream server it accessed in attempting to fulfill the request.
10.5.4 503 Service Unavailable
The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay. If known, the length of the delay MAY be indicated in a Retry-After header. If no Retry-After is given, the client SHOULD handle the response as it would for a 500 response.
Note: The existence of the 503 status code does not imply that a server must use it when becoming overloaded. Some servers may wish to simply refuse the connection.
10.5.5 504 Gateway Timeout
The server, while acting as a gateway or proxy, did not receive a timely response from the upstream server specified by the URI (e.g. HTTP, FTP, LDAP) or some other auxiliary server (e.g. DNS) it needed to access in attempting to complete the request.
Note to implementors: some deployed proxies are known to return 400 or 500 when DNS lookups time out.
503 Service Unavailable is appropriate if the issue is one the server expects to be alleviated (for instance, if it receives a 503 from the upstream server). 502 Bad Gateway should be used for unknown errors from an upstream server, where you don't know how to respond.
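As a concrete illustration of that mapping, a minimal sketch in Python; the upstream URL and timeout are illustrative:

    # Map upstream failure modes to gateway-style status codes, per the
    # RFC 2616 excerpts above.
    import requests

    UPSTREAM = "https://third-party.example.com/resource"  # placeholder

    def fetch_from_upstream():
        try:
            resp = requests.get(UPSTREAM, timeout=5)
        except requests.Timeout:
            # No timely response from upstream: 504 Gateway Timeout
            return 504, "Gateway Timeout"
        except requests.ConnectionError:
            # Upstream unreachable or connection refused: 502 Bad Gateway
            return 502, "Bad Gateway"
        if resp.status_code == 503:
            # Upstream is temporarily down: relay 503 (ideally with Retry-After)
            return 503, "Service Unavailable"
        if resp.status_code >= 500:
            # Any other invalid upstream response: 502 Bad Gateway
            return 502, "Bad Gateway"
        return 200, resp.text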