I'm facing an issue that popped up because of a bad API design. A very large request URI (11361 characters, with lots of parameters) is being blocked (or maybe badly routed) by the Envoy proxy sidecar, which throws a "connection reset by peer". The request goes from a pod to a service inside the same K8s cluster.
If I remove the Envoy sidecar from the caller pod, the API call works normally, even though the destination is still deployed with its Envoy sidecar.
This is the log showing up when I try to do a curl with that huge URI:
"- - -" 0 UF,URX "-" "-" 0 0 1000 - "-" "-" "-" "-" "10.10.146.112:80" outbound|80||default-http-backend.default.svc.cluster.local
The destination shown in the log is a fallback internal service.
I've tried adding an EnvoyFilter that increases the max header size, but I think those rules apply to incoming calls, not to outgoing requests.
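For reference, this is roughly the EnvoyFilter I tried (a sketch; the namespace, workload labels and size value are placeholders, and I'm not sure SIDECAR_OUTBOUND is even the right context for the caller side):
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: increase-max-header-size
  namespace: default              # placeholder: namespace of the caller pod
spec:
  workloadSelector:
    labels:
      app: caller                 # placeholder: labels of the caller pod
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: SIDECAR_OUTBOUND   # my doubt: does this actually cover outgoing requests?
      listener:
        filterChain:
          filter:
            name: envoy.http_connection_manager
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          max_request_headers_kb: 96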
The issue shows up with both Istio 1.4.0 and 1.5.8.
Any ideas for a workaround while the dev team does the refactoring?
Thank you very much :)
I have an OpenShift 4.6 platform running an application pod.
We use Postman to send requests to the pod.
The application pod returns a 200 HTTP response code, but we get a 502 in Postman.
So there must be an intermediate component inside OpenShift/K8s that turns the 200 into a 502.
Is there a way to debug/trace more information on egress?
Thanks
Nicolas
The HTTP 502 error is most likely returned by the OpenShift Router that forwards your request to your application.
In practice this usually means that the OpenShift Router (HAProxy) sends the request to your application but receives either no answer or an unexpected one back.
So I would recommend checking your application's logs for errors and verifying that your application returns a valid HTTP answer. You can test this by running curl localhost:<port> from inside your application pods to see whether a response comes back.
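For example, something along these lines (pod name and port are placeholders):
# check the application's own logs for errors around the time of the 502
oc logs <application-pod>
# probe the application from inside its own pod, bypassing the router
oc exec <application-pod> -- curl -v http://localhost:8080/<your-endpoint>
# if the application looks healthy, the router logs may show why it answered 502
# (router-default in the openshift-ingress namespace is the default on OpenShift 4.x)
oc logs -n openshift-ingress deployment/router-default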
I am setting up an Istio service mesh and trying to call an external service from a pod inside the mesh.
I followed the steps in https://istio.io/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/ up to step 2:
"Verify that your ServiceEntry was applied correctly by sending a request to http://edition.cnn.com/politics."
but in place of edition.cnn.com I used my own service.
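This is roughly the ServiceEntry I applied, adapted from the one in the doc (the two ports are an assumption about what my service exposes):
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: service-abc
spec:
  hosts:
  - service.abc.com
  ports:
  - number: 80
    name: http-port
    protocol: HTTP
  - number: 443
    name: https-port
    protocol: HTTPS
  resolution: DNS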
When I run curl from inside my pod, I get the error below.
[2020-02-02T10:02:52.465Z] "GET / HTTP/1.1" 503 UF,URX "-" "-" 0 91 150 - "-" "curl/7.58.0" "fafa8680-bdf1-468a-b50f-1a4430707ceb" "service.abc.com" "173.25.13.66:80" outbound|80||service.abc.com - 173.25.13.66:80 10.44.0.6:47544 - default
I can ping service.abc.com, but how do I debug this error and get more logs for analysis? Since the linked page does not mention creating mTLS settings or destination rules at this point, I did not create them.
Note: I am not facing any issue with edition.cnn.com; the problem only occurs with my service, which is external to the mesh and runs on another server inside my company network.
Does service.abc.com support only HTTP, only HTTPS, or both? Is it configured to redirect HTTP to HTTPS? If you hit an endpoint over HTTP and it is neither listening on port 80 nor redirecting HTTP to HTTPS, a 503 is exactly what you would expect.
If you follow all the steps through step 5 in the doc, and assuming service.abc.com is an HTTPS service, it should work as expected: at step 5, even though you send an HTTP request, the Istio egress gateway converts it to HTTPS (TLS origination) before sending the request out to service.abc.com.
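You can check this from any machine in your company network, outside the mesh, for example:
# does the service answer plain HTTP on port 80?
curl -sv http://service.abc.com/ -o /dev/null
# does it answer TLS on port 443? (-k skips certificate verification, for a quick test only)
curl -skv https://service.abc.com/ -o /dev/null
If the first call hangs or is refused while the second one works, the 503 from the sidecar is consistent with the service being HTTPS-only.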
I'd like health checks against an HAProxy instance to succeed or fail based solely on whether HAProxy itself is running. In other words, I don't want the health check to be proxied to a backend server.
I see there is a way to do this by returning a static file. That would work for me, but I was wondering: is there a way to return an empty response with just a status code, without having to serve a file and deal with the false 503? The linked solution seems hacky, and I worry that the behavior won't be allowed in some later version.
monitor-uri <uri>
Intercept a URI used by external components' monitor requests
May be used in sections: defaults, frontend, listen
When an HTTP request referencing <uri> is received on a frontend,
HAProxy will not forward it nor log it, but instead will return either
"HTTP/1.0 200 OK" or "HTTP/1.0 503 Service unavailable", depending on
failure conditions defined with "monitor fail". This is normally
enough for any front-end HTTP probe to detect that the service is UP
and running without forwarding the request to a backend server. Note
that the HTTP method, the version and all headers are ignored, but the
request must at least be valid at the HTTP level. This keyword may
only be used with an HTTP-mode frontend.
Example :
# Use /haproxy_test to report haproxy's status
frontend www
mode http
monitor-uri /haproxy_test
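Assuming the frontend binds port 80 (the snippet above omits the bind line), you can verify it with a plain curl; while HAProxy is up and no "monitor fail" condition matches, you should get back an empty 200:
curl -i http://<haproxy-host>/haproxy_test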
I have an AWS EC2 Jira instance running behind an AWS Classic Load Balancer. The site loads fine in the browser, but all API requests return 404 for some reason. It is not a Jira 404, but a generic 404 response with no body and minimal headers. The only useful response header seems to be Server: nginx.
I've tried whitelisting my client IP, opening up all ports, sending the request both to the LB and directly to the instance with the proper security group settings, etc., but the same 404 response comes back. I'm using Postman to test the API. I noticed that when I load the EC2 instance directly in the browser, it redirects to the load balancer.
GET http://jira (home page)
Returns 200 with HTML. Basic auth works, too.
GET http://jira/rest/api/2/issue/ticket-num (or any other /rest/ endpoint)
Returns 404.
Where should I start looking to debug this 404? I feel like I'm missing something basic. I don't see any Jira configuration for setting up its REST API. Perhaps it's a web server configuration issue, although I've never come across manual web server configuration while installing Jira, so maybe it's on the AWS side?
EDIT: still waiting to get SSH access to the instance, so I'll update as I get more info and access.
An HTTP 404 response with a very limited set of headers could come from the default (bottom) rule of the load balancer listener. I ran into a similar issue: in one of the listener rules I put the host domain name in a path condition instead of a host-header condition, so the rule never matched and the default rule returned 404 because no such path exists on the instance.
I would recommend switching the default rule to a "Redirect to" or "Return fixed response" action to check whether your requests are falling through to it.
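If it is actually an Application Load Balancer (Classic ELBs have no listener rules), you can inspect the rules and their conditions to see which one your request matches; the listener ARN below is a placeholder:
aws elbv2 describe-rules --listener-arn <listener-arn>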
I'm working on a microservice architecture based on Docker, Registrator, Consul and HAProxy.
I'm also using consul-template to dynamically generate the HAProxy config file. Everything works fine: when I add multiple instances of the same microservice, the HAProxy configuration is updated immediately and requests are dispatched correctly using a round-robin strategy.
My problem occurs when I remove some instances (scale down). If a container is shut down while a request is in flight, I get an error.
I'm new to HAProxy, so is there a way to configure HAProxy to retry a failing request against another endpoint if a container disappears?
To be precise: I'm using layer-7 routing (mode http) for my frontends and backends. Here is a small sample of my consul-template file:
backend hello-backend
balance roundrobin
mode http
{{range service "HelloWorld" }}server {{.Node}} {{.Address}}:{{.Port}} check
{{end}}
# Path stripping
reqrep ^([^\ ]*)\ /hello/(.*) \1\ /\2
frontend http
bind *:8080
mode http
acl url_hello path_beg /hello
use_backend hello-backend if url_hello
Thank you for your help.
It isn't possible for HAProxy to resend a request that has already been sent to a backend.
Here's a forum post from Willy, the creator.
redispatch only happens when the request is still in haproxy. Once it has been sent, it cannot be performed. It must not be performed either for non-idempotent requests, because there is no way to know whether some processing has begun on the server before it died and returned an RST.
http://haproxy.formilux.narkive.com/nGKXq6WU/problems-with-haproxy-down-servers-and-503-errors
The post is quite old but it's still applicable based on more recent discussions. If a request is larger than tune.bufsize (16KB by default) then HAProxy won't even have retained the entire request in memory at the point an error occurs.
Both fortunately (for the craft) and unfortunately (for purposes of real-world utility), Willy has always insisted on correct behavior by HAProxy, and he is indeed correct that it is inappropriate to retry non-idempotent requests once they have been sent to a back-end server, because there are certainly cases where this would result in duplicate processing.
For GET requests which, by definition, should be idempotent (a GET request must be repeatable without consequence, otherwise it should not have been designed to use GET -- it should have been POST or another verb) there's a viable argument that resending to a different back-end would be a legitimate course of action, but this also is not currently supported.
Varnish, by contrast, does support a do-over, which I have used (behind HAProxy) with success on GET requests where I have on-line and near-line storage for the same object namespace. Old, "unpopular" files are migrated to near-line (slower, cheaper) storage, but all requests are sent to on-line storage, with the retry destination of near-line if on-line returns a 404. But, I've never tried this with requests other than GET.
Ideally, your solution would be for your back-ends to be declared unhealthy, for example by deliberately failing their HTTP health checks for a draining period before shutting down. One fairly simple approach is for the health check to require the presence of a static file, which gets deleted from the back-end before shutdown. Alternatively, you can put the server into maintenance mode through the stats/admin UI or socket, preventing new requests from being initiated while allowing running requests to drain.
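A minimal sketch of the health-check approach, with placeholder server names, addresses and paths (your shutdown hook deletes the file behind /up before the container stops, the check starts failing, and HAProxy stops sending new requests to that server):
backend hello-backend
    balance roundrobin
    mode http
    option httpchk GET /up
    server node1 10.0.0.1:8080 check inter 2s fall 2 rise 2
Or, to drain through the admin socket instead (this requires "stats socket /var/run/haproxy.sock level admin" in the global section):
echo "disable server hello-backend/node1" | socat stdio /var/run/haproxy.sock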