Is there a way to prevent envoy from adding specific headers? - kubernetes

According to the docs here: https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#x-forwarded-proto
Envoy adds the X-Forwarded-Proto header to the request. For some reason the header value is wrong: it is set to http although the incoming request's scheme is https, which causes problems in my application code, since it depends on the correct value of this header.
Is this a bug in Envoy? Can I prevent Envoy from doing this?

As I mentioned in the comments, there is a related GitHub issue about this:
Is there a way to prevent envoy from adding specific headers?
There is a comment about it from Istio dev @howardjohn:
We currently have two options:
EnvoyFilter
Alpha api
There will not be a third; instead we will promote the alpha API.
So the first option would be an EnvoyFilter. There are two answers with that approach in the GitHub issue above.
The answer provided by @jh-sz:
In general, use_remote_address should be set to true when Envoy is deployed as an edge node (aka a front proxy), whereas it may need to be set to false when Envoy is used as an internal service node in a mesh deployment.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: xff-trust-hops
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: ANY
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager"
          use_remote_address: true
          xff_num_trusted_hops: 1
And the answer provided by @vadimi:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: my-app-filter
spec:
  workloadLabels:
    app: my-app
  filters:
  - listenerMatch:
      portNumber: 5120
      listenerType: SIDECAR_INBOUND
    filterName: envoy.lua
    filterType: HTTP
    filterConfig:
      inlineCode: |
        function envoy_on_request(request_handle)
          request_handle:headers():replace("x-forwarded-proto", "https")
        end
        function envoy_on_response(response_handle)
        end
The second option would be the alpha API. This feature is actively in development and is considered pre-alpha.
Istio provides the ability to manage settings like X-Forwarded-For (XFF) and X-Forwarded-Client-Cert (XFCC), which depend on how the gateway workloads are deployed. This is currently an in-development feature. For more information on X-Forwarded-For, see the IETF's RFC.
You might choose to deploy Istio ingress gateways in various network topologies (e.g. behind cloud load balancers, a self-managed load balancer, or exposing the Istio ingress gateway directly to the Internet). These topologies require different ingress gateway configurations for transporting correct client attributes like IP addresses and certificates to the workloads running in the cluster.
Configuration of the XFF and XFCC headers is managed via MeshConfig during Istio installation or by adding a pod annotation. Note that the MeshConfig configuration is a global setting for all gateway workloads, while pod annotations override the global setting on a per-workload basis.
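For illustration, here is a minimal sketch of the per-workload pod annotation variant of that alpha API (the proxy.istio.io/config annotation and the gatewayTopology/numTrustedProxies fields are taken from the Istio gateway topology documentation; the deployment name, namespace and value are illustrative, so adjust them to your own gateway workload):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway   # illustrative; use your gateway deployment
  namespace: istio-system
spec:
  [...]
  template:
    metadata:
      annotations:
        # per-workload override of the mesh-wide gatewayTopology setting
        proxy.istio.io/config: '{"gatewayTopology": {"numTrustedProxies": 1}}'
    [...]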

The reason this happens is most likely because you have one or more proxies in front of Envoy/Istio.
You need to tell Envoy how many proxies you have in front of it so that it can set forwarded headers correctly (such as X-Forwarded-Proto and X-Forwarded-For).
In Istio 1.4+ you can achieve this with an Envoy filter:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: xff-trust-hops
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: ANY
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager"
          use_remote_address: true
          xff_num_trusted_hops: 1 # Change as needed
Note that if you have multiple proxies in front of Envoy you have to change the xff_num_trusted_hops variable to the correct amount. For example if you have a GCP or AWS cloud load balancer, you might have to increase this value to 2.
In Istio 1.8+, you will be able to configure this via the Istio operator instead, example:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      gatewayTopology:
        numTrustedProxies: 1 # Change as needed
More information is available here.


Filter request logs in the istio sidecar proxy

I have Azure Front Door sitting in front of an AKS cluster which has Istio proxy sidecars injected into each pod.
Azure Front Door health probes hit the service at least once a second, due to the number of Azure Front Door endpoints. The number of requests the apps are getting is so high that I want to slow the probe interval, even at the cost of losing some of the benefits of Front Door.
Microsoft suggest coding a telemetry initialiser in .NET to mark requests as synthetic; however, this seems like a massive undertaking that I would need to get multiple teams to buy into, as well as replicate in multiple languages.
Instead I would like to use an EnvoyFilter to look at the request headers and, if one matches the Front Door agent "Edge Health Probe", completely ignore the request.
This would mean I am in control of which logs get sent to Application Insights, can roll out one fix that fits all, and would not need to involve the devs.
I have looked at EnvoyFilter but can't really understand how it would work.
Is this possible with an EnvoyFilter, or does anyone know of a better method?
Thanks
Kevin
You can do that with an EnvoyFilter. This example recognizes the header on the ingress gateway and simply sends a 200 response without forwarding the request to the workloads:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: custom-ms-header
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: custom.ms-header
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inlineCode: |
            function envoy_on_request(request_handle)
              -- if the marker header matches, answer directly with 200
              -- and skip forwarding the request to the upstream workload
              local val = request_handle:headers():get("some-header")
              if val and val == "some-value" then
                request_handle:respond({[":status"] = "200"}, "ok")
              end
            end
Alternatively you can apply it to match certain workloads:
[...]
  namespace: my-namespace
spec:
  workloadSelector:
    labels:
      istio: my-workload
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
[...]
Though this requires you to apply it to every workload.
Am I misunderstanding this, or, if you want to ignore the requests, why not simply turn off the health probes?
Or change the interval of the probes from the default 30 seconds to 255 seconds, which decreases the number of requests. Also, the default health probes are HEAD requests, so you can easily filter them out.
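If you only want to short-circuit the probe traffic at the gateway, here is a minimal sketch of what the inlineCode in the filter above could look like for this case (the "Edge Health Probe" User-Agent and the HEAD method are taken from the question and the comment above; this is an assumption rather than a tested configuration, so verify both against your actual probe traffic):
inlineCode: |
  function envoy_on_request(request_handle)
    -- answer Azure Front Door health probes directly at the gateway
    local ua = request_handle:headers():get("user-agent")
    local method = request_handle:headers():get(":method")
    if ua == "Edge Health Probe" and method == "HEAD" then
      request_handle:respond({[":status"] = "200"}, "ok")
    end
  end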

EKS Block specific external IP from viewing nginx application

I have an EKS cluster with an nginx deployment in the gitlab-managed-apps namespace. The application is exposed to the public via an ALB ingress. I'm trying to block a specific public IP (e.g. x.x.x.x/32) from accessing the webpage. I tried Calico and K8s network policies; nothing worked for me. I created this Calico policy with my limited knowledge of network policies, but it blocks everything from accessing the nginx app, not just the x.x.x.x/32 external IP, showing everyone a 504 Gateway Timeout from the ALB.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: ingress-external
  namespace: gitlab-managed-apps
spec:
  selector: app == 'nginx'
  types:
  - Ingress
  ingress:
  - action: Deny
    source:
      nets:
      - x.x.x.x/32
Try this:
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: ingress-external
  namespace: gitlab-managed-apps
spec:
  selector: app == 'nginx'
  types:
  - Ingress
  ingress:
  - action: Deny
    source:
      nets:
      - x.x.x.x/32
  - action: Allow
The Calico docs suggest:
If one or more network policies apply to a pod containing ingress rules, then only the ingress traffic specifically allowed by those policies is allowed.
So this means that any traffic is denied by default and only allowed if you explicitly allow it. This is why adding the additional action: Allow rule should allow all other traffic that was not matched by the previous rule.
Also remember what the docs mention about rules:
A single rule matches a set of packets and applies some action to them. When multiple rules are specified, they are executed in order.
So the default Allow rule has to follow the Deny rule for the specific IP, not the other way around.

Override x-request-id header in istio

I was trying to understand tracing in Istio.
According to the Istio documentation, x-request-id can be used for tracing purposes (https://istio.io/latest/docs/tasks/observability/distributed-tracing/overview/).
I am seeing different behavior in Istio vs. pure Envoy proxy.
For tracing, both Istio and pure Envoy set the x-request-id header (a generated GUID).
However, in Istio the client can send an x-request-id header and the same is forwarded to the microservices,
whereas with pure Envoy the x-request-id sent by the client is not considered and Envoy overrides it with a generated GUID.
Can Istio be configured to override this x-request-id if required?
It seems it is possible to implement this using an EnvoyFilter only. This is not my solution - I found it - but it looks very promising and may resolve your issue.
Please take a look at Istio EnvoyFilter to add x-request-id to all responses:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: gateway-response
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
            subFilter:
              name: "envoy.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.http.lua.v2.Lua"
          inlineCode: |
            function envoy_on_request(handle)
              -- stash the incoming x-request-id in dynamic metadata so the
              -- response path can read it back
              local metadata = handle:streamInfo():dynamicMetadata()
              local headers = handle:headers()
              local rid = headers:get("x-request-id")
              -- for key, value in pairs(handle:headers()) do
              --   handle:logTrace("key:" .. key .. " <--> value:" .. value)
              -- end
              if rid ~= nil then
                metadata:set("envoy.filters.http.lua", "req.x-request-id", rid)
              end
            end
            function envoy_on_response(handle)
              local metadata = handle:streamInfo():dynamicMetadata():get("envoy.filters.http.lua")
              -- guard against requests where no x-request-id was stored
              local rid = metadata and metadata["req.x-request-id"]
              if rid ~= nil then
                handle:headers():add("x-request-id", rid)
              end
            end
And a second option - just my assumption:
maybe you can try to add the x-request-id header in a VirtualService? See virtualservice-headers.
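For illustration, a minimal sketch of that VirtualService approach (the host and header value are placeholders; note that headers.request.set can only set a static value, it cannot generate a new ID per request):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service                  # illustrative
spec:
  hosts:
  - my-service.default.svc.cluster.local
  http:
  - headers:
      request:
        set:
          x-request-id: "my-static-request-id"   # static value only
    route:
    - destination:
        host: my-service.default.svc.cluster.local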

Istio: Can I add randomly generated unique value as a header to every request before it reaches my application

I have a RESTful service within a Spring Boot application. This Spring Boot app is deployed inside a Kubernetes cluster and we have Istio as a service mesh, attached as a sidecar to each container pod in the cluster. Every request to my service first hits the service mesh, i.e. Istio, and then gets routed accordingly.
I need to put in place a validation for a request header, and if that header is not present then randomly generate a unique value and set it as a header on the request. I know that there is Headers.HeaderOperations which I can use in the destination rule, but how can I generate a unique value every time the header is missing? I don't want to write the logic inside my application, as this is a general rule to apply to all the applications inside the cluster.
There is important information that needs to be said on this subject. It looks to me like you are trying to work around tracing for applications that do not forward/propagate headers in your cluster, so I am going to mention a few problems that can be encountered with this solution (just in case).
As mentioned in the answer from Yuri G., you can configure unique x-request-id headers, but they will not be very useful in terms of tracing if the requests pass through applications that do not propagate those x-request-id headers.
This is because tracing entire request paths needs a unique x-request-id throughout the entire trace. If the x-request-id value is different in various parts of the path the request takes, how are we going to put together the entire trace path?
In a scenario where two requests are received in an application pod at the same time, even if they had unique x-request-id headers, only the application is able to tell which inbound request matches which outbound connection. One of the requests could take longer to process, and without a forwarded trace header we can't tell which one is which.
Anyway, for applications that do support forwarding/propagating x-request-id headers I suggest following the guide from the Istio documentation.
Hope it helps.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: enable-envoy-xrequestid-in-response
  namespace: istio-system
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
          always_set_request_id_in_response: true
From reading the documentation of Istio and Envoy it seems like this is not supported by Istio/Envoy out of the box. As a workaround you have two options.
Option 1: Set the x-envoy-force-trace header in a VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews.prod.svc.cluster.local
  http:
  - headers:
      request:
        set:
          x-envoy-force-trace: "true"
It will generate an x-request-id header if it is missing, but it seems like an abuse of the tracing mechanism.
Option 2: Use consistentHash load balancing based on a header, e.g.:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: x-custom-request-id
It will generate the x-custom-request-id header for any request that doesn't have it. In this case requests with the same x-custom-request-id value will always go to the same pod, which can cause uneven balancing.
The answer above works well! I have updated it for the latest Istio (the filter name is given in full):
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: enable-envoy-xrequestid-in-response
  namespace: istio-system
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
          always_set_request_id_in_response: true

Default Load Balancing in Kubernetes

I've recently started working with Kubernetes clusters. The flow of network calls for a given Kubernetes service in our cluster is something like the following:
External Non-K8S Load Balancer -> Ingress Controller -> Ingress Resource -> Service -> Pod
For a given service, there are two replicas. By looking at the logs of the containers in the replicas, I can see that calls are being routed to different pods. As far as I can see, we haven't explicitly set up any load-balancing policies anywhere for our services in Kubernetes.
I've got a few questions:
1) Is there a default load-balancing policy for K8S? I've read about kube-proxy and random routing. It definitely doesn't appear to be round-robin.
2) Is there an obvious way to specify load balancing rules in the Ingress resources themselves? On a per-service basis?
Looking at one of our Ingress resources, I can see that the 'loadBalancer' property is empty:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.kubernetes.io/rewrite-target":"/","nginx.ingress.kubernetes.io/rewrite-target":"/"},"name":"example-service-ingress","namespace":"member"},"spec":{"rules":[{"host":"example-service.x.x.x.example.com","http":{"paths":[{"backend":{"serviceName":"example-service-service","servicePort":8080},"path":""}]}}]}}
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: "2019-02-13T17:49:29Z"
  generation: 1
  name: example-service-ingress
  namespace: x
  resourceVersion: "59178"
  selfLink: /apis/extensions/v1beta1/namespaces/x/ingresses/example-service-ingress
  uid: b61decda-2fb7-11e9-935b-02e6ca1a54ae
spec:
  rules:
  - host: example-service.x.x.x.example.com
    http:
      paths:
      - backend:
          serviceName: example-service-service
          servicePort: 8080
status:
  loadBalancer:
    ingress:
    - {}
I should specify - we're using an on-prem Kubernetes cluster, rather than on the cloud.
Cheers!
The "internal load balancing" between Pods of a Service has already been covered in this question from a few days ago.
Ingress isn't really doing anything special (unless you've been hacking in the NGINX config it uses) - it will use the same Service rules as in the linked question.
If you want or need fine-grained control of how pods are routed to within a service, it is possible to extend Kubernetes' features - I recommend you look into the traffic management features of Istio, as one of its features is to be able to dynamically control how much traffic different pods in a service receive.
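For example, here is a minimal sketch of Istio's weighted routing between two versions of a service (all names are illustrative, and a DestinationRule defining the v1/v2 subsets is assumed to exist):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example-service            # illustrative
spec:
  hosts:
  - example-service-service
  http:
  - route:
    - destination:
        host: example-service-service
        subset: v1
      weight: 80                   # 80% of traffic to subset v1
    - destination:
        host: example-service-service
        subset: v2
      weight: 20                   # 20% of traffic to subset v2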
I see two options that can be used with k8s:
Use istio's traffic management and create a DestinationRule. It currently supports three load balancing modes:
Round robin
Random
Weighted least request
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
...
spec:
  ...
  subsets:
  - name: test
    ...
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
Use lb_type in the Envoy proxy with Ambassador on k8s. More info about Ambassador is at https://www.getambassador.io.
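For reference, a minimal sketch of the corresponding setting in a raw Envoy cluster definition; in the current Envoy API the field is called lb_policy (lb_type was the old v1 name), and the cluster name and endpoint address here are illustrative:
clusters:
- name: my-app                     # illustrative
  type: STRICT_DNS
  lb_policy: LEAST_REQUEST         # or ROUND_ROBIN, RANDOM, RING_HASH, MAGLEV
  load_assignment:
    cluster_name: my-app
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: my-app.default.svc.cluster.local
              port_value: 8080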