Override x-request-id header in istio - trace

I was trying to understand tracing in Istio.
According to the Istio documentation, x-request-id can be used for tracing purposes (https://istio.io/latest/docs/tasks/observability/distributed-tracing/overview/).
I am seeing different behavior in Istio vs. pure Envoy proxy.
For tracing, both Istio and pure Envoy set the x-request-id header (a generated GUID).
However, in Istio the client can send an x-request-id header and the same value is forwarded to the microservices.
Whereas with pure Envoy, the x-request-id sent by the client is not kept; Envoy overrides it with a generated GUID.
Can Istio be configured to override this x-request-id if required?
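For reference, the difference appears to come down to two HTTP connection manager settings in Envoy: use_remote_address (which controls whether a request is classified as an external/edge request) and preserve_external_request_id (which controls whether a client-supplied x-request-id survives on edge requests). Below is a minimal sketch of an EnvoyFilter that merges those settings into the gateway; the field names come from the Envoy HCM API, but whether this reproduces the pure-Envoy behavior on an Istio gateway is an assumption, not something confirmed in this thread:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: override-x-request-id   # hypothetical name
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          # treat the immediate downstream address as the client address so the
          # request is classified as external ("edge") by Envoy
          use_remote_address: true
          # do not keep a client-supplied x-request-id on edge requests
          # (false is Envoy's default, shown here only for clarity)
          preserve_external_request_id: false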

It seems this can be implemented only with an EnvoyFilter. This is not my solution - I found it, but it looks very promising and may resolve your issue.
Please take a look at Istio EnvoyFilter to add x-request-id to all responses:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: gateway-response
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
            subFilter:
              name: "envoy.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.http.lua.v2.Lua"
          inlineCode: |
            function envoy_on_request(handle)
              local metadata = handle:streamInfo():dynamicMetadata()
              local headers = handle:headers()
              local rid = headers:get("x-request-id")
              -- for key, value in pairs(handle:headers()) do
              --   handle:logTrace("key:" .. key .. " <--> value:" .. value)
              -- end
              if rid ~= nil then
                metadata:set("envoy.filters.http.lua", "req.x-request-id", rid)
              end
            end
            function envoy_on_response(handle)
              local metadata = handle:streamInfo():dynamicMetadata():get("envoy.filters.http.lua")
              -- guard against requests where no x-request-id was recorded
              if metadata ~= nil then
                local rid = metadata["req.x-request-id"]
                if rid ~= nil then
                  handle:headers():add("x-request-id", rid)
                end
              end
            end
And the second option is just my assumption:
maybe you can try to add the x-request-id in the headers of the VirtualService? See virtualservice-headers.
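For completeness, a minimal sketch of that VirtualService idea (the host and the header value are placeholders); note that, as far as I know, headers.request.set takes a literal string, so it can overwrite the header with a fixed value but cannot generate a per-request GUID:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service              # placeholder
spec:
  hosts:
  - my-service                  # placeholder
  http:
  - headers:
      request:
        set:
          x-request-id: my-fixed-value   # static value only
    route:
    - destination:
        host: my-service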

Related

Kong's flaky rate limiting behavior

I have deployed some APIs in Azure Kubernetes Service and I have been experimenting with Kong to be able to use some of its features such as rate limiting and IP restriction, but it doesn't always work as expected. Here are the plugin objects I use:
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: kong-rate-limiting-plugin
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: 'true'
config:
  minute: 10
  policy: local
  limit_by: ip
  hide_client_headers: true
plugin: rate-limiting
---
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: kong-ip-restriction-plugin
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: 'true'
config:
  deny:
  - {some IP}
plugin: ip-restriction
The first problem is that when I tried to apply these plugins across the cluster by setting the global label to "true" as described here, I got this error when applying it with kubectl:
metadata.labels: Invalid value: "\"true\"": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')
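For what it's worth, the error suggests the label value that reached the API server contained literal quote characters. A label value that passes that validation regex is just the bare string true, e.g. (a sketch, not a confirmed fix):
metadata:
  name: kong-rate-limiting-plugin
  labels:
    global: "true"   # YAML-level quoting only; the stored value must not contain '"' characters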
The second problem is even though I used KongClusterPlugin and set global to 'true', I still had to add the plugins explicitly to the ingress object for them to work. Here is my ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ing
  annotations:
    konghq.com/plugins: kong-rate-limiting-plugin,kong-ip-restriction-plugin
    konghq.com/protocols: https
    konghq.com/https-redirect-status-code: "301"
  namespace: default
spec:
  ingressClassName: kong
  ...
And here is my service:
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  namespace: default
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ...
The third problem is that by setting limit_by to ip, I expected it to rate-limit per IP, but I noticed it would block all clients when the threshold was hit collectively by the clients. I tried to mitigate that by preserving the client IP and setting externalTrafficPolicy to Local in the Service object, as I thought maybe the Kubernetes objects weren't receiving the actual client's IP. Now the rate-limiting behavior seems more reasonable, however sometimes it's as if it's back to its old state and returns HTTP 429 randomly. The other issue I see here is that I can set externalTrafficPolicy to Local only when the service type is LoadBalancer or NodePort. I set my service to be of type LoadBalancer, which exposes it publicly and seems to be a problem. It would be ironic if using an ingress controller that's supposed to shield the service instead exposed it. Am I missing something here or does this make no sense?
The fourth problem is that the IP restriction plugin doesn't seem to be working: I was able to successfully call the APIs from a machine with the IP I put in 'config.deny'.
The fifth problem is that the number of times per minute I have to hit the APIs to get an HTTP 429 doesn't match the value I placed in 'config.minute'.

Filter request logs in the istio sidecar proxy

I have Azure Front Door sitting in front of an AKS cluster which has Istio, with proxy sidecars injected into each pod.
Azure Front Door has health probes which send a request at least once a second due to the number of Front Door endpoints. The number of requests the apps are getting is so high that I want to slow down the probe interval, even at the cost of losing some of the benefits of Front Door.
Microsoft suggests coding a telemetry initialiser in .NET to mark requests as synthetic, however this seems like a massive undertaking that I would need to get multiple teams to buy into, as well as replicate in multiple languages.
Instead I would like to use an EnvoyFilter to look at the headers of the requests and, if one matches the Front Door agent "Edge Health Probe", completely ignore the request.
This would mean I am in control of what logs get sent to Application Insights, can roll out one fix that fits all, and would not need to involve the devs.
I have looked at EnvoyFilter but can't really understand how it would work.
Is this possible with envoy filter or does anyone know of a better method?
Thanks
Kevin
You can do that with an EnvoyFilter. This example recognizes the header on the ingress gateway and simply sends a 200 response without forwarding the request to the workloads:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: custom-ms-header
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: custom.ms-header
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inlineCode: |
            function envoy_on_request(request_handle)
              local val = request_handle:headers():get("some-header")
              -- if the probe header matches, answer locally and skip the workloads
              if (val and val == "some-value") then
                request_handle:respond({[":status"] = "200"}, "ok")
              end
            end
Alternatively you can apply it to match certain workloads:
[...]
  namespace: my-namespace
spec:
  workloadSelector:
    labels:
      istio: my-workload
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
[...]
Though this requires you to apply it to every workload.
Am I misunderstanding this, or if you want to ignore the requests, why not simply turn off the health probes?
Or change the interval of the probes from the default 30 seconds to 255, which decreases the number of requests. Also, the default health probes are HEAD requests, so you can easily filter them out, as in the sketch below.
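For example, the Lua body of the EnvoyFilter shown above could match on both the method and the agent string; a sketch, assuming the probe really does send HEAD requests with the "Edge Health Probe" agent mentioned in the question:
            function envoy_on_request(request_handle)
              local method = request_handle:headers():get(":method")
              local agent = request_handle:headers():get("user-agent")
              -- answer Front Door health probes locally instead of forwarding them
              if method == "HEAD" and agent == "Edge Health Probe" then
                request_handle:respond({[":status"] = "200"}, "ok")
              end
            end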

Is there a way to prevent envoy from adding specific headers?

According to the docs here https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#x-forwarded-proto
Envoy proxy adds the header X-Forwarded-Proto to the request. For some reason the header value is wrong: it is set to http although the incoming request's scheme is https, which causes some problems in my application code since it depends on the correct value of this header.
Is this a bug in envoy? Can I prevent envoy from doing this?
As I mentioned in the comments, there is a related GitHub issue about that:
Is there a way to prevent envoy from adding specific headers?
There is a comment from Istio dev @howardjohn about that:
We currently have two options:
EnvoyFilter
Alpha api
There will not be a third; instead we will promote the alpha API.
So the first option would be an EnvoyFilter.
There are two answers with that in the above GitHub issue.
Answer provided by @jh-sz:
In general, use_remote_address should be set to true when Envoy is deployed as an edge node (aka a front proxy), whereas it may need to be set to false when Envoy is used as an internal service node in a mesh deployment.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: xff-trust-hops
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: ANY
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager"
          use_remote_address: true
          xff_num_trusted_hops: 1
And the answer provided by @vadimi:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: my-app-filter
spec:
  workloadLabels:
    app: my-app
  filters:
  - listenerMatch:
      portNumber: 5120
      listenerType: SIDECAR_INBOUND
    filterName: envoy.lua
    filterType: HTTP
    filterConfig:
      inlineCode: |
        function envoy_on_request(request_handle)
          request_handle:headers():replace("x-forwarded-proto", "https")
        end
        function envoy_on_response(response_handle)
        end
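For what it's worth, that answer uses the pre-Istio-1.6 EnvoyFilter schema (workloadLabels/filters). A rough sketch of the same idea in the current configPatches schema (the label, port and filter placement are assumptions carried over from the answer above, not something tested here):
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: my-app-filter
spec:
  workloadSelector:
    labels:
      app: my-app                  # placeholder label from the original answer
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        portNumber: 5120           # placeholder port from the original answer
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inlineCode: |
            function envoy_on_request(request_handle)
              -- force the header to the value the application expects
              request_handle:headers():replace("x-forwarded-proto", "https")
            end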
The second option would be the Alpha API; this feature is actively in development and is considered pre-alpha.
Istio provides the ability to manage settings like X-Forwarded-For (XFF) and X-Forwarded-Client-Cert (XFCC), which are dependent on how the gateway workloads are deployed. This is currently an in-development feature. For more information on X-Forwarded-For, see the IETF’s RFC.
You might choose to deploy Istio ingress gateways in various network topologies (e.g. behind Cloud Load Balancers, a self-managed Load Balancer or directly expose the Istio ingress gateway to the Internet). As such, these topologies require different ingress gateway configurations for transporting correct client attributes like IP addresses and certificates to the workloads running in the cluster.
Configuration of the XFF and XFCC headers is managed via MeshConfig during Istio installation or by adding a pod annotation. Note that the MeshConfig configuration is a global setting for all gateway workloads, while pod annotations override the global setting on a per-workload basis.
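The pod annotation variant mentioned above looks roughly like this (a sketch; I am assuming the proxy.istio.io/config annotation is what's meant, and it would normally go on the gateway Deployment's pod template rather than a bare Pod):
apiVersion: v1
kind: Pod
metadata:
  name: istio-ingressgateway       # placeholder
  annotations:
    proxy.istio.io/config: |
      gatewayTopology:
        numTrustedProxies: 1       # change to match the number of proxies in front of the gateway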
The reason this happens is most likely because you have one or more proxies in front of Envoy/Istio.
You need to tell Envoy how many proxies you have in front of it so that it can set forwarded headers correctly (such as X-Forwarded-Proto and X-Forwarded-For).
In Istio 1.4+ you can achieve this with an Envoy filter:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: xff-trust-hops
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: ANY
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager"
          use_remote_address: true
          xff_num_trusted_hops: 1 # Change as needed
Note that if you have multiple proxies in front of Envoy you have to change the xff_num_trusted_hops variable to the correct amount. For example if you have a GCP or AWS cloud load balancer, you might have to increase this value to 2.
In Istio 1.8+, you will be able to configure this via the Istio operator instead, example:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      gatewayTopology:
        numTrustedProxies: 1 # Change as needed
More information is available here.

How to forward requests to a public service like a CDN using an Istio VirtualService?

I'm trying to do a reverse proxy using an Istio VirtualService.
Is it possible to forward requests in a VirtualService (like nginx's proxy_pass)?
The desired result:
http://myservice.com/about/* -> forward the request to a CDN (an external service outside the k8s cluster - AWS S3, etc.)
http://myservice.com/* -> my-service-web (an internal service inside the Istio mesh)
I defined a ServiceEntry, but it just redirects; it does not forward the request.
Here are my serviceentry.yaml and virtualservice.yaml:
serviceentry.yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: my-service-proxy
  namespace: my-service
spec:
  hosts:
  - CDN_URL
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
  namespace: my-service
spec:
  hosts:
  - myservice.com
  gateways:
  - myservice
  http:
  - match:
    - uri:
        prefix: /about
    rewrite:
      authority: CDN_URL
      uri: /
    route:
    - destination:
        host: CDN_URL
  - route:
    - destination:
        host: my-service-web.svc.cluster.local
        port:
          number: 80
Can a VirtualService act like nginx-ingress?
Based on that Istio discuss thread,
user @palic asked the same question here:
Shouldn’t it be possible to let ISTIO do the reverse proxy
thing, so that no one needs a webserver (httpd/nginx/
lighthttpd/…) to do the reverse proxy job?
And the answer provided by @Daniel_Watrous:
The job of the Istio control plane is to configure a fleet of reverse proxies. The purpose of the webserver is to serve content, not reverse proxy. The reverse proxy technology at the heart of Istio is Envoy, and Envoy can be used as a replacement for HAProxy, nginx, Apache, F5, or any other component that is being used as a reverse proxy.
it is possible forward request in virtual service
Based on that I would say it's not possible to do it in a VirtualService; it's just a rewrite (redirect), which I assume is working for you.
when I need the function of a reverse proxy, do I have to use the nginx ingress controller (or something else) instead of the Istio ingress gateway?
If we're talking about a reverse proxy, then yes, you need to use a technology other than Istio itself.
As far as I'm concerned, you could use an nginx pod configured as a reverse proxy to the external service, and it would be the host for your VirtualService.
So it would look like the example below.
EXAMPLE
ingress gateway -> VirtualService -> nginx pod (reverse proxy configured in nginx)
ServiceEntry -> accessibility of URLs outside of the cluster
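A minimal sketch of that nginx piece (all names are placeholders, and CDN_URL stands in for the real CDN host, as in the question; this is an illustration rather than a tested setup):
apiVersion: v1
kind: ConfigMap
metadata:
  name: cdn-proxy-conf
  namespace: my-service
data:
  default.conf: |
    server {
      listen 80;
      location / {
        # forward everything to the external CDN
        proxy_pass https://CDN_URL;
        proxy_set_header Host CDN_URL;
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cdn-proxy
  namespace: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cdn-proxy
  template:
    metadata:
      labels:
        app: cdn-proxy
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
        volumeMounts:
        - name: conf
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: conf
        configMap:
          name: cdn-proxy-conf
The /about route in the VirtualService would then point at a ClusterIP Service in front of this Deployment instead of at CDN_URL directly, and the ServiceEntry keeps the CDN reachable from inside the mesh.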
Let me know if you have any more questions.

Istio: Can I add a randomly generated unique value as a header to every request before it reaches my application?

I have a RESTful service within a Spring Boot application. This Spring Boot app is deployed inside a Kubernetes cluster, and we have Istio as a service mesh attached as a sidecar to each container pod in the cluster. Every request to my service first hits the service mesh, i.e. Istio, and then gets routed accordingly.
I need to add validation for a request header, and if that header is not present, randomly generate a unique value and set it as a header on the request. I know that there is Headers.HeaderOperations which I can use in the DestinationRule, but how can I generate a unique value every time the header is missing? I don't want to write the logic inside my application as this is a general rule to apply to all the applications inside the cluster.
There is important information that needs to be said on this subject. It looks to me like you are trying to work around tracing for applications that do not forward/propagate headers in your cluster, so I am going to mention a few problems that can be encountered with this solution (just in case).
As mentioned in the answer from Yuri G., you can configure unique x-request-id headers, but they will not be very useful in terms of tracing if the requests pass through applications that do not propagate those x-request-id headers.
This is because tracing an entire request path requires the same unique x-request-id throughout the entire trace. If the x-request-id value is different in various parts of the path the request takes, how are we going to put together the entire trace path?
In a scenario where two requests are received in an application pod at the same time, even if they had unique x-request-id headers, only the application is able to tell which inbound request matches which outbound connection. One of the requests could take longer to process, and without a forwarded trace header we can't tell which one is which.
Anyway, for applications that do support forwarding/propagating x-request-id headers, I suggest following the guide from the Istio documentation.
Hope it helps.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: enable-envoy-xrequestid-in-response
  namespace: istio-system
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
          always_set_request_id_in_response: true
From reading the Istio and Envoy documentation, it seems this is not supported by Istio/Envoy out of the box. As a workaround you have two options.
Option 1: Set the x-envoy-force-trace header in the VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews.prod.svc.cluster.local
  http:
  - headers:
      request:
        set:
          x-envoy-force-trace: "true"
It will generate an x-request-id header if it is missing, but this seems like an abuse of the tracing mechanism.
Option 2: Use consistentHash load balancing based on a header, e.g.:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: x-custom-request-id
It will generate the x-custom-request-id header for any request that doesn't have it. In this case, requests with the same x-custom-request-id value will always go to the same pod, which can cause uneven balancing.
The answer above works well! I have updated it for the latest Istio (the filter name is spelled out in full):
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: enable-envoy-xrequestid-in-response
  namespace: istio-system
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
          always_set_request_id_in_response: true