Istio: Add a custom request header to outbound HTTP requests

We want to add a custom request header to HTTP requests sent to a specific external endpoint, and created the following Istio configuration:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: httpbin-se
spec:
  exportTo:
  - .
  hosts:
  - httpbin.org
  location: MESH_EXTERNAL
  ports:
  - name: 443-port
    number: 443
    protocol: TLS
  resolution: NONE
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-vs
spec:
  hosts:
  - httpbin.org
  http:
  - route:
    - destination:
        host: httpbin.org
      weight: 100
    headers:
      request:
        add:
          test-header: xyz
I tried testing from a container, but the test-header is not being added:
bash-4.2$ curl -X GET "https://httpbin.org/headers" -H "accept: application/json"
{
  "headers": {
    "Accept": "application/json",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.29.0",
    "X-Amzn-Trace-Id": "Root=1-63062d1c-2d6808374e71f1ef16713fe8"
  }
}
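A likely explanation, offered as a hedged guess: the ServiceEntry declares port 443 with protocol TLS, so the sidecar treats the connection as opaque, pass-through TLS, and Envoy cannot add headers to traffic it cannot parse; header manipulation in a VirtualService only applies to plaintext HTTP the proxy can read. A sketch of a possible workaround, following Istio's TLS-origination pattern for egress traffic (the resource names and the targetPort value here are assumptions, not from the original post), is to have the app call plain http://httpbin.org and let the sidecar upgrade the connection to HTTPS:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: httpbin-se
spec:
  hosts:
  - httpbin.org
  location: MESH_EXTERNAL
  ports:
  - name: http-port
    number: 80          # the app now calls http://httpbin.org
    protocol: HTTP
    targetPort: 443     # the sidecar forwards the TLS-upgraded request to 443
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: httpbin-tls-originate   # hypothetical name
spec:
  host: httpbin.org
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 80
      tls:
        mode: SIMPLE    # originate TLS at the sidecar

With this in place, curl http://httpbin.org/headers from the container should show the injected test-header, because the sidecar now sees (and can modify) the request before encrypting it.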

Related

In Grafana I am getting a "400 Bad Request Client sent an HTTP request to an HTTPS server" when trying to update datasource configmaps

In Grafana I notice that when I deploy a ConfigMap that should add a datasource, it makes no change and does not add the new datasource - note that the ConfigMap is in the cluster and in the correct namespace.
If I make a change to the ConfigMap, I see the following error in the logs of the grafana-sc-datasources container:
POST request sent to http://localhost:3000/api/admin/provisioning/datasources/reload. Response: 400 Bad Request Client sent an HTTP request to an HTTPS server.
I assume I do not see any changes because it cannot make the POST request.
I played around a bit, and at one point I did see changes being made/updated in the datasources:
I changed the protocol to http under grafana.ini / server / protocol and was NOT able to open the Grafana website, but I did notice that if I made a change to a datasource ConfigMap in the cluster, I would see a successful 200 message in the logs of the grafana-sc-datasources container: POST request sent to http://localhost:3000/api/admin/provisioning/datasources/reload. Response: 200 OK {"message":"Datasources config reloaded"}.
So I assume I just need to know how to get Grafana to send the POST request as https instead of http.
Can someone point me to what might be wrong and how to fix it?
Note that I am pretty new to K8s, Grafana and Helm charts.
Here is a configmap that I am trying to get to work:
apiVersion: v1
kind: ConfigMap
metadata:
  name: jaeger-${NACKLE_ENV}-grafana-datasource
  labels:
    grafana_datasource: '1'
data:
  jaeger-datasource.yaml: |-
    apiVersion: 1
    datasources:
      - name: Jaeger-${NACKLE_ENV}
        type: jaeger
        access: browser
        url: http://jaeger-${NACKLE_ENV}-query.${NACKLE_ENV}.svc.cluster.local:16690
        version: 1
        basicAuth: false
Here is the current Grafana values file:
# use 1 replica when using a StatefulSet
# If we need more than 1 replica, then we'll have to:
#   - remove the `persistence` section below
#   - use an external database for all replicas to connect to (refer to Grafana Helm chart docs)
replicas: 1

image:
  pullSecrets:
    - docker-hub

affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: eks.amazonaws.com/capacityType
              operator: In
              values:
                - ON_DEMAND

persistence:
  enabled: true
  type: statefulset
  storageClassName: biw-durable-gp2

podDisruptionBudget:
  maxUnavailable: 1

admin:
  existingSecret: grafana

sidecar:
  datasources:
    enabled: true
    label: grafana_datasource
  dashboards:
    enabled: true
    label: grafana_dashboard
    labelValue: 1

dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'default'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/default

dashboards:
  default:
    node-exporter:
      gnetId: 1860
      revision: 23
      datasource: Prometheus
    core-dns:
      gnetId: 12539
      revision: 5
      datasource: Prometheus
    fluentd:
      gnetId: 7752
      revision: 6
      datasource: Prometheus

ingress:
  apiVersion: networking.k8s.io/v1
  enabled: true
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: '/api/health'
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    # Redirect to HTTPS at the ALB
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
  spec:
    rules:
      - http:
          paths:
            - path: /*
              pathType: ImplementationSpecific
              backend:
                service:
                  name: ssl-redirect
                  port:
                    name: use-annotation
    defaultBackend:
      service:
        name: grafana
        port:
          number: 80

livenessProbe: { "httpGet": { "path": "/api/health", "port": 3000, "scheme": "HTTPS" }, "initialDelaySeconds": 60, "timeoutSeconds": 30, "failureThreshold": 10 }
readinessProbe: { "httpGet": { "path": "/api/health", "port": 3000, "scheme": "HTTPS" } }

service:
  type: NodePort
  name: grafana

rolePrefix: app-role
env: eks-test

serviceAccount:
  name: grafana
  annotations:
    eks.amazonaws.com/role-arn: ""

pod:
  spec:
    serviceAccountName: grafana

grafana.ini:
  server:
    # don't use enforce_domain - it causes an infinite redirect in our setup
    # enforce_domain: true
    enable_gzip: true
    # NOTE - if I set the protocol to http I do see it make changes to datasources but I can not see the website
    protocol: https
    cert_file: /biw-cert/domain.crt
    cert_key: /biw-cert/domain.key
  users:
    auto_assign_org_role: Editor
  # https://grafana.com/docs/grafana/v6.5/auth/gitlab/
  auth.gitlab:
    enabled: true
    allow_sign_up: true
    org_role: Editor
    scopes: read_api
    auth_url: https://gitlab.biw-services.com/oauth/authorize
    token_url: https://gitlab.biw-services.com/oauth/token
    api_url: https://gitlab.biw-services.com/api/v4
    allowed_groups: nackle-teams/devops

securityContext:
  fsGroup: 472
  runAsUser: 472
  runAsGroup: 472

extraConfigmapMounts:
  - name: "cert-configmap"
    mountPath: "/biw-cert"
    subPath: ""
    configMap: biw-grafana-cert
    readOnly: true
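One hedged pointer, not stated in the question above: the grafana-sc-datasources sidecar POSTs its reload request to http://localhost:3000 by default, while Grafana here serves HTTPS, which matches the 400 error exactly. Newer versions of the Grafana Helm chart expose the sidecar's reload URL and TLS verification as values; assuming the chart version in use supports them, something like this should make the sidecar speak HTTPS:

sidecar:
  skipTlsVerify: true   # the cert presumably doesn't cover "localhost"
  datasources:
    enabled: true
    label: grafana_datasource
    # assumption: this chart version supports reloadURL
    reloadURL: "https://localhost:3000/api/admin/provisioning/datasources/reload"

If the chart version doesn't expose these values, upgrading the chart (or setting the sidecar container's REQ_URL environment variable directly) would be the equivalent knob.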

Kubernetes rewrite-target paths for appending path to match

I'm using the OSS ingress-nginx Ingress controller and trying to create a rewrite-target rule such that I can prepend a path string to my matched string.
If I wanted to create a rewrite rule with a regex that matches /matched/path and rewrites it to /prefix/matched/path, how might I be able to do that?
I've tried something like the following, but it's no good, and I'm confused about the syntax of this ingress definition:
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - path: /(/prefix/)(/|$)(/matched/path)(.*)
    backend:
      serviceName: webapp1
If I wanted to create a rewrite rule with regex that matches /matched/path and rewrites that to /prefix/matched/path, how might I be able to do that?
In order to achieve this you have to add /prefix to your rewrite-target.
Here's a working example with ingress syntax from k8s v1.18:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress-v118
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /prefix/$1
spec:
  rules:
  - http:
      paths:
      - path: /(matched/path/?.*)
        backend:
          serviceName: test
          servicePort: 80
Since the syntax for the new Ingress changed in 1.19 (see the release notes and some small info at the end), I'm also placing an example with it:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress-v119
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /prefix/$1
spec:
  rules:
  - http:
      paths:
      - path: /(matched/path/?.*)
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
Here is a test with http echo server:
➜  ~ curl 172.17.0.4/matched/path
{
  "path": "/prefix/matched/path",
  "headers": {
    "host": "172.17.0.4",
    "x-request-id": "011585443ebc6adcf913db1c506abbe6",
    "x-real-ip": "172.17.0.1",
    "x-forwarded-for": "172.17.0.1",
    "x-forwarded-host": "172.17.0.4",
    "x-forwarded-port": "80",
    "x-forwarded-proto": "http",
    "x-scheme": "http",
    "user-agent": "curl/7.52.1",
    "accept": "*/*"
  },
This rule will also ignore the / at the end of the request:
➜  ~ curl 172.17.0.4/matched/path/
{
  "path": "/prefix/matched/path/",
  "headers": {
    "host": "172.17.0.4",
    "x-request-id": "0575e9022d814ba07457395f78dbe0fb",
    "x-real-ip": "172.17.0.1",
    "x-forwarded-for": "172.17.0.1",
    "x-forwarded-host": "172.17.0.4",
    "x-forwarded-port": "80",
    "x-forwarded-proto": "http",
    "x-scheme": "http",
    "user-agent": "curl/7.52.1",
    "accept": "*/*"
  },
It's worth mentioning some notable differences/changes in the new Ingress syntax:
- spec.backend -> spec.defaultBackend
- serviceName -> service.name
- servicePort -> service.port.name (for string values)
- servicePort -> service.port.number (for numeric values)
- pathType no longer has a default value in v1; "Exact", "Prefix", or "ImplementationSpecific" must be specified

Other Ingress API updates:
- backends can now be resource or service backends
- path is no longer required to be a valid regular expression (#89778, @cmluciano) [SIG API Machinery, Apps, CLI, Network and Testing]

Ambassador Rate limiting not working properly

I am trying to do rate-limiting with Ambassador, following this tutorial. I am using minikube and a local Docker image. I have tested that all the APIs respond correctly after deploying to Kubernetes; only the rate-limiting function isn't working.
Here is my deploy.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-deployment
spec:
  ports:
  - name: http
    port: 80
    targetPort: 3000
  - name: https
    port: 443
    targetPort: 3000
  selector:
    app: nodejs-deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-deployment
spec:
  selector:
    matchLabels:
      app: nodejs-deployment
  replicas: 2
  template:
    metadata:
      labels:
        app: nodejs-deployment
    spec:
      containers:
      - name: nodongo
        image: soham/nodejs-starter
        imagePullPolicy: "Never"
        ports:
        - containerPort: 3000
Here is my rate-limit.yaml
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: nodejs-backend
spec:
  prefix: /delete/
  service: nodejs-deployment
  labels:
    ambassador:
    - request_label_group:
      - delete
---
apiVersion: getambassador.io/v2
kind: RateLimit
metadata:
  name: backend-rate-limit
spec:
  domain: ambassador
  limits:
  - pattern: [{generic_key: delete}]
    rate: 1
    unit: minute
    injectResponseHeaders:
    - name: "x-test-1"
      value: "my-rl-test"
When I execute the command curl -vLk 10.107.60.125/delete/, it returns:
* Trying 10.107.60.125:80...
* TCP_NODELAY set
* Connected to 10.107.60.125 (10.107.60.125) port 80 (#0)
> GET /delete/ HTTP/1.1
> Host: 10.107.60.125
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: text/html; charset=utf-8
< Content-Length: 11
< ETag: W/"b-CgqQ9sWpkiO3HKmStsUvuC/rZLU"
< Date: Tue, 03 Nov 2020 17:13:00 GMT
< Connection: keep-alive
<
* Connection #0 to host 10.107.60.125 left intact
Delete User
The response I am getting is 200; however, I am expecting a 429 error code.
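A hedged observation, not from the original post: the curl above targets the ClusterIP of the nodejs-deployment Service directly (the response shows X-Powered-By: Express and no Envoy headers), so the request never passes through Ambassador and the Mapping/RateLimit can never apply. Separately, the RateLimit resource is only enforced when a rate limit service is running, which in the Ambassador ecosystem typically means Ambassador Edge Stack rather than the OSS API gateway. Assuming Ambassador was installed as a Service named ambassador (an assumption), the test should go through it instead:

# name of the Ambassador service is an assumption
AMBASSADOR_URL=$(minikube service ambassador --url | head -n1)
curl -v "$AMBASSADOR_URL/delete/"   # first request in the window: expect 200
curl -v "$AMBASSADOR_URL/delete/"   # second request within a minute: expect 429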

How to pass part of the request URI as a custom header in the Kubernetes ingress controller

I have the below configuration in ingress.yaml, which forwards requests with URIs like /default/demoservice/health or /custom/demoservice/health to the backend demoservice. I want to retrieve the first part of the URI (i.e. default or custom in the example above) and pass it as a custom header to the upstream.
I've deployed the ingress configmap with the custom header
X-MyVariable-Path: ${request_uri}
but this sends the full request URI. How can I split it?
- path: "/(.*?)/(demoservice.*)$"
  backend:
    serviceName: demoservice
    servicePort: 80
I have found a solution, tested it, and it works.
All you need is to add the following annotations to your ingress object:
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_set_header X-MyVariable-Path $1;
where $1 references whatever is captured in the first group of the regex in the path: field.
I've reproduced your scenario using the following yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-MyVariable-Path $1;
    nginx.ingress.kubernetes.io/use-regex: "true"
  name: foo-bar-ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: echo
          servicePort: 80
        path: /(.*?)/(demoservice.*)$
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: echo
  name: echo
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    run: echo
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: echo
  name: echo
spec:
  containers:
  - image: mendhak/http-https-echo
    imagePullPolicy: Always
    name: echo
You can test using curl:
curl -k https://<your_ip>/default/demoservice/healthz
Output:
{
  "path": "/default/demoservice/healthz",
  "headers": {
    "host": "192.168.39.129",
    "x-request-id": "dfcc67a80f5b02e6fe6c647c8bf8cdf0",
    "x-real-ip": "192.168.39.1",
    "x-forwarded-for": "192.168.39.1",
    "x-forwarded-host": "192.168.39.129",
    "x-forwarded-port": "443",
    "x-forwarded-proto": "https",
    "x-scheme": "https",
    "x-myvariable-path": "default",   # your variable here
    "user-agent": "curl/7.52.1",
    "accept": "*/*"
  },
  "method": "GET",
  "body": "",
  "fresh": false,
  "hostname": "192.168.39.129",
  "ip": "::ffff:172.17.0.4",
  "ips": [],
  "protocol": "http",
  "query": {},
  "subdomains": [],
  "xhr": false,
  "os": {
    "hostname": "echo"
  }
}
I hope it helps =)
I found two ways to achieve this.
One is by using a regular expression in your configmap to parse the first part of request_uri; a sketch follows below.
Before doing so, you need to add the nginx.ingress.kubernetes.io/use-regex: "true" annotation to your ingress, as it is set to false by default.
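A hedged sketch of that configmap approach (the map variable, the header ConfigMap name, and the namespace below are assumptions, not from the original answer): define an nginx map in the controller ConfigMap's http-snippet to capture the first path segment, then reference the resulting variable from a proxy-set-headers ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # must match the controller's --configmap flag
  namespace: ingress-nginx
data:
  http-snippet: |
    map $request_uri $request_uri_prefix {
      default "";
      "~^/(?<prefix>[^/]+)/demoservice" $prefix;
    }
  proxy-set-headers: "ingress-nginx/custom-headers"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  X-MyVariable-Path: $request_uri_prefix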
Another approach is defining that particular header in the annotation itself, adding something like $1; below is an example:
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "X-MyVariable-Path: $1";
I'm not entirely sure that would work for you, but I got this logic from the rewrite example here.
Also, adding the path as /customheader(/|$)(.*) will create a capture group.
Hope it works and is helpful.

Istio load balancing of a single service with multiple versions

I was able to achieve load balancing with the sample Istio applications:
https://github.com/piomin/sample-istio-services
https://istio.io/docs/guides/bookinfo/
But I was not able to get Istio load balancing working with a single private service that has two versions, for example two Consul servers with different versions.
Service and pod definition:
apiVersion: v1
kind: Service
metadata:
  name: consul-test
  labels:
    app: test
spec:
  ports:
  - port: 8500
    name: http
  selector:
    app: test
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: consul-test-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
        version: v1
    spec:
      containers:
      - name: consul-test-v1
        image: consul:latest
        ports:
        - containerPort: 8500
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: consul-test-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
        version: v2
    spec:
      containers:
      - name: consul-test-v2
        image: consul:1.1.0
        ports:
        - containerPort: 8500
Gateway definition:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: con-gateway
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        exact: /catalog
    route:
    - destination:
        host: consul-test
        port:
          number: 8500
Routing rules in virtual service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: consul-test
spec:
  hosts:
  - consul-test
  gateways:
  - con-gateway
  - mesh
  http:
  - route:
    - destination:
        host: consul-test
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: consul-test
spec:
  host: consul-test
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
Though I route all traffic (HTTP requests) to Consul server version v1, my HTTP requests to consul-test land on v1 and v2 alternately, i.e. they follow a round-robin rule.
$ kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
consul-test   ClusterIP   10.97.200.140   <none>        8500/TCP   9m
$ curl -L http://10.97.200.140:8500/v1/catalog/nodes
[
  {
    "ID": "ebfa341b-4557-a392-9f8a-8ee307113faa",
    "Node": "consul-test-v1-765dd566dd-6cmj9",
    "Address": "127.0.0.1",
    "Datacenter": "dc1",
    "TaggedAddresses": {
      "lan": "127.0.0.1",
      "wan": "127.0.0.1"
    },
    "Meta": {
      "consul-network-segment": ""
    },
    "CreateIndex": 9,
    "ModifyIndex": 10
  }
]
$ curl -L http://10.97.200.140:8500/v1/catalog/nodes
[
  {
    "ID": "1b60a5bd-9a17-ff18-3a65-0ff95b3a836a",
    "Node": "consul-test-v2-fffd475bc-st4mv",
    "Address": "127.0.0.1",
    "Datacenter": "dc1",
    "TaggedAddresses": {
      "lan": "127.0.0.1",
      "wan": "127.0.0.1"
    },
    "Meta": {
      "consul-network-segment": ""
    },
    "CreateIndex": 5,
    "ModifyIndex": 6
  }
]
I have the above-mentioned issue when curl is done against the service ClusterIP:ClusterPort:
$ kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
consul-test   ClusterIP   10.97.200.140   <none>        8500/TCP   9m
$ curl -L http://10.97.200.140:8500/v1/catalog/nodes
But load balancing works as expected when curl is done against INGRESS_HOST and INGRESS_PORT (determining INGRESS_HOST and INGRESS_PORT is described here):
$ curl -L http://$INGRESS_HOST:$INGRESS_PORT/v1/catalog/nodes --- WORKS
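A hedged note, not part of the original post: Istio routing rules are enforced by Envoy proxies, i.e. by the calling pod's sidecar for in-mesh traffic, or by the ingress gateway for external traffic. A curl run from a node or from a pod without a sidecar reaches the ClusterIP through kube-proxy, which round-robins across both Deployments and never consults the VirtualService; that would explain why routing works through the gateway but not against the ClusterIP. (Also worth checking: the consul-test VirtualService lists con-gateway under gateways:, but con-gateway is itself a VirtualService; the Gateway resource is named http-gateway.) A quick way to verify, assuming a sidecar-injected pod named sleep exists in the same namespace (an assumption):

# run the request from inside the mesh so the sidecar applies the v1-only route
kubectl exec sleep -c sleep -- curl -s http://consul-test:8500/v1/catalog/nodes
# every response should now come from the consul-test-v1 pod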