Role of labels in Istio's DestinationRule - Kubernetes

I am going through the traffic management section of Istio's documentation.
One of the DestinationRule examples configures several service subsets:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-destination-rule
spec:
  host: my-svc
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
  - name: v3
    labels:
      version: v3
My question (since it is not clear in the documentation) is about the role of spec.subsets[].labels.
Do these labels refer to:
labels in the corresponding k8s Deployment ?
or
labels in the pods of the Deployment?
Where exactly (in terms of k8s manifests) do the above labels reside?

Istio follows the Kubernetes labeling paradigm used to identify resources within the cluster.
Since this particular DestinationRule determines, at the network level, which backends serve requests, it targets the Pods behind the Deployment rather than the Deployment itself (which is an abstract resource without any network identity).
A good example of this is in the Istio sample application repository:
The Deployment object itself doesn't carry a version: v1 label, but the Pods it manages (via its Pod template) do:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcp-echo
      version: v1
  template:
    metadata:
      labels:
        app: tcp-echo
        version: v1
    spec:
      containers:
      - name: tcp-echo
        image: docker.io/istio/tcp-echo-server:1.1
        imagePullPolicy: IfNotPresent
        args: [ "9000", "hello" ]
        ports:
        - containerPort: 9000
And the DestinationRule selects those Pods by their version label:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tcp-echo-destination
spec:
  host: tcp-echo
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2


Right namespace for VirtualService and DestinationRule Istio resources

I am trying to understand the VirtualService and DestinationRule resources in relation to the namespace in which they should be defined, and whether they are really namespaced resources or can also be considered cluster-wide resources.
I have the following scenario:
The frontend service (web-frontend) access the backend service (customers).
The frontend service is deployed in the frontend namespace
The backend service (customers) is deployed in the backend namespace
There are 2 versions of the backend service customers (2 deployments), one related to the version v1 and one related to the version v2.
The default behavior of the ClusterIP service is to load-balance requests between the 2 deployments (v1 and v2), and my goal is, by creating a DestinationRule and a VirtualService, to direct traffic only to the deployment version v1.
What I want to understand is which is the appropriate namespace to define such DestinationRule and a VirtualService resources. Should I create the necessary DestinationRule and VirtualService resources in the frontend namespace or in the backend namespace?
In the frontend namespace I have the web-frontend deployment and the related service as follows:
apiVersion: v1
kind: Namespace
metadata:
  name: frontend
  labels:
    istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
  namespace: frontend
  labels:
    app: web-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
        version: v1
    spec:
      containers:
      - image: gcr.io/tetratelabs/web-frontend:1.0.0
        imagePullPolicy: Always
        name: web
        ports:
        - containerPort: 8080
        env:
        - name: CUSTOMER_SERVICE_URL
          value: 'http://customers.backend.svc.cluster.local'
---
kind: Service
apiVersion: v1
metadata:
  name: web-frontend
  namespace: frontend
  labels:
    app: web-frontend
spec:
  selector:
    app: web-frontend
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: 8080
I have exposed the web-frontend service by defining the following Gateway and VirtualService resources:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway-all-hosts
  # namespace: default # Also working
  namespace: frontend
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web-frontend
  # namespace: default # Also working
  namespace: frontend
spec:
  hosts:
  - "*"
  gateways:
  - gateway-all-hosts
  http:
  - route:
    - destination:
        host: web-frontend.frontend.svc.cluster.local
        port:
          number: 80
In the backend namespace I have the customers v1 and v2 deployments and the related service as follows:
apiVersion: v1
kind: Namespace
metadata:
  name: backend
  labels:
    istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v1
  namespace: backend
  labels:
    app: customers
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v1
  template:
    metadata:
      labels:
        app: customers
        version: v1
    spec:
      containers:
      - image: gcr.io/tetratelabs/customers:1.0.0
        imagePullPolicy: Always
        name: svc
        ports:
        - containerPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v2
  namespace: backend
  labels:
    app: customers
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v2
  template:
    metadata:
      labels:
        app: customers
        version: v2
    spec:
      containers:
      - image: gcr.io/tetratelabs/customers:2.0.0
        imagePullPolicy: Always
        name: svc
        ports:
        - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: customers
  namespace: backend
  labels:
    app: customers
spec:
  selector:
    app: customers
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: 3000
I have created the following DestinationRule and VirtualService resources to send the traffic only to the v1 deployment.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: customers
  #namespace: default  # Not working
  #namespace: frontend # working
  namespace: backend   # working
spec:
  host: customers.backend.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customers
  #namespace: default  # Not working
  #namespace: frontend # working
  namespace: backend   # working
spec:
  hosts:
  - "customers.backend.svc.cluster.local"
  http:
  ## route - subset: v1
  - route:
    - destination:
        host: customers.backend.svc.cluster.local
        port:
          number: 80
        subset: v1
The question is: which is the appropriate namespace in which to define the VirtualService and DestinationRule resources for the customers service?
From my tests I see that I can use either the frontend namespace or the backend namespace. Why can the VirtualService and DestinationRule be created in either the frontend or the backend namespace and work in both cases? Which is the correct one?
Are the DestinationRule and VirtualService resources really namespaced resources, or can they be considered cluster-wide resources?
Are the low-level routing rules propagated to all Envoy proxies regardless of namespace?
For a DestinationRule to actually be applied during a request, it needs to be on the destination rule lookup path:
-> client namespace
-> service namespace
-> the configured meshconfig.rootNamespace namespace (istio-system by default)
In your example, the "web-frontend" client is in the frontend Namespace (web-frontend.frontend.svc.cluster.local), the "customers" service is in the backend Namespace (customers.backend.svc.cluster.local), so the customers DestinationRule should be created in one of the following Namespaces: frontend, backend or istio-system. Additionally, please note that the istio-system Namespace isn't recommended unless the destination rule is really a global configuration that is applicable in all Namespaces.
To make sure that the destination rule will be applied we can use the istioctl proxy-config cluster command for the web-frontend Pod:
$ istioctl proxy-config cluster web-frontend-69d6c79786-vkdv8 -n frontend | grep "customers.backend.svc.cluster.local"
SERVICE FQDN                            PORT     SUBSET     DESTINATION RULE
customers.backend.svc.cluster.local     80       -          customers.frontend
customers.backend.svc.cluster.local     80       v1         customers.frontend
customers.backend.svc.cluster.local     80       v2         customers.frontend
When the destination rule is created in the default Namespace, it will not be applied during the request:
$ istioctl proxy-config cluster web-frontend-69d6c79786-vkdv8 -n frontend | grep "customers.backend.svc.cluster.local"
SERVICE FQDN                            PORT     SUBSET     DESTINATION RULE
customers.backend.svc.cluster.local     80       -
For more information, see the Control configuration sharing in namespaces documentation.
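If you want to be explicit about where a DestinationRule (or VirtualService) is visible, instead of relying on the default mesh-wide export, that documentation describes the exportTo field. A hedged sketch follows; note that, depending on your Istio version, exportTo may only accept "." and "*" rather than explicit namespace names:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: customers
  namespace: backend
spec:
  host: customers.backend.svc.cluster.local
  # Limit visibility of this rule; without exportTo it is exported mesh-wide ("*").
  exportTo:
  - "."          # the rule's own namespace (backend)
  - "frontend"   # the client namespace (requires a version that supports explicit namespace names)
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2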

Add header with EnvoyFilter does not work

I am testing Istio 1.10.3 on minikube to add headers, but I am not able to do so.
Istio is installed in the istio-system namespace.
The namespace where the deployment is deployed is labeled with istio-injection=enabled.
In the config_dump I can see the Lua code only when the context is set to ANY. When I set it to SIDECAR_OUTBOUND the code is not listed:
"name": "envoy.lua",
"typed_config": {
   "@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua",
   "inline_code": "function envoy_on_request(request_handle)\n request_handle:headers():add(\"request-body-size\", request_handle:body():length())\nend\n\nfunction envoy_on_response(response_handle)\n response_handle:headers():add(\"response-body-size\", response_handle:body():length())\nend\n"
}
Can someone give me some tips?
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: headers-envoy-filter
  namespace: nginx-echo-headers
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_OUTBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          '@type': type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inline_code: |
            function envoy_on_request(request_handle)
              request_handle:headers():add("request-body-size", request_handle:body():length())
            end
            function envoy_on_response(response_handle)
              response_handle:headers():add("response-body-size", response_handle:body():length())
            end
  workloadSelector:
    labels:
      app: nginx-echo-headers
      version: v1
Below are my deployment and Istio configs:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-echo-headers-v1
  namespace: nginx-echo-headers
  labels:
    version: v1
spec:
  selector:
    matchLabels:
      app: nginx-echo-headers
      version: v1
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-echo-headers
        version: v1
    spec:
      containers:
      - name: nginx-echo-headers
        image: brndnmtthws/nginx-echo-headers:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-echo-headers-svc
  namespace: nginx-echo-headers
  labels:
    version: v1
    service: nginx-echo-headers-svc
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: nginx-echo-headers
    version: v1
---
# ISTIO GATEWAY
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: nginx-echo-headers-gateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "api.decchi.com.ar"
# ISTIO VIRTUAL SERVICE
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx-echo-headers-virtual-service
  namespace: nginx-echo-headers
spec:
  hosts:
  - 'api.decchi.com.ar'
  gateways:
  - istio-system/nginx-echo-headers-gateway
  http:
  - route:
    - destination:
        # k8s service name
        host: nginx-echo-headers-svc
        port:
          # Service's port
          number: 80
        # workload selector
        subset: v1
## ISTIO DESTINATION RULE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginx-echo-headers-dest
  namespace: nginx-echo-headers
spec:
  host: nginx-echo-headers-svc
  subsets:
  - name: "v1"
    labels:
      app: nginx-echo-headers
      version: v1
It only works when I configure the context as GATEWAY. In that case the EnvoyFilter runs in the istio-system namespace and the workloadSelector is configured like this:
workloadSelector:
  labels:
    istio: ingressgateway
But my idea is to configure it in SIDECAR_OUTBOUND.
it is only working when I configure the context in GATEWAY, the envoyFilter is running in the istio-system namespace
That's correct! You should apply your EnvoyFilter in the config root namespace, istio-system in your case.
And the most important part: just omit the context field when matching your configPatches, so that the patch applies to both sidecars and gateways. You can see usage examples in the Istio docs.
Here is an example I managed to come up with:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-x-cluster-client-ip-header
  namespace: istio-system
spec:
  configPatches:
  - applyTo: ROUTE_CONFIGURATION
    match:
      context: SIDECAR_INBOUND
    patch:
      operation: MERGE
      value:
        request_headers_to_add:
        - header:
            key: 'x-cluster-client-ip'
            value: '%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%'
          append: false
        # the following is used to debug
        response_headers_to_add:
        - header:
            key: 'x-cluster-client-ip'
            value: '%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%'
          append: false
https://gist.github.com/qudongfang/75cf0230c0b2291006f72cd23d45f297
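Applying the same advice to the Lua filter from the question would look roughly like this. It is only a sketch along the lines of the answer (filter moved to the config root namespace, context match removed, workloadSelector kept so it only attaches to the nginx-echo-headers proxies), not a verified configuration:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: headers-envoy-filter
  namespace: istio-system   # config root namespace
spec:
  workloadSelector:
    labels:
      app: nginx-echo-headers
      version: v1
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      # no context field, so the patch can match sidecars as well as gateways
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          '@type': type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inline_code: |
            function envoy_on_request(request_handle)
              request_handle:headers():add("request-body-size", request_handle:body():length())
            end
            function envoy_on_response(response_handle)
              response_handle:headers():add("response-body-size", response_handle:body():length())
            end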

Using InfluxDB for Horizontal Pod Autoscaling with Custom Metrics

I have the TICK stack deployed in my Kubernetes cluster for monitoring purposes. My application pushes its custom data to it.
I have tried horizontal pod autoscaling using custom metrics with the help of the Prometheus adapter. I was curious if there is such an adapter for InfluxDB as well?
The popular Kubernetes custom metrics adapters do not include one for InfluxDB. Is there a way I can use my current infrastructure (containing InfluxDB) to autoscale pods using custom metrics from my application?
Why not use custom metrics from Prometheus with influxdb-exporter? I don't see why it should not work.
It is possible to use InfluxDB with Heapster; below are some files that I set up to make this easy.
First, apply influxdb.yaml.
Second, apply heapster-rbac.yaml.
Third, apply heapster.yaml.
**influxdb.yaml**
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: k8s.gcr.io/heapster-influxdb-amd64:v1.5.2
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
**heapster-rbac.yaml**
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
**heapster.yaml**
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: k8s.gcr.io/heapster-amd64:v1.5.4
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
InfluxDB can be used with Heapster (as pointed out by @LucasSales), but Heapster is deprecated in current versions of Kubernetes.
For recent versions of Kubernetes we have the metrics server for basic CPU/memory metrics. Prometheus is the accepted third-party monitoring tool, especially for things like custom metrics.
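For completeness, if the application metrics do end up in Prometheus (for example via an exporter) and the Prometheus adapter is installed, an HPA can consume them as Pods-type custom metrics. A rough sketch, where the Deployment name my-app and the http_requests_per_second metric are hypothetical:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                          # hypothetical Deployment name
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second    # hypothetical custom metric exposed via the Prometheus adapter
      target:
        type: AverageValue
        averageValue: "100"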

Autoscaling a Google Cloud Endpoints backend deployment declaratively (in the YAML)?

I have successfully followed the documentation here and here to deploy an API spec and GKE backend to Cloud Endpoints.
This has left me with a deployment.yaml that looks like this:
apiVersion: v1
kind: Service
metadata:
  name: esp-myproject
spec:
  ports:
  - port: 80
    targetPort: 8081
    protocol: TCP
    name: http
  selector:
    app: esp-myproject
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: esp-myproject
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: esp-myproject
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http_port=8081",
          "--backend=127.0.0.1:8080",
          "--service=myproject1-0-0.endpoints.myproject.cloud.goog",
          "--rollout_strategy=managed",
        ]
        ports:
        - containerPort: 8081
      - name: myproject
        image: gcr.io/myproject/my-image:v0.0.1
        ports:
        - containerPort: 8080
This creates a single replica of the app on the backend. So far, so good...
I now want to update the yaml file to declaratively specify auto-scaling parameters to enable multiple replicas of the app to run alongside each other when traffic to the endpoint justifies more than one.
I have read around (O'Reilly book: Kubernetes Up & Running, GCP docs, K8s docs), but there are two things on which I'm stumped:
I've read a number of times about the HorizontalPodAutoscaler and it's not clear to me whether the deployment must make use of this in order to enjoy the benefits of autoscaling?
If so, I have seen examples in the docs of how to define the spec for the HorizontalPodAutoscaler in yaml as shown below - but how would I combine this with my existing deployment.yaml?
HorizontalPodAutoscaler example (from the docs):
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
Thanks in advance to anyone who can shed some light on this for me.
I've read a number of times about the HorizontalPodAutoscaler and it's not clear to me whether the deployment must make use of this in order to enjoy the benefits of autoscaling?
Doesn't have to, but it's recommended and it's already built in. You can build your own automation that scales up and down but the question is why since it's already supported with the HPA.
If so, I have seen examples in the docs of how to define the spec for the HorizontalPodAutoscaler in yaml as shown below - but how would I combine this with my existing deployment.yaml?
It should be straightforward. You basically reference your deployment in the HPA definition:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-esp-project-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: esp-myproject   # <== here
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
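The HPA can live in the same file as the Deployment and Service (separated by ---) or in its own file; after applying it you can watch it pick up the target. For example, assuming the manifest above is saved as hpa.yaml:
$ kubectl apply -f hpa.yaml
$ kubectl get hpa my-esp-project-hpa --watch
Note that a CPU Utilization target only works if the containers in the Deployment declare CPU resource requests.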
I faced the same issue; here is what worked for me.
If you are on GKE and the only enabled autoscaling APIs are autoscaling/v1 and autoscaling/v2beta1 (which is the case for GKE versions around 1.12 to 1.14), you won't be able to apply an autoscaling/v2beta2 manifest. However, you can achieve the same thing with something like:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: core-deployment
  namespace: default
spec:
  maxReplicas: 9
  minReplicas: 5
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: core-deployment
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageValue: 500m
If you want to scale based on utilization instead:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: core-deployment
  namespace: default
spec:
  maxReplicas: 9
  minReplicas: 5
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: core-deployment
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80
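Before choosing a manifest version, you can check which autoscaling API versions your cluster actually serves; on such a GKE cluster you would typically see something like:
$ kubectl api-versions | grep autoscaling
autoscaling/v1
autoscaling/v2beta1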

How to debug QuotaSpecBinding for rate limits in Istio?

I am trying to enable rate limiting for my Istio-enabled service, but it doesn't work. How do I check whether my configuration is correct?
apiVersion: config.istio.io/v1alpha2
kind: memquota
metadata:
  name: handler
  namespace: istio-system
spec:
  quotas:
  - name: requestcount.quota.istio-system
    maxAmount: 5
    validDuration: 1s
    overrides:
    - dimensions:
        engine: myEngineValue
      maxAmount: 5
      validDuration: 1s
---
apiVersion: config.istio.io/v1alpha2
kind: quota
metadata:
  name: requestcount
  namespace: istio-system
spec:
  dimensions:
    source: request.headers["x-forwarded-for"] | "unknown"
    destination: destination.labels["app"] | destination.service | "unknown"
    destinationVersion: destination.labels["version"] | "unknown"
    engine: destination.labels["engine"] | "unknown"
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: requestcount
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  # - service: '*' ; I tried with this as well
  - name: my-service
    namespace: default
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
  - handler: handler.memquota
    instances:
    - requestcount.quota
I tried with - service: '*' as well in the QuotaSpecBinding, but no luck.
How do I confirm whether my configuration is correct? my-service is the Kubernetes Service for my deployment. (Does this have to be an Istio VirtualService for rate limits to work? Edit: Yes, it does!)
I followed this doc except the VirtualService part.
I have a feeling I am making a mistake somewhere with the namespaces.
You have to define a VirtualService for the my-service service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
spec:
  hosts:
  - myservice
  http:
  - route:
    - destination:
        host: myservice
This way, you let Istio know which host/service you are referring to.
In terms of debugging, there is a project named Kiali that aims to improve observability in Istio environments. It provides validations for some Istio and Kubernetes objects: see its Istio configuration browsing feature.
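As a rough sanity check once the VirtualService is in place, you can send a burst of requests and look for HTTP 429 responses once the quota (5 requests per second in the memquota handler above) is exceeded. A sketch, assuming my-service listens on port 80 and you run it from a pod inside the mesh:
$ for i in $(seq 1 20); do curl -s -o /dev/null -w "%{http_code}\n" http://my-service.default.svc.cluster.local/; done
If the rate limit is being enforced, the later requests in the burst should return 429 instead of 200.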