How to debug QuotaSpecBinding for rate limits in Istio?

I am trying to enable rate limiting for my Istio-enabled service, but it doesn't work. How do I check whether my configuration is correct?
apiVersion: config.istio.io/v1alpha2
kind: memquota
metadata:
  name: handler
  namespace: istio-system
spec:
  quotas:
  - name: requestcount.quota.istio-system
    maxAmount: 5
    validDuration: 1s
    overrides:
    - dimensions:
        engine: myEngineValue
      maxAmount: 5
      validDuration: 1s
---
apiVersion: config.istio.io/v1alpha2
kind: quota
metadata:
  name: requestcount
  namespace: istio-system
spec:
  dimensions:
    source: request.headers["x-forwarded-for"] | "unknown"
    destination: destination.labels["app"] | destination.service | "unknown"
    destinationVersion: destination.labels["version"] | "unknown"
    engine: destination.labels["engine"] | "unknown"
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: requestcount
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  # - service: '*'  # I tried with this as well
  - name: my-service
    namespace: default
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
  - handler: handler.memquota
    instances:
    - requestcount.quota
I tried with - service: '*' in the QuotaSpecBinding as well, but no luck.
How do I confirm that my configuration is correct? my-service is the Kubernetes Service for my deployment. (Does this have to be an Istio VirtualService for rate limits to work? Edit: yes, it has to!)
I followed this doc, except for the VirtualService part.
I have a feeling I am making a mistake somewhere in the namespaces.

You have to define a VirtualService for the service my-service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
spec:
  hosts:
  - myservice
  http:
  - route:
    - destination:
        host: myservice
This way, you let Istio know which host/service you are referring to.
In terms of debugging, there is a project named Kiali that aims to improve observability in Istio environments, and it provides validations for some Istio and Kubernetes objects: see its Istio configuration browsing features.
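If you want to check what actually got applied, a minimal debugging sketch (assuming a default Istio 1.x install, where the policy Mixer runs as the istio-policy deployment in istio-system; adjust names if yours differ) could be:
# Confirm the quota-related objects were accepted and stored as you expect
kubectl -n istio-system get memquotas.config.istio.io,quotas.config.istio.io,quotaspecs.config.istio.io,quotaspecbindings.config.istio.io,rules.config.istio.io
kubectl -n istio-system get quotaspecbinding request-count -o yaml
# Watch the policy Mixer for quota/dispatch errors while sending traffic
# (deployment and container names below are the defaults; yours may differ)
kubectl -n istio-system logs deploy/istio-policy -c mixer --tail=100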

Related

Mirror traffic using Traefik 2

We are using Traefik v2 running in kubernetes in a shared namespace (called shared), with multiple namespaces for different projects/services. We are utilising the IngressRoute CRD along with middlewares.
We need to mirror (duplicate) all incoming traffic to a specific URL (blah.example.com/newservice) and forward it to 2 backend services in 2 different namespaces. Because they are split across 2 namespaces, they run under the same name, with the same port.
I've looked at the following link, but don't seem to understand it:
https://doc.traefik.io/traefik/v2.3/routing/providers/kubernetes-crd/#mirroring
This is my config:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: shared-ingressroute
  namespace: shared
spec:
  entryPoints: []
  routes:
  - kind: Rule
    match: Host(`blah.example.com`) && PathPrefix(`/newservice/`)
    middlewares:
    - name: shared-middleware-testing-middleware
      namespace: shared
    priority: 0
    services:
    - kind: Service
      name: customer-mirror
      namespace: namespace1
      port: TraefikService
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: shared-middleware-testing-middleware
  namespace: shared
spec:
  stripPrefix:
    prefixes:
    - /newservice/
---
apiVersion: traefik.containo.us/v1alpha1
kind: TraefikService
metadata:
  name: customer-mirror
  namespace: namespace1
spec:
  mirroring:
    name: newservice
    port: 8011
    namespace: namespace1
    mirrors:
    - name: newservice
      port: 8011
      percent: 100
      namespace: namespace2
What am I doing wrong?
Based on the docs, for your case the kind should be TraefikService:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: shared-ingressroute
  namespace: shared
spec:
  entryPoints: []
  routes:
  - kind: Rule
    match: Host(`blah.example.com`) && PathPrefix(`/newservice/`)
    middlewares:
    - name: shared-middleware-testing-middleware
      namespace: shared
    services:
    - kind: TraefikService
      name: customer-mirror
      namespace: namespace1
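To sanity-check the result, a rough verification sketch (the deployment name newservice in namespace2 and the test path are assumptions on my part; substitute your own):
# Confirm Traefik accepted the route and can resolve the TraefikService
kubectl -n shared describe ingressroute shared-ingressroute
kubectl -n namespace1 get traefikservice customer-mirror -o yaml
# Send a request through the router and watch the mirror receive a copy
curl -s http://blah.example.com/newservice/some-path
kubectl -n namespace2 logs deploy/newservice --tail=20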

Kubernetes API to create a CRD using Minikube, with deployment pod in pending state

I have a problem with the Kubernetes API and a CRD. I am creating a deployment with a single nginx pod, which I would like to access on port 80 from a remote server as well as locally. The pod sits in a Pending state; when I run kubectl get pods, after around 40 seconds on average the pod disappears and a pod with a different nginx name starts up. This seems to happen in a loop.
The error is
* W1214 23:27:19.542477 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
I was following this article about service accounts and roles,
https://thorsten-hans.com/custom-resource-definitions-with-rbac-for-serviceaccounts#create-the-clusterrolebinding
I am not even sure I have created this correctly.
Do I even need to create the ServiceAccount_v1.yaml, PolicyRule_v1.yaml and ClusterRoleBinding.yaml files to resolve my error above?
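For reference, the rolebinding the error message suggests would look roughly like this (the binding name and the default:default service account are placeholders; substitute whatever service account your API client actually uses):
kubectl create rolebinding extension-apiserver-authentication-reader-binding \
  -n kube-system \
  --role=extension-apiserver-authentication-reader \
  --serviceaccount=default:default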
All of my .yaml files for this are below.
CustomResourceDefinition_v1.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: webservers.stable.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  names:
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: WebServer
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: webservers
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - ws
    # singular name to be used as an alias on the CLI and for display
    singular: webserver
  # either Namespaced or Cluster
  scope: Cluster
  # list of versions supported by this CustomResourceDefinition
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
              replicas:
                type: integer
    # Each version can be enabled/disabled by Served flag.
    served: true
    # One and only one version must be marked as the storage version.
    storage: true
Deployments_v1_apps.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: nginx-deployment
spec:
  # 1 Pods should exist at all times.
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 100
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: nginx
    spec:
      containers:
      # Run this image
      - image: nginx:1.14
        name: nginx
        ports:
        - containerPort: 80
      hostname: nginx
      nodeName: webserver01
      securityContext:
        runAsNonRoot: True
#status:
#  availableReplicas: 1
Ingress_v1_networking.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Exact
        backend:
          resource:
            kind: nginx-service
            name: nginx-deployment
          #service:
          #  name: nginx
          #  port: 80
          #serviceName: nginx
          #servicePort: 80
Service_v1_core.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
ServiceAccount_v1.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: user
  namespace: example
PolicyRule_v1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: "example.com:webservers:reader"
rules:
- apiGroups: ["example.com"]
  resources: ["ResourceAll"]
  verbs: ["VerbAll"]
ClusterRoleBinding_v1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: "example.com:webservers:cdreader-read"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: "example.com:webservers:reader"
subjects:
- kind: ServiceAccount
  name: user
  namespace: example
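To see why the pod stays Pending and keeps being replaced, the checks I have been running look roughly like this (the app=nginx label comes from the Deployment above):
kubectl get pods -l app=nginx -w
kubectl describe pod -l app=nginx
kubectl get events --sort-by=.metadata.creationTimestamp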

Rate limits not limiting anything

I'm trying to use Istio rate limits to limit access to the service hello (1 call per second max).
I used the template from the Bookinfo demo application.
This is the configuration I've got so far:
Handler
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: quotahandler
  namespace: istio-system
spec:
  compiledAdapter: memquota
  params:
    quotas:
    - name: requestcountquota.instance.istio-system
      maxAmount: 1
      validDuration: 1s
Instance
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: requestcountquota
  namespace: istio-system
spec:
  compiledTemplate: quota
  params:
    dimensions:
      source: request.headers["x-forwarded-for"] | "unknown"
      destination: destination.labels["app"] | "unknown"
QuotaSpec
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: requestcountquota
QuotaSpecBinding
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  - name: hello
    namespace: default
Rule
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
  - handler: quotahandler
    instances:
    - requestcountquota
Needless to say, curling the service's IP still works even when it's well over 1 request per second and the limit should be active...
FYI, I used the service IP / VirtualService (+ Gateway).
Also, I'm using the in-memory (memquota) version and not the Redis version.
Any help in understanding where the error is would be gladly appreciated!
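One thing I still need to verify is whether policy enforcement is enabled at all, since memquota checks go through the policy Mixer; per the Istio docs this can be checked with something like:
kubectl -n istio-system get cm istio -o jsonpath="{@.data.mesh}" | grep disablePolicyChecks
# This should report disablePolicyChecks: false for quota checks to be enforced.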

Why is Istio rate limiting working incorrectly?

I configured rate limiting according to the Istio tutorial and it worked correctly. But when I changed the limit, the rate limiting did not seem to change. Here are all my configuration files. I hope you can give me some help. Thank you very much.
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: quotahandler
  namespace: istio-system
spec:
  compiledAdapter: memquota
  params:
    quotas:
    - name: requestcountquota.instance.istio-system
      maxAmount: 500
      validDuration: 1s
      overrides:
      - dimensions:
          destination: productpage
          source: "10.28.11.20"
        maxAmount: 500
        validDuration: 1s
      - dimensions:
          destination: productpage
        maxAmount: 500  # Here I increased the number of requests per second.
        validDuration: 1s
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: requestcountquota
  namespace: istio-system
spec:
  compiledTemplate: quota
  params:
    dimensions:
      source: request.headers["x-forwarded-for"] | "unknown"
      destination: destination.labels["app"] | destination.service.name | "unknown"
      destinationVersion: destination.labels["version"] | "unknown"
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: requestcountquota
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  - name: productpage
    namespace: default
  # - service: '*'  # Uncomment this to bind *all* services to request-count
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  # quota only applies if you are not logged in.
  # match: match(request.headers["cookie"], "user=*") == false
  actions:
  - handler: quotahandler
    instances:
    - requestcountquota
At first, I configured it like this:
- dimensions:
    destination: productpage
  maxAmount: 1
  validDuration: 5s
Rate limiting worked well.
Then I configured:
- dimensions:
    destination: productpage
  maxAmount: 500
  validDuration: 1s
Requests still return 429 after a short while, even though with this configuration (500/s) they should effectively be unrestricted. During the test I hit the k8s productpage service IP directly, e.g. curl 10.233.5.240:9080/productpage.
I hope you can tell me why. Thank you very much for your answer.
I've solved this problem: the istio-policy component had insufficient resources (CPU and memory); it is allocated too little by default, which made the policy checks ineffective. But I don't know why. Please explain what memquota or redisquota actually stores. Aren't quota configurations all part of the Envoy configuration?
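For reference, I bumped the limits roughly like this (the deployment name istio-policy, the container name mixer, and the app=policy label are the defaults in my install; adjust the numbers and names to your environment):
kubectl -n istio-system set resources deployment istio-policy -c mixer \
  --requests=cpu=500m,memory=512Mi --limits=cpu=1,memory=1Gi
# Then keep an eye on actual usage while load testing
kubectl -n istio-system top pod -l app=policy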

How to allow non-HTTP out of an istio cluster/pod

I've installed Istio v1.0.5 in a k8s cluster (1 master, 2 worker nodes) and have deployed an application that accepts HTTP from clients into a service, and this service then needs to communicate out of the cluster. I did not use Helm to install Istio, and the material I've read has a lot of Helm examples for updating the init container config to include the cluster IP CIDR.
From my understanding, this is still an ongoing discussion with the devs, and the best way to solve this issue is to annotate the deployment with the following:
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: home-devices-deployment
  namespace: home-devices-app
  labels:
    app: home-devices-app
  annotations:
    traffic.sidecar.istio.io/includeOutboundIPRanges: "10.244.0.0/16"
I put in my cluster IP CIDR, but it still doesn't allow the container to connect to an external system via SSH (TCP 22).
ubuntu@k8s-master:~/applications$ kubectl cluster-info dump | grep -i cidr
"podCIDR": "10.244.0.0/24",
"podCIDR": "10.244.1.0/24"
"podCIDR": "10.244.2.0/24"
"--allocate-node-cidrs=true",
"--cluster-cidr=10.244.0.0/16",
"--node-cidr-mask-size=24",
Any help is appreciated.
--update--
I tried ServiceEntrys, but am still not successful. Please remember this is a container that is SSH'ing externally.
ubuntu@k8s-master:~/applications$ kubectl get serviceentry -n home-devices-app -o yaml
apiVersion: v1
items:
- apiVersion: networking.istio.io/v1alpha3
  kind: ServiceEntry
  metadata:
    creationTimestamp: "2019-01-10T02:45:27Z"
    generation: 1
    name: ex-ssh-service-entry
    namespace: home-devices-app
    resourceVersion: "1432196"
    selfLink: /apis/networking.istio.io/v1alpha3/namespaces/home-devices-app/serviceentries/ex-ssh-service-entry
    uid: c9b22284-1481-11e9-ad97-000c297d3726
  spec:
    addresses:
    - 10.10.10.5
    hosts:
    - '*.ca'
    location: MESH_EXTERNAL
    ports:
    - name: ssh
      number: 22
      protocol: TCP
    resolution: NONE
- apiVersion: networking.istio.io/v1alpha3
  kind: ServiceEntry
  metadata:
    creationTimestamp: "2019-01-10T02:45:27Z"
    generation: 1
    name: srx-ssh-service-entry
    namespace: home-devices-app
    resourceVersion: "1432197"
    selfLink: /apis/networking.istio.io/v1alpha3/namespaces/home-devices-app/serviceentries/srx-ssh-service-entry
    uid: c9b3b586-1481-11e9-ad97-000c297d3726
  spec:
    addresses:
    - 10.10.10.6
    hosts:
    - '*.ca'
    location: MESH_EXTERNAL
    ports:
    - name: ssh
      number: 22
      protocol: TCP
    resolution: NONE
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Try adding a ServiceEntry like the one below. It worked for me.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: ext-svcentry
spec:
  hosts:
  - "*.com"
  location: MESH_EXTERNAL
  addresses:
  - 11.22.33.44
  ports:
  - number: 8080
    name: http
    protocol: TCP
  resolution: NONE
resolution: NONE
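To check whether the entry takes effect, a quick sketch (the pod and container names are placeholders for your application pod, and this assumes nc is available in the image):
# See whether the sidecar now has an outbound listener for the external port
istioctl proxy-config listeners <your-app-pod>.<your-namespace>
# Try the TCP connection from inside the app container
kubectl -n <your-namespace> exec -it <your-app-pod> -c <app-container> -- nc -vz 11.22.33.44 8080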