Istio rate limit with redisquota not taken into account - kubernetes

I'm trying to use rate limiting with Istio (I've already done it with Envoy, but the project manager wants me to try it that way). I based my config on the Istio tutorial. I tried a few different things but can't make it work, and I don't even know how to debug this. Kiali doesn't give any useful information about quotas, rules, etc. My goal is to limit traffic to a service to a maximum of 2 requests per XX seconds. You can find my code here if you want to give it a try: https://github.com/hagakure/istio_rating.
The first step I did was:
istioctl install --set meshConfig.disablePolicyChecks=false --set values.pilot.policy.enabled=true
as described on the Istio website.
Then I added some YAML config:
My service:
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
  namespace: rate-limit
spec:
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Exposed by Istio:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-world-gateway
  namespace: rate-limit
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http-web
        protocol: HTTP
      hosts:
        - '*'
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-world-vs
  namespace: rate-limit
spec:
  hosts:
    - "*"
  gateways:
    - hello-world-gateway
  http:
    - route:
        - destination:
            port:
              number: 80
            host: hello-world-svc.rate-limit.svc.cluster.local
My rate-limiting configuration for istio:
apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: requestcount
  namespace: rate-limit
spec:
  compiledTemplate: quota
  params:
    dimensions:
      destination: destination.labels["app"] | destination.service.host | "unknown"
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: quota
  namespace: rate-limit
spec:
  rules:
    - quotas:
        - quota: requestcount.instance.rate-limit
          charge: 1
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: quota-binding
  namespace: rate-limit
spec:
  quotaSpecs:
    - name: quota
      namespace: rate-limit
  services:
    - service: '*'
---
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: quotahandler
  namespace: rate-limit
spec:
  compiledAdapter: redisquota
  params:
    redisServerUrl: localhost:6379
    connectionPoolSize: 10
    quotas:
      - name: requestcount.instance.rate-limit
        maxAmount: 2
        validDuration: 30s
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota-rule
  namespace: rate-limit
spec:
  actions:
    - handler: quotahandler.handler.rate-limit
      instances:
        - requestcount.instance.rate-limit
But nothing happens; I can curl the service as much as I want, no problem :'(

1.6.2. I know it's deprecated, but it is still usable, no?
As mentioned in the documentation:
The mixer policy is deprecated in Istio 1.5 and not recommended for production usage.
Consider using Envoy native rate limiting instead of mixer rate limiting. Istio will add support for native rate limiting API through the Istio extensions API.
As far as I know, Mixer no longer exists when you install Istio; the documentation says:
If you depend on specific Mixer features like out of process adapters, you may re-enable Mixer. Mixer will continue receiving bug fixes and security fixes until Istio 1.7.
But I couldn't find proper documentation on how to do that.
There is an older GitHub issue about rate limiting now that Mixer is deprecated.
I've already done it with Envoy, but the project manager wants me to try it that way
There is a GitHub issue with an Envoy filter rate limiting example, which, as mentioned in the above issue and the documentation, should be used now instead of the deprecated rate limiting from the Istio documentation. So I would recommend talking with your project manager about that; it is actually the right way to go.
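For reference, the Envoy-native approach from that issue looks roughly like the sketch below. This is only a sketch, not a drop-in config: the envoy.filters.http.local_ratelimit HTTP filter may not be available in the Envoy build shipped with your Istio version, and the selector label, namespace and token bucket values (2 requests per 30 seconds) are simply taken over from your question.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: hello-world-local-ratelimit
  namespace: rate-limit
spec:
  workloadSelector:
    labels:
      app: hello-world
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: envoy.filters.network.http_connection_manager
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.local_ratelimit
          typed_config:
            "@type": type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
            value:
              stat_prefix: http_local_rate_limiter
              # allow at most 2 requests, refilled every 30 seconds
              token_bucket:
                max_tokens: 2
                tokens_per_fill: 2
                fill_interval: 30s
              # apply and enforce the limit on 100% of requests
              filter_enabled:
                runtime_key: local_rate_limit_enabled
                default_value:
                  numerator: 100
                  denominator: HUNDRED
              filter_enforced:
                runtime_key: local_rate_limit_enforced
                default_value:
                  numerator: 100
                  denominator: HUNDRED
With something like this in place, the sidecar itself should answer with HTTP 429 once the bucket is empty, with no Mixer or Redis involved.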
Regarding the issue which might occur if you have used an older version of Istio with Mixer, or if you have somehow enabled it on newer versions, take a look at this GitHub issue.
There were some issues with the command from the documentation you mentioned:
istioctl install --set meshConfig.disablePolicyChecks=false --set values.pilot.policy.enabled=true
Instead, you should use:
istioctl install --set values.pilot.policy.enabled=true --set values.global.policyCheckFailOpen=true
OR
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
spec:
  values:
    pilot:
      policy:
        enabled: true
    global:
      policyCheckFailOpen: true
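Whichever variant you use, it is worth checking afterwards that a Mixer policy pod is actually running in istio-system; otherwise the quota rules have nothing to evaluate them and every request simply passes through:
kubectl get pods -n istio-system
(The exact deployment name depends on the release; in older ones it was istio-policy.)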
Hope you find this information useful.

Related

Traefik IngressRoute CRD not Registering Any Routes

I'm configuring Traefik Proxy to run on a GKE cluster to handle proxying to various microservices. I'm doing everything through their CRDs and deployed Traefik to the cluster using a custom deployment. The Traefik dashboard is accessible and working fine; however, when I try to set up an IngressRoute for the service itself, it is not accessible and it does not appear in the dashboard. I've tried setting it up with a regular k8s Ingress object, and when doing that it did appear in the dashboard; however, I ran into some issues with middleware, and for ease of use I'd prefer to go the CRD route. Also, the deployment and service for the microservice seem to be deploying fine; they both appear in the GKE dashboard and are running normally. No Ingress is created, however I'm unsure whether a custom CRD IngressRoute is supposed to create one or not.
Some information about the configuration:
I'm using Kustomize to handle overlays and general data
I have a setting through kustomize to apply the namespace users to everything
Below are the config files I'm using, and the CRDs and RBAC are defined by calling
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users-service
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: users-service
    spec:
      containers:
        - name: users-service
          image: ${IMAGE}
          imagePullPolicy: IfNotPresent
          ports:
            - name: web
              containerPort: ${HTTP_PORT}
          readinessProbe:
            httpGet:
              path: /ready
              port: web
            initialDelaySeconds: 10
            periodSeconds: 2
          envFrom:
            - secretRef:
                name: users-service-env-secrets
service.yml
apiVersion: v1
kind: Service
metadata:
  name: users-service
spec:
  ports:
    - name: web
      protocol: TCP
      port: 80
      targetPort: web
  selector:
    app: users-service
ingress.yml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: users-stripprefix
spec:
  stripPrefix:
    prefixes:
      - /userssrv
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: users-service-ingress
spec:
  entryPoints:
    - service-port
  routes:
    - kind: Rule
      match: PathPrefix(`/userssrv`)
      services:
        - name: users-service
          namespace: users
          port: service-port
      middlewares:
        - name: users-stripprefix
If any more information is needed, just lmk. Thanks!
A default Traefik installation on Kubernetes creates two entrypoints:
* web for HTTP access, and
* websecure for HTTPS access
But you have in your IngressRoute configuration:
entryPoints:
- service-port
Unless you have explicitly configured Traefik with an entrypoint named "service-port", this is probably your problem. You want to remove the entryPoints section, or specify something like:
entryPoints:
- web
If you omit the entryPoints configuration, the service will be available on all entrypoints. If you include explicit entrypoints, then the service will only be available on those specific entrypoints (e.g. with the above configuration, the service would be available via http:// and not via https://).
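For example, the IngressRoute from the question adjusted to the default web entrypoint could look like this (a sketch; it also refers to the Service port by the name web from service.yml, since Traefik accepts either the port name or the port number of the Kubernetes Service here):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: users-service-ingress
spec:
  entryPoints:
    - web                # default HTTP entrypoint; drop entryPoints entirely to listen on all
  routes:
    - kind: Rule
      match: PathPrefix(`/userssrv`)
      services:
        - name: users-service
          namespace: users
          port: web      # name (or number 80) of the port on the users-service Service
      middlewares:
        - name: users-stripprefix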
Not directly related to your problem, but if you're using Kustomize, consider the following (see the sketch after this list):
* Drop the app: users-service label from the deployment, the service selector, etc., and instead set it in your kustomization.yaml using the commonLabels directive.
* Drop the explicit namespace from the service specification in your IngressRoute and instead use kustomize's namespace transformer to set it (this lets you control the namespace exclusively from your kustomization.yaml).
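A minimal kustomization.yaml along those lines could look like this; it is only a sketch, with the file names and the users namespace taken from the question and everything else assumed:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# namespace transformer: sets metadata.namespace on every resource below
namespace: users
# commonLabels is applied to labels and selectors, keeping the Deployment
# template and the Service selector in sync
commonLabels:
  app: users-service
resources:
  - deployment.yml
  - service.yml
  - ingress.yml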
I've put together a deployable example with all the changes mentioned in this answer here.

Unable to log egress traffic HTTP requests with the istio-proxy

I am following this guide.
Ingress requests are getting logged. Egress traffic control is working as expected, except I am unable to log egress HTTP requests. What is missing?
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: myapp
spec:
  workloadSelector:
    labels:
      app: myapp
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY
  egress:
    - hosts:
        - default/*.example.com
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: example
spec:
  location: MESH_EXTERNAL
  resolution: NONE
  hosts:
    - '*.example.com'
  ports:
    - name: https
      protocol: TLS
      number: 443
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
    - providers:
        - name: envoy
Kubernetes 1.22.2 Istio 1.11.4
For ingress traffic logging I am using EnvoyFilter to set log format and it is working without any additional configuration. In the egress case, I had to set accessLogFile: /dev/stdout.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: config
  namespace: istio-system
spec:
  profile: default
  meshConfig:
    accessLogFile: /dev/stdout
AFAIK Istio collects only ingress HTTP logs by default.
In the Istio documentation there is an old article (from 2018) describing how to enable egress traffic HTTP logs.
Please keep in mind that some of the information may be outdated; however, I believe this is the part that you are missing.
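Once access logging is enabled, a quick way to check whether outbound calls show up at all is to read the sidecar's own log (istio-proxy is the default sidecar container name; the pod name below is a placeholder):
kubectl logs <myapp-pod> -c istio-proxy | grep example.com
Keep in mind that with a ServiceEntry using protocol TLS the traffic is passed through, so the entries will be TCP-level logs rather than full HTTP request logs.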

How to apply multiple rate limits for a single service in kong rate limit

I have a service and I need to limit the API based on users and organizations.
Example: User A and User B belong to the same OrgA. Any user can access the API 5 times a day, and an organization can access the API 8 times a day.
Service
apiVersion: v1
kind: Service
metadata:
  name: kong-my-app
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/plugins: rate-limiting-myapp-1, rate-limiting-myapp
  labels:
    run: kong-my-app
spec:
  type: NodePort
  ports:
    - nodePort: 31687
      port: 8200
      protocol: TCP
  selector:
    run: kong-my-app
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limiting-myapp-1
config:
  hour: 8
  limit_by: header
  header_name: 'x-org-id'
  policy: local
plugin: rate-limiting
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limiting-myapp
config:
  hour: 5
  limit_by: header
  header_name: 'x-user-id'
  policy: local
plugin: rate-limiting
The Service is picking up only the last plugin provided in the annotation. Is it possible to apply two variants of the same plugin?
In the above example, it's picking only rate-limiting-myapp, which is the last one in the plugin list.
Please help me if there is any other way to do this.
Is it a limitation of the Kong rate-limiting plugin? Do we need the advanced rate limiter (Enterprise) to get this done?

Enable IAP on Ingress

I've followed the documentation about how to enable IAP on GKE.
I've:
* configured the consent screen
* created OAuth credentials
* added the universal redirect URL
* added myself as an IAP-secured Web App User
And I wrote my deployment like this:
apiVersion: v1
data:
  client_id: <my_id>
  client_secret: <my_secret>
kind: Secret
metadata:
  name: backend-iap-secret
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
    - port: 443
      protocol: TCP
      targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: grafana
    spec:
      containers:
        - env:
            - name: GF_SERVER_HTTP_PORT
              value: "3000"
          image: docker.io/grafana/grafana:6.7.1
          name: grafana
          ports:
            - containerPort: 3000
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /api/health
              port: 3000
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-config-iap
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: backend-iap-secret
---
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: monitoring-tls
spec:
  domains:
    - monitoring.foo.com
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "backend-config-iap"}'
    kubernetes.io/ingress.global-static-ip-name: monitoring
    networking.gke.io/managed-certificates: monitoring-tls
  name: grafana
spec:
  backend:
    serviceName: grafana
    servicePort: 443
When I look at my ingress I've this:
$ k describe ingress
Name: grafana
[...]
Annotations: beta.cloud.google.com/backend-config: {"default": "backend-config-iap"}
ingress.kubernetes.io/backends: {"k8s-blabla":"HEALTHY"}
[...]
Events: <none>
$
I can connect to the web page without any problem; Grafana is up and running, but I can also connect without being authenticated (which is a problem).
So everything looks fine, but IAP is not activated. Why?
The worst part is that if I enable it manually it works, but if I redo kubectl apply -f monitoring.yaml, IAP is disabled again.
What am I missing?
Because my secret values are stored in Secret Manager (and retrieved at build time), I suspected my secrets of having some glitches (spaces, \n, etc.) in them, so I've added a script to test it:
gcloud compute backend-services update \
--project=<my_project_id> \
--global \
$(kubectl get ingress grafana -o json | jq -r '.metadata.annotations."ingress.kubernetes.io/backends"' | jq -r 'keys[0]') \
--iap=enabled,oauth2-client-id=$(gcloud --project="<my_project_id>" beta secrets versions access latest --secret=Monitoring_client_id),oauth2-client-secret=$(gcloud --project="<my_project_id>" beta secrets versions access latest --secret=Monitoring_secret)
And now IAP is properly enabled with the correct OAuth Client, so my secrets are "clean"
By the way, I also tried renaming the secret variables like this (from client_id):
* oauth_client_id
* oauth-client-id
* clientID (like in the BackendConfig documentation)
I've also written the values in the backend like this:
kind: BackendConfig
metadata:
  name: backend-config-iap
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: backend-iap-secret
      clientID: <value>
      clientSecret: <value>
But that doesn't work either.
Erratum:
The fact that IAP is destroyed when I deploy again (after I enable it in the web UI) is caused by my deployment script in this test (I did a kubectl delete before).
But nevertheless, I can't enable IAP with my backend configuration alone.
As suggested I've filed a bug report: https://issuetracker.google.com/issues/153475658
Solution given by Totem:
Change the given YAML to this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.global-static-ip-name: monitoring
    networking.gke.io/managed-certificates: monitoring-tls
  name: grafana
[...]
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "backend-config-iap"}'
[...]
The backend is associated with the service and not the Ingress...
Now it works!
You did everything right, with just one small change:
The annotation should be added on the Service resource:
apiVersion: v1
kind: Service
metadata:
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": { "443":"backend-config-iap"}}'
  name: grafana
Usually you need to associate it with a port, so I've added that to the example above, but make sure it works with 443 as expected.
This is based on an internal example I'm using:
beta.cloud.google.com/backend-config: '{"ports": { "3000":"be-cfg"}}'
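If it is unclear whether the BackendConfig was picked up at all, one way to check (a sketch reusing the resource names from above) is:
kubectl describe backendconfig backend-config-iap
kubectl get service grafana -o jsonpath='{.metadata.annotations}'
and then to confirm in the Cloud Console, or with gcloud compute backend-services describe, that IAP is reported as enabled on the backend service that the Ingress created.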

GKE Managed Certificate not serving over HTTPS

I'm trying to spin up a Kubernetes cluster that I can access securely and can't seem to get that last part. I am following this tutorial: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
Here are the .yaml files I'm using for my Ingress, NodePort Service and ManagedCertificate:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: client-v1-cert
spec:
  domains:
    - api.mydomain.com
---
apiVersion: v1
kind: Service
metadata:
  name: client-nodeport-service
spec:
  selector:
    app: myApp
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: api-v1
    networking.gke.io/managed-certificates: client-v1-cert
spec:
  backend:
    serviceName: client-nodeport-service
    servicePort: 80
No errors that I can see in the GCP console. I can also access my API at http://api.mydomain.com/, but it won't work when I try HTTPS. Been banging my head on this for a few days and I'm just wondering if there's some little thing I'm missing.
--- UPDATE ---
Output of kubectl describe managedcertificate
Name:         client-v1-cert
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2019-07-01T17:42:43Z
  Generation:          3
  Resource Version:    1136504
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/client-v1-cert
  UID:                 b9b7bec1-9c27-33c9-a309-42284a800179
Spec:
  Domains:
    api.mydomain.com
Status:
  Certificate Name:    mcrt-286cdab3-b995-40cc-9b3a-28439285e694
  Certificate Status:  Active
  Domain Status:
    Domain:  api.mydomain.com
    Status:  Active
  Expire Time:  2019-09-29T09:55:12.000-07:00
Events:  <none>
I figured out a solution to this problem. I ended up going into my GCP console, locating the load balancer associated with the Ingress, and noticing that there was only one frontend protocol: HTTP serving over port 80. So I manually added another frontend protocol for HTTPS, selected the managed certificate from the list, waited about 5 minutes, and everything worked.
I have no idea why my ingress.yaml didn't do that automatically, though. So although the problem is fixed, if anyone out there knows why, I would love to know.
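For anyone debugging the same thing, the pieces the Ingress provisioned can also be inspected from the CLI (a sketch; run it against the project that owns the cluster):
gcloud compute target-https-proxies list
gcloud compute ssl-certificates list
If no target HTTPS proxy is listed, the Ingress never created the HTTPS frontend; adding it manually (as above) or recreating the Ingress once the ManagedCertificate reports Active are commonly suggested workarounds.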