Right namespace for VirtualService and DestinationRule Istio resources

I am trying to understand the VirtualService and DestinationRule resources in relation to the namespace in which they should be defined, and whether they are really namespaced resources or whether they can also be considered cluster-wide resources.
I have the following scenario:
The frontend service (web-frontend) accesses the backend service (customers).
The frontend service is deployed in the frontend namespace
The backend service (customers) is deployed in the backend namespace
There are two versions of the backend service customers (two Deployments), one for version v1 and one for version v2.
The default behavior of the ClusterIP service is to load-balance requests between the two deployments (v1 and v2); my goal is, by creating a DestinationRule and a VirtualService, to direct the traffic only to the v1 deployment.
What I want to understand is which namespace is the appropriate one in which to define such DestinationRule and VirtualService resources. Should I create them in the frontend namespace or in the backend namespace?
In the frontend namespace I have the web-frontend deployment and the related service, as follows:
apiVersion: v1
kind: Namespace
metadata:
  name: frontend
  labels:
    istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
  namespace: frontend
  labels:
    app: web-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
        version: v1
    spec:
      containers:
      - image: gcr.io/tetratelabs/web-frontend:1.0.0
        imagePullPolicy: Always
        name: web
        ports:
        - containerPort: 8080
        env:
        - name: CUSTOMER_SERVICE_URL
          value: 'http://customers.backend.svc.cluster.local'
---
kind: Service
apiVersion: v1
metadata:
  name: web-frontend
  namespace: frontend
  labels:
    app: web-frontend
spec:
  selector:
    app: web-frontend
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: 8080
I have exposed the web-frontend service by defining the following Gateway and VirtualService resources:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway-all-hosts
  # namespace: default # Also working
  namespace: frontend
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web-frontend
  # namespace: default # Also working
  namespace: frontend
spec:
  hosts:
  - "*"
  gateways:
  - gateway-all-hosts
  http:
  - route:
    - destination:
        host: web-frontend.frontend.svc.cluster.local
        port:
          number: 80
In the backend namespace I have the customers v1 and v2 deployments and the related service, as follows:
apiVersion: v1
kind: Namespace
metadata:
  name: backend
  labels:
    istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v1
  namespace: backend
  labels:
    app: customers
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v1
  template:
    metadata:
      labels:
        app: customers
        version: v1
    spec:
      containers:
      - image: gcr.io/tetratelabs/customers:1.0.0
        imagePullPolicy: Always
        name: svc
        ports:
        - containerPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v2
  namespace: backend
  labels:
    app: customers
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v2
  template:
    metadata:
      labels:
        app: customers
        version: v2
    spec:
      containers:
      - image: gcr.io/tetratelabs/customers:2.0.0
        imagePullPolicy: Always
        name: svc
        ports:
        - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: customers
  namespace: backend
  labels:
    app: customers
spec:
  selector:
    app: customers
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: 3000
I have created the following DestinationRule and VirtualService resources to send the traffic only to the v1 deployment.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: customers
  #namespace: default  # Not working
  #namespace: frontend # working
  namespace: backend   # working
spec:
  host: customers.backend.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customers
  #namespace: default  # Not working
  #namespace: frontend # working
  namespace: backend   # working
spec:
  hosts:
  - "customers.backend.svc.cluster.local"
  http:
  ## route - subset: v1
  - route:
    - destination:
        host: customers.backend.svc.cluster.local
        port:
          number: 80
        subset: v1
The question is: which is the appropriate namespace in which to define the VirtualService and DestinationRule resources for the customers service?
From my tests I see that I can use either the frontend namespace or the backend namespace. Why do the VirtualService and DestinationRule work when created in either the frontend or the backend namespace? Which is the correct one?
Are the DestinationRule and VirtualService resources really namespaced resources, or can they be considered cluster-wide resources?
Are the low-level routing rules propagated to all Envoy proxies regardless of the namespace?

For a DestinationRule to actually be applied during a request, it needs to be on the destination rule lookup path:
-> client namespace
-> service namespace
-> the configured meshconfig.rootNamespace namespace (istio-system by default)
In your example, the "web-frontend" client is in the frontend Namespace (web-frontend.frontend.svc.cluster.local), the "customers" service is in the backend Namespace (customers.backend.svc.cluster.local), so the customers DestinationRule should be created in one of the following Namespaces: frontend, backend or istio-system. Additionally, please note that the istio-system Namespace isn't recommended unless the destination rule is really a global configuration that is applicable in all Namespaces.
To make sure that the destination rule will be applied, we can use the istioctl proxy-config cluster command for the web-frontend Pod:
$ istioctl proxy-config cluster web-frontend-69d6c79786-vkdv8 -n frontend | grep "customers.backend.svc.cluster.local"
SERVICE FQDN                           PORT   SUBSET   DESTINATION RULE
customers.backend.svc.cluster.local    80     -        customers.frontend
customers.backend.svc.cluster.local    80     v1       customers.frontend
customers.backend.svc.cluster.local    80     v2       customers.frontend
When the destination rule is created in the default Namespace instead, it will not be applied to the request:
$ istioctl proxy-config cluster web-frontend-69d6c79786-vkdv8 -n frontend | grep "customers.backend.svc.cluster.local"
SERVICE FQDN                           PORT   SUBSET   DESTINATION RULE
customers.backend.svc.cluster.local    80     -
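The routing side can be sanity-checked the same way with the route subcommand (same Pod as above; it should list the customers virtual host routing to the v1 subset when the VirtualService is in effect):
$ istioctl proxy-config route web-frontend-69d6c79786-vkdv8 -n frontend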
For more information, see the Control configuration sharing in namespaces documentation.
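That documentation also covers the exportTo field, which controls the Namespaces in which a VirtualService or DestinationRule is visible. A minimal sketch on the customers DestinationRule (note that narrowing exportTo to "." would hide the rule from the web-frontend sidecar in the frontend Namespace, so requests would fall back to plain load-balancing):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: customers
  namespace: backend
spec:
  host: customers.backend.svc.cluster.local
  exportTo:
  - "*" # the default: visible to sidecars in every namespace; "." would restrict visibility to backend
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2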

Related

Path or PathPrefix routing with Traefik on Kubernetes not working

I'm using Traefik 2.7.0 on an AKS Kubernetes Cluster 1.22.6.
Currently, everything routes to the same service:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: api
  namespace: namespace1
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`api.my-domain.com`)
    kind: Rule
    services:
    - name: api
      namespace: namespace1
      port: 80
  tls:
    secretName: api-my-domain-com-cert
I'm currently in the process of externalizing an API resource from this service to a dedicated new service ("/users") because there will be other services in the future that will need the same functionality.
What I'm trying (and failing) to do, is to route calls to "/users" to the new service:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: api
  namespace: namespace1
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`api.my-domain.com`) && Path(`/users`)
    kind: Rule
    services:
    - name: users-api
      namespace: namespace2
      port: 80
  - match: Host(`api.my-domain.com`)
    kind: Rule
    services:
    - name: api
      namespace: namespace1
      port: 80
  tls:
    secretName: api-baywa-lusy-com-cert
I tried Path(..) and PathPrefix(..), with no success: everything is still routed to the old service. The new service returns slightly different output, so I can tell with certainty that requests are still routed to the old service.
Adding the priority manually didn't help either:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: api
  namespace: namespace1
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`api.my-domain.com`) && Path(`/users`)
    kind: Rule
    priority: 2000
    services:
    - name: users-api
      namespace: namespace2
      port: 80
  - match: Host(`api.my-domain.com`)
    kind: Rule
    priority: 1000
    services:
    - name: api
      namespace: namespace1
      port: 80
  tls:
    secretName: api-baywa-lusy-com-cert
Am I missing something here? Any help is appreciated!
Thanks,
best regards,
Pascal
You can only expose services in the same namespace as your IngressRoute resource. If you watch the logs of your Traefik pod when you deploy your IngressRoute, you should see something like:
time="2023-01-26T13:57:17Z" level=error msg="service namespace2/users-api not in the parent resource namespace namespace1" providerName=kubernetescrd ingress=namespace1 namespace=namespace1
To do what you want, you need to create two separate IngressRoute resources, one in namespace1 and one in namespace2.
In namespace1:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  labels:
    app: old-api
  name: old-api
  namespace: namespace1
spec:
  entryPoints:
  - web
  routes:
  - kind: Rule
    priority: 1000
    match: Host(`api.my-domain.com`)
    services:
    - name: old-api
      port: 80
In namespace2:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  labels:
    app: new-api
  name: new-api
  namespace: namespace2
spec:
  entryPoints:
  - web
  routes:
  - kind: Rule
    priority: 2000
    match: Host(`api.my-domain.com`) && PathPrefix(`/users`)
    services:
    - name: new-api
      port: 80
You can find all the files I used to test this configuration here.
I don't know if the explicit priorities are necessary or not; it worked for me without them, but maybe I was just lucky, so I left them there. I would generally assume that a "more specific route" takes precedence over a "less specific route", but I don't know if that's actually true.
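As an alternative to splitting the IngressRoute, the Kubernetes CRD provider in recent Traefik v2 releases has an allowCrossNamespace option that lifts the same-namespace restriction. A sketch of the static configuration, assuming you can edit the arguments of the Traefik deployment:
# CLI flags on the Traefik deployment (static configuration)
- --providers.kubernetescrd
- --providers.kubernetescrd.allowCrossNamespace=true
With that enabled, the original single IngressRoute referencing users-api with namespace: namespace2 should be accepted.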

Redirect oauth2 proxy to custom page in case of 500 server error

I would like either to be able to customise the existing 500 error page OR, in case that is not possible, to redirect to my custom page.
I am using Okta OIDC and everything is set up on a Kubernetes cluster. I could not find where I can configure this 500 server error page.
Below is my POC for a somewhat similar setup, but using the GitHub provider.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  selector:
    matchLabels:
      app: demo
  replicas: 1
  template:
    metadata:
      labels:
        app: demo
        version: new
    spec:
      containers:
      - name: nginx
        image: nikk007/nginxwebsite
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: demo
  ports:
  - name: http
    port: 80
    nodePort: 30080
  type: NodePort
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress-gateway-configuration
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: selfsigned-cert-tls-secret
    hosts:
    - "*" # Domain name of the external website
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: test-selfsigned
  namespace: istio-system
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-cert
  namespace: istio-system
spec:
  dnsNames:
  - mymy.com
  secretName: selfsigned-cert-tls-secret
  issuerRef:
    name: test-selfsigned
---
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: myvs # "just" a name for this virtualservice
  namespace: default
spec:
  hosts:
  - "*" # The Service DNS (ie the regular K8S Service) name that we're applying routing rules to.
  gateways:
  - ingress-gateway-configuration
  http:
  - route:
    - destination:
        host: oauth2-proxy # The Target DNS name
        port:
          number: 4180
---
kind: DestinationRule # Defining which pods should be part of each subset
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: grouping-rules-for-our-photograph-canary-release # This can be anything you like.
  namespace: default
spec:
  host: myservice.default.svc.cluster.local # Service
  subsets:
  - labels: # SELECTOR.
      version: new # find pods with label "safe"
    name: new-group
  - labels:
      version: old
    name: old-group
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=github
        - --email-domain=*
        - --upstream=file:///dev/null
        - --http-address=0.0.0.0:4180
        # Register a new application
        # https://github.com/settings/applications/new
        env:
        - name: OAUTH2_PROXY_CLIENT_ID
          value: <myapp-client-id>
        - name: OAUTH2_PROXY_CLIENT_SECRET
          value: <my-app-secret>
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: <cookie-secret-I-generated-via-python>
        image: quay.io/oauth2-proxy/oauth2-proxy:latest
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    app: oauth2-proxy
I am using cert-manager for self-signed certificates.
I am using the Istio ingress gateway instead of the k8s ingress controller.
Everything is working fine and I am able to access the Istio gateway, which directs to the oauth2-proxy page, which directs to the GitHub login.
In case there is an issue with the authorization provider, we get a 500 error page via oauth2-proxy. I want to configure oauth2-proxy to redirect to my custom page (or to customize oauth2-proxy's internal HTML for the error page) in case of a 500 error. I tried kubectl exec into the oauth2-proxy pod, but I could not locate the HTML. Moreover, I don't have root access inside the oauth2-proxy pod.
If my own app throws a 500 error, I can handle it by configuring its nginx.
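For the oauth2-proxy part: its HTML pages are compiled into the binary, which is why there is nothing to edit inside the pod. The proxy does, however, accept a --custom-templates-dir flag pointing at a directory with your own sign_in.html and error.html templates. A minimal sketch, serving the templates from a hypothetical ConfigMap (the ConfigMap name, mount path, and page content are assumptions):
apiVersion: v1
kind: ConfigMap
metadata:
  name: oauth2-proxy-templates
data:
  error.html: |
    <html><body><h1>Something went wrong</h1>
    <a href="/oauth2/sign_in">Try again</a></body></html>
  sign_in.html: |
    <html><body><!-- custom sign-in page --></body></html>
---
# changes to merge into the oauth2-proxy Deployment above (sketch):
#   containers[0].args:        add --custom-templates-dir=/templates
#   containers[0].volumeMounts:
#     - name: templates
#       mountPath: /templates
#   pod spec volumes:
#     - name: templates
#       configMap:
#         name: oauth2-proxy-templates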

ArgoCD: How to not store semi-secret information in manifests?

I'm learning K8s, so bear with me as a noob.
I'm running a single-node K3s cluster at home, and have successfully deployed the traefik/whoami application using the command below, but would like to deploy it via ArgoCD.
cat apps/whoami/whoami.yaml | envsubst | kubectl apply -f -
The manifest I created:
---
apiVersion: v1
kind: Namespace
metadata:
  name: k3s-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-deploy
  namespace: k3s-test
  labels:
    app: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: traefik/whoami:v1.8.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami-svc
  namespace: k3s-test
  labels:
    service: whoami
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
  selector:
    app: whoami
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  namespace: k3s-test
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
  - host: whoami.${DOMAIN_NAME}
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: whoami-svc
            port:
              number: 80
I want to publish my code to GitHub so that ArgoCD can sync it, but I don't want to expose information that is not necessarily secret, yet not exactly public either. Currently, my domain name is set as an environment variable (because I don't want to commit mydomain.com to my GitHub repo) and I'm using envsubst when running kubectl apply. Does ArgoCD have similar functionality? I found this GitHub issue suggesting ArgoCD probably doesn't have variable interpolation, but is there an alternative? Or do I need to store my domain name as a full-on K8s Secret?
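ArgoCD has no envsubst step, but it renders Helm (and Kustomize) natively, so one common pattern is to turn the manifests into a small Helm chart, template the host, and supply the value from the Application manifest, which can be kept in a private repo or applied out-of-band. A sketch, assuming a chart at apps/whoami exposing a hypothetical domainName value:
# apps/whoami/templates/ingress.yaml (excerpt)
#   rules:
#   - host: whoami.{{ .Values.domainName }}
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: whoami
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/k3s-apps.git # hypothetical repo
    targetRevision: HEAD
    path: apps/whoami
    helm:
      parameters:
      - name: domainName
        value: mydomain.com # still plain text, so keep this manifest out of the public repo
  destination:
    server: https://kubernetes.default.svc
    namespace: k3s-test
For anything genuinely sensitive, a Secret (optionally managed with a tool like sealed-secrets) is the safer route.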

Role of labels in istio's DestinationRule

I am going through the traffic management section of Istio's documentation.
In a DestinationRule example, it configures several service subsets.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-destination-rule
spec:
  host: my-svc
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
  - name: v3
    labels:
      version: v3
My question (since it is not clear from the documentation) is about the role of the labels under spec.subsets[].
Do these labels refer to:
labels in the corresponding k8s Deployment ?
or
labels in the pods of the Deployment?
Where exactly (in terms of k8s manifests) do the above labels reside?
Istio sticks to the Kubernetes labeling paradigm used to identify resources within the cluster.
Since this particular DestinationRule is intended to determine, at the network level, which backends are to serve requests, it targets the pods of the Deployment rather than the Deployment itself (which is an abstract resource without any network features).
A good example of this is in the Istio sample application repository:
The Deployment doesn't have any version: v1 label itself. However, the pods it groups do:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcp-echo
      version: v1
  template:
    metadata:
      labels:
        app: tcp-echo
        version: v1
    spec:
      containers:
      - name: tcp-echo
        image: docker.io/istio/tcp-echo-server:1.1
        imagePullPolicy: IfNotPresent
        args: [ "9000", "hello" ]
        ports:
        - containerPort: 9000
And the DestinationRule picks these objects by their version label:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tcp-echo-destination
spec:
  host: tcp-echo
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
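Since the subset labels are matched against pod labels, a quick way to see which pods a subset would select is to list pods with the same label selector, e.g.:
kubectl get pods -l app=tcp-echo,version=v1 --show-labels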

Deploy same image two different namespaces same port

I have a single-node k8s cluster. I have two namespaces, call them n1 and n2. I want to deploy the same image, on the same port, but in different namespaces.
How do I do this?
namespace yamls:
apiVersion: v1
kind: Namespace
metadata:
  name: n1
and
apiVersion: v1
kind: Namespace
metadata:
  name: n2
service yamls:
apiVersion: v1
kind: Service
metadata:
  name: my-app-n1
  namespace: n1
  labels:
    app: my-app-n1
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
  selector:
    app: my-app-n1
and
apiVersion: v1
kind: Service
metadata:
  name: my-app-n2
  namespace: n2
  labels:
    app: my-app-n2
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
  selector:
    app: my-app-n2
deployment yamls:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-n1
  labels:
    app: my-app-n1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-n1
  template:
    metadata:
      labels:
        app: my-app-n1
    spec:
      containers:
      - name: waiter
        image: waiter:v1
        ports:
        - containerPort: 80
and
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-n2
  labels:
    app: my-app-n2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-n2
  template:
    metadata:
      labels:
        app: my-app-n2
    spec:
      containers:
      - name: waiter
        image: waiter:v1
        ports:
        - containerPort: 80
waiter:v1 corresponds to this repo: https://hub.docker.com/r/adamgardnerdt/waiter
Surely I can do this, as namespaces are supposed to represent different environments (e.g. nonprod vs. prod)? So surely I can deploy identically into two different "environments", aka "namespaces"?
For the Services you have specified namespaces; that is correct.
For the Deployments you should also specify namespaces, otherwise they will go to the default namespace.
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-n1
  namespace: n1
  labels:
    app: my-app-n1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-n1
  template:
    metadata:
      labels:
        app: my-app-n1
    spec:
      containers:
      - name: waiter
        image: waiter:v1
        ports:
        - containerPort: 80
I want to deploy the same image, on the same port but in different namespaces.
You are already doing that with your configs, except for the Deployment objects, which should refer to the correct namespaces (as mentioned in the answer from Ijaz Ahmad Khan); the apps are then available to other services in their namespaces via the DNS names my-app-n1 and my-app-n2 respectively.
Because waiter is a web server, I assume you would like to access both instances of it from the internet. Hence, you should:
change the type of both services to ClusterIP,
add an Ingress object in each namespace, containing a host name (e.g. myapp.com and staging.myapp.com respectively); see the sketch below,
put a load balancer in front of your cluster: the load balancer will use the Ingress objects to know which hostname matches which service (your cloud provider should create a load balancer automatically).
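A sketch of such an Ingress for the n1 namespace (the host name is an assumption; the n2 variant differs only in name, namespace, host, and backend service):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-n1
  namespace: n1
spec:
  rules:
  - host: myapp.com # assumed host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-n1
            port:
              number: 80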