Kubernetes Istio exposure not working with VirtualService and Gateway

So we have the following use case running on Istio 1.8.2 / Kubernetes 1.18:
Our cluster is exposed via an external load balancer on Azure. When we expose the app the following way, it works:
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    ...
  name: frontend
  namespace: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: appname
  template:
    metadata:
      labels:
        app: appname
        name: frontend
        customer: customername
    spec:
      imagePullSecrets:
      - name: yadayada
      containers:
      - name: frontend
        image: yadayada
        imagePullPolicy: Always
        ports:
        - name: https
          protocol: TCP
          containerPort: 80
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
  namespace: frontend
  labels:
    name: frontend-svc
    customer: customername
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    name: frontend
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontend
  namespace: frontend
  annotations:
    kubernetes.io/ingress.class: istio
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
  - host: "customer.domain.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: frontend-svc
          servicePort: 80
  tls:
  - hosts:
    - "customer.domain.com"
    secretName: certificate
When we start using a VirtualService and Gateway, we fail to make it work for some reason. We want to use VirtualServices and Gateways because they offer more flexibility and options (like URL rewriting). Other apps running on Istio don't have this issue (they are much simpler as well), and we don't have a NetworkPolicy in place (yet). We simply cannot reach the webpage. Does anyone have an idea? The VirtualService and Gateway are below; the other two ReplicaSets are not shown because they are not the problem:
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  creationTimestamp: null
  name: virtualservice-name
  namespace: frontend
spec:
  gateways:
  - frontend
  hosts:
  - customer.domain.com
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: frontend
        port:
          number: 80
      weight: 100
  - match:
    - uri:
        prefix: /api/
    route:
    - destination:
        host: backend
        port:
          number: 8080
      weight: 100
  - match:
    - uri:
        prefix: /auth/
    route:
    - destination:
        host: keycloak
        port:
          number: 8080
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: frontend
  namespace: frontend
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http2
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - "customer.domain.com"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH
      credentialName: customer-cert
    hosts:
    - "customer.domain.com"

Your Gateway specifies PASSTHROUGH, but your VirtualService provides an HTTP route. This means the TLS connection is not terminated by the Gateway, while the VirtualService expects terminated TLS. See also this somewhat similar question:
How do I properly HTTPS secure an application when using Istio?
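A minimal sketch of what the corrected HTTPS server block could look like, with the Gateway terminating TLS itself (SIMPLE mode is my suggestion, not from the thread; it assumes customer-cert is a kubernetes.io/tls secret in the ingress gateway's namespace):
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE # terminate TLS at the gateway instead of PASSTHROUGH
      credentialName: customer-cert
    hosts:
    - "customer.domain.com"
With TLS terminated at the gateway, the http routes in the VirtualService then apply to the decrypted traffic.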

#user140547 Correct, we changed that now, but we still couldn't access the application.
We found out that one of the important services was not receiving gateway traffic, since that one wasn't set up correctly. It is our first time running an Istio deployment with multiple services, so we thought each of them needed its own Gateway. Little did we know that one gateway was more than enough...
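For anyone landing here with the same misconception: a single Gateway can be shared by every VirtualService, even across namespaces, by referencing it as <gateway-namespace>/<gateway-name>. A hedged sketch (the backend names are assumptions, not from the thread):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend-vs
  namespace: backend # assumed namespace
spec:
  hosts:
  - customer.domain.com
  gateways:
  - frontend/frontend # <gateway-namespace>/<gateway-name>
  http:
  - match:
    - uri:
        prefix: /api/
    route:
    - destination:
        host: backend
        port:
          number: 8080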

Related

How to configure tls with traefik in kubernetes using yaml?

I am having trouble exposing a service over HTTP and HTTPS using Traefik 2.9 in Kubernetes.
The HTTP endpoint mostly works; I introduced CORS errors somehow once I tried to add HTTPS, but that is not my main concern. The HTTPS ingress is broken and I can't find any indication of why it's not working. The Traefik pod doesn't log any errors, and the dotnet service isn't receiving the requests. Also, both routes show up in the dashboard, and websecure is displayed as having TLS enabled.
I'm excluding the ClusterRole, ServiceAccount, and ClusterRoleBinding because I believe they are configured correctly; the HTTP route wouldn't work if they weren't.
Traefik config:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-deployment
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-account
      containers:
      - name: traefik
        image: traefik:v2.9
        args:
        - --api.insecure
        - --providers.kubernetesingress
        - --entrypoints.web.address=:80
        - --entrypoints.websecure.address=:443
        - --entrypoints.websecure.http.tls
        ports:
        - name: web
          containerPort: 80
        - name: dashboard
          containerPort: 8080
        - name: websecure
          containerPort: 443
Traefik services:
apiVersion: v1
kind: Service
metadata:
  name: traefik-dashboard-service
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: dashboard
  selector:
    app: traefik
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-service
spec:
  type: LoadBalancer
  loadBalancerIP: 10.10.1.38
  ports:
  - targetPort: web
    port: 80
    name: http
  - targetPort: websecure
    port: 443
    name: https
  selector:
    app: traefik
Secret for tls:
apiVersion: v1
data:
  comptech.pem: <contents of pem file base64 encoded>
  comptech.crt: <contents of crt file base64 encoded>
  comptech.key: <contents of key file base64 encoded>
kind: Secret
metadata:
  name: comptech-cert
  namespace: default
type: Opaque
Service for dotnet application:
apiVersion: v1
kind: Service
metadata:
  name: control-api-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 5000
    protocol: TCP
  - name: https
    port: 443
    targetPort: 5000
    protocol: TCP
  selector:
    app: control-api
Ingresses:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: control-api-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
  - host: sub.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: control-api-service
            port:
              name: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: control-api-secure-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  rules:
  - host: sub.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: control-api-service
            port:
              name: https
  tls:
  - secretName: comptech-cert
My hope here is that someone with much more experience with traefik/tls will be able to quickly realize what I'm doing incorrectly. Any input is greatly appreciated!
UPDATE:
The firewall was only allowing HTTP traffic; we reconfigured it to support HTTPS, and it now responds with Traefik's default certs. So I can hit the container, but TLS is still not configured using my supplied cert.
The pem file is not needed, and the crt file was generated incorrectly. The openssl command that worked for me was: openssl crl2pkcs7 -nocrl -certfile comptech.pem | openssl pkcs7 -print_certs -out cert.crt
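A side note of my own (an assumption, not stated in the post): the ConfigMap below reads /certs/tls.crt and /certs/tls.key, and a secret of type kubernetes.io/tls created with kubectl mounts with exactly those file names, unlike the Opaque secret above with its comptech.* keys:
# recreate the cert secret so the mounted file names match the ConfigMap
kubectl create secret tls comptech-cert \
  --cert=cert.crt \
  --key=comptech.key \
  --namespace=default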
Pointing to the https port of the control-api-service was not working and needed to be changed to http.
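In other words, the secure Ingress ends up targeting the service's http port; a sketch of the corrected backend (only the port name changes, assuming Traefik terminates TLS and the dotnet app speaks plain HTTP on 5000):
          backend:
            service:
              name: control-api-service
              port:
                name: http # was: https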
A ConfigMap needed to be created for the Traefik deployment to work correctly:
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-config
  namespace: default
  labels:
    name: traefik-config
data:
  dyn.yaml: |
    # https://doc.traefik.io/traefik/https/tls/
    tls:
      stores:
        default:
          defaultCertificate:
            certFile: '/certs/tls.crt'
            keyFile: '/certs/tls.key'
Finally, the ConfigMap and Secret must be used in the Traefik deployment like below:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-deployment
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-account
      containers:
      - name: traefik
        image: traefik:v2.9
        args:
        - --api.insecure
        - --providers.kubernetesingress
        - --entrypoints.web.address=:80
        - --entrypoints.websecure.address=:443
        - --entrypoints.websecure.http.tls
        - --providers.file.filename=/config/dyn.yaml
        ports:
        - name: web
          containerPort: 80
        - name: dashboard
          containerPort: 8080
        - name: websecure
          containerPort: 443
        volumeMounts:
        - name: comptech-cert-volume
          mountPath: /certs
        - name: traefik-config-volume
          mountPath: /config
      volumes:
      - name: comptech-cert-volume
        secret:
          secretName: comptech-cert
      - name: traefik-config-volume
        configMap:
          name: traefik-config
In my setup, I use the IngressRoute CRD implementation from Traefik.
The CRDs were automatically installed when I set up the Traefik controller using Helm.
Is it a possibility for you to use this in your setup? You can check whether the CRDs already exist using the command below on your k8s cluster.
kubectl get crd
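For example, to narrow the output to Traefik's resources (a convenience filter, assuming the traefik.containo.us API group used in the manifests below):
kubectl get crd | grep traefik.containo.us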
Below is a snippet from one of my projects where I also use a custom wildcard certificate from a secret using the IngressRoute manifest.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: blue-api-ingressroute
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`blue.domain.com`) && PathPrefix(`/swagger`)
    kind: Rule
    services:
    - name: blue-api-svc
      port: 80
  tls:
    secretName: bluecert
You can also include other custom resources that are available from Traefik. The complete set of available configuration can be seen here. For example, below is the same snippet with Middleware and TLSOption resources included to improve the security of the endpoint.
apiVersion: traefik.containo.us/v1alpha1
kind: TLSOption
metadata:
  name: tlsoptions
spec:
  minVersion: VersionTLS12
  cipherSuites:
  - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
  - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
  - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
  - TLS_AES_256_GCM_SHA384
  - TLS_AES_128_GCM_SHA256
  - TLS_CHACHA20_POLY1305_SHA256
  - TLS_FALLBACK_SCSV
  curvePreferences:
  - CurveP521
  - CurveP384
  sniStrict: true
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: security
spec:
  headers:
    frameDeny: true
    sslRedirect: true
    browserXssFilter: true
    contentTypeNosniff: true
    stsIncludeSubdomains: true
    stsPreload: true
    stsSeconds: 31536000
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: blue-api-ingressroute
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`blue.domain.com`) && PathPrefix(`/swagger`)
    kind: Rule
    services:
    - name: blue-api-svc
      port: 80
    middlewares:
    - name: security
  tls:
    secretName: bluecert
    options:
      name: tlsoptions

Facing 502 error while implementing Istio-ingress with envoy proxy

Below is the configuration I am using in my environment.
I am able to launch the site, but somehow inbound traffic is getting blocked by Istio/Envoy, so I am not able to navigate to my site's other pages, which are called via AJAX and fail with a 502 error.
apiVersion: v1
kind: Service
metadata:
  name: svc-controlcenter
  namespace: ns-test
  labels:
    app: controlcenter
    app.kubernetes.io/managed-by: Helm
    env: dev
  annotations:
    meta.helm.sh/release-name: controlcenter
    meta.helm.sh/release-namespace: ns-test
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8001
  selector:
    app: controlcenter
    env: dev
Istio-Gateway
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: cc-gw-apps
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - "*.<URL>"
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: secret
      mode: SIMPLE
Virtual Services
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs-cc
  namespace: ns-test
spec:
  hosts:
  - "<FQDN>"
  gateways:
  - istio-system/cc-gw-apps
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        host: <same>.ns-test.svc.cluster.local
        port:
          number: 80

How to access Prometheus & Grafana via the Istio ingress gateway? I have installed Prometheus and Grafana through Helm

I used the command below to bring up the pod:
kubectl create deployment grafana --image=docker.io/grafana/grafana:5.4.3 -n monitoring
Then I used the command below to create a ClusterIP service:
kubectl expose deployment grafana --type=ClusterIP --port=80 --target-port=3000 --protocol=TCP -n monitoring
Then I have used below virtual service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
spec:
  hosts:
  - "*"
  gateways:
  - cogtiler-gateway.skydeck
  http:
  - match:
    - uri:
        prefix: /grafana
    route:
    - destination:
        port:
          number: 3000
        host: grafana
kubectl apply -f grafana-virtualservice.yaml -n monitoring
Output:
virtualservice.networking.istio.io/grafana created
Now, when I try to access it, I get the following error from Grafana:
If you're seeing this Grafana has failed to load its application files
1. This could be caused by your reverse proxy settings.
2. If you host grafana under subpath make sure your grafana.ini root_url setting includes subpath
3. If you have a local dev build make sure you build frontend using: npm run dev, npm run watch, or npm run build
4. Sometimes restarting grafana-server can help
The easiest solution that works out of the box would be to configure this with a grafana host and the / prefix.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: monitoring
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-grafana
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: monitoring
spec:
  hosts:
  - "grafana.example.com"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: grafana
        port:
          number: 80
As you mentioned in the comments, you want to use path-based routing, something like my.com/grafana; that's also possible to configure. You can use an Istio rewrite for that.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: monitoring
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-grafana
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: monitoring
spec:
  hosts:
  - "*"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /grafana
    rewrite:
      uri: /
    route:
    - destination:
        host: grafana
        port:
          number: 80
But, according to this github issue, you would additionally have to configure Grafana itself, as without the proper Grafana configuration it won't work correctly.
I found a way to configure Grafana with a different URL via the GF_SERVER_ROOT_URL environment variable in the Grafana deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: grafana
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: grafana
    spec:
      containers:
      - image: docker.io/grafana/grafana:5.4.3
        name: grafana
        env:
        - name: GF_SERVER_ROOT_URL
          value: "%(protocol)s://%(domain)s/grafana/"
        resources: {}
Also there is a Virtual Service and Gateway for that deployment.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-grafana
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
spec:
  hosts:
  - "*"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /grafana/
    rewrite:
      uri: /
    route:
    - destination:
        host: grafana
        port:
          number: 80
You need to create a Gateway to allow routing between the istio-ingressgateway and your VirtualService.
Something along the lines of:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress
  namespace: istio-system
spec:
  selector:
    # Make sure that the istio-ingressgateway pods have this label
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - my.domain.com
You also need a DNS entry for your domain (my.domain.com) that points to the IP address of your istio-ingressgateway.
When your browser hits my.domain.com, it will be directed to the istio-ingressgateway. The istio-ingressgateway then inspects the Host field of the request and forwards the request to grafana (according to the VirtualService rules).
You can check kubectl get svc -n istio-system | grep istio-ingressgateway to get the public IP of your ingress gateway.
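Or, to print just the address (a convenience one-liner, assuming the LoadBalancer exposes an IP rather than a hostname):
kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'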
If you want to enable TLS, then you need to provision a TLS certificate for your domain (easiest with cert-manager). Then you can use an HTTPS redirect in your gateway, like so:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress
  namespace: whatever
spec:
  selector:
    # Make sure that the istio-ingressgateway pods have this label
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - my.domain.com
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - my.domain.com
    tls:
      mode: SIMPLE
      # Name of the secret containing the TLS certificate + keys.
      # The secret must exist in the same namespace as the
      # istio-ingressgateway (probably the istio-system namespace).
      # This secret can be created by cert-manager,
      # or you can create a self-signed certificate
      # and add it manually to the browser's trusted certificates.
      credentialName: my-domain-tls
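For the cert-manager route, a hedged sketch of the Certificate that could provision the my-domain-tls secret (the letsencrypt-prod ClusterIssuer name is an assumption, borrowed from the first question in this thread):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-domain-tls
  namespace: istio-system # must match the ingress gateway's namespace
spec:
  secretName: my-domain-tls
  dnsNames:
  - my.domain.com
  issuerRef:
    name: letsencrypt-prod # assumed ClusterIssuer
    kind: ClusterIssuer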
Then your VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
spec:
  hosts:
  - "my.domain.com"
  gateways:
  - ingress
  http:
  - match:
    - uri:
        prefix: /grafana
    route:
    - destination:
        port:
          number: 3000
        host: grafana

Kiali: Avoid KIA1106 - More than one Virtual Service for same host

I'm using Kiali on Istio/Kubernetes to monitor my mesh.
I need to route to 2 different pods based on what the URL contains, and for this I'm following the tutorial Split large virtual services and destination rules into multiple resources. So, I created 2 VirtualServices for the same host and gateway:
Service 1:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs-product-composite
spec:
  hosts:
  - "kubernetes.b-thinking.com"
  gateways:
  - gw-ingress
  http:
  - match:
    - uri:
        prefix: /product-composite
    route:
    - destination:
        port:
          number: 80
        host: product-composite
Service 2:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uaa
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uaa
  template:
    metadata:
      labels:
        app: uaa
    spec:
      containers:
      - name: uaa
        image: bthinking/uaa
        imagePullPolicy: Never
        env:
        - name: LOGGING_LEVEL_ROOT
          value: DEBUG
        ports:
        - containerPort: 8090
        resources:
          limits:
            memory: 350Mi
---
apiVersion: v1
kind: Service
metadata:
  name: uaa
spec:
  type: NodePort
  selector:
    app: uaa
  ports:
  - port: 8090
    nodePort: 31090
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs-auth
spec:
  hosts:
  - "kubernetes.b-thinking.com"
  gateways:
  - gw-ingress
  http:
  - match:
    - uri:
        prefix: /oauth
    rewrite:
      uri: "/uaa/oauth"
    route:
    - destination:
        port:
          number: 8090
        host: uaa
But I'm getting the warning KIA1106 - More than one Virtual Service for same host.
Looking at the docs, they explain this case (it's exactly my case), but they redirect to the same guide I have already followed.

K8S: Routing with Istio returns 404

I'm new to the k8s world.
In my dev environment, I use nginx as a proxy (with CORS configs and header forwarding) for the different microservices I have (all made with Spring Boot). In a k8s cluster, do I have to replace it with Istio?
I'm trying to run a simple microservice (for now) and use Istio for routing to it. I've installed Istio with Google Cloud.
If I navigate to IstioIP/auth/api/v1 it returns 404.
This is my yaml file:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      name: http
      number: 80
      protocol: HTTP
    hosts:
    - '*'
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - gateway
  http:
  - match:
    - uri:
        prefix: /auth
    route:
    - destination:
        host: auth-srv
        port:
          number: 8082
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
  labels:
    app: auth-srv
spec:
  ports:
  - name: http
    port: 8082
  selector:
    app: auth-srv
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth-srv
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth-srv
        version: v1
    spec:
      containers:
      - name: auth-srv
        image: gcr.io/{{MY_PROJECT_ID}}/auth-srv:1.5
        imagePullPolicy: IfNotPresent
        env:
        - name: JAVA_OPTS
          value: '-DZIPKIN_SERVER=http://zipkin:9411'
        ports:
        - containerPort: 8082
        livenessProbe:
          httpGet:
            path: /api/v1
            port: 8082
          initialDelaySeconds: 60
          periodSeconds: 5
It looks like Istio doesn't know anything about the URL; therefore you are getting a 404 error response.
If you look closer at the configuration, in the VirtualService you have configured Istio to match on the path /auth.
So if you request ISTIOIP/auth you will reach your microservice application. [The original answer included an image describing the traffic flow and why you are getting a 404 response.]
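One common follow-up fix (my assumption, not part of the original answer): if the Spring Boot app itself serves /api/v1 without the /auth prefix, the VirtualService can strip the prefix with a rewrite before forwarding:
  http:
  - match:
    - uri:
        prefix: /auth/
    rewrite:
      uri: / # /auth/api/v1 reaches the pod as /api/v1 (assumes the app has no /auth prefix)
    route:
    - destination:
        host: auth-srv
        port:
          number: 8082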