How do I point Kubernetes Ingress to the Istio ingress gateway? - kubernetes

I have a currently functioning Istio application. I would now like to add HTTPS using Google Cloud managed certificates, so I set up the ingress like this...
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
  namespace: istio-system
spec:
  domains:
    - mydomain.co
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: managed-cert-ingress
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.global-static-ip-name: managed-cert
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: istio-ingressgateway
      port:
        number: 443
---
But when I try going to the site (https://mydomain.co) I get...
Secure Connection Failed
An error occurred during a connection to earth-615.mydomain.co. Cannot communicate securely with peer: no common encryption algorithm(s).
Error code: SSL_ERROR_NO_CYPHER_OVERLAP
The functioning virtual service/gateway looks like this...
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: ingress-gateway
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.global-static-ip-name: earth-616
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http2
        protocol: HTTP2
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: test-app
  namespace: foo
spec:
  hosts:
    - "*"
  gateways:
    - "istio-system/ingress-gateway"
  http:
    - match:
        - uri:
            exact: /
      route:
        - destination:
            host: test-app
            port:
              number: 8000

Pointing the Kubernetes Ingress at the Istio ingress gateway would add latency, and it would also require the Istio gateway to be configured for SNI passthrough in order to accept the (already TLS-terminated) HTTPS traffic.
Instead, the best practice here is to use the certificate directly with an Istio Secure Gateway.
You can use a certificate and key issued by a Google CA, e.g. from Certificate Authority Service, and create a Kubernetes secret to hold them. Then configure the Istio Secure Gateway to terminate the TLS traffic as documented here.
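As a minimal sketch of that setup, assuming the certificate and key have already been obtained from the CA (the secret name and file names below are placeholders):

kubectl create -n istio-system secret tls mydomain-credential \
  --key=mydomain.co.key --cert=mydomain.co.crt

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE # terminate TLS at the Istio gateway
        credentialName: mydomain-credential # must match the secret created above
      hosts:
        - mydomain.co

The existing VirtualService keeps working unchanged; only the Gateway gains the HTTPS server entry.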

Related

ALB Ingress Controller, Terminating TLS via AWS ACM. How to go End-To-End?

We currently use the AWS ALB Ingress Controller to front our ingress and terminate SSL using a certificate from AWS ACM. This works fine.
Is there a way to also encrypt the traffic from the load balancer to the cluster?
Here is what I attempted
Install/Configure Cert Manager
Add a TLS Secret to the Ingress
Change the ingress annotation to set the backend protocol to HTTPS: alb.ingress.kubernetes.io/backend-protocol: HTTPS
This... results in a 502 gateway error
Here is my current, working ingress, with only the relevant parts shown.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/certificate-arn: real-cert-arn
    alb.ingress.kubernetes.io/group.name: public.monitor
    alb.ingress.kubernetes.io/group.order: "40"
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/load-balancer-name: monitoring-public-qa
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-FS-1-2-Res-2020-10
    cert-manager.io/cluster-issuer: cert-manager-r53-qa
    kubernetes.io/ingress.class: alb
spec:
  rules:
    - host: goldilocks.qa.realdomain.com
      http:
        paths:
          - backend:
              service:
                name: goldilocks-dashboard
                port:
                  name: http
            path: /*
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - goldilocks.qa.realdomain.com
      secretName: goldilocks-qa-cert
status:
  loadBalancer:
    ingress:
      - hostname: real-lb-address.us-gov-west-1.elb.amazonaws.com
My (staging) cert exists and appears fine.
[ec2-user@ip-10-17-2-102 ~]$ kubectl get cert -n goldilocks goldilocks-qa-cert -o yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: goldilocks-qa-cert
  namespace: goldilocks
  ownerReferences:
    - apiVersion: networking.k8s.io/v1
      blockOwnerDeletion: true
      controller: true
      kind: Ingress
      name: goldilocks-dashboard
      uid: 2494e5be-e624-471b-afd3-c8d56f5dc853
  resourceVersion: "81003934"
  uid: d494a1ec-7509-474e-b154-d5db8ceb86c0
spec:
  dnsNames:
    - goldilocks.qa.realdomain.com
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: cert-manager-r53-qa
  secretName: goldilocks-qa-cert
  usages:
    - digital signature
    - key encipherment
status:
  conditions:
    - lastTransitionTime: "2023-02-08T16:06:56Z"
      message: Certificate is up to date and has not expired
The fact that I can't figure out what to google to find this answer leads me to think I'm attempting something weird. I understand I could just terminate the TLS on the pod, but I didn't want to rely on Let's Encrypt to provide good/valid certs; I just want the traffic encrypted.

Unable to log egress traffic HTTP requests with the istio-proxy

I am following this guide.
Ingress requests are getting logged. Egress traffic control is working as expected, except I am unable to log egress HTTP requests. What is missing?
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: myapp
spec:
  workloadSelector:
    labels:
      app: myapp
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY
  egress:
    - hosts:
        - default/*.example.com
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: example
spec:
  location: MESH_EXTERNAL
  resolution: NONE
  hosts:
    - '*.example.com'
  ports:
    - name: https
      protocol: TLS
      number: 443
---
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
    - providers:
        - name: envoy
Kubernetes 1.22.2, Istio 1.11.4
For ingress traffic logging I am using an EnvoyFilter to set the log format, and it works without any additional configuration. In the egress case, I had to set accessLogFile: /dev/stdout.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: config
  namespace: istio-system
spec:
  profile: default
  meshConfig:
    accessLogFile: /dev/stdout
AFAIK Istio collects only ingress HTTP logs by default.
In the Istio documentation there is an old article (from 2018) describing how to enable HTTP logging for egress traffic.
Please keep in mind that some of the information may be outdated; however, I believe this is the part that you are missing.
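As a rough sketch of the idea from that article (hostnames are placeholders, and the exact setup may differ on newer Istio versions): the sidecar can only produce HTTP-level access logs for traffic it can parse, so an external host declared with protocol: TLS is logged as opaque TCP at best. Declaring a plain-HTTP port makes the requests visible to the proxy:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: example-http # illustrative name
spec:
  location: MESH_EXTERNAL
  resolution: NONE
  hosts:
    - '*.example.com'
  ports:
    - name: http
      protocol: HTTP # HTTP, not TLS, so the sidecar can parse and log the requests
      number: 80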

Cannot allow external traffic through ISTIO

I am trying to set up Istio, and I need to whitelist a few ports to allow non-mTLS traffic from the outside world to reach a few pods running in my local k8s cluster through specific ports.
I have been unable to find a successful way of doing it.
I tried a ServiceEntry, a Policy, and a DestinationRule, and didn't succeed.
Help is highly appreciated.
version.BuildInfo{Version:"1.1.2", GitRevision:"2b1331886076df103179e3da5dc9077fed59c989", User:"root", Host:"35adf5bb-5570-11e9-b00d-0a580a2c0205", GolangVersion:"go1.10.4", DockerHub:"docker.io/istio", BuildStatus:"Clean", GitTag:"1.1.1"}
Service Entry
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-traffic
  namespace: cloud-infra
spec:
  hosts:
    - "*.cluster.local"
  ports:
    - number: 50506
      name: grpc-xxx
      protocol: TCP
  location: MESH_EXTERNAL
  resolution: NONE
You need to add a DestinationRule and a Policy:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: destinationrule-test
spec:
  host: service-name
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
    portLevelSettings:
      - port:
          number: 8080
        tls:
          mode: DISABLE
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: policy-test
spec:
  targets:
    - name: service-name
      ports:
        - number: 8080
  peers:
This has been tested with Istio 1.0, but it will probably work for Istio 1.1. It is heavily inspired by the documentation: https://istio.io/help/ops/setup/app-health-check/
From your question, I understood that you want to control your ingress traffic and allow some ports from the outside to reach the services running in your mesh/cluster, but your configuration is for egress traffic.
In order to control and allow ports to your services from outside, you can follow these steps.
1. Make sure that containerPort is included in your deployment/pod configuration. For more info
2. You have to have a Service pointing to your backends/pods (a sketch of steps 1 and 2 follows below). For more info about Kubernetes Services.
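A minimal sketch of steps 1 and 2 together (the names and image below are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-service
  namespace: foo-namespace
spec:
  selector:
    matchLabels:
      app: your-service
  template:
    metadata:
      labels:
        app: your-service
    spec:
      containers:
        - name: your-service
          image: your-image:latest # placeholder image
          ports:
            - containerPort: 3000 # step 1: containerPort in the pod spec
---
apiVersion: v1
kind: Service
metadata:
  name: your-service
  namespace: foo-namespace
spec:
  selector:
    app: your-service # step 2: Service selecting the pods above
  ports:
    - name: http
      port: 3000
      targetPort: 3000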
3. Then, in your Istio-enabled cluster, you have to create a Gateway similar to the configuration below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: your-service-gateway
  namespace: foo-namespace # Use same namespace with backend service
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
    - port:
        number: 80
        name: HTTP
        protocol: HTTP
      hosts:
        - "*"
4. Then configure a route to your service for traffic entering via this gateway by creating a VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: your-service
  namespace: foo-namespace # Use same namespace with backend service
spec:
  hosts:
    - "*"
  gateways:
    - your-service-gateway # define gateway name
  http:
    - match:
        - uri:
            prefix: "/"
      route:
        - destination:
            port:
              number: 3000 # Backend service port
            host: your-service # Backend service name
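To verify from outside the cluster, something like this should work once the Gateway and VirtualService are applied (the external IP placeholder is whatever the istio-ingressgateway service exposes):

kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl -I http://<EXTERNAL_IP>/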
Hope it helps.

Istio: Ingress for ACME-challenge not working (503)

We are running Istio 1.1.3 on 1.12.5-gke.10 cluster nodes.
We use cert-manager for managing our Let's Encrypt certificates.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: certs.ourdomain.nl
  namespace: istio-system
spec:
  secretName: certs.ourdomain.nl
  renewBefore: 360h # 15d
  commonName: operations.ourdomain.nl
  dnsNames:
    - operations.ourdomain.nl
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
  acme:
    config:
      - http01:
          ingressClass: istio
        domains:
          - operations.ourdomain.nl
Next we see the ACME backend, service (NodePort), and ingress deployed. The auto-generated ingress looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  generateName: cm-acme-http-solver-
  generation: 1
  labels:
    certmanager.k8s.io/acme-http-domain: "1734084804"
    certmanager.k8s.io/acme-http-token: "1476005735"
  name: cm-acme-http-solver-69vzw
  namespace: istio-system
  ownerReferences:
    - apiVersion: certmanager.k8s.io/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: Certificate
      name: certs.ourdomain.nl
      uid: 751011d2-4fc8-11e9-b20e-42010aa40101
spec:
  rules:
    - host: operations.ourdomain.nl
      http:
        paths:
          - backend:
              serviceName: cm-acme-http-solver-fzk8q
              servicePort: 8089
            path: /.well-known/acme-challenge/dnrcr-LRRMdXhBaUefjqpHQx8ytYuk-feEfXu9gW-Ck
status:
  loadBalancer: {}
However, when we try to access the URL operations.ourdomain.nl/.well-known/acme-challenge/dnrcr-LRRMdXhBaUefjqpHQx8ytYuk-feEfXu9gW-Ck we get a 404.
We do have a load balancer for Istio:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  labels:
    app: istio-ingress
    chart: gateways-1.1.0
    heritage: Tiller
    istio: ingress
    release: istio
  name: istio-ingress
  namespace: istio-system
spec:
  selector:
    app: istio-ingress
  servers:
    - hosts:
        - operations.ourdomain.nl
      #port:
      #  name: http
      #  number: 80
      #  protocol: HTTP
      #tls:
      #  httpsRedirect: true
    - hosts:
        - operations.ourdomain.nl
      port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        credentialName: certs.ourdomain.nl
        mode: SIMPLE
        privateKey: sds
        serverCertificate: sds
This interesting article gives a good insight into how the ACME challenge is supposed to work. For testing purposes we have removed port 80 and the HTTPS redirect from our custom gateway, and we have added the auto-generated k8s gateway, listening only on port 80.
Istio is supposed to create a VirtualService for the ACME challenge. This seems to be happening, because now, when we request the acme-challenge URL, we get a 503: upstream connect error or disconnect/reset before headers. I believe this means the request reaches the gateway and is matched by a VirtualService, but there is no service / healthy pod to route the traffic to.
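For reference, the HTTP-01 challenge needs port 80 to stay reachable without an HTTPS redirect while it runs. A minimal sketch of such a server entry, assuming the same selector and host as the gateway above (the resource name here is illustrative):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-ingress-http # illustrative name
  namespace: istio-system
spec:
  selector:
    app: istio-ingress
  servers:
    - hosts:
        - operations.ourdomain.nl
      port:
        name: http
        number: 80
        protocol: HTTP # no httpsRedirect, so /.well-known/acme-challenge/* stays reachable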
We do see some possibly interesting logging in the istio-pilot:
"ProxyStatus": {"endpoint_no_pod":
  {"cm-acme-http-solver-l5j2g.istio-system.svc.cluster.local":
    {"message": "10.16.57.248"}
I have double-checked, and the service mentioned above does have a pod that it is exposing, so I am not sure whether this line is relevant to the issue.
The acme-challenge pods do not have an Istio sidecar. Could this be the issue? If so, why does it apparently work for others?

Why can't I expose the Grafana that comes with Istio through an Istio Gateway?

I am using Helm to install Istio 1.0.0 with --set grafana.enabled=true.
To access the Grafana dashboard, I have to do port forwarding using kubectl, which works okay. However, I want to access it using a public IP, so I am using this gateway YAML file:
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: agung-ns
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
    - port:
        number: 15031
        name: http-grafana
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-global-route
  namespace: agung-ns
spec:
  hosts:
    - "grafana.domain"
  gateways:
    - grafana-gateway
    - mesh
  http:
    - route:
        - destination:
            host: "grafana.istio-system"
            port:
              number: 3000
          weight: 100
I tried to curl it, but it returns a 503 status, which means something is wrong with the routing logic and/or my configuration above:
curl -HHost:grafana.domain http://<my-istioingressgateway-publicip>:15031 -I
HTTP/1.1 503 Service Unavailable
date: Tue, 14 Aug 2018 13:04:27 GMT
server: envoy
transfer-encoding: chunked
Any idea?
I think the problem is that you refer to a service in a different namespace. You need to use the FQDN (grafana.istio-system.svc.cluster.local).
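Applied to your VirtualService, that would look something like this (same resource as above, with only the destination host changed):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-global-route
  namespace: agung-ns
spec:
  hosts:
    - "grafana.domain"
  gateways:
    - grafana-gateway
    - mesh
  http:
    - route:
        - destination:
            host: grafana.istio-system.svc.cluster.local # FQDN instead of grafana.istio-system
            port:
              number: 3000
          weight: 100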
If you need Istio, Grafana, Prometheus, and Jaeger integrated, exposed through a gateway, and with security enabled, you can check out the project I am working on:
https://github.com/kyma-project/kyma
I did expose it like this:
grafana.yml
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "my.dns.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vts
  namespace: istio-system
spec:
  hosts:
    - "my.dns.com"
  gateways:
    - grafana-gateway
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: grafana
            port:
              number: 3000
then:
kubectl apply -f grafana.yml
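To check it, a curl with an explicit Host header should reach Grafana through the gateway (assuming my.dns.com is the host configured above):

curl -HHost:my.dns.com http://<my-istioingressgateway-publicip>/ -I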