How to get TLS 1.3 on GKE - kubernetes

I have a service deployed in Google Kubernetes Engine and have gotten the request to support TLS 1.3 connections on that service. Currently I do not get higher than TLS 1.2. Do I need to define my ingress differently?
My ingress is
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-tls-__CI_ENVIRONMENT_SLUG__
  namespace: __KUBE_NAMESPACE__
  labels:
    app: __CI_ENVIRONMENT_SLUG__
    ref: __CI_ENVIRONMENT_SLUG__
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - __SERVICE_TLS_ENDPOINT__
      secretName: __CI_ENVIRONMENT_SLUG__-service-cert
  rules:
    - host: __SERVICE_TLS_ENDPOINT__
      http:
        paths:
          - path: /
            backend:
              serviceName: service-__CI_ENVIRONMENT_SLUG__
              servicePort: 8080
Master version 1.17.13-gke.600
Pool version 1.17.13-gke.600

Your Ingress resource looks good. I used the same setup as yours and received a message that TLS 1.3 was supported.
The official documentation states:
Default TLS Version and Ciphers
To provide the most secure baseline configuration possible,
nginx-ingress defaults to using TLS 1.2 and 1.3 only, with a secure set of TLS ciphers.
Please check which version of nginx-ingress-controller you are running:
Kubernetes.github.io: Ingress-nginx: Deploy: Detect installed version
You can also check whether TLS 1.3 is enabled in the nginx.conf of your nginx-ingress-controller pod (ssl_protocols TLSv1.2 TLSv1.3;). You will need to exec into the pod.
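For example (the pod name and namespace are placeholders; adjust them to your deployment):
kubectl exec -it <nginx-ingress-pod> -n <ingress-namespace> -- /nginx-ingress-controller --version
kubectl exec -it <nginx-ingress-pod> -n <ingress-namespace> -- grep ssl_protocols /etc/nginx/nginx.conf
The second command should print ssl_protocols TLSv1.2 TLSv1.3; if TLS 1.3 is enabled.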
Troubleshooting steps for ensuring support for TLS 1.3
Does your server (nginx-ingress) support TLS 1.3?
You can check if your Ingress controller supports it by running an online analysis:
SSLLabs.com: SSLTest: Analyze
You should get a message stating that TLS 1.3 is supported.
You can also use alternative online tools:
Geekflare.dev: TLS test
Geekflare.com: 10 Online Tool to Test SSL, TLS and Latest Vulnerability
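If you prefer a command-line check, openssl can also test the server directly (this assumes OpenSSL 1.1.1 or newer, which is required for TLS 1.3):
openssl s_client -connect <your-host>:443 -tls1_3
If the handshake succeeds, the output reports TLSv1.3 as the negotiated protocol; if the server does not support TLS 1.3, the handshake fails.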
Does your client support TLS 1.3?
Please make sure that the client connecting to your Ingress supports TLS 1.3.
The client connecting to the server was not mentioned in the question:
Assuming that it's a web browser, you can check it with a similar tool to the one used for a server:
Clienttest.ssllabs.com:8443: SSLTest: ViewMyClient
Assuming that it is some other tool (curl, nmap, openssl, etc.) please check its documentation for more reference.
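For example, you can check which TLS library curl was built against and force it to use TLS 1.3 (the --tlsv1.3 flag requires curl 7.52.0 or newer, built against a TLS 1.3-capable library):
curl --version
curl -v --tlsv1.3 https://<your-host>/
The second command fails if either the client or the server lacks TLS 1.3 support.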
Additional reference:
Github.com: Kubernetes: Ingress nginx: Enable tls 1.3 in the nginx image
En.wikipedia.org: Wiki: Transport Layer Security Adoption

Related

How do I make it accessible from outside my local k8s through traefik

I'm messing around with Kubernetes and I've set up a cluster on my local PC using kind. I have also installed Traefik as an ingress controller, and I have already managed to access an API that I have deployed in the cluster, as well as Grafana, through some ingresses (without doing port forwards or anything like that). But with Mongo I can't. While the API and Grafana need an IngressRoute, Mongo needs an IngressRouteTCP.
The IngressRouteTCP that I have defined is this:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mongodb-ingress-tcp
  namespace: mongo_namespace
spec:
  entryPoints:
    - web
  routes:
    - match: HostSNI(`mongo.localhost`)
      services:
        - name: mongodb
          port: 27017
But I get this error:
I know I can use a port forward, but I want to do it this way (with an ingress).
Thanks a lot.
You need to specify the tls parameters, like this:
tls:
  certResolver: "bar"
  domains:
    - main: "snitest.com"
      sans:
        - "*.snitest.com"
To avoid using TLS, you need to match all routes with HostSNI(`*`).
It is important to note that the Server Name Indication is an extension of the TLS protocol.
Hence, only TLS routers will be able to specify a domain name with that rule.
However, there is one special use case for HostSNI with non-TLS routers: when one wants a non-TLS router that matches all (non-TLS) requests, one should use the specific HostSNI(`*`) syntax.
DOCS
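Applied to the MongoDB example from the question, a non-TLS route would look something like this (a sketch based on the manifest above):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mongodb-ingress-tcp
  namespace: mongo_namespace
spec:
  entryPoints:
    - web
  routes:
    # HostSNI(`*`) matches all non-TLS TCP traffic on this entry point
    - match: HostSNI(`*`)
      services:
        - name: mongodb
          port: 27017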

Loki behind https ingress configuration with helm

Is there any way to configure promtail to send logs to loki via https-ingress?
promtail ---> https-ingress ---> loki
I used this promtail helm chart and configured the Loki URL as http://gateway.loki.monitoring.example.com:80/loki/api/v1/push. After I deploy the promtail chart, I see the errors below in the promtail pod:
level=error ts=2022-03-28T14:10:23.740581978Z caller=client.go:360 component=client host=gateway.loki.monitoring.example.com:80 msg="final error sending batch" status=308 error="server returned HTTP status 308 Permanent Redirect (308): <html>"
I even specified https in the Loki URL, as https://gateway.loki.monitoring.example.com:80/loki/api/v1/push, but it still fails:
level=warn ts=2022-03-28T14:27:47.976570998Z caller=client.go:349 component=client host=gateway.loki.monitoring.example:80 msg="error sending batch, will retry" status=-1 error="Post \"https://gateway.loki.monitoring.example.com:80/loki/api/v1/push\": http: server gave HTTP response to HTTPS client"
I found this config https://grafana.com/docs/loki/latest/installation/helm/#run-loki-behind-https-ingress, but it is outdated
NOTE:
I have not configured any HTTPS on the Loki side.
I configured the loki-distributed chart's ingress like below (the rest of the ingress config is default):
...
ingress:
  # -- Specifies whether an ingress for the gateway should be created
  enabled: true
  # -- Ingress Class Name. MAY be required for Kubernetes versions >= 1.18
  ingressClassName: monitoring-ingress
  # -- Annotations for the gateway ingress
  annotations:
    cert-manager.io/cluster-issuer: monitoring-cluster-issuer
  # -- Hosts configuration for the gateway ingress
  hosts:
    - host: gateway.loki.monitoring.example.com
      paths:
        - path: /
          # -- pathType (e.g. ImplementationSpecific, Prefix, .. etc.) might also be required by some Ingress Controllers
          pathType: Prefix
  # -- TLS configuration for the gateway ingress
  tls:
    - secretName: loki-gateway-tls-certs
      hosts:
        - gateway.loki.monitoring.example.com
...
Did I miss any ingress config on the Loki side?
After experimenting for a while, I realized I needed to remove the port and specify https in the Loki URL. It should look like this:
https://gateway.loki.monitoring.example.com/loki/api/v1/push
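In the promtail chart's values, that URL goes in the client configuration; a minimal sketch (the exact key layout differs between chart versions):
config:
  clients:
    - url: https://gateway.loki.monitoring.example.com/loki/api/v1/push
With the port omitted, https defaults to 443, so promtail speaks TLS to the ingress's TLS port instead of sending TLS to the plain-HTTP port 80.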

Nginx Ingress returns 502 Bad Gateway on Kubernetes

I have a Kubernetes cluster deployed on AWS (EKS). I deployed the cluster using the “eksctl” command line tool. I’m trying to deploy a Dash python app on the cluster without success. The default port for Dash is 8050. For the deployment I used the following resources:
pod
service (ClusterIP type)
ingress
You can check the resource configuration files below:
pod-configuration-file.yml
kind: Pod
apiVersion: v1
metadata:
  name: dashboard-app
  labels:
    app: dashboard
spec:
  containers:
    - name: dashboard
      image: my_image_from_ecr
      ports:
        - containerPort: 8050
service-configuration-file.yml
kind: Service
apiVersion: v1
metadata:
  name: dashboard-service
spec:
  selector:
    app: dashboard
  ports:
    - port: 8050 # exposed port
      targetPort: 8050
ingress-configuration-file.yml (host based routing)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: dashboard.my_domain
      http:
        paths:
          - backend:
              serviceName: dashboard-service
              servicePort: 8050
            path: /
I followed the steps below:
kubectl apply -f pod-configuration-file.yml
kubectl apply -f service-configuration-file.yml
kubectl apply -f ingress-configuration-file.yml
I also noticed that the pod deployment works as expected:
kubectl logs my_pod:
and the output is:
Dash is running on http://127.0.0.1:8050/
Warning: This is a development server. Do not use app.run_server
in production, use a production WSGI server like gunicorn instead.
* Serving Flask app "annotation_analysis" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
You can see from the ingress configuration file that I want to do host based routing using my domain. For this to work, I have also deployed an nginx-ingress. I have also created an “A” record set using Route53
that maps the “dashboard.my_domain” to the nginx-ingress:
kubectl get ingress
and the output is:
NAME                HOSTS                 ADDRESS                                      PORTS   AGE
dashboard-ingress   dashboard.my_domain   nginx-ingress.elb.aws-region.amazonaws.com   80      93s
Moreover,
kubectl describe ingress dashboard-ingress
and the output is:
Name:             dashboard-ingress
Namespace:        default
Address:          nginx-ingress.elb.aws-region.amazonaws.com
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host            Path  Backends
  ----            ----  --------
  host.my-domain
                  /     dashboard-service:8050 (192.168.36.42:8050)
Annotations:      nginx.ingress.kubernetes.io/force-ssl-redirect: false
                  nginx.ingress.kubernetes.io/rewrite-target: /
                  nginx.ingress.kubernetes.io/ssl-redirect: false
Events:           <none>
Unfortunately, when I try to access the Dash app in the browser, I get a
502 Bad Gateway error from nginx. Could you please help me? My Kubernetes knowledge is limited.
Thanks in advance.
It had nothing to do with Kubernetes or AWS settings. I had to change my Python Dash code from:
if __name__ == "__main__":
    app.run_server(debug=True)
to:
if __name__ == "__main__":
    app.run_server(host='0.0.0.0', debug=True)
The addition of host='0.0.0.0' did the trick! By default, Dash binds only to 127.0.0.1 inside the container, so traffic forwarded by the Service never reached the app; binding to 0.0.0.0 makes it listen on all interfaces.
I think you'll need to check whether any other service is exposed at path / on the same host.
Secondly, try removing the rewrite-target annotation. Also, could you please update your question with the output of kubectl describe ingress <ingress_name>?
I would also suggest using the backend-protocol annotation with the value HTTP (the default value; you can omit it if the dashboard application is not SSL-configured and only this application is served at the said host). You may need to add it if multiple applications are served at this host: create one Ingress with backend-protocol: HTTP for non-SSL services, and another with backend-protocol: HTTPS to serve traffic to SSL-enabled services.
For more information on the backend-protocol annotation, kindly refer to this link.
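For example, a minimal sketch of how the annotation is set on the Ingress from the question:
metadata:
  annotations:
    # HTTP is the default; set HTTPS instead for an SSL-enabled backend
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"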
I have often faced this issue in my Ingress Setup and these steps have helped me resolve it.

Letsencrypt/Cert Manager workflow for apps served through Istio VirtualService/Gateway

Is there a common (or any) workflow to issue and renew LE certificates for apps configured in an Istio VirtualService & Gateway? The Istio docs only cover an Ingress use case, and I don't think they cover handling renewals.
My real world use case is about making this work with a wildcard cert and custom applications, but for the sake of simplicity, I want to figure this out using the Prometheus service installed with the Istio demo. The VirtualService and Gateway are necessary for my real world use case.
Here is how I am currently serving Prometheus over https with a self-signed cert. I am running Istio version 1.5.2 on GKE K8s version 1.15.11. Cert Manager is installed as well.
So how would I adapt this to use Cert Manager for issuing and renewing an LE cert for prom.example.com?
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: prometheus-gateway
  #namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: http-prom
        protocol: HTTPS
      hosts:
        - "prom.example.com"
      tls:
        mode: SIMPLE # enables HTTPS on this port
        serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
        privateKey: /etc/istio/ingressgateway-certs/tls.key
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: prometheus-vs
spec:
  hosts:
    - "prom.example.com"
  gateways:
    - prometheus-gateway
  http:
    - match:
        - port: 443
      route:
        - destination:
            host: prometheus
            port:
              number: 9090
TL;DR
Configure cert-manager with DNS domain verification to issue the certificate; renewal is handled automatically.
A few notes on the example in the Istio docs that will hopefully clarify the workflow:
cert-manager knows nothing about Istio; its key role is to issue and renew certificates, then save them to a Secret object in Kubernetes.
LE ACME verification is typically done with DNS, e.g. AWS Route53.
The issued certificate Secret lives in a specific Kubernetes namespace and is not visible outside it.
Istio knows nothing about cert-manager; all it needs is the issued certificate Secret, which is configured in the Gateway with SDS. This means two things:
The name of the SDS secret must match the one cert-manager produces (this is the only link between them).
The secret must be in the same namespace as the Istio gateway.
Finally, your VirtualServices just need a Gateway that is configured properly as above. The good news is that a VirtualService can reference a Gateway in any namespace if you use its fully qualified name.
So you can have your gateway(s) in the same namespace where you issue the Certificate object to avoid copying secrets around; your VirtualServices can then live in any namespace, as long as they use the full gateway name. A minimal sketch follows.
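Adapted to the Prometheus example from the question (the letsencrypt-prod issuer name is an assumption, and the cert-manager API version may differ for your release; older releases used cert-manager.io/v1alpha2):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: prom-example-com
  namespace: istio-system # same namespace as the ingress gateway
spec:
  secretName: prom-example-com-cert
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - prom.example.com
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: prometheus-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https-prom
        protocol: HTTPS
      hosts:
        - "prom.example.com"
      tls:
        mode: SIMPLE
        credentialName: prom-example-com-cert # must match the Certificate's secretName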
There is an example of this in the Istio documentation:
This example demonstrates the use of Istio as a secure Kubernetes Ingress controller with TLS certificates issued by Let’s Encrypt.
You will start with a clean Istio installation, create an example service, expose it using the Kubernetes Ingress resource and get it secured by instructing cert-manager (bundled with Istio) to manage issuance and renewal of TLS certificates that will be further delivered to the Istio ingress gateway and hot-swapped as necessary via the means of Secrets Discovery Service (SDS).
Hope it helps.

mutual TLS based on specific IP

I'm trying to configure nginx-ingress for mutual TLS, but only for a specific remote address. I tried to use a snippet, but with no success:
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($remote_addr = 104.214.x.x) {
    auth-tls-verify-client: on;
    auth-tls-secret: namespace/nginx-ca-secret;
    auth-tls-verify-depth: 1;
    auth-tls-pass-certificate-to-upstream: false;
  }
The auth-tls annotations work when applied as annotations, but inside the snippet they don't.
Any idea how to configure this or maybe a workaround to make it work?
The job of mTLS is basically restricting access to a service by requiring the client to present a certificate. If you expose a service and then require only clients with specific IP addresses to present a certificate, the entire rest of the world can still access your service without a certificate, which completely defeats the point of mTLS.
If you want more info, here is a good article that explains why TLS and mTLS exist and what the difference between them is.
There are two ways to make a sensible setup out of this:
Just use regular TLS instead of mTLS
Make a service in your cluster require mTLS to access it regardless of IP addresses
If you go for option 2, you need to configure the service itself to use mTLS, and then configure the ingress to pass the client certificate through to the service. Here's a sample nginx ingress configuration that will work with a service that expects mTLS:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mtls-sample
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "https"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: mtls-svc
              servicePort: 443