I have a GKE cluster, external domain name, and letsencrypt certs.
When I use a load balancer and instruct pods to use certificates that I generate with certbot, performance is quite good. But I have to renew the certs manually, which takes a lot of effort.
When I use an ingress controller and let cert-manager renew certs by itself, the additional hops add latency and make the traffic path more complex. The connection is HTTP/2 from client to ingress, and then becomes plain HTTP from ingress to pods.
Is there any way to remove the extra hops when using the NGINX ingress controller and eliminate the performance issue?
There is no extra hop if you are using cert-manager with an ingress.
You can use cert-manager; it will save the certificate into a Secret and attach it to the Ingress. However, it's up to you where you do TLS termination.
You can also pass the HTTPS traffic through to the pod for end-to-end encryption. If you do TLS termination at the ingress level, the backend traffic to the pod will be plain HTTP.
Internet > Ingress (TLS cert from Secret) > plain HTTP (if you terminate at the Ingress) > Service > Pods
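As a sketch of the cert-manager-with-Ingress setup, an Ingress annotated for cert-manager might look like this (the issuer name, host, and Service name are placeholders, assuming a ClusterIssuer named letsencrypt-prod exists):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress            # hypothetical name
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # assumed ClusterIssuer
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com                  # your external domain
    secretName: example-com-tls    # cert-manager stores the issued cert here
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service       # hypothetical backend Service
            port:
              number: 80
```

If you want end-to-end encryption with ingress-nginx instead of termination at the ingress, the `nginx.ingress.kubernetes.io/ssl-passthrough` annotation (together with the controller's `--enable-ssl-passthrough` flag) sends the TLS connection straight to the pod.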
If you want to use the certificate inside the pod, you can mount the Secret into the pod, and the application can use it from there.
https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod
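A minimal sketch of mounting a TLS Secret into a pod, following that page (the pod name, image, and Secret name are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-tls              # hypothetical name
spec:
  containers:
  - name: app
    image: nginx                  # stands in for your application image
    volumeMounts:
    - name: tls
      mountPath: /etc/tls         # app reads tls.crt / tls.key from here
      readOnly: true
  volumes:
  - name: tls
    secret:
      secretName: example-com-tls # e.g. the Secret written by cert-manager
```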
If you use the Secret in a pod, you might need to restart the pod when the certificate is renewed; in that case you can use Reloader to auto-roll the pods.
Reloader : https://github.com/stakater/Reloader
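Per the Reloader README, you can watch a specific Secret via an annotation on the Deployment; the deployment name, image, and Secret name here are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app                      # hypothetical name
  annotations:
    secret.reloader.stakater.com/reload: "example-com-tls"  # roll pods when this Secret changes
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: nginx             # stands in for your application image
```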
I'm trying to set up a reverse proxy to work with Kubernetes. I currently have an ingress load balancer using MetalLB and Contour with Envoy.
I also have a working certificate issuer with Let's Encrypt and cert-manager allowing services and deployments to get certificates for HTTPS.
My problem is getting HTTPS certificates for other websites and servers that are not run in Kubernetes but are in our DNS range, and I feel like I am missing something.
My IP for my load-balancer is 10.64.1.35 while the website I am trying to get a certificate for is 10.64.0.145.
Thank you if you could offer any help!
I think that will never work. Something needs to request a certificate; in Kubernetes this is usually triggered by the presence of a resource. cert-manager listens for the creation of that resource and requests a certificate from Let's Encrypt.
Then that certificate must be configured in some load balancer, and the load balancer must reload its configuration (that's what MetalLB does).
When you have applications running elsewhere outside of this setup, those applications will never have certificates.
If you really want that MetalLB load balancer to request and attach the certificates, you'll need to create a resource in Kubernetes and proxy all the traffic for that application through Kubernetes.
myapp.com -> metallb -> kubernetes -> VPS
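One hedged sketch of that proxying approach: a selector-less Service with a manual Endpoints object pointing at the non-Kubernetes server, which an Ingress (and thus cert-manager) can then target; the names and port are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-site            # hypothetical name
spec:
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-site            # must match the Service name
subsets:
- addresses:
  - ip: 10.64.0.145              # the server outside Kubernetes
  ports:
  - port: 80
```

An Ingress whose backend is `external-site` can then carry the usual cert-manager annotations, at the cost of the extra hops described below.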
However, I think the better way for you is to set up Let's Encrypt on the server where you need it. That way you avoid two additional network hops and save resources on the MetalLB and Kubernetes server(s).
I need help configuring TLS/SSL on a k8s cluster for internal pod-to-pod communication over HTTPS. I am able to curl http://servicename:port over HTTP, but over HTTPS I end up with an NSS error on the client pod.
I generated a self-signed cert with CN=*.svc.cluster.local (as all the services in k8s end with this) and I am stuck on how to configure it on k8s.
Note: I exposed the main svc on port 8443, and I am doing this in my local Docker Desktop setup on a Windows machine.
No Ingress --> because communication happens within the cluster itself.
Without any CRD (custom resource definition) or cert-manager
You can store your self-signed certificate inside a Kubernetes Secret and mount it into the pod as a volume.
If you don't want to use a CRD or cert-manager, you can use the native Kubernetes API to generate a certificate that will be trusted by all the pods by default.
https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
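Following that page, the flow is roughly: submit a CertificateSigningRequest, approve it, and fetch the issued certificate. A sketch of the CSR object (the signer name is a hypothetical custom signer as in the linked tutorial, and the base64 request is a placeholder):

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  request: <base64-encoded PKCS#10 CSR>   # e.g. cat server.csr | base64 | tr -d '\n'
  signerName: example.com/serving         # hypothetical signer from the tutorial
  usages:
  - digital signature
  - key encipherment
  - server auth
```

After `kubectl certificate approve my-svc.my-namespace` and once the signer has issued it, the signed certificate appears in the object's `.status.certificate` field.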
Managing the self-signed certificate across all pods and services might be hard, so I would suggest using a service mesh. A service mesh encrypts the network traffic using mTLS.
https://linkerd.io/2.10/features/automatic-mtls/#:~:text=By%20default%2C%20Linkerd%20automatically%20enables,TLS%20connections%20between%20Linkerd%20proxies.
With a service mesh, mutual TLS for service-to-service communication is managed by the sidecar containers.
https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/
In this case, no Ingress is required and no cert-manager is required.
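For instance, with Istio a mesh-wide mTLS policy is a single resource (per the linked mTLS migration docs):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applying in the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT            # sidecars accept only mutual-TLS traffic
```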
GKE Ingress: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
Nginx Ingress: https://kubernetes.github.io/ingress-nginx/
Why GKE Ingress
GKE Ingress can be used with Google's managed SSL certificates. These certificates are deployed on the load balancer's edge servers, which results in very low TTFB (time to first byte).
What's wrong about GKE Ingress
The HTTP/domain routing is done in the load balancer using 'forwarding rules', which are very pricey: around $7.2 per rule, and each domain requires one rule.
Why Nginx Ingress
Nginx Ingress also creates a (TCP/UDP) load balancer, but the HTTP/domain routing is specified in the ingress controller. Since the routing is done inside the cluster, there are no additional costs for adding domains to the rules.
What's wrong about Nginx Ingress
To enable SSL, we can use cert-manager. But as I mentioned above, Google's managed certificates are deployed on edge servers, which results in very low latency.
My Question
Is it possible to use both of them together, so that HTTPS requests first hit the GKE ingress, which terminates SSL and routes the traffic to the Nginx ingress, which then routes it to the corresponding pods?
It is not possible to point an Ingress to another Ingress. Furthermore, in your particular case, it is also not possible to point the GCE ingress class to Nginx, since it relies on an HTTP(S) Load Balancer, which can only have GCE instances/instance groups (basically the node pools in GKE) or GCS buckets as backends.
If you deploy an Nginx ingress on GKE, it will spin up a Network Load Balancer, which is not a valid backend for the HTTP(S) Load Balancer.
So it is possible neither via Ingress nor via GCP infrastructure features. However, if you need the GCE ingress class to be hit first and then manage further routing with Nginx, you might want to consider running Nginx as a plain Kubernetes Service/Deployment to manage the incoming traffic once it is within the cluster network.
You can create a ClusterIP Service to access your Nginx deployment internally and, from there, use cluster-local hostnames to route to other services/applications within the cluster.
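A hedged sketch of that in-cluster Nginx Service (the names are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-nginx         # hypothetical name
spec:
  type: ClusterIP              # reachable only inside the cluster network
  selector:
    app: internal-nginx        # matches the Nginx Deployment's pod labels
  ports:
  - port: 80
    targetPort: 80
```

The GCE Ingress then routes to this Service, and the Nginx config inside the cluster handles further routing by cluster-local hostnames such as `internal-nginx.default.svc.cluster.local`.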
Moving from VMs to Kubernetes.
We run our services on multiple VMs with a VIP in front of them. Clients access the VIP, and the VIP routes traffic to the services. We use an SSL cert for the VIP, and the VIP-to-VM traffic also uses HTTPS.
Each service is deployed to a VM together with a JKS file. This JKS file contains a cert for exposing HTTPS and also for communicating with an SSL-enabled database.
How do we achieve the same thing in a Kubernetes cluster? We need HTTPS for the VIP and the services, and also for communication from the services to an SSL-enabled database.
Depending on the platform where you are running Kubernetes (on-premises, AWS, GKE, GCE, etc.), you have several ways to do it, but I will describe a solution that works on all platforms: an Ingress with HTTPS termination on it.
In Kubernetes you can provide access to your application inside a cluster using an Ingress object. It can provide load balancing, HTTPS termination, routing by path, etc. In most cases, you can use an Ingress controller based on Nginx, which also provides TCP load balancing and SSL passthrough if you need them.
To provide routing from users to your services, you need to:
Deploy your application as a combination of Pods and a Service for them.
Deploy an Ingress controller, which will manage your Ingress objects.
Create a Secret for your certificate.
Create an Ingress object which points to your service, with TLS settings that ask the Ingress to use the Secret containing your certificate, like this:
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: foo-secret
Now, when you call the foo.bar.com address, the Ingress uses FQDN-based routing and provides an HTTPS connection between your client and the pods in the cluster via a Service object, which knows where exactly your pod is. You can read how it works here and here.
As for encrypted communication between your services inside a cluster: you can use the same scheme with Secrets to provide SSL keys to all your services, and set up the Service to use the HTTPS endpoint of an application instead of HTTP. Technically it is the same as using an https upstream in installations without Kubernetes, but all the Nginx configuration will be generated automatically based on your Service and Ingress objects.
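For example, with ingress-nginx you can tell the controller to speak HTTPS to the upstream pods via an annotation and point the Service at the pods' TLS port; the names and ports here are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                # hypothetical name
spec:
  selector:
    app: my-app
  ports:
  - name: https
    port: 443
    targetPort: 8443          # the container's TLS port
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"  # encrypt ingress-to-pod traffic
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: foo-secret
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 443
```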
I have an Istio ingress which handles traffic going into the microservices, and traffic between microservices is encrypted within the Istio domain. But I don't want to expose the Istio ingress publicly.
So I tried deploying an NGINX or HAProxy ingress (with HTTPS certs) and pointing it to the Istio ingress, and everything works great.
My only worry now is that the traffic between the NGINX ingress (HTTPS termination) and the Istio ingress is not encrypted.
What is the usual way in Istio to get full encryption of traffic while using an NGINX/HAProxy ingress?
I guess one way is HAProxy in TCP mode to the Istio ingress, with certs on the Istio ingress. I haven't tried it, but it should work. A wilder idea is running the NGINX ingress within the Istio mesh, but then I would lose some Istio ingress capabilities.
What is the recommended way, or do you have any suggestions? For example, how is Istio usually exposed in a real prod environment?
Since I don't use a cloud load balancer on the Voyager instances but expose Voyager/HAProxy on a HostPort,
I collocated Voyager (HostPort) and Istio (HostPort) via a DaemonSet/node selector (and taints) on the same machines, called frontend. Then I just pointed Voyager to load-balance localhost on the HostPort I specified for Istio:
backendRule:
- 'server local-istio localhost:30280'
This way no unencrypted traffic traverses the network between Voyager/HAProxy and the Istio ingress, since they now communicate on the same host. I have two frontend nodes which are being load-balanced, so I have redundancy. It's kind of an improvisation and breaks Kubernetes logic, but on the other hand it works great.
The other solution was to use self-signed certificates on Istio, then just point Voyager/HAProxy at the Istio instances. But this requires multiple terminations, since Voyager also terminates HTTPS. The advantage is that you can leave Kubernetes to distribute the Voyager and Istio instances; there is no need to bind them to specific machines.
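That self-signed-cert option would terminate TLS again at the Istio ingress gateway. A hedged Gateway sketch (the gateway name and credential Secret are placeholders; the Secret must exist in the ingress gateway's namespace):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: tls-gateway                   # hypothetical name
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE                    # terminate TLS here with the cert below
      credentialName: selfsigned-cert # hypothetical Secret with the self-signed cert/key
    hosts:
    - "*"
```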