What's the easiest way to add TLS to a Kubernetes service? - kubernetes

I have a simple web server exposed publicly on Kubernetes on GKE and a domain registered. I'm looking to add TLS to this so it's accessible via HTTPS. I've heard a lot about using Let's Encrypt and ended up attempting this: https://github.com/jetstack/cert-manager/blob/master/docs/tutorials/acme/quick-start/index.rst but found it totally overwhelming. Is there a simpler approach to using Let's Encrypt, given that my deployment is just a single service and pod?
The config I'm using is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: gcr.io/my-repo
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:
            initialDelaySeconds: 10
            httpGet:
              path: /healthz
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-balancer-service
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    run: web
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-app
spec:
  rules:
    - host: my.domain.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: web-balancer-service
              servicePort: 8080
========================================
EDIT: Following Utku Özdemir's suggestion, I tried to codify those changes into YAML. I created the IP address with:
gcloud compute addresses create example-ip-address --global
and the certificate and provisioning with the manifests in this gist: https://gist.github.com/nickponline/ab74d3d179e21474551b7596c6478eea
Everything provisions correctly, but when I inspect the ManagedCertificate with kubectl describe ManagedCertificates example-certificate it says
Spec:
  Domains:
    app.domain.xyz
Status:
  Certificate Name:    xxxxxxxxxxxxxxxxxx
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  app.domain
    Status:  FailedNotVisible
Events:  <none>
I've waited 24 hours, so I assume this isn't going to change.

Since you use GKE's own ingress controller, creating an Ingress resource triggers the creation of a load balancer in Google Cloud Platform. SSL termination is normally the ingress controller's responsibility, so in this setup it is that GCP load balancer that does the SSL termination.
This means cert-manager will not work for your case: the certificates live outside of your cluster, and the traffic is already SSL-terminated before it reaches your cluster.
Luckily, GCP has self-provisioned SSL (Let's Encrypt) support. To make use of it, you need to follow the steps below:
Go to the Load Balancing screen on GCP, switch to the advanced view, and open the Certificates tab.
Create a new SSL certificate with "Create Google-managed certificate" selected. In the domain field, enter the exact domain you want the SSL certificate for.
Go to the External IP Addresses screen and reserve a new static IP address. Choose the type to be global (at the time of writing, the GCP ingress controller only supports global IP addresses).
Take note of the static IP that you reserved (in this example it is 34.95.84.106).
Go to your domain registrar, and add an A type record for your domain (the one in the SSL certificate) to point to the static IP you allocated. In this example, it would be my-app.example.com -> 34.95.84.106.
Finally, edit your Ingress to add two annotations that tell Google Cloud's ingress controller to use the static IP you reserved and the certificate you created (a third, optional annotation blocks plain HTTP). See the ingress example below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-app
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: my-ssl-certificate  # the name of the SSL certificate resource you created
    kubernetes.io/ingress.global-static-ip-name: my-static-ip      # the name of the static IP resource you created
    kubernetes.io/ingress.allow-http: "false"                      # if you want to block plain HTTP
spec:
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: web-balancer-service
              servicePort: 8080
Apply it, and verify that the changes are reflected by going to the Load Balancers screen on GCP.
Important Notes:
If a GCP load balancer was already created by an existing Ingress, annotation changes on that Ingress will not be reflected on the existing load balancer. Therefore, delete your existing Ingress, make sure the existing load balancer disappears, and recreate the Ingress with the correct annotations so the load balancer is configured correctly.
For the Let's Encrypt provisioning to work, your DNS record must be in place; domain ownership is checked via DNS before the certificate is issued. Also, the initial provisioning can take quite some time (up to half an hour).
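If you prefer to keep this entirely in YAML rather than clicking through the console (which is essentially what the gist in the question's edit attempts), GKE also exposes the Google-managed certificate as a ManagedCertificate resource. Below is a minimal sketch, assuming the resource name example-certificate and the reserved IP name example-ip-address from the question; the networking.gke.io API version varies with the GKE version, so treat v1beta1 as illustrative. The FailedNotVisible status in the question's edit is exactly the DNS symptom described above: the domain in the certificate does not (yet) resolve to the reserved static IP.
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: example-certificate
spec:
  domains:
    - app.domain.xyz                 # must exactly match the DNS record pointing at the static IP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-app
  annotations:
    kubernetes.io/ingress.global-static-ip-name: example-ip-address   # name of the reserved global IP
    networking.gke.io/managed-certificates: example-certificate       # name of the ManagedCertificate above
spec:
  rules:
    - host: app.domain.xyz
      http:
        paths:
          - path: /*
            backend:
              serviceName: web-balancer-service
              servicePort: 8080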

Related

Kubernetes ingress to pod running on same host?

We are just getting started with k8s (bare metal on Ubuntu 20.04). Is it possible for ingress traffic arriving at a host for a load balanced service to go to a pod running on that host (if one is available)?
We have some apps that use client-side consistent hashing (on customer ID) to select a service instance to call. The service instances are stateless but maintain in-memory ML models for each customer. So it is useful (but not essential) to have repeated requests for a given customer go to the same service instance. Then we can just use anti-affinity to have one pod per host (sketched below).
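For reference, "one pod per host" via anti-affinity would look roughly like the following fragment in the Deployment's pod template. This is only a sketch; the app: starterservice label is borrowed from the answer below.
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: starterservice
          topologyKey: kubernetes.io/hostname   # at most one matching pod per node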
Our existing service discovery mechanism lets the clients find all the instances of the service and the nodes they are running on. All our k8s nodes are running the Nginx ingress controller.
I finally got this figured out. This was way harder than it should be IMO! Update: It's not working. Traffic frequently goes to the wrong pod.
The service needs externalTrafficPolicy: Local (see docs).
apiVersion: v1
kind: Service
metadata:
  name: starterservice
spec:
  type: LoadBalancer
  selector:
    app: starterservice
  ports:
    - port: 8168
  externalTrafficPolicy: Local
The Ingress needs nginx.ingress.kubernetes.io/service-upstream: "true" (service-upstream docs).
The nginx.ingress.kubernetes.io/server-alias: "~^starterservice-[a-z0-9]+\\.example\\.com" bit is because our service discovery updates DNS so each instance of the service includes the name of the host it is running on in its DNS name.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: starterservice
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/server-alias: "~^starterservice-[a-z0-9]+\\.example\\.com"
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
    - host: starterservice.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: starterservice
                port:
                  number: 8168
So now a call https://starterservice-foo.example.com will go to the instance running on k8s host foo.
I believe Sticky Sessions is what you are looking for. Ingress does not communicate directly with pods, but with services. Sticky sessions try to bind requests from the same client to the same pod by setting an affinity cookie.
This is used for example with SignalR sessions, where the negotiation request has to be on the same host as the following websocket connection.
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
spec:
  rules:
    - http:
        paths:
          - path: /testpath
            pathType: Prefix
            backend:
              service:
                name: test
                port:
                  number: 80
Affinity mode "balanced" is the default. With "balanced", part of your clients can lose their session when pods are rebalanced (for example, when the pod count changes). Use "persistent" to have users always connect to the same pod (unless it dies, of course). Further reading: https://github.com/kubernetes/ingress-nginx/issues/5944

Problem configuring websphere application server behind ingress

I am running a WebSphere Application Server deployment and service (type LoadBalancer). The WebSphere admin console works fine at https://svcloadbalancerip:9043/ibm/console/logon.jsp
NAME      TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                                                                                    AGE
was-svc   LoadBalancer   x.x.x.x      x.x.x.x       9080:30810/TCP,9443:30095/TCP,9043:31902/TCP,7777:32123/TCP,31199:30225/TCP,8880:31027/TCP,9100:30936/TCP,9403:32371/TCP   2d5h
But if I configure that WebSphere service behind the ingress using an Ingress manifest like:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-check
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /ibm/console/logon.jsp
            backend:
              serviceName: was-svc
              servicePort: 9043
          - path: /v1
            backend:
              serviceName: web
              servicePort: 8080
the URL https://ingressip//ibm/console/logon.jsp doesn't work.
I have tried the rewrite annotation too.
Can anyone help me deploy the ibmcom/websphere-traditional Docker image in Kubernetes using a Deployment and a Service, with the Service mapped behind the ingress, so the WebSphere admin console can be opened through the ingress?
There is a Helm chart available from the IBM team which includes the Ingress resource as well. In your snippet, you are also missing the SSL-related annotations.
https://hub.helm.sh/charts/ibm-charts/ibm-websphere-traditional
https://github.com/IBM/charts/tree/master/stable/ibm-websphere-traditional
I have added the virtual host configuration needed for the admin console to work on port 443 in the following sample.
Please note: exposing the admin console via the ingress is not good practice. Configuration should be done via wsadmin or by extending the base Dockerfile. Any changes made through the console will be lost when the container restarts.
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: websphere
spec:
  type: NodePort
  ports:
    - name: admin
      port: 9043
      protocol: TCP
      targetPort: 9043
      nodePort: 30510
    - name: app
      port: 9443
      protocol: TCP
      targetPort: 9443
      nodePort: 30511
  selector:
    run: websphere
status:
  loadBalancer: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: websphere-admin-vh
  namespace: default
data:
  ingress_vh.props: |+
    #
    # Header
    #
    ResourceType=VirtualHost
    ImplementingResourceType=VirtualHost
    ResourceId=Cell=!{cellName}:VirtualHost=admin_host
    AttributeInfo=aliases(port,hostname)
    #
    #
    #Properties
    #
    443=*
    EnvironmentVariablesSection
    #
    #
    #Environment Variables
    cellName=DefaultCell01
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: websphere
  name: websphere
spec:
  containers:
    - image: ibmcom/websphere-traditional
      name: websphere
      volumeMounts:
        - name: admin-vh
          mountPath: /etc/websphere/
      ports:
        - name: app
          containerPort: 9443
        - name: admin
          containerPort: 9043
  volumes:
    - name: admin-vh
      configMap:
        name: websphere-admin-vh
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-check
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - http:
        paths:
          - path: /ibm/console
            backend:
              serviceName: websphere
              servicePort: 9043
Exposing both the adminhost and defaulthost via ingress isn't possible, or at least I've never figured out how to accomplish it. The crux of the issue is that ingress listens on port 80 or port 443 and forwards your request to the corresponding port on the container. Thus, the Host header of your request contains that port. I don't know enough about WAS channels/virtualhosts to understand how this works exactly, but in order for accessing WAS endpoints over any port other than the one listed for the endpoint in WAS config to work, the websphere-traditional image has to set a property to extract the port it should use for things like checking against virtualhost hostalias entries and issuing redirects from the Host header (com.ibm.ws.webcontainer.extractHostHeaderPort).
The problem becomes, when it uses that port, that port needs to be listed as a host alias for the virtual host in order for the traffic to be let through to the application. And since a combination of wildcard host and specific port can only be a host alias on one virtual host at a time, they were set up as host aliases on defaulthost so that web applications will work via ingress, but this makes it impossible to also access the admin console since that is served via a separate virtualhost which doesn't (and as far as I know can't) have the host alias entries set up to allow traffic with port 443 in its host header through. I haven't had to figure out how to get this working because kubectl port-forward has been sufficient to get at the admin console for the times I've needed to consult something, and you can't make changes anyway because they'll disappear when the pod restarts and a new one is started from the same (unchanged) image.

Is there a way to configure an EKS service to use HTTPS?

Here is the config for our current EKS service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: main-api
  name: main-api-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  externalTrafficPolicy: Cluster
  ports:
    - name: http-port
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: main-api
  sessionAffinity: None
  type: LoadBalancer
Is there a way to configure it to use HTTPS instead of HTTP?
To terminate HTTPS traffic on Amazon Elastic Kubernetes Service and pass it to a backend:
1. Request a public ACM certificate for your custom domain.
2. Identify the ARN of the certificate that you want to use with the load balancer's HTTPS listener.
3. In your text editor, create a service.yaml manifest file based on the following example. Then edit the annotations to provide the ACM ARN from step 2.
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    # TODO: Fill in with the ARN of your certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:{region}:{user id}:certificate/{id}
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    app: echo-pod
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8080
4. To create the Service object, run the following command:
$ kubectl create -f service.yaml
5. To return the DNS URL of the service of type LoadBalancer, run the following command:
$ kubectl get service
Note: If you have many active services running in your cluster, be sure to get the URL of the right service of type LoadBalancer from the command output.
6. Open the Amazon EC2 console, and then choose Load Balancers.
7. Select your load balancer, and then choose Listeners.
8. For Listener ID, confirm that your load balancer port is set to 443.
9. For SSL Certificate, confirm that the SSL certificate you defined in the YAML file is attached to your load balancer.
10. Associate your custom domain name with your load balancer's DNS name.
11. Finally, in a web browser, test your custom domain over HTTPS:
https://yourdomain.com
You should use an Ingress (and not a Service) to expose HTTP/S outside of the cluster.
I suggest using the ALB Ingress Controller.
There is a complete walkthrough here,
and you can see how to set up TLS/SSL here.
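For reference, a minimal sketch of an ALB Ingress with TLS, assuming a certificate already issued in ACM and the main-api-svc Service from the question. The annotation names are the standard alb.ingress.kubernetes.io ones, but the values (scheme, listen ports, certificate ARN) are placeholders you would fill in from the walkthrough:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-api-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    # TODO: replace with the ARN of your ACM certificate
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:{region}:{account id}:certificate/{id}
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: main-api-svc
              servicePort: 80
Note that, as far as I recall, the ALB controller expects the backing Service to be of type NodePort (or ClusterIP when using IP target mode) rather than LoadBalancer, so the Service in the question would need adjusting as well.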

How to expose an Ingress for external access in Kubernetes?

I have a Kubernetes cluster on a private network (a private server, not AWS or Google Cloud), and I created a Service to access my app. However, I need to access it from outside the cluster, so I created an Ingress and added ingress-nginx to the cluster.
This is the YAML I'm using after making several attempts:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: k8s.local
      http:
        paths:
          - path: /
            backend:
              serviceName: nginx
              servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  selector:
    name: nginx
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  # selector:
  #   app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: echoserver
          image: nginx
          ports:
            - containerPort: 80
I applied the YAML with: kubectl create -f file.yaml
In the /etc/hosts file I mapped k8s.local to the IP of the master server.
When I try the following command, either on or off the master server, I get a "Connection refused" message:
$ curl http://172.16.0.18:80/ -H 'Host: k8s.local'
I do not know if it's important, but I'm using Flannel in the cluster.
My idea is just to create a 'hello world' and expose it out of the cluster!
Do I need to change anything in the configuration to allow this access?
YAML file edited:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: k8s.local
      http:
        paths:
          - path: /teste
            backend:
              serviceName: nginx
              servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer # NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: echoserver
          image: nginx
          ports:
            - containerPort: 80
You can deploy the ingress controller as a DaemonSet with host port 80 (see the sketch after these options). The Service of the controller will not matter then, and you can point your domain to every node in your cluster.
You can use a NodePort-type Service, but that forces you onto a port in the 30000 range; you will not be able to use port 80.
Of course, the best solution is to use a cloud provider with a load balancer.
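A rough sketch of the DaemonSet approach, assuming the stock ingress-nginx controller image and the namespace/RBAC objects created by its install manifests (the names, namespace and image tag here are illustrative, not exact):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: nginx-ingress-controller
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      serviceAccountName: nginx-ingress-serviceaccount   # assumes the RBAC from the install manifests
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0   # illustrative tag
          args:
            - /nginx-ingress-controller
          ports:
            - name: http
              containerPort: 80
              hostPort: 80        # bind port 80 on every node
            - name: https
              containerPort: 443
              hostPort: 443       # bind port 443 on every node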
You can make it work with a plain nginx pod, but the recommended method is to install a Kubernetes ingress controller; since you are already using nginx, you can install the nginx ingress controller.
Here is some information on how to install it.
If you want to allow external access you can also expose the nginx ingress controller as a LoadBalancer service. You can also use NodePort but you will have to manually point a load balancer to the port on your Kubernetes nodes.
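A sketch of exposing the controller itself as a LoadBalancer Service (this is the controller's own Service, separate from your application's nginx Service; the namespace and labels below assume the standard ingress-nginx install manifests, so adjust them to your install):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer        # or NodePort on bare metal, fronted by your own load balancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443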
And yes the selector on the 'Service' needs to be:
selector:
  app: nginx
In this case NodePort would work. It opens a high port number on every node (the same port on every node), so you can use any of those nodes. Place a load balancer in front if you want, and point its backend pool at the instances you have running. Do not use ClusterIP; it is only for internal usage.
If you run your cluster bare-metal, you need to tell the nginx-ingress controller to use hostNetwork: true, added in the template/spec part of mandatory.yaml (see the fragment below).
That way the pod running the ingress controller will listen on ports 80 and 443 of the host node.
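For example, the relevant fragment of the controller Deployment in mandatory.yaml would look roughly like this; only the hostNetwork line is the addition, and the surrounding fields are sketched from the stock manifest rather than copied exactly:
spec:
  template:
    spec:
      hostNetwork: true          # bind the controller directly to the node's network namespace
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0   # illustrative tag
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443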
https://github.com/alexellis/inlets is the easiest way of doing what you want.
Note: encryption requires wss://, which requires TLS certs. If you want fully automated encryption plus the ability to use Inlets as a Layer 4 LB, you should use Inlets Pro; it's very cheap compared to other cloud alternatives.
I've also been able to set up the OSS / non-Kubernetes-operator version of Inlets with encryption/wss (WebSockets Secure), using the open-source version of Inlets as a Layer 7 LB (it just took some manual configuration and wasn't fully automated like the Pro version).
https://blog.alexellis.io/https-inlets-local-endpoints/
I was able to get public-internet HTTPS plus the nginx ingress controller in front of minikube, with 2 sites routed using Ingress objects, in roughly 3-4 hours, with no good guide to doing it and being new to Caddy/WebSockets, though experienced with Kubernetes Ingress.
Basically:
Step 1.) Create a $0.007/hour or $5/month VPS on Digital Ocean with a public IP
Step 2.) Point mysite1.com, *.mysite1.com, mysite2.com, *.mysite2.com to the public IP of the VPS.
Step 3.) SSH into the machine and install Inlets + Caddy v1.0.3 + a Caddyfile; here's mine:
mysite1.com, *.mysite1.com, mysite2.com, *.mysite2.com
proxy / 127.0.0.1:8080 {
  transparent
}
proxy /tunnel 127.0.0.1:8080 {
  transparent
  websocket
}
tls {
  max_certs 10
}
Step 4.) Deploy one Inlets deployment on the Kubernetes cluster, use wss to your VPS, and point the Inlets deployment at an ingress controller Service of type ClusterIP (a sketch of this deployment follows the flow description below).
The basics of what's happening are:
1.) Caddy leverages Let's Encrypt to automatically get HTTPS certs for every website you point at the Caddy server.
2.) Your Inlets deployment starts a bidirectional VPN tunnel over WebSockets to the VPS that has a public IP. (Warning: the tunnel is only encrypted if you specify wss, and that requires the server to have a TLS cert, which it gets from Let's Encrypt.)
3.) Caddy is now a public L7 LB/reverse proxy that terminates HTTPS and forwards to your ingress controller over the encrypted WebSocket tunnel. From there it's normal-ish ingress.
4.) Traffic flow: DNS (resolves IP) -> VPS/L7 reverse proxy (HTTPS) -> encrypted VPN tunnel -> Inlets pod from the Inlets Deployment -> L7 cleartext redirect inside the cluster network -> ingress controller Service -> ingress controller pod -> L7 redirect to the ClusterIP services/sites defined by the Ingress objects.
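For completeness, here is a rough sketch of what the in-cluster Inlets client Deployment from step 4 can look like. This assumes the OSS inlets 2.x image, a Caddy/Inlets server reachable at wss://mysite1.com/tunnel (matching the Caddyfile above), an ingress controller Service named ingress-nginx in the default namespace, and a pre-shared token stored in a Secret named inlets-token; the flag names are the OSS client's and should be double-checked against the Inlets docs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inlets-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inlets-client
  template:
    metadata:
      labels:
        app: inlets-client
    spec:
      containers:
        - name: inlets
          image: inlets/inlets:2.7.0                     # illustrative tag
          command: ["inlets"]
          args:
            - client
            - --remote=wss://mysite1.com/tunnel          # the Caddy/Inlets server on the VPS
            - --upstream=http://ingress-nginx.default.svc.cluster.local:80   # the ingress controller Service
            - --token=$(TOKEN)
          env:
            - name: TOKEN
              valueFrom:
                secretKeyRef:
                  name: inlets-token
                  key: token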

Kubernetes Service pointing to External Resource

We have an existing website, lets say example.com, which is a CNAME for where.my.server.really.is.com.
We're now developing new services using Kubernetes. Our first service, /login, is ready to be deployed. Using a mock HTML server, I've been able to deploy two pods with separate services that map to example.com and example.com/login.
What I would like to do is get rid of my mock HTML server and provide a service inside the cluster that points to our full website outside of it. Then I can change the DNS for example.com to point to our Kubernetes cluster, and people will still get the main site from where.my.server.really.is.com.
We are using Traefik for ingress, and these are the changes I've made to the config for the website:
---
kind: Service
apiVersion: v1
metadata:
  name: wordpress
spec:
  type: ExternalName
  externalName: where.my.server.really.is.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wordpress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  backend:
    serviceName: wordpress
    servicePort: 80
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              serviceName: wordpress
              servicePort: 80
Unfortunately, when I visit example.com, rather than getting where.my.server.really.is.com, I get a 503 with the body "Service Unavailable". example.com/login works as expected.
What have I missed?
Following the Traefik documentation on using ExternalName:
When specifying an ExternalName, Træfik will forward requests to the given host accordingly and use HTTPS when the Service port matches 443.
This still requires setting up a proper port mapping on the Service from the Ingress port to the (external) Service port.
I believe you are missing the ports configuration on the Service. Something like:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  ports:
    - name: http
      port: 80
  type: ExternalName
  externalName: where.my.server.really.is.com
You can see a full example in the docs.