Kubernetes Ingress with external Reverse Proxy

I'm trying to make the Kubernetes dashboard externally accessible.
For this I installed nginx-ingress as the ingress controller in my Kubernetes cluster; it is reachable on port 32012 on the Kubernetes node IP.
Now I want to make it accessible via a domain like k8s.xxx.xx. For that I created a TLS certificate from Let's Encrypt to secure the whole thing via HTTPS.
We have an Apache2 reverse proxy which handles the requests going to k8s.xxx.xx and proxies them to port 32012, but here I'm getting an error from nginx-ingress:
The plain HTTP request was sent to HTTPS port
The connection chain is:
User -> https://k8s.xxx.xx/ -> Apache2 reverse proxy (handles the Let's Encrypt certificate and forwards to port 32012 on the Kubernetes node) -> ingress controller -> kubernetes-dashboard pod.
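That error usually indicates that Apache is forwarding plain HTTP to the controller's HTTPS NodePort. A minimal Apache vhost sketch of the setup described above (the node IP and certificate paths are placeholders, and the SSLProxy* relaxations are an assumption to accept the controller's self-signed certificate):

<VirtualHost *:443>
    ServerName k8s.xxx.xx
    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/k8s.xxx.xx/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/k8s.xxx.xx/privkey.pem

    # Speak HTTPS to the backend, since 32012 is an HTTPS port (mod_ssl, mod_proxy)
    SSLProxyEngine on
    SSLProxyVerify none
    SSLProxyCheckPeerName off

    ProxyPass        / https://<node-ip>:32012/
    ProxyPassReverse / https://<node-ip>:32012/
    # Tell the backend the original scheme (mod_headers)
    RequestHeader set X-Forwarded-Proto "https"
</VirtualHost>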
My Kubernetes Ingress resource looks like the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: k8s.xxx.xx
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443

Related

How do you specify a custom port for the ingress controller on Minikube?

It seems to listen on 80 by default - sensible - but if I wanted it to listen for requests on (for example) 8000, how would I specify this?
For clarity, this is the nginx controller enabled via minikube addons enable ingress.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
That means it uses the default ports for HTTP and HTTPS. From the documentation we can read:
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
While in general an Ingress does not allow you to expose arbitrary things on arbitrary ports, in the case of nginx-ingress you can, but you have to use an additional annotation.
For instance, if your pod exposes 443 but you want it to be available on port 8081, you can use the following trick:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice
  namespace: mynamespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/listen-ports-ssl: "8081"
spec:
  tls:
  - hosts:
    - host.org
    secretName: my-host-tls-cert
  rules:
  - host: host.org
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 443
The nginx-ingress controller also allows changing the HTTP and HTTPS ports. See the configuration parameters:
controller.service.ports.http
controller.service.ports.https
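For instance, with the ingress-nginx Helm chart these can be set in a values file (a sketch; the exact layout depends on your chart version):

# values.yaml sketch for the ingress-nginx Helm chart
controller:
  service:
    ports:
      http: 8000    # external HTTP port of the controller's Service
      https: 8443   # external HTTPS port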

Traefik ingress does not work with cluster IP

I am using minikube to develop an application on Kubernetes, with Traefik as the ingress controller.
I can connect to and use my application services via the host URL I defined in the Ingress ("streambridge.local") and set up in /etc/hosts. But when I use the exact same IP address that the DNS entry points to, I cannot reach any of the services and receive "404 page not found". I should mention that I am using the minikube IP, obtained via $(minikube ip). Below are my Ingress config and the commands I used for the DNS.
How can I connect to and use my application services with the IP?
Ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
    traefik.frontend.rule.type: PathPrefixStrip
    traefik.frontend.passHostHeader: "true"
    traefik.backend.loadbalancer.sticky: "true"
    traefik.wss.protocol: http
    traefik.wss.protocol: https
spec:
  rules:
  - host: streambridge.local
    http:
      paths:
      - path: /dashboard
        backend:
          serviceName: dashboard
          servicePort: 9009
      - path: /rdb
        backend:
          serviceName: rethinkdb
          servicePort: 8085
My /etc/hosts:
127.0.0.1 localhost
192.168.99.100 traefik-ui.minikube
192.168.99.100 streambridge.local
This works: http://streambridge.local/rdb
But this does not work: http://192.168.99.100/rdb and returns 404 page not found
You have created Ingress routes that evaluate the Host header of the HTTP request. So while you are actually connecting to the same IP both times, the Host header is once streambridge.local and once 192.168.99.100, for which you did not add a rule in Traefik. It is therefore working exactly as configured.
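If you want the bare IP to work as well, one option (a sketch based on the config above) is to add a rule without a host field, which matches requests with any Host header, including the raw IP:

spec:
  rules:
  - host: streambridge.local
    http:
      paths:
      - path: /rdb
        backend:
          serviceName: rethinkdb
          servicePort: 8085
  # A rule with no "host" matches any Host header, e.g. 192.168.99.100
  - http:
      paths:
      - path: /rdb
        backend:
          serviceName: rethinkdb
          servicePort: 8085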

GKE ingress answers in HTTP even though my service uses HTTPS only [duplicate]

This question already has answers here: How to force SSL for Kubernetes Ingress on GKE (10 answers). Closed 3 years ago.
I have an Nginx-based service configured to accept HTTPS only.
However, the GKE ingress answers HTTP requests over HTTP. I know that GKE Ingress cannot enforce an HTTP -> HTTPS redirect, but is it at least possible to have it talk HTTPS to the service?
rules:
- http:
    paths:
    - path: /*
      backend:
        serviceName: dashboard-ui
        servicePort: 8443
UPDATE: I do have TLS configured on the GKE ingress and on my K8S service. When a request comes in over HTTPS everything works nicely, but HTTP requests get an HTTP response. I implemented an HTTP -> HTTPS redirect in my service, but it didn't help. In fact, all communication between the ingress and my service is now HTTPS, because the service exposes only an HTTPS port.
SOLUTION (thanks to Paul Annetts): Nginx should check the original protocol inside the HTTPS block and redirect, like this:
if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}
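For context, a sketch of where that snippet lives (server name, port, and certificate paths are placeholders): the redirect must sit inside the TLS server block, because the load balancer forwards both HTTP and HTTPS traffic to the same backend.

server {
    listen 8443 ssl;                      # the HTTPS port the service exposes
    server_name dashboard.example.com;    # placeholder

    ssl_certificate     /etc/nginx/tls/tls.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/tls/tls.key;

    # Clients that originally spoke plain HTTP to the load balancer
    # arrive with X-Forwarded-Proto: http and get redirected
    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }

    location / {
        root /usr/share/nginx/html;
    }
}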
Yes, you can configure the GKE Kubernetes Ingress to both terminate HTTPS for external traffic and use HTTPS internally between the Google HTTP(S) Load Balancer and your service inside the GKE cluster.
This is documented here, but it is fairly complex.
For HTTPS to work you will need a TLS certificate and key.
If you have your own TLS certificate and key in the cluster as a secret, you can provide it using the tls section of Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-2
spec:
  tls:
  - secretName: my-secret
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-metrics
          servicePort: 60000
You can also upload your TLS certificate and key directly to Google Cloud and provide an ingress.gcp.kubernetes.io/pre-shared-cert annotation that tells GKE Ingress to use it.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-psc-ingress
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: "my-domain-tls-cert"
...
To use HTTPS for traffic inside Google Cloud, from the load balancer to your GKE cluster, you need the cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}' annotation on your NodePort service. Note that your ports must be named for HTTPS to work.
apiVersion: v1
kind: Service
metadata:
  name: my-service-3
  annotations:
    cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
spec:
  type: NodePort
  selector:
    app: metrics
    department: sales
  ports:
  - name: my-https-port
    port: 443
    targetPort: 8443
  - name: my-http-port
    port: 80
    targetPort: 50001
The load balancer itself doesn't support HTTP -> HTTPS redirection; you need to find another way for that.
As you have NGINX as the entry point into your cluster, you can detect the protocol used to connect to the load balancer with the X-Forwarded-Proto HTTP header and do a redirect, something like this:
if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}

How to expose a Ingress for external access in Kubernetes?

I have a Kubernetes cluster on a private network (a private server, not AWS or Google Cloud) and I created a Service to access the application. However, I need to access it from outside the cluster, so I created an Ingress and added ingress-nginx to the cluster.
This is the YAML I'm using after making several attempts:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: k8s.local
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  selector:
    name: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  # selector:
  #   app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: echoserver
        image: nginx
        ports:
        - containerPort: 80
I applied the YAML like this: kubectl create -f file.yaml
In the /etc/hosts file I added k8s.local pointing to the IP of the master server.
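Assuming the master's address is the 172.16.0.18 used below, that /etc/hosts entry would look like:

172.16.0.18    k8s.local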
When trying the command below, on or off the master server, a "Connection refused" message appears:
$ curl http://172.16.0.18:80/ -H 'Host: k8s.local'
I do not know if it's important, but I'm using Flannel in the cluster.
My idea is just to create a 'hello world' and expose it outside the cluster!
Do I need to change anything in the configuration to allow this access?
YAML file edited:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: k8s.local
    http:
      paths:
      - path: /teste
        backend:
          serviceName: nginx
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer # NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: echoserver
        image: nginx
        ports:
        - containerPort: 80
You can deploy the ingress controller as a DaemonSet with host port 80. The controller's Service will not matter then; you can point your domain at every node in your cluster. A minimal sketch of that approach follows.
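This sketch only shows the hostPort idea; image tag and names are illustrative, and a real controller manifest carries more flags, RBAC, and probes:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - name: controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1  # illustrative tag
        ports:
        - containerPort: 80
          hostPort: 80    # binds port 80 directly on every node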
Alternatively you can use a NodePort-type Service, but that will force you to use a port in the 30000 range; you will not be able to use port 80.
Of course the best solution is to use a cloud provider with a load balancer
You can make it work with a plain nginx pod, but the recommended method is to install a Kubernetes ingress controller; in your case you are using nginx, so you can install an nginx ingress controller.
Here is some information on how to install it.
If you want to allow external access, you can also expose the nginx ingress controller as a LoadBalancer service. You can also use NodePort, but then you will have to manually point a load balancer at the port on your Kubernetes nodes.
And yes, the selector on the Service needs to be:
selector:
  app: nginx
In this case NodePort would work. It opens the same high port number on every node, so you can use any of these nodes. Place a load balancer in front if you want, and point its backend pool at those instances. Do not use ClusterIP; it is only for internal usage. A sketch follows.
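A sketch of such a NodePort Service; the nodePort value is an illustrative pick from the default 30000-32767 range:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080    # same high port opened on every node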
If you run your cluster on bare metal, you need to tell the nginx-ingress controller to use hostNetwork: true, added to the template/spec part of mandatory.yml.
That way the pod running the ingress controller listens on ports 80 and 443 of the host node, for example:
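A sketch of the fragment to merge into the controller Deployment in mandatory.yml:

spec:
  template:
    spec:
      hostNetwork: true    # controller binds ports 80/443 directly on the node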
https://github.com/alexellis/inlets is the easiest way of doing what you want.
Note: encryption requires wss://, which requires TLS certs. If you want fully automated encryption plus the ability to use Inlets as a Layer 4 LB, you should use Inlets Pro; it's very cheap compared to other cloud alternatives.
I've also been able to set up the OSS / non-Kubernetes-operator version of Inlets with encryption / wss (WebSockets Secure), using the open-source version of Inlets as a Layer 7 LB. (It just took some manual configuration; it wasn't fully automated like the Pro version.)
https://blog.alexellis.io/https-inlets-local-endpoints/
I was able to get public-internet HTTPS plus an nginx ingress controller on minikube, with 2 sites routed using Ingress objects, in about 3-4 hours with no good guide, while new to Caddy/WebSockets but experienced with Kubernetes Ingress.
Basically:
Step 1.) Create a $0.007/hour or $5/month VPS on DigitalOcean with a public IP.
Step 2.) Point mysite1.com, *.mysite1.com, mysite2.com, *.mysite2.com at the public IP of the VPS.
Step 3.) SSH into the machine and install Inlets + Caddy v1.0.3 + a Caddyfile; here's mine:
mysite1.com, *.mysite1.com, mysite2.com, *.mysite2.com

proxy / 127.0.0.1:8080 {
    transparent
}

proxy /tunnel 127.0.0.1:8080 {
    transparent
    websocket
}

tls {
    max_certs 10
}
Step 4.) Deploy one Inlets deployment on the Kubernetes cluster, use wss to your VPS, and point the Inlets deployment at an ingress controller Service of type ClusterIP, along the lines of the sketch below.
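A rough sketch of such a deployment; the image tag, controller Service name, and token handling are assumptions based on the OSS inlets docs of that era, not a tested manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: inlets-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inlets-client
  template:
    metadata:
      labels:
        app: inlets-client
    spec:
      containers:
      - name: inlets
        image: inlets/inlets:2.7.0    # assumed OSS image/tag
        command: ["inlets"]
        args:
        - client
        - --remote=wss://mysite1.com/tunnel    # the Caddy /tunnel proxy above
        - --upstream=http://ingress-nginx.ingress-nginx.svc.cluster.local:80    # assumed controller Service
        - --token=REPLACE_WITH_SHARED_TOKEN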
The basics of what's happening are:
1.) Caddy leverages Let's Encrypt's free certificates to automatically get HTTPS certs for every website you point at the Caddy server.
2.) Your Inlets deployment starts a bidirectional VPN tunnel over WebSockets to the VPS with the public IP. (Warning: the VPN tunnel is only encrypted if you specify wss, which requires the server to have a TLS cert, which it gets from Let's Encrypt.)
3.) Caddy is now a public L7 LB/reverse proxy that terminates HTTPS and forwards to your ingress controller over the encrypted WebSockets VPN tunnel. From there it's normal-ish ingress.
4.) Traffic flow: DNS -(resolves IP)-> (HTTPS) VPS/L7 reverse proxy -encrypted VPN tunnel-> Inlets pod from the Inlets deployment -L7 cleartext in-cluster redirect-> ingress controller Service -> ingress controller pod -L7 redirect-> ClusterIP Services/sites defined by Ingress objects.

Kubernetes - How ingress routing works

I saw an example where a Kubernetes cluster is installed with an ingress controller and the Ingress is then created with the ingress-class annotation and a host, as below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: testsvc.k8s.privatecloud.com
    http:
I am not sure which service is installed, or which IP the DNS name "k8s.privatecloud.com" is configured with, so as to route requests.
How does the DNS name "k8s.privatecloud.com" route requests to the Kubernetes cluster? How does the bridging from ingress to Kubernetes work?
Also, there could be many services configured with host rules, like:
testsvc.k8s.privatecloud.com
testsvc1.k8s.privatecloud.com
testsvc2.k8s.privatecloud.com
How does subdomain routing work here when we hit the service testsvc.k8s.privatecloud.com or testsvc1.k8s.privatecloud.com...?
Thanks
The DNS for all the hostnames in your example (e.g. testsvc.k8s.privatecloud.com) would point to the machine or load balancer through which traffic reaches the Ingress controller's nginx, as described in the Kubernetes Ingress documentation.
Subdomain routing is traditionally done via "virtual hosting", sometimes called "v-hosting", and the nginx ingress uses the HTTP Host: header to know which backend service should receive the traffic. Some Ingress controllers can use SNI for the same trick over HTTPS.
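As an illustration of that virtual-hosting behavior, a single Ingress can declare one rule per hostname; a sketch using the names from the question (the backend Service names are assumptions):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: vhost-routing    # illustrative name
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: testsvc.k8s.privatecloud.com    # matched against the Host header
    http:
      paths:
      - path: /
        backend:
          serviceName: testsvc
          servicePort: 80
  - host: testsvc1.k8s.privatecloud.com
    http:
      paths:
      - path: /
        backend:
          serviceName: testsvc1
          servicePort: 80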
In addition to @Matthew L Daniel's answer:
The Kubernetes Ingress works as a proxy between the external network and your cluster. The behavior of the ingress is described in the Ingress object. For example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
The above shows how to route traffic between two backends, s1 and s2. The Ingress does not hold any information about the services except their name and port; whenever it needs more details, they have to be requested from the api-server.