How do Ingress Controller and Ingress work together? - kubernetes

I don't see any link between the Service and Ingress YAML files. How are they linked, and how does it work? I looked at the nginx ingress controller but couldn't find any link to the Ingress either.
How does the traffic flow? LB -> Ingress controller -> Ingress -> Backend service -> pods? It also seems that only ports 80 and 443 are allowed by the Ingress. Does that mean any custom ports defined on the ingress-nginx Service are connected directly to the pod, i.e. LB -> Backend service -> Pod?
Update: figured out the traffic flow. It is as follows:
LB -> Ingress controller -> Ingress -> Backend service -> pods
I have an HTTPS virtual host on a custom port. I guess I need to edit the ingress-controller YAML to allow the custom port and add that port to the Ingress - would it then start routing?
Ingress.yml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: test
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: httpd
          servicePort: 443
cloud-generic-service.yml:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  1234: "test-web-dev/httpd:1234"
  1235: "test-web-dev/tomcat7:1235"
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  - name: port-1234
    port: 1234
    protocol: TCP
    targetPort: 1234
  - name: port-1235
    port: 1235
    protocol: TCP
    targetPort: 1235

An explanation for this can be found in the documentation:
Ingress exposes HTTP and HTTPS routes from outside the cluster to
services within the cluster. Traffic routing is controlled by rules
defined on the Ingress resource.
An Ingress may be configured to give Services externally-reachable
URLs, load balance traffic, terminate SSL / TLS, and offer name-based
virtual hosting.
So the Ingress routes traffic from outside the cluster to the service you've specified in it, httpd in your example. You can control how traffic is handled by adding annotations (example of an annotation for nginx ingress).
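For example, here is a minimal sketch of the question's Ingress with a couple of common nginx annotations added (the annotation values are illustrative, not taken from the question):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: test
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # illustrative: talk HTTPS to the backend and don't force an HTTPS redirect
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: httpd
          servicePort: 443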
The Ingress controller is an application that runs in a cluster and
configures an HTTP load balancer according to Ingress resources. The
load balancer can be a software load balancer running in the cluster
or a hardware or cloud load balancer running externally. Different
load balancers require different Ingress controller implementations.
In the case of NGINX, the Ingress controller is deployed in a pod along with the load balancer.
Ingress resources require an Ingress controller to be present in the cluster. It is not deployed into the cluster by default, which is why it has to be installed manually.
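For the custom ports (1234/1235) in the question: Ingress rules only cover HTTP/HTTPS, so ingress-nginx handles extra TCP ports through its tcp-services ConfigMap rather than through the Ingress resource. A rough sketch using the names from the question (assuming the httpd and tomcat7 Services live in the test-web-dev namespace):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # <port exposed on the controller>: "<namespace>/<service name>:<service port>"
  "1234": "test-web-dev/httpd:1234"
  "1235": "test-web-dev/tomcat7:1235"
The controller has to be started with --tcp-services-configmap=ingress-nginx/tcp-services for this to take effect, and the ports still need to be listed on the ingress-nginx Service, as in the question's cloud-generic-service.yml. (The data: block in the question's Service belongs in a ConfigMap like this one; a Service has no data field.)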

Related

Why put a LoadBalancer Type of Service in front of the Nginx Ingress

I've seen some production Kubernetes use cases on public clouds that put a LoadBalancer type of Service in front of the Nginx Ingress (you can find an example in the YAML below).
As I understand it, an Ingress can be used to expose internal services to the public, so what's the point of putting a load balancer in front of the Ingress? Can I delete that Service?
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-3.27.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.45.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
    kubernetes.io/elb.class: union
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: xxx.xxx.xxx.xxx
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
...so what's the point to put a loadbalancer in front of the ingress?
This approach lets you take advantage of the cloud provider's load-balancer facilities (e.g. multi-AZ), while the Ingress gives you further control over routing, using paths or name-based virtual hosts, for services in the cluster.
Can I delete that service?
No: an Ingress doesn't do port mapping or pod selection, and you can't resolve an Ingress name with DNS.
The Ingress Controller itself is, in this case, running inside a Pod, so it needs to be exposed to the internet like anything else running in a Pod. Some Ingress Controllers have the actual proxy running externally, like the AWS ALB one, but Nginx just runs inside its container like any other workload.
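To illustrate the routing that the Ingress layer adds behind that LoadBalancer Service, here is a sketch of name-based virtual hosting (the hostnames and backend Services are made up for the example):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vhosts-example
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app.example.com          # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service      # hypothetical Service
            port:
              number: 80
  - host: api.example.com          # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service      # hypothetical Service
            port:
              number: 80
The cloud load balancer in front only delivers traffic to the controller; the controller then picks the backend Service per host or path.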

kubernetes expose service with ingress on a certain port

Hi, I have a React app in a Docker image that uses nginx, with this Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    appcluster: ethernial
    app: clientweb
    visibility: external
  name: clientweb-service-ext
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: clientweb
  type: ClusterIP
I want to expose it. I have only one node, which is the master, but port 80 is already in use by Apache running on the master node (I cannot shut it down yet).
I want to expose my React app so that I can reach it at http://<node-ip>:30000, for example.
(I also need to expose other REST APIs externally and internally; each is hosted in a pod and each uses port 80.)
So how do I set up my Ingress?
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: clientweb-ingress
spec:
  defaultBackend:
    service:
      name: clientweb
      port:
        number: 8080
thanks!
You need to expose the ingress controller using a NodePort Service on port 30000. Once you do that, you can reach the backend pods exposed via an Ingress resource on port 30000. If you are using the nginx ingress controller, follow this doc; the NodePort Service (taken from the nginx installation docs) would look like the one below, with your desired ports 30000 and 30001.
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.13.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.35.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
    nodePort: 30000 # specified nodePort
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
    nodePort: 30001 # specified nodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
In this case you can still keep Apache on port 80 on the host system.
curl http://NODEIP:30000/<path-in-ingress>
curl https://NODEIP:30001/<path-in-ingress>
First, you need to understand the relationship between Ingress and Ingress Controller. An Ingress is just a kind of resource; on its own it does nothing except declare routing rules. Those rules need an Ingress Controller to implement them.
Then you need to deploy an Ingress Controller, typically a Deployment (for the controller pods) and a Service (for external access). Have a look at the Nginx Ingress Controller at https://kubernetes.github.io/ingress-nginx/ and use kubectl or helm to deploy it. Note the ingress class, as it will be used later.
After this, you can bind any Ingress to that particular Ingress Controller by adding the kubernetes.io/ingress.class: "nginx" annotation to your Ingress. The ingress controller (the nginx server) will then add your rules to its config, which means your Ingress rules have been applied.
Finally, since your ingress-controller Service is exposed (via a LoadBalancer IP, or a NodePort such as 30000), all traffic to that endpoint will go through your Ingress rules and be routed to the desired Service. A sketch of such an Ingress follows.
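Here is a rough sketch of that Ingress for the question's setup, tied to the nginx controller via the annotation (the path is an assumption, and it points at the clientweb-service-ext Service defined above):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: clientweb-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: clientweb-service-ext
            port:
              number: 80
With the controller exposed on NodePort 30000 as in the first answer, the app would then be reachable at http://<node-ip>:30000/.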

Setting up internal service on GKE without external IP

I am new to GKE and Kubernetes. I installed Elasticsearch on GKE using Google Click to Deploy. I also installed nginx-ingress and secured the Elasticsearch service with HTTP basic authentication (through the ingress). I created an external static IP and assigned it to the ingress controller using the loadBalancerIP field in the ingress-controller Service configuration.
Questions:
I have App Engine services running in GCP which need to access this Elasticsearch setup. Can I avoid exposing my Elasticsearch service externally, with some kind of "internal" IP that only my App Engine services can access? Is using a VPC one way of doing this?
I see that my Ingress was also assigned an external IP address (the static IP I created was assigned to the nginx-ingress-controller Service). However, when I hit this IP on port 80 I get connection refused, and on port 9200 it times out. Can I avoid having two external IPs? How secure is this Ingress IP address? What are its open ports?
Here is my ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-realm: Authentication Required - ok
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-type: basic
  name: basic-ingress
  namespace: default
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: elasticsearch-1-elasticsearch-svc
          servicePort: 9200
        path: /
Here is the ingress controller service configuration:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.6.15
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: default
spec:
  clusterIP: <Some IP>
  externalTrafficPolicy: Cluster
  loadBalancerIP: <External IP>
  ports:
  - name: http
    nodePort: 30290
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30119
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
My suggestion is to use two load balancers: one public and one internal. To create the internal load balancer you just need to add the following annotation under the Service's metadata:
cloud.google.com/load-balancer-type: "Internal"
Reference:
https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
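A sketch of what that second, internal controller Service could look like (the name is hypothetical; the selector and ports mirror the configuration in the question, so treat it as illustrative rather than exact):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller-internal   # hypothetical name for the internal Service
  namespace: default
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
The internal load balancer gets a private address reachable only from within the VPC network, which keeps Elasticsearch off the public internet.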

How to configure a kubernetes bare-metal ingress controller to listen to port 80?

I have a kubernetes setup with 1 master and 1 slave, hosted on DigitalOcean Droplets.
For exposing my services I want to use Ingresses.
As I have a bare metal install, I have to configure my own ingress controller.
How do I get it to listen to port 443 or 80 instead of the 30000-32767 range?
For setting up the ingress controller I used this guide: https://kubernetes.github.io/ingress-nginx/deploy/
My controller service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
And now obviously, because the NodePort range is 30000-32767, this controller doesn't get mapped to port 80 or 443:
➜ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx ingress-nginx NodePort 10.103.166.230 <none> 80:30907/TCP,443:30653/TCP 21m
I agree with @Matthew L Daniel: if you don't want to use an external load balancer, the best option is to share the host's network interface with the ingress-nginx Pod by enabling the hostNetwork option in the Pod spec:
template:
  spec:
    hostNetwork: true
That way, the NGINX Ingress controller can bind ports 80 and 443 directly on the Kubernetes nodes, without mapping NodePort proxy ports (30000-32767) to the underlying services. Find more information here.
You can't bind the ingress Service itself to port 80. Alternatively, you can run HAProxy on the host and redirect port 80/443 requests to the Ingress Service's NodePort.

How to expose an Ingress for external access in Kubernetes?

I have a Kubernetes cluster on a private network (a private server, not AWS or Google Cloud) and I created a Service to reach my app. However, I need to access it from outside the cluster, so I created an Ingress and added ingress-nginx to the cluster.
This is the YAML I'm using after making several attempts:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: k8s.local
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  selector:
    name: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  # selector:
  #   app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: echoserver
        image: nginx
        ports:
        - containerPort: 80
I applied the YAML with: kubectl create -f file.yaml
In /etc/hosts I mapped k8s.local to the IP of the master server.
When I try the following command, on or off the master server, I get a "Connection refused" message:
$ curl http://172.16.0.18:80/ -H 'Host: k8s.local'
I do not know if it's important, but I'm using Flannel in the cluster.
My idea is just to create a 'hello world' and expose it out of the cluster!
Do I need to change anything in the configuration to allow this access?
YAML file edited:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: k8s.local
    http:
      paths:
      - path: /teste
        backend:
          serviceName: nginx
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer # NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: echoserver
        image: nginx
        ports:
        - containerPort: 80
You can deploy the ingress controller as a DaemonSet with host port 80; the controller's Service then doesn't matter, and you can point your domain at any node in your cluster (see the sketch after these options).
You can use a NodePort type Service, but that forces you onto a port in the 30000 range; you will not be able to use port 80.
Of course, the best solution is to use a cloud provider with a load balancer.
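A rough sketch of the DaemonSet option from the first point above (the image tag is an assumption; take it from the official ingress-nginx manifests, which also add a service account, args, and probes):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-nginx/nginx-ingress-controller:0.24.1   # assumed tag
        ports:
        - name: http
          containerPort: 80
          hostPort: 80      # binds port 80 directly on every node
        - name: https
          containerPort: 443
          hostPort: 443     # binds port 443 directly on every node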
You can make it work with a plain nginx pod, but the recommended method is to install a Kubernetes ingress controller; in your case you are already using nginx, so you can install an nginx ingress controller.
Here is some information on how to install it.
If you want to allow external access, you can also expose the nginx ingress controller as a LoadBalancer Service. You can also use NodePort, but then you will have to manually point a load balancer at the port on your Kubernetes nodes.
And yes, the selector on the Service needs to be:
selector:
  app: nginx
In this case NodePort would work. It opens a high port number on every node (the same port on each node), so you can use any of these nodes. Put a load balancer in front if you want, and point its backend pool at the instances you have running. Do not use ClusterIP; it is only for internal use.
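A sketch of the nginx Service from the question switched to NodePort (the explicit nodePort value is an assumption; omit it and Kubernetes will pick one in the 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 30080   # assumed value; any free port in the NodePort range works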
If you run your cluster on bare metal, you need to tell the nginx-ingress controller to use hostNetwork: true, added in the template/spec part of mandatory.yaml.
That way the pod running the ingress controller will listen on ports 80 and 443 of the host node.
https://github.com/alexellis/inlets is the easiest way of doing what you want.
Note: encryption requires wss://, which requires TLS certs. If you want fully automated encryption plus the ability to use Inlets as a Layer 4 LB, you should use Inlets Pro; it's very cheap compared to other cloud alternatives.
I've also been able to set up the OSS / non-Kubernetes-operator version of Inlets with encryption / wss (WebSockets Secure), using the open-source version of Inlets as a Layer 7 LB. (It just took some manual configuration and wasn't fully automated like the Pro version.)
https://blog.alexellis.io/https-inlets-local-endpoints/
I was able to get public-internet HTTPS plus the nginx ingress controller working against minikube, with 2 sites routed using Ingress objects, in about 3-4 hours with no good guide to doing it, being new to Caddy/WebSockets but experienced with Kubernetes Ingress.
Basically:
Step 1.) Create a $0.007/hour or $5/month VPS on Digital Ocean with a public IP
Step 2.) Point mysite1.com, *.mysite1.com, mysite2.com, *.mysite2.com to the public IP of the VPS.
Step 3.) SSH into the machine and install Inlets + Caddy v1.0.3 + a Caddyfile; here's mine:
mysite1.com, *.mysite1.com, mysite2.com, *.mysite2.com
proxy / 127.0.0.1:8080 {
    transparent
}
proxy /tunnel 127.0.0.1:8080 {
    transparent
    websocket
}
tls {
    max_certs 10
}
Step 4.) Deploy an inlets Deployment on the Kubernetes cluster, use wss to your VPS, and point the inlets deployment at an ingress controller Service of type ClusterIP.
The basics of what's happening are:
1.) Caddy leverages Let's Encrypt Free to automatically get HTTPS certs for every website you point at the Caddy server.
2.) Your inlets deployment starts a bidirectional VPN tunnel using websockets with the VPS that has a public IP. (Warning: the VPN tunnel will only be encrypted if you specify wss, and that requires the server to have a TLS cert, which it gets from Let's Encrypt Free, "LEF".)
3.) Caddy is now a public L7 LB/reverse proxy that terminates HTTPS and forwards to your ingress controller over an encrypted websockets VPN tunnel. From there it's normal-ish ingress.
4.) Traffic Flow: DNS -(resolves IP)-> (HTTPS)VPS/L7 ReverseProxy - encrypted VPNtunnel-> Inlets pod from Inlets Deployment -L7 cleartext in cluster network redirect to -> Ingress Controller Service -> Ingress Controller Pod -L7 redirect to-> Cluster IP services/sites defined by ingress objs.