Writing a custom Ingress Controller - cannot assign Address to Ingress - kubernetes

I'm following instructions on how to write a custom Ingress Controller - however for me they stop short of quite an important step - how to assign an Address to the Ingress claimed by this Controller.
So far I've tried to execute the following steps:
Deploy the Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myWebAppIngress
spec:
  backend:
    serviceName: myWebAppBackendSvc
    servicePort: 8090
Deploy the RC with the custom Ingress Controller, setting up the ports as below:
ports:
- containerPort: 80
  hostPort: 80
- containerPort: 443
  hostPort: 443
Note the external IP of the node on which the Ingress Controller was deployed, and
patch the Ingress with this IP: kubectl patch ingress myWebAppIngress -p='{"status": {"loadBalancer": {"ingress": [{"ip": "<IP noted above>"}]}}}'
However, this has not assigned an address to my Ingress, as I can see from kubectl get ingress myWebAppIngress. Does anyone know what I need to do in order to assign an address to the Ingress?

hostPort exposes your application on the given ports on the nodes where the app is launched. This will not give you an external IP, since it does not, for example, provision a cloud load balancer.
If you want to stick to a fully automated k8s setup for this, you should first be able to correctly use services of LoadBalancer type, and define such a service for your ingress controller: http://kubernetes.io/docs/user-guide/services/#type-loadbalancer
Another approach is to use a service of NodePort type (it has the advantage that the nodePort is managed by kube-proxy and hence is also available on nodes not running the ingress pod) and manually configure your load balancer to point to the assigned nodePorts, as in the sketch below.
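As a rough illustration of the NodePort approach, the service for the ingress controller could look like this (the name, labels, and port numbers here are assumptions; match them to your controller's pods):
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller
spec:
  type: NodePort
  selector:
    app: ingress-controller
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
kube-proxy then exposes the assigned nodePorts (in the 30000-32767 range by default) on every node, so an external load balancer can target any node in the cluster.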
Side note: you should never need to fiddle with the contents of the status field; it is managed by controllers.

Related

Ingress in Kubernetes

I was doing some research about Ingress and it seems I have to create a new Ingress resource for each namespace. Is that correct?
I just created 2 separate Ingress resources in different namespaces in my GKE cluster, and they seem to use the same LB (which is great for cost), but I would think clashes are then possible (when using the same path). I just tried it: the first one I created still works on the path, while the newer one on the same path simply does not work.
Can someone explain the correct setup for Ingress to me?
By design, an ingress controller won't pass a packet to a service that is in a different namespace from the ingress resource. So, if you create an ingress resource in the default namespace, all your services must be in the default namespace as well.
This is something that won't change. EVER. There was a feature request years ago, and the Kubernetes team announced that it's not going to happen: it would introduce a security hole if an ingress controller were able to cross namespace boundaries.
Now, what we do in these situations is actually pretty neat. You will have to do the following:
Say you have 2 services in the namespaces you need, e.g. service1.foo and service2.bar.
Create 2 headless services without selectors, and 2 Endpoints objects pointing to the IP addresses of the services service1.foo and service2.bar, in the same namespace as the ingress resource. A headless service without selectors forces kube-dns (or CoreDNS) to look for either an ExternalName-type service or an Endpoints object. The only requirement here is that your headless service and the Endpoints object must have the same name.
Create your ingress resource pointing to the headless services.
It should look like this (for 1 service):
Say the IP address of service1.foo is 10.10.10.10. Your headless service and the Endpoints object would be:
apiVersion: v1
kind: Service
metadata:
  name: bait-svc
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: bait-svc
subsets:
- addresses:
  - ip: 10.10.10.10
  ports:
  - port: 80
    protocol: TCP
and the Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: ssl-certs
  rules:
  - host: site1.training.com
    http:
      paths:
      - path: /
        backend:
          serviceName: bait-svc
          servicePort: 80
So, the Ingress points to the bait-svc, and bait-svc points to service1.foo. And you will do this for each service.
UPDATE
On reflection, this might not work with the GKE Ingress Controller, since on GKE you need a NodePort-type service for the HTTP load balancer to reach the service. As you can see, my example uses the nginx Ingress Controller.
Whether or not it works there, I would recommend using some other Ingress Controller. It's not that the GKE IC is not good; it is quite robust, but you almost always end up hitting some limitation. Other ICs are more flexible.
The behavior of conflicting Ingress routes is undefined and implementation-dependent. In most cases it's just last-writer-wins.

Expose a service of type ClusterIP via Ingress

Is it possible to expose a service of type ClusterIP through an Ingress?
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
  - name: my-service-port
    port: 4001
    targetPort: 4001
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.example.com
    http:
      paths:
      - path: /my-service
        backend:
          serviceName: my-service
          servicePort: 4001
I know the service can be exposed with type NodePort, but that may cost one more NAT hop. Could someone show me the fastest way to reach an internal service from the internet when running in the cloud?
No, clusterIP is only reachable from within the cluster. An Ingress is essentially just a set of layer 7 forwarding rules, it does not handle the layer 4 requirements of exposing the internals of your cluster to the outside world. At least 1 NAT step is required.
For Ingress to work, though, you need at least one service that exposes your workload externally, i.e. NodePort or LoadBalancer. Your ingress controller and your cluster's infrastructure will determine which of the two service types you need.
In the case of the Nginx ingress controller, you need a single LoadBalancer service, which the ingress uses to bridge traffic from outside the cluster to the inside. After that, you can use ClusterIP services for each of your workloads.
In your above example, as long as the nginx ingress controller is correctly configured (with a loadbalancer), then the config you are using should work fine.
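As a rough sketch, that single bridging service could look like the following (the names and labels are assumptions; match them to your actual ingress controller deployment):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
Workload services behind the ingress, such as my-service above, then stay plain ClusterIP.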
In short: YES
Now to the elaborate answer...
First things first, let's have a look at what the official documentation says:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
[...]
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer...
What's confusing here is the term load balancer. In the definition above, we are talking about the classic, well-known web load balancer.
That one has nothing to do with Kubernetes!
So, back to the definition: to use an Ingress and make it work, we need a Kubernetes component called an Ingress Controller. And that component happens to be a load balancer! That's it.
However, you have to keep in mind that there is a difference between a load balancer in the outside world and a Kubernetes service of type:LoadBalancer.
So, in summary (and in order to redirect traffic from the outside world to your ClusterIP service):
Do you need a load balancer to make your kind:Ingress work? Yes, and that is the Ingress Controller.
Do you need a Kubernetes service of type:LoadBalancer or type:NodePort to make your kind:Ingress work? Definitely not! A service of type:ClusterIP works just fine!

Is it possible to serve up applications through a Kubernetes controller node?

I have built a K3s (https://k3s.io) cluster on a set of Raspberry Pi 4 computers.
The controller node (ctrl-1) is a gateway in that it has 2 network interfaces: one is connected to my LAN, and the other to a network that it creates, call it K3S-LAN. The two worker nodes (node-1 and node-2) are deployed on the K3S-LAN.
I want to be able to access the applications running on the nodes through ctrl-1, e.g. from the LAN. This is because the cluster is meant to be portable, so only the ctrl-1 node needs to be connected to the guest LAN. (Yes, there are issues with DNS names etc. to be sorted out, but I want to get the basics running first.)
This means that I need to be able to "proxy" the ingress through ctrl-1. I thought I had the right idea for this in that I deployed "nginx-ingress" to the master using Helm. However, I forgot about the service for this: it has been scheduled on the worker nodes, whereas it needs to be on the controller so that the ports are opened up (I think), and I cannot find how to make the service run on the controller.
At the moment I have the service running with type NodePort. I could install MetalLB to get LoadBalancer capabilities, but from what I have seen I am not sure whether that would help.
ctrl-1 does not have any taints setup on it, just the role of master.
Am I barking up the wrong tree here? I guess this might not be the intended use case of Kubernetes, but I am playing around with an idea. Thanks for any ideas that people have.
Update:
I have just thought that the way around this might be to run HAProxy on ctrl-1 (as another service on the host) and set up rules to proxy to the necessary services within the cluster. That would act as the bridge between the networks.
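For example, a minimal HAProxy configuration on ctrl-1 along those lines might look like this (the node IPs and the NodePort are made-up values for illustration):
frontend http-in
    bind *:80
    default_backend k3s_nodes

backend k3s_nodes
    balance roundrobin
    # forward to the ingress controller's NodePort on the K3S-LAN nodes
    server node-1 192.168.50.11:30080 check
    server node-2 192.168.50.12:30080 check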
You just need to expose your pod via a NodePort-type service, and it can be accessed via http://master-node-ip:nodeport. Make sure that kube-proxy is running on all master and worker nodes.
The ingress approach should also work, as long as you have kube-proxy running on your master. You deploy the nginx ingress controller on your cluster, and it will get scheduled onto a worker node. Then you expose the nginx ingress controller itself using a NodePort service. After this, you can create ingress resources to configure the nginx ingress controller to route traffic to your backend pods and services running on the worker nodes. The services for backend pods should be of type ClusterIP.
Deploy the nginx ingress controller and expose it via a NodePort service: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/provider/baremetal/service-nodeport.yaml
Deploy an nginx pod (nginx is just an example; this should be your pod): kubectl run nginx --generator=run-pod/v1 --image=nginx
Expose the nginx pod via a ClusterIP service:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
Create the ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mycha-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
With the above setup I can now access nginx and get "Welcome to nginx!" via http://master-node-ip:NodePort of the nginx ingress controller.
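To find the assigned NodePort and test it, something like the following should work (the namespace and service names are assumptions; they depend on how the controller was installed):
# look up the NodePort assigned to the ingress controller's service
kubectl get svc -n ingress-nginx ingress-nginx
# then test from a machine that can reach the master node
curl http://<master-node-ip>:<nodeport>/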

How to expose a Kubernetes service on-prem using 443/80

Is it possible to expose a Kubernetes service using ports 443/80 on-premise?
I know some ways to expose services in Kubernetes:
1. NodePort - Default port range is 30000 - 32767, so we cannot access the service using 443/80. Changing the port range is risky because of port conflicts, so it is not a good idea.
2. Host network - Force the pod to use the host's network instead of a dedicated network namespace. Not a good idea, because we lose kube-dns, etc.
3. Ingress - AFAIK it uses either NodePort (so we face the first problem again) or a cloud provider LoadBalancer. Since we run Kubernetes on-premise, we cannot use the latter.
MetalLB, which allows you to create Kubernetes services of type LoadBalancer in clusters that don't run on a cloud provider, is not yet stable enough.
Do you know any other way to expose service in Kubernetes using port 443/80 on-premise?
I'm looking for a "Kubernetes solution", not one that uses a reverse proxy external to the cluster.
Thanks.
IMHO ingress is the best way to do this on prem.
We run the nginx-ingress-controller as a daemonset with each controller bound to ports 80 and 443 on the host network. Nearly 100% of traffic to our clusters comes in on 80 or 443 and is routed to the right service by ingress rules.
Per app, you just need a DNS record mapping your hostname to your cluster's nodes, and a corresponding ingress (an example follows the daemonset manifest below).
Here's an example of the daemonset manifest:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      component: ingress-controller
  template:
    metadata:
      labels:
        component: ingress-controller
    spec:
      restartPolicy: Always
      hostNetwork: true
      containers:
      - name: nginx-ingress-lb
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
        ports:
        - name: http
          hostPort: 80
          containerPort: 80
          protocol: TCP
        - name: https
          hostPort: 443
          containerPort: 443
          protocol: TCP
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - '--default-backend-service=$(POD_NAMESPACE)/default-http-backend'
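A corresponding per-app ingress might then look like this (the hostname and service name are placeholders):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80
With a DNS record pointing my-app.example.com at the cluster nodes, requests arriving on ports 80/443 hit the daemonset's controller directly and are routed by this rule.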
Use an ingress controller as the entrypoint to the services in your Kubernetes cluster. Run the ingress controller on port 80 or 443.
You need to define ingress rules for each backend service that you want to access from outside. The ingress controller will then allow clients to access the services based on the paths defined in the ingress rules.
If you need to allow access over HTTPS, you need to obtain the DNS certificates, load them into secrets, and bind them in the ingress rules, as sketched below.
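A sketch of that TLS step, with placeholder secret and host names:
kubectl create secret tls my-app-tls --cert=tls.crt --key=tls.key
and then, in the ingress spec:
spec:
  tls:
  - hosts:
    - my-app.example.com
    secretName: my-app-tls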
The most popular one is the nginx ingress controller; Traefik and HAProxy ingress controllers are alternative solutions.
The idea of a hostNetwork proxy is actually not bad; the OpenShift Router uses that approach, for example. You define two or three nodes to run the proxy and use DNS load balancing in front of them.
And you can still use kube-dns with hostNetwork; see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
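Concretely, the pod spec needs dnsPolicy set alongside hostNetwork; a minimal fragment:
spec:
  hostNetwork: true
  # keep resolving cluster services through kube-dns/CoreDNS
  dnsPolicy: ClusterFirstWithHostNet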
You are probably running a kubeadm on-premise Kubernetes setup with an nginx ingress controller on Unix/Linux hosts, and you can't safely expose ports in the restricted system port range (0-1023).
You either need to set up your own dedicated load balancer pair (e.g. Linux boxes running HAProxy) or, alternatively, use an existing load balancer if you are lucky enough to be in a corporate environment that already provides load balancing (e.g. an F5 LB).
Then you can configure the load balancers to forward your 443/80 requests to your cluster nodes' 30443/30080 ports, which are handled by your cluster's ingress controller.

Assign an External IP to a Node

I'm running a bare metal Kubernetes cluster and trying to use a load balancer to expose my services. I know the load balancer is typically a function of the underlying public cloud, but with recent support for Ingress Controllers it seems like it should now be possible to use nginx as a self-hosted load balancer.
So far, I've been following the example here to set up an nginx Ingress Controller and some test services behind it. However, I am unable to follow Step 6, which displays the external IP for the node that the load balancer is running on, as my node does not have an ExternalIP in the addresses section, only a LegacyHostIP and an InternalIP.
I've tried manually assigning an ExternalIP to my cluster by specifying it in the service's specification. However, this appears to be mapped as the externalID instead.
How can I manually set my node's ExternalIP address?
This is something that is tested and works for an nginx service created on a particular node.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
  - port: 443
    protocol: TCP
    targetPort: 443
    name: https
  externalIPs:
  - '{{external_ip}}'
  selector:
    app: nginx
This assumes an nginx deployment upstream listening on ports 80 and 443.
The externalIP is the public IP of the node.
I would suggest checking out MetalLB: https://github.com/google/metallb
It allows for externalIP addresses in a bare-metal cluster using either ARP or BGP. It has worked great for us and lets you simply request a LoadBalancer service like you would in the cloud.
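For reference, a minimal layer 2 configuration for the older, ConfigMap-based MetalLB releases looks roughly like this (the address range is a placeholder for a free range on your LAN; newer releases configure this via CRDs instead):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
With that in place, a Service of type LoadBalancer is assigned an IP from the pool automatically.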