Unable to configure UDP on ingress-nginx-controller - kubernetes

I use Azure Kubernetes to host a couple of different services.
I'm trying to configure UDP load balancing over external IP.
I have created a Service of type LoadBalancer with the UDP protocol and sessionAffinity enabled (see the sketch after the lists below). My deployment also has an HTTP readinessProbe configured.
If a UDP client reaches my service from inside the Kubernetes network, everything works fine:
- the client has a sticky session to a specific pod in the ready state;
- the client is re-balanced to another ready pod if the previously assigned pod dies;
- the client is re-balanced after sessionAffinityConfig.clientIP.timeoutSeconds elapses (i.e. the next packets may be routed to another ready pod).
Things go differently if I try to connect to the LoadBalancer externally (using the external IP):
- the client has a sticky session to a specific pod in the ready state;
- the client is not moved to a new ready pod if the previous one dies. It is only connected to a new pod if it stops sending messages for the sessionAffinityConfig.clientIP.timeoutSeconds period.
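For reference, the Service described above would look roughly like the following sketch (the name, selector, and timeout value are illustrative assumptions, not the exact manifest):
apiVersion: v1
kind: Service
metadata:
  name: dip-dc-udp              # illustrative name
  namespace: dev                # assumed, matching the dev/dip-dc reference below
spec:
  type: LoadBalancer
  selector:
    app: dip-dc                 # assumed pod label
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 30        # example value
  ports:
    - name: udp
      protocol: UDP
      port: 5684
      targetPort: 5684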
So to solve this I tried to use ingress-nginx. I found a useful article here about this kind of configuration.
But after I completed the udp-services configuration and added the UDP port, I get the following error:
cannot create an external load balancer with mix protocols
Could you please point me to how to do this properly in Kubernetes.
udp-services ConfigMap:
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  5684: "dev/dip-dc:5684"
ingress-nginx controller service YAML:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx2
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
    - name: udp
      port: 5684
      targetPort: udp

A multi-protocol LoadBalancer Service is unfortunately not supported by many Kubernetes providers.
Check out this tutorial that shows you how to build your own UDP/TCP load balancer.
The summary of what you will need to do is:
- Create a NodePort service for your application.
- Create a small server instance and run Nginx with an LB config.
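As a rough sketch of the first step, a NodePort Service for the UDP app might look like this (the service name, namespace, and pod label are assumptions borrowed from the dev/dip-dc reference in the question):
apiVersion: v1
kind: Service
metadata:
  name: dip-dc-nodeport         # illustrative name
  namespace: dev
spec:
  type: NodePort
  selector:
    app: dip-dc                 # assumed pod label
  ports:
    - name: udp
      protocol: UDP
      port: 5684
      targetPort: 5684
      nodePort: 30684           # any free port in the default 30000-32767 range
The external Nginx instance would then use its stream module to forward UDP traffic to each node's IP on the chosen nodePort.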

Related

Kubernetes Ingress - how to access my service on my computer?

I have the following template, with a Deployment, a Service and an Ingress. I previously ran minikube addons enable ingress locally to add an ingress controller.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi
  labels:
    app: fastapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fastapi
  template:
    metadata:
      labels:
        app: fastapi
    spec:
      containers:
        - name: fastapi
          image: datamastery/fastapi
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  selector:
    app: fastapi
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 3000
      nodePort: 30002
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: datamastery.com
      http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: fastapi-service
                port:
                  number: 3000
When I run kubectl get services I get:
fastapi-service LoadBalancer 10.108.5.228 <pending> 5000:30002/TCP 5d22h
In my /etc/hosts file I added the following:
10.108.5.228 datamastery.com
I would now expect to be able to open my service in the browser, but nothing happens. What did I do wrong? Did I miss something in the template? Is the IP wrong? Something in the hosts file?
Thank you!
fastapi-service LoadBalancer 10.108.5.228 5000:30002/TCP 5d22h
10.108.5.228 is an address within your SDN. Only members of your SDN can reach this address; it is unlikely your workstation has a route sending this traffic to one of your Kubernetes nodes.
<pending> means your cluster is not integrated with a cloud provider that has LoadBalancer capabilities. When in doubt, use ClusterIP as your service type; LoadBalancer only makes sense in specific cases. Setting a nodePort as you did is also not required (it would make sense with a NodePort service, which is likewise useful only in a few cases and should not be used otherwise).
You did create an Ingress. If you have an Ingress Controller, you want to connect to its IP/port. The Host header tells your ingress controller where to route the request within your SDN.
I believe what you are doing here is trying to combine two different things.
NodePort is only sufficient if you have a single node OR you really control where your service pods get deployed. Otherwise using a node IP to access services is not suitable.
To overcome this issue we usually use ingress as a service proxy. Incoming traffic is routed to the correct service pods depending on the URL and port. Ingress also manages SSL termination. So basically this is your first "load balancer", as ingress assigns traffic to services across nodes and pods.
In a production environment you deploy the ingress controller with type: LoadBalancer in the kube-system namespace; an example for nginx-ingress:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
      name: http
      targetPort: 80
    - port: 443
      name: https
      targetPort: 443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
This spins up a cloud load balancer from your provider and links it to the ingress service in your cluster. So now you have a real load balancer in place, balancing traffic across your nodes, while ingress routes it to your services and the services route it to your pods.
Back to your question:
In your config files you try to spin up a service with type: LoadBalancer. This skips the ingress part and spins up a second cloud load balancer from your provider, dedicated to this single service.
You have to remove the type (and nodePort) to use the default ClusterIP for your service.
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  selector:
    app: fastapi
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
In addition, you mentioned the wrong port: your Ingress object points at port 3000, but your Service object uses port 5000. So we change this as well.
With this config, traffic to the FQDN is routed to the ingress, then to the ClusterIP service on port 3000, and on to your pods.
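For completeness, a minimal sketch of the matching Ingress, kept in the same namespace as the service so the backend reference resolves (the Ingress name is illustrative; the host and service are taken from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fastapi-ingress                      # illustrative name
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: datamastery.com
      http:
        paths:
          - path: /
            pathType: Prefix                 # Prefix so sub-paths also match
            backend:
              service:
                name: fastapi-service
                port:
                  number: 3000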

Why put a LoadBalancer Type of Service in front of the Nginx Ingress

I find that some production Kubernetes use cases on public clouds put a LoadBalancer type of Service in front of the Nginx Ingress. (You can find an example in the YAML below.)
As I understand it, an ingress can be used to expose internal services to the public, so what's the point of putting a load balancer in front of the ingress? Can I delete that Service?
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-3.27.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.45.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
    kubernetes.io/elb.class: union
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: xxx.xxx.xxx.xxx
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
...so what's the point of putting a load balancer in front of the ingress?
This lets you take advantage of the cloud provider's LB facilities (e.g. multi-AZ), while the Ingress gives you further control over routing, using paths or name-based virtual hosts, for services in the cluster.
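For example, a single LoadBalancer-exposed controller can fan out to several services by hostname; the hostnames and service names below are purely illustrative:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: virtual-hosts                        # illustrative
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service            # assumed service
                port:
                  number: 80
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service            # assumed service
                port:
                  number: 80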
Can I delete that service?
An Ingress doesn't do port mapping or pod selection, and you can't resolve an Ingress name with DNS.
Because the Ingress Controller itself is, in this case, running inside a Pod, it needs to be exposed to the internet like anything else running in a Pod. Some Ingress Controllers have the actual proxy running externally, like the AWS ALB one, but Nginx just runs inside a container as normal.

Kubernetes: expose service with ingress on a certain port

Hi, I have a React Docker image that uses nginx,
with this service:
apiVersion: v1
kind: Service
metadata:
  labels:
    appcluster: ethernial
    app: clientweb
    visibility: external
  name: clientweb-service-ext
spec:
  ports:
    - port: 80
      name: http
  selector:
    app: clientweb
  type: ClusterIP
I want to expose it. I have only 1 node, which is the master, but port 80 is already in use by Apache running on the master node (I cannot shut it down yet).
I want to expose my React app so I can reach it at http://<node-ip>:30000, for example.
(I also need to expose other REST APIs externally and internally; each is hosted in a pod and uses port 80.)
So how do I set up my Ingress?
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: clientweb-ingress
spec:
  defaultBackend:
    service:
      name: clientweb
      port:
        number: 8080
thanks!
You need to expose the ingress controller using a NodePort service on port 30000. Once you do that, you can access the backend pods exposed via the ingress resource on port 30000. If you are using the nginx ingress controller, follow this doc; the NodePort service (taken from the nginx installation docs) would look like the one below, with your desired ports 30000 and 30001.
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.13.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.35.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      nodePort: 30000 # specified nodePort
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      nodePort: 30001 # specified nodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
In this case you can continue to run Apache on port 80 on the host system.
curl http://NODEIP:30000/<path-in-ingress>
curl https://NODEIP:30001/<path-in-ingress>
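A minimal sketch of the matching Ingress resource, pointing at the Service from the question (note it references the Service name clientweb-service-ext and its port 80, not the pod port; the Ingress name is illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: clientweb-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: clientweb-service-ext
                port:
                  number: 80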
First, you need to understand the relationship between Ingress and Ingress Controller. An Ingress is just a kind of resource: it does nothing except declare routing rules. Ingress rules need an Ingress Controller to implement them.
Then you need to deploy an Ingress Controller, typically a Deployment (for the controller pods) and a Service (for external access). Have a look at the Nginx Ingress Controller at https://kubernetes.github.io/ingress-nginx/ and use kubectl or helm for the deployment. Do not forget the ingress class, as it will be used later.
After this, you can bind any Ingress to this Ingress Controller by adding the kubernetes.io/ingress.class: "nginx" annotation to your Ingress. The ingress controller (the nginx server) will then add your rules to its config, which means your Ingress rules have been applied.
Finally, since your ingress controller service exposes itself (via a LoadBalancer IP or a NodePort such as 30000), all traffic to that endpoint goes through your Ingress rules and is redirected to the desired service.

K8s manual ingress controller config

I'm running a bare metal k8s cluster so I have to configure ingress-nginx manually.
I have applied the mandatory yaml and bare-metal yaml:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml
As described in the doc.
I am not sure if I need to apply anything else, like RBAC. This created a deployment and a pod, but no service. I need to create the service using the NodePort method described here.
The problem is that my service is not starting; it stays in pending. Has anyone had any success with this? How does the nginx service need to be configured?
The pending state on bare metal may be caused by one of the following:
- your service can't receive an IP address from outside;
- your service can't claim the port on the system because it is already in use.
In your case, it looks like your service can't claim the port. Could you try using different ports on the system (for testing, to start with):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      nodePort: 30080
      port: 80
      protocol: TCP
    - name: https
      port: 443
      nodePort: 30443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
and then your ingress will be available externally on those nodePorts: 30080 -> 80 and 30443 -> 443.

Can I use ingress-nginx to simply route traffic?

I really like the Kubernetes Ingress model. I currently run ingress-nginx controllers to route traffic into my Kubernetes pods.
I would like to use this to also route traffic to 'normal' machines, i.e. VMs or physical nodes that are not part of my Kubernetes infrastructure. Is this possible? How?
In Kubernetes you can define an ExternalName service, in which you point an FQDN at an external server.
kind: Service
apiVersion: v1
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
Then you can use my-service in your nginx rule.
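For example, a minimal sketch of such a rule (the hostname and path are illustrative, and it assumes the external server speaks HTTP on port 80):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service-ingress                   # illustrative name
  namespace: prod                            # same namespace as the ExternalName service
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: external.example.com             # illustrative host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80                 # assumes the external server listens on 80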
You can create a static Service and corresponding Endpoints for external services that are not in Kubernetes, and then use that Service in your ingress to route traffic.
Also see the ingress doc on enabling custom nginx upstream checks:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-nginx-upstream-checks
In the example below, just change the port/IP according to your needs:
apiVersion: v1
kind: Service
metadata:
  labels:
    product: external-service
  name: external-service
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    product: external-service
  name: external-service
subsets:
  - addresses:
      - ip: x.x.x.x
      - ip: x.x.x.x
      - ip: x.x.x.x
    ports:
      - name: http
        port: 80
        protocol: TCP
I don't think it's possible. ingress-nginx gets pod information by watching Namespace, Service, Endpoints, and Ingress resources and then redirects traffic to pods; without these Kubernetes-specific resources, ingress-nginx has no way to find the IPs that need load balancing. ingress-nginx also doesn't define its own health-check method; it's up to the Kubernetes built-in mechanism to check the health of the running pods.