Connecting to Kubernetes TCP and UDP services on the same IP - kubernetes

I have 2 pods in a Kubernetes namespace. One uses TCP and the other uses UDP, and both are exposed with ClusterIP services via an external IP. Both services use the same external IP.
This way my users can access both services using the same IP.
I want to remove the use of spec.externalIPs but still allow my users to use a single domain name/IP to access both the TCP and UDP services.
Since I do not want to use spec.externalIPs, I believe ClusterIP and NodePort services cannot be used. A LoadBalancer service does not allow me to specify both TCP and UDP in the same service.
I have experimented with the NGINX Ingress Controller, but even there a LoadBalancer service needs to be created, which cannot carry both TCP and UDP in the same service.
Below is the cluster IP service exposing the apps currently using external IP:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tcp-udp-svc
  name: tcp-udp-service
spec:
  externalIPs:
  - <public IP- IP2>
  ports:
  - name: tcp-exp
    port: 33001
    protocol: TCP
    targetPort: 33001
  - name: udp-exp
    port: 33001
    protocol: UDP
    targetPort: 33001
  selector:
    app: tcp-udp-app
  sessionAffinity: None
  type: ClusterIP
The service shows up like below:
NAME              TYPE        CLUSTER-IP          EXTERNAL-IP        PORT(S)
tcp-udp-service   ClusterIP   <internal IP IP1>   <public IP- IP2>   33001/TCP,33001/UDP
With the above setup, both the TCP and UDP apps on port 33001 are accessible externally just fine using IP2.
As you can see, I've used:
spec:
  externalIPs:
  - <public IP- IP2>
in the service to make it accessible externally.
However, I do not want to use this setup, i.e. I am looking for a setup that does not use spec.externalIPs.
When using a LoadBalancer service to expose the apps, I see that TCP and UDP cannot both be added to the same LoadBalancer service. So I have to create one LoadBalancer service for TCP and another for UDP, like below:
NAME          TYPE           CLUSTER-IP          EXTERNAL-IP        PORT(S)
tcp-service   LoadBalancer   <internal IP IP1>   <public IP- IP2>   33001/TCP
udp-service   LoadBalancer   <internal IP IP3>   <public IP- IP4>   33001/UDP
---
apiVersion: v1
kind: Service
metadata:
  name: tcp-service
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: tcp-svc
    port: 33001
    protocol: TCP
    targetPort: 33001
  selector:
    app: tcp-udp-app
  sessionAffinity: None
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: udp-service
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: udp-svc
    port: 33001
    protocol: UDP
    targetPort: 33001
  selector:
    app: tcp-udp-app
  sessionAffinity: None
  type: LoadBalancer
But the problem is that each of these services gets its own IP assigned (IP2 & IP4).
I want to be able to access both the TCP & UDP apps using the same IP. When testing with the NGINX ingress controller, I face the same issue as above.
Is there any other way to achieve what I am looking for, i.e. to expose both TCP and UDP services on the same IP, but without using spec.externalIPs?

Unfortunately, you will not be able to achieve your desired result with the LoadBalancer Service type for UDP traffic, because according to the following documentation the UDP protocol is not supported by any of the VPC load balancer types.
You could theoretically define a portable public IP address for a LoadBalancer Service by using the loadBalancerIP field, but this portable public IP address has to be available in a portable public subnet upfront, and the cloud provider's LB needs to support the UDP protocol. You can see this doc
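A minimal sketch of what that would look like on the tcp-service from the question, assuming the provider honours spec.loadBalancerIP; the IP itself is a placeholder:
apiVersion: v1
kind: Service
metadata:
  name: tcp-service
spec:
  type: LoadBalancer
  loadBalancerIP: <pre-reserved portable public IP>   # must already exist in the portable public subnet
  ports:
  - name: tcp-svc
    port: 33001
    protocol: TCP
    targetPort: 33001
  selector:
    app: tcp-udp-app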
Workaround for non-Prod setup:
You can use hostPort to expose TCP & UDP ports directly on the worker nodes. It can be used together with an Ingress controller that supports TCP & UDP services, like NGINX Ingress. For more, see this documentation.
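A minimal sketch of the hostPort approach, reusing the tcp-udp-app labels and port 33001 from the question; the image and workload name are placeholders:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-udp-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcp-udp-app
  template:
    metadata:
      labels:
        app: tcp-udp-app
    spec:
      containers:
      - name: app
        image: <your image>          # placeholder
        ports:
        - containerPort: 33001
          hostPort: 33001            # TCP reachable on <node IP>:33001
          protocol: TCP
        - containerPort: 33001
          hostPort: 33001            # UDP reachable on <node IP>:33001
          protocol: UDP
Clients then connect to a node IP (or a DNS name pointing at it) on port 33001; the obvious trade-off is that traffic is tied to whichever node runs the pod.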

Related

Is it possible to select the port on a multiport service in kubernetes using the port name?

I have two pods running in the same namespace.
One of them can be accessed through the following multi-port service:
kind: Service
apiVersion: v1
metadata:
  name: bacon-service
spec:
  selector:
    app: bacon-app
    restype: pod
  ports:
  - name: http
    port: 80
    targetPort: 8001
    protocol: TCP
  - name: http2
    port: 8787
    targetPort: 8787
    protocol: TCP
  type: ClusterIP
The second one needs to access the http2 port of the first pod to make gRPC calls.
I've been using the port number to access it, which has been working just fine.
http://bacon-app:8787
Yet I think a cleverer approach would be to use the port name.
So if by any chance someone decides to change the service port someday, the client pod would not be affected.
I've tried to achieve so with the following:
http://bacon-app:http2
Which simply does not work, would it be possible to do it somehow?
No, sorry, but it's not possible, as this is plain HTTP addressing and has nothing to do with Kubernetes. Browsers / HTTP clients only understand <scheme>://<host>[:port], where the port must be numeric.

How to maintain udp session within a pod in kubernetes?

When receiving UDP packets, I want to read the source IP and source port from each packet and expect them not to change as long as the packets come from the same source (same IP and same port). My packets are sent through kube-proxy in iptables mode, but when the sender pauses for several seconds, the source port changes, and setting sessionAffinity to "ClientIP" doesn't help. It seems that a UDP session is only kept for a few seconds. Is there any way to extend the session time, or keep the port the same as long as the sender's IP and port haven't changed?
This is a community wiki answer. Feel free to expand it.
As already mentioned in the comments, you can try to use the NGINX Ingress Controller. The Exposing TCP and UDP services documentation says:
Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY]
The example shows how to expose the service kube-dns running in the namespace kube-system on port 53, using the external port 53:
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  53: "kube-system/kube-dns:53"
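The TCP side works the same way via the --tcp-services-configmap flag; a hedged sketch (the backend namespace, service name, and target port here are placeholders, chosen so that external port 9000 matches the proxied-tcp-9000 port exposed in the Service below):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  9000: "default/example-go:8080"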
If TCP/UDP proxy support is used, then those ports need to be exposed in the Service defined for the Ingress:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  - name: proxied-tcp-9000
    port: 9000
    targetPort: 9000
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
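For the ConfigMaps to take effect, the controller itself has to be started with the corresponding flags. A hedged sketch of the relevant container args fragment (the image tag and surrounding Deployment fields are assumptions, not from the original manifests; only the two flags are taken from the quoted documentation):
containers:
- name: controller
  image: registry.k8s.io/ingress-nginx/controller:<version>   # placeholder tag
  args:
  - /nginx-ingress-controller
  - --tcp-services-configmap=ingress-nginx/tcp-services
  - --udp-services-configmap=ingress-nginx/udp-services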

Proxying UDP DNS traffic to a TCP backend service

I am creating a CoreDNS DNS server on Kubernetes that needs to listen for UDP traffic on port 53 using an AWS network load balancer. I would like that traffic to be proxied to a Kubernetes service using TCP.
My current service looks like this:
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: coredns
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  type: NodePort
  ports:
  - name: dns
    port: 5353
    targetPort: 5353
    nodePort: 30053
    protocol: UDP
  - name: dns-tcp
    port: 5053
    targetPort: 5053
    nodePort: 31053
    protocol: TCP
  - name: metrics
    port: 9153
    targetPort: 9153
    protocol: TCP
The network load balancer speaks to the cluster on the specified node ports, but the UDP listener times out when requesting zone data from the server.
When I dig for records, I get a timeout unless +tcp is specified. The health checks from the load balancer to the port return healthy, and TCP queries return as expected.
Ideally, my listener would accept both TCP and UDP traffic on port 53 at the load balancer and return either TCP or UDP traffic based on the initial protocol of the request.
Is there anything glaringly obvious I am missing as to why UDP traffic is either not making it to my cluster or not returning a response?

DigitalOcean Loadbalancer behavior Kubernetes TCP 443

Currently I have this LoadBalancer Service on my Kubernetes cluster.
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kubernetes      ClusterIP      [HIDDEN]     <none>        443/TCP         44h
load-balancer   LoadBalancer   [HIDDEN]     [HIDDEN]      443:30014/TCP   39h
This is my .yaml file config
apiVersion: v1
kind: Service
metadata:
  name: load-balancer
spec:
  selector:
    app: nodeapp
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 443
    targetPort: 3000
    name: https
For some reason DigitalOcean does not set up HTTPS; instead it leaves it as TCP 443. I then have to manually go to DigitalOcean, change TCP to HTTPS, and create the Let's Encrypt certificate. How can I make Kubernetes create a load balancer using HTTPS on port 443 instead of TCP 443?
According to their documentation, you need to add additional annotations like this:
---
kind: Service
apiVersion: v1
metadata:
  name: https-with-cert
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 80
How to add SSL certificate: https://www.digitalocean.com/docs/networking/load-balancers/how-to/custom-ssl-cert/
A Service of type LoadBalancer creates a Layer 4 LB (network LB), which is aware only of the IP and port.
You will need a Layer 7 LB (application LB), which is application-aware.
So, to enable HTTPS, you will need to manage it via an Ingress.
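A minimal sketch of such an Ingress with TLS termination, assuming an nginx ingress controller is installed and a TLS Secret already exists; the host, Secret, and Ingress names are placeholders:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodeapp-ingress           # placeholder name
spec:
  ingressClassName: nginx          # assumes an nginx ingress controller
  tls:
  - hosts:
    - example.com                  # placeholder host
    secretName: example-tls        # pre-created TLS Secret (e.g. from cert-manager)
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: load-balancer    # or a plain ClusterIP Service for the app
            port:
              number: 443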

How to make service TCP/UDP ports externally accessible in Kubernetes?

I have many tenants running on one Kubernetes cluster (on AWS), where every tenant has one Pod that exposes one TCP port (not HTTP) and one UDP port.
I don't need load balancing capabilities.
The approach should expose an externally available IP address with a dedicated port for each tenant.
I don't want to expose the nodes directly to the internet.
I have the following service so far:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
  - port: 8111
    targetPort: 8111
    protocol: UDP
    name: my-udp
  - port: 8222
    targetPort: 8222
    protocol: TCP
    name: my-tcp
  selector:
    app: my-app
What is the way to go?
Deploy an NGINX ingress controller on your AWS cluster.
Change your service my-service type from NodePort to ClusterIP (see the sketch after these steps).
Edit the ConfigMap tcp-services in the ingress-nginx namespace, adding:
data:
  "8222": your-namespace/my-service:8222
Do the same for the ConfigMap udp-services:
data:
  "8111": your-namespace/my-service:8111
Now you can access your application externally using the nginx controller's IP: <ip:8222> (TCP) and <ip:8111> (UDP).
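A hedged sketch of what my-service would look like after step 2; only spec.type changes, everything else is taken from the question:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-app
spec:
  type: ClusterIP          # was NodePort
  ports:
  - port: 8111
    targetPort: 8111
    protocol: UDP
    name: my-udp
  - port: 8222
    targetPort: 8222
    protocol: TCP
    name: my-tcp
  selector:
    app: my-app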
The description provided by #ffledgling is what you need.
But I have to mention that if you want to expose ports, you have to use a load balancer or expose nodes to the Internet. For example, you can expose a node to the Internet and allow access only to some necessary ports.
