Proxying UDP DNS traffic to a TCP backend service - kubernetes

I am creating a CoreDNS DNS server on Kubernetes that needs to listen for UDP traffic on port 53 using an AWS network load balancer. I would like that traffic to be proxied to a Kubernetes service using TCP.
My current service looks like this:
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: coredns
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  type: NodePort
  ports:
    - name: dns
      port: 5353
      targetPort: 5353
      nodePort: 30053
      protocol: UDP
    - name: dns-tcp
      port: 5053
      targetPort: 5053
      nodePort: 31053
      protocol: TCP
    - name: metrics
      port: 9153
      targetPort: 9153
      protocol: TCP
The network load balancer speaks to the cluster on the specified node ports, but the UDP listener times out when requesting zone data from the server.
When I dig for records, I get a timeout unless +tcp is specified in the dig. The health checks from the load balancer to the port return healthy, and the TCP queries return as expected.
Ideally, my listener would accept both TCP and UDP traffic on port 53 at the load balancer and return either TCP or UDP traffic based on the initial protocol of the request.
Is there anything glaringly obvious I am missing as to why UDP traffic is either not making it to my cluster or not returning a response?
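For reference, this is roughly the end state I am aiming for: a single Service exposing port 53 over both protocols, with the NLB forwarding its TCP and UDP listeners on port 53 to the matching node ports. This is a sketch only; the node port numbers are illustrative:
# Sketch of the intended end state (port numbers are illustrative only).
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: coredns
spec:
  selector:
    k8s-app: coredns
  type: NodePort
  ports:
    - name: dns-udp
      port: 53
      targetPort: 53
      nodePort: 30053
      protocol: UDP
    - name: dns-tcp
      port: 53
      targetPort: 53
      nodePort: 31053
      protocol: TCP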

Related

Connecting to Kubernetes TCP and UDP services on the same IP

I have 2 pods in a Kubernetes namespace. One uses TCP and the other uses UDP and both are exposed using ClusterIP services via external IP. Both services use the same external IP.
This way I let my users access both the services using the same IP.
I want to remove the use of spec.externalIPs but be able to allow my user to still use a single domain name/IP to access both the TCP and UDP services.
I do not want to use spec.externalIPs, so I believe ClusterIP and NodePort services cannot be used. A LoadBalancer service does not allow me to specify both TCP and UDP in the same service.
I have experimented with the NGINX Ingress Controller, but even there a LoadBalancer service needs to be created, which cannot support both TCP and UDP in the same service.
Below is the cluster IP service exposing the apps currently using external IP:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tcp-udp-svc
  name: tcp-udp-service
spec:
  externalIPs:
  - <public IP- IP2>
  ports:
  - name: tcp-exp
    port: 33001
    protocol: TCP
    targetPort: 33001
  - name: udp-exp
    port: 33001
    protocol: UDP
    targetPort: 33001
  selector:
    app: tcp-udp-app
  sessionAffinity: None
  type: ClusterIP
The service shows up like below:
NAME              TYPE        CLUSTER-IP          EXTERNAL-IP        PORT(S)
tcp-udp-service   ClusterIP   <internal IP IP1>   <public IP- IP2>   33001/TCP,33001/UDP
Using the above setup, both the TCP and UDP apps on port 33001 are accessible externally just fine using IP2.
As you can see I've used:
spec:
  externalIPs:
  - <public IP- IP2>
In the service to make it accessible externally.
However, I do not want to use this setup, i.e. I am looking for a setup that does not use spec.externalIPs.
When using a LoadBalancer service to expose the apps, I see that TCP and UDP cannot both be added in the same LoadBalancer service. So I have to create one LoadBalancer service for TCP and another LoadBalancer service for UDP, like below:
NAME          TYPE           CLUSTER-IP          EXTERNAL-IP        PORT(S)
tcp-service   LoadBalancer   <internal IP IP1>   <public IP- IP2>   33001/TCP
udp-service   LoadBalancer   <internal IP IP3>   <public IP- IP4>   33001/UDP
---
apiVersion: v1
kind: Service
metadata:
  name: tcp-service
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: tcp-svc
    port: 33001
    protocol: TCP
    targetPort: 33001
  selector:
    app: tcp-udp-app
  sessionAffinity: None
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: udp-service
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: udp-svc
    port: 33001
    protocol: UDP
    targetPort: 33001
  selector:
    app: tcp-udp-app
  sessionAffinity: None
  type: LoadBalancer
But the problem is that each of these services gets an individual IP assigned (IP2 & IP4).
I want to be able to access both the TCP and UDP apps using the same IP. When testing with the NGINX ingress controller, I faced the same issue as above.
Is there any other possible way to achieve what I am looking for, i.e. to expose both TCP and UDP services on the same IP, but without using spec.externalIPs?
Unfortunately, you will not be able to achieve your desired result with the LoadBalancer Service type for UDP traffic, because according to the following documentation the UDP protocol is not supported by any of the VPC load balancer types.
You could theoretically request a portable public IP address for the LoadBalancer Service type by setting loadBalancerIP, but that portable public IP address has to be available in a portable public subnet upfront, and the cloud provider's LB needs to support the UDP protocol. You can see this doc.
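For illustration, a minimal sketch of requesting a specific pre-allocated IP on a LoadBalancer Service; the IP here is a placeholder, and whether UDP works still depends entirely on the provider:
# Sketch only: requesting a portable public IP for a LoadBalancer Service.
# The IP below is a placeholder and must already be available to the provider.
apiVersion: v1
kind: Service
metadata:
  name: udp-service
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10
  selector:
    app: tcp-udp-app
  ports:
  - name: udp-svc
    port: 33001
    targetPort: 33001
    protocol: UDP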
Workaround for a non-prod setup:
You can use hostPort to expose TCP & UDP ports directly on the worker nodes. This can be used together with some Ingress controllers that support TCP & UDP services, like NGINX Ingress. For more, see this documentation.
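A minimal sketch of the hostPort workaround, assuming a single pod; the container image name is a placeholder:
# Sketch only: exposing TCP and UDP directly on the worker node via hostPort.
# The container image is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: tcp-udp-app
  labels:
    app: tcp-udp-app
spec:
  containers:
  - name: app
    image: registry.example.com/tcp-udp-app:latest
    ports:
    - containerPort: 33001
      hostPort: 33001
      protocol: TCP
    - containerPort: 33001
      hostPort: 33001
      protocol: UDP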

How to maintain udp session within a pod in kubernetes?

When I receive UDP packets, I want to get the source IP and source port from the packet, and I expect them not to change if the packets come from the same source (same IP and same port). My packets are sent through kube-proxy in iptables mode, but when my packets pause for several seconds, the source port changes, and setting sessionAffinity to "ClientIP" doesn't work. It seems that a UDP session can only be kept for several seconds. Is there any way to extend the session time, or to keep the port the same when my packet sender's IP and port haven't changed?
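For reference, this is roughly how the affinity I tried is configured on the Service (a sketch; the service name and ports are placeholders). Note that it keys on the client IP only:
# Sketch only: the ClientIP session affinity mentioned above.
# Service name and ports are placeholders; affinity keys on the client IP and
# does not preserve the client's source port across idle periods.
apiVersion: v1
kind: Service
metadata:
  name: udp-app
spec:
  selector:
    app: udp-app
  ports:
  - name: udp
    port: 5000
    targetPort: 5000
    protocol: UDP
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # affinity timeout; 10800 seconds is the default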
This is a community wiki answer. Feel free to expand it.
As already mentioned in the comments, you can try to use the NGINX Ingress Controller. The Exposing TCP and UDP services says:
Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY]
The example shows how to expose the service kube-dns running in the namespace kube-system on port 53, using external port 53:
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  53: "kube-system/kube-dns:53"
If TCP/UDP proxy support is used, then those ports need to be exposed in the Service defined for the Ingress:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
    - name: proxied-tcp-9000
      port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
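For the udp-services ConfigMap example above (port 53), the corresponding port would be exposed the same way. A sketch of the additional entry that would go into the ports list of the Service above:
    # Sketch only: matching entry for the udp-services ConfigMap example.
    - name: proxied-udp-53
      port: 53
      targetPort: 53
      protocol: UDP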

How does the Gateway port definition work?

I have an example Istio cluster on AKS with the default ingress gateway. Everything works as expected; I'm just trying to understand how. The Gateway is defined like so:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
  namespace: some-config-namespace
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - uk.bookinfo.com
    - eu.bookinfo.com
    tls:
      httpsRedirect: true # sends 301 redirect for http requests
  - port:
      number: 443
      name: https-443
      protocol: HTTPS
    hosts:
    - uk.bookinfo.com
    - eu.bookinfo.com
    tls:
      mode: SIMPLE # enables HTTPS on this port
      serverCertificate: /etc/certs/servercert.pem
      privateKey: /etc/certs/privatekey.pem
Reaching the site on https://uk.bookinfo.com works fine. However when I look at the LB and Service that goes to the ingressgateway pods I see this:
LB-IP:443 -> CLUSTER-IP:443 -> istio-ingressgateway:8443
kind: Service
spec:
  ports:
  - name: http2
    protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30804
  - name: https
    protocol: TCP
    port: 443
    targetPort: 8443
    nodePort: 31843
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  clusterIP: 10.2.138.74
  type: LoadBalancer
Since the targetPort for the istio-ingressgateway pods is 8443, how does the Gateway definition work when it defines the port number as 443?
As mentioned here
port: The port of this service
targetPort: The target port on the pod(s) to forward traffic to
As far as I know, targetPort: 8443 points to the Envoy sidecar, so if I understand correctly, Envoy listens on 8080 for HTTP and 8443 for HTTPS.
There is an example in the Envoy documentation.
So it goes like this:
LB-IP:443 -> CLUSTER-IP:443 -> istio-ingressgateway:443 -> envoy-sidecar:8443
LB-IP:80 -> CLUSTER-IP:80 -> istio-ingressgateway:80 -> envoy-sidecar:8080
For example, for HTTP, if you check your ingress-gateway pod with netstat without any gateway configured, there isn't anything listening on port 8080:
kubectl exec -ti istio-ingressgateway-86f88b6f6-r8mjt -n istio-system -c istio-proxy -- /bin/bash
istio-proxy@istio-ingressgateway-86f88b6f6-r8mjt:/$ netstat -lnt | grep 8080
Let's create an HTTP gateway now with the YAML below.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-gw
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
And check with netstat again:
kubectl exec -ti istio-ingressgateway-86f88b6f6-r8mjt -n istio-system -c istio-proxy -- /bin/bash
istio-proxy@istio-ingressgateway-86f88b6f6-r8mjt:/$ netstat -lnt | grep 8080
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN
As you can see, we have configured the gateway on port 80, but inside the ingress gateway we can see that it's listening on port 8080.
I think this mechanism of Istio is a bit confusing.
In my understanding, the port defined by the Gateway should be the port the pod listens on, and the Service should then decide the exposed port.
Although the current mechanism is convenient for users, it introduces some confusion and uncertainty.
If Istio wants to keep this mechanism, I suggest improving it: when creating a Gateway, associate it with the ingress Service instead of the pod, and define the port explicitly as the Service port. If the port does not exist in the Service, either update the Service to open it or prohibit creation of the Gateway.

NodePort service of istio-ingressgateway returns connection refused

I have deployed istio in kubernetes through the official helm chart, but cannot access the nodeport service of the istio-ingressgateway. I get connection refused.
There is a listener on NodePort 31380 though.
$ netstat -plan | grep 31380
tcp6       0      0 :::31380                :::*                    LISTEN      8523/kube-proxy
The iptables firewall on this k8s node does not block the traffic to 31380/tcp.
I tried connecting to 127.0.0.1:31380 and the LAN IP of the node directly from the node.
Any ideas what could be wrong or how I can debug that?
Best regards,
rforberger
In your hello-istio Service configuration there is a port misconfiguration:
---
apiVersion: v1
kind: Service
metadata:
  name: hello-istio
  namespace: hello-istio
spec:
  selector:
    run: hello-istio
  ports:
    - protocol: TCP
      port: 13451
      targetPort: 80
---
The Service port should be the same as the destination port number in the Gateway, and targetPort should be the same as the containerPort from the Deployment.
So your Service configuration should look like this:
---
apiVersion: v1
kind: Service
metadata:
  name: hello-istio
  namespace: hello-istio
spec:
  selector:
    run: hello-istio
  ports:
    - protocol: TCP
      port: 80
      targetPort: 13451
---
It is working now.
I was stupid: I had to define port 80 in the Gateway definition, instead of an arbitrary port, in order to start a listener on the istio-ingressgateway on port 80.
I hadn't dared to do this, so as not to route any other traffic that is already incoming to the Istio ingress.
Thanks for your help.
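For reference, a minimal sketch of the Gateway server entry on port 80 described above; the name, namespace, and hosts here are placeholders:
# Sketch only: Gateway on port 80 so the ingress gateway opens a listener there.
# Name, namespace, and hosts are placeholders.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: hello-istio-gw
  namespace: hello-istio
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"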

DigitalOcean Loadbalancer behavior Kubernetes TCP 443

Currently I've this Load Balancer Service on my Kubernetes Cluster.
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kubernetes      ClusterIP      [HIDDEN]     <none>        443/TCP         44h
load-balancer   LoadBalancer   [HIDDEN]     [HIDDEN]      443:30014/TCP   39h
This is my .yaml config:
apiVersion: v1
kind: Service
metadata:
  name: load-balancer
spec:
  selector:
    app: nodeapp
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 443
      targetPort: 3000
      name: https
For some reason DigitalOcean does not set up HTTPS; instead it leaves it as TCP 443. I then have to manually go to DigitalOcean, change TCP to HTTPS, and create the Let's Encrypt certificate. How can I make Kubernetes create a load balancer using HTTPS on port 443 instead of TCP 443?
According to their documentation, you need to add additional annotations like this:
---
kind: Service
apiVersion: v1
metadata:
  name: https-with-cert
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
How to add SSL certificate: https://www.digitalocean.com/docs/networking/load-balancers/how-to/custom-ssl-cert/
A Service of type LoadBalancer will create a Layer 4 LB (network LB), with awareness of only the IP and port.
You will need a Layer 7 LB (application LB) that is application aware.
So, to enable HTTPS, you will need to manage it via an Ingress.
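A minimal sketch of that approach, assuming an ingress controller is already installed and a TLS secret exists in the cluster; the ingress class, host, and secret names are placeholders:
# Sketch only: terminating HTTPS at an Ingress instead of the LoadBalancer Service.
# Ingress class, host, and TLS secret name are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodeapp-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: nodeapp-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nodeapp
            port:
              number: 3000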