How to maintain udp session within a pod in kubernetes? - kubernetes

When receiving UDP packets, I want to read the source IP and source port from the packet, and I expect them not to change as long as the packets come from the same source (same IP and same port). My packets are sent through kube-proxy in iptables mode, but when the packet stream pauses for several seconds, the source port changes, and setting sessionAffinity to "ClientIP" doesn't help. It seems that a UDP session is only kept for a few seconds. Is there any way to extend the session time, or to keep the port the same while the sender's IP and port haven't changed?
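For reference, ClientIP affinity with an explicit timeout is configured on the Service like this; a minimal sketch, with the service name, selector and port as placeholders rather than values from the question:

apiVersion: v1
kind: Service
metadata:
  name: udp-receiver              # placeholder name
spec:
  selector:
    app: udp-receiver             # placeholder selector
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800       # how long an idle client stays pinned to the same backend
  ports:
    - name: data
      protocol: UDP
      port: 5000                  # placeholder port
      targetPort: 5000

Note that ClientIP affinity only controls which backend pod a client is pinned to; the source-port rewrite described above comes from the SNAT/conntrack entry that kube-proxy's iptables rules create, and that entry expires once the UDP flow has been idle for the conntrack timeout, which would explain why the observed source port changes after a pause.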

This is a community wiki answer. Feel free to expand it.
As already mentioned in the comments, you can try to use the NGINX Ingress Controller. The Exposing TCP and UDP services documentation says:
Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY]
The example shows how to expose the service kube-dns, running in the namespace kube-system on port 53, using the external port 53:
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  53: "kube-system/kube-dns:53"
If TCP/UDP proxy support is used, then those ports need to be exposed in the Service defined for the Ingress:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
    - name: proxied-tcp-9000
      port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
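For the UDP case asked about here, the corresponding entry under spec.ports would use protocol: UDP and match the port declared in the udp-services ConfigMap above; a minimal sketch (the port name follows the docs' naming convention and is an assumption):

spec:
  ports:
    - name: proxied-udp-53   # assumed name; matches the "53" key in the udp-services ConfigMap
      port: 53
      targetPort: 53
      protocol: UDP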

Related

Issue running KNative with MicroK8S on Multipass

I'm trying to get Knative to be able to create services on my Multipass VM, with macOS as the host OS, and I am using MicroK8s. I have DNS enabled and I am using MetalLB as my ingress controller. I have also changed Multipass to use hyperkit instead of VirtualBox. I don't know what has not been configured or has been misconfigured. The error I get when I try to create a new service is pasted below:
ubuntu@uncommon-javelin:~/sandbox/sessions/serverless_k8s/yaml$ kn service create nginx --image nginx --port 80
Error: Internal error occurred: failed calling webhook "webhook.serving.knative.dev": failed to call webhook: Post "https://webhook.knative-serving.svc:443/defaulting?timeout=10s": dial tcp 10.152.183.167:443: connect: connection refused
Run 'kn --help' for usage
When I ping that IP, it times out. So it seems like that IP address is either locked down or doesn't exist. Port 443 is configured in my ingress-service.yaml file
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
And here is what I have configured for the MetalLB address pool:
apiVersion: v1
kind: Service
metadata:
  name: test-service
  annotations:
    metallb.universe.tf/address-pool: custom-addresspool
spec:
  selector:
    name: nginx
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
And here's another file, address-pool.yaml, that I have configured for my cluster. I'm pretty sure that I either have some networking misconfigured or I'm missing a configuration somewhere:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: custom-addresspool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.1-192.168.1.100
Knative uses validating admission webhooks to ensure that the resources in the cluster are valid. It seems like the Knative webhooks are not running on your cluster, but the ValidatingWebhookConfiguration has been created, as has the Service in front of the webhook (the IP in the error message is the ClusterIP of a Kubernetes Service on your cluster).
I'd look at the webhook pods in the knative-serving namespace for more details.
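A quick way to check this, assuming a default Knative Serving install where the webhook Deployment and Service are both named webhook (resource names may differ between versions):

kubectl get pods -n knative-serving
kubectl describe service webhook -n knative-serving
kubectl logs -n knative-serving deployment/webhook

If the webhook pods are missing or crash-looping, the connection-refused error above is expected, since the Service has no healthy endpoints behind it.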

Proxying UDP DNS traffic to a TCP backend service

I am creating a CoreDNS DNS server on Kubernetes that needs to listen for UDP traffic on port 53 using an AWS network load balancer. I would like that traffic to be proxied to a Kubernetes service using TCP.
My current service looks like this:
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: coredns
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  type: NodePort
  ports:
    - name: dns
      port: 5353
      targetPort: 5353
      nodePort: 30053
      protocol: UDP
    - name: dns-tcp
      port: 5053
      targetPort: 5053
      nodePort: 31053
      protocol: TCP
    - name: metrics
      port: 9153
      targetPort: 9153
      protocol: TCP
The network load balancer speaks to the cluster on the specified node ports, but the UDP listener times out when requesting zone data from the server.
When I dig for records, I get a timeout unless +tcp is specified in the dig. The health checks from the load balancer to the port are returning healthy, and the TCP queries are returning as expected.
Ideally, my listener would accept both TCP and UDP traffic on port 53 at the load balancer and return either TCP or UDP traffic based on the initial protocol of the request.
Is there anything glaringly obvious I am missing as to why UDP traffic is not either making it to my cluster or returning a response?

NodePort service of istio-ingressgateway returns connection refused

I have deployed Istio in Kubernetes through the official Helm chart, but I cannot access the NodePort service of the istio-ingressgateway; I get connection refused.
There is a listener on NodePort 31380 though.
$ netstat -plan | grep 31380
tcp6 0 0 :::31380 :::* LISTEN 8523/kube-proxy
The iptables firewall on this k8s node does not block the traffic to 31380/tcp.
I tried connecting to 127.0.0.1:31380 and the LAN IP of the node directly from the node.
Any ideas what could be wrong or how I can debug that?
Best regards,
rforberger
In your hello-istio Service configuration there is a port misconfiguration:
---
apiVersion: v1
kind: Service
metadata:
  name: hello-istio
  namespace: hello-istio
spec:
  selector:
    run: hello-istio
  ports:
    - protocol: TCP
      port: 13451
      targetPort: 80
---
The port should be the same as the containerPort from the Deployment, and targetPort should be the same as the destination port number in the Gateway. So your Service configuration should look like this:
---
apiVersion: v1
kind: Service
metadata:
  name: hello-istio
  namespace: hello-istio
spec:
  selector:
    run: hello-istio
  ports:
    - protocol: TCP
      port: 80
      targetPort: 13451
---
It is working now.
I was stupid: I had to define port 80 in the Gateway definition, instead of any random port, in order to start a listener on the istio-ingressgateway on port 80.
I hadn't dared to do this before, so as not to route any other traffic that is already incoming to the Istio ingress.
Thanks for your help.
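For reference, a Gateway that opens a listener on port 80 of the istio-ingressgateway looks roughly like this; a sketch, with the Gateway name and hosts as assumptions rather than values from this thread:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-istio-gateway       # assumed name
  namespace: hello-istio
spec:
  selector:
    istio: ingressgateway         # targets the default istio-ingressgateway pods
  servers:
    - port:
        number: 80                # the listener opened on the ingressgateway
        name: http
        protocol: HTTP
      hosts:
        - "*"                     # assumed; restrict to specific hosts to avoid catching other traffic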

DigitalOcean Loadbalancer behavior Kubernetes TCP 443

Currently I have this LoadBalancer Service on my Kubernetes cluster:
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kubernetes      ClusterIP      [HIDDEN]     <none>        443/TCP         44h
load-balancer   LoadBalancer   [HIDDEN]     [HIDDEN]      443:30014/TCP   39h
This is my .yaml file config
apiVersion: v1
kind: Service
metadata:
  name: load-balancer
spec:
  selector:
    app: nodeapp
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 443
      targetPort: 3000
      name: https
For some reason DigitalOcean does not set up HTTPS; instead it leaves it as TCP 443, and then I have to manually go to DigitalOcean, change TCP to HTTPS, and create the Let's Encrypt certificate. How can I make Kubernetes create a load balancer using HTTPS on port 443 instead of TCP 443?
According to their documentation, you need to add additional annotations like this:
---
kind: Service
apiVersion: v1
metadata:
  name: https-with-cert
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
How to add SSL certificate: https://www.digitalocean.com/docs/networking/load-balancers/how-to/custom-ssl-cert/
A Service of type LoadBalancer will create a Layer 4 LB (network LB), which is aware only of IPs and ports.
You will need a Layer 7 LB (application LB), which is application-aware.
So, to enable HTTPS, you will need to manage it via an Ingress.
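A minimal sketch of such an Ingress, assuming an NGINX ingress controller is installed and a TLS Secret (here called nodeapp-tls) already holds the certificate; the host name is a placeholder and the backend reuses the Service from the question:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodeapp-ingress                 # assumed name
spec:
  ingressClassName: nginx               # assumes the NGINX ingress controller
  tls:
    - hosts:
        - example.com                   # placeholder host
      secretName: nodeapp-tls           # assumed Secret with tls.crt and tls.key
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: load-balancer     # the Service shown in the question
                port:
                  number: 443

With this setup TLS terminates at the ingress controller, so the load balancer in front of it can stay a plain TCP pass-through.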

How to configure a kubernetes bare-metal ingress controller to listen to port 80?

I have a kubernetes setup with 1 master and 1 slave, hosted on DigitalOcean Droplets.
For exposing my services I want to use Ingresses.
As I have a bare metal install, I have to configure my own ingress controller.
How do I get it to listen to port 443 or 80 instead of the 30000-32767 range?
For setting up the ingress controller I used this guide: https://kubernetes.github.io/ingress-nginx/deploy/
My controller service looks like this:
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
- name: https
port: 443
targetPort: 443
protocol: TCP
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
And now obviously, because the NodePort range is 30000-32767, this controller doesn't get mapped to port 80 or 443:
➜ kubectl get services --all-namespaces
NAMESPACE       NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   ingress-nginx   NodePort   10.103.166.230   <none>        80:30907/TCP,443:30653/TCP   21m
I agree with @Matthew L Daniel: if you don't consider using an external load balancer, the best option would be sharing the host network interface with the ingress-nginx Pod by enabling the hostNetwork option in the Pod's spec:
template:
  spec:
    hostNetwork: true
Thus, the NGINX Ingress controller can bind ports 80 and 443 directly on the Kubernetes nodes, without mapping special proxy ports (30000-32767) to the nested services. Find more information here.
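In context, the relevant part of the controller Deployment (or DaemonSet) would look roughly like the following; dnsPolicy: ClusterFirstWithHostNet is a commonly used companion setting so the pod still resolves cluster DNS while on the host network, and the container name here is a placeholder:

spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution with hostNetwork enabled
      containers:
        - name: controller                 # placeholder; the existing nginx-ingress controller container
          ports:
            - containerPort: 80
            - containerPort: 443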
You can't bind the Ingress service to port 80. You can run HAProxy on the host and redirect requests on ports 80 and 443 to the Ingress service's NodePort numbers.