Kubernetes service with port-forward does not load balance

I'm playing around with K8s for my master's thesis at the moment. For this I'm spinning up a K8s cluster with the help of kind. I have also developed a small Flask REST API which echoes an ENV var.
Now I'm starting 3 services which each hold a number of pods of the Flask app, and they call each other. For better understanding I have a hello svc, a world svc and a world2 svc.
So far so good.
I have successfully deployed them, and now I want to port-forward the hello svc:
kubectl --namespace test port-forward svc/hello 30000
This works fine, but as soon as I start my JMeter application to test the load-balancing features, something odd happens.
As you can see in the Grafana dashboard, the other services are happily load balancing the traffic, but the svc which is port-forwarded sends all of its traffic to a single hello pod.
This is my deployment:
deployment.yml
Am I missing something, or did I deploy my application wrong?
Thanks in advance!

Port-forward allows the use of services for convenience purposes only. Behind the scenes it connects to a single pod directly, and the connection will be dropped should that pod die. There is no load balancing in port-forward: one pod selected by the service is chosen, and all traffic is forwarded there for the entire lifetime of the port-forward command. I would suggest using a NodePort-type service if you need to test load balancing via JMeter from outside the Kubernetes cluster.
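For illustration, a minimal sketch of such a NodePort service; the app: hello label and container port 5000 are assumptions, not taken from your deployment:

apiVersion: v1
kind: Service
metadata:
  name: hello-nodeport
  namespace: test
spec:
  type: NodePort
  selector:
    app: hello          # assumed pod label
  ports:
  - port: 5000          # assumed container port of the Flask app
    targetPort: 5000
    nodePort: 30000     # reachable on every node at <NodeIP>:30000

Note that with kind the node ports live inside the Docker containers acting as nodes, so you may additionally need to map them to your host, e.g. via extraPortMappings in the kind cluster config.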

For all those who are interested, I found a workaround which is also closer to production.
First of all I installed MetalLB: https://mauilion.dev/posts/kind-metallb/
With this load balancer I declared an IP range which is the same as the one of my nodes.
The service which I am exposing also received type: LoadBalancer, and with this Grafana is now showing an equal distribution of requests.
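For reference, a sketch of such a service; the selector and ports are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: test
spec:
  type: LoadBalancer    # MetalLB assigns an IP from its configured range
  selector:
    app: hello          # assumed pod label
  ports:
  - port: 80
    targetPort: 5000    # assumed container port of the Flask app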

Related

Accessing pods through ClusterIP

I want to create a cluster of RESTful web APIs in AWS EKS and be able to access them through a single IP (allowing Kubernetes to load balance requests to each). I have followed the procedure explained in this link and have set up an example nginx deployment as shown in the following image:
The problem is that when I access the example nginx deployment via 172.31.22.183 it works just fine, but when I try to use the cluster IP 10.100.145.181 it does not yield any response, in such a way that it seems to be unreachable.
What's the purpose of that cluster ip then and how can I use it to achieve what I need?
What's the purpose of that cluster ip then and how can I use it to achieve what I need?
ClusterIP is a local IP that is used internally in the cluster; you can use it to access the application from within the cluster.
The endpoint IP that you got, on the other hand, is probably external, which is why you can access the application from outside with it.
AWS EKS and be able to access them through a single IP (allowing kubernetes to load balance requests to each)
For this, the best practice is to use an Ingress, an API gateway, or a service mesh.
Ingress is the single point where all your requests come in; it load balances and forwards the traffic internally inside the cluster.
Think of Ingress as a load balancer: a single point of entry into the cluster.
Ingress : https://kubernetes.io/docs/concepts/services-networking/ingress/
AWS Example : https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/
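As an illustration, a minimal Ingress resource that fans out to two hypothetical backend services (api-svc and web-svc are placeholder names) behind one entry point:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx       # assumes an nginx ingress controller is installed
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-svc       # hypothetical backend service
            port:
              number: 80
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web-svc       # hypothetical backend service
            port:
              number: 80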
ClusterIP is an IP that is only accessible inside the cluster. You cannot hit it from outside the cluster unless you use kubectl port-forward.
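For example, to reach a ClusterIP service locally (the service name and ports here are placeholders):

kubectl port-forward svc/example-nginx 8080:80
curl http://localhost:8080/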

How to set DNS entries & network configuration for a kubernetes cluster at home (noob here)

I am currently running a Kubernetes cluster on my own home server (in Proxmox CTs; it was kind of difficult to get working because I am using ZFS too, but it runs now), and the setup is as follows:
lb01: haproxy & keepalived
lb02: haproxy & keepalived
etcd01: etcd node 1
etcd02: etcd node 2
etcd03: etcd node 3
master-01: k3s in server mode, with a taint so it does not accept any workloads
master-02: same as above, just joining with the token from master-01
master-03: same as master-02
worker-01 - worker-03: k3s agents
If I understand it correctly, k3s ships with Flannel pre-installed as the CNI, as well as Traefik as the Ingress controller.
I've set up Rancher on my cluster as well as Longhorn; the volumes are just ZFS volumes mounted inside the agents though, and as they aren't on different HDDs I've set the replicas to 1. I have a friend running the same setup (we set them up together, just yesterday) and we are planning on joining our networks through VPN tunnels and then providing storage nodes for each other as an offsite backup.
So far I've hopefully got everything correct.
Now to my question: I've got both a static IP at home and a domain, and I've pointed that domain at my static IP.
Something like this (I don't know exactly how DNS entries are actually written, this is just off the top of my head for your reference; the entries are working fine):
A example.com. [[my-ip]]
CNAME *.example.com. example.com
I've currently set up a port-forward on my router to one of my master nodes for ports 80 & 443, but I am not quite sure how you would actually configure that with HA in mind. Also, my Rancher is throwing a 503 after visiting the global settings, even though I have not changed anything.
So now my question: how would one actually configure that port-forward? As far as I know, k3s comes with a load balancer pre-installed, but how would one configure those port-forwards for HA? The one master node the forward is pointing to could, theoretically, just stop working, and then all services would no longer be reachable from outside.
Assuming your apps are running on ports 80 and 443, your ingress should give you a service with an external IP, and you would point your DNS at that. Read below for more info.
Seems like you are not a noob! You've got a lot going on with your cluster setup. What you are asking is a bit complicated to answer and I will have to make some assumptions about your setup, but I will do my best to give you at least some initial info.
This tutorial has a ton of great info and may help you with what you are doing. They use kubeadm instead of k3s, but you can skip that section if you want and still use k3s.
https://www.debontonline.com/p/kubernetes.html
If you are setting up and installing etcd on your own, you don't need to do that: k3s will create an etcd cluster for you that runs inside pods on your cluster.
Load Balancing your master nodes
The haproxy + keepalived nodes would be configured to point to the IPs of your master nodes at port 6443 (TCP). keepalived will give you a virtual IP, and you would configure your kubeconfig (that you get from k3s) to talk to that IP. On your router you will want to reserve an IP for this (make sure not to assign it to any computers).
This is a good video that explains how to do it with a nodejs server but concepts are the same for your master nodes:
https://www.youtube.com/watch?v=NizRDkTvxZo
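A minimal haproxy.cfg sketch for the master-node load balancing described above; the 192.168.1.x addresses are placeholders for your master IPs:

frontend k3s-api
    bind *:6443
    mode tcp
    default_backend k3s-masters

backend k3s-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master-01 192.168.1.11:6443 check   # placeholder master IPs
    server master-02 192.168.1.12:6443 check
    server master-03 192.168.1.13:6443 check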
Load Balancing your applications running in the cluster
Use a K8s Service; read more about it here: https://kubernetes.io/docs/concepts/services-networking/service/
Essentially you need an external IP. I prefer to do this with MetalLB.
MetalLB gives you a service of type LoadBalancer with an external IP.
Add this flag to k3s when creating the initial master node:
https://metallb.universe.tf/configuration/k3s/
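The short version of that page is to start k3s without its bundled Klipper service load balancer so MetalLB can take over; roughly:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable servicelb" sh -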
Configure MetalLB:
https://metallb.universe.tf/configuration/#layer-2-configuration
You will want to reserve more IPs on your router and put them under the addresses section in the YAML below. In this example you can see you have 11 IPs in the range 192.168.1.240 to 192.168.1.250.
Create this as a file, for example metallb-cm.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
Install MetalLB with these YAML files first (they create the metallb-system namespace that the ConfigMap above needs):
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
Then apply your config:
kubectl apply -f metallb-cm.yaml
source - https://metallb.universe.tf/installation/#installation-by-manifest
Ingress
Your ingress controller will need a service of type LoadBalancer; use its external IP as the externally reachable entry point.
Run kubectl get service -A, look for your ingress service, and check that it has an external IP and does not say pending.
I will do my best to answer any of your follow up questions. Good Luck!

Ingress traffic flow into a kubernetes cluster

Can anyone please help me understand the ingress traffic flow to a pod in Kubernetes? Any web links or documents are much appreciated.
In my application there are intermittent connection timeouts, so I want to understand how the traffic flows into the cluster and where I need to enable tcpdump to understand what is happening when there is a timeout.
Your question does not contain enough information to give you a detailed answer. There are different types of ingress controllers, and load balancers as well.
So, suppose:
you are using Azure Kubernetes Service
you are using Azure Load Balancer
you have two types of backend pods, each has its own dedicated service
you are using Nginx as the ingress controller, which is able to do layer 7 (OSI) load balancing
Nginx also has its own pods, and a service sits in front of these pods. This service has a service IP which is available only within the AKS cluster. Because of this, you additionally use the Azure Load Balancer (ALB) to make your backend pods available to the public. The ALB is a layer 4 load balancer which sends the incoming traffic to the worker nodes.
Kube-proxy runs on every worker node and is able to recognize that traffic from the ALB is destined for the Nginx service.
See the flow in the image below:
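To find where the timeout happens, you could capture at each hop of that flow; for example (the interface name, namespace, pod name and ports below are assumptions):

# on a worker node: traffic arriving from the ALB on the service's node port
tcpdump -i eth0 -nn port 31080

# inside the Nginx ingress controller pod, if tcpdump is available in the image
kubectl exec -n ingress-nginx nginx-ingress-pod -- tcpdump -i any -nn port 80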

Access kubernetes cluster from outside of the host machine via port 80

So, instead of explaining the architecture, I drew you a picture today :) I know, it's 1/10.
I forgot to draw this as well: it is a single-node cluster.
Hope this will save you some time.
It's probably also easier to see where my struggles are, as the picture exposes my lack of understanding.
So, in a nutshell:
What is working:
I can curl each ingress via virtual hosts from inside the server using curl -vH 'host: host.com' http://192.168.1.240/articleservice/system/ipaddr
I can access the server
What's not working:
I can not access the cluster from outside.
Somehow I am not able to solve this myself, even though I have read quite a lot and had lots of help. As I have been having issues with this for a while now, explicit answers are really appreciated.
Generally you cannot access your cluster from outside without exposing a service.
You should change your Ingress controller's service type to NodePort and let Kubernetes assign a port to that service.
You can see the ports assigned to a service using kubectl get service ServiceName.
Now it is possible to access that service from outside at http://ServerIP:NodePort, but if you need to use the standard HTTP and HTTPS ports you should use a reverse proxy outside of your cluster to forward traffic from port 80 to the NodePort assigned to the Ingress controller service.
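For example, such a reverse proxy sketched in haproxy terms, with the node IP and NodePort as placeholders:

frontend http-in
    bind *:80
    mode tcp
    default_backend ingress-nodeport

backend ingress-nodeport
    mode tcp
    server node-01 192.168.1.50:30080 check   # node IP and assigned NodePort, both placeholders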
If you don't want to add a reverse proxy, it is possible to add externalIPs to the Ingress controller service, but that way you lose the RemoteAddr in your endpoints and get the ingress controller pod IP instead.
externalIPs can be a list of your public IPs.
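A sketch of what that could look like on the ingress controller's service; the names, label and IP are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalIPs:
  - 203.0.113.10                             # placeholder public IP
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller label
  ports:
  - name: http
    port: 80
    targetPort: 80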
you can find useful information about services and ingress in following links:
Kubernetes Services
Nginx Ingress - Bare-metal considerations

Accessing a webpage hosted on a pod

I have a deployment that hosts a website on port 9001, and a service attached to it. I want to allow anyone (from outside the cluster) to be able to connect to that site.
Any help would be appreciated.
I want to allow anyone (from outside cluster) to be able to connect to that site
There are many ways to do this using kubernetes services to expose port 9001 of the website to the outside world:
Service type LoadBalancer if you have an external, cloud-provider's load-balancer.
ExternalIPs. The website can be hit at ExternalIP:Port.
Service type NodePort if the cluster's nodes are reachable from the users. The website can be hit at NodeIP:NodePort.
Ingress controller and ingress resource.
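For instance, the NodePort option from the list above as a minimal sketch; the app: website label is an assumption about your deployment:

apiVersion: v1
kind: Service
metadata:
  name: website
spec:
  type: NodePort
  selector:
    app: website          # assumed pod label of the deployment
  ports:
  - port: 9001
    targetPort: 9001
    nodePort: 30901       # site becomes reachable at <NodeIP>:30901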
As you wrote that this is not a cloud deployment, you need to consider how to correctly expose this to the world in a decent fashion. First and foremost, create a NodePort-type service for your deployment. With this, your nodes will expose that service on a high port.
Depending on your network, at this point you either need to configure a load balancer in your network to forward traffic for some IP:80 to your nodes' high NodePort, or, for example, deploy HAProxy in a DaemonSet with hostNetwork: true that will proxy port 80 to your NodePort (a sketch follows below).
A bit more complexity can be added by deploying the Nginx ingress controller (exposed as above) and using Ingress resources to make the ingress controller expose all your services, without the need to fiddle with NodePort/LB/HAProxy for each of them individually any more.
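A rough sketch of the HAProxy DaemonSet variant mentioned above; the image tag and the ConfigMap holding haproxy.cfg are assumptions:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-haproxy
spec:
  selector:
    matchLabels:
      app: edge-haproxy
  template:
    metadata:
      labels:
        app: edge-haproxy
    spec:
      hostNetwork: true              # binds port 80 directly on every node
      containers:
      - name: haproxy
        image: haproxy:2.8           # assumed image tag
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config
          mountPath: /usr/local/etc/haproxy
      volumes:
      - name: config
        configMap:
          name: edge-haproxy-cfg     # hypothetical ConfigMap containing haproxy.cfg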