Kubernetes architecture

Is it possible to simplify this chain, which runs on bare metal:
StatefulSet with a replica count that will change over time
Service
Nginx-ingress with proxy-next-upstream: "error http_502 timeout invalid_header non_idempotent"
Pod with Nginx for caching and many other things that the ingress can't do
Service of type LoadBalancer
MetalLB

Yes, if you turn Nginx into a sidecar (deployed in every pod) and remove the ingress. The cache is not shared in this case:
StatefulSet with a replica count that will change over time
Sidecar (i.e. in every replica) with Nginx for caching and the many other things the ingress can't do, including the ingress settings you used; proxy_pass to localhost in this case
Service of type LoadBalancer
MetalLB
A minimal sketch of this variant follows.
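Here is a minimal sketch of the sidecar variant; the app container, its port, and the ConfigMap name are assumptions, not part of the original setup:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  serviceName: app
  replicas: 3                    # will change over time
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: my-app:latest     # hypothetical application image
        ports:
        - containerPort: 8080
      - name: nginx-cache        # sidecar: caching plus the former ingress settings
        image: nginx
        ports:
        - containerPort: 80      # the LoadBalancer Service targets this port
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-sidecar-conf   # hypothetical; holds a server block with proxy_pass http://127.0.0.1:8080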
Or, if you need a shared cache, just throw away the ingress:
StatefulSet
ServiceA (pointing to the StatefulSet): ClusterIP
Nginx Deployment with caching and hacks; proxy_pass to ServiceA.namespace.svc.cluster.local
ServiceB (pointing to the Nginx Deployment): LoadBalancer
MetalLB
A sketch of the caching layer's Nginx configuration follows.
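As a sketch of that Nginx layer, the configuration could be shipped in a ConfigMap; the cache zone, service name, and namespace are placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-cache-conf
data:
  default.conf: |
    # shared cache in front of ServiceA
    proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;
    server {
      listen 80;
      location / {
        proxy_cache app_cache;
        # same retry conditions the ingress used
        proxy_next_upstream error http_502 timeout invalid_header non_idempotent;
        proxy_pass http://servicea.namespace.svc.cluster.local;
      }
    }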

Related

Kubernetes ingress controller strict round-robin

I'm trying to set up an ingress controller in Kubernetes that will give me strict alternation between two (or more) pods running in the same service.
My testing setup is a single Kubernetes node, with a deployment of two nginx pods.
The deployment is then exposed with a NodePort service.
I've then deployed an ingress controller (I've tried both Kubernetes Nginx Ingress Controller and Nginx Kubernetes Ingress Controller, separately) and created an ingress rule for the NodePort service.
I edited index.html on each of the nginx pods, so that one shows "SERVER A" and the other "SERVER B", and ran a script that then curls the NodePort service 100 times. It greps "SERVER x" each time, appends it to an output file, and then tallies the number of each at the end.
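A sketch of such a test loop (the URL is a placeholder; substitute your node IP and NodePort):

#!/bin/sh
# hit the service 100 times, record which backend answered, then tally
for i in $(seq 1 100); do
  curl -s http://192.168.0.10:30080/ | grep -o 'SERVER [AB]' >> results.txt
done
sort results.txt | uniq -c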
As expected, curling the NodePort service itself (which uses kube-proxy), I got completely random results-- anything from 50:50 to 80:20 splits between the pods.
Curling the ingress controller, I consistently get something between 50:50 and 49:51 splits, which is great-- the default round-robin distribution is working well.
However, looking at the results, I can see that I've curled the same server up to 4 times in a row, whereas I need to enforce a strict alternation A-B-A-B. I've spent quite a while researching this and trying out different options, but I can't find a setting that will do this. Does anyone have any advice, please?
I'd prefer to stick with one of the ingress controllers I've tried, but I'm open to trying a different one, if it will do what I need.
Nginx's default algorithm is already round-robin, so the Nginx ingress should get you close; note, however, that each Nginx worker process keeps its own rotation state, which can explain seeing the same backend several times in a row. You can perform most tests on the Nginx ingress with different config tweaks if required; one such tweak is sketched below.
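For instance, assuming the community ingress-nginx controller, there is a load-balance annotation (round_robin is its documented default, ewma the alternative); a minimal sketch with a hypothetical backend service named web-service:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: round-robin-ingress
  annotations:
    # community ingress-nginx controller annotation
    nginx.ingress.kubernetes.io/load-balance: "round_robin"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service   # hypothetical backend service
            port:
              number: 80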
There are also other options: for example, you can use the Istio service mesh and load-balance the traffic as you require by changing only the configuration:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
  subsets:
  - name: testversion
    labels:
      version: v3
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
Read more at https://istio.io/latest/docs/reference/config/networking/destination-rule/ and https://istio.io/latest/docs/reference/config/networking/destination-rule/#LoadBalancerSettings
However, I would suggest going with a service mesh only when you have a large cluster; if you are implementing this for 2-3 services, you are better off with the Nginx ingress (haproxy-ingress is also a good option).

Minikube service expose to public IP

I am learning Kubernetes and trying to deploy an app using MiniKube.
I have managed to expose the service mapped to the nginx pod on the Minikube IP. I can access the nginx service at the URL $(minikube ip):$(serviceport), which is fine. However, I am looking to expose this to the public network. Currently this service is only accessible via my local machine; any other machine on my wifi network is not able to access it, as it is exposed only on the Minikube IP. I don't want to forward the port on my local Linux box via iptables, and I am looking for a built-in solution to expose the port to the world (and not just on the Minikube IP). I know it can be achieved, as the Minikube dashboard by default exposes its service on localhost; this implies that Minikube can talk to other network adapters and can register the port, though I am not sure how.
Here is my service yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: nginxservice
  labels:
    app: nginxservice
spec:
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: 80
    nodePort: 32756
  selector:
    app: nginxcontainer
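With the fixed nodePort above, the service should then answer on the node's address; on Minikube, for example:

curl http://$(minikube ip):32756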
#subudear is right - you need Ingress.
An API object that manages external access to the services in a cluster, typically HTTP. Ingress may provide load balancing, SSL termination and name-based virtual hosting.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
To be able to use ingress regularly (I'm not talking about Minikube right now), it is not enough to simply create an Ingress object; you first have to install the related ingress controller.
There are lot of them, most popular are:
NGINX Ingress Controller
Kubernetes Nginx Ingress Controller
Traefik
Istio Ingress Controller
The first two are very similar but use absolutely different annotations; it often happens that people confuse them.
Talking about minikube:
As per the guidelines, in order to install ingress, the only thing you have to do is
minikube addons enable ingress
Please note that by default, Minikube installs exactly the NGINX Ingress controller:
nginx-ingress-controller-5984b97644-rnkrg 1/1 Running 0 1m
Then you have to create an Ingress; a minimal sketch follows the link below.
Follow the steps in this doc - https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
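For reference, a minimal Ingress for the nginxservice above might look like this (once the addon is enabled):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginxservice-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginxservice
            port:
              number: 80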

Can Ingress Controllers use Selector based rules?

I have deployed a statefulset in AKS. My goal is to load-balance traffic to my statefulset.
From my understanding I can define a LoadBalancer Service that can route traffic based on Selectors, something like this.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: web
  selector:
    app: nginx
However, I don't necessarily want to go down the LoadBalancer route; I would prefer Ingress to do this work for me. My question is: can any of the ingress controllers support routing rules that do path-based routing to endpoints based on selectors, instead of routing to another service?
Update
To elaborate more on the scenario: each pod in my statefulset is a stateless node doing data processing of an HTTP feed. I want my ingress service to be able to load-balance traffic across these statefulset pods (honoring keep-alives etc.); however, given the nature of statefulsets in k8s, they are currently exposed through a headless service. I am not sure if a headless service can load-balance traffic to my statefulsets?
Update 2
A quick search reveals that a headless service does not load-balance:
Sometimes you don't need load-balancing and a single Service IP. In this case, you can create what are termed "headless" Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
As far as I know, it's not possible to do selector-based routing with ingress.
Selector-based routing is mostly used during a blue-green or canary deployment, and you can only achieve it by using a service mesh. You can use any service mesh, such as Istio or App Mesh, to do the selector-based routing.
I have deployed a statefulset in AKS. My goal is to load-balance traffic to my statefulset.
If your goal is just to load-balance traffic, you can use the ingress controller, though I'm still not sure about the scenario you are trying to explain.
By default, the Kubernetes Service also load-balances the traffic across the pods.
The flow will be something like: DNS > ingress > ingress controller > Kubernetes Service (load balancing here) > any pod of the statefulset.
+1 to Harsh Manvar's answer but let me add also my 3 cents.
My question is: can any of the ingress controllers support routing rules that do path-based routing to endpoints based on selectors, instead of routing to another service?
To the best of my knowledge, the answer to your question is no, it can't, and this doesn't depend on any particular ingress controller implementation. Note that the various ingress controllers, no matter how different they may be in implementation, must conform to the general specification of the Ingress resource described in the official Kubernetes documentation. You don't have different kinds of ingresses depending on which controller is used.
Ingress and Service work on different layers of abstraction. While a Service exposes a set of pods using a selector, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp 👈
path-based routing performed by Ingress is always done between Services:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test 👈
            port:
              number: 80
I am not sure if a headless service can load balance traffic to my statefulsets?
The first answer is "no". Why?
k8s Service is implemented by the kube-proxy. Kube-proxy itself can work in two modes:
iptables (also known as netfilter)
ipvs (also known as LVS/Linux Virtual Server)
load balancing in case of iptables mode is a NAT iptables rule: from ClusterIP address to the list of Endpoints
load balancing in case of ipvs mode is a VIP (LVS Virtual IP) with the Endpoints as upstreams
So, when you create a k8s Service with clusterIP set to None, you are effectively saying:
"I need this service WITHOUT load balancing"
Setting clusterIP to None causes kube-proxy NOT to create the NAT rule in iptables mode, nor the VIP in ipvs mode. There will be nothing to load-balance traffic across the pods selected by this particular Service's selector.
The second answer is "it could be". Why?
You are free to create a headless Service with the desired pod selector. A DNS query for this Service will return the list of DNS A records for the selected pods, and you can then use this data to implement load balancing YOUR way; a sketch follows.
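A minimal sketch of that approach; the Service name and pod label are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: my-headless          # hypothetical name
spec:
  clusterIP: None            # headless: no VIP, no kube-proxy load balancing
  selector:
    app: my-app              # hypothetical pod label
  ports:
  - port: 80

A DNS query from inside the cluster, e.g. nslookup my-headless.default.svc.cluster.local, then returns one A record per ready pod, and your client can pick among them.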

AKS Kubernetes questions

Can someone please explain how POD to POD works in AKS?
From the docs, I can see it uses the kube-proxy component to send the traffic to the desired POD.
But I have been told that I must use a ClusterIP service to bind all the relevant PODs together.
So what is the real flow? Or have I missed something? Below are a couple of questions to make this clearer.
Questions:
How can PODs on the same node talk to each other? What is the flow?
How can PODs on different nodes in the cluster talk to each other? What is the flow?
If possible, it would be highly appreciated if you could describe the flows for #1 and #2 for both kubenet and CNI deployments.
Thanks a lot!
For pod-to-pod communication we use Services. So first we need to understand why we need a Service at all: what a Service actually does for us is resolve a DNS name and give us the exact IP we need to connect to a specific pod. Since you want pod-to-pod communication, you need to create a ClusterIP Service.
ClusterIP: exposes the Service on a cluster-internal IP. Choosing this value makes the Service reachable only from within the cluster; this is the default ServiceType. With a ClusterIP Service you can't access a pod from outside the cluster, which is why we use a ClusterIP Service when we want pod-to-pod communication only. A minimal example is sketched below.
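A minimal sketch of such a ClusterIP Service; the name, label, and port are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-service   # hypothetical name
spec:
  type: ClusterIP              # the default, shown here for clarity
  selector:
    app: my-test               # matches the pods the traffic should reach
  ports:
  - port: 80
    targetPort: 80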
kube-proxy is the network proxy that runs on each node in your cluster.
It maintains network rules on nodes; these network rules allow network communication to your pods from network sessions inside or outside of your cluster.
Every Service is backed by iptables rules, and kube-proxy maintains these rules for every Service. So yes, kube-proxy is the most vital piece of the network setup in a k8s cluster.
How the Kubernetes network model works:
all Pods can communicate with all other Pods without using network address translation (NAT).
all Nodes can communicate with all Pods without NAT.
the IP that a Pod sees itself as is the same IP that others see it as.
Those points underpin:
Container-to-Container networking
Pod-to-Pod networking
Pod-to-Service networking
Internet-to-Service networking
It handles transmission of packets between pods, and also with the outside world. It acts like a network proxy and load balancer for pods running on the node by implementing load balancing using NAT in iptables.
The kube-proxy process stands in between the Kubernetes network and the pods that are running on that particular node. It is responsible for ensuring that communication is maintained efficiently across all elements of the cluster. When a user creates a Kubernetes service object, the kube-proxy instance is responsible to translate that object into meaningful rules in the local iptables rule set on the worker node. iptables is used to translate the virtual IP assigned to the service object to all of the pod IPs mapped by the service.
I hope this clarifies your idea of kube-proxy. Let's see an example of how it works.
Here I used a headless Service so that I can connect to a specific pod.
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  clusterIP: None
  selector:
    app: my-test
  ports:
  - port: 80
    name: rest
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-sts
spec:
  serviceName: my-service
  replicas: 3
  selector:
    matchLabels:
      app: my-test
  template:
    metadata:
      labels:
        app: my-test
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
---
This will create 3 pods: my-sts-0, my-sts-1, my-sts-2. Now if we want to connect to the pod my-sts-0, we just use the DNS name my-sts-0.my-service.default.svc:80, and the Service will resolve that name to the exact pod IP of my-sts-0. So if you need to communicate from my-sts-1 to my-sts-0, you can just use this DNS name.
The template is my_pod_name.my_Service_Name.my_Namespace.svc.cluster-domain.example, but you can skip the cluster-domain.example part; my_pod_name.my_Service_Name.my_Namespace.svc will work fine.
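For example, a quick check from one replica to another, using the names from the manifests above (assuming curl is available in the container):

kubectl exec my-sts-1 -- curl -s http://my-sts-0.my-service.default.svc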

Is it possible to serve up applications through a Kubernetes controller node?

I have built a K3s (https://k3s.io) cluster on a set of Raspberry Pi4 computers.
The controller (ctrl-1) node is a gateway in that it has 2 network interfaces. One is connected to my LAN and the other is connected to a network that it creates, e.g. K3S-LAN. The two nodes (node-1 and node-2) are deployed to the K3S-LAN.
I want to be able to access the applications running on the nodes through ctrl-1, e.g. from the LAN. This is because this cluster is meant to be portable so only the ctrl-1 node needs to be connected to the guest LAN. (Yes there are issues with DNS names etc to be sorted out, but I want to get the basics running first).
This means that I need to be able to "proxy" the ingress through ctrl-1. I thought I had the right idea for this, in that I deployed "nginx-ingress" to the master using Helm. However, I forgot about the Service for this: it has been scheduled on the nodes, whereas it needs to be on the controller so that the ports are opened up (I think), and I cannot find how to make the service run on the controller.
At the moment I have the service running with a type of NodePort. I could install MetalLB so that I have LoadBalancer capabilities. However with what I have seen I am not sure if this would help or not.
ctrl-1 does not have any taints setup on it, just the role of master.
Am I barking up the wrong tree here? I guess this might not be the intended use case of Kubernetes, but I am playing around with an idea. Thanks for any ideas that people have.
Update*
I have just thought that the way around this might be to run HAProxy on ctrl-1 (as another service on the host) and setup rules to proxy to the necessary services within the cluster. That would act as the bridge between the networks.
You just need to expose your pod via a NodePort type service, and it can be accessed via http://master-node-ip:nodeport. Make sure that kube-proxy is running on all master and worker nodes.
The ingress approach should also work, as long as you have kube-proxy running on your master. You deploy nginx ingress on your cluster and it will get deployed onto a worker node. Then you expose the nginx ingress controller itself using a NodePort service. After this you can create an Ingress resource that configures the nginx ingress controller to route traffic to your backend pods and services running on the worker nodes. The Services for the backend pods should be of type ClusterIP.
Deploy the nginx ingress controller and expose it via a NodePort service:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/provider/baremetal/service-nodeport.yaml
Deploy an nginx pod (nginx is an example; this should be your pod):
kubectl run nginx --generator=run-pod/v1 --image=nginx
Expose the nginx pod via a ClusterIP service:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
Create the ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mycha-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
With the above setup I can now access nginx and get "Welcome to nginx!" via http://master-node-ip:<NodePort>, where <NodePort> is the node port of the nginx ingress controller's service.