I run Istio on Kubernetes. I want to know how the Envoy sidecar works. For example, after the sidecar is injected into the pod, the original container cannot access the external network without an EgressRule. How does that work?
All traffic inside the pod is captured by iptables rules and redirected to the sidecar proxy. The sidecar then performs routing according to the configuration it receives from Istio Pilot (a part of the Istio control plane). That configuration is built from the Kubernetes Services and the Istio RouteRules. Since Istio cannot know anything about external services, it cannot route traffic to them without an EgressRule defined; EgressRules add the external services to the routing configuration.
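To make the capture step concrete, here is a simplified sketch of the kind of NAT rules the istio-init container installs at pod startup (the real rules live in dedicated ISTIO_* chains, with exclusions for the proxy's own UID, loopback, and configured port ranges):

```sh
# Redirect all inbound TCP traffic to Envoy's inbound listener (port 15006)
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-ports 15006

# Redirect all outbound TCP traffic from the app to Envoy's outbound listener (port 15001)
iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-ports 15001
```

Because everything the app sends is forced through Envoy first, a destination Envoy has no route for is rejected (unless the mesh's outbound traffic policy is set to pass unknown traffic through), which is why external hosts must be declared. In current Istio that declaration is a ServiceEntry; EgressRule is the older API for the same idea.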
Related
My infrastructure is based on Kubernetes (k3s, with the Istio ingress). I would like to use Istio to expose an application that is not in my cluster.
outside (internet) --https--> my router --> [cluster] istio --> [not cluster] application (192.168.1.29:8123)
I tried creating an HAProxy container, but it didn't work...
Any ideas?
If you insist on piping your traffic to the non-cluster application through the Kubernetes cluster, there are a couple of ways to handle this. You could use a Kubernetes-native Service of type ExternalName.
The Istio way, though, would be to create a ServiceEntry and then use a VirtualService combined with a Gateway to direct traffic to your application outside the cluster.
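A minimal sketch of that Istio approach, assuming the application really is reachable over plain HTTP at 192.168.1.29:8123 and that the hostnames here (home-app.internal, app.example.com) are placeholders you would replace:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: home-app
spec:
  hosts:
  - home-app.internal          # placeholder name for the outside-the-cluster app
  location: MESH_EXTERNAL
  resolution: STATIC
  ports:
  - number: 8123
    name: http
    protocol: HTTP
  endpoints:
  - address: 192.168.1.29
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: home-app-gateway
spec:
  selector:
    istio: ingressgateway      # default Istio ingress gateway pod label
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "app.example.com"        # placeholder public hostname
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: home-app
spec:
  hosts:
  - "app.example.com"
  gateways:
  - home-app-gateway
  http:
  - route:
    - destination:
        host: home-app.internal
        port:
          number: 8123
```

The ServiceEntry adds the external endpoint to Istio's service registry, the Gateway opens a port on the ingress gateway, and the VirtualService ties the two together. If your router forwards HTTPS, you would terminate TLS on the Gateway's server block instead.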
I have set up a Kubernetes cluster using kubeadm on a server, with an ingress controller (nginx), and this is working as intended. However, when I was using Docker I used to deploy an nginx reverse proxy to forward traffic to the containers. I have read that the ingress controller ships with a reverse proxy, but I am not sure whether it is sufficient, or how to configure things like banning an IP that sends too many requests within one second.
I am aware that this could be done by changing the port the cluster exposes and forwarding traffic from the reverse proxy to the ingress controller, but I don't know whether that has any benefit.
If you have more control over your inbound traffic, you can test multiple ingress controllers, not only nginx; it will depend on the purpose of your requirement, although nginx does support rate limiting. I suggest testing other ingress controllers, but install MetalLB first, so that you can assign a specific LoadBalancer IP to each ingress.
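For the rate-limiting part of the question specifically, ingress-nginx exposes basic limits through annotations; a small sketch (resource names and values are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                 # hypothetical name
  annotations:
    # Limit each client IP to 10 requests per second; excess requests
    # are rejected (HTTP 503 by default in ingress-nginx).
    nginx.ingress.kubernetes.io/limit-rps: "10"
    # Allow short bursts as a multiple of the rps limit.
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "5"
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com      # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-svc   # hypothetical backend service
            port:
              number: 80
```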
I have been doing some research about how K8s resolves services using ClusterIP Services, and how CNIs like Weave Net and service meshes like Istio provide additional features on top of this functionality. However, I'm new to the topic, and I'd like to share what I've found here to see if somebody can expand on and correct my points:
Istiod has a service registry. This registry is filled with entries coming from the K8s ClusterIP Services (which in turn form the service registry of K8s) and from any external services defined with kind: ServiceEntry (see section 5.5 of the book Istio in Action).
This service registry is then enriched with information about VirtualServices and DestinationRules. These added K8s kinds are Istio CRDs; they are what provide the L7 load-balancing features that allow traffic to be distributed by HTTP header or URI path.
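As an illustration of the kind of L7 rule this enables, here is a sketch of a VirtualService that sends requests carrying a particular header to one version of a service and everything else to another (the service name, header and subsets are placeholders; the subsets would be defined in a companion DestinationRule):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews                # hypothetical service
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        x-test-user:           # hypothetical header
          exact: "true"
    route:
    - destination:
        host: reviews
        subset: v2             # subset from a DestinationRule
  - route:                     # default route for all other traffic
    - destination:
        host: reviews
        subset: v1
```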
Without Istio, K8s has three different proxy modes for implementing the ClusterIP Service concept (userspace, iptables and IPVS). These Services provide load balancing at L4.
https://kubernetes.io/docs/concepts/services-networking/service/
The most widespread one nowadays is the iptables proxy mode. The iptables rules of the Linux machine are populated based on what kube-proxy provides. Kube-proxy watches the kube-apiserver for that data (resolving a Service name to its ClusterIP is handled separately, by CoreDNS), and the kube-apiserver in turn consults the etcd database to know about the K8s ClusterIP Services. The iptables entries map a ClusterIP to the set of backing pod IPs, and for each new connection one pod out of the many behind the ClusterIP is picked at random.
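A heavily simplified sketch of what those generated rules look like in iptables-save format (real chains carry hashed names like KUBE-SVC-XGLOHA7QRQ3V22RZ; the addresses here are illustrative):

```sh
# Service with ClusterIP 10.96.0.10:80 backed by two pods
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp --dport 80 -j KUBE-SVC-EXAMPLE

# Pick an endpoint at random for each new connection
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.5 -j KUBE-SEP-POD1
-A KUBE-SVC-EXAMPLE -j KUBE-SEP-POD2

# DNAT to the chosen pod IP
-A KUBE-SEP-POD1 -p tcp -j DNAT --to-destination 10.244.1.5:8080
-A KUBE-SEP-POD2 -p tcp -j DNAT --to-destination 10.244.2.7:8080
```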
Any piece of code or application inside a container could call the kube-apiserver directly (using the correct authentication) and obtain the pod addresses itself, but that would not be practical.
K8s can use CNI (Container Network Interface) plugins. One example of this is Weave Net.
https://www.weave.works/docs/net/latest/overview/
Weave Net creates a new layer-2 network using Linux kernel features. One daemon sets up this L2 network and manages the routing between machines, and there are various ways to attach machines to the network.
In this network the containers can be exposed to the outside world.
Weave Net implements a micro DNS server (weaveDNS) at each node. You simply name containers and the routing just works, without the use of Services, including load balancing across multiple containers with the same name.
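A sketch of how that looks with plain Docker containers attached to the Weave network (assuming weaveDNS is running with its default weave.local domain; the image name is hypothetical):

```sh
# Point the Docker client at the Weave proxy
eval $(weave env)

# Run two containers with the same hostname (typically on different hosts);
# weaveDNS registers both under pingme.weave.local and answers with
# their addresses, load balancing without any Service object.
docker run -d -h pingme.weave.local myorg/hello
docker run -d -h pingme.weave.local myorg/hello

# From any other container on the Weave network:
curl http://pingme.weave.local/
```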
Hello, I'm new to Istio and currently learning about it.
As per my understanding, the Envoy proxy resolves the IP address of the destination itself instead of going through the Kube DNS server, and sends traffic directly to a healthy pod based on information received from the control plane.
So... is a Kubernetes Service still required to be set up if I'm using Istio?
Correct me if I'm wrong.
Thanks!
From the docs:

"In order to direct traffic within your mesh, Istio needs to know where all your endpoints are, and which services they belong to. To populate its own service registry, Istio connects to a service discovery system. For example, if you've installed Istio on a Kubernetes cluster, then Istio automatically detects the services and endpoints in that cluster."
So a Kubernetes Service is needed for Istio to achieve service discovery, i.e. to learn the pod IPs. But the Kubernetes Service (L4) is not used for load balancing or routing traffic, because in Istio the L7 Envoy proxy does that.
From the docs:

"A pod must belong to at least one Kubernetes service even if the pod does NOT expose any port. If a pod belongs to multiple Kubernetes services, the services cannot use the same port number for different protocols, for instance HTTP and TCP."
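So in practice each workload keeps a plain Kubernetes Service as its discovery anchor even under Istio. A minimal sketch (names are placeholders; the protocol-prefixed port name matters because older Istio releases relied on names like http-* to classify traffic):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                 # hypothetical
spec:
  selector:
    app: my-app
  ports:
  - name: http                 # protocol-prefixed name for Istio's protocol detection
    port: 80
    targetPort: 8080
```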
I have a question related to Kubernetes networking.
I have a microservice (say numcruncherpod) running in a pod and serving requests via port 9000, and I have created a corresponding Service of type NodePort (numcrunchersvc); the node port on which this service is exposed is 30900.
My cluster has 3 nodes with the following IPs:
192.168.201.70
192.168.201.71
192.168.201.72
I will be routing traffic to my cluster via a reverse proxy (nginx). As I understand it, I need to specify the IPs of all these cluster nodes in nginx to route traffic to the cluster. Is my understanding correct?
My worry is that since nginx has no knowledge of the cluster, it might not be a good judge of which cluster node the traffic should be sent to. So is there a better way to route traffic to my Kubernetes cluster?
PS: I am not running the cluster on any cloud platform.
This answer is a little late, and a little long, so I ask for forgiveness before I begin. :)
For people not running Kubernetes clusters on cloud providers, there are 4 distinct options for exposing services running inside the cluster to the world outside.
Service of type: NodePort. This is the simplest option and the default. Kubernetes assigns your service a random port from the node-port range (30000-32767 by default). Every node in the cluster listens for traffic on this particular port and forwards it to one of the pods backing the service. This is usually handled by kube-proxy, which leverages iptables and spreads new connections across the backing pods. Since the UX of this setup is not pretty, people often add an external "proxy" server, such as HAProxy, nginx or httpd, to listen on a single IP and forward traffic to these backends (see the nginx sketch below). This is the setup you, OP, described.
A step up from this would be using a Service of type: ExternalIP. This is identical to the NodePort service, except it also gets Kubernetes to add an additional rule on all Kubernetes nodes that says "all traffic arriving for this external destination IP must also be forwarded to the pods". This basically allows you to specify any arbitrary IP as the "external IP" for the service. As long as traffic destined for that IP reaches one of the nodes in the cluster, it will be routed to the correct pod. Getting that traffic to any of the nodes, however, is your responsibility as the cluster administrator. The advantage here is that you no longer have to run an haproxy/nginx setup if you specify the IP of one of the physical interfaces of one of your nodes (for example one of your master nodes); additionally, you cut the number of hops down by one.
Service of type: LoadBalancer. This service type brings bare-metal clusters to parity with cloud providers. A fully functioning load-balancer provider can select an IP from a pre-defined pool, automatically assign it to your service and advertise it to the network, assuming it is configured correctly. This is the most "seamless" experience you'll have with Kubernetes networking on bare metal. Most load-balancer provider implementations use BGP to talk to, and advertise routes to, an upstream L3 router. MetalLB and kube-router are two FOSS projects that fit this niche.
Kubernetes Ingress. If your requirement is limited to L7 applications, such as REST APIs and HTTP microservices, you can set up a single ingress provider (nginx is one such provider) and then configure Ingress resources for all your microservices, instead of Service resources. You deploy your ingress provider and make sure it has an externally available and routable IP (you can pin it to a master node and use that node's physical interface IP, for example). The advantage of using Ingress over Services is that Ingress objects understand HTTP microservices natively, so you can do smarter health checking, routing and management.
Often people combine one of (1), (2), (3) with (4), since the first three are L4 (TCP/UDP) and (4) is L7. So things like URL-path/domain-based routing and SSL termination are handled by the ingress provider, while IP lifecycle management and routing are taken care of by the service layer.
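To make option (1) concrete for the setup described in the question, here is a minimal nginx fragment using the node IPs and NodePort from the question (this belongs inside the http block of nginx.conf; health checks and balancing strategy are left at their defaults):

```nginx
# nginx round-robins across the nodes; kube-proxy on whichever node
# receives the request then forwards it to a backing pod.
upstream numcruncher {
    server 192.168.201.70:30900;
    server 192.168.201.71:30900;
    server 192.168.201.72:30900;
}

server {
    listen 80;
    location / {
        proxy_pass http://numcruncher;
    }
}
```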
For your use case, the ideal setup would involve:
A deployment for your microservice, with health endpoints on your pod
An ingress provider, so that you can tweak and customize your routing and load balancing, as well as use it for SSL termination, domain matching, etc.
(optional): Use a LoadBalancer provider to front your Ingress provider, so that you don't have to manually configure your Ingress's networking.
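Tying the pieces together, a sketch of what the Ingress resource for the question's service might look like (the host is a placeholder; the backend names come from the question):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: numcruncher
spec:
  ingressClassName: nginx
  rules:
  - host: numcruncher.example.com   # placeholder domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: numcrunchersvc    # the Service from the question
            port:
              number: 9000          # assuming the Service port matches the container port
```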
Correct. You can route traffic to any or all of the K8s nodes (minions); the K8s network layer will forward it to the appropriate node if necessary.
If you are running only a single pod, for example, nginx will most likely round-robin the requests across the nodes. When a request hits a node that does not have the pod running on it, that node will forward the request to the node that does.
If you run 3 pods, one on each node, the request will be handled by whichever node receives it from nginx.
If you run more than one pod on each node, requests will be round-robined across the nodes and then spread across the pods on each node.