For StatefulSets, I am wondering whether it is possible to put each replica behind its own virtual IP, perhaps using a Service, so that per-replica hostnames get the same connection and DNS behavior as the ClusterIP hostname we get from a non-headless Service.
When we use the per-replica hostnames, we seem to lose the load balancing and connection management provided by the virtual IP, and that causes problems for our app.
You can improve DNS performance by deploying NodeLocal DNSCache, which can reduce the average DNS lookup time. The local DNS cache works with the kube-dns ConfigMap to automatically pick up stub domains and upstream nameservers.
You can enable this feature on an existing cluster by passing the --update-addons flag with the argument NodeLocalDNS=ENABLED, as shown in the following example:
gcloud container clusters update CLUSTER_NAME \
--update-addons=NodeLocalDNS=ENABLED
You can find more information about this feature in the documentation.
Also, to set up a Service for a specific replica of a StatefulSet, you can use the statefulset.kubernetes.io/pod-name label that the StatefulSet controller adds to each pod; selecting on that label lets you attach a Service to a specific pod.
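As a rough sketch of what that could look like (the StatefulSet name web and the ports are assumptions for illustration, not taken from your setup), you would create one such Service per replica:

apiVersion: v1
kind: Service
metadata:
  name: web-0                 # one Service per replica; the name is an assumption
spec:
  type: ClusterIP             # gives this replica its own stable virtual IP
  selector:
    statefulset.kubernetes.io/pod-name: web-0   # label the StatefulSet controller adds to each pod
  ports:
    - port: 80                # assumed ports
      targetPort: 8080

Clients would then connect to web-0.<namespace>.svc.cluster.local and get the usual ClusterIP connection handling instead of the headless per-pod DNS behavior.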
In addition, you can deploy a health check to verify that your backends respond to traffic. If a backend fails to respond, it will be marked as unhealthy and traffic will be served by the healthy backends.
Related
I am new to Kubernetes, and I understand that pods have dynamic IPs and require some other "Service" resource attached to them to get a fixed IP address. What Service do I require, what is the configuration process, and how does AWS ECR fit into all this?
So if I have to communicate from a container in a pod to google.com, can I assume my source is the IP address of the "Service" when I establish the connection?
Well, for example on Azure, this feature ([Feature Request] Pod Static IP) is still an open request:
See https://github.com/Azure/AKS/issues/2189
Also, as far as I know, you can currently assign an existing IP address to a LoadBalancer Service or an ingress controller:
See https://learn.microsoft.com/en-us/azure/aks/static-ip
By default, the public IP address assigned to a load balancer resource
created by an AKS cluster is only valid for the lifespan of that
resource. If you delete the Kubernetes service, the associated load
balancer and IP address are also deleted. If you want to assign a
specific IP address or retain an IP address for redeployed Kubernetes
services, you can create and use a static public IP address
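As a hedged sketch of how such a static IP is then consumed (the IP address, resource group name, labels, and ports below are placeholders, and spec.loadBalancerIP is deprecated in recent Kubernetes releases in favour of provider-specific annotations):

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # resource group that holds the pre-created static public IP (placeholder name)
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup
spec:
  type: LoadBalancer
  loadBalancerIP: 20.0.0.10   # the static public IP created beforehand (placeholder)
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080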
As you said, we need to define a Service which selects all the required pods, and then you send requests to this Service instead of to the pods.
I would suggest you go through https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types.
The type of service you need basically depends on the use-case.
I will give a small overview so you get an idea.
ClusterIP is usually used when pods only receive requests from inside the cluster.
NodePort allows external requests, but it is basically used for testing rather than for production.
If you also have requests coming from outside the cluster, you would usually use a LoadBalancer.
Then there is the additional option of an Ingress.
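As a rough sketch (the name, labels, and ports are placeholders), the manifests for these mostly differ only in the type field:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP        # change to NodePort or LoadBalancer depending on the use case
  selector:
    app: my-app          # must match the labels on your pods
  ports:
    - port: 80           # port the Service exposes
      targetPort: 8080   # port your container listens on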
As for AWS ECR, it's basically a container registry where you store your Docker images and from which your cluster pulls them.
I'm trying to understand the concepts of ingress and ingress controllers in kubernetes. But I'm not so sure what the end product should look like. Here is what I don't fully understand:
Given I have a running Kubernetes cluster somewhere with a master node which runs the control plane and the etcd database. Besides that, I have 3 worker nodes - each worker node has a public IPv4 address with a corresponding DNS A record (worker{1,2,3}.domain.tld), and I have full control over my DNS server. I want my users to access my web application via www.domain.tld. So I point the www CNAME to one of the worker nodes (I saw that my ingress controller, for example, got scheduled to worker1, so I point it to worker1.domain.tld).
Now I schedule a workload consisting of 2 frontend pods and 1 database pod, with 1 service for the frontend and 1 service for the database. From what I understand right now, I need an ingress controller pointing to the frontend service to achieve some kind of load balancing. Two questions here:
Isn't running the ingress controller on only one worker node pointless for internally load balancing between the two frontend pods via its service? Is it best practice to run an ingress controller on every worker node in the cluster?
For whatever reason, the worker which runs the ingress controller dies and it gets rescheduled to another worker. So the ingress point will then be at another IPv4 address, right? From the perspective of a user trying to access the frontend via www.domain.tld, this DNS entry has to be updated, right? How so? Do I need to run a specific Kubernetes-aware DNS server somewhere? I don't understand the connection between the DNS server and the Kubernetes cluster.
Bonus question: If I run more ingress controller replicas (spread across multiple workers), do I use a DNS round-robin approach here, with multiple IPv4 addresses bound to one DNS entry? Or what's the best solution to achieve HA? I would rather not use load-balancing IP addresses where the workers share the same IP address.
Given I have a running Kubernetes cluster somewhere with a master node which runs the control plane and the etcd database. Besides that, I have 3 worker nodes - each worker node has a public IPv4 address with a corresponding DNS A record (worker{1,2,3}.domain.tld), and I have full control over my DNS server. I want my users to access my web application via www.domain.tld. So I point the www CNAME to one of the worker nodes (I saw that my ingress controller, for example, got scheduled to worker1, so I point it to worker1.domain.tld).
Now I schedule a workload consisting of 2 frontend pods and 1 database pod, with 1 service for the frontend and 1 service for the database. From what I understand right now, I need an ingress controller pointing to the frontend service to achieve some kind of load balancing. Two questions here:
Isn't running the ingress controller on only one worker node pointless for internally load balancing between the two frontend pods via its service? Is it best practice to run an ingress controller on every worker node in the cluster?
Yes, it's good practice. Having multiple replicas of the load balancer is important to ensure high availability. For example, if you run the ingress-nginx controller, you should probably deploy it to multiple nodes.
For whatever reason, the worker which runs the ingress controller dies and it gets rescheduled to another worker. So the ingress point will then be at another IPv4 address, right? From the perspective of a user trying to access the frontend via www.domain.tld, this DNS entry has to be updated, right? How so? Do I need to run a specific Kubernetes-aware DNS server somewhere? I don't understand the connection between the DNS server and the Kubernetes cluster.
Yes, the IP will change. And yes, this needs to be updated in your DNS server.
There are a few ways to handle this:
Assume clients will deal with outages. You can list all load-balancer nodes in a round-robin record and assume clients will fall back. This works with some protocols, but mostly implies timeouts and problems, and should generally not be used, especially since you still need to update the records by hand when Kubernetes decides to create or remove LB entries.
Configure an external DNS server automatically. This can be done with the external-dns project, which can sync against most of the popular DNS servers, including standard RFC2136 dynamic updates, but also cloud providers like Amazon, Google, Azure, etc.
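For example (a sketch only, with the hostname, labels, and ports assumed), external-dns can pick up the desired record from an annotation on the Service it watches:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  annotations:
    external-dns.alpha.kubernetes.io/hostname: www.domain.tld   # record external-dns should create/update
spec:
  type: LoadBalancer   # external-dns can also watch other sources, e.g. Ingress resources
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 8080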
Bonus question: If I run more ingress controller replicas (spread across multiple workers), do I use a DNS round-robin approach here, with multiple IPv4 addresses bound to one DNS entry? Or what's the best solution to achieve HA? I would rather not use load-balancing IP addresses where the workers share the same IP address.
Yes, you should basically do DNS round-robin. I would assume external-dns would do the right thing here as well.
Another alternative is to do some sort of ECMP. This can be accomplished by having both load balancers "announce" the same IP space. That is an advanced configuration, however, which may not be necessary. There are interesting trade-offs between BGP/ECMP and DNS updates; see this Dropbox engineering post for a deeper discussion.
Finally, note that CoreDNS is looking at implementing public DNS records which could resolve this natively in Kubernetes, without external resources.
Isn't running the ingress controller on only one worker node pointless for internally load balancing between the two frontend pods via its service? Is it best practice to run an ingress controller on every worker node in the cluster?
The number of ingress controller replicas will not affect the quality of load balancing, but for HA you can run more than one replica of the controller.
For whatever reason, the worker which runs the ingress controller dies and it gets rescheduled to another worker. So the ingress point will then be at another IPv4 address, right? From the perspective of a user trying to access the frontend via www.domain.tld, this DNS entry has to be updated, right? How so? Do I need to run a specific Kubernetes-aware DNS server somewhere? I don't understand the connection between the DNS server and the Kubernetes cluster.
Right, it will be at another IPv4 address. Yes, the DNS entry should be updated for that. There are no standard tools for that included in Kubernetes. Yes, you need to run an external DNS server and somehow manage its records (manually, or with some tools or scripts).
The DNS server inside a Kubernetes cluster and your external DNS server are totally different things. The DNS server inside the cluster only provides resolution inside the cluster, for service discovery. Kubernetes does not know anything about access from external networks to the cluster, at least on bare metal. In a cloud, it can manage things like load balancers to automate external access management.
If I run more ingress controller replicas (spread across multiple workers), do I use a DNS round-robin approach here, with multiple IPv4 addresses bound to one DNS entry? Or what's the best solution to achieve HA?
DNS round-robin works in that case, but if one of the nodes goes down, your clients will have problems connecting to that node, so you need to find some way to move or remove that node's IP.
The solution for HA provided by @jjo is not the worst way to achieve what you want, if you can prepare an environment for it. If not, you should choose something else, but the best practice is to use a load balancer provided by the infrastructure. Whether it is based on several dedicated servers, load-balancing IPs, or something else does not matter.
The behavior you describe is actually a LoadBalancer (a Service with type=LoadBalancer in Kubernetes), which is "naturally" provided when you're running Kubernetes on top of a cloud provider.
From your description, it looks like your cluster is on bare metal (either true or virtual metal). A possible approach (that has worked for me) would be:
Deploy https://github.com/google/metallb
this is where your external IP will "live" (HA'd), via the speaker-xxx pods deployed as a DaemonSet to each worker node
depending on your external L2/L3 setup, you'll need to choose between L3 (BGP) or L2 (ARP) modes; a sketch of an L2 address-pool config follows after these steps
fyi I've successfully used L2 mode + simple proxyarp at the border router
Deploy nginx-ingress controller, with its Service as type=LoadBalancer
this will make MetalLB "land" (actually: L3- or L2-"advertise") the assigned IP on the nodes
fyi I successfully tested it together with kube-router (as CNI) using --advertise-loadbalancer-ip; the effect will be that e.g. <LB_IP>:80 will be redirected to the ingress-nginx Service NodePort
Point your DNS to ingress-nginx LB IP, i.e. what's shown by:
kubectl get svc --namespace=ingress-nginx ingress-nginx -ojsonpath='{.status.loadBalancer.ingress[].ip}{"\n"}'
fyi you can also quickly test it using fake DNSing with http://A.B.C.D.xip.io/ (A.B.C.D being your public IP addr)
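For reference, a sketch of the L2 address-pool configuration mentioned in step 1 (the address range is a placeholder; note that newer MetalLB releases configure this through CRDs such as IPAddressPool and L2Advertisement instead of this legacy ConfigMap):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # placeholder: spare IPs on the nodes' LAN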
There is also a Kubernetes DNS add-on that configures external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services, handling DNS record updates for ingress LoadBalancers. It keeps the DNS records up to date according to the Ingress controller configuration.
How to install Kubernetes dashboard on external IP address?
Is there any tutorial for this?
You can expose services and pods in several ways:
expose the internal ClusterIP service through Ingress, if you have that set up.
change the service type to use 'type: LoadBalancer', which will try to create an external load balancer.
If you have external IP addresses on your kubernetes nodes, you can also expose the ports directly on the node hosts; however, I would avoid these unless it's a small, test cluster.
change the service type to 'type: NodePort', which will utilize a port above 30000 on all cluster machines.
expose the pod directly using a hostPort in the pod spec.
Depending on your cluster type (Kops-created, GKE, EKS, AKS and so on), different variants may not be set up. Hosted clusters typically support and recommend LoadBalancers, which they charge for, but may or may not have support for NodePort/HostPort.
Another, more important note is that you must ensure you protect the dashboard. Running an unprotected dashboard is a sure way of getting your cluster compromised; this recently happened to Tesla. A decent write-up on various ways to protect yourself was written by Joe Beda of Heptio.
I have deployed two pods with hostNetwork set to true. When the pods are deployed on the same OpenShift node, everything works fine since they can discover each other using the node IP.
When the pods are deployed on different OpenShift nodes, they can't discover each other; I get "no route to host" if I try to point one pod at the other using the node IP. How can I fix this?
The uswitch/kiam (https://github.com/uswitch/kiam) service is a good example of a use case.
it has an agent process that runs on the hostnetwork of all worker nodes because it modifies a firewall rule to intercept API requests (from containers running on the host) to the AWS api.
it also has a server process that runs on the hostnetwork to access the AWS api since the AWS api is on a subnet that is only available to the host network.
finally... the agent talks to the server using GRPC which connects directly to one of the IP addresses that are returned when looking up the kiam-server.
so you have pods of the agent deployment running on the hostnetwork of node A trying to connect to kiam server running on the hostnetwork of node B.... which just does not work.
furthermore, this is a private service... it should not be available from outside the network.
If you want the two containers to share the same physical machine and take advantage of loopback for quick communication, then you would be better off defining them together as a single Pod with two containers.
If the two containers are meant to float over a larger cluster and be more loosely coupled, then I'd recommend taking advantage of the Service construct within Kubernetes (under OpenShift) and using that for the appropriate discovery.
Services are documented at https://kubernetes.io/docs/concepts/services-networking/service/, and along with an internal DNS service (if implemented - common in Kubernetes 1.4 and later) they provide a means to let Kubernetes manage where things are, updating an internal DNS entry in the form of <servicename>.<namespace>.svc.cluster.local. So for example, if you set up a Pod with a service named "backend" in the default namespace, the other Pod could reference it as backend.default.svc.cluster.local. The Kubernetes documentation on the DNS portion of this is available at https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
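As a small sketch of that (the selector label and port are assumptions), the "backend" Service from the example above is just a normal ClusterIP Service, and the other pod reaches it by its DNS name rather than by any node IP:

apiVersion: v1
kind: Service
metadata:
  name: backend          # reachable as backend.default.svc.cluster.local from other pods
  namespace: default
spec:
  selector:
    app: backend         # assumed label on the backend pod(s)
  ports:
    - port: 8080         # assumed port
      targetPort: 8080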
This also avoids the "hostnetwork=true" complication, and lets OpenShift (or specifically Kubernetes) manage the networking.
If you absolutely have to use hostNetwork, you should create a router and then use those routers for communication between pods. You can create an HAProxy-based router in OpenShift; see https://docs.openshift.com/enterprise/3.0/install_config/install/deploy_router.html
I have set up an experimental local Kubernetes cluster with one master and three slave nodes. I have created a deployment for a custom service that listens on port 10001. The goal is to access an exemplary endpoint /hello with a stable IP/hostname, e.g. http://<master>:10001/hello.
After deploying the deployment, the pods are created fine and are accessible through their cluster IPs.
I understand the solution for cloud providers is to create a load balancer service for the deployment, so that you can just expose a service. However, this is apparently not supported for a local cluster. Setting up Ingress seems overkill for this purpose. Is it not?
It seems more like kube proxy is the way to go. However, when I run kube proxy --port <port> on the master node, I can access http://<master>:<port>/api/..., but not the actual pod.
There are many related questions (e.g. How to access services through kubernetes cluster ip?), but no (accepted) answers. The Kubernetes documentation on the topic is rather sparse as well, so I am not even sure what the right approach is conceptually.
I am hence looking for a straight-forward solution and/or a good tutorial. It seems to be a very typical use case that lacks a clear path though.
If an Ingress Controller is overkill for your scenario, you may want to try using a service of type NodePort. You can specify the port, or let the system auto-assign one for you.
A NodePort service exposes your service at the same port on all Nodes in your cluster. If you have network access to your Nodes, you can access your service at the node IP and port specified in the configuration.
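As a rough sketch for your case (the Service name, label, and the specific nodePort value are assumptions; the application port 10001 is taken from your description):

apiVersion: v1
kind: Service
metadata:
  name: hello-service     # assumed name
spec:
  type: NodePort
  selector:
    app: hello            # must match your deployment's pod labels (assumed)
  ports:
    - port: 10001         # cluster-internal port
      targetPort: 10001   # container port from your deployment
      nodePort: 30001     # must fall in the 30000-32767 range; omit to auto-assign

The endpoint would then be reachable at http://<any-node-ip>:30001/hello.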
Obviously, this does not load balance between nodes. You can add an external service to help you do this if you want to emulate what a real load balancer would do. One simple option is to run something like rocky-cli.
An Ingress is probably your simplest bet.
You can schedule the creation of an Nginx IngressController quite simply; here's a guide for that. Note that this setup uses a DaemonSet, so there is an IngressController on each node. It also uses the hostPort config option, so the IngressController will listen on the node's IP, instead of a virtual service IP that will not be stable.
Now you just need to get your HTTP traffic to any one of your nodes. You'll probably want to define an external DNS entry for each Service, each pointing to the IPs of your nodes (i.e. multiple A/AAAA records). The ingress will disambiguate and route inside the cluster based on the HTTP hostname, using name-based virtual hosting.
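For illustration (the host, Service name, and port are assumptions, written against the current networking.k8s.io/v1 API rather than the older versions that guide may show), a name-based rule looks like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress            # assumed name
spec:
  rules:
    - host: hello.domain.tld     # DNS entry that points at your node IPs (assumed)
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-service   # assumed Service in front of your pods
                port:
                  number: 10001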
If you need to expose non-HTTP services, this gets a bit more involved, but you can look in the nginx ingress docs for more examples (e.g. UDP).