How to make Kubernetes use an existing etcd cluster on bare-metal Ubuntu 14.04?

I am trying to configure two Kubernetes clusters on bare-metal Ubuntu that share the same etcd cluster, but I have no idea where to make the changes for this to work.

You should be able to configure the etcd instance(s) used by Kubernetes with the --etcd-servers flag (spelled --etcd_servers in older releases) on the kube-apiserver component. It also has an --etcd-prefix flag for specifying the prefix under which all of its objects are stored in etcd.
However, we typically recommend using a dedicated etcd instance for Kubernetes anyway so that it can't be affected by other users of the etcd cluster.
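A minimal sketch of what those flags could look like on the kube-apiserver command line, assuming a three-member shared etcd cluster; the endpoints and the /registry-cluster-a prefix are placeholders, and each Kubernetes cluster should get its own prefix so their keys don't collide:

# shared etcd endpoints, one key prefix per Kubernetes cluster
kube-apiserver \
  --etcd-servers=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379 \
  --etcd-prefix=/registry-cluster-a
# (remaining apiserver flags unchanged)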

Agreed with Alex. Please also don't forget to configure flannel to use your existing etcd cluster; it relies on etcd to work as well.
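For flannel, the flanneld daemon takes similar options; a sketch assuming the same shared endpoints (the prefix shown is flannel's default network-config location in etcd):

# flanneld reads its network configuration from etcd
flanneld \
  --etcd-endpoints=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379 \
  --etcd-prefix=/coreos.com/network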

Related

What is minikube config specifying?

According to the minikube handbook, the configuration commands are used to "Configure your cluster". But what does that mean?
If I set cpus and memory then are these the max values the cluster as a whole can ever consume?
Are these the values it will reserve on the host machine in preparation for use?
Are these the values that are handed to the control plane container/VM and now I have to specify more resources when making a worker node?
What if I want to add another machine (VM or bare metal) and add its resources in the form of a worker node to the cluster? From the looks of it I would have to delete that cluster, change the configuration, then start a new cluster with the new configuration. That doesn't seem scalable.
Thanks for the help in advance.
Answering the question:
If I set cpus and memory then are these the max values the cluster as a whole can ever consume?
In short: yes. Those values are the limit for the whole minikube machine (a VM, a container, etc., depending on the --driver used), and that allocation has to cover the underlying OS, the Kubernetes components, and the workload you are trying to run on it.
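For illustration, a hedged sketch of how those limits are set (the numbers are arbitrary examples, not recommendations):

$ minikube config set cpus 4
$ minikube config set memory 8192
$ minikube start

or, equivalently, in one command:

$ minikube start --cpus=4 --memory=8192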
Are these the values it will reserve on the host machine in preparation for use?
I'd reckon this depends on the --driver you are using and how it handles resources. I personally doubt it reserves 100% of the CPU and memory you've passed to $ minikube start; I'm more inclined to the idea that it uses as much as it needs during specific operations.
Are these the values that are handed to the control plane container/VM and now I have to specify more resources when making a worker node?
By default, when you create a minikube instance with $ minikube start you get a single-node cluster that acts as a control-plane node and a worker node simultaneously. You will be able to run your workloads (like an nginx Deployment) without adding an additional node.
You can add a node to your minikube ecosystem with just: $ minikube node add. This will add another node marked as a worker (with no control-plane components). You can read more about it here:
Minikube.sigs.k8s.io: Docs: Tutorials: Multi node
What if I want to add another machine (VM or bare metal) and add its resources in the form of a worker node to the cluster? From the looks of it I would have to delete that cluster, change the configuration, then start a new cluster with the new configuration. That doesn't seem scalable.
As said previously, you don't need to delete the minikube cluster to add another node. You can run $ minikube node add to add a node on a minikube host. There are also options to delete/stop/start nodes.
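A short sketch of those node sub-commands; the node name minikube-m02 is the typical auto-generated name for the second node, so check $ minikube node list for the actual names in your cluster:

$ minikube node add                    # add another worker node
$ minikube node list                   # list nodes and their IPs
$ minikube node stop minikube-m02      # stop a specific node
$ minikube node start minikube-m02     # start it again
$ minikube node delete minikube-m02    # remove it from the cluster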
Personally speaking, if the workload that you are trying to run requires multiple nodes, I would consider other ways to build a Kubernetes cluster, for example with:
Kubeadm
Kubespray
Microk8s
This would allow you more flexibility in where you create your Kubernetes cluster (as far as I know, minikube works within a single host, like your laptop for example).
A side note!
There is an answer (written more than 2 years ago) which shows the way to add a Kubernetes cluster node to a minikube here:
Stackoverflow.com: Answer: How do I get the minikube nodes in a local cluster
Additional resources:
Kubernetes.io: Docs: Setup: Production environment: Tools: Kubeadm: Create cluster kubeadm
Github.com: Kubernetes sigs: Kubespray
Microk8s.io

Kubernetes Cluster: How to figure out which nodes are master nodes

When I use the command kubectl get nodes, I get a list of nodes with ROLES. Is there any way I can find out which nodes are the masters?
Use this command for this purpose.
kubectl get node --selector='node-role.kubernetes.io/master'
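Note that on newer Kubernetes versions the master role label was renamed (and the old one eventually removed), so depending on the cluster version you may need the control-plane label instead:

kubectl get node --selector='node-role.kubernetes.io/control-plane'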
In EKS, according to the AWS Documentation:
The control plane runs in an account managed by AWS, and the Kubernetes API is exposed via the Amazon EKS endpoint associated with your cluster.
As mentioned in my comment above, you don't have access to the master node in an EKS cluster, as it is managed by AWS.
The idea behind it is to "make your life easier" and have you worry only about the workloads that will run on the worker nodes.
There is also this documentation page, that may help in the understanding of EKS.

Load balancer for kube-apiserver while creating the Kubernetes cluster using kubeadm

I am trying to create a Kubernetes cluster with 1 master and 2 worker nodes using the kubeadm tool on my on-premises machines. I am following the official Kubernetes documentation for forming the cluster at the following URL:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
After installing the container runtime and completing the "before you begin" prerequisite steps, I found that the first step of forming the cluster in the document is "Create load balancer for kube-apiserver".
My Doubt
When I created a single-master, 3-worker-node cluster using the Kubespray tool, I did not create any separate load balancer for it. So now that I am following the kubeadm tool, do I actually need to create the load balancer to form the cluster?
Why do the two tools show different approaches? I did not create a load balancer when using the Kubespray tool, and now I am trying to create a cluster with the kubeadm tool.
Speaking of load balancer creation during a Kubernetes deployment with kubeadm: it depends on your setup. It is not mandatory to set up a load balancer. Your cluster will still work, but without load balancing it is hard to qualify the cluster as HA.
In a single-master setup, as in your case, the master node runs the etcd database, API server, controller manager, and scheduler, alongside the worker nodes. However, if that single master node fails, the worker nodes can no longer be managed or scheduled and the cluster is effectively lost.
Learn more here: kubernetes-ha-kubeadm.
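To make the difference concrete, a hedged sketch of the two kubeadm invocations; the DNS name and port are placeholders for whatever your load balancer (or future load balancer) exposes:

# single control-plane node, no load balancer required
kubeadm init

# HA-ready: point the API endpoint at a load-balanced address from the start
kubeadm init --control-plane-endpoint "k8s-api.example.com:6443" --upload-certs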
Kubeadm covers the needs of life-cycle management for Kubernetes clusters, including self-hosted layouts, dynamic discovery services, and so on. Kubespray is more about generic configuration, initial clustering, and bootstrapping.
Kubespray is a good choice when you either are familiar with Ansible or seek a possibility to switch between multiple platforms. If your priority is tight integration with unique features offered by the supported clouds, and you plan to stick with your provider, kops may be a better option.
Deploying a load balancer is up to the user and is not covered by the Ansible roles in Kubespray. By default, it only configures a non-HA endpoint, which points to the access_ip or IP address of the first server node in the kube-master group. It can also configure clients to use endpoints for a given load balancer type. You can find more information here: kubespray-lb.
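As a hedged example of that client-side configuration in Kubespray, these are the inventory variables typically set in group_vars/all/all.yml (variable names and defaults may differ between Kubespray versions; the address and port are placeholders):

# external load balancer in front of the apiservers
apiserver_loadbalancer_domain_name: "k8s-api.example.com"
loadbalancer_apiserver:
  address: 10.0.0.10
  port: 6443

# or: run a local proxy on every node that balances across all apiservers
# loadbalancer_apiserver_localhost: true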
Here is a comparison of Kubernetes deployment tools: Kubernetes Deployment Tools.

Use Calico for policy and networking on AWS EKS?

AWS EKS makes use of its own CNI plugin, and there are docs that allow you to install Calico for managing policy. For a number of reasons, I'd like to have Calico manage networking as well.
Based on the installation instructions I can't seem to find a way to do either option:
etcd
Doesn't seem viable, as I can't find a way to access the EKS control plane etcd endpoints. If I were to deploy my own etcd pods inside the cluster, I would need to use the AWS CNI plugin for those to get an IP address, so that doesn't work. I could bring my own etcd cluster outside of Kubernetes, but that seems a bit ridiculous.
Kubernetes API datastore
This option wants me to change settings on the controllers, which I don't have access to in the AWS EKS managed control plane.
The short answer is that, as of this writing, neither EKS nor GKE gives you direct access to any of the control plane components: etcd, kube-apiserver, kube-controller-manager, coredns/kube-dns, or kube-scheduler.
They do have some docs on how to install Calico on an EKS cluster, but if you want more control you'll have to set up your own standalone cluster.
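As a quick sanity check of what is actually handling networking on an EKS cluster, you can look at the DaemonSets in kube-system; aws-node is the AWS VPC CNI, and calico-node only appears if you installed Calico (names assume the default manifests):

kubectl get daemonset aws-node -n kube-system
kubectl get daemonset calico-node -n kube-system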
They might allow you access to the master components in the future but the bottom line is that EKS is a 'managed' service where they are supposed to take care of all your control plane components.

usecase for etcd inside Kubernetes

I was just wondering why it is useful to run an etcd cluster inside Kubernetes, when Kubernetes itself depends on etcd.
It just does not make sense to me: if I have HA Kubernetes, I am also forced to have HA etcd outside of it. Hence there seems to be no reason to install it again inside...
I have an external etcd cluster that backs my Kubernetes HA cluster, and I'm not letting any developer apps near it. I would be too concerned about something going wrong and breaking the Kubernetes cluster. It is also a fixed size of 3, which works well for the cluster size and its requirements. If the developers need a key/value store for their application and want etcd, this would be a great way to run one in the cluster for the applications. Being deployed as StatefulSets, it is also scalable.
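A hedged sketch of spinning up such an application-facing etcd inside the cluster, using the Bitnami etcd Helm chart (chart name and values may differ between chart versions):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install app-etcd bitnami/etcd --set replicaCount=3
# runs etcd as a StatefulSet for your applications, completely separate
# from the etcd that backs the Kubernetes control plane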
If you're using Kubernetes via GKE, the underlying Etcd cluster is not exposed in any way.