Kubernetes Cluster: How to figure out which nodes are master nodes

When I use the command kubectl get nodes, I get a list of nodes with ROLES. Is there any way I can find out which nodes are the masters?

Use this command for that purpose:
kubectl get node --selector='node-role.kubernetes.io/master'
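Note that, depending on your Kubernetes version, the masters may instead be labelled node-role.kubernetes.io/control-plane (the master label was removed in 1.24), so you may need:
kubectl get node --selector='node-role.kubernetes.io/control-plane'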

In EKS, according to the AWS Documentation:
The control plane runs in an account managed by AWS, and the Kubernetes API is exposed via the Amazon EKS endpoint associated with your cluster.
As mentioned in my comment above, you don't have access to the master node in an EKS cluster, as it is managed by AWS.
The idea behind this is to "make your life easier" so that you only have to worry about the workloads that run on the worker nodes.
There is also this documentation page, which may help in understanding EKS.

Related

How to add more master nodes to a Kubernetes cluster

I have a cluster that looks like this:
[masters]
master
[workers]
worker1
worker2
worker3
I know that I can add these nodes to the inventory and re-initialize my cluster.
But can I do that without reconfiguring the inventory file?
I thought that I could do it with kubeadm join --token etc.,
but the output of kubeadm token list is empty.
According to the official documentation:
You can replicate Kubernetes masters in kube-up or kube-down scripts for Google Compute Engine. This document describes how to use kube-up/down scripts to manage highly available (HA) masters and how HA masters are implemented for use with GCE.
Also, if you want to set up an HA Kubernetes cluster on bare metal, you should check out this guide.
If you need to get a better understanding of how HA clusters work in general, you should read this article.
Please let me know if that helped.
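If your cluster was bootstrapped with kubeadm (rather than the GCE scripts above), a rough sketch of what you were attempting looks like the following; the endpoint, token, hash and certificate key are placeholders that the commands on the existing master print out:
# on an existing master: create a fresh join token and re-upload the control-plane certificates
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs
# on the new machine: join as an additional control-plane node using the values printed above
kubeadm join <ENDPOINT>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>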

How can Kubernetes auto-scale nodes?

I am using Kubernetes to manage a Docker cluster. Right now, I can set up pod autoscaling using the Horizontal Pod Autoscaler, and that is fine.
Now I think the next step is to autoscale nodes. With HPA, automatically created pods are only scheduled on the existing nodes; if all the available nodes are fully utilized and have no resources left for any more pods, the next step should be to automatically create a node and have it join the Kubernetes master.
I googled a lot, but there are very limited resources introducing this topic.
Can anyone please point me to any resource on how to implement this requirement?
Thanks
One way to do this on AWS, setting up your own Kubernetes cluster, is by following these steps:
Create an instance larger than t2.micro (this will be the master node).
Initialize the Kubernetes cluster using a tool like kubeadm. After the initialisation is completed, you will get a join command, which needs to be run on all the nodes that want to join the cluster. (Here is the link)
Now create an Auto Scaling group on AWS with a start/boot script containing that join command (a sketch of such a script is shown below).
Now, whenever the utilisation threshold you specified in the Auto Scaling group is breached, scaling happens and the node(s) automatically join the Kubernetes cluster. This allows Kubernetes to schedule pods on the newly joined nodes based on the HPA.
(I would suggest using Flannel as the pod network, as it automatically removes a node from the Kubernetes cluster when it is no longer available.)
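As a rough illustration (assuming a kubeadm-based cluster; the master IP, token and hash below are placeholders taken from the join command printed on the master), the Auto Scaling group's user-data/boot script could simply run the join command:
#!/bin/bash
# placeholder values - use the output of 'kubeadm token create --print-join-command' from the master
kubeadm join <MASTER_IP>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>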
Kubernetes Operations (kops) helps you create, destroy, upgrade, and maintain production-grade, highly available Kubernetes clusters from the command line.
Features:
Automates the provisioning of Kubernetes clusters in AWS and GCE
Deploys Highly Available (HA) Kubernetes Masters
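As a minimal sketch (the cluster name, S3 state-store bucket and zones below are placeholder values, and flag names can differ slightly between kops versions), creating an HA cluster with kops looks roughly like:
kops create cluster --name=mycluster.example.com \
    --state=s3://my-kops-state-store \
    --zones=us-east-1a,us-east-1b,us-east-1c \
    --master-count=3 --node-count=3 --yes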
Most managed Kubernetes service providers offer an auto-scaling feature for the nodes:
Elastic Kubernetes Service (EKS): configure the Cluster Autoscaler
Google Kubernetes Engine (GKE): GKE autoscaler
The auto-scaling feature needs to be supported by the underlying cloud provider. Google Cloud supports auto-scaling during cluster creation or update by passing the flags --enable-autoscaling, --min-nodes, and --max-nodes to the corresponding gcloud commands.
Examples:
gcloud container clusters create mytestcluster --zone=us-central1-b --enable-autoscaling --min-nodes=3 --max-nodes=10 --num-nodes=5
gcloud container clusters update mytestcluster --enable-autoscaling --min-nodes=1 --max-nodes=15
The link below may be helpful:
https://medium.com/kubecost/understanding-kubernetes-cluster-autoscaling-675099a1db92
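On EKS there is no equivalent gcloud-style flag; you typically deploy the Kubernetes Cluster Autoscaler into the cluster yourself, for example via its Helm chart. A minimal sketch, where the cluster name and region are placeholders and the chart values may vary between chart versions:
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm install cluster-autoscaler autoscaler/cluster-autoscaler \
    --namespace kube-system \
    --set autoDiscovery.clusterName=<your-cluster-name> \
    --set awsRegion=<your-region>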

Instance metadata on IBM Cloud

Is there any way to get instance metadata on IBM Cloud Kubernetes cluster, from internal pod? Something like doing curl to http://metadata.google.internal/computeMetadata/v1/instance/... on GKE clusters, or http://169.254.169.254/latest/... on EKS clusters.
Any help will be appreciated. Thanks!
I think I follow what you're after: you're able to get details/metadata about your worker node from the available SoftLayer APIs - https://sldn.softlayer.com/reference/softlayerapi/
For k8s-specific info you can use the Kubernetes API server to query metadata about things like the node, pods, etc. at the kubernetes.default.svc.cluster.local address from inside a pod. You can find the service account CA certificate and token within your pod under /var/run/secrets/kubernetes.io.
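For example, a minimal sketch of calling the API server from inside a pod with the mounted service account credentials (listing nodes requires the service account to have RBAC permission to do so):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    -H "Authorization: Bearer $TOKEN" \
    https://kubernetes.default.svc.cluster.local/api/v1/nodes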
Hope that helps.

How do managed Kubernetes providers hide the master nodes?

If I run kubectl get nodes on GKE, EKS, or DigitalOcean Kubernetes, I only see the worker nodes. How are these systems architected at the network or application level to create this separation between workers and masters?
You can run the Kubernetes control plane outside Kubernetes as long as the worker nodes have network access to the control plane. This approach is used on most managed Kubernetes solutions.
A Container Engine cluster is a group of Compute Engine instances running Kubernetes. It consists of one or more node instances, and a managed Kubernetes master endpoint.
Every container cluster has a single master endpoint, which is managed by Container Engine. The master provides a unified view into the cluster and, through its publicly-accessible endpoint, is the doorway for interacting with the cluster.
The managed master also runs the Kubernetes API server, which services REST requests, schedules pod creation and deletion on worker nodes, and synchronizes pod information (such as open ports and location) with service information.
More info can be found here
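You can see this separation from inside any such cluster; the control-plane endpoint reported by kubectl is an address managed by the provider, while the node list only ever contains workers:
kubectl cluster-info   # shows the provider-managed control plane endpoint
kubectl get nodes      # lists only the worker nodes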

Deploy Kubernetes on OpenStack

I am trying to understand the relationship between Kubernetes and OpenStack. I am confused about the topic of deploying Kubernetes on OpenStack, and while doing my research I found there are too many tutorials. My understanding of the sequence is:
1. Start several Nova instances on OpenStack.
2. Install the Kubernetes master on one instance and install Kubernetes nodes on the other instances.
3. Submit a YAML file using kubectl, and Kubernetes will create and deploy my application.
As for Kubernetes's self-healing capacity, can Kubernetes restart some of the failed Nova instances? Which component in Kubernetes is responsible for restarting/rebooting/deleting/re-provisioning Nova instances? Is it the Kubernetes master? If so, what will happen if the Kubernetes master is down and cannot be recovered?
1, 2 and 3 are correct.
Self-healing
You can deploy the masters in an HA configuration. The recommended way is either 3 or 5 masters, with a quorum of (n + 1)/2.
Can Kubernetes reprovision/restart some of the failed Nova instances?
Not really. That is up to Nova, which manages the underlying servers. Kubernetes has an OpenStack module that allows it to interact with OpenStack components, such as creating external load balancers and creating volumes that can be used with your workloads/pods/containers.
You can either use kubeadm or kubespray to bootstrap a cluster.
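As a rough sketch of the kubespray route (the inventory name below is a placeholder, and file names may differ between kubespray versions), the whole installation is driven from an Ansible inventory listing your Nova instances:
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
cp -r inventory/sample inventory/mycluster
# create/edit the inventory (hosts.yaml or inventory.ini, depending on the kubespray version) to list the Nova instances as control-plane and worker nodes
ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml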
Hope it helps.
If you want to deploy Kubernetes on top of OpenStack, I would recommend that you look into OpenStack Magnum. This is the most common use case for OpenStack and Kubernetes.
There is also the possibility of running the OpenStack control plane under Kubernetes, which would allow you to better scale and auto-heal OpenStack services. This is primarily for the control plane (e.g. nova-api), and as far as I know there is no way of running nova-compute under Kubernetes.
I found a good blog post here that describes some of the benefits of such an approach.
Yes, you're spot on with your observations in the case of running Kubernetes on top of OpenStack, and the other answers here already give you further pointers. I just wanted to point out, in addition, that the other way round is also an option, that is, running OpenStack on top of Kubernetes, for example using OpenStack-Helm.