How to deploy the kube-controller-manager? - kubernetes

I have installed Kubernetes with minikube, which is a single-node cluster.
There is a YAML file to deploy the controller manager, but the pod keeps showing
Back-off restarting failed container / Error syncing pod
Can someone help solve this issue?
The link for the YAML file is here: https://github.com/kubernetes/kubernetes.github.io/blob/master/docs/admin/high-availability/kube-controller-manager.yaml

The Kubernetes controller manager is a core component of Kubernetes and is already running in every Kubernetes cluster, usually in the form of a standalone pod managed by the Kubernetes addon manager. Minikube uses localkube, which integrates the controller manager together with the other Kubernetes core components into a single binary to simplify the setup of single-node clusters for testing purposes. If you want to change options of the integrated controller manager or other components, use the --extra-config option of minikube start.
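For example, a minimal sketch of passing a flag to the embedded controller manager at start-up; the specific flag and value here are only illustrative assumptions, the general form is <component>.<flag-name>=<value>:

    # Illustrative only: forwards a flag to the controller manager built into localkube.
    minikube start --extra-config=controller-manager.cluster-signing-cert-file=/var/lib/localkube/certs/ca.crt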
The example you linked is a custom deployment of the controller manager used for highly available multi-master clusters. If you want to test this, you need to set up your cluster manually; minikube is not the right tool for this.

Related

Is it possible to upgrade a k8s registered cluster in Rancher?

I'm considering registering a Kubernetes cluster in Rancher. After that, how should I handle upcoming Kubernetes upgrades? Can they be handled by Rancher itself?
I only found information about upgrading a k3s registered cluster.
You should be able to do it from the cluster view (if your cluster was installed via Rancher), as documented:
1. From the Global view, find the cluster for which you want to upgrade Kubernetes. Select ⋮ > Edit.
2. Expand Cluster Options.
3. From the Kubernetes Version drop-down, choose the version of Kubernetes that you want to use for the cluster.
4. Click Save.

Load balancer for kubeapi server while creating the Kubernetes cluster using kubeadm

I am trying to create a Kubernetes cluster with 1 master and 2 worker nodes using the kubeadm tool on my on-premise machines. I am following the official Kubernetes documentation for forming the cluster from the following URL:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
After installing the container runtime and completing the "Before you begin" prerequisite steps, I found that the first step of forming the cluster given in the document is "Create load balancer for kube-apiserver".
My doubts
When I created a single-master, 3-worker-node cluster using the kubespray tool, I did not create any separate load balancer. So here, when following the kubeadm tool, do I actually need to create a load balancer to form the cluster?
Why do the two tools show a different approach, given that I did not create a load balancer when using kubespray? Now I am trying to create the cluster with the kubeadm tool.
Whether you create a load balancer during a kubeadm-based Kubernetes deployment depends on your setup. It is not mandatory to set up a load balancer; your cluster will still work, but without load balancing it is hard to qualify the cluster as HA.
In a single-master setup such as yours, the master node runs the etcd database, API server, controller manager, and scheduler, and manages the worker nodes. However, if that single master node fails, all the worker nodes fail as well and the entire cluster is lost.
Learn more here: kubernetes-ha-kubeadm.
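For reference, when you do want an HA control plane, the load-balanced endpoint is what you hand to kubeadm when bootstrapping the first control-plane node. A minimal sketch, where the DNS name and port are placeholders:

    # LOAD_BALANCER_DNS:6443 is a placeholder for your load balancer's address.
    sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:6443" --upload-certs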
Kubeadm covers the needs of life-cycle management for Kubernetes clusters, including self-hosted layouts, dynamic discovery services, and so on. Kubespray is more about generic configuration, initial clustering, and bootstrapping.
Kubespray is a good choice when you are either familiar with Ansible or want the option to switch between multiple platforms. If your priority is tight integration with the unique features offered by the supported clouds, and you plan to stick with your provider, kops may be a better option.
Deploying a load balancer is up to the user and is not covered by the Ansible roles in Kubespray. By default, it only configures a non-HA endpoint, which points to the access_ip or IP address of the first server node in the kube-master group. It can also configure clients to use endpoints for a given load balancer type, as sketched below. You can find more information here: kubespray-lb.
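To make that concrete, a sketch of what a fixed API-server load balancer looks like in a Kubespray inventory; the variable names follow the Kubespray HA documentation, while the file path, address, and port below are placeholder assumptions:

    # inventory/mycluster/group_vars/all/all.yml -- placeholders throughout
    apiserver_loadbalancer_domain_name: "apiserver-lb.example.com"
    loadbalancer_apiserver:
      address: 10.0.0.10
      port: 6443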
Here is a comparison of Kubernetes deployment tools: Kubernetes Deployment Tools.

Is it possible/advisable to turn off the NodeRestriction plugin on EKS?

I am trying to set up a job scheduler (Airflow) on an EKS cluster to replace a scheduler (Jenkins) we're running directly on an EC2 instance. This job scheduler should be able to deploy pods to the EKS cluster it's running on.
However, whenever I try to deploy the pod (with a pod manifest), I get the following error message:
Error from server (Forbidden): error when creating "deployment.yaml": pods "simple-pod" is forbidden: pod does not have "kubernetes.io/config.mirror" annotation, node "ip-xx.ec2.internal" can only create mirror pods
I believe the restriction has to do with the NodeRestriction plugin on the kube-apiserver running on the EKS Control Plane.
I have looked through the documentation to see if I can turn this plugin off; however, it does not appear to be possible through kubectl, only by modifying the kube-apiserver configuration on the control plane itself.
Is it possible to turn off this plugin? Or is it possible to label a node or pod to mark that it is not subject to this plugin? More broadly, is running a job scheduler on EKS that assigns jobs on the same cluster a bad design choice?
If we wanted to containerize and deploy our job scheduler, would we need to instantiate a separate EKS cluster or other service to run it on?
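For context, the usual pattern for an in-cluster scheduler is to give it its own ServiceAccount with RBAC permission to create pods, instead of relying on the node's kubelet credentials (which is exactly what NodeRestriction limits to mirror pods). A minimal sketch; the namespace and all names are illustrative assumptions:

    # All names and the namespace below are illustrative assumptions.
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: airflow-scheduler
      namespace: airflow
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-launcher
      namespace: airflow
    rules:
    - apiGroups: [""]
      resources: ["pods", "pods/log"]
      verbs: ["create", "get", "list", "watch", "delete"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: airflow-pod-launcher
      namespace: airflow
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: pod-launcher
    subjects:
    - kind: ServiceAccount
      name: airflow-scheduler
      namespace: airflow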

Kubernetes Helm chart initiation with Kubernetes cluster

I am implementing continuous integration and continuous deployment using Ansible, Docker, Jenkins, and Kubernetes. I have already created one Kubernetes cluster with 1 master and 2 worker nodes using Ansible and a kubespray deployment, and I have 30-40 microservice applications, so I need to create that many services and deployments.
My Confusion
When I am using the Kubernetes package manager Helm, do I need to initiate my chart on the master node or on the base machine from which I deployed my Kubernetes cluster?
If I am initiating it inside the master, can I then use kubectl over SSH to deploy to the remote worker nodes?
If I am initiating it outside the Kubernetes cluster nodes, can I use the kubectl command to deploy to the Kubernetes cluster?
Your confusion seems to lie in the configuration and interactions of the Helm components. This explanation provides good graphics to represent the relationships.
If you are using the traditional Helm/Tiller configuration, Helm will be installed locally on your machine and, assuming you have the correct kubectl configuration, you can "initialize" your cluster by running helm init to install Tiller into your cluster. Tiller will run as a deployment in kube-system, and has the RBAC privileges to create/modify/delete/view the chart resources. Helm will automatically manage all the API objects for you, and the kube-scheduler will schedule the pods to all your nodes accordingly. You should not be directly interacting with your master and nodes via your console.
In either configuration, you would always be making the Helm deployment from your local machine, with kubectl access to your cluster.
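Concretely, a typical Helm 2/Tiller bootstrap run from the local machine looks roughly like this; the tiller service-account name and the example chart are assumptions, and the RBAC granted here is deliberately broad and should be tightened for real use:

    # Run from the machine that already has working kubectl access to the cluster.
    kubectl -n kube-system create serviceaccount tiller
    kubectl create clusterrolebinding tiller \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:tiller
    helm init --service-account tiller     # installs Tiller as a deployment in kube-system
    helm install stable/nginx-ingress      # example chart; helm runs locally, Tiller deploys into the cluster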
Hope this helps!
If you are looking for a way to run the Helm client inside your Kubernetes cluster, check out the concept of the Helm-Operator.
I would also recommend looking into the term "GitOps": a set of practices that combines Git with Kubernetes and makes Git the source of truth for your declarative infrastructure and applications.
There are two great OSS projects out there that implement GitOps best practices:
flux (uses the Helm-Operator; see the sketch after this list)
Jenkins X (uses Helm as part of its release pipeline; check out this session on YT to see it in action)
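To illustrate the Helm-Operator approach mentioned above, charts are declared as HelmRelease resources that the operator reconciles inside the cluster. A minimal sketch; the chart name, repository URL, and values are placeholders:

    # A HelmRelease reconciled by the Flux Helm Operator; all specifics are placeholders.
    apiVersion: helm.fluxcd.io/v1
    kind: HelmRelease
    metadata:
      name: my-service
      namespace: default
    spec:
      releaseName: my-service
      chart:
        repository: https://charts.example.com/
        name: my-service
        version: 1.0.0
      values:
        replicaCount: 2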

How to migrate to custom node logging on Kubernetes?

With an existing Kubernetes cluster (e.g. v1.2.2 on GCE) that has set ENABLE_NODE_LOGGING=true and LOGGING_DESTINATION=gcp, what is the recommended way to stop those pods from running on each node and deploy a replacement DaemonSet that uses a custom fluentd configuration and Docker image?
This should take future Kubernetes upgrades into consideration as well.
If you set those configuration parameters when starting your cluster, a manifest file is created on each node that configures fluentd to send container logs to Google Cloud Logging. You can remove those manifest files, and the kubelet will stop the fluentd containers (you should also modify your instance template to change the parameters; otherwise any new nodes created, whether replacing broken nodes or scaling up your node count, will continue to create fluentd containers).
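On each node that amounts to something like the following; the exact manifest path and filename are assumptions and may differ by Kubernetes version and node image:

    # Path and filename are assumptions for GCE nodes of that era.
    sudo rm /etc/kubernetes/manifests/fluentd-gcp.yaml   # kubelet then stops the static fluentd pod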
Alternatively, if you modify the configuration parameters and run upgrade.sh to upgrade your nodes to a newer version of Kubernetes, your nodes will not have the manifest file and you won't be running the fluentd container any longer.
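For the replacement, a minimal DaemonSet sketch using a custom image and a ConfigMap-mounted fluentd configuration might look like the following; the image, names, and mount paths are all assumptions, and the extensions/v1beta1 API group matches the 1.2-era API (current clusters use apps/v1 with a required selector):

    # All names, the image, and mount paths are illustrative assumptions.
    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: fluentd-custom
      namespace: kube-system
    spec:
      template:
        metadata:
          labels:
            app: fluentd-custom
        spec:
          containers:
          - name: fluentd
            image: example.registry/fluentd-custom:1.0
            volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: config
              mountPath: /etc/fluentd   # custom fluentd configuration from the ConfigMap
          volumes:
          - name: varlog
            hostPath:
              path: /var/log
          - name: config
            configMap:
              name: fluentd-custom-config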