What is a Kubernetes engine (RKE, GKE, EKS, ...)?

As I understand it, Kubernetes is a set of binaries that can form a new k8s cluster. There is an open-source Kubernetes repository on GitHub, but I still have some confusion:
Who is the core team that maintains (has write permission to) the Kubernetes repo: the Linux Foundation or the CNCF?
I see that there are multiple Kubernetes engines (RKE, EKS, ...). Do they just add some add-ons/plugins/tools, or do they modify the Kubernetes source code to build their own versions of the k8s components (apiserver, kube-proxy, kubelet)?
If I use the RKE binary to set up my cluster and it shows Kubernetes version "v1.17.2", does that mean the version is a release from the Kubernetes repo, or is it from a fork maintained by the Rancher team? The same question applies to GKE, EKS, and so on.

Who is the core team that maintains (has write permission to) the Kubernetes repo: the Linux Foundation or the CNCF?
The Cloud Native Computing Foundation (CNCF) is itself a project hosted by the Linux Foundation, and Kubernetes is one of the projects that has graduated from the CNCF. Read more over here.
I see that there are multiple Kubernetes engines (RKE, EKS, ...). Do they just add some add-ons/plugins/tools, or do they modify the Kubernetes source code to build their own versions of the k8s components (apiserver, kube-proxy, kubelet)?
They are not really "multiple Kubernetes engines"; they are Kubernetes offerings from different vendors. Another such example is GKE (Google Kubernetes Engine) from Google. The main advantage of GKE/EKS over plain Kubernetes is that they are managed products, so the vendor is responsible for cluster management, availability of the master and worker nodes, and so on.
If I use the RKE binary to set up my cluster and it shows Kubernetes version "v1.17.2", does that mean the version is a release from the Kubernetes repo, or is it from a fork maintained by the Rancher team? The same question applies to GKE, EKS, and so on.
At the core you still have Kubernetes, but once you are using managed products like GKE or EKS, it is better not to equate them with upstream "Kubernetes" and to think of them as GKE or EKS instead. They each have their own release cycles, and many other cloud products from the same vendor are integrated with them. Read more over here.
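One quick way to see this for yourself is to ask the API server for its build string; managed offerings usually append a vendor suffix to the upstream version number. A minimal sketch (assumes jq is installed; the exact suffix format varies by vendor and release):

```bash
# Print the API server's reported version string.
kubectl version -o json | jq -r '.serverVersion.gitVersion'
# Illustrative outputs:
#   v1.17.2          <- unmodified upstream release (e.g. an RKE cluster)
#   v1.17.2-gke.10   <- GKE's build of that upstream release
#   v1.17.2-eks-...  <- EKS's build of that upstream release
```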

Related

Procedure to upgrade Kubernetes cluster offline

What are the steps for upgrading Kubernetes offline via kubeadm? I have a vanilla Kubernetes cluster running with no access to the internet. When the kubeadm upgrade plan command is executed, it reaches out to the internet for the plan.
Kubernetes version: 22.1.2
CNI: flannel
Cluster size: 3 masters, 5 workers.
Managing an offline Kubernetes cluster is a time-consuming process, because you need to set up your own package repositories and image registries. Once the nodes and registries are set up, you can upgrade the cluster based on your requirements. There are a lot of resources available online that explain how to manage the repositories for each OS distribution.
You can build your own images based on your requirements and push them to the registry; these images are later used to create the Pods. You also need to set up your own CA certificates, because container engines require SSL. Example SSL setup.
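As a rough sketch of the image-mirroring half of that setup (the target version, registry address, and use of Docker as the client are all assumptions; kubeadm must also be pointed at the mirror via the imageRepository field of its ClusterConfiguration):

```bash
# On a machine WITH internet access: mirror the control-plane images
# for the target release into the private registry.
TARGET=v1.22.1                         # placeholder target version
REGISTRY=registry.example.local:5000   # placeholder private registry
for img in $(kubeadm config images list --kubernetes-version "$TARGET"); do
  docker pull "$img"
  docker tag "$img" "$REGISTRY/${img#*/}"
  docker push "$REGISTRY/${img#*/}"
done

# On the offline control-plane node: apply the pinned version directly,
# which avoids the online lookup that 'kubeadm upgrade plan' performs.
kubeadm upgrade apply "$TARGET"
```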
For more information, refer to this Kubernetes community discussion forum.

GKE - Hybrid Kubernetes cluster

I've been reading the Google Cloud documentation about hybrid GKE clusters with Connect, or fully on-prem clusters with GKE On-Prem and VMware.
However, I see that with GKE Connect you can manage an on-prem Kubernetes cluster from the Google Cloud dashboard.
But what I am trying to find is how to maintain a hybrid cluster with GKE, mixing on-prem and cloud nodes. Graphical example: (diagram not shown)
In the above solution, the master node is managed by Google Cloud, but the ideal solution would be multiple master nodes (high availability) in the cloud and worker nodes on-prem. Graphical example: (diagram not shown)
Is it possible to implement either or both of the proposed solutions on Google Cloud with GKE?
If you want to maintain hybrid clusters, mixing on-prem and cloud nodes, you need to use Anthos.
Anthos is a modern application management platform that provides a consistent development and operations experience for cloud and on-premises environments.
The primary computing environment for Anthos uses Anthos clusters, which extend GKE for use on Google Cloud, on-premises, or multicloud to manage Kubernetes installations in the environments where you intend to deploy your applications. These offerings bundle upstream Kubernetes releases and provide management capabilities for creating, scaling, and upgrading conformant Kubernetes clusters. With Kubernetes installed and running, you have access to a common orchestration layer that manages application deployment, configuration, upgrade, and scaling.
If you want to know more about Anthos on GCP, please follow this link.
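For the Connect part of this, registering an existing on-prem cluster so it appears in the Google Cloud console looks roughly like the sketch below (the cluster name, project, kubeconfig context, and key path are all placeholders; newer SDK releases rename the "hub" command group to "fleet"):

```bash
# Register an on-prem cluster with GKE Connect; this deploys the Connect
# agent into the cluster and links it to the given Google Cloud project.
gcloud container hub memberships register my-onprem-cluster \
  --project=my-project \
  --context=my-onprem-context \
  --service-account-key-file=/path/to/connect-sa-key.json
```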

Kubernetes Helm chart initiation with Kubernetes cluster

I am implementing continuous integration and continuous deployment using Ansible, Docker, Jenkins, and Kubernetes. I have already created a Kubernetes cluster with 1 master and 2 worker nodes using Ansible and a kubespray deployment, and I have 30-40 microservice applications, so I need to create that many services and deployments.
My Confusion
When I use the Kubernetes package manager Helm, do I need to initialize my chart on the master node, or on the base machine from which I deployed my Kubernetes cluster?
If I initialize it on the master, can I then use kubectl over SSH to deploy to the remote worker nodes?
If I initialize it outside the Kubernetes cluster nodes, can I use the kubectl command to deploy into the Kubernetes cluster?
Your confusion seems to lie in the configuration and interactions of the Helm components. This explanation provides a good diagram of the relationships.
If you are using the traditional Helm/Tiller configuration, Helm will be installed locally on your machine and, assuming you have the correct kubectl configuration, you can "initialize" your cluster by running helm init to install Tiller into your cluster. Tiller will run as a deployment in kube-system, and has the RBAC privileges to create/modify/delete/view the chart resources. Helm will automatically manage all the API objects for you, and the kube-scheduler will schedule the pods to all your nodes accordingly. You should not be directly interacting with your master and nodes via your console.
In either configuration, you would always be making the Helm deployment from your local machine with a kubectl access to your cluster.
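For the traditional Helm v2 flow described above, the whole interaction from your local machine might look like this minimal sketch (the tiller service account, cluster-admin binding, and example chart are illustrative choices, and kubectl is assumed to already point at your cluster):

```bash
# Give Tiller an identity with enough RBAC privileges (cluster-admin
# here for brevity; scope it down for production).
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

# Install Tiller into the cluster, then deploy a chart; both commands
# run locally and talk to the API server via your kubeconfig.
helm init --service-account tiller
helm install stable/nginx-ingress --name my-ingress
```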
Hope this helps!
If you are looking for a way to run the Helm client inside your Kubernetes cluster, check out the concept of the Helm Operator.
I would also recommend looking into the term "GitOps": a set of practices that combines Git with Kubernetes and makes Git the source of truth for your declarative infrastructure and applications.
There are two great OSS projects out there that implement GitOps best practices (see the sketch after this list):
flux (uses the Helm Operator)
Jenkins X (uses Helm as part of its release pipeline; check out this session on YT to see it in action)
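To make the Helm Operator idea concrete, a HelmRelease resource is what flux's Helm Operator watches; a minimal sketch (the chart repository, name, and values are placeholders, and the schema shown is the helm.fluxcd.io/v1 CRD):

```bash
# Declare a Helm release as a cluster resource; the in-cluster Helm
# Operator reconciles it, so no helm CLI runs on your machine.
kubectl apply -f - <<'EOF'
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: my-app
  namespace: default
spec:
  releaseName: my-app
  chart:
    repository: https://charts.example.com/
    name: my-app
    version: 1.0.0
  values:
    replicaCount: 2
EOF
```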

Is it safe/right to deploy Spinnaker to the same Kubernetes cluster which Spinnaker will manage?

I cannot find any articles answering the question: is it safe/right to deploy Spinnaker to the same Kubernetes cluster that Spinnaker will manage? I mainly mean production, HA deployments.
I think the architectures of Spinnaker and Kubernetes complement each other very well, and running Spinnaker in the same K8s cluster it is managing is definitely safe.
As per your comment on @mdirkse's answer, there is a codelab in the official Spinnaker documentation that explains how to create a set of basic pipelines for deploying code from a GitHub repo to a production Kubernetes cluster in the form of a Docker container.
In this documentation, it specifically states the following:
We will be deploying Spinnaker to the same Kubernetes cluster it will be managing. ...
Not sure if this is exactly what you are looking for though.
I'm not sure about "right", but I'd definitely say that it is safe to run Spinnaker on the same Kubernetes cluster that it manages, if you set it up correctly. Kubernetes (and Docker) gives you all the tools you need to properly separate Spinnaker from the other things running on the cluster (namespaces, quotas, node affinities, etc.). Indeed, the whole point of Kubernetes is to be able to easily run software in an HA/fault-tolerant way, and since Spinnaker consists of a collection of stateless microservices, it really plays to the strengths of k8s.
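As an illustration of the separation primitives mentioned above, the sketch below gives Spinnaker its own namespace and a resource quota so it cannot starve the workloads it manages (all names and limits are illustrative):

```bash
kubectl create namespace spinnaker
# Cap the total resources Spinnaker's microservices may request.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: spinnaker-quota
  namespace: spinnaker
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
EOF
```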

Deploy Kubernetes on a Self-hosted Production Environment

I am trying to install Kubernetes on a self-hosted production environment running Ubuntu 16.04. I am not able to find any helpful guide to set up a production-grade Kubernetes master and connect worker nodes to it.
Any help is much appreciated.
You can use Kubespray to self-host a production environment:
https://github.com/kubernetes-incubator/kubespray
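A minimal sketch of a Kubespray run, following the project's documented sample-inventory flow (the node IPs and cluster name are placeholders):

```bash
git clone https://github.com/kubernetes-incubator/kubespray.git
cd kubespray
pip install -r requirements.txt        # Ansible and friends

# Build an inventory from the sample and your node IPs.
cp -r inventory/sample inventory/mycluster
declare -a IPS=(10.0.0.1 10.0.0.2 10.0.0.3)
CONFIG_FILE=inventory/mycluster/hosts.yaml \
  python3 contrib/inventory_builder/inventory.py "${IPS[@]}"

# Provision the cluster over SSH.
ansible-playbook -i inventory/mycluster/hosts.yaml \
  --become --become-user=root cluster.yml
```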
It depends on what you mean by "self-host". Most people take it to mean deploying Kubernetes in their own environment.
If you want to compare different approaches to deploying k8s in a custom environment, refer to this article, which covers a bunch of options suitable for that.
If you are interested in how to set up an HA Kubernetes cluster using kubeadm, refer to this article.
However, in Kubernetes there is a different definition of "self-hosted": it means running Kubernetes itself as a workload on Kubernetes. If you are interested in a real self-hosted approach (in a custom environment), refer to this article.
Hope this helps
You can use Typhoon to provision an HA Kubernetes cluster.
Here is a sample configuration which I used to bring up my own home cluster.
A few advantages of Typhoon are that you can choose the cloud provider used to provision your infrastructure (which is done with Terraform), and the fact that it gives you upstream k8s is a big plus too.
Internally, it uses bootkube to bring up a temporary control plane consisting of:
api-server
controller-manager
scheduler
Once the temporary control plane is running, the permanent control-plane objects are injected through the API server, giving us our k8s cluster.
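At the command level, the bootkube flow sketched above looks roughly like this (the asset directory, API-server, and etcd addresses are placeholders, and the flags are assumptions based on bootkube's render/start subcommands):

```bash
# Render the control-plane manifests and TLS assets.
bootkube render --asset-dir=assets \
  --api-servers=https://10.0.0.1:6443 \
  --etcd-servers=https://10.0.0.1:2379

# Boot the temporary control plane; it injects the self-hosted
# control-plane objects via the API server and exits once the
# permanent control plane is healthy.
bootkube start --asset-dir=assets
```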
Have a look at this KubeCon talk by CoreOS, which explains how this works.