Procedure to upgrade Kubernetes cluster offline

What are the steps for upgrading Kubernetes offline via kubeadm? I have a vanilla Kubernetes cluster running with no access to the internet. In order to upgrade Kubernetes, when the
kubeadm upgrade plan command is executed, it reaches out to the internet for the plan.
The version of Kubernetes used is 22.1.2.
CNI used: flannel.
Cluster size: 3 masters, 5 workers.

Managing an offline Kubernetes cluster is a time-consuming process, because you need to set up your own package repositories and image registries. Once you are done setting up the nodes and registries, you can upgrade the cluster based on your requirements. There are a lot of resources available online that explain how to manage repositories for each OS distribution.
You can build your own images based on your requirements and push them to the registry; these images are later used to create the Pods. You also need to set up your own CA certificates, because container engines require SSL. Example SSL setup.
For more information, refer to this K8s community discussion forum.
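As a rough sketch, an offline kubeadm upgrade typically looks something like the following (the registry host registry.example.local and the target version are placeholders, and the package side of the upgrade from your offline mirror is not shown in detail):

    # On a machine WITH internet access: list and pull the control-plane images
    kubeadm config images list --kubernetes-version v1.22.2
    kubeadm config images pull --kubernetes-version v1.22.2

    # Save the images, transfer the tarballs, and push them to the private registry
    docker save k8s.gcr.io/kube-apiserver:v1.22.2 -o kube-apiserver.tar
    # ...repeat for the other images, copy the tarballs into the offline network...
    docker load -i kube-apiserver.tar
    docker tag k8s.gcr.io/kube-apiserver:v1.22.2 registry.example.local/kube-apiserver:v1.22.2
    docker push registry.example.local/kube-apiserver:v1.22.2

    # On the first control-plane node: set imageRepository to the private registry in the
    # kubeadm-config ConfigMap, install the new kubeadm/kubelet packages from the local
    # mirror, then run the usual upgrade commands
    kubeadm upgrade plan
    kubeadm upgrade apply v1.22.2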

Related

Highly available Kubernetes cluster? bootkube or kubeadm self-hosting

I am already running a single-master Kubernetes cluster and I am doing research about setting up highly available Kubernetes clusters. I was thinking of a multi-master cluster setup, then realized a self-hosted cluster might be a better option to be future-ready.
An additional challenge is that I am doing it on bare metal (meaning I am going to use cloud VMs from providers such as Hetzner, Linode, and DigitalOcean, which have a CSI driver, cloud controller manager, etc.).
In this case, I see 2 options.
Setup with bootkube (https://github.com/kubernetes-sigs/bootkube)
Setup with kubeadm self-hosting. (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/self-hosting/)
I assume this is still an early topic, hence I am not able to find guidance on choosing the right approach, or the right documentation. I need this for a scalable production environment where I will start small with at least 8 nodes and can grow quickly.
Is bootkube a good choice for future readiness?
Or, since kubeadm self-hosting is still in the alpha stage, am I taking a risk by running it in a production environment?
Any good documentation, blog, or article to go in this direction?
I use Keepalived + HAProxy and Ansible to deploy an HA Kubernetes cluster. kubeadm now supports a join command for control-plane nodes, so it is easy to integrate with Ansible.
You can also refer to https://github.com/kubernetes-sigs/kubespray.
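For reference, a minimal sketch of the kubeadm flow behind such an HA setup (the load-balancer address is a placeholder; the exact join command, token, and certificate key are printed by kubeadm init):

    # On the first control-plane node, point kubeadm at the load-balanced endpoint
    kubeadm init --control-plane-endpoint "lb.example.local:6443" --upload-certs

    # On each additional control-plane node, run the join command printed above, e.g.
    kubeadm join lb.example.local:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash> \
        --control-plane --certificate-key <key>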

How to deploy Storm, Zookeeper, and Supervisor nodes to GCP?

We're trying to set up a Storm cluster with Google Compute Engine but are having a hard time finding resources. Most tutorials only cover deploying single applications to GCE. We've dockerized the project but don't know how to deploy to GCP. Any suggestions?
You may try configuring an instance template and creating instances with the COS (Container-Optimized OS) image, which already has Docker installed.
Here you can find more information about this.
Another option is Kubernetes Engine (GKE), which has more features that give you more control over your workloads, and it also supports autoscaling, auto upgrades, and node repairs.
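A rough sketch of the instance-template approach (project, image names, machine type, and zone are placeholders; check the current gcloud flags before relying on this):

    # Instance template that runs a dockerized Storm component on Container-Optimized OS
    gcloud compute instance-templates create-with-container storm-supervisor-template \
        --machine-type=e2-standard-4 \
        --container-image=gcr.io/my-project/storm-supervisor:latest

    # Managed instance group created from the template
    gcloud compute instance-groups managed create storm-supervisors \
        --template=storm-supervisor-template --size=3 --zone=us-central1-a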

Install multi master kubernetes cluster in local

I have tried:
the minikube tool, but it's a single node.
the kubeadm tool, which is multi-node but single master.
I am looking for a tool that can configure a multi-master Kubernetes cluster locally.
There's no tool to install a multi-master Kubernetes cluster locally as of this writing. A multi-master setup is generally meant for production environments, and a local setup is far from what someone would describe as a production environment.
You can probably piece together a local installation from this and Kubernetes the Hard Way.
Kubeadm can be used to create a multi-master, highly available setup. Documentation regarding this can be found at https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/.
If you only have access to one physical machine but want to create a multi-master setup, you can manually provision several VMs and create the cluster, or you can automate everything by using tools such as Vagrant and Ansible playbooks. Tutorials regarding this are available at https://github.com/justmeandopensource/kubernetes/tree/master/kubeadm-ha-multi-master. You can also have a look at the justmeandopensource channel on YouTube (https://www.youtube.com/user/wenkatn) for detailed tutorials (I used them and they were of great help).
If you have a limited number of physical machines and you want to run a multi-master setup, you can use LXD containers in place of VMs and use those containers to set up the K8s cluster (see the sketch after the links below).
Some resource links: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/
With kubeadm: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
Also, as mentioned by @rico, Kubernetes the Hard Way is the ultimate thing to use: https://github.com/kelseyhightower/kubernetes-the-hard-way
Here is a nice YouTube tutorial using kubeadm: https://www.youtube.com/watch?v=q92MYG-EW-w
You can also follow this open-source GitHub repo guide: https://github.com/hub-kubernetes/kubernetes-multi-master
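A rough sketch of the LXD route mentioned above (container names and the Ubuntu image are placeholders; running kubeadm inside LXD usually needs additional profile and kernel tweaks that are not shown here):

    # Launch three containers to act as control-plane "nodes"
    for node in kmaster1 kmaster2 kmaster3; do
        lxc launch ubuntu:20.04 "$node"
        # kubelet needs nesting and (typically) privileged mode inside LXD
        lxc config set "$node" security.nesting true
        lxc config set "$node" security.privileged true
    done

    # Then install kubeadm/kubelet inside each container and follow the normal HA flow:
    # kubeadm init --control-plane-endpoint ... on the first, kubeadm join --control-plane on the rest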

Deploy Kubernetes on Self-host Production environment

I am trying to install Kubernetes in a self-hosted production environment running on Ubuntu 16.04. I am not able to find any helpful guide to set up a production-grade Kubernetes master and connect worker nodes to it.
Any help is much appreciated.
You can use Kubespray to self-host a production environment:
https://github.com/kubernetes-incubator/kubespray
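A minimal sketch of the Kubespray flow (the inventory paths follow the repo's sample layout, which may change between releases; adjust hosts and SSH settings for your environment):

    git clone https://github.com/kubernetes-sigs/kubespray.git
    cd kubespray
    pip install -r requirements.txt

    # Copy the sample inventory and fill in your master/worker hosts
    cp -rfp inventory/sample inventory/mycluster
    # create/edit inventory/mycluster/hosts.yaml (or inventory.ini, depending on the release)

    # Run the cluster playbook against your inventory
    ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml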
It depends on what you mean by "self-host". Most people think it's about deploying Kubernetes in their own environment.
If you want to compare different approaches to deploy k8s in a custom environment, refer to this article which covers a bunch of options suitable for that.
If you are interested in how to set up an HA Kubernetes cluster using kubeadm, refer to this article.
However, in Kubernetes there is a different definition of "self-hosted": it means running Kubernetes itself as a workload in Kubernetes. If you are interested in a real self-hosted approach (in a custom environment), refer to this article.
Hope this helps.
You can use Typhoon, which can be used to provision an HA Kubernetes cluster.
Here is a sample configuration which I used to bring up my own home cluster.
A few advantages of Typhoon are that you have the option of choosing the cloud provider for provisioning your infrastructure, which is done using Terraform, and the fact that it gives you upstream k8s is a big plus too.
Internally, it uses bootkube to bring up a temporary control plane, which consists of:
api-server
controller-manager
scheduler
and then, once the temporary control plane is running, the permanent control-plane objects are injected into the API server to bring up our k8s cluster.
Have a look at this KubeCon talk given by CoreOS, which explains how this works.
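If you go the Typhoon route, the day-to-day workflow is plain Terraform; roughly (assuming your Terraform config instantiates the Typhoon module for your chosen provider, with variables that depend on that provider):

    # From the directory containing your Typhoon/Terraform configuration
    terraform init
    terraform plan
    terraform apply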

Does Kubernetes have the ability to spin up new nodes?

Does Kubernetes have the ability/need to hook into a cloud provider (AWS, Rackspace) to spin up new nodes? If so, how does it then provision the node - does it run Ansible etc? Or will Kubernetes need to have all the nodes available to it manually?
The short answer is no.
The longer answer is explained in the following blog posting that describes the new kubeadm command:
http://blog.kubernetes.io/2016/09/how-we-made-kubernetes-easy-to-install.html
There are three stages in setting up a Kubernetes cluster, and we
decided to focus on the second two (to begin with):
Provisioning: getting some machines
Bootstrapping: installing Kubernetes on them and configuring certificates
Add-ons: installing necessary cluster add-ons like DNS and monitoring services, a pod network, etc
We realized early on that there's enormous variety in the way that
users want to provision their machines.
They use lots of different cloud providers, private clouds, bare
metal, or even Raspberry Pi's, and almost always have their own
preferred tools for automating provisioning machines: Terraform or
CloudFormation, Chef, Puppet or Ansible, or even PXE booting bare
metal. So we made an important decision: kubeadm would not provision
machines. Instead, the only assumption it makes is that the user has
some computers running Linux.
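In practice that split looks roughly like the sketch below: provisioning is done with whatever tooling you already use, and kubeadm only handles bootstrapping (the token and CA hash come from the kubeadm init output):

    # 1. Provisioning: create the machines yourself (Terraform, CloudFormation, Ansible, PXE, ...)

    # 2. Bootstrapping: on the first machine
    kubeadm init

    # On each additional machine, using the values printed by kubeadm init
    kubeadm join <control-plane-host>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>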
Update
http://blog.kubernetes.io/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters.html