Kubernetes 1.2:
How do you bootstrap a second master for an HA configuration?
Can you use kube-up?
The HA doc doesn't really get into that:
https://github.com/kubernetes/kubernetes/blob/release-1.2/docs/admin/high-availability.md
Thanks
There aren't any automated scripts (like kube-up.sh) checked into the Kubernetes GitHub repository that will create an HA cluster; you will need to understand the intricacies of building a cluster (many of which are described in the Creating a Custom Cluster from Scratch guide) and either build an HA cluster from scratch or modify a "normal" cluster into an HA configuration.
If you are interested in helping contribute to developing better tools for HA masters, you can join the Kubernetes High Availability special interest group.
kops (https://github.com/kubernetes/kops) is able to provision HA Kubernetes with multiple masters: https://github.com/kubernetes/kops/blob/master/docs/commands.md#other-interesting-modes
AFAIK, more work on HA tooling is planned for Kubernetes 1.5 and 1.6.
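As a rough sketch of the kops flow (the cluster name, zones, and state-store bucket below are placeholders, not from the original answer): listing three --master-zones gives one master per zone, i.e. a three-master control plane.

```sh
# Assumed, pre-created S3 bucket for kops state
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# Three --master-zones => three masters spread across zones
kops create cluster \
  --name=ha.example.com \
  --zones=us-east-1a,us-east-1b,us-east-1c \
  --master-zones=us-east-1a,us-east-1b,us-east-1c \
  --node-count=3

# Actually provision the resources
kops update cluster ha.example.com --yes
```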
Some other platforms based on Kubernetes support multiple masters.
For example, OpenShift Origin supports setting up a cluster with multiple masters behind a load balancer.
However, OpenShift builds some customized solutions into Kubernetes, and currently Kubernetes cannot be decoupled from OpenShift gracefully.
Personally speaking, I also prefer a multi-master configuration for HA.
Related
I am already running a single-master Kubernetes cluster and I am researching how to set up a highly available Kubernetes cluster. I was considering a multi-master setup, then realized a self-hosted cluster might be a better option to stay future-ready.
An additional challenge is that I am doing this on bare metal (meaning I am going to use cloud VMs from providers such as Hetzner, Linode, and DigitalOcean, which offer a CSI driver, cloud controller manager, etc.).
In this case, I see 2 options.
Setup with bootkube (https://github.com/kubernetes-sigs/bootkube)
Setup with kubeadm self-hosting. (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/self-hosting/)
I assume this is still an early-stage topic, hence I am not able to find guidance on choosing the right approach, nor the corresponding documentation. I need this for a scalable production environment where I will start small with at least 8 nodes and may grow quickly.
Is bootkube a reasonable choice for future readiness?
Or, given that kubeadm self-hosting is still in alpha, am I taking a risk running it in a production environment?
Any good documentation, blogs, or articles pointing in this direction?
I use Keepalived + HAProxy and Ansible to deploy an HA Kubernetes cluster. kubeadm now supports a join command for control-plane nodes, so it is easy to integrate with Ansible.
You can also refer to: https://github.com/kubernetes-sigs/kubespray.
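A minimal sketch of that kubeadm flow, assuming the Keepalived/HAProxy virtual IP is 192.168.1.100:6443 (the VIP, token, hash, and certificate key are placeholders):

```sh
# On the first control-plane node: point kubeadm at the load-balanced VIP
kubeadm init \
  --control-plane-endpoint "192.168.1.100:6443" \
  --upload-certs

# On each additional control-plane node, use the join command printed by init:
kubeadm join 192.168.1.100:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane \
  --certificate-key <certificate-key>
```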
We're trying to set up a Storm cluster with Google Compute Engine but are having a hard time finding resources. Most tutorials only cover deploying single applications to GCE. We've dockerized the project but don't know how to deploy to GCP. Any suggestions?
You may try configuring an instance template and creating instances with the Container-Optimized OS (COS) image, which already has Docker installed.
You can find more information about this here.
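For illustration, a hedged sketch of that approach (template/group names, machine type, and zone are placeholders):

```sh
# Instance template based on the Container-Optimized OS image
gcloud compute instance-templates create storm-node-template \
  --machine-type=n1-standard-2 \
  --image-family=cos-stable \
  --image-project=cos-cloud

# Managed instance group of 3 VMs built from that template
gcloud compute instance-groups managed create storm-nodes \
  --base-instance-name=storm-node \
  --template=storm-node-template \
  --size=3 \
  --zone=us-central1-a
```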
Another option is Google Kubernetes Engine (GKE), which has more features that give you more control over your workloads; it also supports autoscaling, auto-upgrades, and node auto-repair.
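A hedged example of creating such a cluster (cluster name, zone, and node counts are assumptions):

```sh
# GKE cluster with autoscaling, auto-upgrade, and auto-repair enabled
gcloud container clusters create storm-cluster \
  --zone=us-central1-a \
  --num-nodes=3 \
  --enable-autoscaling --min-nodes=3 --max-nodes=6 \
  --enable-autoupgrade \
  --enable-autorepair
```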
I am trying to understand the relationship between Kubernetes and OpenStack. I am confused about deploying Kubernetes on OpenStack, and while doing my research I found there are a great many tutorials. My understanding of the sequence is:
Start several nova instances on OpenStack.
Install the Kubernetes master on one instance and Kubernetes nodes on the other instances.
Submit YAML file using kubectl and Kubernetes will create and deploy my application.
As for Kubernetes's self-healing capacity, can Kubernetes restart some of the failed nova instances? Which component in Kubernetes is responsible for restart/reboot/delete/re-provision nova instances? Is it Kubernetes master? If so, what will happen if the Kubernetes master is down and cannot be recovered?
1, 2 and 3 are correct.
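For step 3, a minimal example of the kind of manifest you would submit (names and image are placeholders):

```sh
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.21   # placeholder image
        ports:
        - containerPort: 80
EOF
```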
Self-healing
You can deploy the masters in an HA configuration. The recommendation is to run either 3 or 5 masters, with a quorum of (n + 1) / 2 (i.e. 2 of 3, or 3 of 5).
Can Kubernetes reprovision/restart some of the failed nova instances?
Not really. It is up to Nova to manage all the server instances. Kubernetes has an OpenStack cloud provider module that allows it to interact with OpenStack components, for example to create external load balancers and to create volumes that can be used by your workloads/pods/containers.
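For example, with the OpenStack cloud provider configured, a Service of type LoadBalancer asks OpenStack for an external load balancer (names below are placeholders):

```sh
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer   # fulfilled by the OpenStack load-balancer service
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
EOF
```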
You can either use kubeadm or kubespray to bootstrap a cluster.
Hope it helps.
If you want to deploy Kubernetes on top of OpenStack, I would recommend that you look into OpenStack Magnum. This is the most common use case for OpenStack and Kubernetes.
There is also the possibility of running the OpenStack control plane under Kubernetes, which would allow you to better scale and auto-heal OpenStack services. This is primarily for the control plane (e.g. nova-api), and as far as I know there is no way of running nova-compute under Kubernetes.
I found a good blog post here that describes some of the benefits from such an approach.
Yes, you're spot on with your observations in the case of running Kubernetes on top of OpenStack and the other answers here give you further pointers already. I just wanted to point out, in addition, that the other way round is also an option, that is, running OpenStack on top of Kubernetes, for example using OpenStack-Helm.
I am trying to install Kubernetes in a self-hosted production environment running on Ubuntu 16.04. I am not able to find any helpful guide to set up a production-grade Kubernetes master and connect worker nodes to it.
Any help is much appreciated.
You can use Kubespray to self-host a production environment.
https://github.com/kubernetes-incubator/kubespray
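A rough outline of the Kubespray workflow, assuming an inventory listing your Ubuntu 16.04 masters and workers (paths and file names below may differ between Kubespray versions, so check the repository README):

```sh
git clone https://github.com/kubernetes-incubator/kubespray.git
cd kubespray
pip install -r requirements.txt                  # Ansible and other dependencies
cp -rfp inventory/sample inventory/mycluster     # start from the sample inventory

# Edit inventory/mycluster to list your master and worker nodes, then run:
ansible-playbook -i inventory/mycluster/hosts.ini \
  --become --become-user=root cluster.yml        # hosts.yaml in newer versions
```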
It depends on what you mean by "self-host". Most people take it to mean deploying Kubernetes in their own environment.
If you want to compare different approaches to deploying k8s in a custom environment, refer to this article, which covers a number of options suitable for that.
If you are interested in how to set up an HA Kubernetes cluster using kubeadm, refer to this article.
However, in Kubernetes there is a different definition of "self-hosted": it means running Kubernetes itself as a workload in Kubernetes. If you are interested in a truly self-hosted approach (in a custom environment), refer to this article.
Hope this helps
You can use Typhoon to provision an HA Kubernetes cluster.
Here is a sample configuration which I used to bring up my own home cluster.
A few advantages of Typhoon are that you can choose your preferred cloud provider for provisioning your infrastructure, which is done using Terraform, and the fact that it gives you upstream k8s is a big plus too.
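The day-to-day workflow is the standard Terraform one; a rough sketch (the Typhoon module definition itself lives in your *.tf files and follows the Typhoon docs for your chosen platform):

```sh
terraform init    # fetch the Typhoon module and provider plugins
terraform plan    # review the controllers/workers that will be created
terraform apply   # provision the cluster; bootkube runs as part of bring-up
```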
Internally, it uses bootkube to bring up a temporary control plane, which consists of:
api-server
controller-manager
scheduler
and then, once the temporary control plane is running, the self-hosted control-plane objects are injected into the API server to bring up the final k8s cluster.
Have a look at this KubeCon talk by CoreOS, which explains how this works.
Is it in any way possible to configure a Kubernetes cluster that utilizes resources from multiple IaaS providers at the same time, e.g. a cluster running partially on GCE and AWS? Or a Kubernetes cluster running on your bare metal and an IaaS provider? Maybe in combination with some other tools like Mesos? Are there any other tools like Kubernetes that provide this capability? If it's not possible with Kubernetes, what would one have to do in order to provide that feature?
Any help or suggestions would be very much appreciated.
There is currently no supported way to achieve what you're trying to do. But there is a Kubernetes project under way to address it, which goes under the name of Kubernetes Cluster Federation, alternatively known as "Ubernetes". Further details are available here:
http://www.slideshare.net/quintonh/federation-of-kubernetes-clusters-aka-ubernetes-kubecon-2015-slides-quinton-hoole
http://tinyurl.com/ubernetesv2
http://tinyurl.com/ubernetes-wg-notes