Setting up Kubernetes on NixOS

On NixOS it is easy to set up Kubernetes with a single line of config:
services.kubernetes.roles = ["master" "node"];
This installs both the master and node components on the local system and therefore creates a nice little working local Kubernetes "cluster".
If I want to set up a "real" cluster I need to install it over multiple hosts, but I'm not sure about the intended way to connect them.
If I install only the master components on one host and only the node components on another host, how do I tell the node where to find its master?
There are quite a few configuration options, but I'm not sure how to use them correctly. Is anyone aware of some example setup?

Have a look at the latter part of Jaka Hudoklin/offlinehacker's NixCon '15 presentation about Kubernetes on NixOS at GateHub. It has an example configuration that configures Docker to use a bridge interface. You can then use Open vSwitch to link the networks together.
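As a rough sketch of how the two hosts might be wired together declaratively (a sketch only: option names such as masterAddress, apiserverAddress and easyCerts are taken from recent nixpkgs modules and vary between releases, and the hostname is hypothetical):

# master.nix -- the control-plane host
{
  services.kubernetes = {
    roles = ["master"];
    masterAddress = "kube-master";  # hypothetical hostname, must be resolvable by nodes
    easyCerts = true;               # auto-generate and distribute cluster certificates
  };
  networking.firewall.allowedTCPPorts = [ 6443 ];  # kube-apiserver
}

# node.nix -- a worker host pointing at the master
{
  services.kubernetes = {
    roles = ["node"];
    masterAddress = "kube-master";
    apiserverAddress = "https://kube-master:6443";
    easyCerts = true;
  };
}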

I'm currently working to automate Kubernetes deployment with NixOS / NixOps. It works quite well with multiple local VirtualBox nodes. Regarding AWS integration I still have to fix a few things; then I will try to integrate with other cloud providers.
You can have a look at this repository: NixOps Kubernetes. Do not hesitate to fork it and help me improve it.
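For reference, deploying a repository like that follows the usual NixOps workflow, roughly (the file and deployment names here are hypothetical):

nixops create -d kube network.nix network-virtualbox.nix
nixops deploy -d kube
# after changing the configuration, redeploy with:
nixops deploy -d kube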

Have you checked the kubeadm tool? You can find its installation guide at https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
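In short, kubeadm bootstraps the master and lets each node join it over the network; a minimal sketch (the pod CIDR and exact flags depend on your network plugin, and the token/hash are printed by init):

# on the master host
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# on each worker host
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>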

Related

Solutions to create a cluster split on 2 VMs

I'm looking for clues about which solution can help me with my problem, which is running a Kubernetes cluster across 2 VMs.
I'm beginning with Kubernetes and all its possibilities, but like everybody I started from a minikube single-node cluster to host my 4 containers, respectively hosting MongoDB, Redis, RabbitMQ and MinIO.
The idea is that I need something like minikube to create a cluster spanning these 2 VMs.
Moreover, these 2 VMs will run on Red Hat EL 7, won't be local, and may be hosted on different machines.
Is it possible to build that architecture with kubeadm?
The answer is yes, it can be achieved with kubeadm. It will allow you to aggregate your different VMs/hosts into the cluster in much the same way Docker Swarm does, by having them join the cluster.
See how below (thanks @Jason Stanley).
Some Prerequisites
If like me you're on a client infrastructure, implementing your solution 'on-premise' on a Red Hat environment, you'll need two or three things.
The proper docker environment
An image registry
An easy way to configure your cluster
Docker environment
The first, since it's a restriction imposed by Docker and Red Hat, is to use docker-ee, the Enterprise Edition, which comes with production support from Red Hat; this is why most people pay for a Red Hat subscription.
See the typical architecture using docker-ee and how to install it on Red Hat.
Local image registry
The other thing is to deploy a Docker registry. This is an official image (documentation here) provided by the Docker team; the registry allows your cluster to pull the images that you push into it while setting everything up. Note that this is relevant in a restricted environment where your VMs/hosts can't access the internet or have to use a proxy.
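Running one can be as simple as the following sketch (registry:2 and port 5000 are the documented defaults; <registry-host> stands for whatever name your nodes can resolve):

docker run -d -p 5000:5000 --restart=always --name registry registry:2
# tag and push an image so the cluster can later pull it
docker tag myapp:latest <registry-host>:5000/myapp:latest
docker push <registry-host>:5000/myapp:latest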
Cluster setting tool
One handy tool is Helm, which lets you set up your cluster easily by declaring which images provide a given service, on which port and with which policy.
Helm documentation
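A minimal sketch of that flow, assuming Helm 3 syntax and hypothetical repo and chart names:

helm repo add mycharts http://charts.example.local
helm install myapp mycharts/myapp \
    --set image.repository=<registry-host>:5000/myapp \
    --set service.port=8080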

Install multi master kubernetes cluster in local

I have tried:
the minikube tool, but it's single-node.
the kubeadm tool, which is multi-node but single-master.
I am looking for a tool that can configure a multi-master Kubernetes cluster locally.
There's no tool to install a multi-master Kubernetes cluster locally as of this writing. A multi-master setup is meant for production environments, and a local setup is usually far from what anyone would describe as production.
You can probably piece together a local installation from this and Kubernetes the Hard Way.
Kubeadm can be used to create a multi-master, highly available setup. Documentation regarding this can be found at https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/.
If you only have access to one physical machine but want to create a multi-master setup, you can manually provision several VMs and create the cluster, or you can automate everything using tools such as Vagrant and Ansible playbooks. Tutorials regarding this are available at https://github.com/justmeandopensource/kubernetes/tree/master/kubeadm-ha-multi-master. You can also have a look at the justmeandopensource channel on YouTube (https://www.youtube.com/user/wenkatn) for detailed tutorials (I used them and they were a great help).
If you have a limited number of physical machines and want to set up multiple masters, you can use LXD containers as lightweight VMs and use those containers as the nodes of the K8s cluster.
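A rough sketch of that approach (the image alias and profile settings are assumptions; security.nesting is needed so Kubernetes can run containers inside the container):

lxc profile create k8s
lxc profile set k8s security.nesting true
lxc launch ubuntu:18.04 kmaster --profile default --profile k8s
lxc launch ubuntu:18.04 kworker1 --profile default --profile k8s
lxc exec kmaster -- bash   # then run kubeadm init inside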
Some resource links: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/
With kubeadm: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
Also, as mentioned by @rico, Kubernetes the Hard Way is the ultimate thing to use: https://github.com/kelseyhightower/kubernetes-the-hard-way
Here is a nice YouTube tutorial using kubeadm: https://www.youtube.com/watch?v=q92MYG-EW-w
You can also follow this open-source GitHub repo guide: https://github.com/hub-kubernetes/kubernetes-multi-master

Best way to configure Kubernetes on local server

Through a stroke of luck I've been given an extremely powerful server in my office - I'd love to somehow set up a replica of our staging Kubernetes environment on it. Our staging Kube environment is 5 nodes running on AWS that each have different configurations. I can't find much in the way of best practice guides (probably because this is a very weird use case) for this configuration.
My gut feel is this:
Install some kind of bare metal OS on the machine
Set up multiple VMs on the machine each configured to mirror a node from staging
Install the Kube master on one of the VMs
Enrol each of the other VMs as nodes under Kubernetes
Run my deployments
Is there any better way for me to configure this or any potential issues I may hit/roadblocks if I follow this approach?
If you want to have everything on one machine, I would also go for the multi-VM option. With Vagrant you could make the process simpler. This could help you:
https://github.com/pires/kubernetes-vagrant-coreos-cluster
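Bringing that repository up is roughly the following (the node-count variable is taken from its README at the time and may have changed, so treat it as an assumption):

git clone https://github.com/pires/kubernetes-vagrant-coreos-cluster
cd kubernetes-vagrant-coreos-cluster
NODES=4 vagrant up   # hypothetical count: one master plus workers mirroring staging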
After setting up the cluster you could adapt it to mimic the state of your staging cluster.
The only issue that comes to mind is that of overlay networking and external access. If you configure NAT networking you would have issues with external access, but probably no issue with the network overlay. On the other hand, I am not 100% certain how the overlay network would behave in a bridged setting.

Running Kubernetes on a single machine bare metal

I'm looking to run Kubernetes in production on a single machine - bare metal - no VM. But can't seem to find a writeup for this scenario. The reason is basically that we have on-premise small installations and we'd prefer to have everything based on Kubernetes rather than having two different environments - one cloud and one on-prem.
Update
With the official Ubuntu guide I managed to get it up and running in conjunction with the following: http://www.dangtrinh.com/2017/09/how-to-deploy-openstack-in-single.html. The LXD version was wrong for what conjure-up was expecting, and IPv6 needs to be turned off for LXD. Now I am the happy owner of an Intel NUC here for testing, which acts as both a master and a node. Thanks for all the assistance in the comments!
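For reference, turning IPv6 off on the default LXD bridge is a one-liner (lxdbr0 is the default bridge name and may differ in your setup):

lxc network set lxdbr0 ipv6.address none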

Deploy Kubernetes on Self-host Production environment

I am trying to install Kubernetes on a self-hosted production environment running Ubuntu 16.04. I am not able to find any helpful guide to set up a production-grade Kubernetes master and connect worker nodes to it.
Any help is much appreciated.
You can use Kubespray to self-host a production environment.
https://github.com/kubernetes-incubator/kubespray
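A typical Kubespray run looks roughly like this (the inventory layout follows its documentation at the time and the paths may have moved; check the repo):

git clone https://github.com/kubernetes-incubator/kubespray
cd kubespray
pip install -r requirements.txt
cp -r inventory/sample inventory/mycluster
# list your master and worker IPs in the inventory file, then:
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml --become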
It depends on what you mean by "self-hosted". Most people take it to mean deploying Kubernetes in their own environment.
If you want to compare different approaches to deploy k8s in a custom environment, refer to this article which covers a bunch of options suitable for that.
If you are interested in how to set up an HA Kubernetes cluster using kubeadm, refer to this article.
However, in Kubernetes there is a different definition of "self-hosted": running Kubernetes itself as a workload in Kubernetes. If you are interested in a real self-hosted approach (on a custom environment), refer to this article.
Hope this helps.
You can use Typhoon, which can be used to provision an HA Kubernetes cluster.
Here is a sample configuration which I used to bring up my own home cluster.
A few advantages of Typhoon are that you can choose the cloud provider used to provision your infrastructure, which is done using Terraform, and the fact that it gives you upstream k8s is a big plus too.
Internally, it uses bootkube to bring up a temporary control plane, which consists of:
api-server
controller-manager
scheduler
Once the temporary control plane is up, the control-plane objects are injected into the API server to form our k8s cluster; a sketch of that flow follows.
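The bootkube flow itself is roughly this sketch (the flags come from bootkube's documentation; the asset directory and address are illustrative):

bootkube render --asset-dir=assets --api-servers=https://kube-master:6443
bootkube start --asset-dir=assets   # temporary control plane, pivots to self-hosted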
Have a look at this KubeCon talk given by CoreOS, which explains how this works.