Can Kubernetes mix physical and virtual servers as masters?

I am trying to add additional master nodes to my Kubernetes cluster, whose current master is a physical server. Can I add two virtual servers in a separate subnet as additional masters for the cluster? The secondary masters would host Kubernetes, Docker, and etcd.
Is there any risk in trying to do this besides latency?

There are no risks other than the usual risks of misconfiguration or potential security holes you could leave open, but nothing specific to this scenario itself.
To answer your question: you do it the same way as any traditional multi-master setup; just make sure you have met these requirements:
Full network connectivity between all machines in the cluster (public or private network)
sudo privileges on all machines
SSH access from one device to all nodes in the system
kubeadm and kubelet installed on all machines; kubectl is optional.
Then just follow the standard guides on how to deploy an HA Kubernetes cluster.
I will not describe the whole process, as you did not ask about it and you can find many detailed guides on how to set it up, including the one in the official Kubernetes documentation. If you run into problems, feel free to ask more questions, but remember to include the steps that led to the problem.
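For illustration, here is a rough sketch of the kubeadm-based HA flow; the load-balancer endpoint, token, hash, and certificate key are placeholders you would take from your own environment and from the output of kubeadm itself:

```bash
# On the first (physical) master: point the control plane at a shared
# endpoint, e.g. a load balancer or DNS name sitting in front of all
# masters ("LOAD_BALANCER_DNS:6443" is a placeholder).
sudo kubeadm init \
  --control-plane-endpoint "LOAD_BALANCER_DNS:6443" \
  --upload-certs

# kubeadm prints a join command for additional control-plane nodes.
# Run it on each virtual master; being in a separate subnet is fine as
# long as the full network connectivity requirement above is met.
sudo kubeadm join LOAD_BALANCER_DNS:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane \
  --certificate-key <key>
```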

Related

How to simulate node joins and failures with a local Kubernetes cluster?

I'm developing a Kubernetes scheduler and I want to test its performance when nodes join and leave a cluster, as well as how it handles node failures.
What is the best way to test this locally on Windows 10?
Thanks in advance!
Unfortunately, you can't add nodes to Docker Desktop with Kubernetes enabled. Docker Desktop is single-node only.
I can think of two possible solutions, off the top of my head:
You could use any of the cloud providers. The major ones (AWS, GCP, Azure) have some kind of free tier (capped by usage or time). Adding nodes in those environments is trivial.
Create a local VM for each node. This is a less-than-perfect solution, as it is very resource intensive. To make adding nodes easier, you could use kubeadm to provision your cluster, as sketched below.
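As a rough sketch of what that looks like with Linux VMs and kubeadm (node names, IPs, and token values below are placeholders from your own setup):

```bash
# On the master VM:
sudo kubeadm init

# On a worker VM, simulate a node joining the cluster (the token and
# hash come from the output of "kubeadm init" on the master):
sudo kubeadm join <master-ip>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# Simulate a node leaving gracefully:
kubectl drain worker-1 --ignore-daemonsets
kubectl delete node worker-1

# Simulate a node failure: stop the kubelet on the worker and watch
# the node go NotReady from the master:
sudo systemctl stop kubelet
kubectl get nodes --watch
```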

Is it possible to apply & maintain the CIS Benchmarks Compliance on Managed Kubernetes Clusters such as Azure Kubernetes Service?

I have a managed Kubernetes cluster on the Azure public cloud. I tried to make some changes on the nodes to satisfy one host-compliance check from the CIS Benchmark Guide for Kubernetes. Then I resized a node, and the host compliance check failed again; the change had been reset on that node. How do I maintain these changes on the nodes?
I SSHed into the nodes and made the change there, but compliance failed again after the node upgrade.
You can Reconfigure a Node's Kubelet in a Live Cluster, but that is for cluster-level configuration.
As for the changes on the node itself, I recommend reading Security hardening in AKS virtual machine hosts.
AKS clusters are deployed on host virtual machines, which run a security optimized OS. This host OS is currently based on an Ubuntu 16.04 LTS image with a set of additional security hardening steps applied (see Security hardening details).
The goal of the security hardened host OS is to reduce the surface area of attack and allow the deployment of containers in a secure fashion.
Important
The security hardened OS is NOT CIS benchmarked. While there are overlaps with CIS benchmarks, the goal is not to be CIS-compliant. The goal for host OS hardening is to converge on a level of security consistent with Microsoft’s own internal host security standards.
If you need to make any changes, then I would advise setting up your own cluster manually using kubeadm. Just get virtual servers, configure them your way, and follow Creating a single control-plane cluster with kubeadm or any other guide that fits your needs; a minimal sketch follows.
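As a minimal sketch of that route (the pod-network CIDR is just an example; pick whatever matches the CNI plugin you choose):

```bash
# On the VM that will be the control plane:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Set up kubectl for your user, as kubeadm suggests in its output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a CNI plugin, then join workers with the printed
# "kubeadm join ..." command. Any host-level hardening you apply to
# these VMs stays under your control until you rebuild them.
```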

Will the master know the data on workers/nodes in k8s

I am trying to deploy a Kubernetes cluster in the cloud, and there are two options: the masters are entrusted to the cloud provider, or maintained by myself.
So I wonder: if the masters are managed by the provider, could they leak the data on the workers?
In short, can the master see the data on the workers/nodes?
The abstractions in Kubernetes are very well defined with clear boundaries. You have to understand the concept of Volumes first. As defined here,
A Kubernetes volume is essentially a directory accessible to all containers running in a pod. In contrast to the container-local filesystem, the data in volumes is preserved across container restarts.
Volumes are attached to the containers in a pod, and there are several types of volumes; see the sketch below.
You can see the layers of abstraction in the diagram in the original source.
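As a small, hypothetical example of a volume shared by the containers in a pod (emptyDir is just one of those types; the pod and file names are made up):

```bash
# A pod with an emptyDir volume; data written to /data survives
# container restarts but lives on the node the pod is scheduled to.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}
EOF
```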
Master to Cluster communication
There are two primary communication paths from the master (apiserver) to the cluster. The first is from the apiserver to the kubelet process which runs on each node in the cluster. The second is from the apiserver to any node, pod, or service through the apiserver’s proxy functionality.
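Those paths are also why anyone with full apiserver access, which includes whoever operates the control plane, can reach data inside your workloads. A sketch, reusing the hypothetical pod above and a made-up secret name:

```bash
# Read a file from a pod's volume through the apiserver:
kubectl exec volume-demo -- cat /data/msg

# Secrets are stored in etcd on the masters and are readable through
# the apiserver ("my-secret" and "password" are hypothetical names):
kubectl get secret my-secret -o jsonpath='{.data.password}' | base64 --decode
```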
Also, you should check the CCM - The cloud controller manager (CCM) concept (not to be confused with the binary) was originally created to allow cloud specific vendor code and the Kubernetes core to evolve independent of one another. The cloud controller manager runs alongside other master components such as the Kubernetes controller manager, the API server, and scheduler. It can also be started as a Kubernetes addon, in which case it runs on top of Kubernetes.
Hope this answers all your questions related to Master accessing the data on Workers.
If you are still looking for more secure ways, check 11 Ways (Not) to Get Hacked
Short answer: yes, the control plane can access all of your data.
Longer and more realistic answer: probably don't worry about it. It is far more likely that any successful attack against the control plane would be just as successful as if you were running it yourself. The exact internal details of GKE/AKS/EKS are a bit fuzzy, but all three providers have a lot of experience running multi-tenant systems and it wouldn't be negligent to trust that they have enough protections in place against lateral escalations between tenants on the control plane.

Kubernetes Architecture - Kubernetes Cluster Management and initializing Nodes [closed]

I am trying to change my deployment scenario from Docker to Kubernetes. I have explored the Kubernetes architecture: cluster, nodes, pods, services, ReplicaSets/controllers, kubernetes-cni, kubectl, etc. Now I need to begin deploying into a Kubernetes cluster. While exploring, I found documentation and discussions saying that the single node and master can be created on the same machine, or possibly in VMs. I also found the kubespray and minikube documentation for cluster creation.
Here are my points of confusion about getting hands-on with Kubernetes.
For creating and working with Kubernetes, why is there a variation like single node and master on the same machine, or in VMs? Why is there this deviation in how the cluster is set up?
How can I decide whether to run a single node and master on the same machine, or whether I need to use VMs for different nodes?
How do Minikube and Kubespray provide different methodologies in the Kubernetes architecture, since Kubernetes is the product of one single source, Google?
If I am installing kubeadm, kubernetes-cni and kubelet on my Ubuntu 16.04 machine, can I initialize nodes on the same machine?
How can I clarify these confusions?
The taxonomy of concepts and terms is very complicated, and the documentation is still pretty sparse.
1. For creating and working with Kubernetes, why is there a variation like single node and master on the same machine, or in VMs? Why is there this deviation in how the cluster is set up?
The deviation is to support many distinct use cases: container workload developers working on their laptops who need what amounts to a fake cluster without a lot of operational ceremony; Kubernetes ops folks learning and testing on small but real clusters; and real production workloads for plants of varying size.
For the first case, for container workload development, there is a piece of software called minikube, which is like a distribution of kubernetes that automates creating a single virtual machine- using VirtualBox or other desktop-class virtual machine tooling- that is preconfigured to run a combined kubernetes master and node, sufficient to be able to run real kubernetes workloads, but on a laptop.
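As a quick, illustrative example of that workflow (the exact driver used depends on your minikube version and on which hypervisor you have installed):

```bash
# Start a single-node cluster inside a local VM and use it like any
# other cluster from your laptop:
minikube start
kubectl get nodes
kubectl create deployment hello --image=nginx
kubectl get pods
```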
For the second case, for non production purposes, the master and worker functions can be run on a single machine, or a single master machine can be used with a small number of worker machines.
A production Kubernetes cluster will usually have 3, 5, or 7 master machines (VMs or bare metal). Multiple masters are needed to maintain quorum for etcd, where Kubernetes stores all runtime state, in the case of machine failures. 3 master machines allow for 1 master machine to fail without disrupting the cluster; 5 masters will tolerate 2 master machine failures, and so on.
This number of masters can support a large number of worker machines- dozens to hundreds- running the container workloads. In a production environment, one would not want to run client workloads on master machines.
2. How can I decide whether to run a single node and master on the same machine, or do I need to use VMs for different nodes?
See above- for development, use minikube. For production, plan to use multiple redundant masters if you are running the cluster yourself, or use a cloud provider's managed kubernetes offering.
3. How do Minikube and Kubespray provide different methodologies in the Kubernetes architecture?
Minikube is for development only. Kubespray is one of many tools that provides some automation help when building a production cluster. Kubespray's distinguishing feature is the use of Ansible for machine setup and automation. This may or may not be desirable, depending on your comfort and interest in Ansible and/or its competitors.
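As a rough sketch of what a Kubespray run looks like (file names follow the project's sample inventory and may differ between releases):

```bash
# Kubespray drives the cluster setup through Ansible playbooks:
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt

# Copy the sample inventory and list your own machines in it:
cp -rfp inventory/sample inventory/mycluster
# ...edit inventory/mycluster/hosts.yaml (hosts.ini in older releases)

# Run the playbook against all machines over SSH:
ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml
```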
4. Why are there so many options when Kubernetes is the product of a single source, Google?
Kubernetes certainly originated in Google, but now there are hundreds or more engineers across many companies, including Microsoft, Amazon, RedHat, Oracle, and tons of tiny companies, actively working on it. It is a remarkable project.
5. If I am installing kubeadm, kubernetes-cni and kubelet on my Ubuntu 16.04 machine, can I initialize nodes on the same machine?
Kubeadm is a setup tool, not a production runtime tool, but yes, you can run containers on the same machine as the bits that are needed for a Kubernetes master. In addition to etcd, the kubelet, the apiserver, and the controller manager, you need to run Docker as well; the kubelet talks to Docker to schedule containers. I would advise NOT running anything else on this machine: improper configuration can cause problems with the machine serving as master/worker, and any other work on it would then be lost.
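For completeness, if you do want ordinary workloads scheduled onto that single combined machine in a lab setting, the usual approach is to remove the taint kubeadm puts on the master; the exact taint key depends on your Kubernetes version:

```bash
# Allow regular pods to be scheduled on the master node
# (fine for a lab, not recommended for production):
kubectl taint nodes --all node-role.kubernetes.io/master-
# On newer releases the key is node-role.kubernetes.io/control-plane-
```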

Does Kubernetes have the ability to spin up new nodes?

Does Kubernetes have the ability/need to hook into a cloud provider (AWS, Rackspace) to spin up new nodes? If so, how does it then provision the node - does it run Ansible etc? Or will Kubernetes need to have all the nodes available to it manually?
The short answer is no.
The longer answer is explained in the following blog posting that describes the new kubeadm command:
http://blog.kubernetes.io/2016/09/how-we-made-kubernetes-easy-to-install.html
There are three stages in setting up a Kubernetes cluster, and we decided to focus on the second two (to begin with):
Provisioning: getting some machines
Bootstrapping: installing Kubernetes on them and configuring certificates
Add-ons: installing necessary cluster add-ons like DNS and monitoring services, a pod network, etc
We realized early on that there's enormous variety in the way that users want to provision their machines. They use lots of different cloud providers, private clouds, bare metal, or even Raspberry Pi's, and almost always have their own preferred tools for automating provisioning machines: Terraform or CloudFormation, Chef, Puppet or Ansible, or even PXE booting bare metal. So we made an important decision: kubeadm would not provision machines. Instead, the only assumption it makes is that the user has some computers running Linux.
Update
http://blog.kubernetes.io/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters.html