Is it possible to apply & maintain CIS Benchmark compliance on managed Kubernetes clusters such as Azure Kubernetes Service?

I have a managed Kubernetes cluster on the Azure public cloud. I made some changes on the nodes to satisfy one of the host compliance checks from the CIS Benchmark guide for Kubernetes. Then I resized a node, and the host compliance check failed again: the change had been reset on that node. How do I keep all the changes on the nodes in place?
I applied the change by SSH-ing into the nodes directly, but compliance failed after the node upgrade.

You can Reconfigure a Node's Kubelet in a Live Cluster, but that is for cluster-level configuration only.
As for the changes on the node itself, I recommend reading Security hardening in AKS virtual machine hosts.
AKS clusters are deployed on host virtual machines, which run a security-optimized OS. This host OS is currently based on an Ubuntu 16.04 LTS image with a set of additional security hardening steps applied (see Security hardening details).
The goal of the security hardened host OS is to reduce the surface area of attack and allow the deployment of containers in a secure fashion.
Important
The security hardened OS is NOT CIS benchmarked. While there are overlaps with CIS benchmarks, the goal is not to be CIS-compliant. The goal for host OS hardening is to converge on a level of security consistent with Microsoft’s own internal host security standards.
If you need to make any persistent host-level changes, then I would advise setting up your own cluster manually using kubeadm. Just get virtual servers, configure them your way, and follow Creating a single control-plane cluster with kubeadm or any other guide that fits your needs.
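As a rough starting point, a minimal kubeadm bootstrap could look like the sketch below (the pod CIDR and the kube-flannel.yml manifest file name are illustrative assumptions, not requirements):
# On a VM you fully control, so host-level hardening survives because you own the upgrade process
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# Set up kubectl access for your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Apply the CNI manifest of your choice, e.g. a local copy of the flannel manifest
kubectl apply -f kube-flannel.yml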

Related

Procedure to upgrade Kubernetes cluster offline

What are the steps for upgrading Kubernetes offline via kubeadm? I have a vanilla Kubernetes cluster running with no access to the internet. When the kubeadm upgrade plan command is executed, it reaches out to the internet for the plan.
The version of Kubernetes used is 22.1.2.
CNI used: flannel.
Cluster size: 3 masters, 5 workers.
Managing an offline Kubernetes cluster is a time-consuming process, because you need to set up your own package repositories and image registries. Once the nodes and registries are set up, you can upgrade the cluster according to your requirements. There are a lot of resources available online that explain how to manage mirrored repositories for each OS distribution.
You can build your own images based on your requirements and push them to the registry; the nodes will later pull these images to create the Pods. You also need to set up your own CA certificates, because container engines require SSL to talk to the registry. Example SSL setup.
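As a rough sketch of the image-mirroring part (registry.local:5000 and the <target-version> placeholders are assumptions; repeat the pull/tag/push step for every image in the list):
# On a machine WITH internet access: list the images kubeadm needs for the target release
kubeadm config images list --kubernetes-version <target-version>
# Pull each listed image, re-tag it for the private registry and push it
docker pull k8s.gcr.io/kube-apiserver:<target-version>
docker tag k8s.gcr.io/kube-apiserver:<target-version> registry.local:5000/kube-apiserver:<target-version>
docker push registry.local:5000/kube-apiserver:<target-version>
# In the air-gapped cluster: point imageRepository in the kubeadm ClusterConfiguration
# (the kubeadm-config ConfigMap in kube-system) at registry.local:5000, then upgrade as usual
kubeadm upgrade apply <target-version>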
For more information, refer to this Kubernetes community discussion forum.

How to simulate node joins and failures with a local Kubernetes cluster?

I'm developing a Kubernetes scheduler and I want to test its performance when nodes join and leave a cluster, as well as how it handles node failures.
What is the best way to test this locally on Windows 10?
Thanks in advance!
Unfortunately, you can't add nodes to Docker Desktop with Kubernetes enabled. Docker Desktop is single-node only.
I can think of two possible solutions, off the top of my head:
You could use any of the cloud providers. The major ones (AWS, GCP, Azure) have some kind of free tier (capped by usage or time). Adding nodes in those environments is trivial.
Create a local VM for each node. This is a less-than-perfect solution - it is very resource intensive - but to make adding nodes easier, you could use kubeadm to provision your cluster.
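For illustration, joins and failures could then be simulated roughly as follows (node names, addresses, token and hash are placeholders):
# On the control-plane VM: print a fresh join command for a new worker VM
kubeadm token create --print-join-command
# On the new worker VM: run the printed command to simulate a node joining
sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# Simulate a node failure: stop the kubelet (or power the VM off) and watch the node go NotReady
sudo systemctl stop kubelet
# Simulate a node leaving the cluster permanently
kubectl drain <node-name> --ignore-daemonsets
kubectl delete node <node-name>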

What combination of firewall rules is suitable for a Kubernetes kubeadm cluster with flannel as the CNI?

I have been trying to find the right firewall rules to apply on a Kubernetes kubeadm cluster with flannel as the CNI.
I opened these ports:
6443/tcp, 2379/tcp, 2380/tcp, 8285/udp, 8472/udp, 10250/tcp, 10251/tcp, 10252/tcp, 10255/tcp, 30000-32767/tcp.
But I always end up with a service that cannot reach other services, or I am unable to reach the dashboard, unless I disable the firewall. I always start from a fresh cluster.
Kubernetes version: 1.15.4.
Is there any source that lists suitable rules to apply on a cluster created by kubeadm, with flannel running inside containers?
As stated in the Kubeadm system requirements:
Full network connectivity between all machines in the cluster (public or private network is fine)
It is a very common practice to put all custom rules on the gateway (ADC) or into cloud security groups, so that you prevent conflicting rules.
Then you have to Ensure iptables tooling does not use the nftables backend.
The nftables backend is not compatible with the current kubeadm packages: it causes duplicated firewall rules and breaks kube-proxy.
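On Debian/Ubuntu this is typically done by switching the alternatives to the legacy iptables binaries, roughly as below (a sketch based on the kubeadm installation notes of that era; adjust the paths for your distribution):
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo update-alternatives --set arptables /usr/sbin/arptables-legacy
sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy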
And ensure the required ports are open between all machines of the cluster.
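As an illustration only, one possible firewalld layout for a 1.15-era control-plane node running flannel (the exact tool and the split between control-plane and worker rules depend on your setup):
sudo firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server client API
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
sudo firewall-cmd --permanent --add-port=10251/tcp       # kube-scheduler
sudo firewall-cmd --permanent --add-port=10252/tcp       # kube-controller-manager
sudo firewall-cmd --permanent --add-port=8472/udp        # flannel VXLAN backend
sudo firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort services (also needed on workers)
sudo firewall-cmd --permanent --add-masquerade           # flannel/kube-proxy rely on masquerading
sudo firewall-cmd --reload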
Other security measures should be deployed through other components, like:
Network Policy (depending on the network provider)
Ingress
RBAC
and others.
Also check the articles about Securing a Cluster and Kubernetes Security - Best Practice Guide.

Can Kubernetes mix physical and virtual servers as masters

I am trying to add additional master nodes to my Kubernetes master, which is a physical server. Can I add two virtual servers in a separate subnet as additional masters for the cluster? The secondary masters will be hosting Kubernetes, Docker, and etcd.
Is there a risk in trying to do this besides latency?
There are no risks other than the usual risks of misconfiguration or potential security holes you could leave behind - nothing in particular related to the scenario itself.
To answer your question: you do it the same way as a traditional multi-master setup; just make sure you have met these requirements:
Full network connectivity between all machines in the cluster (public or private network)
sudo privileges on all machines
SSH access from one device to all nodes in the system
kubeadm and kubelet installed on all machines. kubectl is optional.
Then just follow the standard guides on how to deploy an HA Kubernetes cluster.
I will not describe the whole process, as you did not ask about it and you can find many detailed guides on how to set it up, including the one in the official Kubernetes documentation. If you have problems, feel free to ask more questions, but remember to provide the steps that led to the problem.
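For reference, a rough sketch of the stacked control-plane flow with kubeadm (the load-balancer endpoint, token, hash and certificate key are placeholders):
# On the first (physical) control-plane node, fronted by a load balancer for the API server
sudo kubeadm init --control-plane-endpoint "<load-balancer-dns>:6443" --upload-certs
# On each additional (virtual) control-plane node, run the join command printed by init
sudo kubeadm join <load-balancer-dns>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>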

Kubernetes deployment using shared-disk FC HBA options

I have been looking at the available Kubernetes storage add-ons and have been unable to put together something that would work with our setup. The current situation is several nodes, each with an FC HBA controller connected to a single LUN. I realize that some sort of clustered file system will need to be implemented, but once that is in place I don't see how I would then connect it to Kubernetes.
We've discussed taking what we have and making an iSCSI or NFS host, but in addition to requiring another dedicated machine, we would lose all the advantages of having the storage directly available on each node. Is there any way to make use of our current infrastructure?
Details:
4x Kubernetes nodes (1 master) deployed via kubeadm on Ubuntu 16.04, using flannel as the network add-on; each system has the SAN LUN available as a block device (/dev/sdb).