Kubernetes security in master-slave salt mode

I have a few questions about Kubernetes master-slave salt mode (reposting from https://github.com/kubernetes/kubernetes/issues/21215)
How do you expect anyone who has a large cluster in GCE to upgrade things in place when a new vulnerability is exposed?
How does one do things like regular key rotation, etc., without a master-minion Salt setup in GCE? Doesn't that leave a GCE cluster more vulnerable in the long run?
I am not a security expert, so this is probably a naive question. Since a GCE cluster is already running inside a pretty locked-down network, is the communication between master and nodes a major concern? I understand that in GKE the master is hidden and access is restricted to the GCP project owner, but in GCE the master is visible. So, is this a real concern for GCE-only setups?

How do you expect anyone who has a large cluster in GCE to upgrade things in place when a new vulnerability is exposed?
By upgrading to a new version of k8s. If there is a kernel or Docker vulnerability, we would build a new base image (container-vm), send a PR to enable it in GCE, and then cut a new release referencing the new base image. If there is a k8s vulnerability, we would cut a new version of Kubernetes, and you could upgrade using the upgrade.sh script on GitHub.
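For illustration, a rough sketch of that upgrade path with the GCE scripts (the version string is a placeholder and the exact flags may differ between releases, so check cluster/gce/upgrade.sh --help in your checkout first):

```
# Illustrative only: run from a checkout of kubernetes/kubernetes that
# manages a cluster created with cluster/kube-up.sh on GCE.
cluster/gce/upgrade.sh -M v1.2.4   # upgrade the master to the patched release
cluster/gce/upgrade.sh -N v1.2.4   # then roll the nodes to the same release
```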
How does one do things like regular key rotation, etc., without a master-minion Salt setup in GCE? Doesn't that leave a GCE cluster more vulnerable in the long run?
By updating the keys on the master node, updating the keys in the node instance template, and rolling nodes from the old instance template to the new instance template. We don't want to distribute keys via salt, because then you have to figure out how to secure salt (which requires keys which then also need to be rotated). Instead we "distribute" keys out of band using the GCE metadata server.
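A rough sketch of that flow with gcloud (the instance group, template, and instance names below are placeholders for whatever your cluster uses):

```
# Illustrative only.
# 1. Put the rotated keys on the master (it picks them up out of band, e.g.
#    via the GCE metadata server) and restart the apiserver.

# 2. Create a new node instance template whose metadata carries the new keys,
#    then point the node instance group at it.
gcloud compute instance-groups managed set-instance-template \
    my-cluster-minion-group --template my-cluster-minion-template-v2

# 3. Roll the nodes so they come back up from the new template (and keys).
gcloud compute instance-groups managed recreate-instances \
    my-cluster-minion-group --instances my-cluster-minion-abcd
```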
Since a GCE cluster is already running inside a pretty locked-down network, is the communication between master and nodes a major concern?
For GKE, the master is running outside of the protected network, so it is a concern. GCE follows the same security model (even though it isn't strictly necessary) because it reduces the burden on the folks maintaining both systems if there is less drift in how they are configured.
So, is this a real concern for GCE only setups?
For most folks it probably isn't a concern. But you could imagine a large company running multiple clusters (or other workloads) in the same network so that services maintained by different teams could easily communicate over the internal cloud network. In that case, you would still want to protect the communication between the master and nodes to reduce the impact an attacker (or malicious insider) could have by exploiting a single entry point into the network.

Related

How can I make a Kubernetes multi-cluster Redis solution?

This week I deployed a Redis instance into a GKE (Google Kubernetes Engine) cluster using Bitnami's Helm chart. Although I've been successful with this part, the challenge now is to build a failover disaster-recovery strategy that replicates the data to another Redis instance in another GKE cluster (but the same GCP project). How can I do this? I tried Persistent Volume Claims, but they are only visible inside the cluster.
Redis Enterprise does have a WAN (multi-geo) replication mode; however, I've never used it, and it looks like it dramatically limits which features of Redis you can use to those compatible with a CRDT model. Basically, first you need to answer how you would do this by hand, and then investigate automating that. WAN failover is a very complex thing, and generally you wouldn't even want to do it at the Redis level; instead you would fail over your entire DC (or whatever you want to call your failure zones). Distributed database modelling and management is very tricky; here be dragons.
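If you do want to prototype the "by hand" version before automating anything, a minimal sketch (assuming the Bitnami chart's usual master service name, a password in $REDIS_PASSWORD, and a load balancer reachable from the second cluster; all names and the IP are placeholders):

```
# Illustrative only: expose the primary in cluster A so cluster B can reach it.
kubectl --context cluster-a expose service my-redis-master \
    --name my-redis-master-lb --type LoadBalancer --port 6379

# In cluster B, point a standalone Redis at that address
# (REPLICAOF on Redis 5+, SLAVEOF on older versions).
kubectl --context cluster-b exec -it my-redis-replica-0 -- \
    redis-cli -a "$REDIS_PASSWORD" replicaof 10.0.0.42 6379
```

This only gives you one-way asynchronous replication; promotion and failback are exactly the hard parts the answer above warns about.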

Kubernetes - Single Cluster or Multiple Clusters

I'm migrating a number of applications from AWS ECS to Azure AKS and being the first production deployment for me in Kubernetes I'd like to ensure that it's set up correctly from the off.
The applications being moved all use resources at varying degrees with some being more memory intensive and others being more CPU intensive, and all running at different scales.
After some research, I'm not sure which would be the best approach: running a single large cluster with each application in its own Namespace, or running a separate cluster per application with Federation.
I should note that I'll need to monitor resource usage per application for cost management (amongst other things), and communication is needed between most of the applications.
I'm able to set up both layouts and I'm sure both would work, but I'm not sure of the pros and cons of each approach, whether I should be avoiding one altogether, or whether I should be considering other options?
Because you are at the beginning of your Kubernetes journey, I would go with separate clusters for each stage you have (or at least separate dev and prod). You can very easily take your cluster down (I did it several times through resource starvation). Also, if you don't set up network policies correctly, you might find that services from different stages/namespaces (like test and sandbox) can communicate with each other, or that a pipeline that should deploy to dev changes something in another namespace (see the default-deny sketch after this answer).
Why risk production being affected by dev work?
Even if you don't have to upgrade the control plane yourself, AKS still has its own versions and feature flags, and it is better to test them on a separate cluster before moving to production.
So my initial decision would be to set some hard boundaries: different clusters. Later, once you have more experience with AKS and Kubernetes, you can revisit that decision.
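To make the network-policy point concrete, a minimal default-deny sketch (assuming your CNI plugin actually enforces NetworkPolicy; the namespace name is a placeholder):

```
# Illustrative only: deny all ingress into the "test" namespace by default,
# so services in "sandbox" (or anywhere else) cannot reach it unless a
# later policy explicitly allows it.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: test
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
```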
As you said that communication is needed among the applications, I suggest you go with one cluster. Application isolation can be achieved by deploying each application in a separate namespace. You can collect metrics and set resource quotas at the namespace level, which lets you take action at the application level (see the quota sketch below).
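For instance, a minimal per-namespace quota sketch (the namespace name and limits are placeholders):

```
# Illustrative only: cap what the "app-a" namespace can consume so one
# application cannot starve the others, and give you a per-application
# number to meter for cost reporting.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-a-quota
  namespace: app-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "50"
EOF
```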
A single cluster (with namespaces and RBAC) is easier to set up and manage, and a single k8s cluster does support high load (see the RBAC sketch below).
If you really want multiple clusters, you could try Istio multi-cluster (the Istio service mesh spanning multiple clusters) too.
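A minimal RBAC sketch for keeping a team scoped to its own namespace (the namespace and group names are placeholders):

```
# Illustrative only: give the "team-a" group edit rights inside the
# "app-a" namespace and nothing else.
kubectl create namespace app-a
kubectl create rolebinding team-a-edit \
    --clusterrole=edit \
    --group=team-a \
    --namespace=app-a
```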
It depends... Be aware that AKS still doesn't support multiple node pools (it is on the short-term roadmap), so you'll need to run those workloads on a single pool/VM type. Also, when thinking about multiple clusters, think about multi-tenancy requirements and the blast radius of a single cluster. I typically see users deploying multiple clusters even though there is some management overhead, but good SCM and configuration management practices can help with this overhead.

Will the master know the data on workers/nodes in k8s

I am trying to deploy Kubernetes in the cloud, and there are two options: the masters are entrusted to the cloud provider, or maintained by myself.
So I wonder: if the masters are entrusted to the provider, will they leak the data on the workers?
In short, will the master know the data on the workers/nodes?
The abstractions in Kubernetes are very well defined with clear boundaries. You have to understand the concept of Volumes first. As defined here: "A Kubernetes volume is essentially a directory accessible to all containers running in a pod. In contrast to the container-local filesystem, the data in volumes is preserved across container restarts."
Volumes are attached to the containers in a pod, and there are several types of volumes.
You can see the layers of abstraction in the diagram from the linked source.
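A minimal sketch of a volume in practice (an emptyDir shared by the containers of a single pod; all names are illustrative):

```
# Illustrative only: both containers see the same /cache directory, and the
# data survives container restarts (but not deletion of the pod).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  volumes:
  - name: cache
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "while true; do date >> /cache/log; sleep 5; done"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
EOF
```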
Master to Cluster communication
There are two primary communication paths from the master (apiserver) to the cluster. The first is from the apiserver to the kubelet process which runs on each node in the cluster. The second is from the apiserver to any node, pod, or service through the apiserver’s proxy functionality.
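Both paths show up in everyday kubectl usage, for example (the pod and service names are placeholders):

```
# Illustrative only: both commands go through the apiserver.
# Path 1: apiserver -> kubelet (logs, exec and port-forward are served by the kubelet).
kubectl logs my-pod

# Path 2: the apiserver's proxy functionality, reaching a service inside the cluster.
kubectl get --raw /api/v1/namespaces/default/services/my-service:80/proxy/
```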
Also, you should check the CCM. The cloud controller manager (CCM) concept (not to be confused with the binary) was originally created to allow cloud-specific vendor code and the Kubernetes core to evolve independently of one another. The cloud controller manager runs alongside other master components such as the Kubernetes controller manager, the API server, and the scheduler. It can also be started as a Kubernetes addon, in which case it runs on top of Kubernetes.
Hope this answers all your questions related to Master accessing the data on Workers.
If you are still looking for more secure ways, check 11 Ways (Not) to Get Hacked
Short answer: yes the control plane can access all of your data.
Longer and more realistic answer: probably don't worry about it. It is far more likely that any successful attack against the control plane would be just as successful as if you were running it yourself. The exact internal details of GKE/AKS/EKS are a bit fuzzy, but all three providers have a lot of experience running multi-tenant systems and it wouldn't be negligent to trust that they have enough protections in place against lateral escalations between tenants on the control plane.

VPN access for applications running inside a shared Kubernetes cluster

We are currently providing our software as a software-as-a-service on Amazon EC2 machines. Our software is a microservice-based application with around 20 different services.
For bigger customers we use dedicated installations on a dedicated set of VMs, the number of VMs (and number of instances of our microservices) depending on the customer's requirements. A common requirement of any larger customer is that our software needs access to the customer's datacenter (e.g., for LDAP access). So far, we solved this using Amazon's virtual private gateway feature.
Now we want to move our SaaS deployments to Kubernetes. Of course we could just create a Kubernetes cluster across an individual customer's VMs (e.g., using kops), but that would offer little benefit.
Instead, looking forward, we would like to run a single large Kubernetes cluster on which we deploy the individual customer installations into dedicated namespaces, thereby increasing resource utilization and lowering cost compared to the fixed allocation of machines to customers that we have today.
From the Kubernetes side of things, our software works fine already, we can deploy multiple installations to one cluster just fine. An open topic is however the VPN access. What we would need is a way to allow all pods in a customer's namespace access to the customer's VPN, but not to any other customers' VPNs.
When googling the topic, I found approaches that add a VPN client to the individual container (e.g., https://caveofcode.com/2017/06/how-to-setup-a-vpn-connection-from-inside-a-pod-in-kubernetes/), which is obviously not an option.
Other approaches seem to describe running a VPN server inside K8s (which is also not what we need).
Again others (like the "Strongswan IPSec VPN service", https://www.ibm.com/blogs/bluemix/2017/12/connecting-kubernetes-cluster-premises-resources/ ) use DaemonSets to "configure routing on each of the worker nodes". This also does not seem like a solution that is acceptable to us, since that would allow all pods (irrespective of the namespace they are in) on a worker node access to the respective VPN... and would also not work well if we have dozens of customer installations each requiring its own VPN setup on the cluster.
Is there any approach or solution that provides what we need, i.e., VPN access for the pods in a specific namespace only?
Or are there any other approaches that could still satisfy our requirement (lower cost due to Kubernetes worker nodes being shared between customers)?
For LDAP access, one option might be to set up a kind of LDAP proxy, so that only this proxy would need VPN access to the customer network (by running this proxy on a small dedicated VM for each customer, and then configuring the proxy as the LDAP endpoint for the application). However, LDAP access is only one out of many aspects of connectivity that our application needs depending on the use case.
If your IPsec concentrator supports VTI, it's possible to route the traffic using firewall rules. For example, pfSense supports it: https://www.netgate.com/docs/pfsense/vpn/ipsec/ipsec-routed.html.
Using VTI, you can direct traffic using some kind of policy routing: https://www.netgate.com/docs/pfsense/routing/directing-traffic-with-policy-routing.html
However, I can see two big problems here:
You cannot have two IPsec tunnels with conflicting networks. For example, your kube network is 192.168.0.0/24 and you have two customers: A (172.12.0.0/24) and B (172.12.0.0/12). Unfortunately, this can happen (unless your customers are able to NAT those networks).
Finding the ideal criteria for the rule match (to allow the routing), since your source network is always the same. Marking packets (using iptables mangle, or even from the application, as sketched below) can be an option, but you will still get stuck on the first problem.
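A minimal sketch of the packet-marking idea on the node doing the routing (the interface name, mark value, and CIDRs are placeholders, reusing the customer A example above):

```
# Illustrative only: mark traffic from customer A's pod CIDR and send it out
# through a dedicated routing table bound to that customer's VTI tunnel.
iptables -t mangle -A PREROUTING -s 10.100.1.0/24 -j MARK --set-mark 0x1

# Route marked packets via table 101, which holds the route into the tunnel.
ip rule add fwmark 0x1 table 101
ip route add 172.12.0.0/24 dev vti1 table 101
```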
A similar scenario is found in the WSO2 (API gateway provider) architecture. They solved it using a reverse proxy in each network (sad but true): https://docs.wso2.com/display/APICloud/Expose+your+On-Premises+Backend+Services+to+the+API+Cloud#ExposeyourOn-PremisesBackendServicestotheAPICloud-ExposeyourservicesusingaVPN
UPDATE:
I don't know if you use GKE. If yes, maybe Alias IPs can be an option: https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips. The pod IPs will be routable from the VPC, so you can apply some kind of routing policy based on their CIDR.

Building low power HA cluster for self hosted services/blog

I would like to set up an HA Swarm / Kubernetes cluster based on low-power architecture (ARM).
My main objective is to learn how an HA web cluster works, how it reacts to failures and recovers from them, and how easy it is to scale.
I would like to host a blog on it as well as other services once it is working (git / custom services / home automation / CI server / ...).
Here are my first questions:
Regarding the hardware, which is more appropriate: RPi3, Odroid-C2, or something else? I intend to start with 4-6 nodes. Low power consumption is important to me since it will be running 24/7 at home.
What is the best architecture to follow? I would like to run everything in containers (for scalability and redundancy), and have redundant load balancers, web servers, and databases. Something like this: (architecture diagram).
Would it be possible to have web servers / databases distributed across the whole cluster, and load balancing on 2-3 nodes? Or is it better to separate them physically?
Which technology is the most suited (Swarm / Kubernetes / Ansible to deploy / Flocker for storage)? I have read about this topic a lot lately, but there are a lot of choices.
Thanks for your answers!
EDIT1: infrastructure deployment and management
I have almost all the hardware, and I am now looking for a way to easily manage and deploy the 5 (or more) Pis. I want the procedure to be as scalable as possible.
Is there some way to:
retrieve an image from the network the first time (PXE-boot style)
apply custom settings for each node: network config (IP), SSH access, ...
automatically deploy / update new software on servers
easily add new nodes on the cluster
I can have a dedicated Pi, or my PC, act as the deployment server.
Thanks for your inputs!
Raspberry Pi, ODroid, CHIP, BeagleBoard are all suitable hardware.
Note that flash cards have a limited lifetime if you constantly read/write to them.
Kubernetes is a great option to learn clustering containers.
Docker Swarm is also good.
None of these solutions provide distributed storage, so if you're talking about a PHP type web server and SQL database which are not distributed, then you can't really be redundant even with Kubernetes or Swarm.
To be effectively redundant, you need a master/slave setup for the DB, or better, a clustered database like Elasticsearch or maybe the cluster version of MariaDB for SQL, so you have redundancy provided by the database cluster itself (which is not a replacement for backups, but it is better than a single container).
For real distributed storage, you need to look at technologies like Ceph or GlusterFS. These do not work well with Kubernetes or Swarm because they need to be tied to the hardware. There is a docker/kubernetes Ceph project on Github, but I'd say it is still a bit hacky.
Better to provision this separately, or directly on the host.
As far as load balancing is concerned, you may want to have a couple of nodes with external load balancers for redundancy. If you build a Kubernetes cluster, you don't really choose what else may run on the same node, except by specifying CPU/RAM quotas and limits, or affinity.
If you want to give Raspberry Pi 3 a try with Kubernetes, here is a step-by-step tutorial to set up your Kubernetes cluster with Raspberry Pi 3:
To prevent the read/write issue, you might consider purchasing an additional NAS device and mounting it as a volume for your pods.
Totally agree with MrE about distributed storage for PHP-like workloads. Volume lifespan is per pod and tied to the pod, so you cannot share one volume between pods.
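If you go the NAS route mentioned above, a minimal sketch of surfacing an NFS share to pods through a PersistentVolume and claim (the server IP, export path, and sizes are placeholders):

```
# Illustrative only: an NFS export on the NAS, made available to pods via a
# statically provisioned PersistentVolume and a matching claim.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nas-share
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.1.50
    path: /export/k8s
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nas-share-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 50Gi
EOF
```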