I would like to implement my own iptables rules before Kubernetes (kube-proxy) starts doing its magic and dynamically creating rules based on the services/pods running on the node. kube-proxy is running in --proxy-mode=iptables.
Whenever I try to load rules at node boot, for example into the INPUT chain, the Kubernetes rules (KUBE-EXTERNAL-SERVICES and KUBE-FIREWALL) end up inserted at the top of the chain, even though my rules were also added with the -I flag.
What am I missing or doing wrong?
In case it is related, I am using the weave-net plugin for the pod network.
The most common practice is to put all custom firewall rules on the gateway (ADC) or into cloud security groups. The rest of the cluster security is implemented by other features, like Network Policy (support depends on the network provider), Ingress, RBAC and others.
Check out the articles about Securing a Cluster and Kubernetes Security - Best Practice Guide.
These articles can also be helpful to secure your cluster:
Hardening your cluster's security
The Ultimate Guide to Kubernetes Security
I am setting up a hybrid cluster (a CentOS master and 2 Windows Server 2019 worker nodes) with containerd as the runtime. I cannot use CNIs like Calico and Weave as they need Docker as the runtime. I can use Flannel, but it does not support network policies well. Is there a way to prevent inter-namespace communication of pods in Kubernetes WITHOUT using network policy?
Is there a way to prevent inter-namespace communication of pods in Kubernetes WITHOUT using network policy?
Network policies were created for that exact purpose, and as per the documentation you need a CNI that supports them; otherwise they will be ignored.
Network policies are implemented by the network plugin.
To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.
If your only option is to use Flannel for networking, you can install Calico network policy to secure cluster communications. So basically you are installing Calico for policy and Flannel for networking, a combination commonly known as Canal. You can find more details in the Calico docs.
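Once a policy-capable plugin such as Canal is installed, a minimal sketch of a policy that blocks cross-namespace traffic could look like this (the namespace name team-a is a placeholder):

    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: deny-from-other-namespaces
      namespace: team-a
    spec:
      podSelector: {}          # applies to every pod in the namespace
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector: {}  # an empty podSelector matches pods in the SAME namespace only
    EOF

Pods in team-a can still reach each other, but ingress from pods in any other namespace is denied.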
Here's also a good answer on how to set up Calico with containerd that you might find useful for your case.
As Flannel is an L2-only networking solution and thus has no support for NetworkPolicy (L3/L4), you can alternatively implement security at the service level (any form of authorization such as user/password, certificates, SAML, OAuth, etc.).
But without NetworkPolicy you will lose firewall-like security, which may not be what you want.
I have been trying to find the right firewall rules to apply on a Kubernetes kubeadm cluster with Flannel as the CNI.
I opened these ports:
6443/tcp, 2379/tcp, 2380/tcp, 8285/udp, 8472/udp, 10250/tcp, 10251/tcp, 10252/tcp, 10255/tcp, 30000-32767/tcp.
But I always end up with a service that cannot reach other services, or being unable to reach the dashboard myself, unless I disable the firewall. I always start with a fresh cluster.
Kubernetes version 1.15.4.
Is there any source that lists suitable rules to apply on a cluster created by kubeadm, with Flannel running inside containers?
As stated in the Kubeadm system requirements:
Full network connectivity between all machines in the cluster (public or private network is fine)
It's a very common practice to put all custom rules on the gateway (ADC) or into cloud security groups, to prevent conflicting rules.
Then you have to ensure the iptables tooling does not use the nftables backend.
The nftables backend is not compatible with the current kubeadm packages: it causes duplicated firewall rules and breaks kube-proxy.
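Switching the tooling to legacy mode, a minimal sketch for Debian/Ubuntu hosts (the package layout may differ on other distributions):

    # point the iptables family of tools at the legacy (non-nftables) backend
    update-alternatives --set iptables /usr/sbin/iptables-legacy
    update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
    update-alternatives --set arptables /usr/sbin/arptables-legacy
    update-alternatives --set ebtables /usr/sbin/ebtables-legacy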
And ensure the required ports are open between all machines in the cluster.
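Using the port list from the question, a hedged sketch for a control-plane node could look like this (in a real setup, restrict the source addresses to your node subnet):

    iptables -A INPUT -p tcp --dport 6443 -j ACCEPT          # kube-apiserver
    iptables -A INPUT -p tcp --dport 2379:2380 -j ACCEPT     # etcd client/peer
    iptables -A INPUT -p tcp --dport 10250:10252 -j ACCEPT   # kubelet, kube-scheduler, kube-controller-manager
    iptables -A INPUT -p tcp --dport 10255 -j ACCEPT         # kubelet read-only port, if still used
    iptables -A INPUT -p udp --dport 8285 -j ACCEPT          # flannel (udp backend)
    iptables -A INPUT -p udp --dport 8472 -j ACCEPT          # flannel (vxlan backend)
    iptables -A INPUT -p tcp --dport 30000:32767 -j ACCEPT   # NodePort services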
Other security measures should be deployed through other components, like:
Network Policy (depending on the network provider)
Ingress
RBAC
and others.
Also check the articles about Securing a Cluster and Kubernetes Security - Best Practice Guide.
Is it possible to deploy OpenShift in a DMZ (restricted zone)? What challenges will I face? What do I have to do in the DMZ network?
You can deploy Kubernetes and OpenShift in DMZ.
You can also add DMZ in front of Kubernetes and OpenShift.
The Kubernetes and OpenShift network model is a flat SDN model. All pods get IP addresses from the same network CIDR and live in the same logical network regardless of which node they reside on.
We have ways to control network traffic within the SDN using the NetworkPolicy API. NetworkPolicies in OpenShift represent firewall rules and the NetworkPolicy API allows for a great deal of flexibility when defining these rules.
With NetworkPolicies it is possible to create zones, but one can also be much more granular in the definition of the firewall rules. Separate firewall rules per pod are possible and this concept is also known as microsegmentation (see this post for more details on NetworkPolicy to achieve microsegmentation).
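As an illustration of such per-pod rules, here is a hedged sketch (the app labels and port are placeholders): only frontend pods may reach the orders pods, and only on TCP 8443.

    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: orders-allow-frontend-only
    spec:
      podSelector:
        matchLabels:
          app: orders             # the pods being protected
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend   # the only permitted client pods
          ports:
            - protocol: TCP
              port: 8443
    EOF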
The DMZ is in certain aspects a special zone. This is the only zone exposed to inbound traffic coming from outside the organization. It usually contains software such as IDS (intrusion detection systems), WAFs (Web Application Firewalls), secure reverse proxies, static web content servers, firewalls and load balancers. Some of this software is normally installed as an appliance and may not be easy to containerize and thus would not generally be hosted within OpenShift.
Regardless of the zone, communication internal to a specific zone is generally unrestricted.
Variations on this architecture are common and large enterprises tend to have several dedicated networks. But the principle of purpose-specific networks protected by firewall rules always applies.
In general, traffic is supposed to flow only in one direction between two networks (as in an osmotic membrane), but often exceptions to this rule are necessary to support special use cases.
Useful article: openshift-and-network-security-zones-coexistence-approache.
It's very secure if you follow standard security practices for your cluster. But nothing is 100% secure. So adding a DMZ would help reduce your attack vectors.
In terms of protecting your Ingress from the outside, you can limit access to your external load balancer to HTTPS only, and most people do that, but note that HTTPS and your application itself can also have vulnerabilities.
As for pods and workloads, you can increase security (at some performance cost) using things like a well-crafted seccomp profile and/or adding the right capabilities in your pod security context. You can also add more security with AppArmor or SELinux, but lots of people don't, since it can get very complicated.
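A minimal sketch of that part of the hardening (field availability depends on your Kubernetes version; the image and the added capability are placeholders):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: hardened-example
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault             # apply the container runtime's default seccomp filter
      containers:
        - name: app
          image: nginx:stable
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]                # start from zero capabilities
              add: ["NET_BIND_SERVICE"]    # re-add only what the app actually needs
    EOF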
There are also other alternatives to Docker in order to more easily sandbox your pods (still early in their lifecycle as of this writing): Kata Containers, Nabla Containers and gVisor.
Take a look at: dmz-kubernetes.
Here is a similar problem: dmz.
I'm working on a project where we're attempting to transition a legacy product (deployed as a standalone VM) to Kubernetes infrastructure.
I'm using kube-router as the CNI provider.
To protect the VM against DoS (and log the attempts) we've added different chains in the iptables filter table. (These include rules for ping flood and SYN flood - I think network policies/an ingress controller can manage SYN flood, but I'm not sure how an ICMP flood would be taken care of.)
When I deployed Kubernetes on my VM, I found that Kubernetes updates iptables and creates its own chains. (Mainly k8s updates NAT rules, but chains are added in the filter table as well.)
My questions are:
Is it possible to customize iptables on VM where kubernetes is running?
If I add my own chains (making sure that k8s chains are in place) to iptables configuration, would they be overwritten by k8s?
Can I add chains using plain old iptables commands or need to do so via kubectl? (From k8s documentation, I got an impression that we can only update rules in NAT table using kubectl)
Please let me know if somebody knows more about this, thanks!
~Prasanna
Is it possible to customize iptables on VM where kubernetes is running?
Yes, you can manage your VM's iptables normally, but the rules concerning application inside of Kubernetes should be managed from inside of Kubernetes.
If I add my own chains (making sure that k8s chains are in place) to iptables configuration, would they be overwritten by k8s?
Chains should not be overwritten by Kubernetes, as Kubernetes creates its own chains and manages them.
Can I add chains using plain old iptables commands or need to do so via kubectl? (From k8s documentation, I got an impression that we can only update rules in NAT table using kubectl)
You can use iptables for rules related to the virtual machine itself. To manage host firewall rules you should use iptables, because kubectl can't manage the firewall. For the inbound and outbound rules inside the Kubernetes cluster, use Kubernetes tools (.yaml files where you specify the network policies). Be aware not to create services that might conflict with your iptables rules.
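A hedged sketch of that coexistence (the chain name LEGACY-DOS is a placeholder): keep custom rules in a chain of your own and re-assert the jump's position at boot, since kube-proxy periodically inserts its own jumps at the top of INPUT.

    # create the custom chain if it does not exist yet
    iptables -N LEGACY-DOS 2>/dev/null || true
    # basic SYN-flood throttle: pass a bounded rate back to INPUT, log and drop the rest
    iptables -A LEGACY-DOS -p tcp --syn -m limit --limit 25/s --limit-burst 50 -j RETURN
    iptables -A LEGACY-DOS -p tcp --syn -m limit --limit 1/s -j LOG --log-prefix "SYN flood: "
    iptables -A LEGACY-DOS -p tcp --syn -j DROP
    # jump to the chain from INPUT; -C checks whether the jump already exists
    iptables -C INPUT -j LEGACY-DOS 2>/dev/null || iptables -I INPUT 1 -j LEGACY-DOS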
If you intend to expose application services externally, by either using the NodePort or LoadBalancer service types, traffic forwarding must be enabled in your iptables ruleset. If you find that you are unable to access a service from outside of the network used by the pod where your application is running, check that your iptables ruleset does not contain a rule similar to the following:
:FORWARD DROP [0:0]
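To verify, something like this (a sketch; the policy may also be managed by your distribution's firewall service):

    # show the FORWARD chain's default policy
    iptables -L FORWARD -n | head -n 1
    # if the policy is DROP, forwarded pod traffic will silently fail
    iptables -P FORWARD ACCEPT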
Kubernetes network policies are application-centric, compared to infrastructure/network-centric standard firewalls.
This means that we do not really use CIDR- or IP-based network policies; in Kubernetes they are built on labels and selectors.
Concerning DDoS protection and the details of ICMP flood attacks: the truth is that "classic" mitigation methods - limiting ICMP responses/filtering techniques - will have an impact on legitimate traffic. In the "new era" of DDoS attacks with huge traffic volumes, firewall-based solutions are not enough, as the traffic is usually able to overwhelm them. You could consider vendor-specific solutions or, if you have that kind of possibility, prepare your infrastructure to absorb large amounts of traffic or implement measures like ping size and frequency limitations. Also, overall DDoS protection consists of many levels and solutions. There are techniques like black hole routing, rate limiting, anycast network diffusion, uRPF, and ACLs which can also help with application-level DDoS attacks. There are many more interesting practices I could recommend, but in my opinion it is important to have a playbook and an incident response plan in case of those attacks.
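As an example of the ping size and frequency limitations mentioned above, a hedged iptables sketch (tune the numbers against your legitimate traffic baseline):

    # drop oversized echo requests outright
    iptables -A INPUT -p icmp --icmp-type echo-request -m length --length 1000:65535 -j DROP
    # accept a modest rate of pings and drop the excess
    iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 1/s --limit-burst 5 -j ACCEPT
    iptables -A INPUT -p icmp --icmp-type echo-request -j DROP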
In preparation for HIPAA compliance, we are transitioning our Kubernetes cluster to use secure endpoints across the fleet (between all pods). Since the cluster is composed of about 8-10 services currently using HTTP connections, it would be super useful to have this taken care of by Kubernetes.
The specific attack vector we'd like to address with this is packet sniffing between nodes (physical servers).
This question breaks down into two parts:
Does Kubernetes encrypt the traffic between pods & nodes by default?
If not, is there a way to configure it to do so?
Many thanks!
Actually the correct answer is "it depends". I would split the cluster into 2 separate networks.
Control Plane Network
This network is that of the physical network or the underlay network in other words.
The k8s control-plane elements - kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kubelet - talk to each other in various ways. Except for a few endpoints (e.g. metrics), it is possible to configure encryption on all endpoints.
If you're also pentesting, then kubelet authn/authz should be switched on too; otherwise the encryption doesn't prevent unauthorized access to the kubelet. This endpoint (on port 10250) can be hijacked with ease.
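A hedged sketch using standard kubelet flags (how they are wired in depends on your distribution's service unit; the CA path is a placeholder):

    # reject unauthenticated requests, delegate authorization to the API server,
    # and disable the unauthenticated read-only port entirely
    kubelet \
      --anonymous-auth=false \
      --authorization-mode=Webhook \
      --client-ca-file=/etc/kubernetes/pki/ca.crt \
      --read-only-port=0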
Cluster Network
The cluster network is the one used by the Pods, which is also referred to as the overlay network. Encryption is left to the 3rd-party overlay plugin to implement; failing that, the application has to implement it.
The Weave overlay supports encryption. The service mesh linkerd that #lukas-eichler suggested can also achieve this, but on a different networking layer.
The replies here seem to be outdated. As of 2021-04-28, at least the following components can provide an encrypted networking layer to Kubernetes:
Istio
Weave
linkerd
cilium
Calico (via Wireguard)
(the list above was compiled by consulting the respective projects' home pages)
Does Kubernetes encrypt the traffic between pods & nodes by default?
Kubernetes does not encrypt any traffic.
There are service meshes like linkerd that allow you to easily introduce HTTPS communication between your HTTP services.
You would run an instance of the service mesh on each node and all services would talk to it. The communication inside the service mesh would be encrypted.
Example:
your service --http--> service mesh on localhost --https--> remote node --http--> remote service on localhost
When you run the service mesh node in the same pod as your service, the localhost communication runs on a private virtual network device that no other pod can access.
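With a modern sidecar-based mesh such as linkerd 2.x, the proxy is injected per workload; a hedged sketch (my-app is a placeholder, and flags vary between versions):

    # add the linkerd sidecar proxy to an existing deployment's pod template
    kubectl get deploy/my-app -o yaml | linkerd inject - | kubectl apply -f -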
No, Kubernetes does not encrypt traffic by default.
I haven't personally tried it, but the description of the Calico software-defined network seems oriented toward what you are describing, with the additional benefit of already being Kubernetes-friendly.
I thought that Calico did native encryption, but based on this GitHub issue it seems they recommend using a solution like IPsec to encrypt, just like you would on a traditional host.