I'm trying to figure out the best approach to verifying the network policy configuration for a given cluster.
According to the documentation
Network policies are implemented by the network plugin, so you must be
using a networking solution which supports NetworkPolicy - simply
creating the resource without a controller to implement it will have
no effect.
Assuming I have access to my cluster only through kubectl, what should I do to ensure that a NetworkPolicy resource deployed into the cluster will be honored?
I'm aware of the available CNIs and the related capability matrix.
I know you could check for the pods deployed under kube-system that are related to those CNIs and verify their capabilities using, for example, the matrix that I shared, but I wonder whether there's a more structured approach to verify the currently installed CNI and its capabilities.
Regarding the "controller to implement it", is there a way to get the list of add-ons/controllers related to network policy?
What is the best approach to verify the network policy configuration for
a given cluster?
If you have access to the pods, you can run tests to check whether your NetworkPolicies are effective. There are two ways to do this:
Reading your NetworkPolicy using kubectl (kubectl get networkpolicies).
Testing your endpoints to check whether the NetworkPolicies are actually enforced (see the sketch below).
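A minimal sketch of the second approach (the service and namespace names are hypothetical, and the throwaway busybox pod is removed on exit):

    # List the policies actually stored in the cluster
    kubectl get networkpolicies --all-namespaces

    # From a throwaway client pod, try to reach an endpoint the policy should block or allow
    kubectl run np-test --rm -it --image=busybox --restart=Never -- \
      wget -qO- -T 2 http://my-service.my-namespace.svc.cluster.local

If the policy is enforced, blocked traffic times out; if the CNI ignores NetworkPolicy, the request succeeds anyway.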
I wonder if there's a more structured approach to verify the currently
installed CNI and its capabilities.
There is no structured way to check your CNI. You need to understand how your CNI works to be able to identify it in your cluster. For Calico, for example, you can identify it by checking whether the Calico pods are running (kubectl get pods --all-namespaces --selector=k8s-app=calico-node).
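A rough way to narrow it down is to look at what runs in kube-system and at the CNI configuration on the nodes (the names and paths below are common defaults and can differ per installation):

    # Look for well-known CNI pods and daemonsets in kube-system
    kubectl get pods -n kube-system -o wide | grep -Ei 'calico|cilium|weave|flannel|canal'
    kubectl get daemonsets -n kube-system

    # With node access, the active CNI configuration usually lives here
    ls /etc/cni/net.d/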
Regarding the "controller to implement it", is there a way to get the
list of add-ons/controllers related to network policy?
"controller to implement it" is a reference to the CNI you are using.
There is a tool called Kubernetes Network Policies Viewer that allows you to see your NetworkPolicies graphically. This is not directly connected to your question, but it might help you visualize your NetworkPolicies and understand what they are doing.
Related
My scenario is like the image below:
After a couple of days trying to find a way to block connections among pods based on a rule, I found Network Policy. But it's not working for me, neither on Google Cloud Platform nor on local Kubernetes!
My scenario is quite simple: I need a way to block connections among pods based on a rule (e.g. namespace, workload label and so on). At first glance I thought Network Policy would work for me, but I don't know why it's not working on Google Cloud, even when I create a cluster from scratch with the "Network policy" option enabled.
Network policy will allow you to do exactly what you described in the picture. You can allow or block traffic based on labels or namespaces.
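As an illustration, a minimal sketch of such a policy (namespace and label names are made up): it blocks all ingress to pods labelled app=db except from pods labelled app=api in the same namespace.

    kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-api-to-db
      namespace: prod
    spec:
      podSelector:
        matchLabels:
          app: db
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: api
    EOF

Pods that are not selected by any policy stay non-isolated, so the deny effect only applies to the pods matched by podSelector.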
It's difficult to help you when you don't explain what exactly you did and what is not working. Update your question with the actual NetworkPolicy YAML you created, and ideally also include the output of kubectl get pod --show-labels from the namespace with the pods.
What you mean by "local Kubernetes" is also unclear, but it depends largely on the network CNI you're using, as it must support network policies. For example, Calico and Cilium support them. Minikube in its default setting doesn't, so you should follow e.g. this guide: https://medium.com/@atsvetkov906090/enable-network-policy-on-minikube-f7e250f09a14
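As a rough sketch of both options (the cluster name is hypothetical, and the --cni flag needs a reasonably recent minikube):

    # Minikube: recreate the cluster with a CNI that enforces NetworkPolicy
    minikube start --cni=calico

    # GKE: create the cluster with network policy enforcement enabled
    gcloud container clusters create my-cluster --enable-network-policy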
You can use an Istio Sidecar to solve this: https://istio.io/latest/docs/reference/config/networking/sidecar/
Another Istio solution is to use an AuthorizationPolicy: https://istio.io/latest/docs/reference/config/security/authorization-policy/
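For instance, a minimal sketch of an AuthorizationPolicy (namespace and label names are made up); with action ALLOW plus a selector, any request to the selected pods that matches no rule is denied:

    kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: db-allow-prod-only
      namespace: prod
    spec:
      selector:
        matchLabels:
          app: db
      action: ALLOW
      rules:
        - from:
            - source:
                namespaces: ["prod"]
    EOF

Note that matching on the source namespace relies on mTLS being enabled between the workloads.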
Just to add an update, because I was involved in the problem described in this post.
The problem was with pods that had the Istio sidecar injected; in this case, all pods in the namespace, because it was labeled with istio-injection=enabled.
The NetworkPolicy rule was not taking effect when the selection was made by a label selector, on egress or ingress, and the pods involved were already running before the NetworkPolicy was created. After killing and restarting those pods, the ones with matching labels had access as expected. I don't know if there is a way to make the sidecar inside the pods refresh without having to restart them.
Pods started after the NetworkPolicy was created did not show the problem described in this post.
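If you hit the same situation, restarting the owning workload is usually enough for the running pods to pick up the policy; a minimal sketch, assuming the pods belong to a Deployment (names are hypothetical):

    # Recreate the pods so the injected sidecars start under the new NetworkPolicy
    kubectl rollout restart deployment my-deployment -n my-namespace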
I am setting up a hybrid cluster (a CentOS master and 2 Windows Server 2019 worker nodes) with containerd as the runtime. I cannot use CNIs like Calico and Weave, as they need Docker as the runtime. I can use Flannel, but it does not support network policies well. Is there a way to prevent inter-namespace communication of pods in Kubernetes WITHOUT using network policy?
Is there a way to prevent inter-namespace communication of pods in Kubernetes WITHOUT using network policy?
Network policies were created for that exact purpose, and as per the documentation you need a CNI that supports them. Otherwise they will be ignored.
Network policies are implemented by the network plugin.
To use network policies, you must be using a networking solution which
supports NetworkPolicy. Creating a NetworkPolicy resource without a
controller that implements it will have no effect.
If your only option is to use Flannel for networking, you can install Calico network policy to secure cluster communications. So basically you are installing Calico for policy and Flannel for networking, a combination commonly known as Canal. You can find more details in the Calico docs.
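The rough flow looks like this; the actual manifest (and any containerd-specific adjustments) comes from the Calico docs for the version you choose, so treat the file name below as a placeholder:

    # Apply the Canal manifest downloaded from the Calico docs
    kubectl apply -f canal.yaml

    # Check that the policy components come up alongside Flannel
    kubectl get pods -n kube-system | grep -Ei 'canal|calico|flannel'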
Here's also a good answer on how to set up Calico with containerd that you might find useful for your case.
As Flannel only provides basic networking and has no support for NetworkPolicy (L3/L4 filtering), you can implement security at the service level (any form of authorization, like user/password, certificates, SAML, OAuth, etc.).
But without NetworkPolicy you lose firewall-like security, which may not be what you want.
I would like to implement my own iptables rules before Kubernetes (kube-proxy) starts doing its magic and dynamically creating rules based on the services/pods running on the node. kube-proxy is running with --proxy-mode=iptables.
Whenever I try to load rules when booting up the node, for example into the INPUT chain, the Kubernetes rules (KUBE-EXTERNAL-SERVICES and KUBE-FIREWALL) end up inserted at the top of the chain, even though my rules were also added with the -I flag.
What am I missing or doing wrong?
In case it is somehow related, I am using the weave-net plugin for the pod network.
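For reference, this is roughly how the resulting rule order can be inspected (plain iptables, nothing Kubernetes-specific):

    # Show the INPUT chain with rule positions to see what ended up on top
    iptables -L INPUT --line-numbers -n

    # Dump the chain in rule syntax to compare against the rules loaded at boot
    iptables -S INPUT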
The most common practice is to put all custom firewall rules on the gateway (ADC) or into cloud security groups. The rest of the cluster security is implemented by other features, like Network Policy (this depends on the network provider), Ingress, RBAC and others.
Check out the articles about Securing a Cluster and Kubernetes Security - Best Practice Guide.
These articles can also be helpful to secure your cluster:
Hardening your cluster's security
The Ultimate Guide to Kubernetes Security
When using Istio with Kubernetes, is an overlay network still required for each node?
I have read the FAQ's and the documentation, but cannot see anything that directly references this.
Istio is built on top of Kubernetes. It injects sidecar containers into Kubernetes Pods for routing requests, gathering metrics and so on. But it still requires a way for Pods to communicate with each other. Therefore, a Kubernetes overlay network is still required.
For additional information, you can start from the following link.
We need to know about pod network isolation.
Is it possible to access one pod from another one in the cluster? Could this be controlled by dividing them into namespaces?
We also need the pods to be members of local networks that are not accessible from outside.
Are there any plans for this? Will it be available soon?
In a standard Kubernetes installation, all pods (even across namespaces) share a flat IP space and can all communicate with each other.
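You can see this flat connectivity for yourself; a minimal sketch with made-up pod names, namespaces and port:

    # Get the IP of a pod in one namespace
    kubectl get pod web -n team-a -o wide

    # Reach that IP directly from a pod in a different namespace
    kubectl exec -n team-b client -- wget -qO- -T 2 http://<pod-ip-from-above>:8080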
To get isolation, you'll need to customize your install to prevent cross-namespace communication. One way to do this is to use OpenContrail. They recently wrote a blog post describing an example deployment using the Guestbook from the Kubernetes repository.