Kubernetes pod to pod cluster Network Policy - kubernetes

Using a Kubernetes network policy or Calico, can I restrict these rules to pod-to-pod traffic within the cluster?
I already have network rules for traffic entering and leaving the cluster.
For example, if I apply this Calico rule:
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-b
  namespace: app
spec:
  selector: app == 'a'
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: app == 'b'
    destination:
      ports:
      - 80
In this example I allow traffic from app B to app A.
But this will also disallow all other ingress traffic to A.
Would it be possible to apply this rule only to pod-to-pod traffic?

You should read The NetworkPolicy resource; it provides an example NetworkPolicy with both ingress and egress rules.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
The explanation is as follows:
isolates “role=db” pods in the “default” namespace for both ingress and egress traffic (if they weren’t already isolated)
(Ingress rules) allows connections to all pods in the “default” namespace with the label “role=db” on TCP port 6379 from:
any pod in the “default” namespace with the label “role=frontend”
any pod in a namespace with the label “project=myproject”
IP addresses in the ranges 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255 (ie, all of 172.17.0.0/16 except 172.17.1.0/24)
(Egress rules) allows connections from any pod in the “default” namespace with the label “role=db” to CIDR 10.0.0.0/24 on TCP port 5978
See the Declare Network Policy walkthrough for further examples.
So if you use a podSelector, you will be able to select the pods this NetworkPolicy applies to.
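As a sketch, the Calico rule from the question could be written as a plain Kubernetes NetworkPolicy with a podSelector (the app: a / app: b labels are taken from the question):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-b
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: a
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Only pod selectors (no ipBlock) appear here, so this rule
    # matches pod-to-pod traffic only.
    - podSelector:
        matchLabels:
          app: b
    ports:
    - protocol: TCP
      port: 80
```

Note, however, that once a pod is selected by any policy, all ingress not explicitly allowed by some policy is denied, so traffic from outside the cluster would need its own allow rule.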

Related

Kubernetes Egress call restrict with namespace

I have an application running in K3s and want to implement a network policy based on namespace only.
Let's assume I currently have three namespaces: A, B and C. I want to allow egress (external calls from pods to the internet) for namespace A, and egress calls from the remaining namespaces (B and C) should be blocked/denied. Is this possible with a Kubernetes network policy (and not Calico or Cilium)?
You can define a deny-all egress policy as described in the documentation:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: your-namespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
This policy applies to all pods in the namespace because the pod selector is empty, and that means (quoting the documentation):
An empty podSelector selects all pods in the namespace.
The policy blocks all egress traffic because it has Egress as a policy type but no egress section.
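The same pattern extends to ingress; a sketch of a policy that denies all traffic in both directions (the namespace name is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: your-namespace
spec:
  # Empty selector: applies to every pod in the namespace.
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```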
If you want to allow in-cluster egress, you can add an egress section to the policy, for example:
egress:
- to:
  - namespaceSelector:
      matchLabels:
        networking/namespace: kube-system
    podSelector:
      matchLabels:
        k8s-app: kube-dns
  ports:
  - protocol: TCP
    port: 53
  - protocol: UDP
    port: 53
This allows traffic from the namespace where you create the network policy to pods labeled k8s-app: kube-dns in the kube-system namespace, on port 53 (TCP and UDP).
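If you instead want to allow egress to any pod in the cluster while still blocking external destinations, a broader sketch could use an empty namespaceSelector, which matches every namespace:

```yaml
egress:
- to:
  # Empty namespaceSelector: matches pods in all namespaces,
  # but not IPs outside the cluster.
  - namespaceSelector: {}
```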

Network policy to restrict communication of pods within namespace and port

Namespace 1: arango
Namespace 2: apache - 8080
Criteria to achieve:
The policy should not allow pods which are not listening on port 8080
The policy should not allow pods from any other namespace except "arango"
Does the following ingress achieve this? Or is it mandatory to add egress, since there are rules to deny pods from other namespaces and ports other than 8080?
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: arango
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: apache
    ports:
    - protocol: TCP
      port: 8080
Your current config
Your current configuration allows traffic to pods labeled app: arango in the default namespace on port 8080, from pods labeled app: apache in the default namespace.
It will apply to the default namespace because you didn't specify one. If no namespace is defined in the manifest, Kubernetes uses the default namespace.
Questions
or is it mandatory to add egress, since there are rules to deny pods from other namespaces and ports other than 8080?
It depends on your requirements: whether you want to filter traffic from your pods to the outside, from the outside to your pods, or both. It's well described in the Network Policy Resource documentation.
NetworkPolicy is a namespaced resource, so it only applies in the namespace it was created in. If you want to allow traffic from other namespaces, you should use a namespaceSelector.
The policyTypes field indicates whether or not the given policy applies to ingress traffic to selected pod, egress traffic from selected pods, or both. If no policyTypes are specified on a NetworkPolicy then by default Ingress will always be set and Egress will be set if the NetworkPolicy has any egress rules.
To sum up, ingress traffic flows from outside to your pods, and egress traffic from your pods to the outside.
You want to apply two main rules:
The policy should not allow pods which are not listening on port 8080
If you would like to use this only for ingress traffic, it would look like:
ingress:
- ports:
  - protocol: <protocol>
    port: 8080
The policy should not allow pods from any other namespace except "arango"
Please keep in mind that NetworkPolicy is a namespaced resource, thus it only works in the namespace it was created in. That namespace should be specified in metadata.namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy
  namespace: arango
spec:
  ...
Requested Network Policy
I have tested this on my GKE cluster with Network Policy enabled.
In the example below, incoming traffic to pods labeled app: arango in the arango namespace is allowed only if it comes from pods labeled app: apache deployed in the arango namespace, and targets TCP port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
  namespace: arango
spec:
  podSelector:
    matchLabels:
      app: arango
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: apache
    ports:
    - protocol: TCP
      port: 8080
Useful links:
Guide to Kubernetes Ingress Network Policies
Get started with Kubernetes network policy
If this answer didn't solve your issue, please clarify/provide more details about how it should work and I will edit the answer.

How to create a network policy that matches Kubernetes API

In our EKS Kubernetes cluster we have a general Calico network policy that disallows all traffic, and then add network policies to allow the traffic we need.
One of our pods needs to talk to the Kubernetes API, but I can't seem to match that traffic with anything other than very broad ipBlock selectors. Is there any other way to do it?
This currently works but grants overly broad access:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector:
    matchLabels:
      run: my-test-pod
  policyTypes:
  - Egress
  egress:
  - to: # To access the actual kubernetes API
    - ipBlock:
        cidr: 192.168.0.0/16
    ports:
    - protocol: TCP
      port: 443
In AWS EKS I can't see the control plane pods, but in my RPi cluster I can. In the RPi cluster, the API server pod has the labels "component=kube-apiserver,tier=control-plane", so I also tried a podSelector with those labels, but it doesn't match in either EKS or the RPi cluster:
- to:
  - namespaceSelector:
      matchLabels:
        name: kube-system
  - podSelector:
      matchLabels:
        component: kube-apiserver
Any help would be appreciated.
What if you:
find the API server by running kubectl cluster-info
look for something like
Kubernetes master is running at ... let's say, from the example, https://EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com
translate https://EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com to an IP address, let's say a.b.c.d
And finally use a.b.c.d/32 inside the NetworkPolicy, e.g.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: a.b.c.d/32
    ports:
    - protocol: TCP
      port: 443
Please correct me if I understood something wrong.

unable to connect internet from pod after applying egress network policy in GKE

I have a pod (kubectl run app1 --image tomcat:7.0-slim) in GKE; after applying the egress network policy, apt-get update is unable to connect to the internet.
This is the policy applied:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app2-np
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: app2
  policyTypes:
  - Egress
  - Ingress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: app3
    ports:
    - port: 8080
  - ports:
    - port: 80
    - port: 53
    - port: 443
Here I am able to connect to port 8080 of the app3 pod in the same namespace. Please help me correct my NetworkPolicy.
It happens because your egress rules only allow TCP traffic (to app3 on port 8080, and to ports 80, 53 and 443, where the protocol defaults to TCP); DNS lookups use UDP on port 53, so internet connection attempts fail at name resolution.
If some of your pods need internet access, you can label them and create a NetworkPolicy to permit internet egress.
In the example below, pods with the label networking/allow-internet-egress: "true" will be able to reach the internet:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internet-egress
spec:
  podSelector:
    matchLabels:
      networking/allow-internet-egress: "true"
  egress:
  - {}
  policyTypes:
  - Egress
Another option is to allow by IP blocks; in the example below, a rule allows internet access (0.0.0.0/0) except for the ipBlock 10.0.0.0/8:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-internet-only
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
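One caveat with this policy: if your cluster's service or pod CIDRs fall inside the excluded 10.0.0.0/8, DNS resolution will also be blocked. A sketch of an extra egress rule to re-allow it (assuming kube-dns runs in kube-system with its usual k8s-app: kube-dns label):

```yaml
# Appended to the egress list above: allow DNS to kube-dns
# in any namespace (labels are the common defaults, verify yours).
- to:
  - namespaceSelector: {}
    podSelector:
      matchLabels:
        k8s-app: kube-dns
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53
```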
Finally, on this site you can visualize your NetworkPolicies in a good way to understand their exact behaviour.
References:
https://www.stackrox.com/post/2020/01/kubernetes-egress-network-policies/
Kubernetes networkpolicy allow external traffic to internet only

Kubernetes network allocation range

Is there a way in Kubernetes, or a network plugin, with which we can limit the range of IP allocation? For example, I am trying to use Weave with the subnet 192.168.16.0/24, and I want to limit the IPs Kubernetes allocates to pods to the range 192.168.16.10-30.
However, my app might use the rest of the IPs based on requirements, i.e. my app can start a virtual IP from 192.168.16.31-50, but I want some mechanism to make sure the range I specified will not be allocated by K8s so my app can consume it.
I need something like this: https://www.weave.works/docs/net/latest/tasks/ipam/configuring-weave/.
The NetworkPolicy resource will help.
See the documentation.
An example NetworkPolicy might look like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
The ipBlock rule describes the allowed network ranges for ingress and egress rules.
E.g.:
- ipBlock:
    cidr: 172.17.0.0/16
    except:
    - 172.17.1.0/24
CIDR
CIDR stands for Classless Inter-Domain Routing, see samples of IPv4 CIDR blocks
More info
For more info see the NetworkPolicy reference.
Also, you can check a great intro to k8s networking by Reuven Harrison.
It's a good question, actually. It depends on your CNI; in your case, Weave Net.
I am assuming you are running Weave Net as a DaemonSet. If so, add something like this to your DaemonSet YAML file:
spec:
  containers:
  - name: weave
    command:
    - /home/weave/launch.sh
    env:
    - name: IPALLOC_RANGE
      value: 192.168.16.32/27
This gives your pods an IP range from 192.168.16.32-63.
You can also set this up with the Weave CLI; let me know if you need that.
Hope this is helpful.