Whitelist "kube-system" namespace using NetworkPolicy - kubernetes

I have a multi-tenant cluster, where multi-tenancy is achieved via namespaces. Every tenant has their own namespace. Pods from a tenant cannot talk to pods of other tenants. However, some pods in every tenant have to expose a service to the internet, using an Ingress.
This is how far I got (I am using Calico):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant1-isolate-namespace
  namespace: tenant1
spec:
  policyTypes:
  - Ingress
  podSelector: {} # Select all pods in this namespace
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant1 # white list current namespace
Deployed in each namespace (tenant1, tenant2, ...), this limits ingress to traffic from pods within the same namespace. However, it also prevents pods in the kube-system namespace from talking to pods in these namespaces.
The kube-system namespace does not have any labels by default, so I cannot specifically whitelist it.
I found a (dirty) workaround for this issue by manually giving it a label:
kubectl label namespace/kube-system permission=talk-to-all
And adding the whitelist rule to the networkpolicy:
...
- from:
  - namespaceSelector:
      matchLabels:
        permission: talk-to-all # allow namespaces that have the "talk-to-all privilege"
Is there a better solution, without manually giving kube-system a label?
Edit: I tried to additionally add an "OR" rule to specifically allow communication from pods that have the label "app=nginx-ingress", but without luck:
- from:
  ...
  - podSelector:
      matchLabels:
        app: nginx-ingress # Allow pods that have the app=nginx-ingress label

apiVersion: networking.k8s.io/v1
The namespaceSelector is designed to match namespaces by labels only; there is no way to select a namespace by name.
The podSelector can only select pods in the same namespace as the NetworkPolicy object. For pods located in other namespaces, only selection of the whole namespace is possible.
Here is an example of Kubernetes Network Policy implementation:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
Follow this link to read a good explanation of the whole concept of Network policy, or this link to watch the lecture.
apiVersion: projectcalico.org/v3
The Calico API gives you more options for writing NetworkPolicy rules, so you may be able to achieve your goal with less effort and head-scratching.
For example, using the Calico implementation of network policy you can:
set an action for the rule (Allow, Deny, Log, Pass),
use negative matching (protocol, notProtocol, selector, notSelector),
apply more complex label selectors (has(k), k not in { 'v1', 'v2' }),
combine selectors with the && operator,
use port ranges (ports: [8080, "1234:5678", "named-port"]),
match pods in other namespaces.
But still, you can match namespaces only by labels.
Consider reading Calico documentation for the details.
Here is an example of Calico Network Policy implementation:
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-tcp-6379
  namespace: production
spec:
  selector: role == 'database'
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: role == 'frontend'
    destination:
      ports:
      - 6379
  egress:
  - action: Allow
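As a hedged illustration of a couple of the features listed above (the labels, namespace and port values are made up for this sketch), a Calico rule using a port range and negative matching could look roughly like this:
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: example-port-range-and-negation
  namespace: tenant1
spec:
  selector: all()
  types:
  - Ingress
  ingress:
  # allow TCP from any pod that is NOT labelled role == 'blocked'
  - action: Allow
    protocol: TCP
    source:
      notSelector: role == 'blocked'
    destination:
      ports:
      - 8080
      - "1234:5678"   # port range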

Indeed, tenant1 pods will need access to kube-dns in the kube-system namespace specifically.
One approach that does not require the kube-system namespace to be labelled is the following policy.
Note, however, that this approach does not pin kube-dns to a particular namespace, so it may not be suitable for you.
---
# Default deny all ingress & egress policy, except allow kube-dns.
# All traffic except this must be explicitly allowed.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all-except-kube-dns
  namespace: tenant1
spec:
  podSelector: {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          k8s-app: kube-dns
  - ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  policyTypes:
  - Ingress
  - Egress
Then you would also need an 'allow all within namespace' policy, as follows:
---
# Allow intra namespace traffic for development purposes only.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-intra-namespace
  namespace: tenant1
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}
  egress:
  - to:
    - podSelector: {}
  policyTypes:
  - Ingress
  - Egress
Lastly, you will want to add specific policies such as an ingress rule.
It would be better to replace the allow-intra-namespace policy with specific rules to suit individual pods, which your tenant1 could do.
These have been adapted from this website: https://github.com/ahmetb/kubernetes-network-policy-recipes
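For example, a more specific replacement for allow-intra-namespace could scope ingress down to a single workload and port; this is only a sketch, and the app: web label and port 80 are assumptions for illustration:
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-web-ingress
  namespace: tenant1
spec:
  podSelector:
    matchLabels:
      app: web          # only the web pods accept ingress
  ingress:
  - from:
    - podSelector: {}   # from any pod in tenant1
    ports:
    - protocol: TCP
      port: 80
  policyTypes:
  - Ingress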

I'm on k3os with the default flannel CNI. It has a default label on the kube-system namespace:
$ kubectl describe ns kube-system
Name: kube-system
Labels: kubernetes.io/metadata.name=kube-system
Annotations: <none>
Status: Active
This works for me:
- namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: kube-system
Here is my full YAML, which allows egress to all external traffic and to kube-dns in the kube-system namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
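With this in place, DNS lookups from the namespace should still succeed; a quick, illustrative check is to run a throwaway pod and resolve an in-cluster name:
kubectl run dnstest --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local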

If I understood correctly, you are using Calico. Just use their example of how to implement a default deny without breaking kube-dns communication.
Found here:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-app-policy
spec:
  namespaceSelector: has(projectcalico.org/name) && projectcalico.org/name not in {"kube-system", "calico-system"}
  types:
  - Ingress
  - Egress
  egress:
  # allow all namespaces to communicate to DNS pods
  - action: Allow
    protocol: UDP
    destination:
      selector: 'k8s-app == "kube-dns"'
      ports:
      - 53
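Note that resources in the projectcalico.org/v3 API group are usually applied with calicoctl (or through the Calico API server) rather than plain kubectl, e.g. (the filename is assumed for the example):
calicoctl apply -f deny-app-policy.yaml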

Related

Kubernetes NetworkPolicy: Allow egress only to internet, and Allow ingress only from ingress controller and promtail

By default pods can communicate with each other in Kubernetes, which is unwanted should a pod be compromised. We want to use NetworkPolicies to control inbound (ingress) and outbound (egress) traffic to/from pods.
Specifically pods should ONLY be able to:
Egress: Call services on the internet
Ingress: Receive requests from the Nginx-ingress controller
Ingress: Send logs via promtail to Loki
What I have tried
1. Denying all ingress and egress
This is the default policy that we want to gradually open up. It blocks all ingress and egress.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all
  namespace: mynamespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
2. Opening egress to internet only
We allow egress only to IP addresses that are not reserved for private networks, according to Wikipedia.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: egress-allow-internet-only
  namespace: mynamespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
3. Opening ingress from the ingress controller and Loki
We have deployed the standard NGINX Ingress Controller in namespace default, and it has the label app.kubernetes.io/name=ingress-nginx. We have also deployed the standard loki-grafana stack to the default namespace, which uses promtail to transfer logs to Loki. Here I allow pods to receive ingress from the promtail and ingress-nginx pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-allow-ingress-controller-and-promptail
  namespace: mynamespace
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: promtail
So, does this configuration look right?
I am new to Kubernetes, so I hope you guys can help point me in the right direction. Does this configuration do what I intend it to do, or have I missed something? For example, is it enough that I have just blocked egress within the private network to ensure that the pods are isolated from each other, or should I also make the ingress configuration as I have done here?
I have compared your ingress with the Kubernetes docs and your egress with this SO answer, and the deny-all for both ingress and egress seems to be correct. The only thing we need to do is check whether all the namespaces are given correctly, which seems to be the case in your YAML file.
But Kubernetes pods use the DNS server inside Kubernetes; because this DNS server is blocked, we need to define more specific rules to allow DNS lookups. Follow this SO answer to define DNS config at the pod level, and to make curl calls with domain names work, allow egress to CoreDNS in kube-system (by adding a namespace selector for kube-system and a pod selector for the DNS pods).
How to identify the DNS pod
# Identifying DNS pod
kubectl get pods -A | grep dns
# Identifying DNS pod label
kubectl describe pods -n kube-system coredns-64cfd66f7-rzgwk
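Alternatively, assuming the standard k8s-app=kube-dns label, you can confirm the DNS pods carry it directly:
# Listing DNS pods by their label
kubectl get pods -n kube-system -l k8s-app=kube-dns --show-labels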
Adding DNS pod to NetworkPolicy
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: egress-allow-internet-only
  namespace: mynamespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: "kube-system"
    - podSelector:
        matchLabels:
          k8s-app: "kube-dns"
For those curious, I ended up with the following network policy:
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all
  namespace: <K8S_NAMESPACE>
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-internet
  namespace: <K8S_NAMESPACE>
spec:
  podSelector: {}
  policyTypes:
  - Egress
  - Ingress
  egress:
  - to:
    - ipBlock:
        cidr: "0.0.0.0/0"
        except:
        - "10.0.0.0/8"
        - "172.16.0.0/12"
        - "192.168.0.0/16"
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: "kube-system"
    - podSelector:
        matchLabels:
          k8s-app: "kube-dns"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-ingresscontroller
  namespace: <K8S_NAMESPACE>
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: "default"
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: "ingress-nginx"
---
It turned out that the DNS server had to be added to allow-internet, and that it was not necessary to add allow-ingress-from-promtail, as promtail gets the logs another way than through ingress.

Kubernetes NetworkPolicy limit egress traffic to service

Is it possible to allow egress traffic only to the specific service?
This is my naive try to do that:
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
  namespace: default
spec:
  podSelector: {}
  egress:
  - ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
    to:
    - podSelector:
        matchLabels:
          k8s-app: kube-dns
  policyTypes:
  - Egress
No, as far as I know you can do that only by using a podSelector.
However, if you have access to the cluster, I think you can still manually add additional labels to the needed pods and use a podSelector.
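As a sketch of that idea (the target-service: billing label is purely illustrative and would have to be added to the target pods, e.g. in their Deployment's pod template):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-egress-to-billing
  namespace: default
spec:
  podSelector: {}              # applies to all pods in this namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          target-service: billing   # the manually added label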
The Create egress policies documentation provides a good template of the NetworkPolicy structure. The following policy allows outbound pod traffic to other pods in the same namespace that match the pod selector.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-egress-same-namespace
  namespace: default
spec:
  podSelector:
    matchLabels:
      color: blue
  egress:
  - to:
    - podSelector:
        matchLabels:
          color: red
    ports:
    - port: 80
I know that you can use a namespaceSelector for ingress like below. I'm not sure you can use it with egress; I haven't tried. But to access pods in another namespace you have to reference that namespace somehow in the configuration.
namespaceSelector:
  matchLabels:
    shape: square
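For what it's worth, namespaceSelector is also accepted in egress to entries; a minimal sketch reusing the shape: square label from above:
egress:
- to:
  - namespaceSelector:
      matchLabels:
        shape: square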

k8s egress network policy not working for dns

I have added this NetworkPolicy to block all egress but allow DNS.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
  namespace: {{ $namespace }}
spec:
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          networking/namespace: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  policyTypes:
  - Egress
However, I'm getting this error with a service that this rule applies to: Could not lookup srv records on _origintunneld._tcp.argotunnel.com: lookup _origintunneld._tcp.argotunnel.com on 10.2.0.10:53: read udp 10.32.1.179:40784->10.2.0.10:53: i/o timeout
This IP (10.2.0.10) belongs to the kube-dns service, which has a pod with the k8s-app=kube-dns label and is in the kube-system namespace with the label networking/namespace=kube-system.
If I remove the pod selector and namespace selector, then the egress policy works and I do not get the error.
This works but is not secure as it isn't restricted to the kube-dns pod:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
  namespace: {{ $namespace }}
spec:
  podSelector: {}
  egress:
  - to:
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  policyTypes:
  - Egress
kube-system namespace yaml: kubectl get namespace kube-system -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2020-07-30T22:08:25Z"
  labels:
    networking/namespace: kube-system
  name: kube-system
  resourceVersion: "4084751"
  selfLink: /api/v1/namespaces/kube-system
  uid: b93e68b0-7899-4f39-a3b8-e0e12e4008ee
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
I've encountered the same issue. For me it was because NodeLocal DNSCache was enabled on my cluster.
Current policy does not explicitly allow traffic to Kubernetes DNS. As a result, DNS queries from pods in {{ $namespace }} will be dropped, unless allowed by other rules.
Creating an allow egress rule to k8s DNS should resolve your issue.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
  namespace: {{ $namespace }}
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          networking/namespace: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
I came across a similar issue. In my case I am using GKE Dataplane V2 network policies. In this scenario, Dataplane V2 is basically a managed Cilium implementation (not fully featured). It will be managing some of its internal CRD resources required to maintain a healthy network. This can lead to conflicts with app deployment tools which automatically sync k8s resources (e.g. ArgoCD). With proper testing I found that my network policies were not matching the pod labels for "k8s-app: kube-dns".
So perhaps a quick fix for your test is to allow all pods in the kube-system namespace by removing the podSelector block:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
  namespace: {{ $namespace }}
spec:
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          networking/namespace: kube-system
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  policyTypes:
  - Egress
If you confirm egress is working, you need to further troubleshoot your environment and understand why your netpolicy is not matching the kube-dns label.
If you are using ArgoCD, a good place to start is by blacklisting/excluding cilium.io resources. For example, include this in your ArgoCD config:
resource.exclusions: |
  - apiGroups:
    - cilium.io
    kinds:
    - CiliumIdentity
    - CiliumEndpoint
    clusters:
    - "*"
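For context, resource.exclusions is a key of the argocd-cm ConfigMap, so the snippet above would sit under its data section, roughly like this (assuming ArgoCD is installed in the argocd namespace):
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  resource.exclusions: |
    - apiGroups:
      - cilium.io
      kinds:
      - CiliumIdentity
      - CiliumEndpoint
      clusters:
      - "*"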

networkpolicy in kubernetes to allow port from namespace

Create a new NetworkPolicy named allow-port-from-namespace in the existing namespace snafu. Ensure that the new NetworkPolicy allows Pods in namespace internal to connect to port 8080 of Pods in namespace snafu. Further ensure that the new NetworkPolicy: does not allow access to Pods which don't listen on port 8080, and does not allow access from Pods which are not in namespace internal.
Please help me with this question.
Also, please verify whether the YAML below (in the comment section) is correct, and help me understand the second part of the question (further ensure that the new NetworkPolicy does not allow access to Pods which don't listen on port 8080 and does not allow access from Pods which are not in namespace internal).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: snafu
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: internal
    ports:
    - protocol: TCP
      port: 8080
The second part means you must isolate all the pods in the namespace snafu by default, which means you need to change your podSelector field to:
...
spec:
  podSelector: {}
...
The first part seems incorrect; you need to create labels for the namespace internal.
- namespaceSelector:
    matchLabels:
      purpose: production
Here, purpose: production is the label of the namespace internal.
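For example, the label could be added with (assuming the namespace is literally named internal):
kubectl label namespace internal purpose=production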
https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/06-allow-traffic-from-a-namespace.md
I think it can be something like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: snafu
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          key: value
    ports:
    - protocol: TCP
      port: 8080
First, check the labels of your namespaces, e.g.:
[root@master ~]# kg ns --show-labels
NAME              STATUS   AGE    LABELS
default           Active   54d    kubernetes.io/metadata.name=default
kube-node-lease   Active   54d    kubernetes.io/metadata.name=kube-node-lease
kube-public       Active   54d    kubernetes.io/metadata.name=kube-public
kube-system       Active   54d    kubernetes.io/metadata.name=kube-system
my-app            Active   171m   kubernetes.io/metadata.name=my-app
Here my namespace is my-app, and I want to allow traffic on port 80 for all the pods in namespace my-app, but I don't want to allow any traffic from other namespaces (e.g. default). So use:
matchLabels:
  kubernetes.io/metadata.name: my-app
[root@master ~]# cat networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: my-app
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: my-app
    ports:
    - protocol: TCP
      port: 80

What is the difference between ingress value as blank array and value - {}?

In a Kubernetes network policy we can set the ingress value as a blank array, i.e. [], or we can set the value as - {}.
What is the difference between using these 2 values?
First YAML that I tried - It didn't work
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes: ["Ingress","Egress"]
  ingress: []
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 3306
Second YAML, which was the answer in the Katacoda scenario:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Egress
  - Ingress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 3306
In both cases you have specified the policy types Ingress and Egress.
In the first example:
ingress: []
this rule is empty and denies all ingress traffic (the same result as if no ingress rules were present in the spec).
You can verify this by running:
kubectl describe networkpolicy internal-policy
Allowing ingress traffic:
<none> (Selected pods are isolated for ingress connectivity)
In the second example:
ingress:
- {}
this rule allows all ingress traffic:
kubectl describe networkpolicy internal-policy
Allowing ingress traffic:
To Port: <any> (traffic allowed to all ports)
From: <any> (traffic not restricted by source)
As per documentation: Network Policies
Ingress rules:
Each NetworkPolicy may include a list of whitelist ingress rules. Each rule allows traffic which matches both the from and ports sections.
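To summarize the two forms side by side (a minimal sketch):
# An empty list means "no ingress rules": nothing is whitelisted,
# so all ingress to the selected pods is denied.
ingress: []

# A list containing one empty rule matches everything,
# so all ingress to the selected pods is allowed.
ingress:
- {}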
Hope this helps.