Kubernetes NetworkPolicies Blocking DNS

I have an AKS cluster (Azure CNI) on which I'm trying to implement NetworkPolicies. I've created this network policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myserver
spec:
  podSelector:
    matchLabels:
      service: my-server
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          service: myotherserver
    - podSelector:
        matchLabels:
          service: gateway
    - podSelector:
        matchLabels:
          service: yetanotherserver
    ports:
    - port: 8080
      protocol: TCP
  egress:
  - to:
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
    - port: 5432
      protocol: TCP
    - port: 8080
      protocol: TCP
but when I apply the policy I'm seeing recurring messages that the host name cannot be resolved. I've installed dnsutils on the myserver pod and can see that the DNS requests are timing out. I've also installed tcpdump on the same pod, and I can see requests going from myserver to kube-dns, but no responses coming back.
If I delete the NetworkPolicy, DNS comes straight back, so I'm certain there's an issue with my NetworkPolicy, but I can't find a way to allow the DNS traffic. If anyone can shed any light on where I'm going wrong it would be greatly appreciated!
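For reference, this is roughly how I've been checking from inside the pod (the pod name here is just a placeholder for whatever my myserver pod is called):

# DNS lookups time out
kubectl exec -it myserver-pod -- nslookup kubernetes.default
# watch DNS traffic leave the pod; queries go out, nothing comes back
kubectl exec -it myserver-pod -- tcpdump -ni eth0 udp port 53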

To avoid duplication, create a separate network policy that opens up DNS traffic. First we label the kube-system namespace, then we allow DNS traffic from all pods to the kube-system namespace.
kubectl label namespace kube-system name=kube-system
kubectl create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-access
  namespace: <your-namespacename>
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
EOF
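After applying it, you can double-check that the label and the policy are in place, for example:

kubectl get namespace kube-system --show-labels
kubectl get networkpolicy allow-dns-access -n <your-namespacename>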

A solution that does not require adding a name label to the target namespace: it's necessary to define a namespaceSelector as well as a podSelector, because a podSelector on its own only targets the pod's own namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-access
  namespace: <your-namespacename>
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
EDIT: Changed namespaceSelector to only target kube-system namespace based on the kubernetes.io/metadata.name label. This assumes you have automatic labeling enabled. https://kubernetes.io/docs/concepts/overview/_print/#automatic-labelling
If you don't have this feature enabled, the next best thing is to define an allow-all namespaceSelector along with the podSelector.
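For example, that fallback could look roughly like this; the empty namespaceSelector matches every namespace, so the rule is narrowed only by the pod label:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-access
  namespace: <your-namespacename>
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53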

Related

Kubernetes Network Policy - Allowing traffic to service over specific port only

I have a GKE cluster v1.21 and have network policy enabled.
I denied all ingress and egress traffic within the cluster using:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-traffic
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
And I have a deployment and an internal load balancer that listens on 443:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  selector:
    app: my-webserver
  ports:
  - port: 443
    targetPort: 8080
  type: LoadBalancer
And another deployment called my-deployment, from which I want to send HTTPS requests to my-webserver.
I've set up the following policies:
Allow ingress for my-webserver
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-ingress-my-webserver
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: my-webserver
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: my-deployment
    ports:
    - protocol: TCP
      port: 443
Allow egress for my-deployment
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-my-deployment
spec:
  podSelector:
    matchLabels:
      app: my-deployment
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: my-webserver
    ports:
    - protocol: TCP
      port: 443
Those network policies don't allow me to send requests to my-webserver from my-deployment.
However, when I remove the ports section from both policies it works: I can make HTTPS calls from my-deployment to my-webserver.
But I want to allow connections only on a specific port and a specific protocol.
Is there a way for me to restrict connections only to a specific port?
I am assuming you are testing from one of my-deployment's Pods and hitting the LB's IP address. If that is the case, this is happening because you are only allowing the communication from the source Pod to the Service port; you are missing the part from the Service to the destination Pod.
As per Google's documentation:
"You can use GKE's Network Policy Enforcement to control the communication between your cluster's Pods and Services. You define a network policy by using the Kubernetes Network Policy API to create Pod-level firewall rules. These firewall rules determine which Pods and Services can access one another inside your cluster."
What you need to do is to allow port 8080 as well in both policies:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-ingress-my-webserver
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: my-webserver
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: my-deployment
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 443
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-my-deployment
spec:
  podSelector:
    matchLabels:
      app: my-deployment
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: my-webserver
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 443
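Once both policies are applied, a quick way to verify from a my-deployment Pod (assuming curl is available in the image; the load balancer IP is a placeholder) is something like:

kubectl exec deploy/my-deployment -- curl -k -s -o /dev/null -w '%{http_code}\n' https://<internal-lb-ip>/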

Kubernetes NetworkPolicy limit egress traffic to service

Is it possible to allow egress traffic only to a specific Service?
This is my naive attempt to do that:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
  namespace: default
spec:
  podSelector: {}
  egress:
  - ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
    to:
    - podSelector:
        matchLabels:
          k8s-app: kube-dns
  policyTypes:
  - Egress
No, as far as I know you can do that only using podSelector.
However, if you have access to the cluster, I think you can still manually add additional labels to the pods you need and then use a podSelector.
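For example, labelling the Service's backend pods with a marker label and then selecting that label in the egress rule; the pod name, namespace, and label key/value here are arbitrary placeholders:

kubectl label pod <target-pod> -n <target-namespace> egress-target=my-service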
The "Create egress policies" documentation provides a good template of the NetworkPolicy structure. The following policy allows outbound traffic from a pod to other pods in the same namespace that match the pod selector.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-egress-same-namespace
  namespace: default
spec:
  podSelector:
    matchLabels:
      color: blue
  egress:
  - to:
    - podSelector:
        matchLabels:
          color: red
    ports:
    - port: 80
I know that you can use namespaceSelector for ingress like below. I'm not sure you can use it with egress - I haven't tried. But to reach pods in another namespace you have to point to that namespace somehow in the configuration.
namespaceSelector:
  matchLabels:
    shape: square
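namespaceSelector is also accepted inside an egress to: block; a sketch reusing the label above (the port is just an example):

egress:
- to:
  - namespaceSelector:
      matchLabels:
        shape: square
  ports:
  - port: 80
    protocol: TCP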

k8s egress network policy not working for dns

I have added this NetworkPolicy to block all egress but allow DNS.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
  namespace: {{ $namespace }}
spec:
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          networking/namespace: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  policyTypes:
  - Egress
However, I'm getting this error with a service that this rule applies to: Could not lookup srv records on _origintunneld._tcp.argotunnel.com: lookup _origintunneld._tcp.argotunnel.com on 10.2.0.10:53: read udp 10.32.1.179:40784->10.2.0.10:53: i/o timeout
This IP (10.2.0.10) belongs to the kube-dns service, which has a pod with the k8s-app=kube-dns label and is in the kube-system namespace with the label networking/namespace=kube-system.
If I remove the pod selector and namespace selector, the egress policy works and I do not get the error.
This works but is not secure as it isn't restricted to the kube-dns pod:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
  namespace: {{ $namespace }}
spec:
  podSelector: {}
  egress:
  - to:
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  policyTypes:
  - Egress
The kube-system namespace YAML (kubectl get namespace kube-system -o yaml):
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2020-07-30T22:08:25Z"
  labels:
    networking/namespace: kube-system
  name: kube-system
  resourceVersion: "4084751"
  selfLink: /api/v1/namespaces/kube-system
  uid: b93e68b0-7899-4f39-a3b8-e0e12e4008ee
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
I've encountered the same issue. For me it was because NodeLocal DNSCache was enabled on my cluster.
The current policy does not explicitly allow traffic to the Kubernetes DNS. As a result, DNS queries from pods in {{ $namespace }} will be dropped unless they are allowed by other rules.
Creating an egress rule that allows traffic to the cluster DNS should resolve your issue.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
  namespace: {{ $namespace }}
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          networking/namespace: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
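If you suspect NodeLocal DNSCache, a quick check is to look for its pods (k8s-app=node-local-dns is the label used by the standard add-on; it may differ on your cluster):

kubectl get pods -n kube-system -l k8s-app=node-local-dns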
I came across a similar issue. In my case I am using GKE Dataplane V2 Network Policies. In this scenario, Dataplane V2 is basically a managed Cilium implementation (not fully featured). It manages some of its internal CRD resources required to maintain a healthy network, which can lead to conflicts with deployment tools that automatically sync k8s resources (e.g. ArgoCD). With proper testing I found that my network policies were not matching the pod label "k8s-app: kube-dns".
So perhaps a quick fix for your test is to allow all pods in the kube-system namespace by removing the podSelector block:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
  namespace: {{ $namespace }}
spec:
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          networking/namespace: kube-system
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  policyTypes:
  - Egress
If that confirms egress is working, you need to troubleshoot your environment further and understand why your NetworkPolicy is not matching the kube-dns label.
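One way to see which labels the DNS pods actually carry, and therefore what your selectors need to match, is something like:

kubectl get pods -n kube-system --show-labels | grep -i dns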
If you are using ArgoCD, a good place to start is by excluding cilium.io resources. For example, include this in your ArgoCD config:
resource.exclusions: |
  - apiGroups:
    - cilium.io
    kinds:
    - CiliumIdentity
    - CiliumEndpoint
    clusters:
    - "*"

networkpolicy in kubernetes to allow port from namespace

Create a new NetworkPolicy named allow-port-from-namespace in the existing namespace snafu. Ensure that the new NetworkPolicy allows Pods in namespace internal to connect to port 8080 of Pods in namespace snafu. Further ensure that the new NetworkPolicy does not allow access to Pods which don't listen on port 8080, and does not allow access from Pods which are not in namespace internal.
Please help me with this question.
Also, please verify whether the YAML below is correct, and help me understand the second part of the question (further ensure that the new NetworkPolicy does not allow access to Pods which don't listen on port 8080 and does not allow access from Pods which are not in namespace internal).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: snafu
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: internal
    ports:
    - protocol: TCP
      port: 8080
The second part means you must isolate all the pods in the namespace snafu by default, which means you need to change your podSelector field to:
...
spec:
  podSelector: {}
...
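Putting both parts together, a complete manifest could look roughly like this (it assumes the internal namespace carries the automatic kubernetes.io/metadata.name=internal label; otherwise use whatever label that namespace actually has):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: snafu
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: internal
    ports:
    - protocol: TCP
      port: 8080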
The first part seems incorrect; you need to create labels for the namespace internal.
- namespaceSelector:
    matchLabels:
      purpose: production
Here, purpose: production is a label of the namespace internal.
https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/06-allow-traffic-from-a-namespace.md
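If the internal namespace does not have such a label yet, you can add one yourself, for example:

kubectl label namespace internal purpose=production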
I think it can be something like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: snafu
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          key: value
    ports:
    - protocol: TCP
      port: 8080
First, check the labels of your namespaces, e.g.:
[root@master ~]# kg ns --show-labels
NAME              STATUS   AGE    LABELS
default           Active   54d    kubernetes.io/metadata.name=default
kube-node-lease   Active   54d    kubernetes.io/metadata.name=kube-node-lease
kube-public       Active   54d    kubernetes.io/metadata.name=kube-public
kube-system       Active   54d    kubernetes.io/metadata.name=kube-system
my-app            Active   171m   kubernetes.io/metadata.name=my-app
Here my namespace is my-app and I want to allow traffic on port 80 for all the pods in namespace my-app, but I don't want to allow any traffic from other namespaces (e.g. default), so I use:
matchLabels:
  kubernetes.io/metadata.name: my-app
[root@master ~]# cat networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: my-app
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: my-app
    ports:
    - protocol: TCP
      port: 80

Whitelist "kube-system" namespace using NetworkPolicy

I have a multi-tenant cluster, where multi-tenancy is achieved via namespaces. Every tenant has their own namespace. Pods from a tenant cannot talk to pods of other tenants. However, some pods in every tenant have to expose a service to the internet, using an Ingress.
This is how far I got (I am using Calico):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant1-isolate-namespace
  namespace: tenant1
spec:
  policyTypes:
  - Ingress
  podSelector: {} # Select all pods in this namespace
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant1 # white list current namespace
Deployed for each namespace (tenant1, tenant2, ...), this limits communication to pods within their own namespace. However, it also prevents pods in the kube-system namespace from talking to pods in this namespace, and the kube-system namespace does not have any labels by default, so I can not specifically whitelist it.
I found a (dirty) workaround for this issue by manually giving it a label:
kubectl label namespace/kube-system permission=talk-to-all
And adding the whitelist rule to the NetworkPolicy:
...
- from:
  - namespaceSelector:
      matchLabels:
        permission: talk-to-all # allow namespaces that have the "talk-to-all" privilege
Is there a better solution, without manually giving kube-system a label?
Edit: I tried to additionally add an "OR" rule to specifically allow communication from pods that have the label "app=nginx-ingress", but without luck:
- from:
  ...
  - podSelector:
      matchLabels:
        app: nginx-ingress # Allow pods that have the app=nginx-ingress label
apiVersion: networking.k8s.io/v1
The namespaceSelector is designed to match namespaces by labels only. There is no way to select a namespace by name.
The podSelector can only select pods in the same namespace as the NetworkPolicy object. For pods located in different namespaces, only selection of the whole namespace is possible.
Here is an example of Kubernetes Network Policy implementation:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
Follow this link to read a good explanation of the whole concept of Network policy, or this link to watch the lecture.
apiVersion: projectcalico.org/v3
The Calico API gives you more options for writing NetworkPolicy rules, so you may be able to achieve your goal with less effort and head-scratching.
For example, using the Calico implementation of Network Policy you can:
set action for the rule (Allow, Deny, Log, Pass),
use negative matching (protocol, notProtocol, selector, notSelector),
apply more complex label selectors (has(k), k not in { 'v1', 'v2' }),
combine selectors with operator &&,
use port range (ports: [8080, "1234:5678", "named-port"]),
match pods in other namespaces.
But still, you can match namespaces only by labels.
Consider reading Calico documentation for the details.
Here is an example of Calico Network Policy implementation:
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-tcp-6379
  namespace: production
spec:
  selector: role == 'database'
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: role == 'frontend'
    destination:
      ports:
      - 6379
  egress:
  - action: Allow
Indeed, tenant1 pods will need access to kube-dns in the kube-system namespace specifically.
One approach that does not require the kube-system namespace to be labelled is the following policy.
Note, though, that with this approach kube-dns could be in any namespace, so it may not be suitable for you.
---
# Default deny all ingress & egress policy, except allow kube-dns.
# All traffic except this must be explicitly allowed.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all-except-kube-dns
  namespace: tenant1
spec:
  podSelector: {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          k8s-app: kube-dns
  - ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  policyTypes:
  - Ingress
  - Egress
Then, you would also need an 'allow all within namespace' policy, as follows:
---
# Allow intra namespace traffic for development purposes only.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-intra-namespace
  namespace: tenant1
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}
  egress:
  - to:
    - podSelector: {}
  policyTypes:
  - Ingress
  - Egress
Lastly, you will want to add specific policies such as an ingress rule.
It would be better to replace the allow-intra-namespace policy with specific rules to suit individual pods, which your tenant1 could do.
These have been adapted from this website: https://github.com/ahmetb/kubernetes-network-policy-recipes
I'm on k3os with the default Flannel CNI. It has a default label on the kube-system namespace:
$ kubectl describe ns kube-system
Name:         kube-system
Labels:       kubernetes.io/metadata.name=kube-system
Annotations:  <none>
Status:       Active
This works for me:
- namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: kube-system
Here is my full YAML, which for egress allows all external traffic plus kube-dns in the kube-system namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
If I understood correctly, you are using Calico. Just use their example of how to implement default-deny without breaking kube-dns communication.
Found here:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-app-policy
spec:
  namespaceSelector: has(projectcalico.org/name) && projectcalico.org/name not in {"kube-system", "calico-system"}
  types:
  - Ingress
  - Egress
  egress:
  # allow all namespaces to communicate to DNS pods
  - action: Allow
    protocol: UDP
    destination:
      selector: 'k8s-app == "kube-dns"'
      ports:
      - 53