Kubernetes NetworkPolicy is not overriding existing allow all egress policy

There are already two existing Network Policies present, one of which allows all outbound traffic for all pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-default
  namespace: sample-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - podSelector: {}
  ingress:
  - from:
    - podSelector: {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress
  namespace: sample-namespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector: {}
    - ipBlock:
        cidr: 0.0.0.0/0
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
I want to block all outbound traffic for a certain pod with the label app: localstack-server, so I created one more NetworkPolicy for it, but it's not getting applied to that pod:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: psp-localstack-default-deny-egress
  namespace: sample-namespace
spec:
  podSelector:
    matchLabels:
      app: localstack-server
  policyTypes:
  - Egress
I'm able to run curl www.example.com inside that pod and it works fine, which it should not.

NetworkPolicies are additive, and they only have allow rules. So for each pod (as selected by podSelector), the traffic that will be allowed is the sum of all network policies that selected this pod. In your case, that's all traffic, since you have a policy that allows all traffic for an empty selector (all pods).
To solve your problem, you should apply the allow-all policy to a label selector that matches all pods except that one app: localstack-server pod. So, add a label like netpol-all-allowed: "true" to every other pod, and don't add it to localstack-server.
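As a sketch, here is the allow-default policy from the question rewritten with that opt-in label (the label name netpol-all-allowed is just an example, not anything Kubernetes predefines):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-default
  namespace: sample-namespace
spec:
  # Only pods that carry the opt-in label are selected; localstack-server
  # never gets the label, so this policy grants it no traffic.
  podSelector:
    matchLabels:
      netpol-all-allowed: "true"
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - podSelector: {}
  ingress:
  - from:
    - podSelector: {}
The allow-egress policy would need the same treatment, since it also selects every pod in the namespace.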

I think in your first YAML you allowed all egress, because in the Kubernetes Network Policy documentation the following network policy is given with this explanation:
With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
Earlier in the docs it says that
By default, a pod is non-isolated for egress; all outbound connections are allowed.
So I would suggest leaving the egress part out of the allow-default policy; then the denial of egress for that pod should work.
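A sketch of that suggestion applied to the allow-default policy from the question (egress dropped, ingress kept; note the separate allow-egress policy also selects all pods, so it would need the same treatment):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-default
  namespace: sample-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress   # Egress removed: this policy no longer opts all pods into allowed egress
  ingress:
  - from:
    - podSelector: {}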

Related

Kubernetes Network Policy Egress to pod via service

I have some pods running that talk to each other via Kubernetes services, not via pod IPs, and now I want to lock things down using Network Policies, but I can't seem to get the egress right.
In this scenario I have two pods:
sleeper, the client
frontend, the server behind a Service called frontend-svc, which forwards port 8080 to the pod's port 80
Both running in the same namespace: ns
In the sleeper pod I simply wget a ping endpoint in the frontend pod:
wget -qO- http://frontend-svc.ns:8080/api/Ping
Here's my egress policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-to-frontend-egress
  namespace: ns
spec:
  podSelector:
    matchLabels:
      app: sleeper
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: frontend
As you can see, nothing special; no ports, no namespace selector, just a single label selector for each pod.
Unfortunately, this breaks my ping:
wget: bad address 'frontend-svc.ns:8080'
However, if I retrieve the pod's IP (using kubectl get po -o wide) and talk to the frontend directly, I do get a response:
wget -qO- 10.x.x.x:80/api/Ping (x obviously replaced with values)
My intuition was that the pod's egress to kube-dns was required, so I added another egress policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-kube-system
  namespace: ns
spec:
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: "kube-system"
      podSelector: {}
  policyTypes:
  - Egress
For now I don't want to bother with the exact pod and port, so I allow all pods from the ns namespace to egress to kube-system pods.
However, this didn't help one bit. Even worse: it also breaks the communication by pod IP.
I'm running on Azure Kubernetes with Calico Network Policies.
Any clue what might be the issue, because I'm out of ideas.
After getting it up and running, here's a more locked-down version of the DNS egress policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-all-pods-dns-egress
  namespace: ns
spec:
  policyTypes:
  - Egress
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          # This label was introduced in Kubernetes 1.21; if you are running a
          # lower version, label the kube-system namespace manually.
          kubernetes.io/metadata.name: "kube-system"
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
I recreated your deployment and the final NetworkPolicy (egress to kube-system for DNS resolution) solves it for me. Make sure that after applying the last network policy, you're testing the connection to the service's port (8080), which you changed in your wget command when accessing the pod directly (80).
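If you need to double-check which label your cluster's DNS pods actually carry (it varies by distribution), plain kubectl can show it:
kubectl get pods -n kube-system -l k8s-app=kube-dns --show-labels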
Since network policies are a drag to manage, my team and I wanted to automate their creation, and we open-sourced a tool that you might be interested in: https://docs.otterize.com/quick-tutorials/k8s-network-policies.
It's a way to manage network policies where you declare your access requirements in a separate, human-readable resource and the labeling is done for you on-the-fly.

Isolate k8s pods network between namespaces

I need to isolate the pod network between namespaces in k8s.
A pod-1 running in namespace ns-1 must not be able to access the network of a pod-2 in namespace ns-2.
The purpose is to create a sandbox between namespaces and prevent network communication between specific pods based on their labels.
I was trying the NetworkPolicy to do this, but my knowledge about k8s is a little "crude".
Is this possible? Can someone provide an example?
I'm trying to block all intranet communication and allow internet using this:
spec:
  egress:
  - ports:
    - port: 53
      protocol: UDP
    to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 192.168.0.0/16
        - 172.16.0.0/12
        - 172.40.0.0/16
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
  podSelector:
    matchExpressions:
    - key: camel.apache.org/integration
      operator: Exists
  policyTypes:
  - Egress
But when I access something like google.com, it resolves the DNS correctly but doesn't connect, resulting in a timeout.
The policy intention is to:
block all private network access
allow only the kube-dns nameserver resolver on port 53
but allow all access to the internet
What am I doing wrong?
Network Policies are very flexible, and you can configure them in different ways. In your case, you have to create two policies for your cluster: first a namespace network policy for your production, and a second one for your sandbox. Of course, before you start to modify your network, be sure that you have chosen, installed, and configured a network provider that supports Network Policies.
Here is an example of a NetworkPolicy .yaml file that isolates your namespace:
# You can create a "default" policy for a namespace which prevents all ingress
# AND egress traffic by creating the following NetworkPolicy in that namespace.
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: YourSandbox
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
After that you can create your pod in this namespace and it will be isolated. Just add the namespace name to your config file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-c
  namespace: YourSandbox
And in this example, we grant access in and out to a specific namespace and service:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-service-c
  namespace: YourSandbox
spec:
  podSelector:
    matchLabels:
      app: YourSandboxService
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          env: YourProductionNamespace
    - podSelector:
        matchLabels:
          app: NameOfYourService
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: YourProductionNamespace
    - podSelector:
        matchLabels:
          app: NameOfYourService
Use this network policy ipBlock to configure egress, blocking the default local private network IPs and leaving the rest of internet access open.
egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
      except:
      - 192.168.40.0/24  # Your pool of local private network IPs
If you use the length of the subnet prefix /32, that indicates that you are limiting the scope of the rule to this one IP address only.
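For what it's worth, the likely culprit in the policy from the question: a single egress rule ANDs its ports and to clauses, so the ports list (53/UDP) applies to every destination, including the 0.0.0.0/0 ipBlock, which is why DNS resolves but other connections time out. A sketch that splits DNS and general internet access into two rules (selectors and CIDRs reused from the question; the policy name is made up):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-and-internet
spec:
  podSelector:
    matchExpressions:
    - key: camel.apache.org/integration
      operator: Exists
  policyTypes:
  - Egress
  egress:
  # Rule 1: DNS lookups, restricted to kube-dns.
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
  # Rule 2: any destination except private ranges, on any port.
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 192.168.0.0/16
        - 172.16.0.0/12
        - 172.40.0.0/16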

How to DENY all Ingress UDP using Network Policies in Kubernetes

I am new to configuring network policies in k8s. I have to make a change in production which I can't test. Basically, we need to block all UDP traffic going to the pods in a specific namespace. Would the below work?
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-udp
  namespace: foxden-loadtest
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
    ports:
    - protocol: UDP
Try this example
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-allow-tcp-only
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - port: 80
      protocol: TCP
All other traffic will be blocked; only TCP on port 80 will work.
policyTypes: ["Ingress"] indicates that this policy enforces rules for ingress traffic.
ingress: [] is an empty rule set that does not whitelist any traffic, therefore all ingress traffic is blocked.
Example : https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/11-deny-egress-traffic-from-an-application.md
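For comparison, a minimal sketch of a policy that blocks all ingress (UDP included) to every pod in the namespace from the question; since NetworkPolicy has no deny rules, "deny only UDP" has to be expressed by whitelisting the TCP traffic you want, as in the answer above:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: foxden-loadtest
spec:
  podSelector: {}   # selects every pod, with no ingress rules whitelisted
  policyTypes:
  - Ingress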

kubernetes networkpolicy allow external traffic to internet only

I'm trying to implement a network policy in my Kubernetes cluster to isolate my pods in a namespace but still allow them to access the internet, since I'm using Azure MFA for authentication.
This is what I tried, but I can't seem to get it working. Ingress works as expected, but these policies block all egress.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: grafana-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: grafana
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: nginx-ingress
Can anybody tell me how to make the above configuration work so that internet traffic is allowed but traffic to other pods is blocked?
Try adding a default deny all network policy on the namespace:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
Then adding an allow Internet policy after:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-internet-only
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 192.168.0.0/16
        - 172.16.0.0/12
This will block all traffic except internet outbound.
In the allow-internet-only policy, there is an exception for all private IP ranges, which prevents pod-to-pod communication.
You will also have to allow egress to CoreDNS in kube-system if you require DNS lookups, as the default-deny-all policy will block DNS queries.
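A sketch of that DNS exception (k8s-app: kube-dns is the usual label for the DNS pods, including on AKS, but verify it on your cluster; the policy name is made up):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  # Allow lookups to the DNS pods in any namespace, on port 53 only.
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP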
Kubernetes allows all traffic to a pod unless a network policy selects it.
Once a pod is selected by a Network Policy, only the traffic allowed by some network policy is permitted, and everything else is denied.
By default, pods are non-isolated; they accept traffic from any source.
Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. (Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.)
https://kubernetes.io/docs/concepts/services-networking/network-policies/#isolated-and-non-isolated-pods
So you will need to specify the Egress rules as well in order for it to work the way you want :)
Can you try it like this?
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
It should allow egress to all destinations. But if the destination is a pod, the connection should still be blocked, because the same NetworkPolicy selects all pods for ingress while whitelisting no ingress traffic.

How to stop all external traffic and allow only inter pod network call within namespace using network policy?

I'm setting up a namespace in my Kubernetes cluster to deny any outgoing network calls like http://company.com but to allow inter-pod communication within my namespace, like http://my-nginx, where my-nginx is a Kubernetes service pointing to my nginx pod.
How can I achieve this using a network policy? The network policy below blocks all outgoing network calls:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-egress
  namespace: sample
spec:
  policyTypes:
  - Egress
  podSelector: {}
How do I whitelist only the inter-pod calls?
Using Network Policies you can whitelist all pods in a namespace:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-egress-to-sample
  namespace: sample
spec:
  policyTypes:
  - Egress
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: sample
As you probably already know, pods with at least one Network Policy applied to them can only communicate to targets allowed by any Network Policy applied to them.
Names don't actually matter. Selectors (namespaceSelector and podSelector in this case) only care about labels. Labels are key-value pairs associated with resources. The above example assumes the namespace called sample has a label of name=sample.
If you want to whitelist the namespace called my-nginx, first you need to add a label to your namespace (if it doesn't already have one). name is a good key IMO, and the value can be the name of the service, my-nginx in this particular case. Then, just using this in your Network Policies will allow you to target the namespace (either for ingress or egress):
- namespaceSelector:
    matchLabels:
      name: my-nginx
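For reference, attaching such a label with plain kubectl (using the name=sample example mentioned above):
kubectl label namespace sample name=sample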
If you want to allow communication to a service called my-nginx, the service's name doesn't really matter. You need to select the target pods using podSelector, which should be done with the same label that the service uses to know which pods belong to it. Check your service to get the label, and use the key: value in the Network Policy. For example, for a key=value pair of role=nginx you should use
- podSelector:
    matchLabels:
      role: nginx
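One way to read the label selector a Service actually uses (standard kubectl; my-nginx is the service from the question):
kubectl get service my-nginx -o jsonpath='{.spec.selector}'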
This can be done using the following combination of network policies:
# The same as yours
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-egress
  namespace: sample
spec:
  policyTypes:
  - Egress
  podSelector: {}
---
# allows connections to all pods in your namespace from all pods in your namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-namespace-egress
  namespace: sample
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels: {}
---
# allows connections from all pods in your namespace to all pods in your namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-namespace-internal
  namespace: sample
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels: {}
assuming your network policy implementation implements the full spec.
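A quick way to test the combination from a throwaway pod (busybox is just an example image; note that resolving the service name still requires DNS, so if the lookup fails you would additionally need a kube-dns egress exception like the ones shown in earlier answers):
# Inter-pod call within the namespace: should succeed
kubectl run nettest --rm -it --image=busybox -n sample -- wget -qO- -T 5 http://my-nginx
# External call: should now time out
kubectl run nettest --rm -it --image=busybox -n sample -- wget -qO- -T 5 http://company.com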
I am not sure if you can do it using Kubernetes NetworkPolicy alone, but you can achieve this with Istio-enabled pods.
Note: First make sure that Istio is installed on your cluster; see the Istio installation docs.
See this quote from Istio's documentation about egress traffic:
By default, Istio-enabled services are unable to access URLs outside of the cluster because the pod uses iptables to transparently redirect all outbound traffic to the sidecar proxy, which only handles intra-cluster destinations.
Also, you can whitelist domains outside of the cluster by adding a ServiceEntry and VirtualService to your cluster; see the "Configuring the external services" example in the Istio documentation.
I hope it can be useful for you.