Isolate k8s pod network between namespaces - kubernetes

I need to isolate k8s pod networks between namespaces.
A pod-1 running in namespace ns-1 must not be able to reach the network of a pod-2 in namespace ns-2.
The purpose is to create a sandbox between namespaces and prevent network communication between specific pods based on their labels.
I was trying NetworkPolicy to do this, but my knowledge about k8s is a little "crude".
Is this possible? Can someone provide an example?
I'm trying to block all intranet communication and allow internet access using this:
spec:
  egress:
  - ports:
    - port: 53
      protocol: UDP
    to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 192.168.0.0/16
        - 172.16.0.0/12
        - 172.40.0.0/16
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
  podSelector:
    matchExpressions:
    - key: camel.apache.org/integration
      operator: Exists
  policyTypes:
  - Egress
But when I access something like google.com, the DNS resolves correctly but the connection times out.
The policy intention is to:
block all private network access
allow only the kube-dns nameserver resolver on port 53
but allow all access to the internet
What am I doing wrong?

The settings of Network Policies are very flexible, and you can configure them in different ways. In your case, you have to create two policies for your cluster: one namespace network policy for your production namespace, and a second one for your sandbox. Of course, before you start modifying your network, make sure you have chosen, installed, and configured a network provider that enforces NetworkPolicy.
Here is an example NetworkPolicy .yaml file to isolate your namespace:
# You can create a "default" policy for a namespace which prevents all ingress
# AND egress traffic by creating the following NetworkPolicy in that namespace.
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: YourSandbox
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
And after that, any pod you create in this namespace will be isolated. Just add the namespace name to your config file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-c
  namespace: YourSandbox
And in this example, we add access to and from a specific namespace and service:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-service-c
  namespace: YourSandbox
spec:
  podSelector:
    matchLabels:
      app: YourSandboxService
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          env: YourProductionNamespace
    - podSelector:
        matchLabels:
          app: NameOfYourService
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: YourProductionNamespace
    - podSelector:
        matchLabels:
          app: NameOfYourService
Use this NetworkPolicy ipBlock to configure egress so that the default local private network IPs are blocked while the rest of the internet stays open:
egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
      except:
      - 192.168.40.0/24 # your pool of local private network IPs
If you use a subnet prefix length of /32, that limits the scope of the rule to that one IP address only.
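For the policy in the question specifically: its single egress rule combines the ports: (53/UDP) clause with both to: clauses, so only port 53/UDP is allowed anywhere, which is why DNS resolves but all other connections time out. A sketch of the same policy split into two egress rules (selectors copied from the question), which should match the stated intention:
spec:
  podSelector:
    matchExpressions:
    - key: camel.apache.org/integration
      operator: Exists
  policyTypes:
  - Egress
  egress:
  # Rule 1 - DNS: allow UDP 53 to kube-dns pods in any namespace
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
  # Rule 2 - internet: allow any port to non-private addresses
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 192.168.0.0/16
        - 172.16.0.0/12
        - 172.40.0.0/16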

Related

Kubernetes NetworkPolicy is not overriding existing allow all egress policy

There are already two existing NetworkPolicies present, one of which allows all outbound traffic for all pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-default
  namespace: sample-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - podSelector: {}
  ingress:
  - from:
    - podSelector: {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress
  namespace: sample-namespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector: {}
    - ipBlock:
        cidr: 0.0.0.0/0
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
and I want to block all outbound traffic for a certain pod with the label app: localstack-server, so I created one more NetworkPolicy for it, but it's not getting applied to that pod:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: psp-localstack-default-deny-egress
  namespace: sample-namespace
spec:
  podSelector:
    matchLabels:
      app: localstack-server
  policyTypes:
  - Egress
I'm able to run curl www.example.com inside that pod and it works fine, which it should not.
NetworkPolicies are additive, and they only have allow rules. So for each pod (as selected by podSelector), the traffic that will be allowed is the sum of all network policies that select this pod. In your case, that's all traffic, since you have a policy that allows all traffic for an empty selector (all pods).
To solve your problem, you should apply the allow-all policy to a label selector that matches all pods except app: localstack-server. So, add a label like netpol-all-allowed: "true" to every other pod, and don't add it to localstack-server.
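A minimal sketch of that approach, re-scoping the existing allow-default policy (netpol-all-allowed is the example label name from above):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-default
  namespace: sample-namespace
spec:
  podSelector:
    matchLabels:
      netpol-all-allowed: "true" # label every pod except localstack-server
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - podSelector: {}
  ingress:
  - from:
    - podSelector: {}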
I think your first YAML allowed all egress, because in the Kubernetes Network Policy documentation the following network policy is given with this explanation:
With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
Earlier, the docs say:
By default, a pod is non-isolated for egress; all outbound connections are allowed.
So I would suggest leaving out the egress part of the allow-default rule; then the denying of egress for that pod should work.
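A sketch of the trimmed allow-default policy under that suggestion (Egress removed from policyTypes and the egress rule dropped; note the separate allow-egress policy would also still apply to the pod unless its selector is narrowed, as the previous answer describes):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-default
  namespace: sample-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}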

How to create a network policy that matches Kubernetes API

In our EKS Kubernetes cluster we have a general Calico network policy to disallow all traffic. Then we add network policies to allow specific traffic.
One of our pods needs to talk to the Kubernetes API, but I can't seem to match that traffic with anything other than very broad ipBlock selectors. Is there any other way to do it?
This currently works but gives too broad access:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector:
    matchLabels:
      run: my-test-pod
  policyTypes:
  - Egress
  egress:
  - to: # To access the actual kubernetes API
    - ipBlock:
        cidr: 192.168.0.0/16
    ports:
    - protocol: TCP
      port: 443
In AWS EKS I can't see the control plane pods, but in my RPI cluster I can. In the RPI cluster, the API pods have the labels "component=kube-apiserver,tier=control-plane", so I also tried using a podSelector with those labels, but it does not match in either EKS or the RPI cluster:
- to:
  - namespaceSelector:
      matchLabels:
        name: kube-system
  - podSelector:
      matchLabels:
        component: kube-apiserver
Any help would be appreciated.
What if you:
find the API server by running kubectl cluster-info
look into something like
Kubernetes master is running at ... let's say, from the example, https://EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com
translate that https://EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com to an IP address; let's say it would be a.b.c.d
And finally use a.b.c.d/32 inside the NetworkPolicy, e.g.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: a.b.c.d/32
    ports:
    - protocol: TCP
      port: 443
Please correct me if I understood something wrong.

unable to connect internet from pod after applying egress network policy in GKE

I have a pod (kubectl run app1 --image tomcat:7.0-slim) in GKE. After applying the egress network policy, apt-get update is unable to connect to the internet.
This is the policy applied:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app2-np
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: app2
  policyTypes:
  - Egress
  - Ingress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: app3
    ports:
    - port: 8080
  - ports:
    - port: 80
    - port: 53
    - port: 443
Here I am able to connect to port 8080 of the app3 pod in the same namespace. Please help me correct my netpol.
It happens because you are defining the egress rule to app3 only on port 8080, which blocks internet connection attempts. (Note also that a bare port: 53 entry defaults to protocol: TCP, so UDP DNS queries are still blocked; DNS rules usually need protocol: UDP explicitly.)
If you need internet access from some of your pods, you can label them and create a NetworkPolicy that permits internet access.
In the example below, pods with the label networking/allow-internet-egress: "true" will be able to reach the internet:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internet-egress
spec:
  podSelector:
    matchLabels:
      networking/allow-internet-egress: "true"
  egress:
  - {}
  policyTypes:
  - Egress
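To use it, label the pods that should get internet access, for example with kubectl label pod app1 networking/allow-internet-egress=true (app1 here being the pod from the question).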
Another option is to allow by IP blocks. In the example below, a rule allows internet access (0.0.0.0/0) except for the IP block 10.0.0.0/8:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-internet-only
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
Finally, this site lets you visualize your NetworkPolicies, which is a good way to understand their exact behaviour.
References:
https://www.stackrox.com/post/2020/01/kubernetes-egress-network-policies/
Kubernetes networkpolicy allow external traffic to internet only

kubernetes networkpolicy allow external traffic to internet only

I'm trying to implement a network policy in my Kubernetes cluster to isolate my pods in a namespace but still allow them to access the internet, since I'm using Azure MFA for authentication.
This is what I tried, but I can't seem to get it working. Ingress is working as expected, but these policies block all egress.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: grafana-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: grafana
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: nginx-ingress
Can anybody tell me how to make the above configuration work so that internet traffic is allowed but traffic to other pods is blocked?
Try adding a default deny all network policy on the namespace:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
Then add an allow-internet policy after it:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-internet-only
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 192.168.0.0/16
        - 172.16.0.0/20
This will block all traffic except internet outbound.
In the allow-internet-only policy, there is an exception for all private IPs, which prevents pod-to-pod communication.
You will also have to allow egress to CoreDNS in kube-system if you require DNS lookups, as the default-deny-all policy will block DNS queries.
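A minimal sketch of such a DNS egress rule, assuming the cluster DNS pods carry the conventional k8s-app: kube-dns label:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP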
Kubernetes will allow all traffic unless there is a network policy.
If a network policy selects a pod, only the traffic allowed by some policy is permitted for that pod, and everything else is denied.
By default, pods are non-isolated; they accept traffic from any source.
Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. (Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.)
https://kubernetes.io/docs/concepts/services-networking/network-policies/#isolated-and-non-isolated-pods
So you will need to specify the Egress rules as well in order for it to work the way you want :)
Can you try like this?
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
It should allow egress to all destinations. But if the destination is a pod, it should still be blocked by the missing ingress rules of this same NetworkPolicy.

Whitelist "kube-system" namespace using NetworkPolicy

I have a multi-tenant cluster, where multi-tenancy is achieved via namespaces. Every tenant has their own namespace. Pods of one tenant cannot talk to pods of other tenants. However, some pods in every tenant have to expose a service to the internet, using an Ingress.
This is how far I got (I am using Calico):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant1-isolate-namespace
  namespace: tenant1
spec:
  policyTypes:
  - Ingress
  podSelector: {} # Select all pods in this namespace
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant1 # whitelist current namespace
Deployed to each namespace (tenant1, tenant2, ...), this limits communication to pods within their own namespace. However, it also prevents pods in the kube-system namespace from talking to pods in this namespace.
The kube-system namespace does not have any labels by default, so I cannot specifically whitelist this namespace.
I found a (dirty) workaround for this issue by manually giving it a label:
kubectl label namespace/kube-system permission=talk-to-all
And adding the whitelist rule to the networkpolicy:
...
- from:
  - namespaceSelector:
      matchLabels:
        permission: talk-to-all # allow namespaces that have the "talk-to-all" privilege
Is there a better solution, without manually giving kube-system a label?
Edit: I tried to additionally add an "OR" rule to specifically allow communication from pods that have the label "app=nginx-ingress", but without luck:
- from:
  ...
  - podSelector:
      matchLabels:
        app: nginx-ingress # Allow pods that have the app=nginx-ingress label
apiVersion: networking.k8s.io/v1
The namespaceSelector is designed to match namespaces by labels only. There is no way to select a namespace by name.
The podSelector can only select pods in the same namespace as the NetworkPolicy object. For objects located in different namespaces, only selection of the whole namespace is possible.
Here is an example of Kubernetes Network Policy implementation:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
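One subtlety of the from:/to: lists that is easy to miss (general NetworkPolicy semantics, shown as a sketch with the labels from the example above): whether namespaceSelector and podSelector appear as one element or two changes the meaning entirely.
ingress:
- from:
  # One element combining both selectors (AND): pods labelled role=frontend
  # in namespaces labelled project=myproject.
  - namespaceSelector:
      matchLabels:
        project: myproject
    podSelector:
      matchLabels:
        role: frontend
- from:
  # Two separate elements (OR): any pod in a project=myproject namespace,
  # or any role=frontend pod in the policy's own namespace.
  - namespaceSelector:
      matchLabels:
        project: myproject
  - podSelector:
      matchLabels:
        role: frontend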
See the Kubernetes documentation for a good explanation of the whole concept of Network Policy.
apiVersion: projectcalico.org/v3
The Calico API gives you more options for writing NetworkPolicy rules, so in some cases you can achieve your goal with less effort.
For example, using Calico implementation of Network Policy you can:
set action for the rule (Allow, Deny, Log, Pass),
use negative matching (protocol, notProtocol, selector, notSelector),
apply more complex label selectors (has(k), k not in { 'v1', 'v2' }),
combine selectors with operator &&,
use port range (ports: [8080, "1234:5678", "named-port"]),
match pods in other namespaces.
But still, you can match namespaces only by labels.
Consider reading Calico documentation for the details.
Here is an example of Calico Network Policy implementation:
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-tcp-6379
  namespace: production
spec:
  selector: role == 'database'
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: role == 'frontend'
    destination:
      ports:
      - 6379
  egress:
  - action: Allow
Indeed, tenant1 pods will need access to kube-dns in the kube-system namespace specifically.
One approach that does not require the kube-system namespace to be labelled is the following policy.
Note that with this approach kube-dns could be in any namespace, so it may not be suitable for you.
---
# Default deny all ingress & egress policy, except allow kube-dns.
# All traffic except this must be explicitly allowed.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all-except-kube-dns
  namespace: tenant1
spec:
  podSelector: {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          k8s-app: kube-dns
  - ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  policyTypes:
  - Ingress
  - Egress
Then, you would also need an "allow all within namespace" policy, as follows:
---
# Allow intra-namespace traffic for development purposes only.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-intra-namespace
  namespace: tenant1
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
  egress:
  - to:
    - podSelector: {}
  policyTypes:
  - Ingress
  - Egress
Lastly, you will want to add specific policies such as an ingress rule.
It would be better to replace the allow-intra-namespace policy with specific rules to suit individual pods, which your tenant1 could do.
These have been adapted from this website: https://github.com/ahmetb/kubernetes-network-policy-recipes
I'm on k3os with the default flannel CNI. It has a default label on the kube-system namespace:
$ kubectl describe ns kube-system
Name: kube-system
Labels: kubernetes.io/metadata.name=kube-system
Annotations: <none>
Status: Active
This works for me:
- namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: kube-system
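As a side note, this label is not k3os-specific: on current Kubernetes versions (v1.21 and later) every namespace automatically gets the immutable kubernetes.io/metadata.name label, so it is a reliable way to select a namespace by name.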
Here is my full YAML, which for egress allows all external traffic plus kube-dns in the kube-system namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
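One caveat (my addition, not from the original answer): this opens DNS on UDP only. If your workloads also need DNS over TCP (used for large responses), extend the ports list of the first rule:
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP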
If I understood correctly, you are using Calico. Just use their example of how to implement a default deny without breaking kube-dns communication, found in the Calico documentation:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-app-policy
spec:
  namespaceSelector: has(projectcalico.org/name) && projectcalico.org/name not in {"kube-system", "calico-system"}
  types:
  - Ingress
  - Egress
  egress:
  # allow all namespaces to communicate to DNS pods
  - action: Allow
    protocol: UDP
    destination:
      selector: 'k8s-app == "kube-dns"'
      ports:
      - 53
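A brief usage note (an assumption about your setup, not stated in the answer): GlobalNetworkPolicy with apiVersion: projectcalico.org/v3 is a Calico-specific resource, so it is applied with calicoctl apply -f, or with plain kubectl only if the Calico API server (or the crd.projectcalico.org/v1 CRDs) is installed.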