Kubernetes Network Policy Egress to pod via service - kubernetes

I have some pods that talk to each other via Kubernetes Services rather than pod IPs, and now I want to lock things down using Network Policies, but I can't seem to get the egress right.
In this scenario I have two pods:
sleeper, the client
frontend, the server behind a Service called frontend-svc, which forwards port 8080 to the pod's port 80
Both running in the same namespace: ns
In the sleeper pod I simply wget a ping endpoint in the frontend pod:
wget -qO- http://frontend-svc.ns:8080/api/Ping
Here's my egress policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-to-frontend-egress
  namespace: ns
spec:
  podSelector:
    matchLabels:
      app: sleeper
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: frontend
As you can see, nothing special; no ports, no namespace selector, just a single label selector for each pod.
Unfortunately, this breaks my ping:
wget: bad address 'frontend-svc.ns:8080'
However, if I retrieve the pod's IP (using kubectl get po -o wide) and talk to the frontend directly, I do get a response:
wget -qO- 10.x.x.x:80/api/Ping (x obviously replaced with values)
My intuition was that the pod's egress to kube-dns was required, so I added another egress policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-kube-system
  namespace: ns
spec:
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: "kube-system"
      podSelector: {}
  policyTypes:
  - Egress
For now I don't want to bother with the exact pod and port, so I allow all pods from the ns namespace to egress to kube-system pods.
However, this didn't help at all. Even worse: it also breaks the communication by pod IP.
I'm running on Azure Kubernetes with Calico Network Policies.
Any clue what the issue might be? I'm out of ideas.
Edit: after getting it up and running, here's a more locked-down version of the DNS egress policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-all-pods-dns-egress
  namespace: ns
spec:
  policyTypes:
  - Egress
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          # This label was introduced in version 1.19; if you are running a lower version, label the kube-system namespace manually.
          kubernetes.io/metadata.name: "kube-system"
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
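To verify DNS egress after applying it, a quick check from the client pod should resolve again. A sketch, assuming the sleeper image ships BusyBox-style nslookup/wget, with <sleeper-pod> as a placeholder:
# Hypothetical pod name; substitute the real one from kubectl get po.
kubectl exec -n ns <sleeper-pod> -- nslookup frontend-svc.ns.svc.cluster.local
kubectl exec -n ns <sleeper-pod> -- wget -qO- http://frontend-svc.ns:8080/api/Ping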

I recreated your deployment and the final NetworkPolicy (egress to kube-system for DNS resolution) solves it for me. Make sure that after applying the last network policy you test the connection on the Service's port (8080), which you had changed to the pod's port (80) in your wget command when accessing the pod directly.
Since network policies are a drag to manage, my team and I wanted to automate their creation and open-sourced a tool that you might be interested in: https://docs.otterize.com/quick-tutorials/k8s-network-policies.
It's a way to manage network policies where you declare your access requirements in a separate, human-readable resource and the labeling is done for you on-the-fly.
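As a rough illustration (hypothetical; the exact apiVersion and field names come from their docs and may have changed, so verify there), the sleeper-to-frontend access from the question above might be declared as:
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: sleeper
  namespace: ns
spec:
  service:
    name: sleeper
  calls:
    - name: frontend
The operator then generates and maintains the matching network policies and labels for you.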

Related

Ingress-nginx: how to set externalIPs of nginx ingress to 1 external IP only

I installed nginx ingress with the YAML file
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
When deployed, I can see that the endpoints/externalIPs by default are all the IPs of my nodes,
but I only want 1 externalIP to be accessible for my applications.
I tried bind-address (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#bind-address) in a configuration file and applied it, but it doesn't work. My ConfigMap file:
apiVersion: v1
data:
  bind-address: "192.168.30.16"
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
I tried kubectl edit svc/ingress-nginx-controller -n ingress-nginx to edit the svc, adding externalIPs, but it still doesn't work.
The only thing the nginx ingress documentation mentions is https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#external-ips, but when I tried editing the svc, it was set to a single IP after my change, and later the IPs were re-added. It seems like there is an automatic external-IP update mechanism in ingress-nginx?
Is there any way to set the nginx ingress external IP to only 1 of the node IPs? I'm running out of options for googling this. Hope someone can help me.
but I only want 1 external IP to be accessible for my applications
If you wish to "control" who can access your service(s), and from which IP/subnet/namespace etc., you should use a NetworkPolicy:
https://kubernetes.io/docs/concepts/services-networking/network-policies/
The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:
Other pods that are allowed (exception: a pod cannot block access to itself)
Namespaces that are allowed.
IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)
When defining a pod- or namespace-based NetworkPolicy, you use a selector to specify what traffic is allowed to and from the Pod(s) that match the selector.
Meanwhile, when IP-based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
Depending on whether there is a LoadBalancer implementation for your cluster, that might work as intended.
If you want to use a specific node, use type: NodePort:
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
It might then also be useful to use a nodeSelector so you can control which node the nginx controller gets scheduled to, for DNS reasons.
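A minimal sketch of that setup (the Service/Deployment names are assumed from the ingress-nginx cloud manifest, and <your-node> is a placeholder for the node's hostname):
# Switch the controller Service from LoadBalancer to NodePort.
kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{"spec":{"type":"NodePort"}}'
# Pin the controller to one node via the well-known hostname label.
kubectl patch deployment ingress-nginx-controller -n ingress-nginx -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"<your-node>"}}}}}'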

Isolate k8s pods network between namespaces

I need to isolate k8s pods network between namespaces.
A pod-1 running in namespace ns-1 must not be able to reach the network of a pod-2 in namespace ns-2.
The purpose of this is to create a sandbox between namespaces and prevent network communication between specific pods based on their labels.
I was trying the NetworkPolicy to do this, but my knowledge about k8s is a little "crude".
Is this possible? Can someone provide an example?
I'm trying to block all intranet communication and allow internet access using this:
spec:
  egress:
  - ports:
    - port: 53
      protocol: UDP
    to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 192.168.0.0/16
        - 172.16.0.0/12
        - 172.40.0.0/16
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
  podSelector:
    matchExpressions:
    - key: camel.apache.org/integration
      operator: Exists
  policyTypes:
  - Egress
But when I access something like google.com, DNS resolves correctly, but the connection is not made and it results in a timeout.
The policy intention is to:
block all private network access
allow only the kube-dns nameserver resolver on port 53
but allow all access to internet
What am I doing wrong?
The settings of Network Policies are very flexible, and you can configure them in different ways. In your case you have to create 2 policies for your cluster: one namespace network policy for your production, and a second one for your sandbox. Of course, before you start modifying your network, be sure that you have chosen, installed, and configured a network provider.
Here is an example NetworkPolicy .yaml file to isolate your namespace:
# You can create a "default" policy for a namespace which prevents all ingress
# AND egress traffic by creating the following NetworkPolicy in that namespace.
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: YourSandbox
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
After that you can create your pod in this namespace and it will be isolated. Just add the namespace name to your config file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-c
  namespace: YourSandbox
And in this example we allow connections in and out, to and from a specific namespace and service:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-service-c
  namespace: YourSandbox
spec:
  podSelector:
    matchLabels:
      app: YourSandboxService
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          env: YourProductionNamespace
    - podSelector:
        matchLabels:
          app: NameOfYourService
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: YourProductionNamespace
    - podSelector:
        matchLabels:
          app: NameOfYourService
Use a network policy ipBlock like this to configure egress that blocks the default local private network IPs while leaving the rest of the internet open:
egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
      except:
      - 192.168.40.0/24 # your pool of local private network IPs
If you use a subnet prefix length of /32, you are limiting the scope of the rule to that one IP address only.
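For instance, a hypothetical rule pinned to a single address:
egress:
- to:
  - ipBlock:
      cidr: 192.168.40.10/32 # exactly this one address, nothing else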

Kubernetes Egress call restrict with namespace

I have an application running in K3s and want to implement a network policy based on namespace only.
Let's assume that I currently have three namespaces: A, B, and C. I want to allow egress (external calls to the internet from a pod) for namespace A, while egress calls from the remaining namespaces [B & C] should be blocked/denied. Is this possible with a Kubernetes network policy (and not Calico or Cilium)?
You can define a deny-all egress policy as described in the documentation:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: your-namespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
This policy will be applied to all pods in the namespace because the pod selector is empty and that means (quoting documentation):
An empty podSelector selects all pods in the namespace.
The policy will block all egress traffic because it has Egress as policy type but it doesn't have any egress section.
If you want to allow in-cluster egress, you might add an egress section to the policy, for example:
egress:
- to:
  - namespaceSelector:
      matchLabels:
        networking/namespace: kube-system
    podSelector:
      matchLabels:
        k8s-app: kube-dns
  ports:
  - protocol: TCP
    port: 53
  - protocol: UDP
    port: 53
This allows all traffic from the namespace where you create the network policy to pods labeled with k8s-app: kube-dns in namespace kube-system on port 53 (TCP and UDP).
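Putting it together for your scenario (a sketch assuming your namespaces are literally named B and C): create the deny-all policy in each namespace you want locked down, and create no egress policy at all in namespace A, since pods stay non-isolated for egress until some egress policy selects them.
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: B
spec:
  podSelector: {}
  policyTypes:
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: C
spec:
  podSelector: {}
  policyTypes:
  - Egress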

Network policy to restrict communication of pods within namespace and port

Namespace 1: arango
Namespace 2: apache - 8080
Criteria to achieve:
The policy should not allow pods which are not listening on port 8080
The policy should not allow pods from any other namespace except "arango"
Does the following ingress help achieve this? Or is it mandatory to add egress, since there are rules to deny pods from other namespaces and ports other than 8080?
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: arango
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: apache
    ports:
    - protocol: TCP
      port: 8080
Your current config
Your current configuration allows traffic to pods with the label app: arango in the default namespace on port: 8080, from pods with the label app: apache in the default namespace.
It applies to the default namespace because you didn't specify one. If a namespace is not defined, Kubernetes always uses the default namespace.
Questions
or is it mandatory to add egress as there are rules to deny pods from other namespaces and ports except 8080?
It depends on your requirements: whether you want to filter traffic from your pod to the outside, from the outside to your pod, or both. It's well described in the Network Policy Resource documentation.
NetworkPolicy is a namespaced resource, so it applies in the namespace it was created in. If you want to allow other namespaces, you should use a namespaceSelector.
The policyTypes field indicates whether or not the given policy applies to ingress traffic to selected pod, egress traffic from selected pods, or both. If no policyTypes are specified on a NetworkPolicy then by default Ingress will always be set and Egress will be set if the NetworkPolicy has any egress rules.
To sum up, ingress traffic is from outside to your pods and egress is from your pods to outside.
You want to apply two main rules:
The policy should not allow pods which are not listening on port 8080
If you would like to use this only for ingress traffic, it would look like:
ingress:
- from:
  ports:
  - protocol: <protocol>
    port: 8080
The policy should not allow pods from any other namespace except "arango"
Please keep in mind that NetworkPolicy is a namespaced resource, thus it works in the namespace it was created in. The namespace should be specified in metadata.namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy
  namespace: arango
spec:
  ...
Requested Network Policy
I have tested this on my GKE cluster with Network Policy enabled.
In the example below, incoming traffic to pods with the label app: arango in the arango namespace is allowed only if it comes from pods with the label app: apache that were deployed in the arango namespace, and only on port: 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
  namespace: arango
spec:
  podSelector:
    matchLabels:
      app: arango
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: apache
    ports:
    - protocol: TCP
      port: 8080
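As a quick sanity check (a sketch: <arango-pod-ip> is a placeholder and BusyBox's wget is assumed), this should succeed, while the same command from a pod without the app: apache label, or from another namespace, should time out:
kubectl run test --rm -it -n arango --labels=app=apache --image=busybox -- wget -qO- http://<arango-pod-ip>:8080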
Useful links:
Guide to Kubernetes Ingress Network Policies
Get started with Kubernetes network policy
If this answer didn't solve your issue, please clarify/provide more details on how it should work and I will edit the answer.

How to create a network policy that matches Kubernetes API

In our EKS Kubernetes cluster we have a general Calico network policy to disallow all traffic. Then we add network policies to allow the traffic we need.
One of our pods needs to talk to the Kubernetes API, but I can't seem to match that traffic with anything other than very broad ipBlock selectors. Is there any other way to do it?
This currently works but gives too broad access:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector:
    matchLabels:
      run: my-test-pod
  policyTypes:
  - Egress
  egress:
  - to: # To access the actual kubernetes API
    - ipBlock:
        cidr: 192.168.0.0/16
    ports:
    - protocol: TCP
      port: 443
In AWS EKS I can't see the control plane pods, but in my RPI cluster I can. In the RPI cluster the API pods have the labels "component=kube-apiserver,tier=control-plane", so I also tried using a podSelector with those labels, but it does not match in either EKS or the RPI cluster:
- to:
  - namespaceSelector:
      matchLabels:
        name: kube-system
  - podSelector:
      matchLabels:
        component: kube-apiserver
Any help would be appreciated.
What if you:
find the API server by running kubectl cluster-info
look at something like
Kubernetes master is running at ... let's say, from the example, https://EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com
translate https://EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com to an IP address; let's say it would be a.b.c.d
And finally use a.b.c.d/32 inside the NetworkPolicy, e.g.:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: a.b.c.d/32
    ports:
    - protocol: TCP
      port: 443
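For the translation step, a quick sketch (assuming dig is available locally; the hostname is the one from the example above):
# Resolve the API endpoint hostname reported by kubectl cluster-info.
dig +short EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com
# Alternatively, the kubernetes Service in the default namespace lists the API endpoint IPs.
kubectl get endpoints kubernetes -n default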
Please correct me if I understood something wrong.