Kubernetes: restrict egress calls by namespace

I have an application running in K3s and want to implement a network policy based only on namespace.
Let's assume I currently have three namespaces: A, B and C. I want to allow egress (external calls to the internet from pods) for namespace A, and egress calls from the remaining namespaces (B and C) should be blocked/denied. Is this possible with Kubernetes network policy alone (not Calico or Cilium)?

You can define a deny-all egress policy as described in the documentation:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: your-namespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
This policy will be applied to all pods in the namespace because the pod selector is empty and that means (quoting documentation):
An empty podSelector selects all pods in the namespace.
The policy will block all egress traffic because it has Egress as policy type but it doesn't have any egress section.
If you want to allow in-cluster egress you might want to add an egress section in the policy, like for example:
egress:
- to:
  - namespaceSelector:
      matchLabels:
        networking/namespace: kube-system
    podSelector:
      matchLabels:
        k8s-app: kube-dns
  ports:
  - protocol: TCP
    port: 53
  - protocol: UDP
    port: 53
This allows all traffic from the namespace where you create the network policy to pods labeled with k8s-app: kube-dns in namespace kube-system on port 53 (TCP and UDP).
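To map this back to the original question: create the deny-all egress policy (plus, if needed, the DNS exception) in each namespace that should be blocked, i.e. B and C, and create no egress policy in namespace A, which then stays non-isolated and keeps full internet access. A sketch, assuming the namespaces are literally named namespace-b and namespace-c:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: namespace-b   # hypothetical name for namespace B
spec:
  podSelector: {}          # all pods in the namespace
  policyTypes:
  - Egress                 # no egress rules listed -> all egress denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: namespace-c   # hypothetical name for namespace C
spec:
  podSelector: {}
  policyTypes:
  - Egress
```

Namespace A needs nothing: by default, a pod not selected by any egress policy is non-isolated for egress.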

Kubernetes NetworkPolicy is not overriding existing allow all egress policy

There are already two existing Network Policies present, one of which allows all outbound traffic for all pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-default
  namespace: sample-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - podSelector: {}
  ingress:
  - from:
    - podSelector: {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress
  namespace: sample-namespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector: {}
    - ipBlock:
        cidr: 0.0.0.0/0
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
and I want to block all outbound traffic for a certain pod with the label app: localstack-server, so I created one more Network Policy for it, but it's not being applied to that Pod:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: psp-localstack-default-deny-egress
  namespace: sample-namespace
spec:
  podSelector:
    matchLabels:
      app: localstack-server
  policyTypes:
  - Egress
I'm able to run curl www.example.com inside that pod and it works, which it shouldn't.
NetworkPolicies are additive, and they only have allow rules. So for each pod (as selected by podSelector), the traffic that will be allowed is the sum of all network policies that select that pod. In your case, that's all traffic, since you have a policy that allows all traffic with an empty selector (all pods).
To solve your problem, you should apply the allow-all policy to a label selector that matches all pods except app: localstack-server. So, add a label like netpol-all-allowed: "true" to every other pod, and don't add it to localstack-server.
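A sketch of what that relabeled allow-all policy could look like (the netpol-all-allowed label name is just an example, mirroring the original allow-default rules):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-default
  namespace: sample-namespace
spec:
  # Only pods carrying this opt-in label get the allow-all rules;
  # the localstack-server pods simply do not carry it.
  podSelector:
    matchLabels:
      netpol-all-allowed: "true"
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - podSelector: {}
  ingress:
  - from:
    - podSelector: {}
```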
I think in your first YAML you allowed all egress, because the Kubernetes Network Policy documentation gives the following network policy with this explanation:
With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
Earlier, the docs say:
By default, a pod is non-isolated for egress; all outbound connections are allowed.
So I would suggest leaving out the allow-default rule for egress; then the denial of egress for that pod should work.
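Following that suggestion, a sketch of the allow-default policy with its egress part removed, so it only keeps ingress open and no longer overrides the per-pod egress deny:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-default
  namespace: sample-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress        # Egress removed: pods fall back to default (non-isolated) egress
  ingress:
  - from:
    - podSelector: {}
```

With this in place, only the psp-localstack-default-deny-egress policy selects the localstack-server pod for egress, so its egress is denied.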

Kubernetes Network Policy Egress to pod via service

I have some pods running that talk to each other via Kubernetes Services, not via pod IPs, and now I want to lock things down using Network Policies, but I can't seem to get the egress right.
In this scenario I have two pods:
sleeper, the client
frontend, the server behind a Service called frontend-svc, which forwards port 8080 to the pod's port 80
Both running in the same namespace: ns
In the sleeper pod I simply wget a ping endpoint in the frontend pod:
wget -qO- http://frontend-svc.ns:8080/api/Ping
Here's my egress policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-to-frontend-egress
  namespace: ns
spec:
  podSelector:
    matchLabels:
      app: sleeper
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: frontend
As you can see, nothing special; no ports, no namespace selector, just a single label selector for each pod.
Unfortunately, this breaks my ping:
wget: bad address 'frontend-svc.ns:8080'
However, if I retrieve the pod's IP (using kubectl get po -o wide) and talk to the frontend directly, I do get a response:
wget -qO- 10.x.x.x:80/api/Ping (x replaced with the actual values)
My intuition was that egress from the pod to kube-dns was also required, so I added another egress policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-kube-system
  namespace: ns
spec:
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: "kube-system"
      podSelector: {}
  policyTypes:
  - Egress
For now I don't want to bother with the exact pod and port, so I allow all pods from the ns namespace to egress to kube-system pods.
However, this didn't help at all. Even worse: it also broke the communication by pod IP.
I'm running on Azure Kubernetes with Calico Network Policies.
Any clue what might be the issue, because I'm out of ideas.
After getting it up and running, here's a more locked-down version of the DNS egress policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-all-pods-dns-egress
  namespace: ns
spec:
  policyTypes:
  - Egress
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          # This label was introduced in version 1.19; if you are running a lower version, label the kube-system namespace manually.
          kubernetes.io/metadata.name: "kube-system"
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
I recreated your deployment, and the final network policy (egress to kube-system for DNS resolution) solves it for me. Make sure that after applying the last network policy you're testing the connection on the Service's port (8080), which you changed to the pod's port (80) in your wget command when accessing the pod directly.
Since network policies are a drag to manage, my team and I wanted to automate their creation and open-sourced a tool you might be interested in: https://docs.otterize.com/quick-tutorials/k8s-network-policies.
It's a way to manage network policies where you declare your access requirements in a separate, human-readable resource and the labeling is done for you on-the-fly.

Ingress-nginx: how to set the externalIPs of the nginx ingress to a single external IP

I installed nginx ingress with the YAML file:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
When deployed, I can see that the endpoints/externalIPs are by default all the IPs of my nodes,
but I only want 1 external IP to be accessible for my applications.
I tried bind-address (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#bind-address) in a configuration file and applied it, but it doesn't work. My ConfigMap file:
apiVersion: v1
data:
  bind-address: "192.168.30.16"
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
I tried kubectl edit svc/ingress-nginx-controller -n ingress-nginx to edit the Service, adding externalIPs, but it still doesn't work.
The only thing the nginx ingress documentation mentions is https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#external-ips. I tried editing the Service; after my change it was set to a single IP, but later the IPs were re-added. It seems like there is an automatic external-IP update mechanism in ingress-nginx?
Is there any way to set the nginx ingress external IPs to only one of the node IPs? I'm running out of options from googling this. Hope someone can help me.
but I only want 1 external IP to be accessible for my applications
If you wish to control who can access your service(s), and from which IP/subnet/namespace etc., you should use a NetworkPolicy:
https://kubernetes.io/docs/concepts/services-networking/network-policies/
The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:
Other pods that are allowed (exception: a pod cannot block access to itself)
Namespaces that are allowed.
IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)
When defining a pod- or namespace-based NetworkPolicy, you use a selector to specify what traffic is allowed to and from the Pod(s) that match the selector.
Meanwhile, when IP-based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
Depending on whether there is a LoadBalancer implementation for your cluster, that might work as intended.
If you want to use a specific node, use type: NodePort
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
It might then also be useful to use a nodeSelector so you can control which node the nginx controller gets scheduled to, for DNS reasons.
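If you go the nodeSelector route, a sketch of the relevant fragment of the ingress-nginx-controller Deployment spec (the kubernetes.io/hostname value is hypothetical; check your node labels with kubectl get nodes --show-labels):

```yaml
# Fragment of the ingress-nginx-controller Deployment spec
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker-node-1   # hypothetical node name
```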

Network policy to restrict communication of pods within namespace and port

Namespace 1: arango
Namespace 2: apache - 8080
Criteria to achieve:
The policy should not allow pods which are not listening on port 8080
The policy should not allow pods from any namespace other than "arango"
Does the following ingress policy achieve this, or is it mandatory to add egress, since there are rules denying pods from other namespaces and ports other than 8080?
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: arango
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: apache
    ports:
    - protocol: TCP
      port: 8080
Your current config
Your current configuration allows traffic to pods with the label app: arango in the default namespace on port 8080, from pods which have the label app: apache in the default namespace.
It applies to the default namespace because you didn't specify one; if a namespace is not defined, Kubernetes always uses the default namespace.
Questions
or is it mandatory to add egress, since there are rules denying pods from other namespaces and ports other than 8080?
It depends on your requirements: whether you want to filter traffic from your pod to the outside, from the outside to your pod, or both. It's well described in the Network Policy Resource documentation.
NetworkPolicy is a namespaced resource, so it applies in the namespace it was created in. If you want to allow other namespaces, you should use a namespaceSelector.
The policyTypes field indicates whether or not the given policy applies to ingress traffic to selected pod, egress traffic from selected pods, or both. If no policyTypes are specified on a NetworkPolicy then by default Ingress will always be set and Egress will be set if the NetworkPolicy has any egress rules.
To sum up, ingress traffic is from outside to your pods and egress is from your pods to outside.
You want to apply two main rules:
The policy should not allow pods which are not listening on port 8080
If you would like to enforce this only for ingress traffic, it would look like:
ingress:
- ports:
  - protocol: <protocol>
    port: 8080
The policy Should not allow pods from any other namespace except "arango"
Please keep in mind that NetworkPolicy is namespaced resource thus it will work in the Namespace which was created. It should be specify in metadata.namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy
  namespace: arango
spec:
  ...
Requested Network Policy
I have tested this on my GKE cluster with Network Policy enabled.
In the example below, incoming traffic to pods with the label app: arango in the arango namespace is allowed only if it comes from pods with the label app: apache that were deployed in the arango namespace, and only on port 8080 (TCP).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
  namespace: arango
spec:
  podSelector:
    matchLabels:
      app: arango
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: apache
    ports:
    - protocol: TCP
      port: 8080
Useful links:
Guide to Kubernetes Ingress Network Policies
Get started with Kubernetes network policy
If this answer didn't solve your issue, please clarify/provide more details how it should work and I will edit answer.

Kubernetes pod to pod cluster Network Policy

Using k8s network policy or Calico, can I use these tools only for pod-to-pod, in-cluster network policies?
I already have network rules for policies external to the cluster.
For example if I apply this calico rule:
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-b
  namespace: app
spec:
  selector: app == 'a'
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: app == 'b'
    destination:
      ports:
      - 80
In this example I allow traffic coming from app B to app A.
But this will disallow all other ingress traffic going to A.
Would it be possible to apply this rule only to pod-to-pod traffic?
You should read The NetworkPolicy resource, which provides an example NetworkPolicy with both ingress and egress.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
The explanation is as following:
isolates “role=db” pods in the “default” namespace for both ingress and egress traffic (if they weren’t already isolated)
(Ingress rules) allows connections to all pods in the “default” namespace with the label “role=db” on TCP port 6379 from:
any pod in the “default” namespace with the label “role=frontend”
any pod in a namespace with the label “project=myproject”
IP addresses in the ranges 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255 (ie, all of 172.17.0.0/16 except 172.17.1.0/24)
(Egress rules) allows connections from any pod in the “default” namespace with the label “role=db” to CIDR 10.0.0.0/24 on TCP port 5978
See the Declare Network Policy walkthrough for further examples.
So if you use a podSelector, you will be able to select the pods this Network Policy applies to.
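Applied to the Calico example above, a plain Kubernetes NetworkPolicy equivalent might look like the sketch below (assuming pods labeled app: a and app: b in the app namespace). Note that, as with the Calico policy, selecting app: a for ingress isolates it, so any other ingress sources would need their own allow rules:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-b
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: a            # the policy applies only to pods of app A
  policyTypes:
  - Ingress             # egress from app A is left untouched
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: b        # only pods of app B may connect
    ports:
    - protocol: TCP
      port: 80
```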