Kubernetes Network Policy - Allow specific IP - kubernetes

I'm using Kubernetes on IBM cloud.
I want to create a network policy that denies all incoming connections to a pod (which exposes the app on port 3000), but allows incoming connections only from a specific IP (MY_IP).
I wrote this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: <MY_POLICY_NAME>
  namespace: <MY_NAMESPACE>
spec:
  podSelector:
    matchLabels:
      app: <MY_APP>
      env: <MY_ENV>
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: <MY_IP>/24
            except:
              - <MY_IP>/32
      ports:
        - protocol: TCP
          port: 3000
Unfortunately, this is not working because it blocks all the connections.
How can I fix this?

In your policy as it is right now, you are allowing ingress from that <MY_IP>/24 CIDR, except for traffic from <MY_IP>/32 itself. In other words, the except clause excludes exactly the one address you wanted to allow, which is why all traffic from your IP is blocked.
PS: Source IP preservation is disabled by default for Ingress in IBM Cloud Kubernetes Service. Make sure you've enabled it for your Ingress service: https://console.bluemix.net/docs/containers/cs_ingress.html#preserve_source_ip
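A minimal corrected policy, keeping the placeholders from the question: drop the except clause and use a /32 CIDR so that only <MY_IP> itself is allowed on port 3000:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: <MY_POLICY_NAME>
  namespace: <MY_NAMESPACE>
spec:
  podSelector:
    matchLabels:
      app: <MY_APP>
      env: <MY_ENV>
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: <MY_IP>/32   # exactly this one address; no except list needed
      ports:
        - protocol: TCP
          port: 3000
```

Since the pod is now selected by an Ingress policy, everything not explicitly allowed by this rule is denied, so no separate deny-all is required.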

Related

Kubernetes egress doesn't work for active database connection (jdbc)

I would like to deny outgoing connections from existing pods to a specific IP range. I created the following network policy (NP), in which I restricted the IP range of the database server (10.16.0.0/16). The policy only takes effect after the pod is restarted (then jdbc errors appear in the log). If I apply the NetworkPolicy to a running pod, the pod can still communicate with the database.
In the case of another system (ldap), the NP blocks the communication immediately, without having to restart the pod.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: egress-deny-ip
  namespace: <namespace>
spec:
  podSelector:
    matchLabels:
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - <cidr>
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: "kube-system"
        - podSelector:
            matchLabels:
              k8s-app: "kube-dns"
      ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
I assumed that communication would be blocked immediately and errors would appear in the log. I tried blocking all egress from the pod, but it didn't affect the database connection (no errors in the log, only ldap errors). I also tried blocking both ingress and egress for the specific CIDR, but nothing changed.
Has anyone encountered this behavior?

I'm trying to write a network policy with egress to two other pods and to an IP, which is a Windows server

In a GKE cluster I have a pod (hello) in the default namespace, which acts as a client and connects to a server installed on a Windows VM outside the cluster. Once the connection is established between the client pod and the server on the Windows VM, the pod receives transactions from the server. In my policy I have defined both egress and ingress rules. Strangely, even after giving a wrong CIDR, the pod still receives traffic from the Windows VM. The hello app's Service is of type LoadBalancer, which has an external node IP.
The expected result is that with a wrong IP the connection should be denied, but I can still get transactions from the server on the Windows VM, even when I put a wrong IP in the cidr block.
Here is my network policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: hello-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: hello
      namespace: default
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.2.0.6/32 # ip of windows vm present outside of cluster
  egress:
    - to:
        - ipBlock:
            cidr: 10.7.0.3/32 # ip of db present outside of cluster
      ports:
        - port: 5432
    - to:
        - ipBlock:
            cidr: 10.2.0.6/32 # pod to connect to the windows
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - port: 53
          protocol: UDP
    - to:
        - podSelector:
            matchLabels:
              app: activemq
    - to:
        - podSelector:
            matchLabels:
              app: rabbitmq

How to create a network policy that matches Kubernetes API

In our EKS Kubernetes cluster we have a general Calico network policy to disallow all traffic. Then we add network policies to allow the traffic we need.
One of our pods needs to talk to the Kubernetes API but I can't seem to match that traffic with anything else than very broad ipBlock selectors. Is there any other way to do it?
This currently works but gives too broad access:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector:
    matchLabels:
      run: my-test-pod
  policyTypes:
    - Egress
  egress:
    - to: # To access the actual kubernetes API
        - ipBlock:
            cidr: 192.168.0.0/16
      ports:
        - protocol: TCP
          port: 443
In AWS EKS I can't see the control plane pods, but in my RPi cluster I can. In the RPi cluster, the API server pod has the labels component=kube-apiserver,tier=control-plane, so I also tried using a podSelector with those labels, but it does not match in either EKS or the RPi cluster:
- to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    - podSelector:
        matchLabels:
          component: kube-apiserver
Any help would be appreciated.
What if you:
find the API server by running kubectl cluster-info
look at output like
Kubernetes master is running at ... let's say, from the example, https://EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com
translate that https://EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com to an IP address, let's say it would be a.b.c.d
and finally use a.b.c.d/32 inside the NetworkPolicy, e.g.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: a.b.c.d/32
      ports:
        - protocol: TCP
          port: 443
Please correct me if I understood something wrong.
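The lookup steps above can be sketched as shell commands (the hostname is the hypothetical one from the example; the output naturally depends on your cluster). Note that the EKS API endpoint resolves to addresses that can change over time, so a pinned /32 may eventually go stale:

```shell
# Show the API server endpoint for the current kubeconfig context
kubectl cluster-info

# Resolve the endpoint hostname to its current IP address(es)
dig +short EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com
```

Each address returned would need its own a.b.c.d/32 entry in the egress rule.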

Kubernetes pod to pod cluster Network Policy

Using a Kubernetes NetworkPolicy or Calico, can I scope these tools to pod-to-pod traffic only within the cluster?
I already have network rules for traffic from outside the cluster.
For example if I apply this calico rule:
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-b
  namespace: app
spec:
  selector: app == 'a'
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: app == 'b'
      destination:
        ports:
          - 80
In this example I allow traffic coming from app B to app A.
But this also disallows all other ingress traffic to A.
Would it be possible to apply this rule only to pod-to-pod traffic?
You should read The NetworkPolicy resource documentation, which provides an example NetworkPolicy with both Ingress and Egress rules.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
The explanation is as follows:
isolates “role=db” pods in the “default” namespace for both ingress and egress traffic (if they weren’t already isolated)
(Ingress rules) allows connections to all pods in the “default” namespace with the label “role=db” on TCP port 6379 from:
any pod in the “default” namespace with the label “role=frontend”
any pod in a namespace with the label “project=myproject”
IP addresses in the ranges 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255 (ie, all of 172.17.0.0/16 except 172.17.1.0/24)
(Egress rules) allows connections from any pod in the “default” namespace with the label “role=db” to CIDR 10.0.0.0/24 on TCP port 5978
See the Declare Network Policy walkthrough for further examples.
So if you use a podSelector, you will be able to select pods for this Network Policy to apply to.
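As a sketch, the Calico rule from the question could be expressed as a plain networking.k8s.io/v1 NetworkPolicy using only podSelectors, which by definition match pods and therefore never select traffic sources outside the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-b
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: a          # the destination pods, like "selector: app == 'a'" in Calico
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: b  # only pods labelled app=b, i.e. pod-to-pod traffic
      ports:
        - protocol: TCP
          port: 80
```

Keep in mind that once the app=a pods are selected by any Ingress policy, everything not explicitly allowed is denied, so external traffic to A would still need its own allow rule (for example via an ipBlock).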

Allow egress traffic to single IP address

I'm writing the network policies of a Kubernetes cluster. How can I authorize a single IP address in my egress policy, instead of authorizing a whole range of IP addresses?
An example based on the official docs:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.11.12.13/32
      ports:
        - protocol: TCP
          port: 5978
It's essential to use the /32 prefix length, which limits the scope of the rule to just that one IP address.