Kubernetes network allocation range

Is there a way in Kubernetes, or a network plugin, to limit the range of IP allocation? For example, I am using Weave with the subnet 192.168.16.0/24, and I want to limit the IPs that Kubernetes allocates to pods to the range 192.168.16.10-30.
However, my app might use the rest of the IPs based on its requirements, i.e. it can bring up virtual IPs in the range 192.168.16.31-50. I want some mechanism to make sure that the range I specified is never allocated by K8s, so my app can consume it.
I need something like this: https://www.weave.works/docs/net/latest/tasks/ipam/configuring-weave/.

The NetworkPolicy resource will help.
See the documentation.
An example NetworkPolicy might look like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
The ipBlock field describes the network ranges for ingress and egress rules, e.g.:
- ipBlock:
    cidr: 172.17.0.0/16
    except:
    - 172.17.1.0/24
CIDR
CIDR stands for Classless Inter-Domain Routing; see the samples of IPv4 CIDR blocks.
More info
For more info see the NetworkPolicy reference.
Also, you can check out the great intro to Kubernetes networking by Reuven Harrison.

It's a good question, actually. The answer depends on your CNI; in your case you are using Weave Net.
I am assuming you are running Weave Net as a DaemonSet. If so, add something like this to your DaemonSet YAML file:
spec:
  containers:
  - name: weave
    command:
    - /home/weave/launch.sh
    env:
    - name: IPALLOC_RANGE
      value: 192.168.16.32/27
This gives your pods IPs in the range 192.168.16.32-63 (a /27 covers 32 addresses).
You can also set this up with the Weave CLI, as sketched below; let me know if you need more detail.
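For reference, a minimal sketch of the CLI route, assuming you launch Weave Net by hand with the weave script rather than through the Kubernetes DaemonSet; --ipalloc-range is the CLI counterpart of the IPALLOC_RANGE environment variable:

# Restrict Weave's IP allocator to 192.168.16.32-63 at launch time
weave launch --ipalloc-range 192.168.16.32/27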
Hope this is helpful.

Related

How to create a network policy that matches the Kubernetes API

In our EKS Kubernetes cluster we have a general Calico network policy to disallow all traffic, and then we add network policies to allow the traffic we need.
One of our pods needs to talk to the Kubernetes API, but I can't seem to match that traffic with anything other than very broad ipBlock selectors. Is there any other way to do it?
This currently works but gives too broad access:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector:
    matchLabels:
      run: my-test-pod
  policyTypes:
  - Egress
  egress:
  - to: # To access the actual Kubernetes API
    - ipBlock:
        cidr: 192.168.0.0/16
    ports:
    - protocol: TCP
      port: 443
In AWS EKS I can't see the control plane pods, but in my RPi cluster I can. In the RPi cluster, the API server pods have the labels "component=kube-apiserver,tier=control-plane", so I also tried using a podSelector with those labels, but it does not match in either EKS or the RPi cluster:
- to:
  - namespaceSelector:
      matchLabels:
        name: kube-system
  - podSelector:
      matchLabels:
        component: kube-apiserver
Any help would be appreciated.
What if you:
1. Find the API server by running kubectl cluster-info. It prints something like: Kubernetes master is running at https://EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com
2. Resolve that hostname to an IP address; let's say it would be a.b.c.d (a lookup sketch follows).
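As a sketch of that lookup, reusing the example hostname from above (dig is just one way to resolve it; nslookup works too):

# Print the API server URL, then resolve its hostname to an address
kubectl cluster-info
# Kubernetes master is running at https://EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com
dig +short EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com
# a.b.c.d   <- the address to use as a.b.c.d/32 below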
And finally, use a.b.c.d/32 inside the NetworkPolicy, e.g.:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: a.b.c.d/32
    ports:
    - protocol: TCP
      port: 443
Please correct me if I understood something wrong.

Kubernetes pod to pod cluster Network Policy

Using Kubernetes network policies or Calico, can I apply these tools only to pod-to-pod traffic within the cluster?
I already have network rules in place for traffic entering and leaving the cluster.
For example if I apply this calico rule:
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-b
  namespace: app
spec:
  selector: app == 'a'
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: app == 'b'
    destination:
      ports:
      - 80
In this example I allow traffic coming from app B to app A, but this disallows every other ingress going to A.
Would it be possible to apply this rule only to pod-to-pod traffic?
You should read The NetworkPolicy resource; it provides an example NetworkPolicy with both ingress and egress rules.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
The explanation is as follows:
- It isolates “role=db” pods in the “default” namespace for both ingress and egress traffic (if they weren’t already isolated).
- (Ingress rules) It allows connections to all pods in the “default” namespace with the label “role=db” on TCP port 6379 from:
  - any pod in the “default” namespace with the label “role=frontend”,
  - any pod in a namespace with the label “project=myproject”,
  - IP addresses in the ranges 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255 (i.e., all of 172.17.0.0/16 except 172.17.1.0/24).
- (Egress rules) It allows connections from any pod in the “default” namespace with the label “role=db” to CIDR 10.0.0.0/24 on TCP port 5978.
See the Declare Network Policy walkthrough for further examples.
So if you use a podSelector, you choose exactly which pods the NetworkPolicy applies to, and a podSelector inside the from section likewise matches source pods by label rather than by IP range.
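For the scenario in the question, here is a hedged sketch of a native-Kubernetes equivalent of the Calico rule, using only pod selectors (the app=a and app=b labels and the app namespace are taken from the question). Note that, exactly like the Calico policy, selecting the app=a pods isolates them, so any other ingress they need must be allowed by an additional rule or policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-b
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: a            # the policy applies only to pods labelled app=a
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: b        # allow ingress only from pods labelled app=b
    ports:
    - protocol: TCP
      port: 80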

Allow egress traffic to single IP address

I'm writing the network policies for a Kubernetes cluster. How can I authorize a single IP address in my egress policy, instead of authorizing a whole range of IP addresses?
An example based on the official docs:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.11.12.13/32
    ports:
    - protocol: TCP
      port: 5978
It's essential to use the /32 prefix length, which limits the scope of the rule to just this one IP address; a shorter prefix such as /24 would match 256 addresses.

Kubernetes Network Policy - Allow specific IP

I'm using Kubernetes on IBM cloud.
I want to create a network policy that denies all incoming connections to a pod (which exposes the app on port 3000), but allows incoming connections only from a specific IP (MY_IP).
I wrote this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: <MY_POLICY_NAME>
  namespace: <MY_NAMESPACE>
spec:
  podSelector:
    matchLabels:
      app: <MY_APP>
      env: <MY_ENV>
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: <MY_IP>/24
        except:
        - <MY_IP>/32
    ports:
    - protocol: TCP
      port: 3000
Unfortunately, this is not working because it blocks all the connections.
How can I fix this?
In your policy as it stands, you are allowing ingress from that /24 CIDR except for traffic from <MY_IP> itself, so it is blocking exactly the address you want to allow.
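A sketch of the corrected ingress section, dropping the except block and narrowing the cidr to the single address (placeholders as in the question):

ingress:
- from:
  - ipBlock:
      cidr: <MY_IP>/32   # /32 allows exactly this one address
  ports:
  - protocol: TCP
    port: 3000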
PS: Source IP preservation is disabled by default for Ingress in IBM Cloud Kubernetes Service. Make sure you've enabled it for your Ingress service: https://console.bluemix.net/docs/containers/cs_ingress.html#preserve_source_ip
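As a hedged sketch of what the linked docs describe, source IP preservation is enabled by patching the ALB's LoadBalancer service to externalTrafficPolicy: Local; the service name here is illustrative, so take yours from kubectl get svc -n kube-system:

# Illustrative service name; replace with your ALB service
kubectl patch svc <ALB_SERVICE_NAME> -n kube-system \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'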

How to allow access to kubernetes api using egress network policy?

An init container running kubectl get pod is used to check the ready status of another pod.
After an egress NetworkPolicy was turned on, the init container can't access the Kubernetes API: Unable to connect to the server: dial tcp 10.96.0.1:443: i/o timeout. The CNI is Calico.
Several rules were tried, but none of them work (service and master host IPs, different CIDR masks):
...
egress:
- to:
  - ipBlock:
      cidr: 10.96.0.1/32
  ports:
  - protocol: TCP
    port: 443
...
or using a namespaceSelector (targeting the default and kube-system namespaces):
...
egress:
- to:
  - namespaceSelector:
      matchLabels:
        name: default
  ports:
  - protocol: TCP
    port: 443
...
It looks like the ipBlock rules just don't work, and the namespaceSelector rules don't work because the Kubernetes API server is not a standard pod.
Can it be configured? Kubernetes is 1.9.5, Calico is 3.1.1.
The problem still exists with GKE 1.13.7-gke.8 and Calico 3.2.7.
You need to get the real IP of the master using kubectl get endpoints --namespace default kubernetes and create an egress policy to allow that (sample output follows the policy below).
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-apiserver
  namespace: test
spec:
  policyTypes:
  - Egress
  podSelector: {}
  egress:
  - ports:
    - port: 443
      protocol: TCP
    to:
    - ipBlock:
        cidr: x.x.x.x/32
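The command from the first sentence prints the address to plug into cidr. With illustrative values, its output looks something like this:

$ kubectl get endpoints --namespace default kubernetes
NAME         ENDPOINTS        AGE
kubernetes   x.x.x.x:443      1d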
Update: Try Dave McNeill's answer first.
If it does not work for you (it did for me!), the following might be a workaround:
podSelector:
  matchLabels:
    white: listed
egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
This will allow access to the API server, along with all other IP addresses on the internet :-/
You can combine this with the "DENY all non-whitelisted traffic from a namespace" rule to deny egress for all other pods; a minimal sketch of such a default-deny policy follows.
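A minimal sketch of such a default-deny egress policy (the test namespace is reused from the policy above); because NetworkPolicies are additive, the whitelisted pods above keep their allow rule:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: test
spec:
  podSelector: {}   # selects every pod in the namespace
  policyTypes:
  - Egress          # no egress rules listed, so all egress is denied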
We aren't on GCP, but the same should apply.
We query AWS for the CIDR of our master nodes and use it as a value for the Helm charts that create the NetworkPolicy for Kubernetes API access.
In our case the masters are part of an auto-scaling group, so we need the whole CIDR. In your case a single IP might be enough.