Allowing traffic between different pods using pod network policy - kubernetes

I have created the below pod in the default namespace:
kubectl run myhttpd --image="docker.io/library/nginx:latest" --restart=Never -l app=httpd-server --port 80
I created another Pod in a different namespace to check connectivity to port 80 in the default namespace, with the below command:
kubectl run cli-httpd --rm -it --image=busybox --restart=Never -l app=myhttpd -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 100.64.9.198 (IP of application in default namespace)
In order to allow connectivity between both namespaces, I have created the below Pod network policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-ingress-80
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: myhttpd
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.64.8.0/22
    ports:
    - protocol: TCP
      port: 80
10.64.8.0/22 is the Pods network range.
But the connectivity is timing out. Please suggest how to allow this connectivity.

In NetworkPolicy, the ipBlock is usually meant to allow communications from outside your SDN.
What you want to do is to filter based on pod labels.
Having started your test pod, check for its labels:
kubectl get pods --show-labels
Pick one that identifies your Pod while not matching anything else, then fix your NetworkPolicy. It should look something like:
spec:
  ingress:
  - from:
    - podSelector:      # assuming client pod belongs to same namespace as application
        matchLabels:
          app: my-test  # netpol allows connections from any pod with label app=my-test
    ports:
    - port: 80          # netpol allows connections to port 80 only
      protocol: TCP
  podSelector:
    matchLabels:
      app: myhttpd      # netpol applies to any pod with label app=myhttpd
  policyTypes:
  - Ingress
That said, I'm not certain what the NetworkPolicy specification says regarding ipBlocks (can they refer to SDN ranges?). Depending on your SDN, your configuration "should" work in some cases, so maybe your issue is only related to label selectors?
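Also, since your client pod actually lives in a different namespace than the application, the from clause would need a namespaceSelector combined with a podSelector. A rough sketch, assuming the client namespace is called client-ns (a placeholder) and that your cluster sets the standard kubernetes.io/metadata.name label on namespaces:

ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: client-ns  # hypothetical client namespace
    podSelector:
      matchLabels:
        app: my-test                            # label carried by the client pod
  ports:
  - protocol: TCP
    port: 80

Keeping namespaceSelector and podSelector in the same from entry means both must match, i.e. only pods with that label in that namespace are allowed.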
Note: to allow connections from everywhere, I would use:
spec:
  ingress:
  - {}
....

Related

Kubernetes Network Policy Egress to pod via service

I have some pods running that talk to each other via Kubernetes Services rather than pod IPs, and now I want to lock things down using Network Policies, but I can't seem to get the egress right.
In this scenario I have two pods:
sleeper, the client
frontend, the server, behind a Service called frontend-svc which forwards port 8080 to the pod's port 80
Both running in the same namespace: ns
In the sleeper pod I simply wget a ping endpoint in the frontend pod:
wget -qO- http://frontend-svc.ns:8080/api/Ping
Here's my egress policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-to-frontend-egress
  namespace: ns
spec:
  podSelector:
    matchLabels:
      app: sleeper
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: frontend
As you can see, nothing special; no ports, no namespace selector, just a single label selector for each pod.
Unfortunately, this breaks my ping:
wget: bad address 'frontend-svc.ns:8080'
However, if I retrieve the pod's IP (using kubectl get po -o wide) and talk to the frontend directly, I do get a response:
wget -qO- 10.x.x.x:80/api/Ping (x obviously replaced with values)
My intuition was that the pod's egress to kube-dns was also required, so I added another egress policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-kube-system
  namespace: ns
spec:
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: "kube-system"
      podSelector: {}
  policyTypes:
  - Egress
For now I don't want to bother with the exact pod and port, so I allow all pods from the ns namespace to egress to kube-system pods.
However, this didn't help a bit. Even worse: this also breaks the communication by pod IP.
I'm running on Azure Kubernetes with Calico Network Policies.
Any clue what might be the issue? I'm out of ideas.
After getting it up and running, here's a more locked-down version of the DNS egress policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-all-pods-dns-egress
  namespace: ns
spec:
  policyTypes:
  - Egress
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          # This label was introduced in version 1.19, if you are running a lower version, label the kube-dns pod manually.
          kubernetes.io/metadata.name: "kube-system"
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
I recreated your deployment and the final NetworkPolicy (egress to kube-system for DNS resolution) solves it for me. Make sure that after applying the last network policy, you're testing the connection to the service's port (8080), which you changed in your wget command when accessing the pod directly (80).
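For reference, a quick way to re-run the check from the client side (assuming the client pod is literally named sleeper; adjust if it is managed by a Deployment):

kubectl exec -n ns -it sleeper -- wget -qO- http://frontend-svc.ns:8080/api/Ping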
Since network policies are a drag to manage, my team and I wanted to automate their creation and open sourced a tool that you might be interested in: https://docs.otterize.com/quick-tutorials/k8s-network-policies.
It's a way to manage network policies where you declare your access requirements in a separate, human-readable resource and the labeling is done for you on-the-fly.

Ingress-nginx: how to set externalIPs of nginx ingress to only 1 external IP

I installed nginx ingress with the YAML file:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
When deployed, I can see that the endpoints/externalIPs by default are all the IPs of my nodes,
but I only want 1 external IP to be accessible for my applications.
I tried bind-address (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#bind-address) in a configuration file and applied it, but it doesn't work. My ConfigMap file:
apiVersion: v1
data:
  bind-address: "192.168.30.16"
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
I tried kubectl edit svc/ingress-nginx-controller -n ingress-nginx to edit the svc, adding externalIPs, but it still doesn't work.
The only thing the nginx ingress documentation mentions is https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#external-ips. I tried editing the svc; after I changed it, it was set to a single IP, but later the IPs were re-added. It seems like there is an automatic update mechanism for external IPs in ingress-nginx?
Is there any way to set the nginx ingress external IP to only 1 of the node IPs? I'm running out of options for googling this. Hope someone can help me.
but I only want 1 external IP to be accessible for my applications
If you wish to "control" who can access your service(s), and from which IP/subnet/namespace etc., you should use a NetworkPolicy:
https://kubernetes.io/docs/concepts/services-networking/network-policies/
The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:
Other pods that are allowed (exception: a pod cannot block access to itself)
Namespaces that are allowed.
IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)
When defining a pod- or namespace-based NetworkPolicy, you use a selector to specify what traffic is allowed to and from the Pod(s) that match the selector.
Meanwhile, when IP-based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
Depending on whether there is a LoadBalancer implementation for your cluster, that might work as intended.
If you want to use a specific node, use type: NodePort:
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
It might then also be useful to use a nodeSelector so you can control which node the nginx controller gets scheduled to, for DNS reasons.
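A rough sketch of both ideas using kubectl patch (the Service/Deployment names come from the upstream manifest; the node name node-1 is a placeholder):

# switch the controller Service to NodePort
kubectl -n ingress-nginx patch svc ingress-nginx-controller \
  -p '{"spec": {"type": "NodePort"}}'

# pin the controller Pod to one known node (useful for DNS, as mentioned above)
kubectl -n ingress-nginx patch deployment ingress-nginx-controller \
  -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "node-1"}}}}}'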

Network policy to restrict communication of pods within namespace and port

Namespace 1: arango
Namespace 2: apache - 8080
Criteria to achieve:
The policy should not allow pods which are not listening on port 8080
The policy should not allow pods from any other namespace except "arango"
Does the following ingress help achieve this? Or is it mandatory to add egress, as there are rules to deny other namespaces' pods and ports except 8080?
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: arango
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: apache
    ports:
    - protocol: TCP
      port: 8080
Your current config
Your current configuration allows traffic to pods with the label app: arango in the default namespace, on port 8080, from pods which have the label app: apache in the default namespace.
It will apply to the default namespace as you didn't specify one. If a namespace is not defined, Kubernetes always uses the default namespace.
Questions
or is it mandatory to add egress, as there are rules to deny other namespaces' pods and ports except 8080?
It depends on your requirements: whether you want to filter traffic from your pod to the outside, from the outside to your pod, or both. It's well described in the Network Policy Resource documentation.
NetworkPolicy is a namespaced resource, so it applies in the namespace it was created in. If you want to allow other namespaces, you should use a namespaceSelector, as in the snippet below.
The policyTypes field indicates whether or not the given policy applies to ingress traffic to selected pod, egress traffic from selected pods, or both. If no policyTypes are specified on a NetworkPolicy then by default Ingress will always be set and Egress will be set if the NetworkPolicy has any egress rules.
To sum up, ingress traffic is from outside to your pods and egress is from your pods to outside.
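For instance, an ingress from clause that allows traffic from every pod in namespaces labelled project: myproject (the label is just an example) would be:

ingress:
- from:
  - namespaceSelector:
      matchLabels:
        project: myproject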
You want to apply two main rules:
The policy should not allow pods which are not listening on port 8080
If you would like to use this only for ingress traffic, it would look like:
ingress:
- from:
  ports:
  - protocol: <protocol>
    port: 8080
The policy should not allow pods from any other namespace except "arango"
Please keep in mind that NetworkPolicy is a namespaced resource, thus it will work in the namespace it was created in. This should be specified in metadata.namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy
  namespace: arango
spec:
  ...
Requested Network Policy
I have tested this on my GKE cluster with Network Policy enabled.
In the example below, incoming traffic to pods with the label app: arango in the arango namespace is allowed only if it comes from pods with the label app: apache that were deployed in the arango namespace, and only on port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
  namespace: arango
spec:
  podSelector:
    matchLabels:
      app: arango
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: apache
    ports:
    - protocol: TCP
      port: 8080
Useful links:
Guide to Kubernetes Ingress Network Policies
Get started with Kubernetes network policy
If this answer didn't solve your issue, please clarify/provide more details how it should work and I will edit answer.

How to create a network policy that matches Kubernetes API

In our EKS Kubernetes cluster we have a general Calico network policy to disallow all traffic. Then we add network policies to allow the traffic we need.
One of our pods needs to talk to the Kubernetes API but I can't seem to match that traffic with anything else than very broad ipBlock selectors. Is there any other way to do it?
This currently works but gives too broad access:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector:
    matchLabels:
      run: my-test-pod
  policyTypes:
  - Egress
  egress:
  - to: # To access the actual kubernetes API
    - ipBlock:
        cidr: 192.168.0.0/16
    ports:
    - protocol: TCP
      port: 443
In AWS EKS I can't see the control plane pods, but in my RPI cluster I can. In the RPI cluster, the API pods have the labels "component=kube-apiserver,tier=control-plane", so I also tried using a podSelector with those labels, but it does not match in either EKS or the RPI cluster:
- to:
  - namespaceSelector:
      matchLabels:
        name: kube-system
  - podSelector:
      matchLabels:
        component: kube-apiserver
Any help would be appreciated.
What if you:
find the API server by running kubectl cluster-info
look into something like
Kubernetes master is running at ..., let's say, from the example, https://EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com
translate that https://EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com to an IP address, let's say it would be a.b.c.d
and finally use a.b.c.d/32 inside the NetworkPolicy, e.g.:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: a.b.c.d/32
    ports:
    - protocol: TCP
      port: 443
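One way to obtain the actual address for the cidr field (using the example hostname from above) could be:

kubectl cluster-info
# Kubernetes master is running at https://EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com
nslookup EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com
# use the returned address(es) as a.b.c.d/32 in the policy above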
Please correct me if I understood something wrong.

How to communicate between pods in a service?

Suppose I have a service containing two pods. One of the pods is an HTTP server, and the other pod needs to hit a REST endpoint on this pod. Is there a hostname that the second pod can use to address the first pod?
I'm assuming when you say "service" you aren't referring to the Kubernetes lexicon of a Service object, otherwise your two Pods in the Service would be identical, so let's start by teasing out what a "Service" means in Kubernetes land.
You will have to create an additional Kubernetes object called a Service to get your hostname for your HTTP server's Pod. When you create a Service you will define a .spec.selector that points to a set of labels on the HTTP service's Pod. For the sake of example, let's say the label is app: nginx. The name of that Service object will become the internal DNS record that can be queried by the second Pod.
A simplified example:
apiVersion: v1
kind: Pod
metadata:
  name: http-service
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: my-http-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Now your second Pod can make requests to the HTTP service by the Service name, my-http-service.
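As a quick check from the second Pod (the pod name my-client is a placeholder, and curl is assumed to be available in its image), something like this should return the nginx response headers:

kubectl exec -it my-client -- curl -I http://my-http-service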
It's also worth mentioning that Kubernetes best practice dictates that these Pods be managed by controllers such as Deployments or ReplicaSets for all sorts of reasons, including high availability of your applications.
Note that a service is a different concept in Docker than in K8s. The easiest way of getting what you want would be creating the two pods, say pod-1 and pod-2, with a YAML file similar to this one:
apiVersion: v1
kind: Pod
metadata:
  name: NAME
  labels:
    app: LABEL
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
Say NAME and LABEL are nginx and nginx-1, so you now have two pods called nginx and nginx-1, with labels app: nginx and app: nginx-1. Actually, as only one of them is going to be exposed, the other label is irrelevant.
Now you expose the pod either with a yaml file or from command line.
Yaml file:
apiVersion: v1
kind: Service
metadata:
  name: server
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: nginx
Command line:
kubectl expose pod nginx --port 80 --name server
If you now access the second pod (nginx-1) and curl the service directly, you would end up hitting the pod behind it (nginx):
nerus:~/workspace $ kubectl exec -it nginx-1 bash
root@nginx-1:/# curl -I server
HTTP/1.1 200 OK
You can expose your pod with kubectl expose deployment <name of pod>, then use kubectl describe; it will show you the port number. Then you can access your pod at http://localhost:<port number>. Hope it will help.
Ironically, you answered your own question: a Service is a stable name and IP that abstracts over the individual coming-and-going of the Pods to which it will route traffic, as described very well in the fine manual.
If the-http-pod needs to reach the-rest-pod, then create a Service that matches the labels on the PodSpec that created the-rest-pod, and from that point forward the-http-pod can always use ${serviceName}.${serviceNamespace}.svc.cluster.local to reach any Pod that has matching labels.
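A minimal sketch of that Service, using the hypothetical names from above and assuming the-rest-pod carries the label app: the-rest and listens on port 80:

kind: Service
apiVersion: v1
metadata:
  name: the-rest-svc
  namespace: default
spec:
  selector:
    app: the-rest   # must match the labels on the-rest-pod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

the-http-pod could then call http://the-rest-svc.default.svc.cluster.local.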