Linkerd2 and NetworkPolicies - Kubernetes

I run a k3s cluster on the latest version, and everything works as expected. I wrote a program for my CI/CD pipeline that automatically creates NetworkPolicies based on my deployment files (only the required ports are allowed, and only for external or whitelisted containers). On my bare-metal cluster everything works fine. Now I want to encrypt my traffic via mTLS and collect some communication logs via Linkerd2, so I installed Linkerd2 and Linkerd2-viz with the linkerd CLI.
For my NetworkPolicies I added some ingress allowances as described here:
https://ihcsim.medium.com/linkerd-2-x-with-network-policy-2657103333ca
When I inject the Linkerd proxies, all my web servers remain reachable. My clamav pod works as well, but when I try to connect to my Redis container, it doesn't work.
So my question is: which ports / NetworkPolicy rules does Linkerd2 need in order to communicate?
Thanks for your help!
Example NetworkPolicy for Redis without the Linkerd2 parts:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: fw-redis
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: redis
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          firewall/allowed.redis.6379-TCP: 'true'
    ports:
    - port: 6379
      protocol: TCP
  egress:
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.11.12.0/27
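For comparison, here is a hedged sketch of the extra ingress rules the linked article adds for meshed pods. The port numbers and namespace name are assumptions based on Linkerd's defaults (4143 for the inbound proxy, 4191 for the proxy's admin/metrics endpoint, 4190 for tap, and the viz extension running in the linkerd-viz namespace) — verify them against your installation:

```yaml
# Sketch: additional ingress rules for a Linkerd-injected Redis pod.
# Assumes default Linkerd port layout; check `linkerd install` output for yours.
ingress:
# Meshed client proxies ultimately reach the inbound proxy on 4143.
- from:
  - podSelector:
      matchLabels:
        firewall/allowed.redis.6379-TCP: 'true'
  ports:
  - port: 4143
    protocol: TCP
# linkerd-viz (Prometheus scrape / tap) talks to the proxy's admin and tap ports.
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: linkerd-viz
  ports:
  - port: 4191
    protocol: TCP
  - port: 4190
    protocol: TCP
```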


Isolate k8s pods network between namespaces

I need to isolate pod networking between k8s namespaces.
A pod-1 running in namespace ns-1 must not be able to reach the network of a pod-2 in namespace ns-2.
The purpose is to create a sandbox between namespaces and prevent network communication between specific pods based on their labels.
I was trying to use NetworkPolicy for this, but my knowledge of k8s is a little "crude".
Is this possible? Can someone provide an example?
I'm trying to block all intranet communication and allow internet access using this:
spec:
  egress:
  - ports:
    - port: 53
      protocol: UDP
    to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 192.168.0.0/16
        - 172.16.0.0/12
        - 172.40.0.0/16
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
  podSelector:
    matchExpressions:
    - key: camel.apache.org/integration
      operator: Exists
  policyTypes:
  - Egress
But when I access something like google.com, DNS resolves correctly but the connection fails with a timeout.
The policy's intention is to:
block all private network access
allow only the kube-dns nameserver resolver on port 53
but allow all access to the internet
What am I doing wrong?
NetworkPolicy settings are very flexible, and you can configure them in different ways. In your case you have to create two policies for your cluster: one namespace network policy for your production namespace and a second one for your sandbox. Of course, before you start modifying your network, be sure that you have chosen, installed, and configured a network provider that enforces NetworkPolicy.
Here is an example NetworkPolicy .yaml file that isolates your namespace:
# You can create a "default" policy for a namespace which prevents all ingress
# AND egress traffic by creating the following NetworkPolicy in that namespace.
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: YourSandbox
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
After that, any pod you create in this namespace will be isolated. Just add the namespace name to your config file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-c
  namespace: YourSandbox
And in this example we grant access, both inbound and outbound, to a specific namespace and service:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-service-c
  namespace: YourSandbox
spec:
  podSelector:
    matchLabels:
      app: YourSandboxService
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          env: YourProductionNamespace
    - podSelector:
        matchLabels:
          app: NameOfYourService
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: YourProductionNamespace
    - podSelector:
        matchLabels:
          app: NameOfYourService
Use this NetworkPolicy ipBlock to configure egress: block the default local private network IPs and leave the rest of the internet access open.
egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
      except:
      - 192.168.40.0/24 # your pool of local private network IPs
If you use a subnet prefix length of /32, you limit the scope of the rule to that one IP address only.
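Coming back to the original symptom (DNS resolves but connections time out): the single egress rule in the question carries `ports: 53/UDP`, and a `ports` list applies to every destination in the same rule, so non-DNS traffic is dropped. A minimal sketch of a fix, splitting it into one DNS rule and one unrestricted-port internet rule (CIDRs copied from the question):

```yaml
egress:
# Rule 1: DNS to kube-dns in any namespace, UDP 53 only.
- to:
  - namespaceSelector: {}
    podSelector:
      matchLabels:
        k8s-app: kube-dns
  ports:
  - port: 53
    protocol: UDP
# Rule 2: internet on any port, private ranges excluded.
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
      except:
      - 10.0.0.0/8
      - 192.168.0.0/16
      - 172.16.0.0/12
```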

I'm trying to write a network policy which has egress to two other pods and an IP, which is a Windows server

In a GKE cluster I have a pod (hello) in the default namespace which acts as a client and connects to a server installed on a Windows VM outside the cluster. Once the connection is established, the pod receives transactions from the server. In my policy I have given both egress and ingress rules. Strangely, even after giving a wrong CIDR, the pod still receives traffic from the Windows VM. The hello app pod's Service is of type LoadBalancer, which has an external node IP.
The expected result is that with a wrong IP the connection should be denied, but I still receive transactions from the server on the Windows VM, even when I put a wrong IP in the ipBlock CIDR.
Here is my network policy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: hello-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: hello
      namespace: default
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.2.0.6/32 # IP of the Windows VM outside the cluster
  egress:
  - to:
    - ipBlock:
        cidr: 10.7.0.3/32 # IP of a DB outside the cluster
    ports:
    - port: 5432
  - to:
    - ipBlock:
        cidr: 10.2.0.6/32 # pod to connect to the Windows VM
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - port: 53
      protocol: UDP
  - to:
    - podSelector:
        matchLabels:
          app: activemq
  - to:
    - podSelector:
        matchLabels:
          app: rabbitmq

Kubernetes NetworkPolicies Blocking DNS

I have an AKS cluster (Azure CNI) on which I'm trying to implement NetworkPolicies. I've created this network policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myserver
spec:
  podSelector:
    matchLabels:
      service: my-server
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          service: myotherserver
    - podSelector:
        matchLabels:
          service: gateway
    - podSelector:
        matchLabels:
          service: yetanotherserver
    ports:
    - port: 8080
      protocol: TCP
  egress:
  - to:
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
    - port: 5432
      protocol: TCP
    - port: 8080
      protocol: TCP
but when I apply the policy I see recurring messages that the host name cannot be resolved. I've installed dnsutils on the myserver pod and can see the DNS requests timing out; I've also installed tcpdump on the same pod and can see requests going from myserver to kube-dns, but no responses coming back.
If I delete the NetworkPolicy, DNS comes straight back, so I'm certain there's an issue with my NetworkPolicy, but I can't find a way to allow the DNS traffic. If anyone can shed any light on where I'm going wrong, it would be greatly appreciated!
To avoid duplication, create a separate network policy that opens up DNS traffic. First label the kube-system namespace, then allow DNS traffic from all pods to the kube-system namespace:
kubectl label namespace kube-system name=kube-system
kubectl create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-access
  namespace: <your-namespacename>
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
EOF
Here is a solution which does not require adding a name label to the target namespace. It's necessary to define a namespaceSelector as well as a podSelector; without a namespaceSelector, the podSelector would only target the pod's own namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-access
  namespace: <your-namespacename>
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
EDIT: Changed the namespaceSelector to target only the kube-system namespace based on the kubernetes.io/metadata.name label. This assumes you have automatic namespace labelling enabled: https://kubernetes.io/docs/concepts/overview/_print/#automatic-labelling
If you don't have this feature enabled, the next best thing is to define an allow-all namespaceSelector along with the podSelector.
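That fallback could look like the following sketch: an empty namespaceSelector matches all namespaces, and the podSelector in the same `to` entry then narrows it to the kube-dns pods.

```yaml
egress:
- to:
  # {} matches every namespace; the podSelector narrows to kube-dns pods.
  - namespaceSelector: {}
    podSelector:
      matchLabels:
        k8s-app: kube-dns
  ports:
  - protocol: UDP
    port: 53
```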

How to create a network policy that matches Kubernetes API

In our EKS Kubernetes cluster we have a general Calico network policy that disallows all traffic, and then we add network policies to allow specific traffic.
One of our pods needs to talk to the Kubernetes API, but I can't seem to match that traffic with anything other than very broad ipBlock selectors. Is there any other way to do it?
This currently works but grants too broad access:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector:
    matchLabels:
      run: my-test-pod
  policyTypes:
  - Egress
  egress:
  - to: # to access the actual Kubernetes API
    - ipBlock:
        cidr: 192.168.0.0/16
    ports:
    - protocol: TCP
      port: 443
In AWS EKS I can't see the control-plane pods, but in my RPi cluster I can. In the RPi cluster the API server pods have the labels component=kube-apiserver,tier=control-plane, so I also tried a podSelector with those labels, but it doesn't match in either EKS or the RPi cluster:
- to:
  - namespaceSelector:
      matchLabels:
        name: kube-system
  - podSelector:
      matchLabels:
        component: kube-apiserver
Any help would be appreciated.
What if you:
find the API server by running kubectl cluster-info
look for a line like "Kubernetes master is running at ...", let's say from the example https://EXAMPLE0A04F01705DD065655C30CC3D.yl4.us-west-2.eks.amazonaws.com
resolve that hostname to an IP address, let's say it would be a.b.c.d
and finally use a.b.c.d/32 inside the NetworkPolicy, e.g.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: a.b.c.d/32
    ports:
    - protocol: TCP
      port: 443
Please correct me if I understood something wrong.
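One way to script the lookup steps above (a sketch; it relies on the fact that the `kubernetes` Service in the `default` namespace fronts the API server, so its Endpoints object lists the control-plane addresses):

```shell
# Resolve the API server address(es) to put into the ipBlock cidr.
kubectl get endpoints kubernetes -n default \
  -o jsonpath='{.subsets[*].addresses[*].ip}'
# Alternatively, resolve the hostname reported by `kubectl cluster-info`:
# dig +short <your-eks-endpoint-hostname>
```

Note that managed control-plane IPs can change over time, so a /32 pinned this way may need to be refreshed.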

Unable to connect to the internet from a pod after applying an egress network policy in GKE

I have a pod (kubectl run app1 --image tomcat:7.0-slim) in GKE. After applying the egress network policy, apt-get update is unable to connect to the internet.
Before applying the policy: (screenshot)
After applying the policy: (screenshot)
This is the policy applied:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app2-np
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: app2
  policyTypes:
  - Egress
  - Ingress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: app3
    ports:
    - port: 8080
  - ports:
    - port: 80
    - port: 53
    - port: 443
Here I am able to connect to port 8080 of the app3 pod in the same namespace. Please help me correct my NetworkPolicy.
It happens because your egress rule for app3 on port 8080 is the only one selecting a destination, so all internet connection attempts are blocked.
If some of your pods need internet access, you can label them and create a NetworkPolicy that permits internet egress.
In the example below, pods with the label networking/allow-internet-egress: "true" will be able to reach the internet:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internet-egress
spec:
  podSelector:
    matchLabels:
      networking/allow-internet-egress: "true"
  egress:
  - {}
  policyTypes:
  - Egress
Another option is to allow by IP blocks. In the example below, a rule allows internet access (0.0.0.0/0) except for the ipBlock 10.0.0.0/8:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-internet-only
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
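One caveat with the ipBlock variant (a sketch-level observation, assuming a typical cluster where the DNS Service IP falls inside 10.0.0.0/8): excluding that range also blocks kube-dns, so name resolution would still fail. Combining it with an explicit DNS rule, for example:

```yaml
egress:
# DNS to kube-dns in any namespace.
- to:
  - namespaceSelector: {}
    podSelector:
      matchLabels:
        k8s-app: kube-dns
  ports:
  - port: 53
    protocol: UDP
# Everything else: internet only, private range excluded.
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
      except:
      - 10.0.0.0/8
```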
Finally, on this site you can visualize your NetworkPolicies, which is a good way to understand their exact behaviour.
References:
https://www.stackrox.com/post/2020/01/kubernetes-egress-network-policies/
Kubernetes networkpolicy allow external traffic to internet only