Kubernetes NetworkPolicy doesn't block traffic - kubernetes

I have a namespace called test, containing 3 pods: frontend, backend, and database.
This is the manifest of the pods:
kind: Pod
apiVersion: v1
metadata:
  name: frontend
  namespace: test
  labels:
    app: todo
    tier: frontend
spec:
  containers:
  - name: frontend
    image: nginx
---
kind: Pod
apiVersion: v1
metadata:
  name: backend
  namespace: test
  labels:
    app: todo
    tier: backend
spec:
  containers:
  - name: backend
    image: nginx
---
kind: Pod
apiVersion: v1
metadata:
  name: database
  namespace: test
  labels:
    app: todo
    tier: database
spec:
  containers:
  - name: database
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: example
I would like to implement a network policy that allows incoming traffic to the database only from the backend, and disallows incoming traffic from the frontend.
This is my network policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-allow
  namespace: test
spec:
  podSelector:
    matchLabels:
      app: todo
      tier: database
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: todo
          tier: backend
    ports:
    - protocol: TCP
      port: 3306
    - protocol: UDP
      port: 3306
This is the output of kubectl get pods -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
backend 1/1 Running 0 28m 172.17.0.5 minikube <none> <none>
database 1/1 Running 0 28m 172.17.0.4 minikube <none> <none>
frontend 1/1 Running 0 28m 172.17.0.3 minikube <none> <none>
This is the output of kubectl get networkpolicy -n test -o wide
NAME POD-SELECTOR AGE
app-allow app=todo,tier=database 21m
When I execute telnet <ip-of-mysql-pod> 3306 from the frontend pod, the connection appears to be established, so the network policy is not working:
kubectl exec -it pod/frontend bash -n test
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@frontend:/# telnet 172.17.0.4 3306
Trying 172.17.0.4...
Connected to 172.17.0.4.
Escape character is '^]'.
J
8.0.25 ...caching_sha2_password
Is there something I'm missing?
Thanks

It seems that you forgot to add a "default deny" policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: test   # must be in the same namespace as the pods it isolates
spec:
  podSelector: {}
  policyTypes:
  - Ingress
The default behavior is to allow all connections between pods unless a NetworkPolicy explicitly restricts them.
More details here: https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-ingress-traffic
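To re-test after applying the default-deny policy (a sketch; default-deny-ingress.yaml is just an assumed file name for wherever you saved the policy above, and 172.17.0.4 is the database pod's IP from your output):
kubectl apply -f default-deny-ingress.yaml
kubectl exec -it frontend -n test -- timeout 5 telnet 172.17.0.4 3306
With both policies in place, and a network plugin that actually enforces NetworkPolicy, the telnet attempt from the frontend pod should time out instead of printing the MySQL banner.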

Related

Kubernetes (on-premises) Metallb LoadBalancer and sticky sessions

I installed one Kubernetes master and two Kubernetes workers on-premises.
After that, I installed MetalLB as the LoadBalancer using the commands below:
$ kubectl edit configmap -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
vim config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.100.170.200-10.100.170.220
kubectl apply -f config-map.yaml
kubectl describe configmap config -n metallb-system
I created my YAML files as below:
myapp-tst-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-tst-deployment
  labels:
    app: myapp-tst
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-tst
  template:
    metadata:
      labels:
        app: myapp-tst
    spec:
      containers:
      - name: myapp-tst
        image: myapp-tomcat
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
myapp-tst-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-tst-service
  labels:
    app: myapp-tst
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  ports:
  - name: myapp-tst-port
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: myapp-tst
  sessionAffinity: None
myapp-tst-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-tst-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: myapp-tst-service
          servicePort: myapp-tst-port
I ran kubectl apply -f for all three files, and this is my result:
kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/myapp-tst-deployment-54474cd74-p8cxk 1/1 Running 0 4m53s 10.36.0.1 bcc-tst-docker02 <none> <none>
pod/myapp-tst-deployment-54474cd74-pwlr8 1/1 Running 0 4m53s 10.44.0.2 bca-tst-docker01 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/myapp-tst-service LoadBalancer 10.110.184.237 10.100.170.15 80:30080/TCP 4m48s app=myapp-tst,tier=backend
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d22h <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/myapp-tst-deployment 2/2 2 2 4m53s myapp-tst mferraramiki/myapp-test app=myapp-tst
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/myapp-tst-deployment-54474cd74 2 2 2 4m53s myapp-tst myapp/myapp-test app=myapp-tst,pod-template-hash=54474cd74
But when I try to connect using the LB external IP (10.100.170.15), the browser request is directed to one pod; if I refresh or open a new tab (on the same URL), the request is redirected to another pod.
I need that when a user enters the URL in the browser, they stay connected to the same pod for the whole session, and are not switched to other pods.
How can I solve this problem, if it is possible?
In my VM I resolved this issue using sticky sessions; how can I enable them on the LB or in the Kubernetes components?
In the myapp-tst-service.yaml file, "sessionAffinity" is set to "None".
You should try setting it to "ClientIP".
From page https://kubernetes.io/docs/concepts/services-networking/service/ :
"If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on the client's IP addresses by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). You can also set the maximum session sticky time by setting service.spec.sessionAffinityConfig.clientIP.timeoutSeconds appropriately. (the default value is 10800, which works out to be 3 hours)."

Kubernetes Ingress issue baremetal

I am new to Kubernetes; I installed a 3-node k8s cluster through kubeadm on my personal laptop, on top of VMware Workstation:
a master and 2 worker nodes.
I deployed the nginx ingress controller through the URL below, and the nginx ingress pods seem to be working fine. I deployed an httpd pod, service, and ingress to point to the HTTP server, but I am not able to reach the HTTP URL; all the files are pasted below.
But I didn't deploy any load balancer (HAProxy/MetalLB), so I am in a dilemma about whether a LoadBalancer or proxy is required to make ingress work on a bare-metal multi-node cluster.
# nginx-ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
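For reference, that bare-metal manifest exposes the controller through a NodePort Service; you can check which node ports were assigned with (a sketch, assuming the default ingress-nginx namespace and service name from that manifest):
kubectl -n ingress-nginx get svc ingress-nginx-controller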
[root@kube-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master01 Ready master 197d v1.19.0
kube-node01.example.com Ready worker 197d v1.19.0
kube-node02.example.com Ready worker 197d v1.19.0
[root@kube-master01 ~]# kubectl -n ingress-nginx get pods
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-5zcd5 0/1 Completed 0 41h
ingress-nginx-controller-67897c9494-pt5nl 1/1 Running 0 3h4m
[root@minikube01 httpd]# cat httpd-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
  labels:
    app: http-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-server
  template:
    metadata:
      labels:
        app: http-server
    spec:
      containers:
      - name: http-server
        image: httpd
        ports:
        - containerPort: 80
[root@minikube01 httpd]# cat httpd-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: httpd-service
spec:
  selector:
    app: http-server
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 80
[root@minikube01 httpd]# cat httpd-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: httpd-ingress
spec:
  rules:
  - host: httpd.com
    http:
      paths:
      - backend:
          serviceName: httpd-service
          servicePort: 8081
The same files above work fine on a single minikube node without any issues.
Any assistance is appreciated.
Thanks in advance,
Niru

how to access service on rpi k8s cluster

I built a k8s cluster with the help of this guide: rpi+k8s. I got a basic nginx service up and running, and I can curl from the master node to the worker node to get the nginx welcome page content using:
k exec nginx-XXX-XXX -it -- curl localhost:80
I tried the suggestions in the following SO posts:
link 1
link 2
However, I still can't access a simple nginx service on the worker node from my local computer (Linux). I used NODE_IP:NODE_PORT. I also installed kubefwd and ran sudo kubefwd svc -n nginx-ns, but I don't see the expected output where it would show the port forwards. Any help would be appreciated. Thanks.
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-svc NodePort 10.101.19.230 <none> 80:32749/TCP 168m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 3/3 3 3 168m
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-54485b444f 3 3 3 168m
And here is the yaml file:
kind: Namespace
apiVersion: v1
metadata:
  name: nginx-ns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-ns
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19-alpine
        ports:
        - name: nginxport
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    name: nginxport
    port: 80
    targetPort: 80
    nodePort: 32749
  type: NodePort
  selector:
    app: backend
You need to update your service nginx-svc, where you have used two selectors.
Remove the part below:
selector:
  app: backend
Updated service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    name: nginxport
    port: 80
    targetPort: 80
    nodePort: 32749
  type: NodePort
Then, try this for port-forwarding:
kubectl port-forward -n nginx-ns svc/nginx-svc 8080:80
The template is:
kubectl port-forward -n <namespace> svc/<svc_name> <local_port>:<svc_port>
Then try 127.0.0.1:8080 or localhost:8080 in the browser.
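Since the Service is of type NodePort, it should also be reachable directly from your local machine (a sketch; <worker-node-ip> is a placeholder for one of your node IPs, and 32749 is the nodePort from the manifest):
curl http://<worker-node-ip>:32749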

Service does not work after deleting one pod in a deployment

I applied a deployment of 2 HTTP pods, and a service for it, and it works fine. I can curl the service IP or the service name. The service did round robin well.
But after I delete one pod, k8s creates a new one to replace it. When I curl the service, the new pod doesn't respond; only the other, older one is OK.
The question is: why doesn't k8s add the new pod to the service, so I can curl the service IP or service name as before?
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvn-dp
spec:
  selector:
    matchLabels:
      run: mvn-demo
  replicas: 2
  template:
    metadata:
      labels:
        run: mvn-demo
    spec:
      containers:
      - name: mvndemo
        image: 192.168.0.193:59999/mvndemo
        ports:
        - containerPort: 8080
Service:
apiVersion: v1
kind: Service
metadata:
  name: mvn-svc
  labels:
    run: mvn-demo
spec:
  ports:
  - port: 8080
    protocol: TCP
  #type: NodePort
  selector:
    run: mvn-demo
kdes svc mvn-svc
Name: mvn-svc
Namespace: default
Labels: run=mvn-demo
Annotations: Selector: run=mvn-demo
Type: ClusterIP
IP: 10.97.21.218
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
Endpoints: 100.101.153.220:8080,100.79.233.220:8080
Session Affinity: None
Events: <none>
kpod
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mvn-dp-8f59c694f-2mwq8 1/1 Running 0 81m 100.79.233.220 worker2 <none> <none>
mvn-dp-8f59c694f-xmt6m 1/1 Running 0 87m 100.101.153.220 worker3 <none> <none>
[root@master1 k8s-yaml]# curl http://10.97.21.218:8080
Hello Docker World, from: mvn-dp-8f59c694f-xmt6m
[root@master1 k8s-yaml]# curl http://10.97.21.218:8080
**curl: (7) Failed connect to 10.97.21.218:8080; Connection timed out**
As you can see, mvn-dp-8f59c694f-2mwq8 is newer (smaller AGE) than the other one, because I deleted one pod and k8s replaced it with this new one.
Set a label in your deployment metadata:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvn-dp
  labels:
    run: mvn-demo
It will work.
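Either way, a quick way to confirm whether the Service has picked up the replacement pod is to compare the Service endpoints with the pod IPs (a sketch, using the names from the manifests above):
kubectl get endpoints mvn-svc
kubectl get pods -l run=mvn-demo -o wide
If the new pod's IP is missing from the endpoints list, the Service selector is not matching the pod's labels.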

Service not available in Kubernetes

I have a minikube cluster running locally (v0.17.1), with two deployments: one is a Redis instance and one is a custom app that is trying to connect to the Redis instance. My configuration is more or less copy/pasted from the official docs and the Kubernetes guestbook example.
Service definition and deployment:
apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller
    tier: backend
    role: service
  ports:
  - port: 6379
    targetPort: 6379
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: poller-redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: poller-redis
        tier: backend
        role: database
        target: poller
    spec:
      containers:
      - name: poller-redis
        image: gcr.io/jmen-1266/jmen-redis:a67b5f4bfd8ea8441ed66a8fcb6596f276017a1c
        ports:
        - containerPort: 6379
        env:
        - name: GET_HOSTS_FROM
          value: dns
      imagePullSecrets:
      - name: gcr-json-key
App deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: poller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: poller
        tier: backend
        role: service
    spec:
      containers:
      - name: poller
        image: gcr.io/jmen-1266/poller:a96a452292e894e46339309cc024cac67647cc25
        imagePullPolicy: Always
      imagePullSecrets:
      - name: gcr-json-key
Relevant (I hope) Kubernetes info:
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.0.0.1 <none> 443/TCP 24d
poller-redis 10.0.0.137 <none> 6379/TCP 20d
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
poller 1 1 1 1 12d
poller-redis 1 1 1 1 4d
$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 10.0.2.15:8443 24d
poller-redis 172.17.0.7:6379 20d
Inside the poller pod (custom app), I get environment variables created for Redis:
# env | grep REDIS
POLLER_REDIS_SERVICE_HOST=10.0.0.137
POLLER_REDIS_SERVICE_PORT=6379
POLLER_REDIS_PORT=tcp://10.0.0.137:6379
POLLER_REDIS_PORT_6379_TCP_ADDR=10.0.0.137
POLLER_REDIS_PORT_6379_TCP_PORT=6379
POLLER_REDIS_PORT_6379_TCP_PROTO=tcp
POLLER_REDIS_PORT_6379_TCP=tcp://10.0.0.137:6379
However, if I try to connect to that port, I cannot. Doing something like:
nc -vz poller-redis 6379
fails.
What I have noticed is that I cannot access the Redis service via its ClusterIP but I can via the IP of the pod running Redis.
Any ideas, please?
I figured this out in the end; it looks like I had misunderstood how service selectors work in Kubernetes.
The service definition I had posted was:
apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller
    tier: backend
    role: service
  ports:
  - port: 6379
    targetPort: 6379
The problem is that spec.selector did not match the labels on the Redis pods: the Service's selector has to match the pod labels (which, in my case, are the same labels I also put in the Service's own metadata.labels). Now my service definition looks like:
apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller-redis
    tier: backend
    role: database
    target: poller
  ports:
  - port: 6379
    targetPort: 6379
I also now use straight up DNS lookup (i.e. ping poller-redis) rather than trying to connect to localhost:6379 from my target pods.
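A quick way to confirm that the corrected selector matches the Redis pod is to compare the Service endpoints with the pod list (a sketch, using the labels from the manifests above):
kubectl get endpoints poller-redis
kubectl get pods -l app=poller-redis,tier=backend,role=database -o wide
The endpoints should include the Redis pod's IP on port 6379.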
It could be related to kube-dns possibly not running.
From inside the poller pod, can you verify that poller-redis resolves?
Does the following work from inside the container?
nc -v 10.0.0.137 6379
One kube-dns service running in kube-system is enough. Did you run nc -vz poller-redis 6379 in a pod that is in the same namespace as the Redis service?
poller-redis is the short DNS name of the Redis service within the same namespace; it will not work from a different namespace.
kube-dns is also not available on the nodes themselves, so if you want to run nc or a Redis client from a node, use the ClusterIP of the Redis service instead of the DNS name.
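For example, from a node (using the ClusterIP and port shown in the question):
nc -vz 10.0.0.137 6379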