Kubernetes.default nslookup not able to resolve from different namespaces

I'm facing a problem resolving kubernetes.default.svc.cluster.local from outside the default namespace.
I'm running two busybox:1.30 pods, one in each namespace, and the name resolves successfully only from the default namespace.
[admin@devsvr3 ~]$ kubectl exec -n default -ti busybox -- nslookup kubernetes
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[admin@devsvr3 ~]$ kubectl exec -n namespace-dev -ti busybox -- nslookup kubernetes
Server: 10.96.0.10
Address: 10.96.0.10:53
** server can't find kubernetes.namespace-dev.svc.cluster.local: NXDOMAIN
*** Can't find kubernetes.svc.cluster.local: No answer
*** Can't find kubernetes.cluster.local: No answer
*** Can't find kubernetes.namespace-dev.svc.cluster.local: No answer
*** Can't find kubernetes.svc.cluster.local: No answer
*** Can't find kubernetes.cluster.local: No answer
[admin@devsvr3 ~]$
I'm running a CentOS 7 Kubernetes cluster in an air-gapped environment with the Weave Net CNI add-on, and this is my CoreDNS config:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        log
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2019-01-28T10:59:25Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "1177652"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: c6b5ddae-22eb-11e9-8689-0017a4770068

Following your steps, I indeed ran into the same issue. But if you create the pod using this YAML, it works correctly. Changing the busybox image seems to end up with the error you described. I will try to find out why, but for now this is the solution.
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: namespace-dev
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
and then:
kubectl exec -ti -n=namespace-dev busybox -- nslookup kubernetes.default
it works as intended and explained here.
/ # nslookup kubernetes
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'kubernetes'
/ # nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
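The reason the fully qualified name helps is most likely the pod's DNS search path: the kubelet writes the namespace-specific search domains into the pod's /etc/resolv.conf, and busybox images newer than 1.28 are known to have nslookup quirks with those search domains. You can check what the pod is actually searching with, for example:
kubectl exec -ti -n namespace-dev busybox -- cat /etc/resolv.conf
In a working pod you would typically expect something like:
nameserver 10.96.0.10
search namespace-dev.svc.cluster.local svc.cluster.local cluster.local
options ndots:5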

Related

Minikube: Kubernetes Ingress not reachable / loads forever

I want to create an ingress for the Kubernetes dashboard, but it loads forever in the browser. I'm on a Mac.
minikube start --driver=docker
minikube addons enable ingress
minikube addons enable ingress-dns
minikube addons enable dashboard
minikube addons enable metrics-server
❯ kubectl get ns
NAME STATUS AGE
default Active 4m13s
ingress-nginx Active 109s
kube-node-lease Active 4m14s
kube-public Active 4m14s
kube-system Active 4m14s
kubernetes-dashboard Active 51s
❯ kubectl get service -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.105.2.220 <none> 8000/TCP 82s
kubernetes-dashboard ClusterIP 10.106.101.254 <none> 80/TCP 82s
// dashboard-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: dashboard.com
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 80
❯ kubectl apply -f dashboard-ingress.yaml
ingress.networking.k8s.io/dashboard-ingress created
❯ kubectl get ingress -n kubernetes-dashboard
NAME CLASS HOSTS ADDRESS PORTS AGE
dashboard-ingress <none> dashboard.com 192.168.49.2 80 67s
// etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
192.168.49.2 dashboard.com
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
When I now try to load dashboard.com or also 192.168.49.2 in the browser, it just loads forever and does nothing.
When I try to load http://127.0.0.1/ I get an nginx 404.
Am I missing something?
The ingress addon is currently not fully supported with the docker driver on macOS (due to the limitation of the docker bridge on Mac); you need to use the minikube tunnel command. See Minikube docs - Known issues and the related GitHub issue.
Enabling the ingress addon on Mac reports that the ingress will be available on 127.0.0.1. See: Support Ingress on MacOS, driver docker.
So you only need to add the following line to your /etc/hosts file.
127.0.0.1 dashboard.com
Create the tunnel (it will ask for your sudo password):
minikube tunnel
Then you can verify that the Ingress controller is directing traffic:
curl dashboard.com
(Also, I used this Ingress:)
kubectl apply -f - << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: dashboard.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 80
EOF
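If you prefer not to edit /etc/hosts at all, a quick check (assuming the ingress controller really is exposed on 127.0.0.1 as described above) is to pass the Host header explicitly while the tunnel is running:
curl -H "Host: dashboard.com" http://127.0.0.1/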

NetworkPolicy doesn't block all the pods communication

I need to block pods communication to each other but I failed to do it.
I installed weave plug-in on my Minikube (v1.21.0), and started two pods in the same namespace:
kubectl run nginx1 --image=nginx -n ns1
kubectl run nginx2 --image=nginx -n ns2
The pods IPs:
nginx1 with IP: 172.17.0.3
nginx2 with IP: 172.17.0.4
I can ping nginx1 from nginx2 and vice versa.
I wanted to try to deny it, so I firstly tried to deny all the network with this network policy:
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netpol1
  namespace: earth
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress: []
  egress: []
EOF
I still had ping, so I tried this one too:
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netpol1
  namespace: earth
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 172.17.0.0/16
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 172.17.0.0/16
EOF
I still can ping each pod from within the other pods in the same namespace.
I verified that weave is installed:
NAME READY STATUS RESTARTS AGE
coredns-74ff55c5b-2r5z8 1/1 Running 0 2d3h
etcd-ip-172-31-37-46 1/1 Running 0 2d3h
kube-apiserver-ip-172-31-37-46 1/1 Running 0 2d3h
kube-controller-manager-ip-172-31-37-46 1/1 Running 0 2d3h
kube-proxy-787pj 1/1 Running 0 2d3h
kube-scheduler-ip-172-31-37-46 1/1 Running 0 2d3h
storage-provisioner 1/1 Running 0 2d3h
weave-net-wd52r 2/2 Running 1 23m
I also tried to restart kubelet but I still have access from each pod to the other one.
What can be the reason?
When you specify the Egress and Ingress rules, you do not specify the network protocol. In the Kubernetes docs you can see that the protocol can be specified too. Your Kubernetes cluster defaults your Egress and Ingress rules to a protocol if you do not specify one.
If you block all TCP or UDP networking, you will find that ping still works just fine. This is because ping uses the ICMP network protocol, not TCP or UDP.
The actual configuration you need depends on your networking plugin. I do not know how to configure Weave to block ICMP.
If you were using Calico, their docs explain how to handle the ICMP protocol.
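To see whether the policy affects TCP at all (rather than just ICMP), you could test with an HTTP request instead of ping. This is only a sketch using the pod IPs from your output; nginx listens on port 80 by default:
kubectl run tcp-test -n ns2 -it --rm --restart=Never --image=busybox:1.28 -- wget -qO- -T 5 http://172.17.0.3
If the HTTP request still succeeds, the policy is not taking effect for these pods either; if only ping gets through, the protocol is the difference.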

Kubernetes: Not able to communicate between two services (different pod, same namespace)

I am not able to communicate between two services.
post-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-data-deployment
  labels:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-web-selector
      tier: backend
  template:
    metadata:
      labels:
        app: python-web-selector
        tier: backend
    spec:
      containers:
      - name: python-web-pod
        image: sakshiarora2012/python-backend:v10
        ports:
        - containerPort: 5000
post-deployment2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-data-deployment2
  labels:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-web-selector2
      tier: backend
  template:
    metadata:
      labels:
        app: python-web-selector2
        tier: backend
    spec:
      containers:
      - name: python-web-pod2
        image: sakshiarora2012/python-backend:v8
        ports:
        - containerPort: 5000
post-service.yml
apiVersion: v1
kind: Service
metadata:
  name: python-data-service
spec:
  selector:
    app: python-web-selector
    tier: backend
  ports:
  - port: 5000
    nodePort: 30400
  type: NodePort
post-service2.yml
apiVersion: v1
kind: Service
metadata:
  name: python-data-service2
spec:
  selector:
    app: python-web-selector2
    tier: backend
  ports:
  - port: 5000
  type: ClusterIP
When I try to ping from one container to another, it is not able to ping:
root@python-data-deployment-7bd65dc685-htxmj:/project# ping python-data-service.default.svc.cluster.local
PING python-data-service.default.svc.cluster.local (10.107.11.236) 56(84) bytes of data.
^C
--- python-data-service.default.svc.cluster.local ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 139ms
If I see dns entry it is showing
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl exec -i -t dnsutils -- nslookup python-data-service
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: python-data-service.default.svc.cluster.local
Address: 10.107.11.236
sakshiarora@Sakshis-MacBook-Pro Student_Registration %
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl exec -i -t dnsutils -- nslookup python-data-service2
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: python-data-service2.default.svc.cluster.local
Address: 10.103.97.40
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dnsutils 1/1 Running 0 5m54s 172.17.0.9 minikube <none> <none>
python-data-deployment-7bd65dc685-htxmj 1/1 Running 0 47m 172.17.0.6 minikube <none> <none>
python-data-deployment2-764744b97d-mc9gm 1/1 Running 0 43m 172.17.0.8 minikube <none> <none>
python-db-deployment-d54f6b657-rfs2b 1/1 Running 0 44h 172.17.0.7 minikube <none> <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl describe svc python-data-service
Name: python-data-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"python-data-service","namespace":"default"},"spec":{"ports":[{"no...
Selector: app=python-web-selector,tier=backend
Type: NodePort
IP: 10.107.11.236
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
NodePort: <unset> 30400/TCP
Endpoints: 172.17.0.6:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl describe svc python-data-service2
Name: python-data-service2
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"python-data-service2","namespace":"default"},"spec":{"ports":[{"p...
Selector: app=python-web-selector2,tier=backend
Type: ClusterIP
IP: 10.103.97.40
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
Endpoints: 172.17.0.8:5000
Session Affinity: None
Events: <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration %
I think it would work if the DNS entry showed an IP in the 172.17.0.x range, but I'm not sure why it is not showing up that way in the DNS entry. Any pointers?
If you want to access python-data-service from outside the cluster using the NodePort and you are using minikube, you should be able to do so with curl $(minikube service python-data-service --url) from anywhere outside the cluster, i.e. from your system.
If you want to communicate between two microservices within the cluster, then simply use a ClusterIP-type service instead of the NodePort type.
To identify whether it's a service issue or a pod issue, use the pod IP directly in the curl command. From the output of kubectl describe svc python-data-service, the pod IP behind the service python-data-service is 172.17.0.6. So try curl 172.17.0.6:5000/getdata
In order to start debugging your services I would suggest the following steps:
Check that your service 1 is accessible as a Pod:
kubectl run test1 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://172.17.0.6:5000
Check that your service 2 is accessible as a Pod:
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 172.17.0.8:5000
Then, check that your service 1 is accessible as a Service using the corresponding cluster IP and then DNS Name:
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 10.107.11.236:5000
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://python-data-service:5000
Then, check that your service 2 is accessible as a Service using the corresponding cluster IP and then DNS Name:
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 10.103.97.40:5000
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://python-data-service2:5000
Then, if needed, check that the NodePort service is accessible through its node port (you would need to know the IP address of the node where the service has been exposed; for instance in minikube it should work):
wget -O - http://192.168.99.101:30400
From your Service manifests, I can recommend as a good practice specifying both port and targetPort, as you can see at
https://canterafonseca.eu/kubernetes/certification/application/developer/cncf/k8s/cloud/native/computing/ckad/deployments/services/preparation-k8s-ckad-exam-part4-services.html#-services
On the other hand, if you only need to expose one of the services to the outside world, you can create a headless service (see also my blog post above).
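As an illustration only (a sketch based on the manifests above, not the author's exact file), the first Service with both port and targetPort spelled out could look like this:
kubectl apply -f - << EOF
apiVersion: v1
kind: Service
metadata:
  name: python-data-service
spec:
  type: NodePort
  selector:
    app: python-web-selector
    tier: backend
  ports:
  - port: 5000        # port exposed by the Service
    targetPort: 5000  # containerPort the traffic is forwarded to
    nodePort: 30400
EOF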
Ping doesn't work on Service ClusterIP addresses because they are virtual addresses created by iptables rules that redirect packets to the endpoints (pods).
You should be able to ping a pod, but not a service.
You can use curl or wget instead.
For example wget -qO- POD_IP:80
or you can try
wget -qO- http://your-service-name:port/yourpath
curl POD_IP:port_number
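If you want to see those iptables rules for yourself (a sketch; this assumes kube-proxy is running in its default iptables mode and that you have shell access to the node, e.g. via minikube ssh):
sudo iptables -t nat -L KUBE-SERVICES -n | grep python-data-service
The matching rules redirect traffic for the ClusterIP to the per-service KUBE-SVC-* chains, which in turn DNAT to the pod endpoints.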
Are you able to connect to your pods? Maybe try port-forward to see if you can connect, and then check the connectivity between the two pods.
Lastly, check if there is a default-deny network policy set there; maybe you have some restrictions at the network level.
kubectl get networkpolicy -n <namespace>
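A sketch of that port-forward check, using the pod name from the kubectl get pod output above (run the curl in a second terminal while the forward is active):
kubectl port-forward pod/python-data-deployment-7bd65dc685-htxmj 5000:5000
curl http://127.0.0.1:5000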
Try looking into the logs using kubectl logs PODNAME so that you know what's happening. At first sight, I think you need to expose the ports of both services: kubectl port-forward yourService PORT:PORT.

Is there any restriction in kube-system namespace and kubedns?

I'm trying to understand why I do not get the same behavior from KubeDNS in the kube-system namespace as in another namespace.
For example, with this kind of pod:
apiVersion: v1
kind: Pod
metadata:
  name: debian
  namespace: kube-system
spec:
  containers:
  - image: debian
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
If I try to reach the DNS service from this pod in the kube-system namespace, it fails. However, if I use another namespace, it works.
Of course, I'm targeting a service name in the same namespace as the pod.
Any idea why it fails on kube-system?
I tested in both namespaces, and it works in both. Can you give some more details on your issue?
in kube-system namespace
dig @kube-dns.kube-system.svc.cluster.local +short NS google.com
ns1.google.com.
ns2.google.com.
ns4.google.com.
ns3.google.com.
in default namespace.
dig @kube-dns.kube-system.svc.cluster.local +short NS google.com
ns2.google.com.
ns1.google.com.
ns4.google.com.
ns3.google.com.
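One way to narrow this down (a sketch, assuming the debian pod from your manifest is still running) is to dump the DNS configuration the kubelet injected and compare it with the same command run in an equivalent pod in another namespace; the nameserver and search lines should differ only in the namespace part:
kubectl exec -n kube-system debian -- cat /etc/resolv.conf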

Can't resolve 'kubernetes' by skydns service in Kubernetes

core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes
Server: 10.100.0.10
Address 1: 10.100.0.10
nslookup: can't resolve 'kubernetes'
core@core-1-94 ~ $ kubectl get svc --namespace=kube-system
NAME LABELS SELECTOR IP(S) PORT(S)
kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.100.0.10 53/UDP
53/TCP
kube-ui k8s-app=kube-ui,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeUI k8s-app=kube-ui 10.100.110.236 80/TCP
core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes 10.100.0.10:53
Server: 10.100.0.10
Address 1: 10.100.0.10
nslookup: can't resolve 'kubernetes'
core@core-1-94 ~ $ kubectl get endpoints --namespace=kube-system
NAME ENDPOINTS
kube-dns 10.244.31.16:53,10.244.31.16:53
kube-ui 10.244.3.2:8080
core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes 10.244.31.16:53
Server: 10.244.31.16
Address 1: 10.244.31.16
Name: kubernetes
Address 1: 10.100.0.1
I think the kube-dns service is not available.
The skydns-svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.100.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
Who can help?
For DNS to work, the kubelet needs to be passed the flags --cluster_dns=<DNS service IP> and --cluster_domain=cluster.local at startup. These flags aren't included in the set of flags passed to the kubelet, so the kubelet won't try to contact the DNS pod that you've created for name resolution services. To fix this, you can modify the kubelet startup script to add these two flags, and then, when you create the DNS service, make sure that you set the same IP address that you passed to the --cluster_dns flag as the portalIP field of the service spec, like this.
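A sketch of what that kubelet invocation could look like, using the cluster IP from the skydns-svc.yaml above (the exact startup script and the rest of the flags depend on your installation; current kubelet versions spell these flags --cluster-dns and --cluster-domain):
kubelet --cluster_dns=10.100.0.10 --cluster_domain=cluster.local <your existing flags>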
For any other information, you can look it up.