Kubernetes: Not able to communicate between two services (different pods, same namespace)

I am not able to communicate between two services.
post-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-data-deployment
  labels:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-web-selector
      tier: backend
  template:
    metadata:
      labels:
        app: python-web-selector
        tier: backend
    spec:
      containers:
      - name: python-web-pod
        image: sakshiarora2012/python-backend:v10
        ports:
        - containerPort: 5000
post-deployment2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-data-deployment2
  labels:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-web-selector2
      tier: backend
  template:
    metadata:
      labels:
        app: python-web-selector2
        tier: backend
    spec:
      containers:
      - name: python-web-pod2
        image: sakshiarora2012/python-backend:v8
        ports:
        - containerPort: 5000
post-service.yml
apiVersion: v1
kind: Service
metadata:
  name: python-data-service
spec:
  selector:
    app: python-web-selector
    tier: backend
  ports:
  - port: 5000
    nodePort: 30400
  type: NodePort
post-service2.yml
apiVersion: v1
kind: Service
metadata:
  name: python-data-service2
spec:
  selector:
    app: python-web-selector2
    tier: backend
  ports:
  - port: 5000
  type: ClusterIP
When I try to ping from one container to the other, it fails:
root@python-data-deployment-7bd65dc685-htxmj:/project# ping python-data-service.default.svc.cluster.local
PING python-data-service.default.svc.cluster.local (10.107.11.236) 56(84) bytes of data.
^C
--- python-data-service.default.svc.cluster.local ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 139ms
The DNS entries, however, resolve correctly:
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl exec -i -t dnsutils -- nslookup python-data-service
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: python-data-service.default.svc.cluster.local
Address: 10.107.11.236
sakshiarora@Sakshis-MacBook-Pro Student_Registration %
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl exec -i -t dnsutils -- nslookup python-data-service2
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: python-data-service2.default.svc.cluster.local
Address: 10.103.97.40
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dnsutils 1/1 Running 0 5m54s 172.17.0.9 minikube <none> <none>
python-data-deployment-7bd65dc685-htxmj 1/1 Running 0 47m 172.17.0.6 minikube <none> <none>
python-data-deployment2-764744b97d-mc9gm 1/1 Running 0 43m 172.17.0.8 minikube <none> <none>
python-db-deployment-d54f6b657-rfs2b 1/1 Running 0 44h 172.17.0.7 minikube <none> <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl describe svc python-data-service
Name: python-data-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"python-data-service","namespace":"default"},"spec":{"ports":[{"no...
Selector: app=python-web-selector,tier=backend
Type: NodePort
IP: 10.107.11.236
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
NodePort: <unset> 30400/TCP
Endpoints: 172.17.0.6:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl describe svc python-data-service2
Name: python-data-service2
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"python-data-service2","namespace":"default"},"spec":{"ports":[{"p...
Selector: app=python-web-selector2,tier=backend
Type: ClusterIP
IP: 10.103.97.40
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
Endpoints: 172.17.0.8:5000
Session Affinity: None
Events: <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration %
I thought that if the DNS entry showed an IP in the 172.17.0.x range it would work, but I am not sure why that is not what the DNS lookup returns. Any pointers?

If you want to access python-data-service from outside the cluster using the NodePort, and you are using minikube, you should be able to do so with curl $(minikube service python-data-service --url) from anywhere outside the cluster, i.e. from your own machine.
If you want two microservices to communicate within the cluster, simply use a ClusterIP type service instead of a NodePort type.
To identify whether it is a service issue or a pod issue, use the pod IP directly in the curl command. From the output of kubectl describe svc python-data-service, the endpoint (pod IP) for python-data-service is 172.17.0.6, so try curl 172.17.0.6:5000/getdata
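For instance, from the second deployment's pod you can hit the first service both by DNS name and by pod IP (a sketch; it assumes curl is available in the image and that the app really serves /getdata on port 5000, as mentioned above):
kubectl exec -it python-data-deployment2-764744b97d-mc9gm -- curl http://python-data-service:5000/getdata
kubectl exec -it python-data-deployment2-764744b97d-mc9gm -- curl http://172.17.0.6:5000/getdata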

In order to start debugging your services I would suggest the following steps:
Check that your service 1 is accessible as a Pod:
kubectl run test1 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://172.17.0.6:5000
Check that your service 2 is accessible as a Pod:
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 172.17.0.8:5000
Then, check that your service 1 is accessible as a Service using the corresponding cluster IP and then DNS Name:
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 10.107.11.236:5000
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://python-data-service:5000
Then, check that your service 2 is accessible as a Service using the corresponding cluster IP and then DNS Name:
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 10.103.97.40:5000
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://python-data-service2:5000
Then, if needed, check that your service 1 is accessible through its node port (you need to know the IP address of the node where the service is exposed; with minikube, for instance, the following should work):
wget -O - http://192.168.99.101:30400
From your Service manifests I can recommend, as a good practice, specifying both port and targetPort explicitly, as described at
https://canterafonseca.eu/kubernetes/certification/application/developer/cncf/k8s/cloud/native/computing/ckad/deployments/services/preparation-k8s-ckad-exam-part4-services.html#-services
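For example, the ClusterIP service from the question with both fields spelled out would look roughly like this (a sketch; the values simply mirror the manifests above):
apiVersion: v1
kind: Service
metadata:
  name: python-data-service2
spec:
  type: ClusterIP
  selector:
    app: python-web-selector2
    tier: backend
  ports:
  - port: 5000        # port the Service listens on
    targetPort: 5000  # containerPort of the pod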
On the other hand, if only one of the services needs to be exposed to the outside world, you can also create a headless service (see also my blog post above).

Ping doesn't work on Service ClusterIP addresses because they are virtual addresses implemented by iptables rules that redirect packets to the endpoints (pods).
You should be able to ping a pod, but not a service.
You can use curl or wget instead, for example:
wget -qO- POD_IP:80
or against the service:
wget -qO- http://your-service-name:port/yourpath
curl POD_IP:port_number

Are you able to connect to your pods at all? Maybe try port-forward to see if you can connect, and then check the connectivity between the two pods.
Lastly, check whether a default-deny network policy is set there; maybe you have some restrictions at the network level.
kubectl get networkpolicy -n <namespace>
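For reference, a default-deny ingress policy looks roughly like the sketch below; if something similar exists in the namespace, pod-to-pod traffic is blocked unless another policy explicitly allows it (the name default-deny-ingress is just an example):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress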

Try to look into the logs using kubectl logs PODNAME so that you know what's happening. At first sight, I think you need to expose the ports of both services: kubectl port-forward yourService PORT:PORT.
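For example, with the names from the question (a sketch; it assumes the app listens on port 5000 as in the deployment manifests):
kubectl logs python-data-deployment-7bd65dc685-htxmj
kubectl port-forward deployment/python-data-deployment 5000:5000
curl http://localhost:5000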

Related

kubernetes dns list all ips for service

I have a list of pods like so:
❯ kubectl get pods -l app=test-pod
NAME READY STATUS RESTARTS AGE
test-deployment-674667c867-jhvg4 1/1 Running 0 14m
test-deployment-674667c867-ssx6h 1/1 Running 0 14m
test-deployment-674667c867-t4crn 1/1 Running 0 14m
I have a service
kubectl get services
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default test-service ClusterIP 10.100.4.138 <none> 4000/TCP 15m
I perform a dns query:
❯ kubectl exec -ti test-deployment-674667c867-jhvg4 -- /bin/bash
root@test-deployment-674667c867-jhvg4:/# busybox nslookup test-service
Server: 10.100.0.10
Address: 10.100.0.10:53
Name: test-service.default.svc.cluster.local
Address: 10.100.4.138
My config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: python-http-server
        image: python:2.7
        command: ["/bin/bash"]
        args: ["-c", "echo \" Hello from $(hostname)\" > index.html; python -m SimpleHTTPServer 80"]
        ports:
        - name: http
          containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  selector:
    app: test-pod
  ports:
  - protocol: TCP
    port: 4000
    targetPort: http
How can I instead get a list of all the pods's ip addresses via a dns query?
Ideally I would like to perform an nslookup of a name and get a list of all the pod's ips in a list.
You have to use a headless service with selectors. It returns the ip addresses of the pods.
See here:
https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
.spec.clusterIP must be "None"
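A minimal sketch based on the manifests above (the name test-service-headless is hypothetical); with clusterIP set to None, an nslookup of the service returns one A record per ready pod instead of a single virtual IP:
apiVersion: v1
kind: Service
metadata:
  name: test-service-headless
spec:
  clusterIP: None
  selector:
    app: test-pod
  ports:
  - protocol: TCP
    port: 4000
    targetPort: http
Then, for example:
kubectl exec -ti test-deployment-674667c867-jhvg4 -- busybox nslookup test-service-headless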

Kubernetes : How to change service ports using patch

Let the following service be given:
service1.yml
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: ClusterIP
  ports:
  - port: 90
    name: port0
    targetPort: 40000
  selector:
    app: nginx
I apply it as follows: kubectl apply -f service1.yml
Now I want to change the ports section. I could edit the YAML and apply it again, but I prefer to use patch:
kubectl patch service service1 -p '{"spec":{"ports": [{"port": 80,"name":"anotherportspec"}]}}'
service/service1 patched
but this command adds a new port and keeps the old one:
$ kubectl describe svc service1
Name: service1
Namespace: abdelghani
Labels: <none>
Annotations: <none>
Selector: app=nginx
Type: ClusterIP
IP Families: <none>
IP: 10.98.186.21
IPs: <none>
Port: anotherportspec 80/TCP
TargetPort: 80/TCP
Endpoints: 10.39.0.3:80
Port: port0 90/TCP
TargetPort: 40000/TCP
Endpoints: 10.39.0.3:40000
Session Affinity: None
Events: <none>
My question:
Is it possible to change the ports section by replacing the old section with the one passed as a parameter?
Thx
As we've discussed (with @Abdelghani), the problem is in the patch strategy. Using the following command:
$ kubectl patch svc service1 --type merge -p '{"spec":{"ports": [{"port": 80,"name":"anotherportspec"}]}}' solves the problem.
The --type merge flag makes the patch replace the existing values.
Read more: kubectl-patch.
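For completeness, a JSON patch can also replace the whole ports list in one step, which removes the old entry explicitly (a sketch; the op/path values are standard JSON Patch, not something from the discussion above):
kubectl patch svc service1 --type json -p '[{"op":"replace","path":"/spec/ports","value":[{"name":"anotherportspec","port":80,"targetPort":80}]}]'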
As an alternative to patching you have a couple of options:
1. Edit your service using the kubectl edit command:
In the prompt: $ kubectl edit svc <service_name> -n <namespace>
i - to edit the service
Paste the proper port, then ESC, :wq - to save and update your service.
2. You can also manually edit the service manifest:
vi your-service.yaml
Update the port number and apply the changes:
$ kubectl apply -f your-service.yaml
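For instance, the relevant ports section of your-service.yaml after the edit could look like this (a sketch that mirrors the patched values above):
ports:
- name: anotherportspec
  port: 80
  targetPort: 80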

Error when trying to expose a docker container in minikube

I am using minikube to learn about Docker, but I have come across a problem.
I am following along with the examples in Kubernetes in Action, and I am trying to run a pod pulled from my Docker Hub account, but I cannot make this pod visible.
If I run
kubectl get pod
I can see that the pod is present.
NAME READY STATUS RESTARTS AGE
kubia 1/1 Running 1 6d22h
However, when I do the first step to create a service
kubectl expose rc kubia --type=LoadBalancer --name kubia-http service "kubia-http" exposed
I am getting this error returned
Error from server (NotFound): replicationcontrollers "kubia" not found
Error from server (NotFound): replicationcontrollers "service" not found
Error from server (NotFound): replicationcontrollers "kubia-http" not found
Error from server (NotFound): replicationcontrollers "exposed" not found
Any ideas why I am getting this error and what I need to do to correct it?
I am using minikube v1.13.1 on mac Mojave (v10.14.6), and I can't upgrade because I am using a company supplied machine, and all updates are controlled by HQ.
In the book, the command used is kubectl run kubia --image=luksa/kubia --port=8080 --generator=run/v1, which used to create a ReplicationController back when the book was written; however, that generator is now deprecated.
Now the kubectl run command creates a standalone pod without a ReplicationController. So to expose it you should run:
kubectl expose pod kubia --type=LoadBalancer --name kubia-http
In order to create a replicated workload it is recommended to use a Deployment. To create one using the CLI you can simply run
kubectl create deployment <name_of_deployment> --image=<image_to_be_used>
It will create a deployment and one pod, and it can then be exposed similarly to the previous pod exposure:
kubectl expose deployment kubia --type=LoadBalancer --name kubia-http
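Note that on minikube a LoadBalancer service's external IP typically stays pending unless minikube tunnel is running; a quick way to get a reachable URL is (a sketch):
minikube service kubia-http --url
curl $(minikube service kubia-http --url)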
Replication controllers are an older concept than Services and Deployments in Kubernetes; check out this answer.
A service template looks like the following:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: App
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Then after saving the service config into a file you do kubectl apply -f <filename>
Check out more at: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
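A possible usage, assuming the template above is saved as my-service.yaml (a hypothetical filename):
kubectl apply -f my-service.yaml
kubectl get svc my-service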
kubia.yaml
https://kubernetes.io/ko/docs/concepts/workloads/controllers/deployment/
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: seunggab/kubia:latest
        ports:
        - containerPort: 8080
shell
https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel
kubectl apply -f kubia.yaml
kubectl expose deployment kubia --type=LoadBalancer --port 8080 --name kubia-http
minikube tunnel &
curl 127.0.0.1:8080
If you change the number of replicas,
edit kubia.yaml (3 -> 5):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 5
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: seunggab/kubia:latest
        ports:
        - containerPort: 8080
re-apply
kubectl apply -f kubia.yaml
# as-is
NAME READY STATUS RESTARTS AGE
kubia-5f896dc5d5-qp7wl 1/1 Running 0 20s
kubia-5f896dc5d5-rqqm5 1/1 Running 0 20s
kubia-5f896dc5d5-vqgj9 1/1 Running 0 20s
# to-be
NAME READY STATUS RESTARTS AGE
kubia-5f896dc5d5-fsd49 0/1 ContainerCreating 0 6s
kubia-5f896dc5d5-qp7wl 1/1 Running 0 3m35s
kubia-5f896dc5d5-rqqm5 1/1 Running 0 3m35s
kubia-5f896dc5d5-vqgj9 1/1 Running 0 3m35s
kubia-5f896dc5d5-x84fr 1/1 Running 0 6s
ref: https://github.com/seunggabi/kubernetes-in-action/wiki/2.-Kubernetes

Kubernetes ingress (hostNetwork=true), can't reach service by node IP - GCP

I am trying to expose a deployment using Ingress, where the DaemonSet has hostNetwork=true, which would allow me to skip the additional LoadBalancer layer and expose my service directly on the Kubernetes external node IP. Unfortunately I can't reach the Ingress controller from the external network.
I am running Kubernetes version 1.11.16-gke.2 on GCP.
I setup my fresh cluster like this:
gcloud container clusters get-credentials gcp-cluster
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
helm install --name ingress --namespace ingress-nginx --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true stable/nginx-ingress
I run the deployment:
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: gcr.io/google-samples/node-hello:1.0
        ports:
        - containerPort: 8080
EOF
Then I create service:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-node
EOF
and ingress resource:
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: hello-node-single-ingress
spec:
  backend:
    serviceName: hello-node
    servicePort: 80
EOF
I get the node external IP:
12:50 $ kubectl get nodes -o json | jq '.items[] | .status .addresses[] | select(.type=="ExternalIP") | .address'
"35.197.204.75"
Check if ingress is running:
12:50 $ kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
hello-node-single-ingress * 35.197.204.75 80 8m
12:50 $ kubectl get pods --namespace ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-ingress-controller-7kqgz 1/1 Running 0 23m
ingress-nginx-ingress-default-backend-677b99f864-tg6db 1/1 Running 0 23m
12:50 $ kubectl get svc --namespace ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-ingress-controller ClusterIP 10.43.250.102 <none> 80/TCP,443/TCP 24m
ingress-nginx-ingress-default-backend ClusterIP 10.43.255.43 <none> 80/TCP 24m
Then trying to connect from the external network:
curl 35.197.204.75
Unfortunately it times out
On the Kubernetes GitHub pages there is a page regarding the ingress-nginx (hostNetwork: true) setup:
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network
which mentions:
"This approach does not leverage any Service object to expose the NGINX Ingress controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it."
I've tried to follow that and delete ingress-nginx services:
kubectl delete svc --namespace ingress-nginx ingress-nginx-ingress-controller ingress-nginx-ingress-default-backend
but this doesn't help.
Any ideas how to set up the Ingress on the node's external IP? What am I doing wrong? The amount of confusion over running Ingress reliably without the LB overwhelms me. Any help much appreciated!
EDIT:
When another service accessing my deployment via NodePort is created:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node2
spec:
  ports:
  - port: 80
    targetPort: 8080
  type: NodePort
  selector:
    app: hello-node
EOF
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node ClusterIP 10.47.246.91 <none> 80/TCP 2m
hello-node2 NodePort 10.47.248.51 <none> 80:31151/TCP 6s
I still can't access my service e.g. using: curl 35.197.204.75:31151.
However, when I create a third service of type LoadBalancer:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node3
spec:
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
  selector:
    app: hello-node
EOF
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node ClusterIP 10.47.246.91 <none> 80/TCP 7m
hello-node2 NodePort 10.47.248.51 <none> 80:31151/TCP 4m
hello-node3 LoadBalancer 10.47.250.47 35.189.106.111 80:31367/TCP 56s
I can access my service using the external LB IP: 35.189.106.111.
The problem was missing firewall rules on GCP.
Found the answer: https://stackoverflow.com/a/42040506/2263395
Running:
gcloud compute firewall-rules create myservice --allow tcp:80,tcp:30301
Where 80 is the ingress port and 30301 is the NodePort port. In production you would probably use just the ingress port.
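To double-check, you could list the rule and then hit the node's external IP from the question (a sketch; 35.197.204.75 is the node IP shown above, and the ingress listens on port 80 via the host network):
gcloud compute firewall-rules list --filter="name=myservice"
curl http://35.197.204.75/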

Can't resolve 'kubernetes' by skydns service in Kubernetes

core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes
Server: 10.100.0.10
Address 1: 10.100.0.10
nslookup: can't resolve 'kubernetes'
core@core-1-94 ~ $ kubectl get svc --namespace=kube-system
NAME LABELS SELECTOR IP(S) PORT(S)
kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.100.0.10 53/UDP
53/TCP
kube-ui k8s-app=kube-ui,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeUI k8s-app=kube-ui 10.100.110.236 80/TCP
core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes 10.100.0.10:53
Server: 10.100.0.10
Address 1: 10.100.0.10
nslookup: can't resolve 'kubernetes'
core@core-1-94 ~ $ kubectl get endpoints --namespace=kube-system
NAME ENDPOINTS
kube-dns 10.244.31.16:53,10.244.31.16:53
kube-ui 10.244.3.2:8080
core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes 10.244.31.16:53
Server: 10.244.31.16
Address 1: 10.244.31.16
Name: kubernetes
Address 1: 10.100.0.1
I think the kube-dns service is not available.
The skydns-svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.100.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
Who can help?
For DNS to work, the kubelet needs to be passed the flags --cluster_dns= and --cluster_domain=cluster.local at startup. These flags aren't included in the set of flags passed to your kubelet, so the kubelet won't try to contact the DNS pod that you've created for name resolution. To fix this, modify the kubelet startup script to add these two flags, and when you create the DNS service make sure that you set the same IP address that you passed to the --cluster_dns flag as the portalIP (now clusterIP) field of the service spec, as in the manifest above.
For any other information, have a look there.
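A minimal sketch of the relevant kubelet startup flags, assuming the DNS service keeps the clusterIP 10.100.0.10 from the manifest above:
kubelet ... \
  --cluster_dns=10.100.0.10 \
  --cluster_domain=cluster.local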