kubernetes cannot ping another service

DNS resolution looks fine, but I cannot ping my service. What could be the reason?
From another pod in the cluster:
$ ping backend
PING backend.default.svc.cluster.local (10.233.14.157) 56(84) bytes of data.
^C
--- backend.default.svc.cluster.local ping statistics ---
36 packets transmitted, 0 received, 100% packet loss, time 35816ms
EDIT:
The service definition:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: backend
  name: backend
spec:
  ports:
  - name: api
    protocol: TCP
    port: 10000
  selector:
    app: backend
The deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      run: backend
  template:
    metadata:
      labels:
        run: backend
    spec:
      containers:
      - name: backend
        image: nha/backend:latest
        imagePullPolicy: Always
        ports:
        - name: api
          containerPort: 10000
I can curl my service from the same container:
kubectl exec -it backend-7f67c8cbd8-mf894 -- /bin/bash
root@backend-7f67c8cbd8-mf894:/# curl localhost:10000/my-endpoint
{"ok": "true"}
It looks like the endpoint on port 10000 does not get exposed though:
kubectl get ep
NAME      ENDPOINTS   AGE
backend   <none>      2h

Ping doesn't work with a Service's cluster IP like 10.233.14.157 because it is a virtual IP: kube-proxy only forwards traffic addressed to the Service's declared ports and protocols, and ICMP is not among them. You should be able to ping a specific pod, but not a Service.

You can't ping a service. You can curl it.
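Note that the empty endpoints list in the question is a separate problem: the Service selects app: backend, while the Deployment's pod template is labeled run: backend, so no pods match and the Service has nothing to forward to. Once the labels agree, test the Service at its declared port instead of pinging it. A minimal sketch (the throwaway pod name curl-test and the curlimages/curl image are illustrative choices, not from the question):

kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- curl -s http://backend.default.svc.cluster.local:10000/my-endpoint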

Related

I expose my pod in kubernetes but I can't seem to establish a connection with it

I am trying to expose a deployment I made on minikube:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-test
  labels:
    app: debian
spec:
  replicas: 1
  selector:
    matchLabels:
      app: debian
  strategy: {}
  template:
    metadata:
      labels:
        app: debian
    spec:
      containers:
      - image: agracia10/debian_bash:latest
        name: debian
        ports:
        - containerPort: 8006
        resources: {}
      restartPolicy: Always
status: {}
I decided to follow what is written here.
I try to expose the deployment using the following command:
kubectl expose pod deployment-test-8497d6f458-xxhgm --type=NodePort --port=8080 --target-port=80
but when I then try to access the service created by the expose command, using the URL provided by
minikube service deployment-test-8497d6f458-xxhgm --url
Packet Sender throws an error when I try to connect to the service:
packet sender log
I'm not really sure what the reason for this could be. I think it has something to do with the fact that when I get the services, the EXTERNAL-IP field is empty. Also, when I retrieve the node IP with minikube ip it gives one address, but minikube service --url gives a 127.0.0.1 address. In any case, using either one does not work.
It's not working due to a port configuration mismatch.
Your deployment's container is listening on port 8006, but you exposed port 8080 with --target-port=80, so traffic never reaches the container.
The ideal flow of traffic goes like:
Service (NodePort, ClusterIP, or any other type) > Deployment > Pods
Below is an example deployment and service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-app-server-instance
  labels:
    app: blog-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog-app
  template:
    metadata:
      labels:
        app: blog-app
    spec:
      containers:
      - name: blog-app-server
        image: agracia10/debian_bash:latest
        ports:
        - containerPort: 8006
---
apiVersion: v1
kind: Service
metadata:
  name: blog-app-service
  labels:
    app: blog-app
spec:
  selector:
    app: blog-app
  type: NodePort
  ports:
  - port: 80
    nodePort: 31364
    targetPort: 8006
    protocol: TCP
    name: http
The things I have changed are the image and the target port.
Once your NodePort service is up and running, you can send requests to port 80 (via the ClusterIP) or 31364 (via the node);
the service will redirect them internally to the target port, 8006, on the container.
With this command you exposed your deployment on the wrong target port:
kubectl expose pod deployment-test-8497d6f458-xxhgm --type=NodePort --port=8080 --target-port=80
Ideally it should be 8006.
As far as I know, the simplest way to expose the deployment to a service is the command below; you expose the deployment, not the pod (adding --target-port so traffic reaches the container's 8006):
kubectl expose deployment deployment-test --port 80 --target-port 8006
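To verify the NodePort service from the example above end to end on minikube, a sketch (this assumes the container really serves HTTP on 8006, and is not part of the original answer):

minikube service blog-app-service --url
curl "http://$(minikube ip):31364"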

how to access service on rpi k8s cluster

I built a k8s cluster with the help of this guide: rpi+k8s. I got a basic nginx service up and running, and I can curl from the master node to the worker node to get the nginx welcome page content using:
k exec nginx-XXX-XXX -it -- curl localhost:80
I tried following suggestions in the following SO posts:
link 1
link 2
However, I still can't access a simple nginx service on the worker node from my local computer (Linux). I used NODE IP:NODE PORT. I also installed kubefwd and ran sudo kubefwd svc -n nginx-ns, but I don't see the expected output where it would show the port forwards. Any help would be appreciated. Thanks.
Output:
NAME                TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/nginx-svc   NodePort   10.101.19.230   <none>        80:32749/TCP   168m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   3/3     3            3           168m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-54485b444f   3         3         3       168m
And here is the yaml file:
kind: Namespace
apiVersion: v1
metadata:
  name: nginx-ns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-ns
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19-alpine
        ports:
        - name: nginxport
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    name: nginxport
    port: 80
    targetPort: 80
    nodePort: 32749
  type: NodePort
  selector:
    app: backend
You need to update your service nginx-svc, where you have used two selectors.
Remove the part below:
selector:
  app: backend
Updated service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    name: nginxport
    port: 80
    targetPort: 80
    nodePort: 32749
  type: NodePort
Then try this for port-forwarding:
kubectl port-forward -n nginx-ns svc/nginx-svc 8080:80
The template is:
kubectl port-forward -n <namespace> svc/<svc_name> <local_port>:<svc_port>
Then try in the browser with 127.0.0.1:8080 or localhost:8080.
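With the stray selector removed, the endpoints should populate and the NodePort path from the question should work as well. A quick check from your local machine (a sketch; it assumes the worker node is reachable and uses the nodePort from the manifest):

kubectl get endpoints -n nginx-ns nginx-svc
curl http://<node-ip>:32749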

Access redis by service name in Kubernetes

I created a redis deployment and service in Kubernetes. I can access redis from another pod by service IP, but I can't access it by service name.
The redis yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: myapp-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      component: redis
  template:
    metadata:
      labels:
        component: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: myapp-ns
spec:
  type: ClusterIP
  selector:
    component: redis
  ports:
  - port: 6379
    targetPort: 6379
I applied your file, and I am able to ping and telnet to the service both from within the same namespace and from a different namespace. To test this, I created pods in the same namespace and in a different namespace and installed telnet and ping. Then I exec'ed into them and did the below tests:
Same Namespace
kubectl exec -it <same-namespace-pod> -- /bin/bash
# ping redis
PING redis.<redis-namespace>.svc.cluster.local (172.20.211.84) 56(84) bytes of data.
# telnet redis 6379
Trying 172.20.211.84...
Connected to redis.<redis-namespace>.svc.cluster.local.
Escape character is '^]'.
Different Namespace
kubectl exec -it <different-namespace-pod> -- /bin/bash
# ping redis.<redis-namespace>.svc.cluster.local
PING redis.test.svc.cluster.local (172.20.211.84) 56(84) bytes of data.
# telnet redis.<redis-namespace>.svc.cluster.local 6379
Trying 172.20.211.84...
Connected to redis.<redis-namespace>.svc.cluster.local.
Escape character is '^]'.
If you are not able to do that due to DNS resolution issues, look at /etc/resolv.conf in your pod to make sure it has the search prefixes svc.cluster.local and cluster.local.
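For reference, a pod's /etc/resolv.conf in a cluster using the default cluster.local domain usually looks roughly like this (the nameserver address is cluster-specific; this is only an illustration):

search <pod-namespace>.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5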
I created a redis deployment and service in kubernetes, I can access redis from another pod by service ip, but I can't access it by service name
Keep in mind that you can use the bare Service name to access the backend Pods it exposes only from within the same namespace. Looking at your Deployment and Service yaml manifests, we can see they're both deployed in the myapp-ns namespace, so only a Pod deployed in that namespace can reach your Service by its short name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: myapp-ns ### 👈
spec:
  replicas: 1
  selector:
    matchLabels:
      component: redis
  template:
    metadata:
      labels:
        component: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: myapp-ns ### 👈
spec:
  type: ClusterIP
  selector:
    component: redis
  ports:
  - port: 6379
    targetPort: 6379
So if you deploy the following Pod:
apiVersion: v1
kind: Pod
metadata:
  name: redis-client
  namespace: myapp-ns ### 👈
spec:
  containers:
  - name: redis-client
    image: debian
    command: ["sleep", "infinity"] # keep the container running so you can exec into it
you will be able to access your Service by its name, so the following commands (provided you've installed all required tools) will work:
redis-cli -h redis
telnet redis 6379
However, if your redis-client Pod is deployed to a completely different namespace, you will need to use the fully qualified domain name (FQDN), which is built according to the rule described here:
redis-cli -h redis.myapp-ns.svc.cluster.local
telnet redis.myapp-ns.svc.cluster.local 6379
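To confirm the FQDN resolves from another namespace, a quick sketch (the dns-test pod name and busybox image are illustrative choices):

kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup redis.myapp-ns.svc.cluster.local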

Kubernetes's LoadBalancer yaml not working even though CLI `expose` function works

This is my Service and Deployment yaml that I am running on minikube:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-hello-world
  labels:
    app: node-hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-hello-world
  template:
    metadata:
      labels:
        app: node-hello-world
    spec:
      containers:
      - name: node-hello-world
        image: node-hello-world:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: node-hello-world-load-balancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 8080
    nodePort: 30002
  selector:
    name: node-hello-world
Results:
$ minikube service node-hello-world-load-balancer --url
http://192.168.99.101:30002
$ curl http://192.168.99.101:30002
curl: (7) Failed to connect to 192.168.99.101 port 30002: Connection refused
However, running the following CLI worked:
$ kubectl expose deployment node-hello-world --type=LoadBalancer
$ minikube service node-hello-world --url
http://192.168.99.101:30130
$ curl http://192.168.99.101:30130
Hello World!
What am I doing wrong with my LoadBalancer yaml config?
You have configured the service selector wrong:
selector:
  name: node-hello-world
It should be:
selector:
  app: node-hello-world
https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
You can debug this by describing the service and seeing that its endpoints list is empty, which means no pods are mapped to the service:
kubectl describe svc node-hello-world-load-balancer | grep -i endpoints
Endpoints: <none>
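For completeness, the corrected Service from the question would look like this (only the selector key changes):

apiVersion: v1
kind: Service
metadata:
  name: node-hello-world-load-balancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 8080
    nodePort: 30002
  selector:
    app: node-hello-world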

Service not available in Kubernetes

I have a minikube cluster running locally (v0.17.1), with two deployments: one is a Redis instance and one is a custom app that is trying to connect to the Redis instance. My configuration is more or less copy/pasted from the official docs and the Kubernetes guestbook example.
Service definition and deployment:
apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller
    tier: backend
    role: service
  ports:
  - port: 6379
    targetPort: 6379
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: poller-redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: poller-redis
        tier: backend
        role: database
        target: poller
    spec:
      containers:
      - name: poller-redis
        image: gcr.io/jmen-1266/jmen-redis:a67b5f4bfd8ea8441ed66a8fcb6596f276017a1c
        ports:
        - containerPort: 6379
        env:
        - name: GET_HOSTS_FROM
          value: dns
      imagePullSecrets:
      - name: gcr-json-key
App deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: poller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: poller
        tier: backend
        role: service
    spec:
      containers:
      - name: poller
        image: gcr.io/jmen-1266/poller:a96a452292e894e46339309cc024cac67647cc25
        imagePullPolicy: Always
      imagePullSecrets:
      - name: gcr-json-key
Relevant (I hope) Kubernetes info:
$ kubectl get services
NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes     10.0.0.1     <none>        443/TCP    24d
poller-redis   10.0.0.137   <none>        6379/TCP   20d
$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
poller         1         1         1            1           12d
poller-redis   1         1         1            1           4d
$ kubectl get endpoints
NAME           ENDPOINTS         AGE
kubernetes     10.0.2.15:8443    24d
poller-redis   172.17.0.7:6379   20d
Inside the poller pod (custom app), I get environment variables created for Redis:
# env | grep REDIS
POLLER_REDIS_SERVICE_HOST=10.0.0.137
POLLER_REDIS_SERVICE_PORT=6379
POLLER_REDIS_PORT=tcp://10.0.0.137:6379
POLLER_REDIS_PORT_6379_TCP_ADDR=10.0.0.137
POLLER_REDIS_PORT_6379_TCP_PORT=6379
POLLER_REDIS_PORT_6379_TCP_PROTO=tcp
POLLER_REDIS_PORT_6379_TCP=tcp://10.0.0.137:6379
However, if I try to connect to that port, I cannot. Doing something like:
nc -vz poller-redis 6379
fails.
What I have noticed is that I cannot access the Redis service via its ClusterIP but I can via the IP of the pod running Redis.
Any ideas, please?
Figured this out in the end; it looks like I misunderstood how service selectors work in Kubernetes.
My original service definition was:
apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller
    tier: backend
    role: service
  ports:
  - port: 6379
    targetPort: 6379
The problem is that spec.selector did not match the labels on the Pods I wanted the Service to target. The rule is that the Service's spec.selector must match the Pod template's labels (not the Service's own metadata.labels); in my setup the Pod labels happen to be identical to the Service's metadata.labels, which is why making selector and labels the same fixed it. Now my service definition looks like:
apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller-redis
    tier: backend
    role: database
    target: poller
  ports:
  - port: 6379
    targetPort: 6379
I also now use straight up DNS lookup (i.e. ping poller-redis) rather than trying to connect to localhost:6379 from my target pods.
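A quick way to confirm what a selector actually matches, using the labels from this question, is to list pods with the same label set and then check the endpoints object (a sketch):

kubectl get pods -l app=poller-redis,tier=backend,role=database,target=poller
kubectl get endpoints poller-redis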
It could be related to kube-dns possibly not running.
From inside the poller pod, can you verify that poller-redis resolves?
Does the following work from inside the container?
nc -v 10.0.0.137 6379
One kube-dns service running in kube-system is enough. Did you run nc -vz poller-redis 6379 from a pod in the same namespace as the redis service?
poller-redis is the short DNS name of the redis service and resolves only within the same namespace; it will not work from a different namespace.
Also, kube-dns is not available on the nodes themselves, so if you want to run nc or a redis client from a node, use the redis service's ClusterIP instead of the DNS name.
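For example, from a node, using the ClusterIP shown in the question's kubectl get services output:

nc -vz 10.0.0.137 6379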