Service not available in Kubernetes

I have a minikube cluster running locally (v0.17.1), with two deployments: one is a Redis instance and one is a custom app that is trying to connect to the Redis instance. My configuration is more or less copy/pasted from the official docs and the Kubernetes guestbook example.
Service definition and deployment:
apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller
    tier: backend
    role: service
  ports:
  - port: 6379
    targetPort: 6379
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: poller-redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: poller-redis
        tier: backend
        role: database
        target: poller
    spec:
      containers:
      - name: poller-redis
        image: gcr.io/jmen-1266/jmen-redis:a67b5f4bfd8ea8441ed66a8fcb6596f276017a1c
        ports:
        - containerPort: 6379
        env:
        - name: GET_HOSTS_FROM
          value: dns
      imagePullSecrets:
      - name: gcr-json-key
App deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: poller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: poller
        tier: backend
        role: service
    spec:
      containers:
      - name: poller
        image: gcr.io/jmen-1266/poller:a96a452292e894e46339309cc024cac67647cc25
        imagePullPolicy: Always
      imagePullSecrets:
      - name: gcr-json-key
Relevant (I hope) Kubernetes info:
$ kubectl get services
NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes     10.0.0.1     <none>        443/TCP    24d
poller-redis   10.0.0.137   <none>        6379/TCP   20d
$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
poller         1         1         1            1           12d
poller-redis   1         1         1            1           4d
$ kubectl get endpoints
NAME           ENDPOINTS         AGE
kubernetes     10.0.2.15:8443    24d
poller-redis   172.17.0.7:6379   20d
Inside the poller pod (custom app), I get environment variables created for Redis:
# env | grep REDIS
POLLER_REDIS_SERVICE_HOST=10.0.0.137
POLLER_REDIS_SERVICE_PORT=6379
POLLER_REDIS_PORT=tcp://10.0.0.137:6379
POLLER_REDIS_PORT_6379_TCP_ADDR=10.0.0.137
POLLER_REDIS_PORT_6379_TCP_PORT=6379
POLLER_REDIS_PORT_6379_TCP_PROTO=tcp
POLLER_REDIS_PORT_6379_TCP=tcp://10.0.0.137:6379
However, if I try to connect to that port, I cannot. Doing something like:
nc -vz poller-redis 6379
fails.
What I have noticed is that I cannot access the Redis service via its ClusterIP but I can via the IP of the pod running Redis.
Any ideas, please?

Figured this out in the end; it looks like I misunderstood how service selectors work in Kubernetes. My original service definition was:
apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller
    tier: backend
    role: service
  ports:
  - port: 6379
    targetPort: 6379
The problem is that spec.selector has nothing to do with the service's own metadata.labels: it must match the labels on the pods the service should route traffic to, i.e. the labels in the deployment's pod template. My selector (app: poller, tier: backend, role: service) was selecting my app pods instead of the Redis pods, which carry app: poller-redis, tier: backend, role: database, target: poller. Now my service definition looks like:
apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller-redis
    tier: backend
    role: database
    target: poller
  ports:
  - port: 6379
    targetPort: 6379
I also now use straight up DNS lookup (i.e. ping poller-redis) rather than trying to connect to localhost:6379 from my target pods.
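A quick way to confirm the selector now matches is to check that the service has endpoints and that pods actually carry those labels (a sketch using the names from the manifests above):

# The ENDPOINTS column should show <redis-pod-ip>:6379
kubectl get endpoints poller-redis
# Should list the Redis pod created by the poller-redis deployment
kubectl get pods -l app=poller-redis,tier=backend,role=database,target=poller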

It could be that kube-dns is not running.
From inside the poller pod, can you verify that poller-redis resolves?
Does the following work from inside the container?
nc -v 10.0.0.137 6379

One kube-dns service running in kube-system is enough. Did you run nc -vz poller-redis 6379 from a pod in the same namespace as the Redis service?
poller-redis is the short DNS name of the Redis service and only resolves within the same namespace. It will not work from a different namespace; use the fully qualified name there.
kube-dns is not used by the nodes themselves, so if you want to run nc or a Redis client on a node, use the ClusterIP of the Redis service instead of the DNS name.
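A quick way to check both points (a sketch; assumes the pod image ships nslookup, and the kube-dns label can vary by version):

# Is kube-dns running?
kubectl get pods -n kube-system -l k8s-app=kube-dns
# Short name resolves only from a pod in the same namespace:
kubectl exec -it <poller-pod> -- nslookup poller-redis
# The fully qualified name works from any namespace:
kubectl exec -it <poller-pod> -- nslookup poller-redis.default.svc.cluster.local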

Related

how to access service on rpi k8s cluster

I built a k8s cluster with the help of this guide: rpi+k8s. I got a basic nginx service up and running, and I can curl from the master node to the worker node to get the nginx welcome page content using:
k exec nginx-XXX-XXX -it -- curl localhost:80
I tried the suggestions in the following SO posts:
link 1
link 2
However, I still can't access a simple nginx service on the worker node from my local computer (linux). I used NODE_IP:NODE_PORT. I also installed kubefwd and ran sudo kubefwd svc -n nginx-ns, but I don't see the expected output showing the port forwards. Any help would be appreciated. Thanks.
Output:
NAME                TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/nginx-svc   NodePort   10.101.19.230   <none>        80:32749/TCP   168m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   3/3     3            3           168m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-54485b444f   3         3         3       168m
And here is the yaml file:
kind: Namespace
apiVersion: v1
metadata:
  name: nginx-ns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-ns
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19-alpine
        ports:
        - name: nginxport
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    name: nginxport
    port: 80
    targetPort: 80
    nodePort: 32749
  type: NodePort
  selector:
    app: backend
You need to update your service nginx-svc, where you have used two selectors.
Remove this part:
  selector:
    app: backend
Updated service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    name: nginxport
    port: 80
    targetPort: 80
    nodePort: 32749
  type: NodePort
Then try port-forwarding:
kubectl port-forward -n nginx-ns svc/nginx-svc 8080:80
The general form is:
kubectl port-forward -n <namespace> svc/<svc_name> <local_port>:<svc_port>
Then open 127.0.0.1:8080 or localhost:8080 in the browser.
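Alternatively, since the service is a NodePort, it should also be reachable directly on a node's IP from your local machine (a sketch; take the address from the INTERNAL-IP column):

# Find a node's IP
kubectl get nodes -o wide
# Hit the NodePort from your local computer
curl http://<node-ip>:32749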

Though the external IP is resolved, the website returns connection timed out in Kubernetes GKE

I have created a k8s deployment and service YAML for a static website. The external IP address is also resolved for the Kubernetes service. But when I try to access the website through curl or a browser, the connection times out.
Dockerfile:
FROM nginx:alpine
COPY . /usr/share/nginx/html
K8s deployment yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ohno-website
  labels:
    app: ohno-website
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ohno-website
  template:
    metadata:
      labels:
        app: ohno-website
    spec:
      containers:
      - name: ohno-website
        image: gkganeshr/ohno-website:v0.1
        imagePullPolicy: Always
        ports:
        - containerPort: 80
k8s service yml:
apiVersion: v1
kind: Service
metadata:
  name: ohno-website
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  selector:
    app: ohno-website
ohno_fooserver@cloudshell:~ (fourth-webbing-279817)$ kubectl get svc
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
kubernetes     ClusterIP      10.16.0.1      <none>          443/TCP        8h
ohno-website   LoadBalancer   10.16.12.162   34.70.213.174   80:31977/TCP   7h4m
The target port defined in the service definition YAML is incorrect. It should match the container port from the pod definition in the deployment YAML:
targetPort: 9376
should be changed to
targetPort: 80
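A quick way to verify the fix is to check that the service's endpoints now point at port 80 on the pod (a sketch):

# Before the fix this shows <pod-ip>:9376, a port nothing listens on;
# after the fix it should show <pod-ip>:80
kubectl describe svc ohno-website | grep -i endpoints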

Kubernetes deployment not publicly accessible

I'm trying to access a deployment on our Kubernetes cluster on Azure. This is an Azure Kubernetes Service (AKS) cluster. Here are the configuration files for the deployment and the service that should expose the deployment.
Configurations
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mira-api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mira-api
  template:
    metadata:
      labels:
        app: mira-api
    spec:
      containers:
      - name: backend
        image: registry.gitlab.com/izit/mira-backend
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
      imagePullSecrets:
      - name: regcred
apiVersion: v1
kind: Service
metadata:
  name: mira-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    run: mira-api
When I check the cluster after applying these configurations, I see the pod running correctly. The service is also created and has a public IP assigned.
Even so, I don't see any requests getting handled. I get an error message in my browser saying the site is inaccessible. Any ideas what I could have configured wrong?
Your service selector labels and pod labels do not match.
You have the app: mira-api label in the deployment's pod template but run: mira-api in the service's label selector.
Change your service selector label to match the pod label as follows:
apiVersion: v1
kind: Service
metadata:
  name: mira-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: mira-api
To make sure your service is selecting the backend pods, you can run the kubectl describe svc <svc name> command and check whether it has any Endpoints listed.
# kubectl describe svc postgres
Name:              postgres
Namespace:         default
Labels:            app=postgres
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"postgres"},"name":"postgres","namespace":"default"},"s...
Selector:          app=postgres
Type:              ClusterIP
IP:                10.106.7.183
Port:              default  5432/TCP
TargetPort:        5432/TCP
Endpoints:         10.244.2.117:5432   <------- This line
Session Affinity:  None
Events:            <none>

Can't access service in my local kubernetes cluster using NodePort

I have a manifest like the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-redis
spec:
  selector:
    matchLabels:
      app: my-redis
  replicas: 1
  template:
    metadata:
      labels:
        app: my-redis
    spec:
      containers:
      - name: my-redis
        image: redis
        ports:
        - name: redisport1
          containerPort: 6379
          hostPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: my-redis
spec:
  type: NodePort
  selector:
    name: my-redis
  ports:
  - name: redisport1
    port: 6379
    targetPort: 6379
    nodePort: 30036
    protocol: TCP
This is a sample that reproduces my problem. My intention here is to create a simple cluster that has a pod with a Redis container in it, exposed to my localhost. Still, kubectl get services gives me the following output:
redis-service NodePort 10.107.233.66 <none> 6379:30036/TCP 10s
If I swap NodePort for LoadBalancer, I get an external IP, but the port still doesn't work.
Can you help me identify why I'm failing to map port 6379 to my localhost, please?
Thanks,
In order to access your app through the node port, you have to use this URL:
http://{node ip}:{node port}
If you are using minikube, the minikube VM's IP is the node IP. You can retrieve it using the minikube ip command.
You can also use the minikube service redis-service --url command to get the URL for accessing your application through the node port.
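Putting that together (a sketch; 30036 is the nodePort from the manifest above):

# Test the NodePort from your host machine
nc -vz "$(minikube ip)" 30036
# Or let minikube print the full URL for you
minikube service redis-service --url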
For anybody who's interested in the question, I found the problem. After Ijaz's fix, I also needed to change the selector to match the label in the pod, it was a typo on my end!
pod has "app=my-redis" tag, but Service selector had "name=my-redis". Matching them fixed the access problem.
You don't need the hostPort:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-redis
spec:
  selector:
    matchLabels:
      app: my-redis
  replicas: 1
  template:
    metadata:
      labels:
        app: my-redis
    spec:
      containers:
      - name: my-redis
        image: redis
        ports:
        - name: redisport1
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: my-redis
spec:
  type: NodePort
  selector:
    name: my-redis
  ports:
  - name: redisport1
    port: 6379
    targetPort: 6379
    nodePort: 30036
    protocol: TCP
Now the nodePort 30036 can be used to access the service on any worker node.
If the cluster node is somewhere else and you want to make the port available on your local client, just do kubectl port-forward:
kubectl port-forward svc/redis-service 6379:6379
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
Notes:
On-prem installs of k8s don't support Services of type LoadBalancer out of the box
The ClusterIP is a virtual IP that is only routable inside the cluster
The node IP is the IP of a machine that is running the k8s cluster

How to make two Kubernetes Services talk to each other?

Currently, I have working K8s API pods in a K8s service that connects to a K8s Redis service, which has K8s pods of its own. The problem is that I am using NodePort, meaning BOTH are exposed to the public. I only want the API accessible to the public. The issue is that if I make the Redis service not public, the API can't see it. Is there a way to connect two Services without exposing one to the public?
This is my API service yaml:
apiVersion: v1
kind: Service
metadata:
  name: app-api-svc
spec:
  selector:
    app: app-api
    tier: api
  ports:
  - protocol: TCP
    port: 5000
    nodePort: 30400
  type: NodePort
And this is my Redis service yaml:
apiVersion: v1
kind: Service
metadata:
  name: app-api-redis-svc
spec:
  selector:
    app: app-api-redis
    tier: celery_broker
  ports:
  - protocol: TCP
    port: 6379
    nodePort: 30537
  type: NodePort
First, configure the Redis service as a ClusterIP service. It will be private, visible only to other services. This can be done by removing the line with the type option.
apiVersion: v1
kind: Service
metadata:
  name: app-api-redis-svc
spec:
  selector:
    app: app-api-redis
    tier: celery_broker
  ports:
  - protocol: TCP
    port: 6379
    targetPort: [the port exposed by the Redis pod]
Finally, when you configure the API to reach Redis, the address should be app-api-redis-svc:6379.
And that's all. I have a lot of services communicating with each other in this way. If this doesn't work for you, let me know in the comments.
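For example, the API deployment could hand that address to the app through an environment variable (a sketch; the variable name REDIS_URL and the image are illustrative, not from the original manifests):

    spec:
      containers:
      - name: app-api
        image: my-api-image   # illustrative
        env:
        - name: REDIS_URL
          # Resolves via cluster DNS; no NodePort needed for this
          value: redis://app-api-redis-svc:6379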
I'm going to try to take the best from all answers and my own research and make a short guide that I hope you will find helpful:
1. Test connectivity
Connect to a different pod, e.g. a ruby pod:
kubectl exec -it some-pod-name -- /bin/sh
Verify it can ping the service in question:
ping redis
Can it connect to the port? (I found telnet did not work for this.)
nc -zv redis 6379
2. Verify your service selectors are correct
If your service config looks like this:
kind: Service
apiVersion: v1
metadata:
  name: redis
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Verify that those selectors are also set on your pods:
kubectl get pods --selector=app=redis,role=master,tier=backend
Confirm that your service is tied to your pods by running:
$ kubectl describe service redis
Name:              redis
Namespace:         default
Labels:            app=redis
                   role=master
                   tier=backend
Annotations:       <none>
Selector:          app=redis,role=master,tier=backend
Type:              ClusterIP
IP:                10.47.250.121
Port:              <unset>  6379/TCP
Endpoints:         10.44.0.16:6379
Session Affinity:  None
Events:            <none>
Check the Endpoints: field and confirm it's not blank.
More info can be found at:
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#my-service-is-missing-endpoints
I'm not sure about Redis, but I have a similar application: a Java web application running as a pod that is exposed to the outside world through a nodePort, and a mongodb container running as a pod.
In the webapp deployment specification, I map the app to the mongodb service by passing the service name as a parameter; I have pasted the specification below, and you can modify it accordingly. There should be a similar parameter for Redis as well, where you would use your Redis service name instead of "mongoservice", which is the name in my case.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: empappdepl
  labels:
    name: empapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: empapp
    spec:
      containers:
      - resources:
          limits:
            cpu: 0.2
        image: registryip:5000/employee:1
        imagePullPolicy: IfNotPresent
        name: wsemp
        ports:
        - containerPort: 8080
          name: wsemp
        command: ["java","-Dspring.data.mongodb.uri=mongodb://mongoservice/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
      imagePullSecrets:
      - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: empwhatever
  name: empservice
spec:
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: http
    nodePort: 30062
  type: NodePort
  selector:
    name: empapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongodbdepl
  labels:
    name: mongodb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongodb
    spec:
      containers:
      - resources:
          limits:
            cpu: 0.3
        image: mongo
        imagePullPolicy: IfNotPresent
        name: mongodb
        ports:
        - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongowhatever
  name: mongoservice
spec:
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP
  selector:
    name: mongodb
Note that the mongodb service doesn't need to be exposed as a NodePort.
Kubernetes enables inter-service communication by allowing services to reach other services using their service names.
In your scenario, the Redis service should be accessible from other services at
app-api-redis-svc.default:6379. Here default is the namespace under which your service is running.
This internally routes your requests to the Redis pod on the target container port.
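For reference, these are the names that should resolve from inside a pod (a sketch; assumes the default cluster domain cluster.local):

# short name, same namespace only
app-api-redis-svc
# namespace-qualified, works from any namespace
app-api-redis-svc.default
# fully qualified
app-api-redis-svc.default.svc.cluster.local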
Check out this link for the different service discovery options provided by Kubernetes.
Hope it helps