I am trying to install Velero for Kubernetes. During the installation, when installing MinIO, I changed its service type from ClusterIP to NodePort. My pods run successfully, and I can also see that the NodePort service is up and running.
master-k8s@masterk8s-virtual-machine:~/velero-v1.9.5-linux-amd64$ kubectl get pods -n velero -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
minio-8649b94fb5-vk7gv 1/1 Running 0 16m 10.244.1.102 node1k8s-virtual-machine <none> <none>
master-k8s@masterk8s-virtual-machine:~/velero-v1.9.5-linux-amd64$ kubectl get svc -n velero
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
minio NodePort 10.111.72.207 <none> 9000:31481/TCP 53m
When I try to access my service, the port number changes from 31481 to 45717 by itself. Every time I correct the port number and hit enter, it changes back to a new port, and I am not able to access my application.
This is my MinIO service file:
apiVersion: v1
kind: Service
metadata:
namespace: velero
name: minio
labels:
component: minio
spec:
type: NodePort
ports:
- port: 9000
targetPort: 9000
protocol: TCP
selector:
component: minio
What have I done so far?
I looked at the logs and everything shows successful, with no errors. I also tried it with a LoadBalancer service. With LoadBalancer the port does not change, but I am still not able to access the application.
I found nothing on Google about this issue.
I also checked the pods and services in all namespaces to see whether these port numbers are already being used. No services use these ports.
What do I want?
Can you please help me find out what causes my application to change its port? Where is the issue and how do I fix it? How can I access the application dashboard?
Update
This is the full manifest file. It may help to find my mistake.
apiVersion: v1
kind: Namespace
metadata:
name: velero
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: velero
name: minio
labels:
component: minio
spec:
strategy:
type: Recreate
selector:
matchLabels:
component: minio
template:
metadata:
labels:
component: minio
spec:
volumes:
- name: storage
emptyDir: {}
- name: config
emptyDir: {}
containers:
- name: minio
image: minio/minio:latest
imagePullPolicy: IfNotPresent
args:
- server
- /storage
- --config-dir=/config
env:
- name: MINIO_ACCESS_KEY
value: "minio"
- name: MINIO_SECRET_KEY
value: "minio123"
ports:
- containerPort: 9002
volumeMounts:
- name: storage
mountPath: "/storage"
- name: config
mountPath: "/config"
---
apiVersion: v1
kind: Service
metadata:
namespace: velero
name: minio
labels:
component: minio
spec:
# ClusterIP is recommended for production environments.
# Change to NodePort if needed per documentation,
# but only if you run Minio in a test/trial environment, for example with Minikube.
type: NodePort
ports:
- port: 9002
nodePort: 31482
targetPort: 9002
protocol: TCP
selector:
component: minio
---
apiVersion: batch/v1
kind: Job
metadata:
namespace: velero
name: minio-setup
labels:
component: minio
spec:
template:
metadata:
name: minio-setup
spec:
restartPolicy: OnFailure
volumes:
- name: config
emptyDir: {}
containers:
- name: mc
image: minio/mc:latest
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- "mc --config-dir=/config config host add velero http://minio:9000 minio minio123 && mc --config-dir=/config mb -p velero/velero"
volumeMounts:
- name: config
mountPath: "/config"
Edit 2: Logs of the pod
WARNING: MINIO_ACCESS_KEY and MINIO_SECRET_KEY are deprecated.
Please use MINIO_ROOT_USER and MINIO_ROOT_PASSWORD
Formatting 1st pool, 1 set(s), 1 drives per set.
WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
MinIO Object Storage Server
Copyright: 2015-2023 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2023-01-25T00-19-54Z (go1.19.4 linux/amd64)
Status: 1 Online, 0 Offline.
API: http://10.244.1.108:9000 http://127.0.0.1:9000
Console: http://10.244.1.108:33045 http://127.0.0.1:33045
Documentation: https://min.io/docs/minio/linux/index.html
Warning: The standard parity is set to 0. This can lead to data loss.
Edit 3: Logs of the pod
master-k8s@masterk8s-virtual-machine:~/velero-1.9.5$ kubectl logs minio-8649b94fb5-qvzfh -n velero
WARNING: MINIO_ACCESS_KEY and MINIO_SECRET_KEY are deprecated.
Please use MINIO_ROOT_USER and MINIO_ROOT_PASSWORD
Formatting 1st pool, 1 set(s), 1 drives per set.
WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
MinIO Object Storage Server
Copyright: 2015-2023 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2023-01-25T00-19-54Z (go1.19.4 linux/amd64)
Status: 1 Online, 0 Offline.
API: http://10.244.2.131:9000 http://127.0.0.1:9000
Console: http://10.244.2.131:36649 http://127.0.0.1:36649
Documentation: https://min.io/docs/minio/linux/index.html
Warning: The standard parity is set to 0. This can lead to data loss.
You can set the nodePort number explicitly in the port config so that it won't be assigned automatically.
Try this Service:
apiVersion: v1
kind: Service
metadata:
namespace: velero
name: minio
labels:
component: minio
spec:
type: NodePort
ports:
- port: 9000
nodePort: 31481
targetPort: 9000
protocol: TCP
selector:
component: minio
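After applying it, the NodePort should stay fixed at 31481. As a quick sanity check (the node IP below is a placeholder for any of your nodes' addresses):
kubectl get svc minio -n velero
curl -i http://<node-ip>:31481/minio/health/live
The second command hits MinIO's liveness endpoint through the NodePort; an HTTP 200 means the API is reachable. Note that this Service exposes the S3 API on port 9000; the web console shown in your pod logs listens on a separate, dynamically chosen port unless it is explicitly configured.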
Related
I have created a Kubernetes cluster on 2 Raspberry Pis (Model 3 and 3B+) to use as a Kubernetes playground.
I have deployed a PostgreSQL instance and a Spring Boot app (called meal-planner) to play around with.
The meal-planner should read and write data from and to PostgreSQL.
However, the app can't reach the database.
Here is the deployment descriptor of the PostgreSQL instance:
kind: Service
apiVersion: v1
metadata:
name: postgres
namespace: home
labels:
app: postgres
spec:
selector:
app: postgres
ports:
- port: 5432
targetPort: 5432
name: postgres
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: postgres
namespace: home
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:13.2
imagePullPolicy: IfNotPresent
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: dev-db-secret
key: username
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: dev-db-secret
key: password
- name: POSTGRES_DB
value: home
ports:
- containerPort: 5432
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres-data
volumes:
- name: postgres-data
persistentVolumeClaim:
claimName: postgres-pv-claim
---
Here is the deployment descriptor of the meal-planner:
kind: Service
apiVersion: v1
metadata:
name: meal-planner
namespace: home
labels:
app: meal-planner
spec:
type: ClusterIP
selector:
app: meal-planner
ports:
- port: 8080
name: meal-planner
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: meal-planner
namespace: home
spec:
replicas: 1
selector:
matchLabels:
app: meal-planner
template:
metadata:
labels:
app: meal-planner
spec:
containers:
- name: meal-planner
image: 08021986/meal-planner:v1
imagePullPolicy: Always
ports:
- containerPort: 8080
---
The meal-planner image is an arm32v7 image running a jar file.
Inside the cluster, the meal-planner uses the connection string jdbc:postgresql://postgres:5432/home to connect to the DB.
I am absolutely sure that the DB credentials are correct, since I can access the DB when I port-forward the service.
When deploying both applications, I can kubectl exec -it <<podname>> -n home -- bin/sh into them. If I call wget -O- postgres or wget -O- postgres.home from there, I always get Connecting to postgres (postgres)|10.43.62.32|:80... failed: Network is unreachable.
I don't know why the network is unreachable, and I don't know what I can do about it.
First of all, don't use Deployment workloads for applications that need to persist state. This could get you into trouble and even cause data loss.
For that purpose, you should use a StatefulSet:
StatefulSet is the workload API object used to manage stateful applications.
Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
Also, for databases the storage should be as close to the engine as possible (due to latency); most preferably a hostPath-backed StorageClass with ReadWriteOnce.
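As a rough sketch of that idea (the names, path and sizes below are placeholders, not part of the original answer), a hostPath-backed PersistentVolume plus a per-replica claim on the StatefulSet could look like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv              # placeholder name
spec:
  storageClassName: manual       # simple manually provisioned class
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/postgres     # directory on the node
---
# added at spec level of the StatefulSet shown below:
volumeClaimTemplates:
- metadata:
    name: postgres-data
  spec:
    storageClassName: manual
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 5Gi
The container would then mount the postgres-data volume at /var/lib/postgresql/data.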
Now, regarding your issue: my guess is that it's either a problem with how your application connects to the DB, or the remote connection is being refused by the rules in pg_hba.conf.
Here is a minimal working example that'll help you get started:
kind: Namespace
apiVersion: v1
metadata:
name: test
labels:
name: test
---
kind: Service
apiVersion: v1
metadata:
name: postgres-so-test
namespace: test
labels:
app: postgres-so-test
spec:
selector:
app: postgres-so-test
ports:
- port: 5432
targetPort: 5432
name: postgres-so-test
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
namespace: test
name: postgres-so-test
spec:
replicas: 1
serviceName: postgres-so-test
selector:
matchLabels:
app: postgres-so-test
template:
metadata:
labels:
app: postgres-so-test
spec:
containers:
- name: postgres
image: postgres:13.2
imagePullPolicy: IfNotPresent
env:
- name: POSTGRES_USER
value: johndoe
- name: POSTGRES_PASSWORD
value: thisisntthepasswordyourelokingfor
- name: POSTGRES_DB
value: home
ports:
- containerPort: 5432
Now let's test this. NOTE: I'll also create a deployment from the Postgres image, just to have a pod in this namespace with the pg_isready binary available, in order to test the connection to the created DB.
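The test_container.yml used in the transcript below isn't included in the answer; assuming it is just a throwaway pod built from the same postgres image (so that pg_isready is on the PATH), a minimal equivalent might be:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-container
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-container
  template:
    metadata:
      labels:
        app: test-container
    spec:
      containers:
      - name: test-container
        image: postgres:13.2
        # keep the container running so we can exec into it
        command: ["sleep", "infinity"]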
pi@rsdev-pi-master:~/test $ kubectl apply -f test_db.yml
namespace/test created
service/postgres-so-test created
statefulset.apps/postgres-so-test created
pi@rsdev-pi-master:~/test $ kubectl apply -f test_container.yml
deployment.apps/test-container created
pi@rsdev-pi-master:~/test $ kubectl get pods -n test
NAME READY STATUS RESTARTS AGE
postgres-so-test-0 1/1 Running 0 19s
test-container-d77d75d78-cgjhc 1/1 Running 0 12s
pi@rsdev-pi-master:~/test $ sudo kubectl get all -n test
NAME READY STATUS RESTARTS AGE
pod/postgres-so-test-0 1/1 Running 0 26s
pod/test-container-d77d75d78-cgjhc 1/1 Running 0 19s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/postgres-so-test ClusterIP 10.43.242.51 <none> 5432/TCP 30s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/test-container 1/1 1 1 19s
NAME DESIRED CURRENT READY AGE
replicaset.apps/test-container-d77d75d78 1 1 1 19s
NAME READY AGE
statefulset.apps/postgres-so-test 1/1 27s
pi@rsdev-pi-master:~/test $ kubectl exec -it test-container-d77d75d78-cgjhc -n test -- /bin/bash
root@test-container-d77d75d78-cgjhc:/# pg_isready -d home -h postgres-so-test -p 5432 -U johndoe
postgres-so-test:5432 - accepting connections
If you still have trouble connecting to the DB, please attach the following:
kubectl describe pod <<postgres_pod_name>>
kubectl logs <<postgres_pod_name>> (ideally after you've tried to connect to it)
kubectl exec -it <<postgres_pod_name>> -- cat /var/lib/postgresql/data/pg_hba.conf
Also, research the topic of K8s operators. They are useful for deploying more complex, production-ready application stacks (e.g. a database with master + replicas + LB).
I am developing a simple web app with a web service and a persistence layer. The persistence layer is CockroachDB. I am trying to deploy my app with a single command:
kubectl apply -f my-app.yaml
The app is deployed successfully. However, when the backend has to store something in the DB, the following error appears:
dial tcp: lookup web-service-cockroach on 192.168.65.1:53: no such host
When I start my app I provide the following connection string to CockroachDB, and the connection is successful, but when I try to store something in the DB the above error appears:
postgresql://root@web-service-db:26257/defaultdb?sslmode=disable
For some reason the web pod cannot talk to the DB pod. My whole configuration is:
# Service for web application
apiVersion: v1
kind: Service
metadata:
name: web-service
spec:
selector:
app: web-service
type: NodePort
ports:
- protocol: TCP
port: 8080
targetPort: http
nodePort: 30103
externalIPs:
- 192.168.1.9 # < - my local ip
---
# Deployment of web app
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-service
spec:
selector:
matchLabels:
app: web-service
replicas: 1
template:
metadata:
labels:
app: web-service
spec:
hostNetwork: true
containers:
- name: web-service
image: my-local-img:latest
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8080
hostPort: 8080
env:
- name: DB_CONNECT_STRING
value: "postgresql://root#web-service-db:26257/defaultdb?sslmode=disable"
---
### Kubernetes official doc PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
name: cockroach-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/tmp/my-local-volueme"
---
### Kubernetes official doc PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: cockroach-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 4Gi
---
# Cockroach used by web-service
apiVersion: v1
kind: Service
metadata:
name: web-service-cockroach
labels:
app: web-service-cockroach
spec:
selector:
app: web-service-cockroach
type: NodePort
ports:
- protocol: TCP
port: 26257
targetPort: 26257
nodePort: 30104
---
# Cockroach stateful set used to deploy locally
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web-service-cockroach
spec:
serviceName: web-service-cockroach
replicas: 1
selector:
matchLabels:
app: web-service-cockroach
template:
metadata:
labels:
app: web-service-cockroach
spec:
volumes:
- name: cockroach-pv-storage
persistentVolumeClaim:
claimName: cockroach-pv-claim
containers:
- name: web-service-cockroach
image: cockroachdb/cockroach:latest
command:
- /cockroach/cockroach.sh
- start
- --insecure
volumeMounts:
- mountPath: "/tmp/my-local-volume"
name: cockroach-pv-storage
ports:
- containerPort: 26257
After deployment everything looks good.
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 50m
web-service NodePort 10.111.85.64 192.168.1.9 8080:30103/TCP 6m17s
web-service-cockroach NodePort 10.96.42.121 <none> 26257:30104/TCP 6m8s
kubectl get pods
NAME READY STATUS RESTARTS AGE
web-service-6cc74b5f54-jlvd6 1/1 Running 0 24m
web-service-cockroach-0 1/1 Running 0 24m
Thanks in advance!
Looks like you have a problem with DNS.
dial tcp: lookup web-service-cockroach on 192.168.65.1:53: no such host
The address 192.168.65.1 does not look like a kube-dns service IP.
This could be explained if you were using the host network, and indeed you are.
When using hostNetwork: true, the default DNS server is the one the host uses, and that is never kube-dns.
To solve it set:
spec:
dnsPolicy: ClusterFirstWithHostNet
This sets the pod's DNS server to the Kubernetes one.
Have a look at the Kubernetes documentation for more information about the pod's DNS policy.
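For context, in the web-service Deployment from the question this field sits next to hostNetwork in the pod template spec (only the relevant fragment is shown), and you can verify the effect by checking the pod's resolv.conf afterwards (the pod name is a placeholder):
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet   # use the cluster DNS despite host networking
kubectl exec -it <web-service-pod> -- cat /etc/resolv.conf
# the nameserver should now be the cluster DNS service IP, not 192.168.65.1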
I created a Redis deployment and service in Kubernetes.
I can access Redis from another pod by the service IP, but I can't access it by the service name.
Here is the Redis YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-deployment
namespace: myapp-ns
spec:
replicas: 1
selector:
matchLabels:
component: redis
template:
metadata:
labels:
component: redis
spec:
containers:
- name: redis
image: redis
ports:
- containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
name: redis
namespace: myapp-ns
spec:
type: ClusterIP
selector:
component: redis
ports:
- port: 6379
targetPort: 6379
I applied your file, and I am able to ping and telnet to the service both from within the same namespace and from a different namespace. To test this, I created pods in the same namespace and in a different namespace and installed telnet and ping. Then I exec'ed into them and did the below tests:
Same Namespace
kubectl exec -it <same-namespace-pod> /bin/bash
# ping redis
PING redis.<redis-namespace>.svc.cluster.local (172.20.211.84) 56(84) bytes of data.
# telnet redis 6379
Trying 172.20.211.84...
Connected to redis.<redis-namespace>.svc.cluster.local.
Escape character is '^]'.
Different Namespace
kubectl exec -it <different-namespace-pod> /bin/bash
# ping redis.<redis-namespace>.svc.cluster.local
PING redis.test.svc.cluster.local (172.20.211.84) 56(84) bytes of data.
# telnet redis.<redis-namespace>.svc.cluster.local 6379
Trying 172.20.211.84...
Connected to redis.<redis-namespace>.svc.cluster.local.
Escape character is '^]'.
If you are not able to do that due to DNS resolution issues, you could look at /etc/resolv.conf in your pod to make sure it has the search prefixes svc.cluster.local and cluster.local.
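For example (pod and namespace names are placeholders; the exact nameserver IP depends on your cluster):
kubectl exec -it <pod-name> -n <namespace> -- cat /etc/resolv.conf
# typically looks roughly like:
# nameserver <cluster-dns-service-ip>
# search <namespace>.svc.cluster.local svc.cluster.local cluster.local
# options ndots:5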
I created a redis deployment and service in kubernetes, I can access redis from another pod by service ip, but I can't access it by service name
Keep in mind that you can use the Service name to access the backend Pods it exposes only from within the same namespace. Looking at your Deployment and Service YAML manifests, we can see they're deployed in the myapp-ns namespace. This means that you can access your Service by its name only from a Pod deployed within that namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-deployment
namespace: myapp-ns ### 👈
spec:
replicas: 1
selector:
matchLabels:
component: redis
template:
metadata:
labels:
component: redis
spec:
containers:
- name: redis
image: redis
ports:
- containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
name: redis
namespace: myapp-ns ### 👈
spec:
type: ClusterIP
selector:
component: redis
ports:
- port: 6379
targetPort: 6379
So if you deploy the following Pod:
apiVersion: v1
kind: Pod
metadata:
name: redis-client
namespace: myapp-ns ### 👈
spec:
containers:
- name: redis-client
image: debian
you will be able to access your Service by its name, so the following commands (provided you've installed all required tools) will work:
redis-cli -h redis
telnet redis 6379
However, if your redis-client Pod is deployed to a completely different namespace, you will need to use the fully qualified domain name (FQDN), which is built according to the rule described here:
redis-cli -h redis.myapp-ns.svc.cluster.local
telnet redis.myapp-ns.svc.cluster.local 6379
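Since the search list in the pod's resolv.conf normally includes svc.cluster.local and cluster.local, the shorter <service>.<namespace> form usually resolves as well:
redis-cli -h redis.myapp-ns
telnet redis.myapp-ns 6379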
I have deployed 2 pods which need to talk to another pod (let's say Pod A).
Pod A requires the IP addresses of the services of the deployed pods, so I need to set those IP addresses in the config property file needed for Pod A.
As the IP addresses are dynamic (i.e. if a pod crashes they change), I need to set them dynamically.
Currently I deploy the 2 pods, run
kubectl get ep
and set those IP addresses in the config property file, then build the Dockerfile, push it, and use that image for deployment.
These are my deployment and svc files, in which the image djtijare/a2ipricing refers to the config file:
apiVersion: v1
kind: Service
metadata:
name: spring-boot-demo-pricing
spec:
ports:
- name: spring-boot-pricing
port: 8084
targetPort: 8084
selector:
app: spring-boot-demo-pricing
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: spring-boot-demo-pricing
spec:
replicas: 1
template:
metadata:
labels:
app: spring-boot-demo-pricing
spec:
containers:
- name: spring-boot-demo-pricing
image: djtijare/a2ipricing:v1
imagePullPolicy: IfNotPresent
# envFrom:
#- configMapRef:
# name: spring-boot-demo-config-map
resources:
requests:
cpu: 100m
memory: 1Gi
ports:
- containerPort: 8084
nodeSelector:
disktype: ssd
So how can I set the IPs of those 2 pods dynamically in the config file, then build and push the Docker image?
I think you should look into using headless Services.
Sometimes you don’t need or want load-balancing and a single service IP. In this case, you can create what are termed “headless” Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
You can use a headless Service to interface with other service discovery mechanisms, without being tied to Kubernetes' implementation. For example, you could implement a custom Operator built upon this API.
For such Services, a cluster IP is not allocated, kube-proxy does not handle these services, and there is no load balancing or proxying done by the platform for them. How DNS is automatically configured depends on whether the service has selectors defined.
For your example, if you set the Service's spec.clusterIP to None, you could run nslookup -type=A spring-boot-demo-pricing, which will show you the IPs of the pods attached to this service.
/ # nslookup -type=A spring-boot-demo-pricing
Server: 10.11.240.10
Address: 10.11.240.10:53
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.2.20
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.1.12
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.1.13
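Rather than hard-coding the pod IPs into the image at build time, Pod A could resolve them at startup from the headless Service and write them into its config. A rough sketch (the property name, config path and namespace are made up for illustration, and it assumes dig is available in the image):
# e.g. in an initContainer or entrypoint script of Pod A
PRICING_IPS=$(dig +short spring-boot-demo-pricing.default.svc.cluster.local | paste -sd, -)
echo "pricing.endpoints=${PRICING_IPS}" >> /config/application.properties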
And here are the YAML manifests I've used:
apiVersion: v1
kind: Service
metadata:
name: spring-boot-demo-pricing
labels:
app: spring-boot-demo-pricing
spec:
ports:
- name: spring-boot-pricing
port: 8084
targetPort: 8084
clusterIP: None
selector:
app: spring-boot-demo-pricing
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: spring-boot-demo-pricing
labels:
app: spring-boot-demo-pricing
spec:
replicas: 3
selector:
matchLabels:
app: spring-boot-demo-pricing
template:
metadata:
labels:
app: spring-boot-demo-pricing
spec:
containers:
- name: spring-boot-demo-pricing
image: djtijare/a2ipricing:v1
imagePullPolicy: IfNotPresent
# envFrom:
#- configMapRef:
# name: spring-boot-demo-config-map
resources:
requests:
cpu: 100m
memory: 1Gi
ports:
- containerPort: 8084
I have a minikube cluster running locally (v0.17.1), with two deployments: one is a Redis instance and one is a custom app that is trying to connect to the Redis instance. My configuration is more or less copy/pasted from the official docs and the Kubernetes guestbook example.
Service definition and deployment:
apiVersion: v1
kind: Service
metadata:
name: poller-redis
labels:
app: poller-redis
tier: backend
role: database
target: poller
spec:
selector:
app: poller
tier: backend
role: service
ports:
- port: 6379
targetPort: 6379
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: poller-redis
spec:
replicas: 1
template:
metadata:
labels:
app: poller-redis
tier: backend
role: database
target: poller
spec:
containers:
- name: poller-redis
image: gcr.io/jmen-1266/jmen-redis:a67b5f4bfd8ea8441ed66a8fcb6596f276017a1c
ports:
- containerPort: 6379
env:
- name: GET_HOSTS_FROM
value: dns
imagePullSecrets:
- name: gcr-json-key
App deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: poller
spec:
replicas: 1
template:
metadata:
labels:
app: poller
tier: backend
role: service
spec:
containers:
- name: poller
image: gcr.io/jmen-1266/poller:a96a452292e894e46339309cc024cac67647cc25
imagePullPolicy: Always
imagePullSecrets:
- name: gcr-json-key
Relevant (I hope) Kubernetes info:
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.0.0.1 <none> 443/TCP 24d
poller-redis 10.0.0.137 <none> 6379/TCP 20d
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
poller 1 1 1 1 12d
poller-redis 1 1 1 1 4d
$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 10.0.2.15:8443 24d
poller-redis 172.17.0.7:6379 20d
Inside the poller pod (custom app), I get environment variables created for Redis:
# env | grep REDIS
POLLER_REDIS_SERVICE_HOST=10.0.0.137
POLLER_REDIS_SERVICE_PORT=6379
POLLER_REDIS_PORT=tcp://10.0.0.137:6379
POLLER_REDIS_PORT_6379_TCP_ADDR=10.0.0.137
POLLER_REDIS_PORT_6379_TCP_PORT=6379
POLLER_REDIS_PORT_6379_TCP_PROTO=tcp
POLLER_REDIS_PORT_6379_TCP=tcp://10.0.0.137:6379
However, if I try to connect to that port, I cannot. Doing something like:
nc -vz poller-redis 6379
fails.
What I have noticed is that I cannot access the Redis service via its ClusterIP but I can via the IP of the pod running Redis.
Any ideas, please?
Figured this out in the end, it looks like I misunderstood how the service selectors work in Kubernetes.
I have posted that my service definition is:
apiVersion: v1
kind: Service
metadata:
name: poller-redis
labels:
app: poller-redis
tier: backend
role: database
target: poller
spec:
selector:
app: poller
tier: backend
role: service
ports:
- port: 6379
targetPort: 6379
The problem is that my spec.selector did not match the labels on the Redis Pods. The Service's spec.selector has to match the labels on the Pods it should target (the Redis Deployment's pod template labels, which here happen to be the same labels I also put in the Service's metadata.labels), not the labels of some other app's pods. Now my service definition looks like:
apiVersion: v1
kind: Service
metadata:
name: poller-redis
labels:
app: poller-redis
tier: backend
role: database
target: poller
spec:
selector:
app: poller-redis
tier: backend
role: database
target: poller
ports:
- port: 6379
targetPort: 6379
I also now use straight up DNS lookup (i.e. ping poller-redis) rather than trying to connect to localhost:6379 from my target pods.
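A quick way to catch this kind of selector mismatch is to compare the Service's endpoints with the pods' IPs:
kubectl get endpoints poller-redis
kubectl get pods -o wide
# the ENDPOINTS entry should now be the Redis pod's IP and port; with the old
# selector it pointed at the wrong pod, which is why the ClusterIP didn't work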
It could be related to kube-dns possibly not running.
From inside the poller pod, can you verify that poller-redis resolves?
Does the following work from inside the container?
nc -v 10.0.0.137 6379
One kube-dns service running in kube-system is enough. Did you run nc -vz poller-redis 6379 from pods in the same namespace as the Redis service?
poller-redis is the short DNS name of the Redis service within the same namespace; it will not work from a different namespace.
Also, kube-dns is not used on the nodes themselves, so if you want to run nc or a Redis client on a node, use the ClusterIP of the Redis service instead of the DNS name.
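For instance, from a node, using the ClusterIP shown in the question:
nc -vz 10.0.0.137 6379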