I'm running a Cassandra cluster, which I want to access from a client running in the same cluster through a service. After installing the Cassandra Helm chart, my pods and services look like this:
NAME READY STATUS RESTARTS AGE
pod/cassandra-0 1/1 Running 0 10m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/cassandra ClusterIP 10.97.169.106 <none> 9042/TCP,9160/TCP,8080/TCP 10m
service/cassandra-headless ClusterIP None <none> 7000/TCP,7001/TCP,7199/TCP,9042/TCP,9160/TCP 10m
and I'm informed that Cassandra can be accessed through cassandra.default.svc.cluster.local:9042 from within the cluster.
After applying a ConfigMap, which looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cassandra-configmap
data:
  database_address: cassandra.default.svc.cluster.local
and a DB client Deployment, which looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cassandra-web
  labels:
    app: cassandra-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cassandra-web
  template:
    metadata:
      labels:
        app: cassandra-web
    spec:
      containers:
        - name: cassandra-web
          image: metavige/cassandra-web
          ports:
            - containerPort: 9042
          env:
            - name: CASSANDRA_HOST
              valueFrom:
                configMapKeyRef:
                  name: cassandra-configmap
                  key: database_address
            - name: CASSANDRA_USER
              value: cassandra
            - name: CASSANDRA_PASSWORD
              value: somePassword
The problem is that my client can't translate the service name to an address and is unable to connect to the database, failing with IPAddr::InvalidAddressError: invalid address. If I replace cassandra.default.svc.cluster.local with 10.97.169.106, everything works great, so there has to be some problem with resolving names.
I tried running busybox, and DNS lookups from it look OK:
$ kubectl create -f https://k8s.io/examples/admin/dns/busybox.yaml
$ kubectl exec -ti busybox -- nslookup cassandra.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: cassandra.default.svc.cluster.local
Address 1: 10.97.169.106 cassandra.default.svc.cluster.local
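Since the lookup works from busybox, it can also be worth confirming that name resolution works from inside the cassandra-web pod itself, because that is where the Ruby client resolves the name. A quick check (the pod name is a placeholder, and it assumes the image ships nslookup or getent):

$ kubectl exec -ti <cassandra-web-pod> -- nslookup cassandra.default.svc.cluster.local
# or, if nslookup isn't available in the image:
$ kubectl exec -ti <cassandra-web-pod> -- getent hosts cassandra.default.svc.cluster.local

If that resolves fine too, the problem is more likely in how the ConfigMap value is passed to the client than in cluster DNS.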
Related
I have created a Kubernetes cluster on 2 Raspberry Pis (a Model 3 and a 3B+) to use as a Kubernetes playground.
I have deployed a PostgreSQL instance and a Spring Boot app (called meal-planer) to play around with.
The meal-planer should read and write data from and to the PostgreSQL database.
However, the app can't reach the database.
Here is the deployment descriptor of the PostgreSQL instance:
kind: Service
apiVersion: v1
metadata:
  name: postgres
  namespace: home
  labels:
    app: postgres
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
      name: postgres
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: postgres
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.2
          imagePullPolicy: IfNotPresent
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: password
            - name: POSTGRES_DB
              value: home
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-data
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
Here is the deployment descriptor of the meal-planer:
kind: Service
apiVersion: v1
metadata:
  name: meal-planner
  namespace: home
  labels:
    app: meal-planner
spec:
  type: ClusterIP
  selector:
    app: meal-planner
  ports:
    - port: 8080
      name: meal-planner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: meal-planner
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: meal-planner
  template:
    metadata:
      labels:
        app: meal-planner
    spec:
      containers:
        - name: meal-planner
          image: 08021986/meal-planner:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
The meal-planer image is an arm32v7 image running a jar file.
Inside the cluster, the meal-planer uses the connection-string jdbc:postgresql://postgres:5432/home to connect to the DB.
I am absolutely sure that the DB credentials are correct, since I can access the DB when I port-forward the service.
When deploying both applications, I can kubectl exec -it <<podname>> -n home -- bin/sh into the meal-planer pod. If I call wget -O- postgres or wget -O- postgres.home from there, I always get Connecting to postgres (postgres)|10.43.62.32|:80... failed: Network is unreachable.
I don't know why the network is unreachable, and I don't know what I can do about it.
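Note that wget with no port talks to port 80, while the postgres Service only exposes 5432, so that particular test would fail even with working networking. A check against the actual port (assuming a netcat binary is available in the meal-planer image) would look roughly like:

kubectl exec -it <<podname>> -n home -- nc -vz postgres.home.svc.cluster.local 5432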
First of all, don't use Deployment workloads for applications that need to persist state. This could get you into trouble and even cause data loss.
For that purpose, you should use a StatefulSet:
StatefulSet is the workload API object used to manage stateful
applications.
Manages the deployment and scaling of a set of Pods, and provides
guarantees about the ordering and uniqueness of these Pods.
Like a Deployment, a StatefulSet manages Pods that are based on an
identical container spec. Unlike a Deployment, a StatefulSet maintains
a sticky identity for each of their Pods. These pods are created from
the same spec, but are not interchangeable: each has a persistent
identifier that it maintains across any rescheduling.
Also, for databases, the storage should be as close to the engine as possible (due to latency), most preferably a hostPath storageClass with ReadWriteOnce.
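For illustration, a minimal sketch of a hostPath-backed PersistentVolume plus a matching claim for the question's postgres-pv-claim; the storage class name local-storage and the node path /mnt/data/postgres are assumptions, not taken from the cluster above:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
spec:
  storageClassName: local-storage   # assumed name; must match the claim below
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/postgres        # assumed path on the node
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  namespace: home
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi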
Now, regarding your issue, my guess is that it's either a problem with how you connect to the DB in your application, or the remote connection is rejected by the rules in pg_hba.conf.
Here is a minimal working example that'll help you get started:
kind: Namespace
apiVersion: v1
metadata:
  name: test
  labels:
    name: test
---
kind: Service
apiVersion: v1
metadata:
  name: postgres-so-test
  namespace: test
  labels:
    app: postgres-so-test
spec:
  selector:
    app: postgres-so-test
  ports:
    - port: 5432
      targetPort: 5432
      name: postgres-so-test
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  namespace: test
  name: postgres-so-test
spec:
  replicas: 1
  serviceName: postgres-so-test
  selector:
    matchLabels:
      app: postgres-so-test
  template:
    metadata:
      labels:
        app: postgres-so-test
    spec:
      containers:
        - name: postgres
          image: postgres:13.2
          imagePullPolicy: IfNotPresent
          env:
            - name: POSTGRES_USER
              value: johndoe
            - name: POSTGRES_PASSWORD
              value: thisisntthepasswordyourelokingfor
            - name: POSTGRES_DB
              value: home
          ports:
            - containerPort: 5432
Now let's test this. NOTE: I'll also create a Deployment from the Postgres image just to have a pod in this namespace that has the pg_isready binary, in order to test the connection to the created DB.
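test_container.yml is not shown above; a minimal sketch that would match the output below (a Deployment called test-container in the test namespace, reusing the postgres image so pg_isready is available) could be:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-container
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-container
  template:
    metadata:
      labels:
        app: test-container
    spec:
      containers:
        - name: test-container
          image: postgres:13.2
          # Keep the container alive without starting a database;
          # it is only needed for the pg_isready client binary.
          command: ["sleep", "infinity"]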
pi@rsdev-pi-master:~/test $ kubectl apply -f test_db.yml
namespace/test created
service/postgres-so-test created
statefulset.apps/postgres-so-test created
pi@rsdev-pi-master:~/test $ kubectl apply -f test_container.yml
deployment.apps/test-container created
pi@rsdev-pi-master:~/test $ kubectl get pods -n test
NAME READY STATUS RESTARTS AGE
postgres-so-test-0 1/1 Running 0 19s
test-container-d77d75d78-cgjhc 1/1 Running 0 12s
pi@rsdev-pi-master:~/test $ sudo kubectl get all -n test
NAME READY STATUS RESTARTS AGE
pod/postgres-so-test-0 1/1 Running 0 26s
pod/test-container-d77d75d78-cgjhc 1/1 Running 0 19s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/postgres-so-test ClusterIP 10.43.242.51 <none> 5432/TCP 30s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/test-container 1/1 1 1 19s
NAME DESIRED CURRENT READY AGE
replicaset.apps/test-container-d77d75d78 1 1 1 19s
NAME READY AGE
statefulset.apps/postgres-so-test 1/1 27s
pi@rsdev-pi-master:~/test $ kubectl exec -it test-container-d77d75d78-cgjhc -n test -- /bin/bash
root@test-container-d77d75d78-cgjhc:/# pg_isready -d home -h postgres-so-test -p 5432 -U johndoe
postgres-so-test:5432 - accepting connections
If you still have trouble connecting to the DB, please attach the following:
kubectl describe pod <<postgres_pod_name>>
kubectl logs <<postgres_pod_name>> (ideally after you've tried to connect to it)
kubectl exec -it <<postgres_pod_name>> -- cat /var/lib/postgresql/data/pg_hba.conf
Also, research the topic of K8s operators. They are useful for deploying more complex, production-ready application stacks (e.g. a database with master + replicas + a LB).
I have a list of pods like so:
❯ kubectl get pods -l app=test-pod (base)
NAME READY STATUS RESTARTS AGE
test-deployment-674667c867-jhvg4 1/1 Running 0 14m
test-deployment-674667c867-ssx6h 1/1 Running 0 14m
test-deployment-674667c867-t4crn 1/1 Running 0 14m
I have a service
kubectl get services (base)
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default test-service ClusterIP 10.100.4.138 <none> 4000/TCP 15m
I perform a dns query:
❯ kubectl exec -ti test-deployment-674667c867-jhvg4 -- /bin/bash (base)
root@test-deployment-674667c867-jhvg4:/# busybox nslookup test-service
Server: 10.100.0.10
Address: 10.100.0.10:53
Name: test-service.default.svc.cluster.local
Address: 10.100.4.138
My config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
        - name: python-http-server
          image: python:2.7
          command: ["/bin/bash"]
          args: ["-c", "echo \" Hello from $(hostname)\" > index.html; python -m SimpleHTTPServer 80"]
          ports:
            - name: http
              containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  selector:
    app: test-pod
  ports:
    - protocol: TCP
      port: 4000
      targetPort: http
How can I instead get a list of all the pods' IP addresses via a DNS query?
Ideally, I would like to perform an nslookup of a name and get back a list of all the pods' IPs.
You have to use a headless Service with selectors. It returns the IP addresses of the pods.
See here:
https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
.spec.clusterIP must be "None"
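For illustration, a headless variant of the test-service from the question would only need clusterIP: None added; the name test-service-headless is an assumption:

kind: Service
apiVersion: v1
metadata:
  name: test-service-headless   # assumed name
spec:
  clusterIP: None               # makes the Service headless
  selector:
    app: test-pod
  ports:
    - protocol: TCP
      port: 4000
      targetPort: http

An nslookup of test-service-headless should then return one A record per ready pod instead of the single ClusterIP.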
I'm having a problem getting an endpoint for my postgres-service. I've checked the selector and it does seem to match the pod name, but I've posted both yamls below.
I've tried resetting Minikube and following the Kubernetes debugging instructions, but no luck.
Can anyone spot where I'm going wrong? Thanks!
postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.1
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: db0
            - name: POSTGRES_USER
              value: somevalue
            - name: POSTGRES_PASSWORD
              value: somevalue
          volumeMounts:
            - mountPath: "/var/lib/postgresql/data"
              name: "somevalue-pgdata"
      volumes:
        - hostPath:
            path: "/home/docker/pgdata"
          name: somevalue-pgdata
And then my postgres-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: ClusterIP
  ports:
    - port: 5432
  selector:
    service: postgres
And here are my services, with no endpoints:
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 46m
postgres-service ClusterIP 10.97.4.3 <none> 5432/TCP 3s
$ kubectl get endpoints postgres-service
NAME ENDPOINTS AGE
postgres-service <none> 8s
Resolved: I modified the service YAML so the selector uses app instead of service. For anyone else, this is the working version:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: ClusterIP
  ports:
    - port: 5432
  selector:
    app: postgres
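As a quick sanity check (assuming the labels shown above), the Service should now list the pod's IP as an endpoint:

kubectl get pods -l app=postgres --show-labels
kubectl get endpoints postgres-service

If the ENDPOINTS column still shows <none>, the selector still doesn't match the pod's labels.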
I created a dummy pod to test the DB connection using the following YAML.
pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: marks-dummy-pod
spec:
  containers:
    - name: marks-dummy-pod
      image: djtijare/ubuntuping:v1
      command: ["/bin/bash", "-ec", "while :; do echo '.'; sleep 5 ; done"]
  restartPolicy: Never
Dockerfile used:
FROM ubuntu
RUN apt-get update && apt-get install -y iputils-ping
CMD bash
I created the service as:
postgresservice.yaml
kind: Service
apiVersion: v1
metadata:
  name: postgressvc
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
The Endpoints object for the created service:
kind: Endpoints
apiVersion: v1
metadata:
  name: postgressvc
subsets:
  - addresses:
      - ip: 172.31.6.149
    ports:
      - port: 5432
Then I ran ping 172.31.6.149 inside the pod (kubectl exec -it marks-dummy-pod bash), but it is not working (ping localhost works).
Output of kubectl get pods,svc,ep -o wide:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/marks-dummy-pod 1/1 Running 0 43m 192.168.1.63 ip-172-31-11-87 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/postgressvc ClusterIP 10.107.58.81 <none> 5432/TCP 33m <none>
NAME ENDPOINTS AGE
endpoints/postgressvc 172.31.6.149:5432 32m
Output after following the answer by P Ekambaram:
kubectl get pods,svc,ep -o wide gives:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/postgres-855696996d-w6h6c 1/1 Running 0 44s 192.168.1.66 ip-172-31-11-87 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/postgres NodePort 10.110.203.204 <none> 5432:31076/TCP 44s app=postgres
NAME ENDPOINTS AGE
endpoints/postgres 192.168.1.66:5432 44s
So the problem was in my DNS pod in the kube-system namespace.
I just created a new Kubernetes setup and made sure that DNS is working.
For the new setup, refer to my answer to another question:
How to start kubelet service??
postgres pod is missing?
did you create endpoint object or was it auto generated?
share the pod definition YAML
You shouldn't be creating the Endpoints object manually; it is wrong. Follow the below deployment for Postgres.
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: example
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:11
          imagePullPolicy: Always
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-data
      volumes:
        - name: postgres-data
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: postgres
Undeploy the postgres service and endpoint and deploy the above YAML.
It should work.
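Note that ping against a Service IP usually fails even when everything is healthy: the ClusterIP is a virtual IP that kube-proxy only forwards for the declared TCP/UDP ports, and ICMP is not one of them. A more meaningful connectivity check is against the TCP port itself, for example from inside the postgres pod (the pod name is a placeholder):

kubectl exec -it <<postgres_pod_name>> -- pg_isready -h postgres -p 5432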
Why is the NODE IP prefixed with ip-?
You should create a deployment for your database and then make a service that targets this deployment, and then connect through this service. Why ping an IP directly?
Trying to run a local registry. I have the following configuration:
Deployment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
    role: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:latest
          ports:
            - containerPort: 5000
          volumeMounts:
            - mountPath: '/registry'
              name: registry-volume
      volumes:
        - name: registry-volume
          hostPath:
            path: '/data'
            type: Directory
Service:
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  selector:
    role: registry
  type: NodePort
  ports:
    - name: registry
      nodePort: 31001
      port: 5000
      protocol: TCP
It all works well when I create deployment/service. kubectl shows status as Running for both service and deployment:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/registry 1 1 1 1 30m
NAME DESIRED CURRENT READY AGE
rs/registry-6549cbc974 1 1 1 30m
NAME READY STATUS RESTARTS AGE
po/registry-6549cbc974-mmqpj 1/1 Running 0 30m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 37m
svc/registry NodePort 10.0.0.6 <none> 5000:31001/TCP 7m
However, when I try to get the external URL for the service using minikube service registry --url, it times out / fails with: Waiting, endpoint for service is not ready yet....
When I delete the service (keeping deployment intact), and manually expose the deployment using kubectl expose deployment registry --type=NodePort, I am able to get it working.
Minikube log can be found here.
You need to specify the correct spec.selector in the registry Service manifest. The Service selects pods with role: registry, but the pods created by the Deployment only carry the label app: registry, so the Service never gets any endpoints. The fixed manifest:
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  selector:
    app: registry
  type: NodePort
  ports:
    - name: registry
      nodePort: 31001
      port: 5000
      protocol: TCP
Now the registry Service correctly points to the registry pod:
$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 10.0.2.15:8443 14m
registry 172.17.0.4:5000 4s
And you can get external url as well:
$ minikube service registry --url
http://192.168.99.106:31001