Can't ping postgres pod from another pod in kubernetes

I created a dummy pod to test the DB connection using the following YAML:
pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: marks-dummy-pod
spec:
  containers:
    - name: marks-dummy-pod
      image: djtijare/ubuntuping:v1
      command: ["/bin/bash", "-ec", "while :; do echo '.'; sleep 5 ; done"]
  restartPolicy: Never
Dockerfile used:
FROM ubuntu
RUN apt-get update && apt-get install -y iputils-ping
CMD bash
I created the service as follows:
postgresservice.yaml
kind: Service
apiVersion: v1
metadata:
  name: postgressvc
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
and the Endpoints object for the created service as follows:
kind: Endpoints
apiVersion: v1
metadata:
  name: postgressvc
subsets:
  - addresses:
      - ip: 172.31.6.149
    ports:
      - port: 5432
Then I ran ping 172.31.6.149 inside the pod (kubectl exec -it marks-dummy-pod bash), but it does not work (ping localhost works).
Output of kubectl get pods,svc,ep -o wide:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/marks-dummy-pod 1/1 Running 0 43m 192.168.1.63 ip-172-31-11-87 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/postgressvc ClusterIP 10.107.58.81 <none> 5432/TCP 33m <none>
NAME ENDPOINTS AGE
endpoints/postgressvc 172.31.6.149:5432 32m
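As an aside, ICMP ping is not always a meaningful test here: a ClusterIP is a virtual IP that typically does not answer ping, and whether a pod or node IP answers depends on the network plugin. A TCP check against the Service port is more representative; a sketch, assuming the ubuntuping image can install netcat via apt:
kubectl exec -it marks-dummy-pod -- bash
apt-get update && apt-get install -y netcat-openbsd   # assumption: apt is usable in the image
nc -zv postgressvc 5432        # test the Service by DNS name on the postgres port
nc -zv 172.31.6.149 5432       # or the endpoint IP directly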
Output after following the answer by P Ekambaram:
kubectl get pods,svc,ep -o wide gives
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/postgres-855696996d-w6h6c 1/1 Running 0 44s 192.168.1.66 ip-172-31-11-87 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/postgres NodePort 10.110.203.204 <none> 5432:31076/TCP 44s app=postgres
NAME ENDPOINTS AGE
endpoints/postgres 192.168.1.66:5432 44s

So the problem was with my DNS pod in the kube-system namespace.
I just created a new Kubernetes setup and made sure that DNS is working.
For the new setup, refer to my answer to another question:
How to start kubelet service?
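For reference, a quick way to confirm cluster DNS is healthy before rebuilding (a sketch; the DNS deployment is usually CoreDNS or kube-dns, both carrying the k8s-app=kube-dns label):
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default
If the lookup fails, the DNS pods, their logs, and /etc/resolv.conf inside a pod are the usual next things to check.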

Is the postgres pod missing?
Did you create the Endpoints object manually, or was it auto-generated?
Share the pod definition YAML.
You shouldn't be creating the Endpoints object manually; that is wrong. Follow the deployment below for postgres.
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: example
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:11
          imagePullPolicy: Always
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-data
      volumes:
        - name: postgres-data
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: postgres
Undeploy the postgres service and endpoint and deploy the above YAML.
It should work.
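One way to verify the deployment above before wiring up the application, sketched with pg_isready from the same postgres image (credentials as in the ConfigMap above):
kubectl run pg-check --rm -it --restart=Never --image=postgres:11 -- \
  pg_isready -h postgres -p 5432 -U postgres
# expected output: postgres:5432 - accepting connections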
Why is the NODE IP prefixed with ip-?

You should create a Deployment for your database, then create a Service that targets that Deployment, and connect through the Service. Why ping by IP?

Related

Using MetalLB and Nginx-Ingress for exposing ports 80, 443 and 5432 (Postgres)

I'm working on a school project where I have a bare-metal VPS and I'm trying to deploy a web application (nginx ingress) with a Postgres database (which has to be reachable from the outside, i.e. on :5432).
I went through tens of links on Google and StackOverflow and nothing actually worked out - I'm still getting Connection refused. Is the server running on host "<>" and accepting TCP/IP connections on port 5432?
I've successfully deployed MetalLB with a few simple steps:
Install MetalLB
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
Created config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: production
      protocol: layer2
      addresses:
      - <VPS-external-IP>-<VPS-external-IP>
Created LoadBalancer Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-balancer
  namespace: nginx
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
    - name: https
      port: 443
      protocol: TCP
      targetPort: 443
    - name: db
      port: 5432
      protocol: TCP
      targetPort: 5432
  selector:
    app: nginx
  type: LoadBalancer
And created the Nginx deployment (nginx-deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
            - containerPort: 443
Now after running kubectl apply -f config.yaml and kubectl apply -f nginx-deployment.yaml, I'm able to reach the "Welcome to nginx!" page via curl <VPS-external-IP>:80
with output of:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-7f8d9cf649-cssrb 1/1 Running 0 117m
nginx-deployment-7f8d9cf649-f7q9w 1/1 Running 0 117m
nginx-deployment-7f8d9cf649-fmntb 1/1 Running 0 117m
kubectl get svc -n nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx ClusterIP 10.108.97.186 <none> 80/TCP,443/TCP 26m
nginx-balancer LoadBalancer 10.103.72.75 46.36.38.200 80:32529/TCP,443:30432/TCP,5432:31081/TCP 42m
kubectl get pods -n metallb-system
NAME READY STATUS RESTARTS AGE
controller-57c458c998-r78wn 1/1 Running 0 13h
speaker-clr6j 1/1 Running 0 13h
kubectl get configmap -n metallb-system
NAME DATA AGE
config 1 13h
kube-root-ca.crt 1 13h
But the real issue comes now. I need to access the Postgres database on :5432, but I'm honestly just lost in Kubernetes...
I've created all the necessary pieces for Postgres to run, such as:
PersistentVolume and PersistentVolumeClaim (I don't think it's necessary to share them here, since it's just a basic PersistentVolume and PersistentVolumeClaim YAML)
Secrets (I don't think it's necessary to share them here, since it's just a basic Secrets YAML)
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
  namespace: postgres
data:
  POSTGRES_DB: db_production
Postgres Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  namespace: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          ports:
            - containerPort: 5432
          envFrom:
            - secretRef:
                name: postgres-secrets
            - configMapRef:
                name: postgres-configmap
          volumeMounts:
            - name: postgres-pv-claim
              mountPath: /var/lib/pgsql/data
      volumes:
        - name: postgres-pv-claim
          persistentVolumeClaim:
            claimName: postgres-pv-claim
And of course - Postgres Service
apiVersion: v1
kind: Service
metadata:
  name: db
  namespace: postgres
  labels:
    run: postgres
spec:
  selector:
    name: postgres
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
Postgres by itself seems to work:
kubectl get pods -n postgres
NAME READY STATUS RESTARTS AGE
postgres-deployment-74fff7c576-6kb5q 1/1 Running 0 96m
kubectl get svc -n postgres
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
db ClusterIP 10.102.5.192 <none> 5432/TCP 93m
kubectl get deployments -n postgres
NAME READY UP-TO-DATE AVAILABLE AGE
postgres-deployment 1/1 1 1 94m
kubectl get pv -n postgres
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
postgres-pv 1Gi RWO Retain Bound postgres/postgres-pv-claim manual 55m
kubectl get pvc -n postgres
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgres-pv-claim Bound postgres-pv 1Gi RWO manual 55m
(I've deleted and recreated it many times, hence the low ages... :-( )
So even though I've got the postgres/db service running on 5432 and the MetalLB LoadBalancer with Nginx working, it simply doesn't expose 5432 to the world.
I'll be very happy for any suggestions, because I'm starting to lose the hype I had in the beginning about how nice it'd be to set up Kubernetes... :-)
Thank you.
Update 31.1.2022
I've installed Kubernetes using Kubeadm pretty much step by step using several tutorials online and official docs.
Output of kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:19:12Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
and kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:24:08Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
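One detail worth double-checking in the manifests above: nginx-balancer selects app: nginx in the nginx namespace, so traffic arriving on 5432 is forwarded to the nginx pods rather than to Postgres, and the db Service selector name: postgres does not match the pod label app: postgres. A hedged sketch of a dedicated LoadBalancer Service in the postgres namespace (postgres-balancer is a made-up name; sharing the single pool IP relies on MetalLB's allow-shared-ip annotation carrying the same value on both Services):
apiVersion: v1
kind: Service
metadata:
  name: postgres-balancer
  namespace: postgres
  annotations:
    metallb.universe.tf/allow-shared-ip: "vps-ip"   # assumption: nginx-balancer carries the same annotation value
spec:
  type: LoadBalancer
  selector:
    app: postgres          # matches the Deployment's pod labels
  ports:
    - name: db
      port: 5432
      protocol: TCP
      targetPort: 5432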

Kubernetes Service unreachable

I have created a Kubernetes cluster on 2 Raspberry Pis (Model 3 and 3B+) to use as a Kubernetes playground.
I have deployed a PostgreSQL instance and a Spring Boot app (called meal-planer) to play around with.
The meal-planer should read and write data from and to the PostgreSQL database.
However, the app can't reach the database.
Here is the deployment descriptor of the PostgreSQL instance:
kind: Service
apiVersion: v1
metadata:
  name: postgres
  namespace: home
  labels:
    app: postgres
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
      name: postgres
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: postgres
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.2
          imagePullPolicy: IfNotPresent
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: password
            - name: POSTGRES_DB
              value: home
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-data
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
Here is the deployment descriptor of the meal-planer:
kind: Service
apiVersion: v1
metadata:
  name: meal-planner
  namespace: home
  labels:
    app: meal-planner
spec:
  type: ClusterIP
  selector:
    app: meal-planner
  ports:
    - port: 8080
      name: meal-planner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: meal-planner
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: meal-planner
  template:
    metadata:
      labels:
        app: meal-planner
    spec:
      containers:
        - name: meal-planner
          image: 08021986/meal-planner:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
The meal-planer image is an arm32v7 image running a jar file.
Inside the cluster, the meal-planer uses the connection string jdbc:postgresql://postgres:5432/home to connect to the DB.
I am absolutely sure that the DB credentials are correct, since I can access the DB when I port-forward the service.
When deploying both applications, I can kubectl exec -it <<podname>> -n home -- bin/sh into it. If I call wget -O- postgres or wget -O- postgres.home from there, I always get Connecting to postgres (postgres)|10.43.62.32|:80... failed: Network is unreachable.
I don't know why the network is unreachable, and I don't know what I can do about it.
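Worth noting when reproducing this test: wget -O- postgres without a port defaults to port 80 (visible in the |10.43.62.32|:80 output), while the Service only exposes 5432, so this exact command would fail even on a healthy network. A port-specific check inside the meal-planner pod is more representative; Postgres will not speak HTTP back, but any error other than "Network is unreachable" shows that routing works (a sketch; nc may or may not be present in the image):
wget -O- postgres:5432
nc -zv postgres 5432   # if nc is available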
First of all, don't use Deployment workloads for applications that need to persist state. This could get you into trouble and even cause data loss.
For that purpose, you should use a StatefulSet:
StatefulSet is the workload API object used to manage stateful
applications.
Manages the deployment and scaling of a set of Pods, and provides
guarantees about the ordering and uniqueness of these Pods.
Like a Deployment, a StatefulSet manages Pods that are based on an
identical container spec. Unlike a Deployment, a StatefulSet maintains
a sticky identity for each of their Pods. These pods are created from
the same spec, but are not interchangeable: each has a persistent
identifier that it maintains across any rescheduling.
Also, for databases, the storage should be as close to the engine as possible (due to latency), most preferably a hostPath storageClass with ReadWriteOnce.
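For completeness, a sketch of how such storage could be attached to the StatefulSet example below via volumeClaimTemplates (the example omits persistence for brevity; the storageClassName is an assumption and must exist in the cluster):
# 1) inside spec.template.spec.containers[0] of the StatefulSet:
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
# 2) at the StatefulSet spec level, requesting a PVC per replica:
  volumeClaimTemplates:
    - metadata:
        name: postgres-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: local-path   # assumption: replace with a class that exists
        resources:
          requests:
            storage: 1Gi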
Now regarding your issue, my guess is that it's either a problem with how you connect to the DB in your application, or the remote connection is refused by the definitions in pg_hba.conf.
Here is a minimal working example that'll help you get started:
kind: Namespace
apiVersion: v1
metadata:
  name: test
  labels:
    name: test
---
kind: Service
apiVersion: v1
metadata:
  name: postgres-so-test
  namespace: test
  labels:
    app: postgres-so-test
spec:
  selector:
    app: postgres-so-test
  ports:
    - port: 5432
      targetPort: 5432
      name: postgres-so-test
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  namespace: test
  name: postgres-so-test
spec:
  replicas: 1
  serviceName: postgres-so-test
  selector:
    matchLabels:
      app: postgres-so-test
  template:
    metadata:
      labels:
        app: postgres-so-test
    spec:
      containers:
        - name: postgres
          image: postgres:13.2
          imagePullPolicy: IfNotPresent
          env:
            - name: POSTGRES_USER
              value: johndoe
            - name: POSTGRES_PASSWORD
              value: thisisntthepasswordyourelokingfor
            - name: POSTGRES_DB
              value: home
          ports:
            - containerPort: 5432
Now let's test this. NOTE: I'll also create a deployment from the Postgres image just to have a pod in this namespace with the pg_isready binary, in order to test the connection to the created DB.
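The test_container.yml used below isn't shown in the answer; a minimal sketch of what it might look like (a Deployment from the same postgres image that just sleeps, so the pg_isready client is available; the labels are assumptions):
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-container
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-container
  template:
    metadata:
      labels:
        app: test-container
    spec:
      containers:
        - name: test-container
          image: postgres:13.2
          # keep the pod idle; it is only used to exec pg_isready
          command: ["sleep", "infinity"]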
pi@rsdev-pi-master:~/test $ kubectl apply -f test_db.yml
namespace/test created
service/postgres-so-test created
statefulset.apps/postgres-so-test created
pi@rsdev-pi-master:~/test $ kubectl apply -f test_container.yml
deployment.apps/test-container created
pi@rsdev-pi-master:~/test $ kubectl get pods -n test
NAME READY STATUS RESTARTS AGE
postgres-so-test-0 1/1 Running 0 19s
test-container-d77d75d78-cgjhc 1/1 Running 0 12s
pi@rsdev-pi-master:~/test $ sudo kubectl get all -n test
NAME READY STATUS RESTARTS AGE
pod/postgres-so-test-0 1/1 Running 0 26s
pod/test-container-d77d75d78-cgjhc 1/1 Running 0 19s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/postgres-so-test ClusterIP 10.43.242.51 <none> 5432/TCP 30s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/test-container 1/1 1 1 19s
NAME DESIRED CURRENT READY AGE
replicaset.apps/test-container-d77d75d78 1 1 1 19s
NAME READY AGE
statefulset.apps/postgres-so-test 1/1 27s
pi@rsdev-pi-master:~/test $ kubectl exec -it test-container-d77d75d78-cgjhc -n test -- /bin/bash
root@test-container-d77d75d78-cgjhc:/# pg_isready -d home -h postgres-so-test -p 5432 -U johndoe
postgres-so-test:5432 - accepting connections
If you still have trouble connecting to the DB, please attach the following:
kubectl describe pod <<postgres_pod_name>>
kubectl logs <<postgres_pod_name>> (ideally after you've tried to connect to it)
kubectl exec -it <<postgres_pod_name>> -- cat /var/lib/postgresql/data/pg_hba.conf
Also research the topic of K8s operators. They are useful for deploying more complex, production-ready application stacks (e.g. a database with master + replicas + LB).

kubernetes dns list all ips for service

I have a list of pods like so:
❯ kubectl get pods -l app=test-pod (base)
NAME READY STATUS RESTARTS AGE
test-deployment-674667c867-jhvg4 1/1 Running 0 14m
test-deployment-674667c867-ssx6h 1/1 Running 0 14m
test-deployment-674667c867-t4crn 1/1 Running 0 14m
I have a service
kubectl get services (base)
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default test-service ClusterIP 10.100.4.138 <none> 4000/TCP 15m
I perform a dns query:
❯ kubectl exec -ti test-deployment-674667c867-jhvg4 -- /bin/bash (base)
root@test-deployment-674667c867-jhvg4:/# busybox nslookup test-service
Server: 10.100.0.10
Address: 10.100.0.10:53
Name: test-service.default.svc.cluster.local
Address: 10.100.4.138
My config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
        - name: python-http-server
          image: python:2.7
          command: ["/bin/bash"]
          args: ["-c", "echo \" Hello from $(hostname)\" > index.html; python -m SimpleHTTPServer 80"]
          ports:
            - name: http
              containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  selector:
    app: test-pod
  ports:
    - protocol: TCP
      port: 4000
      targetPort: http
How can I instead get a list of all the pods' IP addresses via a DNS query?
Ideally I would like to perform an nslookup of a name and get back all the pods' IPs in a list.
You have to use a headless Service with selectors. It returns the IP addresses of the pods.
See here:
https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
.spec.clusterIP must be "None"
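A minimal sketch of a headless variant for the setup above (same selector as test-service; the name is made up):
kind: Service
apiVersion: v1
metadata:
  name: test-service-headless
spec:
  clusterIP: None        # headless: DNS returns the ready pod IPs instead of one virtual IP
  selector:
    app: test-pod
  ports:
    - protocol: TCP
      port: 4000
      targetPort: http
An nslookup of test-service-headless should then return one A record per ready pod.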

How to access Kubernetes service through its name?

I'm running a Cassandra cluster, which I want to access through a Service from a client running in the same cluster. After installing the Cassandra Helm chart, my pods and services look like this:
NAME READY STATUS RESTARTS AGE
pod/cassandra-0 1/1 Running 0 10m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/cassandra ClusterIP 10.97.169.106 <none> 9042/TCP,9160/TCP,8080/TCP 10m
service/cassandra-headless ClusterIP None <none> 7000/TCP,7001/TCP,7199/TCP,9042/TCP,9160/TCP 10m
and I'm informed that Cassandra can be accessed through cassandra.default.svc.cluster.local:9042 from within the cluster.
After applying a ConfigMap, which looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cassandra-configmap
data:
  database_address: cassandra.default.svc.cluster.local
and a DB client deployment file, which looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cassandra-web
  labels:
    app: cassandra-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cassandra-web
  template:
    metadata:
      labels:
        app: cassandra-web
    spec:
      containers:
        - name: cassandra-web
          image: metavige/cassandra-web
          ports:
            - containerPort: 9042
          env:
            - name: CASSANDRA_HOST
              valueFrom:
                configMapKeyRef:
                  name: cassandra-configmap
                  key: database_address
            - name: CASSANDRA_USER
              value: cassandra
            - name: CASSANDRA_PASSWORD
              value: somePassword
The problem is that my client can't translate the service name to an address and is unable to connect to the database, failing with IPAddr::InvalidAddressError: invalid address. If I replace cassandra.default.svc.cluster.local with 10.97.169.106, everything works great, so there has to be some problem in resolving names.
I tried to run busybox to check DNS, which looks OK:
$ kubectl create -f https://k8s.io/examples/admin/dns/busybox.yaml
$ kubectl exec -ti busybox -- nslookup cassandra.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: cassandra.default.svc.cluster.local
Address 1: 10.97.169.106 cassandra.default.svc.cluster.local
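Since busybox resolves the name fine, it may be worth repeating the check from the cassandra-web pod itself and confirming what the client actually receives (a sketch; getent may or may not exist in the metavige/cassandra-web image):
kubectl exec deploy/cassandra-web -- env | grep CASSANDRA_HOST
kubectl exec deploy/cassandra-web -- getent hosts cassandra.default.svc.cluster.local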

Minikube unable to expose service with yaml

I'm trying to run a local registry. I have the following configuration:
Deployment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
    role: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:latest
          ports:
            - containerPort: 5000
          volumeMounts:
            - mountPath: '/registry'
              name: registry-volume
      volumes:
        - name: registry-volume
          hostPath:
            path: '/data'
            type: Directory
Service:
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  selector:
    role: registry
  type: NodePort
  ports:
    - name: registry
      nodePort: 31001
      port: 5000
      protocol: TCP
It all works well when I create the deployment/service. kubectl shows the status as Running for both:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/registry 1 1 1 1 30m
NAME DESIRED CURRENT READY AGE
rs/registry-6549cbc974 1 1 1 30m
NAME READY STATUS RESTARTS AGE
po/registry-6549cbc974-mmqpj 1/1 Running 0 30m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 37m
svc/registry NodePort 10.0.0.6 <none> 5000:31001/TCP 7m
However, when I try to get the external IP for the service using minikube service registry --url, it times out/fails with: Waiting, endpoint for service is not ready yet....
When I delete the service (keeping the deployment intact) and manually expose the deployment using kubectl expose deployment registry --type=NodePort, I am able to get it working.
The Minikube log can be found here.
You need to specify the correct spec.selector in the registry Service manifest:
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  selector:
    app: registry
  type: NodePort
  ports:
    - name: registry
      nodePort: 31001
      port: 5000
      protocol: TCP
Now the registry Service correctly points to the registry pod:
$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 10.0.2.15:8443 14m
registry 172.17.0.4:5000 4s
And you can get the external URL as well:
$ minikube service registry --url
http://192.168.99.106:31001
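The mismatch can also be seen directly with label selectors: the pod template only carries app: registry, so the original selector role: registry matched nothing. A quick check:
$ kubectl get pods -l role=registry   # old selector: returns no pods
$ kubectl get pods -l app=registry    # pod template label: lists the registry pod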