No endpoint set for postgres-service - postgresql

I'm having a problem getting an endpoint for my postgres-service. I've checked the selector and it does seem to match the pod name, but I've posted both YAMLs below.
I've tried resetting Minikube and following the Kubernetes debugging instructions, but no luck.
Can anyone spot where I'm going wrong? Thanks!
postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.1
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: db0
            - name: POSTGRES_USER
              value: somevalue
            - name: POSTGRES_PASSWORD
              value: somevalue
          volumeMounts:
            - mountPath: "/var/lib/postgresql/data"
              name: "somevalue-pgdata"
      volumes:
        - hostPath:
            path: "/home/docker/pgdata"
          name: somevalue-pgdata
And then my postgres-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: ClusterIP
  ports:
    - port: 5432
  selector:
    service: postgres
And here are my services, with no endpoint:
$ kubectl get service
NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes         ClusterIP   10.96.0.1    <none>        443/TCP    46m
postgres-service   ClusterIP   10.97.4.3    <none>        5432/TCP   3s
$ kubectl get endpoints postgres-service
NAME               ENDPOINTS   AGE
postgres-service   <none>      8s
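A quick way to debug an empty endpoints list like this is to compare the service's selector with the labels actually on the pods; the SELECTOR and LABELS columns below make a mismatch obvious:
$ kubectl get service postgres-service -o wide
$ kubectl get pods --show-labels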

Resolved: I modified service.yaml to select on app instead of service. For anyone else, this is the working version:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: ClusterIP
  ports:
    - port: 5432
  selector:
    app: postgres
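With the selector fixed, the endpoints object is populated; re-running
kubectl get endpoints postgres-service
should now list the pod's address on port 5432 instead of <none>.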

Related

How can I expose my PostgreSQL pods with a LoadBalancer service?

I set up 1 master node and 2 worker nodes on a bare metal server. I deployed my PostgreSQL with 3 replicas. This is my deployment file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 3
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: standard
  capacity:
    storage: 15Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 15Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: root
---
I also followed the MetalLB installation guide (https://metallb.universe.tf/installation/) and set up a layer 2 load balancer, which is running fine; I can even expose an nginx pod with this service.
As you can see here.
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>         443/TCP        4h28m
nginx        LoadBalancer   10.107.29.158   153.10.19.35   80:30703/TCP   162m
These are my running pods
NAME                        READY   STATUS    RESTARTS   AGE
nginx-76d6c9b8c-lrljz       1/1     Running   0          3h29m
postgres-7dff8d6d74-8mlnt   1/1     Running   0          136m
postgres-7dff8d6d74-9zxsk   1/1     Running   0          136m
postgres-7dff8d6d74-xzkkx   1/1     Running   0          136m
What issue do I face?
When I try to expose the PostgreSQL pods with the load balancer, I am not able to connect; the server is not reachable.
I tried to expose it as follows:
kubectl expose deploy postgres --port 30432 --type LoadBalancer
I also tried creating a YAML file for this service, still without success.
kind: Service
apiVersion: v1
metadata:
  name: postgres-svc
  labels:
    app: postgres
spec:
  type: LoadBalancer
  ports:
    - port: 5432
      targetPort: 30432
  type: LoadBalancer
  selector:
    metallb-service: postgres
What do I expect?
I want to expose my pods to the external network with this load balancer service, so that new data is updated in all 3 replicas. Can you please help me fix my service.yaml file?
I will be very thankful.
You don't specify a port in your postgres container.
With kubectl expose you should specify a targetPort:
kubectl expose deploy postgres --port 30432 --target-port 5432 --type LoadBalancer
In your YAML you have to switch ports:
kind: Service
apiVersion: v1
metadata:
  name: postgres-svc
  labels:
    app: postgres
spec:
  type: LoadBalancer
  ports:
    - port: 30432
      targetPort: 5432
  selector:
    app: postgres
The selector was also wrong here (and type: LoadBalancer was declared twice). It has to match the labels on the pod.
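To confirm the fix, a quick check along these lines should show the service picking up all three postgres pods (pod IPs will vary):
kubectl get pods -l app=postgres -o wide
kubectl get endpoints postgres-svc
If the selector matches, the endpoints list shows three pod IPs on port 5432.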

Can't access Kubernetes service with Docker Desktop on a Win10 machine

This is my pod.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: voting-app-pod
  labels:
    name: voting-app-pod
    app: demo-voting-app
spec:
  containers:
    - name: voting-app
      image: kodekloud/examplevotingapp_vote:v1
      ports:
        - containerPort: 80
and this is my service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: voting-service
  labels:
    name: voting-service
    app: demo-voting-app
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30004
  selector:
    name: voting-app-pod
    app: demo-voting-app
After executing
kubectl get pods,svc
I get:
NAME                 READY   STATUS    RESTARTS   AGE
pod/voting-app-pod   1/1     Running   0          37m

NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP        11d
service/voting-service   NodePort    10.107.145.225   <none>        80:30004/TCP   6m45s
I tried to access the service through http://localhost:30004 and I also tried
http://127.0.0.1:30004
with no success.
Please declare the selector and template labels in the deployment as below. For more details on these attributes, refer to the Kubernetes documentation.
spec:
  selector:
    matchLabels:
      app: demo-voting-app
  template:
    metadata:
      labels:
        app: demo-voting-app
And in the service as below:
selector:
  app: demo-voting-app
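To verify that the selector change worked, the endpoints object is the quickest check; it should list the pod's IP on port 80 rather than <none>:
kubectl get endpoints voting-service
On Docker Desktop, the NodePort should then typically answer at http://localhost:30004.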

K8s minikube can't access hosted http (external) service

Environment
Windows 10 OS
WSL2 (minikube is using Linux containers)
minikube v1.25.2
kubectl v1.23.0
Use-case
pod 1 - mongodb
pod 2 - mongo express
Internal service for accessing mongodb
External service (LoadBalancer) for accessing mongo express from a browser
Problem at hand
Running the following opens a browser on the service endpoint, but the mongo express page doesn't load, giving "This site can't be reached":
minikube.exe service mongo-express-service
YAML Files
mongo-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
mongoexpress-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
        - name: mongo-express
          image: mongo-express
          ports:
            - containerPort: 8081
          env:
            - name: ME_CONFIG_MONGODB_ADMINUSERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: ME_CONFIG_MONGODB_ADMINPASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
            - name: ME_CONFIG_MONGODB_SERVER
              valueFrom:
                configMapKeyRef:
                  name: mongodb-configmap
                  key: database_url
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      nodePort: 30000
mongo-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service
mongo-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: cm9vdA==
  mongo-root-password: ZXhhbXBsZQ==
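As a side note, the two data values are plain base64-encoded strings ("root" and "example"); they can be reproduced or decoded with:
echo -n 'root' | base64        # cm9vdA==
echo -n 'example' | base64     # ZXhhbXBsZQ==
echo 'cm9vdA==' | base64 --decode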
.\kubectl.exe get all -o wide
NAME                                      READY   STATUS    RESTARTS      AGE   IP           NODE       NOMINATED NODE   READINESS GATES
pod/mongo-express-68c4748bd6-5qnnh        1/1     Running   2 (16h ago)   20h   172.17.0.3   minikube   <none>           <none>
pod/mongodb-deployment-7bb6c6c4c7-w2bdx   1/1     Running   1 (16h ago)   22h   172.17.0.4   minikube   <none>           <none>

NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE   SELECTOR
service/kubernetes              ClusterIP      10.96.0.1        <none>        443/TCP          8d    <none>
service/mongo-express-service   LoadBalancer   10.108.182.35    <pending>     8081:30000/TCP   20h   app=mongo-express
service/mongodb-service         ClusterIP      10.107.207.139   <none>        27017/TCP        21h   app=mongodb

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS      IMAGES          SELECTOR
deployment.apps/mongo-express        1/1     1            1           20h   mongo-express   mongo-express   app=mongo-express
deployment.apps/mongodb-deployment   1/1     1            1           22h   mongodb         mongo           app=mongodb

NAME                                            DESIRED   CURRENT   READY   AGE   CONTAINERS      IMAGES          SELECTOR
replicaset.apps/mongo-express-68c4748bd6        1         1         1       20h   mongo-express   mongo-express   app=mongo-express,pod-template-hash=68c4748bd6
replicaset.apps/mongo-express-6f76745c84        0         0         0       20h   mongo-express   mongo-express   app=mongo-express,pod-template-hash=6f76745c84
replicaset.apps/mongodb-deployment-7bb6c6c4c7   1         1         1       22h   mongodb         mongo           app=mongodb,pod-template-hash=7bb6c6c4c7
While executing minikube.exe service mongo-express-service, the browser opens on http://192.168.49.2:30000/ and returns "This site can't be reached. 192.168.49.2 took too long to respond." The same happens if I use the loopback IP. What am I doing wrong?
Well, it appears that I had to use minikube tunnel to enable access to the service. Strangely, the mongo express service was accessible on port 8081 rather than 30000... I guess this is because it is being accessed through the tunnel and not directly.
kubectl get svc
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes              ClusterIP      10.96.0.1        <none>        443/TCP          8d
mongo-express-service   LoadBalancer   10.108.182.35    127.0.0.1     8081:30000/TCP   22h
mongodb-service         ClusterIP      10.107.207.139   <none>        27017/TCP        23h
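For anyone hitting the same wall, the tunnel has to stay running in its own terminal while the service is used. A minimal sequence, assuming the YAMLs above are already applied:
minikube tunnel
# in a second terminal, EXTERNAL-IP should change from <pending> to an address:
kubectl get svc mongo-express-service
Then browse to http://127.0.0.1:8081 (the service port, not the nodePort).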

Why my Kubernetes service doesn't target pods

I have a Kubernetes service, and I want it to target pods.
This is how I define my service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: myapp
  ports:
    - port: 77
      targetPort: 80
      nodePort: 32766
  type: NodePort
and this is how I define my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: my-pod
      labels:
        app: myapp
    spec:
      containers:
        - name: httd
          image: httpd
          imagePullPolicy: Always
          ports:
            - containerPort: 80
Basically what I did is linking port 80 in the pod to port 77 in the service to port 32766 in the node.
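For reference, that chain maps out as follows (addresses illustrative):
# pod       <pod-ip>:80          containerPort / targetPort
# service   10.100.157.161:77    ClusterIP + port
# node      <node-ip>:32766      nodePort, served on node IPs only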
I already know that my container is running on port 80, because if I do this:
docker run -p 8989:80 httpd
and request localhost:8989, I can see the page.
If I do kubectl get services I get:
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        46h
my-service   NodePort    10.100.157.161   <none>        77:32766/TCP   20m
I tried calling:
10.100.157.161:32766
10.100.157.161:77
But both give a connection error.
What did I miss?
Use the YAMLs below. They work:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: my-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - image: httpd
          name: httpd
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: myapp
  ports:
    - port: 77
      targetPort: 80
      nodePort: 32766
  type: NodePort
master $ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        15m
my-service   NodePort    10.110.219.232   <none>        77:32766/TCP   12s
master $ curl 10.110.219.232:77
<html><body><h1>It works!</h1></body></html>
master $ curl $(hostname -i):32766
<html><body><h1>It works!</h1></body></html>
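The two curls above also show which address pairs with which port: the ClusterIP answers only on the service port (77), while the nodePort (32766) answers only on a node's own IP. From outside the cluster, the working combination is therefore node IP plus nodePort (node IP illustrative):
curl http://<node-ip>:32766
Combinations like ClusterIP:nodePort (10.100.157.161:32766) are never served, which is why the original attempts failed.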

Unable to access the application deployed on a Kubernetes cluster using Kubernetes Playground

I have a 3-node cluster created on Kubernetes Playground.
The 3 nodes as seen on the UI are:
192.168.0.13 : Master
192.168.0.12 : worker
192.168.0.11 : worker
I have a front-end app connected to a back-end MySQL database.
The deployment and service definition for the front end is below.
apiVersion: v1
kind: Service
metadata:
  name: springboot-app
spec:
  type: NodePort
  ports:
    - port: 8080
  selector:
    app: springboot-app
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: springboot-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: springboot-app
  template:
    metadata:
      labels:
        app: springboot-app
    spec:
      containers:
        - image: chinmayeepdas/springbootapp:1.0
          name: springboot-app
          env:
            - name: DATABASE_HOST
              value: demo-mysql
            - name: DATABASE_NAME
              value: chinmayee
            - name: DATABASE_USER
              value: root
            - name: DATABASE_PASSWORD
              value: root
            - name: DATABASE_PORT
              value: "3306"
          ports:
            - containerPort: 8080
              name: app-port
My pods for the UI and back end are up and running.
[node1 ~]$ kubectl describe service springboot-app
Name:                     springboot-app
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=springboot-app
Type:                     NodePort
IP:                       10.96.187.226
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30373/TCP
Endpoints:                10.32.0.2:8080,10.32.0.3:8080,10.40.0.3:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Now when I do
http://192.168.0.12:30373/employee/getAll
I don't see any result. I get "This site can't be reached".
What IP address do I have to give in the URL?
Try this solution:
kubectl proxy --address 0.0.0.0
Then access it as http://localhost:30373/employee/getAll
or maybe
http://localhost:8080/employee/getAll
Let me know if this fixes the access issue and which one works.
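If the proxy route is the one that works, note that kubectl proxy listens on port 8001 by default and exposes services under an API path rather than on the NodePort; a sketch, assuming the default namespace:
kubectl proxy --address 0.0.0.0
curl http://localhost:8001/api/v1/namespaces/default/services/springboot-app:8080/proxy/employee/getAll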