minikube service URL gives ECONNREFUSED on macOS Monterey - postgresql

I have a spring-boot postgres setup that I am trying to containerize and deploy in minikube. My pods and services show that they are up.
$ kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
server-deployment-5bc57dcd4f-zrwzs   1/1     Running   0          14m
postgres-7f887f4d7d-5b8v5            1/1     Running   0          25m
$ kubectl get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
server-service   NodePort    10.100.21.232   <none>        8080:31457/TCP   15m
postgres         ClusterIP   10.97.19.125    <none>        5432/TCP         26m
$ minikube service list
|-------------|------------------|--------------|-----------------------------|
| NAMESPACE   | NAME             | TARGET PORT  | URL                         |
|-------------|------------------|--------------|-----------------------------|
| default     | kubernetes       | No node port |                             |
| kube-system | kube-dns         | No node port |                             |
| custom      | server-service   | http/8080    | http://192.168.59.106:31457 |
| custom      | postgres         | No node port |                             |
|-------------|------------------|--------------|-----------------------------|
But when I try to hit any of my endpoints using postman, I get:
Could not send request. Error: connect ECONNREFUSED 192.168.59.106:31457
I don't know where I am going wrong. I tried deploying the individual containers directly in Docker (I had to modify some of the application.properties to get the REST server talking to the DB container), and that works without a problem, so my server-side code is clearly not the issue.
Here is the yml for the rest-server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
  namespace: custom
spec:
  selector:
    matchLabels:
      app: server-deployment
  template:
    metadata:
      name: server-deployment
      labels:
        app: server-deployment
    spec:
      containers:
        - name: server-deployment
          image: aruns1494/rest-server-k8s:latest
          env:
            - name: POSTGRES_USER
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres_user
            - name: POSTGRES_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres_password
            - name: POSTGRES_SERVICE
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres_service
---
apiVersion: v1
kind: Service
metadata:
  name: server-service
  namespace: custom
spec:
  selector:
    name: server-deployment
  ports:
    - name: http
      port: 8080
  type: NodePort
I have not changed Spring Boot's default port, so I expect the app to listen on 8080. I tried opening that URL in Chrome and Firefox and got the same error message; I would expect at least Spring Boot's default error page when hitting the / endpoint.
I looked up several online articles, but none of them seemed to help. I am also attaching my kube-system pods in case that helps:
$ kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS      AGE
coredns-78fcd69978-x6mv6           1/1     Running   0             39m
etcd-minikube                      1/1     Running   0             40m
kube-apiserver-minikube            1/1     Running   0             40m
kube-controller-manager-minikube   1/1     Running   0             40m
kube-proxy-dnr8p                   1/1     Running   0             39m
kube-scheduler-minikube            1/1     Running   0             40m
storage-provisioner                1/1     Running   1 (39m ago)   40m

My suggestion is to check that the Deployment and Service use matching labels and selectors: currently the pod label in the Deployment is app: server-deployment, while the selector in the Service is name: server-deployment, so the Service matches no pods and gets no endpoints.
If we want to keep the name: server-deployment selector in the Service, then we need to update the Deployment as shown below (the matchLabels and labels fields):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
  namespace: custom
spec:
  selector:
    matchLabels:
      name: server-deployment
  template:
    metadata:
      name: server-deployment
      labels:
        name: server-deployment
    spec:
      containers:
      ...
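Alternatively, keep the Deployment's existing app: server-deployment label and fix the Service selector instead:
spec:
  selector:
    app: server-deployment
Either way, a quick check that the Service now matches the pod is to look at its endpoints (assuming everything lives in the custom namespace, as in the question):
kubectl get endpoints server-service -n custom
If the ENDPOINTS column is empty, the selector still doesn't match the pod labels, and any connection to the NodePort will be refused.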

Possibly the macOS firewall is blocking the connection. Could you navigate to System Preferences > Security & Privacy and check whether the port is being blocked in the General tab? You can also temporarily disable the firewall in the Firewall tab.
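Before changing firewall settings, it may help to probe the NodePort from the host with a plain TCP check, which takes Postman and the browsers out of the picture:
nc -vz 192.168.59.106 31457
If this also reports connection refused, the problem is inside the cluster (most likely the Service selector discussed above) rather than the client or the firewall.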

Related

Kubernetes Ingress: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io"

Playing around with K8s and Ingress in a local minikube setup. Creating an Ingress from a YAML file with the networking.k8s.io/v1 API version fails. See the output below.
Executing
> kubectl apply -f ingress.yaml
returns
Error from server (InternalError): error when creating "ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": an error on the server ("") has prevented the request from succeeding
in a local minikube environment with hyperkit as the VM driver.
Here is the ingress.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mongodb-express-ingress
  namespace: hello-world
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: mongodb-express.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mongodb-express-service-internal
                port:
                  number: 8081
Here is the mongodb-express deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-express
  namespace: hello-world
  labels:
    app: mongodb-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb-express
  template:
    metadata:
      labels:
        app: mongodb-express
    spec:
      containers:
        - name: mongodb-express
          image: mongo-express
          ports:
            - containerPort: 8081
          env:
            - name: ME_CONFIG_MONGODB_ADMINUSERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongodb-root-username
            - name: ME_CONFIG_MONGODB_ADMINPASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongodb-root-password
            - name: ME_CONFIG_MONGODB_SERVER
              valueFrom:
                configMapKeyRef:
                  name: mongodb-configmap
                  key: mongodb_url
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-express-service-external
  namespace: hello-world
spec:
  selector:
    app: mongodb-express
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      nodePort: 30000
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-express-service-internal
  namespace: hello-world
spec:
  selector:
    app: mongodb-express
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
Some more information:
> kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
> minikube version
minikube version: v1.19.0
commit: 15cede53bdc5fe242228853e737333b09d4336b5
> kubectl get all -n hello-world
NAME                                   READY   STATUS    RESTARTS   AGE
pod/mongodb-68d675ddd7-p4fh7           1/1     Running   0          3h29m
pod/mongodb-express-6586846c4c-5nfg7   1/1     Running   6          3h29m

NAME                                       TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/mongodb-express-service-external   LoadBalancer   10.106.185.132   <pending>     8081:30000/TCP   3h29m
service/mongodb-express-service-internal   ClusterIP      10.103.122.120   <none>        8081/TCP         3h3m
service/mongodb-service                    ClusterIP      10.96.197.136    <none>        27017/TCP        3h29m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongodb           1/1     1            1           3h29m
deployment.apps/mongodb-express   1/1     1            1           3h29m

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/mongodb-68d675ddd7           1         1         1       3h29m
replicaset.apps/mongodb-express-6586846c4c   1         1         1       3h29m
> minikube addons enable ingress
▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
🔎 Verifying ingress addon...
🌟 The 'ingress' addon is enabled
> kubectl get all -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-2bn8h        0/1     Completed   0          4h4m
pod/ingress-nginx-admission-patch-vsdqn         0/1     Completed   0          4h4m
pod/ingress-nginx-controller-5d88495688-n6f67   1/1     Running     0          4h4m

NAME                                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.111.176.223   <none>        80:32740/TCP,443:30636/TCP   4h4m
service/ingress-nginx-controller-admission   ClusterIP   10.97.107.77     <none>        443/TCP                      4h4m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           4h4m

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-5d88495688   1         1         1       4h4m

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           7s         4h4m
job.batch/ingress-nginx-admission-patch    1/1           9s         4h4m
However, it works with the beta API version, i.e.:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mongodb-express-ingress-deprecated
  namespace: hello-world
spec:
  rules:
    - host: mongodb-express.local
      http:
        paths:
          - path: /
            backend:
              serviceName: mongodb-express-service-internal
              servicePort: 8081
Any help very much appreciated.
I had the same issue. I successfully fixed it using:
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
then apply the yaml files:
kubectl apply -f ingress_file.yaml
I had the same problem as you; see this issue: https://github.com/kubernetes/minikube/issues/11121.
There are two ways you can try:
1. Download the new version, or go back to the old version.
2. Do the strange thing balnbibarbi described, shown below.
2. The Strange Thing
# Run without --addons=ingress
sudo minikube start --vm-driver=none #--addons=ingress
# install external ingress-nginx
sudo helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
sudo helm repo update
sudo helm install ingress-nginx ingress-nginx/ingress-nginx
# expose your services
And then you will find your Ingress lacks endpoints. So then run:
sudo minikube addons enable ingress
After a few minutes, the endpoints appear.
Problem
If you search Google for examples using the ingress addon, you will notice that the kube-system listing below is missing any ingress pod.
root@ubuntu:~# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-xnmx2          1/1     Running   1          4h40m
etcd-ubuntu                      1/1     Running   1          4h40m
kube-apiserver-ubuntu            1/1     Running   1          4h40m
kube-controller-manager-ubuntu   1/1     Running   1          4h40m
kube-proxy-k9lnl                 1/1     Running   1          4h40m
kube-scheduler-ubuntu            1/1     Running   2          4h40m
storage-provisioner              1/1     Running   3          4h40m
Ref: Expecting apiVersion - networking.k8s.io/v1 instead of extensions/v1beta1
TL;DR
kubectl explain predated a lot of the generic resource parsing logic, so it has a dedicated --api-version flag. This should do what you want.
kubectl explain ingresses --api-version=networking.k8s.io/v1
This should solve your doubt!
In my case, it was a previous deployment of NGINX. Check with:
kubectl get ValidatingWebhookConfiguration -A
If there is more than one NGINX, then delete the older one.
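For example, if the listing shows both a stale and a current configuration, the stale one can be deleted by name (ingress-nginx-admission is the name used in the answers above):
kubectl delete validatingwebhookconfiguration ingress-nginx-admission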
You can also get this error on GKE private clusters as a firewall rule is not configured automatically.
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules
https://github.com/kubernetes/kubernetes/issues/79739
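As a sketch, the missing rule can be added with gcloud; the rule name here is illustrative, and the network and control-plane CIDR must come from your own cluster. The ingress-nginx admission webhook receives traffic on TCP 8443:
gcloud compute firewall-rules create allow-apiserver-to-admission-webhook \
  --network <cluster-network> \
  --source-ranges <master-ipv4-cidr> \
  --allow tcp:8443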

Kubernetes DNS: list all IPs for a service

I have a list of pods like so:
❯ kubectl get pods -l app=test-pod
NAME                               READY   STATUS    RESTARTS   AGE
test-deployment-674667c867-jhvg4   1/1     Running   0          14m
test-deployment-674667c867-ssx6h   1/1     Running   0          14m
test-deployment-674667c867-t4crn   1/1     Running   0          14m
I have a service
kubectl get services
NAMESPACE   NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
default     test-service   ClusterIP   10.100.4.138   <none>        4000/TCP   15m
I perform a DNS query:
❯ kubectl exec -ti test-deployment-674667c867-jhvg4 -- /bin/bash
root@test-deployment-674667c867-jhvg4:/# busybox nslookup test-service
Server:    10.100.0.10
Address:   10.100.0.10:53

Name:      test-service.default.svc.cluster.local
Address:   10.100.4.138
My config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
        - name: python-http-server
          image: python:2.7
          command: ["/bin/bash"]
          args: ["-c", "echo \" Hello from $(hostname)\" > index.html; python -m SimpleHTTPServer 80"]
          ports:
            - name: http
              containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  selector:
    app: test-pod
  ports:
    - protocol: TCP
      port: 4000
      targetPort: http
How can I instead get a list of all the pods' IP addresses via a DNS query?
Ideally, I would like to perform an nslookup of a name and get back a list of all the pod IPs.
You have to use a headless service with selectors. It returns the IP addresses of the pods.
See here:
https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
.spec.clusterIP must be "None"
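A minimal sketch of a headless variant of the service above, reusing the app: test-pod selector (the name test-service-headless is illustrative):
kind: Service
apiVersion: v1
metadata:
  name: test-service-headless
spec:
  clusterIP: None  # headless: DNS returns the pod IPs instead of a single virtual IP
  selector:
    app: test-pod
  ports:
    - protocol: TCP
      port: 4000
      targetPort: http
With this in place, nslookup test-service-headless from inside a pod should return one A record per ready pod rather than the single ClusterIP.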

Can't ping postgres pod from another pod in kubernetes

I created a dummy pod to test the DB connection, using the following YAML:
pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: marks-dummy-pod
spec:
  containers:
    - name: marks-dummy-pod
      image: djtijare/ubuntuping:v1
      command: ["/bin/bash", "-ec", "while :; do echo '.'; sleep 5 ; done"]
  restartPolicy: Never
Dockerfile used:
FROM ubuntu
RUN apt-get update && apt-get install -y iputils-ping
CMD bash
I created the service as
postgresservice.yaml
kind: Service
apiVersion: v1
metadata:
  name: postgressvc
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
and the Endpoints object for the created service as
kind: Endpoints
apiVersion: v1
metadata:
  name: postgressvc
subsets:
  - addresses:
      - ip: 172.31.6.149
    ports:
      - port: 5432
Then I ran ping 172.31.6.149 inside the pod (kubectl exec -it marks-dummy-pod -- bash), but it does not work (ping localhost works).
Output of kubectl get pods,svc,ep -o wide:
NAME                  READY   STATUS    RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
pod/marks-dummy-pod   1/1     Running   0          43m   192.168.1.63   ip-172-31-11-87   <none>           <none>

NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/postgressvc   ClusterIP   10.107.58.81   <none>        5432/TCP   33m   <none>

NAME                    ENDPOINTS           AGE
endpoints/postgressvc   172.31.6.149:5432   32m
Output after applying P Ekambaram's answer:
kubectl get pods,svc,ep -o wide gives
NAME                            READY   STATUS    RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
pod/postgres-855696996d-w6h6c   1/1     Running   0          44s   192.168.1.66   ip-172-31-11-87   <none>           <none>

NAME               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE   SELECTOR
service/postgres   NodePort   10.110.203.204   <none>        5432:31076/TCP   44s   app=postgres

NAME                 ENDPOINTS           AGE
endpoints/postgres   192.168.1.66:5432   44s
So the problem was with the DNS pod in the kube-system namespace.
I just created a new Kubernetes setup and made sure that DNS was working.
For the new setup, refer to my answer to another question:
How to start kubelet service?
Is the postgres pod missing?
Did you create the Endpoints object, or was it auto-generated?
Share the pod definition YAML.
You shouldn't be creating the Endpoints object by hand; that is wrong. Follow the deployment below for Postgres.
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: example
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:11
          imagePullPolicy: Always
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-data
      volumes:
        - name: postgres-data
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: postgres
Undeploy the postgres service and endpoint, then deploy the above YAML.
It should work.
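Note that pinging a Service is not a reliable test in any case: Service ClusterIPs are virtual and generally do not answer ICMP, so a TCP-level check is more meaningful. As a quick sketch (the pod name pg-check is illustrative), pg_isready from the postgres image can probe the service by name:
kubectl run pg-check --rm -it --restart=Never --image=postgres:11 -- pg_isready -h postgres -p 5432
If the service and deployment are wired up correctly, this should report something like "postgres:5432 - accepting connections".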
Why is the NODE IP prefixed with ip-?
You should create a deployment for your database, then create a service that targets this deployment, and then ping using that service. Why ping with an IP?

Getting "CrashLoopBackOff" as status of deployed pod

How can I debug why its status is CrashLoopBackOff?
I am not using minikube; I am working on an AWS Kubernetes instance.
I followed this tutorial:
https://github.com/mkjelland/spring-boot-postgres-on-k8s-sample
When I do
kubectl create -f specs/spring-boot-app.yml
and check the status with
kubectl get pods
it gives
spring-boot-postgres-sample-67f9cbc8c-qnkzg 0/1 CrashLoopBackOff 14 50m
The command
kubectl describe pods spring-boot-postgres-sample-67f9cbc8c-qnkzg
gives
Events:
Type     Reason    Age                     From                        Message
----     ------    ----                    ----                        -------
Warning  BackOff   3m18s (x350 over 78m)   kubelet, ip-172-31-11-87    Back-off restarting failed container
Command kubectl get pods --all-namespaces gives
NAMESPACE     NAME                                           READY   STATUS             RESTARTS   AGE
default       constraintpod                                  1/1     Running            1          88d
default       postgres-78f78bfbfc-72bgf                      1/1     Running            0          109m
default       rcsise-krbxg                                   1/1     Running            1          87d
default       spring-boot-postgres-sample-667f87cf4c-858rx   0/1     CrashLoopBackOff   4          110s
default       twocontainers                                  2/2     Running            479        89d
kube-system   coredns-86c58d9df4-kr4zj                       1/1     Running            1          89d
kube-system   coredns-86c58d9df4-qqq2p                       1/1     Running            1          89d
kube-system   etcd-ip-172-31-6-149                           1/1     Running            8          89d
kube-system   kube-apiserver-ip-172-31-6-149                 1/1     Running            1          89d
kube-system   kube-controller-manager-ip-172-31-6-149        1/1     Running            1          89d
kube-system   kube-flannel-ds-amd64-4h4x7                    1/1     Running            1          89d
kube-system   kube-flannel-ds-amd64-fcvf2                    1/1     Running            1          89d
kube-system   kube-proxy-5sgjb                               1/1     Running            1          89d
kube-system   kube-proxy-hd7tr                               1/1     Running            1          89d
kube-system   kube-scheduler-ip-172-31-6-149                 1/1     Running            1          89d
Command kubectl logs spring-boot-postgres-sample-667f87cf4c-858rx
doesn't print anything.
Why don't you...
run a dummy container (with an endless sleep command),
kubectl exec -it <pod-name> -- bash
and run the program directly to look at its output first-hand?
It's an easier form of debugging on K8s.
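A minimal sketch of that approach using the image from this question (the pod name debug-pod is illustrative):
# Start a throwaway pod from the same image, overriding the entrypoint with an endless sleep
kubectl run debug-pod --image=<mydockerHUbaccount>/spring-boot-postgres-on-k8s:v1 --restart=Never --command -- sleep infinity
# Open a shell inside it and launch the application by hand to watch its output directly
kubectl exec -it debug-pod -- bash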
First of all, I fixed my postgres deployment; there was a "pod has unbound PersistentVolumeClaims" error, which I fixed with the help of this post:
pod has unbound PersistentVolumeClaims
So now my postgres deployment is running.
kubectl logs spring-boot-postgres-sample-67f9cbc8c-qnkzg doesn't print anything, which means there is something wrong in the config file.
kubectl describe pod spring-boot-postgres-sample-67f9cbc8c-qnkzg states that the container is terminated and the reason is Completed.
I fixed that by making the container run indefinitely, adding:
# Just sleep forever
command: [ "sleep" ]
args: [ "infinity" ]
So now my deployment is running.
But then I exposed my service with
kubectl expose deployment spring-boot-postgres-sample --type=LoadBalancer --port=8080
but couldn't get an External-IP, so I did
kubectl patch svc <svc-name> -n <namespace> -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'
and got my External-IP as "172.31.71.218".
But now the problem is that curl http://172.31.71.218:8080/ times out.
Did I do anything wrong?
Here is my deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: spring-boot-postgres-sample
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      name: spring-boot-postgres-sample
      labels:
        app: spring-boot-postgres-sample
    spec:
      containers:
        - name: spring-boot-postgres-sample
          command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
          env:
            - name: POSTGRES_USER
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres_user
            - name: POSTGRES_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres_password
            - name: POSTGRES_HOST
              valueFrom:
                configMapKeyRef:
                  name: hostname-config
                  key: postgres_host
          image: <mydockerHUbaccount>/spring-boot-postgres-on-k8s:v1
Here is my postgres.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  namespace: default
data:
  postgres_user: postgresuser
  postgres_password: password
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  template:
    metadata:
      labels:
        app: postgres
    spec:
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pv-claim
      containers:
        - image: postgres
          name: postgres
          env:
            - name: POSTGRES_USER
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres_user
            - name: POSTGRES_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres_password
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP
  ports:
    - port: 5432
  selector:
    app: postgres
Here is how I created the hostname-config ConfigMap:
kubectl create configmap hostname-config --from-literal=postgres_host=$(kubectl get svc postgres -o jsonpath="{.spec.clusterIP}")
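Note that this bakes the Service's ClusterIP into the ConfigMap at creation time. Since cluster DNS is running, an arguably more robust alternative is to store the Service's DNS name instead:
kubectl create configmap hostname-config --from-literal=postgres_host=postgres
That way the value stays valid even if the Service is ever recreated with a different ClusterIP.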
I was able to reproduce the scenario. It seems there is a connectivity issue between the app and the Postgres DB, so the app failed to start. Please find the logs below; they might help you.
$ kubectl get po
NAME                                           READY   STATUS             RESTARTS   AGE
spring-boot-postgres-sample-5d7c85d98b-qwvjr   0/1     CrashLoopBackOff   19         1h
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [org/springframework/boot/autoconfigure/orm/jpa/HibernateJpaAutoConfiguration.class]: Invocation of init method failed; nested exception is org.hibernate.service.spi.ServiceException: Unable to create requested service [org.hibernate.engine.jdbc.env.spi.JdbcEnvironment]
2019-05-23 10:53:01.889 ERROR 1 --- [ main] o.a.tomcat.jdbc.pool.ConnectionPool : Unable to create initial connections of pool.
org.postgresql.util.PSQLException: Connection to :5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:262) ~[postgresql-9.4.1212.jre7.jar!/:9.4.1212.jre7]
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51) ~[postgresql-9.4.1212.jre7.jar!/:9.4.1212.jre7]

Minikube unable to expose service with yaml

Trying to run a local registry. I have the following configuration:
Deployment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
    role: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:latest
          ports:
            - containerPort: 5000
          volumeMounts:
            - mountPath: '/registry'
              name: registry-volume
      volumes:
        - name: registry-volume
          hostPath:
            path: '/data'
            type: Directory
Service:
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  selector:
    role: registry
  type: NodePort
  ports:
    - name: registry
      nodePort: 31001
      port: 5000
      protocol: TCP
It all works well when I create the deployment/service; kubectl shows the registry pod as Running and both objects as created:
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/registry   1         1         1            1           30m

NAME                     DESIRED   CURRENT   READY   AGE
rs/registry-6549cbc974   1         1         1       30m
NAME                           READY   STATUS    RESTARTS   AGE
po/registry-6549cbc974-mmqpj   1/1     Running   0          30m

NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
svc/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP          37m
svc/registry     NodePort    10.0.0.6     <none>        5000:31001/TCP   7m
However, when I try to get the service URL using minikube service registry --url, it times out and fails with: Waiting, endpoint for service is not ready yet....
When I delete the service (keeping the deployment intact) and manually expose the deployment using kubectl expose deployment registry --type=NodePort, I am able to get it working.
Minikube log can be found here.
You need to specify the correct spec.selector in the registry Service manifest:
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  selector:
    app: registry
  type: NodePort
  ports:
    - name: registry
      nodePort: 31001
      port: 5000
      protocol: TCP
Now the registry service correctly points to the registry pod:
$ kubectl get endpoints
NAME         ENDPOINTS         AGE
kubernetes   10.0.2.15:8443    14m
registry     172.17.0.4:5000   4s
And you can get the external URL as well:
$ minikube service registry --url
http://192.168.99.106:31001
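As a sanity check, the registry's HTTP API can be queried through that URL; an empty registry should answer with an empty repository list (/v2/_catalog is part of the standard Docker registry v2 API):
$ curl http://192.168.99.106:31001/v2/_catalog
{"repositories":[]}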