K8s minikube can't access hosted http (external) service - kubernetes

Environment
Windows 10 OS
WSL2 (minikube is using Linux containers)
minikube v1.25.2
kubectl v1.23.0
Use-case
pod 1 - mongodb
pod 2 - mongo express
Internal service for accessing mongodb
External service (LoadBalancer) for accessing mongo express from a browser
Problem at hand
Running the following opens a browser on the service endpoint, but the mongo express page doesn't load, giving "This site can’t be reached":
minikube.exe service mongo-express-service
YAML Files
mongo-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
mongoexpress-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
      - name: mongo-express
        image: mongo-express
        ports:
        - containerPort: 8081
        env:
        - name: ME_CONFIG_MONGODB_ADMINUSERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: ME_CONFIG_MONGODB_ADMINPASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
        - name: ME_CONFIG_MONGODB_SERVER
          valueFrom:
            configMapKeyRef:
              name: mongodb-configmap
              key: database_url
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      nodePort: 30000
mongo-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service
mongo-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: cm9vdA==
  mongo-root-password: ZXhhbXBsZQ==
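For reference, the two data values above are just the plain credentials base64-encoded (a quick sketch, assuming a standard base64 utility is available):

# encode plain-text credentials for the Secret's data: section
echo -n 'root' | base64      # -> cm9vdA==
echo -n 'example' | base64   # -> ZXhhbXBsZQ==

# decode to verify what is stored
echo 'cm9vdA==' | base64 --decode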
.\kubectl.exe get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/mongo-express-68c4748bd6-5qnnh 1/1 Running 2 (16h ago) 20h 172.17.0.3 minikube <none> <none>
pod/mongodb-deployment-7bb6c6c4c7-w2bdx 1/1 Running 1 (16h ago) 22h 172.17.0.4 minikube <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d <none>
service/mongo-express-service LoadBalancer 10.108.182.35 <pending> 8081:30000/TCP 20h app=mongo-express
service/mongodb-service ClusterIP 10.107.207.139 <none> 27017/TCP 21h app=mongodb
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/mongo-express 1/1 1 1 20h mongo-express mongo-express app=mongo-express
deployment.apps/mongodb-deployment 1/1 1 1 22h mongodb mongo app=mongodb
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/mongo-express-68c4748bd6 1 1 1 20h mongo-express mongo-express app=mongo-express,pod-template-hash=68c4748bd6
replicaset.apps/mongo-express-6f76745c84 0 0 0 20h mongo-express mongo-express app=mongo-express,pod-template-hash=6f76745c84
replicaset.apps/mongodb-deployment-7bb6c6c4c7 1 1 1 22h mongodb mongo app=mongodb,pod-template-hash=7bb6c6c4c7
While executing minikube.exe service mongo-express-service the browser opens on http://192.168.49.2:30000/ and returns "This site can’t be reached 192.168.49.2 took too long to respond." The same happens if I use the loopback IP. What am I doing wrong?

Well, it appears that I had to use minikube tunnel to enable access to the service. Strangely, the mongo express service was then accessible on port 8081 rather than 30000... I guess this is because it is being accessed through the tunnel and not directly (see the sketch after the output below).
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
mongo-express-service LoadBalancer 10.108.182.35 127.0.0.1 8081:30000/TCP 22h
mongodb-service ClusterIP 10.107.207.139 <none> 27017/TCP 23h
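For anyone hitting the same thing, this is roughly the workflow that worked for me (a sketch; the EXTERNAL-IP and URL depend on your setup):

# terminal 1: keep the tunnel running (it may prompt for elevated privileges)
minikube tunnel

# terminal 2: the LoadBalancer service now gets an EXTERNAL-IP assigned
kubectl get svc mongo-express-service

# the service is then reached on the service port (8081), not the nodePort,
# e.g. http://127.0.0.1:8081 in the browser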

Related

Can't access Kubernetes service with docker desktop on win10 machine

This is my pod.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: voting-app-pod
  labels:
    name: voting-app-pod
    app: demo-voting-app
spec:
  containers:
  - name: voting-app
    image: kodekloud/examplevotingapp_vote:v1
    ports:
    - containerPort: 80
And this is my service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: voting-service
  labels:
    name: voting-service
    app: demo-voting-app
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30004
  selector:
    name: voting-app-pod
    app: demo-voting-app
After executing
kubectl get pods,svc
I get:
NAME READY STATUS RESTARTS AGE
pod/voting-app-pod 1/1 Running 0 37m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
service/voting-service NodePort 10.107.145.225 <none> 80:30004/TCP 6m45s
I tried to access the service through http://localhost:30004 and I also tried
http://127.0.0.1:30004
with no success.
Please have the selector and template label declarations in the deployment as below. For more details on these attributes, refer to the Kubernetes documentation.
spec:
  selector:
    matchLabels:
      app: demo-voting-app
  template:
    metadata:
      labels:
        app: demo-voting-app
And in the service as below:
selector:
  app: demo-voting-app
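Once the selector and the template labels line up, you can confirm the service actually picks up the pod; a small sketch using the names from the manifests above:

# ENDPOINTS should list the pod IP instead of <none>
kubectl get endpoints voting-service

# double-check the labels the selector has to match
kubectl get pods --show-labels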

Nginx minikube ingress : 503 Server error

I am trying to use minikube to deploy a sample Flask app, but I am getting a 503 nginx error. Please note that I am able to access the app using the NodePort service config.
I checked with the minikube IP, which is mapped to localhost, and tried to access the app, but I still get the 503 error. Not sure if I missed anything. I enabled the minikube addon for nginx ingress.
Here are my files -
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp-deployment
  labels:
    app: flaskapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flaskapp
  template:
    metadata:
      labels:
        app: flaskapp
    spec:
      containers:
      - name: flaskapp
        image: <repo>/sample-flask-app:1.0
        ports:
        - containerPort: 5000
        env:
        - name: APPLICATION_SETTINGS
          value: prd_config.py
      imagePullSecrets:
      - name: jfrog-secret
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: flaskapp-service
  labels:
    app: flaskapp
spec:
  selector:
    app: flaskapp
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flaskapp-ingress
  labels:
    app: flaskapp
spec:
  defaultBackend:
    service:
      name: default-http-backend
      port:
        number: 80
  rules:
  - host: mydashboard.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: flaskapp-service
            port:
              number: 5000
Ingress status :
minikube kubectl -- get ingress flaskapp-ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
flaskapp-ingress nginx mydashboard.com localhost 80 18m
Cluster status:
minikube kubectl -- get all
NAME READY STATUS RESTARTS AGE
pod/flaskapp-deployment-7f59f96fd5-j9mv9 1/1 Running 1 (103m ago) 15h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/flaskapp-deployment ClusterIP 10.103.143.58 <none> 5000/TCP 34m
service/flaskapp-service ClusterIP 10.111.242.99 <none> 5000/TCP 15h
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/flaskapp-deployment 1/1 1 1 15h
NAME DESIRED CURRENT READY AGE
replicaset.apps/flaskapp-deployment-7f59f96fd5 1 1 1 15h
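For what it's worth, a quick way to narrow down whether the ingress rule above is the failing piece (a sketch; it assumes the ingress addon answers on the minikube IP and that mydashboard.com resolves there, e.g. via your hosts file):

# confirm the backend service actually has pod endpoints
kubectl get endpoints flaskapp-service

# hit the ingress controller directly, supplying the Host header the rule expects
curl -v -H "Host: mydashboard.com" http://$(minikube ip)/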

Kubernetes External IP is working only in the cluster

I am new to Kubernetes and I am trying to host a testing site. I have pods running as below:
NAME READY STATUS RESTARTS AGE
sasank-website-78864ff54b-656ld 1/1 Running 0 30m
sasank-website-78864ff54b-qdn65 1/1 Running 0 30m
Deployment file used:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sasank-website
  labels:
    app: website
spec:
  replicas: 2
  selector:
    matchLabels:
      app: website
  template:
    metadata:
      labels:
        app: website
    spec:
      containers:
      - name: webtesting
        image: 9110727495/userdetails:latest
        ports:
        - containerPort: 80
Service file used:
apiVersion: v1
kind: Service
metadata:
  name: testingsite
  labels:
    app: website
spec:
  type: NodePort
  externalIPs:
  - 192.168.1.10
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: website
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 102m
testingsite NodePort 10.96.246.110 192.168.1.10 80:31438/TCP 5m9s
When I try to access the IP with port 31438 it refuses to connect, although port 80 works inside the cluster. When I try to access the same IP from outside the cluster it refuses to connect even on port 80. I am not sure how to make sense of this. Please help. Thank you.
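A sketch of the checks that usually narrow this down (using the names from the manifests above; <node-ip> is a placeholder for a real node address):

# which addresses do the nodes actually have?
kubectl get nodes -o wide

# does the service have pod endpoints behind it?
kubectl get endpoints testingsite

# test the NodePort from outside the cluster against a real node IP
curl -v http://<node-ip>:31438/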

No endpoint set for postgres-service

I'm having a problem getting an endpoint for my postgres-service. I've checked the selector and it does seem to match the pod name, but I've posted both yamls below.
I've tried resetting Minikube and following the Kubernetes debugging instructions, but no luck.
Can anyone spot where I'm going wrong? Thanks!
postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:10.1
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: db0
        - name: POSTGRES_USER
          value: somevalue
        - name: POSTGRES_PASSWORD
          value: somevalue
        volumeMounts:
        - mountPath: "/var/lib/postgresql/data"
          name: "somevalue-pgdata"
      volumes:
      - hostPath:
          path: "/home/docker/pgdata"
        name: somevalue-pgdata
And then my postgres-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: ClusterIP
  ports:
  - port: 5432
  selector:
    service: postgres
And showing my services, and no endpoint:
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 46m
postgres-service ClusterIP 10.97.4.3 <none> 5432/TCP 3s
$ kubectl get endpoints postgres-service
NAME ENDPOINTS AGE
postgres-service <none> 8s
Resolved - I modified service.yaml to select on app instead of service. For anyone else, this is the working version:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: ClusterIP
  ports:
  - port: 5432
  selector:
    app: postgres
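After applying the corrected service, the endpoint should appear; a quick sketch to confirm it (file name assumed to be postgres-service.yaml as above):

kubectl apply -f postgres-service.yaml

# ENDPOINTS should now show the postgres pod's IP:5432 instead of <none>
kubectl get endpoints postgres-service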

No resources found when installing kubernetes dashboard

I am installing the kubernetes dashboard using this command:
[root@iZuf63refzweg1d9dh94t8Z ~]# kubectl create -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
This is my kubernetes yaml config:
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-beijing.aliyuncs.com/minminmsn/kubernetes-dashboard:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Get the result:
[root@iZuf63refzweg1d9dh94t8Z ~]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP 102d
kubernetes-dashboard NodePort 10.254.180.117 <none> 443:31720/TCP 58s
metrics-server ClusterIP 10.43.96.112 <none> 443/TCP 102d
[root@iZuf63refzweg1d9dh94t8Z ~]# kubectl get pods -n kube-system
No resources found.
but when I check the port 31720:
lsof -i:31720
the output is empty. Did the service deploy successfully? How can I check the deployment log? Why is the port not bound?
It is under its own namespace - "kubernetes-dashboard". So just use kubectl get all -n kubernetes-dashboard to see everything.
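If, as in the manifest above, everything was created in kube-system instead, the same idea applies; a sketch of commands for finding the pods and their logs (adjust the namespace to wherever your resources actually landed):

# list dashboard resources in the namespace used by the manifest above
kubectl get all -n kube-system -l k8s-app=kubernetes-dashboard

# inspect the deployment and read the pod logs
kubectl describe deployment kubernetes-dashboard -n kube-system
kubectl logs deployment/kubernetes-dashboard -n kube-system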