Google Cloud Container: Cannot connect to MongoDB service - mongodb

I created a MongoDB replication controller and a mongo service. I tried to connect to it from a different mongo pod just to test the connection, but that does not work:
root@mongo-test:/# mongo mongo-service/mydb
MongoDB shell version: 3.2.0
connecting to: mongo-service/mydb
2015-12-09T11:05:55.256+0000 E QUERY [thread1] Error: network error while attempting to run command 'isMaster' on host 'mongo-service:27017' :
connect@src/mongo/shell/mongo.js:226:14
@(connect):1:6
exception: connect failed
I am not sure what I have done wrong in the configuration; I may be missing something here.
kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
mongo mongo mongo:latest name=mongo 1 9s
kubectl get pods
NAME READY STATUS RESTARTS AGE
mongo-6bnak 1/1 Running 0 1m
mongo-test 1/1 Running 0 21m
kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.119.240.1 <none> 443/TCP <none> 23h
mongo-service 10.119.254.202 <none> 27017/TCP name=mongo,role=mongo 1m
I configured the RC and Service with the following configs
mongo-rc:
{
  "metadata": {
    "name": "mongo",
    "labels": { "name": "mongo" }
  },
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "spec": {
    "replicas": 1,
    "template": {
      "metadata": {
        "labels": { "name": "mongo" }
      },
      "spec": {
        "volumes": [
          {
            "name": "mongo-disk",
            "gcePersistentDisk": {
              "pdName": "mongo-disk",
              "fsType": "ext4"
            }
          }
        ],
        "containers": [
          {
            "name": "mongo",
            "image": "mongo:latest",
            "ports": [
              {
                "name": "mongo",
                "containerPort": 27017
              }
            ],
            "volumeMounts": [
              {
                "name": "mongo-disk",
                "mountPath": "/data/db"
              }
            ]
          }
        ]
      }
    }
  }
}
mongo-service:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "mongo-service"
  },
  "spec": {
    "ports": [
      {
        "port": 27017,
        "targetPort": "mongo"
      }
    ],
    "selector": {
      "name": "mongo",
      "role": "mongo"
    }
  }
}

A bit embarrassing: the issue was that I used the selector "role" in the service but did not define that label on the RC's pod template.
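For reference, the fix is either to drop "role" from the service's selector or to add that label to the RC's pod template so the selectors match. A minimal sketch of the latter, changing only the pod template metadata inside the mongo-rc spec above:
"metadata": {
  "labels": { "name": "mongo", "role": "mongo" }
}
After recreating the RC, the service should get an endpoint and the mongo shell connection should succeed.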

Related

Unable to open service via kubectl proxy

➜ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
airflow-flower-service ClusterIP 172.20.119.107 <none> 5555/TCP 54d
airflow-service ClusterIP 172.20.76.63 <none> 80/TCP 54d
backend-service ClusterIP 172.20.39.154 <none> 80/TCP 54d
➜ kubectl proxy
xdg-open http://127.0.0.1:8001/api/v1/namespaces/edna/services/http:airflow-service:/proxy/#q=ip-192-168-114-35
and it fails with
Error trying to reach service: 'dial tcp 10.0.102.174:80: i/o timeout'
However if I expose the service via kubectl port-forward I can open the service in the browser
kubectl port-forward service/backend-service 8080:80 -n edna
xdg-open HTTP://localhost:8080
So how do I open the service via that long URL (similar to how we open the Kubernetes dashboard)?
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default
If I query the API with curl, I see this output:
➜ curl http://127.0.0.1:8001/api/v1/namespaces/edna/services/backend-service/
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "backend-service",
"namespace": "edna",
"selfLink": "/api/v1/namespaces/edna/services/backend-service",
"uid": "7163dd4e-e76d-4517-b0fe-d2d516b5dc16",
"resourceVersion": "6433582",
"creationTimestamp": "2020-08-14T05:58:45Z",
"labels": {
"app.kubernetes.io/instance": "backend-etl"
},
"annotations": {
"argocd.argoproj.io/sync-wave": "10",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{\"argocd.argoproj.io/sync-wave\":\"10\"},\"labels\":{\"app.kubernetes.io/instance\":\"backend-etl\"},\"name\":\"backend-service\",\"namespace\":\"edna\"},\"spec\":{\"ports\":[{\"port\":80,\"protocol\":\"TCP\",\"targetPort\":80}],\"selector\":{\"app\":\"edna-backend\"},\"type\":\"ClusterIP\"}}\n"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 80,
"targetPort": 80
}
],
"selector": {
"app": "edna-backend"
},
"clusterIP": "172.20.39.154",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
}
Instead of your URL:
http://127.0.0.1:8001/api/v1/namespaces/edna/services/http:airflow-service:/proxy
Try without 'http:'
http://127.0.0.1:8001/api/v1/namespaces/edna/services/airflow-service/proxy
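More generally, the apiserver proxy URL has the form http://localhost:8001/api/v1/namespaces/<NAMESPACE>/services/[https:]<SERVICE>[:<PORT_NAME>]/proxy/, where the https: prefix and the port segment are only needed for named or TLS ports. A quick sanity check with curl, as a sketch using the edna namespace and the unnamed port 80 from the dump above:
curl http://127.0.0.1:8001/api/v1/namespaces/edna/services/backend-service:80/proxy/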

When updating the service target I get 502 errors for 5 to 60 seconds

I want to manually reroute (when needed) a web server (a Helm deployment on GKE) to another one.
To do that I have 3 Helm deployments:
Application X
Application Y
Ingress on application X
Everything works fine, but if I launch a Helm upgrade of the Ingress chart that changes only the selector of the service I target, I get 502 errors :(
Source of the service:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.Service.Name }}-https
  labels:
    app: {{ .Values.Service.Name }}
    type: svc
    name: {{ .Values.Service.Name }}
    environment: {{ .Values.Environment.Name }}
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    beta.cloud.google.com/backend-config: '{"ports": {"{{ .Values.Application.Port }}":"{{ .Values.Service.Name }}-https"}}'
spec:
  type: NodePort
  selector:
    name: {{ .Values.Application.Name }}
    environment: {{ .Values.Environment.Name }}
  ports:
    - protocol: TCP
      port: {{ .Values.Application.Port }}
      targetPort: {{ .Values.Application.Port }}
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: {{ .Values.Service.Name }}-https
spec:
  timeoutSec: 50
  connectionDraining:
    drainingTimeoutSec: 60
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 300
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.Service.Name }}-https
  labels:
    app: {{ .Values.Service.Name }}
    type: ingress
    name: {{ .Values.Service.Name }}
    environment: {{ .Values.Environment.Name }}
  annotations:
    kubernetes.io/ingress.global-static-ip-name: {{ .Values.Service.PublicIpName }}
    networking.gke.io/managed-certificates: "{{ join "," .Values.Service.DomainNames }}"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: {{ $.Values.Service.Name }}-https
    servicePort: 80
  rules:
  {{- range .Values.Service.DomainNames }}
    - host: {{ . | title | lower }}
      http:
        paths:
          - backend:
              serviceName: {{ $.Values.Service.Name }}-https
              servicePort: 80
  {{- end }}
The only thing that changes from one call to another is the value of "{{ .Values.Application.Name }}"; all other values are strictly the same.
The targeted pods are always up and running, and all of them respond 200 when tested via kubectl port-forward.
Here is the status of all my namespace objects:
NAME READY STATUS RESTARTS AGE
pod/drupal-dummy-404-v1-pod-744454b7ff-m4hjk 1/1 Running 0 2m32s
pod/drupal-dummy-404-v1-pod-744454b7ff-z5l29 1/1 Running 0 2m32s
pod/drupal-dummy-v1-pod-77f5bf55c6-9dq8n 1/1 Running 0 3m58s
pod/drupal-dummy-v1-pod-77f5bf55c6-njfl9 1/1 Running 0 3m57s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/drupal-dummy-v1-service-https NodePort 172.16.90.71 <none> 80:31391/TCP 3m49s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/drupal-dummy-404-v1-pod 2/2 2 2 2m32s
deployment.apps/drupal-dummy-v1-pod 2/2 2 2 3m58s
NAME DESIRED CURRENT READY AGE
replicaset.apps/drupal-dummy-404-v1-pod-744454b7ff 2 2 2 2m32s
replicaset.apps/drupal-dummy-v1-pod-77f5bf55c6 2 2 2 3m58s
NAME AGE
managedcertificate.networking.gke.io/d8.syspod.fr 161m
managedcertificate.networking.gke.io/d8gfi.syspod.fr 128m
managedcertificate.networking.gke.io/dummydrupald8.cnes.fr 162m
NAME HOSTS ADDRESS PORTS AGE
ingress.extensions/drupal-dummy-v1-service-https d8gfi.syspod.fr 34.120.106.136 80 3m50s
Another test was to pre-launch two services, one for each deployment, and only update the Ingress Helm deployment, this time changing "{{ $.Values.Service.Name }}". Same problem, and this time the site is unavailable for 60s to 300s.
Here is the status of all my namespace objects (for this second test):
root@47475bc8c41f:/opt/bin# k get all,svc,ingress,managedcertificates
NAME READY STATUS RESTARTS AGE
pod/drupal-dummy-404-v1-pod-744454b7ff-8r5pm 1/1 Running 0 26m
pod/drupal-dummy-404-v1-pod-744454b7ff-9cplz 1/1 Running 0 26m
pod/drupal-dummy-v1-pod-77f5bf55c6-56dnr 1/1 Running 0 30m
pod/drupal-dummy-v1-pod-77f5bf55c6-mg95j 1/1 Running 0 30m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/drupal-dummy-404-v1-pod-https NodePort 172.16.106.121 <none> 80:31030/TCP 26m
service/drupal-dummy-v1-pod-https NodePort 172.16.245.251 <none> 80:31759/TCP 27m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/drupal-dummy-404-v1-pod 2/2 2 2 26m
deployment.apps/drupal-dummy-v1-pod 2/2 2 2 30m
NAME DESIRED CURRENT READY AGE
replicaset.apps/bastion-66bb77bfd5 1 1 1 148m
replicaset.apps/drupal-dummy-404-v1-pod-744454b7ff 2 2 2 26m
replicaset.apps/drupal-dummy-v1-pod-77f5bf55c6 2 2 2 30m
NAME HOSTS ADDRESS PORTS AGE
ingress.extensions/drupal-dummy-v1-service-https d8gfi.syspod.fr 34.120.106.136 80 14m
Does anybody have any explanation (and solution)?
Added the deployment dump (surely something is missing, but I don't see it):
root@c55834fbdf1a:/# k get deployment.apps/drupal-dummy-v1-pod -o json
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {
"deployment.kubernetes.io/revision": "2",
"meta.helm.sh/release-name": "drupal-dummy-v1-pod",
"meta.helm.sh/release-namespace": "e1"
},
"creationTimestamp": "2020-06-23T18:49:59Z",
"generation": 2,
"labels": {
"app.kubernetes.io/managed-by": "Helm",
"environment": "e1",
"name": "drupal-dummy-v1-pod",
"type": "dep"
},
"name": "drupal-dummy-v1-pod",
"namespace": "e1",
"resourceVersion": "3977170",
"selfLink": "/apis/apps/v1/namespaces/e1/deployments/drupal-dummy-v1-pod",
"uid": "56f74fb9-b582-11ea-9df2-42010a000006"
},
"spec": {
"progressDeadlineSeconds": 600,
"replicas": 2,
"revisionHistoryLimit": 10,
"selector": {
"matchLabels": {
"environment": "e1",
"name": "drupal-dummy-v1-pod",
"type": "dep"
}
},
"strategy": {
"rollingUpdate": {
"maxSurge": "25%",
"maxUnavailable": "25%"
},
"type": "RollingUpdate"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"environment": "e1",
"name": "drupal-dummy-v1-pod",
"type": "dep"
}
},
"spec": {
"containers": [
{
"env": [
{
"name": "APPLICATION",
"value": "drupal-dummy-v1-pod"
},
{
"name": "DB_PASS",
"valueFrom": {
"secretKeyRef": {
"key": "password",
"name": "dbpassword"
}
}
},
{
"name": "DB_FQDN",
"valueFrom": {
"configMapKeyRef": {
"key": "dbip",
"name": "gcpenv"
}
}
},
{
"name": "DB_PORT",
"valueFrom": {
"configMapKeyRef": {
"key": "dbport",
"name": "gcpenv"
}
}
},
{
"name": "DB_NAME",
"valueFrom": {
"configMapKeyRef": {
"key": "dbdatabase",
"name": "gcpenv"
}
}
},
{
"name": "DB_USER",
"valueFrom": {
"configMapKeyRef": {
"key": "dbuser",
"name": "gcpenv"
}
}
}
],
"image": "eu.gcr.io/gke-drupal-276313/drupal-dummy:1.0.0",
"imagePullPolicy": "Always",
"livenessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/",
"port": 80,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 5
},
"name": "drupal-dummy-v1-pod",
"ports": [
{
"containerPort": 80,
"protocol": "TCP"
}
],
"readinessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/",
"port": 80,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 5
},
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/var/www/html/sites/default",
"name": "drupal-dummy-v1-pod"
}
]
}
],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"terminationGracePeriodSeconds": 30,
"volumes": [
{
"name": "drupal-dummy-v1-pod",
"persistentVolumeClaim": {
"claimName": "drupal-dummy-v1-pod"
}
}
]
}
}
},
"status": {
"availableReplicas": 2,
"conditions": [
{
"lastTransitionTime": "2020-06-23T18:56:05Z",
"lastUpdateTime": "2020-06-23T18:56:05Z",
"message": "Deployment has minimum availability.",
"reason": "MinimumReplicasAvailable",
"status": "True",
"type": "Available"
},
{
"lastTransitionTime": "2020-06-23T18:49:59Z",
"lastUpdateTime": "2020-06-23T18:56:05Z",
"message": "ReplicaSet \"drupal-dummy-v1-pod-6865d969cd\" has successfully progressed.",
"reason": "NewReplicaSetAvailable",
"status": "True",
"type": "Progressing"
}
],
"observedGeneration": 2,
"readyReplicas": 2,
"replicas": 2,
"updatedReplicas": 2
}
}
Here is the service dump too:
root@c55834fbdf1a:/# k get service/drupal-dummy-v1-service-https -o json
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"annotations": {
"beta.cloud.google.com/backend-config": "{\"ports\": {\"80\":\"drupal-dummy-v1-service-https\"}}",
"cloud.google.com/neg": "{\"ingress\": true}",
"cloud.google.com/neg-status": "{\"network_endpoint_groups\":{\"80\":\"k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551\"},\"zones\":[\"europe-west3-a\",\"europe-west3-b\"]}",
"meta.helm.sh/release-name": "drupal-dummy-v1-service",
"meta.helm.sh/release-namespace": "e1"
},
"creationTimestamp": "2020-06-23T18:50:45Z",
"labels": {
"app": "drupal-dummy-v1-service",
"app.kubernetes.io/managed-by": "Helm",
"environment": "e1",
"name": "drupal-dummy-v1-service",
"type": "svc"
},
"name": "drupal-dummy-v1-service-https",
"namespace": "e1",
"resourceVersion": "3982781",
"selfLink": "/api/v1/namespaces/e1/services/drupal-dummy-v1-service-https",
"uid": "722d3a99-b582-11ea-9df2-42010a000006"
},
"spec": {
"clusterIP": "172.16.103.181",
"externalTrafficPolicy": "Cluster",
"ports": [
{
"nodePort": 32396,
"port": 80,
"protocol": "TCP",
"targetPort": 80
}
],
"selector": {
"environment": "e1",
"name": "drupal-dummy-v1-pod"
},
"sessionAffinity": "None",
"type": "NodePort"
},
"status": {
"loadBalancer": {}
}
}
And the ingress one:
root@c55834fbdf1a:/# k get ingress.extensions/drupal-dummy-v1-service-https -o json
{
"apiVersion": "extensions/v1beta1",
"kind": "Ingress",
"metadata": {
"annotations": {
"ingress.gcp.kubernetes.io/pre-shared-cert": "mcrt-a15e339b-6c3f-4f23-8f6b-688dc98b33a6,mcrt-f3a385de-0541-4b9c-8047-6dcfcbd4d74f",
"ingress.kubernetes.io/backends": "{\"k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551\":\"HEALTHY\"}",
"ingress.kubernetes.io/forwarding-rule": "k8s-fw-e1-drupal-dummy-v1-service-https--4846660e8b9bd880",
"ingress.kubernetes.io/https-forwarding-rule": "k8s-fws-e1-drupal-dummy-v1-service-https--4846660e8b9bd880",
"ingress.kubernetes.io/https-target-proxy": "k8s-tps-e1-drupal-dummy-v1-service-https--4846660e8b9bd880",
"ingress.kubernetes.io/ssl-cert": "mcrt-a15e339b-6c3f-4f23-8f6b-688dc98b33a6,mcrt-f3a385de-0541-4b9c-8047-6dcfcbd4d74f",
"ingress.kubernetes.io/target-proxy": "k8s-tp-e1-drupal-dummy-v1-service-https--4846660e8b9bd880",
"ingress.kubernetes.io/url-map": "k8s-um-e1-drupal-dummy-v1-service-https--4846660e8b9bd880",
"kubernetes.io/ingress.global-static-ip-name": "gkxe-k1312-e1-drupal-dummy-v1",
"meta.helm.sh/release-name": "drupal-dummy-v1-service",
"meta.helm.sh/release-namespace": "e1",
"networking.gke.io/managed-certificates": "dummydrupald8.cnes.fr,d8.syspod.fr",
"nginx.ingress.kubernetes.io/rewrite-target": "/"
},
"creationTimestamp": "2020-06-23T18:50:45Z",
"generation": 1,
"labels": {
"app": "drupal-dummy-v1-service",
"app.kubernetes.io/managed-by": "Helm",
"environment": "e1",
"name": "drupal-dummy-v1-service",
"type": "ingress"
},
"name": "drupal-dummy-v1-service-https",
"namespace": "e1",
"resourceVersion": "3978178",
"selfLink": "/apis/extensions/v1beta1/namespaces/e1/ingresses/drupal-dummy-v1-service-https",
"uid": "7237fc51-b582-11ea-9df2-42010a000006"
},
"spec": {
"backend": {
"serviceName": "drupal-dummy-v1-service-https",
"servicePort": 80
},
"rules": [
{
"host": "dummydrupald8.cnes.fr",
"http": {
"paths": [
{
"backend": {
"serviceName": "drupal-dummy-v1-service-https",
"servicePort": 80
}
}
]
}
},
{
"host": "d8.syspod.fr",
"http": {
"paths": [
{
"backend": {
"serviceName": "drupal-dummy-v1-service-https",
"servicePort": 80
}
}
]
}
}
]
},
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "34.98.97.102"
}
]
}
}
}
I have seen this in the Kubernetes events (only when I reconfigure my service selector to target the first or second deployment).
Switching to the unavailability page (3 seconds of 502):
81s Normal Attach service/drupal-dummy-v1-service-https Attach 1 network endpoint(s) (NEG "k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551" in zone "europe-west3-b")
78s Normal Attach service/drupal-dummy-v1-service-https Attach 1 network endpoint(s) (NEG "k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551" in zone "europe-west3-a")
Switching back to the application (15 seconds of 502 -> never the same duration):
7s Normal Attach service/drupal-dummy-v1-service-https Attach 1 network endpoint(s) (NEG "k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551" in zone "europe-west3-a")
7s Normal Attach service/drupal-dummy-v1-service-https Attach 1 network endpoint(s) (NEG "k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551" in zone "europe-west3-b")
I could verify that the NEG events appear just before the 502 errors end. I suspect that when the service definition changes a new NEG is created, but that is not immediate, and while we wait for it the old one is already gone, so there is no backend at all during this window :(
Is there no "rolling update" of the service definition?
No solution at all, even with two services and only upgrading the Ingress. Confirmed with GCP support: GKE will always destroy and then recreate something, so it is not possible to do a zero-downtime rolling update of a Service or Ingress this way. They suggest having two full silos and playing with DNS; we chose another solution: keep a single deployment and simply do a rolling update of that deployment, changing the referenced Docker image. Not really what we were aiming for, but it works...
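For reference, that fallback is just the standard Deployment rolling update. A minimal sketch with kubectl, assuming the deployment from the dump above and a hypothetical 1.0.1 image tag:
kubectl -n e1 set image deployment/drupal-dummy-v1-pod drupal-dummy-v1-pod=eu.gcr.io/gke-drupal-276313/drupal-dummy:1.0.1
kubectl -n e1 rollout status deployment/drupal-dummy-v1-pod
Because the readiness probe gates the rollout and the service selector never changes, the NEG should keep healthy endpoints throughout, which is what avoids the 502 window.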

services "kubernetes-dashboard" not found when accessing the Kubernetes UI

I followed the manual to install the Kubernetes dashboard.
Step 1:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
serviceaccount "kubernetes-dashboard" created
service "kubernetes-dashboard" created
secret "kubernetes-dashboard-certs" created
secret "kubernetes-dashboard-csrf" created
secret "kubernetes-dashboard-key-holder" created
configmap "kubernetes-dashboard-settings" created
role.rbac.authorization.k8s.io "kubernetes-dashboard" created
clusterrole.rbac.authorization.k8s.io "kubernetes-dashboard" created
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" created
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" created
deployment.apps "kubernetes-dashboard" created
service "dashboard-metrics-scraper" created
The Deployment "dashboard-metrics-scraper" is invalid: spec.template.annotations.seccomp.security.alpha.kubernetes.io/pod: Invalid value: "runtime/default": must be a valid seccomp profile
Step 2:
kubectl proxy --port=6001 & disown
The output is -
Starting to serve on 127.0.0.1:6001
Now when I access the site -
http://localhost:6001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
it gives the following error -
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "no endpoints available for service \"kubernetes-dashboard\"",
"reason": "ServiceUnavailable",
"code": 503
}
Also, checking pods does not show the Kubernetes dashboard:
kubectl get pod --namespace=kube-system
shows
NAME READY STATUS RESTARTS AGE
etcd-docker-for-desktop 1/1 Running 0 13d
kube-apiserver-docker-for-desktop 1/1 Running 0 13d
kube-controller-manager-docker-for-desktop 1/1 Running 0 13d
kube-scheduler-docker-for-desktop 1/1 Running 0 13d
kubectl get pod --namespace=kubernetes-dashboard
returns-
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-659f6797cf-8v45l 0/1 CrashLoopBackOff 15 1h
How to fix the problem?
Update: The following link http://localhost:6001/api/v1/namespaces/kubernetes-dashboard/services gives the output below -
{
"kind": "ServiceList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/namespaces/kubernetes-dashboard/services",
"resourceVersion": "254593"
},
"items": [
{
"metadata": {
"name": "dashboard-metrics-scraper",
"namespace": "kubernetes-dashboard",
"selfLink": "/api/v1/namespaces/kubernetes-dashboard/services/dashboard-metrics-scraper",
"uid": "932dc2d5-4675-11ea-952a-025000000001",
"resourceVersion": "202570",
"creationTimestamp": "2020-02-03T11:08:58Z",
"labels": {
"k8s-app": "dashboard-metrics-scraper"
},
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"dashboard-metrics-scraper\"},\"name\":\"dashboard-metrics-scraper\",\"namespace\":\"kubernetes-dashboard\"},\"spec\":{\"ports\":[{\"port\":8000,\"targetPort\":8000}],\"selector\":{\"k8s-app\":\"dashboard-metrics-scraper\"}}}\n"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 8000,
"targetPort": 8000
}
],
"selector": {
"k8s-app": "dashboard-metrics-scraper"
},
"clusterIP": "10.106.158.177",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
},
{
"metadata": {
"name": "kubernetes-dashboard",
"namespace": "kubernetes-dashboard",
"selfLink": "/api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard",
"uid": "931a96eb-4675-11ea-952a-025000000001",
"resourceVersion": "202558",
"creationTimestamp": "2020-02-03T11:08:58Z",
"labels": {
"k8s-app": "kubernetes-dashboard"
},
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"kubernetes-dashboard\"},\"name\":\"kubernetes-dashboard\",\"namespace\":\"kubernetes-dashboard\"},\"spec\":{\"ports\":[{\"port\":443,\"targetPort\":8443}],\"selector\":{\"k8s-app\":\"kubernetes-dashboard\"}}}\n"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 443,
"targetPort": 8443
}
],
"selector": {
"k8s-app": "kubernetes-dashboard"
},
"clusterIP": "10.108.57.147",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
}
]
}
A working dashboard application should list the resources below in Running state:
$ kubectl get all -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-76585494d8-c6n5x 1/1 Running 0 136m
pod/kubernetes-dashboard-5996555fd8-wmc44 1/1 Running 0 136m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.109.217.134 <none> 8000/TCP 136m
service/kubernetes-dashboard ClusterIP 10.108.201.245 <none> 443/TCP 136m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dashboard-metrics-scraper 1/1 1 1 136m
deployment.apps/kubernetes-dashboard 1/1 1 1 136m
NAME DESIRED CURRENT READY AGE
replicaset.apps/dashboard-metrics-scraper-76585494d8 1 1 1 136m
replicaset.apps/kubernetes-dashboard-5996555fd8 1 1 1 136m
Run the describe command on the failed pod and check the listed events to find the issue.
Example:
$ kubectl describe -n kubernetes-dashboard pod kubernetes-dashboard-5996555fd8-wmc44
Events: <none>
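If the events are empty, as here, the container logs are the next place to look for the cause of the CrashLoopBackOff. A sketch using the pod name from the question (--previous shows the logs of the last crashed instance):
kubectl -n kubernetes-dashboard logs kubernetes-dashboard-659f6797cf-8v45l --previous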

How to check what port a pod is listening on with kubectl and not looking at the Dockerfile?

I have a pod running and want to port-forward so I can access the pod from the internal network.
I don't know what port it is listening on though, there is no service yet.
I describe the pod:
$ kubectl describe pod queue-l7wck
Name: queue-l7wck
Namespace: default
Priority: 0
Node: minikube/192.168.64.3
Start Time: Wed, 18 Dec 2019 05:13:56 +0200
Labels: app=work-queue
chapter=jobs
component=queue
Annotations: <none>
Status: Running
IP: 172.17.0.2
IPs:
IP: 172.17.0.2
Controlled By: ReplicaSet/queue
Containers:
queue:
Container ID: docker://13780475170fa2c0d8e616ba1a3b1554d31f404cc0a597877e790cbf01838e63
Image: gcr.io/kuar-demo/kuard-amd64:blue
Image ID: docker-pullable://gcr.io/kuar-demo/kuard-amd64@sha256:1ecc9fb2c871302fdb57a25e0c076311b7b352b0a9246d442940ca8fb4efe229
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 18 Dec 2019 05:14:02 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mbn5b (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-mbn5b:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mbn5b
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/queue-l7wck to minikube
Normal Pulling 31h kubelet, minikube Pulling image "gcr.io/kuar-demo/kuard-amd64:blue"
Normal Pulled 31h kubelet, minikube Successfully pulled image "gcr.io/kuar-demo/kuard-amd64:blue"
Normal Created 31h kubelet, minikube Created container queue
Normal Started 31h kubelet, minikube Started container queue
even the JSON has nothing:
$ kubectl get pods queue-l7wck -o json
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"creationTimestamp": "2019-12-18T03:13:56Z",
"generateName": "queue-",
"labels": {
"app": "work-queue",
"chapter": "jobs",
"component": "queue"
},
"name": "queue-l7wck",
"namespace": "default",
"ownerReferences": [
{
"apiVersion": "apps/v1",
"blockOwnerDeletion": true,
"controller": true,
"kind": "ReplicaSet",
"name": "queue",
"uid": "a9ec07f7-07a3-4462-9ac4-a72226f54556"
}
],
"resourceVersion": "375402",
"selfLink": "/api/v1/namespaces/default/pods/queue-l7wck",
"uid": "af43027d-8377-4227-b366-bcd4940b8709"
},
"spec": {
"containers": [
{
"image": "gcr.io/kuar-demo/kuard-amd64:blue",
"imagePullPolicy": "Always",
"name": "queue",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"name": "default-token-mbn5b",
"readOnly": true
}
]
}
],
"dnsPolicy": "ClusterFirst",
"enableServiceLinks": true,
"nodeName": "minikube",
"priority": 0,
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "default",
"serviceAccountName": "default",
"terminationGracePeriodSeconds": 30,
"tolerations": [
{
"effect": "NoExecute",
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"tolerationSeconds": 300
},
{
"effect": "NoExecute",
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"tolerationSeconds": 300
}
],
"volumes": [
{
"name": "default-token-mbn5b",
"secret": {
"defaultMode": 420,
"secretName": "default-token-mbn5b"
}
}
]
},
"status": {
"conditions": [
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:13:56Z",
"status": "True",
"type": "Initialized"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:14:02Z",
"status": "True",
"type": "Ready"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:14:02Z",
"status": "True",
"type": "ContainersReady"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:13:56Z",
"status": "True",
"type": "PodScheduled"
}
],
"containerStatuses": [
{
"containerID": "docker://13780475170fa2c0d8e616ba1a3b1554d31f404cc0a597877e790cbf01838e63",
"image": "gcr.io/kuar-demo/kuard-amd64:blue",
"imageID": "docker-pullable://gcr.io/kuar-demo/kuard-amd64#sha256:1ecc9fb2c871302fdb57a25e0c076311b7b352b0a9246d442940ca8fb4efe229",
"lastState": {},
"name": "queue",
"ready": true,
"restartCount": 0,
"started": true,
"state": {
"running": {
"startedAt": "2019-12-18T03:14:02Z"
}
}
}
],
"hostIP": "192.168.64.3",
"phase": "Running",
"podIP": "172.17.0.2",
"podIPs": [
{
"ip": "172.17.0.2"
}
],
"qosClass": "BestEffort",
"startTime": "2019-12-18T03:13:56Z"
}
}
How do you check what port a pod is listening on with kubectl?
Update
If I exec into the pod and run netstat -tulpn as suggested in the comments, I get:
$ kubectl exec -it queue-pfmq2 -- sh
~ $ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 :::8080 :::* LISTEN 1/kuard
But this method is not using kubectl.
Your container image has a port opened during the build (it looks like port 8080 in your case) using the EXPOSE instruction in the Dockerfile. Since the exposed port is baked into the image, k8s does not keep track of it, because k8s does not need to take any steps to open it.
Since k8s is not responsible for opening the port, you won't be able to find the listening port using kubectl or by checking the pod YAML.
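Because the EXPOSE metadata lives in the image, you can also read it from the image itself rather than from Kubernetes. A sketch, assuming a local Docker engine that can pull the image:
docker image inspect gcr.io/kuar-demo/kuard-amd64:blue --format '{{.Config.ExposedPorts}}'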
Alternatively, try combining kubectl with the same Linux command to get the port the container is listening on:
kubectl exec <pod name here> -- netstat -tulpn
Further, you can pipe the result through grep to narrow down the output if required, e.g.
kubectl exec <pod name here> -- netstat -tulpn | grep "search string"
Note: this will only work if your container's base image includes the netstat command, and as per your Update section it seems it does.
The above solution is simply a smarter use of the commands you already ran in two steps: first exec into the container in interactive mode with -it, then list the listening ports inside the container.
One answer suggested to run netstat inside the container.
This only works if netstat is part of the container's image.
As an alternative, you can run netstat on the host, executing it inside the container's network namespace.
Get the container's process ID on the host (this is the application running inside the container). Then change to the container's network namespace (run as root on the host):
host# PS1='container# ' nsenter -t <PID> -n
Setting the PS1 environment variable shows a different prompt while you are in the container's network namespace.
Get the listening ports in the container:
container# netstat -na
....
container# exit
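If you need to find that PID first, one way (assuming a Docker runtime on the node) is to ask the engine for the container's main process:
host# docker inspect --format '{{.State.Pid}}' <container-id>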
If whoever created the image added the right OpenShift label, then you can use the following command (unfortunately your image does not have that label):
skopeo inspect docker://image-url:tag | grep expose-service
e.g.
skopeo inspect docker://quay.io/redhattraining/loadtest:v1.0 | grep expose-service
output:
"io.openshift.expose-services": "8080:http"
So 8080 is the port exposed by the image
Hope this helps
Normally a container will be able to run curl, so you can use curl to check whether a port is open:
for port in 8080 50000 443 8443; do curl -I --connect-timeout 1 127.0.0.1:$port; done
This can be run with sh inside the container.
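Wrapped in kubectl exec against the pod from the question, that becomes the following sketch (it assumes curl exists in the image):
kubectl exec queue-l7wck -- sh -c 'for port in 8080 50000 443 8443; do curl -sI --connect-timeout 1 127.0.0.1:$port >/dev/null && echo "port $port is open"; done'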

Kubernetes Service does not get its external IP address

When I build a Kubernetes service in two steps (1. replication controller; 2. expose the replication controller) my exposed service gets an external IP address:
initially:
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
app-1 10.67.241.95 80/TCP app=app-1 7s
and after about 30s:
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
app-1 10.67.241.95 104.155.93.79 80/TCP app=app-1 35s
But when I do it in one step, providing the Service and the ReplicationController to kubectl create -f dir_with_2_files, the service gets created but it does not get an external IP:
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
app-1 10.67.251.171 <none> 80/TCP app=app-1 2m
The <none> under External IP worries me.
For the Service I use the JSON file:
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "app-1"
  },
  "spec": {
    "selector": {
      "app": "app-1"
    },
    "ports": [
      {
        "port": 80,
        "targetPort": 8000
      }
    ]
  }
}
and for the ReplicationController:
{
  "apiVersion": "v1",
  "kind": "ReplicationController",
  "metadata": {
    "name": "app-1"
  },
  "spec": {
    "replicas": 1,
    "template": {
      "metadata": {
        "labels": {
          "app": "app-1"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "service",
            "image": "gcr.io/sigma-cairn-99810/service:latest",
            "ports": [
              {
                "containerPort": 8000
              }
            ]
          }
        ]
      }
    }
  }
}
and to expose the Service manually I use the command:
kubectl expose rc app-1 --port 80 --target-port=8000 --type="LoadBalancer"
If you don't specify the type of a Service, it defaults to ClusterIP. If you want the equivalent of kubectl expose, you must do two things (a sketch follows the list):
Make sure your Service selects pods from the RC via matching label selectors
Make the Service type=LoadBalancer
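Applied to the Service JSON from the question, that means adding the type field; a sketch (the selector already matches the RC's pod labels):
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "app-1"
  },
  "spec": {
    "type": "LoadBalancer",
    "selector": {
      "app": "app-1"
    },
    "ports": [
      {
        "port": 80,
        "targetPort": 8000
      }
    ]
  }
}
After kubectl create -f, the external IP should be populated after roughly 30 seconds, as in the two-step flow.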