services “kubernetes-dashboard” not found when accessing the Kubernetes UI - kubernetes

I followed the manual to install the Kubernetes dashboard.
Step 1:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
serviceaccount "kubernetes-dashboard" created
service "kubernetes-dashboard" created
secret "kubernetes-dashboard-certs" created
secret "kubernetes-dashboard-csrf" created
secret "kubernetes-dashboard-key-holder" created
configmap "kubernetes-dashboard-settings" created
role.rbac.authorization.k8s.io "kubernetes-dashboard" created
clusterrole.rbac.authorization.k8s.io "kubernetes-dashboard" created
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" created
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" created
deployment.apps "kubernetes-dashboard" created
service "dashboard-metrics-scraper" created
The Deployment "dashboard-metrics-scraper" is invalid: spec.template.annotations.seccomp.security.alpha.kubernetes.io/pod: Invalid value: "runtime/default": must be a valid seccomp profile
Step 2:
kubectl proxy --port=6001 & disown
The output is -
Starting to serve on 127.0.0.1:6001
Now when I'm accessing the site -
http://localhost:6001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
it gives the following error -
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "no endpoints available for service \"kubernetes-dashboard\"",
"reason": "ServiceUnavailable",
"code": 503
}
Also, checking the pods does not show the kubernetes dashboard:
kubectl get pod --namespace=kube-system
shows
NAME READY STATUS RESTARTS AGE
etcd-docker-for-desktop 1/1 Running 0 13d
kube-apiserver-docker-for-desktop 1/1 Running 0 13d
kube-controller-manager-docker-for-desktop 1/1 Running 0 13d
kube-scheduler-docker-for-desktop 1/1 Running 0 13d
kubectl get pod --namespace=kubernetes-dashboard
returns-
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-659f6797cf-8v45l 0/1 CrashLoopBackOff 15 1h
How do I fix the problem?
Update: The following link http://localhost:6001/api/v1/namespaces/kubernetes-dashboard/services gives the output below -
{
"kind": "ServiceList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/namespaces/kubernetes-dashboard/services",
"resourceVersion": "254593"
},
"items": [
{
"metadata": {
"name": "dashboard-metrics-scraper",
"namespace": "kubernetes-dashboard",
"selfLink": "/api/v1/namespaces/kubernetes-dashboard/services/dashboard-metrics-scraper",
"uid": "932dc2d5-4675-11ea-952a-025000000001",
"resourceVersion": "202570",
"creationTimestamp": "2020-02-03T11:08:58Z",
"labels": {
"k8s-app": "dashboard-metrics-scraper"
},
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"dashboard-metrics-scraper\"},\"name\":\"dashboard-metrics-scraper\",\"namespace\":\"kubernetes-dashboard\"},\"spec\":{\"ports\":[{\"port\":8000,\"targetPort\":8000}],\"selector\":{\"k8s-app\":\"dashboard-metrics-scraper\"}}}\n"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 8000,
"targetPort": 8000
}
],
"selector": {
"k8s-app": "dashboard-metrics-scraper"
},
"clusterIP": "10.106.158.177",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
},
{
"metadata": {
"name": "kubernetes-dashboard",
"namespace": "kubernetes-dashboard",
"selfLink": "/api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard",
"uid": "931a96eb-4675-11ea-952a-025000000001",
"resourceVersion": "202558",
"creationTimestamp": "2020-02-03T11:08:58Z",
"labels": {
"k8s-app": "kubernetes-dashboard"
},
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"kubernetes-dashboard\"},\"name\":\"kubernetes-dashboard\",\"namespace\":\"kubernetes-dashboard\"},\"spec\":{\"ports\":[{\"port\":443,\"targetPort\":8443}],\"selector\":{\"k8s-app\":\"kubernetes-dashboard\"}}}\n"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 443,
"targetPort": 8443
}
],
"selector": {
"k8s-app": "kubernetes-dashboard"
},
"clusterIP": "10.108.57.147",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
}
]
}

A working dashboard deployment should list the resources below in Running state:
$ kubectl get all -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-76585494d8-c6n5x 1/1 Running 0 136m
pod/kubernetes-dashboard-5996555fd8-wmc44 1/1 Running 0 136m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.109.217.134 <none> 8000/TCP 136m
service/kubernetes-dashboard ClusterIP 10.108.201.245 <none> 443/TCP 136m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dashboard-metrics-scraper 1/1 1 1 136m
deployment.apps/kubernetes-dashboard 1/1 1 1 136m
NAME DESIRED CURRENT READY AGE
replicaset.apps/dashboard-metrics-scraper-76585494d8 1 1 1 136m
replicaset.apps/kubernetes-dashboard-5996555fd8 1 1 1 136m
Run the describe command on the failed pod and check the listed events to find the issue.
Example:
$ kubectl describe -n kubernetes-dashboard pod kubernetes-dashboard-5996555fd8-wmc44
Events: <none>
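If the events are empty, as in this example, the container logs are usually the next place to look for why the pod is crash-looping. A minimal sketch, reusing the pod name from the question (substitute your own pod name):
$ kubectl logs -n kubernetes-dashboard kubernetes-dashboard-659f6797cf-8v45l              # current container
$ kubectl logs -n kubernetes-dashboard kubernetes-dashboard-659f6797cf-8v45l --previous   # last terminated container
The --previous flag prints the logs of the previously crashed container, which is often the more useful output for a pod in CrashLoopBackOff.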

Related

Unable to open service via kubectl proxy

➜ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
airflow-flower-service ClusterIP 172.20.119.107 <none> 5555/TCP 54d
airflow-service ClusterIP 172.20.76.63 <none> 80/TCP 54d
backend-service ClusterIP 172.20.39.154 <none> 80/TCP 54d
➜ kubectl proxy
xdg-open http://127.0.0.1:8001/api/v1/namespaces/edna/services/http:airflow-service:/proxy/#q=ip-192-168-114-35
and it fails with
Error trying to reach service: 'dial tcp 10.0.102.174:80: i/o timeout'
However, if I expose the service via kubectl port-forward, I can open it in the browser:
kubectl port-forward service/backend-service 8080:80 -n edna
xdg-open http://localhost:8080
So how do I open the service via that long URL (similar to how we open the Kubernetes dashboard)?
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default
If I query the API with curl I see the output:
➜ curl http://127.0.0.1:8001/api/v1/namespaces/edna/services/backend-service/
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "backend-service",
"namespace": "edna",
"selfLink": "/api/v1/namespaces/edna/services/backend-service",
"uid": "7163dd4e-e76d-4517-b0fe-d2d516b5dc16",
"resourceVersion": "6433582",
"creationTimestamp": "2020-08-14T05:58:45Z",
"labels": {
"app.kubernetes.io/instance": "backend-etl"
},
"annotations": {
"argocd.argoproj.io/sync-wave": "10",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{\"argocd.argoproj.io/sync-wave\":\"10\"},\"labels\":{\"app.kubernetes.io/instance\":\"backend-etl\"},\"name\":\"backend-service\",\"namespace\":\"edna\"},\"spec\":{\"ports\":[{\"port\":80,\"protocol\":\"TCP\",\"targetPort\":80}],\"selector\":{\"app\":\"edna-backend\"},\"type\":\"ClusterIP\"}}\n"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 80,
"targetPort": 80
}
],
"selector": {
"app": "edna-backend"
},
"clusterIP": "172.20.39.154",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
}
Instead of your URL:
http://127.0.0.1:8001/api/v1/namespaces/edna/services/http:airflow-service:/proxy
try it without the 'http:' prefix:
http://127.0.0.1:8001/api/v1/namespaces/edna/services/airflow-service/proxy
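For reference, the general form of the apiserver proxy URL (as documented for the Kubernetes API server proxy) is:
http://localhost:8001/api/v1/namespaces/<namespace>/services/[https:]<service-name>[:<port-name>]/proxy/
The https: prefix and the :<port-name> suffix are only needed when the service serves HTTPS or exposes a named port. So for the service above, a sketch of the equivalent request when the port is unnamed and plain HTTP:
curl http://127.0.0.1:8001/api/v1/namespaces/edna/services/airflow-service/proxy/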

Kubernetes Dashboard Error trying to reach service: 'Proxy Error ( Connection refused )'

I ran this:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
Then ran this:
nohup kubectl proxy &
Then this:
curl http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
Getting this error:
Error trying to reach service: 'Proxy Error ( Connection refused )'
Even though when I run this:
curl http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/
I get this output, which means I can reach port 8001 just fine:
{
"kind": "ServiceList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/namespaces/kubernetes-dashboard/services/",
"resourceVersion": "2232832"
},
"items": [
{
"metadata": {
"name": "dashboard-metrics-scraper",
"namespace": "kubernetes-dashboard",
"selfLink": "/api/v1/namespaces/kubernetes-dashboard/services/dashboard-metrics-scraper",
"uid": "716c5d85-d3d5-4f49-bcae-b77b848fc129",
"resourceVersion": "2216232",
"creationTimestamp": "2020-03-17T06:49:58Z",
"labels": {
"k8s-app": "dashboard-metrics-scraper"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 8000,
"targetPort": 8000
}
],
"selector": {
"k8s-app": "dashboard-metrics-scraper"
},
"clusterIP": "10.107.93.77",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
},
{
"metadata": {
"name": "kubernetes-dashboard",
"namespace": "kubernetes-dashboard",
"selfLink": "/api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard",
"uid": "76832e87-fa3a-44de-9d19-06b15efc8073",
"resourceVersion": "2216216",
"creationTimestamp": "2020-03-17T06:49:58Z",
"labels": {
"k8s-app": "kubernetes-dashboard"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 443,
"targetPort": 8443
}
],
"selector": {
"k8s-app": "kubernetes-dashboard"
},
"clusterIP": "10.111.110.202",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
}
]
}
Someone please advise what I might have overlooked.
kubectl get pods -o wide --all-namespaces
kube-system coredns-6955765f44-22n9z 1/1 Running 4 11d 10.32.0.2 svomniadm01dev.co-opbank.co.ke <none> <none>
kube-system coredns-6955765f44-n9z5x 1/1 Running 4 11d 10.32.0.3 svomniadm01dev.co-opbank.co.ke <none> <none>
kube-system etcd-svomniadm01dev.co-opbank.co.ke 1/1 Running 4 11d 172.16.20.46 svomniadm01dev.co-opbank.co.ke <none> <none>
kube-system kube-apiserver-svomniadm01dev.co-opbank.co.ke 1/1 Running 4 11d 172.16.20.46 svomniadm01dev.co-opbank.co.ke <none> <none>
kube-system kube-controller-manager-svomniadm01dev.co-opbank.co.ke 1/1 Running 4 11d 172.16.20.46 svomniadm01dev.co-opbank.co.ke <none> <none>
kube-system kube-proxy-29czc 1/1 Running 0 8d 172.16.20.48 svomniadm03dev.co-opbank.co.ke <none> <none>
kube-system kube-proxy-gfzm2 1/1 Running 4 11d 172.16.20.46 svomniadm01dev.co-opbank.co.ke <none> <none>
kube-system kube-proxy-xj2nb 1/1 Running 0 9d 172.16.20.47 svomniadm02dev.co-opbank.co.ke <none> <none>
kube-system kube-scheduler-svomniadm01dev.co-opbank.co.ke 1/1 Running 4 11d 172.16.20.46 svomniadm01dev.co-opbank.co.ke <none> <none>
kube-system weave-net-4ltpp 2/2 Running 1 9d 172.16.20.47 svomniadm02dev.co-opbank.co.ke <none> <none>
kube-system weave-net-589gn 2/2 Running 1 8d 172.16.20.48 svomniadm03dev.co-opbank.co.ke <none> <none>
kube-system weave-net-pj7bn 2/2 Running 11 11d 172.16.20.46 svomniadm01dev.co-opbank.co.ke <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-566cddb686-fs77w 1/1 Running 0 3h22m 10.44.0.1 svomniadm03dev.co-opbank.co.ke <none> <none>
kubernetes-dashboard kubernetes-dashboard-7b5bf5d559-w2bxf 1/1 Running 0 3h22m 10.32.0.2 svomniadm02dev.co-opbank.co.ke <none> <none>

How to check what port a pod is listening on with kubectl and not looking at the Dockerfile?

I have a pod running and want to port-forward so I can access the pod from the internal network.
I don't know what port it is listening on though; there is no service yet.
I describe the pod:
$ kubectl describe pod queue-l7wck
Name: queue-l7wck
Namespace: default
Priority: 0
Node: minikube/192.168.64.3
Start Time: Wed, 18 Dec 2019 05:13:56 +0200
Labels: app=work-queue
chapter=jobs
component=queue
Annotations: <none>
Status: Running
IP: 172.17.0.2
IPs:
IP: 172.17.0.2
Controlled By: ReplicaSet/queue
Containers:
queue:
Container ID: docker://13780475170fa2c0d8e616ba1a3b1554d31f404cc0a597877e790cbf01838e63
Image: gcr.io/kuar-demo/kuard-amd64:blue
Image ID: docker-pullable://gcr.io/kuar-demo/kuard-amd64@sha256:1ecc9fb2c871302fdb57a25e0c076311b7b352b0a9246d442940ca8fb4efe229
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 18 Dec 2019 05:14:02 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mbn5b (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-mbn5b:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mbn5b
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/queue-l7wck to minikube
Normal Pulling 31h kubelet, minikube Pulling image "gcr.io/kuar-demo/kuard-amd64:blue"
Normal Pulled 31h kubelet, minikube Successfully pulled image "gcr.io/kuar-demo/kuard-amd64:blue"
Normal Created 31h kubelet, minikube Created container queue
Normal Started 31h kubelet, minikube Started container queue
Even the JSON has nothing:
$ kubectl get pods queue-l7wck -o json
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"creationTimestamp": "2019-12-18T03:13:56Z",
"generateName": "queue-",
"labels": {
"app": "work-queue",
"chapter": "jobs",
"component": "queue"
},
"name": "queue-l7wck",
"namespace": "default",
"ownerReferences": [
{
"apiVersion": "apps/v1",
"blockOwnerDeletion": true,
"controller": true,
"kind": "ReplicaSet",
"name": "queue",
"uid": "a9ec07f7-07a3-4462-9ac4-a72226f54556"
}
],
"resourceVersion": "375402",
"selfLink": "/api/v1/namespaces/default/pods/queue-l7wck",
"uid": "af43027d-8377-4227-b366-bcd4940b8709"
},
"spec": {
"containers": [
{
"image": "gcr.io/kuar-demo/kuard-amd64:blue",
"imagePullPolicy": "Always",
"name": "queue",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"name": "default-token-mbn5b",
"readOnly": true
}
]
}
],
"dnsPolicy": "ClusterFirst",
"enableServiceLinks": true,
"nodeName": "minikube",
"priority": 0,
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "default",
"serviceAccountName": "default",
"terminationGracePeriodSeconds": 30,
"tolerations": [
{
"effect": "NoExecute",
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"tolerationSeconds": 300
},
{
"effect": "NoExecute",
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"tolerationSeconds": 300
}
],
"volumes": [
{
"name": "default-token-mbn5b",
"secret": {
"defaultMode": 420,
"secretName": "default-token-mbn5b"
}
}
]
},
"status": {
"conditions": [
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:13:56Z",
"status": "True",
"type": "Initialized"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:14:02Z",
"status": "True",
"type": "Ready"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:14:02Z",
"status": "True",
"type": "ContainersReady"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:13:56Z",
"status": "True",
"type": "PodScheduled"
}
],
"containerStatuses": [
{
"containerID": "docker://13780475170fa2c0d8e616ba1a3b1554d31f404cc0a597877e790cbf01838e63",
"image": "gcr.io/kuar-demo/kuard-amd64:blue",
"imageID": "docker-pullable://gcr.io/kuar-demo/kuard-amd64#sha256:1ecc9fb2c871302fdb57a25e0c076311b7b352b0a9246d442940ca8fb4efe229",
"lastState": {},
"name": "queue",
"ready": true,
"restartCount": 0,
"started": true,
"state": {
"running": {
"startedAt": "2019-12-18T03:14:02Z"
}
}
}
],
"hostIP": "192.168.64.3",
"phase": "Running",
"podIP": "172.17.0.2",
"podIPs": [
{
"ip": "172.17.0.2"
}
],
"qosClass": "BestEffort",
"startTime": "2019-12-18T03:13:56Z"
}
}
How do you check what port a pod is listening on with kubectl?
Update
If I exec into the pod and run netstat -tulpn as suggested in the comments I get:
$ kubectl exec -it queue-pfmq2 -- sh
~ $ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 :::8080 :::* LISTEN 1/kuard
But this method is not using kubectl.
Your container image has a port opened during the build (looks like port 8080 in your case) using the EXPOSE instruction in the Dockerfile. Since the exposed port is baked into the image, Kubernetes does not keep track of it; Kubernetes does not need to take any steps to open it.
Since Kubernetes is not responsible for opening the port, you won't be able to find the listening port using kubectl or by checking the pod YAML.
Combine kubectl with the Linux command to get the port the container is listening on:
kubectl exec <pod name here> -- netstat -tulpn
You can also pipe the result through grep to narrow down the findings if required, e.g.
kubectl exec <pod name here> -- netstat -tulpn | grep "search string"
Note: this works only if your container's base image includes the netstat command; judging from your Update section, it does.
This solution is simply a combined use of the two commands you already ran: first exec into the container, then list the listening ports inside it.
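As a complement, if you can pull the image locally, the EXPOSE value is readable from the image metadata without entering the pod at all. A hedged sketch, assuming a local Docker installation and using the image from the question:
docker pull gcr.io/kuar-demo/kuard-amd64:blue
docker inspect --format '{{json .Config.ExposedPorts}}' gcr.io/kuar-demo/kuard-amd64:blue
This prints something like {"8080/tcp":{}} if the image declares EXPOSE 8080; if the Dockerfile never used EXPOSE, the output is null and you are back to checking inside the running container.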
One answer suggested running netstat inside the container. This only works if netstat is part of the container's image.
As an alternative, you can run netstat on the host, executing it in the container's network namespace.
Get the container's process ID on the host (this is the application running inside the container); one way to find it is sketched after this block. Then change to the container's network namespace (run as root on the host):
host# PS1='container# ' nsenter -t <PID> -n
Setting the PS1 environment variable just shows a different prompt while you are in the container's network namespace.
Get the listening ports in the container:
container# netstat -na
....
container# exit
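One way to find the <PID> used above, assuming a Docker runtime on the node (as in this minikube setup), is to ask Docker for the container's process ID; the container-id below is a hypothetical placeholder:
host# docker ps | grep queue                                  # find the container ID of the pod's container
host# docker inspect --format '{{.State.Pid}}' <container-id> # print its process ID on the host
On containerd-based nodes, crictl inspect exposes the same information, but the nsenter step itself is unchanged.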
If the image author added the right OpenShift label, then you can use the following command (unfortunately your image does not have the label):
skopeo inspect docker://image-url:tag | grep expose-service
e.g.
skopeo inspect docker://quay.io/redhattraining/loadtest:v1.0 | grep expose-service
output:
"io.openshift.expose-services": "8080:http"
So 8080 is the port exposed by the image.
Hope this helps.
Normally a container will be able to run curl, so you can use curl to check whether a port is open:
for port in 8080 50000 443 8443; do curl -I --connect-timeout 1 127.0.0.1:$port; done
This can be run with sh.

Google Cloud Container: Cannot connect to mongodb service

I created a mongodb replication controller and a mongo service. I tried to connect to it from a different mongo pod just to test the connection, but that does not work:
root@mongo-test:/# mongo mongo-service/mydb
MongoDB shell version: 3.2.0
connecting to: mongo-service/mydb
2015-12-09T11:05:55.256+0000 E QUERY [thread1] Error: network error while attempting to run command 'isMaster' on host 'mongo-service:27017' :
connect@src/mongo/shell/mongo.js:226:14
@(connect):1:6
exception: connect failed
I am not sure what I have done wrong in the configuration. I may be missing something here.
kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
mongo mongo mongo:latest name=mongo 1 9s
kubectl get pods
NAME READY STATUS RESTARTS AGE
mongo-6bnak 1/1 Running 0 1m
mongo-test 1/1 Running 0 21m
kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.119.240.1 <none> 443/TCP <none> 23h
mongo-service 10.119.254.202 <none> 27017/TCP name=mongo,role=mongo 1m
I configured the RC and Service with the following configs
mongo-rc
{
"metadata": {
"name": "mongo",
"labels": { "name": "mongo" }
},
"kind": "ReplicationController",
"apiVersion": "v1",
"spec": {
"replicas": 1,
"template": {
"metadata": {
"labels": { "name": "mongo" }
},
"spec": {
"volumes": [
{
"name": "mongo-disk",
"gcePersistentDisk": {
"pdName": "mongo-disk",
"fsType": "ext4"
}
}
],
"containers": [
{
"name": "mongo",
"image": "mongo:latest",
"ports": [{
"name":"mongo",
"containerPort": 27017
}],
"volumeMounts": [
{
"name": "mongo-disk",
"mountPath": "/data/db"
}
]
}
]
}
}
}
}
mongo-service:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "mongo-service"
},
"spec": {
"ports": [
{
"port": 27017,
"targetPort": "mongo"
}
],
"selector": {
"name": "mongo",
"role": "mongo"
}
}
}
A bit embarrassing: the issue was that I used the selector "role" in the Service but did not define that label on the RC's pod template.
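In other words, the Service selector name=mongo,role=mongo matched no pods, because the RC's pod template only carried the name=mongo label, so the Service had no endpoints. Either add "role": "mongo" to the template labels or drop "role" from the Service selector. A quick way to confirm this kind of mismatch (these commands are a sketch, not from the original post):
kubectl get endpoints mongo-service
kubectl get pods -l name=mongo,role=mongo
An empty ENDPOINTS column or an empty pod list means no pod carries every label in the selector.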

Kubernetes Service does not get its external IP address

When I build a Kubernetes service in two steps (1. replication controller; 2. expose the replication controller) my exposed service gets an external IP address:
initially:
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
app-1 10.67.241.95 80/TCP app=app-1 7s
and after about 30s:
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
app-1 10.67.241.95 104.155.93.79 80/TCP app=app-1 35s
But when I do it in one step, providing the Service and the ReplicationController to kubectl create -f dir_with_2_files, the service gets created but it does not get an external IP:
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
app-1 10.67.251.171 <none> 80/TCP app=app-1 2m
The <none> under External IP worries me.
For the Service I use the JSON file:
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"name": "app-1"
},
"spec": {
"selector": {
"app": "app-1"
},
"ports": [
{
"port": 80,
"targetPort": 8000
}
]
}
}
and for the ReplicationController:
{
"apiVersion": "v1",
"kind": "ReplicationController",
"metadata": {
"name": "app-1"
},
"spec": {
"replicas": 1,
"template": {
"metadata": {
"labels": {
"app": "app-1"
}
},
"spec": {
"containers": [
{
"name": "service",
"image": "gcr.io/sigma-cairn-99810/service:latest",
"ports": [
{
"containerPort": 8000
}
]
}
]
}
}
}
}
and to expose the Service manually I use the command:
kubectl expose rc app-1 --port 80 --target-port=8000 --type="LoadBalancer"
If you don't specify the type of a Service, it defaults to ClusterIP. If you want the equivalent of the expose command above, you must:
Make sure your Service selects pods from the RC via matching label selectors
Make the Service type=LoadBalancer
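In this setup that means adding "type": "LoadBalancer" to the Service spec in the JSON file (the selector app=app-1 already matches the RC's pod template label); a sketch of the adjusted Service:
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "app-1"
  },
  "spec": {
    "type": "LoadBalancer",
    "selector": {
      "app": "app-1"
    },
    "ports": [
      {
        "port": 80,
        "targetPort": 8000
      }
    ]
  }
}
After kubectl create -f on the directory, the external IP should appear once the cloud provider has provisioned the load balancer, just as in your two-step flow (about 30 seconds).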