Kubernetes Service does not get its external IP address

When I build a Kubernetes service in two steps (1. create the replication controller; 2. expose the replication controller), my exposed service gets an external IP address.
Initially:
NAME    CLUSTER_IP     EXTERNAL_IP   PORT(S)   SELECTOR    AGE
app-1   10.67.241.95                 80/TCP    app=app-1   7s
and after about 30s:
NAME    CLUSTER_IP     EXTERNAL_IP     PORT(S)   SELECTOR    AGE
app-1   10.67.241.95   104.155.93.79   80/TCP    app=app-1   35s
But when I do it in one step, providing both the Service and the ReplicationController to kubectl create -f dir_with_2_files, the service gets created but it does not get an external IP:
NAME    CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR    AGE
app-1   10.67.251.171   <none>        80/TCP    app=app-1   2m
The <none> under External IP worries me.
For the Service I use the JSON file:
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "app-1"
  },
  "spec": {
    "selector": {
      "app": "app-1"
    },
    "ports": [
      {
        "port": 80,
        "targetPort": 8000
      }
    ]
  }
}
and for the ReplicationController:
{
  "apiVersion": "v1",
  "kind": "ReplicationController",
  "metadata": {
    "name": "app-1"
  },
  "spec": {
    "replicas": 1,
    "template": {
      "metadata": {
        "labels": {
          "app": "app-1"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "service",
            "image": "gcr.io/sigma-cairn-99810/service:latest",
            "ports": [
              {
                "containerPort": 8000
              }
            ]
          }
        ]
      }
    }
  }
}
and to expose the ReplicationController manually I use the command:
kubectl expose rc app-1 --port 80 --target-port=8000 --type="LoadBalancer"

If you don't specify the type of a Service, it defaults to ClusterIP. If you want the equivalent of kubectl expose, you must:
Make sure your Service selects pods from the RC via matching label selectors
Make the Service type=LoadBalancer
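For example, a Service manifest roughly equivalent to the kubectl expose command above would keep the selector and ports from your JSON and just add the type:

{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "app-1"
  },
  "spec": {
    "type": "LoadBalancer",
    "selector": {
      "app": "app-1"
    },
    "ports": [
      {
        "port": 80,
        "targetPort": 8000
      }
    ]
  }
}

With that in place, kubectl create -f on the directory should produce a service that gets an external IP once the cloud provider has provisioned the load balancer (usually after 30-60 seconds, as in your two-step flow).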

Related

Unable to open service via kubectl proxy

➜ kubectl get svc
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
airflow-flower-service   ClusterIP   172.20.119.107   <none>        5555/TCP   54d
airflow-service          ClusterIP   172.20.76.63     <none>        80/TCP     54d
backend-service          ClusterIP   172.20.39.154    <none>        80/TCP     54d
➜ kubectl proxy
xdg-open http://127.0.0.1:8001/api/v1/namespaces/edna/services/http:airflow-service:/proxy/#q=ip-192-168-114-35
and it fails with
Error trying to reach service: 'dial tcp 10.0.102.174:80: i/o timeout'
However, if I forward the service's port with kubectl port-forward, I can open it in the browser:
kubectl port-forward service/backend-service 8080:80 -n edna
xdg-open http://localhost:8080
So how do I open the service via that long URL (similar to how we open the Kubernetes dashboard)?
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default
If I query the API with curl I see this output:
➜ curl http://127.0.0.1:8001/api/v1/namespaces/edna/services/backend-service/
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "backend-service",
    "namespace": "edna",
    "selfLink": "/api/v1/namespaces/edna/services/backend-service",
    "uid": "7163dd4e-e76d-4517-b0fe-d2d516b5dc16",
    "resourceVersion": "6433582",
    "creationTimestamp": "2020-08-14T05:58:45Z",
    "labels": {
      "app.kubernetes.io/instance": "backend-etl"
    },
    "annotations": {
      "argocd.argoproj.io/sync-wave": "10",
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{\"argocd.argoproj.io/sync-wave\":\"10\"},\"labels\":{\"app.kubernetes.io/instance\":\"backend-etl\"},\"name\":\"backend-service\",\"namespace\":\"edna\"},\"spec\":{\"ports\":[{\"port\":80,\"protocol\":\"TCP\",\"targetPort\":80}],\"selector\":{\"app\":\"edna-backend\"},\"type\":\"ClusterIP\"}}\n"
    }
  },
  "spec": {
    "ports": [
      {
        "protocol": "TCP",
        "port": 80,
        "targetPort": 80
      }
    ],
    "selector": {
      "app": "edna-backend"
    },
    "clusterIP": "172.20.39.154",
    "type": "ClusterIP",
    "sessionAffinity": "None"
  },
  "status": {
    "loadBalancer": {}
  }
}
Instead of your URL:
http://127.0.0.1:8001/api/v1/namespaces/edna/services/http:airflow-service:/proxy
try it without the 'http:' prefix:
http://127.0.0.1:8001/api/v1/namespaces/edna/services/airflow-service/proxy
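For reference, the general form of the apiserver proxy URL is

http://127.0.0.1:8001/api/v1/namespaces/<namespace>/services/[https:]<service-name>[:<port-name>]/proxy/

The scheme prefix and the port-name suffix are both optional, so for a plain HTTP service with a single unnamed port they can simply be left out, as in the URL above.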

services “kubernetes-dashboard” not found when accessing kubernetes UI interface

I followed the manual to install the Kubernetes dashboard.
Step 1:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
serviceaccount "kubernetes-dashboard" created
service "kubernetes-dashboard" created
secret "kubernetes-dashboard-certs" created
secret "kubernetes-dashboard-csrf" created
secret "kubernetes-dashboard-key-holder" created
configmap "kubernetes-dashboard-settings" created
role.rbac.authorization.k8s.io "kubernetes-dashboard" created
clusterrole.rbac.authorization.k8s.io "kubernetes-dashboard" created
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" created
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" created
deployment.apps "kubernetes-dashboard" created
service "dashboard-metrics-scraper" created
The Deployment "dashboard-metrics-scraper" is invalid: spec.template.annotations.seccomp.security.alpha.kubernetes.io/pod: Invalid value: "runtime/default": must be a valid seccomp profile
Step 2:
kubectl proxy --port=6001 & disown
The output is -
Starting to serve on 127.0.0.1:6001
Now when I access the site -
http://localhost:6001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
it gives the following error -
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
Also, checking the pods does not show the kubernetes dashboard.
kubectl get pod --namespace=kube-system
shows
NAME                                         READY   STATUS    RESTARTS   AGE
etcd-docker-for-desktop                      1/1     Running   0          13d
kube-apiserver-docker-for-desktop            1/1     Running   0          13d
kube-controller-manager-docker-for-desktop   1/1     Running   0          13d
kube-scheduler-docker-for-desktop            1/1     Running   0          13d
kubectl get pod --namespace=kubernetes-dashboard
returns:
NAME                                    READY   STATUS             RESTARTS   AGE
kubernetes-dashboard-659f6797cf-8v45l   0/1     CrashLoopBackOff   15         1h
How do I fix the problem?
Update: The following link http://localhost:6001/api/v1/namespaces/kubernetes-dashboard/services gives the output below:
{
  "kind": "ServiceList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/kubernetes-dashboard/services",
    "resourceVersion": "254593"
  },
  "items": [
    {
      "metadata": {
        "name": "dashboard-metrics-scraper",
        "namespace": "kubernetes-dashboard",
        "selfLink": "/api/v1/namespaces/kubernetes-dashboard/services/dashboard-metrics-scraper",
        "uid": "932dc2d5-4675-11ea-952a-025000000001",
        "resourceVersion": "202570",
        "creationTimestamp": "2020-02-03T11:08:58Z",
        "labels": {
          "k8s-app": "dashboard-metrics-scraper"
        },
        "annotations": {
          "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"dashboard-metrics-scraper\"},\"name\":\"dashboard-metrics-scraper\",\"namespace\":\"kubernetes-dashboard\"},\"spec\":{\"ports\":[{\"port\":8000,\"targetPort\":8000}],\"selector\":{\"k8s-app\":\"dashboard-metrics-scraper\"}}}\n"
        }
      },
      "spec": {
        "ports": [
          {
            "protocol": "TCP",
            "port": 8000,
            "targetPort": 8000
          }
        ],
        "selector": {
          "k8s-app": "dashboard-metrics-scraper"
        },
        "clusterIP": "10.106.158.177",
        "type": "ClusterIP",
        "sessionAffinity": "None"
      },
      "status": {
        "loadBalancer": {}
      }
    },
    {
      "metadata": {
        "name": "kubernetes-dashboard",
        "namespace": "kubernetes-dashboard",
        "selfLink": "/api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard",
        "uid": "931a96eb-4675-11ea-952a-025000000001",
        "resourceVersion": "202558",
        "creationTimestamp": "2020-02-03T11:08:58Z",
        "labels": {
          "k8s-app": "kubernetes-dashboard"
        },
        "annotations": {
          "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"kubernetes-dashboard\"},\"name\":\"kubernetes-dashboard\",\"namespace\":\"kubernetes-dashboard\"},\"spec\":{\"ports\":[{\"port\":443,\"targetPort\":8443}],\"selector\":{\"k8s-app\":\"kubernetes-dashboard\"}}}\n"
        }
      },
      "spec": {
        "ports": [
          {
            "protocol": "TCP",
            "port": 443,
            "targetPort": 8443
          }
        ],
        "selector": {
          "k8s-app": "kubernetes-dashboard"
        },
        "clusterIP": "10.108.57.147",
        "type": "ClusterIP",
        "sessionAffinity": "None"
      },
      "status": {
        "loadBalancer": {}
      }
    }
  ]
}
A working dashboard installation should list the resources below in a Running state:
$ kubectl get all -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-76585494d8-c6n5x   1/1     Running   0          136m
pod/kubernetes-dashboard-5996555fd8-wmc44        1/1     Running   0          136m

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/dashboard-metrics-scraper   ClusterIP   10.109.217.134   <none>        8000/TCP   136m
service/kubernetes-dashboard        ClusterIP   10.108.201.245   <none>        443/TCP    136m

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dashboard-metrics-scraper   1/1     1            1           136m
deployment.apps/kubernetes-dashboard        1/1     1            1           136m

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dashboard-metrics-scraper-76585494d8   1         1         1       136m
replicaset.apps/kubernetes-dashboard-5996555fd8        1         1         1       136m
Run the describe command on the failed pod and check the listed events to find the issue.
Example:
$ kubectl describe -n kubernetes-dashboard pod kubernetes-dashboard-5996555fd8-wmc44
Events: <none>
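When the events are empty like this, the container logs of the crashing pod are usually the next thing to check; with the pod name from the question that would be:

kubectl logs -n kubernetes-dashboard kubernetes-dashboard-659f6797cf-8v45l
kubectl logs -n kubernetes-dashboard kubernetes-dashboard-659f6797cf-8v45l --previous

The --previous flag shows the output of the last terminated container, which is what you want for a pod stuck in CrashLoopBackOff.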

Call a service from any POD

I would like to know how to call a service from any pod, inside or outside the node.
I have 3 nodes with deployments and services. I already have kube-proxy running.
I exec bash in another pod:
kubectl exec --namespace=develop myotherdpod-78c6bfd876-6zvh2 -i -t -- /bin/bash
And inside that pod I have tried to run curl:
curl -v http://myservice.develop.svc.cluster.local/user
This is my created service:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "myservice",
    "namespace": "develop",
    "selfLink": "/api/v1/namespaces/develop/services/mydeployment-svc",
    "uid": "1b5fb4ae-ecd1-11e7-8599-02cc6a4bf8be",
    "resourceVersion": "10660278",
    "creationTimestamp": "2017-12-29T19:47:30Z",
    "labels": {
      "app": "mydeployment-deployment"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 80,
        "targetPort": 8080
      }
    ],
    "selector": {
      "app": "mydeployment-deployment"
    },
    "clusterIP": "100.99.99.140",
    "type": "ClusterIP",
    "sessionAffinity": "None"
  },
  "status": {
    "loadBalancer": {}
  }
}
It looks to me that something may be wrong with the network overlay you deployed. First of all, I would double-check that the pod can reach kube-dns and obtain the proper IP of the service.
nslookup myservice.develop.svc.cluster.local
nslookup myservice # If they are in the same namespace it should work as well
If you are able to confirm that, then I would also check if services like kube-proxy are working correctly. You can do it by using
systemctl status kube-proxy
If that does not work, I would also check the pods of the overlay network by executing
kubectl get pods --namespace=kube-system
If they are all ok, I would try using a different network overlay: https://kubernetes.io/docs/concepts/cluster-administration/networking/
If that did not work either, I would check whether there are firewall rules preventing communication between the nodes.
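It can also help to confirm that the Service has any endpoints at all, i.e. that its selector really matches the labels on the running pods; with the names from the question that would be:

kubectl get endpoints myservice --namespace=develop
kubectl describe service myservice --namespace=develop

If ENDPOINTS shows <none>, the selector app=mydeployment-deployment does not match any pod, so the DNS name will resolve to the ClusterIP but nothing will answer behind it.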

How to access pod externally, i.e. bind localhost:8888 to cluster IP

I have a service running on localhost:8888 and I am trying to bind it to the cluster's public IP so that I can open it from my web browser. I created another service using the following manifest:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "example-service"
  },
  "spec": {
    "ports": [{
      "port": 8888,
      "targetPort": 8888
    }],
    "selector": {
      "app": "example"
    },
    "type": "LoadBalancer"
  }
}
Then I do kubectl describe services example-service:
Name: example-service
Namespace: spark-cluster
Labels: <none>
Selector: app=example
Type: LoadBalancer
IP: 10.3.0.66
LoadBalancer Ingress: a123b456c789.us-west-1.elb.amazonaws.com
Port: <unset> 8888/TCP
NodePort: <unset> 32767/TCP
Endpoints: <none>
Session Affinity: None
Events:
  FirstSeen   LastSeen   Count   From                    SubobjectPath   Type     Reason                 Message
  ---------   --------   -----   ----                    -------------   ----     ------                 -------
  14s         14s        1       {service-controller }                   Normal   CreatingLoadBalancer   Creating load balancer
  11s         11s        1       {service-controller }                   Normal   CreatedLoadBalancer    Created load balancer
When I open up a123b456c789.us-west-1.elb.amazonaws.com:8888 in my web browser, it doesn't load. What are the correct steps to access my pod externally?
With your setup the application is available on the IP address of one of your nodes on port 32767 (the allocated NodePort). If you want to choose the NodePort yourself, you need to change the manifest like this (note that the nodePort value must fall within the cluster's NodePort range, which defaults to 30000-32767):
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "example-service"
  },
  "spec": {
    "ports": [{
      "port": 8888,
      "targetPort": 8888,
      "nodePort": 8888
    }],
    "selector": {
      "app": "example"
    },
    "type": "LoadBalancer"
  }
}
http://kubernetes.io/docs/user-guide/services/#type-nodeport
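Assuming the Service actually has endpoints (the describe output above still shows Endpoints: <none>, so the selector app=example must match a running pod first), the application should then be reachable either on any node's address at the allocated NodePort, or through the ELB hostname on the service port; the node address below is a placeholder:

curl http://<node-public-ip>:32767
curl http://a123b456c789.us-west-1.elb.amazonaws.com:8888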

Google Cloud Container: Can not connect to mongodb service

I created a mongodb replication controller and a mongo service. I tried to connect to it from a different mongo pod just to test the connection, but that does not work:
root@mongo-test:/# mongo mongo-service/mydb
MongoDB shell version: 3.2.0
connecting to: mongo-service/mydb
2015-12-09T11:05:55.256+0000 E QUERY [thread1] Error: network error while attempting to run command 'isMaster' on host 'mongo-service:27017' :
connect@src/mongo/shell/mongo.js:226:14
@(connect):1:6
exception: connect failed
I am not sure what I have done wrong in the configuration. I may be missing something here.
kubectl get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)       SELECTOR     REPLICAS   AGE
mongo        mongo          mongo:latest   name=mongo   1          9s
kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
mongo-6bnak   1/1     Running   0          1m
mongo-test    1/1     Running   0          21m
kubectl get services
NAME            CLUSTER_IP       EXTERNAL_IP   PORT(S)     SELECTOR                AGE
kubernetes      10.119.240.1     <none>        443/TCP     <none>                  23h
mongo-service   10.119.254.202   <none>        27017/TCP   name=mongo,role=mongo   1m
I configured the RC and Service with the following configs:
mongo-rc:
{
  "metadata": {
    "name": "mongo",
    "labels": { "name": "mongo" }
  },
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "spec": {
    "replicas": 1,
    "template": {
      "metadata": {
        "labels": { "name": "mongo" }
      },
      "spec": {
        "volumes": [
          {
            "name": "mongo-disk",
            "gcePersistentDisk": {
              "pdName": "mongo-disk",
              "fsType": "ext4"
            }
          }
        ],
        "containers": [
          {
            "name": "mongo",
            "image": "mongo:latest",
            "ports": [{
              "name": "mongo",
              "containerPort": 27017
            }],
            "volumeMounts": [
              {
                "name": "mongo-disk",
                "mountPath": "/data/db"
              }
            ]
          }
        ]
      }
    }
  }
}
mongo-service:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "mongo-service"
  },
  "spec": {
    "ports": [
      {
        "port": 27017,
        "targetPort": "mongo"
      }
    ],
    "selector": {
      "name": "mongo",
      "role": "mongo"
    }
  }
}
This is almost a bit embarrassing: the issue was that I used the selector "role" in the Service but did not define it as a label on the RC's pod template.
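One way to fix it, sketched below, is simply to drop the extra key from the Service selector so that it only uses the name=mongo label that the RC's pod template actually sets; adding "role": "mongo" to the pod template labels instead would work just as well:

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "mongo-service"
  },
  "spec": {
    "ports": [
      {
        "port": 27017,
        "targetPort": "mongo"
      }
    ],
    "selector": {
      "name": "mongo"
    }
  }
}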