Kubernetes clusterIP does not load balance requests [duplicate] - kubernetes

My Environment: Mac dev machine with latest Minikube/Docker
I built (locally) a simple Docker image with a simple Django REST API "hello world". I'm running a deployment with 3 replicas. This is my yaml file for defining it:
apiVersion: v1
kind: Service
metadata:
  name: myproj-app-service
  labels:
    app: myproj-be
spec:
  type: LoadBalancer
  ports:
  - port: 8000
  selector:
    app: myproj-be
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myproj-app-deployment
  labels:
    app: myproj-be
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myproj-be
  template:
    metadata:
      labels:
        app: myproj-be
    spec:
      containers:
      - name: myproj-app-server
        image: myproj-app-server:4
        ports:
        - containerPort: 8000
        env:
        - name: DATABASE_URL
          value: postgres://myname:#10.0.2.2:5432/myproj2
        - name: REDIS_URL
          value: redis://10.0.2.2:6379/1
When I apply this yaml, it creates everything correctly:
- one deployment
- one service
- three pods
Deployments:
NAME READY UP-TO-DATE AVAILABLE AGE
myproj-app-deployment 3/3 3 3 79m
Services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 83m
myproj-app-service LoadBalancer 10.96.91.44 <pending> 8000:31559/TCP 79m
Pods:
NAME READY STATUS RESTARTS AGE
myproj-app-deployment-77664b5557-97wkx 1/1 Running 0 48m
myproj-app-deployment-77664b5557-ks7kf 1/1 Running 0 49m
myproj-app-deployment-77664b5557-v9889 1/1 Running 0 49m
The interesting thing is that when I SSH into Minikube and hit the service using curl 10.96.91.44:8000, it respects the LoadBalancer type of the service and rotates between all three pods as I hit the endpoint again and again. I can see that in the returned results, which I have made sure include the HOSTNAME of the pod.
However, when I try to access the service from my host Mac -- using kubectl port-forward service/myproj-app-service 8000:8000 -- every time I hit the endpoint I get a response from the same pod. It doesn't load balance. I can see that clearly when I kubectl logs -f <pod> on all three pods: only one of them is handling the hits, while the other two sit idle...
Is this a kubectl port-forward limitation or issue? Or am I missing something bigger here?

kubectl port-forward looks up the first Pod from the Service information provided on the command line and forwards directly to that Pod rather than to the ClusterIP/Service port. The cluster doesn't get a chance to load balance the traffic the way it does regular service traffic.
The Kubernetes API only provides port-forward operations for Pods (CREATE and GET). Equivalent API operations don't exist for Service endpoints.
kubectl code
Here's a little bit of the flow from the kubectl code that seems to back that up (I'll just add that Go isn't my primary language).
The portforward.go Complete function is where kubectl port-forward does the first lookup for a Pod from the options via AttachablePodForObjectFn:
The AttachablePodForObjectFn is defined as attachablePodForObject in this interface, and here is the attachablePodForObject function.
To my (inexperienced) Go eyes, attachablePodForObject appears to be the thing kubectl uses to look up a Pod from a Service defined on the command line.
From there on, everything deals with filling in the Pod-specific PortForwardOptions (which don't include a Service) and is passed to the Kubernetes API.
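If you want to see the Service's round-robin behaviour without a port-forward, one option (a rough sketch, assuming the Service name and port from the question and that the endpoint echoes the pod's HOSTNAME) is to curl the Service from a throwaway pod inside the cluster, so requests go through kube-proxy and get load balanced across the endpoints:
kubectl run tmp-curl --rm -it --restart=Never --image=curlimages/curl -- \
  sh -c 'for i in $(seq 1 10); do curl -s http://myproj-app-service:8000/; echo; done'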

The reason turned out to be that my pods were randomly crashing because of Python *.pyc files left in the container, which causes issues when Django runs in a multi-pod Kubernetes deployment. Once I removed them and all pods ran successfully, the round-robin started working.
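For reference, a common way to keep stale bytecode out of the image (a sketch, not necessarily the exact fix used here; the base image and paths are illustrative) is to disable .pyc generation and delete any files that were copied in:
# Dockerfile (illustrative)
FROM python:3.9-slim
# don't write .pyc files inside the container
ENV PYTHONDONTWRITEBYTECODE=1 PYTHONUNBUFFERED=1
WORKDIR /app
COPY . /app
# remove any bytecode copied in from the host
RUN find /app -name '*.pyc' -delete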

Related

Testing locally k8s distributed system

I'm new to k8s and I'm trying to build a distributed system. The idea is that a stateful pod will be spawned for each user.
Main services are two Python applications MothershipService and Ship. MothershipService's purpose is to keep track of ship-per-user, do health checks, etc. Ship is running some (untrusted) user code.
MothershipService ----> Ship-user1 --- vol1
                  \
                   '--> Ship-user2 --- vol2
I can manage to get the Ship service up fine:
> kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/ship-0 1/1 Running 0 7d 10.244.0.91 minikube <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ship ClusterIP None <none> 8000/TCP 7d app=ship
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d <none>
NAME READY AGE CONTAINERS IMAGES
statefulset.apps/ship 1/1 7d ship ship
My question is: how do I go about testing this via curl or a browser? These are all backend services, so NodePort doesn't seem like the right approach since none of this should be accessible to the public. Eventually I will build a test suite for all of this and deploy it on GKE.
ship.yml (pseudo-spec)
kind: Service
metadata:
  name: ship
spec:
  ports:
  - port: 8000
    name: ship
  clusterIP: None # headless service
  ..
---
kind: StatefulSet
metadata:
  name: ship
spec:
  serviceName: "ship"
  replicas: 1
  template:
    spec:
      containers:
      - name: ship
        image: ship
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
          name: ship
  ..
One possibility is to use the kubectl port-forward command to expose the pod port locally on your system. For example, if I use this deployment to run a simple web server listening on port 8000:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: example
  name: example
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - args:
        - --port
        - "8000"
        image: docker.io/alpinelinux/darkhttpd
        name: web
        ports:
        - containerPort: 8000
          name: http
I can expose that on my local system by running:
kubectl port-forward deploy/example 8000:8000
As long as that port-forward command is running, I can point my browser (or curl) at http://localhost:8000 to access the service.
Alternatively, I can use kubectl exec to run commands (like curl or wget) inside the pod:
kubectl exec -it deploy/example -- wget -O- http://127.0.0.1:8000
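Since the ship Service in the question is headless and backs a StatefulSet, you can also reach individual pods from inside the cluster by their stable DNS names (a sketch, assuming the default namespace):
# each StatefulSet pod is resolvable as <pod>.<service>.<namespace>.svc.cluster.local
kubectl run tmp --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://ship-0.ship.default.svc.cluster.local:8000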
Example process on how to create a Kubernetes Service object that exposes an external IP address:
Creating a service for an application running in five pods:
Run a Hello World application in your cluster:
kubectl run hello-world --replicas=5 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
The preceding command creates a Deployment object and an associated ReplicaSet object. The ReplicaSet has five Pods, each of which runs the Hello World application.
Display information about the Deployment:
kubectl get deployments hello-world
kubectl describe deployments hello-world
Display information about your ReplicaSet objects:
kubectl get replicasets
kubectl describe replicasets
Create a Service object that exposes the deployment:
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
Display information about the Service:
kubectl get services my-service
The output is similar to this:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service 10.3.245.137 104.198.205.71 8080/TCP 54s
Note: If the external IP address is shown as <pending>, wait for a minute and enter the same command again.
Display detailed information about the Service:
kubectl describe services my-service
The output is similar to this:
Name: my-service
Namespace: default
Labels: run=load-balancer-example
Selector: run=load-balancer-example
Type: LoadBalancer
IP: 10.3.245.137
LoadBalancer Ingress: 104.198.205.71
Port: <unset> 8080/TCP
NodePort: <unset> 32377/TCP
Endpoints: 10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more...
Session Affinity: None
Events:
Make a note of the external IP address exposed by your service. In this example, the external IP address is 104.198.205.71. Also note the value of Port. In this example, the port is 8080.
In the preceding output, you can see that the service has several endpoints: 10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more. These are internal addresses of the pods that are running the Hello World application. To verify these are pod addresses, enter this command:
kubectl get pods --output=wide
The output is similar to this:
NAME ... IP NODE
hello-world-2895499144-1jaz9 ... 10.0.1.6 gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-2e5uh ... 10.0.1.8 gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-9m4h1 ... 10.0.0.6 gke-cluster-1-default-pool-e0b8d269-5v7a
hello-world-2895499144-o4z13 ... 10.0.1.7 gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-segjf ... 10.0.2.5 gke-cluster-1-default-pool-e0b8d269-cpuc
Use the external IP address to access the Hello World application:
curl http://<external-ip>:<port>
where <external-ip> is the external IP address of your Service, and <port> is the value of Port in your Service description.
The response to a successful request is a hello message:
Hello Kubernetes!
Please refer to How to Use external IP in GKE and Exposing an External IP Address to Access an Application in a Cluster for more information.
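On Minikube (as in the original question), a LoadBalancer Service's EXTERNAL-IP stays <pending> until you open a tunnel. A quick sketch, using the Service name from the question:
# in a separate terminal; keeps running and assigns an external IP to LoadBalancer services
minikube tunnel
# then check the Service again and curl the assigned IP
kubectl get service myproj-app-service
curl http://<external-ip>:8000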

how to restrict a pod to connect only to 2 pods using networkpolicy and test connection in k8s in simple way?

Do I still need to expose the pod via a ClusterIP service?
There are 3 pods - main, front, api. I need to allow ingress+egress connections to the main pod only from the api and front pods. I also created service-main - a service that exposes the main pod on port 80.
I don't know how to test it; I tried:
k exec main -it -- sh
nc -z -v -w 5 service-main 80
and
k exec main -it -- sh
curl front:80
The main.yaml pod:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: main
    item: c18
  name: main
spec:
  containers:
  - image: busybox
    name: main
    command:
    - /bin/sh
    - -c
    - sleep 1d
The front.yaml:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: front
  name: front
spec:
  containers:
  - image: busybox
    name: front
    command:
    - /bin/sh
    - -c
    - sleep 1d
The api.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: api
  name: api
spec:
  containers:
  - image: busybox
    name: api
    command:
    - /bin/sh
    - -c
    - sleep 1d
The main-to-front-networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: front-end-policy
spec:
  podSelector:
    matchLabels:
      app: main
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: front
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: front
    ports:
    - port: 8080
What am I doing wrong? Do I still need to expose the main pod via a service? Shouldn't the network policy take care of this already?
Also, do I need to set containerPort: 80 in the main pod? How do I test connectivity and ensure that ingress/egress works only between the main pod and the api and front pods?
I tried the lab from a CKAD prep course; it had 2 pods: secure-pod and web-pod. There was an issue with connectivity, and the solution was to create a network policy and test it using netcat from inside web-pod's container:
k exec web-pod -it -- sh
nc -z -v -w 1 secure-service 80
connection open
UPDATE: ideally I want answers to these:
a clear explanation of the difference between a service and a networkpolicy.
If both a service and a netpol exist - what is the order of evaluation that the traffic/request goes through? Does it go through the netpol first and then the service, or vice versa?
If I want the front and api pods to send/receive traffic to main - do I need separate services exposing the front and api pods?
Network policies and services are two different and independent Kubernetes resources.
Service is:
An abstract way to expose an application running on a set of Pods as a network service.
Good explanation from the Kubernetes docs:
Kubernetes Pods are created and destroyed to match the state of your cluster. Pods are nonpermanent resources. If you use a Deployment to run your app, it can create and destroy Pods dynamically.
Each Pod gets its own IP address, however in a Deployment, the set of Pods running in one moment in time could be different from the set of Pods running that application a moment later.
This leads to a problem: if some set of Pods (call them "backends") provides functionality to other Pods (call them "frontends") inside your cluster, how do the frontends find out and keep track of which IP address to connect to, so that the frontend can use the backend part of the workload?
Enter Services.
Also another good explanation in this answer.
For production you should use workload resources instead of creating pods directly:
Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.
Here are some examples of workload resources that manage one or more Pods:
Deployment
StatefulSet
DaemonSet
And use services to make requests to your application.
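For example, a minimal Service for the main pod from the question could look like this (a sketch, assuming the app: main label and the port 8080 that the container listens on later in this answer):
apiVersion: v1
kind: Service
metadata:
  name: service-main
spec:
  selector:
    app: main
  ports:
  - port: 80         # port the Service itself listens on
    targetPort: 8080 # port the container listens on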
Network policies are used to control traffic flow:
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
Network policies target pods, not services (an abstraction). Check this answer and this one.
Regarding your examples - your network policy is correct (as I tested it below). The problem is that your cluster may not be compatible:
For Network Policies to take effect, your cluster needs to run a network plugin which also enforces them. Project Calico or Cilium are plugins that do so. This is not the default when creating a cluster!
Test on a kubeadm cluster with the Calico plugin -> I created similar pods to yours, but I changed the container part:
spec:
  containers:
  - name: main
    image: nginx
    command: ["/bin/sh","-c"]
    args: ["sed -i 's/listen .*/listen 8080;/g' /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
    ports:
    - containerPort: 8080
So the NGINX app is available on port 8080.
Let's check the pods' IPs:
user#shell:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
api 1/1 Running 0 48m 192.168.156.61 example-ubuntu-kubeadm-template-2 <none> <none>
front 1/1 Running 0 48m 192.168.156.56 example-ubuntu-kubeadm-template-2 <none> <none>
main 1/1 Running 0 48m 192.168.156.52 example-ubuntu-kubeadm-template-2 <none> <none>
Let's exec into the running main pod and try to make a request to the api pod (192.168.156.61):
root#main:/# curl 192.168.156.61:8080
<!DOCTYPE html>
...
<title>Welcome to nginx!</title>
It is working.
After applying your network policy:
user#shell:~$ kubectl apply -f main-to-front.yaml
networkpolicy.networking.k8s.io/front-end-policy created
user#shell:~$ kubectl exec -it main -- bash
root#main:/# curl 192.168.156.61:8080
...
It's not working anymore, which means the network policy has been applied successfully.
A nice way to get more information about an applied network policy is to run the kubectl describe command:
user#shell:~$ kubectl describe networkpolicy front-end-policy
Name:         front-end-policy
Namespace:    default
Created on:   2022-01-26 15:17:58 +0000 UTC
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     app=main
  Allowing ingress traffic:
    To Port: 8080/TCP
    From:
      PodSelector: app=front
  Allowing egress traffic:
    To Port: 8080/TCP
    To:
      PodSelector: app=front
  Policy Types: Ingress, Egress
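Since the question asks to allow traffic from both the api and front pods, you can add a second podSelector to the policy (a sketch of the ingress block only; the egress block would be extended the same way):
ingress:
- from:
  - podSelector:
      matchLabels:
        app: front
  - podSelector:
      matchLabels:
        app: api
  ports:
  - port: 8080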

Why Can't I Access My Kubernetes Cluster Using the minikube IP?

I have some questions regarding my minikube cluster, specifically why there needs to be a tunnel, what the tunnel actually is, and where the port numbers come from.
Background
I'm obviously a total kubernetes beginner...and don't have a ton of networking experience.
Ok. I have the following Docker image, which I pushed to Docker Hub. It's a hello Express app that just prints out "Hello world" at the / route.
Dockerfile:
FROM node:lts-slim
RUN mkdir /code
COPY package*.json server.js /code/
WORKDIR /code
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]
I have the following pod spec:
web-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web
    image: kahunacohen/hello-kube:latest
    ports:
    - containerPort: 3000
The following service:
web-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web-pod
  ports:
  - port: 8080
    targetPort: 3000
    protocol: TCP
    name: http
And the following deployment:
web-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-pod
      service: web-service
  template:
    metadata:
      labels:
        app: web-pod
        service: web-service
    spec:
      containers:
      - name: web
        image: kahunacohen/hello-kube:latest
        ports:
        - containerPort: 3000
          protocol: TCP
All the objects are up and running and look good after I create them with kubectl.
I do this:
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h5m
web-service NodePort 10.104.15.61 <none> 8080:32177/TCP 25m
Then, as per a book I'm reading, if I do:
$ curl $(minikube ip):8080 # or :32177, # or :3000
I get no response.
I found, however, that when I run the following, I can access the app by going to http://127.0.0.1:52650/:
$ minikube service web-service
|-----------|-------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-------------|-------------|---------------------------|
| default | web-service | http/8080 | http://192.168.49.2:32177 |
|-----------|-------------|-------------|---------------------------|
🏃 Starting tunnel for service web-service.
|-----------|-------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-------------|-------------|------------------------|
| default | web-service | | http://127.0.0.1:52472 |
|-----------|-------------|-------------|------------------------|
Questions
what this "tunnel" is and why we need it?
what the targetPort is for (8080)?
What this line means when I do kubectl get services:
web-service NodePort 10.104.15.61 <none> 8080:32177/TCP 25m
Specifically, what is that port mapping means and where 32177 comes from?
Is there some kind of problem with simply mapping the internal port to the same port number externally, e.g. 3000:3000? If so, do we specifically have to provide this mapping?
Let me answer all your questions.
0 - There's no need to create pods separately (unless it's just for testing); this should be done by creating deployments (or statefulsets, depending on the app and its needs), which will create a replicaset responsible for keeping the right number of pods operational. (You can get familiar with deployments in the kubernetes docs.)
1 - The tunnel is used to expose the service from inside the VM where minikube is running to the host machine's network. It works with the LoadBalancer service type. Please refer to access applications in minikube.
1.1 - The reason the application is not accessible on localhost:NodePort is that the NodePort is exposed within the VM where minikube is running, not on your local machine.
You can find the minikube VM's IP by running minikube ip and then curl <minikube-ip>:NodePort. You should get a response from your app.
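For example, with the NodePort 32177 from the question's kubectl get services output:
curl http://$(minikube ip):32177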
2 - targetPort is the port on the pod to which the service forwards traffic. Please refer to defining a service.
In minikube it may be confusing, since the tunnel URL points to the service port, not to the targetPort defined within the service. I think the idea was to indicate on which port the service is accessible within the cluster.
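To relate the three port fields using the numbers from the question (a sketch; nodePort is normally auto-allocated - 32177 in the question's output - but can also be set explicitly):
spec:
  type: NodePort
  ports:
  - port: 8080        # the Service's own port (ClusterIP:8080 inside the cluster)
    targetPort: 3000  # the containerPort the traffic is forwarded to
    nodePort: 32177   # the port opened on the minikube node (30000-32767 range)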
3 - As for this question: the output has column headers, and you can read them literally. For instance:
$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
web-service NodePort 10.106.206.158 <none> 80:30001/TCP 21m app=web-pod
NodePort comes from your web-service.yaml for the service object. The type is explicitly specified, and therefore a NodePort is allocated. If you don't specify the type of a service, it will be created with the ClusterIP type and will be accessible only within the kubernetes cluster. Please refer to Publishing Services (ServiceTypes).
When a service is created with the ClusterIP type, there won't be a NodePort in the output. E.g.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web-service ClusterIP 10.106.206.158 <none> 80/TCP 23m
EXTERNAL-IP will pop up when the LoadBalancer service type is used. Additionally, for minikube the address will appear once you run minikube tunnel in a different shell. After that, your service will be accessible on your host machine at EXTERNAL-IP + service port.
4 - There are no issues with such a mapping. Moreover, this is the default behaviour in kubernetes:
Note: A Service can map any incoming port to a targetPort. By default and for convenience, the targetPort is set to the same value as the port field.
Please refer to defining a service.
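So a Service that maps 3000 to 3000 is perfectly fine - a minimal sketch reusing the question's selector (targetPort can even be omitted when it equals port):
spec:
  type: NodePort
  selector:
    app: web-pod
  ports:
  - port: 3000
    targetPort: 3000 # optional here; defaults to the value of port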
Edit:
Depending on minikube's driver (usually virtualbox or docker - this can be checked on a linux VM in .minikube/profiles/minikube/config.json), minikube can have different port forwardings. E.g. I have a minikube based on the docker driver and I can see some mappings:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ebcbc898b557 gcr.io/k8s-minikube/kicbase:v0.0.23 "/usr/local/bin/entr…" 5 days ago Up 5 days 127.0.0.1:49157->22/tcp, 127.0.0.1:49156->2376/tcp, 127.0.0.1:49155->5000/tcp, 127.0.0.1:49154->8443/tcp, 127.0.0.1:49153->32443/tcp minikube
For instance, 22 for ssh-ing into the minikube VM. This may be the answer to why you got a response from http://127.0.0.1:52650/.

Disable Kubernetes ClusterIP service environment variables on pods

Whenever a new pod is created in the cluster, environment variables related to the default Kubernetes clusterIP service are being injected into it.
The default Kubernetes ClusterIP service is running:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.116.0.1 <none> 443/TCP 27d
No matter which namespace the pod is running in, the following env vars always appear:
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.116.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.116.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.116.0.1:443
KUBERNETES_SERVICE_HOST=10.116.0.1
I'm using enableServiceLinks: false as a mechanism to prevent service environment variables from being injected into pods, but it looks like it doesn't work for the default Kubernetes ClusterIP service.
Deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: indecision-app-deployment
  labels:
    app: indecision-app
spec:
  selector:
    matchLabels:
      app: indecision-app
  template:
    metadata:
      labels:
        app: indecision-app
    spec:
      enableServiceLinks: false
      containers:
      - name: indecision-app
        image: hleal18/indecision-app:latest
        ports:
        - containerPort: 8080
Is enableServiceLinks: false also expected to prevent the default Kubernetes ClusterIP service from being injected?
In the k8s source code you can find this comment:
// We always want to add environment variabled for master services
// from the master service namespace, even if enableServiceLinks is false.
and the code that adds these environment variables:
if service.Namespace == kl.masterServiceNamespace && masterServices.Has(serviceName) {
    if _, exists := serviceMap[serviceName]; !exists {
        serviceMap[serviceName] = service
    }
}
As you can see, kubelet adds services from masterServiceNamespace which defaults to "default".
Digging a bit more, I found out that there is a flag --master-service-namespace:
--master-service-namespace The namespace from which the kubernetes master services should be injected into pods (default "default") (DEPRECATED: This flag will be removed in a future version.)
Now the flag is deprecated and may be deleted in a future version.
Setting it on every kubelet should solve your issue, but this is probably not the best thing to do, as it was presumably deprecated for a reason.

Kubernetes dashboard through kubectl proxy - port confusion

I have seen that the standard way to access http services through the kubectl proxy is the following:
http://api.host/api/v1/namespaces/NAMESPACE/services/SERVICE_NAME:SERVICE_PORT/proxy/
Why is it that the kubernetes-dashboard uses https:kubernetes-dashboard: for SERVICE_NAME:SERVICE_PORT?
I would assume from the following that it would be kubernetes-dashboard:443.
kubectl -n kube-system get service kubernetes-dashboard -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes-dashboard ClusterIP 10.233.50.212 <none> 443:31663/TCP 15d k8s-app=kubernetes-dashboard
Additionally, what is the meaning of the port shown, 443:31663, when all other services just have x/TCP (x being a single number instead of x:y)?
Lastly, kubectl cluster-info will show
Kubernetes master is running at https://x.x.x.x:x
kubernetes-dashboard is running at https://x.x.x.x:x/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
I have created a simple service but it does not show up here, and I am confused about how to determine which services show up here and which do not.
Why is it that the kubernetes-dashboard uses https:kubernetes-dashboard: for SERVICE_NAME:SERVICE_PORT?
Additionally, what is the meaning of the port shown, 443:31663, when all other services just have x/TCP (x being a single number instead of x:y)?
As described in Manually constructing apiserver proxy URLs, the default way is
http://kubernetes_master_address/api/v1/namespaces/namespace_name/services/service_name[:port_name]/proxy
By default, the API server proxies to your service using http. To use
https, prefix the service name with https::
http://kubernetes_master_address/api/v1/namespaces/namespace_name/services/https:service_name:[port_name]/proxy
The supported formats for the name segment of the URL are:
<service_name> - proxies to the default or unnamed port using http
<service_name>:<port_name> - proxies to the specified port using http
https:<service_name>: - proxies to the default or unnamed port using https (note the trailing colon)
https:<service_name>:<port_name> - proxies to the specified port using https
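Putting that together for the dashboard (assuming kubectl proxy is running on its default 127.0.0.1:8001):
kubectl proxy
# then open:
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/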
Next:
I have created a simple service but it does not show here and I am confused how to determine what services show here or not.
Here is what I found and tested for you:
cluster-info API reference:
Display addresses of the master and services with label kubernetes.io/cluster-service=true. To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
So, as soon as you add the kubernetes.io/cluster-service: "true" label, the service starts to be shown in kubectl cluster-info.
BUT!! There is an expected behavior where you see your service disappear from the output after a couple of minutes. An explanation was found here - I only copy-paste it here for future reference.
The other part is the addon manager. It uses this annotation to synchronize the cluster state with static manifest files. The behavior was something like this:
1) addon manager reads a yaml from disk -> deploys the contents
2) addon manager reads all deployments from api server with annotation cluster-service:true -> deletes all that do not exist as files
As a result, if you add this annotation, addon manager will remove the dashboard after a minute or so.
So,
if the dashboard is deployed after cluster creation -> the annotation should not be set:
https://github.com/kubernetes/dashboard/blob/b98d167dadaafb665a28091d1e975cf74eb31c94/src/deploy/kubernetes-dashboard.yaml
if the dashboard is deployed as part of cluster creation -> the annotation should be set:
https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dashboard/dashboard-controller.yaml
At least this was the behavior some time ago. I think kubeadm does not use the addon-manager, but it is still part of the kube-up script.
A solution for this behavior also exists: add the additional label addonmanager.kubernetes.io/mode: EnsureExists
The explanation is here.
Your final service should look like this:
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
kubectl get svc kubernetes-dashboard -n kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists","k8s-app":"kubernetes-dashboard","kubernetes.io/cluster-service":"true"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
kubectl cluster-info
Kubernetes master is running at https://*.*.*.*
...
kubernetes-dashboard is running at https://*.*.*.*/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
...