Kubernetes dashboard through kubectl proxy - port confusion

I have seen that the standard way to access http services through the kubectl proxy is the following:
http://api.host/api/v1/namespaces/NAMESPACE/services/SERVICE_NAME:SERVICE_PORT/proxy/
Why is it that the kubernetes-dashboard uses https:kubernetes-dashboard: for SERVICE_NAME:SERVICE_PORT?
I would assume from the following that it would be kubernetes-dashboard:443.
kubectl -n kube-system get service kubernetes-dashboard -o wide
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE   SELECTOR
kubernetes-dashboard   ClusterIP   10.233.50.212   <none>        443:31663/TCP   15d   k8s-app=kubernetes-dashboard
Additionally, what is the meaning of the port shown as 443:31663, when all other services just have x/TCP (x being a single number instead of x:y)?
Lastly, kubectl cluster-info will show
Kubernetes master is running at https://x.x.x.x:x
kubernetes-dashboard is running at https://x.x.x.x:x/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
I have created a simple service but it does not show here and I am confused how to determine what services show here or not.

Why is it that the kubernetes-dashboard uses
https:kubernetes-dashboard: for SERVICE_NAME:SERVICE_PORT?
Additionally, what is the meaning of the port shown as 443:31663 when all
other services just have x/TCP (x being a single number instead of x:y)?
As described in Manually constructing apiserver proxy URLs, the default way is
http://kubernetes_master_address/api/v1/namespaces/namespace_name/services/service_name[:port_name]/proxy
By default, the API server proxies to your service using http. To use
https, prefix the service name with https::
http://kubernetes_master_address/api/v1/namespaces/namespace_name/services/https:service_name:[port_name]/proxy
The supported formats for the name segment of the URL are:
<service_name> - proxies to the default or unnamed port using http
<service_name>:<port_name> - proxies to the specified port using http
https:<service_name>: - proxies to the default or unnamed port using https (note the trailing colon)
https:<service_name>:<port_name> - proxies to the specified port using https
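To make that concrete, here is a rough sketch (assuming kubectl proxy is running on its default 127.0.0.1:8001 and the dashboard lives in kube-system, as in the question) of how those formats translate into actual requests:
# Start a local proxy to the API server (listens on 127.0.0.1:8001 by default)
kubectl proxy &
# <service_name> form: plain http to the default/unnamed port
curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
# https:<service_name>: form: https to the default/unnamed port (note the trailing colon).
# This is the form kubectl cluster-info prints for the dashboard, because the dashboard serves https.
curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/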
Next:
I have created a simple service but it does not show here and I am
confused how to determine what services show here or not.
Here is what I found and tested for you:
cluster-info API reference:
Display addresses of the master and services with label kubernetes.io/cluster-service=true. To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
So, as soon as you add the kubernetes.io/cluster-service: "true" label, the service starts to show up under kubectl cluster-info.
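As a quick sketch (my-service is just a hypothetical name; adjust the namespace to yours), labelling an existing service looks like this:
# Add the label that kubectl cluster-info filters on (hypothetical service "my-service")
kubectl -n kube-system label service my-service kubernetes.io/cluster-service=true
# The service should now be listed - but keep reading, the addon manager may interfere
kubectl cluster-info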
BUT!! There is an expected behavior where you see your service disappear from the output after a couple of minutes. The explanation has been found here - I only copy-paste it here for future reference.
The other part is the addon manager. It uses this annotation to
synchronizes the cluster state with static manifest files. The
behavior was something like this:
1) addon manager reads a yaml from disk -> deploys the contents
2) addon manager reads all deployments from api server with annotation cluster-service:true -> deletes all that do not exist as files
As a result, if you add this annotation, the addon manager will remove the dashboard after a minute or so.
So,
dashboard is deployed after cluster creation -> annotation should not be set:
https://github.com/kubernetes/dashboard/blob/b98d167dadaafb665a28091d1e975cf74eb31c94/src/deploy/kubernetes-dashboard.yaml
dashboard is deployed as part of cluster creation -> annotation should be set:
https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dashboard/dashboard-controller.yaml
At least this was the behavior some time ago. I think kubeadm does not use the addon manager, but it is still part of the kube-up script.
A solution for this behavior also exists: add the additional label addonmanager.kubernetes.io/mode: EnsureExists.
The explanation is here.
Your final service should look like this:
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
kubectl get svc kubernetes-dashboard -n kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists","k8s-app":"kubernetes-dashboard","kubernetes.io/cluster-service":"true"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
kubectl cluster-info
Kubernetes master is running at https://*.*.*.*
...
kubernetes-dashboard is running at https://*.*.*.*/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
...

Related

Is there a reversal of `ExternalName` service in Kubernetes?

I'm trying to move my development environment to Kubernetes to be more in line with existing deployment stages. In that context I need to call a service by its Ingress DNS name internally, while this DNS name resolves to an IP unreachable from the cluster itself. I would like to create a DNS alias inside the cluster which would point to the service, basically a reversal of an ExternalName service.
Example:
The external DNS name is my-service.my-domain.local, resolving to 127.0.0.1
Internal service is my-service.my-namespace.svc.cluster.local
A process running in a pod can't reach my-service.my-domain.local because of the resolved IP, but could reach my-service.my-namespace.svc.cluster.local; however, it needs to access the former by name
I would like to have a cluster-internal DNS name my-service.my-domain.local, resolving to the service my-service.my-namespace.svc.cluster.local (ExternalName service would do the exact opposite).
Is there a way to implement this in Kubernetes?
You can use CoreDNS and add the entry there using a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
  labels:
    eks.amazonaws.com/component: coredns
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    domain-name:port {
        errors
        cache 30
        forward . <IP or custom DNS>
        reload
    }
To test it, you can start a busybox pod:
kubectl run busybox --restart=Never --image=busybox:1.28 -- sleep 3600
and hit the domain name from inside the busybox pod:
kubectl exec busybox -- nslookup domain-name
Official doc ref: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
Nice article for ref: https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/
Or you can map the domain to the service name using the rewrite plugin: rewrite name example.io service.default.svc.cluster.local
Use the Rewrite plug-in of CoreDNS to resolve a specified domain name
to the domain name of a Service.
apiVersion: v1
data:
  Corefile: |-
    .:5353 {
        bind {$POD_IP}
        cache 30
        errors
        health {$POD_IP}:8080
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        rewrite name example.io service.default.svc.cluster.local
        loadbalance round_robin
        prometheus {$POD_IP}:9153
        forward . /etc/resolv.conf
        reload
    }
kind: ConfigMap
metadata:
  labels:
    app: coredns
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: CoreDNS
    release: cceaddon-coredns
  name: coredns
  namespace: kube-system
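To verify the rewrite, the busybox approach from the first option can be reused; example.io is just the placeholder name from the snippet above (a sketch, assuming this Corefile is the one actually serving your cluster):
# Reuse (or create) the busybox pod and resolve the aliased name
kubectl exec busybox -- nslookup example.io
# It should resolve to the same address as the target service
kubectl exec busybox -- nslookup service.default.svc.cluster.local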

k3s - Can't access my service based on service name

I have created a service like this:
apiVersion: v1
kind: Service
metadata:
  name: amen-sc
spec:
  ports:
    - name: http
      port: 3030
      targetPort: 8000
  selector:
    component: scc-worker
I am able to access this service from within my pods of the same cluster (and namespace) using the IP address I get from kubectl get svc, but I am not able to access it using the service name, e.g. curl amen-sc:3030.
Please advise what could possibly be wrong.
I intend to expose certain pods only within my cluster and access them using the service-name:port format.
Make sure you have the DNS service configured and its corresponding pods running:
kubectl get svc -n kube-system -l k8s-app=kube-dns
and
kubectl get pods -n kube-system -l k8s-app=kube-dns
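If those look healthy, a quick next step is to test name resolution and the service port from a throwaway pod (a sketch; amen-sc is the service from the question and is assumed to live in the same namespace):
# Temporary pod with busybox networking tools
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- sh
# Inside the pod:
nslookup amen-sc                 # short name works within the same namespace
wget -qO- http://amen-sc:3030    # use the service port (3030), not the targetPort (8000)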

Kubernetes Service get Connection Refused

I am trying to create an application in Kubernetes (Minikube) and expose its service to other applications in the same cluster, but I get connection refused when I try to access this service from the Kubernetes node.
This application just listens on the HTTP address 127.0.0.1:9897 and sends a response.
Below is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exporter-test
  namespace: datenlord-monitoring
  labels:
    app: exporter-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: exporter-test
  template:
    metadata:
      labels:
        app: exporter-test
    spec:
      containers:
        - name: prometheus
          image: 34342/hello_world
          ports:
            - containerPort: 9897
---
apiVersion: v1
kind: Service
metadata:
  name: exporter-test-service
  namespace: datenlord-monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9897'
spec:
  selector:
    app: exporter-test
  type: NodePort
  ports:
    - port: 8080
      targetPort: 9897
      nodePort: 30001
After I apply this yaml file, the pod and the service are deployed correctly, and I am sure this pod works, since when I log in to the pod with
kubectl exec -it exporter-test-* -- sh and just run curl 127.0.0.1:9897, I get the correct response.
Also, if I run kubectl port-forward exporter-test-* -n datenlord-monitoring 8080:9897, I get the correct response from localhost:8080, so the application itself should work well.
However, when I try to access this service from another application in the same K8s cluster via exporter-test-service.datenlord-monitoring.svc:30001, or just run curl nodeIp:30001 or curl clusterIp:8080 on the k8s node, I get Connection refused.
Has anyone had the same issue before? I appreciate any help! Thanks!
You are mixing two things here. The NodePort is the port on which the application is available from outside your cluster. Inside your cluster you need to access your service via the service port, not the NodePort.
Try changing exporter-test-service.datenlord-monitoring.svc:30001 to exporter-test-service.datenlord-monitoring.svc:8080.
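A minimal sketch of that in-cluster check, using a throwaway curl pod (the pod name and image are just examples):
# Start a small pod with curl in the same namespace
kubectl run curl -n datenlord-monitoring --image=curlimages/curl --command -- sleep 3600
# Hit the Service port (8080), not the NodePort (30001)
kubectl exec -it curl -n datenlord-monitoring -- curl -I http://exporter-test-service:8080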
Welcome to the community!
There are no issues with the behaviour you observed.
In short, a kubernetes cluster (which is minikube in this case) has its own isolated network with an internal DNS.
One way to access your service on the node: you specified a nodePort for your service, which made the service accessible on localhost:30001. You can check it by running this on your host:
$ kubectl get svc -n datenlord-monitoring
NAME                    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
exporter-test-service   NodePort   10.111.191.159   <none>        8080:30001/TCP   2m45s
# Test:
curl -I localhost:30001
HTTP/1.1 200 OK
Another way to expose the service to the host network is to use minikube tunnel (run it in another console). You'll need to change the service type from NodePort to LoadBalancer:
$ kubectl get svc -n datenlord-monitoring
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
exporter-test-service   LoadBalancer   10.111.191.159   10.111.191.159   8080:30001/TCP   18m
# Test:
$ curl -I 10.111.191.159:8080
HTTP/1.1 200 OK
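A sketch of the steps to get there, reusing the service name and namespace from the question:
# Switch the service type from NodePort to LoadBalancer
kubectl patch svc exporter-test-service -n datenlord-monitoring -p '{"spec": {"type": "LoadBalancer"}}'
# Keep this running in a separate console so minikube can hand out an external IP
minikube tunnel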
Why some of the options don't work:
Connection to the service by its DNS name + NodePort: the NodePort is used to link the host IP and NodePort to the service port inside the kubernetes cluster. The internal DNS is not accessible outside the kubernetes cluster (unless you add the IPs to /etc/hosts on your host machine).
Inside the cluster you should use the internal DNS with the internal service port, which is 8080 in your case. You can check how this works with a separate container in the same namespace (e.g. image curlimages/curl) and get the following:
$ kubectl exec -it curl -n datenlord-monitoring -- curl -I exporter-test-service:8080
HTTP/1.1 200 OK
Or from the pod in a different namespace:
$ kubectl exec -it curl-default-ns -- curl -I exporter-test-service.datenlord-monitoring.svc:8080
HTTP/1.1 200 OK
I've attached useful links which will help you understand the difference.
Edit: DNS inside deployed pod
$ kubectl exec -it exporter-test-xxxxxxxx-yyyyy -n datenlord-monitoring -- bash
root@exporter-test-74cf9f94ff-fmcqp:/# cat /etc/resolv.conf
nameserver 10.96.0.10
search datenlord-monitoring.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
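Because of that search list, short service names resolve from any pod in the namespace; a quick sketch of checking this with a throwaway busybox pod (the pod name dnscheck is just an example):
# busybox:1.28 ships an nslookup that works well for this kind of check
kubectl run dnscheck -n datenlord-monitoring --rm -it --restart=Never --image=busybox:1.28 -- nslookup exporter-test-service
# The name expands via the first search suffix, datenlord-monitoring.svc.cluster.local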
Useful links:
DNS for pods and services
Service types
Accessing apps in Minikube
You need to change 127.0.0.1:9897 to 0.0.0.0:9897 so that the application listens on all interfaces for incoming requests.
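One way to confirm what address the process is actually bound to (a sketch; it assumes netstat or ss is available in the image, and reuses the placeholder pod name from the answer above):
# Check the listening address inside the pod
kubectl exec -it exporter-test-xxxxxxxx-yyyyy -n datenlord-monitoring -- netstat -tln
# 127.0.0.1:9897 -> only reachable from inside the pod itself
# 0.0.0.0:9897   -> reachable via the Service, port-forward and NodePort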

Kubernetes clusterIP does not load balance requests [duplicate]

My Environment: Mac dev machine with latest Minikube/Docker
I built (locally) a simple docker image with a simple Django REST API "hello world". I'm running a deployment with 3 replicas. This is my yaml file for defining it:
apiVersion: v1
kind: Service
metadata:
  name: myproj-app-service
  labels:
    app: myproj-be
spec:
  type: LoadBalancer
  ports:
    - port: 8000
  selector:
    app: myproj-be
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myproj-app-deployment
  labels:
    app: myproj-be
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myproj-be
  template:
    metadata:
      labels:
        app: myproj-be
    spec:
      containers:
        - name: myproj-app-server
          image: myproj-app-server:4
          ports:
            - containerPort: 8000
          env:
            - name: DATABASE_URL
              value: postgres://myname:#10.0.2.2:5432/myproj2
            - name: REDIS_URL
              value: redis://10.0.2.2:6379/1
When I apply this yaml, it generates everything correctly:
- one deployment
- one service
- three pods
Deployments:
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
myproj-app-deployment   3/3     3            3           79m
Services:
NAME                 TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
kubernetes           ClusterIP      10.96.0.1     <none>        443/TCP          83m
myproj-app-service   LoadBalancer   10.96.91.44   <pending>     8000:31559/TCP   79m
Pods:
NAME                                     READY   STATUS    RESTARTS   AGE
myproj-app-deployment-77664b5557-97wkx   1/1     Running   0          48m
myproj-app-deployment-77664b5557-ks7kf   1/1     Running   0          49m
myproj-app-deployment-77664b5557-v9889   1/1     Running   0          49m
The interesting thing is that when I SSH into Minikube and hit the service using curl 10.96.91.44:8000, it respects the LoadBalancer type of the service and rotates between all three pods as I hit the endpoint again and again. I can see that in the returned results, which I have made sure include the HOSTNAME of the pod.
However, when I try to access the service from my host Mac using kubectl port-forward service/myproj-app-service 8000:8000, every time I hit the endpoint I get the same pod responding. It doesn't load balance. I can see that clearly when I run kubectl logs -f <pod> on all three pods: only one of them is handling the hits, while the other two are idle...
Is this a kubectl port-forward limitation or issue? or am I missing something greater here?
kubectl port-forward looks up the first Pod from the Service information provided on the command line and forwards directly to a Pod rather than forwarding to the ClusterIP/Service port. The cluster doesn't get a chance to load balance the service like regular service traffic.
The kubernetes API only provides Pod port forward operations (CREATE and GET). Similar API operations don't exist for Service endpoints.
kubectl code
Here's a little bit of the flow from the kubectl code that seems to back that up (I'll just add that Go isn't my primary language)
The portforward.go Complete function is where kubectl port-forward does the first lookup for a Pod from the options via AttachablePodForObjectFn:
The AttachablePodForObjectFn is defined as attachablePodForObject in this interface, then here is the attachablePodForObject function.
To my (inexperienced) Go eyes, it appears the attachablePodForObject is the thing kubectl uses to look up a Pod from a Service defined on the command line.
Then from there on everything deals with filling in the Pod specific PortForwardOptions (which doesn't include a service) and is passed to the kubernetes API.
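A rough way to see the difference yourself, reusing the names and ClusterIP from the question (a sketch, not exact output):
# 1) Via port-forward from the host: every request lands on the single pod kubectl picked
kubectl port-forward service/myproj-app-service 8000:8000 &
for i in 1 2 3 4 5; do curl -s localhost:8000; done    # same HOSTNAME every time
# 2) Via the ClusterIP from inside minikube: kube-proxy balances across the three pods
minikube ssh
for i in 1 2 3 4 5; do curl -s 10.96.91.44:8000; done  # HOSTNAME rotates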
The reason was that my pods were randomly in a crashing state due to Python *.pyc files that were left in the container. This causes issues when Django is running in a multi-pod Kubernetes deployment. Once I removed this issue and all pods ran successfully, the round-robin started working.

ambassador service stays "pending"

Currently running a fresh "all in one VM" (stacked master/worker approach) kubernetes v1.21.1-00 on Ubuntu Server 20 LTS, using
cri-o as container runtime interface
calico for networking/security
I also installed the kubernetes-dashboard (but I guess that's not important for my issue 😉). Following this guide for installing ambassador: https://www.getambassador.io/docs/edge-stack/latest/topics/install/yaml-install/ I ran into the issue that the service is stuck in the status "pending".
kubectl get svc -n ambassador prints out the following stuff
NAME               TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ambassador         LoadBalancer   10.97.117.249    <pending>     80:30925/TCP,443:32259/TCP   5h
ambassador-admin   ClusterIP      10.101.161.169   <none>        8877/TCP,8005/TCP            5h
ambassador-redis   ClusterIP      10.110.32.231    <none>        6379/TCP                     5h
quote              ClusterIP      10.104.150.137   <none>        80/TCP                       5h
While changing the type from LoadBalancer to NodePort in the service sets it up correctly, I'm not sure of the implications that come along with that. Again, I want to use ambassador as an ingress component here - with my setup (only one machine), "real" load balancing might not be necessary.
To cover all the subdomain stuff, I set up a wildcard record pointing to my machine, meaning I have a CNAME for *.k8s.my-domain.com which points to this host. I don't know if this approach was that smart for setting up an ingress.
Edit: List of events, as requested below:
Events:
Type     Reason      Age    From                Message
----     ------      ----   ----                -------
Normal   Scheduled   116s   default-scheduler   Successfully assigned ambassador/ambassador-redis-584cd89b45-js5nw to dev-bvpl-099
Normal   Pulled      116s   kubelet             Container image "redis:5.0.1" already present on machine
Normal   Created     116s   kubelet             Created container redis
Normal   Started     116s   kubelet             Started container redis
Additionally, here's the pending service in yaml representation (exported via kubectl get svc -n ambassador -o yaml ambassador):
apiVersion: v1
kind: Service
metadata:
  annotations:
    a8r.io/bugs: https://github.com/datawire/ambassador/issues
    a8r.io/chat: http://a8r.io/Slack
    a8r.io/dependencies: ambassador-redis.ambassador
    a8r.io/description: The Ambassador Edge Stack goes beyond traditional API Gateways
      and Ingress Controllers with the advanced edge features needed to support developer
      self-service and full-cycle development.
    a8r.io/documentation: https://www.getambassador.io/docs/edge-stack/latest/
    a8r.io/owner: Ambassador Labs
    a8r.io/repository: github.com/datawire/ambassador
    a8r.io/support: https://www.getambassador.io/about-us/support/
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"a8r.io/bugs":"https://github.com/datawire/ambassador/issues","a8r.io/chat":"http://a8r.io/Slack","a8r.io/dependencies":"ambassador-redis.ambassador","a8r.io/description":"The Ambassador Edge Stack goes beyond traditional API Gateways and Ingress Controllers with the advanced edge features needed to support developer self-service and full-cycle development.","a8r.io/documentation":"https://www.getambassador.io/docs/edge-stack/latest/","a8r.io/owner":"Ambassador Labs","a8r.io/repository":"github.com/datawire/ambassador","a8r.io/support":"https://www.getambassador.io/about-us/support/"},"labels":{"app.kubernetes.io/component":"ambassador-service","product":"aes"},"name":"ambassador","namespace":"ambassador"},"spec":{"ports":[{"name":"http","port":80,"targetPort":8080},{"name":"https","port":443,"targetPort":8443}],"selector":{"service":"ambassador"},"type":"LoadBalancer"}}
  creationTimestamp: "2021-05-22T07:18:23Z"
  labels:
    app.kubernetes.io/component: ambassador-service
    product: aes
  name: ambassador
  namespace: ambassador
  resourceVersion: "4986406"
  uid: 68e4582c-be6d-460c-909e-dfc0ad84ae7a
spec:
  clusterIP: 10.107.194.191
  clusterIPs:
  - 10.107.194.191
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    nodePort: 32542
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 32420
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    service: ambassador
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
EDIT#2: I wonder if https://stackoverflow.com/a/44112285/667183 applies to my setup as well?
The answer is pretty much here: https://serverfault.com/questions/1064313/ambassador-service-stays-pending. After installing a load balancer the whole setup worked. I decided to go with MetalLB (https://metallb.universe.tf/installation/#installation-by-manifest for installation) and used the following configuration for a single-node kubernetes cluster:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.16.0.99-10.16.0.99
After a few seconds the load balancer is detected and everything goes fine.
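Once the MetalLB manifests and this ConfigMap are applied, the previously pending service should pick up an address from the pool; a quick check might look like this (output is only indicative, reusing values from the question and the pool above):
kubectl get svc -n ambassador ambassador
# NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
# ambassador   LoadBalancer   10.97.117.249   10.16.0.99    80:30925/TCP,443:32259/TCP   5h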