Kubernetes v1.20 endpoints resource cannot view controller manager or scheduler

I want to check the leader election of the core components, but the information displayed differs between Kubernetes versions installed from binaries.
Was this information removed in Kubernetes v1.20+? Or is there another way to view the leader election of the core components?
The Kubernetes configuration parameters below are identical; only the binary executables were swapped.
Kubernetes v1.20.8 or Kubernetes v1.20.2
$ kubectl get endpoints -n kube-system
No resources found in kube-system namespace.
Kubernetes v1.19.12
$ kubectl get endpoints -n kube-system
NAME ENDPOINTS AGE
kube-controller-manager <none> 9m12s
kube-scheduler <none> 9m13s

I found the cause of the problem.
The difference between the two versions is the default value of --leader-elect-resource-lock:
Kubernetes v1.20.8 or Kubernetes v1.20.2
--leader-elect-resource-lock string The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'. (default "leases")
Kubernetes v1.19.12
--leader-elect-resource-lock string The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'. (default "endpointsleases")
When I don't set --leader-elect-resource-lock in the controller-manager or scheduler in v1.20.8, the default value is leases,
so I can use the following command to view the component leaders:
$ kubectl get leases -n kube-system
NAME HOLDER AGE
kube-controller-manager master01_dec12376-f89e-4721-92c5-a20267a483b8 45h
kube-scheduler master02_c0c373aa-1642-474d-9dbd-ec41c4da089d 45h
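If you want more detail on the election state (renew time, lease duration, number of lease transitions), you can also look at the Lease object itself. The values below are only illustrative:
$ kubectl get lease kube-controller-manager -n kube-system -o yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  acquireTime: "..."
  holderIdentity: master01_dec12376-f89e-4721-92c5-a20267a483b8
  leaseDurationSeconds: 15
  leaseTransitions: 0
  renewTime: "..."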

Related

Is there any way to know which pod the service is load-balanced in Kubernetes?

I manage 3 Pods through a Deployment and connect through the NodePort of a Service.
I want to know which Pod the Service load-balanced to whenever I connect from outside.
It's hard to check with the Pod logs; can I find out through events or a kubectl command?
I am not sure if this is exactly what you're looking for, but you can use Istio to generate detailed telemetry for all service communications.
You may be particularly interested in Distributed tracing:
Istio generates distributed trace spans for each service, providing operators with a detailed understanding of call flows and service dependencies within a mesh.
By using distributed tracing, you are able to monitor every request as it flows through the mesh.
More information about Distributed Tracing with Istio can be found in the FAQ on Distributed Tracing documentation.
Istio supports multiple tracing backends (e.g. Jaeger).
Jaeger is a distributed tracing system similar to OpenZipkin and as we can find in the jaegertracing documentation:
It is used for monitoring and troubleshooting microservices-based distributed systems, including:
Distributed context propagation
Distributed transaction monitoring
Root cause analysis
Service dependency analysis
Performance / latency optimization
Of course, you don't need to install Istio to use Jaeger, but you'll have to instrument your application so that trace data from different parts of the stack are sent to Jaeger.
I'll show you how you can use Jaeger to monitor a sample request.
Suppose I have an app-1 Deployment with three Pods exposed using the NodePort service.
$ kubectl get pod,deploy,svc
NAME READY STATUS RESTARTS AGE IP
app-1-7ddf4f77c6-g682z 2/2 Running 0 25m 10.60.1.11
app-1-7ddf4f77c6-smlcr 2/2 Running 0 25m 10.60.0.7
app-1-7ddf4f77c6-zn7kh 2/2 Running 0 25m 10.60.2.5
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/app-1 3/3 3 3 21m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/app-1 NodePort 10.64.0.88 <none> 80:30881/TCP 25m
Additionally, I deployed jaeger (with istio):
$ kubectl get deploy -n istio-system | grep jaeger
jaeger 1/1 1 1 67m
To check if Jaeger is working as expected, I will try to connect to this app-1 application from outside the cluster (using the NodePort service):
$ curl <PUBLIC_IP>:30881
app-1
Let's find this trace with Jaeger:
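(To open the Jaeger UI locally you can use istioctl or a port-forward; assuming the standard Istio tracing addon is installed, something like the following should work, although the Service name may differ depending on how Jaeger was deployed:)
$ istioctl dashboard jaeger
# or, equivalently:
$ kubectl port-forward -n istio-system svc/tracing 16686:80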
As you can see, we can easily find out which Pod has received our request.
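If a full mesh is overkill for your case, a lighter-weight check is to tail all Pods of the Deployment at once and see which one answered. This assumes the application itself logs incoming requests and that the Pods carry a label such as app=app-1 (both assumptions here):
$ kubectl logs -f --prefix --all-containers -l app=app-1
[pod/app-1-7ddf4f77c6-g682z/...] ...
The --prefix flag prints the Pod (and container) name in front of every log line, so the Pod that served the request is visible directly.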

node(s) didn't have free ports for the requested pod ports

I deployed an ingress controller in the default namespace and tried to deploy another one in a different namespace as well, but have been getting this error:
0/8 nodes are available: 8 node(s) didn't have free ports for the requested pod ports.
But I saw a similar error with a solution saying: "You don't need to deploy multiple ingress controllers in a cluster. An ingress controller deployed in one namespace should be able to work across the cluster for all pods in all namespaces. Ingress controllers generally have ClusterRoles which permit them to access ingresses, services and endpoints across the cluster for all namespaces." The error in that question was:
0/3 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 2 node(s) didn't match node selector
So, is it OK if I have it working in only one namespace?
The problem described in the link you mentioned is related to Traefik.
Generally, this kind of error is a scheduling issue produced by the Kubernetes scheduler; it is not related to ingress-nginx itself.
You should check what is already using the ports.
Take a look: ingress-scheduler.
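A quick way to check which Pods already claim host ports (just a sketch; the jsonpath walks the Pod spec and the awk filter keeps only rows with a non-empty hostPort column):
$ kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}' | awk -F'\t' '$3 != ""'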
As for running multiple Ingress Controllers in a cluster:
For example if you are using NGINX Ingress Controller, you have three options with regards to which configuration resources it handles:
Single-namespace Ingress Controller - it handles configuration resources only from a particular namespace, which is controlled through the --watch-namespace command-line flag. It is useful if you want to use different NGINX Ingress Controllers for different applications, both in terms of isolation and/or operation.
Cluster-wide Ingress Controller - it handles configuration resources created in any namespace of the cluster. As NGINX is a high-performance load balancer capable of serving many applications at the same time, this option is used by default.
Ingress Controller for Specific Ingress Class. It works in conjunction with either of the options above. You can further customize which configuration resources are handled by the Ingress Controller by configuring the class of the Ingress Controller and using that class in your configuration resources.
By default such controllers are cluster wide - they handle resources created in any namespace, so there is no need to create multiple controllers to be sure that they will work for resources in every namespace.
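For example, an Ingress resource is tied to a particular controller through its class. A minimal sketch (the names here are hypothetical, and older controller versions use the kubernetes.io/ingress.class annotation instead of ingressClassName):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx   # must match the class the controller watches
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80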
Read more: ingress-controllers.
I encountered the same problem. The problem-solving process is as follows:
How I found the problem:
kubectl describe pods nginx-ingress-controller-f9d9cfd7c-q74h4
Use the above command to check where the Pod initialization process is stuck; the result is as follows:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 39s (x2 over 111s) default-scheduler 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
Analysis of the cause of the problem
kubectl get all -n ingress-nginx
I found that multiple ingress controllers had been started in the same namespace, so the ports were already occupied by the earlier one. The status of the earlier ingress controller was CrashLoopBackOff. Details as follows:
bogon:nginx-ingress$ kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-748d7f9c84-vd2pl 0/1 CrashLoopBackOff 21 (64s ago) 59m
nginx-ingress-controller-f9d9cfd7c-q74h4 0/1 Pending 0 4m39s
My solution
My solution was to delete the first Pod, the one in CrashLoopBackOff status, and re-apply the manifest:
# kubectl delete pod nginx-ingress-controller-748d7f9c84-vd2pl -n ingress-nginx
# kubectl apply -f mandatory.yaml

Why can't I get master node information in a fully-managed Kubernetes cluster?

Hi everyone.
Please tell me why the kubectl get nodes command does not return master node information in a fully-managed Kubernetes cluster.
I have a Kubernetes cluster in GKE. When I type the kubectl get nodes command, I get the information below.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-istio-test-01-pool-01-030fc539-c6xd Ready <none> 3m13s v1.13.11-gke.14
gke-istio-test-01-pool-01-030fc539-d74k Ready <none> 3m18s v1.13.11-gke.14
gke-istio-test-01-pool-01-030fc539-j685 Ready <none> 3m18s v1.13.11-gke.14
$
Of course, I can get the worker node information. It matches what the GKE web console shows.
By the way, I have another Kubernetes cluster built with three Raspberry Pis and kubeadm. When I run the kubectl get nodes command against that cluster, I get the result below.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 262d v1.14.1
node01 Ready <none> 140d v1.14.1
node02 Ready <none> 140d v1.14.1
$
This result includes master node information.
I'm curious why I cannot get the master node information in a fully-managed Kubernetes cluster.
I understand that the advantage of a fully-managed service is that we don't have to manage the control plane ourselves. I want to know how to create a Kubernetes cluster in which the master node information is not displayed.
I tried to create a cluster "the hard way", but couldn't find any information that could be a hint.
Lastly, I'm still learning English, so please correct me if I'm wrong.
It's a good question!
The key is the kubelet component of Kubernetes.
Managed Kubernetes offerings run the control plane components on masters, but they don't run the kubelet there, so those machines never register as nodes. You can easily achieve the same on your DIY cluster.
The kubelet is the primary “node agent” that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.
https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
When the kubelet flag --register-node is true (the default), the kubelet will attempt to register itself with the API server. This is the preferred pattern, used by most distros.
https://kubernetes.io/docs/concepts/architecture/nodes/#self-registration-of-nodes
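So on a DIY cluster you can get the same effect by running the control plane components (as static Pods or systemd units) while either not running the kubelet on those machines at all, or disabling self-registration. A minimal sketch of the latter, assuming the kubelet reads a KubeletConfiguration file:
# KubeletConfiguration snippet, equivalent to the --register-node=false flag
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
registerNode: false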
Because there are no nodes with that role. The control plane for GKE is hosted within their own magic system, not on your own nodes.

How to configure solace helm chart for use on a kubeadm cluster

We have a private Kubernetes cluster. We are trying to follow these quick-start instructions to install Solace:
https://github.com/SolaceProducts/solace-kubernetes-quickstart
The solace helm chart installation steps were as follows:
git clone https://github.com/SolaceProducts/solace-kubernetes-quickstart.git
cd solace-kubernetes-quickstart/solace
../scripts/configure.sh -p admin
helm install . -f values.yaml
The default values.yaml is the one from the cloned repository:
https://github.com/SolaceProducts/solace-kubernetes-quickstart/blob/master/solace/values.yaml
The install was largely successful.
[root@togo solace]# kubectl get pods
NAME READY STATUS RESTARTS AGE
brawny-walrus-solace-0 1/1 Running 0 41m
[root@togo solace]# kubectl get statefulsets
NAME READY AGE
brawny-walrus-solace 1/1 42m
However, the default set of services includes a LoadBalancer with a pending EXTERNAL-IP:
[root@togo solace]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
brawny-walrus-solace LoadBalancer 10.101.58.127 <pending> 22:31475/TCP,8080:30940/TCP,55555:30575/TCP,55003:32142/TCP,55443:32096/TCP,943:30133/TCP,80:32276/TCP,443:30643/TCP 43m
brawny-walrus-solace-discovery ClusterIP None <none> 8080/TCP 43m
A quick stack search seems to suggest this is because the loadbalancer expects to work inside a cloud, with an external load balancer:
kubernetes service external ip pending
Furthermore, one of the answers suggests using an Ingress Controller when using a custom kubeadm cluster (which is our case).
https://stackoverflow.com/a/44112285/2025407
Solace provides a variety of example values.yaml files, though a first glance at these does not suggest how to get Solace working on a kubeadm cluster:
https://github.com/SolaceProducts/solace-kubernetes-quickstart/tree/master/solace/values-examples
So my simple question for the Solace and/or Kubernetes experts is: what is the simplest way for me to update my Helm chart configuration file (values.yaml) in order to expose ports such as the Solace admin port (8080, I believe) in a way that is accessible?
If the Helm chart does not support this configuration (though I think it must), I could also create the appropriate service or services to expose the Solace resources myself; however, this would not be the best way to get my Solace chart working.
Thanks in advance for any help on this.
You can set the service.type parameter to NodePort.
Here is a simple example to demonstrate NodePort being used.
helm repo add solacecharts https://solaceproducts.github.io/pubsubplus-kubernetes-quickstart/helm-charts
helm install my-release solacecharts/pubsubplus-dev --set service.type=NodePort,storage.persistent=false
Follow the instructions in helm status my-release to figure out the ports.
Example:
$ echo -e "\nProtocol\tAddress\n"`kubectl get svc --namespace default my-release-pubsubplus-dev -o jsonpath="{range .spec.ports[*]}{.name}\t<NodeIP>:{.nodePort}\n"`
Protocol Address
ssh <NodeIP>:31359
semp <NodeIP>:30522
semptls <NodeIP>:30891
smf <NodeIP>:30019
smfcomp <NodeIP>:32518
smftls <NodeIP>:30791
web <NodeIP>:31568
webtls <NodeIP>:30087
amqp <NodeIP>:32427
mqtt <NodeIP>:32060
rest <NodeIP>:30746
Note that this is just an example and isn't suitable for production. For example, persistent storage is not in use, which means that all spooled messages can be lost.
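Since the original question already maintains a local values.yaml from the quickstart, the same setting can be expressed there instead of on the command line (key name per the chart's service.type parameter mentioned above):
# values.yaml
service:
  type: NodePort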
Refer to https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart for more details about the Solace Kubernetes quickstart.

Kubernetes' High Availability Leader Lease

I have a question regarding Kubernetes' leader/follower lease management for the kube-controller-manager and the kube-scheduler: as far as I understand, Kubernetes tracks the current leader as Endpoints objects in the kube-system namespace.
You can get the leader via
$ kubectl get endpoints -n kube-system
NAME ENDPOINTS AGE
kube-controller-manager <none> 20m
kube-scheduler <none> 20m
then e.g.
$ kubectl describe endpoints kube-scheduler -n kube-system
Name: kube-scheduler
Namespace: kube-system
Annotations: control-plane.alpha.kubernetes.io/leader={"holderIdentity":"controller-0", ...}
The current leader is the holderIdentity of the control-plane.alpha.kubernetes.io/leader annotation.
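If you only want the holder, you can grep it out of the object (a quick sketch; the annotation value is a JSON blob):
$ kubectl get endpoints kube-scheduler -n kube-system -o yaml | grep holderIdentity
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"controller-0",...}'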
My question:
Lease management (acquiring leases, renewing leases, time to live, etc.) is implemented in leaderelection.go on top of Kubernetes Endpoints. Is there a specific reason lease management is not implemented directly on etcd with "out-of-the-box" etcd primitives like etcd's compare-and-swap operation and time-to-live on objects?
Edit(s)
add Etcd compare and swap
add Etcd time to live
A few reasons:
Etcd might be running externally to Kubernetes' network, which means network latency
Etcd could be busy/loaded and therefore slow
The etcd cluster is very likely to have fewer nodes than the Kubernetes masters, making it less reliable
For security reasons, only the API server should have access to etcd. Keep in mind that if etcd was used for leader leases by convention, custom controllers and operators using leader election would also need access to etcd which would be inadvisable given how critical the data stored in etcd is.
Ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#securing-etcd-clusters