How to enable listening on port 10255 in my kubelet service - kubernetes

I am learning to work with Kubernetes and trying to configure monitoring of my Kubernetes cluster. For this I use Metricbeat and the ELK stack.
After deploying and configuring Metricbeat, I get an error:
error making http request: Get http://172.16.0.205:10255/stats/summary: dial tcp 172.16.0.205:10255: connect: connection refused
I found that my Kubelet is not listening on port 10255:
[root@kube2 /]# netstat -ap | grep -i "listen" | grep "kubelet"
tcp 0 0 localhost:40450 0.0.0.0:* LISTEN 8560/kubelet
tcp 0 0 localhost:10248 0.0.0.0:* LISTEN 8560/kubelet
tcp6 0 0 [::]:10250 [::]:* LISTEN 8560/kubelet
How can I enable this port? I found information that I need to use the --read-only-port=10255 parameter, but I do not quite understand how to apply it to my kubelet. For example:
[root@kube2 /]# kubelet --config --read-only-port=10255
F1010 13:32:48.592306 15851 server.go:196] failed to load Kubelet config file --read-only-port=10255, error failed to read kubelet config file "/--read-only-port=10255", error: open /--read-only-port=10255: no such file or directory
It doesn't work. Which file does it need?
Can anyone help me with a solution to this problem?

I resolved this issue. I added flags in /var/lib/kubelet/kubelet-flags on every one of my Kubernetes nodes:
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --read-only-port=10255"
and restarted the kubelet service.
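On systemd-managed nodes that restart would typically look like this (a sketch, assuming the kubelet runs as the standard kubelet systemd service):
sudo systemctl daemon-reload
sudo systemctl restart kubelet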
Now port 10255 is open:
[root@kube2 7.1]# netstat -ap | grep -i "listen" | grep "kubelet"
tcp 0 0 localhost:44799 0.0.0.0:* LISTEN 6281/kubelet
tcp 0 0 localhost:10248 0.0.0.0:* LISTEN 6281/kubelet
tcp6 0 0 [::]:10250 [::]:* LISTEN 6281/kubelet
tcp6 0 0 [::]:10255 [::]:* LISTEN 6281/kubelet
And now I can see Kubernetes data in my Kibana.
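Since --read-only-port is a deprecated flag (the kubelet documentation quoted at the end of this page says it should be set via the kubelet config file), the same result can also be achieved there. A minimal sketch, assuming the kubeadm default config path /var/lib/kubelet/config.yaml:
echo "readOnlyPort: 10255" | sudo tee -a /var/lib/kubelet/config.yaml
sudo systemctl restart kubelet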

Related

kubernetes v1.18.8 installation issue [closed]

I have deployed a Kubernetes cluster v1.18.8 with kubeadm in a production environment. The cluster setup is 3 master and 3 worker nodes with an external kube-api load balancer, and etcd residing on the master nodes. I didn't see any issue during installation and all pods in kube-system are running. However, when I run the command below I get an error:
kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}
While troubleshooting, I found that the ports are not being listened on.
sudo netstat -tlpn |grep kube
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 132584/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 133300/kube-proxy
tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 197705/kube-control
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 213741/kube-schedul
tcp6 0 0 :::10250 :::* LISTEN 132584/kubelet
tcp6 0 0 :::6443 :::* LISTEN 132941/kube-apiserv
tcp6 0 0 :::10256 :::* LISTEN 133300/kube-proxy
If I check the same thing on the development environment Kubernetes cluster (v1.17), I see no issue.
kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
sudo netstat -tlpn |grep 102
tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 2141/kube-controlle
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 2209/kube-scheduler
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1230/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 2668/kube-proxy
tcp6 0 0 :::10256 :::* LISTEN 2668/kube-proxy
tcp6 0 0 :::10250 :::* LISTEN 1230/kubelet
tcp6 0 0 :::10251 :::* LISTEN 2209/kube-scheduler
tcp6 0 0 :::10252 :::* LISTEN 2141/kube-controlle
On the newly created production cluster I have deployed nginx and another application just to test how the Kubernetes components behave, and didn't see any errors.
Is this the expected behaviour in v1.18? I will really appreciate any help on this.
NOTE: No ports are blocked for internal communication.
The command kubectl get componentstatus is deprecated in newer versions (1.19) and it already has many issues.
The main point to note here is that Kubernetes has disabled insecure serving of these components in recent versions (at least from v1.18). Hence I couldn't see kube-controller-manager and kube-scheduler listening on ports 10251 and 10252. To restore this functionality you can remove the --port=0 flag from their manifest files (not recommended, as this can expose their metrics to the whole internet), which you can find inside:
/etc/kubernetes/manifests/
I commented out the --port=0 field from the manifest files just to check this, and the kubectl get componentstatus command worked.
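A minimal sketch of that change, assuming the default kubeadm static pod manifest file names (the kubelet watches this directory and recreates the static pods automatically once the files change):
sudo sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml
sudo sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml
# wait a moment for the static pods to come back, then re-check
kubectl get componentstatus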

Kubernetes node failed to join master due to "Timeout exceeded while awaiting headers" error

I am trying to set up a k8s cluster with a master and two worker nodes on DigitalOcean.
My Config:
I have created three droplets as follows:
Master: 2cpu, 3GB Mem
Worker Node1: 1cpu, 2GB Mem
Worker Node2: 1cpu, 2GB Mem
I was able to set up the master node successfully:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 139m v1.18.3
I am unable to add a worker to the master.
Command I ran to join:
$ kubeadm join <PUBLIC IP>:6443 --token <token> --discovery-token-ca-cert-hash <hash>
Token had 23h of validity left at the time of executing the above command.
Error that I got:
W0528 14:13:09.920404 25129 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: Get https://PUBLIC_IP:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
To see the stack trace of this error execute with --v=5 or higher
My observations on this issue:
$ netstat -pnltu
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:40389 0.0.0.0:* LISTEN 25074/kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 25074/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 25478/kube-proxy
tcp 0 0 127.0.0.1:9099 0.0.0.0:* LISTEN 29823/calico-node
tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 24580/kube-controll
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 24742/kube-schedule
tcp6 0 0 :::10250 :::* LISTEN 25074/kubelet
tcp6 0 0 :::10251 :::* LISTEN 24742/kube-schedule
tcp6 0 0 :::6443 :::* LISTEN 24725/kube-apiserve
tcp6 0 0 :::10252 :::* LISTEN 24580/kube-controll
tcp6 0 0 :::10256 :::* LISTEN 25478/kube-proxy
Is it because the API server is listening on IPv6 instead of IPv4?
Here is the output of cluster-info:
$ kubectl cluster-info
Kubernetes master is running at https://<PUBLIC_IP>:6443
KubeDNS is running at https://<PUBLIC_IP>:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Any help to fix this issue is much appreciated.
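A couple of quick connectivity checks from the worker node may help narrow down whether this is a network problem or an IPv4/IPv6 binding problem (a sketch; <PUBLIC_IP> is the placeholder used above, and a tcp6 listener on :::6443 normally accepts IPv4 connections as well):
# can the worker reach the API server port at all?
nc -vz <PUBLIC_IP> 6443
# does the API server answer over HTTPS?
curl -k https://<PUBLIC_IP>:6443/version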

Kube Controller Manager CrashLoopBackOff

My kube-controller-manager keeps staying in CrashLoopBackOff status.
I found this upon looking in the logs of the pod:
failed to create listener: failed to listen on 0.0.0.0:10252: listen tcp 0.0.0.0:10252: bind: address already in use
Then I stumbled upon this article, whose author fortunately found a fix: he killed the process using the port and restarted his kube-controller-manager pod. https://medium.com/@deepeshtripathi/kubernetes-controller-pod-crashloopbackoff-resolved-16aaa1c27cfc
So I followed the steps he described. But when I got into the master node to find which process is using this port, I couldn't see anything that uses it.
root@ip:/# netstat -tunlp | grep 1025
tcp6 0 0 :::10250 :::* LISTEN 1598/kubelet
tcp6 0 0 :::10251 :::* LISTEN 7472/kube-scheduler
tcp6 0 0 :::10255 :::* LISTEN 1598/kubelet
tcp6 0 0 :::10256 :::* LISTEN 5629/kube-proxy
Does anyone else know a solution for how to fix this?
failed to create listener: failed to listen on 0.0.0.0:10252: listen tcp 0.0.0.0:10252: bind: address already in use
According to the error message, port 10252 is in use, so you need to stop whatever is listening on this port. You can do that by running:
fuser -k 10252/tcp
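Before killing it blindly, it may be worth identifying which process owns the port first, for example (either tool works):
sudo ss -tlnp 'sport = :10252'
sudo lsof -i :10252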

Configure Kafka to expose JMX only on 127.0.0.1

I'm struggling to configure Kafka's JMX to be exposed only on localhost. By default, when I start Kafka, it exposes three ports, two of which are automatically bound to 0.0.0.0, meaning that they're accessible to everyone.
I managed to bind the broker itself to 127.0.0.1 (because I only need it locally), but the JMX ports are really giving me headaches.
I have the following env vars defined:
export JMX_PORT=${JMX_PORT:-9999}
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote=true -Djava.rmi.server.hostname=127.0.0.1 -Djava.net.preferIPv4Stack=true"
If I now look at the bound ports/ips, I see this:
$ netstat -tulpn | grep 9864
tcp 0 0 0.0.0.0:9999 0.0.0.0:* LISTEN 9864/java
tcp 0 0 0.0.0.0:44895 0.0.0.0:* LISTEN 9864/java
tcp 0 0 127.0.0.1:9092 0.0.0.0:* LISTEN 9864/java
meaning that JMX listens on 0.0.0.0, and there's even another open port, 44895, whose purpose I don't know.
What I'd like to achieve is that Kafka ports are only opened on 127.0.0.1. Can anybody give a hint? Thanks in advance!
EDIT:
I was partially successful by adding -Dcom.sun.management.jmxremote.host=localhost, but there's still one open port exposed on 0.0.0.0:
$ netstat -tulpn | grep 12789
tcp 0 0 127.0.0.1:9999 0.0.0.0:* LISTEN 12789/java
tcp 0 0 0.0.0.0:43513 0.0.0.0:* LISTEN 12789/java
tcp 0 0 127.0.0.1:9092 0.0.0.0:* LISTEN 12789/java
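Putting the pieces together, the full option set discussed so far would look roughly like this (a sketch only; the behaviour of the jmxremote.host property can vary between JVM versions):
export JMX_PORT=${JMX_PORT:-9999}
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT -Dcom.sun.management.jmxremote.host=127.0.0.1 -Djava.rmi.server.hostname=127.0.0.1 -Djava.net.preferIPv4Stack=true"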
I just managed to make Kafka listen only on the defined broker port, and to disable JMX altogether:
export KAFKA_JMX_OPTS="-Djava.rmi.server.hostname=localhost -Djava.net.preferIPv4Stack=true"
When starting a fresh Kafka 1.1.0 broker on Ubuntu, I initially saw two open ports:
$ netstat -tulpn | grep 19894
tcp6 0 0 :::40487 :::* LISTEN 19894/java
tcp6 0 0 127.0.0.1:9092 :::* LISTEN 19894/java
After setting the above environment variable in the kafka-server-start.sh file, the second port is no longer opened:
$ netstat -tulpn | grep :9092
tcp 0 0 127.0.0.1:9092 0.0.0.0:* LISTEN 20345/java
$ netstat -tulpn | grep 20345
tcp 0 0 127.0.0.1:9092 0.0.0.0:* LISTEN 20345/java
Just
export KAFKA_JMX_OPTS="-Djava.rmi.server.hostname=localhost"
is enough.
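To double-check afterwards which ports the broker JVM still has open, something like this should work (a sketch; it assumes the broker runs with the standard kafka.Kafka main class so pgrep can find it):
sudo netstat -tulpn | grep "$(pgrep -f kafka.Kafka)"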

Cannot curl kubelet read-only port

I have a Heapster pod running on one of the nodes in my Kubernetes cluster. It is able to get http://<node-with-heapster-pod>:10255/stats/summary just fine, but whenever it runs the same GET request against another node, it cannot. When I run curl from within any given node I can access that port, but when I curl any node from another machine I get the following error:
Failed to connect to 128.180.120.229 port 10255: No route to host
The following is the netstat output for all ports on which the kubelet is listening:
netstat -ap | grep -i "listen" | grep "kubelet"
tcp 0 0 localhost:10248 0.0.0.0:* LISTEN 7562/kubelet
tcp6 0 0 [::]:4194 [::]:* LISTEN 7562/kubelet
tcp6 0 0 [::]:10250 [::]:* LISTEN 7562/kubelet
tcp6 0 0 [::]:10255 [::]:* LISTEN 7562/kubelet
unix 2 [ ACC ] STREAM LISTENING 621349 7562/kubelet /var/run/dockershim.sock
I apologize for the messy last column. Any ideas why this may be? My iptables rules are set up to accept all incoming connections, and any node can reach port 10250 fine, just not 10255.
You may not have ip_forward enabled on your system. Can you check this setting?
sysctl -n net.ipv4.ip_forward
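If that prints 0, forwarding can be turned on like this (a sketch; add the setting to /etc/sysctl.conf or a file under /etc/sysctl.d/ to make it persistent):
sudo sysctl -w net.ipv4.ip_forward=1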
If anybody still cares, port 10255 is the kubelet's read-only port and may or may not be configured. You can confirm this by accessing the worker node in question and then looking at the kubelet's startup command.
systemctl status kubelet-worker.service
Some on-prem Kubernetes solutions set this to 0, as mentioned below:
https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
--read-only-port int32 The read-only port for the Kubelet to serve on with no authentication/authorization (set to 0 to disable) (default 10255) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
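A quick way to check both on the node itself (a sketch; /var/lib/kubelet/config.yaml is the usual kubeadm default path and may differ in other setups):
# look for either the deprecated flag or a --config file on the kubelet command line
ps -ef | grep '[k]ubelet' | tr ' ' '\n' | grep -E 'read-only-port|config'
# then check the config file for the readOnlyPort field
grep -i readOnlyPort /var/lib/kubelet/config.yaml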