How to send HTTP requests to the kubelet API server - kubernetes

I have a kubernetes cluster on EKS, in which I would like to send requests to the kubelet server (not the kube-apiserver, but the kubelet, which runs on each of the worker nodes).
My understanding is that the kubelet runs an HTTPS server on port 10250, so I opened the firewall (security group) on one of the worker nodes for that port so I can reach it from my IP. Example of a request:
curl -k https://public-ip-of-worker-node:10250/metrics/probes
but I get a 401 in response. I guess this is expected, as I am not authenticating in any way.
So, how can I authenticate to the kubelet server? I can communicate without problem with the kube-apiserver using kubectl, so I do have enough permissions from the IAM side.

From the docs, start the kubelet with the --authentication-token-webhook and --kubeconfig flags.
Then you can create a service account, define a role and role binding for it (cluster-scoped, since the kubelet authorizes requests against node subresources), and use the service account's bearer token with the curl command to call the kubelet API.
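A minimal sketch of those steps, assuming a service account named kubelet-api-client in the default namespace and a recent kubectl that supports kubectl create token (all names are placeholders):
# Create the service account
kubectl create serviceaccount kubelet-api-client
# Grant it access to the kubelet API subresources
kubectl create clusterrole kubelet-api-access --verb=get --verb=create \
  --resource=nodes/proxy --resource=nodes/stats --resource=nodes/metrics --resource=nodes/log
kubectl create clusterrolebinding kubelet-api-access \
  --clusterrole=kubelet-api-access --serviceaccount=default:kubelet-api-client
# Get a token for the service account (on older clusters, read it from the service account's secret instead)
TOKEN=$(kubectl create token kubelet-api-client)
# Call the kubelet with the bearer token
curl -k -H "Authorization: Bearer ${TOKEN}" https://public-ip-of-worker-node:10250/metrics/probes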

Related

Kubernetes pod response size limit

We have a Rancher-based Kubernetes cluster with Calico on OpenStack.
A Spring config server (server pod from here onwards) is deployed as a service.
The service is exposed on a nodeport.
Another pod is deployed with curl (client pod from here onwards).
Doing a curl from the client pod against the server pod's nodeport, using the IP of the node on which the server pod is running, gives a proper result.
Doing a curl from the client pod against the nodeport using another node's IP (where the server pod is not running) gives curl: (56) connection reset by peer for a bigger response.
Doing a curl from the client pod against the service and its port gives results for small data, but for a bigger response it again gives curl: (56).
If both client and server pods are running on the same node, the response is fine.
My understanding is:
No issues in the server pod, as I am able to get a response on the nodeport
No issues in the client pod/curl, as I am able to get a response from the nodeport
Service and pod linkage is fine, as it works well with small response sizes
When I say bigger response, I mean just 1 KB+
curl gives this kind of error when there is no reply from the server. For example:
$ curl http://youtube.com:443
curl: (52) Empty reply from server
Please recheck your proxy and firewall settings.

kubectl get nodes from pod (NetworkPolicy)

I am trying to run kubectl get nodes from Python inside a pod.
How should I set up a NetworkPolicy for this pod?
I tried allowing traffic between my namespace and the kube-system namespace, but it was not working.
Thanks.
As per Accessing the API from a Pod:
The recommended way to locate the apiserver within the pod is with the kubernetes.default.svc DNS name, which resolves to a Service IP which in turn will be routed to an apiserver.
The recommended way to authenticate to the apiserver is with a service account credential. By default, a pod is associated with a service account, and a credential (token) for that service account is placed into the filesystem tree of each container in that pod, at /var/run/secrets/kubernetes.io/serviceaccount/token.
All you need is a service account with enough privileges; then use the API server DNS name as stated above. Example:
# Export the token value
export TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Use kubectl to talk internally with the API server
kubectl --insecure-skip-tls-verify=true \
--server="https://kubernetes.default.svc:443" \
--token="${TOKEN}" \
get nodes
A NetworkPolicy may be restrictive enough to block this type of call; by default, however, the above should work.
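If a restrictive NetworkPolicy is in place, an egress rule like the following should let the pod reach DNS and the API server (a sketch; the policy name, namespace and port numbers are assumptions, check the actual apiserver endpoint port with kubectl get endpoints kubernetes):
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apiserver-egress
  namespace: my-namespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  # allow DNS so kubernetes.default.svc can be resolved
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  # allow the apiserver (service port 443, endpoint port often 6443)
  - ports:
    - protocol: TCP
      port: 443
    - protocol: TCP
      port: 6443
EOF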

Kubernetes Dashboard does not accept service account's token over HTTP: Authentication failed. Please try again

I have installed Kubernetes Dashboard on a Kubernetes 1.13 cluster as described here:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
I've also configured the dashboard to serve its content insecurely because I want to expose it via an Ingress at https://my-cluster/dashboard where the TLS connection will be terminated at the ingress controller.
I have edited service/kubernetes-dashboard in namespace kube-system and changed ports from {port:443, targetPort:8443} to {port:80, protocol:TCP, targetPort:9090}.
I have edited deployment/kubernetes-dashboard in namespace kube-system and changed
ports from {containerPort: 8443, protocol:TCP} to {containerPort: 9090, protocol:TCP} (and the livenessProbe analogously). I have also changed args from [ --auto-generate-certificates ] to [ --enable-insecure-login ].
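Roughly the same edits expressed as patch commands, in case that helps reproduce them (a sketch; the resource names follow the v1.10.1 manifest and the field paths assume the ports are the first entries in their lists):
kubectl -n kube-system patch service kubernetes-dashboard --type=json \
  -p='[{"op":"replace","path":"/spec/ports/0","value":{"port":80,"protocol":"TCP","targetPort":9090}}]'
kubectl -n kube-system patch deployment kubernetes-dashboard --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/ports/0/containerPort","value":9090},{"op":"replace","path":"/spec/template/spec/containers/0/args","value":["--enable-insecure-login"]}]'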
This allows me to contact the dashboard from a cluster node via HTTP at the service's cluster IP address and port 80 (no Ingress is configured yet).
I have also created a sample user as explained here and extracted its token. The token works e.g. in kubectl --token $token get pod --all-namespaces, so it apparently possesses cluster-admin privileges. However, if I enter the same token into the dashboards' login screen I get "Authentication failed. Please try again.".
What could be the reason? How can I further diagnose and solve the issue? (The dashboard's log does not provide any help at this point.)
UPDATE: If I keep the dashboard's standard configuration (i.e. secure access over HTTPS), the same token is accepted.

Kubernetes-dashboard pod is crashing again and again

I have installed and configured Kubernetes on my Ubuntu machine, following this document.
After deploying the Kubernetes dashboard with the command below, the container keeps crashing:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Started the Proxy using:
kubectl proxy --address='0.0.0.0' --accept-hosts='.*' --port=8001
Pod status:
kubectl get pods -o wide --all-namespaces
....
....
kube-system kubernetes-dashboard-64576d84bd-z6pff 0/1 CrashLoopBackOff 26 2h 192.168.162.87 kb-node <none>
Kubernetes system log:
root@KB-master:~# kubectl -n kube-system logs kubernetes-dashboard-64576d84bd-z6pff --follow
2018/09/11 09:27:03 Starting overwatch
2018/09/11 09:27:03 Using apiserver-host location: http://192.168.33.30:8001
2018/09/11 09:27:03 Skipping in-cluster config
2018/09/11 09:27:03 Using random key for csrf signing
2018/09/11 09:27:03 No request provided. Skipping authorization
2018/09/11 09:27:33 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get http://192.168.33.30:8001/version: dial tcp 192.168.33.30:8001: i/o timeout
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
I get the following message when I try to hit the link below in the browser:
URL: http://192.168.33.30:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Error: 'dial tcp 192.168.162.87:8443: connect: connection refused'
Trying to reach: 'https://192.168.162.87:8443/'
Can anyone help me with this?
http://192.168.33.30:8001 is not a legitimate API server URL. All communication with the API server uses TLS internally (https:// URL scheme). These communications are verified using the API server CA certificate and are authenticated by means of tokens signed by the same CA.
What you see is the result of a misconfiguration. At first sight it seems like you mixed pod, service and host networks.
Make sure you understand the difference between the host network, pod network and service network. These three networks cannot overlap. For example, --pod-network-cidr=192.168.0.0/16 must not include the IP address of your host; change it to 10.0.0.0/16 or something smaller if necessary.
After you have a clear overview of the network topology, run the setup again and everything will be configured correctly, including the Kubernetes CA.
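If the cluster was bootstrapped with kubeadm (as the linked document suggests), re-running the setup would look something like the following; this is only a sketch, and the advertise address and pod CIDR are assumptions based on the addresses above:
# On the master: tear down and re-initialize with a pod CIDR that does not
# overlap the 192.168.33.0/24 host network
sudo kubeadm reset
sudo kubeadm init --apiserver-advertise-address=192.168.33.30 --pod-network-cidr=10.0.0.0/16
# Then reinstall the CNI plugin and re-join the worker nodes using the
# new "kubeadm join ..." command printed by kubeadm init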

Does the kube-apiserver expect the presence of kube-proxy?

I've been running my Kubernetes masters separately from my Kubernetes nodes. So I have kube-apiserver, kube-scheduler and kube-controller-manager running on a server without kubelet, kube-proxy or flannel.
So far this has worked perfectly. However, today I attempted to set up the Web UI and access it through an API server. I got the following error when accessing http://kube-master-0:8080/ui:
Error: 'dial tcp 172.16.72.12:9090: getsockopt: connection timed out'
Trying to reach: 'http://172.16.72.12:9090/'
This suggests to me that the API server is trying to connect to the pod IP; since we don't have flannel or kube-proxy running on this host, the 172.16.72.12 IP will not be routed.
Am I expected to run kube-proxy and flannel on my API servers? Is there another way to let the API server proxy the UI?
It's not required, but it will certainly make your life easier.
The reason this isn't working is that kube-proxy isn't directing traffic to the service. Try kube-node:8080/ui (assuming you have exposed it with a NodePort configuration).
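For example, exposing the dashboard service as a NodePort so it can be reached directly through a worker node instead of the apiserver proxy might look like this (a sketch; the service name and namespace are assumptions about your setup):
# Switch the dashboard service to a NodePort
kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
# Find the allocated node port, then browse to http://<worker-node-ip>:<node-port>/
kubectl -n kube-system get service kubernetes-dashboard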
In theory, the kube-apiserver does not expect the presence of kube-proxy.
This means the kube-apiserver will run correctly, receive requests and handle them (mostly reads from and writes to etcd).
But if you want the whole cluster working, you will need other components running, for example:
if you want pods or deployments to be scheduled, the kube-scheduler should be running
if you want pods and containers to run on the nodes, the kubelet has to be running
if you want replication to be maintained, the controller-manager should be running
As for kube-proxy and flannel, they are critical parts of making networking work. Load balancing, Services, cross-host pod communication, etc. all depend on them.
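A quick way to check which of those components are actually up on an older cluster like this one (kubectl get componentstatuses is deprecated in newer releases but still works on clusters of this vintage):
# Control-plane component health
kubectl get componentstatuses
# Node and kubelet status
kubectl get nodes -o wide
# System pods (kube-proxy, flannel, dashboard, ...)
kubectl -n kube-system get pods -o wide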