Kubernetes Dashboard does not accept service account token over HTTP: "Authentication failed. Please try again."

I have installed Kubernetes Dashboard on a Kubernetes 1.13 cluster as described here:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
I've also configured the dashboard to serve its content insecurely because I want to expose it via an Ingress at https://my-cluster/dashboard where the TLS connection will be terminated at the ingress controller.
I have edited service/kubernetes-dashboard in namespace kube-system and changed ports from {port:443, targetPort:8443} to {port:80, protocol:TCP, targetPort:9090}.
I have edited deployment/kubernetes-dashboard in namespace kube-system and changed
ports from {containerPort: 8443, protocol:TCP} to {containerPort: 9090, protocol:TCP} (and the livenessProbe analogously). I have also changed args from [ --auto-generate-certificates ] to [ --enable-insecure-login ].
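For reference, after these edits the relevant fragments look roughly like this (a sketch only, abbreviated from the stock v1.10.1 manifest; surrounding fields are omitted):

# service/kubernetes-dashboard (kube-system) -- abbreviated sketch
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090

# deployment/kubernetes-dashboard (kube-system) -- abbreviated sketch
      containers:
      - name: kubernetes-dashboard
        args:
        - --enable-insecure-login
        ports:
        - containerPort: 9090
          protocol: TCP
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 9090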
This allows me to contact the dashboard from a cluster node via HTTP at the service's cluster IP address and port 80 (no Ingress is configured yet).
I have also created a sample user as explained here and extracted its token. The token works e.g. in kubectl --token $token get pod --all-namespaces, so it apparently possesses cluster-admin privileges. However, if I enter the same token into the dashboard's login screen I get "Authentication failed. Please try again.".
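The sample user was created roughly along these lines (a sketch of the usual admin-user ServiceAccount bound to cluster-admin, as in that guide; the name admin-user is just its example):

# Create the ServiceAccount and bind it to cluster-admin
kubectl -n kube-system create serviceaccount admin-user
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kube-system:admin-user
# Read the token from the ServiceAccount's secret
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')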
What could be the reason? How can I further diagnose and solve the issue? (The dashboard's log does not provide any help at this point.)
UPDATE: If I keep the dashboard's standard configuration (i.e. secure access over HTTPS), the same token is accepted.

Related

kubectl get nodes from pod (NetworkPolicy)

I am trying to run kubectl from Python inside a pod to get the nodes.
How should I set up a Network Policy for this pod?
I tried to allow traffic from my namespace to the kube-system namespace, but it did not work.
Thanks.
As per Accessing the API from a Pod:
The recommended way to locate the apiserver within the pod is with the
kubernetes.default.svc DNS name, which resolves to a Service IP which
in turn will be routed to an apiserver.
The recommended way to authenticate to the apiserver is with a service
account credential. By default, a pod is associated with a service account, and a credential (token) for that service account is placed into the filesystem tree of each container in that pod, at /var/run/secrets/kubernetes.io/serviceaccount/token.
All you need is a service account with sufficient privileges; then use the API server DNS name as stated above. Example:
# Export the token value
export TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Use kubectl to talk internally with the API server
kubectl --insecure-skip-tls-verify=true \
--server="https://kubernetes.default.svc:443" \
--token="${TOKEN}" \
get nodes
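If you would rather not skip TLS verification, the same mount also contains the cluster CA bundle, so a variant along these lines should work too (a sketch):

# Same call, but verifying the API server certificate with the mounted CA bundle
kubectl --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
--server="https://kubernetes.default.svc:443" \
--token="${TOKEN}" \
get nodes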
Your Network Policy may be restrictive and prevent this type of call; by default, however, the above should work.
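If such a policy is in place, an egress rule along these lines should let the pod reach the API server and resolve its DNS name (a sketch; the label app: my-app and the port 443 are assumptions you would adapt to your cluster):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apiserver-egress
spec:
  podSelector:
    matchLabels:
      app: my-app            # assumption: label of the pod running kubectl
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: TCP
      port: 443              # API server; adapt if your cluster uses a different port
    - protocol: UDP
      port: 53               # DNS, needed to resolve kubernetes.default.svc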

Grafana running in Kubernetes Slack Webhook Error 502

I set up Grafana to run in GKE (Kubernetes) with a service and the default Ingress controller to open it to the internet. Everything is working without any issues.
After creating some dashboards I wanted to set up Slack alerting using the Slack webhook. After filling out all the details I received a 502 Bad Gateway error.
I have set up a second service to open port 443 (the default Slack webhook port) and exposed it with kubectl expose deployment --type=NodePort --port=443; I have also tried --type=LoadBalancer, with no luck.
I've also tried setting up a second Ingress pointing at the second service, but then I run into readinessProbe issues.
Has anyone had the same issue, and if so, how was it resolved?
Network Policy was enabled on the cluster and there were cluster-wide policies denying outgoing traffic. After setting up network policies for the pods in my own namespace, I was able to connect without any issues.
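Such a namespace-level policy can look roughly like this (a sketch; the namespace monitoring is a placeholder for wherever Grafana runs):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
  namespace: monitoring      # placeholder: the namespace Grafana runs in
spec:
  podSelector: {}            # all pods in this namespace
  policyTypes:
  - Egress
  egress:
  - {}                       # allow all outbound traffic (including hooks.slack.com:443)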

Is it possible to access the Kubernetes dashboard directly in the browser in Azure Kubernetes Service (AKS) (without additional commands)?

When I run kubectl cluster-info in AKS I get this:
kubernetes-dashboard is running at https://clusterUrl/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
This URL is reachable in the browser but only returns 401.
I am wondering if it is possible to log in to Azure in some way so that this URL becomes accessible? It would be quite convenient to access it directly.
Using kubectl proxy, you can access the dashboard:
[root@ae740dbd82bf /]# kubectl proxy
Starting to serve on 127.0.0.1:8001
Open your browser and navigate to:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
If you want to access it externally, you have two choices:
Change the service type to NodePort. Then you can access the dashboard on any CLUSTER_HOST:NODEPORT.
Deploy an ingress controller and define a rule mapping the dashboard DNS name to the dashboard Kubernetes service, as in the sketch below.
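For the second option, a minimal Ingress rule could look like this (a sketch using the extensions/v1beta1 API of that era and an NGINX ingress controller; the host name is a placeholder, and the backend-protocol annotation is needed because the dashboard serves HTTPS on 443 by default):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: dashboard.example.com        # placeholder: your dashboard DNS name
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443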

Accessing the Kubernetes API with kubectl fails after enabling RBAC

I'm trying to enable RBAC on my cluster and added the following lines to kube-apiserver.yml:
- --authorization-mode=RBAC
- --runtime-config=rbac.authorization.k8s.io/v1beta1
- --authorization-rbac-super-user=admin
and I ran systemctl restart kubelet.
The apiserver starts successfully, but I'm not able to run kubectl commands and I get this error:
kubectl get po
Error from server (Forbidden): pods is forbidden: User "kubectl" cannot list pods in the namespace "default"
Where am I going wrong, or should I create some roles for the kubectl user? If so, how is that possible?
Error from server (Forbidden): pods is forbidden: User "kubectl" cannot list pods in the namespace "default"
You are using the user kubectl to access the cluster via the kubectl utility, but you set --authorization-rbac-super-user=admin, which means your super-user is admin.
To fix the issue, launch kube-apiserver with the super-user "kubectl" instead of "admin".
Just update the value of the option: --authorization-rbac-super-user=kubectl.
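In other words, the flag block from the question would become the following (only the last line changes):
- --authorization-mode=RBAC
- --runtime-config=rbac.authorization.k8s.io/v1beta1
- --authorization-rbac-super-user=kubectl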
Old question, but for Google searchers: you can use the insecure port.
If your API server runs with the insecure port enabled (--insecure-port), you can also make API calls via that port, which does not enforce authentication or authorization.
Source: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping
So add --insecure-port=8080 to your kube-apiserver options and then restart it.
Then run:
kubectl create clusterrolebinding kubectl-cluster-admin-binding --clusterrole=cluster-admin --user=kubectl
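Note that this only works because the request can go over the insecure port, which skips authentication and authorization; if your kubeconfig does not already point there, target it explicitly (a sketch, assuming you run it on the master where the insecure port listens on localhost):

kubectl --server=http://localhost:8080 create clusterrolebinding kubectl-cluster-admin-binding --clusterrole=cluster-admin --user=kubectl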
Then turn the insecure-port off.

I have deployed a Kubernetes cluster. The issue I have is that the dashboard is not accessible from an external desktop system

I have deployed a Kubernetes cluster. The issue I have is that the dashboard is not accessible from an external desktop system.
Following is my setup:
Two VMs with the cluster deployed, one master and one node.
The dashboard is running without any issue and kube-dns is also working as expected.
The Kubernetes version is 1.7.
Issue: when trying to access the dashboard externally through kubectl proxy, I get an unauthorized response.
This is with the RBAC role and role bindings enabled.
How do I configure the cluster for HTTP browser access to the dashboard from an external system?
Any hint/suggestions are most welcome.
kubectl proxy is not working on versions > 1.7.
Try this:
Copy the ~/.kube/config file to your desktop,
then run kubectl like this:
# Find the dashboard pod name (the label selector matches a Helm-style dashboard release)
export POD_NAME=$(kubectl --kubeconfig=config get pods -n kube-system -l "app=kubernetes-dashboard,release=kubernetes-dashboard" -o jsonpath="{.items[0].metadata.name}")
echo http://127.0.0.1:9090/
# Forward local port 9090 to port 9090 of the dashboard pod
kubectl --kubeconfig=config -n kube-system port-forward $POD_NAME 9090:9090
Then access the UI at http://127.0.0.1:9090.
Hope this helps.
If kubectl proxy gives the Unauthorized error, there can be 2 reasons:
Your user cert doesn't have the appropriate permissions. This is unlikely since you successfully deployed kube-dns and the dashboard.
Kubelet authn/authz is enabled and it's not set up correctly. See the answer to my question.