Minishift Kubernetes Dashboard throws error: services "kubernetes-dashboard" not found - kubernetes

I am attempting to monitor the performance of my pods within Minishift and tried to implement the Kubernetes Dashboard (https://github.com/kubernetes/dashboard), following all instructions.
It creates the kubernetes-dashboard project (separate from the Node.js project I am attempting to monitor), but when I run kubectl proxy and access the URL (http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/) it gives the following error.
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services \"kubernetes-dashboard\" not found",
  "reason": "NotFound",
  "details": {
    "name": "kubernetes-dashboard",
    "kind": "services"
  },
  "code": 404
}

If you attempt to use the dashboard in minikube, the situation is similar to minishift: you don't deploy the dashboard yourself, since minikube has integrated support for it.
To access the dashboard you use this command:
minikube dashboard
This will enable the dashboard add-on and open the proxy in the default web browser. If you just want the URL, the dashboard command can also simply emit one:
minikube dashboard --url
Coming back to minishift, you might want to check out the minishift add-ons and its kubernetes dashboard add-on.
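If you go down that route, a rough sketch from the minishift CLI might look like the following (treat <dashboard-addon-name> as a placeholder; the exact add-on name depends on which add-ons you have installed):
# list the add-ons minishift currently knows about
minishift addons list
# apply the dashboard add-on once it is installed and enabled
minishift addons apply <dashboard-addon-name>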

As described by acid_fuji, you can enable kubernetes dashboard via minikube addons:
minikube addons list
minikube addons enable dashboard
# in addition, to get information about CPU/memory usage, enable the metrics-server add-on
minikube addons enable metrics-server
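Once metrics-server is running, CPU/memory figures should show up in the dashboard; you can also sanity-check them from the CLI, for example:
kubectl top nodes
kubectl top pods -n <your-namespace>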
If you are trying to install the dashboard manually, please refer to the docs:
1. Apply manifest by running:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
2. Make sure that your deployment, service and corresponding endpoints were deployed by running:
kubectl get all -n kubernetes-dashboard
3. Create a ServiceAccount and ClusterRoleBinding, and obtain a Bearer Token to access the kubernetes dashboard:
Note:
IMPORTANT: Make sure that you know what you are doing before proceeding. Granting admin privileges to Dashboard's Service Account might be a security risk
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
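Assuming you save the manifest above to a file (the name dashboard-adminuser.yaml below is only illustrative), apply it with:
kubectl apply -f dashboard-adminuser.yaml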
Getting a Bearer Token:
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
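On newer clusters (Kubernetes 1.24+), token secrets are no longer created automatically for service accounts, so the grep above may return nothing; in that case you can request a short-lived token instead:
kubectl -n kubernetes-dashboard create token admin-user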
4. Additional resources:
Accessing Dashboard
Access control

Related

Enable REST APIs for GKE deployment, service and others

I am trying to deploy applications on GKE using REST APIs. However, the GKE documentation is all mixed up and unclear as to how to enable the Kubernetes REST API access.
Does anyone here have a clear idea about how to create a Deployment on Kubernetes cluster on Google Cloud?
If yes, I would love to know the detailed steps for enabling the same. Currently, this is what I get.
A GET call to https://xx.xx.xx.xx/apis/apps/v1/namespaces/default/deployments/nginx-1 gives the below JSON output despite a valid authorization token:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "deployments.apps \"nginx-1\" is forbidden: User \"system:serviceaccount:default:default\" cannot get resource \"deployments\" in API group \"apps\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "name": "nginx-1",
    "group": "apps",
    "kind": "deployments"
  },
  "code": 403
}
Administration APIs, however, seem to be enabled.
Following the instructions at this link and executing the commands below:
# Check all possible clusters, as your .KUBECONFIG may have multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
# Select name of cluster you want to interact with from above output:
export CLUSTER_NAME="some_server_name"
# Point to the API server referring the cluster name
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
# Gets the token value
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 --decode)
# Explore the API with TOKEN
curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
gives the desired output.
The default service account in the default namespace does not have an RBAC role allowing the get verb on deployment resources in the default namespace.
Use the Role and RoleBinding below to give the service account the necessary permission.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: deployment-reader
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-deployment
  namespace: default
subjects:
# You can specify more than one "subject"
- kind: ServiceAccount
  name: default # "name" is case sensitive
  namespace: default
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role # this must be Role or ClusterRole
  name: deployment-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
To verify the permission
kubectl auth can-i get deployments --as=system:serviceaccount:default:default -n default
yes
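With the Role and RoleBinding above in place, the REST call from the question should now succeed; a sketch, reusing the APISERVER and TOKEN variables set earlier:
curl -X GET $APISERVER/apis/apps/v1/namespaces/default/deployments/nginx-1 --header "Authorization: Bearer $TOKEN" --insecure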

Unable to sign in to kubernetes dashboard?

After following all steps to create the service account and role binding, I am unable to sign in.
kubectl create serviceaccount dashboard -n default
kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard
and applying this YAML file:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
Getting the following error when I click on sign in for the config service:
{
  "status": 401,
  "plugins": [],
  "errors": [
    {
      "ErrStatus": {
        "metadata": {},
        "status": "Failure",
        "message": "MSG_LOGIN_UNAUTHORIZED_ERROR",
        "reason": "Unauthorized",
        "code": 401
      }
    }
  ]
}
Roman Marusyk is on the right path. This is what you have to do.
$ kubectl get secret -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
default-token-rqw2l                kubernetes.io/service-account-token   3      9m8s
kubernetes-dashboard-certs         Opaque                                0      9m8s
kubernetes-dashboard-csrf          Opaque                                1      9m8s
kubernetes-dashboard-key-holder    Opaque                                2      9m8s
kubernetes-dashboard-token-5tqvd   kubernetes.io/service-account-token   3      9m8s
From here you will get the kubernetes-dashboard-token-5tqvd, which is the secret holding the token to access the dashboard.
$ kubectl get secret kubernetes-dashboard-token-5tqvd -n kubernetes-dashboard -oyaml | grep -m 1 token | awk -F : '{print $2}'
ZXlK...
Now you will need to decode it:
echo -n ZXlK... | base64 -d
eyJhb...
Introduce the token in the sign-in page:
And you are in.
UPDATE
You can also do the 2 steps in one to get the secret and decode it:
$ kubectl get secret kubernetes-dashboard-token-5tqvd -n kubernetes-dashboard -oyaml | grep -m 1 token | awk -F ' ' '{print $2}' | base64 -d
Regarding namespace: kube-system in your manifest: usually, the Kubernetes Dashboard runs in the kubernetes-dashboard namespace.
Please ensure that you have the correct namespace for ClusterRoleBinding and ServiceAccount.
If you have been installing the dashboard according to the official doc, then it was installed to kubernetes-dashboard namespace.
You can check this post for reference.
EDIT 22-May-2020
The issue was: "I'm accessing the dashboard UI through the HTTP protocol from an external machine and using the default ClusterIP."
Indeed, the Dashboard should not be exposed publicly over HTTP. For domains accessed over HTTP it will not be possible to sign in; nothing will happen after clicking the 'Sign In' button on the login page.
That is discussed in detail in this post on StackOverflow. There is a workaround for that behavior discussed in the same thread; basically it is the same thing I've been referring to above with "You can check this post for reference."

Kubernetes namespace default service account

If not specified, pods are run under a default service account.
How can I check what the default service account is authorized to do?
Do we need it to be mounted there with every pod?
If not, how can we disable this behavior on the namespace level or cluster level.
What other use cases should the default service account handle?
Can we use it as a service account to create and manage the Kubernetes deployments in a namespace? For example we will not use real user accounts to create things in the cluster because users come and go.
Environment: Kubernetes 1.12 , with RBAC
A default service account is automatically created for each namespace.
kubectl get serviceaccount
NAME      SECRETS   AGE
default   1         1d
Service accounts can be added when required. Each pod is associated with exactly one service account but multiple pods can use the same service account.
A pod can only use one service account from the same namespace.
Service accounts are assigned to a pod by specifying the account's name in the pod manifest. If you don't assign one explicitly, the pod will use the default service account.
The default permissions for a service account don't allow it to list or modify any resources. The default service account isn't allowed to view cluster state, let alone modify it in any way.
By default, the default service account in a namespace has no permissions other than those of an unauthenticated user.
Therefore pods by default can't even view cluster state. It's up to you to grant them appropriate permissions to do that.
kubectl exec -it test -n foo sh
/ # curl localhost:8001/api/v1/namespaces/foo/services
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services is forbidden: User \"system:serviceaccount:foo:default\" cannot list resource \"services\" in API group \"\" in the namespace \"foo\"",
  "reason": "Forbidden",
  "details": {
    "kind": "services"
  },
  "code": 403
}
As can be seen above, the default service account cannot list services.
But when given a proper Role and RoleBinding like the ones below:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: foo-role
  namespace: foo
rules:
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: test-foo
  namespace: foo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: foo-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: foo
Now I am able to list the service resource:
kubectl exec -it test -n foo sh
/ # curl localhost:8001/api/v1/namespaces/foo/services
{
  "kind": "ServiceList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/bar/services",
    "resourceVersion": "457324"
  },
  "items": []
}
Giving all your service accounts the cluster-admin ClusterRole is a bad idea. It is best to give everyone only the permissions they need to do their job and not a single permission more.
It's a good idea to create a specific service account for each pod and then associate it with a tailor-made Role or ClusterRole through a RoleBinding.
If one of your pods only needs to read pods while another also needs to modify them, then create two different service accounts and make those pods use them by specifying the serviceAccountName property in the pod spec.
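A minimal sketch of what that looks like in a pod spec (the names here are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: pod-reader-pod
  namespace: foo
spec:
  serviceAccountName: pod-reader-sa   # the tailor-made service account instead of "default"
  containers:
  - name: app
    image: nginx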
You can refer the below link for an in-depth explanation.
Service account example with roles
You can check kubectl explain serviceaccount.automountServiceAccountToken and edit the service account
kubectl edit serviceaccount default -o yaml
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-10-14T08:26:37Z
  name: default
  namespace: default
  resourceVersion: "459688"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: de71e624-cf8a-11e8-abce-0642c77524e8
secrets:
- name: default-token-q66j4
Once this change is done, whichever pod you spawn doesn't have a service account token mounted, as can be seen below.
kubectl exec tp -it bash
root@tp:/# cd /var/run/secrets/kubernetes.io/serviceaccount
bash: cd: /var/run/secrets/kubernetes.io/serviceaccount: No such file or directory
An application/deployment can run with a service account other than default by specifying it in the serviceAccountName field of a deployment configuration.
What a service account, or any other user, can do is determined by the roles it is given (bound to); see RoleBindings or ClusterRoleBindings. The allowed verbs are defined per a role's apiGroups and resources under its rules definitions.
The default service account doesn't seem to be given any roles by default. It is possible to grant a role to the default service account as described in #2 here.
According to this, "...In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account".
HTH
How can I check what the default service account is authorized to do?
There isn't an easy way, but auth can-i may be helpful. Eg
$ kubectl auth can-i get pods --as=system:serviceaccount:default:default
no
For users there is auth can-i --list but this does not seem to work with --as which I suspect is a bug. In any case, you can run the above commands on a few verbs and the answer will be no in all cases, but I only tried a few. Conclusion: it seems that the default service account has no permissions by default (since in the cluster where I checked, we have not configured it, AFAICT).
Do we need it to be mounted there with every pod?
Not sure what the question means.
If not, how can we disable this behavior on the namespace level or cluster level.
You can set automountServiceAccountToken: false on a service account or on an individual pod. Service accounts are per namespace, so when done on a service account, any pods in that namespace that use this account will be affected by that setting.
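A small sketch of the pod-level form (the pod-level setting takes precedence over the service account's setting; the pod name is illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  automountServiceAccountToken: false
  containers:
  - name: app
    image: nginx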
What other use cases the default service account should be handling?
The default service account is a fallback, it is the SA that gets used if a pod does not specify one. So the default service account should have no privileges whatsoever. Why would a pod need to talk to the kube API by default?
Can we use it as a service account to create and manage the Kubernetes deployments in a namespace?
I don't recommend that, see the previous answer. Instead, you should create a service account (bound to an appropriate Role/ClusterRole) for each pod type that needs access to the API, following the principle of least privilege. All other pod types can use the default service account, which should not mount the SA token automatically and should not be bound to any role.
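As a quick sketch of that pattern with plain kubectl (all names are illustrative):
# dedicated service account for pods that only need to read pods
kubectl create serviceaccount pod-reader-sa -n myapp
kubectl create role pod-reader --verb=get,list,watch --resource=pods -n myapp
kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=myapp:pod-reader-sa -n myapp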
kubectl auth can-i --list --as=system:serviceaccount:<namespace>:<serviceaccount> -n <namespace>
As a simple example, to check the default service account in the testns namespace:
kubectl auth can-i --list --as=system:serviceaccount:testns:default -n testns
Resources                                       Non-Resource URLs                      Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                                     []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                                     []               [create]
                                                [/.well-known/openid-configuration]    []               [get]
                                                [/api/*]                               []               [get]
                                                [/api]                                 []               [get]
                                                [ ... ]
                                                [/readyz]                              []               [get]
                                                [/version/]                            []               [get]
                                                [/version/]                            []               [get]
                                                [/version]                             []               [get]
                                                [/version]                             []               [get]

Service account role bindings not working for API access

I am working on developing tools to interact with Kubernetes. I have OpenShift setup with the allow all authentication provider. I can log into the web console as I would expect.
I have also been able to setup a service account and assign a cluster role binding to the service account user. Despite this, when I access the REST API using a token of that service account, I get forbidden.
Here is what happens when I try to setup role bindings via OpenShift commands:
[root@host1 ~]# oadm policy add-cluster-role-to-user view em7 --namespace=default
[root@host1 ~]# oadm policy add-cluster-role-to-user cluster-admin em7 --namespace=default
[root@host1 ~]# oadm policy add-cluster-role-to-user cluster-reader em7 --namespace=default
[root@host1 ~]# oc get secrets | grep em7
em7-dockercfg-hnl6m   kubernetes.io/dockercfg               1   18h
em7-token-g9ujh       kubernetes.io/service-account-token   4   18h
em7-token-rgsbz       kubernetes.io/service-account-token   4   18h
TOKEN=`oc describe secret em7-token-g9ujh | grep token: | awk '{ print $2 }'`
[root@host1 ~]# curl -kD - -H "Authorization: Bearer $TOKEN" https://localhost:8443/api/v1/pods
HTTP/1.1 403 Forbidden
Cache-Control: no-store
Content-Type: application/json
Date: Tue, 19 Jun 2018 15:36:40 GMT
Content-Length: 260
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "User \"system:serviceaccount:default:em7\" cannot list all pods in the cluster",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}
I can also try using the yaml file from (Openshift Admin Token):
# creates the service account "ns-reader"
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ns-reader
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: global-reader
rules:
- apiGroups: [""]
  # add other resources you wish to read
  resources: ["pods", "secrets"]
  verbs: ["get", "watch", "list"]
---
# This cluster role binding allows service account "ns-reader" to read pods in all available namespaces
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-ns
subjects:
- kind: ServiceAccount
  name: ns-reader
  namespace: default
roleRef:
  kind: ClusterRole
  name: global-reader
  apiGroup: rbac.authorization.k8s.io
When I run this, I get the following error:
[root@host1 ~]# kubectl create -f stack_overflow_49667238.yaml
error validating "stack_overflow_49667238.yaml": error validating data: API version "rbac.authorization.k8s.io/v1" isn't supported, only supports API versions ["federation/v1beta1" "v1" "authentication.k8s.io/v1beta1" "componentconfig/v1alpha1" "policy/v1alpha1" "rbac.authorization.k8s.io/v1alpha1" "apps/v1alpha1" "authorization.k8s.io/v1beta1" "autoscaling/v1" "extensions/v1beta1" "batch/v1" "batch/v2alpha1"]; if you choose to ignore these errors, turn validation off with --validate=false
I have tried several different API versions from the list but they all failed in a similar way.
oadm policy add-cluster-role-to-user view em7 grants the role to the user named em7.
You need to grant permissions to the service account instead, e.g. oadm policy add-cluster-role-to-user view system:serviceaccount:default:em7
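For completeness, a sketch of the end-to-end flow (reusing the TOKEN variable from the question; whether the view role is enough for a cluster-wide pod list depends on how the binding is scoped):
oadm policy add-cluster-role-to-user view system:serviceaccount:default:em7
curl -kD - -H "Authorization: Bearer $TOKEN" https://localhost:8443/api/v1/pods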

API requests other than /api/v1 return 403 "Forbidden"

Hi, I installed a fresh kubernetes cluster on Ubuntu 16.04 using this tutorial: https://blog.alexellis.io/kubernetes-in-10-minutes/
However, as soon as I try to access my API (for example: https://[server-ip]:6443/api/v1/namespaces) I get the following message
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "namespaces is forbidden: User \"system:bootstrap:a916af\" cannot list namespaces at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "namespaces"
  },
  "code": 403
}
Does anyone know how to fix this or what I am doing wrong?
While I haven't run through that tutorial, the service account with which you're making the request doesn't have access to cluster-level information, like listing namespaces. RBAC (Role-Based Access Control) binds users with either a Role or a ClusterRole, which grant them different permissions. My guess is that service account shouldn't ever need to know what other namespaces exist, therefore doesn't have access to list them.
In terms of "fixing" this, aside from creating a serviceaccount/user with correct permissions, that tutorial makes several references to a config file stored at $HOME/.kube/config, which stores the credentials for a user that should have access to cluster-level resources, including listing namespaces. You could start there.
I solved it by binding the service account system:serviceaccount:default:default (which is the default account bound to a Pod) to the cluster-admin role. Just create a YAML file (named e.g. fabric8-rbac.yaml) with the following contents:
# NOTE: The service account `default:default` already exists in the k8s cluster.
# You can create a new account like this:
#---
#apiVersion: v1
#kind: ServiceAccount
#metadata:
#  name: <new-account-name>
#  namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: fabric8-rbac
subjects:
- kind: ServiceAccount
  # Reference to upper's `metadata.name`
  name: default
  # Reference to upper's `metadata.namespace`
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f fabric8-rbac.yaml
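Once applied, you can sanity-check that the default service account now has the access it previously lacked, for example:
kubectl auth can-i list namespaces --as=system:serviceaccount:default:default
yes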