Service account role bindings not working for API access - kubernetes

I am working on developing tools to interact with Kubernetes. I have OpenShift set up with the allow-all authentication provider, and I can log into the web console as I would expect.
I have also been able to set up a service account and assign a cluster role binding to the service account user. Despite this, when I access the REST API using a token of that service account, I get forbidden.
Here is what happens when I try to setup role bindings via OpenShift commands:
[root@host1 ~]# oadm policy add-cluster-role-to-user view em7 --namespace=default
[root@host1 ~]# oadm policy add-cluster-role-to-user cluster-admin em7 --namespace=default
[root@host1 ~]# oadm policy add-cluster-role-to-user cluster-reader em7 --namespace=default
[root@host1 ~]# oc get secrets | grep em7
em7-dockercfg-hnl6m   kubernetes.io/dockercfg               1   18h
em7-token-g9ujh       kubernetes.io/service-account-token   4   18h
em7-token-rgsbz       kubernetes.io/service-account-token   4   18h
[root@host1 ~]# TOKEN=`oc describe secret em7-token-g9ujh | grep token: | awk '{ print $2 }'`
[root@host1 ~]# curl -kD - -H "Authorization: Bearer $TOKEN" https://localhost:8443/api/v1/pods
HTTP/1.1 403 Forbidden
Cache-Control: no-store
Content-Type: application/json
Date: Tue, 19 Jun 2018 15:36:40 GMT
Content-Length: 260
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "User \"system:serviceaccount:default:em7\" cannot list all pods in the cluster",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}
I have also tried using the YAML file from (Openshift Admin Token):
# creates the service account "ns-reader"
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ns-reader
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: global-reader
rules:
- apiGroups: [""]
  # add other resources you wish to read
  resources: ["pods", "secrets"]
  verbs: ["get", "watch", "list"]
---
# This cluster role binding allows service account "ns-reader" to read pods in all available namespaces
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-ns
subjects:
- kind: ServiceAccount
  name: ns-reader
  namespace: default
roleRef:
  kind: ClusterRole
  name: global-reader
  apiGroup: rbac.authorization.k8s.io
When I run this, I get the following error:
[root@host1 ~]# kubectl create -f stack_overflow_49667238.yaml
error validating "stack_overflow_49667238.yaml": error validating data: API version "rbac.authorization.k8s.io/v1" isn't supported, only supports API versions ["federation/v1beta1" "v1" "authentication.k8s.io/v1beta1" "componentconfig/v1alpha1" "policy/v1alpha1" "rbac.authorization.k8s.io/v1alpha1" "apps/v1alpha1" "authorization.k8s.io/v1beta1" "autoscaling/v1" "extensions/v1beta1" "batch/v1" "batch/v2alpha1"]; if you choose to ignore these errors, turn validation off with --validate=false
I have tried several different API versions from the list but they all failed in a similar way.

oadm policy add-cluster-role-to-user view em7 grants the role to a user named em7.
You need to grant permissions to the service account instead, e.g. oadm policy add-cluster-role-to-user view system:serviceaccount:default:em7
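For example, re-running the grant against the fully qualified service account name and then retrying the original request (same token and curl call as above):
oadm policy add-cluster-role-to-user view system:serviceaccount:default:em7
TOKEN=`oc describe secret em7-token-g9ujh | grep token: | awk '{ print $2 }'`
curl -kD - -H "Authorization: Bearer $TOKEN" https://localhost:8443/api/v1/pods
This should now return the pod list instead of a 403.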

Related

Enable REST APIs for GKE deployment, service and others

I am trying to deploy applications on GKE using the REST APIs. However, the GKE documentation is mixed up and unclear about how to enable Kubernetes REST API access.
Does anyone here have a clear idea about how to create a Deployment on a Kubernetes cluster on Google Cloud?
If yes, I would love to know the detailed steps to enable it. Currently, this is what I get.
A GET call to https://xx.xx.xx.xx/apis/apps/v1/namespaces/default/deployments/nginx-1 returns the JSON output below despite a valid authorization token:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "deployments.apps \"nginx-1\" is forbidden: User \"system:serviceaccount:default:default\" cannot get resource \"deployments\" in API group \"apps\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "name": "nginx-1",
    "group": "apps",
    "kind": "deployments"
  },
  "code": 403
}
The administration APIs, however, seem to be enabled:
Following the instructions at this link and executing the below commands:
# Check all possible clusters, as your .KUBECONFIG may have multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
# Select name of cluster you want to interact with from above output:
export CLUSTER_NAME="some_server_name"
# Point to the API server referring the cluster name
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
# Gets the token value
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 --decode)
# Explore the API with TOKEN
curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
gives the desired output.
The service account default in the default namespace does not have an RBAC binding that allows the get verb on the deployments resource in the default namespace.
Use the Role and RoleBinding below to grant the service account the necessary permission.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: deployment-reader
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-deployment
  namespace: default
subjects:
# You can specify more than one "subject"
- kind: ServiceAccount
  name: default # "name" is case sensitive
  namespace: default
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role # this must be Role or ClusterRole
  name: deployment-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
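Then apply the manifest, for example saved as deployment-reader.yaml (the file name is arbitrary):
kubectl apply -f deployment-reader.yaml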
To verify the permission:
kubectl auth can-i get deployments --as=system:serviceaccount:default:default -n default
yes
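With the binding in place, the original request from the question should succeed, for example reusing the APISERVER and TOKEN variables set earlier:
curl -X GET $APISERVER/apis/apps/v1/namespaces/default/deployments/nginx-1 --header "Authorization: Bearer $TOKEN" --insecure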

Unable to sign in kubernetes dashboard?

After following all the steps to create the service account and role binding, I am unable to sign in.
kubectl create serviceaccount dashboard -n default
kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard
and applying this YAML file:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
Getting the following error when I click sign in for the config service:
{
  "status": 401,
  "plugins": [],
  "errors": [
    {
      "ErrStatus": {
        "metadata": {},
        "status": "Failure",
        "message": "MSG_LOGIN_UNAUTHORIZED_ERROR",
        "reason": "Unauthorized",
        "code": 401
      }
    }
  ]
}
Roman Marusyk is on the right path. This is what you have to do.
$ kubectl get secret -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
default-token-rqw2l                kubernetes.io/service-account-token   3      9m8s
kubernetes-dashboard-certs         Opaque                                0      9m8s
kubernetes-dashboard-csrf          Opaque                                1      9m8s
kubernetes-dashboard-key-holder    Opaque                                2      9m8s
kubernetes-dashboard-token-5tqvd   kubernetes.io/service-account-token   3      9m8s
From here you will get the kubernetes-dashboard-token-5tqvd, which is the secret holding the token to access the dashboard.
$ kubectl get secret kubernetes-dashboard-token-5tqvd -n kubernetes-dashboard -oyaml | grep -m 1 token | awk -F : '{print $2}'
ZXlK...
Now you will need to decode it:
echo -n ZXlK... | base64 -d
eyJhb...
Enter the token in the sign-in page:
And you are in.
UPDATE
You can also do the two steps in one to get the secret and decode it:
$ kubectl get secret kubernetes-dashboard-token-5tqvd -n kubernetes-dashboard -oyaml | grep -m 1 token | awk -F ' ' '{print $2}' | base64 -d
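Another option (same result) is to pull the token field directly with jsonpath and decode it, which avoids the grep/awk parsing:
kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-token-5tqvd -o jsonpath='{.data.token}' | base64 -d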
namespace: kube-system
Usually, the Kubernetes Dashboard runs in the namespace kubernetes-dashboard.
Please ensure that you have the correct namespace in your ClusterRoleBinding and ServiceAccount.
If you installed the dashboard according to the official doc, then it was installed to the kubernetes-dashboard namespace.
You can check this post for reference.
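To confirm which namespace your dashboard actually runs in, something like this works:
kubectl get pods --all-namespaces | grep dashboard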
EDIT 22-May-2020
The issue was that I was accessing the dashboard UI over the HTTP protocol from an external machine, using the default ClusterIP.
Indeed, the Dashboard should not be exposed publicly over HTTP. For domains accessed over HTTP it will not be possible to sign in; nothing will happen after clicking the 'Sign In' button on the login page.
That is discussed in detail in this post on StackOverflow. There is a workaround for that behavior discussed in the same thread. Basically it is the same thing I've been referring to here: "You can check this post for reference."
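One common way to reach the dashboard without exposing it over plain HTTP (this may or may not be the exact workaround from that thread) is kubectl proxy, which serves it on localhost; adjust the namespace if yours differs:
kubectl proxy
# then open:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/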

Minishift Kubernetes Dashboard throws error: services "kubernetes-dashboard" not found

I am attempting to monitor the performance of my pods within MiniShift and tried to implement the Kubernetes Dashboard (https://github.com/kubernetes/dashboard) following all instructions.
It creates the Kubernetes-Dashboard project (separate from the NodeJs project I am attempting to monitor) and when I run kubectl proxy and access the URL (http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/) it gives the following error.
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services \"kubernetes-dashboard\" not found",
  "reason": "NotFound",
  "details": {
    "name": "kubernetes-dashboard",
    "kind": "services"
  },
  "code": 404
}
If you attempt to use the dashboard in minikube, the situation is similar to minishift: you don't deploy the dashboard yourself, since minikube has integrated support for it.
To access the dashboard you use this command:
minikube dashboard
This will enable the dashboard add-on and open the proxy in the default web browser. If you just want the URL, the dashboard command can also simply emit it:
minikube dashboard --url
Coming back to minishift, you might want to check out the minishift add-ons and its kubernetes dashboard add-on.
As described by acid_fuji, you can enable the kubernetes dashboard via minikube addons:
minikube addons list
minikube addons enable dashboard
# in addition to get information about CPU/memory/usage please enable metrics-server
minikube addons enable metrics-server
If you are trying to install the dashboard manually, please refer to the docs:
1. Apply manifest by running:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
2. Make sure that your deployment, service and corresponding endpoints were deployed by running:
kubectl get all -n kubernetes-dashboard
3. Create a ServiceAccount/ClusterRoleBinding and obtain a Bearer Token to access the kubernetes dashboard:
Note:
IMPORTANT: Make sure that you know what you are doing before proceeding. Granting admin privileges to the Dashboard's Service Account might be a security risk.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
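Apply it, for example saved as dashboard-admin.yaml (the file name is arbitrary):
kubectl apply -f dashboard-admin.yaml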
Getting a Bearer Token:
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
4. Additional resources:
Accessing Dashboard
Access control

Forbidden: user cannot get path "/" (not anonymous user)

When accessing the k8s api endpoint (FQDN:6443), a successful retrieval will return a JSON object containing REST endpoint paths. I have a user who is granted cluster-admin privileges on the cluster who is able to successfully interact with that endpoint.
I've created a certificate for another user and granted them a subset of privileges in my cluster. The error I'm attempting to correct: they cannot access FQDN:6443, but instead get a 403 with the message "User cannot get path /". I get the same behavior whether I specify FQDN:6443/ or FQDN:6443 (no trailing slash). I've examined the privileges granted to cluster-admin role users, but have not recognized the gap.
Other behavior: they CAN access FQDN:6443/api, which I have not otherwise explicitly granted them, as well as the various endpoints I have explicitly granted. I believe they reach the api endpoint via the system:discovery role granted to the system:authenticated group. Also, if I attempt to interact with the cluster without a certificate, I am correctly identified as an anonymous user. If I interact with the cluster with a certificate whose user name does not match my rolebindings, I get the expected behavior for all but the FQDN:6443 endpoint.
I had a similar issue. I was trying to curl the base URL https://api_server_ip:6443 with the correct certificates.
I got this error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
It appears system:discovery doesn't grant access to the base URL https://api_server_ip:6443/. The system:discovery role only gives access to the following paths:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:discovery
rules:
- nonResourceURLs:
  - /api
  - /api/*
  - /apis
  - /apis/*
  - /healthz
  - /openapi
  - /openapi/*
  - /swagger-2.0.0.pb-v1
  - /swagger.json
  - /swaggerapi
  - /swaggerapi/*
  - /version
  - /version/
  verbs:
  - get
No access to / is granted. So I created the following ClusterRole, which I called discover_base_url. It grants access to the / path:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: discover_base_url
rules:
- nonResourceURLs:
  - /
  verbs:
  - get
Then I created a ClusterRoleBinding binding the forbidden user "kubernetes" (it could be any user) to the above ClusterRole. The following is the YAML for the ClusterRoleBinding (replace "kubernetes" with your user):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: discover-base-url
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: discover_base_url
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes
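To apply and verify (the file names are just examples; kubectl auth can-i also accepts non-resource paths):
kubectl apply -f discover-base-url-role.yaml
kubectl apply -f discover-base-url-binding.yaml
kubectl auth can-i get / --as=kubernetes
The last command should report yes once the binding is in place.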
After creating these two resources, the curl request works:
curl --cacert ca.pem --cert kubernetes.pem --key kubernetes-key.pem https://api_server_ip:6443
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/",
"/apis/admissionregistration.k8s.io",
"/apis/admissionregistration.k8s.io/v1beta1",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
"/apis/apiregistration.k8s.io",
"/apis/apiregistration.k8s.io/v1",
"/apis/apiregistration.k8s.io/v1beta1",
"/apis/apps",
"/apis/apps/v1",
"/apis/apps/v1beta1",
"/apis/apps/v1beta2",
"/apis/authentication.k8s.io",
"/apis/authentication.k8s.io/v1",
"/apis/authentication.k8s.io/v1beta1",
"/apis/authorization.k8s.io",
"/apis/authorization.k8s.io/v1",
"/apis/authorization.k8s.io/v1beta1",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/autoscaling/v2beta1",
"/apis/autoscaling/v2beta2",
"/apis/batch",
"/apis/batch/v1",
"/apis/batch/v1beta1",
"/apis/certificates.k8s.io",
"/apis/certificates.k8s.io/v1beta1",
"/apis/coordination.k8s.io",
"/apis/coordination.k8s.io/v1beta1",
"/apis/events.k8s.io",
"/apis/events.k8s.io/v1beta1",
"/apis/extensions",
"/apis/extensions/v1beta1",
"/apis/networking.k8s.io",
"/apis/networking.k8s.io/v1",
"/apis/policy",
"/apis/policy/v1beta1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1",
"/apis/rbac.authorization.k8s.io/v1beta1",
"/apis/scheduling.k8s.io",
"/apis/scheduling.k8s.io/v1beta1",
"/apis/storage.k8s.io",
"/apis/storage.k8s.io/v1",
"/apis/storage.k8s.io/v1beta1",
"/healthz",
"/healthz/autoregister-completion",
"/healthz/etcd",
"/healthz/log",
"/healthz/ping",
"/healthz/poststarthook/apiservice-openapi-controller",
"/healthz/poststarthook/apiservice-registration-controller",
"/healthz/poststarthook/apiservice-status-available-controller",
"/healthz/poststarthook/bootstrap-controller",
"/healthz/poststarthook/ca-registration",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/kube-apiserver-autoregistration",
"/healthz/poststarthook/rbac/bootstrap-roles",
"/healthz/poststarthook/scheduling/bootstrap-system-priority-classes",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/healthz/poststarthook/start-kube-aggregator-informers",
"/healthz/poststarthook/start-kube-apiserver-admission-initializer",
"/healthz/poststarthook/start-kube-apiserver-informers",
"/logs",
"/metrics",
"/openapi/v2",
"/swagger-2.0.0.json",
"/swagger-2.0.0.pb-v1",
"/swagger-2.0.0.pb-v1.gz",
"/swagger-ui/",
"/swagger.json",
"/swaggerapi",
"/version"
]
}

API requests other than /api/v1 return 403 "Forbidden"

Hi, I installed a fresh kubernetes cluster on Ubuntu 16.04 using this tutorial: https://blog.alexellis.io/kubernetes-in-10-minutes/
However, as soon as I try to access my API (for example: https://[server-ip]:6443/api/v1/namespaces) I get the following message:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "namespaces is forbidden: User \"system:bootstrap:a916af\" cannot list namespaces at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "namespaces"
  },
  "code": 403
}
Does anyone know how to fix this or what I am doing wrong?
While I haven't run through that tutorial, the service account with which you're making the request doesn't have access to cluster-level information, like listing namespaces. RBAC (Role-Based Access Control) binds users with either a Role or a ClusterRole, which grant them different permissions. My guess is that this service account shouldn't ever need to know what other namespaces exist, and therefore doesn't have access to list them.
In terms of "fixing" this, aside from creating a serviceaccount/user with correct permissions, that tutorial makes several references to a config file stored at $HOME/.kube/config, which stores the credentials for a user that should have access to cluster-level resources, including listing namespaces. You could start there.
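For example, a quick sanity check that the admin kubeconfig has the expected cluster-level access:
kubectl --kubeconfig $HOME/.kube/config get namespaces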
I solved it by binding the service account system:serviceaccount:default:default (which is the default account bound to Pods) to the cluster-admin role. Just create a YAML file (named e.g. fabric8-rbac.yaml) with the following contents:
# NOTE: The service account `default:default` already exists in the k8s cluster.
# You can create a new account like this:
#---
#apiVersion: v1
#kind: ServiceAccount
#metadata:
#  name: <new-account-name>
#  namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: fabric8-rbac
subjects:
- kind: ServiceAccount
  # Reference to upper's `metadata.name`
  name: default
  # Reference to upper's `metadata.namespace`
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f fabric8-rbac.yaml
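To confirm the binding took effect, kubectl auth can-i can impersonate the service account:
kubectl auth can-i list namespaces --as=system:serviceaccount:default:default
yes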