Error while accessing Web UI Dashboard using RBAC - kubernetes

I created a cluster role "try-usr":
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: try-usr
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
While accessing the Web UI (Dashboard), it throws the following error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services \"https:kubernetes-dashboard:\" is forbidden: User \"xyz\" cannot get services/proxy in the namespace \"kube-system\"",
  "reason": "Forbidden",
  "details": {
    "name": "https:kubernetes-dashboard:",
    "kind": "services"
  },
  "code": 403
}

Depending on the Kubernetes version, the dashboard requires different permissions according to the docs:
v1.7
- create and watch permissions for secrets in the kube-system namespace, required to create and watch for changes of the kubernetes-dashboard-key-holder secret.
- get, update and delete permissions for secrets named kubernetes-dashboard-key-holder and kubernetes-dashboard-certs in the kube-system namespace.
- proxy permission to the heapster service in the kube-system namespace, required to allow getting metrics from heapster.
v1.8
- create permission for secrets in the kube-system namespace, required to create the kubernetes-dashboard-key-holder secret.
- get, update and delete permissions for secrets named kubernetes-dashboard-key-holder and kubernetes-dashboard-certs in the kube-system namespace.
- get and update permissions for the config map named kubernetes-dashboard-settings in the kube-system namespace.
- proxy permission to the heapster service in the kube-system namespace, required to allow getting metrics from heapster.
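Also note that a ClusterRole by itself grants nothing until it is bound to a subject. If the try-usr role above was never bound to the user, a ClusterRoleBinding along these lines is the missing piece (a minimal sketch; the binding name is illustrative and xyz is the user from the error message):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: try-usr-binding
subjects:
- kind: User
  name: xyz
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: try-usr
  apiGroup: rbac.authorization.k8s.io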

Related

How to allow a pod in the default namespace to read a secret from another namespace

In Azure Kubernetes I have a pod with Jenkins in the default namespace that needs to read a secret from my application namespace.
When I tried, I got the following error:
Error from server (Forbidden): secrets "myapp-mongodb" is forbidden: User "system:serviceaccount:default:jenkinspod" cannot get resource "secrets" in API group "" in the namespace "myapp"
How can I give this Jenkins pod access to read secrets in the 'myapp' namespace?
A secret is a namespaced resource and can be accessed via proper RBAC permissions; however, improper RBAC permissions may lead to leakage.
You must bind a role to the pod's associated service account. Here is a complete example. I have created a new service account for the role binding in this example; however, you can use the default service account if you want.
step-1: create a namespace called demo-namespace
kubectl create ns demo-namespace
step-2: create a secret in demo-namespace:
kubectl create secret generic other-secret -n demo-namespace --from-literal foo=bar
secret/other-secret created
step-3: Create a service account (my-custom-sa) in the default namespace.
kubectl create sa my-custom-sa
step-4: Validate that, by default, the service account you created in the last step has no access to the secrets present in demo-namespace.
kubectl auth can-i get secret -n demo-namespace --as system:serviceaccount:default:my-custom-sa
no
step-5: Create a cluster role with get and list permissions on secrets.
kubectl create clusterrole role-for-other-user --verb get,list --resource secret
clusterrole.rbac.authorization.k8s.io/role-for-other-user created
step-6: Create a rolebinding in demo-namespace to bind the cluster role created in the last step.
kubectl create rolebinding role-for-other-user -n demo-namespace --serviceaccount default:my-custom-sa --clusterrole role-for-other-user
rolebinding.rbac.authorization.k8s.io/role-for-other-user created
step-7: Validate that the service account in the default namespace now has access to the secrets of demo-namespace (note the difference from step-4).
kubectl auth can-i get secret -n demo-namespace --as system:serviceaccount:default:my-custom-sa
yes
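For reference, the imperative commands in steps 5 and 6 correspond roughly to the following manifests (a sketch; the names match the commands above):
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-for-other-user
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-for-other-user
  namespace: demo-namespace
subjects:
- kind: ServiceAccount
  name: my-custom-sa
  namespace: default
roleRef:
  kind: ClusterRole
  name: role-for-other-user
  apiGroup: rbac.authorization.k8s.io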
step-8: Create a pod in the default namespace and mount the service account you created earlier.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: my-pod
  name: my-pod
spec:
  serviceAccountName: my-custom-sa
  containers:
  - command:
    - sleep
    - infinity
    image: bitnami/kubectl
    name: my-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
step-9: From inside the pod (e.g. via kubectl exec -it my-pod -- bash), validate that you can read the secrets of demo-namespace.
curl -sSk -H "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/demo-namespace/secrets
{
  "kind": "SecretList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "668709"
  },
  "items": [
    {
      "metadata": {
        "name": "other-secret",
        "namespace": "demo-namespace",
        "uid": "5b3b9dba-be5d-48cc-ab16-4e0ceb3d1d72",
        "resourceVersion": "662043",
        "creationTimestamp": "2022-08-19T14:51:15Z",
        "managedFields": [
          {
            "manager": "kubectl-create",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-08-19T14:51:15Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {
              "f:data": {
                ".": {},
                "f:foo": {}
              },
              "f:type": {}
            }
          }
        ]
      },
      "data": {
        "foo": "YmFy"
      },
      "type": "Opaque"
    }
  ]
}

Enable REST APIs for GKE deployment, service and others

I am trying to deploy applications on GKE using REST APIs. However, the GKE documentation is scattered and unclear about how to enable Kubernetes REST API access.
Does anyone here have a clear idea of how to create a Deployment on a Kubernetes cluster on Google Cloud?
If yes, I would love to know the detailed steps for enabling the same. Currently, this is what I get.
A GET call to https://xx.xx.xx.xx/apis/apps/v1/namespaces/default/deployments/nginx-1 returns the JSON below despite a valid authorization token:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "deployments.apps \"nginx-1\" is forbidden: User \"system:serviceaccount:default:default\" cannot get resource \"deployments\" in API group \"apps\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "name": "nginx-1",
    "group": "apps",
    "kind": "deployments"
  },
  "code": 403
}
Administration APIs, however, seem to be enabled. Following the instructions at this link and executing the commands below:
# Check all possible clusters, as your .KUBECONFIG may have multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
# Select the name of the cluster you want to interact with from the above output:
export CLUSTER_NAME="some_server_name"
# Point to the API server, referencing the cluster name:
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
# Get the token value:
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}" | base64 --decode)
# Explore the API with the TOKEN:
curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
gives the desired output.
The service account default in the default namespace does not have the RBAC permissions to perform the get verb on the deployments resource in the default namespace.
Use the Role and RoleBinding below to grant the necessary permission to the service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: deployment-reader
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-deployment
  namespace: default
subjects:
# You can specify more than one "subject"
- kind: ServiceAccount
  name: default # "name" is case sensitive
  namespace: default
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role # this must be Role or ClusterRole
  name: deployment-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
To verify the permission:
kubectl auth can-i get deployments --as=system:serviceaccount:default:default -n default
yes
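After applying the Role and RoleBinding, the original REST call should succeed (a sketch reusing the $APISERVER and $TOKEN variables set up earlier):
curl -X GET "$APISERVER/apis/apps/v1/namespaces/default/deployments/nginx-1" --header "Authorization: Bearer $TOKEN" --insecure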

Service account role bindings not working for API access

I am working on developing tools to interact with Kubernetes. I have OpenShift set up with the Allow All authentication provider. I can log into the web console as I would expect.
I have also been able to setup a service account and assign a cluster role binding to the service account user. Despite this, when I access the REST API using a token of that service account, I get forbidden.
Here is what happens when I try to setup role bindings via OpenShift commands:
[root@host1 ~]# oadm policy add-cluster-role-to-user view em7 --namespace=default
[root@host1 ~]# oadm policy add-cluster-role-to-user cluster-admin em7 --namespace=default
[root@host1 ~]# oadm policy add-cluster-role-to-user cluster-reader em7 --namespace=default
[root@host1 ~]# oc get secrets | grep em7
em7-dockercfg-hnl6m kubernetes.io/dockercfg 1 18h
em7-token-g9ujh kubernetes.io/service-account-token 4 18h
em7-token-rgsbz kubernetes.io/service-account-token 4 18h
TOKEN=`oc describe secret em7-token-g9ujh | grep token: | awk '{ print $2 }'`
[root@host1 ~]# curl -kD - -H "Authorization: Bearer $TOKEN" https://localhost:8443/api/v1/pods
HTTP/1.1 403 Forbidden
Cache-Control: no-store
Content-Type: application/json
Date: Tue, 19 Jun 2018 15:36:40 GMT
Content-Length: 260
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "User \"system:serviceaccount:default:em7\" cannot list all pods in the cluster",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}
I also tried using the YAML file from (Openshift Admin Token):
# creates the service account "ns-reader"
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ns-reader
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: global-reader
rules:
- apiGroups: [""]
  # add other resources you wish to read
  resources: ["pods", "secrets"]
  verbs: ["get", "watch", "list"]
---
# This cluster role binding allows service account "ns-reader" to read pods in all available namespaces
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-ns
subjects:
- kind: ServiceAccount
  name: ns-reader
  namespace: default
roleRef:
  kind: ClusterRole
  name: global-reader
  apiGroup: rbac.authorization.k8s.io
When I run this, I get the following error:
[root@host1 ~]# kubectl create -f stack_overflow_49667238.yaml
error validating "stack_overflow_49667238.yaml": error validating data: API version "rbac.authorization.k8s.io/v1" isn't supported, only supports API versions ["federation/v1beta1" "v1" "authentication.k8s.io/v1beta1" "componentconfig/v1alpha1" "policy/v1alpha1" "rbac.authorization.k8s.io/v1alpha1" "apps/v1alpha1" "authorization.k8s.io/v1beta1" "autoscaling/v1" "extensions/v1beta1" "batch/v1" "batch/v2alpha1"]; if you choose to ignore these errors, turn validation off with --validate=false
I have tried several different API versions from the list but they all failed in a similar way.
oadm policy add-cluster-role-to-user view em7 grants the role to a user named em7, not to the service account.
You need to grant permissions to the service account under its fully qualified name, e.g. oadm policy add-cluster-role-to-user view system:serviceaccount:default:em7
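For example (a sketch; em7 and $TOKEN are the service account and token from the question):
oadm policy add-cluster-role-to-user view system:serviceaccount:default:em7
# the earlier curl should now return the pod list instead of 403:
curl -kD - -H "Authorization: Bearer $TOKEN" https://localhost:8443/api/v1/pods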

api request other than /api/v1 return 403 "Forbidden"

Hi, I installed a fresh Kubernetes cluster on Ubuntu 16.04 using this tutorial: https://blog.alexellis.io/kubernetes-in-10-minutes/
However, as soon as I try to access my API (for example: https://[server-ip]:6443/api/v1/namespaces) I get the following message:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "namespaces is forbidden: User \"system:bootstrap:a916af\" cannot list namespaces at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "namespaces"
  },
  "code": 403
}
Does anyone know how to fix this or what I am doing wrong?
While I haven't run through that tutorial, the service account with which you're making the request doesn't have access to cluster-level information, like listing namespaces. RBAC (Role-Based Access Control) binds users with either a Role or a ClusterRole, which grant them different permissions. My guess is that this service account shouldn't ever need to know what other namespaces exist and therefore doesn't have access to list them.
In terms of "fixing" this, aside from creating a serviceaccount/user with correct permissions, that tutorial makes several references to a config file stored at $HOME/.kube/config, which stores the credentials for a user that should have access to cluster-level resources, including listing namespaces. You could start there.
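As a sketch of that approach (assuming the admin credentials are embedded inline in $HOME/.kube/config, as kubeadm-based setups typically do):
# extract the admin client certificate and key from the kubeconfig
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d > client.crt
kubectl config view --raw -o jsonpath='{.users[0].user.client-key-data}' | base64 -d > client.key
# call the API with the client certificate instead of the bootstrap token
curl --cert client.crt --key client.key -k https://[server-ip]:6443/api/v1/namespaces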
I solved it by binding the service account system:serviceaccount:default:default (the default account bound to every pod) to the cluster-admin role. Create a YAML file (named e.g. fabric8-rbac.yaml) with the following contents:
# NOTE: The service account `default:default` already exists in the k8s cluster.
# You can create a new account like this:
#---
#apiVersion: v1
#kind: ServiceAccount
#metadata:
#  name: <new-account-name>
#  namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: fabric8-rbac
subjects:
- kind: ServiceAccount
  # Reference to upper's `metadata.name`
  name: default
  # Reference to upper's `metadata.namespace`
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f fabric8-rbac.yaml
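You can verify the binding took effect with kubectl auth can-i (a quick check):
kubectl auth can-i list namespaces --as=system:serviceaccount:default:default
yes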

GKE RBAC role / rolebinding to access node status in the cluster

I can't get a role binding right in order to get node status from an app running in a pod on GKE.
I am able to create a pod from there, but not to get node status.
Here is the role I am creating:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["nodes"]
  verbs: ["get", "watch", "list"]
This is the error I get when I do a getNodeStatus:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "nodes \"gke-cluster-1-default-pool-36c26e1e-2lkn\" is forbidden: User \"system:serviceaccount:default:sa-poc\" cannot get nodes/status at the cluster scope: Unknown user \"system:serviceaccount:default:sa-poc\"",
  "reason": "Forbidden",
  "details": {
    "name": "gke-cluster-1-default-pool-36c26e1e-2lkn",
    "kind": "nodes"
  },
  "code": 403
}
I tried with some minor variations but did not succeed.
Kubernetes version on GKE is 1.8.4-gke.1
Any help appreciated, thanks!
Subresource permissions are represented as <resource>/<subresource>, so in the role, you would specify resources: ["nodes","nodes/status"]
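Put together, a sketch of the corrected role plus a binding for the sa-poc service account from the error message (the binding name is illustrative):
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["nodes", "nodes/status"]
  verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-nodes
subjects:
- kind: ServiceAccount
  name: sa-poc
  namespace: default
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io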