Access SubjectAccessReview objects with `get` - kubernetes

The following says I can get SubjectAccessReview, but then kubectl get returns a MethodNotAllowed error. Why?
❯ kubectl auth can-i get SubjectAccessReview
Warning: resource 'subjectaccessreviews' is not namespace scoped in group 'authorization.k8s.io'
yes
❯ kubectl get SubjectAccessReview
Error from server (MethodNotAllowed): the server does not allow this method on the requested resource
❯ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.25.3+k3s1
If I cannot get, then can-i should NOT return yes. Right?

kubectl auth can-i is not wrong.
The can-i command checks cluster RBAC: does there exist a role and a binding that grant you that verb on that resource? It doesn't know or care about which methods the resource actually supports. Somewhere there is a role that grants you the get verb on those resources, possibly implicitly, e.g. via resources: ['*'].
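If you want to see every verb and resource your current identity is granted (and spot such a wildcard rule), kubectl can list it for you:
kubectl auth can-i --list
For a cluster-admin user this prints a *.* entry with every verb.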
For example, I'm accessing a local cluster with cluster-admin privileges, which means my access is controlled by this role:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
The answer to kubectl auth can-i get <anything> is going to be yes, regardless of whether or not that operation makes sense for a given resource.
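The reason get specifically fails is that SubjectAccessReview is a create-only "virtual" resource: you POST one and the API server answers in its status. You can confirm this with kubectl api-resources -o wide, where the VERBS column for subjectaccessreviews lists only create. A minimal sketch, with the user and resource attributes purely illustrative:
kubectl create -f - -o yaml <<EOF
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane
  resourceAttributes:
    verb: get
    resource: pods
    namespace: default
EOF
The object that comes back has status.allowed filled in, which is the whole point of the resource.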

Related

Inadvertently deleted admin clusterrole and can't access cluster resources

I deleted my cluster-admin role via kubectl using:
kubectl delete clusterrole cluster-admin
Not sure what I expected, but now I don't have access to the cluster from my account. Any attempt to get or change resources using kubectl returns a 403, Forbidden.
Is there anything I can do to revert this change without blowing away the cluster and creating a new one? I have a managed cluster on Digital Ocean.
If none of the kubectl commands actually work, you unfortunately cannot recreate the role with kubectl, because nothing can be done without an admin role. You can try creating the cluster-admin ClusterRole directly through the API (not using kubectl), but if that doesn't help you will have to recreate the cluster.
Try applying this YAML to create the new ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
Then apply the YAML file:
kubectl apply -f <filename>.yaml
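If kubectl is refused entirely, the same ClusterRole can be POSTed straight to the API, provided you still have a credential the API server accepts; a client certificate in the system:masters group is treated as privileged independent of RBAC. A rough sketch, with the server address and credential paths as placeholders:
curl -k --cert admin.crt --key admin.key \
  -H "Content-Type: application/yaml" \
  -X POST https://<api-server>:6443/apis/rbac.authorization.k8s.io/v1/clusterroles \
  --data-binary @cluster-admin.yaml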

Adding a create permission for pods/portforward seems to remove the get permission for configmaps

I am trying to run helm status --tiller-namespace=$NAMESPACE $RELEASE_NAME from a container inside that namespace.
I have a role with the rule
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  verbs:
  - get
  - watch
bound to the default service account. But I was getting the error
Error: pods is forbidden: User "system:serviceaccount:mynamespace:default" cannot list resource "pods" in API group "" in the namespace "mynamespace"
So I added the list verb like so
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  verbs:
  - get
  - watch
  - list
and now I have progressed to the error cannot create resource "pods/portforward" in API group "". I couldn't find anything in the k8s docs on how to assign different verbs to different resources in the same apiGroup but based on this example I assumed this should work:
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - ""
  resources:
  - pods/portforward
  verbs:
  - create
however, now I get the error cannot get resource "configmaps" in API group "". Note I am running a kubectl get cm $CMNAME before I run the helm status command.
So it seems that I did have permission to do a kubectl get cm until I tried to add the permission to create a pods/portforward.
Can anyone explain this to me please?
also the cluster is running k8s version
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.7+1.2.3.el7", GitCommit:"cfc2012a27408ac61c8883084204d10b31fe020c", GitTreeState:"archive", BuildDate:"2019-05-23T20:00:05Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
and helm version
Server: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
My issue was that I was deploying the manifests containing these Roles as part of a Helm chart (using Helm 2). However, the service account of the Tiller doing the deploying did not itself have the create pods/portforward permission, so it was unable to grant that permission and errored when trying to deploy the manifest with the Roles. That meant the Role granting get on configmaps was never created, hence the confusing error.
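This is RBAC's escalation prevention at work: a subject can only create a Role containing permissions it already holds (unless it has the escalate or bind verbs on roles). So the fix is to give Tiller's own role the same rule it is trying to hand out; roughly, assuming the usual tiller service account, add to its (Cluster)Role:
- apiGroups: [""]
  resources: ["pods/portforward"]
  verbs: ["create"]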

openshift Crash Loop Back Off error with turbine-server

Hi, I created a project in OpenShift and attempted to add a turbine-server image to it. A Pod was added, but I keep receiving the following error in the logs. I am very new to OpenShift and I would appreciate any advice or suggestions on how to resolve this error. I can supply any further information that is required.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/booking/pods/turbine-server-2-q7v8l . Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked..
How to diagnose
Make sure you have configured a service account, role, and role binding to the account. Make sure the service account is set to the pod spec.
spec:
  serviceAccountName: your-service-account
Start monitoring the atomic-openshift-node service on the node where the pod is deployed, as well as the API server.
$ journalctl -b -f -u atomic-openshift-node
Run the pod and monitor the journald output. You should see "Forbidden" entries:
Jan 28 18:27:38 <hostname> atomic-openshift-node[64298]:
logging error output: "Forbidden (user=system:serviceaccount:logging:appuser, verb=get, resource=nodes, subresource=proxy)"
This means the service account appuser does not have authorisation to do get on the nodes/proxy resource. Update the role to allow the get verb on that resource.
- apiGroups: [""]
resources:
- "nodes"
- "nodes/status"
- "nodes/log"
- "nodes/metrics"
- "nodes/proxy" <----
- "nodes/spec"
- "nodes/stats"
- "namespaces"
- "events"
- "services"
- "pods"
- "pods/status"
verbs: ["get", "list", "view"]
Note that some resources are not in the default legacy "" API group, as in Unable to list deployments resources using RBAC.
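The role then has to be bound to the service account from the log line above; since nodes is cluster-scoped, that means a ClusterRole plus a ClusterRoleBinding. A minimal sketch in plain Kubernetes RBAC terms (on older OpenShift the oc/oadm policy commands below do the equivalent), with the role name assumed:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: appuser-node-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-reader
subjects:
- kind: ServiceAccount
  name: appuser
  namespace: logging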
How to verify the authorisations
To verify who can execute the verb against the resource, for example patch verb against pod.
$ oadm policy who-can patch pod
Namespace: default
Verb:      patch
Resource:  pods
Users:     auser
           system:admin
           system:serviceaccount:cicd:jenkins
Groups:    system:cluster-admins
           system:masters
OpenShift vs K8S
OpenShift has command oc policy or oadm policy:
oc policy add-role-to-user <role> <user-name>
oadm policy add-cluster-role-to-user <role> <user-name>
This is the same idea as a Kubernetes RoleBinding. You can use Kubernetes RBAC objects, but in older OpenShift the apiVersion needs to be v1 instead of rbac.authorization.k8s.io/v1 as in Kubernetes.
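For comparison, the plain-Kubernetes equivalent of those bindings can be created with kubectl (names are placeholders):
kubectl create rolebinding <binding-name> --clusterrole=<role> --user=<user-name> -n <namespace>
kubectl create clusterrolebinding <binding-name> --clusterrole=<role> --user=<user-name>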
References
Managing Authorization Policies
Using RBAC Authorization
User and Role Management
Hi, thank you for the replies - I was able to resolve the issue by executing the following commands using the oc command-line utility:
oc policy add-role-to-group view system:serviceaccounts -n <project>
oc policy add-role-to-group edit system:serviceaccounts -n <project>

How to control access for pods/exec only in Kubernetes RBAC, without binding pods create?

I checked the Kubernetes docs and found that the pods/exec resource has no verbs listed, and I don't know how to control access to it alone. I create a pod; someone else needs to access it using 'exec' but must not be able to create anything in my cluster.
How can I implement this?
Since pods/exec is a subresource of pods, if you want to exec into a pod you first need to get the pod, so here is my role definition:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
Maybe you can try this kubectl plugin: https://github.com/zhangweiqaz/go_pod
kubectl go -h
kubectl exec in pod with username. For example:
kubectl go pod_name
Usage:
  go [flags]
Flags:
  -c, --containerName string   containerName
  -h, --help                   help for go
  -n, --namespace string       namespace
  -u, --username string        username, this user must exist in image, default: dev

kubernetes: Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope even after granting permission

Even after granting cluster roles to user, I get Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope. (get nodes)
I have the following set in the ~/.kube/config file for the user:
- context:
    cluster: kubernetes
    user: user#gmail.com
  name: user#kubernetes
and the following added to admin.yaml to create the cluster role and cluster role binding:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-binding
subjects:
- kind: User
  name: nsp#gmail.com
roleRef:
  kind: ClusterRole
  name: admin-role
When I try the command, I still get the error:
kubectl --username=user#gmail.com get nodes
Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope. (get nodes)
Can someone please suggest how to proceed?
Your problem is not with your ClusterRoleBindings but rather with user authentication. Kubernetes tells you that it identified you as system:anonymous (which is similar to *NIX's nobody) and not nsp#example.com (to which you applied your binding).
In your specific case the reason for that is that the username flag uses HTTP Basic authentication and needs the password flag to actually do anything. But even if you did supply the password, you'd still need to actually tell the API server to accept that specific user.
Have a look at this part of the Kubernetes documentation which deals with different methods of authentication. For the username and password authentication to work, you'd want to look at the Static Password File section, but I would actually recommend you go with X509 Client Certs since they are more secure and are operationally much simpler (no secrets on the Server, no state to replicate between API servers).
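For reference, a client-certificate user entry in ~/.kube/config looks roughly like this (paths and the name are placeholders; the certificate's CN becomes the username and its O fields become the groups):
users:
- name: nsp
  user:
    client-certificate: /path/to/nsp.crt
    client-key: /path/to/nsp.key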
In my case I was receiving a similar error due to RBAC.
Error
root#k8master:~# kubectl cluster-info dump --insecure-skip-tls-verify=true
Error from server (Forbidden): nodes is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Solution:
As a solution, I did the following to reconfigure my user's access to the cluster:
cd $HOME
sudo whoami
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
echo "export KUBECONFIG=$HOME/admin.conf" | tee -a ~/.bashrc
After doing the above, when I query the cluster I get a result:
root#k8master:~# kubectl cluster-info
Kubernetes master is running at https://192.168.10.15:6443
KubeDNS is running at https://192.168.10.15:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy