Adding a create permission for pods/portforward seems to remove the get permission for configmaps - kubernetes

I am trying to run helm status --tiller-namespace=$NAMESPACE $RELEASE_NAME from a container inside that namespace.
I have a role with the rule
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  verbs:
  - get
  - watch
bound to the default service account. But I was getting the error
Error: pods is forbidden: User "system:serviceaccount:mynamespace:default" cannot list resource "pods" in API group "" in the namespace "mynamespace"
So I added the list verb like so
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  verbs:
  - get
  - watch
  - list
and now I have progressed to the error cannot create resource "pods/portforward" in API group "". I couldn't find anything in the k8s docs on how to assign different verbs to different resources in the same apiGroup, but based on this example I assumed the following should work:
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - ""
  resources:
  - pods/portforward
  verbs:
  - create
however, now I get the error cannot get resource "configmaps" in API group "". Note I am running a kubectl get cm $CMNAME before I run the helm status command.
So it seems that I did have permission to do a kubectl get cm until I tried to add the permission to create a pods/portforward.
Can anyone explain this to me please?
also the cluster is running k8s version
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.7+1.2.3.el7", GitCommit:"cfc2012a27408ac61c8883084204d10b31fe020c", GitTreeState:"archive", BuildDate:"2019-05-23T20:00:05Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
and helm version
Server: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}

My issue was that I was deploying the manifests containing these Roles as part of a Helm chart (using Helm 2). However, the service account of the tiller doing the deploying did not itself have the create pods/portforward permission. RBAC prevents granting permissions you do not hold, so the tiller was unable to grant that permission, and deploying the manifest containing the Roles errored. As a result, the Role granting get on configmaps was never created, hence the confusing error.
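One way around this escalation check is to grant the deploying service account the permission it needs to hand out. A minimal sketch, assuming the tiller runs as a ServiceAccount named tiller in mynamespace (both names are illustrative, not from the question); alternatively, granting the tiller the escalate verb on roles also lifts the restriction:

  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: tiller-portforward    # illustrative name
    namespace: mynamespace
  rules:
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: tiller-portforward-binding
    namespace: mynamespace
  subjects:
  - kind: ServiceAccount
    name: tiller                # assumed tiller service account
    namespace: mynamespace
  roleRef:
    kind: Role
    name: tiller-portforward
    apiGroup: rbac.authorization.k8s.io

With this in place, the tiller holds create on pods/portforward itself and can therefore create Roles that grant it.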

Related

Access SubjectAccessReview objects with `get`

kubectl auth can-i says "yes" for get SubjectAccessReview, but actually running kubectl get SubjectAccessReview returns a MethodNotAllowed error. Why?
❯ kubectl auth can-i get SubjectAccessReview
Warning: resource 'subjectaccessreviews' is not namespace scoped in group 'authorization.k8s.io'
yes
❯ kubectl get SubjectAccessReview
Error from server (MethodNotAllowed): the server does not allow this method on the requested resource
❯ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.25.3+k3s1
If I cannot get, then can-i should NOT return yes. Right?
kubectl auth can-i is not wrong.
The can-i command is checking cluster RBAC (does there exist a role and rolebinding that grant you access to that operation). It doesn't know or care about "supported methods". Somewhere there is a role that grants you the get verb on those resources...possibly implicitly e.g. via resources: ['*'].
For example, I'm accessing a local cluster with cluster-admin privileges, which means my access is controlled by this role:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
The answer to kubectl auth can-i get <anything> is going to be yes, regardless of whether or not that operation makes sense for a given resource.
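The reason the actual request fails is that SubjectAccessReview objects are not stored; the API supports only create (POST), so get and list return MethodNotAllowed regardless of RBAC. You "get" an answer by creating a review and reading its status; a sketch (the subject and attributes below are example values):

  apiVersion: authorization.k8s.io/v1
  kind: SubjectAccessReview
  spec:
    user: system:serviceaccount:default:default   # example subject
    resourceAttributes:
      namespace: default
      verb: get
      resource: configmaps

Running kubectl create -f sar.yaml -o yaml prints the object back with status.allowed populated by the authorizer.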

What are the correct permissions to give Argo workflows given googleapi: Error 403?

I am seeing the following error running Argo workflows in GKE.
time="2022-05-13T14:17:40.740Z" level=info msg="node changed" new.message="Error (exit code 1): upload /tmp/argo/outputs/artifacts/message.tgz: writer close: googleapi: Error 403: Access denied., forbidden" new.phase=Error new.progress=0/1 nodeID=hello-world-3126142299 old.message= old.phase=Pending old.progress=0/1
I am using a permission set that worked once, I think; I used to be able to run this workflow, but it has been a while. When I first ran the workflow, it gave me an error saying something like "you are using deprecated permissions, look here (https://argoproj.github.io/argo-workflows/workflow-rbac/)". I updated to the set below and am now getting the above error.
Here is my current clusterrole
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: sandbox-dev
  namespace: sandbox
rules:
# pod get/watch is used to identify the container IDs of the current pod
- apiGroups: ["", "argoproj.io"]
  resources:
  - pods
  - volumes
  - persistentvolumes
  - pods/log
  - pods/exec
  - configmaps
  - workflows
  - workflowtemplates
  - workflowtasksets
  - workflowtaskresult
  verbs:
  - get
  - create
  - watch
  - patch
  - list
  - delete
  - update

How to provide access to a pod so that it can list and get other pods and other resource in the namespaces/cluster

I have been working on an application that performs verification tests on the istio components deployed in a kube-cluster. The constraints in my case are that I have to run this application as a pod inside Kubernetes, and that I cannot give the application's pod the cluster-admin role so it can do all the operations. Instead I have to create a restricted ClusterRole that provides just enough access for the application to list and get all the required deployed istio resources (the reason for a ClusterRole rather than a Role is that deploying istio creates both namespace-level and cluster-level resources). Currently my application won't run at all with my restricted ClusterRole and outputs the error
Error: failed to fetch istiod pod, error: pods is forbidden: User "system:serviceaccount:istio-system:istio-deployment-verification-sa" cannot list resource "pods" in API group "" in the namespace "istio-system"
The above error doesn't make sense to me, as I have explicitly mentioned the core API group in my ClusterRole and also listed pods under resources in my ClusterRole definition.
Clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Values.clusterrole.name }}
  namespace: {{ .Values.clusterrole.clusterrolens }}
rules:
- apiGroups:
  - "rbac.authorization.k8s.io"
  - "" # enabling access to core API
  - "networking.istio.io"
  - "install.istio.io"
  - "autoscaling"
  - "apps"
  - "admissionregistration.k8s.io"
  - "policy"
  - "apiextensions.k8s.io"
  resources:
  - "clusterroles"
  - "clusterolebindings"
  - "serviceaccounts"
  - "roles"
  - "rolebindings"
  - "horizontalpodautoscalers"
  - "configmaps"
  - "deployments"
  - "mutatingwebhookconfigurations"
  - "poddisruptionbudgets"
  - "envoyfilters"
  - "validatingwebhookconfigurations"
  - "pods"
  - "wasmplugins"
  - "destinationrules"
  - "envoyfilters"
  - "gateways"
  - "serviceentries"
  - "sidecars"
  - "virtualservices"
  - "workloadentries"
  - "workloadgroups"
  - "authorizationpolicies"
  - "peerauthentications"
  - "requestauthentications"
  - "telemetries"
  - "istiooperators"
  resourceNames:
  - "istiod-istio-system"
  - "istio-reader-istio-system"
  - "istio-reader-service-account"
  - "istiod-service-account"
  - "wasmplugins.extensions.istio.io"
  - "destinationrules.networking.istio.io"
  - "envoyfilters.networking.istio.io"
  - "gateways.networking.istio.io"
  - "serviceentries.networking.istio.io"
  - "sidecars.networking.istio.io"
  - "virtualservices.networking.istio.io"
  - "workloadentries.networking.istio.io"
  - "workloadgroups.networking.istio.io"
  - "authorizationpolicies.security.istio.io"
  - "peerauthentications.security.istio.io"
  - "requestauthentications.security.istio.io"
  - "telemetries.telemetry.istio.io"
  - "istiooperators.install.istio.io"
  - "istiod"
  - "istiod-clusterrole-istio-system"
  - "istiod-gateway-controller-istio-system"
  - "istiod-clusterrole-istio-system"
  - "istiod-gateway-controller-istio-system"
  - "istio"
  - "istio-sidecar-injector"
  - "istio-reader-clusterrole-istio-system"
  - "stats-filter-1.10"
  - "tcp-stats-filter-1.10"
  - "stats-filter-1.11"
  - "tcp-stats-filter-1.11"
  - "stats-filter-1.12"
  - "tcp-stats-filter-1.12"
  - "istio-validator-istio-system"
  - "istio-ingressgateway-microservices"
  - "istio-ingressgateway-microservices-sds"
  - "istio-ingressgateway-microservices-service-account"
  - "istio-ingressgateway-public"
  - "istio-ingressgateway-public-sds"
  - "istio-ingressgateway-public-service-account"
  verbs:
  - get
  - list
The application I have built leverages the istioctl docker container published by istio on dockerhub. Link.
I want to understand what changes are required in the above ClusterRole definition so that I can perform the get and list operations on pods in the namespace.
I would also like to understand whether the error I am getting could be referencing some other resource in the cluster.
Cluster information:
Kubernetes version: 1.20
Istioctl docker image version: 1.12.2
Istio version: 1.12.1
As OP mentioned in the comments, the problem was resolved after my suggestion:
Please run the command kubectl auth can-i list pods --namespace istio-system --as system:serviceaccount:istio-system:istio-deployment-verification-sa and attach the result to the question. Look also here
OP has confirmed that the problem is resolved:
Thanks, using the above command I was finally able to nail down the issue: first, it was with the resourceNames, and second, the core API group needs to be mentioned before any other. The issue is resolved now.
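The resourceNames finding has a likely root cause worth spelling out: RBAC cannot match list (or watch) requests against resourceNames, because no object name appears in a list URL, so a rule that carries resourceNames effectively grants no list access at all. A common fix is to split the rules so the collection verbs get a rule of their own without resourceNames. A sketch (group and resource lists abridged from the question's ClusterRole):

  rules:
  # list/watch cannot be restricted by resourceNames, so grant them
  # in a separate rule; core ("") group listed explicitly
  - apiGroups: [""]
    resources: ["pods", "configmaps", "serviceaccounts"]
    verbs: ["get", "list"]
  # individually named objects can keep a resourceNames restriction,
  # but only for name-bearing verbs such as get
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles", "clusterrolebindings"]
    resourceNames:
    - "istiod-clusterrole-istio-system"
    - "istio-reader-clusterrole-istio-system"
    verbs: ["get"]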

Jenkins pipeline k8s deployment failed

I'm trying to deploy my application into an EKS cluster. When I run the Jenkins job I am able to get pod details with kubectl get pod (pods in Running state), but when I try to deploy a yaml file via Jenkins I get the error below:
+ kubectl create -f deployment.yaml
Error from server (Forbidden): error when creating "deployment.yaml": deployments.apps is forbidden: User "system:node:ip-10-2-3-4.eu-central-1.compute.internal" cannot create resource "deployments" in API group "apps" in the namespace "default"
create pod
+ kubectl '--kubeconfig=****' '--context=arn:aws:eks:eu-central-1:123456789101:cluster/my-cluster' sh "kubectl auth can-i list pods"
yes
Create deployment
+ kubectl '--kubeconfig=****' '--context=arn:aws:eks:eu-central-1:123456789101:cluster/my-cluster' sh "kubectl auth can-i create deployment"
no
So that means you have permission to read/list pod data, but you don't have access to create Deployment objects.
Below are two examples; check and compare them.
The first contains read-only rules (what you currently have):
rules:
- apiGroups: [""]
  #
  # at the HTTP level, the name of the resource for accessing Pod
  # objects is "pods"
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
The second grants permissions for Deployment creation (verbs: ["create"]); most probably you are missing this part:
rules:
- apiGroups: ["extensions", "apps"]
  #
  # at the HTTP level, the name of the resource for accessing Deployment
  # objects is "deployments"
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
For more options, examples and explanations please check Using RBAC Authorization
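Note that the error names the user system:node:ip-10-2-3-4.eu-central-1.compute.internal, i.e. Jenkins is currently acting with the node's own credentials. A cleaner setup is to bind the deployment rules to a dedicated identity for Jenkins. A sketch, assuming a ServiceAccount named jenkins-deployer in the default namespace (both names are illustrative):

  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: deployment-manager      # illustrative name
    namespace: default
  rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: deployment-manager-binding
    namespace: default
  subjects:
  - kind: ServiceAccount
    name: jenkins-deployer        # assumed Jenkins identity
    namespace: default
  roleRef:
    kind: Role
    name: deployment-manager
    apiGroup: rbac.authorization.k8s.io

You can then verify with kubectl auth can-i create deployments --as system:serviceaccount:default:jenkins-deployer before running the pipeline.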

How is the verb "proxy" determined in the audit logs?

According to the Kubernetes documentation, these are the possible request verbs:
https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb
I've been looking at the kubernetes-dashboard Role, and I saw this:
- apiGroups:
  - ""
  resourceNames:
  - heapster
  - dashboard-metrics-scraper
  resources:
  - services
  verbs:
  - proxy
This is the role of kubernetes-dashboard
I didn't see any indication that the audit logs use the "proxy" verb. Any clarification would be great.
When you proxy to such a service, the request goes to the API server, which serves POST /api/v1/namespaces/{namespace}/services/{name}/proxy, the proxy API of services, as documented here.
In that request, proxy is not the verb; it's a subresource of the service, and the verb is derived from the HTTP method as usual.
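This is also how it shows up in audit logs: the event carries a standard verb and records proxy under objectRef.subresource. A rough, illustrative sketch of such an event (field names per audit.k8s.io/v1 Event; values are examples, not taken from a real log):

  apiVersion: audit.k8s.io/v1
  kind: Event
  level: Metadata
  stage: ResponseComplete
  verb: get                       # derived from the HTTP GET
  requestURI: /api/v1/namespaces/kube-system/services/dashboard-metrics-scraper/proxy/
  objectRef:
    resource: services
    namespace: kube-system
    name: dashboard-metrics-scraper
    subresource: proxy

So when auditing dashboard traffic, filter on objectRef.subresource == "proxy" rather than looking for a "proxy" verb.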