How to provide access to a pod so that it can list and get other pods and other resources in the namespace/cluster - kubernetes

I have been working on an application that performs verification tests on the deployed Istio components in the kube-cluster. The constraint in my case is that the application has to run as a pod inside Kubernetes, and I cannot grant that pod the cluster-admin role to let it do all the operations. Instead I have to create a restricted ClusterRole that provides just enough access for the application to list and get all the required deployed Istio resources. (The reason for creating a ClusterRole rather than a Role is that an Istio deployment creates both namespace-level and cluster-level resources.) Currently my application won't run at all when I use my restricted ClusterRole, and it outputs an error:
Error: failed to fetch istiod pod, error: pods is forbidden: User "system:serviceaccount:istio-system:istio-deployment-verification-sa" cannot list resource "pods" in API group "" in the namespace "istio-system"
The above error doesn't make sense to me, as I have explicitly listed the core API group in my ClusterRole and also listed pods under resources in the ClusterRole definition.
Clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Values.clusterrole.name }}
  namespace: {{ .Values.clusterrole.clusterrolens }}
rules:
- apiGroups:
  - "rbac.authorization.k8s.io"
  - "" # enabling access to the core API group
  - "networking.istio.io"
  - "install.istio.io"
  - "autoscaling"
  - "apps"
  - "admissionregistration.k8s.io"
  - "policy"
  - "apiextensions.k8s.io"
  resources:
  - "clusterroles"
  - "clusterolebindings"
  - "serviceaccounts"
  - "roles"
  - "rolebindings"
  - "horizontalpodautoscalers"
  - "configmaps"
  - "deployments"
  - "mutatingwebhookconfigurations"
  - "poddisruptionbudgets"
  - "envoyfilters"
  - "validatingwebhookconfigurations"
  - "pods"
  - "wasmplugins"
  - "destinationrules"
  - "envoyfilters"
  - "gateways"
  - "serviceentries"
  - "sidecars"
  - "virtualservices"
  - "workloadentries"
  - "workloadgroups"
  - "authorizationpolicies"
  - "peerauthentications"
  - "requestauthentications"
  - "telemetries"
  - "istiooperators"
  resourceNames:
  - "istiod-istio-system"
  - "istio-reader-istio-system"
  - "istio-reader-service-account"
  - "istiod-service-account"
  - "wasmplugins.extensions.istio.io"
  - "destinationrules.networking.istio.io"
  - "envoyfilters.networking.istio.io"
  - "gateways.networking.istio.io"
  - "serviceentries.networking.istio.io"
  - "sidecars.networking.istio.io"
  - "virtualservices.networking.istio.io"
  - "workloadentries.networking.istio.io"
  - "workloadgroups.networking.istio.io"
  - "authorizationpolicies.security.istio.io"
  - "peerauthentications.security.istio.io"
  - "requestauthentications.security.istio.io"
  - "telemetries.telemetry.istio.io"
  - "istiooperators.install.istio.io"
  - "istiod"
  - "istiod-clusterrole-istio-system"
  - "istiod-gateway-controller-istio-system"
  - "istiod-clusterrole-istio-system"
  - "istiod-gateway-controller-istio-system"
  - "istio"
  - "istio-sidecar-injector"
  - "istio-reader-clusterrole-istio-system"
  - "stats-filter-1.10"
  - "tcp-stats-filter-1.10"
  - "stats-filter-1.11"
  - "tcp-stats-filter-1.11"
  - "stats-filter-1.12"
  - "tcp-stats-filter-1.12"
  - "istio-validator-istio-system"
  - "istio-ingressgateway-microservices"
  - "istio-ingressgateway-microservices-sds"
  - "istio-ingressgateway-microservices-service-account"
  - "istio-ingressgateway-public"
  - "istio-ingressgateway-public-sds"
  - "istio-ingressgateway-public-service-account"
  verbs:
  - get
  - list
The application I have built leverages the istioctl Docker image published by Istio on Docker Hub.
I want to understand what changes are required in the above ClusterRole definition so that I can perform the get and list operations on the pods in the namespace.
I would also like to understand whether the error I am getting could be referring to some other resource in the cluster.
Cluster information:
Kubernetes version: 1.20
Istioctl docker image version: 1.12.2
Istio version: 1.12.1

As the OP mentioned in the comments, the problem was resolved after my suggestion:
Please run the command kubectl auth can-i list pods --namespace istio-system --as system:serviceaccount:istio-system:istio-deployment-verification-sa and attach the result to the question.
The OP has confirmed that the problem is resolved:
Thanks for the above command; using it I was finally able to nail down the issue. First, the problem was with the first resourceName, and second, the core API group needs to be mentioned before any other. The issue is resolved now.
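For background on the resourceNames part of that finding: when a rule carries resourceNames, it only matches requests for those named objects, and a list of pods has no object name at authorization time, so such a rule generally cannot authorize it (recent Kubernetes versions allow it only when the client sends a matching metadata.name field selector, which istioctl does not). The usual fix is to split the rules so that anything that must be listed sits in a rule without resourceNames. A minimal sketch of that shape, with an illustrative role name and resource split rather than the OP's full manifest:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istio-deployment-verification-role # hypothetical name for this sketch
rules:
# Resources that must be listed go in a rule without resourceNames.
- apiGroups:
  - "" # core API group
  resources:
  - pods
  - configmaps
  - serviceaccounts
  verbs:
  - get
  - list
# Individually named cluster-scoped objects can still be locked down for get.
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterroles
  - clusterrolebindings
  resourceNames:
  - istiod-istio-system
  - istio-reader-istio-system
  verbs:
  - get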

Related

What are the correct permissions to give Argo workflows given googleapi: Error 403?

I am seeing the following error running Argo workflows in GKE.
time="2022-05-13T14:17:40.740Z" level=info msg="node changed" new.message="Error (exit code 1): upload /tmp/argo/outputs/artifacts/message.tgz: writer close: googleapi: Error 403: Access denied., forbidden" new.phase=Error new.progress=0/1 nodeID=hello-world-3126142299 old.message= old.phase=Pending old.progress=0/1
I am using a permission set that worked once; I think I used to be able to run this workflow, but it has been a while. When I first ran the workflow, it gave me an error saying something like "you are using deprecated permissions, look here (https://argoproj.github.io/argo-workflows/workflow-rbac/)". I updated to the set below and am now getting the above error.
Here is my current ClusterRole:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: sandbox-dev
  namespace: sandbox
rules:
# pod get/watch is used to identify the container IDs of the current pod
- apiGroups: ["", "argoproj.io"]
  resources:
  - pods
  - volumes
  - persistentvolumes
  - pods/log
  - pods/exec
  - configmaps
  - workflows
  - workflowtemplates
  - workflowtasksets
  - workflowtaskresult
  verbs:
  - get
  - create
  - watch
  - patch
  - list
  - delete
  - update

Modify ClusterRole for Kubernetes

I want to use the ClusterRole edit for some users of my Kubernetes cluster (https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles).
However, it is unfortunate that such a user can then also access and modify Resource Quotas and Limit Ranges.
My question is now: how can I grant users access to a namespace via a RoleBinding, such that the Role is essentially the ClusterRole edit, but without any access to Resource Quotas and Limit Ranges?
The edit role gives only read access to resourcequotas and limitranges:
- apiGroups:
  - ""
  resources:
  - bindings
  - events
  - limitranges
  - namespaces/status
  - pods/log
  - pods/status
  - replicationcontrollers/status
  - resourcequotas
  - resourcequotas/status
  verbs:
  - get
  - list
  - watch
If you want a role that doesn't include read access to these resources, just make a copy of the edit role with those resources excluded.
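A hedged sketch of that copy-and-trim approach (the name custom-edit is made up for this example; note that edit is an aggregated ClusterRole, so the copy must drop the aggregationRule and aggregation labels or the controller will keep restoring the full rule set):
# Dump the built-in edit ClusterRole as a starting point.
kubectl get clusterrole edit -o yaml > custom-edit.yaml
# In custom-edit.yaml: rename metadata.name to custom-edit, delete the
# aggregationRule and the rbac.authorization.k8s.io/aggregate-to-* labels,
# and remove limitranges, resourcequotas and resourcequotas/status
# from the rules.
kubectl apply -f custom-edit.yaml
# Bind the trimmed role in the target namespace instead of edit:
kubectl create rolebinding dev-edit --clusterrole=custom-edit --user=<user> -n <namespace>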

How do I fix a role-based problem when my role appears to have the correct permissions?

I am trying to establish the namespace "sandbox" in Kubernetes and had been using it for several days without issue. Today I got the below error.
I have checked to make sure that I have all of the requisite configmaps in place.
Is there a log or something where I can find what this is referring to?
panic: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
I did find this thread (MountVolume.SetUp failed for volume "kube-api-access-fcz9j" : object "default"/"kube-root-ca.crt" not registered) and have applied the below patch to my service account, but I am still getting the same error.
automountServiceAccountToken: false
UPDATE:
In answer to p10l: I am working on a bare-metal cluster, version 1.23.0. No Terraform.
I am getting closer, but still not there.
This appears to be another RBAC problem, but the error does not make sense to me.
I have a user "dma". I am running workflows in the "sandbox" namespace using the context dma@kubernetes.
The error now is
Create request failed: workflows.argoproj.io is forbidden: User "dma" cannot create resource "workflows" in API group "argoproj.io" in the namespace "sandbox"
but that user indeed appears to have the correct permissions.
This is the output of
kubectl get role dma -n sandbox -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"dma","namespace":"sandbox"},"rules":[{"apiGroups":["","apps","autoscaling","batch","extensions","policy","rbac.authorization.k8s.io","argoproj.io"],"resources":["pods","configmaps","deployments","events","pods","persistentvolumes","persistentvolumeclaims","services","workflows"],"verbs":["get","list","watch","create","update","patch","delete"]}]}
  creationTimestamp: "2021-12-21T19:41:38Z"
  name: dma
  namespace: sandbox
  resourceVersion: "1055045"
  uid: 94191881-895d-4457-9764-5db9b54cdb3f
rules:
- apiGroups:
  - ""
  - apps
  - autoscaling
  - batch
  - extensions
  - policy
  - rbac.authorization.k8s.io
  - argoproj.io
  - workflows.argoproj.io
  resources:
  - pods
  - configmaps
  - deployments
  - events
  - pods
  - persistentvolumes
  - persistentvolumeclaims
  - services
  - workflows
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
This is the output of kubectl get rolebinding -n sandbox dma-sandbox-rolebinding -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"dma-sandbox-rolebinding","namespace":"sandbox"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"dma"},"subjects":[{"kind":"ServiceAccount","name":"dma","namespace":"sandbox"}]}
  creationTimestamp: "2021-12-21T19:56:06Z"
  name: dma-sandbox-rolebinding
  namespace: sandbox
  resourceVersion: "1050593"
  uid: d4d53855-b5fc-4f29-8dbd-17f682cc91dd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dma
subjects:
- kind: ServiceAccount
  name: dma
  namespace: sandbox
The issue you are describing is a recurring one, described here and here, where your cluster lacks the KUBECONFIG environment variable.
First, run echo $KUBECONFIG on all your nodes to see if it is empty.
If it is, look for the config file in your cluster, copy it to all the nodes, then export the variable by running export KUBECONFIG=/path/to/config. This file can usually be found at ~/.kube/config or /etc/kubernetes/admin.conf on master nodes.
Let me know if this solution worked in your case.
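For example, on a kubeadm-provisioned node the steps look roughly like this (the admin.conf path is kubeadm's default; adjust to wherever your cluster actually keeps its config):
echo $KUBECONFIG                             # empty output means no kubeconfig is set
export KUBECONFIG=/etc/kubernetes/admin.conf # typical location on a kubeadm master node
kubectl cluster-info                         # should now reach the API server instead of panicking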

Adding a create permission for pods/portforward seems to remove the get permission for configmaps

I am trying to run helm status --tiller-namespace=$NAMESPACE $RELEASE_NAME from a container inside that namespace.
I have a role with the rule
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  verbs:
  - get
  - watch
bound to the default service account. But I was getting the error
Error: pods is forbidden: User "system:serviceaccount:mynamespace:default" cannot list resource "pods" in API group "" in the namespace "mynamespace"
So I added the list verb like so
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  verbs:
  - get
  - watch
  - list
and now I have progressed to the error cannot create resource "pods/portforward" in API group "". I couldn't find anything in the k8s docs on how to assign different verbs to different resources in the same apiGroup, but based on this example I assumed this should work:
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - ""
  resources:
  - pods/portforward
  verbs:
  - create
However, now I get the error cannot get resource "configmaps" in API group "". Note that I am running kubectl get cm $CMNAME before I run the helm status command.
So it seems that I did have permission to do a kubectl get cm until I tried to add the permission to create a pods/portforward.
Can anyone explain this to me please?
Also, the cluster is running k8s version
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.7+1.2.3.el7", GitCommit:"cfc2012a27408ac61c8883084204d10b31fe020c", GitTreeState:"archive", BuildDate:"2019-05-23T20:00:05Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
and helm version
Server: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
My issue was that I was deploying the manifests containing these Roles as part of a Helm chart (using Helm 2). However, the service account for the Tiller doing the deploying did not itself have the create pods/portforward permission. RBAC forbids granting a permission you do not hold, so Tiller errored when trying to deploy the manifest containing the Roles. This meant the Role carrying the configmap get permission was never created, hence the weird error.
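This is RBAC's privilege-escalation prevention: a subject can only create or update a Role containing permissions it already holds (unless it has the escalate verb on roles). A hedged sketch of the missing grant for Tiller's own service account (the tiller service-account name and mynamespace are assumptions; match them to your Tiller deployment):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-portforward
  namespace: mynamespace
rules:
# Tiller must itself hold create on pods/portforward before it can
# grant that permission via a chart's Role manifests.
- apiGroups:
  - ""
  resources:
  - pods/portforward
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-portforward
  namespace: mynamespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-portforward
subjects:
- kind: ServiceAccount
  name: tiller # assumed Tiller service account
  namespace: mynamespace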

How is the verb "proxy" determined in the audit logs?

According to the Kubernetes documentation, these are the possible request verbs:
https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb
I've been looking at the kubernetes-dashboard Role, and I saw this:
- apiGroups:
  - ""
  resourceNames:
  - heapster
  - dashboard-metrics-scraper
  resources:
  - services
  verbs:
  - proxy
This is the Role of kubernetes-dashboard.
I didn't see any indication that the audit logs use the "proxy" verb. Any clarification would be great.
When you perform kubectl proxy, the request goes to the API server, which calls POST /api/v1/namespaces/{namespace}/services/{name}/proxy, the proxy API of services, as documented here.
Proxy is not a verb in that sense; it's a subresource exposed through the API server.
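As a concrete illustration, this is roughly the request that permission authorizes (the namespace and service name are the dashboard defaults used above; adjust for your install):
# Call the services/proxy subresource directly through the API server.
kubectl get --raw "/api/v1/namespaces/kubernetes-dashboard/services/dashboard-metrics-scraper/proxy/"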