latest Kubernetes version throws RBAC error - kubernetes

I have a scaler service that was working fine until my recent Kubernetes version upgrade. Now I keep getting the following error (some info redacted):
Error from server (Forbidden): deployments.extensions "redacted" is forbidden: User "system:serviceaccount:namesspace:saname" cannot get resource "deployments/scale" in API group "extensions" in the namespace "namespace"
I have the following ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: redacted
    chart: redacted
    heritage: Tiller
    release: redacted
  name: redacted
rules:
- apiGroups:
  - '*'
  resources: ["configmaps", "endpoints", "services", "pods", "secrets", "namespaces", "serviceaccounts", "ingresses", "daemonsets", "statefulsets", "persistentvolumeclaims", "replicationcontrollers", "deployments", "replicasets"]
  verbs: ["get", "list", "watch", "edit", "delete", "update", "scale", "patch", "create"]
- apiGroups:
  - '*'
  resources: ["nodes"]
  verbs: ["list", "get", "watch"]

scale is a subresource, not a verb. Remove it from the verbs list and include "deployments/scale" in the resources list so the service account can get and update the scale subresource.
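A minimal sketch of the corrected rule, assuming the scaler only needs to read and scale Deployments; note that recent Kubernetes versions serve Deployments from the apps API group, since the extensions group was removed:

- apiGroups: ["apps"]
  resources: ["deployments", "deployments/scale"]
  verbs: ["get", "list", "watch", "update", "patch"]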

Related

User "system:serviceaccount:gitlab:default" cannot get resource "deployments" in API group "apps" in the namespace "gitlab"

I am trying to run a k8s deployment job from the GitLab Kubernetes executor.
I deployed the Kubernetes runner using Helm as follows.
My values.yaml includes the following RBAC rules:
rbac:
  create: true
  rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["list", "get", "watch", "create", "delete"]
    - apiGroups: [""]
      resources: ["pods/exec"]
      verbs: ["create"]
    - apiGroups: [""]
      resources: ["pods/log"]
      verbs: ["get"]
    - apiGroups: [""]
      resources: ["pods/attach"]
      verbs: ["list", "get", "create", "delete", "update"]
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["list", "get", "create", "delete", "update"]
    - apiGroups: [""]
      resources: ["configmaps"]
      verbs: ["list", "get", "create", "delete", "update"]
    - apiGroups: [""]
      resources: ["services"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
    - apiGroups: ["apps"]
      resources: ["deployments"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  clusterWideAccess: true
  podSecurityPolicy:
    enabled: false
    resourceNames:
      - gitlab-runner
then
helm install --namespace gitlab gitlab-runner -f values.yaml gitlab/gitlab-runner
and my .gitlab-ci.yml has the following stage:
script:
  - mkdir -p /etc/deploy
  - echo $kube_config | base64 -d > $KUBECONFIG
  - sed -i "s/IMAGE_TAG/$CI_PIPELINE_ID/g" deployment.yaml
  - cat deployment.yaml
  - kubectl apply -f deployment.yaml
and I got the following error in the pipeline logs:
$ kubectl apply -f deployment.yaml
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "java-demo", Namespace: "gitlab"
Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "name":"java-demo" "namespace":"gitlab"] "spec":map["replicas":'\x01' "selector":map["matchLabels":map["app":"java-demo"]] "template":map["metadata":map["labels":map["app":"java-demo"]] "spec":map["containers":[map["image":"square2019/dummy-repo:555060965" "imagePullPolicy":"Always" "name":"java-demo" "ports":[map["containerPort":'\u1f90']]]]]]]]}
from server for: "deployment.yaml": deployments.apps "java-demo" is forbidden: User "system:serviceaccount:gitlab:default" cannot get resource "deployments" in API group "apps" in the namespace "gitlab"
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=services", GroupVersionKind: "/v1, Kind=Service"
Name: "java-demo", Namespace: "gitlab"
Object: &{map["apiVersion":"v1" "kind":"Service" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "name":"java-demo" "namespace":"gitlab"] "spec":map["ports":[map["name":"java-demo" "port":'P' "targetPort":'\u1f90']] "selector":map["app":"java-demo"] "type":"ClusterIP"]]}
from server for: "deployment.yaml": services "java-demo" is forbidden: User "system:serviceaccount:gitlab:default" cannot get resource "services" in API group "" in the namespace "gitlab"
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: command terminated with exit code 1
Am I missing some RBAC rules here?
thank you!
=== update 2022.06.04 ===
kubectl get role -n gitlab -o yaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
=== update 2022.06.05 ===
Looking at the logic in https://gitlab.com/gitlab-org/charts/gitlab-runner/-/blob/main/templates/role.yaml, I modified values.yaml with
clusterWideAccess: false
and now I get the role as:
k get role -n gitlab -o yaml
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    annotations:
      meta.helm.sh/release-name: gitlab-runner
      meta.helm.sh/release-namespace: gitlab
    creationTimestamp: "2022-06-05T03:49:57Z"
    labels:
      app: gitlab-runner
      app.kubernetes.io/managed-by: Helm
      chart: gitlab-runner-0.41.0
      heritage: Helm
      release: gitlab-runner
    name: gitlab-runner
    namespace: gitlab
    resourceVersion: "283754"
    uid: 8040b295-c9fc-47cb-8c5c-74cbf6c4d8a7
  rules:
  - apiGroups:
    - ""
    resources:
    - pods
    verbs:
    - list
    - get
    - watch
    - create
    - delete
  - apiGroups:
    - ""
    resources:
    - pods/exec
    verbs:
    - create
  - apiGroups:
    - ""
    resources:
    - pods/log
    verbs:
    - get
  - apiGroups:
    - ""
    resources:
    - pods/attach
    verbs:
    - list
    - get
    - create
    - delete
    - update
  - apiGroups:
    - ""
    resources:
    - secrets
    verbs:
    - list
    - get
    - create
    - delete
    - update
  - apiGroups:
    - ""
    resources:
    - configmaps
    verbs:
    - list
    - get
    - create
    - delete
    - update
  - apiGroups:
    - ""
    resources:
    - services
    verbs:
    - get
    - list
    - watch
    - create
    - update
    - patch
    - delete
  - apiGroups:
    - apps
    resources:
    - deployments
    verbs:
    - get
    - list
    - watch
    - create
    - update
    - patch
    - delete
kind: List
metadata:
  resourceVersion: ""
service account and RoleBinding
k get sa -n gitlab
NAME            SECRETS   AGE
default         1         3d2h
gitlab-runner   1         2d2h

k get RoleBinding -n gitlab
NAME            ROLE                 AGE
gitlab-runner   Role/gitlab-runner   9h
However, the same error persists.
=== update 2022.06.06 ===
I applied the following to fix the issue for the moment:
kubectl create rolebinding --namespace=gitlab gitlab-runner-4 --role=gitlab-runner --serviceaccount=gitlab:default
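The error messages show requests coming from system:serviceaccount:gitlab:default, while the chart's RoleBinding only covered the gitlab-runner service account, so binding the role to the default service account (or pointing the job at the gitlab-runner service account) clears the denial. A declarative equivalent of the command above, as a sketch using the same names:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-runner-4
  namespace: gitlab
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: gitlab-runner
subjects:
  - kind: ServiceAccount
    name: default
    namespace: gitlab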

setting up build pod: Timed out while waiting for ServiceAccount/<service_account_name> to be present in the cluster

I am using Helm charts to deploy GitLab Runner into a Kubernetes cluster. I want the pods created when the runner is triggered to use a custom service account instead of the default one. I created the Role and ClusterRole and set up the role bindings.
However, I am getting the following error when running a CI job
From Gitlab CI
Running with gitlab-runner 15.0.0 (cetx4b)
on initial-runner -P-d1RhT
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: namespace_test
Using Kubernetes executor with image registry.gitlab.com/docker-images/ubuntu-base:latest ...
Using attach strategy to execute scripts...
Preparing environment
00:05
ERROR: Job failed (system failure): prepare environment: setting up build pod: Timed out while waiting for ServiceAccount/gitlab-runner to be present in the cluster. Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information
List roles and service accounts:
# get rolebindings & clusterrolebindings
kubectl get rolebindings,clusterrolebindings -n namespace_test | grep gitlab-runner
# output
# rolebinding.rbac.authorization.k8s.io/gitlab-runner          Role/gitlab-runner
# clusterrolebinding.rbac.authorization.k8s.io/gitlab-runner   ClusterRole/gitlab-runner
---
# get serviceaccounts
kubectl get serviceaccounts -n namespace_test
# output
# NAME                   SECRETS   AGE
# default                1         6h50m
# gitlab-runner          1         24m
# kubernetes-dashboard   1         6h50m
# mysql                  2         6h49m
Helm values:
runners:
  concurrent: 8
  name: initial-runner
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "namespace_test"
        image = "registry.gitlab.com/docker-images/ubuntu-base:latest"
        service_account = "gitlab-runner"
  tags: base
rbac:
  create: false
  serviceAccountName: gitlab-runner
Any ideas on how to solve this issue?
In my case, I forgot to give the "gitlab-runner" role the right permissions on the "serviceaccounts" resource.
Ensure the role attached to your GitLab Runner has the following specification:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["list", "get", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["list", "get", "create", "delete", "update"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["list", "get", "create", "delete", "update"]
  - apiGroups: [""]
    resources: ["pods/attach"]
    verbs: ["list", "get", "create", "delete", "update"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["list", "get", "create", "delete", "update"]

CephFS failing to mount on Kubernetes

I set up a Ceph cluster and mounted it manually with the sudo mount -t command following the official documentation, and I checked the status of my Ceph cluster: no problems there. Now I am trying to mount my CephFS on Kubernetes, but the pod is stuck in ContainerCreating after I run the kubectl create command because the mount is failing. I have looked at many related problems/solutions online, but nothing works.
For reference, I am following this guide: https://medium.com/velotio-perspectives/an-innovators-guide-to-kubernetes-storage-using-ceph-a4b919f4e469
My setup consists of 5 AWS instances, and they are as follows:
Node 1: Ceph Mon
Node 2: OSD1 + MDS
Node 3: OSD2 + K8s Master
Node 4: OSD3 + K8s Worker1
Node 5: CephFS + K8s Worker2
Is it okay to stack K8s on top of the same instance as Ceph? I am pretty sure that is allowed, but if that is not allowed, please let me know.
In the kubectl describe pod output, this is the error/warning:
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /root/userone/kubelet/pods/bbf28924-3639-11ea-879d-0a6b51accf30/volumes/kubernetes.io~cephfs/pvc-4777686c-3639-11ea-879d-0a6b51accf30 --scope -- mount -t ceph -o name=kubernetes-dynamic-user-4d05a2df-3639-11ea-b2d3-5a4147fda646,secret=AQC4whxeqQ9ZERADD2nUgxxOktLE1OIGXThBmw== 172.31.15.110:6789:/pvc-volumes/kubernetes/kubernetes-dynamic-pvc-4d05a269-3639-11ea-b2d3-5a4147fda646 /root/userone/kubelet/pods/bbf28924-3639-11ea-879d-0a6b51accf30/volumes/kubernetes.io~cephfs/pvc-4777686c-3639-11ea-879d-0a6b51accf30
Output: Running scope as unit run-2382233.scope.
couldn't finalize options: -34
These are my .yaml files:
Provisioner:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-provisioner-dt
  namespace: test-dt
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update", "create"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns", "coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-provisioner-dt
  namespace: test-dt
subjects:
  - kind: ServiceAccount
    name: test-provisioner-dt
    namespace: test-dt
roleRef:
  kind: ClusterRole
  name: test-provisioner-dt
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-provisioner-dt
  namespace: test-dt
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgres-pv
  namespace: test-dt
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.31.15.110:6789
  adminId: admin
  adminSecretName: ceph-secret-admin-dt
  adminSecretNamespace: test-dt
  claimRoot: /pvc-volumes
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: test-dt
spec:
  storageClassName: postgres-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
The output of kubectl get pv and kubectl get pvc shows the volumes are bound and claimed, with no errors.
The provisioner pod logs all show success, with no errors.
Please help!

What apiGroups and resources exist for RBAC rules in kubernetes?

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: xi-{{instanceId}}
  name: deployment-creation
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["batch", "extensions"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
In the above example, I permit various operations on pods and jobs.
For pods, the apiGroup is blank. For jobs, the apiGroup may be batch or extensions.
Where can I find all the possible resources, and which apiGroup I should use with each resource?
kubectl api-resources will list all the supported resource types and their API group. Here is the table of resource-types.
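For example (a sketch; column names vary slightly between kubectl versions), you can filter by group and list the verbs each resource supports, which is exactly what you need when writing RBAC rules:

# resources served by the apps group
kubectl api-resources --api-group=apps

# include the verbs each resource supports
kubectl api-resources -o wide

# namespaced resources in the core ("") group
kubectl api-resources --api-group="" --namespaced=true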
just to add to #suresh's answer, here is a list of apiGroups

kubernetes RBAC role verbs to exec to pod

In my 1.9 cluster I created this deployment role for the dev user. Deployment works as expected. Now I want to give exec and logs access to the developer. What role do I need to add for exec to the pod?
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deployment-manager
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["deployments", "replicasets", "pods"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
Error message:
kubectl exec nginx -it -- sh
Error from server (Forbidden): pods "nginx" is forbidden: User "dev" cannot create pods/exec in the namespace "dev"
Thanks
SR
The RBAC docs say that
Most resources are represented by a string representation of their name, such as “pods”, just as it appears in the URL for the relevant API endpoint. However, some Kubernetes APIs involve a “subresource”, such as the logs for a pod. [...] To represent this in an RBAC role, use a slash to delimit the resource and subresource.
To allow a subject to read both pods and pod logs, and be able to exec into the pod, you would write:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-and-pod-logs-reader
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
Some client libraries may do an HTTP GET to negotiate a WebSocket first, which would require the "get" verb. kubectl sends an HTTP POST instead, which is why it requires the "create" verb in that case.
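The Role by itself grants nothing until it is bound; a minimal RoleBinding sketch, assuming the user is named dev and the Role is created in the dev namespace to match the error above:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-and-pod-logs-reader
  namespace: dev
subjects:
  - kind: User
    name: dev
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-and-pod-logs-reader
  apiGroup: rbac.authorization.k8s.io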