Two Kubernetes clusters act differently for RBAC

I have created an application that needs access to list, create, update and delete different Kubernetes resources, and I created a ClusterRole for it as below. Everything works fine on my local K8s cluster running on MicroK8s, but when I deployed it on a bare-metal cluster with the same version of K8s I get errors saying that I don't have the proper access.
How is this possible (both should behave the same), and is there a way to find these errors in advance?
My ClusterRole:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: {{ .Release.Namespace }}-cluster-manager-role
rules:
- apiGroups: ["", "apps", "core", "autoscaling"] # --> I was getting an error that I could not create an HPA, but after adding "autoscaling" to the apiGroups I can now create one
  resources: ["*", "namespaces"]
  verbs: ["get", "watch", "list", "patch", "create", "delete", "update"]
# ================
# Current ClusterRole on MicroK8s (which allows me to do all of these things)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: "2021-05-31T12:05:58Z"
  name: default-cluster-manager-role
  resourceVersion: "937643"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/default-cluster-manager-role
  uid: 16fb63d6-1261-48a9-bc7f-5c8fffb72c9d
rules:
- apiGroups:
  - ""
  - apps
  - core
  resources:
  - '*'
  - namespaces
  verbs:
  - get
  - watch
  - list
  - patch
  - create
  - delete
  - update
Kubernetes version:
# Microk8s
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.15", GitCommit:"2adc8d7091e89b6e3ca8d048140618ec89b39369", GitTreeState:"clean", BuildDate:"2020-09-02T11:31:21Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
# Bare-metal
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.15", GitCommit:"2adc8d7091e89b6e3ca8d048140618ec89b39369", GitTreeState:"clean", BuildDate:"2020-09-02T11:31:21Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Some of the errors that I got:
time="2021-06-22T08:45:31Z" level=error msg="Getting list of PVCs for namespace wws-test failed." func=src/k8s.CreateClusterRole file="/src/k8s/k8s.go:1304"
time="2021-06-22T08:45:31Z" level=error msg="clusterroles.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:wws:wws-cluster-manager-sa\" cannot create resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope" func=src/k8s.CreateClusterRole file="/src/k8s/k8s.go:1305"
time="2021-06-22T08:45:31Z" level=error msg="Getting list of PVCs for namespace wws-test failed." func=src/k8s.CreateClusterRoleBinding file="/src/k8s/k8s.go:1232"
time="2021-06-22T08:45:31Z" level=error msg="clusterrolebindings.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:wws:wws-cluster-manager-sa\" cannot create resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope" func=src/k8s.CreateClusterRoleBinding file="/src/k8s/k8s.go:1233"
time="2021-06-22T08:45:32Z" level=error msg="Getting list of PVCs for namespace wws-test failed." func=src/k8s.CreateRole file="/src/k8s/k8s.go:1448"
time="2021-06-22T08:45:32Z" level=error msg="roles.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:wws:wws-cluster-manager-sa\" cannot create resource \"roles\" in API group \"rbac.authorization.k8s.io\" in the namespace \"wws-test\"" func=src/k8s.CreateRole file="/src/k8s/k8s.go:1449"

You should have a look at the ClusterRoleBindings (kubectl get clusterrolebinding -o wide) that apply to the ServiceAccount system:serviceaccount:wws:wws-cluster-manager-sa.
I guess that on MicroK8s your user can do anything on your local cluster, but the real cluster will not let you create new ClusterRoles/ClusterRoleBindings with the default permissions.
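A quick way to find this kind of error in advance is to impersonate the service account and ask the API server directly with kubectl auth can-i (a sketch, using the service account and namespace names that appear in the error logs above):

# Check individual permissions before deploying:
kubectl auth can-i create clusterroles --as=system:serviceaccount:wws:wws-cluster-manager-sa
kubectl auth can-i create clusterrolebindings --as=system:serviceaccount:wws:wws-cluster-manager-sa
kubectl auth can-i create roles -n wws-test --as=system:serviceaccount:wws:wws-cluster-manager-sa

# Or dump everything the service account is allowed to do:
kubectl auth can-i --list --as=system:serviceaccount:wws:wws-cluster-manager-sa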

I don't know why this happened, but I solved the issue by using * for all three fields (apiGroups, resources and verbs):
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
I am aware that this is not a clean or perfect solution, especially if you want finer control over the resources and verbs the role should have access to. But since no one knows why this happened (I even posted it as an issue on the Kubernetes GitHub repo) and I don't have time to dig deeper into it, I am accepting my own answer.
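For what it's worth, the forbidden errors above all mention the rbac.authorization.k8s.io API group, which the original rule never listed. If you do want tighter control than the all-wildcards rule, a hedged alternative (assuming the application only needs the RBAC resources shown in the logs on top of the original verbs) could look like this:

rules:
- apiGroups: ["", "apps", "autoscaling"]
  resources: ["*"]
  verbs: ["get", "watch", "list", "patch", "create", "delete", "update"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
  verbs: ["get", "list", "create", "delete", "update"]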

Related

How does priorityClass work?

I am trying to use priorityClass.
I create two pods: the first has system-node-critical priority and the second cluster-node-critical priority.
Both pods need to run on a node labeled with nodeName: k8s-minion1, but that node has only 2 CPUs while both pods request 1.5 CPU.
I would then expect the second pod to run and the first to be in Pending status. Instead, the first pod always runs, no matter which priorityClass I assign to the second pod.
I even tried labeling the node after applying my manifest, but that does not change anything.
Here is my manifest:
apiVersion: v1
kind: Pod
metadata:
  name: firstpod
spec:
  containers:
  - name: container
    image: nginx
    resources:
      requests:
        cpu: 1.5
  nodeSelector:
    nodeName: k8s-minion1
  priorityClassName: cluster-node-critical
---
apiVersion: v1
kind: Pod
metadata:
  name: secondpod
spec:
  containers:
  - name: container
    image: nginx
    resources:
      requests:
        cpu: 1.5
  priorityClassName: system-node-critical
  nodeSelector:
    nodeName: k8s-minion1
It is worth noting that I get an "unknown object: priorityclass" error when I run kubectl get priorityclass, and when I export my running pod to YAML with kubectl get pod secondpod -o yaml, I can't find any priorityClassName: field.
Here is my version info:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0+coreos.0", GitCommit:"6bb2e725fc2876cd94b3900fc57a1c98ca87a08b", GitTreeState:"clean", BuildDate:"2018-04-02T16:49:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Any ideas why this is not working?
Thanks in advance,
Abdelghani
PriorityClasses first appeared in Kubernetes 1.8 as an alpha feature and graduated to beta in 1.11.
You are using 1.10, which means the feature is still alpha there.
Alpha features are not enabled by default, so you would need to enable it explicitly.
Unfortunately, Kubernetes 1.10 is no longer supported, so I'd suggest upgrading at least to 1.14, where the PriorityClass feature became stable.
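For reference, a minimal sketch of what this looks like once the API is available: on 1.10 the alpha feature gate PodPriority=true would have to be enabled on the apiserver and scheduler, while on 1.14+ the stable API can be used directly (the class name and value below are made up for illustration):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority            # hypothetical class name
value: 1000000                   # pods with a higher value preempt pods with a lower one
globalDefault: false
description: "Example class for pods that should preempt lower-priority workloads."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-pod            # hypothetical pod
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: nginx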

Adding a create permission for pods/portforward seems to remove the get permission for configmaps

I am trying to run helm status --tiller-namespace=$NAMESPACE $RELEASE_NAME from a container inside that namespace.
I have a role with the rule
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  verbs:
  - get
  - watch
bound to the default service account. But I was getting the error
Error: pods is forbidden: User "system:serviceaccount:mynamespace:default" cannot list resource "pods" in API group "" in the namespace "mynamespace"
So I added the list verb, like so:
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  verbs:
  - get
  - watch
  - list
and now I have progressed to the error cannot create resource "pods/portforward" in API group "". I couldn't find anything in the k8s docs on how to assign different verbs to different resources in the same apiGroup, but based on this example I assumed this should work:
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - ""
  resources:
  - pods/portforward
  verbs:
  - create
However, now I get the error cannot get resource "configmaps" in API group "". Note that I am running kubectl get cm $CMNAME before I run the helm status command.
So it seems that I did have permission to run kubectl get cm until I tried to add the permission to create pods/portforward.
Can anyone explain this to me, please?
Also, the cluster is running Kubernetes version
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.7+1.2.3.el7", GitCommit:"cfc2012a27408ac61c8883084204d10b31fe020c", GitTreeState:"archive", BuildDate:"2019-05-23T20:00:05Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
and helm version
Server: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
My issue was that I was deploying the manifests containing these Roles as part of a Helm chart (using Helm 2). However, the Tiller service account doing the deploying did not itself have the create pods/portforward permission. It was therefore unable to grant that permission and errored when trying to apply the manifest with the Roles. That meant the Role granting the configmap get permission was never created, hence the confusing error.
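This is RBAC privilege-escalation prevention: an account can only grant permissions it already holds. A hedged way to confirm it for your Tiller (the namespace and service account name below are assumptions, not taken from the chart):

# Does Tiller itself have the permission it is trying to grant?
kubectl auth can-i create pods/portforward \
  --as=system:serviceaccount:mynamespace:tiller -n mynamespace

# If the answer is "no", first add the rule to the Role/ClusterRole bound to
# Tiller's own service account, e.g.:
# - apiGroups: [""]
#   resources: ["pods/portforward"]
#   verbs: ["create"]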

How to update api versions list in Kubernetes

I am trying to use the "autoscaling/v2beta2" apiVersion in my configuration, following this tutorial. I am also on Google Kubernetes Engine.
However I get this error:
error: unable to recognize "backend-hpa.yaml": no matches for kind "HorizontalPodAutoscaler" in version "autoscaling/v2beta2"
When I list the available api-versions:
$ kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
certmanager.k8s.io/v1alpha1
cloud.google.com/v1beta1
coordination.k8s.io/v1beta1
custom.metrics.k8s.io/v1beta1
extensions/v1beta1
external.metrics.k8s.io/v1beta1
internal.autoscaling.k8s.io/v1alpha1
metrics.k8s.io/v1beta1
networking.gke.io/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scalingpolicy.kope.io/v1alpha1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
So indeed I am missing autoscaling/v2beta2.
Then I check my Kubernetes version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.6", GitCommit:"abdda3f9fefa29172298a2e42f5102e777a8ec25", GitTreeState:"clean", BuildDate:"2019-05-08T13:53:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.6-gke.13", GitCommit:"fcbc1d20b6bca1936c0317743055ac75aef608ce", GitTreeState:"clean", BuildDate:"2019-06-19T20:50:07Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}
So it looks like I have version 1.13.6, and autoscaling/v2beta2 has supposedly been available since 1.12.
So why is it not available for me?
Unfortunately, the HPA autoscaling/v2beta2 API is not available on GKE yet. It can be freely used with kubeadm and Minikube.
There is already an open issue on the Google issue tracker: https://issuetracker.google.com/135624588
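Since autoscaling/v2beta1 does appear in your api-versions output above, one hedged workaround is to write the HPA against that version instead (the HPA name is taken from your backend-hpa.yaml, while the target Deployment and the CPU threshold are illustrative assumptions):

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend                      # hypothetical target deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80     # v2beta1 uses targetAverageUtilization instead of v2beta2's target block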

Kubernetes RunAsUser is forbidden

When I try to create a pod with a non-root fsGroup (here 2000)
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: gcr.io/google-samples/node-hello:1.0
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: true
I am hitting this error:
Error from server (Forbidden): error when creating "test.yml": pods "security-context-demo" is forbidden: pod.Spec.SecurityContext.RunAsUser is forbidden
Version
root@ubuntuguest:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Can anyone help me with how to set up a ClusterRoleBinding in the cluster?
If the issue is indeed because of RBAC permissions, then you can try creating a ClusterRoleBinding with a cluster role, as explained here.
Instead of the last step in that post (using the authentication token to log in to the dashboard), you'll have to use that token and the config in your kubectl client when creating the pod.
For more info on the use of contexts, clusters, and users, see here.
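A minimal sketch of such a binding (the service account name and namespace are placeholders; binding to the built-in cluster-admin role mirrors the usual pattern in such guides and is an assumption here, not a recommendation for production):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: demo-admin-binding        # hypothetical name
subjects:
- kind: ServiceAccount
  name: demo-sa                   # hypothetical service account
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin             # built-in ClusterRole
  apiGroup: rbac.authorization.k8s.io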
You need to disable the SecurityContextDeny admission plugin when setting up the kube-apiserver.
On the master node, check which admission plugins are enabled:
ps -ef | grep kube-apiserver
and look at the --enable-admission-plugins flag, e.g.:
--enable-admission-plugins=LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook,DenyEscalatingExec
Ref: SecurityContextDeny
cd /etc/kubernetes
cp apiserver.conf apiserver.conf.bak
vim apiserver.conf
Find the SecurityContextDeny keyword, delete it, and save (:wq), then restart the API server:
systemctl restart kube-apiserver
That fixed it.
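On a kubeadm-managed cluster (an assumption, since the steps above target a systemd-managed apiserver) the equivalent change is made in the static pod manifest, and the kubelet restarts the apiserver automatically:

# /etc/kubernetes/manifests/kube-apiserver.yaml
# Remove SecurityContextDeny from the flag, e.g.:
#   - --enable-admission-plugins=NodeRestriction,SecurityContextDeny
# becomes
#   - --enable-admission-plugins=NodeRestriction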

How to avoid default secret being attached to ServiceAccount?

I'm trying to create a service account with either no secrets or just the secret I specify, but the kubelet always seems to attach the default secret no matter what.
Service Account definition
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  name: test
secrets:
- name: default-token-4pbsm
Submit
$ kubectl create -f service-account.yaml
serviceaccount "test" created
Get
$ kubectl get -o=yaml serviceaccount test
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  creationTimestamp: 2017-05-30T12:25:30Z
  name: test
  namespace: default
  resourceVersion: "31414"
  selfLink: /api/v1/namespaces/default/serviceaccounts/test
  uid: 122b0643-4533-11e7-81c6-42010a8a005b
secrets:
- name: default-token-4pbsm
- name: test-token-5g3wb
As you can see above, test-token-5g3wb was automatically created and attached to the service account without me specifying it.
As far as I understand, automountServiceAccountToken only affects the mounting of those secrets into pods launched with that service account. (?)
Is there any way I can avoid that default secret ever being created and attached?
Versions
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T20:41:24Z", GoVersion:"go1.8.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Your understanding of automountServiceAccountToken is right: it applies to pods launched with that service account.
The automatic token addition is done by the token controller. Even if you edit the service account to delete the token, it will be added again.
You must pass a service account private key file to the token controller in the controller-manager using the --service-account-private-key-file option. The private key is used to sign generated service account tokens. Similarly, you must pass the corresponding public key to the kube-apiserver using the --service-account-key-file option. The public key is used to verify the tokens during authentication.
The paragraph above is taken from the Kubernetes docs. So in theory you could avoid passing those flags, but I'm not sure how to do that and I would not recommend it.
You might also find this doc helpful.
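While the token controller will keep re-adding the secret to the ServiceAccount object itself, you can at least prevent it from being mounted into pods, which is usually the practical concern. A minimal sketch (the pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod                    # hypothetical pod
spec:
  serviceAccountName: test              # the service account from the question
  automountServiceAccountToken: false   # can be set on the pod as well as on the ServiceAccount
  containers:
  - name: app
    image: nginx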