Error while importing AKS to Rancher 2.6.2

I have deployed Rancher 2.6.2 via Helm. When importing an existing AKS cluster into Rancher, it gives an error. The error below is at the pod level:
{APIGroups:["rke-machine-config.cattle.io"], Resources:["*"], Verbs:["create"]}; resolution errors: [[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found, clusterroles.rbac.authorization.k8s.io "system:discovery" not found]], requeuing
Manifests for the ClusterRole and ClusterRoleBinding:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rancher-cr
  namespace: rancher-demo
rules:
- apiGroups:
  - '*'
  resources:
  - statefulsets
  - secrets
  verbs:
  - create
  - update
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
And the ClusterRoleBinding:
---
# Source: rancher/templates/clusterRoleBinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rancher-crb
  namespace: rancher-demo
  labels:
    chart: rancher-2.6.2
subjects:
- kind: ServiceAccount
  name: rancher-sa
  namespace: rancher-demo
roleRef:
  kind: ClusterRole
  name: rancher-cr
  apiGroup: rbac.authorization.k8s.io
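The resolution errors indicate that RBAC resolution could not find the built-in cluster-admin and system:discovery ClusterRoles. As a first check, you can verify that those ClusterRoles exist and list what the Rancher service account is actually allowed to do; a hedged sketch, assuming the rancher-sa account from the manifests above:

# Verify the built-in ClusterRoles exist
kubectl get clusterrole cluster-admin system:discovery

# List the permissions the service account effectively has
kubectl auth can-i --list --as=system:serviceaccount:rancher-demo:rancher-sa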

Related

Any docs on what rights need to be given to do a thing on Kubernetes?

Here are my first ServiceAccount, ClusterRole, and ClusterRoleBinding:
---
# Create namespace
apiVersion: v1
kind: Namespace
metadata:
  name: devops-tools
---
# Create Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: devops-tools
  name: bino
---
# Set Secrets for the SA
# k8s >= 1.24 needs this to be created manually
# https://stackoverflow.com/a/72258300
apiVersion: v1
kind: Secret
metadata:
  name: bino-token
  namespace: devops-tools
  annotations:
    kubernetes.io/service-account.name: bino
type: kubernetes.io/service-account-token
---
# Create Cluster Role
# Beware !!! This is cluster-wide FULL RIGHTS
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: devops-tools-role
  namespace: devops-tools
rules:
- apiGroups:
  - ""
  - apps
  - autoscaling
  - batch
  - extensions
  - policy
  - networking.k8s.io
  - rbac.authorization.k8s.io
  resources:
  - pods
  - componentstatuses
  - configmaps
  - daemonsets
  - deployments
  - events
  - endpoints
  - horizontalpodautoscalers
  - ingress
  - jobs
  - limitranges
  - namespaces
  - nodes
  - persistentvolumes
  - persistentvolumeclaims
  - resourcequotas
  - replicasets
  - replicationcontrollers
  - serviceaccounts
  - services
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# Bind the SA to the Cluster Role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: devops-tools-role-binding
subjects:
- namespace: devops-tools
  kind: ServiceAccount
  name: bino
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: devops-tools-role
It works when I use it to create a Namespace, Deployment, and Service. But it fails (complaining about insufficient rights) when I try to create kind: Ingress. Then I tried adding:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: devops-tools-role-binding-admin
subjects:
- namespace: devops-tools
  kind: ServiceAccount
  name: bino
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
and now 'bino' can do all things.
My question is: are there any docs on which 'apiGroups' and 'resources' need to be assigned so one service account can do some things (not all things)?
Sincerely
-bino-
You can run this command to determine the apiGroup of a resource:
kubectl api-resources
You will see something like:
NAME        SHORTNAMES   APIVERSION             NAMESPACED   KIND
ingresses   ing          networking.k8s.io/v1   true         Ingress
So you would need to add this to the rules of your ClusterRole. Note that the apiGroup is networking.k8s.io without the version suffix, and the resource is ingresses, not ingress; list whichever verbs you need:
- apiGroups:
  - "networking.k8s.io"
  resources:
  - "ingresses"
  verbs:
  - "get"
  - "create"

Kubernetes cannot create resource "namespaces" in API group "" at the cluster scope even after creating rolebindings

I'm running a pipeline that creates a Kubernetes namespace, but when I run it I get:
Error from server (Forbidden): namespaces is forbidden: User "system:serviceaccount:gitlab-runner:default" cannot create resource "namespaces" in API group "" at the cluster scope
I created a ClusterRole and a ClusterRoleBinding to allow the default service account in the gitlab-runner namespace to create namespaces:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: modify-namespace
rules:
- apiGroups: [""]
  resources:
  - namespace
  verbs:
  - create
and:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: modify-namespace-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: modify-namespace
subjects:
- kind: ServiceAccount
  name: default
  namespace: gitlab-runner
But that gives me the same error.
What am I doing wrong?
[""] in clusterrole manifest it should be just "".
because [""] will be array where apiGroups expects a string.
under resources it should be namespaces not namespace because :
kubectl api-resources | grep 'namespace\|NAME'
NAME SHORTNAMES APIVERSION NAMESPACED KIND
namespaces ns v1 false Namespace
so clusterrole manifest should be as following :
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: modify-namespace
rules:
- apiGroups: ""
resources:
- namespaces
verbs:
- create
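To confirm the fix took effect, you can impersonate the service account and ask the API server; assuming the binding from the question:

kubectl auth can-i create namespaces \
  --as=system:serviceaccount:gitlab-runner:default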
I had the issue below:
Namespaces is forbidden: User "system:serviceaccount:openshift-operators:minio-operator" cannot create resource "namespaces" in API group "" at the cluster scope
It got solved with the YAML below:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-role-cesar-3
rules:
- apiGroups: [""]
  resources:
  - namespaces
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-binding-cesar-3
  namespace: openshift-operators
subjects:
- kind: ServiceAccount
  name: minio-operator
  namespace: openshift-operators
roleRef:
  kind: ClusterRole
  name: cluster-role-cesar-3
  apiGroup: rbac.authorization.k8s.io
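For simple grants like this, kubectl can also generate the equivalent objects imperatively, which sidesteps YAML indentation mistakes; a sketch using the names from the answer above:

kubectl create clusterrole cluster-role-cesar-3 \
  --verb=create --resource=namespaces
kubectl create clusterrolebinding role-binding-cesar-3 \
  --clusterrole=cluster-role-cesar-3 \
  --serviceaccount=openshift-operators:minio-operator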

How to write a psp in k8s only for a specific user?

minikube start \
  --extra-config=apiserver.enable-admission-plugins=PodSecurityPolicy \
  --addons=pod-security-policy
We have a default namespace in which the nginx service account does not have the rights to launch the nginx container. When creating a pod, we use the command:
kubectl run nginx --image=nginx -n default --as system:serviceaccount:default:nginx-sa
As a result, we get the error:
Error: container has runAsNonRoot and image will run as root (pod: "nginx_default(49e939b0-d238-4e04-a122-43f4cfabea22)", container: nginx)
As I understand it, it is necessary to write a PSP that will allow the nginx-sa service account to run the pod, but I do not understand how to write it correctly for a specific service account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-sa
  namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nginx-sa-role
  namespace: default
rules:
- apiGroups: ["extensions", "apps", ""]
  resources: ["deployments", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nginx-sa-role-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: nginx-sa
  namespace: default
roleRef:
  kind: Role
  name: nginx-sa-role
  apiGroup: rbac.authorization.k8s.io
...but I do not understand how to write it correctly for a specific service account
After you get your special PSP ready for your nginx, you can grant your nginx-sa the right to use it like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: role-to-use-special-psp
rules:
- apiGroups:
  - policy
  resourceNames:
  - special-psp-for-nginx
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bind-to-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: role-to-use-special-psp
subjects:
- kind: ServiceAccount
  name: nginx-sa
  namespace: default
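The answer assumes a PodSecurityPolicy named special-psp-for-nginx already exists. A minimal sketch of what it might look like, assuming the goal is simply to let the nginx image run as root (note that PodSecurityPolicy was deprecated in Kubernetes 1.21 and removed in 1.25):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: special-psp-for-nginx
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny        # unlike the restrictive default, allows running as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'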

Forbidden error while describing/scaling a deployment as user system:node:ip.xx

I'm trying to execute kubectl commands from inside a container (name: autodeploy). I have configured a ClusterRole, ServiceAccount, and ClusterRoleBinding, but I am getting a Forbidden error when performing describe and scale actions on deployments:
Error from server (Forbidden): deployments.apps "test-deployment" is forbidden: User "system:node:ip-xx-xx-xx-xx.ec2.internal" cannot get resource "deployments" in API group "apps" in the namespace "abc"
The autodeploy container is also in the same namespace, abc.
ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: autodeploy
rules:
- apiGroups: ["*"]
  resources: ["deployments", "deployments/scale", "pods"]
  verbs: ["get", "list", "update"]
ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: autodeploy
  namespace: abc
ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: autodeploy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: autodeploy
subjects:
- kind: ServiceAccount
  name: autodeploy
  namespace: abc
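Note that the error shows the request arriving as system:node:..., not as system:serviceaccount:abc:autodeploy, which suggests the container is not using the ServiceAccount token at all (for example, because it picked up a kubeconfig from the host). A minimal sketch of a pod spec that actually runs under the autodeploy ServiceAccount; the image name is a placeholder:

apiVersion: v1
kind: Pod
metadata:
  name: autodeploy
  namespace: abc
spec:
  serviceAccountName: autodeploy   # use the SA token, not the node identity
  containers:
  - name: autodeploy
    image: autodeploy:latest       # placeholder image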

Kubernetes RBAC rules for PersistentVolume

I'm trying to create an RBAC Role for a service that needs a PersistentVolume, and it's still failing with a Forbidden error.
Here is my Role config:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: logdrop-user-full-access
  namespace: logdrop
rules:
- apiGroups: ["", "extensions", "apps", "autoscaling"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["batch"]
  resources:
  - jobs
  - cronjobs
  verbs: ["*"]
And this is my cut-down PersistentVolume manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: logdrop-pv
  namespace: logdrop
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: logdrop
    name: logdrop-pvc
  hostPath:
    path: /efs/logdrop/logdrop-pv
When I try to apply it, I get a Forbidden error:
$ kubectl --kubeconfig ~/logdrop/kubeconfig-logdrop.yml apply -f pv-test.yml
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=persistentvolumes", GroupVersionKind: "/v1, Kind=PersistentVolume"
Name: "logdrop-pv", Namespace: ""
Object: &{map["apiVersion":"v1" "kind":"PersistentVolume" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "name":"logdrop-pv"] "spec":map["accessModes":["ReadWriteMany"] "capacity":map["storage":"10Gi"] "claimRef":map["name":"logdrop-pvc" "namespace":"logdrop"] "hostPath":map["path":"/efs/logdrop/logdrop-pv"] "persistentVolumeReclaimPolicy":"Retain"]]}
from server for: "pv-test.yml": persistentvolumes "logdrop-pv" is forbidden: User "system:serviceaccount:logdrop:logdrop-user" cannot get resource "persistentvolumes" in API group "" at the cluster scope
On the last line it specifically says resource "persistentvolumes" in API group "" - that's exactly what I have allowed in the rules!
I can create the PV with admin credentials from the same YAML file, and I can create any other resources (pods, services, etc.) with the logdrop permissions. Only the PersistentVolume doesn't work, for some reason. Any idea why?
I'm using Kubernetes 1.15.0.
Update:
This is my RoleBinding, as requested:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: logdrop-user-view
  namespace: logdrop
subjects:
- kind: ServiceAccount
  name: logdrop-user
  namespace: logdrop
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: logdrop-user-full-access
It's not a ClusterRoleBinding, as my intention is to give the user access only to one namespace (logdrop), not to all namespaces across the cluster.
PVs, namespaces, nodes, and storage classes are cluster-scoped objects; to be able to list/watch them, you need to create a ClusterRole and bind it to the ServiceAccount via a ClusterRoleBinding. As an example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <name of your cluster role>
rules:
- apiGroups: [""]
  resources:
  - nodes
  - persistentvolumes
  - namespaces
  verbs: ["list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources:
  - storageclasses
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <name of your cluster role binding>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <name of your cluster role, matching the previous one>
subjects:
- kind: ServiceAccount
  name: <service account name>
  namespace: <service account namespace>
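A handy way to check whether a resource is namespaced (Role territory) or cluster-scoped (ClusterRole territory):

kubectl api-resources --namespaced=false   # cluster-scoped resources
kubectl api-resources --namespaced=true    # namespaced resources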
I see a potential problem here.
PersistentVolumes are cluster-scoped resources. They are expected to be provisioned by the administrator without any namespace.
PersistentVolumeClaims, however, can be created by users within a particular namespace, as they are namespaced resources.
That's why it works when you use admin credentials but returns an error with logdrop.
Please let me know if that makes sense.
The new Role needs to be granted to a user, or group of users, with a RoleBinding, e.g.:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: logdrop-rolebinding
  namespace: logdrop
subjects:
- kind: User
  name: logdrop-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: logdrop-user-full-access
  apiGroup: rbac.authorization.k8s.io
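Since a Role can never grant access to cluster-scoped resources regardless of how it is bound, you can confirm the scope mismatch directly; a quick check, assuming the service account from the question:

kubectl auth can-i get persistentvolumes \
  --as=system:serviceaccount:logdrop:logdrop-user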