I used the file below to create the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-reader
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: reader-cr
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-rb
subjects:
- kind: ServiceAccount
  name: sa-reader
roleRef:
  kind: ClusterRole
  name: reader-cr
  apiGroup: rbac.authorization.k8s.io
The kubeconfig I created looks something like this:
apiVersion: v1
kind: Config
preferences: {}
clusters:
- name: qa
  cluster:
    certificate-authority-data: <ca>
    server: https://<server>:443
users:
- name: sa-reader
  user:
    as-user-extra: {}
    token: <token>
contexts:
- name: qa
  context:
    cluster: qa
    user: sa-reader
    namespace: default
current-context: qa
With this kubeconfig file I can access resources in the default namespace, but not in any other namespace. How can I access resources in other namespaces as well?
You can operate on a namespace explicitly by using the -n (--namespace) option to kubectl:
$ kubectl -n my-other-namespace get pod
Or by changing your default namespace with the kubectl config command:
$ kubectl config set-context --current --namespace my-other-namespace
With the above command, all future invocations of kubectl will assume the my-other-namespace namespace.
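You can confirm which namespace the current context now points at with:
kubectl config view --minify --output 'jsonpath={..namespace}'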
An empty namespace in metadata defaults to namespace: default, so your RoleBinding applies only to the default namespace.
See ObjectMeta.
I suspect (!) you need to apply the RoleBinding to each of the namespaces in which you want the ServiceAccount to be permitted, as in the sketch below.
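For example, a per-namespace RoleBinding might look like this (a sketch assuming a namespace called my-other-namespace; note the subject must name the namespace the ServiceAccount lives in):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-rb
  namespace: my-other-namespace
subjects:
- kind: ServiceAccount
  name: sa-reader
  namespace: default
roleRef:
  kind: ClusterRole
  name: reader-cr
  apiGroup: rbac.authorization.k8s.io
Alternatively, a single ClusterRoleBinding to reader-cr grants the read access in every namespace at once.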
Here is my first ServiceAccount, ClusterRole, and ClusterRoleBinding:
---
# Create namespace
apiVersion: v1
kind: Namespace
metadata:
  name: devops-tools
---
# Create Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: devops-tools
  name: bino
---
# Set Secrets for SA
# k8s >= 1.24 needs the token Secret to be manually created
# https://stackoverflow.com/a/72258300
apiVersion: v1
kind: Secret
metadata:
  name: bino-token
  namespace: devops-tools
  annotations:
    kubernetes.io/service-account.name: bino
type: kubernetes.io/service-account-token
---
# Create Cluster Role
# Beware !!! This is cluster-wide FULL RIGHTS
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: devops-tools-role
  namespace: devops-tools
rules:
- apiGroups:
  - ""
  - apps
  - autoscaling
  - batch
  - extensions
  - policy
  - networking.k8s.io
  - rbac.authorization.k8s.io
  resources:
  - pods
  - componentstatuses
  - configmaps
  - daemonsets
  - deployments
  - events
  - endpoints
  - horizontalpodautoscalers
  - ingress
  - jobs
  - limitranges
  - namespaces
  - nodes
  - persistentvolumes
  - persistentvolumeclaims
  - resourcequotas
  - replicasets
  - replicationcontrollers
  - serviceaccounts
  - services
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# Bind the SA to Cluster Role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: devops-tools-role-binding
subjects:
- namespace: devops-tools
  kind: ServiceAccount
  name: bino
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: devops-tools-role
---
It works when I use it to create a Namespace, a Deployment, and a Service.
But it fails (complaining that the account 'has no right') when I try to create a kind: Ingress.
Then I tried to add:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: devops-tools-role-binding-admin
subjects:
- namespace: devops-tools
  kind: ServiceAccount
  name: bino
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
and now 'bino' can do everything.
My question is: are there any docs on which 'apiGroups' and 'resources' need to be assigned so a service account can do some things (not all things)?
Sincerely
-bino-
You can run this command to determine the apiGroup of a resource:
kubectl api-resources
You will see something like:
NAME        SHORTNAMES   APIVERSION             NAMESPACED   KIND
ingresses   ing          networking.k8s.io/v1   true         Ingress
So you would need to add this to the rules of your ClusterRole. Note that apiGroups takes the bare group name without the version, and the resource name is the plural, ingresses:
- apiGroups:
  - "networking.k8s.io"
  resources:
  - "ingresses"
  verbs:
  - "get"
How do you disable shell or bash access to the containers in a pod? I do not want anyone to get inside a pod via kubectl exec, docker exec, or k9s.
kubectl is a CLI tool: it connects to the K8s API server and authenticates, so you can restrict a user through their Role, and RBAC with the proper permissions will resolve your issue. Two caveats: RBAC is purely additive (there are no deny rules), so you block exec by never granting the pods/exec subresource; and a wildcard resources: ["*"] also matches subresources such as pods/exec, so enumerate the resources you want to allow instead. Note that docker exec runs on the node itself and bypasses the API server entirely, so RBAC cannot restrict it.
Ref : https://kubernetes.io/docs/reference/access-authn-authz/rbac/
Example (the resource list here is illustrative; grant whatever your users actually need, just never pods/exec or pods/attach):
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceaccount
  namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: default-user-role
  namespace: default
rules:
# No rule mentions pods/exec or pods/attach, so both stay forbidden.
- apiGroups: ["", "extensions", "apps"]
  resources: ["pods", "pods/log", "deployments", "replicasets", "services", "configmaps"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: default-user-view
  namespace: default
subjects:
- kind: ServiceAccount
  name: serviceaccount
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: default-user-role
Check auth with (exec is a create on the pods/exec subresource):
kubectl auth can-i create pods --subresource=exec --as=system:serviceaccount:default:serviceaccount -n default
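With the Role above, which never grants pods/exec, the check should answer:
no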
Problem
I have a simple RBAC configuration to access the Kubernetes API in-cluster. However, I am getting what appears to be conflicting information from kubectl. After deploying the manifest, it appears that RBAC is set up properly.
$ kubectl exec -ti pod/controller -- kubectl auth can-i get namespaces
Warning: resource 'namespaces' is not namespace scoped
yes
However, actually making the request yields a permission error
$ kubectl exec -ti pod/controller -- kubectl get namespaces
Error from server (Forbidden): namespaces is forbidden: User "system:serviceaccount:default:controller" cannot list resource "namespaces" in API group "" at the cluster scope
command terminated with exit code 1
Manifest
apiVersion: 'v1'
kind: 'ServiceAccount'
metadata:
  name: 'controller'
---
apiVersion: 'rbac.authorization.k8s.io/v1'
kind: 'Role'
metadata:
  name: 'read-namespaces'
rules:
- apiGroups:
  - ''
  resources:
  - 'namespaces'
  verbs:
  - 'get'
  - 'watch'
  - 'list'
---
apiVersion: 'rbac.authorization.k8s.io/v1'
kind: 'RoleBinding'
metadata:
  name: 'read-namespaces'
roleRef:
  apiGroup: ''
  kind: 'Role'
  name: 'read-namespaces'
subjects:
- kind: 'ServiceAccount'
  name: 'controller'
---
apiVersion: 'v1'
kind: 'Pod'
metadata:
  name: 'controller'
  labels:
    'app': 'controller'
spec:
  containers:
  - name: 'kubectl'
    image: 'bitnami/kubectl:latest'
    imagePullPolicy: 'Always'
    command:
    - 'sleep'
    - '3600'
  serviceAccountName: 'controller'
---
Other Info
I've tried kubectl auth reconcile -f manifest.yaml as well as kubectl apply -f manifest.yaml and the results are the same.
I've also set "read-namespaces" RoleBinding.subjects[0].namespace to the proper namespace ("default" in this case). No change in output.
Namespace is a cluster-scoped resource, so you need a ClusterRole and a ClusterRoleBinding. That also explains the seemingly conflicting kubectl output: kubectl auth can-i get namespaces checks the verb in your current namespace by default, where the RoleBinding does apply, while kubectl get namespaces performs a list at the cluster scope, which a namespaced binding cannot grant.
apiVersion: 'rbac.authorization.k8s.io/v1'
kind: 'ClusterRole'
metadata:
  name: 'read-namespaces'
rules:
- apiGroups:
  - ''
  resources:
  - 'namespaces'
  verbs:
  - 'get'
  - 'watch'
  - 'list'
---
apiVersion: 'rbac.authorization.k8s.io/v1'
kind: 'ClusterRoleBinding'
metadata:
  name: 'read-namespaces'
roleRef:
  apiGroup: 'rbac.authorization.k8s.io'
  kind: 'ClusterRole'
  name: 'read-namespaces'
subjects:
- kind: 'ServiceAccount'
  name: 'controller'
  namespace: 'default'
---
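You can confirm with impersonation (assuming the controller ServiceAccount lives in default, as in the question):
kubectl auth can-i list namespaces --as=system:serviceaccount:default:controller
which should now answer yes.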
Roles are per-namespace; you need to create a ClusterRole and bind it with a ClusterRoleBinding.
If you want to bind your ClusterRole to a specific namespace, you can do something like this, using a RoleBinding on a ClusterRole:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa
  namespace: myapp
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-myapp
rules:
- apiGroups:
  - batch
  resources:
  - cronjobs
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job001
  namespace: myapp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: role-myapp
subjects:
- kind: ServiceAccount
  name: sa
  namespace: myapp
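Because the binding is a RoleBinding in myapp, the permissions stop at that namespace; impersonation makes this visible (assuming the manifests above are applied unchanged):
kubectl auth can-i create cronjobs --as=system:serviceaccount:myapp:sa -n myapp     # yes
kubectl auth can-i create cronjobs --as=system:serviceaccount:myapp:sa -n default   # no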
I want to have a service account that can create a deployment, so I am creating a service account, then a role, and then a rolebinding. The YAML files are below:
ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: testsa
  namespace: default
Role:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testrole
  namespace: default
rules:
- apiGroups:
  - ""
  - batch
  - apps
  resources:
  - jobs
  - pods
  - deployments
  - deployments/scale
  - replicasets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
  - scale
RoleBinding:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testrolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: testsa
  namespace: default
roleRef:
  kind: Role
  name: testrole
  apiGroup: rbac.authorization.k8s.io
But after applying these files, when I do the following command to check if the service account can create a deployment, it answers no.
kubectl auth can-i --as=system:serviceaccount:default:testsa create deployment
The exact answer is:
no - no RBAC policy matched
It works fine when I do checks for Pods.
What am I doing wrong?
My kubernetes versions are as follows:
kubectl version --short
Client Version: v1.16.1
Server Version: v1.12.10-gke.17
Since your server is a 1.12 cluster (the server version is what matters here), you should include the extensions API group in the Role for the deployments resource. extensions was deprecated in Kubernetes 1.16 in favor of the apps group: https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/
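That is, extend the Role's apiGroups list (a sketch of just the changed rule; the resources and verbs stay as in the question):
rules:
- apiGroups:
  - ""
  - batch
  - apps
  - extensions   # needed on 1.12, where deployments are still served from extensions/v1beta1
  resources:
  - jobs
  - pods
  - deployments
  - deployments/scale
  - replicasets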
I am trying to create a Role and RoleBinding so I can use Helm. What are the equivalent kubectl commands to create the following resources? Using the command line makes dev-ops simpler in my scenario.
Role
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager-foo
  namespace: foo
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
RoleBinding
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding-foo
  namespace: foo
subjects:
- kind: ServiceAccount
  name: tiller-foo
  namespace: foo
roleRef:
  kind: Role
  name: tiller-manager-foo
  apiGroup: rbac.authorization.k8s.io
Update
According to #nightfury1204 I can run the following to create the Role:
kubectl create role tiller-manager-foo --namespace foo --verb=* --resource=.,.apps,.batch,.extensions -n foo --dry-run -o yaml
This outputs:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: tiller-manager-foo
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
The namespace is missing, and secondly, is this equivalent?
For the Role:
kubectl create role tiller-manager-foo --verb=* --resource=*.batch,*.extensions,*.apps,*. -n foo
--resource=* support was added in kubectl 1.12.
For the RoleBinding:
kubectl create rolebinding tiller-binding-foo --role=tiller-manager-foo --serviceaccount=foo:tiller-foo -n foo
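Even when the dry-run output omits metadata.namespace, the -n foo flag sets the request namespace, so the objects land in foo. You can confirm the result and check the grant like this (assuming the tiller-foo ServiceAccount already exists in foo):
kubectl get role tiller-manager-foo -n foo -o yaml
kubectl auth can-i create deployments --as=system:serviceaccount:foo:tiller-foo -n foo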
kubectl apply -f can submit an arbitrary Kubernetes YAML file like what you have in the question.
I'd specifically suggest this here because you can commit these YAML files to source control, and if you're using Helm anyway, this is far from the only Kubernetes YAML file you have. That gives you a consistent path even to bootstrap your Helm setup.
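For example (the filename is just illustrative):
kubectl apply -f tiller-rbac.yaml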