Following my previous question about Dapr and k8s secrets, I have a k8s secret defined as follows:
apiVersion: v1
kind: Secret
metadata:
  name: secretstore
  namespace: my-namespace
type: Opaque
data:
  MY_KEY: <some base64>
I then defined a Dapr component as:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: secretstore
  namespace: my-namespace
spec:
  type: secretstores.kubernetes
  version: v1
  metadata: []
and granted permission to access the secrets (per the Dapr docs):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dapr-secret-reader
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-reader
subjects:
- kind: ServiceAccount
  name: default
But with all of this, Dapr cannot access the secret I want it to retrieve.
The error is:
Stacktrace: {"Status":{"StatusCode":13,"Detail":"failed getting secret with key MY_KEY from secret store secretstore: secrets \"MY_KEY\" not found","DebugException":null},"StatusCode":13,"Trailers":[],"Message":"Status(StatusCode=\"Internal\", Detail=\"failed getting secret with key MY_KEY from secret store secretstore: secrets \"MY_KEY\" not found\")","Data":{},"InnerException":null,"HelpLink":null,"Source":"System.Private.CoreLib","HResult":-2146233088,"StackTrace":" at Dapr.Client.DaprClientGrpc.GetSecretAsync(String storeName, String key, IReadOnlyDictionary`2 metadata, CancellationToken cancellationToken)"}
Any clue about what I am missing? I went through the Dapr documentation several times but I wasn't able to find anything that could help.
Thanks in advance!!
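One likely cause, judging from the error alone: with the Kubernetes secret store, the key passed to GetSecretAsync is the name of the Kubernetes Secret, not a key inside its data, so Dapr here is literally looking for a Secret named MY_KEY. You can test that theory against the sidecar's HTTP secrets API (assuming the default Dapr HTTP port 3500):

# GET /v1.0/secrets/<store-name>/<secret-name>; the JSON response
# should contain the MY_KEY entry from the Secret's data
curl http://localhost:3500/v1.0/secrets/secretstore/secretstore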
Related questions:
Create a service account named dev-sa in the default namespace; dev-sa can create the below resources in the dev namespace:
Deployment
StatefulSet
DaemonSet
result:
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: dev-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dev
  name: sa-role
rules:
- apiGroups: [""]
  resources: ["deployment","statefulset","daemonset"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sa-rolebinding
  namespace: dev
subjects:
- kind: ServiceAccount
  name: dev-sa
  namespace: default
roleRef:
  kind: Role
  name: sa-role
  apiGroup: rbac.authorization.k8s.io
Validation:
kubectl auth can-i create deployment -n dev \
--as=system:serviceaccount:default:dev-sa
no
This is an exam question, but I can't pass it.
Can you tell me where the mistake is? Thanks!
In the Role, use * for the apiGroups entry, and add an "s" to each resource name:
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: dev-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dev
  name: sa-role
rules:
- apiGroups: ["*"]
  resources: ["deployments", "statefulsets", "daemonsets"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sa-rolebinding
  namespace: dev
subjects:
- kind: ServiceAccount
  name: dev-sa
  namespace: default
roleRef:
  kind: Role
  name: sa-role
  apiGroup: rbac.authorization.k8s.io
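Re-running the validation from the question should now return yes:

kubectl auth can-i create deployments -n dev \
  --as=system:serviceaccount:default:dev-sa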
First, the apiGroup of Deployment, DaemonSet, and StatefulSet is apps, not core. So for the apiGroups value, put "apps" instead of "" (an empty string represents the core group).
Second, remember: resources are always defined as the plural of the kind. So for the resources values you should always use plural names, e.g. deployments instead of deployment.
So, your file should be something like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: dev-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dev
  name: sa-role
rules:
- apiGroups: ["apps"]
  resources: ["deployments","statefulsets","daemonsets"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sa-rolebinding
  namespace: dev
subjects:
- kind: ServiceAccount
  name: dev-sa
  namespace: default
roleRef:
  kind: Role
  name: sa-role
  apiGroup: rbac.authorization.k8s.io
For the apiGroups values, be sure to check the docs.
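If you're unsure which group or plural name a kind uses, kubectl can list them directly:

# NAME is the plural resource name; APIVERSION shows the group
kubectl api-resources | grep -iE 'deployment|statefulset|daemonset'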
I suggest you read this article about Users and Permissions in Kubernetes.
I'm trying to provision ephemeral environments via automation, leveraging Kubernetes namespaces. My automation workers deployed in Kubernetes must be able to create Namespaces. So far my experimentation with this has led me nowhere. Which binding do I need to attach to the ServiceAccount to allow it to control Namespaces? Or is my approach wrong?
My code so far:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-deployer
  namespace: tooling
  labels:
    app: k8s-deployer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-deployer
  template:
    metadata:
      name: k8s-deployer
      labels:
        app: k8s-deployer
    spec:
      serviceAccountName: k8s-deployer
      containers: ...
rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-deployer
  namespace: tooling
---
# this lets me view namespaces, but not write
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: administer-cluster
subjects:
- kind: ServiceAccount
  name: k8s-deployer
  namespace: tooling
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
To give a pod control over something in Kubernetes you need at least four things:
1. Create or select an existing Role/ClusterRole (you picked the admin ClusterRole via your administer-cluster binding; its rules are unknown to me).
2. Create or select an existing ServiceAccount (you created k8s-deployer in namespace tooling).
3. Put the two together with a RoleBinding/ClusterRoleBinding.
4. Assign the ServiceAccount to a pod.
Here's an example that can manage namespaces:
# Create a service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-deployer
  namespace: tooling
---
# Create a cluster role that is allowed to perform
# ["get", "list", "create", "delete", "patch"] over ["namespaces"]
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8s-deployer
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "create", "delete", "patch"]
---
# Associate the cluster role with the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-deployer
  # make sure NOT to mention 'namespace' here or
  # the permissions will only have effect in the
  # given namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: k8s-deployer
subjects:
- kind: ServiceAccount
  name: k8s-deployer
  namespace: tooling
After that you need to reference the service account name in the pod spec, as you already did. More info about RBAC is in the documentation.
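Before redeploying, you can verify the binding with the same kubectl auth can-i pattern used elsewhere in this thread:

kubectl auth can-i create namespaces \
  --as=system:serviceaccount:tooling:k8s-deployer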
So I have namespaces ns1, ns2, ns3, and ns4.
I have a service account sa1 in ns1. I am deploying pods to ns2 and ns4 that use sa1. When I look at the logs, it tells me that sa1 in ns2 can't be found.
error:
Error creating: pods "web-test-2-795f5fd489-" is forbidden: error looking up service account ns2/sa: serviceaccount "sa" not found
Is there a way to make service accounts cluster-wide? Or can I create multiple service accounts with the same secret in different namespaces?
You can use something like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubernetes-enforce
rules:
- apiGroups: ["apps"]
  resources: ["deployments","daemonsets"]
  verbs: ["get", "list", "watch", "patch"]
# pods live in the core ("") group, not apps
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "patch"]
- apiGroups: ["*"]
  resources: ["namespaces"]
  verbs: ["get", "list", "watch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-enforce
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-enforce-logging
  namespace: cattle-logging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-enforce
subjects:
- kind: ServiceAccount
  name: kubernetes-enforce
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-enforce-prome
  namespace: cattle-prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-enforce
subjects:
- kind: ServiceAccount
  name: kubernetes-enforce
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-enforce-system
  namespace: cattle-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-enforce
subjects:
- kind: ServiceAccount
  name: kubernetes-enforce
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-enforce-default
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-enforce
subjects:
- kind: ServiceAccount
  name: kubernetes-enforce
  namespace: kube-system
No, there is no way to create a cluster-wide service account, as ServiceAccount is a namespace-scoped resource. This follows the principle of least privilege.
You can easily create a service account with the same name (for example default) in all the necessary namespaces where you are deploying pods, by applying the service account YAML against each of those namespaces.
Then you can deploy the pod using its YAML unchanged, because the service account name is the same. Each copy will have a different secret, but that should not matter as long as you have defined RBAC via Role and RoleBinding for the service accounts across those namespaces.
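As a minimal sketch of that approach (namespace and account names taken from the question):

# create an identically named service account in each target namespace
for ns in ns2 ns4; do
  kubectl -n "$ns" create serviceaccount sa1
done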
While service accounts cannot be cluster-scoped, you can have a ClusterRole and ClusterRoleBinding, which are cluster-scoped.
If your namespaces are, for example, in values.yaml (that is, they are somehow dynamic), you could do:
apiVersion: v1
kind: List
items:
{{- range $namespace := .Values.namespaces }}
- kind: ServiceAccount
  apiVersion: v1
  metadata:
    name: <YourAccountName>
    namespace: {{ $namespace }}
{{- end }}
where in values.yaml you would have:
namespaces:
- namespace-a
- namespace-b
- default
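To sanity-check what that template renders to (assuming a local chart directory, here hypothetically named ./mychart):

helm template ./mychart | kubectl apply --dry-run=client -f -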
# define a clusterrole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: supercr
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# define a serviceaccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: supersa
  namespace: namespace-1
---
# bind serviceaccount to clusterrole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: supercrb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: supercr
subjects:
- kind: ServiceAccount
  name: supersa
  namespace: namespace-1
Please note that ServiceAccount is namespaced; you can't create a cluster-wide ServiceAccount. However, you can bind a ServiceAccount to a ClusterRole with permissions to all API resources.
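To see the effect, kubectl auth can-i --list shows everything the account may do in a given namespace:

kubectl auth can-i --list -n default \
  --as=system:serviceaccount:namespace-1:supersa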
I've tried creating a cluster role that only has access to view pods; however, for some reason that account can still see everything: secrets, deployments, nodes, etc. I also enabled skip-login, and it seems that by default anonymous users don't have any restrictions either.
Service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-example
  namespace: default
Cluster Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cr-example
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
Cluster Role Binding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crb-example
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: cr-example
  kind: ClusterRole
subjects:
- kind: ServiceAccount
  name: sa-example
  namespace: default
Context:
K8s version: 1.17.3
Dashboard version: v2.0.0-rc5
Cluster type: bare metal
authorization-mode=Node,RBAC
How did you check whether it works or not?
I reproduced your issue with the YAML below:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-example
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cr-example
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crb-example
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: cr-example
  kind: ClusterRole
subjects:
- kind: ServiceAccount
  name: sa-example
  namespace: default
And I used kubectl auth can-i to verify that it works:
kubectl auth can-i get pods --as=system:serviceaccount:default:sa-example
yes
kubectl auth can-i get deployment --as=system:serviceaccount:default:sa-example
no
kubectl auth can-i get secrets --as=system:serviceaccount:default:sa-example
no
kubectl auth can-i get nodes --as=system:serviceaccount:default:sa-example
no
And it seems like everything works just fine.
The only thing that is different in my YAML is the ClusterRole name: cr-example instead of cr-<role>, so that it actually matches the name referenced by the ClusterRoleBinding.
I hope this helps with your issue. Let me know if you have any more questions.
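As for the skip-login observation: what anonymous users can do depends entirely on which bindings mention them, and you can probe it the same way by impersonating the anonymous user and group:

kubectl auth can-i list secrets \
  --as=system:anonymous --as-group=system:unauthenticated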
What's the best approach to provide a .kube/config file to a REST service deployed on Kubernetes?
This will enable my service to (for example) use the Kubernetes client API.
Create service account:
kubectl create serviceaccount example-sa
Create a role:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: example-role
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
Create role binding:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-role-binding
  namespace: default
subjects:
- kind: "ServiceAccount"
  name: example-sa
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
Create a pod using example-sa:
kind: Pod
apiVersion: v1
metadata:
  name: example-pod
spec:
  serviceAccountName: example-sa
  containers:
  - name: secret-access-container
    image: example-image
The most important line in the pod definition is serviceAccountName: example-sa. After creating the service account and adding this line to your pod's definition, you will be able to access your API access token at /var/run/secrets/kubernetes.io/serviceaccount/token.
Here you can find a slightly more detailed version of the above example.
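For illustration, this is roughly what a client inside the pod does with that token; most client libraries handle it automatically via their in-cluster config, and kubernetes.default.svc is the standard in-cluster API server address:

# read the mounted service account token and call the API server with it
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/default/pods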