Is it possible to share a ServiceAccount between namespaces or somehow start a pod with a ServiceAccount from a different namespace?
We are looking to use Vault to store common secret data between dynamic development environments. Following the very good walkthrough HERE we were able to authenticate and pull secrets for a single namespace. However, in our use case we will be creating a new namespace for each development environment during its lifetime.
If possible we would like to avoid having to also configure vault with a new auth backend for each namespace.
When you create the Vault role, you can configure bound_service_account_namespaces to be the special value *, and allow a fixed service account name from any namespace. To adapt the "create role" example from the documentation:
vault write auth/kubernetes/role/demo \
    bound_service_account_names=vault-auth \
    bound_service_account_namespaces='*' \
    policies=default \
    ttl=1h
You have to recreate the Kubernetes service account in every namespace, and it must have the exact name specified in the role. However, the service account is a single k8s object, so it's not any harder to create than the Deployments, Services, ConfigMaps, and Secrets you already have, and this pattern doesn't require any Vault reconfiguration.
(If you're using a templating tool like Helm, the service account can't follow a naming convention like {{ .Release.Name }}-{{ .Chart.Name }}: Vault doesn't appear to have any sort of pattern matching on this name.)
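For illustration, the per-namespace object the answer refers to is tiny; a minimal sketch, assuming your pods then reference it via serviceAccountName (the name vault-auth comes from the role above, the namespace is a placeholder for each new environment):

# Hypothetical per-environment manifest; the name must exactly match
# bound_service_account_names in the Vault role.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-auth
  namespace: dev-env-1   # placeholder: the new development environment's namespace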
Service Accounts are namespaced and therefore not shared. You could copy the token from one account to another, but that is not the recommended way.
C02W84XMHTD5:kubernetes-gitlab iahmad$ kubectl api-resources --namespaced | grep service
serviceaccounts   sa    true   ServiceAccount
services          svc   true   Service
C02W84XMHTD5:kubernetes-gitlab iahmad$
If you want to share a secret or account the way you are trying to, then there is no need to use Vault at all.
You may just need to automate this process instead of sharing accounts.
I am trying to create a K8s cluster in Azure AKS, and when the cluster is ready I can see a couple of resources created within the default namespace, for example a Secret and a ConfigMap:
As a security recommendation, no k8s resources should be created under the default namespace, so how can this be avoided? They are created by default during cluster creation.
I have found the same question asked here:
User srbose-msft (Microsoft employee) explained the principle of operation very well:
In Kubernetes, a ServiceAccount controller manages the ServiceAccounts inside namespaces, and ensures a ServiceAccount named "default" exists in every active namespace. [Reference]
TokenController runs as part of kube-controller-manager. It acts asynchronously. It watches ServiceAccount creation and creates a corresponding ServiceAccount token Secret to allow API access. [Reference] Thus, the secret for the default ServiceAccount token is also created.
Trusting the custom CA from an application running as a pod usually requires some extra application configuration. You will need to add the CA certificate bundle to the list of CA certificates that the TLS client or server trusts. For example, you would do this with a golang TLS config by parsing the certificate chain and adding the parsed certificates to the RootCAs field in the tls.Config struct.
You can distribute the CA certificate as a ConfigMap that your pods have access to use. [Reference] AKS implements this in all active namespaces through ConfigMaps named kube-root-ca.crt in these namespaces.
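A quick way to see this from the command line (assuming you have read access to ConfigMaps cluster-wide):

# kube-root-ca.crt should show up once per active namespace
kubectl get configmaps --all-namespaces | grep kube-root-ca.crt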
You will also find a Service named kubernetes in the default namespace. It has a ServiceType of ClusterIP and exposes the API server endpoint (also named kubernetes) internally to the cluster from the default namespace.
All the resources mentioned above will be created by design at the time of cluster creation and their creation cannot be prevented. If you try to remove these resources manually, they will be recreated to ensure desired goal state by the kube-controller-manager.
Additionally:
The "Kubernetes clusters should not use the default namespace" Policy is still in Preview. Currently the schema does not explicitly allow Kubernetes resources in the default namespace to be excluded during policy evaluation. However, at the time of writing, the schema allows for labelSelector.matchExpressions[].operator, which can be set to NotIn with appropriate labelSelector.matchExpressions[].values for the Service default/kubernetes with the label:
component=apiserver
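For reference, the general shape of such a match expression (this is standard Kubernetes labelSelector syntax; whether the Policy parameter schema accepts it in your case is exactly the limitation described below):

labelSelector:
  matchExpressions:
    - key: component
      operator: NotIn
      values:
        - apiserver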
The default ServiceAccount, the default ServiceAccount token Secret and the RootCA ConfigMap themselves are not created with any labels and hence cannot be added to this list. If this is impeding your use case I would urge you to share your feedback at https://techcommunity.microsoft.com/t5/azure/ct-p/Azure
We need to disable the automounting of the service account token in our existing deployments in an AKS cluster. There are two ways to do this: by adding the property automountServiceAccountToken: false either to the service account manifest or to the pod template.
We are using a separate service account specified in our application deployments; however, when we looked in the namespace, there is also a default service account created.
So in order to secure our cluster, do we need to disable the automount property for both the default and the application-specific service accounts?
Since our app is already live, will there be any impact from adding this to the service accounts?
How do we find out which service account a pod and its dependencies use?
So in order to secure our cluster, do we need to disable the automount property for both the default and the application-specific service accounts?
The design behind the default ServiceAccount is that it does not have any rights unless you give it some. So from a security point of view there is not much need to disable the mount unless you granted it access for some reason. Instead, whenever an application truly needs some access, go ahead and create a ServiceAccount for that particular application and grant it the permissions it needs via RBAC.
Since our app is already live, will there be any impact from adding this to the service accounts?
In case you truly want to disable the mount, there won't be an impact on your application if it didn't use the ServiceAccount beforehand. What will happen, though, is that a new Pod will be created and the existing one deleted. However, if you properly configured readinessProbes and a rolling update strategy, Kubernetes will ensure that there is no downtime.
How do we find out which service account a pod and its dependencies use?
You can check what ServiceAccount a Pod is mounting by executing kubectl get pods <pod-name> -o yaml. The output is going to show you the entirety of the Pod's manifest and the field spec.serviceAccountName contains information on which ServiceAccount the Pod is mounting.
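If you only want that single field, a jsonpath query is a convenient shortcut (<pod-name> is a placeholder):

# Prints the ServiceAccount the pod runs as
kubectl get pod <pod-name> -o jsonpath='{.spec.serviceAccountName}'

# Shows whether the pod overrides token automounting (empty output means it is not set on the pod)
kubectl get pod <pod-name> -o jsonpath='{.spec.automountServiceAccountToken}'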
Disable auto-mount of default service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
automountServiceAccountToken: false
https://gist.github.com/pjbgf/0a8c8a1459e5a2eb20e9d0852ba8c4be
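The same flag can also be set on the pod template instead, which takes precedence over the ServiceAccount-level setting. A minimal sketch (the deployment, image and service account names are made up for illustration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                          # hypothetical deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: my-app-sa     # hypothetical application service account
      automountServiceAccountToken: false
      containers:
        - name: my-app
          image: nginx                  # placeholder image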
We're trying to create different Kubernetes secrets and offer access to specific secrets through specific service accounts that are assigned to pods. For example:
Secrets
- User-Service-Secret
- Transaction-Service-Secret
Service Account
- User-Service
- Transaction-Service
Pods
- User-Service-Pod
- Transaction-Service-Pod
The idea is to restrict access to the User-Service-Secret secret to the User-Service service account that is assigned to the User-Service-Pod. We can set this all up with the relevant Kubernetes resources (i.e. ServiceAccount, Role, RoleBinding), but we realize that this may not actually be enforced, because the Transaction-Service-Pod can just as easily read the User-Service-Secret secret when the Pod starts up, even though the service account it's assigned to doesn't have get permission on the User-Service-Secret.
How do we actually enforce the RBAC system?
FYI we are using EKS
First it is important to distinguish between API access to the secret and consuming the secret as an environment variable or a mounted volume.
TLDR:
RBAC controls who can access a secret (or any other resource) using K8s API requests.
Namespaces or the service account's secrets attribute control if a pod can consume a secret as an environment variable or through a volume mount.
API access
RBAC is used to control whether an identity (in your example the service account) is allowed to access a resource via the K8s API. You control this by creating a RoleBinding (namespaced) or a ClusterRoleBinding (cluster-wide) that binds a Role (namespaced) or a ClusterRole (not namespaced) to your identity (the service account). Then, when you assign the service account to a pod by setting the serviceAccountName attribute, running kubectl get secret in that pod, or the equivalent call from one of the client libraries, means you have credentials available to make that API request.
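A minimal sketch of that wiring for the example in the question (resource names are lower-cased and the namespace is assumed; only get on the single secret is granted):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-service-secret-reader
  namespace: default                    # assumed namespace
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["user-service-secret"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-service-secret-reader
  namespace: default
subjects:
  - kind: ServiceAccount
    name: user-service                  # the service account assigned to the User-Service pod
    namespace: default
roleRef:
  kind: Role
  name: user-service-secret-reader
  apiGroup: rbac.authorization.k8s.io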
Consuming Secrets
This, however, is independent of configuring the pod to consume the secret as an environment variable or a volume mount. If the container spec in a pod spec references the secret, it is made available inside that container. Note: per container, not per pod. You can limit which secrets a pod can mount by having the pods in different namespaces, because a pod can only refer to a secret in the same namespace. Additionally, you can use the service account's secrets attribute to limit which secrets a pod with that service account can refer to.
$ kubectl explain sa.secrets
KIND:     ServiceAccount
VERSION:  v1

RESOURCE: secrets <[]Object>

DESCRIPTION:
     Secrets is the list of secrets allowed to be used by pods running using
     this ServiceAccount. More info:
     https://kubernetes.io/docs/concepts/configuration/secret

     ObjectReference contains enough information to let you inspect or modify
     the referred object.
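To make the distinction concrete, this is the consumption side: a pod spec referencing a secret both as an environment variable and as a volume (a sketch only; all names are placeholders based on the question):

apiVersion: v1
kind: Pod
metadata:
  name: user-service-pod
spec:
  serviceAccountName: user-service      # assumed service account
  containers:
    - name: user-service
      image: nginx                      # placeholder image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: user-service-secret # assumed secret name
              key: db-password          # assumed key
      volumeMounts:
        - name: secret-vol
          mountPath: /etc/user-service-secret
          readOnly: true
  volumes:
    - name: secret-vol
      secret:
        secretName: user-service-secret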
You can learn more about the security implications of Kubernetes secrets in the secret documentation.
The idea is to restrict access to the User-Service-Secret secret to the User-Service service account that is assigned to the User-Service-Pod. We can set this all up with the relevant Kubernetes resources (i.e. ServiceAccount, Role, RoleBinding), but we realize that this may not actually be enforced, because the Transaction-Service-Pod can just as easily read the User-Service-Secret secret when the Pod starts up, even though the service account it's assigned to doesn't have get permission on the User-Service-Secret.
Yes, this is correct.
This is documented in the Kubernetes documentation on privilege escalation via pod creation, within a namespace.
Users who have the ability to create pods in a namespace can potentially escalate their privileges within that namespace. They can create pods that access secrets the user cannot themselves read, or that run under a service account with different/greater permissions.
To actually enforce this kind of security policy, you probably have to add an extra layer of policy via an admission controller. The Open Policy Agent, in the form of OPA Gatekeeper, is most likely a good fit for this kind of policy enforcement.
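As a rough, untested sketch of what such a policy could look like with Gatekeeper (the template name, kind and parameters are all invented for illustration, and this rule only inspects spec.volumes, not env references):

apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: restrictsecretmounts
spec:
  crd:
    spec:
      names:
        kind: RestrictSecretMounts
      validation:
        openAPIV3Schema:
          type: object
          properties:
            secretName:
              type: string
            allowedServiceAccount:
              type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package restrictsecretmounts

        # Deny pods that mount the protected secret under any other service account
        violation[{"msg": msg}] {
          secretName := input.review.object.spec.volumes[_].secret.secretName
          secretName == input.parameters.secretName
          input.review.object.spec.serviceAccountName != input.parameters.allowedServiceAccount
          msg := sprintf("only %v may mount secret %v", [input.parameters.allowedServiceAccount, secretName])
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: RestrictSecretMounts
metadata:
  name: user-service-secret-guard
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    secretName: user-service-secret        # assumed names from the question
    allowedServiceAccount: user-service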
I am using Google cloud's GKE for my kubernetes operations.
I am trying to restrict access for the users that access the clusters using the command line. I have applied IAM roles in Google Cloud and given the view role to the service accounts and users. It all works fine if we use it through the API or with "--as" in kubectl commands, but when someone does a kubectl create of an object without specifying "--as", the object still gets created with the "default" service account of that particular namespace.
To overcome this problem we gave restricted access to the "default" service account, but we were still able to create objects.
$ kubectl auth can-i create deploy --as default -n test-rbac
no
$ kubectl run nginx-test-24 -n test-rbac --image=nginx
deployment.apps "nginx-test-24" created
$ kubectl describe rolebinding default-view -n test-rbac
Name:         default-view
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  view
Subjects:
  Kind            Name     Namespace
  ----            ----     ---------
  ServiceAccount  default  test-rbac
I expect that users who access the cluster through the CLI should not be able to create objects if they don't have permissions; even if they don't use the "--as" flag they should be restricted.
Please take into account that first you need to review the prerequisites to use RBAC in GKE.
Also, please note that IAM roles apply to the entire Google Cloud project and all clusters within that project, while RBAC enables fine-grained authorization at the namespace level. So, with GKE, these approaches to authorization work in parallel.
For more references, please take a look at this document: RBAC in GKE.
For all the haters of this question, I wish you could've tried pointing to this:
there is a file at:
~/.config/gcloud/configurations/config_default
in this there is an option under the [container] section:
use_application_default_credentials
set to true
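In other words, the relevant part of that file should look roughly like this:

[container]
use_application_default_credentials = true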
Here you go, you learnt something new... enjoy. I wish you could have tried helping instead of down-voting.
We have multiple development teams who work on and deploy their applications on Kubernetes. We use helm to deploy our applications on Kubernetes.
Currently the challenge we are facing is with one of our shared clusters. We would like to deploy a separate tiller for each team, so that they only have access to their own resources. The default cluster-admin role will not help us and we don't want that.
Let's say we have multiple namespaces for one team. I would want to deploy a tiller which has permission to work with resources that exist or need to be created in these namespaces.
Team > multiple namespaces
tiller using a service account that has a role (with full access to those namespaces - not all) associated with it.
I would want to deploy a tiller which has permission to work with resources that exist or need to be created in these namespaces
According to the fine manual, you'll need a ClusterRole per team, defining the kinds of operations allowed on the kinds of resources, but then use a RoleBinding to scope those rules to a specific namespace. The two ends of the binding will be the team's tiller ServiceAccount and the team's ClusterRole, with one RoleBinding instance per Namespace (even though they will be textually identical except for the namespace: portion).
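A hedged sketch of one such pairing (the team, namespace and rule contents are placeholders; tune the rules to whatever "full access" means for your teams):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: team-alpha-tiller
rules:
  - apiGroups: ["", "apps", "batch", "extensions"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-tiller
  namespace: ns-alpha                   # repeat this RoleBinding once per team namespace
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: ns-alpha                 # wherever the team's tiller service account lives
roleRef:
  kind: ClusterRole
  name: team-alpha-tiller
  apiGroup: rbac.authorization.k8s.io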
I actually would expect you could make an internal helm chart that would automate the specifics of that relationship, and then helm install --name team-alpha --set team-namespaces=ns-alpha,ns-beta my-awesome-chart and then grant your tiller cluster-admin or whatever more restrictive ClusterRole you wish.