Are ConfigMaps and Secrets managed at node level? - kubernetes

I want to know if different nodes can share Secrets and ConfigMaps. Went through the Kubernetes documentation at https://kubernetes.io/docs/concepts/configuration/secret/ but could not find exact information.

All Kubernetes resources are stored centrally in the etcd database and accessed through the Kubernetes API server. When using ConfigMaps or Secrets, the data inside them is embedded directly in the resource itself (i.e. unlike a PersistentVolume, for example, they do not just reference data stored somewhere else). This is also the reason why the size of a ConfigMap or Secret is limited (to 1 MiB).
As such they can be used on all Kubernetes nodes. When you have a Pod which uses them, the ConfigMaps or Secrets will be projected onto the node where the Pod is scheduled. So the files from the ConfigMap or Secret might exist on a given node, but they are just copies of the original ConfigMap or Secret stored centrally in the etcd database.
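As a minimal sketch (all names here are made up for illustration), a ConfigMap whose keys the kubelet materializes as files on whichever node the Pod lands on:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # hypothetical name
data:
  app.properties: |
    log.level=debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/app   # kubelet writes node-local copies of the keys here
  volumes:
  - name: config
    configMap:
      name: app-config
```

Whatever node schedules this Pod, the kubelet there fetches the ConfigMap from the API server and writes the files locally; the source of truth stays in etcd.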

Related

How can I share some value between kubernetes pods?

I have several pods which belong to the same service. I need to share a value between all pods in this service.
Per my understanding, a shared volume won't work well, because pods may end up on different nodes.
Having any kind of database (even most lightweight) exposed as a service to share this value would be overkill (however, probably it's my backup plan).
I was wondering whether there is some k8s native way to share the value.
Put the values in a ConfigMap and mount it in the Pods. You can include the values of a ConfigMap in a Pod's containers either as a volume or as environment variables.
See Configure a Pod to Use a ConfigMap in the Kubernetes documentation.
If the Pods need to update the shared values, they can write to the ConfigMap via the Kubernetes API (this requires the appropriate RBAC permissions). In that case, however, the ConfigMap must be included as a volume, since environment variable values from a ConfigMap are not updated when the ConfigMap changes.
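For the write-access part, a sketch of the RBAC permissions involved (the ConfigMap name, Role name, and namespace are assumptions for illustration):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: shared-value-writer       # hypothetical name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["shared-value"] # hypothetical ConfigMap holding the shared value
  verbs: ["get", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: shared-value-writer
  namespace: default
subjects:
- kind: ServiceAccount
  name: default                   # the service account the pods run as
  namespace: default
roleRef:
  kind: Role
  name: shared-value-writer
  apiGroup: rbac.authorization.k8s.io
```

With this binding in place, the pods can patch the ConfigMap through the API, and pods mounting it as a volume will eventually see the updated value.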

Can we use vault with kubernetes without sidecar or init container?

Our current model is to use init containers to fetch secrets from Vault. But when the application crashes due to OOM issues, the pod goes into a CrashLoopBackOff state. Also, we don't want to overload the pod with a sidecar container. Is there any other way to use Vault with Kubernetes?
Yes, you can use Vault without a sidecar container.
You can create a path in Vault and save key-value pairs under it, using the KV v1 or KV v2 secrets engine as required.
To sync Vault values into a Kubernetes Secret you can use:
https://github.com/DaspawnW/vault-crd
Vault-CRD is a custom resource that syncs your Vault values to a Kubernetes Secret at the interval you define.
Each time a value is updated in Vault, it is synced back to the Secret, and you can inject that Secret into a Deployment or StatefulSet as needed.
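A sketch of what such a Vault-CRD resource looks like; the apiVersion, kind, and field names below are taken from the project's README as I recall it, so verify them against the version you install:

```yaml
apiVersion: "koudingspawn.de/v1"
kind: Vault
metadata:
  name: app-secret        # hypothetical name; a Secret of this name gets created
spec:
  path: "secret/app"      # hypothetical Vault path holding the key-value pairs
  type: "KEYVALUE"        # sync a plain KV path into an Opaque Secret
```

The controller watches this resource, reads the Vault path, and keeps a Kubernetes Secret with the same name in sync, so your workloads consume a normal Secret with no sidecar or init container.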

How is Kubernetes RBAC Actually Enforced for Service Accounts?

We're trying to create different Kubernetes secrets and offer access to specific secrets through specific service accounts that are assigned to pods. For example:
Secrets
- User-Service-Secret
- Transaction-Service-Secret
Service Account
- User-Service
- Transaction-Service
Pods
- User-Service-Pod
- Transaction-Service-Pod
The idea is to restrict access to the User-Service-Secret secret to the User-Service service account that is assigned to User-Service-Pod. We can set this all up with the relevant Kubernetes resources (i.e. ServiceAccount, Role, RoleBinding), but we realize that this may not actually be enforced, because Transaction-Service-Pod can just as easily read the User-Service-Secret secret when the Pod starts up, even though the service account it is assigned to doesn't have get permission on User-Service-Secret.
How do we actually enforce the RBAC system?
FYI we are using EKS
First it is important to distinguish between API access to the secret and consuming the secret as an environment variable or a mounted volume.
TLDR:
RBAC controls who can access a secret (or any other resource) using K8s API requests.
Namespaces or the service account's secrets attribute control whether a pod can consume a secret as an environment variable or through a volume mount.
API access
RBAC is used to control whether an identity (in your example the service account) is allowed to access a resource via the K8s API. You control this by creating a RoleBinding (namespaced) or a ClusterRoleBinding (cluster-wide) that binds your identity (the service account) to a Role (namespaced) or a ClusterRole (not namespaced). Then, when you assign the service account to a pod by setting its serviceAccountName attribute, running kubectl get secret in that pod (or the equivalent call from one of the client libraries) uses that service account's credentials for the API request.
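For your example, the API-access side would look roughly like this (resource names lowercased here as an assumption, since Kubernetes object names must be lowercase):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-user-service-secret
  namespace: default
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["user-service-secret"]  # only this one secret
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-user-service-secret
  namespace: default
subjects:
- kind: ServiceAccount
  name: user-service        # the SA assigned to User-Service-Pod
  namespace: default
roleRef:
  kind: Role
  name: read-user-service-secret
  apiGroup: rbac.authorization.k8s.io
```

This governs API requests only: a kubectl get secret user-service-secret made with the transaction-service token is denied, but it says nothing about which secrets a pod spec may mount.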
Consuming Secrets
This however is independent of configuring the pod to consume the secret as an environment variable or a volume mount. If the container spec in a pod spec references the secret, it is made available inside that container (note: per container, not per pod). You can limit which secrets a pod can mount by putting the pods in different namespaces, because a pod can only refer to a secret in its own namespace. Additionally, you can use the service account's secrets attribute to limit which secrets a pod with that service account can refer to.
$ kubectl explain sa.secrets
KIND: ServiceAccount
VERSION: v1
RESOURCE: secrets <[]Object>
DESCRIPTION:
Secrets is the list of secrets allowed to be used by pods running using
this ServiceAccount. More info:
https://kubernetes.io/docs/concepts/configuration/secret
ObjectReference contains enough information to let you inspect or modify
the referred object.
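As a sketch (names hypothetical), a ServiceAccount that lists the one secret pods running as it are allowed to use:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: user-service
  namespace: default
  annotations:
    # In recent Kubernetes versions the secrets list below is only enforced
    # when this annotation is present; verify against your cluster version.
    kubernetes.io/enforce-mountable-secrets: "true"
secrets:
- name: user-service-secret   # pods using this SA may only reference listed secrets
```

A pod with serviceAccountName: user-service that tries to mount any other secret would then be rejected at admission time.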
You can learn more about the security implications of Kubernetes secrets in the secret documentation.
The idea is to restrict access to the User-Service-Secret secret to the User-Service service account that is assigned to User-Service-Pod. We can set this all up with the relevant Kubernetes resources (i.e. ServiceAccount, Role, RoleBinding), but we realize that this may not actually be enforced, because Transaction-Service-Pod can just as easily read the User-Service-Secret secret when the Pod starts up, even though the service account it is assigned to doesn't have get permission on User-Service-Secret.
Yes, this is correct.
This is documented for Kubernetes on privilege escalation via pod creation - within a namespace.
Users who have the ability to create pods in a namespace can potentially escalate their privileges within that namespace. They can create pods that access secrets the user cannot themselves read, or that run under a service account with different/greater permissions.
To actually enforce this kind of security policy, you probably have to add an extra layer of policies via an admission controller. The Open Policy Agent, in the form of OPA Gatekeeper, is most likely a good fit for this kind of policy enforcement.

Difference between configMap and downwardAPI

I am new to Kubernetes. Can somebody please explain why there are multiple volume types, like
configMap
emptyDir
projected
secret
downwardAPI
persistentVolumeClaim
For a few I am able to figure out the reason, like why we need secret instead of configMap.
For the rest I am not able to understand the need for them.
Your question is quite generic, but here are a few comments off the top of my head:
If a deployed pod or container needs configuration data, you use a configMap resource; if the data consists of secrets or passwords, it is the obvious case for a secret resource.
If a deployed pod needs a value like POD_NAME, which is only generated at schedule/run time, then it needs to use the downwardAPI resource.
emptyDir shares its lifecycle with the deployed pod: if the pod dies, all data stored in the emptyDir volume is gone. If you want to persist the data beyond the pod, you need the persistentVolume, persistentVolumeClaim, and StorageClass resources.
For further information see k8s volumes.
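To make the emptyDir lifecycle point concrete, a sketch (pod and container names are made up) of two containers sharing a scratch volume that disappears with the pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo        # hypothetical name
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /scratch/msg && sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]   # can read /scratch/msg written by 'writer'
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {}            # created with the pod, deleted with the pod
```

For data that must outlive the pod, the same volumeMounts would instead point at a persistentVolumeClaim-backed volume.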
A configMap is used to make application-specific configuration data available to a container at run time.
The downwardAPI is used to make Kubernetes metadata (like the pod's namespace, name, IP, labels, etc.) available to a container at run time.
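For example (pod and label names are illustrative), a pod that exposes its own name as an environment variable and its labels as a file via the downwardAPI:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo       # hypothetical name
  labels:
    app: demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo $POD_NAME; cat /podinfo/labels; sleep 3600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name     # metadata only known at run time
    volumeMounts:
    - name: podinfo
      mountPath: /podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels   # written as a file in the volume
```

A configMap cannot provide this, since values like the pod name do not exist until the pod is scheduled.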

Where does ConfigMap data get stored?

I created ConfigMap using kubectl and I can also see it using:
kubectl get cm
I am just curious where Kubernetes stores this data/information within the cluster. Does it store it in etcd? How do I view it, if it is stored in etcd?
Does it store it in any file/folder location, or anywhere else?
I mean, where does Kubernetes store it internally?
Yes, etcd is used for storing ConfigMaps and other resources you deploy to the cluster. See https://matthewpalmer.net/kubernetes-app-developer/articles/how-does-kubernetes-use-etcd.html and note https://github.com/kubernetes/kubernetes/issues/19781#issuecomment-172553264
You view the content of the ConfigMap with kubectl get cm -o yaml, i.e. through the K8s API directly, as illustrated in https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/ You don't need to look inside etcd to see the content of a ConfigMap.