Where does ConfigMap data get stored? - kubernetes

I created a ConfigMap using kubectl and I can also see it using:
kubectl get cm
I am just curious where Kubernetes stores this data/information within the cluster. Does it store it in etcd? How do I view it, if it is stored in etcd?
Does it store it in any file/folder location or anywhere else?
I mean, where does Kubernetes store it internally?

Yes, etcd is used for storing ConfigMaps and other resources you deploy to the cluster. See https://matthewpalmer.net/kubernetes-app-developer/articles/how-does-kubernetes-use-etcd.html and note https://github.com/kubernetes/kubernetes/issues/19781#issuecomment-172553264
You can view the content of the ConfigMap with 'kubectl get cm -o yaml', i.e. through the Kubernetes API directly, as illustrated in https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/ - you don't need to look inside etcd to see the content of a ConfigMap.
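The steps above can be sketched as a short session; the ConfigMap name app-config, its keys, and the etcd key path are illustrative assumptions (the /registry/ prefix is the default but can vary by cluster setup):

```sh
# Create a ConfigMap from literal values (name and key are made up)
kubectl create configmap app-config --from-literal=log_level=debug

# View it through the Kubernetes API as YAML - the normal way
kubectl get configmap app-config -o yaml

# If you really want to peek into etcd (not normally needed), something
# like this may work on a control-plane node with etcdctl v3 and the
# API server's client certificates:
# etcdctl get /registry/configmaps/default/app-config --prefix
```

Note that Kubernetes stores objects in etcd in a binary protobuf encoding by default, so the raw etcd value is not human-readable YAML; the API server is the intended way to read it.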

Related

How can I share some value between kubernetes pods?

I have several pods which belong to the same service. I need to share a value between all pods in this service.
Per my understanding, a shared volume won't work well, because the pods may end up on different nodes.
Having any kind of database (even the most lightweight) exposed as a service just to share this value would be overkill (though it's probably my backup plan).
I was wondering whether there is some Kubernetes-native way to share the value.
Put the values in a ConfigMap and mount it in the Pods. You can include the values of the ConfigMap in the containers of a Pod either as a volume or as environment variables.
See Configure a Pod to Use a ConfigMap in the Kubernetes documentation.
If the Pods need to update the shared values they can write to the ConfigMap (requires Kubernetes API permissions). However, in this case the ConfigMap must be included as a volume, since environment variable values from a ConfigMap are not updated when the ConfigMap changes.
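A minimal sketch of both inclusion styles; the names shared-config and app are assumptions made for illustration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-config
data:
  shared-value: "42"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /etc/config/shared-value && sleep 3600"]
    env:
    # Environment variables are NOT updated when the ConfigMap changes
    - name: SHARED_VALUE
      valueFrom:
        configMapKeyRef:
          name: shared-config
          key: shared-value
    volumeMounts:
    # The mounted file IS eventually updated when the ConfigMap changes
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: shared-config
```

Every pod of the service can mount the same ConfigMap regardless of which node it is scheduled on, which is what makes this approach node-independent.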

Are ConfigMaps and Secrets managed at node level?

I want to know if different nodes can share Secrets and ConfigMaps. I went through the Kubernetes documentation at https://kubernetes.io/docs/concepts/configuration/secret/ but could not find exact information.
All Kubernetes resources are stored centrally in the etcd database and accessed through the Kubernetes API server. When using ConfigMaps or Secrets, the data inside them is embedded directly in the resource itself (i.e. unlike a PersistentVolume, for example, they do not just reference data stored somewhere else). This is also the reason why the size of a ConfigMap or Secret is limited.
As such they can be used on all Kubernetes nodes. When you have a Pod which is using them, the ConfigMaps or Secrets will be mapped to the node where the Pod is scheduled. So the files from the ConfigMap or Secret might exist on a given node, but those are just copies of the original ConfigMap or Secret stored centrally in the etcd database.
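The "data embedded in the resource itself" point can be illustrated with a small Python sketch. The helper build_secret and its size check are hypothetical, not part of any Kubernetes client library; they just mimic how a Secret manifest carries its values inline, base64-encoded, and why the whole object has to fit under etcd's roughly 1 MiB default value-size limit:

```python
import base64
import json

# etcd's default value-size limit, which is why ConfigMaps/Secrets are capped
MAX_SECRET_BYTES = 1024 * 1024

def build_secret(name, string_data):
    """Return a Secret-like manifest dict with values base64-encoded inline."""
    data = {
        key: base64.b64encode(value.encode()).decode()
        for key, value in string_data.items()
    }
    manifest = {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "type": "Opaque",
        "data": data,
    }
    # The whole serialized object, data included, is what lands in etcd
    size = len(json.dumps(manifest).encode())
    if size > MAX_SECRET_BYTES:
        raise ValueError(f"Secret {name} is {size} bytes, over the ~1 MiB limit")
    return manifest

secret = build_secret("db-credentials", {"username": "admin", "password": "s3cr3t"})
print(secret["data"]["username"])  # -> YWRtaW4=
```

Because the values travel inside the object, any node that schedules a Pod referencing the Secret can fetch a complete copy from the API server.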

Is there a way to apply a different ConfigMap for each pod generated by a DaemonSet?

I am using Filebeat as a DaemonSet and I would like each generated pod to export to a single port for Logstash.
Is there an approach that can be used for this?
No, you cannot provide a different ConfigMap to the pods of the same DaemonSet or Deployment. If you want each pod of the DaemonSet to have a different configuration, you can mount a local volume (using hostPath) so that each pod takes its configuration from a path whose contents can differ on each node. Alternatively, you can deploy different DaemonSets with different ConfigMaps and select different nodes for each of them.
As you can read here:
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod.
...a copy of a Pod based on a single template, and this is the reason why you cannot specify different ConfigMaps to be used by different Pods managed by the DaemonSet controller.
As an alternative, you can configure several different DaemonSets, where each one is responsible for running a copy of the Pod specified in its template only on specific nodes.
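A sketch of that per-node-group approach; the labels, names, and image tag here are assumptions, and nodes would first need to be labeled, e.g. kubectl label node worker-1 filebeat-config=group-a:

```yaml
# One DaemonSet per node group, each pointing at its own ConfigMap.
# A second DaemonSet would use nodeSelector group-b and a group-b ConfigMap.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-group-a
spec:
  selector:
    matchLabels:
      app: filebeat
      group: a
  template:
    metadata:
      labels:
        app: filebeat
        group: a
    spec:
      nodeSelector:
        filebeat-config: group-a   # only runs on nodes carrying this label
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.17.0
        volumeMounts:
        - name: config
          mountPath: /usr/share/filebeat/filebeat.yml
          subPath: filebeat.yml
      volumes:
      - name: config
        configMap:
          name: filebeat-config-group-a
```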
Another alternative is using static pods:
It is possible to create Pods by writing a file to a certain directory
watched by Kubelet. These are called static pods. Unlike DaemonSet,
static Pods cannot be managed with kubectl or other Kubernetes API
clients. Static Pods do not depend on the apiserver, making them
useful in cluster bootstrapping cases. Also, static Pods may be
deprecated in the future.
The whole procedure of creating a static Pod is described here.
I hope it helps.
You can use a ConfigMap containing the config for each node and expose spec.nodeName as an environment variable to your pods. Then each pod knows which node it's running on and can decide which config to load.
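The spec.nodeName trick looks roughly like this in the DaemonSet's pod template; the container name, image, and config path are placeholders:

```yaml
containers:
- name: filebeat
  image: docker.elastic.co/beats/filebeat:7.17.0
  env:
  # Downward API: injects the name of the node the pod landed on
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  # An entrypoint script could then select a per-node config file,
  # e.g. /etc/filebeat/$(NODE_NAME).yml, from a single mounted ConfigMap.
```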

Difference between ConfigMap and Downward API

I am new to Kubernetes. Can somebody please explain why there are multiple volume types, like:
configMap
emptyDir
projected
secret
downwardAPI
persistentVolumeClaim
For a few of them I am able to figure it out, like why we need Secret instead of ConfigMap.
For the rest I am not able to understand the need.
Your question is too generic to answer, but here are a few comments off the top of my head:
If the deployed pod or its containers need configuration data, you use a configMap resource; if there are secrets or passwords, it's obvious to use a secret resource.
Now if the deployed pods want to use values that are only generated at schedule or run time, such as the pod name, they need to use the downwardAPI resource.
An emptyDir volume shares its lifecycle with the deployed pod: if the pod dies, all of the data stored in the emptyDir volume is gone. If you want to persist the data, you need the persistentVolume, persistentVolumeClaim, and StorageClass resources.
For further information, see k8s volumes.
A ConfigMap is used to make application-specific configuration data available to the container at run time.
The Downward API is used to make Kubernetes metadata (like pod namespace, pod name, pod IP, pod labels, etc.) available to the container at run time.
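A minimal sketch of exposing pod metadata via the Downward API; the pod and container names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: metadata-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo $POD_NAME in $POD_NAMESPACE at $POD_IP; sleep 3600"]
    env:
    # Each value is filled in by Kubernetes at run time, not by you
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```

A ConfigMap, by contrast, would carry values you wrote yourself; the Downward API only surfaces what Kubernetes already knows about the pod.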

Increasing storage for a pod - kubernetes

My Jenkins Nexus pod has run out of disk space and I need to increase the persistent volume claim.
I can see the YAML file for this in the Kubernetes dashboard; however, when I try to change it I get - PersistentVolumeClaim "jenkins-x-nexus" is invalid: spec: Forbidden: field is immutable after creation
Deleting the pod and quickly trying to update the YAML doesn't work either.
Our version of Kubernetes (1.8) doesn't have kubectl stop, so is there a way to stop the replication controller in order to change the YAML?
Our version of Kubernetes (1.8) doesn't have kubectl stop, so is there a way to stop the replication controller in order to change the yaml?
You can scale RC to 0, and it will stop spawning pods.
I can see the yaml file for this in the kubernetes dashboard, however when I try to change it I get - PersistentVolumeClaim "jenkins-x-nexus" is invalid: spec: Forbidden: field is immutable after creation
That message means that you cannot change the size of your volume in place. There are several tickets on GitHub about that limitation for different types of volumes; see that one, for example.
So, to change size, you need to create a new bigger PVC and somehow migrate your data from old volume to the new one.
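The migration the answer describes could look roughly like this; all names are placeholders, and on newer clusters whose StorageClass sets allowVolumeExpansion you may instead be able to simply edit spec.resources.requests.storage on the PVC:

```sh
# Stop the replication controller from spawning pods
kubectl scale rc jenkins-x-nexus --replicas=0

# Create a new, larger PVC from a manifest with a bigger
# spec.resources.requests.storage value
kubectl apply -f nexus-pvc-bigger.yaml

# Copy the data across, e.g. with a temporary pod that mounts both
# PVCs and runs something like: cp -a /old-data/. /new-data/
# Then point the controller's pod template at the new claim and scale back up.
```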