I am new to Kubernetes. Can somebody please explain why there are multiple volume types, like:
configMap
emptyDir
projected
secret
downwardAPI
persistentVolumeClaim
For a few of them I can figure out the reason, like why we need secret instead of configMap.
For the rest I am not able to understand why they are needed.
Your question is quite broad, but here are a few comments off the top of my head.
If the deployed pod or its containers need configuration data, then you use a configMap resource; if there are secrets or passwords, it is obvious to use a secret resource.
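As a rough illustration (the names app-config and db-secret are made up for this sketch), a pod can take plain settings from a ConfigMap and mount sensitive values from a Secret as files:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config        # plain configuration values (made-up name)
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-secret     # sensitive values, mounted as files (made-up name)
```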
Now if the deployed pod wants to use something like POD_NAME, which is only generated at scheduling or run time, then it needs to use the downwardAPI resource.
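A minimal sketch of that, exposing the pod's own name as an environment variable (the image and names are just placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo running as $POD_NAME; sleep 3600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name   # only known once the pod exists
```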
emptyDir shares its lifecycle with the deployed pod: if the pod dies, all of the data stored in the emptyDir volume is gone. If you want to persist the data, then you need the persistentVolume, persistentVolumeClaim and storageClass resources.
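A quick emptyDir sketch; the /scratch directory below is created with the pod and deleted with it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "date > /scratch/started; sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {}        # lives and dies with the pod
```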
For further information, see k8s volumes.
ConfigMap is used to make application-specific configuration data available to the container at run time.
downwardAPI is used to make Kubernetes metadata (like the pod namespace, pod name, pod IP, pod labels, etc.) available to the container at run time.
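The volume form of the downward API is the usual way to expose the full set of labels or annotations as files. A sketch with placeholder names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-volume-demo
  labels:
    app: demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/labels; sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels            # written as a file of key="value" lines
        fieldRef:
          fieldPath: metadata.labels
      - path: namespace
        fieldRef:
          fieldPath: metadata.namespace
```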
I have several pods which belong to the same service. I need to share a value between all pods in this service.
Per my understanding, the shared volume won't work well, because pods may end up being on different nodes.
Having any kind of database (even the most lightweight) exposed as a service to share this value would be overkill (though it's probably my backup plan).
I was wondering whether there is some k8s native way to share the value.
Put the values in a ConfigMap and mount it in the Pods. You can include the values of the ConfigMap in the containers of a Pod either as a volume or as environment variables.
See Configure a Pod to Use a ConfigMap in the Kubernetes documentation.
If the Pods need to update the shared values they can write to the ConfigMap (requires Kubernetes API permissions). However, in this case the ConfigMap must be included as a volume, since environment variable values from a ConfigMap are not updated when the ConfigMap changes.
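A sketch of that setup, assuming a ConfigMap named shared-value holds the data; every pod that mounts it sees the updated file contents shortly after the ConfigMap changes, since the kubelet refreshes mounted ConfigMaps periodically:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-value          # made-up name
data:
  value: "42"
---
apiVersion: v1
kind: Pod
metadata:
  name: reader
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/shared/value; sleep 10; done"]
    volumeMounts:
    - name: shared
      mountPath: /etc/shared
  volumes:
  - name: shared
    configMap:
      name: shared-value      # mounted as files, so updates propagate
```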
I want to know if different nodes can share Secrets and ConfigMaps. I went through the Kubernetes documentation at https://kubernetes.io/docs/concepts/configuration/secret/ but could not find this exact information.
All Kubernetes resources are stored centrally in the etcd database and accessed through the Kubernetes API server. When using ConfigMaps or Secrets, the data inside them is embedded directly into the resource itself (i.e. unlike a PersistentVolume, for example, they do not just reference data stored somewhere else). This is also the reason why the size of a ConfigMap or Secret is limited.
As such, they can be used on all Kubernetes nodes. When you have a Pod which uses them, the ConfigMaps or Secrets will be mapped to the node where the Pod is scheduled. So the files from the ConfigMap or Secret might exist on a given node, but those are just copies of the original ConfigMap or Secret stored centrally in the etcd database.
I'm learning Kubernetes and there is something I don't fully get.
There are 3 ways of setting up static storage:
Pods with volumes that attach the storage directly
Pods with a PVC attached to their volume
StatefulSets, which also have PVCs inside
I can understand the power of PVCs when working together with a StorageClass, but not when working with static storage and local storage like hostPath.
To me, it sounds very similar:
In the first case I have a volume directly attached to a pod.
In the second case I have a volume statically attached to a PVC, which is also manually attached to a Pod. In the end, the volume will be statically attached to the Pod.
In both cases, the data will remain when the Pod terminates and will be adopted by the next Pod with the corresponding definition, right?
The only benefit I see from using PVCs over a plain Pod volume is that you can define the access mode. Apart from that, is there a difference when working with hostPath?
On the other hand, the advantage of using a StatefulSet instead of a PVC is (if I understood properly) that it gets a headless service, and that the rollout and rollback mechanisms work differently. Is that the point?
Thank you in advance!
Extracted from this blog:
The biggest difference is that the Kubernetes scheduler understands which node a Local Persistent Volume belongs to. With HostPath volumes, a pod referencing a HostPath volume may be moved by the scheduler to a different node resulting in data loss. But with Local Persistent Volumes, the Kubernetes scheduler ensures that a pod using a Local Persistent Volume is always scheduled to the same node.
Using hostPath does not guarantee that a pod will restart on the same node. So your pod can attach /tmp/storage on k8s-node-1; then, if you delete and re-create the pod, it may attach /tmp/storage on k8s-node-[2-n].
On the contrary, if you use a PVC/PV with a local persistent storage class, then if you delete and re-create the pod, it will stick to the node that holds the local persistent storage.
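A sketch of such a local PV, pinned to k8s-node-1 through nodeAffinity (the path and node name are assumptions, and a matching local-storage StorageClass, typically created with volumeBindingMode: WaitForFirstConsumer, is assumed to exist):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1     # directory or disk on that one node
  nodeAffinity:               # pins every consumer of this PV to k8s-node-1
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-node-1
```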
A StatefulSet creates pods and has a volumeClaimTemplates field, which creates a dedicated PVC for each pod. So each pod created by the StatefulSet will have its own dedicated storage, linked as Pod -> PVC -> PV -> Storage. So StatefulSets also use the PVC/PV mechanism.
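A sketch of that mechanism; this StatefulSet would end up with one PVC per pod, named data-web-0, data-web-1 and data-web-2 (the names web and data are placeholders):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /var/log/app
  volumeClaimTemplates:        # one dedicated PVC per pod
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```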
More details are available here.
I have a StatefulSet-1 running with 3 replicas, and each pod writes logs to its own persistent volume, say pv1, pv2, pv3 (achieved using volumeClaimTemplates:).
I have another StatefulSet-2 running with 3 replicas, and I want each pod of StatefulSet-2 to access the already created volumes of StatefulSet-1, i.e. pv1, pv2 and pv3, to process the separate logs written by each pod of StatefulSet-1.
So pv1, pv2 and pv3 should be used by both StatefulSet-1 and StatefulSet-2, since pv1, pv2 and pv3 were created as part of the StatefulSet-1 deployment. pv1, pv2 and pv3 will of course take the pod names of StatefulSet-1, which is OK for StatefulSet-2.
How do I configure StatefulSet-2 to achieve the above scenario? Please help!
Thanks & Regards,
Sudhir
This won't work.
1. PVs backed by GCE disks only support the ReadWriteOnce access mode, so one PVC per pod.
2. You are giving the StatefulSet pods their PVCs through volumeClaimTemplates, which rely on dynamic volume provisioning to create the appropriate PVs and PVCs.
If you need these pods to share a PVC, your best bet is to use a ReadWriteMany PV, such as one backed by NFS. You will also need to create the pods of StatefulSet-2 manually so that they mount the appropriate PVCs. You could achieve this by creating a single-pod Deployment for each one.
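A sketch of such an NFS-backed ReadWriteMany PV and a claim that binds to it (the NFS server address and export path are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-logs
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany            # many pods on many nodes may mount it
  nfs:
    server: nfs.example.com  # placeholder NFS server
    path: /exports/logs      # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-logs
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""       # bind to the pre-created PV instead of provisioning
  resources:
    requests:
      storage: 10Gi
```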
Something else to consider: could the containers of each StatefulSet run together in the same pods? Normally this is not recommended, but it would allow them both to share the same volumes (as long as they do not use the same ports).
I've been working with Kubernetes for quite a while, but I still often get confused about Volume, PersistentVolume and PersistentVolumeClaim. It would be nice if someone could briefly summarize the differences between them.
Volume - For a pod to reference storage that is external to it, it needs a volume spec. This volume can come from a configMap, a secret, a persistentVolumeClaim, a hostPath, etc.
PersistentVolume - It is a representation of storage that has been made available. The plugins for each cloud provider enable the creation of this resource.
PersistentVolumeClaim - This claims specific resources, and if a PersistentVolume matching the claim's requirements is available, the claim gets bound to that PersistentVolume.
At this point the PVC/PV pair is still unused. Then, in the Pod spec, the pod references the claim as a volume, and the storage is attached to the Pod.
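A minimal sketch of that flow with placeholder names: a claim, then a Pod that mounts it:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-claim   # only now do the PVC/PV actually get used
```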
These are all part of a Kubernetes application context. To keep applications portable between different Kubernetes platforms, it is good to abstract the infrastructure away from the application. Here I will explain which Kubernetes objects belong to the application config and which to the platform config. If your application runs on both GCP and AWS, for example, you will need two sets of platform configs, one for GCP and one for AWS.
Application config
Volume
A pod may mount volumes. The source for a volume can be different things, e.g. a ConfigMap, a Secret or a PersistentVolumeClaim.
PersistentVolumeClaim
A PersistentVolumeClaim represents a claim on a PersistentVolume instance. For portability, this claim can request a specific StorageClass, e.g. SSD.
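For example, a claim requesting a hypothetical ssd class might look like this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-data            # placeholder name
spec:
  storageClassName: ssd      # hypothetical class, defined in the platform config
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```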
Platform config
StorageClass
A StorageClass represents a PersistentVolume type with specific properties, e.g. SSD. But the StorageClass definition is different on each platform: one definition on AWS or Azure, another on GCP or on Minikube.
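For instance, two platform-specific definitions of the same hypothetical ssd class; because both carry the same name, the application config above stays unchanged:

```yaml
# GCP variant of the hypothetical "ssd" class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
# AWS variant with the same name, so applications need no change
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
```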
PersistentVolume
This is a specific volume on the platform, and its type differs between platforms, e.g. awsElasticBlockStore or gcePersistentDisk. This is the instance that holds the actual data.
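A sketch of a statically provisioned PV on GCP, referencing a pre-created disk (the disk name is a placeholder):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-volume
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: ssd
  gcePersistentDisk:
    pdName: my-data-disk     # a disk created beforehand in the GCP project
    fsType: ext4
```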
Minikube example
See Configure a Pod to Use a PersistentVolume for Storage for a full example on how to use PersistentVolume, StorageClass and Volume for a Pod using Minikube and a hostPath.
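The heart of that tutorial is a hostPath-backed PV roughly along these lines; hostPath is fine on Minikube since there is only one node, so the path always refers to the same machine:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data          # directory on the single Minikube node
```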