Kubernetes volume: one shared volume and one dedicated volume between replicated pods

I'm new to Kubernetes and learning it.
I have a Deployment with replicas=3.
Is there any way I can mount a separate volume for each pod and also one volume shared by all pods?
Requirements:
Case 1: My application generates a temp file named tempfile.txt. With three replica pods, each one will generate its own tempfile.txt, but the contents may differ, so if I use a shared volume they will overwrite each other.
Case 2: I have a common file that is not part of the image and is needed by all pods when the application starts, i.e. copy files from the host into every pod's container.
Thanks in Advance.

There are multiple ways to achieve the first part. Here is mine:
Instead of a Deployment, use a StatefulSet to create the replicas. StatefulSets allow you to include a volume claim template, so each pod has a volume created along with it; thus each new pod will have a new PV created specifically for it.
This does require your cluster to allow for dynamically provisioned volumes.
Depending on the size of your tempfile.txt, your use case, and your cluster/node configuration, you might also want to consider using a hostPath volume which will use the local storage of your node.
For the second part of your question, any ReadWriteMany volume will work (such as any NFS option).
On the note of subPath, this should also work, as long as you define a different subPath for each pod. The example in the link provided by DT does this by creating a subPath based on the pod name.
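For illustration, a rough (untested) sketch of a StatefulSet covering both parts; the image name, the "standard" storage class and the pre-created ReadWriteMany claim "shared-config" are placeholders you would replace with your own:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest            # placeholder image
        volumeMounts:
        - name: scratch                # per-pod volume from the claim template below
          mountPath: /var/tmp/app      # where tempfile.txt would be written
        - name: shared                 # common file(s) visible to all replicas
          mountPath: /etc/common
          readOnly: true
      volumes:
      - name: shared
        persistentVolumeClaim:
          claimName: shared-config     # assumed pre-created ReadWriteMany claim (e.g. NFS-backed)
  volumeClaimTemplates:                # one PVC (and PV) is provisioned per pod
  - metadata:
      name: scratch
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard       # assumed dynamic-provisioning storage class
      resources:
        requests:
          storage: 1Gi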

Related

Kubernetes Edit File In A Pod

I have used some Bitnami charts in my Kubernetes app. In my pod there is a file whose path is /etc/settings/test.html, and I want to override that file. When I searched, I figured out that I should mount my file by creating a ConfigMap. But how can I use the created ConfigMap with the existing pod? Many of the examples create a new pod and use the created ConfigMap, but I don't want to create a new pod, I want to use the existing pod.
Thanks
Almost all fields of a pod spec are immutable, meaning that you can't change them without destroying the old pod and creating a new one with the desired parameters. There is no way to edit a pod's volume list without recreating it.
The reason behind this is that pods aren't meant to be immortal. Pods are meant to be temporary units that can be spawned and destroyed according to the scheduler's needs. In general, you need a workload object that does pod management for you (a Deployment, StatefulSet, Job, or DaemonSet, depending on your deployment strategy and the nature of the application).
There are two ways to edit a file in an existing pod: use kubectl exec and console commands to edit the file in place, or use kubectl cp to copy an already edited file into the pod. I advise against both, because the change is not permanent and will be lost when the pod is recreated. Better to back up the necessary data, switch to a Deployment with one replica, and then mount a ConfigMap as you read about.
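As a rough sketch of that last suggestion (the ConfigMap name, file content and container name below are made up for illustration), you would define the ConfigMap and mount just the one file with subPath in the Deployment's pod template:

apiVersion: v1
kind: ConfigMap
metadata:
  name: settings-override              # hypothetical name
data:
  test.html: |
    <html><body>replacement content</body></html>

# fragment of the Deployment's spec.template.spec:
containers:
- name: app                            # your existing container
  volumeMounts:
  - name: settings
    mountPath: /etc/settings/test.html
    subPath: test.html                 # overlays only this file, leaving the rest of /etc/settings intact
volumes:
- name: settings
  configMap:
    name: settings-override

Keep in mind that a file mounted via subPath is not refreshed when the ConfigMap changes; the pod has to be recreated to pick up new content.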

Facing "The Pod "web-nginx" is invalid: spec.initContainers: Forbidden: pod updates may not add or remove containers" applying pod with initcontainers

I was trying to create a file before the application comes up in the Kubernetes cluster, using initContainers.
But when I set up pod.yaml and try to apply it with "kubectl apply -f pod.yaml", it throws the error quoted in the title (screenshot omitted).
Like the error says, you cannot update a Pod by adding or removing containers. To quote the documentation ( https://kubernetes.io/docs/concepts/workloads/pods/#pod-update-and-replacement ):
Kubernetes doesn't prevent you from managing Pods directly. It is possible to update some fields of a running Pod, in place. However, Pod update operations like patch, and replace have some limitations
This is because you usually don't create Pods directly; instead you use Deployments, Jobs, StatefulSets (and more), which are higher-level resources that define Pod templates. When you modify the template, Kubernetes simply deletes the old Pod and then schedules the new version.
In your case:
You could delete the Pod first, then create it again with the new spec you defined. But take into consideration that the Pod may be scheduled on a different node of the cluster (if you have more than one) and may get a different IP address, as Pods are disposable entities.
Or change your definition to a slightly more complex one, a Deployment ( https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ ), which can be changed as desired; each time you make a change to its definition, the old Pod will be removed and a new one will be scheduled.
From the spec of your Pod, I see that you are using a volume to share data between the init container and the main container. This is the optimal way, but you don't necessarily need to use a hostPath. If the only need for the volume is to share data between the init container and the other containers, you can simply use the emptyDir type, which acts as a temporary volume that can be shared between containers and will be cleaned up when the Pod is removed from the cluster for any reason.
You can check the documentation here: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
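For reference, an untested sketch of the same idea wrapped in a Deployment, with an emptyDir shared between the init container and the main container (image names and paths are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-nginx
  template:
    metadata:
      labels:
        app: web-nginx
    spec:
      initContainers:
      - name: generate-file
        image: busybox
        command: ["sh", "-c", "echo 'generated before the app starts' > /work/index.html"]
        volumeMounts:
        - name: workdir
          mountPath: /work
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
      volumes:
      - name: workdir
        emptyDir: {}                   # temporary volume, cleaned up when the Pod is removed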

How to recreate a Kubernetes persistentVolume?

I have a persistent volume.
I want to force Kubernetes to recreate it, as its contents are corrupted. Alternatively, if there is a way to fix the corruption, that would also be a solution.
I have checked that the persistent volume is working as expected using:
kubectl describe pv -n
And my pod was previously using it. However, my pod is now failing due to a corrupted file within the persistent volume.
I would like to recreate the persistent volume.
If I delete the persistent volume, will Kubernetes create a new one, or will I have to manually create a new one to attach?
If you delete a PersistentVolume, Kubernetes will not create a new one for you; you have to create a new one manually. That is the short answer to your question.
Beyond that, when you are done with your PV you can delete the PVC object, and what happens next depends on the reclaim policy set on the PV: Delete, Retain, or Recycle.
As the official Kubernetes docs state:
When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with the volume after it has been released of its claim. Currently, volumes can either be Retained, Recycled, or Deleted.
For more, see the Kubernetes persistent volumes documentation.
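If you want to keep the underlying data while replacing the PVC, you can check, and if needed change, the reclaim policy first; roughly like this, where <pv-name> is a placeholder:

kubectl get pv                         # the RECLAIM POLICY column shows the current setting
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'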
You should be able to check whether the volume is in a usable state by accessing it from the host on which the volume is present. Simply try creating and reading a file in it.
You could also run fsck on the block device to check whether the filesystem can be repaired, for example: # fsck /dev/sda3
If it is corrupted for good, the only way would be to recover from backup if available. Or else the data is gone, and you'd need to create a new volume.
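For completeness, the host-side check mentioned above could look roughly like this (the device and mount point are placeholders for your actual volume):

mount /dev/sdX1 /mnt/check             # placeholder device and mount point
echo ok > /mnt/check/probe.txt && cat /mnt/check/probe.txt   # basic write/read test
umount /mnt/check
fsck -n /dev/sdX1                      # read-only check; drop -n (with the device unmounted) to attempt repairs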
Volume creation in Kubernetes can be done manually. When you use options such as hostPath, awsElasticBlockStore, etc. under the volumes section of the pod definition, volume creation is static: the volume, which must already exist, is assigned to the pod, and Kubernetes will not create a new volume for the pod.
If you want volumes to be created dynamically, you must use a PersistentVolumeClaim under the volumes section of the pod definition, combined with Storage Classes. Storage Classes let you use provisioners such as AWSElasticBlockStore, AzureFile, AzureDisk, CephFS, GlusterFS, etc., which provision volumes on demand.
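As an untested sketch of that dynamic path (the class name, provisioner and size are just examples, not a recommendation for your cluster):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ebs                       # hypothetical name
provisioner: kubernetes.io/aws-ebs     # legacy in-tree provisioner; newer clusters typically use a CSI driver such as ebs.csi.aws.com
parameters:
  type: gp2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: fast-ebs
  resources:
    requests:
      storage: 10Gi

Referencing data-claim under the pod's volumes section (persistentVolumeClaim.claimName) then triggers provisioning of a fresh volume on demand.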

Attach new azure disk volume per pod in Kubernetes deployment

I have a Kubernetes Deployment with 3 replicas, and each replica needs 7GB of storage. I want to be able to attach a new, empty azureDisk volume to each pod/replica created in this Deployment.
Basically I have the following restrictions:
I must use a Deployment, not a StatefulSet.
Each time a pod dies and a new pod comes up, it shouldn't carry any state and should get a new, empty azureDisk attached to it.
The pods do not share their storage; each pod has its own 7GB storage.
The pods need to use azureDisk because I need 7GB of storage on demand, which means dynamically creating Azure disks when I scale my Deployment's replicas.
When using azureDisk, I need to use it with access mode ReadWriteOnce (as the docs say), which attaches only one pod to the disk. That is fine, but it only works if I have one pod; with more than one pod I can't use the same claim... Is there any way to dynamically ask for more volumes like the one in the first claim?
NOTE 1: I know there is volumeClaimTemplates, but that is only available for a StatefulSet.
NOTE 2: I don't care if a pod restarts 100 times and this in turn creates 100 PVs of which only 1 is used; that is fine.
I'm not sure why you can't use a StatefulSet, but the only way I see to do this is to create your own operator for your application. The operator would have a controller that manages your pods, similar to what a ReplicaSet does, except that for every new pod that is instantiated a new PVC is created.
It might just be better to figure out how to run your application in a StatefulSet and use volumeClaimTemplates.
✌️
The main question is: why? ("if I have an application which doesn't have state, still I need a large volume for each pod")
Looking at this explanation, you should focus on stateful applications. From my point of view it looks like you are forcing yourself to use a Deployment instead of a StatefulSet for a stateful application.
In your example you probably need a PV which supports different access modes.
The main problem you have experienced is that with a PV supporting only ReadWriteOnce, the volume can be mounted by a single node at a time, so your pods on other nodes will not start because the volume mount fails. Sharing one claim across pods only works in a ReadOnlyMany/ReadWriteMany scenario.
Please refer to other providers which have different access-mode capabilities, such as Filestore (GCP), AzureFile (Azure), GlusterFS, or NFS.
Deployments vs. StatefulSets
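To illustrate the ReadWriteMany route mentioned above, an untested AzureFile sketch could look like this (the class and claim names are placeholders; newer clusters typically use the file.csi.azure.com CSI driver instead of the in-tree provisioner):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-shared
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany                      # can be mounted by pods on multiple nodes at once
  storageClassName: azurefile-shared
  resources:
    requests:
      storage: 7Gi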

How to mimic Docker ability to pre-populate a volume from a container directory with Kubernetes

I am migrating my previous deployment made with docker-compose to Kubernetes.
In my previous deployment, some containers have data created at build time under certain paths, and those paths are mounted as volumes.
Therefore, as the Docker volume documentation states, the volume (not a bind mount) is pre-populated with the content of the container directory.
I'd like to achieve this behavior with Kubernetes and its persistent volumes. How can I do that? Do I need to add some kind of logic using scripts in order to copy my container's files to the mounted path when the data is not present the first time the container starts?
Possibly related question: Kubernetes mount volume on existing directory with files inside the container
I think your options are:
ConfigMap (is "some data" a set of configuration files?)
Init containers (as mentioned; see the sketch after this list)
CSI Volume Cloning (cloning combined with an init container or your first app container)
There used to be a gitRepo volume type; it was deprecated in favour of init containers, from which you can clone your config and data.
A hostPath volume mount is an option too.
An NFS volume is probably a very reasonable option, and similar in approach to your Docker volumes.
Storage types such as NFS, iSCSI, awsElasticBlockStore, gcePersistentDisk and others can be pre-populated. There are constraints; NFS is probably the most flexible for sharing bits and bytes.
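As a sketch of the init-container option from the list above (the image, paths and claim name are made up; the idea is to copy the image's baked-in files into the volume once, then mount the populated volume where the application expects them):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
      - name: seed-data
        image: myapp:latest            # the same image that contains the build-time files
        command:
        - sh
        - -c
        - |
          # copy the baked-in files only if the volume is still empty
          if [ -z "$(ls -A /data)" ]; then
            cp -a /app/seed/. /data/
          fi
        volumeMounts:
        - name: appdata
          mountPath: /data
      containers:
      - name: myapp
        image: myapp:latest
        volumeMounts:
        - name: appdata
          mountPath: /app/seed         # the path the image was built with
      volumes:
      - name: appdata
        persistentVolumeClaim:
          claimName: myapp-data        # assumed pre-existing or dynamically provisioned claim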
FYI
The subPath field might be of interest too, depending on your use case, and PodPreset might help in streamlining the operation across the fleet of your pods.
HTH