Keeping pod volume mount configurable in Kubernetes

Is it possible to keep the volume mount configurable, such that I can choose which specific persistent volume claim to mount during Pod creation?
I have a list of volume claims and I'm looking to configure my PodSpec in a way that lets me decide which claim to use as a volume mount without having to modify the YAML every time.
I'm fine with running an additional kubectl command on the cluster before creating a new pod.

Based on your description here and in Slack (https://kubernetes.slack.com/archives/C09NXKJKA/p1559740826069800):
First, there is no interactive way to deploy YAML that lets you choose things at run-time. YAML is declarative: you declare, then apply. No questions asked, unless you have syntax errors!
Second, if you are looking for a kubectl command that a sysadmin can apply on production right after deploying the dev YAML, you can use something similar to your use case:
    kubectl patch [resource name, e.g. pod] --patch '{"spec":{"volumes":[{"name": "glusterfsvol","persistentVolumeClaim": {"claimName": "nameOfNewVolumeClaim"}}]}}'
Lastly, what would be more concrete in your use case is to use one StorageClass in dev and a different one in production. That way you can keep the same PVC, which points to different storage depending on how it is defined in each cluster. Refer to the docs.
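For illustration, here is a minimal sketch of the Pod spec that patch targets. The pod name, image, and mount path are assumptions; only the glusterfsvol volume name comes from the patch above:

    apiVersion: v1
    kind: Pod
    metadata:
      name: mypod                      # assumed name
    spec:
      containers:
      - name: app
        image: nginx                   # assumed image
        volumeMounts:
        - name: glusterfsvol
          mountPath: /data             # assumed mount path
      volumes:
      - name: glusterfsvol
        persistentVolumeClaim:
          claimName: dev-volume-claim  # the field the patch rewrites

Keep in mind that the volume list of a running Pod is immutable, so in practice the patch targets the pod template of a workload object (e.g. a Deployment), or you apply the patched manifest before the pod is created.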

Related

Kubernetes Edit File In A Pod

I have used some Bitnami charts in my Kubernetes app. In my pod there is a file whose path is /etc/settings/test.html, and I want to override it. When I searched, I figured out that I should mount my file by creating a ConfigMap. But how can I use the created ConfigMap with the existing pod? Many of the examples create a new pod and use the created ConfigMap, but I don't want to create a new pod; I want to use the existing pod.
Thanks
Most, if not all, pod spec fields are immutable, meaning that you can't change them without destroying the old pod and creating a new one with the desired parameters. There is no way to edit a pod's volume list without recreating it.
The reason behind this is that pods aren't meant to be immortal. Pods are meant to be temporary units that can be spawned and destroyed according to scheduler needs. In general, you need a workload object that does pod management for you (a Deployment, StatefulSet, Job, or DaemonSet, depending on deployment strategy and application nature).
There are two ways to edit a file in an existing pod: either use kubectl exec and console commands to edit the file in place, or use kubectl cp to copy an already edited file into the pod. I advise against both, because neither change is permanent. Better to back up the necessary data, switch the deployment type to a Deployment with one replica, and then mount a ConfigMap as you read on the Internet.
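A minimal sketch of that last suggestion; all names here are hypothetical (not taken from the Bitnami chart), and subPath is used so only the one file is overridden rather than the whole /etc/settings directory:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: test-html                  # assumed name
    data:
      test.html: |
        <html>replacement content</html>
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp                      # assumed name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: app
            image: bitnami/nginx       # assumed image
            volumeMounts:
            - name: settings
              mountPath: /etc/settings/test.html
              subPath: test.html       # override just this one file
          volumes:
          - name: settings
            configMap:
              name: test-html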

Facing "The Pod "web-nginx" is invalid: spec.initContainers: Forbidden: pod updates may not add or remove containers" applying pod with initcontainers

I was trying to create a file before the application comes up in the Kubernetes cluster, using initContainers.
But when I set up the pod.yaml and try to apply it with kubectl apply -f pod.yaml, it throws the error below:
The Pod "web-nginx" is invalid: spec.initContainers: Forbidden: pod updates may not add or remove containers
Like the error says, you cannot update a Pod by adding or removing containers. To quote the documentation (https://kubernetes.io/docs/concepts/workloads/pods/#pod-update-and-replacement):
Kubernetes doesn't prevent you from managing Pods directly. It is possible to update some fields of a running Pod, in place. However, Pod update operations like patch, and replace have some limitations.
This is because you usually don't create Pods directly; instead you use Deployments, Jobs, StatefulSets (and more), which are high-level resources that define Pod templates. When you modify the template, Kubernetes simply deletes the old Pod and then schedules the new version.
In your case:
You could delete the pod first, then create it again with the new specs you defined. But take into consideration that the Pod may be scheduled on a different node of the cluster (if you have more than one) and may get a different IP address, as Pods are disposable entities.
Or change your definition to a slightly more complex one, a Deployment (https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), which can be changed as desired; each time you make a change to its definition, the old Pod is removed and a new one is scheduled.
From the spec of your Pod, I see that you are using a volume to share data between the init container and the main container. This is the optimal way, but you don't necessarily need a hostPath. If the only purpose of the volume is to share data between the init container and the other containers, you can simply use the emptyDir type, which acts as a temporary volume that can be shared between containers and is cleaned up when the Pod is removed from the cluster for any reason.
You can check the documentation here: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
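A minimal sketch of that pattern; the images, command, and paths are assumptions, since the original spec isn't shown:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-nginx
    spec:
      initContainers:
      - name: init
        image: busybox                 # assumed image
        command: ["sh", "-c", "echo hello > /work/index.html"]
        volumeMounts:
        - name: shared
          mountPath: /work
      containers:
      - name: web
        image: nginx                   # assumed image
        volumeMounts:
        - name: shared
          mountPath: /usr/share/nginx/html
      volumes:
      - name: shared
        emptyDir: {}                   # temporary, removed with the Pod

Remember to delete the existing Pod first (or wrap this spec in a Deployment template), since init containers cannot be added to a running Pod.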

Auto assign predefined env vars / mounts to every pod (including future ones) on a cluster

Problem:
I want every pod created in my cluster to hold/point to the same data.
For example, let's say I want all of them to have an env var like OWNER=MYNAME.
There are multiple users in my cluster and I don't want them to have to change their YAMLs and manually assign OWNER=MYNAME to env.
Is there a way to have all current/future pods automatically assigned a predefined value, or to mount a ConfigMap, so that the same information is available in every single pod?
Can this be done at the cluster level? At the namespace level?
I want it to be transparent to the user: a user would apply whatever pod to the cluster, and the info would be available to them without even asking.
Thanks, everyone!
Pod Preset might help you partially achieve what you need. The PodPreset resource allows injecting additional runtime requirements into a Pod at creation time. You use label selectors to specify the Pods to which a given PodPreset applies.
Check the documentation to see how Pod Presets work.
First, you need to enable Pod Presets in your cluster.
You can use a Pod Preset to inject env variables or volumes into your pods.
You can also inject a ConfigMap into your pods.
Apply some common label to all the pods that should share the config, and use that label in your PodPreset resource, as in the sketch below.
Unfortunately, there are plans to remove Pod Presets altogether in coming releases, but you can still use them with current releases. There are also other implementations similar to Pod Presets that you can try.
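A minimal sketch of such a PodPreset. This assumes the settings.k8s.io/v1alpha1 API and the PodPreset admission controller are enabled in the cluster; the preset name and label are assumptions:

    apiVersion: settings.k8s.io/v1alpha1
    kind: PodPreset
    metadata:
      name: owner-preset           # assumed name
      namespace: default           # PodPresets are namespaced
    spec:
      selector:
        matchLabels:
          inject-owner: "true"     # assumed common label
      env:
      - name: OWNER
        value: MYNAME

Any current or future pod created in that namespace with the label inject-owner: "true" gets the OWNER env var injected at admission time, without the user touching the rest of their YAML.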

kubernetes volume: One shared volume and one dedicated volume between replicated pods

I'm new to Kubernetes and learning it.
I have a Deployment kind of pods with replicas=3.
Is there any way I can mount a separate volume for each pod plus one volume shared by all pods?
Requirements:
Case 1: my application generates a temp file named tempfile.txt. There are three replica pods and each one will generate tempfile.txt, but the content might differ, so if I use a shared volume they will overwrite each other.
Case 2: I have a common file that is not part of the image and that will be used by all pods when starting the application, i.e. copy files from the host into all the pods' containers.
Thanks in Advance.
There are multiple ways to achieve the first part. Here is mine:
Instead of a Deployment, use a StatefulSet to create the replicas. StatefulSets let you include a volume claim template, so each pod has a volume created with it; thus each new pod will have a new PV created specifically for it.
This does require your cluster to allow for dynamically provisioned volumes.
Depending on the size of your tempfile.txt, your use case, and your cluster/node configuration, you might also want to consider using a hostPath volume, which will use the local storage of your node.
For the second part of your question, any readWriteMany volume will work (such as any NFS option).
On the note of subPath: this should also work, as long as you define a different subPath for each pod. The example in the link provided by DT does this by creating a subPath based off the pod name; see the sketch below.
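A minimal sketch combining both ideas. The names, image, sizes, and the ReadWriteMany claim shared-nfs-pvc are assumptions:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: myapp                        # assumed name
    spec:
      serviceName: myapp                 # assumed headless Service
      replicas: 3
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: app
            image: mycompany/myapp       # assumed image
            volumeMounts:
            - name: scratch              # case 1: one PV per pod
              mountPath: /tmp/scratch
            - name: shared               # case 2: one volume for all pods
              mountPath: /shared
          volumes:
          - name: shared
            persistentVolumeClaim:
              claimName: shared-nfs-pvc  # assumed ReadWriteMany claim
      volumeClaimTemplates:
      - metadata:
          name: scratch
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi

If you would rather keep a Deployment, the subPath approach can be done with subPathExpr: $(POD_NAME) on the shared mount, exposing the pod name via the downward API, so each replica writes under its own directory.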

kubernetes - kubectl run vs create and apply

I'm just getting started with Kubernetes and setting up a cluster on AWS using kops. In many of the examples I read (and try), there are commands like:
kubectl run my-app --image=mycompany/myapp:latest --replicas=1 --port=8080
kubectl expose deployment my-app --port=80 --type=LoadBalancer
This seems to do several things behind the scenes, and I can view the manifest files created using kubectl edit deployment, and so forth. However, I see many examples where people create the manifest files by hand and use commands like kubectl create -f or kubectl apply -f.
Am I correct in assuming that both approaches accomplish the same goals, but that by creating the manifest files yourself, you have a finer grain of control?
Would I then have to be creating Service, ReplicationController, and Pod specs myself?
Lastly, if you create the manifest files yourself, how do people generally structure their projects as far as storing these files? Are they simply in a directory alongside the project they are deploying?
The fundamental question is how to get all of your K8s objects into the cluster. There are several ways to do this job:
Using Generators (Run, Expose)
Using Imperative way (Create)
Using Declarative way (Apply)
All of the above ways have different purposes and degrees of simplicity. For instance, if you want to check quickly whether a container is working as you desired, you might use generators.
If you want to version-control the k8s objects, it's better to use the declarative way, which helps us determine the accuracy of data in k8s objects.
Deployments, ReplicaSets, and Pods are different layers which solve different problems. All of these concepts provide flexibility to k8s:
Pods: it makes sure that related containers are together and provides efficiency.
ReplicaSet: it makes sure that the k8s cluster has the desired number of replicas of the pods.
Deployment: it makes sure that you can have different versions of Pods and provides the capability to roll back to a previous version.
Lastly, it depends on your use case how you want to use these concepts or methodology. It's not about which is good or which is bad.
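As a concrete illustration of the three approaches, using the commands from the question (the manifest file name is a placeholder; also note that on recent kubectl versions, kubectl run creates a bare Pod rather than a Deployment):

    # Generators: quick imperative experiments
    kubectl run my-app --image=mycompany/myapp:latest --port=8080
    kubectl expose deployment my-app --port=80 --type=LoadBalancer

    # Imperative with a manifest
    kubectl create -f my-app-deployment.yaml

    # Declarative: the manifest in version control is the source of truth
    kubectl apply -f my-app-deployment.yaml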
There is a little more nuance to the difference between apply and create than what is already mentioned here. Kubectl create can be used imperatively on the command line or declaratively against a manifest file.
Kubectl apply is used declaratively against a manifest file. You can't use kubectl apply imperatively.
One key difference shows up when you already have an object and want to update it. Even if you used a manifest file with kubectl create, you will get an error when you run kubectl create again to update the same resource. But if you use kubectl apply, you will not get an error; it will update the resource without any issues.
So the convention is to use kubectl apply to create and update resources, kubectl create to create resources, and kubectl run to create a pod with a specific image, namespace, etc., for experimentation and testing (with the --dry-run=client option).
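A quick demonstration of that difference (the file name is a placeholder and the output is approximate):

    $ kubectl create -f my-app-deployment.yaml
    deployment.apps/my-app created

    $ kubectl create -f my-app-deployment.yaml   # second create fails
    Error from server (AlreadyExists): ... deployments.apps "my-app" already exists

    $ kubectl apply -f my-app-deployment.yaml    # apply creates or updates
    deployment.apps/my-app configured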