Is it possible to edit a file on a GCP persistent disk? - kubernetes

I have a node on Google Kubernetes Engine using a persistent volume. Is it possible to edit files on this volume from gcloud or Google Cloud Shell? For example, edit a config and recreate the node? Or is it only possible from a running pod using kubectl exec?

I think you can have a look at the gsutil command; it allows you to interact with your buckets (note that gsutil works on Cloud Storage buckets, not on persistent disks).
Guide to Gsutil

The volume would be a block device, so I'd expect it's not possible to edit it outside of the pod it's attached to. So yes, exec-ing into the pod would do it, but you could also just use kubectl cp to copy files (and directories!) directly from your local machine onto the volume mounted in the pod.
Here’s the relevant doc:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#cp
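For illustration, a minimal sketch of both directions (the pod name my-app-0, the namespace my-namespace and the /data mount path are placeholders; kubectl cp also requires a tar binary inside the container):

    # Copy a local directory onto the volume mounted at /data in the pod
    kubectl cp ./local-config my-namespace/my-app-0:/data/config

    # Copy it back out again
    kubectl cp my-namespace/my-app-0:/data/config ./local-config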

Related

Copy files into a persistent volume in Kubernetes

I want to add or copy files into a persistent volume and then use them in a container via a volume mount. Any help?
Once the PVC/PV are created (https://kubernetes.io/docs/concepts/storage/persistent-volumes/), there are a number of possible solutions.
For this specific question, options 1 and 2 will suffice. More are listed for reference; this list does not try to be complete.
1. Simplest and native, kubectl cp: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#cp
2. rsync - still quite simple, but also robust. Recommended for the task (both of the options below were tested; see the wrapper sketch after this list)
   TO the pod: https://itecnotes.com/server/rsync-files-to-a-kubernetes-pod/
   FROM the pod: https://cybercyber.org/using-rsync-to-copy-files-to-and-from-a-kubernetes-pod.html
3. tar, but incremental: https://www.freshleafmedia.co.uk/blog/incrementally-copying-rsyncing-files-from-a-kubernetes-pod
4. Tools for synchronisation, backup, etc.
   For example, https://github.com/backube/volsync
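For the rsync route, the linked posts essentially wrap rsync so that it uses kubectl exec as its transport instead of ssh. A minimal sketch of such a wrapper (it assumes rsync is installed both locally and in the pod image; the pod name and paths in the usage line are placeholders):

    #!/bin/bash
    # krsync: run rsync over "kubectl exec" instead of ssh.
    # First invocation: re-exec rsync with this script as the remote shell.
    if [ -z "$KRSYNC_STARTED" ]; then
        export KRSYNC_STARTED=true
        exec rsync --blocking-io --rsh "$0" "$@"
    fi

    # Second invocation (as rsync's remote shell): the "host" argument
    # is the pod name; the rest is the rsync server command to run in it.
    pod="$1"
    shift
    exec kubectl exec -i "$pod" -- "$@"

    # Usage:
    #   ./krsync -av --progress ./local-dir/ my-pod:/data/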

Is there a way to specify a tar file of a docker image in a manifest file for kubernetes?

Is there a way to specify a tar file of a docker image in a deployment manifest file for kubernetes? The nodes have access to a mounted network drive that will have the tar file. There's a post where the image is loaded by docker on each node, but I was wondering if there's a way just to specify the tar file and have Kubernetes do the loading and running.
--edit--
To be more exact: say I have a mounted network drive on each node. Is there a way, with just the manifest file, to instruct Kubernetes to load that image directly from the tar file, without having to put it into a docker registry?
In general, no: Kubernetes can only access container images from a registry, not from a network drive; see the documentation.
However, you could have a private registry inside your cluster (see docs). You could also have the images locally on the nodes (pre-pulled images) and have Kubernetes access them from there by setting imagePullPolicy to Never (see docs).
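A minimal sketch of the pre-pulled variant (the image name is a placeholder, and the tar would first have to be loaded on every eligible node, e.g. with docker load or ctr images import):

    apiVersion: v1
    kind: Pod
    metadata:
      name: prepulled-example
    spec:
      containers:
      - name: app
        # must match the tag the tar was loaded under on the node
        image: my-app:1.0
        # never contact a registry; fail if the image is absent on the node
        imagePullPolicy: Never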
You have provided quite limited information about your environment and what it looks like.
Two things come to mind.
Use an initContainer to download this file using wget or similar.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
That way you can be sure that the tar file will be downloaded before your application starts. An example can be found here
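A rough sketch of that pattern (the URL, image tags and paths are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: download-then-run
    spec:
      # scratch space shared between the init container and the app
      volumes:
      - name: workdir
        emptyDir: {}
      initContainers:
      - name: fetch
        image: busybox:1.36
        # runs to completion before the app container starts
        command: ['wget', '-O', '/work/app.tar', 'http://example.com/app.tar']
        volumeMounts:
        - name: workdir
          mountPath: /work
      containers:
      - name: app
        image: busybox:1.36
        command: ['sh', '-c', 'tar -xf /work/app.tar -C /work && sleep 3600']
        volumeMounts:
        - name: workdir
          mountPath: /work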
Use a volume mount
In your Deployment, StatefulSet, or Pod (not sure which you are using), you can mount a volume into the pod. After that, you will be able to access the specified path from the volume inside the pod. Please keep in mind that you have to use proper access modes.
To load the .tar file you can use some bash commands, like in this documentation.
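A sketch of the volume-mount option, assuming the network drive is exposed over NFS (server address, image and paths are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: mount-network-drive
    spec:
      volumes:
      - name: shared
        nfs:
          server: nfs.example.com
          path: /exports/images
          readOnly: true
      containers:
      - name: app
        image: busybox:1.36
        # the tar file is now visible inside the pod under /mnt/shared
        command: ['sh', '-c', 'ls -l /mnt/shared && sleep 3600']
        volumeMounts:
        - name: shared
          mountPath: /mnt/shared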

How can a file inside a pod be copied to the outside?

I have an audit pod, which has logic to generate a report file. Currently, this file is present in the pod itself. I have only one pod, with a single replica.
I know I can run kubectl cp to copy those files from my pod. That command has to be executed on the Kubernetes node itself, but due to many restrictions the task is to copy the file from the pod itself.
I cannot use a Persistent Volume due to restrictions. I checked the Kubernetes API, but couldn't find anything by which I can do a copy.
Is there another way to copy that file out of the pod?
This is a community wiki answer posted to sum up the whole scenario and for better visibility. Feel free to edit and expand on it.
Taking into consideration all the mentioned restrictions:
not supposed to use the Kubernetes volumes
no cloud storage
pod names not accessible to your user
no sidecar containers
the only workaround for your use case is the one you currently use:
a dynamic PV via a PVC annotated with "helm.sh/resource-policy": keep
use PVCs and explicitly tell the user not to delete the namespace
If anyone has a better idea, feel free to contribute.
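For reference, the keep annotation mentioned above sits on the PVC roughly like this (the name and size are placeholders):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: audit-reports
      annotations:
        # Helm skips this resource on uninstall, so the dynamically
        # provisioned PV (and the report on it) survives
        "helm.sh/resource-policy": keep
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi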

How to mimic Docker ability to pre-populate a volume from a container directory with Kubernetes

I am migrating my previous deployment made with docker-compose to Kubernetes.
In my previous deployment, some containers have data created at build time in certain paths, and these paths are mounted in persistent volumes.
Therefore, as the Docker volume documentation states, the persistent volume (not a bind mount) will be pre-populated with the container directory content.
I'd like to achieve this behavior with Kubernetes and its persistent volumes. How can I do this? Do I need to add some kind of logic, using scripts, to copy my container's files to the mounted path when the data is not present the first time the container starts?
Possibly related question: Kubernetes mount volume on existing directory with files inside the container
I think your options are:
ConfigMap (is the "some data" configuration files?)
Init containers (as mentioned; see the sketch after this list)
CSI Volume Cloning (cloning combined with an init container or your first app container)
There used to be a gitRepo volume type; it was deprecated in favour of init containers, from which you can clone your config and data
A hostPath volume mount is an option too
An NFS volume is probably a very reasonable option, and similar in approach to your Docker volumes
Storage types NFS, iSCSI, awsElasticBlockStore, gcePersistentDisk and others can be pre-populated. There are constraints; NFS is probably the most flexible for sharing bits & bytes.
FYI:
The subPath option might be of interest too, depending on your use case, and
PodPreset might help in streamlining the operation across the fleet of your pods.
HTH
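To make the init-container option concrete, here is a sketch that copies the image's built-in data onto the volume only when the volume has not been seeded yet (image name, PVC name and paths are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: prepopulate-example
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-data        # hypothetical pre-existing PVC
      initContainers:
      - name: seed
        image: my-app:1.0           # image containing the data at /app/seed
        # copy once; a marker file keeps restarts from overwriting changes
        command: ['sh', '-c',
                  'if [ ! -e /data/.seeded ]; then cp -a /app/seed/. /data/ && touch /data/.seeded; fi']
        volumeMounts:
        - name: data
          mountPath: /data
      containers:
      - name: app
        image: my-app:1.0
        volumeMounts:
        - name: data
          mountPath: /app/seed      # the volume now shadows the built-in path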

What is the right way to provision nodes with static content in Amazon EKS?

I have an application that loads a .conf file and some additional files on startup. Now I want to run this app in Amazon EKS. What is the best way to inject these files into a pod in Kubernetes? I tried copying them into a directory on the node and mounting that directory in the pod via hostPath. That works, but it doesn't feel like the right way to do it. Does EKS have any auto-provisioning tool for this?
If it's a fixed config file for your app, you can even bake it into the Docker image, i.e. COPY the file in your Dockerfile.
If it needs to be configurable during deployment (e.g. it's environment-specific), then indeed, as mentioned by @anmolagrawal above, ConfigMap is the right way:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
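A minimal sketch of that approach (the names and the config content are placeholders):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config
    data:
      app.conf: |
        # environment-specific settings go here
        listen_port = 8080
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: app
    spec:
      volumes:
      - name: config
        configMap:
          name: app-config
      containers:
      - name: app
        image: my-app:1.0           # placeholder image
        volumeMounts:
        - name: config
          mountPath: /etc/app       # app.conf appears as /etc/app/app.conf
          readOnly: true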
If you can modify your app to rely on env vars or command-line arguments, it will make your life a lot simpler, you can just pass those values in the Pod spec, no need for ConfigMap.
But you definitely shouldn't be managing any app-specific content yourself on the Kubernetes nodes.