Is it possible to view files in a pod without using the kubectl exec command? - postgresql

I am using PostgreSQL in a pod on a self-hosted Kubernetes 1.23 cluster (set up with kubeadm). The pod was configured using the Helm chart located at https://github.com/bitnami/charts/. I would like to be able to view files within this pod, such as individual log files (not the output exposed by kubectl logs), in order to find out what is going wrong with the information being passed between my Entity Framework Core app and PostgreSQL.
However, this pod does not seem to allow kubectl exec, which is what I would normally use to view any file within a pod (and is how people online generally suggest doing it). Is there a way to obtain a copy of, or view, the files within a pod running in Kubernetes without using kubectl exec, and if so, how would I do so?
Additionally, the storage is managed by Ceph, so I can't easily access the files from the node's filesystem.

Related

Kubernetes apply to get to desired state

I feel like I have a terrible knowledge gap when it comes to managing the resource states within Kubernetes.
Suppose I have 2 deployments in my cluster, foo1 and foo2. They are defined in separate yaml files, foo1.yaml and foo2.yaml, which both live inside a my-dir directory and have been applied with kubectl apply -f my-dir/.
Now I want to make a third deployment, but also delete my second deployment. I know that I can do this in 2 steps:
Make another foo3.yaml file inside the directory and then do kubectl apply -f my-dir/foo3.yaml
Run kubectl delete -f my-dir/foo2.yaml to get rid of the second deployment.
My question is: can I do this in one shot by keeping the "desired state" in my directory? That is, is there any way I can delete foo2.yaml, create a new foo3.yaml, and then just run kubectl apply -f my-dir/ so that Kubernetes also handles deleting the resource whose file was removed? What am I missing here?
The best and easiest way is to use a DevOps tool such as Jenkins, Ansible, or Terraform to manage your deployments. If you don't want to use external tools, there is a Python client library for managing Kubernetes. With it you can fetch the details of your Kubernetes resources (deployments, pods, etc.) and manage your cluster programmatically; removing a deployment whose file you have deleted only takes a few more lines of code.

Kubernetes Edit File In A Pod

I have used some Bitnami charts in my Kubernetes app. In my pod, there is a file whose path is /etc/settings/test.html. I want to override the file. When I searched, I figured out that I should mount my file by creating a ConfigMap. But how can I use the created ConfigMap with the existing pod? Many of the examples create a new pod that uses the created ConfigMap, but I don't want to create a new pod; I want to use the existing pod.
Thanks
Almost all fields of a pod spec are immutable, meaning that you can't change them without destroying the old pod and creating a new one with the desired parameters. There is no way to edit a pod's volume list without recreating the pod.
The reason behind this is that pods aren't meant to be immortal. Pods are meant to be temporary units that can be spawned and destroyed according to the scheduler's needs. In general, you need a workload object that does pod management for you (a Deployment, StatefulSet, Job, or DaemonSet, depending on the deployment strategy and the nature of the application).
There are two ways to edit a file in an existing pod: either use kubectl exec and console commands to edit the file in place, or use kubectl cp to copy an already edited file into the pod. I advise you against both, because the change is not permanent; it disappears as soon as the pod is recreated. Better to back up the necessary data, switch to a Deployment with one replica, and then mount a ConfigMap, as you read on the Internet.
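For reference, a rough sketch of both options; the pod, namespace, and ConfigMap names (my-pod, my-namespace, settings-html) are placeholders, not values from the chart:
# Temporary: copy a locally edited file over the one in the running pod (lost when the pod is recreated)
kubectl cp ./test.html my-namespace/my-pod:/etc/settings/test.html
# Durable: turn the edited file into a ConfigMap that a Deployment can later mount at /etc/settings
kubectl create configmap settings-html --from-file=test.html -n my-namespace
The ConfigMap still has to be referenced as a volume in the workload's spec, which is exactly the pod recreation step described above.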

Helm chart: copy shell script from local machine to remote pod, change permissions and execute

Is there a way I can copy a shell script from my local machine to a pod using Helm charts, change the script's permissions, and execute the script inside the pod?
No, Helm cannot do this. In effect, the only Kubernetes commands it can run are kubectl apply and kubectl delete, though it can apply templating before sending the YAML off to the Kubernetes server. The sorts of imperative commands you're describing (kubectl cp and kubectl exec) aren't things Helm can do.
(Those imperative commands aren't generally good form in Kubernetes in any case. Generally you'd need to package your script up in a Docker image to be able to run it in the cluster, and you should try to build your containers so that they can set themselves up as much as possible. Also remember that pods get deleted routinely, sometimes even outside of your control, and anything you've manually copied into a pod is lost when that happens.)
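For completeness, the imperative workflow being asked about, which Helm cannot perform and which would not survive a pod restart, might look roughly like this (the pod name and script path are placeholders):
# Copy the script into the running pod
kubectl cp ./myscript.sh my-pod:/tmp/myscript.sh
# Make it executable, then run it inside the pod
kubectl exec my-pod -- chmod +x /tmp/myscript.sh
kubectl exec my-pod -- /tmp/myscript.sh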

Accidentally deleted Kubernetes namespace

I have a Kubernetes cluster on Google Cloud. I accidentally deleted a namespace which had a few pods running in it. Luckily, the pods are still running, but the namespace is stuck in the Terminating state.
Is there a way to restore it back to active state? If not, what would the fate of my pods running in this namespace be?
Thanks
A few interesting articles about backing up and restoring Kubernetes cluster using various tools:
https://medium.com/@pmvk/kubernetes-backups-and-recovery-efc33180e89d
https://blog.kubernauts.io/backup-and-restore-of-kubernetes-applications-using-heptios-velero-with-restic-and-rook-ceph-as-2e8df15b1487
https://www.digitalocean.com/community/tutorials/how-to-back-up-and-restore-a-kubernetes-cluster-on-digitalocean-using-heptio-ark
https://www.revolgy.com/blog/kubernetes-in-production-snapshotting-cluster-state
I suspect they will be more useful in the future than in your current situation. If you don't have any backup, unfortunately there isn't much you can do.
Note that all of those articles use namespace deletion to simulate a disaster scenario, so you can imagine the consequences of such an operation. The results may not be visible immediately, and you may see your pods running for some time, but eventually namespace deletion removes all cluster resources in the given namespace, including LoadBalancers and PersistentVolumes. It can take a while, and some resources may not be deleted right away because they are still in use by another resource (e.g. a PersistentVolume used by a running Pod).
You can try running this script to dump all the resources that are still available to yaml files; however, some modification may be needed, as you will no longer be able to list objects belonging to the deleted namespace. You may need to add the --all-namespaces flag to list them.
You may also try to dump any resource that is still available manually. If you can still see resources such as Pods and Deployments, and kubectl get works on them, you can save their definitions to a yaml file:
kubectl get deployment nginx-deployment -o yaml > deployment_backup.yaml
Once you have your resources backed up you should be able to recreate your cluster more easily.
Back up most resource configuration regularly:
kubectl get all --all-namespaces -o yaml > all-deploy-resources.yaml
but this does not include all resources.
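Because kubectl get all skips several resource types (Secrets and ConfigMaps, for example), a more thorough dump could enumerate the API resources explicitly. This is only a sketch, and the output file names are arbitrary:
# Dump every namespaced resource type that supports list, one file per type
for kind in $(kubectl api-resources --verbs=list --namespaced -o name); do
  kubectl get "$kind" --all-namespaces -o yaml > "backup-$kind.yaml"
done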
Another way is to use Ark/Velero:
https://github.com/vmware-tanzu/velero (Backup and migrate Kubernetes applications and their persistent volumes https://velero.io)

Difficulty with different Kubernetes pods run using kubectl apply running the same container image sharing directories

I am attempting to run two separate pods using the same container image on a cluster by applying a config file. Despite there being no shared or persistent volume, when both pods are active the same directory on both pods is updated with files created by the other pod, and write access changes suddenly. The container being used is the jupyter-docker-stacks jupyter/minimal-notebook image, pulled directly from Docker Hub. The pods running this container are created by applying a manifest. The two pods have different labels and names, and a service with a unique name is created for each pod for access.
Do resources for containers persist over time on a cluster like in docker containers? I cannot find something equivalent to a --rm flag to be used alongside kubectl apply.
Thanks
If you want the pod to go away after its work is completed, you might want to apply a Job instead of a bare pod. The idea of a Job in k8s is to launch a pod, do the work, and then let the pod stop. For more info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
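A minimal sketch of that approach, assuming a hypothetical one-off command run in the same notebook image (the Job name and command are made up for illustration):
# Create a Job; its pod runs the command once and then completes instead of lingering
kubectl create job notebook-task --image=jupyter/minimal-notebook -- python -c "print('done')"
# When the Job is finished, deleting it also removes the completed pod
kubectl delete job notebook-task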
$ kubectl apply -f <fileName> will create the pod or make changes to it. If you want to delete a pod created with apply, you must use $ kubectl delete -f <fileName>.
About sharing: if you have 2 separate manifests, you can specify volumeMounts for each container. For more information, please read the documentation relevant to your needs.
Also, as @Kaizhe Huang advised, you can use a Job if you want to execute something once, or try initContainers if you want to install something in the pod before the main container runs. More about initContainers here.
You could check the Dockerfile of your image and see if a 'VOLUME' is declared. If so, maybe the pods share the same volume on the host. Not sure, but you could check.