How to move a PostgreSQL backup from a server to a local machine?

Currently I am creating a backup just by using "pg_dump dbname > path" in the pod terminal but this only saves it into the OpenShift container.
How would I transfer the dump to a local device?
Is there a command for grabbing a database backup and downloading it onto the local machine?

You can use oc cp to copy files from a container to the local machine:
# Copy /tmp/foo from a remote pod to /tmp/bar locally
oc cp <some-pod-name>:/tmp/foo /tmp/bar
So for example:
oc cp postgresql-1-ptcdm:/tmp/mybackupfile /home/myusername/mybackupfile
Note that this requires the 'tar' binary to be present in your container image; if 'tar' is not present, 'oc cp' will fail. For advanced use cases, such as symlinks, wildcard expansion or file mode preservation, consider using oc exec.
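If 'tar' is missing, or you want to skip the intermediate file inside the pod entirely, you can also stream the dump straight to a local file with oc exec. A minimal sketch, reusing the pod name from the example above (adjust the database name and user to your setup):
# Stream the dump to a local file; nothing is written inside the pod
oc exec postgresql-1-ptcdm -- pg_dump dbname > /home/myusername/mybackupfile.sql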

Related

Recover deleted PostgreSQL config files

I was trying to delete the pg_log folder (it was huge, 3 GiB), but I accidentally removed everything in the data folder (by running rm ./*).
Now all of the .conf files are gone from the data folder and I am receiving this error in the log:
"Data page checksums are disabled"
The Postgres instance was created with Docker from the Docker Hub image (15-alpine).
I didn't touch any config files there.
Where can I find the default Postgres config files? I think I can get it back to work by restoring the .conf files.
Steps to recover the default config files using the Docker image.
Pull the Docker image for Postgres:
docker pull postgres:15-alpine
Run container:
docker run -e POSTGRES_HOST_AUTH_METHOD=trust postgres:15-alpine
Keep the current terminal open and open a new terminal.
Connect to a shell in the Docker container:
docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED         STATUS         PORTS      NAMES
aee1237294b8   postgres:15-alpine3.17   "docker-entrypoint.s…"   7 seconds ago   Up 6 seconds   5432/tcp   naughty_knuth
Copy the container ID from the docker ps output and execute the shell command:
docker exec -it aee1237294b8 bash
Go to data folder and archive it:
cd /var/lib/postgresql/data/
tar -zcf pgdata.tar.gz *
Exit docker container shell:
exit
Copy archive from docker container:
docker cp aee1237294b8:/var/lib/postgresql/data/pgdata.tar.gz ~/Downloads/pgdata.tar.gz
As a result, I've downloaded the default config files from postgres:15-alpine.
You can grab them from here: https://anarjafarov.me/pg.conf.zip
And watch the video instructions: https://www.youtube.com/watch?v=fgHtvwbQJDE
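If you only need the configuration files rather than the whole data directory, you can pull just those out of the archive. A small sketch, assuming the archive was copied to ~/Downloads as above (exact file names may vary between Postgres versions):
cd ~/Downloads
# List the .conf files contained in the archive
tar -ztf pgdata.tar.gz | grep '\.conf$'
# Extract only the default config files
tar -zxf pgdata.tar.gz postgresql.conf pg_hba.conf pg_ident.conf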

kubectl cp bitnami apache helm chart: cannot copy to exact location of pod filesystem

I'm trying to deploy a React app on my local machine with Docker Desktop and its Kubernetes cluster, using the Bitnami Apache Helm chart.
I'm following this tutorial.
The tutorial makes you publish the image on a public repo (step 2) and I don't want to do that. It is indeed possible to pass the app files through a persistent volume claim.
This is described in the following tutorial.
Step 2 of this second tutorial lets you create a pod pointing to a PVC and then asks you to copy the app files there using the command
kubectl cp /myapp/* apache-data-pod:/data/
My issues:
I cannot use the * wildcard or else I get an error. To avoid this I just run
kubectl cp . apache-data-pod:/data/
This command copies the files into the pod, but it creates another data folder inside the already existing data folder of the pod filesystem, so the files end up under /data/data/ instead of /data/.
I tried executing
kubectl cp . apache-data-pod:/
But this copies the files into the root of the pod filesystem, at the same location as the first data folder.
I need to copy the data directly in <my_pod>:/data/.
How can I achieve such behaviour?
Regards
Use the full path in the command, as shown below, when copying files between the pod and the local machine:
kubectl cp apache-pod:/var/www/html/index.html /tmp
If there are multiple containers in the pod, use the syntax below to copy a file from local to the pod:
kubectl cp /<path-to-your-file>/<file-name> <pod-name>:<fully-qualified-file-name> -c <container-name>
Points to remember:
When referring to a file path on the pod, it is always relative to the WORKDIR defined in your image.
Unlike Linux, the base directory does not always start from /; the WORKDIR is the base directory.
When you have multiple containers in the pod, you need to specify the container to copy from or to with the -c parameter (see the sketch after this list).
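For instance, a sketch of the multi-container case (the container name apache here is hypothetical):
# Copy a local file into the 'apache' container of the pod
kubectl cp ./index.html apache-data-pod:/data/index.html -c apache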
Quick example of kubectl cp: here is the command to copy the index.html file from the pod's /var/www/html to the local /tmp directory.
There is no need to mention the full path when the doc root is the WORKDIR or the default directory of the image.
kubectl cp apache-pod:index.html /tmp
To make it less confusing, you can always write the full path like this
kubectl cp apache-pod:/var/www/html/index.html /tmp
Also refer to this Stack Overflow question for more information.
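As for the nesting issue in the question itself: kubectl cp of a directory copies the directory as a child of the destination, which is why a second data folder appears. A workaround sketch is to pipe a tar stream through kubectl exec, so that only the directory's contents land in /data (this assumes tar is available in the container):
# Run from the directory that holds the app files
tar cf - . | kubectl exec -i apache-data-pod -- tar xf - -C /data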

Postgres volume mounting on WSL2 and Docker desktop: Permission Denied on PGDATA folder

There are some similar posts, but this one is specifically about running Postgres with the WSL2 backend on Docker Desktop. WSL2 brings the full Linux experience to Windows. Volumes can be mounted on both the Windows and Linux file systems, but the best practice is to use the Linux file system for performance reasons; see the Docker documentation:
Performance is much higher when files are bind-mounted from the Linux filesystem, rather than remoted from the Windows host. Therefore avoid docker run -v /mnt/c/users:/users (where /mnt/c is mounted from Windows).
Instead, from a Linux shell use a command like docker run -v ~/my-project:/sources where ~ is expanded by the Linux shell to $HOME.
My WSL distro is Ubuntu 20.04 LTS. I'm bind-mounting the Postgres data directory to a directory on the Linux file system, and I'm also configuring PGDATA to point at a subdirectory, because this is what the official Docker image docs instruct:
PGDATA
This optional variable can be used to define another location - like a subdirectory - for the database files. The default is /var/lib/postgresql/data. If the data volume you're using is a filesystem mountpoint (like with GCE persistent disks) or remote folder that cannot be chowned to the postgres user (like some NFS mounts), Postgres initdb recommends a subdirectory be created to contain the data.
So this is how I start Postgres with the volume mounted on the WSL2 Ubuntu file system:
docker run -d \
--name some-postgres -e POSTGRES_PASSWORD=root \
-e PGDATA=/var/lib/postgresql/data/pgdata \
-v ~/custom/mount:/var/lib/postgresql/data \
postgres
I can exec into the running container and verify that the data folder exists and is configured correctly. But from the host machine (WSL2 Linux), if I try to access that folder, I get permission denied.
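A sketch of those two checks, assuming the container name and mount path from the docker run command above:
# Inside the container, the data folder exists and is owned by postgres
docker exec -it some-postgres ls -la /var/lib/postgresql/data/pgdata
# From the WSL2 host shell, this fails with permission denied
ls -la ~/custom/mount/pgdata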
I would appreciate it if anyone could provide a solution; none of the existing posts resolved the issue.
This has nothing to do with PostgreSQL. Docker containers run as root, so any directory created by Docker will also belong to root.
When you attach to the container and list the directory under /var/lib/postgresql/data it shows postgres as the owner.
Check "Arbitrary --user Notes" section in the official documentation here
The second option "bind-mount /etc/passwd read-only from the host" worked for me.
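For reference, that option looks roughly like this; a sketch reusing the container name and mount path from the question, not a verbatim recipe from the docs:
# Run the container as your own UID/GID; bind-mounting /etc/passwd read-only
# lets that UID resolve to a valid user inside the container
docker run -d --name some-postgres \
  --user "$(id -u):$(id -g)" \
  -v /etc/passwd:/etc/passwd:ro \
  -e POSTGRES_PASSWORD=root \
  -e PGDATA=/var/lib/postgresql/data/pgdata \
  -v ~/custom/mount:/var/lib/postgresql/data \
  postgres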
Two things that were blocking us from working with WSL2 on Windows were:
The folder C:\Program Files\WindowsApps didn't have the admin account listed as owner
McAfee was blocking WSL. To disable the blocking we had to remove the following rule: open McAfee -> Threat Prevention -> Show Advanced (button in the upper right corner) -> scroll down to Rules -> the rule is named "Executing Subsystem for Linux"

docker postgres, fail to map volume in windows

I wish to store my persistent data in my local D:\dockerData\postgres9.6. Below are my Docker commands:
docker pull postgres
docker run -d -v /d/dockerData/postgres9.6:/var/lib/postgresql/data -p 5432:5432 postgres
It successfully creates a container and I can use pgAdmin to connect and create a database.
But I found that there are no files in my D:\dockerData\postgres9.6, while if I exec bash into the container there are at least 20+ files inside /var/lib/postgresql/data.
Can anyone point out which part went wrong?
It depends on what kind of Docker you are using on Windows:
Docker Toolbox with VirtualBox: only C:\Users\mylogin is shared by default. D:\ is not mounted.
Docker for Windows with Hyper-V: only C:\ is mounted by default. Make sure D:\ is configured as a shared drive in the Docker settings.
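Once the drive is shared, a quick sanity check is to mount it into a throwaway container; a sketch, assuming the path from the question:
# If the drive is shared correctly, this lists the host directory's contents
docker run --rm -v /d/dockerData:/data alpine ls /data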

Dokku/Docker, how to access file in file system of running container?

Previously, to access a file in a running dokku instance I would run:
docker ps to get the container ID, followed by
ls /var/lib/docker/aufs/diff/<container-id>/app/...
Note: I'm just using ls as an example command; I ultimately want to reference a particular file.
This must have changed, as the container ID is no longer accessible via this path. There are loads of dirs in that folder, but none that match any of the running containers.
It seems like mounting a volume for the entire container would be overkill in this scenario. I know I can access the file system using dokku run project-name ls, and also docker exec <container-id> ls, but neither of those will satisfy my use case.
To explain a bit more fully, in my dokku project, I have some .sql files that I'm using to bootstrap my postgres DB. These files get pushed up via git push with the rest of the project.
I'm hoping to use the postgres dokku plugin to run the following:
dokku postgres:connect db-name < file-name.sql
This is what I had previously been doing:
dokku postgres:connect db-name < /var/lib/docker/aufs/diff/<container-id>/app/file-name.sql
but that no longer works.
Any thoughts on this? Am I going about this all wrong?
Thanks very much for any thoughts.
Here's an example:
dokku apps:list
=====> My Apps
DummyApp
dokku enter DummyApp
This enters a bash shell in the DummyApp container.
Never rely on the /var/lib/docker file system paths: most of the data stored there depends on the storage driver in use, so it is subject to change.
cat the file from an existing container
docker exec <container> cat file.sql | dokku postgres:connect db-name
cat the file from an image
docker run --rm <image> cat file.sql | dokku postgres:connect db-name
Copy file from an existing container
docker cp <container>:file.sql file.sql
dokku postgres:connect db-name < file.sql