How can I merge 2 mounted volumes on Debian 10?

I've got 2 volumes already mounted on a VPS from Hetzner.
The first one is mounted on / with 1 TB of space. The other one is on /home with 2 TB.
I want to use the full 3 TB together. How can I merge these two volumes without erasing any data?
Here is a picture of what df -h looks like:
Thank you very much!

You can merge them using a union filesystem: https://en.wikipedia.org/wiki/UnionFS
On Linux, several kernel modules implement a union filesystem, such as aufs and overlayfs. The available modules may vary depending on the kernel configuration of your VPS. I will give you a configuration example using the aufs kernel module.
First, create a directory on each volume:
mkdir -p /for-aufs
mkdir -p /home/for-aufs
To continue, create a directory for the aufs mount point:
mkdir -p /aufs
Edit your /etc/fstab file and add:
none /aufs aufs br:/for-aufs=rw:/home/for-aufs=rw,sum,create=rr 0 0
Or using mount directly:
mount -t aufs -o br:/for-aufs=rw:/home/for-aufs=rw,sum,create=rr none /aufs
Then, when executing df -h, you will see a new mount point at /aufs backed by /for-aufs (/dev/md2) and /home/for-aufs (/dev/md3). When you use the /aufs mount point, the round-robin create policy (create=rr) will distribute new files between /for-aufs and /home/for-aufs.
In conclusion, /for-aufs + /home/for-aufs = /aufs
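To verify the setup, here is a quick sketch based on the example above:
# with the sum option, df should report the combined size of both branches
df -h / /home /aufs
# with create=rr, new files created through /aufs are distributed across the branches
touch /aufs/test1 /aufs/test2
ls /for-aufs /home/for-aufs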
Remember to read the manual for more information:
http://manpages.ubuntu.com/manpages/focal/man5/aufs.5.html

Related

Dockerized PGAdmin Mapped volume + COPY not working

I have a scenario where a certain data set comes from a CSV and I need to allow a non-dev to hit PG Admin and update this data set. I want to be able to put this CSV in a mapped folder from the host system and then use the PG Admin GUI to run a COPY command. So far PG Admin is telling me:
ERROR: could not open file "/var/lib/pgadmin/data-files/some_data.csv" for reading: No such file or directory
Here are my steps so far, along with a sanity check inspect:
docker volume create --name=data-files
docker run -e PGADMIN_DEFAULT_EMAIL="pgadmin@example.com" -e PGADMIN_DEFAULT_PASSWORD=some_pass -v data-files:/var/lib/pgadmin/data-files -d -p 5050:80 --name pgadmin dpage/pgadmin4
docker volume inspect data-files --format '{{.Mountpoint}}'
/app/docker/volumes/data-files/_data
docker cp ./updated-data.csv pgadmin:/var/lib/pgadmin/data-files
And now I think that PG Admin should see updated-data.csv, so I try COPY, which I know works locally on my dev system where PG Admin runs on bare metal:
COPY foo.bar(
...
)
FROM '/var/lib/pgadmin/data-files/updated-data.csv'
DELIMITER ','
CSV HEADER
ENCODING 'windows-1252';
Is there any glaring mistake here? When I do docker cp there's no feedback to stdout. No error, no mention of success or a hash or anything.
It looks like you assumed the file should be inside the pgAdmin container; however, the file you are going to COPY must be inside the Postgres container so that the server running your query can find it. I suggest you copy the file into the Postgres container:
docker cp <path_from_your_local>/file.csv <postgres_container_name>:/file.csv
Then you can run the COPY from the pgAdmin query tool without problems!
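For illustration, assuming the Postgres container is named postgres_container and the database is your_db (both names are assumptions), the same flow could also be run from the host with psql inside the container instead of the query tool:
docker cp ./updated-data.csv postgres_container:/updated-data.csv
docker exec postgres_container psql -U postgres -d your_db -c "COPY foo.bar FROM '/updated-data.csv' DELIMITER ',' CSV HEADER ENCODING 'windows-1252';"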
I hope this helps others who come across this...

Option to exclude files in pg_basebackup command Postgres

When cloning a standby, how can I prevent pg_basebackup from copying postgresql.conf and pg_hba.conf from the master to the /var/lib/pgsql/9.6/data directory?
Currently I am using this command:
[root@xyz..]# pg_basebackup -h {master ipAddr} -D /var/lib/pgsql/9.6/data -U postgres -v -P
According to the docs:
The backup will include all files in the data directory and tablespaces, including the configuration files and any additional files placed in the directory by third parties. But only regular files and directories are copied. Symbolic links (other than those used for tablespaces) and special device files are skipped.
So there is no such option. If you still want to force it, move the config files out of the data directory (and optionally symlink them back into it); since pg_basebackup skips symlinks, they will not be copied.
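A sketch of that workaround on the master, assuming the config files currently live in the data directory and that /etc/pgsql is an acceptable new home for them (the target path is just an example):
mkdir -p /etc/pgsql
mv /var/lib/pgsql/9.6/data/postgresql.conf /var/lib/pgsql/9.6/data/pg_hba.conf /etc/pgsql/
ln -s /etc/pgsql/postgresql.conf /var/lib/pgsql/9.6/data/postgresql.conf
ln -s /etc/pgsql/pg_hba.conf /var/lib/pgsql/9.6/data/pg_hba.conf
# pg_basebackup skips symbolic links, so the standby will not receive these files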
This answer is for Postgres 14. pg_basebackup takes a backup of the entire data directory. https://www.postgresql.org/docs/14/app-pgbasebackup.html states that the backup utility will skip any directories/files that are symbolic links, so that can be a workaround to get only the desired content into the tarball.
I faced a similar situation where I wanted to exclude the contents of multiple directories like pg_replslot, pg_dynshmem, pg_notify, etc. I made the tarball the usual way: pg_basebackup -D /backup/ -F t -P -v. After the tarball was made, and before restoring it to another server, I updated the tar manually by excluding the contents of all the required directories.
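A rough sketch of that manual step (the directory names follow the ones mentioned above; check tar -tf for the exact member names in your archive):
mkdir /tmp/base && tar -xf /backup/base.tar -C /tmp/base
# drop the contents of the directories you do not want to carry over
rm -rf /tmp/base/pg_replslot/* /tmp/base/pg_dynshmem/* /tmp/base/pg_notify/*
tar -cf /backup/base-trimmed.tar -C /tmp/base .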

Centos7 "mount -a" "mount point /mnt/dev/ does not exist"

Adding the following line to /etc/fstab and rebooting seems to work as expected, i.e. all of the files in the shared directory "DEV" are available and read-only.
/etc/fstab
//192.168.99.100/DEV /mnt/dev/ cifs _netdev,username=username,password=password,ro,uid=500,gid=1001 0 0
However, I am trying to mount this during the machine's provisioning and avoid rebooting, so I've tried doing a "mount -a" but get the following error:
[root@localhost ~]# mount -a
mount: mount point /mnt/dev/ does not exist
How can I make this mount available without rebooting?
OK, I guess that just adding an /etc/fstab entry only gets the mount directory created automatically at some point during the first reboot. So, in order to avoid rebooting, I apparently need to manually create the directory first:
mkdir /mnt/dev
mount -a
(rejoice)
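For a provisioning script, a small idempotent sketch of the same thing (the path comes from the fstab entry above):
# create the mount point if it is missing, then mount the fstab entry
mkdir -p /mnt/dev
mount /mnt/dev    # or mount -a to process every fstab entry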

Managing Persistent Files (not database) with Docker

What's the best strategy of managing persistent files (not database) such as config file, zip files, images, and so forth?
I tried the following approach:
Create folder /var/storage
Mount this to my container as -v /var/storage:/path/to/container/storage/
However, this does not behave as expected: only the main storage folder is created, and none of the subfolders and files are created. Furthermore, data is not synced, so if I add a file in either the container or the host, it does not show up in the other. I'm thinking this is a permission issue.
My other approach (which I have not tried yet) would be a data container, similar to what is done for databases, to allow for a more portable structure.
My question is this: what is the best way to implement this? If I go the container route, what would my image be? There doesn't seem to be anything to put in it, so it would just be a completely empty image.
Try it like this:
$ sudo mkdir /var/storage
$ sudo docker run -ti -v /var/storage:/var/inside ubuntu
root@cdc215309a60:/# cat > /var/inside/foo
Hello
^d
root@cdc215309a60:/# exit
exit
$ cat /var/storage/foo
Hello
The bit after the colon in -v is the location where you want the directory to be mounted inside the container.
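And as a quick check that a file created on the host shows up inside the container (the file name is just an illustration):
$ echo "hello from the host" | sudo tee /var/storage/host-file.txt
hello from the host
$ sudo docker run --rm -v /var/storage:/var/inside ubuntu cat /var/inside/host-file.txt
hello from the host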

Boot2Docker (on Windows) running Mongo with shared folder (This file system is not supported)

I am trying to start a Mongo container using shared folders on Windows with Boot2Docker. When starting it with docker run -it -v /c/Users/310145787/Desktop/mongo:/data/db mongo I get a warning message inside the container saying:
WARNING: This file system is not supported.
After starting, mongo shuts down immediately.
Any hints or tips on how to solve this?
Apparently, according to this gist and Sev (sevastos), mongo doesn't support volumes mounted through VirtualBox shared folders:
See mongoDB Productions Notes:
MongoDB requires a filesystem that supports fsync() on directories.
For example, HGFS and Virtual Box’s shared folders do not support this operation.
The easiest solution of all, and a proper way to handle data persistence, is data volumes:
Assuming you have a container that has VOLUME ["/data"]
# Create a data volume
docker create -v /data --name yourData busybox true
# and use
docker run --volumes-from yourData ...
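For the Mongo case specifically, a sketch of the same idea (assuming the official mongo image, which keeps its data in /data/db):
# create a named data-only container exposing /data/db
docker create -v /data/db --name mongoData busybox true
# run mongo with its data kept in that volume instead of a shared folder
docker run -d --volumes-from mongoData --name mongo mongo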
This isn't always ideal, though (the following is for Mac, by Edward Chu (chuyik)):
I don't think it's a good solution, because the data just moved to another container, right? It is still inside a container rather than on the local system (the Mac's disk).
I found another solution: use sshfs to map data between the boot2docker VM and your Mac, which may be better since the data is not stored inside a Linux container.
Create a directory to store data inside boot2docker:
boot2docker ssh
mkdir -p /mnt/sda1/dev
Use sshfs to link boot2docker and mac:
echo tcuser | sshfs docker@localhost:/mnt/sda1/dev <your mac dir path> -p 2022 -o password_stdin
Run image with mongo installed:
docker run -v /mnt/sda1/dev:/data/db <mongodb-image> mongod
The corresponding boot2docker issue points to docker issue 12590 ("Problem with -v shared folders in 1.6" #12590), which mentions the workaround of using a double slash:
Using a double slash seems to work. I checked it locally and it works.
docker run -d -v //c/Users/marco/Desktop/data:/data <image name>
it also works with
docker run -v /$(pwd):/data
As a workaround I just copy from a folder before the mongo daemon starts. Also, in my case I don't care about journal files, so I only copy database files.
I've used this command in my docker-compose.yml:
command: bash -c "(rm /data/db/*.lock && cd /prev && cp *.* /data/db) && mongod"
And every time before stopping the container I use:
docker exec <container_name> bash -c 'cd /data/db && cp $(ls *.* | grep -v "\.lock") /prev'
Note: /prev is set as a volume: path/to/your/prev:/prev.
Another workaround is to use mongodump and mongorestore.
in docker-compose.yml: command: bash -c "(sleep 30; mongorestore --quiet) & mongod"
in terminal: docker exec <container_name> mongodump
Note: I use sleep because I want to make sure that mongo has started, and that takes a while.
I know this involves manual work etc, but I am happy that at least I got mongo with existing data running on my Windows 10 machine, and still can work on my Macbook when I want.
It seems like you don't need the data directory for MongoDB; removing those lines from your docker-compose.yml should let it run without problems.
The data directory is only used by mongo for cache.