How can I restore my postgresql docker volume? - postgresql

I use docker-compose to start my application. One of the containers runs postgresql.
I created a script that backs up the container's volume into a tar.gz file:
backup.tar.gz
├── base
├── _data
├── ...
├── pg_hba.conf
└── old.txt
If I inspect the files in my volume, there is no old.txt:
sudo tree -L 1 /var/lib/docker/volumes/application_postgresql/_data
├── base
├── _data
├── ...
└── pg_hba.conf
I tried stopping my container (docker-compose stop db), untarring the archive into /var/lib/docker/volumes/application_postgresql/_data, and restarting the container (docker-compose restart db), but it did not work.
The files look good:
sudo tree -L 1 /var/lib/docker/volumes/application_postgresql/_data
├── base
├── _data
├── ...
├── pg_hba.conf
└── old.txt
but my container won't start:
db_1 | initdb: directory "/var/lib/postgresql/data" exists but is not empty
db_1 | If you want to create a new database system, either remove or empty
db_1 | the directory "/var/lib/postgresql/data" or run initdb
db_1 | with an argument other than "/var/lib/postgresql/data".
How can I restore my postgresql volume?
I know a pg_dump-based solution would be more elegant, but I want a backup script that knows nothing about the environment.
What strategy could I use?
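One environment-agnostic strategy (a sketch, not from the thread; the volume name application_postgresql and service name db are taken from the question) is to tar and untar the volume through a throwaway helper container. The important detail is creating the archive with `-C /volume .` so that, on restore, files land directly at the top of the data directory rather than inside a nested folder:

```shell
# Backup: archive the volume's contents from inside a helper container
docker run --rm \
  -v application_postgresql:/volume:ro \
  -v "$PWD":/backup \
  alpine tar -czf /backup/backup.tar.gz -C /volume .

# Restore: stop the db, empty the volume, untar, restart
docker-compose stop db
docker run --rm \
  -v application_postgresql:/volume \
  -v "$PWD":/backup \
  alpine sh -c 'rm -rf /volume/* && tar -xzf /backup/backup.tar.gz -C /volume'
docker-compose start db
```

The initdb error quoted above is typically what the postgres image prints when the data directory contains files but no PG_VERSION at its top level, which is a common sign that the archive extracted into a subdirectory instead of directly into _data.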

Related

tileserver-gl: custom config file docker-compose

I am trying to add a Tileserver GL container to my existing docker-compose setup, using a custom config.json file. Here is the relevant part of docker-compose.yml:
osm_tile_server:
  image: maptiler/tileserver-gl
  container_name: open_tile_server
  volumes:
    - ./Tile_server/data:/data
  ports:
    - '8081:8080'
    - '5431:5432'
  command:
    - '-c my_config.json'
The data folder structure:
./Tile_server/data/
├── malta.bbox
├── malta.osm.pbf
├── my_config.json
├── quickstart_checklist.chk
├── styles
│   └── my_style.json
└── tiles.mbtiles
When running docker-compose up, the -c my_config.json option is ignored.
However, it works if I simply run docker run -it -v $(pwd)/Tile_server/data:/data -p 8081:80 maptiler/tileserver-gl -c my_config.json, and, even more strangely, if I use --verbose as the command instead of -c my_config.json, that option is executed.
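No answer is recorded above, but one plausible cause (an assumption on my part, not confirmed in the thread) is that compose passes the quoted string '-c my_config.json' to tileserver-gl as a single argument, whereas docker run splits it into two. Splitting the list items would match the docker run behavior:

```yaml
command:
  - '-c'
  - 'my_config.json'
```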

psql: error: server closed the connection unexpectedly. This probably means the server terminated abnormally before or while processing the request

I am new to postgres and want to containerize it. Below is my Dockerfile:
FROM postgres:13.3-alpine
ENV POSTGRES_USER="postgres"
COPY . /docker-entrypoint-initdb.d
RUN chmod 777 /docker-entrypoint-initdb.d/main.sh
EXPOSE 5432
ENTRYPOINT ["/docker-entrypoint-initdb.d/main.sh"]
I have some initialization scripts (run from main.sh) that should execute when the container starts; I have placed them inside docker-entrypoint-initdb.d. The files are:
.
├── ddl
│   ├── create_db_ddl.sql
│   ├── create_index_ddl.sql
│   └── create_table_ddl.sql
├── dml
│   ├── insert_emm_cat4_child_que.sql
│   ├── insert_emm_data_cat1.sql
│   ├── insert_emm_data_cat2.sql
│   ├── insert_emm_data_cat3.sql
│   ├── insert_emm_data_cat4.sql
│   ├── insert_emm_template.sql
│   └── insert_master_data.sql
├── Dockerfile
├── init.sql
├── Jenkinsfile
└── main.sh
When I run the container, it throws this error message:
############ Create database and schema if not exist ###########
psql: error: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
This comes from my main.sh file.
One observation: if I don't include ENTRYPOINT ["/docker-entrypoint-initdb.d/main.sh"] in the Dockerfile, and instead docker exec into the postgres container and run ./main.sh manually, it works fine.
My main.sh file looks like this:
#!/bin/sh
## SET password
export PGPASSWORD='sapient#123'
#Set the value of variable
database="survey_platform"
user="postgres"
## execute scripts
echo "############ Create database and schema if not exist ###########"
psql -h <IP> -p 5432 -U $user -f "ddl/create_db_ddl.sql"
echo "############ Create table if not exist ###########"
psql -h <IP> -p 5432 -U $user -d $database -f "ddl/create_table_ddl.sql"
echo "############ Create index if not exist ###########"
psql -h <IP> -p 5432 -U $user -d $database -f "ddl/create_index_ddl.sql"
/bin/sh
I am confused why I am not able to run main.sh via ENTRYPOINT from the Dockerfile.
You don't need ENTRYPOINT, or even this wrapper script here. If you COPY the *.sql files into /docker-entrypoint-initdb.d, the postgres image will run them, in alphabetical order, with appropriate credentials, the first time the container starts up with an uninitialized database.
FROM postgres:13.3-alpine
COPY ddl/create_db_ddl.sql /docker-entrypoint-initdb.d/01_create_db_ddl.sql
COPY ddl/create_table_ddl.sql /docker-entrypoint-initdb.d/02_create_table_ddl.sql
COPY ddl/create_index_ddl.sql /docker-entrypoint-initdb.d/03_create_index_ddl.sql
# No EXPOSE, ENTRYPOINT, CMD, etc.
Note that these scripts are only run if the database data doesn't exist at all. If you store the database data in a named volume or host directory (and you should), these scripts will not be re-run if there's data there.
Fundamentally a Docker container only runs one command, and when that command completes, the container exits as well. The postgres image has a pretty involved entrypoint script that starts up a temporary non-networked database to run the init scripts; if you specify ENTRYPOINT in a derived Dockerfile, that command runs instead of the standard initialization script or the actual database. Your setup tries to run psql in the container, but since that's running instead of the database, there's nothing for it to connect to.
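Storing the database data in a named volume, as the answer recommends, might look like this in a docker-compose.yml (the service and volume names here are illustrative, not from the question):

```yaml
services:
  db:
    build: .
    environment:
      POSTGRES_PASSWORD: example   # placeholder, set your own
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

With this in place, the init scripts copied into /docker-entrypoint-initdb.d run only on the first start, when the pgdata volume is still empty.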

Postgres Docker: "postgres: could not access the server configuration file "/var/lib/postgresql/data/postgresql.conf": No such file or directory"

I am having weird issues with the official postgres docker image. Most of the time it works fine, but if I shut down the container and launch it again, I sometimes (not every time) get this error:
PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres: could not access the server configuration file "/var/lib/postgresql/data/postgresql.conf": No such file or directory
I am launching postgres image using this command:
export $(grep -v '^#' .env | xargs) && docker run --rm --name postgres \
  -e POSTGRES_USER=$POSTGRES_USER \
  -e POSTGRES_DB=$POSTGRES_DB \
  -e POSTGRES_PASSWORD=$POSTGRES_PASSWORD \
  -p $POSTGRES_PORT:$POSTGRES_PORT \
  -v $POSTGRES_DEVELOPMENT_DATA:/var/lib/postgresql/data \
  postgres
I keep variables in .env file, they look like this:
POSTGRES_USER=custom-db
POSTGRES_DB=custom-db
POSTGRES_PASSWORD=12345678
POSTGRES_PORT=5432
POSTGRES_DEVELOPMENT_DATA=/tmp/custom-db-pgdata
When I echo the variables, the values are there, so I don't think I'm passing null values to the docker env variables.
The directory on my host machine looks something like this:
/tmp/custom-db-pgdata
├── base
│   ├── 1
│   ├── 13407
│   ├── 13408
│   └── 16384
├── global
├── pg_logical
├── pg_multixact
│   ├── members
│   └── offsets
├── pg_notify
├── pg_stat
├── pg_stat_tmp
├── pg_subtrans
├── pg_wal
│   └── archive_status
└── pg_xact
If it's inconsistent between executions on the same machine and in the same session (i.e. without rebooting), then something isn't mapping your directories properly. Finding what's breaking will be difficult, more so since you're on a Mac: Docker on a Mac has the extra bonus of running through a VM, so Docker maps your local path into the VM and then maps that into the container, giving two different layers where things can go wrong.
Dario has the right idea in his clarifying comments: you shouldn't rely on /tmp, since that also has Mac magic to it. It's actually /private/var/somegarbagestring and is different on every boot. Try switching to a /Users/$USER/dbpath folder and moving your data there, so at least you're debugging with one less layer of magic between the data and the database.
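A minimal sketch of that move, assuming the paths from the question (the .env change is shown as a comment):

```shell
# Move the postgres data directory out of /tmp
mkdir -p "$HOME/dbpath"
cp -a /tmp/custom-db-pgdata/. "$HOME/dbpath/"

# Then point the bind mount at the new location, e.g. in .env:
# POSTGRES_DEVELOPMENT_DATA=$HOME/dbpath
```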

AlpineLinux PXE boot specify startup script as kernel parameter

Is there a way to specify a script as a kernel parameter during PXE boot? I want to run a bunch of computers as workers. I want them to PXE-boot AlpineLinux and then run a script that loads my app and joins my cluster.
Change dir:
cd /tmp
Create the directory structure:
.
└── etc
    ├── init.d
    │   └── local.stop
    └── runlevels
        └── default
            └── local.stop -> /etc/init.d/local.stop
mkdir -p ./etc/{init.d,runlevels/default}/
Create file ./etc/init.d/local.stop:
#!/sbin/openrc-run
start() {
    wget http://172.16.11.8/share/video.mp4 -O /root/video.mp4
}
Make it executable:
chmod +x ./etc/init.d/local.stop
Change into the runlevel directory:
cd /tmp/etc/runlevels/default
Make symlink:
ln -s /etc/init.d/local.stop local.stop
Go back:
cd /tmp
Create archive:
tar -czvf alpine-test-01.tar.gz ./etc/
Make pxelinux (on your tftp server) menu:
label install-alpine
menu label Install Alpine Linux [test]
kernel alpine-installer/boot/vmlinuz-lts
initrd alpine-installer/boot/initramfs-lts
append ip=dhcp alpine_repo=https://dl-cdn.alpinelinux.org/alpine/latest-stable/main modloop=https://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/x86_64/netboot/modloop-lts modules=loop,squashfs,sd-mod,usb-storage apkovl=http://{YOUR_WEBSERVER}/{YOUR_DIR}/alpine-test-01.tar.gz
Then PXE-boot a machine. My webserver log shows the overlay's start script ran:
10.10.15.43 172.16.11.8 - [27/Aug/2021:01:15:22 +0300] "GET /share/video.mp4 HTTP/1.1" 200 5853379 "-" "Wget"

What kind of files or directory is expected by mongorestore when using the -d flag?

I want to use the mongorestore command in a script, but I am having trouble understanding exactly what kind of input it expects.
After using the mongodump command, I end up with this tree:
mydirectory
└── dump
    ├── mydb1
    │   ├── schemas.bson
    │   └── schemas.metadata.json
    ├── mydb2
    │   ├── schemas.bson
    │   ├── schemas.metadata.json
    │   ├── status.bson
    │   └── status.metadata.json
    └── mydb3
        ├── schemas.bson
        └── schemas.metadata.json
I understood that I can use the mongorestore command like this:
mydirectory$ mongorestore
since it is looking by default for the dump directory.
However, I do not understand why using the following command:
mydirectory/dump$ mongorestore mydb1
gives the following results:
2018-01-02T14:35:59.823+0100 building a list of dbs and collections to restore from mydb1 dir
2018-01-02T14:35:59.823+0100 don't know what to do with file "mydb1/schemas.bson", skipping...
2018-01-02T14:35:59.823+0100 don't know what to do with file "mydb1/schemas.metadata.json", skipping...
2018-01-02T14:35:59.823+0100 done
Moreover, when I use the -d flag to specify a database to restore, it only works when I specify the directory in which this database is located, for example:
mydirectory/dump$ mongorestore mydb1 -d mydb1
(I would have expected this command to work without the -d flag)
What kind of files or directory is mongorestore expecting when using (or not) the -d flag?
mongorestore expects the dump folder to contain sub-folders with the database name, which in turn contain the BSON dump and the metadata. The error you're seeing is because it didn't find any subdirectory with BSON/metadata files in it.
Rather than restoring by going into the dump directory, it's better to use the --nsInclude option (new in MongoDB 3.4). See the nsInclude documentation for more details.
The option --nsInclude requires you to supply the namespace in the form of <database>.<collection>. For example, to restore the test database:
mongorestore --nsInclude "test.*"
To restore the test collection inside the test database:
mongorestore --nsInclude "test.test"
Make sure that you execute the restore from the dump directory's parent, and not from inside it.
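Applied to the tree from the question, a run from mydirectory (the parent of dump/) that restores only mydb1 would look like this (a sketch based on the answer above, not tested against a live server):

```shell
cd mydirectory
mongorestore --nsInclude "mydb1.*"
```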