How to restore a postgres dump through a pgadmin docker container (dockerized postgres) - postgresql

Dear stackoverflowers,
I use docker-compose to run a dockerized PostgreSQL server and a dockerized pgAdmin 4 web server.
When I try to restore a dump via the web interface, it shows me an empty folder with the path "/" as the source location of the dump.
Now my question: is it in general possible to restore a dump via dockerized pgAdmin, and if so, which path from which container (postgres or pgadmin) do I have to mount as a volume to provide the dump to be restored?
version: '3.8'
services:
  db:
    container_name: pg_container
    image: postgres:12.10
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ***
      POSTGRES_DB: postgres
    ports:
      - "5432:5432"
  pgadmin:
    container_name: pgadmin4_container
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: ***
    ports:
      - "5050:80"
With kind regards
starguy

I'm in the same situation as you are. In the pgAdmin web interface, you have an option to upload a .sql file just by clicking the ... button.
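The key detail is that pgAdmin's Restore dialog runs pg_restore inside the pgAdmin container and connects to the database over the network, so the path you browse to belongs to the pgadmin container, not the postgres one. Below is a minimal sketch of one way to make a host-side dump visible there, assuming the dump sits in a ./dumps folder next to the compose file and that pgAdmin's file browser is rooted at its storage directory /var/lib/pgadmin/storage/<login email with @ replaced by _>; both the host folder and that storage layout are assumptions, not taken from the question.

  pgadmin:
    container_name: pgadmin4_container
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: ***
    volumes:
      # hypothetical host folder holding the dump, mounted into pgAdmin's
      # storage directory so it shows up in the Restore dialog's file browser
      - ./dumps:/var/lib/pgadmin/storage/admin_admin.com
    ports:
      - "5050:80"

After recreating the pgadmin service, the dump file should be selectable as the restore source; the ... upload button mentioned above copies a file into that same storage directory.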

Related

How do I make docker-compose reuse previous volumes for a service running postgres?

I am using Windows 10 with WSL2. I'm running docker-compose through Visual Studio 2022.
I have the following docker-compose.yml file:
version: '3.4'
services:
  dbService:
    image: postgres
    volumes:
      - PostgresUsers:/var/lib/postgresql/data
    restart: always
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
    ports:
      - "5432:5432"
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: root
    ports:
      - "5050:80"
    depends_on:
      - dbService
volumes:
  PostgresUsers:
If I create tables and insert data into a created database in my postgres service, shut down and delete the two running containers, and create them again, no data persists. Put another way, when I register a server and check the databases in that server in pgAdmin, none of the previously entered data is present.
I've checked my Docker volumes directory in the hidden wsl$ drive here:
\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes
and I am seeing that each time I run docker-compose through Visual Studio, new volumes are created that do not have the name I specified; they appear to be named with a random string that I'm assuming is the Docker container's ID or some other related value.
How can I make data entered in Postgres persist after shutting down and deleting the running containers?
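Two hypotheses are worth checking. The randomly named volumes look like anonymous volumes: the postgres image declares VOLUME /var/lib/postgresql/data, so Docker creates a hex-named volume whenever no explicit mount covers that path. Separately, docker-compose prefixes named volumes with the Compose project name (for example someproject_PostgresUsers), and Visual Studio's container tooling may use a project name of its own. A minimal sketch that pins the volume to a fixed, project-independent name (the name key is available from compose file format 3.4, which this file already declares):

volumes:
  PostgresUsers:
    # fixed volume name, independent of the Compose project name, so every
    # run (from the CLI or from Visual Studio) attaches to the same volume
    name: PostgresUsers

Comparing docker volume ls and docker volume inspect output before and after a run should confirm which volume the Postgres data actually lands in.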

Can't connect to databases from devcontainer

I'm working on building an environment for an educational setting, but for some reason my dev container isn't able to connect to the databases that are being generated by the docker-compose.yml. See below:
# docker-compose.yml
version: '3.9'
services:
  conda:
    image: continuumio/anaconda3:latest
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
    command: sleep infinity
    volumes:
      - ..:/workspace:cached
  mongo:
    image: mongo:latest
    restart: unless-stopped
    volumes:
      - dbs:/data/db
  postgres:
    image: postgres:latest
    restart: unless-stopped
    volumes:
      - dbs:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
volumes:
  dbs:
When I attempt to connect to the PostgreSQL database using SQLTools on port 5432 on localhost, I get the following:
[1642491269885] ERROR (ls): Connecting error: {"code":-32001,"data":{"driver":"PostgreSQL","driverOptions":{}},"name":"Error"}
ns: "conn-manager"
[1642491269886] ERROR (ext): ERROR: Error opening connection connect ECONNREFUSED 127.0.0.1:5432, {"code":-32001,"data":{"driver":"PostgreSQL","driverOptions":{}}}
ns: "error-handler"
As a possibly important aside, I am also noticing that the PostgreSQL instance is constantly restarting.
However, I should note that I can't connect to the mongodb instance either, using the Mongo for VS Code extension.
Let me know how I can make this work!
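Two details in the compose file stand out, offered as a sketch rather than a confirmed fix. First, mongo and postgres share the single named volume dbs for their data directories, so each finds the other's files in its data path; that alone can explain the Postgres restart loop. Second, from inside the conda dev container, 127.0.0.1 refers to that container itself, so the databases have to be reached by their Compose service names (postgres, mongo); published ports only matter for clients running on the Windows host. A minimal reworking under those assumptions, with illustrative volume names mongo-data and postgres-data:

  mongo:
    image: mongo:latest
    restart: unless-stopped
    volumes:
      - mongo-data:/data/db                  # each database gets its own volume
  postgres:
    image: postgres:latest
    restart: unless-stopped
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    ports:
      - "5432:5432"                          # only needed for clients on the host
volumes:
  mongo-data:
  postgres-data:

With that in place, SQLTools inside the dev container would point at host postgres (port 5432) and the MongoDB extension at host mongo, rather than at 127.0.0.1.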

Moved docker-compose.yml creates a new postgres database

I have set up a Postgres database with Docker on Ubuntu, with a docker-compose.yml just for that database inside the folder ~/postgres, and I would run docker-compose up -d from within ~/postgres to start the database.
Here is my docker-compose.yml:
version: '3'
services:
  database:
    image: "postgres"
    ports:
      - "0.0.0.0:5432:5432"
    env_file:
      - database.env
    volumes:
      - database-data:/var/lib/postgresql/data/
volumes:
  database-data:
This database is set up and working perfectly, so I decided to set up my web application as well and, because the docker-compose.yml file was inside that folder, I moved it outside to ~/ so I could use it for my web app as well.
This is what the docker-compose.yml in ~/ looks like:
version: "3"
services:
database:
image: "postgres"
ports:
- "0.0.0.0:5432:5432"
env_file:
- postgres/database.env
volumes:
- database-data:/var/lib/postgresql/data/
webapp:
image: webapp/site
build:
context: ./retro-search-engine
dockerfile: Dockerfile
args:
buildno: 1
links:
- "database:db"
ports:
- "0.0.0.0:8000:80"
volumes:
- webapp:/var/www
environment:
db_host: db
db_username: xxxx
db_password: xxxx
db_database: xxxx
db_port: 5432
volumes:
database-data:
webapp:
As you can see, the database Docker configuration is basically the same; the only thing that changes is the path to the database.env file, since it's still in the previous folder.
So, the problem here is that when I run docker-compose up -d from ~/, everything starts normally but when I access the database, all of my tables are gone.
If I go back to ~/postgres and do docker-compose up -d in that folder (with the previous docker-compose.yml) and connect to the db, I can access my tables.
So what I think is happening is that it's either creating a new container or somehow the folder where the data is stored is relative to the docker-compose.yml file and it's creating a new database because it can't find the old files.
I have no idea how to solve this issue; I have googled around and couldn't find anything, so I decided to ask here before I dump my whole database and restore it into a new container, which I don't want to do because it's a 16 GB database and it's going to take forever.
Does anyone have any idea how I can use my new docker-compose.yml with the data from my database?
Thanks in advance.
First:
Replace postgres/database.env with ./postgres/database.env.
Use docker compose up --build: it will rebuild the image (useful if you made some changes to your Dockerfile). Try to avoid using -d when developing; that way you avoid ending up with tons of containers running.
Second:
I suggest you follow the recommendation below. It will resolve your problem, and it will be cleaner if you want to use a CI/CD pipeline and create more "autonomous" images and containers on demand.
rootfolder
|- docker-compose.yaml
|- postgres/
|    |-- All_other_files_for_the_postgres_docker_image
|- webapp/
|    |-- Dockerfile
|    |-- All_other_files_for_the_webapp_docker_image
Below you will find my correction:
version: "3"
services:
database:
image: "postgres"
container_name: "my_postgres_container"
ports:
- "0.0.0.0:5432:5432"
env_file:
- ./postgres/database.env
volumes:
- database-data:/var/lib/postgresql/data/
webapp:
image: webapp/site
container_name: "my_webapp_container"
build:
context: ./retro-search-engine
dockerfile: Dockerfile
links:
- "database:db"
ports:
- "0.0.0.0:8000:80"
volumes:
- webapp:/var/www
environment:
db_host: db
db_username: xxxx
db_password: xxxx
db_database: xxxx
db_port: 5432
volumes:
database-data:
webapp:
If you want to use an existing postgres image that is already present (to see whether an image already exists, you can run: docker images | grep postgres), then you can reference it directly in your docker-compose file:
image: "<your_image_name>"

Authentication failed when logging in to Postgres database created via docker-compose

I have set up the following docker-compose.yml file to set up and run PostgreSQL and PgAdmin.
version: '3.1'
services:
  db:
    image: postgres:latest
    container_name: postgres-dopp
    restart: unless-stopped
    environment:
      POSTGRES_USER: dopp_dev
      POSTGRES_PASSWORD: dopp_dev_pass
      PGDATA: /data/postgres
    ports:
      - "5432:5432"
    volumes:
      - dbdata-dopp:/data/postgres
    networks:
      - network-dopp
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin4@pgadmin.org
      PGADMIN_DEFAULT_PASSWORD: admin
      PGADMIN_CONFIG_SERVER_MODE: 'False'
    depends_on:
      - db
    volumes:
      - dbdata-dopp:/data/pgadmin
    ports:
      - "5050:80"
    networks:
      - network-dopp
networks:
  network-dopp:
    driver: bridge
volumes:
  dbdata-dopp:
    name: dopp-db-data
    driver: local
This works fine, insofar as I can navigate to PgAdmin in my host machine's browser and through it connect to the database using the credentials I've defined in the environment variables. However, when attempting to make a direct connection to the Postgres database from my host machine (by connecting to localhost:5432, since I have configured that port to be exposed), I get the following error response:
[28P01] FATAL: password authentication failed for user "dopp_dev"
I'm fairly new to the peculiarities of Postgres and docker configuration, so I'm not sure what is causing Postgres to say that password authentication fails when connecting from my host machine, while it works perfectly fine if I do it through PgAdmin, which is on the same internal docker network.
Actually, I discovered that the docker postgres service's port 5432 was being shadowed by a local postgres instance running on my host machine.
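Since the root cause was a Postgres instance already listening on the host's port 5432, one way to keep both running is to publish the container on a different host port; 15432 below is an arbitrary choice, and the container-internal port stays 5432, so pgAdmin on the Compose network still connects to db:5432:

  db:
    image: postgres:latest
    container_name: postgres-dopp
    ports:
      # host port 15432 avoids the clash with the locally installed Postgres;
      # connect from the host with localhost:15432
      - "15432:5432"

Alternatively, stopping or reconfiguring the local Postgres service frees port 5432 for the container.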

Docker Postgres container not creating expected database

I am launching a Postgres database via docker-compose, with...
Version: "3.7"
services:
db:
image: "postgres"
container_name: db
restart: always
ports:
- "5432:5432"
environment:
POSTGRES_DB: good_day
POSTGRES_USER: ian
POSTGRES_PASSWORD: ian
volumes:
- gd-data:/var/lib/postgresql/data/
volumes:
gd-data:
When I log in to the Postgres database from my 'DBeaver' DB client, I am not seeing a 'good_day' schema.
Is that correct?
Solved it.
The problem is with DBeaver:
the initial login for its Postgres connector only seems to support 'postgres' as the database.
To see other databases, the 'Show all databases' option must be enabled.