I have this docker-compose.yml file where I run a mongo container:
version: '3'
services:
  appapi:
    container_name: appapi
    image: strapi/strapi:3.1.3
    environment:
      DATABASE_CLIENT: ${APPAPI_DATABASE_CLIENT}
      DATABASE_HOST: ${APPAPI_DATABASE_HOST}
      DATABASE_PORT: ${APPAPI_DATABASE_PORT}
      DATABASE_NAME: ${APPAPI_DATABASE_NAME}
      DATABASE_USERNAME: ${APPAPI_DATABASE_USERNAME}
      DATABASE_PASSWORD: ${APPAPI_DATABASE_PASSWORD}
    ports:
      - 1337:1337
    volumes:
      - ./app:/srv/app
    depends_on:
      - appmongo
  appmongo:
    container_name: appmongo
    image: mongo:4.4.0
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${APPDB_MONGO_INITDB_ROOT_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${APPDB_MONGO_INITDB_ROOT_PASSWORD}
    ports:
      - "27027:27017"
    volumes:
      - ./data/db:/data/db
I want to back up the database by running a dump:
docker run -e MONGO_INITDB_ROOT_USERNAME=admin -e MONGO_INITDB_ROOT_PASSWORD=admin --rm mongo mongodump --host mongoapp:27027 --archive --gzip | cat > ./mongodumps/dump_$(date '+%d-%m-%Y_%H-%M-%S').gz
I tried to modify the previous command, but I am not able to connect and do the dump. I am getting:
2020-08-15T19:27:04.870+0000 Failed: can't create session: could not connect to server: server selection error: server selection timeout, current topology: { Type: Single, Servers: [{ Addr: mongoapp:27027, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : dial tcp: lookup mongoapp on 192.168.65.1:53: no such host }, ] }
I was able to dump/restore with the following commands
dump
docker exec defymongo sh -c 'mongodump --archive -u {{mongouser}} -p {{mongopass}}' > ./mongodumps/dump_$(date '+%d-%m-%Y_%H-%M-%S').gz
restore
docker exec -i defymongo sh -c 'mongorestore --archive -u {{mongouser}} -p {{mongopass}}' < ./mongodumps/dump_$(date '+%d-%m-%Y_%H-%M-%S').gz
The difference here is that these commands use sh -c to execute mongodump/mongorestore inside the container and pass the authentication values as parameters.
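For reference, the standalone docker run in the question failed because the one-off container was not attached to the compose network (and the host name mongoapp did not match the appmongo container name). A sketch of a network-attached dump, assuming the compose project is named app so the default network is app_default:

docker run --rm --network app_default mongo:4.4.0 \
  mongodump --host appmongo:27017 \
  -u "$APPDB_MONGO_INITDB_ROOT_USERNAME" -p "$APPDB_MONGO_INITDB_ROOT_PASSWORD" \
  --authenticationDatabase admin --archive --gzip \
  > ./mongodumps/dump_$(date '+%d-%m-%Y_%H-%M-%S').gz

Note that inside the compose network you connect to the container port 27017, not the published 27027.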
This is not enough to back up Strapi, though. There are probably files inside the /srv/app folder (mounted from ./app) that should also be backed up, for example:
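A minimal sketch of that, archiving the mounted folder next to the dumps:

# back up the Strapi project files mounted at /srv/app
tar -czf ./mongodumps/app_$(date '+%d-%m-%Y_%H-%M-%S').tar.gz ./app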
Hopefully this helps someone else.
Related
I want to create a postgres database named bank within a docker-compose.yml file, just after the postgres container has started, but when I run docker-compose --env-file .env -f docker-compose.yaml up -d I get this error: /var/run/postgresql:5432 - no response...
When I remove the line with the command: option, everything starts correctly and I get: /var/run/postgresql:5432 - accepting connections
But now I have to run these steps one by one in the terminal:
docker exec -it postgres bash
psql -U my_user_name
create database bank;
and exit
And I really don't want it to work like that; instead, I want the database to be created within the docker-compose file. (Note that when I remove the command: option and run until pg_isready; do sleep 1; done; echo accepting; inside the container, it outputs accepting almost immediately.)
The POSTGRES_DB env variable doesn't work; the username is still used as the default database name.
This is my docker-compose file:
services:
  db:
    container_name: postgres
    image: postgres
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - PGDATA=/data/postgres
    volumes:
      - db:/data/postgres
    ports:
      - "5332:5432"
    networks:
      - db
    restart: unless-stopped
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready -d postgres" ]
      interval: 30s
      timeout: 10s
      retries: 5
    command: /bin/bash -c "until pg_isready -U ${POSTGRES_USER} -p 5432; do sleep 1; done; psql -U ${POSTGRES_USER} -c 'CREATE DATABASE bank;'"
networks:
  db:
    driver: bridge
volumes:
  db:
The most important line is the one with the command: option:
command: /bin/bash -c "until pg_isready -U ${POSTGRES_USER} -p 5432; do sleep 1; done; psql -U ${POSTGRES_USER} -c 'CREATE DATABASE bank;'"
Please help me with the correct command to execute so that the database is created automatically when running docker-compose --env-file .env -f file up -d.
Have you considered using the POSTGRES_DB environment variable? Note that your command: overrides the image's default command, so the postgres server itself never starts inside the container, which is why pg_isready never gets a response.
services:
  db:
    container_name: postgres
    image: postgres
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      PGDATA: /data/postgres
      POSTGRES_DB: bank
    volumes:
      - db:/data/postgres
    ports:
      - "5332:5432"
    networks:
      - db
    restart: unless-stopped
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready -d postgres" ]
      interval: 30s
      timeout: 10s
      retries: 5
networks:
  db:
    driver: bridge
volumes:
  db:
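If you need to run arbitrary SQL at first startup instead, the official postgres image also executes any scripts mounted into /docker-entrypoint-initdb.d during initialization. A sketch, where the host path ./init/create-bank.sql is illustrative:

    volumes:
      - db:/data/postgres
      - ./init/create-bank.sql:/docker-entrypoint-initdb.d/create-bank.sql:ro

with create-bank.sql containing simply:

CREATE DATABASE bank;

Note these scripts only run when the data directory is empty, i.e. on the very first launch of the volume.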
Context
I need to populate a database inside a docker container from a backup file that I have on the host machine.
I've tried this docker command while the PostGIS container is up (see docker-compose.yml at the end):
sudo docker exec -i db_container_1 ./usr/local/bin/pg_restore --no-owner --role=postgres -h localhost -U postgres -p 5434 -d database_name < ../db/dumps/dump_prod_2020.backup
But I've got this message:
read unix #->/var/run/docker.sock: read: connection reset by peer
As documented here, I also tried using this docker-compose command but it raises the exact same strange message:
docker-compose exec -T db /usr/local/bin/pg_restore --no-owner --role=postgres -h localhost -U postgres -p 5434 -d database_name < ../db/dumps/dump_prod_2020.backup
Question
What am I doing wrong and how could I populate my docker database with my local dump?
More info
Here's the docker-compose.yml used to start the db service (docker ps outputs db_container_1 as the corresponding container name):
version: '3.6'
volumes:
  db_data:
services:
  db:
    image: mdillon/postgis:11-alpine
    environment:
      POSTGRES_DB: database_name
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    ports:
      - '${DB_PORT:-5434}:5432'
    restart: 'no'
    volumes:
      - './docker/db:/docker-entrypoint-initdb.d:ro'
      - 'db_data:/var/lib/postgresql/data' # to persist storage
I have "dockerized" a Django/PostgreSQL app and try to connect to my database
I have 2 containers: web et db
It works but I can't connect to my postgresql database
I used to ran docker exec -it coverage_africa_db_1 psql -U postgres but I got an error
psql: error: could not connect to server: FATAL: role "postgres" does not exist
I tried to 'jump' into my container by running docker exec -it aab213f730cd bash and to connect using the psql command...
psql -d db_dev
psql: error: could not connect to server: FATAL: role "root" does not exist
or
psql -U postgres
error: could not connect to server: FATAL: role "postgres" does not exist
In fact, none of the psql options work...
.env.dev
SECRET_KEY=*************************************
DEBUG=1
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=db_dev
SQL_USER=user
SQL_PASSWORD=user
SQL_HOST=db
SQL_PORT=5432
DATABASE=postgres
DJANGO_SETTINGS_MODULE=core.settings.dev
docker-compose.yml
version: '3.7'
services:
  web:
    build: ./app
    restart: always
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    restart: always
    volumes:
      - postgres_data:/var/lib/postgres/data/
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=user
      - POSTGRES_DB=db_dev
volumes:
  postgres_data:
With the postgres container, this:
environment:
  - POSTGRES_USER=user
  - POSTGRES_PASSWORD=user
  - POSTGRES_DB=db_dev
defines how the database is initialized. If you didn't change it, you should be able to connect as user 'user' with password 'user'.
If you did change it, then the actual values are those that were present at the first launch. After the first launch, those credentials are written into the database, whose data lives on the postgres_data volume. If you want to delete the data and reinitialize the database with new credentials, use docker-compose down -v.
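For example, with the compose file above (and assuming the credentials were never changed), connecting from the host should look like:

docker exec -it coverage_africa_db_1 psql -U user -d db_dev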
I'm new to Docker, so I followed a tutorial here (parts 6, 7 and 8) in order to use and learn Docker in a project.
The problem is that when I use docker-compose.yml to build images on my laptop, my stack_server can connect to mongo.
But if I build images from Docker Hub and pull and run them separately on my laptop, my stack_server CAN'T connect to mongo.
Here is my docker-compose.yml:
client:
  build: ./client
  restart: always
  ports:
    - "80:80"
  links:
    - server
mongo:
  image: mongo
  command: --smallfiles
  restart: always
  ports:
    - "27017:27017"
server:
  build: ./server
  restart: always
  ports:
    - "8080:8080"
  links:
    - mongo
However, my stack_client can connect to the stack_server.
And here are my commands to run the images (my images are public):
docker run -i -t -p 27017:27017 mongo
docker run -i -t -p 80:80 mik3fly4steri5k/stack_client
docker run -i -t -p 8080:8080 mik3fly4steri5k/stack_server
And my error log:
bryan#debian-dev7:~$ sudo docker run -i -t -p 8080:8080 mik3fly4steri5k/stack_server
[sudo] password for bryan:
Express server listening on 8080, in development mode
connection error: { Error: connect ECONNREFUSED 127.0.0.1:27017
at Object._errnoException (util.js:1021:11)
at _exceptionWithHostPort (util.js:1043:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1175:14)
name: 'MongoError',
message: 'connect ECONNREFUSED 127.0.0.1:27017' }
Here is my stack_server Dockerfile:
FROM node:latest
# Set in what directory commands will run
WORKDIR /home/app
# Put all our code inside that directory that lives in the container
ADD . /home/app
# Make sure NPM is not cached, remove everything first
RUN rm -rf /home/app/node_modules/npm \
&& rm -rf /home/app/node_modules
# Install dependencies
RUN npm install
# Tell Docker we are going to use this port
EXPOSE 8080
# The command to run our app when the container is run
CMD ["node", "app.js"]
First solution (but deprecated)
docker run -i -t -p 27017:27017 --name mongo mongo
docker run -i -t -p 80:80 mik3fly4steri5k/stack_client
docker run -i -t -p 8080:8080 --link mongo:mongo mik3fly4steri5k/stack_server
I had to give my mongo container a name and link it with the --link parameter when running my stack_server.
From the Docker documentation:
Warning: The --link flag is a deprecated legacy feature of Docker.
It may eventually be removed.
Unless you absolutely need to continue using it,
we recommend that you use user-defined networks to facilitate communication
between two containers instead of using --link.
One feature that user-defined networks do not support that you can do with
--link is sharing environmental variables between containers.
However, you can use other mechanisms such as volumes to share environment
variables between containers in a more controlled way.
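Following that recommendation, a minimal sketch of the user-defined network approach (the network name app-net is illustrative):

docker network create app-net
docker run -d --name mongo --network app-net -p 27017:27017 mongo
docker run -d --name stack_server --network app-net -p 8080:8080 mik3fly4steri5k/stack_server

On a user-defined network, containers resolve each other by name, so the server can reach the database at mongo:27017.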
Run your stack_server with the command below:
sudo docker run -i -t -p 8080:8080 --link mongo:mongo mik3fly4steri5k/stack_server
Note: the --link flag is deprecated.
Your container is running, but is it healthy? I recommend that you implement a healthcheck.
It would actually look pretty much the same:
(Docker-compose YAML)
healthcheck:
  test: curl -sS http://127.0.0.1:8080 || echo 1
  interval: 5s
  timeout: 10s
  retries: 3
Docker health checks are a cute little feature that lets you attach a shell command to a container and use it to check whether the container's content is alive.
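You can then read the status back, for example (container name illustrative):

docker inspect --format '{{ .State.Health.Status }}' stack_server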
I hope it can help you.
More info: https://docs.docker.com/compose/compose-file/compose-file-v2/#healthcheck
To start MongoDB on a desired port, you would have to do the following:
client:
  build: ./client
  restart: always
  ports:
    - "80:80"
  links:
    - server
mongo:
  image: mongo
  command: mongod --port 27017
  restart: always
  ports:
    - "27017:27017"
server:
  build: ./server
  restart: always
  ports:
    - "8080:8080"
  links:
    - mongo
Let's look at the error:
connection error: { Error: connect ECONNREFUSED 127.0.0.1:27017
The server is trying to connect to localhost instead of mongo. You need to configure the server to connect to the database at mongo:27017.
mongo is the alias created by Docker when linking the containers to each other.
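If, for instance, your server read its connection string from an environment variable (MONGO_URL is a hypothetical name here; use whatever your app actually reads), you could set it in docker-compose so it points at the alias instead of localhost:

server:
  build: ./server
  environment:
    MONGO_URL: mongodb://mongo:27017/mydb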
Related posts:
1) docker postgres pgadmin local connection
2) https://coderwall.com/p/qsr3yq/postgresql-with-docker-on-os-x (in the example "Name" entry is not filled in)
There are two ways to complete this task; I use the official postgres image.
METHOD 1:
Run it with:
sudo docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -p 5432:5432 -d postgres
then connect with
Name: postgres
Host: localhost
Port: 5432
user
pass
...
METHOD 2:
Start it with:
sudo docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
and then check the IP of the container:
sudo docker inspect some-postgres
Say the result is:
172.17.42.1
then connect with pgAdmin, filling in the Properties tab with:
Name: postgres
Host: 172.17.42.1
Port: 5432
user
pass
...
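A shortcut to pull just the IP out of docker inspect (works for containers on the default bridge network):

sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' some-postgres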
I included this in the docker-compose YAML file to get the database and pgAdmin:
database:
  image: postgres:10.4-alpine
  container_name: kafka-nodejs-example-database
  environment:
    POSTGRES_USER: ${POSTGRES_USER}
    POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  expose:
    - "5432"
  ports:
    - 8000:5432
  volumes:
    - ./services/database/schema.sql:/docker-entrypoint-initdb.d/1-schema.sql
    - ./services/database/seed.sql:/docker-entrypoint-initdb.d/2-seed.sql
pgadmin:
  image: dpage/pgadmin4
  ports:
    - 5454:5454/tcp
  environment:
    - PGADMIN_DEFAULT_EMAIL=admin@mydomain.com
    - PGADMIN_DEFAULT_PASSWORD=postgres
    - PGADMIN_LISTEN_PORT=5454
The postgres username is alphaone and the password is xxxxxxxxxxx.
Do a docker ps to get the container ID, then docker inspect <dockerContainerId> | grep IPAddress
e.g. docker inspect 2f50fabe8a87 | grep IPAddress
Insert the IP address into pgAdmin along with the database credentials used in docker.
Since you're mapping port 5432 on the container to the same port on the host with -p 5432:5432 in your docker run statement, try connecting pgAdmin to port 5432 on the host instead of the container.