I'm using docker compose to run tests for my application. The configuration looks like:
version: '2'
services:
  web:
    build: .
    image: myapp:web
    ports:
      - "3000:3000"
    depends_on:
      - mongo
    links:
      - mongo
  mongo:
    image: mongo:3.2.6
Right now, when I run docker-compose up, there is a volume created automatically (by docker-compose or the mongo image?) which maps the Mongo storage data to a path like: /var/lib/docker/volumes/c297a1c91728cb225a13d6dc1e37621f966067c1503511545d0110025479ea65/_data.
Since I am running tests rather than production code, I'd actually like to avoid this persistence (the mongo data should go away when the docker-compose exits) -- is this possible? If so, what's the best way to do it?
After the containers exit (or you stop them with docker-compose stop), clean up the old containers and their volumes with
docker-compose rm -v
The -v tells it to also remove the anonymous volumes attached to the containers (such as the one the mongo image declares for /data/db), but not host-mounted directories.
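If you would rather fold the cleanup into the teardown itself, docker-compose down also accepts a --volumes/-v flag, which removes the containers, the default network, the anonymous volumes attached to the containers, and any named volumes declared in the file. A minimal sketch of the test cycle done that way:
docker-compose up        # run the tests against the throwaway mongo
docker-compose down -v   # remove containers, the network, and the volumes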
I am building a Flask application in Python. I'm using SQLAlchemy to connect to PostgreSQL.
In the flask application, I'm using this to connect SQLAlchemy to PostgreSQL
engine = create_engine('postgresql://postgres:[mypassword]@db:5432/employee-manager-db')
And this is my docker-compose.yml
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8000:8000
    volumes:
      - .:/app
    links:
      - db:db
    depends_on:
      - pgadmin
  db:
    image: postgres:14.5
    restart: always
    volumes:
      - .dbdata:/var/lib/postgresql
    hostname: postgres
    environment:
      POSTGRES_PASSWORD: [mypassword]
      POSTGRES_DB: employee-manager-db
  pgadmin:
    image: 'dpage/pgadmin4'
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: [myemail]
      PGADMIN_DEFAULT_PASSWORD: [mypassword]
    ports:
      - "5050:80"
    depends_on:
      - db
I can do "docker build -t employee-manager ." to build the image. However, when I do "docker run -p 5000:5000 employee-manager" to run the image, I get an error saying
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "db" to address: Try again
Does anybody know how to fix this? Thank you so much for your help
Your containers are on different networks and that is why they don't see each other.
When you run docker-compose up, docker-compose creates a separate network and puts all the services defined inside docker-compose.yml on that network. You can see that with docker network ls.
When you run a container with docker run, it is attached to the default bridge network, which is isolated from other networks.
There are several ways to fix this, but this one will serve you in many other scenarios:
Run docker container ls and identify the name or ID of the db container that was started with docker-compose
Then run your container with:
# ID_or_name from the previous point
docker run -p 5000:5000 --network container:<ID_or_name> employee-manager
This attaches the new container to the network stack of your database container, so the db hostname becomes reachable from it.
Other ways include creating a network manually and defining that network as default in the docker-compose.yml. Then you can use docker run --network <network_name> ... to attach other containers to that network.
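For completeness, a rough sketch of that alternative; the network name app_net is only an example here:
docker network create app_net
Then point the Compose project's default network at it in docker-compose.yml:
networks:
  default:
    external: true
    name: app_net
After docker-compose up has started db on that network, other containers can join it:
docker run --network app_net -p 5000:5000 employee-manager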
docker run doesn't read any of the information in the docker-compose.yml file, and it doesn't see things like the Docker network that Compose automatically creates.
In your case you already have the service fully defined in the docker-compose.yml file, so you can use Compose commands to build and restart it:
docker-compose build
docker-compose up -d # will delete and recreate changed containers
(If the name of the image is important to you – maybe you're pushing to a registry – you can specify image: alongside build:. links: are obsolete and you should remove them. I'd also avoid replacing the image's content with volumes:, since this misses any setup or modification that's done in the Dockerfile and it means you're running untested code if you ever deploy the image without the mount.)
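For illustration only, the backend service from the question could be trimmed along those lines; the image name here is just an example, and depends_on is pointed at db since that is what the application actually talks to:
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    image: employee-manager   # optional; mainly useful if you push the image to a registry
    ports:
      - 8000:8000
    depends_on:
      - db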
Can anyone tell me where I am going wrong? All I am trying to do is name a Mongo database using Docker Compose.
I have a docker compose file that looks like this:
version: "3"
services:
mongo-db:
image: mongo
environment:
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=password
- MONGO_INITDB_DATABASE=mydbname
ports:
- 27017:27017
volumes:
- mongo-db:/data/db
volumes:
mongo-db:
I run docker-compose -f docker-compose.yml up -d --build and it runs. I then open Robo 3T and connect to my container, but every time I do, the database is called test and not mydbname. Any ideas? TIA
The environment variables are only used to create a new database if no database already exists. You map a volume to /data/db and that volume probably contains an existing database named 'test'.
Find the volume using docker volume ls. It's called something like <directory name>_mongo-db. Then delete it using docker volume rm <volume name>.
Now Docker will create a new, empty volume and Mongo will create a new database when you start the container. And it'll use the values from the environment variables.
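Concretely, the cycle might look like this; the volume name below is a placeholder, use whatever docker volume ls actually prints:
docker-compose down
docker volume ls                      # e.g. myproject_mongo-db
docker volume rm myproject_mongo-db
docker-compose up -d --build          # fresh volume, so the init environment variables take effect
Alternatively, docker-compose down -v removes the containers and the named volume in a single step.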
Similar to Can't connect to MongoDB container from other Docker container, but the answers from that post don't work for me.
I am new to Docker. Trying to learn it on a typescript/express/mongo/mongoose api example.
What I am trying to do (and having problems with) is to use the mongo command line on a running mongo container after it has been spun up with docker compose up. Even though I have my data nicely persisted on a Docker volume, I don't seem to be able to log into the database from the command line.
This is my docker-compose.yml file:
version: '3.9'
services:
  api:
    container_name: api_ts
    build: .
    restart: unless-stopped
    environment:
      - DB_URL=mongodb://myself:pass123@mongo:27017/
    ports:
      - '3131:3131'
    depends_on:
      - mongo
    links: # (seems to be needed)
      - mongo
  mongo:
    container_name: mongo_container
    image: mongo:latest
    restart: always
    volumes:
      - mongo_dbv:/data/db
    environment:
      - MONGO_INITDB_ROOT_USERNAME=myself
      - MONGO_INITDB_ROOT_PASSWORD=pass123
    ports:
      - '27017:27017'
volumes:
  mongo_dbv: {}
This is my Dockerfile:
FROM node:alpine
WORKDIR /usr/src/app
COPY package*.json .
RUN npm ci
COPY . .
ENV PORT=3131
EXPOSE 3131
COPY .env ./dist
CMD ["npm", "start"]
I am running
docker compose up -d --build
After both services are ready, I do:
docker exec -it mongo_container mongo
show dbs
...and the output of the last cmd is empty
(same occurs when trying to follow the answers in the post mentioned above)
I am sure the database contains data, because I am able to verify it using REST client.
Also, I am a bit puzzled - and maybe this is somehow connected - why there is no indication, either in docker-compose.yml or in Dockerfile, of the database name which I am using. I would expect it to be part of show dbs output. Despite that, my api runs just fine.
Listing databases requires authentication:
docker exec -it mongo_container mongo -u myself -p pass123
Now you can list databases:
> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
Note: mongo should show you a warning that the "mongo" shell has been superseded by "mongosh". If you use mongosh instead, a proper authentication error is shown when you try to list databases without logging in.
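For reference, the same login with the newer shell might look like this; note that the root user created through MONGO_INITDB_ROOT_USERNAME lives in the admin database, so if authentication still fails you may need to name that database explicitly:
docker exec -it mongo_container mongosh -u myself -p pass123 --authenticationDatabase admin
show dbs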
I am in a strange situation where I cannot connect to my running mongo DB from my docker compose setup. My compose file:
version: '3'
services:
  app:
    image: myimage:latest
    ports:
      - "8080:8080"
    external_links:
      - myname:mongo
    environment:
      - MONGO_URL=mongodb://myname:27017/test
I have found a few pieces of advice on this, but none of them solved my issue. For example, I tried:
1)
Create a custom network:
docker network create mongonet
Then start mongo with the --network mongonet flag and add this to the compose file:
networks:
  default:
    external:
      name: mongonet
That did not get me anywhere either.
I looked into the /etc/hosts file inside my composed container, and it did not list any DNS entry for mongo.
If I do a docker inspect, grab the mongo IP, and put that into my compose file instead, it works like a charm.
I start mongo like this:
docker run -d -p 27017:27017 -v ~/mongo_data:/data/db mongo
I am really rather confused, as I believed this to be an out-of-the-box kind of thing. Strangely, I can't make it work. I have found examples using internal links (vs. external_links), but that does not work for me, as I have many services that I would like to run like this and not all of them should run at the same time.
I start my docker compose like this:
docker-compose up --force-recreate
My versions are:
docker-compose version 1.17.1, build 6d101fb
Docker version 17.05.0-ce, build 89658be
My question: How do I successfully link a running mongo container as an external link into my application containers such that they can connect to them?
My docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5cf6e08d6fde mongo "docker-entrypoint..." About an hour ago Up About an hour 0.0.0.0:27017->27017/tcp gallant_feynman
Links are deprecated, use networks instead.
Notes:
If you’re using the version 2 or above file format, the externally-created containers must be connected to at least one of the same networks as the service which is linking to them. Links are a legacy option. We recommend using networks instead.
The network way should work. I think you are missing some pieces. Make sure to give the mongo container a name, and make sure to attach the app container to the network in the compose file:
docker network create mongonet
docker run -d -p 27017:27017 --network mongonet --name mongo -v ~/mongo_data:/data/db mongo
version: '3'
services:
  app:
    image: myimage:latest
    ports:
      - "8080:8080"
    environment:
      - MONGO_URL=mongodb://mongo:27017/test
    networks:
      - mongonet
networks:
  mongonet:
    external:
      name: mongonet
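If the app still cannot resolve mongo after that, it is worth confirming that both containers really ended up on the same network:
docker network inspect mongonet
# the mongo container and your app container should both appear in the "Containers" section of the output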
I have a main service in my docker-compose file that uses the postgres image and, though I seem to be connecting to the database successfully, the data that I'm writing to it is not kept beyond the lifetime of the container (what I did is based on this tutorial).
Here's my docker-compose file:
main:
  build: .
  volumes:
    - .:/code
  links:
    - postgresdb
  command: python manage.py insert_into_database
  environment:
    - DEBUG=true
postgresdb:
  build: utils/sql/
  volumes_from:
    - postgresdbdata
  ports:
    - "5432"
  environment:
    - DEBUG=true
postgresdbdata:
  build: utils/sql/
  volumes:
    - /var/lib/postgresql
  command: true
  environment:
    - DEBUG=true
and here's the Dockerfile I'm using for the postgresdb and postgresdbdata services (which essentially creates the database and adds a user):
FROM postgres
ADD make-db.sh /docker-entrypoint-initdb.d/
How can I get the data to stay after the main service has finished running, in order to be able to use it in the future (such as when I call something like python manage.py retrieve_from_database)? Is /var/lib/postgresql even the right directory, and would boot2docker have access to it given that it's apparently limited to /Users/?
Thank you!
The problem is that Compose creates a new version of the postgresdbdata container each time it restarts, so the old container and its data get lost.
A secondary issue is that your data container shouldn't actually be running; data containers are really just a namespace for a volume that can be imported with --volumes-from, which still works with stopped containers.
For the time being the best solution is to take the postgresdbdata container out of the Compose config. Do something like:
$ docker run --name postgresdbdata postgresdb echo "Postgres data container"
Postgres data container
The echo command will run and the container will exit, but as long as you don't docker rm it, you will still be able to refer to it with --volumes-from, and your Compose application should work fine.
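As an aside, on newer Compose file formats the same persistence can be had without a data container at all, by declaring a named volume. A minimal sketch under that assumption, keeping the service names from the question:
version: '2'
services:
  main:
    build: .
    volumes:
      - .:/code
    links:
      - postgresdb
    command: python manage.py insert_into_database
    environment:
      - DEBUG=true
  postgresdb:
    build: utils/sql/
    volumes:
      - pgdata:/var/lib/postgresql
    ports:
      - "5432"
    environment:
      - DEBUG=true
volumes:
  pgdata:
The named volume pgdata survives docker-compose up/down cycles until you remove it explicitly (for example with docker-compose down -v).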