Use data from volume during container build in docker compose - docker-compose

I need to share data between containers with docker compose. Here the shared_data_setup container should seed the shared volume with data to be used during the build of the app container. However, when I run this, the app container's /shared is empty. Is there a way to achieve this?
services:
  # This will set up some seed data to be used in other containers
  shared_data_setup:
    build: ./shared_data_setup/
    volumes:
      - shared:/shared
  app:
    build: ./app/
    volumes:
      - shared:/shared
    depends_on:
      - shared_data_setup
volumes:
  shared:
    driver: local

You need to specify the version at the top of the docker-compose.yml file; without it, Compose falls back to the legacy v1 format, which does not support the top-level volumes key:
version: "3"
services:
  # This will set up some seed data to be used in other containers
  shared_data_setup:
    build: ./shared_data_setup/
    volumes:
      - shared:/shared
  app:
    build: ./app/
    volumes:
      - shared:/shared
    depends_on:
      - shared_data_setup
volumes:
  shared:
    driver: local
Edit: Results:
# Test volume from app
$ docker-compose exec app bash
root@e652cb9e5c46:/# ls -l /shared
total 0
root@e652cb9e5c46:/# touch /shared/test
root@e652cb9e5c46:/# exit
# Test volume from shared_data_setup
$ docker-compose exec shared_data_setup bash
root@b21ead1a7354:/# ls -l /shared
total 0
-rw-r--r-- 1 root root 0 Feb 26 11:23 test
root@b21ead1a7354:/# exit
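Note that named volumes are only mounted when a container runs, not during docker build, which is why /shared stays empty while the app image is being built. If the seed data must exist at build time, one sketch (assuming the seed files can be placed in the app build context; the paths and base image here are assumptions) is to bake them into the image in the Dockerfile instead:

```dockerfile
# ./app/Dockerfile (sketch; paths are assumptions)
FROM debian:stable-slim
# Copy the seed data into the image at build time;
# the shared volume mounted at run time is a separate mechanism.
COPY seed-data/ /shared/
```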

Related

Container exited with code 100 while bringing mongodb container up

I am trying to bring up a mongodb container but I am getting an error:
Starting v2_mongodb ... done
Starting rockmongo_v2 ... done
Attaching to v2_mongodb, rockmongo_v2
v2_mongodb | Starting mongod...
v2_mongodb exited with code 100
Here is the content of the docker-compose.yml file and the output of docker ps:
version: '2'
services:
  v2_db:
    image: sameersbn/mongodb:latest
    container_name: "v2_mongodb"
    ports:
      - "27017:27017"
    volumes:
      - ./data/db:/data/db:rw
      - ./data/db:/var/lib/mongodb:rw
    environment:
      - MONGO_DATA_DIR=/data/db
    command: mongod --verbose --smallfiles --dbpath=/data/db # --quiet
  rockmongo_v2:
    image: javierjeronimo/rockmongo:latest
    container_name: "rockmongo_v2"
    ports:
      - "27118:80"
    links:
      - v2_db:mongo
    depends_on:
      - v2_db
docker ps output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
94794d6f3ff1 javierjeronimo/rockmongo:latest "/bin/sh -c 'service…" 38 minutes ago Up About a minute 0.0.0.0:27118->80/tcp rockmongo_v2
bd4ed92796db sameersbn/mongodb:latest "/sbin/entrypoint.sh…" 38 minutes ago Exited (100) About a minute ago v2_mongodb
I am not able to figure out what the possible cause of the above error is.
List your volumes:
docker volume ls
Delete unused volumes (in my case this freed up about 29.35 GiB):
docker volume prune

Docker containers with volume mounting exits immediately on using docker-compose up

I am using the docker-compose up command to spin up a few containers on an AWS AMI RHEL 7.6 instance. I observe that whichever containers have a volume mount exit with status Exited (1) immediately after starting, while the remaining containers stay up. I tried using tty: true and stdin_open: true, but it didn't help. Surprisingly, the setup works fine in another instance, which is basically what I am trying to replicate in this new one.
The stopped containers are Fabric v1.2 peers, CAs and orderer.
The docker-compose.yml file in the root folder where I run the docker-compose up command:
version: '2.1'
networks:
  gcsbc:
    name: gcsbc
services:
  ca.org1.example.com:
    extends:
      file: fabric/docker-compose.yml
      service: ca.org1.example.com
fabric/docker-compose.yml
version: '2.1'
networks:
  gcsbc:
services:
  ca.org1.example.com:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org1
      - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
    ports:
      - '7054:7054'
    command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
    volumes:
      - ./artifacts/channel/crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerorg1
    networks:
      - gcsbc
    hostname: ca.org1.example.com

Docker-compose version3 data volume container

I'm having problems getting data volume containers running in docker-compose v3. As a test I've tried to connect two simple images like:
version: '3'
services:
  assets:
    image: cpgonzal/docker-data-volume
    container_name: data_container
    command: /bin/true
    volumes:
      - assets_volume:/tmp
  web:
    image: python:3
    volumes:
      - assets_volume:/tmp
    depends_on:
      - assets
volumes:
  assets_volume:
I would expect the python:3 container to be able to see /tmp of data_container. Unfortunately,
docker-compose up
fails with
data_container exited with code 0
desktop_web_1 exited with code 0
What am I doing wrong?
Both of your containers exited because there is no command to keep them running.
Use these three options, stdin_open, tty, and command, to keep a container running.
Here's an example:
version: '3'
services:
  node:
    image: node:8
    stdin_open: true
    tty: true
    command: sh
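An alternative sketch that keeps a container alive without allocating a TTY is to give it a command that blocks forever (tail is available in both images used above; the service name here is an assumption):

```yaml
version: '3'
services:
  web:
    image: python:3
    command: tail -f /dev/null   # blocks forever, so the container stays up
```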

Why don't I lose postgresql data when I rebuild the docker image?

version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Why don't I lose data when running docker-compose build --force-rm --no-cache? If this is normal, why do we need to create a volume for the data folder?
When you run docker-compose build --force-rm --no-cache, it only builds the web Docker image from the Dockerfile, which in your case is in the same directory.
This command does not stop the containers you previously started from this compose file, so you won't lose any data when running it.
However, as soon as you remove the containers, with docker-compose down, or with docker-compose rm once they are stopped, you won't find the postgres data when you restart the container.
If you want the data to persist, and have the container pick it up when it is recreated, you need to give the postgres data volume a name, as follows.
version: '3'
services:
  db:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  pgdata:
Now the postgres data won't be lost when the containers are recreated.

How do I back up docker volume for postgres?

Here is my docker compose file
version: '2'
services:
  postgres9:
    image: postgres:9.4
    expose:
      - 5432
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data: {}
I would like to back up data.
Is there a command for that in docker? I am using a Mac locally and Linux on my server.
Volumes are nothing special in Docker: they are plain directories and files. When you use volumes in a compose file, docker creates a directory and mounts it inside your container when you run the container. You can see that with:
docker inspect <container name/id>
In the output you will find information about the volumes, including the backing directory on the host OS.
To back up your volume you can simply compress and store the directory using tar. To do that you need to know the path of the directory. You can either mount a directory from the host OS, like:
version: '2'
services:
  postgres9:
    image: postgres:9.4
    expose:
      - 5432
    volumes:
      - /var/lib/postgresql/data:/var/lib/postgresql/data
and then back up /var/lib/postgresql/data from the host OS, either by mounting it into another container or directly from the host.
Alternatively, you can mount the same volume into another container in read-only mode and back up the directory from there:
version: '2'
services:
  postgres9:
    image: postgres:9.4
    expose:
      - 5432
    volumes:
      - data:/var/lib/postgresql/data
  backuptool:
    image: busybox:latest
    volumes:
      - data:/data:ro
volumes:
  data: {}
You can then tar and upload the backup of /data from the backuptool container.
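The tar step itself is plain shell. Here is a local sketch of it, where /tmp/demo_data stands in for the volume's /data mount point (all paths and file names are assumptions for illustration):

```shell
# Create a stand-in for the volume's mount point with one file in it.
mkdir -p /tmp/demo_data
echo "postgres files would live here" > /tmp/demo_data/example.txt
# Archive it, just as you would archive /data inside the backuptool container.
tar czf /tmp/data_backup.tar.gz -C /tmp demo_data
# List the archive contents to verify the backup; shows demo_data/example.txt.
tar tzf /tmp/data_backup.tar.gz
```

In the compose setup above, the equivalent would be running the same tar command inside the backuptool container (for example via docker-compose run backuptool) and writing the archive to a bind-mounted host directory so it survives the container.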