Running $ docker compose up -d on the following docker-compose.yml
version: '3'
services:
  web:
    platform: linux/arm/v6
    build: ./web
    restart: always
    environment:
      DATABASE_URL: ${DATABASE_URL}
      SECRET_KEY: ${SECRET_KEY}
      TZ: Europe/Amsterdam
    ports:
      - ${PORT}:5000
    depends_on:
      - db
    volumes:
      - web-data:/data
  db:
    platform: linux/arm/v6
    image: arm32v6/postgres:15.1-alpine
    restart: always
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    ports:
      - ${POSTGRES_PORT}:5432
    volumes:
      - postgres-data:/var/lib/postgresql/data
volumes:
  postgres-data:
  web-data:
should create two docker containers. However, it produces
[+] Building 6.5s (3/3) FINISHED
=> [internal] load build definition from Dockerfile 0.5s
=> => transferring dockerfile: 32B 0.2s
=> [internal] load .dockerignore 0.2s
=> => transferring context: 2B 0.1s
=> ERROR resolve image config for docker.io/docker/dockerfile:1 2.4s
------
> resolve image config for docker.io/docker/dockerfile:1:
------
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend
dockerfile.v0: failed to solve with frontend gateway.v0: no match for platform in manifest
sha256:9ba7531bd80fb0a858632727cf7a112fbfd19b17e94c4e84ced81e24ef1a0dbc: not found
I verified that both services (web and db) in my docker-compose file can be built using docker build, which rules out platform issues with the services themselves.
What does no match for platform in manifest mean? More importantly, how can I fix the error?
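The message means BuildKit could not find a linux/arm/v6 entry in the manifest list of the Dockerfile frontend image docker.io/docker/dockerfile:1, which it pulls when the Dockerfile starts with a # syntax=docker/dockerfile:1 directive. Under that reading, one workaround worth trying (an assumption on my part, not verified on this hardware) is to drop the directive from web/Dockerfile, or to fall back to the legacy builder:
# Either remove the '# syntax=docker/dockerfile:1' line from web/Dockerfile,
# or disable BuildKit for this build:
DOCKER_BUILDKIT=0 docker compose build
docker compose up -d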
A temporary solution is to replace docker-compose.yml with explicit docker commands.
# Create the network and volumes
docker network create randpion-app
docker volume create postgres-data
docker volume create web-data
# Build the web service
docker build -t randpion/web web
# Load the environment variables from the .env file
export $(cat .env | xargs)
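# Note: this xargs idiom assumes simple KEY=value pairs (no spaces or quotes
# in values); a more defensive alternative (my suggestion, not from the
# original) is: set -a; . ./.env; set +a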
# Start the database service
docker run -d \
--network randpion-app --network-alias db --name randpion-db \
--platform "linux/arm/v6" \
-v postgres-data:/var/lib/postgresql/data \
-e POSTGRES_USER=${POSTGRES_USER} \
-e POSTGRES_PASSWORD=${POSTGRES_PASSWORD} \
-e POSTGRES_DB=${POSTGRES_DB} \
-p ${POSTGRES_PORT}:5432 \
arm32v6/postgres:15.1-alpine
# Start the web service
docker run -d \
--network randpion-app --network-alias web --name randpion-web \
--platform "linux/arm/v6" \
-v web-data:/data \
-e DATABASE_URL=${DATABASE_URL} \
-e SECRET_KEY=${SECRET_KEY} \
-e TZ=Europe/Amsterdam \
-p ${PORT}:5000 \
randpion/web
This runs fine for now, but the solution is not ideal. For example, the containers must be restarted by hand after every reboot.
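One gap is easy to close, though: the compose file declared restart: always, which the docker run commands above drop. Docker can retrofit that policy onto the running containers (docker update --restart is a standard Docker CLI option; the container names are the ones created above):
# Restore the 'restart: always' behaviour from the compose file
docker update --restart always randpion-db randpion-web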
Related
When trying to translate the following two docker commands into a docker-compose.yml using Compose version 3
docker run \
--name timescaledb \
--network timescaledb-net \
-e POSTGRES_PASSWORD=insecure \
-e POSTGRES_INITDB_WALDIR=/var/lib/postgresql/data/pg_wal \
-e PGDATA=/var/lib/postgresql/data/pg_data \
timescale/timescaledb:latest-pg11 postgres \
-cwal_level=archive \
-carchive_mode=on \
-carchive_command="/usr/bin/wget wale/wal-push/%f -O -" \
-carchive_timeout=600 \
-ccheckpoint_timeout=700 \
-cmax_wal_senders=1
and
docker run \
--name wale \
--network timescaledb-net \
--volumes-from timescaledb \
-v ./backups:/backups \
-e WALE_LOG_DESTINATION=stderr \
-e PGWAL=/var/lib/postgresql/data/pg_wal \
-e PGDATA=/var/lib/postgresql/data/pg_data \
-e PGHOST=timescaledb \
-e PGPASSWORD=insecure \
-e PGUSER=postgres \
-e WALE_FILE_PREFIX=file://localhost/backups \
timescale/timescaledb-wale:latest
we get the following error when running docker-compose up:
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.wale: 'volumes_from'
How can we translate the 2 Docker commands correctly to use Compose version 3? We will need to be able to specify the location of the volumes on the host (i.e. ./timescaledb).
Using Mac OS X 10.15.3, Docker 19.03.8, Docker Compose 1.25.4
docker-compose.yml
version: '3.3'
services:
  timescaledb:
    image: timescale/timescaledb:latest-pg11
    container_name: timescaledb
    ports:
      - 5432:5432
    environment:
      - POSTGRES_PASSWORD=insecure
      - POSTGRES_INITDB_WALDIR=/var/lib/postgresql/data/pg_wal
      - PGDATA=/var/lib/postgresql/data/pg_data
    command: -cwal_level=archive -carchive_mode=on -carchive_command="/usr/bin/wget wale/wal-push/%f -O -" -carchive_timeout=600 -ccheckpoint_timeout=700 -cmax_wal_senders=1
    volumes:
      - ./timescaledb:/var/lib/postgresql/data
    networks:
      - timescaledb-net
  wale:
    image: timescale/timescaledb-wale:latest
    container_name: wale
    environment:
      - WALE_LOG_DESTINATION=stderr
      - PGWAL=/var/lib/postgresql/data/pg_wal
      - PGDATA=/var/lib/postgresql/data/pg_data
      - PGHOST=timescaledb
      - PGPASSWORD=insecure
      - PGUSER=postgres
      - WALE_FILE_PREFIX=file://localhost/backups
    volumes_from:
      - tsdb
    volumes:
      - ./backups:/backups
    networks:
      - timescaledb-net
    depends_on:
      - timescaledb
networks:
  timescaledb-net:
In the timescaledb container you are actually bind-mounting the host directory ./timescaledb to /var/lib/postgresql/data, so if you want the wale container to use the same data, you can edit the wale container like:
...
volumes:
  - ./backups:/backups
  - ./timescaledb:/var/lib/postgresql/data
...
In this case, both containers will be able to read and write the same mounted directory on your local machine.
Also, remember to remove this part, as volumes_from is not a valid option in Compose file format version 3:
volumes_from:
  - tsdb
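If you are not tied to host paths, a shared named volume is the idiomatic Compose v3 replacement for volumes_from; a minimal sketch (the volume name pgdata is my choice, not from the original):
services:
  timescaledb:
    volumes:
      - pgdata:/var/lib/postgresql/data
  wale:
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./backups:/backups
volumes:
  pgdata:
Since the question explicitly wants the data under ./timescaledb, however, the bind-mount shown above is the closer fit.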
Every time I try to build my image, I get the following error:
The server must be started by the user that owns the data directory.
The following is my docker-compose file:
version: "3.7"
services:
db:
image: postgres
container_name: xxxxxxxxxxxx
volumes:
- ./postgres-data:/var/lib/postgresql/data
environment:
POSTGRES_DB: $POSTGRES_DB
POSTGRES_USER: $POSTGRES_USER
POSTGRES_PASSWORD: $POSTGRES_PASSWORD
nginx:
image: nginx:latest
restart: always
container_name: xxxxxxxxxxxx-nginx
volumes:
- ./deployment/nginx:/etc/nginx
logging:
driver: none
depends_on: ["radio"]
ports:
- 8080:80
- 8081:443
radio:
build:
context: .
dockerfile: "./deployment/Dockerfile"
image: test-radio
command: './manage.py runserver 0:3000'
container_name: xxxxxxxxxxxxxxx
restart: always
depends_on: ["db"]
volumes:
- type: bind
source: ./api
target: /app/api
- type: bind
source: ./xxxxxx
target: /app/xxxxx
environment:
POSTGRES_DB: $POSTGRES_DB
POSTGRES_USER: $POSTGRES_USER
POSTGRES_PASSWORD: $POSTGRES_PASSWORD
POSTGRES_HOST: $POSTGRES_HOST
AWS_KEY_ID: $AWS_KEY_ID
AWS_ACCESS_KEY: $AWS_ACCESS_KEY
AWS_S3_BUCKET_NAME: $AWS_S3_BUCKET_NAME
networks:
default:
The image is built with the following run.sh file:
#!/usr/bin/env sh
if [ ! -f .pass ]; then
    openssl rand -base64 32 > .pass
fi
#export POSTGRES_DB="xxxxxxxxxxxxxxxxx"
#export POSTGRES_USER="xxxxxxxxxxxxxx"
#export POSTGRES_PASSWORD="xxxxxxxxxxxxxxxxxxxx"
#export POSTGRES_HOST="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export POSTGRES_DB="xxxxxxxxxxxxxxxxxx"
export POSTGRES_USER="xxxxxxxxxxxxxxxxxxxx"
export POSTGRES_PASSWORD="`cat .pass`"
export POSTGRES_HOST="db"
export AWS_KEY_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_S3_BUCKET_NAME=""
echo "Your psql password is in .pass do not commit this file."
echo "The app will be available on localhost:8080 shortly"
if [ -z "$1" ]; then
docker-compose up
else
docker-compose up $1
fi
I'm wondering if my error is being caused by attempting to use a bash script to deploy the service on a Windows machine?
Details on the issue
The behavior observed by the OP definitely comes from a UID/GID mismatch, given that the specification
volumes:
  - ./postgres-data:/var/lib/postgresql/data
(which can be viewed as a docker-compose equivalent of docker run -v "$PWD/postgres-data:/var/lib/postgresql/data" …) bind-mounts the $PWD/postgres-data folder inside the container, giving access to its files as is (including owner/group metadata).
Also, note that the handling of owner/group metadata between host and containers only relies on the numeric UID and GID, not on the owner and group names.
For more information about UIDs and GIDs in a Docker context, see also that article on Medium.
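To see the mismatch concretely, you can compare the numeric owner of the bind-mounted files with the UID the server runs as (generic diagnostic commands, not from the original answer):
# Numeric UID/GID of the files on the host
ls -ln ./postgres-data
# UID/GID of the postgres user inside the official image
docker run --rm postgres id postgres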
Workarounds if the bind-mount is necessary
For completeness, several possible solutions to work around the bind-mount UID-mismatch issue (including the most straightforward one, changing the files' UID :) are described in this answer on StackOverflow:
How to have host and container read/write the same files with Docker?
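The most direct of those workarounds, sketched here assuming the container's postgres user has UID/GID 999 (the default in the official postgres image; check with the commands above):
# Hand the host directory over to the container's postgres UID/GID
sudo chown -R 999:999 ./postgres-data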
Other solutions
Following @ParanoidPenguin's comment, you may want to use a named volume, which mainly consists in using:
the docker volume command
and/or the docker run option -v …:….
Remarks:
docker run -v PATH1:PATH2 … triggers a bind-mount of PATH1 (host) to PATH2 (container) if and only if PATH1 is absolute (i.e., starts with a /) (e.g., -v "$PWD:$PWD" is a common idiom)
docker run -v NAME:PATH2 … mounts volume NAME to PATH2 (container) if and only if NAME does not contain any / (i.e., matches regexp [a-zA-Z0-9][a-zA-Z0-9_.-]*).
even if we don't run docker volume create foo beforehand by hand, docker run -v foo:/data --rm -it debian will create the named volume foo if need be.
in order to populate a named volume with files (or conversely, to back them up), you can use an ephemeral container of image debian, ubuntu or so, combining at the same time a bind-mount and a volume mount:
Add a file /home/user/bar.txt in a new volume foo
file1=/home/user/bar.txt # initial file
uid=2000 # target User-ID in the volume
gid=2000 # target Group-ID in the volume
docker pull debian
docker run -v "$file1:$file1:ro" -v foo:/data \
-e file1="$file1" -e uid="$uid" -e gid="$gid" \
--rm -it debian bash -exc \
'cp -v -- "$file1" /data/bar.txt && chown -v $uid:$gid /data/bar.txt'
docker volume ls
Backup the foo volume in a tarball
date=$(date +'%Y%m%d_%H%M%S')
back="backup_$date.tar.gz"
destdir=/home/user/backup
mkdir -p "$destdir"
docker run -v foo:/data -v "$destdir:/backup" -e back="$back" \
--rm -it debian bash -exc 'tar cvzf "/backup/$back" /data'
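Restoring is the symmetric operation; a sketch following the same pattern (the tarball name here is hypothetical):
back=backup_20230101_000000.tar.gz  # hypothetical tarball from the step above
destdir=/home/user/backup
docker run -v foo:/data -v "$destdir:/backup" -e back="$back" \
  --rm -it debian bash -exc 'tar xvzf "/backup/$back" -C /'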
I'm trying to run a bare bones version of Hyperledger Sawtooth using Docker CE on a Mac. The docker-compose.yaml has containers running the base images from Sawtooth.
I'm unable to access the Sawtooth REST API from the host machine even though there are ports published for it when I run docker ps. The docker-compose file has worked on other Macs running Docker CE so I'm suspecting it may be a configuration or setup issue.
The contents of the docker-compose.yaml are below:
version: '2.1'
services:
  settings-tp:
    image: 'hyperledger/sawtooth-settings-tp:1.1.3'
    container_name: sawtooth-settings-tp
    depends_on:
      - validator
    entrypoint: settings-tp --connect tcp://validator:4004
  identity-tp:
    image: 'hyperledger/sawtooth-identity-tp:1.1.3'
    container_name: sawtooth-identity-tp
    depends_on:
      - validator
    entrypoint: identity-tp -vv --connect tcp://validator:4004
  rest-api:
    image: 'hyperledger/sawtooth-rest-api:1.1.3'
    container_name: sawtooth-rest-api
    ports:
      - '8008:8008'
    depends_on:
      - validator
    entrypoint: sawtooth-rest-api --connect tcp://validator:4004 --bind rest-api:8008
  validator:
    image: 'hyperledger/sawtooth-validator:1.1.3'
    container_name: sawtooth-validator
    ports:
      - '4004:4004'
    command: |
      bash -c "
        if [ ! -f /etc/sawtooth/keys/validator.priv ]; then
          sawadm keygen
          sawtooth keygen my_key
          sawset genesis -k /root/.sawtooth/keys/my_key.priv
          sawset proposal create \
            -k /root/.sawtooth/keys/my_key.priv \
            sawtooth.consensus.algorithm.name=Devmode \
            sawtooth.consensus.algorithm.version=0.1 \
            -o config.batch && \
          sawadm genesis config-genesis.batch config.batch
        fi;
        sawtooth-validator -vvv \
          --endpoint tcp://validator:8800 \
          --bind component:tcp://eth0:4004 \
          --bind network:tcp://eth0:8800 \
          --bind consensus:tcp://eth0:5050 \
      "
  devmode-engine:
    image: 'hyperledger/sawtooth-devmode-engine-rust:1.1.3'
    container_name: sawtooth-devmode-engine-rust-default
    depends_on:
      - validator
    entrypoint: devmode-engine-rust -C tcp://validator:5050
If you cannot access the port from the host, the container is most likely not running correctly. Look for error messages for that container when starting docker-compose.
What does docker ps -a show?
Can you connect to the port? Try something like telnet localhost 8008
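To act on those comments, a few concrete checks (my suggestions, not from the original thread):
docker ps -a                       # is sawtooth-rest-api Up, or did it exit?
docker logs sawtooth-rest-api      # look for bind or connection errors
curl http://localhost:8008/status  # the REST API normally answers on /status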
I want to start using Flyway to version our database changes. I am trying to create a Postgres Docker container with seeded data, then tag and publish that Docker image to be used in automated testing.
I tried using docker-compose, however I haven't figured out a way to tag and publish the image after Flyway runs.
Repository with test project
https://github.com/bigboy1122/flyway_postgres
Here is a docker-compose I created
version: '3.7'
services:
  flyway:
    image: boxfuse/flyway
    restart: always
    command: -url=jdbc:postgresql://db:5432/foo -user='postgres' -password='test123' -schemas='bar' migrate
    volumes:
      - .:/flyway/sql
    depends_on:
      - db
  db:
    image: tgalati1122/flyway_seeded_postgres
    build:
      context: .
      dockerfile: ./Dockerfile
    restart: always
    environment:
      POSTGRES_PASSWORD: 'test123'
      POSTGRES_USER: 'postgres'
      POSTGRES_DB: 'foo'
    ports:
      - 5432:5432
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
    depends_on:
      - db
Here I am trying to use Docker's multi-stage build feature.
In the example below, I think the database spins up, but I can't get Flyway to access it.
FROM postgres:10.5-alpine as donor
ENV PGDATA=/pgdata
ENV POSTGRES_PASSWORD='test123'
ENV POSTGRES_USER='postgres'
ENV POSTGRES_DB='foo'
EXPOSE 5432
RUN /docker-entrypoint.sh --help

FROM debian:stretch-slim as build_tools
ENV FLYWAY_VERSION='5.2.0'
RUN set -ex; \
    if ! command -v gpg > /dev/null; then \
        apt-get update; \
        apt-get install -y --no-install-recommends \
            gnupg \
            dirmngr \
            wget \
        ; \
        rm -rf /var/lib/apt/lists/*; \
    fi
VOLUME flyway/sql
RUN wget --no-check-certificate https://repo1.maven.org/maven2/org/flywaydb/flyway-commandline/5.2.0/flyway-commandline-5.2.0-linux-x64.tar.gz -O - | tar -xz
RUN pwd; \
    ls -l; \
    cd flyway-5.2.0; \
    pwd; \
    ls -l; \
    sh ./flyway -url=jdbc:postgresql://localhost:5432/optins -user='postgres' -password='test123' -schemas='bar' migrate

FROM postgres:10.5-alpine
ENV PGDATA=/pgdata
ENV POSTGRES_PASSWORD='test123'
ENV POSTGRES_USER='postgres'
ENV POSTGRES_DB='foo'
EXPOSE 5432
COPY --chown=postgres:postgres --from=donor /pgdata /pgdata
The idea I am going for: as database changes occur, I want to automatically build a new lightweight test database, as well as update the persisted databases throughout the enterprise.
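As for the tag-and-publish step itself, once the multi-stage build above produces a seeded image, plain docker tag / docker push may be all that is required (the image name is taken from the compose file; the version tag and registry login are assumptions):
# Build the seeded image from the multi-stage Dockerfile
docker build -t tgalati1122/flyway_seeded_postgres .
# Tag it, e.g. with a release number (the tag is illustrative)
docker tag tgalati1122/flyway_seeded_postgres tgalati1122/flyway_seeded_postgres:v1
# Publish (assumes a prior docker login)
docker push tgalati1122/flyway_seeded_postgres:v1
Note that moving PGDATA to /pgdata matters here: the official postgres image declares VOLUME /var/lib/postgresql/data, and data living in a volume would not survive into an image layer that COPY --from can pick up.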
I'm trying to run a small docker-compose app inside a container-optimized Google Cloud Compute Engine node, but I'm getting stuck when it's trying to mount volumes during a docker-compose up:
Creating lightning_redis_1 ...
Creating lightning_db_1 ...
Creating lightning_redis_1
Creating lightning_db_1 ... done
Creating lightning_api_1 ...
Creating lightning_api_1 ... error
ERROR: for lightning_api_1  Cannot start service api: error while creating mount source path '/rootfs/home/jeremy/lightning': mkdir /rootfs: read-only file system
ERROR: for api  Cannot start service api: error while creating mount source path '/rootfs/home/jeremy/lightning': mkdir /rootfs: read-only file system
Encountered errors while bringing up the project.
jeremy@instance-1 ~/lightning $
My docker-compose.yml file looks like this:
version: '3'
services:
  client:
    build: ./client
    volumes:
      - ./client:/usr/src/app
    ports:
      - "4200:4200"
      - "9876:9876"
    links:
      - api
    command: bash -c "yarn --pure-lockfile && yarn start"
  sidekiq:
    build: .
    command: bundle exec sidekiq
    volumes:
      - .:/api
    depends_on:
      - db
      - redis
      - api
  redis:
    image: redis
    ports:
      - "6379:6379"
  db:
    image: postgres
    ports:
      - "5433:5432"
  api:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
I don't want to have to change anything in the docker-compose.yml file; I'd prefer to be able to fix this issue by running commands inside the VM itself, or in how I set the VM up. The reason is that it's not my code and I can't change the docker-compose.yml file easily; all I need to do is run it for a short period of time and execute a few docker-compose commands inside the VM.
Container-Optimized OS mounts most of the paths as read-only. That is why you are getting the error
source path '/rootfs/home/jeremy/lightning': mkdir /rootfs: read-only file system
So you have a few options.
Use named volumes in docker-compose
You will need to change your volumes like below
volumes:
  - myappvol:/myapp
and define the top-level volumes in compose
volumes:
  myappvol: {}
As you said you don't want to modify the YAML, this may not work for you.
Run docker-compose inside docker
Currently you run docker-compose on the host machine; instead, you can run docker-compose inside another docker container that has the host's root folder mounted:
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$PWD:/rootfs/$PWD" \
-w="/rootfs/$PWD" \
docker/compose:1.13.0 up
This would work but the data would be persisted inside the docker container itself.
See below article for more details
https://cloud.google.com/community/tutorials/docker-compose-on-container-optimized-os
I had the same error; I solved it by removing the '/rootfs' prefix when mounting into the docker/compose container (you cannot write to this directory).
Just change:
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$PWD:/rootfs/$PWD" \
-w="/rootfs/$PWD" \
docker/compose:1.24.0 up
By:
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$PWD:$PWD" \
-w="$PWD" \
docker/compose:1.24.0 up
Append the following alias to the bottom of the .bashrc file located at /home/{your-user}/.bashrc, using vi or nano (e.g. nano /home/{your-user}/.bashrc), or simply run the echo command below:
echo alias docker-compose="'"'docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$PWD:$PWD" \
-w="$PWD" \
docker/compose:1.24.0'"'" >> ~/.bashrc
CTRL O - will save
CTRL M - override
CTRL X - exit
Run: source ~/.bashrc - to update the terminal.
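As a quick check that the alias took effect (a hypothetical session), the wrapped docker-compose should now work from any project directory:
cd ~/lightning
docker-compose version   # should report the docker/compose:1.24.0 client
docker-compose up -d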