Running docker-compose inside a Google Compute Engine instance

I'm trying to run a small docker-compose app inside a container-optimized Google Cloud Compute Engine node, but I'm getting stuck when it's trying to mount volumes during a docker-compose up:
Creating lightning_redis_1 ...
Creating lightning_db_1 ...
Creating lightning_redis_1
Creating lightning_db_1 ... done
Creating lightning_api_1 ...
Creating lightning_api_1 ... error
ERROR: for lightning_api_1 Cannot start service api: error while creating mount source path '/rootfs/home/jeremy/lightning': mkdir /rootfs: read-only file system
ERROR: for api Cannot start service api: error while creating mount source path '/rootfs/home/jeremy/lightning': mkdir /rootfs: read-only file system
Encountered errors while bringing up the project.
jeremy@instance-1 ~/lightning $
My docker-compose.yml file looks like this:
version: '3'
services:
  client:
    build: ./client
    volumes:
      - ./client:/usr/src/app
    ports:
      - "4200:4200"
      - "9876:9876"
    links:
      - api
    command: bash -c "yarn --pure-lockfile && yarn start"
  sidekiq:
    build: .
    command: bundle exec sidekiq
    volumes:
      - .:/api
    depends_on:
      - db
      - redis
      - api
  redis:
    image: redis
    ports:
      - "6379:6379"
  db:
    image: postgres
    ports:
      - "5433:5432"
  api:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
I don't want to have to change anything in the docker-compose.yml file - I'd prefer to fix this issue by running commands inside the VM itself, or through how I set the VM up. The reason is that it's not my code and I can't easily change the docker-compose.yml file, and all I need to do is run it for a short period of time and execute a few docker-compose commands inside the VM.

Container-Optimized OS mounts most paths as read-only. That is why you are getting the error:
error while creating mount source path '/rootfs/home/jeremy/lightning': mkdir /rootfs: read-only file system
So you have a few options.
Use named volumes in docker-compose
You will need to change your volumes like below:
volumes:
  - myappvol:/myapp
and define the top-level volumes in the compose file:
volumes:
  myappvol: {}
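Putting the two fragments together, the api service would end up looking roughly like this (a sketch based on the compose file above; myappvol is an assumed volume name):
version: '3'
services:
  api:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - myappvol:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
volumes:
  myappvol: {}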
As you said you don't want to modify the YAML, this may not work for you.
Run docker-compose inside docker
Currently you run docker-compose on the main machine; instead, you should run docker-compose inside another Docker container that mounts your project folder:
docker run \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "$PWD:/rootfs/$PWD" \
    -w="/rootfs/$PWD" \
    docker/compose:1.13.0 up
This would work, but the data would be persisted inside the Docker containers themselves.
See below article for more details
https://cloud.google.com/community/tutorials/docker-compose-on-container-optimized-os

I had the same error. I solved it by removing the 'rootfs' directory from the mount path when running the docker-compose container (you cannot write to this directory).
Just change:
docker run \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "$PWD:/rootfs/$PWD" \
    -w="/rootfs/$PWD" \
    docker/compose:1.24.0 up
to:
docker run \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "$PWD:$PWD" \
    -w="$PWD" \
    docker/compose:1.24.0 up
To make the alias permanent, add it to the bottom of the .bashrc file located at /home/{your-user}/.bashrc using vi or nano (e.g. nano /home/{your-user}/.bashrc), or simply append it with the echo command below:
echo alias docker-compose="'"'docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$PWD:$PWD" \
-w="$PWD" \
docker/compose:1.24.0'"'" >> ~/.bashrc
CTRL+O - save
CTRL+M (Enter) - confirm the file name
CTRL+X - exit
Run: source ~/.bashrc - to update the terminal.
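To check that the alias is picked up, run any compose command from the project directory, e.g.:
docker-compose version
The first invocation pulls the docker/compose:1.24.0 image; after that it behaves like a locally installed docker-compose.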


Docker compose fails on Raspberry Pi

Running $ docker compose up -d on the following docker-compose.yml
version: '3'
services:
  web:
    platform: linux/arm/v6
    build: ./web
    restart: always
    environment:
      DATABASE_URL: ${DATABASE_URL}
      SECRET_KEY: ${SECRET_KEY}
      TZ: Europe/Amsterdam
    ports:
      - ${PORT}:5000
    depends_on:
      - db
    volumes:
      - web-data:/data
  db:
    platform: linux/arm/v6
    image: arm32v6/postgres:15.1-alpine
    restart: always
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    ports:
      - ${POSTGRES_PORT}:5432
    volumes:
      - postgres-data:/var/lib/postgresql/data
volumes:
  postgres-data:
  web-data:
should create two docker containers. However, it produces
[+] Building 6.5s (3/3) FINISHED
=> [internal] load build definition from Dockerfile 0.5s
=> => transferring dockerfile: 32B 0.2s
=> [internal] load .dockerignore 0.2s
=> => transferring context: 2B 0.1s
=> ERROR resolve image config for docker.io/docker/dockerfile:1 2.4s
------
> resolve image config for docker.io/docker/dockerfile:1:
------
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend
dockerfile.v0: failed to solve with frontend gateway.v0: no match for platform in manifest
sha256:9ba7531bd80fb0a858632727cf7a112fbfd19b17e94c4e84ced81e24ef1a0dbc: not found
I verified that both services (web and db) in my docker-compose file can be built using docker build, which rules out any platform issues with the services themselves.
What does no match for platform in manifest mean? More importantly, how can I fix the error?
A temporary solution is to replace docker-compose.yml with explicit docker commands.
# Create the network and volumes
docker network create randpion-app
docker volume create postgres-data
docker volume create web-data
# Build the web service
docker build -t randpion/web web
# Load the environment variables from the .env file
export $(cat .env | xargs)
# Start the database service
docker run -d \
    --network randpion-app --network-alias db --name randpion-db \
    --platform "linux/arm/v6" \
    -v postgres-data:/var/lib/postgresql/data \
    -e POSTGRES_USER=${POSTGRES_USER} \
    -e POSTGRES_PASSWORD=${POSTGRES_PASSWORD} \
    -e POSTGRES_DB=${POSTGRES_DB} \
    -p ${POSTGRES_PORT}:5432 \
    arm32v6/postgres:15.1-alpine
# Start the web service
docker run -d \
    --network randpion-app --network-alias web --name randpion-web \
    --platform "linux/arm/v6" \
    -v web-data:/data \
    -e DATABASE_URL=${DATABASE_URL} \
    -e SECRET_KEY=${SECRET_KEY} \
    -e TZ=Europe/Amsterdam \
    -p ${PORT}:5000 \
    randpion/web
This runs fine for now, but the solution is not ideal. For example, the containers are not brought back up automatically after a reboot.
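If you do go this route, one mitigation is to add Docker's restart policy flag to each docker run command, mirroring the restart: always setting from the compose file, so the daemon brings the containers back up after a reboot. A sketch for the database service (identical to the command above, only --restart always is new):
docker run -d --restart always \
    --network randpion-app --network-alias db --name randpion-db \
    --platform "linux/arm/v6" \
    -v postgres-data:/var/lib/postgresql/data \
    -e POSTGRES_USER=${POSTGRES_USER} \
    -e POSTGRES_PASSWORD=${POSTGRES_PASSWORD} \
    -e POSTGRES_DB=${POSTGRES_DB} \
    -p ${POSTGRES_PORT}:5432 \
    arm32v6/postgres:15.1-alpine
The same flag applies to the web service's docker run command.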

Dockerfile can't find a file in the same dir (exit code 1)

This is my first time with docker, I'm working on this problem for two days, it would make me very happy to find a solution.
I'm running this docker-compose.yml file with "docker-compose up":
version: '3.3'
services:
  base:
    networks:
      - brain_storm-network
    volumes:
      - brain_storm-storage:/usr/src/brain_storm
    build: "./brain_storm"
  data_base:
    image: mongo
    volumes:
      - brain_storm-storage:/usr/src/brain_storm
    networks:
      - brain_storm-network
    ports:
      - '27017:27017'
  api:
    build: "./brain_storm/api"
    volumes:
      - brain_storm-storage:/usr/src/brain_storm
    networks:
      - brain_storm-network
    ports:
      - 5000:5000
    depends_on:
      - data_base
      - base
    restart: on-failure
the base Dockerfile inside ./brain_storm does the following:
FROM brain_storm-base:latest
RUN mkdir -p /usr/src/brain_storm/brain_storm
ADD . /usr/src/brain_storm/brain_storm
and when running the Dockerfile inside brain_storm/api
FROM brain_storm-base:latest
CMD cd /usr/src/brain_storm \
&& python -m brain_storm.api run-server -h 0.0.0.0 -p 5000 -d mongodb://0.0.0.0:27017
I'm getting this error :
brain_storm_api_1 exited with code 1
api_1 | /usr/local/bin/python: Error while finding module specification for 'brain_storm.api' (ModuleNotFoundError: No module named 'brain_storm')
pwd says I'm in '/' and not in the current directory when running the base Dockerfile, so that might be the problem. But how do I solve it without hard-coding /home/user/brain_storm in the Dockerfile? I want to keep the location of the brain_storm folder general.
How can I make the Dockerfile see and take the file from the current directory (where the Dockerfile itself is)?
You should probably define the WORKDIR instruction in both your Dockerfiles. WORKDIR sets the working directory of the Docker container at that point in the build; any subsequent RUN, CMD, ADD, COPY, or ENTRYPOINT instruction is executed in the specified working directory:
base:
FROM brain_storm-base:latest
WORKDIR /usr/src/brain_storm
COPY . .
api:
FROM brain_storm-base:latest
WORKDIR /usr/src/brain_storm
CMD python -m brain_storm.api run-server -h 0.0.0.0 -p 5000 -d mongodb://0.0.0.0:27017
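As a quick sanity check (a sketch; the image tag is hypothetical, compose usually names it <projectdir>_api), you can confirm what working directory the api image ends up with:
docker-compose build
docker run --rm <projectdir>_api pwd
# should print: /usr/src/brain_storm
With WORKDIR set, python -m brain_storm.api is run from /usr/src/brain_storm, so the brain_storm package copied there is importable.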

Receiving an error from a docker-compose that the user must own the data directory

Every time I try to build my image, I get the following error:
The server must be started by the user that owns the data directory.
The following is my docker-compose file:
version: "3.7"
services:
db:
image: postgres
container_name: xxxxxxxxxxxx
volumes:
- ./postgres-data:/var/lib/postgresql/data
environment:
POSTGRES_DB: $POSTGRES_DB
POSTGRES_USER: $POSTGRES_USER
POSTGRES_PASSWORD: $POSTGRES_PASSWORD
nginx:
image: nginx:latest
restart: always
container_name: xxxxxxxxxxxx-nginx
volumes:
- ./deployment/nginx:/etc/nginx
logging:
driver: none
depends_on: ["radio"]
ports:
- 8080:80
- 8081:443
radio:
build:
context: .
dockerfile: "./deployment/Dockerfile"
image: test-radio
command: './manage.py runserver 0:3000'
container_name: xxxxxxxxxxxxxxx
restart: always
depends_on: ["db"]
volumes:
- type: bind
source: ./api
target: /app/api
- type: bind
source: ./xxxxxx
target: /app/xxxxx
environment:
POSTGRES_DB: $POSTGRES_DB
POSTGRES_USER: $POSTGRES_USER
POSTGRES_PASSWORD: $POSTGRES_PASSWORD
POSTGRES_HOST: $POSTGRES_HOST
AWS_KEY_ID: $AWS_KEY_ID
AWS_ACCESS_KEY: $AWS_ACCESS_KEY
AWS_S3_BUCKET_NAME: $AWS_S3_BUCKET_NAME
networks:
default:
The image is built with the following run.sh file:
#!/usr/bin/env sh
if [ ! -f .pass ]; then
    openssl rand -base64 32 > .pass
fi
#export POSTGRES_DB="xxxxxxxxxxxxxxxxx"
#export POSTGRES_USER="xxxxxxxxxxxxxx"
#export POSTGRES_PASSWORD="xxxxxxxxxxxxxxxxxxxx"
#export POSTGRES_HOST="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export POSTGRES_DB="xxxxxxxxxxxxxxxxxx"
export POSTGRES_USER="xxxxxxxxxxxxxxxxxxxx"
export POSTGRES_PASSWORD="`cat .pass`"
export POSTGRES_HOST="db"
export AWS_KEY_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_S3_BUCKET_NAME=""
echo "Your psql password is in .pass do not commit this file."
echo "The app will be available on localhost:8080 shortly"
if [ -z "$1" ]; then
docker-compose up
else
docker-compose up $1
fi
I'm wondering if my error is being caused by attempting to use a bash script to deploy the service on a Windows machine?
Details on the issue
The behavior observed by the OP definitely comes from a UID/GID mismatch, given that the specification
volumes:
  - ./postgres-data:/var/lib/postgresql/data
(which can be viewed as a docker-compose equivalent of docker run -v "$PWD/postgres-data:/var/lib/postgresql/data" …) bind-mounts the $PWD/postgres-data folder inside the container, giving access to its files as is (including owner/group metadata).
Also, note that the handling of owner/group metadata between host and containers only relies on the numeric UID and GID, not on the owner and group names.
For more information about UIDs and GIDs in a Docker context, see also that article on Medium.
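For example (a minimal illustration; the file name testfile is arbitrary), a host file owned by UID 2000 shows up inside a container under the bare number whenever the container has no user with that UID:
# on the host: create a file owned by UID/GID 2000
sudo touch testfile && sudo chown 2000:2000 testfile
# inside a stock debian container, the owner column shows the numeric UID 2000,
# since no matching user name exists there
docker run --rm -v "$PWD/testfile:/testfile:ro" debian ls -l /testfile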
Workarounds if the bind-mount is necessary
For completeness, several possible solutions to work around the bind-mount UID-mismatch issue (including the most straightforward one that consists in changing the files' UID :) are described in this answer on StackOverflow:
How to have host and container read/write the same files with Docker?
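For instance, the straightforward chown fix would look like this on the host (a sketch; 999 is the usual UID/GID of the postgres user in the Debian-based official image, so verify it for the image you actually use):
# find the UID/GID the container's postgres user has
docker run --rm postgres id postgres
# give the bind-mounted data directory to that UID/GID
sudo chown -R 999:999 ./postgres-data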
Other solutions
Following @ParanoidPenguin's comment, you may want to use a named volume, which mainly consists in using:
the docker volume command
and/or the docker run option -v …:….
Remarks:
docker run -v PATH1:PATH2 … triggers a bind-mount of PATH1 (host) to PATH2 (container) if and only if PATH1 is absolute (i.e., starts with a /) (e.g., -v "$PWD:$PWD" is a common idiom)
docker run -v NAME:PATH2 … mounts volume NAME to PATH2 (container) if and only if NAME does not contain any / (i.e., matches the regexp [a-zA-Z0-9][a-zA-Z0-9_.-]*).
even if we don't run docker volume create foo beforehand by hand, docker run -v foo:/data --rm -it debian will create the named volume foo if need be.
in order to populate a named volume with files (or, conversely, back them up) you can use an ephemeral container of an image such as debian or ubuntu, combining a bind-mount and a volume mount at the same time:
Add a file /home/user/bar.txt in a new volume foo
file1=/home/user/bar.txt # initial file
uid=2000 # target User-ID in the volume
gid=2000 # target Group-ID in the volume
docker pull debian
docker run -v "$file1:$file1:ro" -v foo:/data \
-e file1="$file1" -e uid="$uid" -e gid="$gid" \
--rm -it debian bash -exc \
'cp -v -- "$file1" /data/bar.txt && chown -v $uid:$gid /data/bar.txt'
docker volume ls
Backup the foo volume in a tarball
date=$(date +'%Y%m%d_%H%M%S')
back="backup_$date.tar.gz"
destdir=/home/user/backup
mkdir -p "$destdir"
docker run -v foo:/data -v "$destdir:/backup" -e back="$back" \
    --rm -it debian bash -exc 'tar cvzf "/backup/$back" /data'
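Conversely, restoring the foo volume from such a tarball follows the same pattern (a sketch; the tarball name below is hypothetical, substitute the one produced by the backup step):
back=backup_20200101_120000.tar.gz   # the tarball to restore
destdir=/home/user/backup
docker run -v foo:/data -v "$destdir:/backup" -e back="$back" \
    --rm -it debian bash -exc 'tar xvzf "/backup/$back" -C /'
Since tar stripped the leading / when creating the archive, extracting with -C / puts the files back under /data, i.e. into the volume.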

Add custom config location to Docker Postgres image preserving its access parameters

I have written a Dockerfile like this:
FROM postgres:11.2-alpine
ADD ./db/postgresql.conf /etc/postgresql/postgresql.conf
CMD ["-c", "config_file=/etc/postgresql/postgresql.conf"]
It just adds custom config location to a generic Postgres image.
Now I have the following docker-compose service description
db:
  build:
    context: .
    dockerfile: ./db/Dockerfile
  environment:
    POSTGRES_PASSWORD: passwordhere
    POSTGRES_USER: user
    POSTGRES_DB: db_name
  ports:
    - 5432:5432
  volumes:
    - ./run/db-data:/var/lib/db/data
The problem is that I can no longer connect remotely to the DB using these credentials if I add this config option. Without that CMD line it works just fine.
If I prepend "postgres" in CMD it has the same effect due to the underlying script prepending it itself.
Provided all the files are where they need to be, I believe the only problem with your setup is that you've omitted an actual executable from the CMD -- specifying just the option. You need to actually run postgres:
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]
That should work!
EDIT in response to OP's first comment below
First, I did confirm that behavior doesn't change whether "postgres" is in the CMD or not. It's exactly as you said. Onward!
Then I thought there must be a problem with the particular postgresql.conf in use. If we could just figure out what the default file is... turns out we can!
How to get the existing postgres.conf out of the postgres image
1. Create docker-compose.yml with the following contents:
version: "3"
services:
db:
image: postgres:11.2-alpine
environment:
- POSTGRES_PASSWORD=passwordhere
- POSTGRES_USER=user
- POSTGRES_DB=db_name
ports:
- 5432:5432
volumes:
- ./run/db-data:/var/lib/db/data
2. Spin up the service using
$ docker-compose run --rm --name=postgres db
3. In another terminal get the location of the file used in this release:
$ docker exec -it postgres psql --dbname=db_name --username=user --command="SHOW config_file"
config_file
------------------------------------------
/var/lib/postgresql/data/postgresql.conf
(1 row)
4. View the contents of default postgresql.conf
$ docker exec -it postgres cat /var/lib/postgresql/data/postgresql.conf
5. Replace local config file
Now all we have to do is replace the local config file ./db/postgresql.conf with the contents of the known-working-state config and modify it as necessary.
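One convenient way to do that, while the container from step 2 is still running (a sketch using that container's name):
$ docker cp postgres:/var/lib/postgresql/data/postgresql.conf ./db/postgresql.conf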
Database objects are only created once!
Database objects are only created once by the postgres container (source). So when developing the database parameters we have to remove them to make sure we're in a clean state.
Here's a nuclear (be careful!) option to
(1) remove all exited Docker containers, and then
(2) remove all Docker volumes not attached to containers:
$ docker rm $(docker ps -a -q) -f && docker volume prune -f
So now we can be sure to start from a clean state!
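A more targeted alternative, assuming the stack was brought up with docker-compose, is to remove only this project's containers and named volumes; note that bind-mounted host directories such as ./run/db-data are not covered by this and must be deleted by hand:
$ docker-compose down -v
$ rm -rf ./run/db-data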
Final setup
Let's bring our Dockerfile back into the picture (just like you have in the question).
docker-compose.yml
version: "3"
services:
db:
build:
context: .
dockerfile: ./db/Dockerfile
environment:
- POSTGRES_PASSWORD=passwordhere
- POSTGRES_USER=user
- POSTGRES_DB=db_name
ports:
- 5432:5432
volumes:
- ./run/db-data:/var/lib/db/data
Connect to the db
Now all we have to do is build from a clean state.
# ensure all volumes are deleted (see above)
$ docker-compose build
$ docker-compose run --rm --name=postgres db
We can now (still) connect to the database:
$ docker exec -it postgres psql --dbname=db_name --username=user --command="SELECT COUNT(1) FROM pg_database WHERE datname='db_name'"
Finally, we can edit the postgresql.conf starting from a known working state.
As per this other discussion, your CMD command only has arguments and is missing a command. Try:
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]

How to pass the POSTGRES_USER env variable when using docker-compose.yml for docker swarm

I'm starting a docker swarm with a PostgreSQL image.
I want to create a user named 'numbers' on that database.
This is my docker-compose file. The .env file contains POSTGRES_USER and POSTGRES_PASSWORD. If I ssh into the container hosting the postgres image, I can see the variables when executing env.
But psql --user numbers tells me that role "numbers" does not exist.
How should I pass the POSTGRES_* vars so that the correct user is created?
version: '3'
services:
  postgres:
    image: 'postgres:9.5'
    env_file:
      - ./.env
    ports:
      - '5432:5432'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    deploy:
      replicas: 1
    networks:
      - default
    restart: always
This creates the postgresql user as expected.
$ docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -e POSTGRES_USER=numbers -d postgres
When Postgres finds its data directory already initialized, it does not run the initialization scripts. This is the check:
if [ ! -s "$PGDATA/PG_VERSION" ]; then
....
So I recommend you either create that user manually or start from scratch (removing your volume if you can afford it, losing the data). From the command line:
docker volume ls
docker volume rm <id>
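Alternatively, to create the missing role by hand without wiping the data (a sketch; the container name and password are placeholders to substitute):
docker exec -it <postgres-container> psql -U postgres \
    -c "CREATE USER numbers WITH PASSWORD 'mysecretpassword';"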