Store environment variable within tmpfs - docker-compose

Is it possible to store an environment variable from an .env file directly and securely (without it showing up in docker inspect) in a tmpfs storage?
version: "3"
services:
app:
build:
context: .
tmpfs:
- /var/tmp
command: "echo \"${SECRET}\" > /var/tmp/test.txt && ./app -c /var/tmp/test.txt"
My current approach leads to the container restarting itself over and over again.
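One likely cause of the restart loop: a string command in compose is passed to the image without a shell, so the redirection and && are handed to echo as literal arguments and the process exits immediately. A minimal sketch of a shell-wrapped variant, assuming SECRET reaches the container via env_file (the $$ escape stops compose from interpolating the value into the command string, so it does not appear in the command shown by docker inspect, though it still shows up under Env):
services:
  app:
    build:
      context: .
    env_file: .env
    tmpfs:
      - /var/tmp
    command: sh -c 'echo "$$SECRET" > /var/tmp/test.txt && exec ./app -c /var/tmp/test.txt'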

Related

Use data from volume during container build in docker compose

I need to share data between containers with docker compose. Here the shared_data_setup container should seed the shared volume with data to be used during the build of the app container. However, when I run this, the app container's /shared is empty. Is there a way to achieve this?
services:
  # This will setup some seed data to be used in other containers
  shared_data_setup:
    build: ./shared_data_setup/
    volumes:
      - shared:/shared
  app:
    build: ./app/
    volumes:
      - shared:/shared
    depends_on:
      - shared_data_setup
volumes:
  shared:
    driver: local
You need to specify the version of the docker-compose.yml file:
version: "3"
services:
# This will setup some seed data to be used in other containers
shared_data_setup:
build: ./shared_data_setup/
volumes:
- shared:/shared
app:
build: ./app/
volumes:
- shared:/shared
depends_on:
- shared_data_setup
volumes:
shared:
driver: local
Edit: Results:
# Test volume from app
$ docker-compose exec app bash
root@e652cb9e5c46:/# ls -l /shared
total 0
root@e652cb9e5c46:/# touch /shared/test
root@e652cb9e5c46:/# exit

# Test volume from shared_data_setup
$ docker-compose exec shared_data_setup bash
root@b21ead1a7354:/# ls -l /shared
total 0
-rw-r--r-- 1 root root 0 Feb 26 11:23 test
root@b21ead1a7354:/# exit

docker postgres not able to change pgdata permissions

I have a project where I use google compute engine to host my app and docker to containerize it.
I have a postgres image and I want to use a volume to make my data persistent when I restart the container. Moreover, I want the db data to be stored in google storage. So I have a google storage bucket, and I have mounted a directory on my google compute engine instance to that bucket. Specifically, what I did is mkdir /home/vetter_leo/data, where data is the folder I want to use as a volume, and then I mounted it using gcsfuse --dir-mode 777 --file-mode 777 -o allow_other --implicit-dirs artifacts.helenos-273112.appspot.com /home/vetter_leo/data/.
My Dockerfile for the postgres image is this:
FROM postgres:latest
USER postgres
ENV POSTGRES_USER helenos
ENV POSTGRES_PASSWORD helenos
ENV POSTGRES_DB helenos
ENV PGDATA /var/lib/postgresql/data/pgdata
COPY init_helenos_schema.sql /docker-entrypoint-initdb.d/
EXPOSE 5432
and my docker-compose file is this:
version: "3.5"
services:
postgres:
container_name: postgres
image: postgres
build:
context: .
dockerfile: ./postgres.prod.dockerfile
volumes:
- /home/vetter_leo/data:/var/lib/postgresql/data
networks:
default:
external:
name: helenos-network
When doing docker-compose -f docker-compose.yml up -d --build, I end up with the container not being started, and this error is shown: chmod: changing permissions of '/var/lib/postgresql/data/pgdata': Operation not permitted.
I have searched the web but so far I have not been able to find a solution for my problem. Any help would be greatly appreciated. Thanks.
I ended up using a persistent disk as suggested by @mebius99, and it works, so no problem anymore.
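For reference, that route looks roughly like this (a sketch based on the standard GCE persistent-disk workflow; the disk name, VM name, device path and mount point are placeholders):
# Create the disk and attach it to the VM
gcloud compute disks create pg-data --size=50GB --zone=us-central1-a
gcloud compute instances attach-disk my-vm --disk=pg-data --zone=us-central1-a
# On the VM: format once, mount it, and use it as the bind-mount source
sudo mkfs.ext4 -m 0 /dev/sdb
sudo mkdir -p /mnt/disks/pgdata
sudo mount -o discard,defaults /dev/sdb /mnt/disks/pgdata
The compose volume then becomes /mnt/disks/pgdata:/var/lib/postgresql/data. Unlike a gcsfuse mount, an ext4 persistent disk supports the chmod/chown calls that initdb needs.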

Running docker-compose inside a Google Compute Engine VM

I'm trying to run a small docker-compose app inside a container-optimized Google Cloud Compute Engine node, but I'm getting stuck when it's trying to mount volumes during a docker-compose up:
Creating lightning_redis_1 ...
Creating lightning_db_1 ...
Creating lightning_redis_1
Creating lightning_db_1 ... done
Creating lightning_api_1 ...
Creating lightning_api_1 ... error
ERROR: for lightning_api_1  Cannot start service api: error while creating mount source path '/rootfs/home/jeremy/lightning': mkdir /rootfs: read-only file system
ERROR: for api  Cannot start service api: error while creating mount source path '/rootfs/home/jeremy/lightning': mkdir /rootfs: read-only file system
Encountered errors while bringing up the project.
jeremy@instance-1 ~/lightning $
My docker-compose.yml file looks like this:
version: '3'
services:
  client:
    build: ./client
    volumes:
      - ./client:/usr/src/app
    ports:
      - "4200:4200"
      - "9876:9876"
    links:
      - api
    command: bash -c "yarn --pure-lockfile && yarn start"
  sidekiq:
    build: .
    command: bundle exec sidekiq
    volumes:
      - .:/api
    depends_on:
      - db
      - redis
      - api
  redis:
    image: redis
    ports:
      - "6379:6379"
  db:
    image: postgres
    ports:
      - "5433:5432"
  api:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
I don't want to have to change anything in the docker-compose.yml file - I'd prefer to fix this issue by running commands inside the VM itself, or through how I set the VM up. The reason is that it's not my code, so I can't change the docker-compose.yml file easily; all I need to do is run it for a short period of time and execute a few docker-compose commands inside the VM.
Container-Optimized OS mounts most paths as read-only, which is why you are getting the error:
source path '/rootfs/home/jeremy/lightning': mkdir /rootfs: read-only file system
So you have a few options.
Use named volumes in docker-compose
You will need to change your volumes like below:
volumes:
  - myappvol:/myapp
and define the top-level volume in compose:
volumes:
  myappvol: {}
As you said you don't want to modify the yaml, this may not work for you.
Run docker-compose inside docker
Currently you run docker-compose on the host machine; instead, run docker-compose inside another docker container that has the project folder mounted:
docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD:/rootfs/$PWD" \
  -w="/rootfs/$PWD" \
  docker/compose:1.13.0 up
This would work but the data would be persisted inside the docker container itself.
See the article below for more details:
https://cloud.google.com/community/tutorials/docker-compose-on-container-optimized-os
I had the same error. I solved it by removing the 'rootfs' directory when mounting into the docker-compose container (you cannot write to this directory).
Just change:
docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD:/rootfs/$PWD" \
  -w="/rootfs/$PWD" \
  docker/compose:1.24.0 up
to:
docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD:$PWD" \
  -w="$PWD" \
  docker/compose:1.24.0 up
Add the following alias to the bottom of the .bashrc file located at /home/{your-user}/.bashrc, using vi or nano:
e.g. nano /home/{your-user}/.bashrc
echo alias docker-compose="'"'docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD:$PWD" \
  -w="$PWD" \
  docker/compose:1.24.0'"'" >> ~/.bashrc
CTRL+O - save
CTRL+M - confirm overwrite (Enter)
CTRL+X - exit
Run source ~/.bashrc to update the terminal.
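With the alias in place, the usual commands work from the project directory, because $PWD is mounted at the same path inside the compose container and relative bind mounts resolve correctly (a quick usage sketch):
cd ~/lightning
docker-compose up -d
docker-compose ps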

How to pass the POSTGRES_USER env variable when using a docker-compose .yml for docker swarm

I'm starting a docker swarm with a PostgreSQL image.
I want to create a user named 'numbers' on that database.
This is my docker-compose file. The .env file contains POSTGRES_USER and POSTGRES_PASSWORD. If I ssh into the container hosting the postgres image, I can see the variables when executing env.
But psql --user numbers tells me that the role "numbers" does not exist.
How should I pass the POSTGRES_* vars so that the correct user is created?
version: '3'
services:
  postgres:
    image: 'postgres:9.5'
    env_file:
      - ./.env
    ports:
      - '5432:5432'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    deploy:
      replicas: 1
    networks:
      - default
    restart: always
This creates the postgresql user as expected:
$ docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -e POSTGRES_USER=numbers -d postgres
When Postgres finds its data directory already initialized, it does not run the initialization scripts. This is the check:
if [ ! -s "$PGDATA/PG_VERSION" ]; then
....
So I recommend you either create that user manually or start from scratch (removing your volume if you can afford it, losing the data). From the command line:
docker volume ls
docker volume rm <id>
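If you keep the volume, the manual route is a one-off CREATE ROLE inside the running container (a sketch; the container name, superuser name and password are placeholders):
docker exec -it some-postgres psql -U postgres -c "CREATE ROLE numbers WITH LOGIN PASSWORD 'mysecretpassword';"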

How to move PostgreSQL to a RAM disk in Docker?

I want to run the Docker image postgres:9, stop Postgres, move it to /dev/shm, and restart it, so I can run my application tests really fast.
But when I try to stop Postgres in the container using postgres or pg_ctl, I get told it cannot be run as root.
Since the Docker container logs you in as the root user, what can I do to run the Postgres commands I need?
And which folders do I need to move to /dev/shm before restarting it?
Command to start the container if you want to try this:
docker run -it postgres:9 bash
cd /usr/lib/postgresql/9.6/bin
./pg_ctl stop
Mount a tmpfs in the container and point the PostgreSQL data directory at it:
docker run --tmpfs=/pgtmpfs -e PGDATA=/pgtmpfs postgres:15
Use size=Nk to set a size limit (rather than all free memory).
--tmpfs /pgtmpfs:size=131072k
The same can be done for MySQL:
docker run --tmpfs=/var/lib/mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=yes mysql:8
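Putting the pieces together, a throwaway Postgres for tests might look like this (a sketch; the container name and POSTGRES_PASSWORD value are placeholders - recent postgres images refuse to start without a password or POSTGRES_HOST_AUTH_METHOD):
docker run -d --name pgtest \
  --tmpfs /pgtmpfs:size=131072k \
  -e PGDATA=/pgtmpfs \
  -e POSTGRES_PASSWORD=test \
  -p 5432:5432 \
  postgres:15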
Kubernetes
An emptyDir volume can set the medium property to Memory:
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-pd
spec:
  containers:
    - image: docker.io/postgres:15
      name: tmpdb
      env:
        - name: PGDATA
          value: /pgtmpfs
      volumeMounts:
        - mountPath: /pgtmpfs
          name: tmpdata-volume
  volumes:
    - name: tmpdata-volume
      emptyDir:
        medium: Memory
        sizeLimit: 131072k
Docker Compose
And in a docker-compose 3.6+ definition (not supported by docker stack):
version: "3.6"
services:
db:
image: docker.io/postgres:15
environment:
- PGDATA=/pgtmpfs
tmpfs:
- /pgtmpfs
Compose can define shared tmpfs volumes as well, as sketched below.
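A sketch of that shared-volume variant, using the local driver's tmpfs options (the volume name and size are placeholders):
volumes:
  pgtmpfs:
    driver: local
    driver_opts:
      type: tmpfs
      device: tmpfs
      o: size=131072k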
By adding --user=postgres to your docker run parameter list, you'll be logged in as the postgres user directly (note the flag belongs after run, not before it):
docker run --user=postgres -it postgres:9 bash
cd /usr/lib/postgresql/9.6/bin
./pg_ctl stop
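From there, a throwaway cluster under /dev/shm can be created and started as the postgres user (a rough interactive sketch; /dev/shm is world-writable, and the binary path matches the postgres:9 image from the question):
docker run -it --user=postgres -e PGDATA=/dev/shm/pgdata postgres:9 bash
/usr/lib/postgresql/9.6/bin/initdb
/usr/lib/postgresql/9.6/bin/pg_ctl -w -l /tmp/pg.log start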