docker-compose run does not set environment variables

I use docker-compose and have a .env file containing an environment variable:
KEY=VAL
Additionally, I have the following in my docker-compose.yml:
version: '3'
services:
  webapp:
    build: ./dir
    environment:
      - KEY=${KEY}
If I run docker-compose build as well as docker-compose up, the environment variable KEY is accessible in the container.
But if I now run a command, e.g. docker-compose run webapp echo $KEY, it prints nothing, so I guess it is not set. Is that normal behaviour, or am I missing something substantial?
Thanks in advance
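A likely explanation (assuming a POSIX shell on the host): in docker-compose run webapp echo $KEY, the host shell expands $KEY before docker-compose ever runs, and on the host it is empty. Single-quoting defers expansion to a shell inside the container, e.g. docker-compose run webapp sh -c 'echo $KEY'. The quoting behaviour itself can be sketched without Docker:

```shell
# Double quotes: the current (host) shell expands the variable immediately.
# Single quotes: the variable name passes through literally, so a later
# shell (e.g. the one inside the container) can expand it instead.
KEY=host_value
expanded=$(echo "$KEY")   # expanded now, by this shell
literal=$(echo '$KEY')    # stays as the literal string $KEY
echo "$expanded"
echo "$literal"
```

The same rule explains why docker-compose up works: there, compose itself interpolates ${KEY} from the .env file, with no host shell involved.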

Related

How can I use environment variable in the docker compose configuration when building a VS code devcontainer?

I want to mount a directory from my local Windows system into my VS Code devcontainer. I would like to do this using the docker-compose file I have, which reads the local path from an environment variable.
These are the contents of my docker-compose.yml:
services:
  annotator:
    build: .
    volumes:
      - ${DB}:/app/db:rw
      - ${DATA}:/data:ro
    command: "runserver 0.0.0.0:8000"
    ports:
      - "8000:8000"
When running docker-compose up this config works fine. However, when I use the same docker-compose.yml to set up a devcontainer, I get the error that ${DB} and ${DATA} are not set.
This is my devcontainer config:
{
  "name": "Existing Docker Compose (Extend)",
  "dockerComposeFile": [
    "../docker-compose.yml"
  ],
  "service": "annotator",
  "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}"
}
How can I make the devcontainer build process find those environment variables?
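One common workaround (an assumption, since the Dev Containers build does not inherit your interactive shell's environment): put the variables in a .env file next to the docker-compose.yml, which Compose picks up automatically when interpolating ${DB} and ${DATA}. The paths below are placeholders:

```ini
# .env (same directory as docker-compose.yml); paths are examples only
DB=C:/Users/me/project/db
DATA=C:/Users/me/project/data
```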

How can I run MySQL Router in Docker with a compose file in bootstrap mode?

MySQL Router has a command to bootstrap itself:
mysqlrouter --bootstrap root@127.0.0.1:3306 --directory /tmp/router
After bootstrapping, Router exits, and we should run it again with the config file generated by the bootstrap, since I will modify this file:
mysqlrouter --config /tmp/router/mysqlrouter.conf
This works fine in a pure Linux environment, but not in Docker. Below is my docker compose file:
version: '2'
services:
  common: &baseDefine
    environment:
      MYSQL_HOST: "192.168.213.6"
      MYSQL_PORT: 3306
      MYSQL_USER: 'root'
      MYSQL_PASSWORD: 'urpwd.root'
      MYSQL_INNODB_CLUSTER_MEMBERS: 3
      MYSQL_CREATE_ROUTER_USER: 0
    image: "docker.io/mysql/mysql-router:latest"
    volumes:
      - "./conf:/tmp/myrouter"
    network_mode: "host"
  boot:
    container_name: "mysql_router_boot"
    command: ["mysqlrouter","--bootstrap","root@192.168.213.7:3306","--directory","/tmp/myrouter","--conf-use-sockets","--conf-skip-tcp",
              "--force","--strict","--user","mysqlrouter","--account","sqlrouter","--account-create","if-not-exists"]
    <<: *baseDefine
  run:
    container_name: "mysql_router"
    restart: always
    command: ["mysqlrouter","--config","/tmp/myrouter/mysqlrouter.conf"]
    <<: *baseDefine
First, I call the boot service to bootstrap and generate the config into the specified directory:
docker-compose run --rm boot
After this command, the config file is generated correctly. Then I execute:
docker-compose run --name mysql_router run
It works, but not the way I expected.
Without Docker, the second step runs mysqlrouter with only the config and no bootstrap.
But with Docker and these commands, the second step bootstraps again.
I know this is because the two services run in two containers.
Any ideas to make this flow more suitable? Such as running the two services in one container, or running a service in an existing container?
It's OK with the following: modify the yml's command for the run service:
run:
  command: /bin/bash -c "mysqlrouter --config /tmp/myrouter/mysqlrouter.conf"
Running mysqlrouter through bash delays it enough to recognize the existing conf file; it works.
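Another way to collapse the two services into one (a sketch, not from the original answer; paths and flags carried over from the question, and this script would run inside the container as its entrypoint): bootstrap only when no config exists yet in the mounted volume, then always start from the generated config.

```shell
#!/bin/sh
# Hypothetical entrypoint: the ./conf volume persists across restarts,
# so the bootstrap branch runs only on the very first start.
CONF=/tmp/myrouter/mysqlrouter.conf
if [ ! -f "$CONF" ]; then
    mysqlrouter --bootstrap root@192.168.213.7:3306 \
        --directory /tmp/myrouter --force --user mysqlrouter
fi
# Replace the shell with the router so it receives signals directly.
exec mysqlrouter --config "$CONF"
```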

Running a MongoDB in Docker using Compose

I'm trying to run a database in docker and a python script with it to store MQTT messages. This gave me the idea to use Docker Compose since it sounded logical that both were somewhat connected. The issue I'm having is that the Docker Containers do indeed run, but they do not store anything in the database.
When I run my script locally it does store messages so my hunch is that the Compose File is not correct.
Is this the correct way to compose a Python script which stores messages in a DB together with the database itself (with a .js file for the credentials)? Any feedback would be appreciated!
version: '3'
services:
  storing_script:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on: [mongo]
  mongo:
    image: mongo:latest
    environment:
      MONGO_INITDB_ROOT_USERNAME: xx
      MONGO_INITDB_ROOT_PASSWORD: xx
      MONGO_INITDB_DATABASE: motionDB
    volumes:
      - ${PWD}/mongo-data:/data/db
      - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
    ports:
      - 27018:27018
    restart: unless-stopped
The Dockerfile I'm using to build:
# set base image (host OS)
FROM python:3.8-slim

# set the working directory in the container
WORKDIR /code

# copy the dependencies file to the working directory
COPY requirements.txt .

# install dependencies
RUN pip install -r requirements.txt

# copy the content of the local src directory to the working directory
COPY src/ .

# command to run on container start
CMD [ "python", "./main.py" ]
I think this may be due to user permissions.
What I did for my docker-compose deployment is to also mount the passwd file after creating a mongodb user:
volumes:
  - /etc/passwd:/etc/passwd:ro
This worked for me as the most straightforward solution.
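One more thing worth checking (an assumption, since the script itself is not shown): inside the Compose network the Python container must reach the database via the service name mongo on MongoDB's default port 27017, not via localhost, which only works when the script runs on the host. A minimal sketch of building the connection URI:

```python
# Values match the compose file above; "mongo" is the service name,
# resolvable by DNS from the storing_script container.
user, password = "xx", "xx"
host, port, db = "mongo", 27017, "motionDB"

# pymongo would consume this URI: pymongo.MongoClient(uri)
uri = f"mongodb://{user}:{password}@{host}:{port}/{db}"
print(uri)
```

Note that the ports: mapping only matters for clients on the host; container-to-container traffic goes straight to the container port.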

Initialize PostgreSQL-container with command

The following Dockerfile is created:
FROM postgres:12
CMD [«postgres»]
And docker-compose.yml
version: '3'
services:
  codes:
    container_name: short_codes
    build:
      context: codes_store
    image: andrey1981spb/short_codes
    ports:
      - 5432:5432
I bring docker-compose up successfully. But when I try to enter the container, I receive:
"Container ... is not running"
Or am I using the wrong command to initialize the container?
Your issue is due to incorrect quotes. Replacing them with proper quotes solves it:
FROM postgres:12
CMD ["postgres"]
P.S. Your Dockerfile is essentially identical to the official postgres image, so you might as well use that image directly in your compose YAML, unless you're planning some additional modifications later.
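For reference, a minimal compose sketch using the official image directly (names and ports carried over from the question; the password value is a placeholder, but the official postgres image does require one):

```yaml
version: '3'
services:
  codes:
    container_name: short_codes
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: example   # placeholder; required by the official image
    ports:
      - 5432:5432
```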

docker-compose how to pass env variables into file

Is there any chance to pass variables from docker-compose into an apache.conf file?
I have Dockerfile with variables
ENV APACHE_SERVER_NAME localhost
ENV APACHE_DOCUMENT_ROOT /var/www/html
I have apache.conf which I copy into /etc/apache2/sites-available/ while building image
ServerName ${APACHE_SERVER_NAME}
DocumentRoot ${APACHE_DOCUMENT_ROOT}
I have docker-compose.yml
environment:
  - APACHE_SERVER_NAME=cms
  - APACHE_DOCUMENT_ROOT=/var/www/html/public
When I run docker-compose, nothing happens and apache.conf in the container is unchanged.
Am I completely wrong and is this impossible, or am I missing a step?
Thank you
Let me explain some small differences among the ways to pass environment variables:

Environment variables for build time and for the entrypoint at run time:
ENV (Dockerfile): once specified in the Dockerfile, containers built from the image will have these environment variables available to their entrypoints.
environment: the same as ENV, but for docker-compose.
docker run -e VAR=value...: the same as before, but for the CLI.

Environment variables only for build time:
ARG (Dockerfile): these won't appear in deployed containers.

Environment variables accessible to every container and build:
a .env file defined in the working dir where you execute docker run or docker-compose.
env_file: section in the docker-compose.yml file to define another env file.

As you're trying to define variables for the Apache conf file, maybe you should set them from the entrypoint, or just define them in the .env file.
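As a sketch of that entrypoint approach (filenames assumed, not from the original answer): render the conf from a template when the container starts, so the values from docker-compose's environment: section apply at run time instead of being baked in at build time. envsubst (from gettext) is the usual tool; plain sed works the same way:

```shell
# The template uses the same ${VAR} placeholders as the question's apache.conf.
printf 'ServerName ${APACHE_SERVER_NAME}\nDocumentRoot ${APACHE_DOCUMENT_ROOT}\n' \
    > /tmp/apache.conf.template

# In the container these would already be set by "environment:".
export APACHE_SERVER_NAME=cms
export APACHE_DOCUMENT_ROOT=/var/www/html/public

# Substitute the placeholders; a real entrypoint would write the result
# to /etc/apache2/sites-available/ before starting apache.
rendered=$(sed -e "s|\${APACHE_SERVER_NAME}|$APACHE_SERVER_NAME|" \
               -e "s|\${APACHE_DOCUMENT_ROOT}|$APACHE_DOCUMENT_ROOT|" \
               /tmp/apache.conf.template)
echo "$rendered"
```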