how to pass host user to Dockerfile when using docker-compose - command-line

I'm trying to pass a host variable to a Dockerfile when running docker-compose build.
I would like to run
RUN usermod -u $USERID www-data
in an apache-php7 Dockerfile. $USERID being the ID of the current host user.
I would have thought that the following might work:
command line
export USERID=$(id -u); docker-compose build
docker-compose.yml
...
environment:
  - USERID=$USERID
Dockerfile
ENV USERID
RUN usermod -u $USERID www-data
But no luck yet.

For Docker in general, it is not possible to use host environment variables during the build phase. This is by design: if you run docker build and I run docker build using the same Dockerfile (or Docker Hub runs docker build with the same Dockerfile), we should end up with the same image, regardless of our local environments.
While passing in variables at runtime is easy with the docker command line (using -e <var>=<value>), it's a little trickier with docker-compose, because that tool is designed to create self-contained environments.
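For example, with plain docker the host uid can be injected at run time and inspected inside the container (using the alpine image purely for demonstration):
docker run --rm -e USERID=$(id -u) alpine env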
A simple solution would be to drop the host uid into an environment file before starting the container. That is, assuming you have:
version: "2"
services:
  shell:
    image: alpine
    env_file: docker-compose.env
    command: >
      env
You can then:
echo HOST_UID=$UID > docker-compose.env; docker-compose up
And the HOST_UID environment variable will be available to your container:
Recreating vartest_shell_1
Attaching to vartest_shell_1
shell_1 | HOSTNAME=17423d169a25
shell_1 | HOST_UID=1000
shell_1 | HOME=/root
vartest_shell_1 exited with code 0
You would then need something like an ENTRYPOINT script that sets up the container environment (creating users, modifying file ownership) to operate correctly with the given UID.
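A minimal sketch of such an entrypoint, assuming the image provides usermod and the application files live under /var/www (both assumptions, not part of the original answer):
#!/bin/sh
# entrypoint.sh (hypothetical): HOST_UID is supplied via docker-compose.env
set -e
if [ -n "$HOST_UID" ]; then
    # Re-map www-data to the host user's uid so bind-mounted files stay writable
    usermod -u "$HOST_UID" www-data
    chown -R www-data:www-data /var/www  # assumed application directory
fi
exec "$@"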

Related

Docker & PostgreSQL - modified PostgreSQL image doesn't start with docker run command

I want to build a PostgreSQL image that only contains some extra .sql files to be executed at startup time.
Dockerfile:
FROM postgres:11.9-alpine
USER postgres
WORKDIR /
COPY ddl/*.sql /docker-entrypoint-initdb.d/
Then I build the image:
docker build -t my-postgres:1.0.0 -f Dockerfile .
And run the container
docker run -d --name my-database \
-e POSTGRES_PASSWORD=abc123 \
-p 5432:5432 \
my-postgres:1.0.0
The output of it is the container id
33ed596792a80fc08f37c7c0ab16f8827191726b8e07d68ce03b2b5736a6fa4e
Checking the running containers returns nothing:
docker container ls
But if I explicitly start it, it works
docker start my-postgres
In the original PostgreSQL image, the docker run command already starts the database. Why doesn't it after building my own image?
It turned out that one of the copied .sql files was failing to execute, which, per this documentation, causes the entrypoint script to exit. Fixing the SQL solved the issue and the container started normally with docker run.
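As a debugging aid (not part of the original answer), the failure is visible in the exited container's logs:
docker logs my-database
The entrypoint prints each /docker-entrypoint-initdb.d script as it runs it, so the failing file should appear right before the exit.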

Docker Compose - Container Bash Forking

I am trying to run netbox based on their standard guide on Docker Hub with a slight difference that I need our existing postgres dump to be restored when the postgres container starts.
I have tried a few approaches like defining a command option in docker-compose file like (and a few more combinations):
sleep 2 && psql -U netbox -f netbox.sql
The sleep is required to prevent the psql command from running before the postgres service has started.
Or defining a bash script that does the database restore but all these approaches cause the container to exit after that command/script is run.
My last resort was to utilize bash forking and this is what the postgres snippet of docker-compose looks like:
postgres:
  image: postgres:13-alpine
  env_file: env/postgres.env
  command:
    - sh
    - -c
    - (sleep 3 && cd /home && psql -U netbox -f netbox.sql) & su -c postgres postgres
  volumes:
    - ./my_db:/home/
    - netbox-postgres-data:/var/lib/postgresql/data
Sadly this results in:
postgres: could not access the server configuration file
"/var/lib/postgresql/data/postgresql.conf": No such file or directory
If I omit the command section of docker-compose, the container starts up fine and I can navigate and ls the directory in the error message. But that is not what I really need, because this container will go on to be part of a much larger jungle of an ecosystem with little to no control over it afterwards.
Could it be my bash forking, or does the problem lie somewhere else?
Thanks in advance
I was able to find a solution by going through the thread that David Maze shared in the comments.
In my case, placing the *.sql file inside /docker-entrypoint-initdb.d did not work, but a bash script placed in the /docker-entrypoint-initdb.d directory did get triggered.
The bash script was a very simple one: it cds to the directory containing the sql dump and then restores it by running psql:
#!/bin/bash
cd /home
psql -U netbox -f netbox.sql
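For completeness, a sketch of how such a script could be mounted in the compose file (the script name restore-dump.sh is illustrative). Note that the stock postgres entrypoint only runs init scripts when the data directory is empty, i.e. on first creation of the volume:
postgres:
  image: postgres:13-alpine
  env_file: env/postgres.env
  volumes:
    - ./my_db:/home/
    - ./restore-dump.sh:/docker-entrypoint-initdb.d/restore-dump.sh
    - netbox-postgres-data:/var/lib/postgresql/data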

Run SQL script after start of SQL Server on docker

I have a Dockerfile with the code below:
FROM microsoft/mssql-server-windows-express
COPY ./create-db.sql .
ENV ACCEPT_EULA=Y
ENV sa_password=##$wo0RD!
CMD sqlcmd -i create-db.sql
and I can create the image, but when I run a container from it I don't see the created database on the SQL Server, because the script is executed before SQL Server has started.
How can I make the script execute after the SQL Server service has started?
RUN gets used to build the layers in an image. CMD is the command that is run when you launch an instance (a "container") of the built image.
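To make the distinction concrete (a toy example, not from the question):
FROM alpine
RUN echo "created at build time" > /note.txt
CMD ["cat", "/note.txt"]
The RUN line executes once, during docker build; the CMD line executes every time a container is started from the image.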
Also, if your script depends on those environment variables and you are using an older version of Docker, it might fail because those variables are not defined the way you want them defined!
In older versions of Docker, the Dockerfile ENV instruction uses a space instead of "=".
Your Dockerfile should probably be:
FROM microsoft/mssql-server-windows-express
COPY ./create-db.sql .
ENV ACCEPT_EULA Y
ENV SA_PASSWORD ##$wo0RD!
RUN sqlcmd -i create-db.sql
This will create an image containing the database with your password inside it.
(If the SQL file somehow uses the environment variables, this wouldn't make sense, as you might as well update the SQL file before you copy it over.) If you want to be able to override the password between the docker build and docker run steps, using docker run --env sa_password=##$wo0RD! ..., you will need to change the last line to:
CMD sqlcmd -i create-db.sql && .\start -sa_password $env:SA_PASSWORD \
-ACCEPT_EULA $env:ACCEPT_EULA -attach_dbs \"$env:attach_dbs\" -Verbose
Which is a modified version of the CMD line that is inherited from the upstream image.
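For instance, the override could then look like this at run time (a sketch; the my-mssql tag and the password value are placeholders):
docker build -t my-mssql .
docker run -d -p 1433:1433 --env sa_password='Str0ng!Pass' --env ACCEPT_EULA=Y my-mssql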
You can follow this link https://github.com/microsoft/mssql-docker/issues/11.
Credits to Robin Moffatt.
Change your docker-compose.yml file to contain the following
mssql:
  image: microsoft/mssql-server-windows-express
  environment:
    - SA_PASSWORD=##$wo0RD!
    - ACCEPT_EULA=Y
  volumes:
    # directory with sql script on pc to /scripts/
    # - ./data/mssql:/scripts/
    - ./create-db.sql:/scripts/
  command:
    - /bin/bash
    - -c
    - |
      # Launch MSSQL and send to background
      /opt/mssql/bin/sqlservr &
      # Wait 30 seconds for it to be available
      # (lame, I know, but there's no nc available to start prodding network ports)
      sleep 30
      # Run every script in /scripts
      # TODO set a flag so that this is only done once on creation,
      # and not every time the container runs
      for foo in /scripts/*.sql
        do /opt/mssql-tools/bin/sqlcmd -U sa -P $$SA_PASSWORD -l 30 -e -i $$foo
      done
      # So that the container doesn't shut down, sleep this thread
      sleep infinity
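A slightly more robust variant of the fixed sleep 30 (a sketch, not from the linked answer) polls the server until it accepts connections; the doubled $$ keeps docker-compose from interpolating the variable itself:
# Retry until SQL Server answers a trivial query, then run the scripts
until /opt/mssql-tools/bin/sqlcmd -U sa -P $$SA_PASSWORD -Q "SELECT 1" > /dev/null 2>&1
do
  sleep 2
done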

Docker container for app tests with postgres database

I'm new to Docker.
I'm trying to run my node app tests in a Docker container.
I want to run the tests with a real postgres db.
I'm creating this container with the following Dockerfile:
# Set image
FROM postgres:alpine
# Install node latest
RUN apk add --update nodejs nodejs-npm
# Set working dir
WORKDIR .
# Copy the current directory contents into the container at .
ADD src src
ADD .env.testing .env
ADD package.json .
ADD package-lock.json .
# Run tests
CMD npm install && npm run coverage
From the image docs, when I run the container with:
$ docker run build-name -d postgres
I see that the container takes time to start the postgresql service.
When I run the container without the "-d postgres" param:
$ docker run build-name
The service does not start and the tests fail due to "could not connect to server".
Questions:
A. How can I run the tests AFTER the postgresql service starts?
B. I saw some examples using docker-compose, but can I do this without compose?
Thanks
Thanks to @Bogdan I found the complete solution:
Dockerfile should be:
# Set image
FROM postgres:alpine
# Install node latest
RUN apk add --update nodejs nodejs-npm
# Set working dir
WORKDIR .
# Copy the current directory contents into the container at .
ADD src src
ADD .env.testing .env
ADD package.json .
ADD package-lock.json .
# Install
RUN npm install
# Init container
CMD psql -U postgres -c "SELECT 1;" postgres
Build container:
$ docker build -t test .
Run container:
$ docker run --name startedtest -d test -d postgres
Run tests after the container is running:
$ docker exec startedtest some_create_schema_script && npm run coverage
If the goal is just to run the tests in the Postgres container, one solution could be to install NodeJs in your postgres:alpine derived image and run the container normally. Once the database is up, you can run npm using docker exec like this:
docker exec <container_id> npm run coverage
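To avoid racing the database startup (an addition, not part of the original answer), you can poll pg_isready, which ships with the postgres image, before kicking off the tests; the container name testdb is illustrative, and the explicit postgres command restores the server startup that the custom CMD overrode:
docker run --name testdb -d test postgres
# Block until postgres accepts connections
until docker exec testdb pg_isready -U postgres > /dev/null 2>&1
do
  sleep 1
done
docker exec testdb npm run coverage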

Configure dockerfile with postgres

Trying to make a Dockerfile for the postgres db needed for my app.
Dockerfile
FROM postgres:9.4
RUN mkdir /sql
COPY src/main/resources/sql_scripts/* /sql/
RUN psql -f /sql/create_user.sql
RUN psql -U user -W 123 -f create_db.sql
RUN psql -U user -W 123 -d school_ats -f create_tables.sql
run
docker build .
result:
Sending build context to Docker daemon 3.367 MB
Step 1 : FROM postgres:9.4
---> 6196bca94565
Step 2 : RUN mkdir /sql
---> Using cache
---> 6f57c1e759b7
Step 3 : COPY src/main/resources/sql_scripts/* /sql/
---> Using cache
---> 3b496bfb28cd
Step 4 : RUN psql -a -f /sql/create_user.sql
---> Running in 33b2230a12fa
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
The command '/bin/sh -c psql -a -f /sql/create_user.sql' returned a non-zero code: 2
How can I specify db in docker for my project?
When building your docker image, postgres is not running. The database is started when the container starts; any sql files can only be executed after that. The easiest solution is to put your sql files into a special directory:
FROM postgres:9.4
COPY *.sql /docker-entrypoint-initdb.d/
On first boot, the startup script will execute all files from this directory. You can read about this in the docs at https://hub.docker.com/_/postgres/, in the section "How to extend this image".
Also, if you need a different user, you should set the environment variables POSTGRES_USER and POSTGRES_PASSWORD. It's easier than using custom scripts for creating the user.
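For example (a sketch; the image tag is illustrative, and the user, password, and database names are taken from the question):
docker run -d \
  -e POSTGRES_USER=user \
  -e POSTGRES_PASSWORD=123 \
  -e POSTGRES_DB=school_ats \
  my-postgres-image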
As the comment above says, during the image build you don't get a running instance of Postgres.
You could take a slightly different approach. Instead of trying to execute SQL scripts yourself, you could copy them to the /docker-entrypoint-initdb.d/ directory. They will be executed when the container starts up.
Have a look at how the postgres:9.4 image is built:
Dockerfile
docker-entrypoint.sh
Also, in your Dockerfile, use these variables to set the database details:
POSTGRES_DB
POSTGRES_USER
POSTGRES_PASSWORD