run sed in Dockerfile to replace text with build arg value

I'm trying to use sed to replace the text "localhost" in my nginx.conf with the IP address of my Docker host (FYI: I'm using Docker Machine locally, which runs Docker on 192.168.99.100).
My nginx Dockerfile looks like this:
FROM nginx:alpine
ARG DOCKER_HOST
COPY ./nginx.conf /etc/nginx/nginx.conf
RUN sed -i 's/localhost/${DOCKER_HOST}/g' /etc/nginx/nginx.conf
EXPOSE 80
My nginx.conf file looks like this (note: most of it removed for simplicity):
http {
    sendfile on;
    upstream epd {
        server localhost:8080;
    }
    # ...
}
I'm expecting "localhost" to get replaced with "192.168.99.100", but it actually gets replaced with the literal string "${DOCKER_HOST}". This causes an error:
host not found in upstream "${DOCKER_HOST}:8080"
I've tried a few other things, but I can't seem to get this working. I can confirm the DOCKER_HOST build arg is getting through to the Dockerfile via my Docker Compose script, as I can echo it out.
Many thanks for any responses...

Replace the single quotes ' around s/localhost/${DOCKER_HOST}/g with double quotes ". The shell does not expand variables inside single quotes.
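Applied to the Dockerfile above, the RUN line becomes:
RUN sed -i "s/localhost/${DOCKER_HOST}/g" /etc/nginx/nginx.conf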

Related

Singularity exec - echo redirect issue

I am running a Singularity container with an Ubuntu Xenial base.
When I attempt to create a text file by redirecting the output of an echo command to the file system, the target of the redirect is interpreted to be on the host instead of in the container.
Below is the command:
singularity exec ubuntu_xenial_image.img echo "test" >> /mnt/test.txt
Instead of creating the file test.txt in the container folder /mnt, it tries to write test.txt to the host folder /mnt, resulting in a no-permissions error, as obviously I don't have permission to write to the host's root-owned folder.
Do you know why the redirect goes to the host file system rather than the container file system, when singularity exec is supposed to run the command inside the container?
The full command, as written, is split in two:
singularity exec ubuntu_xenial_image.img echo "test"
runs in the container, while the redirection >> /mnt/test.txt is handled by the host shell before Singularity ever sees it.
To correct it:
$ singularity exec ubuntu_xenial_image.img sh -c "echo test >> /mnt/test.txt"
so that the complete command, including the redirection, is interpreted by sh inside the container.
In addition to this, you need to verify the write permissions of the container's /mnt directory, or execute with sudo.
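Single quotes on the host side work just as well and make the intent clearer, since nothing in the inner command should be expanded by the host shell (a minor variant, not from the original answer):
singularity exec ubuntu_xenial_image.img sh -c 'echo test >> /mnt/test.txt'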

Run SQL script after start of SQL Server on docker

I have a Dockerfile with the code below:
FROM microsoft/mssql-server-windows-express
COPY ./create-db.sql .
ENV ACCEPT_EULA=Y
ENV sa_password=##$wo0RD!
CMD sqlcmd -i create-db.sql
and I can build the image, but when I run a container from it I don't see the created database on the SQL Server, because the script is executed before SQL Server has started.
Can I make the script execute after the SQL Server service has started?
RUN is used to build the layers of an image. CMD is the command that runs when you launch an instance (a "container") of the built image.
Also, if your script depends on those environment variables and you're on an older version of Docker, it might fail because the variables are not defined the way you want them defined!
In older versions of Docker, the Dockerfile ENV instruction uses a space instead of "=".
Your Dockerfile should probably be:
FROM microsoft/mssql-server-windows-express
COPY ./create-db.sql .
ENV ACCEPT_EULA Y
ENV SA_PASSWORD ##$wo0RD!
RUN sqlcmd -i create-db.sql
This will create an image containing the database with your password inside it.
(If the SQL file somehow uses the environment variables, this wouldn't make sense as you might as well update the SQL file before you copy it over.) If you want to be able to override the password between the docker build and docker run steps, by using docker run --env sa_password=##$wo0RD! ..., you will need to change the last line to:
CMD sqlcmd -i create-db.sql && .\start -sa_password $env:SA_PASSWORD \
-ACCEPT_EULA $env:ACCEPT_EULA -attach_dbs \"$env:attach_dbs\" -Verbose
This is a modified version of the CMD line inherited from the upstream image.
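For completeness, a build-and-run pair with the override would look something like this (the tag mssql-db is just an example name; note the quotes so the shell doesn't mangle the password):
docker build -t mssql-db .
docker run -d --env sa_password='##$wo0RD!' --env ACCEPT_EULA=Y mssql-db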
You can follow this link https://github.com/microsoft/mssql-docker/issues/11.
Credits to Robin Moffatt.
Change your docker-compose.yml file to contain the following:
mssql:
  image: microsoft/mssql-server-windows-express
  environment:
    - SA_PASSWORD=##$wo0RD!
    - ACCEPT_EULA=Y
  volumes:
    # directory with sql scripts on pc to /scripts/
    # - ./data/mssql:/scripts/
    - ./create-db.sql:/scripts/
  command:
    - /bin/bash
    - -c
    - |
      # Launch MSSQL and send to background
      /opt/mssql/bin/sqlservr &
      # Wait 30 seconds for it to be available
      # (lame, I know, but there's no nc available to start prodding network ports)
      sleep 30
      # Run every script in /scripts
      # TODO set a flag so that this is only done once on creation,
      # and not every time the container runs
      for foo in /scripts/*.sql
        do /opt/mssql-tools/bin/sqlcmd -U sa -P $$SA_PASSWORD -l 30 -e -i $$foo
      done
      # So that the container doesn't shut down, sleep this thread
      sleep infinity
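With that in place, bringing the service up and watching the scripts run is just (the service name mssql is taken from the snippet above):
docker-compose up -d mssql
docker-compose logs -f mssql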

docker-compose down with a non-default yml file name

I have a non-default docker-compose file name (docker-compose-test.yml).
There is only one service defined in it.
I am starting the container using "docker-compose -f docker-compose-test.yml up"
I am trying to stop the container started above using docker-compose down, but it is not working.
I am getting the error below:
ERROR:
Can't find a suitable configuration file in this directory or any
parent. Are you in the right directory?
Supported filenames: docker-compose.yml, docker-compose.yaml
I understand that it is looking for default docker compose file name. Is there a way to specify the custom config file name during docker-compose down?
You should run the docker-compose down command with the same -f flag that you used when you started the containers:
docker-compose -f docker-compose-test.yml down
You can create a .env file and add the following:
COMPOSE_FILE=docker-compose-test.yml
docker-compose then picks up docker-compose-test.yml by default, so a plain docker-compose down will work.
Note that docker-compose's syntax requires -f to come before up/down, and -d to come after:
docker-compose -f docker-compose.prod.yml up -d
If you put -f after up/down it doesn't work, and if you put -d before up/down you get the help text or an error. down, of course, works without -d:
docker-compose -f docker-compose.prod.yml down
If you use multiple files for docker-compose AND a custom project name, you should write it like this:
docker-compose -f docker-compose.yml -f docker-compose.override.yml -p custom_name down
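The matching up command mirrors it, with the same files and project name (a sketch mirroring the down command above):
docker-compose -f docker-compose.yml -f docker-compose.override.yml -p custom_name up -d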

How can I grab exposed port from inspecting docker container?

Assuming that I start a docker container with the following command
docker run -d --name my-container -p 1234 my-image
and running docker ps shows the port binding for that image is...
80/tcp, 443/tcp, 0.0.0.0:32768->1234/tcp
Is there a way that I can use docker inspect to grab the port that is assigned to be mapped to 1234 (in this case, 32768)?
Similar to parsing and grabbing the IP address using the following command...
IP=$(docker inspect -f "{{ .NetworkSettings.IPAddress }}" my-container)
I want to be able to do something similar to the following
ASSIGNED_PORT=$(docker inspect -f "{{...}}" my-container)
I am not sure if there is a way to do this through Docker alone, but I would imagine there is some command-line magic (grep, sed, etc.) that would let me do something like this.
When I run docker inspect my-container and look at NetworkSettings, I see the following:
"NetworkSettings": {
...
...
...
"Ports": {
"1234/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "32768"
}
],
"443/tcp": null,
"80/tcp": null
},
...
...
},
In this case, I would want it to find HostPort without me telling it anything about port 1234 (it should ignore 443 and 80 below it) and return 32768.
Execute the command:
docker inspect --format="{{json .Config.ExposedPorts }}" src_python_1
Result: {"8000/tcp":{}}
Proof (using docker ps):
e5e917b59e15 src_python:latest "start-server" 22 hours ago Up 22 hours 0.0.0.0:8000->8000/tcp src_python_1
It is not as easy as with the IP address, since one container can have multiple ports, some exposed and some not, but this will get it:
sudo docker inspect name | grep HostPort | sort | uniq | grep -o '[0-9]*'
If more than one port is exposed it will be displayed on a new line.
There are two good options depending on your taste:
docker port my-container 1234 | grep -o '[0-9]*$'
docker inspect --format='{{(index (index .NetworkSettings.Ports "1234/tcp") 0).HostPort}}' my-container
Using jq:
docker inspect --format="{{json .}}" my-container | jq '.NetworkSettings.Ports["1234/tcp"][0].HostPort'
Replace 1234 with the port you specified in docker run.
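If you want the bare number without the surrounding JSON quotes, jq's -r (raw output) flag strips them:
docker inspect --format="{{json .}}" my-container | jq -r '.NetworkSettings.Ports["1234/tcp"][0].HostPort'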
I have used both docker inspect <container> on its own and docker inspect <container> | jq to extract ports.
In the example below I am looking at the dsb-server container, and the port I am after is 8080/tcp.
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
98b4bec33ba9 xxxxx:dsbv3 "./docker-entrypoint." 6 days ago Up 6 days 8009/tcp, 0.0.0.0:4848->4848/tcp, 8181/tcp, 0.0.0.0:9013->8080/tcp dsb-server
To extract the port:
docker inspect dsb-server | jq -r '.[].NetworkSettings.Ports."8080/tcp"[].HostPort'
9013
docker inspect --format='{{(index (index .NetworkSettings.Ports "8080/tcp") 0).HostPort}}' dsb-server
9013
The above answers were close and put me on the right track, but I kept getting the following error:
Template parsing error: template: :1: unexpected "/" in operand
I found the answer here: https://github.com/moby/moby/issues/27592
This is what finally worked for me:
docker inspect --format="{{(index (index .NetworkSettings.Ports \"80/tcp\") 0).HostPort}}" $INSTANCE_ID
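Wrapping either form in command substitution gives the variable the question asked for (a sketch using the 1234/tcp example from the question):
ASSIGNED_PORT=$(docker inspect --format='{{(index (index .NetworkSettings.Ports "1234/tcp") 0).HostPort}}' my-container)
echo "$ASSIGNED_PORT"   # prints 32768 in the example above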

Passing variable from container start to file

I have the following lines in a Dockerfile, where I want to set a value in a config file to a default before the application starts up at the end, and optionally override it using the -e option when starting the container.
I am trying to do this using Docker's ENV instruction:
ENV CONFIG_VALUE default_value
RUN sed -i 's/CONFIG_VALUE/'"$CONFIG_VALUE"'/g' CONFIG_FILE
CMD command_to_start_app
I have the string CONFIG_VALUE explicitly in the file CONFIG_FILE, and the default value from the Dockerfile gets correctly substituted. However, when I run the container with the added -e CONFIG_VALUE=100, the substitution is not carried out; the default value set in the Dockerfile is kept.
When I do
docker exec -i -t container_name bash
and echo $CONFIG_VALUE inside the container, the environment variable does contain the desired value 100.
Instructions in the Dockerfile are evaluated line-by-line when you do docker build and are not re-evaluated at run-time.
You can still do this however by using an entrypoint script, which will be evaluated at run-time after any environment variables have been set.
For example, you can define the following entrypoint.sh script:
#!/bin/bash
sed -i 's/CONFIG_VALUE/'"$CONFIG_VALUE"'/g' CONFIG_FILE
exec "$#"
The exec "$#" will execute any CMD or command that is set.
Add it to the Dockerfile, e.g.:
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Note that if you have an existing entrypoint, you will need to merge it with this one - you can only have one entrypoint.
Now you should find that the environment variable is respected i.e:
docker run -e CONFIG_VALUE=100 container_name cat CONFIG_FILE
Should work as expected.
That shouldn't be possible in a Dockerfile: those instructions are static, used for making an image.
If you need run-time instructions when launching a container, you should put them in a script called by the CMD directive.
In other words, the sed would take place in a script that CMD calls. When doing the docker run, that script would have access to the environment variable set just before said docker run.
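A minimal sketch of that approach (start.sh is a made-up name; command_to_start_app comes from the question):
#!/bin/bash
# start.sh: substitute at run-time, then hand off to the app
sed -i 's/CONFIG_VALUE/'"$CONFIG_VALUE"'/g' CONFIG_FILE
exec command_to_start_app
and in the Dockerfile:
COPY start.sh /
RUN chmod +x /start.sh
CMD ["/start.sh"]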