Using ENV variables to launch a container in VS Code

My docker-compose.yml looks like this:
services:
  my-service:
    image: ${PYTHON_IMAGE}:${PY_VERSION}
These environment variables are declared in WSL. However, when I use VS Code Remote - Containers to launch development in the container, it fails. I tried using WSLENV to share the variables, and tested within a session that they are properly shared between WSL and Windows, but the variables are still not set when the container is created/built by VS Code.
Appreciate your response
Thanks

Right now I have it working using WSLENV.
I added the following lines to my .bashrc:
export WSLENV=$WSLENV:VAR_NODE_IMAGE_NAME/u:VAR_NODE_VERSION/u
cmd.exe /C set | grep '^VAR' | tr '=' ' ' | awk '{printf "SETX %s %s\n", $1, $2}' | cmd.exe >> /dev/null
This adds the environment variables to Windows whenever I launch my terminal.
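To sanity-check that the variables actually made it to the Windows side, you can query them back from WSL (a quick test using the VAR_ prefix from above; note that SETX writes to the registry, so the values only appear in processes started afterwards):
# open a *new* terminal first, then list the Windows variables from WSL
cmd.exe /C set | grep '^VAR'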

Related

write "message of the day" by images used by vscode remote development

I am using the Remote Development extension in VS Code with a Docker image, and when I start it in the console I want to see the message of the day ("motd").
The Dockerfile in .devcontainer has this at the end:
COPY motd /etc/
... # change the default user and WORKDIR
CMD cat /etc/motd && /bin/bash
If I run this image manually I see the message, but when VS Code uses it I don't see it in the console.
The best solution I have found so far is:
RUN echo "cat /etc/motd" >> $HOME/.bashrc
CMD /bin/bash
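This works because VS Code typically overrides the container's startup command to keep the container alive, so the image's CMD never gets to print anything; the terminals it opens are interactive bash shells, which do source ~/.bashrc. A minimal sketch of the Dockerfile tail under that assumption (motd is a file next to the Dockerfile, and $HOME belongs to the already-configured user):
COPY motd /etc/
# print the motd from every interactive shell instead of from CMD
RUN echo "cat /etc/motd" >> $HOME/.bashrc
CMD ["/bin/bash"]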

How to attach to a remote container using the VS Code command line?

I want to use a command-line tool to attach to a remote container. I tried the command below, but it doesn't work. Does anyone know the correct one?
code --folder-uri vscode-remote://dev-container+4aaf623ee98a52fa311226a2c619be19addfa221c090b9a3bc37e7cba03a7fce/easycv
That string of characters after dev-container+ is the ASCII path to your dev container folder, encoded in hexadecimal.
To open a folder in a container you can use the following style command:
code --folder-uri=vscode-remote://dev-container%2B{path-in-hex}/{path-inside-container}
For example, to open the folder /workspaces/test in the development container located at /Users/jkells/projects/vscode-devcontainer, I use the following CLI command:
code --folder-uri=vscode-remote://dev-container%2B2f55736572732f6a6b656c6c732f70726f6a656374732f7673636f64652d646576636f6e7461696e6572/workspaces/test
To convert the string /Users/jkells/projects/vscode-devcontainer into the hexadecimal 2f55736572732f6a6b656c6c732f70726f6a656374732f7673636f64652d646576636f6e7461696e6572, you can use the following command:
printf /Users/jkells/projects/vscode-devcontainer | od -A n -t x1 | tr -d '[\n\t ]'
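To go the other way and decode an existing hex string back into a path (handy for checking where a URI points), xxd can reverse it, assuming xxd is available (it ships with vim):
echo 2f55736572732f6a6b656c6c732f70726f6a656374732f7673636f64652d646576636f6e7461696e6572 | xxd -r -p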
To automate this, I created this cross-platform-ish solution:
https://github.com/geircode/vscode-attach-to-container-script
This solution creates the hex based on the name of the running container.
Windows CMD script:
docker run --rm geircode/string_to_hex bash string_to_hex.bash "<container_name>" > vscode_remote_hex.txt
set /p vscode_remote_hex=<vscode_remote_hex.txt
code --folder-uri=vscode-remote://attached-container+%vscode_remote_hex%/app
This shell script does the job:
#!/usr/bin/env bash
case $# in
  1) ;;
  *) echo "Usage: code-remote-container <directory>"; exit 1 ;;
esac
# resolve the argument to an absolute path
dir=$(cd "$1" && pwd)
# hex-encode the path for the dev-container URI
hex=$(printf '%s' "$dir" | od -A n -t x1 | tr -d ' \n\t')
base=$(basename "$dir")
code --folder-uri="vscode-remote://dev-container%2B${hex}/workspaces/${base}"
I have saved it under the name code-remote-container, which can then be used, e.g., as:
code-remote-container .
which would open the current directory in the remote container.
Obviously, this expects that the remote container has already been set up for VS Code.
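One setup step the answer leaves implicit (the target directory here is an assumption; any directory on your PATH works):
chmod +x code-remote-container
mv code-remote-container ~/.local/bin/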

Run SQL script after start of SQL Server on docker

I have a Dockerfile with the code below:
FROM microsoft/mssql-server-windows-express
COPY ./create-db.sql .
ENV ACCEPT_EULA=Y
ENV sa_password=##$wo0RD!
CMD sqlcmd -i create-db.sql
I can build the image, but when I run a container from it I don't see the created database on the SQL Server, because the script is executed before SQL Server has started.
Can I make the script execute after the SQL Server service has started?
RUN gets used to build the layers in an image. CMD is the command that is run when you launch an instance (a "container") of the built image.
Also, if your script depends on those environment variables and you are on an older version of Docker, it might fail because the variables are not defined the way you want them defined: in older versions of Docker, the Dockerfile ENV instruction uses a space instead of "=".
Your Dockerfile should probably be:
FROM microsoft/mssql-server-windows-express
COPY ./create-db.sql .
ENV ACCEPT_EULA Y
ENV SA_PASSWORD ##$wo0RD!
RUN sqlcmd -i create-db.sql
This will create an image containing the database with your password inside it.
(If the SQL file somehow uses the environment variables, this wouldn't make sense, as you might as well update the SQL file before you copy it over.) If you want to be able to override the password between the docker build and docker run steps, by using docker run --env sa_password=##$wo0RD! ..., you will need to change the last line to:
CMD sqlcmd -i create-db.sql && .\start -sa_password $env:SA_PASSWORD \
-ACCEPT_EULA $env:ACCEPT_EULA -attach_dbs \"$env:attach_dbs\" -Verbose
Which is a modified version of the CMD line that is inherited from the upstream image.
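Either way, the image is built and run as usual (a sketch; db-image is a hypothetical tag, and the single quotes keep the shell from mangling the password):
docker build -t db-image .
docker run -d -e sa_password='##$wo0RD!' db-image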
You can follow this link https://github.com/microsoft/mssql-docker/issues/11.
Credits to Robin Moffatt.
Change your docker-compose.yml file to contain the following (note the Linux image: the bash-based command below requires it):
mssql:
  image: microsoft/mssql-server-linux
  environment:
    # $$ escapes a literal $ in docker-compose files
    - SA_PASSWORD=##$$wo0RD!
    - ACCEPT_EULA=Y
  volumes:
    # directory with sql scripts on the host mapped to /scripts/
    # - ./data/mssql:/scripts/
    - ./create-db.sql:/scripts/create-db.sql
  command:
    - /bin/bash
    - -c
    - |
      # Launch MSSQL and send to background
      /opt/mssql/bin/sqlservr &
      # Wait 30 seconds for it to be available
      # (lame, I know, but there's no nc available to start prodding network ports)
      sleep 30
      # Run every script in /scripts
      # TODO set a flag so that this is only done once on creation,
      # and not every time the container runs
      for foo in /scripts/*.sql
      do /opt/mssql-tools/bin/sqlcmd -U sa -P $$SA_PASSWORD -l 30 -e -i $$foo
      done
      # So that the container doesn't shut down, sleep this thread
      sleep infinity
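Instead of the fixed sleep 30, a more robust variant polls the server until it accepts connections (a sketch using the same sqlcmd path as above; inside a compose file the dollar signs must be doubled, as in the snippet above):
# retry for up to ~60 seconds until SQL Server answers a trivial query
for i in $(seq 1 30); do
  /opt/mssql-tools/bin/sqlcmd -U sa -P "$SA_PASSWORD" -Q "SELECT 1" >/dev/null 2>&1 && break
  sleep 2
done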

How can the terminal in Jupyter automatically run bash instead of sh

I love the terminal feature, and it works very well for our use case: I would like students to do some work directly from a terminal so they experience that environment. However, the shell that launches automatically is sh, which does not pick up all of my bash defaults. I can type "bash" and everything works perfectly. How can I make bash the default?
Jupyter uses the environment variable $SHELL to decide which shell to launch. If you are running jupyter using init then this will be set to dash on Ubuntu systems. My solution is to export SHELL=/bin/bash in the script that launches jupyter.
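For example, a minimal launcher script under that approach (the port is a placeholder):
#!/bin/sh
# force bash for the terminals Jupyter spawns
export SHELL=/bin/bash
exec jupyter notebook --port=8888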
I have tried the ultimate way of switching the system-wide SHELL environment variable: adding the following line to the file /etc/environment:
SHELL=/bin/bash
This works in an Ubuntu environment: after a reboot, the SHELL variable points to /bin/bash instead of /bin/sh in the Terminal.
However, setting up a CRON job to launch jupyter notebook at system startup triggered the same issue in the notebook's Terminal.
It turns out that I need to include the variable assignment and a statement sourcing a Bash init file such as ~/.bashrc in the CRON job, added via crontab -e:
@reboot source /home/USERNAME/.bashrc && \
export SHELL=/bin/bash && \
/SOMEWHERE/jupyter notebook --port=8888
This way, I can log in to the Ubuntu server from a remote web browser (http://server-ip-address:8888/) and get a Jupyter Terminal that defaults to Bash, the same as in the local environment.
You can add this to your jupyter_notebook_config.py
c.NotebookApp.terminado_settings = {'shell_command': ['/bin/bash']}
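If you don't have that config file yet, Jupyter can generate one for you (this creates ~/.jupyter/jupyter_notebook_config.py):
jupyter notebook --generate-config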
With Jupyter running on Ubuntu 15.10, the Jupyter shell defaults to /bin/sh, which is a symlink to /bin/dash.
rm /bin/sh
ln -s /bin/bash /bin/sh
That fix got the Jupyter terminal booting into bash for me.

Passing variable from container start to file

I have the following lines in a Dockerfile. I want to set a value in a config file to a default before the application starts up at the end, and optionally allow overriding it using the -e option when starting the container.
I am trying to do this using Docker's ENV instruction:
ENV CONFIG_VALUE default_value
RUN sed -i 's/CONFIG_VALUE/'"$CONFIG_VALUE"'/g' CONFIG_FILE
CMD command_to_start_app
I have the string CONFIG_VALUE explicitly in the file CONFIG_FILE, and the default value from the Dockerfile gets correctly substituted. However, when I run the container with -e CONFIG_VALUE=100 added, the substitution is not carried out; the default value set in the Dockerfile is kept.
When I do
docker exec -i -t container_name bash
and echo $CONFIG_VALUE inside the container the environment variable does contain the desired value 100.
Instructions in the Dockerfile are evaluated line-by-line when you do docker build and are not re-evaluated at run-time.
You can still do this however by using an entrypoint script, which will be evaluated at run-time after any environment variables have been set.
For example, you can define the following entrypoint.sh script:
#!/bin/bash
# runs at container start-up, after any -e variables have been set
sed -i 's/CONFIG_VALUE/'"$CONFIG_VALUE"'/g' CONFIG_FILE
exec "$@"
The exec "$@" will execute any CMD or command that is set.
Add it to the Dockerfile e.g:
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Note that if you have an existing entrypoint, you will need to merge it with this one - you can only have one entrypoint.
Now you should find that the environment variable is respected, i.e.
docker run -e CONFIG_VALUE=100 image_name cat CONFIG_FILE
should work as expected.
That isn't possible in the Dockerfile itself: its instructions are static and are evaluated when the image is built.
If you need runtime instructions when launching a container, you should put them in a script called by the CMD directive.
In other words, the sed would take place in a script that CMD calls. At docker run time, that script has access to the environment variables set just before said docker run.
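A minimal sketch of that approach (start.sh and command_to_start_app are placeholders, not names from the question):
#!/bin/bash
# start.sh - run by CMD, so $CONFIG_VALUE reflects any -e override
sed -i 's/CONFIG_VALUE/'"$CONFIG_VALUE"'/g' CONFIG_FILE
exec command_to_start_app
with the Dockerfile ending in:
COPY start.sh /
RUN chmod +x /start.sh
CMD ["/start.sh"]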