I'm trying to use the Remote - Containers extension for Visual Studio Code, but when I "Open Folder in Container", I get this error:
Run: docker exec 0d0c1eac6f38b81566757786f853d6f6a4f3a836c15ca7ed3a3aaf29b9faab14 /bin/sh -c set -o noclobber ; mkdir -p '/home/appuser/.vscode-server/data/Machine' && { > '/home/appuser/.vscode-server/data/Machine/.writeMachineSettingsMarker' ; } 2> /dev/null
mkdir: cannot create directory ‘/home/appuser’: Permission denied
My Dockerfile uses:
FROM python:3.7-slim
...
RUN useradd -ms /bin/bash appuser
USER appuser
I've also tried:
RUN adduser -D appuser
RUN groupadd -g 999 appuser && \
useradd -r -u 999 -g appuser appuser
USER appuser
Both of these work if I build them directly. How do I get this to work?
What works for me is to create a non-root user in my Dockerfile and then configure the VS Code dev container to use that user.
Step 1. Create the non-root user in your Docker image
ARG USER_ID=1000
ARG GROUP_ID=1000
RUN groupadd --system --gid ${GROUP_ID} MY_GROUP && \
useradd --system --uid ${USER_ID} --gid MY_GROUP --home /home/MY_USER --shell /sbin/nologin MY_USER
Step 2. Configure the .devcontainer/devcontainer.json file in the root of your project (it should be created when you first start remote development)
"remoteUser": "MY_USER" <-- this is the setting you want to update
If you use docker compose, it's possible to configure VS Code to run the entire container as the non-root user by configuring .devcontainer/docker-compose.yml, but I've been happy with the process described above so I haven't experimented further.
You might get some additional insight by reading through the VS Code docs on this topic.
Go into your WSL2 distribution and check your local (non-root) uid using the id command.
In my case it is UID=1000(ubuntu).
Change your Dockerfile to something like this:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /home/ubuntu
COPY . /home/ubuntu
# Creates a non-root user with uid 1000 and grants it permission to access /home/ubuntu
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN useradd -u 1000 ubuntu && chown -R ubuntu /home/ubuntu
USER ubuntu
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "app.py"]
I am trying to get the Scaleway CLI installed as part of a Docker image I'm building to run Azure Pipelines.
My Dockerfile looks like this:
FROM ubuntu:18.04
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        ca-certificates \
        curl \
        jq \
        git \
        iputils-ping \
        libcurl4 \
        libicu60 \
        libunwind8 \
        netcat \
        docker.io \
        s3cmd
# Install Scaleway CLI
RUN curl -o /usr/local/bin/scw -L "https://github.com/scaleway/scaleway-cli/releases/download/v2.1.0/scw-2-1-0-linux-x86_64"
RUN chmod +x /usr/local/bin/scw
# Add config for Scaleway CLI
RUN mkdir -p ./config
RUN mkdir -p ./config/scw
COPY ./config/config.yaml $HOME/.config/scw/config.yaml
RUN scw init
# Add private key for SSH connections
COPY ./config/id_rsa $HOME/.ssh/id_rsa
# Config s3cmd
COPY ./config/.s3cfg $HOME/.s3cfg
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh"]
The key section being:
# Install Scaleway CLI
RUN curl -o /usr/local/bin/scw -L "https://github.com/scaleway/scaleway-cli/releases/download/v2.1.0/scw-2-1-0-linux-x86_64"
RUN chmod +x /usr/local/bin/scw
# Add config for Scaleway CLI
RUN mkdir -p ./config
RUN mkdir -p ./config/scw
COPY ./config/config.yaml $HOME/.config/scw/config.yaml
RUN scw init
The config.yaml file referenced above looks like the following (minus the real values of course):
access_key: <key>
secret_key: <secret>
default_organization_id: <orgId>
default_project_id: <projectId>
default_region: nl-ams
default_zone: nl-ams-1
However, when it executes RUN scw init, the output is Invalid email or secret-key: ''
I have tried without running scw init at all, but then calls to scw fail, saying
Access key is required
Details: Access_key can be initialised using the command "scw init".
After initialisation, there are three ways to provide access_key:
with the Scaleway config file, in the access_key key: /root/.config/scw/config.yaml;
with the SCW_ACCESS_KEY environement variable;
Note that the last method has the highest priority.
More info:
https://github.com/scaleway/scaleway-sdk-go/tree/master/scw#scaleway-config
Hint: You can get your credentials here:
https://console.scaleway.com/account/credentials
Which admittedly is one of the better error messages I've seen, but nonetheless has not helped me. I am going to try the Environment Variable approach, which I suspect may do the trick, but I'd still like to know what I'm doing wrong with this config.yaml file.
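For what it's worth, the environment-variable approach could be sketched like this, using the variable names documented in the scaleway-sdk-go README linked above, if I'm reading it right (note that passing secrets as build args bakes them into the image layers, so treat this as a sketch rather than a recommendation):
# Sketch: provide Scaleway credentials via environment variables instead of config.yaml
ARG SCW_ACCESS_KEY
ARG SCW_SECRET_KEY
ARG SCW_DEFAULT_ORGANIZATION_ID
ARG SCW_DEFAULT_PROJECT_ID
ENV SCW_ACCESS_KEY=${SCW_ACCESS_KEY} \
    SCW_SECRET_KEY=${SCW_SECRET_KEY} \
    SCW_DEFAULT_ORGANIZATION_ID=${SCW_DEFAULT_ORGANIZATION_ID} \
    SCW_DEFAULT_PROJECT_ID=${SCW_DEFAULT_PROJECT_ID}
# built with, e.g.:
# docker build --build-arg SCW_ACCESS_KEY=... --build-arg SCW_SECRET_KEY=... -t my-pipeline-image .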
Lastly... someone with more rep than me needs to create the tag "scaleway". Hard to reference the actual technology in question when the tag doesn't exist.
I am pulling the postgres:12.0-alpine Docker image to build my database. My intention is to replace the postgresql.conf file in the container to reflect the changes I want (changing the data directory, modifying backup options, etc.). I am trying with the following Dockerfile:
FROM postgres:12.0-alpine
# create the custom user
RUN addgroup -S custom && adduser -S custom_admin -G custom
# create the appropriate directories
ENV APP_HOME=/home/data
ENV APP_SETTINGS=/var/lib/postgresql/data
WORKDIR $APP_HOME
# copy entrypoint.sh
COPY ./entrypoint.sh $APP_HOME/entrypoint.sh
# copy postgresql.conf
COPY ./postgresql.conf $APP_HOME/postgresql.conf
RUN chmod +x /home/data/entrypoint.sh
# chown all the files to the app user
RUN chown -R custom_admin:custom $APP_HOME
RUN chown -R custom_admin:custom $APP_SETTINGS
# change to the app user
USER custom_admin
# run entrypoint.sh
ENTRYPOINT ["/home/data/entrypoint.sh"]
CMD ["custom_admin"]
My entrypoint.sh looks like:
#!/bin/sh
rm /var/lib/postgresql/data/postgresql.conf
cp ./postgresql.conf /var/lib/postgresql/data/postgresql.conf
echo "replaced .conf file"
exec "$#"
But I get an exec error saying 'custom_admin: not found' on the exec "$#" line. What am I missing here?
To provide a custom configuration, please use the command below:
docker run -d --name some-postgres -v "$PWD/my-postgres.conf":/etc/postgresql/postgresql.conf postgres -c 'config_file=/etc/postgresql/postgresql.conf'
Here my-postgres.conf is your custom configuration file.
Refer to the Docker Hub page for more information about the postgres image.
Better to use the answer suggested by @Thilak; you do not need a custom image just to use a custom config.
Now, about the problem with CMD ["custom_admin"] in the Dockerfile: whatever command is passed to CMD is executed at the end of the entrypoint, and it normally refers to the main, long-running process of the container. custom_admin looks like a user, not a command, so you need to replace it with the process that should run as the container's main process.
Change CMD to
CMD ["postgres"]
I would also suggest building on the official entrypoint, which does many tasks out of the box; your own entrypoint just starts the container, with no DB initialization, etc.
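Putting both suggestions together, a minimal sketch of the Dockerfile might look like this (keeping the custom config outside the data directory so the official entrypoint can still initialise it on first start; the /etc/postgresql path is just a conventional choice, not from the original post):
FROM postgres:12.0-alpine
# Keep the custom config outside PGDATA and point postgres at it
COPY ./postgresql.conf /etc/postgresql/postgresql.conf
# The official docker-entrypoint.sh remains the ENTRYPOINT; postgres is the main process
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]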
After I had a previous problem Dockerising my MySQL Kitura setup here: Docker Build Kitura Swift Container - Shim.h mysql.h file not found
I am now running into a new problem I cannot solve, following the guide from https://www.kitura.io/docs/deploying/docker.html .
After I followed all the steps and also applied the earlier fix for the MySQL issue, I was able to run the following command:
docker run -p 8080:8080 -it myapp-run
This, however, leads to the following issue:
error while loading shared libraries: libmysqlclient.so.18: cannot open shared object file: No such file or directory
I assume something again tries to open libmysqlclient from some wrong environment directories?
But how can I fix this issue when building the Docker image? Is there a way, and ideally a clean one?
Thanks a lot again for the help.
I was able to update and enhance my Dockerfile. It now runs smoothly and can also be used for CI and CD tasks.
FROM ibmcom/swift-ubuntu-runtime:latest
##FROM ibmcom/swift-ubuntu-runtime:5.0.1
LABEL maintainer="IBM Swift Engineering at IBM Cloud"
LABEL Description="Template Dockerfile that extends the ibmcom/swift-ubuntu-runtime image."
# We can replace this port with what the user wants
EXPOSE 8080
# Default user if not provided
ARG bx_dev_user=root
ARG bx_dev_userid=1000
# Install system level packages
RUN apt-get update && apt-get dist-upgrade -y
RUN apt-get update && apt-get install -y sudo libmysqlclient-dev
# Add utils files
ADD https://raw.githubusercontent.com/IBM-Swift/swift-ubuntu-docker/master/utils/run-utils.sh /swift-utils/run-utils.sh
ADD https://raw.githubusercontent.com/IBM-Swift/swift-ubuntu-docker/master/utils/common-utils.sh /swift-utils/common-utils.sh
RUN chmod -R 555 /swift-utils
# Create user if not root
RUN if [ $bx_dev_user != "root" ]; then useradd -ms /bin/bash -u $bx_dev_userid $bx_dev_user; fi
# Bundle application source & binaries
COPY ./.build /swift-project/.build
# Command to start Swift application
CMD [ "sh", "-c", "cd /swift-project && .build/release/Beautylivery_Server_New" ]
Here is my Dockerfile:
FROM ubuntu:14.04
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" > /etc/apt/sources.list.d/pgdg.list
RUN apt-get update && apt-get -y -q install python-software-properties software-properties-common \
&& apt-get -y -q install postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3
USER postgres
RUN /etc/init.d/postgresql start \
&& psql --command "CREATE USER pguser WITH SUPERUSER PASSWORD 'pguser';" \
&& createdb -O pguser pgdb
USER root
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
EXPOSE 5432
RUN mkdir -p /var/run/postgresql && chown -R postgres /var/run/postgresql
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
USER postgres
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
Here is what I did...
I build the docker image:
docker build --rm=true -t my_image/postgresql:9.3 .
Then I create a new directory called data in my current directory and run the following command:
docker run -i -t -v="data:/data" -p 5432:5432 my_image/postgresql:9.3
I open another terminal and enter the postgres shell by running:
psql -h my_docker_ip -p 5432 -U pguser -W pgdb
and I create a table:
pgdb=# create table test (test_id bigserial primary key);
I verify the table exists using \dt and exit the postgres shell.
I terminate the docker process and rerun the following:
docker run -i -t -v="data:/data" -p 5432:5432 my_image/postgresql:9.3
I enter the postgres shell again and run \dt.
I notice that:
there are no tables.
there are no files in the data directory.
I must be doing something wrong since I am assuming that the table I created will persist. Can someone point out my mistake?
There is something that confused me and that was not very clear to me in the official documentation.
To my knowledge, persistent volumes can be created in three ways.
At container invocation time, including a full path ( -v ~/database:/data ): makes an external folder from the host available inside the Docker container. Both the host and the container can modify it.
At container invocation time, using a volume name ( -v datamysql:/data ): makes a persistent volume available inside the container. It is created if it does not exist. You can list such volumes by name with docker volume ls. Internally, the data is stored in a place such as /var/lib/docker/volumes/ae4445f7c9317a22fe84726fb894c47754f38a7fd150c00fd877024889968750/_data.
At container build time ( VOLUME ["/database/data"] in the Dockerfile). Every invocation of docker run will create a new volume that persists even if you delete the container. This can be confusing because subsequent invocations will result in different volumes being created that will not be reused. The three cases are shown side by side in the sketch after this list.
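Side by side, the three cases might look like this (the image name and mount paths are placeholders):
docker run -v ~/database:/data some-image   # 1. host directory (bind mount)
docker run -v datamysql:/data some-image    # 2. named volume
docker run some-image                       # 3. anonymous volume, one per VOLUME line in the Dockerfile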
You can list both named (second case) and unnamed (third case) volumes with
$ docker volume ls
DRIVER VOLUME NAME
local 064593b3e65977097d4d0c8402a6c633f1af69be2937bf118678ab8f97ee9a7e
local 4753ad0437d13e54c76d9c34a30a1843396a1866a0cf9237d500fdcca0d78c5f
local 8d7a35354f666b2e8a26866a35bbae36bb9601701d4c6b505ab8ce6629f69415
local db48eefe8f189b36107ca9c4eebb792690590ab0ba055e7e4e2c9adfd1765b7e
local datamysql
You can see the exact location of a container's volume by using docker inspect mycontainer
{
"Type": "volume",
"Name": "8d7a35354f666b2e8a26866a35bbae36bb9601701d4c6b505ab8ce6629f69415",
"Source": "/media/USBdrive/docker/volumes/8d7a35354f666b2e8a26866a35bbae36bb9601701d4c6b505ab8ce6629f69415/_data",
"Destination": "/var/lib/mysql",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
},
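If you only want the mounts section of that output, a format filter helps (the container name is a placeholder):
docker inspect -f '{{ json .Mounts }}' mycontainer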
It might be handy to remove unused volumes (especially for the third case).
$ docker volume prune
WARNING! This will remove all volumes not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Volumes:
4753ad0437d13e54c76d9c34a30a1843396a1866a0cf9237d500fdcca0d78c5f
Total reclaimed space: 205MB
Because you used the VOLUME directive in your Dockerfile, you are in the third case. Inspect your container to look for the file, and specify the volume from the command line if you want repeated sessions to persist data.
Based on your comment:
the data persisted, but I still can't find the persist data in my host ./data directory
and running this command:
docker run -i -t -v="data:/data" -p 5432:5432 my_image/postgresql:9.3
You appear to be confusing a named volume and a host volume. The named volume is used when you give the volume a name without a path, like data. The named volume stores the data using the docker driver (typically local) under a given name that you can reuse. It has the advantage of being listed in docker volume ls, and being initialized to the content of the image at the mounted location.
If you include a full path, like /home/username/data, that mounts the directory from the docker host instead of using the named volume. The biggest disadvantage is that you don't get the directory initialized with the contents from the image, and you will likely encounter permission issues where the uid of the container process won't match the uid you use on your host.
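Applied to the command from the question, the two options would look roughly like this (the mount path comes from the VOLUME line in the Dockerfile above; the volume name pgdata is a placeholder):
# named volume, mounted where the image actually stores its data
docker run -i -t -v pgdata:/var/lib/postgresql -p 5432:5432 my_image/postgresql:9.3
# host directory (bind mount); it starts empty, so the database cluster would have to be initialised there first
docker run -i -t -v "$PWD/data:/var/lib/postgresql" -p 5432:5432 my_image/postgresql:9.3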
For more details, see https://docs.docker.com/engine/tutorials/dockervolumes/
I've really been struggling over the past few days trying to set up some Docker containers and shell scripts to create an environment for my application to run in.
The long and short of it is that I have a web server which requires a database to operate. My aim is to have end users unzip the content onto their Docker machine, run a build script (which just builds the relevant Docker images), then run a OneTime.sh script (which creates the necessary volumes and databases); during this script, they are prompted for the user name and password they would like for the database superuser.
The problem I'm having is getting those values to the docker image. Here is my script:
# Create the volumes for the data backend database.
docker volume create --name psql-data-etc
docker volume create --name psql-data-log
docker volume create --name psql-data-lib
# Create data store database
echo -e "\n${TITLE}[Data Store Database]${NC}"
docker run -v psql-data-etc:/etc/postgresql -v psql-data-log:/var/log/postgresql -v psql-data-lib:/var/lib/postgresql -p 9001:5432 -P --name psql-data-onetime postgres-setup
# Close containers
docker stop psql-data-onetime
docker rm psql-data-onetime
docker stop psql-transactions-onetime
docker rm psql-transactions-onetime
And here is the Dockerfile:
FROM ubuntu
#Required environment variables: USERNAME, PASSWORD, DBNAME
# Add the PostgreSQL PGP key to verify their Debian packages.
# It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
# Add PostgreSQL's repository. It contains the most recent stable release
# of PostgreSQL, ``9.3``.
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" > /etc/apt/sources.list.d/pgdg.list
# Install ``python-software-properties``, ``software-properties-common`` and PostgreSQL 9.3
# There are some warnings (in red) that show up during the build. You can hide
# them by prefixing each apt-get statement with DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y python-software-properties software-properties-common postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3
# Note: The official Debian and Ubuntu images automatically ``apt-get clean``
# after each ``apt-get``
# Run the rest of the commands as the ``postgres`` user created by the ``postgres-9.3`` package when it was ``apt-get installed``
USER postgres
# Complete configuration
USER root
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
# Expose the PostgreSQL port
EXPOSE 5432
# Add VOLUMEs to allow backup of config, logs and databases
RUN mkdir -p /var/run/postgresql && chown -R postgres /var/run/postgresql
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
# Run setup script
ADD Setup.sh /
CMD ["sh", "Setup.sh"]
The script 'Setup.sh' is the following:
echo -n " User name: "
read user
echo -n " Password: "
read password
echo -n " Database Name: "
read dbname
/etc/init.d/postgresql start
/usr/lib/postgresql/9.3/bin/psql --command "CREATE USER $user WITH SUPERUSER PASSWORD '$password';"
/usr/lib/postgresql/9.3/bin/createdb -O $user $dbname
exit
Why doesn't this work? (I don't get prompted to enter the text, and it throws an error that the parameters are bad.) What is the proper way to do something like this? It feels like it's probably a pretty common problem to solve, but I cannot for the life of me find any non-convoluted examples of this behaviour.
The main purpose of this is to make life easier for the end user, so if I could just prompt them for the user name, password, and dbname (plus call the correct scripts), that would be ideal.
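One common pattern (not from the original post, just a sketch) is to do the prompting in OneTime.sh on the host, where a real TTY is available, and pass the answers into the container as environment variables that Setup.sh reads instead of calling read. The DB_USER/DB_PASS/DB_NAME names are made up for this sketch, and it assumes OneTime.sh runs under bash:
# In OneTime.sh, on the host:
read -p " User name: " user
read -s -p " Password: " password; echo
read -p " Database Name: " dbname
docker run -e DB_USER="$user" -e DB_PASS="$password" -e DB_NAME="$dbname" \
    -v psql-data-etc:/etc/postgresql -v psql-data-log:/var/log/postgresql \
    -v psql-data-lib:/var/lib/postgresql -p 9001:5432 --name psql-data-onetime postgres-setup
# In Setup.sh, inside the container (mirroring the commands from the original script):
/etc/init.d/postgresql start
/usr/lib/postgresql/9.3/bin/psql --command "CREATE USER $DB_USER WITH SUPERUSER PASSWORD '$DB_PASS';"
/usr/lib/postgresql/9.3/bin/createdb -O "$DB_USER" "$DB_NAME"
exit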
EDIT:
After running the log file looks like this:
User name:
Password:
Database Name:
Usage: /etc/init.d/postgresql {start|stop|restart|reload|force-reload|status} [version ..]
EDIT 2:
After updating to CMD ["sh", "-x", "Setup.sh"]
I get:
echo -n User name:
+read user
:bad variable nameuser
echo -n Password:
+read password
:bad variable namepassword
echo -n Database Name:
+read dbname
:bad variable dbname