Cannot specify current directory with docker run -v - powershell

I am trying to execute the following docker command in PowerShell but I cannot get it to recognize the $(PWD) for the current directory. Help please.
docker run -it -v $(PWD):/app --workdir /app samgentile\aspnetcore
I get:
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: invalid reference format.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.

You should use "/" instead of "\" in the image name:
docker run -it -v $PWD:/app --workdir /app samgentile/aspnetcore

Mihai is correct about the slash in the image name. Note also the parentheses: $(pwd) means "run the command pwd and substitute its output", whereas without the parentheses PWD is read as a variable (which PowerShell happens to define as the current directory). A correct invocation would take the form:
docker run -it -v $(pwd):/app --workdir /app samgentile/aspnetcore
Or:
docker run -it -v $PWD:/app --workdir /app samgentile/aspnetcore
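If the current path may contain spaces, quote the mount argument; this variant (the quoting is the only change) works in both PowerShell and POSIX shells:
docker run -it -v "${PWD}:/app" --workdir /app samgentile/aspnetcore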

Related

How to specify default extensions while running gitpods' openvscode-server

Using https://github.com/gitpod-io/openvscode-server, is there a way to set default VSCode extensions using their Docker command? (docker run -it --init -p 3000:3000 -v "$(pwd):/home/workspace:cached" gitpod/openvscode-server)
You can pass extra arguments to your docker command to install extensions, for example:
docker run -it --init -p 3000:3000 gitpod/openvscode-server --install-extension gitpod.gitpod-theme --install-extension vscodevim.vim --start-server
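These flags combine with the workspace mount from your original command, e.g. (a sketch reusing the question's paths):
docker run -it --init -p 3000:3000 -v "$(pwd):/home/workspace:cached" gitpod/openvscode-server --install-extension gitpod.gitpod-theme --start-server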
Feel free to drop by our community at https://gitpod.io/chat !

What is a -d2 flag in docker run command

I am working with a codebase that has a docker run command as follows (real name and password removed):
docker run -it --rm --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=password postgres:11.6 -d2
I know that -d flag is to --detach the container, but what is -d2? I can't figure out the purpose of this flag at the end of the command. I'm also confused why it's at the end of the command and not before the IMAGE name like the other flags.
The docker command line is order-sensitive: once docker sees an option or flag it cannot parse, it treats that token as the image name, and everything after the image name is the command to run instead of the default command. In other words:
docker run ${options_to_run} ${image_name} ${command_override}
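You can see this parsing rule with any image; for example (illustrative, using the stock alpine image):
docker run --rm -e GREETING=hello alpine env
# -e GREETING=hello is parsed as an option to docker run;
# env, coming after the image name, overrides the default command
# and prints GREETING=hello in its output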
In the postgres image, the entrypoint is docker-entrypoint.sh and the default command is postgres. That means docker will run this container by default as docker-entrypoint.sh postgres (it concatenates the entrypoint and command together into one command with args to run). With the -d2 command override, that becomes docker-entrypoint.sh -d2 and the entrypoint script may interpret that as an option to change how it will run. The entrypoint has special handling for flags:
if [ "${1:0:1}" = '-' ]; then
set -- postgres "$#"
fi
....
exec "$#"
This means the entrypoint arguments are rewritten from -d2 to postgres -d2, and the exec replaces the shell running as pid 1 with that command line: postgres running with the -d2 argument.
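To verify what pid 1 became, you can inspect the running container from another terminal (a sketch, assuming the container was started with --name postgres as in the question):
docker top postgres
# the CMD column for the first process shows: postgres -d2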
I found the answer. -d2 is a postgres CLI option for specifying the debugging level. We are executing the postgres container with that postgres CLI option.
From postgres --help:
-d 1-5 debugging level

Makefile: Terminates after running commands "go test ./..."

I encountered a problem running "go test" from a makefile. The idea behind all this is to start a docker container, run all tests against it and then stop & remove the container.
The container gets started and the tests run, but the last two commands (docker stop & rm) aren't executed.
Make returns this message:
make: *** [test] Error 1
Is it "go test" which terminates the makefile execution?
.PHONY: up down test
up:
	docker-compose up
down:
	docker-compose down
test:
	docker run -d \
		--name dev \
		--env-file $${HOME}/go/src/test-api/testdata/dbConfigTest.env \
		-p 5432:5432 \
		-v $${HOME}/go/src/test-api/testdata/postgres:/var/lib/postgresql/data postgres
	# runs all tests including integration tests.
	go test ./... --tags=integration -failfast -v
	# stop and remove container
	docker stop `docker ps -aqf "name=dev"`
	docker rm `docker ps -aqf "name=dev"`
Yes: go test exits non-zero when any test fails, and make aborts the recipe at the first failing command, so the docker stop and docker rm lines never run. Assuming that you want 'make test' to still return the test status, consider the following change to the makefile:
test:
	docker run -d \
		--name dev \
		--env-file $${HOME}/go/src/test-api/testdata/dbConfigTest.env \
		-p 5432:5432 \
		-v $${HOME}/go/src/test-api/testdata/postgres:/var/lib/postgresql/data postgres
	# runs all tests including integration tests.
	go test ./... --tags=integration -failfast -v ; echo "$$?" > test.result
	# stop and remove container
	docker stop `docker ps -aqf "name=dev"`
	docker rm `docker ps -aqf "name=dev"`
	exit $$(cat test.result)
It uses the test.result file to capture the exit code from the test run.
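A variant that avoids the temporary file is to capture the exit code and do the cleanup inside a single shell invocation (a sketch; it assumes the container keeps the --name dev from the docker run above, so it can be stopped by name):
test:
	docker run -d \
		--name dev \
		--env-file $${HOME}/go/src/test-api/testdata/dbConfigTest.env \
		-p 5432:5432 \
		-v $${HOME}/go/src/test-api/testdata/postgres:/var/lib/postgresql/data postgres
	# run the tests, remember the exit code, clean up, then propagate the code
	go test ./... --tags=integration -failfast -v; rc=$$?; \
	docker stop dev; docker rm dev; \
	exit $$rc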

Replacing postgresql.conf in a docker container

I am pulling the postgres:12.0-alpine Docker image to build my database. My intention is to replace the postgresql.conf file in the container to reflect the changes I want (changing the data directory, modifying backup options, etc.). I am trying with the following Dockerfile:
FROM postgres:12.0-alpine
# create the custom user
RUN addgroup -S custom && adduser -S custom_admin -G custom
# create the appropriate directories
ENV APP_HOME=/home/data
ENV APP_SETTINGS=/var/lib/postgresql/data
WORKDIR $APP_HOME
# copy entrypoint.sh
COPY ./entrypoint.sh $APP_HOME/entrypoint.sh
# copy postgresql.conf
COPY ./postgresql.conf $APP_HOME/postgresql.conf
RUN chmod +x /home/data/entrypoint.sh
# chown all the files to the app user
RUN chown -R custom_admin:custom $APP_HOME
RUN chown -R custom_admin:custom $APP_SETTINGS
# change to the app user
USER custom_admin
# run entrypoint.sh
ENTRYPOINT ["/home/data/entrypoint.sh"]
CMD ["custom_admin"]
My entrypoint.sh looks like:
#!/bin/sh
rm /var/lib/postgresql/data/postgresql.conf
cp ./postgresql.conf /var/lib/postgresql/data/postgresql.conf
echo "replaced .conf file"
exec "$#"
But I get an exec error saying 'custom_admin: not found' on the exec "$@" line. What am I missing here?
In order to provide a custom configuration, use the command below:
docker run -d --name some-postgres -v "$PWD/my-postgres.conf":/etc/postgresql/postgresql.conf postgres -c 'config_file=/etc/postgresql/postgresql.conf'
Here my-postgres.conf is your custom configuration file.
Refer to the Docker Hub page for more information about the postgres image.
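To confirm the server actually picked up the custom file, you can query it once the container is up (assuming the container name some-postgres from the command above):
docker exec -it some-postgres psql -U postgres -c 'SHOW config_file;'
# should print /etc/postgresql/postgresql.conf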
Better to use the answer suggested by @Thilak; you do not need a custom image just to use a custom config.
Now to the problem with CMD ["custom_admin"] in the Dockerfile: whatever you pass to CMD is executed at the end of the entrypoint, and it normally refers to the main, long-running process of the container. custom_admin looks like a user, not a command; you need to replace it with the process that should run as the container's main process.
Change CMD to
CMD ["postgres"]
I would suggest building on the official entrypoint, which does many tasks out of the box; your own entrypoint just starts the container and performs no DB initialization, etc.
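A minimal sketch of that approach: keep the stock entrypoint (it handles initdb, permissions, and first-run setup) and only point postgres at your file, using the same -c mechanism as in the answer above (the /etc/postgresql path is an arbitrary choice, not required by the image):
FROM postgres:12.0-alpine
# reuse the official entrypoint untouched: it still performs DB initialization
COPY postgresql.conf /etc/postgresql/postgresql.conf
# override only the default command so the server reads the custom config
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]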

Running a script from a mongodb docker-container

I have a script, restore.sh, that restores the database:
mongorestore --port 27017 --db myapp `pwd`/db-dump/myapp
I want to run this in a short-lived container using the image mvertes/alpine-mongo.
To run a short-lived container, the --rm flag is used:
docker run --rm --name mongo -p 27017:27017 \
    -v /data/db:/data/db \
    mvertes/alpine-mongo
But how do I execute my script in the same command?
Check out the docker run reference:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
You can pass in the command you wish to execute. In your case, this could be the restore script. You must consider two things, though.
The script is not part of the container, so you need to mount it into the container.
Specifying a command overwrites the CMD directive in the Dockerfile.
If you look at the Dockerfile, you see this as its last line:
CMD [ "mongod" ]
This means the default command that the container executes is mongod. When you specify a command for docker run, you "replace" this with the command you pass in. In your case: Passing in the restore script will overwrite mongod, which means Mongo never starts and the script will fail.
You have two options:
Start one container with the database and another one with the restore script.
Try to chain the commands.
Since you want to run this in a short-lived container, option 2 might be better suited for you. Two details matter: mongod must be started with the --fork flag (which in turn requires a --logpath or --syslog destination) to run in daemon mode, and the chained command has to be wrapped in sh -c so that a shell is there to interpret the &&.
$ docker run --rm --name mongo -p 27017:27017 \
    -v /data/db:/data/db \
    -v "$(pwd)":/mnt/pwd \
    mvertes/alpine-mongo sh -c "mongod --fork --logpath /var/log/mongod.log && /mnt/pwd/restore.sh"
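If you prefer option 1 instead, a sketch with separate steps: start mongod with the default command, run the restore through docker exec (assuming the image also ships mongorestore), then clean up manually since --rm is not used here:
$ docker run -d --name mongo -p 27017:27017 \
    -v /data/db:/data/db \
    -v "$(pwd)":/mnt/pwd \
    mvertes/alpine-mongo
$ docker exec mongo mongorestore --port 27017 --db myapp /mnt/pwd/db-dump/myapp
$ docker stop mongo && docker rm mongo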
Hopefully, this is all it takes to solve your problem.