What is the -d2 flag in a docker run command - postgresql

I am working with a codebase that has a docker run command as follows (real name and password removed):
docker run -it --rm --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=password postgres:11.6 -d2
I know that the -d flag is short for --detach, which detaches the container, but what is -d2? I can't figure out the purpose of this flag at the end of the command. I'm also confused why it's at the end of the command and not before the IMAGE name like the other flags.

The docker command line is order sensitive. Once docker sees an argument it does not recognize as one of its own options, it treats that argument as the image name, and everything after the image name is the command to run instead of the image's default command. In other words:
docker run ${options_to_run} ${image_name} ${command_override}
In the postgres image, the entrypoint is docker-entrypoint.sh and the default command is postgres. That means docker will run this container by default as docker-entrypoint.sh postgres (it concatenates the entrypoint and command together into one command with args to run). With the -d2 command override, that becomes docker-entrypoint.sh -d2 and the entrypoint script may interpret that as an option to change how it will run. The entrypoint has special handling for flags:
if [ "${1:0:1}" = '-' ]; then
set -- postgres "$#"
fi
....
exec "$#"
This means the entrypoint's arguments are rewritten from -d2 to postgres -d2, and the final exec replaces the shell running as pid 1 with that command line: postgres running with the -d2 argument.
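One way to see this split for yourself (a quick check, assuming you have the image pulled locally) is to ask docker for the image's configured entrypoint and command:
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' postgres:11.6
# prints something like: [docker-entrypoint.sh] [postgres]
docker run concatenates those two lists, with any command override replacing the second one.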

I found the answer. -d2 is a postgres CLI option for specifying the debugging level. We are executing the postgres container with that postgres CLI option.
From postgres --help:
-d 1-5 debugging level
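Since the entrypoint prepends postgres when the first argument starts with a dash, an equivalent and more explicit way to write the original command (a sketch, keeping the same placeholder password) is to spell out the server command yourself:
docker run -it --rm --name postgres -p 5432:5432 \
  -e POSTGRES_PASSWORD=password postgres:11.6 postgres -d2
Both forms end up executing docker-entrypoint.sh postgres -d2 inside the container.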

Related

What docker mechanism allows the container to not initialize a postgres server when the container starts?

I am trying to understand why executing this command:
docker run --rm -it postgres bash
starts the container fine and gives me a bash prompt, without initializing a postgres server.
But when I only execute this:
docker run --rm -it postgres
the container tries to initialize a postgres server and fails because no '-e POSTGRES_PASSWORD' is provided, which is absolutely normal.
The question is: what mechanism in docker, or in the official postgres image, tells the container to:
not initialize a postgres server when an argument is provided at the end of the 'docker run --rm -it postgres' command (like bash or psql...),
but DO initialize a postgres server when NO argument is provided (docker run --rm -it postgres)?
Thanks in advance.
The postgres image Dockerfile is set up as
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
When an image has both an ENTRYPOINT and a CMD, the command part is passed as additional parameters to the entrypoint. If you docker run postgres bash, the bash command overrides the command part but leaves the entrypoint intact.
This entrypoint wrapper script setup is a common and useful technique. The script can do anything it needs to do to make the container ready to use, and then end with the shell command exec "$@" to run the command it got passed as arguments. Typical uses for this include dynamically setting environment variables, populating mounted volumes, and (for application containers more than database containers) waiting for a database or other container dependency to be ready.
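As a rough illustration of that pattern (not the real postgres script; the file name, variable, and setup step here are made up for the example), a wrapper entrypoint usually looks like this:
#!/bin/sh
# entrypoint.sh (hypothetical): do one-time setup, then hand pid 1 to the command
set -e

# example setup step: provide a default for a config variable if the caller didn't set one
: "${APP_CONFIG:=/etc/myapp/app.conf}"
export APP_CONFIG

# replace this shell with the container's command (CMD or the docker run override),
# so it runs as pid 1 and receives signals directly
exec "$@"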
In the particular case of the postgres image, its entrypoint script does (simplified):
if [ "$1" = 'postgres' ] && ! _pg_want_help "$#"; then
docker_setup_env
docker_create_db_directories
if [ -z "$DATABASE_ALREADY_EXISTS" ]; then
docker_init_database_dir
docker_setup_db
docker_process_init_files /docker-entrypoint-initdb.d/*
fi
fi
exec "$#"
Since the entrypoint script is just a shell script and it does have the command as positional parameters, it can make decisions based on what the command actually is. In this case, if [ "$1" = 'postgres' ] – if the main container command is to run the PostgreSQL server – then do the first-time initialization, otherwise don't.
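A quick way to confirm this behaviour (assuming the stock postgres image) is to override the command and look for a running server:
docker run --rm -it postgres bash
# then, inside the container:
psql -U postgres
# fails with a connection error (no server socket), because the entrypoint saw
# bash as $1, skipped initialization, and simply exec'd bash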

Docker & PostgreSQL- modified PostgreSQL image doesn't start with Docker run command

I want to build a PostgreSQL image that only contains some extra .sql files to be executed at startup.
Dockerfile:
FROM postgres:11.9-alpine
USER postgres
WORKDIR /
COPY ddl/*.sql /docker-entrypoint-initdb.d/
Then I build the image:
docker build -t my-postgres:1.0.0 -f Dockerfile .
And run the container
docker run -d --name my-database \
-e POSTGRES_PASSWORD=abc123 \
-p 5432:5432 \
my-postgres:1.0.0
The output is the container id:
33ed596792a80fc08f37c7c0ab16f8827191726b8e07d68ce03b2b5736a6fa4e
Checking the running containers returns nothing:
docker container ls
But if I explicitly start it, it works
docker start my-postgres
With the original PostgreSQL image, the docker run command already starts the database. Why doesn't it after building my own image?
It turned out that one of the copied .sql files was failing to execute and, as explained in this documentation, that makes the entrypoint script exit. Fixing the SQL solved the issue and the container started normally with docker run.
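If you hit something like this, the failing statement shows up in the container output, so a quick way to diagnose it (using the container name from the run command above) is:
docker logs my-database
The entrypoint prints each file from /docker-entrypoint-initdb.d/ as it runs it, so the last file mentioned before the error is usually the culprit.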

Running a script from a mongodb docker-container

I have a script, restore.sh, that restores the database:
mongorestore --port 27017 --db myapp `pwd`/db-dump/myapp
I want to run this in a short lived docker-container using the image mvertes/alpine-mongo.
To run a short-lived container, the --rm flag is used:
docker run --rm --name mongo -p 27017:27017 \
-v /data/db:/data/db \
mvertes/alpine-mongo
But how do I execute my script in the same command?
Check out the docker run reference:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
You can pass in the command you wish to execute. In your case, this could be the restore script. You must consider two things, though.
The script is not part of the container, so you need to mount it into the container.
Specifying a command overwrites the CMD directive in the Dockerfile.
If you look at the Dockerfile, you see this as its last line:
CMD [ "mongod" ]
This means the default command that the container executes is mongod. When you specify a command for docker run, you "replace" this with the command you pass in. In your case, passing in the restore script will overwrite mongod, which means Mongo never starts and the script will fail.
You have two options:
Start one container with the database and another one with the restore script.
Try to chain the commands.
Since you want to run this in a short-lived container, option 2 might be better suited for you. Just remember to start mongod with the --fork flag to run it in daemon mode (when forking, mongod also requires a log destination such as --logpath).
$ docker run --rm --name mongo -p 27017:27017 \
    -v /data/db:/data/db \
    -v "$(pwd)":/mnt/pwd \
    mvertes/alpine-mongo sh -c "mongod --fork --logpath /var/log/mongodb.log && /mnt/pwd/restore.sh"
Hopefully, this is all it takes to solve your problem.
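If option 1 fits better, a sketch of that approach (same image and mounts; mongod runs as the container's main process and the restore happens via docker exec) could look like:
docker run -d --name mongo -p 27017:27017 \
    -v /data/db:/data/db \
    -v "$(pwd)":/mnt/pwd \
    mvertes/alpine-mongo
docker exec mongo sh -c 'cd /mnt/pwd && sh ./restore.sh'
docker rm -f mongo   # clean up, since --rm is not used with -d here
In practice you may need to wait a moment after the docker run for mongod to accept connections before running the exec.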

How do I handle passwords and dockerfiles?

I've created an image for docker which hosts a postgresql server. In the Dockerfile, I set the USER to postgres and pass a constant password into a RUN of psql:
USER postgres
RUN /etc/init.d/postgresql start && psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" && createdb -O docker docker
Ideally either before or after calling 'docker run' on this image, I'd like the caller to have to input these details into the command line, so that I don't have to store them anywhere.
I'm not really sure how to go about this. Does docker have any support for reading stdin into an environment variable? Or perhaps there's a better way of handling this all together?
At build time
You can use build arguments in your Dockerfile:
ARG password=defaultPassword
USER postgres
RUN /etc/init.d/postgresql start && psql --command "CREATE USER docker WITH SUPERUSER PASSWORD '$password';" && createdb -O docker docker
Then build with:
$ docker build --build-arg password=superSecretPassword .
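One thing worth knowing before relying on this (it is relevant to the security note below): build arguments consumed by RUN steps are recorded in the image metadata, so they can be recovered later, e.g.:
docker history --no-trunc <image-id> | grep -i password
(<image-id> is a placeholder for whatever ID or tag the build above produced.)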
At run time
For setting the password at runtime, you can use an environment variable (ENV) that you can evaluate in an entrypoint script (ENTRYPOINT):
ENV PASSWORD=defaultPassword
ADD entrypoint.sh /docker-entrypoint.sh
USER postgres
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["postgres"]
Within the entrypoint script, you can then create a new user with the given password as soon as the container starts:
pg_ctl -D /var/lib/postgresql/data \
    -o "-c listen_addresses='localhost'" \
    -w start
psql --command "CREATE USER docker WITH SUPERUSER PASSWORD '$PASSWORD';"
pg_ctl -D /var/lib/postgresql/data -m fast -w stop
exec "$@"
You can also have a look at the Dockerfile and entrypoint script of the official postgres image, from which I've borrowed most of the code in this answer.
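A usage sketch for the run-time variant (my-postgres-image is just a placeholder tag for a build of the Dockerfile above):
docker build -t my-postgres-image .
docker run -d -e PASSWORD=superSecretPassword my-postgres-image
The entrypoint script then picks up $PASSWORD when the container starts.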
A note on security
Storing secrets like passwords in environment variables (both build and run time) is not incredibly secure (unfortunately, to my knowledge, Docker does not really offer any better solution for this, right now). An interesting discussion on this topic can be found in this question.
You could use an environment variable in your Dockerfile and override the default value when you call docker run, using the -e or --env argument.
You will also need to amend the init script referenced by the CMD instruction to run the psql command on startup.

Why can't you start postgres in docker using "service postgres start"?

All the tutorials point to running postgres in the form of
docker run -d -p 5432 \
-t <your username>/postgresql \
/bin/su postgres -c '/usr/lib/postgresql/9.2/bin/postgres \
-D /var/lib/postgresql/9.2/main \
-c config_file=/etc/postgresql/9.2/main/postgresql.conf'
Why can't we in our Docker file have:
ENTRYPOINT ["/etc/init.d/postgresql-9.2", "start"]
And simply start the container by
docker run -d psql
Is that not the purpose of Entrypoint or am I missing something?
The difference is that the init script provided in /etc/init.d is not an entry point. Its purpose is quite different: to get the entry point started, in the background, and then report the success or failure to the caller. That script causes a postgres process, usually indirectly via pg_ctl, to be started, detached from the controlling terminal.
For docker to work best, it needs to run the application directly, attached to the docker process. That way it can usefully and generically terminate it when the user asks for it, or quickly discover and respond to the process crashing.
To exemplify what IfLoop said.
Using CMD in Dockerfiles:
USER postgres
CMD ["/usr/lib/postgresql/9.2/bin/postgres", "-D", "/var/lib/postgresql/9.2/main", "-c", "config_file=/etc/postgresql/9.2/main/postgresql.conf"]
To run:
$ docker run -d -p 5432:5432 psql
Watching PostgreSQL logs:
$ docker logs -f POSTGRES_CONTAINER_ID