There is a Docker container with a Postgres server. Once Postgres is stopped or crashes (it doesn't matter which), I need to check some environment variables and the state of a few files.
By default, the container stops after the application finishes.
I know there is an option to change this default behavior in the Dockerfile, but I can no longer find it.
If somebody knows it, please give me a Dockerfile example like this:
FROM something
RUN something ...
ENTRYPOINT [something]
You can simply run a non-exiting process at the end of the entrypoint to keep the container alive, even if the main process exits.
For example, use:
tail -f 'some log file'
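As a rough Dockerfile sketch (the base image and start.sh script are placeholders, not from the question; /dev/null is tailed here instead of a log file so the sketch works without one):

FROM ubuntu:22.04
COPY start.sh /start.sh
# run the real startup script, then tail /dev/null forever so the
# container stays alive for inspection even after the app exits
ENTRYPOINT ["/bin/sh", "-c", "/start.sh; tail -f /dev/null"]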
There isn't an "option" to keep a container running when the main process has stopped or died. You can run something different in the container while debugging the actual startup scripts. Sometimes you need to override an entrypoint to do this.
docker run -ti $IMAGE /bin/sh
docker run -ti --entrypoint=/bin/sh $IMAGE
If the main process won't stay running when you docker start the existing container, then you won't be able to use that container interactively; otherwise you could:
docker start $CID
docker exec -ti $CID sh
To get files from an existing container, you can docker cp anything you need, even from a stopped container.
docker cp $CID:/a/path /some/local/path
You can also docker export a tar archive of the complete container.
docker export $CID -o $CID.tar
tar -tvf $CID.tar | grep afile
The environment Docker injects can be seen with docker inspect, but this won't give you anything the process has added to the environment.
docker inspect $CID --format '{{ json .Config.Env }}'
In general, Docker requires a process to keep running in the foreground; otherwise, it assumes the application has stopped and shuts the container down. Below I outline a few ways I'm aware of to prevent a container from stopping:
Use a process manager such as runit or systemd to run a process inside a container.
As an example, here is a Red Hat article about running systemd within a Docker container.
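A rough sketch of that pattern, assuming a Red Hat UBI init image (the package and service names are placeholders, not from the article):

FROM registry.access.redhat.com/ubi8/ubi-init
# ubi-init runs systemd as PID 1; systemd keeps the container alive
# and supervises any enabled services
RUN dnf -y install postgresql-server && systemctl enable postgresql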
A few possible approaches for debugging purposes:
a) Add an artificial sleep or pause to the entrypoint:
For example, in bash, you can use this to create an infinite pause:
while true; do sleep 1; done
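Placed at the end of a (hypothetical) entrypoint script, it keeps the container up for debugging even if the real command fails:

#!/bin/sh
# run the real command; note its exit status, but keep the container alive
"$@" || echo "main process exited with status $?"
while true; do sleep 1; done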
b) For a quick workaround, you can run the tail command in the container:
As an example, the command below starts a new container in detached/background mode (-d) and executes tail -f /dev/null inside it. As a result, the container will run forever:
docker run -d ubuntu:18.04 tail -f /dev/null
And if the main process crashed or exited, you may still look up the ENV variables or check files with docker exec and basic commands like cd and ls. A few relevant commands for that:
docker inspect -f \
'{{range $index, $value := .Config.Env}}{{$value}} {{end}}' name-of-container
docker exec -it name-of-container bash
I'm trying to understand why, when I execute this command:
docker run --rm -it postgres bash
the container starts fine and gives me a bash prompt, without initializing a Postgres server.
In fact, when I only execute this:
docker run --rm -it postgres
the container tries to initialize a Postgres server and fails because no -e POSTGRES_PASSWORD option was provided, which is absolutely normal.
But the question is: what is the mechanism, in Docker or in the official postgres image, that tells the container to:
NOT initialize a Postgres server when an argument (like bash or psql) is provided at the end of the docker run --rm -it postgres sequence;
DO initialize a Postgres server when NO argument is provided (docker run --rm -it postgres)?
Thanks in advance.
The postgres image Dockerfile is set up as
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
When an image has both an ENTRYPOINT and a CMD, the command part is passed as additional parameters to the entrypoint. If you docker run postgres bash, the bash command overrides the command part but leaves the entrypoint intact.
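Concretely, with the Dockerfile lines above, the two invocations from the question expand to (illustration only):

docker run --rm -it postgres        # runs: docker-entrypoint.sh postgres
docker run --rm -it postgres bash   # runs: docker-entrypoint.sh bash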
This entrypoint wrapper script setup is a common and useful technique. The script can do anything it needs to do to make the container ready to use, and then end with the shell command exec "$@" to run the command it was passed as arguments. Typical uses include dynamically setting environment variables, populating mounted volumes, and (for application containers more than database containers) waiting for a database or other container dependency to be ready.
In the particular case of the postgres image, its entrypoint script does (simplified):
if [ "$1" = 'postgres' ] && ! _pg_want_help "$#"; then
docker_setup_env
docker_create_db_directories
if [ -z "$DATABASE_ALREADY_EXISTS" ]; then
docker_init_database_dir
docker_setup_db
docker_process_init_files /docker-entrypoint-initdb.d/*
fi
fi
exec "$#"
Since the entrypoint script is just a shell script and receives the command as its positional parameters, it can make decisions based on what the command actually is. In this case, if [ "$1" = 'postgres' ] – if the main container command is to run the PostgreSQL server – it performs the first-time initialization; otherwise it doesn't.
I want to build a PostgreSQL image that only contains some extra .sql files to be executed at startup time.
Dockerfile:
FROM postgres:11.9-alpine
USER postgres
WORKDIR /
COPY ddl/*.sql /docker-entrypoint-initdb.d/
Then I build the image:
docker build -t my-postgres:1.0.0 -f Dockerfile .
And run the container
docker run -d --name my-database \
-e POSTGRES_PASSWORD=abc123 \
-p 5432:5432 \
my-postgres:1.0.0
The output is the container id:
33ed596792a80fc08f37c7c0ab16f8827191726b8e07d68ce03b2b5736a6fa4e
Checking the running containers returns nothing:
docker container ls
But if I explicitly start it, it works
docker start my-postgres
In the original PostgreSQL image, the docker run command already starts the database. Why doesn't it after building my own image?
It turned out that one of the copied .sql files was failing to execute, and, based on the documentation, that causes the entrypoint script to exit. Fixing the SQL solved the issue and the container started normally with docker run.
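In a case like this, the logs of the exited container show why the entrypoint aborted; with the container name used above:

docker logs my-database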
I've run docker-compose up in detached mode.
I then ran docker-compose exec container_name command
and got nothing, so I then ran docker-compose exec -it container_name bash to get a shell, and got nothing either.
I have no output from the first command and no access to a shell with the second. Do you know how I can get output or access the shell?
I am on macOS Catalina 10.15.7. I have tried rebooting, but it's the same.
docker-compose build and docker-compose up run correctly.
Note: docker ps -a gives me the same container name as docker-compose, just without the _1 at the end (react-wagtail-blog_web_cont with docker ps -a and react-wagtail-blog_web_cont_1 with docker-compose ps).
I can access my react-wagtail-blog_web_cont container with docker exec -it react-wagtail-blog_web_cont bash...
Thank you
If you just want to see the normal log output, you can use docker logs <container>, where container is the container name or hash. To find that, just run docker ps.
If you want to see the logs in real time as they come, use docker logs -f <container>.
See the docs.
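For example, with the container name from the question (adjust it to whatever docker ps prints):

docker ps                                  # find the container name or id
docker logs react-wagtail-blog_web_cont    # dump the output so far
docker logs -f react-wagtail-blog_web_cont # follow the logs live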
I have a standard Python Docker image that needs to start after Postgres is properly started in its standard image.
I understand that I can add this Bash command in the docker-compose file:
command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; npm start'
depends_on:
- mypostgres
But I don't have bash installed in the standard python docker image, and I'm trying to keep the installation minimal.
Is there a way to wait for postgres without having bash installed in my image?
I have a standard Python docker image that needs to start after postgres is properly started in its standard image.
You mentioned "Python docker image", but you appear to be calling npm start, which is a node.js application, not a Python application.
The standard Python images do have bash installed (as do the official Node images):
$ docker run -it --rm python:3.10 bash
root@c9bdac2e23f9:/#
However, just checking for the port to be available may be insufficient in any case, so really what you want is to execute a query against the database and only continue once the query is successful.
A common solution is to install the postgres cli and run psql in a loop, like this:
until psql -h "$HOST" -U "$USER" -d "$DB_NAME" -c 'select 1' >/dev/null 2>&1; do
  echo 'Waiting for database...'
  sleep 1
done
You can use environment variables or a .pgpass file to provide the appropriate password.
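For reference, a .pgpass line has the form hostname:port:database:username:password; for example, with placeholder values:

db:5432:mydb:myuser:secret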
If you are building a custom image, it may be better to place this logic in your ENTRYPOINT script rather than embedding it in the command field of your docker-compose.yaml.
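A minimal sketch of such an entrypoint, assuming psql is available in the image and the same variables as above (note the exec handing off to the container's real command):

#!/bin/sh
# block until the database accepts queries, then run the main command
until psql -h "$HOST" -U "$USER" -d "$DB_NAME" -c 'select 1' >/dev/null 2>&1; do
  echo 'Waiting for database...'
  sleep 1
done
exec "$@"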
If you don't want to use psql, you can write the same logic in Python or Node using whatever Postgres bindings are available (e.g., something like psycopg2 for Python).
A better solution is to make your application robust in the face of database failures, because this allows your application to continue running if the database is briefly unavailable during a restart.
I'm trying to run a locustfile with the locustio/locust Docker image, but it cannot find the locustfile, even though it exists in the locust directory.
docker run -p 8089:8089 -v $PWD:/locust locustio/locust locust -f /locust/locustfile.py
Could not find any locustfile! Ensure file ends in '.py' and see --help for available options.
(I'm reposting this question as my own, because the original poster deleted it immediately after getting an answer!)
The locustio/locust image already uses locust as its entrypoint, so everything after the image name is passed to it as arguments; your command therefore ran locust locust -f /locust/locustfile.py. Remove the extra "locust" from your command, so that it becomes:
docker run ... locustio/locust -f /locust/locustfile.py
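Applied to the original command from the question, that becomes:

docker run -p 8089:8089 -v $PWD:/locust locustio/locust -f /locust/locustfile.py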