I am running Docker Desktop 3.5.1 on macOS Big Sur and I am totally confused about the following behaviour:
If I run docker run -it --rm postgres psql --help I get the psql usage information (all as expected) and I can continue to run commands in my terminal. Edit to clarify: the docker container exits and terminates as expected, but my zsh session remains active (also as expected).
However, if I run psql with an invalid flag, say, docker run -it --rm postgres psql -m then I get
/usr/lib/postgresql/13/bin/psql: invalid option -- 'm'
Try "psql --help" for more information.
[Process completed]
and my terminal session exits. Edit to clarify: the docker container exits as expected, but it takes the host zsh session with it (unexpected).
What I'm trying to work out is why my terminal session exits, and how I can avoid this happening.
To keep a session open you can execute bash like this:
docker run --rm -it postgres /bin/bash
Then you can run as many psql commands as you like and it won't exit until bash exits.
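For example, a rough sketch of the same experiment run from inside the container (so any psql error is handled by the container's bash, not your host shell):

docker run --rm -it postgres /bin/bash
# inside the container:
psql --help    # prints usage, as before
psql -m        # the "invalid option" error stays inside the container's bash
exit           # only now does the container stop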
Edit:
It seems terminal-closing behaviour can be configured in the OS:
https://stackoverflow.com/a/17910412/657477
Very weird behaviour, but @ErangaHeshan's comments pointed me to some nonsense inside my .zprofile file. As soon as that was commented out, psql in docker stopped taking down my host zsh session on exit.
I am trying to understand why, when executing this command:
docker run --rm -it postgres bash
the container starts fine and gives me a bash prompt, without initializing a postgres server.
In fact, when I only execute this:
docker run --rm -it postgres
the container tries to initialize a postgres server and fails because no '-e POSTGRES_PASSWORD' was provided, which is absolutely normal.
But the question is: what is the mechanism, in docker or in the official postgres image, that tells the container to
NOT initialize a postgres server when an argument is provided at the end of the 'docker run --rm -it postgres' command (like bash or psql...),
but DO initialize a postgres server when NO argument is provided (docker run --rm -it postgres)?
Thanks in advance.
The postgres image Dockerfile is set up as
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
When an image has both an ENTRYPOINT and a CMD, the command part is passed as additional parameters to the entrypoint. If you docker run postgres bash, the bash command overrides the command part but leaves the entrypoint intact.
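As a quick illustration of how the two pieces combine (a sketch based on the ENTRYPOINT and CMD shown above):

# no command given: ENTRYPOINT + default CMD
docker run --rm postgres
#   -> docker-entrypoint.sh postgres

# command given: ENTRYPOINT + your command
docker run --rm -it postgres bash
#   -> docker-entrypoint.sh bash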
This entrypoint wrapper script setup is a common and useful technique. The script can do anything it needs to do to make the container ready to use, and then end with the shell command exec "$@" to run the command it got passed as arguments. Typical uses for this include dynamically setting environment variables, populating mounted volumes, and (for application containers more than database containers) waiting for a database or other container dependency to be ready.
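A minimal sketch of such a wrapper (the variable names, the nc wait loop, and the port default are illustrative, not taken from any particular image):

#!/bin/sh
set -e

# fill in a setting dynamically if the caller did not provide one (illustrative)
: "${APP_LISTEN_ADDR:=0.0.0.0}"
export APP_LISTEN_ADDR

# optionally wait for a dependency to accept connections (illustrative)
if [ -n "$WAIT_FOR_HOST" ]; then
    until nc -z "$WAIT_FOR_HOST" "${WAIT_FOR_PORT:-5432}"; do
        sleep 1
    done
fi

# finally hand pid 1 over to whatever command was passed as arguments
exec "$@"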
In the particular case of the postgres image, its entrypoint script does (simplified):
if [ "$1" = 'postgres' ] && ! _pg_want_help "$#"; then
docker_setup_env
docker_create_db_directories
if [ -z "$DATABASE_ALREADY_EXISTS" ]; then
docker_init_database_dir
docker_setup_db
docker_process_init_files /docker-entrypoint-initdb.d/*
fi
fi
exec "$#"
Since the entrypoint script is just a shell script and it does have the command as positional parameters, it can make decisions based on what the command actually is. In this case, if [ "$1" = 'postgres' ] – if the main container command is to run the PostgreSQL server – then do the first-time initialization, otherwise don't.
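You can see the two paths from the command line (a sketch; the password value is just a placeholder):

# $1 is 'bash', so the first-time initialization is skipped and you get a plain shell:
docker run --rm -it postgres bash

# $1 is the default 'postgres', so the database is initialized and the server starts:
docker run --rm -it -e POSTGRES_PASSWORD=example postgres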
I've run docker-compose up in detached mode.
I have then run docker-compose exec container_name command and got nothing, so I then ran docker-compose exec -it container_name bash to get a shell and got nothing either.
I have tried these commands, but I get no output from the first command and I don't get access to the shell with the second. Do you know how I can get output or access the shell?
I am on macOS Catalina 10.15.7. I have tried rebooting but it's the same.
docker-compose build and docker-compose up are running correctly
Note: docker ps -a gives me the same container name as docker-compose ps, just without the _1 at the end (react-wagtail-blog_web_cont with docker ps -a and react-wagtail-blog_web_cont_1 with docker-compose ps).
I can access my react-wagtail-blog_web_cont container with docker exec -it react-wagtail-blog_web_cont bash...
Thank you
If you just want to see the normal log output, you can use docker logs <container>, where container is the container name or hash. To find that, just run docker ps.
Also, if you want to see the logs in real time as they come in, use docker logs -f <container>.
See the docs.
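For the container from the question, that would look something like the following (the web service name in the compose variant is a guess; use whatever service name appears in your docker-compose.yml):

docker logs react-wagtail-blog_web_cont        # dump the logs collected so far
docker logs -f react-wagtail-blog_web_cont     # follow the logs as they are written
docker-compose logs -f web                     # roughly equivalent, per compose service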
I am working with a codebase that has a docker run command as follows (real name and password removed):
docker run -it --rm --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=password postgres:11.6 -d2
I know that the -d flag is to --detach the container, but what is -d2? I can't figure out the purpose of this flag at the end of the command. I'm also confused about why it's at the end of the command and not before the IMAGE name like the other flags.
The docker command line is order sensitive. Once docker sees an argument it cannot parse as an option or flag, it treats that as the image name, and everything after the image name is the command to run instead of the default command. In other words:
docker run ${options_to_run} ${image_name} ${command_override}
In the postgres image, the entrypoint is docker-entrypoint.sh and the default command is postgres. That means docker will run this container by default as docker-entrypoint.sh postgres (it concatenates the entrypoint and command together into one command with args to run). With the -d2 command override, that becomes docker-entrypoint.sh -d2 and the entrypoint script may interpret that as an option to change how it will run. The entrypoint has special handling for flags:
if [ "${1:0:1}" = '-' ]; then
set -- postgres "$#"
fi
....
exec "$#"
Which means the entrypoint arguments are modified from -d2 to postgres -d2, and then the shell running as pid 1 is replaced by those command line arguments: postgres running with the -d2 argument.
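Spelled out step by step (a sketch of what the container ends up running):

# what you asked for:
#   docker run ... postgres:11.6 -d2
# what docker assembles from ENTRYPOINT + command:
#   docker-entrypoint.sh -d2
# after the leading-dash check rewrites the arguments:
#   docker-entrypoint.sh postgres -d2
# what pid 1 finally becomes via exec:
#   postgres -d2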
I found the answer. -d2 is a postgres CLI option for specifying the debugging level. We are executing the postgres container with that postgres CLI option.
From postgres --help:
-d 1-5 debugging level
docker exec -it command returns the following error: "cannot enable tty mode on non tty input"
level="fatal" msg="cannot enable tty mode on non tty input"
I am running docker (1.4.1) on a CentOS 6.6 box.
I am trying to execute the following command:
docker exec -it containerName /bin/bash
but I am getting the following error:
level="fatal" msg="cannot enable tty mode on non tty input"
Running docker exec -i instead of docker exec -it fixed my issue. Indeed, my script was launched by crontab, which isn't a terminal (see the sketch after the usage reminder below).
As a reminder:
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
-i, --interactive=false Keep STDIN open even if not attached
-t, --tty=false Allocate a pseudo-TTY
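For example, in a crontab entry (the container name, database, user, and output path are purely illustrative):

# nightly dump from a running postgres container; note -i without -t, since cron has no tty
0 3 * * * docker exec -i my_postgres pg_dump -U postgres mydb > /backups/mydb.sql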
If you're getting this error in the Windows docker client, then you may need to use the run command as below:
$ winpty docker run -it ubuntu /bin/bash
just use "-i"
docker exec -i [your-ps] [command]
If you're on Windows, using docker-machine, and you're using Git Bash or Cygwin, then to "get inside" a running container you'll need to do the following:
docker-machine ssh default to ssh into the virtual machine (VirtualBox most likely)
docker exec -it <container> bash to get into the container.
EDIT:
I've recently discovered that if you use Windows PowerShell you can docker exec directly into the container; with Cygwin or Git Bash you can use winpty docker exec -it <container> bash and skip the docker-machine ssh step above.
I get "cannot enable tty mode on non tty input" for the following command on windows with boot2docker
docker exec -it <containerIdOrName> bash
The command below fixed the problem:
winpty docker exec -it <containerIdOrName> bash
docker exec runs a new command in an already-running container. It is not the way to start a new container -- use docker run for that.
That may be the cause of the "non tty input" error. Or it could be where you are running docker. Is it a true terminal? That is, is a full tty session available? You might want to check whether you are in an interactive session with
[[ $- == *i* ]] && echo 'Interactive' || echo 'Not interactive'
from https://unix.stackexchange.com/questions/26676/how-to-check-if-a-shell-is-login-interactive-batch
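If the same script has to run both from a real terminal and from non-interactive contexts like cron or CI, one approach (a sketch; the container name is illustrative) is to only ask for a tty when stdin actually is one:

# allocate a pseudo-tty only when stdin really is a terminal
if [ -t 0 ]; then
    tty_flag="-it"
else
    tty_flag="-i"
fi
docker exec $tty_flag mycontainer tty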
I encountered this same error message in Windows 7 64bit using Mintty shipped with Git for Windows.
$ docker run -i -t ubuntu /bin/bash
cannot enable tty mode on non tty input
I tried to prefix the above command with winpty as other answers suggested but running it showed me another error message below:
$ winpty docker run -i -t ubuntu /bin/bash
exec: "D:\\Git\\usr\\bin\\bash": executable file not found in $PATH
docker: Error response from daemon: Container command not found or does not exist..
Then I happened to run the following command, which gave me what I wanted:
$ winpty docker run -i -t ubuntu bash
root@512997713d49:/# ls
bin dev home lib64 mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
root@512997713d49:/#
I'm running docker exec -it under Jenkins jobs and getting the error 'cannot enable tty mode on non tty input'. No output from the docker exec command is returned. My job's login sequence was:
jenkins shell -> ssh user@<testdriver> -> ssh root@<sut> -> su - <user> -> docker exec -it <container>
I made a change to use the -T flag in the initial ssh from Jenkins ("-T - Disable pseudo-terminal allocation") and to use the -i flag with docker exec instead of -it ("-i - interactive; -t - allocate a pseudo-tty"). This seems to have solved my problem.
jenkins shell -> ssh -T user@<testdriver> -> ssh root@<sut> -> su - <user> -> docker exec -i <container>
The behaviour kind of matches this docker exec tty bug: https://github.com/docker/docker/issues/8755. A workaround in that docker bug discussion suggests using this:
docker exec -it <CONTAINER> script -qc <COMMAND>
Using that workaround didn't solve my problem, though it is interesting. Try these with different flags and under different ssh invocations; you can see 'not a tty' even when using -t with docker exec:
$ docker exec -it <CONTAINER> script -qc 'tty'
/dev/pts/0
$ docker exec -it <CONTAINER> 'tty'
not a tty
$ docker exec -it <CONTAINER> bash -c 'tty'
not a tty
All the tutorials point to running postgres in the format of
docker run -d -p 5432 \
-t <your username>/postgresql \
/bin/su postgres -c '/usr/lib/postgresql/9.2/bin/postgres \
-D /var/lib/postgresql/9.2/main \
-c config_file=/etc/postgresql/9.2/main/postgresql.conf'
Why can't we in our Docker file have:
ENTRYPOINT ["/etc/init.d/postgresql-9.2", "start"]
And simply start the container by
docker run -d psql
Is that not the purpose of Entrypoint or am I missing something?
The difference is that the init script provided in /etc/init.d is not an entry point. Its purpose is quite different: to get the entry point started, in the background, and then report the success or failure to the caller. That script causes a postgres process, usually indirectly via pg_ctl, to be started, detached from the controlling terminal.
For docker to work best, it needs to run the application directly, attached to the docker process. That way it can usefully and generically terminate it when the user asks for it, or quickly discover and respond to the process crashing.
To exemplify what IfLoop said.
Using CMD in a Dockerfile:
FROM postgres
CMD ["/usr/lib/postgresql/9.2/bin/postgres", "-D", "/var/lib/postgresql/9.2/main", "-c", "config_file=/etc/postgresql/9.2/main/postgresql.conf"]
To run:
$ docker run -d -p 5432:5432 psql
Watching PostgreSQL logs:
$ docker logs -f POSTGRES_CONTAINER_ID