Kubernetes Container Command to start a bash that does not stop

I would like to start a container with bash only. In other words, a bash that does not stop, I guess an interactive bash or shell.
So far, when I put something like ["bash"] or ["bin/bash"] or simply bash, the container runs and stops. Is there a way to start a bash that runs continuously?
EDIT1
So far the only way that works for me is to write:
command:
- tail
- -f
- /dev/null
EDIT2
My use case here is that I want to build a docker image simply to develop in it, so the image has all the tools I need for my work.
Hence I wonder how such a container should be started. I don't want to run any of the dev tools at start; I simply want the container to be available, ready for someone to run an interactive shell in it at any time.

You can try the sleep command in a while loop:
command: ["/bin/sh"]
args: ["-c", "while true; do sleep 10;done"]
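For reference, a minimal sketch of a full Pod manifest using this trick could look like the following; the image and names are placeholders, substitute your own dev image:
apiVersion: v1
kind: Pod
metadata:
  name: dev-shell
spec:
  containers:
  - name: dev-shell
    image: ubuntu:20.04
    command: ["/bin/sh"]
    args: ["-c", "while true; do sleep 10; done"]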

You can create a container with:
command: ["cat"]
tty: true
stdin: true
That way it consumes less CPU and memory than bash.
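A minimal sketch of the same idea as a full Pod spec (again, image and names are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: dev-shell
spec:
  containers:
  - name: dev-shell
    image: ubuntu:20.04
    command: ["cat"]
    tty: true
    stdin: true
Once it is running, you can attach an interactive shell at any time with kubectl exec -it dev-shell -- bash.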

Related

docker-compose, how to run bash commands after container has started, without overriding the CMD or ENTRYPOINT in the image docker is pulling in?

I just want to rename a few files, without overriding the commands inside the wordpress image that docker is pulling in.
Inside the docker-compose.yml I tried using 'command' and 'entrypoint' to run bash commands; both basically interrupt what's happening inside the image, and it all fails.
You have three main ways to run a command after the container starts:
with docker exec -d someContainer some command from the command line,
with CMD ["some", "command"] in your Dockerfile,
with command: some command in a docker-compose file.
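For example, the first form might look like this (the container name and the touch command are just placeholders):
docker exec -d someContainer touch /tmp/marker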
If none of these works for you, you are probably doing something wrong. A common mistake is specifying command more than once in your docker-compose file, like so:
version: '3.8'
services:
  someService:
    command: a command
    command: another command
This doesn't work, because the last command overrides the ones above it. What you should do instead is concatenate the commands and run them through a shell:
version: '3.8'
services:
  someService:
    command: sh -c "a command && another command"
Take a look at this question.
Edit: one thing I forgot to include is that the same behavior above applies to CMD in your Dockerfile; you can't do this:
CMD ["some", "command"]
CMD ["another", "command"]
Instead, you should concatenate the commands through a shell, just as in the docker-compose file:
CMD ["sh", "-c", "some command && another command"]
But this gets tedious if you have a lot of commands, so an alternative is to put all the commands you need in a shell script and execute it from your Dockerfile:
#!/bin/sh
# bash file with your commands
run wordpress && rename files && do something else
# later in your Dockerfile
CMD ["sh", "/path/to/file.sh"]
See this question.
As you haven't provided any code it's hard to say, but maybe you can use a RUN instruction as the last step in your Dockerfile (just before the CMD, if you are using one) to rename these files at build time, which IMHO makes more sense, because this is the kind of thing you should do while building your image. If you want more help, please include your code too.
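For instance, a build-time rename might look like this sketch; the paths are hypothetical, and I'm assuming the files live under /usr/src/wordpress, where the official image keeps them at build time:
FROM wordpress:latest
# rename the files once, at build time; paths are placeholders
RUN mv /usr/src/wordpress/old-name.php /usr/src/wordpress/new-name.php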

How to avoid a docker container stopping after the application is stopped

There is a docker container with a Postgres server. Once postgres has stopped or crashed (doesn't matter which), I need to check some environment variables and the state of a few files.
By default, the container stops after an application is finished.
I know there is an option to change the default behavior in the Dockerfile, but I can no longer find it.
If somebody knows it, please give me a Dockerfile example like this:
FROM something
RUN something ...
ENTRYPOINT [something]
You can simply run a non-exiting process at the end of the entrypoint to keep the container alive, even if the main process exits.
For example use
tail -f 'some log file'
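As a sketch, assuming the stock postgres image (whose entrypoint script is docker-entrypoint.sh), a hypothetical wrapper entrypoint could look like this:
#!/bin/sh
# entrypoint-wrapper.sh: run postgres, but keep the container alive after it exits
docker-entrypoint.sh postgres "$@" || true
# postgres has stopped or crashed; idle forever so we can exec in and inspect state
tail -f /dev/null
and the corresponding Dockerfile:
FROM postgres:13
COPY entrypoint-wrapper.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/entrypoint-wrapper.sh
ENTRYPOINT ["/usr/local/bin/entrypoint-wrapper.sh"]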
There isn't an "option" to keep a container running when the main process has stopped or died. You can run something different in the container while debugging the actual startup scripts. Sometimes you need to override an entrypoint to do this.
docker run -ti $IMAGE /bin/sh
docker run -ti --entrypoint=/bin/sh $IMAGE
If the main process will not stay running when you docker start the existing container, then you won't be able to use that container interactively; otherwise you could:
docker start $CID
docker exec -ti $CID sh
For getting files from an existing container, you can docker cp anything you need from the stopped container.
docker cp $CID:/a/path /some/local/path
You can also docker export a tar archive of the complete container.
docker export $CID -o $CID.tar
tar -tvf $CID.tar | grep afile
The environment Docker injects can be seen with docker inspect, but this won't give you anything the process has added to the environment.
docker inspect $CID --format '{{ json .Config.Env }}'
In general, Docker requires a process to keep running in the foreground. Otherwise, it assumes that the application is stopped and the container is shut down. Below, I outline a few ways, that I'm aware of, which can prevent a container from stopping:
Use a process manager such as runit or systemd to run a process inside a container:
As an example, here you can find a Redhat article about running systemd within a docker container.
A few possible approaches for debugging purposes:
a) Add an artificial sleep or pause to the entrypoint:
For example, in bash, you can use this to create an infinite pause:
while true; do sleep 1; done
b) For a fast workaround, one can run the tail command in the container:
As an example, with the command below we start a new container in detached/background mode (-d) and execute the tail -f /dev/null command inside it. As a result, the container will run forever.
docker run -d ubuntu:18.04 tail -f /dev/null
And if the main process crashed/exited, you can still look up the ENV variables or check out files with exec and basic commands like cd and ls. A few relevant commands for that:
docker inspect -f \
'{{range $index, $value := .Config.Env}}{{$value}} {{end}}' name-of-container
docker exec -it name-of-container bash
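Putting these together, a quick debugging session might look like this (the image and container name are just examples):
docker run -d --name debug-target ubuntu:18.04 tail -f /dev/null
docker exec -it debug-target bash
docker inspect -f '{{ json .Config.Env }}' debug-target
docker rm -f debug-target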

Executing wait-for-it.sh in python Docker container

I have a Python docker container that needs to wait until another container (a postgres server) finishes setup. I tried the standard wait-for-it.sh, but several commands it needs aren't included in the image. I tried a basic sleep (again in an sh file), but now it reports exec: 300: not found when it finally tries to execute the command I'm waiting on.
How do I get around this (preferably without changing or extending the image)?
I know I could also just run a Python script, but ideally I'd like to use wait-for-it.sh to wait for the server to come up rather than just sleeping.
Dockerfile (for stuffer):
FROM python:2.7.13
ADD ./stuff/bin /usr/local/bin/
ADD ./stuff /usr/local/stuff
WORKDIR /usr/local/bin
COPY requirements.txt /opt/updater/requirements.txt
COPY internal_requirements.txt /opt/stuff/internal_requirements.txt
RUN pip install -r /opt/stuff/requirements.txt
RUN pip install -r /opt/stuff/other_requirements.txt
docker-compose.yml:
version: '3'
services:
  local_db:
    build: ./local_db
    ports:
      - "localhost:5432:5432"
  stuffer:
    build: ./
    depends_on:
      - local_db
    command: ["./wait-for-postgres.sh", "-t", "300", "localhost:5432", "--", "python", "./stuffing.py", "--file", "./afile"]
Script I want to use (but can't because no psql or exec):
#!/bin/bash
# wait-for-postgres.sh
set -e

host="$1"
shift
cmd="$@"

until psql -h "$host" -U "postgres" -c '\l'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd
The answer was in Sergey's comment: I had the wrong argument order. This issue had nothing to do with docker and everything to do with my inability to read.
I made an example so you can see it working:
https://github.com/nitzap/wait-for-postgres
On the other hand, you can also get errors inside the script that validates the service is working. And you should not refer to localhost, because inside a container that means the container itself; if you want to point to another container, it has to be through the name of its service.
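As a sketch, given the script above (which takes the host as its first argument and implements no -t option), the compose command could target the local_db service name instead; note that psql would also have to be available inside the stuffer image:
command: ["./wait-for-postgres.sh", "local_db", "python", "./stuffing.py", "--file", "./afile"]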

docker CMD: run supervisord in the background

Is there any way to run supervisord in the background, i.e. start the process and get out of the shell?
I have a Dockerfile where I try to run a script that is supposed to start postgresql and then exit, so that I have a process running and I can create users.
Docker command
CMD ["/runprocess.sh"]
Script runprocess.sh:
#!/bin/bash
supervisord -c "/etc/supervisord.conf"
I have also tried to run it in the background, but no luck:
#!/bin/bash
supervisord -c "/etc/supervisord.conf" &
supervisord starts the process and just stays in the foreground forever. I want it to start the process and get out, so I can run the other parts of my script.
You can remove the nodaemon setting, or set it to false, in supervisord.conf:
[supervisord]
nodaemon=false ; run supervisord as a daemon (in the background)
This will make supervisord start in the background.
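As a sketch, runprocess.sh could then continue after supervisord detaches; note that once the script itself exits, the container stops, so something still has to hold the foreground (the user-creation step is a placeholder):
#!/bin/bash
# supervisord returns immediately because nodaemon=false
supervisord -c /etc/supervisord.conf
# the rest of the script can run now, e.g. creating users (placeholder command)
# createuser -U postgres someuser
# keep the container's main process alive so the container does not exit
tail -f /dev/null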

How to call stop scripts on BusyBox shutdown?

I'm running BusyBox with an entry in /etc/inittab
::sysinit:/etc/init.d/rcS
The rcS script calls all the start scripts in /etc/rc.d/ on startup.
How can I tell the BusyBox init to shut down all services properly, by calling /etc/rc.d/xxx stop, when the BusyBox applets "poweroff", "halt" or "reboot" are invoked?
Just for the record: I finally ended up adding my own shutdown script to /etc/inittab
::shutdown:/etc/init.d/rcD
The script just loops over the startup scripts in reverse order:
#!/bin/sh
if [ -d /etc/rc.d ]; then
  for x in $(ls -r /etc/rc.d/); do
    /etc/rc.d/$x stop
  done
fi