Is it possible to restart a container in docker-compose if the service running inside it returns an exit code other than 0? The docker-compose.yml option restart: always doesn't work that way. Is there any way to solve this, or is it a service issue, meaning I should look for the answer inside the container?
I use supervisord, but adding the option autorestart=true doesn't work: even when the service crashes with exit code 255, the RUNNING_PID file (created by the system) is not deleted.
Thanks for any reply.
restart: always will restart the container regardless of the exit code, even when the process running inside the container exits with 0. I'm using restart: on-failure and it does exactly what you describe: it restarts the container on a non-zero exit code of the process. If the process exits and the container is not restarted, you can check the exit code using docker-compose ps.
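For example, a minimal compose file using it might look like this (the service name and image are placeholders):

version: "3"
services:
  app:
    image: my-image   # placeholder
    # restart the container only when its process exits with a non-zero code
    restart: on-failure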
Related
In Kubernetes, when a Pod repeatedly crashes and is in CrashLoopBackOff status, it is not possible to shell into the container and poke around to find the problem, because containers (unlike VMs) live only as long as their primary process. If I shell into a container and the Pod is restarted, I'm kicked out of the shell.
How can I keep a Pod from crashing so that I can investigate if my primary process is failing to boot properly?
Redefine the command
In development only, a temporary hack to keep a Kubernetes pod from crashing is to redefine it and specify the container's command (corresponding to a Docker ENTRYPOINT) and args to be a command that will not crash. For instance:
containers:
  - name: something
    image: some-image
    # `sh -c` evaluates a string as shell input
    command: ["sh", "-c"]
    # loop forever, outputting "yo" every 5 seconds
    args: ["while true; do echo 'yo' && sleep 5; done;"]
This allows the container to run and gives you a chance to shell into it, like kubectl exec -it pod/some-pod -- sh, and investigate what may be wrong.
This needs to be undone after debugging so that the container will run the command it's actually meant to run.
Adapted from this blog post.
There are also other methods for debugging pods that are worth noting for your use case:
If your container has previously crashed, you can access the previous container's crash log with: kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
Debugging with an ephemeral debug container: ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image doesn't include debugging utilities, such as with distroless images. Beginning with version v1.18, kubectl has an alpha command that can create ephemeral containers for debugging. An example of this method can be found here.
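For illustration, with a recent kubectl it looks roughly like this (pod and container names reuse the example above; in v1.18 the command was still kubectl alpha debug):

# Attach a busybox-based ephemeral container to the pod "some-pod",
# targeting the process namespace of the container "something"
kubectl debug -it some-pod --image=busybox --target=something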
In my case, the pod crashed because I built the image on a Mac with an M1/Apple Silicon chip, and there was no explicit message about this.
The problem was that I was also debugging with Docker on the same M1, so I could not see what was wrong.
I needed to build the image with docker build --platform linux/amd64.
I have two containers: A and B. Container B needs to be restarted each time container A is recreated to pick up that container's new id.
How can this be accomplished without hackery?
Not something I've tried to do before, but... the docker daemon emits events when certain things happen. You can see some of these at https://docs.docker.com/engine/reference/commandline/events/#parent-command but, for example:
Docker containers report the following events:
attach
commit
copy
create
destroy
detach
die
exec_create
exec_detach
exec_start
export
health_status
kill
oom
pause
rename
resize
restart
start
stop
top
unpause
update
By default, on a single docker host, you can talk to the daemon through a unix socket /var/run/docker.sock. You can also bind that unix socket into a container so that you can catch events from inside a container. Here's a simple docker-compose.yml that does this:
version: '3.2'

services:
  container_a:
    image: nginx
    container_name: container_a

  container_b:
    image: docker
    container_name: container_b
    command: docker events
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
Start this stack with docker-compose up -d. Then, in one terminal, run docker logs -f container_b. In another terminal, run docker restart container_a, and you'll see events in the log window showing the container restarting. Your application can catch those events using a docker client library and then either terminate itself and wait to be restarted, or otherwise arrange its own restart or reconfiguration.
Note that these events will actually tell you the new container's ID, so maybe you don't even need to restart?
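As a rough shell sketch of the idea (container names match the compose file above; this could run anywhere the docker CLI can reach the daemon, and restarting container_b is just one possible reaction):

# Listen only for "start" events from container_a and print each new ID;
# docker events blocks, so consume its output line by line
docker events \
  --filter 'container=container_a' \
  --filter 'event=start' \
  --format '{{.ID}}' |
while read -r new_id; do
  echo "container_a started with id $new_id"
  docker restart container_b
done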
I can't find more information about these commands.
Should we use docker stop for containers that we started with docker start?
Is it the same for docker-compose up?
What is the difference between stop and down?
In the docker-compose help:
stop Stop services
down Stop and remove containers and networks (optionally images and volumes as well)
# Stop services only
docker-compose stop
# Stop and remove containers, networks..
docker-compose down
# Down and remove volumes
docker-compose down --volumes
# Down and remove images
docker-compose down --rmi <all|local>
Following are the differences among various docker-compose command options:
docker-compose up - starts (and restarts) all the services defined in docker-compose.yml
docker-compose down - stops running containers, and also removes the stopped containers as well as any networks that were created. You can take this one step further and add the -v flag to remove all volumes too, which is great for a full-blown reset of your environment: docker-compose down -v
docker-compose start - only restarts containers that were previously created and then stopped
docker-compose stop - stops running containers but won't remove them
Just to answer the other part of the question:
Use docker-compose up to start or restart all the services defined in a docker-compose.yml.
The docker-compose start command is useful only to restart containers that were previously created, but were stopped. It never creates new containers.
The docker-compose run command is for running "one-off" or "ad hoc" tasks.
For further information visit this page.
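For instance, a hypothetical one-off task (the service name and command are placeholders):

# Run a one-off command in a fresh container for the "web" service;
# --rm removes the container once the command exits
docker-compose run --rm web ./manage.py migrate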
I've recently started to work with docker-compose, and I wondered how I can start all the containers in my docker-compose file in detached mode.
After searching the web for a bit, I found a solution:
add the properties tty: true and stdin_open: true to a container; I can then attach to the container and detach with Ctrl+P followed by Ctrl+Q without killing it.
The simple and ugly solution is to add these properties to all containers, but I wonder if there is a nicer way, maybe something with docker-compose up that can somehow start all containers in detached mode.
To start containers in detached mode (as already mentioned above), you need to use docker-compose up -d.
If you need to manually get in and out of the running container once it's started to perform some additional tasks, you can use docker-compose exec.
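Concretely (the service name web is a placeholder):

# Start all services defined in docker-compose.yml in the background
docker-compose up -d

# Later, open an interactive shell in the running "web" service container
docker-compose exec web sh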
So I'm working on a docker compose file to deploy my Go web server. My server uses mongo, so I added a data volume container and the mongo service in docker compose.
Then I wrote a Dockerfile in order to build my Go project, and finally run it.
However, there is another step that must be done. Once my project has been compiled, I have to run the following command:
./my-project -setup
This will add some necessary information to the database, and the information only needs to be added once.
I can't, however, add this step to the Dockerfile (in the build process), because mongo must already be started.
So, how can I achieve this? Even if I restart the server and then run docker-compose up again, I don't want this command to be executed again.
I think I'm missing some Docker understanding, because I don't actually understand everything about data volume containers (are they just stopped containers that mount a volume?).
Also, if I restart the server, and then run docker-compose up, which commands will be run? Will it just start the same container that was now stopped with the given CMD?
In any case, here is my docker-compose.yml:
version: '2'

services:
  mongodata:
    image: mongo:latest
    volumes:
      - /data/db
    command: --break-mongo

  mongo:
    image: mongo:latest
    volumes_from:
      - mongodata
    ports:
      - "28001:27017"
    command: --smallfiles --rest --auth

  my_project:
    build: .
    ports:
      - "6060:8080"
    depends_on:
      - mongo
      - mongodata
    links:
      - mongo
And here is my Dockerfile to build my project image:
FROM golang
ADD . /go/src/my_project
RUN cd /go/src/my_project && go get
RUN go install my_project
RUN my_project -setup
ENTRYPOINT /go/bin/my_project
EXPOSE 8080
I suggest adding an entrypoint-script to your container; in this entrypoint-script, you can check if the database has been initialized and, if it isn't, perform the required steps.
As you noticed in your question, the order in which services / containers are started should not be taken for granted, so it's possible your application container is started before the database container; the script should take that into account.
As an example, have a look at the official WordPress image, which performs a one-time initialization of the database in its entrypoint-script. The script attempts to connect to the database (retrying if the database cannot be contacted yet) and checks if initialization is needed; https://github.com/docker-library/wordpress/blob/df190dc9c5752fd09317d836bd2bdcd09ee379a5/apache/docker-entrypoint.sh#L146-L171
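Below is a minimal sketch of such a script. The binary name comes from the question; the marker-file path, the retry count, and the assumption that my_project -setup exits non-zero while mongo is unreachable are mine:

#!/bin/sh
# docker-entrypoint.sh: perform one-time setup, then start the server
set -e

# The marker file records that setup has already run; it should live
# on a persistent volume so it survives container re-creation
MARKER=/data/.setup-done

if [ ! -f "$MARKER" ]; then
  # Retry, since the mongo container may not be accepting connections yet
  n=0
  until my_project -setup; do
    n=$((n + 1))
    if [ "$n" -ge 30 ]; then
      echo "setup failed after 30 attempts, giving up" >&2
      exit 1
    fi
    echo "mongo not ready, retrying in 2s..."
    sleep 2
  done
  touch "$MARKER"
fi

# Replace the shell with the real process so it receives signals
exec /go/bin/my_project "$@"

In the Dockerfile you would then drop the RUN my_project -setup line, COPY this script into the image, and point the ENTRYPOINT at it.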
NOTE
I notice you created a "data-only container" to attach your volume to. Since docker 1.9, docker has volume management, including naming volumes. Because of this, you no longer need to use "data-only" containers.
You can remove the data-only container from your compose file, and change your mongo service to look something like this;
mongo:
  image: mongo:latest
  volumes:
    - mongodata:/data/db
  ports:
    - "28001:27017"
  command: --smallfiles --rest --auth

and declare the named volume at the top level of the compose file:

volumes:
  mongodata:
This should create a new volume named mongodata if it doesn't exist, or re-use the existing volume with that name. You can list all volumes using docker volume ls and remove a volume with docker volume rm <some-volume> if you no longer need it.
You could try to use the ONBUILD instruction:
The ONBUILD instruction adds to the image a trigger instruction to be executed at a later time, when the image is used as the base for another build. The trigger will be executed in the context of the downstream build, as if it had been inserted immediately after the FROM instruction in the downstream Dockerfile.
Any build instruction can be registered as a trigger.
This is useful if you are building an image which will be used as a base to build other images, for example an application build environment or a daemon which may be customized with user-specific configuration.
For example, if your image is a reusable Python application builder, it will require application source code to be added in a particular directory, and it might require a build script to be called after that. You can’t just call ADD and RUN now, because you don’t yet have access to the application source code, and it will be different for each application build. You could simply provide application developers with a boilerplate Dockerfile to copy-paste into their application, but that is inefficient, error-prone and difficult to update because it mixes with application-specific code.
The solution is to use ONBUILD to register advance instructions to run later, during the next build stage.
Here’s how it works:
When it encounters an ONBUILD instruction, the builder adds a trigger to the metadata of the image being built. The instruction does not otherwise affect the current build.
At the end of the build, a list of all triggers is stored in the image manifest, under the key OnBuild. They can be inspected with the docker inspect command.
Later the image may be used as a base for a new build, using the FROM instruction. As part of processing the FROM instruction, the downstream builder looks for ONBUILD triggers, and executes them in the same order they were registered. If any of the triggers fail, the FROM instruction is aborted which in turn causes the build to fail. If all triggers succeed, the FROM instruction completes and the build continues as usual.
Triggers are cleared from the final image after being executed. In other words they are not inherited by “grand-children” builds.
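For example, a hypothetical Go builder image could register triggers like this (the paths follow the old GOPATH layout of the question's Dockerfile and are purely illustrative):

FROM golang
# These instructions do not run now; they are recorded as triggers and
# executed during the build of any image that uses this one as a base
ONBUILD ADD . /go/src/app
ONBUILD RUN cd /go/src/app && go get && go install app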
In docker-compose you can define:
restart: "no"
to run the container only once, which is useful, for example, for db-migration containers. (The "no" is quoted because YAML would otherwise parse a bare no as the boolean false.)
Your application needs some initial state to work. That means you should:
Check whether the required state already exists
Depending on the result of the first step, initialize the state or not
You can write a program that checks the current database state (here I'll use a shell script, but it could be a program in any other language):
RUN if ./check.sh; then my_project -setup; fi
In my case, if the script returns 0 (a success exit status), the setup command is called.
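As a sketch of such a check.sh, everything here is an assumption: it supposes the legacy mongo shell is available in the image, the database host is mongo, and an empty settings collection in my_db means the setup has not run yet:

#!/bin/sh
# check.sh - exits 0 (success) when the one-time setup is still needed
# Count documents in a hypothetical "settings" collection
COUNT=$(mongo --host mongo --quiet --eval 'db.settings.count()' my_db)
[ "$COUNT" -eq 0 ]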