docker swarm restart 'any' meaning [duplicate] - docker-compose

In the docker swarm (Compose file v3) docs, there are three different restart policy conditions that can be used. It's obvious what the none condition does, but I was wondering specifically what the difference between on-failure and any is.
The docs summarize the three conditions in a table: none, on-failure, and any.

The on-failure policy restarts the container any time it exits with a non-zero exit code. The any policy covers the other scenarios as well, but the restart may only happen when the daemon restarts, depending on how the container was stopped (e.g. intentionally stopping a container with docker stop does not result in an immediate restart).
See this documentation for more details: https://docs.docker.com/config/containers/start-containers-automatically/
Note: I do not recommend a restart policy for containers running within swarm mode. I've seen scenarios, e.g. the host running out of memory, where both swarm mode and the docker engine attempted to restart the container; it's best to let swarm mode recreate the container, possibly on another host.
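For context, a minimal Compose v3 sketch of where that condition is set; the service name, image, and the other restart_policy values below are placeholders rather than anything from the question:

```yaml
version: "3.8"
services:
  web:                        # placeholder service name
    image: nginx:alpine       # placeholder image
    deploy:
      restart_policy:
        condition: on-failure # one of: none, on-failure, any (any is the default)
        delay: 5s
        max_attempts: 3
        window: 120s
```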

Related

Is there a way to mount the docker socket from one container to another?

I'm looking to mount the docker socket from one container to another without involving the host. Is it possible? I searched around and couldn't find an example of such a situation. The issue is that the host uses a very old version of Docker, so I set up Docker within a container, which works okay. Now I need other Docker containers to use the socket from the base container and not the host. Is there any way to achieve this (in Kubernetes)?
The only way that comes to mind is to use a hostPath volume with type Socket, and mount it into multiple containers:
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
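A minimal sketch of that idea, assuming (this is an assumption, not from the question) that the socket ends up at /var/run/docker.sock on the node; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: docker-client              # placeholder
spec:
  containers:
    - name: client
      image: docker:cli            # placeholder; any image with a docker CLI works
      command: ["sleep", "3600"]   # keep the pod running for demonstration
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock # assumed path where the "newer docker" container exposes its socket
        type: Socket               # the mount fails if no socket exists at that path
```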
Even if it works, you will end up with the "other containers" launching containers inside your "newer docker" container, which is not good practice. I would suggest spinning up another node with a newer Docker, connecting it to your master, and scheduling the part of the load that requires access to the Docker socket there. You can use a nodeSelector to schedule it properly:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#step-two-add-a-nodeselector-field-to-your-pod-configuration
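A rough sketch of the nodeSelector part; the label key/value and image are made up for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: needs-docker-sock    # placeholder
spec:
  nodeSelector:
    docker: new              # hypothetical label, e.g. kubectl label nodes <node-name> docker=new
  containers:
    - name: job
      image: your-job-image  # placeholder
```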
You can take this further on k8s by turning your control container into an operator https://www.slideshare.net/Jakobkaralus/the-kubernetes-operator-pattern-containerconf-nov-2017 (use the k8s API instead of the docker socket).

Postgres connection refused (via CloudSQL proxy) when doing a rolling update in Kubernetes

When I do a rolling update, I get exceptions from Sentry saying:
DatabaseError('server closed the connection unexpectedly. This probably means the server terminated abnormally before or while processing the request.',...)
I have two containers running inside each Pod: my app container and a cloudsql-proxy container, which the app container uses to communicate with Cloud SQL.
Is there a way to make sure that my app container goes down first during the 30 seconds of grace period (terminationGracePeriodSeconds)?
In other words, I want to drain the connections and have all the current requests finish before the cloudsql-proxy is taken out.
It would be ideal if I could specify that the app container be taken down first during the 30 seconds of grace period, and then the cloudsql-proxy.
This discussion suggests setting terminationGracePeriodSeconds or a preStop hook in the manifest.
Another idea that could work is running the two containers in different Pods, to allow granular control over the rolling update. You might also want to consider using init containers in your Deployment so that the proxy is ready before your app container starts.
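One way to combine those suggestions is a preStop sleep on the proxy container so the app drains first. This is only a sketch: the image names are placeholders, the sleep length is arbitrary, and it assumes a proxy image variant that ships a shell:

```yaml
# Pod template portion of the Deployment (sketch only)
spec:
  terminationGracePeriodSeconds: 30
  containers:
    - name: app
      image: your-app-image              # placeholder
    - name: cloudsql-proxy
      image: your-cloudsql-proxy-image   # placeholder; must contain /bin/sh for the hook below
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 25"]  # keep the proxy up while the app finishes in-flight requests
```

The preStop hook runs before the container receives SIGTERM, so keeping the sleep shorter than terminationGracePeriodSeconds gives the app time to drain while the proxy is still serving.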

Docker Remote API not accurately listing running containers in swarm

Currently I am facing the following problem:
I set up 3 VirtualBox machines with Debian and installed Docker. No firewall is in place.
I created a swarm, making one machine the manager, and joined the other two as workers, as described in countless web pages. Works perfectly.
On the swarm manager I activated remote API access via -H :4243... and restarted the daemon (only on the swarm manager).
'docker node ls' shows all nodes as active.
When I call http://:4243/nodes I see all nodes.
I created an overlay network (most likely not needed to illustrate my problem; the standard ingress networking should be OK too).
Then I created a service with 3 replicas, specifying a name, my overlay network, and some env params.
'docker service ps ' tells me that each node runs one container with my image.
Double-checking with 'docker ps' on each node says the same.
My problem is:
Calling 'http://:4243/containers/json', I only see one container: the one on the swarm manager.
I expect to see 3 containers, one for each node. The question is: why?
Any ideas?
This question does not seem to be my problem.
Listing containers via /containers/json only shows "local" containers on that node. If you want a complete overview of every container on every node, you'll need to use the swarm-aware endpoints. Docker services are the high-level abstraction, while tasks are the container-level abstraction. See https://docs.docker.com/engine/api/v1.30/#tag/Task for reference.
If you perform a request on your manager node at http://:4243/tasks you should see every task (i.e. container), on which node it is running, and which service it belongs to.
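For example, something along these lines (assuming the API is published on port 4243 as in the question; <manager-ip> is a placeholder for your manager's address, and jq is optional, purely for readability):

```sh
curl -s http://<manager-ip>:4243/tasks \
  | jq '.[] | {ID, NodeID, ServiceID, State: .Status.State}'
```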

Kubernetes: How to run a Bash command on Docker container B from Docker container A

I've set up a simple Kubernetes test environment based on Docker-Multinode. I've also set up a quite large Docker image based on Ubuntu, which contains several of my tools. Most of them are written in C++ and have quite a lot of dependencies on libraries installed on the system and elsewhere.
My goal is to distribute legacy batch tasks between multiple nodes. I liked the easy setup of Docker-Multinode, but now I wonder if this is the right thing for me - since I have my actual legacy applications in the other Docker image.
How can I run a Bash command on Docker container B (the Ubuntu Docker container with my legacy tools) from Docker container A (the multinode worker Docker container)? Or is this not advisable at all?
To clarify, Docker container A (the multinode worker Docker container) and Docker container B (the legacy tools Ubuntu Docker container) run on the same host (each machine will have both of them).
Your question is really not clear:
Kubernetes runs Docker containers; any Docker container.
Kubernetes itself runs in Docker, and in 'multi-node' the dependencies needed by Kubernetes run in Docker, but in what is called bootstrapped Docker.
Now, it's not clear in your question where Docker A runs, vs. Docker B.
Furthermore, if you want to 'distribute' batch jobs, then each job should be an independent job that runs in its own container, and container A should not depend on container B.
If you need the dependencies (libraries) in Docker container B to run your job, then you really only need to use Docker image B as the base image for your job containers A. A Docker image is layered, so even if it is big, another image that uses it as a base only causes it to be loaded once overall; it's not a problem to have five containers of type A with B as the base image, because the base image is shared.
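As a sketch of that base-image approach (the image name and script are hypothetical, not from the question):

```dockerfile
# "legacy-tools" stands in for the large Ubuntu image containing the C++ tools and libraries.
FROM legacy-tools:latest

# The job image only adds a thin entrypoint; the heavy base layers are shared
# by every job container built this way.
COPY run-job.sh /usr/local/bin/run-job.sh
ENTRYPOINT ["/usr/local/bin/run-job.sh"]
```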
If you really need a container to communicate with another, then you should build an API to pass commands from one to the other (a RESTful API of some sort, that can communicate via HTTP calls to pass requests for a process to run on one container and return a result).

What is the difference between Docker Host and Container

I started learning about Docker, but I keep getting confused, even though I have read about it in multiple places.
Docker Host and Docker Container.
Docker Engine is the base engine that handles the containers.
Docker containers sit on top of the Docker engine. A container is created from a recipe (a text file of shell-like instructions, i.e. a Dockerfile); it pulls the image from the hub and you can install your stuff on it.
In a typical application environment, you will create separate containers for each piece of the system, Application Server, Database Server, Web Server, etc. (one container for each).
Docker Swarm is a cluster of containers.
Where does the Docker Host come in? Is this another word for Container or another layer where you can keep multiple containers together?
Sorry, this may be a basic question.
I googled it, but to no avail.
The docker host is the base traditional OS server where the OS and processes are running in normal (non-container) mode. So the OS and processes you start by actually powering on and booting a server (or VM) are the docker host. The processes that start within containers via docker commands are your containers.
To make an analogy: the docker host is the playground, the docker containers are the kids playing around in there.
Docker Host is the machine on which Docker Engine is installed.
The Host is the machine managing the containers and images, where you actually installed Docker.
Docker host is the machine where you installed the Docker engine. The Docker container can be compared to a simple process running on that same Docker host.
The Host is the underlying OS and its support for app isolation (i.e., process and user isolation via "containers"). Docker provides an API that defines a method of application packaging and methods for working with the containers.
Host = container implementation
Docker = app packaging and container management