Kubernetes: How to run a Bash command on Docker container B from Docker container A

I've set up a simple Kubernetes test environment based on Docker-Multinode. I've also set up a quite large Docker image based on Ubuntu, which contains several of my tools. Most of them are written in C++ and have quite a lot of dependencies on libraries installed on the system and elsewhere.
My goal is to distribute legacy batch tasks between multiple nodes. I liked the easy setup of Docker-Multinode, but now I wonder if this is the right thing for me - since I have my actual legacy applications in the other Docker image.
How can I run a Bash command on Docker container B (the Ubuntu Docker container with my legacy tools) from Docker container A (the multinode worker Docker container)? Or is this not advisable at all?
To clarify, Docker container A (the multinode worker Docker container) and Docker container B (the legacy tools Ubuntu Docker container) run on the same host (each machine runs both of them).

Your question is really not clear:
Kubernetes runs Docker containers; any Docker container.
Kubernetes itself runs in Docker, and in 'multi-node' the dependencies needed by Kubernetes run in Docker, but in what is called bootstrapped Docker.
Now, it's not clear in your question where Docker A runs, vs. Docker B.
Furthermore, if you want to distribute batch jobs, then each job should be an independent job that runs in its own container, and container A should not depend on container B.
If you need the dependencies (libraries) in Docker container B to run your job, then you really only need to use Docker image B as the base image for your job containers A. A Docker image is layered, so even if it is big, any container that uses it as a base image loads it only once. It is therefore not a problem to have five containers of type A with B as the base image: the base image is only loaded once.
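For example, a job image built on top of the tools image might look like the following sketch (the image name my-legacy-tools and the script my-batch-job.sh are hypothetical placeholders):

```dockerfile
# Hypothetical job image: reuse the big Ubuntu tools image as the base layer.
FROM my-legacy-tools:latest

# Add only a thin job-specific layer on top; the base layers are shared
# between all job images built this way.
COPY my-batch-job.sh /usr/local/bin/my-batch-job.sh
RUN chmod +x /usr/local/bin/my-batch-job.sh

CMD ["/usr/local/bin/my-batch-job.sh"]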
If you really need one container to communicate with another, then you should build an API to pass commands from one to the other: a RESTful API of some sort that communicates over HTTP, passing a request for a process to run on one container and returning the result.
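A minimal sketch of such a command API in Python's standard library (the /run endpoint, the command whitelist, and the port are all hypothetical; a real service would add authentication and proper input validation):

```python
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical whitelist: only these commands may be requested remotely.
ALLOWED = {"date": ["date", "-u"], "hostname": ["hostname"]}

class CommandHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, e.g. {"cmd": "hostname"}.
        length = int(self.headers.get("Content-Length", 0))
        req = json.loads(self.rfile.read(length) or b"{}")
        cmd = ALLOWED.get(req.get("cmd"))
        if cmd is None:
            self.send_response(400)
            self.end_headers()
            return
        # Run the whitelisted command and return its output as JSON.
        out = subprocess.run(cmd, capture_output=True, text=True)
        body = json.dumps({"stdout": out.stdout, "rc": out.returncode}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def serve(port=8080):
    """Run the command API; container B would expose this port."""
    HTTPServer(("0.0.0.0", port), CommandHandler).serve_forever()
```

Container A would then POST {"cmd": "hostname"} to container B instead of trying to reach into B's filesystem or process space.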

Related

Is there a way to mount the docker socket from one container to another?

I'm looking to mount the docker socket from one container to another without involving the host. Is it possible? I searched around and couldn't find an example of such a situation. The issue is that the host uses a very old version of Docker, so I set up Docker within a container, which works okay. Now I need other Docker containers to use the socket from the base container and not the host. Is there any way to achieve this (in Kubernetes)?
The only way that comes to mind is to use a hostPath volume with type Socket, and mount it into multiple containers:
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
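A sketch of what that could look like (the pod and image names are hypothetical, and the socket must actually exist at that path on the node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: docker-client          # hypothetical name
spec:
  containers:
  - name: worker
    image: my-worker:latest    # hypothetical image
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket
```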
Even if it works, you will end up with "other containers" launching containers within your "newer docker" container, which is not a good practice. I would suggest spinning up another node with a newer Docker, connecting it to your master, and scheduling the part of the load that requires access to the Docker socket there. You can use a nodeSelector to schedule it properly:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#step-two-add-a-nodeselector-field-to-your-pod-configuration
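Concretely, that might look like the following (the label key and value are hypothetical; you would first apply the label to the new node with kubectl label):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: needs-new-docker       # hypothetical name
spec:
  nodeSelector:
    docker: new                # hypothetical label applied to the newer node
  containers:
  - name: worker
    image: my-worker:latest    # hypothetical image
```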
You can take this further on k8s by turning your control container into an operator (use the k8s API instead of the docker sock): https://www.slideshare.net/Jakobkaralus/the-kubernetes-operator-pattern-containerconf-nov-2017

how to run an onpremise service fabric cluster in docker containers for windows?

I am not sure if this is possible, but if it is and someone has the experience to do so, could we not create a Docker image for Windows that represents a node?
I imagine that we will have a folder with configuration files that can be mounted with docker -v.
Then if one needed a 5-node cluster, I would just run
docker run -v c:/dev/config:c:/config microsoft/servicefabric create-node --someOptions
for each node we wanted.
Are there any barriers to doing this? Has anyone created the Docker images for doing so? This would really simplify setting up a cluster on premise.
Using the 6.1 release you can run a cluster in a container, for dev/test purposes.
I'm not sure if you can get it to work with multiple containers though.
Service Fabric Linux Clusters in a Container
We have provided a pre-configured Docker container image to run Service Fabric Linux clusters in a container. The main scenario is to provide a lightweight development experience for macOS, but the container will also run on Linux and Windows, using Docker CE.
To get started, follow the directions here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-get-started-mac
and
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-local-linux-cluster-windows

Docker Remote API not accurately listing running containers in swarm

Currently I am facing the following problem:
I set up 3 VirtualBox machines with Debian and installed Docker. No firewall in place.
I created a swarm, making one machine the manager, and joined the other two as workers, as described in countless web pages. Works perfectly.
On the swarm manager I activated remote API access via -H :4243... and restarted the daemon (only on the swarm manager).
'docker node ls' qualifies all nodes being active.
When I call http://:4243/nodes I see all nodes.
I created an overlay network (most likely not needed to illustrate my problem; standard ingress networking should be OK too).
Then I created a service with 3 replicas, specifying a name, my overlay network, and some env params.
'docker service ps ' gives me the info that each node runs one container with my image.
Double-checking with 'docker ps' on each node says the same.
My problem is:
Calling 'http://:4243/containers/json' I only see one container, the one on the swarm manager.
I expect to see 3 containers, one for each node. The question is: why? Any ideas?
This question does not seem to be my problem.
Listing containers via /containers/json only shows "local" containers on that node. If you want a complete overview of every container on every node, you'll need to use the swarm aware endpoints. Docker Services are the high level abstraction, while Tasks are the container level abstraction. See https://docs.docker.com/engine/api/v1.30/#tag/Task for reference.
If you perform a request on your manager node at http://:4243/tasks you should see every task (aka container), on which node each is running, and to which service it belongs.
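For illustration, here is a small Python sketch that groups such a /tasks response by node. The sample data is hypothetical and trimmed to only the fields used below; real Task objects in the Docker Engine API carry many more fields:

```python
from collections import defaultdict

def tasks_per_node(tasks):
    """Map NodeID -> list of task IDs, counting only tasks actually running."""
    per_node = defaultdict(list)
    for task in tasks:
        if task.get("Status", {}).get("State") == "running":
            per_node[task["NodeID"]].append(task["ID"])
    return dict(per_node)

# Hypothetical, abridged response from GET /tasks on the manager.
sample = [
    {"ID": "t1", "NodeID": "node-1", "ServiceID": "svc", "Status": {"State": "running"}},
    {"ID": "t2", "NodeID": "node-2", "ServiceID": "svc", "Status": {"State": "running"}},
    {"ID": "t3", "NodeID": "node-3", "ServiceID": "svc", "Status": {"State": "running"}},
]
print(tasks_per_node(sample))  # one running task per node, matching 'docker service ps'
```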

how to execute docker diff command using kubernetes

I am using docker diff in my application in order to find all changed files in a container. Now, my application manages containers through Kubernetes and doesn't have direct access to them. I found Kubernetes equivalents for several docker commands (like kubectl logs), but docker diff is missing.
Is there a way to execute docker diff for a pod through Kubernetes?
Many thanks
Kubernetes (kubectl) does not offer an equivalent command. Ideally you should not be using this command at all outside your local development environment (which is Docker).
The best practice is to start containers with a read-only root filesystem, so that you avoid storing any important state in containers. Kubernetes can kill your pod and restart it on another node as a new container, so you should not care about the docker diff that happens inside the container.
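In a pod spec, that looks roughly like this (the pod and image names are hypothetical; readOnlyRootFilesystem is a standard securityContext field, and an emptyDir gives the app a writable scratch directory if it needs one):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: stateless-app          # hypothetical name
spec:
  containers:
  - name: app
    image: my-app:latest       # hypothetical image
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: tmp                # writable scratch space only
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir: {}
```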

What is the difference between Docker Host and Container

I started learning about Docker. But I keep getting confused, even though I have read about it in multiple places.
Docker Host and Docker Container.
Docker Engine is the base Engine that handles the containers.
Docker containers sit on top of the Docker engine. A container is created from a recipe (a text file with shell-like instructions): Docker pulls the image from the hub and you can install your stuff on it.
In a typical application environment, you will create separate containers for each piece of the system, Application Server, Database Server, Web Server, etc. (one container for each).
Docker Swarm is a cluster of containers.
Where does the Docker Host come in? Is this another word for Container, or another layer where you can keep multiple containers together?
Sorry may be a basic question.
I googled this, but no use.
The docker host is the traditional base OS server where the OS and processes run in normal (non-container) mode. So the OS and processes you start by actually powering on and booting a server (or VM) are the docker host. The processes that start within containers via docker commands are your containers.
To make an analogy: the docker host is the playground, the docker containers are the kids playing around in there.
Docker Host is the machine on which Docker Engine is installed.
The Host is the machine managing the containers and images, where you actually installed Docker.
Docker host is the machine where you installed the Docker engine. A Docker container can be compared to a simple process running on that same Docker host.
The Host is the underlying OS and its support for app isolation (i.e., process and user isolation via "containers"). Docker provides an API that defines a method of application packaging and methods for working with the containers.
Host = container implementation
Docker = app packaging and container management