What is the difference between Docker Host and Container - deployment

I started learning about Docker, but I keep getting confused, even though I have read about it in multiple places.
Docker Host and Docker Container.
Docker Engine is the base Engine that handles the containers.
Docker containers sit on top of the Docker Engine. A container is created from a recipe (a Dockerfile, a text file with build instructions). It pulls the image from the hub, and you can install your stuff on it.
In a typical application environment, you will create separate containers for each piece of the system, Application Server, Database Server, Web Server, etc. (one container for each).
Docker Swarm is a cluster of containers.
Where does the Docker Host come in? Is this another word for Container or another layer where you can keep multiple containers together?
Sorry, maybe a basic question.
I googled this, but to no avail.

The Docker host is the traditional base OS server, where the OS and processes run in normal (non-container) mode. So the OS and processes you start by actually powering on and booting a server (or VM) are the Docker host. The processes started within containers via docker commands are your containers.
To make an analogy: the docker host is the playground, the docker containers are the kids playing around in there.
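To see the distinction on a machine that already has Docker installed (the image name below is just an example), you can watch a container show up as an ordinary process on the host:

```shell
# On the Docker host: start a container that just sleeps
docker run -d --name demo alpine sleep 300

# The "container" is, from the host's point of view, a normal process:
docker top demo              # the sleep process as Docker reports it
ps aux | grep '[s]leep 300'  # the same process in the host's own process list

docker rm -f demo            # clean up
```

The host's kernel runs everything; the container is just a process with extra isolation around it.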

Docker Host is the machine on which Docker Engine is installed.

The Host is the machine managing the containers and images, where you actually installed Docker.

The Docker host is the machine where you installed the Docker engine. A Docker container can be compared to a simple process running on that same Docker host.

The Host is the underlying OS and its support for app isolation (i.e., process and user isolation via "containers"). Docker provides an API that defines a method of application packaging and methods for working with the containers.
Host = container implementation
Docker = app packaging and container management

Related

Access host process/terminal from docker container

Hello there,
I am trying to dockerize a Node application. I created two containers and a docker-compose.yml file. The containers are built successfully and run, but one of them needs to interact with a host process. How is this possible?
Thanks in advance
UPDATE 1
My application runs some commands with sudo. Probably I have to let the Docker container execute commands that target the host system. Any ideas?
I assume that by "interact with a host process" you mean interaction over some network protocol, in which case you will need access to the host's IP address from the container.
The host computer's IP is the container's default gateway if you are using Docker's bridge network. This is the case if you did not provide a specific network configuration in your docker-compose.yml (https://docs.docker.com/compose/compose-file/#network-configuration-reference).
Since you are using node.js, you can use the default-gateway package (https://www.npmjs.com/package/default-gateway) to obtain this IP.
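If you'd rather not pull in a package, the same lookup can be done by hand from /proc/net/route, which stores each route's gateway as little-endian hex (a sketch; the field layout is the standard Linux procfs format):

```shell
#!/bin/bash
# /proc/net/route lists kernel routes; the default route has
# destination 00000000, and field 3 is the gateway as
# little-endian hex (e.g. 010011AC = 172.17.0.1).

hex_to_ip() {
  # Convert little-endian hex to dotted decimal
  local h=$1
  printf '%d.%d.%d.%d' "0x${h:6:2}" "0x${h:4:2}" "0x${h:2:2}" "0x${h:0:2}"
}

default_gateway() {
  local gw
  gw=$(awk '$2 == "00000000" { print $3; exit }' /proc/net/route)
  [ -n "$gw" ] && hex_to_ip "$gw"
}

# Inside a bridge-network container this is the host's IP
gw=$(default_gateway)
echo "default gateway: ${gw:-unknown}"
```

Run inside the container, this prints the bridge gateway, i.e. the address the container can use to reach services listening on the host.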
You can't execute host applications from within your containers, because they're not in your container's filesystem, and you shouldn't try to do that. Instead you should install all the software your app needs inside your Docker container, as a dependency of your application.
You could use a sock file, used for interprocess communication, and hand it to the container, similar to what watchtower does to control the Docker daemon. You may have to create a small application that pipes a shell to the sock file and install it on the host to serve the container.
sock files:
What are .sock files and how to communicate with them
watchtower:
https://github.com/containrrr/watchtower
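A sketch of that idea, assuming `socat` is installed on the host (the socket path and the example command are made up). Be aware this effectively hands the container a shell on the host, so it should only be used when that is exactly what you want:

```shell
# On the host: serve a Unix socket that feeds each received line to a shell.
# WARNING: this gives the container arbitrary command execution on the host.
socat UNIX-LISTEN:/tmp/hostpipe.sock,fork EXEC:'/bin/sh'

# Start the container with the socket bind-mounted in:
docker run -v /tmp/hostpipe.sock:/hostpipe.sock my-app

# Inside the container: send a command to the host over the socket
echo 'systemctl restart myservice' | socat - UNIX-CONNECT:/hostpipe.sock
```

This keeps the container's filesystem untouched; only the socket crosses the boundary.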

Minikube VM Driver: None vs Virtualbox/KVM

What are the differences in running Minikube with a VM hypervisor (VirtualBox/KVM) vs none?
I am not asking whether or not Minikube can run without a hypervisor. I know that running with '--vm-driver=none' is possible; it runs on the local machine and requires Docker to be installed.
I am asking what the performance differences are. There is not a lot of documentation on how '--vm-driver=none' works. I am wondering whether running without the VM affects the functionality of Minikube.
This is how I explain it to myself:
driver!=none mode
In this case minikube provisions a new docker-machine (Docker daemon/Docker host) using any of the supported providers. For instance:
a) local provider = your Windows/Mac local host: it frequently uses VirtualBox as a hypervisor, and creates inside it a VM based on the boot2docker image (configurable). In this case the k8s bootstrapper (kubeadm) creates all Kubernetes components inside this isolated VM. In this setup you usually have two Docker daemons: your local one for development (if you installed it beforehand), and one running inside the minikube VM.
b) cloud hosts - not supported by minikube
driver=none mode
In this mode, your local docker host is re-used.
In the first case there will be a performance penalty, because each VM generates some overhead by running several system processes required by the VM itself, in addition to those required by the k8s components running inside the VM. I think driver=none mode is similar to the "kind" flavor of k8s bootstrapper, meant for doing CI/integration tests.
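In command form, the two modes look like this (a sketch; flags as documented for minikube at the time):

```shell
# driver != none: minikube creates a VM (here VirtualBox) and runs
# the Docker daemon and all k8s components inside it
minikube start --vm-driver=virtualbox

# driver = none: no VM; the k8s components run directly against the
# local Docker host (Linux only, typically requires root)
sudo minikube start --vm-driver=none
```

The trade-off is overhead vs isolation: none mode is faster to start and lighter, but everything it installs lands on your actual machine.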

Is there a way to mount the docker socket from one container to another?

I'm looking to mount the Docker socket from one container into another without involving the host. Is it possible? I searched around and couldn't find an example of such a situation. The issue is that the host uses a very old version of Docker, so I set up Docker within a container, which works okay. Now I need other Docker containers to use the socket from the base container and not the host. Is there any way to achieve this (in Kubernetes)?
The only way that comes to mind is to use a hostPath volume with type Socket, and mount it into multiple containers:
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
Even if it works, you will end up with "other containers" launching containers within your "newer docker" container, which is not a good practice. I would suggest spinning up another node with a newer Docker, connecting it to your master, and scheduling the part of the load that requires access to the Docker sock there. You can use nodeSelector to schedule it properly:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#step-two-add-a-nodeselector-field-to-your-pod-configuration
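A sketch of both pieces together (the names, label, and image are made up for illustration): a hostPath volume of type Socket exposing the Docker socket, plus a nodeSelector pinning the pod to the node you labeled as running the newer Docker:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: docker-client
spec:
  nodeSelector:
    docker-version: newer        # a label you put on the node with the newer Docker
  containers:
    - name: client
      image: docker:cli          # any image containing a docker client
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
        type: Socket             # fails to mount unless a socket exists at that path
```

The `type: Socket` check is what makes this safer than a plain hostPath: the pod won't start against a path that isn't actually a Unix socket.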
You can take this further on k8s by turning your control container into an operator (use the k8s API instead of the Docker sock): https://www.slideshare.net/Jakobkaralus/the-kubernetes-operator-pattern-containerconf-nov-2017

How to run an on-premise Service Fabric cluster in Docker containers for Windows?

I am not sure if this is possible, but if it is and someone has the experience to do so: could we not create a Docker image for Windows that represents a node?
I imagine that we would have a folder with configuration files that can be mounted with docker -v.
Then if one needed a 5-node cluster, I would just run
docker run -v c:/dev/config:c:/config microsoft/servicefabric create-node --someOptions
for each node we wanted.
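Spelled out as a loop, the idea would look something like this. To be clear, this is hypothetical: the image name, the create-node command, and the options are all invented in the question, and no such official image exists as far as I know:

```shell
# Hypothetical: spin up a 5-node cluster, one container per node
for i in 1 2 3 4 5; do
  docker run -d --name "sf-node-$i" \
    -v c:/dev/config:c:/config \
    microsoft/servicefabric create-node --someOptions
done
```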
Are there any barriers to doing this? Has anyone created the Docker images for doing so? This would really simplify setting up a cluster on premise.
Using the 6.1 release you can run a cluster in a container, for dev/test purposes.
I'm not sure if you can get it to work with multiple containers though.
Service Fabric Linux Clusters in a Container
We have provided a pre-configured Docker container image to run Service Fabric Linux clusters in a container. The main scenario is to provide a light-weight development experience for MacOS, but the container will also run on Linux and Windows, using Docker CE.
To get started, follow the directions here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-get-started-mac
and
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-local-linux-cluster-windows

Kubernetes: How to run a Bash command on Docker container B from Docker container A

I've set up a simple Kubernetes test environment based on Docker-Multinode. I've also set up a quite large Docker image based on Ubuntu, which contains several of my tools. Most of them are written in C++ and have quite a lot of dependencies to libraries installed on the system and otherwise.
My goal is to distribute legacy batch tasks between multiple nodes. I liked the easy setup of Docker-Multinode, but now I wonder if this is the right thing for me - since I have my actual legacy applications in the other Docker image.
How can I run a Bash command on Docker container B (the Ubuntu Docker container with my legacy tools) from Docker container A (the multinode worker Docker container)? Or is this not advisable at all?
To clarify, Docker container A (the multinode worker Docker container) and Docker container B (the legacy tools Ubuntu Docker container) run on the same host (each machine will have both of them).
Your question is really not clear:
Kubernetes runs Docker containers; any Docker container.
Kubernetes itself runs in Docker, and in 'multi-node' the dependencies needed by Kubernetes run in Docker, but in what is called bootstrapped Docker.
Now, it's not clear in your question where Docker A runs, vs. Docker B.
Furthermore, if you want to 'distribute' batch jobs, then each job should be an independent job that runs in its own container, and one container should not depend on another.
If you need the dependencies (libraries) in Docker container B to run your job, then you really only need to use Docker container B's image as the base image for your job containers A. A Docker image is layered, so even if it is big, its layers are stored only once; it's not a problem to have five containers of type A with B as the base image, because the base image's layers are shared by all of them.
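In Dockerfile terms (the image names and script are placeholders), the job image would simply build on the tools image:

```dockerfile
# Image B: the big Ubuntu image with the legacy C++ tools,
# built once, e.g. tagged my-registry/legacy-tools:1.0

# Image A: each job image builds on top of it; the shared base
# layers are pulled and stored only once per node.
FROM my-registry/legacy-tools:1.0
COPY run-job.sh /usr/local/bin/run-job.sh
CMD ["/usr/local/bin/run-job.sh"]
```

Each batch job then becomes its own small layer over the common base, which is exactly what Kubernetes wants to schedule.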
If you really need one container to communicate with another, then you should build an API to pass commands from one to the other (a RESTful API of some sort that can communicate via HTTP, passing a request for a process to run on one container and returning the result).
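For example (the hostname, port, and endpoint are made up; in Kubernetes the tools container would typically be reachable through a Service name), container A would ask container B to run a tool like this:

```shell
# From container A: request a run on container B and read the result.
# 'legacy-tools' would be the Service/DNS name of container B, and
# /run a small HTTP endpoint you put in front of the legacy binary.
curl -s -X POST http://legacy-tools:8080/run \
     -H 'Content-Type: application/json' \
     -d '{"tool": "analyze", "args": ["--input", "/data/file1"]}'
```

The base-image approach above is usually simpler, though; the API approach only pays off when B has to stay a long-running service.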