Hello there,
I am trying to dockerize a node application. I created two containers and a docker-compose.yml file. The containers build and run successfully, but one of them needs to interact with a host process. How is this possible?
Thanks in advance
UPDATE 1
My application runs some commands with sudo. I probably have to let the docker container execute commands that target the host system. Any ideas?
I assume that by "interact with a host process" you mean interaction over some network protocol, in which case you will need access to the host's IP address from the container.
The host computer's IP is the container's default gateway when you are using Docker's bridge network. This is the case if you did not provide a specific network configuration in your docker-compose.yml (https://docs.docker.com/compose/compose-file/#network-configuration-reference).
Since you are using node.js, you can use the default-gateway package (https://www.npmjs.com/package/default-gateway) to obtain this IP.
You can't execute host applications from within your containers: they are not in your container's filesystem, and you shouldn't try to work around that. Instead, install all the software your app needs inside the docker image as a dependency of your application.
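As a hypothetical sketch of that approach (the base image tag and imagemagick are placeholders for whatever your app actually shells out to), the tools get installed into the image itself. Note that the default user inside a container is root, so the sudo prefix is usually unnecessary there:

```dockerfile
# Placeholder example: bake the needed CLI tools into the image instead
# of calling host binaries. Runs as root by default, so no sudo needed.
FROM node:14
RUN apt-get update \
    && apt-get install -y --no-install-recommends imagemagick \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
CMD ["node", "server.js"]
```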
You could use a sock file, which is used for interprocess communication, and hand it to the container, similar to what watchtower does to control the docker daemon. You may have to write a simple application that exposes a command interface over the sock file and install it on the host to serve the container.
sock files:
What are .sock files and how to communicate with them
watchtower:
https://github.com/containrrr/watchtower
Related
I'm looking to mount the docker socket from one container into another without involving the host. Is it possible? I searched around and couldn't find an example of such a situation. The issue is that the host runs a very old version of docker, so I set up docker inside a container, which works okay. Now I need other docker containers to use the socket from that base container and not the host's. Is there any way to achieve this (in kubernetes)?
The only way that comes to mind is to use a hostPath volume with type Socket, and mount it into multiple containers:
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
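A hypothetical pod sketch of that mount (names and paths are assumptions; this works only because the inner docker's socket ultimately lives somewhere on the node's filesystem):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: docker-client
spec:
  containers:
    - name: app
      image: my-app:latest
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock   # where the socket sits on the node
        type: Socket                  # mount fails unless a socket exists there
```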
Even if it works, you will end up with your "other containers" launching containers inside your "newer docker" container, which is not good practice. I would suggest spinning up another node with a newer docker, joining it to your master, and scheduling the part of the workload that needs access to the docker sock there. You can use a nodeSelector to schedule it properly:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#step-two-add-a-nodeselector-field-to-your-pod-configuration
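A sketch of that scheduling setup (the label key/value and names are made up): label the new node, then pin the socket-using workload to it via nodeSelector:

```yaml
# First label the node, e.g.:
#   kubectl label nodes node-2 docker-version=new
apiVersion: v1
kind: Pod
metadata:
  name: docker-sock-user
spec:
  nodeSelector:
    docker-version: new   # only schedule onto nodes carrying this label
  containers:
    - name: app
      image: my-app:latest
```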
You can take this further on k8s by turning your control container into an operator (using the k8s API instead of the docker sock): https://www.slideshare.net/Jakobkaralus/the-kubernetes-operator-pattern-containerconf-nov-2017
I have a weird situation where one of my service names, let's say 'myservice', in docker swarm shares its name with an actual host on my network. Sometimes resolving 'myservice' picks up that host's IP and things fail, since it is unrelated to anything I am running. Is there a way to name 'myservice' in a fashion that forces docker to resolve it to its own services? Is that 'tasks.myservice', or is there something better?
Docker swarm CE 17.09 is the version in use
The easiest thing to do is change your Swarm service name... or give its containers a custom hostname, different from the service name, with the --hostname option.
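For illustration, a Compose v3 stack fragment along those lines (service name and image are placeholders): the service is renamed so its DNS name can no longer collide with the LAN host, and the task hostname is set explicitly:

```yaml
version: "3.4"
services:
  myservice-swarm:          # renamed: no longer collides with the LAN host
    image: nginx
    hostname: myservice-swarm
```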
I would think the docker internal DNS would always resolve bridge/overlay network hostnames first before searching external resolvers.
Note that containers on docker virtual networks will never resolve the container hostname of a container on a different bridge/overlay network, so in those cases they would correctly fall back to external DNS.
The documentation for etcd says that in order to connect to etcd from a job running inside a container, you need to do the following:
[...]you must use the IP address assigned to the docker0 interface on the CoreOS host.
$ curl -L http://172.17.42.1:2379/v2/keys/
What's the best way of passing this IP address to all of my container jobs? Specifically I'm using docker-compose to run my container jobs.
The documentation you reference is making a few assumptions without stating those assumptions.
I think the big assumption is that you want to connect, from a container, to an etcd running on the host. If you're running a project with docker-compose, you should instead run etcd in a container as part of the project. This makes it very easy to connect to etcd: use the name you gave the etcd service in the Compose file as the hostname. If you named it etcd, you would use something like this:
http://etcd:2379/v2/keys/
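A sketch of such a Compose file (the image tag and etcd flags are assumptions for an etcd v2-era setup): the service name `etcd` becomes the hostname the other services use, so no host IP needs to be passed around:

```yaml
version: "2"
services:
  etcd:
    image: quay.io/coreos/etcd:v2.3.8
    command: >
      -listen-client-urls http://0.0.0.0:2379
      -advertise-client-urls http://etcd:2379
  app:
    build: .
    environment:
      ETCD_URL: http://etcd:2379/v2/keys/   # resolved via Compose's network
    depends_on:
      - etcd
```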
I started learning about Docker, but I keep getting confused between Docker Host and Docker Container, even though I have read about them in multiple places. Here is my understanding so far:
Docker Engine is the base Engine that handles the containers.
Docker Containers sit on top of the Docker engine. These are created from recipes (a text file of instructions): it pulls the image from the hub and you can install your stuff on it.
In a typical application environment, you will create separate containers for each piece of the system, Application Server, Database Server, Web Server, etc. (one container for each).
Docker Swarm is a cluster of containers.
Where does the Docker Host come in? Is this another word for Container, or another layer where you can keep multiple containers together?
Sorry, this may be a basic question. I googled it, but to no avail.
The docker host is the traditional base OS server, where the OS and processes run in normal (non-container) mode. So the OS and processes you start by actually powering on and booting a server (or VM) are the docker host. The processes started within containers via docker commands are your containers.
To make an analogy: the docker host is the playground, the docker containers are the kids playing around in there.
Docker Host is the machine on which Docker Engine is installed.
Here's a picture, which I find easier to understand than words. I found it here.
The Host is the machine managing the containers and images, where you actually installed Docker.
Docker host is the machine where you installed the docker engine. A docker container can be compared to a simple process running on that same docker host.
The Host is the underlying OS and its support for app isolation (i.e., process and user isolation via "containers"). Docker provides an API that defines a method of application packaging and methods for working with the containers.
Host = container implementation
Docker = app packaging and container management
I've built a Container that leverages a CF app that's bound to a service, Cloudant to be specific.
When I run the container locally I can connect to my Cloudant service.
When I build and run my image in the Bluemix container service, I can no longer connect to my Cloudant service. I did use --bind to bind my app to the container. I have verified that the VCAP_SERVICES info is propagating to my container successfully.
To narrow the problem down further, I tried just doing an
ice run --name NAME IMAGE_NAME ping CLOUDANT_HOST
and I found I was getting an "unknown host" error.
So I then tried to ping the IP directly, and got "Network is unreachable".
If we cannot resolve Bluemix services over the network, how can we leverage them? Is this just a temporary problem, or am I perhaps missing something?
Again, runs fine locally but fails when hosted in the container service.
It has been my experience that networking is not reliable in IBM Containers for about 5 seconds at startup. Try adding a "sleep 10" to your CMD or ENTRYPOINT, or set it up to retry for X seconds before giving up.
Once the networking comes up, it has been reliable for me. But the first few seconds of a container's life have had trouble with DNS, binding, and outgoing traffic.
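The "retry before giving up" option can be sketched in Node like this (the helper name and defaults are my own; you would wrap your first Cloudant connection attempt in it instead of a bare sleep):

```javascript
// Bounded retry with a fixed delay: keep re-running a flaky startup
// action (first DNS lookup, first DB connection) until it succeeds or
// the attempt budget is exhausted.
function retry(fn, { attempts = 5, delayMs = 1000 } = {}) {
  return new Promise((resolve, reject) => {
    const attempt = (n) => {
      Promise.resolve()
        .then(fn)
        .then(resolve)
        .catch((err) => {
          if (n + 1 >= attempts) reject(err);          // out of retries
          else setTimeout(() => attempt(n + 1), delayMs);
        });
    };
    attempt(0);
  });
}

// Usage sketch: retry(() => connectToCloudant(), { attempts: 10, delayMs: 1000 })
```

Unlike a fixed sleep, this proceeds as soon as the network is actually up, and still fails loudly if it never comes up.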
Looking at your problem, it could be related to a network error in the container when it runs on Bluemix.
Try to access your container through a shell when on Bluemix (using cf ic console or the docker equivalent) and check whether the network has come up correctly and whether its network interface(s) have an IP to use.