docker for mac: access mongo from another container - mongodb

I have two containers, one for Node.js code and another for a MongoDB database. I set the network_mode to host for both containers; certain network restrictions force me to do this. Additionally, both containers are on the same physical Mac machine. If I want to connect to the MongoDB database from the Node.js container, what should my connection string look like? I know that if I used a bridged network, I would use the name of the mongo container for the host name. However, in this case I have tried localhost, 0.0.0.0, 127.0.0.1, etc., none of which are working. How would I access the MongoDB database in this case?

The easiest way is to use docker inspect to find the IP of the docker container and connect to it. Alternatively, you can use VS Code (https://code.visualstudio.com/download) and install the Docker plugin.
Find running docker containers and their container IDs:
docker ps
Get details of the specific docker container:
docker inspect <CONTAINER ID> | grep "IPAddress"
Example output:
"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.2",
"IPAddress": "172.17.0.2",
VS Code docker plugin
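If you prefer the command line, docker inspect can also take a Go-template format filter, which avoids the duplicated grep output above; something like:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <CONTAINER ID>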

According to this github issue, network_mode doesn't work on Docker for Mac.
The comment on the issue I linked did say that if you need to access a service on the host from a container, you can use host.docker.internal, but this might not be relevant to your use case.
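Since host networking isn't really an option on Docker for Mac, the usual fallback is exactly the bridged setup you described: put both containers on a user-defined network and use the mongo container's name as the hostname. A rough sketch, where app-net, node-app, my-node-image and mydb are all placeholder names and the app is assumed to read its connection string from MONGO_URL:
docker network create app-net
docker run -d --name mongo --network app-net mongo
docker run -d --name node-app --network app-net -e MONGO_URL=mongodb://mongo:27017/mydb my-node-image
host.docker.internal is only needed when the database runs on the Mac itself rather than in another container.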

Related

Run a K3S server in a docker container, and connect a K3S agent in another docker container

I know k3d can do this magically via k3d cluster create myname --token MYTOKEN --agents 1, but I am trying to figure out how to do the simplest version of that 'manually'. I want to create a server with something like:
docker run -e K3S_TOKEN=MYTOKEN rancher/k3s:latest server
And connect an agent with something like:
docker run -e K3S_TOKEN=MYTOKEN -e K3S_URL=https://localhost:6443 rancher/k3s:latest agent
Does anyone know what ports need to be forwarded here? How can I set this up? With nearly everything I try, the agent complains that port 6444 is already in use, even if I disable as much of the server as possible with any combination of --no-deploy servicelb --disable-agent --no-deploy traefik
Feel free to disable literally everything other than the server and the agent, I'm trying to make this ultra ultra simple, but just butting my head against a wall at the moment. Thanks!
The containers must "see" each other. Docker isolates the networks by default, so "localhost" in your agent container is the agent container itself.
Possible solutions:
Run both containers without network isolation using --net=host, or map the API port of the server to the host with -p/--publish and use the host IP in the agent container, or use docker-compose.
A working example for docker-compose is described here: https://www.trion.de/news/2019/08/28/kubernetes-in-docker-mit-k3s.html
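If you want to stay with plain docker run instead of compose, one way is to put both containers on a user-defined network so the agent reaches the server by name; k3s-net, k3s-server and k3s-agent below are placeholder names, and --privileged is assumed to be needed for running k3s inside Docker:
docker network create k3s-net
docker run -d --name k3s-server --privileged --network k3s-net -p 6443:6443 -e K3S_TOKEN=MYTOKEN rancher/k3s:latest server
docker run -d --name k3s-agent --privileged --network k3s-net -e K3S_TOKEN=MYTOKEN -e K3S_URL=https://k3s-server:6443 rancher/k3s:latest agent
Keeping each container in its own network namespace (the default bridge behaviour) should also avoid the "port 6444 already in use" clash that appears when the server and agent share the host's network stack.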

How to connect local Mongo database to docker

I am working on a golang project. Recently I read about docker and tried to use it with my app. I am using MongoDB for the database.
The problem is that I am creating a Dockerfile to install all packages, then compile and run the go project.
I am running mongo locally. If I run the go program without docker it gives me output, but if I use docker for the same project (just installing dependencies and running the project with it), it compiles successfully but gives no output, failing with the error:
CreateSession: no reachable servers
My Dockerfile:
# Start from a Debian image with the latest version of Go installed
# and a workspace (GOPATH) configured at /go.
FROM golang
WORKDIR $GOPATH/src/myapp
# Copy the local package files to the container's workspace.
ADD . /go/src/myapp
#Install dependencies
RUN go get ./...
# Build and install the myapp command inside the container.
RUN go install myapp
# Run the myapp command by default when the container starts.
ENTRYPOINT /go/bin/myapp
# Document that the service listens on port 8080.
EXPOSE 8080
EXPOSE 27017
When you run your application inside Docker, it's running in a virtual environment; it's just like another computer, but everything is virtual, including the network.
To connect from your container to the host, Docker provides a special DNS name, host.docker.internal, which resolves to an internal IP address of the host.
So, assuming that mongo is running on the host machine and bound to every interface, it can be reached from the container with the connection string:
mongodb://host.docker.internal:27017/database
Put simply, just use host.docker.internal as your mongodb hostname.
In your golang project, how do you specify the connection to mongodb? localhost:27017?
If you are using localhost in your code, the docker container itself will be localhost, and since mongodb is not in the same container, you'll get that error.
If you are starting your container on the command line with docker run ..., add --network="host". If you are using docker-compose, add network_mode: "host".
Ideally you would set up mongodb in its own container and connect the two from your docker-compose.yml -- but that's not what you are asking for, so I won't go into that.
In future questions, please include the relevant Dockerfile and docker-compose.yml to the extent possible; it will help us give a more specific answer.
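For completeness, a rough docker-compose.yml sketch of that "mongo in its own container" setup; the service names, published port, and the MONGO_URL variable the app would have to read are assumptions, not taken from the question:
version: "3"
services:
  mongo:
    image: mongo
  myapp:
    build: .
    ports:
      - "8080:8080"
    environment:
      # the Go code would need to use this instead of a hard-coded localhost
      - MONGO_URL=mongodb://mongo:27017/mydb
    depends_on:
      - mongo
Inside the compose network the hostname mongo resolves to the database container, so the "no reachable servers" error goes away without --network=host.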

Docker best practice to access host's services

What is best practice to access the host's services within a docker container?
I'd like to access PostgreSQL running on the host within my application which runs in a docker container.
The easiest approach I've found is to use docker container run --net="host" which, based on this answer, behaves as follows:
Such a container will share the network stack with the docker host and from the container point of view, localhost (or 127.0.0.1) will refer to the docker host.
Be aware that any port opened in your docker container would be opened on the docker host. And this without requiring the -p or -P docker run option.
Which does not seem to be best practice since the containers should be isolated from the host.
Other approaches I've found involve awking the host's IP. Is that the way to go?
The best option in this case is to treat the host as a remote machine. That way the container will be portable and will not have a strict dependency on network locations when connecting to the database.
In addition to the drawbacks of --network=host already mentioned, that option tightly couples the container to the host by assuming that the database is found on localhost.
The way to treat the machine as a remote one is to use standard network constructs such as IP and DNS. Define a new DNS entry inside the container that points to the host where the DB is found, using the --add-host option to docker run.
docker run --add-host db-static:<ip-address-of-host> ...
Then, inside the container, you connect to the database via db-static.
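A usage sketch (the image, user and database names here are invented):
docker run --add-host db-static:192.168.1.50 my-app-image
# inside the container:
psql -h db-static -U appuser appdb
On recent Docker engines the literal IP can also be replaced by the special value host-gateway, i.e. --add-host db-static:host-gateway, which the engine resolves to the host automatically.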

configure mongo with docker on win7

I have a problem configuring the connection to MongoDB running in a docker container from Spring Boot. I run the mongo container and it sits waiting for connections (see the print screen of the docker terminal), but at the same time I get an error in the Spring logs (see the logs screen).
The problem appears on Win7 while working through a Udemy course whose open source code you can check at https://github.com/springframeworkguru/spring-boot-mongodb
On Windows, since you're running Docker Machine you need to connect to the docker machine instead of localhost. The IP will usually be 192.168.99.100, but you can check by executing the docker-machine ip default command.
So your mongo connection string will normally be something like mongodb://192.168.99.100/dbName
Hey, I had the same problem and the solution for me was to add these two lines specifying the host of the VM and the port of the image:
spring.data.mongodb.host=your_host_ip
spring.data.mongodb.port=your_image_port
You can find both easily in Kitematic on the Home tab, or via commands: for host_ip, run ipconfig on the command line; for image_port, run docker ps to get the container ID and then docker inspect <container id>.
Hope it will help.
First, do what Strelok said:
docker-machine ip default and get the IP,
then start mongo:
docker run -p 27017:27017 -d mongo
The port is 27017.
Then do what trajanesco suggested: edit application.properties and add these two lines:
spring.data.mongodb.host=192.168.99.100 # usually the default ip
spring.data.mongodb.port=27017
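Alternatively, and assuming the same IP and database name as above, Spring Boot also accepts a single connection URI property instead of separate host/port entries:
spring.data.mongodb.uri=mongodb://192.168.99.100:27017/dbName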

How do I set up linkage between Docker containers so that restarting won't break it?

I have a few Docker containers running like:
Nginx
Web app 1
Web app 2
PostgreSQL
Since Nginx needs to connect to the web application servers inside web app 1 and 2, and the web apps need to talk to PostgreSQL, I have linkages like this:
Nginx --- link ---> Web app 1
Nginx --- link ---> Web app 2
Web app 1 --- link ---> PostgreSQL
Web app 2 --- link ---> PostgreSQL
This works pretty well at first. However, when I develop a new version of web app 1 and web app 2, I need to replace them. What I do is remove the web app containers, set up new containers and start them.
For the web app containers, their IP addresses at first would be something like:
172.17.0.2
172.17.0.3
And after I replace them, they will have new IP addresses:
172.17.0.5
172.17.0.6
Now, those exposed environment variables in the Nginx container are still pointing to the old IP addresses. Here comes the problem. How do I replace a container without breaking linkage between containers? The same issue will also happen to PostgreSQL. If I want to upgrade the PostgreSQL image version, I certainly need to remove it and run the new one, but then I need to rebuild the whole container graph, so this is not ideal for real-life server operation.
The effect of --link is static, so it will not work for your scenario (there is currently no re-linking, although you can remove links).
We have been using two different approaches at dockerize.it to solve this, without links or ambassadors (although you could add ambassadors too).
1) Use dynamic DNS
The general idea is that you specify a single name for your database (or any other service) and update a short-lived DNS server with the actual IP as you start and stop containers.
We started with SkyDock. It works with two docker containers, the DNS server and a monitor that keeps it updated automatically. Later we moved to something more custom using Consul (also using a dockerized version: docker-consul).
An evolution of this (which we haven't tried) would be to set up etcd or similar and use its custom API to learn the IPs and ports. The software should support dynamic reconfiguration too.
2) Use the docker bridge ip
When exposing the container ports you can just bind them to the docker0 bridge, which has (or can have) a well known address.
When replacing a container with a new version, just make the new container publish the same port on the same IP.
This is simpler but also more limited. You might have port conflicts if you run similar software (for instance, two containers cannot both listen on port 3306 on the docker0 bridge), etc., so our current favorite is option 1.
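A minimal sketch of option 2, assuming the docker0 bridge sits at 172.17.0.1 on the host (older installs often used 172.17.42.1; check with ip addr show docker0):
# publish the database only on the bridge address, not on all host interfaces
docker run -d --name db -p 172.17.0.1:5432:5432 postgres
# application containers then connect to 172.17.0.1:5432, no matter which
# container is currently providing the database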
Links are for a specific container, not based on the name of a container. So the moment you remove a container, the link is disconnected and the new container (even with the same name) will not automatically take its place.
The new networking feature allows you to connect to containers by their name, so if you create a new network, any container connected to that network can reach other containers by their name. Example:
1) Create new network
$ docker network create <network-name>
2) Connect containers to network
$ docker run --net=<network-name> ...
or
$ docker network connect <network-name> <container-name>
3) Ping container by name
docker exec -ti <container-name-A> ping <container-name-B>
64 bytes from c1 (172.18.0.4): icmp_seq=1 ttl=64 time=0.137 ms
64 bytes from c1 (172.18.0.4): icmp_seq=2 ttl=64 time=0.073 ms
64 bytes from c1 (172.18.0.4): icmp_seq=3 ttl=64 time=0.074 ms
64 bytes from c1 (172.18.0.4): icmp_seq=4 ttl=64 time=0.074 ms
See this section of the documentation.
Note: Unlike legacy links, the new networking will not create environment variables, nor share environment variables with other containers.
This feature currently doesn't support aliases.
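Applied to the stack in the question, it might look like this (image names such as my-webapp-1 are placeholders); because resolution happens by name, a replacement container that reuses the same name is reachable again immediately:
docker network create my-app-net
docker run -d --net=my-app-net --name postgres postgres
docker run -d --net=my-app-net --name webapp1 my-webapp-1
docker run -d --net=my-app-net --name webapp2 my-webapp-2
docker run -d --net=my-app-net --name nginx -p 80:80 -p 443:443 nginx
# nginx talks to webapp1/webapp2 and the web apps talk to postgres, all by name;
# to upgrade webapp1, remove it and start the new image with --name webapp1 again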
You can use an ambassador container. But do not link the ambassador container to your client, since this creates the same problem as above. Instead, use the exposed port of the ambassador container on the docker host (typically 172.17.42.1). Example:
postgres volume:
$ docker run --name PGDATA -v /data/pgdata/data:/data -v /data/pgdata/log:/var/log/postgresql phusion/baseimage:0.9.10 true
postgres-container:
$ docker run -d --name postgres --volumes-from PGDATA -e USER=postgres -e PASS='postgres' paintedfox/postgresql
ambassador-container for postgres:
$ docker run -d --name pg_ambassador --link postgres:postgres -p 5432:5432 ctlc/ambassador
Now you can start a postgresql client container without linking the ambassador container and access postgresql on the gateway host (typically 172.17.42.1):
$ docker run --rm -t -i paintedfox/postgresql /bin/bash
root@b94251eac8be:/# PGHOST=$(netstat -nr | grep '^0\.0\.0\.0 ' | awk '{print $2}')
root@b94251eac8be:/# echo $PGHOST
172.17.42.1
root@b94251eac8be:/#
root@b94251eac8be:/# psql -h $PGHOST --user postgres
Password for user postgres:
psql (9.3.4)
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.
postgres=#
postgres=# select 6*7 as answer;
answer
--------
42
(1 row)
postgres=#
Now you can restart the ambassador container without having to restart the client.
If anyone is still curious, you should use the host entries in the /etc/hosts file of each docker container and not depend on the environment variables, as they are not updated automatically.
There will be a hosts file entry for each linked container under its link alias; the LINKEDCONTAINERNAME_PORT_PORTNUMBER_TCP format applies to the environment variables.
The following is from the docker docs:
Important notes on Docker environment variables
Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.
These environment variables are only set for the first process in the container. Some daemons, such as sshd, will scrub them when spawning shells for connection.
This was included in the experimental build of docker 3 weeks ago, with the introduction of services: https://github.com/docker/docker/blob/master/experimental/networking.md
You should be able to get a dynamic link in place by running a docker container with the --publish-service <name> argument. This name will be accessible via DNS. This is persistent across container restarts (as long as you restart the container with the same service name, of course).
You may use docker links with names to solve this.
The most basic setup would be to first create a named database container:
$ sudo docker run -d --name db training/postgres
then create a web container connecting to db:
$ sudo docker run -d -P --name web --link db:db training/webapp python app.py
With this, you don't need to manually connect containers by their IP addresses.
With the OpenSVC approach, you can work around this by:
using a service with its own IP address/DNS name (the one your end users will connect to),
telling docker to expose ports on this specific IP address (the "--ip" docker option),
configuring your apps to connect to the service IP address.
Each time you replace a container, you can be sure that it will connect to the correct IP address.
Tutorial here => Docker Multi Containers with OpenSVC
Don't miss the "complex orchestration" part at the end of the tutorial, which can help you start/stop containers in the correct order (1 postgresql subset + 1 webapp subset + 1 nginx subset).
The main drawback is that you expose the webapp and PostgreSQL ports on a public address, when actually only the nginx TCP port needs to be exposed publicly.
You could also try the ambassador method of having an intermediary container just for keeping the link intact (see https://docs.docker.com/articles/ambassador_pattern_linking/ for more info).
You can bind the connection ports of your images to fixed ports on the host and configure the services to use them instead.
This has its drawbacks as well, but it might work in your case.
Another alternative is to use the --net container:$CONTAINER_ID option.
Step 1: Create "network" containers
docker run --name db_net ubuntu:14.04 sleep infinity
docker run --name app1_net --link db_net:db ubuntu:14.04 sleep infinity
docker run --name app2_net --link db_net:db ubuntu:14.04 sleep infinity
docker run -p 80 -p 443 --name nginx_net --link app1_net:app1 --link app2_net:app2 ubuntu:14.04 sleep infinity
Step 2: Inject services into "network" containers
docker run --name db --net container:db_net pgsql
docker run --name app1 --net container:app1_net app1
docker run --name app2 --net container:app2_net app2
docker run --name nginx --net container:nginx_net nginx
As long as you do not touch the "network" containers, the IP addresses of your links should not change.
A network-scoped alias is what you need in this case. It's a rather new feature, which can be used to "publish" a container providing a service to the whole network, unlike link aliases accessible only from one container.
It does not add any kind of dependency between containers; they can communicate as long as both are running, regardless of restarts, replacement, and launch order. It uses DNS internally, I believe, instead of /etc/hosts.
Use it like this: docker run --net=some_user_defined_nw --net-alias postgres ... and you can connect to it using that alias from any container on the same network.
It does not work on the default network, unfortunately; you have to create one with docker network create <network> and then use it with --net=<network> for every container (compose supports it as well).
Besides a container being down and hence unreachable by its alias, multiple containers can also share an alias, in which case it's not guaranteed to resolve to the right one. But in some cases that can help with a seamless upgrade, probably.
It's all not very well documented yet, and hard to figure out just by reading the man page.
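A small end-to-end sketch (network and container names invented here); the alias survives replacing the container, which is exactly the property the question asks for:
docker network create app_net
docker run -d --net=app_net --net-alias=postgres --name pg-v1 postgres
# clients on app_net connect to the host name "postgres"; to upgrade:
docker stop pg-v1
docker run -d --net=app_net --net-alias=postgres --name pg-v2 postgres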