Is it possible to create a dummy Ethernet interface during a Docker build? Below is a snippet from the Dockerfile build logs.
Step 14/17 : run sudo ip link add dummy0 type dummy && sudo ip addr add 192.168.10.12/24 dev dummy0 && sudo ip link set dummy0 up
---> Running in 21c388505e28
RTNETLINK answers: Operation not permitted
I was able to create the dummy interface by running the image in privileged mode.
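For reference, build-time RUN steps normally don't have the NET_ADMIN capability, so the interface has to be created at run time instead. A minimal sketch of a narrower alternative to full privileged mode, assuming the built image is tagged my-image (a placeholder):
docker run --rm --cap-add=NET_ADMIN my-image sh -c "ip link add dummy0 type dummy && ip addr add 192.168.10.12/24 dev dummy0 && ip link set dummy0 up && ip addr show dummy0"
Inside the container the commands run as root, so sudo is not needed.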
I'm looking for a way to connect an app in a Docker container to a Postgres database running locally via https://postgresapp.com/.
I was wondering whether certain ports need to be opened and what the yml file would look like to make docker-compose work with a locally running Postgres.
Thanks!
You have to make changes to the Postgres config files on the host machine, especially:
1) pg_hba.conf
2) postgresql.conf
You can find these files under the /var/lib/postgresql/[version]/ path.
First check your docker0 bridge interface; it is usually 172.17.0.0/16. If not, adjust the addresses below accordingly.
In postgresql.conf (the path is the same as for pg_hba.conf), set
listen_addresses = '*'
Then in pg_hba.conf add a rule such as
host all all 172.17.0.0/16 md5
Then, in the Docker application, use the host IP address to connect to the Postgres instance running on the host.
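A minimal sketch of those edits from a shell on the host, keeping the [version] placeholder from above (on Debian/Ubuntu the files may live under /etc/postgresql/[version]/main instead; adjust the restart command to however Postgres is managed on your machine):
echo "listen_addresses = '*'" | sudo tee -a /var/lib/postgresql/[version]/postgresql.conf
echo "host all all 172.17.0.0/16 md5" | sudo tee -a /var/lib/postgresql/[version]/pg_hba.conf
sudo systemctl restart postgresql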
There is no difference between accessing a database from inside a container or outside a container.
Let's say the Postgres database is running on localhost. If you want to connect to it from a small Python script, you can do so as follows, as found in the Docs:
#!/usr/bin/python
import psycopg2

def main():
    # Define our connection string
    conn_string = "host='localhost' dbname='my_database' user='postgres' password='secret'"

    # print the connection string we will use to connect
    print("Connecting to database\n ->%s" % conn_string)

    # get a connection; if a connection cannot be made an exception will be raised here
    conn = psycopg2.connect(conn_string)

    # conn.cursor will return a cursor object; you can use this cursor to perform queries
    cursor = conn.cursor()
    print("Connected!\n")

if __name__ == "__main__":
    main()
In order for this script to run you need to install Python and psycopg2 on your local machine, probably in a virtual environment. Alternatively, you could put it in a container. Instead of installing everything manually, you would define a Dockerfile with your installation instructions. It would probably look something like this:
FROM python:latest
ADD python-script.py /opt/www/python-script.py
RUN pip install psycopg2
CMD ["python", "/opt/www/python-script.py"]
And if we build and run...
dave-mbp:Desktop dave$ ls -la
-rw-r--r-- 1 dave staff 135 Dec 8 19:05 Dockerfile
-rw-r--r-- 1 dave staff 22 Dec 8 19:04 python-script.py
dave-mbp:Desktop dave$ docker build -t python-script .
Sending build context to Docker daemon 17.92kB
Step 1/4 : FROM python:latest
latest: Pulling from library/python
85b1f47fba49: Already exists
ba6bd283713a: Pull complete
817c8cd48a09: Pull complete
47cc0ed96dc3: Pull complete
4a36819a59dc: Pull complete
db9a0221399f: Pull complete
7a511a7689b6: Pull complete
1223757f6914: Pull complete
Digest: sha256:db9d8546f3ff74e96702abe0a78a0e0454df6ea898de8f124feba81deea416d7
Status: Downloaded newer image for python:latest
---> 79e1dc9af1c1
Step 2/4 : ADD python-script.py /opt/www/python-script.py
---> 21f31c8803f7
Step 3/4 : RUN pip install psycopg2
---> Running in f280c82d74e7
Collecting psycopg2
Downloading psycopg2-2.7.3.2-cp36-cp36m-manylinux1_x86_64.whl (2.7MB)
Installing collected packages: psycopg2
Successfully installed psycopg2-2.7.3.2
---> bd38f911bb6a
Removing intermediate container f280c82d74e7
Step 4/4 : CMD python /opt/www/python-script.py
---> Running in 159b70861893
---> 4aa783be5c90
Removing intermediate container 159b70861893
Successfully built 4aa783be5c90
Successfully tagged python-script:latest
dave-mbp:Desktop dave$ docker run python-script
>> Connected!
Containers are largely a packaging solution. We'll be able to connect to an external database or API endpoint as if we were running everything from outside of a container.
There is one thing to keep in mind, though: localhost may resolve to the virtual machine if you're using Docker Toolbox/VirtualBox. In that case, you won't be able to connect to localhost unless you're running a bridged network connection in the VM; otherwise, just specify the host IP instead of localhost.
Postgres default port is 5432.
Docker offers different networking modes to use. I can tell you how to achieve your goal using bridge mode (used by default).
There is always a bridge network that allows you to access the host machine. It is created by default and is called docker0.
Run ip addr show docker0 in a terminal to see the host IP that is available to all the containers you run.
Output on my machine:
developer@dlbra:~$ ip addr show docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b4:95:60:89 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
Therefore you don't need any additional configuration in docker-compose.yml
You just have to point your DB host at the IP address you saw; in my case it would be 172.17.0.1.
If your locally installed Postgres listens on a different port, also specify that port in your application configuration.
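A quick way to verify the connection from inside a container, assuming the default bridge address 172.17.0.1 and the default port 5432 (the postgres image ships psql, so it is handy for testing):
docker run --rm -it postgres:9.6 psql -h 172.17.0.1 -p 5432 -U postgres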
I'm creating a container with a connection to a Cloud SQL database. When I run the image with Kubernetes it does not have an external IP that I can use to allow the new image to connect to the database, but since this is part of the init configuration I can't wait to find out the public IP and then add it to the database whitelist.
I know there are ways to connect to a database through services in the same cluster, but I can't figure out how to connect to the Cloud SQL instance provided by Google.
There are two ways to solve that:
The first option is to use a cloudsql proxy using the instructions available in: https://cloud.google.com/sql/docs/sql-proxy
In your Docker image you need to ensure that FUSE is available; it wasn't in my case (using ubuntu:trusty-20160119 as the base image). If you need to enable it, use the following steps in your Dockerfile:
# install fusermount
RUN apt-get update && apt-get install -y build-essential wget
RUN wget https://github.com/libfuse/libfuse/releases/download/fuse_2_9_5/fuse-2.9.5.tar.gz
RUN tar -xzvf fuse-2.9.5.tar.gz
RUN cd fuse-2.9.5 && ./configure && make -j8 && make install
Then, at the startup of your container, you need a script that opens the socket as described in https://cloud.google.com/sql/docs/sql-proxy#example_proxy_invocations_and_connection_strings.
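A minimal sketch of such a startup script, assuming the proxy binary was copied into the image as /usr/local/bin/cloud_sql_proxy and using placeholder values for the instance connection name and the application binary; the exact invocation for your setup is in the page linked above:
#!/bin/bash
# expose the Cloud SQL instance on a local TCP port for the app
/usr/local/bin/cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:5432 &
# then start the actual application
exec /opt/my-app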
The second way is simply to whitelist the IPs of the nodes that make up the Kubernetes cluster in Cloud SQL.
I prefer the first option, because it works on any machine where I deploy the image, and I don't need to worry about adding or removing IPs when I add more nodes to the Kubernetes cluster.
I have a Docker daemon/engine running inside a guest (Ubuntu) virtual machine, and as per the Docker Tooling for Eclipse instructions I downloaded and set up the plugin in Eclipse Mars on my host Mac OS machine.
How do I connect to the Docker instance running in the guest VM from the IDE on the host machine?
As per the instructions, I need to enter TCP and authentication details, so how do I get these details to set up the connection?
I tried the guest OS IP (i.e. tcp://127.0.0.1:2376, the localhost IP from the ifconfig output) but was not able to connect.
Here are the steps I used to get Docker Tooling working in Eclipse Neon on Windows.
Open the Docker Quickstart Terminal
Execute docker-machine ls
Copy the URL (e.g. tcp://192.168.99.100:2376)
Click the Add Connection button in the toolbar for Docker Explorer
Provide a Connection name:
Select TCP Connection
Paste the above URL into the URI: edit box
Change tcp to https in the edit box
Select Enable authentication
Set the path to C:\Users\username\.docker\machine\certs
Click on Test Connection to verify
There are two parts to this. First, enabling the TCP socket (which I'll answer). Then, setting TLS authentication on the socket (which I'll link to but won't cover). The first part alone should get you up and running.
You'll need to edit the DOCKER_OPTS settings in /etc/default/docker in the VM. Edit this file and set DOCKER_OPTS to something like:
DOCKER_OPTS="-H tcp://0.0.0.0:2376 -H unix://"
Then, restart Docker (sudo service docker restart). This should get you a TCP connection that you can put in your Eclipse settings as:
tcp://10.0.2.15:2376
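Before wiring that URI into Eclipse you can sanity-check it from the host with the plain docker CLI (assuming 10.0.2.15 really is the address under which the VM is reachable from your host):
docker -H tcp://10.0.2.15:2376 version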
The second part (which is optional at this point) would be setting up the CA and certificates per https://docs.docker.com/engine/articles/https/. But I'd actually recommend just installing Docker Machine and provisioning your VM that way as it will create the needed certificates for you. Then, if your machine was named dev, you just point the authentication dir to ~/.docker/machine/machines/dev.
If the Docker daemon is running on Windows itself (i.e. Docker Desktop in the taskbar) rather than inside the VM, just get the URI from its context-menu settings. In the Eclipse Docker Tooling perspective, you can connect to a running Docker daemon simply by providing that URI.
The Eclipse Remote Systems view is a great tool to connect to VMs and explore their file systems; it offers several connection types (Linux, SSH Only, FTP Only, and so on).
First I find out the container IP by running this command:
docker inspect <container> | grep IPAddress | cut -d '"' -f 4
Once I have the IP, I launch the New Connection wizard from the Remote Systems view. I have tried selecting Linux, SSH Only, and FTP Only; in the Hostname field I paste the container IP, click Finish, and the connection seems to be created successfully. But when I try to expand the Files node it prompts for a user and password, and the problem is that I don't have that information. Does the user/password vary from container to container? How can I get this info?
You can just start a container from that image, but with a shell, so that you can see which usernames are configured in that image.
docker run -it node /bin/bash
You can then configure users and passwords and commit the result as a new image:
docker commit <container-id> my-node:0.1
Then you can instantiate a new container:
docker run -d -p 80:9080 -p 443:9443 my-node:0.1
Is ssh also running in that container? If not, you will have to install it in the container so that you can ssh into it.
A docker container only runs a single parent process at a time (on your host machine that parent process is 'init' which runs a bunch of system services). In the case of your node container, that parent process is a node server.
Eclipse connects to a remote machine by connecting to a listener on that machine using some protocol, SSH or FTP for example. With the Docker container, there is no process listening for this connection, so you cannot connect using Eclipse as it is. You have two options...
Use the command line and docker exec to connect to the container and explore its filesystem (see the one-line example after this list). No pretty pictures, but you don't need a lot of knowledge.
Modify your container in some way so you can connect to it. You have two options here...
A. Modify your image to run an SSH daemon. A simple way to do that is to use the phusion/baseimage container as your parent, and have it spawn both the ssh daemon and the node server. You need to know a good amount about linux sysadmin to get this working (not a lot, but a good amount).
B. Launch a second copy of the container with a different command, such as an SSH daemon (sshd). You can then connect to the second copy. This has the downside that it won't be the same container you're interested in, and you STILL have to modify the image, since I doubt the node image even has an SSH daemon installed... but it is less knowledge than wrapping your head around runit.
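For option 1, nothing extra needs to be installed; a minimal example, assuming the container is named node-app (a placeholder) and has bash available (fall back to sh if not):
docker exec -it node-app /bin/bash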
I have a few Docker containers running like:
Nginx
Web app 1
Web app 2
PostgreSQL
Since Nginx needs to connect to the web application servers inside web app 1 and 2, and the web apps need to talk to PostgreSQL, I have linkages like this:
Nginx --- link ---> Web app 1
Nginx --- link ---> Web app 2
Web app 1 --- link ---> PostgreSQL
Web app 2 --- link ---> PostgreSQL
This works pretty well at first. However, when I develop a new version of web app 1 and web app 2, I need to replace them. What I do is remove the web app containers, set up new containers and start them.
For the web app containers, their IP addresses at first would be something like:
172.17.0.2
172.17.0.3
And after I replace them, they will have new IP addresses:
172.17.0.5
172.17.0.6
Now, those exposed environment variables in the Nginx container are still pointing to the old IP addresses. Here comes the problem. How do I replace a container without breaking linkage between containers? The same issue will also happen to PostgreSQL. If I want to upgrade the PostgreSQL image version, I certainly need to remove it and run the new one, but then I need to rebuild the whole container graph, so this is not ideal for real-life server operation.
The effect of --link is static, so it will not work for your scenario (there is currently no re-linking, although you can remove links).
We have been using two different approaches at dockerize.it to solve this, without links or ambassadors (although you could add ambassadors too).
1) Use dynamic DNS
The general idea is that you specify a single name for your database (or any other service) and update a short-lived DNS server with the actual IP as you start and stop containers.
We started with SkyDock. It works with two docker containers, the DNS server and a monitor that keeps it updated automatically. Later we moved to something more custom using Consul (also using a dockerized version: docker-consul).
An evolution of this (which we haven't tried) would be to set up etcd or similar and use its custom API to learn the IPs and ports. The software should support dynamic reconfiguration too.
2) Use the docker bridge ip
When exposing the container ports you can just bind them to the docker0 bridge, which has (or can have) a well known address.
When replacing a container with a new version, just make the new container publish the same port on the same IP.
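A minimal sketch of this, assuming the docker0 bridge sits at 172.17.0.1 on your host (check with ip addr show docker0) and using postgres as an example image:
docker run -d --name pg_v2 -p 172.17.0.1:5432:5432 postgres
The web apps then keep connecting to 172.17.0.1:5432 no matter which container currently sits behind that binding.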
This is simpler but also more limited: you might have port conflicts if you run similar software (for instance, two containers cannot both listen on port 3306 on the docker0 bridge), and so on. So our current favorite is option 1.
Links are for a specific container, not based on the name of a container. So the moment you remove a container, the link is disconnected and the new container (even with the same name) will not automatically take its place.
The new networking feature allows you to connect to containers by their name, so if you create a new network, any container connected to that network can reach other containers by their name. Example:
1) Create new network
$ docker network create <network-name>
2) Connect containers to network
$ docker run --net=<network-name> ...
or
$ docker network connect <network-name> <container-name>
3) Ping container by name
docker exec -ti <container-name-A> ping <container-name-B>
64 bytes from c1 (172.18.0.4): icmp_seq=1 ttl=64 time=0.137 ms
64 bytes from c1 (172.18.0.4): icmp_seq=2 ttl=64 time=0.073 ms
64 bytes from c1 (172.18.0.4): icmp_seq=3 ttl=64 time=0.074 ms
64 bytes from c1 (172.18.0.4): icmp_seq=4 ttl=64 time=0.074 ms
See this section of the documentation:
Note: Unlike legacy links the new networking will not create environment variables, nor share environment variables with other containers.
This feature currently doesn't support aliases
You can use an ambassador container. But do not link the ambassador container to your client, since this creates the same problem as above. Instead, use the exposed port of the ambassador container on the docker host (typically 172.17.42.1). Example:
postgres volume:
$ docker run --name PGDATA -v /data/pgdata/data:/data -v /data/pgdata/log:/var/log/postgresql phusion/baseimage:0.9.10 true
postgres-container:
$ docker run -d --name postgres --volumes-from PGDATA -e USER=postgres -e PASS='postgres' paintedfox/postgresql
ambassador-container for postgres:
$ docker run -d --name pg_ambassador --link postgres:postgres -p 5432:5432 ctlc/ambassador
Now you can start a postgresql client container without linking the ambassador container and access postgresql on the gateway host (typically 172.17.42.1):
$ docker run --rm -t -i paintedfox/postgresql /bin/bash
root@b94251eac8be:/# PGHOST=$(netstat -nr | grep '^0\.0\.0\.0 ' | awk '{print $2}')
root@b94251eac8be:/# echo $PGHOST
172.17.42.1
root@b94251eac8be:/#
root@b94251eac8be:/# psql -h $PGHOST --user postgres
Password for user postgres:
psql (9.3.4)
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.
postgres=#
postgres=# select 6*7 as answer;
answer
--------
42
(1 row)
postgres=#
Now you can restart the ambassador container without having to restart the client.
If anyone is still curious: use the host entries in the /etc/hosts file of each Docker container, and do not depend on the environment variables, as they are not updated automatically.
There will be a hosts file entry for each linked container under its link alias; the LINKEDCONTAINERNAME_PORT_PORTNUMBER_TCP style names are the environment variables.
The following is from docker docs
Important notes on Docker environment variables
Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.
These environment variables are only set for the first process in the container. Some daemons, such as sshd, will scrub them when spawning shells for connection.
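To see which names a linked container can already resolve, you can simply print its hosts file (web is a hypothetical container name here):
docker exec -it web cat /etc/hosts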
This was included in the experimental build of Docker three weeks ago, with the introduction of services: https://github.com/docker/docker/blob/master/experimental/networking.md
You should be able to get a dynamic link in place by running a container with the --publish-service <name> argument. The name will be accessible via DNS, and it persists across container restarts (as long as you restart the container with the same service name, of course).
You may use Docker links with names to solve this.
Most basic setup would be to first create a named database container :
$ sudo docker run -d --name db training/postgres
then create a web container connecting to db :
$ sudo docker run -d -P --name web --link db:db training/webapp python app.py
With this, you don't need to manually connect containers using their IP addresses.
With the OpenSVC approach, you can work around this by:
using a service with its own IP address/DNS name (the one your end users will connect to)
telling Docker to expose ports to this specific IP address (the "--ip" docker option)
configuring your apps to connect to the service IP address
Each time you replace a container, you can be sure that it will connect to the correct IP address.
Tutorial here => Docker Multi Containers with OpenSVC
Don't miss the "complex orchestration" part at the end of the tutorial, which can help you start/stop containers in the correct order (1 postgresql subset + 1 webapp subset + 1 nginx subset).
The main drawback is that you expose the webapp and PostgreSQL ports on a public address, while actually only the nginx TCP port needs to be exposed publicly.
You could also try the ambassador method of having an intermediary container just for keeping the link intact (see https://docs.docker.com/articles/ambassador_pattern_linking/ for more info).
You can bind the connection ports of your images to fixed ports on the host and configure the services to use them instead.
This has its drawbacks as well, but it might work in your case.
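A minimal sketch of that approach, assuming Postgres is published on a fixed host port (5432 here) and the web apps are configured to reach the Docker host on that port instead of a linked container:
docker run -d --name postgres -p 5432:5432 paintedfox/postgresql
Replacing the container keeps the endpoint stable as long as the same -p mapping is reused.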
Another alternative is to use the --net container:$CONTAINER_ID option.
Step 1: Create "network" containers
docker run --name db_net ubuntu:14.04 sleep infinity
docker run --name app1_net --link db_net:db ubuntu:14.04 sleep infinity
docker run --name app2_net --link db_net:db ubuntu:14.04 sleep infinity
docker run -p 80 -p 443 --name nginx_net --link app1_net:app1 --link app2_net:app2 ubuntu:14.04 sleep infinity
Step 2: Inject services into "network" containers
docker run --name db --net container:db_net pgsql
docker run --name app1 --net container:app1_net app1
docker run --name app2 --net container:app2_net app2
docker run --name nginx --net container:nginx_net nginx
As long as you do not touch the "network" containers, the IP addresses of your links should not change.
A network-scoped alias is what you need in this case. It's a rather new feature, which can be used to "publish" a container providing a service for the whole network, unlike link aliases, which are accessible only from one container.
It does not add any kind of dependency between containers: they can communicate as long as both are running, regardless of restarts, replacements, and launch order. It uses DNS internally, I believe, instead of /etc/hosts.
Use it like this: docker run --net=some_user_defined_nw --net-alias postgres ... and you can connect to it using that alias from any container on the same network.
It does not work on the default network, unfortunately; you have to create one with docker network create <network> and then use it with --net=<network> for every container (Compose supports it as well).
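A minimal end-to-end sketch, with app_net and the image tag as placeholder names:
docker network create app_net
docker run -d --net=app_net --net-alias=postgres postgres:9.6
docker run --rm -it --net=app_net postgres:9.6 psql -h postgres -U postgres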
Besides a container being down and therefore unreachable by its alias, multiple containers can also share an alias, in which case it is not guaranteed to resolve to the right one. But in some cases that can help with seamless upgrades, probably.
It's all not very well documented yet, and hard to figure out just by reading the man page.