Access docker within container on jenkins slave - sockets

My question is basically a combination of Access Docker socket within container and Accessing docker host from (jenkins) docker container.
My goal
To run Jenkins fully dockerized, including dynamic slaves, and to be able to create Docker containers within the slaves.
Except for the last part, everything is already working thanks to https://github.com/maxfields2000/dockerjenkins_tutorial, provided the Docker UNIX socket is properly exposed to the Jenkins master.
The problem
Unlike the slaves, which are provisioned dynamically, the master is started via docker-compose and thus has proper access to the UNIX socket.
For the dynamically spawned slaves, this approach does not work.
I tried to forward access to Docker by declaring
VOLUME /var/run/docker.sock
VOLUME /var/lib/docker
while building the image. Unfortunately, so far I get Permission denied (socket: /run/docker.sock) when trying to access docker.sock in a slave created like this: https://gist.github.com/geoHeil/1752b46d6d38bdbbc460556e38263bc3
The strange thing is: the user in the slave is root.
So why do I not have access to docker.sock? Or how could I bake in the --privileged flag so that the permission-denied problem goes away?

Docker 1.10 introduced user namespaces, so sharing docker.sock isn't enough on its own: root inside the container isn't necessarily root on the host machine anymore.
I recently played with a Jenkins container as well, and I wanted to build containers using the host's Docker engine.
The steps I did are:
Find group id for docker group:
$ id
..... 999(docker)
Run jenkins container with two volumes - one contains the docker client executable, the other shares the docker unix socket. Note how I use --group-add to add the container user to the docker group, to allow access:
docker run --name jenkins -tid -p 8080:8080 --group-add=999 -v /path-to-my-docker-client:/home/jenkins/docker -v /var/run/docker.sock:/var/run/docker.sock jenkins
Tested and found it indeed works:
docker exec -ti jenkins bash
./docker ps
See more about additional groups here
Another approach would be to use the --privileged flag instead of --group-add, but it's better to avoid it if possible.
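Applied back to the original question's dynamically provisioned slaves, a rough sketch of the same idea is to bake a matching docker group into the slave image at build time (GID 999 and the user name jenkins are assumptions; check the real GID on the host with getent group docker):
groupadd -g 999 docker   # GID must match the host's docker group
usermod -aG docker jenkins
# the host socket still has to be mounted when the slave starts:
# -v /var/run/docker.sock:/var/run/docker.sock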

Related

Run a K3S server in a docker container, and connect a K3S agent in another docker container

I know k3d can do this magically via k3d cluster create myname --token MYTOKEN --agents 1, but I am trying to figure out how to do the simplest version of that 'manually'. I want to create a server with something like:
docker run -e K3S_TOKEN=MYTOKEN rancher/k3s:latest server
And connect an agent with something like:
docker run -e K3S_TOKEN=MYTOKEN -e K3S_URL=https://localhost:6443 rancher/k3s:latest agent
Does anyone know what ports need to be forwarded here? How can I set this up? With nearly everything I try, the agent complains that port 6444 is already in use, even if I disable as much of the server as possible with any combination of --no-deploy servicelb --disable-agent --no-deploy traefik.
Feel free to disable literally everything other than the server and the agent, I'm trying to make this ultra ultra simple, but just butting my head against a wall at the moment. Thanks!
The containers must "see" each other. Docker isolates the networks by default, so "localhost" in your agent container is the agent container itself.
Possible solutions:
1) Run both containers without network isolation using --net=host.
2) Publish the server's API port to the host with -p (--publish) and use the host IP in the agent container.
3) Use docker-compose; a working example for docker-compose is described here: https://www.trion.de/news/2019/08/28/kubernetes-in-docker-mit-k3s.html
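A minimal sketch of option 2, assuming the API listens on 6443 and with HOST_IP standing in for your Docker host's address (note that k3s containers typically also need --privileged, omitted here to keep the networking part visible):
docker run -d --name k3s-server -p 6443:6443 -e K3S_TOKEN=MYTOKEN rancher/k3s:latest server
# the agent talks to the host-published port instead of "localhost"
docker run -d --name k3s-agent -e K3S_TOKEN=MYTOKEN -e K3S_URL=https://HOST_IP:6443 rancher/k3s:latest agent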

How to reconnect to same postgres database on Docker

I'm very new to using docker and I've created a postgres container using
docker run --name mytrainingdb -e POSTGRES_PASSWORD=mysecretpassword -d postgres. Then I connected to it with docker exec -it <container-id> bash and then psql.
Then I stop the container.
My question is, what do I do to reconnect to the same database? I tried to run the same docker run command, but it says the name 'mytrainingdb' is already in use, which means it is trying to create it afresh, which is not what I want. I hope my expectation is right: when I restart my laptop or resume work, I can just restart the same container and my data/config will be preserved?
The documentation also mentions that we can link a host directory to the volume of the pg container to have the stored data accessible to us, but I'm OK with Docker managing my storage for that database.
You get an error when you try to re-run the same command because Docker is trying to create a new container with the same name as the previous one, "mytrainingdb". If you close Docker and reopen it you will still find your container, but it's not running; you can start it again with docker start mytrainingdb or remove it with docker rm mytrainingdb.
However, don't restart Docker just because you want to create a new container with the same name! If you want to start a new container with the same name while your old container is still running, first stop it with docker stop mytrainingdb and remove it with docker rm mytrainingdb, or just run docker rm -f mytrainingdb (this removes the running container by force) and then create the new container.
As for volumes, you already created one by default; its name is a hash and it is found under /var/lib/docker/volumes/ on the host. Containers such as PostgreSQL, or databases in general, persist their data in volumes. The volume gets created when you run the container and is handy for saving persistent data, whether you start the container with -v or not.
The volume you talked about in your question is a mounted (bind-mounted) volume: you basically bind a certain directory or file from the host (outside) to a path inside the container, e.g.
docker run -v /hostdir:/containerdir, in your case docker run -v /hostdir:/var/lib/postgresql/data
If you restart Docker or your computer, running containers won't be automatically restarted. You can start your container again with docker start mytrainingdb (related question), then connect with your docker exec command.
(one tip: instead of running bash, then psql, you can directly run psql, e.g. docker exec -it mytrainingdb psql --user postgres)
Your understanding of data persistence is correct, docker will manage the data and it will still be around.
From the postgres image documentation
There are several ways to store data used by applications that run in Docker containers. We encourage users of the postgres images to familiarize themselves with the options available, including:
Let Docker manage the storage of your database data by writing the database files to disk on the host system using its own internal volume management. This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
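If you are curious where that automatically created volume lives, a quick way to find it (the container name is taken from the question; the template output format is just illustrative):
docker inspect -f '{{ range .Mounts }}{{ .Name }} -> {{ .Destination }}{{ end }}' mytrainingdb
docker volume inspect <volume-name-from-previous-command>
# the Mountpoint shown will be under /var/lib/docker/volumes/ on the host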
You can add the --rm argument so that whenever you stop the container manually, or the container stops for any reason (its task is done or it fails), that container will be removed.
In your case, you can use this:
docker run --name mytrainingdb --rm -e POSTGRES_PASSWORD=mysecretpassword -d postgres

connect to shell terminal of other container in a pod

When I define multiple containers in a pod/pod template, like one container running an agent and another running php-fpm, how can they access each other? I need the agent container to connect to php-fpm via a shell and execute a few steps interactively through the agent container.
Based on my understanding, we can package kubectl into the agent container and use kubectl exec -it <container id> sh to connect to the container. But I don't want the agent container to have more privileges than needed to connect to the target container, which is php-fpm.
Is there a better way for agent container to connect to php-fpm by a shell and execute commands interactively?
Also, I wasn't successful in running kubectl from a container when using minikube, due to the following errors:
docker run -it -v ~/.kube:/root/.kube lachlanevenson/k8s-kubectl get nodes
Error in configuration:
* unable to read client-cert /Users/user/.minikube/apiserver.crt for minikube due to open /Users/user/.minikube/apiserver.crt: no such file or directory
* unable to read client-key /Users/user/.minikube/apiserver.key for minikube due to open /Users/user/.minikube/apiserver.key: no such file or directory
* unable to read certificate-authority /Users/user/.minikube/ca.crt for minikube due to open /Users/user/.minikube/ca.crt: no such file or directory
First off, every Pod within a k8s cluster has its own k8s credentials provided by /var/run/secrets/kubernetes.io/serviceaccount/token, and thus there is absolutely no need to attempt to volume-mount your home directory into a docker container.
The reason you are getting the error about client-cert is that the contents of ~/.kube are merely strings that point to the externally defined SSL key, SSL certificate, and SSL CA certificate referenced inside ~/.kube/config -- but I won't speak to fixing that problem further, since there is no good reason to be using that approach.
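A hedged sketch of what this looks like in practice: inside the agent container, kubectl picks up the in-cluster service account automatically, so, provided that service account has RBAC permission for pods/exec and assuming the sidecar container is named php-fpm, something like this should work:
POD_NAME="$(hostname)"   # inside a pod, the hostname defaults to the pod name
kubectl exec -it "$POD_NAME" -c php-fpm -- sh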

How to use the "Remote Systems" view in Eclipse to explore a Docker container file system?

The Eclipse Remote Systems view is a great tool to connect to VMs and explore their file systems; the New Connection wizard currently offers several connection types (Linux, SSH Only, FTP Only, etc.).
First I find out the container IP by running this command:
docker inspect <container> | grep IPAddress | cut -d '"' -f 4
Once I have the IP, I launch the New Connection wizard from the Remote Systems view. I tried selecting Linux, SSH Only and FTP Only; in the Hostname field I paste the container IP, click Finish, and the connection seems to be created successfully. But when I try to expand the Files node it prompts for a user and password, and the problem is that I don't have that info. Does the user/password vary from container to container? How can I get this info?
You can just instantiate a container with that image but with a shell so that you can see what usernames are configured on that image.
docker run -it node /bin/bash
You can then configure users and passwords and do a:
docker commit <container-id> my-node:0.1
Then you can instantiate a new container:
docker run -d -p 80:9080 -p 443:9443 my-node
Is ssh also running in that container? If not you will have to install it into the container so that you can ssh to it.
A docker container only runs a single parent process at a time (on your host machine that parent process is 'init' which runs a bunch of system services). In the case of your node container, that parent process is a node server.
Eclipse connects to a remote machine by connecting to a listener on that machine using some protocol, SSH or FTP for example. With the docker container, there is no process listening for this connection, so you cannot connect using Eclipse as it is. You have two options...
Use the command line and docker exec to connect to the container and explore its filesystem (see the sketch after this list). No pretty pictures, but you don't need a lot of knowledge.
Modify your container in some way so you can connect to it. You have two options here...
A. Modify your image to run an SSH daemon. A simple way to do that is to use the phusion/baseimage container as your parent, and have it spawn both the ssh daemon and the node server. You need to know a good amount about linux sysadmin to get this working (not a lot, but a good amount).
B. Launch a second copy of the container with a different command, such as an SSH daemon (sshd). You can then connect to the second copy. This has the downside that it won't be the same container you're interested in, and you STILL have to modify the image, since I doubt the node image even has an ssh daemon installed... but it requires less knowledge than wrapping your head around runit.
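For completeness, a sketch of option 1 (the container name node-app and the path are illustrative):
docker exec -it node-app /bin/bash    # or /bin/sh if bash is not installed
ls -la /usr/src/app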

How do I set up linkage between Docker containers so that restarting won't break it?

I have a few Docker containers running like:
Nginx
Web app 1
Web app 2
PostgreSQL
Since Nginx needs to connect to the web application servers inside web app 1 and 2, and the web apps need to talk to PostgreSQL, I have linkages like this:
Nginx --- link ---> Web app 1
Nginx --- link ---> Web app 2
Web app 1 --- link ---> PostgreSQL
Web app 2 --- link ---> PostgreSQL
This works pretty well at first. However, when I develop a new version of web app 1 and web app 2, I need to replace them. What I do is remove the web app containers, set up new containers and start them.
For the web app containers, their IP addresses at first would be something like:
172.17.0.2
172.17.0.3
And after I replace them, they will have new IP addresses:
172.17.0.5
172.17.0.6
Now, those exposed environment variables in the Nginx container are still pointing to the old IP addresses. Here comes the problem. How do I replace a container without breaking linkage between containers? The same issue will also happen to PostgreSQL. If I want to upgrade the PostgreSQL image version, I certainly need to remove it and run the new one, but then I need to rebuild the whole container graph, so this is not ideal for real-life server operation.
The effect of --link is static, so it will not work for your scenario (there is currently no re-linking, although you can remove links).
We have been using two different approaches at dockerize.it to solve this, without links or ambassadors (although you could add ambassadors too).
1) Use dynamic DNS
The general idea is that you specify a single name for your database (or any other service) and update a short-lived DNS server with the actual IP as you start and stop containers.
We started with SkyDock. It works with two docker containers, the DNS server and a monitor that keeps it updated automatically. Later we moved to something more custom using Consul (also using a dockerized version: docker-consul).
An evolution of this (which we haven't tried) would be to set up etcd or similar and use its custom API to learn the IPs and ports. The software should support dynamic reconfiguration too.
2) Use the docker bridge ip
When exposing the container ports you can just bind them to the docker0 bridge, which has (or can have) a well known address.
When replacing a container with a new version, just make the new container publish the same port on the same IP.
This is simpler but also more limited. You might have port conflicts if you run similar software (for instance, two containers cannot both listen on port 3306 on the docker0 bridge), etc., so our current favorite is option 1.
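A sketch of option 2, assuming the docker0 bridge sits at 172.17.0.1 (check with ip addr show docker0; older installs used 172.17.42.1), with the image and port chosen to match the question:
docker run -d --name db -e POSTGRES_PASSWORD=secret -p 172.17.0.1:5432:5432 postgres
# clients in other containers connect to 172.17.0.1:5432;
# a replacement container simply publishes the same IP:port again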
Links are for a specific container, not based on the name of a container. So the moment you remove a container, the link is disconnected and the new container (even with the same name) will not automatically take its place.
The new networking feature allows you to connect to containers by their name, so if you create a new network, any container connected to that network can reach other containers by their name. Example:
1) Create new network
$ docker network create <network-name>
2) Connect containers to network
$ docker run --net=<network-name> ...
or
$ docker network connect <network-name> <container-name>
3) Ping container by name
docker exec -ti <container-name-A> ping <container-name-B>
64 bytes from c1 (172.18.0.4): icmp_seq=1 ttl=64 time=0.137 ms
64 bytes from c1 (172.18.0.4): icmp_seq=2 ttl=64 time=0.073 ms
64 bytes from c1 (172.18.0.4): icmp_seq=3 ttl=64 time=0.074 ms
64 bytes from c1 (172.18.0.4): icmp_seq=4 ttl=64 time=0.074 ms
See this section of the documentation.
Note: Unlike legacy links the new networking will not create environment variables, nor share environment variables with other containers.
This feature currently doesn't support aliases
You can use an ambassador container. But do not link the ambassador container to your client, since this creates the same problem as above. Instead, use the exposed port of the ambassador container on the docker host (typically 172.17.42.1). Example:
postgres volume:
$ docker run --name PGDATA -v /data/pgdata/data:/data -v /data/pgdata/log:/var/log/postgresql phusion/baseimage:0.9.10 true
postgres-container:
$ docker run -d --name postgres --volumes-from PGDATA -e USER=postgres -e PASS='postgres' paintedfox/postgresql
ambassador-container for postgres:
$ docker run -d --name pg_ambassador --link postgres:postgres -p 5432:5432 ctlc/ambassador
Now you can start a postgresql client container without linking the ambassador container and access postgresql on the gateway host (typically 172.17.42.1):
$ docker run --rm -t -i paintedfox/postgresql /bin/bash
root@b94251eac8be:/# PGHOST=$(netstat -nr | grep '^0\.0\.0\.0 ' | awk '{print $2}')
root@b94251eac8be:/# echo $PGHOST
172.17.42.1
root@b94251eac8be:/#
root@b94251eac8be:/# psql -h $PGHOST --user postgres
Password for user postgres:
psql (9.3.4)
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.
postgres=#
postgres=# select 6*7 as answer;
answer
--------
42
(1 row)
postgres=#
Now you can restart the ambassador container without having to restart the client.
If anyone is still curious: use the host entries in the /etc/hosts file of each docker container and do not depend on the ENV variables, as they are not updated automatically.
There will be a hosts-file entry for each linked container under its link alias; the corresponding environment variables (in the format LINKEDCONTAINERNAME_PORT_PORTNUMBER_TCP, etc.) are the ones that go stale.
The following is from the Docker docs:
Important notes on Docker environment variables
Unlike host entries in the /etc/hosts file, IP addresses stored in the
environment variables are not automatically updated if the source
container is restarted. We recommend using the host entries in
/etc/hosts to resolve the IP address of linked containers.
These environment variables are only set for the first process in the
container. Some daemons, such as sshd, will scrub them when spawning
shells for connection.
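To see this for yourself, a quick sketch with throwaway containers (names and images are illustrative):
docker run -d --name db -e POSTGRES_PASSWORD=secret postgres
docker run --rm --link db:db busybox cat /etc/hosts
# the link alias "db" resolves to the db container's current IP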
This was included in the experimental build of Docker three weeks ago, with the introduction of services: https://github.com/docker/docker/blob/master/experimental/networking.md
You should be able to get a dynamic link in place by running a docker container with the --publish-service <name> argument. This name will be accessible via DNS. This is persistent across container restarts (as long as you restart the container with the same service name, of course).
You may use docker links with names to solve this.
Most basic setup would be to first create a named database container :
$ sudo docker run -d --name db training/postgres
then create a web container connecting to db :
$ sudo docker run -d -P --name web --link db:db training/webapp python app.py
With this, you don't need to manually connect containers with their IP addresses.
With the OpenSVC approach, you can work around this by:
use a service with its own ip address/dns name (the one your end users will connect to)
tell docker to expose ports to this specific ip address ("--ip" docker option)
configure your apps to connect to the service ip address
each time you replace a container, you are sure that it will connect to the correct ip address.
Tutorial here => Docker Multi Containers with OpenSVC
Don't miss the "complex orchestration" part at the end of the tutorial, which can help you start/stop containers in the correct order (1 postgresql subset + 1 webapp subset + 1 nginx subset).
The main drawback is that you expose the webapp and PostgreSQL ports on a public address, while actually only the nginx TCP port needs to be exposed publicly.
You could also try the ambassador method of having an intermediary container just for keeping the link intact (see https://docs.docker.com/articles/ambassador_pattern_linking/ for more info).
You can bind the connection ports of your images to fixed ports on the host and configure the services to use them instead.
This has its drawbacks as well, but it might work in your case.
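A sketch of that approach; the web app image name and the host address are placeholders:
docker run -d --name postgres -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres
# the web apps are configured to reach the database via the Docker host instead of a container IP
docker run -d --name webapp1 -e DB_HOST=<docker-host-ip> -e DB_PORT=5432 mycompany/webapp1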
Another alternative is to use the --net container:$CONTAINER_ID option.
Step 1: Create "network" containers
docker run --name db_net ubuntu:14.04 sleep infinity
docker run --name app1_net --link db_net:db ubuntu:14.04 sleep infinity
docker run --name app2_net --link db_net:db ubuntu:14.04 sleep infinity
docker run -p 80 -p 443 --name nginx_net --link app1_net:app1 --link app2_net:app2 ubuntu:14.04 sleep infinity
Step 2: Inject services into "network" containers
docker run --name db --net container:db_net pgsql
docker run --name app1 --net container:app1_net app1
docker run --name app2 --net container:app2_net app2
docker run --name nginx --net container:nginx_net nginx
As long as you do not touch the "network" containers, the IP addresses of your links should not change.
A network-scoped alias is what you need in this case. It's a rather new feature, which can be used to "publish" a container providing a service for the whole network, unlike link aliases, which are accessible only from one container.
It does not add any kind of dependency between containers — they can communicate as long as both are running, regardless of restarts and replacement and launch order. It uses DNS internally, I believe, instead of /etc/hosts
Use it like this: docker run --net=some_user_defined_nw --net-alias postgres ... and you can connect to it using that alias from any container on the same network.
Does not work on the default network, unfortunately, you have to create one with docker network create <network> and then use it with --net=<network> for every container (compose supports it as well).
Besides a container being down and hence unreachable by its alias, multiple containers can also share an alias, in which case it's not guaranteed that the alias will resolve to the right one. But in some cases that can help with seamless upgrades, probably.
It's all not very well documented yet; it's hard to figure out just by reading the man page.
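A short end-to-end sketch of the alias approach (network, alias and image names are illustrative):
docker network create my_net
docker run -d --net=my_net --net-alias=postgres -e POSTGRES_PASSWORD=secret postgres
# any container on my_net can now reach it by the alias:
docker run --rm --net=my_net busybox ping -c 1 postgres
# a replacement container started with the same alias keeps clients working without re-linking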