Docker: Use sockets for communication between 2 containers

I have 2 Docker containers: App & Web.
App — a simple container with the PHP application code. It is used only to store the code and deliver it to the remote Docker host.
App image Dockerfile:
FROM debian:jessie
COPY . /var/www/app/
VOLUME ["/var/www/app"]
CMD ["true"]
Web — the web service container, consisting of PHP-FPM + Nginx.
Web image Dockerfile:
FROM nginx
# Remove default nginx configs.
RUN rm -f /etc/nginx/conf.d/*
# Install packages
RUN apt-get update && apt-get install -my \
supervisor \
curl \
wget \
php5-cli \
php5-curl \
php5-fpm \
php5-gd \
php5-memcached \
php5-mysql \
php5-mcrypt \
php5-sqlite \
php5-xdebug \
php-apc
# Ensure that PHP5 FPM is run as root.
RUN sed -i "s/user = www-data/user = root/" /etc/php5/fpm/pool.d/www.conf
RUN sed -i "s/group = www-data/group = root/" /etc/php5/fpm/pool.d/www.conf
# Pass all docker environment
RUN sed -i '/^;clear_env = no/s/^;//' /etc/php5/fpm/pool.d/www.conf
# Add configuration files
COPY config/nginx.conf /etc/nginx/
COPY config/default.vhost /etc/nginx/conf.d
COPY config/supervisord.conf /etc/supervisor/conf.d/
COPY config/php.ini /etc/php5/fpm/conf.d/40-custom.ini
VOLUME ["/var/www", "/var/log"]
EXPOSE 80 443 9000
ENTRYPOINT ["/usr/bin/supervisord"]
My question: Is it possible to link the Web container and the App container by a socket?
The main reason for this is to use the App container to deploy updated code to the remote Docker host.
Using volumes/named volumes to share code between containers is not a good idea, but sockets can help.
Thank you very much for your help and support!

If both containers run on the same host, it's possible to share a socket between the two as they are plain files.
You can create a local docker volume and mount that volume in both containers. Then configure your program(s) to use that path.
docker volume create --name=phpfpm
docker run -v phpfpm:/var/phpfpm web
docker run -v phpfpm:/var/phpfpm app
If the socket can be generated on the host, you can mount the file into both containers. This is the method used to let a docker container control the host's Docker daemon.
docker run -v /var/container/some.sock:/var/run/some.sock web
docker run -v /var/container/some.sock:/var/run/some.sock app
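For example, a rough sketch of what "configure your program(s) to use that path" could look like with the PHP-FPM/nginx setup from the question, assuming PHP-FPM runs in one container and nginx in the other (the socket name php-fpm.sock is an assumption):
# In the container running PHP-FPM: listen on a socket inside the shared
# "phpfpm" volume (pool file path as in the Dockerfile above).
sed -i "s|^listen = .*|listen = /var/phpfpm/php-fpm.sock|" /etc/php5/fpm/pool.d/www.conf
# In the container running nginx: point fastcgi_pass at the same path,
# e.g. in the vhost config:
#   fastcgi_pass unix:/var/phpfpm/php-fpm.sock;
The second approach (mounting a host file) is the same pattern used to give a container access to the host's Docker daemon, e.g. docker run -v /var/run/docker.sock:/var/run/docker.sock some-image.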

Related

How to install multiple PgBouncers on a single server with different pool_mode

How can we install multiple PgBouncers with different pool modes on a single server (Ubuntu 18.04)?
When I tried to install it a second time, it said it was already installed.
Is there any other way to install it on a different port?
You could install a container runtime (e.g. docker) and run multiple containers, each containing a pgbouncer installation, e.g. using this image: https://github.com/edoburu/docker-pgbouncer
First, install docker:
sudo apt install docker.io
Then you can start as many pgbouncers as you like.
pgbouncer-1:
sudo docker run --rm -d \
--name pgbouncer-session \
-e DATABASE_URL="postgres://user:pass@postgres-host/database" \
-e POOL_MODE=session \
-p 5432:5432 \
edoburu/pgbouncer
pgbouncer-2:
sudo docker run --rm -d \
--name pgbouncer-transaction \
-e DATABASE_URL="postgres://user:pass@postgres-host/database" \
-e POOL_MODE=transaction \
-p 5433:5432 \
edoburu/pgbouncer
Note that the containers use different ports on the host (the first one uses 5432, the second one uses 5433).
If you have lots of configuration, you might want to use a bind-mount for the configuration files.
Also, for a stable setup, I would recommend using docker-compose instead of raw docker commands.
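For example, a minimal docker-compose sketch for the same two pgbouncers (the DATABASE_URL values are the same placeholders as above):
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  pgbouncer-session:
    image: edoburu/pgbouncer
    environment:
      DATABASE_URL: "postgres://user:pass@postgres-host/database"
      POOL_MODE: session
    ports:
      - "5432:5432"
  pgbouncer-transaction:
    image: edoburu/pgbouncer
    environment:
      DATABASE_URL: "postgres://user:pass@postgres-host/database"
      POOL_MODE: transaction
    ports:
      - "5433:5432"
EOF
sudo docker-compose up -d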

What is the best way to install tensorflow and mongodb in docker?

I want to create a docker container or image with both tensorflow and mongodb installed. I have seen that there are docker images for each application, but I need them to work together: from a mongodb database I must extract the data to feed a model created in tensorflow.
So I want to know whether a configuration like that is possible, since I have tried with an ubuntu container and installing the applications I need inside it, but I don't know if there is another way to do it.
Thanks.
Interesting that I found this post; I had just worked out one solution for myself. Maybe not the one for you, BTW.
What I did is: docker pull mongo and run it as a daemon:
#!/bin/bash
export VOLUME='/home/user/code'
docker run -itd \
--name mongodb \
--publish 27017:27017 \
--volume ${VOLUME}:/code \
mongo
Here:
the 'd' in '-itd' means running as a daemon (like a service, not interactive);
the --volume option can be omitted if you do not need it.
Then docker pull tensorflow/tensorflow and run it with:
#!/bin/bash
export VOLUME='/home/user/code'
docker run \
-u 1000:1000 \
-it --rm \
--name tensorflow \
--volume ${VOLUME}:/code \
-w /code \
-e HOME=/code/tf_mongodb \
tensorflow/tensorflow bash
Here:
the -u runs the container's bash with the same ownership (UID:GID) as the host user;
the --volume maps the host folder /home/user/code to /code inside the container;
the -w makes the container's bash start in /code, which is /home/user/code on the host;
the -e HOME= option sets bash's $HOME folder so that you can pip install later.
Now you have a bash prompt, so you can
create a virtual env folder under /code (which is mapped to /home/user/code),
activate the venv,
pip install pymongo,
and then connect to the mongodb you run in docker (localhost may not work, please use the host IP address).
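As an alternative to using the host IP, a user-defined Docker network lets the two containers reach each other by container name; a rough sketch (the container names are the ones used above, the rest is an assumption):
# Create a shared network and attach both containers to it.
docker network create tf-mongo
docker network connect tf-mongo mongodb
docker network connect tf-mongo tensorflow
# Inside the tensorflow container, MongoDB is then reachable as "mongodb":
#   pip install pymongo
#   python -c "import pymongo; print(pymongo.MongoClient('mongodb://mongodb:27017').server_info()['version'])"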

No access to apache docker server

In my docker image I need to run an Apache server to deploy my website, a glassfish server to deploy the corresponding backend, and MongoDB, to which the backend connects.
My dockerfile looks like this:
FROM httpd:2.4
FROM glassfish:latest
FROM mongo:3.6
COPY /backend_war_exploded /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/backend_war_exploded
COPY /backend_war_exploded /usr/local/glassfish4/bin/backend_war_exploded
COPY /dist /usr/local/apache2/htdocs/
After building the image I run and start it with:
docker run -dit --name application -p 80:80 -p 8080:8080 -p 27017:27017 applicationimg
docker start application
When I try to access it via http://localhost:80 it delivers ERR_EMPTY_RESPONSE. Same for the backend, but I can access mongodb on port 27017. When I comment out the FROM lines in my dockerfile and run everything separately, it works just fine. Does somebody see the mistake? Thanks in advance.
UPDATE
I followed your suggestion and rewrote the Dockerfile:
FROM ubuntu:16.04
COPY /dist /var/www/html/
COPY /backend_war_exploded /glassfish4/glassfish/domains/domain1/autodeploy/backend_war_exploded
RUN apt-get update && apt-get install -y apache2
RUN apt-get install -y openjdk-8-jdk
RUN apt-get install -y wget && apt-get install -y unzip
RUN wget http://download.java.net/glassfish/4.1.2/release/glassfish-4.1.2.zip
RUN unzip glassfish-4.1.2.zip
RUN cd /glassfish4/bin/ && ./asadmin start-domain domain1
EXPOSE 80
EXPOSE 8080
The web server starts up and is accessible via localhost:80, but the glassfish server only starts while building the image; when running the container it is not started anymore. When I access the container via docker exec, I can navigate to glassfish and start it up manually. What is the issue?
You need to depend on one FROM only and add the other tools through RUN steps, or use a single image for each application and connect them together through a docker network or by creating a docker-compose.yml (sketched below), which will be easier. Using multiple FROM instructions does not mean that you are going to have all 3 in 1.
For more information about how to create Dockerfile and How to deploy your application with multiple containers you can check the get started tutorial from Docker
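A rough docker-compose.yml for that multi-container approach, reusing the three base images and the COPY sources from the question as bind mounts (a sketch, not a tested setup):
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: httpd:2.4
    ports:
      - "80:80"
    volumes:
      - ./dist:/usr/local/apache2/htdocs/
  backend:
    image: glassfish:latest
    ports:
      - "8080:8080"
    volumes:
      - ./backend_war_exploded:/usr/local/glassfish4/glassfish/domains/domain1/autodeploy/backend_war_exploded
  db:
    image: mongo:3.6
    ports:
      - "27017:27017"
EOF
docker-compose up -d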
In order to run multiple services inside one container you need to use a service manager like Supervisor. Check the following link for more details: Multi-Service Container

I want to run my extra-addons module in a dockerized odoo container

I have installed Docker on my system with odoo:latest and postgres:latest as containers, and I can successfully start & stop my odoo service.
But the problem is that I can only see the base odoo modules; instead I want to run my own created modules along with the base modules in the dockerized odoo.
I have searched many links but failed to understand.
What should I do to run my own modules?
Please help me with all the steps.
Thanks in advance.
The solution to this problem is as follows.
First, I mounted my local folder, which contains my extra-addons, with the command:
$ docker run -v /path/to/your/local/folder:/mnt/extra-addons -p 8069:8069 --name odoo --link db:db -t odoo
Then check whether your local folder is mounted in the odoo container
or not with:
$ docker exec -u root -it odoo /bin/bash
After logging in:
$ ls /mnt/extra-addons
You should see the files which were present in your local folder.
Now it's done; just restart your docker odoo server.
To stop:
$ sudo docker stop db
$ sudo docker stop odoo
$ sudo service docker stop
To start:
$ sudo service docker start
$ sudo docker start db
$ sudo docker start -a odoo
Now you can install your modules from the app.
You just need to mount a folder from your host machine into the docker container... go to Docker Hub, and in the odoo image description you will find how to mount your custom modules.
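If you prefer docker-compose over the docker run/--link commands above, a minimal sketch could look like this (the addons path is a placeholder, and the db credentials follow the defaults documented for the official odoo image):
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  db:
    image: postgres:latest
    environment:
      POSTGRES_USER: odoo
      POSTGRES_PASSWORD: odoo
      POSTGRES_DB: postgres
  odoo:
    image: odoo:latest
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - /path/to/your/local/folder:/mnt/extra-addons
EOF
docker-compose up -d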

xhost command for docker GUI apps (Eclipse)

I'm looking at running a GUI app in docker. I've heard that this incurs security problems due to the X server being exposed. I'd like to know what is being done in each of the following steps, specifically xhost local:root:
[ -d ~/workspace ] || mkdir ~/workspace
xhost local:root
docker run -i --net=host --rm -e DISPLAY -v $HOME/workspace/:/workspace/:z docbill/ubuntu-umake-eclipse
[ -d ~/workspace ] || mkdir ~/workspace
This creates a workspace directory in your home directory if it doesn't already exist.
xhost local:root
This permits the root user on the local machine to connect to the X windows display.
docker run -i --net=host --rm -e DISPLAY -v $HOME/workspace/:/workspace/:z docbill/ubuntu-umake-eclipse
This runs a container with the following options:
-i: interactive, input typed after this command is run is received by the process launched inside the container.
--net=host: host networking, the container is not launched with an isolated network stack. Instead, all networking interfaces of the host are directly accessible inside the container.
--rm: automatically clean up the container on exit. Otherwise the container will remain in a stopped state.
-e DISPLAY: pass through the DISPLAY environment variable from the host into the container. This tells GUI programs where to send their output.
-v $HOME/workspace/:/workspace/:z: map the workspace folder from your home directory on the host to the /workspace folder inside the container, with selinux sharing settings enabled.
docbill/ubuntu-umake-eclipse: run this image, authored by user docbill on the docker hub (anyone is able to create an account there). This is not an official image from docker but a community-submitted image.
From the options, this command is most likely designed for users running on RHEL or CentOS Docker host. It will not work on Docker for Windows or Docker for Mac, but should work on other variants of Linux.
I've used similar commands to run my containers with a GUI, but without the xhost and host networking. Instead, I've just mapped the X windows socket (/tmp/.X11-unix) directly into the container:
docker run -it --rm -e DISPLAY -u `id -u` \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v /etc/localtime:/etc/localtime:ro \
my_gui_image
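One usage note on the xhost approach above: the grant given by xhost local:root can be revoked again once you are done, e.g.:
xhost -local:root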