Docker Compose linking appears not to work - PostgreSQL

I'm using Docker Compose to run an Elixir/Phoenix app in development. The setup is pretty standard, with a postgres container and a web container.
However, I'm having a hard time getting the web container to talk to the database container.
Here is my web container Dockerfile:
FROM ubuntu:14.04
MAINTAINER me@example.com
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y wget
RUN apt-get install -y curl
RUN apt-get install -y inotify-tools
RUN apt-get install -y postgresql-client
RUN wget https://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb \
&& dpkg -i erlang-solutions_1.0_all.deb
RUN curl -sL https://deb.nodesource.com/setup_5.x | sudo -E bash -
RUN apt-get install -y nodejs
RUN apt-get update
RUN apt-get install -y esl-erlang
RUN apt-get install -y elixir
RUN mix local.rebar
RUN mix local.hex --force
ADD . src/blog/
WORKDIR src/blog/
RUN mix deps.get
RUN mix deps.compile
Here's my docker-compose.yml:
db:
  image: postgres
web:
  build: .
  command: mix phoenix.server
  volumes:
    - .:/src/blog
  ports:
    - "4000:4000"
  links:
    - db
When I run docker-compose up, things appear to work okay. However, when I try to run (to create the database):
$ docker run blogphoenix_web mix ecto.create
I get the following error:
** (Mix) The database for Blog.Repo couldn't be created, reason given: psql: could not translate host name "db" to address: Name or service not known
Then, if I inspect the hosts file of the web container with:
$ docker run blogphoenix_web cat /etc/hosts
... I get this output:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.4 a86e4f02ea56
Isn't Docker Compose supposed to create a hostname entry for the db container?
Here are some relevant version numbers for my Docker tooling:
$ docker-machine --version
#=> docker-machine version 0.6.0, build e27fb87
$ docker-compose --version
#=> docker-compose version 1.6.0, build unknown
$ docker --version
#=> Docker version 1.10.0, build 590d510
EDIT
Okay, I just noticed something that may help someone else reading this. The command docker run blogphoenix_web cat /etc/hosts starts a new container, whereas docker exec 845f9d69cb1e cat /etc/hosts runs the command inside an already-running container. 845f9d69cb1e is the container ID of the running container based on the blogphoenix_web image.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
845f9d69cb1e blogphoenix_web "mix phoenix.server" About an hour ago Up 2 minutes 0.0.0.0:4000->4000/tcp blogphoenix_web_1
21a6f48dfc3b postgres "/docker-entrypoint.s" About an hour ago Up 2 minutes 5432/tcp blogphoenix_db_1
Running the exec command I get the expected output from the hosts file, showing the appropriate hostname linkage for the db container:
$ docker exec 845f9d69cb1e cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 db_1 21a6f48dfc3b blogphoenix_db_1
172.17.0.2 blogphoenix_db_1 21a6f48dfc3b
172.17.0.2 db 21a6f48dfc3b blogphoenix_db_1
172.17.0.3 845f9d69cb1e
In other words, when I ran docker run blogphoenix_web mix ecto.create, I was executing mix ecto.create in a brand-new container based on the blogphoenix_web image. This new container wasn't started by docker-compose and therefore did not have the hosts file linkage to the db container set up.

You need to run it using docker-compose:
docker-compose run web mix ecto.create
Docker Compose links containers, not images. This means the blogphoenix_web image is not linked to anything on its own, but when you run
docker-compose up
the newly created containers blogphoenix_web_1 and blogphoenix_db_1 will be linked together.
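As a quick sanity check, name resolution can be verified from a Compose-managed one-off container before running any Ecto tasks. A minimal sketch, assuming getent is available in the image (it ships with standard Ubuntu bases):
$ docker-compose run --rm web getent hosts db
This should print the db container's address (172.17.0.2 in the hosts file above). The --rm flag just removes the one-off container when it exits; the same linkage applies to docker-compose run web mix ecto.create.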

Related

How to install chrony on redhat 8 minimal

I'm using the Keycloak Docker image and need to synchronize time with chrony. However, I cannot install chrony because, I assume, it's not in the repository.
I use the image from https://hub.docker.com/r/jboss/keycloak
It's based on registry.access.redhat.com/ubi8-minimal
Steps to reproduce:
~$ docker run -d --rm -p 8080:8080 --name keycloak jboss/keycloak
~$ docker exec -it -u root keycloak bash
[root@707c136d9c8a /]# microdnf install chrony
error: No package matches 'chrony'
I'm not able to find a working repo which provides chrony for RedHat 8 minimal.
Apparently I need to synchronize time on the host system; it has nothing to do with the container itself. Silly me, I need a break.
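For anyone who lands here with the same conclusion: containers share the host's clock (they run on the host kernel), so the fix is to run chrony, or another NTP client, on the host rather than inside the container. A minimal sketch for a RHEL 8 host, using the standard package and service names (adjust for your distribution):
# on the host, not inside the Keycloak container
sudo dnf install -y chrony
sudo systemctl enable --now chronyd
chronyc tracking    # verify the clock is being synchronized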

MongoDB on Debian Docker image - unable to start

Good evening.
I'm a noob in Docker and trying to learn it a little bit. I'm currently writing a simple Java application integrated with MongoDB, but I'm stuck on the Dockerfile. Basically the problem is with starting MongoDB. Here is my Dockerfile:
FROM debian:buster-slim
# Install necessary libs
RUN apt-get update && apt-get install -y apt-utils wget gnupg gnupg2 curl
# Install mongodb
RUN wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | apt-key add -
RUN echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-4.2.list
RUN apt-get update
RUN apt-get install -y mongodb-org
RUN systemctl enable mongod.service
RUN service mongod start
# Install jre 11
RUN apt-get install -y openjdk-11-jre
Here is the terminal output (only last step):
Setting up mongodb-org-shell (4.2.1) ...
Setting up mongodb-org-tools (4.2.1) ...
Setting up mongodb-org-mongos (4.2.1) ...
Setting up mongodb-org (4.2.1) ...
Removing intermediate container 7491080bfe9f
---> bbcf5b2ccb13
Step 7/11 : RUN service mongod start
---> Running in 46a66989ade2
mongod: unrecognized service
The command '/bin/sh -c service mongod start' returned a non-zero code: 1
The funny thing is that I followed the official MongoDB installation guide:
Mongodb installation on debian
During installation on a 'real' Debian/Ubuntu machine it works.
It also doesn't work when I tried to build a Docker image from the official MongoDB image on Docker Hub, I mean FROM mongo:4.2-bionic
After logging in to the container and trying to run mongo, it returns:
root@8cc1d270a262:~# mongo
MongoDB shell version v4.2.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2019-10-23T20:39:44.728+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2019-10-23T20:39:44.729+0000 F - [main] exception: connect failed
2019-10-23T20:39:44.729+0000 E - [main] exiting with code 1
I expected that, because mongod is unable to start... somehow.
Any ideas?
So, it seems that when following the instructions to install MongoDB on Debian, the SysVInit files are not created, which leads to the error message mongod: unrecognized service. So a basic question: does a Docker container really need daemon control with either SysVInit or systemd? I don't think it does, because the container has a single purpose - to host the database - and should always have the database engine running. With this philosophy in mind, I altered the Dockerfile to include an ENTRYPOINT that starts mongod instead of relying on any daemon management system.
In order for the MongoDB database to be available outside the container, I adjusted the mongod.conf file to bind to all network adapters by using bindIp: 0.0.0.0 instead of bindIp: 127.0.0.1. I also expose port 27017 in the Dockerfile. This means that if you have MongoDB installed and running on the host computer using the default port 27017, that process will need to be halted to yield the port to the Docker container.
I was getting some errors in the container around the debconf stuff, so I set it to non-interactive as well. The installation of Java was giving me fits, so I commented it out. If you need Java in this container, that will still need to be worked out.
Dockerfile:
FROM debian:buster-slim
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
# Install necessary libs
RUN apt-get update && apt-get install -y apt-utils wget gnupg gnupg2 curl
# Install mongodb
RUN wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | apt-key add -
RUN echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-4.2.list
RUN apt-get update
RUN apt-get install -y mongodb-org
# BIND TO ALL ADAPTERS IN CONTAINER
RUN sed -i "s,\\(^[[:blank:]]*bindIp:\\) .*,\\1 0.0.0.0," /etc/mongod.conf
# Install jre 11
# RUN "apt-get install -y openjdk-11-jre"
EXPOSE 27017
ENTRYPOINT ["/usr/bin/mongod", "-f", "/etc/mongod.conf"]
Build:
To build the Docker image issue the following command...
docker build --tag mongodb .
(notice the period in the command - it is required).
Run:
To create a docker container, use the run command.
docker run --publish 27017:27017 --name mongodb -d mongodb
Notice the --publish to map host port 27017 to container port 27017. Notice the --name to name the container for easier reference if we need to get a bash shell inside the container. Run -d for detached mode so it runs in the background, and finally refer to the image named mongodb.
Connect:
Assuming MongoDB is installed on the host too, the mongo shell binary will be available. Issue a mongo shell command...
mongo
No other parameters are needed. The installation of MongoDB in the container does not have authorization enabled and so does not ask for a username or password. The default port of 27017 is used by the container and mapped by the Docker engine. Localhost is used by default.
Get BASH shell of container:
If you want to get a BASH shell inside of the container issue the following command...
docker exec -it mongodb bash
Try running the MongoDB Docker container and connecting to it with the mongo client before building custom images:
docker run --name some-mongo -e MONGO_INITDB_ROOT_USERNAME=mongoadmin -e MONGO_INITDB_ROOT_PASSWORD=secret -d mongo
docker exec -it some-mongo bash
mongo -u mongoadmin -p secret --authenticationDatabase admin

I want to run my extra-addons module in a dockerized Odoo container

I have installed Docker on my system with odoo:latest and postgres:latest as containers, and I can successfully start and stop my Odoo service.
But the problem is I can only see the base Odoo modules; I want to run my own modules along with the base modules in the dockerized Odoo.
I have searched many links but failed to understand.
What should I do to run my own modules?
Please help me with all the steps.
Thanks in advance.
The solution to this problem is as follows.
First, I mounted my local folder which contains my extra-addons with the command:
$ docker run -v /path/to/your/local/folder:/mnt/extra-addons -p 8069:8069 --name odoo --link db:db -t odoo
Then check whether your local folder is mounted in the odoo container or not with:
$ docker exec -u root -it odoo /bin/bash
After logging in:
$ ls /mnt/extra-addons
You should see the files that were present in your local folder.
Now it's done; just restart your Docker Odoo server.
To stop:
$ sudo docker stop db
$ sudo docker stop odoo
$ sudo service docker stop
To start:
$ sudo service docker start
$ sudo docker start db
$ sudo docker start -a odoo
Now you can install your modules from the Apps menu.
You just need to mount a folder from your host machine into the container... go to Docker Hub, and on the odoo image page you will find how to mount your custom modules.
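For reference, here is a docker-compose sketch along the lines of the official odoo image documentation; the ./extra-addons path is illustrative, and the database credentials follow the image's documented defaults (a db service with user odoo / password odoo):
version: '2'
services:
  web:
    image: odoo
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      # custom modules mounted into the image's extra-addons directory
      - ./extra-addons:/mnt/extra-addons
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=odoo
      - POSTGRES_PASSWORD=odoo
Anything placed under ./extra-addons on the host then appears in /mnt/extra-addons inside the container, which the image's default configuration already includes in its addons path.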

Starting Postgres in Docker Container

For testing, I'm trying to set up Postgres inside a Docker container so that our Python app can run its test suite against it.
Here's my Dockerfile:
# Set the base image to Ubuntu
FROM ubuntu:16.04
# Update the default application repository sources list
RUN apt-get update && apt-get install -y \
python2.7 \
python-pip \
python-dev \
build-essential \
libpq-dev \
libsasl2-dev \
libffi-dev \
postgresql
USER postgres
RUN /etc/init.d/postgresql start && \
psql -c "CREATE USER circle WITH SUPERUSER PASSWORD 'circle';" && \
createdb -O darwin circle_test
USER root
RUN service postgresql stop && service postgresql start
# Upgrade pip
RUN pip install --upgrade pip
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
# Set the container entrypoint
ENTRYPOINT ["gunicorn", "--config", "/app/config/gunicorn.py", "--access-logfile", "-", "--error-logfile", "-", "app:app"]
When I run:
docker run --entrypoint python darwin:latest -m unittest discover -v -s test
I'm getting:
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
The only way I can get it to work is if I ssh into the container, restart postgres and run the test suite directly.
Is there something I'm missing here?
In a Dockerfile you have a configuration phase (the RUN directives, among others) and the process(es) you actually start, which go in either CMD or ENTRYPOINT; see the docs:
https://docs.docker.com/engine/reference/builder/#cmd
and
https://docs.docker.com/engine/reference/builder/#entrypoint
Each RUN step executes in a temporary build container that exits as soon as the step finishes, so a service started during a RUN (like service postgresql start here) is not running in the final image; and once a container has completed what it has to do in this start phase, it dies.
This is why the reference Dockerfile for PostgreSQL, at
https://github.com/docker-library/postgres/blob/3d4e5e9f64124b72aa80f80e2635aff0545988c6/9.6/Dockerfile
ends with
CMD ["postgres"]
If you want to start several processes, see supervisord or a similar tool (s6, daemontools...):
https://docs.docker.com/engine/admin/using_supervisord/
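A common alternative for a test setup like this is to not bake Postgres into the application image at all, and instead run the official postgres image as a separate container that the test suite connects to. A rough sketch; the container name, the credentials, and the idea that the tests can be pointed at host "db" instead of localhost are assumptions about your setup:
# throwaway Postgres container from the official image
docker run -d --name circle-db \
  -e POSTGRES_USER=circle -e POSTGRES_PASSWORD=circle -e POSTGRES_DB=circle_test \
  -p 5432:5432 postgres:9.6
# run the tests with the app container linked to it (connection host becomes "db", not localhost)
docker run --link circle-db:db --entrypoint python darwin:latest -m unittest discover -v -s test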

MongoDB, Docker, Meteor: Connection Refused

Meteor works perfectly if I run "meteor". If I set up MongoDB and run Meteor with MONGO_URL set to "mongodb://127.0.0.1:27017/meteor" then it too works perfectly. However, if I run a Docker container that calls exactly the same Meteor files on the same machine, with MONGO_URL set as above, then I get the error: "Exception in callback of async function: Error: failed to connect to [127.0.0.1:27017]". Logic would suggest that the introduction of Docker is causing the problem. Therefore, is there something I must do to specifically allow Meteor to reach MongoDB from inside a container - such as something additional with the MongoDB ports, etc.?
Dockerfile is:
FROM ubuntu:14.04
MAINTAINER Me "me@me.com"
RUN apt-get update -y && apt-get install --no-install-recommends -y -q chrpath libfreetype6 libfreetype6-dev libssl-dev libfontconfig1
RUN apt-get install --no-install-recommends -y -q build-essential ca-certificates curl git gcc make nano python
ENV PATH /bin:/usr/local/sbin
RUN curl install.meteor.com | sh
ENV ROOT_URL 127.0.0.1
ENV PORT 3000
ENV MONGO_URL mongodb://127.0.0.1:27017/meteor
EXPOSE 3000
CMD [ "meteor" ]
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
Meteor is called with the following:
docker run --name meteor-dev -it -p 3000:3000 -v /machine/meteor:/opt/meteor -w /opt/meteor meteor-dev
When you run a container it gets its own network, which is isolated from the host network.
So when you try to connect to Mongo using "mongodb://127.0.0.1:27017/meteor", it looks for MongoDB inside your container.
Instead of using 127.0.0.1, use the host's IP address or hostname.
Or, if your MongoDB is running in a container, create a link and use the link when starting the Meteor container. Hope this helps.
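For the second option (MongoDB in its own container), here is a minimal sketch using the legacy --link flag to stay close to the docker run command in the question; the container name mongo and the MONGO_URL override are illustrative:
# run MongoDB in its own container
docker run -d --name mongo mongo
# start the Meteor container linked to it, overriding MONGO_URL to use the link alias
docker run --name meteor-dev -it -p 3000:3000 \
  -v /machine/meteor:/opt/meteor -w /opt/meteor \
  --link mongo:mongo -e MONGO_URL=mongodb://mongo:27017/meteor \
  meteor-dev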