CouchDB with docker-compose not reachable from host (but from localhost) - docker-compose

I am setting up CouchDB using docker-compose with the following docker-compose.yml (the following is a minimal example):
version: "3.6"
services:
couchdb:
container_name: couchdb
image: apache/couchdb:2.2.0
restart: always
ports:
- 5984:5984
volumes:
- ./test/couchdb/data:/opt/couchdb/data
environment:
- 'COUCHDB_USER=admin'
- 'COUCHDB_PASSWORD=password'
couchdb_setup:
depends_on: ['couchdb']
container_name: couchdb_setup
image: apache/couchdb:2.2.0
command: ['/bin/bash', '-x', '-c', 'cat /usr/local/bin/couchdb_setup.sh | tr -d "\r" | bash']
volumes:
- ./scripts/couchdb_setup.sh:/usr/local/bin/couchdb_setup.sh:ro
environment:
- 'COUCHDB_USER=admin'
- 'COUCHDB_PASSWORD=password'
The second container executes the script ./scripts/couchdb_setup.sh, which starts with:
until curl -f http://couchdb:5984; do
  sleep 1
done
Now, the issue is that the curl call always returns The requested URL returned error: 502 Bad Gateway. I figured that CouchDB is only listening on http://localhost:5984 but not on http://couchdb:5984, as becomes evident when I bash into the couchdb container and issue both curls: http://localhost:5984 gives the expected response, while both http://couchdb:5984 and http://<CONTAINER_IP>:5984 (http://192.168.32.2:5984, in my case) respond with server 192.168.32.2 is unreachable.
I looked into the configs, especially into the [chttpd] section and its bind_address setting. By default, bind_address is set to any, but I have also tried using 0.0.0.0, to no avail.
I'm looking for hints as to what I did wrong, and for advice on how to set up CouchDB with docker-compose. Any help is appreciated.
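As an aside on the wait loop itself: the until loop will spin forever if the URL is simply wrong rather than not-yet-up. A bounded retry helper makes the setup container fail fast instead. This is only a sketch; wait_for is a hypothetical name, not part of the original setup script:

```shell
# Retry a command until it succeeds, giving up after N attempts.
# (Sketch; not part of the original couchdb_setup.sh.)
wait_for() {
  tries=$1; shift
  i=0
  until "$@"; do
    i=$((i+1))
    if [ "$i" -ge "$tries" ]; then
      echo "gave up after $tries attempts" >&2
      return 1
    fi
    sleep 1
  done
}

# e.g.: wait_for 30 curl -fs http://couchdb:5984
```

This way a misconfiguration surfaces as a failed setup container rather than one that hangs indefinitely.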

Related

How to resolve docker name resolution failure?

Below is my docker compose file:
version: '3.7'
# Run tests in much the same way as circleci
# docker-compose -f docker-compose.test.yml up
# TODO check aws versions
services:
  db:
    # image: circleci/postgres:11-alpine
    image: kotify/postgres-non-durable:11.2
    env_file: .env-test
    container_name: limetonic_db
    ports:
      - 5433:5432
  redis:
    image: circleci/redis:6.2.1-alpine
  selenium:
    image: selenium/standalone-chrome:89.0
    container_name: limetonic_selenium
    shm_size: '2gb'
    environment:
      TZ: "Australia/Sydney"
    ports:
      - 4444:4444
  test:
    build:
      context: .
      dockerfile: Dockerfile
    env_file: .env-test
    depends_on:
      - redis
      - db
      - selenium
    command: bash -c "make ci.test"
The db, redis and selenium containers have 127.0.0.11 as their DNS server, while test has 8.8.8.8. Now I am spinning up a Django server at test:8000, but it does not come up, failing with a name resolution error, which I understand comes from the 8.8.8.8 DNS.
I have read many questions on SO, but none of the solutions work. I have modified DOCKER_OPTS, changed dnsmasq, etc. This problem occurs on a stock Ubuntu installation, and none of the changes I made make a difference.
It does not matter what DNS the test container uses; it won't resolve test.
Note that db, selenium and redis can ping each other, but test obviously cannot.
My systemd-resolved has 4.2.2.2 and 8.8.8.8 as DNS servers, which is why test is not resolving. I understand that Docker does not accept 127.0.0.11 as DNS. However, if that is the case, how can the other images resolve with the local DNS? And even if I set the DNS of the test container to 127.0.0.11, it still does not resolve test.
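For what it's worth, Docker's embedded DNS at 127.0.0.11 is only provided to containers attached to a user-defined network (which compose normally creates automatically). Making the network explicit at least rules out configuration drift between services. A sketch, reusing service names from the file above:

```yaml
version: '3.7'
services:
  test:
    build: .
    networks:
      - testnet
    depends_on:
      - db
  db:
    image: kotify/postgres-non-durable:11.2
    networks:
      - testnet
networks:
  testnet:
    driver: bridge
```

With every service on the same named network, test and db should resolve each other's service names through the embedded DNS regardless of the host's resolver configuration.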

Docker-compose postgresql integration

I'm new to docker and am trying to make a composed image consisting of services: nginx and a postgresql database. I'm following the tutorial here: http://www.patricksoftwareblog.com/how-to-use-docker-and-docker-compose-to-create-a-flask-application/
I have been successful up to adding postgresql, where I'm having difficulties and questions.
My docker-compose.yml:
version: '2'
services:
  web:
    restart: always
    build: ./home/admin/
    expose:
      - "8000"
  nginx:
    restart: always
    build: ./etc/nginx
    ports:
      - "80:80"
    volumes:
      - /www/static
    volumes_from:
      - web
    depends_on:
      - web
  data:
    image: postgres:9.6
    volumes:
      - /var/lib/postgresql
    command: "true"
  postgres:
    restart: always
    build: ./var/lib/postgresql
    volumes_from:
      - data
    ports:
      - "5432:5432"
I have included his docker generator script under /var/lib/postgresql, but keep facing ERROR: Dockerfile parse error line 1: unknown instruction: IMPORT when I run docker-compose build.
If I leave in the data section and remove the postgres section of my docker-compose.yml, my containers seemingly run fine, but I'm unsure whether postgresql is properly running at all. I'm able to GET using curl, but I'm still unsure how to confirm a properly working postgres environment, and would appreciate examples on this topic in particular.
I was also wondering whether running my docker-compose containers and then simply running a separate postgresql container could also work, provided the correct ports.
Thank you!
Check the content of your docker-compose.yml for:
- yaml format (see for instance codebeautify.org/yaml-validator)
- eol or encoding issues
- multi-line instructions
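Separately, on the question of confirming that Postgres is properly running: one option is a compose healthcheck plus a manual probe. A sketch (pg_isready ships with the official postgres image; note that the healthcheck key requires compose file format 2.1 or later, while the file above uses '2'):

```yaml
postgres:
  restart: always
  build: ./var/lib/postgresql
  ports:
    - "5432:5432"
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U postgres"]
    interval: 5s
    timeout: 3s
    retries: 5
```

Once the container is up, docker-compose exec postgres psql -U postgres -c '\l' should list the databases if the server is actually accepting connections.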

How do I properly set up my Keystone.js app to run in docker with mongo?

I have built my app, which runs fine locally. When I try to run it in docker (docker-compose up) it appears to start, but then throws an error message:
Creating mongodb ... done
Creating webcms ... done
Attaching to mongodb, webcms
...
Mongoose connection "error" event fired with:
MongoError: failed to connect to server [localhost:27017] on first connect
...
webcms exited with code 1
I have read that with Keystone.js you need to configure the Mongo location in the .env file, which I have:
MONGO_URI=mongodb://localhost:27017
Here is my Docker file:
# Use node 9.4.0
FROM node:9.4.0
# Copy source code
COPY . /app
# Change working directory
WORKDIR /app
# Install dependencies
RUN npm install
# Expose API port to the outside
EXPOSE 3000
# Launch application
CMD ["node","keystone"]
...and my docker-compose file:
version: "2"
services:
# NodeJS app
web:
container_name: webcms
build: .
ports:
- 3000:3000
depends_on:
- mongo
# MongoDB
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db/mongo
ports:
- 27017:27017
When I run docker ps it confirms that mongo is up and running in a container...
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f3e06e4a5cfe mongo "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:27017->27017/tcp mongodb
I am either missing some config or have configured it incorrectly. Could someone tell me which it is?
Any help would be appreciated.
Thanks!
It is not working properly because you are using the wrong host.
Your container does not understand what localhost:27017 is, since that is your computer's address, not the container's.
It is important to understand that each service runs in its own container with a different IP.
The beauty of docker-compose is that you do not need to know your container's address! It is enough to know your service name:
version: "2"
volumes:
db-data:
driver: local
services:
web:
build: .
ports:
- 3000:3000
depends_on:
- mongo
environment:
- MONGO_URI=mongodb://mongo:27017
mongo:
image: mongo
volumes:
- "db-data:/data/db/mongo"
ports:
- 27017:27017
Just run docker-compose up and you are all set.
A couple of things that may help:
First: I am not sure what your error logs look like, but buried in mine was:
...Error: The cookieSecret config option is required when running Keystone in a production environment. Update your app or environment config so this value is supplied to the Keystone constructor...
To solve this problem, in your Keystone entry file (e.g. index.js) make sure your Keystone constructor has the cookieSecret parameter set correctly: process.env.NODE_ENV === 'production'
Next, change the mongo URI from the one Keystone generated (mongoUri: mongodb://localhost/my-keystone) to mongoUri: 'mongodb://mongo:27017'. Docker needs this because it is the mongo container's address. The change should also be reflected in your docker-compose file under the MONGO_URI environment variable:
...
environment:
  - MONGO_URI=mongodb://mongo:27017
...
After these changes your Keystone constructor should look like this:
const keystone = new Keystone({
  adapter: new Adapter(adapterConfig),
  cookieSecret: process.env.NODE_ENV === 'production',
  sessionStore: new MongoStore({ url: 'mongodb://mongo:27017' }),
});
And your docker-compose file should look something like this. (I used a network instead of links, as Docker has stated that links are a legacy option; I've included mine in case it's useful for anyone else.)
version: "3.3"
services:
mongo:
image: mongo
networks:
- appNetwork
ports:
- "27017:27017"
environment:
- MONGO_URI=mongodb://mongo:27017
appservice:
build:
context: ./my-app
dockerfile: Dockerfile
networks:
- appNetwork
ports:
- "3000:3000"
networks:
appNetwork:
external: false
It is better to use MongoDB Atlas if you do not want complications. You can use it both locally and in deployment.
Simple steps to get the mongo URL are available at https://www.mongodb.com/cloud/atlas
Then add an env variable:
CONNECT_TO=mongodb://your_url
To pass the .env to docker, use:
docker run --publish 8000:3000 --env-file .env --detach --name kb keystoneblog:1.0
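For reference, the docker run flags above have docker-compose equivalents; in particular, --env-file maps to the env_file key. A sketch, reusing the web service from earlier:

```yaml
web:
  build: .
  env_file: .env       # compose equivalent of --env-file .env
  ports:
    - "8000:3000"      # host:container, like --publish 8000:3000
```

This keeps the environment handling identical whether the container is started with docker run or docker-compose up.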

Command not working from "command" section in docker compose

I am trying to use php:7.2-apache to run my (Laravel) code, as we currently have Apache on our production server.
However, I need the mod_rewrite module to be loaded, which is not the case by default for me.
docker-compose.yml
apache:
  restart: unless-stopped
  image: php:7.2-apache
  container_name: apache_l
  command: bash -c "a2enmod rewrite && service apache2 restart"
  ports:
    - 80:80
    - 443:443
  environment:
    - APACHE_DOCUMENT_ROOT=/var/www/html/public/
  volumes:
    - .:/var/www/html
    - ./docker/php-ini/php.ini:/usr/local/etc/php/php.ini
    - ./docker/sites-enabled:/etc/apache2/sites-enabled/
I wrote the command to enable mod_rewrite as suggested here:
https://github.com/docker-library/php/issues/179
If I run these commands by hand in the container it works, but in a command section like this I get this error log:
Enabling module rewrite.
To activate the new configuration, you need to run:
service apache2 restart
AH00558: apache2: Could not reliably determine the server's fully
qualified domain name, using 192.168.0.3. Set the 'ServerName'
directive globally to suppress this message
Restarting Apache httpd web server: apache2.
Module rewrite already enabled
...
How come it only works when running these commands by hand?
Bonus question (not that important):
Why does the APACHE_DOCUMENT_ROOT environment variable not work? I have to change the config inside /etc/apache2/sites-enabled/ by hand for it to take effect, even though the variable is advertised.
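One likely explanation for the main question (offered as a hypothesis, not verified against this exact setup): command: replaces the image's default apache2-foreground, so after service apache2 restart puts Apache into the background via the init script, the shell exits and the container stops, and restart: unless-stopped keeps cycling it. Enabling the module at build time sidesteps the problem entirely. A sketch:

```dockerfile
# Sketch: bake mod_rewrite into the image instead of enabling it at run time
FROM php:7.2-apache
RUN a2enmod rewrite
```

In the compose file, image: php:7.2-apache would then become build: . (pointing at this Dockerfile), and the command: line can be dropped so the stock foreground entrypoint runs.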
I fixed it another way: first I ran
docker cp apache_l:/etc/apache2/ .\docker\
which copies the entire config directory to the host, and then I mounted that directory as a volume in the docker-compose.yml:
apache:
  restart: unless-stopped
  image: php:7.2-apache
  container_name: apache_l
  # depends_on:
  #   - openssl
  ports:
    - 80:80
    - 443:443
  environment:
    - APACHE_DOCUMENT_ROOT=/var/www/html/public/
  volumes:
    - .:/var/www/html
    - ./docker/apache2:/etc/apache2/
I also made a symlink from mods-available/rewrite.load to mods-enabled/rewrite.load.

docker link resolves to localhost

I'm stuck on a very strange docker problem that I've not encountered before. What I want to do is to use docker-compose to make my application available from the internet. It's currently running on an instance on DigitalOcean, and I'm working with the following docker-compose.yml:
version: '2.2'
services:
  mongodb:
    image: mongo:3.4
    volumes:
      - ./mongo:/data/db
    ports:
      - "27017"
  mongoadmin: # web UI for mongo
    image: mongo-express
    ports:
      - "8081:8081"
    links:
      - "mongodb:mongo"
    environment:
      - ME_CONFIG_OPTIONS_EDITORTHEME=ambiance
      - ME_CONFIG_BASICAUTH_USERNAME=user
      - ME_CONFIG_BASICAUTH_PASSWORD=pass
  app:
    image: project/name:0.0.1
    volumes:
      - ./project:/usr/src/app
    working_dir: /usr/src/app
    links:
      - "mongodb:mongodb"
    environment:
      - NODE_ENV=production
    command: ["npm", "start"]
    ports:
      - "3000:3000"
Mongoadmin connects properly and is able to connect to the database, while the database itself cannot be connected to from outside the host.
The problem is that the app won't connect to the right address. It is an Express server using Mongoose to connect to the database. Before connecting, I log the URL it will connect to. In my config.js I've listed mongodb://mongodb/project, but this resolves to localhost, resulting in MongoError: failed to connect to server [localhost:27017] on first connect. The name of the container is resolved, but not to the proper address.
I've tried to connect to the IP (in the 172.18.0.0 range) that Docker assigned to the container, but that also resolved to localhost. I've looked into /etc/hosts, but it shows nothing related to this. Furthermore, I'm baffled because the mongo-express container is able to connect.
I've tried changing the name of the container, thinking it might be blocked for some reason due to previous runs or something like that, but this did not resolve the issue.
I've tried both explicit links and implicit resolution via Docker's internal DNS, but neither worked.
When binding port 27017 to localhost it is able to connect, but for security and easy configuration via environment variables, I'd rather not have the mongodb instance bound to localhost.
I've also tried to run this on my local machine, and there it works as expected: both mongoadmin and app are able to connect to the mongodb container. My local machine runs Docker version 1.12.6, build 78d1802, while the VPS runs Docker version 17.06.2-ce, build cec0b72, thus a newer version.
Could this be a newly introduced bug? Or am I missing something else? Any help would be appreciated.
Your docker-compose file seems not to have linked the app and mongodb containers.
You have this:
app:
  image: project/name:0.0.1
  volumes:
    - ./project:/usr/src/app
  working_dir: /usr/src/app
  environment:
    - NODE_ENV=production
  command: ["npm", "start"]
  ports:
    - "3000:3000"
While I think it should be this:
app:
  image: project/name:0.0.1
  volumes:
    - ./project:/usr/src/app
  working_dir: /usr/src/app
  links:
    - "mongodb:mongodb"
  environment:
    - NODE_ENV=production
  command: ["npm", "start"]
  ports:
    - "3000:3000"