I'm new to using Go to build microservices. I had a whole project up and running locally, but when I tried deploying it I ran into a problem. The session I was working with (mgo.Dial("localhost")) no longer worked. When I put the app into a Docker image, it failed to connect to localhost, which makes sense, since the image runs on its own OS (Alpine in my case) and localhost now refers to the container rather than my machine. I was wondering what I should do to get it to connect.
To be clear, when I was researching this, most people wanted to connect to a MongoDB instance that is itself a Docker container; I want to connect to a MongoDB instance from within a Docker container. Also, once I'm ready for deployment I'll be using a StatefulSet with Kubernetes, if that changes anything.
For example, this is what I want my program to be like:
sess, err := mgo.Dial("localhost") // or whatever
if err != nil {
    fmt.Println("failed to connect")
} else {
    fmt.Println("connected")
}
What I tried doing:
Dockerfile:
FROM alpine:3.6
COPY /build/app /bin/
EXPOSE 8080
ENTRYPOINT ["/bin/app"]
In terminal:
docker build -t hell:4 .
docker run -d -p 8080:8080 hell:4
And as you'd expect, it says it's not connected. Also, the port mapping is for the rest of the project, not this part.
Thanks for your help!
I think you should not try to connect to the MongoDB server running on your machine. Think about deploying the whole application later on: you'll want a MongoDB server running together with your service on some cloud or server.
That problem can be solved by setting up an additional container and linking it to your Go web app. Docker Compose can handle this. Just place a docker-compose.yml file in the directory you are executing your docker build in.
version: '3'
services:
  myapp:
    build: .
    image: hell:4
    ports:
      - 8080:8080
    links:
      - mongodb
    depends_on:
      - mongodb
  mongodb:
    image: mongo:latest
    ports:
      - "27017:27017"
    environment:
      - MONGODB_USER="user"
      - MONGODB_PASS="pass"
Something like this should do it (not tested). You have two services: one for your app, which gets built according to the Dockerfile in the directory you are currently in, and which links to and depends on a service called mongodb defined below. The mongodb service is accessible from your app via the service name mongodb, so dial that instead of localhost.
If your MongoDB server is running on your host machine instead, replace localhost with your host's IP.
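For reference, a minimal sketch of the Go side under the compose setup above (assuming the mgo driver from the question; the app dials the mongodb service name instead of localhost):

package main

import (
    "log"

    mgo "gopkg.in/mgo.v2"
)

func main() {
    // "mongodb" is the compose service name from the docker-compose.yml above;
    // inside the myapp container it resolves to the MongoDB container's IP.
    sess, err := mgo.Dial("mongodb")
    if err != nil {
        log.Fatal("failed to connect: ", err)
    }
    defer sess.Close()
    log.Println("connected")
}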
I've followed the instructions in https://tsmx.net/docker-local-mongodb/ but I still get the following error:
panic: unable to connect to MongoDB (local): no reachable servers
I even tried the following but still get the same error:
_ = pflag.String("mongodb-addr", "127.0.0.1:27017", "MongoDB connection address")
My connection code is as follows:
dbAddr := d.cfg.GetString("mongodb-addr")
session, err := mgo.Dial(dbAddr)
And my docker run command is as follows:
docker run image_name
I'm using macOS Monterey. Any help would be greatly appreciated. Thanks.
If the application and MongoDB are on the same Docker network, use the container name to connect to the MongoDB container.
If MongoDB is running on the server where the application's container runs, use the IP of that server to reach MongoDB. 127.0.0.1 from within the container will look for MongoDB inside the container itself, not on the host.
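Since you're on macOS with Docker Desktop, another option is to point your --mongodb-addr flag at host.docker.internal, which resolves to the host machine from inside a container. A minimal sketch along those lines (the flag name comes from your snippet; the rest is assumed):

package main

import (
    "log"

    "github.com/spf13/pflag"
    mgo "gopkg.in/mgo.v2"
)

func main() {
    // On Docker Desktop for Mac, host.docker.internal resolves to the host
    // machine, so a MongoDB listening on the Mac itself is reachable from
    // inside the container.
    addr := pflag.String("mongodb-addr", "host.docker.internal:27017", "MongoDB connection address")
    pflag.Parse()

    session, err := mgo.Dial(*addr)
    if err != nil {
        log.Fatalf("unable to connect to MongoDB: %v", err)
    }
    defer session.Close()
    log.Println("connected")
}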
If you run mongo like this:
mongo:
  image: mongo
  restart: always
  volumes:
    - ./mongo-data:/data/db
  env_file: .env
  ports:
    - 27017:27017
  environment:
    MONGO_INITDB_ROOT_USERNAME: ${MONGO_USERNAME}
    MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
then you can connect from Go like this:
var cred options.Credential
cred.Username = MongoUsername
cred.Password = MongoPassword
// "mongo" is the service name defined in the compose file above
clientOption := options.Client().ApplyURI("mongodb://mongo:27017").SetAuth(cred)
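For completeness, a self-contained sketch of the same connection with the official go.mongodb.org/mongo-driver package; it assumes the MONGO_USERNAME and MONGO_PASSWORD values from the .env file are also passed to the app container:

package main

import (
    "context"
    "log"
    "os"
    "time"

    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
    // Credentials are assumed to come from the same MONGO_USERNAME and
    // MONGO_PASSWORD environment variables used in the compose file above.
    cred := options.Credential{
        Username: os.Getenv("MONGO_USERNAME"),
        Password: os.Getenv("MONGO_PASSWORD"),
    }

    // "mongo" is the compose service name, resolvable from the app container.
    clientOption := options.Client().ApplyURI("mongodb://mongo:27017").SetAuth(cred)

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    client, err := mongo.Connect(ctx, clientOption)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Disconnect(ctx)

    if err := client.Ping(ctx, nil); err != nil {
        log.Fatal(err)
    }
    log.Println("connected")
}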
I was facing the same issue and this command did the trick for me; it was mentioned here.
Docker provides a host network which lets containers share your host’s networking stack. This approach means localhost inside a container resolves to the physical host, instead of the container itself.
docker run -d --network=host my-container:latest
Hope it helps someone.
I have built a Docker image for a Flask app I have, with some HTML templates, and after running my image I go to localhost:5000, which takes me to the start page of my Flask app. I press a register button to register a user using a Flask endpoint, but I get
pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused
Before going to localhost I start my MongoDB container with sudo docker start mongodb, and the connection hits this error whenever the endpoint has to look something up in my MongoDB database. Do I need a docker-compose.yml to connect, or can I connect without one?
This is how I connect to mongodb using pymongo
client = MongoClient('mongodb://localhost:27017/')
db = client['MovieFlixDB']
users = db['Users']
movies = db['Movies']
How I run my Flask app:
if __name__ == '__main__':
app.run(debug=True, host='0.0.0.0', port=5000)
I would appreciate your help. Thank you in advance.
To connect containers to each other you should use networks.
First, create a network:
docker network create my-network
Run MongoDB, specifying the network:
docker container run -d --name mongodb -p 27017:27017 --network my-network mongo:latest
Modify your app to connect to the host mongodb instead of localhost. Containers connected to a common network can talk to each other by their names (DNS names), which are automatically resolved to container IPs.
client = MongoClient('mongodb://mongodb:27017/')
You could also think about providing such details (DB host, user, password) through environment variables and reading them in your app.
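A rough sketch of that idea (shown in Go, like the first question in this thread, since the principle is the same in any language; the DB_HOST and DB_PORT variable names are just examples):

package main

import (
    "fmt"
    "os"
)

// mongoURL builds the connection string from environment variables,
// falling back to local defaults. In a compose or docker network setup,
// DB_HOST would be set to the service/container name, e.g. "mongodb".
func mongoURL() string {
    host := os.Getenv("DB_HOST")
    if host == "" {
        host = "localhost"
    }
    port := os.Getenv("DB_PORT")
    if port == "" {
        port = "27017"
    }
    return fmt.Sprintf("mongodb://%s:%s/", host, port)
}

func main() {
    fmt.Println(mongoURL())
}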
Rebuild the image with your app and run it:
docker container run --name flask-app -d --network my-network my-flaskapp-image
You can read more about container networking in docker docs.
Do I need a docker-compose.yml to connect, or can I connect without one?
If you use docker-compose it will be easier, and you won't have to run as many commands to deploy. Look at this example (it contains many services, but you can refer to the random service).
Steps:
Build your docker-compose file (I have modified the one from the random service example, removing the rest), e.g.:
version: '3.3'
services:
  web-random:
    build:
      context: .
      args:
        requirements: ./flask-mongodb-example/requirements.txt
    image: web-random-image
    ports:
      - "800:5000"
    entrypoint: python ./flask-mongodb-example/random_demo.py
    depends_on:
      - mongo
  mongo:
    image: mongo:4.2-bionic
    ports:
      - "27017:27017"
Refer to this example to update your Mongo URL in your Python code.
Now, use the following commands to build and bring up the containers:
docker-compose build
docker-compose up
Now, either open your URL in a browser or use the curl command.
Question:
Can anybody with access to the host machine connect to a Docker network, or are services running within a Docker network only visible to other services on that network, assuming the ports are not exposed?
Background:
I currently have a web application with a postgresql database backend where both components are being run through docker on the same machine, and only the web app is exposing ports on the host machine. The web-app has no trouble connecting to the db as they are in the same docker network. I was considering removing the password from my database user so that I don't have to store the password on the host and pass it into the web-app container as a secret. Before I do that I want to ascertain how secure the docker network is.
Here is a sample of my docker-compose:
version: '3.3'
services:
  database:
    image: postgres:9.5
    restart: always
    volumes:
      # preserves the database between containers
      - /var/lib/my-web-app/database:/var/lib/postgresql/data
  web-app:
    image: my-web-app
    depends_on:
      - database
    ports:
      - "8080:8080"
      - "8443:8443"
    restart: always
    secrets:
      - source: DB_USER_PASSWORD
secrets:
  DB_USER_PASSWORD:
    file: /secrets/DB_USER_PASSWORD
Any help is appreciated.
On a native Linux host, anyone who has or can find the container-private IP address can directly contact the container. (Unprivileged prodding around with ifconfig can give you some hints that it's there.) On non-Linux there's typically a hidden Linux VM, and if you can get a shell in that, the same trick works. And of course if you can run any docker command then you can docker exec a shell in the container.
Docker's network-level protection isn't strong enough to be the only thing securing your database. Using standard username-and-password credentials is still required.
(Note that the docker exec path is especially powerful: since the unencrypted secrets are ultimately written into a path in the container, being able to run docker exec means you can easily extract them. Restricting docker access to root only is also good security practice.)
I have been writing this awesome Express & MongoDB app, just for fun: https://github.com/mwaz/oober-bck. Everything is working perfectly offline. I have different DB configurations for different application environments, e.g. development, staging, testing, and production; in every environment the DB is different, and since MongoDB is flexible, that is not a problem.
The application works normally on my local machine by setting the $NODE_ENV variable to the required environment, so everything should work fine when the application is dockerized. However, this is not the case: MongoDB crashes at some point and does not connect to the application. Here is the sample log:
The Dockerfile is as follows:
FROM node:7
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD node app.js
EXPOSE 3003
The docker-compose.yml file
version: "2"
services:
app:
container_name: oober
restart: always
build: .
ports:
- "3003:3003"
environment:
- NODE_ENV=STAGING
links:
- mongo
mongo:
container_name: mongo
image: mongo
ports:
- "27017:27017"
The Docker image can be found on Docker Hub:
docker pull sparatan/oober_app
Your default configuration of the staging database is DATABASE: "mongodb://localhost:27017/staging_ooberdb" as shown in your config.js file.
In a Docker environment like the one you're using, "localhost" refers to the container itself (in this case your "oober" container).
You need to use the MongoDB container name instead, i.e. DATABASE: "mongodb://mongo:27017/staging_ooberdb" in the STAGING part of your config.js file.
As a side note, you probably don't want to expose the mongodb port in a production environment.
Hi, I am getting this error when I try to run docker-compose up on my yml file.
This is my docker-compose.yml file
version: '3.6'
services:
  phoenix:
    # tell docker-compose which Dockerfile it needs to build
    build:
      context: .
      dockerfile: Dockerfile.development
    # map the port of phoenix to the local dev port
    ports:
      - 4000:4000
    # mount the code folder inside the running container for easy development
    volumes:
      - . .
    # make sure we start mongodb when we start this service
    depends_on:
      - db
  db:
    image: mongo:latest
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
This is my Dockerfile:
# base image elixir to start with
FROM elixir:1.6
# install hex package manager
RUN mix local.hex --force
RUN mix local.rebar --force
# install the latest phoenix
RUN mix archive.install https://github.com/phoenixframework/archives/raw/master/phx_new.ez --force
# create app folder
COPY . .
WORKDIR ./
# install dependencies
RUN mix deps.get
# run phoenix in *dev* mode on port 4000
CMD mix phx.server
Is this a problem with my dev.exs setup or something to do with the compatibility of docker and phoenix / docker and mongodb?
https://docs.docker.com/compose/compose-file/#depends_on explicitly says:
There are several things to be aware of when using depends_on:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started. If you need to wait for a service to be ready,
and advises you to implement the logic to wait for mongodb to spinup and be ready to accept connections by yourself: https://docs.docker.com/compose/startup-order/
In your case it could be something like:
CMD wait-for-db.sh && mix phx.server
where wait-for-db.sh can be as simple as
#!/bin/bash
# "db" is the service name from docker-compose.yml, resolvable from the phoenix container
until nc -z db 27017; do echo "waiting for db"; sleep 1; done
for which you need nc and wait-for-db.sh installed in the container.
There are plenty of other alternative tools to test if db container is listening on the target port.
UPDATE:
The network connection between containers is described at https://docs.docker.com/compose/networking/:
When you run docker-compose up, the following happens:
A network called myapp_default is created, where myapp is name of the directory where docker-compose.yml is stored.
A container is created using phoenix’s configuration. It joins the network myapp_default under the name phoenix.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
Each container can now look up the hostname phoenix or db and get back the appropriate container’s IP address. For example, phoenix’s application code could connect to the URL mongodb://db:27017 and start using the Mongodb database.
It was an issue with my dev environment not connecting to the MongoDB URL specified in docker-compose. Instead of localhost, it should be db, as the service is named in my docker-compose.yml file.
For clarity on the dev env:
Modify config/dev.exs to the following (replace with the correct vars):
username: System.get_env("PGUSER"),
password: System.get_env("PGPASSWORD"),
database: System.get_env("PGDATABASE"),
hostname: System.get_env("PGHOST"),
port: System.get_env("PGPORT"),
Create a .env file in the root folder of your project (replace with the vars relevant to the DB service used):
PGUSER=some_user
PGPASSWORD=some_password
PGDATABASE=some_database
PGPORT=5432
PGHOST=db
Note that we have added the port.
The host can be localhost locally, but it should be the service name (mongodb, db, or even a URL) when working with docker-compose, a server, or k8s.
Will update the answer for the prod config...