Connect to PostgreSQL from a Flask app in another Docker container

On a virtual machine I have two Docker containers named <postgres> and <system> running on a network named <network>. I can't change the options of these containers. I have created a Flask application that connects to the database and outputs the required information. To connect from my local computer I use:
conn = psycopg2.connect(
    database="db", user='user1', password='user1_passwd',
    host='<VM_ip>', port='<db_port>',
    sslmode='require',
    sslcert='./user1.crt',
    sslkey='./user1.key')
and it worked great.
But when I run my application on the same VM and specify:
conn = psycopg2.connect(
    database="db", user='user1', password='user1_passwd',
    host='<postgres>.<network>', port='<db_port>',
    sslmode='require',
    sslcert='./user1.crt',
    sslkey='./user1.key')
I get an error:
psycopg2.OperationalError: could not parse network address "<postgres>.<network>": Name or service not known
Local connections are allowed in pg_hba.conf; the problem is connecting from the new container on the VM.
Here are the settings of my new container:
version: '3'
services:
  app:
    container_name: app
    restart: always
    build: ./app
    ports:
      - "5000:5000"
    command: gunicorn -w 1 -b 0.0.0.0:8000 wsgi:server
I tried to make the same connection as from the local computer, specifying the VM_ip, but that didn't help either.
I also tried to specify the <postgres> container's IP instead of its name in host=, but this also caused an error.
Do you know what could be the problem?

You first need to create a network that the containers will use to communicate. You can do that with:
docker network create <example>   # you can name it whatever you want
Then connect both containers to the network you made:
docker run -d --net <example> --name <postgres_container> <postgres_image>
docker run -d --net <example> --name <flask_container> <flask_image>
You can read more about Docker networking in the documentation:
https://docs.docker.com/network/
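Since the question says the existing containers cannot be recreated, it may also help that a running container can be attached to a network in place, without restarting it. A minimal sketch using the names from the question:
docker network connect <network> app   # attach the running app container to the existing network
After that, the app container should be able to reach the <postgres> container by name.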

From what I can see, you are using a docker-compose file to deploy the services. You can add one more top-level key, next to services, where you define the network the services are supposed to use. Each service definition must also reference that network; this lets the internal DNS engine that docker-compose creates in the background discover all the services on the network by their service names.
A bridge network is a good driver to use here.
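A minimal sketch of what that could look like for the app service from the question, assuming the existing <network> was created outside this compose file (the bracketed names are the question's placeholders):
version: '3'
services:
  app:
    container_name: app
    restart: always
    build: ./app
    ports:
      - "5000:5000"
    command: gunicorn -w 1 -b 0.0.0.0:8000 wsgi:server
    networks:
      - <network>
networks:
  <network>:
    external: true   # join the pre-existing network instead of creating a new one
With both containers on the same network, host='<postgres>' (the plain container name) should resolve.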
You can use the following links for a better understanding of networks in docker-compose.
https://docs.docker.com/compose/compose-file/compose-file-v3/#network
https://docs.docker.com/compose/compose-file/compose-file-v3/#networks

Related

Unable to run docker flask image - pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused

I have built a Docker image for a Flask app with some HTML templates. After running my image I go to localhost:5000, which takes me to the start page of my Flask app. I press a register button to register a user through a Flask endpoint, but I get:
pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused
Before going to localhost I start my MongoDB container with sudo docker start mongodb, and the connection hits this error whenever an endpoint has to look something up in my MongoDB database. Do I need a docker-compose.yml to connect, or can I connect without one?
This is how I connect to MongoDB using PyMongo:
client = MongoClient('mongodb://localhost:27017/')
db = client['MovieFlixDB']
users = db['Users']
movies = db['Movies']
This is how I run my Flask app:
if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=5000)
I would appreciate your help. Thank you in advance.
To connect containers to each other you should use networks.
First you create a network
docker network create my-network
Run MongoDB, specifying the network (note the official image is named mongo):
docker container run -d --name mongodb -p 27017:27017 --network my-network mongo:latest
Modify your app to connect to mongodb as the host instead of localhost. Containers connected to a common network can talk to each other using their names (DNS names), which are automatically resolved to container IPs.
client = MongoClient('mongodb://mongodb:27017/')
You could also think about providing such details (db host, user, password) through environment variables and reading them in your app.
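A minimal sketch of that idea in PyMongo, assuming hypothetical variable names DB_HOST and DB_PORT (set them with -e flags on docker run or in a compose file):
import os
from pymongo import MongoClient

# DB_HOST and DB_PORT are hypothetical names; default to the container name and standard port
db_host = os.environ.get('DB_HOST', 'mongodb')
db_port = int(os.environ.get('DB_PORT', '27017'))
client = MongoClient('mongodb://{}:{}/'.format(db_host, db_port))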
Rebuild the image with your app and run it:
docker container run --name flask-app -d --network my-network my-flaskapp-image
You can read more about container networking in the Docker docs.
Do I need a docker-compose.yml to connect, or can I connect without one?
If you use docker-compose it will be easier, and you won't have to run as many commands to deploy. Look at this example (it contains many services, but you can refer to the random service).
Steps:
Build your docker-compose file [I have modified the one from the random service example, removing the rest], e.g.
version: '3.3'
services:
  web-random:
    build:
      context: .
      args:
        requirements: ./flask-mongodb-example/requirements.txt
    image: web-random-image
    ports:
      - "800:5000"
    entrypoint: python ./flask-mongodb-example/random_demo.py
    depends_on:
      - mongo
  mongo:
    image: mongo:4.2-bionic
    ports:
      - "27017:27017"
Refer to this example to update the mongo URL in your Python code.
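For instance, with the compose file above the database service is named mongo, so the connection line from the question would become (a sketch):
client = MongoClient('mongodb://mongo:27017/')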
Now use the following commands to build and bring up the containers:
docker-compose build
docker-compose up
Now either browse your URL in a browser or use the curl command.
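For example, with the "800:5000" port mapping above (the exact path depends on the routes the example app defines):
curl http://localhost:800/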

Is a service running in a docker network secure from outside connection, even from localhost?

Question:
Can anybody with access to the host machine connect to a Docker network, or are services running within a Docker network visible only to other services on that network, assuming the ports are not exposed?
Background:
I currently have a web application with a postgresql database backend where both components are being run through docker on the same machine, and only the web app is exposing ports on the host machine. The web-app has no trouble connecting to the db as they are in the same docker network. I was considering removing the password from my database user so that I don't have to store the password on the host and pass it into the web-app container as a secret. Before I do that I want to ascertain how secure the docker network is.
Here is a sample of my docker-compose:
version: '3.3'
services:
  database:
    image: postgres:9.5
    restart: always
    volumes:
      # preserves the database between containers
      - /var/lib/my-web-app/database:/var/lib/postgresql/data
  web-app:
    image: my-web-app
    depends_on:
      - database
    ports:
      - "8080:8080"
      - "8443:8443"
    restart: always
    secrets:
      - source: DB_USER_PASSWORD
secrets:
  DB_USER_PASSWORD:
    file: /secrets/DB_USER_PASSWORD
Any help is appreciated.
On a native Linux host, anyone who has or can find the container-private IP address can directly contact the container. (Unprivileged prodding around with ifconfig can give you some hints that it's there.) On non-Linux there's typically a hidden Linux VM, and if you can get a shell in that, the same trick works. And of course if you can run any docker command then you can docker exec a shell in the container.
Docker's network-level protection isn't strong enough to be the only thing securing your database. Using standard username-and-password credentials is still required.
(Note that the docker exec path is especially powerful: since the unencrypted secrets are ultimately written into a path in the container, being able to run docker exec means you can easily extract them. Restricting docker access to root only is also good security practice.)
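As an illustration of that path (the container name is whatever compose assigned; /run/secrets is Docker's standard mount point for secrets):
docker exec <web-app-container> cat /run/secrets/DB_USER_PASSWORD
Anyone who can run docker exec can read the secret this way, which is why restricting access to the docker command matters.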

Docker: java.net.ConnectException: Connection refused - Application running on port 8083 is not able to access another application on port 3000

I have to consume an external REST API (using restTemplate.exchange) with Spring Boot. My REST API is running on port 8083 with the URL http://localhost:8083/myrest (Docker command: docker run -p 8083:8083 myrest-app).
The external API is available as a public Docker image, and after running the commands below I am able to pull and run it locally:
docker pull dockerExternalId/external-rest-api
docker run -d -p 3000:3000 dockerExternalId/external-rest-api
a) If I enter the external REST API URL, for example http://localhost:3000/externalrestapi/testresource, directly in Chrome, I get valid JSON data.
b) If I invoke it from the myrest application run from Eclipse (as a Spring Boot application), I still get a valid JSON response. (I am using Windows to test this.)
c) But if I run myrest on Docker and execute the service (say http://localhost:8083/myrest), then I am facing java.net.ConnectException: Connection refused
More details:
org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://localhost:3000/externalrestapi/testresource": Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused (Connection refused)
P.S - I am using Docker on Windows.
The Problem
You run with:
docker run -p 8083:8083 myrest-app
But you need to run like:
docker run --network "host" --name "app" myrest-app
So passing the flag --network with the value host will allow your container to access your computer's network.
Please ignore my first approach and instead use the better alternative below, which does not expose the container to the entire host network. The first approach works, but it is not a best practice.
A Better Alternative
Create a network to be used by both containers:
docker network create external-api
Then run both containers with the flag --network external-api.
docker run --network "external-api" --name "app" -p 8083:8083 myrest-app
and
docker run -d --network "external-api" --name "api" -p 3000:3000 dockerExternalId/external-rest-api
The -p flag to publish ports for the api container is only necessary if you want to access it from your computer's browser; otherwise leave it out, because it isn't needed for the two containers to communicate on the external-api network.
TIP: docker pull is not necessary, since docker run will pull the image if it is not found on your computer.
Let me know how it went...
Call the External API
So in both solutions I have added the --name flag so that we can reach the other container in the network.
So to reach the external API from your REST app you need to use the URL http://api:3000/externalrestapi/testresource.
Notice how I have replaced localhost with api, which matches the value of the --name flag in the docker run command for your external API.
From your myrest-app container, if you try to access http://localhost:3000/externalrestapi/testresource, it will try to access port 3000 of the myrest-app container itself, because each container has its own isolated network stack (its own network interfaces, file system, etc.).
Docker is all about isolation.
There are three ways to access an API in another container:
Instead of localhost, provide the IP address of the external host machine (i.e. the IP address of the machine on which Docker is running).
Create a docker network and attach these two containers. Then you can provide the container_name instead of localhost.
Use --link while starting the container (deprecated)

How to connect local Mongo database to docker

I am working on a golang project. Recently I read about Docker and tried to use it with my app. I am using MongoDB for the database.
The problem is that I created a Dockerfile to install all packages and to compile and run the Go project.
I am running mongo locally. If I run the Go program without Docker it gives me output, but if I use Docker for the same project (just installing dependencies and running the project with it), it compiles successfully but gives no output, with the error:
CreateSession: no reachable servers
My Dockerfile:
# Start from a Debian image with the latest version of Go installed
# and a workspace (GOPATH) configured at /go.
FROM golang
WORKDIR $GOPATH/src/myapp
# Copy the local package files to the container's workspace.
ADD . /go/src/myapp
#Install dependencies
RUN go get ./...
# Build the installation command inside the container.
RUN go install myapp
# Run myapp by default when the container starts.
ENTRYPOINT /go/bin/myapp
# Document that the service listens on port 8080.
EXPOSE 8080
EXPOSE 27017
When you run your application inside Docker, it runs in a virtualized environment; it's like another computer where everything is virtual, including the network.
To connect your container to the host, Docker provides a special DNS name, host.docker.internal, that resolves to the host (this is built into Docker Desktop on Windows and Mac).
So, assuming that mongo is bound to every interface on the host machine, from the container it can be reached with the connection string:
mongodb://host.docker.internal:27017/database
Simplifying: just use host.docker.internal as your MongoDB hostname.
In your golang project, how do you specify the connection to MongoDB? localhost:27017?
If you are using localhost in your code, your Docker container will be the localhost, and since you don't have MongoDB in the same container, you'll get the error.
If you are starting your container with docker run ..., add --network="host". If you are using docker-compose, add network_mode: "host".
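A minimal compose sketch of that suggestion (the service and image names are hypothetical):
version: '3'
services:
  myapp:
    image: myapp-image   # hypothetical image name
    network_mode: "host"
With host networking the container shares the host's network stack, so localhost:27017 then refers to the host's MongoDB.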
Ideally you would set up MongoDB in its own container and connect the two from your docker-compose.yml, but that's not what you are asking for, so I won't go into that.
In future questions, please include the relevant Dockerfile and docker-compose.yml to the extent possible. It will help us give a more specific answer.

REST request from one docker container to another fails

I have two applications, one of which has a RESTful interface that is used by the other. Both are running on the same machine.
Application A runs in a docker container. I am running it using the command line:
docker run -p 40000:8080 --name AppA image1
When I test Application B outside a docker container (in other words, before it is dockerized) Application B successfully executes all RESTful requests and receives responses without problems.
Unfortunately, when I dockerize and run Application B within a container:
docker run -p 8081:8081 --name AppB image2
whenever I attempt to send a RESTful request to Application A, I get the following:
Connect to localhost:40000 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused
Of course, I also tried making Application B connect using my machine's IP address. When I do that, I get the following failure:
Connect to 192.168.1.101:40000 failed: No route to Host
Has anyone seen this kind of behavior before? What causes an application that communicates perfectly well with another dockerized application while outside a container to fail to communicate with that same application once it is itself dockerized?
Someone please advise...
Simply link B to A: docker run -p 8081:8081 --link AppA --name AppB image2. Then you can access the REST service using AppA:8080.
The reason is that Docker containers run on their own subnet (normally 172.17.0.0/16) and cannot reach the network your host is on. Also, localhost would be the container itself, not the host.
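Note that --link is a legacy feature; a user-defined network gives you the same name-based resolution. A minimal sketch with the names from the question (the network name is hypothetical):
docker network create rest-net
docker run -d --network rest-net --name AppA -p 40000:8080 image1
docker run -d --network rest-net --name AppB -p 8081:8081 image2
AppB can then reach Application A at http://AppA:8080 (the container's internal port, not the published 40000).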