Is a service running in a docker network secure from outside connection, even from localhost? - postgresql

Question:
Can anybody with access to the host machine connect to a Docker network, or are services running within a docker network only visible to other services running in a Docker network assuming the ports are not exposed?
Background:
I currently have a web application with a postgresql database backend where both components are being run through docker on the same machine, and only the web app is exposing ports on the host machine. The web-app has no trouble connecting to the db as they are in the same docker network. I was considering removing the password from my database user so that I don't have to store the password on the host and pass it into the web-app container as a secret. Before I do that I want to ascertain how secure the docker network is.
Here is a sample of my docker-compose:
version: '3.3'
services:
  database:
    image: postgres:9.5
    restart: always
    volumes:
      # preserves the database between containers
      - /var/lib/my-web-app/database:/var/lib/postgresql/data
  web-app:
    image: my-web-app
    depends_on:
      - database
    ports:
      - "8080:8080"
      - "8443:8443"
    restart: always
    secrets:
      - source: DB_USER_PASSWORD
secrets:
  DB_USER_PASSWORD:
    file: /secrets/DB_USER_PASSWORD
Any help is appreciated.

On a native Linux host, anyone who has or can find the container-private IP address can directly contact the container. (Unprivileged prodding around with ifconfig can give you some hints that it's there.) On non-Linux there's typically a hidden Linux VM, and if you can get a shell in that, the same trick works. And of course if you can run any docker command then you can docker exec a shell in the container.
Docker's network-level protection isn't strong enough to be the only thing securing your database. Using standard username-and-password credentials is still required.
(Note that the docker exec path is especially powerful: since the unencrypted secrets are ultimately written into a path in the container, being able to run docker exec means you can easily extract them. Restricting docker access to root only is also good security practice.)
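For example, a rough sketch of what that looks like in practice (the compose project and container names here are hypothetical, and the IP is only illustrative; the secret path assumes the file-based secret above ends up under /run/secrets as usual):
# find the database container's private IP on the compose network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' myproject_database_1
# connect to it straight from the host - no published port required
psql -h 172.18.0.2 -U postgres
# or, with access to the docker CLI, read the mounted secret out of the web-app container
docker exec myproject_web-app_1 cat /run/secrets/DB_USER_PASSWORD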

Related

Connect to PostgreSQL from Flask app in another docker container

On a virtual machine I have 2 docker containers running with the names <postgres> and <system> on a network named <network>. I can't change options in these containers. I have created a flask application that connects to a database and outputs the required information. To connect from my local computer I use
conn = psycopg2.connect(
    database="db", user='user1', password='user1_passwd',
    host='<VM_ip>', port='<db_port>',
    sslmode='require',
    sslcert='./user1.crt',
    sslkey='./user1.key')
and it worked great.
But, when I run my application on the same VM and specify
conn = psycopg2.connect(
    database="db", user='user1', password='user1_passwd',
    host='<postgres>.<network>', port='<db_port>',
    sslmode='require',
    sslcert='./user1.crt',
    sslkey='./user1.key')
I get an error:
psycopg2.OperationalError: could not parse network address "<postgres>.<network>": Name or service not known.
Local connections are allowed in pg_hba.conf; the problem is connecting from the new container on the VM.
Here are the settings of my new container:
version: '3'
services:
  app:
    container_name: app
    restart: always
    build: ./app
    ports:
      - "5000:5000"
    command: gunicorn -w 1 -b 0.0.0.0:8000 wsgi:server
I tried to make the same connection as from the local computer, specifying the VM_ip, but that didn't help either.
I also tried to specify the <postgres> container ip instead of its name in the host=, but this also caused an error.
Do you know what could be the problem?
You first need to create a network that the containers will use to communicate. You can do that with:
docker network create <example>   # you can name it whatever you want
Then you need to connect both containers to the network that you made.
docker run -d --net <example> --name <postgres_container> <postgres_image>
docker run -d --net <example> --name <flask_container> <flask_image>
You can read more about the docker network in its documentation here:
https://docs.docker.com/network/
From what I can see, you might be using a docker-compose file to deploy the services. You can add one more top-level layer, above the services layer, where you define the network that the deployed services should use. That network also needs to be referenced in each service definition; this lets the internal DNS that docker-compose creates in the background resolve every service on the network by its service name (a sketch of attaching to an existing network follows the links below).
A bridge network may be a good driver to use here.
You can use the following links for a better understanding of networks in docker-compose.
https://docs.docker.com/compose/compose-file/compose-file-v3/#network
https://docs.docker.com/compose/compose-file/compose-file-v3/#networks
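If the existing containers can't be modified, a quicker route (just a sketch: <network> and <postgres> are the placeholders from the question, app is the new container's name, and it assumes getent is available in that image) is to attach the new container to the already existing network and check that the name resolves:
# attach the running flask container to the existing user-defined network
docker network connect <network> app
# verify that the postgres container's name now resolves from inside app
docker exec app getent hosts <postgres>
Once both containers share that network, host='<postgres>' (or '<postgres>.<network>') should resolve from inside the app container.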

Setup a PostgreSQL connection to an already existing project in Docker

I had never used PostgreSQL nor Docker before. I set up an already developed project that uses these two technologies in order to modify it.
To get the project running on my Linux (Pop!_OS 20.04) machine I was given these instructions (sorry if this is irrelevant but I don't know what is important and what is not to state my problem):
Installed Docker CE and Docker Compose.
Cloned the project with git and ran the commands git submodule init and git submodule update.
Initialized the container with: docker-compose up -d
Generated the application configuration file: ./init.sh
After all of that the app was available at http://localhost:8080/app/, and inside the project's directory I got several subdirectories, among them dbdata (directory listings shown in screenshots).
Now I need to modify the DB and there's where the difficulty arose since I don't know how to set up the connection with PostgreSQL inside Docker.
In a project without Docker which uses MySQL I would
Create the local project's database "dbname".
Import the project's DB: mysql -u username -ppassword dbname < /path/to/dbdata.sql
Connect a DB client (DBeaver in my case) to the local DB and perform the necessary modifications.
In an endeavour to do something like that with PostgreSQL, I have read that I need to
Install and configure an Ubuntu 20.04 server.
Install PostgreSQL.
Configure Postgres “roles” to handle authentication and authorization.
Create a new Database.
And then what?
How can I set up the connection in order to be able to modify the DB from DBeaver and see the changes reflected on http://localhost:8080/app/ when Docker is involved?
Do I really need an Ubuntu server?
Do I need other program than psql to connect to Postgres from the command line?
I have found many articles related to the local setup of PostgreSQL with Docker but all of them address the topic from scratch, none of them talk about how to connect to the DB of an "old" project inside Docker. I hope someone here can give directions for a newbie on what to do or recommend an article explaining from scratch how to configure PostgreSQL and then connecting to a DB in Docker. Thanks in advance.
Edit:
Here's the output of docker ps
You have 2 options to get into known waters pretty fast:
Publish the postgres port on the docker host machine, install any postgres client you like on the host and connect to the database hosted in the container as you would traditionally. You will use localhost:5433 to reach the DB. (Update: 5433 is the port where the postgres container is published on your host, according to the screenshot.)
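For this first option, once the port is published any client on the host can reach the database directly; a minimal sketch (5433 being the published port from the screenshot, while the user name is an assumption):
psql -h localhost -p 5433 -U postgres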
Another option is to add another service in your docker-compose file to host the client itself in a container.
Here's a minimal example in which I am launching two containers: a postgres database and an adminer client that is exposed on the host machine on port 9999.
version: '3'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
  adminer:
    image: adminer
    restart: always
    ports:
      - 9999:8080
then I can access the adminer at localhost:9999 (password is example):
Once I'm connected to my postgres through adminer, I can import and execute any SQL query I need:
A kind piece of advice is to read up thoroughly on how data is persisted in a Docker context. Performance and security are also topics that, as a novice in the field, you will want to get under your belt sooner rather than later.
If you're running your PostgreSQL container on your own machine you don't need anything else to connect using a database client. That's because, to the host machine, all the containers are accessible on their own subnet.
That means that if you do this:
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 341164c5050f
it will output a list of IPs that you can configure in your DBeaver to access the container instance directly.
If you're not fond of doing that (or you prefer to use the CLI) you can always use the psql that ships inside the PostgreSQL container to achieve something like point nº2 of your MySQL workflow:
docker exec -i 341164c5050f bash -c 'psql -U $POSTGRES_USER' < /path/to/your/schema.sql
It's important to pass -i, otherwise it will not read the schema from stdin. If you want psql in interactive mode, use -it instead.
Last but not least, you can always edit the docker-compose.yml file to publish the port and connect to the instance using the public IP/loopback device.

Failing to connect to a Postgres Container with psycopg2?

I have a docker-compose file that looks like this
version: "3.7"
services:
app:
stdin_open: true
tty: true
build:
context: .
dockerfile: app.Dockerfile
volumes:
- ${HOST_SAVE_DIRC}:${CONTAINER_SAVE_DIRC}
depends_on:
- postgres
postgres:
image: 'postgres'
environment:
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_HOST_AUTH_METHOD=trust
restart: always
expose:
- "5432"
where variables like POSTGRES_USER are entries from an env file. app.Dockerfile looks like
FROM python:3.8.3-slim-buster
COPY src /src/
COPY init.sql .
COPY .env .
COPY run.sh run.sh
COPY requirements.txt .
RUN ls -a
RUN pip install --no-cache-dir -r requirements.txt
The containers are created, then the user is logged into the app container with the main function of the program being called - this is when the database calls are made.
From the app container I am attempting to connect to the postgres container via psycopg2. However when I attempt to do so, I receive the following error:
psycopg2.OperationalError: could not connect to server: No route to host
Is the server running on host "postgres" (172.22.0.2) and accepting
TCP/IP connections on port 5432?
using a psycopg2 call that looks like
with psy.connect(host='postgres', port=5432, user='postgres', password='postgres') as conn:
    ...
The entries of this psycopg2 call match the env file given to the docker-compose file.
My understanding is that Postgres uses port 5432 by default, and that when docker-compose creates the two containers it creates a docker network for them named DIR_default, where DIR is the name of the directory the docker-compose file lives in. Each container can then be reached using the service name listed in the docker-compose file ('postgres' and 'app' in this case).
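That understanding can be double-checked from the host; a sketch (the network name arxivist_default is an assumption based on the container names shown further down, and getent must exist in the app image):
# list the networks created by docker-compose
docker network ls
# confirm both containers are attached to the project network and note their IPs
docker network inspect arxivist_default
# confirm the service name resolves from inside the app container
docker-compose exec app getent hosts postgres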
Among various tries:
I've checked and the database isn't going down between the container being created and the user being exec'd in.
I've tried various little changes like changing the container names, postgres login info, etc.
I've tried linking the postgres container name explicitly with link: "postgres:postgres".
Other solutions suggested here
Any help would be greatly appreciated! I see no reason why something as simple as this should be occurring, but here I am.
Edit:
Pinging the Postgres container from the app container appears to be working when running docker exec app ping postgres_container_name. Is this a sign that the Docker network is set up correctly and the issue is something on my end?
Edit 2:
Tried clearing all images and containers, then restarting the Docker daemon and afterwards my PC. No change in either case.
For reference, the ping command looked like
docker exec python-app ping name_given_to_postgres_container
returning various statements which looked like
64 bytes from name_given_to_postgres_container.project_name_default (172.18.0.3): icmp_seq=1 ttl=64 time=0.090 ms
which, unless I am mistaken, signals a successful ping.
The top level .env file provided to docker-compose
HOST_SAVE_DIRC=~/python_projects/project_directory/directory_in_project
CONTAINER_SAVE_DIRC=/pdfs
POSTGRES_DB=project_name # same as project_directory
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_PORT=5432
Here is the requirements.txt file for the Python app as well
certifi==2020.4.5.1
chardet==3.0.4
idna==2.9
psycopg2-binary==2.8.5
read-env==1.1.0
requests==2.23.0
urllib3==1.25.9
Exec-ing into the Postgres container with docker exec -it container_id bash and running psql -U postgres appears to be successful - even with restart: always removed. I can also see the database named in the docker-compose file is also created. I feel confident in saying this container isn't dying spontaneously.
However, hitting the 5432 port on the Postgres container with netcat via nc name_given_to_postgres_container 5432-5433 returns an error similar to the one returned by psycopg2
arxivist_postgres_1 [172.22.0.3] 5433 (?) : No route to host
arxivist_postgres_1 [172.22.0.3] 5432 (postgresql) : No route to host
The same error is also returned with curl. So my guess is that the issue isn't with the Postgres container directly, psycopg2, or the hostname - but something with the port?
Edit 3:
As a last attempt to fix this project, the full project this post is referring to is posted at this link. If anyone would like to download the repo and try building the docker containers themselves via ./start.sh - that might be just what is needed to find a solution!
I thought I had Docker set up on my machine, which runs Fedora 32. However, as I came to realize from this article, setting up Docker on Fedora 32 requires some extra steps I was not previously aware of.
Specifically for this issue, the article lists a command to whitelist Docker on the local network's firewall:
sudo firewall-cmd --permanent --zone=FedoraWorkstation --add-masquerade
So I believe the root cause of my issue was simply my app container being blocked from accessing the postgres container by the firewall. Making the above change finally made the program work!
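For anyone hitting the same thing, a sketch of applying and re-testing the change (the --reload is needed for a --permanent rule to take effect in the running firewall; the nc re-test assumes netcat is available in the app container, as in the tests above):
sudo firewall-cmd --permanent --zone=FedoraWorkstation --add-masquerade
sudo firewall-cmd --reload
# re-test the port from inside the app container
docker-compose exec app nc -vz postgres 5432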

Connect to a mongoDB session from within container

I'm new to learning how to use goLang to build microservices. I had a whole project up and running locally, but when I tried deploying it I ran into a problem. The session I was working with (mgo.Dial("localhost")) was no longer working. When I put this into a docker image, it failed to connect to the local host, which makes sense, since the docker image builds it over a new OS (alpine in my case). I was wondering what I should do to get it to connect.
To be clear, when I was researching this, most people wanted to connect to a mongoDB session that is a docker container, I want to connect to a mongoDB session from within a docker container. Also once I'm ready for deployment I'll be using StatefulSet with kubernetes if that changes anything.
For example, this is what I want my program to be like:
sess, err := mgo.Dial("localhost") // or whatever
if err != nil {
    fmt.Println("failed to connect")
} else {
    fmt.Println("connected")
}
What I tried doing:
Dockerfile:
FROM alpine:3.6
COPY /build/app /bin/
EXPOSE 8080
ENTRYPOINT ["/bin/app"]
In terminal:
docker build -t hell:4 .
docker run -d -p 8080:8080 hell:4
And as you can expect, it says not connected. Also the port mapping is for the rest of the project, not this part.
Thanks for your help!
I think you should not try to connect to the MongoDB server running on your machine. Think about deploying the whole application later on: you want a MongoDB server running together with your service on some cloud or server.
That problem could be solved by setting up an additional container and link it to your Go Web App. Docker compose can handle this. Just place a docker-compose.yml file in the directory you are executing your docker build in.
version: '3'
services:
  myapp:
    build: .
    image: hell:4
    ports:
      - 8080:8080
    links:
      - mongodb
    depends_on:
      - mongodb
  mongodb:
    image: mongo:latest
    ports:
      - "27017:27017"
    environment:
      - MONGODB_USER="user"
      - MONGODB_PASS="pass"
Something like this should do it (not tested). You have two services: one for your app, which gets built according to your Dockerfile in the directory in which you currently are, and which additionally links to a service called mongodb defined below. The mongodb service is accessible via the service name mongodb.
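As a usage sketch (assuming the service names above), the Go code would dial the service name, e.g. mgo.Dial("mongodb") instead of "localhost", and the stack would be started with:
docker-compose up --build -d
docker-compose logs -f myapp   # watch for the "connected" message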
If your mongoDB server is running on your host machine, replace localhost with your host IP.
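If you do want to reach a MongoDB server on the host from inside the container, one option on Docker 20.10+ (a sketch, untested here) is to map a hostname to the host gateway and dial that instead of localhost; note the server on the host must also listen on an interface reachable from the Docker bridge, not only 127.0.0.1:
docker run -d -p 8080:8080 --add-host=host.docker.internal:host-gateway hell:4
# the app then dials "host.docker.internal" instead of "localhost"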

How to deploy desktop based application on kubernetes

I want to deploy my desktop based application on Kubernetes. Can someone suggest some ways of doing it.
In Docker we used --net and --add-host to run the same thing. But in Kubernetes we are not able to find any solution.
Please help!
There are a bunch of desktop applications with dockerfiles to run on Linux Desktops.
I am not sure if it is possible, but to deploy desktop-based (GUI) applications to kubernetes you need to consider a few things.
You need to make sure the kubernetes nodes are desktops, not headless servers, otherwise it won't work.
Mount the node's X11 socket inside the container running the desktop application to allow the X11 connection:
--volume /tmp/.X11-unix:/tmp/.X11-unix
Pass the node's DISPLAY environment variable into the container's DISPLAY:
-e DISPLAY=unix$DISPLAY
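Putting the two flags together, a sketch of a one-off run (the image name is hypothetical, and it assumes the host's X server permits local connections, e.g. after running xhost +local:):
# allow local containers to talk to the X server
xhost +local:
# run the GUI application with the X11 socket and DISPLAY passed through
docker run -d \
    --volume /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    some-gui-image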
Here is a docker-compose file I use on my desktop.
version: '3.0'
services:
  eclipse:
    image: naeemrashid/eclipse
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - /home/$USER/containers/eclipse/workspace:/home/eclipse/workspace
    environment:
      - DISPLAY=unix$DISPLAY