I am running a Node.js application in Docker. From that application I am trying to connect to the database on my host system, but it is not working.
In my environment file:
MONGODB_URI=mongodb://192.168.1.174:27017/sampleDB
SESSION_SECRET=sample
App_PORT=8088
But I am getting an error and cannot access the database.
My application is running on the Docker Machine IP, 192.168.99.100:8088.
Here is the docker run command I used:
How can I connect the database on my host system to that application?
The IP depends on how the containers are run. If you use docker-compose, it creates a network for you in which containers can reach each other by service name (e.g. db, if that is what you name the service). If not, and you did not specify any network, the default bridge network is used (called docker0 on your Docker machine).
Since you are running the containers separately (using docker run), you have to either use the container's specific IP address (you can get it from inside the container with docker exec container_name ip a) or connect to it via the gateway (your Docker machine). In the latter case, the database port has to be published (e.g. -p 27017:27017 when running the container).
I recommend you start using docker-compose from now on, many things will get easier when running a stack of linked containers.
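For reference, a minimal docker-compose.yml for a setup like this might look as follows (the service names, image, and volume are assumptions for illustration; the key point is that the app reaches Mongo by the service name db instead of a hard-coded host IP):

```yaml
version: '3'
services:
  app:
    build: .
    ports:
      - "8088:8088"            # host:container, matching App_PORT
    environment:
      # "db" resolves to the mongo container on the Compose network
      - MONGODB_URI=mongodb://db:27017/sampleDB
  db:
    image: mongo
    volumes:
      - mongo-data:/data/db    # persist data across container restarts
volumes:
  mongo-data:
```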
Related
When using rootless Docker, Docker instances are running per user and not system-wide. How can I enable communication between Docker containers that have been started by different users?
The scenario is the following: the system is set up with rootless Docker because multiple people share the same workstation. Rootless Docker is used so that users building their Docker images and running their containers do not interfere with each other.
We would also like to run MongoDB in a Docker container that is accessible to all users. How can this be done without system-wide Docker? We will be connected to the workstation via SSH, and we do not want the MongoDB container to be accessible from outside the workstation (i.e. to anyone not connected via SSH).
Using system-wide Docker is not an option.
Any suggestions on how this can be done with rootless Docker?
Info:
Previously, with system-wide Docker, we created a bridge network and used docker run --network=some_network_for_mongodb -d -p 27017:27017 mongodb:5.0.2
> When using rootless Docker, Docker instances are running per user and not system-wide. How can I enable communication between Docker containers that have been started by different users?
The only way you'll be able to enable communication between containers started by different users is by publishing the necessary ports on the host. For example, if you want to create a MongoDB instance that's generally accessible, you could run:
docker run -p 27019:27019 ...
This binds port 27019 in the mongo container to port 27019 on the host. Any other container on the system would then be able to connect to this service using the IP address of a host interface.
Of course, this will also open the port to outside connections. There are several ways of dealing with this:
Block the port in your firewall configuration.
Bind the port to a specific host interface on which it won't be available for outside access. For example, if you have a docker bridge docker0 at 172.17.0.1, you could run:
docker run -p 172.17.0.1:27019:27019 ...
This would only publish the port on 172.17.0.1. Other containers could access the service at 172.17.0.1:27019, but it wouldn't be available for outside access.
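As a sketch of both options (the bridge address, image tag, and ufw firewall are assumptions; the answer's example port 27019 is kept, which presumes mongod is configured to listen on it, and in rootless mode each user's daemon has its own bridge, so check the address with ip a first):

```shell
# Publish mongod only on the bridge address, so other containers can
# reach it via the gateway but it is not exposed outside the host:
docker run -d --name shared-mongo -p 172.17.0.1:27019:27019 mongo:5.0.2

# Or, if the port is published on all interfaces, block outside access
# in the firewall instead (ufw shown as one example):
sudo ufw deny 27019/tcp
```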
I'm running a postgres server in a docker container within a custom docker network, and I found I can access it using psql from the host.
This seems to me to be undesirable from a security perspective since I thought the point of docker networks was to isolate access to the containers.
My thinking was that I would run my app in a separate container within the same docker network and publish ports on the app container only. That way, the app can be accessed from the outside world, but the database can't be.
My question is: Why is the 5432 port being published to host on the postgres container without me explicitly specifying that, and how can I "unpublish" this port?
And a related question would be: am I wrong that publishing port 5432 is a security concern in this case (or at least less secure than not publishing it)?
My container is running the official docker postgres image here: https://hub.docker.com/_/postgres/
Thanks for any help!
Edit: Here is the docker command I'm using to run the container:
```shell
docker run -d --restart=always --name=db.geppapp.com \
  -e "POSTGRES_USER=<user>" -e "POSTGRES_PASSWORD=<password>" \
  -e "POSTGRES_DB=gepp" \
  -v "/mnt/storage/postgres-data:/var/lib/postgresql/data" \
  --net=db postgres
```
Edit 2: My original question was not entirely correct: Docker was not in fact publishing port 5432 to the host. Rather, I was specifying the container's IP address as the host when connecting to Postgres with psql, as follows:
psql --host=<docker-assigned-ip> --username=<user> --dbname=gepp
So the thing preventing me from restricting access to the container from the host is, in fact, that the container is assigned an IP address reachable from the host.
The Dockerfile of the postgres image EXPOSEs port 5432, but that alone does not make the container's port accessible from the host. To publish a port to the host you must use either the -p flag (to publish a specific port or range) or the -P flag (to publish all exposed ports), and I don't see either in your command.
Are you sure you are not accessing a local Postgres installation rather than the one in the container?
Or is the container attached to the host network? If a container is set to network_mode: host (or an equivalent host-type network), any port the container listens on is exposed on the Docker host as well, without requiring the -p or -P docker run options.
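To make the -p / -P distinction concrete (the container names and password here are placeholders; the postgres image needs a password to start):

```shell
# No ports published: the EXPOSEd 5432 is reachable only over the
# container's Docker network, not from the host's interfaces
docker run -d --name pg-internal -e POSTGRES_PASSWORD=secret postgres

# -p publishes an explicit host:container port mapping
docker run -d --name pg-published -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres

# -P publishes every EXPOSEd port on a random high host port;
# `docker port` shows which host port was chosen for 5432
docker run -d --name pg-auto -e POSTGRES_PASSWORD=secret -P postgres
docker port pg-auto
```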
Thanks everyone who commented and answered. I'm answering my own question with a summary of my best understanding based on the replies and other things I've read.
As I mentioned in the edit to my question the thing preventing me from restricting access to the container from the host is the IP assignment on the host network.
According to the docker docs: (https://docs.docker.com/config/containers/container-networking/)
> By default, the container is assigned an IP address for every Docker network it connects to.
In this case, the container is not technically "connected" to the host network but has an IP on that network anyway. David pointed out in comments that this only occurs on Linux, not on Windows or Mac (I haven't personally verified this).
So it would appear that, due to the way Docker networks are implemented on Linux, an IP address reachable from the host is assigned to every running container, and there is no way to prevent this.
From a security perspective, since the docker host is always trusted, my understanding is that database security comes mainly from restricting access to the host itself (network and linux account security), and the native database credential security. Docker does not add another secure layer as I was initially thinking it might.
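If you do later decide to publish the port but want it reachable only from the host itself, binding the published port to the loopback interface is one option (a sketch based on the docker run command from the question; only the -p option is new):

```shell
docker run -d --restart=always --name=db.geppapp.com \
  -e "POSTGRES_USER=<user>" -e "POSTGRES_PASSWORD=<password>" \
  -e "POSTGRES_DB=gepp" \
  -v "/mnt/storage/postgres-data:/var/lib/postgresql/data" \
  --net=db -p 127.0.0.1:5432:5432 postgres
```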
I'm new to Docker. I successfully created a PostgreSQL container, my-database, and I am able to access it from SQLTools on my local machine using the server address localhost and the port.
I got the containerized database's IP address from the following command:
docker container inspect my-database
But when I go back to SQLTools or the PHP web application (not containerized) and try to connect to my-database with the IP address I got above, the connection fails.
What am I missing here?
FYI, I also created another container and was able to connect to my-database in the following way: by using the same network for both my-database and the second container.
It all depends on how you enable access to the database.
If your PHP service runs on the same machine, then localhost could work.
If it's on a different machine in the same network, then use the network IP assigned to that machine. If your PHP server is in a totally different location, then you may want to use something like an nginx reverse proxy in front of your Docker container.
So in your case you should use the ip:port where your DB container's published port is reachable. docker inspect shows the internal network IP, which only helps other containers on the same virtual network connect to the container.
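Concretely (the password and published port are assumptions), if my-database was started with its port published, host clients such as SQLTools or the PHP app should use localhost and the published port, not the inspect IP:

```shell
# start the container with 5432 published on the host
docker run -d --name my-database -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres

# from the host: connect through the published port
psql -h localhost -p 5432 -U postgres
```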
You never need the docker inspect IP address. You can only connect to it in two specific circumstances: if you're in a different container on the same Docker network, or if you're (a) not in a container at all, (b) on the same host, and (c) that host is running native Linux and not a different OS.
You've already identified the right answers. Say you start the container as
```shell
docker network create any-network-name
docker run \
  --name database \
  --net any-network-name \
  -p 5432:5432 \
  postgres
```
From another Docker container on the any-network-name network, you can use the database container name as a DNS name, and avoid the manual IP lookup; ignore any -p options and use the "normal" port for the service. From outside a container, you can use the host's DNS name or IP address (or, if it's the same host, localhost) and the first -p port number.
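For example, from a second container on the same network (the client image and credentials are placeholders), the service is reached by its container name and its normal port:

```shell
docker run --rm --net any-network-name -e PGPASSWORD=<password> postgres \
  psql -h database -p 5432 -U postgres -c 'SELECT 1'
```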
If you're running this in Docker Compose, it automatically creates a network for you and registers containers under their Compose service name; you do not need networks:, container_name:, or other similar settings.
```yaml
version: '3.8'
services:
  database:
    image: postgres
    ports: ['5432:5432']
  application:
    build: .
    environment:
      - PGHOST=database
```
I have twenty-five containers in my docker-compose.yml file.
I also have an external server with a PostgreSQL database that runs directly on a host (not in a Docker container).
All containers can send requests to the database.
I would like to see the IP addresses of the containers that send requests to the database.
But when I check in the database, I can see only the IP of the host running the Docker containers.
One idea is to add the server's LAN interface to a custom Docker bridge network; then I could assign my LAN's IP addresses inside each container.
Unfortunately, I can't run Postgres in Docker either, or configure Docker Swarm or Kubernetes: one of the customer's requirements is that Postgres run locally on a separate host.
I am new to Docker and know only basic things. I have PostgreSQL installed on my local Ubuntu server, and I want to connect it to the web application that runs inside a Docker container. What settings should I apply to allow that?
You can use your server's public IP address for this instead of localhost inside your Docker container.
Make sure that your firewall allows port 5432.
When you run an application directly on your computer, it can easily access all the services on that machine (i.e. on localhost). When your application runs inside a Docker container, it becomes an environment isolated from the host machine (i.e. no access to localhost or other host services unless you expose them explicitly) and can only use what the host OS makes available to the Docker engine. This is why you have to reach your service (Postgres, in your case) through a network address rather than localhost.
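If the database and the container are on the same machine, one common alternative to the public IP (requires Docker 20.10+; the app image name and DATABASE_HOST variable are placeholders) is to map the name host.docker.internal to the host's gateway address and point the app at it:

```shell
docker run -d \
  --add-host=host.docker.internal:host-gateway \
  -e DATABASE_HOST=host.docker.internal \
  my-web-app

# PostgreSQL must also accept non-local connections: set
# listen_addresses = '*' (or a specific address) in postgresql.conf and
# add a pg_hba.conf rule for the Docker subnet, e.g. 172.17.0.0/16.
```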