Can remote Docker be used locally? - postgresql

For example, I have a Postgres Docker container and I connect to it with psql -h localhost -p 5432.
If I "move" this container to the remote machine and add a Docker context with docker context create remote --docker 'host=ssh://<remote machine>' then after docker use context remote I can see that it is up and running remotely. I can manage that remote container without ssh-ing remote machine, everything consider Docker is fine.
However, I don't know how to connect to this remote container with psql without exposing a port to the internet on the remote host.
I thought that when I use a Docker context, all connections to my local Docker were transferred via an SSH tunnel to the remote Docker, but that seems not to be the case.
So, is the remote-host feature only for remote Docker management, and to actually interact with the services inside the containers do I have to open ports, configure the firewall, etc. (which defeats the point of moving everything to a remote machine)? Or am I just doing something wrong, and Docker is supposed to work the way I described above?
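For what it's worth, a Docker context only tunnels the Docker API itself (the docker CLI commands), not the ports of the containers behind it; a plain SSH local port forward can fill the gap. A minimal sketch, assuming the container publishes Postgres only on the remote host's loopback interface (so nothing is exposed to the internet); the container name and remote host are placeholders:

# on the remote machine: publish the port on 127.0.0.1 only
docker run -d --name pg -p 127.0.0.1:5432:5432 postgres

# on the local machine: forward local port 5432 over SSH
ssh -N -L 5432:127.0.0.1:5432 <remote machine>

# in another local shell: connect as if the database were local
psql -h localhost -p 5432 -U postgres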

Related

Connect to a remote database running in a Docker container from another container running on different machine

I am facing the following situation:
I have a docker container (db) running a PostgreSQL database on my machine.
Furthermore, I currently have another container (server), which can successfully connect to that database from the same machine.
However, what I want is to connect to the db database from the server container even when they are running on different machines. How could I do that?
I found this: https://www.codegrepper.com/code-examples/whatever/how+to+connect+to+remote+postgres+database+from+command+line
and this: https://www.a2hosting.com/kb/developer-corner/postgresql/remote-postgresql-connections#Method-2.3A-Set-up-a-direct-connection.
Both solutions connect to the machine running the db container through ssh. However, I can only whitelist certain valid users to ssh into the machine running the db container, and not the user (root) of the server container.
I hope my question is clear!
Thanks for any help!
p.s.: both containers are defined in a docker-compose file.

Connect to Windows Postgres from Debian Docker container

I am running Postgres on a Windows 10 computer, and I want to connect to it from a Docker container. I've followed instructions from many sources and things should be working, but they're not.
Command line used to create Docker container:
docker run --rm -d --network=host --name mycontainer myimage
In postgresql.conf:
listen_addresses = '*'
In pg_hba.conf:
host all all 172.17.0.0/16 trust
In the bash shell of my container, I run:
psql -h 127.0.0.1
and I get the error:
psql: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5432?
Needless to say, Postgres is definitely running on my computer and I am able to query it from local applications. What am I missing?
THIS WON'T WORK FOR DOCKER v18.03 AND ONWARDS
The answer is already there - From inside of a Docker container, how do I connect to the localhost of the machine?
This question is related to a mysql setup, but it should work for your case too.
FOR DOCKER v18.03 ONWARDS
Use host.docker.internal to refer to the host machine.
https://docs.docker.com/docker-for-windows/networking/#i-cannot-ping-my-containers
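For instance, with the container running on the default (NAT) network, i.e. without --network=host, the check from above becomes the following; the postgres superuser is an assumption:

psql -h host.docker.internal -p 5432 -U postgres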
As you've discovered, --network=host doesn't work with Docker for Windows or Docker for Mac. It only works on Linux hosts.
One option for this scenario might be to host PostgreSQL in a container, also. If you deploy them with a docker-compose file, you should be able to have two separate Docker containers (one for the database and one for your service) that are networked together. By default, docker-compose exposes each container to the others in the same compose file using its service name as a DNS name (see the sketch after the list below).
You could also consider including the database in the same container as your service, but I think the docker-compose solution is better for several reasons:
It adheres to the best practice of each container having only a single process and single responsibility.
It means that you can easily change and re-deploy your service without having to recreate the database container.
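A minimal sketch of that compose layout; the image tags, service names, and password are assumptions for illustration:

# docker-compose.yml
version: "3"
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: secret    # placeholder credential
  myservice:
    image: myimage                 # the image from the question
    depends_on:
      - db
    environment:
      DB_HOST: db                  # the service name doubles as the DNS name
      DB_PASSWORD: secret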
Configure the connection inside your Docker container with the real IP address of your host or, as a workaround, with a DNS name.

DB connection path for an application running in Docker

I am running a Node.js application in Docker. In that application I am trying to connect to my system database, but it is not working.
In my environment file:
MONGODB_URI=mongodb://192.168.1.174:27017/sampleDB
SESSION_SECRET=sample
App_PORT = 8088
But I am getting an error and am unable to access the db.
My application is running on the docker machine IP, 192.168.99.100:8088.
Here I have attached my docker run command:
How can I connect my system db to that application?
The IP depends on how the containers are run. If you are using docker-compose, it creates a network for you in which containers can reach one another by service name (e.g. db, should you use that name). If not, and you did not specify any network, the default bridge network is used (called docker0 on your docker machine).
Since you are running the containers separately (using docker run), you have to either use the container's specific IP address (you can get it from inside the container using docker exec container_name ip a) or connect via the gateway (your docker machine). However, in order to do the latter, the database port has to be published (e.g. 27017:27017 when running the container).
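A sketch of that gateway route, assuming the database is moved into a container (as described above) and reusing the docker machine IP from the question:

# publish the MongoDB port when starting the container
docker run -d --name db -p 27017:27017 mongo

# environment file: point the app at the docker machine's IP
MONGODB_URI=mongodb://192.168.99.100:27017/sampleDB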
I recommend you start using docker-compose from now on; many things get easier when you run a stack of linked containers.

Connecting local psql db with web app inside the docker container

I am new to Docker and know only the basics. I have PostgreSQL installed on my local Ubuntu server and I want to connect it with the web application which is inside a Docker container. What settings should I apply to allow that?
You can use your server's public IP address for this instead of localhost in your Docker container.
Make sure that your firewall allows port 5432.
When you run an application on your local computer, the application can easily access all the services on your PC (i.e. localhost). When your application runs inside a Docker container, it becomes an environment completely isolated from that machine (i.e. no access to localhost and other system services unless you expose them directly), and it can only use host OS services that the host OS explicitly serves to the Docker engine. This is why you have to route through the network to access your service, Postgres in your case.
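A sketch of the pieces involved, assuming a stock PostgreSQL install on the Ubuntu host, ufw as the firewall, and placeholder names for the user, database, and host IP:

# postgresql.conf on the host: listen on all interfaces
listen_addresses = '*'

# pg_hba.conf on the host: allow Docker's default bridge subnet
host  all  all  172.17.0.0/16  md5

# open the firewall port
sudo ufw allow 5432/tcp

# from inside the container (203.0.113.10 stands in for the host's IP)
psql -h 203.0.113.10 -p 5432 -U myuser mydb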

Equivalent of using an SSH tunnel

With virtual hosts, rather than deployed Docker containers, it was a normal part of my workflow to create SSH tunnels in order to access restricted machines from my local box, for instance to connect with my psql client to a Postgres instance which I could only reach from a bastion box.
With Docker, everything is boxed away even more. Is there an equivalent for doing the same, but with Docker? Tunnel through the Docker instance to the RDS instance?
You use the docker CLI to connect to a running container. For instance, to log into a db running in a container, you can use (from your local machine):
docker exec -it mypsqlcontainer psql -U username dbname
I personally almost never have to ssh into the host. Everything can be done through the docker CLI.
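Combined with a Docker context like the one in the first question, the same approach reaches a container on a remote host; a sketch, reusing the context and container names from above:

docker --context remote exec -it mypsqlcontainer psql -U username dbname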
You can make an SSH tunnel to the Docker host. The DB port must be accessible from the Docker host (i.e. published using the "-p" docker run option).
If you prefer not to publish the DB port, you can create a jumpbox container with an SSH server, publish port 22 on this container, and use container linking to link the jumpbox container with the DB container.
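A sketch of that jumpbox variant; some-sshd-image is a hypothetical image running an SSH server, and --link is the (legacy) linking mechanism referred to above:

# DB container with no published ports
docker run -d --name db postgres

# jumpbox container linked to it; only port 22 is published
docker run -d --name jumpbox --link db:db -p 2222:22 some-sshd-image

# from the local box: tunnel through the jumpbox ('db' resolves inside it via the link)
ssh -p 2222 -L 5432:db:5432 user@docker-host
psql -h localhost -p 5432 -U postgres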