I am using Rancher Desktop with dockerd (moby).
When I use Docker Desktop, I can connect to the host machine from a container using host.docker.internal.
But with Rancher Desktop, host.docker.internal does not resolve to the host (I'm trying to connect to a postgres database on my localhost).
I have also tried --network=host, but I'm not able to curl http://127.0.0.1:8080, even though I have a frontend running on port 8080.
What is the alternative to host.docker.internal for Rancher Desktop with dockerd (moby)?
I have tried quite a few answers, but none has helped me. I know this question has been asked before, but any help would be really appreciated.
On Windows, you may need to create a firewall rule to allow communication between the host and the container. You can run the command below in an elevated (administrator) PowerShell session to create the rule.
New-NetFirewallRule -DisplayName "WSL" -Direction Inbound -InterfaceAlias "vEthernet (WSL)" -Action Allow
Please refer to the Rancher Desktop FAQ:
https://docs.rancherdesktop.io/faq/#q-can-containers-reach-back-to-host-services-via-hostdockerinternal
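As a quick check after adding the rule, you can try reaching a host service from a container. A minimal sketch, assuming something is listening on port 8080 on the host (curlimages/curl is just a convenient image with curl preinstalled):
# from the host: test whether a container can reach the host service on 8080
docker run --rm curlimages/curl -s http://host.docker.internal:8080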
I'm running a postgres server in a docker container within a custom docker network, and I found I can access it using psql from the host.
This seems to me to be undesirable from a security perspective since I thought the point of docker networks was to isolate access to the containers.
My thinking was that I would run my app in a separate container within the same docker network and publish ports on the app container only. That way, the app can be accessed from the outside world, but the database can't be.
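For illustration, the kind of setup I have in mind (my-app-image and port 8080 are placeholders):
docker network create app-net
# database: attached to the network, no ports published
docker run -d --name db --net=app-net -e "POSTGRES_PASSWORD=<password>" postgres
# app: same network, only its port published to the outside world
docker run -d --name app --net=app-net -p 8080:8080 my-app-image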
My question is: Why is the 5432 port being published to host on the postgres container without me explicitly specifying that, and how can I "unpublish" this port?
And a related question would be: am I wrong that publishing port 5432 is a security concern in this case (or at least less secure than not publishing it)?
My container is running the official docker postgres image here: https://hub.docker.com/_/postgres/
Thanks for any help!
Edit: Here is the docker command I'm using to run the container:
docker run -d --restart=always --name=db.geppapp.com -e "POSTGRES_USER=<user>" -e "POSTGRES_PASSWORD=<password>" -e "POSTGRES_DB=gepp" -v "/mnt/storage/postgres-data:/var/lib/postgresql/data" --net=db postgres
Edit 2: My original question was not entirely correct. Docker was not in fact publishing port 5432 to the host; rather, I was specifying the container's IP address as the host when connecting to postgres with psql, as follows:
psql --host=<docker-assigned-ip> --username=<user> --dbname=gepp
So the thing preventing me from restricting access to the container from the host is in fact that the container is assigned an IP address that the host can reach directly.
The postgres Dockerfile exposes port 5432, but that alone does not make the container's port accessible to the host. To publish a port to the host, you must use either the -p flag to publish a specific port (or a range of ports) or the -P flag to publish all of the exposed ports, and I don't see either in your command.
Are you sure you are accessing the container's postgres and not a local postgres running on the host?
Or is the container attached to the host network? If a container is run with network_mode: host (or the equivalent), any port a service listens on inside the container is exposed on the Docker host as well, without requiring the -p or -P docker run options.
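For illustration, the difference looks like this (names and the password are placeholders):
# not published: no host port is mapped (though on Linux the host can still route to the container IP, as discussed below)
docker run -d --name db-private -e "POSTGRES_PASSWORD=<password>" postgres
# published: reachable from the host (and, depending on firewall, the outside) on port 5432
docker run -d --name db-public -e "POSTGRES_PASSWORD=<password>" -p 5432:5432 postgres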
Thanks to everyone who commented and answered. I'm answering my own question with a summary of my best understanding, based on the replies and other things I've read.
As I mentioned in the edit to my question, the thing preventing me from restricting access to the container from the host is the IP address assigned to the container, which is reachable from the host.
According to the Docker docs (https://docs.docker.com/config/containers/container-networking/): "By default, the container is assigned an IP address for every Docker network it connects to."
In this case, the container is not technically "connected" to the host network, but it has an IP the host can reach anyway. David pointed out in the comments that this only occurs on Linux, not on Windows or Mac (I haven't personally verified this).
So it would appear that, due to the way Docker networks are implemented on Linux, every running container gets an IP address that is reachable from the host, and there's no way to prevent this.
From a security perspective, since the Docker host is always trusted, my understanding is that database security comes mainly from restricting access to the host itself (network and Linux account security) and from the database's own credential security. Docker does not add another security layer, as I was initially thinking it might.
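As an aside, the container IP I used with psql above can be looked up with docker inspect; a sketch using the container name from my run command:
# prints the IP address assigned to the container on each network it is attached to
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db.geppapp.com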
I have a situation where I have problems accessing non-80 ports on my host from Docker. An application in my container needs to access port 55555 on my host.
I have two VMs: VM1 and VM2.
VM1 has Docker installed and a container running. VM2 is a machine that I use for testing.
Command used to start the container
docker run -dit --hostname VM1 --name ContainerTest mcr.microsoft.com/dotnet/framework/sdk:3.5-20191008-windowsservercore-ltsc2019
I run docker exec -it XXXX powershell to execute the PowerShell scripts I need, and I use Test-NetConnection to test my connections.
Problem:
I tried to run the following PowerShell command from my container: Test-NetConnection -ComputerName "VM1" -Port 55555. It failed.
However, I can run Test-NetConnection -ComputerName "VM1" -Port 80 and receive TcpTestSucceeded. All other ports that I tried failed too.
What I have tried:
I verified that VM1's port 55555 is open to the outside, as I can ping VM1 and the TCP test from VM2 to VM1 succeeds.
I turned off the firewall on VM1. No success either.
I am also aware that docker run has --expose and -p options to expose ports; however, I don't think I need them in my case, as my goal is to access a host port from the container, not the other way around.
I don't understand why port 55555 is accessible from VM2 to VM1 but not from the container to VM1, while the container can reach VM1 on port 80 only. Can anyone shed some light on what is going on? I'd appreciate it.
In general, a port is open because a service is listening on it, not just because the port number exists.
In Docker, you should use the overlay network driver when containers need to communicate across different networks.
I hope this link helps you:
https://docs.docker.com/network/
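Along those lines, one way to confirm that a service is actually listening on 55555 is to check on VM1 itself; Get-NetTCPConnection is a standard PowerShell cmdlet:
# run on VM1: lists any process listening on TCP port 55555
Get-NetTCPConnection -LocalPort 55555 -State Listen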
I created a Docker machine named dockerhyperv using the command docker-machine create -d hyperv dockerhyperv.
I have a docker-compose.yml file; I call docker-compose up and the logs look good, but I cannot connect on the port from the yml file.
I eventually found out about the docker-machine ip command and noticed that the Docker machine has a different IP address than my host.
I have no idea why this is the case. Does it have to do with Hyper-V settings? I expect(ed) Docker to run on localhost.
In the past I played with the VirtualBox driver, but this should not interfere with Hyper-V.
Docker Machine creates a VM because no OS other than Linux can natively run Linux containers.
In the case of VirtualBox, I believe it creates port-forwarding rules to allow things to work on localhost, but that is probably not the case with Hyper-V.
You could probably just change the VM settings to use your external network so it gets an IP from your router. Check this doc for information on setting this up.
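Alternatively, you can target the machine's IP directly instead of localhost; a sketch, assuming your compose file publishes port 8080 (the port is a placeholder):
docker-machine ip dockerhyperv
# substitute the printed IP, or inline it:
curl http://$(docker-machine ip dockerhyperv):8080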
Can anybody help me with my issue? When I do polymer serve -H [77.68.84.107], it brings up an error saying no ports are available. I'm really stuck on this.
I am running a 1&1 cloud server with CentOS 7.
I had this problem.
I changed the --hostname value to the IP address instead of the domain name or localhost.
If you are doing it on your own machine, put 127.0.0.1 instead of localhost.
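For example, something like this (the IP is the one from the question; the port is an assumption, since the error suggests the default ports were unavailable):
polymer serve --hostname 77.68.84.107 --port 8081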
I just installed MongoDB using Click to Deploy on Google Cloud Platform. I have another project, where my web application runs, for which I created the MongoDB database.
Do I have to open some port or configure something?
As the other answers in this thread suggest, the mongod daemon listens on TCP port 27017. Therefore, you will need to add a firewall rule on the Compute Engine firewall for this port and protocol. This can be done in the Google Cloud console or with the gcloud command-line tool:
gcloud compute firewall-rules create allow-mongodb --allow tcp:27017
It is recommended to use a target tag with the firewall rule, so that it only applies to the VM instances you specify.
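A sketch of that, with a hypothetical tag name mongodb-server and instance name mongodb-vm:
# restrict the rule to instances carrying the tag
gcloud compute firewall-rules create allow-mongodb --allow tcp:27017 --target-tags mongodb-server
# attach the tag to the MongoDB instance
gcloud compute instances add-tags mongodb-vm --tags mongodb-server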
Adding the port in the firewall is not enough. By default mongod binds to 127.0.0.1, which needs to be changed to 0.0.0.0.
Edit the file /etc/mongod.conf inside the instance (for example, sudo nano /etc/mongod.conf).
Look for the bindIp setting.
Change it to 0.0.0.0 and restart MongoDB.
You will now be able to connect to the MongoDB database.
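The relevant section of /etc/mongod.conf would look roughly like this after the change (keep in mind that binding to 0.0.0.0 exposes mongod on every interface, so the firewall rule and MongoDB authentication matter):
net:
  port: 27017
  bindIp: 0.0.0.0
Then restart the service, e.g. with sudo systemctl restart mongod.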
Also, click on the HTTP or HTTPS checkbox to activate the external IP address so you can use it to access the database.
On the MongoDB project you should open the firewall for port 27017.
The ports MongoDB uses are listed at:
http://docs.mongodb.org/manual/tutorial/configure-linux-iptables-firewall/
Regards,
Paolo
This answer explains how to set the firewall rule for port 27017.
Another issue that could cause this is running your MongoDB instance in a separate network while your other instances are on the default network (or vice versa).
I ran into this, and after getting both instances onto the same network, I was able to connect to the mongo instance by name.
Here's an example of how to set the network for a managed VM in your app.yaml:
network:
  instance_tag: https-server
  name: my-node-network
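Once both instances are on the same network, a quick connectivity check from the app instance might look like this (the instance name is hypothetical):
# connects by internal DNS name on the shared network
mongo --host mongodb-instance --port 27017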