Cannot connect to docker0 (MongoDB chart) - mongodb

I am completely new to Docker. I followed the instructions to install MongoDB Charts and Docker.
When I try to connect to 172.17.0.1, it says:
Unable to connect to MongoDB using the specified URI.
The following error was returned while attempting to connect:
MongoNetworkError: failed to connect to server [172.17.0.1:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 172.17.0.1:27017]
The result from pinging the specified server "172.17.0.1" from within the container is:
PING 172.17.0.1 (172.17.0.1) 56(84) bytes of data.
64 bytes from 172.17.0.1: icmp_seq=1 ttl=64 time=0.050 ms
--- 172.17.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
MongoDB is running on my local machine. I think it is not running in the container (not sure), because I installed MongoDB on my machine before I installed Docker.
I have also checked the settings with docker network inspect bridge:
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
This is the yml file:
version: "3.3"
services:
charts:
image: quay.io/mongodb/charts:v0.10.0
hostname: charts
ports:
# host:container port mapping. If you want MongoDB Charts to be
# reachable on a different port on the docker host, change this
# to <port>:80, e.g. 8888:80.
- 80:80
- 443:443
volumes:
- keys:/mongodb-charts/volumes/keys
- logs:/mongodb-charts/volumes/logs
- db-certs:/mongodb-charts/volumes/db-certs
- web-certs:/mongodb-charts/volumes/web-certs
environment:
# The presence of following 2 environment variables will enable HTTPS on Charts server.
# All HTTP requests will be redirected to HTTPS as well.
# To enable HTTPS, upload your certificate and key file to the web-certs volume,
# uncomment the following lines and replace with the names of your certificate and key file.
# CHARTS_HTTPS_CERTIFICATE_FILE: charts-https.crt
# CHARTS_HTTPS_CERTIFICATE_KEY_FILE: charts-https.key
# This environment variable controls the built-in support widget and
# metrics collection in MongoDB Charts. To disable both, set the value
# to "off". The default is "on".
CHARTS_SUPPORT_WIDGET_AND_METRICS: "on"
# Directory where you can upload SSL certificates (.pem format) which
# should be considered trusted self-signed or root certificates when
# Charts is accessing MongoDB servers with ?ssl=true
SSL_CERT_DIR: /mongodb-charts/volumes/db-certs
networks:
- backend
secrets:
- charts-mongodb-uri
networks:
backend:
volumes:
keys:
logs:
db-certs:
web-certs:
secrets:
charts-mongodb-uri:
external: true
How can I connect to MongoDB?

By default, mongod is configured to accept connections only from localhost (127.0.0.1). When the Charts image connects to it via Docker, the connection is seen as an external one coming from docker0, and it is hence rejected by mongod: MongoNetworkError: connect ECONNREFUSED 172.17.0.1:27017
To fix this, edit the Mongo config (sudo vim /etc/mongod.conf) and add your docker0 IP to the bindIp setting: the line bindIp: 127.0.0.1 should be changed to bindIp: 127.0.0.1,172.17.0.1 for default Docker installations.
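For reference, a sketch of the relevant net section of /etc/mongod.conf after the change (27017 is the default port; adjust it if yours differs):
net:
  port: 27017
  bindIp: 127.0.0.1,172.17.0.1  # accept connections from docker0 as well as localhost
Restart mongod afterwards (for example sudo systemctl restart mongod) so the new bind address takes effect.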
This may be an old question, but I think it is a common issue. I had to struggle with this for a while before actually reading the error message more thoroughly and realising it was really a simple issue.
Another issue is that upon first install you can connect to Mongo without a username or password, so those two should be removed from the URI if you have not configured security, making it mongodb://172.17.0.1:27017.
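In that case, recreating the Docker secret with the credential-free URI might look like this (a sketch; the secret name matches the compose file above):
echo "mongodb://172.17.0.1:27017" | docker secret create charts-mongodb-uri -
Note that docker secret create fails if the secret already exists, so remove the old one first with docker secret rm charts-mongodb-uri.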

Assuming you know how to use echo "mongodb://<username>:<password>@myhost.com/" | docker secret create charts-mongodb-uri - to create the connection URL.
The problem is actually how to connect from a Docker container to an outside service running on the host machine. You can get some help from plenty of questions like From inside of a Docker container, how do I connect to the localhost of the machine?
Basically, if you are using Docker for Mac or Windows, use something like echo "mongodb://host.docker.internal" | docker secret create charts-mongodb-uri -. For Linux, see https://docs.mongodb.com/charts/master/installation/, section RUNNING METADATA DATABASE ON LOCALHOST, or just use host mode (remove the ports section):
version: "3.3"
services:
charts:
image: quay.io/mongodb/charts:v0.10.0
hostname: charts
network_mode: "host"
...
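With network_mode: "host" the container shares the host's network stack, so the URI in the secret can simply point at localhost (a sketch, assuming an unauthenticated local mongod):
echo "mongodb://127.0.0.1:27017" | docker secret create charts-mongodb-uri -
This variant also avoids editing bindIp, since the connection now arrives on 127.0.0.1.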

Related

(Odoo) Why can't I connect to the server when using pganonymizer?

I am trying to anonymize some data in my database, but I am getting the following error.
Postgres is running on port 5432 in the container and I am exposing it on the host on port 5433.
ports:
  - "5433:5432"
Am I supposed to add something in my odoo.conf file?
Thanks
The newest version of pganonymizer needs you to specify the Postgres hostname and port for the subprocess command, which it reads from the PGHOST and PGPORT environment variables:
if env.get('PGHOST'):
    cmd.extend(['--host', env['PGHOST']])
if env.get('PGPORT'):
    cmd.extend(['--port', env['PGPORT']])
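Given the mapping above, pointing those environment variables at the published host port might look like this (a sketch; the values are assumptions based on the 5433:5432 mapping):
export PGHOST=127.0.0.1   # the dockerized Postgres is published on the host
export PGPORT=5433        # host side of the 5433:5432 port mapping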

Docker Compose Postgres always ended by timeout or connection refused

Got a small question here. I suppose I've done something wrong somewhere, but I can't find where, and I've been going in circles for over two hours.
So basically, I've created a docker-compose setup with PostGIS (Postgres). I wanted to connect to it through TablePlus.
However, I can't ...
Two kinds of errors keep appearing:
When I try to connect on 127.0.0.1, it keeps telling me connection refused:
could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
When I try to use the Docker IP address, 172.23.0.2 (docker inspect on the container ID to get the IP address of the container):
could not connect to server: Operation timed out
Is the server running on host "172.23.0.2" and accepting
TCP/IP connections on port 5432?
Here is my docker-compose.yml
version: '3.5'
services:
  db:
    image: kartoza/postgis:12.1
    environment:
      - POSTGRES_USER=user1
      - POSTGRES_PASSWORD=password1
      - POSTGRES_DB=database_db
    volumes:
      - data_db_volume:/var/lib/postgresql/12
    ports:
      - "5432:5432"
volumes:
  data_db_volume:
At first, when I tried to connect, it was telling me: role "user1" does not exist.
So to stop this I ran brew services stop postgresql on my machine.
I think a local Postgres was running on the same port, because lsof -n -i:5432 | grep LISTEN kept returning output (it stopped once I stopped the local Postgres).
Alright, so after a few days of research and trying on another computer, it seems it came down to the new Docker desktop application with a graphical interface that I hadn't used before: once that was running correctly and a prune had been done from there, everything started to work fine, so I think an outdated piece of software was causing the error.
Thanks everyone
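For later readers: the prune done from the Docker desktop GUI presumably corresponds to the CLI cleanup command (an assumption about what the GUI runs):
docker system prune   # removes stopped containers, unused networks, and dangling images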

How to connect to a postgresql instance with Docker Toolbox and the pgadmin4 client

I'm new to Docker, and I'm running it with Docker Toolbox because I have Windows 10 Family.
I have a training project with a database to bring up with docker-compose.
Here is the docker-compose.yml (it was given to me, but I can modify it):
version: '2'
services:
  myerp.db:
    image: postgres:9.4
    ports:
      - "127.0.0.1:9032:5432"
    volumes:
      # - "./data/db:/var/lib/postgresql/data"
      - "./init/db/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d"
    environment:
      - POSTGRES_DB=db_myerp
      - POSTGRES_USER=usr_myerp
      - POSTGRES_PASSWORD=myerp
When I run that through Docker with docker-compose up it seems OK, but I'm unable to connect to it through a server in pgAdmin 4.
I'm not sure of the connection setup I'm supposed to use because of Docker Toolbox, where in general you access a running instance not at 127.0.0.1 but at 192.168.99.100 by default.
Anyway, if I configure a PostgreSQL server with
Hostname: 127.0.0.1
Port: 9032
and the username and password as written above, I get:
could not connect to server: Connection refused (0x0000274D/10061)
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 9032?
If I change the IP in docker-compose to 192.168.99.100, I manage to connect, but the database is not initialized although there are SQL files in the folder \init\db\docker-entrypoint-initdb.d:
01_create_schema.sql
02_create_tables.sql
21_insert__data_demo.sql
So is my connection wrongly set up? Or should I use 192.168.99.100, with the initialisation setup being what is wrong in the docker-compose? Thanks for the help!
OK, so I solved the problem, but unfortunately without Docker Toolbox.
I bought a key for Windows 10 Pro (6€), upgraded the system really fast, and installed Docker Desktop.
Everything worked instantly.
Before, the connection IP was indeed 192.168.99.100, but I don't know why the SQL scripts were not copied into the virtual environment; the folder /docker-entrypoint-initdb.d remained empty.
So my solution was to upgrade the system, and everything is fine!
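For anyone who has to stay on Docker Toolbox: the 127.0.0.1 prefix in the ports mapping binds inside the boot2docker VM rather than on the Windows host, so a possible fix (a sketch, untested on Toolbox) is to drop the prefix:
ports:
  - "9032:5432"
pgAdmin would then connect to 192.168.99.100:9032.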

How to connect to Cloud SQL (2nd Generation) via MySQL Proxy Docker Container over TCP

Running on Mac OS X, I have been trying to connect to a Cloud SQL instance via the proxy using these directions. Once you have installed the MySQL client and the gce-proxy container, and have created a service account in Google Cloud Platform, you get down to running these two commands specified in the documentation:
docker run -d -v /cloudsql:/cloudsql \
  -v [LOCAL_CERTIFICATE_FILE_PATH]:[LOCAL_CERTIFICATE_FILE_PATH] \
  b.gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy \
  -instances=[INSTANCE_CONNECTION_NAME]=tcp:3306 -credential_file=[CLOUD_KEY_FILE_PATH]
mysql -h127.0.0.1 -uroot -p
First, I don't understand how this should ever work, since the container is not exposing a port. So, unsurprisingly, when I attempted to connect I got the following error from the MySQL client:
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (61)
But if I do expose the port by adding -p 3306:3306 to the docker run command, I still can't connect. Instead I get the following error from the MySQL client:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
I have successfully connected to the proxy by running cloud_sql_proxy on my Docker host machine following that documentation, so I am confident my credential file and my MySQL client are configured correctly. The logs of the container do not state that any connection was attempted. I have no problem connecting to a normal MySQL container via Docker. What am I missing here?
I was able to figure out how to use cloudsql-proxy in my local Docker environment by using docker-compose. You will need to pull down your Cloud SQL instance credentials and have them ready. I keep them in my project root as credentials.json and add it to my .gitignore in the project.
The key part I found was using =tcp:0.0.0.0:5432 after the GCP instance ID so that the port can be forwarded. Then, in your application, use cloudsql-proxy instead of localhost as the hostname. Make sure the rest of your DB credentials are valid in your application secrets so that it can connect through the local proxy supplied by the cloudsql-proxy container.
Note: keep in mind I'm writing a Tomcat Java application, and my docker-compose.yml reflects that.
docker-compose.yml:
version: '3'
services:
  cloudsql-proxy:
    container_name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.11
    command: /cloud_sql_proxy --dir=/cloudsql -instances=<YOUR INSTANCE ID HERE>=tcp:0.0.0.0:5432 -credential_file=/secrets/cloudsql/credentials.json
    ports:
      - 5432:5432
    volumes:
      - ./credentials.json:/secrets/cloudsql/credentials.json
    restart: always
  tomcatapp-api:
    container_name: tomcatapp-api
    build: .
    volumes:
      - ./build/libs:/usr/local/tomcat/webapps
    ports:
      - 8080:8080
      - 8000:8000
    env_file:
      - ./secrets.env
    restart: always
It does look like there are some omissions in the documentation.
1) As you point out, you need to expose the port from the container. You'll want to make sure you only expose it to the local machine by specifying -p 127.0.0.1:3306:3306.
2) Then, when running the container, you'll want the proxy to listen on all interfaces inside the container by specifying -instances=[INSTANCE_CONNECTION_NAME]=tcp:0.0.0.0:3306
I tried @Vadim's suggestion, which is basically this:
docker run -d -v /cloudsql:/cloudsql \
  -p 127.0.0.1:3306:3306 \
  -v [LOCAL_CERTIFICATE_FILE_PATH]:[LOCAL_CERTIFICATE_FILE_PATH] \
  b.gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy \
  -instances=[INSTANCE_CONNECTION_NAME]=tcp:0.0.0.0:3306 -credential_file=[CLOUD_KEY_FILE_PATH]
I was still unable to get a connection, as I still got this error:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
However, the logs of the docker container showed a connection, like so:
2016/10/16 07:52:32 New connection for "[INSTANCE_CONNECTION_NAME]"
2016/10/16 07:52:32 couldn't connect to "[INSTANCE_CONNECTION_NAME]": Post https://www.googleapis.com/sql/v1beta4/projects/[PROJECT_NAME]/instances/[CLOUD_SQL_INSTANCE_NAME]/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://accounts.google.com/o/oauth2/token: x509: failed to load system roots and no roots provided
So now it appeared that the proxy was getting the traffic, but it could not find the certificates for SSL. I had used OpenSSL's cert.pem export of my certificates and mounted it to the same location in the Docker container. It makes sense that an arbitrary mapping of [LOCAL_CERTIFICATE_FILE_PATH]:[LOCAL_CERTIFICATE_FILE_PATH] wasn't helping the proxy figure out where the certificates were. So I used a clue from this Kubernetes setup guide and changed the mounted volume to -v [LOCAL_CERTIFICATE_FILE_PATH]:/etc/ssl/certs. Mercifully, that worked.
TL;DR - Here is the final syntax for getting the Docker container to run over TCP:
docker run -d \
  -p 127.0.0.1:3306:3306 \
  -v [SERVICE_ACCOUNT_PRIVATE_KEY_DIRECTORY]:[SERVICE_ACCOUNT_PRIVATE_KEY_DIRECTORY] \
  -v [LOCAL_CERTIFICATE_DIRECTORY]:/etc/ssl/certs \
  b.gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy \
  -instances=[INSTANCE_CONNECTION_NAME]=tcp:0.0.0.0:3306 \
  -credential_file=[SERVICE_ACCOUNT_PRIVATE_KEY_JSON_FILE]
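With the proxy published on 127.0.0.1:3306, the client command from the documentation should then connect as expected:
mysql -h127.0.0.1 -uroot -p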

connect robomongo to mongoDB docker container

I'm running a Node.js app with docker-compose. Everything works fine and I can see all my data by connecting to Mongo inside the container. But when I connect with Robomongo I don't see any data.
How can I deal with this problem?
There is another way. You can:
1) SSH with Robomongo into the actual virtual server that hosts your Docker applications (SSH tab, check "Use SSH tunnel" and complete the other fields accordingly).
2) Now ssh into the same machine in your terminal.
3) docker ps should show you your MongoDB container.
4) docker inspect <mongo container id> will print complete information about that container. Look for IPAddress near the end; that gives you the local IP of the container (see the sketch after these steps).
5) In the "Connection" tab in Robomongo, use that container IP to connect.
Another side note: make sure you don't expose your MongoDB service ports in any way (neither in the Dockerfile nor in docker-compose.yml), because that will make your database openly accessible from everywhere. Assuming you have not set up a username/password for that service, you will be scanned and hacked soon.
The easiest way is to enable port forwarding on the Mongo container itself; here's what my docker-compose looks like:
mongo:
  image: mongo
  restart: always
  ports:
    - 27017:27017
You can make a Robomongo SSH tunnel connection to MongoDB inside the Docker container. First of all, you should install an SSH server inside your Docker container:
https://docs.docker.com/engine/examples/running_ssh_service/
After that, configure your connection in Robomongo. Inside "Connection Settings" there are configuration tabs for your Robomongo connection.
Go to the "SSH" tab and configure your SSH connection to the Docker container. Then go to the "Connection" tab and configure your connection to MongoDB as if it were on localhost.
I was facing a different problem: I had installed MongoDB locally (using brew), so when the MongoDB in Docker was running, it clashed with the one running on my host.
So I ran
brew services stop mongodb-community
and then restarted Robo 3T. I then saw the databases created in the MongoDB running in Docker.
Voila!
Please note that maybe you won't need to use SSH at all, because it may just be an incompatibility between Mongo and Robomongo: 'Robomongo v8.5 and lower doesn't support MongoDB 3'. It has nothing to do with Docker.
First log in with your SSH login details:
ssh -i yourpemfile.pem username@ipaddress
Check the running container ID for MongoDB:
docker ps -a
Then inspect the Mongo container:
docker inspect container_id
Then open Robo 3T, create a new connection, add the container IP from the inspect output, and use your SSH login details to connect to MongoDB.
In your docker-compose file, you can publish a port to the host.
For example, the following code maps port 27017 inside the container to port 27018 on the host.
app:
  image: node
  volumes:
    - /app
  ports:
    - "27018:27017"
Then, if you have docker-machine installed and your machine is default, you can run in a terminal:
docker-machine ip default
It will give you the IP of your host, for example 192.168.2.3. The address of your database (host) will then be 192.168.2.3 and the port 27018.
If your Docker machine is not virtual and is your OS, the address of your database will be localhost and the port 27018.
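A quick check from the host with the mongo shell would then look something like this (a sketch; 192.168.2.3 is the example IP above):
mongo --host 192.168.2.3 --port 27018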