I'm looking for a way to connect a Dockerized app to a Postgres database running locally via https://postgresapp.com/
I was wondering which ports need to be open and what the yml file should look like so docker-compose can work with the locally running Postgres.
Thanks!
You have to make changes to the Postgres config files on the host machine, specifically:
1) pg_hba.conf
2) postgresql.conf
You can find these configuration files under the /var/lib/postgresql/[version]/ path.
First check your docker0 bridge interface; it is usually 172.17.0.0/16. If not, adjust the addresses below accordingly.
Make the change in postgresql.conf (the path will be the same as pg_hba.conf):
listen_addresses = '*'
Then in pg_hba.conf add a rule like:
host all all 172.17.0.0/16 md5
Then, in the Docker application, use the host's IP address to connect to the Postgres instance running on the host.
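For reference, a minimal docker-compose.yml sketch for the application side might look like the following; the service name, image, and credentials are placeholders, DATABASE_URL is just whatever environment variable your app reads, and 172.17.0.1 is assumed to be your docker0 address:
version: "3"
services:
  app:
    image: my-app-image            # placeholder for your application image
    environment:
      # point the app at Postgres running on the host via the docker0 bridge IP
      DATABASE_URL: postgres://postgres:secret@172.17.0.1:5432/my_database
No ports need to be published for the container to make this outgoing connection; Postgres just has to listen on an address the container can reach.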
There is no difference between accessing a database from inside a container or outside a container.
Let's say the Postgres database is running on localhost. If you want to connect to it from a small Python script, you can do so as follows, as found in the Docs:
#!/usr/bin/python
import psycopg2

def main():
    # Define our connection string
    conn_string = "host='localhost' dbname='my_database' user='postgres' password='secret'"

    # Print the connection string we will use to connect
    print("Connecting to database\n ->%s" % (conn_string))

    # Get a connection; if a connection cannot be made an exception will be raised here
    conn = psycopg2.connect(conn_string)

    # conn.cursor will return a cursor object; you can use this cursor to perform queries
    cursor = conn.cursor()
    print("Connected!\n")

if __name__ == "__main__":
    main()
In order for this script to run you need to install Python and psycopg2 on your local machine, probably in a virtual environment. Alternatively, you could put it in a container. Instead of installing everything manually, you would define a Dockerfile with your installation instructions. It would probably look something like this:
FROM python:latest
ADD python-script.py /opt/www/python-script.py
RUN pip install psycopg2
CMD ["python", "/opt/www/python-script.py"]
And if we build and run...
dave-mbp:Desktop dave$ ls -la
-rw-r--r-- 1 dave staff 135 Dec 8 19:05 Dockerfile
-rw-r--r-- 1 dave staff 22 Dec 8 19:04 python-script.py
dave-mbp:Desktop dave$ docker build -t python-script .
Sending build context to Docker daemon 17.92kB
Step 1/4 : FROM python:latest
latest: Pulling from library/python
85b1f47fba49: Already exists
ba6bd283713a: Pull complete
817c8cd48a09: Pull complete
47cc0ed96dc3: Pull complete
4a36819a59dc: Pull complete
db9a0221399f: Pull complete
7a511a7689b6: Pull complete
1223757f6914: Pull complete
Digest: sha256:db9d8546f3ff74e96702abe0a78a0e0454df6ea898de8f124feba81deea416d7
Status: Downloaded newer image for python:latest
---> 79e1dc9af1c1
Step 2/4 : ADD python-script.py /opt/www/python-script.py
---> 21f31c8803f7
Step 3/4 : RUN pip install psycopg2
---> Running in f280c82d74e7
Collecting psycopg2
Downloading psycopg2-2.7.3.2-cp36-cp36m-manylinux1_x86_64.whl (2.7MB)
Installing collected packages: psycopg2
Successfully installed psycopg2-2.7.3.2
---> bd38f911bb6a
Removing intermediate container f280c82d74e7
Step 4/4 : CMD python /opt/www/python-script.py
---> Running in 159b70861893
---> 4aa783be5c90
Removing intermediate container 159b70861893
Successfully built 4aa783be5c90
Successfully tagged python-script:latest
dave-mbp:Desktop dave$ docker run python-script
>> Connected!
Containers are largely a packaging solution. We'll be able to connect to an external database or API endpoint as if we were running everything from outside of a container.
There is one thing to keep in mind, though: localhost may resolve to the virtual machine if you're using Docker Toolbox/VirtualBox. In that case you wouldn't be able to connect to localhost unless you were running a bridged network connection in the VM; otherwise, just specify the host IP instead of localhost.
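To find the host IP to use instead of localhost on a Mac, something like the following works (the interface name en0 is an assumption; yours may differ):
ipconfig getifaddr en0
# prints e.g. 192.168.1.23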
Postgres default port is 5432.
Docker offers different networking modes. I can tell you how to achieve your goal using bridge mode (the default).
There is always a bridge network that allows you to access the host machine. It is created by default and called docker0.
Run ip addr show docker0 in a terminal to see the host IP that is available to all the containers you run.
Output on my machine:
developer@dlbra:~$ ip addr show docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b4:95:60:89 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
Therefore you don't need any additional configuration in docker-compose.yml.
You just have to set your DB host to the IP address you saw; in my case it would be 172.17.0.1.
If your locally installed Postgres listens on a different port, also specify that port in your application configuration.
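As a minimal sketch of the application side, assuming the docker0 address above and placeholder credentials (this also relies on Postgres being configured to accept connections from the Docker subnet, as described in the first answer):
import psycopg2

# 172.17.0.1 is the docker0 gateway as seen from inside the container;
# the database name, user, and password are placeholders
conn = psycopg2.connect(
    host="172.17.0.1",
    port=5432,
    dbname="my_database",
    user="postgres",
    password="secret",
)
print("Connected!")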
Related
I am using the jupyter/scipy-notebook image and want to connect to a MongoDB that runs locally on macOS (port-forwarded through K8s). I cannot get this to work. I can connect from the container to the host using "host.docker.internal", but I cannot reach MongoDB.
I tried two different approaches
A. Using the network flag to avoid the issue entirely:
docker run --network="host" jupyter/scipy-notebook:b418b67c225b
Result: Cannot reach the notebook at all on localhost:8888
B. Running the image without the network flag and connecting to "host.docker.internal" when using MongoDB.
docker run -p 8888:8888 jupyter/scipy-notebook:b418b67c225b
client = MongoClient("mongodb://host.docker.internal:27017/?readPreference=primary&ssl=false")
db = client["foo"]
db.list_collection_names()
Result: Some kind of topology error:
ServerSelectionTimeoutError: Could not reach any servers in ........ [Errno -3] Temporary failure in name resolution')>]>
Any ideas?
I have successfully connected to the local environment in a Jupyter notebook on port 8888. Now I am trying to query a locally running MongoDB on port 3001. I am using pymongo, and below is my code:
myclient = pymongo.MongoClient("mongodb://localhost:3001")
mydb = myclient["meteor"]
mydoc = mydb["historicalNames"].find({ "Name" : "John Doe"})
print(mydoc)
<pymongo.cursor.Cursor at 0x7f78ff706e80>
But when I try to fetch data using the code below:
df = pd.DataFrame(list(mydoc))
df.head()
I get the error:
ServerSelectionTimeoutError: localhost:3001: [Errno 111] Connection refused
How can I connect to the local DB when connecting Google Colab to a local runtime environment?
You might try simplifying your setup by removing Colab: does the same notebook code work in your local Jupyter installation using the Jupyter front-end?
A total guess: is the Jupyter runtime running inside a Docker container separate from where the MongoDB server is running? If yes, then you probably need to bridge the networks to make it work, or tell both Docker containers to use --net=host networking (and make sure there are no port collisions among your host and all the Docker containers).
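If the notebook container runs on a Linux Docker engine, where host.docker.internal is not defined by default, one option (assuming Docker 20.10 or newer) is to add that mapping explicitly when starting the container and keep the published port:
docker run -p 8888:8888 --add-host=host.docker.internal:host-gateway jupyter/scipy-notebook:b418b67c225b
With that mapping in place, the MongoClient("mongodb://host.docker.internal:27017/...") connection string from the question should resolve; if MongoDB still isn't reachable, check that it is listening on an interface other than 127.0.0.1.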
Hello guys, I'm trying to find a way to move my Mongo database from inside Vagrant to outside of it. I'm reading some posts in this forum, but they're related to Postgres and MySQL.
When I run npm start, this is what I have in my package.json:
"start": "MONGODB=mongodb://localhost:27017....
So the problem is that the database will get saved on the virtual machine's localhost, so it won't be accessible outside of the VM. How can I change this localhost path to communicate with the outside?
It is no different whether it is Vagrant or another server.
The DB file location is specified in /etc/mongodb.conf. By default, databases are saved in /data/db.
So the problem is that the database will get saved on the virtual machine's localhost, so it won't be accessible outside of the VM. How can I change this localhost path to communicate with the outside?
If you want the DB to be accessible from your host machine, you need to replace localhost with the IP of the Vagrant VM (if you specified a private IP), or better, bind to 0.0.0.0 so it's accessible from all network interfaces.
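As a sketch of that binding change in the config file mentioned above (old-style mongodb.conf syntax; newer installs use a YAML /etc/mongod.conf instead):
# /etc/mongodb.conf
bind_ip = 0.0.0.0   # listen on all interfaces instead of only localhost
port = 27017
After changing it, restart the mongodb service so the new bind address takes effect.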
I did it, this link gave me the answer: Vagrant reverse port forwarding?
It seems that by default the host is reachable at 10.0.2.2 from inside Vagrant, so if I run mongo 10.0.2.2:27017 inside Vagrant, it connects to my databases outside of Vagrant.
Therefore, this is what I need to put in my package.json to run npm start...
"start": "MONGODB=mongodb://10.0.2.2:27017/
My Go application makes TLS connections via tls.Dial() to exchange data.
It works fine when run from the host.
But the outgoing connection doesn't seem to work when the app is run from a Docker container. The app hangs indefinitely.
Note 1: Same behavior when using docker run -p $(docker-machine ip):2500:2500 ...
Note 2: VM doesn't have extra port forwarding settings other than the default settings that came with docker-machine's default VM.
The Docker image is built with this Dockerfile:
FROM golang:latest
RUN mkdir -p "$GOPATH/src/path/to/app"
# Install dependencies
RUN go get github.com/path/to/dep
VOLUME "$GOPATH/src/path/to/app"
EXPOSE 2500
WORKDIR "$GOPATH/src/path/to/app"
CMD ["go", "run", "main.go"]
Host is OS X running docker-machine.
Question
How can I make the TCP outgoing connection to work?
You are either using boot2docker or docker-machine (since you are running Docker on OS X). If you are using boot2docker, you have to forward the ports on VirtualBox as well as Docker; have a look at this blog post:
https://fogstack.wordpress.com/2014/02/09/docker-on-osx-port-forwarding/
If you are using docker-machine, you have to connect to the docker-machine-assigned IP, not localhost; have a look at this post:
https://github.com/docker/machine/issues/710
I see now that you are using docker-machine specifically, so the post about docker-machine should answer your question.
Edit: I misunderstood the question. You are trying to make an outgoing connection on a forwarded port, but that isn't necessary: by default Docker can make outgoing connections on any port. Port forwarding is for incoming connections only. Please try again without specifying any ports to forward. My suspicion is that you are trying to route the outgoing connection through the incoming (forwarded) port.
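In other words (the image name below is a placeholder), the outbound TLS dial should work without any -p flags at all:
# no -p needed: containers can open outbound connections on any port
docker run my-go-app
# -p only matters for connections coming into the container from outside
docker run -p 2500:2500 my-go-app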
I've just had exactly the same problem; I was unable to connect out at all.
I restarted the container, and suddenly outgoing connections worked fine. Perhaps the container had survived an update of Docker?
Currently using Docker version 18.09.3, build 774a1f4
I'm trying to run a distributed test for learning purposes. I'm using a CentOS 7 virtual machine as a slave, with the master running on Windows 7. Even after configuring the master with the IP of the slave (the VM) by modifying jmeter.properties, it doesn't work. When I try to run jmeter-server on the CentOS machine, this problem appears:
Created remote object: UnicastServerRef [liveRef: [endpoint:[127.0.0.1:44341](local),objID:[4e68a212:14a8564a618:-7fff, 5760053273490727502]]]
Server failed to start: java.rmi.RemoteException: Cannot start. localhost.localdomain is a loopback address.
An error occurred: Cannot start. localhost.localdomain is a loopback address.
Can somebody point me in the right direction or explain how I can do this?
Thanks!
Put the following line in the system.properties file: java.rmi.server.hostname=xxx.xxx.xxx.xxx
Alternatively, start JMeter providing the above property as a command-line argument:
jmeter (or jmeter-server) -Djava.rmi.server.hostname=xxx.xxx.xxx.xxx
Double check your network configuration, i.e. make sure that your /etc/hosts file contains the following lines:
127.0.0.1 localhost localhost.localdomain
xxx.xxx.xxx.xxx your CentOS machine hostname
In all the above cases xxx.xxx.xxx.xxx should be the IP address of your CentOS machine, and this IP address must be different from 127.0.0.1.
Also make sure that you select "Bridged" networking in your virtual machine, that the machines can reach each other over the network, that firewalls are configured to allow the communication, etc.
For more information on different JMeter Properties and ways of setting/overriding them see Apache JMeter Properties Customization Guide
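On the master side, the slave's address goes into jmeter.properties (using the same placeholder IP as above for the CentOS VM):
# jmeter.properties on the Windows master
remote_hosts=xxx.xxx.xxx.xxx
remote_hosts accepts a comma-separated list if you later add more slaves.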