I want to see the web interface of my RethinkDB instance, which I run as a Docker container. I read how to open the web interface here: https://rethinkdb.com/docs/administration-tools/. But it does not work in my case: when I go to localhost:28015, nothing is shown. Here is the docker-compose file in which the rethinkdb container is defined.
services:
  rethinkdb:
    image: rethinkdb:2.3.5
    ports:
      - "28015:28015"
Can you tell me why it is not working?
You should publish port 8080 in your file instead of (or in addition to) 28015. Port 28015 is the client driver port; the web administration UI listens on 8080.
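A minimal sketch of the corrected file (keeping the driver port published for your client applications):

services:
  rethinkdb:
    image: rethinkdb:2.3.5
    ports:
      - "28015:28015"  # client driver port
      - "8080:8080"    # web administration UI

Then open localhost:8080 in your browser.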
On a virtual machine I have two Docker containers running, named <postgres> and <system>, on a network named <network>. I can't change options in these containers. I have created a Flask application that connects to the database and outputs the required information. To connect from my local computer I use:
conn = psycopg2.connect(
    database="db", user='user1', password='user1_passwd',
    host='<VM_ip>', port='<db_port>',
    sslmode='require',
    sslcert='./user1.crt',
    sslkey='./user1.key')
and it worked great.
But when I run my application on the same VM and specify
conn = psycopg2.connect(
    database="db", user='user1', password='user1_passwd',
    host='<postgres>.<network>', port='<db_port>',
    sslmode='require',
    sslcert='./user1.crt',
    sslkey='./user1.key')
I get an error:
psycopg2.OperationalError: could not parse network address "<postgres>.<network>": Name or service not known.
Local connections are allowed in pg_hba.conf; the problem is connecting from the new container on the VM.
Here are the settings of my new container:
version: '3'
services:
  app:
    container_name: app
    restart: always
    build: ./app
    ports:
      - "5000:5000"
    command: gunicorn -w 1 -b 0.0.0.0:8000 wsgi:server
I tried to make the same connection as from the local computer, specifying the VM_ip, but that didn't help either.
I also tried specifying the <postgres> container's IP instead of its name in host=, but this also caused an error.
Do you know what could be the problem?
First you need to create a network that the containers will use to communicate with each other:

docker network create example  # you can name it whatever you want

Then connect both containers to the network you made:

docker run -d --net example --name <postgres_container> <postgres_image>
docker run -d --net example --name <flask_container> <flask_image>
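Since you mentioned you can't change options in the existing containers, note that docker network connect attaches an already-running container to a network in place, without recreating it:

docker network connect example <postgres_container>
docker network connect example <flask_container>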
You can read more about Docker networking in the documentation here:
https://docs.docker.com/network/
From what I can see, you might be using a docker-compose file for the deployment of the services. You can add one more top-level section, next to services, where you define the network that the deployed services are supposed to use. That network also needs to be referenced in each service definition; this lets the internal DNS engine that docker-compose creates in the background discover all the services on the network by their service names.
A bridge network may be a good driver to use here.
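A minimal sketch of what that could look like for your app container, using the placeholders from your question; since <network> already exists (the postgres container runs on it), it is declared as external rather than created by this file:

version: '3'
services:
  app:
    container_name: app
    restart: always
    build: ./app
    ports:
      - "5000:5000"
    command: gunicorn -w 1 -b 0.0.0.0:8000 wsgi:server
    networks:
      - <network>

networks:
  <network>:
    external: true

With this in place, the app container should be able to reach the database with host='<postgres>' (the container name) instead of '<postgres>.<network>'.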
You can use the following links for a better understanding of networks in docker-compose.
https://docs.docker.com/compose/compose-file/compose-file-v3/#network
https://docs.docker.com/compose/compose-file/compose-file-v3/#networks
I have Redis and MySQL each running as a separate container.
How can I connect to MySQL and to Redis from my app container?
I need an IP, but how do I find it?
Found it!
Add a link to the redis container. For example, with a service named 'redis_container':

links:
  - redis_container:redis

so other containers can use 'redis' as the host name.
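On current Compose versions you don't need links at all; a sketch of the same setup relying on the default network, where every service is reachable by its service name (image tags and the password here are just examples):

version: '3'
services:
  app:
    build: .
    depends_on:
      - redis
      - mysql
  redis:
    image: redis:latest
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=example_passwd

Here the app container can simply connect to the hosts 'redis' and 'mysql'; no IP lookup is needed.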
I have a docker-compose.yml file that works as expected when I execute docker-compose up in its parent directory.
My problem is that it's an old-version compose file, and I need to integrate its containers into another compose file. The old file has the following structure:
service1:
...
service2:
...
While the target docker-compose.yml has the following structure:
version: '2.3'
services:
  service1:
    ...
  service2:
    ...
So, my problem is that the old-version file relies on the links parameter. I don't quite understand what its function is. I've found little documentation online, and all the docs say is that links are replaced by networks. Good, but what is the function of links? How can I replace it so that I don't use soon-to-be-deprecated features?
Links are not required to enable services to communicate - by default, any service can reach any other service at that service’s name.
You can just delete the links section of the old-version docker-compose file and access the services from other containers by their names.
You can optionally define networks to control which services are visible to which others, by placing them on the same network. E.g.:
networks:
  my_network_1:
    driver: bridge
  my_network_2:
    driver: bridge

services:
  service_1:
    networks:
      - my_network_1
  service_2:
    networks:
      - my_network_1
  service_3:
    networks:
      - my_network_2
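With this layout, service_1 and service_2 can reach each other by name, while service_3, which is only on my_network_2, cannot reach either of them.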
Question:
Can anybody with access to the host machine connect to a Docker network, or are services running within a Docker network visible only to other services on that network, assuming the ports are not exposed?
Background:
I currently have a web application with a PostgreSQL database backend, where both components run through Docker on the same machine, and only the web app exposes ports on the host. The web app has no trouble connecting to the db, as they are on the same Docker network. I was considering removing the password from my database user so that I don't have to store it on the host and pass it into the web-app container as a secret. Before I do that, I want to ascertain how secure the Docker network is.
Here is a sample of my docker-compose:
version: '3.3'
services:
  database:
    image: postgres:9.5
    restart: always
    volumes:
      # preserves the database between containers
      - /var/lib/my-web-app/database:/var/lib/postgresql/data
  web-app:
    image: my-web-app
    depends_on:
      - database
    ports:
      - "8080:8080"
      - "8443:8443"
    restart: always
    secrets:
      - source: DB_USER_PASSWORD
secrets:
  DB_USER_PASSWORD:
    file: /secrets/DB_USER_PASSWORD
Any help is appreciated.
On a native Linux host, anyone who has or can find the container-private IP address can directly contact the container. (Unprivileged prodding around with ifconfig can give you some hints that it's there.) On non-Linux there's typically a hidden Linux VM, and if you can get a shell in that, the same trick works. And of course if you can run any docker command then you can docker exec a shell in the container.
Docker's network-level protection isn't strong enough to be the only thing securing your database. Using standard username-and-password credentials is still required.
(Note that the docker exec path is especially powerful: since the unencrypted secrets are ultimately written into a path in the container, being able to run docker exec means you can easily extract them. Restricting docker access to root only is also good security practice.)
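For instance, on a Linux host (a sketch; the actual container name depends on how Compose names it):

# find the database container's private IP on its Docker network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <database_container>
# connect to it directly -- no published port is required
psql -h <ip_from_above> -U postgres

Anything on the host that can discover that IP can do the same, which is why the database should keep its password.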
I'm new to using Go to build microservices. I had the whole project up and running locally, but when I tried deploying it I ran into a problem: the session I was working with (mgo.Dial("localhost")) no longer worked. When I put this into a Docker image, it failed to connect to localhost, which makes sense, since the image runs on a new OS (Alpine in my case). I was wondering what I should do to get it to connect.
To be clear: when I researched this, most people wanted to connect to a MongoDB session that is itself a Docker container; I want to connect to a MongoDB session from within a Docker container. Also, once I'm ready for deployment I'll be using a StatefulSet with Kubernetes, if that changes anything.
For example, this is what I want my program to be like:
sess, err := mgo.Dial("localhost") // or whatever
if err != nil {
    fmt.Println("failed to connect")
} else {
    fmt.Println("connected")
}
What I tried doing:
Dockerfile:
FROM alpine:3.6
COPY /build/app /bin/
EXPOSE 8080
ENTRYPOINT ["/bin/app"]
In terminal:
docker build -t hell:4 .
docker run -d -p 8080:8080 hell:4
And as you'd expect, it fails to connect. Also, the port mapping is for the rest of the project, not this part.
Thanks for your help!
I think you should not try to connect to the MongoDB server running on your machine. Think about deploying the whole application later on: you will want a MongoDB server running together with your service on some cloud or server.
That problem can be solved by setting up an additional container and linking it to your Go web app. Docker Compose can handle this: just place a docker-compose.yml file in the directory in which you run your docker build.
version: '3'
services:
  myapp:
    build: .
    image: hell:4
    ports:
      - "8080:8080"
    links:
      - mongodb
    depends_on:
      - mongodb
  mongodb:
    image: mongo:latest
    ports:
      - "27017:27017"
    environment:
      - MONGODB_USER="user"
      - MONGODB_PASS="pass"
Something like this should do it (not tested). You have two services: one for your app, which gets built according to the Dockerfile in your current directory, and which links to a service called mongodb defined below. The mongodb service is accessible via the service name mongodb.
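On the Go side, the only change would then be dialing the service name instead of localhost (same caveat: a sketch, not tested):

sess, err := mgo.Dial("mongodb") // the compose service name, resolved by Docker's internal DNS
if err != nil {
    fmt.Println("failed to connect")
} else {
    fmt.Println("connected")
}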
If your MongoDB server is running on your host machine instead, replace localhost with your host's IP.