We have a couple of Docker images that run an Apache2 server, and we scrape data from them using Prometheus:
Docker-compose file (minimal):
version: "3.3"
services:
noteable:
image: naas/noteable
ports:
- 9118:9117
proxy:
image: /naas/noteable_proxy
ports:
- 80:8080
- 9117:9117
In this format, both services expose Prometheus data on port 9117, which is then mapped to different ports on the host.
If I try to scale the noteable service, I get a port clash:
[naas@naas-dev ~]$ docker-compose up --no-recreate --scale noteable=2
Starting naas_proxy_1 ...
WARNING: The "noteable" service specifies a port on the host. If multiple containers for this service are created on a single host, the port will clash.
Starting naas_noteable_1 ... done
Creating naas_noteable_2 ... error
ERROR: for naas_noteable_2 Cannot start service noteable: driver failed programming external connectivity on endpoint naas_noteable_2 (47ff958b454ce75887ea4a2b8f1b42f8618dc04f3911fa9190fb27129443728a): Bind for 0.0.0.0:9118 failed: port is already allocated
ERROR: for noteable Cannot start service noteable: driver failed programming external connectivity on endpoint naas_noteable_2 (47ff958b454ce75887ea4a2b8f1b42f8618dc04f3911fa9190fb27129443728a): Bind for 0.0.0.0:9118 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
(which is kinda obvious, really)
Question: Is there some clever way to do 9117+:9117 (ie, "map to 9117, or the next available port thereafter")?
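One option that comes close (a sketch, assuming the scrape targets can be discovered after the fact) is to publish only the container port and let Docker pick a free ephemeral host port for each replica:
ports:
  - "9117"
Each scaled container then gets its own random host port, which can be looked up with docker port naas_noteable_1 9117 and fed into the Prometheus scrape configuration.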
On a virtual machine I have two Docker containers named <postgres> and <system> running on a network named <network>. I can't change options in these containers. I have created a Flask application that connects to the database and outputs the required information. To connect from my local computer I use
import psycopg2

conn = psycopg2.connect(
    database="db", user='user1', password='user1_passwd',
    host='<VM_ip>', port='<db_port>',
    sslmode='require',
    sslcert='./user1.crt',
    sslkey='./user1.key')
and it worked great.
But, when I run my application on the same VM and specify
conn = psycopg2.connect(
    database="db", user='user1', password='user1_passwd',
    host='<postgres>.<network>', port='<db_port>',
    sslmode='require',
    sslcert='./user1.crt',
    sslkey='./user1.key')
I get an error:
psycopg2.OperationalError: could not parse network address "<postgres>.<network>": Name or service not known.
Local connections are allowed in pg_hba.conf; the problem is connecting from the new container on the VM.
Here are the settings of my new container:
version: '3'
services:
  app:
    container_name: app
    restart: always
    build: ./app
    ports:
      - "5000:5000"
    command: gunicorn -w 1 -b 0.0.0.0:8000 wsgi:server
I tried to make the same connection as from the local computer, specifying the VM_ip, but that didn't help either.
I also tried to specify the <postgres> container IP instead of its name in host=, but this also caused an error.
Do you know what could be the problem?
You need to create a network first, which the containers will use to communicate. You can do that with:
docker network create example   # you can name it whatever you want
Then you need to connect both containers to the network you made:
docker run -d --net example --name <postgres_container> <postgres_image>
docker run -d --net example --name <flask_container> <flask_image>
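If the containers are already running and can't be recreated (as in the question), an existing container can also be attached to the network after the fact:
docker network connect example <postgres_container>
docker network connect example <flask_container>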
You can read more about the docker network in its documentation here:
https://docs.docker.com/network/
From what I can see, you might be using a docker-compose file to deploy the services. In that case you can add one more top-level layer above the services, where you define the network that the deployed services should use. The network also needs to be referenced in each service definition; this lets the internal DNS engine that docker-compose creates in the background discover all the services on the network by their service names, as sketched below.
A bridge network is a good driver to use here.
You can use the following links for a better understanding of networks in docker-compose:
https://docs.docker.com/compose/compose-file/compose-file-v3/#network
https://docs.docker.com/compose/compose-file/compose-file-v3/#networks
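As a rough sketch of that layout (the network name appnet here is illustrative, and external points it at the pre-existing network the <postgres> container is on):
version: '3'
services:
  app:
    build: ./app
    ports:
      - "5000:5000"
    networks:
      - appnet
networks:
  appnet:
    external:
      name: <network>
With this, the app container joins <network> and can reach the database at the <postgres> container's name through Docker's internal DNS.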
I am trying to deploy an app in a Code Engine project. The container image is pretty standard: docker.io/library/httpd. All I did in the configuration wizard is to change the port from Code Engine default 8080 to port 80.
Code Engine comes back with:
Revision failed to start with "exit code 1". Check your image and configuration.
In the logs I found these two lines:
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80
(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
Why?
I don't know the answer to your question "why" for certain, except I see people on Stack Overflow mention that ports below 1024 are privileged and can only be bound by root, and the container in Code Engine apparently doesn't run as root. I could run my httpd locally on port 80, but in IBM Code Engine I had to change to 8080.
This is how I managed to get it running:
I edited the httpd.conf as this post implies:
"There is a hint on how to do this at the DockerHub page. An alternative config file must be obtained and added to the container via the Dockerfile.
First get a copy of the config file:
docker run --rm httpd:2.4 cat /usr/local/apache2/conf/httpd.conf > my-httpd.conf
Then edit the my-httpd.conf file and modify the port:
Listen 8080
Finally add to the Dockerfile the instruction to copy it:
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf"
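Putting the quoted steps together, a minimal Dockerfile would look like this (assuming my-httpd.conf, with Listen 8080, sits next to it):
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
Since 8080 is an unprivileged port, the non-root container can bind it, and it matches Code Engine's default port.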
Question:
Can anybody with access to the host machine connect to a Docker network, or are services running within a Docker network visible only to other services on that network, assuming the ports are not published?
Background:
I currently have a web application with a postgresql database backend where both components are being run through docker on the same machine, and only the web app is exposing ports on the host machine. The web-app has no trouble connecting to the db as they are in the same docker network. I was considering removing the password from my database user so that I don't have to store the password on the host and pass it into the web-app container as a secret. Before I do that I want to ascertain how secure the docker network is.
Here is a sample of my docker-compose:
version: '3.3'
services:
  database:
    image: postgres:9.5
    restart: always
    volumes:
      # preserves the database between containers
      - /var/lib/my-web-app/database:/var/lib/postgresql/data
  web-app:
    image: my-web-app
    depends_on:
      - database
    ports:
      - "8080:8080"
      - "8443:8443"
    restart: always
    secrets:
      - source: DB_USER_PASSWORD
secrets:
  DB_USER_PASSWORD:
    file: /secrets/DB_USER_PASSWORD
Any help is appreciated.
On a native Linux host, anyone who has or can find the container-private IP address can directly contact the container. (Unprivileged prodding around with ifconfig can give you some hints that it's there.) On non-Linux there's typically a hidden Linux VM, and if you can get a shell in that, the same trick works. And of course if you can run any docker command then you can docker exec a shell in the container.
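For example, on a Linux host (the container name database_1 below is illustrative), anyone with shell access could do:
# look up the container's private IP on the Compose network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' database_1
# then connect to the unpublished port directly; 172.18.0.2 stands in
# for whatever address the first command printed
psql -h 172.18.0.2 -U postgres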
Docker's network-level protection isn't strong enough to be the only thing securing your database. Using standard username-and-password credentials is still required.
(Note that the docker exec path is especially powerful: since the unencrypted secrets are ultimately written into a path in the container, being able to run docker exec means you can easily extract them. Restricting docker access to root only is also good security practice.)
I have a development environment using docker-compose, it has 5 services:
db (postgresql)
redis
celery
celery-beat
web (a django web app - development is occurring here)
In development, I run the top four in containers with
docker-compose up db redis celery celery-beat
These four containers can connect to each other no problem.
However, while I work on the web app I need to run it locally so I can get live updates and debug. Running locally, though, the web app can't connect to the containers unless I map their ports to the host, e.g.:
db:
  ports:
    - 5432:5432
so that my locally running web app can connect with them.
However, if I then push my code to github, TravisCI fails it with this error:
ERROR: for db Cannot start service db: b'driver failed programming external connectivity on endpoint hackerspace_db_1 (493e7fb9e53f551b3b1eea35f9e2baf5725e9077fc642d8121891cab31b34373): Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use'
ERROR: Encountered errors while bringing up the project.
The command "docker-compose run web sh -c "python src/manage.py test src/"" exited with 1.
TravisCI passes without the port mapping, but I have to develop with port mapping.
How can I get around this so that they both work with the same settings? I'm willing to try different workflows, as I'm new to docker and containers and trying to find my way around.
I've tried:
Developing in a container with Visual Studio Code's Remote - Containers extension, but there doesn't seem to be a way to view the debug log/output
Finding a parameter to add to the docker-compose up ... that would map the ports when I run them, but there doesn't seem to be an option for this.
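One workflow that may get around this is Compose's override mechanism: keep the CI-safe configuration in docker-compose.yml and put the host port mappings in a docker-compose.override.yml, which docker-compose up merges in automatically but which you keep out of the repository (e.g. via .gitignore):
# docker-compose.override.yml (local only, not committed)
version: '3'
services:
  db:
    ports:
      - 5432:5432
TravisCI then builds from the base file alone, so no host ports are bound, while a local docker-compose up still publishes 5432 for the web app.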
I'm working on a very basic (I thought) starter program in Go using MongoDB and Docker. Trying to get a handle on these before we start using them at work.
I've got MongoDB running in a Docker container on my local host, using the official Docker image. This is running fine; I can connect to it through MongoDB Compass and modify the DB.
My next task was to build a separate Docker container that is able to read and write to the DB. I'm using the MongoDB Go driver (https://godoc.org/github.com/mongodb/mongo-go-driver/mongo) for this, as mgo is no longer maintained.
This is my code; I'm just following the numerous tutorials online to make a simple connection and then ping the DB to ensure connectivity.
// Assumes the imports "context", "fmt", "log", and
// "github.com/mongodb/mongo-go-driver/mongo".
// In this driver version, Connect takes a context as its first argument.
client, err := mongo.Connect(context.TODO(), "mongodb://localhost:27017")
if err != nil {
    log.Fatal("error ", err)
}
// Check the connection
err = client.Ping(context.TODO(), nil)
if err != nil {
    log.Fatal("error2 ", err)
}
fmt.Println("Connected to MongoDB!")
It always fails on any operation on the DB (Find, FindOne, Ping, etc.) with the error: error2 server selection timeout
This is my docker-compose file I'm running.
version: "3"
services:
datastore:
image: mongo
ports:
- "27017:27017"
networks:
- maccaptionNet
volumes:
- .:/go/src/maccaption_microservice/dbdata
jobservice:
image: jobservicemaccaption:1.0
networks:
- maccaptionNet
depends_on:
- "datastore"
networks:
maccaptionNet:
driver: bridge
I'm brand new to MongoDB and after hours of research haven't made any progress on this.
I've read through https://docs.mongodb.com/manual/core/read-preference-mechanics/
https://docs.mongodb.com/manual/replication/
Can anyone point me in the right direction for this? I haven't been able to find a lot on this specific issue.
Thanks!
When you run the service and MongoDB in Docker you can't use localhost, since the service is in a different container than MongoDB, and from Docker's point of view it is at a different IP address.
You can connect using the service name you specified in docker-compose, datastore:
mongo.Connect(context.TODO(), "mongodb://datastore:27017")
Edit:
from: https://docs.docker.com/compose/networking/
By default Compose sets up a single network for your app. Each
container for a service joins the default network and is both
reachable by other containers on that network, and discoverable by
them at a hostname identical to the container name
Meaning that if you run multiple containers via Compose, you can access one container from another by its container name.
Basically, when docker-compose starts it sets up the network, and each container in the Compose project joins that network under its container name. From a container's point of view, localhost is just the container itself, while it can look up other containers' names and get back their IP addresses.
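You can check this resolution from inside a container, for example (assuming the image ships the usual name-lookup tools):
docker-compose exec jobservice getent hosts datastore
which should print the private IP the datastore service received on the Compose network.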
Assuming that Docker is running on your localhost, you can map the name in your /etc/hosts file like this:
127.0.0.1 datastore
(if not, replace 127.0.0.1 with the Docker host's IP)
And in the app you will connect with mongodb://datastore:27017
This way you will be able to run the service both inside Docker and outside it, if you decide to run only the db in Docker:
docker-compose start datastore
If you are connecting from one container to another (as written in your docker-compose file, using the bridge network mode), you have to change localhost to the service's hostname, like datastore:
client, err := mongo.Connect(context.TODO(), "mongodb://datastore:27017")
When your Go script uses localhost, it expects the database to be located in the same container.
I think my answer might be unrelated, but still: I was getting the same error and it was because my IP address was not listed in the IP whitelist tab in MongoDB Atlas, so make sure you have your IP address there before trying to connect.
I had the same problem but found another way to address the issue. You can pass a network parameter when running the Docker image; with host networking the container shares the host's network stack, so localhost points where you expect:
docker run --network="host" ....
Source for this solution
Somehow I fixed this problem in a different way: by changing the ports from "27018:27017" to "27017:27017".
I don't know why this helps. Maybe if Mongo sees a non-default port it thinks there is a cluster of Mongo nodes.
I got this problem when I tried to connect to mongodb v4.0.10 with:
pymongo==4.0.2: did not work
pymongo==3.12.3: worked
Check your packages; mongodb v5.0.2 works with pymongo==4.0.2.
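A quick way to confirm which driver version is installed before connecting (a sketch using pip):
pip show pymongo
The Version: line tells you whether you are on the 3.x or the 4.x series.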