I have this docker-compose configuration:
version: '3'
services:
  selenoid:
    image: "aerokube/selenoid:latest"
    container_name: selenoid
    ports:
      - "0.0.0.0:4444:4444"
    networks:
      - selenoid
    volumes:
      - ".:/etc/selenoid"
      - "./target:/output"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "./target:/opt/selenoid/video"
    environment:
      - "OVERRIDE_VIDEO_OUTPUT_DIR=$PWD/target"
    command: ["-limit", "10", "-conf", "/etc/selenoid/browsers.json", "-video-output-dir", "/opt/selenoid/video", "-container-network", "selenoid"]
  selenoid-ui:
    image: "aerokube/selenoid-ui:latest"
    container_name: selenoid-ui
    links:
      - selenoid
    ports:
      - "8083:8080"
    networks:
      - selenoid
    command: ["--selenoid-uri", "http://selenoid:4444"]
  chrome_79.0:
    image: "selenoid/vnc:chrome_79.0"
    container_name: chrome_79.0
    links:
      - selenoid
      - selenoid-ui
    depends_on:
      - selenoid
      - selenoid-ui
    networks:
      - selenoid
    volumes:
      - "/dev/shm:/dev/shm"
  sent:
    image: "sent:1.0"
    container_name: sent
    links:
      - selenoid
      - selenoid-ui
    depends_on:
      - selenoid
      - selenoid-ui
    networks:
      - selenoid
networks:
  selenoid:
    external:
      name: selenoid
sent is a container that runs TestNG tests, but it looks like it cannot find the localhost URL.
If I check the open ports I see this:
chrome_79.0 /entrypoint.sh Up 4444/tcp
selenoid /usr/bin/selenoid -listen ... Up 0.0.0.0:4444->4444/tcp
selenoid-ui /selenoid-ui --selenoid-ur ... Up (health: starting) 0.0.0.0:8083->8080/tcp
And I can access localhost:4444 from the host, but even though the container is on the same network, it cannot access localhost.
Any ideas?
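Note that inside a container, localhost refers to that container itself, not to the host, so the tests in sent would need to target Selenoid by its service name, e.g. http://selenoid:4444/wd/hub. A minimal sketch, assuming the TestNG suite reads its hub URL from an environment variable (SELENIUM_URL is a hypothetical name):

  sent:
    image: "sent:1.0"
    networks:
      - selenoid
    environment:
      # hypothetical variable; the test code would pass it to RemoteWebDriver
      - SELENIUM_URL=http://selenoid:4444/wd/hub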
Well, I changed your docker-compose file a little and it works =)
1. Remove chrome_79.0 and everything under it, because you can pull the Chrome image directly:
docker pull selenoid/vnc:chrome_79.0
2. Remove all your "networks" sections (and the -container-network option from the command section):
networks:
  - selenoid
and add this instead (don't add it in the command section):
network_mode: bridge
i.e.
image: "aerokube/selenoid:latest"
network_mode: bridge
.
.
.
image: "aerokube/selenoid-ui:latest"
network_mode: bridge
3. Before recording video, I suggest you first see how it works without recording (a consolidated sketch of the trimmed file follows after step 6)...
Just change the volumes section.
I use a Mac, and in my situation it looks like this:
volumes:
  - "$PWD:/etc/selenoid"
  - "/var/run/docker.sock:/var/run/docker.sock"
  - "$PWD:/opt/selenoid/logs"
4. Execute:
docker-compose up
docker pull selenoid/vnc:chrome_79.0
5. See the list of images and containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ef60d57bc743 aerokube/selenoid-ui:latest "/selenoid-ui --sele…" 2 hours ago Up 2 hours (healthy) 0.0.0.0:8083->8080/tcp selenoid-ui
05bf58b73f53 aerokube/selenoid:latest "/usr/bin/selenoid -…" 2 hours ago Up 2 hours 0.0.0.0:4444->4444/tcp selenoid
REPOSITORY TAG IMAGE ID CREATED SIZE
aerokube/selenoid-ui latest 71ad7fb4efa7 5 days ago 17.5MB
aerokube/selenoid latest b8f47d114751 11 days ago 16.3MB
selenoid/vnc chrome_65.0 dff07a4cfe6d 2 years ago 959MB
6. Open:
http://localhost:8083
Have fun!
PS: If it's not working, ask me directly.
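Putting steps 1-3 together, the trimmed file might look roughly like this (a sketch assembled from the fragments above, not a verified config; the ports and the -limit/-conf flags are carried over from the question):

version: '3'
services:
  selenoid:
    image: "aerokube/selenoid:latest"
    container_name: selenoid
    network_mode: bridge
    ports:
      - "4444:4444"
    volumes:
      - "$PWD:/etc/selenoid"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "$PWD:/opt/selenoid/logs"
    command: ["-limit", "10", "-conf", "/etc/selenoid/browsers.json"]
  selenoid-ui:
    image: "aerokube/selenoid-ui:latest"
    container_name: selenoid-ui
    network_mode: bridge
    links:
      - selenoid
    ports:
      - "8083:8080"
    command: ["--selenoid-uri", "http://selenoid:4444"]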
I'm trying to set up a local development environment with Docker Compose that has a MongoDB cluster as the database. I chose Mongo Express as the database admin user interface so I can check inside the MongoDB database. It does take some time for the cluster to accept connections; I have the 3 db containers in depends_on, but it seems I have to do more than that, based on the Docker Compose documentation here. I can't seem to find a good example of waiting for MongoDB clusters. Has anyone figured this out already? Please share, that would be great. Thank you in advance!
Here's the docker-compose.yml file:
version: '3.9'
services:
  mongodb-primary:
    image: 'bitnami/mongodb:latest'
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-primary
      - MONGODB_REPLICA_SET_MODE=primary
      - MONGODB_ROOT_PASSWORD=password
      - MONGODB_REPLICA_SET_KEY=replicasetkey
    volumes:
      - 'mongodb_master_data:/bitnami'
  mongodb-secondary:
    image: 'bitnami/mongodb:latest'
    depends_on:
      - mongodb-primary
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-secondary
      - MONGODB_REPLICA_SET_MODE=secondary
      - MONGODB_INITIAL_PRIMARY_HOST=mongodb-primary
      - MONGODB_INITIAL_PRIMARY_PORT_NUMBER=27017
      - MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD=password
      - MONGODB_REPLICA_SET_KEY=replicasetkey
  mongodb-arbiter:
    image: 'bitnami/mongodb:latest'
    depends_on:
      - mongodb-primary
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-arbiter
      - MONGODB_REPLICA_SET_MODE=arbiter
      - MONGODB_INITIAL_PRIMARY_HOST=mongodb-primary
      - MONGODB_INITIAL_PRIMARY_PORT_NUMBER=27017
      - MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD=password
      - MONGODB_REPLICA_SET_KEY=replicasetkey
  dbadmin:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    depends_on:
      - mongodb-primary
      - mongodb-secondary
      - mongodb-arbiter
    environment:
      ME_CONFIG_MONGODB_URL: mongodb://root:password@mongodb-primary:27017,mongodb-secondary:27017,mongodb-arbiter:27017?replicaSet=replicaset
      ME_CONFIG_BASICAUTH_USERNAME: admin
      ME_CONFIG_BASICAUTH_PASSWORD: mexpress
volumes:
  mongodb_master_data:
    driver: local
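One pattern worth trying here (my addition, not from the post): give the primary a healthcheck and gate mongo-express on it with the long-form depends_on, which current Docker Compose releases accept. A sketch, assuming the bitnami image ships mongosh (older tags may only have the mongo shell):

  mongodb-primary:
    # ...same as above, plus:
    healthcheck:
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')", "--quiet"]
      interval: 10s
      timeout: 5s
      retries: 10
  dbadmin:
    # ...same as above, but with:
    depends_on:
      mongodb-primary:
        condition: service_healthy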
I have tried many configurations and scenarios based around this, which is mostly a tutorial that stops at one Ghost instance. I am trying to scale it to 2 with docker-compose up -d --scale ghost=2. When I hit the individual IPs of the Ghost containers, they work, but port 80 returns a 503.
version: "3.1"
volumes:
mysql-volume:
ghost-volume:
networks:
ghost-network:
services:
mysql:
image: mysql:5.7
container_name: mysql
volumes:
- mysql-volume:/var/lib/mysql
networks:
- ghost-network
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: db
MYSQL_USER: blog-user
MYSQL_PASSWORD: supersecret
ghost:
build: ./ghost
image: laminar/ghost:3.0
volumes:
- ghost-volume:/var/lib/ghost/content
networks:
- ghost-network
restart: always
ports:
- "2368"
environment:
database__client: mysql
database__connection__host: mysql
database__connection__user: blog-user
database__connection__password: supersecret
database__connection__database: db
depends_on:
- mysql
entrypoint: ["wait-for-it.sh", "mysql", "--", "docker-entrypoint.sh"]
command: ["node", "current/index.js"]
haproxy:
image: eeacms/haproxy
depends_on:
- ghost
ports:
- "80:5000"
- "1936:1936"
environment:
BACKENDS: "ghost"
DNS_ENABLED: "true"
LOG_LEVEL: "info"
What I get on localhost:80 is a 503 error. The particular eeacms/haproxy image is supposed to be self-configuring. Any help appreciated.
I needed to add a backend URL to the environment and also tell Ghost it was installed in an alternate location by adding URL: localhost:5050.
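In compose terms, that second part would look roughly like this (a sketch; Ghost reads its public address from the url key, and the 5050 value follows the answer above):

  ghost:
    environment:
      # public URL Ghost believes it is served from, per the answer
      url: http://localhost:5050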
I have implemented a microservice architecture with several servers and databases. I have installed Elasticsearch with Docker, and when I do docker-compose up, everything seems to run fine.
However, I would like to integrate Elasticsearch with the several databases (2 MongoDB instances in the sample below) in the system. How do I sync the two MongoDB instances in two different containers with Elasticsearch so that I can search them?
client:
  container_name: client
  stdin_open: true
  build:
    context: ./client
    dockerfile: Dockerfile
  restart: always
  volumes:
    - './client:/app'
  ports:
    - '1000:3000'
  environment:
    - NODE_ENV=development
    - CHOKIDAR_USEPOLLING=true
weatherdb:
  container_name: weather-db
  image: mongo
  restart: always
  ports:
    - '2002:27017'
  volumes:
    - ./weather_service/weather_db:/data/db
  networks:
    - backend
weather-service:
  container_name: weather-service
  build: ./weather_service
  restart: always
  ports:
    - "1002:3000"
  depends_on:
    - weatherdb
  links:
    - elasticsearch
  networks:
    - backend
newsdb:
  container_name: news-db
  image: mongo
  restart: always
  ports:
    - '2003:27017'
  volumes:
    - ./news_service/news_db:/data/db
  networks:
    - backend
news-service:
  container_name: news-service
  build: ./news_service
  restart: always
  ports:
    - "1003:3000"
  depends_on:
    - newsdb
  links:
    - elasticsearch
  networks:
    - backend
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
  container_name: elasticsearch
  restart: always
  ports:
    - 9200:9200
    - 9300:9300
  environment:
    ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    network.bind_host: 0.0.0.0
    network.host: 0.0.0.0
    discovery.type: single-node
  volumes:
    - ./elasticsearch/esdata:/usr/share/elasticsearch/data
  networks:
    - backend
It's very simple to just add an Elasticsearch section to any docker-compose file and start it; these are all independent Docker containers, and as long as their exposed host ports don't interfere with each other and you have the correct configuration in place, it should work.
Please refer to the Elasticsearch multi-Docker installation documentation for more info.
NOTE: You have not mentioned what exact issue you are facing; you have said that everything, i.e. all Docker containers, is running fine, so please explain in detail what exactly you are trying to solve.
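To make the sync part of the question concrete: one commonly used approach (my suggestion, not from the answer) is a change-stream tailing tool such as monstache between MongoDB and Elasticsearch. A rough sketch of an extra service, assuming the rwynn/monstache image and noting that change streams require MongoDB to run as a replica set, which the plain mongo containers above are not configured for:

monstache:
  image: rwynn/monstache
  restart: always
  environment:
    # requires a replica-set MongoDB; a standalone mongod has no change streams
    MONSTACHE_MONGO_URL: mongodb://weatherdb:27017
    MONSTACHE_ES_URLS: http://elasticsearch:9200
  depends_on:
    - weatherdb
    - elasticsearch
  networks:
    - backend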
I'm trying to access my Mongo database on Docker with adminmongo.
Here's my docker-compose.yml
version: '3'
services:
  mongo:
    image: mongo
    volumes:
      - ~/data:/data/db
    restart: always
    expose:
      - 6016
  adminmongo:
    image: mrvautin/adminmongo
    expose:
      - 1234
    links:
      - mongo:mongo
When I do a docker-compose up everything works fine, and adminmongo also returns me this: adminmongo_1_544d9a6f954c | adminMongo listening on host: http://localhost:1234
But when I go to localhost:1234 my browser tells me this page doesn't exist.
Here's what docker ps returns me:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9c27d4a89254 mrvautin/adminmongo "/bin/sh -c 'node ap…" 38 seconds ago Up 33 seconds 1234/tcp iris_adminmongo_1_544d9a6f954c
2a7496a8c56a mongo "docker-entrypoint.s…" 40 minutes ago Up 38 seconds 6016/tcp, 27017/tcp iris_mongo_1_7f00356a3adc
I've found 2 issues here:
1st: Exposing a port is not enough. expose is just documentation; you need to publish (bind) a port to the host for it to be reachable. This is how it's done:
ports:
  - 1234:1234
2nd: You have to configure adminmongo to listen on 0.0.0.0, because by default it listens on 127.0.0.1, which makes it accessible only inside the container itself. The Configuration section of the documentation page you've included in your question states that this can be done by passing an environment variable:
All the above parameters are usable through the environment, which makes it very handy when using adminMongo as a docker container! Just run docker run -e HOST=yourchoice -e PORT=1234 ...
Since you are using docker-compose, this is done by the following:
environment:
  - HOST=0.0.0.0
Working example:
version: '3'
services:
  mongo:
    image: mongo
    volumes:
      - ~/data:/data/db
    restart: always
    expose:
      - 6016
  adminmongo:
    image: mrvautin/adminmongo
    ports:
      - 1234:1234
    environment:
      - HOST=0.0.0.0
Example of a docker-compose file that works:
version: '3'
services:
  server:
    container_name: docker_api_web_container
    image: docker_api_web
    build: .
    volumes:
      - ./src:/usr/src/node-app/src
      - ./package.json:/usr/src/node-app/package.json
    environment:
      - ENV=DEVELOPMENT
      - PORT=4010
    ports:
      - '9000:4010'
    depends_on:
      - 'mongo'
  mongo:
    container_name: docker_mongo_container
    image: 'mongo'
    ports:
      - '27017:27017'
  adminmongo:
    container_name: docker_adminmongo_container
    image: mrvautin/adminmongo
    links: ['mongo:mongo']
    environment:
      - HOST=0.0.0.0
    ports:
      - '1234:1234'
You have to publish your service's port to the outside world like this:
version: '3'
services:
  mongo:
    image: mongo
    volumes:
      - ~/data:/data/db
    restart: always
  adminmongo:
    image: mrvautin/adminmongo
    ports:
      - 1234:1234
Now you can access adminmongo at http://localhost:1234.
And you don't have to use links here, since Compose creates a network and joins all services in the compose file; you can access other containers by their service names.
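For example, anything running inside the adminmongo container can reach the database with a connection string that uses the service name (while localhost there would point back at adminmongo itself):

mongodb://mongo:27017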
Here is my docker compose file:
version: "3.3"
services:
test:
image: test
networks:
- mongo_net
ports:
- 4000:80
depends_on:
- mongodb
links:
- mongodb
mongodb:
image: mongo:latest
networks:
- mongo_net
ports:
- 27017:27017
volumes:
- local_data:/data/db
volumes:
local_data:
networks:
mongo_net:
driver: bridge
The 'test' image cannot find the 'mongodb' instance.
My assumption is that the 'links' section would connect the two, but it is not happening.
What am I missing?
For your compose file, try just using depends_on. links is deprecated, and maybe that's why you are currently getting issues, since this is a v3 compose file.
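A sketch of the test service without links (assuming the app inside the test image connects with the service name, e.g. mongodb://mongodb:27017, rather than localhost):

  test:
    image: test
    networks:
      - mongo_net
    ports:
      - 4000:80
    depends_on:
      - mongodb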