I am not using container orchestration techniques like Docker Swarm or Kubernetes.
My goal is to replicate the service ca0 with a different IP:Port listening for requests. This is required to maintain High Availability (HA) of the services provided by ca0. The compose file for ca0 is shown below:
version: '2'

networks:
  byfn:

services:
  ca0:
    image: hyperledger/fabric-ca:$IMAGE_TAG
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org1
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/key.pem
      - FABRIC_CA_SERVER_PORT=7054
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/key.pem -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca0_peerOrg1
    networks:
      - byfn
To replicate the same service, I named the new service ca1 (container name ca1_peerOrg1), changed only the port, and appended it after the ca0 service defined above. The ca1 service is defined as below:
# Replicated CA
ca1:
  image: hyperledger/fabric-ca:$IMAGE_TAG
  environment:
    - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
    - FABRIC_CA_SERVER_CA_NAME=ca-org1
    - FABRIC_CA_SERVER_TLS_ENABLED=true
    - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
    - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/key.pem
    - FABRIC_CA_SERVER_PORT=8054
  ports:
    - "8054:8054"
  command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/key.pem -b admin:adminpw -d'
  volumes:
    - ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
  container_name: ca1_peerOrg1
  networks:
    - byfn
I then ran the compose file and got this error:
gopal@gopal:~/Dappdev/first/fabric-samples/first-network$ docker-compose -f docker-compose-ca.yaml up
ERROR: The Compose file './docker-compose-ca.yaml' is invalid because:
Invalid top-level property "ca1". Valid top-level sections for this Compose file are: services, version, networks, volumes, and extensions starting with "x-".
You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g "2.2" or "3.3") and place your service definitions under the `services` key, or omit the `version` key and place your service definitions at the root of the file to use version 1.
For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/
Evidently my approach to replicating the container is not right: appending ca1 that way left it at the top level of the file instead of nesting it under the services key.
How can I replicate a container so that it listens on a different selected port?
You can make use of YAML anchor and merge syntax:
services:
  ca0: &name-me # <- this is an anchor
    image: hyperledger/fabric-ca:$IMAGE_TAG
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org1
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/key.pem
      - FABRIC_CA_SERVER_PORT=7054
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/key.pem -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca0_peerOrg1
    networks:
      - byfn
  ca1:
    <<: *name-me # <- this is a merge (<<) with an alias (*name-me)
    # keys below the merge notation override those declared under the anchor,
    # so this:
    ports:
      - "8054:7054"
    # replaces the default 'ports' (the whole key is replaced, not merged)
    # container_name must be overridden as well, otherwise both services
    # would try to create a container named ca0_peerOrg1
    container_name: ca1_peerOrg1
The alias *name-me pulls in everything declared under the anchor &name-me, and the merge notation (<<:) lets you override selected properties of the anchor (ports and container_name in this example). I take it that you can't use Swarm or Kubernetes, but note this is not a great way to manage replicas; go with Swarm or K8s if you can.
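To verify what the merge resolves to, you can render the fully expanded configuration; a quick sanity check, assuming the file is saved as docker-compose-ca.yaml:

docker-compose -f docker-compose-ca.yaml config

This prints the effective file with all anchors and merges applied, so you can confirm that ca1 inherited every key from ca0 except the overridden ports and container_name.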
Related
I'm trying to parse my docker-compose file using yq (the Go implementation from https://github.com/mikefarah/yq) to auto-generate documentation in AsciiDoc.
My docker-compose.yml looks fairly simple and does nothing out of the ordinary:
---
version: "3.3"
services:
  # prometheus metrics
  node_exporter:
    image: prom/node-exporter:latest
    container_name: node_exporter
    labels:
      description: Prometheus exporter to monitor system metrics
    restart: always
    command:
      - --path.rootfs=/host
    pid: host
    network_mode: host
    # ports:
    #   - 9100:9100
    # The network_mode: host tells docker to run the container as if it was running on the
    # server itself, so all ports exposed by the container will directly be mapped to the server.
    volumes:
      - /:/host:ro,rslave
      - /etc/timezone:/etc/timezone:ro
  # prometheus metrics
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    restart: always
    expose:
      - 9110
    ports:
      - 9110:8080
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
  # Manage containers
  portainer:
    image: portainer/portainer-ce:alpine
    container_name: portainer
    command: -H unix:///var/run/docker.sock --admin-password-file /tmp/portainer_passwords
    restart: always
    ports:
      - 9990:9000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
      - ./assets/portainer.passwd:/tmp/portainer_passwords
      - /etc/timezone:/etc/timezone:ro
volumes:
  portainer_data:
I want some information for each service. Most important to me are the image, container_name, restart, and ports, plus maybe labels -> description, which is a field I use for some documentation (what the respective service actually does).
I don't know how to get these fields combined per service. When I run yq eval '.services.[] | .container_name, .services.[] | .image' $composeFile I first get three lines with the container names and then three lines with the images.
node_exporter
cadvisor
portainer
prom/node-exporter:latest
gcr.io/cadvisor/cadvisor:latest
portainer/portainer-ce:alpine
This result is not grouped by service. I'd prefer something like this:
node_exporter
prom/node-exporter:latest
cadvisor
gcr.io/cadvisor/cadvisor:latest
portainer
portainer/portainer-ce:alpine
Or, since I want to generate AsciiDoc, the perfect solution would be this:
|node_exporter |prom/node-exporter:latest
|cadvisor |gcr.io/cadvisor/cadvisor:latest
|portainer |portainer/portainer-ce:alpine
This way I can generate the body of an asciidoc table with information on my services for my documentation.
Anyone got an idea how I can get yq to work as I intend?
Each .services.[] starts a new iteration. Do it once and extract all you need from there:
yq eval '.services.[] | "|" + .container_name + "| " + .image' docker-compose.yml
|node_exporter| prom/node-exporter:latest
|cadvisor| gcr.io/cadvisor/cadvisor:latest
|portainer| portainer/portainer-ce:alpine
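The same pattern extends to the other fields you mentioned; a sketch assuming yq v4, using the // alternative operator to supply defaults for services where a field is absent (node_exporter has no ports entry, and only it carries a description label):

yq eval '.services.[] | "|" + .container_name + " |" + .image + " |" + (.restart // "") + " |" + ((.ports // []) | join(", ")) + " |" + (.labels.description // "")' docker-compose.yml

Each service then yields one AsciiDoc table row with container name, image, restart policy, ports, and description.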
I made two YAML files. When I run docker-compose -f postgresql.yml up, it starts OK, but when I then run docker-compose -f postgresql2.yml up, the first one exits with code 0.
Is it even possible to run the same image twice?
My main purpose is to run the same web app source twice, each instance with its own database, on the same server PC (in other words: one web app source, two instances, each with its own DB, on one server).
Maybe there is a better approach and I'm thinking about this the wrong way.
postgresql.yml:
# This configuration is intended for development purpose, it's **your** responsibility to harden it for production
version: '3.8'
services:
  freshhipster-postgresql:
    image: postgres:13.1
    environment:
      - POSTGRES_USER=FreshHipster
      - POSTGRES_PASSWORD=
      - POSTGRES_HOST_AUTH_METHOD=trust
    # If you want to expose these ports outside your dev PC,
    # remove the "127.0.0.1:" prefix
    ports:
      - 5432:5432
and postgresql2.yml, which is not much different:
# This configuration is intended for development purpose, it's **your** responsibility to harden it for production
version: '3.8'
services:
  freshhipster-postgresql:
    image: postgres:13.1
    container_name: postgres2
    volumes:
      - pgdata:/var/lib/postgresql/data_vol2/
    environment:
      - POSTGRES_USER=FreshHipster
      - POSTGRES_PASSWORD=
      - POSTGRES_HOST_AUTH_METHOD=trust
    # If you want to expose these ports outside your dev PC,
    # remove the "127.0.0.1:" prefix
    ports:
      - 5433:5432
volumes:
  pgdata:
    external: true
Just use another service name, freshhipster-postgresql2, in postgresql2.yml:
version: '3.8'
services:
  freshhipster-postgresql2:
    image: postgres:13.1
    container_name: postgres2
    volumes:
      - pgdata:/var/lib/postgresql/data_vol2/
    environment:
      - POSTGRES_USER=FreshHipster
      - POSTGRES_PASSWORD=
      - POSTGRES_HOST_AUTH_METHOD=trust
    # If you want to expose these ports outside your dev PC,
    # remove the "127.0.0.1:" prefix
    ports:
      - 5433:5432
volumes:
  pgdata:
    external: true
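The underlying issue is that Compose identifies containers by project name plus service name, and the project name defaults to the directory name. Two files run from the same directory that both define freshhipster-postgresql therefore describe the same container, so the second up takes over from the first. Renaming the service avoids the collision; alternatively, you can keep the service names identical and separate the projects explicitly with the standard -p flag (a sketch):

docker-compose -p app1 -f postgresql.yml up -d
docker-compose -p app2 -f postgresql2.yml up -d

Each project then gets its own containers and default network, which also matches the goal of running one web app source twice with a database each. Note that a volume marked external is shared across projects, so each instance should point at its own volume.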
Currently I have set up my service like the following:
version: '3'
services:
  gateway:
    container_name: gateway
    image: node:lts-alpine
    working_dir: /
    volumes:
      - ./package.json:/package.json
      - ./tsconfig.json:/tsconfig.json
      - ./services/gateway:/services/gateway
      - ./packages:/packages
      - ./node_modules:/node_modules
    env_file: .env
    command: yarn run ts-node-dev services/gateway --colors
    ports:
      - 3000:3000
So I have specified one env_file, but now I want to pass multiple .env files. Unfortunately, the following is not possible:

env_files:
  - .env.secrets
  - .env.development

Is there any way to pass multiple .env files to one service in docker-compose?
You can specify multiple env files on the env_file option (without s).
For instance:
version: '3'
services:
  hello:
    image: alpine
    entrypoint: ["sh"]
    command: ["-c", "env"]
    env_file:
      - a.env
      - b.env
Note that, complementary to @conradkleineespel's answer, if an environment variable is defined in multiple .env files listed under env_file, the value found in the last file in the list overwrites all prior ones (tested with a compose file of version: '3.7').
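A quick way to see that precedence, using the hello service above (the two env files and their variable names are made up for illustration):

# a.env
GREETING=hello from a
ONLY_IN_A=1

# b.env
GREETING=hello from b

Running docker-compose up hello then prints ONLY_IN_A=1 and GREETING=hello from b: variables from both files are injected, and the later file in the list wins for the duplicated key.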
So currently I can run docker-compose up test, which only runs my database and my testing scripts. I want to be able to use, say, docker-compose up app, or something like that, that runs everything besides testing. That way I'm not running unnecessary containers. I'm not sure if there's a way, but that's what I was wondering. If possible, I'd appreciate some links to examples that already do this, and I can figure out the rest. Basically: can I run only certain containers with a single command, without running the others?
The YAML:
version: '3'
services:
  webapp:
    build: ./literate-app
    command: nodemon -e vue,js,css start.js
    depends_on:
      - postgres
    links:
      - postgres
    environment:
      - DB_HOST=postgres
    ports:
      - "3000:3000"
    networks:
      - literate-net
  server:
    build: ./readability-server
    command: nodemon -L --inspect=0.0.0.0:5555 server.js
    networks:
      - literate-net
  redis_db:
    image: redis:alpine
    networks:
      - literate-net
  postgres:
    restart: 'always'
    #image: 'bitnami/postgresql:latest'
    volumes:
      - /bitnami
    ports:
      - "5432:5432"
    networks:
      - literate-net
    environment:
      - "FILLA_DB_USER=my_user"
      - "FILLA_DB_PASSWORD=password123"
      - "FILLA_DB_DATABASE=my_database"
      - "POSTGRES_PASSWORD=password123"
    build: './database-creation'
  test:
    image: node:latest
    build: ./test
    working_dir: /literate-app/test
    volumes:
      - .:/literate-app
    command:
      npm run mocha
    networks:
      - literate-net
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres
networks:
  literate-net:
    driver: bridge
I can run docker-compose up test, which runs only the test container and its postgres dependency. But I'd like to be able to run just my app without also having to run my testing container.
Edit
Thanks to @ideam for the link. I was able to create an additional YAML file just for testing.
For those that don't want to look it up: simply create a new YAML file, e.g.
docker-compose.dev.yml
Replace dev with whatever you like, except override, which docker-compose up picks up automatically unless told otherwise. To run the new file, simply call
docker-compose -f docker-compose.dev.yml up
The -f flag selects a specific file to run. You can combine multiple files to set up different environments; a sketch of such a file follows below.
Appreciate the help
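For illustration, such a dev file might simply repeat the main file minus the test service (a sketch; the elided service bodies mirror the ones above):

version: '3'
services:
  webapp:
    # ... as in the main file
  server:
    # ... as in the main file
  redis_db:
    # ... as in the main file
  postgres:
    # ... as in the main file
networks:
  literate-net:
    driver: bridge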
docker-compose up <service_name> will start only the service you have specified and its dependencies (those specified in the depends_on option).
You may also name multiple services in the docker-compose up command:
docker-compose up <service_name> <service_name>
Note: what does "start the service and its dependencies" actually mean?
Usually your production services (containers) are attached to each other via the depends_on chain, so you only need to start the last container of the chain. For example, take the following compose file:
version: '3.7'
services:
  frontend:
    image: efrat19/vuejs
    ports:
      - "80:8080"
    depends_on:
      - backend
  backend:
    image: nginx:alpine
    depends_on:
      - fpm
  fpm:
    image: php:7.2
  testing:
    image: hze∂ƒxhbd
    depends_on:
      - frontend
All the services are chained through the depends_on option, with the testing container hanging off the frontend. So when you hit docker-compose up frontend, Docker will start fpm first, then backend, then frontend, and it will ignore the testing container, which is not required for running the frontend.
Starting with docker-compose 1.28.0, the new service profiles are made just for this! With profiles you can mark services to be started only in specific profiles:
services:
  webapp:
    # ...
  server:
    # ...
  redis_db:
    # ...
  postgres:
    # ...
  test:
    profiles: ["test"]
    # ...
docker-compose up # start only your app services
docker-compose --profile test up # start app and test services
docker-compose run test # run test service
Maybe you want to share your docker-compose.yml for a better answer than this.
For reusing docker-compose configurations have a look at https://docs.docker.com/compose/extends/#example-use-case which explains the combination of multiple configuration files for reuse of configs for different use cases (test, production, etc.)
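A minimal sketch of that combination pattern, assuming the test service is moved out of the base docker-compose.yml into its own file (the name docker-compose.test.yml is an assumption):

# docker-compose.test.yml
version: '3'
services:
  test:
    build: ./test
    networks:
      - literate-net
    depends_on:
      - postgres

docker-compose up                                                     # app services only
docker-compose -f docker-compose.yml -f docker-compose.test.yml up   # app plus tests

Later -f files are merged on top of earlier ones, so the test service exists only when its file is included.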
I am developing a service using docker-compose, and I deploy the containers to a remote host using these commands:
eval $(docker-machine env digitaloceanserver)
docker-compose build && docker-compose stop && docker-compose rm -f && docker-compose up -d
My problem is that I'm changing laptops. I exported the docker-machines to the new laptop and I can activate them, but when I try to deploy new changes it raises these errors:
Creating postgres ... error
Creating redis ... error

ERROR: for postgres  Cannot create container for service postgres: b'Conflict. The container name "/postgres" is already in use by container "612f3887544224ae79f67e29552b4d97e246104b8a057b3a03d39f6546dbbd38". You have to remove (or rename) that container to be able to reuse that name.'

ERROR: for redis  Cannot create container for service redis: b'Conflict. The container name "/redis" is already in use by container "01875947f0ce7ba3978238525923e54e0c800fa0a4b419dd2a28cc07c285eb78". You have to remove (or rename) that container to be able to reuse that name.'

ERROR: Encountered errors while bringing up the project.
My docker-compose.yml is this:
services:
  nginx:
    build: './docks/nginx/.'
    ports:
      - '80:80'
      - "443:443"
    volumes:
      - letsencrypt_certs:/etc/nginx/certs
      - letsencrypt_www:/var/www/letsencrypt
    volumes_from:
      - web:ro
    depends_on:
      - web
  letsencrypt:
    build: './docks/certbot/.'
    command: /bin/true
    volumes:
      - letsencrypt_certs:/etc/letsencrypt
      - letsencrypt_www:/var/www/letsencrypt
  web:
    build: './sources/.'
    image: 'websource'
    ports:
      - '127.0.0.1:8000:8000'
    env_file: '.env'
    command: 'gunicorn cuidum.wsgi:application -w 2 -b :8000 --reload --capture-output --enable-stdio-inheritance --log-level=debug --access-logfile=- --log-file=-'
    volumes:
      - 'cachedata:/cache'
      - 'mediadata:/media'
    depends_on:
      - postgres
      - redis
  celery_worker:
    image: 'websource'
    env_file: '.env'
    command: 'python -m celery -A cuidum worker -l debug'
    volumes_from:
      - web
    depends_on:
      - web
  celery_beat:
    container_name: 'celery_beat'
    image: 'websource'
    env_file: '.env'
    command: 'python -m celery -A cuidum beat --pidfile= -l debug'
    volumes_from:
      - web
    depends_on:
      - web
  postgres:
    container_name: 'postgres'
    image: 'mdillon/postgis'
    ports:
      - '127.0.0.1:5432:5432'
    volumes:
      - 'pgdata:/var/lib/postgresql/data/'
  redis:
    container_name: 'redis'
    image: 'redis:3.2.0'
    ports:
      - '127.0.0.1:6379:6379'
    volumes:
      - 'redisdata:/data'
volumes:
  pgdata:
  redisdata:
  cachedata:
  mediadata:
  staticdata:
  letsencrypt_certs:
  letsencrypt_www:
You're seeing those errors because you're explicitly setting container_name:, and those names are already taken by containers left over from your previous deployments on that host. Remove those explicit settings; you don't need them even for inter-container DNS, because Docker Compose automatically creates a network alias for each service using the name of its service block.
There are still potential port conflicts to watch for: if another PostgreSQL container is listening on the same (default) host port 5432, the one declared in this docker-compose.yml will conflict with it. You might be able to simply not publish your database containers' ports, or you may need to change the host port numbers in this file.
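If the leftover containers on the remote host are disposable, one straightforward fix is to remove them by name before bringing the project up again (standard Docker CLI; this destroys those containers, so only do it if their data lives in the named volumes, as pgdata and redisdata do here):

eval $(docker-machine env digitaloceanserver)
docker rm -f postgres redis celery_beat

After that, docker-compose up -d can create fresh containers, and dropping the container_name: settings prevents the collision from recurring.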