I'm unable to get a network alias working. I've tried everything, but haven't been able to get it to work.
I have two services:
legacy-service
actualService - actualService calls legacyService through an API, and the configuration we've set up makes it call legacy-service.dev/test/preprod/prod.com.
When I spin up the docker-compose file below, both legacy-service and actualService spin up, but when actualService calls legacy-service, it actually calls the one on our dev/test server and not the locally spun-up legacy-service, even though I have the alias.
How do I make my locally spun-up actualService talk to my locally spun-up legacy-service with that alias?
version: '3.7'
services:
  # legacyService component
  legacyService:
    environment:
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_REGION: ap-southeast-2
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
      AWS_SESSION_TOKEN: ${AWS_SESSION_TOKEN}
      JAVA_OPTS: -Djava.awt.headless=true -Xms484M -Xmx968M -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=768M -XX:CompressedClassSpaceSize=171M -XX:+UnlockDiagnosticVMOptions -Xdebug -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=n
    image: legacy-service
    networks:
      net:
        aliases:
          - legacy-service.xyz.com
    ports:
      - 9090:8080
      - 9000:8000
  production-build:
    build:
      context: ../
      dockerfile: build/Dockerfile
    image: component-api-production-build
    entrypoint: 'echo "overriding Dockerfile ENTRYPOINT, this will terminate this service, we only need the built image"'
  # actual-service
  development-build:
    depends_on:
      - production-build
      - legacyService
    image: component-api-dev-build
    build:
      context: ../
      dockerfile: local-dev/Dockerfile
    ports:
      - 8080:8080
      - 8000:8000
    environment:
      AWS_REGION: ${AWS_REGION}
      AWS_DYNAMODB_TABLE_PROPERTY: ${AWS_DYNAMODB_TABLE_PROPERTY}
      JAVA_OPTS: -Xdebug -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=n
    networks:
      - net
networks:
  net:
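For the alias to take effect, the caller must use that exact hostname and the container port, not the host-mapped one. A quick way to narrow this down from inside the calling container (service names as above; getent may be absent from slim images, in which case nslookup or ping works too):

# Does the alias resolve to the local container on the shared "net" network?
docker-compose exec development-build getent hosts legacy-service.xyz.com

# If it resolves, check the app's configured URL: it must use this exact
# hostname and the container port (8080, not the host-mapped 9090);
# otherwise the call falls through to the real DNS name on the dev server.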
I made two YAML files. When I run docker-compose -f postgresql.yml up it starts OK, but then when I run docker-compose -f postgresql2.yml up, the first one exits with code 0.
Is it even possible to run the same image twice?
My main purpose is to run the same web app source twice, each instance with its own database, on the same server PC (1 web app source, 2 instances, each with its own DB, on one server, if that is a clearer definition).
Maybe there is a better approach and I am doing and thinking about everything the wrong way.
# This configuration is intended for development purpose, it's **your** responsibility to harden it for production
version: '3.8'
services:
  freshhipster-postgresql:
    image: postgres:13.1
    environment:
      - POSTGRES_USER=FreshHipster
      - POSTGRES_PASSWORD=
      - POSTGRES_HOST_AUTH_METHOD=trust
    # If you want to expose these ports outside your dev PC,
    # remove the "127.0.0.1:" prefix
    ports:
      - 5432:5432
and this one is not much different:
postgresql2.yml
# This configuration is intended for development purpose, it's **your** responsibility to harden it for production
version: '3.8'
services:
  freshhipster-postgresql:
    image: postgres:13.1
    container_name: postgres2
    volumes:
      - pgdata:/var/lib/postgresql/data_vol2/
    environment:
      - POSTGRES_USER=FreshHipster
      - POSTGRES_PASSWORD=
      - POSTGRES_HOST_AUTH_METHOD=trust
    # If you want to expose these ports outside your dev PC,
    # remove the "127.0.0.1:" prefix
    ports:
      - 5433:5432
volumes:
  pgdata:
    external: true
Just use another service name, freshhipster-postgresql2, in postgresql2.yml:
version: '3.8'
services:
  freshhipster-postgresql2:
    image: postgres:13.1
    container_name: postgres2
    volumes:
      - pgdata:/var/lib/postgresql/data_vol2/
    environment:
      - POSTGRES_USER=FreshHipster
      - POSTGRES_PASSWORD=
      - POSTGRES_HOST_AUTH_METHOD=trust
    # If you want to expose these ports outside your dev PC,
    # remove the "127.0.0.1:" prefix
    ports:
      - 5433:5432
volumes:
  pgdata:
    external: true
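The first container likely exits because both files use the same service name and, run from the same directory, the same default project name, so the second up replaces the first container. Renaming the service (as above) fixes it; an alternative sketch is to leave the files untouched and give each run its own project name:

# Two independent stacks from the same files, isolated by project name
docker-compose -p stack1 -f postgresql.yml up -d
docker-compose -p stack2 -f postgresql2.yml up -d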
Cannot compose up my multi-container app in the ecs context. In the default context it runs. Where am I going wrong?
$ docker compose up
mysql:8.0.23 resolved to docker.io/library/mysql:8.0.23#sha256:...
mongo:4.4.3-bionic resolved to docker.io/library/mongo:4.4.3-bionic#sha256:...
**invalid reference format**
My docker-compose file:
version: '3.8'
services:
  app:
    container_name: tsb-app
    build:
      context: ..
      dockerfile: .docker/Dockerfile
    depends_on:
      - mongo
      - mysql
    volumes:
      - tsb_modules:/node_modules
    expose:
      - "80"
  mongo:
    container_name: tsb-mongo
    image: mongo:4.4.3-bionic
    environment:
      MONGO_INITDB_ROOT_USERNAME: asdasdasd
      MONGO_INITDB_ROOT_PASSWORD: asdasdasd
    volumes:
      - tsb_data:/data/db
  mysql:
    container_name: tsb-mysql
    image: mysql:8.0.23
    environment:
      MYSQL_ROOT_PASSWORD: asdasdasd
      MYSQL_USER: asdasdasd
      MYSQL_PASSWORD: asdasdasd
    volumes:
      - tsb_logs:/var/lib/
      # Create the required schemas and tables on first start
      - ./setup.sql:/docker-entrypoint-initdb.d/setup.sql
    ports:
      - 3300:3306
volumes:
  tsb_data:
  tsb_logs:
  tsb_modules:
My .docker/Dockerfile:
FROM node:15.8.0-alpine3.10
WORKDIR /app
COPY package.json .yarnrc yarn.lock prod.env ./.docker/setup.sql ./
RUN yarn install --production --frozen-lockfile
COPY build/ ./build
CMD node build/index.js
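The error is likely tied to the app service: in an ECS context the Docker CLI cannot build images, so every service needs an image: reference it can pull from a registry (and container_name is not supported there either). A minimal sketch of the usual workaround, using a placeholder ECR repository URL:

# Build locally, then push to a registry ECS can pull from.
# The repository URL below is a placeholder; substitute your account/region/repo.
docker build -f .docker/Dockerfile -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/tsb-app:latest ..
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/tsb-app:latest

Then point the app service at that tag with image: instead of relying on build: alone.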
I am trying to set up a Docker stack with Laravel, MariaDB, nginx, Redis and phpMyAdmin. The Laravel webspace works fine, but if I switch to port 10081, as configured in the docker-compose.yml, I am not able to log in with the root account.
It says "mysqli::real_connect(): php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution".
I already tried to configure a "my-network" which links all of the containers, but if I understand Docker correctly there is already a "default" network which does this. It didn't change the error message anyway.
Here is my full docker-compose file:
version: "3.8"
services:
redis:
image: redis:6.0-alpine
expose:
- "6380"
db:
image: mariadb:10.4
ports:
- "3307:3306"
environment:
MYSQL_USERNAME: root
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: laravel
volumes:
- db-data:/var/lib/mysql
nginx:
image: nginx:1.19-alpine
build:
context: .
dockerfile: ./docker/nginx.Dockerfile
restart: always
depends_on:
- php
ports:
- "10080:80"
networks:
- default
environment:
VIRTUAL_HOST: cockpit.example.de
volumes:
- ./docker/nginx.conf:/etc/nginx/nginx.conf:ro
- ./public:/app/public:ro
php:
build:
target: dev
context: .
dockerfile: ./docker/php.Dockerfile
working_dir: /app
env_file: .env
restart: always
expose:
- "9000"
depends_on:
- composer
- redis
- db
volumes:
- ./:/app
- ./docker/www.conf:/usr/local/etc/php-fpm.d/www.conf:ro
links:
- db:mysql
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
ports:
- 10081:80
restart: always
environment:
PMA_HOST : db
MYSQL_USERNAME: root
MYSQL_ROOT_PASSWORD: secret
depends_on:
- db
#user: "109:115"
links:
- db:mysql
node:
image: node:12-alpine
working_dir: /app
volumes:
- ./:/app
command: sh -c "npm install && npm run watch"
composer:
image: composer:1.10
working_dir: /app
#environment:
#SSH_AUTH_SOCK: /ssh-auth.sock
volumes:
- ./:/app
#- "$SSH_AUTH_SOCK:/ssh-auth.sock"
- /etc/passwd:/etc/passwd:ro
- /etc/group:/etc/group:ro
command: composer install --ignore-platform-reqs --no-scripts
volumes:
db-data:
Make sure you have defined all attributes correctly for the phpmyadmin container; in this case the network definition was missing:
phpmyadmin:
  image: phpmyadmin/phpmyadmin:latest
  container_name: phpmyadmin
  restart: always
  ports:
    # 8080 is the host port and 80 is the docker port
    - 8080:80
  environment:
    # list-style environment entries use "=", not ":"
    - PMA_ARBITRARY=1
    - PMA_HOST=mysql
    - MYSQL_USERNAME=root
    - MYSQL_ROOT_PASSWORD=secret
  depends_on:
    - mysql
  networks:
    # define your network where all containers are connected to each other
    - laravel
  volumes:
    # directory where you store phpmyadmin's persistent data and config files
    # (/etc/phpmyadmin is the image's config directory; adjust as needed)
    - ./docker/phpmyadmin:/etc/phpmyadmin
Maybe your container cannot start because its volume contains incompatible data. This can happen if you downgrade the version of the mysql or mariadb image.
You can resolve the problem if you remove the volume and import the database again. You may have to create a backup first.
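A concrete sketch of that reset (the project name myproject is a placeholder; compose prefixes volume names with it):

# Stop the stack; back up the data first if it matters
docker-compose down

# Remove the incompatible named volume (here: db-data from the file above)
docker volume rm myproject_db-data

# Recreate everything; the database initializes a fresh data directory
docker-compose up -d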
I am using the docker-compose up command to spin up a few containers on an AWS AMI RHEL 7.6 instance. I observe that whichever containers have a volume mount exit with status Exited (1) immediately after starting, while the remaining containers stay up. I tried using tty: true and stdin_open: true, but it didn't help. Surprisingly, the set-up works fine on another instance, which is basically what I am trying to replicate in this new one.
The stopped containers are Fabric v1.2 peers, CAs and orderer.
docker-compose.yml file, which is in the root folder where I run the docker-compose up command:
version: '2.1'
networks:
  gcsbc:
    name: gcsbc
services:
  ca.org1.example.com:
    extends:
      file: fabric/docker-compose.yml
      service: ca.org1.example.com
fabric/docker-compose.yml
version: '2.1'
networks:
  gcsbc:
services:
  ca.org1.example.com:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org1
      - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
    ports:
      - '7054:7054'
    command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
    volumes:
      - ./artifacts/channel/crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerorg1
    networks:
      - gcsbc
    hostname: ca.org1.example.com
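Since only the bind-mounted containers exit, two things worth checking on the new instance are what the containers log on exit and whether the mounted host paths exist and are populated there at all (on RHEL, SELinux blocking bind mounts is another common culprit). A hedged checklist:

# What did the CA print before it exited?
docker logs ca_peerorg1

# Is the host side of the bind mount present and populated on this instance?
ls -l artifacts/channel/crypto-config/peerOrganizations/org1.example.com/ca/

# If SELinux is enforcing, bind-mounted volumes may need the ":z" option
getenforce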
When I use docker compose the application runs perfectly; however, when I use docker run nothing happens.
I have a REST API (Express & MongoDB) behind an nginx proxy_pass.
Dockerfile:
FROM node:8-alpine
EXPOSE 3000
ARG NODE_ENV
ENV NODE_ENV $NODE_ENV
RUN mkdir /app
WORKDIR /app
ADD package.json yarn.lock /app/
RUN yarn --pure-lockfile
ADD . /app
CMD ["yarn", "start"]
Docker compose:
version: "2"
services:
api:
build: .
environment:
- NODE_ENV=production
command: yarn start
volumes:
- .:/app
ports:
- "3000:3000"
tty: true
depends_on:
- mongodb
restart: always
nginx:
image: nginx
depends_on:
- api
ports:
- "80:80"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
depends_on:
- mongodb
restart: always
mongodb:
image: mongo
ports:
- "27017:27017"
restart: always
"When I use docker compose the application runs perfectly; however, when I use docker run nothing happens."
That seems expected, since docker run runs a single image, as opposed to docker compose, which runs a multi-container Docker application.
You need all of the containers to run, started in the right order, for anything to happen.
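For illustration, a rough docker run equivalent of the compose file above; the network name, container names and the api-image tag are placeholders, and nginx.conf is assumed to proxy to api:3000:

# User-defined network so containers can reach each other by name
docker network create app-net

# Start the database first, matching the compose start order
docker run -d --name mongodb --network app-net -p 27017:27017 mongo

# Build and start the API from the Dockerfile above
docker build -t api-image .
docker run -d --name api --network app-net -p 3000:3000 -e NODE_ENV=production api-image

docker run -d --name nginx --network app-net -p 80:80 \
  -v "$PWD/nginx/nginx.conf:/etc/nginx/nginx.conf:ro" nginx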