I've dockerized a NestCloud app with Consul and a bunch of microservices.
However, I'm unable to start any of the microservices (for example, test-service below), as it seems Consul is refusing any TCP connection:
[Nest] 1 - 11/23/2021, 9:47:34 PM [NestFactory] Starting Nest application...
[Nest] 1 - 11/23/2021, 9:50:56 PM [ConfigModule] Unable to initial ConfigModule, retrying...
[Nest] 1 - 11/23/2021, 9:47:34 PM [InstanceLoader] ServiceRegistryModule dependencies initialized +114ms
[Nest] 1 - 11/23/2021, 9:47:34 PM [InstanceLoader] LoggerModule dependencies initialized +0ms
[Nest] 1 - 11/23/2021, 9:47:34 PM [InstanceLoader] BootModule dependencies initialized +0ms
[Nest] 1 - 11/23/2021, 9:47:34 PM [InstanceLoader] HttpModule dependencies initialized +6ms
[Nest] 1 - 11/23/2021, 9:47:34 PM [InstanceLoader] AppModule dependencies initialized +2ms
[Nest] 1 - 11/23/2021, 9:47:34 PM [InstanceLoader] ConsulModule dependencies initialized +0ms
[Nest] 1 - 11/23/2021, 9:47:34 PM [ConfigModule] Unable to initial ConfigModule, retrying... +39ms
Error: consul: kv.get: connect ECONNREFUSED 127.0.0.1:8500
Here's my docker-compose.yaml:
version: "3.2"
services:
test-service:
build:
context: .
dockerfile: apps/test-service/Dockerfile
args:
NODE_ENV: development
image: "test-service:latest"
restart: always
depends_on:
- consul
environment:
- CONSUL_HOST=consul
- DISCOVERY_HOST=localhost
ports:
- 50054:50054
consul:
container_name: consul
ports:
- "8400:8400"
- "8500:8500"
- "8600:53/udp"
image: consul
command: ["agent", "-server", "-bootstrap", "-ui", "-client", "0.0.0.0"]
labels:
kompose.service.type: nodeport
kompose.service.expose: "true"
kompose.image-pull-policy: "Always"
The Dockerfile for my microservice test-service:
FROM node:12-alpine
ARG NODE_ENV=production
ENV NODE_ENV $NODE_ENV
RUN mkdir -p /usr/src/app
ADD . /usr/src/app
WORKDIR /usr/src/app
RUN yarn global add @nestjs/cli
RUN yarn install --production=false
# Build production files
RUN nest build test-service
# Bundle app source
COPY . .
EXPOSE 50054
CMD ["node", "dist/apps/test-service/main.js"]
And the bootstrap-development.yaml used by NestCloud to launch the microservice:
consul:
  host: localhost
  port: 8500
config:
  key: mybackend/config/${{ service.name }}
service:
  discoveryHost: localhost
  healthCheck:
    timeout: 1s
    interval: 10s
    tcp: ${{ service.discoveryHost }}:${{ service.port }}
  maxRetry: 5
  retryInterval: 5000
  tags: ["v1.0.0", "microservice"]
  name: io.mybackend.test.service
  port: 50054
loadbalance:
  ruleCls: RandomRule
logger:
  level: info
  transports:
    - transport: console
      level: debug
      colorize: true
      datePattern: YYYY-MM-DD h:mm:ss
      label: ${{ service.name }}
    - transport: file
      name: info
      filename: info.log
      datePattern: YYYY-MM-DD h:mm:ss
      label: ${{ service.name }}
      # 100M
      maxSize: 104857600
      json: false
      maxFiles: 10
    - transport: dailyRotateFile
      filename: info.log
      datePattern: YYYY-MM-DD-HH
      zippedArchive: true
      maxSize: 20m
      maxFiles: 14d
I can successfully ping the consul container from the microservice container with:
docker exec -ti test-service ping consul
Do you see anything wrong with my config? If so, can you please tell me how I can get it to work?
You're trying to access Consul via localhost from the NestJS service, but that service runs in another container, whose localhost is different from Consul's localhost. You need to reach Consul using the container name as the host.
Change the Consul host to ${{ CONSUL_HOST }} in bootstrap-development.yaml.
CONSUL_HOST is already defined in docker-compose.yaml: CONSUL_HOST=consul
The new bootstrap-development.yaml:
consul:
  host: ${{CONSUL_HOST}}
  port: 8500
...
And rebuild the containers with: docker-compose --project-directory=. -f docker-compose.dev.yml up --build
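Before rebuilding, you can also confirm that Consul answers over the compose network, not just to ping. A quick check from inside the microservice container (assuming wget is available, which it is in node:12-alpine via BusyBox):
docker exec -ti test-service wget -qO- http://consul:8500/v1/status/leader
If this prints the leader address instead of "connection refused", the service should be able to reach Consul under the consul hostname.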
Related
I want to run my app on my local machine within Docker. I don't want to optimize the size of my docker app or build it for production now.
Docker builds my backend-api app and postgres-db successfully, but I can only access the Docker Postgres database from outside Docker, e.g. with DBeaver installed on my computer. Also, if I start my backend-api app WITHOUT Docker using "npm run start", the app can access the database without any errors and can also write into the Postgres DB. Only when I build the backend-api with Docker and launch the app inside Docker do I get this error. Since this only happens within Docker, I assume that something important is missing in my Dockerfile.
My docker-compose.yml:
version: '3'
services:
  backend:
    build: .
    container_name: backend-api
    command: npm run start
    restart: unless-stopped
    ports:
      - 3000:3000
    volumes:
      - .:/usr/src/backend
    networks:
      - docker-network
    depends_on:
      - database
  database:
    image: postgres:latest
    container_name: backend-db
    ports:
      - 5432:5432
    volumes:
      - postgresdb/:/var/lib/postgresql/data/
    networks:
      - docker-network
    environment:
      POSTGRES_USER: devuser
      POSTGRES_PASSWORD: devpw
      POSTGRES_DB: devdb
volumes:
  postgresdb:
networks:
  docker-network:
    driver: bridge
My Dockerfile:
FROM node:16.15.0-alpine
WORKDIR /usr/src/backend
COPY . .
RUN npm install
sudo docker-compose up --build output:
Starting backend-db ... done
Recreating backend-api ... done
Attaching to backend-db, backend-api
backend-db |
backend-db | PostgreSQL Database directory appears to contain a database; Skipping initialization
backend-db |
backend-db | 2022-05-03 20:35:46.065 UTC [1] LOG: starting PostgreSQL 14.2 (Debian 14.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
backend-db | 2022-05-03 20:35:46.066 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
backend-db | 2022-05-03 20:35:46.066 UTC [1] LOG: listening on IPv6 address "::", port 5432
backend-db | 2022-05-03 20:35:46.067 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
backend-db | 2022-05-03 20:35:46.071 UTC [26] LOG: database system was shut down at 2022-05-03 20:35:02 UTC
backend-db | 2022-05-03 20:35:46.077 UTC [1] LOG: database system is ready to accept connections
backend-api |
backend-api | > backend@0.0.1 start
backend-api | > nodemon
backend-api |
backend-api | [nodemon] 2.0.15
backend-api | [nodemon] to restart at any time, enter `rs`
backend-api | [nodemon] watching path(s): src/**/*
backend-api | [nodemon] watching extensions: ts
backend-api | [nodemon] starting `IS_TS_NODE=true ts-node -r tsconfig-paths/register src/main.ts`
backend-api | [Nest] 30 - 05/03/2022, 8:35:50 PM LOG [NestFactory] Starting Nest application...
backend-api | [Nest] 30 - 05/03/2022, 8:35:50 PM LOG [InstanceLoader] TypeOrmModule dependencies initialized +73ms
backend-api | [Nest] 30 - 05/03/2022, 8:35:50 PM LOG [InstanceLoader] ConfigHostModule dependencies initialized +1ms
backend-api | [Nest] 30 - 05/03/2022, 8:35:50 PM LOG [InstanceLoader] AppModule dependencies initialized +0ms
backend-api | [Nest] 30 - 05/03/2022, 8:35:50 PM LOG [InstanceLoader] ConfigModule dependencies initialized +1ms
backend-api | [Nest] 30 - 05/03/2022, 8:35:50 PM ERROR [TypeOrmModule] Unable to connect to the database. Retrying (1)...
backend-api | Error: connect ECONNREFUSED 127.0.0.1:5432
backend-api | at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1187:16)
backend-api | [Nest] 30 - 05/03/2022, 8:35:53 PM ERROR [TypeOrmModule] Unable to connect to the database. Retrying (2)...
My ormconfig.ts:
import { ConnectionOptions } from 'typeorm';
const config: ConnectionOptions = {
  type: 'postgres',
  host: 'localhost',
  port: 5432,
  username: 'devuser',
  password: 'devpw',
  database: 'devdb',
  entities: [__dirname + '/**/*.entity{.ts,.js}'],
  synchronize: false,
  migrations: [__dirname + '/**/migrations/**/*{.ts,.js}'],
  cli: {
    migrationsDir: 'src/migrations',
  },
};
export default config;
Please let me know what other information you would like me to provide. I'm new to the Docker world.
When you run a Docker container, localhost refers to the container's own localhost, not your machine's, so it's not the right host to use. Since you're using docker-compose, a Docker network is created automatically and the service names act as hostnames. So instead of using localhost for your database host, use database, and the Docker network will route the request properly to the database service as defined in your docker-compose.yml file.
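For example, a minimal sketch of ormconfig.ts with the host made configurable, so the same file works inside Docker (host database) and outside it (host localhost). The DATABASE_HOST variable name is an assumption for illustration, not something from the original post:
import { ConnectionOptions } from 'typeorm';
const config: ConnectionOptions = {
  type: 'postgres',
  // DATABASE_HOST is an assumed env var: set it to 'database' in docker-compose.yml
  // (the compose service name) and leave it unset for local runs outside Docker.
  host: process.env.DATABASE_HOST || 'localhost',
  port: 5432,
  username: 'devuser',
  password: 'devpw',
  database: 'devdb',
  entities: [__dirname + '/**/*.entity{.ts,.js}'],
  synchronize: false,
};
export default config;
In docker-compose.yml this would be paired with an environment entry such as DATABASE_HOST=database under the backend service.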
Ran into the same issue, and this docker-compose.yml setup worked for me:
version: "3.8"
services:
database:
container_name: db
image: postgres:14.2
ports:
- "5432:5432"
environment:
- POSTGRES_HOST_AUTH_METHOD
backend:
container_name: api
build:
dockerfile: Dockerfile
context: .
restart: on-failure
depends_on:
- database
ports:
- "3000:3000"
environment:
- DATABASE_HOST=database
- DATABASE_PORT=5432
- DATABASE_USER=postgres
- DATABASE_PASSWORD
- DATABASE_NAME
- DATABASE_SYNCHRONIZE
- NODE_ENV
The following were the two main notable changes:
- Changed the host to database, so the Docker network routes the request properly to the database service as defined in your docker-compose.yml file.
- Added postgres as the default PostgreSQL user.
I am trying to deploy api-platform docker-compose images to Cloud Run. I have that working up to the point where the image starts to run. I have the system listening on port 8080, but I cannot turn off the HTTPS redirect, so it shuts down. Here is the message I receive:
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
run: loading initial config: loading new config: loading http app module: provision http: server srv0: setting up route handlers: route 0: loading handler modules: position 0: loading module 'subroute': provision http.handlers.subroute: setting up subroutes: route 0: loading handler modules: position 0: loading module 'subroute': provision http.handlers.subroute: setting up subroutes: route 1: loading handler modules: position 0: loading module 'mercure': provision http.handlers.mercure: a JWT key for publishers must be provided
I am using Cloud Build to build the container and deploy to Run. This is kicked off through Gitlab CI.
This is my cloudbuild.yaml file.
steps:
  - name: docker/compose
    args: ['build']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['tag', 'workspace_caddy:latest', 'gcr.io/$PROJECT_ID/apiplatform']
  - name: docker
    args: ['push', 'gcr.io/$PROJECT_ID/apiplatform']
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'erp-ui', '--image', 'gcr.io/$PROJECT_ID/apiplatform', '--region', 'us-west1', '--platform', 'managed', '--allow-unauthenticated']
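Note that the ${MERCURE_PUBLISHER_JWT_KEY:-!ChangeMe!} style defaults in the docker-compose.yml below only take effect when Compose itself starts the container; Cloud Run doesn't read the compose file, so the Caddy/Mercure image starts without those variables, which matches the "a JWT key for publishers must be provided" error above. As a rough, untested sketch (values are placeholders, not from the original setup), such variables could be supplied in the deploy step via gcloud's --set-env-vars flag:
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'erp-ui', '--image', 'gcr.io/$PROJECT_ID/apiplatform', '--region', 'us-west1', '--platform', 'managed', '--allow-unauthenticated', '--set-env-vars', 'MERCURE_PUBLISHER_JWT_KEY=<your-key>,MERCURE_SUBSCRIBER_JWT_KEY=<your-key>']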
docker-compose.yml
version: "3.4"
services:
php:
build:
context: ./api
target: api_platform_php
depends_on:
- database
restart: unless-stopped
volumes:
- php_socket:/var/run/php
healthcheck:
interval: 10s
timeout: 3s
retries: 3
start_period: 30s
pwa:
build:
context: ./pwa
target: api_platform_pwa_prod
environment:
API_PLATFORM_CLIENT_GENERATOR_ENTRYPOINT: http://caddy
caddy:
build:
context: api/
target: api_platform_caddy
depends_on:
- php
- pwa
environment:
PWA_UPSTREAM: pwa:3000
SERVER_NAME: ${SERVER_NAME:-localhost, caddy:8080}
MERCURE_PUBLISHER_JWT_KEY: ${MERCURE_PUBLISHER_JWT_KEY:-!ChangeMe!}
MERCURE_SUBSCRIBER_JWT_KEY: ${MERCURE_SUBSCRIBER_JWT_KEY:-!ChangeMe!}
restart: unless-stopped
volumes:
- php_socket:/var/run/php
- caddy_data:/data
- caddy_config:/config
ports:
# HTTP
- target: 8080
published: 8080
protocol: tcp
database:
image: postgres:13-alpine
environment:
- POSTGRES_DB=api
- POSTGRES_PASSWORD=!ChangeMe!
- POSTGRES_USER=api-platform
volumes:
- db_data:/var/lib/postgresql/data:rw
# you may use a bind-mounted host directory instead, so that it is harder to accidentally remove the volume and lose all your data!
# - ./api/docker/db/data:/var/lib/postgresql/data:rw
volumes:
php_socket:
db_data:
caddy_data:
caddy_config:
I have two servers, A and B. When I run Filebeat via docker-compose on server A, it works well, but on server B I get the following error:
pipeline/output.go:154 Failed to connect to backoff(async(tcp://logstash_ip:5044)): dial tcp logstash_ip:5044: connect: no route to host
So I think I'm missing some configuration on server B. How can I figure out the problem and fix it?
[Edited] Added filebeat.yml and docker-compose.
Notice: I ran Filebeat on server A and it failed, so I tested it on server B and it is still working. So I guess I have some problems with the server config.
filebeat.yml
logging.level: error
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /usr/share/filebeat/mylog/**/*.log
    processors:
      - decode_json_fields:
          fields: ['message']
          target: 'json'
output.logstash:
  hosts: ['logstash_ip:5044']
console.pretty: true
processors:
  - add_docker_metadata:
      host: 'unix:///host_docker/docker.sock'
docker-compose
version: '3.3'
services:
  filebeat:
    user: root
    container_name: filebeat
    image: docker.elastic.co/beats/filebeat:7.9.3
    volumes:
      - /var/run/docker.sock:/host_docker/docker.sock
      - /var/lib/docker:/host_docker/var/lib/docker
      - ./logs/progress:/usr/share/filebeat/mylog
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:z
    command: ['--strict.perms=false']
    ulimits:
      memlock:
        soft: -1
        hard: -1
    stdin_open: true
    tty: true
    network_mode: bridge
    deploy:
      mode: global
    logging:
      driver: 'json-file'
      options:
        max-size: '10m'
        max-file: '50'
Thanks in advance
Assumptions:
- The mentioned docker-compose file is for the Filebeat "concentration" server, which is running in Docker on server B.
- Both servers are in the same network space and/or are reachable from each other.
- Server B, as the Filebeat server, has the correct firewall settings to accept connections on port 5044 (check with telnet from server A after starting the container; see the example check right after this list).
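For example, a quick reachability check from server A (the IP placeholder matches the one used in the config below):
telnet <SERVER-B-IP> 5044
If telnet is not installed, nc -vz <SERVER-B-IP> 5044 performs the same TCP check.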
docker-compose (assuming server B)
version: '3.3'
services:
  filebeat:
    user: root
    container_name: filebeat
    ports: ##################
      - 5044:5044 # <- see open port
    image: docker.elastic.co/beats/filebeat:7.9.3
    volumes:
      - /var/run/docker.sock:/host_docker/docker.sock
      - /var/lib/docker:/host_docker/var/lib/docker
      - ./logs/progress:/usr/share/filebeat/mylog
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:z
    command: ['--strict.perms=false']
    ulimits:
      memlock:
        soft: -1
        hard: -1
    stdin_open: true
    tty: true
    network_mode: bridge
    deploy:
      mode: global
    logging:
      driver: 'json-file'
      options:
        max-size: '10m'
        max-file: '50'
filebeat.yml (assuming both servers)
logging.level: error
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /usr/share/filebeat/mylog/**/*.log
    processors:
      - decode_json_fields:
          fields: ['message']
          target: 'json'
output.logstash:
  hosts: ['<SERVER-B-IP>:5044'] ## <- see server IP
console.pretty: true
processors:
  - add_docker_metadata:
      host: 'unix:///host_docker/docker.sock'
I'm trying to run an app using Docker Swarm. The app is designed to run completely locally on a single computer using Docker Swarm.
If I SSH into the server and run a docker stack deploy, everything works, as seen here running docker service ls:
When this deployment works, the services generally go live in this order:
Registry (a private registry)
Main (an Nginx service) and Postgres
All other services in random order (all Node apps)
The problem I am having is on reboot. When I reboot the server, I pretty consistently have the issue of the services failing with this result:
I am getting some errors that could be helpful.
In Postgres: docker service logs APP_NAME_postgres -f:
In Docker logs: sudo journalctl -fu docker.service
Update: June 5th, 2019
Also, by request from a GitHub issue, the docker version output:
Client:
Version: 18.09.5
API version: 1.39
Go version: go1.10.8
Git commit: e8ff056
Built: Thu Apr 11 04:43:57 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.5
API version: 1.39 (minimum version 1.12)
Go version: go1.10.8
Git commit: e8ff056
Built: Thu Apr 11 04:10:53 2019
OS/Arch: linux/amd64
Experimental: false
And docker info output:
Containers: 28
Running: 9
Paused: 0
Stopped: 19
Images: 14
Server Version: 18.09.5
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: pbouae9n1qnezcq2y09m7yn43
Is Manager: true
ClusterID: nq9095ldyeq5ydbsqvwpgdw1z
Managers: 1
Nodes: 1
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 1
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 192.168.0.47
Manager Addresses:
192.168.0.47:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.15.0-50-generic
Operating System: Ubuntu 18.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.68GiB
Name: oeemaster
ID: 76LH:BH65:CFLT:FJOZ:NCZT:VJBM:2T57:UMAL:3PVC:OOXO:EBSZ:OIVH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
WARNING: No swap limit support
And finally, my Docker Swarm stack/compose file:
secrets:
  jwt-secret:
    external: true
  pg-db:
    external: true
  pg-host:
    external: true
  pg-pass:
    external: true
  pg-user:
    external: true
services:
  accounts:
    depends_on:
      - postgres
      - registry
    deploy:
      restart_policy:
        condition: on-failure
    environment:
      JWT_SECRET_FILE: /run/secrets/jwt-secret
      PG_DB_FILE: /run/secrets/pg-db
      PG_HOST_FILE: /run/secrets/pg-host
      PG_PASS_FILE: /run/secrets/pg-pass
      PG_USER_FILE: /run/secrets/pg-user
    image: 127.0.0.1:5000/local-oee-master-accounts:v0.8.0
    secrets:
      - source: jwt-secret
      - source: pg-db
      - source: pg-host
      - source: pg-pass
      - source: pg-user
  graphs:
    depends_on:
      - postgres
      - registry
    deploy:
      restart_policy:
        condition: on-failure
    environment:
      PG_DB_FILE: /run/secrets/pg-db
      PG_HOST_FILE: /run/secrets/pg-host
      PG_PASS_FILE: /run/secrets/pg-pass
      PG_USER_FILE: /run/secrets/pg-user
    image: 127.0.0.1:5000/local-oee-master-graphs:v0.8.0
    secrets:
      - source: pg-db
      - source: pg-host
      - source: pg-pass
      - source: pg-user
  health:
    depends_on:
      - postgres
      - registry
    deploy:
      restart_policy:
        condition: on-failure
    environment:
      PG_DB_FILE: /run/secrets/pg-db
      PG_HOST_FILE: /run/secrets/pg-host
      PG_PASS_FILE: /run/secrets/pg-pass
      PG_USER_FILE: /run/secrets/pg-user
    image: 127.0.0.1:5000/local-oee-master-health:v0.8.0
    secrets:
      - source: pg-db
      - source: pg-host
      - source: pg-pass
      - source: pg-user
  live-data:
    depends_on:
      - postgres
      - registry
    deploy:
      restart_policy:
        condition: on-failure
    image: 127.0.0.1:5000/local-oee-master-live-data:v0.8.0
    ports:
      - published: 32000
        target: 80
  main:
    depends_on:
      - accounts
      - graphs
      - health
      - live-data
      - point-logs
      - registry
    deploy:
      restart_policy:
        condition: on-failure
    environment:
      MAIN_CONFIG_FILE: nginx.local.conf
    image: 127.0.0.1:5000/local-oee-master-nginx:v0.8.0
    ports:
      - published: 80
        target: 80
      - published: 443
        target: 443
  modbus-logger:
    depends_on:
      - point-logs
      - registry
    deploy:
      restart_policy:
        condition: on-failure
    environment:
      CONTROLLER_ADDRESS: 192.168.2.100
      SERVER_ADDRESS: http://point-logs
    image: 127.0.0.1:5000/local-oee-master-modbus-logger:v0.8.0
  point-logs:
    depends_on:
      - postgres
      - registry
    deploy:
      restart_policy:
        condition: on-failure
    environment:
      ENV_TYPE: local
      PG_DB_FILE: /run/secrets/pg-db
      PG_HOST_FILE: /run/secrets/pg-host
      PG_PASS_FILE: /run/secrets/pg-pass
      PG_USER_FILE: /run/secrets/pg-user
    image: 127.0.0.1:5000/local-oee-master-point-logs:v0.8.0
    secrets:
      - source: pg-db
      - source: pg-host
      - source: pg-pass
      - source: pg-user
  postgres:
    depends_on:
      - registry
    deploy:
      restart_policy:
        condition: on-failure
        window: 120s
    environment:
      POSTGRES_PASSWORD: password
    image: 127.0.0.1:5000/local-oee-master-postgres:v0.8.0
    ports:
      - published: 5432
        target: 5432
    volumes:
      - /media/db_main/postgres_oee_master:/var/lib/postgresql/data:rw
  registry:
    deploy:
      restart_policy:
        condition: on-failure
    image: registry:2
    ports:
      - mode: host
        published: 5000
        target: 5000
    volumes:
      - /mnt/registry:/var/lib/registry:rw
version: '3.2'
Things I've tried
Action: Added restart_policy > window: 120s
Result: No Effect
Action: Postgres restart_policy > condition: none & crontab @reboot redeploy
Result: No Effect
Action: Set all containers stop_grace_period: 2m
Result: No Effect
Current Workaround
Currently, I have hacked together a solution that is working just so I can move on to the next thing. I wrote a shell script called recreate.sh that kills the failed first-boot version of the stack, waits for it to tear down, and then "manually" runs docker stack deploy again. I set the script to run at boot with crontab @reboot. This is working for shutdowns and reboots, but I don't accept this as the proper answer, so I won't add it as one.
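A rough sketch of what such a workaround script might look like; the stack name, compose file path, and sleep duration are placeholders, not taken from the original setup:
#!/bin/sh
# recreate.sh - illustrative sketch of the workaround described above
docker stack rm APP_NAME
# give the failed first-boot deployment time to tear down
sleep 60
docker stack deploy -c /path/to/docker-compose.yml APP_NAME
Triggered at boot via a crontab entry such as: @reboot /path/to/recreate.sh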
It looks to me that what you need to check is who/what kills the postgres service. From the logs you posted, it seems that Postgres receives a smart shutdown signal and then stops gracefully. Your stack file has the restart policy set to "on-failure", and since the postgres process stops gracefully (exit code 0), Docker does not consider this a failure and, as instructed, does not restart it.
In conclusion, I'd recommend changing the restart policy from "on-failure" to "any".
Also, keep in mind that the "depends_on" settings you use are ignored in swarm mode; your services/images need their own way of ensuring the proper startup order, or they must be able to cope while dependent services are not up yet.
There's another thing you could try: healthchecks. Perhaps your Postgres base image has a healthcheck defined and it terminates the container by sending a kill signal to it. As written earlier, Postgres then shuts down gracefully, there's no error exit code, and the restart policy does not trigger. Try disabling the healthcheck in the YAML, or look through the Dockerfiles for a healthcheck directive and figure out why it triggers.
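For the postgres service in the stack file above, that would look roughly like this (a minimal sketch showing only the relevant keys; the commented-out healthcheck override is an assumption about how you might disable an inherited healthcheck, not something from the original file):
  postgres:
    deploy:
      restart_policy:
        condition: any
        window: 120s
    # if the base image defines a healthcheck you want to rule out:
    # healthcheck:
    #   disable: true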
I have issues instantiating my chaincode on a channel (timeout), and as I'm quite new to Docker and containerisation, it may come from the network setup.
Here is the command:
docker exec -e "CORE_PEER_LOCALMSPID=ControlTowerMSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin#controltower.com/msp" peer0.controltower.com peer chaincode instantiate -C frenchspoonchannel -n nodecc -v 1.0 -c '{"Args":["init","a","100","b","200"]}'
The chaincode container is created but not up, and the command times out.
Peer logs:
2018-12-05 13:46:20.263 UTC [endorser] SimulateProposal -> ERRO 036 [frenchspoonchannel][b33ded8e] failed to invoke chaincode name:"lscc" , error: timeout expired while starting chaincode nodecc:1.0 for transaction b33ded8e61c100521c4e08ab9402b5324cb967f63e1163d1963e2293ce0445f6
dev-peer logs:
Error: Cannot find module '/usr/local/src/chaincode_example02.js'
at Function.Module._resolveFilename (module.js:538:15)
at Function.Module._load (module.js:468:25)
at Function.Module.runMain (module.js:684:10)
at startup (bootstrap_node.js:187:16)
at bootstrap_node.js:608:3
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! chaincode_example02@1.0.0 start: `node chaincode_example02.js "--peer.address" "192.168.176.5:7052"`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the chaincode_example02@1.0.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
Orderer & Peer docker-compose.yaml
orderer.com:
  container_name: orderer.com
  image: hyperledger/fabric-orderer
  environment:
    - ORDERER_GENERAL_LOGLEVEL=info
    - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
    - ORDERER_GENERAL_GENESISMETHOD=file
    - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
    - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
    - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/msp/orderer/msp
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
  command: orderer
  ports:
    - 7050:7050
  volumes:
    - ./config/:/etc/hyperledger/configtx
    - ./crypto-config/ordererOrganizations/controltower.com/orderers/orderer.controltower.com/:/etc/hyperledger/msp/orderer
    - ./crypto-config/peerOrganizations/controltower.com/peers/devpeer/:/etc/hyperledger/msp/peer0.controlTower.com
  networks:
    - controltower.com
peer0.controltower.com:
  container_name: peer0.controltower.com
  image: hyperledger/fabric-peer
  environment:
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_PEER_ID=peer0.controltower.com
    - CORE_LOGGING_PEER=info
    - CORE_CHAINCODE_LOGGING_LEVEL=info
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_PEER_LOCALMSPID=ControlTowerMSP
    - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
    - CORE_PEER_ADDRESS=peer0.controltower.com:7051
    - CORE_PEER_GOSSIP_BOOTSTRAP=p0.controltower.co:7051
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=p0.controltower.co:7051
    #- CORE_PEER_CHAINCODELISTENADDRESS=peer0.controltower.com:7052
    # # the following setting starts chaincode containers on the same
    # # bridge network as the peers
    # # https://docs.docker.com/compose/networking/
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_controltower.com
    - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
    # The CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME and CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD
    # provide the credentials for ledger to connect to CouchDB. The username and password must
    # match the username and password set for the associated CouchDB.
    - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
    - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric
  command: peer node start
  # command: peer node start --peer-chaincodedev=true
  ports:
    - 7051:7051
    #- 7052:7052
    - 7053:7053
  volumes:
    - /var/run/:/host/var/run/
    - ./crypto-config/peerOrganizations/controltower.com/peers/devpeer/msp:/etc/hyperledger/msp/peer
    - ./crypto-config/peerOrganizations/controltower.com/users:/etc/hyperledger/msp/users
    - ./config:/etc/hyperledger/configtx
    - ./chaincode:/etc/hyperledger/chaincode
  depends_on:
    - orderer.com
    - couchdb
  networks:
    - controltower.com
As you can see, I tried with the chaincode listen address env var and opened the new port 7052, but it didn't work.
Thanks!