NestJS - I've tried everything. I am trying to build my application.
"build": "nest build"`,
"start:prod": "node dist/main",
As a result:
process.env.PORT undefined
process.env.POSTGRES_HOST undefined
[Nest] 22952 - 30.03.2022, 15:33:48 LOG [NestFactory] Starting Nest application...
[Nest] 22952 - 30.03.2022, 15:33:48 LOG [InstanceLoader] AppModule dependencies initialized +126ms
[Nest] 22952 - 30.03.2022, 15:33:48 LOG [InstanceLoader] SequelizeModule dependencies initialized +1ms
[Nest] 22952 - 30.03.2022, 15:33:48 LOG [InstanceLoader] ConfigHostModule dependencies initialized +1ms
[Nest] 22952 - 30.03.2022, 15:33:48 LOG [InstanceLoader] ConfigModule dependencies initialized +0ms
[Nest] 22952 - 30.03.2022, 15:33:48 ERROR [SequelizeModule] Unable to connect to the database. Retrying (1)
I use "cross-env": "^7.0.3"
my app.module.ts :
imports: [
ConfigModule.forRoot({
envFilePath: `.${process.env.NODE_ENV}.env`
}),
SequelizeModule.forRoot({
dialect: 'postgres',
host: process.env.POSTGRES_HOST,
port: Number(process.env.POSTGRES_PORT),
username: process.env.POSTGRES_USER,
password: String(process.env.POSTGRES_PASSWORD),
database: process.env.POSTGRES_DB,
..
I can't figure out what I'm doing wrong. Locally everything works.
"start:dev": "cross-env NODE_ENV=development nest start --watch"
You have a dot (.) at the beginning of the NODE_ENV.env file. Do you also have a .production.env file in the root directory of your application?
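Note that start:dev sets NODE_ENV through cross-env while start:prod does not, so at production startup the template literal resolves to ".undefined.env" and no env file is loaded at all; adding cross-env NODE_ENV=production to the start:prod script (and providing a .production.env file) is the first thing to try. Separately, the options passed to SequelizeModule.forRoot read process.env while the imports array is being evaluated, which can happen before ConfigModule has loaded the file. A minimal sketch of deferring those reads with forRootAsync and ConfigService (assuming @nestjs/config and @nestjs/sequelize):

import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { SequelizeModule } from '@nestjs/sequelize';

@Module({
  imports: [
    ConfigModule.forRoot({
      envFilePath: `.${process.env.NODE_ENV}.env`,
    }),
    SequelizeModule.forRootAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      // The factory runs only after ConfigModule has loaded the env file,
      // so these lookups no longer race against module evaluation.
      useFactory: (config: ConfigService) => ({
        dialect: 'postgres',
        host: config.get<string>('POSTGRES_HOST'),
        port: Number(config.get('POSTGRES_PORT')),
        username: config.get<string>('POSTGRES_USER'),
        password: config.get<string>('POSTGRES_PASSWORD'),
        database: config.get<string>('POSTGRES_DB'),
      }),
    }),
  ],
})
export class AppModule {}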
I have recently updated my microservices project from Spring Boot 2.0.5 to 2.7.4.
Also updated the Spring Cloud dependencies to 2021.0.4.
Migrated from Netflix Zuul to Spring Cloud Gateway.
My Eureka server is running on port 8000, which I can access in the browser to see the Eureka dashboard.
The gateway is running on port 8001, registers successfully with Eureka, and I can see its status "UP" in the dashboard.
But no other services are registered with Eureka. The application.yml files of all the other services are almost identical to the gateway-service's. Below are some code snippets from one of the services:
Problem 1 - Application name UNKNOWN
DiscoveryClient_UNKNOWN/192.168.1.3: registering service...
Problem 2
Request execution error. endpoint=DefaultEndpoint{ serviceUrl='http://localhost:8761/eureka/}, exception=java.net.ConnectException: Connection refused (Connection refused) stacktrace=com.sun.jersey.api.client.ClientHandlerException: java.net.ConnectException: Connection refused (Connection refused)
Eureka-server config
spring:
application:
name: eureka-service
cloud:
config:
uri: ${eureka.instance.hostname}:${server.port}
server:
port: 8000
eureka:
instance:
hostname: localhost
client:
registerWithEureka: false
fetchRegistry: false
service-url:
defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka
server:
enableSelfPreservation: false
logging:
level:
org.springframework: ERROR
com.debugfactory.trypz: ERROR
file:
name: logs/eureka-service.log
Gateway-service Application.yml - This registers with the proper name with Eureka
spring:
main:
web-application-type: reactive
allow-bean-definition-overriding: true
application:
name: zuul-service
servlet:
multipart:
max-file-size: 10MB
max-request-size: 10MB
cloud:
gateway:
discovery:
locator:
enabled: true
routes:
- id: sms-service
uri: http://sms-service/
predicates:
- Path=/sms/**
server:
port: 8001
max-thread: 100
eureka:
client:
serviceUrl:
defaultZone: ${EUREKA_URI:http://localhost:8000/eureka/}
instance:
preferIpAddress: true
logging:
file:
name: logs/zuul-service.log
pattern:
file: "%d %-5level [%thread] %logger{0} : %msg%n"
level:
org.springframework: INFO
com.bayview: DEBUG
org.netflix: INFO
SMS-Service <- This looks for Eureka on port 8761 and registers with the name UNKNOWN
Application.yml
spring:
application:
name: sms-service
data:
mongodb:
authentication-database: admin
database: sms
host: localhost
port: 27017
main:
allow-bean-definition-overriding: true
config:
activate:
on-profile: local
server:
port: 8022
servlet:
context-path: /sms
tomcat:
max-threads: 10
logging:
level:
org.springframework.web: ERROR
org.hibernate: ERROR
com.cryptified.exchange: DEBUG
org.springframework.data: ERROR
org.springframework.security: ERROR
org.springframework.web.client: DEBUG
file:
name: logs/sms-service.log
eureka:
client:
serviceUrl:
defaultZone: ${EUREKA_URI:http://localhost:8000/eureka/}
instance:
preferIpAddress: true
build.gradle
plugins {
id 'org.springframework.boot' version '2.7.5'
id 'io.spring.dependency-management' version '1.0.15.RELEASE'
id 'java'
}
repositories {
mavenCentral()
}
ext {
set('springCloudVersion', "2021.0.4")
set('testcontainersVersion', "1.17.4")
mapStructVersion = '1.5.3.Final'
}
dependencies {
implementation 'org.springframework.boot:spring-boot-starter-web'
implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
implementation 'org.flywaydb:flyway-core'
implementation 'org.springframework.cloud:spring-cloud-starter-config'
implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-client'
implementation 'org.springframework.boot:spring-boot-starter-aop'
implementation 'org.springframework.boot:spring-boot-starter-actuator'
implementation 'org.springframework.boot:spring-boot-starter-validation'
implementation "org.mapstruct:mapstruct:${mapStructVersion}"
annotationProcessor "org.mapstruct:mapstruct-processor:${mapStructVersion}"
compileOnly 'org.projectlombok:lombok'
runtimeOnly 'org.postgresql:postgresql'
annotationProcessor 'org.springframework.boot:spring-boot-configuration-processor'
annotationProcessor 'org.projectlombok:lombok'
testImplementation 'org.springframework.boot:spring-boot-starter-test'
testImplementation 'org.testcontainers:junit-jupiter'
}
dependencyManagement {
imports {
mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
}
}
.....
application.yml in your project
URI_TO_SERVER_CONFIG: http://localhost:8888
spring:
application:
name: name_project_your
cloud:
config:
uri: ${URI_TO_SERVER_CONFIG}
config:
import: configserver:${URI_TO_SERVER_CONFIG}
profiles:
active: microservice, discoveries, name_project_your
Application configs in the git repository
application-discoveries.yml
SERVER_PORT_SERVICE_DISCOVERY: 8761
IP_ADDRESS_SERVICE_DISCOVERY: http://localhost
eureka:
instance:
instance-id: ${spring.application.name}:${random.uuid}
# https://docs.spring.io/spring-cloud-netflix/docs/current/reference/html/#changing-the-eureka-instance-id
client:
service-url:
defaultZone: ${IP_ADDRESS_SERVICE_DISCOVERY}:${SERVER_PORT_SERVICE_DISCOVERY}/eureka
lease-renewal-interval-in-seconds: 5
name-project-your.yml
DB_IP: 127.0.0.1
DB_PORT: 5432
DB_DATABASE_NAME: name_db
DB_URL: jdbc:postgresql://${DB_IP}:${DB_PORT}/${DB_DATABASE_NAME}
DB_USERNAME: postgres
DB_PASSWORD: postgres
DB_SCHEMA: your_schema
spring:
application:
name: name-project-your
cloud:
config:
enabled: true
discovery:
enabled: true
datasource:
url: ${DB_URL}
username: ${DB_USERNAME}
password: ${DB_PASSWORD}
flyway:
baseline-on-migrate: true
locations: classpath:db/migration
create-schemas: true
default-schema: ${DB_SCHEMA}
application-microservice.yml
eureka:
client:
fetch-registry: true
register-with-eureka: true
logging:
level:
com:
netflix:
eureka: debug
discovery: debug
server:
port: 0
Today I am trying to deploy Apache Flink 1.11 to my Kubernetes v1.16 cluster following the official doc. This is my deployment Job YAML with just a little tweak (original from the official content; I re-exported it from Kubernetes):
[root@k8smaster flink]# cat flink-jobmanager.yaml
apiVersion: batch/v1
kind: Job
metadata:
labels:
app: flink
component: jobmanager
job-name: flink-jobmanager
name: flink-jobmanager
namespace: infrastructure
spec:
backoffLimit: 6
completions: 1
parallelism: 1
template:
metadata:
creationTimestamp: null
labels:
app: flink
component: jobmanager
job-name: flink-jobmanager
spec:
containers:
- args:
- standalone-job
- --job-classname
- com.job.ClassName
- <optional arguments>
- <job arguments>
image: flink:1.11.0-scala_2.11
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 60
successThreshold: 1
tcpSocket:
port: 6123
timeoutSeconds: 1
name: jobmanager
ports:
- containerPort: 6123
name: rpc
protocol: TCP
- containerPort: 6124
name: blob-server
protocol: TCP
- containerPort: 8081
name: webui
protocol: TCP
securityContext:
runAsUser: 9999
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /opt/flink/conf
name: flink-config-volume
- mountPath: /opt/flink/usrlib
name: job-artifacts-volume
dnsPolicy: ClusterFirst
restartPolicy: OnFailure
schedulerName: default-scheduler
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
items:
- key: flink-conf.yaml
path: flink-conf.yaml
- key: log4j-console.properties
path: log4j-console.properties
name: flink-config
name: flink-config-volume
- hostPath:
path: /home
name: job-artifacts-volume
The Job was created successfully, but when I log in to the pod and check the log, it shows this:
Starting Job Manager
sed: couldn't open temporary file /opt/flink/conf/sedktEwGn: Read-only file system
sed: couldn't open temporary file /opt/flink/conf/sedvZ7Znn: Read-only file system
/docker-entrypoint.sh: 72: /docker-entrypoint.sh: cannot create /opt/flink/conf/flink-conf.yaml: Permission denied
/docker-entrypoint.sh: 91: /docker-entrypoint.sh: cannot create /opt/flink/conf/flink-conf.yaml.tmp: Read-only file system
Starting standalonejob as a console application on host flink-jobmanager-dfhtt.
2020-08-19 16:50:46,850 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --------------------------------------------------------------------------------
2020-08-19 16:50:46,852 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Preconfiguration:
2020-08-19 16:50:46,852 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] -
JM_RESOURCE_PARAMS extraction logs:
jvm_params: -Xmx1073741824 -Xms1073741824 -XX:MaxMetaspaceSize=268435456
logs: INFO [] - Loading configuration property: jobmanager.rpc.address, flink-jobmanager
INFO [] - Loading configuration property: taskmanager.numberOfTaskSlots, 2
INFO [] - Loading configuration property: blob.server.port, 6124
INFO [] - Loading configuration property: jobmanager.rpc.port, 6123
INFO [] - Loading configuration property: taskmanager.rpc.port, 6122
INFO [] - Loading configuration property: queryable-state.proxy.ports, 6125
INFO [] - Loading configuration property: jobmanager.memory.process.size, 1600m
INFO [] - Loading configuration property: taskmanager.memory.process.size, 1728m
INFO [] - Loading configuration property: parallelism.default, 2
INFO [] - The derived from fraction jvm overhead memory (160.000mb (167772162 bytes)) is less than its min value 192.000mb (201326592 bytes), min value will be used instead
INFO [] - Final Master Memory configuration:
INFO [] - Total Process Memory: 1.563gb (1677721600 bytes)
INFO [] - Total Flink Memory: 1.125gb (1207959552 bytes)
INFO [] - JVM Heap: 1024.000mb (1073741824 bytes)
INFO [] - Off-heap: 128.000mb (134217728 bytes)
INFO [] - JVM Metaspace: 256.000mb (268435456 bytes)
INFO [] - JVM Overhead: 192.000mb (201326592 bytes)
2020-08-19 16:50:46,853 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --------------------------------------------------------------------------------
2020-08-19 16:50:46,853 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Starting StandaloneApplicationClusterEntryPoint (Version: 1.11.0, Scala: 2.11, Rev:d04872d, Date:2020-06-29T16:13:14+02:00)
2020-08-19 16:50:46,853 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - OS current user: flink
2020-08-19 16:50:46,853 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Current Hadoop/Kerberos user: <no hadoop dependency found>
2020-08-19 16:50:46,853 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JVM: OpenJDK 64-Bit Server VM - Oracle Corporation - 1.8/25.262-b10
2020-08-19 16:50:46,853 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Maximum heap size: 989 MiBytes
2020-08-19 16:50:46,853 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JAVA_HOME: /usr/local/openjdk-8
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - No Hadoop Dependency available
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JVM Options:
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Xmx1073741824
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Xms1073741824
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -XX:MaxMetaspaceSize=268435456
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog.file=/opt/flink/log/flink--standalonejob-0-flink-jobmanager-dfhtt.log
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog4j.configuration=file:/opt/flink/conf/log4j-console.properties
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog4j.configurationFile=file:/opt/flink/conf/log4j-console.properties
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlogback.configurationFile=file:/opt/flink/conf/logback-console.xml
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Program Arguments:
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --configDir
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - /opt/flink/conf
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --job-classname
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - com.job.ClassName
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - <optional arguments>
2020-08-19 16:50:46,855 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - <job arguments>
2020-08-19 16:50:46,855 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Classpath: /opt/flink/lib/flink-csv-1.11.0.jar:/opt/flink/lib/flink-json-1.11.0.jar:/opt/flink/lib/flink-shaded-zookeeper-3.4.14.jar:/opt/flink/lib/flink-table-blink_2.11-1.11.0.jar:/opt/flink/lib/flink-table_2.11-1.11.0.jar:/opt/flink/lib/log4j-1.2-api-2.12.1.jar:/opt/flink/lib/log4j-api-2.12.1.jar:/opt/flink/lib/log4j-core-2.12.1.jar:/opt/flink/lib/log4j-slf4j-impl-2.12.1.jar:/opt/flink/lib/flink-dist_2.11-1.11.0.jar:::
2020-08-19 16:50:46,855 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --------------------------------------------------------------------------------
2020-08-19 16:50:46,855 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Registered UNIX signal handlers for [TERM, HUP, INT]
2020-08-19 16:50:46,875 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Could not create application program.
org.apache.flink.util.FlinkException: Could not find the provided job class (com.job.ClassName) in the user lib directory (/opt/flink/usrlib).
at org.apache.flink.client.deployment.application.ClassPathPackagedProgramRetriever.getJobClassNameOrScanClassPath(ClassPathPackagedProgramRetriever.java:140) ~[flink-dist_2.11-1.11.0.jar:1.11.0]
at org.apache.flink.client.deployment.application.ClassPathPackagedProgramRetriever.getPackagedProgram(ClassPathPackagedProgramRetriever.java:123) ~[flink-dist_2.11-1.11.0.jar:1.11.0]
at org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.getPackagedProgram(StandaloneApplicationClusterEntryPoint.java:110) ~[flink-dist_2.11-1.11.0.jar:1.11.0]
at org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.main(StandaloneApplicationClusterEntryPoint.java:78) [flink-dist_2.11-1.11.0.jar:1.11.0]
My questions are:
Why does the log say "Read-only file system" at startup, and what should I do to fix it?
Why does it say "Could not find the provided job class (com.job.ClassName) in the user lib directory (/opt/flink/usrlib)", and what should I do to fix it?
I have already tried this on 2 Kubernetes clusters (a production Kubernetes v1.15 cluster and my local Kubernetes v1.18 cluster) and the error is the same. I have no clue how to fix it now.
I am collecting container logs using Filebeat in a Kubernetes cluster, and the collected log now shows this error:
2020-06-10T09:26:35.831Z ERROR [kubernetes] add_kubernetes_metadata/matchers.go:91 Error extracting container id - source value does not contain matcher's logs_path '/var/lib/docker/containers/'.
this is the full log output:
I found that Filebeat was listening on the node meowk8sslave2, and after logging in to this node I found that the path exists. Why could the error happen? This is my Filebeat config:
{
"filebeat.yml": "filebeat.inputs:
- type: container
paths:
- /var/log/containers/*.log
processors:
- add_kubernetes_metadata:
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: \"/var/log/containers/\"
output.elasticsearch:
host: '${NODE_NAME}'
hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
"
}
Look inside your filebeat pod where exactly the logs are made available.
I was testing the ELK stack on Minikube.
In my case it was inside /var/lib/docker/containers/*/*.log
So this one worked for me.
filebeatConfig:
filebeat.yml: |
filebeat.inputs:
- type: container
paths:
- /var/lib/docker/containers/*/*.log
processors:
- add_kubernetes_metadata:
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/lib/docker/containers/"
output.elasticsearch:
host: '${NODE_NAME}'
hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
Change:
filebeat.inputs:
- type: container
paths:
- /var/log/containers/*.log
processors:
- add_kubernetes_metadata:
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
# To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
# filebeat.autodiscover:
# providers:
# - type: kubernetes
# node: ${NODE_NAME}
# hints.enabled: true
# hints.default_config:
# type: container
# paths:
# - /var/log/containers/*${data.kubernetes.container.id}.log
to:
# filebeat.inputs:
# - type: container
# paths:
# - /var/log/containers/*.log
# processors:
# - add_kubernetes_metadata:
# host: ${NODE_NAME}
# matchers:
# - logs_path:
# logs_path: "/var/log/containers/"
# To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
filebeat.autodiscover:
providers:
- type: kubernetes
node: ${NODE_NAME}
hints.enabled: true
hints.default_config:
type: container
paths:
- /var/log/containers/*${data.kubernetes.container.id}.log
Reference: https://discuss.elastic.co/t/problem-to-update-to-filebeat-7-7-0-and-parser-nginx-ingress-controller-on-kubernetes/232461/2
This works for me:
update processors
from:
processors:
- add_cloud_metadata: ~
- add_kubernetes_metadata:
in_cluster: true
- drop_event.when.regexp.message: "kube-probe"
to:
processors:
- add_cloud_metadata: ~
- add_kubernetes_metadata:
in_cluster: true
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
- drop_event.when.regexp.message: "kube-probe"
Maybe you need to update your nginx module log path to:
/var/log/containers/*-${data.kubernetes.container.id}.log
I'm trying to make a network with 3 orgs (each has 3 peers) and two orderer
nodes with Kafka and ZooKeeper in Fabric 1.4.3.
Then, when I do peer channel create with
docker exec cli peer channel create -o orderer0.example.com:7050 -c $CHANNEL_NAME -f $ARTIFACTS_DIR/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
the below error occurs in the CLI:
Error: got unexpected status: FORBIDDEN -- implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied
and this is the docker log of orderer0:
2019-10-12 09:01:16.513 UTC [orderer.consensus.kafka] setupChannelConsumerForChannel -> INFO 011 [channel: channel.first] Setting up the channel consumer for this channel (start offset: -2)...
2019-10-12 09:01:16.524 UTC [orderer.consensus.kafka] startThread -> INFO 012 [channel: channel.first] Channel consumer set up successfully
2019-10-12 09:01:16.543 UTC [orderer.consensus.kafka] startThread -> INFO 013 [channel: channel.first] Start phase completed successfully
2019-10-12 09:01:18.537 UTC [orderer.common.broadcast] ProcessMessage -> WARN 014 [channel: channel.first] Rejecting broadcast of config message from 172.18.0.29:35290 because of error: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied
2019-10-12 09:01:18.537 UTC [comm.grpc.server] 1 -> INFO 015 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=172.18.0.29:35290 grpc.code=OK grpc.call_duration=1.888934ms
2019-10-12 09:01:18.541 UTC [common.deliver] Handle -> WARN 016 Error reading from 172.18.0.29:35288: rpc error: code = Canceled desc = context canceled
2019-10-12 09:01:18.542 UTC [comm.grpc.server] 1 -> INFO 017 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=172.18.0.29:35288 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=10.552989ms
Directories
|──directories
| └──────artifacts
| | └──────channel.tx
| | └──────genesis.block
| |
| └──────bin
| | └──────crypto-config
| | | └──────...
| | └──────...
| |
| └──────network
| └──────docker-compose-mq.yaml
| └──────docker-compose-orderer.yaml
| └──────...
I read some solutions to problems like mine here, but I didn't solve it yet.
These are the relevant parts of my configtx.yaml:
Organizations:
- &OrdererOrg
Name: OrdererOrg
ID: OrdererMSP
MSPDir: ./crypto-config/ordererOrganizations/example.com/msp
Policies:
Readers:
Type: Signature
Rule: "OR('OrdererMSP.member')"
Writers:
Type: Signature
Rule: "OR('OrdererMSP.member')"
Admins:
Type: Signature
Rule: "OR('OrdererMSP.admin')"
- &Org1
Name: Org1MSP
ID: Org1MSP
MSPDir: ./crypto-config/peerOrganizations/org1.example.com/msp
Policies:
Readers:
Type: Signature
Rule: "OR('Org1MSP.admin', 'Org1MSP.peer', 'Org1MSP.client')"
Writers:
Type: Signature
Rule: "OR('Org1MSP.admin', 'Org1MSP.client')"
Admins:
Type: Signature
Rule: "OR('Org1MSP.admin')"
AnchorPeers:
- Host: peer0.org1.example.com
Port: 7051
and this is docker-compose-cli.yaml
cli:
container_name: cli
image: hyperledger/fabric-tools:1.4.3
tty: true
stdin_open: true
environment:
- SYS_CHANNEL=$SYS_CHANNEL
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
#- FABRIC_LOGGING_SPEC=DEBUG
- FABRIC_LOGGING_SPEC=INFO
- CORE_PEER_ID=cli
- CORE_PEER_ADDRESS=peer0.org1.example.com
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: /bin/bash
volumes:
- /var/run/:/host/var/run/
- ../chaincode/:/opt/gopath/src/github.com/chaincode
- ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
- ../artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
- ../chaincode:/opt/gopath/src/github.com/hyperledger/fabric/chaincode
#- ./:/etc/hyperledger/fabric
crypto-config.yaml
OrdererOrgs:
- Name: Orderer
Domain: example.com
EnableNodeOUs: true
Specs:
- Hostname: orderer
Template:
Count: 2
PeerOrgs:
- Name: Org1
Domain: org1.example.com
EnableNodeOUs: true
Template:
Count: 3
Users:
Count: 1
docker-compose-orderer.yaml
version: '2'
networks:
blockchain_network:
services:
orderer0.example.com:
container_name: orderer0.example.com
image: hyperledger/fabric-orderer:1.4.3
environment:
- ORDERER_GENERAL_LOGLEVEL=debug
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
- ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
- ORDERER_KAFKA_VERBOSE=true
working_dir: /opt/gopath/src/github.com/hyperledger/fabric
command: orderer
volumes:
- ../artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls:/var/hyperledger/orderer/tls
ports:
- 7050:7050
networks:
- blockchain_network
# orderer1 is the same as above
I want to know why this error occurs and how to solve it.
What's your channel configuration inside configtx.yaml?
Have you tried to run the peer command inside the client bash? (I'm not sure that your MSP-related environment variables are active the way you are using "docker exec".)
docker exec -it cli bash
peer channel create -o orderer0.example.com:7050 -c $CHANNEL_NAME -f $ARTIFACTS_DIR/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
We're stuck configuring a Fabric network based on 3 orgs with 1 peer each and 2 Kafka-based orderers. For Kafka ordering we use 4 Kafka nodes with 3 ZooKeepers. It's deployed on some AWS EC2 instances as follows:
1: Org1
2: Org2
3: Org3
4: orderer0, orderer1, kafka0, kafka1, kafka2, kafka3, zookeeper0, zookeeper1, zookeeper2
All of the ordering nodes plus the Kafka cluster are located on the same machine for connectivity reasons (we read somewhere that they must be on the same machine to avoid these problems).
During our tests, we tried taking down the first orderer (orderer0) with docker stop for redundancy testing. We expected the network to continue working through orderer1, but instead it dies and stops working.
Looking at the peer's console, I can see some errors.
Could not connect to any of the endpoints: [orderer0.example.com:7050, orderer1.example.com:8050]
Attached below are the configuration files for the system.
Orderer + Kafka + ZooKeeper
#
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
zookeeper0.example.com:
container_name: zookeeper0.example.com
extends:
file: docker-compose-base.yaml
service: zookeeper0.example.com
logging:
driver: "json-file"
options:
max-size: "1m"
max-file: "3"
zookeeper1.example.com:
container_name: zookeeper1.example.com
extends:
file: docker-compose-base.yaml
service: zookeeper1.example.com
logging:
driver: "json-file"
options:
max-size: "1m"
max-file: "3"
zookeeper2.example.com:
container_name: zookeeper2.example.com
extends:
file: docker-compose-base.yaml
service: zookeeper2.example.com
logging:
driver: "json-file"
options:
max-size: "1m"
max-file: "3"
kafka0.example.com:
container_name: kafka0.example.com
extends:
file: docker-compose-base.yaml
service: kafka0.example.com
depends_on:
- zookeeper0.example.com
- zookeeper1.example.com
- zookeeper2.example.com
- orderer0.example.com
- orderer1.example.com
logging:
driver: "json-file"
options:
max-size: "1m"
max-file: "3"
kafka1.example.com:
container_name: kafka1.example.com
extends:
file: docker-compose-base.yaml
service: kafka1.example.com
depends_on:
- zookeeper0.example.com
- zookeeper1.example.com
- zookeeper2.example.com
- orderer0.example.com
- orderer1.example.com
logging:
driver: "json-file"
options:
max-size: "1m"
max-file: "3"
kafka2.example.com:
container_name: kafka2.example.com
extends:
file: docker-compose-base.yaml
service: kafka2.example.com
depends_on:
- zookeeper0.example.com
- zookeeper1.example.com
- zookeeper2.example.com
- orderer0.example.com
- orderer1.example.com
logging:
driver: "json-file"
options:
max-size: "1m"
max-file: "3"
kafka3.example.com:
container_name: kafka3.example.com
extends:
file: docker-compose-base.yaml
service: kafka3.example.com
depends_on:
- zookeeper0.example.com
- zookeeper1.example.com
- zookeeper2.example.com
- orderer0.example.com
- orderer1.example.com
logging:
driver: "json-file"
options:
max-size: "1m"
max-file: "3"
orderer0.example.com:
container_name: orderer0.example.com
image: hyperledger/fabric-orderer:x86_64-1.1.0
environment:
- ORDERER_GENERAL_LOGLEVEL=debug
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_LISTEN_PORT=7050
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/crypto/orderer/msp
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/etc/hyperledger/crypto/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/etc/hyperledger/crypto/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/etc/hyperledger/crypto/orderer/tls/ca.crt, /etc/hyperledger/crypto/peerOrg1/tls/ca.crt, /etc/hyperledger/crypto/peerOrg2/tls/ca.crt, /etc/hyperledger/crypto/peerOrg3/tls/ca.crt]
logging:
driver: "json-file"
options:
max-size: "1m"
max-file: "3"
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderers
command: orderer
ports:
- 7050:7050
volumes:
- ./channel:/etc/hyperledger/configtx
- ./channel/crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/:/etc/hyperledger/crypto/orderer
- ./channel/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/:/etc/hyperledger/crypto/peerOrg1
- ./channel/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/:/etc/hyperledger/crypto/peerOrg2
- ./channel/crypto-config/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/:/etc/hyperledger/crypto/peerOrg3
depends_on:
- kafka0.example.com
- kafka1.example.com
- kafka2.example.com
- kafka3.example.com
orderer1.example.com:
container_name: orderer1.example.com
image: hyperledger/fabric-orderer:x86_64-1.1.0
environment:
- ORDERER_GENERAL_LOGLEVEL=debug
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_LISTEN_PORT=8050
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/crypto/orderer/msp
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/etc/hyperledger/crypto/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/etc/hyperledger/crypto/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/etc/hyperledger/crypto/orderer/tls/ca.crt, /etc/hyperledger/crypto/peerOrg1/tls/ca.crt, /etc/hyperledger/crypto/peerOrg2/tls/ca.crt, /etc/hyperledger/crypto/peerOrg3/tls/ca.crt]
logging:
driver: "json-file"
options:
max-size: "1m"
max-file: "3"
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderers
command: orderer
ports:
- 8050:7050
volumes:
- ./channel:/etc/hyperledger/configtx
- ./channel/crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/:/etc/hyperledger/crypto/orderer
- ./channel/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/:/etc/hyperledger/crypto/peerOrg1
- ./channel/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/:/etc/hyperledger/crypto/peerOrg2
- ./channel/crypto-config/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/:/etc/hyperledger/crypto/peerOrg3
depends_on:
- kafka0.example.com
- kafka1.example.com
- kafka2.example.com
- kafka3.example.com
Peer and CA from Org2
#
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
ca.org2.example.com:
image: hyperledger/fabric-ca:x86_64-1.1.0
environment:
- FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
- FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem
- FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/efa7d0819b7083f6c06eb34da414acbcde79f607b9ce26fb04dee60cf79a389a_sk
- FABRIC_CA_SERVER_TLS_ENABLED=true
- FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem
- FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/efa7d0819b7083f6c06eb34da414acbcde79f607b9ce26fb04dee60cf79a389a_sk
ports:
- "8054:7054"
command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
volumes:
- ./channel/crypto-config/peerOrganizations/org2.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
container_name: ca_peerOrg2
peer0.org2.example.com:
container_name: peer0.org2.example.com
extends:
file: base.yaml
service: peer-base
environment:
- CORE_PEER_ID=peer0.org2.example.com
- CORE_PEER_LOCALMSPID=Org2MSP
- CORE_PEER_ADDRESS=peer0.org2.example.com:7051
ports:
- 8051:7051
- 8053:7053
volumes:
- ./channel/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/:/etc/hyperledger/crypto/peer
logging:
driver: "json-file"
options:
max-size: "1m"
max-file: "3"
extra_hosts:
- "orderer0.example.com:xxx.xxx.xxx.xxx"
- "orderer1.example.com:xxx.xxx.xxx.xxx"
- "kafka0.example.com:xxx.xxx.xxx.xxx"
- "kafka1.example.com:xxx.xxx.xxx.xxx"
- "kafka2.example.com:xxx.xxx.xxx.xxx"
- "kafka3.example.com:xxx.xxx.xxx.xxx"
Orderer base
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
orderer-base:
image: hyperledger/fabric-orderer:$IMAGE_TAG
environment:
- ORDERER_GENERAL_LOGLEVEL=error
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
# enabled TLS
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
# kafka
- CONFIGTX_ORDERER_ORDERERTYPE=kafka
- CONFIGTX_ORDERER_KAFKA_BROKERS=[kafka0.example.com,kafka1.example.com,kafka2.example.com,kafka3.example.com]
- ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
- ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
- ORDERER_KAFKA_VERBOSE=true
working_dir: /opt/gopath/src/github.com/hyperledger/fabric
command: orderer
Kafka base
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
zookeeper:
image: hyperledger/fabric-zookeeper
environment:
- ZOO_SERVERS=server.1=zookeeper0.example.com:2888:3888 server.2=zookeeper1.example.com:2888:3888 server.3=zookeeper2.example.com:2888:3888
restart: always
kafka:
image: hyperledger/fabric-kafka
restart: always
environment:
- KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
- KAFKA_MIN_INSYNC_REPLICAS=2
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA_ZOOKEEPER_CONNECT=zookeeper0.example.com:2181,zookeeper1.example.com:2181,zookeeper2.example.com:2181
logging:
driver: "json-file"
options:
max-size: "1m"
max-file: "3"
configtx.yaml
Organizations:
- &OrdererOrg
Name: OrdererMSP
ID: OrdererMSP
MSPDir: crypto-config/ordererOrganizations/example.com/msp
- &Org1
Name: Org1MSP
ID: Org1MSP
MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
AnchorPeers:
- Host: peer0.org1.example.com
Port: 7051
- &Org2
Name: Org2MSP
ID: Org2MSP
MSPDir: crypto-config/peerOrganizations/org2.example.com/msp
AnchorPeers:
- Host: peer0.org2.example.com
Port: 7051
- &Org3
Name: Org3MSP
ID: Org3MSP
MSPDir: crypto-config/peerOrganizations/org3.example.com/msp
AnchorPeers:
- Host: peer0.org3.example.com
Port: 7051
################################################################################
Orderer: &OrdererDefaults
OrdererType: kafka
Addresses:
- orderer0.example.com:7050
- orderer1.example.com:7050
BatchTimeout: 2s
BatchSize:
MaxMessageCount: 10
AbsoluteMaxBytes: 98 MB
PreferredMaxBytes: 512 KB
Kafka:
Brokers:
- kafka0.example.com:9092
- kafka1.example.com:9092
- kafka2.example.com:9092
- kafka3.example.com:9092
Organizations:
################################################################################
Application: &ApplicationDefaults
Organizations:
################################################################################
Profiles:
ThreeOrgsOrdererGenesis:
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Consortiums:
SampleConsortium:
Organizations:
- *Org1
- *Org2
- *Org3
ThreeOrgsChannel:
Consortium: SampleConsortium
Application:
<<: *ApplicationDefaults
Organizations:
- *Org1
- *Org2
- *Org3
Could it be a configuration error? Connection problems have been all but ruled out, because running the same network on a local machine gives the same result.
Thanks in advance
Regards
Finally got it running smoothly. Turns out the problem wasn't in the docker-compose files, but in the version of the Fabric SDK used by the web service. I was using fabric-client and fabric-ca-client, both on version 1.1, and this capability was missing until 1.2. (More info: https://jira.hyperledger.org/browse/FABN-90)
Just to clarify: I was able to see transactions happening on both orderers because of the interconnection between them, but I was only attacking the first one. When that orderer went down, the network would go dark.
I now understand how Fabric deals with orderers: it points to the first orderer in the list; if that one is down or unreachable, it moves it to the bottom of the list and targets the next one. This is what happens since 1.2; in older versions you have to code your own error controller so that it switches to the next orderer.
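For illustration, a rough sketch of that rotate-on-failure behavior (plain TypeScript, with a hypothetical submit callback standing in for the real SDK broadcast call, which this does not reproduce):

// Hypothetical: submit(url) sends the signed envelope to one orderer
// endpoint and rejects if that orderer is down or unreachable.
type SubmitFn = (ordererUrl: string) => Promise<void>;

async function broadcastWithFailover(
  ordererUrls: string[],
  submit: SubmitFn,
): Promise<string> {
  const candidates = [...ordererUrls]; // copy: keep the caller's order intact
  for (let attempt = 0; attempt < candidates.length; attempt++) {
    const url = candidates[0];
    try {
      await submit(url);
      return url; // success: this orderer accepted the broadcast
    } catch {
      // Down or unreachable: move it to the bottom of the list and
      // target the next one, as the 1.2+ SDKs do internally.
      candidates.push(candidates.shift()!);
    }
  }
  throw new Error(
    'Could not connect to any of the endpoints: ' + ordererUrls.join(', '),
  );
}

On a pre-1.2 SDK, an error controller along these lines around the broadcast call is what keeps the network usable when orderer0 goes down.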
I'm not sure, but it could be because of different network layers. Since these are separate compose files, Docker creates a separate network for each compose project.
Also, I don't see a network mentioned in the YAML files.
Please check the list of networks using "docker network list".