I want to replace these two values (the *** = the machine's IP address)
with the service name.
Is there any solution?
This is my Service; its name is api:
apiVersion: v1
kind: Service
metadata:
name: api
labels:
app: api
spec:
type: NodePort
ports:
- port: 8080
nodePort: 30001
targetPort: 8080
protocol: TCP
name: api
selector:
app: api
And this is my Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: api
name: api
spec:
selector:
matchLabels:
app: api
template:
metadata:
labels:
app: api
spec:
containers:
- name: api
image: einstore/einstore-core:0.1.1
env:
- name: APICORE_STORAGE_LOCAL_ROOT
value: "/home/einstore"
- name: APICORE_SERVER_NAME
value: "Einstore - Enterprise AppStore"
- name: APICORE_SERVER_MAX_UPLOAD_FILESIZE
value: "50"
- name: APICORE_DATABASE_HOST
value: "postgres"
- name: APICORE_DATABASE_USER
value: "einstore"
- name: APICORE_DATABASE_PASSWORD
value: "einstore"
- name: APICORE_DATABASE_DATABASE
value: "einstore"
- name: APICORE_DATABASE_PORT
value: "5432"
- name: APICORE_DATABASE_LOGGING
value: "false"
- name: APICORE_JWT_SECRET
value: "secret"
- name: APICORE_STORAGE_S3_ENABLED
value: "false"
- name: APICORE_STORAGE_S3_BUCKET
value: "~"
- name: APICORE_STORAGE_S3_ACCESS_KEY
value: "~"
- name: APICORE_STORAGE_S3_REGION
value: "~"
- name: APICORE_STORAGE_S3_SECRET_KEY
value: "~"
- name: APICORE_SERVER_URL
value: "http://**.***.*.***:30001/"
When I replace the *** with my machine's IP address, everything works. But I need to put the service name there instead, so that I can deploy the app on any other machine.
The first solution looks good to me, but I get this error:
curl http://api.einstore:8080/
curl: (6) Could not resolve host: api.einstore
NB: einstore= the name of my namespace
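For reference, a Service is reachable inside the cluster through cluster DNS; the short name api.einstore (service.namespace) only resolves from pods running inside the cluster, which is why a curl from the host machine cannot resolve it. A minimal sketch of the env entry, assuming the app should call the Service on its in-cluster port 8080 and that the api Service lives in the einstore namespace:
- name: APICORE_SERVER_URL
  value: "http://api.einstore.svc.cluster.local:8080/"   # fully-qualified in-cluster DNS name; only resolvable from pods inside the cluster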
I'd like to deploy my NestJS backend server and Redis on k8s.
In order to split the user service out of the NestJS core service, I would like to run the user service as a k8s service, and also run the cache server for the user DB (referenced by the user service) as a k8s service.
To do that, I set up the user service's database config module like this:
import { Module } from '@nestjs/common'
import { TypeOrmModule, TypeOrmModuleAsyncOptions, TypeOrmModuleOptions } from '@nestjs/typeorm'
import { SnakeNamingStrategy } from 'typeorm-naming-strategies'
let DATABASE_NAME = 'test'
if (process.env.NODE_ENV) {
DATABASE_NAME = `${DATABASE_NAME}_${process.env.NODE_ENV}`
}
const DB_HOST: string = process.env.DB_HOST ?? 'localhost'
const DB_USERNAME: string = process.env.DB_USERNAME ?? 'user'
const DB_PASSWORD: string = process.env.DB_PASSWORD ?? 'password'
const REDIS_HOST: string = process.env.REDIS_HOST ?? 'localhost'
const databaseConfig: TypeOrmModuleAsyncOptions = {
useFactory: (): TypeOrmModuleOptions => ({
type: 'mysql',
host: DB_HOST,
port: 3306,
username: DB_USERNAME,
password: DB_PASSWORD,
database: DATABASE_NAME,
autoLoadEntities: true,
synchronize: true,
namingStrategy: new SnakeNamingStrategy(),
logging: false,
cache: {
type: 'redis',
options: {
host: REDIS_HOST,
port: 6379,
},
},
timezone: '+09:00',
}),
}
@Module({
imports: [
TypeOrmModule.forRootAsync({
...databaseConfig,
}),
],
})
export class DatabaseModule {}
And to implement this on k8s, I used Helm.
Helm's template folders are as follows:
- configmap
- deployment
- pod
- service
And the files under those folders are as follows:
// configmap/redis.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-config
data:
redis-config: |
maxmemory 20mb
maxmemory-policy allkeys-lru
// deployment/user_service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service
labels:
app: user-service
namespace: default
spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels:
app: user-service
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
app: user-service
spec:
containers:
- image: {{ .Values.user_service.image }}:{{ .Values.user_service_version }}
imagePullPolicy: Always
name: user-service
ports:
- containerPort: 50051
protocol: TCP
env:
- name: COGNITO_CLIENT_ID
value: "some value"
- name: COGNITO_USER_POOL_ID
value: "some value"
- name: DB_HOST
value: "some value"
- name: DB_PASSWORD
value: "some value"
- name: DB_USERNAME
value: "some value"
- name: NODE_ENV
value: "test"
- name: REDIS_HOST
value: "10.100.77.0"
// pod/redis.yaml
apiVersion: v1
kind: Pod
metadata:
name: redis
labels:
app: redis
spec:
containers:
- name: redis
image: redis:latest
command:
- redis-server
- "/redis-master/redis.conf"
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
name: redis
volumeMounts:
- mountPath: /redis-master-data
name: data
- mountPath: /redis-master
name: config
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: redis-config
items:
- key: redis-config
path: redis.conf
// service/user_service.yaml
apiVersion: v1
kind: Service
metadata:
name: user-service
spec:
clusterIP: 10.100.88.0
selector:
app: user-service
ports:
- protocol: TCP
port: 50051
targetPort: 50051
// service/redis.yaml
apiVersion: v1
kind: Service
metadata:
name: redis
labels:
app: redis
spec:
clusterIP: 10.100.77.0
selector:
app: redis
ports:
- name: redis
protocol: TCP
port: 6379
targetPort: 6379
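For reference, the {{ .Values.* }} placeholders in the templates above would be driven by a values.yaml roughly like this (key names inferred from the templates; the image repository and version are placeholders):
replicas: 1
user_service:
  image: my-registry/user-service   # placeholder: actual repository is not shown in the question
user_service_version: latest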
With the above yaml files, I install a Helm chart named test.
After installing, the output of kubectl get svc,po,deploy,configmap is like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 4d4h
service/user-service ClusterIP 10.100.88.0 <none> 50051/TCP 6s
service/redis ClusterIP 10.100.77.0 <none> 6379/TCP 6s
NAME READY STATUS RESTARTS AGE
pod/user-service-78548d4d8f-psbr2 0/1 ContainerCreating 0 6s
pod/redis 0/1 ContainerCreating 0 6s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/user-service 0/1 1 0 6s
NAME DATA AGE
configmap/kube-root-ca.crt 1 4d4h
configmap/redis-config 1 6s
But when I checked the user-service deployment's logs, this error occurred:
[Nest] 1 - 02/07/2023, 7:15:32 AM ERROR [TypeOrmModule] Unable to connect to the database. Retrying (1)...
Error: connect ECONNREFUSED 127.0.0.1:6379
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
I also checked through the console logs that the REDIS_HOST environment variable is 10.100.77.0 in the user-service's database config, but the error above shows it still trying to connect to localhost.
Is there an error in the part I set up?
You can use the Service to connect to Redis. For this, use redis.redis as REDIS_HOST in your application.
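As a minimal sketch, that means replacing the hard-coded clusterIP in the user-service Deployment's env with the Service DNS name (assuming the redis Service is in the default namespace, as in the question; the short form suggested above works the same way if the Service sits in a redis namespace):
- name: REDIS_HOST
  value: "redis.default.svc.cluster.local"   # Service DNS name instead of a fixed clusterIP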
I am facing a weird issue in a Kubernetes yaml file with initContainers. It shows that my initContainer runs successfully, but it never reaches the ready state and stays that way forever. There are no errors in the initContainer logs, and the logs show a successful result. Am I missing anything?
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
io.kompose.service: graphql-engine
name: graphql-engine
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: graphql-engine
strategy: {}
template:
metadata:
labels:
io.kompose.service: graphql-engine
spec:
initContainers:
# GraphQl
- env:
- name: HASURA_GRAPHQL_ADMIN_SECRET
value: devsecret
- name: HASURA_GRAPHQL_DATABASE_URL
value: postgres://postgres:postgres@10.192.250.55:5432/zbt_mlcraft
- name: HASURA_GRAPHQL_ENABLE_CONSOLE
value: "true"
- name: HASURA_GRAPHQL_JWT_SECRET
value: '{"type": "HS256", "key": "LGB6j3RkoVuOuqKzjgnCeq7vwfqBYJDw", "claims_namespace": "hasura"}'
- name: HASURA_GRAPHQL_LOG_LEVEL
value: debug
- name: HASURA_GRAPHQL_UNAUTHORIZED_ROLE
value: public
- name: PVX_MLCRAFT_ACTIONS_URL
value: http://pvx-mlcraft-actions:3010
image: hasura/graphql-engine:v2.10.1
name: graphql-engine
ports:
- containerPort: 8080
resources: {}
restartPolicy: Always
containers:
- env:
- name: AUTH_CLIENT_URL
value: http://localhost:3000
- name: AUTH_EMAIL_PASSWORDLESS_ENABLED
value: "true"
- name: AUTH_HOST
value: 0.0.0.0
- name: AUTH_LOG_LEVEL
value: debug
- name: AUTH_PORT
value: "4000"
- name: AUTH_SMTP_HOST
value: smtp.gmail.com
- name: AUTH_SMTP_PASS
value: fahkbhcedmwolqzp
- name: AUTH_SMTP_PORT
value: "587"
- name: AUTH_SMTP_SENDER
value: noreplaypivoxnotifications@gmail.com
- name: AUTH_SMTP_USER
value: noreplaypivoxnotifications@gmail.com
- name: AUTH_WEBAUTHN_RP_NAME
value: Nhost App
- name: HASURA_GRAPHQL_ADMIN_SECRET
value: devsecret
- name: HASURA_GRAPHQL_DATABASE_URL
value: postgres://postgres:postgres@10.192.250.55:5432/zbt_mlcraft
- name: HASURA_GRAPHQL_GRAPHQL_URL
value: http://graphql-engine:8080/v1/graphql
- name: HASURA_GRAPHQL_JWT_SECRET
value: '{"type": "HS256", "key": "LGB6j3RkoVuOuqKzjgnCeq7vwfqBYJDw", "claims_namespace": "hasura"}'
- name: POSTGRES_PASSWORD
value: postgres
image: nhost/hasura-auth:latest
name: auth
ports:
- containerPort: 4000
resources: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: graphql-engine
name: graphql-engine
spec:
type: LoadBalancer
ports:
- name: "8080"
port: 8080
targetPort: 8080
selector:
io.kompose.service: graphql-engine
status:
loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: graphql-engine
name: auth
spec:
ports:
- name: "4000"
port: 4000
targetPort: 4000
selector:
io.kompose.service: graphql-engine
status:
loadBalancer: {}
I expected the init container to be in the ready state.
The Status field of the initContainer is not relevant here. What you need is for your initContainer to be deterministic. Currently your initContainer keeps running because the image it uses is built to run indefinitely.
Init containers need to be built so that they run their process and then exit with exit code 0. graphql-engine, on the other hand, is a container that runs indefinitely and provides an API.
What are you trying to accomplish with this graphql-engine pod?
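If the intent is only to order start-up, a common pattern is to keep long-running servers such as graphql-engine under containers and put a short-lived check under initContainers. A minimal sketch, assuming the init step should just wait for the Postgres instance from the question to accept connections (the busybox image and the check itself are illustrative):
initContainers:
- name: wait-for-postgres
  image: busybox:1.36
  # Polls Postgres and exits 0 once it is reachable, so the init phase can complete.
  command: ["sh", "-c", "until nc -z 10.192.250.55 5432; do echo waiting for postgres; sleep 2; done"]
containers:
- name: graphql-engine            # long-running server belongs here, not in initContainers
  image: hasura/graphql-engine:v2.10.1
  ports:
  - containerPort: 8080
- name: auth
  image: nhost/hasura-auth:latest
  ports:
  - containerPort: 4000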
I configured MongoDB with a username and password, and deployed MongoDB and mongo-express.
The problem is that I'm getting the following error in mongo-express logs:
Could not connect to database using connectionString: mongodb://username:password@mongodb://lc-mongodb-service:27017:27017/"
I can see that the connection string contains 27017 port twice, and also "mongodb://" in the middle that should not be there.
This is my mongo-express deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: lc-mongo-express
labels:
app: lc-mongo-express
spec:
replicas: 1
selector:
matchLabels:
app: lc-mongo-express
template:
metadata:
labels:
app: lc-mongo-express
spec:
containers:
- name: lc-mongo-express
image: mongo-express
ports:
- containerPort: 8081
env:
- name: ME_CONFIG_MONGODB_SERVER
valueFrom:
configMapKeyRef:
name: lc-configmap
key: DATABASE_URL
- name: ME_CONFIG_MONGODB_ADMINUSERNAME
valueFrom:
secretKeyRef:
name: lc-secret
key: MONGO_ROOT_USERNAME
- name: ME_CONFIG_MONGODB_ADMINPASSWORD
valueFrom:
secretKeyRef:
name: lc-secret
key: MONGO_ROOT_PASSWORD
---
apiVersion: v1
kind: Service
metadata:
name: lc-mongo-express-service
spec:
selector:
app: lc-mongo-express
type: LoadBalancer
ports:
- protocol: TCP
port: 8081
targetPort: 8081
And my MongoDB deployment:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: lc-mongodb-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
volumeMode: Filesystem
storageClassName: gp2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: lc-mongodb
labels:
app: lc-mongodb
spec:
replicas: 1
serviceName: lc-mongodb-service
selector:
matchLabels:
app: lc-mongodb
template:
metadata:
labels:
app: lc-mongodb
spec:
volumes:
- name: lc-mongodb-storage
persistentVolumeClaim:
claimName: lc-mongodb-pvc
containers:
- name: lc-mongodb
image: "mongo"
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: lc-secret
key: MONGO_ROOT_USERNAME
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: lc-secret
key: MONGO_ROOT_PASSWORD
command:
- mongod
- --auth
volumeMounts:
- mountPath: '/data/db'
name: lc-mongodb-storage
---
apiVersion: v1
kind: Service
metadata:
name: lc-mongodb-service
labels:
name: lc-mongodb
spec:
selector:
app: lc-mongodb
ports:
- protocol: TCP
port: 27017
targetPort: 27017
What am I doing wrong?
Your connection string format is wrong.
You should be trying something like:
mongodb://[username:password#]host1[:port1][,...hostN[:portN]][/[defaultauthdb][?options]]
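For example, with the Service name from the question and placeholder credentials, the string would look something like this (one host, one port, and no second mongodb:// prefix):
mongodb://username:password@lc-mongodb-service:27017/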
Now suppose you are using Node.js:
const MongoClient = require('mongodb').MongoClient;
const uri = "mongodb+srv://<username>:<password>@<Mongo service Name>/<Database name>?retryWrites=true&w=majority";
const client = new MongoClient(uri, { useNewUrlParser: true });
client.connect(err => {
// creating collection
const collection = client.db("test").collection("devices");
// perform actions on the collection object
client.close();
});
Also, you are missing the DB path args: ["--dbpath","/data/db"] in the command when using the PVC and configuring the path.
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: mongo
name: mongo
spec:
replicas: 1
selector:
matchLabels:
app: mongo
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: mongo
spec:
containers:
- image: mongo
name: mongo
args: ["--dbpath","/data/db"]
livenessProbe:
exec:
command:
- mongo
- --disableImplicitSessions
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command:
- mongo
- --disableImplicitSessions
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongo-creds
key: username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-creds
key: password
volumeMounts:
- name: "mongo-data-dir"
mountPath: "/data/db"
volumes:
- name: "mongo-data-dir"
persistentVolumeClaim:
claimName: "pvc"
I need to use SAML 2.0 authentication (https://www.bookstackapp.com/docs/admin/saml2-auth/) in the Kubernetes deployment file of BookStack.
Is it possible to configure the variables from the above link in the Kubernetes deployment file?
Thanks in advance.
Look into this example - it seems it was created just for you :)
https://github.com/BookStackApp/BookStack/issues/1776
Just use your own variables from the saml2-auth page in the deployment file:
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "bookstack-test-x5"
namespace: "default"
labels:
app: "bookstack-test-x5"
spec:
replicas: 1
selector:
matchLabels:
app: "bookstack-test-x5"
template:
metadata:
labels:
app: "bookstack-test-x5"
spec:
containers:
- name: "bookstack-sha256"
image: "gcr.io/<PATH_TO_MY_CONTAINER>"
env:
- name: "DB_USER"
value: "bookstack22"
- name: "DB_HOST"
value: "127.0.0.1"
- name: DB_PORT
value: "3306"
- name: DB_DATABASE
value: "bookstack"
- name: "DB_PSWD"
value: "my_secret_passowrd"
- name: "APP_DEBUG"
value: "true"
- name: CACHE_DRIVER
value: "database"
- name: SESSION_DRIVER
value: "database"
Below are the peer logs:
2019-12-06 07:00:31.121 UTC [core.comm] ServerHandshake -> ERRO fa975 TLS handshake failed with error EOF server=ChaincodeServer remoteaddress=192.168.131.215:25731
2019-12-06 07:00:31.215 UTC [core.comm] ServerHandshake -> ERRO fa976 TLS handshake failed with error EOF server=ChaincodeServer remoteaddress=192.168.131.215:20784
2019-12-06 07:00:31.301 UTC [core.comm] ServerHandshake -> ERRO fa977 TLS handshake failed with error EOF server=PeerServer remoteaddress=192.168.131.215:8059
2019-12-06 07:00:31.512 UTC [core.comm] ServerHandshake -> ERRO fa978 TLS handshake failed with error EOF server=ChaincodeServer remoteaddress=192.168.163.185:46359
2019-12-06 07:00:31.768 UTC [core.comm] ServerHandshake -> ERRO fa979 TLS handshake failed with error EOF server=PeerServer remoteaddress=192.168.131.215:34603
Everything is working fine. We are able to do transactions on the chaincode.
Can anyone please help us on this issue?
EDITED: 9th Dec. 2019
Below is the peer deployment yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: korg60
name: peer1-korg60
spec:
replicas: 1
strategy: {}
selector:
matchLabels:
app: hyperledger
role: peer
org: korg60
name: peer1-korg60
template:
metadata:
labels:
app: hyperledger
role: peer
org: korg60
name: peer1-korg60
spec:
containers:
- name: couchdb
image: hyperledger/fabric-couchdb:latest
ports:
- containerPort: 5984
- name: peer1-korg60
image: hyperledger/fabric-peer:1.4
env:
- name: FABRIC_CA_CLIENT_HOME
value: /opt/gopath/src/github.com/hyperledger/fabric/peer
- name: FABRIC_CA_CLIENT_TLS_CERTFILES
value: /data/korg60-ca-chain.pem
- name: ENROLLMENT_URL
value: http://peer1:peer1pw@ica-korg60.korg60:7054
- name: PEER_NAME
value: peer1-korg60
- name: PEER_HOME
value: /opt/gopath/src/github.com/hyperledger/fabric/peer
- name: PEER_HOST
value: some.domain.com:7051
- name: PEER_NAME_PASS
value: peer1:peer1pw
- name: CORE_PEER_ADDRESSAUTODETECT
value: "true"
- name: CORE_PEER_ID
value: peer1-korg60
- name: CORE_PEER_ADDRESS
value: some.domain.com:7051
- name: CORE_PEER_LOCALMSPID
value: korg60MSP
- name: CORE_PEER_MSPCONFIGPATH
value: /opt/gopath/src/github.com/hyperledger/fabric/peer/msp
- name: CORE_VM_ENDPOINT
value: unix:///host/var/run/docker.sock
- name: CORE_VM_DOCKER_ATTACHSTDOUT
value: "true"
- name: FABRIC_LOGGING_SPEC
value: "peer=INFO"
- name: CORE_PEER_TLS_ENABLED
value: "true"
- name: CORE_PEER_TLS_CERT_FILE
value: /opt/gopath/src/github.com/hyperledger/fabric/peer/tls/server.crt
- name: CORE_PEER_TLS_KEY_FILE
value: /opt/gopath/src/github.com/hyperledger/fabric/peer/tls/server.key
- name: CORE_PEER_TLS_ROOTCERT_FILE
value: /data/korg60-ca-chain.pem
- name: CORE_PEER_TLS_CLIENTAUTHREQUIRED
value: "false"
- name: CORE_PEER_TLS_CLIENTROOTCAS_FILES
value: /data/korg60-ca-chain.pem
- name: CORE_PEER_TLS_CLIENTCERT_FILE
value: /data/tls/peer1-korg60-client.crt
- name: CORE_PEER_TLS_CLIENTKEY_FILE
value: /data/tls/peer1-korg60-client.key
- name: CORE_PEER_GOSSIP_USELEADERELECTION
value: "true"
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
value: some.domain.com:7051
- name: CORE_PEER_GOSSIP_SKIPHANDSHAKE
value: "true"
- name: CORE_PEER_CHAINCODELISTENADDRESS
value: 0.0.0.0:7052
- name: CORE_LEDGER_STATE_STATEDATABASE
value: CouchDB
- name: CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS
value: localhost:5984
- name: ORG
value: korg60
- name: ORG_ADMIN_CERT
value: /data/orgs/korg60/msp/admincerts/cert.pem
- name: GODEBUG
value: "netdns=go"
ports:
- containerPort: 7051
- containerPort: 7052
- containerPort: 7053
command: ["sh"]
args: ["-c", "/scripts/start-peer.sh 2>&1"]
volumeMounts:
- mountPath: /scripts
name: rca-scripts
- mountPath: /data
name: rca-data
- mountPath: /host/var/run/
name: run
volumes:
- name: rca-scripts
persistentVolumeClaim:
claimName: rca-scripts-korg60-pvc
- name: rca-data
persistentVolumeClaim:
claimName: rca-data-korg60-pvc
- name: run
hostPath:
path: /run
---
apiVersion: v1
kind: Service
metadata:
namespace: korg60
name: peer1-korg60
spec:
selector:
app: hyperledger
role: peer
org: korg60
name: peer1-korg60
type: NodePort
ports:
- name: endpoint
protocol: TCP
port: 7051
targetPort: 7051
nodePort: 30401
- name: endpoint-chaincode
protocol: TCP
port: 7052
targetPort: 7052
nodePort: 30402
Below is the orderer yaml file.
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: koinearth
name: orderer1-koinearth
spec:
replicas: 1
strategy: {}
selector:
matchLabels:
app: hyperledger
role: orderer
org: koinearth
name: orderer1-koinearth
template:
metadata:
labels:
app: hyperledger
role: orderer
org: koinearth
name: orderer1-koinearth
spec:
containers:
- name: orderer1-koinearth
image: hyperledger/fabric-orderer:1.4
env:
- name: FABRIC_CA_CLIENT_HOME
value: /etc/hyperledger/orderer
- name: FABRIC_CA_CLIENT_TLS_CERTFILES
value: /data/koinearth-ca-chain.pem
- name: FABRIC_LOGGING_SPEC
value: "peer=INFO"
- name: ENROLLMENT_URL
value: http://orderer1:orderer1pw@ica-koinearth.koinearth:7054
- name: ORDERER_HOME
value: /etc/hyperledger/orderer
- name: ORDERER_HOST
value: orderer1-koinearth.koinearth
- name: ORDERER_GENERAL_LISTENADDRESS
value: 0.0.0.0
- name: ORDERER_GENERAL_GENESISMETHOD
value: file
- name: ORDERER_GENERAL_GENESISFILE
value: /data/genesis.block
- name: ORDERER_GENERAL_LOCALMSPID
value: koinearthMSP
- name: ORDERER_GENERAL_LOCALMSPDIR
value: /etc/hyperledger/orderer/msp
- name: ORDERER_GENERAL_TLS_ENABLED
value: "true"
- name: ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED
value: "false"
- name: ORDERER_GENERAL_TLS_PRIVATEKEY
value: /etc/hyperledger/orderer/tls/server.key
- name: ORDERER_GENERAL_TLS_CERTIFICATE
value: /etc/hyperledger/orderer/tls/server.crt
- name: ORDERER_GENERAL_LOGLEVEL
value: debug
- name: ORDERER_DEBUG_BROADCASTTRACEDIR
value: data/logs
- name: ORG
value: koinearth
- name: ORG_ADMIN_CERT
value: /data/orgs/koinearth/msp/admincerts/cert.pem
- name: ORDERER_GENERAL_TLS_ROOTCAS
value: '[/data/koinearth-ca-chain.pem]'
- name: ORDERER_GENERAL_TLS_CLIENTROOTCAS
value: '[/data/koinearth-ca-chain.pem]'
- name: ORDERER_KAFKA_VERBOSE
value: "true"
- name: ORDERER_KAFKA_VERSION
value: 1.0.0
- name: GODEBUG
value: "netdns=go"
ports:
- containerPort: 7050
command: ["sh"]
args: ["-c", "/scripts/start-orderer.sh 2>&1"]
volumeMounts:
- mountPath: /etc/hyperledger/fabric-ca
name: orderer
- mountPath: /scripts
name: rca-scripts
- mountPath: /data
name: rca-data
volumes:
- name: orderer
persistentVolumeClaim:
claimName: orderer-koinearth-pvc
- name: rca-scripts
persistentVolumeClaim:
claimName: rca-scripts-koinearth-pvc
- name: rca-data
persistentVolumeClaim:
claimName: rca-data-koinearth-pvc
---
apiVersion: v1
kind: Service
metadata:
namespace: koinearth
name: orderer1-koinearth
spec:
selector:
app: hyperledger
role: orderer
org: koinearth
name: orderer1-koinearth
type: NodePort
ports:
- name: endpoint
protocol: TCP
port: 7050
targetPort: 7050
nodePort: 30300
Peer and orderer identity is created in the startup scripts and stored locally in the container.
This happens when you are using the wrong certificates.
What are the two parties?
Two peers, or one peer and one orderer?
Or maybe the client?
The two parties must have valid TLS certificates; here you are using some wrong ones.