Hyperledger peers with TLS in a Kubernetes cluster constantly throw TLS handshake errors

Below are the peer logs:
2019-12-06 07:00:31.121 UTC [core.comm] ServerHandshake -> ERRO fa975 TLS handshake failed with error EOF server=ChaincodeServer remoteaddress=192.168.131.215:25731
2019-12-06 07:00:31.215 UTC [core.comm] ServerHandshake -> ERRO fa976 TLS handshake failed with error EOF server=ChaincodeServer remoteaddress=192.168.131.215:20784
2019-12-06 07:00:31.301 UTC [core.comm] ServerHandshake -> ERRO fa977 TLS handshake failed with error EOF server=PeerServer remoteaddress=192.168.131.215:8059
2019-12-06 07:00:31.512 UTC [core.comm] ServerHandshake -> ERRO fa978 TLS handshake failed with error EOF server=ChaincodeServer remoteaddress=192.168.163.185:46359
2019-12-06 07:00:31.768 UTC [core.comm] ServerHandshake -> ERRO fa979 TLS handshake failed with error EOF server=PeerServer remoteaddress=192.168.131.215:34603
Everything otherwise works fine; we are able to do transactions on the chaincode.
Can anyone please help us with this issue?
EDITED: 9th Dec. 2019
Below is the peer deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: korg60
name: peer1-korg60
spec:
replicas: 1
strategy: {}
selector:
matchLabels:
app: hyperledger
role: peer
org: korg60
name: peer1-korg60
template:
metadata:
labels:
app: hyperledger
role: peer
org: korg60
name: peer1-korg60
spec:
containers:
- name: couchdb
image: hyperledger/fabric-couchdb:latest
ports:
- containerPort: 5984
- name: peer1-korg60
image: hyperledger/fabric-peer:1.4
env:
- name: FABRIC_CA_CLIENT_HOME
value: /opt/gopath/src/github.com/hyperledger/fabric/peer
- name: FABRIC_CA_CLIENT_TLS_CERTFILES
value: /data/korg60-ca-chain.pem
- name: ENROLLMENT_URL
value: http://peer1:peer1pw@ica-korg60.korg60:7054
- name: PEER_NAME
value: peer1-korg60
- name: PEER_HOME
value: /opt/gopath/src/github.com/hyperledger/fabric/peer
- name: PEER_HOST
value: some.domain.com:7051
- name: PEER_NAME_PASS
value: peer1:peer1pw
- name: CORE_PEER_ADDRESSAUTODETECT
value: "true"
- name: CORE_PEER_ID
value: peer1-korg60
- name: CORE_PEER_ADDRESS
value: some.domain.com:7051
- name: CORE_PEER_LOCALMSPID
value: korg60MSP
- name: CORE_PEER_MSPCONFIGPATH
value: /opt/gopath/src/github.com/hyperledger/fabric/peer/msp
- name: CORE_VM_ENDPOINT
value: unix:///host/var/run/docker.sock
- name: CORE_VM_DOCKER_ATTACHSTDOUT
value: "true"
- name: FABRIC_LOGGING_SPEC
value: "peer=INFO"
- name: CORE_PEER_TLS_ENABLED
value: "true"
- name: CORE_PEER_TLS_CERT_FILE
value: /opt/gopath/src/github.com/hyperledger/fabric/peer/tls/server.crt
- name: CORE_PEER_TLS_KEY_FILE
value: /opt/gopath/src/github.com/hyperledger/fabric/peer/tls/server.key
- name: CORE_PEER_TLS_ROOTCERT_FILE
value: /data/korg60-ca-chain.pem
- name: CORE_PEER_TLS_CLIENTAUTHREQUIRED
value: "false"
- name: CORE_PEER_TLS_CLIENTROOTCAS_FILES
value: /data/korg60-ca-chain.pem
- name: CORE_PEER_TLS_CLIENTCERT_FILE
value: /data/tls/peer1-korg60-client.crt
- name: CORE_PEER_TLS_CLIENTKEY_FILE
value: /data/tls/peer1-korg60-client.key
- name: CORE_PEER_GOSSIP_USELEADERELECTION
value: "true"
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
value: some.domain.com:7051
- name: CORE_PEER_GOSSIP_SKIPHANDSHAKE
value: "true"
- name: CORE_PEER_CHAINCODELISTENADDRESS
value: 0.0.0.0:7052
- name: CORE_LEDGER_STATE_STATEDATABASE
value: CouchDB
- name: CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS
value: localhost:5984
- name: ORG
value: korg60
- name: ORG_ADMIN_CERT
value: /data/orgs/korg60/msp/admincerts/cert.pem
- name: GODEBUG
value: "netdns=go"
ports:
- containerPort: 7051
- containerPort: 7052
- containerPort: 7053
command: ["sh"]
args: ["-c", "/scripts/start-peer.sh 2>&1"]
volumeMounts:
- mountPath: /scripts
name: rca-scripts
- mountPath: /data
name: rca-data
- mountPath: /host/var/run/
name: run
volumes:
- name: rca-scripts
persistentVolumeClaim:
claimName: rca-scripts-korg60-pvc
- name: rca-data
persistentVolumeClaim:
claimName: rca-data-korg60-pvc
- name: run
hostPath:
path: /run
---
apiVersion: v1
kind: Service
metadata:
namespace: korg60
name: peer1-korg60
spec:
selector:
app: hyperledger
role: peer
org: korg60
name: peer1-korg60
type: NodePort
ports:
- name: endpoint
protocol: TCP
port: 7051
targetPort: 7051
nodePort: 30401
- name: endpoint-chaincode
protocol: TCP
port: 7052
targetPort: 7052
nodePort: 30402
Below is the orderer deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: koinearth
name: orderer1-koinearth
spec:
replicas: 1
strategy: {}
selector:
matchLabels:
app: hyperledger
role: orderer
org: koinearth
name: orderer1-koinearth
template:
metadata:
labels:
app: hyperledger
role: orderer
org: koinearth
name: orderer1-koinearth
spec:
containers:
- name: orderer1-koinearth
image: hyperledger/fabric-orderer:1.4
env:
- name: FABRIC_CA_CLIENT_HOME
value: /etc/hyperledger/orderer
- name: FABRIC_CA_CLIENT_TLS_CERTFILES
value: /data/koinearth-ca-chain.pem
- name: FABRIC_LOGGING_SPEC
value: "peer=INFO"
- name: ENROLLMENT_URL
value: http://orderer1:orderer1pw@ica-koinearth.koinearth:7054
- name: ORDERER_HOME
value: /etc/hyperledger/orderer
- name: ORDERER_HOST
value: orderer1-koinearth.koinearth
- name: ORDERER_GENERAL_LISTENADDRESS
value: 0.0.0.0
- name: ORDERER_GENERAL_GENESISMETHOD
value: file
- name: ORDERER_GENERAL_GENESISFILE
value: /data/genesis.block
- name: ORDERER_GENERAL_LOCALMSPID
value: koinearthMSP
- name: ORDERER_GENERAL_LOCALMSPDIR
value: /etc/hyperledger/orderer/msp
- name: ORDERER_GENERAL_TLS_ENABLED
value: "true"
- name: ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED
value: "false"
- name: ORDERER_GENERAL_TLS_PRIVATEKEY
value: /etc/hyperledger/orderer/tls/server.key
- name: ORDERER_GENERAL_TLS_CERTIFICATE
value: /etc/hyperledger/orderer/tls/server.crt
- name: ORDERER_GENERAL_LOGLEVEL
value: debug
- name: ORDERER_DEBUG_BROADCASTTRACEDIR
value: data/logs
- name: ORG
value: koinearth
- name: ORG_ADMIN_CERT
value: /data/orgs/koinearth/msp/admincerts/cert.pem
- name: ORDERER_GENERAL_TLS_ROOTCAS
value: '[/data/koinearth-ca-chain.pem]'
- name: ORDERER_GENERAL_TLS_CLIENTROOTCAS
value: '[/data/koinearth-ca-chain.pem]'
- name: ORDERER_KAFKA_VERBOSE
value: "true"
- name: ORDERER_KAFKA_VERSION
value: 1.0.0
- name: GODEBUG
value: "netdns=go"
ports:
- containerPort: 7050
command: ["sh"]
args: ["-c", "/scripts/start-orderer.sh 2>&1"]
volumeMounts:
- mountPath: /etc/hyperledger/fabric-ca
name: orderer
- mountPath: /scripts
name: rca-scripts
- mountPath: /data
name: rca-data
volumes:
- name: orderer
persistentVolumeClaim:
claimName: orderer-koinearth-pvc
- name: rca-scripts
persistentVolumeClaim:
claimName: rca-scripts-koinearth-pvc
- name: rca-data
persistentVolumeClaim:
claimName: rca-data-koinearth-pvc
---
apiVersion: v1
kind: Service
metadata:
namespace: koinearth
name: orderer1-koinearth
spec:
selector:
app: hyperledger
role: orderer
org: koinearth
name: orderer1-koinearth
type: NodePort
ports:
- name: endpoint
protocol: TCP
port: 7050
targetPort: 7050
nodePort: 30300
Peer and orderer identities are created by the startup scripts and stored locally in the container.

This happens when one side is using the wrong certificates.
Which two parties are involved in the failing handshake?
Two peers, or a peer and an orderer?
Or maybe a client?
Both parties must present TLS certificates that the other side can verify against a trusted CA chain; here, one of them appears to be using the wrong ones.
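As a minimal sketch of what has to line up, using only the file paths already present in the deployment above: the CA chain that issued the peer's server.crt must be the same chain that whatever dials the peer (CLI, SDK, or another peer) is configured to trust. For example:
# server side of the handshake: the peer's own TLS listener
- name: CORE_PEER_TLS_CERT_FILE
  value: /opt/gopath/src/github.com/hyperledger/fabric/peer/tls/server.crt   # must be issued by the korg60 CA chain
- name: CORE_PEER_TLS_KEY_FILE
  value: /opt/gopath/src/github.com/hyperledger/fabric/peer/tls/server.key
# client side of the handshake must verify against that same chain,
# e.g. for the peer CLI:
- name: CORE_PEER_TLS_ROOTCERT_FILE
  value: /data/korg60-ca-chain.pem
If the connecting side trusts a different chain, the handshake is dropped and the peer logs exactly the kind of EOF errors shown above.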

Related

Kubernetes init container hanging (Init container is running but not ready)

I am facing a weird issue with initContainers in a Kubernetes YAML file. It shows that my initContainer runs successfully, but it stays in a not-ready state and remains like that forever. There are no errors in the initContainer logs, and the logs show a successful result. Am I missing anything?
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
io.kompose.service: graphql-engine
name: graphql-engine
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: graphql-engine
strategy: {}
template:
metadata:
labels:
io.kompose.service: graphql-engine
spec:
initContainers:
# GraphQl
- env:
- name: HASURA_GRAPHQL_ADMIN_SECRET
value: devsecret
- name: HASURA_GRAPHQL_DATABASE_URL
value: postgres://postgres:postgres@10.192.250.55:5432/zbt_mlcraft
- name: HASURA_GRAPHQL_ENABLE_CONSOLE
value: "true"
- name: HASURA_GRAPHQL_JWT_SECRET
value: '{"type": "HS256", "key": "LGB6j3RkoVuOuqKzjgnCeq7vwfqBYJDw", "claims_namespace": "hasura"}'
- name: HASURA_GRAPHQL_LOG_LEVEL
value: debug
- name: HASURA_GRAPHQL_UNAUTHORIZED_ROLE
value: public
- name: PVX_MLCRAFT_ACTIONS_URL
value: http://pvx-mlcraft-actions:3010
image: hasura/graphql-engine:v2.10.1
name: graphql-engine
ports:
- containerPort: 8080
resources: {}
restartPolicy: Always
containers:
- env:
- name: AUTH_CLIENT_URL
value: http://localhost:3000
- name: AUTH_EMAIL_PASSWORDLESS_ENABLED
value: "true"
- name: AUTH_HOST
value: 0.0.0.0
- name: AUTH_LOG_LEVEL
value: debug
- name: AUTH_PORT
value: "4000"
- name: AUTH_SMTP_HOST
value: smtp.gmail.com
- name: AUTH_SMTP_PASS
value: fahkbhcedmwolqzp
- name: AUTH_SMTP_PORT
value: "587"
- name: AUTH_SMTP_SENDER
value: noreplaypivoxnotifications@gmail.com
- name: AUTH_SMTP_USER
value: noreplaypivoxnotifications@gmail.com
- name: AUTH_WEBAUTHN_RP_NAME
value: Nhost App
- name: HASURA_GRAPHQL_ADMIN_SECRET
value: devsecret
- name: HASURA_GRAPHQL_DATABASE_URL
value: postgres://postgres:postgres@10.192.250.55:5432/zbt_mlcraft
- name: HASURA_GRAPHQL_GRAPHQL_URL
value: http://graphql-engine:8080/v1/graphql
- name: HASURA_GRAPHQL_JWT_SECRET
value: '{"type": "HS256", "key": "LGB6j3RkoVuOuqKzjgnCeq7vwfqBYJDw", "claims_namespace": "hasura"}'
- name: POSTGRES_PASSWORD
value: postgres
image: nhost/hasura-auth:latest
name: auth
ports:
- containerPort: 4000
resources: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: graphql-engine
name: graphql-engine
spec:
type: LoadBalancer
ports:
- name: "8080"
port: 8080
targetPort: 8080
selector:
io.kompose.service: graphql-engine
status:
loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: graphql-engine
name: auth
spec:
ports:
- name: "4000"
port: 4000
targetPort: 4000
selector:
io.kompose.service: graphql-engine
status:
loadBalancer: {}
I expected the init container to reach the ready state.
The Status field of the initContainer is not what matters here. What you need is for your initContainer to be deterministic. Currently your initContainer keeps running because the image it uses is built to run indefinitely.
Init containers need to be built so that they run their process and then exit with exit code 0. graphql-engine, on the other hand, is a container that runs indefinitely and serves an API.
What are you trying to accomplish with this graphql-engine pod?
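If the goal was only to make sure the GraphQL engine is reachable before the auth container starts, one sketch (assuming the graphql-engine Service name used above and Hasura's /healthz endpoint) is to keep graphql-engine as a regular container and add a small init container that polls and then exits 0:
initContainers:
  - name: wait-for-graphql
    image: busybox:1.36
    # poll until the engine answers, then exit 0 so the main containers can start
    command:
      - sh
      - -c
      - 'until wget -qO- http://graphql-engine:8080/healthz; do echo waiting for graphql-engine; sleep 5; done'
graphql-engine itself then belongs in the regular containers list (or its own Deployment), where running indefinitely is the expected behaviour.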

How to set up Jitsi Meet on a custom Kubernetes cluster

We have a single-node Kubernetes environment hosted on an on-prem server, and we are attempting to host Jitsi on it as a single pod. Jitsi web, jicofo, jvb and prosody are all in one pod rather than in separate pods (reference here).
So far we have managed to set it up by adding our ingress hostname as the PUBLIC_URL for all 4 containers within the pod. The service works fine if the two users are on the same network.
If a user joins the call from another network, there is no video or audio, and the jvb container logs an error such as:
JVB 2022-03-16 02:03:28.447 WARNING: [62] [confId=200d989e4b048ad3 gid=116159 stats_id=Durward-H4W conf_name=externalcropsjustifynonetheless@muc.meet.jitsi ufrag=4vfdk1fu8vfgn1 epId=eaff1488 local_ufrag=4vfdk1fu8vfgn1] ConnectivityCheckClient.startCheckForPair#374: Failed to send BINDING-REQUEST(0x1)[attrib.count=6 len=92 tranID=0xBFC4F7917F010AF9DA6E21D7]
java.lang.IllegalArgumentException: No socket found for 172.17.0.40:10000/udp->192.168.1.23:42292/udp
at org.ice4j.stack.NetAccessManager.sendMessage(NetAccessManager.java:631)
at org.ice4j.stack.NetAccessManager.sendMessage(NetAccessManager.java:581)
at org.ice4j.stack.StunClientTransaction.sendRequest0(StunClientTransaction.java:267)
at org.ice4j.stack.StunClientTransaction.sendRequest(StunClientTransaction.java:245)
at org.ice4j.stack.StunStack.sendRequest(StunStack.java:680)
at org.ice4j.ice.ConnectivityCheckClient.startCheckForPair(ConnectivityCheckClient.java:335)
at org.ice4j.ice.ConnectivityCheckClient.startCheckForPair(ConnectivityCheckClient.java:231)
at org.ice4j.ice.ConnectivityCheckClient$PaceMaker.run(ConnectivityCheckClient.java:938)
at org.ice4j.util.PeriodicRunnable.executeRun(PeriodicRunnable.java:206)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
The browser console shows related errors as well.
EDIT
I have added the yaml file for the jitsi here
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: jitsi
name: jitsi
namespace: default
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
k8s-app: jitsi
template:
metadata:
labels:
k8s-app: jitsi
spec:
containers:
- name: jicofo
image: jitsi/jicofo:stable-7001
volumeMounts:
- mountPath: /config
name: jicofo-config-volume
imagePullPolicy: IfNotPresent
env:
- name: XMPP_SERVER
value: localhost
- name: XMPP_DOMAIN
value: meet.jitsi
- name: XMPP_AUTH_DOMAIN
value: auth.meet.jitsi
- name: PUBLIC_URL
value: <hidden>
- name: XMPP_INTERNAL_MUC_DOMAIN
value: internal-muc.meet.jitsi
- name: JICOFO_COMPONENT_SECRET
valueFrom:
secretKeyRef:
name: jitsi-config
key: JICOFO_COMPONENT_SECRET
- name: JICOFO_AUTH_USER
value: focus
- name: JICOFO_AUTH_PASSWORD
valueFrom:
secretKeyRef:
name: jitsi-config
key: JICOFO_AUTH_PASSWORD
- name: TZ
value: America/Los_Angeles
- name: JVB_BREWERY_MUC
value: jvbbrewery
- name: prosody
image: jitsi/prosody:stable-7001
volumeMounts:
- mountPath: /config
name: prosody-config-volume
imagePullPolicy: IfNotPresent
env:
- name: XMPP_DOMAIN
value: meet.jitsi
- name: XMPP_AUTH_DOMAIN
value: auth.meet.jitsi
- name: XMPP_MUC_DOMAIN
value: muc.meet.jitsi
- name: PUBLIC_URL
value: <hidden>
- name: XMPP_INTERNAL_MUC_DOMAIN
value: internal-muc.meet.jitsi
- name: JICOFO_COMPONENT_SECRET
valueFrom:
secretKeyRef:
name: jitsi-config
key: JICOFO_COMPONENT_SECRET
- name: JVB_AUTH_USER
value: jvb
- name: JVB_AUTH_PASSWORD
valueFrom:
secretKeyRef:
name: jitsi-config
key: JVB_AUTH_PASSWORD
- name: JICOFO_AUTH_USER
value: focus
- name: JICOFO_AUTH_PASSWORD
valueFrom:
secretKeyRef:
name: jitsi-config
key: JICOFO_AUTH_PASSWORD
- name: TZ
value: America/Los_Angeles
- name: JVB_TCP_HARVESTER_DISABLED
value: "true"
- name: web
image: jitsi/web:stable-7001
imagePullPolicy: IfNotPresent
env:
- name: XMPP_SERVER
value: localhost
- name: JICOFO_AUTH_USER
value: focus
- name: PUBLIC_URL
value: <hidden>
- name: XMPP_DOMAIN
value: meet.jitsi
- name: XMPP_AUTH_DOMAIN
value: auth.meet.jitsi
- name: XMPP_INTERNAL_MUC_DOMAIN
value: internal-muc.meet.jitsi
- name: XMPP_BOSH_URL_BASE
value: http://127.0.0.1:5280
- name: XMPP_MUC_DOMAIN
value: muc.meet.jitsi
- name: TZ
value: America/Los_Angeles
- name: JVB_TCP_HARVESTER_DISABLED
value: "true"
- name: jvb
image: jitsi/jvb:stable-7001
volumeMounts:
- mountPath: /config
name: jvb-config-volume
imagePullPolicy: IfNotPresent
env:
- name: XMPP_SERVER
value: localhost
- name: DOCKER_HOST_ADDRESS
value: <hidden>
- name: XMPP_DOMAIN
value: meet.jitsi
- name: XMPP_AUTH_DOMAIN
value: auth.meet.jitsi
- name: XMPP_INTERNAL_MUC_DOMAIN
value: internal-muc.meet.jitsi
- name: PUBLIC_URL
value: <hidden>
# - name: JVB_STUN_SERVERS
# value: stun.l.google.com:19302,stun1.l.google.com:19302,stun2.l.google.com:19302
- name: JICOFO_AUTH_USER
value: focus
- name: JVB_TCP_HARVESTER_DISABLED
value: "true"
- name: JVB_AUTH_USER
value: jvb
- name: JVB_PORT
value: "10000"
- name: JVB_TCP_PORT
value: "4443"
- name: JVB_TCP_MAPPED_PORT
value: "4443"
# - name: JVB_ENABLE_APIS
# value: "rest,colibri"
- name: JVB_AUTH_PASSWORD
valueFrom:
secretKeyRef:
name: jitsi-config
key: JVB_AUTH_PASSWORD
- name: JICOFO_AUTH_PASSWORD
valueFrom:
secretKeyRef:
name: jitsi-config
key: JICOFO_AUTH_PASSWORD
- name: JVB_BREWERY_MUC
value: jvbbrewery
- name: TZ
value: America/Los_Angeles
volumes:
- name: jvb-config-volume
hostPath:
path: /home/jitsi-config/jvb
- name: jicofo-config-volume
hostPath:
path: /home/jitsi-config/jicofo
- name: prosody-config-volume
hostPath:
path: /home/jitsi-config/prosody
EDIT 2
apiVersion: v1
kind: Service
metadata:
labels:
service: web
name: web
namespace: default
spec:
ports:
- name: "http"
protocol: TCP
port: 80
targetPort: 80
nodePort: 31015
- name: "https"
protocol: TCP
port: 443
targetPort: 443
nodePort: 30443
- name: "prosody"
protocol: TCP
port: 5222
targetPort: 5222
- port: 30300
name: jvb-0
protocol: UDP
targetPort: 30300
nodePort: 30300
# - name: "jvbport"
# protocol: TCP
# port: 9090
# targetPort: 9090
- name: "udp"
protocol: UDP
port: 10000
targetPort: 10000
# - name: "udp-secondary"
# protocol: UDP
# port: 20000
# targetPort: 20000
- name: "test"
protocol: TCP
port: 4443
targetPort: 4443
selector:
k8s-app: jitsi
type: NodePort
---
# service for jvbs
# create service for jvb upd access on kubernetes Nodeport starting with 31000.
# Make sure NodePorts between 31000-31005 are available on your kube cluster.
# update this if you need JVBs more than 6.
# JVB-0
apiVersion: v1
kind: Service
metadata:
labels:
service: jvb-0
name: jvb-0
namespace: default
spec:
type: NodePort
externalTrafficPolicy: Cluster
ports:
- port: 31000
name: jvb-0
protocol: UDP
targetPort: 31000
nodePort: 31000
# - name: "udp"
# protocol: UDP
# port: 10000
# targetPort: 10000
# - name: "jvbport"
# protocol: TCP
# port: 9090
# targetPort: 9090
selector:
app: jvb
"statefulset.kubernetes.io/pod-name": jvb-0
---
Managed to fix it. Posting this for anyone who comes across the same issue.
First off, UDP port 10000 does not work as a NodePort in Kubernetes, since NodePorts can only be in the 30000-32767 range. So you need to pick a port within that range and use it for the JVB_PORT configuration in the JVB container.
Secondly, use that port in the Service layer to expose it to the front end:
- name: "udp"
protocol: UDP
port: 31000
targetPort: 31000
nodePort: 31000
Thirdly, regarding the firewall: if you are behind a company firewall, make sure you have allowed ingress and egress for your JVB_PORT.
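Putting the pieces together, a sketch of the matching settings, assuming the 31000 port chosen above:
# in the jvb container of the Deployment
- name: JVB_PORT
  value: "31000"
# and in the Service that fronts it
- name: jvb-udp
  protocol: UDP
  port: 31000
  targetPort: 31000
  nodePort: 31000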

JHipster Kubernetes deployment has no Redis

I have designed several microservices using JHipster JDL Studio, with Redis as the cache.
I want to deploy them using the JHipster Kubernetes and docker-compose generators.
With the docker-compose deployment generation, I see the Redis container in the generated docker-compose.yml.
But in Kubernetes, no Redis service or deployment is generated.
I read the JHipster Kubernetes generator source, but I don't see any Redis generation in the jhipster kubernetes generators and templates.
Is there an issue, or is there a reason for that?
Thanks a lot.
Here is a sample of one microservice.
app.jdl
application {
config {
applicationType microservice
authenticationType jwt
baseName msbooklibrary
blueprints []
buildTool maven
cacheProvider redis
clientPackageManager npm
creationTimestamp 1606242682385
databaseType sql
devDatabaseType h2Memory
dtoSuffix DTO
embeddableLaunchScript false
enableHibernateCache true
enableSwaggerCodegen true
enableTranslation false
jhiPrefix jhi
jhipsterVersion "6.10.5"
languages [en, fr]
messageBroker kafka
nativeLanguage en
otherModules []
packageName fr.XXXX
prodDatabaseType postgresql
searchEngine elasticsearch
serverPort 9000
serviceDiscoveryType eureka
skipClient true
skipUserManagement true
testFrameworks [gatling, cucumber]
websocket false
}
entities Book
}
docker-compose.yml
msbooklibrary:
image: msbooklibrary
environment:
- _JAVA_OPTIONS=-Xmx512m -Xms256m
- 'SPRING_PROFILES_ACTIVE=prod,swagger'
- MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED=true
- 'EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/eureka'
- 'SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/config'
- 'SPRING_DATASOURCE_URL=jdbc:postgresql://msbooklibrary-postgresql:5432/msbooklibrary'
- 'JHIPSTER_CACHE_REDIS_SERVER=redis://msbooklibrary-redis:6379'
- JHIPSTER_CACHE_REDIS_CLUSTER=false
- JHIPSTER_SLEEP=30
- 'SPRING_DATA_JEST_URI=http://msbooklibrary-elasticsearch:9200'
- 'SPRING_ELASTICSEARCH_REST_URIS=http://msbooklibrary-elasticsearch:9200'
- 'KAFKA_BOOTSTRAPSERVERS=kafka:9092'
- JHIPSTER_REGISTRY_PASSWORD=admin
msbooklibrary-postgresql:
image: 'postgres:12.3'
environment:
- POSTGRES_USER=msbooklibrary
- POSTGRES_PASSWORD=
- POSTGRES_HOST_AUTH_METHOD=trust
msbooklibrary-elasticsearch:
image: 'docker.elastic.co/elasticsearch/elasticsearch:6.8.8'
environment:
- ES_JAVA_OPTS=-Xms1024m -Xmx1024m
- discovery.type=single-node
msbooklibrary-redis:
image: 'redis:6.0.4'
msbooklibrary-deployment.yml // kubernetes
apiVersion: apps/v1
kind: Deployment
metadata:
name: msbooklibrary
namespace: msdmall
spec:
replicas: 1
selector:
matchLabels:
app: msbooklibrary
version: 'v1'
template:
metadata:
labels:
app: msbooklibrary
version: 'v1'
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- msbooklibrary
topologyKey: kubernetes.io/hostname
weight: 100
initContainers:
- name: init-ds
image: busybox:latest
command:
- '/bin/sh'
- '-c'
- |
while true
do
rt=$(nc -z -w 1 msbooklibrary-postgresql 5432)
if [ $? -eq 0 ]; then
echo "DB is UP"
break
fi
echo "DB is not yet reachable;sleep for 10s before retry"
sleep 10
done
containers:
- name: msbooklibrary-app
image: dockerregistry/msbooklibrary
env:
- name: SPRING_PROFILES_ACTIVE
value: prod
- name: SPRING_CLOUD_CONFIG_URI
value: http://admin:${jhipster.registry.password}@jhipster-registry.msdmall.svc.cluster.local:8761/config
- name: JHIPSTER_REGISTRY_PASSWORD
valueFrom:
secretKeyRef:
name: registry-secret
key: registry-admin-password
- name: EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE
value: http://admin:${jhipster.registry.password}@jhipster-registry.msdmall.svc.cluster.local:8761/eureka/
- name: SPRING_DATASOURCE_URL
value: jdbc:postgresql://msbooklibrary-postgresql.msdmall.svc.cluster.local:5432/msbooklibrary
- name: SPRING_DATASOURCE_USERNAME
value: msbooklibrary
- name: SPRING_DATASOURCE_PASSWORD
valueFrom:
secretKeyRef:
name: msbooklibrary-postgresql
key: postgresql-password
- name: SPRING_DATA_JEST_URI
value: http://msbooklibrary-elasticsearch.msdmall.svc.cluster.local:9200
- name: SPRING_ELASTICSEARCH_REST_URIS
value: http://msbooklibrary-elasticsearch.msdmall.svc.cluster.local:9200
- name: KAFKA_CONSUMER_KEY_DESERIALIZER
value: 'org.apache.kafka.common.serialization.StringDeserializer'
- name: KAFKA_CONSUMER_VALUE_DESERIALIZER
value: 'org.apache.kafka.common.serialization.StringDeserializer'
- name: KAFKA_CONSUMER_BOOTSTRAP_SERVERS
value: 'jhipster-kafka.msdmall.svc.cluster.local:9092'
- name: KAFKA_CONSUMER_GROUP_ID
value: 'msbooklibrary'
- name: KAFKA_CONSUMER_AUTO_OFFSET_RESET
value: 'earliest'
- name: KAFKA_PRODUCER_BOOTSTRAP_SERVERS
value: 'jhipster-kafka.msdmall.svc.cluster.local:9092'
- name: KAFKA_PRODUCER_KEY_DESERIALIZER
value: 'org.apache.kafka.common.serialization.StringDeserializer'
- name: KAFKA_PRODUCER_VALUE_DESERIALIZER
value: 'org.apache.kafka.common.serialization.StringDeserializer'
- name: SPRING_SLEUTH_PROPAGATION_KEYS
value: 'x-request-id,x-ot-span-context'
- name: JAVA_OPTS
value: ' -Xmx256m -Xms256m'
resources:
requests:
memory: '512Mi'
cpu: '500m'
limits:
memory: '1Gi'
cpu: '1'
ports:
- name: http
containerPort: 9000
readinessProbe:
httpGet:
path: /management/health
port: http
initialDelaySeconds: 20
periodSeconds: 15
failureThreshold: 6
livenessProbe:
httpGet:
path: /management/health
port: http
initialDelaySeconds: 120
msbooklibrary-service.yml
apiVersion: v1
kind: Service
metadata:
name: msbooklibrary
namespace: msdmall
labels:
app: msbooklibrary
spec:
selector:
app: msbooklibrary
ports:
- name: http
port: 9000
I can't recall any specific reason; I guess it was just forgotten. Can you open an issue on GitHub?
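Until the generator covers it, a workaround is to add the Redis objects by hand next to the generated descriptors, mirroring the docker-compose service above. This is a hand-written sketch, not generator output; the names and namespace reuse this question's values, and the application Deployment would also need a JHIPSTER_CACHE_REDIS_SERVER env entry pointing at redis://msbooklibrary-redis.msdmall.svc.cluster.local:6379.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: msbooklibrary-redis
  namespace: msdmall
spec:
  replicas: 1
  selector:
    matchLabels:
      app: msbooklibrary-redis
  template:
    metadata:
      labels:
        app: msbooklibrary-redis
    spec:
      containers:
        - name: redis
          image: redis:6.0.4
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: msbooklibrary-redis
  namespace: msdmall
spec:
  selector:
    app: msbooklibrary-redis
  ports:
    - port: 6379
      targetPort: 6379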

K8s: Error in applying yaml file after adding env values

The following YAML file works fine:
apiVersion: apps/v1
kind: Deployment
metadata:
name: something
spec:
replicas: 2
selector:
matchLabels:
app: something
template:
metadata:
labels:
app: something
spec:
volumes:
- name: shared-logs
emptyDir: {}
containers:
- name: something
image: docker.io/manuchadha25/something
volumeMounts:
- name: shared-logs
mountPath: /deploy/codingjediweb-1.0/logs/
env:
- name: DB_CASSANDRA_URI
value: cassandra://34.91.5.44
- name: DB_PASSWORD
value: something
- name: DB_KEYSPACE_NAME
value: something
- name: DB_USERNAME
value: something
- name: EMAIL_SERVER
value: something
- name: EMAIL_USER
value: something
- name: EMAIL_PASSWORD
value: something
- name: ALLOWED_NODES
value: 34.105.134.5
ports:
- containerPort: 9000
#- name: logging
# image: busybox
#volumeMounts:
# - name: shared-logs
# mountPath: /deploy/codingjediweb-1.0/logs/
#command: ['sh', '-c', "while true; do sleep 86400; done"]
But when I add the following two lines to the env section, I get an error:
apiVersion: apps/v1
kind: Deployment
metadata:
name: something
spec:
replicas: 2
selector:
matchLabels:
app: something
template:
metadata:
labels:
app: something
spec:
volumes:
- name: shared-logs
emptyDir: {}
containers:
- name: something
image: docker.io/manuchadha25/something
volumeMounts:
- name: shared-logs
mountPath: /deploy/codingjediweb-1.0/logs/
env:
- name: DB_CASSANDRA_URI
value: cassandra://34.91.5.44
- name: DB_CASSANDRA_PORT <--- NEW LINE
value: 9042<--- NEW LINE
- name: DB_PASSWORD
value: something
- name: DB_KEYSPACE_NAME
value: something
- name: DB_USERNAME
value: something
- name: EMAIL_SERVER
value: something
- name: EMAIL_USER
value: something
- name: EMAIL_PASSWORD
value: something
- name: ALLOWED_NODES
value: 34.105.134.5
ports:
- containerPort: 9000
#- name: logging
# image: busybox
#volumeMounts:
# - name: shared-logs
# mountPath: /deploy/codingjediweb-1.0/logs/
#command: ['sh', '-c', "while true; do sleep 86400; done"]
$ kubectl apply -f codingjediweb-nodes.yaml
Error from server (BadRequest): error when creating "codingjediweb-nodes.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found 9, error found in #10 byte of ...|,"value":9042},{"nam|..., bigger context ...|.1.85.10"},{"name":"DB_CASSANDRA_PORT","value":9042},{"name":"DB_PASSWORD","value":"1GFGc1Q|...
An online YAML validator reports that the YAML is correct.
What am I doing wrong?
Could you please put 9042 in double quotes ("9042") and try again? The field expects a string but is getting a number instead, so the value needs to be quoted.
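For example, the corrected pair of lines in the Deployment above would be:
- name: DB_CASSANDRA_PORT
  value: "9042"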

Getting "cannot init crypto" while deploying hyperledger fabric peer to Kubernetes

I am trying to deploy a one-peer Hyperledger Fabric network to Kubernetes on GCP, and while deploying the peer I am getting the error:
"Cannot run peer because cannot init crypto, missing /var/msp folder"
I tried mounting the MSP material, but it is not working.
These are the peer configs:
apiVersion: apps/v1
kind: Deployment
metadata:
name: peer0
spec:
replicas: 1
selector:
matchLabels:
app: peer0
template:
metadata:
labels:
app: peer0
tier: backend
track: stable
spec:
hostAliases:
- ip: "10.128.0.3"
hostnames:
- "peer0.example.com"
- ip: "10.128.0.3"
hostnames:
- "couchdb0"
- ip: "10.128.0.4"
hostnames:
- "orderer0.orderer.com"
nodeSelector:
id: peer
containers:
- name: peer0
image: "hyperledger/fabric-peer:1.2.0"
ports:
- name: peer0-port
containerPort: 30002
- name: peer0-chaincode
containerPort: 30003
- name: peer0-event
containerPort: 30004
workingDir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: ["peer"]
args: ["node","start"]
env:
- name: CORE_VM_ENDPOINT
value: "unix:///var/run/docker.sock"
- name: CORE_PEER_ADDRESSAUTODETECT
value: "true"
- name: CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE
value: "bridge"
- name: CORE_PEER_ID
value: "peer0.example.com"
- name: CORE_PEER_ADDRESS
value: "peer0.example.com:30002"
- name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
value: "peer0.example.com:30002"
- name: CORE_PEER_CHAINCODELISTENADDRESS
value: "0.0.0.0:30003"
- name: CORE_PEER_GOSSIP_BOOTSTRAP
value: "0.0.0.0:30002"
- name: CORE_PEER_LISTENADDRESS
value: "0.0.0.0:30002"
- name: CORE_PEER_EVENTS_ADDRESS
value: "0.0.0.0:30004"
- name: CORE_PEER_LOCALMSPID
value: "exampleMSP"
- name: CORE_LOGGING_GOSSIP
value: "INFO"
- name: CORE_LOGGING_PEER_GOSSIP
value: "INFO"
- name: CORE_LOGGING_MSP
value: "INFO"
- name: CORE_LOGGING_POLICIES
value: "DEBUG"
- name: CORE_LOGGING_CAUTHDSL
value: "DEBUG"
- name: CORE_PEER_TLS_ENABLED
value: "false"
- name: CORE_PEER_GOSSIP_USELEADERELECTION
value: "true"
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_PEER_PROFILE_ENABLED
value: "true"
- name: CORE_LEDGER_STATE_STATEDATABASE
value: "CouchDB"
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS
value: "couchdb0:30005"
- name: ORDERER_URL
value: "orderer0.orderer.com:30001"
- name: CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME
value: ""
- name: CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD
value: ""
- name: CORE_VM_DOCKER_ATTACHSTDOUT
value: "true"
- name: CORE_PEER_FILESYSTEMPATH
value: "/var/production"
- name: CORE_PEER_MSPCONFIGPATH
#value: "/var/msp"
value: "/var/msp"
volumeMounts:
- name: peer0-volume
mountPath: /var
- name: host
mountPath: /var/run
volumes:
- name: peer0-volume
#persistentVolumeClaim:
# claimName: peer0-pvc
- name: host
hostPath:
path: /var/run
Referencing James' comment:
"I resolved it , it was happening due to files not getting mount inside the container , I have added separate mount points for that and it worked fine."
It might be helpful to try kubechain from npm.