I have designed several microservices using JHipster JDL Studio, with Redis as the cache provider.
I want to deploy them using the JHipster Kubernetes and Docker Compose generators.
With the Docker Compose deployment generation, I see a Redis container in the generated docker-compose.yml.
But in Kubernetes, no Redis service or deployment is generated.
I read the JHipster Kubernetes generator source, but I don't see any Redis generation in the JHipster Kubernetes generators and templates.
Is this an issue, or is there a reason for it?
Thanks a lot.
Here is a sample of one microservice:
app.jdl
application {
config {
applicationType microservice
authenticationType jwt
baseName msbooklibrary
blueprints []
buildTool maven
cacheProvider redis
clientPackageManager npm
creationTimestamp 1606242682385
databaseType sql
devDatabaseType h2Memory
dtoSuffix DTO
embeddableLaunchScript false
enableHibernateCache true
enableSwaggerCodegen true
enableTranslation false
jhiPrefix jhi
jhipsterVersion "6.10.5"
languages [en, fr]
messageBroker kafka
nativeLanguage en
otherModules []
packageName fr.XXXX
prodDatabaseType postgresql
searchEngine elasticsearch
serverPort 9000
serviceDiscoveryType eureka
skipClient true
skipUserManagement true
testFrameworks [gatling, cucumber]
websocket false
}
entities Book
}
docker-compose.yml
msbooklibrary:
image: msbooklibrary
environment:
- _JAVA_OPTIONS=-Xmx512m -Xms256m
- 'SPRING_PROFILES_ACTIVE=prod,swagger'
- MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED=true
- 'EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/eureka'
- 'SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/config'
- 'SPRING_DATASOURCE_URL=jdbc:postgresql://msbooklibrary-postgresql:5432/msbooklibrary'
- 'JHIPSTER_CACHE_REDIS_SERVER=redis://msbooklibrary-redis:6379'
- JHIPSTER_CACHE_REDIS_CLUSTER=false
- JHIPSTER_SLEEP=30
- 'SPRING_DATA_JEST_URI=http://msbooklibrary-elasticsearch:9200'
- 'SPRING_ELASTICSEARCH_REST_URIS=http://msbooklibrary-elasticsearch:9200'
- 'KAFKA_BOOTSTRAPSERVERS=kafka:9092'
- JHIPSTER_REGISTRY_PASSWORD=admin
msbooklibrary-postgresql:
image: 'postgres:12.3'
environment:
- POSTGRES_USER=msbooklibrary
- POSTGRES_PASSWORD=
- POSTGRES_HOST_AUTH_METHOD=trust
msbooklibrary-elasticsearch:
image: 'docker.elastic.co/elasticsearch/elasticsearch:6.8.8'
environment:
- ES_JAVA_OPTS=-Xms1024m -Xmx1024m
- discovery.type=single-node
msbooklibrary-redis:
image: 'redis:6.0.4'
msbooklibrary-deployment.yml (Kubernetes)
apiVersion: apps/v1
kind: Deployment
metadata:
name: msbooklibrary
namespace: msdmall
spec:
replicas: 1
selector:
matchLabels:
app: msbooklibrary
version: 'v1'
template:
metadata:
labels:
app: msbooklibrary
version: 'v1'
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- msbooklibrary
topologyKey: kubernetes.io/hostname
weight: 100
initContainers:
- name: init-ds
image: busybox:latest
command:
- '/bin/sh'
- '-c'
- |
while true
do
rt=$(nc -z -w 1 msbooklibrary-postgresql 5432)
if [ $? -eq 0 ]; then
echo "DB is UP"
break
fi
echo "DB is not yet reachable;sleep for 10s before retry"
sleep 10
done
containers:
- name: msbooklibrary-app
image: dockerregistry/msbooklibrary
env:
- name: SPRING_PROFILES_ACTIVE
value: prod
- name: SPRING_CLOUD_CONFIG_URI
value: http://admin:${jhipster.registry.password}@jhipster-registry.msdmall.svc.cluster.local:8761/config
- name: JHIPSTER_REGISTRY_PASSWORD
valueFrom:
secretKeyRef:
name: registry-secret
key: registry-admin-password
- name: EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE
value: http://admin:${jhipster.registry.password}@jhipster-registry.msdmall.svc.cluster.local:8761/eureka/
- name: SPRING_DATASOURCE_URL
value: jdbc:postgresql://msbooklibrary-postgresql.msdmall.svc.cluster.local:5432/msbooklibrary
- name: SPRING_DATASOURCE_USERNAME
value: msbooklibrary
- name: SPRING_DATASOURCE_PASSWORD
valueFrom:
secretKeyRef:
name: msbooklibrary-postgresql
key: postgresql-password
- name: SPRING_DATA_JEST_URI
value: http://msbooklibrary-elasticsearch.msdmall.svc.cluster.local:9200
- name: SPRING_ELASTICSEARCH_REST_URIS
value: http://msbooklibrary-elasticsearch.msdmall.svc.cluster.local:9200
- name: KAFKA_CONSUMER_KEY_DESERIALIZER
value: 'org.apache.kafka.common.serialization.StringDeserializer'
- name: KAFKA_CONSUMER_VALUE_DESERIALIZER
value: 'org.apache.kafka.common.serialization.StringDeserializer'
- name: KAFKA_CONSUMER_BOOTSTRAP_SERVERS
value: 'jhipster-kafka.msdmall.svc.cluster.local:9092'
- name: KAFKA_CONSUMER_GROUP_ID
value: 'msbooklibrary'
- name: KAFKA_CONSUMER_AUTO_OFFSET_RESET
value: 'earliest'
- name: KAFKA_PRODUCER_BOOTSTRAP_SERVERS
value: 'jhipster-kafka.msdmall.svc.cluster.local:9092'
- name: KAFKA_PRODUCER_KEY_DESERIALIZER
value: 'org.apache.kafka.common.serialization.StringDeserializer'
- name: KAFKA_PRODUCER_VALUE_DESERIALIZER
value: 'org.apache.kafka.common.serialization.StringDeserializer'
- name: SPRING_SLEUTH_PROPAGATION_KEYS
value: 'x-request-id,x-ot-span-context'
- name: JAVA_OPTS
value: ' -Xmx256m -Xms256m'
resources:
requests:
memory: '512Mi'
cpu: '500m'
limits:
memory: '1Gi'
cpu: '1'
ports:
- name: http
containerPort: 9000
readinessProbe:
httpGet:
path: /management/health
port: http
initialDelaySeconds: 20
periodSeconds: 15
failureThreshold: 6
livenessProbe:
httpGet:
path: /management/health
port: http
initialDelaySeconds: 120
msbooklibrary-service.yml
apiVersion: v1
kind: Service
metadata:
name: msbooklibrary
namespace: msdmall
labels:
app: msbooklibrary
spec:
selector:
app: msbooklibrary
ports:
- name: http
port: 9000
I can't recall any specific reason; I guess it was just forgotten. Could you open an issue on GitHub?
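In the meantime, a possible workaround is to write a small Redis Deployment and Service by hand, drop them next to the generated Kubernetes files, and point the microservice at the Service. A minimal sketch, assuming the msdmall namespace and reusing the redis:6.0.4 image and the msbooklibrary-redis name from the docker-compose setup (the file name and labels are only illustrative; nothing below is generated by JHipster):
msbooklibrary-redis.yml (hand-written)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: msbooklibrary-redis
  namespace: msdmall
spec:
  replicas: 1
  selector:
    matchLabels:
      app: msbooklibrary-redis
  template:
    metadata:
      labels:
        app: msbooklibrary-redis
    spec:
      containers:
        - name: redis
          image: redis:6.0.4
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: msbooklibrary-redis
  namespace: msdmall
spec:
  selector:
    app: msbooklibrary-redis
  ports:
    - name: redis
      port: 6379
The application Deployment would then need the same cache settings the docker-compose file sets, for example env entries JHIPSTER_CACHE_REDIS_SERVER=redis://msbooklibrary-redis.msdmall.svc.cluster.local:6379 and JHIPSTER_CACHE_REDIS_CLUSTER=false.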
Related
How do we configure Keycloak to use an external Postgres (AWS RDS)?
We deployed it in Kubernetes using the Quarkus distribution and updated the DB environment variables in our deployment.yaml; however, it is still using the local H2 database and not Postgres.
For a better understanding, here is the deployment.yaml file we are using:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "5"
kubectl.kubernetes.io/last-applied-configuration: |
creationTimestamp: "2022-06-21T16:47:29Z"
generation: 5
labels:
app: keycloak
name: keycloak
namespace: kc***
resourceVersion: "29233550"
uid: 3634683e-657c-4278-9002-82a3ce64b968
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: keycloak
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: keycloak
spec:
containers:
- args:
- start
- --hostname=kc-test.k8.com
- --https-certificate-file=/opt/pem/cert-pem/cert.pem
- --https-certificate-key-file=/opt/pem/key-pem/key.pem
- --log-level=DEBUG
env:
- name: KEYCLOAK_ADMIN
value: ****
- name: KEYCLOAK_ADMIN_PASSWORD
value: *****
- name: PROXY_ADDRESS_FORWARDING
value: "true"
- name: DB_ADDR
value: jdbc:postgresql://database.c**7irl*****.us-east-1.rds.amazonaws.com/database
- name: DB_DATABASE
value: ****
- name: DB_USER
value: postgres
- name: DB_SCHEMA
value: public
- name: DB_VENDOR
value: POSTGRES
- name: JGROUPS_DISCOVERY_PROTOCOL
value: dns.DNS_PING
- name: JGROUPS_DISCOVERY_PROPERTIES
value: dns_query=keycloak
- name: CACHE_OWNERS_COUNT
value: "2"
- name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
value: "2"
image: quay.io/keycloak/keycloak:17.0.0
imagePullPolicy: IfNotPresent
name: keycloak
ports:
- containerPort: 7600
name: jgroups
protocol: TCP
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 8443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /realms/master
port: 8443
scheme: HTTPS
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
resources: {}
securityContext:
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /opt/pem/key-pem
name: key-pem
- mountPath: /opt/pem/cert-pem
name: cert-pem
- mountPath: /opt/keycloak/data
name: keydata
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: key-pem
name: key-pem
- configMap:
defaultMode: 420
name: cert-pem
name: cert-pem
- emptyDir: {}
name: keydata
status:
availableReplicas: 3
conditions:
- lastTransitionTime: "2022-06-21T18:02:32Z"
lastUpdateTime: "2022-06-21T18:02:32Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2022-06-21T18:01:53Z"
lastUpdateTime: "2022-06-21T18:16:41Z"
message: ReplicaSet "keycloak-5c84476694" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 5
readyReplicas: 3
replicas: 3
updatedReplicas: 3
Is your external DB also in the same namespace?
If yes, you can use the approach below.
The Kubernetes Secret for the external Postgres (AWS RDS), here called database-secret-name, contains all the details referenced below.
With this method, the values are fetched dynamically from the Secret.
env:
- name: DB_DATABASE
valueFrom:
secretKeyRef:
name: database-secret-name
key: dbname
- name: DB_ADDR
valueFrom:
secretKeyRef:
name: database-secret-name
key: host
- name: DB_PORT
valueFrom:
secretKeyRef:
name: database-secret-name
key: port
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: database-secret-name
key: password
- name: DB_USER
valueFrom:
secretKeyRef:
name: database-secret-name
key: user
If your external Postgres is in a different namespace, copy your database Secret to the Keycloak namespace and give it a try.
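For illustration only, the Secret referenced above might look like this in the Keycloak namespace (the name database-secret-name, the key names, and every value are placeholders matching the env block above, not something Keycloak or RDS creates for you):
apiVersion: v1
kind: Secret
metadata:
  name: database-secret-name
  namespace: kc-namespace   # replace with your Keycloak namespace
type: Opaque
stringData:                 # stringData avoids manual base64 encoding
  dbname: keycloak
  host: your-rds-endpoint.us-east-1.rds.amazonaws.com
  port: "5432"
  user: postgres
  password: change-me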
DB_ADDR is an environment variable for Keycloak versions 16 and earlier. Use the documentation for your Keycloak version: https://www.keycloak.org/server/all-config
Keycloak 17+ has KC_DB_URL:
db-url
The full database JDBC URL.
If not provided, a default URL is set based on the selected database vendor. For instance, if using 'postgres', the default JDBC URL would be 'jdbc:postgresql://localhost/keycloak'.
CLI: --db-url
Env: KC_DB_URL
Of course, also configure the other environment variables for your Keycloak version properly.
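As a sketch, on the Quarkus distribution (Keycloak 17+) the DB_* variables from the deployment above would be replaced with the KC_DB* ones; the RDS endpoint and secret name below are placeholders:
env:
  - name: KC_DB
    value: postgres
  - name: KC_DB_URL
    value: jdbc:postgresql://your-rds-endpoint.us-east-1.rds.amazonaws.com:5432/keycloak
  - name: KC_DB_USERNAME
    value: postgres
  - name: KC_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: database-secret-name   # placeholder Secret holding the DB password
        key: password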
The following YAML file works fine:
apiVersion: apps/v1
kind: Deployment
metadata:
name: something
spec:
replicas: 2
selector:
matchLabels:
app: something
template:
metadata:
labels:
app: something
spec:
volumes:
- name: shared-logs
emptyDir: {}
containers:
- name: something
image: docker.io/manuchadha25/something
volumeMounts:
- name: shared-logs
mountPath: /deploy/codingjediweb-1.0/logs/
env:
- name: DB_CASSANDRA_URI
value: cassandra://34.91.5.44
- name: DB_PASSWORD
value: something
- name: DB_KEYSPACE_NAME
value: something
- name: DB_USERNAME
value: something
- name: EMAIL_SERVER
value: something
- name: EMAIL_USER
value: something
- name: EMAIL_PASSWORD
value: something
- name: ALLOWED_NODES
value: 34.105.134.5
ports:
- containerPort: 9000
#- name: logging
# image: busybox
#volumeMounts:
# - name: shared-logs
# mountPath: /deploy/codingjediweb-1.0/logs/
#command: ['sh', '-c', "while true; do sleep 86400; done"]
But when I add the following two lines to the env section, I get an error:
apiVersion: apps/v1
kind: Deployment
metadata:
name: something
spec:
replicas: 2
selector:
matchLabels:
app: something
template:
metadata:
labels:
app: something
spec:
volumes:
- name: shared-logs
emptyDir: {}
containers:
- name: something
image: docker.io/manuchadha25/something
volumeMounts:
- name: shared-logs
mountPath: /deploy/codingjediweb-1.0/logs/
env:
- name: DB_CASSANDRA_URI
value: cassandra://34.91.5.44
- name: DB_CASSANDRA_PORT <--- NEW LINE
value: 9042 <--- NEW LINE
- name: DB_PASSWORD
value: something
- name: DB_KEYSPACE_NAME
value: something
- name: DB_USERNAME
value: something
- name: EMAIL_SERVER
value: something
- name: EMAIL_USER
value: something
- name: EMAIL_PASSWORD
value: something
- name: ALLOWED_NODES
value: 34.105.134.5
ports:
- containerPort: 9000
#- name: logging
# image: busybox
#volumeMounts:
# - name: shared-logs
# mountPath: /deploy/codingjediweb-1.0/logs/
#command: ['sh', '-c', "while true; do sleep 86400; done"]
$ kubectl apply -f codingjediweb-nodes.yaml
Error from server (BadRequest): error when creating "codingjediweb-nodes.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found 9, error found in #10 byte of ...|,"value":9042},{"nam|..., bigger context ...|.1.85.10"},{"name":"DB_CASSANDRA_PORT","value":9042},{"name":"DB_PASSWORD","value":"1GFGc1Q|...
A YAML validation website confirms that the YAML is correct.
What am I doing wrong?
Could you please put 9042 in double quotes ("9042") and try again? The field expects a string but is getting a number instead, so the value needs to be quoted.
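In other words, the new entry would become (only the quotes around the port value change):
- name: DB_CASSANDRA_PORT
  value: "9042"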
Below are the peer logs:
2019-12-06 07:00:31.121 UTC [core.comm] ServerHandshake -> ERRO fa975 TLS handshake failed with error EOF server=ChaincodeServer remoteaddress=192.168.131.215:25731
2019-12-06 07:00:31.215 UTC [core.comm] ServerHandshake -> ERRO fa976 TLS handshake failed with error EOF server=ChaincodeServer remoteaddress=192.168.131.215:20784
2019-12-06 07:00:31.301 UTC [core.comm] ServerHandshake -> ERRO fa977 TLS handshake failed with error EOF server=PeerServer remoteaddress=192.168.131.215:8059
2019-12-06 07:00:31.512 UTC [core.comm] ServerHandshake -> ERRO fa978 TLS handshake failed with error EOF server=ChaincodeServer remoteaddress=192.168.163.185:46359
2019-12-06 07:00:31.768 UTC [core.comm] ServerHandshake -> ERRO fa979 TLS handshake failed with error EOF server=PeerServer remoteaddress=192.168.131.215:34603
Everything is otherwise working fine; we are able to do transactions on the chaincode.
Can anyone please help us with this issue?
EDITED: 9th Dec. 2019
Below is the peer deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: korg60
name: peer1-korg60
spec:
replicas: 1
strategy: {}
selector:
matchLabels:
app: hyperledger
role: peer
org: korg60
name: peer1-korg60
template:
metadata:
labels:
app: hyperledger
role: peer
org: korg60
name: peer1-korg60
spec:
containers:
- name: couchdb
image: hyperledger/fabric-couchdb:latest
ports:
- containerPort: 5984
- name: peer1-korg60
image: hyperledger/fabric-peer:1.4
env:
- name: FABRIC_CA_CLIENT_HOME
value: /opt/gopath/src/github.com/hyperledger/fabric/peer
- name: FABRIC_CA_CLIENT_TLS_CERTFILES
value: /data/korg60-ca-chain.pem
- name: ENROLLMENT_URL
value: http://peer1:peer1pw@ica-korg60.korg60:7054
- name: PEER_NAME
value: peer1-korg60
- name: PEER_HOME
value: /opt/gopath/src/github.com/hyperledger/fabric/peer
- name: PEER_HOST
value: some.domain.com:7051
- name: PEER_NAME_PASS
value: peer1:peer1pw
- name: CORE_PEER_ADDRESSAUTODETECT
value: "true"
- name: CORE_PEER_ID
value: peer1-korg60
- name: CORE_PEER_ADDRESS
value: some.domain.com:7051
- name: CORE_PEER_LOCALMSPID
value: korg60MSP
- name: CORE_PEER_MSPCONFIGPATH
value: /opt/gopath/src/github.com/hyperledger/fabric/peer/msp
- name: CORE_VM_ENDPOINT
value: unix:///host/var/run/docker.sock
- name: CORE_VM_DOCKER_ATTACHSTDOUT
value: "true"
- name: FABRIC_LOGGING_SPEC
value: "peer=INFO"
- name: CORE_PEER_TLS_ENABLED
value: "true"
- name: CORE_PEER_TLS_CERT_FILE
value: /opt/gopath/src/github.com/hyperledger/fabric/peer/tls/server.crt
- name: CORE_PEER_TLS_KEY_FILE
value: /opt/gopath/src/github.com/hyperledger/fabric/peer/tls/server.key
- name: CORE_PEER_TLS_ROOTCERT_FILE
value: /data/korg60-ca-chain.pem
- name: CORE_PEER_TLS_CLIENTAUTHREQUIRED
value: "false"
- name: CORE_PEER_TLS_CLIENTROOTCAS_FILES
value: /data/korg60-ca-chain.pem
- name: CORE_PEER_TLS_CLIENTCERT_FILE
value: /data/tls/peer1-korg60-client.crt
- name: CORE_PEER_TLS_CLIENTKEY_FILE
value: /data/tls/peer1-korg60-client.key
- name: CORE_PEER_GOSSIP_USELEADERELECTION
value: "true"
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
value: some.domain.com:7051
- name: CORE_PEER_GOSSIP_SKIPHANDSHAKE
value: "true"
- name: CORE_PEER_CHAINCODELISTENADDRESS
value: 0.0.0.0:7052
- name: CORE_LEDGER_STATE_STATEDATABASE
value: CouchDB
- name: CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS
value: localhost:5984
- name: ORG
value: korg60
- name: ORG_ADMIN_CERT
value: /data/orgs/korg60/msp/admincerts/cert.pem
- name: GODEBUG
value: "netdns=go"
ports:
- containerPort: 7051
- containerPort: 7052
- containerPort: 7053
command: ["sh"]
args: ["-c", "/scripts/start-peer.sh 2>&1"]
volumeMounts:
- mountPath: /scripts
name: rca-scripts
- mountPath: /data
name: rca-data
- mountPath: /host/var/run/
name: run
volumes:
- name: rca-scripts
persistentVolumeClaim:
claimName: rca-scripts-korg60-pvc
- name: rca-data
persistentVolumeClaim:
claimName: rca-data-korg60-pvc
- name: run
hostPath:
path: /run
---
apiVersion: v1
kind: Service
metadata:
namespace: korg60
name: peer1-korg60
spec:
selector:
app: hyperledger
role: peer
org: korg60
name: peer1-korg60
type: NodePort
ports:
- name: endpoint
protocol: TCP
port: 7051
targetPort: 7051
nodePort: 30401
- name: endpoint-chaincode
protocol: TCP
port: 7052
targetPort: 7052
nodePort: 30402
Below is the orderer YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: koinearth
name: orderer1-koinearth
spec:
replicas: 1
strategy: {}
selector:
matchLabels:
app: hyperledger
role: orderer
org: koinearth
name: orderer1-koinearth
template:
metadata:
labels:
app: hyperledger
role: orderer
org: koinearth
name: orderer1-koinearth
spec:
containers:
- name: orderer1-koinearth
image: hyperledger/fabric-orderer:1.4
env:
- name: FABRIC_CA_CLIENT_HOME
value: /etc/hyperledger/orderer
- name: FABRIC_CA_CLIENT_TLS_CERTFILES
value: /data/koinearth-ca-chain.pem
- name: FABRIC_LOGGING_SPEC
value: "peer=INFO"
- name: ENROLLMENT_URL
value: http://orderer1:orderer1pw@ica-koinearth.koinearth:7054
- name: ORDERER_HOME
value: /etc/hyperledger/orderer
- name: ORDERER_HOST
value: orderer1-koinearth.koinearth
- name: ORDERER_GENERAL_LISTENADDRESS
value: 0.0.0.0
- name: ORDERER_GENERAL_GENESISMETHOD
value: file
- name: ORDERER_GENERAL_GENESISFILE
value: /data/genesis.block
- name: ORDERER_GENERAL_LOCALMSPID
value: koinearthMSP
- name: ORDERER_GENERAL_LOCALMSPDIR
value: /etc/hyperledger/orderer/msp
- name: ORDERER_GENERAL_TLS_ENABLED
value: "true"
- name: ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED
value: "false"
- name: ORDERER_GENERAL_TLS_PRIVATEKEY
value: /etc/hyperledger/orderer/tls/server.key
- name: ORDERER_GENERAL_TLS_CERTIFICATE
value: /etc/hyperledger/orderer/tls/server.crt
- name: ORDERER_GENERAL_LOGLEVEL
value: debug
- name: ORDERER_DEBUG_BROADCASTTRACEDIR
value: data/logs
- name: ORG
value: koinearth
- name: ORG_ADMIN_CERT
value: /data/orgs/koinearth/msp/admincerts/cert.pem
- name: ORDERER_GENERAL_TLS_ROOTCAS
value: '[/data/koinearth-ca-chain.pem]'
- name: ORDERER_GENERAL_TLS_CLIENTROOTCAS
value: '[/data/koinearth-ca-chain.pem]'
- name: ORDERER_KAFKA_VERBOSE
value: "true"
- name: ORDERER_KAFKA_VERSION
value: 1.0.0
- name: GODEBUG
value: "netdns=go"
ports:
- containerPort: 7050
command: ["sh"]
args: ["-c", "/scripts/start-orderer.sh 2>&1"]
volumeMounts:
- mountPath: /etc/hyperledger/fabric-ca
name: orderer
- mountPath: /scripts
name: rca-scripts
- mountPath: /data
name: rca-data
volumes:
- name: orderer
persistentVolumeClaim:
claimName: orderer-koinearth-pvc
- name: rca-scripts
persistentVolumeClaim:
claimName: rca-scripts-koinearth-pvc
- name: rca-data
persistentVolumeClaim:
claimName: rca-data-koinearth-pvc
---
apiVersion: v1
kind: Service
metadata:
namespace: koinearth
name: orderer1-koinearth
spec:
selector:
app: hyperledger
role: orderer
org: koinearth
name: orderer1-koinearth
type: NodePort
ports:
- name: endpoint
protocol: TCP
port: 7050
targetPort: 7050
nodePort: 30300
Peer and orderer identities are created in the startup scripts and stored locally in the container.
This happens when you are using the wrong certificates.
What are the two parties?
Two peers, or one peer and one orderer?
Or maybe a client?
The two parties must present valid TLS certificates; here one side is using the wrong ones.
I am trying to deploy a one-peer Hyperledger Fabric network to Kubernetes on GCP, and while deploying the peer I am getting this error:
"Cannot run peer because cannot init crypto, missing /var/msp folder"
I tried mounting the MSP material, but it is not working.
This is the peer config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: peer0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: peer0
  template:
metadata:
labels:
app: peer0
tier: backend
track: stable
spec:
hostAliases:
- ip: "10.128.0.3"
hostnames:
- "peer0.example.com"
- ip: "10.128.0.3"
hostnames:
- "couchdb0"
- ip: "10.128.0.4"
hostnames:
- "orderer0.orderer.com"
nodeSelector:
id: peer
containers:
- name: peer0
image: "hyperledger/fabric-peer:1.2.0"
ports:
- name: peer0-port
containerPort: 30002
- name: peer0-chaincode
containerPort: 30003
- name: peer0-event
containerPort: 30004
workingDir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: ["peer"]
args: ["node","start"]
env:
- name: CORE_VM_ENDPOINT
value: "unix:///var/run/docker.sock"
- name: CORE_PEER_ADDRESSAUTODETECT
value: "true"
- name: CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE
value: "bridge"
- name: CORE_PEER_ID
value: "peer0.example.com"
- name: CORE_PEER_ADDRESS
value: "peer0.example.com:30002"
- name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
value: "peer0.example.com:30002"
- name: CORE_PEER_CHAINCODELISTENADDRESS
value: "0.0.0.0:30003"
- name: CORE_PEER_GOSSIP_BOOTSTRAP
value: "0.0.0.0:30002"
- name: CORE_PEER_LISTENADDRESS
value: "0.0.0.0:30002"
- name: CORE_PEER_EVENTS_ADDRESS
value: "0.0.0.0:30004"
- name: CORE_PEER_LOCALMSPID
value: "exampleMSP"
- name: CORE_LOGGING_GOSSIP
value: "INFO"
- name: CORE_LOGGING_PEER_GOSSIP
value: "INFO"
- name: CORE_LOGGING_MSP
value: "INFO"
- name: CORE_LOGGING_POLICIES
value: "DEBUG"
- name: CORE_LOGGING_CAUTHDSL
value: "DEBUG"
- name: CORE_PEER_TLS_ENABLED
value: "false"
- name: CORE_PEER_GOSSIP_USELEADERELECTION
value: "true"
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_PEER_PROFILE_ENABLED
value: "true"
- name: CORE_LEDGER_STATE_STATEDATABASE
value: "CouchDB"
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS
value: "couchdb0:30005"
- name: ORDERER_URL
value: "orderer0.orderer.com:30001"
- name: CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME
value: ""
- name: CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD
value: ""
- name: CORE_VM_DOCKER_ATTACHSTDOUT
value: "true"
- name: CORE_PEER_FILESYSTEMPATH
value: "/var/production"
- name: CORE_PEER_MSPCONFIGPATH
#value: "/var/msp"
value: "/var/msp"
volumeMounts:
- name: peer0-volume
mountPath: /var
- name: host
mountPath: /var/run
volumes:
- name: peer0-volume
#persistentVolumeClaim:
# claimName: peer0-pvc
- name: host
hostPath:
path: /var/run
Referencing James's comment:
"I resolved it; it was happening due to files not getting mounted inside the container. I added separate mount points for that and it worked fine."
It might be helpful to try kubechain from npm.
I'm unable to find good information describing these errors:
[sarah#localhost helm] helm install statefulset --name statefulset --debug
[debug] Created tunnel using local port: '33172'
[debug] SERVER: "localhost:33172"
[debug] Original chart version: ""
[debug] CHART PATH: /home/helm/statefulset/
Error: error validating "": error validating data: [field spec.template for v1beta1.StatefulSetSpec is required, field spec.serviceName for v1beta1.StatefulSetSpec is required, found invalid field containers for v1beta1.StatefulSetSpec]
I'm still new to Helm; I've built two working charts that were similar to this template and didn't have these errors, even though the code isn't much different. I'm thinking there might be some kind of formatting error that I'm not noticing. Either that, or it's due to the different kind (the others were Pods, this is a StatefulSet).
The YAML file it's referencing is here:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.PrimaryName}}"
labels:
name: "{{.Values.PrimaryName}}"
app: "{{.Values.PrimaryName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
#serviceAccount: "{{.Values.PrimaryName}}-sa"
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
protocol: TCP
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
value: "{{.Values.UserPassword}}"
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}
Would someone be able to a) point me in the right direction on how to implement the required spec.template and spec.serviceName fields, b) help me understand why the field 'containers' is invalid, and/or c) mention any tool that can help debug Helm charts? I've tried 'helm lint' and the '--debug' flag, but 'helm lint' shows no errors and the flag's output is just the errors shown above.
Also, is it possible the errors are coming from a different file?
StatefulSet objects have a different structure than Pods do. You need to modify your YAML file a little:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.PrimaryName}}"
labels:
name: "{{.Values.PrimaryName}}"
app: "{{.Values.PrimaryName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
selector:
matchLabels:
app: "" # has to match .spec.template.metadata.labels
serviceName: "" # put your serviceName here
replicas: 1 # by default is 1
template:
metadata:
labels:
app: "" # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
protocol: TCP
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
value: "{{.Values.UserPassword}}
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}