kafka-connect consumer fails with bootstrap broker disconnected - apache-kafka

I have deployed a Kafka Connect cluster and set what I believe are the correct connect properties, but the worker still fails to consume messages with this error:
bootstrap broker backbone.redpanda.svc.cluster.local:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
My kafka-connect Deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
name: kafka-connect
namespace: s3-connector
spec:
replicas: 1
selector:
matchLabels:
app: kafka-connect
template:
metadata:
labels:
app: kafka-connect
spec:
containers:
- name: kafka-connect
image: confluentinc/cp-kafka-connect:6.1.9
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8083
securityContext:
runAsUser: 0
env:
- name: CONNECT_BOOTSTRAP_SERVERS
value: "backbone.redpanda.svc.cluster.local:9092"
- name: CONNECT_KAFKA_HEAP_OPTS
value: "-Xms256M -Xmx2G"
- name: CONNECT_OFFSETS_TOPIC_REPLICATION_FACTOR
value: "1"
- name: CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR
value: "1"
- name: CONNECT_STATUS_STORAGE_REPLICATION_FACTOR
value: "1"
# - name: CONNECT_LOG4J_ROOT_LOGLEVEL
# value: "DEBUG"
- name: CONNECT_REST_PORT
value: "8083"
- name: CONNECT_GROUP_ID
value: "kafka-connect-worker"
- name: CONNECT_CONFIG_STORAGE_TOPIC
value: "kafka-connect-configs"
- name: CONNECT_OFFSET_STORAGE_TOPIC
value: "kafka-connect-offsets"
- name: CONNECT_STATUS_STORAGE_TOPIC
value: "kafka-connect-statuses"
- name: CONNECT_REST_ADVERTISED_HOST_NAME
value: "kafka-connect:8083"
- name: CONNECT_KEY_CONVERTER
value: "org.apache.kafka.connect.json.JsonConverter"
- name: CONNECT_VALUE_CONVERTER
value: "org.apache.kafka.connect.json.JsonConverter"
- name: CONNECT_PLUGIN_PATH
value: "/usr/share/java,/usr/share/confluent-hub-components,/data/connect-jars"
- name: AWS_ACCESS_KEY_ID
value: "---"
- name: AWS_SECRET_ACCESS_KEY
value: "---"
- name: CONNECT_LOG4J_LOGGERS
value: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
- name: CONNECT_SECURITY_PROTOCOL
value: SSL
- name: CONNECT_SSL_ENABLED_PROTOCOLS
value: "TLSv1.2"
- name: CONNECT_SSL_TRUSTSTORE_LOCATION
value: "/var/private/ssl/truststore.jks"
- name: CONNECT_SSL_TRUSTSTORE_PASSWORD
value: "backbone"
volumeMounts:
- name: connect-config-props
mountPath: /home/appuser
- name: h2-bundle
mountPath: /etc/tls-bundle
- mountPath: /var/private/ssl
name: tlscert
command:
- bash
- -c
- |
echo "Installing Connector"
confluent-hub install --no-prompt confluentinc/kafka-connect-s3:10.4.0
confluent-hub install --no-prompt mdrogalis/voluble:0.1.0
#
echo "Launching Kafka Connect worker"
/etc/confluent/docker/run &
#
sleep infinity
volumes:
- name: connect-config-props
configMap:
name: connect-config
- name: h2-bundle
configMap:
name: h2-bundle
- name: truststore-thing
configMap:
name: client-cert
- name: tlscert
secret:
defaultMode: 420
items:
- key: truststore.jks
path: truststore.jks
secretName: redpanda-client-cert
Properties config map:
apiVersion: v1
kind: ConfigMap
metadata:
name: connect-config
namespace: s3-connector
data:
connect.properties: |-
bootstrap.servers=backbone.redpanda.svc.cluster.local:9092
connector.class=io.confluent.connect.s3.S3SinkConnector
tasks.max=1
topics=demo
s3.bucket.name=kafka-connect-s3-demo-devops
s3.region=eu-west-1
flush.size=3
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
schema.compatibility=NONE
security.protocol=SSL
ssl.enabled.protocols=TLSv1.2
ssl.truststore.location=/var/private/ssl/truststore.jks
ssl.truststore.password=backbone
consumer.bootstrap.servers=backbone.redpanda.svc.cluster.local:9092
consumer.security.protocol=SSL
consumer.ssl.truststore.location=/var/private/ssl/truststore.jks
consumer.ssl.truststore.password=backbone
If I explicitly define the properties it works, but the worker does not pick them up from the mounted properties file.
Works fine:
kafka-console-consumer --bootstrap-server backbone.redpanda.svc.cluster.local:9092 --topic demo --from-beginning --consumer.config client.properties
Expected to work:
kafka-console-consumer --bootstrap-server backbone.redpanda.svc.cluster.local:9092 --topic demo --from-beginning
The same issue occurs when creating the S3 sink connector:
curl -i -X PUT -H "Accept:application/json" \
-H "Content-Type:application/json" http://localhost:8083/connectors/confluentinc-kafka-connect-s3/config \
-d '
{
"connector.class": "io.confluent.connect.s3.S3SinkConnector",
"key.converter":"org.apache.kafka.connect.storage.StringConverter",
"tasks.max": "1",
"topics": "demo",
"s3.region": "eu-west-1",
"s3.bucket.name": "kafka-connect-s3-demo-devops",
"flush.size": "1",
"storage.class": "io.confluent.connect.s3.storage.S3Storage",
"format.class": "io.confluent.connect.s3.format.json.JsonFormat",
"partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
}
'

The mounted connect.properties file doesn't do anything here; the Confluent Connect image builds its worker configuration from the CONNECT_* environment variables, not from that file.
CONNECT_BOOTSTRAP_SERVERS is the only setting affecting your error. If the backbone Service does not exist in the redpanda namespace, or if that namespace has a NetworkPolicy that does not allow connections from pods in the s3-connector namespace, the worker will be unable to connect.
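A quick way to check both (a sketch only; it assumes kubectl access and that getent and bash are available in the Connect image):
# Does the Service exist in the redpanda namespace?
kubectl -n redpanda get svc backbone
# Can the Connect pod resolve the name and open a TCP connection to port 9092?
kubectl -n s3-connector exec deploy/kafka-connect -- getent hosts backbone.redpanda.svc.cluster.local
kubectl -n s3-connector exec deploy/kafka-connect -- bash -c 'echo > /dev/tcp/backbone.redpanda.svc.cluster.local/9092 && echo reachable'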

Related

jhipster kubernetes deployment has no redis

I have designed several microservices using JHipster JDL Studio, with Redis cache.
I want to deploy them using the JHipster Kubernetes and docker-compose generators.
With the docker-compose deployment generation, I see a Redis container in the generated docker-compose.yml.
But in the Kubernetes output, no Redis service or app is generated.
I read the JHipster Kubernetes generator source, but I don't see any Redis generation in the JHipster Kubernetes generators and templates.
Is there an issue, or is there a reason for that?
Thanks a lot.
Here is a sample of one microservice:
app.jdl
application {
config {
applicationType microservice
authenticationType jwt
baseName msbooklibrary
blueprints []
buildTool maven
cacheProvider redis
clientPackageManager npm
creationTimestamp 1606242682385
databaseType sql
devDatabaseType h2Memory
dtoSuffix DTO
embeddableLaunchScript false
enableHibernateCache true
enableSwaggerCodegen true
enableTranslation false
jhiPrefix jhi
jhipsterVersion "6.10.5"
languages [en, fr]
messageBroker kafka
nativeLanguage en
otherModules []
packageName fr.XXXX
prodDatabaseType postgresql
searchEngine elasticsearch
serverPort 9000
serviceDiscoveryType eureka
skipClient true
skipUserManagement true
testFrameworks [gatling, cucumber]
websocket false
}
entities Book
}
docker-compose.yml
msbooklibrary:
image: msbooklibrary
environment:
- _JAVA_OPTIONS=-Xmx512m -Xms256m
- 'SPRING_PROFILES_ACTIVE=prod,swagger'
- MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED=true
- 'EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}#jhipster-registry:8761/eureka'
- 'SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}#jhipster-registry:8761/config'
- 'SPRING_DATASOURCE_URL=jdbc:postgresql://msbooklibrary-postgresql:5432/msbooklibrary'
- 'JHIPSTER_CACHE_REDIS_SERVER=redis://msbooklibrary-redis:6379'
- JHIPSTER_CACHE_REDIS_CLUSTER=false
- JHIPSTER_SLEEP=30
- 'SPRING_DATA_JEST_URI=http://msbooklibrary-elasticsearch:9200'
- 'SPRING_ELASTICSEARCH_REST_URIS=http://msbooklibrary-elasticsearch:9200'
- 'KAFKA_BOOTSTRAPSERVERS=kafka:9092'
- JHIPSTER_REGISTRY_PASSWORD=admin
msbooklibrary-postgresql:
image: 'postgres:12.3'
environment:
- POSTGRES_USER=msbooklibrary
- POSTGRES_PASSWORD=
- POSTGRES_HOST_AUTH_METHOD=trust
msbooklibrary-elasticsearch:
image: 'docker.elastic.co/elasticsearch/elasticsearch:6.8.8'
environment:
- ES_JAVA_OPTS=-Xms1024m -Xmx1024m
- discovery.type=single-node
msbooklibrary-redis:
image: 'redis:6.0.4'
msbooklibrary-deployment.yml // kubernetes
apiVersion: apps/v1
kind: Deployment
metadata:
name: msbooklibrary
namespace: msdmall
spec:
replicas: 1
selector:
matchLabels:
app: msbooklibrary
version: 'v1'
template:
metadata:
labels:
app: msbooklibrary
version: 'v1'
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- msbooklibrary
topologyKey: kubernetes.io/hostname
weight: 100
initContainers:
- name: init-ds
image: busybox:latest
command:
- '/bin/sh'
- '-c'
- |
while true
do
rt=$(nc -z -w 1 msbooklibrary-postgresql 5432)
if [ $? -eq 0 ]; then
echo "DB is UP"
break
fi
echo "DB is not yet reachable;sleep for 10s before retry"
sleep 10
done
containers:
- name: msbooklibrary-app
image: dockerregistry/msbooklibrary
env:
- name: SPRING_PROFILES_ACTIVE
value: prod
- name: SPRING_CLOUD_CONFIG_URI
value: http://admin:${jhipster.registry.password}#jhipster-registry.msdmall.svc.cluster.local:8761/config
- name: JHIPSTER_REGISTRY_PASSWORD
valueFrom:
secretKeyRef:
name: registry-secret
key: registry-admin-password
- name: EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE
value: http://admin:${jhipster.registry.password}#jhipster-registry.msdmall.svc.cluster.local:8761/eureka/
- name: SPRING_DATASOURCE_URL
value: jdbc:postgresql://msbooklibrary-postgresql.msdmall.svc.cluster.local:5432/msbooklibrary
- name: SPRING_DATASOURCE_USERNAME
value: msbooklibrary
- name: SPRING_DATASOURCE_PASSWORD
valueFrom:
secretKeyRef:
name: msbooklibrary-postgresql
key: postgresql-password
- name: SPRING_DATA_JEST_URI
value: http://msbooklibrary-elasticsearch.msdmall.svc.cluster.local:9200
- name: SPRING_ELASTICSEARCH_REST_URIS
value: http://msbooklibrary-elasticsearch.msdmall.svc.cluster.local:9200
- name: KAFKA_CONSUMER_KEY_DESERIALIZER
value: 'org.apache.kafka.common.serialization.StringDeserializer'
- name: KAFKA_CONSUMER_VALUE_DESERIALIZER
value: 'org.apache.kafka.common.serialization.StringDeserializer'
- name: KAFKA_CONSUMER_BOOTSTRAP_SERVERS
value: 'jhipster-kafka.msdmall.svc.cluster.local:9092'
- name: KAFKA_CONSUMER_GROUP_ID
value: 'msbooklibrary'
- name: KAFKA_CONSUMER_AUTO_OFFSET_RESET
value: 'earliest'
- name: KAFKA_PRODUCER_BOOTSTRAP_SERVERS
value: 'jhipster-kafka.msdmall.svc.cluster.local:9092'
- name: KAFKA_PRODUCER_KEY_DESERIALIZER
value: 'org.apache.kafka.common.serialization.StringDeserializer'
- name: KAFKA_PRODUCER_VALUE_DESERIALIZER
value: 'org.apache.kafka.common.serialization.StringDeserializer'
- name: SPRING_SLEUTH_PROPAGATION_KEYS
value: 'x-request-id,x-ot-span-context'
- name: JAVA_OPTS
value: ' -Xmx256m -Xms256m'
resources:
requests:
memory: '512Mi'
cpu: '500m'
limits:
memory: '1Gi'
cpu: '1'
ports:
- name: http
containerPort: 9000
readinessProbe:
httpGet:
path: /management/health
port: http
initialDelaySeconds: 20
periodSeconds: 15
failureThreshold: 6
livenessProbe:
httpGet:
path: /management/health
port: http
initialDelaySeconds: 120
msbooklibrary-service.yml
apiVersion: v1
kind: Service
metadata:
name: msbooklibrary
namespace: msdmall
labels:
app: msbooklibrary
spec:
selector:
app: msbooklibrary
ports:
- name: http
port: 9000
I can't recall any specific reason; I guess it was just forgotten. Can you open an issue on GitHub?
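Until the generator supports it, Redis can be added by hand. A minimal sketch, assuming the service name msbooklibrary-redis (matching the docker-compose service above) and the msdmall namespace used by the other manifests:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: msbooklibrary-redis
  namespace: msdmall
spec:
  replicas: 1
  selector:
    matchLabels:
      app: msbooklibrary-redis
  template:
    metadata:
      labels:
        app: msbooklibrary-redis
    spec:
      containers:
        - name: redis
          image: redis:6.0.4   # same image version as the generated docker-compose.yml
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: msbooklibrary-redis
  namespace: msdmall
spec:
  selector:
    app: msbooklibrary-redis
  ports:
    - port: 6379
      targetPort: 6379
The microservice would then point at it with something like JHIPSTER_CACHE_REDIS_SERVER=redis://msbooklibrary-redis.msdmall.svc.cluster.local:6379, mirroring the docker-compose environment variable above (the exact variable name for your JHipster version is an assumption to verify).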

K8s: Error in applying yaml file after adding env values

The following yaml file works fine
apiVersion: apps/v1
kind: Deployment
metadata:
name: something
spec:
replicas: 2
selector:
matchLabels:
app: something
template:
metadata:
labels:
app: something
spec:
volumes:
- name: shared-logs
emptyDir: {}
containers:
- name: something
image: docker.io/manuchadha25/something
volumeMounts:
- name: shared-logs
mountPath: /deploy/codingjediweb-1.0/logs/
env:
- name: DB_CASSANDRA_URI
value: cassandra://34.91.5.44
- name: DB_PASSWORD
value: something
- name: DB_KEYSPACE_NAME
value: something
- name: DB_USERNAME
value: something
- name: EMAIL_SERVER
value: something
- name: EMAIL_USER
value: something
- name: EMAIL_PASSWORD
value: something
- name: ALLOWED_NODES
value: 34.105.134.5
ports:
- containerPort: 9000
#- name: logging
# image: busybox
#volumeMounts:
# - name: shared-logs
# mountPath: /deploy/codingjediweb-1.0/logs/
#command: ['sh', '-c', "while true; do sleep 86400; done"]
But when I add the following two lines to the env section, I get an error:
apiVersion: apps/v1
kind: Deployment
metadata:
name: something
spec:
replicas: 2
selector:
matchLabels:
app: something
template:
metadata:
labels:
app: something
spec:
volumes:
- name: shared-logs
emptyDir: {}
containers:
- name: something
image: docker.io/manuchadha25/something
volumeMounts:
- name: shared-logs
mountPath: /deploy/codingjediweb-1.0/logs/
env:
- name: DB_CASSANDRA_URI
value: cassandra://34.91.5.44
- name: DB_CASSANDRA_PORT <--- NEW LINE
value: 9042 <--- NEW LINE
- name: DB_PASSWORD
value: something
- name: DB_KEYSPACE_NAME
value: something
- name: DB_USERNAME
value: something
- name: EMAIL_SERVER
value: something
- name: EMAIL_USER
value: something
- name: EMAIL_PASSWORD
value: something
- name: ALLOWED_NODES
value: 34.105.134.5
ports:
- containerPort: 9000
#- name: logging
# image: busybox
#volumeMounts:
# - name: shared-logs
# mountPath: /deploy/codingjediweb-1.0/logs/
#command: ['sh', '-c', "while true; do sleep 86400; done"]
$ kubectl apply -f codingjediweb-nodes.yaml
Error from server (BadRequest): error when creating "codingjediweb-nodes.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found 9, error found in #10 byte of ...|,"value":9042},{"nam|..., bigger context ...|.1.85.10"},{"name":"DB_CASSANDRA_PORT","value":9042},{"name":"DB_PASSWORD","value":"1GFGc1Q|...
An online YAML validator reports that the YAML is valid.
What am I doing wrong?
Could you please put 9042 in double quotes ("9042") and try again? The API server expects a string for an environment variable value and is getting a number instead, so the value needs to be quoted.
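A sketch of the corrected lines; quoting makes 9042 a YAML string, which is what the container env value field requires:
- name: DB_CASSANDRA_PORT
  value: "9042"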

Hyperledger peers with TLS in kubernetes cluster constantly keep throwing TLS handshake errors

Below are the peer logs:
2019-12-06 07:00:31.121 UTC [core.comm] ServerHandshake -> ERRO fa975 TLS handshake failed with error EOF server=ChaincodeServer remoteaddress=192.168.131.215:25731
2019-12-06 07:00:31.215 UTC [core.comm] ServerHandshake -> ERRO fa976 TLS handshake failed with error EOF server=ChaincodeServer remoteaddress=192.168.131.215:20784
2019-12-06 07:00:31.301 UTC [core.comm] ServerHandshake -> ERRO fa977 TLS handshake failed with error EOF server=PeerServer remoteaddress=192.168.131.215:8059
2019-12-06 07:00:31.512 UTC [core.comm] ServerHandshake -> ERRO fa978 TLS handshake failed with error EOF server=ChaincodeServer remoteaddress=192.168.163.185:46359
2019-12-06 07:00:31.768 UTC [core.comm] ServerHandshake -> ERRO fa979 TLS handshake failed with error EOF server=PeerServer remoteaddress=192.168.131.215:34603
Everything else is working fine; we are able to do transactions on the chaincode.
Can anyone please help us with this issue?
EDITED: 9th Dec. 2019
Below is the peer deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: korg60
name: peer1-korg60
spec:
replicas: 1
strategy: {}
selector:
matchLabels:
app: hyperledger
role: peer
org: korg60
name: peer1-korg60
template:
metadata:
labels:
app: hyperledger
role: peer
org: korg60
name: peer1-korg60
spec:
containers:
- name: couchdb
image: hyperledger/fabric-couchdb:latest
ports:
- containerPort: 5984
- name: peer1-korg60
image: hyperledger/fabric-peer:1.4
env:
- name: FABRIC_CA_CLIENT_HOME
value: /opt/gopath/src/github.com/hyperledger/fabric/peer
- name: FABRIC_CA_CLIENT_TLS_CERTFILES
value: /data/korg60-ca-chain.pem
- name: ENROLLMENT_URL
value: http://peer1:peer1pw#ica-korg60.korg60:7054
- name: PEER_NAME
value: peer1-korg60
- name: PEER_HOME
value: /opt/gopath/src/github.com/hyperledger/fabric/peer
- name: PEER_HOST
value: some.domain.com:7051
- name: PEER_NAME_PASS
value: peer1:peer1pw
- name: CORE_PEER_ADDRESSAUTODETECT
value: "true"
- name: CORE_PEER_ID
value: peer1-korg60
- name: CORE_PEER_ADDRESS
value: some.domain.com:7051
- name: CORE_PEER_LOCALMSPID
value: korg60MSP
- name: CORE_PEER_MSPCONFIGPATH
value: /opt/gopath/src/github.com/hyperledger/fabric/peer/msp
- name: CORE_VM_ENDPOINT
value: unix:///host/var/run/docker.sock
- name: CORE_VM_DOCKER_ATTACHSTDOUT
value: "true"
- name: FABRIC_LOGGING_SPEC
value: "peer=INFO"
- name: CORE_PEER_TLS_ENABLED
value: "true"
- name: CORE_PEER_TLS_CERT_FILE
value: /opt/gopath/src/github.com/hyperledger/fabric/peer/tls/server.crt
- name: CORE_PEER_TLS_KEY_FILE
value: /opt/gopath/src/github.com/hyperledger/fabric/peer/tls/server.key
- name: CORE_PEER_TLS_ROOTCERT_FILE
value: /data/korg60-ca-chain.pem
- name: CORE_PEER_TLS_CLIENTAUTHREQUIRED
value: "false"
- name: CORE_PEER_TLS_CLIENTROOTCAS_FILES
value: /data/korg60-ca-chain.pem
- name: CORE_PEER_TLS_CLIENTCERT_FILE
value: /data/tls/peer1-korg60-client.crt
- name: CORE_PEER_TLS_CLIENTKEY_FILE
value: /data/tls/peer1-korg60-client.key
- name: CORE_PEER_GOSSIP_USELEADERELECTION
value: "true"
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
value: some.domain.com:7051
- name: CORE_PEER_GOSSIP_SKIPHANDSHAKE
value: "true"
- name: CORE_PEER_CHAINCODELISTENADDRESS
value: 0.0.0.0:7052
- name: CORE_LEDGER_STATE_STATEDATABASE
value: CouchDB
- name: CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS
value: localhost:5984
- name: ORG
value: korg60
- name: ORG_ADMIN_CERT
value: /data/orgs/korg60/msp/admincerts/cert.pem
- name: GODEBUG
value: "netdns=go"
ports:
- containerPort: 7051
- containerPort: 7052
- containerPort: 7053
command: ["sh"]
args: ["-c", "/scripts/start-peer.sh 2>&1"]
volumeMounts:
- mountPath: /scripts
name: rca-scripts
- mountPath: /data
name: rca-data
- mountPath: /host/var/run/
name: run
volumes:
- name: rca-scripts
persistentVolumeClaim:
claimName: rca-scripts-korg60-pvc
- name: rca-data
persistentVolumeClaim:
claimName: rca-data-korg60-pvc
- name: run
hostPath:
path: /run
---
apiVersion: v1
kind: Service
metadata:
namespace: korg60
name: peer1-korg60
spec:
selector:
app: hyperledger
role: peer
org: korg60
name: peer1-korg60
type: NodePort
ports:
- name: endpoint
protocol: TCP
port: 7051
targetPort: 7051
nodePort: 30401
- name: endpoint-chaincode
protocol: TCP
port: 7052
targetPort: 7052
nodePort: 30402
Below is the orderer YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: koinearth
name: orderer1-koinearth
spec:
replicas: 1
strategy: {}
selector:
matchLabels:
app: hyperledger
role: orderer
org: koinearth
name: orderer1-koinearth
template:
metadata:
labels:
app: hyperledger
role: orderer
org: koinearth
name: orderer1-koinearth
spec:
containers:
- name: orderer1-koinearth
image: hyperledger/fabric-orderer:1.4
env:
- name: FABRIC_CA_CLIENT_HOME
value: /etc/hyperledger/orderer
- name: FABRIC_CA_CLIENT_TLS_CERTFILES
value: /data/koinearth-ca-chain.pem
- name: FABRIC_LOGGING_SPEC
value: "peer=INFO"
- name: ENROLLMENT_URL
value: http://orderer1:orderer1pw#ica-koinearth.koinearth:7054
- name: ORDERER_HOME
value: /etc/hyperledger/orderer
- name: ORDERER_HOST
value: orderer1-koinearth.koinearth
- name: ORDERER_GENERAL_LISTENADDRESS
value: 0.0.0.0
- name: ORDERER_GENERAL_GENESISMETHOD
value: file
- name: ORDERER_GENERAL_GENESISFILE
value: /data/genesis.block
- name: ORDERER_GENERAL_LOCALMSPID
value: koinearthMSP
- name: ORDERER_GENERAL_LOCALMSPDIR
value: /etc/hyperledger/orderer/msp
- name: ORDERER_GENERAL_TLS_ENABLED
value: "true"
- name: ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED
value: "false"
- name: ORDERER_GENERAL_TLS_PRIVATEKEY
value: /etc/hyperledger/orderer/tls/server.key
- name: ORDERER_GENERAL_TLS_CERTIFICATE
value: /etc/hyperledger/orderer/tls/server.crt
- name: ORDERER_GENERAL_LOGLEVEL
value: debug
- name: ORDERER_DEBUG_BROADCASTTRACEDIR
value: data/logs
- name: ORG
value: koinearth
- name: ORG_ADMIN_CERT
value: /data/orgs/koinearth/msp/admincerts/cert.pem
- name: ORDERER_GENERAL_TLS_ROOTCAS
value: '[/data/koinearth-ca-chain.pem]'
- name: ORDERER_GENERAL_TLS_CLIENTROOTCAS
value: '[/data/koinearth-ca-chain.pem]'
- name: ORDERER_KAFKA_VERBOSE
value: "true"
- name: ORDERER_KAFKA_VERSION
value: 1.0.0
- name: GODEBUG
value: "netdns=go"
ports:
- containerPort: 7050
command: ["sh"]
args: ["-c", "/scripts/start-orderer.sh 2>&1"]
volumeMounts:
- mountPath: /etc/hyperledger/fabric-ca
name: orderer
- mountPath: /scripts
name: rca-scripts
- mountPath: /data
name: rca-data
volumes:
- name: orderer
persistentVolumeClaim:
claimName: orderer-koinearth-pvc
- name: rca-scripts
persistentVolumeClaim:
claimName: rca-scripts-koinearth-pvc
- name: rca-data
persistentVolumeClaim:
claimName: rca-data-koinearth-pvc
---
apiVersion: v1
kind: Service
metadata:
namespace: koinearth
name: orderer1-koinearth
spec:
selector:
app: hyperledger
role: orderer
org: koinearth
name: orderer1-koinearth
type: NodePort
ports:
- name: endpoint
protocol: TCP
port: 7050
targetPort: 7050
nodePort: 30300
Peer and orderer identity is created in the startup scripts and stored locally in the container.
This happens when you are using the wrong certificates.
What are the two parties: two peers, one peer and one orderer, or perhaps a client?
Both parties must have valid TLS certificates; here one of them is using the wrong ones.
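A quick way to check whether a certificate is the problem (a sketch using the file names from the manifests above; run it wherever the CA chain is available):
# Does the peer's TLS server certificate chain to the CA bundle the network trusts?
openssl verify -CAfile korg60-ca-chain.pem server.crt
# What certificate does the peer actually present on its endpoint?
openssl s_client -connect some.domain.com:7051 -showcerts </dev/null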

Accessing bitnami/kafka outside the kubernetes cluster

I am currently using the bitnami/kafka image (https://hub.docker.com/r/bitnami/kafka) and deploying it on Kubernetes.
kubernetes master: 1
kubernetes workers: 3
Within the cluster the other applications are able to find Kafka. The problem occurs when trying to access the Kafka container from outside the cluster. From a bit of reading I learned that we need to set the property "advertised.listeners=PLAINTEXT://hostname:port_number" for external Kafka clients.
I am currently referencing https://github.com/bitnami/charts/tree/master/bitnami/kafka. Inside my values.yaml file I have added:
values.yaml
advertisedListeners1: 10.21.0.191
and statefulset.yaml
- name: KAFKA_CFG_ADVERTISED_LISTENERS
value: 'PLAINTEXT://{{ .Values.advertisedListeners }}:9092'
For a single Kafka instance this works fine.
But for a 3-node Kafka cluster I changed the configuration as below:
values.yaml
advertisedListeners1: 10.21.0.191
advertisedListeners2: 10.21.0.192
advertisedListeners3: 10.21.0.193
and Statefulset.yaml
- name: KAFKA_CFG_ADVERTISED_LISTENERS
{{- if $MY_POD_NAME := "kafka-0" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners1 }}:9092'
{{- else if $MY_POD_NAME := "kafka-1" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners2 }}:9092'
{{- else if $MY_POD_NAME := "kafka-2" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners3 }}:9092'
{{- end }}
The expected result is that all three Kafka instances get their advertised.listeners property set to the worker nodes' IP addresses, for example:
kafka-0 --> "PLAINTEXT://10.21.0.191:9092"
kafka-1 --> "PLAINTEXT://10.21.0.192:9092"
kafka-3 --> "PLAINTEXT://10.21.0.193:9092"
Currently only one Kafka pod is up and running, and the other two are going into CrashLoopBackOff.
The two failing pods show this error:
[2019-10-20 13:09:37,753] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2019-10-20 13:09:37,786] ERROR [KafkaServer id=1002] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.IllegalArgumentException: requirement failed: Configured end points 10.21.0.191:9092 in advertised listeners are already registered by broker 1001
at scala.Predef$.require(Predef.scala:224)
at kafka.server.KafkaServer$$anonfun$createBrokerInfo$2.apply(KafkaServer.scala:399)
at kafka.server.KafkaServer$$anonfun$createBrokerInfo$2.apply(KafkaServer.scala:397)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.server.KafkaServer.createBrokerInfo(KafkaServer.scala:397)
at kafka.server.KafkaServer.startup(KafkaServer.scala:261)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:84)
at kafka.Kafka.main(Kafka.scala)
That means the logic applied in statefulset.yaml is not working.
Can anyone help me resolve this? Any help would be appreciated.
The output of kubectl get statefulset kafka -o yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
creationTimestamp: "2019-10-29T07:04:12Z"
generation: 1
labels:
app.kubernetes.io/component: kafka
app.kubernetes.io/instance: kafka
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: kafka
helm.sh/chart: kafka-6.0.1
name: kafka
namespace: default
resourceVersion: "12189730"
selfLink: /apis/apps/v1/namespaces/default/statefulsets/kafka
uid: d40cfd5f-46a6-49d0-a9d3-e3a851356063
spec:
podManagementPolicy: Parallel
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/component: kafka
app.kubernetes.io/instance: kafka
app.kubernetes.io/name: kafka
serviceName: kafka-headless
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: kafka
app.kubernetes.io/instance: kafka
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: kafka
helm.sh/chart: kafka-6.0.1
name: kafka
spec:
containers:
- env:
- name: MY_POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: MY_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: KAFKA_CFG_ZOOKEEPER_CONNECT
value: kafka-zookeeper
- name: KAFKA_PORT_NUMBER
value: "9092"
- name: KAFKA_CFG_LISTENERS
value: PLAINTEXT://:$(KAFKA_PORT_NUMBER)
- name: KAFKA_CFG_ADVERTISED_LISTENERS
value: PLAINTEXT://10.21.0.191:9092
- name: ALLOW_PLAINTEXT_LISTENER
value: "yes"
- name: KAFKA_CFG_BROKER_ID
value: "-1"
- name: KAFKA_CFG_DELETE_TOPIC_ENABLE
value: "false"
- name: KAFKA_HEAP_OPTS
value: -Xmx1024m -Xms1024m
- name: KAFKA_CFG_LOG_FLUSH_INTERVAL_MESSAGES
value: "10000"
- name: KAFKA_CFG_LOG_FLUSH_INTERVAL_MS
value: "1000"
- name: KAFKA_CFG_LOG_RETENTION_BYTES
value: "1073741824"
- name: KAFKA_CFG_LOG_RETENTION_CHECK_INTERVALS_MS
value: "300000"
- name: KAFKA_CFG_LOG_RETENTION_HOURS
value: "168"
- name: KAFKA_CFG_LOG_MESSAGE_FORMAT_VERSION
- name: KAFKA_CFG_MESSAGE_MAX_BYTES
value: "1000012"
- name: KAFKA_CFG_LOG_SEGMENT_BYTES
value: "1073741824"
- name: KAFKA_CFG_LOG_DIRS
value: /bitnami/kafka/data
- name: KAFKA_CFG_DEFAULT_REPLICATION_FACTOR
value: "1"
- name: KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR
value: "1"
- name: KAFKA_CFG_TRANSACTION_STATE_LOG_REPLICATION_FACTOR
value: "1"
- name: KAFKA_CFG_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM
value: https
- name: KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR
value: "1"
- name: KAFKA_CFG_NUM_IO_THREADS
value: "8"
- name: KAFKA_CFG_NUM_NETWORK_THREADS
value: "3"
- name: KAFKA_CFG_NUM_PARTITIONS
value: "1"
- name: KAFKA_CFG_NUM_RECOVERY_THREADS_PER_DATA_DIR
value: "1"
- name: KAFKA_CFG_SOCKET_RECEIVE_BUFFER_BYTES
value: "102400"
- name: KAFKA_CFG_SOCKET_REQUEST_MAX_BYTES
value: "104857600"
- name: KAFKA_CFG_SOCKET_SEND_BUFFER_BYTES
value: "102400"
- name: KAFKA_CFG_ZOOKEEPER_CONNECTION_TIMEOUT_MS
value: "6000"
image: docker.io/bitnami/kafka:2.3.0-debian-9-r88
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 2
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: kafka
timeoutSeconds: 5
name: kafka
ports:
- containerPort: 9092
name: kafka
protocol: TCP
readinessProbe:
failureThreshold: 6
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: kafka
timeoutSeconds: 5
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bitnami/kafka
name: data
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1001
runAsUser: 1001
terminationGracePeriodSeconds: 30
updateStrategy:
type: RollingUpdate
volumeClaimTemplates:
- metadata:
creationTimestamp: null
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
volumeMode: Filesystem
status:
phase: Pending
status:
collisionCount: 0
currentReplicas: 3
currentRevision: kafka-56ff499d74
observedGeneration: 1
readyReplicas: 1
replicas: 3
updateRevision: kafka-56ff499d74
updatedReplicas: 3
I see you are having trouble passing different environment variables to different pods in a StatefulSet.
You are trying to achieve this using Helm templates:
- name: KAFKA_CFG_ADVERTISED_LISTENERS
{{- if $MY_POD_NAME := "kafka-0" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners1 }}:9092'
{{- else if $MY_POD_NAME := "kafka-1" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners2 }}:9092'
{{- else if $MY_POD_NAME := "kafka-2" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners3 }}:9092'
{{- end }}
In the Helm template guide documentation you can find this explanation:
In Helm templates, a variable is a named reference to another object.
It follows the form $name. Variables are assigned with a special assignment operator: :=.
Now let's look at your code:
{{- if $MY_POD_NAME := "kafka-0" }}
This is a variable assignment, not a comparison. After the assignment, the if statement evaluates the expression to true, which is why in your StatefulSet manifest you see this as the output:
- name: KAFKA_CFG_ADVERTISED_LISTENERS
value: PLAINTEXT://10.21.0.191:9092
To make it work as expected, you shouldn't use Helm templating here; it's not going to work.
One way to do it would be to create a separate environment variable for every Kafka node
and pass all of these variables to all pods, like this:
- env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: KAFKA_0
value: 10.21.0.191
- name: KAFKA_1
value: 10.21.0.192
- name: KAFKA_2
value: 10.21.0.193
# - name: KAFKA_CFG_ADVERTISED_LISTENERS
# value: PLAINTEXT://$MY_POD_NAME:9092
and also create your own Docker image with a modified startup script that exports the KAFKA_CFG_ADVERTISED_LISTENERS variable
with the appropriate value depending on MY_POD_NAME.
If you don't want to build your own image, you can create a ConfigMap with a modified entrypoint.sh and mount it
in place of the old entrypoint.sh (you can also use any other file; take a look at how the Kafka image is built
for more information).
Mounting ConfigMap looks like this:
apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- name: test-container
image: docker.io/bitnami/kafka:2.3.0-debian-9-r88
volumeMounts:
- name: config-volume
mountPath: /entrypoint.sh
subPath: entrypoint.sh
volumes:
- name: config-volume
configMap:
# Provide the name of the ConfigMap containing the files you want
# to add to the container
name: kafka-entrypoint-config
defaultMode: 0744 # remember to add proper (executable) permissions
apiVersion: v1
kind: ConfigMap
metadata:
name: kafka-entrypoint-config
namespace: default
data:
entrypoint.sh: |
#!/bin/bash
# Here add modified entrypoint script
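For reference, the modified entrypoint could look roughly like this (a sketch only; the KAFKA_0/1/2 variables come from the env block above, and the path of the original Bitnami startup scripts is an assumption to verify against the image you use):
#!/bin/bash
# Pick this broker's advertised listener based on the pod's ordinal name.
case "${MY_POD_NAME}" in
  kafka-0) export KAFKA_CFG_ADVERTISED_LISTENERS="PLAINTEXT://${KAFKA_0}:9092" ;;
  kafka-1) export KAFKA_CFG_ADVERTISED_LISTENERS="PLAINTEXT://${KAFKA_1}:9092" ;;
  kafka-2) export KAFKA_CFG_ADVERTISED_LISTENERS="PLAINTEXT://${KAFKA_2}:9092" ;;
esac
# Hand control back to the image's original startup script (path is an assumption).
exec /app-entrypoint.sh /run.sh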
Please let me know if it helped.
I think the Helm chart doesn't whitelist your external (to Kubernetes) network for advertised.listeners. I solved a similar issue by reconfiguring the Helm values.yaml like this. In my case the 127.0.0.1 address is my Mac; yours might be different:
externalAccess:
enabled: true
autoDiscovery:
enabled: false
image:
registry: docker.io
repository: bitnami/kubectl
tag: 1.23.4-debian-10-r17
pullPolicy: IfNotPresent
pullSecrets: []
resources:
limits: {}
requests: {}
service:
type: NodePort
port: 9094
loadBalancerIPs: []
loadBalancerSourceRanges: []
nodePorts:
- 30000
- 30001
- 30002
useHostIPs: false
annotations: {}
domain: 127.0.0.1
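With that in place the brokers are advertised on 127.0.0.1:30000-30002, so an external client on the same machine can connect directly, for example (assuming a local Kafka CLI installation and an existing test topic):
kafka-console-consumer --bootstrap-server 127.0.0.1:30000 --topic test --from-beginning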

Getting "cannot init crypto" while deploying hyperledger fabric peer to Kubernetes

I am trying to deploy a one-peer Hyperledger Fabric network to Kubernetes on GCP, and while deploying the peer I am getting this error:
"Cannot run peer because cannot init crypto, missing /var/msp folder"
I tried mounting the MSP material, but it is not working.
These are the peer configs:
apiVersion: apps/v1
kind: Deployment
metadata:
name: peer0
spec:
replicas: 1
selector:
matchLabels:
app: peer0
template:
metadata:
labels:
app: peer0
tier: backend
track: stable
spec:
hostAliases:
- ip: "10.128.0.3"
hostnames:
- "peer0.example.com"
- ip: "10.128.0.3"
hostnames:
- "couchdb0"
- ip: "10.128.0.4"
hostnames:
- "orderer0.orderer.com"
nodeSelector:
id: peer
containers:
- name: peer0
image: "hyperledger/fabric-peer:1.2.0"
ports:
- name: peer0-port
containerPort: 30002
- name: peer0-chaincode
containerPort: 30003
- name: peer0-event
containerPort: 30004
workingDir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: ["peer"]
args: ["node","start"]
env:
- name: CORE_VM_ENDPOINT
value: "unix:///var/run/docker.sock"
- name: CORE_PEER_ADDRESSAUTODETECT
value: "true"
- name: CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE
value: "bridge"
- name: CORE_PEER_ID
value: "peer0.example.com"
- name: CORE_PEER_ADDRESS
value: "peer0.example.com:30002"
- name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
value: "peer0.example.com:30002"
- name: CORE_PEER_CHAINCODELISTENADDRESS
value: "0.0.0.0:30003"
- name: CORE_PEER_GOSSIP_BOOTSTRAP
value: "0.0.0.0:30002"
- name: CORE_PEER_LISTENADDRESS
value: "0.0.0.0:30002"
- name: CORE_PEER_EVENTS_ADDRESS
value: "0.0.0.0:30004"
- name: CORE_PEER_LOCALMSPID
value: "exampleMSP"
- name: CORE_LOGGING_GOSSIP
value: "INFO"
- name: CORE_LOGGING_PEER_GOSSIP
value: "INFO"
- name: CORE_LOGGING_MSP
value: "INFO"
- name: CORE_LOGGING_POLICIES
value: "DEBUG"
- name: CORE_LOGGING_CAUTHDSL
value: "DEBUG"
- name: CORE_PEER_TLS_ENABLED
value: "false"
- name: CORE_PEER_GOSSIP_USELEADERELECTION
value: "true"
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_PEER_PROFILE_ENABLED
value: "true"
- name: CORE_LEDGER_STATE_STATEDATABASE
value: "CouchDB"
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS
value: "couchdb0:30005"
- name: ORDERER_URL
value: "orderer0.orderer.com:30001"
- name: CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME
value: ""
- name: CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD
value: ""
- name: CORE_VM_DOCKER_ATTACHSTDOUT
value: "true"
- name: CORE_PEER_FILESYSTEMPATH
value: "/var/production"
- name: CORE_PEER_MSPCONFIGPATH
#value: "/var/msp"
value: "/var/msp"
volumeMounts:
- name: peer0-volume
mountPath: /var
- name: host
mountPath: /var/run
volumes:
- name: peer0-volume
#persistentVolumeClaim:
# claimName: peer0-pvc
- name: host
hostPath:
path: /var/run
Referencing James's comment:
"I resolved it; it was happening because the files were not getting mounted inside the container. I added separate mount points for them and it worked fine."
It might also be helpful to try kubechain from npm.