Broker may not be available error Kafka schema registry - kubernetes

I have defined Kafka and Kafka Schema Registry configurations using Kubernetes Deployments and Services, using this link as a reference for the environment variable setup. However, when I run Kafka together with the registry, the Schema Registry pod crashes with this error message in the logs:
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
[main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
What could be the reason for this error?
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
spec:
  ports:
  - name: client
    port: 9092
  selector:
    app: kafka
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-1
spec:
  selector:
    matchLabels:
      app: kafka
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
        server-id: "1"
    spec:
      volumes:
      - name: kafka-data
        emptyDir: {}
      containers:
      - name: server
        image: confluentinc/cp-kafka:5.1.0
        env:
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
        - name: KAFKA_ADVERTISED_LISTENERS
          value: PLAINTEXT://localhost:9092
        - name: KAFKA_BROKER_ID
          value: "2"
        - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
          value: "1"
        ports:
        - containerPort: 9092
        volumeMounts:
        - mountPath: /var/lib/kafka
          name: kafka-data
---
apiVersion: v1
kind: Service
metadata:
  name: schema-registry-service
spec:
  ports:
  - name: client
    port: 8081
  selector:
    app: kafka-schema-registry
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-schema-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-schema-registry
  template:
    metadata:
      labels:
        app: kafka-schema-registry
    spec:
      containers:
      - name: kafka-schema-registry
        image: confluentinc/cp-schema-registry:5.1.0
        env:
        - name: SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL
          value: zookeeper:2181
        - name: SCHEMA_REGISTRY_HOST_NAME
          value: localhost
        - name: SCHEMA_REGISTRY_LISTENERS
          value: "http://0.0.0.0:8081"
        - name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
          value: PLAINTEXT://localhost:9092
        ports:
        - containerPort: 8081

You've configured Schema Registry to look for the Kafka broker at localhost:9092, which from inside the Schema Registry pod is the pod itself, not the broker. You've also configured the Kafka broker to advertise its address as localhost:9092, so even a client that does reach the broker is told to reconnect to localhost. Both values need to point at an address that resolves to the broker from other pods, i.e. its Kubernetes Service name.
I'm not familiar with Kubernetes specifically, but this article describes how to handle networking config in principle when using containers, IaaS, etc.
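The failure mode can be sketched in a few lines: a Kafka client first connects to a bootstrap address, then reconnects to whatever address the broker *advertises*. A toy simulation of that handshake, assuming the Service names from the manifests above (this is illustrative Python, not the Kafka protocol):

```python
# Toy model: "localhost" resolves to the client's own pod, while a
# Kubernetes Service name resolves cluster-wide via cluster DNS.
CLUSTER_DNS = {"kafka-service": "kafka-broker",
               "schema-registry-service": "schema-registry"}

def resolve(own_pod, host):
    """Resolve a hostname from inside a pod."""
    if host == "localhost":
        return own_pod          # always the pod itself
    return CLUSTER_DNS.get(host)  # Service DNS, reachable from any pod

def bootstrap(own_pod, bootstrap_host, advertised_host):
    # Step 1: reach any broker via the bootstrap address.
    if resolve(own_pod, bootstrap_host) != "kafka-broker":
        return "Broker may not be available"
    # Step 2: the broker hands back its advertised address, and all
    # further traffic goes there -- it must ALSO resolve to the broker.
    if resolve(own_pod, advertised_host) != "kafka-broker":
        return "TimeoutException fetching metadata"
    return "connected"

# From the Schema Registry pod, localhost is the registry itself:
assert bootstrap("schema-registry", "localhost", "localhost") == "Broker may not be available"
# Using the broker's Service name for both settings works:
assert bootstrap("schema-registry", "kafka-service", "kafka-service") == "connected"
```

The same model shows why fixing only the bootstrap address is not enough: step 2 still fails while the broker advertises localhost.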

I was getting the same kind of error; I followed cricket_007's answer and it got resolved.
Just change the KAFKA_ADVERTISED_LISTENERS value from PLAINTEXT://localhost:9092 to PLAINTEXT://kafka-service:9092 (the broker's Service name), and point SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS at the same address.


Kafka and Zookeeper pods restart repeatedly on Kubernetes cluster

I'm developing a microservices-based application deployed with Kubernetes for a university project. I'm a newbie with Kubernetes and Kafka, and I'm trying to run Kafka and ZooKeeper in the same minikube cluster. I have created one pod for Kafka and one pod for ZooKeeper, but after deploying them on the cluster they begin to restart repeatedly, ending in the "CrashLoopBackOff" state. Looking at the logs I noticed that Kafka throws a "ConnectException: Connection refused"; it seems that Kafka cannot establish a connection with ZooKeeper. I created the pods manually with the following config files:
zookeeper-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zookeeper
        image: bitnami/zookeeper
        ports:
        - name: http
          containerPort: 2181
        env:
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  selector:
    app: zookeeper
  ports:
  - protocol: TCP
    port: 2181
kafka-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: bitnami/kafka
        ports:
        - name: http
          containerPort: 9092
        env:
        - name: KAFKA_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: ALLOW_PLAINTEXT_LISTENER
          value: "yes"
        - name: KAFKA_CFG_LISTENERS
          value: PLAINTEXT://:9092
        - name: KAFKA_CFG_ADVERTISED_LISTENERS
          value: PLAINTEXT://$(KAFKA_POD_IP):9092
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: PLAINTEXT:PLAINTEXT
        - name: KAFKA_CFG_ZOOKEEPER_CONNECT
          value: zookeeper:2181
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  selector:
    app: kafka
  ports:
  - protocol: TCP
    port: 9092
  type: LoadBalancer
The Kafka and ZooKeeper configurations are more or less the same as those I used with Docker Compose with no errors, so there is probably something wrong in my configuration for Kubernetes. Could anyone help me, please? I don't understand the issue. Thanks.
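One common cause of this kind of CrashLoopBackOff is ordering: the Kafka container starts before the zookeeper Service is resolvable or ready, exits on the ConnectException, and gets restarted. A hedged sketch of an initContainer that delays the broker until ZooKeeper answers on 2181 (the busybox image and `nc` command are assumptions; any image providing `nc` works):

```yaml
# Hypothetical addition to the kafka Deployment's pod spec.
spec:
  initContainers:
  - name: wait-for-zookeeper
    image: busybox:1.36
    # Poll the zookeeper Service until its client port accepts connections.
    command: ['sh', '-c', 'until nc -z zookeeper 2181; do echo waiting for zookeeper; sleep 2; done']
  containers:
  - name: kafka
    image: bitnami/kafka
    # ... env and ports unchanged from the Deployment above ...
```

If the connection is refused even after ZooKeeper is up, double-check that the zookeeper Service selector actually matches the ZooKeeper pod labels (`kubectl get endpoints zookeeper` should list an IP).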

Kafka connection refused with Kubernetes nodeport

I am trying to expose KAFKA in my Kubernetes setup for external usage using node port.
My Helmcharts kafka-service.yaml is as follows:
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: test
  labels:
    app: kafka-test
    unit: kafka
spec:
  type: NodePort
  selector:
    app: test-app
    unit: kafka
    parentdeployment: test-kafka
  ports:
  - name: kafka
    port: 9092
    targetPort: 9092
    nodePort: 30092
    protocol: TCP
kafka-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
  namespace: {{ .Values.test.namespace }}
  labels:
    app: test-app
    unit: kafka
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
        unit: kafka
        parentdeployment: test-kafka
    spec:
      hostname: kafka
      subdomain: kafka
      securityContext:
        fsGroup: {{ .Values.test.groupID }}
      containers:
      - name: kafka
        image: test_kafka:{{ .Values.test.kafkaImageTag }}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: IS_KAFKA_CLUSTER
          value: 'false'
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2281
        - name: KAFKA_LISTENERS
          value: SSL://:9092
        - name: KAFKA_KEYSTORE_PATH
          value: /opt/kafka/conf/kafka.keystore.jks
        - name: KAFKA_TRUSTSTORE_PATH
          value: /opt/kafka/conf/kafka.truststore.jks
        - name: KAFKA_KEYSTORE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: kafka-secret
              key: jkskey
        - name: KAFKA_TRUSTSTORE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: kafka-secret
              key: jkskey
        - name: KAFKA_LOG_DIRS
          value: /opt/kafka/data
        - name: KAFKA_ADV_LISTENERS
          value: SSL://kafka:9092
        - name: KAFKA_CLIENT_AUTH
          value: none
        volumeMounts:
        - mountPath: "/opt/kafka/conf"
          name: kafka-conf-pv
        - mountPath: "/opt/kafka/data"
          name: kafka-data-pv
      volumes:
      - name: kafka-conf-pv
        persistentVolumeClaim:
          claimName: kafka-conf-pvc
      - name: kafka-data-pv
        persistentVolumeClaim:
          claimName: kafka-data-pvc
  selector:
    matchLabels:
      app: test-app
      unit: kafka
      parentdeployment: test-kafka
zookeeper service yaml
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: {{ .Values.test.namespace }}
  labels:
    app: test-ra
    unit: zookeeper
spec:
  type: ClusterIP
  selector:
    app: test-ra
    unit: zookeeper
    parentdeployment: test-zookeeper
  ports:
  - name: zookeeper
    port: 2281
zookeeper deployment yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
  namespace: {{ .Values.test.namespace }}
  labels:
    app: test-app
    unit: zookeeper
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
        unit: zookeeper
        parentdeployment: test-zookeeper
    spec:
      hostname: zookeeper
      subdomain: zookeeper
      securityContext:
        fsGroup: {{ .Values.test.groupID }}
      containers:
      - name: zookeeper
        image: test_zookeeper:{{ .Values.test.zookeeperImageTag }}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2281
        env:
        - name: IS_ZOOKEEPER_CLUSTER
          value: 'false'
        - name: ZOOKEEPER_SSL_CLIENT_PORT
          value: '2281'
        - name: ZOOKEEPER_DATA_DIR
          value: /opt/zookeeper/data
        - name: ZOOKEEPER_DATA_LOG_DIR
          value: /opt/zookeeper/data/log
        - name: ZOOKEEPER_KEYSTORE_PATH
          value: /opt/zookeeper/conf/zookeeper.keystore.jks
        - name: ZOOKEEPER_KEYSTORE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: zookeeper-secret
              key: jkskey
        - name: ZOOKEEPER_TRUSTSTORE_PATH
          value: /opt/zookeeper/conf/zookeeper.truststore.jks
        - name: ZOOKEEPER_TRUSTSTORE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: zookeeper-secret
              key: jkskey
        volumeMounts:
        - mountPath: "/opt/zookeeper/data"
          name: zookeeper-data-pv
        - mountPath: "/opt/zookeeper/conf"
          name: zookeeper-conf-pv
      volumes:
      - name: zookeeper-data-pv
        persistentVolumeClaim:
          claimName: zookeeper-data-pvc
      - name: zookeeper-conf-pv
        persistentVolumeClaim:
          claimName: zookeeper-conf-pvc
  selector:
    matchLabels:
      app: test-ra
      unit: zookeeper
      parentdeployment: test-zookeeper
kubectl describe for the Kafka Service also shows the exposed NodePort:
Type: NodePort
IP: 10.233.1.106
Port: kafka 9092/TCP
TargetPort: 9092/TCP
NodePort: kafka 30092/TCP
Endpoints: 10.233.66.15:9092
Session Affinity: None
External Traffic Policy: Cluster
I have a publisher binary that sends some messages into Kafka. As I have a 3-node cluster deployment, I am using my primary node IP and the Kafka node port (30092) to connect to Kafka.
But my binary gets a dial tcp <primary_node_ip>:9092: connect: connection refused error. I don't understand why the connection is refused even though the nodePort-to-targetPort conversion succeeds. On further debugging I see the following debug logs in the Kafka logs:
[2021-01-13 08:17:51,692] DEBUG Accepted connection from /10.233.125.0:1564 on /10.233.66.15:9092 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] (kafka.network.Acceptor)
[2021-01-13 08:17:51,692] DEBUG Processor 0 listening to new connection from /10.233.125.0:1564 (kafka.network.Processor)
[2021-01-13 08:17:51,702] DEBUG [SslTransportLayer channelId=10.233.66.15:9092-10.233.125.0:1564-245 key=sun.nio.ch.SelectionKeyImpl#43dc2246] SSL peer is not authenticated, returning ANONYMOUS instead (org.apache.kafka.common.network.SslTransportLayer)
[2021-01-13 08:17:51,702] DEBUG [SslTransportLayer channelId=10.233.66.15:9092-10.233.125.0:1564-245 key=sun.nio.ch.SelectionKeyImpl#43dc2246] SSL handshake completed successfully with peerHost '10.233.125.0' peerPort 1564 peerPrincipal 'User:ANONYMOUS' cipherSuite 'TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256' (org.apache.kafka.common.network.SslTransportLayer)
[2021-01-13 08:17:51,702] DEBUG [SocketServer brokerId=1001] Successfully authenticated with /10.233.125.0 (org.apache.kafka.common.network.Selector)
[2021-01-13 08:17:51,707] DEBUG [SocketServer brokerId=1001] Connection with /10.233.125.0 disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
at org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:614)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:95)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:448)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:398)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:678)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:580)
at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
at kafka.network.Processor.poll(SocketServer.scala:861)
at kafka.network.Processor.run(SocketServer.scala:760)
at java.lang.Thread.run(Thread.java:748)
With the same configuration I was able to expose other services. What am I missing here?
Update: When I added KAFKA_LISTENERS and KAFKA_ADV_LISTENERS entries for EXTERNAL and changed the targetPort to 30092, the error during external connections disappeared, but I started getting connection errors for internal connections.
Solution: I exposed another service for external communication, as mentioned in the answer, using 30092 as both the port and the node port, so there was no need for targetPort. I also had to add additional KAFKA_LISTENERS and KAFKA_ADV_LISTENERS entries in the deployment file for external communication.
We faced a similar issue in one of our Kafka setups; we ended up creating two k8s Services: one ClusterIP Service for internal communication and a second Service with the same labels using NodePort for external communication.
internal access
apiVersion: v1
kind: Service
metadata:
  name: kafka-internal
  namespace: test
  labels:
    app: kafka-test
    unit: kafka
spec:
  type: ClusterIP
  selector:
    app: test-app
    unit: kafka
    parentdeployment: test-kafka
  ports:
  - name: kafka
    port: 9092
    protocol: TCP
external access
apiVersion: v1
kind: Service
metadata:
  name: kafka-external
  namespace: test
  labels:
    app: kafka-test
    unit: kafka
spec:
  type: NodePort
  selector:
    app: test-app
    unit: kafka
    parentdeployment: test-kafka
  ports:
  - name: kafka
    port: 9092
    targetPort: 9092
    protocol: TCP
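For the two-service approach to work, the broker itself usually needs two listeners: one advertised with the in-cluster Service name and one advertised with an address external clients can reach. A rough sketch of the env vars involved, using the standard Kafka listener settings (the custom test_kafka image in this question may use different variable names, e.g. KAFKA_ADV_LISTENERS, and <node_ip> is a placeholder):

```yaml
env:
- name: KAFKA_LISTENERS
  value: INTERNAL://:9092,EXTERNAL://:30092
- name: KAFKA_ADVERTISED_LISTENERS
  # INTERNAL uses the ClusterIP Service name; EXTERNAL uses a node
  # address reachable by outside clients.
  value: INTERNAL://kafka-internal:9092,EXTERNAL://<node_ip>:30092
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
  value: INTERNAL:SSL,EXTERNAL:SSL
- name: KAFKA_INTER_BROKER_LISTENER_NAME
  value: INTERNAL
```

With this split, in-cluster clients are told to reconnect to kafka-internal:9092 while external clients are told to reconnect via the NodePort, so neither side gets an unreachable address from the metadata response.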

Kafka Kubernetes: Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic

I'm trying to set up a Kafka pod in Kubernetes but I keep getting this error:
[2020-08-30 11:23:39,354] ERROR [KafkaApi-0] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
This is my Kafka deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
  namespace: instagnam
  labels:
    app: instagnam
    service: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: instagnam
  template:
    metadata:
      labels:
        app: instagnam
        service: kafka
        id: "0"
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        imagePullPolicy: Always
        ports:
        - containerPort: 9092
          name: kafka
        env:
        - name: KAFKA_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
        - name: KAFKA_CREATE_TOPICS
          value: connessioni:2:1,ricette:2:1
        - name: KAFKA_BROKER_ID
          value: "0"
This is my Kafka service:
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: instagnam
  labels:
    app: instagnam
    service: kafka
spec:
  selector:
    app: instagnam
    service: kafka
    id: "0"
  type: LoadBalancer
  ports:
  - name: kafka
    protocol: TCP
    port: 9092
This is my Zookeeper deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
  namespace: instagnam
  labels:
    app: instagnam
    service: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: instagnam
      service: zookeeper
  template:
    metadata:
      labels:
        app: instagnam
        service: zookeeper
    spec:
      containers:
      - image: wurstmeister/zookeeper
        name: zookeeper
        imagePullPolicy: Always
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zookeeper
And this is my Zookeeper service:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: instagnam
spec:
  selector:
    app: instagnam
    service: zookeeper
  ports:
  - name: client
    protocol: TCP
    port: 2181
  - name: follower
    protocol: TCP
    port: 2888
  - name: leader
    protocol: TCP
    port: 3888
What am I doing wrong here?
If you need the full Kafka log here it is: https://pastebin.com/eBu8JB8A
And there are the Zookeper logs if you need them too: https://pastebin.com/gtnxSftW
EDIT:
I'm running this on minikube if this can help.
A change of the Kafka broker.id may cause this problem. Clean up the Kafka metadata under ZooKeeper: deleteall /brokers ...
Note: the Kafka data will be lost.
Assuming you're on the same Kafka image, the solution that fixed the issue for me was replacing the deprecated settings KAFKA_ADVERTISED_PORT and KAFKA_ADVERTISED_HOST_NAME, as detailed in the Docker image README (see the current docs, or the README commit pinned), with KAFKA_ADVERTISED_LISTENERS and KAFKA_LISTENERS, to which I had to add "inside" and "outside" configurations.
Summarized from https://github.com/wurstmeister/kafka-docker/issues/218#issuecomment-362327563
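As a concrete sketch of that answer, the inside/outside listener pattern from the wurstmeister README looks roughly like this; the Service name kafka is taken from this question's manifests, and the outside port/host are placeholders you would adapt to how the broker is exposed:

```yaml
env:
- name: KAFKA_LISTENERS
  value: INSIDE://:9092,OUTSIDE://:9094
- name: KAFKA_ADVERTISED_LISTENERS
  # INSIDE is advertised with the in-cluster Service name; OUTSIDE with
  # whatever address external clients use (placeholder here).
  value: INSIDE://kafka:9092,OUTSIDE://localhost:9094
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
  value: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
- name: KAFKA_INTER_BROKER_LISTENER_NAME
  value: INSIDE
```

The key point for the "alive brokers '0'" error is that the broker must be able to register and talk to itself over the INSIDE listener; advertising only an address that doesn't resolve inside the cluster prevents that.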

container startup issue for k8 for kafka & zookeeper

I am trying to create a Spring Boot producer and consumer with ZooKeeper and Kafka on k8s, but I am not able to get the k8s deployment working; it keeps failing. I'm not sure what is wrongly configured here, because the same thing works for me in Docker Compose. I used the file below to create the service and deployment on my local docker-desktop:
kubectl apply -f $(pwd)/kubernates/sample.yml
The error I get at deployment time is added at the end.
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zoo1
        image: digitalwonderland/zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
  labels:
    name: kafka
spec:
  ports:
  - port: 9092
    name: kafka-port
    protocol: TCP
  selector:
    app: kafka
    id: "0"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
        id: "0"
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
        - name: KAFKA_BROKER_ID
          value: "0"
        - name: KAFKA_CREATE_TOPICS
          value: sample.topic:1:1
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-cat
spec:
  selector:
    matchLabels:
      app: kafka-cat
  template:
    metadata:
      labels:
        app: kafka-cat
    spec:
      containers:
      - name: kafka-cat
        image: confluentinc/cp-kafkacat
        command: ["/bin/sh"]
        args: ["-c", "trap : TERM INT; sleep infinity & wait"]
Exception in the container:
[2020-08-03 18:47:49,724] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
org.apache.kafka.common.config.ConfigException: Invalid value tcp://10.103.92.112:9092 for configuration port: Not a number of type INT
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:726)
at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:474)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:467)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:142)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1235)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1238)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:1218)
at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:34)
at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:29)
at kafka.Kafka$.main(Kafka.scala:68)
at kafka.Kafka.main(Kafka.scala)
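The root cause is visible in the message itself: for a Service named kafka, Kubernetes injects service-link variables such as KAFKA_PORT=tcp://<cluster-ip>:9092 into the pod, and the wurstmeister image maps KAFKA_* environment variables onto broker settings, so the broker tries to parse that URL as the integer `port`. A toy reproduction of the parse failure (the mapping function is illustrative, not the image's actual startup script):

```python
# Kubernetes injects service-link env vars for a Service named "kafka";
# the image then treats KAFKA_PORT as the broker's `port` setting.
injected_env = {"KAFKA_PORT": "tcp://10.103.92.112:9092"}  # value from the logs

def broker_port(env):
    # Simplified stand-in for the image's env -> server.properties
    # mapping: `port` is declared as an INT, so a URL value blows up.
    return int(env["KAFKA_PORT"])

try:
    broker_port(injected_env)
except ValueError as err:
    print("ConfigException: Invalid value for configuration port:", err)

# Renaming the Service (e.g. to kafka-service0) changes the injected
# variable to KAFKA_SERVICE0_PORT, which the image does not map to
# any broker setting, so the collision disappears.
```

This is why renaming either the Service or avoiding KAFKA_PORT-shaped variables resolves the startup crash.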
Finally, I was able to solve this by using a different name for the Kafka service:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  selector:
    app: zookeeper
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zookeeper
        image: digitalwonderland/zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service0
  labels:
    name: kafka
spec:
  ports:
  - port: 9092
    name: kafka-port
    protocol: TCP
  selector:
    app: kafka
    id: "0"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker0
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
        id: "0"
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          valueFrom:
            configMapKeyRef:
              name: application-conf
              key: kafka_ad_port
        - name: KAFKA_ZOOKEEPER_CONNECT
          valueFrom:
            configMapKeyRef:
              name: application-conf
              key: zk_url
        - name: KAFKA_CREATE_TOPICS
          valueFrom:
            configMapKeyRef:
              name: application-conf
              key: kafka_topic
        - name: KAFKA_BROKER_ID
          value: "0"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka-service0
Rename the Kubernetes Service from kafka to something else, for example kafka-broker. A Service named kafka makes Kubernetes inject a service-link variable KAFKA_PORT=tcp://<cluster-ip>:9092 into the pod, which the image picks up as the broker's port setting. Then update KAFKA_ADVERTISED_HOST_NAME or KAFKA_ADVERTISED_LISTENERS, or both:
KAFKA_ADVERTISED_HOST_NAME: kafka-broker
KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-broker:9092
apiVersion: v1
kind: ConfigMap
metadata:
  name: application-conf # name of ConfigMap, referenced in other files
  namespace: default
data:
  host: "mysql" # host address of mysql server, we are using DNS of Service
  name: "espark-mysql" # name of the database for application
  port: "3306"
  zk_url: "zookeeper:2181"
  kafka_url: "kafka-service0:9092"
  kafka_topic: "espark-topic:1:2"
  kafka_ad_port: "9092"
Just set the hostname or IP for KAFKA_ADVERTISED_HOST_NAME (e.g. localhost) in your YAML file for Kafka.

TimeoutException: Timeout expired while fetching topic metadata Kafka

I have been trying to deploy Kafka with Schema Registry locally using Kubernetes. However, the logs of the Schema Registry pod show this error message:
ERROR Server died unexpectedly: (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
What could be the reason for this behavior?
To run Kubernetes locally, I use Minikube version v0.32.0 with Kubernetes version v1.13.0.
My Kafka configuration:
apiVersion: v1
kind: Service
metadata:
  name: kafka-1
spec:
  ports:
  - name: client
    port: 9092
  selector:
    app: kafka
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-1
spec:
  selector:
    matchLabels:
      app: kafka
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
        server-id: "1"
    spec:
      volumes:
      - name: kafka-data
        emptyDir: {}
      containers:
      - name: server
        image: confluent/kafka:0.10.0.0-cp1
        env:
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-1:2181
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka-1
        - name: KAFKA_BROKER_ID
          value: "1"
        ports:
        - containerPort: 9092
        volumeMounts:
        - mountPath: /var/lib/kafka
          name: kafka-data
---
apiVersion: v1
kind: Service
metadata:
  name: schema
spec:
  ports:
  - name: client
    port: 8081
  selector:
    app: kafka-schema-registry
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-schema-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-schema-registry
  template:
    metadata:
      labels:
        app: kafka-schema-registry
    spec:
      containers:
      - name: kafka-schema-registry
        image: confluent/schema-registry:3.0.0
        env:
        - name: SR_KAFKASTORE_CONNECTION_URL
          value: zookeeper-1:2181
        - name: SR_KAFKASTORE_TOPIC
          value: "_schema_registry"
        - name: SR_LISTENERS
          value: "http://0.0.0.0:8081"
        ports:
        - containerPort: 8081
Zookeeper configuraion:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
  - name: client
    port: 2181
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-1
spec:
  ports:
  - name: client
    port: 2181
  - name: followers
    port: 2888
  - name: election
    port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-1
spec:
  selector:
    matchLabels:
      app: zookeeper
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      volumes:
      - name: data
        emptyDir: {}
      - name: wal
        emptyDir:
          medium: Memory
      containers:
      - name: server
        image: elevy/zookeeper:v3.4.7
        env:
        - name: MYID
          value: "1"
        - name: SERVERS
          value: "zookeeper-1"
        - name: JVMFLAGS
          value: "-Xmx2G"
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
        volumeMounts:
        - mountPath: /zookeeper/data
          name: data
        - mountPath: /zookeeper/wal
          name: wal
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
can happen when trying to connect to a broker that expects SSL connections while the client config did not specify:
security.protocol=SSL
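For a Java client, this is a one-line addition to the client properties; the truststore path and password below are placeholders for whatever your setup uses:

```
bootstrap.servers=kafka:9092
# Tell the client the broker expects TLS; without this it speaks
# plaintext to a TLS listener and times out fetching metadata.
security.protocol=SSL
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=<truststore-password>
```

If the broker additionally requires SASL authentication, the value would be SASL_SSL instead, which is exactly the mismatch described in a later answer.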
One time I fixed this issue by restarting my machine, but when it happened again I didn't want to restart it, so I fixed it with this property in the server.properties file:
advertised.listeners=PLAINTEXT://localhost:9092
Fetching Kafka topic metadata can fail for two reasons:
Reason 1: the bootstrap server is not accepting your connections, e.g. due to a proxy issue such as a VPN or server-level security groups.
Reason 2: a mismatch in security protocol, where the expected protocol is SASL_SSL and the actual one is SSL, or the reverse, or either of them is PLAIN.
I faced the same issue even though all the SSL config and topics were in place. After long research, I enabled the Spring debug logs; the internal error was org.springframework.jdbc.CannotGetJdbcConnectionException. Another thread mentioned that a Spring Boot and Kafka dependency mismatch can cause this timeout exception, so I upgraded Spring Boot from 2.1.3 to 2.2.4. Now there is no error and the Kafka connection is successful. This might be useful to someone.
For others who might face this issue, it may happen because topics have not been created on the Kafka broker machine, which also produces
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
So make sure to create the appropriate topics on the server as referenced in your codebase.
In my case, the value of kafka.consumer.stream.host in the application.properties file was not correct; this value has to be in the right format for the environment.
ZooKeeper session timeouts can occur due to long garbage-collection pauses. I was facing the same issue locally; check the server.properties file in your config folder and increase the value below:
zookeeper.connection.timeout.ms=18000