Kafka Kubernetes: Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic - kubernetes

I'm trying to set up a Kafka pod in Kubernetes but I keep getting this error:
[2020-08-30 11:23:39,354] ERROR [KafkaApi-0] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
This is my Kafka deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
  namespace: instagnam
  labels:
    app: instagnam
    service: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: instagnam
  template:
    metadata:
      labels:
        app: instagnam
        service: kafka
        id: "0"
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        imagePullPolicy: Always
        ports:
        - containerPort: 9092
          name: kafka
        env:
        - name: KAFKA_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
        - name: KAFKA_CREATE_TOPICS
          value: connessioni:2:1,ricette:2:1
        - name: KAFKA_BROKER_ID
          value: "0"
This is my Kafka service:
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: instagnam
  labels:
    app: instagnam
    service: kafka
spec:
  selector:
    app: instagnam
    service: kafka
    id: "0"
  type: LoadBalancer
  ports:
  - name: kafka
    protocol: TCP
    port: 9092
This is my Zookeeper deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
  namespace: instagnam
  labels:
    app: instagnam
    service: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: instagnam
      service: zookeeper
  template:
    metadata:
      labels:
        app: instagnam
        service: zookeeper
    spec:
      containers:
      - image: wurstmeister/zookeeper
        name: zookeeper
        imagePullPolicy: Always
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zookeeper
And this is my Zookeeper service:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: instagnam
spec:
  selector:
    app: instagnam
    service: zookeeper
  ports:
  - name: client
    protocol: TCP
    port: 2181
  - name: follower
    protocol: TCP
    port: 2888
  - name: leader
    protocol: TCP
    port: 3888
What am I doing wrong here?
If you need the full Kafka log, here it is: https://pastebin.com/eBu8JB8A
And here are the Zookeeper logs if you need them too: https://pastebin.com/gtnxSftW
EDIT:
I'm running this on minikube, in case that helps.

A change of the Kafka broker.id may cause this problem. Clean up the Kafka metadata stored in ZooKeeper, i.e. deleteall /brokers...
Note: the Kafka data will be lost.
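For example, a rough sketch of that cleanup against the manifests above (the pod label and namespace come from the ZooKeeper Deployment; the zkCli.sh path depends on the image, and on ZooKeeper 3.4.x the command is rmr rather than deleteall):

  # find the ZooKeeper pod and open the ZooKeeper CLI inside it
  kubectl get pods -n instagnam -l service=zookeeper
  kubectl exec -it -n instagnam <zookeeper-pod> -- ./bin/zkCli.sh
  # inside the ZooKeeper CLI: remove all broker registrations and topic metadata (data loss!)
  deleteall /brokers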

Assuming that you're on the same Kafka image, the solution that fixed the issue for me was:
Replacing the deprecated KAFKA_ADVERTISED_PORT and KAFKA_ADVERTISED_HOST_NAME settings, as detailed in the Docker image's README (see the current docs, or the pinned README commit), with KAFKA_ADVERTISED_LISTENERS and KAFKA_LISTENERS, to which I had to add an "inside" and an "outside" configuration.
Summarized from https://github.com/wurstmeister/kafka-docker/issues/218#issuecomment-362327563
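As a rough illustration only (not the exact config from that issue): with the wurstmeister image, the env section of the Kafka Deployment above would change along these lines, where the "outside" listener port and the external host are assumptions you have to adapt to how clients actually reach the cluster:

  env:
  - name: KAFKA_LISTENERS
    value: INSIDE://:9092,OUTSIDE://:9094
  - name: KAFKA_ADVERTISED_LISTENERS
    # INSIDE advertises the in-cluster Service name; OUTSIDE advertises whatever clients outside the cluster can resolve
    value: INSIDE://kafka:9092,OUTSIDE://<external-host>:9094
  - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
    value: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
  - name: KAFKA_INTER_BROKER_LISTENER_NAME
    value: INSIDE
  - name: KAFKA_ZOOKEEPER_CONNECT
    value: zookeeper:2181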

Related

Kafka and Zookeeper pods restart repeatedly on Kubernetes cluster

I'm developing a microservices-based application deployed with Kubernetes for a university project. I'm a newbie with Kubernetes and Kafka and I'm trying to run Kafka and ZooKeeper in the same minikube cluster. I have created one pod for Kafka and one pod for ZooKeeper, but after deploying them on the cluster they begin to restart repeatedly, ending up in the "CrashLoopBackOff" state. Taking a look at the logs, I noticed that Kafka throws a "ConnectException: Connection refused"; it seems that Kafka cannot establish a connection with ZooKeeper. I have created the pods manually with the following config files:
zookeeper-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zookeeper
        image: bitnami/zookeeper
        ports:
        - name: http
          containerPort: 2181
        env:
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  selector:
    app: zookeeper
  ports:
  - protocol: TCP
    port: 2181
kafka-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: bitnami/kafka
        ports:
        - name: http
          containerPort: 9092
        env:
        - name: KAFKA_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: ALLOW_PLAINTEXT_LISTENER
          value: "yes"
        - name: KAFKA_CFG_LISTENERS
          value: PLAINTEXT://:9092
        - name: KAFKA_CFG_ADVERTISED_LISTENERS
          value: PLAINTEXT://$(KAFKA_POD_IP):9092
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: PLAINTEXT:PLAINTEXT
        - name: KAFKA_CFG_ZOOKEEPER_CONNECT
          value: zookeeper:2181
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  selector:
    app: kafka
  ports:
  - protocol: TCP
    port: 9092
  type: LoadBalancer
The Kafka and ZooKeeper configurations are more or less the same ones I used with docker-compose with no errors, so there is probably something wrong in my configuration for Kubernetes. Could anyone help me, please? I don't understand the issue. Thanks.

Can't access kafka from outside kubernetes

I'm trying to access Kafka from outside Kubernetes on my local machine. I'm using a Spring application to produce events on a topic. This is my deployment file for Kafka:
kind: Deployment
metadata:
  name: kafka-broker0
  labels:
    app: kafka
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kafka
      id: "0"
  template:
    metadata:
      labels:
        app: kafka
        id: "0"
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "30718"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: 192.168.1.240
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181
        - name: KAFKA_BROKER_ID
          value: "0"
        - name: KAFKA_CREATE_TOPICS
          value: LaunchScraper:1:1
And this is the service file:
kind: Service
metadata:
  name: kafka-services
  labels:
    name: kafka
spec:
  selector:
    app: kafka
    id: "0"
  ports:
  - protocol: TCP
    name: kafka-port
    port: 9092
  type: NodePort
I've already created a ZooKeeper pod on Kubernetes. My Spring Boot application shows this error:
2020-09-25 23:56:29.123 WARN 44324 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Connection to node -1 (/192.168.1.240:9092) could not be established. Broker may not be available.
It seems you haven't fixed a nodePort in your service. Set it to the value you've entered in KAFKA_ADVERTISED_PORT, and set KAFKA_ADVERTISED_HOST_NAME to your K8s node's hostname/DNS.
In the spec of your service, add nodePort: 30718 under the ports entry, as sketched below. Then, in your client, connect on port 30718 using the node's address or hostname.
Also, if you're looking to deploy Kafka in production, I'd recommend using an operator like Strimzi: https://Strimzi.io
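A minimal sketch of that Service change, reusing the labels from the question and the 30718 port mentioned above (nodePort values must lie in the cluster's node-port range, 30000-32767 by default):

  apiVersion: v1
  kind: Service
  metadata:
    name: kafka-services
    labels:
      name: kafka
  spec:
    type: NodePort
    selector:
      app: kafka
      id: "0"
    ports:
    - name: kafka-port
      protocol: TCP
      port: 9092
      targetPort: 9092
      nodePort: 30718   # must match KAFKA_ADVERTISED_PORT so clients outside the cluster can reach the broker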
Deploying Kafka on Kubernetes was actually not as trivial as I first thought, but it worked after many trials and errors. Many examples you find on the internet did not work for me with the current versions of Kubernetes / Kafka. What worked was:
Using a StatefulSet for Kafka, not a Deployment
Setting KAFKA_ADVERTISED_LISTENERS and KAFKA_LISTENERS as shown below
An additional port for outside access (in my case 32092, but it is arbitrary) in the NodePort service; don't forget to then access Kafka from outside via 32092, not 9092
A working example config would be (as a replacement for your Deployment and Service, probably not minimal):
apiVersion: v1
kind: Service
metadata:
  labels:
    service: kafka
  name: kafka
spec:
  type: NodePort
  ports:
  - name: "9092"
    port: 9092
    protocol: TCP
    targetPort: 9092
  - name: "9093"
    port: 9093
    protocol: TCP
    targetPort: 9093
  - name: "32092"
    port: 32092
    protocol: TCP
    targetPort: 32092
    nodePort: 32092
  selector:
    service: kafka-instance
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    service: kafka-instance
  name: kafka-instance
spec:
  selector:
    matchLabels:
      service: kafka-instance
  serviceName: "kafka"
  replicas: 1
  template:
    metadata:
      labels:
        service: kafka-instance
    spec:
      containers:
      - env:
        - name: MY_HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: KAFKA_ADVERTISED_LISTENERS
          value: INTERNAL://$(MY_POD_NAME).kafka.default.svc.cluster.local:9093,CLIENT://$(MY_POD_NAME).kafka.default.svc.cluster.local:9092,EXTERNAL://$(MY_HOST_IP):32092
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: INTERNAL
        - name: KAFKA_LISTENERS
          value: INTERNAL://:9093,CLIENT://:9092,EXTERNAL://:32092
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
        - name: KAFKA_PORT
          value: "9092"
        - name: KAFKA_RESTART_ATTEMPTS
          value: "10"
        - name: KAFKA_RESTART_DELAY
          value: "5"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181
        - name: KAFKA_ZOOKEEPER_SESSION_TIMEOUT
          value: "6000"
        - name: ZOOKEEPER_AUTOPURGE_PURGE_INTERVAL
          value: "0"
        image: wurstmeister/kafka
        name: kafka-instance
        ports:
        - containerPort: 9092
If you don't already have a zookeeper, just add that and it should work:
apiVersion: v1
kind: Service
metadata:
  labels:
    service: zoo1
  name: zoo1
spec:
  ports:
  - name: "2181"
    port: 2181
    targetPort: 2181
  selector:
    service: zoo1-instance
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    service: zoo1-instance
  name: zoo1-instance
spec:
  selector:
    matchLabels:
      service: zoo1-instance
  serviceName: "zoo1"
  replicas: 1
  template:
    metadata:
      labels:
        service: zoo1-instance
    spec:
      containers:
      - image: wurstmeister/zookeeper
        name: zoo1-instance
        ports:
        - containerPort: 2181

container startup issue for k8 for kafka & zookeeper

I am trying to create a Spring Boot producer and consumer with Kafka and ZooKeeper on Kubernetes, but I am not able to get the deployment working; it keeps failing. I am not sure what is configured wrong here, because the same thing works for me with docker-compose.
I have used the file below to create the service and deployment on my local docker-desktop:
kubectl apply -f $(pwd)/kubernates/sample.yml
The error I am getting at deployment time is added at the end.
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zoo1
        image: digitalwonderland/zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
  labels:
    name: kafka
spec:
  ports:
  - port: 9092
    name: kafka-port
    protocol: TCP
  selector:
    app: kafka
    id: "0"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
        id: "0"
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
        - name: KAFKA_BROKER_ID
          value: "0"
        - name: KAFKA_CREATE_TOPICS
          value: sample.topic:1:1
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-cat
spec:
  selector:
    matchLabels:
      app: kafka-cat
  template:
    metadata:
      labels:
        app: kafka-cat
    spec:
      containers:
      - name: kafka-cat
        image: confluentinc/cp-kafkacat
        command: ["/bin/sh"]
        args: ["-c", "trap : TERM INT; sleep infinity & wait"]
Exception in the container:
[2020-08-03 18:47:49,724] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
org.apache.kafka.common.config.ConfigException: Invalid value tcp://10.103.92.112:9092 for configuration port: Not a number of type INT
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:726)
at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:474)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:467)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:142)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1235)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1238)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:1218)
at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:34)
at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:29)
at kafka.Kafka$.main(Kafka.scala:68)
at kafka.Kafka.main(Kafka.scala)
Finally, I was able to solve this by using a different name for the Kafka service:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  selector:
    app: zookeeper
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zookeeper
        image: digitalwonderland/zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service0
  labels:
    name: kafka
spec:
  ports:
  - port: 9092
    name: kafka-port
    protocol: TCP
  selector:
    app: kafka
    id: "0"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker0
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
        id: "0"
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          valueFrom:
            configMapKeyRef:
              name: application-conf
              key: kafka_ad_port
        - name: KAFKA_ZOOKEEPER_CONNECT
          valueFrom:
            configMapKeyRef:
              name: application-conf
              key: zk_url
        - name: KAFKA_CREATE_TOPICS
          valueFrom:
            configMapKeyRef:
              name: application-conf
              key: kafka_topic
        - name: KAFKA_BROKER_ID
          value: "0"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka-service0
Rename the Kubernetes Service from kafka to something else, kafka-broker for example. With a Service named kafka, Kubernetes injects a KAFKA_PORT=tcp://<cluster-ip>:9092 environment variable into the pod, which the image maps onto Kafka's port setting; that is what produces the "Invalid value tcp://10.103.92.112:9092 for configuration port: Not a number of type INT" error (see the sketch after the ConfigMap below). Then update KAFKA_ADVERTISED_HOST_NAME or KAFKA_ADVERTISED_LISTENERS, or both:
KAFKA_ADVERTISED_HOST_NAME: kafka-broker:9092
KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-broker:9092
apiVersion: v1
kind: ConfigMap
metadata:
  name: application-conf # name of ConfigMap, referenced in other files
  namespace: default
data:
  host: "mysql" # host address of mysql server, we are using DNS of Service
  name: "espark-mysql" # name of the database for application
  port: "3306"
  zk_url: "zookeeper:2181"
  kafka_url: "kafka-service0:9092"
  kafka_topic: "espark-topic:1:2"
  kafka_ad_port: "9092"
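To see the clash described in the rename answer for yourself, you can list the Service-related environment variables Kubernetes injects into the broker pod. These commands are illustrative only; the pod name is a placeholder and the cluster IP will differ:

  kubectl exec -it <kafka-pod> -- env | grep '^KAFKA_'
  # Typical injected variables when a Service named "kafka" exposes port 9092:
  #   KAFKA_SERVICE_HOST=10.103.92.112
  #   KAFKA_SERVICE_PORT=9092
  #   KAFKA_PORT=tcp://10.103.92.112:9092
  #   KAFKA_PORT_9092_TCP=tcp://10.103.92.112:9092
  # The wurstmeister image translates KAFKA_* variables into server.properties entries, so the injected
  # KAFKA_PORT becomes port=tcp://... and the broker exits with "Not a number of type INT".
  # Renaming the Service (e.g. kafka-service0 above) means no variable literally named KAFKA_PORT is injected,
  # so Kafka's port setting is no longer overridden.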
Just add the hostname and port for KAFKA_ADVERTISED_HOST_NAME, e.g. localhost:9092, in your YAML file for Kafka.

Broker may not be available error Kafka schema registry

I have defined the Kafka and Kafka Schema Registry configuration using Kubernetes Deployments and Services. I used this link as a reference for the environment variable setup. However, when I try to run Kafka with the registry, I see that the Schema Registry pod crashes with an error message in the logs:
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
[main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
What could be the reason for this error?
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
spec:
  ports:
  - name: client
    port: 9092
  selector:
    app: kafka
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-1
spec:
  selector:
    matchLabels:
      app: kafka
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
        server-id: "1"
    spec:
      volumes:
      - name: kafka-data
        emptyDir: {}
      containers:
      - name: server
        image: confluentinc/cp-kafka:5.1.0
        env:
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
        - name: KAFKA_ADVERTISED_LISTENERS
          value: PLAINTEXT://localhost:9092
        - name: KAFKA_BROKER_ID
          value: "2"
        - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
          value: "1"
        ports:
        - containerPort: 9092
        volumeMounts:
        - mountPath: /var/lib/kafka
          name: kafka-data
---
apiVersion: v1
kind: Service
metadata:
  name: schema-registry-service
spec:
  ports:
  - name: client
    port: 8081
  selector:
    app: kafka-schema-registry
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-schema-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-schema-registry
  template:
    metadata:
      labels:
        app: kafka-schema-registry
    spec:
      containers:
      - name: kafka-schema-registry
        image: confluentinc/cp-schema-registry:5.1.0
        env:
        - name: SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL
          value: zookeeper:2181
        - name: SCHEMA_REGISTRY_HOST_NAME
          value: localhost
        - name: SCHEMA_REGISTRY_LISTENERS
          value: "http://0.0.0.0:8081"
        - name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
          value: PLAINTEXT://localhost:9092
        ports:
        - containerPort: 8081
You've configured the Schema Registry to look for the Kafka broker at localhost:9092, and you've also configured the Kafka broker to advertise its address as localhost:9092; but inside the registry pod, localhost is the registry itself, not the broker.
I'm not familiar with Kubernetes specifically, but this article describes how to handle networking config in principle when using containers, IaaS, etc.
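One way to line the two up, sketched against the manifests above (this assumes clients inside the cluster, including the Schema Registry, reach the broker through the kafka-service Service name):

  # in the kafka-1 Deployment: advertise the Service DNS name instead of localhost
  - name: KAFKA_ADVERTISED_LISTENERS
    value: PLAINTEXT://kafka-service:9092
  # in the kafka-schema-registry Deployment: bootstrap against the same name
  - name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
    value: PLAINTEXT://kafka-service:9092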
I was getting the same kind of error and I followed cricket_007's answer and it got resolved.
Just change the KAFKA_ADVERTISED_LISTENERS value from PLAINTEXT://localhost:9092 to PLAINTEXT://schema-registry-service:9092

TimeoutException: Timeout expired while fetching topic metadata Kafka

I have been trying to deploy Kafka with schema registry locally using Kubernetes. However, the logs of the schema registry pod show this error message:
ERROR Server died unexpectedly: (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
What could be the reason for this behavior?
In order to run Kubernetes locally, I use Minikube version v0.32.0 with Kubernetes version v1.13.0.
My Kafka configuration:
apiVersion: v1
kind: Service
metadata:
  name: kafka-1
spec:
  ports:
  - name: client
    port: 9092
  selector:
    app: kafka
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-1
spec:
  selector:
    matchLabels:
      app: kafka
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
        server-id: "1"
    spec:
      volumes:
      - name: kafka-data
        emptyDir: {}
      containers:
      - name: server
        image: confluent/kafka:0.10.0.0-cp1
        env:
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-1:2181
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka-1
        - name: KAFKA_BROKER_ID
          value: "1"
        ports:
        - containerPort: 9092
        volumeMounts:
        - mountPath: /var/lib/kafka
          name: kafka-data
---
apiVersion: v1
kind: Service
metadata:
  name: schema
spec:
  ports:
  - name: client
    port: 8081
  selector:
    app: kafka-schema-registry
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-schema-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-schema-registry
  template:
    metadata:
      labels:
        app: kafka-schema-registry
    spec:
      containers:
      - name: kafka-schema-registry
        image: confluent/schema-registry:3.0.0
        env:
        - name: SR_KAFKASTORE_CONNECTION_URL
          value: zookeeper-1:2181
        - name: SR_KAFKASTORE_TOPIC
          value: "_schema_registry"
        - name: SR_LISTENERS
          value: "http://0.0.0.0:8081"
        ports:
        - containerPort: 8081
Zookeeper configuration:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
  - name: client
    port: 2181
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-1
spec:
  ports:
  - name: client
    port: 2181
  - name: followers
    port: 2888
  - name: election
    port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-1
spec:
  selector:
    matchLabels:
      app: zookeeper
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      volumes:
      - name: data
        emptyDir: {}
      - name: wal
        emptyDir:
          medium: Memory
      containers:
      - name: server
        image: elevy/zookeeper:v3.4.7
        env:
        - name: MYID
          value: "1"
        - name: SERVERS
          value: "zookeeper-1"
        - name: JVMFLAGS
          value: "-Xmx2G"
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
        volumeMounts:
        - mountPath: /zookeeper/data
          name: data
        - mountPath: /zookeeper/wal
          name: wal
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
can happen when trying to connect to a broker that expects SSL connections while the client config did not specify:
security.protocol=SSL
One time I fixed this issue by restarting my machine, but it happened again and I didn't want to restart every time, so I fixed it with this property in the server.properties file:
advertised.listeners=PLAINTEXT://localhost:9092
Fetching Kafka topic metadata can fail for two reasons:
Reason 1: the bootstrap server is not accepting your connections; this can be due to a proxy issue such as a VPN or some server-level security groups.
Reason 2: a mismatch in security protocol, where the expected protocol is, for example, SASL_SSL and the actual one is SSL, or the reverse, or it can be PLAIN.
I faced the same issue even though all the SSL config and topics were created. After long research I enabled the Spring debug logs; the internal error was org.springframework.jdbc.CannotGetJdbcConnectionException. In another thread it was mentioned that a Spring Boot and Kafka dependency mismatch can cause the timeout exception, so I upgraded Spring Boot from 2.1.3 to 2.2.4. Now there is no error and the Kafka connection is successful. Might be useful to someone.
For others who might face this issue: it may happen because the topics are not created on the Kafka broker machine, so make sure to create the appropriate topics on the server as referenced in your codebase.
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
In my case, the value of Kafka.consumer.stream.host in the application.properties file was not correct; this value should be in the right format for the environment.
ZooKeeper session timeouts occur because of long garbage collection pauses.
I was facing the same issue locally. Check the server.properties file in your config folder and increase the value of the property below:
zookeeper.connection.timeout.ms=18000