Error when using a Kubernetes StatefulSet to find the Kafka broker without a static IP - kubernetes

I am trying to deploy ZooKeeper and Kafka on Kubernetes using the confluentinc Docker images. I based my solution on this question and this post. ZooKeeper is running without errors in the log. I want to deploy 3 Kafka brokers using a StatefulSet. The problem with my YAML files is that I don't know how to configure the KAFKA_ADVERTISED_LISTENERS property for Kafka when using 3 brokers.
Here are the YAML files for ZooKeeper:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  clusterIP: None
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
  replicas: 1
  serviceName: zookeeper
  selector:
    matchLabels:
      app: zookeeper # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: zookeeper # has to match .spec.selector.matchLabels
    spec:
      hostname: zookeeper
      containers:
      - name: zookeeper
        image: confluentinc/cp-zookeeper:5.5.0
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_CLIENT_PORT
          value: "2181"
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zookeeper
and for the Kafka broker:
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    app: kafka
spec:
  type: LoadBalancer
  ports:
  - port: 9092
    name: kafka-port
    protocol: TCP
  selector:
    app: kafka
    id: "0"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  replicas: 3
  serviceName: kafka
  podManagementPolicy: OrderedReady
  selector:
    matchLabels:
      app: kafka # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: kafka # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: kafka
        image: confluentinc/cp-enterprise-kafka:5.5.0
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181 # zookeeper-2.zookeeper.default.svc.cluster.local
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "LISTENER_0://kafka-0:9092,LISTENER_1://kafka-1:9093,LISTENER_2://kafka-2:9094"
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: "LISTENER_0:PLAINTEXT,LISTENER_1:PLAINTEXT,LISTENER_2:PLAINTEXT"
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: LISTENER_0
The 3 Kafka pods start, but only kafka-0 connects; kafka-1 and kafka-2 do not.
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kafka-0 1/1 Running 0 4m12s 172.17.0.4 minikube <none> <none>
kafka-1 1/1 Running 5 4m9s 172.17.0.5 minikube <none> <none>
kafka-2 0/1 CrashLoopBackOff 4 4m7s 172.17.0.6 minikube <none> <none>
zookeeper-0 1/1 Running 0 21m 172.17.0.3 minikube <none> <none>
The error says that kafka-0:9092,kafka-1:9093,kafka-2:9094 has already been advertised by the first pod, kafka-0. So I suppose the value has to be dynamic per pod. How do I configure it?
[2020-09-30 14:56:40,519] ERROR [KafkaServer id=1017] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.IllegalArgumentException: requirement failed: Configured end points kafka-0:9092,kafka-1:9093,kafka-2:9094 in advertised listeners are already registered by broker 1012
at kafka.server.KafkaServer.$anonfun$createBrokerInfo$3(KafkaServer.scala:436)
at kafka.server.KafkaServer.$anonfun$createBrokerInfo$3$adapted(KafkaServer.scala:434)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.server.KafkaServer.createBrokerInfo(KafkaServer.scala:434)
at kafka.server.KafkaServer.startup(KafkaServer.scala:293)
at io.confluent.support.metrics.SupportedServerStartable.startup(SupportedServerStartable.java:114)
at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:66)

After reading the blog post "Kafka Listeners - Explained", I was able to configure 3 Kafka brokers with the following configuration.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  replicas: 3
  serviceName: kafka
  podManagementPolicy: OrderedReady
  selector:
    matchLabels:
      app: kafka # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: kafka # has to match .spec.selector.matchLabels
    spec:
      restartPolicy: Always
      containers:
      - name: kafka
        image: confluentinc/cp-enterprise-kafka:5.5.0
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits: # limit of 0.5 cpu and 512MiB of memory
            memory: "512Mi"
            cpu: "500m"
        # imagePullPolicy: Always
        ports:
        - containerPort: 9092
          name: kafka-0
        - containerPort: 9093
          name: kafka-1
        - containerPort: 9094
          name: kafka-2
        env:
        - name: MY_METADATA_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: STAS_DELAY
          value: "120"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181 # zookeeper-2.zookeeper.default.svc.cluster.local
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "INSIDE://$(MY_POD_IP):9092"
        - name: KAFKA_LISTENERS
          value: "INSIDE://$(MY_POD_IP):9092"
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: "INSIDE:PLAINTEXT"
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: "INSIDE"

Related

Kafka and Zookeeper pods restart repeatedly on Kubernetes cluster

I'm developing a microservices-based application deployed with Kubernetes for a university project. I'm a newbie with Kubernetes and Kafka, and I'm trying to run Kafka and ZooKeeper in the same minikube cluster. I have created one pod for Kafka and one pod for ZooKeeper, but after deploying them on the cluster they restart repeatedly and end up in "CrashLoopBackOff". Looking at the logs, I noticed that Kafka throws a "ConnectException: Connection refused"; it seems that Kafka cannot establish a connection with ZooKeeper. I have created the pods manually with the following config files:
zookeeper-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zookeeper
        image: bitnami/zookeeper
        ports:
        - name: http
          containerPort: 2181
        env:
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  selector:
    app: zookeeper
  ports:
  - protocol: TCP
    port: 2181
kafka-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: bitnami/kafka
        ports:
        - name: http
          containerPort: 9092
        env:
        - name: KAFKA_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: ALLOW_PLAINTEXT_LISTENER
          value: "yes"
        - name: KAFKA_CFG_LISTENERS
          value: PLAINTEXT://:9092
        - name: KAFKA_CFG_ADVERTISED_LISTENERS
          value: PLAINTEXT://$(KAFKA_POD_IP):9092
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: PLAINTEXT:PLAINTEXT
        - name: KAFKA_CFG_ZOOKEEPER_CONNECT
          value: zookeeper:2181
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  selector:
    app: kafka
  ports:
  - protocol: TCP
    port: 9092
  type: LoadBalancer
The Kafka and ZooKeeper configurations are more or less the same ones I used with Docker Compose without errors, so there is probably something wrong in my Kubernetes configuration. Could anyone help me, please? I don't understand the issue. Thanks.

Kafka: connect from local machine to running in k8s on remote machine Kafka Broker

Good day everyone!
The main problem is: I want to connect from my local machine to Kafka, which runs in a k8s container on a remote cluster (let its DNS name be node03.st), deployed with my own manifests.
Here is the manifest for the ZooKeeper Deployment (image: confluentinc/cp-zookeeper:6.2.4):
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: aptmess
  name: zookeeper-aptmess-deployment
  labels:
    name: zookeeper-service-filter
spec:
  selector:
    matchLabels:
      app: zookeeper-label
  template:
    metadata:
      labels:
        app: zookeeper-label
    spec:
      containers:
      - name: zookeeper
        image: confluentinc/cp-zookeeper:6.2.4
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181 # ZK client
          name: client
        - containerPort: 2888 # Follower
          name: follower
        - containerPort: 3888 # Election
          name: election
        - containerPort: 8080 # AdminServer
          name: admin-server
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zookeeper
        - name: ZOOKEEPER_CLIENT_PORT
          value: "2181"
        - name: ZOOKEEPER_TICK_TIME
          value: "2000"
---
apiVersion: v1
kind: Service
metadata:
  namespace: aptmess
  name: zookeeper-service-aptmess
  labels:
    name: zookeeper-service-filter
spec:
  type: NodePort
  ports:
  - port: 2181
    protocol: TCP
    name: client
  - name: follower
    port: 2888
    protocol: TCP
  - name: election
    port: 3888
    protocol: TCP
  - port: 8080
    protocol: TCP
    name: admin-server
  selector:
    app: zookeeper-label
My Kafka StatefulSet manifest (image: confluentinc/cp-kafka:6.2.4):
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: aptmess
  name: kafka-stateful-set-aptmess
  labels:
    name: kafka-service-filter
spec:
  serviceName: kafka-broker
  replicas: 1
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: kafka-label
  template:
    metadata:
      labels:
        app: kafka-label
    spec:
      volumes:
      - name: config
        emptyDir: {}
      - name: extensions
        emptyDir: {}
      - name: kafka-storage
        persistentVolumeClaim:
          claimName: kafka-data-claim
      terminationGracePeriodSeconds: 300
      containers:
      - name: kafka
        image: confluentinc/cp-kafka:6.2.4
        imagePullPolicy: Always
        ports:
        - containerPort: 9092
        resources:
          requests:
            memory: "2Gi"
            cpu: "1"
        command:
        - bash
        - -c
        - unset KAFKA_PORT; /etc/confluent/docker/run
        env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka-broker
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-service-aptmess:2181
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
          value: "1"
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: "PLAINTEXT:PLAINTEXT,CONNECTIONS_FROM_HOST:PLAINTEXT"
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: "PLAINTEXT"
        - name: KAFKA_LISTENERS
          value: "PLAINTEXT://0.0.0.0:9092"
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "PLAINTEXT://kafka-broker.aptmess.svc.cluster.local:9092"
        volumeMounts:
        - name: config
          mountPath: /etc/kafka
        - name: extensions
          mountPath: /opt/kafka/libs/extensions
        - name: kafka-storage
          mountPath: /var/lib/kafka/
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
---
apiVersion: v1
kind: Service
metadata:
  namespace: aptmess
  name: kafka-broker
  labels:
    name: kafka-service-filter
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-port
    protocol: TCP
  selector:
    app: kafka-label
The NodePort for port 9092 is 30000.
When I try to connect from localhost, I get an error:
from kafka import KafkaProducer
producer = KafkaProducer(
    bootstrap_servers=['node03.st:30000']
)
>> Error connecting to node kafka-broker.aptmess.svc.cluster.local:9092 (id: 1 rack: null)
I spent a long time changing internal and external listeners, but it didn't help. What should I do to be able to send messages from my localhost to the remote Kafka broker?
Thanks in advance!
P.S.: I have searched these links for answers:
Use SCRAM-SHA-512 authentication with SSL on LoadBalancer in Strimzi Kafka
https://github.com/strimzi/strimzi-kafka-operator/issues/1156
https://github.com/strimzi/strimzi-kafka-operator/issues/1463
https://githubhelp.com/Yolean/kubernetes-kafka/issues/328?ysclid=l4grqi7hc6364785597
Connecting Kafka running on EC2 machine from my local machine
Access kafka broker in a remote machine ERROR
How to Connect to kafka on localhost (host machine) from app inside kubernetes (minikube)
kafka broker not available at starting
https://github.com/SOHU-Co/kafka-node/issues/666
https://docs.confluent.io/operator/current/co-nodeports.html
https://developers.redhat.com/blog/2019/06/07/accessing-apache-kafka-in-strimzi-part-2-node-ports
https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/
Kafka in Kubernetes Cluster- How to publish/consume messages from outside of Kubernetes Cluster
Kafka docker compose external connection
confluentinc image
NodePort for port 9092 is 30000
Then you need to define that node's hostname and port as part of KAFKA_ADVERTISED_LISTENERS, as mentioned in many of the linked posts... You've only defined one listener, and it's internal to k8s... However, keep in mind that's a poor solution unless you force the broker pod to run only on that one host, on that one port.
Alternatively, replace your setup with the Strimzi operator and read how you can use Ingress resources (ideally) to access the Kafka cluster; they also support NodePort - https://strimzi.io/blog/2019/04/17/accessing-kafka-part-1/ (cross-reference with the latest documentation since that's an old post).
Ingresses would be ideal because the Ingress controller can dynamically route requests to the broker pods while keeping a fixed external address; otherwise, you'll constantly need to use the k8s API to describe the broker pods and get their current port information.
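As a concrete illustration of the first suggestion (a hedged sketch, not the original manifests): add a second, external listener that is advertised as the node's address plus the NodePort, and pin that NodePort in the Service. The listener name CONNECTIONS_FROM_HOST reuses the entry already present in the question's protocol map; the container port 19092 and the pinned nodePort 30000 are assumptions for the example.

        - name: KAFKA_LISTENERS
          value: "PLAINTEXT://0.0.0.0:9092,CONNECTIONS_FROM_HOST://0.0.0.0:19092"
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "PLAINTEXT://kafka-broker.aptmess.svc.cluster.local:9092,CONNECTIONS_FROM_HOST://node03.st:30000"
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: "PLAINTEXT:PLAINTEXT,CONNECTIONS_FROM_HOST:PLAINTEXT"

and in the kafka-broker Service:

  ports:
  - port: 9092
    name: kafka-port
    protocol: TCP
  - port: 19092
    targetPort: 19092
    nodePort: 30000
    name: kafka-external
    protocol: TCP

With this, a client using bootstrap_servers=['node03.st:30000'] would get node03.st:30000 back as the advertised address instead of the internal cluster DNS name.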

Kafka connection refused with Kubernetes nodeport

I am trying to expose Kafka in my Kubernetes setup for external use using a NodePort.
My Helm chart's kafka-service.yaml is as follows:
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: test
  labels:
    app: kafka-test
    unit: kafka
spec:
  type: NodePort
  selector:
    app: test-app
    unit: kafka
    parentdeployment: test-kafka
  ports:
  - name: kafka
    port: 9092
    targetPort: 9092
    nodePort: 30092
    protocol: TCP
kafka-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
  namespace: {{ .Values.test.namespace }}
  labels:
    app: test-app
    unit: kafka
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
        unit: kafka
        parentdeployment: test-kafka
    spec:
      hostname: kafka
      subdomain: kafka
      securityContext:
        fsGroup: {{ .Values.test.groupID }}
      containers:
      - name: kafka
        image: test_kafka:{{ .Values.test.kafkaImageTag }}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: IS_KAFKA_CLUSTER
          value: 'false'
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2281
        - name: KAFKA_LISTENERS
          value: SSL://:9092
        - name: KAFKA_KEYSTORE_PATH
          value: /opt/kafka/conf/kafka.keystore.jks
        - name: KAFKA_TRUSTSTORE_PATH
          value: /opt/kafka/conf/kafka.truststore.jks
        - name: KAFKA_KEYSTORE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: kafka-secret
              key: jkskey
        - name: KAFKA_TRUSTSTORE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: kafka-secret
              key: jkskey
        - name: KAFKA_LOG_DIRS
          value: /opt/kafka/data
        - name: KAFKA_ADV_LISTENERS
          value: SSL://kafka:9092
        - name: KAFKA_CLIENT_AUTH
          value: none
        volumeMounts:
        - mountPath: "/opt/kafka/conf"
          name: kafka-conf-pv
        - mountPath: "/opt/kafka/data"
          name: kafka-data-pv
      volumes:
      - name: kafka-conf-pv
        persistentVolumeClaim:
          claimName: kafka-conf-pvc
      - name: kafka-data-pv
        persistentVolumeClaim:
          claimName: kafka-data-pvc
  selector:
    matchLabels:
      app: test-app
      unit: kafka
      parentdeployment: test-kafka
zookeeper service yaml
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: {{ .Values.test.namespace }}
  labels:
    app: test-ra
    unit: zookeeper
spec:
  type: ClusterIP
  selector:
    app: test-ra
    unit: zookeeper
    parentdeployment: test-zookeeper
  ports:
  - name: zookeeper
    port: 2281
zookeeper deployment yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
  namespace: {{ .Values.test.namespace }}
  labels:
    app: test-app
    unit: zookeeper
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
        unit: zookeeper
        parentdeployment: test-zookeeper
    spec:
      hostname: zookeeper
      subdomain: zookeeper
      securityContext:
        fsGroup: {{ .Values.test.groupID }}
      containers:
      - name: zookeeper
        image: test_zookeeper:{{ .Values.test.zookeeperImageTag }}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2281
        env:
        - name: IS_ZOOKEEPER_CLUSTER
          value: 'false'
        - name: ZOOKEEPER_SSL_CLIENT_PORT
          value: '2281'
        - name: ZOOKEEPER_DATA_DIR
          value: /opt/zookeeper/data
        - name: ZOOKEEPER_DATA_LOG_DIR
          value: /opt/zookeeper/data/log
        - name: ZOOKEEPER_KEYSTORE_PATH
          value: /opt/zookeeper/conf/zookeeper.keystore.jks
        - name: ZOOKEEPER_KEYSTORE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: zookeeper-secret
              key: jkskey
        - name: ZOOKEEPER_TRUSTSTORE_PATH
          value: /opt/zookeeper/conf/zookeeper.truststore.jks
        - name: ZOOKEEPER_TRUSTSTORE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: zookeeper-secret
              key: jkskey
        volumeMounts:
        - mountPath: "/opt/zookeeper/data"
          name: zookeeper-data-pv
        - mountPath: "/opt/zookeeper/conf"
          name: zookeeper-conf-pv
      volumes:
      - name: zookeeper-data-pv
        persistentVolumeClaim:
          claimName: zookeeper-data-pvc
      - name: zookeeper-conf-pv
        persistentVolumeClaim:
          claimName: zookeeper-conf-pvc
  selector:
    matchLabels:
      app: test-ra
      unit: zookeeper
      parentdeployment: test-zookeeper
kubectl describe for the Kafka service also shows the exposed NodePort:
Type: NodePort
IP: 10.233.1.106
Port: kafka 9092/TCP
TargetPort: 9092/TCP
NodePort: kafka 30092/TCP
Endpoints: 10.233.66.15:9092
Session Affinity: None
External Traffic Policy: Cluster
I have a publisher binary that sends some messages to Kafka. As I have a 3-node cluster deployment, I use my primary node's IP and the Kafka node port (30092) to connect to Kafka.
But my binary gets a dial tcp <primary_node_ip>:9092: connect: connection refused error. I don't understand why the connection is rejected even though the nodePort-to-targetPort mapping is in place. With further debugging I see the following debug logs in the Kafka logs:
[2021-01-13 08:17:51,692] DEBUG Accepted connection from /10.233.125.0:1564 on /10.233.66.15:9092 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] (kafka.network.Acceptor)
[2021-01-13 08:17:51,692] DEBUG Processor 0 listening to new connection from /10.233.125.0:1564 (kafka.network.Processor)
[2021-01-13 08:17:51,702] DEBUG [SslTransportLayer channelId=10.233.66.15:9092-10.233.125.0:1564-245 key=sun.nio.ch.SelectionKeyImpl#43dc2246] SSL peer is not authenticated, returning ANONYMOUS instead (org.apache.kafka.common.network.SslTransportLayer)
[2021-01-13 08:17:51,702] DEBUG [SslTransportLayer channelId=10.233.66.15:9092-10.233.125.0:1564-245 key=sun.nio.ch.SelectionKeyImpl#43dc2246] SSL handshake completed successfully with peerHost '10.233.125.0' peerPort 1564 peerPrincipal 'User:ANONYMOUS' cipherSuite 'TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256' (org.apache.kafka.common.network.SslTransportLayer)
[2021-01-13 08:17:51,702] DEBUG [SocketServer brokerId=1001] Successfully authenticated with /10.233.125.0 (org.apache.kafka.common.network.Selector)
[2021-01-13 08:17:51,707] DEBUG [SocketServer brokerId=1001] Connection with /10.233.125.0 disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
at org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:614)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:95)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:448)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:398)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:678)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:580)
at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
at kafka.network.Processor.poll(SocketServer.scala:861)
at kafka.network.Processor.run(SocketServer.scala:760)
at java.lang.Thread.run(Thread.java:748)
With the same configuration, I was able to expose other services. What am I missing here?
Update: When I added KAFKA_LISTENERS and KAFKA_ADV_LISTENERS entries for EXTERNAL and changed the targetPort to 30092, the error during external connections disappeared, but I started getting connection errors for internal connections.
Solution:
I exposed another service for external communication, as mentioned in the answer, using 30092 as both the port and the nodePort, so no targetPort was required. I also had to add additional KAFKA_LISTENERS and KAFKA_ADV_LISTENERS entries in the deployment file for external communication.
We faced a similar issue in one of our Kafka setups; we ended up creating two k8s services: one using ClusterIP for internal communication, and a second service with the same labels using NodePort for external communication.
internal access
apiVersion: v1
kind: Service
metadata:
  name: kafka-internal
  namespace: test
  labels:
    app: kafka-test
    unit: kafka
spec:
  type: ClusterIP
  selector:
    app: test-app
    unit: kafka
    parentdeployment: test-kafka
  ports:
  - name: kafka
    port: 9092
    protocol: TCP
external access
apiVersion: v1
kind: Service
metadata:
  name: kafka-external
  namespace: test
  labels:
    app: kafka-test
    unit: kafka
spec:
  type: NodePort
  selector:
    app: test-app
    unit: kafka
    parentdeployment: test-kafka
  ports:
  - name: kafka
    port: 9092
    targetPort: 9092
    protocol: TCP
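Tying this back to the "Solution" note above, here is a hedged sketch of what the external path could look like with the ports pinned and a matching external listener. The listener name EXTERNAL, the pinned nodePort 30092, and the <node_ip> placeholder are assumptions, and it is also assumed that the custom test_kafka image passes KAFKA_LISTENERS / KAFKA_ADV_LISTENERS straight through to Kafka's listeners / advertised.listeners (the listener-name-to-protocol mapping may also need an EXTERNAL entry, depending on the image).

  # external Service: 30092 used as both port and nodePort, so targetPort
  # defaults to 30092 and the broker's external listener binds that port
  ports:
  - name: kafka-external
    port: 30092
    nodePort: 30092
    protocol: TCP

  # extra listener entries in the Kafka deployment (illustrative values)
  - name: KAFKA_LISTENERS
    value: SSL://:9092,EXTERNAL://:30092
  - name: KAFKA_ADV_LISTENERS
    value: SSL://kafka:9092,EXTERNAL://<node_ip>:30092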

Can't access kafka from outside kubernetes

I'm trying to access Kafka from outside Kubernetes on my local machine. I'm using a Spring application to produce events on a topic. This is my deployment file for Kafka:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-broker0
  labels:
    app: kafka
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kafka
      id: "0"
  template:
    metadata:
      labels:
        app: kafka
        id: "0"
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "30718"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: 192.168.1.240
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181
        - name: KAFKA_BROKER_ID
          value: "0"
        - name: KAFKA_CREATE_TOPICS
          value: LaunchScraper:1:1
And the service file is:
apiVersion: v1
kind: Service
metadata:
  name: kafka-services
  labels:
    name: kafka
spec:
  selector:
    app: kafka
    id: "0"
  ports:
  - protocol: TCP
    name: kafka-port
    port: 9092
  type: NodePort
I've already created a ZooKeeper pod on Kubernetes. My Spring Boot application shows this error:
2020-09-25 23:56:29.123 WARN 44324 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Connection to node -1 (/192.168.1.240:9092) could not be established. Broker may not be available.
It seems you haven't fixed a nodePort in your service. Set it to the value you've entered in KAFKA_ADVERTISED_PORT, and set KAFKA_ADVERTISED_HOST_NAME to your K8s node hostname/DNS.
In the spec for your service, add nodePort: 30718 under the ports entry. Then, in your client, try to connect on port 30718 using the node's address or hostname.
Also, if you're looking to deploy Kafka in production, I'd recommend using an operator like Strimzi: https://Strimzi.io
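A minimal sketch of that Service change, assuming the rest of the manifest stays as in the question (only the ports entry gains a targetPort and a pinned nodePort):

apiVersion: v1
kind: Service
metadata:
  name: kafka-services
  labels:
    name: kafka
spec:
  type: NodePort
  selector:
    app: kafka
    id: "0"
  ports:
  - name: kafka-port
    protocol: TCP
    port: 9092
    targetPort: 9092
    nodePort: 30718   # matches KAFKA_ADVERTISED_PORT, so clients connect to <node>:30718

The client would then bootstrap against 192.168.1.240:30718 (the advertised host and port) rather than port 9092.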
Deploying Kafka on Kubernetes was actually not as trivial as I first thought, but it worked after many trials and errors. Many examples you find on the internet did not work for me with the current versions of Kubernetes / Kafka. What worked was:
Using a StatefulSet for Kafka, not a Deployment
Setting KAFKA_ADVERTISED_LISTENERS and KAFKA_LISTENERS as shown below
An additional port for outside access (in my case 32092, but it's arbitrary) in the NodePort service; don't forget to then access Kafka from outside via 32092, not 9092
A working example config would be (as a replacement for your Deployment and Service, probably not minimal):
apiVersion: v1
kind: Service
metadata:
  labels:
    service: kafka
  name: kafka
spec:
  type: NodePort
  ports:
  - name: "9092"
    port: 9092
    protocol: TCP
    targetPort: 9092
  - name: "9093"
    port: 9093
    protocol: TCP
    targetPort: 9093
  - name: "32092"
    port: 32092
    protocol: TCP
    targetPort: 32092
    nodePort: 32092
  selector:
    service: kafka-instance
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    service: kafka-instance
  name: kafka-instance
spec:
  selector:
    matchLabels:
      service: kafka-instance
  serviceName: "kafka"
  replicas: 1
  template:
    metadata:
      labels:
        service: kafka-instance
    spec:
      containers:
      - env:
        - name: MY_HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: KAFKA_ADVERTISED_LISTENERS
          value: INTERNAL://$(MY_POD_NAME).kafka.default.svc.cluster.local:9093,CLIENT://$(MY_POD_NAME).kafka.default.svc.cluster.local:9092,EXTERNAL://$(MY_HOST_IP):32092
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: INTERNAL
        - name: KAFKA_LISTENERS
          value: INTERNAL://:9093,CLIENT://:9092,EXTERNAL://:32092
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
        - name: KAFKA_PORT
          value: "9092"
        - name: KAFKA_RESTART_ATTEMPTS
          value: "10"
        - name: KAFKA_RESTART_DELAY
          value: "5"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181
        - name: KAFKA_ZOOKEEPER_SESSION_TIMEOUT
          value: "6000"
        - name: ZOOKEEPER_AUTOPURGE_PURGE_INTERVAL
          value: "0"
        image: wurstmeister/kafka
        name: kafka-instance
        ports:
        - containerPort: 9092
If you don't already have a ZooKeeper, just add this and it should work:
apiVersion: v1
kind: Service
metadata:
  labels:
    service: zoo1
  name: zoo1
spec:
  ports:
  - name: "2181"
    port: 2181
    targetPort: 2181
  selector:
    service: zoo1-instance
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    service: zoo1-instance
  name: zoo1-instance
spec:
  selector:
    matchLabels:
      service: zoo1-instance
  serviceName: "zoo1"
  replicas: 1
  template:
    metadata:
      labels:
        service: zoo1-instance
    spec:
      containers:
      - image: wurstmeister/zookeeper
        name: zoo1-instance
        ports:
        - containerPort: 2181

Kafka Kubernetes: Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic

I'm trying to set up a Kafka pod in Kubernetes but I keep getting this error:
[2020-08-30 11:23:39,354] ERROR [KafkaApi-0] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
This is my Kafka deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
  namespace: instagnam
  labels:
    app: instagnam
    service: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: instagnam
  template:
    metadata:
      labels:
        app: instagnam
        service: kafka
        id: "0"
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        imagePullPolicy: Always
        ports:
        - containerPort: 9092
          name: kafka
        env:
        - name: KAFKA_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
        - name: KAFKA_CREATE_TOPICS
          value: connessioni:2:1,ricette:2:1
        - name: KAFKA_BROKER_ID
          value: "0"
This is my Kafka service:
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: instagnam
  labels:
    app: instagnam
    service: kafka
spec:
  selector:
    app: instagnam
    service: kafka
    id: "0"
  type: LoadBalancer
  ports:
  - name: kafka
    protocol: TCP
    port: 9092
This is my Zookeeper deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
  namespace: instagnam
  labels:
    app: instagnam
    service: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: instagnam
      service: zookeeper
  template:
    metadata:
      labels:
        app: instagnam
        service: zookeeper
    spec:
      containers:
      - image: wurstmeister/zookeeper
        name: zookeeper
        imagePullPolicy: Always
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zookeeper
And this is my Zookeeper service:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: instagnam
spec:
  selector:
    app: instagnam
    service: zookeeper
  ports:
  - name: client
    protocol: TCP
    port: 2181
  - name: follower
    protocol: TCP
    port: 2888
  - name: leader
    protocol: TCP
    port: 3888
What am I doing wrong here?
If you need the full Kafka log, here it is: https://pastebin.com/eBu8JB8A
And here are the ZooKeeper logs if you need them too: https://pastebin.com/gtnxSftW
EDIT:
I'm running this on minikube, if that helps.
A change of the Kafka broker.id may cause this problem. Clean up the Kafka metadata under ZooKeeper, e.g. deleteall /brokers...
Note: Kafka data will be lost.
Assuming that you're on the same Kafka image, the solution that fixed the issue for me was:
Replacing the deprecated settings KAFKA_ADVERTISED_PORT and KAFKA_ADVERTISED_HOST_NAME, as detailed in the Docker image README (see the current docs, or the pinned README commit), with KAFKA_ADVERTISED_LISTENERS and KAFKA_LISTENERS, to which I had to add "inside" and "outside" configurations.
Summarized from https://github.com/wurstmeister/kafka-docker/issues/218#issuecomment-362327563
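For illustration, a hedged sketch of what such an "inside"/"outside" pair could look like in this question's Deployment (wurstmeister/kafka). The listener names, the extra container port 9094, and the <node-or-minikube-ip>:30094 NodePort pairing are assumptions, not taken from the answer:

        - name: KAFKA_LISTENERS
          value: INSIDE://:9092,OUTSIDE://:9094
        - name: KAFKA_ADVERTISED_LISTENERS
          # INSIDE is advertised as the in-cluster Service name; OUTSIDE as the
          # node address plus a NodePort that a Service must map to port 9094
          value: INSIDE://kafka:9092,OUTSIDE://<node-or-minikube-ip>:30094
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: INSIDE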