Kafka and Zookeeper pods restart repeatedly on Kubernetes cluster - kubernetes

I'm developing a microservices-based application deployed with Kubernetes for a university project. I'm new to Kubernetes and Kafka, and I'm trying to run Kafka and Zookeeper in the same minikube cluster. I have created one pod for Kafka and one pod for Zookeeper, but after deploying them on the cluster they restart repeatedly and end up in the "CrashLoopBackOff" state. Looking at the logs I noticed that Kafka throws a "ConnectException: Connection refused", so it seems that Kafka cannot establish a connection with Zookeeper. I created the pods manually with the following config files:
zookeeper-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
        - name: zookeeper
          image: bitnami/zookeeper
          ports:
            - name: http
              containerPort: 2181
          env:
            - name: ALLOW_ANONYMOUS_LOGIN
              value: "yes"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  selector:
    app: zookeeper
  ports:
    - protocol: TCP
      port: 2181
kafka-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: bitnami/kafka
          ports:
            - name: http
              containerPort: 9092
          env:
            - name: KAFKA_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: KAFKA_BROKER_ID
              value: "1"
            - name: ALLOW_PLAINTEXT_LISTENER
              value: "yes"
            - name: KAFKA_CFG_LISTENERS
              value: PLAINTEXT://:9092
            - name: KAFKA_CFG_ADVERTISED_LISTENERS
              value: PLAINTEXT://$(KAFKA_POD_IP):9092
            - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
              value: PLAINTEXT:PLAINTEXT
            - name: KAFKA_CFG_ZOOKEEPER_CONNECT
              value: zookeeper:2181
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  selector:
    app: kafka
  ports:
    - protocol: TCP
      port: 9092
  type: LoadBalancer
The Kafka and Zookeeper configurations are more or less the same ones I used with docker-compose, where everything worked, so there is probably something wrong in my Kubernetes configuration. Could anyone help me, please? I don't understand the issue. Thanks.
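Before digging into the manifests, a quick sanity check is to confirm that the zookeeper Service actually has endpoints and that its name resolves inside the cluster. This is a hedged debugging sketch (busybox is only used here as a throwaway test image), not part of the original question:

kubectl get endpoints zookeeper
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup zookeeper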

Related

How to ping a headless pod from another namespace

Hello, I have a problem connecting directly to a chosen pod from another namespace. I checked with nslookup, and the DNS name for my pod is kafka-0.kafka-headless-svc.message-broker.svc.cluster.local; when I ping it from the same namespace everything is OK, but the problem appears when I try the same from another namespace. When I try something like kafka.message-broker.svc.cluster.local everything is fine, but I want to connect to a specific pod, not the one chosen by the balancer. My configs:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: kafka
  name: kafka
  namespace: message-broker
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper.message-broker.svc.cluster.local
            - name: KAFKA_LISTENERS
              value: PLAINTEXT://kafka-0:9092
            - name: KAFKA_ADVERTISED_LISTENERS
              value: PLAINTEXT://kafka-0:9092
          image: wurstmeister/kafka
          imagePullPolicy: IfNotPresent
          name: kafka
          ports:
            - containerPort: 9092
              name: kafka
  serviceName: "kafka-headless-svc"
  selector:
    matchLabels:
      app: kafka
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka
  name: kafka
  namespace: message-broker
spec:
  clusterIP: None
  selector:
    app: kafka
  ports:
    - port: 9092
      targetPort: 9092
      protocol: TCP
I am doing everything on minikube, so maybe there is some problem with a NetworkPolicy (I didn't set any).
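As an aside (not from the original post), you can quickly verify that no NetworkPolicy objects are in play at all:

kubectl get networkpolicy --all-namespaces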
Problem resolved. I had to set serviceName in the StatefulSet (in my example "kafka-headless-svc") to the same value as metadata.name of my Service, as sketched below.
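A minimal sketch of the corrected pairing, assuming the headless Service is renamed to match the serviceName used by the StatefulSet (the other direction, changing serviceName to "kafka", would work just as well):

# StatefulSet (unchanged)
spec:
  serviceName: "kafka-headless-svc"
---
# headless Service, renamed so metadata.name matches serviceName
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless-svc
  namespace: message-broker
spec:
  clusterIP: None
  selector:
    app: kafka
  ports:
    - port: 9092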

Kafka Kubernetes: Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic

I'm trying to set up a Kafka pod in Kubernetes but I keep getting this error:
[2020-08-30 11:23:39,354] ERROR [KafkaApi-0] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
This is my Kafka deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
  namespace: instagnam
  labels:
    app: instagnam
    service: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: instagnam
  template:
    metadata:
      labels:
        app: instagnam
        service: kafka
        id: "0"
    spec:
      containers:
        - name: kafka
          image: wurstmeister/kafka
          imagePullPolicy: Always
          ports:
            - containerPort: 9092
              name: kafka
          env:
            - name: KAFKA_PORT
              value: "9092"
            - name: KAFKA_ADVERTISED_PORT
              value: "9092"
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: kafka
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper:2181
            - name: KAFKA_CREATE_TOPICS
              value: connessioni:2:1,ricette:2:1
            - name: KAFKA_BROKER_ID
              value: "0"
This is my Kafka service:
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: instagnam
  labels:
    app: instagnam
    service: kafka
spec:
  selector:
    app: instagnam
    service: kafka
    id: "0"
  type: LoadBalancer
  ports:
    - name: kafka
      protocol: TCP
      port: 9092
This is my Zookeeper deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
  namespace: instagnam
  labels:
    app: instagnam
    service: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: instagnam
      service: zookeeper
  template:
    metadata:
      labels:
        app: instagnam
        service: zookeeper
    spec:
      containers:
        - image: wurstmeister/zookeeper
          name: zookeeper
          imagePullPolicy: Always
          ports:
            - containerPort: 2181
          env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_SERVER_1
              value: zookeeper
And this is my Zookeeper service:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: instagnam
spec:
  selector:
    app: instagnam
    service: zookeeper
  ports:
    - name: client
      protocol: TCP
      port: 2181
    - name: follower
      protocol: TCP
      port: 2888
    - name: leader
      protocol: TCP
      port: 3888
What am I doing wrong here?
If you need the full Kafka log here it is: https://pastebin.com/eBu8JB8A
And here are the Zookeeper logs if you need them too: https://pastebin.com/gtnxSftW
EDIT:
I'm running this on minikube if this can help.
A change of the Kafka broker.id may cause this problem. Clean up the Kafka metadata stored in Zookeeper, e.g. deleteall /brokers...
Note: the Kafka data will be lost.
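A minimal sketch of that cleanup (assuming the Zookeeper image ships zkCli.sh and runs ZooKeeper 3.5+, where deleteall exists; on older releases the equivalent command is rmr, and the script path may differ by image):

# open a shell in the Zookeeper pod (Deployment "zookeeper", namespace "instagnam")
kubectl exec -n instagnam -it deploy/zookeeper -- sh
# inside the container: connect with the CLI and remove the broker metadata
./bin/zkCli.sh -server localhost:2181
deleteall /brokers
quit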
Assuming you're on the same Kafka image, the solution that fixed the issue for me was:
Replacing the deprecated KAFKA_ADVERTISED_PORT and KAFKA_ADVERTISED_HOST_NAME settings, as detailed in the Docker image README (see the current docs or the pinned README commit), with KAFKA_ADVERTISED_LISTENERS and KAFKA_LISTENERS, to which I had to add "inside" and "outside" listener configurations.
Summarized from https://github.com/wurstmeister/kafka-docker/issues/218#issuecomment-362327563
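As an illustration only (the hostnames here are placeholders, not values taken from the question), such an inside/outside listener split in the container's env section of the wurstmeister image looks roughly like this:

- name: KAFKA_LISTENERS
  value: INSIDE://:9093,OUTSIDE://:9092
- name: KAFKA_ADVERTISED_LISTENERS
  value: INSIDE://kafka:9093,OUTSIDE://localhost:9092   # "kafka"/"localhost" are placeholders
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
  value: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
- name: KAFKA_INTER_BROKER_LISTENER_NAME
  value: INSIDE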

Container startup issue on Kubernetes for Kafka & Zookeeper

I am trying to create a Spring Boot producer and consumer with Zookeeper and Kafka on Kubernetes, but I am not able to get the Kubernetes deployment working; it keeps failing. I am not sure what is misconfigured here, because the same setup works for me in docker-compose.
I used the file below to create the Service and Deployment objects on my local docker-desktop cluster:
kubectl apply -f $(pwd)/kubernates/sample.yml
The error I get at deployment time is added at the end.
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  ports:
    - name: client
      port: 2181
      protocol: TCP
    - name: follower
      port: 2888
      protocol: TCP
    - name: leader
      port: 3888
      protocol: TCP
  selector:
    app: zookeeper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
        - name: zoo1
          image: digitalwonderland/zookeeper
          ports:
            - containerPort: 2181
          env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_SERVER_1
              value: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
  labels:
    name: kafka
spec:
  ports:
    - port: 9092
      name: kafka-port
      protocol: TCP
  selector:
    app: kafka
    id: "0"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
        id: "0"
    spec:
      containers:
        - name: kafka
          image: wurstmeister/kafka
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ADVERTISED_PORT
              value: "9092"
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: kafka
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper:2181
            - name: KAFKA_BROKER_ID
              value: "0"
            - name: KAFKA_CREATE_TOPICS
              value: sample.topic:1:1
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-cat
spec:
  selector:
    matchLabels:
      app: kafka-cat
  template:
    metadata:
      labels:
        app: kafka-cat
    spec:
      containers:
        - name: kafka-cat
          image: confluentinc/cp-kafkacat
          command: ["/bin/sh"]
          args: ["-c", "trap : TERM INT; sleep infinity & wait"]
Exception in the container:
[2020-08-03 18:47:49,724] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
org.apache.kafka.common.config.ConfigException: Invalid value tcp://10.103.92.112:9092 for configuration port: Not a number of type INT
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:726)
at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:474)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:467)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:142)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1235)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1238)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:1218)
at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:34)
at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:29)
at kafka.Kafka$.main(Kafka.scala:68)
at kafka.Kafka.main(Kafka.scala)
Finally, I was able to solve this by using a different name for the Kafka Service:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  selector:
    app: zookeeper
  ports:
    - name: client
      port: 2181
      protocol: TCP
    - name: follower
      port: 2888
      protocol: TCP
    - name: leader
      port: 3888
      protocol: TCP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
        - name: zookeeper
          image: digitalwonderland/zookeeper
          ports:
            - containerPort: 2181
          env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_SERVER_1
              value: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service0
  labels:
    name: kafka
spec:
  ports:
    - port: 9092
      name: kafka-port
      protocol: TCP
  selector:
    app: kafka
    id: "0"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker0
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
        id: "0"
    spec:
      containers:
        - name: kafka
          image: wurstmeister/kafka
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ADVERTISED_PORT
              valueFrom:
                configMapKeyRef:
                  name: application-conf
                  key: kafka_ad_port
            - name: KAFKA_ZOOKEEPER_CONNECT
              valueFrom:
                configMapKeyRef:
                  name: application-conf
                  key: zk_url
            - name: KAFKA_CREATE_TOPICS
              valueFrom:
                configMapKeyRef:
                  name: application-conf
                  key: kafka_topic
            - name: KAFKA_BROKER_ID
              value: "0"
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: kafka-service0
Rename the Kubernetes Service from kafka to something else, kafka-broker for example. A Service named kafka makes Kubernetes inject a KAFKA_PORT=tcp://... environment variable into the pod, which the image misreads as the broker's port setting (hence the "Not a number of type INT" error). Then update KAFKA_ADVERTISED_HOST_NAME or KAFKA_ADVERTISED_LISTENERS, or both:
KAFKA_ADVERTISED_HOST_NAME: kafka-broker:9092
KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-broker:9092
For completeness, the application-conf ConfigMap referenced above:
apiVersion: v1
kind: ConfigMap
metadata:
  name: application-conf # name of ConfigMap, referenced in other files
  namespace: default
data:
  host: "mysql" # host address of mysql server, we are using DNS of Service
  name: "espark-mysql" # name of the database for application
  port: "3306"
  zk_url: "zookeeper:2181"
  kafka_url: "kafka-service0:9092"
  kafka_topic: "espark-topic:1:2"
  kafka_ad_port: "9092"
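A quick way to see the Service-generated variables behind the original error is to dump the KAFKA-prefixed environment inside the broker pod (the deployment name is taken from the manifest above); with a Service still named kafka this listing would include KAFKA_PORT=tcp://<cluster-ip>:9092, the value the broker rejected:

kubectl exec deploy/kafka-broker0 -- env | grep '^KAFKA'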
Just add the hostname and port for KAFKA_ADVERTISED_HOST_NAME, for example localhost:9092, in your YAML file for Kafka.

I'm having a hard time setting up Kafka on GKE and would like to know the best way of setting it up

I was trying to use a StatefulSet to deploy the Zookeeper and Kafka servers in a GKE cluster, but the communication between Kafka and Zookeeper fails with an error message in the logs. I'd like to know the easiest way to set up Kafka in Kubernetes.
I've tried the following configurations and I can see that Kafka fails to communicate with Zookeeper, but I am not sure why. I know that I may need a headless Service, because the communication is handled by Kafka and Zookeeper themselves.
For Zookeeper
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  type: LoadBalancer
  selector:
    app: zookeeper
  ports:
    - port: 2181
      targetPort: client
      name: zk-port
    - port: 2888
      targetPort: leader
      name: zk-leader
    - port: 3888
      targetPort: election
      name: zk-election
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
  replicas: 3
  selector:
    matchLabels:
      app: zookeeper
  serviceName: zookeeper
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
        - name: zk-pod
          image: zookeeper:latest
          imagePullPolicy: Always
          ports:
            - name: client
              containerPort: 2181
            - name: leader
              containerPort: 2888
            - name: election
              containerPort: 3888
          env:
            - name: ZOO_MY_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ZOO_TICK_TIME
              value: "2000"
            - name: ZOO_INIT_LIMIT
              value: "5"
            - name: ZOO_SYNC_LIMIT
              value: "2"
            - name: ZOO_SERVERS
              value: zookeeper:2888:3888
For Kafka
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka-svc
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: confluentinc/cp-kafka:latest
          ports:
            - containerPort: 9092
              name: client
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper:2181
            - name: KAFKA_ADVERTISED_LISTENERS
              value: kafka.default.svc.cluster.local:9092
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
spec:
  type: LoadBalancer
  selector:
    app: kafka
  ports:
    - port: 9092
      targetPort: client
      name: kfk-port
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 2
I'd like to be able to send messages to a topic and read them back. I've been using kafkacat to test the connection.
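For reference, a typical kafkacat round trip for such a test could look like the sketch below; the broker address is a placeholder (the LoadBalancer's external IP or a port-forward), -L lists cluster metadata, -P produces and -C consumes:

kafkacat -b <EXTERNAL-IP>:9092 -L
echo "hello" | kafkacat -b <EXTERNAL-IP>:9092 -t test-topic -P
kafkacat -b <EXTERNAL-IP>:9092 -t test-topic -C -o beginning -e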
This is one of the limitations specified in the official Kubernetes documentation about StatefulSets:
StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. You are responsible for creating this Service.
So, as you mentioned, you need a headless Service, and you can simply add a headless Service YAML similar to the one below on top of the configuration for both of your StatefulSets:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  ports:
    - port: 2181
      name: someport
  clusterIP: None
  selector:
    app: zookeeper
Hope it helps!
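As a side note (assuming the default cluster domain cluster.local and the default namespace), once a StatefulSet references this headless Service via serviceName, every pod gets a stable per-pod DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local, which is what you would then use in ZOO_SERVERS or KAFKA_ZOOKEEPER_CONNECT, for example:

zookeeper-0.zookeeper.default.svc.cluster.local:2181
zookeeper-1.zookeeper.default.svc.cluster.local:2181
zookeeper-2.zookeeper.default.svc.cluster.local:2181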
I have been following the official Google Click to Deploy guide, which proved very useful for me. Here is the link you can follow to set up a Kafka cluster on GKE. This GitHub repository is officially maintained by Google Cloud.
https://github.com/GoogleCloudPlatform/click-to-deploy/tree/master/k8s/kafka
Another simple approach is to deploy via the Google Cloud Console.
https://console.cloud.google.com/marketplace
Search for "kafka cluster (with replication)", click Configure, and fill out all the necessary details; it will configure the Kafka cluster for you with all internal communication enabled. I have removed my cluster name for privacy.

kafka - ERROR Error when sending message to topic test-topic with key: null, value: 17 bytes with error

I am working on deploying Kafka/Zookeeper in Kubernetes using minikube. Below is my YAML file:
##################################
# Setup Zookeeper Deployment
##################################
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: zookeeper
  name: zookeeper
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
        - image: wurstmeister/zookeeper
          # imagePullPolicy: Always
          name: zookeeper
          ports:
            - containerPort: 2181
##################################
# Setup Zookeeper Service
##################################
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-service
  name: zookeeper-service
spec:
  type: NodePort
  ports:
    - name: zookeeper-port
      port: 2181
      nodePort: 30181
      targetPort: 2181
  selector:
    app: zookeeper
---
##################################
# Setup Kafka service
##################################
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka-service
  name: kafka-service
spec:
  type: NodePort
  ports:
    - name: kafka-port
      port: 9092
      nodePort: 30092
      targetPort: 9092
  selector:
    app: kafka
---
##################################
# Setup Kafka Broker Deployment
##################################
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kafka
  name: kafka
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - env:
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: 192.168.99.100
            - name: KAFKA_ADVERTISED_PORT
              value: "30092"
            - name: KAFKA_BROKER_ID
              value: "1"
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: 192.168.99.100:30181
            - name: KAFKA_ADVERTISED_LISTENERS
              value: "PLAINTEXT://192.168.99.100:30092"
            # - name: KAFKA_LISTENERS
            #   value: "PLAINTEXT://192.168.99.100:9092"
            - name: KAFKA_CREATE_TOPICS
              value: "vignesh-topic:1:1"
            - name: LOG4J_LOGGER_KAFKA_AUTHORIZER_LOGGER
              value: "DEBUG"
          image: wurstmeister/kafka
          #imagePullPolicy: Always
          name: kafka
          ports:
            - containerPort: 9092
I have successfully created the Deployments/Services in my local Kubernetes cluster (minikube) using the command below:
kubectl create -f kafka.yml
I exec'd into the Kafka pod and I am able to create a topic using the command below:
./bin/kafka-topics.sh --create --zookeeper 192.168.99.100:30181 --replication-factor 1 --partitions 1 --topic test-topic
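A typical way to then produce to that topic, sketched here as an assumption since the exact command is not included in the question, is the stock console producer pointed at the NodePort address:

./bin/kafka-console-producer.sh --broker-list 192.168.99.100:30092 --topic test-topic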
But when I try to send a message to the topic (test-topic), the system throws the error below.
Note: when I run netstat -tunap, both ports 30092 and 30181 show as established.
I don't know what I am missing here. Please help me move forward.
Thanks, I appreciate your help.
Thank you #SoheilPourbafrani and #cricket_007 for your help! I have found a workaround for the question I asked above.
Once I ran the commands below in Windows PowerShell, Kafka started properly and I was able to communicate with it from my Node application and from Kafka Tool as well.
minikube ssh
sudo ip link set docker0 promisc on
References: Newer versions of Minikube don't allow Pods to use their own Services