Error Init Kafka Client: kafka: client has run out of available brokers to talk to: dial tcp: lookup kafka on 10.96.0.10:53: no such host - kubernetes

C:\kafka>kubectl logs kafka-exporter-745f574c74-tzfn4
I0127 12:39:08.113890 1 kafka_exporter.go:792] Starting kafka_exporter (version=1.6.0, branch=master, revision=9d9cd654ca57e4f153d0d0b00ce36069b6a677c1)
F0127 12:39:08.890639 1 kafka_exporter.go:893] Error Init Kafka Client: kafka: client has run out of available brokers to talk to: dial tcp: lookup kafka.osm.svc.cluster.local on 10.96.0.10:53: no such host
Below is the kafka-exporter-deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-exporter
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-exporter
  template:
    metadata:
      labels:
        app: kafka-exporter
    spec:
      containers:
        - name: kafka-exporter
          image: danielqsj/kafka-exporter:latest
          imagePullPolicy: IfNotPresent
          args:
            - --kafka.server=kafka.osm.svc.cluster.local:9092
            - --web.listen-address=:9092
          ports:
            - containerPort: 9308
          env:
            - name: KAFKA_EXPORTER_KAFKA_CONNECT
              value: kafka-broker-644794f4ff-8gmxb:9092
            - name: KAFKA_EXPORTER_TOPIC_WHITELIST
              value: samptopic
kafka-exporter-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-exporter
  namespace: default
spec:
  selector:
    app: kafka-exporter
  ports:
    - name: http
      port: 9308
      targetPort: 9308
  type: NodePort

Actually, there is no namespace called "osm"; everything is running in the default namespace.
Refer to the Kubernetes documentation.
As the error says, there is no Service named kafka in an osm namespace (whether or not that namespace exists) that can be reached. Remove .osm from that address, or change it to .default, assuming there really is a Service named kafka available on port 9092.
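For example, assuming the broker Service really is named kafka and lives in the default namespace, the exporter args would point at it roughly like this (a sketch, not a verified config; the :9308 listen address is an extra assumption so the exporter's web port matches the declared containerPort):
args:
  - --kafka.server=kafka.default.svc.cluster.local:9092   # or simply kafka:9092 from within the default namespace
  - --web.listen-address=:9308                            # exporter's own HTTP port, matching containerPort 9308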

Related

Kafka and Zookeeper pods restart repeatedly on Kubernetes cluster

I'm developing a microservices-based application deployed on Kubernetes for a university project. I'm new to Kubernetes and Kafka, and I'm trying to run Kafka and ZooKeeper in the same minikube cluster. I created one pod for Kafka and one for ZooKeeper, but after deploying them to the cluster they restart repeatedly and end up in a "CrashLoopBackOff" state. Looking at the logs, I noticed that Kafka throws a "ConnectException: Connection refused"; it seems Kafka cannot establish a connection with ZooKeeper. I created the pods manually with the following config files:
zookeeper-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
        - name: zookeeper
          image: bitnami/zookeeper
          ports:
            - name: http
              containerPort: 2181
          env:
            - name: ALLOW_ANONYMOUS_LOGIN
              value: "yes"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  selector:
    app: zookeeper
  ports:
    - protocol: TCP
      port: 2181
kafka-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: bitnami/kafka
          ports:
            - name: http
              containerPort: 9092
          env:
            - name: KAFKA_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: KAFKA_BROKER_ID
              value: "1"
            - name: ALLOW_PLAINTEXT_LISTENER
              value: "yes"
            - name: KAFKA_CFG_LISTENERS
              value: PLAINTEXT://:9092
            - name: KAFKA_CFG_ADVERTISED_LISTENERS
              value: PLAINTEXT://$(KAFKA_POD_IP):9092
            - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
              value: PLAINTEXT:PLAINTEXT
            - name: KAFKA_CFG_ZOOKEEPER_CONNECT
              value: zookeeper:2181
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  selector:
    app: kafka
  ports:
    - protocol: TCP
      port: 9092
  type: LoadBalancer
The Kafka and ZooKeeper configurations are more or less the same ones I used with Docker Compose without errors, so there is probably something wrong in my Kubernetes configuration. Could anyone help me? I don't understand the issue. Thanks.

Kafka: connect from a local machine to a Kafka broker running in k8s on a remote machine

Good day, everyone!
The main problem: I want to connect from my local machine to Kafka, which is running in a k8s container on a remote cluster (let its DNS name be node03.st), deployed from my own manifests.
The ZooKeeper deployment manifest is here (image: confluentinc/cp-zookeeper:6.2.4):
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: aptmess
  name: zookeeper-aptmess-deployment
  labels:
    name: zookeeper-service-filter
spec:
  selector:
    matchLabels:
      app: zookeeper-label
  template:
    metadata:
      labels:
        app: zookeeper-label
    spec:
      containers:
        - name: zookeeper
          image: confluentinc/cp-zookeeper:6.2.4
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 2181 # ZK client
              name: client
            - containerPort: 2888 # Follower
              name: follower
            - containerPort: 3888 # Election
              name: election
            - containerPort: 8080 # AdminServer
              name: admin-server
          env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_SERVER_1
              value: zookeeper
            - name: ZOOKEEPER_CLIENT_PORT
              value: "2181"
            - name: ZOOKEEPER_TICK_TIME
              value: "2000"
---
apiVersion: v1
kind: Service
metadata:
  namespace: aptmess
  name: zookeeper-service-aptmess
  labels:
    name: zookeeper-service-filter
spec:
  type: NodePort
  ports:
    - port: 2181
      protocol: TCP
      name: client
    - name: follower
      port: 2888
      protocol: TCP
    - name: election
      port: 3888
      protocol: TCP
    - port: 8080
      protocol: TCP
      name: admin-server
  selector:
    app: zookeeper-label
My kafka StatefulSet manifest (image: confluentinc/cp-kafka:6.2.4):
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: aptmess
  name: kafka-stateful-set-aptmess
  labels:
    name: kafka-service-filter
spec:
  serviceName: kafka-broker
  replicas: 1
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: kafka-label
  template:
    metadata:
      labels:
        app: kafka-label
    spec:
      volumes:
        - name: config
          emptyDir: {}
        - name: extensions
          emptyDir: {}
        - name: kafka-storage
          persistentVolumeClaim:
            claimName: kafka-data-claim
      terminationGracePeriodSeconds: 300
      containers:
        - name: kafka
          image: confluentinc/cp-kafka:6.2.4
          imagePullPolicy: Always
          ports:
            - containerPort: 9092
          resources:
            requests:
              memory: "2Gi"
              cpu: "1"
          command:
            - bash
            - -c
            - unset KAFKA_PORT; /etc/confluent/docker/run
          env:
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: kafka-broker
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper-service-aptmess:2181
            - name: KAFKA_BROKER_ID
              value: "1"
            - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
              value: "1"
            - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
              value: "PLAINTEXT:PLAINTEXT,CONNECTIONS_FROM_HOST:PLAINTEXT"
            - name: KAFKA_INTER_BROKER_LISTENER_NAME
              value: "PLAINTEXT"
            - name: KAFKA_LISTENERS
              value: "PLAINTEXT://0.0.0.0:9092"
            - name: KAFKA_ADVERTISED_LISTENERS
              value: "PLAINTEXT://kafka-broker.aptmess.svc.cluster.local:9092"
          volumeMounts:
            - name: config
              mountPath: /etc/kafka
            - name: extensions
              mountPath: /opt/kafka/libs/extensions
            - name: kafka-storage
              mountPath: /var/lib/kafka/
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
---
apiVersion: v1
kind: Service
metadata:
  namespace: aptmess
  name: kafka-broker
  labels:
    name: kafka-service-filter
spec:
  type: NodePort
  ports:
    - port: 9092
      name: kafka-port
      protocol: TCP
  selector:
    app: kafka-label
NodePort for port 9092 is 30000.
When I try to connect from localhost, I get an error:
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=['node03.st:30000']
)
>> Error connecting to node kafka-broker.aptmess.svc.cluster.local:9092 (id: 1 rack: null)
I have spent a long time changing internal and external listeners, but it doesn't help. What should I do to be able to send messages from my localhost to the remote Kafka broker?
Thanks in advance!
P.S.: I have searched these links for answers:
Use SCRAM-SHA-512 authentication with SSL on LoadBalancer in Strimzi Kafka
https://github.com/strimzi/strimzi-kafka-operator/issues/1156
https://github.com/strimzi/strimzi-kafka-operator/issues/1463
https://githubhelp.com/Yolean/kubernetes-kafka/issues/328?ysclid=l4grqi7hc6364785597
Connecting Kafka running on EC2 machine from my local machine
Access kafka broker in a remote machine ERROR
How to Connect to kafka on localhost (host machine) from app inside kubernetes (minikube)
kafka broker not available at starting
https://github.com/SOHU-Co/kafka-node/issues/666
https://docs.confluent.io/operator/current/co-nodeports.html
https://developers.redhat.com/blog/2019/06/07/accessing-apache-kafka-in-strimzi-part-2-node-ports
https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/
Kafka in Kubernetes Cluster- How to publish/consume messages from outside of Kubernetes Cluster
Kafka docker compose external connection
confluentinc image
NodePort for port 9092 is 30000
Then you need to define that node's hostname and port as part of KAFKA_ADVERTISED_LISTENERS, as mentioned in many of the linked posts. You've only defined one listener, and it's internal to k8s. Keep in mind, though, that this is a poor solution unless you force the broker pod to run only on that one host and that one port.
Alternatively, replace your setup with the Strimzi operator and read how you can use Ingress resources (ideally) to access the Kafka cluster; NodePort is also supported - https://strimzi.io/blog/2019/04/17/accessing-kafka-part-1/ (cross-reference with the latest documentation, since that's an old post).
Ingresses would be ideal because the Ingress controller can dynamically route requests to the broker pods while exposing a fixed external address; otherwise, you will constantly need to use the k8s API to describe the broker pods and get their current port information.
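As a minimal sketch of the first suggestion (the EXTERNAL listener name and port 9093 are assumptions; node03.st and NodePort 30000 come from the question), the broker env could advertise a second, external listener alongside the internal one:
env:
  - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
    value: "PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT"
  - name: KAFKA_INTER_BROKER_LISTENER_NAME
    value: "PLAINTEXT"
  - name: KAFKA_LISTENERS
    value: "PLAINTEXT://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093"   # two sockets: in-cluster and external
  - name: KAFKA_ADVERTISED_LISTENERS
    value: "PLAINTEXT://kafka-broker.aptmess.svc.cluster.local:9092,EXTERNAL://node03.st:30000"   # external clients get the node address back
The NodePort Service would then need a second port entry mapping nodePort 30000 to container port 9093, so that clients outside the cluster are routed to the EXTERNAL listener while in-cluster clients keep using the internal one.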

How to ping a headless pod from another namespace

Hello, I have a problem connecting directly to a chosen pod from another namespace. I checked with nslookup: the DNS name for my pod is kafka-0.kafka-headless-svc.message-broker.svc.cluster.local, and when I ping it from the same namespace everything is fine; the problem appears when I try the same from another namespace. When I use something like kafka.message-broker.svc.cluster.local everything works, but I want to connect to a specific pod, not one chosen by the balancer. My configs:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: kafka
  name: kafka
  namespace: message-broker
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper.message-broker.svc.cluster.local
            - name: KAFKA_LISTENERS
              value: PLAINTEXT://kafka-0:9092
            - name: KAFKA_ADVERTISED_LISTENERS
              value: PLAINTEXT://kafka-0:9092
          image: wurstmeister/kafka
          imagePullPolicy: IfNotPresent
          name: kafka
          ports:
            - containerPort: 9092
              name: kafka
  serviceName: "kafka-headless-svc"
  selector:
    matchLabels:
      app: kafka
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka
  name: kafka
  namespace: message-broker
spec:
  clusterIP: None
  selector:
    app: kafka
  ports:
    - port: 9092
      targetPort: 9092
      protocol: TCP
I'm doing everything on minikube, so maybe there is some problem with a NetworkPolicy (I didn't set any).
Problem resolved. I had to set serviceName in the StatefulSet (in my example "kafka-headless-svc") to the same value as metadata.name in my Service file.
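In other words, the headless Service's metadata.name must match the StatefulSet's spec.serviceName, roughly like this (a sketch using the names from the question, not the full manifests):
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless-svc      # must equal spec.serviceName in the StatefulSet
  namespace: message-broker
spec:
  clusterIP: None               # headless: gives each pod its own DNS record
  selector:
    app: kafka
  ports:
    - port: 9092
With that in place, kafka-0.kafka-headless-svc.message-broker.svc.cluster.local resolves from other namespaces as well, since per-pod DNS records created by a headless Service are cluster-wide.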

I'm having a hard time setting up Kafka on GKE and would like to know the best way of setting it up

I was trying to use a StatefulSet to deploy ZooKeeper and the Kafka server in a GKE cluster, but the communication between Kafka and ZooKeeper fails with an error message in the logs. I'd like to know the easiest way to set up Kafka in Kubernetes.
I've tried the following configurations and I see that Kafka fails to communicate with ZooKeeper, but I am not sure why. I know I may need a headless Service because the communication is handled by Kafka and ZooKeeper themselves.
For Zookeeper
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  type: LoadBalancer
  selector:
    app: zookeeper
  ports:
    - port: 2181
      targetPort: client
      name: zk-port
    - port: 2888
      targetPort: leader
      name: zk-leader
    - port: 3888
      targetPort: election
      name: zk-election
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
  replicas: 3
  selector:
    matchLabels:
      app: zookeeper
  serviceName: zookeeper
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
        - name: zk-pod
          image: zookeeper:latest
          imagePullPolicy: Always
          ports:
            - name: client
              containerPort: 2181
            - name: leader
              containerPort: 2888
            - name: election
              containerPort: 3888
          env:
            - name: ZOO_MY_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ZOO_TICK_TIME
              value: "2000"
            - name: ZOO_INIT_LIMIT
              value: "5"
            - name: ZOO_SYNC_LIMIT
              value: "2"
            - name: ZOO_SERVERS
              value: zookeeper:2888:3888
For Kafka
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka-svc
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: confluentinc/cp-kafka:latest
          ports:
            - containerPort: 9092
              name: client
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper:2181
            - name: KAFKA_ADVERTISED_LISTENERS
              value: kafka.default.svc.cluster.local:9092
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
spec:
  type: LoadBalancer
  selector:
    app: kafka
  ports:
    - port: 9092
      targetPort: client
      name: kfk-port
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 2
I'd like to be able to send messages to a topic and read them back. I've been using kafkacat to test the connection.
This is one of the limitations specified in the official Kubernetes documentation about StatefulSets:
StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. You are responsible for creating this Service.
So, as you mentioned, you need a headless Service, and you can easily add a headless Service YAML on top of your configuration, similar to the one below, for both of your StatefulSets:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  ports:
    - port: 2181
      name: someport
  clusterIP: None
  selector:
    app: zookeeper
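Once that headless Service exists, each StatefulSet pod gets a stable DNS name of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local. As a rough sketch (assuming the default namespace and the official zookeeper image's ZOO_SERVERS syntax), the three-node ensemble could then be addressed by pod name rather than by the plain Service name:
env:
  - name: ZOO_SERVERS
    # one server.N entry per replica; N must match that pod's numeric ZOO_MY_ID
    value: server.1=zookeeper-0.zookeeper.default.svc.cluster.local:2888:3888;2181 server.2=zookeeper-1.zookeeper.default.svc.cluster.local:2888:3888;2181 server.3=zookeeper-2.zookeeper.default.svc.cluster.local:2888:3888;2181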
Hope it helps!
I have been following the official guide from Google Click to Deploy; it proved very helpful for me. Here is the link you can follow to set up a Kafka cluster on GKE. This GitHub repository is officially maintained by Google Cloud.
https://github.com/GoogleCloudPlatform/click-to-deploy/tree/master/k8s/kafka
Another simple approach is to deploy via the Google Cloud Console.
https://console.cloud.google.com/marketplace
Search for kafka cluster (with replication)
Click on Configure
Fill out all the necessary details and it will configure the Kafka cluster for you with all internal communication enabled. I have removed my cluster name for privacy.

Kubernetes MySQL connection timeout

I've set up a Kubernetes deployment and service for MySQL. I cannot access the MySQL service from any pod using its DNS name... It just times out. Any other port refuses the connection immediately, but the port in my service configuration times out after ~10 seconds.
I am able to resolve the MySQL Pod DNS.
I cannot ping the host.
Service.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    run: mysql-service
spec:
  ports:
    - port: 3306
      protocol: TCP
    - port: 3306
      protocol: UDP
  selector:
    run: mysql-service
Deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-service
  labels:
    app: mysql-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-service
  template:
    metadata:
      labels:
        app: mysql-service
    spec:
      containers:
        - name: 'mysql-service'
          image: mysql:5.5
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: some_password
            - name: MYSQL_DATABASE
              value: some_database
          ports:
            - containerPort: 3306
Your deployment (and more specifically its pod spec) says
labels:
  app: mysql-service
but your service says
selector:
  run: mysql-service
These don't match, so your Service isn't attaching to the pod. You can also see this if you run kubectl describe service mysql-service: the "Endpoints" list will be empty.
Change the service's selector to match the pod's labels (or vice versa) and this should be better.
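For example, the Service's selector could be changed to the label the pod template already carries (or, equivalently, the pod labels could be changed to run: mysql-service); a sketch of the first option:
spec:
  selector:
    app: mysql-service   # now matches the pod template's labels
After applying this, kubectl describe service mysql-service should show a non-empty Endpoints line, and the connection through the Service's DNS name should stop timing out.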