I am trying to set up Kafka on Minikube, a very basic setup. I can't validate whether Kafka and ZooKeeper have been set up correctly because kafkacat fails.
Here is my config:
zookeeper
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper-deployment-1
spec:
  selector:
    matchLabels:
      app: zookeeper-1
  template:
    metadata:
      labels:
        app: zookeeper-1
    spec:
      containers:
        - name: zoo1
          image: digitalwonderland/zookeeper
          ports:
            - containerPort: 2181
          env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_SERVER_1
              value: zoo1
---
apiVersion: v1
kind: Service
metadata:
  name: zoo1
  labels:
    app: zookeeper-1
spec:
  ports:
    - name: client
      port: 2181
      protocol: TCP
    - name: follower
      port: 2888
      protocol: TCP
    - name: leader
      port: 3888
      protocol: TCP
  selector:
    app: zookeeper-1
kafka
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker0
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
        id: "0"
    spec:
      containers:
        - name: kafka
          image: wurstmeister/kafka
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ADVERTISED_PORT
              value: "9092"
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: kafka-service
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zoo1:2181
            - name: KAFKA_BROKER_ID
              value: "0"
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    name: kafka
spec:
  ports:
    - port: 9092
      name: kafka-port
      protocol: TCP
  selector:
    app: kafka
    id: "0"
pods
kafka-broker0-6885746967-6vktz 1/1 Running 0 5m20s
zookeeper-deployment-1-7f5bb9785f-7pplk 1/1 Running 0 5m25s
svc
kafka-service ClusterIP 10.99.226.129 <none> 9092/TCP 6m30s
zoo1 ClusterIP 10.96.140.187 <none> 2181/TCP,2888/TCP,3888/TCP 6m35s
kafkacat logs
✗ kafkacat -L -b kafka-service:9092 -d broker
%7|1596239513.610|BRKMAIN|rdkafka#producer-1| [thrd::0/internal]: :0/internal: Enter main broker thread
%7|1596239513.610|BROKER|rdkafka#producer-1| [thrd:app]: kafka-service:9092/bootstrap: Added new broker with NodeId -1
%7|1596239513.610|BRKMAIN|rdkafka#producer-1| [thrd:kafka-service:9092/bootstrap]: kafka-service:9092/bootstrap: Enter main broker thread
%7|1596239513.610|CONNECT|rdkafka#producer-1| [thrd:app]: kafka-service:9092/bootstrap: Selected for cluster connection: bootstrap servers added (broker has 0 connection attempt(s))
%7|1596239513.610|INIT|rdkafka#producer-1| [thrd:app]: librdkafka v1.4.0 (0x10400ff) rdkafka#producer-1 initialized (builtin.features gzip,snappy,ssl,sasl,regex,lz4,sasl_gssapi,sasl_plain,sasl_scram,plugins,zstd,sasl_oauthbearer, CC CXX PKGCONFIG OSXLD LIBDL PLUGINS ZLIB SSL SASL_CYRUS ZSTD HDRHISTOGRAM LZ4_EXT SYSLOG SNAPPY SOCKEM SASL_SCRAM SASL_OAUTHBEARER CRC32C_HW, debug 0x2)
%7|1596239513.610|CONNECT|rdkafka#producer-1| [thrd:kafka-service:9092/bootstrap]: kafka-service:9092/bootstrap: Received CONNECT op
%7|1596239513.610|STATE|rdkafka#producer-1| [thrd:kafka-service:9092/bootstrap]: kafka-service:9092/bootstrap: Broker changed state INIT -> TRY_CONNECT
%7|1596239513.610|CONNECT|rdkafka#producer-1| [thrd:kafka-service:9092/bootstrap]: kafka-service:9092/bootstrap: broker in state TRY_CONNECT connecting
%7|1596239513.610|STATE|rdkafka#producer-1| [thrd:kafka-service:9092/bootstrap]: kafka-service:9092/bootstrap: Broker changed state TRY_CONNECT -> CONNECT
%7|1596239513.610|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
%7|1596239513.610|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
%7|1596239513.610|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
%7|1596239513.610|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
%7|1596239513.614|BROKERFAIL|rdkafka#producer-1| [thrd:kafka-service:9092/bootstrap]: kafka-service:9092/bootstrap: failed: err: Local: Host resolution failure: (errno: Bad address)
NODEPORT UPDATE
✗ kafkacat -L -b kafka-service:30236 -d broker
%7|1596476848.078|STATE|rdkafka#producer-1| [thrd:kafka-service:30236/bootstrap]: kafka-service:30236/bootstrap: Broker changed state CONNECT -> DOWN
%7|1596476848.078|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 46ms: application metadata request
%7|1596476848.078|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 46ms: application metadata request
%7|1596476849.065|CONNECT|rdkafka#producer-1| [thrd:app]: Cluster connection already in progress: application metadata request
%7|1596476849.065|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
% ERROR: Failed to acquire metadata: Local: Broker transport failure
minikube ip
✗ kafkacat -L -b 192.168.64.2:30236 -d broker
%7|1596476908.164|BRKMAIN|rdkafka#producer-1| [thrd:192.168.64.2:30236/bootstrap]: 192.168.64.2:30236/bootstrap: Enter main broker thread
%7|1596476908.164|CONNECT|rdkafka#producer-1| [thrd:app]: 192.168.64.2:30236/bootstrap: Selected for cluster connection: bootstrap servers added (broker has 0 connection attempt(s))
%7|1596476908.164|INIT|rdkafka#producer-1| [thrd:app]: librdkafka v1.4.0 (0x10400ff) rdkafka#producer-1 initialized (builtin.features gzip,snappy,ssl,sasl,regex,lz4,sasl_gssapi,sasl_plain,sasl_scram,plugins,zstd,sasl_oauthbearer, CC CXX PKGCONFIG OSXLD LIBDL PLUGINS ZLIB SSL SASL_CYRUS ZSTD HDRHISTOGRAM LZ4_EXT SYSLOG SNAPPY SOCKEM SASL_SCRAM SASL_OAUTHBEARER CRC32C_HW, debug 0x2)
%7|1596476908.164|CONNECT|rdkafka#producer-1| [thrd:192.168.64.2:30236/bootstrap]: 192.168.64.2:30236/bootstrap: Received CONNECT op
%7|1596476908.164|STATE|rdkafka#producer-1| [thrd:192.168.64.2:30236/bootstrap]: 192.168.64.2:30236/bootstrap: Broker changed state INIT -> TRY_CONNECT
%7|1596476908.164|CONNECT|rdkafka#producer-1| [thrd:192.168.64.2:30236/bootstrap]: 192.168.64.2:30236/bootstrap: broker in state TRY_CONNECT connecting
%7|1596476908.164|STATE|rdkafka#producer-1| [thrd:192.168.64.2:30236/bootstrap]: 192.168.64.2:30236/bootstrap: Broker changed state TRY_CONNECT -> CONNECT
%7|1596476908.164|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
✗ kafkacat -L -b localhost:30236 -d broker
%7|1596477098.454|CONNECT|rdkafka#producer-1| [thrd:localhost:30236/bootstrap]: localhost:30236/bootstrap: Received CONNECT op
%7|1596477098.454|INIT|rdkafka#producer-1| [thrd:app]: librdkafka v1.4.0 (0x10400ff) rdkafka#producer-1 initialized (builtin.features gzip,snappy,ssl,sasl,regex,lz4,sasl_gssapi,sasl_plain,sasl_scram,plugins,zstd,sasl_oauthbearer, CC CXX PKGCONFIG OSXLD LIBDL PLUGINS ZLIB SSL SASL_CYRUS ZSTD HDRHISTOGRAM LZ4_EXT SYSLOG SNAPPY SOCKEM SASL_SCRAM SASL_OAUTHBEARER CRC32C_HW, debug 0x2)
%7|1596477098.454|STATE|rdkafka#producer-1| [thrd:localhost:30236/bootstrap]: localhost:30236/bootstrap: Broker changed state INIT -> TRY_CONNECT
%7|1596477098.454|CONNECT|rdkafka#producer-1| [thrd:localhost:30236/bootstrap]: localhost:30236/bootstrap: broker in state TRY_CONNECT connecting
%7|1596477098.454|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
%7|1596477098.454|STATE|rdkafka#producer-1| [thrd:localhost:30236/bootstrap]: localhost:30236/bootstrap: Broker changed state TRY_CONNECT -> CONNECT
%7|1596477098.454|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
%7|1596477098.454|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
%7|1596477098.454|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
%7|1596477098.460|CONNECT|rdkafka#producer-1| [thrd:localhost:30236/bootstrap]: localhost:30236/bootstrap: Connecting to ipv4#127.0.0.1:30236 (plaintext) with socket 9
%7|1596477098.461|BROKERFAIL|rdkafka#producer-1| [thrd:localhost:30236/bootstrap]: localhost:30236/bootstrap: failed: err: Local: Broker transport failure: (errno: Connection refused)
%7|1596477098.461|STATE|rdkafka#producer-1| [thrd:localhost:30236/bootstrap]: localhost:30236/bootstrap: Broker changed state CONNECT -> DOWN
%7|1596477098.461|STATE|rdkafka#producer-1| [thrd:localhost:30236/bootstrap]: localhost:30236/bootstrap: Broker changed state DOWN -> INIT
%7|1596477098.461|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 43ms: application metadata request
%7|1596477098.461|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 42ms: application metadata request
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    name: kafka
spec:
  type: NodePort
  ports:
    - port: 9092
      nodePort: 30236
      name: kafka-port
      protocol: TCP
  selector:
    app: kafka
    id: "0"
I have never used kafkacat, but if it is a CLI installed on your host (not inside another container), you can now use it like this:
✗ kafkacat -L -b localhost:30236 -d broker
with localhost or the IP of your Kubernetes node.
A Service defaults to type ClusterIP, which can be reached only from inside the Kubernetes cluster; the NodePort service above exposes the broker on the node itself.
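For example, minikube can print the externally reachable address of a NodePort service (the exact URL depends on your minikube driver), which can then be passed to kafkacat:

✗ minikube service kafka-service --url
http://192.168.64.2:30236
✗ kafkacat -L -b 192.168.64.2:30236

Keep in mind the metadata request only succeeds end to end if the address the broker advertises (KAFKA_ADVERTISED_HOST_NAME above) is also resolvable by the client, because that is what the broker returns in the metadata response.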
Related
I am trying to connect to a Kafka broker set up on Azure AKS from an on-prem Rancher K8s cluster over the internet. I have created a loadbalancer listener on the Azure Kafka; it creates 4 public IPs using the Azure load balancer service.
- name: external
  port: 9093
  type: loadbalancer
  tls: true
  authentication:
    type: tls
  configuration:
    bootstrap:
      loadBalancerIP: 172.22.199.99
      annotations:
        external-dns.alpha.kubernetes.io/hostname: bootstrap.example.com
    brokers:
      - broker: 0
        loadBalancerIP: 172.22.199.100
        annotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-0.example.com
      - broker: 1
        loadBalancerIP: 172.22.199.101
        annotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-1.example.com
      - broker: 2
        loadBalancerIP: 172.22.199.102
        annotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-2.example.com
    brokerCertChainAndKey:
      secretName: source-kafka-listener-cert
      certificate: tls.crt
      key: tls.key
Now, to connect from on-prem, I have opened the firewall for only the bootstrap LB IP. My understanding was that the bootstrap would in turn route requests to the individual brokers, but that is not happening. When I try to connect, a connection is established with the bootstrap load balancer IP, but after that I get a timeout error.
2022-08-22 08:14:04,659 INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics) [kafka-admin-client-thread | adminclient-1]
2022-08-22 08:14:04,659 ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed) [main] org.apache.kafka.connect.errors.ConnectException: Failed to connect to and describe Kafka cluster. Check worker's broker connection and security properties.
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:70)
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:51)
Please let me know: do I have to open the firewall for the individual brokers as well?
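(For context, a hedged note: the bootstrap connection only serves the initial metadata request; the metadata response lists every broker's advertised address, and the client then connects to each broker directly, so all of them must be reachable. Assuming the listener's client certificate files are at hand, kafkacat can show what the bootstrap hands back:

kafkacat -L -b bootstrap.example.com:9093 \
  -X security.protocol=ssl \
  -X ssl.ca.location=ca.crt \
  -X ssl.certificate.location=tls.crt \
  -X ssl.key.location=tls.key

Every broker address in that output would need its own firewall opening.)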
I have a Kafka cluster in Kubernetes created using Strimzi.
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: {{ .Values.cluster.kafka.name }}
spec:
  kafka:
    version: 2.7.0
    replicas: 3
    storage:
      deleteClaim: true
      size: {{ .Values.cluster.kafka.storagesize }}
      type: persistent-claim
    rack:
      topologyKey: failure-domain.beta.kubernetes.io/zone
    template:
      pod:
        metadata:
          annotations:
            prometheus.io/scrape: 'true'
            prometheus.io/port: '9404'
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: loadbalancer
        tls: true
        authentication:
          type: tls
        configuration:
          bootstrap:
            loadBalancerIP: {{ .Values.cluster.kafka.bootstrapipaddress }}
          brokers:
            {{- range $key, $value := (split "," .Values.cluster.kafka.brokersipaddress) }}
            - broker: {{ (split "=" .)._0 }}
              loadBalancerIP: {{ (split "=" .)._1 | quote }}
            {{- end }}
    authorization:
      type: simple
The cluster is created and up; I am able to create topics and produce/consume to/from a topic.
The issue is that if I exec into one of the Kafka broker pods, I see intermittent errors:
INFO [SocketServer brokerId=0] Failed authentication with /10.240.0.35 (SSL handshake failed) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SSL-9]
INFO [SocketServer brokerId=0] Failed authentication with /10.240.0.159 (SSL handshake failed) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SSL-11]
INFO [SocketServer brokerId=0] Failed authentication with /10.240.0.4 (SSL handshake failed) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SSL-10]
INFO [SocketServer brokerId=0] Failed authentication with /10.240.0.128 (SSL handshake failed) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SSL-1]
After inspecting these IPs (10.240.0.35, 10.240.0.159, 10.240.0.4, 10.240.0.128), I figured out that they all belong to pods in the kube-system namespace that are implicitly created as part of the Kafka cluster deployment.
Any idea what could be wrong?
I do not think this is necessarily wrong. You seem to have, somewhere, some application trying to connect to the broker without properly configured TLS. But as the connection is forwarded, the IP probably gets masked, so it does not show the real external IP anymore. This can be all kinds of things, from misconfigured clients up to health checks that just open a TCP connection (depending on your environment, the load balancer can do that, for example).
Unfortunately, it is a bit hard to find out where they really come from. You can try to trace them through the logs of whoever owns the IP address they came from, as that forwarded them from someone else, etc. You could also try to enable TLS debugging in Kafka with the Java system property javax.net.debug=ssl. But that might help only in some cases of misconfigured clients, not with plain TCP probes, and it will also make it hard to find the right place in the logs, because it will also dump the replication traffic etc., which uses TLS as well.
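With Strimzi, one way to set that property is through the Kafka resource (a sketch, assuming a Strimzi version that supports javaSystemProperties under jvmOptions):

spec:
  kafka:
    jvmOptions:
      javaSystemProperties:
        - name: javax.net.debug
          value: ssl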
My setup is as follows: I have a producer service provisioned as part of a Minikube cluster, and it is trying to publish messages to a Kafka instance running on the host machine.
I have written a Kafka Service and Endpoints YAML as follows:
kind: Service
apiVersion: v1
metadata:
  name: kafka
spec:
  ports:
    - name: "broker"
      protocol: "TCP"
      port: 9092
      targetPort: 9092
      nodePort: 0
---
kind: Endpoints
apiVersion: v1
metadata:
  name: kafka
  namespace: default
subsets:
  - addresses:
      - ip: 10.0.2.2
    ports:
      - name: "broker"
        port: 9092
The IP address of the host machine as seen from inside the Minikube cluster, used in the Endpoints above, was acquired with the following command:
minikube ssh "route -n | grep ^0.0.0.0 | awk '{ print \$2 }'"
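As a sanity check of the Service/Endpoints wiring itself, a quick in-cluster smoke test can help (a sketch; it assumes the busybox build ships an nc applet with -z support):

kubectl run -it --rm nettest --image=busybox --restart=Never -- \
  sh -c 'nslookup kafka && nc -vz -w 2 kafka 9092'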
The problem I am facing is that the topic gets created when the producer tries to publish a message for the first time, but no messages are written to that topic.
Digging into the pod logs, I found that the producer is trying to connect to the Kafka instance on localhost, or something like that (not really sure):
2020-05-17T19:09:43.021Z [warn] org.apache.kafka.clients.NetworkClient [] -
[Producer clientId=/system/sharding/kafkaProducer-greetings/singleton/singleton/producer]
Connection to node 0 (omkara/127.0.1.1:9092) could not be established. Broker may not be available.
Based on that, I suspected I probably needed to modify server.properties with the following change:
listeners=PLAINTEXT://localhost:9092
This, however, only changed the IP address in the log:
2020-05-17T19:09:43.021Z [warn] org.apache.kafka.clients.NetworkClient [] -
[Producer clientId=/system/sharding/kafkaProducer-greetings/singleton/singleton/producer]
Connection to node 0 (omkara/127.0.0.1:9092) could not be established. Broker may not be available.
I am not sure what IP address must be mentioned here, or what an alternative solution is, and whether it is even possible to connect from inside the Kubernetes cluster to a Kafka instance installed outside it.
Since the producer's Kafka client is not on the same network as the broker, we need to configure an additional listener like so:
listeners=INTERNAL://0.0.0.0:9093,EXTERNAL://0.0.0.0:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=INTERNAL://localhost:9093,EXTERNAL://10.0.2.2:9092
inter.broker.listener.name=INTERNAL
We can verify the messages in the topic like so (consuming via the INTERNAL listener, which advertises localhost:9093):
kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic greetings --from-beginning
{"name":"Alice","message":"Namastey"}
You can find a detailed explanation of understanding and provisioning Kafka listeners here.
I would like to configure a Kafka broker in Kubernetes. The Docker image I am using is confluentinc/cp-kafka:latest. It requires the KAFKA_ADVERTISED_LISTENERS environment variable, which allows Kafka clients to communicate with the broker.
The problem is the difficulty of assigning the service endpoint IP to KAFKA_ADVERTISED_LISTENERS. If I use localhost as the value, it only works within the Kafka broker pod itself; Kafka client pods elsewhere in the Kubernetes cluster cannot communicate with it. If I use the service endpoint IP from kubectl get endpoints -l app=kafka, it works, but it is some overhead to have an audit script set this dynamic value every time.
I wonder whether there is a better way to set this value dynamically in the Kubernetes YAML file, so I don't need to set this IP programmatically every time.
Here is the yaml file:
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-broker
  labels:
    app: kafka
spec:
  type: NodePort
  ports:
    - port: 9092
      targetPort: 9092
  selector:
    app: kafka
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      hostname: broker
      containers:
        - name: kafka
          image: confluentinc/cp-kafka:latest
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ADVERTISED_LISTENERS
              value: "PLAINTEXT://DYNAMIC_ENDPOINT_IP:9092"
            - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
              value: "1"
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zoo1:2181
Thanks in advance.
Edit: I tried using the server name, the Service host environment variable, the Service source IP, and the pod IP. Unfortunately, I still get this error in the Kafka log:
java.lang.IllegalArgumentException: Error creating broker listeners from 'PLAINTEXT://$KAFKA_BROKER_SERVICE_HOST:9092': Unable to parse PLAINTEXT://$KAFKA_BROKER_SERVICE_HOST:9092 to a broker endpoint
If I run kubectl exec -it kafka-broker-ssfjks env, those environment variables are actually set correctly in the pod. I guess it may be related to a Kafka broker configuration issue?
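A side note on that error: the value arrives literally as $KAFKA_BROKER_SERVICE_HOST, and Kubernetes only expands references written in the $(VAR) syntax, not shell-style $VAR. A minimal sketch of reusing the injected Service variable, assuming the Service exists before the pod starts:

env:
  - name: KAFKA_ADVERTISED_LISTENERS
    value: "PLAINTEXT://$(KAFKA_BROKER_SERVICE_HOST):9092"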
You should let your clients connect through the Service, so exposing the IP or DNS name of the Service should work. By default, Services are exposed as environment variables in the pod. If a DNS plugin is configured, DNS can be used. More info: https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
Use the Service name (kafka-broker) instead of its IP; kube-dns will resolve it for you. If the Kafka client is in the same namespace, you can use just "kafka-broker"; if not, you must use the qualified name "kafka-broker.YOURNAMESPACE.svc".
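Putting that advice into the Deployment, the env entry could look like this (a sketch, assuming the Service lives in the default namespace):

env:
  - name: KAFKA_ADVERTISED_LISTENERS
    value: "PLAINTEXT://kafka-broker.default.svc:9092"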
@Jakub got me on the right track, so for something like cp-kafka-connect my Dockerfile looks like:
FROM confluentinc/cp-kafka-connect:5.4.0
ENV CONNECT_GROUP_ID='kafkatosql'
ENV CONNECT_CONFIG_STORAGE_TOPIC="kafkatosql-config"
ENV CONNECT_OFFSET_STORAGE_TOPIC="kafkatosql-offsets"
ENV CONNECT_STATUS_STORAGE_TOPIC="kafkatosql-status"
ENV CONNECT_KEY_CONVERTER="io.confluent.connect.avro.AvroConverter"
ENV CONNECT_VALUE_CONVERTER="io.confluent.connect.avro.AvroConverter"
ENV CONNECT_LOG4J_ROOT_LOGLEVEL="ERROR"
ENV CONNECT_PLUGIN_PATH="/usr/share/java,/usr/share/confluent-hub-components"
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:5.4.0
WORKDIR /app
COPY start.sh .
CMD exec ./start.sh
and then start.sh looks like:
#!/bin/bash
kafka_connect_host=localhost:8083
# Advertise this container's own IP so the Connect REST API is reachable
export CONNECT_REST_ADVERTISED_HOST_NAME=$(hostname -I)
# Start the Connect worker in the background
/etc/confluent/docker/run &

# Poll the REST API until it answers (curl reports 000 while the port is still closed)
wait_counter=0
echo "Waiting for Kafka Connect to start listening on kafka-connect ⏳"
while true; do
  status=$(curl -s -o /dev/null -w %{http_code} http://$kafka_connect_host/connectors)
  if [ $status -eq 000 ]; then
    wait_counter=$((wait_counter+1))
    echo "Kafka Connect listener HTTP status: $status (waiting for 200)"
    if [ $wait_counter -eq 15 ]; then
      echo 'Waited too long!'
      exit 1
    else
      echo "Retries: $wait_counter"
      sleep 3
    fi
  else
    break
  fi
done

# Register (or update) the JDBC sink connector through the REST API
echo -e "\n--\n+> Creating Kafka Connect Postgresql Sink"
curl -X PUT http://$kafka_connect_host/connectors/jdbc_sink_postgresql_00/config -H "Content-Type: application/json" -d '{
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "tasks.max": 1,
  "topics": "users",
  "connection.url": "jdbc:'"$DB_URL"'",
  "auto.create": false
}'
# ... other stuff

# Keep the container alive while the background worker runs
trap : TERM INT; sleep infinity & wait
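Once the script has run, the connector's state can be checked through the same REST interface (a standard Kafka Connect endpoint):

curl http://localhost:8083/connectors/jdbc_sink_postgresql_00/status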
For testing purposes, I am trying to create a Kafka cluster on my local Minikube. The cluster must be reachable from outside of Kubernetes.
When I produce/consume from inside the pods, there is no problem; everything works just fine.
When I produce from my local machine with
bin/kafka-console-producer.sh --topic mytopic --broker-list 192.168.99.100:32767
where 192.168.99.100 is my Minikube IP and 32767 is the node port of the Kafka service, I get the following error message:
>testmessage
>[2018-04-30 11:55:04,604] ERROR Error when sending message to topic ams_stream with key: null, value: 11 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for ams_stream-0: 1506 ms has passed since batch creation plus linger time
When I consume from my local machine I get the following warnings:
[2018-04-30 10:22:30,680] WARN Connection to node 2 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-04-30 10:23:46,057] WARN Connection to node 8 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-04-30 10:25:01,542] WARN Connection to node 2 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-04-30 10:26:17,008] WARN Connection to node 5 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
The broker IDs are right, so it looks like I can at least reach the brokers.
Edit:
I think the problem may be that the Service routes me "randomly" to any of my brokers, but it needs to route me to the leader of the topic partition.
Could this be the problem? Does anybody know a way around it?
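(A note on that suspicion: clients must be able to reach every broker individually at its advertised address, because produce requests have to go to the partition leader; a single Service that load-balances across all brokers cannot satisfy that. A common workaround is one NodePort Service per broker, sketched here with the id label used earlier in this thread:

apiVersion: v1
kind: Service
metadata:
  name: kafka-0
spec:
  type: NodePort
  ports:
    - port: 9092
      nodePort: 32767
  selector:
    app: kafka
    id: "0"

with each broker advertising its own minikube-ip:nodePort.)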
Additional information:
- I'm using the wurstmeister/kafka and digitalwonderland/zookeeper images.
- I started with the DellEMC tutorial (and the linked one from defuze.org).
- That did not work out for me, so I made some changes in kafka-service.yml (1) and kafka-cluster.yml (2).

kafka-service.yml:
- added a fixed NodePort
- removed id from the selector

kafka-cluster.yml:
- added replicas to the specification
- removed id from the label
- changed the broker id to be generated from the last octet of the IP
- replaced the deprecated values advertised_host_name / advertised_port with:
  - listeners (pod-ip:9092) for communication inside the k8s cluster
  - advertised_listeners (minikube-ip:node-port) for communication with applications outside Kubernetes
1 - kafka-service.yml:
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    name: kafka
spec:
  type: NodePort
  ports:
    - port: 9092
      nodePort: 32767
      targetPort: 9092
      protocol: TCP
  selector:
    app: kafka
  type: LoadBalancer
2 - kafka-cluster.yml:
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-b
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: wurstmeister/kafka
          ports:
            - containerPort: 9092
          env:
            - name: HOSTNAME_COMMAND
              value: "ifconfig |grep 'addr:172' |cut -d':' -f 2 |cut -d ' ' -f 1"
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zk1:2181
            - name: BROKER_ID_COMMAND
              value: "ifconfig |grep 'inet addr:172' | cut -d'.' -f '4' | cut -d' ' -f '1'"
            - name: KAFKA_ADVERTISED_LISTENERS
              value: "INTERNAL://192.168.99.100:32767"
            - name: KAFKA_LISTENERS
              value: "INTERNAL://_{HOSTNAME_COMMAND}:9092"
            - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
              value: "INTERNAL:PLAINTEXT"
            - name: KAFKA_INTER_BROKER_LISTENER_NAME
              value: "INTERNAL"
            - name: KAFKA_CREATE_TOPICS
              value: mytopic:1:3