I'm attempting to provide bidirectional external access to Kafka using Strimzi by following this guide: Red Hat Developer - Kafka in Kubernetes
My YAML, taken from the Strimzi examples on GitHub, is as follows:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.0.0
    replicas: 1 #3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
      - name: external
        port: 9094
        type: loadbalancer
        tls: false
        configuration:
          #externalTrafficPolicy: Local
          #loadBalancerSourceRanges:
          #  - 10.0.0.200/32
          brokers:
            - broker: 0
              advertisedHost: 10.0.0.200
              advertisedPort: 30123
    config:
      offsets.topic.replication.factor: 1 #3
      transaction.state.log.replication.factor: 1 #3
      transaction.state.log.min.isr: 1 #2
      log.message.format.version: "3.0"
      inter.broker.protocol.version: "3.0"
    storage:
      type: ephemeral
  zookeeper:
    replicas: 1 #3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
When running kubectl get services I'm presented with the following:
NAME                                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                               AGE
kubernetes                            ClusterIP      10.96.0.1       <none>        443/TCP                               48m
my-cluster-kafka-0                    LoadBalancer   10.107.190.96   <pending>     9094:31964/TCP                        29m
my-cluster-kafka-bootstrap            ClusterIP      10.99.34.246    <none>        9091/TCP,9092/TCP,9093/TCP            43m
my-cluster-kafka-brokers              ClusterIP      None            <none>        9090/TCP,9091/TCP,9092/TCP,9093/TCP   43m
my-cluster-kafka-external-bootstrap   LoadBalancer   10.99.91.68     <pending>     9094:31442/TCP                        29m
my-cluster-zookeeper-client           ClusterIP      10.101.216.35   <none>        2181/TCP                              45m
my-cluster-zookeeper-nodes            ClusterIP      None            <none>        2181/TCP,2888/TCP,3888/TCP            45m
Note that my-cluster-kafka-0 and my-cluster-kafka-external-bootstrap both have a <pending> EXTERNAL-IP. What am I missing in my YAML file to provide bidirectional external access to my-cluster-kafka-0?
Strimzi just creates the Kubernetes Service of type LoadBalancer. It is up to your Kubernetes cluster to provision the load balancer and set its external address, which Strimzi can then use. When the external address is listed as <pending>, it means the load balancer has not (yet) been created. In some public clouds that can take a few minutes, so it might just be a matter of waiting. But keep in mind that load balancers are not supported in all environments, and where they are not supported, you cannot really use this listener type. So you need to double-check whether your environment supports them. Typically, the public clouds support load balancers, while some local or bare-metal environments might not (but it really depends on the environment).
I'm also not sure why you configured the advertised host and port:
advertisedHost: 10.0.0.200
advertisedPort: 30123
When using load balancers (assuming they are supported in your environment), you would normally want to use the load balancer address, which is automatically set as the advertised host/port. Apart from that, your YAML looks good; it is the load balancer support that might be missing.
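To illustrate that point, a minimal external listener that leaves the advertised address to Strimzi simply drops the broker overrides. This is a sketch against the v1beta2 schema used above, not a tested configuration:

```yaml
# Sketch: external listener with no per-broker overrides.
# Without configuration.brokers, Strimzi advertises whatever
# address the provisioned load balancer is assigned.
listeners:
  - name: external
    port: 9094
    type: loadbalancer
    tls: false
```

The overrides are only needed when clients must reach the brokers through an address the cluster cannot discover on its own.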
Related
Followed https://strimzi.io/quickstarts/ and https://strimzi.io/blog/2019/05/13/accessing-kafka-part-4/ to use a GKE internal load balancer with Strimzi. After adding the internal load balancer, Strimzi provisioned two LoadBalancer services with external IPs.
Kafka % k get svc -n kafka
NAME                                  TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                               AGE
my-cluster-kafka-0                    LoadBalancer   xx.xxx.xx.xxx   bb.bb.bbb.bb   9094:30473/TCP                        3d1h
my-cluster-kafka-bootstrap            ClusterIP      xx.xxx.xx.xxx   <none>         9091/TCP,9092/TCP,9093/TCP            25d
my-cluster-kafka-brokers              ClusterIP      None            <none>         9090/TCP,9091/TCP,9092/TCP,9093/TCP   25d
my-cluster-kafka-external-bootstrap   LoadBalancer   xx.xxx.xx.xxx   aa.aa.aaa.aa   9094:30002/TCP                        3d1h
my-cluster-zookeeper-client           ClusterIP      xx.xxx.xx.xxx   <none>         2181/TCP                              25d
my-cluster-zookeeper-nodes            ClusterIP      None            <none>         2181/TCP,2888/TCP,3888/TCP            25d
The producer/consumer flow is working inside the cluster using my-cluster-kafka-bootstrap, and I can also curl the my-cluster-kafka-external-bootstrap address aa.aa.aaa.aa:9094 from outside the cluster. However, after producing to aa.aa.aaa.aa:9094 from outside the cluster, my producer logged the error below.
Connection to node 0 (bb.bb.bbb.bb:9094) could not be established. Broker may not be available.
which seems to indicate my-cluster-kafka-external-bootstrap is forwarding the traffic to my-cluster-kafka-0. Per the kubectl get svc -o yaml output, only my-cluster-kafka-external-bootstrap was set up as a GKE internal LB. Since there are various firewall rules in our environment, I suspect that my-cluster-kafka-0 needs to be set up as a GKE internal LB as well for the producer to work. Does this seem to be the issue? How do I update Strimzi to make both LBs internal? Thanks.
A relevant earlier question: Strimzi kafka accessing it privately with in GKE. But it didn't help after I turned off TLS.
Answering my own question. Apparently Strimzi provisions one LB per broker, which is my-cluster-kafka-0 here. The listener config can specify annotations for these per-broker LBs as described in https://strimzi.io/blog/2019/05/13/accessing-kafka-part-4/
# ...
listeners:
  # ...
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        annotations:
          service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
      brokers:
        - broker: 0
          annotations:
            service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
        - broker: 1
          annotations:
            service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
        - broker: 2
          annotations:
            service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
# ...
Hi Kubernetes Experts,
I have an application cluster running in an Azure Kubernetes cluster. There are 3 pods inside the application cluster. The app is designed such that each pod listens on a different port: for example, pod 1 listens on 31090, pod 2 on 31091, and pod 3 on 31092.
This application needs to be reachable from outside the network. At this point, I need to create a separate load balancer service for each of the pods.
In the service, I cannot use the app name/label as the selector, because the service then distributes traffic between all 3 pods in a round-robin way. As noted above, each port (say 31090) is open on only one pod, so external connections to the load balancer IP fail 2/3 of the time.
So, I am trying to create 3 different load balancer services, one per pod, without specifying a selector, and then assigning an endpoint to each of them individually.
The approach is explained here:
In Kubernetes, how does one select a pod by name in a service selector?
But after the Endpoints object is created, the service shows its endpoints as blank. See below.
First, I created only the service:
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 31090
    targetPort: 31090
    name: b0
  type: LoadBalancer
After this, the service shows Endpoints as <none>. So far, so good.
kubectl describe service myservice
Name:                     myservice
Namespace:                confluent
Labels:                   <none>
Annotations:              <none>
Selector:                 <none>
Type:                     LoadBalancer
IP:                       10.0.184.1
Port:                     b0  31090/TCP
TargetPort:               31090/TCP
NodePort:                 b0  31354/TCP
**Endpoints:              <none>**
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  3s    service-controller  Ensuring load balancer
Then I created the Endpoints object. I have made sure the names match between the Service and the Endpoints, including any spaces or tabs. But the endpoint in the service description shows "" (blank). This is why I am unable to reach the app from the outside network; telnet to the external IP and port just keeps trying.
---
apiVersion: v1
kind: Endpoints
metadata:
  name: myservice
subsets:
  - addresses:
      - ip: 10.240.1.32
    ports:
      - port: 31090
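One detail worth checking here (my assumption, but it matches documented Kubernetes behavior for Services without selectors): when a Service port carries a name, a manually managed Endpoints object must give its port the identical name, otherwise the two are never correlated and the endpoints stay empty. The Service above names its port b0 while the Endpoints port is unnamed. A sketch of the Endpoints with the matching name added:

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: myservice
subsets:
  - addresses:
      - ip: 10.240.1.32
    ports:
      - name: b0        # must match the Service's port name exactly
        port: 31090
```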
kubectl describe service myservice
Name:                     myservice
Namespace:                confluent
Labels:                   <none>
Annotations:              <none>
Selector:                 <none>
Type:                     LoadBalancer
IP:                       10.0.184.1
LoadBalancer Ingress:     20.124.49.192
Port:                     b0  31090/TCP
TargetPort:               31090/TCP
NodePort:                 b0  31354/TCP
**Endpoints:**
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age    From                Message
  ----    ------                ----   ----                -------
  Normal  EnsuringLoadBalancer  3m22s  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   3m10s  service-controller  Ensured load balancer
Only this service (the one with no selector) is failing. All my other external load balancer services are working fine; they all reach their pods, and they all use an app label as the selector.
Here is the pod IP. I have ensured port 31090 is listening inside the pod.
kubectl get pods -o wide
NAME              READY   STATUS    RESTARTS      AGE   IP                NODE                                NOMINATED NODE   READINESS GATES
ck21-cp-kafka-0   2/2     Running   2 (76m ago)   78m   **10.240.1.32**   aks-agentpool-26199219-vmss000013   <none>           <none>
Can someone please help me here?
Thanks !
I am running Kafka on my local k8s cluster, with one broker instance and one ZooKeeper instance. I have logged into the Kafka broker container and I am able to create a topic and send/receive messages on it.
Now I want to send message from another pod which is running in a different namespace. Below are services in my Kafka cluster:
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
kafka-service   LoadBalancer   10.108.3.192     localhost     9092:30785/TCP               56m   app=kafka,id=0
zoo1            ClusterIP      10.108.132.149   <none>        2181/TCP,2888/TCP,3888/TCP   56m   app=zookeeper-1
I am trying to curl the services from a different pod.
I am able to access the zookeeper service.
bash-5.1# curl zoo1.queues.svc.cluster.local:2181
curl: (52) Empty reply from server
But when I try to access kafka-service.queues.svc.cluster.local:9092, I get the below error:
bash-5.1# curl kafka-service.queues.svc.cluster.local:9092
curl: (56) Recv failure: Connection reset by peer
How do I access the LoadBalancer service from another pod living in a different namespace?
Below is the description of the service:
Name:                     kafka-service
Namespace:                queues
Labels:                   name=kafka
Annotations:              <none>
Selector:                 app=kafka,id=0
Type:                     LoadBalancer
IP:                       10.108.3.192
LoadBalancer Ingress:     localhost
Port:                     kafka-port  9092/TCP
TargetPort:               9092/TCP
NodePort:                 kafka-port  30785/TCP
Endpoints:                10.1.1.95:9092
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
I've deployed https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka on my on prem k8s cluster.
I'm trying to expose it using a TCP controller with nginx.
My TCP nginx configmap looks like
data:
  "<zookeper-tcp-port>": <namespace>/cp-zookeeper:2181
  "<kafka-tcp-port>": <namespace>/cp-kafka:9092
And I've made the corresponding entries in my nginx ingress controller:
- name: <zookeper-tcp-port>-tcp
  port: <zookeper-tcp-port>
  protocol: TCP
  targetPort: <zookeper-tcp-port>-tcp
- name: <kafka-tcp-port>-tcp
  port: <kafka-tcp-port>
  protocol: TCP
  targetPort: <kafka-tcp-port>-tcp
Now I'm trying to connect to my kafka instance.
When I just try to connect to the IP and port using kafka tools, I get the error message:
Unable to determine broker endpoints from Zookeeper.
One or more brokers have multiple endpoints for protocol PLAIN...
Please proved bootstrap.servers value in advanced settings
[<cp-broker-address-0>.cp-kafka-headless.<namespace>:<port>][<ip>]
When I enter what I assume are the correct broker addresses (I've tried them all...), I get a timeout. There are no logs coming from the nginx controller except:
[08/Apr/2020:15:51:12 +0000]TCP200000.000
[08/Apr/2020:15:51:12 +0000]TCP200000.000
[08/Apr/2020:15:51:14 +0000]TCP200000.001
From the pod kafka-zookeeper-0 I'm getting loads of:
[2020-04-08 15:52:02,415] INFO Accepted socket connection from /<ip:port> (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-08 15:52:02,415] WARN Unable to read additional data from client sessionid 0x0, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)
[2020-04-08 15:52:02,415] INFO Closed socket connection for client /<ip:port> (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
Though I'm not sure these have anything to do with it?
Any ideas on what I'm doing wrong?
Thanks in advance.
TL;DR:
Change the value nodeport.enabled to true inside cp-kafka/values.yaml before deploying.
Change the service name and ports in your TCP NGINX ConfigMap and Ingress object.
Set bootstrap-server on your kafka tools to <Cluster_External_IP>:31090
Explanation:
The Headless Service was created alongside the StatefulSet. The created service will not be given a clusterIP, but will instead simply include a list of Endpoints.
These Endpoints are then used to generate instance-specific DNS records in the form of:
<StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local
It creates a DNS name for each pod, e.g.:
[ root@curl:/ ]$ nslookup my-confluent-cp-kafka-headless
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: my-confluent-cp-kafka-headless
Address 1: 10.8.0.23 my-confluent-cp-kafka-1.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 2: 10.8.1.21 my-confluent-cp-kafka-0.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 3: 10.8.3.7 my-confluent-cp-kafka-2.my-confluent-cp-kafka-headless.default.svc.cluster.local
This is what lets these services connect to each other inside the cluster.
I went through a lot of trial and error until I realized how it was supposed to work. Based on your TCP Nginx ConfigMap, I believe you faced the same issue.
The Nginx ConfigMap asks for: <PortToExpose>: "<Namespace>/<Service>:<InternallyExposedPort>".
I realized that you don't need to expose Zookeeper, since it's an internal service handled by the Kafka brokers.
I also realized that you are trying to expose cp-kafka:9092, which is the headless service, also only used internally, as explained above.
In order to get outside access you have to set the parameter nodeport.enabled to true, as stated here: External Access Parameters.
It adds one service for each kafka-N pod during chart deployment.
Then you change your configmap to map to one of them:
data:
  "31090": default/demo-cp-kafka-0-nodeport:31090
Note that the service created has the selector statefulset.kubernetes.io/pod-name: demo-cp-kafka-0; this is how the service identifies the pod it is intended to connect to.
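For context, the per-broker service the chart generates looks roughly like this. This is a sketch reconstructed from the selector and the nodeport values discussed here; the exact labels in the chart may differ:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-cp-kafka-0-nodeport
spec:
  type: NodePort
  selector:
    # pins the service to exactly one pod of the StatefulSet
    statefulset.kubernetes.io/pod-name: demo-cp-kafka-0
  ports:
  - name: external-broker
    port: 19092        # servicePort from values.yaml
    nodePort: 31090    # firstListenerPort from values.yaml
    targetPort: 31090
```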
Edit the nginx-ingress-controller:
- containerPort: 31090
  hostPort: 31090
  protocol: TCP
Set your kafka tools to <Cluster_External_IP>:31090
Reproduction:
- Snippet edited in cp-kafka/values.yaml:
nodeport:
  enabled: true
  servicePort: 19092
  firstListenerPort: 31090
Deploy the chart:
$ helm install demo cp-helm-charts
$ kubectl get pods
NAME                                       READY   STATUS    RESTARTS   AGE
demo-cp-control-center-6d79ddd776-ktggw    1/1     Running   3          113s
demo-cp-kafka-0                            2/2     Running   1          113s
demo-cp-kafka-1                            2/2     Running   0          94s
demo-cp-kafka-2                            2/2     Running   0          84s
demo-cp-kafka-connect-79689c5c6c-947c4     2/2     Running   2          113s
demo-cp-kafka-rest-56dfdd8d94-79kpx        2/2     Running   1          113s
demo-cp-ksql-server-c498c9755-jc6bt        2/2     Running   2          113s
demo-cp-schema-registry-5f45c498c4-dh965   2/2     Running   3          113s
demo-cp-zookeeper-0                        2/2     Running   0          112s
demo-cp-zookeeper-1                        2/2     Running   0          93s
demo-cp-zookeeper-2                        2/2     Running   0          74s
$ kubectl get svc
NAME                         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
demo-cp-control-center       ClusterIP   10.0.13.134   <none>        9021/TCP            50m
demo-cp-kafka                ClusterIP   10.0.15.71    <none>        9092/TCP            50m
demo-cp-kafka-0-nodeport     NodePort    10.0.7.101    <none>        19092:31090/TCP     50m
demo-cp-kafka-1-nodeport     NodePort    10.0.4.234    <none>        19092:31091/TCP     50m
demo-cp-kafka-2-nodeport     NodePort    10.0.3.194    <none>        19092:31092/TCP     50m
demo-cp-kafka-connect        ClusterIP   10.0.3.217    <none>        8083/TCP            50m
demo-cp-kafka-headless       ClusterIP   None          <none>        9092/TCP            50m
demo-cp-kafka-rest           ClusterIP   10.0.14.27    <none>        8082/TCP            50m
demo-cp-ksql-server          ClusterIP   10.0.7.150    <none>        8088/TCP            50m
demo-cp-schema-registry      ClusterIP   10.0.7.84     <none>        8081/TCP            50m
demo-cp-zookeeper            ClusterIP   10.0.9.119    <none>        2181/TCP            50m
demo-cp-zookeeper-headless   ClusterIP   None          <none>        2888/TCP,3888/TCP   50m
Create the TCP configmap:
$ cat nginx-tcp-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
data:
  "31090": "default/demo-cp-kafka-0-nodeport:31090"
$ kubectl apply -f nginx-tcp-configmap.yaml
configmap/tcp-services created
Edit the Nginx Ingress Controller:
$ kubectl edit deploy nginx-ingress-controller -n kube-system
$ kubectl get deploy nginx-ingress-controller -n kube-system -o yaml
{{{suppressed output}}}
ports:
- containerPort: 31090
  hostPort: 31090
  protocol: TCP
- containerPort: 80
  name: http
  protocol: TCP
- containerPort: 443
  name: https
  protocol: TCP
My ingress is on IP 35.226.189.123, now let's try to connect from outside the cluster. For that I'll connect to another VM where I have a minikube, so I can use kafka-client pod to test:
user@minikube:~$ kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
kafka-client   1/1     Running   0          17h
user@minikube:~$ kubectl exec kafka-client -it -- /bin/bash
root@kafka-client:/# kafka-console-consumer --bootstrap-server 35.226.189.123:31090 --topic demo-topic --from-beginning --timeout-ms 8000 --max-messages 1
Wed Apr 15 18:19:48 UTC 2020
Processed a total of 1 messages
root@kafka-client:/#
As you can see, I was able to access Kafka from outside the cluster.
If you need external access to Zookeeper as well, here is a service model for you:
zookeeper-external-0.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cp-zookeeper
    pod: demo-cp-zookeeper-0
  name: demo-cp-zookeeper-0-nodeport
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: external-broker
    nodePort: 31181
    port: 12181
    protocol: TCP
    targetPort: 31181
  selector:
    app: cp-zookeeper
    statefulset.kubernetes.io/pod-name: demo-cp-zookeeper-0
  sessionAffinity: None
  type: NodePort
It will create a service for it:
NAME                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)           AGE
demo-cp-zookeeper-0-nodeport   NodePort   10.0.5.67    <none>        12181:31181/TCP   2s
Patch your configmap:
data:
  "31090": default/demo-cp-kafka-0-nodeport:31090
  "31181": default/demo-cp-zookeeper-0-nodeport:31181
Add the Ingress rule:
ports:
- containerPort: 31181
  hostPort: 31181
  protocol: TCP
Test it with your external IP:
pod/zookeeper-client created
user@minikube:~$ kubectl exec -it zookeeper-client -- /bin/bash
root@zookeeper-client:/# zookeeper-shell 35.226.189.123:31181
Connecting to 35.226.189.123:31181
Welcome to ZooKeeper!
JLine support is disabled
If you have any doubts, let me know in the comments!
I have created a Kafka cluster using the Strimzi Kafka operator on minikube to learn the basics. I am trying to access Kafka inside the minikube environment from my host, and for this I created a Kafka NodePort listener:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: kafka-cluster
spec:
  kafka:
    version: 2.4.0
    replicas: 3
    listeners:
      plain: {}
      tls: {}
      external:
        type: nodeport
        tls: false
        overrides:
          bootstrap:
            nodePort: 32100
          brokers:
          - broker: 0
            nodePort: 32000
          - broker: 1
            nodePort: 32001
          - broker: 2
            nodePort: 32002
...
...
...
NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                               AGE
kafka-cluster-kafka-0                    NodePort    10.96.40.176    <none>        9094:32000/TCP                        7d
kafka-cluster-kafka-1                    NodePort    10.96.138.2     <none>        9094:32001/TCP                        7d
kafka-cluster-kafka-2                    NodePort    10.96.209.16    <none>        9094:32002/TCP                        7d
kafka-cluster-kafka-bootstrap            ClusterIP   10.96.216.169   <none>        9091/TCP,9092/TCP,9093/TCP,9404/TCP   7d
kafka-cluster-kafka-brokers              ClusterIP   None            <none>        9091/TCP,9092/TCP,9093/TCP            7d
kafka-cluster-kafka-exporter             ClusterIP   10.96.17.45     <none>        9404/TCP                              47d
kafka-cluster-kafka-external-bootstrap   NodePort    10.96.252.97    <none>        9094:32100/TCP                        7d
kafka-cluster-zookeeper-client           ClusterIP   10.96.155.34    <none>        9404/TCP,2181/TCP                     7d
kafka-cluster-zookeeper-nodes            ClusterIP   None            <none>        2181/TCP,2888/TCP,3888/TCP            7d
All seems to work well so far, and I am able to publish messages to the Kafka topic using kafka-cluster-kafka-external-bootstrap (withBootstrapServers("192.168.99.107:32100")). Since I am learning things step by step, I wanted to see if I could assign a name instead of referring to it by IP address.
Is there an easy way to do it in the Nodeport configuration? I have been stuck on this issue for a week. Appreciate a nudge in the right direction!
That mainly depends on your infrastructure. Minikube uses IP addresses as both the node address and the load balancer address, which is why Strimzi always gives you an address based on the IP. If you ran it on Kubernetes somewhere on AWS, for example, you would get DNS names out of the box instead of IPs.
Even with Minikube you can configure Strimzi to use names you specify: https://strimzi.io/docs/latest/full.html#con-kafka-broker-external-listeners-addresses-deployment-configuration-kafka ... but you would need to make sure these names route to the Minikube IP address, for example by adding them manually to /etc/hosts. TBH, this is normally not worth the effort with Minikube.
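For completeness, such an override could look like the following on the v1beta1 schema used in the question. The hostname is a placeholder of my own and would need an /etc/hosts entry pointing at the Minikube IP; this is a sketch, not a tested configuration:

```yaml
external:
  type: nodeport
  tls: false
  overrides:
    bootstrap:
      nodePort: 32100
    brokers:
    - broker: 0
      advertisedHost: kafka-0.minikube.local  # placeholder name, must resolve to the Minikube IP
      nodePort: 32000
```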
You should install a Kubernetes bare-metal load balancer, like MetalLB, then use a LoadBalancer service instead of a NodePort service, get the EXTERNAL-IP value of the LoadBalancer service, and point DNS at that IP.
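For illustration, a minimal MetalLB Layer 2 setup might look like this. The address range is a placeholder on your node network, and these IPAddressPool/L2Advertisement resources apply to MetalLB 0.13 and later (older releases used a ConfigMap instead):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.0.0.240-10.0.0.250   # placeholder range on the node network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - lab-pool
```

Once MetalLB is running, any Service of type LoadBalancer should receive an address from the pool instead of staying <pending>.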