How to assign a DNS name to a Kafka Kubernetes Nodeport - kubernetes

I have created a Kafka cluster using the Strimzi Kafka operator on minikube to learn the basics. I am trying to access Kafka inside the minikube environment from my Host and for this I created a Kafka Node port:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: kafka-cluster
spec:
  kafka:
    version: 2.4.0
    replicas: 3
    listeners:
      plain: {}
      tls: {}
      external:
        type: nodeport
        tls: false
        overrides:
          bootstrap:
            nodePort: 32100
          brokers:
          - broker: 0
            nodePort: 32000
          - broker: 1
            nodePort: 32001
          - broker: 2
            nodePort: 32002
...
...
...
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kafka-cluster-kafka-0 NodePort 10.96.40.176 <none> 9094:32000/TCP 7d
kafka-cluster-kafka-1 NodePort 10.96.138.2 <none> 9094:32001/TCP 7d
kafka-cluster-kafka-2 NodePort 10.96.209.16 <none> 9094:32002/TCP 7d
kafka-cluster-kafka-bootstrap ClusterIP 10.96.216.169 <none> 9091/TCP,9092/TCP,9093/TCP,9404/TCP 7d
kafka-cluster-kafka-brokers ClusterIP None <none> 9091/TCP,9092/TCP,9093/TCP 7d
kafka-cluster-kafka-exporter ClusterIP 10.96.17.45 <none> 9404/TCP 47d
kafka-cluster-kafka-external-bootstrap NodePort 10.96.252.97 <none> 9094:32100/TCP 7d
kafka-cluster-zookeeper-client ClusterIP 10.96.155.34 <none> 9404/TCP,2181/TCP 7d
kafka-cluster-zookeeper-nodes ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 7d
Everything seems to work well so far, and I am able to publish messages to the Kafka topic using kafka-cluster-kafka-external-bootstrap (withBootstrapServers("192.168.99.107:32100")). Since I am learning things step by step, I wanted to see if I could assign a name instead of referring to the cluster by IP address.
Is there an easy way to do this in the NodePort configuration? I have been stuck on this for a week. I'd appreciate a nudge in the right direction!

That mainly depends on your infrastructure. Minikube uses IP addresses as both the node address and the load balancer address, which is why Strimzi always gives you an address based on the IP. If you ran it on a Kubernetes cluster somewhere on AWS, for example, you would get DNS names instead of IPs out of the box.
Even with Minikube you can configure Strimzi to advertise names you specify: https://strimzi.io/docs/latest/full.html#con-kafka-broker-external-listeners-addresses-deployment-configuration-kafka ... but you would need to make sure those names resolve to the Minikube IP address, for example by adding them manually to /etc/hosts. TBH, this is normally not worth the effort with Minikube. A minimal sketch of such an override is shown below.
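For illustration, a per-broker address override could look roughly like this (a minimal sketch only; the names kafka-0.minikube.local etc. are made up for this example and would have to be added to /etc/hosts pointing at the Minikube IP, and the exact schema is in the linked docs):
external:
  type: nodeport
  tls: false
  overrides:
    brokers:
    - broker: 0
      nodePort: 32000
      advertisedHost: kafka-0.minikube.local   # hypothetical name; must resolve to the Minikube IP
    - broker: 1
      nodePort: 32001
      advertisedHost: kafka-1.minikube.local
    - broker: 2
      nodePort: 32002
      advertisedHost: kafka-2.minikube.local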

You should install a bare-metal Kubernetes load balancer such as MetalLB, use a LoadBalancer service instead of a NodePort service, get the EXTERNAL-IP value of the LoadBalancer service, and point your DNS record at that IP.
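As a rough sketch (not specific to any environment), a MetalLB layer-2 configuration could look like this; the metallb.io/v1beta1 CRDs assume a recent MetalLB release, and the address range and names are placeholders that must match a free range on your own network:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kafka-pool                    # hypothetical name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.99.240-192.168.99.250     # assumed free range on your LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kafka-l2                      # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:
  - kafka-pool
Once the Strimzi loadbalancer services get an EXTERNAL-IP from this pool, you can point a DNS record (or an /etc/hosts entry) at it.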

Related

Strimzi Kafka setup with GKE internal loadbalancer

I followed https://strimzi.io/quickstarts/ and https://strimzi.io/blog/2019/05/13/accessing-kafka-part-4/ to use a GKE internal load balancer with Strimzi. After adding the internal load balancer, Strimzi provisioned two LoadBalancer services with external IPs.
Kafka % k get svc -n kafka
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-cluster-kafka-0 LoadBalancer xx.xxx.xx.xxx bb.bb.bbb.bb 9094:30473/TCP 3d1h
my-cluster-kafka-bootstrap ClusterIP xx.xxx.xx.xxx <none> 9091/TCP,9092/TCP,9093/TCP 25d
my-cluster-kafka-brokers ClusterIP None <none> 9090/TCP,9091/TCP,9092/TCP,9093/TCP 25d
my-cluster-kafka-external-bootstrap LoadBalancer xx.xxx.xx.xxx aa.aa.aaa.aa 9094:30002/TCP 3d1h
my-cluster-zookeeper-client ClusterIP xx.xxx.xx.xxx <none> 2181/TCP 25d
my-cluster-zookeeper-nodes ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 25d
The producer/consumer flow is working inside the cluster using my-cluster-kafka-bootstrap, and I can also curl the my-cluster-kafka-external-bootstrap address aa.aa.aaa.aa:9094 from outside the cluster. However, after producing to aa.aa.aaa.aa:9094 from outside the cluster, my producer logged the error below.
Connection to node 0 (bb.bb.bbb.bb:9094) could not be established. Broker may not be available.
which seems to indicate that my-cluster-kafka-external-bootstrap is forwarding the traffic to my-cluster-kafka-0. And per the kubectl get svc -o yaml output, only my-cluster-kafka-external-bootstrap was set up as a GKE internal LB. Since there are various firewall rules in our environment, I suspect that my-cluster-kafka-0 needs to be set up as a GKE internal LB as well for the producer to work. Does this seem to be the issue? How do I update Strimzi to make both LBs internal? Thanks.
There is a related earlier question, Strimzi kafka accessing it privately with in GKE, but it didn't help after I turned off TLS.
Answering my own question: apparently Strimzi provisions one LB per broker, which is my-cluster-kafka-0 here. The listener configuration can mark these per-broker LBs as internal, as shown in https://strimzi.io/blog/2019/05/13/accessing-kafka-part-4/:
# ...
listeners:
  # ...
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        annotations:
          service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
      brokers:
      - broker: 0
        annotations:
          service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
      - broker: 1
        annotations:
          service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
      - broker: 2
        annotations:
          service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
  # ...
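Note that the annotation shown above is the OpenStack one from the blog post. For GKE internal load balancers the annotation differs; assuming a reasonably recent GKE version it would be networking.gke.io/load-balancer-type: "Internal" (older GKE versions use cloud.google.com/load-balancer-type: "Internal" instead), for example:
configuration:
  bootstrap:
    annotations:
      networking.gke.io/load-balancer-type: "Internal"
  brokers:
  - broker: 0
    annotations:
      networking.gke.io/load-balancer-type: "Internal"
  # ... and likewise for the remaining brokers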

Minio Deployment in Kubernetes: Console getting redirected

I made a Minio deployment in my 2 Node Kubernetes cluster using YAML files.
I used an NFS server for the corresponding persistent volume and the PVC associated with it.
Once the pod was running, I created a service to access the console from the browser.
But when I tried the URL http://<host-ip-address>:<nodePort>, it was redirected to port 45893 with the message "This site cannot be reached."
Regards,
Vivek
After many tries, I got a solution with the help of a friend.
We created a copy of the service, changed its port to the one the Minio console was being redirected to, and set its nodePort to a random port allowed through the firewall. This resolved the issue.
service.yaml
type: LoadBalancer
ports:
- port: 9000
  nodePort: 32767
  protocol: TCP
selector:
service_copy.yaml
ports:
- port: 45893
  nodePort: 32766
  protocol: TCP
selector:
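Put together, a complete manifest for the copy service might look roughly like this (the service name and selector label are hypothetical placeholders; use whatever your original Minio service uses):
apiVersion: v1
kind: Service
metadata:
  name: minio-service-cp        # hypothetical name
spec:
  type: NodePort
  ports:
  - port: 45893                 # the port the console redirects to
    targetPort: 45893
    nodePort: 32766             # any free port allowed through the firewall
    protocol: TCP
  selector:
    app: minio                  # hypothetical label; must match the Minio pods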
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP X.X.X.X <none> 443/TCP 25d
minio-xxx-service NodePort X.X.X.X <none> 9000:32767/TCP 3d23h
minio-xxxx-service-cp NodePort X.X.X.X <none> 45893:32766/TCP 146m
After doing the same, I was able to access the console.
Regards,
Vivek

External access to Kafka using Strimzi

I'm attempting to provide bi-directional external access to Kafka using Strimzi by following this guide: Red Hat Developer - Kafka in Kubernetes
My YAML, taken from the Strimzi examples on GitHub, is as follows:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.0.0
    replicas: 1 #3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
      - name: external
        port: 9094
        type: loadbalancer
        tls: false
        configuration:
          #externalTrafficPolicy: Local
          #loadBalancerSourceRanges:
          #  - 10.0.0.200/32
          brokers:
            - broker: 0
              advertisedHost: 10.0.0.200
              advertisedPort: 30123
    config:
      offsets.topic.replication.factor: 1 #3
      transaction.state.log.replication.factor: 1 #3
      transaction.state.log.min.isr: 1 #2
      log.message.format.version: "3.0"
      inter.broker.protocol.version: "3.0"
    storage:
      type: ephemeral
  zookeeper:
    replicas: 1 #3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
When running kubectl get services I'm presented with the following:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 48m
my-cluster-kafka-0 LoadBalancer 10.107.190.96 <pending> 9094:31964/TCP 29m
my-cluster-kafka-bootstrap ClusterIP 10.99.34.246 <none> 9091/TCP,9092/TCP,9093/TCP 43m
my-cluster-kafka-brokers ClusterIP None <none> 9090/TCP,9091/TCP,9092/TCP,9093/TCP 43m
my-cluster-kafka-external-bootstrap LoadBalancer 10.99.91.68 <pending> 9094:31442/TCP 29m
my-cluster-zookeeper-client ClusterIP 10.101.216.35 <none> 2181/TCP 45m
my-cluster-zookeeper-nodes ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 45m
Note that my-cluster-kafka-0 and my-cluster-kafka-external-bootstrap have a <pending> EXTERNAL-IP. What am I missing in my YAML file to provide bi-directional external access to my-cluster-kafka-0?
Strimzi just creates the Kubernetes Service of type LoadBalancer. It is up to your Kubernetes cluster to provision the load balancer and set its external address, which Strimzi can then use. When the external address is listed as <pending>, it means the load balancer has not (yet) been created. In some public clouds that can take a few minutes, so it might just be a matter of waiting. But keep in mind that load balancers are not supported in all environments, and when they are not supported, you cannot really use them, so you need to double-check whether your environment supports them. Typically the public clouds support load balancers while some local or bare-metal environments might not (but it really depends).
I'm also not really sure why you configured the advertised host and port:
advertisedHost: 10.0.0.200
advertisedPort: 30123
When using load balancers (assuming they are supported in your environment), you would normally want to use the load balancer address, which is automatically set as the advertised host/port. Apart from that, your YAML looks good; the load balancer support is probably what's missing.
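If it turns out that your environment has no load balancer support at all, one alternative (a sketch only, assuming clients can reach the node on 10.0.0.200) is to switch the external listener to a nodeport type and keep the advertised host/port override you already have:
- name: external
  port: 9094
  type: nodeport
  tls: false
  configuration:
    brokers:
      - broker: 0
        advertisedHost: 10.0.0.200   # address clients should use; must reach the node
        advertisedPort: 30123
        nodePort: 30123              # pin the node port instead of getting a random one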

Access services on k8s on prem

I have 3 virtual machines (Ubuntu 18 LTS) on my local PC: 1 is the master and 2 are nodes. I was able to install Kubernetes and set up my application.
My application consists of 3 parts: database, backend, and frontend. For each of these parts I've created and deployed a service. I want to expose the FE service outside the cluster so that I can access it via one of the nodes.
The service description looks like this:
apiVersion: v1
kind: Service
metadata:
  name: fe-deployment
  labels:
    run: fe-srv
spec:
  ports:
  - protocol: TCP
    port: 8085
    targetPort: 80
  selector:
    app: fe
  type: NodePort
The output of
kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8node1 Ready <none> 2d22h v1.16.0 172.17.199.105 <none> Ubuntu 18.04.3 LTS 5.0.0-29-generic docker://18.9.7
k8node2 Ready <none> 2d22h v1.16.0 172.17.199.110 <none> Ubuntu 18.04.3 LTS 5.0.0-29-generic docker://18.9.7
kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
be-deployment ClusterIP 10.96.169.225 <none> 8080/TCP 2d22h app=be
db-deployment ClusterIP 10.110.14.88 <none> 3306/TCP 2d22h app=db
fe-deployment NodePort 10.104.211.32 <none> 8085:32476/TCP 2d21h app=fe
I would have expected to be able to access my FE from a browser using a node IP and the node port, but it doesn't work.
What am I missing? How can I access my FE from outside the cluster?
Edit
Based on the documentation, NodePort service type should:
Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting NodeIP:NodePort
I understand that I will access my service from outside of the cluster using node IP and static port. From the node IP statement I understand that it refers to the machine (the VM in my case) IP.
Later Edit
I've checked the firewall and it seems to be disabled on all my machines:
sudo ufw status
Status: inactive
Later later edit
As I said in a comment, trying to telnet to the IPv4 address didn't work. Trying with IPv6 does work on localhost and also on the ethernet interface's IPv6 address.
The netstat output is:
netstat -6 -a | grep 324
tcp6 1 0 [::]:32476 [::]:* LISTEN
Despite the fact that it should work (based on the information I read on the internet), it doesn't work with IPv4. Is there a way to change this?
Later later later edit
It seems that this is a bug
You can assign the node's IP address as the EXTERNAL-IP of the fe service.
Then you can check: curl -k http://EXTERNAL-IP:PORT
Here EXTERNAL-IP is the IP address of the node (the server itself).
In your case, since you didn't define nodePort, Kubernetes randomly assigned port 32476 to your service. To access that service, go to <EXTERNAL-NODE-IP>:32476 (kubernetes-docs).
If you want to assign a specific port, you need to define nodePort in the service definition (example for an nginx-based ingress):
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
spec:
  ports:
  - name: http
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
You do not get an external IP when exposing a service as a NodePort.
Exposing a Service as a NodePort means that your service is available externally via the node IP of any node in the cluster, at a random port between 30000-32767 (default behaviour).
Each node in the cluster proxies that port (the same port number on every node) into the pods behind your service.
From your kubectl get service -o wide output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
fe-deployment NodePort 10.104.211.32 <none> 8085:32476/TCP 2d21h app=fe
We can see that the port on which your service is exposed is 32476.
From Your kubectl get node -o wide output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8node1 Ready <none> 2d22h v1.16.0 172.17.199.105 <none> Ubuntu 18.04.3 LTS 5.0.0-29-generic docker://18.9.7
k8node2 Ready <none> 2d22h v1.16.0 172.17.199.110 <none> Ubuntu 18.04.3 LTS 5.0.0-29-generic docker://18.9.7
We can see that your node IPs are 172.17.199.105 and 172.17.199.110.
You can now access your service externally using <Node-IP>:<Node-Port>.
So in your case these are 172.17.199.105:32476 and 172.17.199.110:32476, depending on which node you want to use to reach your service.
Additionally, if you want a fixed node port, you can specify that in the YAML.
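Applied to the fe-deployment service from the question, that would look like this (32476 is simply the port the cluster already assigned; any free port in the 30000-32767 range works):
apiVersion: v1
kind: Service
metadata:
  name: fe-deployment
  labels:
    run: fe-srv
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 8085
    targetPort: 80
    nodePort: 32476    # fixed node port instead of a random assignment
  selector:
    app: fe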
You need to make sure you add a security rule on your nodes to allow traffic on the particular port.

Map service on minikube to host IP

This is my first time running through the Kubernetes tutorial.
I installed Docker, Kubectl and Minikube on a headless Ubuntu server (18.04).
I ran Minikube like this -
minikube start --vm-driver=none
I have a local Docker image that runs a RESTful service on port 9110. I created a deployment and exposed it like this -
kubectl run hello-node --image=dbtemplate --port=9110 --image-pull-policy=Never
kubectl expose deployment hello-node --type=NodePort
status of my service -
# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node NodePort 10.98.104.45 <none> 9110:32651/TCP 39m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h2m
# kubectl describe services hello-node
Name: hello-node
Namespace: default
Labels: run=hello-node
Annotations: <none>
Selector: run=hello-node
Type: NodePort
IP: 10.98.104.45
Port: <unset> 9110/TCP
TargetPort: 9110/TCP
NodePort: <unset> 32651/TCP
Endpoints: 172.17.0.5:9110
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
# minikube ip
192.168.1.216
As you can see, the service endpoint is on the internal IP 172.17.0.5.
Is there some way to get this service mapped to/exposed on the IP of the parent host, which is 192.168.1.216? I would like my RESTful service to be available at 192.168.1.216:9110.
I think minikube tunnel might be what you're looking for. https://github.com/kubernetes/minikube/blob/master/docs/networking.md
Services of type LoadBalancer can be exposed via the minikube tunnel command.
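A minimal sketch of what that could look like for the hello-node deployment from the question (switching the service type to LoadBalancer; name, port, and selector are taken from the kubectl describe output above):
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  type: LoadBalancer     # instead of NodePort
  ports:
  - port: 9110
    targetPort: 9110
    protocol: TCP
  selector:
    run: hello-node
With minikube tunnel running in another terminal, the service should then be assigned an EXTERNAL-IP that is reachable from the host.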