How to ping a headless pod from another namespace - kubernetes

Hello, I have a problem connecting directly to a chosen pod from another namespace. I checked with nslookup and the DNS name for my pod is kafka-0.kafka-headless-svc.message-broker.svc.cluster.local; when I ping it from the same namespace everything is fine, but the same thing fails from another namespace. When I try something like kafka.message-broker.svc.cluster.local everything works, but I want to connect to a specific pod, not one chosen by the balancer. My configs:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: kafka
  name: kafka
  namespace: message-broker
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - env:
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper.message-broker.svc.cluster.local
        - name: KAFKA_LISTENERS
          value: PLAINTEXT://kafka-0:9092
        - name: KAFKA_ADVERTISED_LISTENERS
          value: PLAINTEXT://kafka-0:9092
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        name: kafka
        ports:
        - containerPort: 9092
          name: kafka
  serviceName: "kafka-headless-svc"
  selector:
    matchLabels:
      app: kafka
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka
  name: kafka
  namespace: message-broker
spec:
  clusterIP: None
  selector:
    app: kafka
  ports:
  - port: 9092
    targetPort: 9092
    protocol: TCP
I'm doing everything on minikube, so maybe there is some problem with a NetworkPolicy (I didn't set any).

Problem resolved. I had to set serviceName in the StatefulSet (in my example "kafka-headless-svc") to be the same as metadata.name in my Service file.
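For reference, these are the two fields that have to match (a minimal sketch showing only the relevant keys; either name works as long as both are identical):

# StatefulSet
spec:
  serviceName: "kafka-headless-svc"   # must equal the headless Service's metadata.name
---
# Service
metadata:
  name: kafka-headless-svc            # renamed from "kafka" so the StatefulSet can find it
spec:
  clusterIP: None

With that in place, kafka-0.kafka-headless-svc.message-broker.svc.cluster.local resolves from other namespaces as well.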

Related

Error Init Kafka Client: kafka: client has run out of available brokers to talk to: dial tcp: lookup kafka on 10.96.0.10:53: no such host

C:\kafka>kubectl logs kafka-exporter-745f574c74-tzfn4
I0127 12:39:08.113890 1 kafka_exporter.go:792] Starting kafka_exporter (version=1.6.0, branch=master, revision=9d9cd654ca57e4f153d0d0b00ce36069b6a677c1)
F0127 12:39:08.890639 1 kafka_exporter.go:893] Error Init Kafka Client: kafka: client has run out of available brokers to talk to: dial tcp: lookup kafka.osm.svc.cluster.local on 10.96.0.10:53: no such host
Below is the kafka-exporter-deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-exporter
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-exporter
  template:
    metadata:
      labels:
        app: kafka-exporter
    spec:
      containers:
      - name: kafka-exporter
        image: danielqsj/kafka-exporter:latest
        imagePullPolicy: IfNotPresent
        args:
        - --kafka.server=kafka.osm.svc.cluster.local:9092
        - --web.listen-address=:9092
        ports:
        - containerPort: 9308
        env:
        - name: KAFKA_EXPORTER_KAFKA_CONNECT
          value: kafka-broker-644794f4ff-8gmxb:9092
        - name: KAFKA_EXPORTER_TOPIC_WHITELIST
          value: samptopic
Kafka-exporter-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-exporter
  namespace: default
spec:
  selector:
    app: kafka-exporter
  ports:
  - name: http
    port: 9308
    targetPort: 9308
  type: NodePort
Actually there is no namespace called "osm"; everything is running in the default namespace.
Refer to the Kubernetes documentation.
As the error says, there is no Service kafka in an osm namespace (whether or not that namespace exists) that can be reached. Remove .osm from that address, or change it to .default, assuming there really is a Service named kafka available on port 9092.
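For example, with everything in the default namespace the exporter argument would point at (a sketch, assuming the Service is really named kafka and listens on 9092):

args:
- --kafka.server=kafka.default.svc.cluster.local:9092

(or simply kafka:9092, since the exporter runs in the same namespace as the Service).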

Kafka and Zookeeper pods restart repeatedly on Kubernetes cluster

I'm developing a microservices-based application deployed with Kubernetes for a university project. I'm a newbie with Kubernetes and Kafka and I'm trying to run Kafka and Zookeeper in the same minikube cluster. I have created one pod for Kafka and one pod for Zookeeper, but after deploying them on the cluster they restart repeatedly and end up in the "CrashLoopBackOff" state. Looking at the logs I noticed that Kafka throws a "ConnectException: Connection refused"; it seems that Kafka cannot establish a connection with Zookeeper. I created the pods manually with the following config files:
zookeeper-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zookeeper
        image: bitnami/zookeeper
        ports:
        - name: http
          containerPort: 2181
        env:
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  selector:
    app: zookeeper
  ports:
  - protocol: TCP
    port: 2181
kafka-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: bitnami/kafka
        ports:
        - name: http
          containerPort: 9092
        env:
        - name: KAFKA_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: ALLOW_PLAINTEXT_LISTENER
          value: "yes"
        - name: KAFKA_CFG_LISTENERS
          value: PLAINTEXT://:9092
        - name: KAFKA_CFG_ADVERTISED_LISTENERS
          value: PLAINTEXT://$(KAFKA_POD_IP):9092
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: PLAINTEXT:PLAINTEXT
        - name: KAFKA_CFG_ZOOKEEPER_CONNECT
          value: zookeeper:2181
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  selector:
    app: kafka
  ports:
  - protocol: TCP
    port: 9092
  type: LoadBalancer
Kafka and Zookeeper configurations are more or less the same as those I used with docker-compose without errors, so there is probably something wrong in my configuration for Kubernetes. Could anyone help me? I don't understand the issue, thanks.
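One way to narrow this down (a diagnostic sketch, not part of the original post) is to confirm that the zookeeper Service actually has endpoints and resolves from inside the cluster before Kafka starts:

kubectl get endpoints zookeeper
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup zookeeper

If the endpoints list is empty or the lookup fails, Kafka's zookeeper:2181 connection string cannot work.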

What host does Kubernetes assign to my deployment?

I have two Kubernetes deployments: composite-app (1 pod) and product-app (2 pods), both listening on port 8080. The first one sometimes needs to call the second one.
However, the first deployment can't find the second one. When it tries to call it using the product-app host, it fails:
Exception: I/O error on GET request for "http://product-app:8080/product/123": product-app;
nested exception is UnknownHostException
Am I using the right host? So far I've tried (to no avail):
product
product-app.default.pod.cluster.local
product-app
Here's my YAML:
apiVersion: v1
kind: Service
metadata:
  name: composite-service
spec:
  type: NodePort
  selector:
    app: composite-app
  ports:
  - targetPort: 8080
    port: 8080
    nodePort: 30091
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: composite-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: composite-app
  template:
    metadata:
      labels:
        app: composite-app
    spec:
      containers:
      - name: composite-container
        image: 192.168.49.2:2376/composite-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
      - name: product-container
        image: 192.168.49.2:2376/product-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
You need to define a Service object for the product-deploy Deployment as well, so that the other pod can connect to it. The Service can be of type ClusterIP if it does not need to be exposed to the outside world.
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  type: ClusterIP
  selector:
    app: product-app
  ports:
  - targetPort: 8080
    port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
      - name: product-container
        image: 192.168.49.2:2376/product-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
You can connect to the other pod using the pod's IP without a Service, but that is not recommended since the pod's IP can change across pod updates.
With the Service in place, you can connect to the product-app pods from the composite-app using the hostname product-service.
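For example (a hypothetical check, assuming curl is available inside the composite-app image):

kubectl exec deploy/composite-deploy -- curl -s http://product-service:8080/product/123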

I'm having a hard time setting up Kafka on GKE and would like to know the best way to set it up

I was trying to use StatefulSets to deploy Zookeeper and Kafka in a cluster on GKE, but the communication between Kafka and Zookeeper fails with an error message in the logs. I'd like to know the easiest way to set up Kafka in Kubernetes.
I've tried the following configurations and I can see that Kafka fails to communicate with Zookeeper, but I am not sure why. I know that I may need a headless Service because the communication is handled by Kafka and Zookeeper themselves.
For Zookeeper
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  type: LoadBalancer
  selector:
    app: zookeeper
  ports:
  - port: 2181
    targetPort: client
    name: zk-port
  - port: 2888
    targetPort: leader
    name: zk-leader
  - port: 3888
    targetPort: election
    name: zk-election
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
  replicas: 3
  selector:
    matchLabels:
      app: zookeeper
  serviceName: zookeeper
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zk-pod
        image: zookeeper:latest
        imagePullPolicy: Always
        ports:
        - name: client
          containerPort: 2181
        - name: leader
          containerPort: 2888
        - name: election
          containerPort: 3888
        env:
        - name: ZOO_MY_ID
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: ZOO_TICK_TIME
          value: "2000"
        - name: ZOO_INIT_LIMIT
          value: "5"
        - name: ZOO_SYNC_LIMIT
          value: "2"
        - name: ZOO_SERVERS
          value: zookeeper:2888:3888
For Kafka
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka-svc
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: confluentinc/cp-kafka:latest
        ports:
        - containerPort: 9092
          name: client
        env:
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
        - name: KAFKA_ADVERTISED_LISTENERS
          value: kafka.default.svc.cluster.local:9092
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
spec:
  type: LoadBalancer
  selector:
    app: kafka
  ports:
  - port: 9092
    targetPort: client
    name: kfk-port
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 2
I'd like to be able to send messages to a topic and to read them back. I've been using kafkacat to test the connection.
This is one of the limitations specified in the official Kubernetes documentation about StatefulSets:
StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. You are responsible for creating this Service.
So, as you mentioned, you need a headless Service, and you can simply add a headless Service YAML similar to the one below on top of your configuration for both of your StatefulSets:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  ports:
  - port: 2181
    name: someport
  clusterIP: None
  selector:
    app: zookeeper
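A matching headless Service for the Kafka StatefulSet would follow the same pattern (a sketch, reusing the serviceName kafka-svc from your configuration):

apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
  labels:
    app: kafka
spec:
  ports:
  - port: 9092
    name: client
  clusterIP: None
  selector:
    app: kafka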
Hope it helps!
I have been following the official guide from Google Click to Deploy and it proved very helpful for me. Here is the link you can follow to set up a Kafka cluster on GKE. This GitHub repository is officially maintained by Google Cloud.
https://github.com/GoogleCloudPlatform/click-to-deploy/tree/master/k8s/kafka
Another simple approach is to deploy via the Google Cloud Console:
https://console.cloud.google.com/marketplace
Search for kafka cluster (with replication)
Click on Configure
Fill out all the necessary details and it will configure the Kafka cluster for you with all internal communication enabled. I have removed my cluster name for privacy.

Kubernetes MySQL connection timeout

I've set up a Kubernetes Deployment and Service for MySQL. I cannot access the MySQL service from any pod using its DNS name; it just times out. Any other port refuses the connection immediately, but the port in my service configuration times out after ~10 seconds.
I am able to resolve the MySQL Pod DNS.
I cannot ping the host.
Service.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    run: mysql-service
spec:
  ports:
  - port: 3306
    protocol: TCP
  - port: 3306
    protocol: UDP
  selector:
    run: mysql-service
Deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-service
  labels:
    app: mysql-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-service
  template:
    metadata:
      labels:
        app: mysql-service
    spec:
      containers:
      - name: 'mysql-service'
        image: mysql:5.5
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: some_password
        - name: MYSQL_DATABASE
          value: some_database
        ports:
        - containerPort: 3306
Your deployment (and more specifically its pod spec) says
labels:
  app: mysql-service
but your service says
selector:
  run: mysql-service
These don't match, so your service isn't attaching to the pod. You should also see this if you run kubectl describe service mysql-service: the "endpoints" list will be empty.
Change the service's selector to match the pod's labels (or vice versa) and this should be better.
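For example, changing the Service's selector to match the pod template's labels (a sketch of the relevant part only):

spec:
  ports:
  - port: 3306
    protocol: TCP
  selector:
    app: mysql-service   # was: run: mysql-service

After applying this, kubectl describe service mysql-service should list the MySQL pod's IP under "Endpoints".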