Connecting to a Kafka Kubernetes cluster from WITHIN the cluster - kubernetes

I am new to Kubernetes deployment.
I can connect to the Kafka cluster from outside the cluster, but I am not able to do the same from within it.
Here are my configurations:
kafka-broker:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: crd-instrument
  name: kafkaservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafkaservice
  template:
    metadata:
      labels:
        app: kafkaservice
    spec:
      hostname: kafkaservice
      containers:
      - name: kafkaservice
        imagePullPolicy: IfNotPresent
        image: wurstmeister/kafka
        env:
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: "INTERNAL_PLAINTEXT:PLAINTEXT,EXTERNAL_PLAINTEXT:PLAINTEXT"
        - name: KAFKA_LISTENERS
          value: "INTERNAL_PLAINTEXT://0.0.0.0:9092,EXTERNAL_PLAINTEXT://0.0.0.0:9093"
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "INTERNAL_PLAINTEXT://kafkaservice:9092,EXTERNAL_PLAINTEXT://127.0.0.1:30035"
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: "INTERNAL_PLAINTEXT"
        - name: KAFKA_CREATE_TOPICS
          value: "crd_instrument_req:1:1,crd_instrument_resp:1:1,crd_instrument_resol:1:1"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: "zookeeperservice:2181"
        ports:
        - name: port9092
          containerPort: 9092
        - name: port9093
          containerPort: 9093
kafka-service:
---
apiVersion: v1
kind: Service
metadata:
  namespace: crd-instrument
  name: kafkaservice
  labels:
    app: kafkaservice
spec:
  selector:
    app: kafkaservice
  ports:
  - name: port9092
    port: 9092
    targetPort: 9092
    protocol: TCP
kafka-service-external:
---
apiVersion: v1
kind: Service
metadata:
  namespace: crd-instrument
  name: kafkaservice-external
  labels:
    app: kafkaservice-external
spec:
  selector:
    app: kafkaservice
  ports:
  - name: port9093
    port: 9093
    protocol: TCP
    nodePort: 30035
  type: NodePort
This is the client yml thats started in the same namespace:
---
apiVersion: v1
kind: Pod
metadata:
  name: crd-instrument-client
  labels:
    app: crd-instrument-client
  namespace: crd-instrument
spec:
  containers:
  - name: crd-instrument-client
    image: crd_instrument_client:1.0
    imagePullPolicy: Never
The code inside the client is trying to connect with BOOTSTRAP_SERVERS set to "kafkaservice:9092", but it is not connecting. Where am I going wrong? Any help pointing it out would be appreciated.
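For anyone debugging the same setup, it can help to first confirm from inside the client pod that the Service name resolves and that the internal listener port is reachable. A rough sketch (it assumes the client image ships a shell plus nslookup/nc; adjust to whatever tools your image actually has):
kubectl -n crd-instrument exec -it crd-instrument-client -- nslookup kafkaservice
kubectl -n crd-instrument exec -it crd-instrument-client -- nc -vz kafkaservice 9092
If name resolution fails, the problem is on the Service/DNS side; if only the port check fails, that points at the broker's listener configuration.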

Related

Restart pod when another service is recreated

I have a Flask pod that connects to a MongoDB service through the environment variable SERVICE_HOST (DNS discovery didn't work for some reason). When I change something in the MongoDB service and re-apply it, the Flask pod can no longer connect to the service, since the service host changes, and I have to recreate the pod manually every time. Is there a way to automate this, similar to the docker-compose depends_on directive?
flask yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy23-api-deployment
  labels:
    app: proxy23-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: proxy23-api
  template:
    metadata:
      labels:
        app: proxy23-api
    spec:
      containers:
      - name: proxy23-api
        image: my_image
        ports:
        - containerPort: 5000
        env:
        - name: DB_URI
          value: mongodb://$(PROXY23_DB_SERVICE_SERVICE_HOST):27017
        - name: DB_NAME
          value: db
        - name: PORT
          value: "5000"
      imagePullSecrets:
      - name: registry-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: proxy23-api-service
spec:
  selector:
    app: proxy23-api
  type: NodePort
  ports:
  - port: 9002
    targetPort: 5000
    nodePort: 30002
mongodb yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy23-db-deployment
  labels:
    app: proxy23-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proxy23-db
  template:
    metadata:
      labels:
        app: proxy23-db
    spec:
      containers:
      - name: proxy23-db
        image: mongo:bionic
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: proxy23-storage
          mountPath: /data/db
      volumes:
      - name: proxy23-storage
        persistentVolumeClaim:
          claimName: proxy23-db-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: proxy23-db-service
spec:
  selector:
    app: proxy23-db
  type: NodePort
  ports:
  - port: 27017
    targetPort: 27017
    nodePort: 30003
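One way to avoid the stale PROXY23_DB_SERVICE_SERVICE_HOST value (a sketch, not part of the original question) is to point DB_URI at the Service's DNS name, which stays stable across re-applies as long as the Service keeps its name. This assumes cluster DNS is actually working, which the question notes was not the case and would be worth revisiting first:
env:
- name: DB_URI
  value: mongodb://proxy23-db-service:27017   # resolves via cluster DNS instead of the injected env var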

Container startup issue on k8s for Kafka & ZooKeeper

I am trying to create a Spring Boot producer and consumer with ZooKeeper and Kafka on Kubernetes, but I am not able to get the deployment working; it keeps failing. I am not sure what is configured wrong here, because the same thing works for me in docker-compose.
I used the file below to create the service and deployment on my local docker-desktop:
kubectl apply -f $(pwd)/kubernates/sample.yml
The error I get at deployment time is added at the end.
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zoo1
        image: digitalwonderland/zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
  labels:
    name: kafka
spec:
  ports:
  - port: 9092
    name: kafka-port
    protocol: TCP
  selector:
    app: kafka
    id: "0"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
        id: "0"
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
        - name: KAFKA_BROKER_ID
          value: "0"
        - name: KAFKA_CREATE_TOPICS
          value: sample.topic:1:1
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-cat
spec:
  selector:
    matchLabels:
      app: kafka-cat
  template:
    metadata:
      labels:
        app: kafka-cat
    spec:
      containers:
      - name: kafka-cat
        image: confluentinc/cp-kafkacat
        command: ["/bin/sh"]
        args: ["-c", "trap : TERM INT; sleep infinity & wait"]
Exception in the container:
[2020-08-03 18:47:49,724] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
org.apache.kafka.common.config.ConfigException: Invalid value tcp://10.103.92.112:9092 for configuration port: Not a number of type INT
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:726)
at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:474)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:467)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:142)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1235)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1238)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:1218)
at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:34)
at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:29)
at kafka.Kafka$.main(Kafka.scala:68)
at kafka.Kafka.main(Kafka.scala)
Finally, I was able to solve this by using a different name for the Kafka service:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  selector:
    app: zookeeper
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zookeeper
        image: digitalwonderland/zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service0
  labels:
    name: kafka
spec:
  ports:
  - port: 9092
    name: kafka-port
    protocol: TCP
  selector:
    app: kafka
    id: "0"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker0
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
        id: "0"
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          valueFrom:
            configMapKeyRef:
              name: application-conf
              key: kafka_ad_port
        - name: KAFKA_ZOOKEEPER_CONNECT
          valueFrom:
            configMapKeyRef:
              name: application-conf
              key: zk_url
        - name: KAFKA_CREATE_TOPICS
          valueFrom:
            configMapKeyRef:
              name: application-conf
              key: kafka_topic
        - name: KAFKA_BROKER_ID
          value: "0"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka-service0
Rename the Kubernetes Service from kafka to something else, for example kafka-broker. A Service named kafka makes Kubernetes inject a KAFKA_PORT=tcp://<cluster-ip>:9092 environment variable into the pod, which the wurstmeister image picks up as the broker's port setting and which produces exactly the "Not a number of type INT" error above. After renaming, update KAFKA_ADVERTISED_HOST_NAME or KAFKA_ADVERTISED_LISTENERS, or both:
KAFKA_ADVERTISED_HOST_NAME: kafka-broker
KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-broker:9092
apiVersion: v1
kind: ConfigMap
metadata:
  name: application-conf # name of ConfigMap, referenced in other files
  namespace: default
data:
  host: "mysql" # host address of the MySQL server, using the Service DNS name
  name: "espark-mysql" # name of the database for the application
  port: "3306"
  zk_url: "zookeeper:2181"
  kafka_url: "kafka-service0:9092"
  kafka_topic: "espark-topic:1:2"
  kafka_ad_port: "9092"
Just add the hostname or IP for KAFKA_ADVERTISED_HOST_NAME (for example, localhost:9092) in your YAML file for Kafka.
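As a side note: if renaming the Service is not an option, Kubernetes can also stop injecting the docker-link-style *_PORT / *_SERVICE_HOST variables that collide with the image's KAFKA_* parsing. A minimal sketch, with the surrounding Deployment abbreviated (enableServiceLinks is a pod-spec field available since Kubernetes 1.13):
spec:
  template:
    spec:
      enableServiceLinks: false # disables the auto-injected Service env vars for this pod
      containers:
      - name: kafka
        image: wurstmeister/kafka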

How to configure axon server in pod for Kubernetes

I have 3 services: axon, command, and query. I am trying to run them via Kubernetes. With docker-compose and Swarm it works perfectly, but somehow it is not working via K8s.
I am getting the following error:
Connecting to AxonServer node axonserver:8124 failed: UNAVAILABLE: Unable to resolve host axonserver
Below are my config files.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: axonserver
  labels:
    app: axonserver
spec:
  serviceName: axonserver
  replicas: 1
  selector:
    matchLabels:
      app: axonserver
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
      - name: axonserver
        image: axoniq/axonserver
        env:
        - name: AXONSERVER_HOSTNAME
          value: axonserver
        imagePullPolicy: Always
        ports:
        - name: grpc
          containerPort: 8124
          protocol: TCP
        - name: gui
          containerPort: 8024
          protocol: TCP
Here is the command-service YAML, which contains the Service as well:
apiVersion:
kind: Pod
metadata:
  name: command-service
  labels:
    name: peanuts
    app: axonserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
      - image: celcin/command-svc
        name: command-service
        ports:
        - containerPort: 8080
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: command-service
  labels:
    name: peanuts
    app: axonserver
spec:
  ports:
  - name: "8081"
    port: 8081
    targetPort: 8080
  selector:
    labels:
      app: axonserver
Here is the last one, the query-service YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: query-service
  labels:
    name: peanuts
    app: axonserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
      - image: celcin/query-svc
        name: query-service
        ports:
        - containerPort: 8080
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: query-service
  labels:
    name: peanuts
    app: axonserver
spec:
  ports:
  - name: "8082"
    port: 8082
    targetPort: 8080
  selector:
    labels:
      app: axonserver
Your YAML is somewhat mixed up. If I understood you correctly, you have three services:
command-service
query-service
axonserver
Your setup should be configured in a way that command-service and query-service expose their ports, but both use ports exposed by axonserver. Here is my attempt for your YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: axonserver
  labels:
    app: axonserver
spec:
  serviceName: axonserver
  replicas: 1
  selector:
    matchLabels:
      app: axonserver
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
      - name: axonserver
        image: axoniq/axonserver
        imagePullPolicy: Always
        ports:
        - name: grpc
          containerPort: 8124
          protocol: TCP
        - name: gui
          containerPort: 8024
          protocol: TCP
The ports you defined in:
ports:
- name: command-srv
  containerPort: 8081
  protocol: TCP
- name: query-srv
  containerPort: 8082
  protocol: TCP
are not ports of Axon Server, but of your command-service and query-service and should be exposed in those containers.
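For illustration, a minimal sketch of what that could look like on the command-service container (the image name and port 8080 are taken from the question; the query-service would be analogous):
containers:
- name: command-service
  image: celcin/command-svc
  ports:
  - name: command-srv
    containerPort: 8080 # the port the Spring Boot app itself listens on
    protocol: TCP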
Kind regards,
Simon

DNS in Kubernetes deployment not working as expected

I'm well versed in Docker, but must be doing something wrong here with K8. I'm running skaffold with minikube and trying to get DNS between containers working. Here's my deployment:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-api
  labels:
    app: my-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      name: my-api
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api-postgres
        image: postgres:11.2-alpine
        env:
        - name: POSTGRES_USER
          value: "my-api"
        - name: POSTGRES_DB
          value: "my-api"
        - name: POSTGRES_PASSWORD
          value: "my-pass"
        ports:
        - containerPort: 5432
      - name: my-api-redis
        image: redis:5.0.4-alpine
        command: ["redis-server"]
        args: ["--appendonly", "yes"]
        ports:
        - containerPort: 6379
      - name: my-api-node
        image: my-api-node
        command: ["npm"]
        args: ["run", "start-docker-dev"]
        ports:
        - containerPort: 3000
However, in this scenario my-api-node can't contact my-api-postgres via the DNS hostname my-api-postgres. Any idea what I'm doing wrong?
You have defined all 3 containers as part of the same pod. Pods have a common network namespace so in your current setup (which is not correct, more on that in a second), you could talk to the other containers using localhost:<port>.
The 'correct' way of doing this would be to create a deployment for each application, and front those deployments with services.
Your example would roughly become (untested):
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-api-node
  namespace: my-api
  labels:
    app: my-api-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api-node
  template:
    metadata:
      name: my-api-node
      labels:
        app: my-api-node
    spec:
      containers:
      - name: my-api-node
        image: my-api-node
        command: ["npm"]
        args: ["run", "start-docker-dev"]
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  namespace: my-api
  name: my-api-node
spec:
  selector:
    app: my-api-node
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-api-redis
  namespace: my-api
  labels:
    app: my-api-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api-redis
  template:
    metadata:
      name: my-api-redis
      labels:
        app: my-api-redis
    spec:
      containers:
      - name: my-api-redis
        image: redis:5.0.4-alpine
        command: ["redis-server"]
        args: ["--appendonly", "yes"]
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  namespace: my-api
  name: my-api-redis
spec:
  selector:
    app: my-api-redis
  ports:
  - protocol: TCP
    port: 6379
    targetPort: 6379
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-api-postgres
  namespace: my-api
  labels:
    app: my-api-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api-postgres
  template:
    metadata:
      name: my-api-postgres
      labels:
        app: my-api-postgres
    spec:
      containers:
      - name: my-api-postgres
        image: postgres:11.2-alpine
        env:
        - name: POSTGRES_USER
          value: "my-api"
        - name: POSTGRES_DB
          value: "my-api"
        - name: POSTGRES_PASSWORD
          value: "my-pass"
        ports:
        - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  namespace: my-api
  name: my-api-postgres
spec:
  selector:
    app: my-api-postgres
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
DNS records get registered for services so you are connecting to those and being forwarded to the pods behind it (simplified). If you need to get to your node app from the outside world, that's a whole additional deal, and you should look at LoadBalancer type services, or Ingress.
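For example, once the Deployments and Services above are applied, the node app can reach Postgres by the Service name; the env var name below is only illustrative, not something from the question:
env:
- name: DB_HOST
  value: my-api-postgres # short name resolves within the my-api namespace
  # fully qualified form: my-api-postgres.my-api.svc.cluster.local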
As an addition to johnharris85's answer about DNS: you should indeed separate your apps in this scenario.
Multi-container pods are usually reserved for specific use cases, for example sidecar containers that help the main container with particular tasks, or proxies, bridges and adapters that provide connectivity to some specific destination.
In your case you can easily separate them. Right now you have a Deployment with one pod containing three containers, which communicate with each other over localhost and not via DNS names, as already mentioned.
After that, I recommend reading about DNS inside Kubernetes and how communication works once Services step into the game.
For pods specifically, you can read more here.

communication between polipo and tor kubernetes deployment

Where can I add socksParentProxy in a Kubernetes deployment file so that Polipo communicates with Tor? I have already created the tor service (tor:9150) and the tor deployment. Here is my YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: polipo-deployment
  labels:
    app: myauto
spec:
  selector:
    matchLabels:
      name: polipo-pod
      app: myauto
  template:
    metadata:
      name: polipo-deployment
      labels:
        name: polipo-pod
        app: myauto
    spec:
      containers:
      - env:
        - name: socksParentProxy
          value: tor:9150
        name: polipo
        image: 'clue/polipo'
        ports:
        - containerPort: 8123
  replicas: 1
As in the documentation, you should use args:
containers:
- name: polipo
  image: 'clue/polipo'
  args: ["socksParentProxy=tor:9150"]
  ports:
  - containerPort: 8123
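To double-check that the tor Service name actually resolves inside the cluster before wiring it into Polipo, one option (a sketch, assuming default cluster DNS and that a throwaway pod is acceptable) is:
kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup tor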