Hi, I am new to Kubernetes. I am using a minikube single-node cluster for local development and testing.
Host: Ubuntu 16.04 LTS.
Minikube: VirtualBox VM running the minikube cluster.
My requirement is to deploy Kafka and ZooKeeper on minikube and use them to produce and consume messages.
I followed this link and deployed it successfully on minikube; the details are below.
$ kubectl get services
NAME            CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
kafka-service   10.0.0.15    <pending>     9092:30244/TCP               46m
kubernetes      10.0.0.1     <none>        443/TCP                      53m
zoo1            10.0.0.43    <none>        2181/TCP,2888/TCP,3888/TCP   50m
zoo2            10.0.0.226   <none>        2181/TCP,2888/TCP,3888/TCP   50m
zoo3            10.0.0.6     <none>        2181/TCP,2888/TCP,3888/TCP   50m
$ kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
kafka-deployment-3583985961-f2301         1/1     Running   0          48m
zookeeper-deployment-1-1598963595-vgx1l   1/1     Running   0          52m
zookeeper-deployment-2-2038841231-tdsff   1/1     Running   0          52m
zookeeper-deployment-3-2478718867-5vjcj   1/1     Running   0          52m
$ kubectl describe service kafka-service
Name:              kafka-service
Namespace:         default
Labels:            app=kafka
Annotations:       <none>
Selector:          app=kafka
Type:              LoadBalancer
IP:                10.0.0.15
Port:              kafka-port  9092/TCP
NodePort:          kafka-port  30244/TCP
Endpoints:         172.17.0.7:9092
Session Affinity:  None
Events:            <none>
and I have set KAFKA_ADVERTISED_HOST_NAME to the minikube IP (192.168.99.100).
Now, to produce messages I am using:
$ cat textfile.log | kafkacat -b $(minikube ip):30244 -t mytopic
It does not publish the message and gives the output below:
% Auto-selecting Producer mode (use -P or -C to override)
% Delivery failed for message: Local: Message timed out
Can anyone help me with how to publish and consume messages?
I know that this is quite an old post. Were you able to resolve this and run Kafka + ZooKeeper within minikube? I was able to run a simple single-broker Kafka and ZooKeeper deployment successfully using minikube v0.17.1, and to produce and consume messages using the kafkacat producer and consumer respectively. I ran this successfully on both Ubuntu and Mac OS X. The deployment and service YAMLs are as below:
zookeeper-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: zookeeper
  name: zookeeper
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - image: wurstmeister/zookeeper
        imagePullPolicy: IfNotPresent
        name: zookeeper
        ports:
        - containerPort: 2181
zookeeper-service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-service
  name: zookeeper-service
spec:
  type: NodePort
  ports:
  - name: zookeeper-port
    port: 2181
    nodePort: 30181
    targetPort: 2181
  selector:
    app: zookeeper
kafka-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kafka
  name: kafka
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "192.168.99.100"
        - name: KAFKA_ADVERTISED_PORT
          value: "30092"
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: 192.168.99.100:30181
        - name: KAFKA_CREATE_TOPICS
          value: "test-topic:1:1"
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        name: kafka
        ports:
        - containerPort: 9092
kafka-service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka-service
  name: kafka-service
spec:
  type: NodePort
  ports:
  - name: kafka-port
    port: 9092
    nodePort: 30092
    targetPort: 9092
  selector:
    app: kafka
You can test your deployment by installing the kafkacat client and running the following commands in separate terminal windows:
echo "Am I receiving this message?" | kafkacat -P -b 192.168.99.100:30092 -t test-topic
kafkacat -C -b 192.168.99.100:30092 -t test-topic
% Reached end of topic test-topic [0] at offset 0
Am I receiving this message?
I was able to successfully run this on minikube versions v0.17.1 and v0.19.0. If you want to run this on minikube versions v0.21.1 and v0.23.0, please refer to my reply to the post here: Kafka inaccessible once inside Kubernetes/Minikube
Thanks.
I followed the steps shown below.
$ git clone https://github.com/d1egoaz/minikube-kafka-cluster
$ cd minikube-kafka-cluster
$ kubectl apply -f 00-namespace/
$ kubectl apply -f 01-zookeeper/
$ kubectl apply -f 02-kafka/
$ kubectl apply -f 03-yahoo-kafka-manager/
$ kubectl get svc -n kafka-ca1   (note the NodePort of the kafka-manager service, e.g. 31445)
$ minikube ip
Open http://<minikube-ip>:<port> in a browser to reach the Kafka Manager UI (a scripted version is sketched below).
source https://technology.amis.nl/2019/03/24/running-apache-kafka-on-minikube/
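The last two steps can be combined into a small shell sketch (assuming the Kafka Manager service is named kafka-manager; adjust it to whatever kubectl get svc -n kafka-ca1 actually shows):
# Look up the NodePort of the Kafka Manager service (service name is an assumption)
$ PORT=$(kubectl get svc -n kafka-ca1 kafka-manager -o jsonpath='{.spec.ports[0].nodePort}')
# Open the UI at http://<minikube-ip>:<nodeport>
$ xdg-open "http://$(minikube ip):${PORT}"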
You used a service with type = LoadBalancer, which is meant for cloud providers (you can see the service waiting for an external IP address, stuck in the pending state, which will never be assigned on minikube). In your case you should try type NodePort instead.
Going off the previous answer and your comment, here is very basic sample code for a Kafka service with type = NodePort.
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    name: kafka
spec:
  ports:
  - port: 9092
    name: kafka-port
    protocol: TCP
  selector:
    app: kafka
  type: NodePort
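Since this sample does not pin a nodePort, Kubernetes will assign one automatically; a quick way to discover the resulting address to pass to kafkacat's -b flag is:
$ minikube service kafka-service --url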
I have a list of pods like so:
❯ kubectl get pods -l app=test-pod
NAME                               READY   STATUS    RESTARTS   AGE
test-deployment-674667c867-jhvg4   1/1     Running   0          14m
test-deployment-674667c867-ssx6h   1/1     Running   0          14m
test-deployment-674667c867-t4crn   1/1     Running   0          14m
I have a service
kubectl get services
NAMESPACE   NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
default     test-service   ClusterIP   10.100.4.138   <none>        4000/TCP   15m
I perform a dns query:
❯ kubectl exec -ti test-deployment-674667c867-jhvg4 -- /bin/bash
root@test-deployment-674667c867-jhvg4:/# busybox nslookup test-service
Server:    10.100.0.10
Address:   10.100.0.10:53

Name:      test-service.default.svc.cluster.local
Address:   10.100.4.138
My config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: python-http-server
        image: python:2.7
        command: ["/bin/bash"]
        args: ["-c", "echo \" Hello from $(hostname)\" > index.html; python -m SimpleHTTPServer 80"]
        ports:
        - name: http
          containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  selector:
    app: test-pod
  ports:
  - protocol: TCP
    port: 4000
    targetPort: http
How can I instead get a list of all the pods' IP addresses via a DNS query?
Ideally I would like to perform an nslookup of a name and get back a list of all the pod IPs.
You have to use a headless service with selectors; it returns the IP addresses of the pods.
See here:
https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
The key requirement is that .spec.clusterIP must be set to "None".
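As a minimal sketch, a headless twin of the service above could look like this (the name test-service-headless is illustrative; the selector and ports are copied from your config):
kind: Service
apiVersion: v1
metadata:
  name: test-service-headless
spec:
  clusterIP: None   # headless: DNS returns the pod IPs instead of a single cluster IP
  selector:
    app: test-pod
  ports:
  - protocol: TCP
    port: 4000
    targetPort: http
A busybox nslookup test-service-headless from one of the pods should then list all three pod IPs.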
I have a Mosquitto broker on my Kubernetes cluster. I can connect to the Mosquitto broker from the private network, and it works well.
But when we use a public domain (we use Sophos UTM 9), the client can't connect to the Mosquitto broker.
I'm new to Kubernetes. This is the mosquitto.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mosquitto
spec:
  selector:
    matchLabels:
      app: mosquitto
  replicas: 1
  template:
    metadata:
      labels:
        app: mosquitto
    spec:
      containers:
      - name: mosquitto
        image: eclipse-mosquitto:v1.16.10
        resources:
          limits:
            cpu: "1"
            memory: 2Gi
          requests:
            cpu: "1"
            memory: 2Gi
        imagePullPolicy: Always
        ports:
        - containerPort: 1883
---
apiVersion: v1
kind: Service
metadata:
  name: mosquitto
spec:
  externalIPs:
  - xxx.xxx.xxx.xxx
  type: ClusterIP
  ports:
  - name: mqtt
    port: 1883
    targetPort: 1883
    protocol: TCP
  selector:
    app: mosquitto
I use Node.js to connect via the public domain. The Node.js code is:
var mqtt = require('mqtt');
var client = mqtt.connect('mqtt://mydomain.com:1883');
client.on('connect', function () {
  client.subscribe(topic)
  console.log("Subscribed topic " + topic);
})
I wonder whether the problem is Kubernetes or Sophos UTM 9. Am I missing anything?
What do I have to do for Mosquitto on Kubernetes to work with the public domain?
I am most grateful.
After testing your YAML file, I've concluded that your configuration is almost correct. Almost, because:
The image you are using, eclipse-mosquitto:v1.16.10, does not exist. You can check all available tags here.
So the most probable issue is that your pod is not running. You can check by running the command below and looking at the STATUS column.
$ kubectl get pods -l=app=mosquitto
NAME                        READY   STATUS    RESTARTS   AGE
mosquitto-c9dc57d59-98l8r   1/1     Running   0          5m53s
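If the tag really does not exist, the STATUS column will typically show ErrImagePull or ImagePullBackOff instead of Running, and the underlying pull error appears in the pod events (the pod name below is just illustrative):
$ kubectl describe pod mosquitto-c9dc57d59-98l8r | grep -A 5 Events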
Here is the YAML that worked for me. Note: I've removed the externalIPs and the resource limits from the service and deployment for test purposes, and replaced the image with eclipse-mosquitto:1.6.10:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mosquitto
spec:
  selector:
    matchLabels:
      app: mosquitto
  replicas: 1
  template:
    metadata:
      labels:
        app: mosquitto
    spec:
      containers:
      - name: mosquitto
        image: eclipse-mosquitto:1.6.10
        imagePullPolicy: Always
        ports:
        - containerPort: 1883
---
apiVersion: v1
kind: Service
metadata:
  name: mosquitto
spec:
  type: ClusterIP
  ports:
  - name: mqtt
    port: 1883
    targetPort: 1883
    protocol: TCP
  selector:
    app: mosquitto
After deploying, I tested using a dnsutils container (you can find the spec here):
kubectl exec dnsutils -- sh -c 'apk update && apk add mosquitto-clients'
kubectl exec dnsutils -- mosquitto_pub -h mosquitto -t 'test/topic' -m 'upvoteIt'
Check the logs in the mosquitto pod:
kubectl logs mosquitto-xxxxx
1597829622: New client connected from 172.17.0.4 as mosqpub|88-dnsutils (p1, c1, k60).
1597829622: Client mosqpub|88-dnsutils disconnected.
If you want to see the message arrive, open a second terminal before the test and run this command to watch the message being received by the mosquitto server:
$ kubectl exec mosquitto-xxxxx -- mosquitto_sub -v -t 'test/topic'
test/topic upvoteIt
Where mosquitto-xxxxx is the name of your pod.
I applied a deployment with 2 HTTP pods, and a service for them, and it worked fine: I could curl the service IP or the service name, and the service round-robined between the pods correctly.
But after I delete one pod, k8s creates a new one to replace it. When I curl the service, the new pod doesn't answer; only the remaining old one is OK.
The question is: why does k8s not add the new pod to the service, so that I can curl the service IP or service name as before?
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvn-dp
spec:
  selector:
    matchLabels:
      run: mvn-demo
  replicas: 2
  template:
    metadata:
      labels:
        run: mvn-demo
    spec:
      containers:
      - name: mvndemo
        image: 192.168.0.193:59999/mvndemo
        ports:
        - containerPort: 8080
Service:
apiVersion: v1
kind: Service
metadata:
  name: mvn-svc
  labels:
    run: mvn-demo
spec:
  ports:
  - port: 8080
    protocol: TCP
  #type: NodePort
  selector:
    run: mvn-demo
kdes svc mvn-svc
Name:              mvn-svc
Namespace:         default
Labels:            run=mvn-demo
Annotations:       <none>
Selector:          run=mvn-demo
Type:              ClusterIP
IP:                10.97.21.218
Port:              <unset>  8080/TCP
TargetPort:        8080/TCP
Endpoints:         100.101.153.220:8080,100.79.233.220:8080
Session Affinity:  None
Events:            <none>
kpod
NAME                     READY   STATUS    RESTARTS   AGE   IP                NODE      NOMINATED NODE   READINESS GATES
mvn-dp-8f59c694f-2mwq8   1/1     Running   0          81m   100.79.233.220    worker2   <none>           <none>
mvn-dp-8f59c694f-xmt6m   1/1     Running   0          87m   100.101.153.220   worker3   <none>           <none>
[root@master1 k8s-yaml]# curl http://10.97.21.218:8080
Hello Docker World, from: mvn-dp-8f59c694f-xmt6m
[root@master1 k8s-yaml]# curl http://10.97.21.218:8080
curl: (7) Failed connect to 10.97.21.218:8080; connection timed out
As you can see, the AGE of mvn-dp-8f59c694f-2mwq8 is newer than the other one's, because I deleted one pod and k8s replaced it with this new one.
Set a label on your deployment metadata:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvn-dp
  labels:
    run: mvn-demo
and it will work.
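To verify, you can watch the service's endpoints while deleting a pod; the replacement pod's IP should show up again within a few seconds (mvn-svc is the service name from the question):
$ kubectl get endpoints mvn-svc -w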
I'm doing a deployment on GKE, and I find that when I try to access the page I get the message
ERR_CONNECTION_REFUSED
I have defined a load balancer service for the deployment, and the configuration is as follows.
This is the .yaml for the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-onboarding
spec:
  selector:
    matchLabels:
      app: bonsai-onboarding
  replicas: 2
  template:
    metadata:
      labels:
        app: bonsai-onboarding
    spec:
      containers:
      - name: bonsai-onboarding
        image: "eu.gcr.io/diaphanum/onboarding-iocash-master_web:v1"
        ports:
        - containerPort: 3000
This is the service .yaml file:
apiVersion: v1
kind: Service
metadata:
  name: lb-onboarding
spec:
  type: LoadBalancer
  selector:
    app: bonsai-onboarding
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
This is working fine, and all is green in GKE :)
kubectl get pods,svc
NAME                                     READY   STATUS    RESTARTS   AGE
pod/bonsai-onboarding-8586b9b699-flhbn   1/1     Running   0          3h23m
pod/bonsai-onboarding-8586b9b699-p9sn9   1/1     Running   0          3h23m

NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/kubernetes      ClusterIP      XX.xx.yy.YY   <none>        443/TCP          29d
service/lb-onboarding   LoadBalancer   XX.xx.yy.YY   XX.xx.yy.YY   3000:32618/TCP   3h
Then, when I try to connect, the error is ERR_CONNECTION_REFUSED.
I think it is network-related, because I ran the following tests from my local machine:
Ping [load balancer IP] ---> OK
Telnet [load balancer IP] 3000 ---> OK
From Cloud Shell I forwarded port 3000 to 8080, and in another Cloud Shell ran curl http://localhost:8080, and it worked fine.
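(For reference, that port-forward test could be reproduced with something like the following, using one of the pod names from above:)
$ kubectl port-forward pod/bonsai-onboarding-8586b9b699-flhbn 8080:3000
$ curl http://localhost:8080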
Any idea about the problem?
Thanks in advance
I've changed your deployment a little to check it on my cluster, because your image was unreachable:
deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-onboarding
spec:
  selector:
    matchLabels:
      app: bonsai-onboarding
  replicas: 2
  template:
    metadata:
      labels:
        app: bonsai-onboarding
    spec:
      containers:
      - name: bonsai-onboarding
        image: nginx:latest
        ports:
        - containerPort: 80
service:
apiVersion: v1
kind: Service
metadata:
  name: lb-onboarding
spec:
  type: LoadBalancer
  selector:
    app: bonsai-onboarding
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 80
and it works out of the box:
kubectl get pods,svc
NAME                                     READY   STATUS    RESTARTS   AGE
pod/bonsai-onboarding-7bdf584499-j2nv7   1/1     Running   0          6m58s
pod/bonsai-onboarding-7bdf584499-vc7kh   1/1     Running   0          6m58s

NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
service/kubernetes      ClusterIP      10.XXX.XXX.1     <none>           443/TCP          8m35s
service/lb-onboarding   LoadBalancer   10.XXX.XXX.230   35.XXX.XXX.235   3000:31637/TCP   67s
and I'm able to reach 35.XXX.XXX.235:3000 from any IP:
Welcome to nginx!
...
Thank you for using nginx.
You can check whether your app is reachable using this command:
nmap -Pn $(kubectl get svc lb-onboarding -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
Maybe the cause of your ERR_CONNECTION_REFUSED lies in the configuration of your image? I found no problem with your deployment and load balancer configuration.
Ensure containerPort is defined in the spec of the deployment/statefulset/pod and that the application is actually listening on that port. Also ensure your firewall rules are not blocking the NodePort.
gcloud compute firewall-rules create myservice --allow tcp:3000
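Note that the node-level port is the NodePort shown by kubectl get svc (32618 in the output above), not necessarily the service port; if you need it for a firewall rule, you can read it with:
$ kubectl get svc lb-onboarding -o jsonpath='{.spec.ports[0].nodePort}'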
I'm trying to deploy a multi-node Cassandra cluster in minikube. I followed the tutorial Example: Deploying Cassandra with Stateful Sets and made some modifications. The cluster is up and running, and with kubectl I can connect via cqlsh, but I want to connect from outside the cluster. I exposed the service via NodePort and tested the connection with DataStax Studio (192.168.99.100:32554), but with no success. Later I also want to connect from Spring Boot; I suppose I will have to use the svc name or the node IP.
All host(s) tried for query failed (tried: /192.168.99.100:32554 (com.datastax.driver.core.exceptions.TransportException: [/192.168.99.100:32554] Cannot connect))
[cassandra-0] /etc/cassandra/cassandra.yaml
rpc_port: 9160
broadcast_rpc_address: 172.17.0.5
listen_address: 172.17.0.5
# listen_interface: eth0
start_rpc: true
rpc_address: 0.0.0.0
# rpc_interface: eth1
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "cassandra-0.cassandra.default.svc.cluster.local"
Here is the minikube output for the svc and pods:
$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
cassandra    NodePort    10.102.236.158   <none>        9042:32554/TCP   20m
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          22h
$ kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
cassandra-0   1/1     Running   0          20m   172.17.0.4   minikube   <none>           <none>
cassandra-1   1/1     Running   0          19m   172.17.0.5   minikube   <none>           <none>
cassandra-2   1/1     Running   1          19m   172.17.0.6   minikube   <none>           <none>
$ kubectl describe service cassandra
Name:                     cassandra
Namespace:                default
Labels:                   app=cassandra
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"cassandra"},"name":"cassandra","namespace":"default"},"s...
Selector:                 app=cassandra
Type:                     NodePort
IP:                       10.102.236.158
Port:                     <unset>  9042/TCP
TargetPort:               9042/TCP
NodePort:                 <unset>  32554/TCP
Endpoints:                172.17.0.4:9042,172.17.0.5:9042,172.17.0.6:9042
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
$ kubectl exec -it cassandra-0 -- nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load        Tokens  Owns (effective)  Host ID                               Rack
UN  172.17.0.5  104.72 KiB  256     68.1%             680bfcb9-b374-40a6-ba1d-4bf7ee80a57b  rack1
UN  172.17.0.4  69.9 KiB    256     66.5%             022009f8-112c-46c9-844b-ef062bac35aa  rack1
UN  172.17.0.6  125.31 KiB  256     65.4%             48ae76fe-b37c-45c7-84f9-3e6207da4818  rack1
$ kubectl exec -it cassandra-0 -- cqlsh
Connected to K8Demo at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.4 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh>
cassandra-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  type: NodePort
  ports:
  - port: 9042
  selector:
    app: cassandra
cassandra-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
      - name: cassandra
        image: cassandra:3.11
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        resources:
          limits:
            cpu: "500m"
            memory: 1Gi
          requests:
            cpu: "500m"
            memory: 1Gi
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - nodetool drain
        env:
        - name: MAX_HEAP_SIZE
          value: 512M
        - name: HEAP_NEWSIZE
          value: 100M
        - name: CASSANDRA_SEEDS
          value: "cassandra-0.cassandra.default.svc.cluster.local"
        - name: CASSANDRA_CLUSTER_NAME
          value: "K8Demo"
        - name: CASSANDRA_DC
          value: "DC1-K8Demo"
        - name: CASSANDRA_RACK
          value: "Rack1-K8Demo"
        - name: CASSANDRA_START_RPC
          value: "true"
        - name: CASSANDRA_RPC_ADDRESS
          value: "0.0.0.0"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        # These volume mounts are persistent. They are like inline claims,
        # but not exactly because the names need to match exactly one of
        # the stateful pod volumes.
        volumeMounts:
        - name: cassandra-data
          mountPath: /var/lib/cassandra
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  # Do not use these in production until ssd GCEPersistentDisk or other ssd pd
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast
      resources:
        requests:
          storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: k8s.io/minikube-hostpath
parameters:
  type: pd-standard
Just for anyone with this problem:
After reading the DataStax docs, I realized that DataStax Studio is meant for use with DataStax Enterprise. For local development and the community edition of Cassandra, I'm using DataStax DevCenter instead, and it works.
For spring boot (Cassandra cluster running on minikube):
spring.data.cassandra.keyspace-name=mykeyspacename
spring.data.cassandra.contact-points=cassandra-0.cassandra.default.svc.cluster.local
spring.data.cassandra.port=9042
spring.data.cassandra.schema-action=create_if_not_exists
For DataStax DevCenter (Cassandra cluster running on minikube):
Contact Host = 192.168.99.100
Native Protocol Port = 30042
Updated cassandra-service
# ------------------- Cassandra Service ------------------- #
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  type: NodePort
  ports:
  - port: 9042
    nodePort: 30042
  selector:
    app: cassandra
If you just want to connect via cqlsh, what you need is the following command:
kubectl exec -it cassandra-0 -- cqlsh
On the other hand, if you want to connect from an external point, the following command can be used to get the Cassandra URL (I use DBeaver to connect to the Cassandra cluster):
minikube service cassandra --url
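With the NodePort pinned to 30042 as in the updated service, an external cqlsh session might then look like this (assuming cqlsh is installed on the host):
$ cqlsh $(minikube ip) 30042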