Exposing cassandra cluster on minikube to access externally - kubernetes

I'm trying to deploy a multi-node Cassandra cluster on minikube. I followed this tutorial, Example: Deploying Cassandra with Stateful Sets, and made some modifications. The cluster is up and running, and with kubectl I can connect via cqlsh, but I want to connect externally. I exposed the service via NodePort and tested the connection with DataStax Studio (192.168.99.100:32554), but with no success. Later I also want to connect from Spring Boot; I suppose I have to use the service name or the node IP.
All host(s) tried for query failed (tried: /192.168.99.100:32554 (com.datastax.driver.core.exceptions.TransportException: [/192.168.99.100:32554] Cannot connect))
[cassandra-0] /etc/cassandra/cassandra.yaml
rpc_port: 9160
broadcast_rpc_address: 172.17.0.5
listen_address: 172.17.0.5
# listen_interface: eth0
start_rpc: true
rpc_address: 0.0.0.0
# rpc_interface: eth1
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: "cassandra-0.cassandra.default.svc.cluster.local"
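Note that cqlsh and the DataStax drivers connect over the native (CQL) protocol on port 9042, not the legacy Thrift `rpc_port` 9160 shown above, so the `rpc_*` settings are not what the NodePort on 9042 maps to. The relevant cassandra.yaml keys are the native-transport ones; a minimal sketch (standard cassandra.yaml keys, values assumed to match the defaults in the cassandra:3.11 image):

```yaml
# cassandra.yaml fragment: the native (CQL) transport is what cqlsh and the
# DataStax drivers use; rpc_port 9160 is the legacy Thrift interface.
start_native_transport: true
native_transport_port: 9042
```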
Here is the minikube output for the svc and pods:
$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cassandra NodePort 10.102.236.158 <none> 9042:32554/TCP 20m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cassandra-0 1/1 Running 0 20m 172.17.0.4 minikube <none> <none>
cassandra-1 1/1 Running 0 19m 172.17.0.5 minikube <none> <none>
cassandra-2 1/1 Running 1 19m 172.17.0.6 minikube <none> <none>
$ kubectl describe service cassandra
Name: cassandra
Namespace: default
Labels: app=cassandra
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"cassandra"},"name":"cassandra","namespace":"default"},"s...
Selector: app=cassandra
Type: NodePort
IP: 10.102.236.158
Port: <unset> 9042/TCP
TargetPort: 9042/TCP
NodePort: <unset> 32554/TCP
Endpoints: 172.17.0.4:9042,172.17.0.5:9042,172.17.0.6:9042
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
$ kubectl exec -it cassandra-0 -- nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 172.17.0.5 104.72 KiB 256 68.1% 680bfcb9-b374-40a6-ba1d-4bf7ee80a57b rack1
UN 172.17.0.4 69.9 KiB 256 66.5% 022009f8-112c-46c9-844b-ef062bac35aa rack1
UN 172.17.0.6 125.31 KiB 256 65.4% 48ae76fe-b37c-45c7-84f9-3e6207da4818 rack1
$ kubectl exec -it cassandra-0 -- cqlsh
Connected to K8Demo at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.4 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh>
cassandra-service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: cassandra
name: cassandra
spec:
type: NodePort
ports:
- port: 9042
selector:
app: cassandra
cassandra-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: cassandra
labels:
app: cassandra
spec:
serviceName: cassandra
replicas: 3
selector:
matchLabels:
app: cassandra
template:
metadata:
labels:
app: cassandra
spec:
terminationGracePeriodSeconds: 1800
containers:
- name: cassandra
image: cassandra:3.11
ports:
- containerPort: 7000
name: intra-node
- containerPort: 7001
name: tls-intra-node
- containerPort: 7199
name: jmx
- containerPort: 9042
name: cql
resources:
limits:
cpu: "500m"
memory: 1Gi
requests:
cpu: "500m"
memory: 1Gi
securityContext:
capabilities:
add:
- IPC_LOCK
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- nodetool drain
env:
- name: MAX_HEAP_SIZE
value: 512M
- name: HEAP_NEWSIZE
value: 100M
- name: CASSANDRA_SEEDS
value: "cassandra-0.cassandra.default.svc.cluster.local"
- name: CASSANDRA_CLUSTER_NAME
value: "K8Demo"
- name: CASSANDRA_DC
value: "DC1-K8Demo"
- name: CASSANDRA_RACK
value: "Rack1-K8Demo"
- name: CASSANDRA_START_RPC
value: "true"
- name: CASSANDRA_RPC_ADDRESS
value: "0.0.0.0"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
# These volume mounts are persistent. They are like inline claims,
# but not exactly because the names need to match exactly one of
# the stateful pod volumes.
volumeMounts:
- name: cassandra-data
mountPath: /var/lib/cassandra
# These are converted to volume claims by the controller
# and mounted at the paths mentioned above.
# Do not use these in production until you can use an SSD GCEPersistentDisk or another SSD PD.
volumeClaimTemplates:
- metadata:
name: cassandra-data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: fast
resources:
requests:
storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: fast
provisioner: k8s.io/minikube-hostpath
parameters:
type: pd-standard
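One subtlety worth checking here: per-pod DNS names such as cassandra-0.cassandra.default.svc.cluster.local (used in CASSANDRA_SEEDS) are only created for a headless governing service, which is what the original tutorial's `cassandra` service is. If the NodePort service replaced it, the seed name will stop resolving. A minimal sketch of keeping both, with the external service under a different, assumed name:

```yaml
# Headless governing service: clusterIP: None gives each StatefulSet pod a
# stable DNS record (cassandra-0.cassandra.default.svc.cluster.local, ...).
apiVersion: v1
kind: Service
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  clusterIP: None
  ports:
  - port: 9042
  selector:
    app: cassandra
---
# Separate NodePort service for external clients (the name is an assumption;
# it just has to differ from the headless service's name).
apiVersion: v1
kind: Service
metadata:
  name: cassandra-external
spec:
  type: NodePort
  ports:
  - port: 9042
  selector:
    app: cassandra
```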

Just for anyone with this problem:
After reading the DataStax docs I realized that DataStax Studio is meant for use with DataStax Enterprise. For local development against the community edition of Cassandra I'm using DataStax DevCenter instead, and it works.
For Spring Boot (Cassandra cluster running on minikube):
spring.data.cassandra.keyspace-name=mykeyspacename
spring.data.cassandra.contact-points=cassandra-0.cassandra.default.svc.cluster.local
spring.data.cassandra.port=9042
spring.data.cassandra.schema-action=create_if_not_exists
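Note the contact point above only resolves for apps running inside the cluster. If the Spring Boot app runs on the host rather than in a pod, it would need the minikube node IP and the NodePort instead; a sketch, assuming the minikube IP of this setup and a nodePort of 30042:

```properties
# application.properties for an app running OUTSIDE the cluster (values assumed):
spring.data.cassandra.contact-points=192.168.99.100
spring.data.cassandra.port=30042
```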
For DataStax DevCenter (Cassandra cluster running on minikube):
ContactHost = 192.168.99.100
NativeProtocolPort = 30042
Updated cassandra-service
# ------------------- Cassandra Service ------------------- #
apiVersion: v1
kind: Service
metadata:
labels:
app: cassandra
name: cassandra
spec:
type: NodePort
ports:
- port: 9042
nodePort: 30042
selector:
app: cassandra

If you just want to connect with cqlsh, the following command is all you need:
kubectl exec -it cassandra-0 -- cqlsh
On the other hand, if you want to connect from an external client, the following command prints the Cassandra URL (I use DBeaver to connect to the cluster):
minikube service cassandra --url

Related

Kubernetes : RabbitMQ pod is spammed with connections from kube-system

I'm currently learning Kubernetes and all its quirks.
I'm using a RabbitMQ Deployment, Service, and Pod in my cluster to exchange messages between apps in the cluster. However, I noticed an abnormal number of restarts of the RabbitMQ pod.
After installing Prometheus and Grafana to investigate, I saw that the RabbitMQ pod consumes more and more memory and CPU until it gets killed by the OOM killer every two hours or so. The graph looks like this:
Graph of CPU consumption in my cluster (rabbitmq in red)
After that I looked into the RabbitMQ pod's UI and saw that an app in my cluster (IP 10.224.0.5) was constantly creating new connections. This IP corresponds to my kube-system components and my Prometheus instance, as shown by the following output:
k get all -A -o wide | grep 10.224.0.5
E1223 12:13:48.231908 23198 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
E1223 12:13:48.311831 23198 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
kube-system pod/azure-ip-masq-agent-xh9jk 1/1 Running 0 25d 10.224.0.5 aks-agentpool-37892177-vmss000001 <none> <none>
kube-system pod/cloud-node-manager-h5ff5 1/1 Running 0 25d 10.224.0.5 aks-agentpool-37892177-vmss000001 <none> <none>
kube-system pod/csi-azuredisk-node-sf8sn 3/3 Running 0 3d15h 10.224.0.5 aks-agentpool-37892177-vmss000001 <none> <none>
kube-system pod/csi-azurefile-node-97nbt 3/3 Running 0 19d 10.224.0.5 aks-agentpool-37892177-vmss000001 <none> <none>
kube-system pod/kube-proxy-2s5tn 1/1 Running 0 3d15h 10.224.0.5 aks-agentpool-37892177-vmss000001 <none> <none>
monitoring pod/prometheus-prometheus-node-exporter-dztwx 1/1 Running 0 20h 10.224.0.5 aks-agentpool-37892177-vmss000001 <none> <none>
Also, I noticed that these connections seem to be blocked by RabbitMQ: the field connection.blocked in the client properties is set to true, as shown in the following image:
Print screen of a connection details from rabbitMQ pod's UI
I saw in the documentation that RabbitMQ starts to block connections when it runs low on resources, but I set the CPU and memory limits to 1 CPU and 1 GiB RAM, and the connections are blocked from the start anyway.
On the cluster I'm also using KEDA, which polls the RabbitMQ pod every second to see if there are any messages in a queue (I set pollingInterval to 1 in the YAML). But as I said earlier, it's not KEDA that's creating all the connections, it's kube-system. Unless KEDA polls RabbitMQ through one of the components listed in the output above, or its polling interval does not correspond to seconds (highly unlikely, as the docs say it is given in seconds), I have no idea what's going on with all these connections.
The following section contains the YAML of all the components that might be involved in this problem (KEDA and RabbitMQ):
rabbitMQ Replica Count.yaml
apiVersion: v1
kind: ReplicationController
metadata:
labels:
component: rabbitmq
name: rabbitmq-controller
spec:
replicas: 1
template:
metadata:
labels:
app: taskQueue
component: rabbitmq
spec:
containers:
- image: rabbitmq:3.11.5-management
name: rabbitmq
ports:
- containerPort: 5672
name: amqp
- containerPort: 15672
name: http
resources:
limits:
cpu: 1
memory: 1Gi
rabbitMQ Service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
component: rabbitmq
name: rabbitmq-service
spec:
type: LoadBalancer
ports:
- port: 5672
targetPort: 5672
name: amqp
- port: 15672
targetPort: 15672
name: http
selector:
app: taskQueue
component: rabbitmq
keda ScaledJob, Secret and TriggerAuthentication (sample data is just a replacement for fields that I do not want to reveal :) ):
apiVersion: v1
kind: Secret
metadata:
name: keda-rabbitmq-secret
data:
host: sample-host # base64-encoded value of the form amqp://guest:password@localhost:5672/vhost
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
name: keda-trigger-auth-rabbitmq-conn
namespace: default
spec:
secretTargetRef:
- parameter: host
name: keda-rabbitmq-secret
key: host
---
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
name: builder-job-scaler
namespace: default
spec:
jobTargetRef:
parallelism: 1
completions: 1
activeDeadlineSeconds: 600
backoffLimit: 5
template:
spec:
volumes:
- name: shared-storage
emptyDir: {}
initContainers:
- name: sourcesfetcher
image: sample image
volumeMounts:
- name: shared-storage
mountPath: /mnt/shared
env:
- name: SHARED_STORAGE_MOUNT_POINT
value: /mnt/shared
- name: RABBITMQ_ENDPOINT
value: sample host
- name: RABBITMQ_QUEUE_NAME
value: buildOrders
containers:
- name: builder
image: sample image
volumeMounts:
- name: shared-storage
mountPath: /mnt/shared
env:
- name: SHARED_STORAGE_MOUNT_POINT
value: /mnt/shared
- name: MINIO_ENDPOINT
value: sample endpoint
- name: MINIO_PORT
value: sample port
- name: MINIO_USESSL
value: "false"
- name: MINIO_ROOT_USER
value: sample user
- name: MINIO_ROOT_PASSWORD
value: sample password
- name: BUCKET_NAME
value: "hex"
- name: SERVER_NAME
value: sample url
resources:
requests:
cpu: 500m
memory: 512Mi
limits:
cpu: 500m
memory: 512Mi
restartPolicy: OnFailure
pollingInterval: 1
maxReplicaCount: 2
minReplicaCount: 0
rollout:
strategy: gradual
triggers:
- type: rabbitmq
metadata:
protocol: amqp
queueName: buildOrders
mode: QueueLength
value: "1"
authenticationRef:
name: keda-trigger-auth-rabbitmq-conn
Any help would be very much appreciated!

How to port-forward a service?

I am using https://github.com/zalando/postgres-operator and I have created a database cluster. The following services have also been created:
databaker-users-db ClusterIP 10.245.227.1 <none> 5432/TCP 52d
databaker-users-db-config ClusterIP None <none> <none> 52d
databaker-users-db-repl ClusterIP 10.245.156.119 <none> 5432/TCP 52d
I would like to forward the service to localhost and I tried as follows:
kubectl port-forward service/databaker-users-db 5432:5432
and it shows me:
error: cannot attach to *v1.Service: invalid service 'databaker-users-db': Service is defined without a selector
The content of the YAML file:
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
annotations:
meta.helm.sh/release-name: users
meta.helm.sh/release-namespace: dev
labels:
app.kubernetes.io/managed-by: Helm
team: databaker
name: databaker-users-db
namespace: dev
spec:
databases:
databaker_users_db: databaker
numberOfInstances: 2
postgresql:
version: '12'
resources:
limits:
cpu: 500m
memory: 500Mi
requests:
cpu: 100m
memory: 100Mi
teamId: databaker
users:
databaker:
- superuser
- createdb
volume:
size: 2Gi
What am I doing wrong?
It seems like your k8s service databaker-users-db doesn't have a selector specified.
apiVersion: v1
kind: Service
metadata:
name: databaker-users-db
spec:
ports:
- ...
- ...
selector: <-- check here
When a Service has no selector, the corresponding Endpoints object is not created automatically. You can manually map the Service to the network address and port where the workload is running by adding an Endpoints object yourself.
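For reference, a selector-less Service wired up via a manual Endpoints object might look like the sketch below (the pod IP is an assumption); alternatively, you can skip the Service entirely and port-forward a pod directly, e.g. kubectl port-forward pod/&lt;pod-name&gt; 5432:5432.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: databaker-users-db
spec:
  ports:
  - port: 5432
    targetPort: 5432
---
# The Endpoints object's name must match the Service name to be associated.
apiVersion: v1
kind: Endpoints
metadata:
  name: databaker-users-db
subsets:
- addresses:
  - ip: 10.244.1.23   # assumed pod IP
  ports:
  - port: 5432
```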

Can't ping postgres pod from another pod in kubernetes

I created a dummy pod to test the DB connection using the following YAML:
pod.yaml
kind: Pod
apiVersion: v1
metadata:
name: marks-dummy-pod
spec:
containers:
- name: marks-dummy-pod
image: djtijare/ubuntuping:v1
command: ["/bin/bash", "-ec", "while :; do echo '.'; sleep 5 ; done"]
restartPolicy: Never
Dockerfile used :-
FROM ubuntu
RUN apt-get update && apt-get install -y iputils-ping
CMD bash
I created the service as:
postgresservice.yaml
kind: Service
apiVersion: v1
metadata:
name: postgressvc
spec:
type: ClusterIP
ports:
- port: 5432
targetPort: 5432
The Endpoints object for the created service:
kind: Endpoints
apiVersion: v1
metadata:
name: postgressvc
subsets:
- addresses:
- ip: 172.31.6.149
ports:
- port: 5432
Then I ran ping 172.31.6.149 inside the pod (kubectl exec -it marks-dummy-pod bash), but it is not working (ping localhost works).
output of kubectl get pods,svc,ep -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/marks-dummy-pod 1/1 Running 0 43m 192.168.1.63 ip-172-31-11-87 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/postgressvc ClusterIP 10.107.58.81 <none> 5432/TCP 33m <none>
NAME ENDPOINTS AGE
endpoints/postgressvc 172.31.6.149:5432 32m
Output for the answer by P Ekambaram:
kubectl get pods,svc,ep -o wide gives
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/postgres-855696996d-w6h6c 1/1 Running 0 44s 192.168.1.66 ip-172-31-11-87 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/postgres NodePort 10.110.203.204 <none> 5432:31076/TCP 44s app=postgres
NAME ENDPOINTS AGE
endpoints/postgres 192.168.1.66:5432 44s
So the problem was in my DNS pod in the kube-system namespace.
I just created a new Kubernetes setup and made sure that DNS was working.
For the new setup, refer to my answer to another question: How to start kubelet service?
Is the postgres pod missing?
Did you create the Endpoints object, or was it auto-generated?
Share the pod definition YAML.
You shouldn't be creating the Endpoints object manually; that is the mistake. Follow the deployment below for Postgres.
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
labels:
app: postgres
data:
POSTGRES_DB: postgres
POSTGRES_USER: postgres
POSTGRES_PASSWORD: example
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: postgres
spec:
replicas: 1
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:11
imagePullPolicy: Always
ports:
- containerPort: 5432
envFrom:
- configMapRef:
name: postgres-config
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres-data
volumes:
- name: postgres-data
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgres
spec:
type: NodePort
ports:
- port: 5432
selector:
app: postgres
Undeploy the postgres service and Endpoints object and deploy the YAML above.
It should work.
Why is the NODE IP prefixed with ip-?
You should create a Deployment for your database and then a Service that targets that Deployment, and connect through the Service. Why ping an IP directly?

Two kubernetes deployments in the same namespace are not able to communicate

I'm deploying the ELK stack (OSS) to a Kubernetes cluster. The Elasticsearch deployment and service start correctly and the API is reachable. The Kibana deployment starts but can't access Elasticsearch:
From Kibana container logs:
{"type":"log","@timestamp":"2019-05-08T22:49:26Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","@timestamp":"2019-05-08T22:50:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","@timestamp":"2019-05-08T22:50:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
Both deployments are in the same namespace, "observability". I also tried to reference the Elasticsearch container as elasticsearch.observability.svc.cluster.local, but that doesn't work either.
What am I doing wrong? How do I reference the Elasticsearch container from the Kibana container?
More info:
kubectl --context=19team-observability-admin-context -n observability get pods
NAME READY STATUS RESTARTS AGE
elasticsearch-9d495b84f-j2297 1/1 Running 0 15s
kibana-65bc7f9c4-s9cv4 1/1 Running 0 15s
kubectl --context=19team-observability-admin-context -n observability get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch NodePort 10.104.250.175 <none> 9200:30083/TCP,9300:30059/TCP 1m
kibana NodePort 10.102.124.171 <none> 5601:30124/TCP 1m
I start my containers with the command:
kubectl --context=19team-observability-admin-context -n observability apply -f .\elasticsearch.yaml -f .\kibana.yaml
elasticsearch.yaml
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
namespace: observability
spec:
type: NodePort
ports:
- name: "9200"
port: 9200
targetPort: 9200
- name: "9300"
port: 9300
targetPort: 9300
selector:
app: elasticsearch
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: elasticsearch
namespace: observability
spec:
replicas: 1
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
initContainers:
- name: set-vm-max-map-count
image: busybox
imagePullPolicy: IfNotPresent
command: ['sysctl', '-w', 'vm.max_map_count=262144']
securityContext:
privileged: true
resources:
requests:
memory: "512Mi"
cpu: "1"
limits:
memory: "724Mi"
cpu: "1"
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.7.1
ports:
- containerPort: 9200
- containerPort: 9300
resources:
requests:
memory: "3Gi"
cpu: "1"
limits:
memory: "3Gi"
cpu: "1"
kibana.yaml
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: observability
spec:
type: NodePort
ports:
- name: "5601"
port: 5601
targetPort: 5601
selector:
app: observability_platform_kibana
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: observability_platform_kibana
name: kibana
namespace: observability
spec:
replicas: 1
template:
metadata:
labels:
app: observability_platform_kibana
spec:
containers:
- env:
# THIS IS WHERE WE SET CONNECTION BETWEEN KIBANA AND ELASTIC
- name: ELASTICSEARCH_HOSTS
value: http://elasticsearch:9200
- name: SERVER_NAME
value: kibana
image: docker.elastic.co/kibana/kibana-oss:6.7.1
name: kibana
ports:
- containerPort: 5601
resources:
requests:
memory: "512Mi"
cpu: "1"
limits:
memory: "724Mi"
cpu: "1"
restartPolicy: Always
UPDATE 1
As gonzalesraul proposed, I've created a second service for Elasticsearch with type ClusterIP:
apiVersion: v1
kind: Service
metadata:
labels:
app: elasticsearch
name: elasticsearch-local
namespace: observability
spec:
type: ClusterIP
ports:
- port: 9200
protocol: TCP
targetPort: 9200
selector:
app: elasticsearch
The service is created:
kubectl --context=19team-observability-admin-context -n observability get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch NodePort 10.106.5.94 <none> 9200:31598/TCP,9300:32018/TCP 26s
elasticsearch-local ClusterIP 10.101.178.13 <none> 9200/TCP 26s
kibana NodePort 10.99.73.118 <none> 5601:30004/TCP 26s
And referenced Elasticsearch as "http://elasticsearch-local:9200".
Unfortunately it does not work; in the Kibana container:
{"type":"log","@timestamp":"2019-05-09T10:13:54Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-local:9200/"}
Do not use a NodePort service; use a ClusterIP instead. If you need to expose your service as a NodePort, create a second service alongside it, for instance:
---
apiVersion: v1
kind: Service
metadata:
labels:
app: elasticsearch
name: elasticsearch-local
namespace: observability
spec:
type: ClusterIP
ports:
- port: 9200
protocol: TCP
targetPort: 9200
selector:
app: elasticsearch
Then update the kibana manifest to point to the ClusterIP service:
# ...
# THIS IS WHERE WE SET CONNECTION BETWEEN KIBANA AND ELASTIC
- name: ELASTICSEARCH_HOSTS
value: http://elasticsearch-local:9200
# ...
The nodePort services do not create a 'dns entry' (ex. elasticsearch.observability.svc.cluster.local) on kubernetes
Edit the SERVER_NAME value in kibana.yaml and set it to kibana:5601.
I think if you don't do this, it tries to go to port 80 by default.
This is what kibana.yaml looks like now:
...
spec:
containers:
- env:
- name: ELASTICSEARCH_HOSTS
value: http://elasticsearch:9200
- name: SERVER_NAME
value: kibana:5601
image: docker.elastic.co/kibana/kibana-oss:6.7.1
imagePullPolicy: IfNotPresent
name: kibana
...
And this is the output now:
{"type":"log","@timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:console@6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:interpreter@6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:metrics@6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:tile_map@6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:timelion@6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:elasticsearch@6.7.1","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2019-05-09T10:37:17Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
UPDATE
I just tested it on a bare-metal cluster (bootstrapped with kubeadm), and it worked again.
This is the output:
{"type":"log","@timestamp":"2019-05-09T11:09:59Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","@timestamp":"2019-05-09T11:10:01Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","@timestamp":"2019-05-09T11:10:01Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","@timestamp":"2019-05-09T11:10:04Z","tags":["status","plugin:elasticsearch@6.7.1","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-05-09T11:10:04Z","tags":["info","migrations"],"pid":1,"message":"Creating index .kibana_1."}
{"type":"log","@timestamp":"2019-05-09T11:10:06Z","tags":["info","migrations"],"pid":1,"message":"Pointing alias .kibana to .kibana_1."}
{"type":"log","@timestamp":"2019-05-09T11:10:06Z","tags":["info","migrations"],"pid":1,"message":"Finished in 2417ms."}
{"type":"log","@timestamp":"2019-05-09T11:10:06Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
Note that it went from "No living connections" to "Running". I am running the nodes on GCP and had to open the firewall for it to work. What's your environment?

Kafka deployment on minikube

Hi, I am new to Kubernetes. I am using a minikube single-node cluster for local development and testing.
Host: Ubuntu 16.04 LTS.
Minikube: VirtualBox running the minikube cluster.
My requirement is to deploy Kafka and ZooKeeper on minikube and use them to produce and consume messages.
I followed this link and successfully deployed it on minikube; the details are below.
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kafka-service 10.0.0.15 <pending> 9092:30244/TCP 46m
kubernetes 10.0.0.1 <none> 443/TCP 53m
zoo1 10.0.0.43 <none> 2181/TCP,2888/TCP,3888/TCP 50m
zoo2 10.0.0.226 <none> 2181/TCP,2888/TCP,3888/TCP 50m
zoo3 10.0.0.6 <none> 2181/TCP,2888/TCP,3888/TCP 50m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kafka-deployment-3583985961-f2301 1/1 Running 0 48m
zookeeper-deployment-1-1598963595-vgx1l 1/1 Running 0 52m
zookeeper-deployment-2-2038841231-tdsff 1/1 Running 0 52m
zookeeper-deployment-3-2478718867-5vjcj 1/1 Running 0 52m
$ kubectl describe service kafka-service
Name: kafka-service
Namespace: default
Labels: app=kafka
Annotations: <none>
Selector: app=kafka
Type: LoadBalancer
IP: 10.0.0.15
Port: kafka-port 9092/TCP
NodePort: kafka-port 30244/TCP
Endpoints: 172.17.0.7:9092
Session Affinity: None
Events: <none>
and I have set KAFKA_ADVERTISED_HOST_NAME to the minikube IP (192.168.99.100).
Now for the message producer I am using $ cat textfile.log | kafkacat -b $(minikube ip):30244 -t mytopic. It is not publishing the message, instead giving the message below:
% Auto-selecting Producer mode (use -P or -C to override)
% Delivery failed for message: Local: Message timed out
Can anyone help with how to publish and consume messages?
I know that this is quite an old post. Were you able to resolve and run Kafka + ZooKeeper within minikube? I was able to run a simple single-cluster Kafka and ZooKeeper deployment successfully using minikube v0.17.1, and to produce and consume messages using the kafkacat producer and consumer respectively. I was able to run these successfully on Ubuntu and macOS. The deployment and service YAMLs are as below:
zookeeper-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: zookeeper
name: zookeeper
spec:
replicas: 1
template:
metadata:
labels:
app: zookeeper
spec:
containers:
- image: wurstmeister/zookeeper
imagePullPolicy: IfNotPresent
name: zookeeper
ports:
- containerPort: 2181
zookeeper-service.yml
apiVersion: v1
kind: Service
metadata:
labels:
app: zookeeper-service
name: zookeeper-service
spec:
type: NodePort
ports:
- name: zookeeper-port
port: 2181
nodePort: 30181
targetPort: 2181
selector:
app: zookeeper
kafka-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: kafka
name: kafka
spec:
replicas: 1
template:
metadata:
labels:
app: kafka
spec:
containers:
- env:
- name: KAFKA_ADVERTISED_HOST_NAME
value: "192.168.99.100"
- name: KAFKA_ADVERTISED_PORT
value: "30092"
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_ZOOKEEPER_CONNECT
value: 192.168.99.100:30181
- name: KAFKA_CREATE_TOPICS
value: "test-topic:1:1"
image: wurstmeister/kafka
imagePullPolicy: IfNotPresent
name: kafka
ports:
- containerPort: 9092
kafka-service.yml
apiVersion: v1
kind: Service
metadata:
labels:
app: kafka-service
name: kafka-service
spec:
type: NodePort
ports:
- name: kafka-port
port: 9092
nodePort: 30092
targetPort: 9092
selector:
app: kafka
You can test your deployment by installing the kafkacat client and running the following commands in separate terminal windows:
echo "Am I receiving this message?" | kafkacat -P -b 192.168.99.100:30092 -t test-topic
kafkacat -C -b 192.168.99.100:30092 -t test-topic
% Reached end of topic test-topic [0] at offset 0
Am I receiving this message?
I was able to successfully run this on minikube versions v0.17.1 and v0.19.0. If you want to run this on minikube versions v0.21.1 and v0.23.0, please refer to my reply to the post here: Kafka inaccessible once inside Kubernetes/Minikube
Thanks.
I followed the steps shown below.
$ git clone https://github.com/d1egoaz/minikube-kafka-cluster
$ cd minikube-kafka-cluster
$ kubectl apply -f 00-namespace/
$ kubectl apply -f 01-zookeeper/
$ kubectl apply -f 02-kafka/
$ kubectl apply -f 03-yahoo-kafka-manager/
$ kubectl get svc -n kafka-ca1 (get the port of kafka manager like 31445)
$ minikube ip
Put http://minikube-ip:port in a browser to open the Kafka Manager UI.
source https://technology.amis.nl/2019/03/24/running-apache-kafka-on-minikube/
You used a service with type LoadBalancer, which is meant for cloud providers (you can see the service waiting for an external IP address; the pending state will never resolve on minikube). In your case you should use NodePort instead.
Going off the previous answer and your comment, here is very basic sample code for a Kafka service with type NodePort.
apiVersion: v1
kind: Service
metadata:
name: kafka-service
labels:
name: kafka
spec:
ports:
- port: 9092
name: kafka-port
protocol: TCP
selector:
app: kafka
type: NodePort
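One caveat with this sample: the earlier deployment sets KAFKA_ADVERTISED_PORT to 30092, and Kafka clients reconnect to whatever address the broker advertises, so the NodePort should be pinned to that value rather than left to random assignment. A sketch of the ports section with the pin (assuming the advertised port of 30092):

```yaml
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-port
    protocol: TCP
    nodePort: 30092  # must match KAFKA_ADVERTISED_PORT in the broker deployment
  selector:
    app: kafka
```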