Kubernetes: accessing a service (ZooKeeper)

I am trying to deploy a custom NiFi instance working with an external ZooKeeper on Kubernetes (I'm a beginner).
Everything works except for the state management within NiFi.
I understand that I have to update the state-management.xml file with the right connect string:
<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <property name="Connect String"></property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>
I do not know how to get this connect string within Kubernetes. This is my service.yml for ZooKeeper:
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - port: 2181
    name: client
For ZooKeeper leader election and so on, I used the following address:
zk-0.zk-hs.default.svc.cluster.local:2888:3888
But how do I access port 2181?

You can just use zk-cs.default.svc.cluster.local:2181 as the connect string.
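For example, assuming ZooKeeper lives in the default namespace and the client Service is the zk-cs one from the question, the cluster-provider block would look roughly like this (a sketch, not the only valid form):

<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <!-- the client Service's DNS name plus the ZooKeeper client port (2181) -->
    <property name="Connect String">zk-cs.default.svc.cluster.local:2181</property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>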

Related

Error Init Kafka Client: kafka: client has run out of available brokers to talk to: dial tcp: lookup kafka on 10.96.0.10:53: no such host

C:\kafka>kubectl logs kafka-exporter-745f574c74-tzfn4
I0127 12:39:08.113890 1 kafka_exporter.go:792] Starting kafka_exporter (version=1.6.0, branch=master, revision=9d9cd654ca57e4f153d0d0b00ce36069b6a677c1)
F0127 12:39:08.890639 1 kafka_exporter.go:893] Error Init Kafka Client: kafka: client has run out of available brokers to talk to: dial tcp: lookup kafka.osm.svc.cluster.local on 10.96.0.10:53: no such host
Below is the kafka-exporter-deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-exporter
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-exporter
  template:
    metadata:
      labels:
        app: kafka-exporter
    spec:
      containers:
      - name: kafka-exporter
        image: danielqsj/kafka-exporter:latest
        imagePullPolicy: IfNotPresent
        args:
        - --kafka.server=kafka.osm.svc.cluster.local:9092
        - --web.listen-address=:9092
        ports:
        - containerPort: 9308
        env:
        - name: KAFKA_EXPORTER_KAFKA_CONNECT
          value: kafka-broker-644794f4ff-8gmxb:9092
        - name: KAFKA_EXPORTER_TOPIC_WHITELIST
          value: samptopic
And here is kafka-exporter-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kafka-exporter
  namespace: default
spec:
  selector:
    app: kafka-exporter
  ports:
  - name: http
    port: 9308
    targetPort: 9308
  type: NodePort
Actually, there is no namespace called "osm"; everything is running in the default namespace.
Refer to the Kubernetes documentation.
As the error says, there is no Service named kafka reachable in an osm namespace (whether or not that namespace exists). Remove .osm from that address, or change it to .default, assuming there really is a Service named kafka available on port 9092.
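For example, assuming the broker Service really is named kafka and runs in the default namespace alongside the exporter, the args would become something like:

args:
- --kafka.server=kafka.default.svc.cluster.local:9092
# or, since the exporter itself runs in the default namespace, simply:
# - --kafka.server=kafka:9092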

DigitalOcean Kubernetes cluster load balancer doesn't do round robin properly

I created a DigitalOcean Kubernetes cluster and created a service with 4 replicas; the service type is LoadBalancer, and the load balancer was also created. I post requests to the target endpoint using Postman, and I have written the endpoint to return the pod's host name, but every time I get the response from the same pod. If the load were balanced by the load balancer, the requests should go to each and every pod, but that is not happening as I expected.
My manifest file is like below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-deployment
  labels:
    app: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
      - name: user-service
        image: myRepo/ms-user-service:1.0.1
        ports:
        - containerPort: 8080
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: proud
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
---
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
Please help me resolve this problem.
You have to disable the keepalive feature (service.beta.kubernetes.io/do-loadbalancer-enable-backend-keepalive) when you create the service. You can configure it under metadata.annotations like below; see the DigitalOcean load balancer annotations documentation for more detail.
apiVersion: v1
kind: Service
metadata:
  name: user-service
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-backend-keepalive: "false"
spec:
  selector:
    app: user-service
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
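A rough way to verify the change (a sketch; the external IP is a placeholder for whatever the load balancer gets, and it assumes the app reveals which pod answered, e.g. by returning its hostname):

# send a batch of requests and check that responses come from different pods
for i in $(seq 1 20); do
  curl -s http://<loadbalancer-external-ip>:8080/ ; echo
done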

How to connect to 2 web apps behind 2 NodePort services?

I have a single node k8s cluster with 2 web applications running on 2 NGINX k8s pods.
nginx-deployment1 --> WEBAPP1 --> nginx-svc-app1 --> <K8s_controler_IP>:30080/webapp1
nginx-deployment2 --> WEBAPP2 --> nginx-svc-app2 --> <K8s_controler_IP>:30081/webapp2
It connects only to the respective NodePort IP, but not to <K8s_controler_IP>:30080/webapp1 and <K8s_controler_IP>:30081/webapp2. Could you please help me understand what I am missing?
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment1
  labels:
    name: nginx-app1
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx-app1
  template:
    metadata:
      labels:
        name: nginx-app1
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-app1
  labels:
    name: nginx-svc-app1
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: app1_port
  selector:
    name: nginx-app1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment2
  labels:
    name: nginx-app2
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx-app2
  template:
    metadata:
      labels:
        name: nginx-app2
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-app2
  labels:
    name: nginx-svc-app2
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30081
    name: http
  selector:
    name: nginx-app2
Your port configurations are incorrect. You need to set targetPort: 80 for both services. When you use the NodePort service type and specify the nodePort, the port field is irrelevant; that is the port that typically receives the incoming traffic for the service endpoint. So, specifying both would mean:
Receive incoming traffic on an endpoint with port 80.
Receive incoming traffic on node port 30080.
And you are not specifying a targetPort, which is the port that receives the traffic in the pods of the deployment. So the traffic is coming in, but not being forwarded anywhere.
You need to add targetPort: 80 to both services and remove the port parameter.
Additionally, you are not running 2 pods; you are running 2 deployments, each with 3 pods (replicas) in them. The traffic you send to each service will be distributed by the service across all the pods that match the selector, i.e. all 3 of your pods. It's important to understand how this communication takes place.
Also, k8s_controller_ip is an incorrect way of describing the IP address; you should use the term k8s node IP or k8s_master_node_ip. The IP address belongs to the node that is running the cluster, not to any controller.
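Following that advice, the first Service would look roughly like this (a sketch; port is kept here because the Service API still requires it, and the port name is changed to app1-port since underscores are not valid in Kubernetes port names):

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-app1
  labels:
    name: nginx-svc-app1
spec:
  type: NodePort
  selector:
    name: nginx-app1
  ports:
  - name: app1-port     # renamed: underscores are not allowed in port names
    port: 80            # the Service's own port
    targetPort: 80      # the port the nginx containers listen on
    nodePort: 30080     # the port exposed on the node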

I'm having a hard time setting up Kafka on GKE and would like to know the best way of setting it up

I was trying to use StatefulSets to deploy the ZooKeeper and Kafka servers in a cluster on GKE, but the communication between Kafka and ZooKeeper fails with an error message in the logs. I'd like to know the easiest way to set up Kafka in Kubernetes.
I've tried the following configurations and I see that Kafka fails to communicate with ZooKeeper, but I am not sure why. I know that I may need a headless service because the communication is handled by Kafka and ZooKeeper themselves.
For Zookeeper
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  type: LoadBalancer
  selector:
    app: zookeeper
  ports:
  - port: 2181
    targetPort: client
    name: zk-port
  - port: 2888
    targetPort: leader
    name: zk-leader
  - port: 3888
    targetPort: election
    name: zk-election
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
  replicas: 3
  selector:
    matchLabels:
      app: zookeeper
  serviceName: zookeeper
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zk-pod
        image: zookeeper:latest
        imagePullPolicy: Always
        ports:
        - name: client
          containerPort: 2181
        - name: leader
          containerPort: 2888
        - name: election
          containerPort: 3888
        env:
        - name: ZOO_MY_ID
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: ZOO_TICK_TIME
          value: "2000"
        - name: ZOO_INIT_LIMIT
          value: "5"
        - name: ZOO_SYNC_LIMIT
          value: "2"
        - name: ZOO_SERVERS
          value: zookeeper:2888:3888
For Kafka
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka-svc
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: confluentinc/cp-kafka:latest
        ports:
        - containerPort: 9092
          name: client
        env:
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
        - name: KAFKA_ADVERTISED_LISTENERS
          value: kafka.default.svc.cluster.local:9092
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
spec:
  type: LoadBalancer
  selector:
    app: kafka
  ports:
  - port: 9092
    targetPort: client
    name: kfk-port
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 2
I'd like to be able to send messages to a topic and to be able to read them back. I've been using kafkacat to test the connection.
This is one of the limitations specified in the official Kubernetes documentation about StatefulSets:
StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. You are responsible for creating this Service.
So, as you mentioned, you need a headless Service, and you can easily add a headless Service YAML like the one below on top of your configuration for both of your StatefulSets:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  ports:
  - port: 2181
    name: someport
  clusterIP: None
  selector:
    app: zookeeper
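The matching headless Service for the Kafka StatefulSet might look like this (a sketch; it is named kafka-svc to match that StatefulSet's serviceName, so the existing LoadBalancer Service of the same name would need to be renamed or replaced):

apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
  labels:
    app: kafka
spec:
  clusterIP: None       # headless: gives each Kafka pod a stable DNS name
  selector:
    app: kafka
  ports:
  - port: 9092
    name: client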
Hope it helps!
I have been following the official guide from Google Click to Deploy, and it has proven very beneficial for me. Here is the link you can follow to set up a Kafka cluster on GKE. This GitHub repository is officially maintained by Google Cloud.
https://github.com/GoogleCloudPlatform/click-to-deploy/tree/master/k8s/kafka
Another simple approach is to deploy via the Google Cloud Console.
https://console.cloud.google.com/marketplace
Search for "Kafka cluster (with replication)".
Click on Configure.
Fill out all the necessary details and it will configure the Kafka cluster for you with all internal communication enabled. I have removed my cluster name for privacy.

Kubernetes Communication between services

For learning purposes, I have two services in a cluster on Google Cloud:
An API service with the following k8s config:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp-api
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myapp-api:0.0.1
        name: myapp-api
        ports:
        - containerPort: 3000
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: myapp-api
spec:
  selector:
    app: myapp-api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
And a second service, called frontend, that's publicly exposed:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myappfront-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myappfront
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myappfront:0.0.11
        name: myappfront-deployment
        ports:
        - containerPort: 3000
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myappfront-service
spec:
  type: LoadBalancer
  selector:
    app: myappfront
  ports:
  - port: 80
    targetPort: 3000
The front service is basically a Node.js app that only makes a REST call to the API service, like so: axios.get('http://myapp-api').
The issue is that the call fails and is unable to find the requested endpoint. Is there any additional config that I'm currently missing for the API service to be discovered?
P.S. Both services are running and I can connect to them by proxying from localhost.
Since you're able to hit the services when proxying, it sounds like you've already gone through most of the debugging steps for in-cluster issues (https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/). That suggests the problem might not be within the cluster. Something to watch out for with frontends is where the HTTP call takes place. It could be on the server with Node, but given that you're seeing this issue, I'd suggest it's happening in the browser. (If you can see the call in the browser console, then it is.) If the frontend's call is made in the browser, then it doesn't have access to Kubernetes DNS and the cluster-internal service name won't resolve. To handle this, you could make the backend Service a LoadBalancer and pass the external name into the frontend.
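A minimal sketch of that suggestion: expose the API with a LoadBalancer Service and hand its external address to the frontend, for example through an environment variable the frontend code reads instead of the hard-coded service name (API_URL is a hypothetical name):

kind: Service
apiVersion: v1
metadata:
  name: myapp-api
spec:
  type: LoadBalancer    # gives the API an external IP the browser can reach
  selector:
    app: myapp-api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
# in the frontend Deployment's container spec, pass the external address in:
#   env:
#   - name: API_URL
#     value: http://<external-ip-of-myapp-api>   # fill in once the LB is provisioned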