Suppose I have a 3-node Kafka cluster set up. How do I expose it outside the cloud using a LoadBalancer service? I have read the reference material but have a few doubts.
Say, for example, the following is the service for one broker:
apiVersion: v1
kind: Service
metadata:
  name: kafka-0
  annotations:
    dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
  - port: 9092
    name: outside
    targetPort: 9092
  selector:
    app: kafka
    kafka-pod-id: "0"
1) What are port and targetPort?
2) Do I set up a LoadBalancer service for each of the brokers?
3) Do these multiple brokers get mapped to a single public IP address of the cloud load balancer?
4) How does a service outside Kubernetes/the cloud access an individual broker? By using public-ip:port, or by using kafka-<pod-id>.kafka.my.company.com:port? Also, which port is used here, port or targetPort?
5) How do I specify this configuration in the Kafka broker's advertised.listeners property, given that the port can differ for clients inside and outside the Kubernetes cluster?
Please help.
Based on the information you provided, I will try to give you some answers and, where possible, some advice.
1) port is the port on which the Service is exposed to other workloads running within the same Kubernetes cluster. In other words, if a service wants to invoke another service running in the same cluster, it can do so using the value specified under port in the Service spec.
targetPort is the port on the pod to which the Service forwards traffic. Your application needs to be listening for network requests on this port for the Service to work.
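For example (a minimal sketch with illustrative names and ports, not taken from your setup), a Service can expose port 80 inside the cluster while the container actually listens on 9092:
apiVersion: v1
kind: Service
metadata:
  name: example-service   # hypothetical name, for illustration only
spec:
  selector:
    app: example          # hypothetical label
  ports:
  - port: 80              # what in-cluster clients connect to (example-service:80)
    targetPort: 9092      # the container port the traffic is forwarded to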
2/3) Each broker should be exposed through its own LoadBalancer Service, with a headless Service used for internal (broker-to-broker) communication. There should be one additional LoadBalancer with an external IP for external connections. A sketch of the headless Service follows the example below.
Example of a per-broker Service:
apiVersion: v1
kind: Service
metadata:
  name: kafka-0
  annotations:
    dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com
spec:
  ports:
  - port: 9092
    name: kafka-port
    protocol: TCP
  selector:
    pod-name: kafka-0
  type: LoadBalancer
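For the internal side, here is a minimal sketch of a headless Service (the name kafka-headless, the label app: kafka, and the 9093 internal port are assumptions, not taken from your setup):
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless   # assumed name
spec:
  clusterIP: None        # headless: DNS resolves directly to the broker pods
  selector:
    app: kafka           # assumed label shared by all broker pods
  ports:
  - port: 9093           # assumed internal (inter-broker) port
    name: internal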
4) You have to use kafka-<pod-id>.kafka.my.company.com:port, where port is the Service's port value (that is what the cloud load balancer exposes externally).
5) It should be set to the external address so that clients can connect to it. This article might help with understanding.
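As a rough sketch (the listener names, the 9093 internal port, and the Confluent-style environment variable names are assumptions, not taken from your setup), each broker could advertise a separate internal and external listener:
# Container env for broker kafka-0 (sketch only)
- name: KAFKA_LISTENERS
  value: "INTERNAL://0.0.0.0:9093,EXTERNAL://0.0.0.0:9092"
- name: KAFKA_ADVERTISED_LISTENERS
  value: "INTERNAL://kafka-0.kafka-headless.default.svc.cluster.local:9093,EXTERNAL://kafka-0.kafka.my.company.com:9092"
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
  value: "INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"
- name: KAFKA_INTER_BROKER_LISTENER_NAME
  value: "INTERNAL"
Internal clients then bootstrap against the 9093 listener via the headless Service, while external clients use kafka-<pod-id>.kafka.my.company.com:9092.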
A similar case was discussed on GitHub; it might also help you - https://github.com/kow3ns/kubernetes-kafka/issues/3
In addition, you could also think about Ingress - https://tothepoint.group/blog/accessing-kafka-on-google-kubernetes-engine-from-the-outside-world/
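Since Kafka speaks plain TCP rather than HTTP, the ingress-nginx route usually goes through its TCP services ConfigMap rather than an Ingress resource. A minimal sketch (assuming ingress-nginx runs in the ingress-nginx namespace and the per-broker Services above live in default):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services      # the ConfigMap the controller is pointed at via --tcp-services-configmap
  namespace: ingress-nginx
data:
  # external port on the controller -> namespace/service:port
  "9092": "default/kafka-0:9092"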
In my Kubernetes cluster, I have a Greenplum DB cluster (one master and 8 segment nodes) with a LoadBalancer service. Please refer to the service config below.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: greenplum
    greenplum-cluster: greenplum-cluster
  name: greenplum
spec:
  clusterIP: 10.101.251.127
  clusterIPs:
  - 10.101.251.127
  externalIPs:
  - 11.4.8.141
  externalTrafficPolicy: Cluster
  healthCheckNodePort: 32572
  ports:
  - name: psql
    nodePort: 32198
    port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    statefulset.kubernetes.io/pod-name: master-0
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
However, a few hours after deployment, the externalTrafficPolicy value gets set to Local instead of Cluster, which makes the service inaccessible via the defined external IP. Is there any reason for this? It changes back automatically even after I edit the service configuration.
Or is there any other way to access this Greenplum DB (TCP 5432), such as an ingress?
Can you provide more details on your Kubernetes cluster setup? Is this bare-metal, cloud managed, kubeadm?
It would be useful to see the logs (and perhaps some traces) from when you try to connect to the DB.
The only difference between externalTrafficPolicy: Local and Cluster is that Cluster balances the traffic between all pods backing the service, giving a nice distribution across your pods, while Local only distributes the traffic between the pods running on the node where the request lands from the load balancer.
Additionally, Local preserves the client IP of the request, whereas Cluster does not.
There is a nice video that explains this in detail here.
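If you want to pin the behaviour explicitly, the field is just part of the Service spec. A minimal sketch reusing the names from the config above (trimmed to the relevant fields):
apiVersion: v1
kind: Service
metadata:
  name: greenplum
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster   # Cluster spreads traffic across nodes; Local keeps it on the receiving node and preserves the client IP
  selector:
    statefulset.kubernetes.io/pod-name: master-0
  ports:
  - name: psql
    port: 5432
    protocol: TCP
    targetPort: 5432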
I am trying to expose a stateful mongo replica set running on my cluster for outside access.
I have three replicas, and I have created a LoadBalancer service for each replica with the same loadBalancerIP while incrementing the port sequentially from 10255 to 10257.
apiVersion: v1
kind: Service
metadata:
  name: mongo-service-0
  namespace: datastore
  labels:
    app: mongodb-replicaset
spec:
  loadBalancerIP: staticip
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    statefulset.kubernetes.io/pod-name: mongo-mongodb-replicaset-0
  ports:
  - protocol: TCP
    port: 10255
    targetPort: 27017
The issue is that only one service, mongo-service-0, is deployed successfully with the static IP; the others time out after a while.
What I am trying to figure out is whether I can use a single static IP address as the loadBalancerIP across multiple services with different ports.
Since you are using different ports for each of your Mongo replicas, you had to create a different Kubernetes Service for each replica. You can only tie a Kubernetes Service to a single load balancer, and each load balancer has its own unique IP address (or addresses), so you won't be able to share that IP address across multiple Services of type LoadBalancer.
One workaround is to use NodePort services, manage your load balancer independently, and point it at the NodePort of each one of your replicas.
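A minimal sketch of such a NodePort Service for the first replica (the name mongo-nodeport-0 and the fixed nodePort 30255 are assumptions for illustration):
apiVersion: v1
kind: Service
metadata:
  name: mongo-nodeport-0   # assumed name
  namespace: datastore
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: mongo-mongodb-replicaset-0
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
    nodePort: 30255        # your external load balancer would forward 10255 -> <node>:30255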
Another workaround is to use the same port on your Mongo replicas and use the same Kubernetes LoadBalancer service. Is there any reason why you are not using that?
I've installed minikube to learn kubernetes a bit better.
I've deployed some apps and services which have IPs in the 10.x.x.x range (private IPs). I can expose my services on minikube and visit them in my browser, but I want to use the private IPs without exposing them.
How can I reach (VPN/proxy-wise) the private IPs of services in minikube?
Minikube is Kubernetes with only one node, where the master components run on that same node. It lets you learn how Kubernetes works with minimal hardware, which makes it ideal for testing and for running on a laptop. Even so, Minikube still uses the mature networking stack from Kubernetes: ports are exposed through services, and services communicate with pods just as they would in a full cluster.
To understand what is communicating with what, let me explain what ClusterIP does - it exposes the service on an internal IP in the cluster. This type makes the service reachable only from within the cluster.
You can get the ClusterIP with the command:
kubectl get services test-service
So, after you create a new service, you will want to establish connections to its ClusterIP.
Basically, there are three ways to connect to a backend resource:
1/ Use kube-proxy - this proxy reflects services as defined in the Kubernetes API and can do simple TCP and UDP stream forwarding to a backend, or to a set of backends in more advanced configurations. Service cluster IPs and ports are currently found through Docker-compatible environment variables that specify the ports opened by the service proxy. There is an optional add-on that provides cluster DNS for these cluster IPs. The user must create a service with the apiserver API to configure the proxy.
The example below shows how a selector defines the connection to port 50000 on the ClusterIP - config.yaml may consist of:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  type: ClusterIP
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
2/ Use port forwarding to access the application - first check that the kubectl command-line tool can communicate with your minikube cluster, then find the service port from the ClusterIP configuration.
kubectl get svc | grep test-service
Let's assume the service test-service works on port 5555, so to do port forwarding run the command:
kubectl port-forward svc/test-service 5555:5555
After that, your service will be available at localhost:5555.
3/ If you are familiar with the concept of pod networking, you can declare container ports in the pod's manifest file. A user can connect to the pod network by defining a manifest such as:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 8080
When a container starts with a manifest like the one above, TCP port 8080 is declared on the container, so traffic sent to the pod's IP on port 8080 reaches it.
Please keep in mind that many components of the cluster rely on ClusterIP services to work properly. I think it is not good practice to treat ClusterIPs as regular, externally reachable network services - in the worst case you can end up with an invalid internal network state of connections that breaks the cluster.
I use Google Container Engine with Kubernetes.
I have created an HTTPS load balancer which terminates SSL and forwards traffic to the k8s cluster nodes. The problem is that I see no option to whitelist/filter incoming IP addresses. Is there any?
It sounds like you've set up a load balancer outside of Kubernetes. You may want to consider using a Kubernetes Service of type: LoadBalancer. That type of Service gives you an external IP that load balances to all of your Pods and can easily be restricted to whitelisted IPs using the loadBalancerSourceRanges setting. Here is the example from the docs at https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  ports:
  - port: 8765
    targetPort: 9376
  selector:
    app: example
  type: LoadBalancer
  loadBalancerIP: 79.78.77.76
  loadBalancerSourceRanges:
  - 130.211.204.1/32
  - 130.211.204.2/32
If you're using the GCE ingress controller, this is not yet possible [1]; only the nginx controller [2] accepts a whitelist of source IPs.
[1] https://github.com/kubernetes/ingress/issues/566
[2] https://github.com/kubernetes/ingress/blob/188c64aaac17ef29400e0f143b9aed7770e32fee/controllers/nginx/configuration.md#whitelist-source-range
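With the nginx controller, the whitelist is applied per Ingress via an annotation. A sketch (the annotation prefix, host, and backend names are assumptions and depend on your controller version and services):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress    # assumed name
  annotations:
    ingress.kubernetes.io/whitelist-source-range: "130.211.204.1/32,130.211.204.2/32"
spec:
  rules:
  - host: myapp.example.com   # assumed host
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp
          servicePort: 8765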
I'm running a bare-metal Kubernetes cluster and trying to use a load balancer to expose my services. I know that typically the load balancer is a function of the underlying public cloud, but with recent support for Ingress Controllers it seems like it should now be possible to use nginx as a self-hosted load balancer.
So far, I've been following the example here to set up an nginx Ingress Controller and some test services behind it. However, I am unable to follow Step 6, which displays the external IP for the node the load balancer is running on, because my node does not have an ExternalIP in the addresses section, only a LegacyHostIP and an InternalIP.
I've tried manually assigning an ExternalIP to my cluster by specifying it in the service's specification. However, this appears to be mapped as the externalID instead.
How can I manually set my node's ExternalIP address?
Here is something that has been tested and works for an nginx service created on a particular node.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
  - port: 443
    protocol: TCP
    targetPort: 443
    name: https
  externalIPs:
  - '{{external_ip}}'
  selector:
    app: nginx
This assumes an nginx deployment upstream listening on ports 80 and 443.
The externalIP is the public IP of the node.
I would suggest checking out MetalLB: https://github.com/google/metallb
It provides external IP addresses in a bare-metal cluster using either ARP or BGP. It has worked great for us and lets you simply request a LoadBalancer service like you would in the cloud.
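A minimal sketch of the older, ConfigMap-based MetalLB configuration for Layer 2 (ARP) mode (the address range is an assumption; newer MetalLB releases configure this with CRDs instead):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # assumed LAN range handed out as LoadBalancer IPs
With a pool like this in place, Services of type LoadBalancer get an IP from the range just as they would from a cloud provider.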