In kubernetes cluster, kafka service not showing an external IP, though I used LoadBalancer type - kubernetes

I followed the steps from this guide to create a kafka pod:
https://dzone.com/articles/ultimate-guide-to-installing-kafka-docker-on-kuber
Although I used the LoadBalancer type for kafka-service (as the guide says), I don't get an external IP for kafka-service.
On the Kubernetes dashboard, kafka-service is shown as running.

LoadBalancer Services and Ingress are only available out of the box if your Kubernetes cluster runs on a cloud provider such as GCP, AWS, or Azure. They are not supported by default on bare-metal installations.
However, if you are running Kubernetes on bare metal, you can use MetalLB to enable the LoadBalancer service type and, combined with an ingress controller, Ingress.
Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.
For minikube
On minikube, the LoadBalancer type makes the Service accessible through the minikube service command.
Run the following command:
minikube service hello-node
Or you can enable the nginx ingress addon if you want to create an Ingress:
minikube addons enable ingress
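For reference, a minimal Service manifest of type LoadBalancer that minikube service hello-node would open might look like this (the port numbers and labels are assumptions based on the usual hello-node example, not taken from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  type: LoadBalancer
  selector:
    app: hello-node      # must match the deployment's pod labels
  ports:
  - port: 8080           # port the Service listens on
    targetPort: 8080     # container port the traffic is forwarded to
```

On minikube the EXTERNAL-IP column will stay pending, but minikube service still gives you a reachable URL.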

Related

Get Externally accessible IP address of Pod in Kubernetes

I need to create two instances using the same Ubuntu image in Kubernetes. Each instance uses two ports, i.e. 8080 and 9090. How can I access these two ports externally? Can we use the IP address of the worker in this case?
If you want to access your Ubuntu instances from outside the k8s cluster, you should place the pods behind a Service.
You can access Services through public IPs:
create a Service of type NodePort - the Service will be available on <NodeIp>:<NodePort>
create a Service of type LoadBalancer - if you are running your workload in the cloud, creating a Service of type LoadBalancer will automatically provision a load balancer for you.
Alternatively, you can deploy an Ingress to expose your Service. You would also need an Ingress Controller.
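As a sketch, a single NodePort Service covering both ports from the question could look like this (the service name, labels, and node port numbers are illustrative assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ubuntu-svc        # hypothetical name
spec:
  type: NodePort
  selector:
    app: ubuntu           # must match your pod labels
  ports:
  - name: port-8080
    port: 8080
    targetPort: 8080
    nodePort: 30080       # reachable at <NodeIp>:30080
  - name: port-9090
    port: 9090
    targetPort: 9090
    nodePort: 30090       # reachable at <NodeIp>:30090
```

With NodePort, the answer to the last part of the question is yes: any worker's IP address works, because kube-proxy opens the node port on every node.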
Useful links:
GCP example
Ingress Controller
Ingress
Kubernetes Service

Kubernetes Default Loadbalancer for a service run locally?

Currently I have 3 virtual machines (1 master Kubernetes node and 2 workers).
I want to create a service which encapsulates 3 replicas of my container.
I am curious whether, in this local environment, Kubernetes provides a load balancer by default when creating the service, even though it was NOT specified in the service yaml file. Does it offer round robin by default?
If you're not on a supported cloud provider, you're pretty much stuck with NodePort or ClusterIP for service types. A project I used when I was experimenting with a local Kubernetes environment is MetalLB. MetalLB allows you to use the LoadBalancer service type and expose your service outside of the cluster network when running Kubernetes outside a hosted platform, i.e. a local test cluster.
To use MetalLB, you must provide a pool of IP addresses that it can use on your local network.
First, create a ConfigMap with your IP address range:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
Then add that config map to your cluster.
kubectl apply -f metallb-config.yaml
Finally, add the MetalLB controller to your cluster:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
Now you should be able to expose your service.
kubectl expose deployment name-of-deployment --type=LoadBalancer --name=name-of-service
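Once MetalLB is running, you can check that the service actually received an address from the pool (the service name is the one chosen above; the jsonpath expression assumes a single assigned ingress IP):

```shell
# EXTERNAL-IP should show an address from the 192.168.1.240-250 pool
kubectl get service name-of-service

# Or extract just the assigned address
kubectl get service name-of-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

If the EXTERNAL-IP stays pending, the MetalLB controller pods or the ConfigMap are the first things to check.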
You cannot practically use a LoadBalancer service locally. The LoadBalancer service type creates a load balancer provided by your cloud provider when you are running on a public cloud, and you can get L7 load-balancing capabilities via your cloud provider's offering. Load balancing at L4 is handled by kube-proxy, which is round robin by default.
If you are using ClusterIP or NodePort, you still get the L4 load balancing offered by kube-proxy.
You won't be able to do that locally because there is no cloud controller available.
When you are in the cloud and create a Service of type LoadBalancer, the Kubernetes controller talks to the cloud controller, which creates a load balancer for the cluster. In this case there is no cloud controller available to create one.

Kubernetes LoadBalancer service with hostNetwork binding

I have a query regarding the usage of a LoadBalancer service with hostNetwork
If we set hostNetwork: true, then the pods bind on the host network, to which the external services connect. If we need only one instance of the pod running, then I believe we do not need a LoadBalancer service for external services to connect to the pod. I do not see any use case for a LoadBalancer service here - or are there any I am missing?
hostNetwork: true is not the recommended approach for exposing pods outside of the cluster. It has a few limitations:
Only one instance of a pod can run on a given node on the same port.
You have to use the node IP to access the pod; however, the node IP can change.
If the pod fails, the k8s scheduler may respawn it on a different node.
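For reference, this is roughly what the setup from the question looks like in a pod spec (the pod name, image, and port are placeholders, not from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-net-pod      # hypothetical name
spec:
  hostNetwork: true       # pod shares the node's network namespace
  containers:
  - name: app
    image: nginx          # placeholder image
    ports:
    - containerPort: 80   # binds directly on the node's port 80
```

Because the pod occupies port 80 on the node itself, a second replica scheduled to the same node would fail to start, which is the first limitation listed above.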
The recommended way for exposing pods outside of the cluster is via Kubernetes Service Controllers.
All service controllers act as load balancers (they will balance the traffic across all "ready" pods) no matter the Service.spec.type property.
Service.spec.type property can be one of the below:
ClusterIP, NodePort, LoadBalancer, ExternalName
The LoadBalancer type means that k8s will use a cloud provider LoadBalancer to expose the service outside of the cluster (for example AWS Elastic Load balancer if the k8s cluster is running on AWS).
LoadBalancer: Exposes the Service externally using a cloud provider’s
load balancer. NodePort and ClusterIP Services, to which the external
load balancer routes, are automatically created.
More on k8s service types

kubernetes service exposed to host ip

I created a kubernetes service something like this on my 4 node cluster:
kubectl expose deployment distcc-deploy --name=distccsvc --port=8080 --target-port=3632 --type=LoadBalancer
The problem is how to expose this service on an external IP. Without an external IP you cannot ping or reach the service endpoint from an outside network.
I am not sure if I need to change kube-dns or make some other change.
Ideally I would like the service to be exposed on the host IP,
like http://localhost:32876.
Hypothetically, let's say
I have a cluster of 4 VMs on which I am running, say, an nginx service, and I expose it as a LoadBalancer service. How can I access nginx using this service from a VM?
Let's say the service name is nginxsvc - is there a way I can do http://<external-ip>:8080? How will I get this IP for my 4-node cluster?
LoadBalancer does different things depending on where you deployed Kubernetes. If you deployed on AWS (using kops or some other tool), it'll create an Elastic Load Balancer to expose the service. If you deployed on GCP it'll do something similar - Google's terminology escapes me at the moment. These are separate VMs in the cloud routing traffic to your service. If you're playing around in minikube, LoadBalancer doesn't really do anything; it falls back to a node port, with the assumption that the user understands minikube isn't capable of providing a true load balancer.
LoadBalancer is supposed to expose your service via a brand new IP address. This is what happens on the cloud providers: they provision VMs with a separate public IP address (GCP gives a static address and AWS a DNS name). NodePort exposes the service on a port on every Kubernetes node. This isn't a workable solution for a general deployment, but it works fine while developing.
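While developing, you can look up the node port a service was given and hit it directly. A sketch, assuming the nginxsvc service from the question and a single service port:

```shell
# Read the auto-assigned nodePort (from the 30000-32767 range by default)
NODE_PORT=$(kubectl get svc distccsvc -o jsonpath='{.spec.ports[0].nodePort}')

# Any node's IP works: kube-proxy opens the port on every node
curl "http://<node-ip>:${NODE_PORT}"
```

Replace <node-ip> with the address of any of the 4 VMs; for the distcc service above, substitute its service name in the first command.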

Kubernetes External Load Balancer Service on DigitalOcean

I'm building a container cluster using CoreOS and Kubernetes on DigitalOcean, and I've seen that in order to expose a Pod to the world you have to create a Service with Type: LoadBalancer. I think this is the optimal solution, so that you don't need to add an external load balancer outside Kubernetes like nginx or HAProxy. I was wondering if it is possible to create this using DO's Floating IP.
Things have changed, DigitalOcean created their own cloud provider implementation as answered here and they are maintaining a Kubernetes "Cloud Controller Manager" implementation:
Kubernetes Cloud Controller Manager for DigitalOcean
Currently digitalocean-cloud-controller-manager implements:
nodecontroller - updates nodes with cloud provider specific labels and
addresses, also deletes kubernetes nodes when deleted on the cloud
provider.
servicecontroller - responsible for creating LoadBalancers
when a service of Type: LoadBalancer is created in Kubernetes.
To try it out, clone the project on your master node.
Next, get an API token from https://cloud.digitalocean.com/settings/api/tokens and run:
export DIGITALOCEAN_ACCESS_TOKEN=abc123abc123abc123
scripts/generate-secret.sh
kubectl apply -f do-cloud-controller-manager/releases/v0.1.6.yml
There are more examples here.
What will happen once you do the above? DO's cloud controller manager will create a load balancer (one that has a failover mechanism out of the box; more on that in the load balancer's documentation).
Things will change again soon, as DigitalOcean is jumping on the Kubernetes bandwagon: check here, and you will have the option to let them manage your Kubernetes cluster instead of worrying about much of the infrastructure yourself (this is my understanding of the service; let's see how it works when it becomes available...).
The LoadBalancer type of service is implemented by adding code to the Kubernetes master that is specific to each cloud provider. At the time of this answer there was no cloud provider integration for DigitalOcean (see supported cloud providers), so the LoadBalancer type could not take advantage of DigitalOcean's Floating IPs.
Instead, you should consider using a NodePort service, or attaching an external IP to your service and mapping the exposed IP to a DO Floating IP.
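A sketch of the external-IP variant mentioned above (the service name, labels, and address are placeholders; the address must be an IP actually owned by one of your droplets):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc            # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app           # must match your pod labels
  ports:
  - port: 80
    targetPort: 80
  externalIPs:
  - 10.0.0.5              # placeholder: an IP owned by one of your droplets
```

Traffic arriving at that address on port 80 is then routed by kube-proxy to the service's pods, and the DO Floating IP can be pointed at the droplet that owns it.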
It is actually possible to expose a service through a Floating IP. The only catch is that the external IP you need to use is a little unintuitive.
From what it seems, DO has some sort of overlay network for their Floating IP service. To get the actual IP you need to expose, SSH into your gateway droplet and find its anchor IP by querying the metadata service:
curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
and you will get something like
10.x.x.x
This is the address that you can use as the external IP in a LoadBalancer-type Service in Kubernetes.
Example:
kubectl expose rc my-nginx --port=80 --public-ip=10.x.x.x --type=LoadBalancer
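Note that the --public-ip flag no longer exists on recent kubectl versions; the same result can be expressed directly in a Service manifest (the selector label is an assumption about the rc's pod labels, and 10.x.x.x stands for the anchor IP found above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: LoadBalancer
  selector:
    app: my-nginx        # assumption: must match the rc's pod labels
  ports:
  - port: 80
  externalIPs:
  - 10.x.x.x             # the anchor IP found above (placeholder)
```

Applying this manifest with kubectl apply is the declarative equivalent of the kubectl expose command above.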