How to access a Kubernetes service load balancer with Juju

Kubernetes automatically creates a load balancer for each service in GCE. How can I manage something similar on AWS with Juju?
Kubernetes services basically use kube-proxy to handle internal traffic, but the kube-proxy IP is not reachable from the external network.
Is there a way to accomplish this when deploying a Kubernetes cluster with Juju?

I can't speak to Juju specifically, but Kubernetes supports Amazon ELB, so turning up a load balancer should work.
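For reference, here is a minimal sketch of such a Service; the name (my-app) and the ports are placeholders. On AWS, with the cloud provider integration enabled, creating a Service of type LoadBalancer should provision an ELB automatically:

apiVersion: v1
kind: Service
metadata:
  name: my-app               # hypothetical service name
spec:
  type: LoadBalancer         # asks the cloud integration (ELB on AWS) for an external LB
  selector:
    app: my-app              # assumes pods labeled app=my-app
  ports:
    - port: 80               # port exposed by the load balancer
      targetPort: 8080       # assumed container port

Once the cloud provider finishes provisioning, kubectl get svc my-app should show the ELB's external address in the EXTERNAL-IP column.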

There is a way to accomplish this, but it depends on additional work landing in the Kubernetes charms from the ~Kubernetes charmers team.
While you can reasonably hook it into an AWS ELB, Juju charms strive to be as data-center agnostic as possible, so they are easily portable between data centers and clouds: a 'one size fits most', if you will.
What I see being required is attaching the kube-proxy service to a load-balancer service (such as NGINX) and using a template generator like confd or consul-template to register and render the reverse-proxy/load-balancer configs for the services.
At present the Kubernetes bundle only has an internally functioning network, and the networking model is undergoing some permutations. If you'd like to participate in this planning and dev cycle, the recommended place to participate is the Juju mailing list: juju@lists.ubuntu.com

Related

API gateway for services running with Kubernetes?

We have all our services running on Kubernetes. We want to know the best practice for deploying our own API gateway; we thought of two solutions:
Deploy API gateways outside the Kubernetes cluster(s), e.g. with Kong. This means the clusters' ingress will connect to the external gateways. The gateways are either VMs or physical machines, and you can scale by replicating many gateway instances.
Deploy the gateway from within Kubernetes (and then perhaps connect it to an external L4 load balancer), e.g. Ambassador. However, with this approach each cluster can only have one gateway. The only way to achieve fault tolerance is to actually replicate the entire K8s cluster.
What is the typical setup and what is better?
The typical setup for an API gateway in Kubernetes is either a LoadBalancer service, if the cloud provider you are using supports dynamic provisioning of load balancers (all major cloud vendors such as GCP, AWS, and Azure support it), or, even more commonly, an ingress controller.
Both of these options scale horizontally, so you get fault tolerance; in fact, there is already an ingress controller solution for Kong:
https://github.com/Kong/kubernetes-ingress-controller
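As an illustration, here is a minimal Ingress sketch that a Kong ingress controller could serve; the host name, backend service name, and ports are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-gateway              # hypothetical name
spec:
  ingressClassName: kong         # assumes the Kong ingress controller is installed
  rules:
    - host: api.example.com      # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-api     # placeholder backend Service
                port:
                  number: 80

The controller itself runs as an ordinary Deployment, so its replicas can be scaled like any other workload, which is where the fault tolerance comes from.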

Where is a good place to learn more about a cloud-independent Kubernetes Istio ingress controller and external load balancer setup?

I am currently running Kubespray-configured Kubernetes clusters into which I am trying to integrate Istio as an ingress controller, and I am trying to figure out not how to set up the internal workings of the Istio service (for which there are tons of tutorials), but how to connect the Istio ingress to cloud-agnostic load balancers that route traffic into the cluster.
I see a lot of tutorials that cover cloud-specific methods such as AWS or GCP load balancers from within Kubernetes (which are of no use to me), but I want a Kubernetes cluster that knows and cares nothing about the external cloud environment, which makes it easier to port the cluster or to create hybrid/multi-cloud environments. I am having trouble finding information on this kind of setup. Can anyone point me to information about manually configuring external load balancers to route external traffic into the cluster without relying on Kubernetes cloud extensions?
Thanks for any information you can provide or references you can point me to!
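One cloud-agnostic pattern, sketched here with assumed names and ports (the target port and node port vary by Istio version), is to expose Istio's ingress gateway as a NodePort Service and point any external load balancer, such as HAProxy or a hardware appliance, at that port on the node IPs:

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: NodePort               # reachable on every node's IP; no cloud integration needed
  selector:
    istio: ingressgateway      # default label on the Istio ingress gateway pods
  ports:
    - name: http
      port: 80
      targetPort: 8080         # assumed gateway container port; check your Istio version
      nodePort: 31380          # fixed port for the external load balancer to target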

How to deploy a Kubernetes service (type LoadBalancer) on on-prem VMs?

How to deploy a Kubernetes service (type LoadBalancer) on on-prem VMs? When I use type=LoadBalancer it shows the external IP as "pending", but everything works fine with the same YAML if I deploy on GKE. My questions are:
Do we need a load balancer if I use type=LoadBalancer on on-prem VMs?
Can I assign the LoadBalancer IP manually in the YAML?
You need to set up MetalLB.
MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type LoadBalancer in clusters that don’t run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.
To install, run:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
For more details, see the MetalLB installation documentation.
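On first install, the v0.9 guide also has you create a memberlist secret:

kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

MetalLB (in the v0.9.x series installed above) is then configured through a ConfigMap; the address range below is a placeholder that you would replace with spare IPs from your own network:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2                  # ARP-based mode; 'bgp' is the alternative
      addresses:
      - 192.168.1.240-192.168.1.250     # placeholder range on your LAN

With MetalLB running, the answer to the second question is yes: you can request a specific address from the pool by setting spec.loadBalancerIP on the Service.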
It might be helpful to check the Banzai Cloud Pipeline Kubernetes Engine (PKE) that is "a simple, secure and powerful CNCF-certified Kubernetes distribution" platform. It was designed to work on any cloud, VM or on bare metal nodes to provide a scalable and secure foundation for private clouds. PKE is cloud-aware and includes an ever-increasing number of cloud and platform integrations.
When I use type=LoadBalancer it shows the external IP as "pending", but everything works fine with the same YAML if I deploy on GKE.
If you create a LoadBalancer service — for example to expose your own TCP-based service, or to install an ingress controller — the cloud provider integration will take care of creating the needed cloud resources and writing back the endpoint where your service will be available. If you don't have a cloud provider integration or a controller for this purpose, your Service resource will remain in the Pending state.
In case of Kubernetes, LoadBalancer services are the easiest and most common way to expose a service (redundant or not) for the world outside of the cluster or the mesh — to other services, to internal users, or to the internet.
Load balancing as a concept can happen on different levels of the OSI network model, mainly on L4 (transport layer, for example TCP) and L7 (application layer, for example HTTP). In Kubernetes, Services are an abstraction for L4, while Ingresses are a generic solution for L7 routing.
You need to set up MetalLB.
MetalLB is one of the most popular on-prem replacements for LoadBalancer cloud integrations. The whole solution runs inside the Kubernetes cluster.
The main component is an in-cluster Kubernetes controller which watches LB service resources, and based on the configuration supplied in a ConfigMap, allocates and writes back IP addresses from a dedicated pool for new services. It maintains a leader node for each service, and depending on the working mode, advertises it via BGP or ARP (sending out unsolicited ARP packets in case of failovers).
MetalLB can operate in two ways: either all requests are forwarded to pods on the leader node, or they are distributed to all nodes with kube-proxy.
Layer 7 (usually HTTP/HTTPS) load balancer appliances like F5 BIG-IP, or HAProxy- and NGINX-based solutions, may be integrated with an applicable ingress controller. If you have one of those, you won't need a LoadBalancer implementation in most cases.
Hope that sheds some light on a "LoadBalancer on bare metal hosts" question.

How to install Kubernetes dashboard on external IP address?

How to install Kubernetes dashboard on external IP address?
Is there any tutorial for this?
You can expose services and pods in several ways:
expose the internal ClusterIP service through an Ingress, if you have an ingress controller set up.
change the service type to 'type: LoadBalancer', which will try to create an external load balancer.
change the service type to 'type: NodePort', which will expose the service on a port above 30000 on all cluster machines (see the sketch after this list).
expose the pod directly by setting 'hostPort' on a container port in the pod spec.
If you have external IP addresses on your Kubernetes nodes, you can also expose ports directly on the node hosts this way; however, I would avoid that unless it's a small test cluster.
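As a quick illustration of the NodePort option (a sketch only; the dashboard's Service name, namespace, and labels vary by version, so kube-system and k8s-app=kubernetes-dashboard are assumptions here):

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard       # name/namespace vary by dashboard version
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard  # assumed pod label
  ports:
    - port: 443
      targetPort: 8443             # recent dashboard versions serve HTTPS on 8443
      nodePort: 30443              # placeholder port in the 30000+ range

The dashboard would then be reachable at https://<any-node-external-ip>:30443.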
Depending on your cluster type (kops-created, GKE, EKS, AKS, and so on), different variants may not be set up. Hosted clusters typically support and recommend LoadBalancers, which they charge for, but may or may not support NodePort/HostPort.
Another, more important note is that you must protect the dashboard. Running an unprotected dashboard is a sure way of getting your cluster compromised; this recently happened to Tesla. A decent writeup on the various ways to protect yourself was written by Joe Beda of Heptio.

Kubernetes VIP using Istio

I am new to Kubernetes and am trying to move from VM-based services to Kubernetes.
Current approach:
We have multiple VMs and run services on each of them. Services run on multiple VMs and have a VIP in front of them. Clients access the VIP, and the VIP round-robins across the available services.
I read about Istio and ingress and hope the same thing can be done using Istio. I have set up a local minikube cluster and am exploring all the use cases. I was able to deploy my service with a scaling factor of 2. Now I would like to access my service using a VIP. I am not sure how to create a VIP and expose it to other services in the Kubernetes cluster and to services running outside the Kubernetes cluster. Can I use the same existing VIP? Or do I need any extra setup to create a VIP in Kubernetes with a service name?
Thanks
Please note that Istio is an additional layer on top of other frameworks, including Kubernetes. In your case, you should port your application to Kubernetes first and then add Istio if needed.
Porting to Kubernetes:
Instead of a VIP, you define a Kubernetes Service. You change the code or configure your microservices to use the defined Kubernetes Services instead of the VIPs (both this and the Ingress are sketched after these steps).
To access your services from the outside, you define a Kubernetes Ingress.
This should probably be enough to make your application run on Kubernetes.
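A minimal sketch of both resources, with all names (my-service, the app=my-service label, myapp.example.com) as placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-service              # clients use this DNS name instead of the VIP
spec:
  selector:
    app: my-service             # assumed pod label
  ports:
    - port: 80
      targetPort: 8080          # assumed container port
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service
spec:
  rules:
    - host: myapp.example.com   # placeholder external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80

Inside the cluster, other pods reach the service at http://my-service, and kube-proxy balances requests across the pods much like the VIP did; the Ingress handles traffic from outside the cluster.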
Once you have ported your application to Kubernetes, you can add Istio; see the Istio Quick Start Guide. Istio can provide you with advanced routing, logging and monitoring, policy enforcement, traffic encryption between services, and support for various microservice patterns. See more at istio.io.
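If you do add Istio, the rough equivalent of the VIP entry point is a Gateway plus a VirtualService; here is a sketch with placeholder host and service names:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway       # binds to Istio's default ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "myapp.example.com"   # placeholder hostname
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
    - "myapp.example.com"
  gateways:
    - my-gateway
  http:
    - route:
        - destination:
            host: my-service    # the Kubernetes Service defined earlier
            port:
              number: 80

Istio's default load-balancing policy then distributes requests across the service's pods, which replaces the VIP's round-robin behavior.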