Kubernetes VIP using Istio

I am new to Kubernetes and am trying to move from VM-based services to Kubernetes.
Current approach:
We have multiple VMs, with services running on each of them. Each service runs on several VMs and has a VIP in front of it. Clients access the VIP, which round-robins requests across the available service instances.
I have read about Istio and Ingress and hope the same thing can be done with Istio. I have set up a local minikube cluster and am exploring the use cases. I was able to deploy my service with a scaling factor of 2. Now I would like to access my service using a VIP. I am not sure how to create a VIP and expose it to other services in the Kubernetes cluster as well as to services running outside the cluster. Can I use the same existing VIP, or do I need some extra setup to create a VIP in Kubernetes under a service name?
Thanks

Please note that Istio is an additional layer on top of other frameworks, including Kubernetes. In your case you should port your application to Kubernetes first, and then add Istio if needed.
Porting to Kubernetes:
Instead of a VIP, you define a Kubernetes Service. You then change the code or configuration of your microservices to use the defined Kubernetes Services instead of the VIPs.
To access your services from the outside, you define a Kubernetes Ingress.
This should probably be enough to make your application run on Kubernetes.
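As a rough illustration, here is a minimal sketch of what replaces the VIP, using placeholder names (my-service, my-app and my-service.example.com are illustrative, not taken from your setup):

apiVersion: v1
kind: Service
metadata:
  name: my-service               # stable, VIP-like name, resolvable as "my-service" inside the cluster
spec:
  selector:
    app: my-app                  # must match the labels on the pods of your scaled Deployment
  ports:
  - port: 80                     # port in-cluster clients connect to
    targetPort: 8080             # port your containers actually listen on
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service-ingress
spec:
  rules:
  - host: my-service.example.com # external hostname; requires an ingress controller in the cluster
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80

In-cluster clients then call http://my-service and kube-proxy round-robins across your two replicas, which is the role the VIP played before; external clients come in through the Ingress.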
Once you have ported your application to Kubernetes, you can add Istio; see the Istio Quick Start Guide. Istio can provide you with advanced routing, logging and monitoring, policy enforcement, traffic encryption between services, and support for various microservices patterns. See more at istio.io.
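If you do add Istio later, the advanced routing mentioned above is expressed with Istio's own resources on top of the Kubernetes Service. A hedged sketch of a VirtualService that splits traffic between two versions (my-service and the v1/v2 subsets are placeholders, and the subsets would need a matching DestinationRule, not shown here):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service                   # the Kubernetes Service defined above
  http:
  - route:
    - destination:
        host: my-service
        subset: v1               # subsets are defined in a DestinationRule
      weight: 90
    - destination:
        host: my-service
        subset: v2
      weight: 10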

Related

How do Kubernetes clouds like GKE, EKS, etc. allocate an external IP for a Service of type LoadBalancer?

I am new to K8s. I am trying to self-deploy a K8s cluster on internal company servers, and I have a question: how do I set up my K8s cluster so it can allocate an external IP for a Service of type LoadBalancer? Could you tell me how this works in GKE or EKS?
Updated based on your comment:
What I meant is: how do EKS or GKE allocate the IP behind the scenes? What is the mechanism?
Here's the EKS version and here's the GKE version. It's a complex topic; I suggest you use these materials as a starting point before diving into the technical details (for which the previous answer gave you the sources). If you are thinking of an on-premises K8s cluster, it depends on the CNI you use; a well-known CNI is Calico.
In GKE you can define Services to expose, or make accessible, the applications running in the cluster. There are several kinds of Services; one of them is the LoadBalancer Service, which can have an external IP address.
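For illustration, a minimal LoadBalancer Service sketch (the name and labels are placeholders); on GKE or EKS the cloud controller sees this object, provisions a cloud load balancer, and writes the external address back into the Service status:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer             # the cloud integration allocates the external IP / hostname
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080

Once provisioning finishes, kubectl get service web shows the allocated address under EXTERNAL-IP; on a self-hosted cluster with no cloud integration it stays in pending unless you add an implementation such as MetalLB (see the next question).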

How to deploy a Kubernetes Service of type LoadBalancer on on-prem VMs?

How do I deploy a Kubernetes Service of type LoadBalancer on on-prem VMs? When I use type: LoadBalancer, the external IP shows as "pending", but everything works fine with the same YAML if I deploy it on GKE. My questions are:
Do we need a load balancer if I use type: LoadBalancer on on-prem VMs?
Can I assign the LoadBalancer IP manually in the YAML?
You need to set up MetalLB.
MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type LoadBalancer in clusters that don’t run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.
To install, run:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
For more details, see the MetalLB installation documentation.
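After installing, MetalLB v0.9.x is configured through a ConfigMap. A minimal layer 2 sketch, assuming 192.168.1.240-192.168.1.250 is an unused range on your LAN (replace it with addresses you actually own); note that the v0.9.x manifests also expect a memberlist secret to be created, as described in the installation docs:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250

With this in place, Services of type LoadBalancer get an external IP from that pool instead of staying in pending.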
It might also be helpful to check the Banzai Cloud Pipeline Kubernetes Engine (PKE), which is "a simple, secure and powerful CNCF-certified Kubernetes distribution". It was designed to work on any cloud, on VMs, or on bare-metal nodes to provide a scalable and secure foundation for private clouds. PKE is cloud-aware and includes a growing number of cloud and platform integrations.
When I use type: LoadBalancer, the external IP shows as "pending", but everything works fine with the same YAML if I deploy it on GKE.
If you create a LoadBalancer service — for example try to expose your own TCP based service, or install an ingress controller — the cloud provider integration will take care of creating the needed cloud resources, and writing back the endpoint where your service will be available. If you don't have a cloud provider integration or a controller for this purpose, your Service resource will remain in Pending state.
In the case of Kubernetes, LoadBalancer services are the easiest and most common way to expose a service (redundant or not) to the world outside the cluster or the mesh: to other services, to internal users, or to the internet.
Load balancing as a concept can happen on different levels of the OSI network model, mainly on L4 (transport layer, for example TCP) and L7 (application layer, for example HTTP). In Kubernetes, Services are an abstraction for L4, while Ingresses are a generic solution for L7 routing.
You need to set up MetalLB.
MetalLB is one of the most popular on-prem replacements for LoadBalancer cloud integrations. The whole solution runs inside the Kubernetes cluster.
The main component is an in-cluster Kubernetes controller which watches LB service resources, and based on the configuration supplied in a ConfigMap, allocates and writes back IP addresses from a dedicated pool for new services. It maintains a leader node for each service, and depending on the working mode, advertises it via BGP or ARP (sending out unsolicited ARP packets in case of failovers).
MetalLB can operate in two ways: either all requests are forwarded to pods on the leader node, or they are distributed to all nodes via kube-proxy.
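For the BGP mode mentioned above, the same v0.9.x ConfigMap carries the peer configuration; a hedged sketch with placeholder router address, ASNs, and address pool (adjust to your own network):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1     # your BGP-capable router
      peer-asn: 64501
      my-asn: 64500
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.10.0/24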
Layer 7 (usually HTTP/HTTPS) load balancer appliances like F5 BIG-IP, or HAProxy- and Nginx-based solutions, can be integrated via an applicable ingress controller. If you have one of those, you won't need a LoadBalancer implementation in most cases.
Hope that sheds some light on the "LoadBalancer on bare metal hosts" question.

How to install the Kubernetes dashboard on an external IP address?

How to install the Kubernetes dashboard on an external IP address?
Is there any tutorial for this?
You can expose services and pods in several ways:
expose the internal ClusterIP service through Ingress, if you have that set up.
change the service type to use 'type: LoadBalancer', which will try to create an external load balancer.
change the service type to 'type: NodePort', which will expose the service on a port above 30000 on all cluster machines (see the sketch after this list).
expose the pod directly using 'type: HostPort' in the pod spec.
If you have external IP addresses on your Kubernetes nodes, the last two options expose ports directly on the node hosts; I would avoid them unless it's a small test cluster.
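As a concrete illustration of the NodePort option, a hedged sketch; the namespace, labels, and ports of the dashboard differ between releases (kube-system in older ones, kubernetes-dashboard in newer ones), so verify them on your cluster:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-nodeport
  namespace: kube-system              # or kubernetes-dashboard in newer releases
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard     # label used by the stock dashboard deployment
  ports:
  - port: 443
    targetPort: 8443                  # recent dashboard versions serve HTTPS on 8443
    nodePort: 30443                   # must be in the 30000-32767 range

The dashboard is then reachable at https://<node-ip>:30443, which is exactly why the warning below about protecting it matters.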
Depending on your cluster type (kops-created, GKE, EKS, AKS, and so on), different variants may not be set up. Hosted clusters typically support and recommend LoadBalancers, which they charge for, but may or may not have support for NodePort/HostPort.
Another, more important note: you must ensure you protect the dashboard. Running an unprotected dashboard is a sure way to get your cluster compromised; this recently happened to Tesla. A decent write-up on various ways to protect yourself was written by Joe Beda of Heptio.

Should we run a Consul container in every Pod?

We run our stack on the Google Cloud Platform (hosted Kubernetes, GKE) and have a Consul cluster running outside of K8s (regular GCE instances).
Several services running in K8s use Consul, mostly for its CP K/V store and advanced locking, not so much for service discovery so far.
We recently ran into some issues with using the Consul service discovery from within K8s. Right now our apps talk directly to the Consul Servers to register and unregister services they provide.
This is not the recommended best practice; usually Consul clients (i.e. apps using Consul) should talk to the local Consul agent. In our setup there are no local Consul agents.
My Question: Should we run local Consul agents as sidekick containers in each pod?
IMHO this would be a huge waste of resources, but it would better match the Consul best practices.
I tried searching on Google, but all posts about Consul and Kubernetes talk about running Consul in K8s, which is not what I want to do.
As the official Consul Helm chart and the documentation suggest, the standard approach is to run a DaemonSet of Consul clients and then use the Connect sidecar injector to inject sidecars into your pods simply by adding an annotation to the pod spec. This handles all of the boilerplate and is in line with best practices.
Consul: Connect Sidecar; https://www.consul.io/docs/platform/k8s/connect.html
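A hedged sketch of what this looks like from the application side, assuming the official Helm chart (client DaemonSet plus Connect injector) is installed; the deployment name and image are placeholders, and the only Consul-specific piece is the pod annotation:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        consul.hashicorp.com/connect-inject: "true"   # injector adds the Consul sidecar proxy automatically
    spec:
      containers:
      - name: my-app
        image: my-app:1.0.0                           # placeholder image

Registration and health checking then go through the node-local client agents rather than straight to the Consul servers, which matches the best practice described above.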

How to access a Kubernetes service load balancer with Juju

Kubernetes creates a load balancer for each service automatically in GCE. How can I manage something similar on AWS with Juju?
Kubernetes Services basically use kube-proxy to handle internal traffic, but the kube-proxy IP is not reachable from the external network.
Is there a way to accomplish this when deploying a Kubernetes cluster with Juju?
I can't speak to Juju specifically, but Kubernetes supports Amazon ELB, so turning up a load balancer should work.
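A minimal sketch of what that looks like as a Service manifest, assuming the cluster (however Juju deployed it) has the AWS cloud provider integration enabled; the nlb annotation is optional and shown only as an example, the default being a classic ELB:

apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb   # optional; omit for a classic ELB
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080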
There is a way to accomplish this, but it depends on additional work landing in the Kubernetes charms from the ~Kubernetes charmer team.
While you can reasonably hook it into an AWS ELB, Juju charms strive to be as DC-agnostic as possible so they are easily portable between data centers and clouds. A 'one size fits most', if you will.
What I see being required is attaching the kube-proxy service to a load balancer service (such as Nginx) and using a template generator service like confd or consul-template to register/render the reverse proxy / load balancer configs for the services.
At present the Kubernetes bundle only has an internally functioning network, and the networking model is undergoing some permutations. If you'd like to participate in this planning and dev cycle, the recommended place to participate is the juju mailing list: juju@lists.ubuntu.com