Istio service mesh in an on-premises Kubernetes cluster in a data center

Currently, we run a k8s cluster in our data center for compliance reasons, with Traefik as the ingress controller. Now we want to add a service mesh to monitor service-to-service communication. Can you suggest how to do it? Do I replace the Traefik ingress controller and run the Istio ingress in a host-network setup, or is there a better option that keeps Traefik and adds Istio alongside it?

If you are going to install Istio to get its "free" observability features, keep in mind that in some scenarios it simply doesn't fit: e.g. if you want to measure latency within a single service, that is not possible with Istio, which only sees traffic between services.
I would recommend Istio if you need a service mesh and/or traffic routing in addition to observability, but don't install it just for observability; there are other tools dedicated to that.
That's without counting the fact that you will spend cluster resources on an extra sidecar container for every service, just for monitoring. Not a good approach, in my opinion.
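To the layout question itself: Istio's mesh features (sidecars, telemetry) are independent of its ingress gateway, so you can keep Traefik at the edge. A minimal sketch of an istioctl-based install that disables Istio's own gateway; treat the exact fields as version-dependent:

```yaml
# IstioOperator: install the mesh but disable Istio's ingress gateway,
# leaving Traefik as the cluster's only entry point.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: mesh-only
spec:
  profile: default
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: false
```

Install it with istioctl install -f mesh-only.yaml, then label the namespaces you want meshed with istio-injection=enabled. Traffic still enters through Traefik; Istio only observes and controls the pod-to-pod hops behind it.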

Related

Installing knative on existing kind cluster

I have an existing kind k8s cluster with a bunch of running services and an nginx-ingress setup, and I would like to add Knative to it.
Is there a way of doing this with nginx-ingress? It seems like networking for Knative is a bit more complex than a normal service installation.
Knative needs more capabilities out of the HTTP routing layer than are exposed via the Kubernetes Ingress resource. (Percentage splits and header rewrite are two of the big ones.)
Unfortunately, no one has written an adapter from Knative's "KIngress" implementation to the nginx ingress. In the future, the Gateway API (aka "Ingress V2") may provide these capabilities; in the meantime, you'll need to install one of the other network adapters and ingress implementations. Kourier provides the smallest implementation, while Contour also provides an Ingress implementation if you want to switch away from nginx entirely.
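For completeness, a rough sketch of what pointing Knative Serving at Kourier involves: apply the net-kourier release manifest matching your Knative version, then set Kourier as the ingress class in the config-network ConfigMap (the class value below is the one Kourier documents):

```yaml
# Switch Knative Serving's routing layer to Kourier.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  ingress-class: "kourier.ingress.networking.knative.dev"
```

Your existing nginx-ingress keeps serving the non-Knative workloads; only Knative Services route through Kourier.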

Kubernetes - is Service Mesh a must?

Recently I built several microservices within a k8s cluster with the Nginx ingress controller, and they are working normally.
For communication among the microservices, I tried gRPC and it worked. Then I discovered that when microservice A calls microservice B over gRPC, all requests land on a single pod of microservice B (out of, say, 10 available pods). To load-balance the requests across all pods of microservice B, I tried Linkerd and it worked. However, gRPC would occasionally produce an internal error (roughly 1 error per 100 requests), which made me switch to the k8s DNS approach (e.g. my-svc.my-namespace.svc.cluster-domain.example); after that, the requests never failed. I have since put gRPC and Linkerd on hold.
Later, I became interested in Istio and successfully deployed it to the cluster. However, I observed that it always creates its own load balancer, which doesn't fit well with the existing Nginx ingress controller.
Furthermore, I tried Prometheus and Grafana, as well as k9s. These tools gave me a better understanding of the CPU and memory usage of the pods.
Here are several questions that I wish to understand:
If I need to monitor cluster resources, we have Prometheus, Grafana and k9s. Do they play the same monitoring role as a service mesh (e.g. Linkerd, Istio)?
If k8s DNS can already achieve load balancing, do we still need a service mesh?
If we use k8s without a service mesh, are we lagging behind normal practice?
Actually, I do want to use a service mesh every day.
The simple answer is:
a service mesh is not necessary for a Kubernetes cluster.
Now to answer your questions:
If I need to monitor cluster resources, we have Prometheus, Grafana and k9s. Do they play the same monitoring role as a service mesh (e.g. Linkerd, Istio)?
K9s is a CLI tool that is essentially a replacement for the kubectl CLI; it is not a monitoring tool. Prometheus and Grafana are monitoring tools that use the data provided by applications (pods) and build time-series data that can be visualized as charts, graphs, etc. However, the applications have to expose that data to Prometheus. A service mesh uses a sidecar to provide some default metrics useful for monitoring, such as the number of requests handled per second, so your application doesn't need any knowledge or implementation of the metrics. Thus service meshes are optional; they offload common concerns such as monitoring and authorization.
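As a concrete illustration of "the applications have to expose the data": a hedged sketch of a Deployment whose pods advertise their metrics endpoint via the common prometheus.io/* annotation convention (all names and ports here are placeholders, and the annotations only work if your Prometheus is configured to honor them; many setups use ServiceMonitors instead):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                          # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        prometheus.io/scrape: "true"    # ask Prometheus to scrape this pod
        prometheus.io/port: "8080"      # assumed metrics port
        prometheus.io/path: "/metrics"  # assumed metrics path
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0  # placeholder image
          ports:
            - containerPort: 8080
```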
If k8s DNS can already achieve load balancing, do we still need a service mesh?
A service mesh is not needed for load balancing. When you have multiple services running in the cluster and want a single entry point for all of them, to simplify maintenance and save cost, you use an ingress controller such as Nginx, Traefik or HAProxy. Service meshes such as Istio also come with their own ingress controller.
If we use k8s without a service mesh, are we lagging behind normal practice?
No, plenty of clusters run Kubernetes today without a service mesh.
In the future, Kubernetes may absorb some functionality from service meshes.
A service mesh is not a silver bullet, and it doesn't fit every use case. A service mesh will not do everything for you; it also has bugs and feature limitations.
You can use Prometheus without Istio and have very nice application monitoring. A service mesh can simplify some monitoring tasks for you, but that doesn't mean you cannot do them yourself.
Please don't think of DNS as a load-balancing solution. Kubernetes has Services and Ingresses to do load balancing, and the Nginx Ingress today is very powerful, with many advanced features. (Note that a Service balances per TCP connection, which is why the long-lived gRPC/HTTP2 connections in your test all landed on one pod.)
It heavily depends on your use case.
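On the gRPC behavior described in the question: since a ClusterIP Service balances per connection, a common mesh-free fix is a headless Service plus gRPC client-side load balancing. A minimal sketch, with the service name and port as placeholders:

```yaml
# Headless Service: DNS returns every ready pod IP instead of one
# virtual IP, so a round_robin gRPC client can spread requests.
apiVersion: v1
kind: Service
metadata:
  name: microservice-b    # placeholder name
spec:
  clusterIP: None         # headless
  selector:
    app: microservice-b   # placeholder pod label
  ports:
    - name: grpc
      port: 50051         # assumed gRPC port
      targetPort: 50051
```

The client then dials dns:///microservice-b:50051 with the round_robin load-balancing policy enabled, re-resolving as pods come and go.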

Deploying and exposing a microservice on OpenShift

I'm new to the k8s world and am using OpenShift 4.2.18. I want to deploy microservices on it. What I need is one common IP, with each microservice reachable under its own virtual path.
Like this,
https://my-common-ip/microservice1/
https://my-common-ip/microservice2/
https://my-common-ip/microservice3/
Service and deployment are OK. However, I'm confused by the other terms. Should I use a Route or an Ingress? Should I use a VirtualService as in this link? I've also heard about HAProxy and Istio. What's the best way of doing this? I would appreciate any information about these terms.
Thanks in advance, Best Regards
Route and Ingress are intended to achieve the same end. Originally Kubernetes had no such concept, so in OpenShift the concept of a Route was developed, along with the bits for providing a load-balancing proxy etc. In time it was seen as useful to have something like this in Kubernetes itself, so, using OpenShift's Route as a starting point for what could be done, Ingress was developed for Kubernetes. The Ingress version went for a more generic, rules-based system, so how you specify them looks different, but the intent is to be able to do effectively the same thing. If you intend to deploy your application on multiple Kubernetes distributions at the same time, Ingress might be the better option.
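For the path layout in the question, a hedged sketch of one OpenShift Route per microservice (Service names, port and TLS mode are placeholders; a Kubernetes Ingress with path rules expresses the same idea):

```yaml
# One Route per microservice, all behind the router's common address.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: microservice1
spec:
  path: /microservice1     # the virtual path for this service
  to:
    kind: Service
    name: microservice1    # placeholder Service name
  port:
    targetPort: 8080       # assumed container port
  tls:
    termination: edge      # HTTPS ends at the router
```

Repeat with /microservice2 and /microservice3 pointing at their own Services; all three then share the router's single IP.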
VirtualService and Istio belong to a service mesh, which is not necessary for external access to an app. A service mesh brings extra complexity; unless the capabilities it offers are really needed for your use case, there is no reason to use it.

Deploying a mobile app backend with kubernetes

I need some advice on how to deploy a high-traffic mobile app back-end using Kubernetes. This deployment should support HA at least. We have plans to run a DR site as well, but the scope of this question does not include DR.
We currently use hardware load balancers to route incoming traffic to different IP addresses attached to different boxes. Each such box runs an nginx instance as a reverse proxy, which also acts as the HTTPS terminator. After HTTPS termination, traffic is directed to an Apache web server; each box has one Apache server receiving all traffic from the nginx running on the same box.
We want to introduce kubernetes to this setup so that we can utilize boxes better. Our traffic patterns are highly fluctuating and we believe kubernetes can help us utilize boxes in a more efficient manner.
My current plan is as follows:
-- Keep the hardware load balancer to route incoming traffic to different boxes (this may not be needed, but getting rid of the HLB could become very political).
-- Run a Kubernetes cluster utilizing all available boxes.
-- Pack Apache + our app as a Docker image and run that image in containers inside pods in the Kubernetes cluster.
-- Set up ingress to accept external traffic, do HTTPS termination and load-balance to the above pods. A simple round-robin or random load-balancing algorithm is fine, as our back ends are stateless.
Does this sound right? Are there any alternatives? In the above case, where does the ingress controller run?
Your plan seems right. You can pack Apache together with your app, but it is better to keep them in separate containers so that they can talk to each other and a version upgrade of one doesn't depend on the other.
Also, the hardware load balancer will funnel the traffic to the ingress, which will bring it into the k8s cluster and eventually to the pods.
The ingress controller runs inside the cluster. I guess you're looking to run Kubernetes on-premises with your existing hardware. To use the existing hardware load balancer outside of Kubernetes, you could run the nginx ingress controller as a DaemonSet so that there's one instance on each node, and expose it via hostPort so that each instance listens on the same port. Or, if there are lots of nodes, you'd want to use a Deployment instead and expose it with a NodePort Service, so that Kubernetes sends the traffic to a node where an ingress controller pod runs; a sketch follows below.
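A minimal sketch of that NodePort Service, assuming the controller pods carry the usual ingress-nginx labels (your install may label them differently):

```yaml
# NodePort Service in front of the nginx ingress controller pods.
# The hardware load balancer can target every node on 30080/30443.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx  # assumed controller label
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
```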
Another alternative would be to expose the nginx ingress controller through a LoadBalancer Service; to do that you'd need to integrate your load balancer with Kubernetes using something like https://hackernoon.com/metallb-a-load-balancer-for-bare-metal-kubernetes-clusters-f7320fde52f2
Alternatively, you wouldn't necessarily have to use Ingress at all. You could just run nginx in the cluster and expose it via a NodePort Service.
It's not clear to me that you need the Apache HTTP server in your containers; I guess it depends on how you are using it currently.

Kubernetes VIP using Istio

I am new to Kubernetes and trying to move from VM-based services to Kubernetes.
Current approach:
We have multiple VMs and run services on each of them. Each service runs on several VMs with a VIP in front of it; clients access the VIP, and the VIP round-robins across the available service instances.
I read about Istio and Ingress and hope the same thing can be done with Istio. I have set up a local minikube cluster and am exploring the use cases. I was able to deploy my service with a scaling factor of 2. Now I would like to access the service via a VIP, but I am not sure how to create a VIP and expose it to other services inside the Kubernetes cluster as well as to services running outside the cluster. Can I reuse the existing VIP, or do I need some extra setup to create a VIP in Kubernetes under a service name?
Thanks
Please note that Istio is an additional layer on top of other frameworks, including Kubernetes. In your case you should port your application to Kubernetes first, and then add Istio if needed.
Porting to Kubernetes:
Instead of a VIP, you define a Kubernetes Service, and you change the code or configuration of your microservices to use the defined Kubernetes Services instead of the VIPs.
To access your services from the outside, you define a Kubernetes Ingress.
This probably should be enough to make your application run on Kubernetes.
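A minimal sketch of the two resources just mentioned, with every name and port a placeholder: the Service's ClusterIP plays the role of the old in-cluster VIP, and the Ingress exposes it to clients outside the cluster.

```yaml
# ClusterIP Service: the in-cluster stand-in for the VIP.
apiVersion: v1
kind: Service
metadata:
  name: my-service               # placeholder name
spec:
  selector:
    app: my-app                  # placeholder pod label
  ports:
    - port: 80
      targetPort: 8080           # assumed container port
---
# Ingress: the external entry point replacing the VIP for outside clients.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service
spec:
  rules:
    - host: my-service.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
```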
Once you have ported your application to Kubernetes, you can add Istio; see the Istio Quick Start Guide. Istio can provide advanced routing, logging and monitoring, policy enforcement, traffic encryption between services, and also support for various microservice patterns. See more at istio.io.