Installing Knative on an existing kind cluster - kubernetes

I have an existing kind k8s cluster with a bunch of running services and an nginx-ingress setup, and I would like to add Knative to it.
Is there a way of doing this with nginx-ingress? Networking for Knative seems to be a bit more complex than for a normal service installation.

Knative needs more capabilities out of the HTTP routing layer than are exposed via the Kubernetes Ingress resource. (Percentage-based traffic splits and header rewriting are two of the big ones.)
Unfortunately, no one has written an adapter from Knative's "KIngress" implementation to the nginx ingress. In the future, the Gateway API (aka "Ingress V2") may provide these capabilities; in the meantime, you'll need to install one of the other networking adapters and their ingress implementations. Kourier is the smallest implementation, while Contour also provides a Kubernetes Ingress implementation if you want to switch away from nginx entirely.
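For example, adding Knative Serving with Kourier to an existing cluster looks roughly like this (a sketch following the standard Knative install steps; the release version in the URLs is an assumption, so check the Knative releases page for the current one):

```shell
# Install Knative Serving (CRDs first, then the core components).
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.14.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.14.0/serving-core.yaml

# Install Kourier as the networking layer.
kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.14.0/kourier.yaml

# Tell Knative to route traffic through Kourier.
kubectl patch configmap/config-network -n knative-serving \
  --type merge \
  -p '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'
```

This leaves your existing nginx-ingress untouched; Kourier only handles traffic for Knative Services.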

Related

Kubernetes - is Service Mesh a must?

Recently I built several microservices in a k8s cluster with the Nginx ingress controller, and they are working normally.
For communication among the microservices, I tried gRPC and it worked. Then I discovered that with microservice A -> gRPC -> microservice B, all requests went to only 1 pod of microservice B (e.g. out of 10 available pods of microservice B). To load balance the requests across all pods of microservice B, I tried Linkerd and it worked. However, I realized gRPC sometimes produced internal errors (e.g. 1 error out of 100 requests), which made me switch to the k8s DNS way (e.g. my-svc.my-namespace.svc.cluster-domain.example). After that, the requests never failed. So I put gRPC and Linkerd on hold.
Later, I became interested in Istio and successfully deployed it to the cluster. However, I observed that it always creates its own load balancer, which doesn't fit well with the existing Nginx ingress controller.
Furthermore, I tried Prometheus and Grafana, as well as k9s. These tools gave me a better understanding of the CPU and memory usage of the pods.
Here are several questions that I wish to understand:
If I need to monitor cluster resources, we have Prometheus, Grafana and k9s. Are they doing the same monitoring role as a service mesh (e.g. Linkerd, Istio)?
If k8s DNS can already achieve load balancing, do we still need a service mesh?
If we use k8s without a service mesh, does that lag behind normal practice?
Actually, I would also like to use a service mesh in everyday work.
The simple answer is:
A service mesh is not necessary for a Kubernetes cluster.
Now to answer your questions:
If I need to monitor cluster resources, we have Prometheus, Grafana and k9s. Are they doing the same monitoring role as a service mesh (e.g. Linkerd, Istio)?
K9s is a CLI tool that is just a friendlier replacement for the kubectl CLI; it is not a monitoring tool. Prometheus and Grafana are monitoring tools: Prometheus scrapes the data exposed by applications (pods) and builds time-series data, which Grafana can visualize as charts, graphs, etc. However, the applications have to expose their metrics to Prometheus. Service meshes may use a sidecar to provide some default metrics useful for monitoring, such as the number of requests handled per second, so your application doesn't need any knowledge or implementation of those metrics. Thus service meshes are optional; they offload common concerns such as monitoring or authorization.
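To illustrate the "applications have to expose their metrics" point, here is a minimal sketch of the widely used prometheus.io scrape annotations on a pod template. Note that these annotations are a convention supported by many Prometheus scrape configs (e.g. the Prometheus Helm chart), not a built-in Kubernetes feature; the app name, image and port are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        prometheus.io/scrape: "true"   # ask Prometheus to scrape these pods
        prometheus.io/port: "8080"     # port serving the metrics endpoint
        prometheus.io/path: "/metrics" # path of the metrics endpoint
    spec:
      containers:
      - name: app
        image: example/my-app:latest   # hypothetical image
        ports:
        - containerPort: 8080
```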
If k8s DNS can already achieve load balancing, do we still need a service mesh?
Service meshes are not needed for load balancing. When you have multiple services running in the cluster and want a single entry point for all of them, to simplify maintenance and save cost, you use an ingress controller such as Nginx, Traefik or HAProxy. Service meshes such as Istio also come with their own ingress controller.
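For example, a single Ingress can fan out one entry point to several services (a sketch; the host and service names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: single-entry-point
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /orders          # api.example.com/orders -> orders service
        pathType: Prefix
        backend:
          service:
            name: orders
            port:
              number: 80
      - path: /users           # api.example.com/users -> users service
        pathType: Prefix
        backend:
          service:
            name: users
            port:
              number: 80
```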
If we use k8s without a service mesh, does that lag behind normal practice?
No, there are clusters today that run without a service mesh and still use Kubernetes just fine.
In the future, Kubernetes may bring some functionality from service meshes into the core platform.
A service mesh is not a silver bullet and it doesn't fit every use case. A service mesh will not do everything for you; it also has bugs and limited features.
You can use Prometheus without Istio and have very nice app monitoring. A service mesh can simplify some monitoring tasks for you, but that doesn't mean you cannot do them yourself.
Please don't think of DNS as a load-balancing solution: cluster DNS only resolves a Service name to its ClusterIP, and it is kube-proxy that actually spreads connections across the pods. Kubernetes has Services and Ingresses to do load balancing, and the Nginx Ingress today is very powerful and has many advanced features.
It heavily depends on your use case.

Reverse proxy for thrift hiveservers running in a kubernetes cluster

I have a requirement to run multiple hiveservers as pods on a Kubernetes cluster, each serving users belonging to different AD groups. These hiveservers need to be exposed outside of the Kubernetes cluster, but each hiveserver cannot be exposed as a separate service. Ideally I would like a reverse proxy implemented with an ingress controller, with an ingress defined for each hiveserver, as the servers can be dynamically created and destroyed.
I see that the nginx ingress controller can be used for HTTP, but I don't see a way to make it work as a reverse proxy for Thrift-based hiveservers. I also had a look at Knox, but that seems to support HTTP transport only.
Is there a known way to set up an ingress controller as a reverse proxy fronting non-HTTP endpoints like Thrift hiveservers?
You may try to use a service mesh, if that is an option for you.
In Istio, such a use case (managing TCP traffic) can be achieved with the Istio ingress gateway, which acts as the entry point for the bunch of services inside your cluster (similar to a K8s Ingress, but not limited to HTTP traffic). There is even built-in support for custom protocols like the Apache Thrift protocol, which allows you to use features like rate limiting.
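As a minimal sketch (the port number, service name and namespace are assumptions), a plain-TCP hiveserver could be exposed through the Istio ingress gateway with a Gateway plus a VirtualService:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: hiveserver-gateway
spec:
  selector:
    istio: ingressgateway      # use Istio's default ingress gateway
  servers:
  - port:
      number: 10000            # assumed Thrift port
      name: tcp-hive
      protocol: TCP            # raw TCP, not HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: hiveserver-1
spec:
  hosts:
  - "*"
  gateways:
  - hiveserver-gateway
  tcp:
  - match:
    - port: 10000
    route:
    - destination:
        host: hiveserver-1.hive.svc.cluster.local  # assumed service/namespace
        port:
          number: 10000
```

Since hiveservers are created and destroyed dynamically, you would generate one VirtualService (and gateway port) per hiveserver.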

Istio service mesh in on premise kubernetes cluster in data center

Currently we have a k8s cluster in our data center due to compliance reasons, and we are running Traefik as the ingress controller. Now we want to add a service mesh to it for monitoring service-level communication. Can you suggest how I can do that? Do I replace the Traefik ingress controller and run the Istio ingress on a host-network setup, or is there a better option that keeps Traefik and adds Istio alongside it?
If you are going to install Istio to get "free" observability features, keep in mind that in some scenarios it simply doesn't fit: e.g. if you want the latency within a service (as opposed to between services), Istio cannot give you that.
I would recommend Istio if you need a service mesh and/or routing in addition to observability, but don't install it just for observability; there are other tools out there specific to that.
That's without counting the fact that you'd spend cluster resources on an extra sidecar container for every service, just for monitoring. Not a good approach, in my opinion.
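For context on that last point: Istio injects its sidecar proxy into every pod of any namespace labeled for injection, so the per-pod overhead applies mesh-wide once enabled:

```shell
# 'my-namespace' is a placeholder; label each namespace you want in the mesh.
kubectl label namespace my-namespace istio-injection=enabled
```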

Deploying a mobile app backend with kubernetes

I need some advice on how to deploy a high-traffic mobile app back-end using Kubernetes. The deployment should at least support HA. We have plans to run a DR site as well, but DR is out of scope for this question.
We currently use hardware load balancers to route incoming traffic to different IP addresses attached to different boxes. Each such box runs an nginx instance as a reverse proxy, which also acts as the HTTPS terminator. After HTTPS termination, traffic is directed to an apache web server; each box has one apache server receiving all traffic from the nginx running on the same box.
We want to introduce Kubernetes to this setup so that we can utilize the boxes better. Our traffic patterns fluctuate heavily, and we believe Kubernetes can help us use the boxes more efficiently.
My current plan is as follows:
-- Keep the hardware load balancer to route incoming traffic to different boxes. (This may not be needed, but getting rid of the HLB could become very political.)
-- Run a Kubernetes cluster utilizing all available boxes.
-- Pack apache + our app into a Docker image and run it as containers inside pods in the Kubernetes cluster.
-- Set up ingress to accept external traffic, do HTTPS termination and load balance to the above pods. A simple round-robin or random load-balancing algorithm is fine, as our back ends are stateless.
Does this sound right? Are there any alternatives? In the above case, where does the ingress controller run?
Your plan sounds right. You can pack apache together with your code, but it is better to keep them separate so that they can talk to each other and a version upgrade of one doesn't depend on the other.
The hardware load balancer will then pass the traffic on to the ingress, which brings it into the k8s cluster and eventually to the pods.
The ingress controller runs inside the cluster. I guess you're looking to run Kubernetes on-premise with your existing hardware. To use the existing hardware load balancer outside of Kubernetes, you could run the nginx ingress controller as a DaemonSet, so that there's one instance on each node, and expose it via hostPort so that every instance listens on the same port. If there are lots of nodes, you'd want to use a Deployment instead, and expose it with a NodePort Service so that Kubernetes sends the traffic to a node where an ingress controller pod runs.
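A minimal sketch of the NodePort variant, which gives the hardware load balancer fixed ports to target on every node (labels and nodePort values are assumptions; the real ingress-nginx manifests define their own):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller pod label
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080    # HLB forwards port 80 traffic to <node-ip>:30080
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443    # HLB forwards port 443 traffic to <node-ip>:30443
```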
Another alternative is to expose the nginx ingress controller through a LoadBalancer Service; to do that, you'd need to integrate your load balancer with Kubernetes using something like MetalLB: https://hackernoon.com/metallb-a-load-balancer-for-bare-metal-kubernetes-clusters-f7320fde52f2
Alternatively, you don't necessarily have to use Ingress at all. You could just run nginx in the cluster and expose it via NodePort.
It's also not clear to me that you need the apache HTTP server in your containers; I guess it depends on how you are using it currently.

Is it possible to run both Istio and gRPC in a GKE cluster

Istio and gRPC seem complementary and I'd like to use both in the clusters.
The thing is that they both add an extra container which receives/proxies communication between pods / microservices.
Is it advised or not to use both in parallel in all pods?
Are there particular adaptations to do if one uses both?
Thanks for any advice!
Istio and gRPC do work well together. When declaring your services' ports to Istio, just make sure to name them grpc-something so the proxy knows it is h2/gRPC traffic and routes it properly.
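For example, a Service declared like this (a sketch; the name, selector and port number are hypothetical) lets Istio's port-name-based protocol selection recognize the traffic as gRPC:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service
spec:
  selector:
    app: my-grpc-app
  ports:
  - name: grpc-api     # must be "grpc" or start with "grpc-" for Istio
    port: 50051
    targetPort: 50051
```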
You mention that gRPC adds an extra container. Why not have your service speak gRPC natively?
We do have future plans for protocol transcoding and rich integrated gRPC/Istio libraries that would skip layers, but that's not there yet.