Kubernetes: Monitoring throughput of each Ingress

We have a bare-metal K8s cluster with an NGINX Ingress Controller.
Is there a way to tell how much traffic is transmitted/received by each Ingress?
Thanks!

Ingress Controllers are implemented as standard Kubernetes applications, so any monitoring method your organization already uses for k8s workloads can be applied to them to track their health and lifecycle. To get network traffic statistics, though, you need controller-specific mechanisms.
To observe Kubernetes Ingress traffic you can send your statistics to Prometheus and view them in Grafana (widely adopted open-source software for data visualization).
Here is a monitoring guide from the ingress-nginx project, where you can read how to do it step by step. Start by installing those tools.
To deploy Prometheus in Kubernetes, run the command below:
kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/
To install Grafana, run this one:
kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/grafana/
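If everything deployed correctly, the Prometheus and Grafana pods should show up shortly. A quick check (a sketch; the kustomize manifests target the ingress-nginx namespace by default, and the Grafana service name and port are assumptions, so verify them with kubectl get svc):
kubectl get pods -n ingress-nginx
kubectl port-forward svc/grafana 3000:3000 -n ingress-nginx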
Then follow the remaining steps in the monitoring guide mentioned above.
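Once metrics are flowing, you can query per-Ingress traffic directly from Prometheus. A minimal sketch using the Prometheus HTTP API; the metric names below (nginx_ingress_controller_request_size_sum / nginx_ingress_controller_response_size_sum) are what recent ingress-nginx releases expose, but check your controller's /metrics endpoint for the exact names, and replace <prometheus-address> with your own endpoint:
# bytes received per Ingress over the last 5 minutes
curl -s 'http://<prometheus-address>/api/v1/query' --data-urlencode 'query=sum(rate(nginx_ingress_controller_request_size_sum[5m])) by (namespace, ingress)'
# bytes sent per Ingress over the last 5 minutes
curl -s 'http://<prometheus-address>/api/v1/query' --data-urlencode 'query=sum(rate(nginx_ingress_controller_response_size_sum[5m])) by (namespace, ingress)'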
See also this article and this similar question.

Related

Migrate deployments and services from stable/nginx-ingress to kubernetes/ingress-nginx

I'm trying to migrate our ingress controllers from the old stable/nginx-ingress chart to the newer kubernetes/ingress-nginx chart.
I have followed their instructions for zero downtime deployments.
Create a second nginx-controller with the kubernetes/ingress-nginx helm chart.
The ingressClassName has to be different from the original.
original ingressClassName: nginx
new ingressClassName: nginx2
Update DNS to point to the new nginx2 ELB.
Get rid of the old nginx controller.
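For reference, the second controller was installed roughly like this (a sketch; the chart values used to set the class have changed between ingress-nginx chart versions, so check the values file of the release you are installing):
# assumes the ingress-nginx Helm repo has already been added
helm install nginx2 ingress-nginx/ingress-nginx \
  --namespace ingress-nginx2 --create-namespace \
  --set controller.ingressClass=nginx2 \
  --set controller.ingressClassResource.name=nginx2 \
  --set controller.ingressClassResource.controllerValue=k8s.io/ingress-nginx2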
This is all great, but all of our services/deployments are attached to ingressClassName: nginx. We can update the DNS, but then the services attached to the old class won't receive traffic. We can update the services at the same time, but they update at different times, which will cause some kind of outage during the switch.
All of the research I have done seems to stop at the controller level. It doesn't go deeper and explain how to keep all the services connected during the switch.
How can I get both nginx controllers to route traffic to the application at the same time? I have not been able to make that happen at the service or nginx controller level.
Or maybe I'm thinking about it incorrectly and it can work in a different way.
thanks.
There are multiple methods; below I describe one using Istio, with documentation for alternative methods linked for your reference.
You can avoid downtime while migrating by splitting the traffic. There are a few traffic-splitting tools; Istio adds this capability on top of Kubernetes and lets you direct a percentage of traffic to the new ingress controller while keeping the rest on the old one.
Install Istio in your cluster and configure your ingress resources to use Istio gateway instead of the ingress controllers directly.
Install Istio in your cluster.
Configure your ingress resources to use the Istio gateway.
Create a VirtualService for your ingress resources and gradually increase the traffic to the new ingress controller, and make sure to update your DNS records to point to the new ingress controller's IP.
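A minimal sketch of such a VirtualService; the host name, gateway name, destination services, and weights are placeholders for illustration and must match your own setup:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp-split                # hypothetical name
spec:
  hosts:
    - myapp.example.com            # your public hostname
  gateways:
    - myapp-gateway                # the Istio Gateway you configured
  http:
    - route:
        - destination:
            host: old-nginx-controller.ingress-old.svc.cluster.local   # old controller's Service
          weight: 80               # keep most traffic on the old controller at first
        - destination:
            host: new-nginx-controller.ingress-new.svc.cluster.local   # new controller's Service
          weight: 20               # raise gradually until it reaches 100
Once the new controller serves 100% of the traffic without issues, you can remove the old one.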
For reference and further information, please check the official Istio page and the Istio Service Mesh Workshop.
For alternative methods, please refer to the options below:
Canary Deployments and Type LoadBalancer
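For the canary option, note that ingress-nginx has built-in canary annotations that split traffic by weight between two backend services behind a single controller (not between two controllers), which helps when the migration happens inside one controller. A minimal sketch; the Ingress and Service names are placeholders:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-canary                                   # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"    # send ~20% of requests here
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-v2                         # hypothetical canary Service
                port:
                  number: 80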

Ingress resource deployment

What is the best approach to create an Ingress resource that interacts with an ELB in the target deployment environment running on Kubernetes?
As we all know, there are different cloud providers and many kinds of settings related to deploying your Ingress resource, depending on your target environment: AWS, OpenShift, plain vanilla K8s, Google Cloud, Azure.
On cloud deployments like Amazon, Google, etc., Ingresses also need special annotations, most of which are common to all microservices that need an Ingress.
If we also deploy a mesh like Istio on top of k8s, then we need to use an Istio gateway with the ingress. If we use OCP, it has a special kind called "Routes".
I'm looking for the best solution that uses more standard options, decreasing the differences between platforms when deploying the Ingress resource.
So maybe the best approach is to create an operator to deploy the Ingress resource, given the many different setups here?
Is it important to create some generic component to deploy the Ingress while staying cloud agnostic?
How do other companies deploy their ingress resources to the k8s cluster?
What is the best approach to create an Ingress resource that interacts with an ELB in the target deployment environment running on Kubernetes?
On AWS the common approach is to use an ALB and the AWS ALB Ingress Controller, but it has its own drawbacks in that it creates one ALB per Ingress resource.
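For illustration, an Ingress handled by the AWS controller mostly differs from a generic one in its provider-specific annotations; a sketch with placeholder names (the annotation keys are the commonly used ones, but check the controller's docs for your version):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp                                    # hypothetical name
  annotations:
    kubernetes.io/ingress.class: alb             # newer controllers use spec.ingressClassName instead
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp                      # hypothetical Service
                port:
                  number: 80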
If we also deploy a mesh like Istio, then we need to use an Istio gateway with the ingress.
Yes, then the situation is different, since you will use a VirtualService from Istio or AWS App Mesh. That approach looks better, and you will not have an Ingress resource for your apps.
I'm looking for the best solution that uses more standard options, decreasing the differences between platforms when deploying the Ingress resource.
Yes, this sits at the intersection of the cloud provider's infrastructure and your cluster, so there are unfortunately many different setups here. It also depends on whether your ingress gateway is inside or outside of the cluster.
In addition, the Ingress resource only just became GA (stable) in the most recent Kubernetes release, 1.19.
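With the v1 API, the controller is selected via a standard field instead of an annotation, which helps keep manifests portable across platforms; a minimal sketch with placeholder names:
apiVersion: networking.k8s.io/v1    # GA since Kubernetes 1.19
kind: Ingress
metadata:
  name: myapp                       # hypothetical name
spec:
  ingressClassName: nginx           # replaces the kubernetes.io/ingress.class annotation
  defaultBackend:
    service:
      name: myapp                   # hypothetical Service
      port:
        number: 80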

Is the prometheus-to-sd required for GKE? Can I delete it?

A while back a GKE cluster got created which came with a daemonset of:
kubectl get daemonsets --all-namespaces
NAMESPACE     NAME               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
...
kube-system   prometheus-to-sd   6         6         6       3            6           beta.kubernetes.io/os=linux   355d
Can I delete this daemonset without issue?
What is it being used for?
What functionality would I be losing without it?
TL;DR
Even if you delete it, it will be back.
A little bit more explanation
Citing the explanation by user Yasen of what prometheus-to-sd is:
prometheus-to-sd is a simple component that can scrape metrics stored in prometheus text format from one or multiple components and push them to the Stackdriver. Main requirement: k8s cluster should run on GCE or GKE.
Github.com: Prometheus-to-sd
Assuming the command to delete this daemonset is:
$ kubectl delete daemonset prometheus-to-sd --namespace=kube-system
Executing this command will indeed delete the daemonset, but it will be back after a while.
The prometheus-to-sd daemonset is managed by the Addon Manager, which will recreate a deleted daemonset back to its original state.
Below is the part of the prometheus-to-sd daemonset YAML definition which states that this daemonset is managed by the addon manager:
labels:
  addonmanager.kubernetes.io/mode: Reconcile
You can read more about it by following: Github.com: Kubernetes: addon-manager
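You can confirm the label on your own cluster with:
kubectl get daemonset prometheus-to-sd -n kube-system --show-labels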
Deleting this daemonset is strictly connected to the monitoring/logging solution you are using with your GKE cluster. There are 2 options:
Stackdriver logging/monitoring
Legacy logging/monitoring
Stackdriver logging/monitoring
You need to completely disable logging and monitoring of your GKE cluster to delete this daemonset.
You can do it by following a path:
GCP -> Kubernetes Engine -> Cluster -> Edit -> Kubernetes Engine Monitoring -> Set to disabled.
Legacy logging/monitoring
If you are using the legacy solution, which is available up to GKE version 1.14, you need to disable the Legacy Stackdriver Monitoring option by following the same path as above.
Let me know if you have any questions about that.
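The same can also be done from the command line; a sketch for the legacy option (flag names have changed across gcloud releases, so check gcloud container clusters update --help for your version, and replace CLUSTER_NAME and ZONE with your own values):
gcloud container clusters update CLUSTER_NAME --zone ZONE \
  --monitoring-service=none --logging-service=none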
TL;DR - it's ok
Given your context, I suppose it's OK to shut down the prometheus-to-sd component of your cluster,
except in cases where reports, alerts, and monitoring are critical parts of your system.
Let's dive into the sources on GCP.
As per source code at GoogleCloudPlatform:
prometheus-to-sd is a simple component that can scrape metrics stored in prometheus text format from one or multiple components and push them to the Stackdriver. Main requirement: k8s cluster should run on GCE or GKE.
Prometheus
From their Prometheus Github Page:
The Prometheus monitoring system and time series database.
To get a picture of what it is for, you can read this guide on Prometheus: Prometheus Monitoring: The Definitive Guide in 2019 – devconnected
Also, there are hundreds of videos on their YouTube channel, Prometheus Monitoring.
Your questions
So, answering your questions:
Can I delete this daemonset without issue?
It depends. As I said, you can, except in cases where reports, alerts, and monitoring are critical parts of your system.
What is it being used for?
It pushes Prometheus-format metrics from your cluster components to Stackdriver for monitoring.
What functionality would I be losing without it?
metrics
→ therefore dashboards
→ therefore alerting

HTTP codes monitoring for Kubernetes cluster using MetalLB ingress controller

We have a cluster running on VMs in our private cloud, using MetalLB as the load balancer for our ingress controller, and we need to see the network traffic and the HTTP codes returned by our applications, so that we can view HTTP requests and traffic load in Grafana the way you can on AWS Load Balancers, for example.
We have deployed Prometheus through its Helm chart across all nodes so we can gather metrics from the whole cluster, but we didn't find any metric containing the needed information. We tried looking through the Prometheus metrics for anything about ingresses, proxies, or HTTP, but nothing matches our need. We also tried some Grafana dashboards from the repository, but none of them show these metrics.
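For reference, the check for whether the controller exposes any Prometheus metrics at all looked roughly like this (pod name, namespace, and metrics port are placeholders; 10254 is a common default for nginx-based controllers):
kubectl port-forward -n ingress-nginx <controller-pod-name> 10254:10254
curl -s http://localhost:10254/metrics | grep -i ingress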
Thanks.

What is an ingress controller and how do I create one?

Good morning guys, so I took down a staging environment for a product on GCP and ran the deployment scripts again; the backend and frontend services have been set up. I have an Ingress resource and a load balancer up, however, the service is not running. A look at the production app revealed there was something like an nginx-ingress-controller. I really don't understand all this or how it was created. Can someone help me understand, because I have not seen anything online that makes it clear for me? Am I missing something?
loadBalancer: https://gist.github.com/davidshare/5a571e56febe7dacd580282b373f3095
Ingress Resource: https://gist.github.com/davidshare/d0f53912bc7da8310ec3d64f1c8a44f1
Ingress allows access to your Kubernetes services from outside the Kubernetes cluster. There are other Kubernetes (aka K8s) resources you can alternatively use to expose a service, such as NodePort or LoadBalancer.
An Ingress is a resource independent of your Service; you specify routing rules declaratively, so each URL path can be mapped to a different service.
This makes it decoupled and isolated from the services you want to expose.
For an Ingress to work, your cluster needs an Ingress Controller.
Like a Deployment resource in K8s, an Ingress can be created simply with:
kubectl create -f ingress.yaml
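A minimal ingress.yaml might look like this (the host, Service name, and port are placeholders for your own application):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress             # hypothetical name
spec:
  rules:
    - host: app.example.com          # the hostname you want to expose
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend       # your existing Service
                port:
                  number: 80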
First, you have to deploy an Ingress Controller in order for the Ingress resource to take effect, as described in Shubhu's answer. The Ingress controller, acting as an edge router, routes external traffic to your cluster's underlying services according to the routing rules defined in the Ingress resource.
If you choose the NGINX Ingress Controller, it might be useful to follow the installation guide, which covers specific prerequisites for each cloud provider environment. To simplify the NGINX Ingress Controller installation, you can also use the Helm package manager and install the appropriate stable/nginx-ingress Helm chart.
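For example, a typical Helm installation looks like this (a sketch; the stable/nginx-ingress chart mentioned above has since been deprecated in favor of the chart from the ingress-nginx project, so use whichever matches your cluster):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace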