GCP Kubernetes objects monitoring options

We are trying to figure out which monitoring options will be suitable for our environment.
We have two clusters in GCP and we installed Istio (with Helm) in both of them. We are also using Workload Identity and Stackdriver Monitoring.
Now, we would like to create dashboards (or charts) for Kubernetes objects (such as deployments, containers, cronjobs, services, etc.) and want to set alerts on them. Can anyone suggest free monitoring options to achieve all of this? We don't want to go with any third-party paid software.
Thank you in advance.

If you are using GKE on GCP, then the default Stackdriver logging & monitoring is the best option.
It's free if you are using GCP services, and with Stackdriver Monitoring you can monitor your Kubernetes objects and create the dashboards you need.
For alerts, you can use the Uptime check option available in Monitoring itself, which sends email notifications. For phone-call alerts, you may have to use a custom or third-party application.
You can read more at: https://cloud.google.com/monitoring/docs
Uptime checks: https://cloud.google.com/monitoring/uptime-checks
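
As a concrete illustration of the alerting part, here is a minimal Python sketch using the google-cloud-monitoring client library to create an alert policy on a GKE container metric. The project ID, the metric choice (kubernetes.io/container/restart_count), the threshold, and the display names are assumptions to adapt to your environment, and you still need to attach a notification channel (for example email) so the alert actually notifies someone.

    # Sketch: alert when a GKE container restarts more than 5 times in 5 minutes.
    # Assumes `pip install google-cloud-monitoring` and that the cluster writes the
    # standard GKE system metrics (kubernetes.io/container/restart_count).
    from google.cloud import monitoring_v3

    PROJECT_ID = "my-gcp-project"  # assumption: replace with your project ID

    client = monitoring_v3.AlertPolicyServiceClient()

    policy = monitoring_v3.AlertPolicy(
        display_name="Container restarts (sketch)",
        combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.AND,
        conditions=[
            monitoring_v3.AlertPolicy.Condition(
                display_name="More than 5 restarts in 5 minutes",
                condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                    filter=(
                        'metric.type="kubernetes.io/container/restart_count" '
                        'AND resource.type="k8s_container"'
                    ),
                    comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
                    threshold_value=5,
                    duration={"seconds": 0},
                    aggregations=[
                        monitoring_v3.Aggregation(
                            alignment_period={"seconds": 300},
                            # restart_count is a cumulative counter, so alert on the
                            # increase per 5-minute window rather than the raw value
                            per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_DELTA,
                        )
                    ],
                ),
            )
        ],
    )

    created = client.create_alert_policy(
        name=f"projects/{PROJECT_ID}", alert_policy=policy
    )
    print("Created alert policy:", created.name)

Dashboards for the same metrics can then be built in Metrics Explorer, or through the Cloud Monitoring dashboards API if you prefer to keep them in code.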

Related

Is there any cloud provider where one can run a managed k8s cluster in the free tier indefinitely?

I'm trying to run open-source software on the cloud with minimal costs and would love to run it on k8s without the hassle of managing the cluster myself (i.e. a managed k8s cluster). Does any cloud provider offer a free tier option for a small-scale project?
If there is one, which parameters should I choose to get the free tier?
You can use IBM Cloud, which provides a single-worker-node Kubernetes cluster along with a container registry, like other cloud providers. This is more than enough for a beginner to try out the concepts of Kubernetes.
You can also use Tryk8s, which provides a playground for trying Kubernetes for free. Play with Kubernetes is a labs site provided by Docker and created by Tutorius; it is a playground that lets users run K8s clusters in a matter of seconds and gives the experience of having a free Alpine Linux virtual machine in the browser. Under the hood, Docker-in-Docker (DinD) is used to give the effect of multiple VMs/PCs.
If you want to use more services and resources, you can try other cloud providers based on your use case; they may not provide an indefinitely free trial, but they place no restriction on the resources.
For example, Google Kubernetes Engine (GKE) provides $300 of credit to fully explore and assess Google Cloud, which can be used for a 3-month period from account creation; you won't be charged until you upgrade. There is no restriction on the resources or the number of nodes when creating a cluster. You can also add Istio and try Cloud Run (Knative).
Refer to Free Kubernetes, which lists the free trials/credits for managed Kubernetes services.

Is it possible/fine to run Prometheus, Loki, Grafana outside of Kubernetes?

In one project, scaling and orchestration are implemented using the technologies of a local cloud provider, with no Docker or Kubernetes. But the project has poor logging and monitoring, so I'd like to install Prometheus, Loki, and Grafana for metrics, logs, and visualisation respectively. Unfortunately, I've found no articles with instructions about using Prometheus without K8s.
Is it possible? If so, is it a good approach, and how do I do it? I also know that Prometheus & Loki can automatically discover services in K8s to extract metrics and logs, but will the same work for a custom orchestration system?
Can't comment about Loki, but Prometheus is definitely doable.
Prometheus supports a number of service discovery mechanisms, k8s being just one of them. If you look at the list of configuration options (the ones ending with _sd_config), you can see whether your provider is there.
If it is not, then a generic service discovery mechanism can be used. Maybe DNS-based discovery will work with your custom system? If not, then with some glue code a file-based service discovery will almost certainly work; a sketch of that glue code follows.
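To make the file-based option concrete, here is a minimal Python sketch of that glue code: it polls an inventory endpoint of the custom orchestrator (the URL and the response shape are hypothetical) and rewrites a file_sd_config target file, which Prometheus reloads on its own. In prometheus.yml you would point a scrape job at the file via file_sd_configs with a files glob such as /etc/prometheus/file_sd/*.json.

    # Hypothetical glue code for Prometheus file-based service discovery.
    # Assumes the orchestrator exposes an HTTP inventory returning a JSON list of
    # services like [{"name": "api", "host": "10.0.0.5", "metrics_port": 9100}, ...].
    import json
    import time

    import requests

    INVENTORY_URL = "http://orchestrator.internal/api/services"  # hypothetical endpoint
    TARGETS_FILE = "/etc/prometheus/file_sd/targets.json"        # referenced by file_sd_configs


    def build_targets():
        """Translate the orchestrator's service list into file_sd entries."""
        services = requests.get(INVENTORY_URL, timeout=10).json()
        return [
            {
                "targets": [f"{svc['host']}:{svc['metrics_port']}"],
                "labels": {"job": svc["name"]},
            }
            for svc in services
        ]


    def main():
        while True:
            with open(TARGETS_FILE, "w") as fh:
                json.dump(build_targets(), fh, indent=2)
            time.sleep(30)  # Prometheus notices file changes itself; no reload needed


    if __name__ == "__main__":
        main()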
Yes, I'm running Prometheus, Loki, etc. just fine in an AWS ECS cluster. It just requires a bit more configuration, especially regarding service discovery (if you are not already using something like ECS Service Discovery or HashiCorp Consul).

Is GKE built into the Anthos solution by default? Getting Anthos Metrics

I have a cluster with 7 nodes and a lot of services, nodes, etc. in Google Cloud Platform. I'm trying to get some metrics with Stackdriver Legacy: in the Google Cloud Console -> Stackdriver -> Metrics Explorer I have the whole set of Anthos metrics listed, but when I try to create a chart based on those metrics it doesn't show any data; the only response I get in the panel is "no data is available for the selected time frame", even after changing the time frame.
Is it right to think that with Anthos metrics I can retrieve information about my cronjobs, pods, and services, such as failed initializations and job failures? And if so, can I do it with Stackdriver Legacy, or do I need to upgrade to Stackdriver Kubernetes Engine Monitoring?
The Anthos solution includes what's called GKE On-Prem. I'd take a look at the instructions for using logging and monitoring on GKE On-Prem. Stackdriver monitors GKE On-Prem clusters in a similar way to cloud-based GKE clusters.
However, there’s a note where they say that currently, Stackdriver only collects cluster logs and system component metrics. The full Kubernetes Monitoring experience will be available in a future release.
You can also check that you’ve met all the configuration requirements.

Get request count from Kubernetes service

Is there any way to get statistics such as service/endpoint access counts for services defined in a Kubernetes cluster?
I've read about Heapster, but it doesn't seem to provide these statistics. Plus, the whole setup is tremendously complicated and relies on a ton of third-party components. I'd really like something much, much simpler than that.
I've been looking into what may be available in the kube-system namespace, and there's a bunch of containers and services there, Heapster included, but they are effectively inaccessible because they require authentication I cannot provide, and kubectl doesn't seem to have any API to access them (or does it?).
Heapster is the agent that collects the data, but then you need a monitoring agent to interpret that data. On GCP, for example, it's fluentd that gets these metrics and sends them to Stackdriver.
Prometheus is an excellent monitoring tool. I would recommend that one if you are not on GCP.
If you are on GCP, then as mentioned above you have Stackdriver Monitoring, which is configured by default for K8s clusters. All you have to do is create a Stackdriver account (this is done with one click from the GCP Console), and you are good to go.
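
If you go the Prometheus route, pulling the numbers back out is a small HTTP call against its query API. The Prometheus address and the http_requests_total metric and its service label below are assumptions; use whatever counter your services, ingress, or service mesh actually expose.

    # Sketch: per-service request rate from Prometheus' HTTP API.
    import requests

    PROMETHEUS_URL = "http://prometheus.monitoring.svc:9090"   # assumed in-cluster address
    QUERY = 'sum by (service) (rate(http_requests_total[5m]))'  # metric/label are assumptions

    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=10
    )
    resp.raise_for_status()

    for series in resp.json()["data"]["result"]:
        service = series["metric"].get("service", "<unknown>")
        rate = float(series["value"][1])  # instant vector value: [timestamp, value-as-string]
        print(f"{service}: {rate:.2f} req/s")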

How to go about logging in GKE without using Stackdriver

We are unable to grab logs from containers running in our GKE cluster if Stackdriver is disabled on GCP. I understand that it is proxying stderr/stdout, but it seems rather heavy-handed to block these outputs when Stackdriver is disabled.
How does one get an ELK stack going on GKE without being billed for Stackdriver, i.e. by disabling it entirely? Or is it so much a part of GKE that this is not doable?
From the article linked on a similar question regarding GCP:
"Kubernetes doesn’t specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: Stackdriver Logging for use with Google Cloud Platform, and Elasticsearch. You can find more information and instructions in the dedicated documents. Both use fluentd with custom configuration as an agent on the node." (https://kubernetes.io/docs/concepts/cluster-administration/logging/#exposing-logs-directly-from-the-application)
Perhaps our understanding of Stackdriver billing is wrong?
But we don't want to be billed for Stackdriver, as the 150 MB of logs outside of the GCP metrics is not going to be enough, and we have some expertise in setting up ELK for logging that we'd like to use.
You can disable Stackdriver logging/monitoring on your Kubernetes cluster by editing the cluster and setting "Stackdriver Logging" and "Stackdriver Monitoring" to disabled.
I would still suggest sticking with GCP over AWS, as you get the whole Kubernetes-as-a-service experience. Amazon's solution is still a little way off, and they are planning to charge for the service in addition to the EC2 node prices (last I heard).
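
For a sense of what the fluentd/Elasticsearch option quoted earlier actually does on each node, here is a deliberately reduced Python sketch: it reads the container log files the kubelet keeps under /var/log/containers/ and pushes each line to Elasticsearch over its REST API. The Elasticsearch address and index name are assumptions, and in practice you would deploy fluentd or fluent-bit as a DaemonSet rather than anything like this.

    # Reduced sketch of a node-level log shipper (what fluentd/fluent-bit do properly).
    import glob

    import requests

    ES_URL = "http://elasticsearch.logging.svc:9200"  # hypothetical in-cluster Elasticsearch
    INDEX = "gke-container-logs"                      # hypothetical index name

    # Each container's stdout/stderr ends up in a file under /var/log/containers/ on the node.
    for path in glob.glob("/var/log/containers/*.log"):
        with open(path) as fh:
            for line in fh:
                doc = {"file": path, "message": line.rstrip()}
                requests.post(f"{ES_URL}/{INDEX}/_doc", json=doc, timeout=5).raise_for_status()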