access to loki in grafana cloud - grafana

I just decided to try out Grafana Cloud, and Loki in particular, so as a first experiment I set up pushing my Nextcloud logs.
The push seems to be working; at least I can see an ingest rate on the account dashboard.
Now I would like to explore the logs, but I have no clue where. Maybe I'm missing some links. How can I access the logs now? Is there a URL for accessing the ingested logs?
The same question will probably arise for accessing the provided Prometheus instance.

Generally, Grafana is the "UI" for Loki, so you need some Grafana instance.
You very likely also have a Grafana instance in Grafana Cloud with a preconfigured Loki data source, so you can explore your logs there (with Grafana's Explore feature, for example).
Alternatively, the Loki overview in Grafana Cloud shows the data source settings, which you can use to configure a Loki data source in any other Grafana instance (e.g. you can start your own Grafana instance locally).
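For the second option, a local Grafana can be pointed at the cloud Loki endpoint with a provisioning file. A minimal sketch; the URL, user ID, and API key below are placeholders you'd copy from your Grafana Cloud data source settings:

```yaml
# provisioning/datasources/loki.yaml (sketch)
apiVersion: 1
datasources:
  - name: Grafana Cloud Logs
    type: loki
    access: proxy
    url: https://logs-prod-us-central1.grafana.net  # placeholder: your Loki endpoint
    basicAuth: true
    basicAuthUser: "123456"                          # placeholder: your Loki user ID
    secureJsonData:
      basicAuthPassword: "<your API key>"            # placeholder: Grafana Cloud API key
```

Drop this into Grafana's provisioning directory (or enter the same values via the data source UI) and the logs become explorable from that instance.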

Related

Why are metrics available via http://localhost:56789/metrics but not returned via https://example.com/path/serviceforapp/metrics?

Kubernetes. Prometheus. A test application that can serve metrics, or a simple phrase, over HTTP.
The end goal is to see graphs in Grafana. I want to make sure that everything works up to the current point in the setup.
Now I want to see the metrics via a URL, to confirm that the ServiceMonitor is working correctly and that the metrics reach Grafana. But so far I haven't been able to find a good enough troubleshooting guide.
I assume it's okay not to show everyone your metrics. But I want to make sure that this is normal behavior, and to know which way to google in case I want to allow it.
This is completely intended behavior when trying to use Grafana to visualize data from Managed Service for Prometheus. The metrics are available over HTTP only when browsing Grafana, because during deployment the frontend service is port-forwarded to your local machine. Since leaving metrics exposed to everyone is bad practice for security reasons, the frontend service is reachable only through that port-forward.
Google Cloud APIs all require authentication using OAuth2; however, Grafana doesn't support OAuth2 authentication for Prometheus data sources. To use Grafana with Managed Service for Prometheus, you must use the Prometheus UI frontend as an authentication proxy.
You can refer to the documentation on Grafana and on how to deploy Grafana to learn more.
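For reference, the port-forward described above usually looks something like the following; the service name, namespace, and port are assumptions that depend on your deployment:

```
# Forward the Prometheus UI frontend service to your local machine
# (service/namespace/port names below are examples, not taken from your setup)
kubectl port-forward -n monitoring svc/frontend 9090:9090
# The UI and metrics are then reachable at http://localhost:9090
```

While the port-forward is running, only your machine can reach the service, which is exactly the behavior described above.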

Moving Logs into a Kubernetes Cluster

I have Grafana running inside a Kubernetes cluster, and I want to push logs from outside of Kubernetes (apps not running in K8s, databases, etc.) into the cluster so I can view them in that Grafana instance. What's the best way of doing this?
Grafana is a GUI for reporting on data stored in other databases. It sounds like you are capturing metrics from the cluster, and this data is stored in another database: if you are running Prometheus, that is the database holding Grafana's time-series data. Depending on data volume, you may also end up running a long-term storage system like Thanos to keep that data over time.
Back to logging. Similarly, to use Grafana for logs you'll need to run some kind of logging database. The most popular is the formerly open-source ELK (Elasticsearch, Logstash, Kibana) stack; you can now use OpenSearch, which is an open-source fork of Elasticsearch and Kibana. Most K8s distributions come with Fluentd, which replaces Logstash for shipping data, and you can also install Fluentd or Fluent Bit on any host to send data to this stack. You'll find that Grafana is not the best tool for log analysis, so most people use Kibana (OpenSearch Dashboards); you can use Grafana as well, it's just painful IMO.
Another option, if you don't want to run ELK, is Grafana Loki, another open-source database for logging. It's a lot simpler, but also more limited in how you can query the logs because of the way it indexes them. It works nicely with Grafana, but again it is not a full-text indexing technology, so it will be a bit limited.
Hope this is helpful, let me know if you have questions!
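To make the external-host part concrete: one common approach is to run Fluent Bit on each non-Kubernetes host and point its output at the log store inside the cluster. A minimal sketch, assuming Loki is the store and is exposed at the hypothetical hostname `loki.example.com`:

```ini
# fluent-bit.conf (sketch): tail a local log file on the external host
# and ship it to a Loki instance exposed from the cluster.
# Path, tag, hostname, and labels are examples, not taken from the question.
[INPUT]
    Name    tail
    Path    /var/log/myapp/*.log
    Tag     myapp

[OUTPUT]
    Name    loki
    Match   myapp
    Host    loki.example.com
    Port    3100
    Labels  job=myapp, host=${HOSTNAME}
```

The same agent with an `es` output section would ship to Elasticsearch/OpenSearch instead; either way the cluster just needs to expose the ingestion endpoint (e.g. via an Ingress or LoadBalancer service).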

Best practices when trying to implement custom Kubernetes monitoring system

I have two Kubernetes clusters representing dev and staging environments.
Separately, I am also deploying a custom DevOps dashboard which will be used to monitor these two clusters. On this dashboard I will need to show information such as:
RAM/HD Space/CPU usage of each deployed Pod in each environment
Pod health (as in if it has too many container restarts etc)
Pod uptime
All these stats have to be at the cluster level and, preferably, also per namespace. That is, if I query for a particular namespace, I should get all the resource usage of that namespace.
So the webservice layer of my dashboard will send a service request to the master node of my respective cluster in order to fetch this information.
Another thing I need is to implement real time notifications in my DevOps dashboard. Every time a container fails, I need to catch that event and notify relevant personnel.
I have been reading around and two things that pop up a lot are Prometheus and Metric Server. Do I need both or will one do? I set up Prometheus on a local cluster but I can't find any endpoints it exposes which could be called by my dashboard service. I'm also trying to set up Prometheus AlertManager but so far it hasn't worked as expected. Trying to fix it now. Just wanted to check if these technologies have the capabilities to meet my requirements.
Thanks!
I don't know why you are considering your own custom monitoring system; the Prometheus Operator provides all the functionality you mentioned.
You will end up with just your own Grafana dashboard containing all the required information.
If you need custom notifications, you can set them up in Alertmanager by creating the right prometheusrules.monitoring.coreos.com objects; you can find a lot of preconfigured prometheusrules in kubernetes-mixin.
Using labels and namespaces in Alertmanager, you can set up a route that notifies the person responsible for a given deployment.
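As a sketch of such a rule, here is a minimal PrometheusRule object that fires when containers restart too often. The name, namespace, label selector, and threshold are made-up examples, not taken from the question:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pod-restarts          # example name
  namespace: monitoring       # example namespace
  labels:
    release: prometheus       # must match your operator's ruleSelector
spec:
  groups:
    - name: pod.rules
      rules:
        - alert: PodRestartingTooOften
          # fires if a container restarted more than 3 times in the last hour
          expr: increase(kube_pod_container_status_restarts_total[1h]) > 3
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is restarting too often"
```

Alertmanager routes can then match on the `severity` label (or the namespace) to pick who gets notified.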
As for "Do I need both or will one do?": yes, you need both. Metrics Server exposes metrics from your cluster nodes, and Prometheus scrapes, collects, and aggregates them.
If you have problems with Prometheus, Alertmanager and so on, consider using a Helm chart as the entry point.
Prometheus + Grafana are a pretty standard setup.
Installing kube-prometheus or prometheus-operator via Helm will give you Grafana, Alertmanager, node-exporter and kube-state-metrics by default, all set up for Kubernetes metrics.
Configure alertmanager to do something with the alerts. SMTP is usually the first thing setup but I would recommend some sort of event manager if this is a service people need to rely on.
Although a Grafana dashboard isn't part of your requirements, this will show how you can connect to Prometheus as a data source. There is documentation on adding a Prometheus data source to Grafana.
There are a number of prebuilt dashboards available to add to Grafana, including some to visualise Alertmanager.
Your external service won't be scraping the metrics endpoints directly; it will be querying the collected data stored in Prometheus inside your cluster. To access the API externally, you will need to set up an external path to the Prometheus service. This can be configured via an ingress controller in the Helm deployment:
prometheus.ingress.enabled: true
You can do the same for the alertmanager API and grafana if needed.
alertmanager.ingress.enabled: true
grafana.ingress.enabled: true
You could use Grafana outside the cluster as your dashboard via the same prometheus ingress if it proves useful.
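Once Prometheus is exposed through an ingress, the dashboard's webservice layer can call its HTTP API directly. A minimal sketch in Python using only the standard library; the hostname `prometheus.example.com` is a hypothetical ingress host, and the PromQL expression is just one example of a per-namespace resource query:

```python
import json
import urllib.parse
import urllib.request


def build_query_url(base_url: str, promql: str) -> str:
    """Build a Prometheus instant-query URL (/api/v1/query) for a PromQL expression."""
    params = urllib.parse.urlencode({"query": promql})
    return f"{base_url}/api/v1/query?{params}"


def query_prometheus(base_url: str, promql: str) -> dict:
    """Run an instant query against Prometheus and return the decoded JSON response."""
    with urllib.request.urlopen(build_query_url(base_url, promql)) as resp:
        return json.load(resp)


# Example: memory usage summed per namespace (assumes cAdvisor metrics are scraped).
url = build_query_url(
    "http://prometheus.example.com",  # hypothetical ingress hostname
    "sum(container_memory_usage_bytes) by (namespace)",
)
print(url)
```

The JSON response carries the results under `data.result`; pod restart counts and uptime can be fetched the same way with expressions like `kube_pod_container_status_restarts_total`.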

How to push mule(Java based) logs to Prometheus storage?

I have a Mule application that mostly does HTTP requests and logs as plain text. I want to push these logs as metrics to Prometheus. Since this is a legacy application, it would take a substantial amount of time to change the code and push metrics directly into Prometheus storage.
Idea is to show Prometheus metrics in Grafana Dashboard.
Is there any intermediate tool that converts plain-text logs into metrics? Anything that helps with this requirement would be appreciated.
FYI: we have Nagios and Splunk doing this task as of now; we are looking to move our solution to Prometheus and Grafana.
In situations like these you can use a tool like https://github.com/fstab/grok_exporter to convert logs into metrics.
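grok_exporter is driven by a YAML config that maps log-line patterns to Prometheus metrics, which it then exposes on an HTTP endpoint for Prometheus to scrape. A minimal sketch; the log path, grok pattern, and metric name below are assumptions, not taken from your Mule app:

```yaml
# grok_exporter config (sketch)
global:
  config_version: 2
input:
  type: file
  path: /var/log/mule/app.log   # placeholder: your Mule log file
  readall: false                # only follow newly written lines
grok:
  patterns_dir: ./patterns      # standard logstash grok pattern files
metrics:
  - type: counter
    name: mule_http_requests_total
    help: HTTP request lines seen in the Mule log.
    match: 'HTTP %{WORD:method} %{URIPATH:path}'   # example pattern
    labels:
      method: '{{.method}}'
server:
  port: 9144                    # Prometheus scrapes http://host:9144/metrics
```

Point a Prometheus scrape job at the exporter's port and the counters become available for Grafana dashboards, much like the Splunk-based setup you have today.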

Dashboards and Visualisations gets lost when Grafana restarts on DC/OS

I used this documentation to deploy Prometheus with Grafana on the cluster.
The problem is that whenever we restart Prometheus and Grafana with a changed configuration, all our dashboards and visualizations are gone.
Is there a workaround to persist the dashboards and visualizations?
You need to define volumes that the Grafana/Prometheus containers will use to store their data persistently.
Doc: https://docs.mesosphere.com/1.7/administration/storage/mount-disk-resources/
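In Marathon terms, that means giving the app definition a persistent local volume and mapping it into the container's data directory. A rough sketch for Grafana; the app id, sizes, and image tag are placeholders, and the same pattern applies to Prometheus's data directory:

```json
{
  "id": "/monitoring/grafana",
  "cpus": 1,
  "mem": 512,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "grafana/grafana" },
    "volumes": [
      {
        "containerPath": "grafana-data",
        "mode": "RW",
        "persistent": { "size": 1024 }
      },
      {
        "containerPath": "/var/lib/grafana",
        "hostPath": "grafana-data",
        "mode": "RW"
      }
    ]
  },
  "residency": { "taskLostBehavior": "WAIT_FOREVER" }
}
```

The first volume entry reserves persistent disk on the agent; the second maps it onto `/var/lib/grafana`, where Grafana keeps its dashboard database, so restarts with new configuration no longer wipe your dashboards.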