Grafana - How to set a default field value for a visualization with a CloudWatch query as the data source

I'm new to Grafana, so I might be missing something obvious. But I have a custom CloudWatch metric that records HTTP response codes into buckets (e.g. 2xx, 3xx, etc.).
My Grafana visualization uses a query to pull and group data from CloudWatch, and the resulting fields are dynamic: 2xx (us-east-1), 2xx (us-west-1), 3xx (us-east-1), etc.
I then use transformations to aggregate those values for a global view of the data.
The problem is that I can't create the transformation until the data exists. I'd like to have a 5xx field, but since that data is sporadic, it doesn't show up in the UI, and I can't find a way to force a "5xx (...)" field to exist and have it used when/if those response codes start occurring.
Is there a way to create placeholder fields somehow to achieve this?

You can't create it in the UI, but you can still edit the panel model directly. The model is JSON that represents the whole panel. To edit it manually, open the panel menu, click Inspect > Panel JSON, and create and customize another item in the transformations section. It is not the most convenient way to edit a panel, but it will achieve your goal.
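For example, here is a hedged sketch of what a hand-added entry in the transformations array might look like. The calculateField transformation id and its reduceRow mode are standard Grafana options, but the alias, field names, and regions are assumptions based on the fields described above:

"transformations": [
  {
    "id": "calculateField",
    "options": {
      "alias": "5xx",
      "mode": "reduceRow",
      "reduce": {
        "include": ["5xx (us-east-1)", "5xx (us-west-1)"],
        "reducer": "sum"
      }
    }
  }
]

Since the field names are typed in by hand rather than picked from the UI, the transformation should simply sit dormant and start applying when/if matching fields appear in the query result.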

Related

Automatically map contents of REST JSON body as flat table in Data Flow

With the Copy Data activity it is possible to retrieve data from a REST call (an array of flat JSON objects, similar to OData) and copy the contents to a flat table, keeping the data types from the source but without the necessity to set the schema for that specific data.
When I try to recreate this with Data Flow, I can't get it to work. When I check the Data Preview of my source, I get a hierarchy with a body (with my OData-like data) and a header. And if I send that to my sink (Avro), it will be saved in this same hierarchical structure (including the header). I know I can fix this manually by using a Select operation (body.column1, body.column2, etc.), but I want to make my Data Flow dynamic so I'm able to use it with multiple tables/endpoints.
So I receive it like this with my REST source: [screenshot]
And I want it to be like this at my sink, without hardcoding my schema: [screenshot]
The only workaround I can come up with is retrieving the data using Copy Data, putting it somewhere temporarily, and then using my Data Flow to further transform it. Is there an easier way to do this? I can't imagine that I'm the only one who has this issue.
Hopefully it's clear and somebody is able to help. Thank you very much in advance.
The Data Flow projection gets its schema from the API, including both body and header. Hence, when you use auto-mapping, everything gets saved.
Here are the workarounds you can consider:
As you mentioned, use Copy Data first and then a Data Flow to further transform.
Use Select or Derived Column transformations to map all the column names before finally using the Sink. For this you can use column pattern matching syntax, so that one condition can match multiple columns to transform (see the sketch after the link below).
Check the link below to learn about column pattern mappings:
https://learn.microsoft.com/en-us/azure/data-factory/concepts-data-flow-column-pattern
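As a hedged illustration of the second option, a rule-based mapping in a Select transformation can promote every column nested under body to the top level without naming any columns. In data flow script form it might look like the sketch below; the stream names (RestSource, FlattenBody) are hypothetical, and the exact syntax should be verified against the column pattern docs linked above:

RestSource select(mapColumn(
        each(body, match(true()))
    ),
    skipDuplicateMapInputs: true,
    skipDuplicateMapOutputs: true) ~> FlattenBody

Here each(body, match(true())) matches every column inside the body hierarchy, so the same Data Flow can serve multiple tables/endpoints, and the header columns are simply never selected.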

Datadog allows me to create facets but it does not show any values for them

I am using Datadog to see my microservice metrics. When I go to the APM tab I can see the spans I created, and their corresponding tags are reaching the server correctly. The problem is that if I click a tag's "gear" icon to convert it to a facet, the operation completes correctly, but I cannot query for this value, nor do I see any value when I add it as a column to my metrics. Example below:
I can click that gear and convert "Headers-Received" to a string facet; there is no error at all from DD, but I cannot query it or see any value being registered. Yet I CAN see the values in each trace of a request reaching my server.
What is going on here?
Not sure if it helps, but by default facets only apply to data ingested after the facet is created, which means a facet won't show values for logs or spans processed prior to its creation.

Prometheus Grafana Templating order by count

I am trying to build a dropdown of API endpoints, which will drive graphs showing the QPS and latency of HTTP requests (RED metrics).
I used Grafana's templating with the following Prometheus query:
label_values(http_duration_milliseconds_count, api_path)
But the problem here is the sort order: it shows long-tail API requests like /admin/phpMyAdmin as well.
I want only the top 10 endpoints by count to be shown in this dropdown. How do I achieve this?
I've attached an image of my first dashboard for reference.
We can use query_result to achieve this.
https://grafana.com/docs/grafana/latest/datasources/prometheus/template-variables/#use-query-variables
query_result(topk(10, sort_desc(sum(http_tt_ms_count) by (api_path))))
http_tt_ms_count is my Prometheus metric time series (time taken in milliseconds).
api_path is my label name.
query_result returns three-tuple values like this:
{api_path="/search/query"} 25704195 1507641522000
I then used the Regex field in the variable definition to extract only the API names:
.*api_path="(.*)".*
This looks like a long way around, but
label_values((topk(10, sort_desc(sum(http_tt_ms_count) by (api_path)))), api_path)
is not working in Grafana (label_values accepts only a metric and a label name, not an arbitrary query expression), which is what made me go down this path.
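Putting it together, the variable definition looks roughly like this (the metric and label names are from my setup; substitute your own):

Type:  Query
Query: query_result(topk(10, sort_desc(sum(http_tt_ms_count) by (api_path))))
Regex: .*api_path="(.*)".*
Sort:  Disabled (topk already returns the series ordered by count)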

Using visjs manipulation to create workflow dependencies

We are currently using visjs version 3 to map the dependencies of our custom built workflow engine. This has been WONDERFUL because it helps us to visualize the flow and find invalid or missing dependencies. What we want to do next is simplify the process of building the dependencies using the visjs manipulation feature. The idea would be that we would display a large group of nodes and allow the user to order them correctly. We then want to be able to submit that json structure back to the server for processing.
Would this be possible?
Yes, this is possible.
Vis.js dispatches various events relating to user interactions with the graph (e.g. manipulations or position changes), for which you can add handlers that modify or store the data on change. If you use DataSets to store the nodes and edges of your network, you can always use the DataSets' get() function to retrieve all elements in your handler in JSON format. Then, in your handler, just use an ajax request to transmit the JSON to your server, storing the entire graph in your DB or saving the JSON as a file.
The opposite works for loading the graph: simply query the JSON from your server and inject it into the node and edge DataSets using the set() method.
You can also store the network's current options using the network's getOptions() method, which returns all applied options as JSON.
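A minimal sketch of the save/load round trip, assuming a container element with id "network" and a hypothetical /workflow endpoint on your server (both are placeholders, and any ajax mechanism works in place of fetch):

var nodes = new vis.DataSet([]);
var edges = new vis.DataSet([]);

var network = new vis.Network(
  document.getElementById('network'),
  { nodes: nodes, edges: edges },
  { dataManipulation: true }  // renamed "manipulation" in vis.js 4+
);

function saveGraph() {
  // DataSet.get() returns plain objects, ready for serialization
  var payload = JSON.stringify({ nodes: nodes.get(), edges: edges.get() });
  fetch('/workflow', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: payload
  });
}

function loadGraph(saved) {
  // inject a previously saved graph back into the DataSets
  nodes.clear();
  nodes.add(saved.nodes);
  edges.clear();
  edges.add(saved.edges);
}

Calling saveGraph() after the user finishes ordering the nodes sends the whole structure back for server-side processing.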

Grafana - Graph with metrics on demand

I am using Grafana for my application, where metrics are exposed by my data source on demand, and I want to monitor such on-demand metrics in Grafana in a user-friendly graph. For example, until an exception has been hit by my application, the data source does NOT expose the metric named 'Exception'. However, I want to create a graph beforehand where I can specify the metric 'Exception' and have it plotted whenever my data source starts exposing it.
When I try to create a graph using the Grafana web GUI, I'm unable to select these 'on-demand metrics', since they've not yet been exposed by my data source. However, I should be able to configure the graph so that if these metrics are ever exposed, they are shown. If I go ahead and type the not-yet-exposed metric name into the metrics field, I get the error "Timeseries data request error".
Does Grafana provide a method to do this? If so, what am I missing?
It depends on which data source you are using (Graphite, InfluxDB, OpenTSDB?).
For Graphite you can enter raw query mode (the pen button) and specify whatever query you want; it does not need to exist yet. The same is true for InfluxDB: you find the raw query mode in the hamburger-menu dropdown to the right of each query.
You can also use wildcards in a Graphite query (or a regex in InfluxDB) to create generic graphs that will add series as they come in.
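For example, a hedged Graphite raw query; the metric path is an assumption about how your app publishes counters:

aliasByNode(myapp.counters.*.count, 2)

The * matches every counter under myapp.counters, so a series such as myapp.counters.Exception.count is picked up and labeled "Exception" automatically the first time it is reported.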