How to create a time series of 99th percentiles from Loki in Grafana?

I have a server application logging all HTTP response times to Loki. I would like to generate a time series of 99th-percentile response times for each endpoint, to see trends and problems. The following query is similar to what I see recommended:
quantile_over_time(0.99,
  {filename="/var/log/server.log"}
    | json
    | MessageTemplate=~".*HostingRequestFinishedLog:l.*"
    | unwrap Properties_ElapsedMilliseconds [1m]
) by (Properties_Path)
However, this just generates a single data point per endpoint (path), for instance like this: [screenshot: a single dot per endpoint instead of a line]
The endpoints shown here receive requests every minute, so they should have enough data to populate a line graph.
How can I turn this query into a time series?


Google Analytics Data API V1 (GA4 property) - wrong API report results when including an event-specific conversions count metric

Golak Sarangi wants to draw more attention to this question:
I am also facing a similar issue where the dimensions in the results change based on the metric used. For a certain date range I get 24 cities when I query for totalUsers but 402 cities when I query for sessions.
I'm migrating an app from the Reporting API V4 to the new Data API V1 to support GA4 properties.
While the sessions metric aggregation is usually a few sessions off when adding multiple dimensions or requesting multiple days, adding an event-specific conversions count metric to the report request produces a considerable increase in the number of sessions on the result rows, by a factor of ~4x from my empirical observations.
Request JSON with only the sessions metric and the date dimension:
{"dimensions":[{"name":"date"}],"metrics":[{"name":"sessions"}],"dateRanges":[{"startDate":"2022-12-01","endDate":"2022-12-01"}],"metricAggregations":["TOTAL"]}
Results: 612 sessions on the response row | 612 sessions in Metric Aggregations
Request JSON with the additional conversions:purchase metric:
{"dimensions":[{"name":"date"}],"metrics":[{"name":"sessions"},{"name":"conversions:purchase"}],"dateRanges":[{"startDate":"2022-12-01","endDate":"2022-12-01"}],"metricAggregations":["TOTAL"]}
Results: 2527 sessions on the response row | 612 sessions in Metric Aggregations
Note: this behavior is consistent across multiple properties, even those with no conversions:purchase events.
Is this an API bug, or am I missing something? Is there something I can do to get accurate results?

How to consolidate GET request stats in locust

Locust seems to consolidate request stats by endpoint. This works fine for POST requests because the endpoint doesn't change; for GET requests, however, the endpoint can change on every request, for example:
xyz.com/v1/user/<user_id1>
xyz.com/v1/user/<user_id2>
In this case the standard stats are not consolidated at the whole-test level; what we get in the standard report is
xyz.com/v1/user/<user_id1> | request latency details
xyz.com/v1/user/<user_id2> | request latency details
This is not helpful when we have to assess the entire load test. Is there a workaround for this?
You can use the name parameter to rename your requests.
self.client.get("/v1/user/" + user_id, name="/v1/user/...")
For details, see https://docs.locust.io/en/stable/writing-a-locustfile.html#name-parameter
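For context, here is a minimal locustfile sketch built around the name parameter; the host and the way user_id is generated are made up for illustration:

import random
from locust import HttpUser, task

class ApiUser(HttpUser):
    host = "https://xyz.com"  # hypothetical target, for illustration only

    @task
    def get_user(self):
        user_id = str(random.randint(1, 1000))  # made-up IDs, vary per request
        # 'name' groups every /v1/user/<id> call under one stats row
        self.client.get(f"/v1/user/{user_id}", name="/v1/user/[id]")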

Pushing key/value pair data to graphite/grafana

We are trying to see if Graphite will fit our use case. We have a number of parameters, essentially key/value pairs, for example:
Caller: abc
Site: xyz
HTTP status: 400
(plus 6-7 more similar fields)
This data is continuously posted to us, and we want to draw visualisations over it: graphs that show things like how many 400s occurred per site, and which sites or callers produce the most 400s.
We are wondering if this can be done with Graphite, but we have questions: Graphite stores numerical values, so how would we represent this data?
Something like this?
Clicks.metric.status.400 1 currTime
Clicks.metric.site.xyz 1 currTime
Clicks.metric.caller.abc 1 currTime
Here we add 1 as the numerical value to record each occurrence of the event.
Also, how would we group a set of values together? For example, this HTTP status goes with this site, since both come from one record. In that case we would need something like:
Clicks.metric.status.{uuid1}.400 1 currTime
Clicks.metric.site.{uuid1}.xyz 1 currTime
Our aim is then to use Grafana to build graphs over this data, e.g. which are the top sites showing 400 statuses? Will this approach work?
Regards
Graphite accepts data over three protocols: plaintext, pickle, and AMQP.
The plaintext protocol is the most straightforward protocol supported by Carbon. The data sent must be in the following format: <metric path> <metric value> <metric timestamp>. Carbon will then help translate this line of text into a metric that the web interface and Whisper understand.
If you're new to Graphite (which it sounds like you are), plaintext is definitely the easiest to get going with.
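To make the plaintext format concrete, here is a minimal Python sketch that sends one metric line to Carbon over a TCP socket; the host and the metric path are assumptions, and 2003 is Carbon's default plaintext port:

import socket
import time

CARBON_HOST = "localhost"  # assumed Carbon host
CARBON_PORT = 2003         # Carbon's default plaintext listener port

def send_metric(path, value, timestamp=None):
    # Carbon's plaintext format: <metric path> <metric value> <metric timestamp>
    timestamp = timestamp or int(time.time())
    line = "%s %s %d\n" % (path, value, timestamp)
    with socket.create_connection((CARBON_HOST, CARBON_PORT)) as sock:
        sock.sendall(line.encode("ascii"))

# e.g. record one 400 response observed for site xyz
send_metric("Clicks.metric.site.xyz.status.400", 1)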
As to how you'll be able to group metrics and perform operations on them, remember that Graphite doesn't natively store any of this for you. It stores timeseries metrics, and provides functions that manipulate that data for visual / reporting purposes.
So when you send a metric, prod.host-abc.application-xyz.grpc.GetStatus.return-codes.400 1 1522353885, all you're doing is storing the value 1 for that specific metric at timestamp 1522353885. You can then use Graphite functions to display that data, e.g. sumSeries(prod.*.application-xyz.grpc.GetStatus.return-codes.400) will produce a sum of all 400 error codes from all hosts.

Send the newly inserted InfluxDB value to an HTTP endpoint

I'm trying to figure out how I can raise a notification when a new value is inserted into my InfluxDB, sending the data of the newly inserted measurement sample to an HTTP endpoint. I'm not sure if this is the goal of Kapacitor (I'm new to the TICK stack) or if it's better to use another tool (any suggestion is welcome).
Thanks in advance.
Best regards,
Albert.
In Kapacitor there are two types of tasks, namely batch and stream. The former is meant for processing historical data; stream is for real-time purposes.
Looking at your requirement, stream is the way to go, as it will enable you to watch data from an InfluxDB measurement in real time. For invoking an endpoint from a TICK script you can use the httpPost node.
Example (pseudocode only):
var data = stream
    |from()
        .database('myInfluxDB')
        .retentionPolicy('autogen')
        .measurement('measurement_ABCD')
    |window()
        .period(10s)
        .every(10s)

data
    |httpPost('http://your.service.url/api/endpoint_xyz')
In this instance the TICK script will watch for newly inserted data on the measurement measurement_ABCD over a window of 10 seconds before doing an HTTP POST to the defined endpoint, and this entire process repeats every 10 seconds. That is, you have a moving window of 10 seconds.
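On the receiving side, here is a minimal sketch of an endpoint that could accept those POSTs, using only the Python standard library; the port and URL path are assumptions, and Kapacitor's httpPost node sends the window's data as a JSON body:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class KapacitorHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # read the JSON body posted by Kapacitor's httpPost node
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print("received window:", payload)  # replace with real notification logic
        self.send_response(200)
        self.end_headers()

# port 8080 is a made-up choice; match it to the URL in the TICK script
HTTPServer(("", 8080), KapacitorHandler).serve_forever()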
Reference:
https://docs.influxdata.com/kapacitor/v1.3/nodes/http_post_node/

Recording GET requests to a table from REST API

I would like to record the various GET requests to my API in a table and use that table as part of the calculation of what to return for future GET requests.
Perhaps the easiest test example would be a GET function that returns the number of GET requests in the last hour.
REST conventions say that GET requests should only return data, without side effects.
Do I need to POST the request and then GET the results of the same request?
You can achieve that easily with Node.js.
You should save the requests in a JSON file or a database, for example, and have another service return this saved data.
Take a look at expressjs.
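To make the idea concrete, here is a minimal sketch of that pattern, written in Python with Flask rather than Express purely for illustration; the route name and in-memory storage are made up, and a real service would persist to a file or database as suggested above:

import time
from flask import Flask, jsonify

app = Flask(__name__)
request_log = []  # in-memory for the sketch; persist to a file/DB in practice

@app.route("/data")
def get_data():
    now = time.time()
    request_log.append(now)  # record this GET before answering
    # count the GETs seen in the last hour, including this one
    recent = [t for t in request_log if now - t < 3600]
    return jsonify({"requests_last_hour": len(recent)})

if __name__ == "__main__":
    app.run()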
Best of luck.