I have a stream of HTTP logs (Fastify Pino format) coming in via Loki that look like:
[2022-07-25T16:59:40.796Z] INFO: incoming request {"req":{"method":"GET","url":"/api/v1/teams/6vYE9rpOPl/members","hostname":"forge.flowforge.loc","remoteAddress":"10.1.106.162","remotePort":38422},"reqId":"req-t6"}
[2022-07-25T16:59:40.810Z] INFO: request completed {"res":{"statusCode":200},"responseTime":13.292339086532593,"reqId":"req-t6"}
I'd like to display average response time by path, but I'm struggling to work out how to combine the two log lines, correlated by reqId, to get the url and responseTime together.
I can extract and parse the JSON for the two lines separately, but not together.
I don't think it's doable with Loki alone. For that you would need either:
some kind of a join, which is not supported (see https://github.com/grafana/loki/issues/3567), or
some kind of a group-by/aggregation which is supported but in a very limited way (https://grafana.com/docs/loki/latest/logql/metric_queries/#log-range-aggregations)
One solution I can think of is using Grafana transformations:
use LogQL pattern and line_format to extract the JSON part from the logs, so that the line field becomes valid JSON (see the sketch after this list). Do not parse the JSON in LogQL: the extracted fields wouldn't be recognized by the Grafana transformations later on.
apply "extract fields" transformation in order to parse the JSON (the transformation is currently in an Alpha version)
add merge transformation
outer join by reqId
group by reqId
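For the first step, a minimal LogQL sketch (the {app="forge"} selector is a placeholder for whatever labels your stream actually has):

{app="forge"}
  | pattern `[<_>] <_>: <_> <_> <payload>`
  | line_format `{{ .payload }}`

Note this relies on the message part always being exactly two words ("incoming request" / "request completed"); if that is not guaranteed, a regexp parser with a named capture for the trailing {...} JSON is the safer choice.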
Grafana transformations are pretty powerful but a bit non-intuitive too, so the solution would require some experimenting. The debug transformation functionality might be helpful.
With the Copy Data activity it is possible to retrieve data from a REST call (an array of flat JSON objects, similar to OData) and copy the contents to a flat table, keeping the data types from the source but without having to set the schema for that specific data.
When I try to recreate this with a Data Flow, I can't get it to work. When I check the Data Preview of my source, I get a hierarchy with a body (containing my OData-like data) and a header. And if I send that to my sink (Avro), it is saved in this same hierarchical structure (including the header). I know I can fix this manually by using a Select transformation (body.column1, body.column2, etc.), but I want to make my Data Flow dynamic so I can use it with multiple tables/endpoints.
So I receive it like this with my REST source:
link
And I want it to be like this at my Sink without hardcoding my schema:
link
The only workaround I can come up with is retrieving the data using Copy Data, putting it somewhere temporarily, and then using my Data Flow to further transform the data. Is there an easier way to do this? I cannot imagine that I'm the only one who has this issue.
Hopefully it's clear and somebody is able to help. Thank you very much in advance.
The Data Flow projection gets its schema from the API, including both body and header. Hence, when you use auto-mapping, everything is going to be saved.
Below are workarounds you can consider:
As you mentioned, use Copy Data first and then a Data Flow to further transform the data.
Use Select or Derived Column transformations to transform your data and get all the column names, and then finally use the sink. For this you can use column pattern matching syntax, so that one condition can match multiple columns to transform (a rough sketch follows below the link).
Check the link below to learn about column pattern mappings:
https://learn.microsoft.com/en-us/azure/data-factory/concepts-data-flow-column-pattern
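For illustration only, a rough sketch of what such a rule-based mapping in a Select transformation could look like (the field names below are from memory and may not match the UI exactly; the idea is to match the subcolumns of body by rule instead of listing them by name):

Rule-based mapping (Select transformation)
  Hierarchy level:    body      (the complex column whose children you want to flatten - an assumption about your source shape)
  Matching condition: true()    (match every subcolumn)
  Output column name: $$        (keep each matched column's own name)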
I have added GTM and GA4 to some website apps to produce tables of detailed stats on click-throughs of ads per advertiser for a date range. I now have suitable reports working successfully using Data Studio, but my attempts to do the same using the PHP implementation of the Analytics Data API V1 Beta (in order to do batch runs covering many date ranges) repeatedly hit a brick wall: the methods needed to analyse the response from instantiating BetaAnalyticsDataClient and then invoking runPivotReport or batchRunReports or batchRunPivotReports (and so on) appear not to be specified.
The only example that I could work from is the ‘quickstart’ one that does a basic dimension and metric retrieval, and even this employs:
getRows()
getDimensionValues()
getValue()
getMetricValues()
that do not appear in the API documentation, at least that I can find.
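For reference, the quickstart example boils down to roughly this (the property ID, dimension, and metric below are placeholders of mine):

use Google\Analytics\Data\V1beta\BetaAnalyticsDataClient;
use Google\Analytics\Data\V1beta\DateRange;
use Google\Analytics\Data\V1beta\Dimension;
use Google\Analytics\Data\V1beta\Metric;

$client = new BetaAnalyticsDataClient();
$response = $client->runReport([
    'property' => 'properties/XXXXXX',
    'dateRanges' => [new DateRange(['start_date' => '2022-07-01', 'end_date' => '2022-07-31'])],
    'dimensions' => [new Dimension(['name' => 'pagePath'])],
    'metrics' => [new Metric(['name' => 'eventCount'])],
]);

foreach ($response->getRows() as $row) {
    // each row exposes repeated DimensionValue / MetricValue objects
    print $row->getDimensionValues()[0]->getValue() . ' '
        . $row->getMetricValues()[0]->getValue() . PHP_EOL;
}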
The JSON response format for each report is of course documented: for example, the output from running runPivotReport is documented as an instantiation of RunPivotReportResponse.
But nowhere can I find a specification of the methods to be used to traverse the JSON tree (vide getDimensionValues() above) and extract some output data.
Guesswork has taken me part way, but purely for example, when retrieving pivot data, should a
getPivotDimensionHeaders()[0]
be followed by a
getDimensionValues()
or a
getPivotDimensionValues()?
I am obviously approaching this all wrong, but what should I do, please?
We are trying to see if Graphite will fit our use case. We have a number of public parameters, essentially key-value pairs.
Say:
Data:
Caller: abc
Site: xyz
HTTP status: 400
6-7 more similar fields (key-value pairs), etc.
This data is continuously posted to us for use in a data report. What we want is to draw visualisations over this data.
We want graphs that show things like how many 400s there were per site, and which are the top sites or callers producing 400s.
Now we are wondering if this can be done with Graphite. But we have questions: Graphite stores numerical values, so how would we represent this data in Graphite?
Something like this?
Clicks.metric.status.400 1 currTime
Clicks.metric.site.xyz 1 currTime
Clicks.metric.caller.abc 1 currTime
Adding 1 as the numerical value to record the event.
Also, how would we group a set of values together? For example, this HTTP status goes with this site because they belong to one record. In that case we would need something like:
Clicks.metric.status.{uuid1}.400 1 currTime
Clicks.metric.site.{uuid1}.xyz 1 currTime
Our aim is then to use Grafana to build graphs on this data, e.g. which are the top sites showing a 400 status.
Will this be OK?
Regards
Graphite accepts three types of data: plaintext, pickled, and AMQP. The plaintext protocol is the most straightforward protocol supported by Carbon.
The data sent must be in the following format: <metric path> <metric value> <metric timestamp>. Carbon will then help translate this line of text into a metric that the web interface and Whisper understand.
If you're new to graphite (which sounds like you are) plaintext is definitely the easiest to get going with.
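For example, assuming Carbon is listening on its default plaintext port (2003) on a host called graphite.example.com, a single data point can be sent with something like:

echo "clicks.site.xyz.status.400 1 $(date +%s)" | nc graphite.example.com 2003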
As to how you'll be able to group metrics and perform operations on them, you have to remember that graphite doesn't natively store any of this for you. It stores timeseries metrics, and provides functions that manipulate that data for visual / reporting purposes. So when you send a metric, prod.host-abc.application-xyz.grpc.GetStatus.return-codes.400 1 1522353885, all you're doing is storing the value 1 for that specific metric at timestamp 1522353885. You can then use graphite functions to display that data, e.g.,: sumSeries(prod.*.application-xyz.grpc.GetStatus.return-codes.400) will produce a sum of all 400 error codes from all hosts.
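Following the same idea for your case: suppose you send one series per site (the metric names here are illustrative),

clicks.site.xyz.status.400 1 1658768380
clicks.site.abc.status.400 1 1658768380

then a Grafana panel could use a query such as

highestMax(aliasByNode(clicks.site.*.status.400, 2), 5)

to chart the five sites with the most 400s; aliasByNode and highestMax are standard Graphite render functions.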
I want to use the Tarantool database for logging user activity.
Are there any out-of-the-box solutions to create a web dashboard with nice charts based on the collected data?
A long time ago, using a very old version of Tarantool, I created a draft of tarbon, a time-series database with an interface identical to carbon-cache.
Since then the protocol has changed, but the general idea is still the same: use spaces to store the data, with a compact data organization and the right indexes to access the spaces as time-series rows, and Lua for preparing the resulting JSON.
That solution performed very well (both on reads and on writes), but that old version lacked disk storage, and without disk I was very limited in metrics capacity.
Tarantool has an embedded Lua language, so you could generate JSON from your data and use any charting library. For example, D3.js has a method to load JSON directly from a URL:
d3.json(url[, callback])
Creates a request for the JSON file at the specified url with the mime type "application/json". If a callback is specified, the request is immediately issued with the GET method, and the callback will be invoked asynchronously when the file is loaded or the request fails; the callback is invoked with two arguments: the error, if any, and the parsed JSON. The parsed JSON is undefined if an error occurs. If no callback is specified, the returned request can be issued using xhr.get or similar, and handled using xhr.on.
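For example (the URL is just a placeholder for wherever your Tarantool app serves its JSON):

d3.json("/user_activity.json", function(error, data) {
  if (error) throw error;
  // build the chart from the parsed JSON here
});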
You could also look at c3.js, a simple facade for D3.
I am trying to learn a Mirth system with a channel that is pulling from a database for its source and outputting HL7 messages for its destination(s). The SQL query pulls the correct data from the source, but Mirth does not output all of the data in the right spots in the HL7 message. The destinations show that the output is Template:${message.encodedData}. What does that mean? Where can I see the template it is using? The destinations don't have any filters or transformers, so I am confused.
message.encodedData is the fully transformed message - after any transformation steps.
The transformer is also where you can specify the output template for how you want the data to look. Simply load a sample template message into the transformer's output template (the message template tab in the transformer) and then create a series of Message Builder steps. Your output message will be in the variable tmp, and your SQL results will be in the variable msg.
So, if your first column is patientID (select patientid as patientID ...), you would create a Message Builder step along the lines of:
mapped segment: tmp['PID']['PID.3']['PID.3.2']
mapping: msg['patientID'];
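That is, the generated step ends up roughly equivalent to the JavaScript assignment

tmp['PID']['PID.3']['PID.3.2'] = msg['patientID'];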
I don't have exact syntax in front of me right now, but that's the basic idea.
I think "transformed" is the status of the message right after the transformers are executed and "encoded" message is the status after the message that comes from the transformers is encoded into the specified channel outbound datatype. In some cases those messages will be the same but not in all the cases.
Also, it is very difficult to find up-to-date and comprehensive Mirth documentation.