ELK Stack Log timestamp - elastic-stack

I'm relatively new to the ELK stack in general and I'm trying to process some old logs from 2014 for visualization. I was wondering if there's a way to have Kibana display the timestamp from when the logs were created instead of the timestamp from when I added them to my forwarder client.
This is an excerpt from my filter section

The timestamp only shows when Logstash picked up the logs and saved them to Elasticsearch; Logstash can't know when the logs were actually created. The only way to recover that is if the creation time is stored inside the logs themselves, as with syslogs or your own logs with embedded timestamps. Then you can create a filter in Logstash and use that field as the event time.
You can use the grok filter for that.
http://grokconstructor.appspot.com/do/match
Example:
Sample log line
55.3.244.1 GET /index.html 15824 0.043
Filter
%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}
Logstash config
input {
  file {
    path => "/var/log/http.log"
  }
}
filter {
  grok {
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}
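Note that grok on its own only extracts fields; to make Kibana show the original 2014 event time, the extracted timestamp still has to be copied into @timestamp with a date filter. A minimal sketch, assuming each line starts with an ISO8601 timestamp (the pattern and the logdate field name are assumptions about your log format):

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logdate} %{GREEDYDATA:msg}" }
  }
  date {
    # Parse the captured field and use it as the event's @timestamp
    # so Kibana displays the original event time.
    match => [ "logdate", "ISO8601" ]
    target => "@timestamp"
  }
}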

Related

React query compare old data with new data in refetchInterval

From a react-query call I have a refetchInterval where I pass a function that receives the fresh data.
refetchInterval: (data) => (compareData(data) ? false : 1000),

function compareData(freshData) {
  // ... would like to access the previous data to compare with freshData
  // ... if different, stop the interval
}
I want a way to get the previous data from inside the refetchInterval function. Is there a way to do this?
So far all I can get back is the fresh data. I want to be able to compare my fresh data with the previous stale data.
I've seen something called isDataEqual that you can set on the config of the query but can't find any docs on how to use it.
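One way to sketch this is to keep the previous payload in a ref and compare it inside refetchInterval. A minimal sketch, assuming react-query v3; fetchData and the JSON.stringify comparison are illustrative assumptions, not the library's recommended API:

import { useRef } from "react";
import { useQuery } from "react-query";

function usePolledData(fetchData) {
  const previousData = useRef();

  return useQuery("polledData", fetchData, {
    refetchInterval: (data) => {
      // First run: nothing to compare against yet, keep polling.
      if (previousData.current === undefined) {
        previousData.current = data;
        return 1000;
      }
      const unchanged =
        JSON.stringify(previousData.current) === JSON.stringify(data);
      previousData.current = data;
      // Stop polling once two consecutive payloads match.
      return unchanged ? false : 1000;
    },
  });
}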

Grafana - Is it possible to use variables in Loki-based dashboard query?

I am working on a Loki-based Dashboard on Grafana. I have one panel for searching text in the Loki trace logs, the current query is like:
{job="abc-service"}
|~ "searchTrace"
|json
|line_format "{{if .trace_message}} Message: \t{{.trace_message}} {{end}}"
Where searchTrace is a variable of type "Text box" for the user to input search text.
I want to include another variable skipTestLog to skip logs created by some test cron tasks. skipTestLog is a custom variable of two options: Yes,No.
Suppose the logs created by test cron tasks contain the text CronTest in the field trace_message after the json parser, are there any ways to filter them out based on the selected value of skipTestLog?
Create a key/value custom variable and reference it in a label filter in the query.
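A minimal sketch, assuming the variable is defined with Grafana's key : value custom-variable syntax, that the Yes option maps to the regex .*CronTest.* and the No option maps to a^ (a regex that never matches, so nothing is filtered out):

Name:   skipTestLog
Type:   Custom
Values: Yes : .*CronTest.*, No : a^

Then add a label filter on trace_message after the json stage (assuming the text-box variable is interpolated as $searchTrace):

{job="abc-service"}
  |~ "$searchTrace"
  | json
  | trace_message !~ "$skipTestLog"
  | line_format "{{if .trace_message}} Message: \t{{.trace_message}} {{end}}"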

Grafana dashboard to display a metric for a key in JSON Loki record

I'm having trouble understanding how to create a dashboard time series plot to display a single key/value from a Loki log which is in JSON format.
eg:
here is my query in the Explorer:
{job="railsdevlogs"}|json
which returns log lines such as:
{"date":"2022-01-05T21:27:21.895Z","pool":{"Pool Size":50,"Current":5,"Active":1,"Idle":4,"Dead":0,"Timeout":"5 sec"},"puma":{"Started At":"2022-01-05T20:35:26Z","Max Threads":16,"Pool Capacity":16,"Running":1,"Backlog":0,"IO Handles":15,"File Handles":2,"Socket Handles":4,"Server Log Size":46750072},"process":[{"Name":"ruby.exe","Process ID":656,"Threads":11,"Working Set":150728704,"Virtual Size":288079872},{"Name":"mysqld.exe","Process ID":4836,"Threads":3,"Working Set":360448,"Virtual Size":4445065216},{"Name":"mysqld.exe","Process ID":5808,"Threads":49,"Working Set":69906432,"Virtual Size":4924059648},{"Name":"aaaaa.exe","Process ID":14460,"Threads":18,"Working Set":49565696,"Virtual Size":5478469632},{"Name":"bbbbb.exe","Process ID":9584,"Threads":14,"Working Set":35012608,"Virtual Size":4496551936},{"Name":"ccccc.exe","Process ID":11944,"Threads":14,"Working Set":29609984,"Virtual Size":4481880064}],"gc":{"count":242,"heap_allocated_pages":1277,"heap_sorted_length":1279,"heap_allocatable_pages":9,"heap_available_slots":869213,"heap_live_slots":464541,"heap_free_slots":404672,"heap_final_slots":0,"heap_marked_slots":411311,"heap_swept_slots":457903,"heap_eden_pages":1268,"heap_tomb_pages":9,"total_allocated_pages":1278,"total_freed_pages":1,"total_allocated_objects":74364715,"total_freed_objects":73900174,"malloc_increase_bytes":640096,"malloc_increase_bytes_limit":16777216,"minor_gc_count":131,"major_gc_count":111,"remembered_wb_unprotected_objects":57031,"remembered_wb_unprotected_objects_limit":114062,"old_objects":349257,"old_objects_limit":698512,"oldmalloc_increase_bytes":640288,"oldmalloc_increase_bytes_limit":16777216},"os":{"System Name":"xxxxx","Description":"","Organization":"","Operating System":"Microsoft Windows 10 Enterprise LTSC","OS Version":"10.0.17763","OS Serial Number":"xxxxx-xxxxx-xxxxx-xxxxx","System Time":"2022-01-05T16:27:22.000-05:00","System Time Zone":-300,"Last Boot Time":"2021-12-15T23:26:38.000-05:00","System Drive":"C:","Total Physical Memory":34204393472,"Free Physical Memory":20056260608,"Total Virtual Memory":39304667136,"Free Virtual Memory":13915041792,"Number of Processes":307,"Number of Users":2,"volumes":[{"Drive":"C:\\","Type":"NTFS","Total Space":1023563264000,"Free Space":681182343168,"Block Size":4096}]},"symbol":{"size":28106},"stats_collection_time":387}
using |json will automatically create dynamic labels for all the key/values in the json log line:
gc_count = 123
os_Free_Virtual_Memory = 456789
etc.
Now I would like to plot one of these values in a Grafana time series panel, but I am struggling to understand how to isolate one dynamic label and plot it.
Perhaps I'm using |json incorrectly. The documentation and examples I have read so far show how to filter the logs using the dynamic labels, but I don't need that, since I want to plot every log line.
thanks
I think this should help https://grafana.com/go/observabilitycon/2020/keynote-what-is-observability/ if you go to minute 41.
There's an example which is very similar to what you're trying to achieve.
Your query should look something like:
quantile_over_time(0.99,
  {job="railsdevlogs"}
    | json
    | unwrap gc_count [1m]
) by (job)
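If you don't need a quantile, the other unwrapped range aggregations follow the same shape; for example, a per-minute average (avg_over_time is standard LogQL):

avg_over_time({job="railsdevlogs"} | json | unwrap gc_count [1m]) by (job)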

Displaying Records with route53 using .net

Before I begin I do want to mention this is my first time doing this, so forgive any errors I might make. I am currently learning how to use Route 53 with .NET and I am stuck on getting it to print records. I have the correct access key, secret key, and hosted zone ID.
I've tested pulling up the name of the hosted zone as well as getting it to display the number of records, but when I try to get it to actually print the values it displays: "Amazon.Route53.Model.ListResourceRecordSetsResponse". I'm sure the answer is right there in front of me, but the Route 53 API docs don't really have any guidelines for showing records: they show you how to create records but not how to simply view them.
Here is what I have:
route53Client.ListResourceRecordSets(new ListResourceRecordSetsRequest
{
    HostedZoneId = "HostedZoneId here",
    MaxItems = "1"
});
I'm assuming that I am not including enough information for it to properly pull the records. I can pull them up through the AWS CLI, so I know I have what I need to see them; I'm just stuck on this part. Any help would be great.
Here is the link to the API: https://docs.aws.amazon.com/sdkfornet/v3/apidocs/index.html
Process the ListResourceRecordSetsResponse like this:
var result = client.ListResourceRecordSets(new ListResourceRecordSetsRequest
{
    HostedZoneId = "HostedZoneId here",
    MaxItems = "1"
});
foreach (var recordSet in result.ResourceRecordSets)
{
    // Each ResourceRecordSet groups the records for one name/type pair.
    foreach (var resourceRecord in recordSet.ResourceRecords)
    {
        // Each ResourceRecord holds one value, e.g. an IP address.
        Console.WriteLine(resourceRecord.Value);
    }
}
Relevant API types: ListResourceRecordSetsResponse, ResourceRecordSet, ResourceRecord.
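For completeness, a self-contained sketch that pages through every record set with the v3 async API; the credential setup and the output format here are assumptions:

using System;
using System.Threading.Tasks;
using Amazon.Route53;
using Amazon.Route53.Model;

class Program
{
    static async Task Main()
    {
        // Credentials and region come from the default chain (profile, env vars, etc.).
        var client = new AmazonRoute53Client();

        var request = new ListResourceRecordSetsRequest
        {
            HostedZoneId = "HostedZoneId here"
        };

        ListResourceRecordSetsResponse response;
        do
        {
            response = await client.ListResourceRecordSetsAsync(request);
            foreach (var recordSet in response.ResourceRecordSets)
            {
                foreach (var record in recordSet.ResourceRecords)
                {
                    Console.WriteLine($"{recordSet.Name} {recordSet.Type} {record.Value}");
                }
            }
            // Continue from where the previous page stopped.
            request.StartRecordName = response.NextRecordName;
            request.StartRecordType = response.NextRecordType;
        } while (response.IsTruncated);
    }
}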

Is it possible to create an RHQ plugin that collects historic measurements from files?

I'm trying to create an RHQ plugin to gather some measurements. It seems relatively easy to create a plugin that returns a value for the present moment. However, I need to collect these measurements from files. These files are created on a schedule, for example one per hour, but they contain much finer-grained measurements, for example a measurement for every minute. The file may look something like this:
18:00 20
18:01 42
18:02 39
...
18:58 12
18:59 15
Is it possible to create a RHQ plugin that can return many values with timestamps for a measurement?
I think that within org.rhq.core.pluginapi.measurement.MeasurementFacet#getValues you can return as many values as you want in the MeasurementReport.
So basically: open the file, seek to the last known position (if the file is only ever appended to), read from there, and for each line do
MeasurementData data = new MeasurementDataNumeric(timeInFile, request, valueFromFile);
report.addData(data);
Of course, alerting on this (historical) data is somewhat questionable, as if you only read the file an hour later, the alert cannot be retroactively fired at the time the bad value happened :->
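Putting that together, a minimal sketch of a getValues that replays a whole file of "HH:mm value" lines with their historic timestamps; the file path, the way the day is derived, and the parsing are all illustrative assumptions:

import java.io.BufferedReader;
import java.io.FileReader;
import java.text.SimpleDateFormat;
import java.util.Set;

import org.rhq.core.domain.measurement.MeasurementDataNumeric;
import org.rhq.core.domain.measurement.MeasurementReport;
import org.rhq.core.domain.measurement.MeasurementScheduleRequest;

public class HistoricFileComponent /* implements org.rhq.core.pluginapi.measurement.MeasurementFacet */ {

    public void getValues(MeasurementReport report, Set<MeasurementScheduleRequest> metrics)
            throws Exception {
        SimpleDateFormat hhmm = new SimpleDateFormat("yyyy-MM-dd HH:mm");
        String day = "2014-06-01"; // hypothetical: derive this from the file name or mtime

        for (MeasurementScheduleRequest request : metrics) {
            BufferedReader reader = new BufferedReader(new FileReader("/var/data/measurements.log"));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    // Each line looks like "18:01 42": a time of day and a value.
                    String[] parts = line.trim().split("\\s+");
                    if (parts.length != 2) {
                        continue; // skip malformed lines
                    }
                    long timestamp = hhmm.parse(day + " " + parts[0]).getTime();
                    Double value = Double.valueOf(parts[1]);
                    // The three-argument constructor carries the historic timestamp.
                    report.addData(new MeasurementDataNumeric(timestamp, request, value));
                }
            } finally {
                reader.close();
            }
        }
    }
}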
Yes, it is certainly possible.
@Override
public void getValues(MeasurementReport report, Set<MeasurementScheduleRequest> metrics) throws Exception {
    for (MeasurementScheduleRequest request : metrics) {
        Double result = SomeReadUtilClass.readValueFromFile();
        MeasurementData data = new MeasurementDataNumeric(request, result);
        report.addData(data);
    }
}
SomeReadUtilClass is a utility class that reads the file, and readValueFromFile is the function where you write your logic to read the value from the file.
The Double result is the important part: you can compute it from a database or read it from a file, and then pass it to the MeasurementDataNumeric constructor as MeasurementDataNumeric(request, result).