Kibana - what logs are not reporting - visualization

I am currently using Kibana 5.0, and almost 45 log sources are integrated with it, such as IIS, VPN, ASA, etc. My question is: how do I create a visualization to check which log sources are not reporting to Kibana? Can anybody help with this?

Quick and dirty solution...
Make sure each log source is given a unique and meaningful tag as soon as their data enters the logstash workflow.
As each document is processed, write an entry to a separate index, call it masterlist.idx (do not give this index a date suffix). Use the tags you assigned as the document ID when you write entries to masterlist.idx.
The masterlist.idx index should really just contain a list of your log sources, with each entry having a timestamp. Then all you've got to do is visualize masterlist showing all the entries. You should have 45 documents, each with a timestamp showing its latest update. I think the default timepicker on Kibana's Discover tab will do the job. Any sources that haven't been updated in X days (or whatever your threshold is) have a problem.
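The threshold check this visualization performs can be sketched in a few lines. This is a minimal Python sketch, assuming you have already extracted each source tag and its latest masterlist timestamp (the tag names and dates here are hypothetical):

```python
from datetime import datetime, timedelta

def stale_sources(last_seen, threshold_days=2, now=None):
    """Return tags of log sources whose latest masterlist entry is older than the threshold."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=threshold_days)
    return sorted(tag for tag, ts in last_seen.items() if ts < cutoff)

# Hypothetical masterlist snapshot: source tag -> timestamp of its latest document
now = datetime(2017, 1, 10)
last_seen = {
    "iis": datetime(2017, 1, 9, 23, 0),
    "vpn": datetime(2017, 1, 5, 8, 0),   # silent for 5 days -> flagged
    "asa": datetime(2017, 1, 10, 1, 0),
}
print(stale_sources(last_seen, threshold_days=2, now=now))  # ['vpn']
```

The same comparison is what you would eyeball on the Discover tab: any tag whose newest document predates the cutoff is a source that stopped reporting.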

Related

Can you calculate active users using time series

My atomist client exposes metrics on commands that are run. Each command is a metric with a username element as well as a status element.
I've been scraping this data for months without resetting the counts.
My requirement is to show the number of active users over a time period, i.e. 1h, 1d, 7d and 30d, in Grafana.
The original query was:
count(count({Username=~".+"}) by (Username))
This is an issue because I don't clear the metrics, so it's always a count since inception.
I then tried this:
count(max_over_time(help_command{job="Application Name",Username=~".+"}[1w])
  - max_over_time(help_command{job="Application Name",Username=~".+"}[1w] offset 1w) > 0)
This works, but only for one command; I have about 50 other commands that need to be added to that count.
I tried the:
{__name__=~".+_command",job="app name"}[1w] offset 1w
but this is obviously very expensive (it times out in the browser), and it has issues integrating with max_over_time, which doesn't support it.
Any help? Am I using the metric the wrong way? Is there a better way to query? My only option at the moment is to repeat the working count format above for each command.
Thanks in advance.
To start, I will point out a number of issues with your approach.
First, the Prometheus documentation recommends against using arbitrarily large sets of values for labels (as your usernames are). As you can see (based on your experience with the query timing out) they're not entirely wrong to advise against it.
Second, Prometheus may not be the right tool for analytics (such as active users). Partly due to the above, partly because it is inherently limited by the fact that it samples the metrics (which does not appear to be an issue in your case, but may turn out to be).
Third, you collect separate metrics per command (i.e. help_command, foo_command) instead of a single metric with the command name as a label (i.e. command_usage{command="help"}, command_usage{command="foo"}).
To get back to your question though, you don't need the max_over_time, you can simply write your query as:
count by(__name__)(
  (
    {__name__=~".+_command",job="Application Name"}
    -
    {__name__=~".+_command",job="Application Name"} offset 1w
  ) > 0
)
This only works, though, because you say that whatever exports the counts never resets them. If that is simply because the exporter never restarted, and when it does the counts will drop to zero, then you'd need to use increase instead of subtraction, and you'd run into the exact same performance issues as with max_over_time.
count by(__name__)(
  increase({__name__=~".+_command",job="Application Name"}[1w]) > 0
)
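The comparison at the heart of these queries is easy to model outside Prometheus. This is a simplified Python sketch of the "counter now minus counter a week ago, keep users where the difference is positive" logic, assuming counters never reset (as stated in the question); the metric names and user names are hypothetical:

```python
def active_users(current, week_ago):
    """Count users whose command counters increased over the window.

    current / week_ago map (metric_name, username) -> cumulative count,
    mirroring the PromQL `current - current offset 1w > 0` comparison.
    """
    users = set()
    for key, value in current.items():
        if value - week_ago.get(key, 0) > 0:
            users.add(key[1])  # the username part of the series key
    return len(users)

current = {("help_command", "alice"): 12, ("help_command", "bob"): 4,
           ("foo_command", "carol"): 7}
week_ago = {("help_command", "alice"): 12, ("help_command", "bob"): 1,
            ("foo_command", "carol"): 0}
print(active_users(current, week_ago))  # 2 (bob and carol; alice's counter didn't move)
```

Note that the PromQL version counts series per metric name (count by(__name__)), whereas this sketch deduplicates users across all commands; which you want depends on whether one user running two commands should count once or twice.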

JMeter to record results on hourly basis

I have a JMeter project with multiple GET and POST requests and assertions for these. I use the Aggregate Results and View Results Tree listeners, but neither of these can store results on an hourly basis. I tried the JMeterPlugins-Standard and JMeterPlugins-Extras packages and the jp@gc - Graphs Generator listener, but all of them use aggregated data instead of hourly data. So I would like to get the number of successful and failed requests/assertions per hour; a bar chart would probably be most suitable for this purpose.
I'm going to suggest a non-conventional, design-level solution: name your samplers dynamically with the hour (or date and hour), so that each hour the name changes and the samplers appear in a different category, i.e.:
The code for such name is:
${__time(dd:hh,)} the rest of sampler name
Such a sampler will appear in the following way in the Aggregate Report (here I simulated it with minutes/seconds, but the same will happen with days/hours, just on a larger scale):
Pros and cons of such approach:
Simple: you can aggregate anything by hour, minute, or any other time slice while the test is running, rather than by analysis after execution.
Not listener-dependent; it can be used with pretty much any listener or visualizer.
If you also want overall stats, it will require summing up every sub-category. So it alters the data, but in a way that can still be added back to the original relatively easily.
Calculating __time before every sampler will not go completely unnoticed from a performance perspective, but I don't think it will add visible overhead to the script.
You could get the same data by properly aggregating the JTL or CSV (whichever you use) after execution, so it doesn't provide anything that is impossible to achieve using standard methods.
The script needs altering to make this happen. If you have hundreds of samplers, it's going to take a while. And if you want to change back...
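The post-execution alternative mentioned above (aggregating the JTL/CSV yourself) is only a few lines of scripting. This is a minimal Python sketch, assuming a CSV-format results file with the standard timeStamp (epoch milliseconds) and success columns; the sample rows are made up:

```python
import csv
from collections import Counter
from datetime import datetime
from io import StringIO

def per_hour_counts(jtl_csv):
    """Aggregate JMeter CSV results into success/failure counts per hour."""
    ok, failed = Counter(), Counter()
    for row in csv.DictReader(jtl_csv):
        # timeStamp is epoch milliseconds in the default JMeter CSV output
        hour = datetime.utcfromtimestamp(int(row["timeStamp"]) / 1000).strftime("%Y-%m-%d %H:00")
        (ok if row["success"] == "true" else failed)[hour] += 1
    return ok, failed

# Hypothetical three-sample results file
sample = StringIO(
    "timeStamp,label,success\n"
    "1500000000000,GET /,true\n"
    "1500000100000,GET /,false\n"
    "1500003700000,GET /,true\n"
)
ok, failed = per_hour_counts(sample)
```

The two Counters map hour buckets to counts and can be fed straight into a bar chart (e.g. with matplotlib or a spreadsheet).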
You might want to use Filter Results Tool which has --start-offset and --end-offset parameters, you can "cut" your results file into "interesting" pieces and plot them according to your requirements.
You can install Filter Results Tool using JMeter Plugins Manager
Also be aware that according to JMeter Best Practices you should
Use as few Listeners as possible; if using the -l flag as above they can all be deleted or disabled.
Don't use "View Results Tree" or "View Results in Table" listeners during the load test, use them only during scripting phase to debug your scripts.
You can get whatever information you need from the .jtl results file, you can specify test results location via -l command-line argument
To get summarized results per hour add to your test plan Generate Summary Results:
Generates a summary of the test run so far to the log file and/or standard output
Update the interval in jmeter.properties to your needs, e.g. 1 hour = 3600 seconds:
summariser.interval=3600
You will get a summary of your requests per hour.
You can try the JMeter Backend Listener. It has integrations with Graphite and InfluxDB. After storing the results in one of these time-series databases, you can display them in a Grafana dashboard. Grafana has its own filtering for showing results on an hourly, daily, monthly basis and so on.

Returning current version of each record using Google Cloud Datastore query

I am using a relay/graphql/googlecloud setup for a project that saves data immutably.
I have a set of fields that create a new record each time a modification is made to any of the fields structured like so:
Project
- Start Date
- End Date
- Description
- ...
- ...
The first time a project is created it is saved with a timestamp and a version number. For example:
1470065550-1
Any modifications after this creates a new version number but still uses the same timestamp.
1470065550-2
Bearing in mind that it is immutable, there will potentially be a lot of versions of one project. If there are also a lot of projects, this could result in a large number of records being fetched.
If I want to fetch a list of projects from the datastore, returning only the latest version of each one, what would be the best way of going about this? As the version number increments, I never know which one is the latest.
For example if I had rows containing 2 projects, both with multiple versions and I want to fetch the latest version of each:
1470065550-1
1470065550-2
1470065550-3
1470065550-4
1470065550-5
1470065550-6
1470065550-7 <--- Current Version for 1470065550
1567789887-1
1567789887-2
1567789887-3 <--- Current Version for 1567789887
Based on the rows above I need the query to just return the latest version of the two projects:
1470065550-7 <--- Current Version for 1470065550
1567789887-3 <--- Current Version for 1567789887
You probably want to change your tag to [google-cloud-datastore] instead of [google-cloud-storage] because you're probably missing people who are truly experts on datastore.
But just to offer my two cents on your question: It may be easiest to add a field for "current" and then use a transaction to switch it atomically. Then it is an easy filter for use in any other query.
If you can't do that, it's a bit tricky to answer your question because you haven't given us the query you are building to get this set of results. The typical way of getting a max value is to sort descending and set a limit of 1, like so:
var query = datastore
  .createQuery('Projects')
  .order('timestamp', { descending: true })
  .limit(1);
But given the way you are storing the data, I don't think you can do this once you roll over from -9 to -10, because "-10" usually comes before "-9" in lexicographic sorts (I didn't check how this works in Datastore, however). You might need to zero-pad the version numbers.
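The rollover problem is easy to see with plain string comparison, and just as easy to avoid by comparing the numeric version (or, equivalently, by storing zero-padded versions so lexicographic order keeps working). This is a small Python sketch using the key format from the question:

```python
def latest_versions(keys):
    """Pick the latest '<timestamp>-<version>' key per project.

    Plain string comparison breaks at the -9 -> -10 rollover
    ("1470065550-10" < "1470065550-9" lexicographically), so we
    compare on the integer version instead.
    """
    latest = {}
    for key in keys:
        ts, version = key.rsplit("-", 1)
        if int(version) > latest.get(ts, (0, None))[0]:
            latest[ts] = (int(version), key)
    return sorted(key for _, key in latest.values())

keys = ["1470065550-%d" % v for v in range(1, 11)] + ["1567789887-1", "1567789887-3"]
print(latest_versions(keys))  # ['1470065550-10', '1567789887-3']

# Zero-padding makes lexicographic order (and hence Datastore key order) safe again:
print("1470065550-0009" < "1470065550-0010")  # True, as desired
```

In Datastore itself this post-processing would happen client-side after fetching the keys, which is why the "current" flag suggested above scales better.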

How to use a Zephyr rest call to find the time when the teststeps were updated?

We have testcases stored in Zephyr for Jira. When we update teststeps, the updated date on the Jira issue does not get updated. I assume that information is saved in Zephyr. I need to find all the testcases whose steps have been modified since a given date. I would like to do it with a ZAPI call.
In order to determine the changes made to the test steps, you can navigate to Add-ons > Zephyr for Jira > "General information", where you can see the audit history for the steps. Please let me know if this fulfills your requirement.
This is what I figured out:
1) A call like:
https:///rest/zapi/latest/audit?entityType=TESTSTEP&maxRecords=100&offset=0
will give you the last 100 records from the audit where test steps were changed. You can inspect fields like "creationDate" and "issueKey" in the response.
2) If you need to see additional audit lines, change the offset to multiples of 100. You can stop once creationDate is older than your cutoff date.
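The paging loop described in steps 1 and 2 can be sketched as follows. This is a hedged Python sketch: the fetch_page callable stands in for the actual HTTP call to the audit endpoint, and the record field names (creationDate as an epoch value, issueKey) are assumptions based on the description above:

```python
def modified_teststeps_since(fetch_page, cutoff, page_size=100):
    """Walk the audit endpoint in offset steps of page_size, newest first,
    collecting issue keys until records predate the cutoff.

    fetch_page(offset) models a GET to
    .../rest/zapi/latest/audit?entityType=TESTSTEP&maxRecords=100&offset=<offset>
    and returns that page's list of audit records, newest first.
    """
    issue_keys, offset = set(), 0
    while True:
        records = fetch_page(offset)
        if not records:
            break  # ran out of audit history
        for rec in records:
            if rec["creationDate"] < cutoff:
                return issue_keys  # older than the cutoff: stop paging
            issue_keys.add(rec["issueKey"])
        offset += page_size
    return issue_keys

# Stubbed pages standing in for two HTTP responses
pages = [
    [{"creationDate": 200, "issueKey": "T-2"}, {"creationDate": 150, "issueKey": "T-1"}],
    [{"creationDate": 90, "issueKey": "T-0"}],
]
fetch = lambda off: pages[off // 100] if off // 100 < len(pages) else []
print(modified_teststeps_since(fetch, cutoff=100))  # {'T-2', 'T-1'}
```

With a real client you would replace fetch with a function doing the authenticated GET and returning the parsed JSON record list.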

Given code base hosted on TFS, which command can tell me which file has changed most?

I want to find out which files under a given directory have been updated the most. Is there any command which can display this info? Or is there any way to get the max version count for a given file, so I can write a script to get this info for all files and then sort descending?
Do you mean changed the most number of times, or undergone the most code churn?
Either way, looking at the report data might be the easiest option for you. Take a look at the following blog post I wrote explaining how to use Excel for looking at TFS data; it uses churn as an example, allowing you to drill down into folders and files, but you should be able to get the data that you are looking for.
Getting Started with the TFS Data Warehouse
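If you do end up scripting it yourself, the "sort descending by change count" part the question describes is a one-liner once you have the history data. This is a hedged Python sketch assuming you have already exported, per changeset, the list of server paths it touched (e.g. from TFS history); the paths and data are hypothetical:

```python
from collections import Counter

def most_changed(changesets, top=3):
    """Rank files by how many changesets touched them (a simple churn proxy)."""
    # set() so a file counted once per changeset, even if listed twice
    counts = Counter(path for cs in changesets for path in set(cs))
    return counts.most_common(top)

# Hypothetical changeset -> changed-paths data pulled from TFS history
changesets = [
    ["$/proj/a.cs", "$/proj/b.cs"],
    ["$/proj/a.cs", "$/proj/b.cs"],
    ["$/proj/a.cs", "$/proj/c.cs"],
]
print(most_changed(changesets, top=2))  # [('$/proj/a.cs', 3), ('$/proj/b.cs', 2)]
```

Counting changesets per file is not the same as code churn (lines added/removed), which is what the data-warehouse report gives you; pick whichever matches the question you are actually asking.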