I'm trying to aggregate the API logs based on the different endpoints I have. There are a total of 4 endpoints:
1: /v1/vehicle_locations
2: /v1/vehicle_locations/id
3: /v1/driver_locations
4: /v1/driver_locations/id
The way I'm currently doing this is:
_sourceCategory=production | keyvalue auto | where (path matches "/v1/driver_locations" or path matches "/v1/driver_locations/*" or path matches "/v1/vehicle_locations" or path matches "/v1/vehicle_locations/*") | count by path
The problem with this is that while I get the correct aggregate for /v1/vehicle_locations and /v1/driver_locations, I get individual results for /v1/driver_locations/id and /v1/vehicle_locations/id since the id is a wildcard. Is there a way I can aggregate these wildcards as well?
There are several ways to achieve what you ask. I think the most straightforward, and the one I'd suggest, is to use the | parse operator so that you can treat the top-most element of your path as a field, e.g.:
_sourceCategory=production
| keyvalue auto
| parse regex field=path "/v1/(?<topmost>[a-z_]+)"
| where (topmost = "vehicle_locations" or topmost = "driver_locations")
| count by topmost
Note that by default the | parse operator works on the raw message (i.e. the original log line), but you can make it parse a field instead, using the field= syntax, and that is what is used above.
You might want to tweak the regex depending on the actual paths you encounter.
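If you would rather keep the id variants as their own buckets (four rows instead of two), a minimal sketch, assuming the ids are numeric, is to normalize the trailing id segment before counting:
_sourceCategory=production
| keyvalue auto
| replace(path, /\/\d+$/, "/id") as endpoint
| where endpoint matches "/v1/vehicle_locations*" or endpoint matches "/v1/driver_locations*"
| count by endpoint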
(Disclaimer: I am currently employed by Sumo Logic)
I have two queries which look like this:
source="/log/ABCD/cABCDXYZ/xyz.log" doSomeTasks| timechart partial=f span=1h count as "#XYZ doSomeTasks" | fillnull
source="/log/ABCD/cABCDXYZ/xyz.log" doOtherTasks| timechart partial=f span=1h count as "#XYZ doOtherTasks" | fillnull
I now want to get these two searches into one graph (I do not want to sum the numbers from the two searches into a single value).
I saw that there is the possibility of using appendcols, but my attempts to use this command were not successful.
I tried this, but it did not work:
source="/log/ABCD/cABCDXYZ/xyz.log" doSomeTasks|timechart partial=f span=1h count as "#XYZ doSomeTasks" appendcols [doOtherTasks| timechart partial=f span=1h count as "#XYZ doOtherTasks" | fillnull]
Thanks to PM 77-1, the issue is solved.
This command works:
source="/log/ABCD/cABCDXYZ/xyz.log" doSomeTasks|timechart partial=f span=1h count as "#XYZ doSomeTasks" | appendcols[search source="/log/ABCD/cABCDXYZ/xyz.log" doOtherTasks| timechart partial=f span=1h count as "#XYZ doOtherTasks" | fillnull]
Note: You do not have to mention the source in the second search command if it is the same source as the first one.
General solution
Generate each data column by using a subsearch query in the following form:
|appendcols[search (myquery) |timechart count]
Additional steps
The list of one-or-more query columns needs to be preceded by a generated column which establishes the timechart rows (and gives appendcols something to append to).
|makeresults |timechart count |eval count=0
Note: It isn't strictly required to start with a generated column, but I've found this to be a clean and robust approach. Notably, it avoids problems that may occur in the special case of "No results found", which can otherwise confuse the visualization rendering. Plus it's more uniform and, as a result, easier to work with.
Finally, specify each of the fields to be charted, with _time as the x-axis:
|fields _time, myvar1, myvar2, myvar3
Complete example
|makeresults |timechart span=5m count |eval count=0
|appendcols[search (myquery1) |timechart span=5m count as myvar1]
|appendcols[search (myquery2) |timechart span=5m count as myvar2]
|appendcols[search (myquery3) |timechart span=5m count as myvar3]
|fields _time, myvar1, myvar2, myvar3
Be careful to use the same span throughout.
Other hints
When comparing disparate data on the same chart, perhaps to evaluate their relative timing, it's common to have differences in type or scale that can render the overlaid result nearly useless. For cases like this, don't neglect the 'Log' format option for the Y-Axis.
In some cases, it may even be worthwhile to employ data hacks with eval to massage the values into a visually comparable state. For example, appending |eval myvar1=if(myvar1=0,0,1) after timechart count collapses every nonzero count to 1, turning the series into a simple 0/1 indicator. Here are some relevant docs:
Mathematical functions
Comparison and Conditional functions
Is there any way to refine a String to only a certain subset of values? For example, I have a list of 500 keys in a hash map, but I only want certain keys to be inserted: "abcd" and "aaaa" are valid keys, but "abdc" is invalid. Is there any way to refine the String to only one of the given 500 keys?
I'm guessing the way to do this is just a very long regexp that matches abcd|aaaa?
Edit: I'm using the fthomas/refined library, specifically the MatchesRegex predicate. I want to know if there is a better approach that I'm missing.
Scala 3 seems to allow singleton types in unions (Allow Singletons in Unions #6299), like so:
val refinedString: "abcd" | "aaaa" = "aaaa"
whilst abdc would result in the following error:
val refinedString: "abcd" | "aaaa" = "abdc"
^^^^^^
Found: String("abdc")
Required: String("abcd") | String("aaaa")
It worked for me with Dotty Scala version 0.15.0-bin-20190517-fb6667b-NIGHTLY.
I ended up actually just using generated source code that has every known key inside a MatchesRegex (a|b..) construct for the 500 keys. It works. It's not pretty, but it's also generated source code that I don't have to deal with, so it's okay, I guess.
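For reference, a minimal sketch of that approach (refined on Scala 2.13, with a two-key alternation standing in for the generated 500-key one):
import eu.timepit.refined.api.Refined
import eu.timepit.refined.auto._
import eu.timepit.refined.string.MatchesRegex

// The generated source would contain the full 500-key alternation here.
type ValidKey = String Refined MatchesRegex["abcd|aaaa"]

val ok: ValidKey = "abcd" // compiles: the literal matches the alternation
// val bad: ValidKey = "abdc" // rejected at compile time: the predicate fails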
I'm using Kapacitor 1.3 and trying to use the following where node to keep only measurements with an empty tag. Nothing passes through, and I get the same result with == ''.
| where(lambda: 'process-cpu__process-name' =~ /^$/)
I can work around this issue by using a default value for missing tags and filtering on that default in the following nodes, but I am wondering if there is a better way to structure the initial where statement and avoid an extra node.
| default()
.tag('process-cpu__process-name','system')
| where(lambda: \"process-cpu__process-name\" == 'system' )
Sure it doesn't pass, because this:
'process-cpu__process-name'
is a string literal in TICKScript, not a reference to a field or tag, which is:
"process-cpu__process-name"
So the condition is obviously always false in this case.
It's quite a common mistake, though, especially for someone with previous experience in languages that tolerate both single and double quotes for plain strings. :-)
Also, there's a function available in TICKScript lambdas called strLength(), which is handy for checking for empty strings; see the TICKScript docs.
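Putting that together, the original where node only needs the quote style switched so that the tag is actually dereferenced. A minimal sketch (assuming the tag is present but empty; if the tag can be missing entirely, the default() workaround may still be necessary):
| where(lambda: "process-cpu__process-name" == '')
Or, equivalently, using strLength():
| where(lambda: strLength("process-cpu__process-name") == 0)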
I want to get, as matches, all nodes that contain the property 'abc' with a value of 'xyz' or 'pqr'.
I am trying the following ways:
http://localhost:4502/bin/querybuilder.json?path=/content/campaigns/asd&property=abc&property.1_value=/%xyz/%&property.2_value=/%pqr/%property.operation=like&p.limit=-1&orderby:path
http://localhost:4502/bin/querybuilder.json?path=/content/campaigns/asd&property=abc&property.1_value=/%xyz/%&property.2_value=/%pqr/%&property.1_operation=like&property.2_operation=like&p.limit=-1&orderby:path
http://localhost:4502/bin/querybuilder.json?path=/content/campaigns/asd&1_property=abc&1_property.1_value=/%xyz/%&1_property.1_operation=like&2_property=abc&1_property.1_value=/%xyz/%&2_property.1_operation=like&p.limit=-1&orderby:path
But none of them served my purpose. Is there anything that I am missing here?
The query looks right and as such should work. However, if it is just xyz or pqr that you would like to match, you may not need the / in the values.
For example:
path=/content/campaigns/asd
path.self=true //In order to include the current path as well for searching
property=abc
property.1_value=%xyz%
property.2_value=%pqr%
property.operation=like
p.limit=-1
Possible things which you can check
Check if the path that you are trying to search contains the desired nodes/properties.
Check if the property name that you are using is right.
If you want to match exact values, you can avoid using the like operator and remove the wildcards from the values, as sketched below.
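For instance, an exact-match variant of the query sketched above (multiple values of the same property are combined with OR by default):
path=/content/campaigns/asd
property=abc
property.1_value=xyz
property.2_value=pqr
p.limit=-1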
You can actually use the 'OR' operator in your query to combine two or more values of a property.
For example, in the query debug interface: http://localhost:4502/libs/cq/search/content/querydebug.html
path=/content/campaigns/asd
property=PROPERTY1
property.1_value=VALUE1
property.2_value=VALUE2
property.operation=OR
p.limit=-1
It worked with the below query:
http://localhost:4502/bin/querybuilder.json?orderby=path
&p.limit=-1
&path=/content/campaigns
&property=jcr:content/par/nodeName/xyz
&property.1_value=pqr
&property.2_value=%abc%
&property.operation=like
&type=cq:Page
Note: the property name should be fully specified, as a path relative to the type of node we are querying.
Ex: jcr:content/par/nodeName/xyz above, instead of just xyz.
I did a search like this:
Comment.search "aabbb"
and I want to get the results which contain "ab" too.
So I did it this way:
Comment.search "aabbb ab"
but I found that the results for aabbb and ab are mixed together. In fact, I want the results which match aabbb to show before those for ab; in other words, to have a higher priority.
I know Sphinx can add weights to the fields of the table, for example 10 to a comment's name and 20 to a comment's content, but is it possible to add weights to the query words?
Unfortunately this is not possible with Sphinx yet, but you can get similar behavior in a query by repeating the keyword you want to weight.
For example:
"aabbb | aabbb | ab"
Here, aabbb is twice as important as ab.
Sphinx has no ability to weight certain search phrases, I'm afraid - so what you're trying to do is not possible.
It's also worth noting that Sphinx uses AND logic by default - if you want results that match either aabbb OR ab, you'll probably want to use the :any match mode:
Comment.search "aabbb ab", :match_mode => :any