AWS CloudWatch Log Metric Filter where a JSON key contains a space - amazon-cloudwatchlogs

When creating an AWS CloudWatch Log Metric Filter, how would you match terms in JSON Log Events where the key has a character space in the name?
For example, let's assume there's a log line with a JSON element like the following:
{"Event":"SparkListenerLogStart","Spark Version":"2.4.0-SNAPSHOT"}
How would you reference the "Spark Version"? $."Spark Version", $.Spark Version, $.Spark\ Version, and $.[Spark Version] don't work.
I couldn't find the answer in the AWS Filter and Pattern Syntax documentation.

At the time of writing, this is not possible. AWS will probably fix that at some point, but for now the only workaround would be to use the non-JSON syntax and search for the exact string. The following filter:
"\"Spark Version\":\"2.4.0-SNAPSHOT\""
will match:
{"Event":"SparkListenerLogStart","Spark Version":"2.4.0-SNAPSHOT"}

This is regarding the grokparsefailure

This is my sample log.
<4>Nov 19 17:08:28 BAGW-R kernel: [BlackRidge|Gateway|5.0.0.8928M]
class="Attribution" category="Filter Rule: To_Trusted Drop"
ctx="bump0" filterNumber="1022" src="192.168.120.173" srcPort="41178"
dest="192.168.120.100" destPort="443" gwAction="DISCARD"
gwMode="Enforce"
Grok pattern:
%{WORD:class} %{WORD:category} %{WORD:ctx} %{NUMBER:filternumber} %{IP:src} %{NUMBER:srcPort} %{IP:dest} %{NUMBER:destPort} %{WORD:gwAction} %{WORD:gwMode}
I get a grokparsefailure.
Can anyone please help.
As per my understanding, you're getting this error because the pattern you used does not match the log you provided.
Can you be more specific about which fields you are trying to capture from this log?
I have written a grok pattern for these logs; you should follow a similar approach so that the pattern matches the whole log. If you get a "found unknown escape character" error, use \\ instead of a single \.
%{MONTH:month}%{SPACE}%{MONTHDAY:date}%{SPACE}%{TIME:time}%{SPACE}%{GREEDYDATA:temp1}\]%{SPACE}class\=\"%{WORD:class}\"%{SPACE}category\=\"%{GREEDYDATA:category}\"%{SPACE}ctx\=\"%{WORD:ctx}\"%{SPACE}filterNumber\=\"%{NUMBER:filternumber}\"%{SPACE}src\=\"%{IPV4:src}\"%{SPACE}srcPort\=\"%{DATA:srcport}\"%{SPACE}dest\=\"%{IPV4:dest}\"%{SPACE}destPort\=\"%{NUMBER:destport}\"%{SPACE}gwAction\=\"%{WORD:gwaction}\"%{SPACE}gwMode\=\"%{WORD:gwmode}\"
I have written the whole grok pattern; please check whether it works. I have assumed that all your logs arrive in this format.
Use this website to test your grok pattern: https://grokconstructor.appspot.com/do/match#result
Existing grok pattern: https://grokdebug.herokuapp.com/patterns#
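If you prefer to test from code rather than in a browser, a minimal sketch using the third-party pygrok package (an assumption on my part; it is not part of Logstash) against a slice of the sample line might look like this:
# pip install pygrok   (third-party grok library; assumed available)
from pygrok import Grok
line = 'src="192.168.120.173" srcPort="41178" dest="192.168.120.100" destPort="443"'
pattern = 'src="%{IPV4:src}" srcPort="%{NUMBER:srcport}" dest="%{IPV4:dest}" destPort="%{NUMBER:destport}"'
# match() returns a dict of captured fields, or None if the pattern does not match
print(Grok(pattern).match(line))
# {'src': '192.168.120.173', 'srcport': '41178', 'dest': '192.168.120.100', 'destport': '443'}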

Grafana: Overriding series with metric named `/var(avail_MB)`: "Panel rendering error '/var(avail_MB)' is not a valid regular expression."

An Icinga2 plugin (written by myself) returns performance data with metrics named /var(avail_MB), /var(total_MB) and similar. Data is forwarded to an InfluxDB with Grafana as Frontend.
I'm using "GROUP BY" "tag(metric)" and "ALIAS BY" "$tag_metric" in a dashboard's panel query.
The metric names are then displayed correctly below the graph.
However, when I try to override the series by specifying "alias or regex" /var(avail_MB), it does not seem to work, and when going back from the panel configuration to the dashboard, I get an error message saying "Panel rendering error '/var(avail_MB)' is not a valid regular expression."
I tried to put a backslash in front of ( and ), but that didn't help.
To make matters worse, the whole graph disappeared, and when trying to open the "Query Inspector", the frontend seems to take forever (Query never appears).
What is the problem, and how could I fix it?
I'm new to Icinga2, Grafana and InfluxDB (I'm just a "user" not administrator of those).
The color change is not applied to the graph.
Here is an example of plugin output:
OK: /var: 3114/5632MB (55.30%), slope is NaN|/var(total_MB)=5631.56MB;;;0 /var(avail_pct)=55.30%;25;5;0;100 /var(avail_MB)=3114.12MB;10;5;0;5632 /var(est_avail_MB)=nanMB;10;5;0;5632
(The "nanMB" was a bug in the plugin that has been fixed already, but that data wasn't from the machine in question.)
The problem seems to be the beginning of the string ("/var").
Grafana seems to treat every string starting with / as a regular expression, and it also seems to expect any regular expression to start with /.
So the first fix was to add a trailing / and to escape the literal / as \/.
Unfortunately that only removes the error message; it doesn't make the override actually match.
It is also required to backslash-escape the parentheses and the slash(es):
Instead of /var(total_MB) you need to write /\/var\(total_MB\).
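As a quick illustration (nothing Grafana itself runs), a small Python 3.7+ sketch can generate the escaped form from the raw metric name:
import re
metric = "/var(total_MB)"
# re.escape handles the parentheses; the slashes must be escaped by hand because
# Grafana wraps its regexes in /.../ delimiters.
body = re.escape(metric).replace("/", r"\/")
print("/" + body + "/")   # -> /\/var\(total_MB\)/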
The original problem has two origins:
The monitoring plugin specification at https://www.monitoring-plugins.org/doc/guidelines.html#AEN201 states that a "label can contain any characters except the equals sign or single quote (')", i.e. any character except = and ' is allowed in a metric name.
Grafana v6.7.3 proposes the incorrect (i.e. unescaped) values for "alias or regex".
That is how I had created the problem.

prometheus doesn't match regex query

I'm trying to write a Prometheus query in Grafana that will select visits_total{route!~"/api/docs/*"}
What I'm trying to say is that it should select all the instances where the route doesn't match /api/docs/* (regex), but this isn't working; it's actually just selecting all the instances. I tried to force it to select others by doing this:
visits_total{route=~"/api/order/*"} but it doesn't return anything. I found these operators in the querying basics page of the Prometheus documentation. What am I doing wrong here?
It may be because you have / in the regex. Try something like visits_total{route=~".*order.*"} and see whether a result is generated.
Also try this:
visits_total{route!~"\/api\/docs\/\*"}
If you want to exclude everything that contains the word docs, you can use:
visits_total{route!~".*docs.*"}
The main problem with your original query is that /api/docs/* will only match things like /api/docs and /api/docs//////; i.e. the * in your query will match 0 or more / characters.
I think what you meant to use was /api/docs/.*.
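To see this behaviour outside Prometheus, you can replay the patterns in Python (an illustration only; Prometheus label matchers are fully anchored, which fullmatch approximates):
import re
routes = ["/api/docs", "/api/docs//////", "/api/docs/v1", "/api/order/123"]
for pattern in [r"/api/docs/*", r"/api/docs/.*"]:
    # fullmatch mimics the implicit ^...$ anchoring of Prometheus label matchers
    matching = [r for r in routes if re.fullmatch(pattern, r)]
    print(pattern, "->", matching)
# /api/docs/*  -> ['/api/docs', '/api/docs//////']
# /api/docs/.* -> ['/api/docs//////', '/api/docs/v1']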

Azure APIM Policy Editor

I would very much like to be able to set Azure APIM policy attributes based on a user's JWT claims. I have been able to set string values for things like counter-key and increment-condition, but I can't set all attributes. I imagined doing something like the following:
<rate-limit-by-key
calls="#((int) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/LimitRate/Limit", "5"))"
renewal-period="#((int) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/LimitRate/Duration/InSeconds", "60"))"
counter-key="#((string)context.Variables["Subject"])"
increment-condition="#(context.Response.StatusCode == 200)"
/>
However there seems to be some validation happening when I save the policy as I get the following error:
Error in element 'rate-limit-by-key' on line 98, column 10: The 'calls' attribute is invalid - The value '#((int) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/LimitRate/Limit", "5"))' is invalid according to its datatype 'http://www.w3.org/2001/XMLSchema:int' - The string '#((int) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/LimitRate/Limit", "5"))' is not a valid Int32 value.
I even have trouble setting a string parameter (albeit one with a strict format)
<quota-by-key
calls="10"
bandwidth="100"
renewal-period="#((string) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/Quota/RenewalPeriod", "P00Y00M01DT00H00M00S"))"
counter-key="#((string)context.Variables["Subject"])"
/>
Which gives the following when I try and save the policy:
Error in element 'quota-by-key' on line 99, column 6: #((string) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/Quota/RenewalPeriod", "P00Y00M01DT00H00M00S")) is not in a valid format. Provide number of seconds or use 'PxYxMxDTxHxMxS' format where 'x' is a number.
I have tried a large set of variations: casting, Convert.ToInt32, claims that are not strings, #{return 5}, #(5), etc., but there seems to be some validation happening at save time that is stopping it.
Is there a way around this issue, as I think it would be a useful feature to add to my API?
The calls attribute on rate-limit-by-key and quota-by-key does not support policy expressions. Unfortunately, internal limitations prevent us from handling it on a per-request basis. The best you can do is categorize requests into a few finite groups and apply the rate limit/quota conditionally using the choose policy.
Alternatively, try using the increment-count attribute to control by how much the counter is increased for each request.

Apply Command to String-type custom fields with YouTrack Rest API

Thanks for looking!
I have an instance of YouTrack with several custom fields, some of which are String-type. I'm implementing a module to create a new issue via the YouTrack REST API's PUT request, and then updating its fields with user-submitted values by applying commands. This works great---most of the time.
I know that I can apply multiple commands to an issue at the same time by concatenating them into the query string, like so:
Type Bug Priority Critical add Fix versions 5.1 tag regression
will result in
Type: Bug
Priority: Critical
Fix versions: 5.1
in their respective fields (as well as adding the regression tag). But, if I try to do the same thing with multiple String-type custom fields, then:
Foo something Example Something else Bar P0001
results in
Foo: something Example Something else Bar P0001
Example:
Bar:
The command only applies to the first field, and the rest of the query string is treated like its String value. I can apply the command individually for each field, but is there an easier way to combine these requests?
Thanks again!
This is an expected result, because the entire string after Foo is considered the value of that field, and spaces are valid characters in string custom fields.
If you try to apply this command via the command window in the UI, you will actually see the same result.
Such a good question.
I encountered the same issue and have spent an unhealthy amount of time in frustration.
Using the command window in the YouTrack UI, I noticed it leaves trailing quotation marks, and I was unable to find anything in the documentation that discussed finalizing or identifying the end of a string value. I was also unable to find any mention of setting string field values in the command reference, grammar documentation or examples.
For my solution I am using Python with the requests and urllib modules, though I expect you could port the solution to any language.
The REST API will accept explicitly quoted strings in the POST:
import requests
import urllib  # Python 2; in Python 3 use urllib.parse.urlencode instead
from collections import OrderedDict
URL = 'http://youtrack.your.address:8000/rest/issue/{issue}/execute?'.format(issue='TEST-1234')
# String-type fields get explicit double quotes around their values so the
# command parser knows where each value ends. (Note: in Python 2 the dict
# literal passed to OrderedDict does not preserve insertion order, which is
# why the resulting URL below lists Priority before State.)
params = OrderedDict({
    'State': 'New',
    'Priority': 'Critical',
    'String Field': '"Message to submit"',
    'Other Details': '"Fold the toilet paper to a point when you are finished."'
})
# Concatenate "FieldName value" pairs into a single command string.
str_cmd = ' '.join(' '.join([k, v]) for k, v in params.items())
command_url = URL + urllib.urlencode({'command': str_cmd})
result = requests.post(command_url)
# The command result:
# http://youtrack.your.address:8000/rest/issue/TEST-1234/execute?command=Priority+Critical+State+New+String+Field+%22Message+to+submit%22+Other+Details+%22Fold+the+toilet+paper+to+a+point+when+you+are+finished.%22
I'm sad to see this one go unanswered for so long. - Hope this helps!
edit:
After continuing my work, I have concluded that sending all the field updates as a single POST is marginally better for the YouTrack server, but requires more effort than it's worth to:
1) know all fields in the Issues which are string values
2) pre-process all the string values into string literals
3) handle the failure mode: if you send all your field updates as a single request and just one of them is missing, fails to set, or has an unexpected value, then the entire request will fail and you potentially lose all the other information (sending one command per field avoids this; see the sketch below).
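As a rough sketch of that alternative (reusing the URL, params, requests and urllib names from the snippet above, which are themselves hypothetical), you can send one command per field so that a single bad value does not discard the rest:
# One POST per field: a failure on one field does not lose the others.
for field, value in params.items():
    resp = requests.post(URL + urllib.urlencode({'command': ' '.join([field, value])}))
    if resp.status_code >= 400:
        print('Failed to set {}: {}'.format(field, resp.status_code))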
I wish the YouTrack documentation had some mention or discussion of these considerations.