I am using the schema here: https://jsonschema.dev/s/5iht5 on my Mac, but the validator doesn't seem to understand the if keyword.
warning: the following keywords are unknown and will be ignored: [if, then]
level: "warning"
schema: {"loadingURI":"#","pointer":"/definitions/legal/allOf/0"}
domain: "syntax"
ignored: ["if","then"]
This does not work on either macOS or AL2.
The expected outcome is that the required field applies only when the value of the first attribute is true.
It turns out I was using an older validator that only supports drafts up to draft-04. I am now using the everit-org json-schema library and no longer see the warning.
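For reference, the conditional described above needs a draft-07 (or later) aware validator. A minimal sketch of such a schema, with made-up property names (the real schema is behind the link above), might look like:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "enabled": { "type": "boolean" },
    "details": { "type": "string" }
  },
  "if": {
    "properties": { "enabled": { "const": true } },
    "required": ["enabled"]
  },
  "then": {
    "required": ["details"]
  }
}

Here details becomes required only when enabled is present and true.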
How to look up a particular internationalized property based on a report parameter?
This works, but is static:
$R{some_literal_string}
This works too, but is not internationalized:
$P{key_to_parameters_map_element}
What I need is:
$R{$P{key_to_parameters_map_element}}
Unfortunately, I get a pile of error messages:
Caused by: net.sf.jasperreports.engine.JRException: Errors were encountered when compiling report expressions class file:
1. Syntax error on token "}", delete this token
value = str("$P{key_to_parameters_map_element")}; //$JR_EXPR_ID=13$
This doesn't change anything:
$R{$P{key_to_parameters_map_element}.toString()}
Is this possible at all?
It's simply:
str($P{key_to_parameters_map_element})
Quite intuitive, isn't it?
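For context, a minimal JRXML usage might look roughly like this (the reportElement bounds are placeholders):

<textField>
    <reportElement x="0" y="0" width="200" height="20"/>
    <textFieldExpression><![CDATA[str($P{key_to_parameters_map_element})]]></textFieldExpression>
</textField>

str() looks the key up in the report's resource bundle, so the parameter value selects which localized string is rendered.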
I would very much like to be able to set Azure API Management policy attributes based on a user's JWT claims. I have been able to set string values for things like counter-key and increment-condition, but I can't set all attributes. I imagined doing something like the following:
<rate-limit-by-key
    calls="#((int) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/LimitRate/Limit", "5"))"
    renewal-period="#((int) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/LimitRate/Duration/InSeconds", "60"))"
    counter-key="#((string)context.Variables["Subject"])"
    increment-condition="#(context.Response.StatusCode == 200)"
/>
However, there seems to be some validation happening when I save the policy, as I get the following error:
Error in element 'rate-limit-by-key' on line 98, column 10: The 'calls' attribute is invalid - The value '#((int) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/LimitRate/Limit", "5"))' is invalid according to its datatype 'http://www.w3.org/2001/XMLSchema:int' - The string '#((int) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/LimitRate/Limit", "5"))' is not a valid Int32 value.
I even have trouble setting a string parameter (albeit one with a strict format):
<quota-by-key
    calls="10"
    bandwidth="100"
    renewal-period="#((string) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/Quota/RenewalPeriod", "P00Y00M01DT00H00M00S"))"
    counter-key="#((string)context.Variables["Subject"])"
/>
This gives the following error when I try to save the policy:
Error in element 'quota-by-key' on line 99, column 6: #((string) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/Quota/RenewalPeriod", "P00Y00M01DT00H00M00S")) is not in a valid format. Provide number of seconds or use 'PxYxMxDTxHxMxS' format where 'x' is a number.
I have tried a large set of variations: casting, Convert.ToInt32, claims that are not strings, #{return 5}, #(5), etc., but there seems to be some validation happening at save time that stops all of them.
Is there a way around this issue? I think it would be a useful feature to add to my API.
The calls attribute on rate-limit-by-key and quota-by-key does not support policy expressions. Internal limitations unfortunately prevent us from evaluating it on a per-request basis. The best you can do is categorize requests into a few finite groups and apply the rate limit/quota conditionally using the choose policy, as sketched below.
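For example, the grouping approach might look roughly like the sketch below; the threshold and limit values are made up, the claim lookup is reused from the question above, and @(...) is the policy-expression syntax:

<choose>
    <when condition="@(context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/LimitRate/Limit", "5") == "100")">
        <rate-limit-by-key calls="100" renewal-period="60" counter-key="@((string)context.Variables["Subject"])" increment-condition="@(context.Response.StatusCode == 200)" />
    </when>
    <otherwise>
        <rate-limit-by-key calls="5" renewal-period="60" counter-key="@((string)context.Variables["Subject"])" increment-condition="@(context.Response.StatusCode == 200)" />
    </otherwise>
</choose>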
Alternatively, try using the increment-count attribute to control by how much the counter is increased for each request.
Thanks for looking!
I have an instance of YouTrack with several custom fields, some of which are String-type. I'm implementing a module to create a new issue via the YouTrack REST API's PUT request, and then update its fields with user-submitted values by applying commands. This works great, most of the time.
I know that I can apply multiple commands to an issue at the same time by concatenating them into the query string, like so:
Type Bug Priority Critical add Fix versions 5.1 tag regression
will result in
Type: Bug
Priority: Critical
Fix versions: 5.1
in their respective fields (as well as adding the regression tag). But if I try to do the same thing with multiple String-type custom fields, then:
Foo something Example Something else Bar P0001
results in
Foo: something Example Something else Bar P0001
Example:
Bar:
The command only applies to the first field, and the rest of the query string is treated as its String value. I can apply the command individually for each field, but is there an easier way to combine these requests?
Thanks again!
This is the expected result: everything after Foo is considered the value of that field, because spaces are also valid characters in string custom fields.
If you try to apply this command via the command window in the UI, you will see the same result.
Such a good question.
I encountered the same issue and have spent an unhealthy amount of time in frustration.
Using the command window in the YouTrack UI, I noticed it leaves trailing quotation marks, and I was unable to find anything in the documentation that discusses how to terminate or delimit a string value. I was also unable to find any mention of setting string field values in the command reference, grammar documentation, or examples.
For my solution I am using Python with the requests and urllib modules, though I expect you could port the solution to any language.
The REST API will accept explicit string literals in the POST:
import requests
import urllib
from collections import OrderedDict

URL = 'http://youtrack.your.address:8000/rest/issue/{issue}/execute?'.format(issue='TEST-1234')

# Build the field/value pairs as a list of tuples so the OrderedDict actually
# preserves the intended order. String-type field values are wrapped in
# explicit double quotes so the command parser knows where each value ends.
params = OrderedDict([
    ('State', 'New'),
    ('Priority', 'Critical'),
    ('String Field', '"Message to submit"'),
    ('Other Details', '"Fold the toilet paper to a point when you are finished."'),
])

# Concatenate all field/value pairs into a single command string.
str_cmd = ' '.join(' '.join([k, v]) for k, v in params.items())

# URL-encode the command (urllib.urlencode is Python 2; on Python 3 use
# urllib.parse.urlencode) and post it.
command_url = URL + urllib.urlencode({'command': str_cmd})
result = requests.post(command_url)

# The resulting request URL:
# http://youtrack.your.address:8000/rest/issue/TEST-1234/execute?command=State+New+Priority+Critical+String+Field+%22Message+to+submit%22+Other+Details+%22Fold+the+toilet+paper+to+a+point+when+you+are+finished.%22
I'm sad to see this one go unanswered for so long. Hope this helps!
Edit:
After continuing my work, I have concluded that sending all the field updates as a single POST is marginally better for the YouTrack server, but requires more effort than it's worth, because you have to:
1) know which fields in the issues hold string values;
2) pre-process all the string values into quoted string literals;
3) accept that if just one field in the combined request is missing, fails to set, or has an unexpected value, the entire request fails and you potentially lose all the other information.
I wish the YouTrack documentation had some mention or discussion of these considerations.
I want to query fields that are empty and fields that are not empty using the Sense/Net OData REST API. The documentation mentions a filter function called 'length'. I have tried to query a field with the length operation, but it fails with the following error.
This is the filter I have used:
$filter=length(Name) eq 2
Sense/Net 6.5.4.9496
Exception
"code": "NotSpecified",
"exceptiontype": "SnNotSupportedException",
"message": {
"lang": "en-us",
"value": "Unknown method: length"
},
Wiki link: http://wiki.sensenet.com/OData_REST_API
The length operation was incorrectly included in the list of supported methods; we apologise for that. SenseNet compiles these filters to Lucene queries, and it is not possible to compose a Lucene query that performs an operation on a field.
(The remaining methods, like substringof or startswith, can easily be compiled to a wildcard expression, so those should work.)
Unfortunately, 'empty' expressions are also not supported by Lucene because of its document/term structure, so the following expression does not work either:
Description eq ''
Edit: as a workaround, developers may create a custom field index handler.
For every field you want to check for emptiness (e.g. Description), you may create a technical hidden bool field (IsDescriptionEmpty) in the content type definition. The only thing you have to create and define is a custom field index handler class. In your case it would inherit from the built-in bool field index handler and you could return a boolean index value based on whether the target field (in this case Description) is empty or not.
After this you would be able to define search expressions like the following:
+Type:File +IsDescriptionEmpty:true
Please check the wiki article below and the source code for index handler examples.
How to create a field indexhandler
We need to make extensive use of the 'xml_is_well_formed' function provided by the XML2 module.
Yet the documentation says that the xml2 module will be deprecated, since "XML syntax checking and XPath queries" are covered by the XML-related functionality based on the SQL/XML standard in the core server from PostgreSQL 8.3 onwards.
However, the core function XMLPARSE does not provide equivalent functionality: when it detects an invalid XML document, it throws an error rather than returning a truth value (which is what we need and currently have with the 'xml_is_well_formed' function).
For example:
select xml_is_well_formed('<br></br2>');
xml_is_well_formed
--------------------
f
(1 row)
select XMLPARSE( DOCUMENT '<br></br2>' );
ERROR: invalid XML document
DETAIL: Entity: line 1: parser error : expected '>'
<br></br2>
^
Entity: line 1: parser error : Extra content at the end of the document
<br></br2>
^
Is there some way to use the new core XML functionality to simply return a truth value in the way that we need?
Thanks,
-- Mike Berrow
After asking about this on the pgsql-hackers e-mail list, I am happy to report that the guys there agreed that it was still needed and they have now moved this function to the core.
See:
http://web.archiveorange.com/archive/v/alpsnGpFlZa76Oz8DjLs
and
http://postgresql.1045698.n5.nabble.com/review-xml-is-well-formed-td2258322.html
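For servers that predate that change, a common stopgap is to wrap XMLPARSE in a small PL/pgSQL function that traps the parse error and returns a boolean instead; a rough sketch (the function name is made up):

CREATE OR REPLACE FUNCTION is_well_formed_xml(doc text)
RETURNS boolean AS $$
BEGIN
    -- XMLPARSE raises an error on malformed input; trap it and
    -- report the result as a boolean instead.
    PERFORM XMLPARSE(DOCUMENT doc);
    RETURN true;
EXCEPTION WHEN OTHERS THEN
    RETURN false;
END;
$$ LANGUAGE plpgsql;

-- Example: SELECT is_well_formed_xml('<br></br2>');  returns false.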