UIMA Ruta - Abbreviations

Can I segment the letters of a word using UIMA Ruta?
For example:
1. (WHO)
2. (APIAs)
Script:
DECLARE NEW;
BLOCK(foreach) CAP{} {
    W{REGEXP(".") -> MARK(NEW)};
}

Yes, this is achieved with simple regex rules in UIMA Ruta:
DECLARE Char;
CAP->{"."->Char;};
You cannot use normal rules for this because you need to match on something smaller than RutaBasic. The only option is to use regexp rules, which operate directly on the text instead of on annotations. You should of course be careful, since this can create a very large number of annotations.
Some explanation for the somewhat compact rule: CAP->{"."->Char;};
CAP      // the only rule element of the rule: match on each CAP annotation
->{      // indicates that inlined rules follow, which are applied in the context of the matched annotation
"."      // a regular expression matching on each character
-> Char  // the "action" of the regex rule: create an annotation of the type Char for each match of the regex
;};      // end of regex rule, end of inlined rules, end of actual rule
Summarizing: the rule iterates over all CAP annotations, applies the regular expression to the covered text of each one, and creates annotations for the matches.
You can of course also use a BLOCK instead of an inlined rule.
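A minimal sketch of that variant, reusing the Char type declared above (the block name forEach is arbitrary):
DECLARE Char;
BLOCK(forEach) CAP{} {
    // the regexp rule is now applied within the window of each CAP annotation
    "." -> Char;
}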
DISCLAIMER: I am a developer of UIMA Ruta

Related

What's the common denominator for regex "pattern" in OpenAPI?

I'm using FastAPI, which allows pattern=re.compile("(?P<foo>[42a-z]+)...").
https://editor.swagger.io/ shows an error for this pattern.
My guess is that Python's named group syntax (?P<name>...) is different from ES2018 (?<name>...).
But, come to think of it, the idea of OpenAPI is interoperability, and some other language, especially a compiled one, may use yet another notation, or may not support named groups in regular expressions at all.
What common denominator of regular expression syntax should I use?
OpenAPI uses JSON Schema, and the JSON Schema spec defines regex as "A regular expression, which SHOULD be valid according to the ECMA-262 regular expression dialect." Here is the relevant ECMA-262 section.
Of course, non-JavaScript implementations probably won't care too much about that, and will just use the default regex library of their platform. So good luck with figuring out the common denominator :)
I suggest using regexes that are as simple as possible, and adding some tests for them using the library that you use in production.
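For illustration, a minimal sketch of such a test in Python; the pattern and names are hypothetical, and the pattern uses a plain group instead of (?P<name>...) so the same string also works as an ECMA-262 / JSON Schema pattern:
import re

# Hypothetical pattern: a plain group plus a bounded quantifier, no named groups.
FOO_PATTERN = r"^([42a-z]+)-[0-9]{2,4}$"

def test_foo_pattern():
    assert re.fullmatch(FOO_PATTERN, "abc-2021")
    assert re.fullmatch(FOO_PATTERN, "4a2b-42")
    assert re.fullmatch(FOO_PATTERN, "UPPER-42") is None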
JSON Schema recommends a specific subset of regular expressions, because the authors accept that most implementations will not support full ECMA-262 syntax (a small example that stays within this subset is sketched after the list):
https://json-schema.org/understanding-json-schema/reference/regular_expressions.html
A single unicode character (other than the special characters below) matches itself.
.: Matches any character except line break characters. (Be aware that what constitutes a line break character is somewhat dependent on your platform and language environment, but in practice this rarely matters).
^: Matches only at the beginning of the string.
$: Matches only at the end of the string.
(...): Group a series of regular expressions into a single regular expression.
|: Matches either the regular expression preceding or following the | symbol.
[abc]: Matches any of the characters inside the square brackets.
[a-z]: Matches the range of characters.
[^abc]: Matches any character not listed.
[^a-z]: Matches any character outside of the range.
+: Matches one or more repetitions of the preceding regular expression.
*: Matches zero or more repetitions of the preceding regular expression.
?: Matches zero or one repetitions of the preceding regular expression.
+?, *?, ??: Lazy (non-greedy) versions of the above quantifiers. The *, +, and ? qualifiers are greedy and match as much text as possible; the lazy forms match as few characters as possible.
(?!x), (?=x): Negative and positive lookahead.
{x}: Match exactly x occurrences of the preceding regular expression.
{x,y}: Match at least x and at most y occurrences of the preceding regular expression.
{x,}: Match x occurrences or more of the preceding regular expression.
{x}?, {x,y}?, {x,}?: Lazy versions of the above expressions.
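For illustration, a hypothetical Pydantic model (assuming Pydantic v2, where Field accepts pattern=) whose generated OpenAPI pattern stays within this subset:
from pydantic import BaseModel, Field

class Item(BaseModel):
    # plain group and bounded quantifier only: valid in the subset above,
    # and avoids Python's (?P<name>...) named-group syntax
    foo: str = Field(pattern=r"^([42a-z]+)-[0-9]{2,4}$")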
P.S. Kudos to @erosb for the idea of how to find this recommendation.

What characters are allowed in the name of a rule in Drools?

I haven't been able to find in Drools documentation, which characters (beyond alphabet letters) are allowed/disallowed in a rule name in Drools - does anyone know or have a reference?
The only relevant section of Drools doc I've found so far does not specify:
Each rule must have a unique name within the rule package. If you use the same rule name more than once in any DRL file in the package, the rules fail to compile. Always enclose rule names with double quotation marks (rule "rule name") to prevent possible compilation errors, especially if you use spaces in rule names.
I think I have discovered, anecdotally, that some "grouping" characters do not work in rule names (rules named with them apparently can't be found or aren't included) - or at least not in extension rules (the extended rule seems to work with grouping chars, but not its extension; example below). The grouping chars include parentheses "()", square brackets "[]", and curly braces "{}". Less-than and greater-than "<>" do work, though, so for now I'm replacing the former with the latter.
Or are there escape chars for the problematic grouping chars?
Example:
rule "(grouping chars, and commas, work here)"
when
// conditions LHS
then
end
// removing parentheses, or replacing with < >,
// from below line works
rule "(grouping chars DON'T work here)"
extends "(grouping chars, and commas, work here)"
when
then
// consequences RHS
I haven't tested all other characters yet (for example, other punctuation), except that I have discovered commas "," work. But it would be nice to know ahead of time which characters are allowed.
Theoretically, any identifier inside a string should work, but you might have empirically found some combination that breaks the grammar somehow.
Thanks for the investigation. I've filed a Jira; please take a look at it.

Where to put the "i" flag in Firestore Security Rules str.matches(regex)?

I'm writing Security Rules for my Firebase Firestore project, but I can't seem to find a reference that shows the correct syntax for using the matches() method with a regular expression as the parameter. Here's what I use now:
request.resource.data.str.matches("^[a-z0-9]+$");
My question now is: where do I put the case-insensitive flag "i" in the expression? I tried a few different combinations, but none of them worked.
Thanks.
Firestore security rules follow the Google RE2 format for regular expressions.
This means you can define a non-capturing group for your match, and flag that group to use case-insensitive matching with:
allow write: if request.resource.data.str.matches("(?i:^[a-z0-9]+)$");
Of course you can also simply expand the character group to include both uppercase and lowercase ASCII characters:
allow write: if request.resource.data.str.matches("^[a-zA-Z0-9]+$");
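For context, a minimal sketch of where such a rule might live; the collection name items and the field name str are hypothetical, and the regex is the RE2 group-with-flag form from above:
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /items/{itemId} {
      // case-insensitive match via RE2's inline (?i:...) group flag
      allow write: if request.resource.data.str.matches("(?i:^[a-z0-9]+)$");
    }
  }
}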

How can we annotate a Unicode character in UIMA Ruta?

How can we annotate a Unicode character in UIMA Ruta?
For example, I want to mark this text: (Paris: Éditions Robert Laffont). So I used the following rule:
DECLARE CITY;
CW COLON CW+{->MARK(CITY,1,3)};
But the annotation only covered the text up to "Paris: Ã". Is there any way to solve this problem? Thanks in advance.
It's all about the definition of the lexer, which creates the token class annotations of Ruta (W, CW, SPECIAL, ...).
The rule CW COLON CW+{->MARK(CITY,1,1)}; creates an annotation of the type CITY for the text span "Paris", regardless of the unicode character.
The last rule element CW+ matches on "Ã", since that is annotated with a CW, but stops there because "‰" is not a CW but a SPECIAL.
There are different ways to avoid this problem. My advice would be to rely on a different type of annotation for your rules. The job of the lexer annotations of Ruta is to create minimal annotations; they do not define tokens in general.
You could maybe use something like this (or use an actual tokenizer for better performance):
DECLARE CITY;
DECLARE Token;
RETAINTYPE(SPACE);
(W (SPECIAL? W)*){-> Token};
RETAINTYPE;
Token COLON Token+{->MARK(CITY,1,1)};
DISCLAIMER: I am a developer of UIMA Ruta

Can actions in Lex access individual regex groups?

Can actions in Lex access individual regex groups?
(NOTE: I'm guessing not, since the group characters - parentheses - are, according to the documentation, used to change precedence. But if so, can you recommend an alternative C/C++ scanner generator that can do this? I'm not really keen on writing my own lexical analyzer.)
Example:
Let's say I have this input: foo [tagName attribute="value"] bar and I want to extract the tag using Lex/Flex. I could certainly write this rule:
\[[a-z]+[[:space:]]+[a-z]+=\"[a-z]+\"\] printf("matched %s", yytext);
But let's say I want to access certain parts of the string, e.g. the attribute value, but without having to parse yytext again (as the string has already been scanned, it doesn't really make sense to scan part of it again). So something like this would be preferable (regex groups):
\[[a-z]+[[:space:]]+[a-z]+=\"([a-z]+)\"\] printf("matched attribute %s", $1);
You can split it into start conditions. Something like this:
%x VALUEPARSE ENDSTATE
%%
    char string_buf[100];   /* indented code at the top of the rules section is local to yylex */

<INITIAL>\[[a-z]+[[:space:]]+[a-z]+=\"   { BEGIN(VALUEPARSE); }
<VALUEPARSE>([a-z]+)    { /* getting the value text */
                          strncpy(string_buf, yytext, yyleng);
                          string_buf[yyleng] = '\0';
                          BEGIN(ENDSTATE); }
<ENDSTATE>\"\]          { BEGIN(INITIAL); }
%%
As for an alternative C/C++ scanner generator: I use the Qt class QRegularExpression for the same kind of thing; it makes it very easy to get a regex group after a match.
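A minimal sketch of that approach, assuming Qt 5 or 6; the pattern mirrors the example above (with the tag-name class widened to [a-zA-Z] so the sample input actually matches), and capture group 1 holds the attribute value:
#include <QRegularExpression>
#include <QDebug>

int main() {
    // group 1 captures the attribute value, e.g. "value"
    QRegularExpression re("\\[[a-zA-Z]+\\s+[a-z]+=\"([a-z]+)\"\\]");
    QRegularExpressionMatch match = re.match("foo [tagName attribute=\"value\"] bar");
    if (match.hasMatch())
        qDebug() << "matched attribute" << match.captured(1);
    return 0;
}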
Certainly at least some forms of lex do.
But the default lex/flex downloadable from sourceforge.org does not seem to list it in its documentation, and this example leaves the full string in yytext.
From IBM's LEX documentation for AIX:
(Expression)
Matches the expression in the parentheses.
The () (parentheses) operator is used for grouping and causes the expression within parentheses to be read into the yytext array. A group in parentheses can be used in place of any single character in any other pattern.
Example: (ab|cd+)?(ef)* matches such strings as abefef, efefef, cdef, or cddd; but not abc, abcd, or abcdef.