Is there any way to get docstrings to highlight as a different color than string literals? Example:
def my_func():
    """This should be one color, these comments are used for documenting files"""
    return "this is a string literal, and should be another color"
For me, the syntax coloring for "bytes" seems to control both.
Not currently.
The last release added support for different colors for bytes and unicode literals (i.e. u'str' and b'str' get different colors, and the version without a 'u' or 'b' prefix depends on the grammar version you choose, since the default differs between Python 2 and 3), but docstrings are currently not differentiated (they're colored as bytes or unicode)...
This isn't completely straightforward in code terms (which was the reason it wasn't added in the last release), although I agree it's a nice feature to have, so I may end up adding it in the future.
Consider the string "abc를". According to Unicode's demo implementation of word segmentation, this string should be split into two words, "abc" and "를". However, three different Rust implementations of word boundary detection (regex, unic-segment, unicode-segmentation) have all disagreed and grouped that string into one word. Which behavior is correct?
As a follow-up, if the grouped behavior is correct, what would be a good way to scan this string for the search term "abc" in a way that still mostly respects word boundaries (for the purpose of checking the validity of string translations)? I'd want to match something like "abc를" but not match something like "abcdef".
I'm not so certain that the demo for word segmentation should be taken as the ground truth, even if it is on an official site. For example, it considers "abc를" ("abc\uB97C") to be two separate words but considers "abc를" ("abc\u1105\u1173\u11af") to be one, even though the former decomposes to the latter.
The idea of a word boundary isn't exactly set in stone. Unicode has a Word Boundary specification which outlines where word breaks should and should not occur. However, it has an extensive notes section elaborating on other cases (emphasis mine):
It is not possible to provide a uniform set of rules that resolves all issues across languages or that handles all ambiguous situations within a given language. The goal for the specification presented in this annex is to provide a workable default; tailored implementations can be more sophisticated.
For Thai, Lao, Khmer, Myanmar, and other scripts that do not typically use spaces between words, a good implementation should not depend on the default word boundary specification. It should use a more sophisticated mechanism, as is also required for line breaking. Ideographic scripts such as Japanese and Chinese are even more complex. Where Hangul text is written without spaces, the same applies. However, in the absence of a more sophisticated mechanism, the rules specified in this annex supply a well-defined default.
...
My understanding is that the crates you list are following the spec without further contextual analysis. Why the demo disagrees I cannot say, but it may be an attempt to implement one of these edge cases.
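For reference, the grouped behaviour is easy to reproduce with one of the crates the question mentions; here is a minimal sketch using unicode-segmentation (assuming a current 1.x release of the crate):

use unicode_segmentation::UnicodeSegmentation;

fn main() {
    // Default UAX #29 word segmentation, as implemented by the crate.
    let words: Vec<&str> = "abc를".unicode_words().collect();
    // Prints ["abc를"]: the Hangul stays attached to "abc", matching the
    // grouped behaviour described in the question.
    println!("{:?}", words);
}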
To address your specific problem, I'd suggest using Regex with \b for matching a word boundary. This unfortunately follows the same Unicode rules and will not consider "를" to be a new word. However, this regex implementation offers an escape hatch to fall back to ASCII behaviour. Simply use (?-u:\b) to match a non-Unicode boundary:
use regex::Regex;

fn main() {
    // (?-u:\b) is an ASCII-only word boundary, so "를" counts as a non-word
    // character: "abc를" matches, while "abcdef" does not.
    let pattern = Regex::new("(?-u:\\b)abc(?-u:\\b)").unwrap();
    println!("{:?}", pattern.find("some abcdef abc를 sentence"));
}
You can run it for yourself on the playground to test your cases and see if this works for you.
I noticed that a few locales have only one form of (cardinal) plural defined by the CLDR. Here are a few examples of popular languages without plurals:
Chinese
Indonesian
Japanese
Suppose the following English ICU message string:
You have {count, plural, =0 {no unread messages} one {# unread message} other {# unread messages}}.
Is there any value in translating the plural ICU syntax to these languages? For example, in Chinese:
Plural syntax: 您有{count, plural, other {#}}條未讀消息。
Or, since there is no plural, should we recommend that translators simply use the variable instead, like this:
Variable syntax: 您有{count}條未讀消息。
I tested two libraries (GlobalizeJs and FormatJs) and both seem to work with both syntaxes.
Is there any known issue in interchanging these syntaxes?
Would it be safe for most libraries to recommend using the variable syntax for locales without plurals?
There is no value in keeping the plural marker. The =0 case should say 沒有 ("have none"), though.
A couple of years late, but please note that whether there is value depends hugely on whether the phrase only uses plural categories or also uses overrides (like the =0 in your example), and on whether you are only trying to be grammatically correct in each language or are trying to sound like a human rather than a machine.
Also, if it's a matter of teaching linguists how to use ICU, you're better off explaining how arguments work, because it's not limited to plurals (there are also selects, which work in a similar way).
Finally, if you are working with translation files such as YAML with ICU messages in them, I would say it's best to keep the ICU plural marker. This means you can change the source language of your translations at any point in time (including to a language that doesn't have plurals, like Chinese), and you won't have to rewrite all the strings that should have kept the plural marker in the first place.
I have two versions of a large book in txt format and I'd like to compare them to find significant changes between the versions, ignoring small single character differences.
There are lots of diffing tools that can ignore whitespace differences, but I also want to ignore small typos and single or couple character differences. For example, one version of the book has a repeated misspelling of leige hundreds of times and this is corrected in the next version to liege. Some proper nouns have also changed their spelling. (I could make custom workarounds for each misspelling, but would like something more general purpose)
Since I only care about more significant multi-word differences, what I really want is to set a filter that ignores changes in a line unless the Levenshtein edit distance is above some threshold.
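For illustration, here is a minimal sketch of that threshold filter, assuming the strsim crate for the edit distance and two versions that are already aligned line by line (a real tool would first pair up lines with a diff):

use strsim::levenshtein;

fn main() {
    let old = ["the leige rode out", "an unchanged line", "a paragraph that was rewritten"];
    let new = ["the liege rode out", "an unchanged line", "a different paragraph entirely"];
    let threshold = 3;
    for (i, (a, b)) in old.iter().zip(new.iter()).enumerate() {
        // Ignore typo-level edits; only report lines that changed by more
        // than `threshold` single-character operations.
        if levenshtein(a, b) > threshold {
            println!("line {}: {:?} -> {:?}", i, a, b);
        }
    }
}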
Looking around, all the diff/comparison tools I can find seem to have code in mind, so they lack any feature for ignoring small text changes. Google's diff_match_patch library is great for diffing plain text and ignoring whitespace changes (demo here), but doesn't seem to have an out-of-the-box way to ignore single-character non-whitespace differences.
tl;dr: Are there any diff tools that can compare text documents but filter out minor single-character, non-whitespace differences?
In Beyond Compare you can define "replacements".
An example:
Differences are marked red:
Then you can go to Session->Session Settings and set a replacement:
Or even easier: mark the text and define the replacement immediately:
Now the difference is unimportant and marked blue:
With one click you can ignore the unimportant differences (red arrow in the screenshot).
Technical remark: I use the Pro edition of BC4.
I wish to code fully with Esperanto lexemes. That is, not ending up with an English/Esperanto mix-up. Perligata is a good example of the kind of result I would like, but it uses Latin where I want to use Esperanto.
So Perl seems to be a valid answer to my question. On the other hand, a language like Python has no mechanism that would let you use se (if in Esperanto) rather than if. On what you might call middle ground, you have languages like C that let you replace keywords through the preprocessor (#define se if), but won't let you get rid of the #define keyword itself. You also have languages like Racket and the Lisp family that would probably let you wrap most internal symbols, but probably wouldn't let you easily change the parentheses to anything else, for example mapping ( to ene and ) to ele.
Another point is the ability to use Unicode in identifiers, as Esperanto does use non-ASCII characters in its alphabet, like ĉ. That's not really a blocking element, as one can use cx instead of ĉ, but it's nevertheless an interesting parameter.
So I guess an ideal answer to this question would be a matrix of languages specifying how customizable their lexemes and grammar are.
Each formal language has its syntax. In my opinion, the lowest 'syntax overhead' is offered by Lisp-like languages. But then you don't want to have parentheses. You also don't want to have #define, so you effectively reject any syntax at all, along with all possible replacements.
I don't think there is any language that will let you do it. You should look at language generators, write your own language (at least the syntax part), or, the simplest possible way, add your own find-and-replace layer on top of any existing language.
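To make that last suggestion concrete, here is a minimal sketch of such a find-and-replace layer (written in Rust only because it is convenient here; the keyword table is invented for illustration, and the rewrite is purely textual, so it would also rewrite matching words inside string literals):

use regex::Regex;

fn esperanto_to_python(src: &str) -> String {
    // Toy keyword table: Esperanto lexeme -> Python keyword/builtin.
    let table = [("se", "if"), ("alie", "else"), ("dum", "while"), ("presu", "print")];
    let mut out = src.to_string();
    for (eo, py) in table {
        // \b avoids rewriting these letters inside longer identifiers.
        let re = Regex::new(&format!(r"\b{}\b", eo)).unwrap();
        out = re.replace_all(&out, py).into_owned();
    }
    out
}

fn main() {
    let source = "se nombro > 2:\n    presu(nombro)\nalie:\n    presu(0)\n";
    // Prints the translated source, ready to hand to the Python interpreter.
    print!("{}", esperanto_to_python(source));
}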
I'm trying to implement a library for reading Microsoft CFB (Compound File Binary) Format files, according to the official specification of that format. The specification is available from this site.
In a nutshell, some of the structures of the file are stored in a red-black tree. I've got a problem with the comparison predicate used for storing these structures in that tree. The specification says that, if the names (the strings are stored as UTF-16, the standard in Windows APIs) of these structures are different, it is necessary to iterate through every UTF-16 code point and:
(...) convert to upper-case with the Unicode Default Case Conversion
Algorithm, simple case conversion variant (simple case foldings), with the following notes.<2> Compare each upper-cased UTF-16 code point binary value.
The <2> reference says that:
For Windows XP and Windows Server 2003: The compound file implementation conforms to the Unicode 3.0.1 Default Case Conversion Algorithm, simple case folding (http://www.unicode.org/Public/3.1-Update1/CaseFolding-4.txt) with the following exceptions.
However, when I looked up the referenced case folding file and read the UTR #21 "Case Mapping" referenced there, I realized that case folding is defined as an operation that bears much more resemblance to lower-casing than to upper-casing.
By using CaseFolding-4.txt, we can obtain the case folding mappings of upper-case letters to lower-case ones. The mapping is always 1-to-1, since full foldings (those that expand to multiple characters) aren't needed here. However, the reverse mapping of lower-case letters to upper-case ones isn't straightforward anymore. For example,
0392; C; 03B2; # GREEK CAPITAL LETTER BETA
03D0; C; 03B2; # GREEK BETA SYMBOL
Thus, we have no way of knowing whether 03B2 should be converted to 0392 or 03D0. Does the standard define something like folding to upper-case? Maybe I should use case folding, and then convert to upper-case? Or have I understood the specification completely wrong?
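To make the ambiguity concrete, here is a minimal sketch (assuming a local copy of the referenced CaseFolding-4.txt) that builds the reverse map from the one-to-one foldings and reports every folded code point with more than one upper-case preimage:

use std::collections::HashMap;
use std::fs;

fn main() -> std::io::Result<()> {
    let data = fs::read_to_string("CaseFolding-4.txt")?;
    let mut reverse: HashMap<u32, Vec<u32>> = HashMap::new();
    for line in data.lines() {
        // Strip comments; data lines look like "0392; C; 03B2; # ...".
        let line = line.split('#').next().unwrap().trim();
        if line.is_empty() {
            continue;
        }
        let fields: Vec<&str> = line.split(';').map(str::trim).collect();
        // Only the one-to-one foldings matter here: status C (common) or S (simple).
        if fields.len() < 3 || (fields[1] != "C" && fields[1] != "S") {
            continue;
        }
        let from = u32::from_str_radix(fields[0], 16).unwrap();
        let to = u32::from_str_radix(fields[2], 16).unwrap();
        reverse.entry(to).or_default().push(from);
    }
    for (folded, originals) in &reverse {
        if originals.len() > 1 {
            // e.g. U+03B2 maps back to both U+0392 and U+03D0.
            println!("U+{:04X} has {} preimages: {:X?}", folded, originals.len(), originals);
        }
    }
    Ok(())
}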
Summary: The wording used by Microsoft is... confusing, to say the least. It appears that simple upper-case mapping should be done, though I can't be certain.
Background
Part of the confusion might be the difference between case folding and case mapping. Case mapping maps every character to a designated case. Case folding, while it is based on lower-casing, is defined to result in case-less characters (UTR #21 §1.3).
Now there are two variants of case mapping and case folding: simple and full. Unlike the simple transformation, the full one can change string length and, as you rightly point out, is not needed here. The specification specifically mentions simple, which is probably the only clear thing in this answer. I do feel the need to mention, for future reference, that the current Unicode Standard (6.3.0) says the default case transformation is the full one, though the version Microsoft references (3.1.1) does not appear to make this distinction.
Spec Analysis
(...) convert to upper-case with the Unicode Default Case Conversion Algorithm, simple case conversion variant (simple case foldings), with the following notes.<2> Compare each upper-cased UTF-16 code point binary value.
To me this quote appears to suggest they want upper case, and simply made an error by saying case folding instead of case mapping. But then comes that reference you quoted:
For Windows XP and Windows Server 2003: The compound file implementation conforms to the Unicode 3.0.1 Default Case Conversion Algorithm, simple case folding (http://www.unicode.org/Public/3.1-Update1/CaseFolding-4.txt) with the following exceptions.
They actually mention the case folding data file! At this point, I'm not sure what to think. My main line of thought is that Microsoft wants case folding, but erroneously thought that it was based on upper-casing rather than lower-casing. Even that is a stretch, but it's the closest I've been able to come to reconciling this possible contradiction, and I hope there's a better explanation.
I found the following in section 2.6.1, which supports some form of upper-casing:
[...] the directory entry name is compared using a special case-insensitive upper-case mapping, described in Red-Black Tree.
Note that they do in fact use the term mapping here.
The exception list
Taking a look at the exception list for the mentioned Windows XP and Windows Server 2003, most entries are subtractions, suggesting code points Microsoft wants to keep distinct. However, in the table, the code points are actually listed in reverse order to the Unicode case folding data file.
One interpretation of this is that it's just a display quirk. This idea is shot down by the last row where they subtract the case transformation 0x03C2 -> 0x03C2. This transformation does not exist in the data file since the transformation 0x03C2 -> 0x03C3 does (an unlisted case transformation is considered to transform to itself).
Another interpretation is that they do in fact erroneously believe that it's the reverse mapping that's the correct one. As you mentioned, though, this runs into trouble, as the reverse mapping is not always straightforward. Otherwise, this interpretation would be fine.
A third interpretation is to consider their reference to the Unicode case folding data file wrong. This of course makes me feel uneasy, but if they actually did mean case mapping originally, they might have just provided the link as a quick reference point. The exception list they mention does have column headers such as "Lowercase UTF-16 code point", but we know that case folding is in fact case-less.
As an aside, I did look at the exception list for the later operating systems, hoping to gain some more insight. I found more confusion. In particular the addition of 0x03C3 -> 0x03A3 troubles me. Since the exception list and the Unicode file list their code points in the opposite order, it appears that the transformation is already in the data file and doesn't need to be added. This part of the specification does not want to be understood!
Conclusion
If you've read all of the above, you'll probably guess that this conclusion is going to be less than ideal. Clearly at one or more points, the specification is in error, but it's hard to tell where. Really there are three possibilities depending on your interpretation as to what kind of case transformation needs to be done.
Simple upper case mapping
Simple case folding, followed by simple upper case mapping
Simple case folding
To me it seems like Microsoft does in fact want upper casing. From there I believe that the case folding reference is an error, and as such my guess is they just want simple upper case mapping.
I highly doubt it's the last option (simple case folding only), though. The other two options would give very similar results, with only a small number of code points possibly giving different results.
It seems like the only way to know for sure would be to either contact Microsoft, or painstakingly look at binaries to see which method is followed.
In 3.13 Default Case Algorithms (p. 115) of The Unicode Standard, Version 6.2 – Core Specification, the text refers to UnicodeData.txt. This contains:
03B2;GREEK SMALL LETTER BETA;Ll;0;L;;;;;N;;;0392;;0392
03D0;GREEK BETA SYMBOL;Ll;0;L;<compat> 03B2;;;;N;GREEK SMALL LETTER CURLED BETA;;0392;;0392
which indicates that both the Greek small letter beta (03B2) and the Greek beta symbol (03D0) have the Greek capital letter beta (0392) as their simple upper-case mapping, and, as an aside, that the two symbols have some level of compatibility (note the <compat> decomposition). It also contains the remainder of the bidirectional case conversion data you are looking for. You may also need to look at SpecialCasing.txt for boundary cases.
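If simple upper-case mapping is indeed what the spec wants, a sketch of the comparison predicate could look like the following (it parses field 12, Simple_Uppercase_Mapping, from an assumed local copy of UnicodeData.txt and compares code points, so it glosses over the UTF-16 code unit vs. code point distinction and the spec's exception list):

use std::collections::HashMap;
use std::fs;

// Build the simple upper-case map from UnicodeData.txt: field 0 is the code
// point, field 12 its Simple_Uppercase_Mapping (empty if it maps to itself).
fn load_simple_upper(path: &str) -> std::io::Result<HashMap<u32, u32>> {
    let mut map = HashMap::new();
    for line in fs::read_to_string(path)?.lines() {
        let fields: Vec<&str> = line.split(';').collect();
        if fields.len() > 12 && !fields[12].is_empty() {
            let from = u32::from_str_radix(fields[0], 16).unwrap();
            let to = u32::from_str_radix(fields[12], 16).unwrap();
            map.insert(from, to);
        }
    }
    Ok(map)
}

// Upper-case every code point and compare the results, approximating the
// "compare each upper-cased UTF-16 code point binary value" wording.
fn cfb_name_eq(a: &str, b: &str, upper: &HashMap<u32, u32>) -> bool {
    let up = |s: &str| -> Vec<u32> {
        s.chars()
            .map(|c| upper.get(&(c as u32)).copied().unwrap_or(c as u32))
            .collect()
    };
    up(a) == up(b)
}

fn main() -> std::io::Result<()> {
    let upper = load_simple_upper("UnicodeData.txt")?;
    println!("{}", cfb_name_eq("Root Entry", "ROOT ENTRY", &upper)); // true
    println!("{}", cfb_name_eq("β", "ϐ", &upper)); // true: 03B2 and 03D0 both map to 0392
    Ok(())
}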