In my UWP app I have a textbox.
I want the user to be able to type Farsi / Persian text (right to left) into the textbox, so I set the FlowDirection property to RightToLeft.
The text can be entered and is displayed correctly:
When I save the text and inspect the property during debugging, I see the same character order as on screen:
The stored value shows the same character order when viewed with SQL Server Management Studio:
When I add a '.' or a '!' at the end of the text, the textbox still displays what I expect,
but the text I get back from the Text property puts the exclamation mark at the right side of the string.
It is also stored this way in the sql database:
When loading the database value (with the exclamation point on the right) into the textbox, it shows the exclamation point correctly on the left side. There must be some magic happening here that I am not aware of, or maybe the problem is that the debug preview / SSMS preview simply does not support displaying RTL values.
My problem is that this magic does not work in other situations.
When I load the database value and put it in a Microsoft Word document, it seems to do no conversion and places the text in the document exactly as it is in the database, resulting in the exclamation point being shown on the 'wrong' side.
I would like to understand the 'magic' that takes place when displaying / storing these strings, so I can output them correctly in MS Word. And yes, I have set the paragraph where I output the values in Word to RTL.
In Unicode, all characters have directional properties that get used in the Unicode Bidirectional Algorithm for determining how characters are ordered visually. Most characters have a "strong" directional property, but not all. In particular, most punctuation characters are considered directionally neutral.
The visual ordering of neutral characters is determined by the characters that surround them. For example, the exclamation mark ! is neutral; if it occurs between two left-to-right characters, it will be treated as though it also is a left-to-right character. But if it occurs between two right-to-left characters, it will be treated as though it is a right-to-left character.
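These classes are recorded in the Unicode Character Database, so you can inspect them directly. A small illustration (Python's unicodedata module is used here purely for convenience; any UCD-aware library reports the same properties):

import unicodedata

# Bidi_Class of a strong RTL letter, a strong LTR letter, and a neutral punctuation mark
for ch in ("\u0645", "a", "!"):   # ARABIC LETTER MEEM, LATIN SMALL LETTER A, EXCLAMATION MARK
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}: {unicodedata.bidirectional(ch)}")

# Prints AL (Arabic Letter, strong RTL), L (strong LTR), and ON (Other Neutral).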
In your example, though, the exclamation mark occurs at the end of the string. So, it has a strong-direction character on one side, but nothing on the other side. In this case, another factor comes into play, which is that the paragraph as a whole has a base direction.
The Unicode Bidi Algorithm allows two ways that apps can handle the paragraph base direction:
the app can set the base direction explicitly, regardless of the string content in the paragraph; or
the app can let the base direction be derived implicitly from the string: the base direction is determined by the first strong-directional character in the string.
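The implicit "first strong" rule is simple enough to sketch in a few lines. This is only an approximation of rules P2/P3 of the algorithm (the real rule also skips over characters inside directional isolates), shown here just to make the idea concrete:

import unicodedata

def base_direction(text):
    """Guess the paragraph base direction from the first strong-directional character."""
    for ch in text:
        cls = unicodedata.bidirectional(ch)
        if cls == "L":               # strong left-to-right
            return "LTR"
        if cls in ("R", "AL"):       # strong right-to-left (Hebrew or Arabic script)
            return "RTL"
    return "LTR"                     # no strong character at all: fall back to LTR

print(base_direction("سلام دنیا!"))  # RTL - the first strong character is Arabic-script
print(base_direction("hello!"))      # LTR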
In your UWP app, when you set the flow direction to RTL, then the paragraph base direction (for purposes of the Bidi Algorithm) is RTL. With an Arabic-script string that ends with the exclamation mark, the directionality of the exclamation is set to RTL because of the paragraph base direction, and so it appears at the left end of the string. But when you view the control property value in an IDE, the IDE is presenting that property string in a control that has LTR base direction. That is causing the exclamation at the logical end of the string to appear visually at the right end.
Note that apps will often conflate base direction and alignment, though these are really distinct things. In Word, you can set the paragraph base direction in the Paragraph settings dialog, and when you do it will set the alignment to match by default:
But you can override the paragraph alignment to have an RTL base direction with left alignment:
Note that the visual order of the exclamation mark is affected by the paragraph base direction but not by the alignment. The Unicode Bidi Algorithm doesn't pay attention to the alignment.
This article gives a good overview of how the Bidi Algorithm works: https://www.w3.org/International/articles/inline-bidi-markup/uba-basics.
If you want to explore how the Bidi Algorithm works in more detail, you can read the spec, Unicode Standard Annex #9, Unicode Bidirectional Algorithm; and check out this Unicode utility that explains how the rules of the algorithm apply to sample strings you can provide.
Related
Where can I get the complete list of all Unicode characters that don't behave as simple characters? Examples: character 0x0363 (it won't be printed without another character before it), character 0x0084 (it does weird things when printed). I need just a raw list of such unusual characters so I can replace them with something harmless to avoid unwanted output effects. Regular characters (those not in this list) should use exactly one character place when printed (= cursor moved +1 to the right), should not depend on previous or next characters, and should not affect printing style in any way.
Edit because of multiple comments:
I have a Unicode string, usually consisting of "usual" characters like 0x20-0x7E or Cyrillic letters. There are also a lot of other Unicode characters that are "usual" and may safely be assumed to have strlen() = 1. The string is printed on the terminal and I need to know the resulting position of the cursor. I don't want to use some complex, unstable library to do that; I want the simplest possible logic. Every problematic character may be replaced with U+FFFD or something like "<U+0363>" (an ASCII string with its index instead of the character itself). I want a list of "possibly-problematic" characters to replace. It is acceptable to have some non-problematic characters in this list too, but not many.
There is no simple algorithm for this. You'll likely need a complex but extremely stable library: libicu, or something based on it. Basically every other library that does this kind of work is based on libicu, which is maintained under the Unicode Consortium.
If you don't want to use the official library (or something based on their library), you'll need to parse the Unicode Character Database yourself. In particular, you need to look at Character Properties, and parse the files in the UCD.
I believe you're asking for Bidi_Class (i.e. "direction") to be Left_To_Right, Canonical_Combining_Class to be Not_Reordered, and Joining_Type to be Non_Joining.
You probably also want to check the General_Category and avoid M* (Marks) and C* (Other).
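As a rough sketch of those checks, Python's unicodedata module exposes most of the relevant properties directly. Joining_Type is the exception; it lives in ArabicShaping.txt in the UCD (or in ICU), so it is not checked here:

import unicodedata

def looks_simple(ch):
    """Rough filter: strong LTR, non-combining, and not a Mark or Other category.
    Joining_Type is NOT checked; that needs ArabicShaping.txt or a library like ICU."""
    if unicodedata.bidirectional(ch) != "L":        # Bidi_Class must be Left_To_Right
        return False
    if unicodedata.combining(ch) != 0:              # Canonical_Combining_Class must be Not_Reordered (0)
        return False
    if unicodedata.category(ch)[0] in ("M", "C"):   # skip Marks and Other (controls, format, ...)
        return False
    return True

text = "ab\u0363c\u200d"
print([f"U+{ord(c):04X}" for c in text if not looks_simple(c)])   # ['U+0363', 'U+200D']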
This will work for some emoji, but the whole approach breaks down for a lot of emoji that look simple and are not. Most famously: ❤️, which is two "characters," not one. You may want to filter out emoji. As a simple starting point, you may want to restrict yourself to the Basic Multilingual Plane (BMP), which is code points U+0000-U+FFFF. Anything above this range is, almost by definition, rare or unusual. The BMP does include some emoji, but most emoji (and all new emoji) are outside the range.
Remember that the glyphs for single characters can still have radically different widths, even in nominally fixed-width fonts. For example, 𒈙 (U+12219 CUNEIFORM SIGN LUGAL OPPOSING LUGAL) is a completely "normal" character in the way you're describing. It is left-to-right. It doesn't depend on or influence characters around it (it's non-combining and non-joining). Its "length in characters" is 1. Its glyph is also extremely wide in most fonts and breaks a lot of layout. I don't know anything in the Unicode database that would warn you of this, since "glyph width" is entirely a function of fonts, not characters, and Unicode explicitly does not consider fonts. (That said, most of the most problematic characters are outside the BMP. Probably the most common exception is DŽ, but many fixed-width fonts have a narrow glyph for it: DŽ.)
Let's write some cuneiform in a fixed-width font.
Normally, every character should line up with a character above.
Here: 𒈙. See how these characters don't align correctly?
Not only is it a very wide glyph, but its width is not even a multiple.
At least not in my font (Mac Safari 15.0).
But DŽ is ok.
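For what it's worth, even the East_Asian_Width property won't flag that sign as wide, so there really is nothing obvious in the database to filter on:

import unicodedata

for ch in ("a", "\U00012219"):   # LATIN SMALL LETTER A, CUNEIFORM SIGN LUGAL OPPOSING LUGAL
    print(f"U+{ord(ch):04X}: East_Asian_Width = {unicodedata.east_asian_width(ch)}")

# Neither reports 'W' (wide), even though the cuneiform glyph is far wider than one cell in most fonts.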
Also remember that there are multiple ways to encode the same "character." For example, é can be a "simple" character (U+00E9), or it can be two characters (U+0065, U+0301). So in some cases é may print in your scheme, and in others it won't. I suspect this is fine for your problem, but if it isn't, you're going to need to apply a normalization form (likely NFC).
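If that matters for your case, the normalization itself is a one-liner; a minimal sketch using NFC:

import unicodedata

decomposed = "e\u0301"                        # 'e' + COMBINING ACUTE ACCENT: two code points
composed = unicodedata.normalize("NFC", decomposed)

print(len(decomposed), len(composed))         # 2 1
print(composed == "\u00e9")                   # True: the single code point U+00E9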
If I have a string "\u05D2\u0308" I don't actually get a diaeresis on top of a gimel. It sits to the side, as a discrete glyph.
I don't actually want the combined glyph, but I'm confused in general. How does a combining diacritic like U+0308 decide whether to combine with the preceding character or hang out on its own?
And how much of this behavior is specified in the unicode standard and how much is up to the individual text renderer or font?
Actually, it does combine.
You are perhaps using some environment where the text engine fails to render this correctly, but in fact your string is one character long (using the conventional sense of "character"), and a correct Unicode-compliant environment will tell you so.
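You can confirm this from the character properties rather than from any particular renderer: U+0308 is a nonspacing mark whose combining class says it is rendered above the preceding base character. A quick check (Python used only as an illustration):

import unicodedata

base, mark = "\u05d2", "\u0308"     # HEBREW LETTER GIMEL + COMBINING DIAERESIS

print(unicodedata.category(mark))   # 'Mn' - a nonspacing mark, which always applies to the preceding base
print(unicodedata.combining(mark))  # 230  - canonical combining class "Above"
print(len(base + mark))             # 2 code points, but one grapheme cluster (one "character")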
I'm using the Google Sheets API to export cells that contain Arabic text to a JSON file. When a cell contains text that has an exclamation mark at the end of a sentence (so on the left side of the text in an RTL cell), it switches positions in the API response and ends up on the right (incorrect) side in my resulting JSON output:
"levelSelect:titleOne": "قاوم الطعام غير الصحي!"
The word order is correct, but I would expect the exclamation mark to be after the third double quote from the left and not before the fourth one. The punctuation mark placement is correct in the original cell in Google Sheets:
So all the Arabic (RTL) characters are coming back in the original order, but the one non-Arabic character (the exclamation mark) has switched places.
I query Google Sheets with a higher level library, but even if I make a direct API request (using the OAuth playground), I get the same output (FWIW, it makes the request with charset=utf-8 in the Content-type header). The order is already wrong before I write the response to a file.
At this point, I am not sure whether this is a bug in the Google Sheets API or whether there is something fundamental I don't understand about how Arabic text should be handled. There doesn't seem to be a magic 'format this mostly-RTL string with the neutral characters in the right place' conversion function.
I am aware that there are copy-and-paste issues between different sources, where this can happen when you paste text with formatting from the clipboard. If I copy the cell with Cmd-C (on a Mac) and then paste it into some places with just Cmd-V, I get the reversed order, and if I use Shift-Cmd-V, I get the correct order for the exclamation mark. It's as if the API gets confused about which format to send.
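A quick way to see what the API is actually returning, independent of how any viewer displays it, is to dump the code points of the parsed string. A minimal sketch, assuming the response has already been parsed as JSON:

import json, unicodedata

payload = '{"levelSelect:titleOne": "قاوم الطعام غير الصحي!"}'   # raw JSON text from the API
value = json.loads(payload)["levelSelect:titleOne"]

for ch in value:
    print(f"U+{ord(ch):04X}  {unicodedata.bidirectional(ch):>3}  {unicodedata.name(ch, '?')}")

# The exclamation mark is the LAST code point in logical order; whether it is drawn on the
# left or the right is decided later, by the base direction of whatever displays the string.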
Are there any invisible characters? I have searched Google for invisible characters and found many answers, but I'm not sure about them. Can someone on Stack Overflow tell me more about this?
Also, I have come across a profile on Facebook where the user didn't have any name on their profile. How is this possible? Is it some database issue, hacking, or something else?
When I searched the Internet, I found that 200D is an ASCII value for an invisible character. Is this true?
I just went through the character map to get these.
They are all in Calibri.
Number  Name                   HTML Code  Appearance
------  ---------------------  ---------  ----------
U+2000  En Quad                &#8192;    " "
U+2001  Em Quad                &#8193;    " "
U+2002  En Space               &#8194;    " "
U+2003  Em Space               &#8195;    " "
U+2004  Three-Per-Em Space     &#8196;    " "
U+2005  Four-Per-Em Space      &#8197;    " "
U+2006  Six-Per-Em Space       &#8198;    " "
U+2007  Figure Space           &#8199;    " "
U+2008  Punctuation Space      &#8200;    " "
U+2009  Thin Space             &#8201;    " "
U+200A  Hair Space             &#8202;    " "
U+200B  Zero-Width Space       &#8203;    ""
U+200C  Zero Width Non-Joiner  &#8204;    ""
U+200D  Zero Width Joiner      &#8205;    ""
U+200E  Left-To-Right Mark     &#8206;    ""
U+200F  Right-To-Left Mark     &#8207;    ""
U+202F  Narrow No-Break Space  &#8239;    " "
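If you want to verify or extend this table programmatically, the names come straight from the character database and the HTML codes are just the decimal code points; a small sketch:

import unicodedata

for cp in list(range(0x2000, 0x2010)) + [0x202F]:
    print(f"U+{cp:04X}  {unicodedata.name(chr(cp)):<25}  &#{cp};")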
How a character is represented is up to the renderer, but the server may also strip out certain characters before sending the document.
You can also have untitled YouTube videos like https://www.youtube.com/watch?v=dmBvw8uPbrA by using the Unicode character ZERO WIDTH NON-JOINER (U+200C), or &#8204; in HTML.
There is actually a truly invisible character: U+FEFF.
This character is called the Byte Order Mark (BOM for short) and is related to how Unicode text is encoded (UTF-8/UTF-16). It can be a confusing concept, but in short the BOM is an invisible character that doesn't take up any space. You can copy the character below, between the > and <.
Here is the character:
> <
How to catch this character in action:
Copy the character between the > and <,
Write a line of text, then randomly put your caret in the line of text
Paste the character in the line.
Go to the beginning of the line and press and hold the right arrow key.
You will notice that when your caret gets to the place where you pasted the character, it briefly stops for around half a second. This is because the caret is passing over the invisible character; just because you can't see it doesn't mean it isn't there. The caret still registers a character at the position where you pasted the BOM and has to step through it. Since the BOM is invisible, the caret simply appears to pause for a brief moment. You can paste the BOM multiple times in one spot and redo the steps above to make the effect more obvious. Good luck!
EDIT: Sadly, Stack Overflow doesn't like the character. Here is an example from w3.org: https://www.w3.org/International/questions/examples/phpbomtest.php
Other answers are correct - whether a character is invisible or not depends on what font you use. This seems to me to be a pretty good list of characters that are truly invisible (not even a space). It contains some characters that the other lists are missing.
'\u2060', // WORD JOINER
'\u2061', // FUNCTION APPLICATION
'\u2062', // INVISIBLE TIMES
'\u2063', // INVISIBLE SEPARATOR
'\u2064', // INVISIBLE PLUS
'\u2066', // LEFT-TO-RIGHT ISOLATE
'\u2067', // RIGHT-TO-LEFT ISOLATE
'\u2068', // FIRST STRONG ISOLATE
'\u2069', // POP DIRECTIONAL ISOLATE
'\u206A', // INHIBIT SYMMETRIC SWAPPING
'\u206B', // ACTIVATE SYMMETRIC SWAPPING
'\u206C', // INHIBIT ARABIC FORM SHAPING
'\u206D', // ACTIVATE ARABIC FORM SHAPING
'\u206E', // NATIONAL DIGIT SHAPES
'\u206F', // NOMINAL DIGIT SHAPES
'\u200B', // ZERO WIDTH SPACE
'\u200C', // ZERO WIDTH NON-JOINER
'\u200D', // ZERO WIDTH JOINER
'\u200E', // LEFT-TO-RIGHT MARK
'\u200F', // RIGHT-TO-LEFT MARK
'\u061C', // ARABIC LETTER MARK
'\uFEFF', // ZERO WIDTH NO-BREAK SPACE (Byte Order Mark)
'\u180E', // MONGOLIAN VOWEL SEPARATOR
'\u00AD'  // SOFT HYPHEN
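As a usage sketch, the list above can be folded into a translation table to strip (or replace) those code points; for example, in Python:

# Helper built from the list above; adjust the set to taste.
INVISIBLES = {
    0x2060, 0x2061, 0x2062, 0x2063, 0x2064,
    0x2066, 0x2067, 0x2068, 0x2069,
    0x206A, 0x206B, 0x206C, 0x206D, 0x206E, 0x206F,
    0x200B, 0x200C, 0x200D, 0x200E, 0x200F,
    0x061C, 0xFEFF, 0x180E, 0x00AD,
}

def strip_invisibles(text, replacement=""):
    """Remove (or replace) the invisible formatting characters listed above."""
    return text.translate({cp: replacement for cp in INVISIBLES})

print(strip_invisibles("foo\u200bbar\u00ad"))   # foobar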
The question about invisible characters in Unicode deserves a more thorough explanation.
Short answer - there are lots
Here are 134 invisible characters →← and here is their escaped ASCII representation: U+00AD U+061C U+180E U+200B U+200C U+200D U+200E U+200F U+202A U+202B U+202C U+202D U+202E U+2060 U+2061 U+2062 U+2063 U+2064 U+2067 U+2066 U+2068 U+2069 U+206A U+206B U+206C U+206D U+206E U+206F U+FEFF U+1D173 U+1D174 U+1D175 U+1D176 U+1D177 U+1D178 U+1D179 U+1D17A U+E0001 U+E0020 U+E0021 U+E0022 U+E0023 U+E0024 U+E0025 U+E0026 U+E0027 U+E0028 U+E0029 U+E002A U+E002B U+E002C U+E002D U+E002E U+E002F U+E0030 U+E0031 U+E0032 U+E0033 U+E0034 U+E0035 U+E0036 U+E0037 U+E0038 U+E0039 U+E003A U+E003B U+E003C U+E003D U+E003E U+E003F U+E0040 U+E0041 U+E0042 U+E0043 U+E0044 U+E0045 U+E0046 U+E0047 U+E0048 U+E0049 U+E004A U+E004B U+E004C U+E004D U+E004E U+E004F U+E0050 U+E0051 U+E0052 U+E0053 U+E0054 U+E0055 U+E0056 U+E0057 U+E0058 U+E0059 U+E005A U+E005B U+E005C U+E005D U+E005E U+E005F U+E0060 U+E0061 U+E0062 U+E0063 U+E0064 U+E0065 U+E0066 U+E0067 U+E0068 U+E0069 U+E006A U+E006B U+E006C U+E006D U+E006E U+E006F U+E0070 U+E0071 U+E0072 U+E0073 U+E0074 U+E0075 U+E0076 U+E0077 U+E0078 U+E0079 U+E007A U+E007B U+E007C U+E007D U+E007E U+E007F
Are there more? Yes.
Are there invisible characters in the ASCII range? Depends on the font.
Long answer - ready? set. go!
The Unicode Standard enables anyone to read and write in their own language. To do that, it lists unique code points (U+hex), that are categorized into letters (D,ž,Dž,ʶ,愛,𓂀), symbols (+∊≠,£¥₪,҂˚˟˿), marks (ם֑֟֯ ,ী,◌҉ ), separators ( , , , , ), emojis (😊,🙏,👍), and much more. ASCII/Basic Latin is the very beginning of the table and more code points are added every update.
Simply listing unique numbers for characters is not enough. Characters can change their shape, or change the sentence, depending on the context. To support that, every code point comes with a list of properties. These properties may define its width (AA), its role in the sentence (-“.), its direction (cכ), and much more.
Most invisible characters have the property General_Category=Format (other answers here included spaces as well). These characters have a supporting role in a word or sentence. Here are some examples:
General Punctuation block - invisible characters that are an integral part of some writing systems and emoji. Common ones are Zero Width Joiner (U+200D), Zero Width Non-Joiner (U+200C), and Word Joiner (U+2060).
Explicit Bidirectional Formatting characters - 12 invisible characters used to enforce different direction constraints on a sentence, helping present text correctly to the more than 300 million speakers of right-to-left languages such as Hebrew and Arabic.
Tags - 97 invisible characters that mirror ASCII (just drop the E and you get characters in the ASCII range). These are used as emoji modifiers and digital signatures to prove who copied your text.
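The "drop the E" part is literal: each tag character is the corresponding ASCII code point plus 0xE0000, so a hidden tag run can be decoded (or produced) mechanically. A small sketch:

def decode_tags(text):
    """Extract the hidden ASCII text carried by Unicode TAG characters (U+E0020..U+E007E)."""
    return "".join(chr(ord(ch) - 0xE0000) for ch in text if 0xE0020 <= ord(ch) <= 0xE007E)

tagged = "hello" + "".join(chr(0xE0000 + ord(c)) for c in "secret")
print(decode_tags(tagged))   # secret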
This all leads to the topic of exploiting invisible characters for homograph attacks / visual spoofing. Sometimes it's harmless, like invisible names and titles, but in many cases they are used maliciously. For example, U+202E is one invisible character that has been doing more harm than good for decades!
One last point: there is another way to make invisible characters, using fonts. Fonts are files that store glyphs (pictures of characters) that define how characters look. If the font does not contain a glyph for a code point, a substitute/replacement character is displayed (e.g. �, □). But if the font contains a transparent glyph for a code point, then the character is invisible, though only when displayed with that font. This is the only way to have invisible characters in the ASCII range (for example, can you see the U+000C Form Feed between →``←?).
Hope you find this explanation helpful and may you check strings for invisible characters more often 😉
Yes, you can use an invisible or blank name on Facebook by using certain HTML codes/symbols.
Method 1:
Copy and paste the (ﹺ ﹺ) symbols, without the brackets, into your first and last name fields.
Method 2:
Click on edit name, then copy and paste the following symbol into the first and last name fields.
ՙՙ ՙՙ
An invisible character is U+200B, the ZERO WIDTH SPACE.
While trying to parse some Unicode text strings, I'm hitting an invisible character that I can't find any definition for. If I paste it into a text editor and show invisibles, I can see that it looks like a bullet point (• alt-8), and by copy/pasting it, I can see it has an effect like a space or tab, but it's none of those.
I need to test for it, something like...
if(uniChar == L'\t')
But of course I need to provide something to match to.
It has bytes 0xc2 0xa0 in UTF-8.
If no-one has a definition, is there any devious way to test for something I can't define!?
(I happen to be using NSStrings in Objective-C, OSX, Xcode, but I don't think that has any bearing.)
Bytes C2 A0 in UTF-8 encode U+00A0 ɴᴏ-ʙʀᴇᴀᴋ sᴘᴀᴄᴇ, which can be used, for example, to display combining marks in isolation. It is &nbsp; as a named HTML entity. It is almost the same as a U+0020 sᴘᴀᴄᴇ, except it prevents line breaks before or after it, and acts as a numerical separator for bidirectional layout.
The dot you see when you ask a text editor to show invisibles just happens to be the glyph the text editor chose for displaying spaces. It does not mean the character in question is U+00B7 ᴍɪᴅᴅʟᴇ ᴅᴏᴛ, which is definitely not invisible.
In code, if you have it as a unichar, you can compare it to L'\x00A0'.
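If it helps to sanity-check the bytes outside the app, the same comparison takes a couple of lines in any Unicode-aware language; for example (Python used purely to illustrate the same idea as the unichar comparison above):

raw = b"\xc2\xa0"              # the two bytes seen in the UTF-8 data
ch = raw.decode("utf-8")

print(f"U+{ord(ch):04X}")      # U+00A0
print(ch == "\u00a0")          # True: NO-BREAK SPACE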