What's the Unicode or Segoe UI Symbols (or other font) code for exclamation mark in circle?
There is no single Unicode codepoint for that particular symbol.
Unicode does define a U+20DD COMBINING ENCLOSING CIRCLE codepoint, but most fonts (including Segoe) do not treat it as a combining symbol, but rather as its own character. In Word, for instance, you would have to adjust the character spacing between it and a preceding character (in this case U+0021 EXCLAMATION MARK) to a negative offset to make them overlap (see Using the “Combining Enclosing Circle” character in Word).
Some fonts do support U+20DD in general (see COMBINING ENCLOSING CIRCLE (U+20DD) Font Support), and some of them do treat it as a combining mark (Code2000, GNU FreeFont, the STIX fonts, Symbola, XITS, etc.), but the resulting overlap may not be visually what you are looking for, depending on the size and alignment of the character it is combined with.
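As a small illustration, here is a minimal C# sketch of building the two-character sequence; whether the two characters actually overlap is decided entirely by the font that ends up rendering the string:

    using System;
    using System.Text;

    // U+0021 EXCLAMATION MARK followed by U+20DD COMBINING ENCLOSING CIRCLE.
    string circledExclamation = "!\u20DD";

    Console.OutputEncoding = Encoding.UTF8;
    Console.WriteLine(circledExclamation);        // overlap (or not) depends on the output font
    Console.WriteLine(circledExclamation.Length); // 2 -- it is a sequence, not a single code point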
I'm combining some characters with Unicode combining marks. Some marks, however, appear shifted to the right when presented in labels. This isn't the actual example, but say I combine A and ˚: instead of getting Å, I get A˚. If I copy the text and paste it somewhere else, the character appears fine (Å).
To combine the characters I use a method that does this:
Character("\(character)\(mark)")
Where character would be a letter and mark an accent or another combining mark.
I read that this may happen because some fonts don't support certain characters. The font I use for the labels where I display the combined characters is the system font (systemFont).
Why is this happening? How can I prevent combined characters from being shifted to the right?
Is there any way to display diacritical marks like the following without the dotted ring?
◌́
◌̀
◌̃
Each of these items is actually two characters in Unicode that are combined via ligatures or mark-to-base features in the font. The dotted circle is U+25CC, and the marks you have here are U+0301, U+0300, and U+0303. Each of these is designed to combine with the previous character, but there are non-combining versions of each: U+02CA, U+02CB, and U+02DC.
So you can delete the dotted circle from the beginning (it may be difficult to find, since the marks have a width of zero) and replace it with a space, but the result may display in odd ways depending on what surrounds it:
́
̀
̃
Or use the non-combining versions of these marks:
ˊ
ˋ
˜
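A small C# sketch of both options (the exact appearance still depends on the font used for output):

    using System;
    using System.Text;

    Console.OutputEncoding = Encoding.UTF8;

    // Option 1: the combining marks U+0301, U+0300, U+0303 attached to a plain space
    // instead of the dotted circle U+25CC.
    Console.WriteLine(" \u0301   \u0300   \u0303");

    // Option 2: the non-combining (spacing) versions U+02CA, U+02CB, U+02DC.
    Console.WriteLine("\u02CA \u02CB \u02DC");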
I want the character ⇓ with stroke, just like ⇏ but downwards, but I can't find it. Does it exist?
Edit:
If you don't see the arrows (e.g. you use IE),
I want the character [downwards double arrow] with stroke, just like [rightwards double arrow with stroke] but downwards, but I can't find it. Does it exist?
There is no such character as a precomposed character (i.e., as a single encoded character, a code point assigned to a character), but you can in principle represent it using an arrow character followed by a combining overlay character.
The character “⇏” U+21CF RIGHTWARDS DOUBLE ARROW WITH STROKE is defined as having the canonical decomposition RIGHTWARDS DOUBLE ARROW (U+21D2) followed by COMBINING LONG SOLIDUS OVERLAY (U+0338). In principle, a character should be rendered the same way as its canonical decomposition. In practice, things don’t always go that way.
Along the same lines, a downwards double arrow with stroke can be written as the two-character sequence DOWNWARDS DOUBLE ARROW (U+21D3) followed by COMBINING LONG SOLIDUS OVERLAY (U+0338) or, in HTML, as &#x21D3;&#x0338;. In practice, few fonts contain these characters, and browsers may fail to implement the combination properly. Moreover, in many fonts the result is awkward. In Arial Unicode MS and in DejaVu Serif the result might be acceptable, but only the latter is free (it can be legally used as a downloadable font via @font-face). Here’s the combination as rendered by your browser with the SO stylesheets in effect: ⇓̸.
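For example, in C# the sequence can be written with escapes, and normalization confirms that no precomposed form exists (a sketch; it says nothing about how nicely a given font draws the overlay):

    using System;
    using System.Text;

    Console.OutputEncoding = Encoding.UTF8;

    // U+21D3 DOWNWARDS DOUBLE ARROW followed by U+0338 COMBINING LONG SOLIDUS OVERLAY.
    string negatedDownArrow = "\u21D3\u0338";
    Console.WriteLine(negatedDownArrow);

    // NFC leaves the pair alone, because there is no precomposed code point for it...
    Console.WriteLine(negatedDownArrow.Normalize(NormalizationForm.FormC) == negatedDownArrow); // True
    // ...whereas U+21CF canonically decomposes to U+21D2 U+0338.
    Console.WriteLine("\u21CF".Normalize(NormalizationForm.FormD) == "\u21D2\u0338");           // True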
It doesn't seem to exist, according to this page (compared to this).
I am trying to minimize the vertical distance between controls on a programmatically constructed Windows Form (using C#). This involves setting the Height property appropriately.
I have found that if the text of the control does not contain any letters with descenders (i.e. none of the characters j, g, p, q or y), the control Height can be smaller than when it does contain such letters; when descenders are present, they get chopped off if the Height isn't large enough.
Testing for the above 5 characters works fine as long as the language is English, or English-like, but I need to cater for (just about) any language.
Is there a way, given some arbitrary Unicode character (and perhaps a font) to determine if that Unicode character has a descender or not?
There is no property defined for Unicode characters to indicate the presence of a descender; it’s really a feature of glyph design rather than of characters. For example, “Q” has a descender in many fonts, and “J” has one in some. Besides, given the context, you should also consider diacritical marks placed below a letter, not just descenders of base letters. And probably diacritics above letters, too.
So you would need to read the font information (when available) about character dimensions, or tentatively draw characters in your software and measure their dimensions.
As a rule of thumb, any line height below 1.1 times the font size will cause problems with some characters and fonts. Using 1 (“setting solid”) is not enough, because characters may in fact extend outside the font size.
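To see why a multiplier of 1 is not enough, you can read the font’s own metrics. This sketch uses System.Drawing, with Segoe UI purely as an example (the exact ratio varies per font):

    using System;
    using System.Drawing;

    // Recommended line spacing relative to the em size, straight from the font's metrics.
    using (var family = new FontFamily("Segoe UI"))
    {
        double ratio = (double)family.GetLineSpacing(FontStyle.Regular)
                     / family.GetEmHeight(FontStyle.Regular);
        Console.WriteLine(ratio); // roughly 1.3 for Segoe UI -- well above the font size itself
    }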
In Windows (GDI), you can render the text into a path (BeginPath, TextOut, EndPath) and then call GetPath() to get an array containing the X/Y coordinates of every point making up the outline of the string of glyphs. Search the array for the min/max values, which gives you the rectangle exactly enclosing the string, right to the edge of the letters.
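In C#/GDI+, a rough managed analogue of that idea is to add the string to a GraphicsPath and compare the bottom of its exact outline bounds with the baseline position derived from the FontFamily metrics. This is a sketch under those assumptions (HasDescender is just an illustrative helper name), not a drop-in solution:

    using System.Drawing;
    using System.Drawing.Drawing2D;

    static class DescenderCheck
    {
        // Returns true if any glyph outline in 'text' extends below the baseline
        // when rendered with 'font'. Ignores hinting/antialiasing effects.
        public static bool HasDescender(string text, Font font)
        {
            using (var path = new GraphicsPath())
            {
                // Build the glyph outlines with the layout origin at (0, 0).
                path.AddString(text, font.FontFamily, (int)font.Style, font.Size,
                               new PointF(0f, 0f), StringFormat.GenericTypographic);

                RectangleF bounds = path.GetBounds();

                // Baseline offset from the top of the layout box, in the same units as font.Size.
                FontFamily family = font.FontFamily;
                float ascent = font.Size * family.GetCellAscent(font.Style)
                                         / family.GetEmHeight(font.Style);

                return bounds.Bottom > ascent + 0.5f; // small tolerance for rounding
            }
        }
    }

With most Latin fonts, HasDescender("Table", font) comes out false while HasDescender("query", font) comes out true, which is the distinction the question is after.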
Characters inside the BMP are specified by 4 hex digits,
and characters outside the BMP by 5 or 6 digits.
But my question is:
how is the final character drawn from the value of the code point?
Is a picture of each character stored on every computer, so that displaying it just shows the matching picture?
Or is the final glyph computed from the code point itself?
Each Unicode character has a code (its code point). The software displaying the character obtains a glyph for that character code, usually from a font installed on the host computer, and then uses that glyph to display the character.
If it can't find a glyph for that character (many fonts aimed at Latin scripts completely omit the glyphs used for East Asian characters), it formally can't display it. It will then either indicate an error or use a fallback glyph meaning that the actual glyph can't be displayed (this can be a question mark, a hollow box, or whatever).
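As a small C# illustration of that lookup chain (the font name here is just an example and may not contain every glyph), the code point is only a number; the glyph comes from whatever font is used at draw time:

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;

    int bmpCodePoint    = 0x2757;  // BMP: 4 hex digits (HEAVY EXCLAMATION MARK SYMBOL)
    int nonBmpCodePoint = 0x1F600; // outside the BMP: 5 hex digits (GRINNING FACE)

    // A code point is just a number; ConvertFromUtf32 turns it into UTF-16 text
    // (one char for the BMP character, a surrogate pair for the other one).
    string text = char.ConvertFromUtf32(bmpCodePoint) + char.ConvertFromUtf32(nonBmpCodePoint);
    Console.WriteLine(text.Length); // 3 (one char plus a surrogate pair)

    // The glyph itself comes from the font chosen at draw time, not from the code point.
    // If the font lacks a glyph, the renderer substitutes a fallback or a placeholder box.
    using (var bmp = new Bitmap(200, 80))
    using (var g = Graphics.FromImage(bmp))
    using (var font = new Font("Segoe UI Symbol", 32f))
    {
        g.DrawString(text, font, Brushes.Black, 0f, 0f);
        bmp.Save("glyphs.png", ImageFormat.Png);
    }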