I'm writing a Tcl/Tk application in which I would like to use Font Awesome icons.
In principle, this works nicely: just display some string/label with the correct Unicode character, and if the proper font is installed, it will render.
Now, on my dev machine I have Font Awesome installed as an ordinary font, but I cannot expect that on the deployment machines.
So I would like to find out whether the system can render a given character, or whether it just uses a glyph-not-found placeholder. In the latter case, I would fall back to some less nice representation...
(I don't want my users to have to answer a question like "does this string look correct?")
E.g. the symbol "\uf16c" displays as the Stack Overflow icon in my application; in my browser it is rendered as a glyph-not-found box.
So, is there a way to find out programmatically whether any of the (used) system fonts provides the glyph for a given character?
Unfortunately, there isn't; it's outright missing functionality. The closest you can get is to get the actual font info for a character — requires 8.6 I think — or to measure its width, but that doesn't really help:
% font actual TkFixedFont
-family Monaco -size 11 -weight normal -slant roman -underline 0 -overstrike 0
% font actual TkFixedFont \uf16c
-family Monaco -size 11 -weight normal -slant roman -underline 0 -overstrike 0
% font measure TkFixedFont \uf16c
14
(The character renders as the glyph-not-found symbol on this system with that font.)
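If shipping a small helper is acceptable, a workaround is to inspect the candidate font files directly instead of asking Tk. Here is a minimal sketch using Python with the fontTools library (an assumption on my part: Python and fontTools must be available on the deployment machine, and the font path is only illustrative):

# pip install fonttools
from fontTools.ttLib import TTFont

def font_has_glyph(font_path, char):
    # getBestCmap() merges the font's Unicode cmap subtables into one dict
    cmap = TTFont(font_path).getBestCmap()
    return ord(char) in cmap

# Illustrative path; scan whatever font directories exist on the target machine.
print(font_has_glyph("/usr/share/fonts/truetype/font-awesome.ttf", "\uf16c"))

You could run such a check once at startup and switch to the less nice fallback representation if it fails.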
I've already installed Windows Terminal and set it up with Oh My Posh, and everything works as intended.
However, whenever I launch PowerShell 7 outside the Terminal, the font is messy, as you can see in the image below.
I have already tried changing the font to the same one I used in the Terminal's .json settings, but there are still some parts that do not render correctly, and I cannot use it that way with VSCode.
The problem is that the Windows Console doesn't fully support Unicode:
Windows Console was created way back in the early days of Windows,
back before Unicode itself existed! Back then, a decision was made to
represent each text character as a fixed-length 16-bit value (UCS-2).
Thus, the Console’s text buffer contains 2-byte wchar_t values per
grid cell, x columns by y rows in size.
...
One problem, for example, is that because UCS-2 is a fixed-width
16-bit encoding, it is unable to represent all Unicode codepoints.
This means the Windows Console has "partial" support for Unicode characters (i.e. as long as the character can be represented in UCS-2), but it won't support codepoints outside that range.
When you see boxes, it means the character being used lies outside the UCS-2 range. You can also tell because you get 2 boxes (i.e. 2 × 16-bit values, a surrogate pair). That is why you can't have happy faces 😀 in your Windows Console (which makes me sad ☹️).
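You can see the distinction by counting UTF-16 code units. A quick illustration in Python (any UTF-16-aware language shows the same thing):

>>> len("\u2514".encode("utf-16-le")) // 2    # BOX DRAWINGS LIGHT UP AND RIGHT
1
>>> len("\U0001F600".encode("utf-16-le")) // 2  # GRINNING FACE needs a surrogate pair
2

One code unit fits a single console grid cell; two is what produces the pair of boxes.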
In order for it to work in all locations, you will have to modify your Oh My Posh theme to use a different character, one that can be represented as a single UCS-2 value.
For Version 2 of Oh My Posh, you make the font changes by editing the $ThemeSettings variable. Follow the instructions on GitHub for configuring Theme Settings, e.g.:
$ThemeSettings.GitSymbols.BranchSymbol = [char]::ConvertFromUtf32(0x2514)
For Version 3+ of Oh My Posh, you have to edit the JSON configuration file to make the changes, e.g.:
...
{
  "type": "git",
  "style": "powerline",
  "powerline_symbol": "\u2514",
  ...
Several Unicode-related questions have been confusing me for some time.
For the following reasons, I think the Unicode characters exist on disk:
If I execute echo "\u6211" in a terminal, it prints the glyph corresponding to the Unicode code point U+6211.
There's a concept of the UCD (Unicode Character Database), and we can download its latest version: UCD latest.
Some characters from newer Unicode versions, like the latest emojis, cannot be displayed on my Mac until I upgrade macOS.
So, if the Unicode characters do exist on disk:
Where is it?
How can I upgrade it?
What's the process of mapping a Unicode code point to a glyph?
If I use a specific font, then what's the process of mapping the code point to a glyph?
If not, then what's the process?
It would be much appreciated if someone could shed light on these problems.
If I execute echo "\u6211" in a terminal, it prints the glyph corresponding to the Unicode code point U+6211.
That's echo -e in bash.
› echo "\u6211"
\u6211
› echo -e "\u6211"
我
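As an aside (assuming bash 4.2 or later), printf understands \u escapes without any flag:
› printf '\u6211\n'
我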
Where is it?
In the font file.
Some characters from newer Unicode versions, like the latest emojis, cannot be displayed on my Mac until I upgrade macOS.
How can I upgrade it?
Installing/upgrading a suitable font with the emojis should be enough. I don't have macOS, so I cannot verify this.
I use "Noto Color Emoji" version 2.011/20180424, and it works fine.
What's the process of mapping a Unicode code point to a glyph?
The application (e.g. a text editor) provides the font rendering subsystem (Quartz? on macOS) with Unicode text and a font name. The font renderer analyses the codepoints of the text and decides whether this is simple text (e.g. Latin, Chinese, stand-alone emojis) or complex text (e.g. Latin with many combining marks, Thai, Arabic, emojis with zero-width joiners). The renderer then finds the corresponding outlines in the font file. If the file does not have the required glyph, the renderer may use a similar font, or fall back to a configured last-resort font that provides a poor substitute (white box, black question mark, etc.). The outlines then undergo shaping (to compose complex glyphs) and line-breaking. Finally, the font renderer hands off the result to the display system.
Apart from the shaping, very little of this has to do with Unicode or encoding. Font rendering already worked that way before Unicode existed; of course, font files and rendering were much simpler 30 years ago. Encoding only matters when someone wants to load or save text from an application.
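To watch the codepoint-to-glyph-name mapping step yourself, you can open a font file and query its cmap table. A small sketch with Python's fontTools (the font path is an assumption; point it at any TTF/OTF you actually have):

from fontTools.ttLib import TTFont

font = TTFont("/usr/share/fonts/some-font.ttf")  # illustrative path
cmap = font.getBestCmap()          # maps codepoint -> glyph name
name = cmap.get(0x6211)            # U+6211, or None if this font lacks it
print(name)
if name:
    print(name in font.getGlyphSet())  # the outline data really is in the file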
Summary: investigate
TrueType/OpenType font editing software, so you can see what's contained in the files;
font renderers; on Linux, look at the libraries Pango and FreeType.
Generally speaking, operating system components that use text use the Unicode character set. In particular, font files use the Unicode character set. But, not all font files support all the Unicode codepoints.
When a codepoint is not supported by one font, the system might fall back to another font that does support it. This is particularly true of web browsers. But ultimately, if the codepoint is not supported anywhere, an unfilled rectangle is rendered. (There is no character for that, because it's not a character. In fact, if you were able to copy and paste it as text, you would get the original character that couldn't be rendered.)
In web development, the web page can either supply or give the location of fonts that should work for the codepoints it uses.
Other programs typically use the operating system's rendering facilities and therefore the fonts available through it. How to install a font in an operating system is not a programming question (unless you are including a font in an installer for your program). For more information on that, you could see if the question fits with the Ask Different (Apple) Stack Exchange site.
xmgrace is wonderful, but it has some problems when dealing with miscellaneous characters.
How can I produce the script small l ($\ell$ in LaTeX) in xmgrace?
I believe the only way to do this is to specify a script-like system font. None of the standard ones are suitable, so you will have to make sure that a suitable font is installed on your system.
You can change to any font by enclosing the name in
\f{}
e.g.
\f{Symbol}
or
\f{Century-Schoolbook-L-Bold_italic}
You can see a list of the available fonts (and their labels) by going to the Font tool in the Window menu of the xmgrace GUI.
After typing the special character you can return to your original font in a similar way, or by using \0 to get back to the default font 0.
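For example, a hypothetical axis label that switches fonts just for the l and then returns to the default might look like this (the font name is purely illustrative; pick one from your own Font tool list):
Length \f{Century-Schoolbook-L-Bold_italic}l\0 (m)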
I thought Julia supported raw Unicode input, such as:
julia> test = "π£¢∞§"
"π£¢∞§"
julia> 😘 = 1 ;
julia> print(😘 )
1
However, it seems Julia does not support the Apple logo character (U+F8FF).
julia> = 123
ERROR: syntax: invalid character ""
julia> test = ""
"\uf8ff"
I wonder what the underlying reason for that is, and whether there is a way I can use the Apple logo character in Julia.
I believe this link more properly explains the case of the Unicode character that you see as Apple's logo.
The problem is that the Unicode value used is one of several that are set aside for private use. That means that each operating system, or application, or implementation is free to use those Unicode characters for anything they want. It just so happens that Apple has chosen to use the Unicode character U+F8FF (decimal value 63743, written on the web as either &#63743; or &#xF8FF;) as the Apple logo. But some Windows fonts put a Windows logo there. And some other fonts put in a Klingon mummification glyph. Or Elvish script. Or anything they want. And if it isn't defined in your local font, you'll just see a square.
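You can confirm the private-use classification yourself; for instance, with Python's unicodedata module (only an illustration, since the category data comes straight from the UCD):

>>> import unicodedata
>>> unicodedata.category("\uf8ff")   # 'Co' = Other, Private Use
'Co'
>>> unicodedata.category("\u00e9")   # é, an ordinary letter
'Ll'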
My opinion is that Julia simply doesn't use this special value for anything. This also explains why your "π£¢∞§" characters work nicely: they are proper Unicode characters, more widely supported across platforms.
As a side note, I too see a simple square instead of the Apple logo in this instance.
Edit
Here is a list of Unicode characters supported by Julia.
To expand on Alex's answer...
Apple's logo isn't an official Unicode symbol. I think there are very few commercial logos and symbols in the main Unicode tables.
However, Unicode provides some 'anything goes' areas (called PUAs - private use areas) that companies and individuals can fill with their own symbols, so that their users can access certain special glyphs. The main PUA is U+E000 to U+F8FF. Depending on which font you're using, you'll find all kinds of stuff assigned to these codes. On a Mac, I can usually get the Apple logo at "\uf8ff", with the right font selected, but not the Ubuntu symbol or the Windows logo, unless I choose another font. (There's also a fallback mechanism, whereby if you request a code point that the current font doesn't have, the OS will find a suitable substitute in another font and use that.)
In Julia, you can only use certain Unicode characters for variable names. Julia wouldn't allow anything from the private use area anyway, unless some fonts were distributed to every computer and everyone agreed on who had which Unicode point. (Mathematica makes extensive use of PUA symbols in their notebooks, because they can and do install their own fonts, and can then access various glyphs from the PUA in the notebook with guaranteed results.)
You are allowed to use emoji characters as variable names, so you could try the Emoji apple, rather than the Apple apple:
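A minimal sketch of that idea (assuming a recent Julia REPL; the red apple emoji U+1F34E falls in an emoji range Julia accepts for identifiers):

julia> 🍎 = 123
123

julia> 🍎 + 1
124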
When I run Mocha, it tries to show a check mark or an X for a passing or a failing test run, respectively. I've seen great-looking screenshots of Mocha's output. But those screenshots were all taken on Macs or Linux. In a console window on Windows, these characters both show up as a nondescript empty-box character, the classic "huh?" glyph:
If I highlight the text in the console window and copy it to the clipboard, I do see actual Unicode characters; I can paste the fancy characters into a textbox in a Web browser and they render just fine (✔, ✖). So the Unicode output is getting to the console window just fine; the problem is that the console window isn't displaying those characters properly.
How can I fix this so that all of Mocha's output (including the ✔ and ✖) displays properly in a Windows console?
By pasting the characters into LINQPad, I was able to figure out that they were 'HEAVY CHECK MARK' (U+2714) and 'HEAVY MULTIPLICATION X' (U+2716). It looks like neither character is supported in any of the console fonts (Consolas, Lucida Console, or Raster Fonts) available in Windows 7. In fact, out of all the fonts that ship with Windows 7, only a handful support these characters (Meiryo, Meiryo UI, MS Gothic, MS Mincho, MS PGothic, MS PMincho, MS UI Gothic, and Segoe UI Symbol). Of those, only MS Gothic and MS Mincho are fixed-width (monospace) fonts, and they look awful at the font sizes typical of a console. The others are out, since the console requires fixed-width fonts.
So you'll need to download a font. I like DejaVu Sans Mono -- it's free, it looks good at console sizes, it's easy to tell the 0 from the O and the 1 from the I from the l, and it's got all kinds of fancy Unicode symbols, including the check and X that Mocha uses.
Unfortunately, it's a bit of a pain to install a new console font, but it's doable. (Steps adapted from this post by Scott Hanselman, but extended to include the non-obvious subtlety of naming the 000 value.)
Steps:
1. Download the DejaVu fonts and unzip the files. Go into the "ttf" directory you just unzipped, select all the files, right-click, and choose "Install".
2. Run Regedit and go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Console\TrueTypeFont.
3. Add a new string value. Give it a name that's a string of zeroes one longer than the longest string of zeroes that's already there. For example, on my Windows 7 install, there were already values named 0 and 00, so I had to name the new one 000.
4. Double-click your new value and set its data to DejaVu Sans Mono. (A one-line command equivalent of steps 2-4 is shown after this list.)
5. Reboot. (Yes, this step is necessary, at least on OSes up to and including Windows 7.)
6. Now you can open a console window, open the window menu, go to Defaults > Font tab, and "DejaVu Sans Mono" should be available in the Font list box. Select it and click OK.
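For reference, steps 2-4 can also be done from an elevated command prompt with reg add (again assuming 000 is the right value name on your machine):

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Console\TrueTypeFont" /v 000 /t REG_SZ /d "DejaVu Sans Mono"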
Now Mocha's output will display in all its glory.
Update: this issue has now been fixed. Starting from Mocha 1.7.0, fallbacks are used for symbols that don't exist in default console fonts (√ instead of ✔, × instead of ✖, etc.). It's not as pretty as it could be, but it surely beats empty-box placeholder symbols.
For details, see the related pull request: https://github.com/visionmedia/mocha/pull/641