OpenSceneGraph: how to check whether osgText::Font supports a provided character? - unicode

I am trying to check whether a font contains a glyph for a given character. To achieve this, I loaded the font with readFontFile() and got a glyph for the character. Then I wanted to check whether the glyph texture is available. I tried the following code:
osg::ref_ptr<osgText::Font> font = osgText::readFontFile("path_to_ttf_font_file");
auto glyph = font->getGlyph(std::make_pair(32.0f, 32.0f), charcode);
auto texture_info = glyph->getTextureInfo(osgText::ShaderTechnique::GREYSCALE);
For every char code (both those actually supported by the font and those that are not), texture_info is nullptr.
I also tried checking glyph->getTotalDataSize(). It returns a non-zero value even when the character is not supported, because the font provides a fallback "missing glyph" (usually rendered as ▯) for unsupported characters.
Is there a way to check whether the osgText::Font object contains a real glyph (not the missing-glyph placeholder) for a given character?

Related

Font whose glyphs all display as missing characters, might be named 'nosuchglyph'

I am looking for a font whose glyphs all display as missing characters; it might be named something like 'nosuchglyph'. It could be used for testing a font on the web with a font stack specified something like
style="font-family:'Some Font Regular', 'nosuchglyph'"
Every glyph would be an appropriate missing-character glyph. At a minimum it could be the same glyph for every single character -- something like a black rectangle, or a rectangle outline with a '?' in the middle, or some similar image conveying that the glyph is missing. More ideal would be one rectangle per Unicode character code with a tiny rendering of that code, something like this ASCII art for, say, the character code U+4f4f:
[4f4f]
Whatever it is, for any glyph missing in "Some Font Regular" (in this example) the glyph in the output would come from "nosuchglyph".
This is nice for testing: it lets you see, for a given font such as "Some Font Regular" in the example, which characters come from that font and which are missing. The point is to avoid the normal substitution for missing characters, which would show a glyph from a font later in the stack or from some default fallback font.

Using characters larger than 0xFFFF

I have an OpenType font with some optional glyphs selected by features. I've opened it in FontForge and I can see that the associated unicode code point is, for example, 0x1002a.
Is it possible to use this value to render the glyph in iText? I've tried calling showText() with a string containing the corresponding surrogate pairs ("\uD800\uDC2A") but nothing appears.
Is there another way to do this, or am I barking up the wrong tree?
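The surrogate-pair arithmetic itself is easy to sanity-check on the JVM. This small standalone sketch (plain Java, no iText involved) confirms that "\uD800\uDC2A" really is the UTF-16 encoding of U+1002A, which suggests the problem lies on the font/iText side rather than in the string itself:

```java
public class SurrogateDemo {
    public static void main(String[] args) {
        int codePoint = 0x1002A; // supplementary-plane code point from the question

        // Character.toChars splits a supplementary code point into its UTF-16 surrogate pair
        char[] units = Character.toChars(codePoint);
        System.out.printf("high=U+%04X low=U+%04X%n", (int) units[0], (int) units[1]); // high=U+D800 low=U+DC2A

        // The same pair written as a literal, as in the question, round-trips back to 0x1002A
        String s = "\uD800\uDC2A";
        System.out.println(s.codePointAt(0) == codePoint); // true
    }
}
```

Since the string is correct, the next things to check are whether the font's cmap actually maps U+1002A (as opposed to exposing the glyph only through an OpenType feature) and whether the font is embedded with an encoding that supports supplementary-plane characters.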

What is different between encoding and font

Encoding is a mapping that gives characters or symbols a unique value.
If a character is not present in the encoding, it won't display correctly no matter what font you use, like Lucida Console, Arial, or Terminal.
But the problem is that the Terminal font shows line-drawing characters while the other fonts do not.
My question is: why does Terminal behave differently from the other fonts?
Please note:
Windows 7
Locale English
For the impatient, the relevant link is at the bottom of this answer.
Encoding is a mapping that gives characters or symbols a unique value.
No, that describes a character set, which maps certain characters to code points (using the Unicode terminology). Let's ignore the above for now.
If a character is not present in the encoding, it won't display correctly no matter what font you use, like Lucida Console, Arial, or Terminal.
Fonts map Unicode code points to glyphs. Not all code points may be mapped in a specific font - somebody has to create all those symbols. Again, let's ignore this.
Not all binary encodings map to code points within a certain character set; this is probably what you mean.
But the problem is that the Terminal font shows line-drawing characters while the other fonts do not.
Your terminal seems to operate on a different character set, probably the "OEM" or "IBM PC" character set, instead of a Unicode-compliant character set or Windows-1252 / ISO 8859-1 / Latin-1.
If it is the latter, then you are out of luck unless you can set your output terminal to another character set, since Windows-1252 doesn't support the box-drawing characters at all.
Solutions:
If possible try and set the output to OEM / IBM PC character set.
If Unicode output is available, you can try converting: read the output in (decode it) using the OEM character set, then re-encode it (e.g. as UTF-8), which maps the line-drawing bytes to the proper Unicode box-drawing characters.
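As a sketch of that decode/re-encode round trip (assuming the JDK ships the IBM437 charset, which standard desktop JDKs do), plain Java can do the conversion:

```java
import java.nio.charset.Charset;

public class OemDecodeDemo {
    public static void main(String[] args) {
        // Bytes as an OEM (IBM PC / code page 437) program would emit them:
        // 0xDA 0xC4 0xBF is the top edge of a box: ┌─┐
        byte[] oem = { (byte) 0xDA, (byte) 0xC4, (byte) 0xBF };

        // Decode with the OEM character set; the resulting String is Unicode internally
        String unicode = new String(oem, Charset.forName("IBM437"));
        System.out.println(unicode); // ┌─┐

        // Re-encoding as UTF-8 (or printing to a Unicode-aware terminal) now uses
        // the proper box-drawing code points U+250C U+2500 U+2510
        unicode.codePoints().forEach(cp -> System.out.printf("U+%04X%n", cp));
    }
}
```

Once the text is decoded through the right character set, any font that covers the Unicode box-drawing block (U+2500 and up) will display it, regardless of whether the font "is" an OEM font.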

iText -- How do I identify a single font that can print all the characters in a string?

This is wrt iText 2.1.6.
I have a string containing characters from different languages, for which I'd like to pick a single font (among the registered fonts) that has glyphs for all these characters. I would like to avoid a situation where different substrings in the string are printed using different fonts, if I already have one font that can display all these glyphs.
If there's no such single font, I would still like to pick a minimal set of fonts that covers the characters in my string.
I'm aware of FontSelector, but it doesn't seem to try to find a minimal set of fonts for the given text. Correct? How do I do this?
iText 2.1.6 is obsolete. Please stop using it: http://itextpdf.com/salesfaq
I see two questions in one:
Is there a font that contains all characters for all languages?
Allow me to explain why this is impossible:
There are 1,114,112 code points in Unicode. Not all of these code points are used, but the possible number of different glyphs is huge.
A simple font contains only 256 characters (one byte per character); a composite font uses CIDs from 0 to 65,535.
65,535 is much smaller than 1,114,112, which means it is technically impossible for a single font to contain all possible glyphs.
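The arithmetic behind those numbers is easy to verify: Unicode has 17 planes of 65,536 code points each, and the JDK exposes the same upper bound:

```java
public class UnicodeMath {
    public static void main(String[] args) {
        int planes = 17;                 // Unicode planes 0..16
        int perPlane = 0x10000;          // 65,536 code points per plane
        int total = planes * perPlane;   // 1,114,112, as stated above
        System.out.println(total);

        // Java exposes the same bound: the last valid code point is U+10FFFF
        System.out.println(Character.MAX_CODE_POINT == total - 1); // true

        // A composite font's CID range (0..65,535) covers only one plane's worth
        System.out.println(0xFFFF + 1 < total); // true
    }
}
```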
FontSelector doesn't find a minimal set of fonts!
FontSelector doesn't look for a minimal set of fonts. You have to tell FontSelector which fonts you want to use and in which order! Suppose that you have this code:
FontSelector selector = new FontSelector();
selector.addFont(font1);
selector.addFont(font2);
selector.addFont(font3);
In this case, FontSelector will first look in font1 for each specific glyph. If it's not there, it will look in font2, and so on. Obviously font1, font2, and font3 may each have their own glyph for the same character, for instance three different renderings of "a". Which glyph is used depends on the order in which you added the fonts.
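That first-match-wins behaviour can be sketched without iText at all. The following is an illustrative simulation (FakeFont and select are made-up names, not iText API) showing why insertion order alone decides which font supplies a glyph both fonts cover:

```java
import java.util.*;

public class FontFallbackDemo {
    // Hypothetical stand-in for a font: a name plus the set of code points it covers
    static class FakeFont {
        final String name;
        final Set<Integer> coverage;
        FakeFont(String name, Set<Integer> coverage) { this.name = name; this.coverage = coverage; }
    }

    // First-match-wins, like FontSelector: return the first font covering the code point
    static FakeFont select(List<FakeFont> fonts, int codePoint) {
        for (FakeFont f : fonts) {
            if (f.coverage.contains(codePoint)) return f;
        }
        return null; // no font in the stack covers this character
    }

    public static void main(String[] args) {
        FakeFont font1 = new FakeFont("font1", Set.of((int) 'a', (int) 'b'));
        FakeFont font2 = new FakeFont("font2", Set.of((int) 'a', 0xE9)); // 0xE9 = é

        // Both fonts cover 'a'; which one wins depends purely on insertion order
        System.out.println(select(List.of(font1, font2), 'a').name); // font1
        System.out.println(select(List.of(font2, font1), 'a').name); // font2

        // Only font2 covers é, so it is chosen regardless of order
        System.out.println(select(List.of(font1, font2), 0xE9).name); // font2
    }
}
```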
Bottom line:
Select a wide range of fonts that cover all the glyphs you need and add them to a FontSelector instance. Don't expect to find one single font that contains all the glyphs you need.

Determining whether or not a font can render a Unicode character in Cocoa Touch

Is there a way to determine whether or not a font can render a particular Unicode character in Cocoa? Alternatively, is it possible to specify the default substitute character?
You can use CGFontGetGlyphWithGlyphName(), available since iOS 2.0. Apple doesn't seem to document the glyph names, but I believe they correspond to this Adobe list:
http://partners.adobe.com/public/developer/en/opentype/glyphlist.txt
For example, the glyph for é (U+00E9) is named "eacute" and the code to get the glyph would be:
CFStringRef name = CFStringCreateWithCString(NULL, "eacute", kCFStringEncodingUTF8);
CGGlyph glyph = CGFontGetGlyphWithGlyphName(font, name);
CFRelease(name); // Create rule: we own the string, so release it
Check out CTFontGetGlyphsForCharacters in the Core Text framework. It returns false if any of the glyphs couldn't be found, and any character the font doesn't support gets glyph index 0 in the output array, so a zero glyph index means that character isn't supported by that font.
Since this is on iOS, you might need to build or find an app that does this.
Ultimately, if the information is present for any individual glyph, it is in the tables of the font file itself, and reading those is pretty low-level C work.
It is much easier to build a tool that inspects the font and shows the glyph and glyph index (the CGGlyph or NSGlyph value), and then use that glyph index.