Recently, I bought an OTF font (*).
Looking inside this font, I see many glyphs that are rotated 90 degrees clockwise, as below:
How can I display characters the same way as above (using iTextPdf) with font (*)?
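I'm not sure which iText version you're using; below is a minimal sketch with the iText 5 API ("myfont.otf" and the output path are placeholders). It rotates a whole string by 90 degrees clockwise rather than relying on any pre-rotated glyphs inside the font, which may or may not be what your font expects:

    import com.itextpdf.text.Document;
    import com.itextpdf.text.Element;
    import com.itextpdf.text.Font;
    import com.itextpdf.text.Phrase;
    import com.itextpdf.text.pdf.BaseFont;
    import com.itextpdf.text.pdf.ColumnText;
    import com.itextpdf.text.pdf.PdfContentByte;
    import com.itextpdf.text.pdf.PdfWriter;
    import java.io.FileOutputStream;

    public class RotatedTextSketch {
        public static void main(String[] args) throws Exception {
            Document document = new Document();
            PdfWriter writer = PdfWriter.getInstance(document, new FileOutputStream("rotated.pdf"));
            document.open();

            // Load the purchased OTF font with Identity-H encoding so every glyph
            // in the font can be addressed, and embed it in the PDF.
            BaseFont baseFont = BaseFont.createFont("myfont.otf", BaseFont.IDENTITY_H, BaseFont.EMBEDDED);
            Font font = new Font(baseFont, 24);

            // Draw the text rotated; the last argument is the rotation in degrees
            // counter-clockwise, so -90 gives a 90-degree clockwise rotation.
            PdfContentByte canvas = writer.getDirectContent();
            ColumnText.showTextAligned(canvas, Element.ALIGN_LEFT,
                    new Phrase("Sample text", font), 100, 700, -90);

            document.close();
        }
    }

If the rotated shapes are actually separate glyphs inside the font (for example, vertical forms), you would instead need to write the specific code points that map to those glyphs.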
I'm trying to create a font using FontForge which is 'simply' the alphanumeric characters from Lato (original version) plus the Greek alphabet characters from Lato (extended version). I opened both fonts in FontForge and copied and pasted the Greek letters over to the original version. I generated the font and displayed it in Google Chrome, but the characters I copied come out jagged / distorted. I've also tried opening the original font, going to Element > Merge Fonts, selecting the extended-version font, and generating a font from that, but the same problem occurs.
The screenshot below shows the distortion. The top line shows how the letters should look; the bottom line shows how they actually look. Many of the letters, even ones that were in the original (such as the 'a'), aren't displaying properly. The bottoms and tops of some characters have been flattened and, for other characters, extended. Notice that the tops of the alpha, rho and epsilon are pointed and aren't smooth as they should be. The top of the beta has been flattened. Look at the top and bottom of the 'o': both have been flattened as if to fit into a minimum allowable area.
If I zoom in a lot, the jagged edges become smooth again.
What can I do to fix this?
Within Unity we get to set "Font Size" to control the size of a given piece of text.
However, nowhere does it seem to state what unit the font size is measured in within Unity.
All it says in the docs is:
Font Size: The size of the font, based on the sizes set in any word processor.
however "any word processor" doesn't say alot...
Is it in:
Points (pt)
Pixels (px)
Ems (em)
Percent (%)
Something else?
On the subject of glyphs, the documentation states:
Adjusting the font size effectively changes how many pixels
are used for each glyph in this generated texture.
That would make me think the font size is also measured in pixels, but it doesn't actually confirm it.
If I specify the width of a <div> tag using CSS as 96px, how many device pixels should that occupy on the screen of a first generation iPhone?
I added <meta name="viewport" content="initial-scale=1.0, maximum-scale=1.0, user-scalable=no"/> to the page, took a screenshot on the simulator, and measured the div to be 96 device pixels wide. Now, I read the W3 spec for CSS pixels and it states that 1px is 1/96th of an inch. So 96 CSS pixels should translate to 1 inch. Since the original iPhone has a DPI of 163, one inch on the screen should occupy 163 device pixels. Why am I not getting that measurement? In other words, should 96 CSS pixels be equal to 1 inch?
I saw that the spec also mentions anchoring to a reference pixel. It seems to me that the reference pixel is simply a device pixel in this case. If I were to work backwards to get the CSS pixel values from a screenshot, would it generally be correct to assume that one device pixel equals one CSS pixel on the iPhone (non-retina display)?
iPhone pixels are like any other pixels. A 96px-wide <div> is always 96px wide on any device. DPI (dots per inch) just tells you the ratio between physical pixels (dots) on a screen (or paper) and inches; it doesn't represent any size on its own. DPI is only a ratio between pixels and a real-world unit of measurement.
A 96px div would look 6x bigger on a 50 DPI screen than on a 300 DPI screen.
DPI vary depending on the device or print/scan quality, therefore 1 inch is NOT always equal to 96 pixels. W3C is just saying that the absolute length units are fixed in relation to each other (it is just an arbitrary approximation to make CSS units consistent). This does not mean that real world units of measurement (inches, cm) can be given a fixed ratio to pixels.
The best help I can give you to understand this is that 1px is only and always equal to 1px. Any comparison between pixels and real-world units depends on the DPI of a specific device, not on a standard like the W3C's.
The absolute length units are fixed in relation to each other and anchored to some physical measurement. They are mainly useful when the output environment is known.
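To make that concrete, here is a small sketch of the arithmetic; it assumes 1 CSS px equals 1 device px (as on the original, non-retina iPhone at initial-scale=1), and the 50 and 300 DPI figures are just the illustrative values from above:

    public class PixelMath {
        // Physical width, in inches, of an element covering `devicePixels`
        // device pixels on a screen with the given DPI.
        static double inchesOnScreen(int devicePixels, double dpi) {
            return devicePixels / dpi;
        }

        public static void main(String[] args) {
            int divWidthDevicePx = 96;  // a 96px-wide div, with 1 CSS px == 1 device px

            System.out.printf("50 DPI screen:  %.2f in%n", inchesOnScreen(divWidthDevicePx, 50));   // 1.92 in
            System.out.printf("163 DPI iPhone: %.2f in%n", inchesOnScreen(divWidthDevicePx, 163));  // ~0.59 in
            System.out.printf("300 DPI screen: %.2f in%n", inchesOnScreen(divWidthDevicePx, 300));  // 0.32 in
            // 1.92 / 0.32 = 6, which is the "6x bigger" comparison above.
        }
    }

So the div always covers 96 pixels; only its physical size changes with the screen's DPI.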
Unicode 6.0 added several characters with descriptions that suggest those characters are supposed to be rendered in a specific color:
RED APPLE U+1F34E
GREEN APPLE U+1F34F
BLUE HEART U+1F499
GREEN HEART U+1F49A
YELLOW HEART U+1F49B
PURPLE HEART U+1F49C
GREEN BOOK U+1F4D7
BLUE BOOK U+1F4D8
ORANGE BOOK U+1F4D9
LARGE RED CIRCLE U+1F534
LARGE BLUE CIRCLE U+1F535
LARGE ORANGE DIAMOND U+1F536
LARGE BLUE DIAMOND U+1F537
SMALL ORANGE DIAMOND U+1F538
SMALL BLUE DIAMOND U+1F539
UP-POINTING RED TRIANGLE U+1F53A
DOWN-POINTING RED TRIANGLE U+1F53B
UP-POINTING SMALL RED TRIANGLE U+1F53C
DOWN-POINTING SMALL RED TRIANGLE U+1F53D
I had thought font symbols were always grayscale.
Did the Unicode authors foresee that these might be rendered in different colors?
Within the official unicode.org PDFs (http://www.unicode.org/charts/PDF/U1F300.pdf), these are portrayed only as having different types of crosshatching.
Is there any current mechanism that would allow specific characters to be rendered in a specific color, based only on their code points and not on any other rich-text formatting (e.g. a color property within TrueType or OpenType font files)?
From the Unicode FAQ: Emoji and Dingbats, bolding mine:
Q: What about characters whose name specifies a color?
A: Some of the characters from the core emoji sets have names that include a color term, for example, BLUE HEART or ORANGE BOOK. These color terms in the names do not imply any requirement about how a character must be presented; they are intended only to help identify the corresponding character in the core emoji sets. Even names of symbols such as BLACK MEDIUM SQUARE or WHITE MEDIUM SQUARE are not meant to indicate that the corresponding character must be presented in black or white, respectively; rather, the use of black and white is generally just to contrast filled versus outline shapes, or a darker color fill versus a lighter color fill. [PE]
There was quite a bit of debate on the mailing lists at the time on whether these should be named with colors, or given generic names that didn't reference color, and whether that was setting a bad precedent. The Emoji Symbols: Background Data includes "old names" such as APPLE-1 instead of RED APPLE and BOOK-3 instead of ORANGE BOOK.
The final names use this principle:
Symbols with an inherent color shall bear this color in their name unless the entity denoted by the name identifies the color anyway (e.g., a BANANA is uniquely yellow and therefore does not need to be called YELLOW BANANA, while a RED APPLE must be named so as there are also green apples).
Unicode 6.1 has a feature to change the glyph used for the same Unicode code point, by specifying a Variation Selector (U+FE0x).
For example, the left-pointing triangle ("\U000025C0", ◀) can be colored by adding "\U0000FE0F" as a suffix (◀️*) and rendered non-colored by adding "\U0000FE0E" as a suffix ("\U000025C0\U0000FE0E", ◀︎**).
* This looks like the default on Mac OS X 10.8. ** This is the default on Linux.
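As a small illustration of building those sequences in code (Java strings here, purely as a sketch; whether the colored form actually appears depends on the fonts available on the platform):

    public class VariationSelectorDemo {
        public static void main(String[] args) {
            String triangle = "\u25C0";               // BLACK LEFT-POINTING TRIANGLE
            String emojiStyle = triangle + "\uFE0F";  // + VS16: request emoji (colored) presentation
            String textStyle  = triangle + "\uFE0E";  // + VS15: request text (monochrome) presentation

            System.out.println("Emoji presentation: " + emojiStyle);
            System.out.println("Text presentation:  " + textStyle);
        }
    }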
From https://learn.microsoft.com/en-us/typography/opentype/spec/otff#tables-related-to-color-fonts:
Tables Related to Color Fonts
COLR: Color table
CPAL: Color palette table
CBDT: Color bitmap data
CBLC: Color bitmap location data
sbix: Standard bitmap graphics
SVG : The SVG (Scalable Vector Graphics) table
In short,
CBDT/CBLC contain colored bitmaps (in PNG). They were proposed by Google.
sbix contains colored bitmaps (in JPG, PNG, or TIFF). It was proposed by Apple.
COLR defines one or more accompanying color layer glyphs (in vector format) for each glyph; when those layers are overlaid, they produce the final colored glyph. CPAL defines several color palettes (dark-on-light, light-on-dark, ...), since COLR layers only reference palette entries rather than absolute colors. COLR/CPAL were proposed by Microsoft.
SVG was proposed by Mozilla and Adobe. It may be used with CPAL.
FreeType (used by Android, among other platforms) supports CBDT/CBLC and sbix since versions 2.5 and 2.5.1 (released in 2013), and COLR/CPAL since 2.10.0. DirectWrite (part of Windows) supports COLR/CPAL since Windows 8.1 (released in 2013) and all four of the above since Windows 10 version 1607 (released in 2016).
Noto Color Emoji (default on Android) uses CBDT/CBLC. Segoe UI Emoji (default on Windows) uses COLR/CPAL. Apple Color Emoji (default on iOS and macOS) uses sbix.
See also https://en.wikipedia.org/wiki/OpenType#Color
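If you want to see which of these tables a particular font actually contains, one option is to read the sfnt table directory at the start of the file, where the tags above appear verbatim. A minimal sketch (it assumes a plain .ttf/.otf file; font collections (.ttc) have an extra header that is not handled here):

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.util.HashSet;
    import java.util.Set;

    public class ColorTableCheck {
        public static void main(String[] args) throws Exception {
            try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                in.readInt();                        // sfntVersion (0x00010000 or 'OTTO')
                int numTables = in.readUnsignedShort();
                in.skipBytes(6);                     // searchRange, entrySelector, rangeShift

                Set<String> tags = new HashSet<>();
                for (int i = 0; i < numTables; i++) {
                    byte[] tag = new byte[4];
                    in.readFully(tag);               // 4-byte table tag, e.g. "COLR"
                    in.skipBytes(12);                // checkSum, offset, length
                    tags.add(new String(tag, "US-ASCII"));
                }

                for (String t : new String[] {"COLR", "CPAL", "CBDT", "CBLC", "sbix", "SVG "}) {
                    System.out.printf("%s: %s%n", t, tags.contains(t) ? "present" : "absent");
                }
            }
        }
    }

Note the trailing space in the "SVG " tag; OpenType table tags are always four characters.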
I don't know that there's any standard mechanism for colored fonts, but obviously there are colored fonts. For example, the emoji font in iOS and OS X. Emoji characters in any text view on OS X will result in colored symbols, and they won't be affected by choosing a text color. These emoji even show up in Terminal.app.
I think I have all the values needed for text rendering in a PDF:
* Position (Text Matrix)
* FontDescriptor with Widths Array
* FontBBox
* StemV/StemH
* FontName
* Descent
* Ascent
* CapHeight
* XHeight
* ItalicAngle
My problem is that I don't know what to do with these values. I went through the PDF 1.7 spec a couple of times and cannot find a formula to calculate the real pixel size of every glyph in a PDF. Can you give me a hint?
Thank you.
What are you trying to do? Rendering PDF is a lot of work, and you also need to factor in leading, text rise, kerning, the CTM and several other factors.
Position: (optional, you can avoid it)
Text Matrix: (optional, you can avoid it)
Widths Array: (you can use an empty array []; the PDF reader can take the widths directly from the CFF (FontFile3) stream)
FontBBox: font file->'CFF ' table->Top DICT INDEX->DICT-> 4 operands for 'FontBBox' operator
StemV: (optional, you can avoid it)
StemH: (optional, you can avoid it)
FontName: font file->'name' table->records
or: font file->'CFF ' table->Top DICT INDEX->string at index 0 (the font name)
Descent: font file->'hhea' table->'Descender' parameter
Ascent: font file->'hhea' table->'Ascender' parameter
CapHeight: font file->'OS/2' table->'sCapHeight' parameter
XHeight: font file->'OS/2' table->'sxHeight' parameter
ItalicAngle: font file->'post' table->'italicAngle' parameter
Actually, you can calculate the Widths array. For each glyph:
Encoding/Differences array (PDF) -> glyph name (PDF) -> glyph index (CFF table of the font file) -> 'hmtx' table -> hMetrics[glyph index] = ('advanceWidth', 'leftSideBearing')
I spent a WEEK to understand it...
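Here is a minimal sketch of the tail end of that chain: reading a glyph's advanceWidth from 'hmtx' and scaling it to the 1/1000-em units the PDF Widths array expects. It assumes a plain .ttf/.otf file and that you already know the glyph index; the step from character code to glyph name to glyph index (Encoding plus the CFF charset, or 'cmap') is not shown. The same readTableDirectory helper can be used to reach 'hhea', 'OS/2' and 'post' for the descriptor values listed above:

    import java.io.RandomAccessFile;
    import java.util.HashMap;
    import java.util.Map;

    public class GlyphWidth {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile f = new RandomAccessFile(args[0], "r")) {
                Map<String, Long> tables = readTableDirectory(f);

                // numberOfHMetrics: uint16 at offset 34 of the 'hhea' table.
                f.seek(tables.get("hhea") + 34);
                int numberOfHMetrics = f.readUnsignedShort();

                // unitsPerEm: uint16 at offset 18 of the 'head' table.
                f.seek(tables.get("head") + 18);
                int unitsPerEm = f.readUnsignedShort();

                int glyphIndex = Integer.parseInt(args[1]);

                // 'hmtx' starts with numberOfHMetrics longHorMetric records of
                // (uint16 advanceWidth, int16 leftSideBearing); glyphs beyond that
                // range reuse the last advanceWidth.
                int record = Math.min(glyphIndex, numberOfHMetrics - 1);
                f.seek(tables.get("hmtx") + record * 4L);
                int advanceWidth = f.readUnsignedShort();

                double pdfWidth = advanceWidth * 1000.0 / unitsPerEm;
                System.out.printf("glyph %d: advanceWidth=%d font units -> %.1f (PDF Widths units)%n",
                        glyphIndex, advanceWidth, pdfWidth);
            }
        }

        // Reads the sfnt table directory: table tag -> byte offset of that table.
        static Map<String, Long> readTableDirectory(RandomAccessFile f) throws Exception {
            f.seek(4);                                    // skip sfntVersion
            int numTables = f.readUnsignedShort();
            f.seek(12);                                   // start of the table records
            Map<String, Long> offsets = new HashMap<>();
            for (int i = 0; i < numTables; i++) {
                byte[] tag = new byte[4];
                f.readFully(tag);
                f.skipBytes(4);                           // checkSum
                long offset = f.readInt() & 0xFFFFFFFFL;  // uint32 table offset
                f.skipBytes(4);                           // length
                offsets.put(new String(tag, "US-ASCII"), offset);
            }
            return offsets;
        }
    }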
If you just want to highlight some text, it isn't necessary to calculate the text metrics. You can add as many content objects to the page as you want (rectangles, images, lines, semi-transparent stuff) and recalculate the PDF structure. It is really simple. Ask your mouse for the selection coordinates :)
These values are designed to properly typeset type, not draw glyphs, so you can't get the exact pixel size of each glyph from these attributes. The only way to get the exact pixel dimensions of a glyph is to draw the glyph into an image and analyze that image.
The FontBBox (font bounding box) is the smallest box that will hold every glyph. The Widths array holds information on how far apart characters should be drawn, not the actual glyph image size; some fonts will draw some glyphs outside that width.
When you highlight text in a typical text editor, the highlight will be the full height of the font, and the width of each individual character. This highlight is made by getting the FontBBox height, and each character's width from the Widths Array, and transforming those values to match the current font's attributes (size, etc.). This information is sufficient to make your app draw type like typical applications.
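As a rough sketch of that calculation (the font size, bounding-box and width values below are made up; for a simple font, FontBBox and the Widths entries are expressed in 1/1000 of the text space unit and get scaled by the font size):

    public class HighlightBox {
        public static void main(String[] args) {
            double fontSize = 12.0;

            // FontBBox from the FontDescriptor: [llx lly urx ury] in 1/1000 text space.
            double bboxLowerY = -200, bboxUpperY = 900;

            // Widths array entries (1/1000 text space) for a two-character string.
            double[] widths = { 722, 278 };

            double highlightHeight = (bboxUpperY - bboxLowerY) / 1000.0 * fontSize;  // 13.2
            double highlightWidth = 0;
            for (double w : widths) {
                highlightWidth += w / 1000.0 * fontSize;                             // 8.664 + 3.336
            }

            System.out.printf("Highlight box: %.2f x %.2f text space units%n",
                    highlightWidth, highlightHeight);
            // A real renderer would still apply the text matrix and CTM, plus
            // character spacing, word spacing and horizontal scaling, on top of this.
        }
    }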