FreeType2: which size does FT_Request_Size actually set? - freetype2

I'm not clear on what I'm actually specifying when I call FT_Request_Size (or FT_Set_Pixel_Sizes). This appears to be some kind of maximum size for the glyph. That is, depending on the proportional size, ascenders, etc., the resulting glyph can, and will, actually be smaller than this size.
Is my interpretation correct? I can't find anything in the API docs which says precisely what it does.

Based on answers from the FreeType maintainers, I got the docs updated to add a bit of clarification. Basically, the font itself determines the resulting sizes.
For FT_Request_Size
The relation between the requested size and the resulting glyph size
is dependent entirely on how the size is defined in the source face.
The font designer chooses the final size of each glyph relative to
this size. For more information refer to
‘http://www.freetype.org/freetype2/docs/glyphs/glyphs-2.html’
For FT_Set_Pixel_Sizes
You should not rely on the resulting glyphs matching, or being
constrained, to this pixel size. Refer to FT_Request_Size to
understand how requested sizes relate to actual sizes.
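To make this concrete, here is a minimal C sketch (hypothetical font path, error handling omitted) showing that the rendered glyph bitmap is typically smaller than the requested nominal size:

#include <stdio.h>
#include <ft2build.h>
#include FT_FREETYPE_H

/* Request a nominal 16px size, then inspect how big a rendered glyph
   actually is. "font.ttf" is a hypothetical path. */
int main(void)
{
    FT_Library lib;
    FT_Face face;
    FT_Init_FreeType(&lib);
    FT_New_Face(lib, "font.ttf", 0, &face);

    FT_Size_RequestRec req;
    req.type   = FT_SIZE_REQUEST_TYPE_NOMINAL;
    req.width  = 16 * 64;   /* 26.6 fixed point: 16 pixels */
    req.height = 16 * 64;
    req.horiResolution = 0; /* zero: width/height are fractional pixels */
    req.vertResolution = 0;
    FT_Request_Size(face, &req);

    /* The glyph's actual bitmap is usually smaller than 16x16. */
    FT_Load_Char(face, 'x', FT_LOAD_RENDER);
    printf("glyph bitmap: %u x %u pixels\n",
           face->glyph->bitmap.width, face->glyph->bitmap.rows);
    return 0;
}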

Related

What unit is font measured in

Within Unity we get to set a "font size" to direct the size of a given piece of text.
However, nowhere does it seem to state what unit "font size" is measured in within Unity.
All it says in the docs is:
Font Size: The size of the font, based on the sizes set in any word processor.
however "any word processor" doesn't say alot...
Is it in:
Points (pt)
Pixels (px)
Ems (em)
Percent (%)
Something else?
On glyphs it states:
Adjusting the font size effectively changes how many pixels
are used for each glyph in this generated texture.
Which would make me think that font size is also in pixels, but it doesn't actually confirm it.
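Not a Unity-specific answer, but for reference, the conventional typographic relationship between points and pixels depends on DPI; a tiny C illustration (72 and 96 DPI are the classic Mac/Windows defaults, and nothing here is confirmed by Unity's docs):

#include <stdio.h>

/* Illustration only: the standard conversion between typographic points
   and pixels. Whether Unity's "font size" is points or pixels is exactly
   what this question asks. */
static double points_to_pixels(double pt, double dpi)
{
    return pt * dpi / 72.0;  /* one point is defined as 1/72 inch */
}

int main(void)
{
    printf("12 pt at 72 dpi = %.0f px\n", points_to_pixels(12.0, 72.0)); /* 12 */
    printf("12 pt at 96 dpi = %.0f px\n", points_to_pixels(12.0, 96.0)); /* 16 */
    return 0;
}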

BMP image header - biXPelsPerMeter

I have read a lot about the BMP file format structure, but I still cannot work out the real meaning of the fields "biXPelsPerMeter" and "biYPelsPerMeter". I mean in a practical way: how are they used, or how can they be utilized? Any example or experience? Thanks a lot.
biXPelsPerMeter
Specifies the horizontal print resolution, in pixels per meter, of the target device for the bitmap.
biYPelsPerMeter
Specifies the vertical print resolution.
It's not very important. You can leave them at 2835; it's not going to ruin the image.
(72 DPI × 39.3701 inches per meter yields 2834.6472)
Think of it this way: the image bits within the BMP structure define the shape of the image with a certain amount of data (that much information describes the image), but that information must then be translated to a target device using a measuring system that indicates its applied resolution in practical use.
For example, if the BMP is 10,000 pixels wide and 4,000 pixels high, that says how much raw detail exists within the image bits. That image information must then be applied to some target, and the DPI value supplies the relationship between the pixels and the target device, from which the applied physical size is derived.
If it were printed at 1000 dpi then it's only going to give you an image with 10" x 4" but one with extremely high detail to the naked eye (more pixels per square inch). By contrast, if it's printed at only 100 dpi, then you'll get an image that's 100" x 40" with low detail (fewer pixels per square inch), but both of them have the same overall number of bits within. You can actually scale an image without scaling any of its internal image data by merely changing the dpi to non-standard values.
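As a concrete sketch of that arithmetic in C (the header values here are hypothetical; 2835 pixels per meter is the common ~72 DPI default mentioned above):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical header values: a 10,000 x 4,000 pixel bitmap stamped
   with the usual 2835 pixels-per-meter (~72 DPI) default. */
int main(void)
{
    int32_t biWidth         = 10000;
    int32_t biHeight        = 4000;
    int32_t biXPelsPerMeter = 2835;

    double dpi = biXPelsPerMeter * 0.0254;   /* 1 inch = 0.0254 m */
    printf("resolution: %.2f dpi\n", dpi);   /* ~72.01 */
    printf("printed size: %.1f x %.1f inches\n",
           biWidth / dpi, biHeight / dpi);   /* ~138.9 x 55.6 */
    return 0;
}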
Also, using 72 dpi is a throwback to ancient printing techniques (https://en.wikipedia.org/wiki/Twip) which are not really relevant going forward (except to maintain compatibility with standards), as modern hardware devices often use other values for their fundamental relationships to image data. For video screens, for example, Macs use 72 dpi as the default, Windows uses 96 dpi, and others are similar. In theory you can set it to whatever you want, but be warned that not all software honors the internal settings; some will instead assume a particular size. This can affect the way images are scaled within an app, even though the actual image data within hasn't changed.

FreeType2: Get global font bounding box in pixels?

I'm using FreeType2 for font rendering, and I need to get a global bounding box for all fonts, so I can align them in a nice grid. I call FT_Set_Char_Size followed by extracting the global bounds using
int pixels_x = ::FT_MulFix((face->bbox.xMax - face->bbox.xMin), face->size->metrics.x_scale );
int pixels_y = ::FT_MulFix((face->bbox.yMax - face->bbox.yMin), face->size->metrics.y_scale );
return Size (pixels_x / 64, pixels_y / 64);
which works, but it's quite a bit too large. I also tried to compute it using doubles (as described in the FreeType2 tutorial), but the results are practically the same. Even using just face->bbox.xMax results in bounding boxes which are too wide. Am I doing the right thing, or is there simply some huge glyph in my font (Arial.ttf in this case)? Any way to check which glyph is supposedly that big?
Why not calculate the min/max from the characters you are actually using in the string you want to align? Just loop through the characters and store the maximum and minimum you encounter. You can store these values once you've rendered them, so you don't need to look them up every time you render the glyphs.
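A minimal sketch of that approach in C, assuming `face` already has a size set via FT_Set_Char_Size and the text is plain ASCII:

#include <ft2build.h>
#include FT_FREETYPE_H

/* Scan the characters actually used and record the largest glyph
   extents. Glyph metrics are in 26.6 fixed point, so shift by 6. */
static void measure_string(FT_Face face, const char *text,
                           int *max_w, int *max_h)
{
    *max_w = *max_h = 0;
    for (const char *p = text; *p; ++p) {
        if (FT_Load_Char(face, (FT_ULong)(unsigned char)*p, FT_LOAD_DEFAULT))
            continue;  /* skip characters the face cannot load */
        int w = (int)(face->glyph->metrics.width  >> 6);
        int h = (int)(face->glyph->metrics.height >> 6);
        if (w > *max_w) *max_w = w;
        if (h > *max_h) *max_h = h;
    }
}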
I have a similar problem using freetype to render a bunch of text elements that will appear in a grid. Not all of the text elements are the same size, and I need to prerender them before I know where they would be laid out. The different sizes were the biggest problem when the heights changed, such as for letters with descending portions (like "j" or "Q").
I ended up using the height that is on the face (kind of like you did with the bbox). But like you mentioned, that value was much too big. It's supposed to be the baseline-to-baseline distance, but it appeared to be about twice that distance. So I took the easy way out and divided the reported height by 2 and used that as a general height value. Most likely the height is too big because there are some characters in the font that go way high or way low.
I suppose a better way might be to loop through all the characters expected to be used, get their glyph metrics and store the largest height found. But that doesn't seem all that robust either.
Your code is right.
It's not too large. There are simply many special symbols in a font that are far larger than the ASCII characters.
It's easy to traverse all Unicode charcodes to find those large symbols (see the sketch after the snippet below).
If you only need ASCII, my hack is:
FT_MulFix(face_->units_per_EM, face_->size->metrics.x_scale ) >> 6
FT_MulFix(face_->units_per_EM, face_->size->metrics.y_scale ) >> 6
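And here is a rough sketch of that charcode traversal, using FT_Get_First_Char/FT_Get_Next_Char to find the tallest glyph (FT_LOAD_NO_SCALE keeps the metrics in raw font units; `face` is assumed to be an already-opened FT_Face):

#include <stdio.h>
#include <ft2build.h>
#include FT_FREETYPE_H

/* Find which character makes the global bbox so large. Metrics are in
   unscaled font units because of FT_LOAD_NO_SCALE. */
static void find_tallest_glyph(FT_Face face)
{
    FT_UInt gindex;
    FT_ULong charcode = FT_Get_First_Char(face, &gindex);
    FT_ULong tallest_char = 0;
    FT_Pos tallest = 0;

    while (gindex != 0) {
        if (FT_Load_Glyph(face, gindex, FT_LOAD_NO_SCALE) == 0 &&
            face->glyph->metrics.height > tallest) {
            tallest = face->glyph->metrics.height;
            tallest_char = charcode;
        }
        charcode = FT_Get_Next_Char(face, charcode, &gindex);
    }
    printf("tallest glyph: U+%04lX, height %ld font units\n",
           tallest_char, (long)tallest);
}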

iPhone -- the input parameter to UIView's sizeThatFits method

The signature of this method is:
- (CGSize)sizeThatFits:(CGSize)size
I don't understand what the size parameter is used for. Apple's documentation states that it is "The current size of the receiver."
But the receiver presumably knows its current size. So why does it need to be passed in?
When I experimentally pass in other values, the method appears to use the receiver's current size anyway.
Can anyone explain? And is there any case where this parameter matters?
First of all, this method is AppKit legacy (not in the negative sense of the word).
Yes, any view has some current size at any given moment and can retrieve it from the bounds property. But there are tricky situations during layout when the best size depends on not-quite-static factors. Take a text label, for example. It may be flowed in one or more lines and the number of lines may depend on the maximum allowed width. So a possible UILabel implementation could derive its bounds size from the width of the CGSize passed to sizeThatFits:, in which case that size is not literally the current size of the receiver, but some desired/limit size.
Thus any UIView subclass can implement -sizeThatFits: as it sees fit (pun intended), and is even free to ignore the size parameter. Most often when I have to implement this method, I ignore it because I can calculate it from the internal state of the view, but in a more complex scenario you might need to use the size parameter to hint yourself about certain restrictions in layout.
It is not just the size of the receiver; it is the potential size you want to fill. The result is the size that the view believes will best show its contents for the given input size.
The default implementation simply returns the receiver's existing size (which matches the experiment above: passing in other values changes nothing), so the parameter only matters once a subclass overrides the method to actually use it.
Subclasses could enforce constraints like width==height or other things like that using this method.

Get dpi settings via GTK

Using GTK, how do I query the current screen's dpi settings?
The current accepted answer is for PHP-GTK, which feels a bit odd to me. The pure GDK library has this call: gdk_screen_get_resolution(), which sounds like a better match. I haven't worked with it myself, so I don't know if it's generally reliable.
The resolution height and width returned by the screen include the full multi-monitor size (e.g. the combined width and height of the display buffer used to render a multi-monitor setup). I haven't checked whether the mm (millimeter width/height) calls return the actual physical sizes, but if they report combined physical sizes then the DPI computed by dividing one by the other would be meaningless, e.g. for drawing a box on screen that can be measured with a physical ruler.
See GdkScreen. You should be able to compute it using get_height and get_height_mm, or get_width and get_width_mm.
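A quick C sketch showing both routes (GTK 3 API; the plain GdkScreen width/height getters are deprecated since 3.22, and as noted above the values may span all monitors):

#include <gtk/gtk.h>

/* Query DPI directly, and also derive it from pixel vs. physical size. */
int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);
    GdkScreen *screen = gdk_screen_get_default();

    /* What the platform/settings daemon reports directly
       (-1.0 if no resolution has been set): */
    g_print("gdk_screen_get_resolution: %.1f dpi\n",
            gdk_screen_get_resolution(screen));

    /* Computed from pixel size vs. reported physical size: */
    gint px_w = gdk_screen_get_width(screen);
    gint mm_w = gdk_screen_get_width_mm(screen);
    if (mm_w > 0)
        g_print("computed horizontal dpi: %.1f\n", px_w * 25.4 / mm_w);

    return 0;
}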