iPhone -- the input parameter to UIView's sizeThatFits method

The signature of this method is:
- (CGSize)sizeThatFits:(CGSize)size
I don't understand what the size parameter is used for. Apple's documentation states that it is "The current size of the receiver."
But the receiver presumably knows its current size. So why does it need to be passed in?
When I experimentally pass in other values, the method appears to use the receiver's current size anyway.
Can anyone explain? And is there any case where this parameter matters?

First of all, this method is AppKit legacy (not in the negative sense of the word).
Yes, any view has some current size at any given moment and can retrieve it from the bounds property. But there are tricky situations during layout where the best size depends on factors that are not quite static. Take a text label, for example. Its text may flow over one or more lines, and the number of lines can depend on the maximum allowed width. So a possible UILabel implementation could derive its returned size from the width of the CGSize passed to sizeThatFits:, in which case that size is not literally the current size of the receiver, but some desired/limit size.
Thus any UIView subclass can implement -sizeThatFits: as it sees fit (pun intended), and is even free to ignore the size parameter. Most often when I implement this method I ignore it, because I can calculate the result from the view's internal state; but in a more complex scenario you might need the size parameter as a hint about layout restrictions.
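To make the width-dependent idea concrete outside of UIKit, here is a minimal Java analogue. Everything in it (the class name, the fixed glyph width and line height) is an illustrative assumption, not UIKit API; it only mirrors the shape of a sizeThatFits:-style computation where a narrower proposed width yields a taller result.

```java
// Hypothetical sketch (not UIKit): a label-like object whose best size
// depends on the width limit passed in, the way a UILabel's sizeThatFits:
// might wrap its text to the proposed width.
final class LabelSketch {
    private static final int CHAR_WIDTH = 8;   // assumed fixed glyph width
    private static final int LINE_HEIGHT = 16; // assumed line height

    private final String text;

    LabelSketch(String text) { this.text = text; }

    // Analogue of -sizeThatFits:: given a proposed width, return the
    // {width, height} the label actually needs, which may differ from
    // its current bounds.
    int[] sizeThatFits(int proposedWidth) {
        int charsPerLine = Math.max(1, proposedWidth / CHAR_WIDTH);
        int lines = (text.length() + charsPerLine - 1) / charsPerLine; // ceil
        int width = Math.min(text.length(), charsPerLine) * CHAR_WIDTH;
        return new int[] { width, lines * LINE_HEIGHT };
    }
}
```

Proposing 160 units of width to a 20-character label yields one line; proposing 80 yields two lines and double the height — the "current size" of the object never enters into it.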

It is not just the size of the receiver; it is the potential size you want the view to fill. The result is the size that the view believes will best show its contents for the given input size.
The default behavior is simply to return the size parameter (i.e. the size that fits the default view is whatever size you give it), so yes, this parameter matters by default.
Subclasses can use this method to enforce constraints such as width == height.
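As a rough, non-UIKit illustration of both points — the default echoing the proposal back, and a subclass constraining width == height — here is a hypothetical sketch in Java (the class names and the int[]-pair return type are mine, purely for illustration):

```java
// Hypothetical sketch (not UIKit): the base "view" echoes the proposed
// size back, mirroring the documented default of -sizeThatFits:.
class ViewSketch {
    int[] sizeThatFits(int w, int h) {
        return new int[] { w, h }; // default: the size that fits is the size given
    }
}

// A subclass is free to reinterpret the proposal, e.g. force a square.
final class SquareViewSketch extends ViewSketch {
    @Override int[] sizeThatFits(int w, int h) {
        int side = Math.min(w, h);
        return new int[] { side, side };
    }
}
```

The base class makes the parameter matter by construction; the subclass shows how a constraint like width == height can be layered on top.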

Make intrinsicContentSize adapt to external constraints

The Context
I often have situations where I want multiple NSTextViews in a single NSStackView. Naturally, Auto Layout is not pleased with this, since it makes the height ambiguous (assuming the stack view is not set to fill equally). Even after adding constraints to resolve these issues, macOS Interface Builder appears to have a bug where it refuses to actually update frames when asked.
For this reason and others, I'm attempting to create a TextBox class (subclassing NSView) to encapsulate an NSTextView (and associated scroll view) and include an intrinsic content size to avoid layout issues. The intrinsic content size would be calculated based on a user-specified min and max number of lines (to display without requiring scroll). In other words, up to a certain max number of lines, TextBox will resize itself so that scrolling is unnecessary.
The Problem
This would seem to require an intrinsicContentSize that is dependent on frame width.
But, intrinsicContentSize documentation states:
The intrinsic size you supply must be independent of the content frame, because there’s no way to dynamically communicate a changed width to the layout system based on a changed height.
However, Auto Layout Guide states:
A text view’s intrinsic content size varies depending on the content, on whether or not it has scrolling enabled, and on the other constraints applied to the view. For example, with scrolling enabled, the view does not have an intrinsic content size. With scrolling disabled, by default the view’s intrinsic content size is calculated based on the size of the text without any line wrapping. For example, if there are no returns in the text, it calculates the height and width needed to layout the content as a single line of text. If you add constraints to specify the view’s width, the intrinsic content size defines the height required to display the text given its width.
Given that when scrolling is disabled in a text view:
If you add constraints to specify the view’s width, the intrinsic content size defines the height required to display the text given its width.
Then it seems there is a way to do what I want by perhaps looking at existing constraints.
The Question
How can I define an intrinsic content size that calculates height based on otherwise specified width, as described in the last quoted sentence above?
The solution should produce the effect described in "The Context" and not produce errors or warnings when used inside a Stack View.

FreeType2: which size does FT_Request_Size actually set?

I'm not clear on what I'm actually specifying when I call FT_Request_Size (or FT_Set_Pixel_Sizes). This appears to be some kind of maximum size for the glyph. That is, depending on the proportional size, ascenders, etc., the resulting glyph can, and often will, actually be smaller than this size.
Is my interpretation correct? I can't find anything in the API docs which says precisely what it does.
Based on answers from the FreeType maintainers, I got the docs updated to add a bit of clarification. Basically, the font itself determines the resulting sizes.
For FT_Request_Size
The relation between the requested size and the resulting glyph size
is dependent entirely on how the size is defined in the source face.
The font designer chooses the final size of each glyph relative to
this size. For more information refer to
‘http://www.freetype.org/freetype2/docs/glyphs/glyphs-2.html’
For FT_Set_Pixel_Sizes
You should not rely on the resulting glyphs matching, or being
constrained, to this pixel size. Refer to FT_Request_Size to
understand how requested sizes relate to actual sizes.

Get image width and height in pixels

So I have looked at a couple of other questions like this, and from what I saw, none of the answers seemed to answer my question. I created a program that creates ASCII art, which is basically a picture made of text instead of colors. The way I have the program set up at the moment, you have to manually set the width and height of the image in pixels, and if the width and height are too large it simply won't work. So basically, what I want to do is have a function that automatically sets the width and height to the size of the picture. http://www.mediafire.com/?3nb8jfb8bhj8d is the link to the program. I looked into PixelGrabber, but the constructor methods all needed a range of pixels. I also have another folder for the classes: http://www.mediafire.com/?2u7qt21xhbwtp
On another note, this program is incredibly inefficient. I know it is inefficient in the grayscaleValue() method, but I don't know if there is a better way to do it. Any suggestions on this program would be awesome too. Thanks in advance! (This program was all done in Eclipse.)
After you read the image into your BufferedImage, you can call getWidth() and getHeight() on it to get this information dynamically. See the JavaDocs. Also, Use a constructor for GetPixelColor to create the BufferedImage once and for all. This will avoid reading the entire file from disk for each channel of each pixel.
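For instance, here is a minimal self-contained sketch of reading an image once and querying its dimensions. The helper class and method names are mine, not from the question's code, and the example builds a tiny PNG in memory so it can run without an image file on disk:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import javax.imageio.ImageIO;

// Illustration: decode the image once into a BufferedImage and ask it
// for its width and height, instead of hard-coding those values.
public final class ImageSize {
    static int[] dimensions(byte[] pngBytes) throws Exception {
        BufferedImage img = ImageIO.read(new ByteArrayInputStream(pngBytes));
        return new int[] { img.getWidth(), img.getHeight() };
    }

    public static void main(String[] args) throws Exception {
        // Build a 32x20 PNG in memory so the example is self-contained;
        // with a real file you would use ImageIO.read(new File(path)).
        BufferedImage src = new BufferedImage(32, 20, BufferedImage.TYPE_INT_RGB);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(src, "png", out);
        int[] wh = dimensions(out.toByteArray());
        System.out.println(wh[0] + "x" + wh[1]); // prints 32x20
    }
}
```

The key point is that the decode happens exactly once; every later pixel query works against the in-memory BufferedImage.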
For further code clean up, change series of if statements to a switch construct, or an index into an array, whichever is more natural. See this for an explanation of the switch construct.
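A small sketch of the array-index approach — the character ramp here is illustrative, not taken from the original program:

```java
// Replacing a chain of if statements with an array lookup: map a
// grayscale value in [0, 255] to a ramp character by computed index.
final class AsciiRamp {
    private static final char[] RAMP = { '@', '#', '+', '.', ' ' };

    static char charFor(int gray) {
        // Scale 0..255 down to a valid index 0..RAMP.length-1,
        // instead of testing each brightness band with a separate if.
        int idx = gray * RAMP.length / 256;
        return RAMP[idx];
    }
}
```

One multiplication and one lookup replace the whole if cascade, and adding a new ramp character means editing the array, not the control flow.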
One last comment: anything inside a class that logically represents the state of an object should be declared non-static. If, say, you wanted to render two images side by side, you would need to create two instances of GetPixelColor, and each one should have its own height and width attributes. Since they're currently declared static, each instance would be sharing the same data, which is clearly not desirable behavior.
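For example, a sketch of the instance-state version (the class name echoes GetPixelColor from the question, but the field set is simplified and illustrative):

```java
// Width and height as per-instance state, so two images can be
// processed side by side without clobbering each other's dimensions.
final class PixelSource {
    private final int width;   // non-static: belongs to this image
    private final int height;

    PixelSource(int width, int height) {
        this.width = width;
        this.height = height;
    }

    int width()  { return width; }
    int height() { return height; }
}
```

Two instances now carry independent dimensions; with static fields, constructing the second image would silently overwrite the first one's width and height.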

question about UIView's sizeThatFits [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
iPhone — the input parameter to UIView's sizeThatFits method
Specifically, what's its argument supposed to be? The documentation says that it's the receiver's current size, but a view can always use self.bounds.size, so that doesn't make sense.
Is it supposed to be the available space? In other words, is the parent asking the child, "given that there's available space of X x Y points, how big do you want to be?".
I simply believe the doc is wrong when it says "The current size of the receiver". That is only one use case; there are others: for example, if you want the view to return its very best size given an arbitrary size, you call this method passing your arbitrary size as the parameter.
Is it supposed to be the available space? In other words, is the parent asking the child, "given that there's available space of X x Y points, how big do you want to be?".
Yes, you've got the idea, BUT don't restrict the meaning of the argument to the "available space". It's just an arbitrary size that may or may not correspond to an available space; it depends on how you are using the method. However, the view is supposed to always return what it considers to be its best size (the size that best fits its subviews) if it has to fit into the size passed as an argument.
Look here, this should answer your question: iPhone -- the input parameter to UIView's sizeThatFits method
Pretty much exactly that. Classes like UIPickerView and UILabel have content that works best at particular sizes, and as such they return those specific sizes rather than the more general bounds size.
Simply put, the sizeThatFits: method can be overridden to return a new CGSize, and the sizeToFit method calls sizeThatFits: when resizing the view.
sizeThatFits can be used when the superview is laying out its children.
Let's say you have a superview that's 300 points wide and you don't know yet how tall it's going to be; the height will be the sum of the heights of the children. In this case, the superview would pass a size to each child's sizeThatFits: that represents the remaining available space, which can be different from the bounds of the superview.
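Setting the UIKit specifics aside, the parent-asks-child pattern can be sketched in Java. The names here are illustrative, not UIKit API; heightThatFits stands in for the height component of sizeThatFits::

```java
import java.util.List;

// Sketch: a parent laying out children in a column. It fixes the width,
// asks each child how tall it wants to be for that width, and sums the
// answers to get its own height.
final class ColumnLayout {
    interface Child {
        int heightThatFits(int width); // analogue of -sizeThatFits:
    }

    static int totalHeight(List<Child> children, int width) {
        int h = 0;
        for (Child c : children) {
            // The width passed down is the available space,
            // not the child's (or the parent's) current bounds.
            h += c.heightThatFits(width);
        }
        return h;
    }
}
```

The parent's own height only exists after every child has answered, which is exactly why the proposal cannot be "the current size of the receiver."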

Always getting the same height using sizeWithFont:minFontSize:actualFontSize:forWidth:lineBreakMode:

The CGSize returned by sizeWithFont:minFontSize:actualFontSize:forWidth:lineBreakMode: always contains the same height. Why is that, and is there a way around it?
I want to align a string vertically, and it may not be truncated unless it can't fit on a single line even at the minimum font size. So I try to use this method to get the line height, but it always returns 57px no matter what the actualFontSize is.
Any ideas?
Once you have the actualFontSize from sizeWithFont:minFontSize:actualFontSize:forWidth:lineBreakMode:, you can calculate the height required to render using:
CGFloat actualHeight = [font fontWithSize:actualFontSize].ascender - [font fontWithSize:actualFontSize].descender;
(I found this page useful in understanding the various size values relating to fonts.)
I believe you are misunderstanding the actualFontSize parameter. This is an out parameter only, not an in parameter. It does not matter what value you provide for actualFontSize; if the pointer is non-NULL, then it will be overwritten with the font size used in determining the returned size. The returned actual font size will be between the minFontSize: and the size of the UIFont instance provided as the argument to the sizeWithFont: portion of the selector.
You also must be sure to send this message to the string you intend to render. You would always get the same value, for example, if you are asking a dummy string that's less than a line long how much height it would take for the supplied width.
If you update your question with the actual code used in calling the method, everyone would be better able to help you.