The opaque property of a UIView defaults to YES. But the UIView class reference states this:
An opaque view is expected to fill its bounds with entirely opaque content—that is, the content should have an alpha value of 1.0. If the view is opaque and either does not fill its bounds or contains wholly or partially transparent content, the results are unpredictable.
Since changing the alpha of a view is quite common, especially during transitions or animations, the above statement would imply that you must always manually set opaque to NO if you are also going to change the alpha property.
But I have never manually adjusted opaque and haven't had any noticeable symptoms. How necessary is it to make this consideration?
The answer is that iOS is smart enough to recognize that if your view's alpha is less than 1, it needs to draw the content behind your view, regardless of your view's opaque property.
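For example, a perfectly ordinary fade like this works even though the view's opaque property is still at its default of YES (a minimal sketch; fadingView is a placeholder for whatever view you're animating):

// Fade a view out without ever touching its opaque property.
// The compositor sees alpha < 1 and blends with the content behind anyway.
[UIView animateWithDuration:0.25 animations:^{
    fadingView.alpha = 0.0;
}];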
In response to comments: from my limited experimentation, I don't think the view's opaque property has any effect. (I think the documentation is wrong.) The view's layer's opaque property does have an effect: it controls whether the CGContext passed to drawRect: has an alpha channel. If the layer's opaque property is YES, the context has no alpha channel (and is treated as if every pixel has alpha of 1.0).
Changing the view's opaque property has no effect on the layer's opaque property. This is different from (for example) the view's alpha property, which is just a wrapper for the layer's opacity property.
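If you want to verify this yourself, a throwaway check along these lines shows the two properties are independent (a sketch; the logged values will simply reflect whatever your iOS version does):

UIView *view = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];

view.opaque = NO;
// The view-level setter leaves the layer's flag untouched.
NSLog(@"view.opaque = %d, layer.opaque = %d", view.isOpaque, view.layer.isOpaque);

view.alpha = 0.5;
// alpha, by contrast, writes straight through to the layer's opacity.
NSLog(@"view.alpha = %.2f, layer.opacity = %.2f", view.alpha, view.layer.opacity);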
In theory, having documented that the opaque property allows them to optimize drawing, Apple could implement that optimization in the future. In practice, doing so would probably break a lot of apps, so they probably won't make such a change apply to apps linked against older SDKs. (They have the ability to make UIKit behave differently depending on which version the app was linked with.)
As long as the view's own contents (not its subviews) have no alpha, you're fine. So if you init a UIImageView with a PNG image that has an alpha channel, opaque will be set to NO automatically.
Normally you don't really need many non-opaque views. But the alpha of the complete view is something different anyway.
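If you want to confirm that claim on a given iOS version, a one-off check is easy (assuming a hypothetical "photo.png" asset that has an alpha channel):

UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"photo.png"]];
// Inspect what UIKit actually set for this image.
NSLog(@"imageView.opaque = %d", imageView.isOpaque);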
Related
If I set shouldRasterize = YES on a CALayer, do I have to set it on each of the sublayers as well if I wanted the whole hierarchy to be flattened for better animation performance?
I'm asking because when I set shouldRasterize = YES on my root layer and enable "Color Blended Layers" in Instruments, all the sublayers are still there and marked as blended. It's not flattening anything.
Setting shouldRasterize does not do quite what you are thinking it does. In order to composite the look of the parent view, rasterized or not, it has to check subviews to see if they are opaque or transparent. When child objects are opaque, they do not need to be blended. When they are transparent, the view needs to be blended with whatever is behind them (or higher in the hierarchy).
So, shouldRasterize will not affect the green/red you see using Instruments. In order to have everything green, you'll need to not use transparency and have all your child objects be opaque. Sometimes it's unavoidable to still have red areas, depending on your design. The instrument is just there to help you find views that could be opaque and reduce the amount of blending the GPU has to do.
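For completeness, here is the usual way rasterization is turned on (rootLayer stands in for whatever layer you're flattening):

// Cache the composited output of this subtree as a single bitmap between frames.
rootLayer.shouldRasterize = YES;
// Match the screen scale; otherwise the cached bitmap is 1x and looks blurry on Retina displays.
rootLayer.rasterizationScale = [UIScreen mainScreen].scale;

Note that the Instruments overlay that reflects whether that cache is actually being reused is "Color Hits Green and Misses Red", not "Color Blended Layers".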
Edit:
To explain further, suppose you have a UILabel and it's sitting on top of a photo. You only want to see the text and not its background color, so you set its backgroundColor to clear, and the opaque property to NO. In Instruments, this will now appear red. The GPU has to blend this transparency over the image behind it, performing two draw operations instead of one.
If we had set opaque to YES and given it a solid background color, the view would now show up green in Instruments because it didn't have to blend that view with any other view.
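In code, the two label configurations described above would look roughly like this (a sketch; frame and colors are arbitrary):

UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(20, 20, 200, 30)];
label.text = @"Caption over a photo";

// Variant 1: transparent background — flagged red by "Color Blended Layers".
label.backgroundColor = [UIColor clearColor];
label.opaque = NO;

// Variant 2: solid background — shows green, no blending required.
// label.backgroundColor = [UIColor whiteColor];
// label.opaque = YES;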
So, whether the layer is rasterized or not, it still has to composite its child views so shouldRasterize really has no effect either way on what you see in Instruments.
I've noticed that when animating things in UIKit, certain types of animations can be composited using standard block-based animations while others cannot. For instance, view.transform interferes with view.frame, but not with view.center. Are these things documented anywhere?
On a related note, because of these compositing issues, I've often resorted to animating mainly using CGAffineTransforms, since they can be composited very easily. Is this a good idea? It seems that applying a transform is different under the hood than simply changing the frame, so I'm not sure if I should be using them to permanently move a view or change its size. Do CGAffineTransforms and view.frame-related changes overlap at all?
Thanks!
For what it's worth, here's Apple's stance on this:
You typically modify the transform property of a view when you want to implement animations. For example, you could use this property to create an animation of your view rotating around its center point. You would not use this property to make permanent changes to your view, such as modifying its position or size within its superview's coordinate space. For that type of change, you should modify the frame rectangle of your view instead.
Source: View Programming Guide for iOS, View and Window Architecture
(I suppose one exception would be permanently rotated views, which would be impossible to accomplish with frame modifications.)
I've also determined that CGAffineTransforms appear to modify the underlying rendered image of a view, not its content, so (for example) applying a CGAffineTransformScale is fundamentally different from expanding the frame. I'm not sure if this is necessarily true, or if it depends on contentMode/other factors.
I'm still not entirely clear on how the frame, bounds, and transform of a view interact. You can, for example, set the frame of a view after applying a rotation, and it'll be interpreted relative to the rotated view, whereas modifying the bounds affects the view pre-rotation (IIRC). Notably, the UIView documentation says the frame is undefined whenever the transform is not the identity transform, so bounds and center are the properties to use in that case.
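Putting the quoted guideline together with that caveat, the usual pattern is to animate with the transform and then commit the permanent change through center rather than frame (a minimal sketch; view is a placeholder):

[UIView animateWithDuration:0.3 animations:^{
    // Temporary, animated change: use the transform.
    view.transform = CGAffineTransformMakeTranslation(0.0, 100.0);
} completion:^(BOOL finished) {
    // Permanent change: reset the transform, then move the view for real.
    // center is used because frame is undefined under a non-identity transform.
    view.transform = CGAffineTransformIdentity;
    view.center = CGPointMake(view.center.x, view.center.y + 100.0);
}];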
I tried to find other reasons why opaque views are better than transparent ones. However, the only sensible reason I came up with is that the view behind the opaque one doesn't need to draw its content at that place.
Is this a wrong assumption and are there other good reasons?
Thanks.
This is the right assumption. Directly from Apple documentation:
opaque

A Boolean value that determines whether the receiver is opaque.

@property(nonatomic, getter=isOpaque) BOOL opaque

Discussion

This property provides a hint to the drawing system as to how it should treat the view. If set to YES, the drawing system treats the view as fully opaque, which allows the drawing system to optimize some drawing operations and improve performance. If set to NO, the drawing system composites the view normally with other content. The default value of this property is YES.

An opaque view is expected to fill its bounds with entirely opaque content—that is, the content should have an alpha value of 1.0. If the view is opaque and either does not fill its bounds or contains wholly or partially transparent content, the results are unpredictable. You should always set the value of this property to NO if the view is fully or partially transparent.
Aside from being able to avoid drawing the background view, there are a few other reasons it's faster, related to compositing:
1) There's no need to blend the foreground and background views, which saves some math done on every pixel
2) If the view moves, its pixels can simply be blitted to their new locations without any redrawing at all. This is a huge deal for scrolling performance, even on desktop computers, let alone mobile devices.
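As a concrete illustration of both points, a custom view that honors the contract might look like this (SolidView is a made-up name; the essentials are opaque = YES plus a drawRect: that fills every pixel of the bounds with alpha 1.0, so the compositor can skip blending and simply blit the view when it moves):

#import <UIKit/UIKit.h>

@interface SolidView : UIView
@end

@implementation SolidView

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        self.opaque = YES; // the default, restated to make the contract explicit
    }
    return self;
}

- (void)drawRect:(CGRect)rect {
    // Fill the entire bounds with fully opaque content, as the opaque contract requires.
    [[UIColor darkGrayColor] setFill];
    UIRectFill(self.bounds);
}

@end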
Is there a way to make the map somewhat transparent but keep all the added overlays opaque?
Adjusting the alpha for the map will of course set it for the overlays as well.
You would have to fiddle around with MKMapView's subviews, which is possible but completely undocumented and not recommended by Apple. Basically you'd need to find the subview with the actual map and set its alpha. No guarantee it'll actually work!
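If you do want to experiment anyway, the fiddling would look something like this (purely a sketch: the subview layout is private, so the isKindOfClass: test is a guess and may break on any iOS release):

for (UIView *subview in mapView.subviews) {
    // Skip annotation views so the overlays stay opaque; fade everything else,
    // hoping one of the remaining subviews is the one rendering the map tiles.
    if (![subview isKindOfClass:[MKAnnotationView class]]) {
        subview.alpha = 0.5;
    }
}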
The documentation says that the clipsToBounds property of UIView will clip drawing to the view's bounds, or more precisely that a subview can't draw outside the bounds of its superview.
Sounds nice, but what does that mean in practice?
If I set that to YES, will my subview automatically draw only those parts that fall inside the bounds of the superview? So does it increase overall performance, or do I still have to make sure that I don't create any views that aren't visible, e.g. inside a UIScrollView?
I think it's the opposite: turning on clipping hurts performance because it has to set up a clipping mask. I vaguely remember reading this somewhere, but I can't find it now.
The use case for clipsToBounds is more for subviews which are partially outside the main view. For example, I have a (circular) subview on the edge of its parent (rectangular) UIView. If you set clipsToBounds to YES, only half the circle/subview will be shown. If set to NO, the whole circle will show up. Just encountered this so wanted to share.
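Here's that example in code (parentView is a placeholder for the rectangular parent):

// A circular badge that hangs halfway off its parent's top-right corner.
UIView *badge = [[UIView alloc] initWithFrame:CGRectMake(parentView.bounds.size.width - 15.0, -15.0, 30.0, 30.0)];
badge.backgroundColor = [UIColor redColor];
badge.layer.cornerRadius = 15.0; // half the width/height makes it a circle
[parentView addSubview:badge];

parentView.clipsToBounds = YES; // only the part of the circle inside the parent's bounds is drawn
// parentView.clipsToBounds = NO; // the whole circle would be visible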
Whether there is a performance hit depends on the view hierarchy. As mentioned above, the renderer will usually use GPU cycles to draw the view UNLESS some view within the hierarchy uses drawRect:. This does not affect OpenGL ES applications, because drawRect: is omitted in that type of app.
From my understanding, determining and displaying the clipped area may take fewer cycles than actually calculating/drawing/blending the whole view. As of OpenGL ES 2.0, clipping can be calculated on the GPU.