I have a mask (loaded from a 256-level greyscale PNG) that I want to apply to an image as part of my process for drawing a UITableViewCell's imageView.image property.
When the cell isn't selected/highlighted, I call CGImageCreateWithMask with a square of the proper color and the mask, then drawAtPoint: the result into the image I'm building. This works fine.
However, when the cell is selected or highlighted, I'd like to use the mask to punch through my image appropriately instead. That is, where my mask specifies full opacity, I want the image I'm building to be completely transparent, so the table view's background is drawn through it. Where my mask specifies zero opacity, I want the alpha channel left untouched. I want nothing other than the alpha channel affected.
I guess what I mean is that I want to draw clearColor over a UIImage, with a varying level of opacity according to a mask.
First, what is this called? And second, how do I do it?
I think you have to manipulate CALayers for that. You can use the mask property of the cell's CALayer (see the CALayer mask attribute in the documentation).
That is, something along the lines of the following (if myMask is a descendant of UIView):
myCell.layer.mask = myMask.layer
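If the mask comes from an image rather than another view, a minimal sketch might look like this (maskImage is a hypothetical UIImage; note that CALayer masking keys off the alpha channel of the mask's contents, so a greyscale PNG would first need its grey values converted to alpha):

CALayer *maskLayer = [CALayer layer];
maskLayer.frame = myCell.layer.bounds;
// The mask layer's alpha channel determines what shows through.
maskLayer.contents = (__bridge id)maskImage.CGImage;
myCell.layer.mask = maskLayer;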
How can I add an X-looking red stroke to a UIImageView?
I would like to add 2 diagonal red lines to a UIImageView. Is there a way to do it programmatically using layers or masks (not in drawRect)?
Use a CAShapeLayer with your X shape as its path. Depending on how you've drawn the path, you may want to set a nil fill colour (since a path just made of two crossed lines should not be filled).
Add the shape layer as a sublayer of your image view.
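A minimal sketch of that approach (imageView is assumed to be your existing UIImageView):

// Build an X from two diagonal lines spanning the image view's bounds.
UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:CGPointMake(0, 0)];
[path addLineToPoint:CGPointMake(CGRectGetMaxX(imageView.bounds), CGRectGetMaxY(imageView.bounds))];
[path moveToPoint:CGPointMake(CGRectGetMaxX(imageView.bounds), 0)];
[path addLineToPoint:CGPointMake(0, CGRectGetMaxY(imageView.bounds))];

CAShapeLayer *shapeLayer = [CAShapeLayer layer];
shapeLayer.frame = imageView.bounds;
shapeLayer.path = path.CGPath;
shapeLayer.strokeColor = [UIColor redColor].CGColor;
shapeLayer.lineWidth = 2.0;
shapeLayer.fillColor = nil; // crossed lines should not be filled

[imageView.layer addSublayer:shapeLayer];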
I added two UIViews to ViewController.view and applied two square images to each view's layer.mask, to make it look like a square sliced into two pieces, then added the image view over them with addSubview.
I am having a problem rendering the masked layers and saving the result to the photo album.
I want the saved photo to look like picture no. 1, but it always looks like picture no. 2 after I save it to the photo album.
Is there any way to capture something like picture no. 1 after applying the masks?
Below is the reference from Apple regarding renderInContext:.
Important The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties.
I've created an image-capture function before, which literally does a print-screen of a UIView. I don't use it because it doesn't work well for my needs, but maybe you can use it:
UIImage *img;
// Size the context to the captured view; use that view's own opaque flag
// (not self's), and pass 0.0 to use the device's native scale.
UIGraphicsBeginImageContextWithOptions(UIViewYouWantToCapture.bounds.size, UIViewYouWantToCapture.opaque, 0.0);
// Render the view's layer tree into the current bitmap context.
[[UIViewYouWantToCapture layer] renderInContext:UIGraphicsGetCurrentContext()];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
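To then write the captured image to the photo album, as the question asks, one option is UIImageWriteToSavedPhotosAlbum (passing nil/NULL for the completion target and selector if you don't need a callback):

UIImageWriteToSavedPhotosAlbum(img, nil, NULL, NULL);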
When we apply a mask to an image, the resulting image has the alpha of the masked-in portion set to 1 and the alpha of the rest set to 0. But the complete image is still there (we only see part of it because the rest has alpha = 0), so when we capture an image of the view we get a screenshot of the complete view.
How do opaque, alpha and the opacity of the background color work together for a UIView, and what are the differences between them?
UIView http://i.minus.com/jb2IP8TXbYTxKr.png
opaque means don't draw anything underneath, even if you are transparent.
The background color's alpha only affects the background color's transparency, not anything else drawn on the view.
alpha affects everything drawn on the view.
The opaque property can give you a speed increase - if you know that your view will never have transparency you can set this to YES and when iOS renders your view it can make some performance optimisations and render it faster. If this is set to NO iOS will have to blend your view with the view underneath, even if it doesn't happen to contain any transparency.
The alpha will also affect the alpha of the background color, i.e. if the background color has an alpha of 0.5 and the view's alpha is also 0.5, this has the effect of rendering the background at an alpha of 0.25 (0.5 * 0.5).
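A minimal sketch of that multiplication (the frame and color values here are just for illustration):

UIView *view = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
view.backgroundColor = [UIColor colorWithRed:1.0 green:0.0 blue:0.0 alpha:0.5];
// Combined with the background color's 0.5 alpha above, the background
// now renders at an effective alpha of 0.25 (0.5 * 0.5).
view.alpha = 0.5;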
To the very good answer by deanWombourne it's worth adding that, unless you draw your own content using the drawRect: method, the opaque property has no effect.
Apple's doc:
You only need to set a value for the opaque property in subclasses of
UIView that draw their own content using the drawRect: method. The
opaque property has no effect in system-provided classes such as
UIButton, UILabel, UITableViewCell, and so on.
If you draw your own content, keep in mind that opaque is just a hint:
This property provides a hint to the drawing system as to how it
should treat the view.
and some more guidelines from the same Apple doc:
If the view is opaque and either does not fill its bounds or contains
wholly or partially transparent content, the results are
unpredictable. You should always set the value of this property to NO
if the view is fully or partially transparent.
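As an illustration, here is a minimal sketch of a custom-drawing view that can safely promise opacity because its drawRect: fills every pixel of its bounds (SolidView is a hypothetical class):

@interface SolidView : UIView
@end

@implementation SolidView
- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        self.opaque = YES; // safe: drawRect: below fills the whole bounds
    }
    return self;
}

- (void)drawRect:(CGRect)rect {
    [[UIColor whiteColor] setFill];
    UIRectFill(self.bounds); // fully covers the view, no transparency
}
@end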
iOS alpha vs opacity vs opaque
[Using Xcode's Color Blended Layers debug option]
UIView.alpha is equivalent to CALayer.opacity, in the range [0.0, 1.0], and applies alpha to the whole view (subviews and sublayers included). It is as if a single flat bitmap image were built from all the content and the alpha applied to that. That is why, when the view contains subviews or sublayers, no Blended Layers are reported but Off-screen Rendering is applied.
UIView.backgroundColor is equivalent to CALayer.backgroundColor and applies color only to the background (not to subviews or sublayers).
UIView.opaque is equivalent to CALayer.opaque, a Boolean property that hints to the framework that it may apply some optimizations. In practice the effect is not visible.
Experiments

Input: (screenshot of the test view)

alpha

view1.alpha = 0.5
//or
view1.layer.opacity = 0.5

Result: Off-screen Rendering is applied.

backgroundColor

view1.backgroundColor = .cyan.withAlphaComponent(0.5)
//or
view1.layer.backgroundColor = UIColor.cyan.withAlphaComponent(0.5).cgColor

Result: Blended Layers are shown.
I would like to apply a "stroke" or outline to a png, exactly the way Photoshop does it. I have a feeling this can be done with CALayer, but after some tinkering it is not immediately obvious. setBorderWidth + setBorderColor is almost what I want, except that it only adds a border around the image view's entire frame, rather than around the outline of the shape in the png itself.
Once the stroke is applied, I'd like to also knockout the fill of the png, leaving only an outlined border of the initial shape.
There is no automatic way to do what you're asking. You have to know the path of the shape within your png that you want to "knockout". Once you've defined that, you can create a CAShapeLayer, which accepts a CGPathRef, containing your points. You can stroke and fill the path layer with whatever color you choose and then add it to the layer hierarchy of the displaying view or use it to define a mask of one of the layers in your view.
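A minimal sketch under those assumptions (shapePath is a hypothetical UIBezierPath tracing the outline of the shape in your png, and imageView is the view displaying it):

CAShapeLayer *outline = [CAShapeLayer layer];
outline.frame = imageView.bounds;
outline.path = shapePath.CGPath;
outline.strokeColor = [UIColor blackColor].CGColor;
outline.lineWidth = 3.0;
// A clear fill leaves the shape layer itself unfilled; knocking out the
// image's own fill would additionally require using the path as a mask.
outline.fillColor = [UIColor clearColor].CGColor;
[imageView.layer addSublayer:outline];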
Let's say I have a UIImageView and I want to show only the blue and green channels from it (RGB colors). Like in Photoshop, where I can choose which channels to show (red, green and blue), I want to remove the red channel from the image.
So how can I do it? I'm really new to Objective-C and all. :)
There are many ways to do this. If you have access to the image bitmap data, you can just set the red channel to zero on all the pixels. You might need to draw the image into a CGBitmapContext, get the pointer to the bitmap data, and then create a new CGImage that gets turned into a UIImage.
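A minimal sketch of that bitmap approach, assuming an RGBA image with 8 bits per component (production code should check the source image's actual format):

UIImage *removeRedChannel(UIImage *source) {
    CGImageRef cgImage = source.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Draw the image into a bitmap context we control, in RGBA byte order.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    // Zero the red byte of every RGBA pixel.
    uint8_t *pixels = CGBitmapContextGetData(ctx);
    for (size_t i = 0; i < width * height; i++) {
        pixels[i * 4] = 0;
    }

    // Turn the modified bitmap back into a UIImage.
    CGImageRef result = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    UIImage *image = [UIImage imageWithCGImage:result];
    CGImageRelease(result);
    return image;
}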
Or you can create a UIView subclass with a custom drawRect: method that first draws the image normally, and then fills with a color set to RGB (0, 1, 1) using the blend mode kCGBlendModeMultiply (set via CGContextSetBlendMode).
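A minimal sketch of that second approach (assuming the subclass has a hypothetical image property holding the UIImage to filter):

- (void)drawRect:(CGRect)rect {
    // Draw the image normally first.
    [self.image drawInRect:self.bounds];

    // Multiplying by cyan (0, 1, 1) zeroes the red channel:
    // red * 0 = 0, while green and blue are multiplied by 1.
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
    CGContextSetRGBFillColor(ctx, 0.0, 1.0, 1.0, 1.0);
    CGContextFillRect(ctx, self.bounds);
}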